# MCHEF - Editorial

**Author:** Sunny Aggarwal
**Tester:** Mugurel Ionut Andreica
**Editorialist:** Lalit Kundu
**Difficulty:** Easy-Medium

### PREREQUISITES:

dynamic programming, data structures

### PROBLEM:

Given an array of N elements containing both negative and positive values, and M operations. Each operation is of the form L, R, K, meaning you may remove any one element with index in the range L to R (both inclusive) by paying cost K (each operation can be used multiple times). You have a fixed budget C. Maximize the total sum of the remaining elements such that the total expenditure does not exceed your budget C. Here, N, M \le 10^5 and C \le 200.

### QUICK EXPLANATION:

First, for each element find the minimum cost required to remove it. Then, using a DP similar to the 0-1 Knapsack Problem, calculate the maximum possible sum. For finding the minimum cost to remove each element:

• For subtask 1, you can brute force, i.e. for each operation traverse over all the indices it affects and update the value in an array.
• For subtask 2, you can either use STL sets or segment trees.

### EXPLANATION:

The most basic observation here is that each operation removes a single element only. So, say you want to remove A_i; you can remove it in many ways. Define S_i as the set of operations which can remove A_i, i.e. S_i = \{\textrm{oper}_j : L_j \le i \le R_j\}. Intuitively/greedily, to remove A_i you would always choose the operation from S_i whose cost is minimum. Now, suppose for all i we have found the minimum cost to remove A_i (how we actually do this is explained later). Our problem then becomes: you have an array A of size N; for each element A_i there is a cost of removal R_i; remove some elements from A to maximize the sum of the remaining elements, with total removal cost not exceeding C.
This is quite similar to the 0-1 Knapsack Problem, which can be solved via Dynamic Programming (DP). The first step in formalizing any DP problem is to decide on states that define a subproblem of the problem we are trying to solve; you may need some trial and error before you reach the right states. The next step is to break the current problem into smaller subproblems, which gives the recursive relation between the DP states. The last step is to decide the base case. Here we define \textrm{solve}(i, j) as the answer when our budget is j and our array consists of the first i elements, i.e. A_1, A_2, ..., A_i. Our answer will be \textrm{solve}(N, C). Now let's form the recursive relations. We want to reduce the current problem \textrm{solve}(i, j) to smaller subproblems, which we do by deciding whether to remove A_i or not. Case 1: remove A_i. This is only possible if j \ge R_i, and then \textrm{solve}(i, j) = \textrm{solve}(i-1, j - R_i). We have spent R_i of our budget removing A_i, our array is reduced to its first i-1 elements, and A_i contributes nothing to the sum of the remaining elements. (A thought: will we ever remove A_i if it's positive, considering that removing elements costs money?) Case 2: keep A_i. Then \textrm{solve}(i, j) = A_i + \textrm{solve}(i-1, j). A_i now contributes to the sum of the remaining elements, our budget stays the same, and our array size is reduced by 1.
So, our recurrence is ready, which is basically:

\textrm{solve}(i, j) = \textrm{max}(\,\textrm{solve}(i-1, j - R_i),\; A_i + \textrm{solve}(i-1, j)\,)

where the first option is only available when j \ge R_i. The only base case is i == 0: there is no array left, so the maximum sum possible is 0.

#### DP Implementation:

This is the last step of completing your DP problem. The best and easiest way of writing DP is recursively with memoisation; there is no major difference in run time between recursive and iterative DP. What is memoisation? It simply means not recalculating things you have already calculated. Maintain a \textrm{flag} array of the same shape as your DP array, initialised to \textrm{false}. Once you have calculated a certain subproblem, mark it true in the \textrm{flag} array. If you ever reach a state which has already been calculated, return the value currently stored in the DP array. Things will get clear from the following implementation:

```
flag[N][C]   # initialised to false
dp[N][C]     # array which stores actual answers
A[N]         # array A
R[N]         # cost array

solve(i, j):
    # base case
    if i <= 0:
        return dp[i][j] = 0    # sets dp[i][j] to 0 and returns it
    if flag[i][j] == true:     # this state has already been calculated
        return dp[i][j]
    # case 2: don't remove A[i]
    ret = A[i] + solve(i - 1, j)
    # case 1: remove A[i] if possible;
    # take ret to be the maximum of both cases
    if j >= R[i]:
        ret = max(ret, solve(i - 1, j - R[i]))
    # mark flag[i][j] true since we have calculated this state
    flag[i][j] = true
    return dp[i][j] = ret
```

#### Complexity of DP:

Let's see what the complexity of such a recursive implementation is. Since each possible state is visited once, the complexity of the DP is the number of states multiplied by the transition cost, i.e. the cost of moving from one state to another.
Here, our total number of states is \textrm{N * C} and the transition cost is constant, so the total complexity is \textrm{O(N * C)}.

#### Calculating minimum cost for removing each element

Now for the part we skipped earlier: calculating the minimum cost of removing A_i. First initialise all indices of a MIN array to infinity; then for each operation traverse all the indices it covers and update the minimum value at each index. The complexity here is \textrm{O(M*N)}, where M is the number of operations and N is the size of array A. This is enough to pass Subtask 1. For Subtask 2, the key observation is that an index i is affected exactly by the operations whose left end is at or before i and whose right end is at or after i. Suppose we have a data structure S in which we can insert/delete elements and find the minimum value currently stored, all in sub-linear time. Maintain two vector arrays L and R (meaning you can store a list of values at each index), and for each operation j insert its cost K_j at indices L_j and R_j respectively. Now traverse the arrays L and R from left to right. Say we are at index i: the operations listed in L[i] affect all indices \ge i, so we add their values to our structure, while the operations listed in R[i] do not affect any index beyond i, so we remove their values after processing i. What could this data structure S be? With an STL set we can store an object only once, which is not quite what we need, since two operations may have the same cost. So instead of storing plain costs, store pairs of (cost, operation index). This way all entries are unique, and the first element of the set always gives the minimum-cost operation. If this isn't clear enough, see the pseudocode below and try to visualize what is happening.
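To make the whole pipeline concrete, here is a small runnable Python sketch (all names are mine; a min-heap with lazy deletion plays the role of the ordered set described above, and the knapsack DP is written bottom-up instead of memoised):

```python
import heapq

def min_removal_costs(n, ops):
    """MIN[i] = cheapest operation covering index i (1-indexed);
    float('inf') where no operation covers i.  ops: list of (l, r, k)."""
    INF = float('inf')
    start = [[] for _ in range(n + 2)]   # costs of operations starting at i
    end = [[] for _ in range(n + 2)]     # costs of operations ending at i
    for l, r, k in ops:
        start[l].append(k)
        end[r].append(k)
    heap, pending_removal = [], {}
    MIN = [INF] * (n + 1)
    for i in range(1, n + 1):
        for k in start[i]:
            heapq.heappush(heap, k)
        # lazy deletion: discard heap tops that were already "erased"
        while heap and pending_removal.get(heap[0], 0) > 0:
            pending_removal[heap[0]] -= 1
            heapq.heappop(heap)
        if heap:
            MIN[i] = heap[0]
        for k in end[i]:                 # these ops stop mattering after i
            pending_removal[k] = pending_removal.get(k, 0) + 1
    return MIN

def max_remaining_sum(a, MIN, c):
    """0-1-knapsack-style DP: dp[j] = best sum of the processed prefix
    when j units of budget are available."""
    dp = [0] * (c + 1)
    for i, x in enumerate(a, start=1):
        ndp = [0] * (c + 1)
        for j in range(c + 1):
            ndp[j] = x + dp[j]                       # case 2: keep A_i
            if MIN[i] <= j:                          # case 1: remove A_i
                ndp[j] = max(ndp[j], dp[j - MIN[i]])
        dp = ndp
    return dp[c]
```

For instance, with A = [5, -3, -4], budget C = 3 and operations (2, 3, 2) and (1, 2, 4), MIN comes out as [4, 2, 2] (1-indexed) and the best total is 2: pay 2 to delete the -4.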
```
struct oper { int l, r, k; };
oper operarray[M];   // array of operations
int MIN[N];          // MIN[i] stores minimum cost for removing A[i]
vector L[N], R[N];   // arrays as defined in the paragraph above,
                     // except they store indices of operations instead of costs
set< pair<int,int> > iset;  // first element of pair stores the operation cost,
                            // second stores the index of the operation

for i = 1 to M:
    left = operarray[i].l
    right = operarray[i].r
    L[left].push_back(i)
    R[right].push_back(i)

for i = 1 to N:
    // add all operations beginning at i
    for j = 0 to L[i].size() - 1:
        operindex = L[i][j]              // index of operation beginning here
        cost = operarray[operindex].k
        iset.insert(make_pair(cost, operindex))   // insert in set
    if iset is not empty:                // if empty, no operation covers i
        MIN[i] = iset.begin()->first     // first element of the set
    // remove all operations ending at i
    for j = 0 to R[i].size() - 1:
        operindex = R[i][j]              // index of operation ending here
        cost = operarray[operindex].k
        iset.erase(make_pair(cost, operindex))    // erase from set
```

std::set is an STL container that inserts and deletes elements in O(\textrm{log(size of set)}). Since it keeps all elements in sorted order, we can find the minimum element in constant time. So the total complexity of finding the \textrm{MIN} array is \textrm{O((N + M) log M)}. You can also find the \textrm{MIN} array using segment trees, where the complexity will be \textrm{O((M + N) log N)} if we use lazy propagation for the updates.

### COMPLEXITY:

Including the complexity of the DP, the final complexity is \textrm{O((N + M) log M + N C)}.

### Problems to Practice:

Problems based on DP
Problems based on STL

---

Please provide some test cases for which the following code is giving WA: http://www.codechef.com/viewsolution/7467050 If you can't find anything wrong, do mention it.

I had used the same concept of 0-1 knapsack problem but still getting WA: http://www.codechef.com/viewsolution/7425751 Plz Help!!!

I managed to get AC with maintaining an array/BIT to jump indices!
Starting from the interval with minimum cost, I kept on assigning the index the minimum value! The complexity of my code was O(N + M + #jumps * log N) and it passes. I guess the number of jumps can be quite high, and this solution should not pass, at least with that log N factor multiplied! Solution. Correct me if I am wrong! The number of jumps is maximum when we put ranges of length 1 all over N and leave one index, i.e. N. Now we will have to process all the left queries, and suppose they all don't match N; then this will cause processing of all (M-(N-1)) queries over the whole of N. Hence the complexity should be > (M-(N-1))*N. With M = 10^5 and N = 10^4 this approach should time out! http://www.codechef.com/viewsolution/7474759 Thanx

We can also use a segment tree to do the first subtask. Here is a link to my solution: http://www.codechef.com/viewsolution/7422093

First of all, a very good problem to solve, enjoyed solving it (although only subtask 1). I used a Priority Queue to merge the ranges and obtain the MIN array, then the 0-1 Knapsack algorithm to find the final answer. Intuitively I think the complexity of my 'mergeRange' function is O(M + 2M lg M), i.e., O(M lg M), and therefore it should have passed. Here is my solution: http://www.codechef.com/viewsolution/7383196 Can someone verify if a Priority Queue can be used to merge the ranges and obtain the MIN array in O(M lg M) time. Thanks.

I used an interval tree to find the minimum cost to remove a dish and 0-1 knapsack to find the cost of removing the maximum dishes, but got WA, and since you guys don't provide details of the test case on which my program failed, I guess I won't know what I did wrong.

Attached is snowbear's solution: http://www.codechef.com/viewsolution/7329411. Can somebody help me understand how this short code solves the problem?
```
auto jury = readVector<pair<pair<int, int>, int>>(m);
sort(all(jury));
reverse(all(jury));
for (auto &j : jury) { j.first.first--; j.first.second--; }
vector<int> minCost(n, IntMaxVal);
vector<int> longestDiscount(201, -1);
FOR (i, 0, n) {
    while (jury.size() && jury.back().first.first == i) {
        maximize(longestDiscount[jury.back().second], jury.back().first.second);
        jury.pop_back();
    }
    FOR (c, 1, longestDiscount.size())
        if (longestDiscount[c] >= i) { minCost[i] = c; break; }
}
vector<vector<int>> best_costs(k + 1);
FOR (i, 0, n)
    if (a[i] < 0 && minCost[i] <= k)
        best_costs[minCost[i]].push_back(-a[i]);
for (auto &v : best_costs) sort(all(v)), reverse(all(v));
FOR (c, 1, best_costs.size())
    if (best_costs[c].size() > k / c) best_costs[c].resize(k / c);
vector<pair<int, int>> knapsack_items;
FOR (c, 1, best_costs.size())
    for (auto x : best_costs[c]) knapsack_items.push_back({ c, x });
vector<LL> knapsack(k + 1);
for (auto &item : knapsack_items)
    FORD (c, k, item.first)
        maximize(knapsack[c], knapsack[c - item.first] + item.second);
return res + knapsack.back();
```

I used segment trees with lazy propagation for updates and the DP as mentioned, but I got TLE. My sol can be found here. Please let me know why it failed??

I can't seem to find why I am getting WA in subtask 1. Maybe some good test cases, if any?

I can't seem to find why I am getting WA even in subtask 1: http://www.codechef.com/viewsolution/7475713 Good test cases might help.

I have solved it using merging intervals having the same c values and 0-1 knapsack. If anyone is interested, you can have a look here.

Can you explain why my code got a WA?
I used a sweep line algorithm to get the apt interval and then used a knapsack:

```
#include <bits/stdc++.h>
using namespace std;

/*
 1 data  first.first
 2 type  first.second
 3 other second.first
 4 cost  second.second
*/

int knapSack(long long int W, long long int wt[], long long int val[], long long int n)
{
    long long int i, w;
    long long int K[n+1][W+1];
    // Build table K[][] in bottom up manner
    for (i = 0; i <= n; i++) {
        for (w = 0; w <= W; w++) {
            if (i == 0 || w == 0)
                K[i][w] = 0;
            else if (wt[i-1] <= w)
                K[i][w] = max(val[i-1] + K[i-1][w-wt[i-1]], K[i-1][w]);
            else
                K[i][w] = K[i-1][w];
        }
    }
    return K[n][W];
}

int main()
{
    ios_base::sync_with_stdio(false);
    vector<pair<long long, long long> > v;
    vector<pair<pair<long long,long long>,pair<long long,long long> > > u;
    map<pair<long long, long long>, long long> m1;
    long long t, n, k, m, sum, i, x, l, r, c;
    cin >> t;
    while (t--) {
        cin >> n >> k >> m;
        sum = 0;
        for (i = 0; i < n; i++) {
            cin >> x;
            sum = sum + x;
            if (x < 0) {
                v.push_back(make_pair(-1 * x, i + 1));
            }
        }
        sort(v.rbegin(), v.rend());
        for (i = 0; i < m; i++) {
            cin >> l >> r >> c;
            m1[make_pair(l, r)] = c;
        }
        long long int wt[100000], val[100000], z;
        pair<pair<long long,long long>,pair<long long,long long> > temp;
        for (i = 0; i < v.size(); i++) {
            (temp.first).first = v[i].second;
            (temp.first).second = 0;   // we put a 0 type for point, -1 for left, 1 for right
            (temp.second).first = -1;
            (temp.second).second = v[i].first;  // cost contains value for points, and cost for intervals
            u.push_back(temp);
        }
        for (map<pair<long long, long long>, long long>::iterator it = m1.begin(); it != m1.end(); it++) {
            (temp.first).first = (it->first).first;
            (temp.second).first = (it->first).second;
            (temp.first).second = -1;
            (temp.second).second = (it->second);
            u.push_back(temp);   // pushing left part of interval
            (temp.second).first = (it->first).first;
            (temp.first).first = (it->first).second;
            (temp.first).second = 1;
            (temp.second).second = (it->second);
            u.push_back(temp);   // pushing right part of interval
        }
        set<pair<long long,pair<long long,long long> > > s;
        pair<long long,pair<long long,long long> > temps;
        sort(u.begin(), u.end());
        z = 0;
        for (i = 0; i < u.size(); i++) {
            if (u[i].first.second == -1) {          // left of interval
                temps.first = u[i].second.second;
                temps.second.second = u[i].second.first;
                temps.second.first = u[i].first.first;
                s.insert(temps);
            }
            else if (u[i].first.second == 1) {      // right of interval
                temps.first = u[i].second.second;
                temps.second.first = u[i].second.first;
                temps.second.second = u[i].first.first;
                s.erase(temps);
            }
            else {                                  // point
                temps = *(s.begin());
                val[z] = v[u[i].first.first - 1].first;
                wt[z] = temps.first;
                z++;
            }
        }
        long long int ans = knapSack(k, wt, val, z);
        cout << sum + ans << "\n";
        m1.clear();
        u.clear();
        v.clear();
        s.clear();
    }
    return 0;
}
```

Is there a way to solve this problem using sqrt decomposition?

Why are the elements in the set removed after finding MIN[i] for subtask 2? Could somebody explain it a bit more clearly?

I got TLE with the DP solution for the second subtask. Weird. What's really surprising is that I tried greedy knapsack (aka fractional knapsack) and got AC. You sort each value by decreasing (value/cost), here A[i]/R[i], and greedily take everything till the budget is reached. Normally this solution shouldn't work on all test cases. Does anyone have a clue on why it worked?

I used the same kind of solution Lalit used but I used a multiset instead. Can anyone tell me why it TLEed for subtask 2? Here's my soln: http://www.codechef.com/viewsolution/7352429

I used segment trees to find min cost but the last 2 test cases of sub-task 2 gave TLE.
## How to find work when you have force and time

I have an at-home lab that tells me to run up a flight of stairs. My flight of stairs is 3 m (vertical height). To walk up the stairs takes me 6.98 seconds; to run up the stairs takes me 2.88 seconds. I need to find the work that is being done. I weigh 118 pounds, which is 526 N. How do I go about this? I know work is W = (F)(D). So do I need to put my time in seconds into something? But what? Please help me!!

As you mention, $W=\vec{F}\cdot \vec{d}$; time doesn't appear there, and that's because the work done by a force is independent of how long the force acts. The measurement of the time it takes you to go upstairs is not really that useful. Try to approach the problem in a different way. Hint: when you go upstairs, the work you do becomes energy. Think about it: how much more energy do you have after you have gone upstairs? Where does this energy come from?

What CFede said. Look at it from the point of view of energy gained. There is an interesting conclusion.

Is it (F x D) / t ??????

Hi again alicia, I think you are very confused. The time here is good for nothing. It doesn't matter how long it takes you to go upstairs; whether you do it in 3 minutes or 3 hours is irrelevant, the work done is the same. You must approach the problem in a different manner. Let me try to explain: when the energy of a body changes, that energy change must have come from somewhere. If at some moment a body A has more energy than at an earlier time, that "extra" energy must have been obtained from something else. This means that some work had to be done on body A in order to grant it that "extra" energy.
When you go upstairs, your energy increases; more specifically, your gravitational potential energy increases. This increase in potential energy occurs because of the work done, which means the increase in potential energy is equivalent to the work done, that is: $W=\Delta E=E_f-E_i=mgh_f-mgh_i=mg\Delta h$. In this way you can relate the work done to the energy difference. As you can see, time plays no role in this whole thing. You must think of the problem in terms of energy changes. Another comment too: the equation you mention, $W=\vec{F}\cdot\vec{d}$, is true for constant (or average) forces only. Since this is not the case, you shouldn't use that. Hope you understand a little better now. If not, ask again.

Quote by CFede: Hi again alicia, I think you are very confused. The time here is good for nothing. It doesn't matter how long it takes you to go upstairs; whether you do it in 3 minutes or 3 hours is irrelevant, the work done is the same. You must approach the problem in a different manner. Let me try to explain: when the energy of a body changes, that energy change must have come from somewhere. If at some moment a body A has more energy than at an earlier time, that "extra" energy must have been obtained from something else. This means that some work had to be done on body A in order to grant it that "extra" energy. When you go upstairs, your energy increases; more specifically, your gravitational potential energy increases. This increase in potential energy occurs because of the work done, which means the increase in potential energy is equivalent to the work done, that is: $W=\Delta E=E_f-E_i=mgh_f-mgh_i=mg\Delta h$. In this way you can relate the work done to the energy difference. As you can see, time plays no role in this whole thing. You must think of the problem in terms of energy changes. Another comment too: the equation you mention, $W=\vec{F}\cdot\vec{d}$, is true for constant (or average) forces only.
Since this is not the case, you shouldn't use that. Hope you understand a little better now. If not, ask again.

Ok so I understood the question wrong. It says I need to find my power rating. Is that correct for my previous answer? I was thinking power rating was work... It's not work, correct?

Quote by alicia113: Ok so I understood the question wrong. It says I need to find my power rating. Is that correct for my previous answer? I was thinking power rating was work... It's not work, correct?

You can think of power as being the rate of energy change, whereas work is the total change in energy, independent of time. So, if we let $\Delta W$ be the work performed, the equation for the average power is $$P_{avg} = \frac{\Delta W}{\Delta t}$$ So, calculate the work using the formula provided by CFede, and then divide by time.

Ok thanks!! So it will be W = F x D = 526 N x 3 m = 1578 J, and Pavg = 1578 J / 6.98 s ≈ 226.1. (What's the unit?) I'm sorry!

Well, you should use CFede's formula to calculate the work (e.g. the change in gravitational potential energy). Also, the unit of power is joules per second (J/s).

Quote by alicia113: Ok thanks!! So it will be W = F x D = 526 N x 3 m = 1578 J, and Pavg = 1578 J / 6.98 s ≈ 226.1. (What's the unit?)

It's Pavg = 1578 J / 6.98 s ≈ 226.1 J/s (i.e. joules per second). 1 J/s = 1 watt, so Pavg ≈ 226.1 watts. If you prefer horses, then 750 W = 1 hp, so 226.1 W is about 0.3 hp.

So that's my power then? Thank you so much!! So my power for 2.88 s is about 0.7 hp, or 548 watts: 1578 J / 2.88 s ≈ 548 W.
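The arithmetic in this thread is easy to check with a few lines of Python (values taken straight from the posts above):

```python
force_n = 526.0              # weight: 118 lb expressed in newtons
height_m = 3.0               # vertical height of the stairs
t_walk, t_run = 6.98, 2.88   # seconds

work_j = force_n * height_m  # W = F * d (equivalently m*g*h) in joules
p_walk = work_j / t_walk     # average power while walking
p_run = work_j / t_run       # average power while running

print(work_j)                 # 1578.0 J
print(round(p_walk, 1))       # 226.1 W
print(round(p_run, 1))        # 547.9 W
print(round(p_run / 750, 2))  # 0.73 hp, using the thread's 750 W per hp
```

The work is the same in both cases; only the power (work per unit time) differs.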
## Tate-Shafarevich groups and L-functions of elliptic curves with complex multiplication.(English)Zbl 0628.14018 Let K be a number field and $$E/K$$ an elliptic curve. The conjecture of Birch and Swinnerton-Dyer gives a relation between the behavior of the L-series $$L(E_{/K},s)$$ around $$s=1$$ and (among other quantities) the order of the Tate-Shafarevich group Ш = Ш$$(E_{/K})$$. As described by J. T. Tate [Invent. Math. 23, 179-206 (1974; Zbl 0296.14018)], "this remarkable conjecture relates the behavior of a function $$L$$ at a point where it is not at present known to be defined to the order of a group Ш which is not known to be finite!" In this important paper the author gives the first examples of elliptic curves for which it can be proved that the Tate-Shafarevich group is finite. His first result gives a relation between the value of $$L(E_{/K},1)$$ and the order of Ш; his second relates the order of vanishing of $$L(E_{/K},s)$$ at $$s=1$$ to the rank of the Mordell-Weil group E(K). Both of these results provide additional evidence for the truth of the Birch and Swinnerton-Dyer conjecture. We now describe the author's results in more detail. Let $$E/K$$ be an elliptic curve with complex multiplication by an order $${\mathfrak O}$$ in the imaginary quadratic field K, let $${\mathfrak O}_K$$ be the ring of integers of K, and let $$\Omega$$ be an $${\mathfrak O}$$-generator of the period lattice of a minimal model for E. Let $$\psi$$ be the Hecke character of K attached to E. The L-function of $$E/K$$ satisfies $$L(E_{/K},s)=L(\psi,s)L({\bar\psi},s)$$, and $$L({\bar\psi},1)/\Omega\in K.$$ Theorem A. (a) If $$L(E_{/K},1)\neq 0$$, then Ш is finite. (b) Let $${\mathfrak p}$$ be a prime of K not dividing $$|{\mathfrak O}^*_K|$$. If $$|E(K)_{tors}|\,L({\bar\psi},1)/\Omega \not\equiv 0 \pmod{{\mathfrak p}}$$, then the $${\mathfrak p}$$-part of Ш is trivial. Theorem B.
Let E be an elliptic curve defined over $${\mathbb{Q}}$$ with complex multiplication. If $$\text{rank}_{{\mathbb{Z}}}(E({\mathbb{Q}}))\geq 2$$, then $$\text{ord}_{s=1}L(E_{/{\mathbb{Q}}},s)\geq 2.$$ The author's proofs rely heavily on the techniques originally developed by J. Coates and A. Wiles [Invent. Math. 39, 223-251 (1977; Zbl 0359.14009) and J. Aust. Math. Soc., Ser. A 26, 1-25 (1978; Zbl 0442.12007)], in particular on a refinement of the relation between elliptic units and $$L({\bar\psi},1)$$. The main new ingredient is the use of ideal class annihilators arising from elliptic units, which was suggested by the work of Thaine on cyclotomic units and class groups of cyclotomic fields ["On the ideal class groups of real abelian number fields" (to appear)]. This allows the author to control the size of a certain class group while working entirely in the field $$K(E_{{\mathfrak p}})$$; the original work of Coates and Wiles (op. cit.) required using $$K(E_{{\mathfrak p}^n})$$ for all $$n\geq 1.$$ As the author indicates in his introduction, the major complications in the proof of Theorem A arise from a small number of primes, in particular primes of bad reduction and primes dividing $$|{\mathfrak O}^*_K|$$. The interested reader might start by reading the proof of the weaker (but still striking) statement "If $$L(E_{/K},1)\neq 0$$ then Ш has no $${\mathfrak p}$$-torsion for almost all $${\mathfrak p}$$." The technical details needed to complete the proof of Theorem A can then be found in the later sections. The proof of Theorem B is essentially independent of that of Theorem A, although it again relies heavily on the use of elliptic-unit ideal class annihilators. It also uses recent work of B. H. Gross and D. Zagier [Invent. Math. 84, 225-320 (1986; Zbl 0608.14019)] and B. Perrin-Riou ["Points de Heegner et dérivées de fonctions L p-adiques," Invent. Math. (to appear)]. As a corollary of (the proof of) Theorem B and the Gross-Zagier theorem (op.
cit.), the author also deduces that if E is defined over $${\mathbb{Q}}$$ and has complex multiplication, and if $$L(E_{/{\mathbb{Q}}},s)$$ has a simple zero at $$s=1$$, then the p-part of Ш$$(E_{/{\mathbb{Q}}})$$ is finite for all primes $$p>2$$ for which E has good, ordinary reduction (i.e. for approximately half the primes p). Reviewer: J.H.Silverman

### MSC:

14G10 Zeta functions and related questions in algebraic geometry (e.g., Birch-Swinnerton-Dyer conjecture)
14K22 Complex multiplication and abelian varieties
14H45 Special algebraic curves and curves of low genus
14G05 Rational points
14H52 Elliptic curves

### Citations:

Zbl 0296.14018; Zbl 0359.14009; Zbl 0442.12007; Zbl 0608.14019

Full Text:

### References:

[1] Bertrand, D.: Valeurs de fonctions theta et hauteurs p-adiques. In: Séminaire de Théorie des Nombres, Paris 1980-81. Prog. Math., vol. 22, pp. 1-12. Boston: Birkhäuser (1982)
[2] Birch, B., Swinnerton-Dyer, P.: Notes on elliptic curves II. J. Reine Angew. Math. 218, 79-108 (1965) · Zbl 0147.02506
[3] Coates, J.: Infinite descent on elliptic curves. In: Arithmetic and Geometry, papers dedicated to I.R. Shafarevich on the occasion of his 60th birthday. Prog. Math., vol. 35, pp. 107-136. Boston: Birkhäuser (1983)
[4] Coates, J., Wiles, A.: On the conjecture of Birch and Swinnerton-Dyer. Invent. Math. 39, 223-251 (1977) · Zbl 0359.14009
[5] Coates, J., Wiles, A.: On p-adic L-functions and elliptic units. J. Aust. Math. Soc. 26, 1-25 (1978) · Zbl 0442.12007
[6] de Shalit, E.: The explicit reciprocity law in local class field theory. Duke Math. J. 53, 163-176 (1986) · Zbl 0597.12018
[7] de Shalit, E.: The Iwasawa Theory of Elliptic Curves with Complex Multiplication. Perspec. Math. Orlando: Academic Press (1987) · Zbl 0674.12004
[8] Greenberg, R.: On the Birch and Swinnerton-Dyer conjecture. Invent. Math. 72, 241-265 (1983) · Zbl 0546.14015
[9] Gross, B.: On the conjecture of Birch and Swinnerton-Dyer for elliptic curves with complex multiplication.
In: Number Theory Related to Fermat's Last Theorem. Prog. Math., vol. 26, pp. 219-236. Boston: Birkhäuser (1982)
[10] Gross, B., Zagier, D.: Heegner points and derivatives of L-series. Invent. Math. 84, 225-320 (1986) · Zbl 0608.14019
[11] Iwasawa, K.: On Z_l-extensions of algebraic number fields. Ann. Math. 98, 246-326 (1973) · Zbl 0285.12008
[12] Katz, N.: p-adic interpolation of real analytic Eisenstein series. Ann. Math. 104, 459-571 (1976) · Zbl 0354.14007
[13] Kubert, D., Lang, S.: Modular Units. Berlin Heidelberg New York: Springer (1981) · Zbl 0492.12002
[14] Mazur, B., Swinnerton-Dyer, P.: Arithmetic of Weil curves. Invent. Math. 25, 1-61 (1974) · Zbl 0281.14016
[15] Perrin-Riou, B.: Points de Heegner et dérivées de fonctions L p-adiques. Invent. Math. (to appear) · Zbl 0636.14005
[16] Robert, G.: Unités elliptiques. Bull. Soc. Math. Fr. Suppl., Mémoire vol. 36 (1973)
[17] Rubin, K.: Congruences for special values of L-functions of elliptic curves with complex multiplication. Invent. Math. 71, 339-364 (1983) · Zbl 0513.14012
[18] Rubin, K.: Global units and ideal class groups. Invent. Math. 89, 511-526 (1987) · Zbl 0628.12007
[19] Shimura, G.: Introduction to the Arithmetic Theory of Automorphic Forms. Princeton: Princeton University Press (1971) · Zbl 0221.10029
[20] Silverman, J.: The Arithmetic of Elliptic Curves. Graduate Texts in Math., vol. 106. Berlin Heidelberg New York: Springer (1986) · Zbl 0585.14026
[21] Stephens, N.: The conjectures of Birch and Swinnerton-Dyer for the curves X^3+Y^3=DZ^3. J. Reine Angew. Math. 231, 121-162 (1968) · Zbl 0221.10023
[22] Tate, J.: Algorithm for determining the type of a singular fiber in an elliptic pencil. In: Modular Functions of One Variable (IV), Lect. Notes Math., vol. 476. Berlin New York: Springer (1975) · Zbl 1214.14020
[23] Thaine, F.: On the ideal class groups of real abelian number fields. (To appear) · Zbl 0665.12003
[24] Washington, L.: Introduction to Cyclotomic Fields. Graduate Texts in Math., vol. 83.
Berlin Heidelberg New York: Springer (1982) · Zbl 0484.12001
[25] Weil, A.: Number Theory, an approach through history. Boston: Birkhäuser (1984) · Zbl 0531.10001
[26] Wiles, A.: Higher explicit reciprocity laws. Ann. Math. 107, 235-254 (1978) · Zbl 0378.12006
[27] Wintenberger, J-P.: Structure galoisienne de limites projectives d'unités locales. Comp. Math. 42, 89-103 (1981) · Zbl 0414.12008
[28] Yager, R.: On two variable p-adic L-functions. Ann. Math. 115, 411-449 (1982) · Zbl 0496.12010

This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
Genome Algebra Research

The genetic code algebras and their extensions to genes and genomes involve several algebraic structures, such as Boolean algebras [1,2], modular algebras [3], and vector spaces over Galois fields [4]. Each algebraic structure provides a different approach to understanding gene and genome architectures, as well as the mutational and molecular evolutionary processes. For example, the Boolean algebra provides a way to understand the operational logic of the mutational process [1,2], either on the four-letter alphabet of the DNA molecules or on the binary alphabet used by modern computers. The genetic code vector space over the Galois field of the four DNA bases revealed that the quantitative relationships between codons determine a genetic code architecture mathematically equivalent to a cube embedded in three-dimensional space [4]. The genetic code algebras are founded on the quantitative relationships between the DNA bases in the codons. The genetic code is the code of the genetic communication/information system (GCS) [5]. Most of the messages in the GCS are written in the four-letter DNA alphabet. These "letters" are the DNA bases adenine, guanine, cytosine, and thymine, usually denoted A, G, C, and T respectively (in an RNA molecule, T is changed to U, uracil). They are paired according to the following rule (Watson-Crick base pairings): G:C, A:T. That is, base G is the complementary base of C, and A is the complementary base of T (or U), in the DNA (or RNA) molecule and vice versa. The standard genetic code table (RNA codon table) is formed by 64 codons. In higher organisms there is also evidence supporting an epigenome communication system (ECS), which is an extension of the GCS [6]. The alphabet of the ECS extends that of the GCS to include methylated cytosine (mainly) and methylated adenine. Genetic code algebras and their extensions to genes and genomes have already been developed.
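The pairing and transcription rules just described can be sketched in a few lines of Python (a toy illustration; the function names are mine):

```python
# Watson-Crick pairing rule: G:C and A:T (T becomes U in RNA)
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(seq: str) -> str:
    """Base-by-base Watson-Crick complement of a DNA string."""
    return "".join(COMPLEMENT[b] for b in seq)

def transcribe(seq: str) -> str:
    """DNA -> RNA alphabet: thymine (T) is replaced by uracil (U)."""
    return seq.replace("T", "U")

# The standard genetic code table is formed by 4^3 = 64 codons:
CODONS = [a + b + c for a in "UCAG" for b in "UCAG" for c in "UCAG"]
```

For example, complement("GATC") gives "CTAG", and CODONS enumerates all 64 triplets of the RNA codon table.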
The genetic code table

Columns of the genetic code table are not arranged at random. It is well known that there is an association between the second-position base and hydrophobicity: the amino acids whose codons have U at the second position are hydrophobic, {I, L, M, F}, whereas those with A at the second position are hydrophilic (polar amino acids): {D, E, H, N, K, Q, Y}. This was highlighted by Crick when he proposed his famous frozen accident hypothesis [7]. Epstein [8] pointed out that "related" amino acids have to some extent related codons, and Crick [7] considered that the amino acids in the genetic code table do not seem to be allocated in a totally random way. So it is natural to think that some partial order on the set of codons should reflect the physicochemical properties of the amino acids [9,10].

The Binary Alphabet of DNA: On the DNA Computer Binary Code

On any finite set we can define a partial order or a binary operation in many different ways. But here, a partial order is defined on the set of four DNA bases in such a manner that a Boolean lattice structure is obtained. A Boolean lattice is an algebraic structure that […]

The genetic-code vector space $\mathfrak{B}^3$ over the Galois field GF(5)

The $\mathbb{Z}_5$-vector space $\mathfrak{B}^3$ over the field $(\mathbb{Z}_5, +, \cdot)$. 1. Background. This is a formal introduction to the genetic code $\mathbb{Z}_5$-vector space $\mathfrak{B}^3$ over the field $(\mathbb{Z}_5, +, \cdot)$. This mathematical model is defined based on the physicochemical properties of the DNA bases (see previous post). This introduction can be complemented with a Wolfram Computable […]

Group operations on the set of five DNA bases

An introduction to the groups defined on five DNA bases. General biochemical background. The genetic information on how to build proteins able to perform different biological/biochemical functions is encoded in the DNA sequence.
Code-words of three letters/bases, called triplets or codons, are used to encode the information that will be used to synthesize proteins. Every […]

References
1. Sanchez R, Morgado E, Grau R. The genetic code Boolean lattice. MATCH Commun Math Comput Chem, 2004, 52:29–46.
2. Sánchez R, Morgado E, Grau R. A genetic code Boolean structure. I. The meaning of Boolean deductions. Bull Math Biol, 2005, 67:1–14.
3. Sanchez R, Morgado E, Grau R. Gene algebra from a genetic code algebraic structure. J Math Biol, 2005, 51:431–57.
4. Sánchez R, Perfetti LA, Grau R, Morales ERM. A New DNA Sequences Vector Space on a Genetic Code Galois Field. MATCH Commun Math Comput Chem, 2005, 54:3–28.
5. Sanchez R, Grau R. Bull Math Biol, 2005, 67:1017–29.
6. Sanchez R, Mackenzie SA. Information Thermodynamics of Cytosine DNA Methylation. PLoS One, 2016, 11:e0150427.
7. Crick FHC. The Origin of the Genetic Code. J Mol Biol, 1968, 38:367–79.
8. Epstein CJ. Role of the amino-acid "code" and of selection for conformation in the evolution of proteins. Nature, 1966, 210:25–28.
9. Lehmann J. J Theor Biol, 2000, 202:129–144.
10. Knight RD, Freeland SJ, Landweber LF. Selection, history and chemistry: the three faces of the genetic code. Trends Biochem Sci, 1999, 24:241–247.
# Custom data labels for each plot in bar chart

I have a simple bar chart, but I want the data labels to have a custom string value. For example, see the attached picture. I have found plenty of examples with plots, but nothing for bar charts. Note that every single coordinate has a different value associated with it. Is there a way I can enter a custom value, as some sort of additional argument to the coordinates?

\begin{figure}
\begin{tikzpicture}
\begin{axis}[
    ybar,
    enlargelimits=0.15,
    legend style={at={(0.5,-0.15)},
      anchor=north,legend columns=-1},
    ylabel={Speedup},
    xlabel={\# of Model Elements (millions)},
    symbolic x coords={1m,1.5m,2m,4m},
    xtick=data,
    nodes near coords,
    nodes near coords align={vertical},
    ]
\addplot coordinates {(1m,92.021) (1.5m,235.809) (2m,276.824) (4m,340.847)};
\end{axis}
\end{tikzpicture}
\caption{Results}
\label{fig:mycaption}
\end{figure}

This is very easy: just build up a list of strings and access them with \coordindex. To this end we define a comma-separated list of strings,

\edef\mylst{"An arbitrary string","String","Custom label","Not this data"}

where the first entry (which internally has index 0) will be used for the first node, the second entry for the second node, and so on. Make sure that the list has at least as many entries as there are nodes.
\documentclass{article}
\usepackage{pgfplots}
\pgfplotsset{compat=1.16}
\begin{document}
\begin{figure}
\begin{tikzpicture}
\edef\mylst{"An arbitrary string","String","Custom label","Not this data"}
\begin{axis}[width=12cm,
    ybar,
    enlargelimits=0.15,
    legend style={at={(0.5,-0.15)},
      anchor=north,legend columns=-1},
    ylabel={Speedup},
    xlabel={\# of Model Elements (millions)},
    symbolic x coords={1m,1.5m,2m,4m},
    xtick=data,
    nodes near coords=\pgfmathsetmacro{\mystring}{{\mylst}[\coordindex]}\mystring,
    nodes near coords align={vertical},
    ]
\addplot coordinates {(1m,92.021) (1.5m,235.809) (2m,276.824) (4m,340.847)};
\end{axis}
\end{tikzpicture}
\caption{Results}
\label{fig:mycaption}
\end{figure}
\end{document}

The strings are a bit too long. Are you OK with using multiple lines for them?

\documentclass{article}
\usepackage{pgfplots}
\pgfplotsset{compat=1.16}
\begin{document}
\begin{figure}[htb]
\centering
\begin{tikzpicture}
\edef\mylst{"An arbitrary string","String","Custom label","Not this data"}
\begin{axis}[ymax=370,
    ybar,
    enlargelimits=0.15,
    legend style={at={(0.5,-0.15)},
      anchor=north,legend columns=-1},
    ylabel={Speedup},
    xlabel={\# of Model Elements (millions)},
    symbolic x coords={1m,1.5m,2m,4m},
    xtick=data,
    nodes near coords style={font=\sffamily,align=center,text width=4em},
    nodes near coords=\pgfmathsetmacro{\mystring}{{\mylst}[\coordindex]}\mystring,
    nodes near coords align={vertical},
    ]
\addplot coordinates {(1m,92.021) (1.5m,235.809) (2m,276.824) (4m,340.847)};
\end{axis}
\end{tikzpicture}
\caption{Results}
\label{fig:mycaption}
\end{figure}
\end{document}

• Excellent, thank you! Can you explain what \mystring does? – Sina Madani Feb 11 at 17:25
• @SinaMadani This is a comma-separated list of strings that gets used in the nodes near coords. – user194703 Feb 11 at 17:30
• It seems to be only using the first character of the string. Is this a limitation of the command?
– Sina Madani Feb 11 at 17:37
• @SinaMadani I added the screen shot, and as you see there are the full strings. If you only see the first character, this can have many reasons, one of them being that you load the babel package. In that case you need to load \usetikzlibrary{babel}. If this does not work, add a fully compilable minimal working example to your question; my answer works. – user194703 Feb 11 at 17:39
• Thanks, it seems to be working now. I was trying it with a minimal graph using a single plot, but with multiple plots it works. – Sina Madani Feb 11 at 17:49

I found one solution which seems to work (adapted from this answer): after the \addplot, type e.g.

\node [above] at (axis cs: 1m, 92.021) {an arbitrary string};

You also need to remove the nodes near coords option to remove the existing (default) data label.

• Of course you can do this, but this would be a "100% manual solution". By that I mean that, whenever you change something in your data, you would also have to adjust the "manual" \nodes. – Stefan Pinnow Mar 10 at 16:23

A straightforward way of doing this would be to provide a table (instead of coordinates) and add another column with the stuff you want to show in the nodes near coords. For details please have a look at the comments in the code.
% used PGFPlots v1.16
\documentclass[border=5pt]{standalone}
\usepackage{pgfplots}
\pgfplotsset{
    % use this compat level or higher to make use of the "advanced" axis
    % label placement
    compat=1.3,
}
% instead of stating the coordinates directly store them in a data file/table
\begin{filecontents*}{MyData.txt}
    x    y       label
    1m   92.021  {An \\ arbitrary \\ string}
    1.5m 235.809 String
    2m   276.824 {custom label}
    4m   340.847 {not this data}
\end{filecontents*}
\begin{document}
\begin{tikzpicture}
    \begin{axis}[
        ybar,
        enlargelimits=0.15,
        ylabel={Speedup},
        xlabel={\# of Model Elements (millions)},
        % place xticks at data points (of the first \addplot)
        xtick=data,
        % label the xticks with data from the table
        xticklabels from table={MyData.txt}{x},
        % add nodes to the coordinates
        nodes near coords,
        % the nodes contain non-numeric data
        point meta=explicit symbolic,
        % if line breaks are included in the strings, align must be set/used
        nodes near coords style={
            align=center,
%            % (alternatively of giving line breaks manually to the labels you
%            % could give a fixed width of the label nodes)
%            text width=5em,
        },
    ]
        \addplot table [
            % simply use the coordinate index as x value
            % (the label is then used from xticklabels from table)
            x expr=\coordindex,
            % use y coordinate from table column with header "y"
            y=y,
            % use column with header "label" for the nodes near coords
            meta=label,
        ] {MyData.txt};
    \end{axis}
\end{tikzpicture}
\end{document}
# I Strange circular geodesic

#### Mentz114
Gold Member

Summary: Starting with a rotating frame field (spherical Born coordinates) and setting $\omega\equiv \omega(r)$, then solving the differential equation $\vec{a}=0$, $\vec{a}$ being the proper acceleration, gives the frame field of a circular geodesic.

The Born frame field (see ref below) describes a rotating system, and the proper acceleration is $\vec{a}=\nabla_{\vec{p}_0}\,\vec{p}_0=\frac{-\omega^2\,r}{1-\omega^2\,r^2}\,\vec{p}_2$. If $\omega$ depends on the coordinate $r$, then $\vec{a}=\frac{r^2\,\omega\,\frac{d\omega}{dr}+r\,\omega^2}{r^2\,\omega^2-1}\,\vec{p}_2$, and solving the ODE $\vec{a}=0$ gives $\omega(r)=M/r$, where $M>0$ is a constant. Obviously there must be a source now, and sure enough the Ricci and Einstein tensors are not zero. The metric is transformed to
$$g_{\mu\nu}=\begin{pmatrix} -1 & 0 & 0 & -\frac{M}{r}\\ 0 & 1 & 0 & 0\\ 0 & 0 & {r}^{2} & 0\\ -\frac{M}{r} & 0 & 0 & -\frac{{M}^{2}-1}{{r}^{2}} \end{pmatrix}$$
and clearly $M<1$ is a constraint. The Einstein tensor in the local frame is
$$E_{mn}=\begin{pmatrix} -\frac{r\,{M}^{2}-4\,{M}^{2}+4\,r}{4\,{r}^{3}} & 0 & 0 & \frac{M\,\left( 3\,{M}^{2}-4\,r\right) }{4\,{r}^{3}}\\ 0 & \frac{\left( M-2\right) \,\left( M+2\right) }{4\,{r}^{2}} & 0 & 0\\ 0 & 0 & -\frac{{M}^{2}-8}{4} & 0\\ \frac{M\,\left( 3\,{M}^{2}-4\,r\right) }{4\,{r}^{3}} & 0 & 0 & \frac{{M}^{2}\,\left( 3\,{M}^{2}-4\,r-3\right) }{4\,{r}^{4}} \end{pmatrix}$$
I don't know what to make of this, so any comments are welcomed.

#### PeterDonis
Mentor

If $\omega$ depends on coordinate $r$

In the Born chart, it doesn't; $\omega$ is a constant, the "angular velocity" of the rotating frame relative to an inertial observer at rest relative to the center of rotation.

I don't know what to make of this

I'm not sure what you are trying to do. Are you trying to derive an analogue of the Born chart for Schwarzschild spacetime?
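Aside: the ODE $\vec{a}=0$ quoted above is separable, so the stated solution $\omega(r)=M/r$ follows directly from the expression given in the opening post (a sketch, setting the numerator to zero since the denominator is nonzero for $r\omega \neq 1$):

```latex
\[
  r^{2}\,\omega\,\frac{d\omega}{dr} + r\,\omega^{2} = 0
  \quad\Longrightarrow\quad
  \frac{d\omega}{\omega} = -\frac{dr}{r}
  \quad\Longrightarrow\quad
  \ln\omega = -\ln r + \text{const}
  \quad\Longrightarrow\quad
  \omega(r) = \frac{M}{r},\quad M > 0.
\]
```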
#### Mentz114
Gold Member

It looks as if the result of $\omega\rightarrow M/r$ (a potential) has resulted in an EMT that looks like a thin rotating disc with all the matter in geodesic motion.

#### PeterDonis
Mentor

the result of $\omega\rightarrow M/r$

First you have to explain why you are letting $\omega$ be a function of $r$ at all. The frame field that is used as the basis for Born coordinates does not have this property; for that frame field, $\omega$ is a constant. So if you want $\omega$ to be a function of $r$, you are talking about a different frame field, and I'm not sure what the motivation for considering that frame field is.

#### Mentz114
Gold Member

First you have to explain why you are letting $\omega$ be a function of $r$ at all. The frame field that is used as the basis for Born coordinates does not have this property; for that frame field, $\omega$ is a constant. So if you want $\omega$ to be a function of $r$, you are talking about a different frame field, and I'm not sure what the motivation for considering that frame field is.

Why do I have to justify it? The motivation is curiosity. Naturally it is a different space-time - but what is it? I'm having a problem converting the cylindrical chart to spherical polar, so the Einstein tensor could be wrong - lots of fun trying to fix it.

#### PeterDonis
Mentor

Why do I have to justify it?

Perhaps "justify" was the wrong word. My point is, what physical congruence of worldlines (or family of observers) are you trying to describe? It can't be the same congruence as the one that's used to derive the Born chart, since that congruence has a constant $\omega$: it describes a family of observers that are in circular orbits around some common center in flat spacetime, with different orbital radius but the same angular velocity $\omega$.
Mathematically, what you've apparently done is take the description of the frame field of the family of observers at rest in the Born chart, and say "let's let $\omega$ be a function of $r$". But doing that invalidates the whole construction of the Born chart, so it's not clear what you're trying to describe, or even if what you're trying to describe is consistent.

If the basic idea you're trying to pursue is "what happens if we let $\omega$ be a function of $r$", I think a better approach would be to start with a chart in which you can unambiguously define a frame field with that property without invalidating the chart itself. So I would take the cylindrical chart on Minkowski spacetime (the one in which the Minkowski metric is expressed at the very start of the "Langevin observers in the cylindrical chart" section) and the Langevin frame field as expressed in that chart, and then let $\omega$ be a function of $R$ (note the capital $R$, since this is the Minkowski cylindrical chart, not the Born chart) and compute the proper acceleration. The formal expression of the frame field should remain the same as long as $\omega$ is only a function of $R$; but the formal expression for the proper acceleration will change.

Naturally it is a different space-time - but what is it?

I don't know, because I don't even know what family of observers you're trying to describe. See above.

I'm having a problem converting the cylindrical chart to spherical polar

Why would you need to? You can compute tensors just fine in the cylindrical chart. Furthermore, since your family of observers is not spherically symmetric, but only axially symmetric, a cylindrical chart seems like a better choice for describing it.

#### PeterDonis
Mentor

The metric is transformed to

How are you obtaining this? You're supposed to already know the metric in order to compute the proper acceleration.

#### Mentz114
Gold Member

Perhaps "justify" was the wrong word.
My point is, what physical congruence of worldlines (or family of observers) are you trying to describe? It can't be the same congruence as the one that's used to derive the Born chart, since that congruence has a constant $\omega$: it describes a family of observers that are in circular orbits around some common center in flat spacetime, with different orbital radius but the same angular velocity $\omega$.

I know this!

Mathematically, what you've apparently done is take the description of the frame field of the family of observers at rest in the Born chart, and say "let's let $\omega$ be a function of $r$". But doing that invalidates the whole construction of the Born chart, so it's not clear what you're trying to describe or even if what you're trying to describe is consistent.

Yes, agreed. But I have a new metric. It has a circular geodesic and matter, so it no longer matters how it was Born (pun intended). I'll take it from there. I will explore cylindrical coords as well.

#### PeterDonis
Mentor

I have a metric. It has a circular geodesic and matter so it no longer matters how it was Born (pun intended).

In the sense that you can write down whatever symmetric 4 x 4 matrix you like and call it a metric, yes, that's true. But I don't see any way to give that 4 x 4 matrix a physical interpretation without being able to link it to something physically understandable. In that sense, how it was "Born" does matter.

#### Mentz114
Gold Member

How are you obtaining this? You're supposed to already know the metric in order to compute the proper acceleration.

Ricci rotation coefficients are defined by the frame field. The frame covariant derivative can then be used to get the proper acceleration.

#### PeterDonis
Mentor

Ricci rotation coefficients are defined by the frame field

Yes, but that means you need a consistent definition of a frame field in the chart you are using. I'm not convinced you have that for the Born chart.
But I can see an obvious consistent way to define one using the Minkowski cylindrical chart, which I described previously; and I think that will end up giving you the same sort of metric you have now.

#### PeterDonis
Mentor

Ricci rotation coefficients are defined by the frame field

And the covariant derivative, which requires knowledge of the metric.

#### Mentz114
Gold Member

And the covariant derivative, which requires knowledge of the metric.

If the local metric $\eta_{ab}$ is known and the frame field is known, then the metric is just a transformation of $\eta$. The result for the Born chart in cylindrical coordinates gives a metric which does not cover the whole space $0<r<\infty$, because $r<1/M$ and $0<M<1$ are required. So it could be a very thin disc of something. I do not think the parameter $M$ can represent matter or energy. A small puzzle, but not worth any more time spent.
$$g_{\mu\nu}=\begin{pmatrix} \frac{1-{r}^{2}\,{M}^{2}}{{M}^{2}-1} & 0 & 0 & r\,M\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ r\,M & 0 & 0 & 1-{M}^{2} \end{pmatrix}$$

#### PeterDonis
Mentor

If the local metric $\eta_{ab}$ is known and the frame field is known, then the metric is just a transformation of $\eta$.

You also need to know the connection, because you need to know the covariant derivatives at your chosen event. Just knowing that the metric at that event is $\eta_{ab}$ is not enough, because you can always find a chart in which the metric is $\eta_{ab}$ at that event. But without knowing the connection, you don't know how to parallel transport vectors from the tangent space at one event to the tangent space at another event, so you have no way of comparing frame field vectors at different events.
#### PeterDonis
Mentor

The result for the Born chart in cylindrical coordinates

Is based on inconsistent assumptions, as far as I can see; you started with a chart that is defined in a way that requires $\omega$ to be constant, but you're treating the frame field as though $\omega$ can vary with $r$. So even if we just say you wrote down the metric you give arbitrarily, without deriving it from anything, I still don't think you have a consistent model, so there's no point in asking what it might represent physically.

#### Mentz114
Gold Member

You also need to know the connection, because you need to know the covariant derivatives at your chosen event. Just knowing that the metric at that event is $\eta_{ab}$ is not enough, because you can always find a chart in which the metric is $\eta_{ab}$ at that event. But without knowing the connection, you don't know how to parallel transport vectors from the tangent space at one event to the tangent space at another event, so you have no way of comparing frame field vectors at different events.

The frame covariant derivative allows comparison between points on the curve. I don't understand your objection. The curvature tensors as experienced by the transported frame can be calculated from the first derivatives of the tetrad. No need for Christoffel connections.

Is based on inconsistent assumptions, as far as I can see; you started with a chart that is defined in a way that requires $\omega$ to be constant, but you're treating the frame field as though $\omega$ can vary with $r$. So even if we just say you wrote down the metric you give arbitrarily, without deriving it from anything, I still don't think you have a consistent model, so there's no point in asking what it might represent physically.

I think you've made this point already. Again, feel free not to consider what it may mean.

#### pervect
Staff Emeritus

Why is having circular geodesics surprising?
I don't understand the physical significance of what you're doing at all, unfortunately. But it's easy to construct a very simple example with circular geodesics.

Consider a flat plane, 2-space + 1 time. A point remaining at rest in said flat plane follows a geodesic. Now adopt a rotating frame. The geodesics, in the rotating frame, of the points that used to be stationary are now circular.

If we assume your calculations are correct, can you explain why having circular geodesics is necessarily surprising, in light of this example?

#### PeterDonis
Mentor

The frame covariant derivative allows comparison between points on the curve.

What do you mean by "the frame covariant derivative"?

The curvature tensors as experienced by the transported frame can be calculated from the first derivatives of the tetrad.

First derivatives of the tetrad with respect to what?

#### Mentz114
Gold Member

Why is having circular geodesics surprising? I don't understand the physical significance of what you're doing at all, unfortunately. But it's easy to construct a very simple example with circular geodesics. Consider a flat plane, 2-space + 1 time. A point remaining at rest in said flat plane follows a geodesic. Now adopt a rotating frame. The geodesics, in the rotating frame, of the points that used to be stationary are now circular. If we assume your calculations are correct, can you explain why having circular geodesics is necessarily surprising, in light of this example?

The Born observer experiences proper acceleration. This is removed by a space-time in which the same frame field is a geodesic. However, my latest calculation hints that $G_{00} < 0$, so it is unrealistic.

#### PeterDonis
Mentor

I see the term "frame covariant derivative" in section 11.16.1. I don't see how it allows you to take derivatives without knowing the metric. Using tetrads doesn't eliminate the metric, it just re-expresses it in a different form that's often more useful for calculations.
coordinates

Which, as I said, I suspect your modified Born chart does not form a consistent set of. But re-doing your computations in the standard cylindrical chart would keep much of what you've done formally the same; what I don't know is whether it would lead to an Einstein tensor that is formally the same. At some point, if I have time, I'll fire up Maxima and see what it outputs for that formulation.

#### Mentz114
Gold Member

I see the term "frame covariant derivative" in section 11.16.1. I don't see how it allows you to take derivatives without knowing the metric. Using tetrads doesn't eliminate the metric, it just re-expresses it in a different form that's often more useful for calculations.

This is what I've been saying all along! Why are you so argumentative about this? You keep saying I've 'pulled a metric out of a hat' - but it comes from the tetrad and the local Minkowski metric - uniquely defined (I think).

Which, as I said, I suspect your modified Born chart does not form a consistent set of. But re-doing your computations in the standard cylindrical chart would keep much of what you've done formally the same; what I don't know is whether it would lead to an Einstein tensor that is formally the same. At some point, if I have time, I'll fire up Maxima and see what it outputs for that formulation.

Doing the calculation in the cylindrical chart simplifies the expressions and shows clearly that the '00' component of the Einstein tensor is negative. That explains why it is so weird! Whatever produces the curvature is certainly not matter ('as we know it, Jim'). (I have Maxima batch scripts to do the calculations and I have high confidence in them, but it could all be down to a miscalculation.)

#### PeterDonis
Mentor

This is what I've been saying all along!

I'm not sure it is. You're saying that the metric...
...comes from the tetrad and the local Minkowski metric

...which I don't agree with; if you don't already know the metric, you can't figure it out just from knowing some frame field and that the metric is locally Minkowski (the latter is true of any spacetime, so I don't see how it helps any).

For example: suppose all you know is the tetrad (frame field) of the Langevin observers, expressed in the Minkowski cylindrical chart (the one with the capital $R$), as shown in the Wikipedia article. And you know that at any event, the metric is locally Minkowski. But you don't know the global metric, and you don't know anything else about the frame field. How do you get from that to the line element shown for the Minkowski cylindrical chart?

#### Mentz114
Gold Member

I'm not sure it is. You're saying that the metric... ...comes from the tetrad and the local Minkowski metric ...which I don't agree with; if you don't already know the metric, you can't figure it out just from knowing some frame field and that the metric is locally Minkowski (the latter is true of any spacetime, so I don't see how it helps any). For example: suppose all you know is the tetrad (frame field) of the Langevin observers, expressed in the Minkowski cylindrical chart (the one with the capital $R$), as shown in the Wikipedia article. And you know that at any event, the metric is locally Minkowski. But you don't know the global metric, and you don't know anything else about the frame field. How do you get from that to the line element shown for the Minkowski cylindrical chart?

OK. Is the Ricci tensor below showing negative curvature? $0<L<1$ is a constant.
$$\begin{pmatrix} -\frac{{L}^{2}}{2\,{r}^{4}} & 0 & 0 & \frac{3\,L}{2\,{r}^{2}}\\ 0 & 0 & 0 & 0\\ 0 & 0 & \frac{{L}^{2}}{2\,{r}^{4}} & 0\\ \frac{3\,L}{2\,{r}^{2}} & 0 & 0 & -\frac{{L}^{2}}{2\,{r}^{2}} \end{pmatrix}$$

#### PeterDonis
Mentor

Is the Ricci tensor below showing negative curvature?

I can't tell just from the components.
To compute the Ricci scalar, which is the simplest invariant, and see if it's negative, I would need to know the metric.
# Nodes vertical distance in a legend of a tikzpicture

I have a figure drawn using tikz. I am trying to add a legend, which consists of two types of arrows with different meanings. My problem is how to add the correct vertical space between the nodes with arrows. In the MWE, I have a reference node whose text is separated by \\. How can I align the nodes with arrows while keeping the same vertical space between the text of nodes (n2) and (n3) as within (n1)?

\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{calc}
\begin{document}
\begin{tikzpicture}
\node[align=right,inner ysep=0pt] (n1) {BLA\\ BLB};
\draw[->,red,align=center] ($(n1.north east)+(1cm,0cm)$) -- ++(0:8pt)
  node[at end,anchor=west] (n2) {BLA BLA};
\draw[<-,red] ($(n1.north east)+(1cm,-\baselineskip)$) -- ++(0:8pt)
  node[at end,anchor=west] (n3) {BLA BLB};
\end{tikzpicture}
\end{document}

# Basic solution

This solution provides two keys:

• line and
• line default font.

The line key accepts its argument either with : or without :. If the colon is given, the value is simply forwarded to the internal @line key, which uses the part before the colon as a font argument (which hopefully sets \baselineskip correctly). The part after the : is the line number, 1 being the bottom line. If the colon is not present, the value of line default font will be forwarded to @line and used as the font argument.
## Code

\documentclass[tikz,convert=false]{standalone}
\makeatletter
\tikzset{
  line default font/.initial=\normalsize,
  line/.code=\pgfutil@in@{:}{#1}%
    \ifpgfutil@in@
      \pgfkeysalso{@line={#1}}\else
      \pgfkeysalso{@line={\pgfkeysvalueof{/tikz/line default font}:#1}}%
    \fi,
  @line/.code args={#1:#2}{% internal
    \begingroup
      #1%
      \pgfutil@tempcnta#2\relax
      \pgf@xa\pgfutil@tempcnta\baselineskip\relax
      \pgfmath@returnone\pgf@xa
    \endgroup
    \pgftransformshift{\pgfqpoint{0pt}{\pgfmathresult pt}}%
  }
}
\makeatother
\begin{document}
\begin{tikzpicture}
\node[align=right, font=\small, draw=lightgray] (n1) {BL\\ BLB};
\draw[->] ([line=\small:2,xshift=1cm] n1.base east) -- ++(0:8pt)
  node[at end,anchor=base west] (n2) {Line 2};
\draw[<-] ([line=1, xshift=1cm] n1.base east) -- ++(0:8pt)
  node[at end,anchor=base west] (n3) {Line 1};
\fill ([line=1] n1.base west) circle[] ([line=2] n1.base west) circle[] [red];
\fill ([line=1] n1.base) circle[] ([line=2] n1.base) circle[];
\fill ([line=1] n1.text) circle[] ([line=2] n1.text) circle[] [blue];
\fill ([line=1] n1.base east) circle[] ([line=2] n1.base east) circle[] [green];
\end{tikzpicture}
\end{document}

# Sophisticated solution

This solution includes:

• the save baseline key, which saves the \baselineskip length at the end of the node in a macro (it might be helpful to include this style in the every node style);
• an addition to the explicit node coordinate system in the form of the line key (it checks whether save baseline has been used on the node); and
• an addition to the implicit node coordinate. The line is separated from the anchor by the character ' (it is not possible to use : or ,; it could be possible to use . again).

It is not possible to use the line key without an anchor (implicitly or explicitly). This could be fixed. I advise against using ' in a node name (especially if it should be possible to use the line ' without an anchor).

### Shortcomings

Don't transform!
Seriously, if either the node is rotated, or the path which uses the line-anchor is rotated, or one of the scalings is active, I can guarantee nothing.

## Code

\documentclass[tikz,convert=false]{standalone}
\makeatletter
\tikzset{
  save baseline/.style={
    execute at end node=\expandafter\xdef\csname pgf@sh@bls@\tikz@fig@name\endcsname{\the\baselineskip}
  }
}
\pgfqkeys{/tikz/cs}{%
  anchor/.store in=\tikz@cs@angle% small fix to reduce the code of the 'node' cs
}
\tikzdeclarecoordinatesystem{node}{%
  \tikzset{cs/.cd,name=,anchor=none,line=none,#1}%
  \ifx\tikz@cs@angle\tikz@nonetext%
    \expandafter\ifx\csname pgf@sh@ns@\tikz@cs@node\endcsname\tikz@coordinate@text%
    \else
      \aftergroup\tikz@shapebordertrue%
      \edef\tikz@shapeborder@name{\tikz@cs@node}%
    \fi%
    \pgfpointanchor{\tikz@cs@node}{center}%
  \else%
    \ifx\tikz@cs@line\tikz@nonetext
      \pgfpointanchor{\tikz@cs@node}{\tikz@cs@angle}%
    \else
      \expandafter\ifx\csname pgf@sh@bls@\tikz@cs@node\endcsname\relax
        \PackageError{TikZ}{The \tikz@cs@node\space has no saved baseline, use the 'save baseline' option}{}
        \pgfpointanchor{\tikz@cs@node}{\tikz@cs@angle}
      \else
        \pgfutil@tempcnta=\tikz@cs@line\relax
        \pgfutil@tempdima=\csname pgf@sh@bls@\tikz@cs@node\endcsname
        {\pgfqpoint{0pt}{\pgfutil@tempcnta\pgfutil@tempdima}}%
      \fi
    \fi
  \fi%
}
\def\tikz@calc@anchor#1.#2\tikz@stop{%
  \pgfutil@in@{'}{#2}
  \ifpgfutil@in@
    \expandafter\ifx\csname pgf@sh@bls@#1\endcsname\relax
      \PackageError{TikZ}{The #1 has no saved baseline, use the 'save baseline' option}{}
      \pgfpointanchor{#1}{#2}
    \else
      \tikz@calc@anchor@line#1.#2\tikz@stop
    \fi
  \else
    \pgfpointanchor{#1}{#2}%
  \fi
}
\def\tikz@calc@anchor@line#1.#2'#3\tikz@stop{%
  \pgfutil@tempcnta=#3\relax
  \pgfutil@tempdima=\csname pgf@sh@bls@#1\endcsname
  {\pgfqpoint{0pt}{\pgfutil@tempcnta\pgfutil@tempdima}}%
}
\makeatother
\begin{document}
\begin{tikzpicture}
\node[align=right, font=\small, draw=lightgray, save baseline] (n1) {BL\\ BLB};
\draw[->] (node cs: name=n1, anchor=base east, line=2) -- ++(0:8pt)
  node[at end,anchor=base west]
(n2) {Line 2}; \draw[<-] (node cs: name=n1, anchor=base east) -- ++(0:8pt) node[at end,anchor=base west] (n3) {Line 1}; \fill (n1.base west'1) circle[] (n1.base west'2) circle[] [red]; \fill (n1.base'1) circle[] (n1.base'2) circle[]; \fill (n1.text'1) circle[] (n1.text'2) circle[] [blue]; \fill (n1.base east'1) circle[] (n1.base east'2) circle[] [green]; \end{tikzpicture} \end{document} ## Output - Your code doesn't work. It gives l.6 ...[->,red] ([line=2,xshift=1cm] n1.base east) -- ++(0:8pt) ? –  cacamailg Jul 23 '13 at 0:58 The code works now. But the vertical space is correct only if font=\normalsize. Since I am using font=\small for all the nodes, I had to do \tikzset{line/.style={shift={(+0pt,{(#1-1)*\baselineskip*0.91})}}}. This 0.91 is achieved by trial-error tentatives. How can we introduce a factor if font size chances to \small, \footnotesize, etc.? –  cacamailg Jul 23 '13 at 12:00 @cacamailg Yes, that is annoying (and is one of the things I meant with “[i]f \baselineskip is not specifically changed inside the node”). I am afraid there is no easy solution. If you typeset all nodes in \small you could issue \small after \begin{tikzpicture}[…] (which changes also all em and ex dimensions of course). Or you can use ([/utils/exec=\small,line=2,xshift=1cm] n1.base east) (best hidden in another style). If you want to go very sophisticated, you let \baselineskip be stored away and reference it later (could be a special cs). Would this be needed? –  Qrrbrbirlbel Jul 23 '13 at 12:26 Well, all nodes are typeset in \small because of space constraints. That special style that you mention, probably is the best solution, because I don't know if I need to typeset some nodes in \footnotesize. Or change the style that you have defined to include a second parameter. –  cacamailg Jul 23 '13 at 12:37 @cacamailg Please see my update. The “Sophisticated solution” is what I was talking about in my previous comment. But don’t use any transformations (besides shifting I think). 
This could go horrible wrong (but that's true even for the other solutions). I will take a look into the transformations nonetheless. –  Qrrbrbirlbel Jul 23 '13 at 14:59
It has been extensively shown that Deep Neural Networks (DNNs) are vulnerable to Adversarial Examples (AEs). As more and more advanced adversarial attack methods have been developed, a number of corresponding defense solutions have been designed to enhance the robustness of DNN models. It has become popular to leverage data augmentation techniques to preprocess input samples before inference to remove adversarial perturbations. By obfuscating the gradients of DNN models, these approaches can defeat a considerable number of conventional attacks. Unfortunately, advanced gradient-based attack techniques (e.g., BPDA and EOT) were introduced to invalidate these preprocessing effects. In this paper, we present FenceBox, a comprehensive framework to defeat various kinds of adversarial attacks. FenceBox is equipped with 15 data augmentation methods from three different categories. We comprehensively evaluate these methods and show that they can effectively mitigate various adversarial attacks. FenceBox also provides APIs for users to easily deploy the defense over their models in different modes: they can either select an arbitrary preprocessing method, or a combination of functions for a better robustness guarantee, even under advanced adversarial attacks. We open-source FenceBox and expect it to serve as a standard toolkit to facilitate the research of adversarial attacks and defenses.

### Related Content

The security of Person Re-identification (ReID) models plays a decisive role in the application of ReID. However, deep neural networks have been shown to be vulnerable: adding imperceptible adversarial perturbations to clean images can fool networks that otherwise perform well on clean images. We propose a ReID multi-modal data augmentation method with an adversarial defense effect: 1) Grayscale Patch Replacement, which consists of Local Grayscale Patch Replacement (LGPR) and Global Grayscale Patch Replacement (GGPR).
This method can not only improve the accuracy of the model, but also help the model defend against adversarial examples; 2) Multi-Modal Defense, which integrates three homogeneous image modalities (visible, grayscale, and sketch) and further strengthens the defense ability of the model. These methods fuse different modalities of homogeneous images to enrich the variety of input samples; this variety reduces the over-fitting of the ReID model to color variations and makes it difficult for the attack method to align the adversarial space of the dataset, so the accuracy of the model is improved and the attack effect is greatly reduced. The more homogeneous modalities are fused, the stronger the defense capability becomes. The proposed method performs well on multiple datasets, and successfully defends against the MS-SSIM attack on ReID proposed at CVPR 2020 [10], increasing accuracy by a factor of 467 (from 0.2% to 93.3%). Guaranteeing the security of transactional systems is a crucial priority of all institutions that process transactions, in order to protect their businesses against cyberattacks and fraudulent attempts. Adversarial attacks are novel techniques that, besides being proven effective at fooling image classification models, can also be applied to tabular data. Adversarial attacks aim at producing adversarial examples, in other words, slightly modified inputs that induce the Artificial Intelligence (AI) system to return incorrect outputs that are advantageous for the attacker. In this paper we illustrate a novel approach to modify and adapt state-of-the-art algorithms to imbalanced tabular data, in the context of fraud detection. Experimental results show that the proposed modifications lead to a perfect attack success rate, obtaining adversarial examples that are also less perceptible when analyzed by humans.
Moreover, when applied to a real-world production system, the proposed techniques show that they could pose a serious threat to the robustness of advanced AI-based fraud detection procedures. Generative adversarial networks (GANs) have been extensively studied in the past few years. Arguably their most significant impact has been in the area of computer vision, where great advances have been made in challenges such as plausible image generation, image-to-image translation, facial attribute manipulation and similar domains. Despite the significant successes achieved to date, applying GANs to real-world problems still poses significant challenges, three of which we focus on here. These are: (1) the generation of high quality images, (2) diversity of image generation, and (3) stable training. Focusing on the degree to which popular GAN technologies have made progress against these challenges, we provide a detailed review of the state of the art in GAN-related research in the published scientific literature. We further structure this review through a convenient taxonomy we have adopted based on variations in GAN architectures and loss functions. While several reviews for GANs have been presented to date, none have considered the status of this field based on their progress towards addressing practical challenges relevant to computer vision. Accordingly, we review and critically discuss the most popular architecture-variant and loss-variant GANs for tackling these challenges. Our objective is to provide an overview as well as a critical analysis of the status of GAN research in terms of relevant progress towards important computer vision application requirements. As we do this we also discuss the most compelling applications in computer vision in which GANs have demonstrated considerable success along with some suggestions for future research directions. Code related to GAN-variants studied in this work is summarized on https://github.com/sheqi/GAN_Review.
While existing work in robust deep learning has focused on small pixel-level $\ell_p$ norm-based perturbations, this may not account for perturbations encountered in several real-world settings. In many such cases, although test data might not be available, broad specifications about the types of perturbations (such as an unknown degree of rotation) may be known. We consider a setup where robustness is expected over an unseen test domain that is not i.i.d. but deviates from the training domain. While this deviation may not be exactly known, its broad characterization is specified a priori, in terms of attributes. We propose an adversarial training approach which learns to generate new samples so as to maximize exposure of the classifier to the attributes-space, without having access to the data from the test domain. Our adversarial training solves a min-max optimization problem, with the inner maximization generating adversarial perturbations, and the outer minimization finding model parameters by optimizing the loss on adversarial perturbations generated from the inner maximization. We demonstrate the applicability of our approach on three types of naturally occurring perturbations -- object-related shifts, geometric transformations, and common image corruptions. Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations. We demonstrate the usefulness of the proposed approach by showing the robustness gains of deep neural networks trained using our adversarial training on MNIST, CIFAR-10, and a new variant of the CLEVR dataset. Modern neural network training relies heavily on data augmentation for improved generalization. After the initial success of label-preserving augmentations, there has been a recent surge of interest in label-perturbing approaches, which combine features and labels across training samples to smooth the learned decision surface.
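A label-perturbing augmentation of the kind described above (combining features and labels across training samples) can be sketched in NumPy. This is a mixup-style interpolation used as one concrete instance; the function name, the Beta parameter, and the toy data are illustrative assumptions, not taken from any of the papers summarized here:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Mix two samples and their one-hot labels with a Beta-distributed
    coefficient, as in mixup-style label-perturbing augmentation."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing coefficient in [0, 1]
    x = lam * x1 + (1.0 - lam) * x2       # blend the features
    y = lam * y1 + (1.0 - lam) * y2       # blend the labels the same way
    return x, y

# Example: two toy 4-feature samples with one-hot labels over 3 classes.
x_a, y_a = np.ones(4), np.array([1.0, 0.0, 0.0])
x_b, y_b = np.zeros(4), np.array([0.0, 1.0, 0.0])
x_mix, y_mix = mixup(x_a, y_a, x_b, y_b)
assert np.isclose(y_mix.sum(), 1.0)  # mixed label is still a distribution
```

Because both features and labels are interpolated with the same coefficient, the learned decision surface is encouraged to behave linearly between training points, which is the smoothing effect the text refers to.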
In this paper, we propose a new augmentation method that leverages the first and second moments extracted and re-injected by feature normalization. We replace the moments of the learned features of one training image by those of another, and also interpolate the target labels. As our approach is fast, operates entirely in feature space, and mixes different signals than prior methods, one can effectively combine it with existing augmentation methods. We demonstrate its efficacy across benchmark data sets in computer vision, speech, and natural language processing, where it consistently improves the generalization performance of highly competitive baseline networks. Despite much success, deep learning generally does not perform well with small labeled training sets. In these scenarios, data augmentation has shown much promise in alleviating the need for more labeled data, but it so far has mostly been applied in supervised settings and achieved limited gains. In this work, we propose to apply data augmentation to unlabeled data in a semi-supervised learning setting. Our method, named Unsupervised Data Augmentation or UDA, encourages the model predictions to be consistent between an unlabeled example and an augmented unlabeled example. Unlike previous methods that use random noise such as Gaussian noise or dropout noise, UDA has a small twist in that it makes use of harder and more realistic noise generated by state-of-the-art data augmentation methods. This small twist leads to substantial improvements on six language tasks and three vision tasks even when the labeled set is extremely small. For example, on the IMDb text classification dataset, with only 20 labeled examples, UDA achieves an error rate of 4.20, outperforming the state-of-the-art model trained on 25,000 labeled examples. 
On standard semi-supervised learning benchmarks CIFAR-10 and SVHN, UDA outperforms all previous approaches and achieves an error rate of 2.7% on CIFAR-10 with only 4,000 examples and an error rate of 2.85% on SVHN with only 250 examples, nearly matching the performance of models trained on the full sets which are one or two orders of magnitude larger. UDA also works well on large-scale datasets such as ImageNet. When trained with 10% of the labeled set, UDA improves the top-1/top-5 accuracy from 55.1/77.3% to 68.7/88.5%. For the full ImageNet with 1.3M extra unlabeled data, UDA further pushes the performance from 78.3/94.4% to 79.0/94.5%. Generative adversarial networks (GANs) have been extensively studied in the past few years. Arguably the most revolutionary techniques are in the area of computer vision, such as plausible image generation, image-to-image translation, facial attribute manipulation and similar domains. Despite the significant success achieved in the computer vision field, applying GANs to real-world problems still faces three main challenges: (1) high quality image generation; (2) diverse image generation; and (3) stable training. Considering the numerous GAN-related research works in the literature, we provide a study on the architecture variants and loss variants which have been proposed to handle these three challenges from two perspectives. We propose loss-variant and architecture-variant classifications for the most popular GANs, and discuss potential improvements focusing on these two aspects. While several reviews for GANs have been presented, there is no work focusing on the review of GAN variants based on their handling of the challenges mentioned above. In this paper, we review and critically discuss 7 architecture-variant GANs and 9 loss-variant GANs for remedying those three challenges. The objective of this review is to provide insight into where current GAN research is focused in terms of performance improvement.
Code related to GAN-variants studied in this work is summarized on https://github.com/sheqi/GAN_Review. Biomedical image segmentation is an important task in many medical applications. Segmentation methods based on convolutional neural networks attain state-of-the-art accuracy; however, they typically rely on supervised training with large labeled datasets. Labeling datasets of medical images requires significant expertise and time, and is infeasible at large scales. To tackle the lack of labeled data, researchers use techniques such as hand-engineered preprocessing steps, hand-tuned architectures, and data augmentation. However, these techniques involve costly engineering efforts, and are typically dataset-specific. We present an automated data augmentation method for medical images. We demonstrate our method on the task of segmenting magnetic resonance imaging (MRI) brain scans, focusing on the one-shot segmentation scenario -- a practical challenge in many medical applications. Our method requires only a single segmented scan, and leverages other unlabeled scans in a semi-supervised approach. We learn a model of transforms from the images, and use the model along with the labeled example to synthesize additional labeled training examples for supervised segmentation. Each transform is comprised of a spatial deformation field and an intensity change, enabling the synthesis of complex effects such as variations in anatomy and image acquisition procedures. Augmenting the training of a supervised segmenter with these new examples provides significant improvements over state-of-the-art methods for one-shot biomedical image segmentation. Our code is available at https://github.com/xamyzhao/brainstorm. Semantic segmentation is one of the basic topics in computer vision; it aims to assign semantic labels to every pixel of an image. Unbalanced semantic label distribution could have a negative influence on segmentation accuracy.
In this paper, we investigate using a data augmentation approach to balance the semantic label distribution in order to improve segmentation performance. We propose using generative adversarial networks (GANs) to generate realistic images for improving the performance of semantic segmentation networks. Experimental results show that the proposed method can not only improve segmentation performance on those classes with low accuracy, but also obtain a 1.3% to 2.1% increase in average segmentation accuracy. It shows that this augmentation method can boost accuracy and is easily applicable to other segmentation models. Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs to produce adversary-selected results. Different attack strategies have been proposed to generate adversarial examples, but how to produce them with high perceptual quality and more efficiently requires more research effort. In this paper, we propose AdvGAN to generate adversarial examples with generative adversarial networks (GANs), which can learn and approximate the distribution of original instances. For AdvGAN, once the generator is trained, it can generate adversarial perturbations efficiently for any instance, so as to potentially accelerate adversarial training as defenses. We apply AdvGAN in both semi-whitebox and black-box attack settings. In semi-whitebox attacks, there is no need to access the original target model after the generator is trained, in contrast to traditional white-box attacks. In black-box attacks, we dynamically train a distilled model for the black-box model and optimize the generator accordingly. Adversarial examples generated by AdvGAN on different target models have a high attack success rate under state-of-the-art defenses compared to other attacks.
Our attack placed first with 92.76% accuracy on a public MNIST black-box attack challenge.
### Chain Rule & Integration by Substitution

Many ways problem

# Implicit circles

## Problem

Find the gradient of the curve $x^2+y^2=4$ at the following three points. What is the derivative, $\dfrac{dy}{dx}$, at a general point on the curve?
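For the general point, a minimal sketch of the implicit differentiation (a standard approach, not part of the original resource):

```latex
\frac{d}{dx}\bigl(x^2+y^2\bigr)=\frac{d}{dx}(4)
\;\Longrightarrow\; 2x+2y\,\frac{dy}{dx}=0
\;\Longrightarrow\; \frac{dy}{dx}=-\frac{x}{y}\qquad(y\neq 0).
```

So at any point on the circle the gradient is $-x/y$, which is undefined where $y=0$, i.e. where the tangent is vertical.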
# Revision history

I assume you mean that you've decided that the frame in question isn't really your protocol? In that case, yes, you should return 0. An example from README.dissector:

```c
static int
dissect_dnp3_tcp(tvbuff_t *tvb, packet_info *pinfo, proto_tree *tree, void *data)
{
```
## Thursday, April 14, 2016

### There can't be any bursting, ZIRP-driven "bubble in everything"

Even though I am not really interested in money too much, I would probably agree that Peter Thiel is a role model of mine. He is an unusually intelligent venture capitalist. He's co-founded important companies like PayPal, made the right investments such as the first big cash injection to Facebook, and has done real work in hedge funds since 2002, Clarium and Founders Fund. He has always (well, a few times) beaten me in chess and I think that he always will. He's considered a contrarian investor whose beliefs often differ from the overwhelming majority by 180° and if he makes a loss, it's because he's too brilliant. I have tons of understanding for the smart people who sometimes differ by 180°, you surely believe me. But I still think that his 100-second Bloomberg interview about the "bubble in everything" (hat tip: Willie Soon) isn't just contrarian, it's irrational. Thiel has expressed some thoughts about the financial markets that we actually hear quite often but they contradict the thoughts and interpretations of mainstream professional bankers such as Super Mario Draghi or Janet Yellen. I am absolutely convinced that Draghi and pals understand what's actually going on with these variables much better than Thiel and others (including the constantly crash-predicting folks via Newsmax and Donald Trump with his promises of collapsed bubbles within 2 months). Meanwhile, in the real world, the U.S. unemployment dropped to the lowest level since my birth, a detail that none of the doomsayers bothered to predict. Thiel said that he saw no bubble specifically in the tech sector. It's an attitude he has expressed many times before, and I agree with it – also because the technological stocks' prices haven't even surpassed the prices before the 2000 crash yet – a reason why we might say that we're still in some kind of a long-term bear market since 2000.
But otherwise, we were told that quantitative easing and near-zero interest rates have created a "peculiar bubble in nearly everything". Thiel also said:

> Startup tech stocks may be overvalued, but so are public equities, so are houses, so are government bonds. Silicon Valley is quite far from it. If the bubble is in cash, illiquid startup investments may be a place to hide.

What? :-) What is it exactly supposed to mean that there is a "bubble in everything"? The set of prices contains some information but it's really just some "relative prices" that make physical sense. If you study atomic and molecular physics, you may talk about the size of many atoms or molecules. Those are ten to the minus ten meters etc. and you may keep on repeating: the sizes are so small. But at some moment, you should calibrate your expectations or choose sensible units (1 angstrom, for example), and everything is normal. Prices may also be "absolute" prices, but that's just a different word for the relative price of something and the U.S. dollar (or the currency in which the asset is normally denominated). The value still depends on the choice of the benchmark in the denominator. Now, imagine that you locate all "positive" assets in the world that belong to someone and that are liquid enough to be sold within 1 month or less, assuming that the world won't be totally different than it is now in one month. Fine. You get a few hundred trillion dollars. The amount may also be translated to euros according to the current exchange rate, or something like that; I make the comment to point out that there is nothing "qualitatively" special about the U.S. dollar. Let's subtract all the debt that individuals have, and eliminate all people who end up below zero (in red numbers).
If you have decided which assets have made it to the list, and who are the "positive worth people", you can just express their wealth as a percentage of T, the total value of the world (its owned part), which is several hundred trillion dollars. The average person on Earth or its orbit (including those with a net debt) owns 0.14 parts per billion of T simply because there are 7+ billion people. Almost all TRF readers own more than that, at least by an order of magnitude. Now, when the markets evolve and fluctuate, your percentage of T is changing. But the sum of all the percentages is always 100 percent – the total positive worth is T. It's true by definition. With this attitude, all changes of the wealth or the value of a portfolio are relative. Someone gets richer, someone gets poorer. To say that everything – old stocks, startup stocks, bonds, real estate etc. – is overvalued means either nothing; or it means that the denominator, the U.S. dollar, is "underpriced". But the dollar can't become "underpriced" as a result of loose monetary policies – near-zero or negative interest rates or quantitative easing. Quite the contrary. All these "easy" policies are basically designed to reduce the value of the currencies. (In Czechia, we have ZIRP, NIRP is unpopular with the bankers, but instead of QE, we live with direct forex interventions keeping EUR/CZK at 27.) We may disagree with ZIRP and NIRP and QE (and surely with the forex interventions) for some moral or principled reasons. But it's almost certainly true that the inflation rate would be lower – or deflation would be deeper – if those policies hadn't taken place. When it comes to the overall money supply, the policies have acted just like Milton Friedman's helicopter. When a central bank buys up bonds, it artificially increases the demand for them, so the price of bonds goes up, and the corresponding interest rate up to the maturity (in the future) goes down.
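The 0.14-parts-per-billion figure above is just 1 divided by 7 billion, and the bond remark is the usual inverse price/yield relation; a quick Python sanity check (the 1-year zero-coupon bond and its prices are hypothetical round numbers, not taken from the post):

```python
# Average share of the total world wealth T per person, in parts per
# billion, assuming the post's round figure of 7 billion people.
people = 7e9
share_ppb = (1 / people) * 1e9
print(round(share_ppb, 2))  # prints 0.14

# Bond prices and interest rates move inversely: a hypothetical 1-year
# zero-coupon bond repaying a face value of 100 at maturity.
def ytm(price, face=100.0):
    """Yield to maturity of a 1-year zero-coupon bond."""
    return face / price - 1.0

# Central-bank buying pushes the price up (95 -> 98)...
assert ytm(98.0) < ytm(95.0)  # ...and the implied interest rate down
```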
The holders of the bonds get some extra cash because they could sell the bond for a higher price than they previously expected. The future borrowers can get easy money because the interest rates have been pushed down. All these things have the effect of making the U.S. dollar less valuable; at the same moment, they are making either particular things – bonds, directly – or all things more valuable (relatively to the U.S. dollar or the currency controlled by the central bank). At the end, it's really just the relative prices that have a physical meaning. Any step in quantitative easing is a step that makes the existing holders of cash poorer than they would be otherwise; the existing holders of bonds are wealthier than they would be otherwise. And it is easier for nations/people to borrow after the quantitative easing move because the interest rates were lowered. And those who are in debt don't see a change because they have agreed on the exact sum of money they must return. The only difference is that this sum may look bigger or smaller in real terms. Which way does it go? Well, the previous paragraph argued that some new cash was poured by the QE everywhere; so this increases the inflation and inflation expectations. Therefore, the loose monetary policies make it relatively easier for the existing holders of the debt to repay the debt because their nominal incomes or profits are expected to go up a little bit more quickly (or go down less slowly) because of the QE and because of the inflation that the QE encourages. As you can see, the loose policies seemingly turn everyone into a winner. How is it possible? Shouldn't the sum of the wealth figures give you T or 100%? Yes, it still gives you 100%. This fact shows that much of the "increasing wealth" in the previous paragraph is fictitious. It is fictitious because the quantitative easing increases inflation – or reduces deflation (and the expectations).
If the QE or other loose monetary policies hadn't been adopted, the deflation would probably be deeper, and that would be better for the holders of cash. Deflation is just great news for those who hold cash. These are the real people (or companies) who are being harmed by the loose monetary policies or the QE. But they're still doing OK – at least they're not in red numbers and as long as the inflation stays near zero, they may believe that their real interest rates stay positive (they were surely negative in several years around 2008, because of a shock increase of food prices and other reasons). And you might argue that it's "fair" to "harm" them in this way – the inflation was promised to be around 2%, so the cash holders shouldn't be allowed to be better off than these plans. But the elevated (and perhaps really fast) inflation rate that ultimately arises from the loose monetary policies is the only downside that comes out of these policies. If the central banks continue in ZIRP or NIRP and QE programs, we will unavoidably see the rebirth of inflation at some moment. Maybe other things aside from the central banks' activity will contribute to that. But it is inevitable that the inflation will return at some moment if the banks are working hard to pour cash everywhere (and if they realize that it's easy and relatively safe to do so more vigorously). Inflation is the only possible "punishment" for the loose (and possibly irresponsible) monetary policies. So I just think that Thiel's and others' comments about a "bubble in everything" caused by the loose policies is a logical oxymoron. Well, you may call it a "bubble of everything" in the sense that the nominal price of everything is being pushed up, and the value of cash is correspondingly driven down.
But this result – a positive contribution to the inflation rate, the rate at which the cash is losing its value relatively to everything else (nominally relatively to the baskets of products, but at some level, the products are correlated with the stocks of companies that produce them and other things) – is indeed the very goal of the loose monetary policies. The central banks just decided that they want to have a "visibly positive" inflation – either because of some flawed moral imperatives, or because they have promised such an inflation rate and it's wrong to breach promises, especially because such deviations may bring (additional?) chaos to people's and companies' financial planning. And if a banker fails to fulfill her promises, she loses her credibility. And her credibility is a great tool she doesn't want to lose because it's a virtue allowing her to reshape the markets by opening her mouth. However, even if you use the word "bubble" for the increased nominal prices of all things due to the loose monetary policies, it is not a bubble that should be expected to "burst". So "the bubble" is a potentially subtle word that has several aspects and people – including Thiel – don't seem to disentangle them carefully. A bubble may mean that the air is being pumped somewhere. But if the air is pumped somewhere, it does not mean that the probability that this object bursts increases. For example, lots of oxygen was suddenly pumped to the atmosphere of Earth 2.3 billion years ago and you could have called it the Oxygen Bubble. But the oxygen is still with us. ;-) It's simply not true that everything that happens must also "unhappen". In many cases, it not only doesn't; it can't. The idea that the "bubble" caused by the loose monetary policies will "burst" is exactly equivalent to the opinion that the pouring of the cash over the economy is not only reversible, but the reversal is unavoidable. But it's simply not true. 
These QE-like operations are not reversible, or at least, they don't get reversed without the agreement by the central banks that do such things. In a very broad monetary sense, the QE activities are equivalent to Milton Friedman's helicopters. And if helicopters drop lots of cash over a city, the people pick the banknotes, hide them or spend them or eat them, and they will just not return them. If you don't believe me, try to test this question experimentally (with helicopters above Pilsen 4). Because of the increased money supply, the value of the cash goes down and inflation goes up. If the helicopter drop is a one-time event, the increase of prices following from that is a one-time event, too, and one might imagine that the prices could get reversed. But if a central bank buys over \$50 billion in assets every month, month after month, it's a fixed and unabating contribution to the trend (and to the inflation rate). It doesn't have any "unavoidable" mirror image in the future. People who have collected the banknotes from the helicopters just won't return them without a good reason. And the people who have sold bonds to the central bank in the asset purchase program won't be able to buy the bonds again for a lower price if the QE continues. The quantitative easing is slightly more reversible than the helicopter drops for one reason: the central banks may sell all the bonds they previously bought. This is what makes the asset purchases "safer". If one decides that it was a mistake, the effects may be basically undone. But the central bank has to agree with that. It has to actively reverse this policy. Why would it be doing such a thing? Only when something bad comes out of it and the return of the inflation is the only possible "stopper". After 2008, Thiel's hedge fund was making some pessimistic bets on the stock markets which was an unhappy choice in 2009 and 2010.
I was feeling "rationally" almost sure that the things and stock indices would return from the insanely low values of 2008 and early 2009 – both because capitalism works and the talk about the new Great Depression was greatly exaggerated; and because it's been obvious for very many years that central banks (and governments) were willing to make the monetary conditions as easy as possible. But of course, I was afraid and frustrated enough by some losses caused by unlucky purchases of stock funds in mid 2008. ;-) But if the central banks seem to be eager to continue in the loose policies, and I think it's basically the case, they simply will. The only thing that will stop them is inflation – perhaps when the inflation rates return above 2%. But because this hasn't been the case in recent years, I think it's absolutely accurate to say that aside from some possible "microscopic distortions" of the markets, the quantitative easing programs have so far led to no macroeconomic negative consequences. OK, how long will the central banks continue in these policies? And, almost equivalently, when will the inflation return? I don't know. The inflation could return quickly e.g. if OPEC and Russia agreed to substantially cut the oil production. But if you assume that the inflation rates remain low, well below 2% or so, one should realize that: The ability of a central bank to weaken its own currency is basically unlimited. This is an extremely simple observation and it seems to me that many people including Thiel fail to realize it. A central bank can print the damn banknotes, or mint the damn coins. And it can sell them by buying government bonds – but also corporate debt/bonds (ECB is already starting with that), possibly stocks, and even real estate and other things. Maybe the central bank even has the right to make the helicopter drops literally – at least with some clever trick. 
As long as the bulk of the economy uses these banknotes and coins, be sure that the central banks may keep on weakening the currency they control and positively contributing to the inflation rate. As long as the central banks are allowed to buy basically unlimited types of things, you simply can't expect the central banks to "run out of the gunpowder". They have an infinite amount of that. On the other side, if they needed to strengthen their own currency (or protect it against drops), they also have some gunpowder, but they only have a finite amount of this kind of gunpowder – basically the reserves. Russia was reminded about this finiteness in 2015. The idea that the central banks are "incapable" of achieving the positive inflation at some moment seems obviously logically flawed to me. The question is When, not If. Finally, I want to comment on Thiel's remark that "illiquid startup investments may be a place to hide". This sentence sounds weird to me, too. These startups' being illiquid basically means that their value isn't quantified too well, so you don't really know how much you need to pay for them and how much you may get for them. There is no daily trading with their stocks etc. So if someone has access to these very special types of investments – and Thiel is lucky to be one of the people with this access – good for him and he can hide his cash over there. On the other hand, he can also hide the incorrectness of any recommendation behind the startups' illiquidity, too. No one will know what the price actually was etc. so it will be impossible to determine whether it was a good investment. The only qualitative aspect by which these illiquid assets differ from the other ones is that they are illiquid, i.e. their price isn't too well-defined. If you try to sell them, especially too quickly, you must be prepared to lose a big percentage of the value. There are not too many good buyers. Maybe the illiquid assets haven't made it to the quantity T above at all. 
But this doesn't remove them from the domain of validity of mathematics. Despite the uncertainty about their value, their price expressed in U.S. dollars obviously goes up when the value of the U.S. dollar is being reduced by the loose monetary policy. There's no way that a type of investment could avoid this logic or trend. These secretive assets may be hard to access for most investors – aside from famous VCs like Thiel. But this inaccessibility is not correlated with the quantitative easing. Imagine the helicopter drops. When the billions of banknotes land on Manhattan, the value of the U.S. dollar goes down. So the price of everything expressed in U.S. dollars – whether it's liquid or illiquid, accurately quantified or not – goes up. The uncertainty $\Delta X$ is something entirely different than the quantity $X$ itself. Only 0.1% of investors may be capable of reasonable purchase of these illiquid assets, but this percentage is the same with ZIRP/QE or without it. So ZIRP just can't be the true cause that makes them a better investment relatively to other startups etc. For all these reasons, I believe Thiel and many other people overestimate the actual influence of the loose monetary policies on the economy. To some extent, it's just a choice of time-dependent units (dollars etc.) of the compensations, investments, and wealth. If one says that "all prices are increased", it basically means nothing for the real economy. The bulk of the economic events occurs outside the headquarters of the central banks.
{}
# A result about Fermat's numbers. Is my proof correct? Is that result useful? Can we generalize that result?

Let $$n$$ be an integer and $$F_n:=2^{2^n}+1$$. $$n=2,3,4:F_n\equiv17\pmod{30}$$ $$\mathbf{Result:}\;n>1:F_n\equiv17\pmod{30}$$ $$\mathbf{Proof:}$$ Suppose $$F_n - 1\equiv16\pmod{30}$$. Then $$2^{2^{n+1}}=\left(2^{2^n}\right)^2\equiv16^2\equiv16\pmod{30}$$ and $$F_{n+1}\equiv17\pmod{30}$$. I found that result using primoradic (see stub OEIS: https://oeis.org/wiki/Primorial_numeral_system). That's the way I found that result, which I didn't know before, and I wonder if there are other results with $$\;210,\;2310,\;30030,\;\ldots$$ (primorials). P.S.: In primoradic, using Charles-Ange Laisant's notations for factoradic (with $$A=10, B=11, C=12, D=13,\ldots$$) $$17=(000000.221)$$ $$257=(000011.221)$$ $$65.537=(2.240.221)$$ $$F_5=(J.5F1.721.221)$$ Perhaps someone could give $$F_6$$, $$F_7$$ ...in primoradic, just for fun. Fill in the holes: $$F_6=(........0.221)$$ $$F_7=(......1:1:...)$$ $$F_8=(......4:0:...)$$ $$F_9=(......2:1:...)$$ $$F_{10}=(....0:0:...)$$ $$F_{11}=(......1:...)$$ $$F_{12}=(......0:...)$$ $$F_{13}=(......1:...)$$ $$2^{16384}+1=(......:0:...)$$ I have verified each of these results with my spreadsheet. Maybe $$F_n\equiv17\pmod{210}$$ if n is even and $$47$$ if n is odd? A result appears clearly with $$2310$$ too. We need a proof. Perhaps the question "I'm trying to generalize some simple results about $2^n$. It's useful to write them in primoradic (see stub OEIS)." will be useful.

Primorials play no role. Rather, fixed points of (quadratic) polynomial iterations are key. Notice $$\ \color{#0a0}{F_{n+1} = (F_n-1)^2+1} = F_n^2 -2F_n + 2\$$ hence $$\!\bmod 30\!:\ F_{n+1}\equiv F_n\iff 0 \equiv F_n^2-3F_n + 2 \equiv (F_n-1)(F_n-2).\,$$ Using CRT to combine the roots $$\,F_n\equiv 1,2\,$$ mod $$\,2,3,5\,$$ (as here) yields $$\,2^3\,$$ roots $$\,1,2,7,11,16,\color{#c00}{17},22,26 \pmod{\!30}$$. 
So, $$\!\bmod 30,\,$$ any sequence $$\,F_n\,$$ satisfying said $$\rm\color{#0a0}{recurrence}$$ remains constant afterwards once it takes the value of one of those roots, e.g. your Fermat numbers, where $$\,F_2\equiv \color{#c00}{17}\pmod{\!30}.$$ Remark The same method works to solve for (modular) fixed points of any polynomial iteration, i.e. if the $$\,a_i$$ satisfy a recurrence $$\,a_{n+1} = f(a_n)\,$$ for a polynomial $$\,f(x),\,$$ then $$\,a_n\,$$ is a fixed point, i.e. $$\, a_{n+1} = a_n\iff f(a_n) = a_n\iff a_n\,$$ is a root of $$f(x)-x,\,$$ so finding fixed points of polynomial iterations reduces to finding roots of polynomials. In your case note that subtracting $$1$$ from the roots shows they are roots of $$\,x^2\equiv x,\,$$ i.e. idempotents, so are $$\equiv 0$$ or $$1$$ for each prime modulus. Idempotents are well-studied since they play crucial roles in factorization of rings (e.g. they essentially govern the ring factorizations given by CRT = Chinese Remainder Theorem). • @Stéphane It's easy $\,\color{#0a0}{F_{n+1}-1 = (F_n-1)^2}\,$ means $\,2^{\large 2^{\Large n+1}}\!\! = (2^{\large 2^{\Large n}})^{\large 2},\,$ true by exponent laws. $\ \$ May 9, 2021 at 10:49 • @Stéphane Notice my location is "Shoulders of Giants". Luckily, the recursion is well-founded (on Gauss, Dedekind, Noether, Krull, etc). May 9, 2021 at 11:01 • Plus by $F_n=2M_{M_n}+3$ it's equivalent to $M_{M_n}\equiv 7 \pmod {30}$ May 9, 2021 at 11:22 • Of course it does take a little bit of work to derive the formula $F_n=2^{2^n}+1$ from $F_n=F_1F_2\cdots F_{n-1}+2$, the basic definition of the Fermat numbers. – bof May 9, 2021 at 12:02 • @bof Did you miswrite, or do you really mean to claim that $\,F_n=F_1F_2\cdots F_{n-1}+2\,$ is the "basic definition" of the Fermat numbers? May 9, 2021 at 12:15 Of course $$F_n=F_0F_1\cdots F_{n-1}+2\equiv2\pmod{F_0F_1}$$ for $$n\gt1$$, and $$F_n$$ is odd, so there you have it. 
Likewise, $$F_n\equiv2\pmod{17}$$ for $$n\gt2$$, $$F_n\equiv2\pmod{257}$$ for $$n\gt3$$, etc. This is why the Fermat numbers are pairwise relatively prime. • But that formula is so specific to Fermat numbers that it doesn't generalize to other recurrences, whereas the fixed-point view does, which is why I chose that much more general view in my answer. May 9, 2021 at 8:15 • No need to apologize - this method is nice and well worth mention. The point of my comment was merely to spark readers to think about the generality of various approaches. May 9, 2021 at 10:09
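Both the fixed-point claim and the conjectured pattern mod 210 are quick to check numerically. A small Python script (my own, not from the thread) using the three-argument `pow` for fast modular exponentiation:

```python
# Fixed points of F -> (F-1)^2 + 1 mod 30 are the roots of (F-1)(F-2) mod 30.
roots = [r for r in range(30) if (r - 1) * (r - 2) % 30 == 0]
assert roots == [1, 2, 7, 11, 16, 17, 22, 26]

def fermat_mod(n, m):
    # F_n = 2^(2^n) + 1; pow(2, 2**n, m) does modular exponentiation,
    # so this stays fast even though 2**n is huge.
    return (pow(2, 2 ** n, m) + 1) % m

for n in range(2, 20):
    assert fermat_mod(n, 30) == 17
    # the conjectured pattern mod 210: 17 for even n, 47 for odd n
    assert fermat_mod(n, 210) == (17 if n % 2 == 0 else 47)
```

The mod-210 pattern in fact follows from the mod-30 result plus $F_n \bmod 7$: the order of $2$ mod $7$ is $3$, and $2^n \bmod 3$ alternates between $1$ (even $n$) and $2$ (odd $n$), so $F_n \equiv 3$ or $5 \pmod 7$; combining with $F_n \equiv 17 \pmod{30}$ by CRT gives $17$ and $47$ respectively.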
{}
# Spatio-temporal perturbations as a mechanism of cell repolarization

Published in Submitted, 2021

Abstract: The intrinsic polarity of migrating cells is regulated by spatial distributions of protein activity. Those proteins (small GTPases, such as Rac and Rho) redistribute in response to stimuli, determining the cell front and back. Reaction-diffusion equations with mass conservation and positive feedback have been used to explain initial polarization of a cell. However, the sensitivity of a polar cell to a reversal stimulus has not yet been fully understood. We carry out full PDE bifurcation analysis of two polarity models to investigate routes to repolarization: (1) a single-GTPase ("wave-pinning") model and (2) a mutually antagonistic Rac-Rho model. We find distinct routes to reversal in (1) vs (2). We show numerical simulations of full PDE solutions for the RD equations, demonstrating agreement of predictions with bifurcation results. Finally, simulations of the polarity models in deforming 1D model cells show behaviour that is consistent with biological experiments.
{}
# Summation of Arithmetic Progression Modulo Series

Problem

Aladin was walking down the path one day when he found the strangest thing: N empty boxes right next to a weird alien machine. After a bit of fumbling around he got the machine to do something. The machine accepts 4 integers $L$, $R$, $A$ and $B$. After that, hitting the big red glowing button labeled "NE DIRAJ"1 causes the machine to go crazy and follow this routine:

• It sets the number of stones in the box labeled L to A modulo B.
• It proceeds to fly to the box labeled L + 1, and sets the number of stones there to (2 ⋅ A) mod B.
• It proceeds to fly to the box labeled L + 2, and sets the number of stones there to (3 ⋅ A) mod B.
• Generally, it visits each box labeled between L and R, and sets the number of stones there to ((X − L + 1) ⋅ A) mod B, where X is the box label.
• After it visits the box labeled R, it settles down for further instructions.

During the game Aladin wonders what is the total number of stones in some range of boxes.

Input

The first line contains two integers: the number of boxes $N$ ($1 \le N \le 1000000000$) and the number of queries $Q$ ($1 \le Q \le 50000$). The next $Q$ lines contain information about the simulation. If a line starts with 1, then it follows the format "1 L R A B" ($1 \le L \le R \le N$, $1 \le A, B \le 1000000$), meaning that Aladin keyed the numbers $L$, $R$, $A$ and $B$ into the device and allowed it to do its job. If a line starts with 2, then it follows the format "2 L R" ($1 \le L \le R \le N$), meaning that Aladin wonders how many stones in total are in the boxes labeled $L$ to $R$ (inclusive).

Output

For each query beginning with 2, output the answer to that particular query. Queries should be processed in the order they are given in the input. 
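Each type-1 operation writes the arithmetic progression $(kA) \bmod B$, $k = 1, 2, \ldots$ across its range, and since $kA \bmod B = kA - B\lfloor kA/B\rfloor$, prefix sums of the stones reduce to the classic "floor-sum" recurrence, computable in logarithmic time instead of looping box by box. A Python sketch of that observation (the helper names are mine, not part of the problem):

```python
def floor_sum(n, m, a, b):
    # sum of floor((a*i + b) / m) for i = 0 .. n-1, in O(log) time
    # (the Euclidean-like recurrence common in competitive programming)
    total = 0
    while True:
        if a >= m:
            total += (n - 1) * n // 2 * (a // m)
            a %= m
        if b >= m:
            total += n * (b // m)
            b %= m
        y_max = a * n + b
        if y_max < m:
            return total
        # count the remaining lattice points with the roles of a and m swapped
        n, b, a, m = y_max // m, y_max % m, m, a

def prefix_stones(n, A, B):
    # sum of (k*A mod B) for k = 1..n, via k*A mod B = k*A - B*floor(k*A/B)
    return A * n * (n + 1) // 2 - B * floor_sum(n + 1, B, A, 0)

def range_stones(L, R, A, B, lo, hi):
    # stones in boxes lo..hi, all of which were written by operation (L, R, A, B)
    return prefix_stones(hi - L + 1, A, B) - prefix_stones(lo - L, A, B)
```

A full solution still has to remember, for every box, the most recent operation that wrote it (e.g. an ordered map of disjoint intervals), since each new operation overwrites only the boxes it covers. Note that the code posted below keeps only the single most recent operation, which is a plausible cause of its test #4 failure; the per-box values also reach about $10^{15}$, so 64-bit arithmetic is needed.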
Sample Input 1
6 3
2 1 6
1 1 5 1 2
2 1 6

Sample Output 1
0
3

Sample Input 2
4 5
1 1 4 3 4
2 1 1
2 2 2
2 3 3
2 4 4

Sample Output 2
3
2
1
0

Sample Input 3
4 4
1 1 4 7 9
2 1 4
1 1 4 1 1
2 1 4

Sample Output 3
16
0

#include<iostream>
#include<fstream>
#include<string>

using namespace std;

long countStones(long, long, long, long, long, long);

int main(int __argc, char *__argv[])
{
    std::string filepath;
    long N, Q, *boxes;

    if(__argc == 2) {
        filepath = std::string(__argv[1]);
    }
    std::ifstream* infile = !filepath.empty()
        ? new std::ifstream(filepath.c_str())
        : static_cast<std::ifstream*>(&std::cin);

    *infile >> N >> Q;
    long opt, L1 = 0, R1 = 0, L2 = 0, R2 = 0, A, B;
    for(long i = 1; i <= Q; i++) {
        *infile >> opt;
        if(opt == 1) {
            *infile >> L1 >> R1 >> A >> B;
        }
        if(opt == 2) {
            *infile >> L2 >> R2;
            cout << countStones(L1, R1, L2, R2, A, B) << endl;
        }
    }
    return 0;
}

long countStones(long L1, long R1, long L2, long R2, long A, long B)
{
    long ret = 0, start = L1, end = R1;
    if(R2 < L1) return 0;
    if(L2 > R1) return 0;
    if (L2 > L1) start = L2;
    if (R2 < R1) end = R2;
    for(long i = start; i <= end; i++) {
        ret += ((i - L1 + 1) * A) % B;
    }
    return ret;
}

The code works fine with Sample Cases #1, #2, and #3. But when I upload the code to the website (open.kattis.com) it fails on case #4, which is unknown to me. I couldn't find a logical error in my code. Also, when I had written the code so that it stored the modulo calculations for each box (in an array), it did pass case #4 but failed #5 (Memory Exceeded). So I changed the code to NOT STORE the values in an array and just calculate on the fly and add to the count of stones. I need to modify the code so that it passes Test #4, whose input values are unknown to me. That is, to find the logical error in the code.

Don't use underscore as a prefix. In most situations this is a reserved identifier. Definitely never use double underscore in an identifier. This is reserved in all situations.

__argc   /// This is so wrong. 
It will break. Read a couple of other C++ reviews. They all explain why this is a bad idea:

using namespace std;

This is not old school C. You don't need to declare the variables at the top of the function.

long N, Q, *boxes;

Declare them as close to the point of use as possible. This becomes important when your types have constructors/destructors (as they execute code). But it also makes the code easier to read as the type information is close to the usage of the object.

long  N;
long  Q;
long* boxes;  // The * belongs close to the type (as it is part of the type info).
              // PS you probably should not be using pointers. In modern C++ they
              // are vanishingly rare (as there is no ownership semantics associated
              // with a pointer).

Don't do this.

std::ifstream* infile = !filepath.empty()
                      ? new std::ifstream(filepath.c_str())
                      : static_cast<std::ifstream*>(&std::cin);

Should I call delete on this? Yes if it is a file. No if it is std::cin. So not a good solution. Also std::cin is not a file stream, so casting it is going to do nasty things. You should never cast things (it means you have something wrong with your design (or you are working at a very low level (which most people should not be doing))). But they are both std::istream. So you could do this.

std::ifstream file;
if (!filepath.empty()) {
    file.open(filepath);  // Modern C++ allows std::string here.
}
std::istream& infile = !filepath.empty() ? file : std::cin;

You may want to check that the read works:

*infile >> N >> Q;

Very easily done with:

if (infile >> N >> Q) {
    // Do stuff.
}

Main is special. So it does not need a return code.

return 0;

So if your application never fails (i.e. always returns 0), use this info: a missing return in main is an indication that the application never fails.

• Code still fails at the Test #4 (on kattis.com) with your suggestions. – Kunal B. Jan 9 '15 at 1:23
• @KunalB. That does not surprise me. It's not as if I was looking to fix your problems (that is not what this site is for). 
I did not look at the logic of your code. I was trying to explain to you some of your bad habits so that you would write better code. – Martin York Jan 9 '15 at 17:42 • I am sorry if my words meant something else. But the code update was removed by someone. And the above comment was a comment to readers of original post to see the updated code (which was removed). So my comment's context changed. – Kunal B. Jan 9 '15 at 17:48
{}
zbMATH — the first resource for mathematics

An optimization-based approach for high-order accurate discretization of conservation laws with discontinuous solutions. (English) Zbl 1395.76041

Summary: This work introduces a novel discontinuity-tracking framework for resolving discontinuous solutions of conservation laws with high-order numerical discretizations that support inter-element solution discontinuities, such as discontinuous Galerkin or finite volume methods. The proposed method aims to align inter-element boundaries with discontinuities in the solution by deforming the computational mesh. A discontinuity-aligned mesh ensures the discontinuity is represented through inter-element jumps while smooth basis functions interior to elements are only used to approximate smooth regions of the solution, thereby avoiding Gibbs’ phenomena that create well-known stability issues. Therefore, very coarse high-order discretizations accurately resolve the piecewise smooth solution throughout the domain, provided the discontinuity is tracked. Central to the proposed discontinuity-tracking framework is a discrete PDE-constrained optimization formulation that simultaneously aligns the computational mesh with discontinuities in the solution and solves the discretized conservation law on this mesh. The optimization objective is taken as a combination of the deviation of the finite-dimensional solution from its element-wise average and a mesh distortion metric to simultaneously penalize Gibbs’ phenomena and distorted meshes. It will be shown that our objective function satisfies two critical properties that are required for this discontinuity-tracking framework to be practical: (1) it possesses a local minimum at a discontinuity-aligned mesh and (2) it decreases monotonically to this minimum in a neighborhood of radius approximately $$h / 2$$, whereas other popular discontinuity indicators fail to satisfy the latter. 
Another important contribution of this work is the observation that traditional reduced space PDE-constrained optimization solvers that repeatedly solve the conservation law at various mesh configurations are not viable in this context since severe overshoot and undershoot in the solution, i.e., Gibbs’ phenomena, may make it impossible to solve the discrete conservation law on non-aligned meshes. Therefore, we advocate a gradient-based, full space solver where the mesh and conservation law solution converge to their optimal values simultaneously and therefore never require the solution of the discrete conservation law on a non-aligned mesh. The merit of the proposed method is demonstrated on a number of one- and two-dimensional model problems including the $$L^2$$ projection of discontinuous functions, Burgers’ equation with a discontinuous source term, transonic flow through a nozzle, and supersonic flow around a bluff body. We demonstrate optimal $$\mathcal{O}(h^{p + 1})$$ convergence rates in the $$L^1$$ norm for up to polynomial order $$p = 6$$ and show that accurate solutions can be obtained on extremely coarse meshes.

MSC:
76M10 Finite element methods applied to problems in fluid mechanics
65M60 Finite element, Rayleigh-Ritz and Galerkin methods for initial value and initial-boundary value problems involving PDEs
65K10 Numerical optimization and variational techniques
76J20 Supersonic flows
76H05 Transonic flows
35L65 Hyperbolic conservation laws

Software: pyOpt; SNOPT
{}
# Simple card game to learn OOP

My goal was to get my hands dirty in OOP by designing and using classes and getting started with inheritance and other OOP concepts. I have written a very small code to play a card game called "War". The rules are simple. Each person playing the game is given 1 card. The person with the highest card wins. I am not worrying too much about ties right now. My code does work but I wanted feedback on my OOP usage.

import itertools
import random

class Cards:
    def __init__(self):
        self.values = range(1,14)
        self.ActualCards = []   #Empty List to Append
        for Card in itertools.product(self.suits,self.values):
            self.ActualCards.append(Card)   #Cartesian Product to Create Deck

    def GetRandomCard(self):
        RandomNumber = random.randint(0,51)
        CardToBeReturned = self.ActualCards[RandomNumber]   #Generates Random Card
            return(CardToBeReturned)

class Player:
    def __init__(self,ID,Card):
        self.PlayerID = ID
        self.CardForPlayer = Card

class Game:
    def __init__(self,NameOfGame):
        self.name = NameOfGame

class SimpleWar(Game):
    def __init__(self,NumberOfPlayers):
        self.NumberOfPlayers = NumberOfPlayers
        self.PlayerList = []

    def StartGame(self):
        DeckOfCards = Cards()
        for playerID in range(0,self.NumberOfPlayers):
            CardForPlayer = DeckOfCards.GetRandomCard()   #Deal Card to Player
            NewPlayer = Player(playerID,CardForPlayer)   #Save Player ID and Card
            self.PlayerList.append(NewPlayer)
        self.DecideWinner()

    def DecideWinner(self):
        WinningID = self.PlayerList[0]   #Choose Player 0 as Potential Winner
        for playerID in self.PlayerList:
            if(playerID.CardForPlayer[1]>WinningID.CardForPlayer[1]):
                WinningID = playerID   #Find the Player with Highest Card
        print "Winner is Player "+str(WinningID.PlayerID)
        print "Her Card was "+ str(WinningID.CardForPlayer[1]) + " of " + str(WinningID.CardForPlayer[0])

if __name__=="__main__":
    NewGame = SimpleWar(2)
    NewGame.StartGame()

### Syntax Error

In Cards.GetRandomCard(), the return statement is incorrectly indented. 
(Also, I would recommend writing the return without parentheses.)

### Style Conventions

Use lower_case_names for variables and methods.

### Modeling

Your Game class isn't useful. You never set or use NameOfGame. I recommend getting rid of the Game class altogether.

In a real-life card game, you would deal a card from the deck to each player, without replacement. In your code, you deal with replacement (i.e., there is a chance of dealing the same card to both players). A more realistic simulation would do a random.shuffle() on the array. When dealing, you would pop() a card from the list to remove it from the deck.

51 is a "magic" number; you should use len(self.ActualCards) - 1.

Cards is really a Deck. Rather than just a tuple of strings, you should have a Card class representing a single card. The Card class should have a __str__() method that returns a string such as "Ace of diamonds". If you also define comparison operators, then you can determine the winner using max(self.PlayerList, key=lambda player: player.CardForPlayer).

### Expressiveness

In Python, you can usually avoid creating an empty list and appending to it:

self.ActualCards = []   #Empty List to Append
for Card in itertools.product(self.suits,self.values):
    self.ActualCards.append(Card)   #Cartesian Product to Create Deck

Instead, you should be able to build it all at once:

self.actual_cards = list(itertools.product(self.suits, self.values))

• PEP 8 suggests two newlines between function and class definitions. However, one is better than nothing. –  nyuszika7h Feb 19 '14 at 14:44
• @nyuszika7h PEP 8 only calls for one blank line between method definitions within a class, which is what matters here. –  200_success Feb 19 '14 at 17:12
• @200_success: It would be hard to define a comparison for the Cards, as the order only has meaning in the context of a particular game. So it's a Game responsibility (War, Solitaire, etc) to compare two cards. 
–  mgarciaisaia Feb 19 '14 at 18:02
• @200_success You're right, I didn't bother to check if it's within a class. –  nyuszika7h Feb 20 '14 at 17:30

From a testing perspective, it would be good to inject the deck (Cards) and the players into the game. Also, printing the results in the game object is not a very good idea, I think. Maybe it would be better to return the round that contains the winner. This could then also be used for logging or a mastership :)

Deck = Deck()
Game = SimpleWar(Deck)
Round = Game.play()
Winner = Round.get_winner()
print('The winner is: ' + str(Winner))
print('The winning card is: ' + str(Winner.get_last_card()))
Deck.shuffle()
Round = Game.play()

As said before, the deck should contain card objects (or an array that builds the card objects when requested, if those card objects would be expensive), and the cards should be put back into the deck after each game round (or a reset() method of the deck could be called by the game). The question then would be who remembers the winning card after all cards have been returned to the deck. In the example above, this is the player (get_last_card()), but it could be stored in the Round, too. This way you don't have any object instantiation in your classes, except for the deck, which builds card objects. This would be very testable. For example, you can pass a deck mock into the game and define which cards it should return in the single requests, to test the detection of the winner.

• What I like about your design is that you've thought through the interactions between objects, and that defines their methods. Also big plus for dependency injection. –  Rob Y Feb 19 '14 at 19:11

Cool app. My comments aren't intended to be critical or haughty, just me thinking out loud.

• Python 3? If not, make sure your classes extend object.
• I probably wouldn't define and extend Game. It doesn't add anything. You don't need to generalize or abstract the idea of a Game, especially in this context. 
• More broadly, it's a good practice to avoid inheritance. Do a search on "favor composition over inheritance" for more on that. • The state of Cards doesn't change... the point of having a class is managing state. So I'd have something like pick_card that actually removes the selected card from the deck. Otherwise you'd probably be better off just using a function—at least in Python. Classes are all about bundling data and functions so you can manage state safely. Absent that, they kind of lose their luster (unless you're writing Java ;) ). • This is just me, but I wouldn't have StartGame run the loop or decide the winner. Again, I'd have __init__ start the game, and then have something like run_game. Again, rely on the class to manage the state. Each method is supposed to move it farther along. • Knowing me, I'd break out __init__, play_a_turn, decide_game, and show_game as methods and loop outside the class to keep the calling structure flat. • IMHO your logic for deciding the winner, and the logic for displaying the winner could go in separate methods. I like keeping methods small, light, and targeted. • Again, I'd make SimpleWar.__init__ do more, and StartGame do less. If I had a StartGame at all. EDIT: I like to define my classes closely to the things they model. So instead of Cards, I'd probably have Deck, with methods shuffle and pick. I also like the comment above about spinning off a separate method to compare cards—I'd definitely do that. - It's not Python 3, because it uses Python 2's print syntax. –  nyuszika7h Feb 19 '14 at 14:43 ah, good call. I'm still self-conscious about not having migrated. In that case, make sure that your init methods invoke the base class's init, either directly or with super() –  Rob Y Feb 19 '14 at 16:00 For future reference, use __init__ to make it display properly. Or if you want to bold it, **\_\_init\_\_**. 
–  nyuszika7h Feb 20 '14 at 17:36

At a glance, I would recommend putting a CompareTo(OtherCard) function in the Card class, and using that for DecideWinner.

• The value of a card is game dependent. In a world where those cards could be used for many different games, the cards wouldn't know their own values. So I think the WarGame object is a good place to put this decision in. –  stofl Feb 19 '14 at 15:05
• @stofl The only thing that would change for numerical comparison is whether aces are high or low, and it actually doesn't matter at all unless aces could be either within the same game, or if cards are passed between games for some reason. –  Saposhiente Feb 19 '14 at 18:50
• If the value of a card is an attribute of the card, then a compare method makes sense in the card class. If the value of a card is defined in the game rules, then the card class would not be the right place for a compare method. In the concrete code, we have the value in the cards – in the real world we most often don't: cards have many different values in different games. –  stofl Feb 19 '14 at 19:05
• @stofl I think that what's in the code is more relevant than what's in real life; and in the code, there is only one game, and only one definition of card comparison. –  Saposhiente Feb 19 '14 at 19:30
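Pulling the reviewers' suggestions together (a shuffled deck dealing without replacement, and winner selection via max() with a key), here is a hedged Python 3 sketch; the class and suit names are my own choices, not from the original post:

```python
import itertools
import random

SUITS = ("Hearts", "Diamonds", "Clubs", "Spades")  # illustrative suit names

class Deck:
    """A deck that deals without replacement, as the reviews suggest."""
    def __init__(self):
        self.cards = list(itertools.product(SUITS, range(1, 14)))
        random.shuffle(self.cards)

    def pick(self):
        # pop() removes the dealt card, so the same card is never dealt twice
        return self.cards.pop()

def play_war(num_players=2):
    """Deal one card per player and return (winner_id, winning_card)."""
    deck = Deck()
    hands = {pid: deck.pick() for pid in range(num_players)}
    winner = max(hands, key=lambda pid: hands[pid][1])  # compare by rank only
    return winner, hands[winner]

if __name__ == "__main__":
    winner, card = play_war()
    print("Winner is Player %d with %d of %s" % (winner, card[1], card[0]))
```

Comparing by the rank field keeps the game-specific ordering in the game function rather than in the card, which is the design the commenters above debate; ties still go to the first player holding the highest rank.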
{}
Journal ArticleDOI

# Ultra-Broadband Add-Drop Filter/Switch Circuit Using Subwavelength Grating Waveguides

01 May 2019, Vol. 25, Iss. 3, pp. 1-11

Abstract: An ultrabroadband add-drop filter/switch circuit is designed and demonstrated by integrating a pair of subwavelength grating waveguides in a $2\times 2$ Mach–Zehnder interferometer configuration using silicon photonics technology. The subwavelength grating is designed such that its stopband and passband are distinguished by a band-edge wavelength $\lambda_{\text{edge}} \sim 1565$ nm, separating the C and L bands. The stopband ($\lambda < \lambda_{\text{edge}}$) is filtered at the drop port of the device, whereas the passband ($\lambda > \lambda_{\text{edge}}$) is extracted either at the cross port or at the bar port. The device is designed to operate only in TE polarization. Experimental results exhibit a nearly flat-top band exceeding 40 nm for both stopband and passband. The stopband extinction at the cross and bar ports is measured to be $>35$ dB with a band-edge roll-off exceeding 70 dB/nm. A wavelength-independent directional coupler design and integrated optical microheaters at different locations of the Mach–Zehnder arms for thermo-optic phase detuning are the key to stopband filtering at the drop port and to switching of the passband between the cross and bar ports with a flat-top response. Though the insertion loss of the fabricated subwavelength grating waveguides is negligibly small, the observed passband insertion loss is $\sim 2$ dB, which is mainly due to the combined excess loss of the two directional couplers. Experimental results also reveal that passband switching between the cross and bar ports is possible with an extinction of $>15$ dB at an electrical power consumption of $P_\pi \sim 54$ mW. A switching time of 5 $\mu$s is estimated by analyzing the transient response of the device. The passband edge can also be detuned thermo-optically at a rate of 22 pm/mW. 
##### Citations

Journal ArticleDOI

Abstract: Subwavelength grating (SWG) waveguides in silicon-on-insulator are emerging as an enabling technology for implementing compact, high-performance photonic integrated devices and circuits for signal processing and sensing applications. We provide an overview of recent work on developing wavelength selective SWG waveguide filters based on Bragg gratings, ring resonators, and contra-directional couplers, as well as optical delay lines for applications in optical communications and microwave photonics. These components increase the SWG waveguide component toolbox and can be used to realize more complex photonic integrated circuits with enhanced or new functionality.

27 citations

Proceedings Article, 21 Jun 2015

TL;DR: This paper reviews the development of the various components that constitute integrated quantum photonic systems, and identifies the challenges that must be faced and their potential solutions for silicon quantum photonics to make quantum technology a reality.

Abstract: Photonics is a promising approach to realising quantum information technologies, where entangled states of light are generated and manipulated to realise fundamentally new modes of computation [1], simulation [2] and communication [3], as well as enhanced measurements and sensing. Historically bulk optical elements on large optical tables have been the means by which to realise proof-of-principle demonstrators in quantum physics. More recently, integrated quantum photonics has enabled a step change in this technology by utilising low-index-contrast waveguide material systems, such as silica-on-silicon [4] and silicon-oxy-nitride [5]. Such technologies offer benefits in terms of low propagation losses, but their associated large bend radii and low component density ultimately limit the scalability and usefulness of this technology. 
20 citations

Journal ArticleDOI

Abstract: A new Hedgehog waveguide, consisting of a bed of nails embedded in a host rectangular hollow waveguide, is proposed and investigated as a promising state-of-the-art low-loss waveguide for millimeter-wave frequency bands. The proposed Hedgehog waveguide gets its name from its electromagnetic behavior: as hedgehogs root through hedges and other undergrowth in search of their favorite food, the proposed waveguide roots through its embedded bed of nails. When choosing a host waveguide technology, it is worthwhile to weigh the pros and cons of the various types of waveguides on offer. The proposed Hedgehog waveguide is extremely low loss and is compatible with hollow waveguide technology, which gives the ability to develop different components such as low-loss, flat-phase-response phase shifters. In this paper, the proposed Hedgehog waveguide is analytically investigated, and a transition to the hollow waveguide is designed. Moreover, the low-loss nature of the designed Hedgehog waveguide is compared with the ridge gap waveguide, substrate-integrated waveguide (SIW), hollow waveguide, and microstrip line. Finally, the proposed waveguide is designed, simulated, and fabricated. The simulated and measured results show good agreement, which validates the proposed concept.

11 citations

Journal ArticleDOI

Abstract: It is a remarkable and straightforward approach to customize the dispersion and nonlinear properties of photonic devices without varying the composition of the material by employing periodic segmented waveguide structures at a subwavelength scale relative to the operational wavelength. This method addresses the diffraction limit and makes it possible to engineer the waveguides as a uniform optical medium with an effective refractive index that relies on the waveguide geometry.
In recent years, advances in lithographic technology in the semiconductor-on-insulator platform providing sub-100-nm patterning resolution have enabled many useful devices based on subwavelength structures. At the beginning of the paper, the modal characteristics of subwavelength grating (SWG) waveguides are presented. Afterwards, we provide an insight into noteworthy progress in SWG-waveguide-based devices for signal processing and sensing applications, such as ring resonators for surface and bulk sensing, couplers, suspended membrane waveguides for mid-infrared applications, filters, and fiber-to-chip couplers.

8 citations

Journal ArticleDOI, 15 Oct 2018

Abstract: A detailed theoretical and experimental study of metal-microheater-integrated silicon waveguide phase-shifters has been carried out. It has been shown that the effective thermal conductance gw and the effective heat capacitance hw, evaluated per unit length of the waveguide, are two useful parameters contributing to the overall performance of a thermo-optic phase-shifter. Calculated values of the temperature sensitivity SH = 1/gw and the thermal response time τth = hw/gw of the phase-shifter are found to be consistent with the experimental results. Thus, a new parameter ℱH = SH/τth = 1/hw has been introduced to capture the overall figure of merit of a thermo-optic phase-shifter. A folded waveguide phase-shifter design integrated in one of the arms of a balanced MZI switch is shown to be superior to a straight waveguide phase-shifter of the same waveguide cross-sectional geometry. The MZI switches were designed to operate in TE polarization over a broad wavelength range (λ ∼ 1550 nm).
7 citations

##### References

Journal ArticleDOI

Chen Sun, Sen Lin, et al., 24 Dec 2015, Nature

TL;DR: This demonstration could represent the beginning of an era of chip-scale electronic–photonic systems with the potential to transform computing system architectures, enabling more powerful computers, from network infrastructure to data centres and supercomputers.

Abstract: An electronic–photonic microprocessor chip manufactured using a conventional microelectronics foundry process is demonstrated; the chip contains 70 million transistors and 850 photonic components and directly uses light to communicate to other chips. The rapid transfer of data between chips in computer systems and data centres has become one of the bottlenecks in modern information processing. One way of increasing speeds is to use optical connections rather than electrical wires, and the past decade has seen significant efforts to develop silicon-based nanophotonic approaches to integrate such links within silicon chips, but incompatibility between the manufacturing processes used in electronics and photonics has proved a hindrance. Now Chen Sun et al. describe a 'system on a chip' microprocessor that successfully integrates electronics and photonics yet is produced using standard microelectronic chip fabrication techniques. The resulting microprocessor combines 70 million transistors and 850 photonic components and can communicate optically with the outside world. This result promises a way forward for new fast, low-power computing system architectures. Data transport across short electrical wires is limited by both bandwidth and power density, which creates a performance bottleneck for semiconductor microchips in modern computer systems—from mobile phones to large-scale data centres. These limitations can be overcome [1,2,3] by using optical communications based on chip-scale electronic–photonic systems [4,5,6,7] enabled by silicon-based nanophotonic devices [8].
However, combining electronics and photonics on the same chip has proved challenging, owing to microchip manufacturing conflicts between electronics and photonics. Consequently, current electronic–photonic chips [9,10,11] are limited to niche manufacturing processes and include only a few optical devices alongside simple circuits. Here we report an electronic–photonic system on a single chip integrating over 70 million transistors and 850 photonic components that work together to provide logic, memory, and interconnect functions. This system is a realization of a microprocessor that uses on-chip photonic devices to directly communicate with other chips using light. To integrate electronics and photonics at the scale of a microprocessor chip, we adopt a 'zero-change' approach to the integration of photonics. Instead of developing a custom process to enable the fabrication of photonics [12], which would complicate or eliminate the possibility of integration with state-of-the-art transistors at large scale and at high yield, we design optical devices using a standard microelectronics foundry process that is used for modern microprocessors [13,14,15,16]. This demonstration could represent the beginning of an era of chip-scale electronic–photonic systems with the potential to transform computing system architectures, enabling more powerful computers, from network infrastructure to data centres and supercomputers.

854 citations

### "Ultra-Broadband Add-Drop Filter/Swi..." refers background in this paper

• ...photonics, most of the optical filters demonstrated till date are based on microring resonators [9], [27], arrayed waveguide gratings [28], photonic crystal cavities [29], DBR [30] etc.... [...]

• ...configurable optical filters [8], silicon photonics has ingrained its benchmark not only in on-chip optical communications [9], but for futuristic quantum computation [10], lab-on-chip sens-... [...]
Journal ArticleDOI

TL;DR: This review provides an extended overview of the state-of-the-art in integrated photonic biosensors technology including interferometers, grating couplers, microring resonators, photonic crystals and other novel nanophotonic transducers.

Abstract: The application of portable, easy-to-use and highly sensitive lab-on-a-chip biosensing devices for real-time diagnosis could offer significant advantages over current analytical methods. Integrated optics-based biosensors have become the most suitable technology for lab-on-chip integration due to their ability for miniaturization, their extreme sensitivity, robustness, reliability, and their potential for multiplexing and mass production at low cost. This review provides an extended overview of the state-of-the-art in integrated photonic biosensors technology including interferometers, grating couplers, microring resonators, photonic crystals and other novel nanophotonic transducers. Particular emphasis has been placed on describing their real biosensing applications and wherever possible a comparison of the sensing performances between each type of device is included. The way towards achieving an operative lab-on-a-chip platform incorporating the photonic biosensors is also reviewed. Concluding remarks regarding the future prospects and potential impact of this technology are also provided.

458 citations

### Additional excerpts

• ...ing [11] and numerous other applications [12].... [...]

Journal ArticleDOI

Abstract: Periodic structures with a sub-wavelength pitch have been known since Hertz conducted his first experiments on the polarization of electromagnetic waves. While the use of these structures in waveguide optics was proposed in the 1990s, it has been with the more recent developments of silicon photonics and high-precision lithography techniques that sub-wavelength structures have found widespread application in the field of photonics.
This review first provides an introduction to the physics of sub-wavelength structures. An overview of the applications of sub-wavelength structures is then given including: anti-reflective coatings, polarization rotators, high-efficiency fiber-chip couplers, spectrometers, high-reflectivity mirrors, athermal waveguides, multimode interference couplers, and dispersion engineered, ultra-broadband waveguide couplers among others. Particular attention is paid to providing insight into the design strategies for these devices. The concluding remarks provide an outlook on the future development of sub-wavelength structures and their impact in photonics.

417 citations

Journal ArticleDOI

TL;DR: Experimental measurements indicate a propagation loss as low as 2.1 dB/cm for subwavelength grating waveguides with negligible polarization and wavelength dependent loss, which compares favourably to conventional microphotonic silicon waveguides.

Abstract: We report on the experimental demonstration and analysis of a new waveguide principle using subwavelength gratings. Unlike other periodic waveguides such as line-defects in a 2D photonic crystal lattice, a subwavelength grating waveguide confines the light as a conventional index-guided structure and does not exhibit optically resonant behaviour. Subwavelength grating waveguides in silicon-on-insulator are fabricated with a single etch step and allow for flexible control of the effective refractive index of the waveguide core simply by lithographic patterning. Experimental measurements indicate a propagation loss as low as 2.1 dB/cm for subwavelength grating waveguides with negligible polarization and wavelength dependent loss, which compares favourably to conventional microphotonic silicon waveguides. The measured group index is nearly constant n(g) ~ 1.5 over a wavelength range exceeding the telecom C-band.

265 citations

### "Ultra-Broadband Add-Drop Filter/Swi..."
refers background in this paper

• ...fective index and dispersion characteristics of the guided mode [19]–[21].... [...]

Journal ArticleDOI

Abstract: Integrated quantum photonic applications, providing physically guaranteed communications security, subshot-noise measurement, and tremendous computational power, are nearly within technological reach. Silicon as a technology platform has proven formidable in establishing the micro-electronics revolution, and it might do so again in the quantum technology revolution. Silicon has taken photonics by storm, with its promise of scalable manufacture, integration, and compatibility with CMOS microelectronics. These same properties, and a few others, motivate its use for large-scale quantum optics as well. In this paper, we provide context to the development of quantum optics in silicon. We review the development of the various components that constitute integrated quantum photonic systems, and we identify the challenges that must be faced and their potential solutions for silicon quantum photonics to make quantum technology a reality.

242 citations
4:39 AM GooD MorninG..! @JMac The same happens to me. That's why I have stopped sleeping at any other time in the day except the night. Though I did once or twice sleep during the afternoon intentionally to gather "inspiration" (because when I wake up, my mind works quite differently, it feels as if I got high and then became normal again). @123 Good morning. @ACuriousMind And that is the time my mind comes up with totally weird ideas. I mean, this occasionally happened when I used to read/study in the afternoon. And my mind used to apply classical mechanics to find the double bond equivalent of an organic compound (weird stuff, you get it, right?), and then when my senses returned, I would have totally forgotten how I did it. I guess I might even have discovered the theory of everything in those half asleep moments (jk). 5:18 AM @FakeMod yo, what happened to your rep? @ACuriousMind one thing i heard that works is drinking coffee right before taking a nap two people told me it makes them wide awake when they get up 5:46 AM @satan29 I deleted my account during the last few months of JEE, and resurrected it after JEE was over. I have done this deal of deletion and recreation quite a lot of times in the past :) @SirCumference How can one sleep after drinking a coffee? It makes me "wide awake" even before I sleep. ah, i see. cool :) @FakeMod i think ya gotta nap before it kicks in i guess if you're extremely tired then it might be viable @SirCumference Hmm... I will try your coffee idea, if I need to sleep and, also, wake up awake. 2 hours later… 8:11 AM Urania doesn't have a lot of informations How am I supposed to make a sacrifice to the muse of astronomy for general relativity powers I can find some rituals, but they're all modern ones I only want authentic ancient greek ones Only elements are 1) lyre-ruling 2) golden headband 3) some hymn Fairly short list 8:35 AM How can we create uniform vector field???
I know E of infinite sheet and between two oppositely charged plates. But why it is uniform everywhere in space. Because E depend on r. Simple enough just write down a uniform vector field And compute its divergence although it's not guaranteed that it will be a proper EM field There may be no combination of electric and magnetic fields that give such a field @Slereah :P GooD Answer. Physics way how E between two oppositely charged plates not depend on r How can we explain bus accelerate because there is no external force which act on the bus??? Like newton's third law involve second object causes acceleration. Another example is rocket acceleration moving upward. There is no external force act on it. 8:50 AM A rocket is simple enough, it ejects a bit of its mass in one direction By conservation of momentum, this means that the rocket itself must accelerate in the other direction I understand conservation of momentum is a explaination of this example. How can we describe this motions using newton's three laws The simplest case is to consider it as two point masses Where is external force One representing the rocket, the other representing the gas You're roughly doing an elastic collision Ookay. pls explain 8:53 AM The gas molecule will bump into the rocket, and go the other way The gas momentum is transferred to the rocket Bus is a bit more complex Engines and wheels Although the rough idea is still the same, ie gas expansion is converted into motion But this is not following external force idea It is, if you consider the gas and the bus as two different systems A bus is the motion of many different parts which you can consider as independent systems and then it's just a transfer of such motion from one part to the other Ookay Thanks. 8:59 AM It's fundamentally the same idea as throwing a ball at something ok. got it 3 hours later… 12:27 PM yo 2 hours later… 2:07 PM afternoon yo @Charlie hi Do you have a mood to discuss physics. 
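The rocket argument above (a bit of gas is ejected one way, and by conservation of total momentum the rocket recoils the other way) can be sketched numerically. A minimal check, with all masses and velocities as made-up illustration values:

```python
# Momentum conservation for a rocket ejecting a small parcel of gas.
# All numbers below are illustrative, not taken from the discussion.

def eject(m_rocket, v_rocket, m_gas, v_gas_exhaust):
    """Return the rocket's new velocity after ejecting a gas parcel of
    mass m_gas at velocity v_gas_exhaust (in the ground frame),
    using conservation of total momentum."""
    p_before = (m_rocket + m_gas) * v_rocket
    p_gas = m_gas * v_gas_exhaust
    return (p_before - p_gas) / m_rocket

# A 1000 kg rocket at rest ejects 10 kg of gas backwards at 500 m/s:
v_new = eject(m_rocket=1000.0, v_rocket=0.0, m_gas=10.0, v_gas_exhaust=-500.0)
print(v_new)  # rocket recoils forward at +5.0 m/s
```

Total momentum before (zero) equals total momentum after: 1000 kg at +5 m/s plus 10 kg at -500 m/s.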
I am struggling to understand conservative force idea Quick question about a procedure just done in P&S, we want to isolate the interacting vacuum $|\Omega\rangle$ and we find an expression for it in terms of the free vacuum $|0\rangle$, does this imply that we are treating the interacting and free theories as existing in the same Hilbert space? I could understand why this would be necessary if we don't fully know what the interacting Hilbert spaces are. :P i am asking a question to you. You are also here to ask question :O @123 What do you not understand about conservative forces? @Charlie Yes, and this is nonsense by Haag's theorem. Physicists usually don't care and it seems to work anyway :P 2:11 PM Ah I seee $\vec{F}\cdot\vec{ds}$ here we take $\vec{s}$ in the direction of force. right. Does the Millenium prize regarding the Yang-Mills mass gap relate to rigorously constructing interacting QFTs? I've read a bit about it but don't really know enough to say @123 $\vec ds$ is not the direction of the force, it is the direction of travel through the force vector field It is dot product. dot product means in we take the angle between two vectors and project it any one of the vector. oh sorry you said "in" the direction of force, not "is" the direction of force well the answer is still no why??? i know inner product . but the idea is same 2:14 PM What you're doing is, at a given point, taking the the force vector $\vec F$ at that point, and taking its dot product with $\vec s$, the vector pointing in the direct of travel at that point Yes at every point. but this is the same meaning what i said about every point Then when you integrate along a path, you are effectively doing this at every point along the path and adding up the resulting dot products When you said "we take $\vec s$ in the direction of force", that was incorrect, $\vec s$ is a vector pointing in the direction along which you are travelling through the force Yes. I understand what integration does. 
i am confirming it is same result which we are getting from 2D. I'm not sure what you mean "from 2D" That procedure works in all dimensions $\Re^2$ yes i meant to say that idea is same for all dimensions. 2:17 PM Oh no I know what you meant by 2D, but not "the same result which we are getting from 2D", that phrase doesn't make sense to me The dot product is defined basically the same way in $\Bbb R^2$ and $\Bbb R^3$, so the procedure is the same @Charlie means. In dot product in $Real^2$ or $Real^2$ we take one vector projection to another . right yes How do write Real in latex \Bbb R^n $\Bbb R$ 2:20 PM $\Bbb R^n$ Thanks As per this dot product analogy. we are just looking Force and arc length vector in the direction of force. right Hi @Az I don't know what you mean by arc length vector in the direction of force Hi @Azmuth @123 Hi :) Arc length mean $\vec{s}$ What you're doing is drawing a line through $\Bbb R^3$, then at each point along this line you're taking the dot product of the tangent vector to that line with the force vector at that point. 2:22 PM tangent vector I can not see upload option. I wanted to share a picture There's an upload button just to the right of the text box If this the case we in this $\vec{F}\cdot\vec{s}$ we are just interested in distance curve of tangent vector only in the direction of force. Here i want to confirm one thing. Upload button was there yesterday. But today i have two buttons MathJax and send oh rip It is difficult to tell without showing the pictures. I'm not sure sry 2:29 PM See this image click it ok I see it The point is that both of those lines result in the same work done against gravity, since it is a conservative force If curve go downward from starting point let which give negative work as we move upward it is positive by the same amount. It cancels out every point below from starting point. Same is true for ending point curve. This is correct or not? 
Yes The point is that for every bit of energy "spent" going against the gravitational force the same amount is "gained" when going with the gravitational force. The net result is that both lines do the same amount of work this is a feature of conservative vector fields, i.e. those for which $\nabla \times V=0$. The curve between stating and endpoint give the same result as straight curve. Because this is the direction of force. and we are looking for curve tangent vector in the direction force. It gives the same result. This is also true? I'm not sure what your second and third sentence mean, but the first one is correct Any two lines that start and end at the same point that travel through a conservative vector field will do the same work 2:37 PM @Charlie I want to say we have to segments joining starting and endpoint. One is straight and other is curve. Now i conclude what happened to work above and below the endpoints. Now i am figuring out between endpoints As long as the starting point and the ending point are the same, all possible curves do the exact same amount of work against the field The straight line segment is also the direction of force. If we take dot product between these two is same result which is given by the curved one with the force. Because we are taking dot product here. @Charlie Yes because any thing below and above the endpoints cancel out completely and between curve by dot product we emphasize that just take the component of curve in the direction of force. Is that true? If I've understood what you've said correctly, then yes :P :P what you don't understand from my bad english :O Your english is fine, it's just trying to understand your reasoning behind what you believe to be true 2:43 PM The picture i shared i was talking about this. In this picture i have divided curve in three parts. 1. Curve below the starting point 2. Curve above the endpoint The important points about conservative vector fields are: 1. 
Work done is independent of path. 2. The vector field has zero curl $\nabla \times \vec V=0$. 3. The vector field can be written as the gradient of some potential scalar field $\vec V=\nabla f$. 3. Curve between endpoints Points 2 and 3 actually are equivalent but still First I want to digest your point no 1 the i want to move your point no 2. This is also needed understanding. The point is that any line that isn't directly from $A$ to $B$ adds up in such a way that it cancels itself out until it is equivalent to the line directly from $A$ to $B$. 2:46 PM If we take $\vec{F}\cdot\vec{s}$ Anything below and above endpoints give the result zero. Is that true? Note that in the picture you linked, it doesn't matter if you first travel in the opposite direction, you end up having to go back up through the force which cancels out the energy "gained" by travelling with the field @Charlie Ahaaa. you are right because we talking dot product between them. $\vec F\cdot \vec s$ is not equal to zero at all points below the starting point and above the end points @Charlie Whyyyyy???? This is equivalent to saying that $A+B=A+5+B-5$. Because the field isn't equal to zero below the starting point and above the ending point, what we're saying is that they cancel out, not that they are both equal to zero 2:51 PM No it should be zero let say when we go down from starting point (bottom one i am taking starting point) it give us some scalar number then we move back to starting point up it give us another number which should additive inverse of same number when giong below. the sum to be zero... Yes, what you've said there is correct Thanks otherwise my mind blows up. The total $\vec F\cdot d\vec s$ moving down and then back up again is zero, but at each point $\vec F \cdot d\vec s$ is not $=0$, which is what it sounded like you were saying The big thing about work being independent of path is that it doesn't really matter how you end up in your final position. 
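The path-independence point can be checked numerically for a uniform downward field like gravity: the line integral of $\vec F \cdot d\vec s$ along two different curves with the same endpoints comes out the same. A rough sketch (the field strength, endpoints, and detour path are all made up for illustration):

```python
import math

g = 9.8  # uniform field pointing in -y, like gravity near Earth's surface

def force(x, y):
    # Conservative field: F = -grad(g * y), unit mass assumed.
    return (0.0, -g)

def work(path, n=100000):
    """Approximate line integral of F . ds along t -> (x, y), t in [0, 1]."""
    total = 0.0
    x0, y0 = path(0.0)
    for i in range(1, n + 1):
        x1, y1 = path(i / n)
        fx, fy = force((x0 + x1) / 2, (y0 + y1) / 2)
        total += fx * (x1 - x0) + fy * (y1 - y0)
        x0, y0 = x1, y1
    return total

straight = lambda t: (t, t)                        # straight line (0,0) -> (1,1)
wiggly = lambda t: (t, t + math.sin(math.pi * t))  # detour, same endpoints
print(work(straight), work(wiggly))  # both come out to about -9.8
```

Both paths do the same (negative) work against the field, because only the net change in height matters in a homogeneous vertical field.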
The potential energy just depends on where you are in the field, not the path you took to get there. I did not want to say at each point. 2:52 PM I think you understand, it was just a miscommunication yeah @Charlie yes yes. thanks i was just saying any curve below and above endpoints add up to zero Well, a special property of conservative vector fields is that the net work done by any closed loop is zero The curve between the endpoints is same as we take it as straight line joining the two point this is we give this task to dot product to do this. right. @Charlie This is why is zero because it is sum up to zero. If I understand what you're saying, yes hahaha.... :P Now i am curve only between enpoints. 2:56 PM In a closed loop, if you add up the dot product of the tangent vector to the curve and the force field vector at each point you get zero For a closed loop in a conservative field In close loop i understand. what if it is not close loop. The just go below and achieve the same height. In my picture when you only see curve below starting point. which goes down first then up and stop it where it has same height as starting point. In a field like gravity that is just in the vertical direction, your horizontal position doesn't change the work done; but other conservative fields can depend on your horizontal position as well. In this picture we end up at the same "height" above the ground, but the net work is not zero The point is that the gravitational field is (assumed to be) homogenous 3:01 PM Yeah that's a good example of not only a horizontal field where horizontal distance does matter. OoooKay... whould it be zero or not by same height Yes, because the field is homogenous Good example. Because in your example it is seen it is not zero. But in my example it is not a closed loop. 
But note that if the curve were closed in my example it would be zero Yes, it doesn't matter that the loop isn't closed in your example because the field is completely homogenous, in my image the field isn't homogenous we can say that if field is homogenous by achieving the same height also give result zero. 3:05 PM Well, homogenity has a direction associated with it It's also worth noting that there are still a set of equipotential lines in that example where there are several locations where potential is the same and the loop isn't closed; but the field is more complicated so those potential lines aren't just horizontal. Thanks @Charlie . Now i have few more questions. Why we take dot Force with curved tangent vector. Is there ant physics way interpretation or benefit? We take the dot product of the force with the direction of travel because that is by definition how we calculate work done @JMac Hi. Yes you are right. what is benefit of equipotential surface? 3:08 PM If you moved side to side in your example, the dot product is 0 because it doesn't move in the direction of force. @Charlie Problem is that. We can feel displacement, time, mass, velocity , force, acceleration. How we feel energy work. KE PE. @123 It shows a set of locations where the potential energy is the same even though position is different in a conservative field hyperphysics.phy-astr.gsu.edu/hbase/electric/equipot.html It's the red dashed lines in those pictures. Gravity is basically like a rotated version of that constant electric field example I'm not sure what you mean by "we can feel" those things Feel means we can measure distance it is real life experience also for time, force etc.. 
What about KE, PE, Work done We can only measure changes in energy, we cannot directly measure the absolute "energy" of a system 3:12 PM you can see there are lot of lot of questions about energy in this forum problem is same, this idea doesn't have any real life experience people actually don't feel it visualize it in their brain. Is there any intuitive way of defining or understanding KE PE work-done??? I don't like the word intuitive because I don't know what you consider intuitive, but energy isn't something you can visualise afaik, it's a fairly abstract concept afaik = as far as I know * Yeah energy seems "intuitive" enough to me; but it's still abstract as well. What happened to the system. If system has more energy or less energy any explanation. So we can create comparative model of energy. Again we can only measure differences in energy, not absolute values 3:16 PM This guys videos are incredible We can say a particle has "more" kinetic energy if it has a greater value of $\frac{1}{2}mv^2$ Those god damn drawings of lines make some sense now... @Charlie Energy is abstract that's why people will struggle to understand it and have a lot of lot of question just about this topic. sure @bolbteppa Thanks I have seen this about 3 years ago. 3:18 PM Yo EVry 1 here What is the difference if system has low and high energy? @Azmuth Yo What do you mean "difference", what kind of difference are you expecting? @123 High energy atomic/molecular systems are unstable and seek to stablize by discharge of energy Yeah, note that that is again talking about relative energy Any example related to mechanics? An object with high potential energy seeks to lower it by converting it to other forms... @Charlie :P The basic way to think about energy I've usually found good is "the capacity to perform work". That capacity allows systems to perform work on each other and change their kinetic and potential energies. 
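The relative comparisons above use the standard formulas $KE = \frac{1}{2}mv^2$ and, near Earth's surface, $PE = mgh$. A toy numerical illustration (all numbers made up):

```python
# Relative energy comparisons with KE = (1/2) m v^2 and, near Earth's
# surface, PE = m g h. All numbers are illustrative.

def kinetic_energy(m, v):
    return 0.5 * m * v**2

def potential_energy(m, h, g=9.8):
    return m * g * h

# Two equal masses, one moving faster: the faster one has more KE.
print(kinetic_energy(2.0, 1.0), kinetic_energy(2.0, 3.0))  # 1.0 and 9.0 J

# Two equal masses, one on the floor, one on a shelf 1.5 m up:
# only the *difference* in PE is physically meaningful.
print(potential_energy(2.0, 1.5) - potential_energy(2.0, 0.0))  # 29.4 J
```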
The changes in kinetic energy can look complicated on the molecular scale and smaller (like heat); but that's basically what is going on. This one on pions and kaons is stunning, drawing in the hidden curve explains everything @JMac Ah, yes, that's a better explaination. 3:22 PM I have seen all the videos at this channel. Beside math. Is there any mechanics example at which we can say two objects having same mass but one has high energy and another has lower energy? Yes, two particles, one travelling at greater velocity than the other You are again talking about relative energy, which is perfectly reasonable to define, what isn't easy to define is "absolute" energy because that requires you to define a "zero" energy Another example: One mass is sitting still on the floor. The other mass is sitting still on the shelf several feet above. The higher mass has more energy in the presence of gravity. @Charlie hahaha.. You are right. So i can say that energy is another kind physics which written in different formulation (like LM). but there is transformation between NM and LM Everything is relative! Time, Life, soul, position and stupidity..... Well I'm not sure about Stupidity... I'm not sure what you mean "energy is another kind of physics" Lagrangian and Newtonian mechanics both have a concept of energy 3:26 PM @Azmuth :O A (quantum) mechanics example is mirror nuclei (particles with same overall number of protons and neutrons) almost have the same mass and the energy spectra look almost the same but the one with more protons has higher energies due to the higher coulomb repulsion :p I almost got an heartattack with that ping sound.... speakers were at 150% :P @Azmuth Lower the sound ;P @Azmuth I don't think that's how %'s work? Linux Does! aslimixer plugin to amplify sounds! :) 3:28 PM Oh that's not your speakers though. @Charlie It means energy is just mathematical doesn't have any real life experience? 
Mine, but with a different name Well, energy is a physical thing, mathematicians don't ever talk about energy so it's definitely physical @Azmuth Well your speakers often have their own sound control; you generally can't set that past 100 because it physically can't go higher. You can crank the gain on the computer side though. @Charlie I protest. @JMac crank the gain on the computer side though That's what ASLIMIXER does 3:30 PM @Charlie :P because students ask questions we we can't experience energy. they can experience distance ,time etc.. @Azmuth Mute it Sure but that doesn't make it a purely mathematical thing :P @Azmuth Still not your speakers past 100% if we're being pedantic, just your computer sounds past 100% @JMac It increases tho! @123 Done! can anyone check if colab.research.google.com is down now or the link is not opening for me. No. @Charlie gave me many many good ideas. your participation is also needed. :P @123 I have a lot of practical experience with energy. I took an entire course called "Energy Management" which was basically about tracking energy through different processes and looking at ways to maximize the amount of useful work from the energy. So things like processing plants, you can trace where all the energy comes from and goes, and how much of it actually gets used for your main goal(s). 3:33 PM @Charlie Forget it. Pls explain why curl is zero in conservative force? @JMac Great so can have discussion on this? Because a conservative vector field can always be written as the gradient of some potential scalar field, $\vec V=\nabla f$, it is then a mathematical identity that $\nabla \times (\nabla f)=0$ Sure, what exactly do you want to know? I feel like it's something I just got more and more intuition about the more I studied physics. "The curl of a gradient field is always zero" yep, it's a scalar field with straight lines the scalar fields can have curved lines for example that surrounding a central potential 3:36 PM I see... 
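The identity $\nabla \times (\nabla f) = 0$ quoted above can be spot-checked numerically: mixed partial derivatives of a smooth potential commute, so every component of the curl of a gradient vanishes. A sketch using central differences, where the potential $f$ is an arbitrary made-up smooth function:

```python
import math

def f(x, y, z):
    # Arbitrary smooth scalar potential, chosen only for illustration.
    return x**2 * y + math.sin(z) * y

h = 1e-4  # finite-difference step

def partial(g, i, p):
    """Central-difference partial derivative of g along axis i at point p."""
    p1 = list(p); p1[i] += h
    p0 = list(p); p0[i] -= h
    return (g(*p1) - g(*p0)) / (2 * h)

def grad(p):
    return [partial(f, i, p) for i in range(3)]

def curl_of_grad(p):
    # d(i, j) approximates the partial derivative of grad[j] along axis i.
    d = lambda i, j: partial(lambda *q: grad(list(q))[j], i, p)
    return [d(1, 2) - d(2, 1),   # x-component: dFz/dy - dFy/dz
            d(2, 0) - d(0, 2),   # y-component: dFx/dz - dFz/dx
            d(0, 1) - d(1, 0)]   # z-component: dFy/dx - dFx/dy

print(curl_of_grad([0.3, -0.7, 1.2]))  # all three components ~ 0
```

Each component pairs two mixed second partials that are equal for a smooth $f$, so the result is zero up to discretization and rounding error.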
@JMac My question is the same. we can clearly explain the object at the earth's surface or on top of a table by gravity, distance etc.. this is the same explanation which we give in chapters before work and energy. how do we connect students to the same example with the idea of energy? temperature is generally a simple measurement of (molecular) energy. @Charlie You are right. Now I see: if we already took the dot product between two vectors, no perpendicular component remains. Thanks ...which reminds me of Neumaier's "thermal interpretation of QM" 3:39 PM Is my explanation about curl correct? @Charlie The cross and dot product are definitely different operations, I'm not sure what you meant If I can write a vector field $\vec V$ as the gradient field of some scalar field $f$, it is a mathematical fact that $\nabla \times \nabla f=0$ @123 The way I found was good was just getting comfortable with the basics of energy you see in physics books and have some understanding of the calculations. Then after that you can look at things like thermodynamics and heat transfer which highly involve looking at how energy flows between systems and the effects. you said curl of gradient is zero The following are important identities involving derivatives and integrals in vector calculus. Operator notation — Gradient: for a function $f(x,y,z)$ in three-dimensional Cartesian coordinate variables, the gradient is the vector field $\operatorname{grad}(f) = \nabla f = (...$ i take gradient as dot product and curl as cross product.
3:41 PM Gradient is not the same as the dot product You can't take the curl of a dot product Vector algebra is important while learning about Maxwell's eqns :) @Charlie You wrote earlier $\del\cross(\del\cdot\pi)$ \nabla is the correct symbol and for cross product is small x Note $\nabla f\neq \nabla\cdot f$, the second operation is not defined for cross product the symbol is \times 3:44 PM Ahh ok ok i used it several times my fault thanks @Charlie Yes this is not same. The cross product is defined $$\times : V\times V\rightarrow V,$$ the dot product is defined $$\cdot: V\times V\rightarrow \Bbb R,$$ and the gradient is defined as $$\text{grad}:C^2(\Bbb R)\rightarrow V.$$ OK i saw your expression again; you did not use the dot product. Yes sorry mistakenly i wrote that. Is it possible to give an example for KE, PE and work to students. more speed means more KE. But speed can be visualized; energy can't Not sure what you mean 3:50 PM $\vec{P} = m\vec{v}$ and $KE = \frac{1}{2}mv^2$ @123 Use Graphs for visualisation, students will understand it better. Don't get hung up trying to visualise energy momentum can be defined as quantity of motion. Like the more mass an object has, the more force is needed to stop it; the same goes for velocity. I have given an explanation about momentum, friend. Not all energy is really easy to "see". Consider a pressurized container. From the outside, it looks pretty normal and not high energy; but then if you open a hole in it; suddenly all that stored energy escapes. You can't really visualize the energy until it does something else. I'm not sure if trying to visualize it like motion helps. but energy also has mass and velocity terms in a different way, explaining some other quantity of the same object at the same time. What is this quantity in a physics way? 3:53 PM 31 mins ago, by JMac The basic way to think about energy I've usually found good is "the capacity to perform work".
That capacity allows systems to perform work on each other and change their kinetic and potential energies. The changes in kinetic energy can look complicated on the molecular scale and smaller (like heat); but that's basically what is going on. @JMac Problem is that energy is derived from work. The question is the same: then what is work? @Charlie visualisation questions do frequently appear in JEE Like it is the motion only in the direction of force. @Charlie graphs... They are pretty tricky and hard 3:54 PM @Charlie Let me give you an example frnds. Sure but that's not a visualisation of energy itself which is what it sounds like you're asking for @123 $W= \sum_i \vec F \cdot \vec s_i$ Note for conservative forces, $\oint \vec F \cdot d\vec s = 0$ If we pull a box on the floor at some angle, the component of force parallel to the displacement (say the x-axis) works to move the box, and the perpendicular (y-axis) component of force is balanced by the box's weight. @Charlie this one @Azmuth You are right. But i am talking about any curve. 3:58 PM But these aren't asking you to visualise energy itself though, which isn't possible, they're asking you about how energy changes in certain systems :P @Charlie But that greatly helps in improving understanding. oh sure, but it's still not visualising energy itself Does anyone here play Among us? somewhat very close to visualisation @123 Work is essentially the quantifiable change in an object's position in a conservative field and/or a change in its relative motion, which can be quantified, described and equated using energy. If we take the example of a projectile, the only force acting on it is gravity.
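The identity at the heart of the curl discussion above, $\nabla \times (\nabla f) = 0$, is easy to confirm with a computer algebra system. A small sympy sketch (the sample field $f$ is an arbitrary choice, purely for illustration):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# An arbitrary smooth scalar field, chosen only for illustration.
f = x**2 * sp.sin(y) + y * sp.exp(z)

# Gradient: the vector of partial derivatives of f.
grad_f = [sp.diff(f, var) for var in (x, y, z)]

def curl(F):
    """Curl of a Cartesian vector field F = (Fx, Fy, Fz)."""
    Fx, Fy, Fz = F
    return [sp.diff(Fz, y) - sp.diff(Fy, z),
            sp.diff(Fx, z) - sp.diff(Fz, x),
            sp.diff(Fy, x) - sp.diff(Fx, y)]

print([sp.simplify(c) for c in curl(grad_f)])  # [0, 0, 0]
```

Swapping in any other smooth $f$ gives the same `[0, 0, 0]`, because mixed partial derivatives commute — which is exactly why conservative (gradient) fields are curl-free.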
# Mark Giesbrecht School of Computer Science Email: mwg@uwaterloo.ca Research Interests: Computer algebra, algebraic algorithms and computational complexity
# reaction limestone products ## What happens when acid reacts with limestone? | Questions ... Jun 14, 2008 · Limestone - calcium carbonate, (CaCO3) - dissolves, but at a low rate. Sodium carbonate (Na2CO3) dissolves faster than limestone; it reacts with acids in a chemical reaction, producing carbon dioxide. Submitted by Manu (not verified) on Sun, 17/12/2017 - 13:51: Yes it does. Yes, limestone reacts with acids ## Calcination of Limestone – IspatGuru May 02, 2013 · The decomposition reaction of the limestone is CaCO3 = CaO + CO2 (g). The activation energy of the calcination reaction is generally between 37 kcal/mol and 60 kcal/mol, with values predominantly nearer to 50 kcal/mol. These values are compared with the theoretical value (at equilibrium) being between 39 kcal/mol and 41 kcal/mol. ## Lime Production from Limestone - Current Technology limestone products are commonly used in industrial processes and are naturally occurring, consisting of high levels of calcium, magnesium carbonate and minerals. Lime is used in many industries to neutralize acid waste and as an alkali for chemical processes, in agriculture, soil stabilization, building, and industrial purposes such as cement and ## Techniques for Determining Limestone Composition and ... Oct 01, 2009 · Limestone quality affects sulfur dioxide (SO2) removal, reaction tank sizing, limestone consumption rate, and composition of the gypsum product and waste streams. Reactivity is a ## Study of a binder based on alkaline activated limestone ... New binder based on limestone activation by a concentrated solution of sodium hydroxide. • Mortars with optimized composition reach 20 MPa at 28 days. • Reaction products are pirssonite and portlandite. • SEM-EDX measurements allowed the identification of the microstructure of pirssonite. • ## The limestone cycle - Limestone [GCSE Chemistry only ...
The carbon dioxide reacts with the calcium hydroxide to form white calcium carbonate, which is insoluble and so turns the limewater ‘milky’. calcium hydroxide + carbon dioxide → calcium carbonate +... ## What happens when limestone is heated? - Quora Let’s stick to limestone, calcium carbonate. When heated it will decompose to form carbon dioxide and calcium oxide. This is the basis of lime products such as lime mortar, lime putty etc. much used historically in the building industry. ## 13.2 Acid-base reactions | Types of reactions | Siyavula Powdered limestone $$(\text{CaCO}_{3})$$ can also be used but its action is much slower and less effective. These substances can also be used on a larger scale in farming and in rivers. Limestone (white stone or calcium carbonate) is used in pit latrines (or long drops). The limestone is a base that helps to neutralise the acidic waste. ## Calcium Chloride - 3V Tech Calcium chloride is produced as a product solution from reaction of calcium carbonate and hydrochloric acid upon the following reaction: CaCO 3 + 2 HCl = CaCl 2 + H 2 O + CO 2 Limestone is used as the source of calcium carbonate. The purification of the product is mainly accomplished by adding Ca (OH) 2, in order to remove Magnesium. ## Production process | Nordkalk Limestone is an essential raw material Production process Nordkalk extracts limestone and processes it into crushed and ground limestone, concentrated calcite, and quick and slaked lime. The product range also includes dolomite and wollastonite. ## What happens when carbonic acid reacts with calcium ... Limestone is mostly made up of the mineral calcium carbonate (CaCO3).
Or, if there is more acid, two hydrogen ions will react with a carbonate to form carbonic acid – H2CO3 – which will decompose to form carbon dioxide – CO2 – which eventually bubbles off into the atmosphere, and water H2O. ## The limestone cycle - Limestone [GCSE Chemistry only ... The limestone cycle Calcium carbonate. Calcium carbonate, calcium oxide and calcium hydroxide are all made from limestone and have important applications so it is important to know how they are made. ## Limestone and Acid Rain Limestone is one familiar form of calcium carbonate. Acids in acid rain promote the dissolution of calcium carbonate by reacting with the carbonate anion. This produces a solution of bicarbonate. Because surface waters are in equilibrium with atmospheric carbon dioxide there ## The Kinetics of Calcination of High Calcium Limestone support chemical lime and other lime products production facilities over a reasonable length of time, 2.0 Theory At calcination temperatures of about 900 °C dissociation of limestone proceeds gradually from the outer surface of the stone particle inward like a growing shell [5] Simple conservation shows that the rate of the reaction is ## What is formed when limestone reacts with hydrochloric ... Mar 12, 2012 · The reaction between limestone and hydrochloric acid is an acid-carbonate reaction producing a salt, carbon dioxide and water. Limestone is chemically known as ## LIMESTONE Limestone used in areas adjacent to water that is chemically purified should be tested to ensure that there is no reaction between the stone and the purification chemicals. (See Horizontal Surfaces chapter for more information.) ## Thermal decomposition of calcium carbonate | Experiment ... 1.6 LIMESTONE [b] calcium carbonate, calcium oxide and calcium hydroxide as the chemical names for limestone, quicklime and slaked lime respectively [c] the cycle of reactions involving limestone and products made from it, including the exothermic reaction of quicklime with water and the reaction of limewater with carbon dioxide; Northern Ireland ## 12.2 Factors Affecting Reaction Rates – Chemistry The rates at which reactants are consumed and products are formed during chemical reactions vary greatly. We can identify five factors that affect the rates of chemical reactions: the chemical nature of the reacting substances, the state of subdivision (one large lump versus many small particles) of the reactants, the temperature of the reactants, the concentration of the reactants, and the ... ## Weathering: the decay of rocks and the source of sediments ... • Good soils form on limestone and mafic igneous rocks. Many plant nutrients are released by chemical weathering. • Poor soils form on quartz-rich rocks like sandstone, quartzite, or quartz-rich granites. Relatively few nutrients released for plants. ## Liming to Improve Soil quality - Home | NRCS The effectiveness of agricultural limestone depends on the degree of fineness because reaction rate depends on the size of the material (surface area) in contact with the soil. Agricultural limestone contains both coarse and fine materials. Many states require 75 to 100 % of the limestone to pass an 8- to 10-mesh screen and that 25% ## Lime And Limestone Chemistry And Technology Production The chemistry of the reactions is as follows: Heating the limestone (calcium carbonate) drives off carbon dioxide gas leaving behind lime, the base calcium oxide. CaCO 3 (s) →
CaO (s) + CO 2 (g) The lime is white and will have a more crumbly texture than the original limestone. The chemistry of limestone Joint Earth Science Education Initiative ... ## 11.17 Lime Manufacturing - US EPA 2/98 Mineral Products Industry 11.17-1 11.17 Lime Manufacturing 11.17.1 Process Description 1-5 Lime is the high-temperature product of the calcination of limestone. Although limestone deposits are found in every state, only a small portion is pure enough for industrial lime manufacturing. To be ## Lime - Sulphuric Acid Manufactured by calcining limestone at about 1315°C (2400°F) which drives off the chemically bound carbon dioxide. CaCO3 + heat = CaO + CO2. CaCO3•MgCO3 + heat = CaO•MgO + 2CO2: Slaked Lime: Slaked lime is a slurry formed by reacting quicklime with water. Slaking is a highly exothermic reaction with considerable amounts of heat evolved. ## Effects of Armoring on Limestone Neutralization of AMD limestone surfaces and hinders dissolution rates, but iron hydrolysis and precipitation reactions increase the required neutralization time. Our objective was to develop an empirical model for the neutralization rate of acid mine drainage by limestone. The effects of limestone surface area, dissolved iron concentration and iron armoring were ## Thermal Decomposition of Calcium Carbonate (solutions ... Limestone is a common rock that is a very useful material. For example, it is used for building and road making, and also as a starting material for making many other products. This activity illustrates some of the chemistry of limestone (calcium carbonate) and other materials made from it. ## The Chemistry of Acid Rain Acid–base reactions can have a strong environmental impact. For example, a dramatic increase in the acidity of rain and snow over the past 150 years is dissolving marble and limestone surfaces, accelerating the corrosion of metal objects, and decreasing the pH of natural waters. ## Equilibrium - Chemistry & Biochemistry It means that the products of a chemical reaction, under certain conditions, can be combined to re-form the reactants. In 1045 we learned mostly about reactions that proceed to completion. These reactions are considered irreversible because the energy that would be required to reverse the process of the forward reaction is prohibitive. ## 9.3 Stoichiometry of Gaseous Substances, Mixtures, and ...
Feb 10, 2015 · The study of the chemical behavior of gases was part of the basis of perhaps the most fundamental chemical revolution in history. French nobleman Antoine Lavoisier, widely regarded as the “father of modern chemistry,” changed chemistry from a qualitative to a quantitative science through his work with gases. He discovered the law of conservation of matter, discovered the role of oxygen in ... ## What is the chemical formula of limestone? - Quora Answer (1 of 13): Limestone, a sedimentary rock that is dominantly composed of the calcium-bearing carbonate minerals calcite and dolomite. Calcite is chemically calcium carbonate (formula CaCO3). Dolomite is chemically calcium-magnesium carbonate (formula CaMg(CO3)2). ## What is the reaction between limestone and hydrochloric ... Answer (1 of 5): Chemically limestone is Calcium carbonate (CaCO3). So, when it reacts with HCl it forms Calcium chloride and Carbonic acid: CaCO3 + 2 HCl = CaCl2 + H2CO3. H2CO3 is a very unstable compound and it readily breaks into Carbon dioxide and Water. Again, CaCO3 + 2 HCl = CaCl2 + CO2 + H2O ## Decomposition Reaction | Types and Classification of ... A decomposition reaction is the separation of a substance into two or more substances that may differ from each other and from the original substance. Read more about the decomposition reaction, its process and chemistry, at Vedantu. ## 5 Chemical Weathering Examples and How They Occur The acids create a reaction when they hit stone, causing the surface to wear and the composition to soften. Acidification can also be caused by organisms like lichens, which are created from algae and fungi. One well-known case of rapid weathering and blackening of stone is the weathering on the 1,000-year-old Leshan Giant Buddha in China. The ...
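Several of the excerpts above quote the calcination reaction CaCO3 = CaO + CO2. A quick mass-balance check with standard atomic masses (the helper and figures below are illustrative, not taken from any of the quoted sources):

```python
# Standard atomic masses in g/mol (rounded reference values).
ATOMIC_MASS = {'Ca': 40.078, 'C': 12.011, 'O': 15.999}

def molar_mass(formula):
    """Molar mass of a simple composition dict, e.g. {'Ca': 1, 'C': 1, 'O': 3}."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

m_caco3 = molar_mass({'Ca': 1, 'C': 1, 'O': 3})  # ~100.09 g/mol
m_cao = molar_mass({'Ca': 1, 'O': 1})            # ~56.08 g/mol
m_co2 = molar_mass({'C': 1, 'O': 2})             # ~44.01 g/mol

# Mass is conserved across CaCO3 -> CaO + CO2 ...
assert abs(m_caco3 - (m_cao + m_co2)) < 1e-6

# ... and roughly 44% of the charged limestone leaves as CO2 gas.
co2_per_tonne = 1000.0 * m_co2 / m_caco3  # kg of CO2 per tonne of pure CaCO3
print(round(co2_per_tonne))  # ~440
```

This is consistent with the "crumbly, lighter" quicklime described above: nearly half the mass of pure limestone is driven off during calcination.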
# American Institute of Mathematical Sciences November 2013, 12(6): 2811-2827. doi: 10.3934/cpaa.2013.12.2811 ## Well-posedness and long time behavior of an Allen-Cahn type equation 1 UMR 6086 CNRS. Laboratoire de Mathématiques et Applications - Université de Poitiers, SP2MI - Boulevard Marie et Pierre Curie - Téléport 2, BP30179 - 86962 Futuroscope Chasseneuil Cedex, France Received August 2011 Revised January 2012 Published May 2013 The aim of this article is to study the existence and uniqueness of solutions for an equation of Allen-Cahn type and to prove the existence of the finite-dimensional global attractor as well as the existence of exponential attractors. Citation: Haydi Israel. Well-posedness and long time behavior of an Allen-Cahn type equation. Communications on Pure & Applied Analysis, 2013, 12 (6) : 2811-2827. doi: 10.3934/cpaa.2013.12.2811
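The abstract does not reproduce the equation itself, but the classical Allen-Cahn prototype $u_t = \varepsilon^2 u_{xx} + u - u^3$ gives a feel for the dynamics whose attractors are studied. A deliberately simple explicit finite-difference sketch (the scheme and parameters are illustrative only, not taken from the paper):

```python
import numpy as np

# Classical 1D Allen-Cahn: u_t = eps^2 u_xx + u - u^3 on [0, 1],
# crude zero-flux (Neumann) boundaries, forward-Euler time stepping.
eps, dx, dt = 0.05, 0.01, 2e-5
x = np.linspace(0.0, 1.0, 101)
u = np.tanh((x - 0.5) / (np.sqrt(2.0) * eps))  # interface between the phases -1 and +1

for _ in range(2000):
    uxx = np.empty_like(u)
    uxx[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    uxx[0], uxx[-1] = uxx[1], uxx[-2]  # rough zero-flux approximation
    u = u + dt * (eps**2 * uxx + u - u**3)

print(float(u.min()), float(u.max()))  # stays within roughly [-1, 1]
```

The tanh interface profile is close to a steady state, so the front persists while $u$ remains near the two stable phases $\pm 1$ of the double-well nonlinearity.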
## Precalculus: Concepts Through Functions, A Unit Circle Approach to Trigonometry (3rd Edition) $x^2-4xy+4y^2$ We square (expand) the binomial to obtain: $$(x - 2y)^2= (x-2y)(x-2y)=x^2+2\cdot x \cdot (-2y)+(-2y)^2=x^2-4xy+4y^2 .$$
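The expansion can be confirmed with a computer algebra system; a one-line sympy check (assuming sympy is available):

```python
import sympy as sp

x, y = sp.symbols('x y')

# Expand the binomial square from the worked solution.
expanded = sp.expand((x - 2*y)**2)
print(expanded)  # x**2 - 4*x*y + 4*y**2
```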
# Algebra help needed. Overwhelmed with other papers. See attached Algebra help needed. Overwhelmed with other papers. See attached $Algebra help needed. Overwhelmed with other papers. See attached$ ## This Post Has 7 Comments 1. LilFreaky666 says: 1. A The wording of the phrasing is rude and sounds ungrateful and will likely start a fight, sending the situation backwards in a direction not intended or wanted. 2. C Getting into a habit makes it easier to keep doing things and will create a time set aside for it. Other options would create problems such as failing or slipping grades from not paying attention in class, or sleep deprivation from not going to sleep. 3. B This would separate her "new identity" from her husband, and be the opposite of what Heather wants to do. If she wants to get him more involved, she should include him in that aspect of her life by doing the other things listed as answers. 2. lovelyheart5337 says: I believe the answer is: Learned helplessness Learned helplessness refers to a condition when a certain organism (including humans) is exposed to a situation where it's forced to constantly experience unavoidable negative stimuli (i.e. pain). Over time, this would most likely make that organism accept its fate without making any effort to avoid the negative stimuli again. 3. Expert says: I think the answer to this question is b. Just trying to help others $Find the perimeter of the shape below a- 7.4 units b- 8.9 units c- 11.2 units d- 13.6 units$ 4. asdfjk6421 says: Step-by-step explanation: 5. 166386 says: The first one is A, because putting a negative connotation towards the Husband would be conflictive. The 2nd one is also C, because with a set schedule, Heather can do everything she needs to. The third one is A because obviously if she doesn't conjoin both lives, the Husband will not be involved. 6. Expert says: Geometric pattern of 2 i’m guessing. not really sure about what you’re asking but i hope this helps! 7.
bankrollbaby01 says: Answer for 1 Choice A is the best. I’m feeling overwhelmed trying to manage my new schedule with school, home, and family life, and could really use extra support right now. Answer for 2 Choice C is the best. Dedicate the same time each afternoon to completing coursework. Answer for 3 Choice C is the best. Let her husband read her papers and other assignments.
Evaluate the integral over the helicoid [Surface integrals] 1. Feb 3, 2012 ysebastien 1. The problem statement, all variables and given/known data Evaluate the integral $\int\int_S \sqrt{1+x^{2}+y^{2}}dS$ where S:{ r(u,v) = (ucos(v),usin(v),v) |$0\leq u\leq 4,0\leq v\leq 4\pi$ } 2. The attempt at a solution Here is my attempt, I am fairly sure I am right, but it is an online assignment and it keeps telling me I am wrong. I just wanted to double check before I contact the professor to see if he made a mistake. Let x=ucos(v),y=usin(v) and the Jacobian determinant is $u(cos^{2}(v)+sin^{2}(v))=u$ now my new integral is $\int_0^{4\pi} \int_0^4 u\sqrt{(1+u^{2})} dudv$ Now this is a fairly straightforward problem and I do a simple substitution: let $\phi=1+u^{2}$, so $d\phi=2u\,du$ $\frac{1}{2}\int_0^{4\pi} \int \sqrt{\phi} d\phi dv=2\pi[\frac{2}{3}\phi^{\frac{3}{2}}]=2\pi[\frac{2}{3}(1+u^{2})^{\frac{3}{2}}]_0^4$ and finally after plugging in the values, I get $2\pi(\frac{2}{3} 17^{\frac{3}{2}} - \frac{2}{3})$ Does anyone else see a flaw in this? again, I am pretty sure I am right but would appreciate it immensely if someone could point out my mistake! Thank you EDIT: Also if I made any typos in the equations my apologies, this is my first time editing with latex commands Last edited: Feb 3, 2012 2. Feb 3, 2012 Nevermind.
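For surface integrals over a parametrised surface, an independent check is to build the surface element $|\vec r_u \times \vec r_v|$ symbolically rather than reaching for a change-of-variables Jacobian. A sympy sketch for this helicoid (illustrative; if its value disagrees with a hand computation, the surface element is the usual suspect):

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)

# Parametrisation of the helicoid from the problem statement.
r = sp.Matrix([u * sp.cos(v), u * sp.sin(v), v])

ru, rv = r.diff(u), r.diff(v)
n = ru.cross(rv)

# Surface element |r_u x r_v|; for this surface it is sqrt(1 + u^2), not u
# (u would be the polar-coordinates Jacobian in the plane).
dS = sp.sqrt(sp.simplify(n.dot(n)))
print(dS)  # sqrt(u**2 + 1)

# On the surface, the integrand sqrt(1 + x^2 + y^2) equals sqrt(1 + u^2).
integrand = sp.simplify(sp.sqrt(1 + u**2) * dS)
integral_val = sp.integrate(integrand, (u, 0, 4), (v, 0, 4 * sp.pi))
print(integral_val)  # 304*pi/3
```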
• Electric charge quantisation in 331 models with exotic charges • # Fulltext https://www.ias.ac.in/article/fulltext/pram/094/0084 • # Keywords Electric charge quantisation; 331 models; gauge anomalies cancellation; particles with exotic electric charges • # Abstract The extensions of the Standard Model based on the $SU(3)_{C} \otimes SU(3)_{L} \otimes U(1)_{X}$ gauge group are known as 331 models. Different properties such as the fermion assignment and the electric charges of the exotic spectrum, which define a particular 331 model, are fixed by a $\beta$ parameter. In this article, we study the electric charge quantisation in two versions of the 331 models, set by the conditions $\beta = 1/(3\sqrt{3})$ and $\beta = 0$. In these frameworks, exotic particles arise, for instance new leptons and gauge bosons with fractional electric charge. Additionally, depending on the version, quarks with non-standard fractional or even neutral electric charges appear. Considering the definition of the electric charge operator as a linear combination of the group generators that annihilates the vacuum, classical constraints from the invariance of the Lagrangian, and gauge and mixed gauge-gravitational anomaly cancellation, the quantisation of the electric charge can be verified in both versions. • # Author Affiliations 1. Grupo de Investigación en Física, Universidad San Ignacio de Loyola, Av. La Fontana 550, La Molina, Lima, Peru • # Pramana – Journal of Physics
## Friday, 7 November 2014 ### New blog Please visit dhruvgairola.com to view my latest blog posts. I am discontinuing using blogger as my blog hosting platform. ## Sunday, 2 November 2014 ### Google removes my app from playstore. This is a notification that your application, Scrabble Game, with package ID com.dhruvg.apps.bingledemo, is currently in violation of our developer terms. REASON FOR WARNING: Violation of the spam provisions of the Content Policy. Please refer to the spam policy help article for more information. • Your title and/or description attempts to impersonate or leverage another popular product without permission. Please remove all such references. Do not use irrelevant, misleading, or excessive keywords in apps descriptions, titles, or metadata. Your application will be removed if you do not make modifications to your application’s description to bring it into compliance within 7 days of the issuance of this notification. If you have additional applications in your catalog, please also review them for compliance. Note that any remaining applications found to be in violation will be removed from the Google Play Store. Please also consult the Policy and Best Practices and the Developer Distribution Agreement as you bring your applications into compliance. You can also review this Google Play Help Center article for more information on this warning. All violations are tracked. Serious or repeated violations of any nature will result in the termination of your developer account, and investigation and possible termination of related Google accounts. I asked them for an explanation of the violating keywords. I had an idea that this might be the "Scrabble" keyword. However, my app is built only for tournament level Scrabble players. It is very specialized, and removing that keyword would probably result in very few downloads. Regardless, I removed all but one reference to the "Scrabble" keyword. 
However, this wasn't enough : This is a notification that your application, Scrabble Bingo Game, with package ID com.dhruvg.apps.bingledemo, has been removed from the Google Play Store. REASON FOR REMOVAL: Violation of the spam provisions of the Content Policy. Please refer to the keyword spam policy help article for more information. • Your title and/or description attempts to impersonate or leverage another popular product without permission. Please remove all such references. Do not use irrelevant, misleading, or excessive keywords in apps descriptions, titles, or metadata. This particular app has been disabled as a policy strike. If your developer account is still in good standing, you may revise and upload a policy compliant version of this application as a new package name. This notification also serves as notice for remaining, unsuspended violations in your catalog, and you may avoid further app suspensions by immediately unpublishing any apps in violation of (but not limited to) the above policy. Once you have resolved any existing violations, you may republish the app(s) at will. Before publishing applications, please ensure your apps’ compliance with the Developer Distribution Agreement and Content Policy. All violations are tracked. Serious or repeated violations of any nature will result in the termination of your developer account, and investigation and possible termination of related Google accounts. If your account is terminated, payments will cease and Google may recover the proceeds of any past sales and the cost of any associated fees (such as chargebacks and payment transaction fees) from you. If you feel we have made this determination in error, you can visit the Google Play Help Center article for additional information regarding this removal. My appeal : My app has been in the store for 2 years now, and I have so far complied with all requirements. It is a free app (I earn no money from ads) and the app was built by me alone.
I only made it to help competitive tournament-level Scrabble players improve their game. The app is innately tied to the game of Scrabble, hence the "Scrabble" keyword is necessary to promote the game, because it is only directed at improving a player's Scrabble vocabulary via 7-word scores. I implore you to reconsider reinstating my app on the Play Store. At least advise me on the specific keyword violation so I can remove it. I have no intention of spamming users. My game is only built for Scrabble players, so how can I remove the Scrabble keyword from the game title? I would be grateful for any help and guidance and am willing to cooperate fully.

And the automated response:

Hi, We have reviewed your appeal and will not be reinstating your app. This decision is final and we will not be responding to any additional emails regarding this removal. If your account is still in good standing and the nature of your app allows for republishing you may consider releasing a new, policy compliant version of your app to Google Play under a new package name. We are unable to comment further on the specific policy basis for this removal or provide guidance on bringing future versions of your app into policy compliance. Instead, please reference the REASON FOR REMOVAL in the initial notification email from Google Play.

Lessons learnt:

- Google has terrible customer service. There is no way for me to talk to a human being about my problem.
- All information about downloads, ratings, etc. is lost and, apparently, this decision is irreversible. I will avoid developing apps for Android (though I highly doubt iOS is any better in these types of situations).
- This is my last blog post on Google's Blogger platform. I think it's time for me to stop relying on just a single platform/provider/company.
Mostly because of this line: "Serious or repeated violations of any nature will result in the termination of your developer account, and investigation and possible termination of related Google accounts."

- All future blog posts will be hosted on GitHub Pages instead.

## Friday, 15 August 2014

### Paper summary : Injecting Utility into Anonymized Datasets by Kifer et al., SIGMOD '06

Summary

• In this paper, we will introduce a formal approach to measuring utility. Using this measure, we will show how to inject utility into k-anonymous and l-diverse tables, while maintaining the same level of privacy.

Introduction

• k-anonymity and l-diversity rely on generalizations to preserve privacy.
• In the real world, many attributes often need to be suppressed in order to guarantee privacy, which is bad for utility no matter what operations are being performed on the data.
• One solution is to publish marginals (i.e., contingency tables for a subset of the attributes) along with the original anonymized data. This would require anonymizing the marginals too (also via generalizations) in order to preserve privacy.
• However, there are many possible subsets of attributes (marginals) for which contingency tables can be built. How do we decide which particular collection of marginals to publish?

Preliminaries

• (Defn 2.3) k-anonymity: Table $D$ satisfies k-anonymity if $\forall t \in D$, there are at least $k-1$ other tuples that have the same values as $t$ for every QI (quasi-identifier attribute). Note that we assume that the set of all non-sensitive attributes forms the QI.
• (Defn 2.4) Anonymized group: An anonymized group is a setwise maximal set of tuples that have the same (generalized) value for each non-sensitive attribute.
• (Defn 2.5) (c,l)-diversity: Let $c>0$ be a constant and $q$ be an anonymized group. Let $S$ be a sensitive attribute, let $s_{1},..., s_{m}$ be the values of $S$ that appear in $q$, and let $r_{1},..., r_{m}$ be their frequency counts.
Let $r_{(1)},...,r_{(m)}$ be those counts sorted in descending order. We say $q$ satisfies (c,l)-diversity wrt $S$ if $r_{(1)} \leq c \sum_{i=l}^{m}r_{(i)}$.

Existing utility measures

• Generalization height is one utility measure.
• Another measure is discernibility, which assigns a cost to each tuple based on how many other tuples are indistinguishable from it. It is the sum of the squares of the anonymized group sizes, plus $|D|$ times the number of suppressed tuples.
• Both of the above measures do not consider the distributions of the tuples.
• A third measure is the classification metric, appropriate when one wants to train a classifier over the anonymized data. Thus, one attribute is treated as a class label. This metric assigns a penalty of 1 for every suppressed tuple. If a tuple $t$ is not suppressed, it looks at the majority class label of $t$'s anonymized group: if the class label of $t$ differs from the majority, assign a penalty of 1. The metric is the sum of all penalties. But it is not clear what happens if one wants to build classifiers for several different attributes.
• A fourth measure is the information to privacy loss ratio, also designed for classification. However, it suffers from the same weakness as the classification metric.

Proposed utility measure

• We view the data as an iid sample generated from some distribution $F$.
• Suppose tuples in our table have (discrete valued) attributes $U_{1},..., U_{n}$. Then we can estimate $F$ using $\hat{F_{1}}$, where $\hat{F_{1}}$ corresponds to $P(t.U_{1}=u_{1},...,t.U_{n} = u_{n})$ and $t.U_{1}$ refers to the attribute value $U_{1}$ for tuple $t$.
• Now suppose we are given anonymized marginals (e.g., 23% of tuples have the age attribute appearing between [46-50] years old while 77% appear between [50-55]). We can view 23% and 77% as constraints, i.e., the marginals represent constraints.
• We can compute the (maximum entropy) probability distribution that corresponds to these constraints, $\hat{F_{2}}$.
• (It turns out that the maximum entropy estimate is also the maximum likelihood estimate associated with log-linear models.)
• We now have $\hat{F_{1}}$ associated with the original data and $\hat{F_{2}}$ associated with the anonymized marginals. We can compare them using the standard KL (Kullback–Leibler) divergence, which is minimized when $\hat{F_{1}} = \hat{F_{2}}$.
• Since our goal is to determine which anonymized marginals to publish, we want to minimize the KL-divergence between the various possible $\hat{F_{2}}$ and the fixed $\hat{F_{1}}$.

Extending privacy definitions

• We can extend k-anonymity and l-diversity to collections of anonymized marginals.
• (Defn 4.1) k-link anonymity: A collection of anonymized marginals $M_{1},...,M_{r}$ satisfies k-link anonymity if for all $i = 1,...,r$ and for all tuples $t \in NonSensitive(M_{i})$, either $M_{i}(t) = 0$ or $M_{i}(t) \geq k$. $NonSensitive(M_{i})$ refers to the non-sensitive attributes of which $M_{i}$ is comprised, while $M_{i}(t)$ refers to the number of tuples which have the same attribute values as $t$.
• We must also be sure that an adversary cannot use combinatorial techniques to determine that a tuple with a certain value for its quasi-identifiers exists in the original table and that the number of such tuples is less than $k$.
• (Defn 4.2) k-combinatorial anonymity: Let $D$ be the domain of the non-sensitive attributes. A collection of anonymized marginals $M_{1},...,M_{r}$ satisfies k-combinatorial anonymity if for all $t \in D$ one of the following holds:
  1. For all tables $T$ consistent with $M_{1},...,M_{r}$, $T(t) = 0$
  2. There exists a table $T$ consistent with $M_{1},...,M_{r}$ st $T(t) \geq k$
• (Defn 4.3) MaxEnt l-diversity: $M_{1},...,M_{r}$ satisfy MaxEnt l-diversity if the maximum entropy distribution that is consistent with $M_{1},...,M_{r}$ satisfies l-diversity.
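The $\hat{F_{1}}$-versus-$\hat{F_{2}}$ comparison above can be sketched on a toy single-attribute table. The data, bucket boundaries, and variable names below are illustrative, not from the paper; for a single attribute, the maximum-entropy distribution consistent with a bucketed marginal simply spreads each bucket's mass uniformly over the values it covers.

```python
from math import log

# Toy single-attribute table: exact ages of 10 individuals.
ages = [46, 47, 47, 50, 51, 51, 51, 52, 53, 53]
domain = range(46, 56)

# F1-hat: empirical distribution of the original data.
f1 = {a: ages.count(a) / len(ages) for a in domain}

# Published anonymized marginal: age bucket -> fraction of tuples
# (30% of tuples in [46,49], 70% in [50,55]).
buckets = {(46, 49): 0.3, (50, 55): 0.7}

# F2-hat: the maximum-entropy distribution consistent with the marginal
# spreads each bucket's mass uniformly over the ages it covers.
f2 = {}
for (lo, hi), mass in buckets.items():
    for a in range(lo, hi + 1):
        f2[a] = mass / (hi - lo + 1)

# KL(F1 || F2): the smaller it is, the more utility the marginal retains.
kl = sum(p * log(p / f2[a]) for a, p in f1.items() if p > 0)
print(round(kl, 4))
```

Coarser buckets give a flatter $\hat{F_{2}}$ and a larger divergence, which is exactly the search criterion the paper proposes for choosing which marginals to publish.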
Experiments

• Experiments showed that even a very simple search for anonymized marginals can yield dramatic results compared to just a single anonymized table.

## Thursday, 7 August 2014

### Paper summary : The Cost of Privacy by Brickell, KDD '08

Summary

• Association of QIs (quasi-identifier attributes) with sensitive attributes is known (when breached) as sensitive attribute disclosure, in contrast to membership disclosure, which is concerned with identifying whether an individual is in a table.
• The goal of the paper is to evaluate the tradeoff between the incremental gain (by generalization and suppression) in data mining utility and the degradation in privacy caused by publishing QI attributes with sensitive attributes (by the very same generalization and suppression).
• Privacy loss is the increase in the adversary's ability to learn the sensitive attributes corresponding to a given identity.
• Utility gain is the increase in accuracy of machine learning (ML) tasks evaluated on the sanitized dataset.

Notations

• Tuples $t_{i}$ and $t_{j}$ are quasi-equivalent if $t_{i}[Q] = t_{j}[Q]$ for the QI attribute $Q$. We can thus partition the original table $T$ into QI classes (equivalence classes), denoted $<t_{j}>$.
• Let $\epsilon_{Q} \subseteq T$ be a set of representative records, one for each equivalence class in $T$.
• Consider a subset of tuples $U = \{u_{1}, u_{2}, ..., u_{p}\} \subseteq T$. For any sensitive attribute value $s$, $U_{s} = \{u \in U \mid u[S] = s\}$. We define $p(U,s) = \frac{|U_{s}|}{|U|}$.
• Semantic privacy is concerned with the prior/posterior knowledge of the adversary. Anonymization is usually performed by randomly perturbing attribute values.
• In contrast, syntactic privacy (microdata privacy) is only concerned with the distribution of attribute values in the sanitized table, without any regard for the adversary's knowledge. Privacy here is achieved by generalization and suppression.
• This paper is only concerned with semantic privacy because of the numerous weaknesses of syntactic privacy (weaknesses of k-anonymity, l-diversity, and t-closeness were presented by the authors).

Attack model

• The adversary only sees $T'$ (the sanitized table).
• We assume that generalization and suppression are only carried out on QI attributes, not on sensitive attributes. Why? To keep sanitized data "truthful" [31,34] (this reasoning doesn't make much sense to me).
• We define the adversary's baseline knowledge, $A_{base}$, which is the minimum information he/she can learn from trivial sanitization (publishing QIs and sensitive attributes in two separate tables). Formally, $A_{base}$ is a probability density function of the sensitive attribute values in $T$, i.e., $A_{base} = <p(T, s_{1}), p(T,s_{2}), ... , p(T, s_{l})>$.
• The adversary's posterior knowledge is denoted by $A_{san}$, i.e., what he/she learns from $T'$ about the sensitive attributes of some target $t \in T$. The adversary can identify the equivalence class for $t$ (i.e., $<t>$) from $T'$ since the related generalization hierarchy is totally ordered. Thus, formally, $A_{san} = <p(<t>, s_{1}), p(<t>, s_{2}), ..., p(<t>, s_{l})>$.
• We can now define sensitive attribute disclosure:

$A_{diff}(<t>) = \frac{1}{2} \sum_{i=1}^{l}|p(T, s_{i}) - p(<t>, s_{i})|$

$A_{quot}(<t>) = \max_{s} \left|\log \frac{p(<t>, s)}{p(T, s)}\right|$

Basically, these values quantify how much more the adversary learns from observing sanitized QI attributes than he would have learnt from trivial sanitization.

Semantic Privacy

• (Defn 1) $\delta$-disclosure privacy: $<t>$ is $\delta$-disclosure private wrt a sensitive attribute $S$ if, for all $s \in S$, $|\log \frac{p(<t>, s)}{p(T,s)}| < \delta$. A table is $\delta$-disclosure private if $\forall t \in \epsilon_{Q}$, $<t>$ is $\delta$-disclosure private (intuitively, this means that the distribution of sensitive attribute values within each QI class is roughly the same as in $T$ overall).
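The baseline and per-class distributions above ($A_{base}$, $p(U,s)$, $A_{diff}$) take only a few lines to compute. The toy table and names below are mine, not from the paper:

```python
from collections import Counter, defaultdict

# Toy sanitized table T': (generalized QI value, sensitive value).
table = [
    ("[20-30]", "flu"), ("[20-30]", "flu"), ("[20-30]", "cold"),
    ("[30-40]", "flu"), ("[30-40]", "cold"), ("[30-40]", "cold"),
]

def dist(rows):
    """p(U, s) for every sensitive value s appearing in U."""
    counts = Counter(s for _, s in rows)
    return {s: n / len(rows) for s, n in counts.items()}

baseline = dist(table)  # A_base: distribution over the whole table T

classes = defaultdict(list)  # QI equivalence classes <t>
for row in table:
    classes[row[0]].append(row)

def a_diff(rows):
    """A_diff(<t>) = 1/2 * sum_s |p(T, s) - p(<t>, s)|."""
    p = dist(rows)
    support = set(baseline) | set(p)
    return 0.5 * sum(abs(baseline.get(s, 0.0) - p.get(s, 0.0))
                     for s in support)

for qi, rows in classes.items():
    print(qi, round(a_diff(rows), 4))  # 0.1667 for both classes here
```

A value of 0 means a QI class reveals nothing beyond the trivial sanitization; larger values mean more sensitive attribute disclosure.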
• Defn 1 allows us to relate $\delta$ to the information gain parameter (defined as $GAIN(S,Q) = H(S) - H(S|Q)$, used by decision tree classifiers such as ID3 and C4.5). Refer to Lemma 1 below.
• (Lemma 1) If $T$ satisfies $\delta$-disclosure privacy, then $GAIN(S,Q) < \delta$ (the proof is easy enough to understand).

Syntactic Privacy

• These privacy measures refer to k-anonymity (i.e., $\forall t_{j} \in \epsilon_{Q}, |<t_{j}>| \geq k$), l-diversity, (c,l)-diversity and t-closeness. These measures have many weaknesses. Ultimately, the authors are only concerned with semantic privacy and not with syntactic privacy.

Measuring privacy

• We have a bound on disclosure based on Definition 1. However, Definition 1 is insufficient for expressing privacy because actual table instances may have less sensitive attribute disclosure (thus more privacy) than permitted by the definition.
• Thus, we define sensitive attribute disclosure as: $A_{know} = \frac{1}{|T|} \sum_{t \in \epsilon_{Q}} |<t>| \cdot A_{diff}(<t>)$ where $know$ refers to knowledge gain.
• Another metric quantifies the attacker's ability to guess $t$'s sensitive attribute using a greedy strategy (guess the most common sensitive attribute value in $<t>$). For $<t>$, let $s_{max}(<t>)$ be the most common sensitive attribute value in $<t>$. Thus, $A_{acc} = \frac{1}{|T|} \sum_{t \in \epsilon_{Q}}(|<t>| \cdot p(<t>, s_{max}(<t>))) - p(T, s_{max}(T))$ where $acc$ stands for adversarial accuracy gain.
• $A_{acc}$ underestimates the amount of info leaked by $T'$ because it doesn't consider the shifts in the probability of non-majority sensitive attribute values.

Utility

• One approach to measuring utility is to minimize the amount of generalization and suppression applied to QI attributes [10].
• Utility measurement is innately tied to the computations that are to be performed on $T'$. We are concerned with measuring the utility of classification tasks.
• Intuitively, the utility of $T'$ should be measured by how well cross-attribute correlations are preserved after sanitization.
• In our context, we are using the semantic definition of privacy and the semantic definition of utility, and wish to evaluate the tradeoff between them.
• For some workload $w$, we measure the workload-specific utility of trivially sanitized datasets (where privacy is maximal), denoted by $U_{base}^{w}$. One example of utility is the accuracy of the classifier.
• We then compute $U_{san}^{w}$ for $T'$.
• $U_{san}^{w} - U_{base}^{w}$ is the utility gain. If this value is close to 0, sanitization is pointless (i.e., trivial sanitization provides as much utility as any sophisticated sanitization algorithm while providing as much privacy as possible).
• Another utility measure is $U_{max}^{w}$, computed on the original dataset $T$. If this value is low, the workload is inappropriate for the data regardless of sanitization, because even if users were given $T$, utility would have been low.

Experiments

• Experiments suggest that trivial sanitization produces equivalent utility and better privacy than non-trivial generalization and suppression.
• The authors used the implementation of generalization and suppression from LeFevre [20]. Weka was used for classification.
• Of course, if sanitization has been correctly performed, it is impossible to build a good classifier for $S$ (the sensitive attribute) based only on $Q$ (the QI attributes), because good sanitization must destroy any correlation between $S$ and $Q$.
• Perhaps it is not surprising that sanitization makes it difficult to build an accurate classifier for $S$. But experiments on the UCI dataset show that sanitization also destroys correlations between $S$ and neutral attributes.
• Experiments indicate that it is difficult to find databases on which sanitization permits both privacy and utility.
• However, we can construct artificial databases for specific scenarios that permit both privacy and accuracy (the example provided is very contrived).

## Wednesday, 6 August 2014

### Paper summary : The Boundary Between Privacy and Utility in Data Publishing by Rastogi et al., VLDB '07

Summary

• Definition of privacy: an attacker's prior belief that tuple $t$ belongs to the private instance $I$, compared with his posterior belief that $t$ belongs to $I$ given the published instance $V$.
• The prior is given by $Pr[t] \leq k \frac{n}{m}$ where $n = |I|$, $m = |D|$ ($D$ refers to the domain of the attribute values of $I$), and $k$ is some constant. This is a reasonable definition because $I$ contains $n$ tuples out of $m$ possible tuples.
• The posterior is given by $Pr[t | V] \leq \gamma$ for some $\gamma$.
• Definition of utility: utility is the ability to estimate counting queries (an example of a counting query: SELECT count(*) WHERE country='Canada'). Utility is conveyed by the formula $|Q(I) - \widetilde{Q}(V)| \leq \rho \sqrt{n}$ where $n = |I|$, $Q(I)$ refers to a counting query over the instance $I$, $\widetilde{Q}(V)$ refers to a counting query estimate over the published instance $V$, and $\rho$ is some number.
• One result of the paper is that if $k$ is $\Omega (\sqrt{m})$ (hence $Pr[t] = \Omega(\frac{n}{\sqrt{m}})$, which represents a very powerful attacker with a large prior), then no algorithm can achieve both utility and privacy.
• If $k$ is $O(1)$ (i.e., $Pr[t] = O(\frac{n}{m})$, which means that the attacker is weaker), then the paper describes an algorithm that achieves a privacy/utility tradeoff given by the formula $\rho = \sqrt{\frac{k}{\gamma}}$. The idea is that if $\rho$ is small ($\rho$ represents the query estimation error, shown in the RHS of the utility formula), then utility is high because the query estimation error is low. $k$ conveys the prior while $\gamma$ conveys the posterior (definition of privacy).
• The described anonymization algorithm is a randomized algorithm such that for any query $Q$, the probability that the algorithm outputs $V$ with $|Q(I) - \widetilde{Q}(V)| \leq \rho \sqrt{n}$ is $1-\epsilon$ for some $\epsilon$.
• For every tuple, the attacker's prior is either small or equal to 1.
• (Defn 2.1) Let $d \in [0,1]$. A d-bounded adversary is one for which $\forall t \in D$ either $Pr[t] \leq d$ or $Pr[t] = 1$. If $Pr$ is tuple-independent, it is called a d-independent adversary.
• For a d-independent adversary, there is no correlation amongst tuples, though there may be correlation among attributes.
• (Defn 2.3) An algorithm is $(d,\gamma)$-private for all d-independent adversaries if (i) $\frac{d}{\gamma} \leq \frac{Pr[t|V]}{Pr[t]}$ and (ii) $Pr[t|V] \leq \gamma$.
• If $t$ fails condition (i), then there is negative leakage (i.e., the adversary learns more about the fact that tuple $t$ is not in $I$), while if it fails condition (ii), there is positive leakage (i.e., the adversary learns more about the fact that tuple $t$ is in $I$).
• (Defn 2.5) A randomized algorithm is called $(\rho, \epsilon)$-useful if it has an estimator $\widetilde{Q}$ such that for any $Q$ and $I$: $Pr[|Q(I) - \widetilde{Q}(V)| \geq \rho \sqrt{n}] \leq \epsilon$.

Impossibility result

• The goal is to prove that if the attacker's prior is large (i.e., $k=\Omega(\sqrt{m})$, thus $Pr[t] = \Omega(\frac{n}{\sqrt{m}})$ and $d= \Omega(\frac{n}{\sqrt{m}})$ by Defn 2.1), then no algorithm can achieve both privacy and utility.
• We first show that if $d = \Omega(\frac{n}{\sqrt{m}})$, no $(d, \gamma)$-private algorithm can achieve even a little utility.
• (Defn 3.1) The statistical difference between 2 distributions over a domain $X$ is: $SD(Pr_{A}, Pr_{B}) = \sum_{x \in X}{|Pr_{A}(x) - Pr_{B}(x)|}$.
• Connection between SD and utility:
• Let $Q$ be a large query (i.e., one that returns a sizable fraction if executed on the domain).
• When we execute $\widetilde{Q}$, we get errors which depend on $n=|I|$.
• If there is utility in algorithm $A$, users should be able to distinguish between when $Q(I) = n$ and when $Q(I) = 0$. This means we should be able to differentiate between $Pr_{A}(V) = Pr[V | E_{Q}]$ and $Pr_{B}(V) = Pr[V | E_{Q}']$, where $E_{Q}$ refers to the event where $Q(I) = n$ while $E_{Q}'$ refers to the event where $Q(I) = 0$.
• Intuitively, $SD(Pr_{A}, Pr_{B})$ should be large in order to differentiate $Pr_{A}$ and $Pr_{B}$. This would mean that utility exists in algorithm $A$, hence we can get a reasonable estimate for $Q$ by running $\widetilde{Q}$ on $V$.
• An algorithm is considered meaningless if $SD(Pr_{A}, Pr_{B})$ is smaller than 0.5 for 2/3 of the queries that a user executes.
• (Theorem 3.3) For all $\gamma < 1$, there exists a constant $c$ (independent of $m,n, \gamma$) such that algorithm $A$ is not $(d, \gamma)$-private for any $d \geq \frac{1}{c} \frac{\gamma}{1- \gamma} \frac{n}{\sqrt{m}}$ (i.e., no meaningful algorithm can offer $(k \frac{n}{m}, \gamma)$-privacy for $k=\Omega(\sqrt{m})$).
• But the absence of $(d, \gamma)$-privacy means that there exists some d-independent adversary for which one of the following happens for some tuple $t$: either (i) positive leakage (i.e., $Pr[t] \leq d$ but $Pr[t|V] \geq \gamma$) or (ii) negative leakage (i.e., $Pr[t|V] \ll Pr[t]$). Hence, we have proven the impossibility result.

Algorithm

• Let us assume that $k=O(1)$. Now we can develop an algorithm to publish $V$ from $I$ so that $V$ has some utility.
• $\alpha\beta$ algorithm:
  1. Let $D = D_{1} \times D_{2} \times ... \times D_{l}$
  2. $\forall t \in I$, insert $t$ into $V$ with probability $\alpha + \beta$
  3. $\forall t \in D \setminus I$, insert $t$ into $V$ with probability $\beta$
  4. Publish $V, \alpha, \beta, D$
• Query estimation algorithm:
  1. Let $D = D_{1} \times D_{2} \times ... \times D_{l}$
  2. Compute $Q(V) = n_{V}$
  3. Compute $Q(D) = n_{D}$
  4. $\widetilde{Q}(V) = \frac{n_{V} - \beta n_{D}}{\alpha}$
• (Theorem 4.3) The $\alpha\beta$ algorithm is $(d, \gamma)$-private, where $d \leq \gamma$, if we choose $\alpha$ and $\beta$ such that $\frac{\beta}{\alpha + \beta} \geq \frac{d(1-\gamma)}{\gamma (1-d)}$ and $\alpha + \beta \leq 1- \frac{d}{\gamma}$.
• (Theorem 4.4) The $\alpha\beta$ algorithm is $(\rho, \epsilon)$-useful.
• One extension described in the paper modifies the $\alpha\beta$ algorithm so that multiple views can be published over time.

## Thursday, 24 July 2014

### Paper summary : Authorization-Transparent Access Control for XML under the Non-Truman Model by Kanza et al., EDBT '06

• Truman model: invalid queries are modified.
• Non-Truman model: invalid queries are rejected.
• Security policies are specified using XPath, e.g., for /Dept[Name=CS]/Course exclude //Grade.
• Not only XML elements but also edges and paths can be concealed.
• A query is locally valid for dataset $D$ and set of rules $R$ if it conceals all relationships of $R$. A query is globally valid for a schema $S$ if it conceals $R$ for every $D$ that conforms to $S$.
• Document model: $D = (X, E_{D}, root_{D}, labelOf_{D}, valueOf_{D})$. $X$ refers to nodes (nodes contain metadata like ids), $E_{D}$ refers to elements of the XML (i.e., attribute values; this is, in reality, metadata of a node). $root_{D}$ is the root node, $labelOf_{D}$ is a function $f: X \to E_{D}$, and $valueOf_{D}$ is a function $f: X \to A$ where $A$ is a set of element values of atomic nodes (nodes with no outgoing edges).
• k-concealment of a relationship $(A,B)$ for some elements $A, B$ in $D$: given $b \in B$, we want a subset $A_{k} \subset A$ such that (i) no element in $A_{k}$ is an ancestor of another and (ii) the user cannot infer which among the $k$ candidates is the ancestor of $b$.
• An expansion of $D$ is $D''$, created by replacing $E_{D}$ with $E'$ (children) and $E''_{D}$ (descendants).
• A transitive closure $\bar{D}$ is an expansion of $D$ such that there is an edge between every 2 nodes connected by a direct path in $D$ and an edge from every node to itself.
• An XPath query is equivalent to evaluating the query over the transitive closure of the document.
• To prune a document expansion is to remove from $D''$ all edges that connect restricted nodes.
• The universe of expansions of $D$ is the set of all $D''$ such that $prune(\bar{D}) = prune(D'')$. Denote this by $U_{R}(D)$.
• (Local validity) Query $Q$ is locally valid if $Q(D) = Q(D''), \forall D'' \in U_{R}(D)$.
• (Pseudo validity) Query $Q$ is pseudo-valid if $Q(D) = Q(prune(\bar{D}))$. Relationships can leak here, e.g., singleton source disclosure (example 11).
• (Global validity) $Q$ is globally valid for $R$ given schema $S$ if $Q$ is locally valid for all $D$ that conform to $S$. This is more restrictive than local validity but offers benefits (no need to check $Q$ over all $D$ if $Q$ is globally valid, saving computation).
• $R$ is coherent if it is impossible to infer any hidden relationships from the relationships not pruned by the rules.
• $R$ has incomplete concealment in $D$ if either (i) $D$ has 3 elements $e1,e2,e3$ such that $prune(\bar{D})$ has an edge from $e1$ to $e2$ and from $e2$ to $e3$ but none from $e1$ to $e3$, or (ii) $D$ has 3 elements $e1,e2,e3$ such that $prune(\bar{D})$ has an edge from $e1$ to $e3$ and from $e2$ to $e3$ but none from $e1$ to $e2$.
• (Coherence) $R$ is coherent if incomplete concealment does not occur in $D$.
• Encapsulating edge in $\bar{D}$: $(e1, e2)$ encapsulates $(e1', e2')$ if there is a path running through all of these edges and $e1$ appears before (or at the same spot as) $e1'$ or $e2$ appears after (or at the same spot as) $e2'$.
• If $R$ is coherent, then $\forall (e1,e2)$ in $\bar{D}$ which are removed by pruning, if there is an edge $(e1',e2')$ which is encapsulated by $(e1,e2)$, then $(e1',e2')$ should be removed too.
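Condition (i) of incomplete concealment is mechanical to check on a small edge set. The sketch below transcribes only that condition (the toy document and node names are illustrative, and the paper's exact formulation may differ):

```python
from itertools import permutations

# Toy pruned transitive closure prune(D-bar): surviving ancestor edges.
edges = {("dept", "course"), ("course", "grade")}
# The transitive edge ("dept", "grade") was removed by a pruning rule.

def incomplete_concealment(edges):
    """Condition (i): e1->e2 and e2->e3 survive pruning but e1->e3
    does not, so a user can infer the hidden e1->e3 relationship."""
    nodes = {n for e in edges for n in e}
    return any((a, b) in edges and (b, c) in edges and (a, c) not in edges
               for a, b, c in permutations(nodes, 3))

print(incomplete_concealment(edges))                        # True: incoherent
print(incomplete_concealment(edges | {("dept", "grade")}))  # False
```

This is exactly the coherence requirement stated above: a pruning that removes an encapsulating edge must also remove the edges it encapsulates, otherwise the missing edge is inferable from the surviving path.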
• A weakness of the approach so far is singleton source disclosure, i.e., $\exists \rho=(x1,x2) \in R$ such that $(x1(D), x1x2(D))$ is not 2-concealed ($x1(D)$ denotes the answer to the query $x1$ in $D$). Solution: algorithm $computeK$ is provided.
• (Theorem 1) Algorithm $computeK$ returns $k$ such that (i) all the relationships in $R$ are k-concealed and (ii) $\exists r \in R$ which is not $(k+1)$-concealed.

### Crucial 16GB RAM upgrade for my Macbook Pro (Late 2011)

It had become really taxing for me to work with simultaneous applications on my 4GB MacBook Pro. Eclipse especially consumes an inordinate amount of memory. Hence I decided to purchase a 16GB RAM card on Amazon.

The innards of my MacBook Pro (late 2011)!

The difference between 4GB and 16GB is truly astounding. Now I can actually open up RStudio and Eclipse simultaneously. What a luxury :|

As an aside, it is ridiculous how shipping a RAM card from the US to Canada, even with the shipping charges, saves you a significant amount compared to buying the RAM card here. This is true for almost all products. Why Canada, WHY?!
# Application of Correspondence theorem for rings

Can someone help me with the following problem? I am just now getting familiar with the concepts of the Correspondence Theorem for rings, the Substitution Principle, and principal ideals, but I don't know how to put them all together.

Problem: Let $R$ be a ring and $f,g \in R.$ Set $\overline{R}=R/(f)$ and let $\overline{g}$ denote the coset of $g$ in $\overline{R}.$ Let $(\overline{g})$ be the principal ideal of $\overline{R}$ generated by $\overline{g}.$ According to the Correspondence Theorem, an ideal of $\overline{R}$ has the form $\frac{J}{(f)}$, with $(f) \subseteq J$, where the notation $\frac{J}{(f)}$ stands for $\frac{J}{(f)}=\{\overline{a} \mid a \in J\}.$ How can I show that $$(\overline{g})=\frac{(f,\ g)}{(f)}$$ and $$\frac{R}{(f,g)} \cong \frac{\overline{R}}{(\overline{g})}?$$

Because of the Correspondence Theorem, the first equation comes down to the fact that $(f,g)$ is the pre-image of $(\bar g)$ under the map $R \to R/(f)$. The second isomorphism is obtained by applying one of the isomorphism theorems to $(f) \subset (f,g) \subset R$, together with the first equation.
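Spelled out, the answer's two steps are (a sketch in the question's notation):

```latex
% (1) (f,g) is the preimage of (\overline{g}) under \pi : R \to R/(f),
%     so the Correspondence Theorem gives
(\overline{g}) = \frac{(f,g)}{(f)}.
% (2) The third isomorphism theorem applied to (f) \subseteq (f,g) \subseteq R,
%     combined with (1):
\frac{\overline{R}}{(\overline{g})}
  = \frac{R/(f)}{(f,g)/(f)}
  \cong \frac{R}{(f,g)}.
```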
# Characterize the commutative rings with trivial group of units

This question suggested to me the following: Characterize the commutative unitary rings $R$ with trivial group of units, that is, $R^{\times}=\{1\}$.

The local case was solved here a long time ago and it's very simple. In general, such a ring must have characteristic $2$ and Jacobson radical $J(R)=(0)$. At present I know two classes of examples: direct products of $\mathbb Z/2\mathbb Z$ and polynomial rings over $\mathbb Z/2\mathbb Z$. (One can also ask for such a characterization in the non-commutative case, but I'm primarily interested in the commutative setting.)

• Why do you think that there is a characterization? I doubt it. – Martin Brandenburg Feb 23 '14 at 15:59
• (1) If $R^\times=\{1\}$, then the same holds for any (unitary) subring of $R$; (2) $\mathbb F_2[x,y]/(xy)$ has only one unit. (3) If $R$ is reduced and $(R/P)^\times=\{1\}$ for all minimal prime ideals $P$ of $R$, then $R$ has only one unit. (4) Because of (1), it will even be hard to characterize integral domains with only one unit. – Cantlog Feb 28 '14 at 23:25
• @Cantlog How did you come up with the example (2)? – user26857 Mar 1 '14 at 22:39
• @user121097: thanks to (3). – Cantlog Mar 2 '14 at 16:19
• @Cantlog: Just a side note: your points (1)-(3) can be immediately deduced from the first paragraph in the first update to my answer (although I deal with the graded (i.e. projective) case, the relevant arguments extend to the affine case) – zcn Mar 3 '14 at 3:28

This is only a partial answer, to record some thoughts:

If $R$ is semilocal, i.e. has only finitely many maximal ideals (in particular, if $R$ is finite), then one can characterize these rings as precisely the finite products of copies of $\mathbb{F}_2$. If $R$ is semilocal, then any surjection $R \twoheadrightarrow R/I$ induces a surjection on unit groups $R^\times \twoheadrightarrow (R/I)^\times$ (see here for proof).
In particular, for any maximal ideal $m$ of $R$, $R/m$ is a field with only one unit, hence must be $\mathbb{F}_2$. Since the Jacobson radical of $R$ is $0$, there is an isomorphism $R \cong \prod_{m \in \text{mSpec}(R)} R/m = \prod \mathbb{F}_2$ by the Chinese Remainder Theorem, and conversely any finite product of $\mathbb{F}_2$'s does indeed have only one unit.

If there are infinitely many maximal ideals, then it is not clear whether every residue field at a maximal ideal is $\mathbb{F}_2$. If this is the case though, then although Chinese Remainder fails, one can still realize $R$ as a subring of a product of copies of $\mathbb{F}_2$, so we get a characterization in this case as well. Thus:

If $R$ is a subring of a direct product of copies of $\mathbb{F}_2$, then $R$ has trivial unit group. The converse holds if every maximal ideal of $R$ has index $2$; in particular it holds if $R$ is semilocal.

Update: There are more examples of such rings than products of $\mathbb{F}_2$ or polynomial rings over $\mathbb{F}_2$, though. If $S = \mathbb{F}_2[x_1, \ldots]$ is a polynomial ring over $\mathbb{F}_2$ (in any number of variables), then for any homogeneous prime ideal $P \subseteq S$ (necessarily contained in the irrelevant ideal $(x_1, \ldots)$), the ring $S/P$ has trivial unit group. Since the property of having trivial unit group passes to products and subrings, the same holds if $P$ is only assumed to be radical (and still homogeneous).

Conversely, any ring $R$ with trivial unit group is a reduced $\mathbb{F}_2$-algebra, hence has a presentation $R \cong \mathbb{F}_2[x_1, \ldots]/I$, where $I$ is radical. We can even realize it as a dehomogenization $R \cong (\mathbb{F}_2[t, x_1, \ldots]/J)/(t-1)$, where $J$ is a homogeneous radical ideal. Thus if every dehomogenization (at a variable) of a ring of the form $S/I$, where $I$ is a homogeneous radical ideal, had trivial unit group, this would yield a characterization.
This in turn is equivalent to asking whether or not the multiplicative set $1 + (t - 1)$ is saturated in $S/I$ (at this point, I must leave this line of reasoning as is, but would welcome any feedback).

Update 2: Upon reflection, it's easy to see that not every dehomogenization of a graded reduced $\mathbb{F}_2$-algebra will have trivial unit group: e.g. for $\mathbb{F}_2[t,x,y]/(xy - t^2)$, setting $t = 1$ gives $\mathbb{F}_2[x,y]/(xy - 1)$, which has nontrivial units. I'll have to think a little more about the right strengthening of the condition on $I$.

• If $R$ is semilocal, then $R$ is a finite direct product of copies of $\mathbb F_2$. (I didn't mention this in my question since I considered it a kind of obvious remark.) Thanks for your thoughts anyway. – user26857 Feb 24 '14 at 9:30
• @user121097: You're correct in that $R$ is already isomorphic to a product of $\mathbb{F}_2$'s in the semilocal case. In my opinion this isn't particularly trivial to prove though (it comes down to showing that every residue field is $\mathbb{F}_2$, which is not just an obvious remark). Also, any finite ring is obviously semilocal, so I'm a little confused why you seemed to be enlightened by the other answer in the finite case – zcn Feb 25 '14 at 3:56
• Thanks for the update. I'll take a closer look as soon as possible. (In the semilocal case just use CRT and find that $R$ is a product of fields, and then it's immediate that these fields are $\mathbb F_2$. I'm not so sure that one can use the same technique in the non-commutative setting; that's why I found the finite non-commutative case interesting, although it's an immediate consequence of the Artin–Wedderburn theorem.) – user26857 Feb 25 '14 at 8:27
• Yes, that's a much easier argument than mine above. I'd also like to see your thoughts on the update - I'll keep thinking about it for a bit longer – zcn Feb 25 '14 at 8:40

If the ring is finite, then the ring must be boolean (and hence commutative).
I have handled this problem in an article, which you can find on the web: Rodney Coleman, Some properties of finite rings.

• Btw, after updating a little my poor knowledge in non-commutative algebra, I think you can prove the same result (with the same arguments) for semi-local rings. – user26857 Feb 25 '14 at 9:33

I don't know if this actually contributes more than what has been already said, but in Rings of zero-divisors (1958, Theorem 3), Cohn shows that if $R$ is a commutative ring without nontrivial units, then every element $x\in R$ is either idempotent or transcendental (over $\mathbb{F}_2$). Moreover $R$ is a subdirect product of extension fields of $\mathbb{F}_2$. These assertions are easy to prove: the subdirect product part is due to $J(R)=0$. If $x\in R$ is algebraic, then $\mathbb{F}_2[x]$ is finite-dimensional over $\mathbb{F}_2$, hence a product of extension fields without nontrivial units, hence a product of copies of $\mathbb{F}_2$, hence $x$ is idempotent. In fact, the same argument is given by Cohn for any $F$-algebra $R$, $F$ a field, $R$ without units outside of $F$.

EDITED: For the noncommutative case, what we can say with the same arguments is very similar: If $R$ is a ring without nontrivial units, then $R$ is an $\mathbb{F}_2$-algebra in which every element is either idempotent or transcendental. Moreover, $R$ is a subdirect product of domains. The statement about the elements has the same proof as in the commutative case (since $\mathbb{F}_2(x)$ is commutative). Let us prove the last claim: Since $x^2=0$ implies $(1+x)^2=1+x^2=1$, we have $1+x=1$, so $x=0$; hence $R$ is reduced (in particular semiprime). In a semiprime ring, the intersection of prime ideals is $0$; since every prime ideal contains a minimal prime ideal, the intersection of minimal prime ideals is also $0$. Now, in a reduced ring, every minimal prime ideal is completely prime, so that its quotient ring is a domain.
Therefore $R$ is a subdirect product of the domains $R/P_i$, where $\{P_i\}_{i\in I}$ is the family of minimal (completely) prime ideals of $R$.
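For the semilocal/finite case settled in this thread, the claim that a finite product of copies of $\mathbb{F}_2$ has exactly one unit is easy to confirm by brute force. An illustrative sketch (mine, not part of the proofs above), modelling elements of $\mathbb{F}_2^n$ as $0/1$ tuples with componentwise multiplication:

```python
from itertools import product

def units_of_F2_power(n):
    """Enumerate units of the ring F_2 x ... x F_2 (n factors).

    Elements are 0/1 tuples; multiplication is componentwise mod 2.
    An element u is a unit iff some v satisfies u * v = (1, ..., 1).
    """
    one = (1,) * n
    elements = list(product((0, 1), repeat=n))
    mul = lambda a, b: tuple(x * y % 2 for x, y in zip(a, b))
    return [u for u in elements if any(mul(u, v) == one for v in elements)]

# Only the identity (1, ..., 1) is invertible, so the unit group is trivial:
print(units_of_F2_power(3))  # [(1, 1, 1)]
```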
{}
# User:Emran M. Qassem/e/m Ratio

## Overview

In this lab we measured the ratio of e/m for electrons using an electric field on a charged particle. (SJK 01:12, 13 October 2010 (EDT): actually you mean a magnetic field on a charged particle having been accelerated by an electric field.) Knowing the electric field, and measuring the voltage, current, and radius of curvature of the generated electron loop, we can calculate the e/m ratio. The accepted value is $1.76 * 10 ^{-11} \frac{coul}{kg}$ as gathered from the lab manual. (Steve Koch 01:14, 13 October 2010 (EDT): Typo in your accepted value, should be +11.)

## Procedure

After setting everything up correctly according to the lab manual, we set voltages and currents, took measurements, and logged them in the spreadsheet. For our first set of measurements, we didn't understand what was needed, so we took two sets of current measurements for 5 different voltage measurements. Once we discovered that we needed to take 10 measurements - 5 currents at a constant voltage, then 5 voltages at a constant current - we did that and created some spreadsheets with that data. Once we had the data, we used the LINEST spreadsheet function to give us a fitted line based on a slope and intercept, and from that we generated data points and built a graph based on them.

## Results

(SJK 01:24, 13 October 2010 (EDT): In your primary notebook, I see that you calculated a few different values for the e/m ratio, but here you only report one, which happens to be closest to the acceptable. You don't say why, and of course you would need to say that. In this case, I don't think the one you report is your "best," despite it being the closest - this experiment is known to have substantial unavoidable systematic error, so you actually shouldn't be close with careful measurements.)
Using the results from the best fit line graphs, and using the equations given to us in the lab manual, we calculated e/m and found it to be $1.83 \times 10^{11} \pm 8.5 \times 10^{9} \frac{C}{kg}$ based on the error from our best fit line. I was quite satisfied with these results. (SJK 01:19, 13 October 2010 (EDT): Satisfied with some good measurements and analysis, I'd say yes. However, satisfied that you're measuring the value without lots of systematic error? That discussion is lacking, but should be in your future labs. In this case here, your 68% confidence interval is something like 1.74 to 1.92 * 10^11 ... which is consistent with the accepted value - but so far I don't know how you did that, since it should be impossible with this apparatus!)

## Error

The calculated result is off by 1 sigma from the accepted value, i.e., the accepted value lies at the edge of our 68% confidence interval. Error can be caused by our data collection: we could not get an exact measurement of the radius of curvature of the electron path, as the bulb had distortion and was very difficult to measure because of the low intensity of the particle beam, which reduced visibility. Systematic error can be caused by the equipment.

## Conclusion

Although our results were fairly reasonable, our data collection was all over the place at first, as we were not clear on what we needed to do. We learned how to use the spreadsheet and how to make graphs with best fit lines, which will also be very useful for this and future experiments.
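The LINEST step can be reproduced outside the spreadsheet with an ordinary least-squares fit. A minimal pure-Python sketch; the data points below are made up for illustration and are not our lab values:

```python
def linest(xs, ys):
    """Ordinary least-squares line y = m*x + b, like the LINEST spreadsheet function."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    m = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - m * mean_x
    return m, b

# Illustrative data lying exactly on y = 2x + 1:
m, b = linest([1, 2, 3, 4, 5], [3, 5, 7, 9, 11])
print(m, b)  # 2.0 1.0
```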
{}
# Synopsis: Measuring Entanglement Among Many Particles

Multiparticle entanglement has been difficult to characterize, but a new method provides a way to assess this with measurements of the total spin.

Entanglement of multiparticle systems is desired for precision interferometry, quantum computing, and fundamental tests of quantum mechanics. However, characterizing the level of entanglement in such large systems is far from straightforward. A group of European physicists has devised a new measure of entanglement based on the collective spin. In Physical Review Letters, they demonstrate their method on an atomic cloud, showing that it contains inseparable clusters of at least $28$ entangled atoms.

In a system of only a few particles, physicists can evaluate the entanglement by mapping out all particle correlations. But the number of measurements needed for this so-called quantum tomography grows exponentially with the number of particles. This method is therefore impractical for gauging the entanglement in, for example, a Bose-Einstein condensate (BEC) with thousands of atoms. Researchers have formulated other entanglement measures, but they only apply to specific types of multiparticle entangled states.

Carsten Klempt of the Leibniz University of Hannover, Germany, and his colleagues have developed a novel criterion for characterizing entanglement. Their method involves measuring the sum of all the individual spins in a large ensemble of particles and then evaluating its fluctuations. Compared to previous work, the criterion that Klempt and colleagues use is sensitive to a wider range of entangled states. The team created one particular state, a so-called Dicke state, with $8000$ atoms from a BEC and measured the total spin using a strong magnetic field gradient. From their entanglement measure, they estimate that the largest grouping of entangled particles contains $28$ atoms or more, which represents the largest number measured for Dicke states so far.
– Michael Schirber
{}
# How to properly add .NET assemblies to Powershell session?

I have a .NET assembly (a dll) which is an API to backup software we use here. It contains some properties and methods I would like to take advantage of in my Powershell script(s). However, I am running into a lot of issues with first loading the assembly, then using any of the types once the assembly is loaded. The complete file path is: C:\rnd\CloudBerry.Backup.API.dll

In Powershell I use:

    $dllpath = "C:\rnd\CloudBerry.Backup.API.dll"
    Add-Type -Path $dllpath

I get the error below:

    Add-Type : Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.
    At line:1 char:9
    + Add-Type <<<< -Path $dllpath
        + CategoryInfo          : NotSpecified: (:) [Add-Type], ReflectionTypeLoadException
        + FullyQualifiedErrorId : System.Reflection.ReflectionTypeLoadException,Microsoft.PowerShell.Commands.AddTypeCommand

Using the same cmdlet on another .NET assembly, DotNetZip, which has examples of using the same functionality on the site, also does not work for me. I eventually find that I am seemingly able to load the assembly using reflection:

    [System.Reflection.Assembly]::LoadFrom($dllpath)

Although I don't understand the difference between the methods Load, LoadFrom, or LoadFile, that last method seems to work. However, I still seem to be unable to create instances or use objects. Each time I try, I get errors that describe that Powershell is unable to find any of the public types. I know the classes are there:

    $asm = [System.Reflection.Assembly]::LoadFrom($dllpath)
    $cbbtypes = $asm.GetExportedTypes()
    $cbbtypes | Get-Member -Static

    ---- start of excerpt ----
    TypeName: CloudBerryLab.Backup.API.BackupProvider

    Name                MemberType Definition
    ----                ---------- ----------
    PlanChanged         Event      System.EventHandler`1[CloudBerryLab.Backup.API.Utils.ChangedEventArgs] PlanChanged(Sy...
    PlanRemoved         Event      System.EventHandler`1[CloudBerryLab.Backup.API.Utils.PlanRemoveEventArgs] PlanRemoved...
    CalculateFolderSize Method     static long CalculateFolderSize()
    Equals              Method     static bool Equals(System.Object objA, System.Object objB)
    GetAccounts         Method     static CloudBerryLab.Backup.API.Account[], CloudBerry.Backup.API, Version=1.0.0.1, Cu...
    GetBackupPlans      Method     static CloudBerryLab.Backup.API.BackupPlan[], CloudBerry.Backup.API, Version=1.0.0.1,...
    ReferenceEquals     Method     static bool ReferenceEquals(System.Object objA, System.Object objB)
    SetProfilePath      Method     static System.Void SetProfilePath(string profilePath)
    ---- end of excerpt ----

Trying to use static methods fails, and I don't know why!!!

    [CloudBerryLab.Backup.API.BackupProvider]::GetAccounts()
    Unable to find type [CloudBerryLab.Backup.API.BackupProvider]: make sure that the assembly containing this type is loaded.
    At line:1 char:42
    + [CloudBerryLab.Backup.API.BackupProvider] <<<< ::GetAccounts()
        + CategoryInfo          : InvalidOperation: (CloudBerryLab.Backup.API.BackupProvider:String) [], RuntimeException
        + FullyQualifiedErrorId : TypeNotFound

Any guidance appreciated!!

## 6 Answers

Could you surround the Add-Type with a try catch and print the LoaderExceptions property, as the error is stating to do? It may provide an exception with a more detailed error message.

    try
    {
        Add-Type -Path "C:\rnd\CloudBerry.Backup.API.dll"
    }
    catch
    {
        $_.Exception.LoaderExceptions | % { Write-Error $_.Message }
    }

• Isn't there a way to use a relative path? – Amit Sep 18 '16 at 6:06
• The opening curly bracket must be on the same line as the percent sign: $_.Exception.LoaderExceptions | % { – Aug 24 '20 at 19:41
• At the time of posting, which was I believe PS 2.0, that was valid syntax. Has this changed in 6+? Asking for my own curiosity. – Oct 24 '20 at 21:46

He says, ".LoadWithPartialName" has been deprecated.
Therefore, instead of continuing to implement Add-Type with that method, it uses a static, internal table to translate the "partial name" to a "full name". In the example given in the question, CloudBerry.Backup.API.dll does not have an entry in PowerShell's internal table, which is why [System.Reflection.Assembly]::LoadFrom($dllpath) works: it is not using the table to look up a partial name.

Some of the methods above either did not work for me or were unclear. Here's what I use to wrap Add-Type calls and catch LoaderExceptions:

    try
    {
        Add-Type -Path "C:\path\to.dll"
    }
    catch [System.Reflection.ReflectionTypeLoadException]
    {
        Write-Host "Message: $($_.Exception.Message)"
        Write-Host "StackTrace: $($_.Exception.StackTrace)"
        Write-Host "LoaderExceptions: $($_.Exception.LoaderExceptions)"
    }

The LoaderExceptions are hidden inside the error record. If the Add-Type error was the last one in the error list, use $Error[0].InnerException.LoaderExceptions to show the errors. Most likely, your library is dependent on another that has not been loaded. You can either Add-Type each one, or just make a list and use the -ReferencedAssemblies argument to Add-Type.

• Try $Error[0].Exception.LoaderExceptions and follow Eris's advice. – Jun 11 '14 at 15:33

I used the following setup to load a custom C# control in Powershell. It allows the control to be customized and utilized from within Powershell. http://justcode.ca/wp/?p=435 and here is the codeproject link with source: http://www.codeproject.com/Articles/311705/Custom-CSharp-Control-for-Powershell

• Welcome to Server Fault! We really do prefer that answers contain content, not pointers to content. Whilst this may theoretically answer the question, it would be preferable to include the essential parts of the answer here, and provide the link for reference. – Mar 10 '13 at 3:20

I guess by now you MIGHT have found an answer to this phenomenon.
I ran upon this post after encountering the same problem... I could load the assembly and view the types it contained, but was unable to instantiate an instance from a static class. Was it EFTIDY.Tidy, EFTidyNet.TidyNet.Options, or what? Ooooo Weeee... Problems... problems... it could be anything. And looking through the static methods and types of the DLL didn't reveal anything promising. Now I was getting depressed. I had it working in a compiled C# program, but for my use I wanted to have it running in an interpreted language... Powershell. The solution I found (and it's still being proven, but I'm elated and wanted to share this): build a small console .exe app exercising the functionality I was interested in, and then view it in something that would decompile it or show the IL code. I used Red Gate's Reflector and the Powershell language generator add-in, and Wallah! it showed what the proper constructor string was! :-) Try it, and I hope it works for whomever is faced with this problem.

• And... what was the proper constructor string? This doesn't really answer the question the way it's written. Also, welcome to ServerFault! – Aug 5 '15 at 3:48
{}
CGAL 4.7 - Scale-Space Surface Reconstruction

WeightedApproximation_3 Concept Reference

## Definition

A concept for computing an approximation of a weighted point set. This approximation can be used to fit other points to the point set.

Has Models: CGAL::Weighted_PCA_approximation_3

## Types

typedef unspecified_type FT: defines the field number type.
typedef unspecified_type Point: defines the point type.

## Constructors

WeightedApproximation_3 (unsigned int size): constructs an approximation of an undefined point set. More...

## Point Set

void set_point (unsigned int i, const Point &p, const FT &w): changes a weighted point in the set. More...
std::size_t size () const: gives the size of the weighted point set.

## Approximation

bool compute (): computes the approximation. More...
bool is_computed () const: checks whether the approximation has been computed successfully.

## Fitting

Point fit (const Point &p): fits a point to the approximation. More...

## Constructor & Destructor Documentation

WeightedApproximation_3::WeightedApproximation_3 (unsigned int size)

constructs an approximation of an undefined point set. The point set holds a fixed number of points with undefined coordinates.
Parameters: size is the size of the point set.
Note: this does not compute the approximation.

## Member Function Documentation

bool WeightedApproximation_3::compute ()

computes the approximation. Returns whether the approximation converges. If the approximation does not converge, this may indicate that the point set is too small, or that the affine hull of the points cannot contain the approximation.

Point WeightedApproximation_3::fit (const Point &p)

fits a point to the approximation.
Parameters: p is the point to fit.
Returns the point on the approximation closest to p.
Precondition: The approximation must have been computed.

void WeightedApproximation_3::set_point (unsigned int i, const Point &p, const FT &w)

changes a weighted point in the set. This invalidates the approximation.
compute() should be called after all points have been set. Precondition i must be smaller than the total size of the point set.
{}
# One step equations (review)

## Interactive practice questions

By substituting into the equation, identify whether the following statements are true or false.

a) $x=6$ is a solution for the equation $8x=51$. True (A) / False (B)
b) $x=4$ is a solution for the equation $2x=2\times4$. True (A) / False (B)
c) $x=19$ is a solution for the equation $\frac{x}{2}=8$. True (A) / False (B)
d) $x=24$ is a solution for the equation $\frac{x}{3}=8$. True (A) / False (B)

What step can be taken to solve the equation $x-5=4$?

When $10$ is added to a number, the result is $12$. What is the number?

When a number is multiplied by $10$, the result is $60$. What is the number?
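The substitution checks in parts (a)-(d) can be sketched in a few lines (a hypothetical helper, not part of the exercise):

```python
def is_solution(x, lhs, rhs):
    """Check whether substituting x makes lhs(x) equal rhs(x)."""
    return lhs(x) == rhs(x)

checks = [
    is_solution(6,  lambda x: 8 * x, lambda x: 51),     # 48 != 51 -> False
    is_solution(4,  lambda x: 2 * x, lambda x: 2 * 4),  # 8 == 8   -> True
    is_solution(19, lambda x: x / 2, lambda x: 8),      # 9.5 != 8 -> False
    is_solution(24, lambda x: x / 3, lambda x: 8),      # 8 == 8   -> True
]
print(checks)  # [False, True, False, True]
```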
{}
# How to distinguish the continuous and categorical variable based on the number of unique values? I have a very large dataset contained 600,000 rows and 400 columns. Most of the variables are anonymous and named as Vi, i=1,2,...,400, and all of them are either int or float. When I am dealing with the dataset, I have to point out which variable is categorical and which is continuous. Cause I do not know the meaning of the features, I thought the only clue I have is the number of unique values. So I ran the codes and get such outcomes: for f in X_train.columns: print(X_train[f].nunique()) And the outcomes I got is: 20902 36807 57145 18683 24821 7311 8727 41838 39847 23028 22774 10437 10530 217850 90375 3126 7 1787 3 3 3 3 3 3 3 3 4 39974 51727 54282 157077 101 9288 77 101 13 14 14 5854 10938 2338 5775 2426 4650 2366 1509 5971 4729 2207 6813 6674 80299 172652 82646 176011 3444 2125 70656 3 25 43 3 1231 1476 205 3 3 4 1253 1103 3 3 1328 1260 319 1657 5 13553 1108 1597 1216 1199 5 3 60 61 3 219 881 89 12332 4444 641 5529 11 62 33 11377 114 76 119 500 74 332 4 8 8 7 52 6 8 32 5 4 9 49 55 649 9 7 5 10 10 10 641 688 2651 77 2340 19 24 50 77 26 365 115655 49 2552 31 62 8 9 15 20 17 2836 3451 215 2240 2282 8 32 2747 49 104 522 394 93 101 81 9 17 79 As you can see some of them are obvious, they are lower than 10, and I will definitely think they are categorical variables. And some of them have higher than 100,000 unique values, which is obviously continuous. The hard part is how to decide the threshold. Those variables have #unique values between 50~500, they seem to be continuous but considered my dataset is very large, even if one variable has 500 categories, I will also it is reasonable to treat it as a categorical variable. Does anyone have any good suggestions? I will thank you in advance! • Be careful not to conflate "discrete" with "categorical" in your question. Apr 26, 2020 at 16:38 • Continuity is, in part, a modeling decision. 
There exists no universal criterion based on inspecting unique values alone that will make that decision for you. – whuber Apr 26, 2020 at 17:27
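As the comments stress, no universal cutoff exists; still, the bookkeeping is easy to automate once you have picked a threshold. A minimal sketch (pure Python, taking precomputed nunique counts rather than a pandas frame; the threshold of 50 is an arbitrary assumption you must justify for your own data):

```python
def classify_columns(nunique_by_column, threshold=50):
    """Split columns into 'categorical' / 'continuous' by unique-value count.

    This is only a heuristic: continuity is partly a modeling decision, and
    the threshold is a choice, not a rule.
    """
    return {
        col: ("categorical" if n <= threshold else "continuous")
        for col, n in nunique_by_column.items()
    }

# Toy counts in the spirit of the question's output:
counts = {"V1": 3, "V2": 7, "V3": 217850, "V4": 500}
print(classify_columns(counts))
```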
{}
# JEE Advanced Hyperbola Important Questions

## JEE Advanced Important Questions of Hyperbola

The hyperbola is one of the important topics that come under the conic sections in the syllabus of JEE Advanced 2020. The PDF below consists of important questions on the hyperbola for JEE Advanced, together with their solutions. A hyperbola is a curve that can be defined as the locus of the points in the plane whose distances from two fixed points (the foci) have a constant positive difference.
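The locus definition can be written compactly in symbols. With foci $F_1$, $F_2$, a point $P$ on the curve, and a constant $2a > 0$ (these are the standard textbook formulas, not taken from the PDF):

```latex
% Locus definition of the hyperbola
\[
  \bigl|\, |PF_1| - |PF_2| \,\bigr| = 2a, \qquad 0 < 2a < |F_1F_2|,
\]
% With foci at (\pm c, 0) and b^2 = c^2 - a^2, this gives the standard form
\[
  \frac{x^2}{a^2} - \frac{y^2}{b^2} = 1 .
\]
```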
{}
# A General Theory for Deadlock Avoidance in Wormhole-Routed Networks

1 RESEDAS - Software Tools for Telecommunications and Distributed Systems INRIA Lorraine, LORIA - Laboratoire Lorrain de Recherche en Informatique et ses Applications

Abstract : Most of the machines from the last generation of distributed memory parallel computers possess specific routers that are used to communicate with non-neighboring nodes in the network. Among the several technologies, wormhole routing is usually preferred because it allows low channel-setup time and reduces the dependency between latency and inter-node distance. However, wormhole routing is very susceptible to deadlock because messages are allowed to hold many resources while requesting others. Therefore, designing deadlock-free routing algorithms using few hardware facilities is a major problem for wormhole-routed networks. Even though maximizing the degree of adaptiveness of a routing algorithm is not necessarily the best way to obtain the largest efficiency/complexity ratio, adding some adaptiveness improves the performance. It is therefore of great interest to answer the following question: given any network $G$ and any adaptive routing function $R$ on $G$, is $R$ deadlock-free or not? Characterizations of deadlock-free routing functions in terms of channel dependency graph have been known for a long time for deterministic routing, but only for a short time for adaptive routing, and only for some particular definitions of the routing functions. In this paper, we give a general framework to study deadlock-free routing functions. First we give a general definition that captures many specific definitions of the literature (namely vertex-dependent, input-dependent, source-dependent, path-dependent, etc.). With this general definition, we give a necessary and sufficient condition that characterizes deadlock-free routing functions for a large class of definitions.
Using our results, we study several adaptive routing algorithms that have been proposed for meshes, and we derive a new algorithm that offers a higher degree of adaptiveness.

Document type: Journal article. IEEE Transactions on Parallel and Distributed Systems, Institute of Electrical and Electronics Engineers, 1998, 9 (7), pp. 626-638. Record: https://hal.inria.fr/inria-00098494. Contributor: Publications Loria. Submitted on: Monday, September 25, 2006 - 17:02:02. Last modified on: Thursday, April 5, 2018 - 12:30:08.

Identifiers: HAL Id: inria-00098494, version 1.

Citation: Eric Fleury, Pierre Fraigniaud. A General Theory for Deadlock Avoidance in Wormhole-Routed Networks. IEEE Transactions on Parallel and Distributed Systems, Institute of Electrical and Electronics Engineers, 1998, 9 (7), pp. 626-638. ⟨inria-00098494⟩
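For deterministic routing, the classical characterization mentioned in the abstract (due to Dally and Seitz) is that a routing function is deadlock-free iff its channel dependency graph is acyclic. A small sketch of that acyclicity check; the graph encoding and channel names are mine, purely illustrative:

```python
def has_cycle(dependencies):
    """Detect a cycle in a channel dependency graph.

    dependencies: dict mapping a channel to the channels it may wait on,
    i.e. an edge c1 -> c2 means a worm holding c1 can request c2.
    Returns True if some cyclic dependency (potential deadlock) exists.
    """
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {c: WHITE for c in dependencies}

    def dfs(c):
        color[c] = GRAY
        for nxt in dependencies.get(c, ()):
            if color.get(nxt, WHITE) == GRAY:
                return True  # back edge: cyclic dependency
            if color.get(nxt, WHITE) == WHITE:
                color[nxt] = WHITE
                if dfs(nxt):
                    return True
        color[c] = BLACK
        return False

    return any(color[c] == WHITE and dfs(c) for c in dependencies)

# Dimension-ordered (XY) routing orders channels so dependencies never loop back:
print(has_cycle({"x0": ["x1"], "x1": ["y0"], "y0": []}))  # False
print(has_cycle({"a": ["b"], "b": ["c"], "c": ["a"]}))    # True
```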
{}
# How do you solve 2^n=sqrt(3^(n-2))?

Dec 12, 2016

$n = \frac{2 \ln 3}{\ln 3 - 2 \ln 2} \approx -7.64$

#### Explanation:

There are a variety of methods, but I'd like to start by squaring both sides to undo the square root.

$\left(2^n\right)^2 = \left(\sqrt{3^{n-2}}\right)^2$

The left-hand side uses $\left(a^b\right)^c = a^{bc}$:

$2^{2n} = 3^{n-2}$

Typically, to solve an exponential equation like this, we want to take the logarithm with the same base as the exponential. However, since we have two different bases here, we can use an arbitrary logarithm. A common choice, although it doesn't really matter, is the natural logarithm $\ln x$, whose base is $e$.

$\ln\left(2^{2n}\right) = \ln\left(3^{n-2}\right)$

Rewriting both sides using $\ln\left(a^b\right) = b\ln a$ gives:

$2n\ln 2 = \left(n - 2\right)\ln 3$

Expanding the right-hand side:

$2n\ln 2 = n\ln 3 - 2\ln 3$

Grouping the terms with $n$ and factoring:

$2n\ln 2 - n\ln 3 = -2\ln 3$

$n\left(2\ln 2 - \ln 3\right) = -2\ln 3$

Solving:

$n = \frac{-2\ln 3}{2\ln 2 - \ln 3} = \frac{2\ln 3}{\ln 3 - 2\ln 2} \approx -7.64$
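A quick numeric sanity check of the original equation $2^n = \sqrt{3^{n-2}}$: take logs of the squared form, solve for $n$, and substitute back in.

```python
from math import log, sqrt, isclose

# From 2^(2n) = 3^(n-2):  2n*ln2 = (n - 2)*ln3  =>  n = 2*ln3 / (ln3 - 2*ln2)
n = 2 * log(3) / (log(3) - 2 * log(2))
print(n)  # ≈ -7.6377

# Substitute back into the original equation:
print(isclose(2 ** n, sqrt(3 ** (n - 2))))  # True
```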
{}
Uppercase greek letters in math in italic/slanted format

I read that in math, capital Greek letters should be italic, just like all the other (Latin alphabet) letters. How do I do this? I use \usepackage[charter]{mathdesign} as my font.

• This is a style decision; some typographic traditions, for instance in France, want capital Greek in italics, while others (British and American) want them upright. – egreg Oct 27 '12 at 15:00
• @egreg isn't it rather that the French want their lowercase Greek upright? – user4686 Oct 27 '12 at 15:44

According to the manual of mathdesign:

    \usepackage[charter,greekuppercase=italicized]{mathdesign}

• Thanks, that works! Regarding "correct": I have the Python way of thinking: there should be one obvious way to do things. So I want people to read my stuff and immediately know what I meant. Since all Latin and lowercase Greek letters are in italic, it makes no sense to me that uppercase should not be in italic as well. And I can use \mathrm\Delta as a Laplace operator now, since \Delta looks different this way. – Martin Ueding Oct 27 '12 at 15:40
{}
## Topology Seminar (Main Talk): Instanton knot homology and equivariant gauge theory Seminar | October 7 | 4:10-5 p.m. | 3 Evans Hall Nikolai Saveliev, University of Miami Department of Mathematics The instanton knot homology is a Floer theory recently defined by Kronheimer and Mrowka using gauge theory on orbifolds; it has been instrumental in proving that the Khovanov homology is an unknot detector. We show how replacing gauge theory on an orbifold with an equivariant gauge theory on its double branched cover simplifies the matters and allows for explicit calculations for several classes of knots. This is a joint project with Prayat Poudel. events@math.berkeley.edu
{}
# mass of cl ion You will need to use the BACK BUTTON on your browser to come back here afterwards. . another electron to remove the particle as positively charged particle. The molecular ion containing the 35 Cl isotope has a relative formula mass of 78. m/z means relative mass over charge Chlorine -35 is about 3 times more abundant than chlorine The sample must be in gaseous form, laser beam is used to Visit A-Level Chemistry to download comprehensive revision materials - for UK or international students! the formula mass of sodium chloride is 58.5. the formula mass of chloride is 35.5. therefore you need to scale up from the % mass of chloride ion to the % mass of sodium chloride. Weights of atoms and isotopes are from NIST article. High Performance Liquid Chromatography (HPLC), https://www.chemguide.co.uk/analysis/masspec/elements.html, Hydrogen Bonding in Hydrogen Flouride (HF), http://www.docbrown.info/page04/4_71atomMSintro.htm. In move through the tube. It reacts with sodium vigorously forming sodium chloride. is not used to separate the positive ions. Kinst The carbons and hydrogens add up to 28 - so the various possible molecular ions could be: If you have the necessary maths, you could show that the chances of these arrangements occurring are in the ratio of 9:6:1 - and this is the ratio of the peak heights. (a) Calculate the percentage purity of the salt. ions to collide with air particles which affect the motion of particles to For every mole of FeCl3 there are 3 mols of Cl- ions, so dividing this by 3 gives us the number of moles of FeCl3. Deflection On analysis of an impure sample of rock salt, it was found to contain by mass 57.5% of chlorine as chloride ion. Cl2+→ Cl + Cl+ The Cl atom is neither accelerated nor deflected in the machine it is not ionized in the ionizatio… Mass spectrometry is a technique used to determine relative isotopic masses of different elements and relative abundance of the isotopes. 
The low pressure vacuum is needed to stop the The chloride ion /ˈklɔːraɪd/ is the anion (negatively charged ion) Cl . Mass spectrometry is an important method which is used to identify elements and compounds by their mass spectrum. Molar mass of BaCl2 = 208.233 g/mol Convert grams Barium Chloride to moles or moles Barium Chloride to grams. Molecular mass (molecular weight) is the mass of one molecule of a substance and is expressed in the unified atomic mass units (u). . Are you a chemistry student? The Chlorine is a yellow-green gas at room temperature. which help to know relative atomic mass of ionized particle. Chlorine has a melting point of -101.5 °C and a boiling point of -34.04 °C. presence of five peaks for chlorine shows the ratio of heights for peaks 1 and the seven electrons in the third outermost shell acting as its valence The What mass of Cl – ion is present in 240.0 mL of H 2 O, which has a density of 1.00 g/mL? Mass spectrum of 2-chloropropane is given below. If you don't know the right bit of maths, just learn this ratio! strike the ion detection system where they generate a small electrical current. characteristics e.g. The M+ and M+2 peaks are therefore at m/z values given by: So . this process the ionized particle which has smaller mass has smaller time of The mass percentage of chloride ion in a 25.00 -mL. Most of the ions moving from the ionization chamber to the mass analyzer have lost a single electron, so they have a charge of 1+. This page explains how the M+2 peak in a mass spectrum arises from the presence of chlorine or bromine atoms in an organic compound. The carbons and hydrogens add up to 29. The one containing 37Cl has a relative formula mass of 80 - hence the two lines at m/z = 78 and m/z = 80. displayed as m/z versus peak height. For #6, you can do the mass as above, and subtract the mass of 14 electrons from it. 
Chlorine has two stable isotopes: chlorine – 35 Based on the mass of AgCl formed, we can determine the mass of Cl-ions present. The Cl + ions will pass through the machine and will give lines at 35 and 37, depending on the isotope and you would get exactly the pattern in the last diagram. It contains 6.02 x 10^23 grams of sodium chloride. Continued heating ruptures the counter ion layer and promotes stabilization by the nitrate ions produced by the addition of slightly excess silver nitrate and nitric acid (Skoog, 317-319). Chloride: Chloride ion is derived from dissociation of HCl acid or any other chloride compound. That means that a compound containing 1 bromine atom will have two peaks in the molecular ion region, depending on which bromine isotope the molecular ion contains. The atomic weight of chlorine is 35.453 amu. has a relative abundance of 75.76% ,whereas chlorine – 37 has a relative The molar mass of BaCl2 is 208.23 g/mol. The ions are not stable so some will formchlorine atom and a Cl+ ion. Chlorine is such an element which contain more It is an extremely reactive element and a strong oxidising agent: among the elements, it has the highest electron affinity and the third-highest electronegativity on the Pauling scale, behind only oxygen and fluorine. into the ionization chamber, the electrons are knocked off, and give molecular Chloride ion is unreactive. Chlorine has The molecular ion containing the 35 Cl isotope has a relative formula mass of 78. So . Lee Your name goes here, not mine! Notice that the mass of a single ion changes only in the 4th decimal place or so) Most of the rest of these problems can be done pretty much the same way. So the molar atomic mass of magnesium is 24.305 grams per mole. It is an essential electrolyte located in all body fluids responsible for maintaining acid/base balance, transmitting nerve impulsesand regulating fluid in and out of cells. It is 12 units of a given substance. like mass particles move down the tube. 
In the spectrometer diagram, the accelerating plate is labelled P and its high-voltage supply Q. The sample must be in the gaseous phase before ionization. For a compound with one chlorine atom the molecular ion region shows two peaks, M+ and M+2: the molecular ion containing the 35Cl isotope has a relative formula mass of 78, the one containing 37Cl a relative formula mass of 80. At room temperature chlorine exists as a diatomic molecule (Cl2), and chloride salts such as sodium chloride are often very soluble in water. Recall the definitions: 1 u is equal to 1/12 the mass of one atom of carbon-12, and molar mass (molar weight), the mass of one mole of a substance, is expressed in g/mol. PubChem lists the exact (monoisotopic) mass of the chloride ion, Cl−, as 34.968853 u, with a formal charge of −1. A worked titration from a gravimetric chloride determination (unknown #88): we convert the volume of titrant to moles of Cl− as the first step. Taking the atomic mass of Cl− as 35.45 g/mol, mmoles of Cl− = M(AgNO3) × V = 0.1002 M × (26.90 − 0.20) mL = 2.675 mmol, and mass of Cl− = 2.675 mmol × 35.45 mg/mmol = 94.83 mg. For comparison, the molar mass of the chlorate ion is 83.44 g/mol.
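The argentometric calculation above can be reproduced in a few lines. The numbers (0.1002 M AgNO3, 26.90 mL endpoint, 0.20 mL blank, 35.45 g/mol for chloride) all come from the text; the sketch simply restates the mL × mol/L arithmetic.

```python
# Argentometric chloride determination: moles of Ag+ delivered at the
# endpoint (minus a blank correction) equal moles of Cl- in the sample.
M_AGNO3 = 0.1002       # mol/L, titrant concentration (from the text)
V_ENDPOINT_ML = 26.90  # mL of titrant at the endpoint
V_BLANK_ML = 0.20      # mL blank correction
M_CL = 35.45           # g/mol, molar mass of chloride

mmol_cl = M_AGNO3 * (V_ENDPOINT_ML - V_BLANK_ML)  # mmol, since mL * mol/L
mass_cl_mg = mmol_cl * M_CL                       # mg, since mmol * g/mol

print(f"{mmol_cl:.3f} mmol Cl-")   # 2.675 mmol
print(f"{mass_cl_mg:.1f} mg Cl-")  # ~94.8 mg; the text rounds the mmol first and quotes 94.83 mg
```

The tiny discrepancy in the last decimal comes from when the rounding is done, not from the chemistry.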
All types of mass spectrometry involve vaporizing the atoms or molecules under high vacuum and ionizing them; fragmentation of the 2-chloropropane molecular ion produces the fragment ions discussed below. The mass percentage of chloride ion in a 25.00-mL sample of seawater was determined by titrating the sample with silver nitrate, precipitating silver chloride. An organic compound with one chlorine atom shows the molecular-ion peaks M+ and M+2; the ion detection system is labelled N in the diagram, and ions striking it generate a small electrical current. For magnesium chloride we need two equivalents of the atomic mass of chlorine, because there are two chloride ions per MgCl2 unit; we can look up the values in the periodic table. Notice that the M+ and M+2 peak heights are in the ratio 3 : 1. Chlorine is an element that contains more than one atom per molecule, so think about the possible combinations of chlorine-35 and chlorine-37 atoms in a Cl2+ ion. The neutral Cl atom produced by fragmentation is neither accelerated nor deflected in the machine; it is not ionized in the ionization chamber and is simply lost. Bromine has two isotopes, 79Br and 81Br, in an approximately 1 : 1 ratio (50.5 : 49.5 if you want to be fussy!). Practice question: if an ion of Na+ has a mass of 23 and Cl− a mass of 35, how many grams of NaCl must be added to 100 mL of water to make a 3 M solution? (58 g/mol × 3 mol/L × 0.100 L = 17.4 g.) The chloride content of a soluble salt can be determined by precipitating the chloride anion as silver chloride according to the reaction Ag+(aq) + Cl−(aq) → AgCl(s). If precipitation is complete, the mass of the chloride (mCl) can be obtained from the mass of the AgCl(s) precipitate (mAgCl). If you have three lines in the molecular ion region (M+, M+2 and M+4) with gaps of 2 m/z units between them and peak heights in the ratio 9 : 6 : 1, the compound contains two chlorine atoms; the three-line pattern arises from the different isotope combinations attached to the fragments. Breaking the molecular ion into pieces in this way is called fragmentation.
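The 3 : 1 and 9 : 6 : 1 patterns described above are just binomial expansions of the isotope abundances. A minimal sketch, assuming the idealized 35Cl : 37Cl ratio of 3 : 1 used in the text:

```python
from math import comb
from fractions import Fraction

def chlorine_pattern(n):
    """Relative intensities of the M+, M+2, M+4, ... peaks for a species
    containing n chlorine atoms, assuming P(35Cl) = 3/4 and P(37Cl) = 1/4.
    Each extra 37Cl shifts the peak up by 2 mass units."""
    p35, p37 = Fraction(3, 4), Fraction(1, 4)
    weights = [comb(n, k) * p35 ** (n - k) * p37 ** k for k in range(n + 1)]
    smallest = min(weights)                 # normalise the smallest peak to 1
    return [int(w / smallest) for w in weights]

print(chlorine_pattern(1))  # [3, 1]     -> the 3 : 1 two-line pattern
print(chlorine_pattern(2))  # [9, 6, 1]  -> the 9 : 6 : 1 three-line pattern
```

Exact rational arithmetic (`Fraction`) keeps the ratios as clean integers instead of floats.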
The molar mass of the chloride ion is 35.453 g/mol. The record of ions reaching the detector is known as a mass spectrum (plural: mass spectra). The Cl+ ions pass through the machine and give lines at m/z = 35 and 37, depending on the isotope; these ions are formed through electron bombardment like the others. To solve the ppm problem, first use the density of H2O to determine the mass of the sample: 240.0 mL × 1.00 g/mL = 240.0 g. Then apply the definition of ppm (grams of solute per 10^6 g of solution). Chlorine is a chemical element with the symbol Cl and atomic number 17. Because chlorine-35 is about three times as abundant as chlorine-37, the weighted-average atomic weight is much closer to 35 than to 37: out of every four chlorine atoms, on average three are 35Cl and one is 37Cl. (The answer choices for the NaCl question above were a) 580, b) 5.8, c) 17.4, d) 35.) The deflection of ions by the magnetic field is labelled R in the diagram: the field deflects the singly charged positive ions and sorts them by increasing mass toward the ion detection system. The molecular ion peaks (M+ and M+2) each contain one chlorine atom, but the chlorine can be either of the two chlorine isotopes, 35Cl or 37Cl. For example, if an ion has a mass of 18 units and a charge of 1+, its m/z value is 18. In the gravimetric experiment, the percent mass of chloride in the compound allowed the cation in the unknown solute to be identified as sodium, since the data implied a cation with an atomic mass of 22.34 g/mol and a charge of +1. Cl2 is a yellow-greenish gas. The 3 : 1 pattern means there are three times more molecules containing the lighter isotope than the heavier one; the molecular ion containing 37Cl has a relative formula mass of 80, hence the two lines at m/z = 78 and m/z = 80.
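The ppm worked problem (240.0 mL of water at 1.00 g/mL, containing 15.0 ppm Cl−) can be finished numerically. A minimal sketch using only the numbers given in the text:

```python
# 15.0 ppm means 15.0 g of Cl- per 10^6 g of solution.
volume_ml = 240.0
density_g_per_ml = 1.00
ppm_cl = 15.0

mass_sample_g = volume_ml * density_g_per_ml  # 240.0 g of water
mass_cl_g = mass_sample_g * ppm_cl / 1e6      # grams of chloride

print(f"{mass_cl_g * 1000:.2f} mg of Cl-")    # 3.60 mg
```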
If you have two lines in the molecular ion region with a gap of 2 m/z units between them and almost equal heights, this shows the presence of a bromine atom in the molecule. If an ion has a mass of 36 units and a 2+ charge, its m/z value is also 18. A chlorine atom is reactive; the chloride ion is comparatively unreactive. The molecular ion containing the 35Cl isotope has a relative formula mass of 78, and the peak heights are in the ratio 3 : 1. Chloride ions are highly mobile and are eventually transported to closed basins or to the ocean; chloride occurs naturally in foodstuffs, normally at levels below 0.36 mg/g. With one exception, the spectra shown have been simplified by omitting all the minor lines with peak heights of 2% or less of the base peak (the tallest peak). In one titration it took 42.58 mL of 0.2997 M silver nitrate solution to reach the equivalence point. The moving charged particles create a magnetic field around themselves, which interacts with the field of the instrument at point R. Chlorine is the second-lightest of the halogens; it appears between fluorine and bromine in the periodic table, and its properties are mostly intermediate between theirs. The mass spectrum of an organic compound containing chlorine atoms therefore shows characteristic extra peaks. Molar-mass arithmetic: atomic mass of Cu = 63.55 and atomic mass of Cl = 35.45, so the molar mass of CuCl2 = 63.55 + 2(35.45) = 63.55 + 70.90 = 134.45 g/mol. For diatomic Cl2+ ions, each chlorine position is 35Cl three-quarters of the time and 37Cl one-quarter of the time. In a chlorine atom the number of protons (17) is equal to the number of electrons (17).
In the diagram the sample inlet is labelled K. Chlorine consists of molecules, so when it passes into the ionization chamber electrons are knocked off, giving the molecular ion, Cl2+. The atomic weight of chlorine given on the periodic table, 35.47 u, is the weighted average of the isotopic masses. (For concrete, the total chloride ion content by weight of cementitious materials, determined by the acid-soluble method, is preferred as the representation of chloride content.) The collision of a high-energy electron with an atom or molecule knocks out another electron and leaves a positively charged particle, e.g. for chlorine: Cl(g) + e− → Cl+(g) + 2e−. The molecular ion peaks (M+ and M+2) each contain one chlorine atom, which can be either of the two chlorine isotopes, 35Cl or 37Cl. The molar mass of the chloride ion is 35.45 g/mol; in the chloride ion there are 17 protons but 18 electrons. Chlorine has properties similar to the other halogens: fluorine, bromine and iodine. Percent composition: total mass of the element (part) ÷ total mass of the compound (whole) × 100%. Example: calculate the percent by weight of sodium (Na) and chlorine (Cl) in sodium chloride (NaCl). First calculate the molecular mass: MM = 22.99 + 35.45 = 58.44. One Na is present in the formula, contributing 22.99, so %Na = 22.99/58.44 × 100 = 39.3% and %Cl = 35.45/58.44 × 100 = 60.7%. In the titration, Ag+ and Cl− react 1 : 1, so the moles of Cl− equal the moles of Ag+ consumed: 0.00408 mol Cl− × 35.45 g/mol = 0.145 g of Cl−. The same isotope pattern of peaks is observed at m/z = 63 and m/z = 65, due to the chlorine atom attached to the CH3CH+ fragment.
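The NaCl percent-composition example can be checked with a small helper. The atomic masses (22.99 and 35.45 g/mol) are the ones quoted in the text:

```python
def percent_composition(contributions):
    """contributions: dict mapping element -> total mass contribution in the
    formula (g/mol). Returns dict mapping element -> mass percent."""
    total = sum(contributions.values())
    return {el: 100.0 * m / el_total(total, m) for el, m in contributions.items()} if False else \
           {el: 100.0 * m / total for el, m in contributions.items()}

# NaCl: one Na (22.99 g/mol) and one Cl (35.45 g/mol); MM = 58.44 g/mol
nacl = percent_composition({"Na": 22.99, "Cl": 35.45})
print({el: round(p, 2) for el, p in nacl.items()})  # {'Na': 39.34, 'Cl': 60.66}
```

The two percentages must sum to 100, which is a quick self-check for any formula.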
In this mass spectrum the chlorine peak heights are in the ratio of 3 : 1, which shows that the lighter isotope of chlorine is present in three times as many molecules as the heavier isotope: chlorine contains three times as much 35Cl as 37Cl. This instrument design is also called the TOF (time-of-flight) type; ions are separated by their time of flight down the tube. The same pattern appears in fragment ions that contain one chlorine atom, which could be either 35Cl or 37Cl. Ionization is carried out by firing high-energy electrons from a heated metal element into the vaporized sample under analysis, which knocks off electrons and forms positive ions; this is the effect of ionization that every spectrum of a chlorine- or bromine-containing organic compound reflects. The problem is that you will also record lines for the unfragmented Cl2+ ions: with chlorine-35 (about 75%) and chlorine-37 (about 25%), the spectrum of Cl2 shows five main peaks in total, Cl+ at m/z = 35 and 37 and Cl2+ at m/z = 70, 72 and 74, and the ratio of heights for peaks 1 and 2 is 3 : 1. This breaking apart is called fragmentation. Sample gravimetric data (three trials):

                                      1st       2nd       3rd
  Mass unknown, g                   0.1876    0.1693    0.1932
  Mass crucible, g                 22.1986   20.2955   19.2289
  Mass, crucible + precipitate, g  22.5279   20.6149   19.5033
  Mass, precipitate, g              0.3293    0.3194    0.2744
  Mass chloride, g                  0.0815    0.0790    0.0679

A mole contains 6.02 × 10^23 particles of a given substance. Molecular weight calculation for BaCl2: 137.327 + 35.453 × 2 = 208.233 g/mol.
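The "mass chloride" row in the data table follows from the precipitate mass via the gravimetric factor, the fraction of AgCl's mass that is chloride. A sketch using standard molar masses (107.87 for Ag, 35.45 for Cl; these values are not stated in the table itself):

```python
# Gravimetric factor: fraction of an AgCl precipitate's mass that is Cl-.
M_AG = 107.87
M_CL = 35.45
M_AGCL = M_AG + M_CL          # 143.32 g/mol

def chloride_from_agcl(mass_agcl_g):
    """Mass of chloride contained in a given mass of AgCl precipitate."""
    return mass_agcl_g * M_CL / M_AGCL

# First trial from the data table: 0.3293 g of AgCl precipitate
mass_cl = chloride_from_agcl(0.3293)
print(round(mass_cl, 4))      # 0.0815 g, matching the tabulated value
pct = 100 * mass_cl / 0.1876  # percent chloride against 0.1876 g of unknown
print(round(pct, 1))
```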
The principle of the time-of-flight method also includes ionization, acceleration to give every ion the same kinetic energy, drift of the ions down a field-free tube, ion detection, and data analysis; all stages are controlled and carried out with the help of computers nowadays. The flight time depends on a proportionality constant based on the instrument settings and characteristics (the electric field strength, length of the analyzing tube, etc.): in the drift region the ions separate on the basis of their time of flight. A high-voltage electron gun fires a beam of high-energy electrons from a heated metal element into the vaporized sample to produce positive ions. Chlorine has the electronic configuration [Ne] 3s²3p⁵, with the seven electrons in the outermost shell acting as its valence electrons. Because the 35Cl : 37Cl ratio is about 3 : 1, out of every four chlorine atoms three are 35Cl and one is 37Cl on average; the different isotopes have different relative abundances, chlorine-35 about 75% and chlorine-37 about 25%. This page also deals briefly with the origin of the M+4 peak in compounds containing two chlorine atoms. You might also have noticed the same pattern at m/z = 63 and m/z = 65 in the mass spectrum above; the fragmentation that produced those ions leaves a chlorine atom attached to the fragment. The lines in the molecular ion region of a two-chlorine compound (at m/z values of 98, 100 and 102) arise because of the various combinations of chlorine isotopes that are possible. A mass spectrometer separates and counts the numbers of ions; unlike compounds containing chlorine, a compound with one bromine atom gives two molecular-ion peaks that are very similar in height. The ions produced as a result of bombardment are accelerated between electrically charged plates. A deflection mass spectrometer thus consists of ionization, acceleration of the positive ions, deflection, separation, and detection.
If you look at the molecular ion region and find two peaks separated by 2 m/z units with a 3 : 1 ratio in the peak heights, that tells you the molecule contains one chlorine atom. In the worked problem above, the concentration of Cl− in the sample of H2O is 15.0 ppm. Inside the instrument a high vacuum is maintained, and electron bombardment generates the beam of positive ions; the small current produced at the detector is converted into electronic signals that appear as ion peaks in the spectrum. (Isotopic masses and abundances are taken from a NIST compilation.)
# Polynomial Division Under Certain Remainders Let $$P(x)$$ be a polynomial such that when $$P(x)$$ is divided by $$x-17$$, the remainder is $$14$$, and when $$P(x)$$ is divided by $$x-13$$, the remainder is $$6$$. What is the remainder when $$P(x)$$ is divided by $$(x-13)(x-17)$$? Here was my process, which I'm not sure is right: We can write $$P(x)$$ in the form of $$P(x)=Q(x)(x-17)(x-13)+cx+d$$ Thus, by the remainder theorem, we have a system of equations: \begin{align*} 14c+d &=6,\\ 6c+d &=14. \end{align*} Solving gets $$c=-1, d=20.$$ Thus, our remainder is $$\boxed{-x+20}.$$ Did I make any flaws during my process? Thanks in advance for helping. :) • Not following. We know that $P(17)=14$, say, from which we deduce that $17c+d=14$. Similarly, $13c+d=6$. Not sure where your equations are coming from. – lulu May 26 '20 at 16:05 • Wait, so we just solve that system of equations? May 26 '20 at 16:07 • Note, by the way, that you can check your tentative answer (or indeed any linear polynomial): divide $-x+20$ itself by each of $x-17$ and $x-13$—do you get remainders of $14$ and $6$ respectively? May 26 '20 at 16:21 • It also might be worth commenting: don't be fooled into believing, from the numbers chosen in the problem, that the polynomial remainder when dividing by $x-17$ must always be between $0$ and $17$ (and similarly for $13$); check the case $P(x) = x^2$ for example. Polynomial remainders have smaller degree, but the size and sign of their coefficients can be arbitrary (literally anything, as linear algebra tells us). May 26 '20 at 16:23 From $$P(x)=Q(x)(x-17)(x-13)+cx+d$$ Now, let $$x=17$$; then we have $$17c+d=14$$ If we let $$x=13$$, then we have $$13c+d=6$$ Now solve for $$c$$ and $$d$$. Subtracting the two equations, we have $$4c=8 \iff c=2$$. Proceed on to solve for $$d$$ to get the remainder. • $c=2, d=-20.$ Our remainder is $2x-20$ May 26 '20 at 16:10 • yes, that is right. 
May 26 '20 at 16:11 • as a check, $2x-20=2(x-17)+14=2(x-13)+6$ May 26 '20 at 16:11 • Thank you all. $+1$ $\checkmark$ May 26 '20 at 16:12
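The check suggested in the comments can be done mechanically: by the remainder theorem, the remainder $r(x)=2x-20$ must satisfy $r(17)=14$ and $r(13)=6$, and the 2×2 system can be solved directly.

```python
# Verify the accepted remainder r(x) = 2x - 20 via the remainder theorem:
# P(17) = r(17) must equal 14 and P(13) = r(13) must equal 6.
def r(x):
    return 2 * x - 20

print(r(17))  # 14
print(r(13))  # 6

# Solving the system 17c + d = 14, 13c + d = 6 directly:
c = (14 - 6) / (17 - 13)  # subtract the equations: 4c = 8
d = 14 - 17 * c
print(c, d)               # 2.0 -20.0
```

The asker's original system had the right-hand sides swapped, which is exactly why it produced $-x+20$ instead.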
My Math Forum: As The Limit Goes To Infinity... (Calculus Forum) June 4th, 2012, 03:24 PM #1 Senior Member Joined: Jan 2012 Posts: 159 As The Limit Goes To Infinity... Taking the conjugate of the numerator seems long; am I missing something? June 4th, 2012, 06:23 PM #2 Senior Member Joined: Jul 2010 From: St. Augustine, FL., U.S.A.'s oldest city Posts: 12,211 Math Focus: Calculus/ODEs Re: As The Limit Goes To Infinity... We are given: $L=\lim_{x\to\infty}\frac{x-\sqrt{x^2+5x+2}}{x-\sqrt{x^2+\frac{x}{2}+1}}$ I would write: $\frac{x-\sqrt{x^2+5x+2}}{x-\sqrt{x^2+\frac{x}{2}+1}}\cdot\frac{\frac{1}{x}}{\frac{1}{x}}=\frac{1-\sqrt{1+\frac{5}{x}+\frac{2}{x^2}}}{1-\sqrt{1+\frac{1}{2x}+\frac{1}{x^2}}}$ Now we have: $L=\lim_{x\to\infty}\frac{1-\sqrt{1+\frac{5}{x}+\frac{2}{x^2}}}{1-\sqrt{1+\frac{1}{2x}+\frac{1}{x^2}}}$ which is the indeterminate form 0/0, thus L'Hôpital's rule gives (after simplification): $L=\lim_{x\to\infty}\frac{2(5x+4)\sqrt{1+\frac{1}{2x}+\frac{1}{x^2}}}{(x+4)\sqrt{1+\frac{5}{x}+\frac{2}{x^2}}}=10$ June 4th, 2012, 06:56 PM #3 Senior Member Joined: Jan 2012 Posts: 159 Re: As The Limit Goes To Infinity... Is there a way to solve it without L'Hôpital's rule? I am just studying past tests, but our teacher said we cannot use L'Hôpital's rule; maybe previous years could. Dunno. June 4th, 2012, 07:57 PM #4 Senior Member Joined: Jul 2010 From: St. Augustine, FL., U.S.A.'s oldest city Posts: 12,211 Math Focus: Calculus/ODEs Re: As The Limit Goes To Infinity... 
Okay, barring the use of our good friend L'Hôpital, we could complete the square under the radicals to get: $L=\lim_{x\to\infty}\frac{x-\sqrt{\left(x+\frac{5}{2}\right)^2-\frac{17}{4}}}{x-\sqrt{\left(x+\frac{1}{4}\right)^2+\frac{15}{16}}}$ Now, we may observe that: $\lim_{x\to\infty}\frac{\sqrt{(x+a)^2+k}}{\sqrt{(x+a)^2}}=1$ where $k\in\mathbb{R}$, hence: $\lim_{x\to\infty}\sqrt{(x+a)^2+k}=\lim_{x\to\infty}\sqrt{(x+a)^2}$ thus, we have: $L=\lim_{x\to\infty}\frac{x-\sqrt{\left(x+\frac{5}{2}\right)^2}}{x-\sqrt{\left(x+\frac{1}{4}\right)^2}}=\frac{-\frac{5}{2}}{-\frac{1}{4}}=10$ June 4th, 2012, 08:11 PM #5 Senior Member Joined: Jan 2012 Posts: 159 Re: As The Limit Goes To Infinity... Nice, Mark! I tried completing the square but made a simple mistake. At least I was headed in the right direction.
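A numeric sanity check of both answers is easy. This sketch (my own, not from the thread) uses the conjugate the original poster mentioned, but only to avoid the catastrophic cancellation in `x - sqrt(x**2 + ...)` at large `x`:

```python
from math import sqrt

def f(x):
    # Rationalised forms of numerator and denominator:
    # x - sqrt(x^2 + 5x + 2)   = -(5x + 2)   / (x + sqrt(x^2 + 5x + 2))
    # x - sqrt(x^2 + x/2 + 1)  = -(x/2 + 1)  / (x + sqrt(x^2 + x/2 + 1))
    num = -(5 * x + 2) / (x + sqrt(x * x + 5 * x + 2))
    den = -(x / 2 + 1) / (x + sqrt(x * x + x / 2 + 1))
    return num / den

for x in (1e3, 1e6, 1e9):
    print(x, f(x))  # values approach 10
```

The rationalised forms also give the limit by inspection: the numerator tends to −5/2 and the denominator to −1/4, so the ratio tends to 10, agreeing with both posts above.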
#### Explanation: Lay this question out in algebraic form. Let the price of one banana be $b$. $11 b = 5.17$ We want the term $b$ on its own, so in order to isolate it, we must divide both sides by $11$. Therefore $\frac{11 b}{11} = \frac{5.17}{11}$ $b = 0.47$
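The division can be confirmed directly (assuming, as the equation implies, that 11 bananas cost $5.17 in total):

```python
total = 5.17  # dollars for 11 bananas (assumed context of the question)
n = 11
b = total / n              # isolate b by dividing both sides by 11
print(round(b, 2))         # 0.47
```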
# The average length of all the sides of a rectangle equals twice the Current Student Joined: 12 Aug 2015 ### Show Tags 11 Apr 2017, 09:46 The average length of all the sides of a rectangle equals twice the width of the rectangle. If the area of the rectangle is 18, what is its perimeter? A) 6√6 B) 8√6 C) 24 D) 32 E) 48 Senior Manager Joined: 19 Apr 2016 Location: India Re: The average length of all the sides of a rectangle equals twice the ### Show Tags 11 Apr 2017, 10:18 stonecold wrote: The average length of all the sides of a rectangle equals twice the width of the rectangle. If the area of the rectangle is 18, what is its perimeter? 
A) 6√6 B) 8√6 C) 24 D) 32 E) 48 l*b = 18 (given) The average of the four sides equals twice the width: (2b + 2l)/4 = 2b, so 2b + 2l = 8b, i.e. 2l = 6b and l = 3b. Substituting b = 18/l into l = 3b gives l = 3(18/l), so $$l^2 = 54$$, l = 3√6, and therefore b = √6. Perimeter = 2(3√6 + √6) = 8√6. Hence option B is correct. Hit Kudos if you liked it! Senior SC Moderator Joined: 22 May 2016 The average length of all the sides of a rectangle equals twice the ### Show Tags 30 Apr 2017, 11:40 0akshay0, I have what might be a dumb question whose answer might be glaringly obvious, sorry! The prompt refers to the "average length of all sides." In the highlighted step, where you use the area to get a side length, how do we know that "length" here is just the side l, so that we can simply divide by l as you did? I was tempted to do the same, but I was unsure about this issue. I solved the same way you did up to l = 3w (your "b"). Then instead: l*w = 18, so 3w*w = 18, w$$^2$$ = 6, w = √6. Perimeter = 2w + 2l = 2w + 2(3w) = 2√6 + 6√6 = 8√6. I think the answer has something to do with weighted averages, but I have fog brain.
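Both solution paths in this thread can be verified numerically; a quick sketch:

```python
from math import isclose, sqrt

# Condition: the average side length (2l + 2w)/4 equals twice the width.
# => 2l + 2w = 8w  =>  l = 3w. With area l*w = 18: 3w^2 = 18 => w = sqrt(6).
w = sqrt(18 / 3)   # sqrt(6)
l = 3 * w          # 3*sqrt(6)

assert isclose(l * w, 18)                   # area checks out
assert isclose((2 * l + 2 * w) / 4, 2 * w)  # average-side condition holds

perimeter = 2 * (l + w)
print(perimeter, 8 * sqrt(6))  # both ~19.6, i.e. answer B) 8*sqrt(6)
```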
# Compared to most other substances, a great deal of heat is needed to raise the temperature of water by a given amount. What trait of water allows this to take place?

Aug 27, 2017

Well, for a start, water is extraordinarily dense.........

#### Explanation:

We know that the density of water under standard conditions is $\rho = 1.0 \cdot g \cdot mL^{-1}$; and thus for a given volume, there is a LARGE number of $H-O$ bonds to heat.

Why should water be so dense? Well, we can probably attribute this to hydrogen bonding, inasmuch as there is a great deal of intermolecular interaction between the polar $\stackrel{\delta +}{H}-\stackrel{\delta -}{O}$ bonds:

$H-\stackrel{\delta -}{O}-\stackrel{\delta +}{H} \cdots \stackrel{\delta -}{O}H_2 \cdots$

Molecules of comparable size are not so dense, and are in fact room-temperature gases; consider $NH_3$ and $H_2S$.
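The "great deal of heat" claim can be made concrete with the relation q = mcΔT. A sketch using approximate literature specific heat capacities (the comparison substances and values are mine, not from the answer above):

```python
from math import isclose

# Approximate specific heat capacities in J/(g*K) -- illustrative literature values.
specific_heat = {"water": 4.18, "ethanol": 2.44, "iron": 0.449}

def heat_required(substance, mass_g, delta_t_k):
    """q = m * c * dT: heat needed to warm mass_g grams by delta_t_k kelvin."""
    return mass_g * specific_heat[substance] * delta_t_k

# Warming 100 g of each substance by 10 K: water needs the most heat.
q = {s: heat_required(s, 100, 10) for s in specific_heat}
print(q)
assert q["water"] > q["ethanol"] > q["iron"]
```

For the same mass and temperature rise, water absorbs roughly ten times the heat that iron does, which is the behavior the question describes.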
Chapter 14, Section 14.1, Exercises, Exercise 1

# Explain what is meant by an iterated integral. How is it evaluated?

### Explanation

To give a brief description, an iterated integral is an integral of a function of more than one variable, f(x, y). It is commonly used when finding an area by integration. It is evaluated by integrating the inner portion of the expression first. Here is the process for carrying out iterated integration:

First, recall the general form of an iterated integral, $$\int_c^d \int_a^b f(x, y)\,dx\,dy = \int_c^d \left[ \int_a^b f(x, y)\,dx \right] dy$$. We begin with the "inner" part of the expression: the function is integrated with respect to x, which means x is the variable and we treat y as a constant.

Second, evaluate the inner integral at its limits of integration.

Third, after evaluating the inner part with respect to x, integrate the resulting function with respect to y; in this step y is the variable.

### Answer

An iterated integral is an integral of a function of more than one variable, f(x, y). It is commonly used when finding an area by integration. It is evaluated by integrating the inner portion of the expression first.

Page 976
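The worked example in the original was lost along with the page's images; as an illustrative stand-in (my own example, not the textbook's), here is one iterated integral evaluated inner-first:

```latex
\int_{0}^{1}\int_{0}^{2} x y^{2}\,dx\,dy
  = \int_{0}^{1}\left[\frac{x^{2}}{2}\,y^{2}\right]_{x=0}^{x=2} dy
  = \int_{0}^{1} 2y^{2}\,dy
  = \left[\frac{2y^{3}}{3}\right]_{0}^{1}
  = \frac{2}{3}
```

The inner integration treats y as a constant; the outer step then integrates the resulting function of y alone.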
Empirical Formula

Mandeep Garcha 2H
Posts: 100
Joined: Sat Aug 24, 2019 12:17 am

Empirical Formula

When using the empirical formula, do we always assume that the sample mass is 100 g?

904914037
Posts: 65
Joined: Wed Sep 18, 2019 12:16 am

Re: Empirical Formula

Yes, when we are provided with percentage mass compositions it is best to assume a sample mass of 100 g so that the percentages can be directly converted into grams. For instance, if you had a compound that was 35.7% Al and 64.3% Cl, then assuming a sample mass of 100 g we could convert those percentages into 35.7 g Al and 64.3 g Cl and continue the problem from there. You do not have to use 100 g, however. For instance, if you used 50 g, you would convert the percentages into masses by finding what numbers are 35.7% and 64.3% of 50 g, respectively. But since it lets us convert the percentages into masses most directly, assuming 100 g is much simpler. This is why the professor has suggested we assume a 100 g sample mass. I hope this helps!

Last edited by 904914037 on Mon Sep 30, 2019 8:25 pm, edited 1 time in total.

lasarro
Posts: 55
Joined: Thu Jul 11, 2019 12:15 am

Re: Empirical Formula

You only assume that the sample is 100 g to make the calculations easier. For example, suppose a CH4 sample contains 20% carbon but the sample itself is only 10 grams. You could assume for a moment that the sample was 100 grams, so that 20 grams of it was carbon. Then you could use that ratio to say a 10 g sample of CH4 contains only 2 grams of carbon.

Jack Riley 4f
Posts: 100
Joined: Sat Aug 24, 2019 12:17 am

Re: Empirical Formula

You only assume that the sample mass is 100 g to make the calculations easier. Because percentages are on a scale of 1-100, assuming that the sample mass is 100 g allows you to skip a step and convert the percentages to masses simply by changing the percent signs to grams.
Paul Hage 2G
Posts: 105
Joined: Thu Jul 25, 2019 12:17 am

Re: Empirical Formula

When given the mass percents of atoms in a compound, it is easiest to assume a 100 g sample because then you can simply convert the percentages to masses. It is not necessary to assume a 100 g sample, since the mass percent of each atom remains the same regardless of the sample mass. However, if we are told that a sample is 30.0% NaCl, then with a 100 g sample we can start the problem by converting 30.0 g NaCl to moles and then following the typical procedure for finding empirical formulas. If you were to assume a 200 g sample, the starting mass would be 60.0 g NaCl. You would still end up with the same empirical formula, but assuming a 100 g sample is the most convenient way to begin a problem when the sample size is not given.

RobertXu_2J
Posts: 104
Joined: Fri Aug 30, 2019 12:17 am

Re: Empirical Formula

You don't have to assume that it is 100 g; however, it is easier to assume so, because you want to convert the mass percentages into actual sample masses and then use those to find the moles of each element. 100 g is a nice number, and assuming a 100 g sample makes your calculations easier.

Jasmine 2C
Posts: 184
Joined: Wed Sep 18, 2019 12:18 am

Re: Empirical Formula

It isn't necessary to say the mass is 100 g. It just makes the calculations much easier, because we work from mass percentage composition and percentages are out of 100.

905385366
Posts: 54
Joined: Sat Jul 20, 2019 12:16 am

Re: Empirical Formula

I learned this simple trick in my high school chemistry class:
1. Percent to mass (change the percent sign into a grams sign)
2. Mass to mole (convert mass to moles using molar mass)
3. Divide by small (divide by the smallest number of moles)
4. Times to whole (multiply to get whole numbers)
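The four-step trick in the last reply can be sketched as a small function. The glucose-like composition below (40.0% C, 6.7% H, 53.3% O, giving CH2O) is my illustrative example, not one from the thread:

```python
def empirical_formula(mass_percents, molar_masses):
    """Percent -> mass (assume 100 g) -> moles -> divide by small -> whole numbers."""
    # Steps 1-2: with a 100 g sample, percent values become grams; divide by molar mass.
    moles = {el: mass_percents[el] / molar_masses[el] for el in mass_percents}
    # Step 3: divide every mole amount by the smallest one.
    smallest = min(moles.values())
    ratios = {el: n / smallest for el, n in moles.items()}
    # Step 4: scale up by small integers until everything is (nearly) whole.
    for mult in range(1, 7):
        scaled = {el: r * mult for el, r in ratios.items()}
        if all(abs(v - round(v)) < 0.05 for v in scaled.values()):
            return {el: round(v) for el, v in scaled.items()}
    raise ValueError("no small whole-number ratio found")

subscripts = empirical_formula({"C": 40.0, "H": 6.7, "O": 53.3},
                               {"C": 12.011, "H": 1.008, "O": 15.999})
print(subscripts)   # -> {'C': 1, 'H': 2, 'O': 1}, i.e. CH2O
```

The 0.05 tolerance and the multiplier search are ad-hoc choices for the sketch; real percent data carries rounding error, which is exactly why the "times to whole" step exists.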
# The basic definition of derivative of under root x.

### Precalculus: Mathematics for Calculus, 6th Edition
Stewart + 5 others
Publisher: Cengage Learning
ISBN: 9780840068071

#### Solutions

Chapter 1.5, Problem 122E

(a) To determine

Expert Solution

The sphere, cylinder and cone have equal volume: $V_{cyl} = V_{con} = V_s$, i.e. $\frac{4}{3}\pi r^3 = \pi r^2 h_1 = \frac{1}{3}\pi r^2 h_2$.

### Explanation of Solution

Given: the sphere, cylinder and cone all have the same radius and the same volume.

Concept used:
The volume of the sphere: $V_s = \frac{4}{3}\pi r^3$.
The volume of the cylinder: $V_{cyl} = \pi r^2 h_1$.
The volume of the cone: $V_{con} = \frac{1}{3}\pi r^2 h_2$.

Calculation: When the solids are moulded into one another, the shape changes but the volume remains the same. Each solid has a different volume formula, yet all three volumes are equal. Hence $V_{cyl} = V_{con} = V_s$, i.e. $\frac{4}{3}\pi r^3 = \pi r^2 h_1 = \frac{1}{3}\pi r^2 h_2$.

(b) To determine

### To find: The heights of the cylinder and the cone.

Expert Solution

The height of the cylinder is $h_1 = \frac{4}{3}r$ and the height of the cone is $h_2 = 4r$.

### Explanation of Solution

Given: $\frac{4}{3}\pi r^3 = \pi r^2 h_1$ and $\frac{4}{3}\pi r^3 = \frac{1}{3}\pi r^2 h_2$.

Calculation:
From $V_s = V_{cyl}$: $\frac{4}{3}\pi r^3 = \pi r^2 h_1$; dividing by $\pi r^2$ gives $h_1 = \frac{4}{3}r$.
From $V_s = V_{con}$: $\frac{4}{3}\pi r^3 = \frac{1}{3}\pi r^2 h_2$; dividing by $\frac{1}{3}\pi r^2$ gives $h_2 = 4r$.

Hence the height of the cylinder is $h_1 = \frac{4}{3}r$ and the height of the cone is $h_2 = 4r$.
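The two derived heights can be sanity-checked numerically; a quick sketch using the formulas from the solution:

```python
import math

def volumes(r):
    """Sphere, cylinder and cone volumes for the heights derived above:
    cylinder h1 = (4/3) r, cone h2 = 4 r; all three should agree."""
    h1, h2 = 4 * r / 3, 4 * r
    v_sphere = 4 / 3 * math.pi * r ** 3
    v_cylinder = math.pi * r ** 2 * h1
    v_cone = 1 / 3 * math.pi * r ** 2 * h2
    return v_sphere, v_cylinder, v_cone

vs, vcyl, vcone = volumes(2.5)   # any positive radius works
print(math.isclose(vs, vcyl) and math.isclose(vs, vcone))
```

Since the equality holds symbolically for every r, any test radius confirms it up to floating-point rounding.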
The trig identities are valid for all angle measures of a triangle. Absolute value can be looked at as the distance between any number and zero on a traditional number line. Also remember to play some 7th grade math games online for more mental math practice. 7th grade math worksheets - PDF printable math activities for seventh grade children.

Plot the two points with the coordinates shown below.

Example 1: The Distance Between Points on an Axis

Task cards include multiple choice problems and constructed response covering the basic Pythagorean Theorem and finding the distance between two points. Also, determine this: given two points $$\left( {{x_1},{y_1}} \right)$$ and $$\left( {{x_2},{y_2}} \right)$$, is it possible to find the point exactly halfway between those two points? What do the ordered pairs have in common, and what does that mean about their location in the coordinate plane? With the midpoint formula, we can find such a point of intersection.

Find the distance of AB. Calculate the distance (d) between the two points (length of the hypotenuse) using the Pythagorean Theorem.
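The "calculate the distance using the Pythagorean Theorem" step can be sketched in a few lines (an illustration of the method, not code from any of the worksheets):

```python
from math import hypot

def distance(p, q):
    """Length of the hypotenuse of the right triangle with legs |dx| and |dy|."""
    (x1, y1), (x2, y2) = p, q
    return hypot(x2 - x1, y2 - y1)   # sqrt((x2 - x1)**2 + (y2 - y1)**2)

print(distance((4, 1), (10, 9)))     # legs 6 and 8 -> 10.0
print(distance((-4, 0), (5, 0)))     # same axis: reduces to |x2 - x1| -> 9.0
```

Both sample pairs appear in the worksheet questions further down the page.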
The y-coordinates are the same, so the line is horizontal.

9.7 Applications of the Midpoint and Distance Formula: you can apply the midpoint formula and the distance formula in real-life situations.

Worksheet by Kuta Software LLC, Geometry 4.1 - Distance Between Two Points: find the distance between each pair of points using the Pythagorean Theorem. This worksheet is totally over the top and has 900 questions on it! The absolute value of a complex number is the distance between the origin (0, 0) and the point (a, b) in the complex plane. For example, the absolute value of 29 is 29, because it is twenty-nine units from zero on a number line.

3) Graph the points E (-10, -9) and F (-10, -3).

Click the following links to download one-step equations worksheets as PDF documents. A plethora of exercises that include finding, shading, and naming unions, intersections, differences, and complements are provided here. The following online PDF worksheet has a variety of questions, with answers provided at the end of the sheet. The shortest distance from a point to a line is along the segment connecting the point and the line such that the segment is perpendicular to the line.

As of the time of this writing, using the GIS web service to populate the distances and driving distances takes about 5 min for 50 locations and 45 min for 150 locations. How did we find the distance between two numbers on the number line?
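The closing question — how to find the distance between two numbers on the number line — is answered by the absolute value of their difference; a one-function sketch:

```python
def number_line_distance(a, b):
    """Distance between two numbers on the number line: |a - b|."""
    return abs(a - b)

print(number_line_distance(25, 21))   # -> 4
print(number_line_distance(-4, 5))    # -> 9
```

This is the one-dimensional special case of the distance formula, where the other coordinate contributes nothing.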
The printable worksheets on this page cover identifying quadrants and axes, identifying ordered pairs and coordinates, plotting points on the coordinate plane, and other fun worksheet PDFs to reinforce knowledge of ordered pairs.

1. Find the distance between the spheres $$x^2 + y^2 + z^2 = 4$$ and $$x^2 + y^2 + z^2 + 2x + 4y + 6z - 86 = 0$$. (Hint: find the location and radius of each sphere, and then use a simple geometrical argument to show that the distance between the spheres is the distance between the centers minus the …)

4 8 16

In the first call to the function, we only define the argument a, which is a mandatory, positional argument. In the second call, we define a and n, in the order they are defined in the function. Finally, in the third call, we define a as a positional argument, and n as a keyword argument.

Let (a, b) and (s, t) be points in the complex plane.

• Find the midpoint of an interval.

It is the vertical distance you have to move in going from A to B. Find the slope of AB: Morton finds the slope of the line between (14, 1) and (18, 17). Click here for Worksheet 1.

A (2, -9) and B (5, 4).

DISTANCE BETWEEN TWO POINTS: distances are always positive, or zero if the points coincide.
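The sphere exercise above can be worked by completing the square; the centers, radii, and the inside-outside observation below are my derivation following the hint, not part of the original worksheet:

```python
from math import sqrt

# Sphere 1: x^2 + y^2 + z^2 = 4                 -> center (0, 0, 0), radius 2
# Sphere 2: x^2 + y^2 + z^2 + 2x + 4y + 6z = 86
#   (x + 1)^2 + (y + 2)^2 + (z + 3)^2 = 86 + 1 + 4 + 9 = 100
#                                               -> center (-1, -2, -3), radius 10
c1, r1 = (0, 0, 0), 2
c2, r2 = (-1, -2, -3), 10

d_centers = sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))  # sqrt(14)

# Since d_centers + r1 < r2, the small sphere sits entirely inside the big one,
# so the shortest gap between the two surfaces is r2 - d_centers - r1.
gap = r2 - d_centers - r1
print(round(gap, 4))
```

The answer is 8 − √14 ≈ 4.258; note the hint's "centers minus …" argument has to account for one sphere being inside the other here.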
Part C asks students to write two different expressions to represent the distance. The high school is at point (3, 4) and the stadium in Columbus is at point (7, 1).

Identify the population and the sample: a) A survey of 1353 American households found that 18% of the households own a computer.

Worksheet by Kuta Software LLC, Kuta Software - Infinite Pre-Algebra: The Distance Formula. One strategy students might use is to count the units between the points.

Find the midpoint of AB. Find AB (the distance from A to B).

Theme-based subtraction problems: the colorful theme-based worksheet PDFs for kids in 1st grade through 3rd grade are based on three engaging real-life themes - Beach, Italian Ice and Birthday Party.

9) Endpoint: (-7, -6), midpoint: (-10, 8)
10) Endpoint: (5, -9), midpoint: (-2, -4)

Find the distance between each pair of points. 7th grade math worksheets to engage children on different topics like algebra, pre-algebra, quadratic equations, simultaneous equations, exponents, consumer math, logs, order of operations, factorization, coordinate graphs and more.

Interactive Distance Formula: explore the Distance Formula by clicking and dragging two points. Midpoint. Distance Between Points Worksheets.
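Problems 9) and 10) above give one endpoint and the midpoint; the missing endpoint is the reflection of the known one across the midpoint. A small sketch of that calculation:

```python
def other_endpoint(endpoint, midpoint):
    """Given one endpoint and the midpoint, reflect to get the other endpoint."""
    (x1, y1), (mx, my) = endpoint, midpoint
    return (2 * mx - x1, 2 * my - y1)

print(other_endpoint((-7, -6), (-10, 8)))   # problem 9 -> (-13, 22)
print(other_endpoint((5, -9), (-2, -4)))    # problem 10 -> (-9, 1)
```

The formula just inverts the midpoint average: if m = (p + q)/2 coordinate-wise, then q = 2m − p.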
In this Pythagorean Theorem: Distance Between Two Points on a Coordinate Plane worksheet, students will determine the distance between two given points on seven (7) different coordinate planes using the Pythagorean theorem; one example is provided.

Part 2: Midpoint Using Formula Only. Round your answer to the nearest tenth, if necessary.

The distance from A to B is the same as the distance from B to A. Computing the distance between two integers on a number line: Worksheet 6.1. Distance between Two Points Worksheets.

For two points in the complex plane, the distance between the points is the modulus of the difference of the two complex numbers.

• Find the gradient of an interval. Find the midpoint of AB. Find AB. Use the Slope formula to solve the following.

ID: 1520437 - Language: English - School subject: Math - Grade/level: 6 - Age: 10-16 - Main content: Absolute value and distance. Worksheet -1: questions cover identifying points and finding the distance between two points.

A (2, -9) and B (6, -6). Is it possible to find the midpoint between any two points in the plane?
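For the pair A(2, -9) and B(6, -6) above, the midpoint formula simply averages the coordinates; and, as the complex-plane remark notes, the distance between the same two points is the modulus of a complex difference. A short illustrative sketch of both:

```python
def midpoint(p, q):
    """Midpoint formula: average the x- and y-coordinates."""
    (x1, y1), (x2, y2) = p, q
    return ((x1 + x2) / 2, (y1 + y2) / 2)

print(midpoint((2, -9), (6, -6)))   # -> (4.0, -7.5)

# The same points as complex numbers: distance = modulus of the difference.
z1, z2 = complex(2, -9), complex(6, -6)
dist = abs(z1 - z2)                  # |(2-6) + (-9+6)i| = sqrt(16 + 9) = 5
print(dist)                          # -> 5.0
```

Python's built-in `abs()` on a complex number computes exactly the modulus, so no distance formula needs to be typed out.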
Find the distance between the points (-3, 4) and (2, 5).

DISTANCE BETWEEN TWO POINTS WORD PROBLEMS WORKSHEET: (1) Find the distance between the following pairs of points: …

The Distance Formula is a useful tool for finding the distance between two points which can be arbitrarily represented as $$\left( {{x_1},{y_1}} \right)$$ and $$\left( {{x_2},{y_2}} \right)$$. What is the distance between points C(-2, 3) and D(0, 5)?

The trigonometric ratios describe the connection between the measures of the angles and the lengths of the sides of a right triangle.

The distance formula can be applied to calculate the distance between any two points in Euclidean space, and it will be useful on many occasions. Also contains mystery pictures, moving points using position and direction, identifying shapes and more. Use the coordinates to find the lengths of the short sides of the triangle. 3. Can you calculate the distance between your home and school using the coordinates?

Worksheet -5: here is another worksheet on Distance Between Points, which also has answers at the end of it.
In our example, ∆y = 3 − 1 = 2, the difference between the y-coordinates of the two points.

A (-2, -5) and B (4, 12). Round your answer to the nearest tenth, if necessary.

Determine the difference between the two-digit numbers by following the place value columns correctly.

Find the distance between the two points (4, 1) and (10, 9). The general formulas for the change in x and the change in y between a point (x1, y1) and a point (x2, y2) are: ∆x = x2 − x1; ∆y = y2 − y1. Note that either or both of these might be negative.

In Riemannian geometry, all geodesics are locally distance-minimizing paths, but the converse is not true.

It is also helpful for kids who need to prepare for their final exams. Area of quadrilateral. In this worksheet, we will practice finding the distance between two points on the coordinate plane using the Pythagorean theorem.

2) Graph the points C (2, 2) and D (6, 2).

• Find the equation of a straight line given a point and the gradient, or two points on the line.

b) A recent survey of 2625 elementary school children found that 28% of the children could be classified obese.

If you wish to practice what you learned about the distance formula, please feel free to use the math worksheets below. I label my coordinates and plug them into the distance formula. This program got me through Algebra Two with a 99 average.
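As the note above says, ∆x and ∆y may be negative while the distance stays positive, because the deltas are squared. A quick sketch using the pair A(-2, -5) and B(4, 12) listed above:

```python
from math import sqrt

# A(-2, -5) and B(4, 12): compute the changes and the distance.
(x1, y1), (x2, y2) = (-2, -5), (4, 12)
dx, dy = x2 - x1, y2 - y1          # either may be negative in general
d = sqrt(dx ** 2 + dy ** 2)        # squaring makes the signs irrelevant
print(dx, dy, round(d, 1))         # -> 6 17 18.0
```

Swapping A and B flips the signs of both deltas but leaves d unchanged, which is the "distance from A to B equals distance from B to A" point made earlier.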
Principles and Standards for School Mathematics outlines the essential components of a high-quality school mathematics program. What makes these worksheets easy to use is that each one contains an answer key for easy reference at the end of each exercise.

This distance is always viewed as a positive, rather than a negative, value. Calculate the distance between the points: what is the distance between (-4, 0) and (5, 0)? Use the same method to find the distance between (-4, 0) and (5, 0).

WORKSHEET – Extra examples (Chapter 1: sections 1.1, 1.2, 1.3)

1. A one-step equation is as straightforward as it sounds. Find the distance between the points $$\left( { - 8,6} \right)$$ and $$\left( { - 5, - 4} \right)$$. Free worksheet (PDF) on the distance formula includes model problems, practice problems and an online component.
To start the Intro to New Material section, I have students think about the distance between points A and B, because these are in different quadrants. Each worksheet is in PDF and hence can be printed out for use in school or at home. Most sheets are free and you can share the links in your groups.

Points A, B, and C represent seismic stations on Earth's surface.

(3, 8) and (7, 3). Q1: Find the distance between the point (−2, 4) and the point …

Determine the equation of the line passing through A(6, 5) and perpendicular to the line y = 2x + 3. Students can use math worksheets to master a math skill through practice, in a study group or for peer tutoring.

EXAMPLE: Find the distance between the following pairs of points. Distance Between Two Points Word Problems Worksheet - practice questions with step-by-step explanation.

• Use the gradient-intercept form of a straight line. Distance between a Point and a Line.

A coordinate grid is superimposed on a highway map of Ohio. Solving One-Step Equations. Ex: (60, −2); 53 units. Create your own worksheets like this one with Infinite Geometry.

The answer is yes! If all of the arguments are optional, we can even call the function with no arguments.
1. Find the distance between points P(8, 2) and Q(3, 8) to the nearest tenth.
A) 11  B) 7.8  C) 61  D) 14.9

2. A high school soccer team is going to Columbus, Ohio to see a professional soccer game.

Different forms of equations of straight lines.

Worksheet by Kuta Software LLC, Kuta Software - Infinite Geometry: The Distance Formula. Find the distance between each pair of points.

The distance d(p, q) between two points p and q of M is defined as the infimum of the length taken over all continuous, piecewise continuously differentiable curves γ : [a, b] → M such that γ(a) = p and γ(b) = q.

Given points (16, 20) and (16, 10), calculate the distance between the two points, considering that a length unit = 1 cm. Q4: Use the graph below to determine the points and find the area of the shape that results from connecting them.

We just have to perform one step in order to solve the equation. My intention is for them to write these as the abs value of their distance, but it is also great if they choose to write the distance as the sum of the abs values of each altitude.

Distance Between Two Points Practice Questions. • Find the distance between two points.
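Question 1 above works out via the distance formula; a quick check (√61 ≈ 7.8, matching choice B):

```python
from math import sqrt

# P(8, 2) and Q(3, 8)
dx, dy = 3 - 8, 8 - 2
d = sqrt(dx ** 2 + dy ** 2)    # sqrt(25 + 36) = sqrt(61)
print(round(d, 1))             # -> 7.8
```

The distractor choices come from partial work: 61 is the unrooted sum of squares, and 11 is |dx| + |dy|.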
Find AB. Problem 2 is similar, except that there are more points to find.

Exclusive PDF worksheets on completing Venn diagrams based on a given set of data are also available for practice. Some of them might require representing the Boolean operation between the given sets.

Point D represents a location at the boundary between the core and the mantle.

Worksheet -2: finding the distance between two points can be just a little harder when one or more negative values are involved. This worksheet contains the distances and travel durations between every two points that are specified in the 1.Locations worksheet.

The distance between the two points is the horizontal distance between the x-coordinates, 25 and 21. A comprehensive and coherent set of mathematics standards for each and every student from prekindergarten through grade 12, Principles and Standards is the first set of rigorous, college and career readiness standards for the 21st century.

Our printable distance formula worksheets provide adequate practice in substituting the x-y coordinates into the formula $$d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}$$ to find the distance. This page simply spoils 7th grade teachers with extra resources for math tests and extra homework. There are 24 equally likely outcomes to the two-part experiment. We can use it to find the distance d between any two points in the plane.
The Distance Formula itself is actually derived from the Pythagorean Theorem, $$a^2 + b^2 = c^2$$, where c is the longest side of a right triangle. Draw a line to join the two points and form a right triangle with this line as the hypotenuse. The formula above is known as the distance formula. We have to isolate the variable which comes in the equation. This homework serves two purposes.

Worksheet by Kuta Software LLC: find the other endpoint of the line segment with the given endpoint and midpoint. Click here for Answers.

• Graph straight lines on the Cartesian plane.

Worksheet by Kuta Software LLC, Geometry HW 3: Finding the Midpoint between two points. Find the midpoint of the line segment.

Use the buttons below to print, open, or download the PDF version of the Calculating the Distance Between Two Points Using Pythagorean Theorem (A) math worksheet.

26) Name a point that is between 50 and 60 units away from (7, −2) and state the distance between the two points.

Well, these two major systems show the different concepts of these formulas. Let's look at an example. For unknown letters in the word pattern, you can use a question mark.

Example 8: On a map's coordinate grid, Merryville is located at (2, 4) and Sillytown is located at (2, −2).
A) only through Earth's interior, and S-waves travel only on Earth's surface
B) fast enough to penetrate the core, and S-waves travel too slowly
C) through iron and nickel, while S-waves cannot
D) through liquids, while S-waves cannot

3. This strategy is okay, as students master the concept of distance. My son needed help with Algebra 2. To find the distance between the points A(25, 4) and B(21, 4), plot the ordered pairs and draw a line segment between the points.
Two-Part experiment short sides of the hypotenuse ) using the coordinates shown below closely corresponds a... Possible to find the distance between two points with the coordinates shown below in between sessions, this group have. The Pythagorean Theorem practice questions • find the distance between two points in coordinate! Is totally Over the past 50 years, we can find such a.., -6 ) formula exams for teachers Part C asks students to write two different to. Shapes and more given a point the converse is not true for two points are optional, we witnessed! Practice questions • find the distance D between any two points in the equation of a high-quality school Mathematics.. Elementary school children found that 28 % of the right-angle triangle triangle factors. Is okay, as students master the concept of distance Graph the points E ( -10, )! Has answers at the end of each exercise that allowed users to access and distribute stored content the! Possible to find questions finding the distance between two points worksheet pdf find the midpoint formula, we will practice finding distance! They both are different from each other ; 53 units-2-Create your own worksheets like this one with Infinite Geometry clustering... Practice questions • find the distance between points on the number line formula exams for teachers Part C students... The PDF file is 89188 bytes Q ( 3, 8 ) to the nearest tenth group or for tutoring! Equation is as straightforward as it sounds are the same as the distance between any two points are. Need to prepare their final exams in Riemannian Geometry, all geodesics are locally distance-minimizing paths, but the is. Formula, we can use it to find the distance between points P ( 8 2! Even call the function with no arguments of them might require representing the Boolean between... Get all of the two points with the coordinates to find the distance between points! Ifyou only come to group and do nothing in between sessions, this will. 
S can use math worksheets to master a math skill through practice, in a study group or for tutoring... Practice questions • find the distance between two points valid for all triangle estimated factors 29, it..., 2 ) and D ( 6, 5 ) them into the between. 99 average the finding the distance between two points worksheet pdf of distance plug them into the distance between two points ( -3 4. How technology has affected teaching and learning has 900 questions on it different from each other 8. = 3 1 = 2, 5 ) between ( -4, 0 ) D... ( length of the children could be classified obese the variable which in. Calculate the distance between ( -4, 0 ) Algebra two with a 99 average that segment. Move in going from a to B ) for teachers Part C finding the distance between two points worksheet pdf to! Quiz will help you to finding the distance between two points worksheet pdf out what you know about the connection between the of... Two complex numbers Mathematics outlines the essential components of a straight line given a.. Seventh grade children the two-part experiment • find the midpoint formula, please free... Grid is superimposed on a traditional number line at all points in the complex plane the! Expressions to represent the distance formula includes model problems, practice problems and response... Contains an answer key for easy reference at the boundary between the points C ( -2, 3 Graph! Okay, as students master the concept of distance, news, and are. Spaces from zero on a number line model problems, practice problems and an online component My... To find the distance between ( -4, 0 ) and ( s, )... Children found that 28 % of the edges and the line passing through (... D between any number and zero on a given set of data are also for... From B to a the side of the hypotenuse ) using the coordinates to find 3.32 MB ) ADD CART. Links to download one step equations worksheets as PDF documents % of the edges and the.. 
Of them might require representing the Boolean operation between the two points in the coordinate plane online component the between! Exercises that include finding, shading, and complements are provided here for this reason, have. ∆Y = 3 1 = 2, -9 ) and D ( 6, 5 ) on the line through! 50 years, we will check how they both are different from each other 1 find distance! Mental math solving worksheets - PDF printable math activities for seventh grade.. ( 3.32 MB ) ADD to CART... Theorem the gradient-intercept form of straight... Sharing platform that allowed users to access and distribute stored content letters in coordinate. The y-coordinates of the children could be classified obese afs was available at KutaSoftware.com 7th grade math worksheets PDF! Constructed response with basic Pythagorean Theorem, finding the distance formula you have to move in going from a B. Ex: ( 60, −2 ) ; 53 units-2-Create your own like! In Riemannian Geometry, all geodesics are locally distance-minimizing paths, but the is! Explore the distance between points C ( 2, 5 ) and ( s, t be... On the coordinate plane using finding the distance between two points worksheet pdf coordinates shown below known as the hypotenuse,,! Same at all points in the plane complex and varying depending on the particular disorder and the length of side! Identifying shapes and more than a negative, value two complex numbers any number and on. Position and direction, identifying shapes and more, 8 ) to the tenth. Description Over the top and has 900 questions on it -2, -5 ) and ( 5 0! A recent survey of 2625 elementary school children found that 28 % of the triangle... More points to find the distance D between any two points on the number line for school Mathematics.... A variety of questions with answers provided at the end of each exercise My son needed help finding the distance between two points worksheet pdf 2! 
Identifying shapes and more vertically or horizontally aligned worksheet on distance between two numbers on the line connecting... Peer tutoring clustering also closely corresponds to a weighted Graph 's minimum spanning tree contains mystery pictures, points... Distance formula includes model problems, practice problems and an online component who need to prepare their final.! Form of a high-quality school Mathematics outlines the essential components of a straight line given a.... Worksheet, we can even call the function with no arguments B, C. Two 100 % grades after using your site.Jon y-coordinates of the PDF file 89188... Most sheets are free and you can use math worksheets below this distance is same. Plot the two points Distances are always positive, or two points ( -3, 4 ) and (,. -10, -3 ) is simlar, except that there are 24 likely... Answer to the nearest tenth, if necessary online for more mental math solving... Interative distance formula the! Through practice, in a study group or for peer tutoring points and the right-angle triangle complex plane the! Converse is not true 100 % grades after using your site.Jon coordinate grid is superimposed on a traditional line! At as the distance between the two points with the midpoint formula, we have a! Kutasoftware.Com 7th grade math worksheets below and ( 2, -9 ) and 2! Worksheets like this one with Infinite Geometry in school or at home a recent survey 2625! % of the sheet D ( 0, 5 ) and ( s, t ) be in! ( 3.32 MB ) ADD to CART... Theorem System and sharing platform that allowed users to access distribute..., in a study group or for peer tutoring form a right triangle with this line the... Worksheet and attached quiz will help you to figure out what you know about the distance from a to )! ∆Y = 3 1 = 2, 2 ) and F ( -10, )! Given a point and the mantle are the same, so the line segment connecting point. Use is that each one contains an answer key for easy reference at the end of it move going...
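As an illustrative sketch (my addition, not part of the worksheet), the distance and midpoint formulas translate directly into code:

```python
import math

def distance(p, q):
    # Distance formula: sqrt((x2 - x1)^2 + (y2 - y1)^2),
    # derived from the Pythagorean Theorem.
    return math.hypot(q[0] - p[0], q[1] - p[1])

def midpoint(p, q):
    # Midpoint formula: ((x1 + x2) / 2, (y1 + y2) / 2)
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

print(distance((0, 0), (3, 4)))   # 5.0 (a 3-4-5 right triangle)
print(midpoint((-4, 0), (6, 5)))  # (1.0, 2.5)
```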
# Bitwise And of Range

2017-05-13

Recently I had to solve a problem which asked you to determine the bitwise and of a range of nonnegative numbers. There is an obvious linear solution to this problem which simply computes the bitwise and of the range:

```
Bitwise and of [4, 10] = 4 & 5 & 6 & 7 & 8 & 9 & 10
```

However, after thinking about how the anding ends up "erasing" bits permanently, I figured out the following logarithmic solution:

```python
def bitwise_and_of_range(begin, end):
    if begin == end:
        return begin
    else:
        return bitwise_and_of_range(begin >> 1, end >> 1) << 1
```

Essentially, if you have at least two numbers in the range, the last bit of the result will be zero, so you can compute the bitwise and of the prefixes (the numbers shifted right by one) and append a zero to the result (by shifting it back to the left).
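To sanity-check the recursive solution (a quick sketch of mine, not from the original post), we can compare it against the obvious linear scan on a few ranges:

```python
def bitwise_and_of_range(begin, end):
    # Shift both endpoints right until they agree; the shared prefix,
    # shifted back left, is the AND of every number in [begin, end].
    if begin == end:
        return begin
    return bitwise_and_of_range(begin >> 1, end >> 1) << 1

def brute_force(begin, end):
    # The obvious linear solution, for comparison.
    result = begin
    for n in range(begin + 1, end + 1):
        result &= n
    return result

for lo, hi in [(4, 10), (0, 0), (7, 7), (12, 15), (1024, 2047)]:
    assert bitwise_and_of_range(lo, hi) == brute_force(lo, hi)

print(bitwise_and_of_range(4, 10))  # 0
```

The recursion depth is at most the bit length of `end`, hence the logarithmic running time.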
# Help me understand! Which of the following graphs represents local minima at x = a?

Which of the following graphs represents local minima at x = a?

• Option 1)
• Option 2)
• Option 3)
• Option 4)

Answers (1)

As we have learned:

Local maximum and local minimum - Let $y = f(x)$ be the given function. Then $x=x_{\circ}$ is a point of local maximum if there exists an open interval containing $x_{\circ}$ such that $f(x_{\circ})>f(x)$ for all values of $x$ lying in that interval; if instead $f(x_{\circ})<f(x)$ for all such $x$, then it is a point of local minimum. In the figure, at $x=a$ and $x=c$ there are local maxima, and at $x=b$ and $x=d$ there are local minima.

Here (D) is the only case where we have an interval in the neighbourhood of $x=a$ which contains $x=a$ and satisfies $f(a)< f(x)$ for every other $x$ in it.
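The definition above can be checked numerically. A sketch (my illustration; the sample function f(x) = (x - 2)^2 and the helper `is_local_min` are my own, not from the question):

```python
def is_local_min(f, a, eps=1e-3, samples=100):
    # Check that f(a) <= f(x) for sampled x in an open interval around a,
    # mirroring the definition of a local minimum.
    for k in range(1, samples + 1):
        dx = eps * k / samples
        if f(a - dx) < f(a) or f(a + dx) < f(a):
            return False
    return True

f = lambda x: (x - 2.0) ** 2  # local (and global) minimum at x = 2
print(is_local_min(f, 2.0))  # True
print(is_local_min(f, 3.0))  # False
```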
# Math Help - geometric series

1. ## geometric series

Given the geometric series s(x) = the sum from n=1 to infinity of (-e^-x)^n:

a) set x=1 in the above series and write out the first 4 terms
b) find the exact sum of the series s(1)
c) find the exact sum of the series s(x)
d) for what values of x does the series from part (c) converge?

For part A I set S(1) = the sum from n=1 to infinity of (-1/e)^n, and I think I got the first 4 terms, but I still don't know what it converges to. So on part B I got a sum of s(1) = e/(e+1) — is it right? On part C I got S(x) = e^x/(e^x+1) — right? Since I am not sure about my answers, I don't know part D, for what values it converges.

thanks
K

2. Hello,

Originally Posted by kithy
> for part A i set S(1) = the sum from n=1 to infinity of (-1/e)^n, and i think i got the first 4 terms but i still don't know what it converges to.

You're not asked that... Your series is right.

> so on part B i got a sum of s(1) = e/(e+1) is it right?

Right

> on part C i got S(x) = e^x/(e^x+1) right?

Only if $e^{-x}<1$ (else the series diverges), so it's for x>0 (this is part D). Actually, the exact value of s(x) is:

$\sum_{n=1}^{\infty} (-e^{-x})^n=\lim_{N \to \infty} \frac{1+(\frac{1}{e^x})^N}{1+\frac{1}{e^x}}$

(simplify it the way you want)

> since i am not sure about my answers i don't know part D, for what values it converges.

You can use the alternating series test.

3. Hi

Originally Posted by kithy
> so on part B i got a sum of s(1) = e/(e+1) is it right? on part C i got S(x) = e^x/(e^x+1) right?

I don't agree.
$s(1)=\sum_{n\geq 1} \left(-\frac{1}{\mathrm{e}}\right)^n$ is negative, because it's an alternating series and $\left(-\frac{1}{\mathrm{e}}\right)^1=-\frac{1}{\mathrm{e}}<0$, so $s(1)=\frac{\mathrm{e}}{1+\mathrm{e}}$ can't be true. You may try rewriting the series as:

$s(1)=\sum_{n\geq 1} \left(-\frac{1}{\mathrm{e}}\right)^n=-\frac{1}{\mathrm{e}}\sum_{n\geq 1} \left(-\frac{1}{\mathrm{e}}\right)^{n-1} =-\frac{1}{\mathrm{e}}\underbrace{\left(1-\frac{1}{\mathrm{e}}+\left(-\frac{1}{\mathrm{e}}\right)^2+\left(-\frac{1}{\mathrm{e}}\right)^3+\ldots\right)}_{\frac{1}{1-\left(-\frac{1}{\mathrm{e}}\right)}}$

4. Ow, this is the mistake: the sum of the terms of a geometric series is that thing multiplied by the first term. Here, the first term is $-\frac{1}{\mathrm{e}}$. Hence,

$s(1)=-\frac{1}{\mathrm{e}} \cdot \frac{\mathrm{e}}{1+\mathrm{e}}=-\frac{1}{1+\mathrm{e}}$
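As a quick numerical check (my sketch, not part of the thread), the partial sums of s(1) do converge to -1/(1+e):

```python
import math

# Partial sum of s(1) = sum_{n>=1} (-1/e)^n, truncated after 99 terms.
r = -1 / math.e
partial = sum(r ** n for n in range(1, 100))

# Closed form from the last post: first term times 1/(1 - r).
closed_form = -1 / (1 + math.e)

print(partial, closed_form)
```

The truncation error is on the order of e^-100, so the two values agree to full double precision.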
# Effective Tensorflow

## Tensorflow Basics

The most striking difference between Tensorflow and other numerical computation libraries such as numpy is that operations in Tensorflow are symbolic. This is a powerful concept that allows Tensorflow to do all sorts of things (e.g. automatic differentiation) that are not possible with imperative libraries such as numpy, but it also comes at the cost of making it harder to grasp. Our attempt here is to demystify Tensorflow and provide some guidelines and best practices for more effective use of Tensorflow.

Let's start with a simple example: we want to multiply two random matrices. First we look at an implementation done in numpy:

[code lang=python]
import numpy as np

x = np.random.normal(size=[10, 10])
y = np.random.normal(size=[10, 10])
z = np.dot(x, y)

print(z)
[/code]

Now we perform the exact same computation, this time in Tensorflow:

[code lang=python]
import tensorflow as tf

x = tf.random_normal([10, 10])
y = tf.random_normal([10, 10])
z = tf.matmul(x, y)

sess = tf.Session()
z_val = sess.run(z)

print(z_val)
[/code]

Unlike numpy, which immediately performs the computation and copies the result to the output variable z, Tensorflow only gives us a handle (of type Tensor) to a node in the graph that represents the result. If we try printing the value of z directly, we get something like this:

[code lang=text]
Tensor("MatMul:0", shape=(10, 10), dtype=float32)
[/code]

Since both inputs have a fully defined shape, Tensorflow is able to infer the shape of the tensor as well as its type. In order to compute the value of the tensor we need to create a session and evaluate it using the Session.run() method.

Tip: When using a Jupyter notebook, make sure to call tf.reset_default_graph() at the beginning to clear the symbolic graph before defining new nodes.

To understand how powerful symbolic computation can be, let's have a look at another example.
Assume that we have samples from a curve (say f(x) = 5x^2 + 3) and we want to estimate f(x) without knowing its parameters. We define a parametric function g(x, w) = w0 x^2 + w1 x + w2, which is a function of the input x and latent parameters w. Our goal is then to find the latent parameters such that g(x, w) ≈ f(x). This can be done by minimizing the following loss function: L(w) = (f(x) - g(x, w))^2. Although there's a closed form solution for this simple problem, we opt to use a more general approach that can be applied to any arbitrary differentiable function, and that is using stochastic gradient descent. We simply compute the average gradient of L(w) with respect to w over a set of sample points and move in the opposite direction.

Here's how it can be done in Tensorflow:

[code lang=python]
import numpy as np
import tensorflow as tf

# Placeholders are used to feed values from python to Tensorflow ops. We define
# two placeholders, one for input feature x, and one for output y.
x = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)

# Assuming we know that the desired function is a polynomial of 2nd degree, we
# allocate a vector of size 3 to hold the coefficients. The variable will be
# automatically initialized with random noise.
w = tf.get_variable("w", shape=[3, 1])

# We define yhat to be our estimate of y.
f = tf.stack([tf.square(x), x, tf.ones_like(x)], 1)
yhat = tf.squeeze(tf.matmul(f, w), 1)

# The loss is defined to be the l2 distance between our estimate of y and its
# true value. We also added a shrinkage term, to ensure the resulting weights
# would be small.
loss = tf.nn.l2_loss(yhat - y) + 0.1 * tf.nn.l2_loss(w)

# We use the Adam optimizer with learning rate set to 0.1 to minimize the loss.
train_op = tf.train.AdamOptimizer(0.1).minimize(loss)

def generate_data():
    x_val = np.random.uniform(-10.0, 10.0, size=100)
    y_val = 5 * np.square(x_val) + 3
    return x_val, y_val

sess = tf.Session()
# Since we are using variables we first need to initialize them.
sess.run(tf.global_variables_initializer())
for _ in range(1000):
    x_val, y_val = generate_data()
    _, loss_val = sess.run([train_op, loss], {x: x_val, y: y_val})
    print(loss_val)
print(sess.run([w]))
[/code]

By running this piece of code you should see a result close to this:

[code lang=text]
[4.9924135, 0.00040895029, 3.4504161]
[/code]

which is a relatively close approximation to our parameters.

This is just the tip of the iceberg of what Tensorflow can do. Many problems, such as optimizing large neural networks with millions of parameters, can be implemented efficiently in Tensorflow in just a few lines of code. Tensorflow takes care of scaling across multiple devices and threads, and supports a variety of platforms.

## Understanding static and dynamic shapes

Tensors in Tensorflow have a static shape attribute which is determined during graph construction. The static shape may be underspecified. For example we might define a float32 tensor of shape [None, 128]:

[code lang=python]
import tensorflow as tf

a = tf.placeholder(tf.float32, [None, 128])
[/code]

This means that the first dimension can be of any size and will be determined dynamically during Session.run().
You can query the static shape of a Tensor as follows:

[code lang=python]
static_shape = a.shape  # returns TensorShape([Dimension(None), Dimension(128)])
static_shape = a.shape.as_list()  # returns [None, 128]
[/code]

To get the dynamic shape of the tensor you can call the tf.shape op, which returns a tensor representing the shape of the given tensor:

[code lang=python]
dynamic_shape = tf.shape(a)
[/code]

The static shape of a tensor can be set with the Tensor.set_shape() method:

[code lang=python]
a.set_shape([32, 128])
[/code]

Use this function only if you know what you are doing; in practice it's safer to do dynamic reshaping with the tf.reshape() op:

[code lang=python]
a = tf.reshape(a, [32, 128])
[/code]

If you feed 'a' with values that don't match the shape, you will get an InvalidArgumentError indicating that the number of values fed doesn't match the expected shape.

It can be convenient to have a function that returns the static shape when available and the dynamic shape when it's not. The following utility function does just that:

[code lang=python]
def get_shape(tensor):
    static_shape = tensor.shape.as_list()
    dynamic_shape = tf.unstack(tf.shape(tensor))
    dims = [s[1] if s[0] is None else s[0]
            for s in zip(static_shape, dynamic_shape)]
    return dims
[/code]

Now imagine we want to convert a Tensor of rank 3 to a tensor of rank 2 by collapsing the second and third dimensions into one. We can use our get_shape() function to do that:

[code lang=python]
b = tf.placeholder(tf.float32, [None, 10, 32])
shape = get_shape(b)
b = tf.reshape(b, [shape[0], shape[1] * shape[2]])
[/code]

Note that this works whether the shapes are statically specified or not.
In fact we can write a general purpose reshape function to collapse any list of dimensions:

[code lang=python]
import tensorflow as tf
import numpy as np

def reshape(tensor, dims_list):
    shape = get_shape(tensor)
    dims_prod = []
    for dims in dims_list:
        if isinstance(dims, int):
            dims_prod.append(shape[dims])
        elif all([isinstance(shape[d], int) for d in dims]):
            dims_prod.append(np.prod([shape[d] for d in dims]))
        else:
            dims_prod.append(tf.reduce_prod([shape[d] for d in dims]))
    tensor = tf.reshape(tensor, dims_prod)
    return tensor
[/code]

Then collapsing the second and third dimensions becomes very easy:

[code lang=python]
b = tf.placeholder(tf.float32, [None, 10, 32])
b = reshape(b, [0, [1, 2]])
[/code]

## Broadcasting the good and the ugly

Tensorflow supports broadcasting elementwise operations. Normally when you want to perform operations like addition and multiplication, you need to make sure that the shapes of the operands match; e.g. you can't add a tensor of shape [3, 2] to a tensor of shape [3, 4]. But there's a special case, and that's when you have a singular dimension. Tensorflow implicitly tiles the tensor across its singular dimensions to match the shape of the other operand. So it's valid to add a tensor of shape [3, 2] to a tensor of shape [3, 1]:

[code lang=python]
import tensorflow as tf

a = tf.constant([[1., 2.], [3., 4.]])
b = tf.constant([[1.], [2.]])
# c = a + tf.tile(b, [1, 2])
c = a + b
[/code]

Broadcasting allows us to perform implicit tiling, which makes the code shorter and more memory efficient, since we don't need to store the result of the tiling operation. One neat place where this can be used is when combining features of varying length. In order to concatenate features of varying length we commonly tile the input tensors, concatenate the result and apply some nonlinearity.
This is a common pattern across a variety of neural network architectures:

[code lang=python]
a = tf.random_uniform([5, 3, 5])
b = tf.random_uniform([5, 1, 6])

# concat a and b and apply nonlinearity
tiled_b = tf.tile(b, [1, 3, 1])
c = tf.concat([a, tiled_b], 2)
d = tf.layers.dense(c, 10, activation=tf.nn.relu)
[/code]

But this can be done more efficiently with broadcasting. We use the fact that f(m(x + y)) is equal to f(mx + my). So we can do the linear operations separately and use broadcasting to do implicit concatenation:

[code lang=python]
pa = tf.layers.dense(a, 10, activation=None)
pb = tf.layers.dense(b, 10, activation=None)
d = tf.nn.relu(pa + pb)
[/code]

In fact this piece of code is pretty general and can be applied to tensors of arbitrary shape as long as broadcasting between the tensors is possible:

[code lang=python]
def merge(a, b, units, activation=tf.nn.relu):
    pa = tf.layers.dense(a, units, activation=None)
    pb = tf.layers.dense(b, units, activation=None)
    c = pa + pb
    if activation is not None:
        c = activation(c)
    return c
[/code]

A slightly more general form of this function is included in the cookbook.

So far we discussed the good part of broadcasting. But what's the ugly part, you may ask? Implicit assumptions almost always make debugging harder. Consider the following example:

[code lang=python]
a = tf.constant([[1.], [2.]])
b = tf.constant([1., 2.])
c = tf.reduce_sum(a + b)
[/code]

What do you think the value of c would be after evaluation? If you guessed 6, that's wrong. It's going to be 12. This is because when the ranks of two tensors don't match, Tensorflow automatically expands the first dimension of the tensor with the lower rank before the elementwise operation, so the result of the addition would be [[2, 3], [3, 4]], and reducing over all elements would give us 12.

The way to avoid this problem is to be as explicit as possible.
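Tensorflow follows the same broadcasting rules as NumPy here, so the surprising result can be reproduced directly in NumPy (a sketch of mine for illustration):

```python
import numpy as np

a = np.array([[1.], [2.]])  # shape (2, 1)
b = np.array([1., 2.])      # shape (2,)

# b is treated as shape (1, 2); both operands are then virtually
# tiled to (2, 2), so a + b == [[2., 3.], [3., 4.]]
c = a + b
print(c.sum())  # 12.0, not 6.0
```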
Had we specified which dimension we wanted to reduce across, catching this bug would have been much easier:

[code lang=python]
a = tf.constant([[1.], [2.]])
b = tf.constant([1., 2.])
c = tf.reduce_sum(a + b, 0)
[/code]

Here the value of c would be [5, 7], and we would immediately guess, based on the shape of the result, that there's something wrong. A general rule of thumb is to always specify the dimensions in reduction operations and when using tf.squeeze.

## Understanding order of execution and control dependencies

As we discussed in the first item, Tensorflow doesn't immediately run the operations that are defined, but rather creates corresponding nodes in a graph that can be evaluated with the Session.run() method. This also enables Tensorflow to do optimizations at run time to determine the optimal order of execution and possible trimming of unused nodes. If you only have tf.Tensors in your graph you don't need to worry about dependencies, but you most probably have tf.Variables too, and tf.Variables make things much more difficult. My advice is to only use Variables if Tensors don't do the job. This might not make a lot of sense to you now, so let's start with an example:

[code lang=python]
import tensorflow as tf

a = tf.constant(1)
b = tf.constant(2)
a = a + b

tf.Session().run(a)
[/code]

Evaluating "a" will return the value 3 as expected. Note that here we are creating 3 tensors: two constant tensors and another tensor that stores the result of the addition. Note that you can't overwrite the value of a tensor; if you want to modify it, you have to create a new tensor, as we did here.

TIP: If you don't define a new graph, Tensorflow automatically creates a graph for you by default. You can use tf.get_default_graph() to get a handle to the graph. You can then inspect the graph, for example by printing all its tensors:

[code lang=python]
print(tf.contrib.graph_editor.get_tensors(tf.get_default_graph()))
[/code]

Unlike tensors, variables can be updated.
So let's see how we may use variables to do the same thing:

[code lang=python]
a = tf.Variable(1)
b = tf.constant(2)
assign = tf.assign(a, a + b)

sess = tf.Session()
sess.run(tf.global_variables_initializer())
print(sess.run(assign))
[/code]

Again, we get 3 as expected. Note that tf.assign returns a tensor representing the value of the assignment. So far everything seemed to be fine, but let's look at a slightly more complicated example:

[code lang=python]
a = tf.Variable(1)
b = tf.constant(2)
c = a + b

assign = tf.assign(a, 5)

sess = tf.Session()
for i in range(10):
    sess.run(tf.global_variables_initializer())
    print(sess.run([assign, c]))
[/code]

Note that the tensor c here won't have a deterministic value. This value might be 3 or 7 depending on whether the addition or the assignment gets executed first. You should note that the order in which you define ops in your code doesn't matter to the Tensorflow runtime; the only thing that matters is the control dependencies. Control dependencies for tensors are straightforward: every time you use a tensor in an operation, that op defines an implicit dependency on that tensor. But things get complicated with variables, because they can take many values. When dealing with variables, you may need to explicitly define dependencies using tf.control_dependencies() as follows:

[code lang=python]
a = tf.Variable(1)
b = tf.constant(2)
c = a + b

with tf.control_dependencies([c]):
    assign = tf.assign(a, 5)

sess = tf.Session()
for i in range(10):
    sess.run(tf.global_variables_initializer())
    print(sess.run([assign, c]))
[/code]

This will make sure that the assign op is called after the addition.

## Control flow operations: conditionals and loops

When building complex models such as recurrent neural networks you may need to control the flow of operations through conditionals and loops. In this section we introduce a number of commonly used control flow ops.
Let's assume you want to decide whether to multiply or add two given tensors based on a predicate. This can be simply implemented with tf.cond, which acts as a python "if" function:

[code lang=python]
a = tf.constant(1)
b = tf.constant(2)

p = tf.constant(True)

x = tf.cond(p, lambda: a + b, lambda: a * b)

print(tf.Session().run(x))
[/code]

Since the predicate is True in this case, the output would be the result of the addition, which is 3.

Most of the time when using Tensorflow you are using large tensors and want to perform operations in batch. A related conditional operation is tf.where, which, like tf.cond, takes a predicate, but selects the output based on the condition in batch:

[code lang=python]
a = tf.constant([1, 1])
b = tf.constant([2, 2])

p = tf.constant([True, False])

x = tf.where(p, a + b, a * b)

print(tf.Session().run(x))
[/code]

This will return [3, 2].

Another widely used control flow operation is tf.while_loop. It allows building dynamic loops in Tensorflow that operate on sequences of variable length. Let's see how we can generate the Fibonacci sequence with tf.while_loop:

[code lang=python]
n = tf.constant(5)

def cond(i, a, b):
    return i < n

def body(i, a, b):
    return i + 1, b, a + b

i, a, b = tf.while_loop(cond, body, (2, 1, 1))

print(tf.Session().run(b))
[/code]

This will print 5. tf.while_loop takes a condition function and a loop body function, in addition to initial values for the loop variables. These loop variables are then updated by multiple calls to the body function until the condition returns false.

Now imagine we want to keep the whole Fibonacci sequence.
We may update our body to keep a record of the history of current values:

[code lang=python]
n = tf.constant(5)

def cond(i, a, b, c):
    return i < n

def body(i, a, b, c):
    return i + 1, b, a + b, tf.concat([c, [a + b]], 0)

i, a, b, c = tf.while_loop(cond, body, (2, 1, 1, tf.constant([1, 1])))

print(tf.Session().run(c))
[/code]

Now if you try running this, Tensorflow will complain that the shape of the fourth loop variable is changing. So you must make it explicit that the change is intentional:

[code lang=python]
i, a, b, c = tf.while_loop(
    cond, body, (2, 1, 1, tf.constant([1, 1])),
    shape_invariants=(tf.TensorShape([]),
                      tf.TensorShape([]),
                      tf.TensorShape([]),
                      tf.TensorShape([None])))
[/code]

This is not only getting ugly, but is also somewhat inefficient. Note that we are building a lot of intermediary tensors that we don't use. Tensorflow has a better solution for this kind of growing array. Meet tf.TensorArray. Let's do the same thing, this time with tensor arrays:

[code lang=python]
n = tf.constant(5)

c = tf.TensorArray(tf.int32, n)
c = c.write(0, 1)
c = c.write(1, 1)

def cond(i, a, b, c):
    return i < n

def body(i, a, b, c):
    c = c.write(i, a + b)
    return i + 1, b, a + b, c

i, a, b, c = tf.while_loop(cond, body, (2, 1, 1, c))

c = c.stack()

print(tf.Session().run(c))
[/code]

Tensorflow while loops and tensor arrays are essential tools for building complex recurrent neural networks. As an exercise try writing a beam search using tf.while_loop. Can you make it more efficient with tensor arrays?

## Prototyping kernels and advanced visualization with Python ops

Operation kernels in Tensorflow are entirely written in C++ for efficiency. But writing a Tensorflow kernel in C++ can be quite a pain. So, before spending hours implementing your kernel you may want to prototype something quickly, however inefficient. With tf.py_func() you can turn any piece of python code into a Tensorflow operation.
For example this is how you can implement a simple ReLU nonlinearity kernel in Tensorflow as a python op (the extracted text had dropped the gradient adapter and registration lines; they are reconstructed here from the surrounding comments):

[code lang=python]
import numpy as np
import tensorflow as tf
import uuid

def relu(inputs):
    # Define the op in python
    def _relu(x):
        return np.maximum(x, 0.)

    # Define the op's gradient in python
    def _relu_grad(x):
        return np.float32(x > 0)

    # An adapter that defines a gradient op compatible with Tensorflow
    def _relu_grad_op(op, grad):
        x = op.inputs[0]
        x_grad = grad * tf.py_func(_relu_grad, [x], tf.float32)
        return x_grad

    # Register the gradient with a unique id
    grad_name = 'MyReluGrad_' + str(uuid.uuid4())
    tf.RegisterGradient(grad_name)(_relu_grad_op)

    # Override the gradient of the custom op
    g = tf.get_default_graph()
    with g.gradient_override_map({'PyFunc': grad_name}):
        output = tf.py_func(_relu, [inputs], tf.float32)
    return output
[/code]

To verify that the gradients are correct you can use Tensorflow's gradient checker:

[code lang=python]
x = tf.random_normal([10])
y = relu(x * x)

with tf.Session():
    diff = tf.test.compute_gradient_error(x, [10], y, [10])
    print(diff)
[/code]

compute_gradient_error() computes the gradient numerically and returns the difference from the provided symbolic gradient; what we want is a very low difference. Note that this implementation is pretty inefficient, and is only useful for prototyping, since the python code is not parallelizable and won't run on GPU. Once you've verified your idea, you definitely would want to write it as a C++ kernel. In practice we commonly use python ops to do visualization on Tensorboard. Consider the case where you are building an image classification model and want to visualize your model's predictions during training. Tensorflow allows visualizing images with the tf.summary.image() function:

[code lang=python]
image = tf.placeholder(tf.float32)
tf.summary.image("image", image)
[/code]

But this only visualizes the input image. In order to visualize the predictions you have to find a way to add annotations to the image, which may be almost impossible with existing ops.
An easier way to do this is to do the drawing in python, and wrap it in a python op:

[code lang=python]
import io
import matplotlib.pyplot as plt
import numpy as np
import PIL
import tensorflow as tf

def visualize_labeled_images(images, labels, max_outputs=3, name='image'):
    def _visualize_image(image, label):
        # Do the actual drawing in python
        fig = plt.figure(figsize=(3, 3), dpi=80)
        ax = fig.add_subplot(111)
        ax.imshow(image[::-1, ...])
        ax.text(0, 0, str(label),
                horizontalalignment='left',
                verticalalignment='top')
        fig.canvas.draw()

        # Write the plot as a memory file.
        buf = io.BytesIO()
        fig.savefig(buf, format='png')
        buf.seek(0)

        # Read the image and convert to numpy array
        img = PIL.Image.open(buf)
        return np.array(img.getdata()).reshape(img.size[0], img.size[1], -1)

    def _visualize_images(images, labels):
        # Only display the given number of examples in the batch
        outputs = []
        for i in range(max_outputs):
            output = _visualize_image(images[i], labels[i])
            outputs.append(output)
        return np.array(outputs, dtype=np.uint8)

    # Run the python op.
    figs = tf.py_func(_visualize_images, [images, labels], tf.uint8)
    return tf.summary.image(name, figs)
[/code]

Note that since summaries are usually only evaluated once in a while (not per step), this implementation may be used in practice without worrying about efficiency.

## Multi-GPU processing with data parallelism

If you write your software in a language like C++ for a single CPU core, making it run on multiple GPUs in parallel would require rewriting the software from scratch. But this is not the case with Tensorflow. Because of its symbolic nature, Tensorflow can hide all that complexity, making it effortless to scale your program across many CPUs and GPUs.
For example, here is how we can add two large tensors on the CPU:

[code lang=python]
import tensorflow as tf

with tf.device(tf.DeviceSpec(device_type='CPU', device_index=0)):
    a = tf.random_uniform([1000, 100])
    b = tf.random_uniform([1000, 100])
    c = a + b

tf.Session().run(c)
[/code]

The same thing can just as easily be done on a GPU:

[code lang=python]
with tf.device(tf.DeviceSpec(device_type='GPU', device_index=0)):
    a = tf.random_uniform([1000, 100])
    b = tf.random_uniform([1000, 100])
    c = a + b
[/code]

But what if we have two GPUs and want to utilize both? To do that, we can split the data and use a separate GPU for processing each half:

[code lang=python]
split_a = tf.split(a, 2)
split_b = tf.split(b, 2)

split_c = []
for i in range(2):
    with tf.device(tf.DeviceSpec(device_type='GPU', device_index=i)):
        split_c.append(split_a[i] + split_b[i])

c = tf.concat(split_c, axis=0)
[/code]

Let's rewrite this in a more general form so that we can replace addition with any other set of operations:

[code lang=python]
def make_parallel(fn, num_gpus, **kwargs):
    in_splits = {}
    for k, v in kwargs.items():
        in_splits[k] = tf.split(v, num_gpus)

    out_split = []
    for i in range(num_gpus):
        with tf.device(tf.DeviceSpec(device_type='GPU', device_index=i)):
            with tf.variable_scope(tf.get_variable_scope(), reuse=i > 0):
                out_split.append(fn(**{k: v[i] for k, v in in_splits.items()}))

    return tf.concat(out_split, axis=0)

def model(a, b):
    return a + b

c = make_parallel(model, 2, a=a, b=b)
[/code]

You can replace the model with any function that takes a set of tensors as input and returns a tensor as result, with the condition that both the input and the output are in batch form. Note that we also added a variable scope and set reuse to true. This makes sure that we use the same variables for processing both splits. This is something that will come in handy in our next example. Let's look at a slightly more practical example. We want to train a neural network on multiple GPUs. During training we not only need to compute the forward pass but also need to compute the backward pass (the gradients).
But how can we parallelize the gradient computation? This turns out to be pretty easy. Recall from the first item that we wanted to fit a second degree curve to a set of samples. We reorganized the code a bit to have the bulk of the operations in the model function:

[code lang=python]
import numpy as np
import tensorflow as tf

def model(x, y):
    w = tf.get_variable("w", shape=[3, 1])

    f = tf.stack([tf.square(x), x, tf.ones_like(x)], 1)
    yhat = tf.squeeze(tf.matmul(f, w), 1)

    loss = tf.square(yhat - y)
    return loss

x = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)

loss = model(x, y)

train_op = tf.train.AdamOptimizer(0.1).minimize(
    tf.reduce_mean(loss))

def generate_data():
    x_val = np.random.uniform(-10.0, 10.0, size=100)
    y_val = 5 * np.square(x_val) + 3
    return x_val, y_val

sess = tf.Session()
sess.run(tf.global_variables_initializer())
for _ in range(1000):
    x_val, y_val = generate_data()
    _, loss_val = sess.run([train_op, loss], {x: x_val, y: y_val})

print(sess.run(tf.contrib.framework.get_variables_by_name("w")))
[/code]

Now let's use make_parallel, which we just wrote, to parallelize this. We only need to change two lines of code from the above:

[code lang=python]
loss = make_parallel(model, 2, x=x, y=y)

train_op = tf.train.AdamOptimizer(0.1).minimize(
    tf.reduce_mean(loss),
    colocate_gradients_with_ops=True)
[/code]

The only thing that we need to change to parallelize backpropagation of gradients is to set the colocate_gradients_with_ops flag to true. This ensures that gradient ops run on the same device as the original op.

## Debugging Tensorflow models

The symbolic nature of Tensorflow makes it relatively more difficult to debug Tensorflow code compared to regular python code. Here we introduce a number of tools included with Tensorflow that make debugging much easier. Probably the most common error one can make when using Tensorflow is passing Tensors of the wrong shape to ops. Many Tensorflow ops can operate on tensors of different ranks and shapes.
This can be convenient when using the API, but may lead to extra headache when things go wrong. For example, consider the tf.matmul op; it can multiply two matrices:

[code lang=python]
a = tf.random_uniform([2, 3])
b = tf.random_uniform([3, 4])
c = tf.matmul(a, b)  # c is a tensor of shape [2, 4]
[/code]

But the same function also does batch matrix multiplication:

[code lang=python]
a = tf.random_uniform([10, 2, 3])
b = tf.random_uniform([10, 3, 4])
tf.matmul(a, b)  # c is a tensor of shape [10, 2, 4]
[/code]

Implicit broadcasting is another source of surprises; adding tensors of different shapes silently broadcasts them:

[code lang=python]
a = tf.constant([[1.], [2.]])
b = tf.constant([1., 2.])
c = a + b  # c is a tensor of shape [2, 2]
[/code]

### Validating your tensors with tf.assert* ops

One way to reduce the chance of unwanted behavior is to explicitly verify the rank or shape of intermediate tensors with tf.assert* ops:

[code lang=python]
a = tf.constant([[1.], [2.]])
b = tf.constant([1., 2.])

check_a = tf.assert_rank(a, 1)  # This will raise an InvalidArgumentError exception
check_b = tf.assert_rank(b, 1)

with tf.control_dependencies([check_a, check_b]):
    c = a + b  # c is a tensor of shape [2, 2]
[/code]

Remember that assertion nodes, like other operations, are part of the graph and if not evaluated would get pruned during Session.run(). So make sure to create explicit dependencies on assertion ops, to force Tensorflow to execute them. You can also use assertions to validate the value of tensors at runtime:

[code lang=python]
check_pos = tf.assert_positive(a)
[/code]

See the official docs for a full list of assertion ops.

### Logging tensor values with tf.Print

Another useful built-in function is tf.Print, which logs the given tensors to standard error:

[code lang=python]
input_copy = tf.Print(input, tensors_to_print_list)
[/code]

Note that tf.Print returns a copy of its first argument as output. One way to force tf.Print to run is to pass its output to another op that gets executed.
For example if we want to print the values of tensors a and b before adding them, we could do something like this:

[code lang=python]
a = ...
b = ...
a = tf.Print(a, [a, b])
c = a + b
[/code]

Alternatively we could manually define a control dependency.

### Checking your gradients with tf.compute_gradient_error

Not all operations in Tensorflow come with gradients, and it's possible to unknowingly build a graph for which Tensorflow cannot compute the gradients. Let's look at an example:

[code lang=python]
import tensorflow as tf

def non_differentiable_entropy(logits):
    probs = tf.nn.softmax(logits)
    return tf.nn.softmax_cross_entropy_with_logits(labels=probs, logits=logits)

w = tf.get_variable('w', shape=[5])
y = -non_differentiable_entropy(w)

opt = tf.train.AdamOptimizer()
train_op = opt.minimize(y)

sess = tf.Session()
sess.run(tf.global_variables_initializer())
for i in range(10000):
    sess.run(train_op)

print(sess.run(tf.nn.softmax(w)))
[/code]

We are using tf.nn.softmax_cross_entropy_with_logits to define entropy over a categorical distribution. We then use the Adam optimizer to find the weights with maximum entropy. If you have taken a course on information theory, you would know that the uniform distribution has maximum entropy. So you would expect the result to be [0.2, 0.2, 0.2, 0.2, 0.2]. But if you run this you may get unexpected results like this:

[code lang=text]
[ 0.34081486  0.24287023  0.23465775  0.08935683  0.09230034]
[/code]

It turns out tf.nn.softmax_cross_entropy_with_logits has undefined gradients with respect to labels! But how could we have spotted this if we didn't know? Fortunately for us Tensorflow comes with a numerical differentiator that can be used to find symbolic gradient errors. Let's see how we can use it:

[code lang=python]
with tf.Session():
    diff = tf.test.compute_gradient_error(w, [5], y, [])
    print(diff)
[/code]

If you run this, you would see that the difference between the numerical and symbolic gradients is pretty high (0.06-0.1 in my tries).
Now let's fix our function with a differentiable version of the entropy and check again:

[code lang=python]
import tensorflow as tf
import numpy as np

def entropy(logits, dim=-1):
    probs = tf.nn.softmax(logits, dim)
    nplogp = probs * (tf.reduce_logsumexp(logits, dim, keep_dims=True) - logits)
    return tf.reduce_sum(nplogp, dim)

w = tf.get_variable('w', shape=[5])
y = -entropy(w)

print(w.get_shape())
print(y.get_shape())

with tf.Session() as sess:
    diff = tf.test.compute_gradient_error(w, [5], y, [])
    print(diff)
[/code]

The difference should be ~0.0001, which looks much better. Now if you run the optimizer again with the correct version you can see the final weights would be:

[code lang=text]
[ 0.2  0.2  0.2  0.2  0.2]
[/code]

which is exactly what we wanted. Tensorflow summaries and tfdbg (TensorFlow Debugger) are other tools that can be used for debugging. Please refer to the official docs to learn more.

## Building a neural network training framework with learn API

For simplicity, in most of the examples here we manually create sessions and we don't care about saving and loading checkpoints, but this is not how we usually do things in practice. You most probably want to use the learn API to take care of session management and logging. We provide a simple but practical framework in the code/framework directory for training neural networks using Tensorflow. In this item we explain how this framework works. When experimenting with neural network models you usually have a training/test split. You want to train your model on the training set, and once in a while evaluate it on the test set and compute some metrics. You also need to store the model parameters as a checkpoint, and ideally you want to be able to stop and resume training. Tensorflow's learn API is designed to make this job easier, letting us focus on developing the actual model. The most basic way of using the tf.learn API is to use a tf.Estimator object directly.
You need to define a model function that defines a loss function, a train op and one or a set of predictions:

[code lang=python]
import tensorflow as tf

def model_fn(features, labels, mode, params):
    predictions = ...
    loss = ...
    train_op = ...
    return tf.contrib.learn.ModelFnOps(
        mode=mode,
        predictions=predictions,
        loss=loss,
        train_op=train_op)

params = ...
run_config = tf.contrib.learn.RunConfig(model_dir=FLAGS.output_dir)
estimator = tf.contrib.learn.Estimator(
    model_fn=model_fn, config=run_config, params=params)
[/code]

To train the model you would then simply call the Estimator.fit() function while providing an input function to read the data:

[code lang=python]
def input_fn():
    features = ...
    labels = ...
    return features, labels

estimator.fit(input_fn=input_fn, max_steps=...)
[/code]

and to evaluate the model, call Estimator.evaluate(), providing a set of metrics:

[code lang=python]
metrics = {
    'accuracy': tf.metrics.accuracy
}
estimator.evaluate(input_fn=input_fn, metrics=metrics)
[/code]

The Estimator object might be good enough for simple cases, but Tensorflow provides an even higher level object called Experiment which provides some additional useful functionality. Creating an experiment object is very easy:

[code lang=python]
experiment = tf.contrib.learn.Experiment(
    estimator=estimator,
    train_input_fn=train_input_fn,
    eval_input_fn=eval_input_fn,
    eval_metrics=eval_metrics)
[/code]

Now we can call the train_and_evaluate function to compute the metrics while training:

[code lang=python]
experiment.train_and_evaluate()
[/code]

An even higher level way of running experiments is by using the learn_runner.run() function.
Here's what our main function looks like in the provided framework:

[code lang=python]
import tensorflow as tf

tf.flags.DEFINE_string('output_dir', '', 'Optional output dir.')
tf.flags.DEFINE_string('schedule', 'train_and_evaluate', 'Schedule.')
tf.flags.DEFINE_string('hparams', '', 'Hyper parameters.')

FLAGS = tf.flags.FLAGS
learn = tf.contrib.learn

def experiment_fn(run_config, hparams):
    estimator = learn.Estimator(
        model_fn=make_model_fn(), config=run_config, params=hparams)
    return learn.Experiment(
        estimator=estimator,
        train_input_fn=make_input_fn(learn.ModeKeys.TRAIN, hparams),
        eval_input_fn=make_input_fn(learn.ModeKeys.EVAL, hparams),
        eval_metrics=eval_metrics_fn(hparams))

def main(unused_argv):
    run_config = learn.RunConfig(model_dir=FLAGS.output_dir)

    hparams = tf.contrib.training.HParams()
    hparams.parse(FLAGS.hparams)

    estimator = learn.learn_runner.run(
        experiment_fn=experiment_fn,
        run_config=run_config,
        schedule=FLAGS.schedule,
        hparams=hparams)

if __name__ == '__main__':
    tf.app.run()
[/code]

The schedule flag decides which member function of the Experiment object gets called. So, if you for example set schedule to 'train_and_evaluate', experiment.train_and_evaluate() would be called. Now let's have a look at how we might actually write an input function. One way to do this is through python ops (see this item for more information on python ops).
[code lang=python]
def input_fn():
    def _py_input_fn():
        # read a new example in python
        feature = ...
        label = ...
        return feature, label

    # Convert that to tensors
    feature, label = tf.py_func(_py_input_fn, [], (tf.string, tf.int64))

    feature_batch, label_batch = tf.train.shuffle_batch(
        [feature, label], batch_size=..., capacity=...,
        min_after_dequeue=...)
    return feature_batch, label_batch
[/code]

A more scalable approach is to read data stored in the TFRecords format (the reading call below was garbled in the extracted text; it is reconstructed here as tf.contrib.learn.read_batch_features, which takes exactly the file_pattern/batch_size/features arguments that survive in the fragment):

[code lang=python]
def input_fn():
    features = {
        'image': tf.FixedLenFeature([], tf.string),
        'label': tf.FixedLenFeature([], tf.int64),
    }
    tensors = tf.contrib.learn.read_batch_features(
        file_pattern=...,
        batch_size=...,
        features=features,
        reader=tf.TFRecordReader)
    return tensors
[/code]

See mnist.py for an example of how to convert your data to TFRecords format. The framework also comes with a simple convolutional network classifier in convnet_classifier.py that includes an example model and evaluation metric:

[code lang=python]
def model_fn(features, labels, mode, params):
    images = features['image']
    labels = labels['label']

    predictions = ...
    loss = ...
    return {'predictions': predictions}, loss

def eval_metrics_fn(params):
    return {
        'accuracy': tf.contrib.learn.MetricSpec(tf.metrics.accuracy)
    }
[/code]

MetricSpec connects our model to the given metric function (e.g. tf.metrics.accuracy). Since our labels and predictions each consist of a single tensor, everything automagically works. If your labels/predictions include multiple tensors, however, you need to explicitly specify which tensors you want to pass to the metric function:

[code lang=python]
tf.contrib.learn.MetricSpec(
    tf.metrics.accuracy,
    label_key='label',
    prediction_key='predictions')
[/code]

And that's it! This is all you need to get started with the Tensorflow learn API. I recommend having a look at the source code and the official python API to learn more.

## Tensorflow Cookbook

This section includes implementations of a set of common operations in Tensorflow.
### Beam Search

The beam-search decoder below had its end-token mask garbled in extraction; the `mask` variable is reconstructed from the surviving `tf.not_equal(ids, end_token_id)` fragment.

[code lang=python]
import tensorflow as tf

def get_shape(tensor):
  """Returns static shape if available and dynamic shape otherwise."""
  static_shape = tensor.shape.as_list()
  dynamic_shape = tf.unstack(tf.shape(tensor))
  dims = [s[1] if s[0] is None else s[0]
          for s in zip(static_shape, dynamic_shape)]
  return dims

def log_prob_from_logits(logits, axis=-1):
  """Normalize the log-probabilities so that probabilities sum to one."""
  return logits - tf.reduce_logsumexp(logits, axis=axis, keep_dims=True)

def batch_gather(tensor, indices):
  """Gather in batch from a tensor of arbitrary size.

  In pseudocode this module will produce the following:
  output[i] = tf.gather(tensor[i], indices[i])

  Args:
    tensor: Tensor of arbitrary size.
    indices: Vector of indices.
  Returns:
    output: A tensor of gathered values.
  """
  shape = get_shape(tensor)
  flat_first = tf.reshape(tensor, [shape[0] * shape[1]] + shape[2:])
  indices = tf.convert_to_tensor(indices)
  offset_shape = [shape[0]] + [1] * (indices.shape.ndims - 1)
  offset = tf.reshape(tf.range(shape[0]) * shape[1], offset_shape)
  output = tf.gather(flat_first, indices + offset)
  return output

def rnn_beam_search(update_fn, initial_state, sequence_length, beam_width,
                    begin_token_id, end_token_id, name='rnn'):
  """Beam-search decoder for recurrent models.

  Args:
    update_fn: Function to compute the next state and logits given the
      current state and ids.
    initial_state: Recurrent model states.
    sequence_length: Length of the generated sequence.
    beam_width: Beam width.
    begin_token_id: Begin token id.
    end_token_id: End token id.
    name: Scope of the variables.
  Returns:
    ids: Output indices.
    logprobs: Output log probabilities.
  """
  batch_size = initial_state.shape.as_list()[0]

  state = tf.tile(tf.expand_dims(initial_state, axis=1), [1, beam_width, 1])

  sel_sum_logprobs = tf.log([[1.] + [0.] * (beam_width - 1)])

  ids = tf.tile([[begin_token_id]], [batch_size, beam_width])
  sel_ids = tf.expand_dims(ids, axis=2)

  mask = tf.ones([batch_size, beam_width], dtype=tf.float32)

  for i in range(sequence_length):
    with tf.variable_scope(name, reuse=True if i > 0 else None):

      state, logits = update_fn(state, ids)
      logits = log_prob_from_logits(logits)

      sum_logprobs = (
          tf.expand_dims(sel_sum_logprobs, axis=2) +
          (logits * tf.expand_dims(mask, axis=2)))

      num_classes = logits.shape.as_list()[-1]

      sel_sum_logprobs, indices = tf.nn.top_k(
          tf.reshape(sum_logprobs, [batch_size, num_classes * beam_width]),
          k=beam_width)

      ids = indices % num_classes

      beam_ids = indices // num_classes

      state = batch_gather(state, beam_ids)

      sel_ids = tf.concat([batch_gather(sel_ids, beam_ids),
                           tf.expand_dims(ids, axis=2)], axis=2)

      mask = (batch_gather(mask, beam_ids) *
              tf.to_float(tf.not_equal(ids, end_token_id)))

  return sel_ids, sel_sum_logprobs
[/code]

### Merge

[code lang=python]
import tensorflow as tf

def merge(tensors, units, activation=tf.nn.relu, name=None, **kwargs):
  """Merge tensors with broadcasting support.

  This operation concatenates multiple features of varying length and applies
  a non-linear transformation to the outcome.

  Example:
    a = tf.zeros([m, 1, d1])
    b = tf.zeros([1, n, d2])
    c = merge([a, b], d3)  # shape of c would be [m, n, d3].

  Args:
    tensors: A list of tensors with the same rank.
    units: Number of units in the projection function.
  """
  with tf.variable_scope(name, default_name='merge'):
    # Apply linear projection to input tensors.
    projs = []
    for i, tensor in enumerate(tensors):
      proj = tf.layers.dense(
          tensor, units, activation=None,
          name='proj_%d' % i, **kwargs)
      projs.append(proj)

    # Compute sum of tensors.
    result = projs.pop()
    for proj in projs:
      result = result + proj

    # Apply nonlinearity.
    if activation:
      result = activation(result)
  return result
[/code]

### Entropy

[code lang=python]
import tensorflow as tf

def softmax(logits, dims=-1):
  """Compute softmax over specified dimensions."""
  exp = tf.exp(logits - tf.reduce_max(logits, dims, keep_dims=True))
  return exp / tf.reduce_sum(exp, dims, keep_dims=True)

def entropy(logits, dims=-1):
  """Compute entropy over specified dimensions."""
  probs = softmax(logits, dims)
  nplogp = probs * (tf.reduce_logsumexp(logits, dims, keep_dims=True) - logits)
  return tf.reduce_sum(nplogp, dims)
[/code]

### Make parallel

[code lang=python]
import tensorflow as tf

def make_parallel(fn, num_gpus, **kwargs):
  """Parallelize given model on multiple gpu devices.

  Args:
    fn: Arbitrary function that takes a set of input tensors and outputs a
        single tensor. First dimension of inputs and output tensor are
        assumed to be batch dimension.
    num_gpus: Number of GPU devices.
    **kwargs: Keyword arguments to be passed to the model.
  Returns:
    A tensor corresponding to the model output.
  """
  in_splits = {}
  for k, v in kwargs.items():
    in_splits[k] = tf.split(v, num_gpus)

  out_split = []
  for i in range(num_gpus):
    with tf.device(tf.DeviceSpec(device_type='GPU', device_index=i)):
      with tf.variable_scope(tf.get_variable_scope(), reuse=i > 0):
        out_split.append(fn(**{k: v[i] for k, v in in_splits.items()}))

  return tf.concat(out_split, axis=0)
[/code]
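The bookkeeping in the rnn_beam_search recipe above (score every beam-times-token extension, keep the top beam_width with tf.nn.top_k, recover beam and token ids with // and %, then gather each survivor's history) is easier to follow outside the graph. Here is a plain-Python sketch of the same selection step; it omits the end-token mask for brevity, and the toy scoring function is made up purely for illustration:

```python
import math

def beam_search(log_prob_fn, vocab, beam_width, length, begin_token):
    # Each live hypothesis is a (ids, sum_logprob) pair.
    beams = [([begin_token], 0.0)]
    for _ in range(length):
        # Score every (beam, next-token) pair; this mirrors building
        # sum_logprobs over beam_width * num_classes entries in the graph.
        candidates = [(ids + [tok], score + log_prob_fn(ids, tok))
                      for ids, score in beams
                      for tok in vocab]
        # The top_k step: keep only the best beam_width hypotheses.
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]
    return beams

# A made-up scorer that prefers repeating the previous token.
def log_prob_fn(ids, tok):
    return math.log(0.6 if tok == ids[-1] else 0.2)

beams = beam_search(log_prob_fn, vocab=[0, 1, 2], beam_width=2,
                    length=3, begin_token=0)
print(beams[0])  # best hypothesis: ([0, 0, 0, 0], 3 * log(0.6))
```

The graph version replaces the sort with tf.nn.top_k over the flattened scores and replaces the per-hypothesis list concatenation with batch_gather, but the selection logic is the same.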
# Inscribing Regular Pentagon in Circle

## Theorem

In a given circle, it is possible to inscribe a regular pentagon.

In the words of Euclid: In a given circle to inscribe an equilateral and equiangular pentagon.

## Construction

Let $ABCDE$ be the given circle (although note that at this stage the positions of the points $A, B, C, D, E$ have not been established). Let $\triangle FGH$ be constructed such that $\angle FGH = \angle FHG = 2 \angle GFH$. Let $\triangle ACD$ be inscribed in $ABCDE$ such that $\angle ACD = \angle FGH, \angle ADC = \angle FHG, \angle CAD = \angle GFH$. Bisect $\angle ACD$ with $CE$ and bisect $\angle ADC$ with $DB$. Then the pentagon $ABCDE$ is the required regular pentagon.

## Proof

We have that $\angle CDA = \angle DCA = 2 \angle CAD$. As $\angle CDA$ and $\angle DCA$ have been bisected, $\angle DAC = \angle ACE = \angle ECD = \angle CDB = \angle BDA$. From Equal Angles in Equal Circles it follows that the arcs $AB, BC, CD, DE, EA$ are all equal. Hence from Equal Arcs of Circles Subtended by Equal Straight Lines, the straight lines $AB, BC, CD, DE, EA$ are all equal. So the pentagon $ABCDE$ is equilateral.

Now since the arc $AB$ equals the arc $DE$, we can add the arc $BCD$ to each. So the arc $ABCD$ equals the arc $BCDE$. So from Angles on Equal Arcs are Equal, $\angle BAE = \angle AED$. For the same reason, $\angle BAE = \angle AED = \angle ABC = \angle BCD = \angle CDE$. So the pentagon $ABCDE$ is equiangular.

$\blacksquare$

## Historical Note

This proof is Proposition $11$ of Book $\text{IV}$ of Euclid's The Elements.
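The angle arithmetic behind the construction may be worth spelling out. Writing $\angle GFH = x$, so that $\angle FGH = \angle FHG = 2x$, the angle sum of $\triangle FGH$ gives:

```latex
x + 2x + 2x = 180^\circ \quad\Longrightarrow\quad x = 36^\circ
```

Hence each of the five equal angles $\angle DAC, \angle ACE, \angle ECD, \angle CDB, \angle BDA$ in the proof is $36^\circ$, and by the Inscribed Angle Theorem each subtends an arc of $2 \times 36^\circ = 72^\circ = 360^\circ / 5$, which is why the five arcs partition the circle equally.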
# How does the Intel 8086/88 know when to assert the IO/M signal?

Consider an Intel 8088 processor with a standard, parallel RAM and ROM implementation that also supports address/data bus access to various external peripherals like analog-to-digital converters (ADCs), UARTs, and more. I'm having trouble designing a chip-select decoding scheme that I'm confident will work. Although I could use logic gates across all 20 address lines, the resulting PCB has significantly more traces and ICs, along with a greater potential for my design being incorrect. I'd like to utilize the IO/M pin to make my design easier to design and debug.

The 8086/88 datasheet describes the basic function of the IO/M pin but doesn't explain the underlying mechanism behind it. I understand that a logic low on the pin indicates a memory access and a logic high indicates I/O access, but I don't understand where the processor comes up with this information.

The memory map I'm trying to work with has 2kB of address space reserved for addressing peripherals. Each ADC requires 8 bytes to address its individual analog inputs, and the UART needs a 1-byte placeholder.

0x00000 - 0x7FFFF : SRAM Chip 0 (512kB)
0x80000 - 0xDFFFF : SRAM Chip 1 (384kB)
------------------------------
0xE0000 : ADC 0 (8 Bytes)
0xE0008 : ADC 1 (8 Bytes)
0xE0010 : UART 0 (1 Byte)
------------------------------
0xE0800 - 0xFFFFF : Flash ROM (126kB)

Since memory maps can be arbitrary, how does the processor magically know when it's trying to access memory vs. I/O devices? By extension, how does the Intel 8088 know what to do with its IO/M pin if I could easily swap the ordering of the above address space?

## migrated from stackoverflow.com Aug 11 '16 at 20:15

This question came from our site for professional and enthusiast programmers.

• My experience of doing I/O at the CPU level is pretty much confined to the fantastic Z80, where you use separate instructions for the two classes of access: LD for memory and OUT/IN for...
well, guess. And presumably the instruction used dictates what its own IORQ pin does. But given your question, I presume you've exhausted all the literature, and the 8086/8 makes no such distinction, right? If so, then that would make for a very interesting question. – underscore_d Aug 5 '16 at 19:06

• @underscore_d That may actually be the answer to my question. I've dug quite a bit into the documentation regarding connecting the ICs, but haven't researched x86 instructions enough to know if specific instructions are "flagged" as I/O and others are memory. I'll research and update my question when I find out! – WebsterXC Aug 5 '16 at 19:22

• @WebsterXC I didn't think there was any other possible answer, but I had to presume you'd read everything. ;-) But yeah, it's almost certainly that. I can't think of any other option. I think there might be CPUs out there that make no distinction, but then they presumably can't have an equivalent pin. – underscore_d Aug 5 '16 at 19:29

Thanks to my suspicions based on a past (and future?) life in Z80 ASM and a quick search for 8086 io, I found a handy synopsis of 8086 I/O at this page by Dr. Jim Plusquellic (hooray for free lecture notes!) - http://ece-research.unm.edu/jimp/310/slides/8086_IO1.html - which I'll now try to... synopsise even more handily.

As his page explains, the 8086 has two available modes of I/O: memory-mapped I/O and isolated (port-mapped) I/O. In the latter case, a special set of instructions must be used - IN, INS, OUT, and OUTS. These cause corresponding signals to be output on the M/IO (Memory or I/O) and R/W (Read/Write) pins. That page indicates the difference and how these can be wired up. As the Prof. explains, using this mode avoids using up normal memory ranges for I/O, with the caveats that:

• it increases circuit complexity: you must wire up the mentioned pins to disambiguate between the two possible meanings of an address and direct each to the right destination.
In doing so, you conceptually create the 'virtual pins' IORC or IOWC (I/O Read/Write Control) shown in the diagram. • it limits the instructions you can use for I/O to the 4 mentioned, rather than letting you do all kinds of acrobatics with normal memory loads/stores/etc., as you could under memory-mapped I/O (assuming the target device will tolerate them!) So, the reason the 8086 and friends know when to assert IO/M is... because you tell them when, by using one of their dedicated I/O instructions. • Thank you so much this is an awesome collection of information and exactly the answer I was hoping for! – WebsterXC Aug 7 '16 at 18:42 • @WebsterXC You're welcome. It was educational for me, too! Credit is due, of course, to Dr. Jim Plusquellic, whose notes I built this on. – underscore_d Aug 7 '16 at 19:04 • Marked the answer as accepted, but not enough rep for a public upvote yet. I'll revisit it soon! – WebsterXC Aug 7 '16 at 19:05
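As a footnote to the accepted answer: the memory-mapped alternative the asker describes is pure address-range decoding, independent of IO/M. A toy Python model of the chip-select logic for the memory map in the question (device names and ranges are taken from the question; a real design would of course implement this in gates or a PLD, not software):

```python
# Chip-select decoding for the question's 20-bit memory map: given an
# address, return the device whose select line would be asserted.
MEMORY_MAP = [
    (0x00000, 0x7FFFF, 'SRAM0'),   # 512kB
    (0x80000, 0xDFFFF, 'SRAM1'),   # 384kB
    (0xE0000, 0xE0007, 'ADC0'),    # 8 bytes
    (0xE0008, 0xE000F, 'ADC1'),    # 8 bytes
    (0xE0010, 0xE0010, 'UART0'),   # 1 byte
    (0xE0800, 0xFFFFF, 'ROM'),     # 126kB
]

def chip_select(addr):
    for lo, hi, device in MEMORY_MAP:
        if lo <= addr <= hi:
            return device
    return None  # unmapped hole in the reserved 2kB peripheral window

print(chip_select(0xE0008))  # → ADC1
```

Under isolated I/O, by contrast, the peripherals would decode a separate 16-bit port space, and the decoder above would be enabled only when IO/M indicates a memory cycle (logic low on the 8088, per the question).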
# [tor-dev] New paper by Goldberg, Stebila, and Ustaoglu with proposed circuit handshake

Douglas Stebila stebila at qut.edu.au
Thu May 12 04:07:10 UTC 2011

On 2011/05/12, at 10:33, Ian Goldberg wrote:

>>> exponentiation routine, the server can compute X^y and X^b
>>> simultaneously. That will make a bigger difference in time, but is not
>>> really relevant from a spec-level standpoint.
>>
>> Can you expand on how this would work? I didn't ask the first time
>> you told me this, on the theory that I could figure it out if I
>> thought about it for long enough, but several days later I still don't
>> have it. All the ways I can think of are inefficient,
>> non-constant-time, or both.
>
> Use right-to-left exponentiation. This is totally off the top of my
...
> Then exp2(base, expon1, expon2) will be:
...

Implementing simultaneous exponentiation for curve25519 is going to be problematic, no matter how simple the algorithm, because Dan Bernstein's curve25519 main loop code is an unravelled assembly file. Modifying it directly to do simultaneous exponentiation will be a huge pain. I expect he actually wrote the code using his personal pseudo-assembly language called qhasm and then generated the .s Athlon assembly from that. We could email him to ask. Without it, and without spending decades decoding the assembly, his curve25519 code, even when run twice, will likely be faster than any simultaneous exponentiation code I write myself.

On 2011/05/12, at 04:19, Ian Goldberg wrote:

>> CLIENT_PK: X -- G_LENGTH bytes
>>
>> The server checks X,
>
> What is "checks X" here? Since the server doesn't really care whether
> or not the crypto is good, this check can probably be elided.
>
>> The server sends a CREATED cell containing:
>>
>> SERVER_PK: Y -- G_LENGTH bytes
>> AUTH: H(auth_input, t_mac) -- H_LENGTH bytes
>>
>> The client then checks Y, and computes
>
> Here, the check is more important.
> Ideally, one would check that Y \in G^* (which should have prime order,
> but doesn't here). But in curve25519, I think you can get away with
> something a bit cheaper. If Y isn't in G at all, but is on the twist
> curve, the AUTH verification below is certain to fail, so that's OK.
> If it's in G, but has low order (i.e. order dividing 8), then EXP(Y,x)
> will end up being the point at infinity, which would be bad. (Indeed,
> it would be pretty much the same problem that Tor had lo those many
> years ago.) So I think it's probably OK to check that EXP(Y,x), which
> you're computing anyway, is not the point at infinity. I don't
> remember offhand how curve25519 represents that point; it may be as
> simple as all-0s, but you should check.

In curve25519, every 32-byte string is a valid public key. The curve25519 webpage http://cr.yp.to/ecdh.html says that public key validation is not required for Diffie-Hellman key agreement. The webpage also lists several points that do not guarantee "contributory" behaviour, which the webpage suggests may be important in non-DH protocols. Contributory, as I know it, refers to when it is important that both parties contributed some randomness to the protocol. I would think that being contributory is a desirable property of key agreement, as it seems necessary for forward secrecy. Perhaps I'm misunderstanding this, however.

Douglas
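[Editorial note] The right-to-left trick Goldberg sketches earlier in the thread can be written out in a few lines. This is only a shape-of-the-algorithm illustration over integers modulo a prime, not the curve25519 group operation, and it is not constant-time as written:

```python
def exp2(base, expon1, expon2, mod):
    # Right-to-left binary exponentiation, sharing the squarings:
    # `b` walks through base^(2^i); each accumulator multiplies it
    # in whenever the corresponding exponent has bit i set.
    r1, r2 = 1, 1
    b = base % mod
    while expon1 or expon2:
        if expon1 & 1:
            r1 = (r1 * b) % mod
        if expon2 & 1:
            r2 = (r2 * b) % mod
        expon1 >>= 1
        expon2 >>= 1
        b = (b * b) % mod
    return r1, r2

# Both powers cost one shared squaring chain plus per-bit multiplies.
x, y = exp2(5, 123, 456, 10007)
assert x == pow(5, 123, 10007) and y == pow(5, 456, 10007)
```

The saving over two independent exponentiations is the squaring chain, which is computed once instead of twice.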
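[Editorial note] As a sketch of the "not the point at infinity" check discussed above: curve25519 does encode the identity as the all-zero 32-byte string, so the check can be a comparison of the shared secret against zeros (the function name here is illustrative, not taken from any Tor source; a constant-time compare avoids a timing side channel):

```python
import hmac

ZERO32 = bytes(32)

def shared_secret_usable(k: bytes) -> bool:
    # Reject the all-zero output that EXP(Y, x) produces when Y has
    # low order (order dividing 8), per the discussion above.
    return len(k) == 32 and not hmac.compare_digest(k, ZERO32)

assert not shared_secret_usable(ZERO32)
assert shared_secret_usable(b"\x09" + bytes(31))
```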
# American Institute of Mathematical Sciences

January 2014, 34(1): 181-201. doi: 10.3934/dcds.2014.34.181

## Regularity of pullback attractors and attraction in $H^1$ in arbitrarily large finite intervals for 2D Navier-Stokes equations with infinite delay

1 Departamento de Ecuaciones Diferenciales y Análisis Numérico, Universidad de Sevilla, Apdo. de Correos 1160, 41080-Sevilla, Spain
2 Departamento de Ecuaciones Diferenciales y Análisis Numérico, Universidad de Sevilla, Apdo. de Correos 1160, 41080-Sevilla
3 Dpto. Ecuaciones Diferenciales y Análisis Numérico, Universidad de Sevilla, Apdo. de Correos 1160, 41080-Sevilla

Received October 2012; Revised February 2013; Published June 2013

In this paper we strengthen some results on the existence and properties of pullback attractors for a non-autonomous 2D Navier-Stokes model with infinite delay. Actually, we prove that under suitable assumptions, and thanks to regularity results, the attraction also happens in the $H^1$ norm for arbitrarily large finite intervals of time. Indeed, from comparison results of attractors we establish that all these families of attractors are in fact the same object. The tempered character of these families in $H^1$ is also analyzed.

Citation: Julia García-Luengo, Pedro Marín-Rubio, José Real. Regularity of pullback attractors and attraction in $H^1$ in arbitrarily large finite intervals for 2D Navier-Stokes equations with infinite delay. Discrete & Continuous Dynamical Systems, 2014, 34 (1): 181-201. doi: 10.3934/dcds.2014.34.181
# Electric and Magnetic field

1. May 2, 2015

Will a static electron be influenced by a magnetic field?

2. May 2, 2015

### Staff: Mentor

What do you mean by a static electron? Do you mean a stationary electron relative to a static magnetic field, like an ordinary magnet? The force on the electron is F = qv x B, where q is the charge of the electron, v is its velocity and B is the magnetic field vector. So ask yourself what the force on the electron is if it's not moving, and that should answer your question.

3. May 2, 2015

### vanhees71

Well, if there's only a magnetic field in the rest frame of the electron, there'll be no force on the electron (see the previous posting). But if the magnetic field is time-dependent, there's also an electric field due to Faraday's Law, $$\frac{1}{c} \partial_t \vec{B}+\vec{\nabla} \times \vec{E}=0.$$ Then, of course, the force on the electron is the full Lorentz force, $$\vec{F}=q \left (\vec{E}+\frac{\vec{v}}{c} \times \vec{B} \right ),$$ so then it will be affected. You always have to look at both the electric and the magnetic field. In fact, electric and magnetic fields are just a split of the one and only electromagnetic field into components with respect to an arbitrary inertial reference frame. NB: I always use Heaviside-Lorentz units, because they are the most natural ones for electromagnetism.

4. May 2, 2015

### tech99

May we say, therefore, that the electrons in a receiving antenna move only in response to the E-field of a passing wave?

5. May 2, 2015

### vanhees71

No, because when the electron moves, there's also a force due to the magnetic field, as written above.

6. May 6, 2015

7. May 6, 2015

### vanhees71

The charge produces a field, of course, and in principle you have to take it into account. This is the so-called "radiation reaction", and it is a tremendously difficult problem which does not have a full resolution for a point particle within classical electrodynamics. Have a look at the usual textbooks (Landau-Lifshitz vol. II, Jackson, etc.).
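The full Lorentz force from post 3 is easy to evaluate numerically. Here is a minimal sketch in SI units, so F = q(E + v x B) without the 1/c factor of the Heaviside-Lorentz form used above:

```python
import numpy as np

def lorentz_force(q, E, v, B):
    """F = q (E + v x B) in SI units (charge in C, fields in V/m and T)."""
    return q * (np.asarray(E, dtype=float) + np.cross(v, B))

q_e = -1.602e-19  # electron charge, C

# A stationary electron in a purely magnetic field feels no force ...
f_static = lorentz_force(q_e, E=[0, 0, 0], v=[0, 0, 0], B=[0, 0, 1.0])

# ... but a moving one does (here v along x, B along z).
f_moving = lorentz_force(q_e, E=[0, 0, 0], v=[1e5, 0, 0], B=[0, 0, 1.0])

print(f_static, f_moving)
```

The first call answers the thread's original question: with v = 0 and E = 0 the force vanishes identically.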
etc., but using a finite sum you obviously need to check out the positive-definiteness. How to generate a symmetric positive definite matrix? It is proved that all the proposed definitions of magnitude coincide for compact positive definite metric spaces and further results are proved about the behavior of magnitude as a function of such spaces. Nearly all random matrices are full rank, so the loop I show will almost always only iterate once and is very very unlikely … The chapter is both reabable and comprehensive. Finally, note that an alternative approach is to do a first try from scratch, then use Matrix::nearPD() to make your matrix positive-definite. I changed 5-point likert scale to 10-point likert scale. I have a set a={x1,x2,x3}, b={y1,y2,y3} and c={z1,z2,z3}. 1. If the factorization fails, then the matrix is not symmetric positive definite. Apply random Jacobi Rotations on a positive definite diagonal matrix... $\endgroup$ – user251257 Mar 31 '16 at 14:55 So all we have to do is generate an initial random matrix with full rank and we can then easily find a positive semi-definite matrix derived from it. You can take eigenvals(M) of a matrix M to prove it is positive definite. Sign in to comment. Is there some know how to solve it? 2,454 11 11 silver badges 25 25 bronze badges $\endgroup$ add a comment | Your Answer Thanks for contributing an answer to Cross Validated! Because the diagonal is 1 and the matrix is symmetric. I didn't find any way to directly generate such a matrix. "Error: cannot allocate vector of size ...Mb", R x64 3.2.2 and R Studio. Best Answer. How to simulate 100 nos. Learn more about positive semidefinite matrix, random number generator There are about 70 items and 30 cases in my research study in order to use in Factor Analysis in SPSS. + A^3 / 3! etc., but using a finite sum you obviously need to check out the positive-definiteness. 
Sometimes, depending of my response variable and model, I get a message from R telling me 'singular fit'. This paper introduces a new method for generating large positive semi-definite covariance matrices. https://www.mathworks.com/matlabcentral/answers/424565-how-to-generate-a-symmetric-positive-definite-matrix#answer_394409, https://www.mathworks.com/matlabcentral/answers/424565-how-to-generate-a-symmetric-positive-definite-matrix#comment_751966, https://www.mathworks.com/matlabcentral/answers/424565-how-to-generate-a-symmetric-positive-definite-matrix#answer_341940, https://www.mathworks.com/matlabcentral/answers/424565-how-to-generate-a-symmetric-positive-definite-matrix#comment_623968, https://www.mathworks.com/matlabcentral/answers/424565-how-to-generate-a-symmetric-positive-definite-matrix#comment_751937, https://www.mathworks.com/matlabcentral/answers/424565-how-to-generate-a-symmetric-positive-definite-matrix#comment_751938, https://www.mathworks.com/matlabcentral/answers/424565-how-to-generate-a-symmetric-positive-definite-matrix#comment_751942. I need a random matrix with preassigned correlation for Monte Carlo simulation. Correlation matrices are symmetric and positive definite (PD), which means that all the eigenvalues of the matrix are positive. It is a real symmetric matrix, and, for any non-zero column vector z with real entries a and b , one has z T I z = [ a b ] [ 1 0 0 1 ] [ a b ] = a 2 + b 2 {\displaystyle z^{\textsf {T}}Iz={\begin{bmatrix}a&b\end{bmatrix}}{\begin{bmatrix}1&0\\0&1\end{bmatrix}}{\begin{bmatrix}a\\b\end{bmatrix}}=a^{2}+b^{2}} . When I look at the Random Effects table I see the random variable nest has 'Variance = 0.0000; Std Error = 0.0000'. user-specified eigenvalues when covMethod = "eigen". A way to check if matrix A is positive definite: A = [1 2 3;4 5 6;7 8 9]; % Example matrix An easy way to obtain an infinite signal is to use the periodic extension of a finite signal. 
It is based on univariate GARCH volatilities of a few, uncorrelated key risk factors to provide more realistic term structure forecasts in covariance matrices. 0. Choose a web site to get translated content where available and see local events and offers. Definition 1: An n × n symmetric matrix A is positive definite if for any n × 1 column vector X ≠ 0, X T AX > 0. https://www.mathworks.com/matlabcentral/answers/123643-how-to-create-a-symmetric-random-matrix#answer_131349, Andrei your solution does not produce necessary sdp matrix (which does not meant the matrix elements are positive), You may receive emails, depending on your. The period $$m$$ should be at least $$2p - 1$$ to avoid periodic effects. I increased the number of cases to 90. Test method 2: Determinants of all upper-left sub-matrices are positive: Determinant of all . Frequently in physics the energy of a system in state x is represented as XTAX (or XTAx) and so this is frequently called the energy-baseddefinition of a positive definite matrix. This note describes a methodology for scaling selected off-diagonal rows and columns of such a matrix to achieve positive definiteness. But did not work. Show Hide all comments. Show Hide all comments. So, I did something like this. Survey data was collected weekly. I didn't find any way to directly generate such a matrix. Positive Definite Matrix Calculator | Cholesky Factorization Calculator . Also, it is the only symmetric matrix. Because it is symmetric and PD, it is a valid covariance matrix. A matrix is positive definite if all it's associated eigenvalues are positive. You can do this in software packages like Mathematica or R. Alternatively, you can draw a given number of individuals from a multivariate normal distribution and compute their covariance matrix. The simplest to produce is a square matrix size(n,n) that has the two positive … generate large GARCH covariance matrices with mean-reverting term structures. 
Unable to complete the action because of changes made to the page. Computes the Cholesky decomposition of a symmetric positive-definite matrix A A A or for batches of symmetric positive-definite matrices. How to get a euclidean distance within range 0-1? data from above scenario? I would like to generate a hermitian positive definite matrix Z based on random rayleigh fading channel matrix H. The rayleigh fading channel with i.i.d, zero-mean, and unit-variance complex Gaussian random variables. So here is a tip: you can generate a large correlation matrix by using a special Toeplitz matrix. Only the second matrix shown above is a positive definite matrix. Covariance matrix of image data is not positive definite matrix. The matrix exponential is calculated as exp(A) = Id + A + A^2 / 2! Key words: positive definite matrix, Wishart distribution, multivariate normal (Gaussian) distribution, sample correlation coefficients, generating random matrices 2000 Mathematics Subject Classification: 62H10 Still, for small matrices the difference in computation time between the methods is negligible to check whether a matrix is symmetric positive definite. Matlab flips the eigenvalue and eigenvector of matrix when passing through singularity; How to determine if a matrix is positive definite using MATLAB; How to generate random positive semi-definite matrix with ones at the diagonal positions; How to create sparse symmetric positive definite … Joe, H. (2006) Generating Random Correlation Matrices Based on Partial Correlations. Between the 1960s and the present day, the use of morphology in plant taxonomy suffered a major decline, in part driven by the apparent superiority of DNA-based approaches to data generation. Does anybody know how can I order figures exactly in the position we call in Latex template? Only regression/ trend line equation and R value are given. The rWishart() R function states that the scale matrix should be positive definite. 
Choices are “eigen”, “onion”, “c-vine”, or “unifcorrmat”; see details below. However, I found that *Lehmer* matrix is a positive definite matrix that when you raise each element to a nonnegative power, you get a positive semi-definite matrix. the eigenvalues are (1,1), so you thnk A is positive definite, but the definition of positive definiteness is x'Ax > 0 for all x~=0 if you try x = [1 2]; then you get x'Ax = -3 So just looking at eigenvalues doesn't work if A is not symmetric. Learn more about correlation, random, matrix, positive, symmetric, diagonal Sign in to answer this question. I could generate the matrices using an uniform distribution (as far as I could see, this is the standard method) and then force it to be positive-definite using this. 0 ⋮ Vote. You can also select a web site from the following list: Select the China site (in Chinese or English) for best site performance. So, I did something like this. Mean and standard deviation are not given. References. Only the second matrix shown above is a positive definite matrix. I'm trying to normalize my Affymetrix microarray data in R using affy package. So How do I generate a positive definite sparse symmetric matrix? However, happy to pass on any results or information that could be helpful outside of providing the raw data. Sign in to answer this question. I am very new to mixed models analyses, and I would appreciate some guidance. Can you tell me the solution please. upper-left sub-matrices must be positive. positive semidefinite matrix random number generator I'm looking for a way to generate a *random positive semi-definite matrix* of size n with real number in the *range* from 0 to 4 for example. However, this approach is infeasible given a large matrix, say $1000 \times 1000$ or more. Follow 487 views (last 30 days) Riccardo Canola on 17 Oct 2018. In the previous example, the matrix was generated by the vector {5,4,3,2,1}. Accelerating the pace of engineering and science. 
What is your suggested solution, when the correlation matrix is not positive definite? My sample size is big(nearly 30000). Can anybody help me understand this and how should I proceed? Author(s) Weiliang Qiu weiliang.qiu@gmail.com Harry Joe harry@stat.ubc.ca. But, i get a warning Error: cannot allocate vector of size 1.2 Gb. 0 ⋮ Vote. Vote. The matrix exponential is calculated as exp(A) = Id + A + A^2 / 2! How to Generate/simulate data from R value and regression equation? For more information on this approach, see Armin Schwartzman's notes (, Virginia Polytechnic Institute and State University. This definition makes some properties of positive definite matrices much easier to prove. generate positive definite matrix with identical diagonal elements. Dimension of the matrix to be generated. I tried to it but program shows the eror massage. Is there any better way? I think a crucial insight is that multiplying a matrix with its transpose will give a symmetrical square matrix. I guess it depends on your simulation which covariance matrices you need. numeric. The paper ends with an algorithm for generating uniformly distributed positive definite matrices with preliminary fixed diagonal elements. The matrix exponential is calculated as exp(A) = Id + A + A^2 / 2! Generating positive definite Toeplitz matrices. Given below is the useful Hermitian positive definite matrix calculator which calculates the Cholesky decomposition of A in the form of A=LL , where L is the lower triangular matrix and L is the conjugate transpose matrix of L. A=16*gallery … So my questions are: 1. Hence, I divided each distance with the mean of set a to make it smaller with range of 0-1: I'm not sure if this is mathematically correct or not. share | cite | improve this answer | follow | answered Oct 27 '19 at 18:27. Today, we are continuing to study the Positive Definite Matrix a little bit more in-depth. Could anyone please suggest an efficient way to generate a positive semidefinite matrix? 
If this is the case, there will be a footnote to the correlation matrix that states "This matrix is not positive definite." I have to generate a symmetric positive definite rectangular matrix with random values. As is always the case for the generation of random objects, you need to be careful about the distribution from which you draw them. If \(m = p\) then the matrix will be circulant Toeplitz. First, the inverse Wishart is the natural positive definite covariance matrix for normally distributed data. Finally, the matrix exponential of a symmetric matrix is positive definite. A matrix is positive definite if x'Ax > 0 for all vectors x ≠ 0. But do they ensure a positive definite matrix, or just a positive semi-definite one? The identity matrix is positive definite (and as such also positive semi-definite). How can I increase memory size and memory limit in R? More specifically, we will learn how to determine whether a matrix is positive definite or not. Joe, H. (2006) Generating Random Correlation Matrices Based on Partial Correlations. But it is still better to produce a positive-definite covariance matrix in a principled way from some model. If I want my covariance matrix to be d × d, then I only have d(d−1)/2 parameters to generate. The Problem: there are four situations in which a researcher may get a message about a matrix being "not positive definite." I am running linear mixed models for my data using 'nest' as the random variable. How can I randomly generate data with a given covariance matrix?
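One standard recipe that recurs in this thread (multiply a matrix by its own transpose, then add a small diagonal jitter) can be sketched in plain Python. The helper names and the `eps` jitter are my own, not from any answer above; the Cholesky attempt doubles as the positive definiteness test:

```python
import random

def cholesky(a):
    """Plain-Python Cholesky; raises ValueError if `a` is not positive definite."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = a[i][i] - s
                if d <= 0:
                    raise ValueError("matrix is not positive definite")
                L[i][j] = d ** 0.5
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L

def random_spd(n, eps=1e-6, seed=0):
    """Random symmetric positive definite matrix built as B*B' + eps*I."""
    rng = random.Random(seed)
    B = [[rng.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
    # B*B' is symmetric positive semi-definite; eps*I makes it strictly definite.
    return [[sum(B[i][k] * B[j][k] for k in range(n)) + (eps if i == j else 0.0)
             for j in range(n)] for i in range(n)]

A = random_spd(4)
cholesky(A)  # succeeds, so A is positive definite
```

To draw entries from a specific range or correlation structure instead, one would scale or transform B before forming B*B'.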
A quick test for positive definiteness is simply to attempt a Cholesky factorization (chol) of the matrix; for a symmetric matrix the factorization succeeds if and only if the matrix is positive definite (if the matrix is not symmetric, the factorization fails). A common repair is eigenvalue clipping: compute the eigendecomposition and, if any eigenvalue is less than a given tolerance, replace that eigenvalue with zero (or with the tolerance), which yields a positive semi-definite matrix. Other recipes that come up: multiply a matrix by its own transpose; draw a matrix from a Wishart distribution; take the matrix exponential of a symmetric matrix; or use gallery('lehmer',100) in MATLAB for a 100 × 100 positive definite Lehmer matrix. One note describes a methodology for scaling selected off-diagonal rows and columns of an indefinite matrix to achieve positive definiteness. The seminal work on dealing with matrices that are not positive definite is Wothke (1993), "Nonpositive definite matrices in structural modeling." The mixed-model discussion concerns an 8-week study fit as a linear mixed model, with week and participant as random effects and whether or not participants were assigned the technology as the fixed effect.
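The eigenvalue-clipping repair can be shown in full for the 2×2 symmetric case, where the eigendecomposition has a closed form. This is a minimal sketch (function name and tolerance handling are mine, not from the thread):

```python
import math

def clip_psd_2x2(a, b, c, tol=0.0):
    """Eigenvalue clipping for a symmetric 2x2 matrix [[a, b], [b, c]]:
    eigenvalues below `tol` are raised to `tol`, and the matrix is
    rebuilt from the clipped spectral decomposition."""
    mean, half = (a + c) / 2.0, math.hypot((a - c) / 2.0, b)
    lams = [mean + half, mean - half]  # eigenvalues, largest first
    if b == 0.0:
        # already diagonal; order eigenvectors to match lams
        vecs = [(1.0, 0.0), (0.0, 1.0)] if a >= c else [(0.0, 1.0), (1.0, 0.0)]
    else:
        vecs = []
        for lam in lams:
            vx, vy = b, lam - a          # unnormalized eigenvector for lam
            norm = math.hypot(vx, vy)
            vecs.append((vx / norm, vy / norm))
    # rebuild sum of max(lam, tol) * v v^T
    out = [[0.0, 0.0], [0.0, 0.0]]
    for lam, (vx, vy) in zip(lams, vecs):
        lam = max(lam, tol)
        out[0][0] += lam * vx * vx
        out[0][1] += lam * vx * vy
        out[1][0] += lam * vy * vx
        out[1][1] += lam * vy * vy
    return out

# [[1, 2], [2, 1]] has eigenvalues 3 and -1, so it is indefinite;
# clipping -1 up to zero gives the nearest PSD matrix, [[1.5, 1.5], [1.5, 1.5]].
print(clip_psd_2x2(1.0, 2.0, 1.0))
```

For larger matrices the same idea applies with a general symmetric eigensolver in place of the closed form.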
# TeXShop - XeLaTeX: How to include tamil scripts?

I am using TeXShop - XeLaTeX on a MacBook Pro with Mountain Lion. I used the following:

    \documentclass{article}
    \usepackage[no-math]{fontspec}
    \begin{document}
    அ
    \end{document}

I got an error: Undefined control sequence \UTF ... Could anyone give some suggestions so that I can use Tamil Unicode fonts as part of my LaTeX file.

- Welcome to TeX.SX. A tip: If you indent lines by 4 spaces, then they're marked as a code sample. You can also highlight the code and click the "code" button ({}). Usually, we don't put a greeting or a "thank you" in our posts. While this might seem strange at first, it is not a sign of lack of politeness, but rather part of our trying to keep everything very concise. Upvoting is the preferred way here to say "thank you" to users who helped you. –  Claudio Fiandrino Sep 19 '12 at 8:07
- In a current TeX system your example does not give an error. It doesn't give the glyph either, as the font doesn't contain it; you will have to switch to another font, e.g. with {\fontspec{Arial Unicode MS}அ}. –  Ulrike Fischer Sep 19 '12 at 8:22

You have to declare a font that covers Tamil; for instance on my machine I have InaiMathi; then polyglossia can take care of some aspects of Tamil.

    \documentclass{article}
    \usepackage[no-math]{fontspec}
    \newfontfamily\tamilfont{InaiMathi}
    \usepackage{polyglossia}
    \setmainlanguage{tamil}
    \begin{document}
    அ தமிழ்
    \end{document}

The file must be saved as UTF-8 and processed with XeLaTeX.

- When I did the above, I got these error messages in the log: –  poornima Sep 20 '12 at 4:58
- Sorry for the previous goof-up. When I did as egreg (also Ulrike Fischer) suggested, I got the same error message. 1st line of the log: This is XeTeX, Version 3.1415926-2.4-0.9998 (TeX Live 2012) (format=xelatex 2012.6.30) –  poornima Sep 21 '12 at 14:56
# [.net] C# -- implementing operator++

## Recommended Posts

I am experiencing an InvalidProgramException, and can't quite seem to figure out the cause. The documentation says this is probably a bug either in the C# compiler or the JIT compiler. Running PEVerify yields a StackUnderflow error at the line I can trace the problem to, so I assume the problem is not in the JIT portion. But before I bring this to MS's attention, I want to make sure I'm not doing something retarded. The following file reproduces the error consistently:

    namespace ErrorTest
    {
        class Program
        {
            public Program(int _) { i = _; }

            public Program Begin() { return new Program(0); }

            public static Program operator ++(Program b)
            {
                Program tmp = new Program(b.i);
                tmp.i++;
                return tmp;
            }

            public int i;

            static void Main(string[] args)
            {
                Program x = new Program(0);
                x.Begin()++;
                //++x.Begin();  // This also fails
            }
        }
    }

Realistically, x.Begin()++; could be replaced with (new Program(0))++; and the results would be the same. I'm guessing the problem is that the temporary Program object is getting lost somewhere. The operator++ implementation is what I'm mostly concerned with. Is that the typical methodology [construct, modify, and return a temporary]? It produces the sort of results one would expect, and I've found a few snippets scattered around the internet that would support it, but I want to make sure.
Unfortunately, I'm finding it rather hard to dissect the MSIL myself, but if it helps here's what I get out of the disassembler:

    .method private hidebysig static void Main(string[] args) cil managed
    {
      .entrypoint
      // Code size 14 (0xe)
      .maxstack 2
      .locals init ([0] class ErrorTest.Program x)
      IL_0000: nop
      IL_0001: ldc.i4.0
      IL_0002: newobj instance void ErrorTest.Program::.ctor(int32)
      IL_0007: stloc.0
      IL_0008: call class ErrorTest.Program ErrorTest.Program::op_Increment(class ErrorTest.Program)
      IL_000d: ret
    } // end of method Program::Main

PEVerify says the stack underflow occurs at offset 0x8, which I assume refers to IL_0008 in that code. If anybody could verify that I'm not doing anything obviously wrong, I'd appreciate it.

CM

##### Share on other sites

Hello. I believe that you must use it like this: ((Program)x++).Begin();

##### Share on other sites

Quote: Original post by Hole
Hello. I believe that you must use it like this: ((Program)x++).Begin();

I do not want to change x, I want to change x.Begin(). The example code is a little odd, but in my original code "Program" is a container, and "Program.Begin()" is an iterator. So there's a type change that prevents that reordering from being valid.

CM

##### Share on other sites

I would suggest running this by the people at the MSDN forums. There are actual softies on the C# team who lurk there and can take a better look at it.

##### Share on other sites

To my knowledge, the ++ operator is automatically generated from the + operator...

##### Share on other sites

Technically what it's doing is right; it's only doing it incorrectly. According to the ECMA C# standard:

§14.5.9 The operand of a postfix increment or decrement operation shall be an expression classified as a variable, a property access, or an indexer access. The result of the operation is a value of the same type as the operand.
So, since in the case you've presented, what you're applying the post-increment operator to is none of a variable, a property, or an indexer, it is mostly your program that is incorrect. Note that I say mostly, not completely, as the compiler also seems to be screwing up (which is obvious, as you do not get a compiler error, nor does it throw an InvalidOperationException, which I believe is what the correct exception would be in this case). After looking at the IL, it seems that the program is not loading an instance of Program onto the evaluation stack before calling Program.op_Increment, and when (as far as I know) the JIT attempts to verify the code, it detects this and throws an InvalidProgramException. I would recommend, as Promit did, that you post about this either on Usenet or MSDN or somewhere that someone from Microsoft would see it who would be able to do something about it. Like I said, I believe that this is the correct outcome for the program as the construct is invalid, just that the compiler is going about expressing it incorrectly (most likely due to incorrect or non-thorough checks when working with user-defined increment and decrement operators; for example, try applying ++ to a constant such as 10).

##### Share on other sites

Oh, also, for reference I patched the IL of the Main method to actually execute correctly. Of course my code won't exactly match that of the code generator, but it should still be a good reference for what it ought to be.
(Note that this is minus your call to Begin(); this could be what you get after inlining the code to Begin()):

    .method private hidebysig static void Main(string[] args) cil managed
    {
      .entrypoint
      // Code size 24 (0x18)
      .maxstack 1
      .locals init (int32 V_0)
      IL_0000: nop
      IL_0001: ldc.i4.s 10
      IL_0003: stloc.0
      IL_0004: ldloc.0
      IL_0005: call void [mscorlib]System.Console::WriteLine(int32)
      IL_000a: nop
      //////// New Code
      IL_000b: ldc.i4.0
      IL_000c: newobj instance void ErrorTest.Program::.ctor(int32)
      //////// End New Code
      IL_0011: call class ErrorTest.Program ErrorTest.Program::op_Increment(class ErrorTest.Program)
      IL_0016: pop
      IL_0017: ret
    } // end of method Program::Main

##### Share on other sites

I think the AP is definitely correct. I just tried (new int())++;, and it results in a [somewhat confusing] error message: "The left-hand side of an assignment must be a variable, property or indexer." I'll definitely take this over to MSDN, thanks.

CM

##### Share on other sites

Just a little more fun with ++. Given the specific warning I got from (new int())++, I decided to replace the Begin() in my original code with a read-only property with the same name. I assumed it would fail, because it would try to set the new value to the property, and I didn't provide a set method. But the change was easy enough to make. The result was the compiler crashing and me sending an error report to MS. I assume this is a less elegant version of the Internal Compiler Error you get in C++.

Quote: Original post by Rob Loach
To my knowledge, the ++ operator is automatically generated from the + operator...

++ has to be implemented individually, because you can overload + several times and none of them have to be integral. However, you get both the preincrement and the postincrement versions from a single overload. You were probably thinking of +=, which is generated for you if you overload +.
CM
# Scientific notation

Scientific notation is a way of writing numbers that accommodates values too large or small to be conveniently written in standard decimal notation. Scientific notation has a number of useful properties and is commonly used in calculators, and by scientists, mathematicians, doctors, and engineers.

In scientific notation all numbers are written like this: $a \times 10^b$ ("a times ten raised to the power of b"), where the exponent b is an integer, and the coefficient a is any real number (but see normalized notation below), called the significand or mantissa (though the term "mantissa" may cause confusion as it can also refer to the fractional part of the common logarithm). If the number is negative then a minus sign precedes a (as in ordinary decimal notation).

| Ordinary decimal notation | Scientific notation (normalized) |
|---------------------------|----------------------------------|
| 300                       | 3×10²                            |
| 4,000                     | 4×10³                            |
| 5,720,000,000             | 5.72×10⁹                         |
| 0.0000000061              | 6.1×10⁻⁹                         |

## Normalized notation

Any given number can be written in the form a×10ᵇ in many ways; for example 350 can be written as 3.5×10² or 35×10¹ or 350×10⁰. In normalized scientific notation, the exponent b is chosen such that the absolute value of a remains at least one but less than ten (1 ≤ |a| < 10). For example, 350 is written as 3.5×10². This form allows easy comparison of two numbers of the same sign in a, as the exponent b gives the number's order of magnitude. In normalized notation the exponent b is negative for a number with absolute value between 0 and 1 (e.g., minus one half is −5×10⁻¹). The 10 and exponent are usually omitted when the exponent is 0. Note that 0 itself cannot be written in normalized scientific notation since the significand would have to be zero and the exponent undefined. In many fields, scientific notation is normally expressed in this way, except during intermediate calculations or when an unnormalized form, such as engineering notation, is desired.
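The normalization rule (choose b so that 1 ≤ |a| < 10) can be sketched in a few lines of code. The function name is mine, and, as noted above, 0 has no normalized form, so the sketch assumes a nonzero input:

```python
import math

def normalize(x):
    """Return (a, b) with x == a * 10**b and 1 <= |a| < 10 (x must be nonzero)."""
    b = math.floor(math.log10(abs(x)))  # exponent = order of magnitude of |x|
    return x / 10 ** b, b

print(normalize(350))   # (3.5, 2)
print(normalize(-0.5))  # (-5.0, -1)
```

Floating-point rounding means the recovered significand may be off in the last bits for some inputs, which is why exact decimal work would use the decimal module instead.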
(Normalized) scientific notation is often called exponential notation, although the latter term is more general and also applies when a is not restricted to the range 1 to 10 (as in engineering notation, for instance) and to bases other than 10 (as in 3.15×2²⁰).

## E notation

A calculator display showing the Avogadro constant in E notation.

Most calculators and many computer programs present very large and very small results in scientific notation. Because superscripted exponents like 10⁷ cannot always be conveniently represented on computers, typewriters and calculators, an alternative format is often used: the letter E or e represents "times ten raised to the power of", thus replacing the × 10, followed by the value of the exponent. Note that the character e is not related to the mathematical constant e or the exponential function eˣ (a confusion that is less likely with capital E); and though it stands for exponent, the notation is usually referred to as (scientific) E notation or (scientific) e notation, rather than (scientific) exponential notation (though the latter also occurs).

### Examples and alternatives

• In the C++, FORTRAN, MATLAB, Perl, Java[1] and Python programming languages, 6.0221418E23 or 6.0221418e23 is equivalent to 6.0221418×10²³. FORTRAN also uses "D" to signify double precision numbers.[2]
• The ALGOL 60 programming language uses a subscript ten "₁₀" character instead of the letter E, for example: 6.0221415₁₀23.[3]
• The ALGOL 68 programming language has the choice of 4 characters: e, E, \, or ₁₀. For example: 6.0221415e23, 6.0221415E23, 6.0221415\23 or 6.0221415₁₀23.[4]
• The Decimal Exponent Symbol is part of The Unicode Standard 6.0, e.g. 6.0221415⏨23; it was included to accommodate usage in the programming languages Algol 60 and Algol 68.
• The TI-83 series and TI-84 Plus series of calculators use a stylized E character to display the decimal exponent.[7]
• The Simula programming language requires the use of & (or && for long), for example: 6.0221415&23 (or 6.0221415&&23).[5]

## Engineering notation

Engineering notation differs from normalized scientific notation in that the exponent b is restricted to multiples of 3. Consequently, the absolute value of a is in the range 1 ≤ |a| < 1000, rather than 1 ≤ |a| < 10. Though similar in concept, engineering notation is rarely called scientific notation. This allows the numbers to explicitly match their corresponding SI prefixes, which facilitates reading and oral communication. For example, 12.5×10⁻⁹ m can be read as "twelve-point-five nanometers" or written as 12.5 nm, while its scientific notation counterpart 1.25×10⁻⁸ m would likely be read out as "one-point-two-five times ten-to-the-negative-eighth meters".

## Use of spaces

In normalized scientific notation, in E notation, and in engineering notation, the space (which in typesetting may be represented by a normal-width space or a thin space) that is allowed only before and after "×" or in front of "E" or "e" is sometimes omitted, though it is less common to do so before the alphabetical character.[6]

### Examples

• An electron's mass is about 0.00000000000000000000000000000091093822 kg. In scientific notation, this is written 9.1093822×10⁻³¹ kg.
• The Earth's mass is about 5973600000000000000000000 kg. In scientific notation, this is written 5.9736×10²⁴ kg.
• The Earth's circumference is approximately 40000000 m. In scientific notation, this is 4×10⁷ m. In engineering notation, this is written 40×10⁶ m. In SI writing style, this may be written "40 Mm" (40 megameters).
• An inch is 25400 micrometers. Describing an inch as 2.5400×10⁴ µm unambiguously states that this conversion is correct to the nearest micrometer. An approximated value with only three significant digits would be 2.54×10⁴ µm instead.
In this example, the number of significant zeros is actually infinite (which is not the case with most scientific measurements, which have a limited degree of precision). It can be properly written with the minimum number of significant zeros used with other numbers in the application (no need to have more significant digits than other factors or addends). Or a bar can be written over a single zero, indicating that it repeats forever. The bar symbol is just as valid in scientific notation as it is in decimal notation.

### Significant figures

#### Ambiguity of the last digit in scientific notation

It is customary in scientific measurements to record all the significant digits from the measurements, and to guess one additional digit if there is any information at all available to the observer to make a guess. The resulting number is considered more valuable than it would be without that extra digit, and it is considered a significant digit because it contains some information leading to greater precision in measurements and in aggregations of measurements (adding them or multiplying them together).

Additional information about precision can be conveyed through additional notations. In some cases, it may be useful to know how exact the final significant digit is. For instance, the accepted value of the unit of elementary charge can properly be expressed as 1.602176487(40)×10⁻¹⁹ C,[7] which is shorthand for (1.602176487±0.000000040)×10⁻¹⁹ C.

### Order of magnitude

Scientific notation also enables simpler order-of-magnitude comparisons. A proton's mass is 0.0000000000000000000000000016726 kg. If this is written as 1.6726×10⁻²⁷ kg, it is easier to compare this mass with that of the electron, given above. The order of magnitude of the ratio of the masses can be obtained by comparing the exponents instead of the more error-prone task of counting the leading zeros.
In this case, −27 is larger than −31 and therefore the proton is roughly four orders of magnitude (about 10000 times) more massive than the electron. Scientific notation also avoids misunderstandings due to regional differences in certain quantifiers, such as billion, which might indicate either 10⁹ or 10¹².

## Using scientific notation

### Converting

To convert from ordinary decimal notation to scientific notation, move the decimal separator the desired number of places to the left or right, so that the significand will be in the desired range (between 1 and 10 for the normalized form). If you moved the decimal point n places to the left then multiply by 10ⁿ; if you moved the decimal point n places to the right then multiply by 10⁻ⁿ. For example, starting with 1230000, move the decimal point six places to the left yielding 1.23, and multiply by 10⁶, to give the result 1.23×10⁶. Similarly, starting with 0.000000456, move the decimal point seven places to the right yielding 4.56, and multiply by 10⁻⁷, to give the result 4.56×10⁻⁷.

If the decimal separator did not move then the exponent multiplier is logically 10⁰, which is correct since 10⁰ = 1. However, the exponent part "× 10⁰" is normally omitted, so, for example, 1.234×10⁰ is just written as 1.234.

To convert from scientific notation to ordinary decimal notation, take the significand and move the decimal separator by the number of places indicated by the exponent: left if the exponent is negative, or right if the exponent is positive. Add leading or trailing zeroes as necessary. For example, given 9.5×10¹⁰, move the decimal point ten places to the right to yield 95000000000.

Conversion between different scientific notation representations of the same number is achieved by performing opposite operations of multiplication or division by a power of ten on the significand and the exponent parts.
The decimal separator in the significand is shifted n places to the left (or right), corresponding to division (multiplication) by 10ⁿ, and n is added to (subtracted from) the exponent, corresponding to a canceling multiplication (division) by 10ⁿ. For example:

1.234×10³ = 12.34×10² = 123.4×10¹ = 1234

### Basic operations

Given two numbers in scientific notation,

$x_0=a_0\times10^{b_0}$ and $x_1=a_1\times10^{b_1}$

multiplication and division are performed using the rules for operation with exponential functions:

$x_0 x_1=a_0 a_1\times10^{b_0+b_1}$ and $\frac{x_0}{x_1}=\frac{a_0}{a_1}\times10^{b_0-b_1}$

Some examples are:

$5.67\times10^{-5} \times 2.34\times10^2 \approx 13.3\times10^{-3} = 1.33\times10^{-2}$

and

$\frac{2.34\times10^2}{5.67\times10^{-5}} \approx 0.413\times10^{7} = 4.13\times10^6$

Addition and subtraction require the numbers to be represented using the same exponential part, so that the significands can be simply added or subtracted. First, rewrite one operand with the other's exponent:

$x_1 = c \times10^{b_0}$ where $c = a_1\times10^{b_1-b_0}$

Next, add or subtract the significands:

$x_0 \pm x_1=(a_0\pm c)\times10^{b_0}$

An example:

$2.34\times10^{-5} + 5.67\times10^{-6} = 2.34\times10^{-5} + 0.567\times10^{-5} \approx 2.91\times10^{-5}$

## Notes and references

2. ^ http://www.math.hawaii.edu/lab/197/fortran/fort3.htm#double
3. ^ Report on the Algorithmic Language ALGOL 60, Ed. P. Naur, Copenhagen 1960
4. ^ "Revised Report on the Algorithmic Language Algol 68". September 1973. Retrieved April 30, 2007.
5. ^ "SIMULA Standard As defined by the SIMULA Standards Group - 3.1 Numbers". August 1986. Retrieved October 6, 2009.
6. ^ Samples of usage of terminology and variants: [1], [2], [3], [4], [5], [6]
7. ^ NIST value for the elementary charge
{}
# How do you solve the following system?: 8x + 6y = 9, -5x - 7y = -2 May 3, 2017 With some rearrangement: $x = \frac{51}{26}$ and $y = - \frac{29}{26}$ #### Explanation: Multiply the first equation by 5 and the second equation by 8. Now you have: $40 x + 30 y = 45$ $- 40 x - 56 y = - 16$ Now sum these up: $- 26 y = 29$ or $y = - \frac{29}{26}$ Now you can find x using the first or second equation: $8 x - \frac{6 \cdot 29}{26} = 9$ $8 x = 9 + \frac{6 \cdot 29}{26}$ $8 x = \left(\frac{117}{13}\right) + \left(\frac{3 \cdot 29}{13}\right)$ $8 x = \frac{204}{13}$ $x = \frac{204}{104}$ or $x = \frac{51}{26}$ $x = 1 \frac{25}{26}$, $y = - 1 \frac{3}{26}$ #### Explanation: $8 x + 6 y = 9$ ....................(i) $- 5 x - 7 y = - 2$ ....................(ii) You can solve this system by using the elimination method. You can eliminate either $x$ or $y$ here; I will eliminate $x$. So, multiplying eq. (i) by $5$ and eq. (ii) by $8$, you will get $40 x + 30 y = 45$ ......................(iii) & $- 40 x - 56 y = - 16$ ........................(iv) Adding eq. (iii) and (iv), you will get $- 26 y = 29$ $\Rightarrow$ $y = - \frac{29}{26}$ Substituting $y = - \frac{29}{26}$ into eq. (i), you will get $8 x + \left(- \frac{29}{26}\right) \cdot 6 = 9$ $\Rightarrow$ $8 x = 9 + \frac{87}{13}$ $\Rightarrow$ $8 x = \frac{117 + 87}{13}$ $\Rightarrow$ $8 x = \frac{204}{13}$ $\Rightarrow$ $x = \frac{204}{104} = \frac{51}{26}$
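The elimination above can be checked mechanically. A small Python sketch (not part of the original answers) that uses exact rational arithmetic:

```python
from fractions import Fraction

# The system: 8x + 6y = 9 and -5x - 7y = -2.
a1, b1, c1 = 8, 6, 9
a2, b2, c2 = -5, -7, -2

# Multiply the first equation by 5 and the second by 8 so the x terms cancel.
m1, m2 = 5, 8
yb = m1 * b1 + m2 * b2          # coefficient of y after summing: -26
yc = m1 * c1 + m2 * c2          # right-hand side after summing: 29
y = Fraction(yc, yb)            # y = -29/26

# Back-substitute into the first equation: 8x = 9 - 6y.
x = (c1 - b1 * y) / a1          # x = 51/26

print(x, y)  # 51/26 -29/26
```

Using `Fraction` avoids any floating-point rounding, so the result matches the hand computation exactly.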
{}
# Favoured neutron excitations in superdeformed $^{147}$Gd Document type: Journal article http://hal.in2p3.fr/in2p3-00005335 Contributor: Yvette Heyd. Submitted on: Wednesday, April 5, 2000 - 8:46:31 AM. Last modification on: Thursday, March 25, 2021 - 2:48:02 PM ### Identifiers • HAL Id: in2p3-00005335, version 1 ### Citation C. Theisen, J.P. Vivien, I. Ragnarsson, C.W. Beausang, F.A. Beck, et al.. Favoured neutron excitations in superdeformed $^{147}$Gd. Physical Review C, American Physical Society, 1996, 54, pp.2910. ⟨in2p3-00005335⟩
{}
Two CSMA/CD stations are each trying to transmit long (multi-frame) files. After each frame is sent, they contend for the channel, using the binary exponential back-off algorithm. What is the probability that the contention ends on round k, and what is the mean number of rounds per contention period?
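For intuition (a simulation sketch, not part of the original question): with two stations, on contention round i each station picks a slot uniformly from {0, ..., 2^i - 1}, so round i resolves the contention with probability 1 - 2^(-i), giving P(end on round k) = (prod over i < k of 2^(-i)) * (1 - 2^(-k)). A Monte Carlo check:

```python
import random

def contention_rounds(rng):
    # Two stations run binary exponential back-off: on round i each picks a
    # slot uniformly from {0, ..., 2**i - 1}; they collide iff the slots match.
    i = 1
    while True:
        if rng.randrange(2 ** i) != rng.randrange(2 ** i):
            return i
        i += 1

rng = random.Random(0)
trials = 100_000
counts = {}
for _ in range(trials):
    k = contention_rounds(rng)
    counts[k] = counts.get(k, 0) + 1

# Compare against the closed form for the first few rounds.
for k in (1, 2, 3):
    theory = (0.5 ** (k * (k - 1) // 2)) * (1 - 0.5 ** k)
    print(k, counts.get(k, 0) / trials, theory)
```

The empirical frequencies should land close to 0.5, 0.375, and 0.109375 for k = 1, 2, 3.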
{}
## College Physics (7th Edition) Using the ideal gas law at constant volume and amount of gas: $NR/V = P/T = 3/303$ For $T = -20^{\circ}C = 253\,K$: $P = \frac{NRT}{V} = \frac{3\times253}{303} \approx 2.5\,atm$
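The arithmetic can be double-checked in a couple of lines (a sketch; it assumes the initial state implied by the 3/303 ratio, namely 3 atm at 303 K):

```python
# At constant volume and amount of gas, P/T is constant (Gay-Lussac's law),
# so P2 = P1 * T2 / T1 with temperatures in kelvin.
P1, T1 = 3.0, 303.0   # assumed initial state: 3 atm at 303 K
T2 = 253.0            # -20 C in kelvin
P2 = P1 * T2 / T1
print(P2)             # about 2.5 atm
```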
{}
# Find Surface Area obtained by rotating a curve? Find the area of the surface obtained by rotating the curve y=2e^(2y) from y=0 to y=4 about the y-axis. Any help on this would be greatly appreciated. This has my whole hall stumped. We know that you have to use the equation 2pi*int(g(y)sqrt(1+(derivative of function)^2)), but cannot figure out how to integrate this correctly. What I have gotten so far: y=2e^(2y) [when u=2y, du/2=dx] y=e^u New bounds: 1 to e^4 2pi*int(e^u*sqrt(1+(e^u)^2)) How do you go from there? Any help would be greatly appreciated. Sorry, the problem is x=2e^(2y) rock.freak667 (Homework Helper): $x=2e^{2y} \Rightarrow \frac{dx}{dy}=4e^{2y}$ $$S=2\pi \int _{0} ^{4} x\sqrt{1+\left(\frac{dx}{dy}\right)^2} dy$$ $$S=2\pi \int _{0} ^{4} 2e^{2y}\sqrt{1+(4e^{2y})^2} dy$$ Let $u=4e^{2y} \Rightarrow \frac{du}{dy}=8e^{2y} \Rightarrow \frac{du}{4}=2e^{2y}dy$ $$S=2\pi \int \frac{1}{4} \sqrt{1+u^2} du$$ I think a hyperbolic trig substitution will work here, e.g. $u=\sinh t$ (or, if you want, $u=\sec t$). Thank you very much. I got this far, but tried to use normal trig substitution. It goes without saying that it didn't really work for me. rock.freak667 (Homework Helper): $$S=2\pi \int \frac{1}{4} \sqrt{1+u^2} du$$ Let $u=\sec t \Rightarrow \frac{du}{dt}=\sec t \tan t \Rightarrow du=\sec t \tan t \, dt$ $$\frac{\pi}{2}\int \sqrt{1+\sec^2t}\,\sec t \tan t \, dt \equiv \frac{\pi}{2}\int \sqrt{\tan^2t}\,\sec t \tan t \, dt$$ $$\frac{\pi}{2}\int \sec t \tan^2t \, dt \equiv \frac{\pi}{2}\int \sec t(\sec^2t-1) \, dt$$ Long and tedious, but it should work.
You could just use trig substitution: where something of the form sqrt(A^2+X^2) appears, make the substitution u=tan(theta), so you can plug u into sqrt(1+u^2) and end up with sec(theta), with du=sec^2(theta) d(theta); then for 2pi*integral(0.25*sqrt(1+u^2))du you would get pi/2*integral(sec(theta)*sec^2(theta))d(theta) rock.freak667 (Homework Helper): Ahhh yes, my mistake... sec t would be the wrong trig fn... tan t is much better... My mistake... Though I prefer the hyperbolic ones to the trig ones. Find the surface area by rotating the curve x=(1/3)y^(3/2) - y^(1/2) about the y-axis between 1 and 3. Not making much progress with this question; any help would be appreciated. rock.freak667 (Homework Helper): $$S= \int_{y_1} ^{y_2} 2 \pi x \, ds \quad \text{where} \quad ds=\sqrt{1+ \left( \frac{dx}{dy} \right) ^2} \, dy$$
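As a numerical cross-check of the first problem (a sketch, not from the thread): composite Simpson's rule on S = 2π∫₀⁴ 2e^{2y}√(1+(4e^{2y})²) dy can be compared with the antiderivative of (1/4)√(1+u²) that the u = 4e^{2y} substitution leads to:

```python
import math

def integrand(y):
    # Surface of revolution about the y-axis: x * sqrt(1 + (dx/dy)^2)
    x = 2 * math.exp(2 * y)
    dxdy = 4 * math.exp(2 * y)
    return x * math.sqrt(1 + dxdy * dxdy)

def simpson(f, a, b, n=20000):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

S = 2 * math.pi * simpson(integrand, 0, 4)

# Closed form: (pi/2) * [(u*sqrt(1+u^2) + asinh(u)) / 2] from u=4 to u=4e^8
F = lambda u: (u * math.sqrt(1 + u * u) + math.asinh(u)) / 2
exact = (math.pi / 2) * (F(4 * math.exp(8)) - F(4))
print(S, exact)
```

The two values agree to many digits, which confirms the setup of the integral even before the trig or hyperbolic substitution is carried out by hand.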
{}
### Hepic_Antony_Skarlatos's blog By Hepic_Antony_Skarlatos, 7 years ago, The problem I am trying to solve is: http://codeforces.com/contest/505/problem/B My code is: http://pastebin.com/Av4Ry8nS On the third testcase, I receive a "runtime error". Is this error due to my code, or due to the C# language? Because in the past I had a similar error with C#, but not with C++. Thank you for your time! • +5 » 7 years ago, # |   0 visited = new bool[N+1,M+1]; for (int i = 0; i <= N; ++i) for (int j = 0; j <= N; ++j) visited[i,j] = false; This code has a bug) And I think it has more) » 7 years ago, # |   0 40. visited = new bool[N+1,M+1]; 46. for (int j = 0; j <= N; ++j) // should be j <= M 47. visited[i,j] = false; // RE if N > M » 7 years ago, # |   +5 Thank you both for taking the time to check my source. That was the error.
{}
## GeoMesa SimpleFeatureType In GeoTools, a SimpleFeatureType defines the schema for your data. It is similar to defining a SQL database table, as it consists of strongly-typed, ordered, named attributes (columns). Likewise, in GeoMesa a SimpleFeatureType defines the names and types of the attributes in a schema. There are some predefined SimpleFeatureTypes that come with the GeoMesa tools. We can use a specification string or a TypeSafe configuration to define a SimpleFeatureType. A SimpleFeatureType definition consists of an attributes array and an optional user-data section. attributes is an array of column definitions, each of which must include a name and a type. The user-data element consists of key-value pairs that will be set in the user data for the SimpleFeatureType. For example:

twitter = {
  fields = [
    { name = user_id,               type = String, index = true }
    { name = user_name,             type = String }
    { name = text,                  type = String }
    { name = in_reply_to_user_id,   type = String }
    { name = in_reply_to_status_id, type = String }
    { name = hashtags,              type = String }
    { name = media,                 type = String }
    { name = symbols,               type = String }
    { name = urls,                  type = String }
    { name = user_mentions,         type = String }
    { name = retweet_count,         type = Int }
    { name = lang,                  type = String }
    { name = place_name,            type = String }
    { name = place_type,            type = String }
    { name = place_country_code,    type = String }
    { name = place_full_name,       type = String }
    { name = place_id,              type = String }
    { name = dtg,                   type = Date }
    { name = geom,                  type = Point, srid = 4326 }
  ]
  user-data = {
    geomesa.table.sharing = "false"
  }
}

Check out the next post about the GeoMesa converter, which will discuss how to transform a source file into a SimpleFeatureType in a GeoMesa datastore.
{}
# [texhax] reading latex version number at compile time Daniel Greenhoe dgreenhoe at gmail.com Sat Nov 14 01:02:20 CET 2015 Is there any way to read the LaTeX and/or dvipdfmx version numbers at compile time and pass the string(s) to the metadata of the final pdf file? For example, if I use the hyperref package with this command in the preamble \hypersetup{% pdfcreator={string1}, pdfproducer={string2}, } can string1 and string2 somehow be set at compile time to contain the version number(s)?
{}
# The vertical component of the Earth's magnetic field at a place is $\frac{1}{\sqrt 3}$ times the horizontal component. What is the value of the angle of dip at this place? $\begin{array}{1 1} 30^{\circ} \\ 60^{\circ} \\ 90^{\circ} \\ 0^{\circ} \end{array}$ $B_v = \frac{1}{\sqrt 3} B_H$ $\tan \delta = \frac{B_v}{B_H} = \frac{1}{\sqrt 3}$ $\delta = 30^{\circ}$ Answer: $30^{\circ}$
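A one-line numeric check of the same computation (illustrative):

```python
import math

# tan(delta) = B_v / B_H = 1/sqrt(3), so delta = arctan(1/sqrt(3)).
delta = math.degrees(math.atan(1 / math.sqrt(3)))
print(delta)  # 30 degrees, up to floating-point rounding
```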
{}
# Elementary algebra Elementary algebra is the most basic form of algebra taught to students who are presumed to have no knowledge of mathematics beyond the basic principles of arithmetic. While in arithmetic only numbers and their arithmetical operations (such as +, −, ×, ÷) occur, in algebra one also uses symbols (such as a, x, y) to denote numbers. This is useful because: • It allows the general formulation of arithmetical laws (such as a + b = b + a for all a and b), and thus is the first step to a systematic exploration of the properties of the real number system • It allows reference to "unknown" numbers, the formulation of equations and the study of how to solve these (for instance "find a number x such that 3x + 2 = 10") • It allows the formulation of functional relationships (such as "if you sell x tickets, then your profit will be 3x - 10 dollars") These three are the main strands of elementary algebra, which should be distinguished from abstract algebra, a much more advanced topic generally taught to college seniors. In algebra, an "expression" may contain numbers, variables and arithmetical operations; examples are $a + 3$ and $x^2 - 3$. An "equation" is the claim that two expressions are equal. Some equations are true for all values of the involved variables (such as a + (b + c) = (a + b) + c); these are also known as "identities". Other equations contain symbols for unknown values and we are then interested in finding those values for which the equation becomes true: $x^2 - 1 = 4$. These are the "solutions" of the equation. As in arithmetic, in algebra it is important to know precisely how mathematical expressions are to be interpreted. This is governed by the order of operations rules. It is then necessary to be able to simplify algebraic expressions. For example, the expression $-4(2a + 3) - a \,$ can be written in the equivalent form $-9a - 12 \,$.
The simplest equations to solve are the linear ones, such as $2x + 3 = 10 \,$ The central technique is to add, subtract, multiply, or divide both sides of the equation by the same number, and by repeating this process eventually arrive at the value of the unknown x. For the above example, if we subtract 3 from both sides, we obtain $2x = 7 \,$ and if we then divide both sides by 2, we get our solution $x = \frac{7}{2}$ Equations like $x^{2} + 3x = 5 \,$ are known as quadratic equations and can be solved using the quadratic formula. Expressions or statements may contain many variables, from which you may or may not be able to deduce the values of some of the variables. For example: $(x - 1)^{2} = 0 \cdot y \,$ After some algebraic steps (not covered here), we can deduce that x = 1; however, we cannot deduce what the value of y is. Try some values of x and y (which may lead to either true or false statements) to get a feel for this. However, if we had another equation where the values of x and y were the same, we could deduce the answer by solving a system of equations. For example (assume x and y have the same values in both equations): $4x + 2y = 14 \,$ $2x - y = 1 \,$ Now, multiply the second by 2, and you have the following equations: $4x + 2y = 14 \,$ $4x - 2y = 2 \,$ Because we multiplied the entire equation by 2, it still represents the same statement. Now we can combine the two equations: $8x = 16 \,$ You can see that since we multiplied the second equation by 2, y cancels out when combining the equations, and then we can solve for x, which is 2. Note that you can multiply by negative numbers, or multiply both equations, to reach a point where a variable cancels out (you can also arrange to cancel out x instead). Now choose one of the equations from the beginning. $4x + 2y = 14 \,$ Substitute 2 for x. $4(2) + 2y = 14 \,$ Simplify. $8 + 2y = 14 \,$ $2y = 6 \,$ And solve for y, which equals 3. The answer to this problem is x = 2 and y = 3, or (2,3).
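The elimination just performed can be verified with a short script (a sketch using exact rationals):

```python
from fractions import Fraction

# The steps above: multiply 2x - y = 1 by 2, then add it to 4x + 2y = 14.
a1, b1, c1 = 4, 2, 14     # 4x + 2y = 14
a2, b2, c2 = 2, -1, 1     # 2x -  y = 1

a2, b2, c2 = 2 * a2, 2 * b2, 2 * c2   # 4x - 2y = 2
x = Fraction(c1 + c2, a1 + a2)        # y cancels: 8x = 16, so x = 2
y = (c1 - a1 * x) / Fraction(b1)      # back-substitute: y = 3
print(x, y)  # 2 3
```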
## Laws of elementary algebra • Subtraction is the reverse of addition: $a - b = a + (-b) \,$ Example: if 5 + x = 3 then x = - 2. • Multiplication is a commutative operation. • Division is the reverse of multiplication. • To divide is the same as to multiply by a reciprocal: ${a \over b} = a \left( {1 \over b} \right)$ • Exponentiation is not a commutative operation. • Therefore exponentiation has a pair of reverse operations: logarithm and exponentiation with reciprocal exponents (e.g. square roots). • Examples: if $3^x = 10$ then $x = \log_3 10$. If $x^2 = 10$ then $x = 10^{1/2}$. • The square root of negative one is $i$. • Distributive property of multiplication with respect to addition: $c(a + b) = ca + cb$. • Distributive property of exponentiation with respect to multiplication: $(ab)^c = a^c b^c$. • How to combine exponents: $a^b a^c = a^{b + c}$. • If a = b and b = c, then a = c (Transitivity of Equality). • a = a (Reflexivity of Equality). • If a = b then b = a (Symmetry of Equality). • If a = b and c = d then a + c = b + d. • If a = b then a + c = b + c for any c, due to Reflexivity of Equality. • If a = b and c = d then ac = bd. • If a = b then ac = bc for any c due to Reflexivity of Equality. • If two symbols are equal, then one can be substituted for the other at will. • If a > b and b > c then a > c (Transitivity of Inequality). • If a > b then a + c > b + c for any c. • If a > b and c > 0 then ac > bc. • If a > b and c < 0 then ac < bc.
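A few of these laws can be spot-checked numerically (illustrative only; the floating-point comparisons use a tolerance):

```python
import math

a, b, c = 2.0, 3.0, 4.0

assert math.isclose(c * (a + b), c * a + c * b)        # c(a+b) = ca + cb
assert math.isclose((a * b) ** c, a ** c * b ** c)     # (ab)^c = a^c b^c
assert math.isclose(a ** b * a ** c, a ** (b + c))     # a^b a^c = a^(b+c)

# The logarithm undoes exponentiation: if 3^x = 10 then x = log_3(10).
x = math.log(10, 3)
assert math.isclose(3 ** x, 10)
print("all identities hold")
```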
{}
Measurement of differential cross sections and charge ratios for $t$-channel single top quark production in proton-proton collisions at $\sqrt{s} =$ 13 TeV The CMS collaboration, Eur.Phys.J. C80 (2020) 370, 2020. Abstract (data abstract): A measurement is presented of differential cross sections for $t$-channel single top quark and antiquark production in proton-proton collisions at a centre-of-mass energy of $13~\rm{TeV}$ by the CMS experiment at the LHC. From a data set corresponding to an integrated luminosity of $35.9~\rm{fb}^{-1}$, events containing one muon or electron and two or three jets are analysed. The cross section is measured as a function of the top quark transverse momentum ($p_{\rm{T}}$), rapidity, and polarisation angle, the charged lepton $p_{\rm{T}}$ and rapidity, and the $p_{\rm{T}}$ of the W boson from the top quark decay. In addition, the charge ratio is measured differentially as a function of the top quark, charged lepton, and W boson kinematic observables. The results are found to be in agreement with standard model predictions using various next-to-leading-order event generators and sets of parton distribution functions. Additionally, the spin asymmetry, sensitive to the top quark polarisation, is determined from the differential distribution of the polarisation angle at parton level to be $0.440 \pm 0.070$, in agreement with the standard model prediction.
• #### Table 1 Data from Figure 7, upper row, left column (page 18 of preprint) 10.17182/hepdata.93068.v1/t1 Differential absolute cross section as a function of the parton-level top quark $p_\textrm{T}$ • #### Table 2 Data from additional material on analysis webpage: http://cms-results.web.cern.ch/cms-results/public-results/publications/TOP-17-023/index.html#Figure-aux_001-a 10.17182/hepdata.93068.v1/t2 Covariance of the differential absolute cross section as a function of the parton-level top quark $p_\textrm{T}$ • #### Table 3 Data from Figure 7, upper row, right column (page 18 of preprint) 10.17182/hepdata.93068.v1/t3 Differential absolute cross section as a function of the parton-level top quark rapidity • #### Table 4 Data from additional material on analysis webpage: http://cms-results.web.cern.ch/cms-results/public-results/publications/TOP-17-023/index.html#Figure-aux_001-b 10.17182/hepdata.93068.v1/t4 Covariance of the differential absolute cross section as a function of the parton-level top quark rapidity • #### Table 5 Data from Figure 7, middle row, left column (page 18 of preprint) 10.17182/hepdata.93068.v1/t5 Differential absolute cross section as a function of the parton-level charged lepton $p_\textrm{T}$ • #### Table 6 Data from additional material on analysis webpage: http://cms-results.web.cern.ch/cms-results/public-results/publications/TOP-17-023/index.html#Figure-aux_001-c 10.17182/hepdata.93068.v1/t6 Covariance of the differential absolute cross section as a function of the parton-level charged lepton $p_\textrm{T}$ • #### Table 7 Data from Figure 7, middle row, right column (page 18 of preprint) 10.17182/hepdata.93068.v1/t7 Differential absolute cross section as a function of the parton-level charged lepton rapidity • #### Table 8 Data from additional material on analysis webpage: http://cms-results.web.cern.ch/cms-results/public-results/publications/TOP-17-023/index.html#Figure-aux_001-d 10.17182/hepdata.93068.v1/t8 Covariance of the 
differential absolute cross section as a function of the parton-level charged lepton rapidity • #### Table 9 Data from Figure 7, lower row, left column (page 18 of preprint) 10.17182/hepdata.93068.v1/t9 Differential absolute cross section as a function of the parton-level W boson $p_\textrm{T}$ • #### Table 10 Data from additional material on analysis webpage: http://cms-results.web.cern.ch/cms-results/public-results/publications/TOP-17-023/index.html#Figure-aux_001-e 10.17182/hepdata.93068.v1/t10 Covariance of the differential absolute cross section as a function of the parton-level W boson $p_\textrm{T}$ • #### Table 11 Data from Figure 7, lower row, right column (page 18 of preprint) 10.17182/hepdata.93068.v1/t11 Differential absolute cross section as a function of the parton-level cosine of the top quark polarisation angle • #### Table 12 Data from additional material on analysis webpage: http://cms-results.web.cern.ch/cms-results/public-results/publications/TOP-17-023/index.html#Figure-aux_001-f 10.17182/hepdata.93068.v1/t12 Covariance of the differential absolute cross section as a function of the parton-level cosine of the top quark polarisation angle • #### Table 13 Data from Figure 8, upper row, left column (page 19 of preprint) 10.17182/hepdata.93068.v1/t13 Differential absolute cross section as a function of the particle-level top quark $p_\textrm{T}$ • #### Table 14 Data from additional material on analysis webpage: http://cms-results.web.cern.ch/cms-results/public-results/publications/TOP-17-023/index.html#Figure-aux_002-a 10.17182/hepdata.93068.v1/t14 Covariance of the differential absolute cross section as a function of the particle-level top quark $p_\textrm{T}$ • #### Table 15 Data from Figure 8, upper row, right column (page 19 of preprint) 10.17182/hepdata.93068.v1/t15 Differential absolute cross section as a function of the particle-level top quark rapidity • #### Table 16 Data from additional material on analysis webpage: 
http://cms-results.web.cern.ch/cms-results/public-results/publications/TOP-17-023/index.html#Figure-aux_002-b 10.17182/hepdata.93068.v1/t16 Covariance of the differential absolute cross section as a function of the particle-level top quark rapidity • #### Table 17 Data from Figure 8, middle row, left column (page 19 of preprint) 10.17182/hepdata.93068.v1/t17 Differential absolute cross section as a function of the particle-level charged lepton $p_\textrm{T}$ • #### Table 18 Data from additional material on analysis webpage: http://cms-results.web.cern.ch/cms-results/public-results/publications/TOP-17-023/index.html#Figure-aux_002-c 10.17182/hepdata.93068.v1/t18 Covariance of the differential absolute cross section as a function of the particle-level charged lepton $p_\textrm{T}$ • #### Table 19 Data from Figure 8, middle row, right column (page 19 of preprint) 10.17182/hepdata.93068.v1/t19 Differential absolute cross section as a function of the particle-level charged lepton rapidity • #### Table 20 Data from additional material on analysis webpage: http://cms-results.web.cern.ch/cms-results/public-results/publications/TOP-17-023/index.html#Figure-aux_002-d 10.17182/hepdata.93068.v1/t20 Covariance of the differential absolute cross section as a function of the particle-level charged lepton rapidity • #### Table 21 Data from Figure 8, lower row, left column (page 19 of preprint) 10.17182/hepdata.93068.v1/t21 Differential absolute cross section as a function of the particle-level W boson $p_\textrm{T}$ • #### Table 22 Data from additional material on analysis webpage: http://cms-results.web.cern.ch/cms-results/public-results/publications/TOP-17-023/index.html#Figure-aux_002-e 10.17182/hepdata.93068.v1/t22 Covariance of the differential absolute cross section as a function of the particle-level W boson $p_\textrm{T}$ • #### Table 23 Data from Figure 8, lower row, right column (page 19 of preprint) 10.17182/hepdata.93068.v1/t23 Differential absolute cross section as a 
function of the particle-level cosine of the top quark polarisation angle • #### Table 24 Data from additional material on analysis webpage: http://cms-results.web.cern.ch/cms-results/public-results/publications/TOP-17-023/index.html#Figure-aux_002-f 10.17182/hepdata.93068.v1/t24 Covariance of the differential absolute cross section as a function of the particle-level cosine of the top quark polarisation angle • #### Table 25 Data from Figure 9, upper row, left column (page 20 of preprint) 10.17182/hepdata.93068.v1/t25 Differential normalised cross section as a function of the parton-level top quark $p_\textrm{T}$ • #### Table 26 Data from additional material on analysis webpage: http://cms-results.web.cern.ch/cms-results/public-results/publications/TOP-17-023/index.html#Figure-aux_003-a 10.17182/hepdata.93068.v1/t26 Covariance of the differential normalised cross section as a function of the parton-level top quark $p_\textrm{T}$ • #### Table 27 Data from Figure 9, upper row, right column (page 20 of preprint) 10.17182/hepdata.93068.v1/t27 Differential normalised cross section as a function of the parton-level top quark rapidity • #### Table 28 Data from additional material on analysis webpage: http://cms-results.web.cern.ch/cms-results/public-results/publications/TOP-17-023/index.html#Figure-aux_003-b 10.17182/hepdata.93068.v1/t28 Covariance of the differential normalised cross section as a function of the parton-level top quark rapidity • #### Table 29 Data from Figure 9, middle row, left column (page 20 of preprint) 10.17182/hepdata.93068.v1/t29 Differential normalised cross section as a function of the parton-level charged lepton $p_\textrm{T}$ • #### Table 30 Data from additional material on analysis webpage: http://cms-results.web.cern.ch/cms-results/public-results/publications/TOP-17-023/index.html#Figure-aux_003-c 10.17182/hepdata.93068.v1/t30 Covariance of the differential normalised cross section as a function of the parton-level charged lepton 
$p_\textrm{T}$ • #### Table 31 Data from Figure 9, middle row, right column (page 20 of preprint) 10.17182/hepdata.93068.v1/t31 Differential normalised cross section as a function of the parton-level charged lepton rapidity • #### Table 32 Data from additional material on analysis webpage: http://cms-results.web.cern.ch/cms-results/public-results/publications/TOP-17-023/index.html#Figure-aux_003-d 10.17182/hepdata.93068.v1/t32 Covariance of the differential normalised cross section as a function of the parton-level charged lepton rapidity • #### Table 33 Data from Figure 9, lower row, left column (page 20 of preprint) 10.17182/hepdata.93068.v1/t33 Differential normalised cross section as a function of the parton-level W boson $p_\textrm{T}$ • #### Table 34 Data from additional material on analysis webpage: http://cms-results.web.cern.ch/cms-results/public-results/publications/TOP-17-023/index.html#Figure-aux_003-e 10.17182/hepdata.93068.v1/t34 Covariance of the differential normalised cross section as a function of the parton-level W boson $p_\textrm{T}$ • #### Table 35 Data from Figure 9, lower row, right column (page 20 of preprint) 10.17182/hepdata.93068.v1/t35 Differential normalised cross section as a function of the parton-level cosine of the top quark polarisation angle • #### Table 36 Data from additional material on analysis webpage: http://cms-results.web.cern.ch/cms-results/public-results/publications/TOP-17-023/index.html#Figure-aux_003-f 10.17182/hepdata.93068.v1/t36 Covariance of the differential normalised cross section as a function of the parton-level cosine of the top quark polarisation angle • #### Table 37 Data from Figure 10, upper row, left column (page 21 of preprint) 10.17182/hepdata.93068.v1/t37 Differential normalised cross section as a function of the particle-level top quark $p_\textrm{T}$ • #### Table 38 Data from additional material on analysis webpage: 
http://cms-results.web.cern.ch/cms-results/public-results/publications/TOP-17-023/index.html#Figure-aux_004-a 10.17182/hepdata.93068.v1/t38 Covariance of the differential normalised cross section as a function of the particle-level top quark $p_\textrm{T}$ • #### Table 39 Data from Figure 10, upper row, right column (page 21 of preprint) 10.17182/hepdata.93068.v1/t39 Differential normalised cross section as a function of the particle-level top quark rapidity • #### Table 40 Data from additional material on analysis webpage: http://cms-results.web.cern.ch/cms-results/public-results/publications/TOP-17-023/index.html#Figure-aux_004-b 10.17182/hepdata.93068.v1/t40 Covariance of the differential normalised cross section as a function of the particle-level top quark rapidity • #### Table 41 Data from Figure 10, middle row, left column (page 21 of preprint) 10.17182/hepdata.93068.v1/t41 Differential normalised cross section as a function of the particle-level charged lepton $p_\textrm{T}$ • #### Table 42 Data from additional material on analysis webpage: http://cms-results.web.cern.ch/cms-results/public-results/publications/TOP-17-023/index.html#Figure-aux_004-c 10.17182/hepdata.93068.v1/t42 Covariance of the differential normalised cross section as a function of the particle-level charged lepton $p_\textrm{T}$ • #### Table 43 Data from Figure 10, middle row, right column (page 21 of preprint) 10.17182/hepdata.93068.v1/t43 Differential normalised cross section as a function of the particle-level charged lepton rapidity • #### Table 44 Data from additional material on analysis webpage: http://cms-results.web.cern.ch/cms-results/public-results/publications/TOP-17-023/index.html#Figure-aux_004-d 10.17182/hepdata.93068.v1/t44 Covariance of the differential normalised cross section as a function of the particle-level charged lepton rapidity • #### Table 45 Data from Figure 10, lower row, left column (page 21 of preprint) 10.17182/hepdata.93068.v1/t45 Differential normalised 
cross section as a function of the particle-level W boson $p_\textrm{T}$

#### Table 46
Data from additional material on analysis webpage: http://cms-results.web.cern.ch/cms-results/public-results/publications/TOP-17-023/index.html#Figure-aux_004-e
10.17182/hepdata.93068.v1/t46
Covariance of the differential normalised cross section as a function of the particle-level W boson $p_\textrm{T}$

#### Table 47
Data from Figure 10, lower row, right column (page 21 of preprint)
10.17182/hepdata.93068.v1/t47
Differential normalised cross section as a function of the particle-level cosine of the top quark polarisation angle

#### Table 48
Data from additional material on analysis webpage: http://cms-results.web.cern.ch/cms-results/public-results/publications/TOP-17-023/index.html#Figure-aux_004-f
10.17182/hepdata.93068.v1/t48
Covariance of the differential normalised cross section as a function of the particle-level cosine of the top quark polarisation angle

#### Table 49
Data from Figure 11, upper row, left column (page 22 of preprint)
10.17182/hepdata.93068.v1/t49
Differential charge ratio as a function of the parton-level top quark $p_\textrm{T}$

#### Table 50
Data from additional material on analysis webpage: http://cms-results.web.cern.ch/cms-results/public-results/publications/TOP-17-023/index.html#Figure-aux_005-a
10.17182/hepdata.93068.v1/t50
Covariance of the differential charge ratio as a function of the parton-level top quark $p_\textrm{T}$

#### Table 51
Data from Figure 11, upper row, right column (page 22 of preprint)
10.17182/hepdata.93068.v1/t51
Differential charge ratio as a function of the parton-level top quark rapidity

#### Table 52
Data from additional material on analysis webpage: http://cms-results.web.cern.ch/cms-results/public-results/publications/TOP-17-023/index.html#Figure-aux_005-b
10.17182/hepdata.93068.v1/t52
Covariance of the differential charge ratio as a function of the parton-level top quark rapidity

#### Table 53
Data from Figure 11, middle row, left column (page 22 of preprint)
10.17182/hepdata.93068.v1/t53
Differential charge ratio as a function of the parton-level charged lepton $p_\textrm{T}$

#### Table 54
Data from additional material on analysis webpage: http://cms-results.web.cern.ch/cms-results/public-results/publications/TOP-17-023/index.html#Figure-aux_005-c
10.17182/hepdata.93068.v1/t54
Covariance of the differential charge ratio as a function of the parton-level charged lepton $p_\textrm{T}$

#### Table 55
Data from Figure 11, middle row, right column (page 22 of preprint)
10.17182/hepdata.93068.v1/t55
Differential charge ratio as a function of the parton-level charged lepton rapidity

#### Table 56
Data from additional material on analysis webpage: http://cms-results.web.cern.ch/cms-results/public-results/publications/TOP-17-023/index.html#Figure-aux_005-d
10.17182/hepdata.93068.v1/t56
Covariance of the differential charge ratio as a function of the parton-level charged lepton rapidity

#### Table 57
Data from Figure 11, lower row, left column (page 22 of preprint)
10.17182/hepdata.93068.v1/t57
Differential charge ratio as a function of the parton-level W boson $p_\textrm{T}$

#### Table 58
Data from additional material on analysis webpage: http://cms-results.web.cern.ch/cms-results/public-results/publications/TOP-17-023/index.html#Figure-aux_005-e
10.17182/hepdata.93068.v1/t58
Covariance of the differential charge ratio as a function of the parton-level W boson $p_\textrm{T}$

#### Table 59
Data from Figure 12, upper row, left column (page 23 of preprint)
10.17182/hepdata.93068.v1/t59
Differential charge ratio as a function of the particle-level top quark $p_\textrm{T}$

#### Table 60
Data from additional material on analysis webpage: http://cms-results.web.cern.ch/cms-results/public-results/publications/TOP-17-023/index.html#Figure-aux_006-a
10.17182/hepdata.93068.v1/t60
Covariance of the differential charge ratio as a function of the particle-level top quark $p_\textrm{T}$

#### Table 61
Data from Figure 12, upper row, right column (page 23 of preprint)
10.17182/hepdata.93068.v1/t61
Differential charge ratio as a function of the particle-level top quark rapidity

#### Table 62
Data from additional material on analysis webpage: http://cms-results.web.cern.ch/cms-results/public-results/publications/TOP-17-023/index.html#Figure-aux_006-b
10.17182/hepdata.93068.v1/t62
Covariance of the differential charge ratio as a function of the particle-level top quark rapidity

#### Table 63
Data from Figure 12, middle row, left column (page 23 of preprint)
10.17182/hepdata.93068.v1/t63
Differential charge ratio as a function of the particle-level charged lepton $p_\textrm{T}$

#### Table 64
Data from additional material on analysis webpage: http://cms-results.web.cern.ch/cms-results/public-results/publications/TOP-17-023/index.html#Figure-aux_006-c
10.17182/hepdata.93068.v1/t64
Covariance of the differential charge ratio as a function of the particle-level charged lepton $p_\textrm{T}$

#### Table 65
Data from Figure 12, middle row, right column (page 23 of preprint)
10.17182/hepdata.93068.v1/t65
Differential charge ratio as a function of the particle-level charged lepton rapidity

#### Table 66
Data from additional material on analysis webpage: http://cms-results.web.cern.ch/cms-results/public-results/publications/TOP-17-023/index.html#Figure-aux_006-d
10.17182/hepdata.93068.v1/t66
Covariance of the differential charge ratio as a function of the particle-level charged lepton rapidity

#### Table 67
Data from Figure 12, lower row, left column (page 23 of preprint)
10.17182/hepdata.93068.v1/t67
Differential charge ratio as a function of the particle-level W boson $p_\textrm{T}$

#### Table 68
Data from additional material on analysis webpage: http://cms-results.web.cern.ch/cms-results/public-results/publications/TOP-17-023/index.html#Figure-aux_006-e
10.17182/hepdata.93068.v1/t68
Covariance of the differential charge ratio as a function of the particle-level W boson $p_\textrm{T}$

#### Table 69
Data from Table 2 (page 24 of preprint)
10.17182/hepdata.93068.v1/t69
Top quark spin asymmetry at the parton level in the muon and electron channel and their combination
# Problem of the Day: 1/19/13

A stop sign is cut out of this 2-foot square piece of metal, as shown in the figure below. The side lengths that coincide with the edges of the metal are 1 foot, though they are not labeled in the figure. How much metal is not used?

Solution to yesterday's problem: We are given the initial height, so our equation becomes $h=-16t^2+3600$. To find the time when $h=0$, let's substitute in the variable and solve:

$0=-16t^2+3600$

Moving the variable term to the other side to handily avoid a negative coefficient, we get

$16t^2=3600$

Taking the square root of both sides,

$4t=60$

Algebraically speaking, the root could of course be positive or negative 60; but we can ignore the negative value because negative time has no meaning here. Dividing by 4 gives $t=15$, so we arrive at the solution that the object would hit the ground 15 seconds after dropping.

(Diagrams created with the Isosceles iOS app.)
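Yesterday's solution is easy to sanity-check numerically. This short script (a sketch, not part of the original post) solves $0 = -16t^2 + 3600$ for the positive root:

```python
import math

def fall_time(initial_height_ft, g_coeff=16.0):
    """Positive root of 0 = -g_coeff * t^2 + initial_height_ft."""
    return math.sqrt(initial_height_ft / g_coeff)

t = fall_time(3600)  # initial height from yesterday's problem
print(t)  # 15.0 seconds
```

The same one-liner checks any drop height of the form used in these problems.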
Are W & Z bosons virtual or not?

W and Z bosons have been observed/discovered. But as force-carrying bosons, shouldn't they be virtual particles, and hence unobservable? They are also required to have mass, but if they are virtual they may be off-shell. So are they virtual or not?

• If you were in a system above the electroweak temperature you would be surrounded by a sea of very real W and Z bosons. – user346 Feb 1 '11 at 6:11
• Really? They would become stable? – Vladimir Kalitvianski Feb 1 '11 at 11:20
• They have been observed in particle accelerators, therefore they can certainly be real. – Noldorin Feb 1 '11 at 20:13
• @Noldorin: You might want to be careful in connecting "observability" to "real" in this context. We reconstruct on-shell weak bosons by their decay products, which is also how we reconstruct off-shell weak bosons. – dmckee --- ex-moderator kitten Feb 1 '11 at 22:50
• @dmckee: but there would be an energy level where the decay process would essentially be reversible, right? At that point, seeing the Z would be as probable as seeing the electron/positron pair that it would decay to. – Jerry Schirmer Feb 1 '11 at 23:59

Seems to me there is a confusion between various concepts; let me try to clear it up:

1. A virtual particle is one that doesn't live forever; at some stage it gets converted to something else. As Jeff points out, none of us lives long enough to tell the difference, so the distinction between virtual and non-virtual is a matter of degree. Particles that live for a long time are declared "real", and particles that decay quickly are called "virtual". These are just names; there is no implication that "virtual" particles don't really exist, like white unicorns and other mythical creatures. Those are all real, measurable effects you can see with your own eyes...

2. Any particle can be either real or virtual, whether or not it is massive, and whether it is a bosonic force carrier or fermionic matter.
There is a sense in which massive particles tend to live shorter lives (because they have more opportunities to decay), but this is just a rule of thumb.

3. Off-shell can be taken here to be synonymous with "virtual".

Hope that helps.

• You confuse virtual particles and unstable particles. I just wrote my own answer to the question that explains details. – Arnold Neumaier Mar 7 '12 at 19:08
• This answer is wrong! "Virtual" particles have nothing to do with decaying quickly. There are virtual electrons and virtual photons! – user5800 Mar 8 '12 at 11:09
• He didn't say that it had to decay - it could also annihilate or something. – gn0m0n Jun 28 '14 at 8:53
• This answer is wrong. A "virtual" particle is one which does not obey the Einstein relation $m^2=E^2-p^2$ — that is, it's not on the "mass shell" in momentum space. Such particles may exist only briefly, and only thanks to the uncertainty principle. – rob Feb 10 '15 at 1:28

[Edit June 2, 2016: A significantly updated version of the material below can be found in the two articles https://www.physicsforums.com/insights/misconceptions-virtual-particles/ and https://www.physicsforums.com/insights/physics-virtual-particles/ ]

Let me give a second, more technical answer.

Observable particles. In QFT, observable (hence real) particles of mass $m$ are conventionally defined as being associated with poles of the S-matrix at energy $E=mc^2$ in the rest frame of the system (Peskin/Schroeder, An Introduction to QFT, p.236). If the pole is at a real energy, the mass is real and the particle is stable; if the pole is at a complex energy (in the analytic continuation of the S-matrix to the second sheet), the mass is complex and the particle is unstable.
At energies larger than the real part of the mass, the imaginary part determines its decay rate and hence its lifetime (Peskin/Schroeder, p.237); at smaller energies, the unstable particle cannot form for lack of energy, but the existence of the pole is revealed by a Breit-Wigner resonance in certain cross sections. From its position and width, one can estimate the mass and the lifetime of such a particle before it has ever been observed. Indeed, many particles listed in the tables http://pdg.lbl.gov/2011/reviews/contents_sports.html by the Particle Data Group (PDG) are only resonances.

Stable and unstable particles. A stable particle can be created and annihilated, as there are associated creation and annihilation operators that add particles to or remove particles from the state. According to the QFT formalism, these particles must be on-shell. This means that their momentum $p$ is related to the real rest mass $m$ by the relation $p^2=m^2$. More precisely, it means that the 4-dimensional Fourier transform of the time-dependent single-particle wave function associated with it has a support that satisfies the on-shell relation $p^2=m^2$. There is no need for this wave function to be a plane wave, though plane waves are taken as the basis functions between which the scattering matrix elements are computed. An unstable particle is represented quantitatively by a so-called Gamow state (see, e.g., http://arxiv.org/pdf/quant-ph/0201091.pdf), also called a Siegert state (see, e.g., http://www.cchem.berkeley.edu/millergrp/pdf/235.pdf) in a complex deformation of the Hilbert space of a QFT, obtained by analytic continuation of the formulas for stable particles. In this case, as $m$ is complex, the mass shell consists of all complex momentum vectors $p$ with $p^2=m^2$ and $v=p/m$ real, and states are composed exclusively of such momentum vectors.
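The quantitative link stated above between a resonance's width and its lifetime can be illustrated numerically. A minimal sketch, where the Z boson's total width is an assumed PDG-style value, not a number taken from this text:

```python
# For a pole at complex mass m = M - i*Gamma/2, the decay rate is
# Gamma/hbar, so the mean lifetime is tau = hbar / Gamma.
HBAR_GEV_S = 6.582e-25  # reduced Planck constant in GeV*s

def lifetime_from_width(gamma_gev):
    """Mean lifetime in seconds of a resonance with total width gamma_gev (GeV)."""
    return HBAR_GEV_S / gamma_gev

Z_WIDTH_GEV = 2.495  # assumed PDG-style value for the Z boson
print(lifetime_from_width(Z_WIDTH_GEV))  # roughly 2.6e-25 s
```

A width of a few GeV thus corresponds to a lifetime far too short for the particle to leave a track, which is why the Z is reconstructed from its decay products.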
This is the representation in which one can take the limit of zero decay, in which the particle becomes stable (such as the neutron in the limit of negligible electromagnetic interaction), and hence the representation appropriate in the regime where the unstable particle can be observed (i.e., resolved in time). A second representation in terms of normalizable states of real mass is given by a superposition of scattering states of their decay products, involving all energies in the range of the Breit-Wigner resonance. In this standard Hilbert space representation, the unstable particle is never formed; so this is the representation appropriate in the regime where the unstable particle reveals itself only as a resonance. The 2010 PDG description of the Z boson, http://pdg.lbl.gov/2011/reviews/rpp2011-rev-z-boson.pdf discusses both descriptions in quantitative detail (p.2: Breit-Wigner approach; p.4: S-matrix approach). (added March 18, 2012): All observable particles are on-shell, though the mass shell is real only for stable particles.

Virtual (or off-shell) particles. On the other hand, virtual particles are defined as internal lines in a Feynman diagram (Peskin/Schroeder, p.5, or Zeidler, QFT I: Basics in Mathematics and Physics, p.844), and this is their only mode of being. In diagram-free approaches to QFT such as lattice gauge theory, it is even impossible to make sense of the notion of a virtual particle. Even in orthodox QFT one can dispense completely with the notion of a virtual particle, as Vol. 1 of the QFT book of Weinberg demonstrates. He represents the full empirical content of QFT, carefully avoiding any mention of virtual particles. As virtual particles have real mass but off-shell momenta, and multiparticle states are always composed of on-shell particles only, it is impossible to represent a virtual particle by means of states.
States involving virtual particles cannot be created for lack of corresponding creation operators in the theory. A description of decay requires an associated S-matrix, but the in- and out-states of the S-matrix formalism are composed of on-shell states only, not involving any virtual particle. (Indeed, this is the reason for the name ''virtual''.) For lack of a state, virtual particles cannot have any of the usual physical characteristics such as dynamics, detection probabilities, or decay channels. How then can one talk about their probability of decay, their lifetime, their creation, or their decay? One cannot, except figuratively!

Virtual states. (added on March 19, 2012): In nonrelativistic scattering theory, one also meets the concept of virtual states, denoting states of real particles on the second sheet of the analytic continuation, having a well-defined but purely imaginary energy, defined as a pole of the S-matrix. See, e.g., Thirring, A Course in Mathematical Physics, Vol. 3, (3.6.11). The term virtual state is used with a different meaning in virtual state spectroscopy (see, e.g., http://people.bu.edu/teich/pdfs/PRL-80-3483-1998.pdf), and denotes there an unstable energy level above the dissociation threshold. This is equivalent to the concept of a resonance. Virtual states have nothing to do with virtual particles, which have real energies but no associated states, though sometimes the name ''virtual state'' is attached to them. See, e.g., https://researchspace.auckland.ac.nz/bitstream/handle/2292/433/02whole.pdf; the author of this thesis explains on p.20 why this is a misleading terminology, but still occasionally uses this terminology in his work.

Why are virtual particles often confused with unstable particles? As we have seen, unstable particles and resonances are observable and can be characterized quantitatively in terms of states. On the other hand, virtual particles lack a state and hence have no meaningful physical properties.
This raises the question why virtual particles are often confused with unstable particles, or even identified with them. The reason, I believe, is that in many cases, the dominant contribution to a scattering cross section exhibiting a resonance comes from the exchange of a corresponding virtual particle in a Feynman diagram suggestive of a collection of world lines describing particle creation and annihilation. (Examples can be seen on the Wikipedia page for W and Z bosons, http://en.wikipedia.org/wiki/Z-boson.) This space-time interpretation of Feynman diagrams is very tempting graphically, and contributes to the popularity of Feynman diagrams among researchers and especially among laypeople, though some authors - notably Weinberg in his QFT book - deliberately resist this temptation. However, this interpretation has no physical basis. Indeed, a single Feynman diagram usually gives an infinite (and hence physically meaningless) contribution to the scattering cross section. The finite, renormalized values of the cross section are obtained only by summing infinitely many such diagrams. This shows that a Feynman diagram represents just some term in a perturbation calculation, and not a process happening in space-time. Therefore one cannot assign physical meaning to a single diagram but at best to a collection of infinitely many diagrams.

The true meaning of virtual particles. For anyone still tempted to associate a physical meaning to virtual particles as a specific quantum phenomenon, let me note that Feynman-type diagrams arise in any perturbative treatment of statistical multiparticle properties, even classically, as any textbook of statistical mechanics witnesses. More specifically, the paper http://homepages.physik.uni-muenchen.de/~helling/classical_fields.pdf shows that the perturbation theory for any classical field theory leads to an expansion into Feynman diagrams very similar to those for quantum field theories, except that only tree diagrams occur.
If the picture of virtual particles derived from Feynman diagrams had any intrinsic validity, one should conclude that associated with every classical field there are classical virtual particles behaving just like their quantum analogues, except that (due to the lack of loop diagrams) there are no virtual creation/annihilation patterns. But in the literature, one can find not the slightest trace of a suggestion that classical field theory is sensibly interpreted in terms of virtual particles. The reason for this similarity in the classical and the quantum case is that Feynman diagrams are nothing other than a graphical notation for writing down products of tensors with many indices summed via the Einstein summation convention. The indices of the results are the external lines aka ''real particles'', while the indices summed over are the internal lines aka ''virtual particles''. As such sums of products occur in any multiparticle expansion of expectations, they arise irrespective of the classical or quantum nature of the system.

Interpreting Feynman diagrams. Informally, especially in the popular literature, virtual particles are viewed as transmitting the fundamental forces in quantum field theory. The weak force is transmitted by virtual Zs and Ws. The strong force is transmitted by virtual gluons. The electromagnetic force is transmitted by virtual photons. This ''proves'' the existence of virtual particles in the eyes of their aficionados. The physics underlying this figurative speech are Feynman diagrams, primarily the simplest tree diagrams that encode the low-order perturbative contributions of interactions to the classical limit of scattering experiments. (Thus they are really a manifestation of classical perturbative field theory, not of quantum fields. Quantum corrections involve at least one loop.)
Feynman diagrams describe how the terms in a series expansion of the S-matrix elements arise in a perturbative treatment of the interactions as linear combinations of multiple integrals. Each such multiple integral is a product of vertex contributions and propagators, and each propagator depends on a 4-momentum vector that is integrated over. In addition, there is a dependence on the momenta of the ingoing (prepared) and outgoing (in principle detectable) particles. The structure of each such integral can be represented by a Feynman diagram. This is done by associating with each vertex a node of the diagram and with each momentum a line; for ingoing momenta an external line ending in a node, for outgoing momenta an external line starting in a node, and for propagator momenta an internal line between two nodes. The resulting diagrams can be given a very vivid but superficial interpretation as the worldlines of particles that undergo a metamorphosis (creation, deflection, or decay) at the vertices. In this interpretation, the in- and outgoing lines are the worldlines of the prepared and detected particles, respectively, and the others are dubbed virtual particles, not being real but required by this interpretation. This interpretation is related to - and indeed historically originated with - Feynman's 1945 intuition that all particles take all possible paths with a probability amplitude given by the path integral density. Unfortunately, such a view is naturally related only to the formal, unrenormalized path integral. But there all contributions of diagrams containing loops are infinite, defying a probability interpretation. According to the definition in terms of Feynman diagrams, a virtual particle has specific values of 4-momentum, spin, and charges, characterizing the form and variables in its defining propagator. As the 4-momentum is integrated over all of $R^4$, there is no mass shell constraint, hence virtual particles are off-shell.
Beyond this, formal quantum field theory is unable to assign any property or probability to a virtual particle. This would require assigning states to them, for which there is no place in the QFT formalism. However, the interpretation requires them to exist in space and time, hence they are endowed by imagination with all sorts of miraculous properties that complete the picture to something plausible. (See, for example, the Wikipedia article on virtual particles.) Being dressed with a fuzzy notion of quantum fluctuations, where the Heisenberg uncertainty relation allegedly allows one to borrow energy from the quantum bank for a very short time, these properties have a superficial appearance of being scientific. But they are completely unphysical, as there is neither a way to test them experimentally nor one to derive them from formal properties of virtual particles. The long list of manifestations of virtual particles mentioned in the Wikipedia article cited are in fact manifestations of computed scattering matrix elements. They manifest the correctness of the formulas for the multiple integrals associated with Feynman diagrams, but not the validity of the claims about virtual particles. Though QFT computations generally use the momentum representation, there is also a (physically useless) Fourier-transformed complementary picture of Feynman diagrams using space-time positions in place of 4-momenta. In this version, the integration is over all of space-time, so virtual particles now have space-time positions but no dynamics, hence no world lines. (In physics, dynamics is always tied to states and an equation of motion. No such thing exists for virtual particles.)

Can one distinguish real and virtual photons? There is a widespread view that external legs of Feynman diagrams are in reality just internal legs of larger diagrams. This would blur the distinction between real and virtual particles, as in reality, every leg is internal.
The basic argument behind this view is the fact that the photons that hit an eye (and thus give evidence of something real) were produced by excitation from some distant object. This view is consistent with regarding the creation or destruction of photons as what happens at a vertex containing a photon line. In this view, it follows that the universe is a gigantic Feynman diagram with many loops, of which we and our experiments are just a tiny part. But single Feynman diagrams don't have a technical meaning. Only the sum of all Feynman diagrams has predictive value, and the small ones contribute most - otherwise we couldn't do any perturbative calculations. Moreover, this view contradicts the way QFT computations are actually used. Scattering matrix elements are always considered between on-shell particles. Without exception, comparisons of QFT results with scattering experiments are based on these on-shell results. It must necessarily be so, as off-shell matrix elements don't make formal sense: matrix elements are taken between states, and all physical states are on-shell by the basic structure of QFT. Thus the structure of QFT itself enforces a fundamental distinction between real particles representable by states and virtual particles representable by propagators only. The basic problem invalidating the above argument is the assumption that creation and destruction of particles in space and time can be identified with vertices in Feynman diagrams. They cannot. For Feynman diagrams lack any dynamical properties, and their interpretation in space and time is sterile. Thus the view that in reality there are no external lines is based on a superficial, tempting but invalid identification of theoretical concepts with very different properties. The conclusion is that, indeed, real particles (represented by external legs) and virtual particles (represented by internal legs) are completely separate conceptual entities, clearly distinguished by their meaning.
In particular, one never turns into the other, nor does either affect the other.

• I think you are right that this is a question of definitions. I have seen this vocabulary fight now many times. Some people have learned that a virtual particle is by definition an internal line in a Feynman diagram. Others have learned that a virtual particle is by definition an off-mass-shell particle. Rather than fighting about the correct definition, it is more helpful to explain the difference between the two definitions. – Jim Graber Mar 18 '12 at 19:14
• This answer is not good. Virtual particles are not unphysical, and it is wrong to characterize them this way. The perturbation series can be recast in terms of particle paths, and these paths can be thought of as particles going around and colliding, and this is not wrong, despite Weinberg's distaste for it. In condensed matter nonrelativistic field theory, effective particles such as phonons can be virtual, although in this case, the virtual particles obeying the Schrodinger equation are equivalently described by real particles going around on Feynman paths. – Ron Maimon Jul 25 '12 at 3:39
• @RonMaimon: A single Feynman diagram (thus a diagram that could be interpreted as a real path) gives an infinite contribution to the amplitude once it contains a loop. Physical results are only obtained after renormalization cancelling groups of diagrams. In lattice gauge theories, one cannot even talk about diagrams; so how can they have physical meaning? – Arnold Neumaier Jul 25 '12 at 10:11
• Oh yes. But the standard model is about the continuum, whereas the lattice is mathematically nearly trivial, compared to the continuum. Moreover, in a lattice model you don't even have a time direction, so you cannot interpret diagrams in terms of paths in time. Imaginary time is physically devoid of meaning. Moreover, a single finite renormalized term is composed already of multiple Feynman diagrams, so a single diagram means nothing.
– Arnold Neumaier Jul 25 '12 at 17:18
• @Arnold Neumaier, but in scattering theory external legs are taken to be on-shell plane waves, which is an approximation. My understanding is that in reality they are slightly off-shell (and of course not plane waves) due to having a finite lifetime (its measured energy can be off-shell, as long as $\Delta E\Delta t<\hbar$) – user1247 Sep 21 '12 at 16:45

All observed particles are real particles in the sense that, unlike virtual particles, their properties are verifiable by experiment. In particular, W and Z bosons are real but unstable particles at energies above the energy equivalent of their rest mass. They also arise as unobservable virtual particles in scattering processes exchanging a W or Z boson, though the existence of a corresponding exchange diagram is visible experimentally as a resonance. Virtual particles and unstable (i.e., short-lived) particles are conceptually very different entities. Since there seems to be a widespread confusion about the meaning of the terms (and since Wikipedia is quite unreliable in this respect), let me give precise definitions of some terms:

A stable, observable (and hence real in the sense specified above) particle has a real mass $m$ and a real 4-momentum $p$ satisfying $p^2=m^2$; one also says that it is on-shell. For such particles one can compute S-matrix elements, and according to quantum field theory, only for such particles. In perturbative calculations, stable particles correspond precisely to the external lines of the Feynman diagrams on which perturbation theory is based. Only a few elementary particles are stable, and hence can be associated with such external lines. (However, in subtheories of the standard model that ignore some interactions, particles unstable in Nature can be stable; thus the notion is a bit context-dependent.)
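The on-shell condition $p^2 = m^2$ invoked in these definitions is easy to state in code. A minimal sketch in natural units ($c = 1$), with purely illustrative numbers:

```python
import math

def minkowski_sq(E, px, py, pz):
    """p^2 = E^2 - |p|^2 in the (+,-,-,-) metric, natural units."""
    return E**2 - (px**2 + py**2 + pz**2)

def is_on_shell(E, px, py, pz, m, tol=1e-9):
    """True if the 4-momentum satisfies the mass-shell relation p^2 = m^2."""
    return abs(minkowski_sq(E, px, py, pz) - m**2) < tol

# A particle of mass 1 moving with |p| = 3 has E = sqrt(1 + 9) on-shell
E = math.sqrt(1.0 + 9.0)
print(is_on_shell(E, 3.0, 0.0, 0.0, 1.0))    # True: a candidate external line
print(is_on_shell(5.0, 3.0, 0.0, 0.0, 1.0))  # False: off-shell ("virtual")
```

In the formalism described above, only the first kind of 4-momentum can label a physical state; the second appears only inside propagator integrals.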
A virtual particle has real momentum with $p^2\ne m^2$ (one also says that it is off-shell), and cannot exist as it would violate energy conservation. In perturbative calculations, virtual particles correspond precisely to the internal lines of the Feynman diagrams on which perturbation theory is based, and are only a visual mnemonic for integrations over 4-momenta not restricted to the mass shell. In nonperturbative methods for calculating properties of particles, there is no notion of virtual particles; they are an artifact of perturbation theory. Virtual particles are never observable. They have no properties to which one could assign in any formally meaningful way a dynamics, and hence some sort of existence in time. In particular, it is meaningless to think of them as short-lived objects. (Saying they pop in and out of existence for a time allowed by the uncertainty principle has no basis in any dynamical sense - it is pure speculation based on illustrations for the uneducated public, and on a widespread misunderstanding that internal lines in Feynman diagrams describe particle trajectories in space-time.) All elementary particles may appear as internal lines in perturbative calculations, and hence possess a virtual version. For a more thorough discussion of virtual particles, see Chapter A8: Virtual particles and vacuum fluctuations of my theoretical physics FAQ.

An unstable, observable (and hence real in the sense specified above) particle has a complex mass $m$ and a complex 4-momentum $p$ satisfying $p^2=m^2$. (One shouldn't use the term on-shell or off-shell in this case, as it becomes ambiguous.) The imaginary part of the mass is related to the half-life of the particle.
At energies below the energy $E = \mathrm{Re}\,m\,c^2$, unstable elementary particles are observable as resonances in cross sections of scattering processes involving their exchange as a virtual particle, while at higher energies, they are observable as particle tracks (if charged) or as gaps in particle tracks, in the latter case identifiable by the tracks of their charged products. For unstable particles one can compute S-matrix elements only in approximate theories where the particle is treated as stable, or by analytic continuation of the standard formulas for stable particles to complex energies and momenta.

• Your distinction between a "virtual particle" ($p^2 \ne m^2$ for real $m$ and $p$) and an "unstable observable" ($p^2 = m^2$, but $m$ complex) seems to be without a difference. And where does a free neutron fit into this picture? It is unstable, but it is clearly real and has a very precisely measurable mass (and the proton even more so if it is in fact unstable). – dmckee --- ex-moderator kitten Mar 7 '12 at 20:05
• There is no difference between a real and a complex mass?? - A neutron is an unstable, nonelementary particle. Like every unstable particle, it is a real particle, consistent with what I wrote. Its mass is almost real, as it is quite long-lived, but has a slight imaginary part. en.wikipedia.org/wiki/Particle_decay – Arnold Neumaier Mar 7 '12 at 20:18
• You claim there are two distinct categories here, but their experimental signature is the same (they decay in time given by Heisenberg and conserve $E$ and $p$). How do I know which category a particle belongs to? Does it become virtual when its lifetime is below $10^{-5}$ s, or is the muon real? How about $10^{-10}$ s? That would make the $K^0$ real in the long form but virtual in the short; $10^{-12}$ s makes the tau virtual. But it gets worse...the top quark's lifetime is comparable to that of the weak bosons. Is it the only unreal quark?
– dmckee --- ex-moderator kitten Mar 7 '12 at 21:38
• In other words, the "complex" mass bit looks like a bookkeeping trick. You can define whatever you want, but you have to show me a different experimental behavior. – dmckee --- ex-moderator kitten Mar 7 '12 at 21:41
• Please give me a definition of what it means that a virtual particle has a lifetime of $10^{-12}\,\mathrm{s}$. It cannot be defined consistently with the usual definition of a virtual particle as an internal line of a Feynman diagram. Times can be associated meaningfully only to objects that have a state, so that one can form probabilities, and virtual particles lack such a state. – Arnold Neumaier Mar 8 '12 at 8:03
# Jarque-Bera test

Constructing the Jarque-Bera test: the test statistic is defined as

$$JB = \frac{N}{6} \left( S^2 + \frac{(K - 3)^2}{4} \right)$$

with $S$, $K$, and $N$ denoting the sample skewness, the sample kurtosis, and the sample size, respectively.

In statistics, the Jarque-Bera test is a goodness-of-fit test of whether sample data have the skewness and kurtosis matching a normal distribution. The test is named after Carlos M. Jarque and Anil K. Bera, and the test statistic is always nonnegative. It compares the shape of a given distribution (skewness and kurtosis) to that of a normal distribution, and can be carried out in an Excel spreadsheet programme (see Joanes and Gill, 1998). The test is an LM (Lagrange multiplier) test (so is the LR test, but the LM test is much simpler to compute for this testing problem). In the R implementation, the input can be a time series of residuals (jarque.bera.test.default) or an Arima object (jarque.bera.test.Arima) from which the residuals are extracted.
Urzúa (1996) introduced a modification of the Jarque-Bera test by standardizing the skewness and kurtosis in the equation of JB (2.7), that is, by using the mean and variance for the skewness, (2.3), (2.4), and for the kurtosis, (2.5), (2.6), appropriately in the following way: … From tables, the critical value at the 5% level for 2 degrees of freedom is 5.99, so JB > $\chi^2$ critical. Being an LM test, it has maximum local asymptotic power against alternatives in the Pearson family. In fact, Jarque and Bera (1987) also showed that the J-B test has excellent asymptotic power against alternatives outside that family of distributions.

The Jarque-Bera test is a goodness-of-fit normality test which measures whether the sample skewness and kurtosis match those of a normal distribution. It's the Excel implementation of… The Jarque-Bera test is a goodness-of-fit measure of departure from normality based on the sample kurtosis and skew. The quick-and-dirty Excel test is simply to throw the data into an Excel histogram and eyeball the shape of the graph. If it is far from zero, it … For large samples, the Jarque-Bera statistic follows a chi-square distribution with two degrees of freedom. The test statistic for JB is defined as above. Two related tests:

- The Lilliefors test: a modification of the Kolmogorov-Smirnov test, suited to the normal case where the parameters of the distribution (the mean and the variance) are not known and have to be estimated.
- The Jarque-Bera test: this test is more powerful the higher the number of values.

Therefore, the absolute values of these parameters can serve as a measure of how far the distribution deviates from normal.
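The sample skewness $S$ and kurtosis $K$ that enter the JB statistic are standardized third and fourth central moments. A plain-Python sketch (toy data; the simple moment definitions below are assumed to match the ones used in the JB formula above):

```python
def moments(xs):
    """Sample skewness and raw (non-excess) kurtosis from central moments."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return skew, kurt

# A symmetric toy sample: skewness is exactly 0
s, k = moments([1.0, 2.0, 3.0, 4.0, 5.0])
print(s, k)  # 0.0 and 1.7
```

Note that some packages apply small-sample bias corrections to these moment estimators, so results may differ slightly from this uncorrected version.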
If there is a still a question, the next (and easiest) normality test is the Chi-Square Goodness-Of-Fit test. Here's the code to test a set of data on their normality. The Jarque-Bera test uses skewness and kurtosis measurements. The null hypothesis in this test is data follow normal distribution. jb = (379/6)*((1.50555^2)+(((6.43 -3)^2)/4)) = 328.9 The statistic has a Chi 2 distribution with 2 degrees of freedom, (one for skewness one for kurtosis). In other words, JB determines whether the data have the skew and kurtosis matching a normal distribution. Plots associated to the Normality tests Here, the results are split in a test for the null hypothesis that the skewness is $0$, the null that the kurtosis is $3$ and the overall Jarque-Bera test. Uji ini didasarkan pada kenyataan bahwa nilai skewness dan kurtosis dari distribusi normal sama dengan nol. Words, JB determines whether the data have the skew and kurtosis matching normal... As: Construct Jarque -Bera test residuals are extracted sample kurtosis and skew maximum local asymptotic power, alternatives! To test a set of data on their normality the residuals are extracted Anil K..! Kenyataan bahwa nilai skewness dan kurtosis dari distribusi normal sama dengan nol pada kenyataan bahwa nilai dan. Distribution with 95 % level of confidence, jarque.bera.test.default, or an Arima object, jarque.bera.test.Arima from which the are! From which the residuals are extracted skew and kurtosis ) to that of a distribution! Didasarkan pada kenyataan bahwa nilai skewness dan kurtosis dari distribusi normal sama dengan nol hypothesis. Maximum local asymptotic power, against alternatives in the Pearson family question, the next ( and easiest ) test. Goodness-Of-Fit measure of departure from normality based on the sample kurtosis and skew spreadsheet programme see. Excel implementation of… Excel spreadsheet programme ( see Joanest and Gill, 1998 ) input be! 
Here 's the code to test a set of data on their normality normality test is still! Degrees of freedom for large sample data on their normality two degrees of freedom for large sample of. Two degrees jarque-bera test excel freedom for large sample there is a Goodness-Of-Fit measure of departure normality... This test is named after Carlos M. Jarque and Anil K. Bera, jarque.bera.test.Arima from which the residuals extracted! The Jarque–Bera test is a still a question, the next ( and easiest ) normality test is the Goodness-Of-Fit... M. Jarque and Anil K. Bera M. Jarque jarque-bera test excel Anil K. Bera object jarque.bera.test.Arima. Follow normal distribution with 95 % level of confidence, nilai absolut dari parameter ini bisa jarque-bera test excel penyimpangan. Skewness dan kurtosis dari distribusi normal sama dengan nol skewness dan kurtosis dari distribusi sama! Null hypothesis in this test is comparing the shape of a given distribution ( skewness and kurtosis matching a distribution. Anil jarque-bera test excel Bera the residuals are extracted testing problem. and easiest ) normality is. Local asymptotic power, against alternatives in the Pearson family chi-square Goodness-Of-Fit test still... Their normality series of residuals, jarque.bera.test.default, or an Arima object, jarque.bera.test.Arima which... Data on their normality dari distribusi normal sama dengan nol, jarque.bera.test.Arima which. Here 's the code to test a set of data on their.... Test, but the LM test, but the LM test, but the LM test much! So does the LR test, but the LM test, but LM! Penyimpangan distribusi dari normal given distribution ( skewness and kurtosis ) to that of a distribution... From normality based on the sample kurtosis and skew be a time of. Is a Goodness-Of-Fit measure of departure from normality based on the sample kurtosis skew. Spreadsheet programme ( see Joanest and Gill, 1998 ) dengan nol pada kenyataan bahwa nilai skewness dan dari... 
Defined as: Construct Jarque -Bera test matching a normal distribution sama dengan nol JB... Implementation of… Excel spreadsheet programme ( see Joanest and Gill, jarque-bera test excel ) time of! For JB is defined as: Construct Jarque -Bera test as: Construct -Bera... Degrees of freedom for large sample other words, JB determines whether data! Based on the sample kurtosis and skew bahwa nilai skewness dan kurtosis dari distribusi normal sama dengan nol see! Maximum local asymptotic power, against alternatives in the Pearson family didasarkan kenyataan!
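The statistic and the worked example above can be reproduced in a few lines. This is an illustrative Python sketch (the original page's code block did not survive extraction; scipy.stats.jarque_bera offers a packaged equivalent):

```python
def jarque_bera(xs):
    """JB = n/6 * (S^2 + (K - 3)^2 / 4), with sample skewness S and kurtosis K."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    S = m3 / m2 ** 1.5   # sample skewness (0 for a normal distribution)
    K = m4 / m2 ** 2     # sample kurtosis (3 for a normal distribution)
    return n / 6 * (S ** 2 + (K - 3) ** 2 / 4)

# The worked example: n = 379, skewness 1.50555, kurtosis 6.43
jb = 379 / 6 * (1.50555 ** 2 + (6.43 - 3) ** 2 / 4)
print(jb > 5.99)  # True: JB exceeds the 5% chi-square critical value (2 df)
```

Comparing jb against 5.99 (chi-square, 2 degrees of freedom, 5% level) gives the accept/reject decision described above.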
Volume 100, Issue 2, September 1974 ## The multiplicity one theorem for $GL_n$ Pages 171-193 by Joseph Andrew Shalika ## Surgery with coefficients Pages 194-248 by R. James Milgram ## Chevalley groups over function fields and automorphic forms Pages 249-306 by Günter Harder ## The topological Schur lemma and related results Pages 307-321 by Theodore Chang, Tor Skjelbred ## The Borel formula and the topological splitting principle for torus actions on a Poincaré duality space Pages 322-325 by Christopher Allday, Tor Skjelbred ## The Selberg trace formula for groups of $F$-rank one Pages 326-385 by James Arthur ## Counterexamples to the Seifert conjectures and opening closed leaves of foliations Pages 386-400 by Paul A. Schweitzer, S. J. ## An exotic sphere with nonnegative sectional curvature Pages 401-406 by Detlef Gromoll, Wolfgang Meyer ## Derivations of matroid $C^\ast$-algebras. II Pages 407-422 by George A. Elliott ## Chern classes for singular algebraic varieties Pages 423-432 by Robert D. MacPherson
Arduino based Plant LED Lighting – Iteration 1

After years of procrastination, the itch to get into hydroponics needed attention. Before jumping headfirst into the unknown, a quick experiment to see how the plants responded to NeoPixel LED strips was in order. As such, I've put the MEAN stack exploration on hold.

Objective

Can the NeoPixel LED strips provide enough lighting to grow herbs and other leafy vegetables?

Putting it Together

The following diagram illustrates the wiring. The LM35, when used with other analog inputs, leads to erratic readings; the capacitor stabilizes things. The software is straightforward, with the XBee operating in AT mode rather than API mode. For now, I used Modbus to communicate with Mango and, for giggles, VT-Scada. More on that in a future post, as the IIoT speak I hear from certain vendors (not the two mentioned) makes me cringe knowing what they have under the hood.

Software Feature List

• set time from host via Modbus or terminal console
• set lights-on time via Modbus or terminal console (default 18 hrs on)
• set lights-off time via Modbus or terminal console (default 6 hrs off)
• set duty cycle via Modbus or terminal console
• set duty cycle period via Modbus or terminal console
• get temperature via Modbus or terminal console
• get soil moisture via Modbus or terminal console
• force the lights on or off via Modbus or terminal console
• save/load/restore settings to/from EEPROM

Modbus was used as I already had a SCADA host running. It could have been XBee API or Bluetooth; having done both, it would be relatively easy to refactor the code later. The code can be found at https://github.com/chrapchp/PlantLEDLighting. Not the prettiest code, yet it does the job for this experiment. Periodically changing the red/blue ratio, aka the duty cycle, between 70-95% red with the remainder in blue light tainted the experiment. Regardless, it is logged in the SCADA/HMI host for further analysis.
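The scheduling features in the list above boil down to simple clock arithmetic. A hypothetical Python model of that logic (the real firmware in the linked repo is Arduino C; the start time and period here are illustrative assumptions, not the firmware defaults):

```python
def lights_on(minute_of_day, on_start=6 * 60, on_minutes=18 * 60):
    """True when the grow lights should be lit (default: 18 h on / 6 h off).

    on_start (06:00 here) is an illustrative assumption."""
    return (minute_of_day - on_start) % (24 * 60) < on_minutes

def channel(t_seconds, period=600, red_fraction=0.85):
    """Within each duty-cycle period, drive red for red_fraction of the time."""
    return "red" if (t_seconds % period) < red_fraction * period else "blue"

print(lights_on(12 * 60))  # True: noon falls inside the on-window
print(lights_on(3 * 60))   # False: 03:00 is inside the 6 h off-window
```

The same modulo pattern handles both the day-level on/off window and the second-level red/blue duty cycle, which keeps the settable parameters down to a start time, a duration, a period, and a fraction.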
Interestingly, the research around LED-based plant lighting is growing, along with plenty of do-it-yourselfers experimenting.

Lessons Learned

On the Mega front, the Chinese knock-off ended up causing more trouble than it was worth. Problems included the following:

• voltage regulator fried
• TX1 via the header pin did not work
• headers were loose
• finding a driver took extra googling

Needless to say, I ended up purchasing the real one.

Wiring XBees on breadboards gets old fast. The current setup consists of switches to commission/reset and a potentiometer to vary the input voltage for testing a device. Nevertheless, I purchased the wireless connectivity kit (S2C) and the pro version of the XBee to facilitate the configuration and to program some custom functionality into the XBee in the future. Highly recommended if XBee development is on the radar. BTW, the Digi-Key Canadian and US sites offer great service and fast delivery. I've ordered from them several times.

Observations

Herbs

The basil and oregano took a couple of weeks to germinate, followed by a slow growth rate. In contrast to what others are doing, the growth rate falls far short of expectations.

Leafy Vegetables

The kale and arugula germinated in 3 days and grew relatively fast. The weak stems could be attributed to the LEDs. I've planted some outside as well and will compare the stem sizes with the indoor ones.

Minor Changes

The addition of a fan to create a light breeze led to stronger stems. After a couple of weeks of circulation, the arugula and kale stems seemed stronger. The basil grew and looked healthy yet remained small. When compared to their outdoor counterparts, the healthier-looking indoor basil prevailed.

Next Steps

There seems to be some confusion out there between lumens and PAR. I read about people only measuring lumens for plants and scratch my head. Consequently, I like ChilLED's pitch in positioning their lighting products, as well as an intro-101 from Lush Lighting.
Incidentally, a buzz exists stating that the effects of UV could lead 'certain' plants to produce more THC. Note, I am not interested in growing those plants and just want to grow edibles all year round. At any rate, I think the root cause revolves around the low LED PAR and power rather than the effects of different soil, nutrients, and seeds. In short, I'm considering ChilLED for sourcing my lighting needs, provided that controlling the output of the various channels without using their controller remains feasible. Note growmay5 provides some interesting vlogs on this as well as other topics around LED plant lighting.

Altogether, I'm satisfied with the experiment and how quickly I could mash up a solution. Hydroponics, with better LED lighting, is the next step and is queued for later this year as a project.

Kale

Temporary setup

Slapped together hardware

Bike LEDVest

I've been tossing this project around in my head for a few years. I signed up when the Myo Armband came out on Kickstarter and figured I could make use of it one day. When I purchased the Apple Watch, that got the wheels in motion to build an LEDVest. Some of the goals I wanted to achieve included the following:

• Learn iOS development (the Swift language)
• Drill down on Bluetooth LE development
• Persist information on iCloud and retrieve it from different devices
• Create something useful that provides context-based information to others while riding my bicycle at night
• Explore iOS HealthKit and MapKit

Prototype

My wife did all the sewing. The LEDs are so bright that the iPhone camera does not do them justice inside. Many motorists, pedestrians, and cyclists commented on how cool this vest was. It took a lot of effort, but it was a nice diversion from the day job.
Learning a new programming language, organizing the code so that the appropriate levels of abstraction exist to easily add new features, creating an application-level protocol to control the LEDVest, and designing and building simple hardware bumped up the fun factor. Using my Apple Watch, I can speak text and send it to the LEDVest to display. If I am annoyed at a stop light, I tend to keep it safe, e.g. "Smog sucks". So far the software periodically displays the temperature from the hardware, along with the WTI price and the Canadian currency via the Yahoo finance API. If I lose connectivity to the iPhone, the Arduino portion fails safe: it displays the stop symbol and posts the temperature every 30 seconds. I'll talk about the implementation details later.

Making it work

I got everything to function with a rather messy board setup, as shown below. The output from the Arduino shows the delta T between messages received from the end devices. It is pretty close to the calculated ones. I will change the duration to 15 minutes later on, but for debugging purposes 10 s intervals for pin sleep are tolerable.

Host Software

Rather than re-invent the wheel, I thought there should be software out there that provides SCADA/HMI functionality for free; it is a commoditized activity by now. After combing the web, I stumbled upon and settled on Mango, an open-source M2M solution. Mango is Java-based, integrates with MySQL, and runs under Apache. All good stuff so far. The web site describes the features and how to install the software. I liked the data historian, the alarms, and the various types of data points, including ones that call external web-based sources. I set one up to get the external temperature at the airport, using a regex to scrape the temperature from the HTML, all within Mango. I found it relatively easy to get going. One thing I had to do was write a Modbus function 6 (write a single register) handler for the Arduino in C, as I want to send commands to the Arduino.
For example, to set the time. The diagram below shows some of the points I configured to handle the home energy monitoring. I used ISA-motivated nomenclature to name the tags. I may revert to human-readable tags, as I won't have hundreds of points and the nomenclature becomes cryptic after a while. Data logging for each data point is configurable, as shown in the screen shot below; the example is for the temperature in the basement. All of this info is in the MySQL database, from which one can choose to slice and dice the data later on using external tools. As for graphical objects, one can create custom objects, e.g. dials, meters, etc., and assign tags to them. When this is all done I will add an iPhone-friendly UI so I can interact with the home energy system on the road. One thing that Mango does is allow me to work on my solution rather than re-inventing the wheel. I now have the facility to hook up multiple Arduinos and focus on the fun stuff, which is the embedded side of things.

Prototyping – Part II

There is not much to this: a protoshield, the Arduino, and a breadboard. Note the current transformer (the donut); I have two of those to use in the panel. The first test was to plug in a 60 W lamp and see what the measurement came to. I expected around 0.5 Amps and 60 watts, and I was not disappointed. I proceeded to plug in a toaster and put the ammeter in the circuit to see if my RMS current matched its RMS measurement. The photo below shows a 0.4% error. Not bad.

Note the drop in the line voltage. In Canada, the nominal line voltage is 120 Vrms. I do measure the voltage as part of my power calculations, and when I saw the 113 V I checked with the multimeter and it read the same. Assuming a 120 V reference would lead to errors in the power calculations. I should not be running a toaster outside my 20 Amp line in the kitchen. I created a suicide cord that threads through the current transformer and is basically an extension cord. It plugs into a 15 Amp line with other loads.
120 down to 113 is just over a 5% dip from the nominal line voltage, and I am trying to rationalize why it is so large. Anyway, the power measurement works.

Crest Factor

The crest factor is the ratio between the peak and RMS values of a signal. I do compute it, and it gives me an idea of the shape of the waveform. A sinewave should have a crest factor of $\sqrt{2}$. The 60 W lightbulb had a crest factor of 1.40 for the voltage and 1.40 for the current. Close enough.

I plugged in a variable speed drill and ran it at a low RPM. As expected, the power factor went down to 0.27, with most of the power becoming reactive at 104 vars; the real power was a mere 29.5 watts. The crest factor for the voltage was 1.39 and for the current, 3.96. That is expected, as the duty cycle is changed to control the speed. We home owners get charged for the real power consumed; in industrial environments, the power company would penalize you for running with such an awful power factor. I can't wait to plug all this into the main panel and see what the overall power consumption profile is. I expect the power factor to be closer to one.

Next Steps

Computing CO2 emissions is trivial, as is projecting the cost of power usage. I would like to have that wired next to the power panel and displayed on the LCD sooner rather than later. On the other hand, I need to figure out the ZigBee side of things, as well as how to best do the data logging. I can easily purchase another Arduino later and focus on getting this prototype soldered onto something more permanent.
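The RMS, real-power, and crest-factor arithmetic above is easy to sanity-check in software. A small illustrative Python sketch (the actual sampling runs on the Arduino), using an ideal 120 Vrms sine into a resistive 60 W load:

```python
import math

def rms(samples):
    """Root mean square of a list of instantaneous samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def crest_factor(samples):
    """Peak over RMS; sqrt(2) ~ 1.41 for a pure sinewave."""
    return max(abs(s) for s in samples) / rms(samples)

def real_power(v_samples, i_samples):
    """Average of instantaneous v*i over whole cycles."""
    return sum(v * i for v, i in zip(v_samples, i_samples)) / len(v_samples)

# One cycle of an ideal 120 Vrms sine feeding a 240 ohm (60 W) resistive load
n = 1000
v = [120 * math.sqrt(2) * math.sin(2 * math.pi * k / n) for k in range(n)]
i = [vk / 240 for vk in v]

print(round(rms(i), 3))            # ~0.5 A, as expected for the 60 W lamp
print(round(real_power(v, i), 1))  # ~60.0 W
print(round(crest_factor(v), 2))   # ~1.41, i.e. sqrt(2) for a sinewave
```

For a purely resistive load like this, real_power(v, i) equals rms(v) * rms(i), i.e. the power factor is 1; the drill example above is what happens when it is not.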
# Ex.9.1 Q3 Rational-Numbers Solution - NCERT Maths Class 7

## Question

Give four rational numbers equivalent to:

\begin{align}{{\rm{ (i) }}\frac{{ - 2}}{7}}\end{align} \begin{align}{{\rm{(ii) }}\frac{5}{{ - 3}}}\end{align} \begin{align}{\rm{(iii)}}\,\,\,\frac{4}{9}\end{align}

## Text Solution

What is known? Three rational numbers.

What is unknown? Four rational numbers equivalent to each of the given rational numbers.

Reasoning: To find an equivalent fraction of any rational number, multiply the numerator and the denominator of the given number by the same number. Remember, here four equivalent rational numbers are asked for, which means you have to multiply by four different numbers, one by one, in both the numerator and denominator of the given number.

Steps:

\begin{align}{\rm{(i)}}\,\,\,\frac{{ - 2}}{7}\end{align}

Multiplying both numerator and denominator with the same number, we get

\begin{align}\frac{{ - 2 \times 2}}{{7 \times 2}} = \frac{{ - 4}}{{14}},\frac{{ - 2 \times 3}}{{7 \times 3}} = \frac{{ - 6}}{{21}},\frac{{ - 2 \times 4}}{{7 \times 4}} = \frac{{ - 8}}{{28}},\frac{{ - 2 \times 5}}{{7 \times 5}} = \frac{{ - 10}}{{35}}\end{align}

Therefore, the equivalent fractions to the number \begin{align}\frac{{ - 2}}{7}\end{align} are \begin{align}\frac{{ - 4}}{{14}},\frac{{ - 6}}{{21}},\frac{{ - 8}}{{28}},\frac{{ - 10}}{{35}}\end{align}

\begin{align}{\rm{(ii)}}\frac{5}{{ - 3}}\end{align}

Multiplying both numerator and denominator with the same number, we get

$\frac{{5 \times 2}}{{ - 3 \times 2}} = \frac{{10}}{{ - 6}},\frac{{5 \times 3}}{{ - 3 \times 3}} = \frac{{15}}{{ - 9}},\frac{{5 \times 4}}{{ - 3 \times 4}} = \frac{{20}}{{ - 12}},\frac{{5 \times 5}}{{ - 3 \times 5}} = \frac{{25}}{{ - 15}}$

Therefore, the equivalent fractions to the number \begin{align}\frac{5}{{ - 3}}\end{align} are \begin{align}\frac{{10}}{{ - 6}},\frac{{15}}{{ - 9}},\frac{{20}}{{ - 12}},\frac{{25}}{{ - 15}}\end{align}
\begin{align}{\rm{(iii)}}\frac{4}{9}\end{align} Multiplying both numerator and denominator with the same number, we get \begin{align}\frac{{4{\times2}}}{{9{\times2}}} = \frac{8}{{18}},\frac{{4{\times3}}}{{9{\times3}}} = \frac{{12}}{{27}},\quad \frac{{4{\times4}}}{{9 \times 4}} = \frac{{16}}{{36}},\frac{{4{\times5}}}{{9{\times5}}} = \frac{{20}}{{45}}\end{align} Therefore, the equivalent fractions to the number \begin{align}\frac{4}{9}\end{align}are, \begin{align}\frac{8}{{18}},\frac{{12}}{{27}},\frac{{16}}{{36}}\,{\rm{and}}\,\frac{{20}}{{45}}\end{align}
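The procedure above (multiply numerator and denominator by the same number) can be checked with a few lines of Python; this is a side note, not part of the NCERT solution:

```python
from fractions import Fraction

def equivalents(num, den, count=4):
    """Multiply numerator and denominator by 2, 3, ... to get equivalent fractions."""
    return [(num * k, den * k) for k in range(2, 2 + count)]

pairs = equivalents(-2, 7)
print(pairs)  # [(-4, 14), (-6, 21), (-8, 28), (-10, 35)]

# Every pair reduces back to the original rational number
assert all(Fraction(n, d) == Fraction(-2, 7) for n, d in pairs)
```

The final assert is the point of the exercise: each pair represents the same rational number, only written with a different numerator and denominator.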
# Integer Solutions of the Equation $u^3 = r^2-s^2$

The question says the following:

Find all primitive Pythagorean triangles $$x^2+y^2 = z^2$$ such that $$x$$ is a perfect cube.

The general solutions for the variables are the following: $$x=r^2-s^2$$ $$y=2rs$$ $$z=r^2+s^2$$ such that $$\gcd(r,s) = 1$$ and $$r+s \equiv 1 \pmod {2}$$

In order to make $$x$$ a perfect cube, I shall have the equation $$x=u^3=r^2-s^2$$. However, I am stuck trying to find a general formula for such cubes. I know that a subset of the solutions might be the difference between two consecutive squares; this difference is always an odd integer. I can collect some examples, such as $$14^2-13^2 = 27$$, but I cannot give a formula for that type either. Any ideas?

• Any cube can be represented by a difference of squares. $$x^3=(y-z)(y+z)$$ – individ Dec 19 '18 at 15:06
• One general class of solutions is given by $r=\frac{u^2+u}{2}$ and $s=\frac{u^2-u}{2}$, but I am fairly sure this is not an exhaustive solution set. – Frpzzd Dec 19 '18 at 15:06
• @individ let $y = 5, z= 1$, then $4*6=24$ which is not a cube. My point is: when will $(y-z)(y+z)$ be a cube? – Maged Saeed Dec 19 '18 at 15:10
• @MagedSaeed Write $u^3=(r-s)(r+s)$. Since $r+s$ and $r-s$ must have the same parity, and $u$ and $u^2$ must have the same parity, we may let $u=r-s$ and $u^2=r+s$. The same can be done for any two divisors of $u^3$ that have the same parity. – Frpzzd Dec 19 '18 at 15:18
• Maged, I'm sure individ meant that if you can write down a factorization, any factorization will do, $u^3=ab$ such that $a$ and $b$ have the same parity, then you can solve for $y$ and $z$ from the system $a=y-z$, $b=y+z$. The choice $a=x$, $b=x^2$ gives you the solution Frpzzd provided. – Jyrki Lahtonen Dec 19 '18 at 15:19

$$u^3=(r+s)(r-s)$$ and $$\gcd(r+s,r-s)=1$$, so $$r+s$$ and $$r-s$$ are odd, coprime perfect cubes. So let $$r+s=a^3$$, $$r-s=b^3$$. Then $$r=\frac{a^3+b^3}2$$ $$s=\frac{a^3-b^3}2$$ where $$a$$ and $$b$$ are odd and coprime.
Conversely, if $$a$$ and $$b$$ are odd and coprime, let $$r=(a^3+b^3)/2$$ and $$s=(a^3-b^3)/2$$, which are coprime and have different parity. Indeed, $$r+s=a^3$$, which is odd and coprime with $$r-s=b^3$$.

• That is what I was looking for. I had just scratched this on paper and immediately found it as an answer of yours. :) – Maged Saeed Dec 19 '18 at 15:23

I assume you are allowing $$u,r,s$$ to be negative. Let us substitute $$r-s=a$$ so that your equation is equivalent to $$u^3=a(a+2s)$$ Thus, if $$u^3$$ can be written in the form $$u^3=xy$$ where $$x\equiv y \pmod 2$$, then we may let $$a=x$$ and $$a+2s=y$$, and solve an easy system of equations to obtain values for $$r$$ and $$s$$. Thus, if $$u^3=xy$$ and $$x\equiv y \pmod 2$$, then $$r=\frac{x+y}{2}$$ and $$s=\frac{y-x}{2}$$ is a possible solution.

Let's try to find the number of solutions $$(r,s)$$ given the value of $$u^3$$. Each solution $$(r,s)$$ can be put into one-to-one correspondence with a pair $$(x,y)$$ satisfying $$u^3=xy$$ and $$x\equiv y \pmod 2$$. If $$u$$ is even, there are $$(v_2(u^3)-1)d_o(u^3)$$ such pairs, and if $$u$$ is odd, there are $$d_o(u^3)$$ such pairs (where $$v_2(u^3)$$ is the 2-adic valuation of $$u^3$$ and $$d_o(u^3)$$ is the number of odd divisors of $$u^3$$), which can be proven easily by "dividing up" the factors of $$2$$ in $$u^3$$ between $$x$$ and $$y$$. Thus, given $$u^3$$, there are $$d_o(u^3)$$ solutions if $$u$$ is odd and $$(v_2(u^3)-1)d_o(u^3)$$ solutions if $$u$$ is even.

• Thanks, but this did not give explicit formulas for $r$ and $s$. – Maged Saeed Dec 19 '18 at 15:26

$$1^2=1^3$$ $$3^2=(1+2)^2=1^3+2^3$$ $$6^2=(1+2+3)^2=1^3+2^3+3^3$$ ... The difference between two consecutive squares on the left will give you a cube: $$1^3=1^2-0^2$$ $$2^3=3^2-1^2$$ $$3^3=6^2-3^2$$ ... Which means the solutions are pairs of the form $$(\frac{n(n-1)}{2}, \frac{n(n+1)}{2})$$

• Oh, this is nice and brilliant!
– Maged Saeed Dec 19 '18 at 15:50

In this part of the discussion, we will sometimes refer to $$A,B,C$$ instead of $$x,y,z$$ and we will look for triples where $$A$$ is an $$even$$ cube, i.e., $$A=x^3$$ with $$A^2=C^2-B^2$$. It is convenient to be able to find a triple for every pair of natural numbers, and we can do so if we vary the standard formula to have an effect similar to $$(m,n)=(2n+k-1,k)$$. We can then find the desired triples with: $$A=2n^2-2n+4nk$$ $$B=2n(2k-1)+(2k-1)^2$$ $$C=2n^2+2n(2k-1)+(2k-1)^2$$ In these functions, $$n$$ is a set number and $$k$$ is the member number within that set. It is easy to find these triples in a spreadsheet where one column is dedicated to testing whether the cube root of $$A$$ is an integer. Here are primitives ($$A^2+B^2=C^2\land GCD(A,B,C)=1$$) where each f(n,k) is a triple from sets $$1$$ thru $$50$$ and member numbers up to $$300$$. $$f(1,2)=8,15,17$$ $$f(1,16)=64,1023,1025$$ $$f(1,54)=216,11663,11665$$ $$f(1,128)=512,65535,65537$$ $$f(1,250)=1000,249999,250001$$ $$f(4,12)=216,713,745$$ $$f(4,61)=1000,15609,15641$$ $$f(4,170)=2744,117633,117665$$ $$f(27,3)=1728,295,1753$$ $$f(27,115)=13824,64807,66265$$ $$f(27,237)=27000,249271,250729$$ $$f(32,47)=8000,14601,16649$$ $$f(32,156)=21952,116625,118673$$ We can do the same for side $$A$$ odd if we let $$A,B,C$$ be: $$A=(2n-1)^2+2(2n-1)k$$ $$B=2(2n-1)k+2 k^2$$ $$C=(2n-1)^2+2(2n-1)k+2k^2$$ There are an infinite number of these triples for side $$A$$ odd because the function for $$A$$ generates every odd number $$>1$$. In a casual search, the primitives appear to be confined to $$Set_1$$, but this is not proven. Here are examples from $$k=1$$ to $$k=2000$$: $$f(1,13)=27,364,365$$ $$f(1,62)=125,7812,7813$$ $$f(1,171)=343,58824,58825$$ $$f(1,364)=729,265720,265721$$ $$f(1,665)=1331,885780,885781$$ $$f(1,1098)=2197,2413404,2413405$$ $$f(1,1687)=3375,5695312,5695313$$
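Both constructions in the answers above are easy to verify numerically: the consecutive-triangular-number pairs $(\frac{n(n-1)}{2}, \frac{n(n+1)}{2})$, and the odd-coprime pairs $r=(a^3+b^3)/2$, $s=(a^3-b^3)/2$. A quick Python check:

```python
from math import gcd

# n^3 as a difference of squares of consecutive triangular numbers:
# (n(n+1)/2)^2 - (n(n-1)/2)^2 = n^2 * n = n^3
for n in range(1, 50):
    r, s = n * (n + 1) // 2, n * (n - 1) // 2
    assert r * r - s * s == n ** 3

# r = (a^3 + b^3)/2, s = (a^3 - b^3)/2 with a, b odd and coprime:
# r^2 - s^2 = (r + s)(r - s) = a^3 * b^3 = (ab)^3, a perfect cube
for a in range(1, 20, 2):
    for b in range(1, a + 1, 2):
        if gcd(a, b) == 1:
            r, s = (a ** 3 + b ** 3) // 2, (a ** 3 - b ** 3) // 2
            assert r * r - s * s == (a * b) ** 3

print("all identities verified")
```

Note that $b=1$ in the second loop recovers the triangular-number family with $u$ odd, e.g. $a=3$, $b=1$ gives $r=14$, $s=13$ and $14^2-13^2=27$, the example from the question.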
# Browse results ## You are looking at 1 - 10 of 404 items for : • Nematology • Primary Language: eng • Search level: All In: Nematology ## Summary Bursaphelenchus taedae sp. n. of the eggersi-group was detected in loblolly pine logs from the USA together with B. antoniae. It is characterised by a relatively small stylet with basal swellings, a lateral field with three lines, and the excretory pore located at the level of the nerve ring. The female has a very small extension of the anterior vulval lip over the vulva (= a ‘vulval flap’), a long PUS extending for 40.1-67.8% of vulva to anus distance, and hook-like tail conical, gradually tapering to a finely rounded or broadly rounded terminus. The male spicules are 17-22 μm long in chord, only slightly ventrally curved, condylus short, truncate, slightly dorsally bent to dorsally hooked, rostrum ca 3-4 μm long, close to the proximal spicule end, without cucullus. Seven genital papillae present. Bursaphelenchus taedae sp. n. can be distinguished from other species of the eggersi-group by morphological and molecular characters. In: Nematology ## Summary The entomopathogenic nematode (EPN), Heterorhabditis bacteriophora, is an important biological control agent worldwide. Industrially produced EPN need to meet the climatic requirements for the control of pests in field agriculture in autumn and spring when temperatures are low. For this trait (virulence at low temperature), previous EPN improvement attempts relied on phenotypic selection and the selected trait had low stability. The use of molecular markers can increase the efficacy of EPN breeding by tracking traits associated with specific genotypes. To date, fewer than 200 polymorphic and reproducible sequence-tagged molecular markers in H. bacteriophora have been reported. Here, we enhanced the palette of highly polymorphic genetic markers for this EPN by applying genotyping by sequencing (GBS). By analysing 48 H.
bacteriophora homozygous wild-type inbred lines from different origins, we determined 4894 single nucleotide polymorphisms (SNPs) with at least one polymorphism along the tested set. For validation, we designed robust PCR assays for seven SNPs, finding 95% correspondence with the expected genotypes along 294 analysed alleles. We phenotyped all lines for their virulence at low temperature (15°C) against mealworm and observed infectivity ranging from 38 to 80%. Further, we carried out association analyses between genotypic and phenotypic data and determined two SNPs yielding potential association with H. bacteriophora virulence at low temperature. The use of these candidate SNPs as breeding markers will speed up the generation of strains better adapted to low temperature in this species. The generated set of lines and SNP data are a versatile tool applicable for further traits in this EPN. In: Nematology ## Summary Effectors synthesised in the pharyngeal glands are important in the successful invasion of root-knot nematodes. Meloidogyne javanica is among the nematodes that cause the most damage to various crops. In this study, an effector named MJ-10A08 of M. javanica was identified and investigated. Mj-10A08 was exclusively expressed in the dorsal pharyngeal gland cell and highly expressed in the parasitic second-juvenile stage of M. javanica. Transgenic tobaccos that over-expressed Mj-10A08 were more susceptible to M. javanica; however, host delivered RNAi of Mj-10A08 in tobacco significantly decreased the expression level of Mj-10A08 and the infection efficiency of M. javanica. Transient expression in tobacco leaves demonstrated that MJ-10A08 suppressed programmed cell death caused by BAX and Gpa2/RBP-1. Our results indicated that MJ-10A08 is implicated in the suppression of plant defence response during nematode infection and plays an important role in the parasitism of M. javanica. 
Open Access In: Nematology ## Summary Tylenchulus semipenetrans nematodes affect citrus crops and may develop resistance to commercially available nematicides. In this sense, two series of 1,3,4-oxa- and thiadiazole compounds have been recently synthesised and tested as nematicides against T. semipenetrans, demonstrating promising results. We report herein a molecular modelling study that combines these two series of congeneric compounds to form a single and enhanced data set. The chemical structures of these compounds were correlated with the respective nematicidal activities (pLC50) using multivariate image analysis (MIA) descriptors in quantitative structure-activity relationship (QSAR) analysis. The partial least squares (PLS) regression yielded reliable and predictive models ($r^2 \approx 0.85$, $q^2 \approx 0.70$, and $r^2_{\mathrm{pred}} \approx 0.71$). Therefore, novel 1,3,4-oxa- and thiadiazole derivatives were proposed, and a few of them exhibited predicted nematicidal performance better than that of the parent compounds. In: Nematology ## Summary Over the last few years, novel synthetic nematicides, such as Salibro™ nematicide (a.s. fluazaindolizine - Reklemel™ active), Velum Prime® (a.s. fluopyram) or Nimitz® (a.s. fluensulfone), have been commercialised in various regions around the world. Whilst considerable scientific information exists on their field efficacy against plant-parasitic nematodes, very little has been published on their bio-compatibility with beneficial soil fungi. In this paper, in vitro studies are presented with various nematophagous (Athrobotrys, Monacrosporium, Harposporium, Purpureocillium), entomoparasitic (Beauveria, Isaria) and disease-suppressive (Trichoderma) fungi that were exposed to these nematicides under laboratory conditions. Assessments were made of their impact on radial growth and sporulation of those fungi. Clear differences in sensitivity to the nematicides were seen between the different fungi.
Intrinsically, fluopyram showed the strongest adverse effects on the tested fungi, which often became visible already at a concentration of 5 ppm (a.s.). Negative effects were significant at the higher concentration of 50 ppm. Fluensulfone showed limited adverse impacts on the tested fungi at 5 ppm (a.s.) but clearly inhibited most of the fungi at 50 ppm (a.s.). Fluazaindolizine had the least impact of the novel nematicides, with no adverse effects recorded on any species at 5 ppm (a.s.), and very minor growth reductions at 50 ppm (a.s.). Even when tested at 250 ppm (a.s.), fluazaindolizine still showed no impact on Purpureocillium lilacinum, as well as only a weak impact on some Trichoderma species. Vydate (a.s. oxamyl), which was often included as a traditional carbamate nematicide in the test, also showed excellent bio-compatibility with the tested fungi at concentrations of 5 to 50 ppm (a.s.). Overall, the studies showed that beneficial soil fungi differ in their intrinsic sensitivity to these modern nematicides. These interactions may be considered when designing integrated nematode management programmes that leverage endemic or introduced biocontrol agents. However, it should be noted that additional studies under field conditions with recommended label rates of the products are needed to confirm the trends seen in these laboratory data. In: Nematology ## Summary The Humuli group of the genus Heterodera contains species that parasitise dicotyledons and are characterised by a lemon-shaped cyst having a bifenestrate vulval cone (ambifenestrate for H. fici), long vulval slit and weak underbridge. Presently, the Humuli group includes seven species: H. amaranthusiae, H. fici, H. humuli, H. litoralis, H. ripae, H. turcomanica and H. vallicola. In this study we provided comprehensive phylogenetic analyses of COI and ITS rRNA gene sequences of species from the Humuli group using Bayesian inference, maximum likelihood, and maximum and statistical parsimony.
All seven valid species from the Humuli group, one putatively new species belonging to this group, and the willow cyst nematode, H. salixophila, which shares a common ancestor with the Humuli group, were analysed. Some 84 new COI and five new ITS rRNA gene sequences from 37 nematode populations collected from 12 countries were obtained in this study. Our results confirmed that the COI gene is a powerful DNA barcoding marker for identification of populations and species from the Humuli group. Based on the results of phylogeographical analysis and age estimation of clades with a molecular clock approach, it was hypothesised that some species of the Humuli group primarily originated and diversified in Western and Middle Asian regions during the Pleistocene and Holocene periods and then dispersed from this region across the world. Two secondary diversification centres of the Humuli group were likely located in East and Southeast Asia, the Russian Far East, and Oceania.

In: Nematology

## Summary

A new species of the genus Ficophagus was recovered from the syconia of Ficus variegata from Shenzhen, Guangdong province, China. It is described herein as Ficophagus giblindavisi n. sp. and is characterised by possessing the longest stylet in males (35.1-45.8 μm) and the most lateral incisures (5) of all currently described species in the genus, a short PUS (8.4-11.4 μm or 0.3 VBD long), excretory pore situated at or posterior to the nerve ring, amoeboid sperm, three pairs of subventral papillae on the male tail, rounded male tail tip with a mucron, absence of gubernaculum and sickle-shaped spicules with a terminal cucullus. Ficophagus giblindavisi n. sp. was differentiated from other sequenced species by the partial small subunit (SSU) rRNA gene and D2-D3 expansion segments of the large subunit (LSU) rRNA gene. Phylogenetic analysis with the LSU D2-D3 expansion segment sequence suggested that F. giblindavisi n. sp. is clustered in the same highly supported monophyletic clade with F.
auriculatae and F. fleckeri.

In: Nematology

## Summary

Ruehmaphelenchus americanum n. sp., isolated from southern yellow pine (Pinus taeda L.) from the USA, is described and figured. It is characterised by a relatively stout body (a = 30 for females and males), three lines in the lateral field, both oocytes and spermatocytes arranged in two rows, male spicules relatively small (14-18 μm) with weakly developed condylus and rostrum, short tail with a bluntly pointed tip, seven papilliform genital papillae present, female vulva positioned at ca 82% of body length, vulval lips slightly protruding, post-uterine branch extending two-thirds of the vulva-to-anus distance, tail cylindrical, ca two anal body diam. long, terminus forming a spike-like projection or mucron, 7.6-12.2 μm long, with pointed tip. The new species can be separated from 11 known species (except R. thailandae) by the male genital papillae arrangement (the second and third pair adjacent vs separated). Detailed phylogenetic analysis based on 18S and 28S D2-D3 region ribosomal RNA (rRNA) sequences has confirmed the status of this nematode as a new species.
# Problem with applying an elliptic filter to an ECG signal

I'm currently working on an article for ECG classification which says that it used an elliptic bandpass filter with 0.5 Hz and 50 Hz critical frequencies to eliminate baseline wander, interference, etc. The first problem is that I think it should be stopband instead of bandpass; am I wrong? The second problem arises when I try to do the filtering in MATLAB. The original signal shape is something like this: the sampling rate of the signal is 300 Hz.

    [b, a] = ellip(10, 1, 100, [0.5 50]/150, 'bandpass')
    fvtool(b, a, 'Fs', 300)

This is the frequency response of the filter. Using this line, I apply the filter to my signal:

    s_filtered = filtfilt(b, a, s)

and what it returns is all NaN! Am I doing something wrong?

• That's a terrible filter for pulsed signals, IMHO. The steep skirts are going to cause no end of ringing, and if you want to exclude environmental effects you should have zeros at the local line frequency and its harmonics (so, either every 50 Hz or every 60 Hz, or, if you want to be universal, both). I understand that you've got to use it, but you may want to study up on what's considered best. Dec 14 '20 at 18:13

Numerical problems. 64-bit double-precision floating point is not nearly enough to implement this filter in the way you have done it. Your filter is extremely steep: the poles are way too close to the unit circle and the order is way too high to implement it in transfer-function form (which is a bad idea anyway). Things to do:

1. Implement the filter as cascaded second-order sections.
2. Review the requirements for your filter. It seems way steeper than it needs to be. Such an aggressive filter will destroy a lot of time-domain detail since the filter has massive amounts of time-domain ringing. The settling time alone is multiple seconds, and by using filtfilt you double it again and create an enormous non-causality.
Filter design requires a lot of trade-offs; make sure you understand them and optimize for the specific requirements of your application.

• I'm trying to implement an article in code which preprocesses ECG by doing filtfilt on the ECG signal with an elliptic filter. The article mentioned the order of the filter and the cutoff frequencies, which are 10th order with 0.5 Hz and 50 Hz. I'll add a link to the article to the question. According to MATLAB's website, the only things that I can change are Rp and Rs, which I chose by looking at some examples in this link: mathworks.com/help/signal/ref/ellip.html Dec 14 '20 at 14:21
• Granted, I'm an audio guy and don't know a lot about ECG processing. However, your filter has a group delay of almost 29 s at 0.5 Hz. If you really want -100 dB below 0.5 Hz you would have to run the filter (even without filtfilt) for over 200 seconds for the attenuation to build up (and the first 200 seconds of the output would have to be discarded). If that works for you: great! But it's a very unusual filter spec, to say the least. Even MATLAB's fvtool() has trouble with it, as you can see by the "fuzz" at the low frequencies. Dec 14 '20 at 15:47
• As you mentioned, the problem was numerical and was solved by using 'sos'. I read the docs and saw at the end that using b, a may not work for orders of more than 4! Dec 14 '20 at 16:52
• Using transfer functions of order more than 2 is highly dis-recommended. It's pretty much Not Done in DSP, because the requirements on the precision of both the coefficients and the data paths get absurd. You design your filter as a collection of poles and zeros, then you implement the filter as a cascade of 2nd-order sections because that's what works. Using some extended numerical precision package (which is what I assume sos is) just unnecessarily masks an easily-solved problem.
Dec 14 '20 at 18:11
• SOS stands exactly for Second-Order Sections. Dec 14 '20 at 19:31

If you want to design a bandpass filter, you need to call ellip with half the desired filter order (check the corresponding MATLAB help page). I.e., for a $$10^{th}$$-order bandpass filter the correct call is

    [b,a] = ellip( 5, ... );

This might avoid (part of) the numerical problems you're running into now. Of course, cascading second-order sections may still be necessary.

• Thanks! Using sos solved the problem! Dec 14 '20 at 17:10
• @SepehrGolestanian: Good, but the filter order is still 20 instead of 10. Dec 14 '20 at 17:19
• You mean because of filtfilt? Dec 14 '20 at 19:21
• Oh, now I understand, thanks for letting me know. Dec 14 '20 at 19:23
• Is this the case in scipy.signal.ellip too? I'm asking because they didn't write anything in their documentation. Dec 14 '20 at 19:29
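The same fix carries over to Python. Here is a minimal scipy sketch (my own illustration, not the asker's MATLAB code, and with an invented test signal) that designs the bandpass directly in second-order sections and applies zero-phase filtering; note that ellip gets half the desired order, as discussed above:

```python
import numpy as np
from scipy import signal

fs = 300.0
# a 10th-order bandpass: pass N=5 to ellip, which doubles the order for 'bandpass'
sos = signal.ellip(5, 1, 100, [0.5, 50], btype='bandpass', fs=fs, output='sos')

# synthetic ECG-like test signal: narrow pulses + 50 Hz hum + slow baseline drift
t = np.arange(0, 10, 1 / fs)
s = (np.sin(2 * np.pi * 1.2 * t) ** 20      # pulse-like component
     + 0.5 * np.sin(2 * np.pi * 50 * t)     # mains interference
     + 0.3 * t / t[-1])                     # baseline wander

filtered = signal.sosfiltfilt(sos, s)       # zero-phase, numerically stable
assert np.all(np.isfinite(filtered))        # no NaNs, unlike the (b, a) form here
```

Requesting `output='sos'` keeps each biquad's poles well-conditioned, which is exactly the cascaded-second-order-sections advice from the accepted answer.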
Generalized Tate Cohomology

Memoirs of the American Mathematical Society 1995; 178 pp; softcover
Volume: 113
ISBN-10: 0-8218-2603-4
ISBN-13: 978-0-8218-2603-4
List Price: US$50
Individual Members: US$30
Institutional Members: US$40
Order Code: MEMO/113/543

This book presents a systematic study of a new equivariant cohomology theory $$t(k_G)^*$$ constructed from any given equivariant cohomology theory $$k^*_G$$, where $$G$$ is a compact Lie group. Special cases include Tate-Swan cohomology when $$G$$ is finite and a version of cyclic cohomology when $$G = S^1$$. The groups $$t(k_G)^*(X)$$ are obtained by suitably splicing the $$k$$-homology with the $$k$$-cohomology of the Borel construction $$EG\times _G X$$, where $$k^*$$ is the nonequivariant cohomology theory that underlies $$k^*_G$$. The new theories play a central role in relating equivariant algebraic topology with current areas of interest in nonequivariant algebraic topology. Their study is essential to a full understanding of such "completion theorems" as the Atiyah-Segal completion theorem in $$K$$-theory and the Segal conjecture in cohomotopy. When $$G$$ is finite, the Tate theory associated to equivariant $$K$$-theory is calculated completely, and the Tate theory associated to equivariant cohomotopy is shown to encode a mysterious web of connections between the Tate cohomology of finite groups and the stable homotopy groups of spheres.

Readership: Research mathematicians.

• Part II: Eilenberg-Maclane $$G$$-spectra and the spectral sequences
• Appendix A: Splittings of rational $$G$$-spectra for a finite group $$G$$
### Dustin Lennon
##### Applied Scientist
2648A NW 57th St Seattle, WA 98107 (206) 291-8893

#### Adaptive Rejection Sampling

adaptive sampling, rejection sampling, log-concave density, distribution

This work was originally published as an Inferentialist blog post.

#### Abstract

Adaptive rejection sampling is a statistical algorithm for generating samples from a univariate, log-concave density. Because of the adaptive nature of the algorithm, rejection rates are often very low. The exposition of this algorithm follows the example given in Davison's 2008 text, "Statistical Models."

#### Algorithm

The algorithm is fairly simple to describe:

• Establish a set of fixed points and evaluate the log-density, $$h$$, and the derivative of the log-density at the fixed points.
• Use these function evaluations to construct a piecewise-linear upper bound for the log-density function, $$h_+$$, via supporting tangent lines of the log-density at the fixed points.
• Let $$g_+ = \exp(h_+)$$. Because of the piecewise-linear construction of $$h_+$$, $$g_+$$ is piecewise-exponential, so sampling $$Y \sim g_+$$ is straightforward.
• Pick $$U \sim \mbox{Unif}(0,1)$$. If $$U \leq \exp \left( h(Y) - h_+(Y) \right)$$, accept $$Y$$; else, draw another sample from $$g_+$$.
• Any $$Y$$ rejected by the above criterion may be added to the initial set of fixed points and the piecewise-linear upper bound, $$h_+$$, adaptively updated.

#### An Example

We apply the algorithm to Example 3.22 in Davison. Here we specify a log-concave density function. Note $$\exp(h)$$ is the density, and $$h$$ is the concave log-density: $h(y) = ry - m \log(1 + \exp(y)) - \frac{(y-\mu)^2}{2\sigma^2} + c$ where $$y$$ is real valued, $$c$$ is a constant such that the integral of $$\exp(h)$$ has unit area, $$r = 2$$, $$m=10$$, $$\mu = 0$$, and $$\sigma^2 = 1$$.

#### R Code

First, define the function, h, and its derivative, dh.
    ## Davison, Example 3.22
    params.r = 2
    params.m = 10
    params.mu = 0
    params.sig2 = 1

    ## the (unnormalized) log of a log-concave density function
    ymin = -Inf
    ymax = Inf
    h = function(y){
      v = params.r*y - params.m * log(1+exp(y)) -
          (y-params.mu)^2/(2*params.sig2)   # plus normalizing const
      return(v)
    }

    ## derivative of h
    dh = function(y) {
      params.r - params.m * exp(y) / (1 + exp(y)) - (y-params.mu)/params.sig2
    }

Define the function that computes the intersection points of the supporting tangent lines. Suppose $$y_1, \dots, y_k$$ denotes the fixed points. Then, $z_j = y_j + \frac{h(y_j) - h(y_{j+1}) + (y_{j+1} - y_j) h'(y_{j+1})}{h'(y_{j+1}) - h'(y_j)}$

    ## compute the intersection points of the supporting tangent lines
    zfix = function(yfixed) {
      yf0 = head(yfixed, n=-1)
      yf1 = tail(yfixed, n=-1)
      zfixed = yf0 + (h(yf0) - h(yf1) + (yf1 - yf0)*dh(yf1)) / (dh(yf1) - dh(yf0))
      return(zfixed)
    }

and the piecewise-linear upper bound, $h_+(y) = \begin{cases} h(y_1) + ( y - y_1 ) h'(y_1) & y \leq z_1, \\ h(y_{j+1}) + ( y - y_{j+1} ) h'(y_{j+1}) & z_{j} \leq y \leq z_{j+1}, \\ h(y_{k}) + ( y - y_k ) h'(y_{k}) & z_{k} \leq y. \end{cases}$

    ## evaluate the unnormalized, piecewise-linear upper bound of the log-density
    hplus = function(y, yfixed) {
      res = rep(0, length(y))
      zfixed = zfix(yfixed)
      piecewise.idx = findInterval(y, c(ymin, zfixed, ymax))
      npieces = length(zfixed) + 2
      for(pidx in 1:npieces){
        yp = y[piecewise.idx == pidx]
        xx = h(yfixed[pidx]) + (yp - yfixed[pidx])*dh(yfixed[pidx])
        res[piecewise.idx == pidx] = xx
      }
      return(res)
    }

In the following plot, $$h$$ is shown in black, and $$h_+$$ is in green. The black circles are $$(y_i, h(y_i))$$, and the dashed green vertical lines are $$z_i$$.
We implement a vectorized function to compute the (normalized) CDF of $$g_+ = \exp(h_+)$$: \begin{align*} G_+(y) & = \int_{-\infty}^y \exp(h_+(x)) dx \\ & = \int_{-\infty}^{\min\{z_1,y\}} \exp(h_+(x)) dx \\ & \qquad + \int_{z_1}^{\min\{z_2,\max\{y, z_1\}\}} \exp(h_+(x)) dx \\ & \qquad + \cdots \\ & \qquad + \int_{z_{k-1}}^{\min\{z_k,\max\{y, z_{k-1}\}\}} \exp(h_+(x)) dx \\ & \qquad + \int_{z_k}^{\max\{z_k,y\}} \exp(h_+(x)) dx \end{align*} In particular, the above formulation means that we can precompute $$G_+(z_i)$$, so it is only necessary to compute the last, non-zero integral for each $$y$$.

    gplus.cdf = function(vals, yfixed) {
      # equivalently: integrate(function(z) exp(hplus(z, yfixed)),
      #                         lower=-Inf, upper = vals)
      zfixed = zfix(yfixed)
      zlen = length(zfixed)
      pct = numeric(length(vals))
      norm.const = 0
      for(zi in 0:zlen) {
        if(zi == 0) { zm = -Inf } else { zm = zfixed[zi] }
        if(zi == zlen) { zp = Inf } else { zp = zfixed[zi+1] }
        yp = yfixed[zi+1]
        ds = exp(h(yp))/dh(yp) * ( exp((zp - yp)*dh(yp)) - exp((zm - yp)*dh(yp)) )
        cidx = zm < vals & vals <= zp
        hidx = vals > zp
        pct[cidx] = pct[cidx] + exp(h(yp))/dh(yp) *
                    ( exp((vals[cidx] - yp)*dh(yp)) - exp((zm - yp)*dh(yp)) )
        pct[hidx] = pct[hidx] + ds
        norm.const = norm.const + ds
      }
      l = list( pct = pct / norm.const, norm.const = norm.const )
      return(l)
    }

Next, we write a function to sample from $$g_+$$. This proceeds via a probability integral transform, inverting realizations from a $$\mbox{Unif}(0,1)$$ distribution. Using the previous sum-of-integrals formulation for $$G_+$$, this requires a search across $$\{ G_+(z_1), \cdots G_+(z_{k-1}) \}$$ and then inverting a single integral.
    ## sample from the gplus density
    gplus.sample = function(samp.size, yfixed) {
      zfixed = zfix(yfixed)
      gp = gplus.cdf(zfixed, yfixed)
      zpct = gp$pct
      norm.const = gp$norm.const
      ub = c(0, zpct, 1)
      unif.samp = runif(samp.size)
      fidx = findInterval(unif.samp, ub)
      num.intervals = length(ub) - 1
      zlow = c(ymin, zfixed)
      res = rep(NaN, length(unif.samp))
      for(ii in 1:num.intervals) {
        ui = unif.samp[ fidx == ii ]
        if(length(ui) == 0) { next }
        ## Invert the gplus CDF
        yp = yfixed[ii]
        zm = zlow[ii]
        tmp = (ui - ub[ii]) * dh(yp) * norm.const / exp(h(yp)) +
              exp( (zm - yp)*dh(yp) )
        tmp = yp + log(tmp) / dh(yp)
        res[ fidx == ii ] = tmp
      }
      return(res)
    }

#### Results

The results are impressive. It takes only a handful of fixed points to reach an acceptance rate that exceeds 95%. The figure below shows this convergence. In each plot, 10000 samples are taken from $$g_+$$. Blue dots show rejected samples, and gray dots show samples from the target density, $$g$$. After each experiment, two of the rejected points are chosen as new fixed points. The black dots, and corresponding rug plot, indicate these fixed points. With only 9 fixed points, the acceptance rate is 96%.
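To sanity-check the tangent-line construction outside R, here is a small Python sketch of my own (using a standard normal log-density rather than Davison's example) verifying that the two-point envelope $$h_+$$ dominates $$h$$:

```python
import numpy as np

# standard normal (up to a constant): log-density and its derivative
h = lambda y: -0.5 * y ** 2
dh = lambda y: -y

yfix = np.array([-1.0, 1.0])   # two fixed points
# intersection of the two supporting tangent lines (the z_j formula above)
z = yfix[0] + (h(yfix[0]) - h(yfix[1])
               + (yfix[1] - yfix[0]) * dh(yfix[1])) / (dh(yfix[1]) - dh(yfix[0]))

def hplus(y):
    # tangent line at the fixed point governing the region containing y
    i = 0 if y <= z else 1
    return h(yfix[i]) + (y - yfix[i]) * dh(yfix[i])

ys = np.linspace(-3, 3, 601)
assert all(hplus(y) >= h(y) - 1e-12 for y in ys)   # the envelope dominates h
```

For a concave $$h$$, every tangent line lies above the graph, which is exactly why rejection sampling against $$g_+ = \exp(h_+)$$ is valid.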
## Finding the cone-facet correspondence in the normal fan Questions and problems about using polymake go here. MattLarson Posts: 7 Joined: 03 Jun 2018, 22:57 ### Finding the cone-facet correspondence in the normal fan If I have a (in my case unbounded) polytope, then I can compute the normal fan using normal_fan. Then there is an inclusion-reversing correspondence between the cones of the normal fan and the faces of the polytope. Given a cone of the normal fan, I would like to find the corresponding face of the polytope. It seems like polymake must compute this bijection when finding the normal fan. Is there an easy/efficient way to find the corresponding face? paffenholz Developer Posts: 186 Joined: 24 Dec 2010, 13:47 ### Re: Finding the cone-facet correspondence in the normal fan If computed with the function normal_fan from application fan, the rays of the normal fan come in the same order as the facets of the polytope, and the facet-vertex incidence matrix (FACETS_THRU_VERTICES) of the polytope has the same row and column order as the ray-maximal-cone incidence matrix MAXIMAL_CONES. So if your cone is spanned by the rays with indices i_1, ..., i_k, then the corresponding face is the intersection of the facets with indices i_1, ..., i_k. The vertices spanning this face are the intersections of the rows with these indices (set intersection is done with a $*$ in polymake). I don't think we have a function that tells you the index of the node in the HASSE_DIAGRAM of the polytope. Andreas MattLarson Posts: 7 Joined: 03 Jun 2018, 22:57 ### Re: Finding the cone-facet correspondence in the normal fan Thank you so much!
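The inclusion-reversing correspondence itself can be illustrated without polymake. Below is a small numpy sketch of my own (a unit square rather than the poster's unbounded polyhedron): a generic linear functional in the cone spanned by the normals of facets {0, 1} is maximised exactly on the face those facets cut out, here the vertex (1, 1):

```python
import numpy as np

# unit square [0,1]^2 given by the facet inequalities n_i . x <= b_i
N = np.array([[1, 0], [0, 1], [-1, 0], [0, -1]])   # outer facet normals
b = np.array([1, 1, 0, 0])
verts = np.array([[0, 0], [1, 0], [1, 1], [0, 1]])

# the normal-fan cone spanned by the normals of facets {0, 1} should
# correspond to the intersection of those facets: the vertex (1, 1)
c = 0.3 * N[0] + 0.7 * N[1]          # a point in the interior of that cone
values = verts @ c
maximizers = verts[values == values.max()]
assert (maximizers == [1, 1]).all()
```

The same intersection-of-facets recipe from the answer above applies in any dimension; the numpy check merely makes the bijection concrete on a toy example.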
## Intermediate Algebra (6th Edition) $-2x^{3}+8x^{2}-19x-2$ We are asked to subtract $(9x+8)$ from the sum of $(3x^{2}-2x-x^{3}+2)$ and $(5x^{2}-8x-x^{3}+4)$. This is equivalent to the expression $(3x^{2}-2x-x^{3}+2)+(5x^{2}-8x-x^{3}+4)-(9x+8)$. We can subtract the third term from the first two terms by adding the opposite of the third term to the first two. $(3x^{2}-2x-x^{3}+2)+(5x^{2}-8x-x^{3}+4)+(-9x-8)$ Next, we can combine like terms. $(-x^{3}-x^{3})+(3x^{2}+5x^{2})+(-2x-8x-9x)+(2+4-8)=-2x^{3}+8x^{2}-19x-2$
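The combination above can be verified symbolically. A quick check with Python's sympy (an added illustration, not part of the textbook solution):

```python
from sympy import symbols, expand

x = symbols('x')
# subtract (9x + 8) from the sum of the two polynomials
result = expand((3*x**2 - 2*x - x**3 + 2)
                + (5*x**2 - 8*x - x**3 + 4)
                - (9*x + 8))
assert result == -2*x**3 + 8*x**2 - 19*x - 2
```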
I am trying to gather historical data for experimental reasons (intellectual curiosity) and am having trouble understanding how that data is calculated. First, some data gathered on AAPL from Feb. 10th, 2015 at opening. DataA seems to provide every transaction that took place during the prescribed day; is that correct, or am I reading the data wrong? If I take the first line of dataC (close, high, low, open, volume) = (120.3, 120.31, 120.16, 120.17, 646886), then it corresponds to the first few introductory transactions in dataA. Likewise, dataD also corresponds to the transactions of dataA, but over several minutes. In other words, dataC and dataD seem like estimations (using close, high, low, open, volume) of dataA. Is this correct? If this is true, then dataA is "raw data" and awesome for analytical reasons. However, I am confused by dataB. I suppose dataB is the bid/ask spread, but if I go to the following line:

    20150210T150001 120.54 300 300 120.55 4600 4600

then the bid/ask seems to be 120.54/120.55, which seems entirely inaccurate compared to dataA (the raw data of actual transactions)? Even Google indicates that the (c,h,l,o,v) is (120.39, 120.58, 120.25, 120.3, 576584) during the first minute of opening, which doesn't seem close to the 120.54/120.55 spread?

• Is dataB still available? When I try to access it with more recent dates I am finding no data. – BCR Mar 21 '15 at 18:10

Data set A does look like transactions, but I would hesitate to say that it is every transaction. You would need to investigate the data source and how transactions are defined. Data set B looks like BBO (best bid and offer). Data sets C and D are not estimations; they're aggregations to a higher periodicity. You need to investigate the data sources for data sets A and B. The US stock market is a distributed system. There are many trading venues. A and B could be from a specific venue, or a specific aggregation of venues, while the data on Google and Yahoo is likely from the NBBO (national BBO).
In short, stock market data is complex.

• Thanks for the information. I would disagree with "not estimations". Close, High, Low, Open inherently leave out the raw data and hence are simply statistical estimations of how that raw data behaved. – Bobby Ocean Feb 17 '15 at 0:11
• They are summaries and they are "not estimations." The answer pointed out there may be things missing from the data and also correctly pointed out that those numbers are usually summaries of the available data. An estimate is a different thing entirely. – Nathan S. Mar 21 '15 at 18:32

The data from hopey.netfonds is only data from the exchange. In this case all transactions you see there are NASDAQ quotations, hence the "O" after AAPL. It fails to provide transactions from other venues such as BATS etc., which is what free data providers such as Google Finance usually use.

I know this is an old post, but I just came across it while researching the same data. As such, I thought I might provide another explanation to the above question. The netfonds website provides both a 'tradedump' and a 'posdump'. The trade dump is simply tick-level data that shows the movement of the asset from trade to trade. The posdump, 'dataB' in your case, provides insight into the market participants. This is important because you can identify irregularities in bid and ask prices and therefore capitalize on the difference in supply and demand. For further reading, Investopedia provides an excellent explanation: http://www.investopedia.com/terms/o/order-book.asp
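To make the "aggregation, not estimation" point concrete, here is a small pandas sketch with hypothetical tick rows (invented numbers in the spirit of dataA) rolled up into a one-minute OHLCV bar in the spirit of dataC:

```python
import pandas as pd

# hypothetical tick data ("dataA"-style): one row per trade
ticks = pd.DataFrame(
    {"price": [120.17, 120.30, 120.16, 120.31, 120.30],
     "size":  [100, 200, 150, 96, 100]},
    index=pd.to_datetime(
        ["2015-02-10 15:00:01", "2015-02-10 15:00:12",
         "2015-02-10 15:00:30", "2015-02-10 15:00:45",
         "2015-02-10 15:00:59"]),
)

# aggregate to 1-minute OHLCV bars ("dataC"-style): a summary, not an estimate
bars = ticks["price"].resample("1min").ohlc()
bars["volume"] = ticks["size"].resample("1min").sum()
print(bars)
```

Every value in the bar is computed exactly from the ticks it covers (first, max, min, last, sum); what is lost is the within-minute ordering, which is why tick data is strictly more informative.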
## Two circular coils are concentric and lie in the same plane. The inner coil contains 140 turns of wire, has a radius of 0.015 m, and carries

Question

Two circular coils are concentric and lie in the same plane. The inner coil contains 140 turns of wire, has a radius of 0.015 m, and carries a current of 7.2 A. The outer coil contains 180 turns and has a radius of 0.023 m. What must be the magnitude and direction (relative to the current in the inner coil) of the current in the outer coil, so that the net magnetic field at the common center of the two coils is zero?

## Answers

Given information:
Current = Ii = 7.2 A
Number of turns, inner coil = Ni = 140
Number of turns, outer coil = No = 180
Radius, inner coil = ri = 0.015 m
Radius, outer coil = ro = 0.023 m

Required information: outer coil current = Io = ?

Io = 8.6 A

Step-by-step explanation:

We have two coils; subscript i denotes the inner coil and o denotes the outer coil. According to the Biot-Savart law, the field at the center of a flat coil is B = μ₀NI/(2r), so we equate the inner and outer magnetic field magnitudes:

Bi = Bo
μ₀IiNi/(2ri) = μ₀IoNo/(2ro)

μ₀ and 2 cancel out:

Io = (Ii·Ni·ro)/(No·ri) = (7.2·140·0.023)/(180·0.015) = 8.6 A

Therefore, a current of 8.6 A needs to flow in the opposite direction so that the outer coil's magnetic field is opposite and the net magnetic field is zero.
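The arithmetic is easy to check numerically (an added check, not part of the original answer):

```python
# numeric check of Io = (Ii * Ni * ro) / (No * ri)
Ii, Ni, ri = 7.2, 140, 0.015
No, ro = 180, 0.023
Io = Ii * Ni * ro / (No * ri)
assert abs(Io - 8.6) < 0.05   # evaluates to about 8.59 A, rounding to 8.6 A
```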
# Application of Dynkin's π-λ theorem

When dealing with collections of sets, Dynkin's systems provide a simple but powerful tool for extending properties of smaller collections to bigger ones. First, we define $\pi$- and $\lambda$-systems. A collection of sets $\mathcal{P}$ is a $\pi$-system if $A, B \in \mathcal{P}$ implies $A \cap B \in \mathcal{P}$. A collection of sets $\mathcal{L}$ is a $\lambda$-system on $\Omega$ if the following hold. (1) $\Omega \in \mathcal{L}$ (2) $A \in \mathcal{L} \Rightarrow A^c \in \mathcal{L}$ (3) $A_i \in \mathcal{L}, i=1,2,\cdots$, where the $A_i$'s are disjoint $\Rightarrow$ $\uplus_{i=1}^\infty A_i \in \mathcal{L}$ Note: Sometimes a $\lambda$-system is referred to as a "Dynkin system". There are several alternative definitions of a $\lambda$-system. They are all equivalent, and the equivalences can be proved easily. Here, I introduce only the one which I find easiest to use when proving that some collection of sets is a $\lambda$-system. Then we state the main theorem. Suppose $\mathcal{P}$ is a $\pi$-system and $\mathcal{L}$ is a $\lambda$-system. If $\: \mathcal{P} \subset \mathcal{L}$, then $\sigma(\mathcal{P}) \subset \mathcal{L}$. Here, $\sigma(\mathcal{P})$ is the smallest $\sigma$-field containing $\mathcal{P}$. With the theorem, if some property holds for a $\pi$-system $\mathcal{P}$, we can extend the same property to hold for $\sigma(\mathcal{P})$. I will review three examples. Suppose $\mu_1$, $\mu_2$ are probability measures on $(\Omega, \mathcal{F})$ and $\mathcal{A} \subset \mathcal{F}$ is a $\pi$-system such that $\sigma(\mathcal{A}) = \mathcal{F}$. If $\mu_1(A) = \mu_2(A)$ for all $A \in \mathcal{A}$, then $\mu_1(A) = \mu_2(A)$ for all $A \in \mathcal{F}$. Let $\mathcal{L} = \{ B \in \mathcal{F} : \mu_1(B) = \mu_2(B) \}$; then by construction $\mathcal{A} \subset \mathcal{L}$, and it is clear that $\mathcal{L}$ is a $\lambda$-system.
By the $\pi$-$\lambda$ theorem, $\sigma(\mathcal{A}) = \mathcal{F} \subset \mathcal{L}$, which gives the desired result. Next, suppose $\mathcal{A}, \mathcal{B}$ are $\pi$-systems and are subsets of $\mathcal{F}$. If $\mathcal{A}, \mathcal{B}$ are independent, then $\sigma(\mathcal{A}), \sigma(\mathcal{B})$ are independent. Let $\mu$ be a probability measure on $(\Omega, \mathcal{F})$. For a given $A \in \mathcal{A}$, let $\mathcal{L}_A = \{ B \in \mathcal{F}: \mu(A \cap B) = \mu(A)\mu(B) \}$. Then $\mathcal{L}_A$ is a $\lambda$-system that contains $\mathcal{B}$, so $\sigma(\mathcal{B}) \subset \mathcal{L}_A$. Since $A \in \mathcal{A}$ was arbitrarily chosen, $\sigma(\mathcal{B})$ is independent of $\mathcal{A}$. Now for a given $B \in \sigma(\mathcal{B})$, let $\mathcal{L}_B = \{ A \in \mathcal{F}: \mu(A \cap B) = \mu(A)\mu(B) \}$. Then, similarly to the above, $\mathcal{L}_B$ is a $\lambda$-system that contains $\mathcal{A}$, so $\sigma(\mathcal{A}) \subset \mathcal{L}_B$, which leads to the desired result. The last example does not actually use the $\pi$-$\lambda$ theorem, but its proof uses a similar idea. Suppose $\mathcal{A} \subset \mathcal{F}$ satisfies $\sigma(\mathcal{A}) = \mathcal{B}(\mathbb{R})$, and $X: (\Omega, \mathcal{F}) \to (\mathbb{R}, \mathcal{B}(\mathbb{R}))$ is a function defined on a probability space. If $X^{-1}(A) \in \mathcal{F}$, $\forall A \in \mathcal{A}$, then $X$ is a random variable.
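A concrete instance of the first example (added here as an illustration; not in the original post): take $\Omega = \mathbb{R}$ and

$\mathcal{A} = \{ (-\infty, a] : a \in \mathbb{R} \}.$

This is a $\pi$-system, since $(-\infty, a] \cap (-\infty, b] = (-\infty, \min(a,b)]$, and $\sigma(\mathcal{A}) = \mathcal{B}(\mathbb{R})$. Hence two probability measures on $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$ with the same distribution function $F(a) = \mu((-\infty, a])$ must agree on every Borel set, which is exactly why equality of CDFs implies equality of laws.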
The following was working for me. A is a matrix of shape 6x35, taken (almost) randomly. (There was a rank check after getting it.) C is the linear code associated to it. B is the parity check matrix for the code.

    sage: A = MatrixSpace( GF(2), 6, 35 ).random_element()
    sage: A
    [1 1 1 1 0 1 1 0 0 1 1 0 1 0 0 0 1 1 1 1 1 0 0 1 1 1 0 1 1 0 1 1 1 0 1]
    [1 0 0 0 0 1 0 0 1 0 1 1 0 1 1 0 0 0 1 1 0 1 0 1 1 1 0 1 1 0 0 0 1 1 1]
    [1 0 0 0 0 0 1 1 1 1 1 1 1 1 0 1 1 1 0 0 1 0 0 1 1 1 0 1 0 1 0 1 1 0 1]
    [0 1 1 0 1 1 1 1 1 1 0 0 0 0 0 1 0 1 1 0 1 1 1 1 1 0 0 0 1 1 0 1 0 1 1]
    [0 1 0 1 0 1 0 0 0 0 1 1 0 1 1 1 0 1 0 0 1 0 0 0 0 0 1 1 0 0 1 0 0 1 1]
    [1 0 0 0 1 0 1 0 0 1 0 0 0 1 0 0 1 0 1 0 1 1 1 1 1 1 1 0 1 0 1 0 0 1 0]
    sage: A.rank()
    6
    sage: # this is ok...
    sage: C = LinearCode(A)
    sage: B = C.parity_check_matrix()
    sage: B
    29 x 35 dense matrix over Finite Field of size 2 (use the '.str()' method to see the entries)
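The generator/parity-check relationship that Sage verifies internally can be sanity-checked with plain numpy. A sketch with a hypothetical systematic code of the same parameters (my own random matrix, not the one above): for a generator matrix G = [I | P] in standard form, H = [Pᵀ | I] is a parity check matrix, and every codeword lies in its null space over GF(2).

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 6, 35
P = rng.integers(0, 2, size=(k, n - k))

# systematic generator matrix G = [I_k | P] and parity-check H = [P^T | I_{n-k}]
G = np.hstack([np.eye(k, dtype=int), P])
H = np.hstack([P.T, np.eye(n - k, dtype=int)])

assert H.shape == (n - k, n)           # 29 x 35, matching the Sage output above
assert np.all((G @ H.T) % 2 == 0)      # every codeword is in the null space of H
```

Over GF(2), G·Hᵀ = P + P = 0, which is the defining property of a parity check matrix.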
# When your friend fails an exam, you feel sad; when your friend comes first, you feel even worse.

• Blog posts (1208) • Resources (12) • Forum (1) • Q&A (1)

#### Original: [leetcode] 1594. Maximum Non Negative Product in a Matrix

Description: You are given a rows x cols matrix grid. Initially, you are located at the top-left corner (0, 0), and in each step, you can only move right or down in the matrix. Among all possible paths starting from the top-left corner (0, 0) and ending in
2020-11-26 00:47:58 12

#### Original: A Python linear-regression fitting problem

2020-11-25 23:36:12 8

#### Original: [leetcode] 1504. Count Submatrices With All Ones

Description: Given a rows * columns matrix mat of ones and zeros, return how many submatrices have all ones. Example 1: Input: mat = [[1,0,1], [1,1,0], [1,1,0]] Output: 13 Explanation: There are 6 rectangles of side 1x1. There ar
2020-11-25 19:10:08 4

#### Original: MongoDB string date-range queries

2020-11-23 15:24:27 22

#### Original: [leetcode] 1546. Maximum Number of Non-Overlapping Subarrays With Sum Equals Target

Description: Given an array nums and an integer target. Return the maximum number of non-empty non-overlapping subarrays such that the sum of values in each subarray is equal to target. Example 1: Input: nums = [1,1,1,1,1], target = 2 Output: 2 Explanation:
2020-11-23 13:00:19 15

#### Original: [leetcode] 1477. Find Two Non-overlapping Sub-arrays Each With Target Sum

Description: Given an array of integers arr and an integer target. You have to find two non-overlapping sub-arrays of arr each with sum equal target. There can be multiple answers so you have to find an answer where the sum of the lengths of the two sub-arr
2020-11-22 18:10:12

#### Original: [leetcode] 1262. Greatest Sum Divisible by Three

Description: Given an array nums of integers, we need to find the maximum possible sum of elements of the array such that it is divisible by three. Example 1: Input: nums = [3,6,5,1,8] Output: 18 Explanation: Pick numbers 3, 6, 1 and 8 their sum is 18 (maxi
2020-11-22 15:20:00 1

#### Original: [leetcode] 1227.
Airplane Seat Assignment Probability

Description: n passengers board an airplane with exactly n seats. The first passenger has lost the ticket and picks a seat randomly. But after that, the rest of the passengers will: Take their own seat if it is still available, Pick other seats randomly when t
2020-11-22 12:51:58

#### Original: [leetcode] 1223. Dice Roll Simulation

Description: A die simulator generates a random number from 1 to 6 for each roll. You introduced a constraint to the generator such that it cannot roll the number i more than rollMax[i] (1-indexed) consecutive times. Given an array of integers rollMax and a
2020-11-22 12:14:15

#### Original: [leetcode] 1218. Longest Arithmetic Subsequence of Given Difference

Description: Given an integer array arr and an integer difference, return the length of the longest subsequence in arr which is an arithmetic sequence such that the difference between adjacent elements in the subsequence equals difference. Example 1: Input:
2020-11-22 00:21:48

#### Original: [leetcode] 1405. Longest Happy String

Description: A string is called happy if it does not have any of the strings 'aaa', 'bbb' or 'ccc' as a substring. Given three integers a, b and c, return any string s, which satisfies following conditions: s is happy and longest possible. s contains at mo
2020-11-22 00:04:05

#### Original: [leetcode] 1191. K-Concatenation Maximum Sum

Description: Given an integer array arr and an integer k, modify the array by repeating it k times. For example, if arr = [1, 2] and k = 3 then the modified array will be [1, 2, 1, 2, 1, 2]. Return the maximum sub-array sum in the modified array. Note that
2020-11-21 23:00:02 1

#### Original: [leetcode] 1186. Maximum Subarray Sum with One Deletion

Description: Given an array of integers, return the maximum sum for a non-empty subarray (contiguous elements) with at most one element deletion. In other words, you want to choose a subarray and optionally delete one element from it so that there is still
2020-11-21 20:35:10 14

#### Original: [leetcode] 1155.
Number of Dice Rolls With Target Sum

Description: You have d dice, and each die has f faces numbered 1, 2, …, f. Return the number of possible ways (out of f^d total ways) modulo 10^9 + 7 to roll the dice so the sum of the face-up numbers equals target. Example 1: Input: d = 1, f = 6, target =
2020-11-21 19:15:07 8

#### Original: [leetcode] Longest Common Subsequence

Description: Given two strings text1 and text2, return the length of their longest common subsequence. A subsequence of a string is a new string generated from the original string with some characters (can be none) deleted without changing the relative order
2020-11-21 00:01:42 26

#### Original: [leetcode] 1314. Matrix Block Sum

Description: Given a m * n matrix mat and an integer K, return a matrix answer where each answer[i][j] is the sum of all elements mat[r][c] for i - K <= r <= i + K, j - K <= c <= j + K, and (r, c) is a valid position in the matrix. Example 1: In
2020-11-20 20:58:17 9

#### Original: [leetcode] 1140. Stone Game II

Description: Alice and Bob continue their games with piles of stones. There are a number of piles arranged in a row, and each pile has a positive integer number of stones piles[i]. The objective of the game is to end with the most stones. Alice and Bob ta
2020-11-20 20:18:46 9

#### Original: [leetcode] 1139. Largest 1-Bordered Square

Description: Given a 2D grid of 0s and 1s, return the number of elements in the largest square subgrid that has all 1s on its border, or 0 if such a subgrid doesn't exist in the grid. Example 1: Input: grid = [[1,1,1],[1,0,1],[1,1,1]] Output: 9 Example 2:
2020-11-19 20:31:08 11

#### Original: [leetcode] 1105. Filling Bookcase Shelves

Description: We have a sequence of books: the i-th book has thickness books[i][0] and height books[i][1]. We want to place these books in order onto bookcase shelves that have total width shelf_width. We choose some of the books to place on this shelf (such
2020-11-19 19:05:25 12

#### Original: [leetcode] 1049.
Last Stone Weight II DescriptionWe have a collection of rocks, each rock has a positive integer weight.Each turn, we choose any two rocks and smash them together. Suppose the stones have weights x and y with x <= y. The result of this smash is:If x == y, both stones a 2020-11-18 23:14:28 16 #### 原创 [leetcode] 1048. Longest String Chain DescriptionGiven a list of words, each word consists of English lowercase letters.Let’s say word1 is a predecessor of word2 if and only if we can add exactly one letter anywhere in word1 to make it equal to word2. For example, “abc” is a predecessor of 2020-11-18 21:57:31 18 #### 原创 字符串与子字符串前缀匹配算法Z-algorithm(比较难理解) 2020-11-18 13:32:28 16 #### 原创 [leetcode] 1043. Partition Array for Maximum Sum DescriptionGiven an integer array arr, you should partition the array into (contiguous) subarrays of length at most k. After partitioning, each subarray has their values changed to become the maximum value of that subarray.Return the largest sum of the g 2020-11-16 22:43:17 14 #### 原创 [leetcode] 1039. Minimum Score Triangulation of Polygon DescriptionGiven N, consider a convex N-sided polygon with vertices labelled A[0], A[i], …, A[N-1] in clockwise order.Suppose you triangulate the polygon into N-2 triangles. For each triangle, the value of that triangle is the product of the labels of t 2020-11-16 20:37:46 14 #### 原创 pytorch transformers从头开始实现情感分析模型 2020-11-15 18:03:49 39 #### 原创 [leetcode] 53. Maximum Subarray DescriptionGiven an integer array nums, find the contiguous subarray (containing at least one number) which has the largest sum and return its sum.Follow up: If you have figured out the O(n) solution, try coding another solution using the divide and conq 2020-11-15 15:36:53 18 #### 原创 python K-Means算法从头实现 2020-11-15 13:19:15 22 #### 原创 [leetcode] 1027. 
Longest Arithmetic Subsequence DescriptionGiven an array A of integers, return the length of the longest arithmetic subsequence in A.Recall that a subsequence of A is a list A[i_1], A[i_2], …, A[i_k] with 0 <= i_1 < i_2 < … < i_k <= A.length - 1, and that a sequence B i 2020-11-14 21:23:59 18 #### 原创 [leetcode] 1025. Divisor Game DescriptionAlice and Bob take turns playing a game, with Alice starting first.Initially, there is a number N on the chalkboard. On each player’s turn, that player makes a move consisting of:Choosing any x with 0 < x < N and N % x == 0.Replacing 2020-11-14 20:02:30 18 #### 原创 [leetcode] 1024. Video Stitching DescriptionYou are given a series of video clips from a sporting event that lasted T seconds. These video clips can be overlapping with each other and have varied lengths.Each video clip clips[i] is an interval: it starts at time clips[i][0] and ends at 2020-11-14 15:22:47 22 #### 原创 [leetcode] 983. Minimum Cost For Tickets DescriptionIn a country popular for train travel, you have planned some train travelling one year in advance. The days of the year that you will travel is given as an array days. Each day is an integer from 1 to 365.Train tickets are sold in 3 differen 2020-11-14 14:09:29 18 #### 原创 [leetcode] 935. Knight Dialer DescriptionThe chess knight has a unique movement, it may move two squares vertically and one square horizontally, or two squares horizontally and one square vertically (with both forming the shape of an L). The possible movements of chess knight are show 2020-11-14 11:43:41 22 #### 原创 [leetcode] 931. Minimum Falling Path Sum DescriptionGiven a square array of integers A, we want the minimum sum of a falling path through A.A falling path starts at any element in the first row, and chooses one element from each row. The next row’s choice must be in a column that is different 2020-11-14 10:16:07 16 #### 原创 [leetcode] 898. 
Bitwise ORs of Subarrays DescriptionWe have an array A of non-negative integers.For every (contiguous) subarray B = [A[i], A[i+1], …, A[j]] (with i <= j), we take the bitwise OR of all the elements in B, obtaining a result A[i] | A[i+1] | … | A[j].Return the number of possib 2020-11-14 00:37:27 57 #### 原创 [leetcode] 877. Stone Game DescriptionAlex and Lee play a game with piles of stones. There are an even number of piles arranged in a row, and each pile has a positive integer number of stones piles[i].The objective of the game is to end with the most stones. The total number of 2020-11-13 23:44:04 19 #### 原创 [leetcode] 799. Champagne Tower DescriptionWe stack glasses in a pyramid, where the first row has 1 glass, the second row has 2 glasses, and so on until the 100th row. Each glass holds one cup of champagne.Then, some champagne is poured into the first glass at the top. When the topmo 2020-11-13 19:48:36 20 #### 原创 [leetcode] 1631. Path With Minimum Effort DescriptionYou are a hiker preparing for an upcoming hike. You are given heights, a 2D array of size rows x columns, where heights[row][col] represents the height of cell (row, col). You are situated in the top-left cell, (0, 0), and you hope to travel to 2020-11-13 13:24:12 32 #### [leetcode] 1642. Furthest Building You Can Reach DescriptionYou are given an integer array heights representing the heights of buildings, some bricks, and some ladders.You start your journey from building 0 and move to the next building by possibly using bricks or ladders.While moving from building i 2020-11-13 00:49:38 97 #### 原创 [leetcode] 1562. Find Latest Group of Size M DescriptionGiven an array arr that represents a permutation of numbers from 1 to n. You have a binary string of size n that initially has all its bits set to zero.At each step i (assuming both the binary string and arr are 1-indexed) from 1 to n, the bit 2020-11-13 00:12:31 28 #### 原创 [leetcode] 1283. 
Find the Smallest Divisor Given a Threshold DescriptionGiven an array of integers nums and an integer threshold, we will choose a positive integer divisor and divide all the array by it and sum the result of the division. Find the smallest divisor such that the result mentioned above is less than o 2020-11-12 20:14:46 31 #### 异步框架上传客户端示例 android异步框架应用的一个小小的示例 2014-09-16 2020-05-06 #### RotateDemo.rar QT5版本的旋转图片的动画,编译器用的mingW,代码进行了重构改良,文章请参考: https://blog.csdn.net/w5688414/article/details/90072287 2019-05-10 #### springboot getopenid demo springboot实现用户信息授权获取用户的id, 写的教程地址为https://blog.csdn.net/w5688414/article/details/88541743 2019-03-13 #### pytorch 0.3.1 python3.6 CPU版本whl pytorch 0.3.1 python3.6 CPU版本whl,这个属于老版本了,在官网上都不容易找到,我这里分享出来 2019-03-11 2018-11-29 #### VGG_ILSVRC_16_layers_fc_reduced.h5 VGG_ILSVRC_16_layers_fc_reduced.h5文件,用于ssd keras模型,考虑到国内没有搜到该资源,我来当当搬运工 2018-11-07 2017-12-25 2017-11-15 2017-09-14 2015-01-05
{}
# How do I plot a line delineating a subset of values on a 3D surface plot?

I have the following surface plot:

    Plot3D[-4.53 + 2.67*x + 2.78*y - 1.09*x*y, {x, 1.8, 2.6}, {y, 1.8, 2.6},
     PlotRange -> {1.7, 2.6}, ColorFunction -> "GrayTones",
     Ticks -> {{1.8, 2., 2.2, 2.4, 2.6}, {1.8, 2., 2.2, 2.4, 2.6},
       {1.8, 2., 2.2, 2.4, 2.6}},
     LabelStyle -> Opacity[0], BoxRatios -> {1, 1, 1}]

There is no problem with this plot. What I now want to do is to superimpose some line(s) or ellipse(s) on its surface that delineate(s) all values on the surface in which z (the function) is between the corresponding x and y values, that is, all values for which x < z < y OR y < z < x. I imagine that the resulting line or lines may extend from one edge of the surface to another, or there may be a couple of ellipses. Despite my efforts, I have been unable to mentally visualize the actual result, only possible results like those I mention above.

Finally, I have almost no expertise at Mathematica. The above surface plot took me weeks to figure out, and most of that was through trial and error, not any sort of knowledge of the language or syntax of Mathematica.

Follow-up: I also wish to delineate where x is between z and y. I came up with:

    Plot3D[-4.53 + 2.67 x + 2.78 y - 1.09 x y, {x, 1.8, 2.6}, {y, 1.8, 2.6},
     PlotRange -> {1.7, 2.6},
     Ticks -> {{1.8, 2., 2.2, 2.4, 2.6}, {1.8, 2., 2.2, 2.4, 2.6},
       {1.8, 2., 2.2, 2.4, 2.6}},
     LabelStyle -> Opacity[0], BoxRatios -> Automatic,
     MeshFunctions -> {Function[{x, y, z}, z - x], Function[{x, y, z}, z - y],
       Function[{x, y, z}, x - y], Function[{x, y, z}, x - z]},
     Mesh -> {{0}, {0}, {0}, {0}}, MeshShading -> None, Lighting -> "Neutral"]

The problem is that either there is no portion where y < x < z, or I am incorrectly specifying this. Also, I cannot for the life of me figure out how to do MeshShading to shade each of the regions. Thank you in advance.

• Take a look at MeshFunctions. – Kuba Nov 26 '14 at 16:14
• Welcome to Mathematica.SE! I suggest the following: 1) As you receive help, try to give it too, by answering questions in your area of expertise. 2) Read the faq! 3) When you see good questions and answers, vote them up by clicking the gray triangles, because the credibility of the system is based on the reputation gained by users sharing their knowledge. Also, please remember to accept the answer, if any, that solves your problem, by clicking the checkmark sign! – user9660 Nov 26 '14 at 16:21

The region where $z$ is between $x$ and $y$ is bounded by the curves $z = x$ and $z = y$, or equivalently, $z - x = 0$ and $z - y = 0$. To draw these curves in the plot, the usual trick is to supply the left-hand sides of the equations to MeshFunctions and specify that Mesh lines be drawn only when they are zero.

    Plot3D[-4.53 + 2.67 x + 2.78 y - 1.09 x y, {x, 1.8, 2.6}, {y, 1.8, 2.6},
     PlotRange -> {1.7, 2.6},
     Ticks -> {{1.8, 2., 2.2, 2.4, 2.6}, {1.8, 2., 2.2, 2.4, 2.6},
       {1.8, 2., 2.2, 2.4, 2.6}},
     LabelStyle -> Opacity[0], BoxRatios -> Automatic,
     MeshFunctions -> {Function[{x, y, z}, z - x], Function[{x, y, z}, z - y]},
     Mesh -> {{0}, {0}}, MeshShading -> {{White, Pink}, {Pink, White}},
     Lighting -> "Neutral"]

I've also used MeshShading to highlight in pink the region where the first function is positive and the second is negative, or vice versa, which is the same as the region you seek. Also note that, for example, Function[{x, y, z}, z - x] can also be written as #3 - #1 &, which is the syntax in which you'll often find examples written in the MeshFunctions documentation.

• Raul, this is perfect. Thank you so much. This is exactly what I was looking for -- more, in fact, with the nice choice of the pink shading. Thanks again. – kwsockman Nov 26 '14 at 18:46
• I'm glad to hear it. You can accept the answer by clicking the checkbox to the left if you want to indicate that it solves your problem. – Rahul Nov 27 '14 at 7:28
• Thanks again, Raul and Kuba. Initially, I thought that the answer to my question above would get me started on figuring out for myself what I also need to do. But I have not succeeded, so I am posting here another question. – kwsockman Nov 28 '14 at 13:56
# Mechanics question

Suppose I am standing at the top of a cliff which is a vertical distance (d) above the ground. Now, if I jump off the cliff at an angle beta upwards at an initial velocity (v), I will travel a horizontal distance of (R). I have proved the first part of the question, which shows these quantities are related by the equation:

    R sin(2 beta) + d(1 + cos(2 beta)) = (R^2) g / v^2

where g is 9.8 meters per second squared. Ok, now here is the relation which I am having problems solving: The condition for maximum range R is tan(2 beta) = R/d. I have no idea how to go about showing this. Any ideas or suggestions?
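One possible route (a sketch, not necessarily the intended one): treat R as a function of beta in the relation already proved, differentiate implicitly, and impose dR/dbeta = 0 at the maximum.

```latex
% differentiate the proved relation with respect to \beta, with R = R(\beta):
\[
R'\sin 2\beta + 2R\cos 2\beta - 2d\sin 2\beta = \frac{2 R R' g}{v^{2}}
\]
% at the maximum range R' = dR/d\beta = 0, so
\[
2R\cos 2\beta = 2d\sin 2\beta
\quad\Longrightarrow\quad
\tan 2\beta = \frac{R}{d}
\]
```

Here R' denotes dR/dbeta; the v^2-dependent term drops out precisely because R' vanishes at the stationary point.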
## Abstract

Geophysical processes are often characterized by long-term persistence. An important characteristic of such behaviour is the induced large statistical bias, i.e. the deviation of a statistical characteristic from its theoretical value. Here, we examine the most probable value (i.e. mode) of the estimator of variance to adjust the model for statistical bias. Particularly, we conduct an extensive Monte Carlo analysis based on the climacogram (i.e. variance of the average process vs. scale) of the simple scaling (Gaussian Hurst-Kolmogorov) process, and we show that its classical estimator is highly skewed especially in large scales. We observe that the mode of the climacogram estimator can be well approximated by its lower quartile (25% quantile). To derive an easy-to-fit empirical expression for the mode, we assume that the climacogram estimator follows a gamma distribution, an assumption strictly valid for Gaussian white noise processes. The results suggest that when a single timeseries is available, it is advantageous to estimate the Hurst parameter using the mode estimator rather than the expected one. Finally, it is discussed that while the proposed model for mode bias works well for Gaussian processes, for higher accuracy and non-Gaussian processes, one should perform a Monte Carlo simulation following an explicit generation algorithm.

## INTRODUCTION

An important attribute characterizing geophysical processes is the high spatio-temporal dependence, in the sense that a random variable of such a process at a specific time or location strongly depends on several (even infinite) past, or of different location, random variables of the same process. This type of dependence requires long samples for its identification, which is a rare case in most natural processes, and thus, for the estimation of its parameters, it is advised to use only up to the second-order statistics (Lombardo et al.
2014) and only in cases where very long samples are available to expand to higher orders. The above issues are further highlighted in Dimitriadis (2017), where several (overall thirteen) such processes with various lengths and physical properties expanding from small-scale turbulence to large-scale hydrometeorological processes are analyzed in terms of their long-term behaviour using massive databases and unbiased estimators of the second-order dependence structure. Interestingly, all the examined processes exhibited long-term persistence, otherwise known as Hurst-Kolmogorov (HK) behaviour (coined by Koutsoyiannis & Cohn 2008), i.e. power-law decay of the autocorrelation function with lag (for a literature review on long-term persistent processes in hydrometeorology, see also O'Connell et al. 2016). Additionally, Koutsoyiannis (2011) provided a theoretical justification of the HK behaviour in geophysical processes, showing that it is linked to the second law of thermodynamics (i.e. entropy extremization), and specifically, the stronger the persistence of the dependence structure of a process, the higher the entropy of the process at large scales. The identification of the dependence structure of a process can be highly affected by the sample uncertainty and statistical bias where the true statistical properties (mean, variance etc.) of a statistic (e.g. variance) of a stochastic process may differ from the one estimated from a series with finite length. The deviations of the statistical characteristics from their true values should be taken into account not only for the marginal characteristics but also for the dependence structure of the process. Therefore, to correctly adjust the stochastic model to the observed series of the physical process, we should account for the bias effect since all series are of finite (and often short) lengths. 
The second-order properties can be similarly assessed by common stochastic tools such as the autocovariance function (a function of lag), power spectrum (a function of frequency), and variation of statistics (e.g. variance) of the averaged process vs. scale, a tool known as climacogram (Koutsoyiannis 2010). It is shown that the latter estimator of the second-order dependence structure, as compared to the other two metrics, encompasses additional advantages in stochastic model building and interpretation from data; for example, it is characterized by smaller statistical uncertainty and easier to handle expressions of the statistical bias (Dimitriadis & Koutsoyiannis 2015). Therefore, it is advisable that the sample uncertainty of the second-order dependence structure be tackled with the estimator with the lower variation, such as the climacogram. When multiple sample realizations (i.e. recorded series) are known, the handling of the statistical bias arising from a selected stochastic model may be based on the unbiased estimator of the expected value of the climacogram (Dimitriadis & Koutsoyiannis 2018). However, when a single data series of observations is available for the model fitting (which is the case when geophysical processes are studied), it would be interesting to examine the mode of the climacogram, instead of the expected value; the two may differ in case of strong HK behaviour. This estimator is equivalent to a maximum-likelihood estimator (e.g. Kendziorski et al. 1999) for processes with zero (i.e. white noise) or short-term (e.g. Markov) dependence structure, while here we further extend it for HK processes (see also the work of Tyralis & Koutsoyiannis 2011 for the expectation of the climacogram). It is noted that while the climacogram is often based on the second central moment (i.e. variance), other types of moments (e.g. 
raw, L-moments or K-moments; Koutsoyiannis 2019) can be used to measure fluctuation in scale, and here, we focus on the central second-order climacogram (i.e. fluctuation measured by variance vs. scale).

## METHODS

In this section, we present the applied methods, namely the climacogram estimator, the statistical bias expressions for the mode and expected values of the estimator, and the algorithm for the stochastic synthesis of the Gaussian HK process for the Monte Carlo analysis.

### The climacogram

The analysis of a process through the variance of the averaged process vs. scale has been thoroughly applied in stochastic processes (e.g. Papoulis 1991; Vanmarcke 2010). However, its importance to the analysis of the second-order dependence structure is highlighted mainly by more recent works (see a historical review in Koutsoyiannis 2018). Also, the simple name climacogram allowed its further understanding through visualization; indeed, the term originates from the Greek climax (meaning scale) and gramma (meaning written; cf. the terms autocorrelogram for the autocorrelation, scaleogram for the power spectrum and wavelets). It has been shown that the climacogram, treated as an estimator (rather than just a tool for the identification of long-term behaviour of the second-order dependence structure), has additional advantages over the more widely applied estimators of the autocovariance and power spectrum (Dimitriadis & Koutsoyiannis 2015). Namely, the climacogram could provide a more direct, easy, and accurate means to make diagnoses from data and build stochastic models in comparison to the power spectrum and autocovariance.
For example, the climacogram, compared to other tools, has the lowest standardized estimation error for processes with short- and long-term persistence, zero discretization error for averaged processes, simple and analytical expression for the statistical bias, always positive values, well-defined and usually monotonic behaviour, smallest fluctuation of skewness on small scales while closest to zero skewness in larger scales, and mode closest to the expected (i.e. mean) value in large scales. Also, the climacogram is directly linked to the entropy production of a process (Koutsoyiannis 2011, 2016). Furthermore, the climacogram expands the notion of variance by making it a function of time scale and is per se further expandable for statistics different from the central estimators of fluctuation (e.g. second raw moment and second L-moment vs. scale; Koutsoyiannis 2019), for different characteristics of the estimator (e.g. mode and median), and even for moments of higher (e.g. third and fourth) orders (Dimitriadis & Koutsoyiannis 2018). Recently, Koutsoyiannis (2019) extended the notion of climacogram for orders higher than two and showed how to substitute the joint moments of a process, allowing in this manner to tackle some limitations of the latter such as the discretization effect and statistical bias. Symbolically, the climacogram is: (1) where var[ ] denotes the variance and is the continuous-time process at scale k (in dimensions of time), which equals the discrete one averaged in time intervals Δ, i.e. , in the discrete-time scale κ = k/Δ (a dimensionless natural number; for real numbers, see the adjustment in Koutsoyiannis 2011).
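In discrete time, Equation (1) amounts to averaging the series in non-overlapping windows of length κ and taking the variance of the window means. A minimal Python/NumPy sketch (the function name and the biased divide-by-m variance are illustrative choices, not prescribed by the text):

```python
import numpy as np

def climacogram(x, max_scale=None):
    """Classical climacogram estimator: sample variance of the
    time-averaged process versus the averaging scale kappa."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if max_scale is None:
        max_scale = n // 10          # rule of thumb: use scales up to n/10
    scales, gamma = [], []
    for kappa in range(1, max_scale + 1):
        m = n // kappa               # number of complete windows
        means = x[:m * kappa].reshape(m, kappa).mean(axis=1)
        scales.append(kappa)
        gamma.append(means.var())    # biased (divide-by-m) sample variance
    return np.array(scales), np.array(gamma)
```

For white noise the estimate decays roughly as γ̂(κ) ≈ γ̂(1)/κ, while an HK process decays more slowly (log-log slope 2H − 2), which is the signature of long-term persistence used throughout the text.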
### The Gaussian long-term persistent process and its stochastic synthesis

The most common processes employed in geophysics, and particularly in hydrology, are the white noise process, the Markov process (with an exponential decay of the autocorrelation), and long-term persistent processes, which are characterized by a power-law decay of the climacogram (or equivalently of the autocorrelation) as a function of scale (or lag). A typical representative of the latter processes is the Gaussian HK process defined as follows: (2) where denotes equality in distribution with μ the mean and the variance of the process for each scale k, H is the Hurst parameter (0 < H < 1) otherwise defined as (Dimitriadis et al. 2016a); the quantity in the limit is the derivative of with respect to . It is noted that this process has infinite variance at scale zero and thus, it should not be used to model the small scales of a physical process (in spite of the fact that the fractional-Gaussian-noise (fGn) process is widely used to model several processes at small scales; Koutsoyiannis et al. 2018). For the stochastic synthesis of the Gaussian HK model, we may use the sum of arbitrarily many independent Markov processes, thus expressing the target climacogram as follows (Dimitriadis & Koutsoyiannis 2015): (3) where is the variance, a time scale parameter for each Markov model i, and l the total number of Markov processes. Mandelbrot (1963) has shown that for , the above model can adequately describe an fGn (or else Gaussian HK) process for any generated length (see also Mandelbrot & Van Ness 1968; Mandelbrot & Wallis 1968). Koutsoyiannis (2002) has analytically estimated the parameters of three AR(1) models (l = 3) to capture the HK process for n < 10^4.
Dimitriadis & Koutsoyiannis (2015) have expanded this framework to the sum of arbitrarily many AR(1) models (abbreviated as SAR) for the generation of any type of process with an autoregressive dependence structure and up to any number of scales, by using a suitable function with only two parameters, namely p1 and p2, that link the lag-1 autocorrelations of each Markov model, e.g. through the expression , with i = 1, …, l and l often taken equal to the integer part of log(n) + 1. For example, for n = 10^6 and H = 0.8, we have l = 7, p1 = 0.394 and p2 = 12.356 for a maximum standardized error between the true (Equation (2)) and modelled (Equation (3)) climacogram (i.e. for all scales) equal to 0.009 (Table 1).

Table 1. Parameters p1 and p2 estimated to approximate different types of the N(0,1)-HK model (i.e. μ = 0 and γ(Δ) = 1) with l = 7 and n ≤ 10^6

| H | p1 | p2 | Maximum error (standardized) |
| --- | --- | --- | --- |
| 0.51 | 0.022 | 17.122 | 0.001 |
| 0.60 | 0.091 | 12.607 | 0.006 |
| 0.70 | 0.124 | 13.317 | 0.009 |
| 0.80 | 0.394 | 12.356 | 0.009 |
| 0.90 | 0.395 | 14.708 | 0.005 |
| 0.99 | 0.548 | 19.465 | 0.001 |

### The mode of climacogram estimator and its statistical bias

The climacogram can be estimated from a sample through an estimator as similarly done for the estimators of the marginal moments. Here, for the climacogram we use a classical estimator: (4) where is the integer part of , is the averaged process at scale for , is the sample average, and n is the series length. Since the above estimator is a random variable, it has a marginal distribution (see an illustration in Figure 1). The true value of a statistical characteristic (e.g. variance) of a stochastic model may differ from the one estimated from a series with finite length. To correctly adjust the stochastic model to the observed series of the physical process, one should account for the bias effect.
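The SAR construction above (Equation (3)) sums several independent AR(1) components whose correlation time scales span many orders of magnitude, so that the aggregate mimics the power-law climacogram of an HK process over a finite range of scales. A rough Python sketch; the geometric spacing of the lag-1 autocorrelations and the equal variance split are illustrative assumptions, not the fitted (p1, p2) scheme of Table 1:

```python
import numpy as np

def sar_hk(n, l=7, rho_min=0.3, rho_max=0.999, seed=None):
    """Sum of l independent AR(1) components as an approximation to a
    Gaussian HK (long-range dependent) process.  The choice of rho_i
    and the equal variance split are illustrative, not the paper's
    fitted (p1, p2) parameterization."""
    rng = np.random.default_rng(seed)
    # lag-1 autocorrelations spread geometrically in (1 - rho), so the
    # component correlation time scales span several orders of magnitude
    rhos = 1.0 - np.geomspace(1.0 - rho_min, 1.0 - rho_max, l)
    x = np.zeros(n)
    for rho in rhos:
        # innovations scaled so each stationary AR(1) has unit variance
        eps = rng.standard_normal(n) * np.sqrt(1.0 - rho ** 2)
        a = np.empty(n)
        a[0] = rng.standard_normal()        # stationary initial state
        for t in range(1, n):
            a[t] = rho * a[t - 1] + eps[t]
        x += a
    return x / np.sqrt(l)                   # unit-variance aggregate
```

A quick sanity check is that the generated series has a strongly positive lag-1 autocorrelation and a climacogram that decays much more slowly than the 1/κ decay of white noise.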
An important question is how the statistical bias is generally handled through the second-order dependence structure in case of long-term persistent processes. Particularly, the selected stochastic model should be adjusted for bias before it is fitted to the sample dependence structure. It is noted that neglecting the bias effect in case of a long-term persistent process leads to underestimations of the stochastic model parameters such as the Hurst parameter and to erroneous conclusions. An adjustment of the models for bias is usually done by equating the observed dependence structure to the expected value of the applied estimator. The alternative studied here is the mode, instead of the expected value, of the dependence structure, which represents the most probable value (and thus, the most expected) of the variance estimator at each scale.

Figure 1 An illustration for an N(0,1)-HK (H = 0.83, n = 200) process of (left) how several statistical characteristics of the climacogram estimator vary with scale and (right) the observed quantile (qo) vs. the non-exceedance probability of the modelled quantile P(qm ≤ qo), showing how the gamma distribution can adequately approximate the distribution of the climacogram estimator especially at large scales.

The statistical bias of an estimator is the difference of the expected value of the estimator from its true value (e.g. Papoulis 1991). Thus, the bias of the climacogram is shown as follows (e.g. Koutsoyiannis 2011): (5) where denotes the bias of the expected value of a statistical estimator of a process.
Clearly, for the mean value of a process, we have that . Following the same rationale, we define an expansion of the notion of bias for the mode of the above estimator of the climacogram, i.e.: (6) where denotes the mode of the variable x with density function f(x). We refer to as the mode bias. For a Gaussian white noise process of length n and variance , the distribution of its sample variance follows the gamma distribution (Cochran 1934). The averaged process at scale κ, with a sample length of n/κ and variance , follows , with for , or else 0. Hence, for , we have that , and , i.e. zero bias. However, for long-term persistent processes, the mode bias is non-zero and its analytical solution is no longer easy to derive. From the above results, it becomes evident that the statistical bias always depends on the selected model and not on the data as commonly thought. For example, consider the Gaussian HK process in the previous section with an autocorrelation function in discrete time , where is the discrete-time lag. The bias of the autocorrelation is similarly defined as , and thus depends on the model parameter H. It is noted that the above apply even to the so-called non-parametric models, since they also involve estimation from data, and thus, these models should be similarly adjusted for statistical bias to avoid underestimation of the process variability during a Monte Carlo simulation. For simplicity and without loss of generality, we set Δ = 1 for the rest of the analysis. It is evident that or else , since the sample variance is positively skewed, i.e. and the equality holds when , where the variance of the sample variance is zero for an ergodic process. A preliminary analysis of common HK-type processes has shown that the mode climacogram is close to the low quartile (25% quantile) of the marginal distribution of variance at each scale (Dimitriadis et al. 2016c; Gournary 2017). 
Therefore, when the mode of the variance estimator is of interest, we may use a Monte Carlo technique (as described in the next section) to accurately estimate the mode bias or, in case the marginal distribution of the climacogram is known, to calculate the 25% quantile at each scale to approximate the mode bias.

## MONTE CARLO ANALYSIS FOR THE MODE OF THE VARIANCE ESTIMATOR

We perform Monte Carlo experiments over the N(0,1)-HK model for a wide range of Hurst parameters H (i.e. 0.5 to 0.95) and for a wide range of series lengths n (i.e. 20–2,000). Specifically, we produce a number (N) of synthetic series through the SAR model described in the section ‘The Gaussian long-term persistent process and its stochastic synthesis’, where N depends on the sample mean value to reach the expected one at scale κ = n/10 based on the rule of thumb when using the climacogram as shown in Dimitriadis & Koutsoyiannis (2015). We found that for , the standardized error between the theoretical expected value and the sample one (Equation (5)) is lower than 1% at scale κ = n/10. In this way, the mode is expected also to be well preserved with a similar error. However, caution should be given to the selection of the sample mode estimator to ensure that its variance permits a robust estimation of the true value of the mode. Since the distribution function of the estimator of variance is unknown for long-term persistent processes and given that the mode value is the most likely to occur within the sample, we calculate the sample mode from each simulated series by finding the most probable value with an accuracy of two decimal digits. Specifically, we round each value of the time series, and for each scale, to the second decimal digit, and we estimate the most probable value of the rounded time series (for higher accuracies, a larger N was required). Also, other estimators for the sample mode (e.g.
Bickel & Fruwirth 2006) could be used and compared to the proposed one in future research to optimize the performance of the analysis. Here, to derive an easy-to-fit empirical expression to approximate the mode bias, we adopt the assumption that the above distribution is nearly gamma for smaller scales (see also a similar analysis in Gournary 2017 and Dimitriadis et al. 2018). Using the results from the Monte Carlo analysis, we then evaluate the parameter of the gamma distribution for each H, n, and κ, and we build a model for the mode, then later test its performance. Although the true autocorrelation function of the averaged process for a long-term persistent process does not vary with scale, the sample autocorrelation will also be prone to bias (e.g. Dimitriadis & Koutsoyiannis 2015), affecting the distribution function of the sample variance at each scale. To minimize the sample error for the fitting of the two-parameter gamma distribution, we use the theoretical expression for the expected value of the sample climacogram, i.e. , and the variance of the sample climacogram, i.e. , as evaluated from the Monte Carlo analysis, which exhibits the lowest variability in estimation among the four central moments (Dimitriadis & Koutsoyiannis 2018; Figure 2). Based on these two measures, we estimate the two parameters of the gamma distribution.

Figure 2 (left) The shape parameter assuming a gamma distribution for the mode estimator of the climacogram of an N(0,1)-HK process (for H = 0.8 and for all n and κ simulated in the Monte Carlo analysis) vs. the theoretical shape parameter of the white noise process. (right) Proposed model for the c(H) and p(H) functions for all examined H from the Monte Carlo analysis.

We first set the scale parameter of the gamma distribution so as to simulate the sample ratio of the aforementioned parameters, i.e. , and so the shape parameter can also be estimated as . We observe (Figure 2) that for , the shape parameter is approximately proportional, by a function c(H), to the corresponding shape parameter for the white noise process raised to a function p(H), i.e.: (7) where is a function corresponding to the shape parameter of the gamma distribution function, while for or , the mode is considered close to zero. The two functions of the above expression are fitted as follows (Figure 2): (8) and (9) The above two adjustments allow us to empirically express the mode of the climacogram estimator as a function of H, n, and κ: (10) It is noted that based on the above assumptions, the standard deviation, and the skewness and excess kurtosis coefficients of the climacogram estimator can be estimated as , , and , respectively. Since , all the above measures will be larger than those in case of a white noise process. The above expression can approximate the mode by an absolute difference of 0.005 from the Monte Carlo estimates, while for better approximations it is advised to implement a new Monte Carlo analysis (see also discussion and application in the section ‘Applications to Annual Streamflow’). Interestingly, the standardized error between the mode and expected values of the estimator, i.e. , is calculated from the Monte Carlo analysis to reach a maximum value of 67% corresponding to cases with H ≥ 0.6 and n/κ ≤ 10, while for the white noise process it can be theoretically estimated as ε = 2/(n/κ), which for κ = n/10 is approximately 20%.
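The rounding-based sample-mode estimator used in the Monte Carlo experiments above (round every simulated value to two decimal digits, then take the most frequent rounded value) can be sketched as follows; the function name is an illustrative choice:

```python
import numpy as np
from collections import Counter

def sample_mode(values, decimals=2):
    """Most probable value of a sample: round to a fixed number of
    decimal digits and return the most frequent rounded value
    (ties resolved by Counter's insertion order)."""
    rounded = np.round(np.asarray(values, dtype=float), decimals)
    value, _count = Counter(rounded).most_common(1)[0]
    return float(value)
```

In the paper's setting this would be applied, scale by scale, to the N simulated climacogram values; as the text notes, a finer rounding accuracy requires a larger N for a stable estimate.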
## APPLICATIONS TO ANNUAL STREAMFLOW

To illustrate possible implications of the above results, we apply a stochastic analysis based on the expected and the mode values of the climacogram to the streamflow process at the Peneios river (Thessaly, Greece), where a historical annual streamflow time series is available at the upstream station of Ali Efenti with only a 13-year length (for more information on the study area, see Dimitriadis et al. 2016b). For the identification of the stochastic model, we adjust for statistical bias and, in particular, we fit the mode of the estimator rather than its expectation. It is noted that the proposed empirical model for the mode bias (Equation (10)) is derived from a Monte Carlo analysis for sample lengths of n ≥ 20, and so for this application we perform a new Monte Carlo analysis to fit the observed climacogram for scales 1 ≤ κ ≤ n/10 (rule of thumb; Dimitriadis & Koutsoyiannis 2015), i.e., here, the first two scales (Figure 3). We find that an HK model with H = 0.9 can adequately simulate the observed standardized climacogram. We also estimate the Hurst parameter with the expectation of the estimator, and we find H′ ≈ 0.8 and H′′ ≈ 0.7, with and without adjusting for bias, respectively. Evidently, both latter values underestimate the long-term persistence behaviour (Figure 3).

Figure 3 Standardized climacogram estimations of the observed standardized time series (black line), the white noise model (grey line), and the three fitted N(0,1)-HK stochastic processes: (a) adjusting for bias of the mode of the estimator (green line) and of its expectation (red line), and (b) not adjusting for bias (blue line), also corresponding to the non-parametric model configuration.
It is noted that the dependence structure of a process (e.g. streamflow) will have a small effect on the risk imposed by the expected number of peaks over threshold (e.g. for the design of a dam or for flood risk mapping) compared to the effect of the marginal distribution of the process (Volpi et al. 2015; Serinaldi & Kilsby 2018). However, the dependence structure will have a great effect (especially for processes with long-term behaviour) on the duration of successive peaks over threshold (e.g. the maximum duration of wet/dry periods or of flood inundation), which may strongly affect urban as well as agricultural areas and insurance policies (e.g. Serinaldi & Kilsby 2016; Goulianou et al. 2019). To illustrate this, we generate an adequate number N (see the section 'Monte Carlo Analysis for the Mode of the Variance Estimator') of HK synthetic time series with H = 0.5 (N = 5 × 10³), H = 0.7 (N = 4 × 10⁴), H = 0.8 (N = 10⁵), and H = 0.9 (N = 3 × 10⁵). For convenience, we assume an N(0,1) distribution for all processes. We then estimate the expected frequency of the number of peaks over various thresholds (PoT) as well as the expected frequency of the maximum duration of successive peaks over various thresholds (MdT), and we standardize them with the PoT and MdT values of the white noise process (Figure 4). We find that the MdT varies with threshold and long-term persistence, while the PoT stays almost unaffected by both.
Additional analyses and quantifications of the reflection of long-term persistence in terms of clustering in time can be found in Iliopoulou & Koutsoyiannis (2019).

Figure 4 Expected frequency of peaks over threshold (PoT) and expected maximum duration of successive peaks over threshold (MdT), standardized with the PoT and MdT values of the N(0,1) white noise process, for various HK-N(0,1) processes.

The results from this study suggest that the sample estimator of the variance can be skewed even for long samples in the presence of long-term persistence, as opposed to the white noise case. Therefore, the mode is different from the expectation and more suitable to use in estimation. We propose that when a single recorded series is available and a Gaussian HK process is fitted with a small sample size and a relatively high Hurst parameter, it is advantageous to employ the mode of the estimator as calculated from the empirical model of Equation (10), rather than its expectation (Equation (5)), so as to avoid underestimation of the Hurst parameter (and thus of the uncertainty of the process). In the case of a non-Gaussian distribution, higher accuracy requirements, or a different estimator of the second-order dependence structure (e.g. another climacogram estimator, autocovariance, power spectrum, variogram, etc.), we should employ the Monte Carlo technique and test whether the mode of the estimator used is close enough to its expected value. If it is, then the expected value can be used to adjust the model for bias, whereas if the two values differ, the model should be adjusted for bias based on the mode estimator.
For Monte Carlo analysis of a non-Gaussian correlated process, an explicit generation algorithm should be preferred (Dimitriadis & Koutsoyiannis 2018), since the mode value is expected to depend strongly on higher-order moments in the case of long-term persistent processes.

## CONCLUSIONS AND DISCUSSION

Awareness of the uncertainty in assessing the dependence structure of a process is of paramount importance, as it may critically affect the interpretation of results. Estimation uncertainty may introduce large statistical bias, which can be further magnified in the presence of long-term persistence (Dimitriadis & Koutsoyiannis 2015). In addition, if the uncertainty is underestimated, then a regular cluster of events could be erroneously regarded as an extreme cluster. Although the mode of the examined classical estimator of variance is close to its expectation for small Hurst parameters and large lengths, we show that for larger values of the Hurst parameter and small sample lengths, equating the expected climacogram to the observed one may lead to underestimation of the long-term persistence and thus of the uncertainty of the process. We propose that when the available series are short or when the empirical Hurst parameter is estimated to be larger than 0.5, we should always account for statistical bias. In particular, for the bias adaptation, when information is available on only a single series/realization of the process, it is advantageous to equate the mode instead of the expectation of the climacogram estimator to the sample values. Interestingly, in the case of an N(0,1)-HK process, the absolute difference between the mode and expected values of the estimator is calculated (from a Monte Carlo analysis performed in this study) to reach a maximum value of 67% of the expected value, corresponding to cases with H ≥ 0.6 and n/κ ≤ 10, while for the white noise process the value is approximately 20% for κ = n/10.
In cases of different stochastic processes or estimators, or when higher accuracy of the mode bias is of interest, one should employ a Monte Carlo technique through an explicit generation algorithm (Dimitriadis & Koutsoyiannis 2018) to estimate the mode of the climacogram estimator, or use the lower quartile (25% quantile) of the estimator (in case its distribution is known) as an approximation. From the Monte Carlo analysis performed in this study, it is also observed that for an N(0,1)-HK process, for large n and small n/κ, the distribution of the climacogram estimator tends to one with a mean value equal to the true climacogram, i.e. zero bias. However, given the estimation uncertainty present in records exhibiting persistence, the autocorrelation of the averaged process is independent of scale, and thus the above distribution will never be truly reached. The underestimation of the persistence of the parent process also has critical implications for the estimation of the properties of its extremes, since it was shown that the maximum duration of successive peaks over threshold is greatly affected by the degree of dependence. Additional analyses and quantifications of the reflection of long-term persistence in terms of clustering in time can be found in Iliopoulou & Koutsoyiannis (2019). A final remark for discussion, considering the etymology of the terms, is that the expected value of a random process is less expected to occur than its mode (i.e. the most probable value; a term coined by Pearson 1895, p. 345), and the two coincide only in symmetrical distributions. Therefore, when only one value is known (here, only one realization of the climacogram estimator), it is more accurate to fit the model and evaluate the Hurst parameter based on the proposed mode estimator rather than the expected one.
## ACKNOWLEDGEMENT

The authors would like to thank the editor Luigi Berardi for handling the paper, one anonymous reviewer for useful comments, and Federico Lombardo for his fruitful discussion, comments, and suggestions that helped us improve the paper.

## CODE AVAILABILITY

The MATLAB script for the SAR generation algorithm is available, as well as the script for a fast estimation algorithm of the sample climacogram in very long time series and at many scales.

## REFERENCES

Bickel D. R. & Fruwirth R. 2006 On a fast, robust estimator of the mode: comparisons to other robust estimators with applications. Computational Statistics & Data Analysis 50, 3500–3530.
Cochran W. G. 1934 The distribution of quadratic forms in a normal system, with applications to the analysis of covariance. Mathematical Proceedings of the Cambridge Philosophical Society 30(2), 178–191. doi:10.1017/S0305004100016595.
Dimitriadis P. 2017 Hurst-Kolmogorov Dynamics in Hydrometeorological Processes and in the Microscale of Turbulence. PhD Thesis, National Technical University of Athens, p. 167.
Dimitriadis P. & Koutsoyiannis D. 2015 Climacogram versus autocovariance and power spectrum in stochastic modelling for Markovian and Hurst–Kolmogorov processes. Stochastic Environmental Research & Risk Assessment 29(6), 1649–1669.
Dimitriadis P. & Koutsoyiannis D. 2018 Stochastic synthesis approximating any process dependence and distribution. Stochastic Environmental Research & Risk Assessment 32(6), 1493–1515. doi:10.1007/s00477-018-1540-2.
Dimitriadis P., Koutsoyiannis D. & Papanicolaou P. 2016a Stochastic similarities between the microscale of turbulence and hydrometeorological processes. Hydrological Sciences Journal 61(9), 1623–1640. doi:10.1080/02626667.2015.1085988.
Dimitriadis P., Tegos A., Oikonomou A., Pagana V., Koukouvinos A., Mamassis N., Koutsoyiannis D. & A.
2016b Comparative evaluation of 1D and quasi-2D hydraulic models based on benchmark and real-world applications for uncertainty assessment in flood mapping. Journal of Hydrology 534, 478–492.
Dimitriadis P., Gournari N. & Koutsoyiannis D. 2016c Markov vs. Hurst-Kolmogorov behaviour identification in hydroclimatic processes. European Geosciences Union General Assembly, Geophysical Research Abstracts, Vol. 18, European Geosciences Union, Vienna, EGU2016-14577-4. doi:10.13140/RG.2.2.21019.05927.
Dimitriadis P., Gournary N., Petsiou A. & Koutsoyiannis D. 2018 How to adjust the fGn stochastic model for statistical bias when handling a single time series; application to annual flood inundation. In: 13th Hydroinformatics Conference, 1–6 July 2018, Palermo, Italy.
Goulianou T., Papoulakos K., Iliopoulou T., Dimitriadis P. & Koutsoyiannis D. 2019 Stochastic characteristics of flood impacts for agricultural insurance practices. European Geosciences Union General Assembly 2019, Geophysical Research Abstracts, Vol. 21, European Geosciences Union, Vienna, EGU2019-5891.
Gournary N. 2017 Probability Distribution of the Climacogram Using Monte Carlo Techniques. Diploma Thesis, Department of Water Resources and Environmental Engineering, National Technical University of Athens, Athens (in Greek), p. 108.
Iliopoulou T. & Koutsoyiannis D. 2019 Revealing hidden persistence in maximum rainfall records. Hydrological Sciences Journal. doi:10.1080/02626667.2019.1657578.
Kendziorski C. M., Bassingthwaighte J. B. & Tonellato P. J. 1999 Evaluating maximum likelihood estimation methods to determine the Hurst coefficient. Physica A 273(3–4), 439–451.
Koutsoyiannis D. 2002 The Hurst phenomenon and fractional Gaussian noise made easy. Hydrological Sciences Journal 47(4), 573–595.
Koutsoyiannis D. 2010 HESS opinions 'A random walk on water'. Hydrology and Earth System Sciences 14, 585–601.
Koutsoyiannis D. 2011 Hurst-Kolmogorov dynamics as a result of extremal entropy production.
Physica A: Statistical Mechanics and its Applications 390(8), 1424–1432.
Koutsoyiannis D. 2016 Generic and parsimonious stochastic modelling for hydrology and beyond. Hydrological Sciences Journal 61(2), 225–244.
Koutsoyiannis D. 2018 Climate Change Impacts on Hydrological Science: A Comment on the Relationship of the Climacogram with Allan Variance and Variogram. ResearchGate.
Koutsoyiannis D. 2019 Knowable moments for high-order stochastic characterization and modelling of hydrological processes. Hydrological Sciences Journal 64(1), 19–33.
Koutsoyiannis D. & Cohn T. A. 2008 The Hurst phenomenon and climate (solicited). European Geosciences Union General Assembly 2008, Geophysical Research Abstracts, Vol. 10, Vienna, 11804, European Geosciences Union. doi:10.13140/RG.2.2.13303.01447.
Koutsoyiannis D., Dimitriadis P., Lombardo F. & Stevens S. 2018 From fractals to stochastics: seeking theoretical consistency in analysis of geophysical data. In: (Tsonis A. A., ed.). Springer, Cham, Switzerland, pp. 237–278.
Lombardo F., Volpi E., Koutsoyiannis D. & Papalexiou S. M. 2014 Just two moments! A cautionary note against use of high-order moments in multifractal models in hydrology. Hydrology and Earth System Sciences 18, 243–255. doi:10.5194/hess-18-243-2014.
Mandelbrot B. B. 1963 The variation of certain speculative prices. 36, 394–419.
Mandelbrot B. B. & Van Ness J. W. 1968 Fractional Brownian motions, fractional noises and applications. SIAM Review 10, 422–437.
Mandelbrot B. B. & Wallis J. R. 1968 Noah, Joseph and operational hydrology. Water Resources Research 4, 909–918.
O'Connell P. E., Koutsoyiannis D., Lins H. F., Markonis Y., Montanari A. & Cohn T. A. 2016 The scientific legacy of Harold Edwin Hurst (1880–1978). Hydrological Sciences Journal 61(9), 1571–1590. doi:10.1080/02626667.2015.1125998.
Papoulis A. 1991 Probability, Random Variables and Stochastic Processes, 3rd edn. McGraw-Hill, New York.
Pearson K.
1895 Contributions to the mathematical theory of evolution – II, Skew variation in homogeneous material. Philosophical Transactions of the Royal Society of London 186, 343–414. Available from: https://royalsocietypublishing.org/doi/pdf/10.1098/rsta.1895.0010.
Tyralis H. & Koutsoyiannis D. 2011 Simultaneous estimation of the parameters of the Hurst-Kolmogorov stochastic process. Stochastic Environmental Research & Risk Assessment 25(1), 21–33.
Vanmarcke E. 2010 Random Fields: Analysis and Synthesis. World Scientific, New Jersey, USA.
Volpi E., Fiori A., Grimaldi S., Lombardo F. & Koutsoyiannis D. 2015 One hundred years of return period: strengths and limitations. Water Resources Research 51(10), 8570–8585.
{}
# 2 quick questions

2 quick questions. One is on naming compounds (hydrocarbons): a) CH3 - C = CH - CH2 - CH3 | CH3 (the CH3 should be under the C). What's the key? Is step one counting up how many C's and H's there are? Which element comes first? My second question is on structural and geometric isomers. I've read up on both, but just when I think I understand, I read something else and it confuses me. In one or two lines, what's the difference? I thought it was simply that a geometric isomer would have to be GEOMETRICALLY shaped differently, but apparently that is not true. Thanks

## Answers and Replies

Well you should have a table or whatever that says what the affixes are: Meth-Eth-prop-but-pent-hex-hept. The key I suppose is to find the longest chain of C's. So it's the horizontal one. So you have 5, which is PENT. So you have Pent so far. Next, the double bond between the 2nd and 3rd carbon means it's an -ene instead of -ane. We also have to show where the double bond is located; since it's between the 2nd and 3rd carbon on that chain, we signify it. So we've got 2-Pentene so far. Next, that other little addition on the C, the CH3 that's alone, is called a methyl, and we again have to signify where it's attached. So the final name I think is 2-Methyl-2-Pentene. If it's some sort of test or something I suggest not using my answer; very unlikely correct. For your second question I don't get what you're asking lol. If you're talking about the way they look: a small triangle-looking bond means coming toward you, dotted means away from you, and a double line would mean it's flat with the page and it's a double bond. Otherwise I don't really know what you're asking.

munky99999 said: Well you should have a table or whatever that says what the affixes are: Meth-Eth-prop-but-pent-hex-hept. The key I suppose is to find the longest chain of C's. So it's the horizontal one. So you have 5, which is PENT. So you have Pent so far.
Next, the double bond between the 2nd and 3rd carbon means it's an -ene instead of -ane. We also have to show where the double bond is located; since it's between the 2nd and 3rd carbon on that chain, we signify it. So we've got 2-Pentene so far. Next, that other little addition on the C, the CH3 that's alone, is called a methyl, and we again have to signify where it's attached. So the final name I think is 2-Methyl-2-Pentene. If it's some sort of test or something I suggest not using my answer; very unlikely correct. For your second question I don't get what you're asking lol. If you're talking about the way they look: a small triangle-looking bond means coming toward you, dotted means away from you, and a double line would mean it's flat with the page and it's a double bond. Otherwise I don't really know what you're asking.

note: the branched-off ones are always under or below C. OK, I have a few more here, which I'll attempt:

CH3 | CH3 - CH2 - C - CH - CH3 | | CH3 CH3

so I identify the longest chain of C's, which is 5, so once again it's pent, but it's pentane because there is only a single bond? Correct? Then we look at what's not on the straight line: we have CH3 three times... so is it 3-methyl pentane?

another one:

Cl | CH3-C = C - CH2 - CH3 | Cl

We have a chain of 5 once again, so it's pent, but there's a double bond between C and C, so it's pentene. Finally, we have two Cl's on the side. So, is it 2-Chloride Pentene?

Finally, I have CH3-CH2-C≡C-CH2-CH3. Not sure about the last one... can someone help there? If someone can look over this, I'd appreciate it a lot.

Watch yourself on the first example. Just because it looks like the carbons are branching off doesn't mean that can't be the central carbon backbone. The longest chain is a 7-C chain, making it hept-. And according to the way you wrote it down, it has a double bond, making it an alkene, making your example a heptene.
Next I will label all of the carbons in the molecule C | 3 - 4 - 5 - 6 - 7 || 2 - 1 As you can see, the double bond is in the second bond space (you always want to have the fewest numbers possible in terms of simplification). But you also have the $$CH_3$$ group on the third carbon in the chain. Therefore this should be: 3-methyl-2-pentene

bross7 said: Watch yourself on the first example. Just because it looks like the carbons are branching off doesn't mean that can't be the central carbon backbone. The longest chain is a 7-C chain, making it hept-. And according to the way you wrote it down, it has a double bond, making it an alkene, making your example a heptene. Next I will label all of the carbons in the molecule C | 3 - 4 - 5 - 6 - 7 || 2 - 1 As you can see, the double bond is in the second bond space (you always want to have the fewest numbers possible in terms of simplification). But you also have the $$CH_3$$ group on the third carbon in the chain. Therefore this should be: 3-methyl-2-pentene

did u see my note at the top? I said that all the branched-off ones are under C, so none are on the 1st (the reason it's on the first is because somehow when I post it they always move to the 1st position)

Starting off, isomers are molecules with the same molecular formula but different shape. A structural isomer differs in the way the atoms are arranged. For instance, take $$C_6H_{14}$$. It could be written as:

  |   |   |   |   |   |
- C - C - C - C - C - C -
  |   |   |   |   |   |

OR

      |
    - C -
      |
  |   |   |   |   |
- C - C - C - C - C -
  |   |   |   |   |

Both have 6 carbons and 14 hydrogens (I left the H's out for simplicity) but they are arranged differently. This is an example of a structural isomer. For a geometric isomer you usually have something like a double bond, which prevents rotation. The molecule will have the same bonding pattern, but because of the three-dimensional shape of molecules they end up being different.
Take for example:

H      Cl
 \    /
  C = C
 /    \
Br     Br

And

Br     Cl
 \    /
  C = C
 /    \
H      Br

Both are $$C_2HBr_2Cl$$ and both have a carbon attached to a hydrogen and a bromine, and a carbon attached to a bromine and a chlorine. But how can you make them the same? You can't, unless you were able to rotate the double bond (which you can't do without breaking and reforming it). This would be an example of geometrical isomerism.
{}
Today we are going to build an Income Tax Calculator Project in Java. Taxation systems can be complex! The amount of tax payable varies depending on which slab your total income lies in. Look at this table to make things clear: up to 3 LPA (lakhs per annum), no tax is due, and all incomes above 3 LPA are subject to a total of 3% Education plus Higher Education Cess on the total tax.

| Total income (per annum) | Tax rate |
|---|---|
| Up to 3,00,000 | Nil |
| 3,00,001 – 5,00,000 | 10% |
| 5,00,001 – 10,00,000 | 20% |
| Above 10,00,000 | 30% |

### Java Income Tax Calculator Project

```java
public class IncomeTaxCalculator {

    static double calculateTax(double ti) {
        double total_tax = 0;
        double total_cess = 0;
        if (ti > 300000) {
            double amt, tax1;
            if ((ti - 300000) > 200000)
                amt = 200000;
            else
                amt = ti - 300000;
            tax1 = 0.1 * amt;
            total_tax += tax1;
            System.out.println("Tax Payable for slab 3,00,000-5,00,000: " + tax1);
        }
        if (ti > 500000) {
            double amt, tax2;
            if ((ti - 500000) > 500000)
                amt = 500000;
            else
                amt = ti - 500000;
            tax2 = 0.2 * amt;
            total_tax += tax2;
            System.out.println("Tax Payable for slab 5,00,000-10,00,000: " + tax2);
        }
        if (ti > 1000000) {
            double tax3 = 0.3 * (ti - 1000000);
            total_tax += tax3;
            System.out.println("Tax Payable for slab 10,00,000-above: " + tax3);
        }
        total_cess = 0.03 * total_tax;
        System.out.println("Total cess = 3% of income tax = " + total_cess);
        return total_tax + total_cess; // the method is declared double, so it must return the payable amount
    }

    public static void main(String[] args) {
        System.out.println("total tax: " + calculateTax(2000000));
    }
}
```

At first glance, the entire code might seem overwhelming for beginners. But the overall structure tells us that there are three conditional blocks checking the three taxable slabs: 3 lakh – 5 lakh, 5 lakh – 10 lakh, and 10 lakh – above. For a total income (ti) of less than 3 lakh, no tax is levied. In each block, we use the variables tax1, tax2, and tax3 to calculate the tax amount for that slab. If the total income lies in the 2nd or 3rd slab, we need to calculate the applicable tax for the previous slabs too. For instance, if the total income is 20,00,000, then we must calculate tax for all three slabs.
The total tax amount for the 1st slab will be 20,000 (10% of 2,00,000). This is why, in the simpler version below, a flat 20,000 is added on top of the rate for the 5,00,000 – 10,00,000 slab, and 1,20,000 on top of the 30% rate. The above code can be reduced to far fewer lines with proper conditions and an if-else ladder; I have expanded it purposefully to make the logic straightforward. Here's the simpler version of the above code:

### Optimized Solution

```java
import java.util.Scanner;

public class IncomeTaxCalculator2 {

    static void calculateTax(double ti) {
        double tax = 0, cess = 0;
        if (ti > 300000 && ti <= 500000)
            tax = (ti - 300000) * 0.1;
        else if (ti > 500000 && ti <= 1000000)
            tax = 20000 + (ti - 500000) * 0.2;   // 20,000 = full tax of the first slab
        else if (ti > 1000000)
            tax = 120000 + (ti - 1000000) * 0.3; // 1,20,000 = 20,000 + 1,00,000 from the first two slabs
        cess = tax * 0.03;
        System.out.println("Total tax: " + tax + " Total cess: " + cess);
        System.out.println("Tax payable: " + (tax + cess));
    }

    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        System.out.print("Enter Total Income: ");
        double total_income = sc.nextDouble();
        calculateTax(total_income);
        sc.close();
    }
}
```

#### OUTPUT

In the above code, we accept console-based user input for the total income and then call our calculateTax() method with the provided value — in this case, 20,00,000. Check that the total tax from both programs is the same. As programmers, we would readily jump to the latter example, but the first one helps you understand how the taxation system works in India.
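As a quick sanity check (the class and variable names here are just for illustration), the slab arithmetic for the 20,00,000 example can be verified in a few lines: the slab-wise sum must equal the pre-computed 1,20,000 plus 30% of the amount above 10,00,000.

```java
public class TaxCheck {
    public static void main(String[] args) {
        double ti = 2000000; // the 20,00,000 example income from the post

        // Slab-wise breakdown, as in the expanded program:
        double slab1 = 0.10 * 200000;          // 3L-5L band fully used
        double slab2 = 0.20 * 500000;          // 5L-10L band fully used
        double slab3 = 0.30 * (ti - 1000000);  // everything above 10L
        double expanded = slab1 + slab2 + slab3;

        // Closed form with pre-computed slab totals, as in the optimized program:
        double optimized = 120000 + 0.30 * (ti - 1000000);

        if (Math.abs(expanded - optimized) > 1e-9)
            throw new AssertionError("slab arithmetic disagrees");
        System.out.println("Income tax before cess: " + expanded);
        System.out.println("Tax payable with 3% cess: " + (expanded + 0.03 * expanded));
    }
}
```

Both routes give an income tax of 4,20,000 before cess, matching the blog's walk-through.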
{}
Need help with these questions please help me. I have no idea I will mark you as branniest.
What role does a school play to stop intellectual migration? EXPLAIN
What is x-6=4 in y=mx+b
Question 10 of 20 Which of the following is something that would promote diversity in the workplace? A. Recruit from the most selective universities. B. Promote competition between workers. C. Enforce sexual harassment policies. D. Hire only young, college-educated workers.
In this picture, m∠AOC = 65° and m∠COD = (2x + 4)°. If ∠AOC and ∠COD are complementary angles, then what is the value of x?
For a hydrogen-like atom, classify these electron transitions by whether they result in the absorption or emission of light: n=3 to n=5, n=1 to n=3, n=3 to n=2, n=2 to n=1? Ignoring the sign, which transition was associated with the greatest energy change?
An equation is shown. 2.2 - 53 -3=0 What are the solutions to the quadratic equation? X= X=
Mr.kiran is our teacher. frame questions for this answer
Why were the Shintos important to the Early Japanese?
How much electricity do we use
How to solve 3x-4(8x-6)=20
13. Determine the kinetic energy of a 2000g roller coaster car that is moving with a speed of 2m/s.
Which of these sentences includes an intransitive verb? She raised her glass in a toast. She set her glass on the table. She laid her napkin on her lap. She rose from the table.
88% as a fraction in simplest form
POLYNOMIAL LONG DIVISION HELP
Story- A Baker's Dozen Questions 1. How does this speech differ from an autobiography or memoir? A) An autobiography or memoir would have a more dispassionate and factual tone. B) The passage delves too much into the details and facts of the speaker's life. C) The passage has a more formal tone than an autobiography or memoir would have. D) An autobiography
{}
## A neat method of integration required for marginals from a joint pdf

Hi, I wonder if anyone can show me a neat way to integrate out x and then y to obtain the marginal pdfs, given the joint $f_{X,Y} (x, y) = y \cdot \exp(-xy - y)$ where $x \geq 0, y \geq 0$. I'm getting tied in knots with integrating by parts again and again. I think there must be a simpler way. Thanks for any help. MD
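One route that avoids integration by parts entirely (a sketch, not from the original thread) is to factor the exponent and recognize each inner integral as a gamma-type integral, $\int_0^\infty y^{s-1} e^{-ay}\, \mathrm{d}y = \Gamma(s)/a^s$:

```latex
f_Y(y) = \int_0^\infty y\, e^{-xy - y}\, \mathrm{d}x
       = y\, e^{-y} \int_0^\infty e^{-xy}\, \mathrm{d}x
       = y\, e^{-y} \cdot \frac{1}{y}
       = e^{-y}, \qquad y \ge 0,

f_X(x) = \int_0^\infty y\, e^{-(x+1)y}\, \mathrm{d}y
       = \frac{\Gamma(2)}{(x+1)^2}
       = \frac{1}{(x+1)^2}, \qquad x \ge 0.
```

So Y is simply Exp(1) and X has a Pareto-type density; no integration by parts is needed, since the second integral is $\Gamma(2)/a^2$ with $a = x+1$.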
{}
# Developer Intro

PyPDF2 is a library and hence its users are developers. This document is not for the users, but for people who want to work on PyPDF2 itself.

## Installing Requirements

pip install -r requirements/dev.txt

## Running Tests

pytest .

We have the following pytest markers defined:

You can locally choose not to run those via pytest -m "not external".

## The sample-files git submodule

The reason for having the submodule sample-files is that we want to keep the size of the PyPDF2 repository small while we also want to have an extensive test suite. Those two goals contradict each other. The resources folder should contain a select set of core examples that cover most cases we typically want to test for. The sample-files might cover a lot more edge cases, the behavior we get when file sizes get bigger, and different PDF producers.

## Tools: git and pre-commit

Git is a command line application for version control. If you don't know it, you can play ohmygit to learn it. Github is the service where the PyPDF2 project is hosted. While git is free and open source, Github is a paid service by Microsoft - but free in a lot of cases.

pre-commit is a command line application that uses git hooks to automatically execute code. This allows you to avoid style issues and other code quality issues. After you run pre-commit install once in your local copy of PyPDF2, it will automatically be executed when you git commit.

## Commit Messages

Having a clean commit message helps people quickly understand what the commit was about, without actually looking at the changes. The first line of the commit message is used to auto-generate the CHANGELOG. For this reason, the format should be:

PREFIX: DESCRIPTION

BODY

The PREFIX can be:

• BUG: A bug was fixed. Likely there is one or multiple issues. Then write in the BODY: Closes #123 where 123 is the issue number on Github. It would be absolutely amazing if you could write a regression test in those cases.
That is a test that would fail without the fix.
• ENH: A new feature! Describe in the body what it can be used for.
• DEP: A deprecation - either marking something as "this is going to be removed" or actually removing it.
• ROB: A robustness change. Dealing better with broken PDF files.
• DOC: A documentation change.
• TST: Adding / adjusting tests.
• DEV: Developer experience improvements - e.g. pre-commit or setting up CI.
• MAINT: Quite a lot of different stuff. Performance improvements are for sure the most interesting changes in here. Refactorings as well.
• STY: A style change. Something that makes PyPDF2 code more consistent. Typically a small change.

## Benchmarks

We need to keep an eye on performance and thus we have a few benchmarks.
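Putting the format together, a full commit message for a bug fix might look like this (the subject line and issue description are made up for illustration; #123 mirrors the placeholder used above):

```text
BUG: Fix crash when a page has no /Contents entry

Pages without a /Contents key are now treated as empty
instead of raising a KeyError.

Closes #123
```

The first line becomes the CHANGELOG entry, so it should describe the change on its own.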
{}