deep-learning, nlp, rnn, transformer, attention-mechanism Title: What are the hidden states in the Transformer-XL? Also, what does the recurrence wiring look like? After exhaustively reading the many blogs and papers on Transformer-XL, I still have some questions before I can say that I understand Transformer-XL (and by extension XLNet). Any help in this regard is hugely appreciated. When we say hidden states are transferred from one segment to another, what exactly is included in these hidden states? Are the weights of the networks implementing the attention mechanism (i.e. calculating the Q, K and V) included? Are the weights involved in calculating the input word embedding included in the hidden state? When the hidden states are transferred during recurrence, is this transfer from the encoder of one segment to the encoder of the next segment? Or is it from the decoder of the current segment to the encoder of the next segment? Is the decoder involved at all in the hidden state transfer? I see images like the following in the papers and blogs. What do the dots represent? Encoders? Decoders? Or an entire unit? I guess the answer to my second question will shed light on this one too. Thank you. By hidden states, they mean the outputs of the layers, i.e., what you get after the feed-forward sub-layer. For Transformer-XL, it is important that these are also what you use as input to the self-attention. Therefore, at inference time, if you want to compute the states recursively by segments (presumably because you cannot fit the entire input in the memory), this is the only thing you need to remember from the previous steps to continue the computation. There is no encoder; you can think of Transformer-XL as a decoder-only model. Transferring the states just means remembering them, so you can do the self-attention over them, but you can no longer back-propagate through them, because you only remember the values, not the computation graph telling you how you got them.
The dots in the scheme correspond to the hidden states: one state per input subword and per layer. The lines between them are the self-attention links.
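The answer above can be sketched numerically. This is a hedged, minimal numpy illustration of segment-level recurrence, where only the previous segment's layer outputs (the hidden states) are cached and attended over; all names, shapes, and the single-layer setup are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend_with_memory(h_cur, mem, Wq, Wk, Wv):
    # Queries come only from the current segment; keys and values are
    # computed over the cached memory concatenated with the current segment.
    ctx = np.concatenate([mem, h_cur], axis=0)   # (mem_len + seg_len, d)
    q, k, v = h_cur @ Wq, ctx @ Wk, ctx @ Wv
    scores = q @ k.T / np.sqrt(h_cur.shape[-1])
    return softmax(scores) @ v                   # (seg_len, d)

rng = np.random.default_rng(0)
d, seg_len = 8, 4
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

mem = np.zeros((0, d))                           # no memory before segment 1
for _ in range(2):                               # two consecutive segments
    h = rng.normal(size=(seg_len, d))            # layer input for this segment
    out = attend_with_memory(h, mem, Wq, Wk, Wv)
    mem = h  # cache the values only; in a real model this is detached (no backprop)
```

Note that only `mem` crosses the segment boundary; `Wq`, `Wk`, `Wv` are the same weight objects throughout, which is the sense in which weights are shared but never "transferred".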
{ "domain": "datascience.stackexchange", "id": 8135, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "deep-learning, nlp, rnn, transformer, attention-mechanism", "url": null }
quantum-mechanics, operators, heisenberg-uncertainty-principle where we have introduced the self-adjoint operator $$H = 0\, Q_0 +(\pi/2) Q_{\pi/2} + \pi Q_{\pi} + (3\pi/2) Q_{3\pi/2} $$ with obviously $$Q_0 := P_1\:,\quad Q_{\pi/2}:= P_{i}\:, \quad Q_{\pi}:= P_{-1}\:, \quad Q_{3\pi/2}:= P_{-i}\:.$$ $H$ has pure point spectrum made of the four eigenvalues $0, \pi/2, \pi, 3\pi/2$. Let us finally consider the time-evolution operator $U_t = e^{-itH}$. According to the definitions above, it reads $$U_t = e^{-i0t} Q_0 +e^{-it\pi/2} Q_{\pi/2} + e^{-it\pi} Q_{\pi} + e^{-i3t\pi/2} Q_{3\pi/2}\:.$$ As a consequence: $$U_0 =I \quad \mbox{and}\quad U_{-1} = F\:.$$ This discussion permits us to prove that the requested lower limit for $$\sigma^{(t)2}_X \sigma^{(t+\delta t)2}_P$$ is zero.
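As a quick numerical sanity check (my own addition, not part of the original answer): the eigenphases of $U_{-1}=e^{iH}$ on the four eigenspaces are exactly $1, i, -1, -i$, the familiar eigenvalues of the Fourier transform.

```python
import numpy as np

eigenvalues = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])  # spectrum of H
phases = np.exp(-1j * (-1) * eigenvalues)  # U_t = e^{-itH} evaluated at t = -1
# phases are 1, i, -1, -i: the four eigenvalues of the Fourier transform F
```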
{ "domain": "physics.stackexchange", "id": 62245, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, operators, heisenberg-uncertainty-principle", "url": null }
c++, ascii-art, role-playing-game

cout << pclass << " " << name << endl;
cout << "[1] Traveller's Encounter" << endl;
cout << "[2] Inventory" << endl;
cout << "[3] Rest (Returns you to full health/mana)" << endl;
cout << "[4] Assign Skillpoints [" << skill << " available]" << endl;
cout << "[5] Shop" << endl;
cout << "[6] Questhall" << endl;
cout << "[7] Dungeons" << endl;
cout << "[98] Save Game" << endl;
cout << "[99] Exit" << endl;
cin >> input;
if (input == "1") { goto sarena; }
else if (input == "99") { goto leave; }
else if (input == "2") { goto inventory; }
else if (input == "3") {
    mhp = stre * 20;
    mmana = inte * 10;
    hp = mhp;
    hp = hp * (bphp + 1);
    hp = hp + bhp;
    mana = mmana;
    goto menue;
}
else if (input == "4") { goto askill; }
else if (input == "5") { goto shop; }
else if (input == "6") { goto questhall; }
else if (input == "7") { goto dungeonmenue; }
else if (input == "98") { goto savegame; }
goto menue;

// Setting Up Enemys
sarena:
{ "domain": "codereview.stackexchange", "id": 26839, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, ascii-art, role-playing-game", "url": null }
Most introductory books on Linear Algebra have a Theorem which says something like Let $A$ be a square $n \times n$ matrix. Then the following are equivalent: • $A$ is invertible. • $\det(A) \neq 0$. • The columns of $A$ are linearly independent. • The columns of $A$ span $R^n$. • The columns of $A$ are a basis in $R^n$. • The rows of $A$ are linearly independent. • The rows of $A$ span $R^n$. • The rows of $A$ are a basis in $R^n$. • The reduced row echelon form of $A$ has a leading 1 in each row. and many other conditions..... What does this mean? It simply means that if you want to check whether any of these conditions is true or false, you can simply pick whichever other condition from the list and check it instead. Your question is: can you, instead of the third or fourth condition, check the second? That's exactly what the Theorem says: YES. • Thanks, a few sections later they give a similar explanation like you said - it just wasn't in the one where they are expecting you to solve these types of problems. – eWizardII Nov 6 '11 at 18:37 • Does the converse apply for all of those? For example, if det=0, does that always imply that the rows are linearly independent? – Asad Saeeduddin Oct 7 '13 at 16:28 • @Asad equivalent means they are all true or all false. If $\det(A)=0$ means the second one is false, which means ALL are false. So yes, $\det(A)=0$ implies the rows are linearly DEPENDENT. – N. S. Oct 7 '13 at 16:33 • @N.S. I see, thanks. – Asad Saeeduddin Oct 7 '13 at 17:06
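A quick numerical illustration of the equivalence (a numpy sketch, not from the original answer): checking the determinant in place of checking rank or column independence gives the same verdict.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [4.0, 3.0]])   # det(A) = 2, so every condition in the list holds
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # second row = 2 * first row, so every condition fails

# det != 0  <=>  full rank  <=>  columns (and rows) linearly independent
assert not np.isclose(np.linalg.det(A), 0) and np.linalg.matrix_rank(A) == 2
assert np.isclose(np.linalg.det(B), 0) and np.linalg.matrix_rank(B) == 1
```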
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.971563964485063, "lm_q1q2_score": 0.8032033819117779, "lm_q2_score": 0.8267117876664789, "openwebmath_perplexity": 328.18813980033383, "openwebmath_score": 0.9089985489845276, "tags": null, "url": "https://math.stackexchange.com/questions/79356/using-the-determinant-to-verify-linear-independence-span-and-basis/79384" }
clustering, model-evaluations, overfitting, model-selection How do I plot the final clusters when my dataset has more than 2 dimensions? I have seen a lot of visualizations around clustering (like the one below): Should I use PCA to reduce the features to 2 and then plot the clusters? Or is there another way to do this? To answer your initial question: yes, you can use the silhouette score with different clustering methods. You could also use the Davies-Bouldin index or the Dunn index. Regarding over-fitting (this is my personal suggestion), you could train the model n times on different samples of the same data to see if the clustering is the same even though the values are changed. Short example: if you have to cluster 5 apples and 6 oranges, the clusters should be the same for 10 apples and 12 oranges. You can find a bit more detail on this here: https://datascience.stackexchange.com/a/20292/103857 For your third query: Calculate distances between data points, as appropriate to your problem. Then plot your data points in two dimensions instead of fifteen, preserving distances as far as possible. This is probably the key aspect of your question. Read up on multidimensional scaling (MDS) for this. Finally, color your points according to cluster membership. (source for third query: https://stats.stackexchange.com/a/173823) Regarding PCA, it's subjective. PCA works well with high correlation. If your dimensions are like apples and oranges, then you're directly affecting your model's performance, so do keep that in check. A bit of EDA would help before you dive into that.
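To make the "reduce to 2-D, then plot" suggestion concrete, here is a minimal PCA projection via SVD. This is a sketch in plain numpy; the data and shapes are made up for illustration.

```python
import numpy as np

def pca_2d(X):
    # Center the data, then project onto the top-2 right singular vectors.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 15))   # 100 points in 15 dimensions
X2 = pca_2d(X)                   # 2-D coordinates, ready to scatter-plot
# When plotting, color each point by its cluster label, e.g. plt.scatter(*X2.T, c=labels)
```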
{ "domain": "datascience.stackexchange", "id": 8175, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "clustering, model-evaluations, overfitting, model-selection", "url": null }
symmetry-breaking By the same dimensional analysis argument you see that $[\psi] = [m]^{3/2}$, and so the VEV $v_q$ has dimensions of mass cubed.
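The counting behind $[\psi] = [m]^{3/2}$ can be made explicit. This is a standard one-line check, assuming natural units, a canonical Dirac kinetic term, and that $v_q$ is the VEV of a fermion bilinear:

```latex
% The action is dimensionless, with [d^4x] = [m]^{-4} and [\partial_\mu] = [m]:
S \supset \int d^4x\, \bar\psi\, i\gamma^\mu \partial_\mu \psi
\;\Rightarrow\; -4 + 2[\psi]_m + 1 = 0
\;\Rightarrow\; [\psi] = [m]^{3/2},
% hence [\bar\psi\psi] = [m]^3, i.e. v_q carries mass dimension 3.
```

where $[\,\cdot\,]_m$ denotes mass dimension.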
{ "domain": "physics.stackexchange", "id": 13837, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "symmetry-breaking", "url": null }
nmr-spectroscopy, history-of-chemistry Title: Historically, how was the connection between NMR peaks and hydrogen atoms made? A large part of the basis of this question may be due to the student who didn’t pay attention in the history section of the NMR course (and not during the technical details bit either), but I find the question intriguing nonetheless. My basic knowledge of the history of NMR spectroscopy is that in the very old, almost prehistoric days, Bloch and Purcell performed some experiments that applied radiofrequency to a sample and by chance got some radiofrequency signal returned. Then I have a big black box and arrive at modern spectrometers that tune themselves into hydrogen spectra and carbon spectra, and I see beautiful Fourier-transformed spectra on my PC screen (unless it’s carbon, in which case the baseline is, of course, fuzzy). I imagine that a lot of research had to be invested until it was realised that each signal in the Fourier-transformed spectrum corresponds to a hydrogen atom, that signals are shifted around depending on how electron-rich or -poor the hydrogens are, how couplings work, and so on. I can imagine that most of this stemmed from trial and error: a certain combination of magnetic field and radiofrequency gives a signal, Fourier transformation seems to happen anyway, and once you have rationalised that ‘signals’ correspond to ‘hydrogens’ in a certain type of spectrum (likely the one that gave the most resonance anyway) you’re half done and the rest is a walk in the park. But there is one step that seems very unintuitive: realising that ‘signals’ correspond to hydrogens. There is another important step, namely figuring out that solutes can be analysed if hydrogen-free ($\ce{CCl4}$) or at least protium-free (deuterated) solvents are used. Now either it was figured out first that hydrogen is the cause of the signals, in which case choosing the correct solvents is a non-issue, but capturing signals from the solute may be. 
Or it was figured out that using certain solvents gave ‘cleaner’ spectra, in which case it might also be interesting to know how the solvents were chosen.
{ "domain": "chemistry.stackexchange", "id": 6372, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "nmr-spectroscopy, history-of-chemistry", "url": null }
By the Pythagorean theorem, $OB^2 = OA^2 + AB^2$, so $$AB = \sqrt{OB^2 - OA^2} = \sqrt{10^2 - 6^2} = \sqrt{64} = 8 \text{ cm}.$$
{ "domain": "exaton.hu", "id": null, "lm_label": "1. Yes\n2. Yes", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9861513881564148, "lm_q1q2_score": 0.8838395350696157, "lm_q2_score": 0.8962513724408292, "openwebmath_perplexity": 358.98978621027345, "openwebmath_score": 0.4151161313056946, "tags": null, "url": "https://www.exaton.hu/qn573/c5fe24-tangent-of-a-circle-example" }
newtonian-mechanics, fluid-dynamics, aerodynamics Title: Why airplanes fly: the final truth Questions about the reasons aircraft fly are frequent among scientists. Since the time I was in high school, even though I now work on the other side of the fluid world (low $Re$ regime), I've kept asking my professors, advisors, and colleagues what their own explanation of flight was. I know about the controversy about the push downward, considered a common fallacy by the NASA website, and about the Anderson argument, denying the erroneous principle of equal times and the overestimated role of the Bernoulli theorem. My best overall and simplest explanation is taken from Anderson, and consists in the following: Somehow the air reaching the leading edge of the wing, after the interaction with it, is going downward. This must be the result of some kind of force, and therefore, by Newton's third law, there must be a force of equal strength in the opposite direction, which pushes the aircraft up. First: why does the air go down? Answer: the angle of attack and shape of the airfoil, together with simple pressure and stagnation arguments. Second: what is the role of the Bernoulli theorem here? If the air is pushed down by means of the "geometry", we don't need the difference of velocity between the upper and the lower part of the wing, but we have it just as a consequence of the change of pressure (due to the shape). Is that right? My second question, actually, is about the most common and sophisticated explanation: the starting-vortex balance. The main argument is: due to the Kutta condition (a body with a sharp trailing edge which is moving through a fluid will create about itself a circulation of sufficient strength to hold the rear stagnation point at the trailing edge), the vorticity "injected" into the surrounding flow by viscous diffusion from the boundary layer generated near the airfoil transforms into a continuum of mini starting vortices.
This leaves the airfoil and remains (nearly) stationary in the flow. It rapidly decays through the action of viscosity.
{ "domain": "physics.stackexchange", "id": 80955, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "newtonian-mechanics, fluid-dynamics, aerodynamics", "url": null }
electrostatics, electric-fields, electric-current, charge Title: When is the free charge density zero at the boundary of dielectrics It is known that across the interface of two different dielectrics, the electric displacement field must satisfy $$(\mathbf{D}_2-\mathbf{D}_1)\cdot\mathbf{\hat{n}}=\sigma$$ where $\sigma$ is the free surface charge density on the boundary. My question is: if both materials are dielectrics (i.e. they have no free charge), how could $\sigma$ (which is free charge indeed) appear at the boundary? In dielectrics with different permittivities but no conductivity, there will be no free charge at the interface upon application of an electric field. However, if the dielectrics also possess different conductivities, which leads to a current flowing across the interface, in general a free interface charge will accumulate at the interface so that the stationary normal electric currents (produced by the normal electric fields together with the conductivities) fulfill the current continuity condition. If the charge relaxation times $\varepsilon/\gamma$ of the two media are equal, no interface charge accumulates.
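The answer's mechanism can be sketched in a few lines (my own illustrative numbers, with $\gamma$ denoting conductivity): current continuity $J=\gamma_1E_1=\gamma_2E_2$ fixes the normal fields, and the jump in $D$ gives the accumulated free charge.

```python
def interface_charge(eps1, g1, eps2, g2, J):
    # Stationary state: J = g1*E1 = g2*E2 fixes the normal fields,
    # and sigma = eps2*E2 - eps1*E1 is the free surface charge.
    return J * (eps2 / g2 - eps1 / g1)

eps0 = 8.854e-12
# Mismatched relaxation times eps/g -> charge accumulates at the interface:
s1 = interface_charge(2 * eps0, 1e-10, 5 * eps0, 3e-10, J=1e-6)
# Matched relaxation times (2*eps0/1e-10 == 6*eps0/3e-10) -> no interface charge:
s2 = interface_charge(2 * eps0, 1e-10, 6 * eps0, 3e-10, J=1e-6)
```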
{ "domain": "physics.stackexchange", "id": 57046, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electrostatics, electric-fields, electric-current, charge", "url": null }
quantum-mechanics, epr-experiment Its eigenstates are antisymmetrized: if $\hat{O}_{A}$ is a spin operator, then $|\psi\rangle$ is its eigenstate. You can check that the expectation values of antisymmetrized operators $\hat{O}_A$ and $\hat{O}_B$ then factorize for $|\psi\rangle$ and don't factorize for $|\chi\rangle$. So we can say that for $|\psi\rangle$ the spins of the particles in detectors $A$ and $B$ (we don't identify them as particles #1 and #2!) are not entangled (it's the analog of a separable state), whereas for $|\chi\rangle$ they are (it's the EPR state for identical particles). So the point is: you can observe the spin of the particle going to detector $A$, but not the spin of the first particle! We can "distinguish" the particle at some region by using local operators at that region, but actually we are measuring the (anti)symmetrized wavefunction of all particles of this type at once, so it's not a real "label". So in the EPR experiment with identical particles, the entanglement should be stated not in terms of particles #1 and #2 but in terms of the observables for detectors $A$ and $B$.
{ "domain": "physics.stackexchange", "id": 27581, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, epr-experiment", "url": null }
with equality iff ##\mathbf A## is a scalar multiple of ##\mathbf A^T##. You could prove this with Schur's Inequality. Alternatively (perhaps using the vec operator to help) recognize that the trace gives an inner product. Direct application of Cauchy-Schwarz gives you ##\big \vert trace\big(\mathbf B^T \mathbf A \big) \big \vert = \big \vert vec\big( \mathbf B\big)^T vec\big( \mathbf A\big)\big \vert \leq \big \Vert vec\big( \mathbf B\big)\big \Vert_2 \big \Vert vec\big( \mathbf A\big)\big \Vert_2 =\big \Vert \mathbf B \big \Vert_F \big \Vert \mathbf A \big \Vert_F## with equality iff ##\mathbf B = \gamma \mathbf A##. (Also note the trivial case: if one or both matrices is filled entirely with zeros, then there is an equality.) In your real skew-symmetric case, ##\mathbf B = \mathbf A^T## and ##\gamma = -1##. And of course in the real symmetric case ##\gamma = 1##.

Pushoam: Then, for the anti-symmetric matrix, ##tr (A^2)= tr (-AA^T) = - A_{ij}A_{ij}## = negative of the sum of the elements of the matrix. Right? I missed writing the square of the elements. The corrected one: Then, for the anti-symmetric matrix, ##tr (A^2)= tr (-AA^T) = - A_{ij}A_{ij}## = negative of the sum of the squares of the elements of the matrix.

Homework Helper Gold Member: I am not very familiar with some of the algebra mentioned, though I don’t think it is very difficult.
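A quick numerical check of both claims (a numpy sketch, not part of the thread): the trace Cauchy-Schwarz bound, and the skew-symmetric identity for ##tr(A^2)##.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))
A = M - M.T                        # real skew-symmetric: A^T = -A
B = rng.normal(size=(4, 4))

# Cauchy-Schwarz for the trace (Frobenius) inner product:
lhs = abs(np.trace(B.T @ A))
rhs = np.linalg.norm(A, 'fro') * np.linalg.norm(B, 'fro')
assert lhs <= rhs + 1e-12

# Skew-symmetric case: tr(A^2) = tr(-A A^T) = -(sum of squared entries)
assert np.isclose(np.trace(A @ A), -(A ** 2).sum())
```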
{ "domain": "physicsforums.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9525741214369554, "lm_q1q2_score": 0.8358616753386429, "lm_q2_score": 0.8774767826757123, "openwebmath_perplexity": 1591.3757532355291, "openwebmath_score": 0.952686071395874, "tags": null, "url": "https://www.physicsforums.com/threads/calculating-eigenvalues-help.934324/" }
swift, a-star

let straightLineDistances = [
    "biskra"     : ["annaba": 220.0],
    "batna"      : ["annaba": 140.0],
    "barika"     : ["annaba": 200.0],
    "setif"      : ["annaba": 100.0],
    "constantine": ["annaba": 80.0],
    "bejaia"     : ["annaba": 30.0],
    "oued"       : ["annaba": 320.0],
    "annaba"     : ["annaba": 0.0]
]

final class Vertex: State, CustomDebugStringConvertible {
    let label: String

    init(label: String) {
        self.label = label
    }

    func successors() -> [Successor<Vertex>] {
        return adjacencyList[label]!.map { x in
            Successor<Vertex>(state: Vertex(label: x.0), cost: x.1)
        }
    }

    func heuristic(goal: Vertex) -> Double {
        return straightLineDistances[label]![goal.label]!
    }

    var id: String { return label }
    var debugDescription: String { return id }
}

let solution = AStar(Vertex(label: "biskra"), goal: Vertex(label: "annaba"))
print(solution)

And the output was the expected A* solution. But I'm more concerned about the elegance of this implementation. This code isn't bad at all. You make great use of the generic features of Swift, and you also go for the functional approach over the iterative one whenever it's a good fit. Here are some ways to make your code "Swiftier":
{ "domain": "codereview.stackexchange", "id": 20275, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "swift, a-star", "url": null }
A question on the same concept The number of television sets sold by Store R last month was approximately what percent less than the number of television sets sold by Store T last month? (The number of television sets sold by Store R was 20 and the number sold by Store T was 45, as per the attached figure) A) 40% B) 56% C) 86% D) 95% E) 125% So simplify it: R is what % less than T, so T comes after THAN and becomes BEFORE, and R becomes AFTER. Now we are looking for % less = $$\frac{Before-After}{Before}*100=\frac{45-20}{45}*100=\frac{2500}{45}=55.55$$% or ~56%. But say you took it the other way: $$\frac{45-20}{20}*100=\frac{2500}{20}=125$$% ... AND that wrong answer is there in the choices, so be careful. I would add more examples with slightly different wordings later. _________________ Percentage increase/decrease- WHAT should be the denominator??   [#permalink] 29 Jan 2019, 05:54
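The before/after rule in code (a trivial sketch; the point is simply that the reference quantity named after "than" goes in the denominator):

```python
def percent_less(before, after):
    # "after is what percent less than before": divide by the reference value.
    return (before - after) / before * 100

assert round(percent_less(45, 20), 2) == 55.56   # ~56%, the correct choice
assert round((45 - 20) / 20 * 100) == 125        # wrong denominator: the trap answer
```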
{ "domain": "gmatclub.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9890130593124399, "lm_q1q2_score": 0.8131557808473645, "lm_q2_score": 0.8221891239865619, "openwebmath_perplexity": 2259.335982646407, "openwebmath_score": 0.6461113691329956, "tags": null, "url": "https://gmatclub.com/forum/percentage-increase-decrease-what-should-be-the-denominator-287528.html" }
ros-melodic, docker Title: Problems installing ROS1 Packages I'm normally working with ROS2, so probably my problems originate from not having that much experience with ROS in general, and especially not with ROS1, but I have a package that's only provided for ROS1 and that I need for generating some worlds. However, I really struggle to install it. What I tried First thing I tried was just running rosdep install virtual_maize_field which resulted in the following error: ERROR: Rosdep cannot find all required resources to answer your query Missing resource virtual_maize_field ROS path [0]=/opt/ros/melodic/share/ros ROS path [1]=/opt/ros/melodic/share Next, I tried cloning the repository and then trying to install it with rosdep via the path: git clone https://github.com/FieldRobotEvent/virtual_maize_field.git rosdep install --from-path virtual_maize_field This resulted in a lot of packages being installed, but when I tried to run a script, I just got the response that the package could not be found. [rospack] Error: package 'virtual_maize_field' not found After that, I tried following this article, but that got me the same result as before; the package couldn't be found :/ Thanks in advance :) Originally posted by isiko on ROS Answers with karma: 13 on 2022-05-13 Post score: 0 Your linked repository seems to have a ros2 branch here: https://github.com/FieldRobotEvent/virtual_maize_field/tree/ros2 Did you try it? The code seems to be mostly non-ROS-specific, to be honest; it can be ported to ROS2. Did you build it with catkin_make? rosdep is for dependency packages Originally posted by ljaniec with karma: 3064 on 2022-05-13 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by isiko on 2022-05-14: Thanks, didn't even realize that :D
{ "domain": "robotics.stackexchange", "id": 37665, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros-melodic, docker", "url": null }
Probability of more than 1 goal at the end of a match Probability of a match ending with - 0 goals is 40%, - 1 goal is 45% and - more than 1 goal is 15%. Now at half-time, the score is 1-0. What is the probability of more than 1 goal at the end of the match? A. Is it still 15%, or B. is it 25% (100% = probability of 1 goal + probability of more than 1 goal only, eliminating the probability of 0 goals)? Or is there anything I missed out? Thanks for the help. • There's something troubling with this question, and that is your implicit assumption that being 1-0 at half time maintains the original probabilities in some way. It's possible that being 1-0 at half time makes the teams behave differently. The leading team being more defensive now, making the probability of new goals very unlikely. Or maybe it drives the losing team into more aggressive tactics, increasing the probability that one of the teams will make a new goal. I will have to make some simplifying assumptions in order to give a reasonable answer. I will have to assume that: 1. the score in each half follows the same distribution, and 2. the score in the second half is independent of the score in the first half. In real life, I doubt this is entirely true. Maybe if one team scores in the first half, it's more likely that there's a goal in the second half because the team who's behind is playing more aggressively. But I think I need to make this assumption to answer your question based on the information given. Let's define two random variables: $$X_{1}$$ = the number of goals in the first half and $$X_{2}$$ = the number of goals in the second half. Again, I'm assuming that the X's are independent and identically distributed.
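Interpretation B from the question can be written out explicitly. This is just the renormalization arithmetic; whether the assumption behind it is justified is exactly what the answer above discusses.

```python
p0, p1, p_more = 0.40, 0.45, 0.15
# At 1-0, "0 goals in the match" is ruled out; renormalize over the rest.
p_more_given_goal = p_more / (p1 + p_more)   # 0.15 / 0.60 = 0.25
```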
{ "domain": "matchmaticians.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9838471637570106, "lm_q1q2_score": 0.8599584184032565, "lm_q2_score": 0.8740772450055545, "openwebmath_perplexity": 403.5599812197584, "openwebmath_score": 0.7680110335350037, "tags": null, "url": "https://matchmaticians.com/questions/u937mh/probability-of-more-than-1-goal-at-the-end-a-match-question" }
The statement wishing to be proven is, as mentioned, equivalent to whether or not for $n>2$ and $k\in\Bbb N$ we have the following relation: $$k^n < (k+2)^n - (k+1)^n$$ This is false. Consider the validity of the statement for $k=6$, $n=3$. One has $6^3=216$ and $(6+2)^3 - (6+1)^3 = 512 - 343 = 169$, so $$6^3 = 216 > 169 = (6+2)^3 - (6+1)^3$$ Why ... 4 In hoping to see this more clearly, let's start with an example. Consider $\tau$ the Euclidean topology on $\mathbb{R}$, that is to say the topology induced by the distance $d(x,y) := |x-y|$. The open interval $(1,2)$ is an element of the topology $\tau$, i.e. $(1,2)$ is open. Therefore $(1,2) \in \tau$. More generally, any interval of the form ... 3 You have the correct approach. Just clean up a few details. You are trying to show that given any $\epsilon > 0$, there exists a partition $P_\epsilon$ such that $U(P_\epsilon,f) - L(P_\epsilon,f) < \epsilon$. First define $M$, $$M := \sup_{x \in [a,b]} |f(x)|.$$ Then it follows that for $i = 1, 2, \ldots, n$ we have $$M_i - m_i \leqslant 2M,$$ ... 3 Since $(e^x)' = e^x$ for all $x$, and $e^x > 1$ for $x > 0$, we have, for $x > 0$, \begin{array}\\ e^x-1 &=\int_0^x e^t dt\\ &> \int_0^x 1 dt \qquad\text{since } e^t > 1 \text{ for } t > 0\\ &=x\\ \end{array} 3 That is the proof. Why are you asking?! :) 3 HINT: Show that the relation
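The counterexample in the first answer is easy to verify mechanically (a one-liner check, nothing beyond the arithmetic already shown):

```python
k, n = 6, 3
assert k ** n == 216
assert (k + 2) ** n - (k + 1) ** n == 512 - 343 == 169
assert not (k ** n < (k + 2) ** n - (k + 1) ** n)   # the claimed inequality fails
```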
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9546474168650673, "lm_q1q2_score": 0.8056222760107878, "lm_q2_score": 0.8438950986284991, "openwebmath_perplexity": 194.02123763679134, "openwebmath_score": 0.9418020844459534, "tags": null, "url": "http://math.stackexchange.com/tags/proof-writing/hot?filter=month" }
The theory is a strange and novel mix of logic and topology and computer science. From the web site: Homotopy Type Theory refers to a new interpretation of Martin-Löf’s system of intensional, constructive type theory into abstract homotopy theory. Propositional equality is interpreted as homotopy and type isomorphism as homotopy equivalence. Logical constructions in type theory then correspond to homotopy-invariant constructions on spaces, while theorems and even proofs in the logical system inherit a homotopical meaning. As the natural logic of homotopy, constructive type theory is also related to higher category theory as it is used e.g. in the notion of a higher topos. To me this is a little bit easier to read than Jacob Lurie's Higher Topos Theory. In that case, he is writing to motivate some universal constructions - $\infty$-categories, etc - that appear in Topology. - This is an interesting comment, but has nothing to do with the question. –  Andres Caicedo Sep 3 '14 at 22:22 @AndresCaicedo I am arguing Voting theory and Social Choice theory are a kind of 'Mathematical Philosophy'. Instead of a book, I offered an online course. Hmm... maybe this is not metamathematics ? –  john mangual Sep 3 '14 at 22:26 Yes, clearly the title of the question is a misnomer, but the poster has specified the subject they mean. –  Andres Caicedo Sep 3 '14 at 22:31 The edit is nice. Again, what does it have to do with "Incompleteness theorems, Hilbert's tenth problem, the Continuum Hypothesis,..."? (It is clearly relevant to the study of foundations, of course.) –  Andres Caicedo Sep 3 '14 at 23:00
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9546474207360066, "lm_q1q2_score": 0.8095102352365953, "lm_q2_score": 0.8479677602988602, "openwebmath_perplexity": 697.4472118240425, "openwebmath_score": 0.7635596394538879, "tags": null, "url": "http://math.stackexchange.com/questions/918490/where-can-i-learn-about-mathematical-philosophy" }
contact three more members (round 2), each of whom contacts three more members (round 3), etc. How many sides does the polygon have? The Geometric Mean (G.M.) 1, 2, 6, 24, 120, ... c) Find the value of the 15th term. With the adoption of the Common Core curriculum, a new topic added to Algebra 2 is sequences. Students should have the sequence right before they start the work. 3 Geometric Sequences and Series 667 Finding the nth Term Given a Term and the Common Ratio One term of a geometric sequence is $a_3 = 5$. The 1560 members of the Great Pumpkin Society have a method of quickly notifying members. This is a geometric sequence with first term $a_1 = 2$; the quotient of successive terms is the common ratio, given by $r = \frac{a_{n+1}}{a_n} = \frac{2^{n+1}}{2^n} = 2$. 3, 6, 12, 24, 48, … Write an equation for this geometric sequence and find the. You can boost up your problem solving on arithmetic and geometric progressions through this wiki. A geometric sequence can be defined by a series where a fixed ratio is multiplied to reach each number of the series, starting from the first. In addition, they can write two equations for the nth terms of geometric sequences and also extend the formula to find a term in a geometric sequence given a term in the sequence and the common ratio. Four real-world problems are included in this Geometric Sequences and Series resource. Sequence A: Sequence B: Solution: Sequence A is an arithmetic sequence since every. Let us consider a G.P. 5) Determine the number of terms in each arithmetic sequence: a) 38, 36, 34, …, −20 b) −5, −8, −11, …, −269 6) Determine $a$ and $d$ and then write the. The president and treasurer each contact three members (round 1), each of whom contact three more members (round 2), each of whom contacts three more members (round 3), etc. Let's first compare sequences to relations or functions from the Algebraic Functions section. Determine the general term
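The "nth term given a term and the common ratio" idea can be sketched directly. Illustrative values only: $a_3 = 5$ comes from the text, but the ratio used with it here is an assumption.

```python
def nth_term(a1, r, n):
    # nth term of a geometric sequence: a_n = a1 * r**(n - 1)
    return a1 * r ** (n - 1)

# 3, 6, 12, 24, 48, ... has first term 3 and common ratio 2:
assert [nth_term(3, 2, n) for n in range(1, 6)] == [3, 6, 12, 24, 48]

# Given a single term and the ratio, shift the exponent instead:
a3, r = 5, 2                 # a_3 = 5 as in the text; r = 2 is an assumed ratio
a6 = a3 * r ** (6 - 3)       # a_6 = a_3 * r^3 = 40
```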
{ "domain": "amicidellacattolica.it", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9780517418068649, "lm_q1q2_score": 0.8213032658134325, "lm_q2_score": 0.839733963661418, "openwebmath_perplexity": 681.0068812401423, "openwebmath_score": 0.5177739858627319, "tags": null, "url": "http://dtzi.amicidellacattolica.it/geometric-series-word-problems-pdf.html" }
we will learn how to predict a future value using the least-squares regression method. In PLS, the predictors are replaced by x-scores. X̄ = mean of x values, Ȳ = mean of y values, SDx = standard deviation of x, SDy = standard deviation of y, and r = (NΣxy − ΣxΣy) / √((NΣx² − (Σx)²)(NΣy² − (Σy)²)). Some Example (Python) Code.
{ "domain": "mailigniter.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9820137931962462, "lm_q1q2_score": 0.8051465669776268, "lm_q2_score": 0.8198933381139646, "openwebmath_perplexity": 337.46152239111694, "openwebmath_score": 0.5033019781112671, "tags": null, "url": "https://mailigniter.com/benzoic-acid-sedtgz/page.php?tag=b3b2bb-least-squares-regression-method-formula" }
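The correlation coefficient formula above translates directly into code; this sketch (function names are mine) also recovers the least-squares slope and intercept via b = r·SDy/SDx and a = Ȳ − b·X̄:

```python
from math import sqrt

def pearson_r(xs, ys):
    """r = (N*Sxy - Sx*Sy) / sqrt((N*Sxx - Sx^2) * (N*Syy - Sy^2))."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    syy = sum(y * y for y in ys)
    return (n * sxy - sx * sy) / sqrt((n * sxx - sx * sx) * (n * syy - sy * sy))

def least_squares(xs, ys):
    """Intercept a and slope b of the least-squares line y = a + b*x."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sdx = sqrt(sum((x - xbar) ** 2 for x in xs) / n)
    sdy = sqrt(sum((y - ybar) ** 2 for y in ys) / n)
    b = pearson_r(xs, ys) * sdy / sdx
    return ybar - b * xbar, b

xs, ys = [1, 2, 3, 4], [3, 5, 7, 9]   # exactly y = 2x + 1
a, b = least_squares(xs, ys)
print(round(a, 10), round(b, 10))      # 1.0 2.0
```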
vba, excel Dim rngTopLeftCell As Range Dim rngSearchRange As Range Dim strErrorMessage As String '/====================================================================================================================================================== '/================================================== '/ Open Worksheet '/================================================== wbCurrentWorkbook.Activate wsCurrentWorksheet.Activate wsCurrentWorksheet.Cells.EntireRow.Hidden = False '/================================================== '/ Find TopLeftCell '/================================================== If IsMissing(lngEndRow) Then lngEndRow = wsCurrentWorksheet.Rows.Count If IsMissing(lngEndColumn) Then lngEndColumn = wsCurrentWorksheet.Columns.Count
{ "domain": "codereview.stackexchange", "id": 15374, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "vba, excel", "url": null }
python, web-scraping, beautifulsoup, selenium, scrapy from base64 import b64encode from datetime import date from typing import ClassVar, Iterable from attr import dataclass from bs4 import BeautifulSoup, SoupStrainer, Tag from requests import Session @dataclass class Result: caption: str when: date path: str @classmethod def from_list_item(cls, item: Tag) -> 'Result': return cls( caption=item.a.text, path=item.a['href'], when=date.fromisoformat(item.find('span', recursive=False).text), ) class TsinghuaSite: subdoc: ClassVar[SoupStrainer] = SoupStrainer(name='ul', class_='search_list') def __init__(self): self.session = Session() def __enter__(self) -> 'TsinghuaSite': return self def __exit__(self, exc_type, exc_val, exc_tb): self.session.close() def search(self, query: str) -> Iterable[Result]: with self.session.post( 'https://www.ctwx.tsinghua.edu.cn/search.jsp', params={'wbtreeid': 1001}, data={ 'lucenenewssearchkey': b64encode(query.encode()), '_lucenesearchtype': '1', 'searchScope': '0', 'x': '0', 'y': '0', }, ) as resp: resp.raise_for_status() doc = BeautifulSoup(markup=resp.text, features='html.parser', parse_only=self.subdoc) for item in doc.find('ul', recursive=False).find_all('li', recursive=False): yield Result.from_list_item(item) def main(): with TsinghuaSite() as site: query = '尹至' results = tuple(site.search(query))
{ "domain": "codereview.stackexchange", "id": 41822, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, web-scraping, beautifulsoup, selenium, scrapy", "url": null }
fourier-series, complex Update for $\alpha=0$ It is trivial that $\exp(-j\alpha n) \stackrel{\alpha=0}{=} 1$, so the sum equals $N$. It can be thought of as the limiting case of the final result after $(b)$ when $\alpha \to 0$: then $$e^{-j\frac{\alpha}{2}(N-1)} \frac{\sin(N\alpha/2)}{\sin(\alpha/2)} \stackrel{\alpha \to 0}{\to} N$$ where the limit is evaluated by L'Hôpital's rule. Further interesting discussion can be found in Laurent Duval's answer.
{ "domain": "dsp.stackexchange", "id": 6925, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "fourier-series, complex", "url": null }
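The closed form and its α → 0 limit are easy to verify numerically; a sketch using only the standard library (function names are mine):

```python
import cmath
import math

def direct_sum(alpha, N):
    """Sum_{n=0}^{N-1} exp(-j*alpha*n), computed term by term."""
    return sum(cmath.exp(-1j * alpha * n) for n in range(N))

def closed_form(alpha, N):
    """exp(-j*alpha*(N-1)/2) * sin(N*alpha/2) / sin(alpha/2), for alpha != 0."""
    return (cmath.exp(-1j * alpha * (N - 1) / 2)
            * math.sin(N * alpha / 2) / math.sin(alpha / 2))

N = 8
for alpha in (0.3, 1.7, 2.9):
    assert abs(direct_sum(alpha, N) - closed_form(alpha, N)) < 1e-12

# As alpha -> 0 the closed form tends to N, matching the trivial sum of ones.
assert abs(closed_form(1e-9, N) - N) < 1e-6
```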
general-relativity, cosmological-constant Title: Why is the cosmological constant on the left of the EFE? My confunion stems from the common knowledge that: in the Einstein Field equations, the terms are ordered so that terms for the matter and energy are on the right, and on the left you find the terms for curvature of spacetime, due to the aforementioned energy and matter. At first look this question appears fairly obvious; it's on the left because it was discovered after the field equations were formulated and had to be slotted in later to balance everything out. However if you consider what I mentioned in the first paragraph- the left side has the terms for curvature, I find it confusing that amongst all these terms for curvature there is the cosmological constant: which is, according to Wikipedia: 'the value of the energy density of the vacuum of space.' Which is an energy term. This led me to think, The constant is a balancing term so that the equations work, this may sound ridiculous but are we sure it is actually an energy term, representative of dark energy. The fact that it is on the left with the curvature terms got me thinking if there was a possibility it's just an intrinsic value of spacetime that arises due to curvature? The other obvious answer is that you just move it over to the right somewhere during the calculations. This doesn't make sense to me either, because, if the idea of: energy on the right and curvature on the left, requires terms to be shifted around in order to work then surely it's not supposed to work like that. There is probably a very simple answer to this that I'm overlooking, or I'm over thinking it, the thought just bugged me, any insight would be helpful. The cosmological constant can be placed anywhere in the EFE. Einstein placed it on the left, I'd think, with the idea that it is simply a property of spacetime. It is much more recently (20, 30 years ago?) that it's been labeled as dark energy.
{ "domain": "physics.stackexchange", "id": 41450, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "general-relativity, cosmological-constant", "url": null }
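To make the "just move it over" point concrete, here is the bookkeeping in LaTeX (sign conventions vary between textbooks; this is one common form):

```latex
% Cosmological constant on the left, read as a property of spacetime:
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}
% The same equation with the term moved to the right, read as vacuum energy:
G_{\mu\nu} = \frac{8\pi G}{c^4}\left( T_{\mu\nu} + T^{(\mathrm{vac})}_{\mu\nu} \right),
\qquad T^{(\mathrm{vac})}_{\mu\nu} = -\frac{c^4 \Lambda}{8\pi G}\, g_{\mu\nu}
```

Both lines are the same equation; which side the term lives on is purely a matter of interpretation, as the answer says.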
evolution, dna, natural-selection It seems plausible to me that we (advanced life) could have a biological mechanism to "write" needed alterations into either our own DNA or our reproductive DNA over time, triggering the very specific evolutionary developments necessary to our survival without relying on random mutation. My question: Is this possible? Does any similar mechanism exist that we know of? If not, how can so many specific (advanced) evolutionary leaps be otherwise explained? This entire answer will be long, so read the short part first, then read the rest if you (or anyone else) are curious. Citations are included in the long section. I can include additional citations in the short section if needed. Long Story Short Your question touches on some common misconceptions about how the evolutionary process works. Organisms don't "want" to evolve traits. Traits evolve through the biological processes of random mutation and natural selection. (Well, OK, I'd love to evolve an extra pair of hands, but that is not possible.) Natural selection works by modifying existing traits. Your turtle can stare all she wants at food out of reach, but she will not evolve a longer neck. Instead, natural variation exists among neck lengths of the turtles because of variation in the genes that determine features related to overall body size. Those individuals with longer necks may be able to get a bit more food, live a little longer, and reproduce a little more. They will pass along their genes to their offspring, so perhaps more of their offspring will also have longer necks. Over many generations, the turtles may have somewhat longer necks. A common misconception is that the traits of organisms are precisely adapted for a specific need. They are not, for a few reasons. First, natural selection occurs relative to the current environment. Adaptations that work well in one environment may not be so useful in another environment.
Environments are rarely stable over evolutionary time so traits are subject to constant change. Next, as mentioned above, natural selection can only work on what traits are present. While an extra set of arms would be handy, I am a tetrapod. My four appendages, along with the appendages of all other tetrapods, trace back to our common ancestor. The appendages of all tetrapods are modifications of that ancestral trait.
{ "domain": "biology.stackexchange", "id": 4330, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "evolution, dna, natural-selection", "url": null }
signal-analysis, audio, frequency-spectrum import re import subprocess TEMP_SOUND = "/tmp/vibrato.delme.wav" TEMP_DATA = "/tmp/vibrato.delme.txt" DURATION = 2 VERBOSE = False print("Recording for {} seconds...".format(DURATION)) subprocess.call( "/opt/local/bin/sox -d {} trim 0 {} >/dev/null 2>&1".format(TEMP_SOUND, DURATION), shell=True) print(" ... done. Analyzing...") subprocess.call( "/usr/local/bin/aubiopitch {} | tail -n +10 > {}".format(TEMP_SOUND, TEMP_DATA), shell=True) short_list = [] short_max = 5 long_list = [] long_max = 10 last_slope = None first_crossing_time = None time_freq_re = re.compile(r'([^ ]+) ([^ ]+)') samples = 0
{ "domain": "dsp.stackexchange", "id": 5741, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "signal-analysis, audio, frequency-spectrum", "url": null }
c++, optimization, matrix W00 = new int*[m_Height]; W01 = new int*[m_Height]; W02 = new int*[m_Height]; W03 = new int*[m_Height]; W04 = new int*[m_Height]; W05 = new int*[m_Height]; W06 = new int*[m_Height]; W07 = new int*[m_Height]; W08 = new int*[m_Height]; W09 = new int*[m_Height]; W10 = new int*[m_Height]; W11 = new int*[m_Height]; W12 = new int*[m_Height]; W13 = new int*[m_Height]; W14 = new int*[m_Height]; W15 = new int*[m_Height]; EXP_IN00 = new int*[m_Height]; EXP_IN01 = new int*[m_Height]; EXP_IN02 = new int*[m_Height]; EXP_IN03 = new int*[m_Height]; EXP_IN04 = new int*[m_Height]; EXP_IN05 = new int*[m_Height]; EXP_IN06 = new int*[m_Height]; EXP_IN07 = new int*[m_Height]; EXP_IN08 = new int*[m_Height]; EXP_IN09 = new int*[m_Height]; EXP_IN10 = new int*[m_Height]; EXP_IN11 = new int*[m_Height]; EXP_IN12 = new int*[m_Height]; EXP_IN13 = new int*[m_Height]; EXP_IN14 = new int*[m_Height]; EXP_IN15 = new int*[m_Height];
{ "domain": "codereview.stackexchange", "id": 6326, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, optimization, matrix", "url": null }
# How do I prove this function is monotonic? Let $f:\mathbb R\to \mathbb R$ be a function such that $f(x+y)=f(x)+f(y)$ and $f(xy)=f(x)f(y)$ for every $x,y\in \mathbb R$ and $f(1)=1$. In order to prove this function is 1-1, I just need to prove this function is monotonic. Anyone has some ideas how to proceed? Thanks • Monotonic only implies one-to-one if the function is continuous. – Jack M May 11 '15 at 0:45 • @JackM If this function is monotonic I can prove this one is $1-1$. – user42912 May 11 '15 at 0:47 • You have enough info to prove this is the identity function – matt biesecker May 11 '15 at 0:50 • @mattbiesecker I know, if this function is monotone I can prove this function is the identity. – user42912 May 11 '15 at 0:52 • It is easy to prove on $Q$, $f$ is id. – Yimin May 11 '15 at 0:57 We'll show that $f$ is monotone increasing. Notice that if $x\geq 0$ then $f(x)=f(\sqrt{x})^2\geq 0$. Thus if $x\geq y$, then $x-y \geq 0$, so $f(x)-f(y) = f(x-y) \geq 0$, so that $f(x) \geq f(y)$.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9748211568364099, "lm_q1q2_score": 0.8037029204289325, "lm_q2_score": 0.8244619177503206, "openwebmath_perplexity": 342.14008321364116, "openwebmath_score": 0.9118213057518005, "tags": null, "url": "https://math.stackexchange.com/questions/1276443/how-do-i-prove-this-function-is-monotonic" }
particle-physics, collision, protons, large-hadron-collider Title: What is diffractive dissociation in collisions? Correct me if I am wrong: if we collide protons with protons then we can have elastic scattering, inelastic scattering (large momentum transfer between partons), and diffractive dissociation. I am assuming that these three processes occur with a significant dependence on the impact parameter. I understand the first two processes (elastic, inelastic). However, the concept of diffractive dissociation is somewhat confusing. Is it just a process in which, after the interaction, one or both of the protons dissociate into a spray of partons (which eventually hadronize)? If so, how are those processes interesting (except maybe to understand the PDF of the proton slightly better)? Are all such diffractive dissociation processes discarded in a physics analysis? Any insight is appreciated. Thanks. Just some terminology: Elastic scattering refers to processes in which both the projectile and the target stay intact, and no extra particles are produced. Inelastic scattering is everything else. "Diffractive" scattering is not a completely precise term. It refers to high energy, small angle scattering, which can be treated using the semiclassical approximation, in analogy with geometric optics (hence the name). Diffraction can be elastic or inelastic. Indeed, in the black disk limit the two cross sections are equal. Diffractive dissociation is a diffractive process in which the projectile is disintegrated, but the target stays intact (or vice versa). The text book example, originally studied by Glauber and others, is the diffractive dissociation of the deuteron (a very weakly bound nucleus) in scattering on a heavy nucleus. This calculation is described in standard text books on quantum mechanics, for example in Landau and Lifschitz. So why do we care about diffractive dissociation in QCD?
Historically, it was important to ignore "soft" processes, and focus on hard scattering (reactions like DIS, jet production, etc.). Hard scattering revealed partons inside the proton, scaling, and asymptotic freedom. It established QCD as the correct theory of the strong interaction.
{ "domain": "physics.stackexchange", "id": 75576, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "particle-physics, collision, protons, large-hadron-collider", "url": null }
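For reference, the black-disk statement in the answer can be written out: for scattering off a fully absorbing disk of radius R, the elastic and inelastic cross sections coincide,

```latex
\sigma_{\mathrm{el}} = \pi R^2, \qquad
\sigma_{\mathrm{inel}} = \pi R^2, \qquad
\sigma_{\mathrm{tot}} = \sigma_{\mathrm{el}} + \sigma_{\mathrm{inel}} = 2\pi R^2
```

The elastic piece here is the "shadow" of the absorption, which is exactly the diffractive analogy the answer draws with geometric optics.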
ros, navigation, clearpath, rosserial, publisher Title: using ros::shutdown() in arduino code Hey there guys, I am trying to write an Arduino-based code for my robot that would allow me to use a button on the PS3 joystick as an emergency kill-switch. I thought using ros::shutdown() would work, but apparently the arduino ros library doesn't recognize it as a function. Here is the code. #include <ArduinoHardware.h> #include <ros.h> #include <geometry_msgs/Twist.h> #include <sensor_msgs/Joy.h> ros::NodeHandle nh; geometry_msgs::Twist msg; void joyCall(const sensor_msgs::Joy& joy){ if (joy.buttons[14]==1) ros::shutdown(); } ros::Subscriber<sensor_msgs::Joy> sub("bluetooth_teleop/joy", &joyCall); ros::Publisher pub("/cmd_vel", &msg); void setup() { nh.initNode(); nh.advertise(pub); nh.subscribe(sub); } void loop(){ msg.linear.x=0.1; pub.publish(&msg); nh.spinOnce(); } Basically, I am publishing a constant velocity to the velocity node, but I want to kill the publisher as soon as a button on the joystick is pressed. However, I get this compiler error: sketch_jun24a.ino: In function ‘void joyCall(const sensor_msgs::Joy&)’: sketch_jun24a.ino:13:3: error: ‘shutdown’ is not a member of ‘ros’ Any help would be extremely useful. Thanks! Originally posted by Adi on ROS Answers with karma: 26 on 2015-06-25 Post score: 0 Apparently ros::shutdown() is not a function in the arduino library, so that's that! In case anyone else ever has the same problem, remember this message!
{ "domain": "robotics.stackexchange", "id": 22012, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, navigation, clearpath, rosserial, publisher", "url": null }
- Applying the Log sum inequality (direct consequence of Jensen inequality) $$\sum a_i \log \frac{ b_i}{a_i} \le a \log \frac{b}{a}$$ where $a=\sum a_i$ and $b=\sum b_i$, $a_i\ge 0$, $b_i\ge 0$; setting $b_i=x_i/n$ $a_i=1/n$ we get $$\frac{1}{n}\sum \log x_i \le \log \left( \frac{\sum x_i}{n}\right)$$ or $$\log\left (\frac{x_1+ \ldots + x_n}{n} \right) \geq \frac{ \log x_1 +\log x_2 +\cdots \log x_n}{n}$$ - Let $S(n)$ denote the statement $$S(n):\; \frac{x_1+x_2+\cdots+x_n}{n}\geq\sqrt[n]{x_1x_2\ldots x_n},\quad n\in\mathbb{N}.$$ Base step ($n=1$): The statement $S(1)$ says that $\frac{x_1}{1}\geq\sqrt[1]{x_1}$, which is true because $x_1 = x_1$. Base step ($n=2$): The statement $S(2)$ says that $$\frac{x_1+x_2}{2}\geq\sqrt{x_1x_2},\tag{1}$$ which is true because $$a\leq x \leq b \longleftrightarrow a+b\geq x+\frac{ab}{x}, \qquad 0<a\leq b,\; x>0$$
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9879462179994596, "lm_q1q2_score": 0.8145240399177142, "lm_q2_score": 0.8244619242200082, "openwebmath_perplexity": 213.34958618805925, "openwebmath_score": 0.9681494832038879, "tags": null, "url": "http://math.stackexchange.com/questions/691807/proofs-of-am-gm-inequality/830897" }
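Both forms of the AM-GM inequality above (the direct one and the log form obtained from the log sum inequality) can be spot-checked numerically; a small sketch, with function names of my own:

```python
import math
import random

def am(xs):
    """Arithmetic mean."""
    return sum(xs) / len(xs)

def gm(xs):
    """Geometric mean via logs to avoid overflow: exp(mean of log x_i)."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

random.seed(0)
for _ in range(1000):
    xs = [random.uniform(0.01, 100) for _ in range(random.randint(1, 10))]
    # AM >= GM, with a tiny slack for floating-point rounding.
    assert am(xs) >= gm(xs) - 1e-9
    # Equivalent log form: log(mean) >= mean of logs.
    assert math.log(am(xs)) >= sum(math.log(x) for x in xs) / len(xs) - 1e-12
```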
Your answer should be: Total sum − (sum of multiples of $3$ + sum of multiples of $7$ − sum of multiples of $21$), since $21$ is the LCM of $3$ and $7$. • The word you used "LCM" made my concept clear. Thank you – Suleman Jan 2 '17 at 18:51 • I'm not sure whether the analogy with sets is helpful here, since sets don't have a notion of "negative membership", so the sum of the elements in $\mathbb{N}_{100} \setminus (3\,\mathbb{N}_{100} \cup 7\,\mathbb{N}_{100})$ would be exactly right, since the multiples of $21$ don't need to be "added back in". – Joshua Taylor Jan 2 '17 at 20:56 • @JoshuaTaylor If you define the "summation measure" on $\mathbb{N}$ to be $\mu(\{x\}) = x$, then invoking the Inclusion-Exclusion principle shows this. – Henricus V. Jan 3 '17 at 0:31 Number of multiples of $3$ between $1$ & $100$ = $[\frac{100}{3}]$ = $33$, number of multiples of $7$ between $1$ & $100$ = $[\frac{100}{7}]$ = $14$, number of multiples of $21$ between $1$ & $100$ = $[\frac{100}{21}]$ = $4$, where $[x]$ is the greatest integer (floor) function. Sum of the first 100 terms excluding terms divisible by $3$ and $7$: $$S = \sum_{i=1}^{100} i - 3\sum_{i=1}^{33} i - 7\sum_{i=1}^{14} i + 21\sum_{i=1}^4 i.$$ Apply the formula for $\sum_{i=1}^n i$ and take it from here. • Fair and easy. +1 – I am Back Jan 3 '17 at 4:35
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9759464443071381, "lm_q1q2_score": 0.8089998804673327, "lm_q2_score": 0.828938806208442, "openwebmath_perplexity": 407.89529391580675, "openwebmath_score": 0.5868532657623291, "tags": null, "url": "https://math.stackexchange.com/questions/2080700/find-sum-of-numbers-from-1-100-which-are-not-divisible-by-3-and-7/2080760" }
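The inclusion-exclusion formula can be confirmed against a brute-force sum:

```python
def formula_sum(n=100):
    """Total - multiples of 3 - multiples of 7 + multiples of 21."""
    def tri(k):          # 1 + 2 + ... + k
        return k * (k + 1) // 2
    return tri(n) - 3 * tri(n // 3) - 7 * tri(n // 7) + 21 * tri(n // 21)

# Direct check: sum every i in 1..100 not divisible by 3 or 7.
brute = sum(i for i in range(1, 101) if i % 3 != 0 and i % 7 != 0)
print(formula_sum(), brute)  # 2842 2842
```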
term, so $X(e^{j\hat\omega}) = e^{-j\hat\omega n_0}$. To emphasize the importance of this and other DTFT relationships, we use the notation $\overset{\text{DTFT}}{\longleftrightarrow}$ to denote the forward and inverse transforms in one statement. So: $S(f) = A f_0 \tau \sum_{n=-\infty}^{+\infty} \operatorname{sinc}(n f_0 \tau)\,\delta(f - n f_0)$. Fourier Transform of aperiodic and periodic signals. • For a signal that is very long, e.g. … The sinc function is the Fourier Transform of the box function. The Fourier Transform is about circular paths (not 1-d sinusoids) and Euler's formula is a clever way to generate one: Must we use imaginary exponents to move in a circle? Nope. The Fourier transform for the rectangular aperiodic pulse is shown as a function of $\omega$ (Fig. 9-2). The unitary Fourier transform of the rectangular function is $\int_{-\infty}^{\infty} \operatorname{rect}(t)\, e^{-j 2\pi f t}\, dt = \operatorname{sinc}(f)$, using ordinary frequency $f$. Fourier Transform of the Rectangular Pulse: … Given a signal $x(t)$, its Fourier transform is defined as $X(\omega) = \int_{-\infty}^{\infty} x(t)\, e^{-j\omega t}\, dt$. A signal $x(t)$ is said to have a Fourier transform in the ordinary sense if the above integral converges. The Fourier coefficient of a rectangular pulse train is given by …, where … is the pulse height, … is the duty cycle, … is the period of the pulse train, and … is the delay of the pulse in seconds. Now to a point forced on us by the fact that we are collecting our spectrum digitally. March 1, 2007 The Fourier Transform. Fourier Transform of unit impulse $x(t) = \delta(t)$
{ "domain": "party-building.org", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9863631675246405, "lm_q1q2_score": 0.808712579162333, "lm_q2_score": 0.8198933271118221, "openwebmath_perplexity": 914.1921820473224, "openwebmath_score": 0.8902535438537598, "tags": null, "url": "https://party-building.org/xhymcs/fourier-transform-of-periodic-rectangular-pulse.html" }
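The claim that the box function transforms to a sinc can be checked numerically. This is a sketch (function names are mine, standard library only): the midpoint-rule integral of A·exp(−j2πft) over [−τ/2, τ/2] should equal Aτ·sinc(fτ), with the normalized sinc(x) = sin(πx)/(πx).

```python
import cmath
import math

def rect_ft(f, A=1.0, tau=1.0, steps=50_000):
    """Midpoint-rule approximation of the Fourier integral of a centered
    rectangular pulse of height A and width tau:
        X(f) = integral of A * exp(-j*2*pi*f*t) dt over [-tau/2, tau/2]."""
    dt = tau / steps
    return sum(A * cmath.exp(-2j * math.pi * f * (-tau / 2 + (k + 0.5) * dt))
               for k in range(steps)) * dt

def sinc(x):
    """Normalized sinc: sin(pi*x)/(pi*x), with sinc(0) = 1."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

A, tau = 2.0, 0.5
for f in (0.0, 0.7, 1.3, 4.0):
    assert abs(rect_ft(f, A, tau) - A * tau * sinc(f * tau)) < 1e-6
```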
electrostatics, photoelectric-effect Title: Effect of electric field on Photoelectric effect The effect of electric potential on the threshold frequency in Lenard's photoelectric experiment. I have been taught that electrons can be ejected from an atom by applying a strong electric field (field emission), and also by the photoelectric effect, in which light above a certain frequency for a given material causes ejection of electrons. Combining these two, I was wondering whether an applied electric field could decrease the threshold frequency: since the field is already exerting a force on the electron, less energy from the light should be required when the field assists the ejection. The Davisson–Germer experiment showed that the electron wavelength is inversely proportional to the square root of the accelerating potential difference. en.m.wikipedia.org/wiki/Davisson–Germer_experiment
{ "domain": "physics.stackexchange", "id": 56881, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electrostatics, photoelectric-effect", "url": null }
• Try to use a choose b form for coefficient to generalize to higher n – Shrey Joshi Sep 7 '18 at 2:14 • Instead of messing with formulas, try induction on the number of $a_i$. – Jason DeVito Sep 7 '18 at 2:21 • Are $\,a_k\,$ assumed to be real or complex? – dxiv Sep 7 '18 at 2:24 • Hint: the corresponding formulas are those stated in Vieta’s Theorem, i.e. it means that $a_1,a_2,a_3,\ldots ,a_n$ are the roots of the polynomial $x^n=0$. – BAI Sep 7 '18 at 15:10 • A nice exercise! What if the equation is true only for infinite $t$? – dmtri Sep 7 '18 at 15:15 If there exists an $\,a_k \ne 0\,$, then for $\,t = -1/a_k\,$ the LHS is $\,0\,$, i.e. different from the RHS, which is $\,1\,$. Therefore the equality can hold for all real $\,t\,$ iff $\,a_k = 0\,$ for $\,k=1,2,\ldots,n\,$. Note: the above assumes that the coefficients $\,a_k\,$ are real.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9787126531738088, "lm_q1q2_score": 0.8279345596479871, "lm_q2_score": 0.8459424295406088, "openwebmath_perplexity": 401.1954619053166, "openwebmath_score": 0.9744126796722412, "tags": null, "url": "https://math.stackexchange.com/questions/2908193/is-it-true-1ta-11ta-2-cdots1ta-n-1-forall-t-in-bbb-r-%E2%9F%BA-a-1-a/2908221" }
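The accepted argument is easy to illustrate numerically: any nonzero a_k makes the product vanish at t = -1/a_k, so the identity can only hold when every coefficient is zero. A small sketch:

```python
from functools import reduce

def product(t, coeffs):
    """(1 + t*a_1)(1 + t*a_2)...(1 + t*a_n)."""
    return reduce(lambda acc, a: acc * (1 + t * a), coeffs, 1.0)

# All a_k = 0: the product is identically 1.
assert all(product(t, [0, 0, 0]) == 1.0 for t in (-3.5, 0.0, 2.0, 7.25))

# One nonzero a_k: the product vanishes at t = -1/a_k, so it cannot equal 1.
coeffs = [0.0, 4.0, 0.0]
assert product(-1 / 4.0, coeffs) == 0.0
```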
computational-chemistry, orbitals, software, symmetry What I find strange with this interpretation is the choice of orbitals - why would I choose 3 $s$ orbitals, while only one $p_x$ and 2 $p_z$? You are correct in your interpretation of the OCC flag (that command corresponds to $3$ doubly occupied orbitals of $\mathrm{a_g}$ symmetry, $1$ of $\mathrm{b_{3u}}$, $1$ of $\mathrm{b_{2u}}$, and $2$ of $\mathrm{b_{1u}}$). (Because these are orbitals, not overall wavefunction symmetries, we use lowercase letters.) Your confusion seems to be arising from visualizing the orbitals which you are including. Remember that these representations refer to the symmetry of the molecular orbital, not atomic orbitals. $\mathrm{a_g}$ refers to the set of orbitals which have no phase change under all of the symmetry operations, so that corresponds to the $\sigma(\ce{1s})$, $\sigma(\ce{2s})$, and $\sigma(\ce{p_z})$ orbitals. Similarly, $\mathrm{b_{1u}}$ refers to the filled $\sigma^*(\ce{1s})$ and $\sigma^*(\ce{2s})$. Finally the $\mathrm{b_{3u}}$ and $\mathrm{b_{2u}}$ orbitals refer to the filled $\pi(\ce{p_x})$ and $\pi(\ce{p_y})$ orbitals. As we know, the occupation of $\ce{N2}$ is: $$ \ce{\sigma(1s)^2 \sigma(2s)^2 \sigma^*(1s)^2 \sigma^*(2s)^2 \pi(p_x)^2 \pi(p_y)^2 \sigma(p_z)^2}, $$ and using the $D_\mathrm{2h}$ point group we can rewrite this as:
{ "domain": "chemistry.stackexchange", "id": 10843, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "computational-chemistry, orbitals, software, symmetry", "url": null }
cnn Title: Traditional 2D CNN Formula In [4D U-Nets for Multi-Temporal Remote Sensing Data Classification] they give the following formula for the traditional 2D CNN. But I’m confused about the w_i,j in this formula: From my knowledge I would say that if you have a NxNxd input, your kernel will have shape HxWxd in this case. So the ‘depth’ of the kernel matches the ‘depth’ of the input. In this formula it seems that they are re-using the same HxW kernel against every ‘depth’ level d. I would therefore say that it should be w_i,j,c instead of w_i,j. Am I missing something here? You are not missing anything. You are right, the article is mistaken. You can check the renowned CS231N course documentation to check for yourself: https://cs231n.github.io/convolutional-networks/
{ "domain": "datascience.stackexchange", "id": 11677, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "cnn", "url": null }
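The answer's point, that the kernel carries a channel index (weights w[i][j][c], one H×W slice per input channel), can be made concrete with a toy computation of a single conv output. This is an illustration of the standard convolution sum, not the paper's code:

```python
def conv2d_single_output(x, w):
    """One output activation of a 2D conv layer.

    x: input patch of shape H x W x C (nested lists),
    w: kernel of the SAME shape H x W x C -- note the per-channel
       weights w[i][j][c], which is the point of the question.
    """
    H, W, C = len(w), len(w[0]), len(w[0][0])
    return sum(x[i][j][c] * w[i][j][c]
               for i in range(H) for j in range(W) for c in range(C))

# A 2x2 patch with 3 channels and an all-ones kernel of the same shape:
x = [[[1, 0, 2], [0, 1, 0]],
     [[3, 0, 0], [0, 0, 1]]]
w = [[[1, 1, 1], [1, 1, 1]],
     [[1, 1, 1], [1, 1, 1]]]
print(conv2d_single_output(x, w))  # 8 -- every input element counted once
```

If the kernel were only H×W (re-used across channels, as the paper's formula implies), a layer could not weight its input channels differently, which is what real conv layers do.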
python if __name__ == "__main__": path = 'log/server.log' search_list = ['3BAA5C42', '3BAA5B84', '3BAA5C57', '3BAA5B67'] #find every occurance of device ID and find their corresponding SQL # guids (unique ID) unique_ids_dict = dict() for element in search_list: unique_ids_dict[element] = find_device_IDs(path, element) #Now for each unique ID find if string ["Exception occurred", # "Packet record has been added"] is found in it's SQL guid list. search_with_in_deviceID = ["Exception occurred", "Packet record has been added"] num_exceptions_dict = dict() for elem in search_list: num_exceptions_dict[elem] = find_num_occurences(path, elem, search_with_in_deviceID, list(unique_ids_dict[elem].values())[0]) print_stats(num_exceptions_dict) and here's a small server log for you to experiment on. This code works but the time elapsed to run through a log file of 55000 lines is 42 seconds $ time python findUniqueVal_log.py real 0m42.343s user 0m42.245s sys 0m0.100s I would like guidelines on:
{ "domain": "codereview.stackexchange", "id": 32976, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python", "url": null }
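The functions called in this excerpt aren't shown, but the usual cure for a 42-second multi-pass scan is to read the file once and attribute matches as you go. A hedged sketch (the log format here is assumed, not taken from the real `server.log`):

```python
from collections import Counter

def count_markers_single_pass(lines, device_ids, markers):
    """Scan the log once, attributing each marker line to any device ID
    that appears on it.  (Assumes IDs and markers occur on the same line;
    the real log format may differ.)"""
    counts = {d: Counter() for d in device_ids}
    for line in lines:
        hits = [d for d in device_ids if d in line]
        if not hits:
            continue
        for marker in markers:
            if marker in line:
                for d in hits:
                    counts[d][marker] += 1
    return counts

log = [
    "3BAA5C42 guid=1 Exception occurred",
    "3BAA5C42 guid=2 Packet record has been added",
    "3BAA5B84 guid=3 Packet record has been added",
]
c = count_markers_single_pass(log, ["3BAA5C42", "3BAA5B84"],
                              ["Exception occurred", "Packet record has been added"])
print(c["3BAA5C42"]["Exception occurred"])  # 1
```

One pass over 55,000 lines is cheap; the quadratic cost in the original comes from re-reading the file once per device ID and again per search string.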
Now integrate $(\spadesuit)$. We have \begin{align} \int_0^1\int_0^1 f(x,y)\, dx\, dy & =\int_0^1 \left(\int_0^1\dfrac{dx}{1-xy^2}\right)dy\\ & = -\int_0^1 \dfrac{\log(1-y^2)}{y^2} dy\\ & = \log 4 \tag{$\heartsuit$} \end{align} where we make use of the facts that $$\int_0^1 \dfrac{dx}{1-xy^2} = -\dfrac{1}{y^2}\Big[\log(1-xy^2)\Big]_0^1 = - \dfrac{\log(1-y^2)}{y^2}$$ and $$\int - \dfrac{\log(1-y^2)}{y^2}\, dy = \dfrac{\log(1-y^2)}{y} + \log(1+y) - \log(1-y) + \text{constant}$$ Now comparing $(\diamondsuit)$ and $(\heartsuit)$, we get that $$\sum_{k=1}^{\infty} \dfrac1k \dfrac1{2k-1} = 2 \log 2$$
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9817357179075084, "lm_q1q2_score": 0.8243968296040263, "lm_q2_score": 0.8397339676722393, "openwebmath_perplexity": 482.358792764763, "openwebmath_score": 0.9655860662460327, "tags": null, "url": "http://math.stackexchange.com/questions/34086/sum-limits-k-1-infty-1-over-k1-over-2k-1-how-to-show-that-this-is" }
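The final identity can be checked numerically; a standard-library sketch:

```python
import math

def partial_sum(N):
    """Partial sums of sum_{k>=1} 1/(k(2k-1))."""
    return sum(1.0 / (k * (2 * k - 1)) for k in range(1, N + 1))

target = 2 * math.log(2)
for N in (10, 1000, 100_000):
    print(N, abs(partial_sum(N) - target))
# The error shrinks roughly like 1/(2N), consistent with the claimed limit.
assert abs(partial_sum(100_000) - target) < 1e-4
```

The 1/(2N) tail estimate follows from comparing the terms with 1/(2k²) for large k.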
Knowing that $f(R) \subseteq S$, we need only show $f$ maps $R$ onto $S$. Suppose $s \in S$. Then there exists polynomial $p(X) \in R[X]$ s.t. $f(p(X)) = s$. Let $p(X) = \sum_{i=0}^n \enspace r_i X^i$ where coefficients $r_i \in R$. Now $f(X) = q(Y)$ for some polynomial $q(Y) \in S[Y]$, and degree of $q(Y)$ must be positive for $f(R[X]) = S[Y]$. Apply $f$ to $p(X)$ and compare degrees: $$s = \sum_{i=0}^n \enspace f(r_i) q(Y)^i$$ Since $S$ is an integral domain, the degrees of $q(Y)^i$ are positive for $i \gt 0$. Thus the only nonzero coefficient of $p(X)$ is $r_0$, which shows $f(r_0) = s$. Therefore $f$ maps $R$ onto $S$ and $R \cong S$.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9783846653465638, "lm_q1q2_score": 0.8276571027285476, "lm_q2_score": 0.8459424314825853, "openwebmath_perplexity": 167.0213924928917, "openwebmath_score": 0.940856397151947, "tags": null, "url": "https://math.stackexchange.com/questions/13504/does-rx-cong-sx-imply-r-cong-s" }
proteins, amino-acids The third group, as expected, had the highest anabolic rate. The first group (protein and carbohydrate drink) had a better anabolic rate than the liquid meal group, though they both took the same amount of EAA. The difference was in carbohydrate content: the liquid meal had a higher carbohydrate content than the first group. Whey has an insulinogenic effect in vitro. Whey also has the same effect whether taken before or after exercise, while the protein drink has a better effect when taken before exercise. Among amino acids, leucine, when taken alone, has an anabolic effect comparable to a mixture of EAA. In conclusion: However, whey contains various bioactive peptides that act to enhance recovery and potentially in other ways that positively effect the adaptive process to exercise (12). These bioactive peptides are not found in EAA, BCAA, or Leucine, and appear to be a unique quality to dairy proteins. So what we can't say is that these aminos are the deciding factor in whey's effects...or that we can use one in place of the other (interchangeably) in all cases. http://www.vpxsports.com/article-detail/supplements/whey-protein-versus-amino-acids-whats-the-difference Please read the article, which has more details, graphs, and references for further study.
{ "domain": "biology.stackexchange", "id": 3593, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "proteins, amino-acids", "url": null }
java, multithreading, thread-safety, locking, dining-philosophers Title: Dining Philosopher's problem implementation with Java Locking Framework to avoid deadlock Dining philosopher problem is one of the classic problems in computer science. I intended to implement it using Java threads. I attempted using the locking framework that came with Java 5 and used the tryLock() method to avoid deadlock. My implementation is fairly simple. I implemented the runnable interface to represent a philosopher and used executor service to run all these runnable. As a lock, I have used ReentrantLock. I know there are several implementations are already discussed here, but I would like to get some review on my implementation. import java.time.LocalDateTime; import java.time.format.DateTimeFormatter; import java.util.Random; import java.util.concurrent.TimeUnit; import java.util.concurrent.locks.Lock; public class Philosopher implements Runnable { private String name; private final Lock leftFork; private final Lock rightFork; public Philosopher(String name, Lock leftFork, Lock rightFork) { this.name = name; this.leftFork = leftFork; this.rightFork = rightFork; } public void think() { log("thinking"); } public void eat() { //assume, eating requires some time. //let's put a random number try { log("eating"); int eatingTime = getRandomEatingTime(); TimeUnit.NANOSECONDS.sleep(eatingTime); } catch (InterruptedException e) { Thread.currentThread().interrupt(); } } @Override public void run() { while (true) { keepThinkingAndEating(); } } private void keepThinkingAndEating() { think(); if (leftFork.tryLock()) { try { log("grabbed left fork"); if (rightFork.tryLock()) { try { log("grabbed right fork"); eat(); } finally { log("put down right fork"); rightFork.unlock(); } } } finally { log("put down left fork"); leftFork.unlock(); } } }
{ "domain": "codereview.stackexchange", "id": 28346, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, multithreading, thread-safety, locking, dining-philosophers", "url": null }
css, html5 Title: Testimonial section CSS inheritance and proper use of HTML5 elements

I created this testimonial section as a prototype.

HTML

<div class="testimonials">
  <blockquote>
    <p>This was pretty good.</p>
    <cite>
      <span class="author">– Bobby,</span> Jersey City, NJ
    </cite>
  </blockquote>
  <blockquote>
    <p>This is pizza is the most bearable thing I ever had in Joisey. But it still sucks, just like everyone from there.</p>
    <cite>
      <span class="author">– Vinny,</span> New York, NY
    </cite>
  </blockquote>
  <blockquote>
    <p>I really savored the smoky undertones of the imported sausage. The cheese had a wonderful texture. It paired well with my pinot noir in a dance of sensations.</p>
    <cite>
      <span class="author">– Alex,</span> San Francisco, CA
    </cite>
  </blockquote>
</div>

CSS

.testimonials {
  width: 720px;
  max-width: 96%;
  margin: 0 auto;
}

.testimonials blockquote {
  background-color: #fff;
  border-left: 4px #61acca solid;
  font-size: 21px;
  line-height: 1.6;
}

.testimonials blockquote {
  padding: 10px 20px;
}

.testimonials blockquote {
  background-image: url('http://shared-assets.s3.amazonaws.com/codepen/img/elements/quotes/quote-999.jpg');
  background-repeat: no-repeat;
  background-size: 33px 45px;
  background-position: 10px 5px;
}
{ "domain": "codereview.stackexchange", "id": 7491, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "css, html5", "url": null }
organic-chemistry, nomenclature, steroids Which one is correct according to the latest IUPAC Blue Book? How are the locants assigned for this molecule? Specifically, which carbon was assigned the locant "10", since it does not appear in the locants of "octahydro"? For natural products and derivatives, three levels of nomenclature are recognized in the current version of Nomenclature of Organic Chemistry – IUPAC Recommendations and Preferred Names 2013 (Blue Book): trivial names, systematic names, and semisystematic names. A new compound that can be isolated from a natural source is commonly given a trivial name. These trivial names are usually related to the biological origin of the material. When the full structure of a natural product or derivative is known, a systematic name may be generated in accordance with the IUPAC recommendations for nomenclature of organic chemistry. However, the systematic name “(8⁠R,9⁠S,13⁠S,14⁠S,17⁠R)-17-ethynyl-13-methyl-7,8,9,11,12,14,15,16-octahydro-6⁠H-cyclopenta[a]phenanthrene-3,17-diol”, which is proposed in the question, is not correct. This name implies that carbon atoms 13 and 17 of the cyclopenta[a]phenanthrene parent structure remain sp2-hybridized; i.e. that a double bond remains between atom 13 and atom 17, which is not possible since both atoms already have four bonds (to other skeletal atoms or to substituents). The correct systematic name is (8⁠R,9⁠S,13⁠S,14⁠S,17⁠R)-17-ethynyl-13-methyl-7,8,9,11,12,13,14,15,16,17-decahydro-6⁠H-cyclopenta[a]phenanthrene-3,17-diol.
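As a sanity check on the hydro prefixes, the multiplying prefix must match the number of locants cited ("octa" = 8, "deca" = 10). A throwaway Python snippet, with the locant lists copied from the two names above, confirms the counts and shows exactly which positions the octahydro name leaves out:

```python
# Locants cited by the (incorrect) octahydro name and the correct decahydro name.
octahydro_locants = [7, 8, 9, 11, 12, 14, 15, 16]
decahydro_locants = [7, 8, 9, 11, 12, 13, 14, 15, 16, 17]

# "octa" = 8 positions, "deca" = 10 positions.
print(len(octahydro_locants), len(decahydro_locants))  # 8 10

# The positions the octahydro name omits: exactly C-13 and C-17,
# the atoms the answer says would wrongly remain sp2-hybridized.
missing = sorted(set(decahydro_locants) - set(octahydro_locants))
print(missing)  # [13, 17]
```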
{ "domain": "chemistry.stackexchange", "id": 6697, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "organic-chemistry, nomenclature, steroids", "url": null }
c++, performance, algorithm, matrix, boost

bool Matrix::operator!=(Matrix& mx)
{
    if(rows_num != mx.rows_num || cols_num != mx.cols_num)
        return true;
    for(int i = 0; i < rows_num; ++i)
        for(int j = 0; j < cols_num; ++j)
            if(data[i][j] != mx.data[i][j])
                return true;
    return false;
}

Matrix Matrix::operator+(const Matrix& mx)
{
    assert(rows_num == mx.rows_num && cols_num == mx.cols_num);
    Matrix add(rows_num, cols_num);
    for(int i = 0; i < rows_num; ++i)
        for(int j = 0; j < cols_num; ++j)
            add.data[ i ][ j ] = data[ i ][ j ] + mx.data[ i ][ j ];
    return add;
}

Matrix Matrix::operator-(const Matrix& mx)
{
    assert(rows_num == mx.rows_num && cols_num == mx.cols_num);
    Matrix sub(rows_num, cols_num);
    for(int i = 0; i < rows_num; ++i)
        for(int j = 0; j < cols_num; ++j)
            sub.data[ i ][ j ] = data[ i ][ j ] - mx.data[ i ][ j ];
    return sub;
}

Matrix Matrix::operator*(const Matrix& mx)
{
    assert(cols_num == mx.rows_num);
    Matrix mult(rows_num, mx.cols_num);
{ "domain": "codereview.stackexchange", "id": 37299, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, performance, algorithm, matrix, boost", "url": null }
ros, camera1394, bumblebee, stereo, bumblebee2 Title: camera1394 and bumblebee2 Hello everyone. I am new to ROS and am currently working on getting the Bumblebee2 node (by Soonhac Hong, source: http://cu-ros-pkg.googlecode.com/svn/trunk/bumblebee2) to work properly in Ubuntu 10.10 for my Point Grey Bumblebee2 stereo camera. After resolving some preliminary issues, I continue to receive the same error message whenever I launch Bumblebee2.launch: "FATAL 1298844152.453260264: [camera] exception opening device: [Camera1394::open]: No cameras found" The node appears in rxgraph. So I thought to myself "maybe the camera1394 node will work properly?" (I have no experience with this node, so that's just a wild guess) and I tried launching camera.launch from the test folder. I received the following messages: "ERROR 1298842394.512674033: [camera] device open failed: [Camera1394::open]: No cameras found ERROR 1298842394.512964522: Unable to open camera calibration file [/cameras/unibrain_calibration.yaml]" I also tried launching stereo_example.launch: "error loading tag: file does not exist [/cameras/unibrain.yaml] XML is rosparam file="/cameras/unibrain.yaml"/" I have attempted to use the camera with coriander and kino... they both cannot find a camera connected to the bus. I have already troubleshot common bus permission issues as per http://www.ros.org/wiki/camera1394/Troubleshooting#No_Bus_Access_Permissions. The camera works in a Windows environment. Any ideas how I can get this thing working??? Thanks for the help!
{ "domain": "robotics.stackexchange", "id": 4886, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, camera1394, bumblebee, stereo, bumblebee2", "url": null }
solid-state-physics Once you have found a collection of symmetry operations for your crystal, you can check that pairwise combinations of them are also symmetry operations of your crystal. Eventually you end up with a finite set of symmetry operations, which forms a mathematical group. All point groups of crystals have a finite number of elements. Furthermore, there is a finite number of crystal point groups in 2D (10) and in 3D (32). They are all known and tabulated (see, e.g., Point group - Wikipedia). In the case of your example, the 2D square lattice in the $x$-$y$ plane: it has a 4-fold rotation symmetry ($C_4$) about an axis in the $z$-direction, as you mentioned, but it also has a 2-fold rotation symmetry ($C_2$) about the same axis. In addition, it has mirror symmetries (there are mirror planes along the sides of the squares, but also mirror planes along the diagonals of the squares), which you may also see as two-fold rotations about axes in the $x$-$y$ plane. In total, there are therefore 8 distinct symmetry operations that map the square lattice onto itself:

- Identity
- Two four-fold rotations about $z$
- A two-fold rotation about $z$
- Two two-fold rotations about $x$ and $y$ (the sides of the squares are oriented along $x$ and $y$)
- Two two-fold rotations about axes along the two diagonals of the square

These eight symmetry operations form a group called the dihedral group $D_4$.
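The eight operations listed can be written as 2×2 integer matrices acting on the lattice plane, and the group property checked mechanically. A small pure-Python sketch verifies that the set has exactly 8 distinct elements and is closed under composition:

```python
# The 8 symmetry operations of the square as 2x2 integer matrices:
# identity, rotations by 90/180/270 degrees about z, and the four
# two-fold rotations (in-plane mirrors) about x, y, and the diagonals.
def matmul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

I = ((1, 0), (0, 1))
R90 = ((0, -1), (1, 0))                  # four-fold rotation about z
ops = {I, R90}
ops.add(matmul(R90, R90))                # two-fold rotation about z (180 deg)
ops.add(matmul(R90, matmul(R90, R90)))   # the other four-fold rotation (270 deg)
ops.add(((1, 0), (0, -1)))               # two-fold about x
ops.add(((-1, 0), (0, 1)))               # two-fold about y
ops.add(((0, 1), (1, 0)))                # two-fold about the diagonal y = x
ops.add(((0, -1), (-1, 0)))              # two-fold about the diagonal y = -x

print(len(ops))  # 8 distinct operations

# Closure: composing any two operations yields another one in the set.
closed = all(matmul(a, b) in ops for a in ops for b in ops)
print(closed)  # True
```

Together with associativity of matrix multiplication, the identity, and each element being its own inverse or paired with one, this is exactly the dihedral group $D_4$ of order 8.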
{ "domain": "physics.stackexchange", "id": 55825, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "solid-state-physics", "url": null }
digital-communications, digital-filters, doppler You'll have to deal with the inter-symbol interference with previous symbols. Which is no big deal – all symbols should have the same probability, so the average phase error you'd get is zero. But not so much for the fourth power: that's always a positive error, so yes, you get a biased phase correction term, and that means a misestimate of your frequency. Although I only mentioned noise, uncorrelated ISI looks a lot like noise, and thus it should have been among the reasons when I said
{ "domain": "dsp.stackexchange", "id": 9577, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "digital-communications, digital-filters, doppler", "url": null }
javascript, html, dom your code. If the bug is the result of user input then output a response to this effect so that your user will know why the execution failed to perform as expected. I also do not see the value in using a return on an anonymous object literal at the end of your application. Clearly this goes to the architecture of your application in that you want a series of sub-global functions to execute in a particular order that may or may not return anything but nonetheless result in the global "filterObj" returning some object literal. This is an old convention, particularly the use of something named "init" that strings together a series of unrelated executions. Part of the reason you are probably using this convention is that all the functions in your application appear to exist in an equal scope directly under the global "filterObj" container, and this is inefficient. Only create functions in the scope where they are needed, or if some functions are needed in different locations then at the minimum possible scope for reuse. Doing this decreases lookups, which dramatically increases execution speed. Speed in JavaScript really comes down to reducing lookups and using the most appropriate operator or method for a given job. You also have some minor syntax violations in your code. For instance, the "attachClickListener" is missing a terminating semicolon. This would prevent a bug-free minification of your code. I would suggest applying the prior guidance first and then submitting your code through the JSLint tool.
{ "domain": "codereview.stackexchange", "id": 734, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, html, dom", "url": null }
ros, transform, tf-static, tf2 Title: Using tf_static for almost static transforms Hi, I have a node that will define some TF frames at some points in time along the robot trajectory. They will be defined by transformations with /map as the frame_id. These transformations will eventually change later, but the frequency of change will be very very low, and basically event driven (whenever a loop closing happens). I was wondering if in this case using /tf_static would be a good choice here, given the low frequency of updates on these transforms, even if the transformations are not constant. But I'm getting confused with the latching concept and the following text from the tf2 migration page: It is expected that publishers on "/tf_static" publish using latched topics, the tf2_ros static_transform_publisher does this correctly. Note: avoid multiple latched static transform publishers on /tf_static in the same process, because with multiple latched publishers within the same node only one will latch correctly. So, the first question is: is it possible to implement what I want using /tf_static? If it is indeed possible, how should I implement it to make the latching work properly? Should I create a publisher every time I need to update one of the transforms, destroying it later? Or having only one that would publish everything on /tf_static would do? Originally posted by Mellon on ROS Answers with karma: 51 on 2016-02-16 Post score: 2 I realize this is an old question, but it still deserves an answer. The proper way to do this is to keep a single /tf_static publisher, and publish all of your transforms to it as a single message. tf2_ros provides the StaticTransformBroadcaster class that manages a vector of transforms and manages the details for you. 
(calling sendTransform automatically adds the transforms you pass and publishes everything in a single message) The reason to include all of your transforms in a single message is because latched topics in ROS only deliver the most recently published message to new subscribers, so that single message has to include everything you want to publish. (otherwise new subscribers won't get the whole system state) Originally posted by ahendrix with karma: 47576 on 2017-01-12 This answer was ACCEPTED on the original site Post score: 4
{ "domain": "robotics.stackexchange", "id": 23793, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, transform, tf-static, tf2", "url": null }
algorithms, security, databases Title: Anonymization of dataset preserving unique identities The $k$-anonymization paradigm (and its refinements) aims to create datasets where every tuple is identical to $k-1$ others. However, I'm in a situation where people appear in the dataset many times. And I want to follow their progress through the health care system, so I need to know who is who. If I give each person a unique ID, which is necessary in this situation, a linking attack from within the table is possible! Does anyone know of any relevant theory, or has anyone attempted to deal with similar problems? I'm inclined to think it is impossible to give any good guarantee of anonymity in this situation. This will possibly be used for my MSc thesis topic. The point of k-anonymity is that you can't uniquely identify your patients. So I will rephrase your question: Given two anonymized tuples $x$ and $y$, can we tell if they are anonymizations of the same person? Let's suppose, for purposes of contradiction, that we could. Then this means there is a "meta-tuple" which uniquely identifies a patient. But this violates anonymity (unless $k=1$). So it is impossible.
{ "domain": "cs.stackexchange", "id": 480, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "algorithms, security, databases", "url": null }
The set $C$ of $h: \mathbb{N}\cup\{0\} \to \{0,1,2,3,4,5,6,7,8,9\}$ with $h(0)=1$ is a subset of $B$. For each real $x \in [1,2)$, there is a unique decimal expansion with 1 before the decimal point, ending NOT in trailing 9's. By defining, for $n>0$, $h(n)=$ (the $n$th digit of $x$ after the decimal point), we have an injection from $[1,2)$ into the set $C$ (different $x$ give different decimal expansions, so different $x$ map to different $h$). Since the set of reals in $[1,2)$ is uncountable, the set $C$ is uncountable, hence the set $B$ is uncountable, and hence the set $A$ is uncountable. This proof works but seems needlessly verbose. It is easier to prove that strictly increasing functions form an uncountable set (which implies that the monotone increasing ones are uncountable as well). This is because there is an obvious 1-1 correspondence between strictly increasing functions and infinite subsets of $\mathbb{N}$ (each such function is uniquely determined by its range). It is a standard result that there are uncountably many subsets of $\mathbb{N}$, of which only countably many are finite. Hence, there are uncountably many infinite subsets. It is rather easy to apply the diagonalization argument directly. Let $\langle f_n\rangle$ be a sequence of increasing functions from the naturals to itself. Define a function $f$ recursively by $f(1)=1$ and $$f(n+1)=f(n)+f_n(n+1)-f_n(n)+1.$$ The resulting function $f$ is clearly different from every function in the sequence $\langle f_n\rangle$.
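The diagonal construction in the last argument can be played out concretely: the increment of $f$ at step $n$ is one more than the increment of $f_n$ there, so $f$ must disagree with $f_n$ somewhere. This Python sketch (using an arbitrary example family $f_n(k)=nk$ of my own choosing) builds $f$ on a finite range and checks both properties:

```python
# Example sequence of strictly increasing functions on the positive
# integers: f_n(k) = n * k (any strictly increasing family would do).
def f_n(n, k):
    return n * k

N = 20  # how far out we tabulate the diagonal function

# Build f recursively: f(1) = 1 and
#   f(k+1) = f(k) + f_k(k+1) - f_k(k) + 1,
# so f's increment at step k always exceeds f_k's increment there.
f = {1: 1}
for k in range(1, N):
    f[k + 1] = f[k] + f_n(k, k + 1) - f_n(k, k) + 1

# f is strictly increasing ...
print(all(f[k + 1] > f[k] for k in range(1, N)))  # True

# ... and differs from each f_n in the sequence: if f agreed with f_n
# everywhere, their increments would agree at step n, which they don't.
print(all(any(f[k] != f_n(n, k) for k in range(1, N + 1))
          for n in range(1, N)))  # True
```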
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9833429607229954, "lm_q1q2_score": 0.8739547120535949, "lm_q2_score": 0.8887588023318196, "openwebmath_perplexity": 118.64097275636804, "openwebmath_score": 0.9356918931007385, "tags": null, "url": "https://math.stackexchange.com/questions/1860168/uncountability-of-increasing-functions-on-n/1860193" }
electrostatics, definition, dipole, dipole-moment Title: About dipole moment The picture is taken from the Wikipedia page https://en.wikipedia.org/wiki/Electric_dipole_moment I don't understand the formula when it comes to a continuous charge distribution. But I understand this one: $$ \vec{p} = q \vec{d} $$ where the dipole moment vector $ \vec{p}$ is equal to the charge $q$ times the displacement vector $ \vec{d}$ that separates the two opposite charges. Do you know the relation between charge and charge density (surface, volume)? $$\rm dq\sim \sigma \rm dA\sim\rho\rm d\tau$$ You can find a clear relation between them by integrating, and you can plug the result into $\vec p=q\vec d$. We can do it. But if we have a specific arrangement of discrete charges (e.g. a dipole or quadrupole), then we can't take a density. Instead we do a Taylor expansion for multipoles (also called a multipole expansion).
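For the discrete case, the general definition $\vec p=\sum_i q_i\vec r_i$ (for a neutral system) reduces to $\vec p=q\vec d$ when there are only two opposite charges. A short Python check with arbitrary numbers of my own choosing:

```python
# Dipole moment of a set of point charges: p = sum_i q_i * r_i.
# For +q at r_plus and -q at r_minus this collapses to q * (r_plus - r_minus) = q * d.
def dipole_moment(charges):
    """charges: list of (q, (x, y, z)) tuples."""
    return tuple(sum(q * r[i] for q, r in charges) for i in range(3))

q = 2.0
r_plus = (0.0, 0.0, 1.0)
r_minus = (0.0, 0.0, -1.0)

p = dipole_moment([(q, r_plus), (-q, r_minus)])
d = tuple(r_plus[i] - r_minus[i] for i in range(3))

print(p)                          # (0.0, 0.0, 4.0)
print(tuple(q * di for di in d))  # the same vector: q * d
```

The continuous formula is just this sum with $q_i\to\rho\,\rm d\tau$ (or $\sigma\,\rm dA$) and the sum replaced by an integral.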
{ "domain": "physics.stackexchange", "id": 87119, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electrostatics, definition, dipole, dipole-moment", "url": null }
javascript, object-oriented, node.js, socket.io

A Little Style

I have a few other issues with the code in general:

- Testing settings.debug === true seems overly restrictive. Do you really want to disable debugging if they assign 1 instead of true?
- I'd rather see proper JSDoc comments on properties and methods instead of "section comments".
- handle should not instantiate a new logger instance for every client. This should be created once for each type of handler and stored in the constructor.
- Exporting singleton instances of the handlers will make testing difficult if you need multiple instances of a single handler. Granted, they'll probably remain stateless, which would allow singletons, but it just rubs me wrong. I would export the constructor instead and force clients to use new themselves.
- Too much whitespace. Use blank lines to logically group related blocks. If they are so long as to need blank lines inside, refactor.
- While it looks really nice when assignments are all lined up on the =, it is a PITA to maintain and easy to miss when you rename a variable in a file. I find it's just not worth it in the end.

Where Are the Prototypes?

The handlers don't seem to be making use of prototypes or the prototype chain at all. The constructor exported by handler.js is merely a factory method. Nothing is assigned to its prototype to be inherited by the subclasses. Instead, it builds a new object from scratch each time. Given that both debug and handle require state about the specific handler, I don't see an easy way around this. As it stands now, I would probably rewrite handler.js as a factory method. It still returns an object:

handler.js

(function () {
    var Debug = require('custom_modules/debug');

    module.exports = function (name, callback) {
        return {
            debug: new Debug(name),
            handle: function (client) {
                this.debug.log("Initialized");
                callback(client);
            }
        };
    };
}());

Creating a new handler is pretty much the same.
login.js

(function () {
    var handler = require("custom_modules/handler");
{ "domain": "codereview.stackexchange", "id": 8606, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, object-oriented, node.js, socket.io", "url": null }
machine-learning, natural-language-processing Title: How can I generate "what", "why" and "who" types of questions from question templates? I was wondering if in any way it is possible to generate $n$ questions based on gap-fill-in type questions. For example, the following template ______ is a process in which plants generate energy. could lead to the generation of the following specific question What is the process in which plants generate energy called? If so, how can I achieve this? I am familiar with and have used natural language processing techniques, and have no problem with implementing an algorithm for this, but I do not know where to start. This seems tricky. It seems that any "surface level" transformation wouldn't give adequate results and any working solution would need to properly capture the sentence structure and generate a grammatically correct transformed sentence. One possible option is to use a "traditional pipeline" - e.g. you run an NLP pipeline up to syntactic parsing, which for general-domain English is quite accurate (you'd need some special handling for the gap "____" part though), then implement some heuristic rules to transform the syntax tree, and regenerate a sentence from the transformed tree. There are a lot of publications about similar transformations in the machine translation domain, used as a way to preprocess data before running statistical machine translation for language pairs with very different word (or sentence-part) ordering. A second option that may work is to look into the field of controlled natural languages, or something like http://www.grammaticalframework.org/ that can be used as a toolkit to help generate new sentences.
Current fashion also suggests a very different option that might work - you could train a character-level recurrent neural network with an attention mechanism (look into recent neural machine translation publications for details) to do this transformation, but I'm not sure how much training data it would need for decent accuracy.
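Before reaching for any of the above, even a crude surface-level rule can handle the exact template shape in the question. This Python sketch is my own toy heuristic (the regex and the output template are assumptions, and it deliberately handles only a leading gap followed by "is a/an" — exactly the kind of transformation the answer warns will not generalize without a syntactic parse):

```python
import re

def gap_to_question(template):
    """Turn '______ is a <NP> ...' into 'What is the <NP> ... called?'
    A purely surface-level heuristic: it only handles a leading gap
    followed by 'is a/an'; anything richer needs syntactic parsing."""
    m = re.match(r"_+\s+is\s+(a|an)\s+(.*?)\.?\s*$", template)
    if m is None:
        return None
    return "What is the {} called?".format(m.group(2))

print(gap_to_question(
    "______ is a process in which plants generate energy."))
# What is the process in which plants generate energy called?
```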
{ "domain": "ai.stackexchange", "id": 194, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "machine-learning, natural-language-processing", "url": null }
c#, wpf, mvvm, exception, xaml

or like this:

var isValid = model.TryLoadCertificate(userCertificatePath, UserPassword, out certificate);

or whatever really, there are plenty of options. Just don't throw exceptions. Also, this smells:

catch (CryptographicException ex)
{
    throw new CryptographicException(ex.Message.ToString());
}

What exactly are you doing it for? It throws away the original stack trace just to rethrow a new exception with the same message (and Message is already a string, so the ToString() call is redundant). Just let the original exception through; don't catch it if you can't do anything about it.
{ "domain": "codereview.stackexchange", "id": 22535, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, wpf, mvvm, exception, xaml", "url": null }
ros, ros-kinetic, camera-drivers, camera Title: Black Image: pylon_camera I am using a Resonon Pika L, which has a pylon (hyperspectral) camera, and I am using it with the Pylon ROS package. I followed the article available at http://wiki.ros.org/pylon_camera to set up the pylon ROS package. After running 'rosrun pylon_camera pylon_camera_node', I run 'rosrun image_view image_view image:=/pylon_camera_node/image_raw'. But I am only getting a black image. I also checked the data published to the '/pylon_camera_node/image_raw' topic. It is just a matrix full of 0's. So I am not sure how to see the images from the camera. I am new to ROS. I can see some gray-line images using the PylonViewerApp. Any help in this regard would be highly appreciated. Originally posted by Jazzscout on ROS Answers with karma: 1 on 2018-07-19 Post score: 0 You tried to use the PylonViewerApp and it didn't work. Maybe the problem isn't with the ROS application but with the camera setup. Try to initialize the PylonViewerApp after boot, before the ROS node. I've already had a similar problem, and I solved it this way. Originally posted by matheusns with karma: 111 on 2018-07-27 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 31300, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, ros-kinetic, camera-drivers, camera", "url": null }
dsp-core, snr Title: SNR of a 16-bit DSP with 12-bit ADC, 40-bit accumulator According to the 6 dB-per-bit rule, a 16-bit DSP would provide approximately 96 dB of dynamic range. If the ADC and DAC are both 12 bits long, is the DSP still considered to have 96 dB, or is the dynamic range now 72 dB? Also, I would like to know if the guard bits provided by the accumulator (40-bit) increase the dynamic range in some way. Thank you very much. The dynamic range is the ratio between the maximum and minimum representable values: $DR = max/min$. So: For an ADC and its configuration (Vref, uniform step, ENOB, etc...), as you said, $DR = 6 \cdot N$, where $N$ is the ENOB of the ADC. However, you can vary the configuration of the ADC stage to adapt it to your signal. So, at each moment, an ADC can have a different $min$, $max$ or $DR$ by changing its configuration (in real time or off-line) at the expense of other metrics (mainly quantization noise). For example, if you have an ADC that does not perform uniform quantization, you can increase the DR. The reason uniform quantization is the most common among ADCs is that no statistics about the signal to be sampled are available to the ADC designer; for some signals, however, uniform quantization is not the best. A DSP can perform arbitrarily high dynamic range operations with the proper software, the same way it is possible to perform 1024-bit arithmetic on a 32-bit register machine. Even if the ADC data has DR = 96 dB, internally the DSP can raise that DR as far as its resources allow. For example, if two numbers with DR = 96 dB are multiplied, you get a result with DR = 192 dB as long as you store all the result bits. If your accumulator cannot handle such a number of bits, you can always make arithmetic transformations over your computation so as not to saturate the accumulator. The guard bits in the accumulator help avoid those arithmetic transformations for reasonable DRs along the data flow (intermediate results) of the algorithm.
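The 6 dB-per-bit figure is shorthand for $20\log_{10}2^N\approx 6.02N$ dB. A quick Python check for the three word lengths mentioned in the question:

```python
import math

def dynamic_range_db(bits):
    # Ratio of largest to smallest representable magnitude for an
    # N-bit uniform quantizer is 2**N; express it in decibels.
    return 20 * math.log10(2 ** bits)

for bits in (12, 16, 40):
    print(bits, round(dynamic_range_db(bits), 1))
# 12 -> 72.2, 16 -> 96.3, 40 -> 240.8 (all in dB)
```

So the 12-bit converters bound the dynamic range of the sampled data at roughly 72 dB, regardless of the 96 dB the 16-bit core could represent.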
{ "domain": "dsp.stackexchange", "id": 5248, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "dsp-core, snr", "url": null }
homework-and-exercises, newtonian-mechanics, classical-mechanics Title: Why aren't the weights of the beads considered in this equation? I was solving this problem: A ring of mass $M$ hangs from a thread and two beads of mass $m$ slide on it without friction. The beads are released simultaneously from the top of the ring and slide down on opposite sides. We are asked to find the condition on $m$ such that the ring will move up during the motion of the beads. Now I wrote down the equation $$ N + mg\cos\theta =\frac{mv^2}{r} $$ where $N$ is the normal reaction force provided by the ring (I am working in the frame of reference of the bead), and by using the work-energy theorem I get $$ \frac{mv^2}{r} = 2mg(1-\cos\theta)$$ After that, solving for $N$, I take the downward component of $N$ and multiply it by $2$ for the two beads, so it becomes $2N\cos\theta$, which provides the force to lift the ring up. Now, differentiating and finding the maximum force for the corresponding $\theta$, we get $$F_\text{max} = \frac{2mg}{3}$$ Now my question is: I will get the correct answer, which is $m>3M/2$, if I use $F>Mg$ where $F = 2N\cos\theta$, but shouldn't I write it as $F>(M+2m)g$, considering the weight of the two small beads sliding on the ring? Your confusion will be removed if you consider the FBD (free-body diagram) of the ring itself.
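Combining the two equations quoted above gives $N=mg(2-3\cos\theta)$, so the lifting force is $F(\theta)=2N\cos\theta=2mg\cos\theta\,(2-3\cos\theta)$, which is maximized at $\cos\theta=1/3$. A quick numerical check in Python (taking $mg=1$, a grid scan of my own choosing rather than the calculus):

```python
import math

def lift(theta):
    # Upward force on the ring from both beads, in units of m*g:
    # F(theta) = 2*cos(theta)*(2 - 3*cos(theta)),
    # using N = m*g*(2 - 3*cos(theta)) from the two equations above.
    c = math.cos(theta)
    return 2 * c * (2 - 3 * c)

# Scan theta over (0, pi) and locate the maximum numerically.
thetas = [i * math.pi / 100000 for i in range(1, 100000)]
F_max = max(lift(t) for t in thetas)
theta_star = max(thetas, key=lift)

print(round(F_max, 4))                 # 0.6667  (i.e. 2/3 of m*g)
print(round(math.cos(theta_star), 3))  # 0.333   (cos(theta) = 1/3)
```

This reproduces $F_\text{max}=2mg/3$, and the condition $F_\text{max}>Mg$ then gives $m>3M/2$.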
{ "domain": "physics.stackexchange", "id": 23297, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, newtonian-mechanics, classical-mechanics", "url": null }
simulation Original comments Comment by gvdhoorn on 2017-12-20: Time(0) has special semantics in TF, it basically means "latest transform". But in general if use_sim_time is set and there is a publisher on /clock, I would expect things to 'just work'. I've not done it myself, but what you're doing seems rather convoluted. Comment by shardator on 2017-12-20: No, it does not, because timer delays are expected to be real time. So if you set a timer to 10Hz, it will fire 10Hz real frequency, and not accelerated frequency. This is because WallTime is used, and pthread conditional waits are called with the nominal duration. Comment by shardator on 2017-12-20: Also, I know about the Time(0) semantics, it is just after a while, inside waitForTransform() is replaced by some common time of the two frames (which we are trying to obtain the transformation between). Now this common time is somehow incorrect, as no actual transformation exists with that. Comment by gvdhoorn on 2017-12-20: Using simulated time is done with Gazebo and some other sims as well, so your statements confuse me. I'm not saying you're wrong, but I would just make sure what you're doing is absolutely necessary, as it would be unfortunate otherwise. Comment by gvdhoorn on 2017-12-20: If what you're doing is inside stdr, then that could be true. It may be that the stdr authors didn't (want to) consider FTR / STR cases and just use WallTime everywhere. Comment by shardator on 2017-12-20: No, I found that ROS core things fire at wrong times. So wrong, that my robot teleports off the map. It has nothing to do with the simulator, I have already excluded that option via debugging. I have recompiled the whole ROS infrastructure with my changes, as I found many places which depend on the Comment by shardator on 2017-12-20: speed of the clock... Comment by gvdhoorn on 2017-12-20: Then I would urge you to report that on the proper issue trackers.
{ "domain": "robotics.stackexchange", "id": 29582, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "simulation", "url": null }
thermodynamics, temperature Title: What is the thing about heat that makes particles vibrate faster? I'm just trying to understand the ultimate underlying dynamics of heat that causes the temperature increase of, let's say, a liquid. Is it the electromagnetic radiation vector that moves between the fields and affects atoms? How exactly can I visualize this phenomenon? I'm just trying to understand the ultimate underlying dynamics of heat that causes temperature increase of, let's say, a liquid. The ultimate dynamics is that heat is energy transfer due solely to temperature difference. If the transfer results in a temperature change, it's because there has been a transfer of kinetic energy to or from the substance undergoing the temperature change. A visualization of what is going on can be seen here: http://www.hyperphysics.de/hyperphysics/hbase/thermo/temper2.html#c1 It is important to understand that heat can cause a change in molecular kinetic energy, but it is not the molecular kinetic energy itself. That is properly called the internal kinetic energy of the substance. It should also be noted that heat transfer may not result in a temperature change of the bodies involved. For example, heat transfer that causes the melting of ice or the boiling of water at constant temperature. That heat is called "latent heat". Heat that causes a temperature change is often referred to as "sensible heat". The three basic mechanisms of heat transfer are conduction, convection and electromagnetic radiation. The first two mechanisms require physical contact between the substances transferring heat (solids/liquids). The last (electromagnetic radiation) does not, as energy can transfer in a vacuum. In this case, the increase/decrease in temperature is due to the absorption or release of electromagnetic energy. Hope this helps.
{ "domain": "physics.stackexchange", "id": 68727, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "thermodynamics, temperature", "url": null }
Solution: For Laplace's equation in the inner disk we know that the general solution takes the form $$u(r,\theta)=\frac{A_0}{2}+\sum_{n\geq 1} r^n (A_n\sin(n\theta)+B_n\cos(n\theta))$$ Where we have dropped the logarithm and any terms with $r^{-n}$. Applying the boundary condition we have $$f(\theta)=\sum_{n\geq 1}n a^{n-1}(A_n\sin(n\theta)+B_n\cos(n\theta))$$ At this point, the coefficients can be directly calculated: $$A_n=\frac{a^{1-n}}{n\pi}\int_0^{2\pi}f(\theta)\sin(n\theta)\,d\theta=\frac{a^{1-n}}{n\pi}\left(\int_0^{\pi}\sin(n\theta)\,d\theta+\int_\pi^{2\pi}-\sin(n\theta)\,d\theta\right)=\frac{a^{1-n}}{n^2\pi}(\cos(n\theta)|_{\pi}^0+\cos(n\theta)|_\pi^{2\pi})=\frac{4a^{1-n}}{n^2\pi}\,\,\,\text{n odd, 0 otherwise}$$ Similarly,
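The closed form for $A_n$ can be sanity-checked numerically against the Neumann-data integral (a quick midpoint-rule sketch; the disk radius $a$ here is an arbitrary positive stand-in):

```python
import numpy as np

a = 2.0  # hypothetical disk radius; any positive value works

# Midpoint-rule quadrature over [0, 2*pi)
num = 400_000
dtheta = 2.0 * np.pi / num
theta = (np.arange(num) + 0.5) * dtheta
f = np.where(theta < np.pi, 1.0, -1.0)  # the boundary data used above

def A_n(n):
    # A_n = a^(1-n)/(n*pi) * integral of f(theta) * sin(n*theta) over [0, 2*pi)
    return a ** (1 - n) / (n * np.pi) * np.sum(f * np.sin(n * theta)) * dtheta

def closed_form(n):
    # Claimed result: 4 a^(1-n) / (n^2 pi) for odd n, 0 otherwise
    return 4.0 * a ** (1 - n) / (n ** 2 * np.pi) if n % 2 == 1 else 0.0

for n in (1, 2, 3, 5):
    print(n, A_n(n), closed_form(n))
```

The quadrature agrees with the closed form for both the odd (nonzero) and even (vanishing) coefficients.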
{ "domain": "toronto.edu", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9863631659211718, "lm_q1q2_score": 0.8422680919224298, "lm_q2_score": 0.8539127585282744, "openwebmath_perplexity": 623.5791086464815, "openwebmath_score": 0.9247498512268066, "tags": null, "url": "http://forum.math.toronto.edu/index.php?PHPSESSID=vr7acj5rroo41qop46p208c185&action=printpage;topic=831.0" }
genetics, book-recommendation Title: A free book/resource for learning genetics? I took an undergrad class in genetics. I felt it was not too intensive and I do not feel prepared for grad school (if I can manage to get in). Does anyone know of a preferably free resource for learning genetics? I know the field is very large, but I would like to be competent in the field, considering I also want to learn more about genomics. And I figure one must first have a fundamental understanding of either computer science or biology to be successful in genomics, both of which I lack. As you said, the question is pretty broad. Genetics is a gigantic field and it is quite hard to know what exactly you are looking for. If you could refine into one of the subfields (molecular genetics, population genetics, phylogenetics, etc.) it would be much easier to give you better advice. As you talk about both biology and bioinformatics, it might be worth specifying whether you want to learn more about fundamental genetics or about the methodologies to analyse genetic data. Principles of Genetics You may want an introductory class in genetics. Here is a course on MIT Open CourseWare, but it may be too easy for you. Bioinformatics You may want to take a bioinformatics course. There are so many bioinformatics courses online that there is a Wikipedia page listing them (there). Side skills Programming You may just want to learn to code in R, Python, Perl or Shell script (to cite a few examples). You'll find tons of beginner tutorials online. Statistics Or you might want to learn more about statistics. KhanAcademy has a very good (but very introductory) statistics class.
{ "domain": "biology.stackexchange", "id": 4070, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "genetics, book-recommendation", "url": null }
gazebo Use valgrind. This will give you a very detailed look at what Gazebo is doing. Originally posted by nkoenig with karma: 7676 on 2013-01-23 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 2762, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "gazebo", "url": null }
javascript, node.js, angular.js Creating a closure in this case is indeed acceptable, but don't create the function in a loop. That is just needless memory wastage; declare the function up front (with a name!) and call that function. Ideally you would call the function with $scope.data.selectedItems[i] instead of i, so that your code will look cleaner. function addTag( item, tag ){ //Set initial status to checking item.data.tags += ' ' + tag; //Update bookmark Pinboardservice.updateBookmark(item) .then(function(result) { var updatedBmHash = item.data.hash; updated++; if(result.result_code === 'done') { Appstatusservice.updateStatus('updated bookmark: ' + updatedBmHash + '.', updated, total); } else { Appstatusservice.updateStatus('Failed: ' + result.result_code + '.', updated, total, 'danger'); } }, function(reason) { Appstatusservice.updateStatus('Failed: ' + reason, updated, total, 'danger'); }); } Finally, you could clean up the code: there are a number of commented-out statements and console.log and console.info calls you should remove.
{ "domain": "codereview.stackexchange", "id": 10602, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, node.js, angular.js", "url": null }
### Show Tags 04 Aug 2019, 03:54 generis wrote: Ayush1692 wrote: If y is a positive integer, and |x| < 5 − y, then what is the least possible value of x ? A. 4 B. 1 C. 0 D. -1 E. -4 Don't remove brackets and solve. The question asks for the logic behind absolute value. $$y$$ is a positive integer $$|x| < 5 − y$$ LHS is nonnegative - it is positive or 0. Least value for x is ZERO Does RHS work? For LHS to be less than RHS: RHS cannot be negative RHS cannot be 0 RHS must be positive RHS = (5 - pos. integer) y could = 4, 3, 2, or 1, and RHS can = (5 - y) = 1, 2, 3, 4 That works. The least possible value of |x| = 0 The absolute value of 0 is 0 Or: the distance of 0 from 0 is 0 Least possible value: x = 0 Why can't x be a negative integer, in that case it should be -1 i.e. option D. ### Show Tags 08 Aug 2019, 18:54 I don't understand why x can't be negative
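The objection in the last two posts can be checked by brute force: for each answer choice, ask whether any positive integer y makes |x| < 5 − y true (plain arithmetic, not an official solution):

```python
choices = [4, 1, 0, -1, -4]  # options A-E

def satisfiable(x):
    # |x| < 5 - y requires y < 5 - |x|, and y is a positive integer,
    # so it suffices to test y = 1, 2, 3, 4.
    return any(abs(x) < 5 - y for y in range(1, 5))

feasible = [x for x in choices if satisfiable(x)]
least = min(feasible)
print(feasible, least)
```

Both x = 4 and x = -4 fail (they would need 5 − y > 4, i.e. y < 1), while x = -1 works with y = 1, 2, or 3, which is exactly the commenters' point.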
{ "domain": "gmatclub.com", "id": null, "lm_label": "1. Yes\n2. Yes", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9263037302939515, "lm_q1q2_score": 0.8031384538291746, "lm_q2_score": 0.8670357546485408, "openwebmath_perplexity": 2278.1011399087774, "openwebmath_score": 0.612427294254303, "tags": null, "url": "https://gmatclub.com/forum/if-y-is-a-positive-integer-and-x-5-y-then-what-is-the-least-po-261724.html" }
it follows that: $$\|Ax\|_{\infty} = 21$$ and: $$\|x\|_{\infty} = 3$$ But then we have: $$\frac{\|Ax\|_{\infty}}{\|x\|_{\infty}} = \frac{21}3 = 7$$ that does not correlate with the fact that we previously found that $\|A\|_{\infty} = 11$. If anyone can explain to me what is wrong with my reasoning here, I would appreciate it! • How do you know that the supremum is reached at this vector? Sep 9, 2012 at 19:21 • Hm, good point :). Perhaps this is what I've overlooked. But is there a way to find out the vector which will give the supremum value? Sep 9, 2012 at 19:23
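The comment's point can be made constructive: the supremum of the infinity norm is attained at the sign pattern of the row with the largest absolute sum. The matrix from the original question is not shown in this excerpt, so the 3x3 below is a made-up example whose infinity norm is also 11:

```python
import numpy as np

# Hypothetical matrix (the one from the question is not in this excerpt)
A = np.array([[ 3.0, -4.0,  2.0],
              [ 1.0,  5.0, -5.0],
              [-2.0,  1.0,  1.0]])

row_sums = np.abs(A).sum(axis=1)   # absolute row sums: [9, 11, 4]
norm_inf = row_sums.max()          # ||A||_inf = max absolute row sum = 11

# The supremum is attained at the sign vector of the maximizing row
# (assuming that row has no zero entries), not at an arbitrary x:
k = int(row_sums.argmax())
x = np.sign(A[k])                  # here [1, 1, -1], with ||x||_inf = 1
ratio = np.abs(A @ x).max() / np.abs(x).max()
print(norm_inf, ratio)
```

With this choice of x the ratio equals the norm exactly, because each term of the maximizing row contributes with a positive sign.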
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9799765581257486, "lm_q1q2_score": 0.8057260699876178, "lm_q2_score": 0.8221891261650247, "openwebmath_perplexity": 248.275586141645, "openwebmath_score": 0.9208800792694092, "tags": null, "url": "https://math.stackexchange.com/questions/193260/infinity-matrix-norm-example" }
lagrangian-formalism, gauge-theory, group-representations Title: How to check if some term in the Lagrangian involving gauge bosons is gauge invariant without explicit computations? Normally (for fermions and scalars) we can simply use the decomposition of tensor products of gauge group representations to find invariant terms that we can write into the Lagrangian. For example, for fermions living in some representation $R$ of a given gauge group $G$, we can compute $$ R \otimes R = R_1 \oplus R_2 \oplus \ldots ,$$ where $R_1$, $R_2$ denote some other representations of the gauge group. The sum on the right-hand side normally does not contain the $1$-dimensional representation, because that would mean that bare mass terms for fermions are allowed. (In other words, we could get something gauge invariant using only the fermions.) A term like $ \bar R \otimes R$ always contains the $1$-dimensional representation, but is forbidden by Lorentz invariance. Nevertheless, we can use the sum on the right-hand side in order to determine which Higgs representations can be used to generate mass terms for the fermions after symmetry breaking. For example, if some Higgs fields live in $\bar R_1$, we can write $$R \otimes R \otimes \bar R_1 = (R_1 \oplus R_2 \oplus \ldots ) \otimes \bar R_1 = R_1 \otimes \bar R_1 \oplus \ldots = 1 \oplus \ldots $$ In addition, we can use decompositions like this to write down the Higgs potential. For example, if in addition to $\bar R_1$ Higgs fields in the representation $ R_1$ exist, we can see from $$ R_1 \otimes R_1 = 1 \oplus \ldots $$ that such a term is allowed by gauge invariance. Bosons are said to live in the adjoint representation $A$, but according to this answer do not transform according to any representation of the gauge group at all. But then, how can we determine which terms involving gauge bosons are allowed in the Lagrangian?
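As a concrete instance of the counting described above (these are the standard $SU(3)$ decompositions, added here purely for illustration; they are not from the question):

```latex
% Fundamental fermions of SU(3): no singlet in R \otimes R,
% so no bare mass term from the fermions alone:
\mathbf{3} \otimes \mathbf{3} = \bar{\mathbf{3}} \oplus \mathbf{6}
% ... while \bar{R} \otimes R always contains the singlet:
\mathbf{3} \otimes \bar{\mathbf{3}} = \mathbf{1} \oplus \mathbf{8}
```

The first line shows why a complex representation forbids bare mass terms, while the second shows the Lorentz-forbidden $\bar R \otimes R$ singlet the question mentions.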
{ "domain": "physics.stackexchange", "id": 23111, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "lagrangian-formalism, gauge-theory, group-representations", "url": null }
structural-engineering, structural-analysis, structures, frame Title: Symmetry to determine the support reactions of a statically indeterminate frame The following is an indeterminate frame consisting of two beams that are rigidly connected to each other at the corner support. I want to find the support reactions. As far as I can see, this is a symmetric structure with symmetric loading (please do correct me if I am wrong). And as I understand it, a symmetric structure with symmetric loading has symmetric reactions. From here, the chain of logic I would follow is: The corner roller support has a vertical reaction (V2) of 0, for symmetry to be possible. Similarly, H1 = 0, for symmetry. V1 = - H2, again for symmetry of reactions. Then, from equilibrium, we can say: However, this is incorrect. The mark scheme for this question has a different solution. Could someone please tell me where in my process I have made a mistake? EDIT: This is what the mark-scheme says: The members and loads of the frame are symmetric about joint "B", but the supports are not, which is the source that causes non-conforming deflections when loaded. The sketches below depict each case of the deflection of the frame. ADD: The roller support at "B" is free to move in x-dir. The sketches below show the lateral displacement and corresponding joint rotation due to the respective loads "F" and "H". Simplified Frame Model:
{ "domain": "engineering.stackexchange", "id": 4719, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "structural-engineering, structural-analysis, structures, frame", "url": null }
c++, template-meta-programming public: using type = typename extract_impl<0, idx, Types...>::type; }; Now that we have that, extracting from our type list is a simple subset operation based on the generalized template for extraction: template <std::size_t idx, class TypeList> struct type_list_extract; template <std::size_t idx, template <class...> class TypeList, class... Types> struct type_list_extract<idx, TypeList<Types...>> { using type = typename extract<idx, Types...>::type; }; For which we can provide a convenience alias: template <std::size_t idx, class TypeList> using type_list_extract_t = typename type_list_extract<idx, TypeList>::type; That we can now use as follows: int main() { using list_t = type_list<char, bool, void>; static_assert( std::is_same<char, type_list_extract_t<0, list_t>>::value, "!" ); static_assert( std::is_same<bool, type_list_extract_t<1, list_t>>::value, "!" ); static_assert( std::is_same<void, type_list_extract_t<2, list_t>>::value, "!" ); //type_list_extract_t<3, list_t>; // static_assert fails: index out of bounds } Final words There are non-recursive ways to implement extraction (among other operations); they will most likely (possibly definitely!) require integer/index sequences. You can have a look at my answer for this question to see a non-recursive integer sequence implementation. There are many more cool tricks to do with templates. Look on this very site or Stack Overflow!
{ "domain": "codereview.stackexchange", "id": 36063, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, template-meta-programming", "url": null }
ros, installation, debian }}} Traceback (most recent call last): File "/usr/local/bin/rosinstall", line 5, in pkg_resources.run_script('rosinstall==0.5.16', 'rosinstall') File "/usr/lib/python2.6/dist-packages/pkg_resources.py", line 467, in run_script self.require(requires)[0].run_script(script_name, ns) File "/usr/lib/python2.6/dist-packages/pkg_resources.py", line 1200, in run_script execfile(script_filename, namespace, namespace) File "/usr/local/lib/python2.6/dist-packages/rosinstall-0.5.16-py2.6.egg/EGG-INFO/scripts/rosinstall", line 556, in sys.exit(not rosinstall_main(sys.argv)) File "/usr/local/lib/python2.6/dist-packages/rosinstall-0.5.16-py2.6.egg/EGG-INFO/scripts/rosinstall", line 547, in rosinstall_main subprocess.check_call("source %s && rosmake ros%s --rosdep-install%s" % (os.path.join(options.path, 'setup.sh'), ros_comm_insert, rosdep_yes_insert), shell=True, executable='/bin/bash') File "/usr/lib/python2.6/subprocess.py", line 488, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command 'source /home/aditi.nagaraj/ros/setup.sh && rosmake ros ros_comm --rosdep-install' returned non-zero exit status 1
{ "domain": "robotics.stackexchange", "id": 5242, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, installation, debian", "url": null }
This works for vector fields whose dependence has been declared by 'depends': (%i23) rot(A,q); $\tag{%o23} [\frac{d}{d \mathit{q2}} \mathit{A3}-\frac{d}{d \mathit{q3}} \mathit{A2},\frac{d}{d \mathit{q3}} \mathit{A1}-\frac{d}{d \mathit{q1}} \mathit{A3},\frac{d}{d \mathit{q1}} \mathit{A2}-\frac{d}{d \mathit{q2}} \mathit{A1}]$ We will need the cross product operator '~', which is contained in the package 'vect':
{ "domain": "uaslp.mx", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.972830769252026, "lm_q1q2_score": 0.8085615346754277, "lm_q2_score": 0.8311430520409023, "openwebmath_perplexity": 8210.193559783387, "openwebmath_score": 0.9115439653396606, "tags": null, "url": "http://galia.fc.uaslp.mx/~jvallejo/Maxima%20Mini-Tour%2019-May-2019.html" }
general-relativity, gravity, de-sitter-spacetime Title: Is the limit from $d\mathcal{S}_4$ to Minkowski spacetime smooth? For a static observer, the boundary of the observed $dS^4$ spacetime lies at $r=\ell$, where $\ell$ is the de Sitter radius, which is inversely proportional to the spacetime curvature, or rather to the cosmological constant $\Lambda^{\frac{1}{2}}$, up to a constant factor. There is seemingly a physical and smooth limit $\ell\rightarrow\infty$ back to flat spacetime, and thus the cosmological boundary becomes null infinity. For a finite $\ell$, the bifurcation 2-sphere $B$ is located at $U=V=0$ without any singularity or ambiguity. One should notice that this 2-sphere is a common boundary of the two horizons $\mathcal{H}^-,\mathcal{H}^+$. If one takes the flat limit, seemingly $\mathcal{H}^-,\mathcal{H}^+$ deform to the null infinities $\mathcal{I}^-,\mathcal{I}^+$ defined on Minkowski spacetime, and $B$ becomes the spatial infinity $i^0$. But, as a famous result, $i^0$ in the description of Penrose's diagram is singular and is not a common boundary of $\mathcal{I}^-,\mathcal{I}^+$. Or, more precisely, $\mathcal{I}^-,\mathcal{I}^+$ have no common boundary. Is there something wrong with this limit? Or rather, is this limit smooth? It depends on what you call a limit of something, and how you judge convergence (and thus smoothness). Take for example the function $f(x) = C + 1/(x-a)$. Now take the limit $a \to \infty$. Is $\tilde{f}(x) = C$ a "smooth" limit of $f(x)$ as $a \to \infty$?
{ "domain": "physics.stackexchange", "id": 96455, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "general-relativity, gravity, de-sitter-spacetime", "url": null }
we can incrementally sub-divide the interval into smaller and smaller pieces. Evaluate the Riemann sum for {eq}\displaystyle f(x)=x-1,\ \ -6\leq x\leq 4, {/eq} with five subintervals, taking the sample points to be right endpoints. Draw the approximating rectangles. Approximate the area under the curve $y = x^2 + 1$ on the interval [0, 8] using a midpoint sum with 4 equal subintervals. The midpoints of the above subintervals are 1, 3, 5, 7. (b) Use a midpoint Riemann sum with two subintervals of equal length and values from the table to approximate (). 1. Use a finite sum to estimate the average value of f on the given interval by partitioning the interval into four subintervals of equal length and evaluating f at the subinterval midpoints. a) Using a left Riemann sum with 10 subintervals, estimate the distance traveled by the engine in the first 10 seconds. Math 2, Winter 2016 Daily Homework #13 | Solutions 4. This analysis seems to indicate that a mere 50 to 100 subintervals would provide a pretty accurate approximation. The uniformity of construction makes computations easier. Use a midpoint Riemann sum with 3 subintervals of equal length to approximate $\int_{10}^{70} v(t)\,dt$. (Sketch the graph of $f(x)=\sin(x)$.) This process yields the integral, which computes the value of the area exactly. Show the computations that lead to your answer. x: -3, -1, 1, 3, 5, 7, 9. This ranking means that the given values will correspond to the following approximation methods: left-hand Riemann sum = 0. Break the interval [a, b] into n equal subintervals with endpoints. Use the notebook to demonstrate this new Riemann sum visually. = (area of rectangles lying above the x-axis) - (area of rectangles lying below the x-axis). Each Riemann sum is a real number, and a Riemann sum with n subintervals can be thought of as an approximation of the integral. The right-endpoint Riemann sum is then f(1)·1 + f(2)·1 + f(3)·1 + f(4)·1 =
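The midpoint-sum exercise for y = x² + 1 on [0, 8] with 4 subintervals can be worked directly (a small sketch using the midpoints 1, 3, 5, 7 mentioned above):

```python
def midpoint_sum(f, a, b, n):
    """Midpoint Riemann sum of f over [a, b] with n equal subintervals."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) * dx for i in range(n))

approx = midpoint_sum(lambda x: x ** 2 + 1, 0.0, 8.0, 4)
# f at the midpoints 1, 3, 5, 7 gives 2, 10, 26, 50; times dx = 2 gives 176
exact = 8.0 ** 3 / 3.0 + 8.0  # antiderivative x^3/3 + x evaluated on [0, 8]
print(approx, exact)
```

With only 4 subintervals the midpoint sum (176) is already close to the exact value (about 178.67), and increasing n shrinks the gap.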
{ "domain": "sicituradastra.it", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9865717456528508, "lm_q1q2_score": 0.8019519197982119, "lm_q2_score": 0.8128673087708699, "openwebmath_perplexity": 483.8687698821752, "openwebmath_score": 0.8928297162055969, "tags": null, "url": "http://kqbk.sicituradastra.it/use-a-midpoint-riemann-sum-with-four-subintervals.html" }
zoology, ornithology, ethology, senses, balance Source: Weimerskirch, H., Bishop, C., Jeanniard-du-Dot, T., Prudor, A. and Sachs, G. (2016). Frigate birds track atmospheric conditions over months-long transoceanic flights. Science, 353(6294), pp.74-78. PS: You have two questions here, especially after your edit. I suggest that you post another question regarding spatial orientation, since asking different questions (even if they are related) in the same post is not nice, and it's a reason to close ("Too broad: Avoid asking multiple distinct questions at once"). However, it's worth mentioning that apparently birds do suffer from spatial disorientation when there is no visual cue. According to this relatively old paper, "Spatial Disorientation in Birds": The only conclusion is that birds are susceptible and suffer from spatial disorientation, and further that the causes of spatial disorientation in birds are exactly the same as those which affect the human pilot, namely; (a) the loss of true visual cues to the horizontal; (b) inexperience in flying under such conditions where visual cues are lost;
{ "domain": "biology.stackexchange", "id": 7217, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "zoology, ornithology, ethology, senses, balance", "url": null }
digital-communications Title: The order of processing the prerecorded samples (DC removal, Hilbert transform, frequency shifting) I have a samples prerecorded by a radio device. Samples are not in baseband, contain large DC component and include only one channel (written in int32 format). My goal is to move it to the baseband, remove the DC spike and perform Hilbert transform to make it complex for further processing. My question is, what is the correct order of performing mentioned computations? Or it doesn't matter? I assume that first I need to remove DC by calculating mean (DC component) and correct each sample. Then go with Hilbert transform to get complex representation of the signal. After that I need to multiply the signal by a complex waveform to move it to the baseband. Can anyone confirm that my way of thinking is right or correct me if I am wrong. Order of operations can be changed but will change where the processing is applied and that may be simplified in certain conditions which will be very obvious once the actual signal is processed if the goals of each step are understood. For example, if the OP means by DC a non-zero carrier (which will appear at DC after moving the signal completely to baseband), this is much easier to remove after moving to baseband by simply subtracting the mean. However in actual applications, the carrier may not initially be perfectly estimated, in which case a tone close to DC will occur and subtracting the mean will be ineffective (but the tone can be used to continue to refine the carrier estimate). If the OP meant the waveform itself as received prior to any processing has a large DC offset, this too can be removed prior to any processing with a simple subtraction of the mean (since the samples are pre-recorded this would be simplest) assuming the DC offset is not desired for some reason, otherwise the tone it becomes will be filtered out regardless in the subsequent processing below. 
(There can of course be advantages to removing the DC signal first in terms of available dynamic range, but that would all be clear in the specific processing done and the precision of the operations.)
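A minimal sketch of one ordering of the pipeline discussed above: remove the DC offset, form the analytic signal (the FFT construction below mirrors what scipy.signal.hilbert does), then mix down to baseband. The sample rate and carrier are made-up stand-ins for the OP's actual parameters:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal of a real input via the one-sided spectrum trick
    (the same construction scipy.signal.hilbert uses; len(x) assumed even)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0       # double the positive frequencies
    h[n // 2] = 1.0         # keep Nyquist as-is
    return np.fft.ifft(X * h)

fs, fc = 48_000.0, 12_000.0           # assumed sample rate and carrier
t = np.arange(4096) / fs
x = 0.5 + np.cos(2 * np.pi * fc * t)  # toy recording: DC offset + carrier

x = x - x.mean()                              # 1) remove the DC offset
z = analytic_signal(x)                        # 2) complex (analytic) signal
baseband = z * np.exp(-2j * np.pi * fc * t)   # 3) shift to baseband
```

For this toy input the result collapses to a constant complex envelope, confirming the carrier was removed cleanly.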
{ "domain": "dsp.stackexchange", "id": 10104, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "digital-communications", "url": null }
mathematics, singularities Why are extra dimensions necessary? A simple and rigorous proof of 26/10 dimensions in string theory …does not exist. You sweep the infinitely big error under an infinitely distant rug. You lose me a bit here - there is no error to sweep. Two sets have the same cardinality if and only if the elements of each can be put into one-to-one correspondence with the elements of the other. That is true of $\mathbb N$ and $2\mathbb N$, as you say. No problem. Now, if two sets $A$ and $B$ both have a finite number of elements, then they have the same cardinality if and only if they have the same number of elements. In this way, cardinality reduces to "number of elements" for finite sets. For non-finite sets, "number of elements" is not a meaningful notion. This makes me squirm a little. You can use this idea to prove unphysical results like the Banach-Tarski theorem. [...] But he says that some physicists think this is physical. Some paper have been written using this idea. For example to explain things about quark confinement. I found only one paper linking Banach-Tarski to hadron physics. Its central thesis is that there is a connection between the two, because it can be shown that the so-called minimal decomposition required to implement Banach-Tarski is a single sphere being split into 5 pieces and then reassembled into two spheres, one consisting of 2 pieces and the other consisting of the remaining 3. We are to imagine that the "pieces" are quarks, and that a 2-piece sphere and a 3-piece sphere are a meson and baryon, respectively. The author observes that if this connection exists, then quark confinement is the statement that there does not exist a decomposition in which one of the resulting spheres consists of only one piece. Personally, this seems ... unlikely to bear much fruit. I suspect the reason for this apparent correspondence can be attributed to the strong law of small numbers.
{ "domain": "physics.stackexchange", "id": 99742, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "mathematics, singularities", "url": null }
thermodynamics, heat-conduction It has $3$ conduction zones, as well as $1$ inside convection zone and $1$ outside convection zone. We assume steady state, so all temperatures are invariant to time. The red curve is the temperature profile throughout the wall. It can then be shown that the heat flux $\dot{Q}$ flowing through the wall is given by: $$\dot{Q}=UA\left(T_e-T_o\right)$$ ($^{\dagger}$ see note) where $U$ the overall heat transfer coefficient is given by: $$\frac1U=\frac{1}{h_e}+\sum_i \frac{t_i}{\lambda_i}+\frac{1}{h_o}$$ Here, the $h$ are convection coefficients (see Newton's law of cooling), $t_i$ the thicknesses of the wall components and their respective heat conductivities $\lambda_i$. Note that this doesn't take into account any thermal inertias but those don't feature in a steady state regime. Without knowing $U$ no solution, transient or steady state, can be developed. ($^{\dagger}$ and not $T_{10}$ as written in the figure) What differential equation does the system satisfy?
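The overall-coefficient formula above translates directly into code; the layer numbers below are hypothetical, chosen only to show the calculation:

```python
def overall_U(h_e, h_o, layers):
    """Overall heat-transfer coefficient: 1/U = 1/h_e + sum(t_i/lambda_i) + 1/h_o.
    layers: iterable of (thickness_m, conductivity_W_per_mK) pairs."""
    resistance = 1.0 / h_e + sum(t / lam for t, lam in layers) + 1.0 / h_o
    return 1.0 / resistance

# Hypothetical wall: brick, insulation, plaster (illustrative values only)
U = overall_U(h_e=25.0, h_o=8.0,
              layers=[(0.10, 0.8), (0.05, 0.04), (0.02, 0.5)])
Q = U * 12.0 * (20.0 - (-5.0))  # Q_dot = U * A * (T_e - T_o), with A = 12 m^2
print(U, Q)
```

Note how the insulation layer (low conductivity) dominates the total resistance, so U is mostly set by that one term.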
{ "domain": "physics.stackexchange", "id": 83873, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "thermodynamics, heat-conduction", "url": null }
homework-and-exercises, newtonian-mechanics, equilibrium Here's a text diagram, as best as I could make it Update: Found a hint from the textbook - (3.5 - x)/cos(o) + x/cos(o) = 5. Not quite sure what to make of it, but it does kinda remind me of an ellipse at a slant... https://math.stackexchange.com/questions/108270/what-is-the-equation-of-an-ellipse-that-is-not-aligned-with-the-axis Update 2: Upon closer inspection of aufkag's angle-suggestion and the hint from the textbook, I believe he is correct about the angles being equal - the formula calculates the two segments of the rope from the adjacent sides x and 3.5 - x. By the way, how can it be explained or "proved" or what's the law that says the angles between AB and the wall and AC and the wall in a setup like this are equal? Update 3: (after solved, see comment for aufkag): Added D, E, F. ABD = BAE and CBD = BCF, but can anyone prove or point out the law that says ABD = CBD or BAE = BCF? Anyways, the steps are: o = angle AB and the horizontal or BC and the horizontal x / cos(o) + (3.5 - x) / cos(o) = 5 (sum of segments of rope is 5) tension in AB = tension in BC, therefore they share the same "load" of the mass, so we can calculate the tension in just one side 100 * 9.81 / 2 / sin(o) = 687N (approximately - first half of answer) 0.75 + xtan(o) = (3.5 - x)tan(o) (equal lengths for line segment BD) solve for x to get 1.38m aufkag pointed out the necessary parts for the solution, but didn't make an answer. This walks through the problem using his tips The key is to realize that angles BAE and BCF are equal. By geometric laws DBA and DBC are equal to those other two too (notice parallel lines AE, DB, and CF). Proof by aufkag (quote from his last comment)
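The solution steps above can be checked numerically (same numbers as in the post: 5 m rope, 3.5 m horizontal span, 0.75 m vertical offset, 100 kg mass):

```python
import math

L, span, drop, m, g = 5.0, 3.5, 0.75, 100.0, 9.81

# x/cos(o) + (span - x)/cos(o) = L  =>  cos(o) = span / L
o = math.acos(span / L)

# Equal tension in both segments, each carrying half the weight vertically
T = m * g / 2.0 / math.sin(o)

# drop + x*tan(o) = (span - x)*tan(o)  =>  x = (span - drop/tan(o)) / 2
x = (span - drop / math.tan(o)) / 2.0
print(round(T), round(x, 2))
```

This reproduces the posted answers: T is about 687 N and x is about 1.38 m.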
{ "domain": "physics.stackexchange", "id": 9603, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, newtonian-mechanics, equilibrium", "url": null }
c++, algorithm, comparative-review, random namespace net { namespace coderodde { namespace util { template<typename T> class ArrayProbabilityDistribution : public ProbabilityDistribution<T> { public: ArrayProbabilityDistribution() : ProbabilityDistribution<T>() {} ArrayProbabilityDistribution(std::random_device::result_type seed) : ProbabilityDistribution<T>(seed) {} ArrayProbabilityDistribution( const ArrayProbabilityDistribution<T>& other) { this->m_size = other.m_size; this->m_total_weight = other.m_total_weight; m_element_storage_vector = other.m_element_storage_vector; m_weight_storage_vector = other.m_weight_storage_vector; m_filter_set = other.m_filter_set; } ArrayProbabilityDistribution( ArrayProbabilityDistribution<T>&& other) { this->m_size = other.m_size; this->m_total_weight = other.m_total_weight; m_element_storage_vector = std::move(other.m_element_storage_vector); m_weight_storage_vector = std::move(other.m_weight_storage_vector); m_filter_set = std::move(other.m_filter_set); other.m_size = 0; other.m_total_weight = 0.0; } ArrayProbabilityDistribution& operator=( const ArrayProbabilityDistribution<T>& other) { this->m_size = other.m_size; this->m_total_weight = other.m_total_weight; m_element_storage_vector = other.m_element_storage_vector; m_weight_storage_vector = other.m_weight_storage_vector; m_filter_set = other.m_filter_set; return *this; } ArrayProbabilityDistribution& operator=( ArrayProbabilityDistribution<T>&& other) { if (this == &other) { return *this; }
{ "domain": "codereview.stackexchange", "id": 27192, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, algorithm, comparative-review, random", "url": null }
the-sun, observational-astronomy 19390101 is the date (1939-01-01). The first zero (Marked as SC) is the station code, the second the recorded hour, and the third the recorded minute (H/M). Most of this is 0 due to being interpolated data. After this information, there are 72 remaining columns (numbered 1-72 above), which represents collected data consolidated in groups of 5 degrees, presumably starting at 0. Given the amount of interpolation of old data the value may be questionable, but newer data appear to be more complete. The data format might be a little overwhelming to work with, but you can organize it in excel quite easily - paste it in and each line break will form a row, then use the text-to-column function to break each data into its own column.
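If the records are whitespace-delimited (an assumption; the real file may be fixed-width, in which case column offsets should be used instead), a row like the 1939 example parses as:

```python
# A made-up record matching the described layout: date, station code,
# hour, minute, then 72 values (one per 5-degree bin).
line = "19390101 0 0 0 " + " ".join(["0"] * 72)

fields = line.split()
record = {
    "date": fields[0],                       # YYYYMMDD
    "station_code": int(fields[1]),          # SC
    "hour": int(fields[2]),                  # H
    "minute": int(fields[3]),                # M
    "bins": [float(v) for v in fields[4:]],  # 72 bins x 5 degrees = 360
}
print(record["date"], len(record["bins"]))
```

This is the programmatic equivalent of the Excel text-to-columns approach mentioned above, and avoids doing the split by hand for every row.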
{ "domain": "astronomy.stackexchange", "id": 784, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "the-sun, observational-astronomy", "url": null }
quantum-field-theory, operators, wick-theorem, matrix-elements, non-perturbative

Title: Non-perturbative matrix element calculation

Following Peskin & Schroeder's Sec. 7 notation, I would like to compute the matrix element
$$ \left<\lambda_{\vec{p}}| \phi(x)^2 |\Omega\right>\tag{1} $$
where $\langle\lambda_{\vec{p}}|$ is obtained by boosting the state $\langle\lambda_0|$ whose momentum eigenvalue is zero, i.e. $\langle\lambda_0|\vec{P} = 0$, while its energy eigenvalue is denoted by $\langle\lambda_0|H = \langle\lambda_0|m_\lambda$. The state $|\Omega\rangle$ is the non-perturbative vacuum of the theory. Now consider the following OPE
$$ \phi(x)\phi(y) = \langle\Omega| T\{\phi(x)\phi(y)\} |\Omega\rangle - :\phi(x)^2:\tag{2} \label{ope} $$
where $:\mathcal{F}(\phi):$ is the usual normal ordering. If we sandwich the above equation with $\langle\lambda_{\vec{p}}|$ and $|\Omega\rangle$, seemingly we would get
$$ \left<\lambda_{\vec{p}}| \phi(x) \phi(y) |\Omega\right> = \sum_{\lambda'}D_F(x-y,\lambda') \langle\lambda_{\vec{p}}|\Omega\rangle |\left<\lambda_0'| \phi(0) |\Omega\right>|^2\tag{3} $$
where
$$
{ "domain": "physics.stackexchange", "id": 99672, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-field-theory, operators, wick-theorem, matrix-elements, non-perturbative", "url": null }
ros

Title: ROS Matlab Compressed Image?

Node = rosmatlab.node('Test','http://localhost:11311');
subscriber = Node.addSubscriber('/sonar_image/compressed', 'sensor_msgs/CompressedImage', 10);
subscriber.addCustomMessageListener({@test_fun, Node.Node});

Good day, I have code as such. It doesn't seem to get anything, though. It does call back into my function test_fun; however, the message returned doesn't seem to contain anything.

Originally posted by soulslicer on ROS Answers with karma: 61 on 2015-02-27
Post score: 1

Original comments

Comment by Andromeda on 2015-02-28: did you have the /sonar_image message defined in your /usr/local/MATLAB/R20XXa/toolbox/psp/rosmatlab/jars/ ?????

Okay, never mind, I'm able to get the data, but it's a compressed JPEG object. How do I decompress it in MATLAB for viewing?

Originally posted by soulslicer with karma: 61 on 2015-03-01
This answer was ACCEPTED on the original site
Post score: 0
{ "domain": "robotics.stackexchange", "id": 21010, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros", "url": null }
    if err >= mpmath.mpf(0.5)**int(bits*0.85):
        if bits >= maxbits:
            print("That's suspiciously large; giving up; try again with larger maxbits?")
            return None
        print("That's suspiciously large; continuing the search with more bits.")
        bits = round(1.2*bits)
        continue
    else:
        print("Seems OK.")
        break
j = len(dseq)-2
while j > 20*max(dice) and dseq[j] == dseq[-1]:
    j -= 1
j = int(0.8*j)
dseq = dseq[:j]
startpoint = len(dseq) // 10
maxperiod = len(dseq) // 10
period = None
print(f"Attempting period-finding with {len(dseq)} values, start=maxperiod={maxperiod}.")
for tl in range(1, maxperiod+1):
    tail = dseq[-tl:]
    reps = (len(dseq)-startpoint) // tl
    longtail = dseq[-(reps*tl):]
    if longtail == reps*tail:
        print("Period", tl)
        period = tl
        break
else:
    print("No period found with this many iterations.")
    return (num, den, None, None, [], [])
if period is not None:
    for i in range(len(dseq)-2*period):
        p = dseq[i:i+period]
        r = (len(dseq)-(i+period)) // period
        if dseq[i+period:i+period*(1+r)] == r*p:
            print("Repeating period starts at", i)
            print("Initial transient:", (dseq[:i] if i < 100 else "long"))
            print("Repeating seq:", (' '.join(map(str, p)) if period < 100 else "long"))
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9850429116504952, "lm_q1q2_score": 0.8053459337640595, "lm_q2_score": 0.8175744673038221, "openwebmath_perplexity": 975.1590751594423, "openwebmath_score": 0.9346749186515808, "tags": null, "url": "https://math.stackexchange.com/questions/4192238/convergence-of-winning-probability-in-a-one-player-dice-throwing-game" }
particle-physics, large-hadron-collider, data-analysis, particle-detectors Title: What does the inverse background efficiency represent? I am reading a paper from the ATLAS experiment on the identification of tau jets from background jets and came across this figure: I am struggling to find what the formula is for the inverse background efficiency. Can someone explain to me how this is calculated? Here is the link to the whole paper: http://cdsweb.cern.ch/record/2064383/files/ATL-PHYS-PUB-2015-045.pdf Thanks "Inverse [background] efficiency" (also known as "[background] rejection") is calculated as the reciprocal of [background] efficiency. In more detail (using $\tau$ identification as example), let's say you have a collection of objects, a portion of which are true taus and the remaining portion of which are not taus. Apply your $\tau$ identification procedure; then the "signal efficiency" would be calculated as \begin{equation} \varepsilon_\tau = \frac{\text{number of true $\tau$ identified as a $\tau$}}{\text{total number of true $\tau$}} \end{equation} However, there will also be some fraction of the not-taus in the collection of objects that could look like a tau, and thus be identified as a tau. So similarly there will be a "background efficiency" \begin{equation} \varepsilon_b = \frac{\text{number of not-$\tau$ identified as a $\tau$}}{\text{total number of not-$\tau$}} \end{equation} and from this, the rejection value is calculated as $1/\varepsilon_b$.
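The two formulas reduce to simple count ratios; a quick sketch with made-up counts (the numbers are illustrative only, not from the paper):

```python
# Illustrative counts, not from the paper.
true_tau = 1000           # total number of true taus
not_tau = 100000          # total number of not-taus (background objects)
tau_id_as_tau = 700       # true taus identified as a tau
not_tau_id_as_tau = 50    # not-taus (mis)identified as a tau

signal_eff = tau_id_as_tau / true_tau         # epsilon_tau
background_eff = not_tau_id_as_tau / not_tau  # epsilon_b
rejection = 1 / background_eff                # the "inverse background efficiency"
```

So a working point with $\varepsilon_\tau = 0.7$ and $\varepsilon_b = 5\times10^{-4}$ would be plotted at a rejection of 2000.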
{ "domain": "physics.stackexchange", "id": 86403, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "particle-physics, large-hadron-collider, data-analysis, particle-detectors", "url": null }
java, performance, linked-list, generics

        if (tmp.getNext().getValue().equals(value)) {
            Node<T> preTmp = tmp.getNext();
            if (tmp.getNext().getNext() == null)
                tail = tmp;
            tmp.setNext(tmp.getNext().getNext());
            preTmp = null;
            size--;
            break;
        } else
            tmp = tmp.getNext();
        }
    }
}

Your code seems to be poorly indented; this is either because you had problems pasting it to Code Review, or you didn't notice it was like that. By using indentation that follows the curly brackets, you get clearer code.

Inconsistent method curly bracket indentation

public void append(T value) {
public int findIndexOf(T value) {
public T findValueOf(int index) {
public void insert(T value,int index){
public void delete(int index){
private void checkValue(T value,String message){
private void checkIndex(int index) {
public Node<T> getHead() {
public Node<T> getTail() {
public void display(){

You have 6 methods with a space before the curly bracket, and 4 without. Making this consistent improves the code.

Useless commentary

public Linkedlist() {
    // TODO Auto-generated constructor stub
    head = null;
    tail = null;
    size = 0;
}

public Node(T value) {
    // TODO Auto-generated constructor stub
    this.value=value;
    this.next=null;
}

No commentary is better than useless commentary; do we really need to know that that constructor was auto-generated by Eclipse? The constructor is so simple that it doesn't need commentary.

Cannot delete index 0

You cannot delete the first index by number with your code; this is a bug that should have been caught by unit testing.

delete(int) optimization
{ "domain": "codereview.stackexchange", "id": 18299, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, performance, linked-list, generics", "url": null }
python, programming-challenge, python-2.x

Then we can just generate all the right palindromes, base 10, and check if they match base 2:

    total = 0
    cap = upper_limit / 10 ** (math.log10(upper_limit)/2)
    for p in xrange(int(cap) + 1):
        for repeat in (True, False):
            pal = make_palindrome(p, repeat)
            if pal & 1 and lower_limit <= pal <= upper_limit:
                as_bin = bin(pal)[2:]
                if as_bin == as_bin[::-1]:
                    total += pal
    return total

Timing comparison, this is 100x faster:

    Brute Force Search    0.3477s
    Palindrome Generator  0.0030s
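The `make_palindrome` helper isn't shown in the excerpt. A plausible sketch of what it might look like — the meaning of the `repeat` flag here is my assumption: whether the last digit of the half is mirrored too, giving even- or odd-length palindromes:

```python
def make_palindrome(half, repeat):
    # hypothetical reconstruction of the helper used above
    s = str(half)
    # repeat=True mirrors every digit (even length, e.g. 12 -> 1221);
    # repeat=False drops the last digit from the mirror (odd length, e.g. 12 -> 121)
    mirror = s[::-1] if repeat else s[:-1][::-1]
    return int(s + mirror)
```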
{ "domain": "codereview.stackexchange", "id": 16083, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, programming-challenge, python-2.x", "url": null }
gravity, newtonian-gravity, experimental-physics, antimatter

Title: Current bounds on the value of $g$ for antimatter

In 2011, the ALPHA experiment showed that the gravitational acceleration for antihydrogen was between -65 and 110 times the normal gravitational acceleration. Has there been any improvement on the value of the gravitational acceleration for antimatter, whether antihydrogen or otherwise? I know the AEḡIS (AEgIS) experiment has been working on this, but I'm not familiar with any data, not even that the value falls into the range established by ALPHA.

The ALPHA result (which I worked on), Observation of the effect of gravity on the motion of antimatter, which Charles put as an answer to his own question, is the most direct measurement of antimatter's response to gravity. It says that antihydrogen falls down with an acceleration of:
$$g(0.75\pm 0.13 (\text{statistical + systematic}) \pm 0.16 (\text{simulation}))$$

However, if you're willing to accept certain well-established ideas about how gravity and mass work, then less direct measurements provide much more precision. First, the BASE collaboration (in the same building as ALPHA) measures the "double ratio" of the charge-to-mass ratio of the antiproton divided by the charge-to-mass ratio of the proton. They do this by measuring the antiproton's cyclotron frequency in a magnetic field. As the Earth orbits the Sun, its gravitational potential energy changes, and this changes the rate at which time on Earth progresses as seen by a distant observer. This effect is gravity, and if antimatter responded oppositely, or not at all, to gravity, it shouldn't receive the same slowdown as protons. By measuring the cyclotron frequency at different times in the year, they put a 3% bound on the difference between how matter and antimatter respond to gravity.

A 16-parts-per-trillion measurement of the antiproton-to-proton charge–mass ratio
{ "domain": "physics.stackexchange", "id": 97434, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "gravity, newtonian-gravity, experimental-physics, antimatter", "url": null }
I'll just write down the full solution to your problem, and explain all the steps in the derivations. So, let $$f_1,\ldots,f_k:\mathcal H \rightarrow (-\infty,+\infty]$$ be functions (convex or not) on a Hilbert space $$\mathcal H$$. Define $$g(x) := \inf_{x_1,\ldots,x_k}\left\{\sum_i f_i(x_i) \mid \sum_i x_i = x\right\}.$$ The function $$g$$ is also called the infimal convolution of $$f_1,\ldots,f_k$$, written $$g=\Box_{i=1}^k f_i$$.
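As a quick sanity check of the definition, take $k=2$ and $f_1=f_2=\tfrac12\|\cdot\|^2$; the infimum over $x_1+x_2=x$ is attained at $x_1=x_2=x/2$, giving

```latex
g(x) = \inf_{x_1 + x_2 = x}\left\{\tfrac{1}{2}\|x_1\|^2 + \tfrac{1}{2}\|x_2\|^2\right\}
     = \tfrac{1}{2}\left\|\tfrac{x}{2}\right\|^2 + \tfrac{1}{2}\left\|\tfrac{x}{2}\right\|^2
     = \tfrac{1}{4}\|x\|^2 .
```

So the infimal convolution of two quadratics is again a quadratic, with the curvature coefficients combining harmonically (like resistors in parallel).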
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9833429614552198, "lm_q1q2_score": 0.8543547051573486, "lm_q2_score": 0.8688267864276108, "openwebmath_perplexity": 931.0419336172458, "openwebmath_score": 0.9050416350364685, "tags": null, "url": "https://math.stackexchange.com/questions/3380913/the-conjugate-function-of-infimum-of-sum-of-functions" }
java, performance

    int[] firstRow = new int[size];
    int[] secondRow = Arrays.copyOf(aTriangleBase, size);
    boolean useFirst = true;
    int a = 0;
    for (int i = 1; i < size; ++i) {
        if (useFirst) {
            for (int j = 0; j < size - i; ++j) {
                a = secondRow[j] + secondRow[j + 1];
                if (a >= aLimit || count[a]) return -1;
                count[a] = true;
                firstRow[j] = a;
            }
            useFirst = false;
        } else {
            for (int j = 0; j < size - i; ++j) {
                a = firstRow[j] + firstRow[j + 1];
                if (a >= aLimit || count[a]) return -1;
                count[a] = true;
                secondRow[j] = a;
            }
            useFirst = true;
        }
    }
    // return final value, our result if no duplicates occur during the process
    return a;
}
}

Performance: a base of size 5 takes about 200 milliseconds on my machine, a base of 6 takes about 40 seconds, and 7 is still running after half an hour.

I can solve size=9 in 18 seconds, but I tried a very different approach (starting from the top and trying to compute the rows below). For size=10 it took 50 minutes, and I see no chance that size=11 finishes this year. I don't claim that starting from the top is better; maybe starting from the base could be made much more efficient if more promising candidates were tried first.

Looking at the solution

    1000
    489 511
    277 212 299
    175 102 110 189
    116 59 43 67 122
    77 39 20 23 44 78
    50 27 12 8 15 29 49
    32 18 9 3 5 10 19 30
    21 11 7 2 1 4 6 13 17

you can see that the smallest values are in the middle of the base. This makes sense, as they contribute most to the result.

Readability; how difficult is it to read the code
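The printed solution can be checked mechanically: every entry must be the sum of the two entries directly below it, and all 45 values must be distinct. A small sketch (in Python rather than the Java above, just for brevity):

```python
# The solution triangle printed above, top row first, base last.
rows = [
    [1000],
    [489, 511],
    [277, 212, 299],
    [175, 102, 110, 189],
    [116, 59, 43, 67, 122],
    [77, 39, 20, 23, 44, 78],
    [50, 27, 12, 8, 15, 29, 49],
    [32, 18, 9, 3, 5, 10, 19, 30],
    [21, 11, 7, 2, 1, 4, 6, 13, 17],
]

# every entry equals the sum of the two entries directly below it
for above, below in zip(rows, rows[1:]):
    assert above == [a + b for a, b in zip(below, below[1:])]

# all 45 values are distinct
flat = [x for row in rows for x in row]
assert len(set(flat)) == len(flat) == 45
```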
{ "domain": "codereview.stackexchange", "id": 14275, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, performance", "url": null }
java, game, swing

    // deals with setting up for the numbers being displayed and calling scheduleNumbers(),
    // which executes schedulers for display of numbers
    private void printNumbers(int[] randomNumbers) {
        int speed = DIFF_TIMES[difficulty.getSelectedIndex()];
        int amount = BASE_AMOUNT + currentScore;
        answerField.setEditable(false);   // stops user from entering text before the game has started
        checkAnswerBut.setEnabled(false); // stops user from submitting an answer when the program isn't ready
        showStatus(getNumbersAsString());
        scheduleNumbers(randomNumbers, speed, amount);
    }

    // executes schedulers for display of numbers
    public void scheduleNumbers(int[] randomNumbers, int speed, int amount) {
        long initialDelay = INITIAL_DELAY;
        final AtomicInteger curNumber = new AtomicInteger(-1); // used so it can be updated in the setNumber() lambda
        final Runnable setNumber = () -> {
            // sets text to next number
            currentNumberLab.setText(Integer.toString(randomNumbers[curNumber.incrementAndGet()]));
        };
        // schedules calls of setNumber() at specific intervals for a certain time
        setNumberService = scheduler.scheduleAtFixedRate(setNumber, initialDelay, speed, MILLISECONDS);
        // scheduled to clean up after setNumberService by enabling components,
        // removing the number display and cancelling setNumberService
        endNumberService = scheduler.schedule(() -> {
            currentNumberLab.setText("");
            answerField.setEditable(true);
            checkAnswerBut.setEnabled(true);
            gameStatus = GameStatusEnum.WAITING;
            setNumberService.cancel(true);
        }, ((speed * amount) + initialDelay), MILLISECONDS);
    }
{ "domain": "codereview.stackexchange", "id": 11110, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, game, swing", "url": null }
astronomy, telescopes, astrophotography

As to the scopes themselves, I own a 6SE myself, and also have an 8-inch Dobsonian. Both are very useful telescopes, and both will be excellent for visual observation. The XT8 will show you more because of its larger aperture: more detail and brighter images. The main advantage of the 6SE is its compact size and portability. Its optics are quite good, but cannot compare with the XT8.

If GoTo appeals to you, the XT8 is now available in a GoTo version, the XT8g. There is also the intermediate IntelliScope XT8i, which is my personal favourite. Its computer guides you to any object in the sky, but it uses manual power rather than motors, so it is very quiet and low on battery consumption.

I disagree with Florin Andrei that the 6SE is "not much more than a toy." I have been very pleased with mine: it is well made and has good optics in a very solid, compact, portable package.
{ "domain": "physics.stackexchange", "id": 3117, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "astronomy, telescopes, astrophotography", "url": null }
navigation

Title: Applying restrictions on 360-degree LiDAR for slam and autonomous navigation

I am working with TurtleBot 3 Burger (Raspberry Pi 3) and ROS Kinetic. On TurtleBot 3 we have a 360-degree planar LiDAR for slam and autonomous navigation. What I need/want to do is to make this scanner look at certain angles for data collection. For example, I want it to collect the data from those angles which are highlighted using slashes (front and rear), and ignore the data from those angles which are highlighted using brackets (left and right).

          \       /
           \     /
            \   /
    ]                 [
    ]                 [
    ]  LiDAR Scanner  [
    ]                 [
    ]                 [
            /   \
           /     \
          /       \

**Sorry for my figure, I can't upload the actual image as I am new in the ROS community and I don't have enough points to upload an image. Any help or hint is appreciated.

Originally posted by maziarser on ROS Answers with karma: 68 on 2017-12-20
Post score: 2

I have solved this issue and I was thinking that sharing the real solution with others is a good idea:

    import rospy
    import math
    from sensor_msgs.msg import LaserScan
    from math import *
    import numpy as np
    import copy

    ranges_filter = []
    intensities_filter = []

    # copy the ranges and intensities of the "/scan" topic to "ranges_filter" and "intensities_filter";
    # you need to convert them to "list" as "data.ranges" and "data.intensities" are "tuple"
    def callback_scan(data):
        global ranges_filter, intensities_filter
        len(data.ranges)       # 360
        len(data.intensities)  # 360
        ranges_filter = copy.copy(data.ranges)
        intensities_filter = copy.copy(data.intensities)
        # convert them to list
        ranges_filter = list(ranges_filter)
        intensities_filter = list(intensities_filter)
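With a 360-sample planar scan mapping one sample per degree, the masking itself can be done by index. A sketch of the idea (the sector bounds 45°–135° and 225°–315° are my assumption for "left" and "right"; adjust them to the angular convention of your scanner):

```python
def mask_sectors(ranges):
    # blank out the left and right sectors, keep front and rear
    filtered = list(ranges)
    for i in range(len(filtered)):
        if 45 <= i < 135 or 225 <= i < 315:  # assumed left/right sectors
            filtered[i] = float('inf')       # inf marks "no return" in a LaserScan
    return filtered

scan = [1.0] * 360           # stand-in for ranges_filter from the callback
masked = mask_sectors(scan)
```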
{ "domain": "robotics.stackexchange", "id": 29579, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "navigation", "url": null }
quantum-neural-network

$$\langle z, 1|U^\dagger_l Y_{n+1}U_l|z, 1\rangle = l(z)$$

I do not understand what $l(z)$ exactly is (the true label, I believe) and how there are $2^{2^n}$ possible functions for it.

Yes, $l(z)$ is the true label, and so $l$ is the target function that you want to learn in order for the machine learning model to be correct. The set of functions $l$ that the model is capable of learning describes the expressiveness of the classifier, and they use the construction of $U_l$ to argue that there is a quantum circuit composed of two-qubit gates that can learn to represent any possible binary function on finite strings $l: \{0,1\}^n\rightarrow \{-1, 1\}$, and it is therefore highly expressive.

Note that the quantum circuit predicts a label of $\hat{y} = \langle z, 1|U^\dagger (\theta)Y_{n+1}U(\theta)|z, 1\rangle$, and the goal of training the circuit is to find a set of $\theta$ such that $\hat{y} = l(z)$ as often as possible. You can see that the loss function provided is minimal for $\hat{y} = l(z)$ and maximal for $\hat{y} = -l(z)$, and from learning theory we know that minimizing this loss on a sample of possible datapoints ("empirical risk minimization") will often result in a classifier that generalizes well to unseen datapoints.

The reason why there are $2^{2^n}$ possible functions for $l$ is that there are $2^n$ possible length-$n$ bitstrings, and you want to count all of the ways you can assign one of two labels to each one. Say you want to design an arbitrary $l$: there are $2$ ways to label the string $0\dots 00$, times $2$ possible ways to label the string $0\dots 01$, times $2$ possible ways to label $0 \dots 10$, and so on. Multiplying out the possibilities you get $2^{2^n}$.
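The count is easy to reproduce by brute force for small $n$ (pure illustration, not part of the paper):

```python
from itertools import product

def count_labelings(n):
    # all binary functions l: {0,1}^n -> {-1, 1}, enumerated explicitly
    strings = list(product([0, 1], repeat=n))               # the 2**n bitstrings
    labelings = list(product([-1, 1], repeat=len(strings))) # one label per bitstring
    return len(labelings)
```

For each `n`, `count_labelings(n)` equals `2**(2**n)`.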
{ "domain": "quantumcomputing.stackexchange", "id": 2324, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-neural-network", "url": null }
In conclusion: we have proved (again modulo Exercise 2) that our construction gives a 4-(23,7,1) design D_{23}, and any 4-(23,7,1) design D' is isomorphic with it -- and indeed we may choose the isomorphism to take any ordered pair of distinct points of D' to (∞_1, ∞_2). We shall see next time that by taking D' = D_{23} we can conclude that Aut(D_{23}) is 4-transitive on points (and thus transitive on blocks).

March 28 and 30: The 5-(24,8,1) Steiner system D_24; its automorphism group M_24, and its subgroups M_23, M_22, and [PSL_3(F_4)=]M_21; the automorphism groups of D_23 and D_22

With the description of hyperovals and Baer subplanes we developed to construct and prove the uniqueness of D_22 and D_23, we now easily show D_24 is unique if it exists, and with a bit more effort verify existence and the number of automorphisms. We then describe Aut(D_24), which is M_24, the largest of Mathieu's 5 sporadic groups, and its k-point stabilizer M_{24-k} for k=1,2,3 (the last of which coincides with the normal subgroup PSL_3(F_4) of Aut(Π_4)).

Recall the intersection triangle of a 5-(24,8,1) Steiner system (Table 1.1 on p.21 of the text):

    759
    506 253
    330 176  77
    210 120  56  21
    130  80  40  16   5
     78  52  28  12   4   1
     46  32  20   8   4   0   1
     30  16  16   4   4   0   0   1
     30   0  16   0   4   0   0   0   1
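The whole triangle can be regenerated from the design parameters: the right border holds λ_i, the number of blocks through i given points (λ_i = C(24-i,5-i)/C(8-i,5-i) for i ≤ 5, and 1 for 5 ≤ i ≤ 8, since any 5 points lie in a unique block), and each remaining entry is the entry above it minus its right neighbor. A sketch:

```python
from math import comb

t, v, k = 5, 24, 8
rows = 9  # compute rows 0..8 of the intersection triangle

def lam(i):
    # number of blocks containing i given points
    if i <= t:
        return comb(v - i, t - i) // comb(k - i, t - i)
    return 1  # any t points determine a unique block

tri = [[lam(0)]]
for i in range(1, rows):
    row = [0] * (i + 1)
    row[i] = lam(i)                     # right border
    for j in range(i - 1, -1, -1):
        row[j] = tri[i - 1][j] - row[j + 1]  # entry above minus right neighbor
    tri.append(row)

for row in tri:
    print(' '.join(map(str, row)))
```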
{ "domain": "harvard.edu", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9907319853876677, "lm_q1q2_score": 0.8277402594148112, "lm_q2_score": 0.8354835330070839, "openwebmath_perplexity": 2557.5821634393883, "openwebmath_score": 0.8611697554588318, "tags": null, "url": "http://abel.math.harvard.edu/~elkies/M155.15/notes.html" }
telescope, amateur-observing, telescope-making, mirror-making Why is this telescope so short? How hard is it to make such a fast (and therefore deep) primary? Gennady Borisov with the 0.65-meter telescope he built and used to discover the new comet. G. Borisov I don't have enough room in the comments for this, so I'm writing here, although it's probably not a true answer since I know nothing about that particular telescope. Anyway, if you look at many SCT systems, and their derivatives such as Ritchey-Chretien, Dall-Kirkham, etc, the distance between primary and secondary is often not too big. If the secondary has negative curvature, that means one of its focal points is behind it. That's where the focal point of the primary also is. So the focal length of the primary is bigger than it seems just by looking at the picture. There's a focal point geometrically located in front of the telescope; the distance between it and the secondary or the primary depends on the design parameters of the system. It can't be too close to the secondary, or else the secondary would have to have extraordinary amounts of curvature. Also keep in mind that there must be a gap between the edge of the primary and the inner surface of the OTA, so the primary is smaller than the visual estimate of the hole. We also don't see the bottom of the instrument, so we don't know how far the primary mirror cell is protruding out the back of the instrument - though that's limited by the mount's big fork. Also, look at a small SCT, such as the Celestron EdgeHD8 OTA. Visually, the tube seems a bit longer, compared to its diameter. However, the EdgeHD is a much smaller instrument. The relative amount "wasted" at the bottom by the primary cell is different. Also, the secondary is "buried" into the OTA, whereas the 0.65m scope has its secondary sticking out of the OTA. Anyway, this is a bunch of handwaving based on visual estimates. 
It is possible that this is a system with a primary somewhat more strongly curved than usual.
{ "domain": "astronomy.stackexchange", "id": 3947, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "telescope, amateur-observing, telescope-making, mirror-making", "url": null }
ros2, python3, create, rclpy, package +./install/temoto_parser/bin +./install/temoto_parser/bin/srl_test +./install/temoto_parser/share/temoto_parser/hook/path.dsv +./install/temoto_parser/share/temoto_parser/hook/path.ps1 +./install/temoto_parser/share/temoto_parser/hook/path.py +./install/temoto_parser/share/temoto_parser/hook/path.sh
{ "domain": "robotics.stackexchange", "id": 33734, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros2, python3, create, rclpy, package", "url": null }
c++

EDIT: The code compiles and gives the correct result for the policy. For the values (or return) calculation, it is consistently 0.05 higher than the expected value across the board (and I am not sure if I should investigate this).

Program and File Structure

As a general rule, C++ source files (.cpp) are not included in other C++ source files; each .cpp file is compiled separately and the resulting object files are linked together by the linker. The benefit of this is that the entire program does not need to be rebuilt when a .cpp file is modified: only the module that was modified is recompiled, and the program is then re-linked. This allows bug fixes and feature requests to be implemented without incurring long build times. If the program is implemented as shared libraries, it means just a single library may need to be updated for bug fixes to be delivered to users. In some cases very simple classes may be implemented using only a header file.

One of the problems with including source files in other source files is that it can lead to multiple definitions of objects or functions at link time. An example would be using util.cpp in multiple other source files. A second possible problem with including source files in other source files is that the compile time for the final source file will increase.

In C++, classes are generally implemented as a header file (.h or .hpp) and a C++ source file pair. The structure and public interface of the class are in the header file, and in most cases the internal implementation of the class is in the C++ source file. Public interfaces are expected not to change often, but the internal implementation can change as often as necessary.

In try.cpp, board.cpp is included; this ends up including point.cpp and util.cpp. The problem with this is that the main() function only needs to know about the Board class; it does not need to know about the Point struct or the items in util.cpp.
{ "domain": "codereview.stackexchange", "id": 35264, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++", "url": null }
boolean-functions, combinatorics, lattice Example: Let $\phi$ be the monotone Boolean function on variables $\langle 4 \rangle$ defined by the CNF $F_\text{cnf} = (3 \lor 4) \land (0 \lor 4) \land (0 \lor 1 \lor 2)$. You can check that its corresponding DNF is $F_\text{dnf} = (0 \land 3) \lor (0 \land 4) \lor (2 \land 4) \lor (1 \land 4)$. The Hasse diagrams of the CNF and DNF lattices of $\phi$, together with the values $\mu(e,\hat{1})$ for each element $e$ of the lattices, are drawn below (where, e.g., "034: 1" means that it is element $e=\{0,3,4\}$ and we have $\mu(e,\hat{1})=1$): CNF lattice:
{ "domain": "cstheory.stackexchange", "id": 4619, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "boolean-functions, combinatorics, lattice", "url": null }
short practice quiz. Linear Algebra With Applications 9th by Steven J.

Questions tagged [linear-algebra]

Ask Question A field of mathematics concerned with the study of finite dimensional vector spaces, including matrices and their manipulation, which are important in statistics.

READ: All New People By Ann Lamott Essay. Elementary Linear Algebra with Applications. The second part of the video above. Books 5-7 introduce rational numbers and expressions. Once a week, you meet in small groups for discussion sections with a TA and you will have a chance to ask questions, especially those which concern the discussion problems posted each week. Get smarter in Algebra on Socratic. For problems 1-3, consider the following system of equations. 2 Find Slope and Rate of Change Lesson 2. The graph for x > -3. Our goal is to give the beginning student, with little or no prior exposure to linear algebra, a good grounding in the basic ideas, as well as an appreciation for how they are used in many applications, including data fitting, machine learning and artificial intelligence. Math 55 is a two-semester long first-year undergraduate mathematics course at Harvard University, founded by Lynn Loomis and Shlomo Sternberg. We recommend all students to download the sample attached to each test bank page and review them deeply. Additionally, the book includes ample applications drawn from a variety of disciplines, which reinforce the fact that linear algebra is a valuable tool for modeling real-life problems. PLASMA is a software package for solving problems in dense linear algebra using OpenMP. Welcome to McDougal Littell's Test Practice site. Choose from 7 study modes and games to study Linear Algebra. 1 - Math 1310 Author: Sheila J. Linear Algebra - Test File - Spring 2003. 8 Graph Linear Inequalities in Two Variables. 06 SC) Test questions.
Unfortunately currently there is only a field to enter the response, and no display of the result as a LaTeX formula. Although linear algebra is integral to the field of machine learning, the tight relationship […]. 3 Rotation matrices Here's our old example of rotating coordinate axes.
{ "domain": "asdpallavolorossano.it", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9822876992225169, "lm_q1q2_score": 0.8076262607866592, "lm_q2_score": 0.822189121808099, "openwebmath_perplexity": 1135.8759574371509, "openwebmath_score": 0.3489063084125519, "tags": null, "url": "http://uvym.asdpallavolorossano.it/linear-algebra-quiz-questions.html" }