quantum-field-theory, operators, quantum-interpretations, observables However, any element $|\psi\rangle$ of the Hilbert space $\mathcal H$ of the relativistic theory can be written as a linear combination of products of $\phi_f$'s acting on the vacuum, $|\psi \rangle= \sum_n c_{i_1 \ldots i_n} \phi_{f_{i_1}} \ldots \phi_{f_{i_n}}|0\rangle$. According to the Reeh-Schlieder theorem, the supports of the functions $f_{i_k}$ in the previous sum may even be restricted to a common finite (open) region $\Omega$ of space-time, i.e. ${\rm supp} \, f_{i_k} \subset \Omega$ for all $i_k$.
{ "domain": "physics.stackexchange", "id": 100212, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-field-theory, operators, quantum-interpretations, observables", "url": null }
• According to point no. 2 we have to prove that $P_{k+1}$ is true based on the assumption that $P_k$ is true. The author shows us that $$P_{k}+T_{k+1}=P_{k+1}.$$ Is this the way to implement point no. 2? And, if it is, then how? I mean, in what way is proving $P_{k}+T_{k+1}=P_{k+1}$ equivalent to proving $P_{k+1}$ under the assumption? Feb 9 '16 at 10:41 • What does $P_k + T_{k+1}$ mean? $P_k$ is an equation. Does it make sense to write $(5^2 = 25) + 3$? I would never write math that way. But do you agree that if $a$ and $b$ are numbers and $a=b$, then $a$ and $b$ are the same number, so $a + 3$ is the same as $b + 3$, that is, $a+3=b+3$? That's what we mean by "add ___ to both sides of the equation." And yes, when you start with a true equation, then (the left side plus something) is equal to (the right side plus the same something), which is how the author derived $P_{k+1}$. Feb 9 '16 at 12:52 • @DavidK, first of all, adding something to an equation always means it is added to both of its sides. In other words, it can never mean adding something to only one side of the equation, because the two sides together make up the equation. So $P_k+T_{k+1}$ is not wrong anyway. It is not necessary for something to be right; when it is merely "not wrong", we can adopt it. Feb 9 '16 at 18:41
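The "add $T_{k+1}$ to both sides" step can be checked numerically. The thread does not show the actual identity being proved, so the following uses a hypothetical stand-in: $P_n$ is the classic claim $1 + 2 + \dots + n = n(n+1)/2$ and $T_n$ is the $n$-th term, $n$.

```python
# Hypothetical concrete instance (the thread doesn't show the actual identity):
# P_n is the claim 1 + 2 + ... + n = n(n+1)/2, and T_n is the n-th term, n.
def lhs(n):
    return sum(range(1, n + 1))

def rhs(n):
    return n * (n + 1) // 2

k = 10
# "P_k + T_{k+1}" means: add the term T_{k+1} to BOTH sides of the equation P_k.
assert lhs(k) + (k + 1) == rhs(k) + (k + 1)   # equality is preserved
# The new left side is exactly the left side of P_{k+1}, and the new right
# side simplifies to the right side of P_{k+1} -- which is the induction step.
assert lhs(k) + (k + 1) == lhs(k + 1)
assert rhs(k) + (k + 1) == rhs(k + 1)
```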
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.987946222258266, "lm_q1q2_score": 0.8565846932763611, "lm_q2_score": 0.8670357494949105, "openwebmath_perplexity": 166.37486381312206, "openwebmath_score": 0.8577460646629333, "tags": null, "url": "https://math.stackexchange.com/questions/1645967/what-do-we-actually-prove-using-induction-theorem/1646299" }
ros, gazebo, trajectory-msgs

while (ros::ok()) {
    for (int i = 0; i < 4; i++) {
        int j = i % 3;
        trajectory_msgs::JointTrajectoryPoint points_n;
        points_n.positions.push_back(j + 0.1);
        points_n.positions.push_back(j + 0.3);
        points_n.positions.push_back(j + 0.3);
        points_n.positions.push_back(j);
        traj.points.push_back(points_n);
        traj.points[i].time_from_start = ros::Duration(1.0, 0.0);
    }
    arm_pub.publish(traj);
    ros::spinOnce();
}
return 0;
}

When I launch my file.launch and rosrun my cpp file, both are connected on rqt_graph. But immediately, I get an error (on the launch terminal):

[ERROR] [1497596211.214814221, 9.889000000]: Trajectory message contains waypoints that are not strictly increasing in time.

Indeed, when I use the command rostopic echo /arm_controller/command, I get:

`positions: [0.0, 1.0, 0.0, 2.0] velocities: [] accelerations: [] effort: [] time_from_start: secs: 0 nsecs: 0`

The time_from_start is always 0. So I think I have a problem in my code, but I don't know where. Does anyone have an idea what is wrong? Thank you.

Originally posted by w.fradin on ROS Answers with karma: 38 on 2017-06-16 Post score: 1
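A plausible reading of the error (this is a sketch, not ROS code, and the fix is an assumption): the controller requires time_from_start to be strictly increasing across waypoints, but the loop assigns the same Duration(1.0) to every point. Staggering the stamp by the waypoint index would satisfy the monotonicity check, e.g. ros::Duration((i + 1) * 1.0):

```python
# Sketch of the timing constraint, in plain Python rather than ROS C++.
# Assigning the same duration to every waypoint (as the loop above does)
# violates "strictly increasing in time"; staggering by index fixes it.
def time_stamps(n_points, dt=1.0):
    # analogous to: traj.points[i].time_from_start = ros::Duration((i + 1) * dt)
    return [(i + 1) * dt for i in range(n_points)]

stamps = time_stamps(4)
assert all(b > a for a, b in zip(stamps, stamps[1:]))  # strictly increasing
```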
{ "domain": "robotics.stackexchange", "id": 28128, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, gazebo, trajectory-msgs", "url": null }
game, objective-c, timer

@interface DTCountdown : NSObject

-(id) initWithCount:(int)startCount;
-(void) update;
-(void) startCountdown;
-(void) pauseCountdown;
-(void) restartCountdown;
-(void) cancelCountdown;

@property CountdownState state;

@end

DTCountdown.m:

#import "DTCountdown.h"

@implementation DTCountdown {
    int _currentCount;
    int _startCount;
}

-(id) initWithCount:(int)startCount {
    self = [super init];
    if (self) {
        _startCount = startCount;
        _currentCount = _startCount;
        _state = CountdownNotStarted;
    }
    return self;
}

#pragma mark - Update Loop

-(void) update {
    switch (self.state) {
        case CountdownNotStarted:
            break;
        case CountdownCounting:
            [self doCountdown];
            break;
        case CountdownPaused:
            break;
        case CountdownFinished:
            break;
        default:
            break;
    }
}

-(void) doCountdown {
    if (_currentCount > 0) {
        _currentCount--;
    } else {
        self.state = CountdownFinished;
    }
}

-(void) startCountdown {
    self.state = CountdownCounting;
}

-(void) pauseCountdown {
    self.state = CountdownPaused;
}

-(void) restartCountdown {
    _currentCount = _startCount;
    self.state = CountdownCounting;
}

-(void) cancelCountdown {
    _currentCount = _startCount;
    self.state = CountdownNotStarted;
}
{ "domain": "codereview.stackexchange", "id": 7959, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "game, objective-c, timer", "url": null }
botany, plant-physiology, dendrology Title: Does anyone know the explanation for branches with different flower colors (see picture)? As you can see in the picture below, there's a branch with white-colored flowers, while the rest of the tree has pink flowers. I googled a number of questions, but it is a bit complicated to formulate as a search. Plus I have no background in botany or even biology. Many ornamental cherries are grown on a hardier root stock, that is, they are propagated by grafting onto hardy wild cherry saplings. This is because the ornamental variety will not produce offspring that are true to the parents, if they produce offspring at all. All are grafted plants, mainly on wild cherry (gean) rootstock. Such trees should be planted with the graft union above ground level. Seed, rarely produced, will give rise to new hybrids. This is a picture of an ornamental cherry of the pompom variety: My guess is that the tree in your picture sent a shoot off the main trunk under the graft site, resulting in a branch of native cherry, which is mostly white and a much simpler flower than the ornamental variety that was grafted onto it. Another possibility is that two different specimens were grafted onto the same stock (but since the white blossoms are very simple - like a wild cherry - I would guess the former occurred.) This is a picture of a wild cherry (gean): So you have the grafted ornamental cherry blooming alongside the rootstock cherry it was grafted to. The rootstock (although usually wild cherry) can vary, so I can't identify the rootstock with certainty, but I can explain what you're seeing. It's not rare, but it's not usually desirable. Some newer growers are grafting several varieties of cherries onto the same rootstock to intentionally produce a tree with different kinds and colors of flowers. See, for example, this artist's work. The Hamada Cherries
{ "domain": "biology.stackexchange", "id": 5545, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "botany, plant-physiology, dendrology", "url": null }
python, performance, python-3.x, multiprocessing

def replace(word):
    word = word.replace("e", "3").replace("a", "4").replace("o", "0")
    return word

def dates():
    dates = []
    dates_day = ["1", "2", "3", "4", "5", "6", "7", "8", "9",
                 "01", "02", "03", "04", "05", "06", "07", "08", "09",
                 "10", "11", "12", "13", "14", "15", "16", "17", "18", "19",
                 "20", "21", "22", "23", "24", "25", "26", "27", "28", "29",
                 "30", "31"]
    dates_month = ["1", "2", "3", "4", "5", "6", "7", "8", "9",
                   "01", "02", "03", "04", "05", "06", "07", "08", "09",
                   "10", "11", "12"]
    for days, month in itertools.product(dates_day, dates_month):
        dates.append(days + month)
    for years in range(1875, 2020):
        dates.append(years)
    return dates

def numbers(number_digits):
    i = 0
    digits_list = []
    while i <= int(number_digits):
        n = str(i)
        digits_list.append(n)
        i += 1
    print(digits_list)
    return digits_list

def readresult():
    end_time = datetime.datetime.now()
    print("Time taken: -{" + str(end_time - start_time) + "}-")
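The hand-written day/month lists above can be generated instead of typed out. A compact equivalent might look like this (an assumption about intent: unpadded 1-9 plus zero-padded tokens, plus years 1875-2019 as strings, which also fixes the inconsistency of appending years as ints):

```python
import itertools

def dates_compact():
    # unpadded single digits plus zero-padded 01..31 / 01..12, as in the lists above
    days = [str(d) for d in range(1, 10)] + [f"{d:02d}" for d in range(1, 32)]
    months = [str(m) for m in range(1, 10)] + [f"{m:02d}" for m in range(1, 13)]
    out = [d + m for d, m in itertools.product(days, months)]
    out += [str(y) for y in range(1875, 2020)]  # strings, unlike the original ints
    return out
```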
{ "domain": "codereview.stackexchange", "id": 33575, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, performance, python-3.x, multiprocessing", "url": null }
soft-question Title: Knots in the Ether Knots are 3 dimensional curves, which means that each component is just a finite number of stacked vibrations, and a knot is a collection of vibrations. I also heard there was an old theory that atoms were just different "knots" in the ether and molecules were linked knots. Finally, I know String Theory concerns vibrating strings. Do any of these pieces connect to each other, or am I just spewing nonsense? This part is nonsense: "Knots are 3 dimensional curves, which means that each component is just a finite number of stacked vibrations, and a knot is a collection of vibrations." The idea of an ether has been thoroughly disproven. However, spacetime does have shape and curvature according to General Relativity, and there are theoreticians who are still seriously considering the possibility that particles (not atoms, but the elementary particles of which atoms are comprised) are knot-like structures in spacetime. String theory allows for that possibility, but is not yet experimentally testable.
{ "domain": "physics.stackexchange", "id": 50418, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "soft-question", "url": null }
observational-astronomy, distances There are various calculators on the internet you can use to do these calculations - for example, this calculator tells me that for a flat universe with $\Omega_M=0.3$, $z=7$ corresponds to a light travel distance of 12.79 billion light years or a comoving distance of 28.3 billion light years. To answer your final question, if the redshift is measured to be 7, then that tells you it isn't a nearby object. Cosmological redshifts of this size are far bigger than any Doppler shift due to actual motion with respect to the Hubble flow (typically 100-1000 km/s). Thus a redshift of this size would always be dominated by cosmological expansion, and we conclude the object is very distant. Answer to the edited final part: QSOs have a different spectrum to a bog-standard galaxy (and are much more luminous). You need a spectrum to get a redshift (certainly one to 3 significant figures). The spectrum tells you it is a QSO. The distance is estimated from the redshift and an assumption about the values of cosmological parameters.
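Those calculator numbers can be roughly reproduced from the standard flat-ΛCDM integrals (assumed parameters here: $H_0 = 70$ km/s/Mpc, $\Omega_M = 0.3$, $\Omega_\Lambda = 0.7$; the calculator's exact parameters may differ slightly), using a simple trapezoidal integration rather than any cosmology library:

```python
import math

H0 = 70.0               # km/s/Mpc (assumed)
c = 299792.458          # km/s
om, ol = 0.3, 0.7

def E(z):
    # dimensionless Hubble parameter for a flat universe with matter + Lambda
    return math.sqrt(om * (1 + z) ** 3 + ol)

def integrate(f, a, b, n=20000):
    # plain trapezoidal rule; ample accuracy for a smooth integrand
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

z = 7.0
mpc_to_gly = 3.2616e-3  # 1 Mpc ~ 3.2616 million light years
# comoving distance D_C = (c/H0) * integral_0^z dz'/E(z'), in Gly
d_c = (c / H0) * integrate(lambda zz: 1 / E(zz), 0.0, z) * mpc_to_gly
# lookback time t_L = (1/H0) * integral_0^z dz'/((1+z') E(z')), in Gyr
hubble_time_gyr = 977.8 / H0
t_lb = hubble_time_gyr * integrate(lambda zz: 1 / ((1 + zz) * E(zz)), 0.0, z)
```

With these parameters d_c comes out near 28 Gly and t_lb near 12.7 Gyr, consistent with the 28.3 and 12.79 quoted above.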
{ "domain": "astronomy.stackexchange", "id": 6075, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "observational-astronomy, distances", "url": null }
ros, gazebo, ros-kinetic Title: In which version of Autoware is the Gazebo Catvehicle simulation supported? Before the 1.10.0 version I can see Gazebo Catvehicle packages in the Autoware/ros/src/simulation/ folder, but I tried hard to bring it up in the docker. Either the compiling was not successful or Gazebo crashed. I guess I was not using the right version of Autoware to replicate it, since the official Catvehicle package is only supported in ROS Indigo. Can anyone give me any suggestions to help me go through it? Can I continue to work on it under ROS Kinetic and Gazebo 7? Thank you Originally posted by xpharry on ROS Answers with karma: 5 on 2019-06-17 Post score: 0 Currently, Catvehicle is not used in the Autoware Gazebo simulator. Autoware Gazebo had not been supported with Kinetic for a while, but recently I added support for the Gazebo simulator starting from Autoware 1.11 (ROS Kinetic and Gazebo 7). https://github.com/autowarefoundation/autoware/pull/1930 https://github.com/autowarefoundation/autoware/tree/master/ros/src/simulation/gazebo_simulator This is the instruction video: https://youtu.be/IK7WgatieXc If you don't watch it on a big screen, the characters may be broken and hard to read. Originally posted by yukkysaito with karma: 41 on 2019-06-18 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by xpharry on 2019-06-18: Thanks. I am sorry that I cannot make out the characters with my devices.
{ "domain": "robotics.stackexchange", "id": 33206, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, gazebo, ros-kinetic", "url": null }
electrostatics, electromagnetic-radiation, electric-fields, refraction, boundary-conditions Title: Do electric field lines refract just like light? Do electric field lines approaching a boundary at an angle get refracted and change direction just like light rays do? I ask because, when discussing electric field lines and the flux associated with them, we do not consider a change in direction. When light (or an EM wave) reaches an interface, the direction of propagation changes. This is not the same as the direction of the electric field. The change in propagation direction is not related to, and does not imply, a change in the direction of the electric field. So, the logic of the question is flawed (non sequitur). However, electric field lines can change direction at a dielectric interface, and they are shown to do so in the diagrams treating the electric field in and around dielectrics. So, this part of the OP's question, "So why don't we consider their direction change?", is based on a false assumption. We do. Dielectric cylinder
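The "refraction" law for field lines at a dielectric interface follows directly from the boundary conditions: $E_t$ is continuous and $D_n = \epsilon E_n$ is continuous, which gives $\tan\theta_1/\tan\theta_2 = \epsilon_1/\epsilon_2$ (angles measured from the normal). A small numeric sketch:

```python
import math

def refracted_angle(theta1_deg, eps1, eps2):
    # tan(theta1)/tan(theta2) = eps1/eps2  =>  tan(theta2) = tan(theta1)*eps2/eps1
    t1 = math.radians(theta1_deg)
    return math.degrees(math.atan(math.tan(t1) * eps2 / eps1))

# field line entering a denser dielectric (vacuum -> eps_r = 4) at 30 degrees
theta2 = refracted_angle(30.0, 1.0, 4.0)
assert theta2 > 30.0  # bends AWAY from the normal, unlike Snell's law for rays
```

Note this tangent law is not Snell's law (which involves sines and governs the propagation direction of waves), which is exactly the distinction the answer draws.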
{ "domain": "physics.stackexchange", "id": 94695, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electrostatics, electromagnetic-radiation, electric-fields, refraction, boundary-conditions", "url": null }
filters, noise, estimation, linear-systems, adaptive-filters Where $ {\left\{ {g}_{k} \right\}}_{k = 1}^{\infty} $ is a minimum-phase LTI system, $ u \left[ n \right] $ is white noise and $ z \left[ n \right] $ is a perfectly predictable process. Since the question deals with colored noise, $ z \left[ n \right] = 0 $. Hence the process can be defined as follows:
{ "domain": "dsp.stackexchange", "id": 3507, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "filters, noise, estimation, linear-systems, adaptive-filters", "url": null }
graphs, computational-geometry, discrete-mathematics, triangulation the point $p_0$ is the leftmost highest point of $P$, the point $p_{-1}$ lies far enough to the right on a horizontal line below all points of $P$ to lie outside every circle defined by three points of $P$ and such that the clockwise order of the points in $P$ around $p_{-1}$ is equal to their lexicographic order, the point $p_{-2}$ lies far enough to the left on a horizontal line above all points of $P$ to lie outside every circle defined by three points of $P\cup \{p_{-1}\}$ and such that the counter-clockwise order of the points of $P\cup \{p_{-1}\}$ is equal to their lexicographic order. While it is possible to compute coordinates satisfying these requirements for $p_{-1}, p_{-2}$, those coordinates can be quite large, as triples that are almost collinear define a large circle. So, it is better to use the points only symbolically, by adapting the operations that may involve these points. The property that the (counter-)clockwise order around these points is equal to the lexicographic order is useful here. The book provides the details on how to implement this. For each point, find the face that contains it
{ "domain": "cs.stackexchange", "id": 21337, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "graphs, computational-geometry, discrete-mathematics, triangulation", "url": null }
javascript, beginner, jquery, animation

Title: Animation effects for menu items

I have pieced together some code that works for my project. However, it is pretty long, but self-similar. I am a jQuery beginner and I tried to shorten the code by using variables, but it hasn't worked so far. I guess I should use some for() statement. Could you help me simplify this code?

//apply the class "active" to the menu items whose div is in the viewport and animate
$('#home').bind('inview', function (event, visible) {
    if (visible == true) {
        $('#menu li:nth-child(1)').stop().animate({backgroundPosition: "(0 0)"}, {duration: 500});
        $('#menu li:nth-child(1) a').css({'color': 'white'});
        $('#nav li:nth-child(1)').stop().animate({opacity: 1}, 500);
    } else {
        $('#menu li:nth-child(1)').stop().animate({backgroundPosition: "(-200px 0)"}, {duration: 500});
        $('#menu li:nth-child(1) a').css({'color': 'black'});
        $('#nav li:nth-child(1)').stop().animate({opacity: 0.2}, 500);
    }
});

$('#referenzen').bind('inview', function (event, visible) {
    if (visible == true) {
        $('#menu li:nth-child(2)').stop().animate({backgroundPosition: "(0 0)"}, {duration: 500});
        $('#menu li:nth-child(2) a').css({'color': 'white'});
{ "domain": "codereview.stackexchange", "id": 2164, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, beginner, jquery, animation", "url": null }
navigation, mapping, map-server Originally posted by Charel with karma: 81 on 2014-01-11 This answer was ACCEPTED on the original site Post score: 4 Original comments Comment by bahaa on 2014-08-25: I faced the same problem, but I didn't find any reference to the .pgm file in the yaml file! Comment by tanghz on 2015-03-04: Yes, it has a reference to the .pgm file in the yaml file. Open the yaml file, and you will find these words in the first line: image: /home/...../name.pgm Comment by Elise on 2015-08-19: How exactly did you update it? Comment by Charel on 2015-08-20: Indeed, open the .yaml file with a text editor like gedit; in the first line you will find the path name.
{ "domain": "robotics.stackexchange", "id": 16630, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "navigation, mapping, map-server", "url": null }
c++, beginner, algorithm, vectors, stl

constexpr iterator insert(const iterator pos, size_type count, const_reference value) {
    size_type pos_index_position = std::distance(pos, begin());
    if (size() + count < capacity()) {
        if (pos == end()) {
            insert_end_strong_guarantee(value);
        } else {
            shift_and_construct(pos_index_position, value, count);
        }
    } else {
        do {
            if (m_capacity == 0)
                m_capacity = 1;
            m_capacity *= constants::realloc_factor;
        } while (m_capacity < m_size);
        reallocate_strong_guarantee(m_capacity);
        shift_and_construct(pos_index_position, value, count);
    }
    return count == 0 ? pos : iterator(m_vector + pos_index_position);
}

constexpr iterator erase(const iterator pos) {
    assert(pos <= end() && "Vector subscript out of range");
    size_type pos_index_position = std::distance(pos, begin());
    std::allocator_traits<allocator_type>::destroy(m_allocator, m_vector + pos_index_position);
    if constexpr (std::is_nothrow_move_constructible<Type>::value) {
        std::move(m_vector + pos_index_position + 1, m_vector + size(), m_vector + pos_index_position);
    } else {
        std::copy(m_vector + pos_index_position + 1, m_vector + size(), m_vector + pos_index_position);
    }
    --m_size;
    return (end() == pos) ? end() : iterator(m_vector + pos_index_position);
}
{ "domain": "codereview.stackexchange", "id": 40496, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, beginner, algorithm, vectors, stl", "url": null }
logistic-regression, numpy 0.48634422 0.46518872 0.49555082 0.48875876 0.47792453 0.49558934 0.50552906 0.47930206 0.47786953 0.48341893 0.48400097 0.495604 0.5004382 0.48308957 0.49870294 0.49171396 0.48956627 0.49867715 0.48242424 0.49040473 0.48722318 0.47119225 0.48163011 0.4944681 0.48279397 0.47103539 0.46620747 0.48312108 0.49024677 0.45447197 0.48296325 0.48324581 0.49109993 0.46619447 0.48073173 0.46168387 0.49489589 0.49996702 0.47812102 0.49040313 0.51421002 0.47901743 0.48369991 0.506677 0.50449191 0.49407345 0.50204443 0.46441209 0.48048013 0.49192639 0.50236999 0.51246231 0.49384251 0.50780198 0.48272206 0.5028832 0.46226317 0.48787584 0.50043628 0.49161108 0.46854375 0.50444538 0.4980808 0.49160058 0.48641553 0.47996963 0.47419629 0.47233925 0.48050781 0.50025151 0.48875876 0.47796746
{ "domain": "datascience.stackexchange", "id": 8854, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "logistic-regression, numpy", "url": null }
formal-languages, regular-languages, finite-automata Title: Is a particular string (e.g. '010') regular? If the alphabet is $\{0,1\}$, then is the string '010' regular? I think it is regular, because DFAs and regular languages are equivalent and this string has a DFA, but at the same time it seems to contradict the pumping lemma, which would imply it is not regular. Here is what I mean. For the pumping lemma, we first have to take $w$ only in the language (here the language is just '010'), so $w=$ '010'. I choose $k = 2$; then the new string $w=xy^kz$ has length more than 3, so it is definitely not '010', which means this language is not regular! What am I missing? Being regular is a property of languages, not of words. However, it seems that by '010' you really mean the language consisting only of the single word '010', that is, $\{ 010 \}$. This language, just like any other finite language, is regular. Where does your pumping lemma proof fail, then? Here is a complete statement of the pumping lemma. If $L$ is a regular language then there exists a constant $n$ such that for all words $w \in L$ of length at least $n$ there is a decomposition $w = xyz$, where $|xy| \leq n$ and $y \neq \epsilon$, such that $xy^iz \in L$ for all $i \geq 0$. Your argument is ignoring the constant $n$, which could be larger than the length of all words in $L$.
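To make the "this string has a DFA" point concrete, here is an explicit DFA for the language $\{010\}$ (the state names are my own; any symbol not on the path '010' goes to a dead state):

```python
# DFA accepting exactly the language {"010"} over the alphabet {0, 1}.
DEAD = "dead"
delta = {
    ("s0", "0"): "s1", ("s0", "1"): DEAD,
    ("s1", "1"): "s2", ("s1", "0"): DEAD,
    ("s2", "0"): "s3", ("s2", "1"): DEAD,
    ("s3", "0"): DEAD, ("s3", "1"): DEAD,
}

def accepts(w):
    state = "s0"
    for ch in w:
        state = delta.get((state, ch), DEAD)
    return state == "s3"  # the single accepting state

assert accepts("010")
assert not accepts("") and not accepts("01") and not accepts("0100")
```

Since a DFA exists, the language is regular, and the pumping lemma is not contradicted: its constant $n$ can simply be chosen larger than 3.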
{ "domain": "cs.stackexchange", "id": 13308, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "formal-languages, regular-languages, finite-automata", "url": null }
i = NIntegrate[x^x, {x, 0, 1}];
results = Table[
   N[Sum[Sum[((2 k - 1)/2^n)^((2 k - 1)/2^n), {k, 1, 2^(n - 1)}]/3^n,
      {n, 1, m}]] + (2/3)^m*i,
   {m, 0, 16}];
Column[FullForm /@ results]

To 12 digits, I guess the answer is $0.750372036605$. Here is another way to look at the error estimate. First, the sum used to approximate the integral $I$ is a midpoint sum. Thus, the associated error is of the order $4^{-m}$. Furthermore, that integral is multiplied by $(2/3)^m$, yielding a total absolute error of the order $1/6^m$. Note that $1/6^{16} \approx 3\times10^{-13}$, which makes the 12 digits of accuracy seem quite reasonable. We should even be able to use Richardson extrapolation to squeeze a couple more digits out of the process:

Column[
 Drop[FullForm[(6/5) #[[2]] - (1/5) #[[1]]] & /@
   Partition[results, 2, 1], 12]]

I guess a similar approach should work for $$\int_0^1 f(F_C(x)) dx,$$ whenever $f$ is a well-behaved, real function; in the case considered here, we have $f(x)=x^x$. Finally, it might be worth considering how much improvement we get for this analysis versus, say, just truncating the sum:
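As a quick cross-check of the `i` used above (in Python rather than Mathematica): a midpoint rule for $\int_0^1 x^x\,dx$ should reproduce the known value $\sum_{n\ge1} (-1)^{n+1} n^{-n} \approx 0.7834305107$, the "sophomore's dream":

```python
# Midpoint-rule sanity check of NIntegrate[x^x, {x, 0, 1}] ~ 0.7834305107.
def midpoint(f, a, b, n):
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

approx = midpoint(lambda x: x ** x, 0.0, 1.0, 1 << 16)
assert abs(approx - 0.7834305107) < 1e-5
```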
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9728307716151472, "lm_q1q2_score": 0.8268781183627358, "lm_q2_score": 0.8499711794579722, "openwebmath_perplexity": 1097.7367897224572, "openwebmath_score": 0.851910412311554, "tags": null, "url": "https://mathematica.stackexchange.com/questions/101728/numerically-evaluating-an-integral-related-to-cantors-staircase/101764" }
control-engineering, control-theory Title: Could someone explain how a compensator shifts the root locus? I'm reading a root-locus course note from MIT. However, I can't understand this part: how does adding a compensator with a pole at the origin move the plant pole from the right half-plane into the left half-plane? If I read it correctly (i.e. the plant has a single zero but no pole at the origin), then the issue I think is being solved is that the plant RHP pole we are trying to manipulate is unpaired. Thus its RL would travel along the real axis. However the zero at the origin, being its destination at infinite gain, effectively blocks this pole's RL from ever reaching the LHP, as would be required for stability. Adding an additional pole at the origin -- think of it instead as a RHP pole very close to the origin -- means you now have two RHP poles. So they will do the pole-splitting thing, where they converge on the positive real axis, then split up going up/down, and eventually get pulled into the LHP (provided there are enough net zeros in the LHP to attract them). The text actually describes direct cancellation, which is a more special case, in my understanding. Conceptually, for that I'd modify the above description to put the additional pole very slightly to the left of the origin. This results in traveling along the real axis again, and eventually a (very near) pole-zero cancellation in the CL. Whenever there is pole-zero cancellation, you have to sort of check the 'order of operations', because it has the potential to conceal internal instability. Also it's a little confusing, because the RL that is drawn only makes sense to me if there is a double zero at the origin.
{ "domain": "engineering.stackexchange", "id": 4331, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "control-engineering, control-theory", "url": null }
c#, sorting, winforms

Title: Sorting Visualizations - WinForms

Description

I think many, many people have seen the youtube video "15 Sorting Algorithms in 6 Minutes". When I saw it, I decided: "I can do that better", and I tried to get things done with the tools at my disposal back then: Windows Forms and C#. I added functionality to allow the pausing of sorting at any point in time. Therefore, I reasoned, my algorithms need to move step by step, and I implemented them accordingly. Since then much has changed, and yesterday I revisited my code from then, and it was gruesome. Not just bad, but horrible. So I went ahead and refactored what I could, removed superfluous stuff, decoupled implementations and algorithms, renamed methods to be more C#-ish, and realized I have become a Java programmer at heart. My method names got camelCased, there were no properties, and my interface wasn't prefixed with I at the start. Well, that aside, I finally arrived at something where I am facing difficulties reducing the coupling, and where I am unsure what the best practice is, so here goes.

Code

Windows Forms, oh how I love it. Here come the FormDesigner.cs and the Form.cs ;)

namespace SortVisualizations {
    partial class MainWindow {
        /// <summary>
        /// Required designer variable.
        /// </summary>
        private System.ComponentModel.IContainer components = null;

        /// <summary>
        /// Clean up any resources being used.
        /// </summary>
        /// <param name="disposing">true if managed resources should be disposed; otherwise, false.</param>
        protected override void Dispose(bool disposing) {
            if (disposing && (components != null)) {
                components.Dispose();
            }
            base.Dispose(disposing);
        }

        #region Windows Form Designer generated code
{ "domain": "codereview.stackexchange", "id": 8977, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, sorting, winforms", "url": null }
$$\underline{u}(t_0) = \underline{u_0}$$ this is an initial value problem. Assuming the entries of $\underline{b}(t)$ are continuous on $[t_0,T]$ for some $T > t_0$, Picard-Lindelöf provides a unique solution on that interval. If $A$ is diagonalisable, the solution of the homogeneous initial value problem is easy to compute. Let $$P^{-1} A P = \Lambda = \operatorname{diag}(\lambda_1, \dots, \lambda_n),$$ where $P = \begin{pmatrix} x_1 & \dots & x_n \end{pmatrix}$. Defining $\tilde{\underline{u}}:= P^{-1} \underline{u}(t)$ and $\tilde{\underline{u_0}} = P^{-1} \underline{u_0}$, the IVP reads $$\tilde{\underline{u}}'(t) = \Lambda \tilde{\underline{u}}(t), \; \tilde{\underline{u}}(t_0) = \tilde{\underline{u_0}} =: \begin{pmatrix} c_1 & \dots & c_n \end{pmatrix}^T.$$ These are simply $n$ ordinary, linear differential equations $$\tilde{u_j}'(t) = \lambda_j \tilde{u_j}(t), \; \tilde{u_j}(t_0) = c_j$$ for $j = 1, \dots, n$ with solutions $\tilde{u_j}(t) = c_j e^{\lambda_j(t-t_0)}$. We eventually retrieve $\underline{u}(t) = P \tilde{\underline{u}}(t)$. Example: We can write
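To make the recipe concrete, here is a small worked instance of the diagonalisation solution, with a matrix of my own choosing: $A = \begin{pmatrix} 0 & 1 \\ -2 & -3 \end{pmatrix}$ has eigenvalues $-1, -2$ with eigenvectors $(1,-1)^T$ and $(1,-2)^T$, so $\underline{u}(t) = P\,\mathrm{diag}(e^{-t}, e^{-2t})\,P^{-1}\underline{u_0}$ (taking $t_0 = 0$):

```python
import math

def solve(u0, t):
    # P = [[1, 1], [-1, -2]] (eigenvectors as columns), P^{-1} = [[2, 1], [-1, -1]]
    c1 = 2 * u0[0] + u0[1]      # coordinates of u0 in the eigenbasis: c = P^{-1} u0
    c2 = -u0[0] - u0[1]
    e1, e2 = math.exp(-t), math.exp(-2 * t)
    # u(t) = P @ (c1*e1, c2*e2)
    return (c1 * e1 + c2 * e2, -c1 * e1 - 2 * c2 * e2)

# cross-check against a small explicit Euler integration of u' = A u
u = (1.0, 0.0)
dt, T = 1e-4, 1.0
for _ in range(int(T / dt)):
    u = (u[0] + dt * u[1], u[1] + dt * (-2 * u[0] - 3 * u[1]))
assert all(abs(a - b) < 1e-3 for a, b in zip(u, solve((1.0, 0.0), T)))
```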
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9702399069145609, "lm_q1q2_score": 0.8064081493286263, "lm_q2_score": 0.8311430436757312, "openwebmath_perplexity": 362.63873754561075, "openwebmath_score": 0.6874520778656006, "tags": null, "url": "https://math.stackexchange.com/questions/1072459/what-are-some-applications-of-elementary-linear-algebra-outside-of-math/1072463" }
c++, algorithm, tree, coordinate-system

Title: QuadTree C++ Implementation

I recently created a QuadTree implementation in C++. It varies slightly from most implementations. Instead of storing elements, it only organizes elements. This allows programmers to insert elements into the QuadTree and maintain access to those elements outside the QuadTree. Nodes will only subdivide if an element is inserted and that element has a different position than other elements in that node. This allows multiple elements of the same position to exist within the QuadTree. I am hoping to get some feedback on the code. I haven't had many people criticize my code before, so I am really looking for ways that I could improve it. Anyways, I've also put my code up on GitHub: https://github.com/bnpfeife/quadtree

quadtree.hpp:

#pragma once

#include <vector>

namespace bnp {

// When storing elements in the QuadTree, users may want to use unique/custom
// objects. To aid the QuadTree with interpreting different types of data,
// implement the qt_point_adapter on your objects.
class qt_point_adapter {
public:
    virtual float get_x() const = 0; // Retrieves x position
    virtual float get_y() const = 0; // Retrieves y position

    // Determines if point equals another
    virtual bool equals(const qt_point_adapter & point) const;
};
{ "domain": "codereview.stackexchange", "id": 22444, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, algorithm, tree, coordinate-system", "url": null }
data-mining, data, visualization, data-analysis Title: Compare two trends with a big difference in absolute value I'm studying the Spotify streams trend for different nations in order to show the impact of the pandemic on music listeners. So far I retrieved my data and plotted it. Obviously, since the various nations have different numbers in population, Spotify users, etc., this graphic doesn't tell much. So I decided to scale every curve, dividing its values by its peak. So for example, the US has a maximum value of 3.50 million streams; I normalized that curve with that value and I did the same for all the other states (with their maximum value), and I obtained this: Could this be a good approach? In general, which approach should I use if I want to compare different curves which have a really big difference in absolute values? EDIT: In the end I normalized my data using a z-score normalization for each single line. So basically I computed the mean and standard deviation for each state and then I normalized the single state with its mean and std. This is the resulting plot: Is this a good approach? Can I now compare the different trends and conclude that there's an overall decrease in the period between 10 and 15 weeks? I think it would be better to use a standard scaler that removes the mean and divides by the standard deviation. See here for more info and an implementation using sklearn. Why? At least you should be aware that dividing by the maximum could hide smaller effects. In the case where you have an outlier that has a very high value, you would lose the small changes in the corresponding curve. Moreover, you might not compare the same changes between all the curves. Edit On the question of when to use the standard scaler vs the min-max scaler, I think a good start is the sklearn preprocessing page that explains both in depth. Then in this post, @Sammy pointed to the "Python Machine Learning" book by Raschka.
The author provides some guidance on page 111 when to normalize (min-max scale) and when to standardize data that I requote here:
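The per-country z-score normalization described in the edit can be sketched in a few lines of plain Python. The country names and stream counts below are made up for illustration; the point is only that each series is standardized with its own mean and standard deviation, so curves with very different absolute levels land on a common scale.

```python
def zscore(series):
    """Standardize a list of values to mean 0 and (population) std 1."""
    n = len(series)
    mean = sum(series) / n
    var = sum((v - mean) ** 2 for v in series) / n
    std = var ** 0.5
    return [(v - mean) / std for v in series]

# Hypothetical weekly stream counts: one large market, one an order of
# magnitude smaller — incomparable in absolute terms.
streams = {
    "US": [3.1e6, 3.4e6, 3.5e6, 2.9e6, 2.6e6],
    "IT": [4.2e5, 4.5e5, 4.6e5, 3.8e5, 3.5e5],
}

# After standardization, each curve has mean 0 and std 1, so the shapes
# of the trends can be compared directly.
normalized = {country: zscore(vals) for country, vals in streams.items()}
```

In practice `sklearn.preprocessing.StandardScaler` does the same computation column-wise, as the answer suggests.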
{ "domain": "datascience.stackexchange", "id": 8174, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "data-mining, data, visualization, data-analysis", "url": null }
Now, let us include the Gender variable and run this model again. fit.lm2 <- lm(Balance ~ Income + Gender) summary(fit.lm2) Call: lm(formula = Balance ~ Income + Gender) Residuals: Min 1Q Median 3Q Max -1.05399 -0.30172 -0.02495 0.37714 0.84589 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 9.254e+00 5.273e-01 1.755e+01 2.87e-08 *** Income 5.000e+00 5.669e-06 8.820e+05 < 2e-16 *** Gender 3.310e+00 3.398e-01 9.741e+00 4.45e-06 *** You can now clearly see how the intercept term is affected by the introduction of the Gender variable. In the first case, the model was estimated to be $Balance = 11.37 + 5 * Income$ for everyone. While in the second case, the model became $Balance = 9.25 + 5 * Income$ for Males and $Balance = 12.56 + 5 * Income$ for Females. By introducing the Gender term the model intercept changed from 11.37 for everyone to 9.25 for Males and 12.56 for Females, so it indeed has an effect on both males and females. Hope that clarifies your question. • Thanks :) yes your answer has clarified my ambiguities. Unfortunately I can not cast you a vote since i don't have 15 reputation. But thanks again :) – Bakhtawar Jan 8 '17 at 12:27
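The intercept-shift mechanics can be seen without R at all. Here is a toy Python sketch (not the original R session): with noise-free data generated from $Balance = 9.25 + 5 \cdot Income + 3.31 \cdot Gender$, three observations are enough to recover the coefficients by simple elimination, and the female intercept comes out as the male intercept plus the dummy coefficient.

```python
# True coefficients, mirroring the fitted model in the text (assumed exact,
# i.e. no noise, so elimination recovers them perfectly).
b0_true, b1_true, b2_true = 9.25, 5.0, 3.31

def balance(income, gender):
    return b0_true + b1_true * income + b2_true * gender

# Three sample observations: two males at different incomes, one female.
y1 = balance(1.0, 0)   # male,   income 1
y2 = balance(2.0, 0)   # male,   income 2
y3 = balance(1.0, 1)   # female, income 1

b1 = y2 - y1             # slope from the two male points (incomes differ by 1)
b0 = y1 - b1 * 1.0       # male intercept
b2 = y3 - b0 - b1 * 1.0  # gender coefficient = vertical shift for females

male_intercept = b0            # 9.25
female_intercept = b0 + b2     # 9.25 + 3.31 = 12.56, matching the text
```

With real (noisy) data the same decomposition falls out of the least-squares fit; the dummy variable only shifts the intercept, never the slope.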
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9728307692520259, "lm_q1q2_score": 0.8452214276829425, "lm_q2_score": 0.8688267830311354, "openwebmath_perplexity": 2392.202620678092, "openwebmath_score": 0.7352732419967651, "tags": null, "url": "https://stats.stackexchange.com/questions/255123/need-clarification-on-using-qualitative-predictors-in-the-regression-model" }
c# Title: Overlapping rectangles - multidimensional arrays Task: You are given two rectangles. For each rectangle you are given its bottom-left and top-right points. Check if they overlap. If they do, check for bottom-left and top-right points of the overlapping area. I decided to solve this by using multidimensional arrays. Using objects might make it more transparent, though. Is there anything that I could change (especially in terms of task approach, code bugs, code style and performance)? I came up with the following code: Method which checks if the given rectangles overlap

public static bool Overlapping(int[][] firstRectangle, int[][] secondRectangle)
{
    bool XOverlapping = secondRectangle[0][0] <= firstRectangle[1][0] && firstRectangle[0][0] <= secondRectangle[1][0];
    bool YOverlapping = secondRectangle[0][1] <= firstRectangle[1][1] && firstRectangle[0][1] <= secondRectangle[1][1];
    if (XOverlapping && YOverlapping)
    {
        return true;
    }
    else
    {
        return false;
    }
}
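The same interval-intersection test translates to any language, and it also yields the second part of the task almost for free: the overlap region's bottom-left corner is the component-wise max of the two bottom-left corners, and its top-right corner is the component-wise min of the two top-right corners. A language-agnostic sketch in Python (rectangles as `((x1, y1), (x2, y2))` corner pairs):

```python
def overlap_region(r1, r2):
    """Return the overlap rectangle, or None if the rectangles don't overlap."""
    (ax1, ay1), (ax2, ay2) = r1
    (bx1, by1), (bx2, by2) = r2
    # Overlap exists iff the projections intersect on both axes
    # (same <= comparisons as the C# method, so touching edges count).
    if ax1 <= bx2 and bx1 <= ax2 and ay1 <= by2 and by1 <= ay2:
        return ((max(ax1, bx1), max(ay1, by1)),
                (min(ax2, bx2), min(ay2, by2)))
    return None
```

Note that with `<=`, two rectangles sharing only an edge or a corner "overlap" in a degenerate region of zero area; switch to `<` if that should not count.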
{ "domain": "codereview.stackexchange", "id": 29739, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#", "url": null }
ros, stream, video, ardrone if __name__ == '__main__': print('Function call started ') main(sys.argv) rostopic hz : rostopic hz /ardrone/image_raw subscribed to [/ardrone/image_raw] average rate: 31.109 min: 0.011s max: 0.051s std dev: 0.00873s window: 31 average rate: 31.077 min: 0.011s max: 0.059s std dev: 0.00947s window: 62 average rate: 30.875 min: 0.011s max: 0.061s std dev: 0.00909s window: 92 average rate: 30.508 rosgraph is : /ardrone/image_raw/compressedDepth/set_parameters /ardrone_driver/set_logger_level /ardrone/front/image_raw/compressedDepth/set_parameters /ardrone/setledanimation /ardrone/image_raw/theora/set_parameters /image_converter_3034_1485929043890/set_logger_level /ardrone/front/image_raw/compressed/set_parameters /ardrone/front/set_camera_info /ardrone/setflightanimation /ardrone/setrecord /image_converter_3034_1485929043890/get_loggers /rosout/set_logger_level /ardrone/image_raw/compressed/set_parameters /ardrone_driver/get_loggers /ardrone/togglecam /ardrone/flattrim /ardrone/bottom/image_raw/theora/set_parameters /ardrone/bottom/set_camera_info /ardrone/bottom/image_raw/compressed/set_parameters /ardrone/bottom/image_raw/compressedDepth/set_parameters /rosout/get_loggers /ardrone/setcamchannel /ardrone/front/image_raw/theora/set_parameters
{ "domain": "robotics.stackexchange", "id": 26791, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, stream, video, ardrone", "url": null }
fft, fourier-transform

# peak parameters
O <- 0.3 # Frequency values from 0->1
R <- 0.04 # Decay in arbitrary units
z <- (f-O)/R
# The original lorentzian frequency
ff <- complex(re = 1, im = z)/(1 + z^2)
# creating the time domain signal
ftideal <- exp(-R*t)*exp(complex(i = (O)*2*pi*t))
unscaled <- (fft(ftideal))
scaled <- unscaled - min(Re(unscaled))
plot(f, Re(ff), type = 'l')
lines(f, Re(scaled), type = "l", col = 'red')

First of all, take a look at this very related question and its answer(s). Second, your analytic solution is wrong, so it's no surprise that you don't see the correct result. The time domain function $$x(t)=e^{j2\pi f_0t}e^{-Rt}\tag{1}$$ has the following Fourier transform: $$X(f)=\frac{1}{R}\frac{1-j\frac{2\pi(f-f_0)}{R}}{1+\left(\frac{2\pi(f-f_0)}{R}\right)^2}\tag{2}$$ Also, scaling should be multiplicative and not additive (unless you're in the logarithmic domain), and you should also not only look at the real parts of complex-valued functions. As explained in this answer, approximating the CTFT by the DFT usually results in two types of errors: the truncation error (due to truncation of the time domain function), and the aliasing error (due to sampling the time domain function). These errors can be made small by choosing a large sampling frequency and a large DFT length. Below is an example code in Octave/Matlab showing how the CTFT of the given function can be approximated by the DFT: F0 = 0.3; R = 0.04;
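The answer's Octave/Matlab snippet is truncated here, but the approximation idea itself is easy to demonstrate in pure Python. This sketch (my illustration, not the original code; parameter names `F0` and `R` follow the answer) approximates the CTFT of $x(t)=e^{j2\pi F_0 t}e^{-Rt}$ at a single frequency by a Riemann sum, and at the peak $f=F_0$ equation (2) gives the analytic value $X(F_0)=1/R=25$:

```python
import cmath

F0 = 0.3
R = 0.04
dt = 0.05        # sampling step: small vs. 1/F0, controls the aliasing error
T = 200.0        # truncation length: exp(-R*T) ~ 3e-4, controls truncation error
N = int(T / dt)

def ctft_approx(f):
    """Riemann-sum approximation of X(f) = integral of x(t) exp(-j*2*pi*f*t) dt."""
    total = 0j
    for n in range(N):
        t = n * dt
        x = cmath.exp(complex(0, 2 * cmath.pi * F0 * t)) * cmath.exp(-R * t)
        total += x * cmath.exp(complex(0, -2 * cmath.pi * f * t))
    return total * dt

peak = ctft_approx(F0)   # should be close to the analytic value 1/R = 25
```

Shrinking `dt` and growing `T` drives the result toward the analytic value, which is exactly the truncation/aliasing trade-off the answer describes.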
{ "domain": "dsp.stackexchange", "id": 7757, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "fft, fourier-transform", "url": null }
import numpy as np

# example 6: function with oscillation
def e6(x):
    valf = -np.cos(np.absolute(x) + np.pi/4) + 0.8
    valdf = np.sin(np.absolute(x) + np.pi/4)*np.sign(x)
    return (valf, valdf)

# example 8: arctangent
def e8(x):
    valf = np.arctan(x)
    valdf = 1/(x**2 + 1)
    return (valf, valdf)

# example 9: function with many roots
def e9(x):
    valf = (x+2)*(x+1.5)*(x-0.5)*(x-2)
    valdf = (x+1.5)*(x-0.5)*(x-2) + (x+2)*(x-0.5)*(x-2) + (x+2)*(x+1.5)*(x-2) + (x+2)*(x+1.5)*(x-0.5)
    return (valf, valdf)

# example 11: function with many roots, one multiple
def e11(x):
    valf = (x+2)*(x+1.5)**2*(x-0.5)*(x-2)
    valdf = (x+1.5)**2*(x-0.5)*(x-2) + 2*(x+2)*(x+1.5)*(x-0.5)*(x-2) + (x+2)*(x+1.5)**2*(x-2) + (x+2)*(x+1.5)**2*(x-0.5)
    return (valf, valdf)

# define parameters for plotting - adjust these as needed for your function
# define left and right boundaries for plotting
# for overview plots:
interval_left = -2.1
interval_right = 2.1
interval_down = -25
interval_up = 60
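Each example returns a `(f(x), f'(x))` pair, which is exactly what a Newton-Raphson step needs. A minimal driver (my sketch, not from the original page) applied to a plain-Python copy of example 9, which has simple roots at $-2$, $-1.5$, $0.5$, and $2$:

```python
def f9(x):
    """Plain-Python version of example 9 (no numpy), returning (f, f')."""
    valf = (x + 2) * (x + 1.5) * (x - 0.5) * (x - 2)
    valdf = ((x + 1.5) * (x - 0.5) * (x - 2) + (x + 2) * (x - 0.5) * (x - 2)
             + (x + 2) * (x + 1.5) * (x - 2) + (x + 2) * (x + 1.5) * (x - 0.5))
    return (valf, valdf)

def newton(func, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson iteration: x <- x - f(x)/f'(x) until |f(x)| < tol."""
    x = x0
    for _ in range(max_iter):
        valf, valdf = func(x)
        if abs(valf) < tol:
            break
        x = x - valf / valdf
    return x

root = newton(f9, 1.0)   # starting at 1.0 converges to the simple root 0.5
```

Starting points near $-1.5$ in example 11 behave differently: the root there is a double root, so convergence degrades from quadratic to linear — which is presumably the point of that example.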
{ "domain": "computingskillset.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9869795125670754, "lm_q1q2_score": 0.8022833846667089, "lm_q2_score": 0.8128673133042217, "openwebmath_perplexity": 544.3014051204011, "openwebmath_score": 0.8252004384994507, "tags": null, "url": "https://computingskillset.com/solving-equations/highly-instructive-examples-for-the-newton-raphson-method/" }
physiology, cell-biology Title: Polarized epithelium and localization of ion channels I'm trying to learn more about polarized epithelial cells of the gut. I am familiar with classic brush border transporters localized to the apical membrane to facilitate nutrient absorption. I am wondering though, where are ion channels located? I would guess basolaterally since they would be exposed to the extracellular space. I would appreciate a primary reference showing the location of voltage-gated channels in particular as I could not find them myself. Well, that's a first for me. I wouldn't have guessed gut cells would have voltage-gated channels. This article describes voltage-gated sodium channels on both the luminal and basolateral membranes: Barshack, I., Levite, M., Lang, A., Fudim, E., Picard, O., Ben Horin, S., & Chowers, Y. (2008). Functional voltage-gated sodium channels are expressed in human intestinal epithelial cells. Digestion, 77(2), 108-117. http://www.ncbi.nlm.nih.gov/pubmed/18391489
{ "domain": "biology.stackexchange", "id": 2159, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "physiology, cell-biology", "url": null }
javascript, programming-challenge, array, matrix Ouch. For some reason, it insists on circling back up to 15 at the end. I can't for the life of me figure out why. I've already spent a good hour and a bit on this though, and didn't want the alternative solution to go to waste. This solution was mostly how I'd approach it in Clojure, and it doesn't translate 100% to Javascript given the bulk in some places. You still may be able to draw inspiration from it though. My primary goal here was to reduce the redundancy however I could, not adhere to idiomatic Javascript (as I don't write JS very often honestly). Oh, and I increased indentation to use four-spaces, as I find that it's more readable. There doesn't seem to be a good consensus on what should be used though, so take that with a grain of salt.
{ "domain": "codereview.stackexchange", "id": 33798, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, programming-challenge, array, matrix", "url": null }
Let $A=(a_1, a_2, \ldots )$, $B=(b_1, b_2, \ldots )$ be two countable sets. We define $f:\mathbb{N}\times\mathbb{N}\rightarrow A\times B$ with $f(n,m)=(a_n, b_m)$. This map is surjective, isn't it? We have that $\mathbb{N}\times\mathbb{N}$ is countable, right? Then it follows that $A\times B$ is also countable. Is this part correct? Now we have to use induction to show that $\displaystyle{\bigcup_{n\in \mathbb{N}}M_n}$ is countable: Base case: For $n=1$ it holds since then we have just one set, which is countable. Inductive Hypothesis: We suppose that the statement holds for $n=k$. Inductive step: We want to show that it holds for $n=k+1$. We have that $\bigcup_{n=1}^{k+1}M_n=\bigcup_{n=1}^{k}M_n\cup M_{k+1}$. The first one $\bigcup_{n=1}^{k}M_n$ is countable from the inductive hypothesis and $M_{k+1}$ is also countable. So we have a union of two countable sets, which is countable because of the above argument. Is everything correct?
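The countability of $\mathbb{N}\times\mathbb{N}$ used above can be made completely concrete with the Cantor pairing function $\pi(n,m)=\frac{(n+m)(n+m+1)}{2}+m$, a bijection $\mathbb{N}\times\mathbb{N}\to\mathbb{N}$ that enumerates pairs diagonal by diagonal. A quick computational sanity check (illustrative only, not part of the original argument):

```python
def pair(n, m):
    """Cantor pairing function: a bijection from N x N to N,
    walking the diagonals n + m = 0, 1, 2, ..."""
    return (n + m) * (n + m + 1) // 2 + m

# The pairs on the first D+1 diagonals (all n, m >= 0 with n + m <= D)
# should hit exactly the first (D+1)(D+2)/2 natural numbers, each once.
D = 10
values = {pair(n, m) for n in range(D + 1) for m in range(D + 1 - n)}
```

Injectivity plus surjectivity onto each initial segment is exactly why a doubly indexed family can be flattened into a single sequence.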
{ "domain": "mathhelpboards.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.984336353126336, "lm_q1q2_score": 0.8001348556781661, "lm_q2_score": 0.8128673223709251, "openwebmath_perplexity": 311.1205138429175, "openwebmath_score": 0.9657533764839172, "tags": null, "url": "https://mathhelpboards.com/threads/countable-sets.25093/" }
perl # @ARGV is a list of packages we need to find all dependencies for and is the roots of our dependency graph foreach my $package (@ARGV) { $Agraph->add_vertex($package); } # A list of packages we have already gotten dependencies for so we don't check twice my $checked = []; # Loop until our graphs stop changing between dependency runs while ( $Agraph ne $Bgraph ) { $Bgraph = $Agraph->deep_copy(); # Check every single vertex in our graph my @vertlist = $Agraph->vertices; foreach my $package (@vertlist) { # If we haven't checked this package before, get it added to the graph with all of its dependencies if (! grep( /^\Q$package\E$/, @{ $checked } )) { my $deplist = get_deps($package); foreach my $dep (@{ $deplist }) { $Agraph->add_edge($package, $dep); } # Add this package to our list of already checked packages so we don't waste time push(@{ $checked }, $package); } } } my @fulldeplist = sort $Agraph->vertices; foreach my $package (@fulldeplist) { print "$package\n"; } I like to use this line for warnings, but it can be too restrictive for some people's tastes: use warnings FATAL => 'all'; FATAL
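The Perl loop above keeps re-scanning the whole graph until two successive snapshots agree. The same transitive closure can be computed with a worklist that visits each package exactly once — a sketch in Python (my illustration; `get_deps` is mocked with a dictionary, whereas the original presumably queries a package manager):

```python
def all_dependencies(roots, get_deps):
    """Return the set of roots plus everything reachable through get_deps,
    visiting each package at most once (no repeated full-graph rescans)."""
    seen = set()
    worklist = list(roots)
    while worklist:
        package = worklist.pop()
        if package in seen:
            continue
        seen.add(package)
        worklist.extend(get_deps(package))
    return seen

# Hypothetical dependency table standing in for get_deps().
deps = {
    "app": ["libfoo", "libbar"],
    "libfoo": ["libc"],
    "libbar": ["libc", "libbaz"],
    "libbaz": [],
    "libc": [],
}

full = sorted(all_dependencies(["app"], lambda p: deps.get(p, [])))
```

The `seen` set plays the role of `$checked`, but as a hash-backed set the membership test is O(1) instead of the O(n) `grep` over a list.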
{ "domain": "codereview.stackexchange", "id": 44659, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "perl", "url": null }
c, strings, memory-management Title: String appending function (location-independent) Very useful operation that hasn't been merged into a function.. until now. This is supposed to insert a substring into a previously dynamically allocated source string.

void strapp (char *source_offset, int position, char *appendage)
{
    size_t appendage_size = strlen(appendage) + 1;
    char copy[strlen(source_offset)];
    strcpy(copy, source_offset);
    source_offset = realloc(source_offset, strlen(source_offset) + appendage_size);
    memcpy(&source_offset[position], appendage, strlen(appendage));
    sprintf(source_offset + (position + strlen(appendage)), &copy[position]);
}

USAGE:

int main(void)
{
    char *str1 = malloc(11 + 1);
    sprintf(str1, "Hello World");
    strapp(str1, 5, " Horrible");
    printf(str1);
    free(str1);
    return 0;
}

Produces the output: Hello Horrible World This is more of a string-inserting function than a string-appending function, and should be renamed as such. When calling realloc(…), the string could very well move to a different memory location. The caller of strapp() has no way of knowing where the new string is, though. Furthermore, if the string got moved, and the caller continues to use the string as if it were still at source_offset, then that's a memory-use-after-free bug. There are two possible remedies:

/**
 * s points to the address of the old and new string.
 */
string_insert(char **s, int insertion_pt, const char *to_insert)

or

/**
 * Returns the location of the new string, which may or may not be at
 * the original s. Callers must assume that the string at s may be
 * invalid after this call.
 */
char *string_insert(char *s, int insertion_pt, const char *to_insert)
{ "domain": "codereview.stackexchange", "id": 11949, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c, strings, memory-management", "url": null }
1. Write a payoff matrix for Player I.
2. Find the optimal strategies for each player and the value of the game.

### Exercise 3

A mayor of a large city is thinking of running for re-election, but does not know who his opponent is going to be. It is now time for him to take a stand for or against abortion. If he comes out against abortion rights and his opponent is for abortion, he will increase his chances of winning by 10%. But if he is against abortion and so is his opponent, he gains only 5%. On the other hand, if he is for abortion and his opponent against, he decreases his chance by 8%, and if he is for abortion and so is his opponent, he decreases his chance by 12%.

1. Write a payoff matrix for the mayor.
2. Find the optimal strategies for the mayor and his opponent.

#### Solution

1. $\begin{bmatrix} .05 & .10 \\ -.08 & -.12 \end{bmatrix}$
2. The optimal strategy for the mayor is $\begin{bmatrix} 1 & 0 \end{bmatrix}$ and for his opponent is $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$. In other words, both candidates should oppose abortion rights.

### Exercise 4

A man accused of a crime is not sure whether anybody saw him do it. He needs to make a choice of pleading innocent or pleading guilty to a lesser charge. If he pleads innocent and nobody comes forth, he goes free. However, if a witness comes forth, the man will be sentenced to 10 years in prison. On the other hand, if he pleads guilty to a lesser charge and nobody comes forth, he gets a sentence of one year and if a witness comes forth, he gets a sentence of 3 years.

1. Write a payoff matrix for the accused.
2. If you were his attorney, what strategy would you advise?

## NON-STRICTLY DETERMINED GAMES

### Exercise 5

Determine the optimal strategies for both the row player and the column player, and find the value of the game.
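For strictly determined games like Exercise 3, the optimal pure strategies come from a saddle point: an entry that is both the maximum of the row minima (the row player's maximin) and the minimum of the column maxima (the column player's minimax). A short sketch (my illustration, not from the original text), checked against the mayor's payoff matrix:

```python
def saddle_point(matrix):
    """Return (row, col, value) of a saddle point, or None if the game is
    not strictly determined. Row player maximizes, column player minimizes."""
    row_minima = [min(row) for row in matrix]
    col_maxima = [max(col) for col in zip(*matrix)]
    maximin = max(row_minima)
    minimax = min(col_maxima)
    if maximin != minimax:
        return None  # no saddle point: mixed strategies are needed
    r = row_minima.index(maximin)
    c = col_maxima.index(minimax)
    return (r, c, matrix[r][c])

# The mayor's payoff matrix from Exercise 3 (rows: mayor against / for,
# columns: opponent against / for).
mayor = [[0.05, 0.10],
         [-0.08, -0.12]]
result = saddle_point(mayor)   # saddle at row 0, column 0: both oppose
```

The saddle value 0.05 is the value of the game, matching the solution's pure strategies $[1\ 0]$ for both players.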
{ "domain": "cnx.org", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9525741227833249, "lm_q1q2_score": 0.8188934481679507, "lm_q2_score": 0.8596637559030338, "openwebmath_perplexity": 11980.602721686899, "openwebmath_score": 0.7934074997901917, "tags": null, "url": "http://cnx.org/content/m38665/latest/" }
performance, computational-geometry, matlab Title: Removing undesired points from a mesh I have a 3D data set made of a certain number of points (x,y,z) that cover a certain region of space. Using the scatteredInterpolant object I can interpolate this data set over a grid to produce a rectangular mesh. Note that the mesh may extend to regions that are not defined by the data set; in fact, after the mesh generation I need to remove the part of the mesh that is extrapolated away from the data set (replacing its values with NaN, for example) in order to retain only the mesh generated between the data points. I came up with the following MATLAB script to solve this problem in a very naive way. Considering a single mesh point (xq,yq), I evaluate the minimum distance between this point and the data set; if this distance is greater than a certain threshold, then the corresponding interpolated value (zq) is set to NaN. %% Data set (x,y,z) x = [3 3 3 4 4 4 4 4 5 5 5 5 5]'; y = [1 2 3 0 1 2 3 4 0 1 2 3 4]'; z = [.5 .505 .51 .51 .51 .51 .51 .515 .535 .528 .53 .53 .53]'; %% Interpolant F = scatteredInterpolant(x,y,z,'natural'); %% Mesh generation (xq,yq,zq) delta = 0.5; ti = 0:delta:5; si = 0:delta:4; [xq,yq] = meshgrid(ti,si); zq = F(xq,yq); %% Replacing undesired values with NaN thresh = 1; n = length(ti) * length(si); m = length(x); xqcol = reshape(xq,[n,1]); yqcol = reshape(yq,[n,1]); zqcol = reshape(zq,[n,1]); tab = [xqcol yqcol zqcol];
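The masking idea is independent of MATLAB. Here is a pure-Python sketch of the same naive approach (my illustration, O(grid × data) like the original loop; a k-d tree would speed up the nearest-neighbour query): a grid value is replaced by `None` — standing in for MATLAB's `NaN` — when the grid point is farther than `thresh` from every data point.

```python
def mask_far_points(grid_xy, data_xy, values, thresh):
    """Replace values at grid points farther than thresh from all data points."""
    masked = []
    for (xq, yq), zq in zip(grid_xy, values):
        # Squared distances avoid a sqrt per data point.
        d2min = min((xq - x) ** 2 + (yq - y) ** 2 for x, y in data_xy)
        masked.append(zq if d2min <= thresh ** 2 else None)
    return masked

# Tiny made-up example: one grid point sits inside the data cloud,
# the other is clearly extrapolated territory.
data = [(3, 1), (3, 2), (4, 1), (4, 2)]
grid = [(3.5, 1.5), (0.0, 0.0)]
vals = [0.51, 0.42]
out = mask_far_points(grid, data, vals, thresh=1.0)
```

In MATLAB the analogous vectorized speed-up is `pdist2` (or `knnsearch` with a KD-tree), which removes the per-point loop entirely.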
{ "domain": "codereview.stackexchange", "id": 25221, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "performance, computational-geometry, matlab", "url": null }
# Why choose sets to be the primitive objects in mathematics rather than, say, tuples? Sets are defined in such a way that $$\{a,a\}$$ is the same as $$\{a\}$$, and $$\{a,b\}$$ is the same as $$\{b,a\}$$. By contrast, the ordered pair $$(a,a)$$ is distinct from $$(a)$$, and $$(a,b)$$ is distinct from $$(b,a)$$. Intuitively, it would seem useful to draw a distinction between two collections if they are ordered differently, or if one collection has a different number of copies of an element to the other. For instance, this would mean that the collection of prime factors of $$6$$ would be different to that of $$12$$. However, it is the set, rather than the tuple, that is chosen as the primitive object. Why is it useful for the foundations of mathematics that sets have very little "structure", and would there be any difficulties in choosing tuples to be the primitive object instead?
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9811668695588648, "lm_q1q2_score": 0.8021769850769469, "lm_q2_score": 0.817574471748733, "openwebmath_perplexity": 379.0716335370273, "openwebmath_score": 0.7237902879714966, "tags": null, "url": "https://math.stackexchange.com/questions/4213876/why-choose-sets-to-be-the-primitive-objects-in-mathematics-rather-than-say-tup" }
c#, algorithm, interview-questions } I don't like the fact that I'm doing it in \$ O(n^2) \$, although I am checking for no repetitions. Please review my code's complexity and algorithm. I did the testing inside the constructor just for convenience, although I usually will use unit tests. Please find my (language agnostic) suggestions below : String reversal: Use the same string variable and swap i and (len-i-1) characters? -> O(n/2). Your program caters to str == reverse(str) and not anagrams in general, i.e gilad == dalig is checked but not gilad == gliad / gladi / ladig / glaid ,etc Anagram finder: Here are the multiple ways to do it: A hash function on each String which generates a unique number for string with same letters. Could be a unique prime mapping to each character and H(S) = P1 * p2 * ... => H(gilad) = P(g) * p(i) * P(l) * P(a) * P(d) Sort each string, Sort the array and traverse to find anagrams. {"gilad","glaid","bat","tac","act","tab"} = {"adgil", "adgil" ,"abt", "act", "act", "abt"} // Sort each string = {"abt","abt","act","act", "adgil", "adgil"} // Sort the array. = Traverse to know the anagrams. Use Trie to store sorted strings. Each traversal of the string will point to its anagram i.e Once gilad => adgil is added to the trie, addition of "glaid" = "adgil" will point to its early existence and hence its anagram. Use a HashMap to store the sorted string and you would easily find the anagrams.
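Suggestion 4 (a hash map keyed by the sorted string) is the most common interview answer and is a one-liner in most languages. A Python sketch using the same test words as the review:

```python
from collections import defaultdict

def group_anagrams(words):
    """Group words that are anagrams: two words share a group iff their
    sorted characters are equal, so the sorted string is the hash key."""
    groups = defaultdict(list)
    for word in words:
        groups["".join(sorted(word))].append(word)
    # Keep only groups where an anagram partner actually exists.
    return [g for g in groups.values() if len(g) > 1]

anagrams = group_anagrams(["gilad", "glaid", "bat", "tac", "act", "tab"])
```

Sorting each word costs O(k log k) for word length k, so the whole pass is O(n·k log k) — versus O(n²·k) for comparing every pair. The prime-product hash in suggestion 1 avoids even the per-word sort, at the cost of potential integer overflow for long words in fixed-width languages like C#.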
{ "domain": "codereview.stackexchange", "id": 11081, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, algorithm, interview-questions", "url": null }
energy, work, statics With no "lossy" forces, e.g. friction, acting, the body will overshoot the position $x=0$ and undergo oscillatory motion about that position. If friction does act, then the amplitude of oscillation of the body will decrease with time until the body eventually stops at position $x=0$.
{ "domain": "physics.stackexchange", "id": 60899, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "energy, work, statics", "url": null }
gene-expression, crispr In short, these genes affect eye and hair colour, and Andrew is suggesting to us that by removing/modifying them using CRISPR we change the eye/hair colours. Can we knock out OCA2, HERC2, and MC1R? Yes, it has been done before and there are sequences available to target these genes within the genome. An example would be the gRNA sequences developed by Sigma-Aldrich and Genscript. I'll provide links to articles which knocked out the respective genes and links to the gene sequences. OCA2: (Article: https://pubmed.ncbi.nlm.nih.gov/29555241/ | Sequence: https://www.genscript.com/gRNA-detail/4948/OCA2-CRISPR-guide-RNA.html) HERC2: (Article: https://jmg.bmj.com/content/early/2020/06/22/jmedgenet-2020-106873 | Sequence: https://www.sigmaaldrich.com/catalog/genes/HERC2) MC1R: (Article: https://www.genscript.com/gRNA-detail/4157/MC1R-CRISPR-guide-RNA.html | Sequence: https://www.genscript.com/gRNA-detail/4157/MC1R-CRISPR-guide-RNA.html) Can we change the eye/hair colour by modifying the above genes?
{ "domain": "biology.stackexchange", "id": 10912, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "gene-expression, crispr", "url": null }
# Rouché's Theorem/Argument Principle Application Preparing for some prelim. exams, I encountered this problem: Show that $$p(z)=z^6+3z^4+1=0$$ has precisely two zeros in the upper half of the unit disk. There are two good ways to solve this. The standard trick seems to be to notice that $$p(z)$$ sends $$\mathbb{R}\to\mathbb{R}$$, so that by unique continuation and the Schwarz Reflection Principle, we know that $$p(z)=\overline{p(\overline{z})}$$ for $$z$$ in the lower half of the unit disk. So, the number of zeros above is the same as the number of zeros below. We can then perform a straightforward application of Rouché's theorem to finish. However, the less "clever" (but more general) way to solve this problem is to consider $$p(z)=z^6+3z^4+1$$ and compute $$\Delta\text{Arg}(p)$$ around the contour enclosing the upper half unit disk. We can see that $$\Delta \text{Arg}(p)=0$$ on the axis. Along $$e^{i\theta}$$ for $$\theta\in [0,\pi]$$, we know that the angle changes by $$\Delta \theta=\pi$$, so that considering the "dominant term" of $$3e^{4i\theta}$$, we calculate $$\Delta\text{Arg}(p)\approx 4\pi$$. By the argument principle, we find that there are two zeros in the upper half disk. My question: How do we make the second approach rigorous? I know how to use it, but I'm not really sold on the idea that the contributions of the terms of lesser modulus than $$3$$ are negligible. Can someone explain to me how to see this precisely? We can construct a proof based on homotopy invariance, borrowing ideas from the proof of Rouche's theorem.
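Before making the second approach rigorous, one can at least verify it numerically (an assumed sketch, not part of the original argument): track the continuous change of $\arg p(z)$ along the boundary of the upper half unit disk; by the argument principle the zero count is that total change divided by $2\pi$. Note $p$ has no zeros on this contour: $p(x)\ge 1$ for real $x$, and on $|z|=1$ we have $|z^6+1|\le 2<3=|3z^4|$.

```python
import cmath

def p(z):
    return z ** 6 + 3 * z ** 4 + 1

def winding(points):
    """Total change of arg p(z) along a closed polyline, in multiples of 2*pi."""
    total = 0.0
    for a, b in zip(points, points[1:] + points[:1]):
        # phase of p(b)/p(a) is the wrapped increment of arg p; safe for
        # small steps since p is zero-free on the contour.
        total += cmath.phase(p(b) / p(a))
    return total / (2 * cmath.pi)

N = 4000
# Boundary of the upper half disk: diameter from -1 to 1, then the
# upper semicircle back to -1.
diameter = [complex(-1 + 2 * k / N, 0) for k in range(N)]
arc = [cmath.exp(complex(0, cmath.pi * k / N)) for k in range(N)]
zeros_in_upper_half_disk = round(winding(diameter + arc))
```

The computation returns 2, consistent with the Rouché count (4 zeros in the full disk, none real, conjugate-symmetric).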
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9744347838494567, "lm_q1q2_score": 0.8077468167469247, "lm_q2_score": 0.82893881677331, "openwebmath_perplexity": 106.20762033129736, "openwebmath_score": 0.9415306448936462, "tags": null, "url": "https://math.stackexchange.com/questions/3056350/rouch%C3%A9s-theorem-argument-principle-application" }
ros, transform, ros-kinetic, tf2 My updated code is as follows: #include <ros/ros.h> #include <geometry_msgs/PointStamped.h> #include <tf2_ros/buffer.h> #include <tf2_ros/transform_listener.h> #include <tf2_geometry_msgs/tf2_geometry_msgs.h> void transform(const tf2_ros::Buffer& tfBuffer){ //we'll create a point in the base_laser frame that we'd like to transform to the base_link frame geometry_msgs::PointStamped laser_point; laser_point.header.frame_id = "base_laser"; //we'll just use the most recent transform available for our simple example laser_point.header.stamp = ros::Time(); //just an arbitrary point in space laser_point.point.x = 1.0; laser_point.point.y = 0.2; laser_point.point.z = 0.0; try{ geometry_msgs::PointStamped base_point; tfBuffer.transform(laser_point, base_point, "base_link"); ROS_INFO("base_laser: (%.2f, %.2f. %.2f) -----> base_link: (%.2f, %.2f, %.2f) at time %.2f", laser_point.point.x, laser_point.point.y, laser_point.point.z, base_point.point.x, base_point.point.y, base_point.point.z, base_point.header.stamp.toSec()); } catch(tf2::TransformException& ex){ ROS_ERROR("Received an exception trying to transform a point from \"base_laser\" to \"base_link\": %s", ex.what()); } }
{ "domain": "robotics.stackexchange", "id": 31623, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, transform, ros-kinetic, tf2", "url": null }
java, object-oriented, socket, server, client General Your API is not threadsafe. Consider using a locking mechanism when manipulating the list of connections in the server, and also when taking a snapshot of connections to send to. Your API should be made more robust against exceptions, and should check for lingering connections.
{ "domain": "codereview.stackexchange", "id": 35645, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, object-oriented, socket, server, client", "url": null }
The following links show the original proof if you want to check it out. Question According to the proof, it can be concluded that the solution will converge for $\rho=r_0$. The proof does not mention anything about the last sentence of the theorem, that $\rho \ge r_0$, which says that $r_0$ is a lower bound for the radius of convergence of the solution. I just cannot understand how the radius of convergence of the solution can be $\rho \gt r_0$. I think having $\rho>r_0$ is meaningless, as the coefficients of the ODE can only be replaced by their power series in $|x-x_0|<r_0$. Can someone shed some light on this?
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9799765598801371, "lm_q1q2_score": 0.8101581820577017, "lm_q2_score": 0.8267117962054049, "openwebmath_perplexity": 146.55858291999317, "openwebmath_score": 0.9368202090263367, "tags": null, "url": "https://math.stackexchange.com/questions/1853590/radius-of-convergence-of-the-power-series-solution-to-a-second-order-linear-homo" }
Take a variable sphere $S_P$ of radius $\sqrt{2}$ and center $P$. Denote $S_A$ the sphere of radius $\sqrt{2}$ and center $A$. The locus for $P$ such that $A \notin S_P$ is $K\setminus S_A$ where $K$ denotes the cube. Suppose we can find two spheres $S_X, S_Y$ centred in $X,Y$ and radii $\sqrt{2}$ which do not cover the cube and $XY >\sqrt{2}$. If the two spheres do not cover the cube, then neither one of them covers the cube, and there exists a vertex(since the cube is convex) which is not covered, for example $A$. Suppose $A \notin S_X$. Then $X \in K\setminus S_A$. Wherever we pick $X \in K\setminus S_A$ it is easy to see that $S_X$ covers $K\setminus S_A$, and $S_X$ leaves uncovered at most two vertices, let's say $A$ and $E$. If $S_X$ contains all the vertices but $A$ we are done. Then $S_X$ contains the prism $BCDFGH$. In a similar way, $Y$ is near $A$ or $E$, and $S_Y$ covers $A$ and $E$ and the prism $ABDEFH$. Therefore $S_X,S_Y$ cover the cube, which is a contradiction. I feel I am missing something, but I don't think I'm far from the solution. - I think this claim requires substantiation: "In a similar way, $Y$ is near $A$ or $E$, and $S_Y$ covers $A$ and $E$ and the prism $ABDEFH$." –  TonyK Jun 9 '11 at 22:41 @TonyK: That's what I was thinking too. –  Beni Bogosel Jun 10 '11 at 7:00
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.974434788304004, "lm_q1q2_score": 0.807746812203652, "lm_q2_score": 0.8289388083214156, "openwebmath_perplexity": 180.7833560109982, "openwebmath_score": 0.9015769362449646, "tags": null, "url": "http://math.stackexchange.com/questions/44396/largest-triangle-with-vertices-in-the-unit-cube" }
friction Title: How to stop a leg from sliding? One of the legs of a bed is bent (I believe it buckled). The bend can be corrected so the leg is straight; however, under the application of load to the bed, the leg will slowly return to the bent position. My goal is to fix the leg in place and stop it from sliding. My current idea is to increase friction between the bed leg and floor. As a first pass, I tried putting an old rubber slipper underneath the bent leg after correcting it (best I could come up with). It seemed to help a bit, but on heavier load (e.g. a person getting on and off), the bending happens again. I believe the material of the bed leg is some sort of plastic, and that of the floor is granite. Here are a few possible solutions you can try to stop the leg from sliding: Furniture Grippers: You can buy furniture grippers, which are small pads that go underneath the legs of furniture to provide more friction and prevent sliding. These are available in many different sizes and shapes, and can be easily attached to the bottom of the bed leg. Look for grippers that are specifically designed for use on hard floors like granite. Rubber Pads: Another option is to use rubber pads or discs underneath the bed leg. These can also be purchased at most hardware stores or online. The rubber material will help provide more grip and prevent sliding. Anti-Slip Tape: Anti-slip tape is another option that can be used to increase friction between the bed leg and the floor. This tape has a rough surface that provides more traction, and can be easily cut to size and applied to the bottom of the bed leg. Glue or Adhesive: If none of the above solutions work, you can try using a strong adhesive or glue to attach the bed leg to the floor. This should be a last resort, as it may damage the floor if you ever need to remove the bed in the future. Use a strong adhesive that is designed for use on plastic and granite surfaces. 
Before trying any of these solutions, make sure the bed leg is straightened as much as possible to prevent any further bending. Good luck!
{ "domain": "engineering.stackexchange", "id": 5110, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "friction", "url": null }
c#, cryptography, rags-to-riches
        var sum = data.Sum(d => (long)d);
        cypher.EncryptData(ref data);
        cypher.DecryptData(ref data, false);
        if (!Enumerable.SequenceEqual(original, data)) {
            Console.WriteLine("Can not encrypt and desencrypt arrays");
        } else {
            Console.WriteLine("Can encrypt and desencrypt arrays");
        }
        var stream = cypher.EncryptData(new MemoryStream(data)) as MemoryStream;
        var memStream = cypher.DecryptData(stream) as MemoryStream;
        data = memStream.ToArray();
        if (!Enumerable.SequenceEqual(original, data)) {
            Console.WriteLine("Can not encrypt and desencrypt streams");
        } else {
            Console.WriteLine("Can encrypt and desencrypt streams");
        }
        memStream = cypher.EncryptData(new MemoryStream(data)) as MemoryStream;
        data = memStream.ToArray();
        cypher.DecryptData(ref data);
        if (!Enumerable.SequenceEqual(original, data)) {
            Console.WriteLine("Can not encrypt streams and desencrypt arrays");
        } else {
            Console.WriteLine("Can encrypt streams and desencrypt arrays");
        }
        cypher.EncryptData(ref data);
        memStream = cypher.DecryptData(new MemoryStream(data)) as MemoryStream;
        data = memStream.ToArray();
        if (!Enumerable.SequenceEqual(original, data)) {
            Console.WriteLine("Can not encrypt arrays and desencrypt streams");
        } else {
            Console.WriteLine("Can encrypt arrays and desencrypt streams");
        }
    }
}
You are using in your code some non-standard loops.
for (int i = 1; i <= streamFooter.Length; ++i)
{ "domain": "codereview.stackexchange", "id": 26401, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, cryptography, rags-to-riches", "url": null }
dirac-equation, majorana-fermions, charge-conjugation An edit. I decided to check my assumptions about charge conjugation. It is defined as $$ \Psi^{c} = \hat {C} \gamma_{0}^{T} \Psi^{*}, $$ where $\hat {C}$ refers to the charge conjugation operator. I started from the spinor basis: $$ \gamma_{0} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \hat {C} = \begin{pmatrix} -i\sigma_{y} & 0 \\ 0 & i\sigma_{y}\end{pmatrix}. $$ Standard (or Dirac) basis: $$ U_{spinor\to standard} = U_{1} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1\end{pmatrix} \Rightarrow $$ $$ \gamma_{0}^{Dirac} = U_{1}^{+}\gamma_{0}U_{1} =\begin{pmatrix} 1 & 0 \\ 0 & -1\end{pmatrix}, \quad \hat {C}^{Dirac} = -\begin{pmatrix} 0 & i\sigma_{y} \\ i\sigma_{y} & 0\end{pmatrix}. $$ Finally, the Majorana basis: $$ U_{standard \to Majorana} = U_{2} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & \sigma_{y} \\ \sigma_{y} & -1\end{pmatrix} \Rightarrow $$ $$
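The basis change from the spinor to the Dirac basis quoted above is easy to verify numerically. A minimal sketch with numpy, building the 4x4 matrices from 2x2 blocks via Kronecker products (the block values are exactly those written in the text):

```python
import numpy as np

# Pauli matrix sigma_y and the 2x2 identity
sy = np.array([[0, -1j], [1j, 0]])
I2 = np.eye(2)

# Spinor basis: gamma_0 has identity off-diagonal blocks,
# C = diag(-i sigma_y, +i sigma_y)
gamma0 = np.kron(np.array([[0, 1], [1, 0]]), I2)
C = (np.kron(np.array([[1, 0], [0, 0]]), -1j * sy)
     + np.kron(np.array([[0, 0], [0, 1]]), 1j * sy))

# Basis change to the standard (Dirac) basis
U1 = np.kron(np.array([[1, 1], [1, -1]]), I2) / np.sqrt(2)

gamma0_dirac = U1.conj().T @ gamma0 @ U1
C_dirac = U1.conj().T @ C @ U1

# Expected results quoted in the text
expected_gamma0 = np.kron(np.array([[1, 0], [0, -1]]), I2)
expected_C = -np.kron(np.array([[0, 1], [1, 0]]), 1j * sy)

assert np.allclose(gamma0_dirac, expected_gamma0)
assert np.allclose(C_dirac, expected_C)
```

Both assertions pass, confirming the transformed $\gamma_0^{Dirac}$ and $\hat C^{Dirac}$ given above.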
{ "domain": "physics.stackexchange", "id": 11730, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "dirac-equation, majorana-fermions, charge-conjugation", "url": null }
doppler-effect Title: Doppler effect contradiction when counting wave fronts For concreteness, let's take a source moving away from a stationary observer. The observed frequency will be less than the emitted frequency, so for any given time interval, the observer will count fewer incoming peaks than the source counts outgoing ones. This is what frequency means; this makes sense. But what if the source is turned on for a given time, then switched off? The source and observer will disagree on how many peaks were actually emitted. This feels like a contradiction. If "observing a peak" were an event, some events wouldn't occur in the observer's frame! The source is on for a time $t$. In that time, the source sees $f_st$ peaks, and the observer sees $f_ot$ peaks. My intuition also says they should see the same number, but using the different frequencies to find how many peaks are observed leads to a disagreement! I feel like I'm mistaken in applying the formula $ft=n$ here, but why? The issue is in assuming that the time $t$ that is experienced by the source and the observer is the same. This is not the case. This is because as the source moves away from the observer, the sound wave will be longer by an amount $vT_s$, where $v$ is the velocity of the source, and $T_s$ is the time the source is on. The total length of the wave will be $\ell=(c+v)T_s$, where $c$ is the speed of sound. Since the wave moves at the speed of sound $c$, the observer will experience the sound for a time $$T_o=\frac{\ell}{c}=\frac{(c+v)T_s}{c}$$ Now, applying the Doppler shift in frequency for a source moving away from a stationary receiver: $$f_o=\frac{c}{c+v}f_s$$ we find that $$f_oT_o=f_sT_s=n$$ which is what you correctly assumed should be true.
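The resolution above can be checked with numbers: the stretched duration exactly compensates the lowered frequency, so both parties count the same peaks. A small sketch (speed of sound, source speed, $f_s$, and $T_s$ are arbitrary illustrative values):

```python
# Receding source: observer hears a longer signal at a lower frequency,
# but the peak counts f_s*T_s and f_o*T_o agree.
c = 343.0      # speed of sound, m/s
v = 30.0       # source speed (receding), m/s
f_s = 440.0    # emitted frequency, Hz
T_s = 2.0      # time the source is on, s

f_o = c / (c + v) * f_s       # Doppler-shifted observed frequency
T_o = (c + v) * T_s / c       # stretched observed duration

n_source = f_s * T_s          # peaks counted at the source
n_observer = f_o * T_o        # peaks counted by the observer

assert abs(n_source - n_observer) < 1e-9   # same count, no contradiction
```

The algebra guarantees this for any values, since the factors $(c+v)/c$ cancel.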
{ "domain": "physics.stackexchange", "id": 53014, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "doppler-effect", "url": null }
the number lies. divide 1 2/6 by 2 1/4. Rational-equations. Evaluating expressions. 45x-24y+78z=12. A solution of a simplified version of an equation that does not satisfy the original equation. Solve equation. Thus the set of our solutions is the part of the x-axis indicated below in red, the interval (-1,1): If we want to see the solutions of the inequality. This is the update to the absolute-value equation program. solving polynomial and rational inequalities (one variable) We will use a combination of algebraic and graphical methods to solve polynomial and rational inequalities. Then, raise both sides of the expression to the reciprocal of the exponent since $\left(x^{m/n}\right)^{n/m} = x$. They are not defined at the zeros of the denominator. Example 1: to solve $\frac{1}{x} + 2x = 3$ type 1/x + 2x and 3. Find the best digital activities for your math class — or build your own. From Multivariable Equation Solver to scientific notation, we have got all kinds of things covered. Solve and graph inequalities in one variable 2. Another method of solving inequalities is to express the given inequality with zero on the right side and then determine the sign of the resulting function from either side of the root of the function. Free Math Worksheets for Grade 7. We will remind ourselves of our inequality key phrases, as Algebra Class so nicely summarizes, draw upon our knowledge of how to simplify expressions and solve inequalities using our SCAM technique. For |x|, we use the following relationships, for some number n: If |f(x)| > n, then this means: f(x) < -n or f(x) > n. This calculator can solve for X in fractions as equalities and inequalities: < or ≤ or > or ≥ or =. Zero of the denominator: solve x + 2 = 0 to obtain x = -2. Dig deeper into specific steps: our solver does what a calculator won't: breaking down key steps. Section 2-13: Rational Inequalities. Rational Functions Test Review (2015) Solutions (2015) Lesson 9-5 Adding and Subtracting Rational Expressions. Any help
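The two worked examples in the text (solving $\frac{1}{x} + 2x = 3$, and the inequality whose solution set is the interval $(-1,1)$) can be reproduced in a few lines. This sketch uses sympy, an illustrative tool choice not named by the page:

```python
# Reproducing the page's two examples with sympy.
from sympy import symbols, solve, solve_univariate_inequality, Rational, Interval

x = symbols('x')

# "Example 1: to solve 1/x + 2x = 3": clear the denominator and solve
# the resulting quadratic 2x^2 - 3x + 1 = 0.
roots = solve(1/x + 2*x - 3, x)
assert set(roots) == {Rational(1, 2), 1}

# An inequality whose solution set is the open interval (-1, 1),
# as described in the text.
sol = solve_univariate_inequality((x - 1)*(x + 1) < 0, x, relational=False)
assert sol == Interval.open(-1, 1)
```

Note that such a solver must also exclude zeros of the denominator (here $x = 0$) from the solution set, exactly as the text cautions.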
{ "domain": "tuningfly.it", "id": null, "lm_label": "1. Yes\n2. Yes\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9869795121840873, "lm_q1q2_score": 0.8408599627455497, "lm_q2_score": 0.8519528038477825, "openwebmath_perplexity": 856.2827349084015, "openwebmath_score": 0.41914433240890503, "tags": null, "url": "http://tuningfly.it/mwdx/solving-rational-inequalities-calculator.html" }
quantum-mechanics, optics, waves, greens-functions, huygens-principle Note that this is not the Huygens principle as usually given: the solution depends not only on the points on the wavefront but on the value of the solution in the entire support of $G$ at an earlier time. (The wave equation $\partial_t^2 \phi = c^2 \Delta \phi$ in odd dimensions, however, does permit a solution that can be interpreted as strictly following the Huygens principle.) In other words: in general, you don't construct wavefronts from wavefronts, but rather wave configurations from wave configurations. And in this sense it is a strongly generalized Huygens principle. Note: Wikipedia has quite a nice discussion of more or less exactly these issues: https://en.wikipedia.org/wiki/Huygens%E2%80%93Fresnel_principle.
{ "domain": "physics.stackexchange", "id": 93673, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, optics, waves, greens-functions, huygens-principle", "url": null }
electromagnetism, electric-circuits, inductance, lenz-law Thus we get the direction of the induced current shown in the following figure. Figure 9. Image source: own. So far so good; I don't have any doubts. The only step missing is to find the polarity of the induced voltage $v_\text{ind}$. I don't know how to do this, this is where I'm stuck. However, I realized/observed something. It looks like we must assign the reference polarity of $v_\text{ind}$ such that the induced current $i_\text{ind}$ flows from higher potential (the terminal marked with the “$+$” sign of $v_\text{ind}$) to lower potential (the terminal marked with the “$-$” sign of $v_\text{ind}$) through the external active circuit. (Equivalently, the induced current $i_\text{ind}$ flows from lower potential [the terminal marked with the “$-$” sign of $v_\text{ind}$] to higher potential [the terminal marked with the “$+$” sign of $v_\text{ind}$] through the inductor.) But I don't know why it is correct, and I'd like to know it. You may ask how I'm sure my observation above is correct. The reason is because if we assume it is correct, then, as I'll show below, equation (1) indeed considers the sign of $v$ and the sign of the rate of change of $i$. So let's suppose the above observation is correct. Then, in the previous figure: In cases a) and d), the induced current $i_\text{ind}$ flows into the active circuit through the upper terminal, so the induced voltage $v_\text{ind}$ has the positive reference polarity at the upper terminal. In cases b) and c), the induced current $i_\text{ind}$ flows out of the active circuit through the upper terminal, so it flows into the active circuit through the lower terminal, so the induced voltage $v_\text{ind}$ has the positive reference polarity at the lower terminal. In this way we get the polarity of the induced voltage shown in the following figure.
{ "domain": "physics.stackexchange", "id": 84533, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electromagnetism, electric-circuits, inductance, lenz-law", "url": null }
$\displaystyle \lim_{k \rightarrow \infty} \mathop{\bf E} (\int_0^{\min(k-X_1-\dots-X_k,0)} g(t_k)\ dt_k)^2 = \infty$ and the claim follows from (49) if ${k}$ is sufficiently large depending on ${m}$. We are thus reduced to the easy one-dimensional problem of producing a smooth function ${g: [0,+\infty) \rightarrow {\bf R}}$ obeying the constraints (47), (48), (49). However, one can verify that the choice $\displaystyle g(t) := \frac{1}{(1+t) \log(2+t)}$ (barely) obeys (49) with ${\int_0^\infty g(t)^2\ dt}$ and ${\int_0^\infty t g(t)^2\ dt}$ both finite, and the claim follows by a routine rescaling. Exercise 42 By working through a more quantitative version of the above argument, establish Theorem 38 with ${m \gg \log k}$ as ${k \rightarrow \infty}$. (Hint: One can replace the use of the law of large numbers with Chebyshev’s inequality.) It is not currently known how to obtain this theorem with any choice of ${m}$ that grows faster than logarithmically in ${k}$; the current record is ${m = (\frac{1}{4} + \frac{7}{600}) \log k + O(1)}$, due to the Polymath project. It is known, though, that bounds on the order of ${\log k}$ are the limit of the Selberg sieve method, and one either needs new sieves, or new techniques beyond sieve theory, to increase ${m}$ beyond this rate; see the Polymath paper for further discussion.
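The finiteness of the two integrals for $g(t) = \frac{1}{(1+t)\log(2+t)}$ can be spot-checked numerically (a sanity check, not a proof; the tail integrand of $\int t\,g(t)^2\,dt$ behaves like $1/(t \log^2 t)$, which is integrable at infinity). This sketch uses scipy's `quad`, an illustrative tool choice:

```python
# Numerical sanity check that int g^2 and int t*g^2 over [0, inf) are finite
# for g(t) = 1/((1+t) log(2+t)).
import numpy as np
from scipy.integrate import quad

def g(t):
    return 1.0 / ((1.0 + t) * np.log(2.0 + t))

I0, _ = quad(lambda t: g(t)**2, 0, np.inf, limit=500)
I1, _ = quad(lambda t: t * g(t)**2, 0, np.inf, limit=500)

# Both integrals converge to finite positive values
assert np.isfinite(I0) and I0 > 0
assert np.isfinite(I1) and I1 > 0
```

The convergence is slow (the antiderivative of the tail decays only like $1/\log T$), which is the sense in which the choice "barely" works.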
{ "domain": "wordpress.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9886682444653242, "lm_q1q2_score": 0.8012947666000314, "lm_q2_score": 0.8104789155369048, "openwebmath_perplexity": 248.19796379327, "openwebmath_score": 0.9931489825248718, "tags": null, "url": "https://terrytao.wordpress.com/2015/01/21/254a-notes-4-some-sieve-theory/" }
computer-networks The same physical link can serve different logical speeds. Using the same technology makes deployment easier and manufacturing cheaper. The logical speed can be modified to any value, at any time, simply by a change in the configuration of one or more devices. There is no need to replace equipment unless you need to provide more speed than the one supported by the physical layer. The same physical link can be used by more people, and this is key. If you have a 1 Mbps service over a fibre or cable connection, the physical layer would be idle between your packets. But it can be used for other users' packets; it is shared. And it is aggregated in the network hierarchy. A 10 Gbps link can carry thousands of low-speed links from thousands of users. I hope this is clear and answers your question, but certainly it is just a very simplified overview. Feel free to ask more questions or clarifications.
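The aggregation claim above is a simple capacity ratio; ignoring protocol overhead and statistical multiplexing gains, it works out as:

```python
# Back-of-the-envelope version of "a 10 Gbps link can carry thousands of
# low-speed links": capacity divided by per-user rate, overhead ignored.
link_capacity_bps = 10 * 10**9   # 10 Gbps physical link
user_rate_bps = 1 * 10**6        # 1 Mbps logical service per user

max_users = link_capacity_bps // user_rate_bps
assert max_users == 10_000
```

In practice statistical multiplexing lets operators oversubscribe well beyond this, precisely because each user's physical layer is idle between packets.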
{ "domain": "cs.stackexchange", "id": 6035, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "computer-networks", "url": null }
python
Notice that since --... SECTION-NAME ...-- lines are parsed separately, you no longer need to guard against lines starting with -- inside SECTIONS[2] or SECTIONS[3].
Named Sections
What the heck does elif act_section == SECTIONS[2]: mean, anyway? If a new section gets added before "SECTION-C", is it going to be added to the SECTIONS list before that item? If so, you'll have to find everywhere there is SECTIONS[2] and change that to SECTIONS[3], but first you'll have to change all SECTIONS[3] to SECTIONS[4]! You're using SECTIONS[#] to avoid typing out the whole string multiple times, to reduce the chance of typos and ensure it only needs to be changed in one spot if it does require changing, but it is obfuscating the code such that the reader has to refer elsewhere and count to determine what section SECTIONS[2] actually means! It needs to be commented better, but comments aren't actually tested, so there is no guarantee the comments are actually correct. You need:
def parser(lines):
    SECTION_A = "SECTION-A"
    SECTION_B = "SECTION-B"
    SECTION_C = "SECTION-C"
    SECTION_D = "SECTION-D"
    SECTION_END = "END"
    SECTIONS = {
        SECTION_A,
        SECTION_B,
        SECTION_C,
        SECTION_D,
        SECTION_END,
    }
    ...
    elif act_section == SECTION_A or act_section == SECTION_B:
        ...
    elif act_section == SECTION_C:
        ...
    elif act_section == SECTION_D:
        ...
No more counting is necessary (or even possible, because SECTIONS is now a set). At this point, it may be worth considering an Enum.
from enum import Enum

class Section(Enum):
    A = "SECTION-A"
    B = "SECTION-B"
    C = "SECTION-C"
    D = "SECTION-D"
    END = "END"
Efficiency
There is no need to read and store the entire file into memory. You can easily process the file line by line.
{ "domain": "codereview.stackexchange", "id": 43730, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python", "url": null }
electromagnetism, forces, magnetic-fields We don't really have a time coordinate as we're only looking at one moment in time, but let me take a typical charge flow over parameter-change $ds$ as happening over a real-time-change $dt$. Then, identifying $v(s)$ as $\left|\frac {d\vec r}{dt}\right| = \left|\vec r'(s)\right| \cdot \left|\frac{ds}{dt}\right|$ we get $\frac{ds}{dt} = I/\big[\lambda(s)~ |\vec r'(s)|\big].$ So that gives an explicit notion for $dt$. The net Lorentz force due to the magnetic field is of course $$\vec F = \oint_C dq~\vec v\times\vec B, $$and we identify $dq = \lambda(s) ~ |\vec r'(s)|~ ds$ and $\vec v = \frac {d\vec r}{dt} = \vec r'(s) ~\frac{ds}{dt}$ to turn this into: $$\vec F = \int_0^S ds~I(s) ~ \vec r'(s) \times\vec B(s) = I \oint_C d\vec r\times \vec B.$$ So you just get a straightforward line integral. Another simple way to see this is as the curvy generalization of the well-known result for a length $L$ of wire oriented in direction $\hat n$ with current $I$ going in the $\hat n$ direction: then the force on the wire is $\vec F = L~I~\hat n\times\vec B.$ (Some people write $\vec I = \hat n ~I.$)
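The reduction to a line integral is easy to check numerically for the straight-wire special case mentioned at the end: discretizing $\vec F = I\oint_C d\vec r \times \vec B$ for a straight segment in a uniform field should reproduce $\vec F = L\,I\,\hat n \times \vec B$. A sketch with made-up values:

```python
import numpy as np

# Discretized line integral F = I * sum(dr x B) for a straight wire,
# checked against the closed form F = L * I * (n_hat x B).
I = 2.0                              # current, A (illustrative)
L = 0.5                              # wire length, m
n_hat = np.array([0.0, 0.0, 1.0])    # wire direction
B = np.array([0.3, -0.1, 0.2])       # uniform field, T (illustrative)

# Points along the wire and the segment vectors dr
s = np.linspace(0.0, L, 1001)
r = np.outer(s, n_hat)
dr = np.diff(r, axis=0)

F_numeric = I * np.cross(dr, B).sum(axis=0)
F_exact = L * I * np.cross(n_hat, B)

assert np.allclose(F_numeric, F_exact)
```

Because $\vec B$ is uniform and the cross product is linear, the discretized sum is exact here; for a curvy wire in a nonuniform field the same sum converges to the integral as the mesh is refined.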
{ "domain": "physics.stackexchange", "id": 59534, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electromagnetism, forces, magnetic-fields", "url": null }
rna-seq, transcriptome, long-reads, pacbio, coverage Title: Coverage calculation: long reads (RNA-seq) Say your aim is to calculate the coverage of an RNA-seq experiment generated with long-read sequencing (so, uneven read length). Up to now, I relied on the Lander/Waterman equation: $$C = L*N / G$$ where: $C$ = final coverage $G$ = haploid genome (or transcriptome) length $L$ = read length $N$ = number of reads I have two major conceptual issues with this formula:
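Whatever the conceptual issues, the formula itself is quick to sketch. For uneven read lengths, a common convention (an assumption here, not something stated in the question) is to take $L$ as the mean read length, which makes $L \cdot N$ equal to the total bases sequenced:

```python
import numpy as np

# Lander/Waterman coverage C = L*N/G, with L = mean read length for
# long reads. Read lengths and G below are made-up illustrative values.
G = 50_000_000                                        # reference length, bp
read_lengths = np.array([12_000, 8_500, 25_000, 3_200, 15_800])

L = read_lengths.mean()
N = len(read_lengths)
C_fixed = L * N / G                                   # fixed-L form

C_total = read_lengths.sum() / G                      # total-yield form

# The two are identical, since L*N is exactly the sum of read lengths
assert np.isclose(C_fixed, C_total)
```

So the equation survives uneven read lengths as long as $L$ is interpreted as a mean; the sum-of-lengths form is usually the more natural way to compute it.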
{ "domain": "bioinformatics.stackexchange", "id": 275, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "rna-seq, transcriptome, long-reads, pacbio, coverage", "url": null }
homework-and-exercises, electrostatics, electric-fields, integration, calculus Building the electric field line of Figure-02 (video).
{ "domain": "physics.stackexchange", "id": 97903, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, electrostatics, electric-fields, integration, calculus", "url": null }
forces, fluid-statics Title: Is Frobscottle from the movie 'The BFG' less dense than air? For those who have either read the book, or watched the movie "The BFG", you would know Frobscottle as a green drink the giant uses, which has bubbles fizzing "in the wrong way", which is downwards. Assuming the bubbles to be filled with air, and that the gravitational force on a bubble is greater than the buoyant force, does this imply Frobscottle is less dense than air? Furthermore, is a liquid possible that is less dense than air? It implies that the author made it up without worrying about physics. If Frobscottle were less dense than air, it wouldn't stay in a cup. It would float like a helium balloon. There is a liquid less dense than air, but it is nothing you could put air bubbles in. It is 3 monolayers of $^3He$ adsorbed on graphite at temperatures below 80 millikelvin. First, the liquid is only 3 atoms deep. Second, air freezes solid at that temperature. See http://www.u-tokyo.ac.jp/en/utokyo-research/research-news/lowest-density-liquid-in-nature/ and https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.109.235306.
{ "domain": "physics.stackexchange", "id": 39912, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "forces, fluid-statics", "url": null }
machine-learning, supervised-learning, probability \end{equation*} In the context of supervised learning, the variable $y$ is used to denote the class labels, and the vector $\mathbf{x}$ the measurement or feature vector. For the purpose of discussion, let us assume that the class label $y$ takes values in the set $\{1,2\}$, where $1$ denotes male and $2$ denotes female. Similarly, $\mathbf{x}$ is a measurement vector of two variables, say $(x_{1},x_{2})$, where $x_{1}$ stands for the height and $x_{2}$ for the weight of individuals. $p(y|\mathbf{x})$ denotes the posterior density for $y$ given the observation $\mathbf{x}$. For example, $p(1|\mathbf{x})$ is the probability that, given the observation $\mathbf{x}$, the sample belongs to the class of males. Similarly, we can interpret $p(2|\mathbf{x})$. $p(\mathbf{x}|y)$ stands for the class conditional probability density. For example, $p(\mathbf{x}|1)$ denotes the probability density for males and $p(\mathbf{x}|2)$ the probability density for females. In supervised learning, these class conditional densities are usually assumed known in advance. Finally, $p(y)$ denotes the prior probability for the class label $y$. For example, $p(1)$ denotes the probability that an individual/example selected randomly from the population is a male. Assuming that the prior probabilities for the classes and the class conditional densities are known, Bayes' rule says: assign an observation $\mathbf{x_{0}}$ whose class label is not known to the class, say, male, if \begin{equation*} p(\mathbf{x_{0}}|1)p(1)>p(\mathbf{x_{0}}|2)p(2). \end{equation*} Note that $p(\mathbf{x})$ is not used in the classifier. Classification rules usually require class conditional densities.
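The decision rule above can be sketched in a few lines. The Gaussian class-conditional densities, their parameters, and the equal priors below are all made-up illustrative assumptions (the text does not specify the densities):

```python
import numpy as np
from scipy.stats import multivariate_normal

# Bayes rule: assign x0 to class 1 (male) iff p(x0|1) p(1) > p(x0|2) p(2).
# Class-conditional densities over (height cm, weight kg); parameters invented.
male = multivariate_normal(mean=[178.0, 80.0], cov=[[50.0, 20.0], [20.0, 60.0]])
female = multivariate_normal(mean=[165.0, 62.0], cov=[[40.0, 15.0], [15.0, 50.0]])
prior = {1: 0.5, 2: 0.5}

def classify(x0):
    """Return 1 (male) or 2 (female) by comparing p(x0|y) p(y)."""
    score_male = male.pdf(x0) * prior[1]
    score_female = female.pdf(x0) * prior[2]
    return 1 if score_male > score_female else 2

assert classify([180.0, 85.0]) == 1   # near the male density's mean
assert classify([160.0, 55.0]) == 2   # near the female density's mean
```

Note, as in the text, that the evidence $p(\mathbf{x})$ never appears: it is the same on both sides of the comparison and cancels.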
{ "domain": "datascience.stackexchange", "id": 1240, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "machine-learning, supervised-learning, probability", "url": null }
ros, usb, freenect, libfreenect, ros-indigo Comment by luketheduke on 2015-12-19: Is your Kinect a model 1473? If so, this question may help you: http://answers.ros.org/question/196455/kinect-installation-and-setup-on-ros-updated/ Hi, It seems that USB 3 is not well recognized on Linux kernels < 3.13, so it doesn't work on every machine. EDIT: try georg l's answer first; this will tell you directly whether you have a permission issue or whether the problem is elsewhere. Otherwise, here is how I solved it on my laptop. I've installed Ubuntu 14.04.2, which uses the newer kernel versions, and the Kinect2 is working fine on it. I followed the steps listed here. note: if you use 14.04.2, you'll need to install a bunch of X libraries; most of them are listed on the ROS installation page. note2: the registration of the Kinect2 point clouds is done on the computer, which takes a huge amount of computing power; I'm trying to figure out how to make the GPU perform the registration. On an i7 laptop the framerate drops to 5~6 fps when you perform the registration. Originally posted by marguedas with karma: 3606 on 2015-04-12 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 21414, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, usb, freenect, libfreenect, ros-indigo", "url": null }
electrostatics, differentiation Title: Vector Identity For Electrostatics I am reading about electrostatics and came across this vector identity when discussing the $D$ field: $$\frac{\nabla k_{e}}{k_{e}} = \nabla \ln (k_{e}).$$ I have not seen this identity before and was wondering if someone could show me how it is derived. It's a basic application of the chain rule, the vector analogue of $\frac{d}{dx}(\ln f(x))=\frac{1}{f(x)}\frac{df(x)}{dx}$.
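Spelled out component by component, the chain rule gives the identity directly:

```latex
\frac{\partial}{\partial x_i}\ln k_e
  = \frac{1}{k_e}\,\frac{\partial k_e}{\partial x_i}
\qquad (i = 1,2,3)
\quad\Longrightarrow\quad
\nabla \ln k_e
  = \frac{1}{k_e}\,
    \left(\frac{\partial k_e}{\partial x_1},
          \frac{\partial k_e}{\partial x_2},
          \frac{\partial k_e}{\partial x_3}\right)
  = \frac{\nabla k_e}{k_e}.
```

Each component of the gradient is just the one-dimensional derivative quoted in the answer, applied along that coordinate.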
{ "domain": "physics.stackexchange", "id": 52883, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electrostatics, differentiation", "url": null }
dataset
import os


def extract_data(data_dir, output_dir, frame_rate):
    """
    Extracts data in tfrecord format to gifs, frames and text files
    :param data_dir:
    :param output_dir:
    :param frame_rate:
    :return:
    """
    if os.path.exists(output_dir):
        if os.listdir(output_dir):
            raise RuntimeError('Directory not empty: {0}'.format(output_dir))
    else:
        os.makedirs(output_dir)
    seq_generator = get_next_video_data(data_dir)
    while True:
        try:
            _, k, actions, endeff_pos, aux1_frames, main_frames = next(seq_generator)
        except StopIteration:
            break
        video_out_dir = os.path.join(output_dir, '{0:03}'.format(k))
        os.makedirs(video_out_dir)
{ "domain": "datascience.stackexchange", "id": 9480, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "dataset", "url": null }
sql, csv, database, php5, laravel
Javascript
submitFiles() {
    // Form validation that will not allow a user to submit the form upon the @click event
    if (!this.$refs.csvFile.checkValidity()) {
        return null;
    }
    let formData = new FormData();
    formData.append('file', this.file);
    axios.post('http://localhost:8080/api/contacts/upload-csv', formData, {
        headers: {
            'Content-Type': 'multipart/form-data'
        }
    })
    .then(function(response){
        console.log('SUCCESS!!');
        console.log(response);
    })
    .catch(function(response){
        console.log('FAILURE!!');
        console.log(response);
    })
}
}
}
</script>
I defined the fields that I wanted to be required first. Then, I defined a one-to-many relationship between an accounts table and a contacts table, where the contact id is the account's name in the accounts table in the SQL database.
$account = new Account();
$mapping = $request->$mapping;
$account->name = $rowProperties[$mapping['account_name']];
// One-To-Many Eloquent Relationship that links a table of Account Names in the Account's
// table to contact Account_ID's in the Contact tables
//$contact->id = $account->id;
$account->save();
$contact = new Contact();
$contact->id = $account->id;
$contact->contact_id = $rowProperties[$mapping['contact_account_name']];
$contact->first_name = $rowProperties[$mapping['contact_first_name']];
$contact->last_name = $rowProperties[$mapping['contact_last_name']];
{ "domain": "codereview.stackexchange", "id": 37597, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "sql, csv, database, php5, laravel", "url": null }
Now, Ronie, I'm counting on you to determine the internal energy at that final state so that we can finally reach consensus. Chet Yes, I see the flaws now and you are right. This is the correct equation for isochoric compression (but do you agree on this?) $_1Q_2 = \Delta U$, where Q (heat) can be equal to ΔU, ΔKE, W, or ΔH, but not ΔH = ΔU (an erroneous equation by definition in this case), because the equation of state for internal energy would be u = h - pv. Question: is there an actual case in which ΔH = ΔU holds? What insights can you infer from this simple expression? Last edited: Chestermiller Mentor Yes, I see the flaws now and you are right. That's it? That's all you have to say to me after all the hours of my valuable time and effort I put in trying to figure out a way of explaining this in a way that resonates with you? Are you aware that being a Mentor at Physics Forums is done strictly on a volunteer basis by people who just want to help other members? I don't get paid for this. I am very disappointed in your unappreciative response. Growing up, I was taught better manners than this. This is the correct equation for isochoric compression (but do you agree on this?) $_1Q_2 = \Delta U$ Yes. This is correct in the typical case where significant KE and PE changes are absent. Where Q (heat) can be equal to ΔU, ΔKE, W, or ΔH, but is not limited to these. The more general form of the first law applicable to a closed system is $$\Delta U+\Delta (KE)+\Delta (PE)=Q-W$$ I don't feel like answering your last question.
{ "domain": "physicsforums.com", "id": null, "lm_label": "1. Yes\n2. Yes", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9553191348157372, "lm_q1q2_score": 0.8265651336092777, "lm_q2_score": 0.8652240947405564, "openwebmath_perplexity": 1027.876941015103, "openwebmath_score": 0.6364243626594543, "tags": null, "url": "https://www.physicsforums.com/threads/when-is-dh-du.856385/page-2" }
If , then using implicit differentiation would be. This is a Universal General Education Transfer Component (UGETC) course. Implicit differentiation was developed by the famed physicist and mathematician Isaac Newton. Other topics include differentiation and integration of algebraic and trigonometric functions, implicit differentiation, related rates problems, and other application problems. Implicit differentiation. 7) Answer Key. You might even disdain to read it until, with pencil and paper, you have solved the problem yourself (or failed gloriously). All the solutions are given by the implicit equation. Second Order Differential equations. Differentiate: solve the equation explicitly for y and find y′ by implicit differentiation. We know that y = 300 and dy/dt = 60. If a solution set is available, you may click on it at the far right. Implicit differentiation Statement Strategy for differentiating implicitly Examples Method of implicit differentiation. Michael Kelley Mark Wilding, Contributing Author. The chain rule. We discussed a few examples of where an implicit function theorem could be useful: (1) The inverse function problem can be turned into an implicit function theorem (more in the notes). If you notice any errors please let me know. Find dy/dx by implicit differentiation. Apply derivatives to solve optimization problems, related rates problems. Step 1: Multiply both sides of the function by ( + ) ( ) ( ) + ( ) ( ). Find ∂z/∂x and ∂z/∂y for each of the following functions. Sudoku Puzzle with Derivatives (Basic derivative formulas, Chain Rule, Implicit differentiation) A Puzzle by David Pleacher Solve the 26 derivative problems below and place the answer in the corresponding cell. Answers should be integers from 1 to 9 inclusive. Linear multi-step methods: consistency, zero-. 1 Verify by substitution that the given function is a solution of the given differential equation.
Differentiation formulas (sums, differences, products, and quotients) The chain rule Derivatives of trigonometric functions Implicit differentiation Rate of change in the natural and social sciences, including position, velocity, acceleration, and rectilinear motion (includes an oral presentation). Course Hours per Week: Class, 3. Implicit differentiation is also crucial to finding the derivative of inverse functions. If y = f(x), the variable y is given explicitly (clearly) in terms of x. Otherwise, the function is referred to as an implicit function. Find the derivative of f
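The "Find dy/dx by implicit differentiation" exercises that recur above can be sketched mechanically. This uses sympy's `idiff` (an illustrative tool choice) on the circle relation $x^2 + y^2 = 1$, a made-up example not taken from the page:

```python
# Implicit differentiation of F(x, y) = 0 with sympy: idiff returns dy/dx.
from sympy import symbols, idiff, simplify

x, y = symbols('x y')
F = x**2 + y**2 - 1          # the circle x^2 + y^2 = 1

dydx = idiff(F, y, x)
assert simplify(dydx + x/y) == 0   # dy/dx = -x/y
```

This matches the hand method: differentiate $x^2 + y^2 = 1$ term by term, treat $y$ as $y(x)$ via the chain rule ($2x + 2y\,y' = 0$), and solve for $y' = -x/y$.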
{ "domain": "serviziepiu.it", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.989510909668023, "lm_q1q2_score": 0.8066814443506204, "lm_q2_score": 0.8152324915965392, "openwebmath_perplexity": 767.8150842855367, "openwebmath_score": 0.7326173186302185, "tags": null, "url": "http://nmaw.serviziepiu.it/implicit-differentiation-problems-and-solutions-pdf.html" }
One can easily plot data in a scattered manner on a graph. How it can be plotted basically depends on the data given to us. To predict values from such scattered data, we make use of a straight line, representing an equation, which is not actually displayed on the graph. This straight line is known as the line of best fit. In other terms it is also referred to as the trend line.
## Line of Best Fit Definition
We can define the line of best fit as the line that represents the data of a scatter plot diagram in the best manner. The line of best fit may pass through some points of the scatter plot, may pass through all the points, or at times may not pass through any point of the scatter plot.
## How to Find and Draw The Line of Best Fit
To find the line of best fit we can use two methods: one is the spaghetti method and the second is the least squares method. The first is a random method, according to which we get different lines of best fit, since the judgment varies from person to person. The least squares method gives a general and more accurate line of best fit for a given set of values. With this method, the line obtained is the same for every person determining it from the same set of values. Once the line is obtained using the spaghetti or judgment method, we can easily find the equation of the line using the point-slope formula or the two-point formula. In the other method we obtain the equation first, from which we can draw the line of best fit by finding points lying on the equation and joining them.
## Line of Best Fit Equation
Once the line of best fit is drawn, one can easily find the equation of the line using any method of finding the equation of a line, be it the point-slope or the two-point method.
## Line of Best Fit Formula
Usually we can make a line of best fit by eye, which may vary from person to person.
But a more accurate way of finding the line of best fit for a particular data is to make use of the least square method to determine the line of best fit.
{ "domain": "mathcaptain.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9898303443461076, "lm_q1q2_score": 0.8248499772757141, "lm_q2_score": 0.8333246015211008, "openwebmath_perplexity": 168.68689337996943, "openwebmath_score": 0.3933320641517639, "tags": null, "url": "http://www.mathcaptain.com/probability/line-of-best-fit.html" }
quantum-mechanics, path-integral This is the usual connection between statistical physics (which is a nice, well-defined real theory) at inverse temperature $\beta$ in (N+1, 0) space-time dimensions and evolution of the quantum system in (N, 1) dimensions for time $t = -i \hbar \beta$, which is used all over physics but almost never justified — although in some cases it has actually been possible to prove that a Wightman QFT is indeed a Wick rotation of some Euclidean QFT (note that quantum mechanics is also a special case of QFT in (0, 1) space-time dimensions).
{ "domain": "physics.stackexchange", "id": 22806, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, path-integral", "url": null }
homework-and-exercises, optics, visible-light, lenses Title: Does the order of analysis matter when considering multiple lenses and mirrors? Say we have an object $O$ that is to the left of two lenses. Let's say that the first lens $L_1$ is to the left of $L_2$, the second lens. Assuming we know the type of each lens (concave or convex), the focal length of each lens and the distances between the object and the lenses, we could determine the final image as follows: First, use the lens equation to find the image of the object through $L_1$, and call this image $I_1$. Then you can treat $I_1$ as the object of the second lens $L_2$ by using the distance between $I_1$ and $L_2$. Doing so will produce your final image $I_2$. What if I decided to go through the same process, but by finding the image that goes through $L_2$ first and then using that image as the object for $L_1$? Does this calculation correspond to a valid image? If not, then why? Also, I ask the same questions for the case when $L_1$ is a lens and $L_2$ is a mirror. I am sorry if this is somewhat basic, but it is bothering me. Thank you for your help in advance! I don't know if it's basic. But you could have done an example to see that they are not the same. If you wanted to prove that two ways are equivalent, that's hard because you must show that they are the same for all possible cases. However, if you want to check that it is false, you just need one case in which the equality doesn't hold. So, if you invent any example, you'll easily check that the order does matter. Now let's see why. The lens equation is $$ \frac{-1}{a}+\frac{1}{a'}=\frac{1}{f'}$$ with $a=$ distance to the object; $a'=$ distance to the image; and $f'=$ focal length, all of them measured from the origin.
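The sequential procedure described above can be checked numerically. The sketch below is a minimal example with made-up distances (object 30 units left of $L_1$, $f_1 = 10$, $f_2 = 20$, lenses 50 units apart), using the Gaussian sign convention $1/s_o + 1/s_i = 1/f$, which is equivalent to the answer's equation once the sign of $a$ is folded in. It also runs the "reverse order" the questioner proposes and shows the two results disagree.

```python
def image_distance(s_o, f):
    """Thin-lens equation 1/s_o + 1/s_i = 1/f (Gaussian convention:
    s_o > 0 for a real object left of the lens, s_i > 0 for a real
    image right of the lens)."""
    return 1.0 / (1.0 / f - 1.0 / s_o)

def two_lens_image(s_o1, f1, f2, d):
    """Image through lens 1 then lens 2 (separation d);
    returns the final image distance measured from lens 2."""
    s_i1 = image_distance(s_o1, f1)
    s_o2 = d - s_i1                 # image of L1 acts as object for L2
    return image_distance(s_o2, f2)

# Correct order: L1 first, then L2
forward = two_lens_image(30.0, 10.0, 20.0, 50.0)

# Questioner's alternative: image through L2 first (object is 30 + 50 = 80
# units left of L2), then feed that image back through L1
s_i2 = image_distance(80.0, 20.0)
s_o1_virtual = -(50.0 + s_i2)       # that image sits right of L1, so s_o < 0
wrong_way = image_distance(s_o1_virtual, 10.0)
```

With these numbers `forward` is 140/3 ≈ 46.67 while `wrong_way` is about 8.85, confirming that the order of analysis follows the direction the light actually travels.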
{ "domain": "physics.stackexchange", "id": 56788, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, optics, visible-light, lenses", "url": null }
c++, performance, c++17 int main() { constexpr std::array<utils::WordPair, 5> my_map{{ {"&", "&amp;"}, {"<", "&lt;"}, {">", "&gt;"}, {"'", "&apos;"}, {"\"", "&quot;"} }}; std::string my_string{ "This & that aren't \"<>\"." }; // use a lambda for clarity auto repl = [&my_string](const utils::WordPair &p){ utils::replace_all(my_string, p); }; std::for_each(my_map.cbegin(), my_map.cend(), repl); std::cout << my_string << '\n'; }
{ "domain": "codereview.stackexchange", "id": 37707, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, performance, c++17", "url": null }
electrochemistry, redox Redox properties can be well described with redox potentials attributed to a given reaction. It is essentially the same concept that you are talking about: we assign a number (conveniently, it is an actual measurable potential) to an electrode reaction which includes an oxidized and a reduced form of the species we are talking about. This number, the redox potential, shows how strong an oxidizer/reducer the corresponding form is, and just as you guess, strong oxidizers generally have a weak reducer pair. The $\ce{HCl}$ formation reaction is generally a radical chain reaction, triggered e.g. by light, and not a redox reaction. The key here is to split the $\ce{Cl2}$ (e.g. photochemically); the obtained free radicals then induce a chain reaction with elementary steps like $\ce{Cl + H2 -> HCl + H}$ or $\ce{H + Cl2 -> HCl + Cl}$ The two gases mixed are more or less inert in total darkness.
{ "domain": "chemistry.stackexchange", "id": 1319, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electrochemistry, redox", "url": null }
javascript, node.js, express.js, mongoose Problem I am trying to solve As you can see this code clearly violates DRY. Except for the mongoose database connector, everything else in the 2 files is exactly the same. How can I write this in a cleaner way such that the db connector is abstracted away? So I kind of figured this out. The db connector can be passed around like just another variable. Silly I didn't realise this before. I created a new file to abstract the model. dbOperations.js const util = require('../../helper/util'); // response helpers (path assumed) exports.addToDB = async function (model, dataObj, listName, res) { model.update({ userid: dataObj.userID }, // computed key [listName]: without the brackets the literal string "listName" would be used { $addToSet: { [listName]: { $each: dataObj.playerID } } }, { upsert: true }, function (err, data) { if (!err && data) { util.successResponder(res, successText); } else { util.serverErrorResponder(res, errorOccured); } }); }; teamController.js var mongoose = require('mongoose'); const teamModel = mongoose.model('teamModel'); const dbOperations = require('../../helper/dbOperations'); exports.addPlayerToTeam = function (req, res) { // key names must match what addToDB reads from dataObj let dataObj = {userID: req.body.teamID, playerID: req.body.playerList}; dbOperations.addToDB(teamModel, dataObj, 'teamList', res); } And similarly for the other file as well. Any other files which follow a similar schema can use it. There is probably a better way to do this so that this can be generalised further to accommodate other schema types.
{ "domain": "codereview.stackexchange", "id": 31253, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, node.js, express.js, mongoose", "url": null }
# Lecture 24 (March 25, 2022) ### Ito process as a stochastic differential equation Recall that the Itô process is $dx_t = \mu(x_t,t) dt + \sigma(x_t,t)dw_t$ where $w_t$ is the standard BM with $w_0 = 0$. If $G(x,t)$ is twice differentiable, Itô's formula gives: $dG(x_t,t) = \left(\frac{\partial G}{\partial x}\mu + \frac{\partial G}{\partial t} + \frac{1}{2}\frac{\partial^2 G}{\partial x^2}\sigma^2\right)dt + \sigma\frac{\partial G}{\partial x} dw_t$ For example: $dx_t = \mu dt + \sigma dw_t$ If $y_t = G(x,t) = x_t^2$, then $\frac{\partial G}{\partial x} = 2x \quad \frac{\partial^2 G}{\partial x^2} = 2 \quad \frac{\partial G}{\partial t} = 0$ \begin{aligned} dy_t &= (2x_t \mu + \frac{1}{2}\times 2\sigma^2) dt + 2x_t\sigma dw_t \\ &=(2x_t\mu + \sigma^2)dt + 2x_t\sigma dw_t \end{aligned} Another example If $\mu = 0$ and $\sigma = 1$, then $dx_t = dw_t$. This leads to $d(w_t^2) = dt + 2 w_t dw_t$ The term $dt$, which should not appear in classical differential calculus, comes from the quadratic variation of $w_t$. Another example: Geometric Brownian motion. Consider the stock price at time $t$; it satisfies $dP_t = \mu P_t dt + \sigma P_t dw_t$
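The claim that the extra $dt$ term comes from the quadratic variation of $w_t$ can be checked numerically. The sketch below (not part of the lecture; parameters are arbitrary) simulates one Brownian path, verifies the discrete identity $w_T^2 = \sum (\Delta w)^2 + 2\sum w_k \Delta w$, and confirms that the quadratic variation $\sum (\Delta w)^2$ is close to $T$.

```python
import math
import random

random.seed(0)
T, N = 1.0, 100_000
dt = T / N

w = 0.0
quad_var = 0.0   # sum of (dw)^2 -> converges to T, the "dt" term
ito_int = 0.0    # sum of 2 * w * dw -> the stochastic integral
for _ in range(N):
    dw = random.gauss(0.0, math.sqrt(dt))
    quad_var += dw * dw
    ito_int += 2.0 * w * dw   # uses w *before* the increment (Ito convention)
    w += dw

# Telescoping identity (exact up to floating-point rounding):
#   w_T^2 = sum (dw)^2 + 2 sum w dw
```

The identity holds exactly by algebra; what is probabilistic is that `quad_var` concentrates around $T$, which is precisely why $d(w_t^2) = dt + 2w_t\,dw_t$ carries a $dt$ term.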
{ "domain": "github.io", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9863631631151012, "lm_q1q2_score": 0.8041152968714368, "lm_q2_score": 0.815232489352, "openwebmath_perplexity": 1385.275030697945, "openwebmath_score": 0.9994842410087585, "tags": null, "url": "https://fancunwei95.github.io/stats556/Lecture24/" }
efficiency, linear-programming, integer-programming, modelling Now you can test feasibility, i.e., test whether it is possible to pack all of the balls into some given number of boxes. For example, suppose we have 5 big balls and 10 little balls and 10 boxes. Then there will be at most about 300 legal combinations (probably fewer), so there will be about 3000 binary variables. I don't know whether PuLP will be effective, but you could give it a try. Exact cover Even better yet, one can note that your problem is an instance of the exact cover problem. The universe is the set of balls; each combination is a subset of the universe; and we want to find a way to cover the universe by a disjoint union of the combinations. There are dedicated algorithms for the exact cover problem that might be faster than expressing this as an integer linear program and applying an ILP solver.
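The exact-cover formulation above can be sketched with a brute-force solver. This is a toy illustration on a hypothetical tiny instance (balls 1–4 and made-up legal combinations); real instances should use a dedicated algorithm such as Knuth's Algorithm X with dancing links, since brute force is exponential in the number of subsets.

```python
from itertools import combinations

def exact_cover(universe, subsets):
    """Return a selection of pairwise-disjoint subsets whose union is
    `universe`, or None if no exact cover exists. Brute force: try every
    selection, smallest first."""
    universe = frozenset(universe)
    for r in range(1, len(subsets) + 1):
        for combo in combinations(subsets, r):
            chosen = [frozenset(s) for s in combo]
            # disjoint iff the sizes add up; then check the union
            if sum(len(s) for s in chosen) == len(universe) \
               and frozenset().union(*chosen) == universe:
                return list(combo)
    return None

# Balls {1, 2, 3, 4}; each subset is one legal way to fill a box
subsets = [{1, 2}, {3, 4}, {1, 3}, {2}, {4}]
cover = exact_cover({1, 2, 3, 4}, subsets)
```

Here `cover` is a disjoint family of combinations that uses every ball exactly once, which is exactly the packing asked for.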
{ "domain": "cs.stackexchange", "id": 6006, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "efficiency, linear-programming, integer-programming, modelling", "url": null }
navigation, move-base Title: Using callbacks with simple action clients Hi everyone! I am trying to use a goal callback and a feedback callback with my move base action client. I am getting a really long and hard to understand error though. I was hoping someone might be able to help me decipher it. Here is a snippet of the code: void GoToBehavior::goalCallback(const actionlib::SimpleClientGoalState& state, const move_base_msgs::MoveBaseActionResult::ConstPtr& result) { if(state == actionlib::SimpleClientGoalState::SUCCEEDED) ROS_INFO("Goal reached!"); else ROS_INFO("Goal failed"); } void GoToBehavior::feebackCallback(const move_base_msgs::MoveBaseActionFeedback::ConstPtr& feedback){ } void GoToBehavior::execute(){ ROS_INFO("Sending goal"); ac->sendGoal(goalMsg, boost::bind(&GoToBehavior::goalCallback, this, _1, _2), MoveBaseClient::SimpleActiveCallback(), boost::bind(&GoToBehavior::feebackCallback, this, _1)); } where ac is of type: actionlib::SimpleActionClient<move_base_msgs::MoveBaseAction> The error I am getting is this:
{ "domain": "robotics.stackexchange", "id": 20766, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "navigation, move-base", "url": null }
[Worksheet fragments on double integrals; the extracted text is garbled. Recoverable statements:] Evaluate the double integral $\iint_D (x^2 + y)\,dA$ by using the easier order of integration. Evaluate $\iint_D (x^2 - y^2)\,dA$ by using the easier order of integration; the region $D$ is shown in the following figure. Rewrite the integrals and write the bounds for the reverse order of integration. A worksheet illustrating the estimation of definite integrals; $f$ is continuous on a region $D$. Find the area of the region: $A = \int_0^{\pi/3}\int^{2+2\cos\theta} r\,dr\,d\theta$ (inner lower limit unrecoverable). An introduction to evaluating an improper integral and using the value of an integral to find other values. Sketch the regions in $x$, $y$ coordinates.
{ "domain": "filmisdead.org", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9697854129326061, "lm_q1q2_score": 0.8081460327572522, "lm_q2_score": 0.8333245911726382, "openwebmath_perplexity": 2819.737226168277, "openwebmath_score": 0.957338809967041, "tags": null, "url": "http://filmisdead.org/thinkorswim-on-jad/double-integral-worksheet-with-answers-4b8ba2" }
fluid-dynamics, waves, acoustics, boundary-conditions, resonance Why should it not occur as it leaves the pipe? Why must it wait to diffract? Is this due to the boundary being vague and smoothed out, as a requirement of the continuity of pressure and velocity, and thus the reflected wave is created further out? As I type this, I feel more convinced than before, but am still not entirely sure. The literature on this is very sparse. Any response that either addresses these concerns or explains it differently is appreciated. I agree with you that the quoted explanation is pretty poor, for the reasons you gave. Here's a figure from my own book (free online).
{ "domain": "physics.stackexchange", "id": 51838, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "fluid-dynamics, waves, acoustics, boundary-conditions, resonance", "url": null }
Example: $f(x) = 1$ : The value of the integral is $\frac{\pi}{2} = 1.57079\dots$. Using left-Riemann summation (and halving $w_0$), with $h = 1/10$, and stopping when $k=20$ (because $1-x_k$ and $w_k$ are about $10^{-5}$), the sum is $1.5659 \dots$. Continuing on to $k = 27$ obtains $1.57079\dots$. • With $h = 1/20$, stopping at $k = 40$ ($1 - x_k$ around $10^{-5}$ and $w_k$ around $10^{-36}$), obtains $1.56504\dots$. • With $h = 1/20$, stopping at $k = 54$ ($1 - x_k < 10^{-6}$ and $w_k$ around $10^{-150}$), obtains $1.57078\dots$. (An arbitrary precision calculation gives an error of about $5 \times 10^{-12}$ when we sum $70$ terms.) We're only summing tens of terms to get these results... • The double-exponential quadrature of Takahasi and Mori is indeed a good way to flatten singularities at either end of the integration interval, but one needs to be careful with evaluations near the endpoints, as pointed out in their paper. – J. M. isn't a mathematician Sep 11 '17 at 14:56 • @J.M.isnotamathematician : I agree. I don't think I've hidden that. I repeatedly comment on the difficulty of representing $1-x_k$ and $w_k$ for large $k$... – Eric Towers Sep 11 '17 at 15:19 I like Professor Vector's method, but here is a way to take advantage of the integrability of $\frac1{\sqrt{1-x^2}}$ on $(0,1)$.
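The Takahasi–Mori double-exponential (tanh-sinh) rule mentioned in the comments can be sketched in a few lines. Assumptions: this computes $\int_{-1}^{1} dx/\sqrt{1-x^2} = \pi$ (twice the answer's $(0,1)$ integral, by symmetry), and the guard that skips nodes where $x_k$ has saturated to $\pm 1$ in double precision is exactly the endpoint-representation issue the discussion warns about.

```python
import math

def tanh_sinh(f, h=0.1, kmax=40):
    """Double-exponential (tanh-sinh) quadrature for int_{-1}^{1} f(x) dx.
    Nodes x_k = tanh((pi/2) sinh(k h)) crowd doubly exponentially toward
    the endpoints, flattening endpoint singularities."""
    total = 0.0
    for k in range(-kmax, kmax + 1):
        t = 0.5 * math.pi * math.sinh(k * h)
        x = math.tanh(t)
        if 1.0 - x * x <= 0.0:
            continue  # node saturated to +-1 in doubles; weight is negligible
        w = 0.5 * math.pi * math.cosh(k * h) / math.cosh(t) ** 2
        total += w * f(x)
    return h * total

approx = tanh_sinh(lambda x: 1.0 / math.sqrt(1.0 - x * x))
```

With $h = 1/10$ this already matches $\pi$ to several digits despite the integrable singularities at both endpoints, in line with the accuracies quoted above.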
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9711290913825541, "lm_q1q2_score": 0.8348444751814098, "lm_q2_score": 0.8596637487122111, "openwebmath_perplexity": 335.2277592628072, "openwebmath_score": 0.9098044633865356, "tags": null, "url": "https://math.stackexchange.com/questions/2425070/is-there-a-way-to-deal-with-this-singularity-in-numerical-integration" }
inorganic-chemistry, acid-base, experimental-chemistry, aqueous-solution, titration Suppose they have given different values. Suppose the values given are: $\mathrm{pH}_{(A)} = 9.55$ and $\mathrm{pH}_{(B)} = 9.45$. Assume for $\ce{NH4+/NH3}$ system, $\mathrm{p}K_\mathrm{a} = 9.25$ Now we can use the Henderson–Hasselbalch equation, $\mathrm{pH} = \mathrm{p}K_\mathrm{a} + \log \left(\frac{\text{[base]}}{\text{[acid]}}\right)$, to calculate each concentration (acid and base here are $\ce{NH4Cl}$ and $\ce{NH3}$, respectively): $$9.55 = 9.25 + \log \left(\frac{\frac{10a-17.03}{34.06}}{0.50}\right) = 9.25 + \log \left(\frac{10a-17.03}{17.03}\right)\\ = 9.25 - 1.231 + \log (10a-17.03) $$ $$\log (10a-17.03) = 9.55 - 9.25 + 1.231 = 1.531 \\ \Rightarrow \ \therefore \ a = \frac{1}{10}(33.96 + 17.03) = 5.099$$ Similarly, for Buffer B: $$9.45 = 9.25 + \log \left(\frac{\frac{10b-17.03}{34.06}}{0.50}\right) = 9.25 + \log \left(\frac{10b-17.03}{17.03}\right)\\ = 9.25 - 1.231 + \log (10b-17.03) $$
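The arithmetic for Buffer A can be double-checked in a couple of lines. This sketch just re-solves the Henderson–Hasselbalch equation for $a$ with the same numbers as the worked example and plugs the result back in.

```python
import math

pKa = 9.25
pH_A = 9.55

# Buffer A: pH = pKa + log10(((10a - 17.03)/34.06) / 0.50)
#                = pKa + log10((10a - 17.03) / 17.03)
# Solve for a:
a = (10 ** (pH_A - pKa + math.log10(17.03)) + 17.03) / 10

# Plug back in to confirm the buffer pH is recovered
pH_check = pKa + math.log10((10 * a - 17.03) / 17.03)
```

The computed `a` agrees with the worked value of 5.099 (small differences come from the intermediate rounding of $\log 17.03$ to 1.231 in the hand calculation).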
{ "domain": "chemistry.stackexchange", "id": 13978, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "inorganic-chemistry, acid-base, experimental-chemistry, aqueous-solution, titration", "url": null }
organic-chemistry, aromatic-compounds, stability, carbocation Title: Why is cyclopentadiene more stable than its aromatic counterpart cyclopentadienyl anion? What will be the order of stability? The answer key says that the stability order is: cyclopentadiene > cyclopentadienyl anion > cyclopentadiene cation According the me it should be: cyclopentadienyl anion > cyclopentadiene > cyclopentadienyl cation because the order of stability is aromatic > non-aromatic > anti-aromatic. This is a nice question because it forces you to step back and think rather than fall into the obvious trap. You are entirely correct that aromatic compounds are more stable than non aromatic compounds which are more stable than anti aromatic compounds assuming that all other things are equal. In this case, all other things are not equal because one of the species is neutral while the other two are charged. Having a positive or negative charge on carbon is not very favourable, even if the negative charge does make the species aromatic. The $\text{p}K_\text{a}$ of cyclopentadiene in DMSO is 18[1] which means the equilibrium constant for dissociation of cyclopentadiene into the cyclopentadienyl anion and a hydrogen ion (which will actually be attached to a DMSO molecule) is $10^{-18}$! In other words, cyclopentadiene is much more stable than the cyclopentadienyl anion.
{ "domain": "chemistry.stackexchange", "id": 5666, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "organic-chemistry, aromatic-compounds, stability, carbocation", "url": null }
# Sallen Key Butterworth Response I attempted to design a low-pass 2nd-order Sallen-Key filter that exhibits a Butterworth response. Unfortunately I'm getting confused in the math that describes the response. My design has a quality factor of approximately 0.707 and a cut-off frequency of 2 kHz, which to my knowledge is equivalent to a Butterworth response. I looked up the tables for a 2nd-order Butterworth response and obtained the polynomial: $$s^2+1.414s+1$$ Does my pole frequency wn have to be equal to 1 for my design to exhibit a Butterworth response? Thanks. Circuit • No, that's the normalized response with wn = 1. They're designed that way using tables, then you just shift the frequency and can have all the poles. If your Q is 0.707 then it's a butter (for order 2). Check here en.wikipedia.org/wiki/… – Andrés Apr 10 '18 at 20:15 • @Andrés Please do not answer questions in the comment section. It even says so when you start writing one. The user can't accept it if it's correct, so it breaks the whole idea with Stack Exchange. – pipe Apr 11 '18 at 6:16 Your Sallen-Key filter has a gain of 1, hence it possesses this transfer function: - $\dfrac{V_{OUT}}{V_{IN}} = \dfrac{\omega_n^2}{s^2+2\zeta\omega_n s+\omega_n^2}$ So, if $\omega_n$ (the natural resonant frequency) is normalized to 1 you get: - $\dfrac{V_{OUT}}{V_{IN}} = \dfrac{1}{s^2+2\zeta s+1}$ where $2\zeta = \dfrac{1}{Q}$ If your Q = 0.707, the inverse is 1.414 (as seen in your polynomial). It doesn't matter what value $\omega_n$ actually is; for a Butterworth response Q = $\dfrac{1}{\sqrt2}$
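That $\omega_n$ need not be 1 is easy to confirm numerically. The sketch below evaluates $|H(j\omega)|$ for the transfer function in the answer with Q = $1/\sqrt 2$ and $\omega_n = 2\pi \cdot 2000$ rad/s (matching the 2 kHz design): the gain is exactly $-3$ dB at $\omega_n$ and the magnitude never exceeds the DC gain, i.e. the response is maximally flat.

```python
import math

fn = 2000.0                   # cut-off frequency from the question, in Hz
wn = 2 * math.pi * fn
Q = 1 / math.sqrt(2)          # Butterworth Q for a 2nd-order section

def mag(w):
    """|H(jw)| for H(s) = wn^2 / (s^2 + (wn/Q) s + wn^2)."""
    s = complex(0.0, w)
    return abs(wn ** 2 / (s ** 2 + (wn / Q) * s + wn ** 2))

gain_at_wn_db = 20 * math.log10(mag(wn))       # -3.01 dB at the pole frequency

# Maximally flat: |H(jw)| <= 1 for all w (no peaking), since |H|^2 = 1/(1+(w/wn)^4)
peak = max(mag(wn * x / 100.0) for x in range(1, 300))
```

For Q = $1/\sqrt 2$ the magnitude simplifies to $1/\sqrt{1 + (\omega/\omega_n)^4}$, which is the Butterworth shape at any $\omega_n$.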
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9719924769600771, "lm_q1q2_score": 0.8162150992252531, "lm_q2_score": 0.8397339676722393, "openwebmath_perplexity": 1890.6398661354, "openwebmath_score": 0.8112914562225342, "tags": null, "url": "https://electronics.stackexchange.com/questions/367767/sallen-key-butterworth-response" }
java, performance, algorithm, programming-challenge int powB = (tmpB+1)*(tmpB+1); for(int i=2;i < tmpB;i++){ if(powB%i==0){
{ "domain": "codereview.stackexchange", "id": 25748, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, performance, algorithm, programming-challenge", "url": null }
momentum, big-bang, speed, relative-motion Title: Origin of motion and relative speed of bodies in the universe Charged particles can hit the earth at relativistic speeds. But it seems that all large bodies have fairly low relative speed. Of course, speed can increase considerably when a body orbits close to a massive object, but then it will not travel very far, and it can be discounted by averaging the speed over a long enough time (maybe a year). The Earth cruises around the Sun at 30 km/s, the sun cruises at 200 km/s and the Milky Way at 600 km/s. Not that much. I have two somewhat opposite questions: What do we know of the relative speeds of massive bodies in the universe, small or large (not particles)? I am not sure whether it was the proper way to state the question. Also, do relative speeds get very high if we correct out the part due to universe expansion? Are there braking phenomena? On the other hand, what initiated that motion? If the initial soup had been homogeneous, the coalescence of randomly moving particles should have produced structures essentially at rest (ahem: where is the energy going?) with respect to each other. There were small variations in temperature, but how should that create speed differences? Or did it cause large-scale streams that coalesced into moving bodies? Even if some speed is due to contraction of large-scale rotating structures, it does require initial momentum to exist. To put it together, do we have measured speed statistics in conformance with universe evolution models? What does it say about speed? Sorry if the questions are not well stated; that is my best. A reference to a paper for non-specialists would do too.
{ "domain": "physics.stackexchange", "id": 9089, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "momentum, big-bang, speed, relative-motion", "url": null }
c#, beginner long calcA = Math.Abs((long)a - compareValue); long calcB = Math.Abs((long)b - compareValue); if (calcA == calcB) { return 0; } if (calcA < calcB) { return a; } return b; }
{ "domain": "codereview.stackexchange", "id": 13176, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, beginner", "url": null }
Anyway, the mean for that is easier calculated using the probability we just found. If we'd select a random number of posts $t$, by mean we'd find $tP(X=3)$ Trips. Let's find for what number of posts the expected number of Trips is $1$. \begin{align} 1&=tP(X=3)\\ &=0.009t\\ t&=111.1111 \end{align} By mean, every $111.1111$ posts you'll have a Trips. For Quads it's equivalent. \begin{align} 1&=t'P(X=4)\\ &=0.0009t'\\ t'&=1111.111 \end{align} By mean, every $1111.111$ posts you'll have a Quads. A simpler way to answer a and b is to just ignore the rest of the string and consider the final n+1 digits (assuming post numbers in a given thread are effectively random, and thus independent, which for predictive purposes while looking in one thread they are). A. Calling the last digits q, r, s, t we have R =/= q: .9 S=q: .1 T = q&s: .1 Multiply ??? Profit B is similar C. Has no finite answer. For any number N there is SOME calculable probability that trips does not occur. Thus for no N can the odds of trips (1- p(no trips)) be 1. Analogously, how many times do you have to flip a coin to GUARANTEE heads? Pro tip: you can't.
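The probability 0.009 and the 111.1-post expected wait can be confirmed by exhaustively enumerating the last four digits, assuming "trips" means exactly the last three digits match while the fourth-from-last differs (the R/S/T argument above).

```python
def ends_in_trips(n):
    """Exactly the last three digits equal, fourth-from-last different."""
    d = f"{n:04d}"
    return d[1] == d[2] == d[3] and d[0] != d[1]

# Enumerate every possible 4-digit ending
trips = sum(ends_in_trips(n) for n in range(10_000))
p_trips = trips / 10_000        # 90 / 10000 = 0.009
expected_wait = 1 / p_trips     # about 111.1 posts per Trips
```

There are 10 choices for the repeated digit and 9 for the preceding one, giving 90 favourable endings out of 10 000; the quads case is the same calculation one digit deeper.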
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9796676436891864, "lm_q1q2_score": 0.8184961821156341, "lm_q2_score": 0.8354835309589074, "openwebmath_perplexity": 484.5448712342675, "openwebmath_score": 0.5669775009155273, "tags": null, "url": "https://math.stackexchange.com/questions/1077057/probability-of-posting-a-quad-and-trip-on-4chan" }
# Does $\bigcap_{n=1}^{+\infty}(-\frac{1}{n},\frac{1}{n}) = \varnothing$? When I learn the below theorem: If $I_n$ is closed interval, and $I_{n+1} \subset I_n$, then $$\bigcap I_n \ne \varnothing$$ and someone says if we replace closed interval with open interval, can construct counter-example. So I have tried to construct the one: Does $$\bigcap_{n=1}^{+\infty}\left(-\frac{1}{n},\frac{1}{n}\right) = \varnothing\quad?$$ Thanks very much. • I am 100% certain that this question was asked at least twice before. Jan 5 '13 at 14:21 • Yes and I am searching for that link, Asaf. Jan 5 '13 at 14:24 • 10 answers? really? Jan 5 '13 at 20:17 • its obviously non-empty? – user85461 Jul 19 '13 at 18:41 • Related post: math.stackexchange.com/questions/1304402/… Jan 8 '16 at 12:40 No. It is not empty. Since $0\in (-1/n, 1/n)$ for all $n$, so $0\in\bigcap_{n=1}^\infty \left(-\frac{1}{n},\frac{1}{n}\right)$.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9912886160570101, "lm_q1q2_score": 0.8344921847590692, "lm_q2_score": 0.8418256512199033, "openwebmath_perplexity": 336.4824426590478, "openwebmath_score": 0.9073121547698975, "tags": null, "url": "https://math.stackexchange.com/questions/270949/does-bigcap-n-1-infty-frac1n-frac1n-varnothing/270951" }
newtonian-mechanics, angular-momentum, conservation-laws, collision, rigid-body-dynamics Title: Elastic collision of rotating bodies How would you explain in detail elastic collision of two rotating bodies to someone with basic understanding of classical mechanics? I'm writing simple physics engine, but now only simulating non-rotating spheres and would like to step it up a bit. So what reading do you recommend so I could understand what exactly is happening when two spheres or boxes collide (perfectly in 2 dimensions)? I worked on a physics engine written in C# that does just this. Here are my notes on this topic. Objects have both translational and rotational momentum. When two objects collide, the overall algorithm goes like this: 1> Find the total momentum of both objects. Calculate the translational and rotational momentum, the vector sum of this is the total momentum of the object. 2> Split the momentum using the usual momentum splitting equation you would ordinarily use. (As in here) Each object now has their new momentum. The next step is to decide how much of that momentum is translational and rotational. 3> Imagine a vector A which goes from the point of collision to the center of mass of the object that was hit. The component of the incoming momentum vector which is parallel with A forms the new translational momentum vector, the rest of the vector represents rotational momentum. The extra notes I have linked to show more details on my mathematical working, and also a description of how to handle inelastic collisions. You can find the physics engine here, and an implementation of the collision handling here
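The "usual momentum splitting equation" referenced in step 2 is, in the simplest case, the standard pair of 1-D elastic-collision formulas. As a sanity check, this sketch (made-up masses and velocities, not code from the linked engine) computes the post-collision velocities and verifies that both momentum and kinetic energy are conserved:

```python
def elastic_1d(m1, v1, m2, v2):
    """Post-collision velocities for a 1-D perfectly elastic collision."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

m1, v1, m2, v2 = 2.0, 3.0, 1.0, -1.0   # arbitrary example values
u1, u2 = elastic_1d(m1, v1, m2, v2)

p_before = m1 * v1 + m2 * v2
p_after = m1 * u1 + m2 * u2
ke_before = 0.5 * m1 * v1 ** 2 + 0.5 * m2 * v2 ** 2
ke_after = 0.5 * m1 * u1 ** 2 + 0.5 * m2 * u2 ** 2
```

In the full 2-D rotating case the same conservation laws apply, but the impulse is resolved along the collision normal and the offset from the centre of mass feeds the angular part, as step 3 describes.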
{ "domain": "physics.stackexchange", "id": 8878, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "newtonian-mechanics, angular-momentum, conservation-laws, collision, rigid-body-dynamics", "url": null }
go I will rewrite it. The config value is created in that function, so it's safe to assume that this function will need to check the config values. How am I going to reliably unit-test the package, if I can't pass in all possible combinations of config? Sure, I could write a ton of code setting/unsetting environment variables, and call this Start function over and over again, but am I really unit-testing the code then? Surely, if the config package is broken, then all other tests become unreliable to say the least. What about a change to the config package? How awful it'd be to rewrite all the tests just to make sure the changes to the config are reflected there, too? It's going to be a PITA, and an enormous waste of time and resources. PS: code like config.NewConfig() will also get rewritten by a lot of gophers. This is referred to as stuttering code (read this). I know I'm getting config, because I'm using the config package. The function name New is enough information, surely. config.NewConfig reads like "from config, get me new config" as opposed to "hey, config, give me a new value". This might be personal preference, but I generally find it better to use the function Get for config, rather than New. Config is, IMO, an immutable set of data, not something that does something, not something that needs constructing, it needs to be fetched/loaded. For that reason, I'd write config.Get() and that func would look something like this: package config // Conf - global config struct type Conf struct { App // config per package } // App - config specific to the App package type App struct { Locale string } func Get() (*Conf, error) { // get values, return... }
{ "domain": "codereview.stackexchange", "id": 35745, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "go", "url": null }
• Well, a function is necessarily a composite function when you can't simplify it with known simplification techniques to arrive at a function comprised of a single function (a lot of functions have alternate definitions that you can use for simplification/complication later in your math education, like the link between exp(x) and trig functions). For example, (x^2)^2 isn't necessarily a composite function because with simplification, you can end up with x^4, which is a single function. Something that is necessarily a composite function is something like ln(sin(x)), which you can't simplify with elementary techniques to end up with a single function. • Say we have w(u(x)). We can take d/dx of this using the chain rule formula, and it becomes w'(u(x)) * u'(x). But since this example starts talking about situations where you can use three functions instead of two, how would you find the derivative then? If g(x) = h(w(u(x))), does d/dx of g(x) become = h'(w(u(x))) * w'(u(x)) * u'(x) ? I did some working out and it seemed that it cannot, and that it becomes: = h'(w(u(x))) * u'(x), without the middle term I had above, w'(u(x)). I don't understand why I thought it should be in there. Is this correct? What is the proper way to think about the chain rule with more than 2 functions involved in the composition? • You were right the first time; why did you decide it was: h'(w(u(x))) • u'(x)? One way to remember this is to think of taking the derivative in steps: Let f(x) = w(u(x)), so g(x) = h(f(x)) Apply the chain rule once: g'(x) = h'(f(x)) • f'(x) f'(x) = w'(u(x)) • u'(x)
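The three-factor chain rule g'(x) = h'(w(u(x))) · w'(u(x)) · u'(x) can be sanity-checked numerically against a central difference. This sketch uses a hypothetical composition h = sqrt, w = sin, u = x², and also evaluates the learner's guess that drops the middle factor, to show it does not match:

```python
import math

u  = lambda x: x * x
du = lambda x: 2 * x
w  = math.sin
dw = math.cos
h  = math.sqrt
dh = lambda x: 0.5 / math.sqrt(x)

g = lambda x: h(w(u(x)))          # g(x) = sqrt(sin(x^2))

def g_prime(x):
    # full chain rule: h'(w(u(x))) * w'(u(x)) * u'(x)
    return dh(w(u(x))) * dw(u(x)) * du(x)

x0 = 0.7
eps = 1e-6
numeric = (g(x0 + eps) - g(x0 - eps)) / (2 * eps)   # central difference

missing_middle = dh(w(u(x0))) * du(x0)              # the (incorrect) guess
```

The full three-factor product agrees with the numerical derivative; the version missing w'(u(x)) visibly does not.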
{ "domain": "khanacademy.org", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9763105300791785, "lm_q1q2_score": 0.8027119122257927, "lm_q2_score": 0.8221891370573388, "openwebmath_perplexity": 1114.2400591632002, "openwebmath_score": 0.8147158622741699, "tags": null, "url": "https://en.khanacademy.org/math/ap-calculus-ab/ab-differentiation-2-new/ab-3-1a/v/recognizing-compositions-of-functions" }
rospy, ros-kinetic And you can calculate the exact average velocity in ReturnLinearVelocity(): def ReturnLinearVelocity(): # Instead of current_stamp, you can use current.header.stamp and current.point.x # if you use a geometry_msgs/PointStamped, for example. # If the stamps are rospy.Time objects, convert the difference to seconds # with (current_stamp - previous_stamp).to_sec() dt = current_stamp - previous_stamp vel_x = (current.x - previous.x) / dt vel_y = (current.y - previous.y) / dt vel_z = (current.z - previous.z) / dt return [vel_x, vel_y, vel_z] and also you can change your code a bit, putting the array of velocities in global and doing the linear velocity computation inside the Callback() function directly to store only the array of velocities, but it's your code, you know what to do! Originally posted by lmathieu with karma: 591 on 2019-10-12 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by haloted on 2019-10-12: @Imathieu Much appreciated for such a detailed explanation and suggestion.
{ "domain": "robotics.stackexchange", "id": 33880, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "rospy, ros-kinetic", "url": null }
qiskit, circuit-construction, quantum-circuit, qasm

Title: Conversion error: from a Qiskit circuit to a QASM string and back

I created a circuit in Qiskit and then converted it into a QASM string. When I try to make a circuit out of the QASM string I get the error:

QasmError: "Cannot find gate definition for 'mcphase', line 3 file circuit.qasm"

I used the following code to convert a circuit into a QASM file:

qasm_file_name = "circuit.qasm"
main_circuit.qasm(formatted=True, filename=qasm_file_name)

Then I loaded the QASM string like so:

circuit = QuantumCircuit.from_qasm_file(qasm_file_name)

After this, I get the QASM error above. I believe the mcphase gate was generated automatically during the conversion to QASM. It seems the mapping from a Qiskit circuit to a QASM string is not reversible. How can this be fixed?

This is a toy QASM file that produces the error.

OPENQASM 2.0;
include "qelib1.inc";
gate ccircuit_87_dg q0,q1,q2,q3,q4,q5,q6,q7,q8,q9,q10,q11,q12,q13,q14,q15,q16,q17,q18,q19,q20 {
  cx q0,q4;
  cu(pi/2,0,pi,0) q0,q4;
  mcphase(pi/8) q0,q4;
  cu(pi/2,0,pi,0) q0,q6;
}
{ "domain": "quantumcomputing.stackexchange", "id": 4383, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "qiskit, circuit-construction, quantum-circuit, qasm", "url": null }
c#, .net, playing-cards, .net-5

In the former case you have defined a getter which will always return a new Rank instance. In the latter case you have defined a getter which will always return the same Rank instance. As I said before, a default value initializer can't be used for instance-level properties inside a struct. But this limitation is not applicable to static properties.

static getter-only property vs static readonly field

If you want to separate the Rank type definition from its predefined instances then it might make sense to create a static Ranks class where you define the well-known constants:

public static class Ranks
{
    public static readonly Rank Ace = new(RankName.Ace, "A");
    public static readonly Rank Two = new(RankName.Two, "2");
    public static readonly Rank Three = new(RankName.Three, "3");
    public static readonly Rank Four = new(RankName.Four, "4");
    public static readonly Rank Five = new(RankName.Five, "5");
    public static readonly Rank Six = new(RankName.Six, "6");
    public static readonly Rank Seven = new(RankName.Seven, "7");
    public static readonly Rank Eight = new(RankName.Eight, "8");
    public static readonly Rank Nine = new(RankName.Nine, "9");
    public static readonly Rank Ten = new(RankName.Ten, "10");
    public static readonly Rank Jack = new(RankName.Jack, "J");
    public static readonly Rank Queen = new(RankName.Queen, "Q");
    public static readonly Rank King = new(RankName.King, "K");
    public static readonly Rank Joker = new(RankName.Joker, "¡J!");
}
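The "new instance per access" versus "one shared instance" distinction is easy to demonstrate in any language with reference semantics. Here is a hypothetical Python analogue (not the C# code under review): a function stands in for an expression-bodied getter, and a module-level constant stands in for the static readonly field. Note that in C# a struct Rank is a value type, so copies are returned either way there; the allocation-per-access point about getters still holds.

```python
class Rank:
    """Toy stand-in for the C# Rank type."""
    def __init__(self, name, symbol):
        self.name, self.symbol = name, symbol

def ace_getter():
    # like "public static Rank Ace => new(...)": builds a fresh instance
    # on every single access
    return Rank("Ace", "A")

# like "public static readonly Rank Ace = new(...)": built once, shared forever
ACE = Rank("Ace", "A")
```

Two calls to ace_getter() yield distinct objects, while ACE is always the same object.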
{ "domain": "codereview.stackexchange", "id": 41507, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, .net, playing-cards, .net-5", "url": null }
The Jarque–Bera test is a statistical goodness-of-fit test that uses the skewness and kurtosis of the sample data to check whether they are consistent with a normal distribution; to be precise, a normal distribution should have zero skewness and zero excess kurtosis. It is a standard feature in pretty well every econometrics package, and it is reported to have maximum local asymptotic power against a class of local alternatives. For a sample of size n with skewness S and kurtosis K, the test statistic is

JB = (n/6) * (S^2 + (K - 3)^2 / 4),

which under the null hypothesis of normality is asymptotically chi-squared distributed with 2 degrees of freedom.

In R the test is available as jarque.bera.test() in the tseries package, as jarque.test() in the moments package, and as jb.norm.test(x) in the normtest package, where the argument x is a numeric vector of data values. The lawstat package provides a robust version of the test; for more details see Gel and Gastwirth (2006). For the original description of the test see Jarque, C. M. and Bera, A. K. (1980), "Efficient tests for normality, homoscedasticity and serial independence of regression residuals", Economics Letters 6, 255–259.
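The statistic itself is simple enough to compute by hand. Here is a small Python sketch (not R), using the population moment definitions of skewness and kurtosis:

```python
def jarque_bera(xs):
    """Jarque-Bera statistic JB = n/6 * (S^2 + (K-3)^2 / 4)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n  # variance (population)
    m3 = sum((x - mean) ** 3 for x in xs) / n  # third central moment
    m4 = sum((x - mean) ** 4 for x in xs) / n  # fourth central moment
    skew = m3 / m2 ** 1.5                      # S
    kurt = m4 / m2 ** 2                        # K (not excess kurtosis)
    return n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)
```

For [1, 2, 3, 4, 5] the skewness is 0 and the kurtosis is 1.7, so JB = 5/6 * (1.3² / 4) ≈ 0.3521. The p-value then comes from the chi-squared distribution with 2 degrees of freedom.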
{ "domain": "jimchristy.com", "id": null, "lm_label": "1. Yes\n2. Yes\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9658995742876885, "lm_q1q2_score": 0.8339495851220217, "lm_q2_score": 0.8633916064586998, "openwebmath_perplexity": 4628.22776329525, "openwebmath_score": 0.5811334848403931, "tags": null, "url": "http://www.jimchristy.com/the-matchmaker-lwqknrg/jarqueberatest-in-r-48ffc2" }
array, vba, excel

Title: Get Worksheet Data Array (Standard Methods)

I'm re-writing my module of Standard Methods. Virtually every project I do begins with grabbing some number of Data Tables and putting them in arrays. So, this is my general "Get Worksheet Data" method(s). As always, particularly interested in maintainability, but all feedback is welcome.

Public Function GetWsDataArray(ByRef wbTarget As Workbook, ByRef wsTarget As Worksheet, ByVal topLeftCellText As String, ByVal useCurrentRegion As Boolean _
    , Optional ByVal searchStartRow As Long = 1, Optional ByVal searchStartColumn As Long = 1 _
    , Optional ByVal searchEndRow As Long = 10, Optional ByVal searchEndColumn As Long = 10) As Variant
    '/ 10x10 is arbitrary search range that should cover almost all typical worksheets
    Dim dataArray As Variant
    dataArray = Array()
    dataArray = GetWsDataRange(wbTarget, wsTarget, topLeftCellText, useCurrentRegion, searchStartRow, searchStartColumn, searchEndRow, searchEndColumn)
    GetWsDataArray = dataArray
End Function

Public Function GetWsDataRange(ByRef wbTarget As Workbook, ByRef wsTarget As Worksheet, ByVal topLeftCellText As String, ByVal useCurrentRegion As Boolean _
    , ByVal searchStartRow As Long, ByVal searchStartColumn As Long _
    , ByVal searchEndRow As Long, ByVal searchEndColumn As Long) As Range

    Dim wbSource As Workbook, wsSource As Worksheet
    Set wbSource = ActiveWorkbook
    Set wsSource = ActiveSheet
    wbTarget.Activate
    wsTarget.Activate
    UnhideWsCellsAndRemoveFilters wsTarget
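The "scan a small top-left window for the header cell, then grab the table that starts there" idea translates to other languages too. Here is a hypothetical Python sketch over a 2D list standing in for a worksheet (string cell values, empty string for blank cells; a crude stand-in for Range.CurrentRegion, not the VBA code above):

```python
def find_header(grid, text, rows=10, cols=10):
    """Scan the top-left rows x cols window for the header cell."""
    for r in range(min(rows, len(grid))):
        for c in range(min(cols, len(grid[r]))):
            if grid[r][c] == text:
                return r, c
    return None

def data_block(grid, text):
    """Extract the contiguous block whose top-left cell contains `text`."""
    hit = find_header(grid, text)
    if hit is None:
        return []
    r0, c0 = hit
    # width: contiguous non-empty header cells to the right of the hit
    width = 0
    while c0 + width < len(grid[r0]) and grid[r0][c0 + width] != "":
        width += 1
    # height: rows whose first column is non-empty
    block = []
    for row in grid[r0:]:
        if c0 >= len(row) or row[c0] == "":
            break
        block.append(row[c0:c0 + width])
    return block
```

With a sheet whose table starts at row 2, column 2, data_block(grid, "ID") returns the header row plus the data rows beneath it.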
{ "domain": "codereview.stackexchange", "id": 18002, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "array, vba, excel", "url": null }
image-processing, sampling, interpolation

Title: What Does 'Zero Order Hold' and 'First Order Hold' Mean?

While studying image magnification in the spatial domain, I have come across this definition of image magnification by replication:

Replication is a zero order hold where each pixel along a scan line is repeated once and then each scan line is repeated.

And the definition for image magnification by interpolation is given as:

Linear interpolation is a first order hold where a straight line is first fitted in between pixels along a row. Then the pixels along each column are interpolated along a straight line.

Can you please explain the terms 'zero order hold' and 'first order hold' in layman's terms, so that I can understand these two definitions? Thanks.

We need to assume the reader knows some basic stuff to answer that. Let's give it a try.

Let's understand the phrase "Zero / First Order Hold". We have the "Zero / First Order" part and the "Hold" part. "Zero / First Order" means the order of the Taylor series of the function we use to interpolate; in other words, the degree of the polynomial we can write the function with. Zero order means the function is constant: we fill in the missing parts with the same value. First order means we can use a linear function to interpolate (a line with a slope). "Hold" means we hold the parameters the same until the next sample.
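In one dimension the two holds are easy to sketch. This illustrative Python example (assuming 2x upsampling) shows replication as a zero order hold and midpoint interpolation as a first order hold:

```python
def zero_order_hold(samples, factor=2):
    """Replication: repeat each sample `factor` times (constant between samples)."""
    out = []
    for s in samples:
        out.extend([s] * factor)
    return out

def first_order_hold(samples):
    """Linear interpolation at 2x: insert the midpoint between each
    pair of neighbouring samples (a straight line fitted between them)."""
    out = [samples[0]]
    for a, b in zip(samples, samples[1:]):
        out.extend([(a + b) / 2, b])
    return out
```

Magnifying an image by replication applies the zero-order version along each row and then along each column; linear interpolation does the same with the first-order version.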
{ "domain": "dsp.stackexchange", "id": 7345, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "image-processing, sampling, interpolation", "url": null }
ros

that's the end of catkin_make. let me know if there is a certain format that I need to follow. the forum instructions said I just need to copy and paste the source code here. Thanks!

[Originally posted](https://answers.ros.org/question/372763/catkin_make-errors-(multiple)/) by FreddyWang on ROS Answers with karma: 1 on 2021-02-25

Post score: 0

Original comments

Comment by gvdhoorn on 2021-02-25:\ I am new to ROS. [..] I am simply following a course in motion planning. have you asked the instructor(s) of the course? Those should be your first point of contact.

Comment by FreddyWang on 2021-02-26: @Loguna @gvhoorn Thanks! I am not sure whether the PCL library has been successfully configured in my Ubuntu system. As I said, I just installed Ubuntu 20.04 in a VM, and I don't remember ever configuring any PCL-related items. Should I go get PCL installed and configured first? Any other library?

It would seem to me that your PCL library package is not properly configured on your PC. That would explain why it cannot find certain types like Points. Do know that catkin_make is basically a debug-and-build tool that builds all the ROS packages contained inside that specific workspace.

Originally posted by loguna with karma: 98 on 2021-02-26

This answer was ACCEPTED on the original site

Post score: 1
{ "domain": "robotics.stackexchange", "id": 36141, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros", "url": null }
java, performance, android, animation

    public static void setImageBitmapWithAnimation(ImageView imageView, Bitmap bitmap) {
        final TransitionDrawable transitionDrawable = new TransitionDrawable(new Drawable[]{
                new ColorDrawable(Color.TRANSPARENT),
                new BitmapDrawable(imageView.getContext().getResources(), bitmap)
        });
        imageView.setImageDrawable(transitionDrawable);
        transitionDrawable.startTransition(DEFAULT_ANIMATION_DURATION);
    }

    public static void setTextColorWithAnimation(final TextView textView, int fromColor, int toColor) {
        ValueAnimator anim = new ValueAnimator();
        anim.setIntValues(fromColor, toColor);
        anim.setEvaluator(new ArgbEvaluator());
        anim.setDuration(DEFAULT_ANIMATION_DURATION);
        anim.addUpdateListener(new ValueAnimator.AnimatorUpdateListener() {
            @Override
            public void onAnimationUpdate(ValueAnimator animation) {
                textView.setTextColor((int) animation.getAnimatedValue());
            }
        });
        anim.start();
    }
}
{ "domain": "codereview.stackexchange", "id": 18536, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, performance, android, animation", "url": null }
scala, fibonacci-sequence

Title: Generating Fibonacci sequences in Scala

Would you consider this to be idiomatic Scala? If not, what would you do to improve it?

def fibonaccis(max: Int): List[Int] = {
  fibonaccis(max, 2, 1)
}

private def fibonaccis(max: Int, prev: Int, prevPrev: Int): List[Int] = prev >= max match {
  case true => List[Int](prevPrev)
  case false => prevPrev :: fibonaccis(max, prev + prevPrev, prev)
}

Naming the parameters prev and prevPrev is weird. Why not current and prev?

You shouldn't need the curly braces here for a function that consists of a single expression:

def fibonaccis(max: Int): List[Int] = fibonaccis(max, 2, 1)

Conventionally, the Fibonacci sequence is said to start like

$$1, 1, 2, 3, 5, 8, \ldots$$

or

$$0, 1, 1, 2, 3, 5, \ldots$$

Your fibonaccis(13) would produce List(1, 2, 3, 5, 8), which I would consider to be missing the first element.

The Fibonacci sequence is an infinite sequence. It would be a shame to limit it to a max value. The most idiomatic way to model it in Scala would be using an infinite lazy stream, just as suggested in the documentation:

def fibFrom(a: Int, b: Int): Stream[Int] = a #:: fibFrom(b, a + b)
val fibonaccis = fibFrom(0, 1)

Alternative implementation:

val fibonaccis: Stream[Int] = 0 #:: 1 #:: fibonaccis.zip(fibonaccis.tail).map(n => n._1 + n._2)

Then, you could do fibonaccis.takeWhile(_ < 13).toList to obtain List(0, 1, 1, 2, 3, 5, 8).
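The same lazy, infinite-sequence idea can be sketched in Python with a generator (an illustrative analogue of the Stream version, not Scala): the sequence is unbounded, and the caller decides where to cut it off.

```python
from itertools import takewhile

def fib_from(a, b):
    """Infinite Fibonacci generator, the analogue of fibFrom(a, b)."""
    while True:
        yield a
        a, b = b, a + b

def fibs_below(limit):
    """The analogue of fibonaccis.takeWhile(_ < limit).toList."""
    return list(takewhile(lambda n: n < limit, fib_from(0, 1)))
```

Because the generator is lazy, only the elements actually consumed by takewhile are ever computed.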
{ "domain": "codereview.stackexchange", "id": 22727, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "scala, fibonacci-sequence", "url": null }
algorithm, computational-geometry, rust

#[test]
fn test_triangle() {
    let p2 = Point::new(0.0, 0.0);
    let p1 = Point::new(0.0, 1.0);
    let p0 = Point::new(1.0, 1.0);
    let t = Triangle::new(p0, p1, p2);
    assert_eq!(t.range_x(), (0.0, 1.0));
    assert_eq!(t.range_y(), (0.0, 1.0));
    assert_eq!(t.p0, p2);
    assert_eq!(t.p1, p1);
    assert_eq!(t.p2, p0);

    // triangle should not contain its vertices
    assert!(!t.contains(p0));
    assert!(!t.contains(p1));
    assert!(!t.contains(p2));

    // triangle should not contain points on any of its sides
    // (containment is strict; the last point below lies outside entirely)
    assert!(!t.contains(Point::new(0.5, 0.5)));
    assert!(!t.contains(Point::new(0.3, 0.3)));
    assert!(!t.contains(Point::new(0.2, 0.2)));
    assert!(!t.contains(Point::new(0.1, 0.1)));
    assert!(!t.contains(Point::new(0.0, 0.1)));
    assert!(!t.contains(Point::new(0.0, 0.2)));
    assert!(!t.contains(Point::new(0.1, 1.0)));
    assert!(!t.contains(Point::new(0.2, 1.0)));
    assert!(!t.contains(Point::new(0.2, 1.1)));

    // strictly inside the triangle
    assert!(t.contains(Point::new(0.5, 0.51)));
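A common way to implement the strict (boundary-excluding) containment that these assertions expect is the sign-of-cross-products test: a point is strictly inside a triangle exactly when it lies on the same side of all three edges. Here is a hypothetical Python sketch of that technique (the original Triangle::contains implementation is not shown, so this is one possible approach, not necessarily the author's):

```python
def cross(o, a, b):
    """2D cross product of vectors (a - o) and (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def contains_strict(p0, p1, p2, q):
    """True iff q is strictly inside triangle p0-p1-p2.

    Vertices and boundary points give a zero cross product for some
    edge, so they are excluded, matching the tests above.
    """
    d0 = cross(p0, p1, q)
    d1 = cross(p1, p2, q)
    d2 = cross(p2, p0, q)
    # all strictly positive or all strictly negative: same side of every edge
    return (d0 > 0 and d1 > 0 and d2 > 0) or (d0 < 0 and d1 < 0 and d2 < 0)
```

With the triangle (1, 1), (0, 1), (0, 0) from the test, the interior point (0.5, 0.51) passes, while the edge point (0.5, 0.5) and the outside point (0.2, 1.1) fail.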
{ "domain": "codereview.stackexchange", "id": 22013, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "algorithm, computational-geometry, rust", "url": null }
kinematics, forward-kinematics

Title: How do I convert link parameters and angles (in kinematics) into transformation matrices in programming logic?

I'm doing robotics research as an undergraduate, and I understand the conceptual math for the most part; however, when it comes to actually implementing code to calculate the forward kinematics for my robot, I am stuck. I'm just not getting the way the book or websites I've found explain it.

I would like to calculate the X-Y-Z angles given the link parameters (Denavit-Hartenberg parameters), such as the following:

$$\begin{array}{ccccc} \bf{i} & \bf{\alpha_{i-1}} & \bf{a_{i-1}} & \bf{d_i} & \bf{\theta_i}\\ \\ 1 & 0 & 0 & 0 & \theta_1\\ 2 & -90^{\circ} & 0 & 0 & \theta_2\\ 3 & 0 & a_2 & d_3 & \theta_3\\ 4 & -90^{\circ} & a_3 & d_4 & \theta_4\\ 5 & 90^{\circ} & 0 & 0 & \theta_5\\ 6 & -90^{\circ} & 0 & 0 & \theta_6\\ \end{array}$$

I don't understand how to turn this table of values into the proper transformation matrices needed to get $^0T_N$, the Cartesian position and rotation of the last link. From there, I'm hoping I can figure out the X-Y-Z angle(s) from reading my book, but any help would be appreciated.

The DH Matrix section of the DH page on Wikipedia has the details.
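Since the table indexes α and a with i−1, it follows the modified (Craig) DH convention, where each row i maps to one homogeneous transform built from a rotation about x by α, a translation along x by a, a rotation about z by θ, and a translation along z by d; ⁰T_N is then the product of the row transforms in order. A minimal pure-Python sketch (assuming that convention):

```python
from math import cos, sin

def dh_transform(alpha, a, d, theta):
    """Modified (Craig) DH transform from frame i-1 to frame i,
    built from alpha_{i-1}, a_{i-1}, d_i, theta_i. Angles in radians."""
    ca, sa = cos(alpha), sin(alpha)
    ct, st = cos(theta), sin(theta)
    return [
        [ct,       -st,      0.0,  a],
        [st * ca,   ct * ca, -sa, -sa * d],
        [st * sa,   ct * sa,  ca,  ca * d],
        [0.0,       0.0,      0.0, 1.0],
    ]

def matmul(A, B):
    """4x4 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(rows):
    """rows: list of (alpha_{i-1}, a_{i-1}, d_i, theta_i) tuples,
    one per line of the DH table. Returns 0_T_N."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]  # identity
    for row in rows:
        T = matmul(T, dh_transform(*row))
    return T
```

The last column of the result is the Cartesian position of the last link; the fixed X-Y-Z angles can then be extracted from the 3x3 rotation block with atan2 formulas from the textbook.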
{ "domain": "robotics.stackexchange", "id": 114, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "kinematics, forward-kinematics", "url": null }
c++, queue, breadth-first-search

for (it = m_Filters.begin(); it != m_Filters.end(); ++it) {}

However, if you're using C++11, consider auto instead of your current iterator. The compiler will determine the correct type, and it's more readable. You may also keep it inside the loop statement.

for (auto it = m_Filters.begin(); it != m_Filters.end(); ++it) {}

Better yet, if you're using C++11, consider a range-based for-loop:

for (auto& it : m_Filters) {}

This:

if (outPinConnections.size() != 0)

should instead use !empty():

if (!outPinConnections.empty())

For here:

for (size_t i = 0; i < outPinConnections.size(); ++i) {}

If this is the size type returned by size(), make it std::size_t since this is C++. Otherwise, make sure you're using the correct size type to avoid potential loss-of-data warnings.

These:

(*it).first;
(*it).second;

should use the more readable -> operator:

it->first;
it->second;

You say:

If I have a loop, I only want to count the end of the loop once.

But I don't see that anywhere in the code, plus there are multiple loops used here. If you're still receiving the correct output and no errors associated with them, then they should be okay.
{ "domain": "codereview.stackexchange", "id": 5841, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, queue, breadth-first-search", "url": null }