How to place multiple graphs in the same coordinate system (pandas, matplotlib), see picture
Question: In short, how do I put the graphs in the same coordinate system, rather than separated as in the picture, with each in its own coordinate system in one frame? Answer: The problem is with the dtype of your y values. It is "object", and thus matplotlib thinks two objects containing the same float value are different. Use: y = a.astype(float).values; y1 = b.astype(float).values; y2 = c.astype(float).values
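The dtype fix can be sketched as follows (the DataFrame contents here are hypothetical stand-ins for the columns a and b in the question):

```python
import pandas as pd

# Hypothetical stand-in for the question's DataFrame: columns arrive as
# dtype=object, which makes matplotlib treat equal floats as distinct objects.
df = pd.DataFrame({"a": ["1.0", "2.5", "4.0"],
                   "b": ["2.0", "3.5", "5.0"]}, dtype=object)

y = df["a"].astype(float).values
y1 = df["b"].astype(float).values

# To keep both curves in ONE coordinate system, plot them on the same Axes:
#   fig, ax = plt.subplots()
#   ax.plot(y)
#   ax.plot(y1)
```

Once the values are numeric, a single `ax.plot` call per series on one shared Axes draws everything in the same coordinate system.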
{ "domain": "datascience.stackexchange", "id": 4603, "tags": "pandas, graphs, jupyter, matplotlib" }
Scalar field equation of motion in FRW metric
Question: Consider a scalar field $\phi$ with the following Lagrangian density: $$\mathscr{L}=-\frac{1}{2} \partial_{\mu} \phi \partial^{\mu} \phi-V(\phi),$$ and consider a FRW metric, whose line element is given by $$\mathrm{d} s^{2}=-\mathrm{d} t^{2}+a(t)^{2}\left[\frac{\mathrm{d} r^{2}}{1-k r^{2}} + r^{2} \mathrm{d}\theta^{2} + r^{2} \sin^{2} \theta \mathrm{d}\phi^{2}\right],$$ with $a(t)$ being the FRW scale factor. According to e.g. Turner 1983, the equation of motion for $\phi$ in this setting turns out to be $$\ddot{\phi}+3 H \dot{\phi}+V^{\prime}(\phi)=0.$$ How do I derive this? I have varied the action of the scalar field and obtained the scalar field equation of motion for a generic metric $g_{\mu\nu}$: $$g_{\mu\nu} \partial^\mu \partial^\nu \phi - \frac{\delta V(\phi)}{\delta \phi}=0.$$ Now, I suppose that the $\ddot{\phi}$ term in the equation of motion is sourced by the $g_{00}$ component of the metric tensor, but what is the origin of the term $3 H \dot{\phi}$ given the metric I have written above? Answer: You made a mistake when you varied the action. Explicitly, the Lagrangian density is: $$ \mathcal L = (-\frac{1}{2}g^{\mu\nu}\partial_\mu\phi\partial_\nu\phi-V(\phi))\sqrt{-g} $$ so the Euler-Lagrange equations actually give you: $$ -\partial_\mu (\sqrt{-g}g^{\mu\nu}\partial_\nu\phi)+\sqrt{-g}V'(\phi) = 0 $$ which you usually rearrange as: $$ -\frac{1}{\sqrt{-g}}\partial_\mu (\sqrt{-g}g^{\mu\nu}\partial_\nu\phi)+V'(\phi) = 0 $$ and you recognize in the first term the Laplace-Beltrami operator, which is a covariant quantity that you can rewrite covariantly as $\nabla_\mu\nabla^\mu\phi$. 
The author was considering spatially homogeneous solutions, i.e. $\phi(t)$, so calculating: $$ g = -a^6\frac{r^4\sin^2\theta}{1-kr^2} $$ the equation simplifies to: $$ -\frac{1}{\sqrt{-g}}\frac{d}{dt} (-\sqrt{-g}\dot\phi)+V'(\phi) = 0 \\ \ddot\phi+3\frac{\dot a}{a}\dot\phi+V'(\phi) = 0 \\ $$ and if you define the Hubble parameter $H = \frac{\dot a}{a}$, you obtain the advertised equation (you'll notice that the factor $3$ comes from the $3$ spatial dimensions). Hope this helps and tell me if something is not clear.
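To spell out the intermediate step: for a homogeneous field $\phi(t)$, the only time-dependent factor in $\sqrt{-g}$ is $a^3(t)$ (the $r$ and $\theta$ factors are constant in time and cancel between the $\frac{1}{\sqrt{-g}}$ prefactor and the derivative), so

$$\frac{1}{a^3}\frac{d}{dt}\!\left(a^3\dot\phi\right)+V'(\phi)=\ddot\phi+\frac{3a^2\dot a}{a^3}\dot\phi+V'(\phi)=\ddot\phi+3\frac{\dot a}{a}\dot\phi+V'(\phi)=0.$$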
{ "domain": "physics.stackexchange", "id": 99051, "tags": "general-relativity, cosmology, field-theory, variational-principle" }
JavaScript traits implementation
Question: I wanted to step up my JS game, and move on from mostly closure based scripting, so I decided to write an application in node. I don't have much experience with prototype based programming, and I guess it's a little too early for ES6 (well, maybe with traceur). During planning of my app I saw that often I'll need container behaviour. So, I know I could implement it with a class, but I thought: why not use trait-like thingy instead? Gamemode.js, concrete class 'use strict'; var Script = require('../Script'); var Gamemode = module.exports = function () { Script.apply(this, arguments); }; Gamemode.prototype = Object.create(Script.prototype); Gamemode.prototype.constructor = Gamemode; Script.js, abstract class 'use strict'; var helper = require("./helper"); var _ = require("lodash"); var Script = module.exports = function () { if (this.constructor === Script) { throw new Error("Can't initialize abstract class!"); } var defaultDefinitions = { MAX_PLAYERS: '(500)' }; _.merge(this, helper.collection("definition", defaultDefinitions)); }; Script.prototype.build = function () { return this.definitions; }; helper.js 'use strict'; var _ = require("lodash"); var capitalize = function (string) { return string.charAt(0).toUpperCase() + string.slice(1).toLowerCase(); }; var container = function (singular, mutable) { if (undefined === mutable) { mutable = false; } var accessors = {}, methodSingular = capitalize(singular), plural = singular + 's', methodPlural = capitalize(plural); accessors['get' + methodPlural] = function () { return this[plural]; }; accessors['set' + methodPlural] = function (value) { if (undefined === value) { delete this[plural]; } this[plural] = value; return this; }; if (mutable) { accessors['add' + methodSingular] = function (item) { _.assign(this[plural], item); }; accessors['remove' + methodSingular] = function (item) { delete this[plural][item]; }; } return accessors; }; var collection = function (name, defaults) { if (undefined === defaults) { 
defaults = {}; }; var all = {}; all[name + 's'] = _.defaults({}, defaults); return _.merge(all, container(name, true)); }; var helper = module.exports = { capitalize: capitalize, container: container, collection: collection }; It works properly; now every instance of Gamemode has its own "definitions" property, with getDefinitions, setDefinitions, addDefinition and removeDefinition methods. Am I violating any rules, like encapsulation? Am I adding a property somewhere other than in the constructor? Would creating a "container" object instead be better? Answer: Interesting question. This code passes JSHint, it is readable, well organized and, except for return definitions; (where did you declare definitions?), holds few mysteries to me. Still, when I take a step back, why not simply use a bog-standard Object? It seems as if you just re-invented the Object with few added benefits.
{ "domain": "codereview.stackexchange", "id": 10879, "tags": "javascript, node.js" }
In Aaronson's "The Learnability of Quantum States", what do $E_j$ and $E$ represent?
Question: In an interesting paper by Scott Aaronson, he demonstrates the learnability of quantum states in a PAC-learning framework from machine learning. Here is what he wrote: But I am wondering what the role of $E$ is and how it differs from any member of the set $\mathcal{E} = \{E_j\}_{j=1}^{m}$, because in the first inequality he uses the $E_j$, and in the second, after the "it also satisfies...", he uses only $E$ with no index. What is the difference between the measurements $E_j$ and the measurement $E$? Are the elements of $\mathcal{E}$ measurements or results of measurements? If the $E_j$s are measurements, what is $E$? What am I missing? Answer: The $E$ is just there to simplify the problem (while complicating the problem statement). We normally think of a measurement as a Hermitian operator $O$ such that the $n$th moment of the probability distribution of the value that you will get when you measure a state $\rho$ is $\mathrm{Tr}(\rho O^n)$. His goal is to formalize the idea of "learning the result of a measurement," so in order to learn the results of $O$ on $\rho$, we need to come up with some criterion by which we can say that we've learned the whole distribution of measurements sufficiently well, which means we have to mess with the moments and for the most part that's a distraction from the point that he's trying to make. His scheme of two-outcome POVMs, on the other hand, is the most general way to map states $\rho$ into a single probability $p(\rho) \in [0,1]$, and so "learning the state" is inverting this operation and estimating $\rho$ from a bunch of measurements $\mathrm{Tr}(\rho E_i)$. When he's discussing $E$ with no subscript, he's talking about a measurement in the abstract. When he talks about $E_i$, he's specifically making reference to the idea of drawing a bunch of $E_i$s from the distribution $\mathcal{D}$ in the sense of the set $\mathcal{E} = \{E_i\}$ that's used in this theorem. 
This set $\mathcal{E}$ is a set of measurements (i.e. two-outcome POVMs), not real-number results or expectations of measurements; the expectation of the measurement $E_i$ is $\mathrm{Tr}(\rho E_i)$. The $E$ in the second equation is parameterized by the $\mathrm{Pr}_{E\in\mathcal{D}}$, which is the probability of the event in square brackets occurring when $E$ is sampled from the distribution $\mathcal{D}$.
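As a concrete toy illustration of the formalism (the state and POVM element here are chosen arbitrarily, not taken from the paper):

```python
import numpy as np

# A two-outcome POVM element E maps a state rho to an acceptance
# probability Tr(rho E), a number in [0, 1].
rho = np.array([[0.5, 0.0],
                [0.0, 0.5]])       # maximally mixed qubit state
E = np.array([[1.0, 0.0],
              [0.0, 0.0]])         # projector onto |0><0|

p = float(np.trace(rho @ E).real)  # "measuring E on rho" yields this probability
```

Learning the state then amounts to estimating $\rho$ well enough to predict such probabilities for measurements drawn from $\mathcal{D}$.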
{ "domain": "physics.stackexchange", "id": 99821, "tags": "quantum-mechanics, quantum-information, density-operator" }
Are validation sets necessary for Random Forest Classifier?
Question: Is it necessary to have train, test and validation sets when using a random forest classifier? I understand it is important with neural networks, but I am not understanding its importance with RF. I understand that having a third unseen set of data to test on is important to know the model isn't overfitting, especially with neural networks, but with RF it seems like you could almost not even have test or validation data (I know in practice this isn't true), but in theory you could, since each tree of the forest uses a random sample (with replacement) of the training dataset. At the moment I am missing out on approx 250 samples by keeping them unseen from the train and test set, and I know the model would improve with the extra data, so is it possible to have only train and test and not designate a separate validation set, whilst still having a reliable model? Answer: is it possible to have only train and test and not designate a separate validation set, whilst still having a reliable model? Sure! You can train a RF on the training set, then test on the testing set. That's perfectly valid as long as the model doesn't see any of the testing data during training. (Or, better yet, you can run cross-validation since RFs are quick to train) But if you want to tune the model's hyperparameters or do any regularization (like pruning), then you'll need a validation set. Train with the training set, use the validation set for tuning, then generate an accuracy estimate with the testing set.
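A minimal scikit-learn sketch of the two workflows described above (the data, sizes, and hyperparameters are illustrative only):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split

# Toy stand-in for the dataset in the question
rng = np.random.RandomState(0)
X = rng.rand(200, 5)
y = (X[:, 0] > 0.5).astype(int)

# No hyperparameter tuning: a plain train/test split is enough
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)
test_acc = clf.score(X_test, y_test)

# Better yet: cross-validation reuses all the training data,
# so no samples sit idle in a fixed validation set
cv_scores = cross_val_score(
    RandomForestClassifier(n_estimators=50, random_state=0),
    X_train, y_train, cv=5)
```

Only when tuning hyperparameters would the extra validation split (or nested cross-validation) become necessary.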
{ "domain": "datascience.stackexchange", "id": 8778, "tags": "random-forest" }
Why is the electric field inside a metal sphere carrying a charge Q zero?
Question: I was solving some problems regarding polarization, and during this I came across Example 4.5 in Griffiths, where it is written that the electric field inside a metal sphere carrying a charge Q is zero. The metal sphere is surrounded by a dielectric material. Is there any effect of the surrounding material due to which the electric field inside the sphere becomes zero? Can anyone explain these things to me? I am genuinely confused about the situations in which the electric field should be zero or not. Answer: The important conditions here are that you are considering the interior of a conducting surface with the topology of a spherical shell, at equilibrium. Its spherical symmetry and surroundings turn out to be irrelevant. The spherical shell will be at an electric equipotential. (If it weren’t, the gradient of the potential would make charge redistribute.) The entire interior will have the same value of electric potential (because solutions of Laplace’s equation cannot have local maxima or minima). Any apertures in the shell would invalidate the conclusion.
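The Laplace-equation argument can be illustrated numerically (a minimal sketch; the grid size and boundary potential V0 are arbitrary): hold the boundary of a grid at one potential and relax the interior, and every interior point converges to that same value, i.e. zero gradient and hence zero field inside.

```python
import numpy as np

# Jacobi relaxation of Laplace's equation inside a closed equipotential
# boundary held at V0; the interior converges to the constant V0, so the
# gradient (the electric field) vanishes inside.
V0 = 3.0
grid = np.zeros((20, 20))
grid[0, :] = grid[-1, :] = V0
grid[:, 0] = grid[:, -1] = V0

for _ in range(3000):
    # each interior point becomes the average of its four neighbours
    grid[1:-1, 1:-1] = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1] +
                               grid[1:-1, :-2] + grid[1:-1, 2:])
```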
{ "domain": "physics.stackexchange", "id": 50184, "tags": "electrostatics, electricity, polarization, metals" }
Application search form
Question: I have a search form in my application. When I search, it takes too much time to retrieve and display data, so I need to optimize this code. public void search_thread() { Stopwatch sw = new Stopwatch(); sw.Start(); string sqlcmd = "select en.eventname,q.Q,q.QLevel,q.QUsed,q.QType,q.QAlt,q.QStatus,q.QTag ,m.MediaName,m.MediaPath,m.MediaTag,m.MediaType,O.*,c.Cat1,c.Cat2,c.CatTags,c.SubCat1,c.SubCat2 from tblQ q Inner join tblMediaType m ON q.QRefNo =M.QRefNo Inner Join tblOptions o On Q.QRefNo =o.QRefNo Inner Join tblCategories c On Q.QRefNo = c.QRefNo Inner Join tbleventname en On Q.QRefNo = en.QRefNo"; int flag_checked = 0; if (t_search.Text.Trim() != "") { sqlcmd = sqlcmd + " where (q.Qalt like N'%" + t_search.Text.Trim() + "%' or o.oAlt1 like N'%" + t_search.Text.Trim() + "%' or o.oAlt2 like N'%" + t_search.Text.Trim() + "%' or o.oAlt3 like N'%" + t_search.Text.Trim() + "%' or o.oAlt4 like N'%" + t_search.Text.Trim() + "%' or o.oAlt5 like N'%" + t_search.Text.Trim() + "%' or O.oAlt6 = N'%" + t_search.Text.Trim() + "%' or o.oAlt7 like N'%" + t_search.Text.Trim() + "%' or o.oAlt8 like N'%" + t_search.Text.Trim() + "%' or o.oAlt9 like N'%" + t_search.Text.Trim() + "%' or o.oAlt10 like N'%" + t_search.Text.Trim() + "%' or o.oAlt11 like N'%" + t_search.Text.Trim() + "%' or o.oAlt12 like N'%" + t_search.Text.Trim() + "%' or q.Q like '%" + t_search.Text.Trim() + "%' or o.o1 like '%" + t_search.Text.Trim() + "%' or o.o2 like '%" + t_search.Text.Trim() + "%' or o.o3 like '%" + t_search.Text.Trim() + "%' or o.o4 like '%" + t_search.Text.Trim() + "%' or o.o5 like '%" + t_search.Text.Trim() + "%' or o.o6 like '%" + t_search.Text.Trim() + "%' or o.o7 like '%" + t_search.Text.Trim() + "%' or o.o8 like '%" + t_search.Text.Trim() + "%' or o.o9 like '%" + t_search.Text.Trim() + "%' or o.o10 like '%" + t_search.Text + "%' or o.o11 like '%" + t_search.Text.Trim() + "%' or o.o12 like '%" + t_search.Text.Trim() + "%' or o.CorrectAns like '%" + t_search.Text.Trim() + "%' )"; 
flag_checked = 1; if (r_Excludestack.Checked == true) { if (con.State != ConnectionState.Open) { if (con.State != ConnectionState.Open){con.Open();} } SqlCommand smc = new SqlCommand("select distinct count(qrefno) from tblStackData", con); int qrefnocnt = Convert.ToInt32(smc.ExecuteScalar()); string scmd = "select qrefno from tblStackData"; string[] temp_qref_array = new string[qrefnocnt]; smc = new SqlCommand(scmd, con); int i = 0; if (dr != null) { if (dr.IsClosed) { dr = smc.ExecuteReader(); } else { dr.Close(); dr = smc.ExecuteReader(); } } else dr = smc.ExecuteReader(); while (dr.Read()) temp_qref_array[i++] = dr["qrefno"].ToString(); con.Close(); scmd = ""; for (i = 0; i < temp_qref_array.Length - 1; i++) { scmd = scmd + "'" + temp_qref_array[i] + "',"; } scmd = scmd + "'" + temp_qref_array[i] + "'"; sqlcmd = sqlcmd + " and Q.qrefNo Not IN (" + scmd + ")"; } if (r_search4mstack.Checked == true) { if (con.State != ConnectionState.Open) { if (con.State != ConnectionState.Open){con.Open();}} SqlCommand smc = new SqlCommand("select distinct count(qrefno) from tblStackData", con); int qrefnocnt = Convert.ToInt32(smc.ExecuteScalar()); string scmd = "select qrefno from tblStackData"; string[] temp_qref_array = new string[qrefnocnt]; smc = new SqlCommand(scmd, con); int i = 0; if (dr != null) { if (dr.IsClosed) { dr = smc.ExecuteReader(); } else { dr.Close(); dr = smc.ExecuteReader(); } } else dr = smc.ExecuteReader(); while (dr.Read()) temp_qref_array[i++] = dr["qrefno"].ToString(); con.Close(); scmd = ""; for (i = 0; i < temp_qref_array.Length - 1; i++) { scmd = scmd + "'" + temp_qref_array[i] + "',"; } scmd = scmd + "'" + temp_qref_array[i] + "'"; sqlcmd = sqlcmd + " and Q.qrefNo IN (" + scmd + ")"; } if (r_searchnotused.Checked == true) { if (con.State != ConnectionState.Open){ if (con.State != ConnectionState.Open){con.Open();}} SqlCommand smc = new SqlCommand("select distinct count(qrefno) from tblQ where Qused Not In ('','null','0')", con); int qrefnocnt = 
Convert.ToInt32(smc.ExecuteScalar()); string scmd = "select qrefno from tblQ where Qused Not In ('','null','0')"; string[] temp_qref_array = new string[qrefnocnt]; smc = new SqlCommand(scmd, con); int i = 0; if (dr != null) { if (dr.IsClosed) { dr = smc.ExecuteReader(); } else { dr.Close(); dr = smc.ExecuteReader(); } } else dr = smc.ExecuteReader(); while ( dr.Read()) temp_qref_array[i++] = dr["qrefno"].ToString(); con.Close(); scmd = ""; for (i = 0; i < temp_qref_array.Length - 1; i++) { scmd = scmd + "'" + temp_qref_array[i] + "',"; } scmd = scmd + "'" + temp_qref_array[i] + "'"; sqlcmd = sqlcmd + " and Q.qrefNo Not IN (" + scmd + ")"; } if (r_stackedbutnotused.Checked == true) { if (con.State != ConnectionState.Open){if (con.State != ConnectionState.Open){con.Open();}} SqlCommand smc = new SqlCommand("select distinct count(qrefno) from tblQ where Qused In ('','null','0')", con); int qrefnocnt = Convert.ToInt32(smc.ExecuteScalar()); string scmd = "select qrefno from tblQ where Qused In ('','null','0')"; string[] temp_qref_array = new string[qrefnocnt]; smc = new SqlCommand(scmd, con); int i = 0; if (dr != null) { if (dr.IsClosed) { dr = smc.ExecuteReader(); } else { dr.Close(); dr = smc.ExecuteReader(); } } else dr = smc.ExecuteReader(); while (dr.Read()) temp_qref_array[i++] = dr["qrefno"].ToString(); dr.Close(); scmd = ""; for (i = 0; i < temp_qref_array.Length - 1; i++) { scmd = scmd + "'" + temp_qref_array[i] + "',"; } scmd = scmd + "'" + temp_qref_array[i] + "'"; smc = new SqlCommand("select distinct count(qrefno) from tblStackData where qrefno In(" + scmd + ")", con); qrefnocnt = Convert.ToInt32(smc.ExecuteScalar()); scmd = "select qrefno from tblStackData where qrefno In(" + scmd + ")"; temp_qref_array = new string[qrefnocnt]; smc = new SqlCommand(scmd, con); i = 0; try { dr = smc.ExecuteReader(); while (dr.Read()) temp_qref_array[i++] = dr["qrefno"].ToString(); con.Close(); } catch (Exception h) { } scmd = ""; if (temp_qref_array.Length != 0) { for (i = 
0; i < temp_qref_array.Length - 1; i++) { scmd = scmd + "'" + temp_qref_array[i] + "',"; } scmd = scmd + "'" + temp_qref_array[i] + "'"; sqlcmd = sqlcmd + " and Q.qrefNo IN (" + scmd + ")"; } } } if (T_refno.Text.Trim() != "") { if (flag_checked == 0) { flag_checked = 1; sqlcmd = sqlcmd + " where O.QrefNo like '%" + T_refno.Text.Trim() + "%' "; } else { sqlcmd = sqlcmd + " and O.QrefNo like '%" + T_refno.Text.Trim() + "%' "; } } // string en = null; // label6.Invoke(new Action(() => en = c_eventname.Text)); if ( c_eventname.Text != null && c_eventname.Text.Trim() != "") { if (flag_checked == 0) { flag_checked = 1; sqlcmd = sqlcmd + " where en.eventname = '" + c_eventname.Text + "' "; } else { sqlcmd = sqlcmd + " and en.eventname = '" + c_eventname.Text + "' "; } } // label6.Invoke(new Action(() => en = c_qtype.Text)); if ( c_qtype.Text != null && c_qtype.Text.ToString().Trim() != "") { if (flag_checked == 0) { flag_checked = 1; sqlcmd = sqlcmd + " where Q.QType = '" + c_qtype.Text + "' "; } else { sqlcmd = sqlcmd + " and Q.QType = '" + c_qtype.Text + "' "; } } // label6.Invoke(new Action(() => en = c_qtype.Text)); if ( c_level.Text != null && c_level.Text!= "") { if (flag_checked == 0) { flag_checked = 1; sqlcmd = sqlcmd + " where Q.QLevel = " + c_level.Text; } else { sqlcmd = sqlcmd + " and Q.QLevel = " + c_level.Text; } } // label6.Invoke(new Action(() => en = c_qtype.Text)); // Console.WriteLine("Qtype " + C_category.Text); if ( C_category.Text != null && C_category.Text != "") { if (flag_checked == 0) { flag_checked = 1; sqlcmd = sqlcmd + " where C.Cat1 = '" + C_category.Text + "'"; } else { sqlcmd = sqlcmd + " and C.Cat1 = '" + C_category.Text + "'"; } } // label6.Invoke(new Action(() => en = .Text)); if ( c_subcategory.Text != null && c_subcategory.Text.Trim() != "") { if (flag_checked == 0) { flag_checked = 1; sqlcmd = sqlcmd + " where C.subCat1 = '" + c_subcategory.Text + "'"; } else { sqlcmd = sqlcmd + " and C.subCat1 = '" + c_subcategory.Text + "'"; } } 
try { if (con.State != ConnectionState.Open) { if (con.State != ConnectionState.Open){con.Open();} } cmd = new SqlCommand(sqlcmd, con); if (dr != null) if (dr.IsClosed) { dr = cmd.ExecuteReader(); } else { dr.Close(); dr = cmd.ExecuteReader(); } else dr = cmd.ExecuteReader(); dataGridView1.Rows.Clear(); while (dr.Read()) { try { int f = 0; try { dataGridView1.Rows.Add(dr["qrefno"].ToString(), dr["QType"].ToString(), dr["Qused"].ToString(), dr["CorrectAns"].ToString(), dr["Q"].ToString(), dr["o1"].ToString(), dr["o2"].ToString(), dr["o3"].ToString(), dr["o4"].ToString(), dr["o5"].ToString(), dr["o6"].ToString(), dr["o7"].ToString(), dr["o8"].ToString(), dr["o9"].ToString(), dr["o10"].ToString(), dr["o11"].ToString(), dr["o12"].ToString(), dr["MediaName"].ToString(), dr["MediaType"].ToString(), dr["cat1"].ToString()); // label6.Invoke(new Action(() => f = dataGridView1.Rows.Add(dr["qrefno"].ToString(), dr["QType"].ToString(), dr["Qused"].ToString(), dr["CorrectAns"].ToString(), dr["Q"].ToString(), dr["o1"].ToString(), dr["o2"].ToString(), dr["o3"].ToString(), dr["o4"].ToString(), dr["o5"].ToString(), dr["o6"].ToString(), dr["o7"].ToString(), dr["o8"].ToString(), dr["o9"].ToString(), dr["o10"].ToString(), dr["o11"].ToString(), dr["o12"].ToString(), dr["MediaName"].ToString(), dr["MediaType"].ToString(), dr["cat1"].ToString()))); } catch ( System.InvalidOperationException exe) { break; } if (dr["QUsed"].ToString() == "" || dr["QUsed"].ToString() == "0") { // dataGridView1.Rows[f].DefaultCellStyle.BackColor = Color.Red; } else { dataGridView1.Rows[f].DefaultCellStyle.BackColor = Color.Red; } } catch(Exception ex) { dr.Close(); } } con.Close(); l_searchfound.Text = dataGridView1.Rows.Count.ToString(); // l_searchfound.Invoke(new Action(() => l_searchfound.Text = dataGridView1.Rows.Count + "")); sw.Stop(); Console.WriteLine("Task countUP2 took: " + sw.Elapsed.ToString()); } catch (Exception e1) { con.Close(); dr.Close(); } // b_search.Enabled = true; } Answer: I just 
would love to enter inside your t_search textbox something like a'); DROP TABLE tblStackData; DROP TABLE tblOptions; DROP TABLE tblCategories; deleting 3 tables of your database. Do yourself a favour and use parameterized queries to avoid SQL injections. One shouldn't use string concatenation like scmd = ""; for (i = 0; i < temp_qref_array.Length - 1; i++) { scmd = scmd + "'" + temp_qref_array[i] + "',"; } scmd = scmd + "'" + temp_qref_array[i] + "'"; in a loop. That's what the StringBuilder class is for, leading to StringBuilder sb = new StringBuilder(1024); for (i = 0; i < temp_qref_array.Length - 1; i++) { sb.Append("'").Append(temp_qref_array[i]).Append("',"); } sb.Append("'").Append(temp_qref_array[i]).Append("'"); sqlcmd = sqlcmd + " and Q.qrefNo Not IN (" + sb.ToString() + ")"; or much better make sqlcmd also a StringBuilder resulting in sqlcmd.Append(" and Q.qrefNo Not IN (") for (i = 0; i < temp_qref_array.Length - 1; i++) { sqlcmd.Append("'").Append(temp_qref_array[i]).Append("',"); } sqlcmd.Append("'").Append(temp_qref_array[i]).Append("')") but we still can do better, by using String.Join() like @Simon André Forsberg suggested like sqlcmd.Append(" and Q.qrefNo Not IN ('") .Append(String.Join("','" , temp_qref_array)) .Append("')"); You have the following pattern very often if (dr != null) { if (dr.IsClosed) { dr = smc.ExecuteReader(); } else { dr.Close(); dr = smc.ExecuteReader(); } } else dr = smc.ExecuteReader(); which can be simplified to if (dr != null && !dr.IsClosed) { dr.Close(); } dr = smc.ExecuteReader(); Here you are calling t_search.Text.Trim() over and over again (I can't count how often). 
if (t_search.Text.Trim() != "") { sqlcmd = sqlcmd + " where (q.Qalt like N'%" + t_search.Text.Trim() + "%' or o.oAlt1 like N'%" + t_search.Text.Trim() + "%' or o.oAlt2 like N'%" + t_search.Text.Trim() + "%' or o.oAlt3 like N'%" + t_search.Text.Trim() + "%' or o.oAlt4 like N'%" + t_search.Text.Trim() + "%' or o.oAlt5 like N'%" + t_search.Text.Trim() + "%' or O.oAlt6 = N'%" + t_search.Text.Trim() + "%' or o.oAlt7 like N'%" + t_search.Text.Trim() + "%' or o.oAlt8 like N'%" + t_search.Text.Trim() + "%' or o.oAlt9 like N'%" + t_search.Text.Trim() + "%' or o.oAlt10 like N'%" + t_search.Text.Trim() + "%' or o.oAlt11 like N'%" + t_search.Text.Trim() + "%' or o.oAlt12 like N'%" + t_search.Text.Trim() + "%' or q.Q like '%" + t_search.Text.Trim() + "%' or o.o1 like '%" + t_search.Text.Trim() + "%' or o.o2 like '%" + t_search.Text.Trim() + "%' or o.o3 like '%" + t_search.Text.Trim() + "%' or o.o4 like '%" + t_search.Text.Trim() + "%' or o.o5 like '%" + t_search.Text.Trim() + "%' or o.o6 like '%" + t_search.Text.Trim() + "%' or o.o7 like '%" + t_search.Text.Trim() + "%' or o.o8 like '%" + t_search.Text.Trim() + "%' or o.o9 like '%" + t_search.Text.Trim() + "%' or o.o10 like '%" + t_search.Text + "%' or o.o11 like '%" + t_search.Text.Trim() + "%' or o.o12 like '%" + t_search.Text.Trim() + "%' or o.CorrectAns like '%" + t_search.Text.Trim() + "%' )"; flag_checked = 1; better store the returned value in a variable like (assuming sqlcmd now is a StringBuilder) String searchText = t_search.Text.Trim(); if (searchText.Length > 0) { sqlcmd.Append(" where (q.Qalt like N'%") .Append(searchText) .Append("%' or o.oAlt1 like N'%") .Append(searchText) .Append("%' or o.oAlt2 like N'%") .Append(searchText) .Append("%' or o.oAlt3 like N'%") .Append(searchText) ... and so on Expressions like if (r_Excludestack.Checked == true) can be simplified to if (r_Excludestack.Checked). You really should split this god method into multiple methods. 
Like the part of if (r_Excludestack.Checked) { } should be extracted to a method which I'll name GetExcludeStackCondition() private string GetExcludeStackCondition() { if (con.State != ConnectionState.Open) { con.Open(); } SqlCommand smc = new SqlCommand("select distinct count(qrefno) from tblStackData", con); int qrefnocnt = Convert.ToInt32(smc.ExecuteScalar()); smc = new SqlCommand("select qrefno from tblStackData", con); SqlDataReader dr = smc.ExecuteReader(); int i = 0; string[] temp_qref_array = new string[qrefnocnt]; while (dr.Read()) { temp_qref_array[i++] = dr["qrefno"].ToString(); } con.Close(); StringBuilder scmd = new StringBuilder(1024); scmd.Append(" and Q.qrefNo Not IN ('") .Append(String.Join("','" , temp_qref_array)) .Append("')"); return scmd.ToString(); } and would be called like if (r_Excludestack.Checked) { sqlcmd.Append(GetExcludeStackCondition()); } resulting in reducing the former method by a lot of code and making it more readable and easier to maintain.
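To make the injection point concrete in a language-agnostic way, here is a minimal sqlite3 sketch of the parameterized-query advice (the table and column names are hypothetical, not the ones from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tblQ (qrefno TEXT, Q TEXT)")
con.executemany("INSERT INTO tblQ VALUES (?, ?)",
                [("r1", "alpha"), ("r2", "beta")])

# The search text is passed as a parameter, never concatenated into the SQL:
# an injection attempt is treated as a literal string to match.
search = "a'); DROP TABLE tblQ; --"
rows = con.execute("SELECT qrefno FROM tblQ WHERE Q LIKE ?",
                   ("%" + search + "%",)).fetchall()

# A safe IN (...) clause: one placeholder per value.
exclude = ["r1"]
placeholders = ",".join("?" * len(exclude))
kept = con.execute(
    "SELECT qrefno FROM tblQ WHERE qrefno NOT IN (%s)" % placeholders,
    exclude).fetchall()
```

The same pattern carries over to ADO.NET via `SqlCommand.Parameters.AddWithValue`, as the answer recommends.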
{ "domain": "codereview.stackexchange", "id": 12585, "tags": "c#, .net, sql-server" }
Passing a parameter to a ROS Timer callback
Question: I'm trying to pass a parameter to a ROS Timer callback using boost::bind, as I would for a subscriber callback, but it's not working for me. Here's the relevant part of the code. MyClass::MyClass() { int my_arg = 0; node.createTimer(ros::Duration(1.0), boost::bind(&MyClass::callback, this, _1, _2, my_arg), true); //for oneshot } void MyClass::callback(int arg) { ROS_ERROR_STREAM("callback: " << arg); } But this is not compiling: /usr/include/boost/bind/bind.hpp:69:37: error: ‘void (MyClass::*)(int)’ is not a class, struct, or union type Any idea? Originally posted by Ugo on ROS Answers with karma: 1620 on 2012-08-14 Post score: 0 Answer: The bind call should match the function signature. You'll have to remove the _1, _2 part from the bind call, i.e. boost::bind(&MyClass::callback, this, my_arg); extra arguments passed to the bound functor (such as the timer event) that are not consumed by placeholders are silently ignored. Originally posted by dornhege with karma: 31395 on 2012-08-14 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 10603, "tags": "ros" }
What is a low level controller for a quadrotor?
Question: I am studying control systems, and I was reading a paper about quadrotors. The paper says that a low-level controller has to be present on the robot's platform. But what is a low-level controller? I have tried to search online, but I have not really understood it, since I think its definition also depends on the context in which it is used, so I posted the question here on Robotics Stack Exchange. The paper also says it is preferable for the low-level controller to be open source, so as to have the possibility of adding capabilities. So, what is the difference between a low-level controller and a high-level controller for a quadrotor? Answer: Controllers, be it for quadrotors or industrial manipulators or any electromechanical system for that matter, are implemented as hierarchies. Controllers that directly drive the hardware are low-level controllers whereas those that implement logical decision-making are high-level controllers. The terms "high" and "low" are relative. Nested controllers ensure abstraction and code modularity. In the context of a quadrotor, the controller for the motor driver(s) that actually makes the rotors spin would be a low-level controller, whereas another controller that tells the quadrotor to move forward/backward etc. could be a high-level controller. The low-level controller has abstracted away all the details on how exactly to drive the rotors. All the high-level controller does is instruct the low-level controllers that a particular action is desired. It's now the low-level controllers' job to actually spin the rotors in such a way that the quadrotor executes the desired action. "in the platform of the robot it has to be present a low level controller" Think of making your quadrotor execute actions like ascend(desired_height) or translate_forward(distance). Great, you've called those functions and have passed the arguments. What then? 
If the control architecture is implemented as a hierarchy, then these functions would call low-level controllers to actuate the rotors: # high-level controller ascend(desired_height) { drive_motors(desired_height, 'up') } # another high-level controller translate_forward(distance) { drive_motors(distance, 'forward') } # low-level controller drive_motors(distanceToTarget, direction_to_move_in) { # code that actually drives the motors } Note that the low-level controller can again call another lower-level controller which is even closer to the "metal". Likewise, the high-level controllers can be called by an even higher-level controller, e.g. go_above_obstacles(), which utilizes ascend() and translate_forward(). High-level controllers are thus for high-level logic and are generally not concerned with the low-level mechatronic actuation. Here's another answer that might be useful.
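The hierarchy sketched in the pseudocode above, as a minimal runnable illustration (all class and method names are hypothetical):

```python
class MotorDriver:
    """Low-level controller: the only layer that 'touches' the rotors."""
    def drive_motors(self, target, direction):
        # In a real system this would command motor speeds; here we just
        # report the action to show the flow of control.
        return "spinning rotors: %s by %s" % (direction, target)

class FlightController:
    """High-level controller: logic only, no actuation details."""
    def __init__(self, low_level):
        self.low = low_level
    def ascend(self, desired_height):
        return self.low.drive_motors(desired_height, "up")
    def translate_forward(self, distance):
        return self.low.drive_motors(distance, "forward")

fc = FlightController(MotorDriver())
```

Swapping in a different `MotorDriver` (say, for new motors) leaves the high-level logic untouched, which is the modularity argument made above.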
{ "domain": "robotics.stackexchange", "id": 2124, "tags": "control, quadcopter" }
Usefulness of Differential Geometry
Question: I recently came across these books: Differential Geometry and Lie Groups: A Computational Perspective Differential Geometry and Lie Groups: A Second Course Their subject matter really intrigues me, as I really enjoy topology/geometry/analysis, but I had not planned to pursue them since I also want to work in an area with very concrete applications. However, I am skeptical. At one point I thought topological data analysis (TDA) was the perfect marriage of my interests, but I have found very little evidence of that field actually being used in computer science, much less in industrial or otherwise more 'practical' settings. It seems like TDA makes mathematicians feel more relevant to the data science world, but I'm not convinced that it makes them so (feel free to contradict me if you think I'm wrong on this point, but note that I want a concrete use case, not an abstract argument about its relevance). I have similar stories about coding theory, certain aspects of set theory, etcetera. They may have theoretical relevance, but is there any situation where, in the process of developing software, one might need to consult these fields? I don't know of any. So now my question: is there any practical field of computer science that makes advanced use of differential geometry? Medical imaging, other imaging, computer graphics, virtual reality, and some other fields come to mind as potential application areas. In my (admittedly limited) experience, however, these areas seem to use basic 3D geometry, numerical linear algebra, and sometimes numerical analysis of PDEs. Those are all very nice topics, but they do not require anything as abstract as differential geometry. Thanks in advance. 
Answer: I mainly see differential geometry applied to computer science in two applied subfields: computer graphics / geometry processing, and machine learning / signal processing. For computer graphics / geometry processing, I recommend looking at: the Discrete Differential Geometry course by Keenan Crane, the Discrete Differential Geometry for CS playlist, and the compilation of discrete differential geometry papers. For machine learning / signal processing, I recommend looking at: manifold learning, information geometry, nonlinear signal processing, and geometric deep learning. Also check this answer on Math Stack Exchange, and the conference Differential Geometry meets Deep Learning. By the way, Functional Differential Geometry is a great book.
{ "domain": "cs.stackexchange", "id": 17409, "tags": "computational-geometry, mathematical-programming, mathematical-foundations, mathematical-software" }
Why doesn't the Bessemer process oxidize the iron?
Question: The Bessemer process for making steel involves blowing air or oxygen through hot iron. Impurities and carbon oxidize away. Why does the iron itself not oxidize? Answer: I thought you asked a great question, one that I have wondered about myself. The answer, as best I could determine, is that the iron does oxidize. A good description of the Basic Oxygen Steelmaking process, today's descendant of the Bessemer process, says so anyway. The BOS process uses pure oxygen instead of the original Bessemer process's air. It reports that some iron does oxidize. The oxidation of carbon to form carbon monoxide is more thermodynamically favorable than the oxidation of iron, so it occurs preferentially. The page I found (once again the link is here) mentions using on the order of 1800 scf of oxygen per ton of metal. A ton of steel is about 18 kmol of iron. 1800 scf of oxygen is about 2 kmol of oxygen. So there is not enough oxygen to oxidize even close to all the iron in the average BOS run. That fact, combined with the fact that carbon, silicon, and other impurities in the pig iron oxidize preferentially, means that not much iron is oxidized during BOS or the Bessemer process. But "not much" does not mean zero. My guess is that iron oxides are more soluble in the slag than they are in molten iron and so are also removed with the slag after BOS.
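The back-of-the-envelope numbers in the answer can be checked directly. This is a rough sketch; the choice of a metric ton and of STP molar volume are my assumptions, not stated in the answer:

```python
# Rough check of the answer's estimate: ~18 kmol Fe per ton of steel
# vs ~2 kmol O2 in 1800 scf. Assumes a metric ton and an ideal gas
# at 0 degrees C, 1 atm (both assumptions on my part).
FE_MOLAR_MASS_G = 55.845   # g/mol of iron
LITERS_PER_SCF = 28.317    # liters in one standard cubic foot
MOLAR_VOLUME_L = 22.4      # liters per mol of ideal gas at STP

fe_kmol = 1_000_000 / FE_MOLAR_MASS_G / 1000             # kmol Fe per metric ton
o2_kmol = 1800 * LITERS_PER_SCF / MOLAR_VOLUME_L / 1000  # kmol O2 in 1800 scf

print(round(fe_kmol, 1), round(o2_kmol, 1))  # 17.9 2.3
```

So there is roughly one O2 molecule per eight iron atoms, nowhere near enough to oxidize the bulk of the melt, consistent with the answer's conclusion.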
{ "domain": "chemistry.stackexchange", "id": 3055, "tags": "metallurgy" }
Creating file archives with the right kinds of file extensions
Question: I have a list of files; the files share the same base names and come in various formats like swf, jpg, gif and fla. Each SWF may be paired ONLY with a GIF image; it is also possible to have a JPG with no corresponding SWF, and those should be listed as well. For example: file1.fla file1.gif file1.jpg file2.jpg The output should be: BasicDBObject 1 shall contain: file1.fla AND file1.gif BasicDBObject 2 shall contain: file2.jpg I developed my code, but it seems to me full of for's and if's; SonarQube will probably flag it as an issue. Is there better logic to apply, and can it be refactored to have fewer lines of code? private BasicDBList generateBDObject(String validacaoId) { List<Criativo> arquivos = criativoDAO.getAll(validacaoId); BasicDBList allFiles = new BasicDBList(); List<String> auxiliar = new ArrayList<>(); if (arquivos != null && !arquivos.isEmpty()) { for (Criativo criativo : arquivos) { for (int i = 0; i < criativo.getArquivos().size(); i++) { if (criativo.getArquivos().get(i).getExtensao().contentEquals("swf")) { for (int j = 0; j < criativo.getArquivos().size(); j++) { String myFile = criativo.getArquivos().get(j).getExtensao(); if (myFile.contentEquals("gif")) { BasicDBObject swfObject = new BasicDBObject(); swfObject.append("id", Util.getMd5Time(criativo.getArquivos().get(0).getNome())); swfObject.append("path", null); swfObject.append("nome", criativo.getArquivos().get(i).getNomeOriginal()); swfObject.append("pathOriginal", criativo.getArquivos().get(i).getPathOriginal()); swfObject.append("imagem", null); allFiles.add(swfObject); auxiliar.add(criativo.getNome()); } } } } while (!auxiliar.contains(criativo.getNome())) { BasicDBObject dbObject = new BasicDBObject(); dbObject.append("id", Util.getMd5Time(criativo.getArquivos().get(0).getNome())); dbObject.append("path", null); dbObject.append("nome", criativo.getArquivos().get(0).getNomeOriginal()); dbObject.append("pathOriginal", criativo.getArquivos().get(0).getPathOriginal()); dbObject.append("imagem", null); 
auxiliar.add(criativo.getNome()); allFiles.add(dbObject); } } } return allFiles; } Criativo.java public class Criativo { @Id String id; String tipo = "criativo"; String validacaoId; String nome; String linhaCriativa; String veiculo; String formato; String canal; List<Arquivo> arquivos = new ArrayList<>(); public String getTipo() { return this.tipo; } public String getLinhaCriativa() { return this.linhaCriativa; } public Criativo setLinhaCriativa(String linhaCriativa) { this.linhaCriativa = linhaCriativa; return this; } public String getVeiculo() { return this.veiculo; } public Criativo setVeiculo(String veiculo) { this.veiculo = veiculo; return this; } public String getFormato() { return this.formato; } public Criativo setFormato(String formato) { this.formato = formato; return this; } public String getCanal() { return this.canal; } public Criativo setCanal(String canal) { this.canal = canal; return this; } public List<Arquivo> getArquivos() { return this.arquivos; } public Criativo setArquivos(List<Arquivo> arquivos) { this.arquivos = arquivos; return this; } public String getNome() { return this.nome; } public Criativo setNome(String nome) { this.nome = nome; return this; } public Criativo addArquivo(Arquivo arq) { this.arquivos.add(arq); return this; } public String getId() { return this.id; } public Criativo setId(String id) { this.id = id; return this; } public String getValidacaoId() { return this.validacaoId; } public Criativo setValidacaoId(String validacaoId) { this.validacaoId = validacaoId; return this; } } Arquivo.java public class Arquivo { @Id String id; String validacaoId; String nome; String tipo; String path; String extensao; String nomeOriginal; String pathOriginal; String dataCriacao; long tamanho; BasicDBObject atributos; public String getId() { return this.id; } public Arquivo setId(String id) { this.id = id; return this; } public String getNome() { return this.nome; } public Arquivo setNome(String nome) { this.nome = nome; return this; } 
public String getTipo() { return this.tipo; } public Arquivo setTipo(String tipo) { this.tipo = tipo; return this; } public String getPath() { return this.path; } public Arquivo setPath(String path) { this.path = path; return this; } public String getExtensao() { return this.extensao; } public Arquivo setExtensao(String extensao) { this.extensao = extensao; return this; } public String getNomeOriginal() { return this.nomeOriginal; } public Arquivo setNomeOriginal(String nomeOriginal) { this.nomeOriginal = nomeOriginal; return this; } public String getDataCriacao() { return this.dataCriacao; } public Arquivo setDataCriacao(String dataCriacao) { this.dataCriacao = dataCriacao; return this; } public long getTamanho() { return this.tamanho; } public Arquivo setTamanho(long tamanho) { this.tamanho = tamanho; return this; } public String getValidacaoId() { return this.validacaoId; } public Arquivo setValidacaoId(String validacaoId) { this.validacaoId = validacaoId; return this; } public BasicDBObject getAtributos() { return this.atributos; } public Arquivo setAtributos(BasicDBObject atributos) { this.atributos = atributos; return this; } public String getPathOriginal() { return this.pathOriginal; } public Arquivo setPathOriginal(String pathOriginal) { this.pathOriginal = pathOriginal; return this; } } Note: imagem and path is null because it is in development. But imagem is equal to JPG image. Answer: Could move some of the procedural logic into the objects. BasicDBObject could be returned by Criativo, such as a toDBObject() method instead of being converted outside by generateBDObject. Collection inside Criativo or a predicate method could provide which have the "gif" and other desired requirements rather than evaluating in a loop: if (myFile.contentEquals("gif")) {
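The suggestions above amount to "group the files once, then test membership instead of looping." That selection logic can be sketched language-agnostically; here it is in Python (my own illustration — `group_by_extension` and `pick_files` are hypothetical names, and the fla/swf-plus-gif pairing rule is inferred from the question's example, not from the original code):

```python
from collections import defaultdict

def group_by_extension(names):
    """Index a creative's file names by extension so that cheap
    membership tests replace the nested index loops."""
    by_ext = defaultdict(list)
    for name in names:
        by_ext[name.rsplit(".", 1)[-1].lower()].append(name)
    return by_ext

def pick_files(names):
    """Files for one output object: the fla/swf plus its gif when a
    gif is present, otherwise just the jpg (rule inferred from the
    question's example)."""
    by_ext = group_by_extension(names)
    anim = by_ext.get("swf") or by_ext.get("fla")
    if anim and by_ext.get("gif"):
        return anim + by_ext["gif"]
    return by_ext.get("jpg", [])

print(pick_files(["file1.fla", "file1.gif", "file1.jpg"]))  # ['file1.fla', 'file1.gif']
print(pick_files(["file2.jpg"]))                            # ['file2.jpg']
```

The same shape translates back to Java directly: build a `Map<String, List<Arquivo>>` keyed by extension once per `Criativo`, then the whole decision becomes two `containsKey` checks.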
{ "domain": "codereview.stackexchange", "id": 17790, "tags": "java" }
Why do galaxies form 2D planes (or spiral-like shapes) instead of 3D balls (or spherical-like shapes)?
Question: As we know, (1) the macroscopic spatial dimension of our universe is 3-dimensional, and (2) gravity attracts massive objects together and the gravitational force is isotropic, without directional preferences. Why do we have spiral, 2D plane-like galaxies instead of spherical- or elliptic-like galaxies? Input: Gravity is (at least, seems to be) isotropic from its force law (Newtonian gravity). It should show no directional preferences from the form of the force vector $\vec{F}=\frac{GM(r_1)m(r_2)}{(\vec{r_1}-\vec{r_2})^2} \hat{r_{12}}$. Einstein gravity also does not show directional dependence, at least microscopically. If gravity attracts massive objects together isotropically, and the macroscopic space dimension is 3-dimensional, it seems natural for massive objects to gather together into a spherical shape. For example, globular clusters (GCs) are roughly spherical star groupings, as shown in the Wiki picture: However, my impression is that, even though we have observed some more spherical or ball-like elliptical galaxies, it is more common to find more-planar spiral galaxies such as our Milky Way? (Is this statement correct? Let me know if I am wrong.) Also, have a look at this more-planar spiral galaxy, NGC 4414: Is there some physics or math theory that explains why a galaxy turns out to be planar-like (or spiral-like) instead of spherical-like? See also a somewhat related question on a smaller scale: Can I produce a true 3D orbit? p.s. Other than the classical stability of a 2D plane perpendicular to a classical angular momentum, is there an interpretation in terms of a quantum theory of vortices in a macroscopic manner (just my personal speculation)? Thank you for your comments/answers! Answer: Short answer: A spiral galaxy is, in fact, spherical-like. 
To understand how, let us as a starting point look at Wikipedia's sketch of the structure of a spiral galaxy: A spiral galaxy consists of a disk embedded in a spheroidal halo. The galaxy rotates around an axis through the centre, parallel to the GNP$\leftrightarrow$GSP axis in the image. The spheroidal halo consists mostly of Dark Matter (DM), and the DM makes up $\sim90\%$ of the mass of the Milky Way. Dynamically, it is the DM that, ehrm, matters. And DM will always arrange itself in an ellipsoid configuration. So the question should rather be: Why is there even a disk; why isn't the galaxy just an elliptical? The key to answering this lies in the gas content of a galaxy. Both stars and Dark Matter particles - whatever they are - are collisionless; they only interact with each other through gravity. Collisionless systems tend to form spheroid or ellipsoid systems, like we are used to from elliptical galaxies, globular clusters etc.; all of which share the characteristic that they are very gas-poor. With gas it is different: gas molecules can collide, and do so all the time. These collisions can transfer energy and angular momentum. The energy can be turned into other kinds of energy, which can escape, through radiation, galactic winds etc., and as energy escapes, the gas cools and settles down into a lower-energy configuration. The gas' angular momentum, however, is harder to transfer out of the galaxy, so this is more or less conserved. The result - a collisional system with low energy but a relatively high angular momentum - is the typical thin disk of a spiral galaxy. (Something similar, but not perfectly analogous, happens in the formation of protoplanetary disks). Stars also do not collide, so they should in theory also make up an ellipsoid shape. And some do in fact: the halo stars, including but not limited to the globular clusters. 
These are all very old stars, formed when the gas of the galaxy hadn't settled into the disk yet (or, for a few, formed in the disk but later ejected due to gravitational disturbances). But the large majority of stars are formed in the gas after it has settled into the disk, and so the large majority of stars will be found in the same disk. Elliptical galaxies So why are there even elliptical galaxies? Elliptical galaxies are typically very gas-poor, so gas dynamics is not important in these; they are rather a classical gravitational many-body system, like a DM halo. The gas is depleted from these galaxies due to many different processes such as star formation, collisions with other galaxies (which are quite common), gas ejection due to radiation pressure from strongly star-forming regions, supernovae or quasars, etc. etc. - many are the ways for a galaxy to lose its gas. If colliding galaxies are sufficiently gas-depleted (and the collision results in a merger), then the resulting galaxy will not have any gas which can settle into a disk, and the kinetic energy of the stars in the new galaxy will tend to be distributed randomly due to the chaotic nature of the interaction. (This picture is simplified, as the whole business of galactic dynamics is quite hairy, but I hope it gets the fundamentals right and more or less understandable).
{ "domain": "physics.stackexchange", "id": 22504, "tags": "gravity, angular-momentum, astrophysics, galaxies, star-clusters" }
A header-only class for C++ time measurement
Question: I wrote a header-only class (actually there is a second one for data bookkeeping and dumping) to measure the execution time of a C++ scope without worrying too much about boilerplate. The idea is to be able to simply instantiate a class at the beginning of the scope you want to measure, and do a single call at the end to dump the measure. I rely on the fact that the class instantiation is at the beginning of the scope, and its destruction at the very end. So my main worry is about compile-time optimizations that could change the order of execution and bias the measure. Also, I'm not satisfied with how I retrieve the type for ScopeTimer::Duration, but I couldn't find the proper type :/ Here is the code: scope_timer.hpp #ifndef SCOPE_TIMER #define SCOPE_TIMER #include <chrono> #include <string> #include <vector> #include <map> #include <fstream> class ScopeTimer { public: using ScopeSignature = std::string; using DurationType = std::chrono::microseconds; using Duration = decltype(std::chrono::duration_cast<DurationType>(std::chrono::high_resolution_clock::now() - std::chrono::high_resolution_clock::now()).count()); ScopeTimer(const ScopeSignature& scopeName); ~ScopeTimer(); Duration getDurationFromStart() const; private: ScopeTimer(); const ScopeSignature scopeName; const std::chrono::high_resolution_clock::time_point start; }; class ScopeTimerStaticCore { public: static void addTimingToNamedScope(const ScopeTimer::ScopeSignature& scopeName, const ScopeTimer::Duration& duration); static void dumpTimingToFile(const std::string& path); static void clearAllTiming(); static void clearTimingForNamedScope(const ScopeTimer::ScopeSignature& scopeName); private: using TimingVector = std::vector<ScopeTimer::Duration>; using ScopesTiming = std::map<ScopeTimer::ScopeSignature, TimingVector>; static ScopesTiming& getScopesTimingStaticInstance() { static ScopesTiming scopesTimingContainer; return (scopesTimingContainer); }; }; 
/*******************************************************Implementations*******************************************************/ inline ScopeTimer::ScopeTimer(const ScopeSignature& scopeName) : scopeName(scopeName), start(std::chrono::high_resolution_clock::now()) {}; inline ScopeTimer::~ScopeTimer() { const Duration scopeTimerLifetimeDuration = this->getDurationFromStart(); ScopeTimerStaticCore::addTimingToNamedScope(this->scopeName, scopeTimerLifetimeDuration); return ; }; inline ScopeTimer::Duration ScopeTimer::getDurationFromStart() const { using std::chrono::duration_cast; const std::chrono::high_resolution_clock::time_point now = std::chrono::high_resolution_clock::now(); return (duration_cast<DurationType>(now - this->start).count()); }; inline void ScopeTimerStaticCore::addTimingToNamedScope(const ScopeTimer::ScopeSignature& scopeName, const ScopeTimer::Duration& duration) { ScopesTiming& scopesTimingContainer = ScopeTimerStaticCore::getScopesTimingStaticInstance(); scopesTimingContainer[scopeName].push_back(duration); return ; }; inline void ScopeTimerStaticCore::dumpTimingToFile(const std::string& path) { const ScopesTiming& scopesTimingContainer = ScopeTimerStaticCore::getScopesTimingStaticInstance(); std::ofstream dumpfile; dumpfile.open(path, std::ios::out | std::ios::trunc); for (ScopesTiming::const_iterator it_scopes = scopesTimingContainer.begin(); it_scopes != scopesTimingContainer.end(); ++it_scopes) { const ScopeTimer::ScopeSignature& currentScope = it_scopes->first; const TimingVector& timings = it_scopes->second; for (TimingVector::const_iterator it_timings = timings.begin(); it_timings != timings.end(); ++it_timings) dumpfile << currentScope << "," << *it_timings << std::endl; } dumpfile.close(); return ; }; inline void ScopeTimerStaticCore::clearAllTiming() { ScopesTiming& scopesTimingContainer = ScopeTimerStaticCore::getScopesTimingStaticInstance(); scopesTimingContainer.clear(); return ; }; inline void 
ScopeTimerStaticCore::clearTimingForNamedScope(const ScopeTimer::ScopeSignature& scopeName) { ScopesTiming& scopesTimingContainer = ScopeTimerStaticCore::getScopesTimingStaticInstance(); ScopesTiming::iterator it_scopes = scopesTimingContainer.find(scopeName); if (it_scopes != scopesTimingContainer.end()) it_scopes->second.clear(); return ; }; #endif /* SCOPE_TIMER */ And an dummy program that use it main.cpp #include "../include/scope_timer.hpp" void functionA(); void functionB(); int main() { for (size_t i = 0; i < 3; ++i) { functionA(); functionB(); } ScopeTimerStaticCore::dumpTimingToFile("/tmp/scope-timer_dump-dummy-test.csv"); return (0); }; dumb_functions.cpp #include <thread> #include <chrono> #include "../include/scope_timer.hpp" void functionA() { ScopeTimer scopeTimer("functionA"); std::this_thread::sleep_for (std::chrono::milliseconds(500)); return ; }; void functionB() { ScopeTimer scopeTimer("functionB"); std::this_thread::sleep_for (std::chrono::seconds(1)); return ; }; if you want to retrieve the code quickly here is the repo link Answer: using Duration = decltype(std::chrono::duration_cast<DurationType>(std::chrono::high_resolution_clock::now() - std::chrono::high_resolution_clock::now()).count()); I'm confused by this line. decltype(std::chrono::duration_cast<DurationType>(expr)) is invariably DurationType, isn't it? That's why it's a "cast"? So this simplifies down to decltype(std::declval<DurationType&>().count()), which I'm pretty sure can be spelled as DurationType::rep unless you're really eager to support non-standard duration types that might not have a rep member. So: using Duration = typename DurationType::rep; And now it appears that maybe Duration is the wrong name for this typedef, eh? (EDIT: Oops, the keyword typename is not needed here because DurationType is not dependent. Just using Duration = DurationType::rep; should be sufficient.) 
static ScopesTiming& getScopesTimingStaticInstance() { static ScopesTiming scopesTimingContainer; return (scopesTimingContainer); }; Minor nits on whitespace and naming and parentheses and trailing semicolons: static ScopesTiming& getScopesTimingStaticInstance() { static ScopesTiming instance; return instance; } The defining quality of instance is that it's a static instance of ScopesTiming. If you want to convey the additional information that ScopesTiming is actually a container type, then that information belongs in the name of the type. Personally I'd call it something like TimingVectorMap, because it's a map of TimingVectors. Since the static map is not guarded by any mutex, your function addTimingToNamedScope (which mutates the map) is not safe to call from multiple threads concurrently. This could be a problem for real-world use. ScopeTimer has two const-qualified fields. This doesn't do anything except pessimize its implicitly generated move-constructor into a copy-constructor. I recommend removing the const. ScopeTimer also has an implicit conversion from ScopeSignature a.k.a. std::string, so that for example void f(ScopeTimer timer); std::string hello = "hello world"; f(hello); // compiles without any diagnostic I very strongly suggest that you never enable any implicit conversion unless you have a very good reason for it. This means putting explicit on every constructor and conversion operator. explicit ScopeTimer(const ScopeSignature& scopeName); dumpfile.open(path, std::ios::out | std::ios::trunc); Should you check to see if the open succeeded? 
inline void ScopeTimerStaticCore::clearTimingForNamedScope(const ScopeTimer::ScopeSignature& scopeName) { ScopesTiming& scopesTimingContainer = ScopeTimerStaticCore::getScopesTimingStaticInstance(); ScopesTiming::iterator it_scopes = scopesTimingContainer.find(scopeName); if (it_scopes != scopesTimingContainer.end()) it_scopes->second.clear(); return ; }; This would be a good place to use C++11 auto: inline void ScopeTimerStaticCore::clearTimingForNamedScope(const ScopeTimer::ScopeSignature& scopeName) { ScopesTiming& instance = getScopesTimingStaticInstance(); auto it = instance.find(scopeName); if (it != instance.end()) { it->second.clear(); } } Or, if you don't mind removing the element from the map completely, you could just use erase: inline void ScopeTimerStaticCore::clearTimingForNamedScope(const ScopeTimer::ScopeSignature& scopeName) { ScopesTiming& instance = getScopesTimingStaticInstance(); instance.erase(scopeName); } I also notice that these functions would get a lot shorter and simpler to read if you put their definitions in-line into the class body of ScopeTimerStaticCore. In this case you could omit the keyword inline and the qualification of the parameter type: void clearTimingForNamedScope(const ScopeSignature& scopeName) { ScopesTiming& instance = getScopesTimingStaticInstance(); instance.erase(scopeName); } (Assuming that ScopeTimerStaticCore contains a member typedef using ScopeSignature = ScopeTimer::ScopeSignature;, I guess. It probably should — or vice versa.)
{ "domain": "codereview.stackexchange", "id": 33280, "tags": "c++, performance, c++11, benchmarking" }
Multidimensional dynamic array class using a contiguous block of memory
Question: The following code is designed to be a multidimensional array that works off of a continuous block of memory. It is designed to be able to be resized at runtime, but only when a resize is explicitly requested. #include <memory> #include <cstddef> namespace detail{ //frame buffer class: represents a 2d array of audio data using a continuous block of memory template<typename T,template<typename...>class __frame_type> class __frame_buffer{ public: using self_type = __frame_buffer<T,__frame_type>; using frame_type = __frame_type<T>; using value_type = T; using size_type = std::size_t; using pointer = value_type*; using const_pointer = const value_type*; using reference = value_type&; using const_reference = const value_type&; __frame_buffer():_data(),_frame_data(),_size(0),_frame_size(0){} __frame_buffer(size_type nframes,size_type fsize):_data(std::make_unique<value_type[]>(nframes*fsize)), _frame_data(std::make_unique<frame_type[]>(nframes)), _size(nframes), _frame_size(fsize){ for(size_type j=0; j < size(); ++j){ _frame_data[j] = frame_type(&_data[j*frame_size()],frame_size()); } } __frame_buffer(self_type& other):_data(std::make_unique<value_type[]>(other.data_size())), _frame_data(std::make_unique<frame_type[]>(other.size())), _size(other.size()), _frame_size(other.frame_size()){ for(size_type j=0,i=0; j < size(), i < data_size(); ++j,++i){ _data[i] = other._data[i]; _frame_data[j] = frame_type(&_data[j*frame_size()],frame_size()); } } __frame_buffer(self_type&& other):_data(std::move(other._data)), _frame_data(std::move(other._frame_data)), _size(other._size), _frame_size(other._frame_size){} self_type& operator=(self_type const& other){ if(resize(other.size(),other.frame_size())){ for(size_type j=0,i=0; j < size(), i < data_size(); ++j,++i){ _data[i] = other._data[i]; _frame_data[j] = frame_type(&_data[j*frame_size()],frame_size()); } } return *this; } self_type& operator=(self_type&& other){ std::swap(_size,other._size); 
std::swap(_frame_size,other._frame_size); std::swap(_data,other._data); std::swap(_frame_data,other._frame_data); return *this; } //array subscript operators frame_type& operator[](size_type const& index){ return _frame_data[index]; } frame_type const& operator[](size_type const& index)const{ return _frame_data[index]; } //get number of frames in the buffer (length) inline const size_type& size() const{ return _size; } //get the size of each frame in the buffer(width) inline const size_type& frame_size()const{ return _frame_size; } //get the overall number of elements in the buffer (length * width) inline size_type data_size()const{ return size() * frame_size(); } //resize the buffer to a new length & width bool resize(size_type const& nframes,size_type const& fsize){ if(size() != nframes || frame_size() != fsize){ std::unique_ptr<value_type[]> _new_data(std::make_unique<value_type[]>(nframes * fsize)); std::unique_ptr<frame_type[]> _new_frame_data(std::make_unique<frame_type[]>(nframes)); if(_new_data && _new_frame_data){ _size=nframes; _frame_size = fsize; std::swap(_data,_new_data); std::swap(_frame_data,_new_frame_data); return true; }else{ return false; } }else{ return true; } } //iterators frame_type* begin(){ return &_frame_data[0]; } const frame_type* begin()const{ return &_frame_data[0]; } frame_type* end(){ return &_frame_data[_size]; } const frame_type* end()const{ return &_frame_data[_size]; } protected: //array of value_type objects std::unique_ptr<value_type[]> _data; //array of frames to be overlayed upon the _data to simulate multidimensional array std::unique_ptr<frame_type[]> _frame_data; size_type _size; size_type _frame_size; }; template<typename T> class __frame{ public: using self_type = __frame<T>; using size_type = std::size_t; using value_type = T; using reference = T&; using const_reference = const T&; using pointer = T*; using const_pointer = const T*; __frame():_data(nullptr),_size(nullptr){} __frame(pointer addr,size_type const& 
size):_data(new (addr) value_type[size]),_size(&size){} __frame(self_type& other):_data(other._data),_size(&other._size){} __frame(self_type&& other):__frame(other){} self_type& operator=(self_type& other){ _data = other._data; _size = other._size; return *this; } self_type& operator=(self_type&& other){ std::swap(_size,other._size); std::swap(_data,other._data); return *this; } self_type& operator=(const_reference value){ if(_data){ for(size_type i = 0; i < size(); ++i){ _data[i]=value; } } return *this; } reference operator[](size_type const& index){ return _data[index]; } const_reference operator[](size_type const& index)const{ return _data[index]; } pointer begin(){ return &_data[0]; } const_pointer begin()const{ return begin(); } pointer end(){ return &_data[*_size]; } const_pointer end()const{ return end(); } size_type size()const{ return *_size; } protected: pointer _data; const size_type* _size; }; } namespace audio{ template<typename T> using frame = detail::__frame<T>; template<typename T> using frame_buffer = detail::__frame_buffer<T,detail::__frame>; } #include <iostream> int main(int agrc,char** argv){ audio::frame_buffer<double> fbuff{16,2}; int k=0; for(auto&& x: fbuff){ for(auto&& y:x){ y=k; } ++k; } for(auto&& x: fbuff){ for(auto&& y:x){ std::cout<<y<<"\t"; } std::cout<<std::endl; } return 0; } The actual code is broken up into a separate header file. My original instinct was to use a std::vector under the hood but it is crucial to my design that it is not possible for the array to be resized without an explicit request. The array will be used for an audio application I am working on and it would be very costly to performance if a resize were to cause an allocation on the audio thread. I would like to know if there is a more concise way to achieve what I am trying to do as well as any design considerations I should take into account? Live Demo EDIT:This container is intended for use in a real-time audio environment. 
An important factor in the design is that no dynamic memory allocation can occur while the container is in use on a real-time thread. The intent of the design is to have a container that from the outside looks like a 2D array and can be indexed as such, foo[0][0], but is implemented in such a way that the stored data is contained within a contiguous block of memory. The frame class represents a "frame" of audio data; a "frame" is a collection of audio samples, each of which corresponds to a channel of audio data. Answer: Some points that come to mind immediately. Your array is merely two-dimensional. This is much easier than a truly multidimensional array. Hence your question is misleading/wrong. You must not use identifiers starting with a double underscore, such as __frame. These are reserved for internal use in standard library implementations. Any user code using such identifiers is ill-formed. Your code is not generic, but specific to your application. What you really want is generic re-usable code of the form template<typename T ,typename A=std::allocator<T> > class Array2D { /* ... */ }; You should really try to use a std::vector<> under the hood, so that all the standard operations (copy, move, assignment) are easy to implement correctly and in an exception-safe way. Your issue that this would allow resizing w/o explicit request is invalid, as you can avoid/disallow calls to any members of std::vector that would (under the hood) re-allocate/resize. There is no need for an extra array (your __frame_data) to mimic the multidimensionality. You should avoid unnecessary non-static members when a static constexpr member would do (i.e. _frame_size) and member functions that merely regurgitate such values (frame_size()). Iterators represent an inherently linear model for the underlying data and are therefore not natural for multi-dimensional objects. Hence, iterator support may be dropped. 
You should add sufficient documentation (preferably implicit for ease of maintenance, i.e. via appropriate variable names rather than via comments) so that any user (including yourself when you come back months/years later) can immediately understand the intent. With these points in mind, a skeleton for such a 2D array could be as follows template<typename T, typename A=std::allocator<T> > class array2D : std::vector<T,A> { using base = std::vector<T,A>; public: // types using size_type = typename base::size_type; using value_type = typename base::value_type; // etc for other types using pointer = typename base::pointer; struct row { // used as return type of at() const pointer ptr; const size_type size; value_type &operator[](size_type i) { return ptr[i]; } value_type const&operator[](size_type i) const { return ptr[i]; } value_type &at(size_type i) { if(i>=size) throw std::out_of_range("row::at"); return ptr[i]; } value_type const&at(size_type i) const { if(i>=size) throw std::out_of_range("row::at"); return ptr[i]; } private: friend class array2D; row(pointer p, size_type s) : ptr(p), size(s) {} }; // construction & assignment array2D() = default; array2D(array2D const&) = default; array2D&operator=(array2D const&) = default; array2D(array2D &&) = default; array2D&operator=(array2D &&) = default; explicit array2D(size_type dim0, size_type dim1) : base(dim0*dim1), _dims{{dim0,dim1}} {} explicit array2D(size_type dim0, size_type dim1, value_type const&fill) : base(dim0*dim1,fill), _dims{{dim0,dim1}} {} // data access size_type size(size_type dim) const { return _dims[dim]; } using base::size; const value_type*operator[](size_type i) const { return base::data()+i*size(1); } value_type *operator[](size_type i) { return base::data()+i*size(1); } const row at(size_type i) const { if(i>=size(0)) throw std::out_of_range("array2D::at"); return {base::data()+i*size(1), size(1)}; } row at(size_type i) { if(i>=size(0)) throw std::out_of_range("array2D::at"); return {base::data()+i*size(1), size(1)}; } // resize to new sizes; if sizes change, data are lost. void resize(size_type dim0, size_type dim1) { if(dim0!=_dims[0] || dim1!=_dims[1]) base::clear(); // otherwise, the old data would be re-shuffled _dims={{dim0,dim1}}; base::resize(dim0*dim1); } private: std::array<size_type,2> _dims = {{0,0}}; // size in dimension 0,1; std::array, so resize() can assign it }; If you don't like inheriting from a std::vector, you can use membership w/o affecting the design too much. If you want to keep iterator support, then you need an object similar to array2D::row to serve as an iterator over rows. Unlike row, it must allow its ptr member to be changed and must keep the array size in both dimensions. It needs operator++ and operator-- to increment/decrement its ptr member by size(1).
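For readers who just want the core trick independent of C++, here is the flat-buffer idea in a few lines of Python (my own sketch, not the reviewer's code): a single contiguous buffer plus the index arithmetic `i*cols + j` replaces the extra per-row pointer array.

```python
class Array2D:
    """2-D view over one flat list; element (i, j) lives at i*cols + j."""

    def __init__(self, rows, cols, fill=0.0):
        self.rows, self.cols = rows, cols
        self._data = [fill] * (rows * cols)  # single contiguous buffer

    def __getitem__(self, ij):
        i, j = ij
        return self._data[i * self.cols + j]

    def __setitem__(self, ij, value):
        i, j = ij
        self._data[i * self.cols + j] = value

buf = Array2D(16, 2)  # 16 frames of 2 channels, like the question's example
buf[3, 1] = 7.0
print(buf[3, 1], buf._data[3 * 2 + 1])  # 7.0 7.0
```

Note that indexing never allocates; only the constructor does, which is the property the questioner needs on a real-time audio thread.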
{ "domain": "codereview.stackexchange", "id": 20285, "tags": "c++, array, collections" }
Response of a system using DFT
Question: Let's say that we have two sequences, an input sequence $x(n) = [0, 1, 2, 1]$ and the impulse response of a given system $h(n) = [0, 1, -1, 1]$. I need to find the response of this system to the given input sequence. After that, I need to calculate the linear convolution of the given sequences. If we denote the response as $y(n)$ we have $y(n)=h(n)*x(n)$, which means that, due to the convolution theorem, in the frequency domain we have $Y(k)=H(k)X(k)$. From this, we can find $y(n) = IDFT(Y(k))$. Considering the fact that I need to find the convolution of the given sequences, meaning $y(n)=h(n)*x(n)$, this should yield the same result as when I was doing it using the DFT. However, my final results don't match at all. $x(n) = [0, 1, 2, 1] \Rightarrow X(k) = [4, -2, 0, -2] \\ h(n) = [0, 1, -1, 1] \Rightarrow H(k) = [1, 1, -3, 1] \\ Y(k)=X(k)H(k) = [4, -2, 0, -2] \Rightarrow y(n)=[0, 1, 2, 1]$ On the other hand, convolution of the given sequences gives the following result: $y(n) = h(n)*x(n) = [0, 0, 1, 1, 0, 1, 1]$ Not only does that result look completely wrong, but the dimensions of the vectors I got as results are not even the same. What am I doing wrong? Any help appreciated! Answer: It will match for circular convolution modulo $N$, where $N$ is 4 here. For finite-length sequences, the product of the DFTs of two sequences is equivalent to the DFT of the circular convolution of the two sequences.

    >> cconv([0,1,2,1], [0,1,-1,1], 4)
    ans = 0 1 2 1
    >> ifft(fft([0,1,2,1]).*fft([0,1,-1,1]))
    ans = 0 1 2 1
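A quick way to convince yourself numerically; this is a sketch in plain Python (the helper names are mine, not from the answer):

```python
# Verify that IDFT(X(k)H(k)) equals the CIRCULAR convolution mod N,
# not the length-7 linear convolution.
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def cconv(x, h):
    # circular convolution modulo N
    N = len(x)
    return [sum(x[m] * h[(n - m) % N] for m in range(N)) for n in range(N)]

x = [0, 1, 2, 1]
h = [0, 1, -1, 1]

Y = [a * b for a, b in zip(dft(x), dft(h))]
via_dft = [round(v.real, 6) + 0.0 for v in idft(Y)]  # strip tiny numerical noise

print(via_dft)      # [0.0, 1.0, 2.0, 1.0]
print(cconv(x, h))  # [0, 1, 2, 1]
```

Both routes give [0, 1, 2, 1], the circular result, which is why the length-7 linear convolution disagrees unless the sequences are zero-padded to length 7 first.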
{ "domain": "dsp.stackexchange", "id": 8931, "tags": "dft, convolution" }
opencv causes segfaults because it does not use system install of zlib
Question: We cannot get anything working unless this is fixed. I reported it here: https://code.ros.org/trac/opencv/ticket/970 Originally posted by Rosen Diankov on ROS Answers with karma: 516 on 2011-03-30 Post score: 0 Answer: It would be great if someone fixed this; the patch is already provided. Originally posted by Rosen Diankov with karma: 516 on 2011-04-07 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 5250, "tags": "ros, opencv2.2" }
DFA minimization (dead state)
Question: I was trying to minimize the given DFA:

            a  b
    ->  1   4  2
    F   2   1  1
        3   1  4
    F   4   1  1

I've watched a few videos about solving this problem and decided to use the equivalence method. While doing it I've come to the solution below:

0-equivalence: {1,3} {2,4}
1-equivalence: {1} {3} {2,4}

              a    b
    ->  1     2,4  2,4
    F   2,4   1    1
        3     1    2,4

which translates into the following graph: State 3 looks dead to me. Should I even draw it if the solution is correct? What would be the point of this state? Kind regards, Answer: It's inaccessible; you don't need it.
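Reachability can be checked mechanically; here is a hedged sketch with my own encoding of the minimized table, where the merged state {2,4} is written "24":

```python
delta = {                          # minimized transition table from the question
    "1":  {"a": "24", "b": "24"},  # "24" stands for the merged state {2,4}
    "24": {"a": "1",  "b": "1"},
    "3":  {"a": "1",  "b": "24"},
}

def reachable(delta, start):
    """Depth-first search over the transition table from the start state."""
    seen, stack = set(), [start]
    while stack:
        q = stack.pop()
        if q not in seen:
            seen.add(q)
            stack.extend(delta[q].values())
    return seen

print(sorted(reachable(delta, "1")))  # ['1', '24'] -- state 3 never shows up
```

By the usual definitions, state 3 is unreachable (inaccessible) rather than dead (a dead state is one from which no accepting state can be reached), and either way it can be dropped here.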
{ "domain": "cs.stackexchange", "id": 19305, "tags": "finite-automata" }
Are there any microchips specifically designed to run ANNs?
Question: I'm interested in hardware implementation of ANNs (artificial neural networks). Are there any popular existing technology implementations in the form of microchips which are purpose-designed to run artificial neural networks? For example, a chip which is optimised for an application like image recognition or something similar? Answer: In May 2016 Google announced a custom ASIC which is specifically built for machine learning and tailored for TensorFlow. It uses a tensor processing unit (TPU), which is a programmable microprocessor designed to accelerate artificial neural networks. NeuroCores: 12x14 sq-mm chips which can be interconnected in a binary tree; see Neurogrid, a supercomputer which can provide an option for brain simulations. TrueNorth: a neuromorphic CMOS chip produced by IBM, which has 4096 cores in the current chip, each of which can simulate 256 programmable silicon "neurons", giving a total of over a million neurons. Further reading: Neuromorphic engineering, Vision processing unit, AI accelerators. As a side note, you can always use FPGA-based hardware on which you can implement a selected genetic algorithm (GA) directly in hardware. For example, the CoDi model was implemented in the FPGA-based CAM-Brain Machine (CBM).
{ "domain": "ai.stackexchange", "id": 18, "tags": "image-recognition, hardware" }
Discrepancy in the form of the free Klein-Gordon field
Question: When solving for the Klein-Gordon field $\phi$, most texts and online resources that I look at say that: $$\phi(x) = \int \frac{ d^{3} p }{ ( 2 \pi )^{3} } \frac{1}{\sqrt{ 2 E_{\mathbf{p}} }} \left[ a_{\mathbf{p}} e^{-ip\cdot x} + a_{\mathbf{p}}^{\dagger} e^{ip\cdot x} \right]\tag{1}$$ In this case, we'd have $$[a_{\mathbf{k}}, a_{\mathbf{p}}^{\dagger}]=(2\pi)^{3}\delta^{(3)}(\mathbf{k} - \mathbf{p}).$$ $\ $ However, my teacher right now has given me a solution $\phi$ such that: $$\phi(x) = \int \frac{ d^{3} p }{ ( 2 \pi )^{3} } \frac{1}{2 E_{\mathbf{p}} } \left[ a_{\mathbf{p}} e^{-ip\cdot x} + a_{\mathbf{p}}^{\dagger} e^{ip\cdot x} \right]\tag{2}$$ Note the lack of square root here! In this case, I believe we'd have $$[a_{\mathbf{k}}, a_{\mathbf{p}}^{\dagger}]=(2\pi)^{3}2E_{\mathbf{k}}\delta^{(3)}(\mathbf{k} - \mathbf{p}).$$ $\ $ Both of these seem valid, and so to me it seems like the factor in front of the bracket is free for us to choose (as long as it makes $\phi$ Lorentz invariant). Is this correct? Or, what is going on? Answer: Pretty much any QFT book you read has different prefactors that differ in factors of $2\pi$ or $E_{\mathbf{k}}$. The point is that one can redefine the operator $a(\mathbf{p})$ by incorporating such factors to it. For example, if you want to consider the integral $(1)$ for $\phi(x)$ with the Lorentz-invariant measure $$\frac{d^3\mathbf{p}}{(2\pi)^3E_{\mathbf{p}}},$$ you can replace $a(\mathbf{p})$ with $\tilde{a}(\mathbf{p})=\sqrt{E_{\mathbf{p}}}a(\mathbf{p})$. Or, if you want to get rid of the $2\pi$ factors in the commutation relations you can replace $a(\mathbf{p})$ with $a(\mathbf{p})/(2\pi)^{3/2}$. So its just redefining the operator $a(\mathbf{p})$ with a Lorentz-invariant measure.
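The relation between the two conventions can be made explicit (a small check using the notation above; $b_{\mathbf{p}}$ is my label for the rescaled operator). Defining $b_{\mathbf{p}} \equiv \sqrt{2E_{\mathbf{p}}}\, a_{\mathbf{p}}$ turns the integrand of $(1)$ into that of $(2)$, since $$\frac{1}{\sqrt{2E_{\mathbf{p}}}}\, a_{\mathbf{p}} = \frac{1}{2E_{\mathbf{p}}}\, b_{\mathbf{p}},$$ and the commutator rescales accordingly: $$[b_{\mathbf{k}}, b_{\mathbf{p}}^{\dagger}] = \sqrt{2E_{\mathbf{k}}}\sqrt{2E_{\mathbf{p}}}\,(2\pi)^{3}\delta^{(3)}(\mathbf{k}-\mathbf{p}) = 2E_{\mathbf{k}}\,(2\pi)^{3}\delta^{(3)}(\mathbf{k}-\mathbf{p}),$$ where the delta function sets $E_{\mathbf{p}}=E_{\mathbf{k}}$. So the two forms describe the same field with differently normalized ladder operators.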
{ "domain": "physics.stackexchange", "id": 34784, "tags": "quantum-field-theory, operators, conventions, klein-gordon-equation" }
What are the animals in these pictures?
Question: I recently found some pictures of animals' skeletons. Does anyone know what species they are? Answer: The top is some kind of seal (the flippers give it away) and the bottom is some kind of lemur (the long thin limbs and tail with the narrow skull), but that is the best I can do from just those photos. It might be a flying lemur, but again, from just the one photo without scale it is difficult to say for certain.
{ "domain": "biology.stackexchange", "id": 8234, "tags": "zoology" }
Classifying with certainty
Question: I'm trying to classify a binary sample with Keras and I would like to classify as many correctly as possible, while ignoring the ones where the model is not sure. The fully connected neural network currently achieves around 65%, but I would like to get a higher proportion of correctly classified samples, while ignoring the ones where the model is uncertain. Is there a way to tell Keras to simply ignore the ones where the model is uncertain and achieve a higher accuracy that way? Or is there a network design that could achieve this, for example feeding the result of the network straight into a second part of it which then decides whether the prediction is likely accurate or not? One way I was thinking of achieving this is by building a second neural network on top of it that decides, based on the result of the first network and all of its input data, whether the classification will be correct or not. Would that work, and if yes, is there no more elegant way of achieving this in one go, such as directly having the results feed into a second part of the network that then decides if the prediction is likely accurate or not? Answer: Softmax output in neural networks can be misleading - often the confidence provided is higher than is intuitive. See e.g. here: A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks https://pdfs.semanticscholar.org/c26e/1beaeaa55acae7336882de5df48716afb8bb.pdf which suggests that in practice, softmax is not helpfully interpretable as a probability but should instead be used for ranking among class options. If you want to have an accurate probability estimate, you might consider using a Bayesian approach in which you explicitly model your estimate of each of the input variances, the output variance, etc. 
Failing that, having a second-phase neural network that takes the input and predicts correct or incorrect classification by the first network is an interesting idea - where incorrect classification is a proxy for 'low confidence' classification. If you try it I'd be curious to know how it works. Edit: As @Emre said, the input to the softmax would be more informative than the softmax itself because it's pre-scaled (i.e. not forced to sum to 1). So it should reflect confidence better, with values further away from 0 indicating higher confidence.
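As a minimal illustration of the simple thresholding idea (my own toy numbers, not from the question; in practice the probabilities would come from something like model.predict):

```python
def selective_predictions(probs, labels, threshold=0.8):
    """Keep a binary prediction only when max(p, 1-p) clears the threshold."""
    return [(p >= 0.5, y) for p, y in zip(probs, labels)
            if max(p, 1.0 - p) >= threshold]

probs  = [0.95, 0.55, 0.10, 0.60, 0.85]   # hypothetical sigmoid outputs
labels = [True, False, False, True, True]

kept = selective_predictions(probs, labels)
coverage = len(kept) / len(probs)                    # fraction not abstained on
accuracy = sum(pred == y for pred, y in kept) / len(kept)
print(coverage, accuracy)  # 0.6 1.0
```

Here accuracy on the kept samples rises to 100% at the cost of abstaining on 40% of them; the caveat from the answer still applies, since the raw softmax/sigmoid value may be a poor confidence estimate.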
{ "domain": "datascience.stackexchange", "id": 11927, "tags": "classification, tensorflow, keras" }
how to use eband_local_planner with navigation?
Question: Hello, I installed the eband_local_planner package with sudo apt-get install ros-indigo-eband-local-planner. But when I try to use it with navigation, the "base_local_planner" param of move_base does not change when I launch the corresponding launch file. And in the terminal, no error occurred! In the move_base node, I set <rosparam file="$(find rbx1_nav)/config/fake/eband_planner_params.yaml" command="load" /> and <param name="base_local_planer" value="eband_local_planner/EBandPlannerROS" /> When using rospack plugins --attrib=plugin nav_core, it shows me that eband_local_planner is one of the plugins of nav_core, and when I use roscd eband_local_planner, I can also find it in "/opt/ros/indigo/share/eband_local_planner". What should I do to configure eband_local_planner with navigation correctly? Thanks a lot ^-^ Originally posted by jxl on ROS Answers with karma: 252 on 2016-08-29 Post score: 0 Answer: Thanks all, I found why this happens. Spelling mistake: param name="base_local_planer" should be param name="base_local_planner". Originally posted by jxl with karma: 252 on 2016-08-31 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 25631, "tags": "ros, navigation, eband-local-planner" }
How many states for two spin 1 particles?
Question: A fairly simple question: If we have a composite system of two spin-1 particles, where $J_1=1$ and $J_2=1$, how many possible states $|Jm\rangle$ are there? I know $|J_1 - J_2| \le J \le J_1 + J_2$ and $-J_1 \le m_1 \le J_1$, and likewise for $m_2$, so are there then 5 possible J's (-2,-1,0,1,2), and 9 possible m's (-2,-1,0,1,2) for each J, giving $9*4=36$ total states? Answer: When dealing with angular momentum of a combined system you have two possible bases. Let $\mathscr{H}_1$ be the Hilbert space of particle 1 and $\mathscr{H}_2$ be the Hilbert space of particle 2. On $\mathscr{H}_1$ you have the basis $|j_1 m_1\rangle$ where $j_1 = 1$ and $m_1 = -1,0,1$. On $\mathscr{H}_2$ you have the basis $|j_2 m_2\rangle$ where $j_2 = 1$ and $m_2 = -1,0,1$. In that sense both $\mathscr{H}_1$ and $\mathscr{H}_2$ are three-dimensional Hilbert spaces. On the other hand, the combined system is $\mathscr{H}_1\otimes \mathscr{H}_2$. One obvious basis for this space is that associated to the complete set of commuting observables $J_1^2, J_2^2, J_{1z}, J_{2z}$. This is the basis $$|j_1,j_2;m_1,m_2\rangle=|j_1,m_1\rangle\otimes |j_2,m_2\rangle.$$ It is clear that $j_1,j_2=1$ and $m_1,m_2=-1,0,1$. You thus have $9$ basis states. But all bases of a finite-dimensional Hilbert space have the same number of elements, which is its dimension; whatever basis set you use, it will have $9$ states. For completeness, another natural basis is that of the total angular momentum. You define $$\mathbf{J}=\mathbf{J}_1\otimes \mathbf{1}+\mathbf{1}\otimes \mathbf{J}_2$$ to get operators $J_z$ and $J^2$. They commute with $J_1^2$ and $J_2^2$ so that you get a basis $$|j_1,j_2;j,m\rangle$$ This is the basis of total angular momentum. 
It is a result then, that you can consult for example in Cohen's book, Volume 2, in the "Addition of Angular Momentum" chapter, that the possible values for $j$, the eigenvalues of $J^2$, are $$j=j_1+j_2,\; j_1+j_2-1,\;\dots,\; |j_1-j_2|$$ Here $j_1=j_2=1$, hence the possible values for $j$ are $$j=2,1,0.$$ Now for $j = 0$ you have just $m = 0$ (one state), for $j = 1$ you have $m = -1,0,1$ (three states) and for $j = 2$ you have $ m = -2,-1,0,1,2$ (five states). This gives a total of $9$ states in the basis of total angular momentum, as anticipated.
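The counting in both bases can be double-checked in a couple of lines (a trivial sketch of the arithmetic above):

```python
j1, j2 = 1, 1

# |j1 j2; m1 m2> product basis: (2*j1+1)*(2*j2+1) states
product_basis = (2 * j1 + 1) * (2 * j2 + 1)

# |j1 j2; j m> total-angular-momentum basis: sum of (2j+1) over j = |j1-j2|..j1+j2
total_j_basis = sum(2 * j + 1 for j in range(abs(j1 - j2), j1 + j2 + 1))

print(product_basis, total_j_basis)  # 9 9
```

Both counts agree at 9, which is the 1 + 3 + 5 decomposition in the answer.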
{ "domain": "physics.stackexchange", "id": 75825, "tags": "quantum-mechanics, homework-and-exercises, angular-momentum, hilbert-space, representation-theory" }
Is force quantised?
Question: Things like charge, distance, energy, etc. are quantised. Are all the phenomena around us also quantised? Like time, or say force!? I was reading this question, which goes into seeing the effect of an individual human being on Pluto. I wanted to ask: is there a limit to how small a force can be? Like if I have two particles, is there a point at which the force that they exert is so small that it is below some theoretical limit? Is there a limit to everything? Like I read about the smallest unit that is possible for length (the Planck length), and since the fastest possible speed is that of light, which is itself a limit, is everything else quantised too? I'm still in high school, and read some of these things just for fun, so I don't know a lot about these actually... :P Answer: There are a few things to clarify: EM charge is quantized, and we do have an elementary charge (that of the electron), and you would think that all elementary particles have multiples of this charge. In reality, quarks (though they do not exist freely) do carry one third (or multiples of that) of the elementary EM charge. So EM charge is quantized, but objects do not only carry whole multiples of the elementary charge (QM). Energy is quantized, and photons are the elementary quanta of EM energy. W and Z particles are the quanta of the electroweak field. Distance is not quantized (yet); we think of the fabric of spacetime as a continuous manifold. Now there seems to be a misunderstanding about the forces. Forces are mediated by virtual particles (not real particles, but a mathematical model), that is: the EM force is mediated by virtual photons (QED, quantized); the strong force by virtual gluons (pions for the nuclear force) (QCD, quantized); the weak force is mediated by W and Z bosons; gravity by virtual gravitons (hypothetical). The energy of a wave in a field (for example, electromagnetic waves in the electromagnetic field) is quantized, and the quantum excitations of the field can be interpreted as particles. 
The Standard Model contains a set of such particles, each of which is an excitation of a particular field. Thus, forces themselves are quantized, meaning that according to QFT, every force field is quantized and there is a vector boson associated with it. There is a misunderstanding with the Planck length too. The Planck length does not mean that distance is quantized. It is a scale at which the classical ideas about space and time cease to be valid and QM dominates. You become confused because they call it the quantum of length, but in reality it is just the smallest measurement of length with any meaning.
{ "domain": "physics.stackexchange", "id": 59346, "tags": "quantum-mechanics, forces, discrete" }
How to express time complexity when the exponential "e" comes into play?
Question: I am new to all of this and I am trying to understand how to define Time Complexity. I have an algorithm which performs a set of operations on inputs of different size. While timing the execution of such algorithm I have figured out that the time elapsed follows an exponential law: Time(size)=0.15*exp(0.05*size) My question: is it correct to define this complexity according to T(n)=O(e^n)? Answer: It depends on whether you're talking about $O$ or $\Theta$. The notation $O$ indicates that the complexity is "this much or smaller", and the notation $\Theta$ indicates that the complexity is "this much, no more or less". The complexity of $0.15 e^{0.05 n}$ is $\Theta(e^{0.05 n})$. The complexity is not $\Theta(e^n)$, because $e^n$ grows faster than $e^{0.05 n}$ does (the ratio between the two increases without bound). It is, in fact, true that $0.15 e^{0.05 n}$ is an $O(e^n)$ function, but this isn't a good description of the complexity, in much the same way that "more than a thousand people live in China" is not a good description of the population of China. What this comes down to is that you should state the complexity as $\Theta(e^{0.05 n})$ or as $O(e^{0.05 n})$.
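To make the distinction concrete, a small numerical sketch (my own illustration, not from the answer): the ratio of $T(n)$ to $e^{0.05n}$ stays at the constant $0.15$, so $\Theta(e^{0.05 n})$ is tight, while the ratio of $e^n$ to $e^{0.05n}$, i.e. $e^{0.95n}$, increases without bound, so $\Theta(e^n)$ would be wrong.

```python
import math

def T(n):
    # the measured timing law from the question
    return 0.15 * math.exp(0.05 * n)

for n in (10, 100, 700):
    # first ratio stays at 0.15; second ratio explodes
    print(n, T(n) / math.exp(0.05 * n), math.exp(0.95 * n))
```

(700 is kept below the double-precision overflow point of exp, around n = 745 for the second ratio.)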
{ "domain": "cs.stackexchange", "id": 6603, "tags": "asymptotics" }
DeepSORT features
Question: I'm reading about detection and tracking algorithms and I'm unclear about the DeepSORT algorithm: How does the DeepSORT algorithm get the features? Does it "hijack" the feature vector from the upstream detection algorithm (such as YOLO, or others)? This seems unreasonable to me, since not all methods would make it easy to get the feature vector. Does it create features on its own, using a pretrained CNN network? This seems to make sense, and also makes it independent of the detection algorithm. So I would imagine that DeepSORT gets the bounding box for each object, and then would need to do some image preprocessing on its own? (such as cropping/resizing the part of the image that's related to the BB?) Thanks. Answer: The answer is (2). This makes sense because the features created for the detection task are different from the ones needed for the tracking task. It is one thing to detect a car, and there might be multiple cars. You might not need features such as make, model, and the specific numbers on the license plate to determine "yes - this IS a car". But to differentiate one car from another (even passing by each other) such features might be needed. This is not generally true (there are models that do detection/tracking/re-id using a single feature network), but DeepSORT isn't one of those.
{ "domain": "datascience.stackexchange", "id": 11171, "tags": "object-detection, yolo, object-recognition" }
Time travel in the past
Question: The past weeks I watched some episodes of "Through the Wormhole" with Morgan Freeman. When talking about traveling back in time, they said you can only travel back as far as the date when the first time travel occurred. So, if for instance time travel is discovered in 2045, all time travelers (from any given time) won't be able to travel back earlier than 2045. Why is this? Can anyone explain this please? I know this is just theoretical, but still I would like to know the concept behind it. Answer: I think it's worth expanding a bit on Hal's answer to try and make it a bit less technical. We denote a point in spacetime as $(t, x, y, z)$, i.e. both the position $x, y, z$ and the time $t$. In the absence of time machines we can only pass through a spacetime point once. Of course you can go back to the point in space $x, y, z$, but only at a later time, so you can't get back to the point $(t, x, y, z)$. If it were possible to pass through $(t, x, y, z)$, go somewhere else, and then get back to $(t, x, y, z)$, your trajectory would form a loop, and we call this a closed timelike curve (or CTC). It's closed because it's a loop, and timelike is a technical term that means you don't have to travel faster than light to go round the loop. For any particular CTC there will be some earliest time that lies on the loop, so by going round the loop you can only get as far back in time as this earliest point. The point that Morgan Freeman is making is that for all the types of time machine we know about, this earliest point corresponds to the creation of the time machine. So the statement is true for all the time machines we know about. I don't know if there is a general rule that says it must be true for all time machines, but I suspect not.
What are joint angles of Kinova Jaco in home position?
Question: I want to work with the Kinova Jaco with spherical wrist in CoppeliaSim, because currently I do not have access to a physical manipulator. The model there is in the reset configuration (arm fully extended and pointing up), but I want it to be in the home configuration. Does anyone know the values of the joint positions for the Kinova Jaco with spherical wrist in the home configuration? Answer: With the quarantine becoming less strict I finally have access to a physical Jaco arm. If someone else needs this info, here are the joint positions of the Jaco arm in the home state: $$ \boldsymbol{\theta}_{home}=\begin{pmatrix} 4.8055, 2.9211, 0.9989, 4.2076, 1.4420, 1.3220 \end{pmatrix}\ \mathrm{rad} $$ And these are the joint positions in the retracted state: $$ \boldsymbol{\theta}_{ret}=\begin{pmatrix} 4.7143, 2.6191, 0.4693, 4.6728, 0.0916, 1.7412 \end{pmatrix}\ \mathrm{rad} $$ Finally, these are the joint positions in the retracted state just after you turn on the Jaco and before you do any operation: $$ \boldsymbol{\theta}_{ret\_off}=\begin{pmatrix} 4.7246, 2.6108, 0.3681, 4.6694, 0.0905, 1.7441 \end{pmatrix}\ \mathrm{rad} $$
{ "domain": "robotics.stackexchange", "id": 2320, "tags": "joint, precise-positioning" }
Ways Light Can Interact With Matter
Question: Below you will see how my "understandings" and observations are in conflict. Please look them over and let me know what I am missing. Absorption: Light can only be absorbed by an atom if it has the exact eV to match the atom's "required" eV. When this happens the electron jumps to a higher state, but falls back down and in the process emits a photon or photons that equal the eV that was absorbed. I think I understand this. The Observation: If I shine a red light in a dark room I can see basically everything, but the red light has a photon energy of, say, 1.9074 eV. It's very unlikely that all of the objects in the room have an absorption "requirement" of exactly 1.9074 eV. How is the light being reflected? I don't think bounced is the right answer. What am I missing?? The Observation: If I shine polarized light onto a polarizing filter, the polarizer will reduce the light as I rotate the filter, but it will do it with light of all sorts of energies: red, green, blue, etc. If the energy is lost as heat, wouldn't that require infrared waves to be emitted? I have looked for questions already posted on the topic, but was not satisfied. I think the answer may be that I need to consider the molecule, not just the atom. Thanks Answer: (1) Photons can be absorbed without re-emitting light. The photon's energy is absorbed and heats up the matter. As it gets hotter the atoms bump into each other and radiate infrared heat. Think of black surfaces. (2) If the walls of the room reflect any light at all, it's because the walls are not absolutely black. And the small amount of light reflected is red because that's the only light being used. (3) Polarizers do absorb photons and the energy is radiated as infrared heat.
{ "domain": "physics.stackexchange", "id": 94495, "tags": "quantum-mechanics, electromagnetic-radiation, absorption" }
How do you calculate molar specific heat?
Question: I know how to calculate the specific heat, but not molar specific heat. What is molar specific heat and how do you calculate it? Answer: Specific heat has the units of $\mathrm{J/(K\cdot kg)}$. Molar specific heat is in units of $\mathrm{J/(K\cdot mol)}$, and is the amount of heat needed (in joules) to raise the temperature of $1$ mole of something, by $1$ kelvin (assuming no phase changes). So, the conversion factor you need, from dimension analysis, will have unit $\pu{kg/mol}$. $\pu{kg/mol}$ is the SI unit for molar mass. Multiply the specific heat by the molar mass to get the molar specific heat. For example, the molar mass of water is $\approx \pu{0.018 kg/mol}$. The specific heat of water is $\approx 4186\ \mathrm{J/(K\cdot kg)}$. So the molar specific heat of water is $4186\ \mathrm{J/(K\cdot kg)} \times \pu{0.018 kg/mol} \approx 75\ \mathrm{J/(K\cdot mol)}$
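The conversion above as a quick arithmetic sketch:

```python
c_specific = 4186     # J/(K*kg), specific heat of water
molar_mass = 0.018    # kg/mol, molar mass of water

# J/(K*kg) * kg/mol = J/(K*mol), per the dimensional analysis in the answer
c_molar = c_specific * molar_mass
print(round(c_molar, 1))  # 75.3
```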
{ "domain": "chemistry.stackexchange", "id": 209, "tags": "thermodynamics, heat" }
Is sum of forces always zero along the axis that momentum is conserved?
Question: Lately I've been approaching problems with conservation of momentum of systems, and I was wondering: if I draw the free-body diagram, write the equations, and then add them together, will I always find that the sum of the forces is zero along the axis where momentum is conserved? Is that right? Answer: Net force is the rate of change of momentum with respect to time: for a system of constant masses $m_i$ with velocities $v_i$ and accelerations $a_i$: $\vec F = \Sigma \vec F_i = \frac{d}{dt}(\Sigma m_i\vec v_i) = \Sigma m_i\vec a_i$ If the total tracked momentum $\Sigma m_i\vec v_i$ does not change, the net force is zero. Note that if we are tracking all the momenta of all the masses in a given interaction (including the planet, if the interaction includes friction with the ground, air resistance, gravity, etc.), the net force and the change of total system momentum will always be zero, since there aren't any un-tracked objects left to be applying a force from outside. Every force prompts an equal and opposite force as indicated by the Third Law. This principle is conservation of momentum.
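As a sanity check of $\vec F = d\vec p/dt$ summed over the system, here is a toy sketch (my own numbers, not from the answer): in an isolated 1-D elastic collision the total momentum before and after is identical, so the net force on the system along that axis is zero.

```python
m1, m2 = 2.0, 3.0
u1, u2 = 4.0, -1.0                   # velocities before the collision

# standard 1-D elastic-collision outcome velocities
v1 = ((m1 - m2) * u1 + 2 * m2 * u2) / (m1 + m2)
v2 = ((m2 - m1) * u2 + 2 * m1 * u1) / (m1 + m2)

p_before = m1 * u1 + m2 * u2
p_after  = m1 * v1 + m2 * v2
print(p_before, p_after)  # 5.0 5.0
```

The internal contact forces are equal and opposite (Third Law), so they cancel in the sum and the total momentum stays at 5.0 kg m/s.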
{ "domain": "physics.stackexchange", "id": 98408, "tags": "newtonian-mechanics, forces, momentum" }
Nuclear Fusion: What is the cause of transport? (Plasma leakage)
Question: Context: I've been reading about The Hairy Ball Theorem, which shows that the ideal shape for magnetic confinement fusion has to be a torus. Given that tokamaks use toroidal magnetic fields, I assumed that plasma leakage then wouldn't be a problem. What is the cause of plasma leakage? (If I misunderstood anything, feel free to correct me) Answer: What is the cause of plasma leakage? There is no one cause; there are dozens of reasons. First off there is a natural leakage inherent to any real-world fluid due to a pure random-walk process. The plasma particles are orbiting the long axis of the torus while spinning around the "lines of force", thereby tracing out helical paths. The helical diameter is larger than the inter-orbital spacing, meaning any single particle will overlap the paths of others during its motion, and this means there will be multiple chances for scattering as they orbit and collide. This causes the particles to undergo a random-walk process that eventually takes them outside the boundary of the confinement field, and/or into the walls of the reactor. Basic math suggests this rate, now known as "classical diffusion", is low enough that a reactor would work. There is a dependency on the square of the field strength, so it seemed that even low-power machines would be useful testing systems, because as long as they worked a little one could then build a machine that would work completely by scaling up the magnets for production. So in the 1950s you see many small-scale tabletop devices being built. When they were run, they found the actual confinement time was dramatically lower than classical diffusion suggested, and increasing the magnet power had no effect. This was determined to be due to natural instabilities in the plasma itself. To illustrate a simple example, consider a plasma torus where, purely at random, one section of the plasma has slightly higher density.
When a current is run through the plasma, as it is in the pinch machines, the current creates a field that pulls the plasma down into a filament. However, as one section has slightly higher density, the field in this region is higher, so it collapses faster, which increases the density, which increases the field... This instability, the "sausage", is inherent to the plasma. Similar examples include the kink, the flute (aka interchange) and various higher-order MHD modes where standing waves in the plasma cause "pump out". It took about 15 years to come up with ways to solve these issues, which were first demonstrated with convincing effect in the T-3 tokamak in 1968. The key was to use more external magnetic field compared to the field from the internal current, which causes the overall long-axis path to be more "spirally" and thus smoothes out the instabilities before they can build up. As new toks came online, it was soon noticed that yet new instabilities were being seen. A key one among these is now known as the banana orbit. Consider a single particle orbiting the reactor; when it is at the outside of the torus the magnetic field is lower than it is when it moves toward the inside of the curve - simply due to geometry, the magnets are closer together on the smaller radius. If the particle has a velocity below a threshold value, it will reflect off the increasing field in the same fashion as in a magnetic mirror. Now you have low-energy particles bouncing back and forth within limited regions of the reactor, which from above look like the shape of a banana. The higher-energy ions, the ones you need for fusion, keep scattering off of these low-energy ones. So then we added more complexity. One is to "scrape off" ions near the outside of the confinement area, another is to divert them into a cooler, typically liquid lithium in modern designs, while other fields and heaters can be used to control the action of these ions and use them constructively. 
Today we have yet more instabilities to deal with, and these are truly destructive. There are conditions that form that cause electrons to bunch up and create channels that accelerate the electrons to relativistic speeds. These "disruptions" are extremely annoying, in one case burning a hole into the vacuum chamber. Controlling these is a major ongoing area of research in the field. And on top of all of this, you still have that random walk going on.
{ "domain": "physics.stackexchange", "id": 70820, "tags": "plasma-physics, fusion" }
Do gene expression levels necessarily correspond to levels of protein activation?
Question: I have seen a lot of research into molecular mechanisms of diseases/phenotypes use measures of RNA as a 'proxy' for the level of protein available in the cell. Is this actually valid? My problem with the assumption that RNA levels correlate with that of the active product (i.e. the protein) is that a lot of post translational regulation occurs, including co-factor binding and phosphorylation, to name but 2. Does anyone know of any studies that have looked into the correlation between RNA levels and protein levels, and separately into the correlations between RNA levels and active protein? It makes sense to me that RNA would correlate with protein certainly, but whether this relates to the proteins active function is what I wonder - i.e. there could be a pool that is replenished as and when the protein levels drop, but the proteins are only actually active for short periods in response to specific stimuli. So, does anyone know of any studies that have looked into the correlation between RNA levels and protein levels, and separately into the correlations between RNA levels and active protein? Update (04.07.12) I have not accepted any answers as yet because none address my question about levels of protein activation, but I concede to Daniel's excellent point that proteins are not all activated in the same way; some are constantly active, some require phosphorylation (multiple sites?), some binding partners... etc! So a study looking at 'global' activation is not yet possible. Yet I was hoping that someone may have read some specific examples. I today found an unpublished review by Nancy Kendrick of 10 studies that have looked at the correlation between mRNA and protein abundance - still not relating to activation. However she finishes the paper as follows; The conclusion from the ten examples listed above seems inescapable: mRNA levels cannot be used as surrogates for corresponding protein levels without verification. 
If this is her conclusion about protein levels, then any correlation between protein activation and mRNA abundance seems unlikely (as a rule; some protein levels do correlate with the RNA - see the paper). I am still interested in any answers that give any information about specific examples of protein activation and mRNA levels - it seems highly unlikely there are no such studies, but I have been as yet unable to find any! Answer: It has been well established that mRNA abundance serves as a poor proxy for protein abundance in most cases. This paper on yeast and this paper on cancer both establish this, although using older techniques (SAGE and microarrays, respectively), while this more recent review discusses the topic in light of more recent technologies (e.g. RNA-seq). Perhaps the difficulty with exploring the correlation between mRNA levels and levels of active, mature proteins is that we still know so little about so many proteins and what makes them active and mature versus inactive, premature, etc. Not all proteins need post-translational modifications to be active, but some do. Currently, this is investigated on a very detailed protein-by-protein basis (as far as I know), so gathering enough data for a large-scale study could take a long time. On the other hand, there are a variety of (increasingly affordable) high-throughput methods for measuring the abundance of thousands of RNA species simultaneously. I think people are using mRNA levels to estimate expression not because it's the most biologically cogent course of action, but rather because it is much easier, more affordable, and more high-throughput (which is all the rage these days).
{ "domain": "biology.stackexchange", "id": 402, "tags": "proteins, mrna" }
Why Is Capacitance Not Measured in Coulombs?
Question: I understand that the simplest equation used to describe capacitance is $C = \frac{Q}{V}$. While I understand this doesn't provide a very intuitive explanation, and a more apt equation would be one that relates charge to area of the plates and distance between them, I'm having trouble understanding it in general. Capacitance seems to be describing, well, the capacity of two plates to store charge (I understand that the electric field produced between them is generally the focus more so than the actual charge). Shouldn't it just be measured in units of charge such as coulombs? I'm sure this is due to a lack of more fundamental understanding of electric potential and potential difference but I'm really not getting it. Answer: An analogy here would be to a pressure vessel and asking what mass of air will fit inside. While the tank has a fixed volume, the amount of air that will go inside depends on the pressure you that you use to force it in. For quite a while the relationship is linear. At double the pressure, you have double the mass of air. Similarly, the capacitor doesn't have a fixed amount of charge that will fit. The amount depends on the electrical "pressure" (voltage) that is used. Actually your initial equation is the useful one. Unless we're constructing one, we usually do not care about the physical particulars of a capacitor. Instead we want to know how much charge will move if we change the voltage. For a "larger" capacitor (higher capacitance), more charge will fit at a given voltage.
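To make the linearity of $Q = CV$ concrete, here is a tiny numerical sketch (the 10 µF value is just an illustration, not from the answer):

```python
C = 10e-6  # an illustrative 10 uF capacitor

# Charge stored at two different voltages: Q = C * V
Q_at_5V = C * 5.0    # 50 uC
Q_at_10V = C * 10.0  # 100 uC

# Doubling the voltage doubles the stored charge, just like doubling
# the pressure doubles the mass of air forced into the tank.
print(Q_at_10V / Q_at_5V)
```

The capacitance itself (coulombs per volt) is the fixed property of the device; the charge is not.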
{ "domain": "physics.stackexchange", "id": 90952, "tags": "electrostatics, charge, voltage, capacitance, dimensional-analysis" }
the ROC of a Z-transform for shifted signal
Question: I have got two different answers for the ROC of the signal. In that pic, I have solved it using two methods, but I'm getting different answers. Which one is correct? Also, please explain how to find the ROC of a shifted signal. Answer: The $\mathcal{Z}$-transform of your signal is $$X(z)=1+\frac12 z^{-1}\tag{1}$$ (as you figured out by yourself). The ROC is obviously the whole $z$-plane except for $z=0$, because there's a pole at $z=0$. Note that the ROC of the $\mathcal{Z}$-transform of any finite-length sequence must be the whole $z$-plane, possibly except for $z=0$ (for sequences with non-zero values for $n>0$), and $z=\infty$ (for sequences with non-zero values for $n<0$). So by merely looking at the sequence you could determine its ROC. The reason why your second method of determining the ROC doesn't work is because when you add the two $\mathcal{Z}$-transforms you get a pole/zero cancellation: $$X(z)=\frac{1}{1-\frac12 z^{-1}}-\frac{\frac14 z^{-2}}{1-\frac12 z^{-1}}=\frac{(1-\frac12 z^{-1})(1+\frac12 z^{-1})}{1-\frac12 z^{-1}}=1+\frac12 z^{-1}$$ So the restricted ROCs of both functions caused by the pole don't show up in the final result due to this pole/zero cancellation. Note that you could express the unit impulse by $\delta[n]=a^nu[n]-a^nu[n-1]$ and obtain arbitrary ROCs of the separate $\mathcal{Z}$-transforms of the two terms on the right-hand side, depending on the choice of $a$. However, the ROC of the $\mathcal{Z}$-transform of $\delta[n]$ (which is $1$) is obviously the whole $z$-plane, including $z=0$ and $z=\infty$. In general, if you have a $\mathcal{Z}$-transform $X(z)$ and you shift the corresponding sequence to the right by $k$ samples, $X(z)$ gets multiplied by $z^{-k}$. So all poles and zeros of $X(z)$ remain unchanged, and a pole at $z=0$ and a zero at $z=\infty$ are added, both with multiplicity $k$. Of course they can cancel with already existing poles or zeros. 
Analogously, if you shift the sequence to the left by $k$ samples, $X(z)$ gets multiplied by $z^k$, which adds a $k$-fold pole at $z=\infty$ and a $k$-fold zero at $z=0$. As an example, take the sequence $a^nu[n]$ with $a \neq 0$. Its $\mathcal{Z}$-transform is $$X(z)=\frac{z}{z-a}$$ with a pole at $z=a$ and a zero at $z=0$. If you shift the corresponding sequence to the right by one sample, the resulting $\mathcal{Z}$-transform is $z^{-1}X(z)$. The original zero at $z=0$ gets cancelled by the new pole at $z=0$, and you're left with the original pole at $z=a$ and a new zero at $z=\infty$. If you shift the sequence to the left by one sample you get $zX(z)$, which gives a double zero at $z=0$, the original pole at $z=a$, and an additional pole at $z=\infty$.
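The pole/zero cancellation in the answer above is easy to check numerically. Writing both terms over the common denominator gives $X(z) = \frac{1 - \frac14 z^{-2}}{1 - \frac12 z^{-1}}$, so polynomial division of the coefficient arrays should return exactly $1 + \frac12 z^{-1}$ with zero remainder. A quick NumPy sketch (not part of the original answer):

```python
import numpy as np

# Coefficients of the combined numerator and the common denominator,
# read as ascending powers of z^{-1}:
numerator = [1.0, 0.0, -0.25]   # 1 - (1/4) z^{-2}
denominator = [1.0, -0.5]       # 1 - (1/2) z^{-1}

# np.polydiv performs ordinary polynomial long division on the coefficient
# arrays; since the division here is exact (remainder 0), reading the arrays
# as powers of z^{-1} gives the correct quotient.
quotient, remainder = np.polydiv(numerator, denominator)

print(quotient)   # [1.  0.5]  ->  X(z) = 1 + (1/2) z^{-1}
print(remainder)  # zero remainder: the pole at z = 1/2 cancels exactly
```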
{ "domain": "dsp.stackexchange", "id": 3249, "tags": "z-transform" }
Why must the angular part of the Schrodinger Equation be an eigenfunction of L^2?
Question: I was reading about the solution to the Schrodinger Equation in spherical coordinates with a radially symmetric potential, $V(r)$, and the book split the wavefunction into two parts: an angular part and a radial part. When dealing with the angular part of $\psi$, the book claims that the angular part "must" be an eigenfunction of $L^2$ (the square of the angular momentum operator) and since the eigenfunctions of $L^2$ are the spherical harmonics $Y_{lm}$, then the angular part of $\psi$ is just the spherical harmonic equations. I am confused as to why the angular part of $\psi$ must be an eigenfunction of $L^2$. Answer: It has to do with the Laplacian $(\nabla^2)$. When trying a separable solution $$\psi(r,\theta,\phi) = R(r)\Theta(\theta)\Phi(\phi)$$ you will get two ODEs, a radial and an angular equation. They must be equal to separation constants. It turns out the angular part is the $L^2$ operator. You can rewrite $$H \psi = E \psi$$ as $$\left( \frac{-\hbar^2}{2 m} \nabla^2 + V(r) \right) \psi = E \psi $$ But $$\nabla^2 = \frac{1}{r^2}\partial_r ( r^2 \partial_r ) + \frac{1}{r^2\sin{\theta}} \partial_{\theta} (\sin{\theta}\, \partial_{\theta}) + \frac{1}{r^2\sin^2{\theta}} \partial_{\phi \phi} = \frac{1}{r^2}\partial_r ( r^2 \partial_r ) - \frac{L^2}{\hbar^2 r^2} $$ where $$L^2 = -\hbar^2\left[\frac{1}{\sin{\theta}} \partial_{\theta} (\sin{\theta}\, \partial_{\theta}) + \frac{1}{\sin^2{\theta}} \partial_{\phi \phi}\right]$$ Now by inserting the separable ansatz for $\psi$ into our Schrodinger equation and finding separation constants for the ODEs you will get a second-order harmonic oscillator ODE for $\Phi(\phi)$, which proves that you will have integer values of $m$, and the associated Legendre equation that will yield $P^m_l (\cos{\theta})$. Combined these will give you the spherical harmonics, which must be solutions to this angular equation, and conversely eigenfunctions of $L^2$ with eigenvalues $\hbar^2\, l(l+1)$.
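As a sanity check (my addition, not part of the original answer), SymPy can verify symbolically that a spherical harmonic is an eigenfunction of the angular operator $L^2/\hbar^2$ with eigenvalue $l(l+1)$, here for $l=2$, $m=1$:

```python
import sympy as sp

theta, phi = sp.symbols('theta phi', real=True)
l, m = 2, 1

# Explicit form of the spherical harmonic Y_l^m(theta, phi)
Y = sp.Ynm(l, m, theta, phi).expand(func=True)

# Angular part of the Laplacian with the sign flipped, i.e. L^2 / hbar^2
L2Y = -(sp.diff(sp.sin(theta) * sp.diff(Y, theta), theta) / sp.sin(theta)
        + sp.diff(Y, phi, 2) / sp.sin(theta)**2)

# Eigenvalue relation: (L^2 / hbar^2) Y_lm = l(l+1) Y_lm
assert sp.simplify(L2Y - l * (l + 1) * Y) == 0
print("Y_2^1 is an eigenfunction with eigenvalue", l * (l + 1))
```

The same check passes for other small $(l, m)$ pairs, though `simplify` gets slower as $l$ grows.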
{ "domain": "physics.stackexchange", "id": 7361, "tags": "quantum-mechanics, angular-momentum, schroedinger-equation, spherical-harmonics" }
fuerte machine tag changes
Question: REP 124 describes the changes made to the machine tag syntax in the ROS Fuerte release [1], and states: The env-loader file must be an executable script that accepts variable-length arguments. After performing environment setup, the script must then execute its arguments as a command. ROS installations come with a default environment loader file. Can someone give an example of what such a script would typically look like? [1] http://ros.org/reps/rep-0124.html#machine-tag Originally posted by Steven Bellens on ROS Answers with karma: 735 on 2012-04-05 Post score: 4 Answer: I don't really get how this script works. Executing it once prints the first echo statement, executing it a second time only prints the second statement. Must be my lack of bash script understanding. Anyway, I put together a simple script that does the job for me: #!/bin/bash export ROS_ROOT=<ros-root> export ROS_PACKAGE_PATH=<ros-package-path> export ROS_IP=<ip> export ROS_MASTER_URI=<master-uri> exec "$@" This script should be placed on the remote computer. In the machine file, remove all definitions of ros-* variables, and use the env-loader="" attribute, e.g. <launch> <machine name="<machine-name>" env-loader="<path-to-env-script>" address="<remote-ip>" user="<username>"> </machine> </launch> Originally posted by Steven Bellens with karma: 735 on 2012-04-05 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 8876, "tags": "ros-fuerte" }
How to retrieve data from xv-11 lidar using rviz?
Question: I want to work with my xv-11 lidar but I am a newbie in ros, rviz. I connected the lidar successfully and can see the surroundings and data from it. Now I want to make a simple system: if it detects an object closer than 1 meter, a red LED connected to an Arduino will light up; if there is no object closer than 1 meter, a green LED will be on. How can I define my function for getting that data from rviz? Can you at least give me a roadmap? I am using an odroid xu4, lidar xv-11, ubuntu 18.04 with ros melodic. Originally posted by dta800 on ROS Answers with karma: 1 on 2018-08-02 Post score: 0 Answer: Hi, I suppose the lidar you use provides the LaserScan.msg type, which has the fields given here. For your purpose, to find points closer than 1 m you can use a callback like the following (red, green and boundary are assumed to be class members so the callback can update them): YourClass::YourClass(){ red = false; green = false; boundary = 1.0; lidar_sub_ = node_handle_ptr_->subscribe("lidar_topic", 1, &YourClass::callback, this); } void YourClass::callback(const sensor_msgs::LaserScanConstPtr &scan){ red = false; green = true; for (size_t i = 0; i < scan->ranges.size(); i++){ if(scan->ranges[i] < boundary){ ROS_INFO("A POINT CLOSER THAN BOUNDARY FOUND"); red = true; green = false; break; } } } so after each call your green and red flags will be updated. You can use those flags to do the task you described. Originally posted by Fetullah Atas with karma: 819 on 2020-02-04 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 31447, "tags": "ros-melodic" }
Add a folder via JSON
Question: I have read that a function should take at most two parameters and be about 10 lines long. Now I am having trouble achieving this. It is a C# controller action that returns JSON. I have too many parameters and too many lines in the function and I would like to reduce them. I reduced it already, but what can I do better? I need all the parameters, because the DB needs them. public JsonResult AddFolder(int id, string structureId, string newFolderName, string parent) { int folderNumber = 0; //parent folder id int parentFolder = Convert.ToInt32(GetFolderId(id).Split('_')[0]); //get current period int period = GetLastPeriod(); //actual ISO date string folderDate = Helper.ConvertToIsoDate(DateTime.Now.ToString().Split(' ')[0]); if (period == -1) { throw new Exception(ErrorMessages.PeriodNotFound); } bool publish; //implement publish true when it is parent - false when it is child try { if (parent == "#") { publish = true; structureId = AddNewParentFolderId(folderNumber); } else { publish = false; structureId = AddNewChildFolderId(structureId, parentFolder, parent); } } catch (Exception ex) { Logging.LogToFile(ex, 3); return Json(new { success = false, message = ErrorMessages.CountFolderNotPossible }, JsonRequestBehavior.AllowGet); } if (folderNumber < 0) { Logging.LogToFile("MrNumber has to be > 0", 3); base.Response.StatusCode = 500; return Json(new { errorAddFolderMsg = ErrorMessages.MrNumberNotFound }, JsonRequestBehavior.AllowGet); } id = _folderRepository.AddFolder(structureId, parent, period, folderNumber, newFolderName, publish, folderDate); if (id == -1) { Logging.LogToFile("Folder ID is -1", 3); base.Response.StatusCode = 500; return Json(new { success = false, errorAddFolderMsg = ErrorMessages.AddFolderNotPossible }, JsonRequestBehavior.AllowGet); } return Json(new { id = id, folderId = structureId, newFolderName = newFolderName, parent = parent, period = period }); } Furthermore, I am not sure about my naming. 
Answer: To amplify what's already been stated in comments, the concept of having two parameters and approximately ten lines of code per function (it's called a method in C# by the way), is really just a guideline. Its purpose is to make you think about the following points (there are more, but these are some of the primary points): Am I following OOP? Does my method have a single responsibility? Is my method easy to understand? Simply put, your method should serve one purpose, it should be concise and easy to maintain, and since you're using a language that supports OOP, you should use it when needed. Parameter Count I wouldn't personally say that you have too many parameters for your method. The golden rule of thumb is actually a cap of seven, but this isn't software driven, it's a limitation for most human beings: Countless psychological experiments have shown that, on average, the longest sequence a normal person can recall on the fly contains about seven items. This limit, which psychologists dubbed the "magical number seven" when they discovered it in the 1950s, is the typical capacity of what's called the brain's working memory. There are also several posts on Stack Exchange (here and here are two good ones) asking about the topic of parameter count. The general consensus is that as you approach four parameters, you're either starting to do too much, or you should be creating a model to represent those parameters. In your particular case, I think you're fine, count wise. Line Count Similar to your parameter counts, I can't say you have too many lines. However, I will say that your method isn't focused. It currently has the following responsibilities: - Get a new structure ID. - Attempt to add a new folder to the repo. - Generate error response. - Generate full response. Why not separate those into their own methods? 
Data Models With all of the above in mind, why not create two new data models: public sealed class FolderSummary { public int FolderId { get; set; } public string StructureId { get; set; } public string FolderName { get; set; } public string Parent { get; set; } public int Period { get; set; } } public sealed class RepoResponse<T> { public bool Success { get; set; } public string Message { get; set; } public T Data { get; set; } } You'll come to find that you can utilize FolderSummary to call AddFolder, and make your return data strongly typed: public JsonResult AddFolder(FolderSummary summary) { ... return Json(new FolderSummary(...)); } Additionally, you can utilize RepoResponse to handle error data and the full response at the same time, adding even further consistency to your output: public JsonResult AddFolder(FolderSummary summary) { ... catch (Exception ex) { Logging.LogToFile(ex, 3); return Json(new RepoResponse<string> { Success = false, Message = ErrorMessages.CountFolderNotPossible, Data = structureId }, JsonRequestBehavior.AllowGet); } ... return Json(new RepoResponse<FolderSummary> { Data = new FolderSummary(...) }); Results I took the liberty of implementing everything I previously mentioned to demonstrate it in action: public JsonResult AddFolderToRepo(FolderSummary summary) { // Nothing prior to this check impacts the check itself, so do it first and get it out of the way since you're throwing an exception. if (!TryGetLastPeriod(out int period)) throw new Exception(ErrorMessages.PeriodNotFound); // Create a variable for storing the folder number. int folderNumber = 0; // Create a generic response to handle errors and the full data as part of the JsonResult. RepoResponse<object> result = null; // Get the structure ID. result = TryGetStructureId(folderNumber, summary); if (result.Success) { // Save the new structure ID and attempt to add the folder. 
string newStructureId = result.Data; result = TryAddFolderToRepo(result.Data, folderNumber, summary); if (result.Success) { int newId = result.Data; // Update the result data to the new folder summary result.Data = new FolderSummary { FolderId = newId, StructureId = newStructureId, FolderName = summary.FolderName, Parent = summary.Parent, Period = period }; } } // Return the structured response. if (result.Success) return Json(result); else return Json(result, JsonRequestBehavior.AllowGet); } private bool TryGetLastPeriod(out int period) { period = GetLastPeriod(); return period >= 0; } private RepoResponse<string> TryGetStructureId(int folderNumber, FolderSummary summary) { string resultMessage = string.Empty; string newStructureId = string.Empty; try { // Attempt to get the folder ID from the summary. if (int.TryParse(GetFolderId(summary.Id).Split('_')[0], out int parentFolderId)) { // Attempt to get the structure ID. if (summary.Parent.Equals("#", StringComparison.InvariantCultureIgnoreCase)) newStructureId = AddNewParentFolderId(folderNumber); else newStructureId = AddNewChildFolderId(summary.StructureId, parentFolderId, summary.Parent); } } catch (Exception e) { Logging.LogToFile(e, 3); resultMessage = ErrorMessages.CountFolderNotPossible; } return new RepoResponse<string> { Success = !string.IsNullOrWhiteSpace(newStructureId), Message = resultMessage, Data = newStructureId }; } private RepoResponse<int> TryAddFolderToRepo(string newStructureId, int folderNumber, FolderSummary summary) { string resultMessage = string.Empty; int newId = -1; try { if (folderNumber > 0) { string isoDate = Helper.ConvertToIsoDate(DateTime.Now.ToString().Split(' ')[0]); bool shouldPublish = summary.Parent.Equals("#", StringComparison.InvariantCultureIgnoreCase); newId = _folderRepository.AddFolder(newStructureId, summary.Parent, summary.Period, folderNumber, summary.FolderName, shouldPublish, isoDate); } else { Logging.LogToFile("MrNumber has to be > 0", 3); Response.StatusCode = 
500; resultMessage = ErrorMessages.MrNumberNotFound; } } catch (Exception e) { Logging.LogToFile(e, 3); resultMessage = ErrorMessages.AddFolderNotPossible; } return new RepoResponse<int> { Success = newId >= 0, Message = resultMessage, // Return folder number if we blew up on folder number. Data = folderNumber > 0 ? newId : folderNumber }; } Note: There may be spelling and syntactical errors in the code snippets due to me typing this up quickly. However, I believe the message is clear, so I'll entrust you to resolve those.
{ "domain": "codereview.stackexchange", "id": 41957, "tags": "c#, json" }
Convert raw depth data from depth image to meters (Kinect v2)
Question: Hello, I am using Kinect v2 and I am trying to convert its raw values from the depth image to meters. I am subscribing to the image_depth_rect topic. Firstly, I would like to ask if I need to calibrate the depth image with the color image I get from the sensor? I have searched for similar issues, but all I found is info about the sensor from the previous version. I also found some stuff outside of ROS which didn't help. Is there an equation to convert the unsigned 16-bit data that I get into meters? Currently I am using the following callback function for the depth image topic: void kinectdepthCallback(const sensor_msgs::ImageConstPtr& msg2){ cv_bridge::CvImagePtr cv_ptr2; try{ cv_ptr2 = cv_bridge::toCvCopy(msg2, sensor_msgs::image_encodings::TYPE_16UC1); } catch (cv_bridge::Exception& e){ ROS_ERROR("Could not convert from '%s' to '16UC1'.", e.what()); return; } cv::imshow("Depth Image",cv_ptr2->image); cv::waitKey(20); ROS_INFO("The object is in depth: %d",cv_ptr2->image.at<int>(x,y)); } where (x,y) are some points of the detected object but I get wrong results. To be more specific, I detect an orange marker but I cannot get its depth from the Kinect v2. I present the following images: Color image with detected object: Thresholded image: Depth image is black: Terminal messages from the above code: I don't know what is wrong and I get zero depth even if the object is at approximately 19 - 20 cm. I have tried to find the depth at bigger depths like 60 cm, but I was still getting 0 results. Thanks for answering in advance, Chris
Unless Kinect v2 and its tools do this automatically, which is something I don't know... Comment by Abdu on 2017-07-17: Hi, I have simple python code for detecting and tracking the object based on color by using webcam. My question is how can I use the same code but by using Kinect v2 (NOT webcam). can u plz help with this, and tell me how to use Kinect v2 as webcam in linux? or share your method ? Comment by patrchri on 2017-07-17: You want to use Kinect v2 in linux or in ROS? This topic has to do with integrating Kinect v2 into ROS as a node and manipulating its data with cv_bridge. Take a look here for that. Comment by patrchri on 2017-07-17: If you want to use Kinect v2 in linux, you need to install the drivers from here and ask in another more relative forum for that. A simple googling of your issue will also help you with your problem also. Good luck! Comment by patrchri on 2017-07-17: For integrating Kinect v2 into ROS, also check this. Comment by Abdu on 2017-07-17: Thanks for you guys, I know how to use kinect v2 with ROS, but I don't know how to use it as a webcam, because my code is in python and using webcam, but I want to modify it so it would use kinect v2 not webcam. That's why I ask Chris to pass the code, cuz its for color detection by using kinect v Answer: Better calibration of the sensor and <ushort> casting will do the work. The raw data from the Kinect v2 are in millimeters. Originally posted by patrchri with karma: 354 on 2016-11-27 This answer was ACCEPTED on the original site Post score: 0
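Following the accepted answer (the Kinect v2 publishes raw depth as unsigned 16-bit values in millimeters), the conversion to meters is a single scaling. A minimal NumPy sketch with made-up raw values; treating zero as "no reading" is a common convention for depth images:

```python
import numpy as np

# Hypothetical raw depth readings as they come off the sensor (uint16, mm).
# A value of 0 typically means no depth was measured for that pixel.
raw_depth = np.array([[0, 190, 600],
                      [1000, 4500, 8000]], dtype=np.uint16)

# Millimeters -> meters; cast first so the division isn't integer math.
depth_m = raw_depth.astype(np.float32) / 1000.0

# Mask out invalid (zero) readings instead of treating them as 0 m.
valid = raw_depth > 0
print(depth_m[valid].min())  # the 190 mm reading, i.e. ~0.19 m
```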
{ "domain": "robotics.stackexchange", "id": 25973, "tags": "ros, kinect, depth-image, ros-kinetic" }
Methods for standardizing / normalizing different rank scales
Question: I know there is the usual subtract-the-mean-and-divide-by-the-standard-deviation approach for standardizing your data, but I'm interested to know if there are more appropriate methods for this kind of discrete data. Consider the following case. I have 5 items that have been ranked by customers. The first 2 items were ranked on a 1-10 scale. Others are 1-100 and 1-5. To transform everything to a 1 to 10 scale, is there another method better suited for this case? If the data has a central tendency, then the standard approach would work fine, but what about when you have more of a halo effect, or some more exponential distribution? Answer: For item-ratings type of data with the restriction that an item's rating should be between 1 and 10 after transformation, I would suggest using a simple re-scaling, such that the item's transformed rating $x_t$ is given by: $$x_t = 9\left(\frac{x_i - x_{min}}{x_{max} - x_{min}}\right) + 1$$ where $x_{min}$ and $x_{max}$ are the minimum and maximum possible rating in the specific scale for the item, and $x_i$ is the item rating. In the case of the above scaling, the transformation applied is independent of the data. However, in the normalization, the transformation applied is dependent on the data (through mean and standard deviation), and might change as more data becomes available. Section 4.3 on page 30 of this document shows other ways of normalizing in which your restriction (transforming to the same absolute scale) might not be preserved.
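The re-scaling formula above is a one-liner to implement. A small sketch (the function name and the generalized new_min/new_max parameters are my own; with the defaults it reduces exactly to the $9(\cdot)+1$ form in the answer):

```python
def rescale(x, x_min, x_max, new_min=1.0, new_max=10.0):
    """Linearly map a rating x from [x_min, x_max] onto [new_min, new_max]."""
    return (new_max - new_min) * (x - x_min) / (x_max - x_min) + new_min

# A 1-100 rating of 100 maps to the top of the 1-10 scale...
print(rescale(100, 1, 100))  # 10.0
# ...and the midpoint of a 1-5 scale maps to the midpoint of 1-10.
print(rescale(3, 1, 5))      # 5.5
```

Note this transformation is fixed by the scales alone, so it never shifts as new ratings arrive, unlike mean/standard-deviation normalization.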
{ "domain": "datascience.stackexchange", "id": 331, "tags": "statistics" }
Removing an element and the element following from an ArrayList
Question: I have an ArrayList of Strings. I want to delete all elements with a particular value and the element immediately after that one. I have an ArrayList with the elements "Ape", "Bear", "Cat", "Dog", "Emu", "Fox", and "Gopher". Any time I see "Cat", I want to delete "Cat" and the element immediately following it. So in this example, I want to remove "Cat" and "Dog". But I don't know if it is going to be "Dog", "Dingo", "Duck", or some other animal. The following code works: import java.util.ArrayList; public class TestList { public static void main(String[] args) { ArrayList<String> arrayList = new ArrayList<>(); arrayList.add("Ape"); arrayList.add("Bear"); arrayList.add("Cat"); arrayList.add(Math.random() > 0.5 ? "Dog" : "Duck"); arrayList.add("Emu"); arrayList.add(Math.random() > 0.5 ? "Fox" : "Ferret"); arrayList.add("Gopher"); for (int i = 0; i < arrayList.size(); i++) { if (arrayList.get(i).equals("Ape")) { arrayList.set(i + 1, "Beagle"); //Changes "Bear" to "Beagle" } else if (arrayList.get(i).equals("Cat")) { arrayList.remove(i); //Removes "Cat" arrayList.remove(i); //Removes the String following "Cat" } else if (arrayList.get(i).equals("Emu")) { arrayList.remove(i); //Removes "Emu" arrayList.remove(i); //Removes the String following "Emu" } } } } I know the order of the list will never change unless I am the one to change it, and I know that I will never search for the last element ("Gopher" in this case) or an element which immediately follows an element I have searched for ("Dog/Duck" or "Fox/Ferret" in this case). This code just feels dangerous to me, but I don't know why. Is this code dangerous? If so, why? Answer: This code just feels dangerous to me, but I don't know why. Is this code dangerous? If so, why? 
If the method modifying the list is called from multiple threads, that's obviously dangerous: without synchronization, one thread might not see changes done by other threads, and you have no control over the order in which the different threads modify the list. In addition, the program depends on some pre-conditions: A next element must exist after the "special" ones used by your conditions. Otherwise you will get IndexOutOfBoundsException Elements after the "special" ones are deleted or modified, for no apparent logical reason Your code looks hypothetical, not something realistic. If your real code is organized in a way that the preconditions make perfect sense, then your operations are not necessarily dangerous. If your code is not organized in a way that these operations are obviously correct, for example the class and method names involved don't make these operations obvious, then this can be dangerous. When code does something that is not obvious from its public interface, then what happens inside can be unexpected to users, lead to bugs, and be prone to errors. In addition, it's better to refer to types by interfaces instead of implementations. And, instead of adding elements to an ArrayList one by one, it can be ergonomic to use Arrays.asList, for example: List<String> list = new ArrayList<>(Arrays.asList("Ape", "Bear", "Dog"));
{ "domain": "codereview.stackexchange", "id": 14298, "tags": "java, concurrency" }
Which areas of TCS are better for a math bachelor?
Question: I am graduating from a mathematics bachelor program and I've been accepted into a master's degree in mathematics. But I am not sure I want to pursue math, hence I am looking for another field in which I can do research. I am going to study potential theory in the math master's degree, so I am good at analysis, particularly in measure theory. I also took some statistics courses, Java and MATLAB. Moreover, I really like computers, so I educated myself on some other stuff (I do some game modding and Photoshop). Some friends of mine suggested machine learning but they are undergrads and they don't really know the subject well. So my question is what fields/areas of computer science I can work on for a PhD and what I should learn. Or, in this respect, can I work on CS (PhD) as a math grad? (I've read the similar questions but they are only similar. So I appreciate any help.) Edit: In the math M.S. program I only have one non-technical elective and I will take some additional undergrad CS lectures. So, as @RB suggested, I am asking for the areas that require relatively less CS and more math. But note that I am willing to study/work hard. Answer: It has been argued that Theoretical Computer Science is a branch of mathematics, so it seems to me that any answer to this question would necessarily be primarily opinion-based. That said, in your place (and this is an uninformed opinion) I would steer clear of subfields in which a tremendous volume of technical work has been done, and in which coding plays a greater role. The barrier to entry in such subfields will be quite high for a math major. So for example, I would be cautious of anything related to computer vision and speech recognition. I would also be cautious of subfields in which the objective is perhaps a bit fuzzy, such as AI and machine learning. More controversially, I think that perhaps you should steer clear of quantum computation for this reason. 
Conversely, subfields in which the objective is well-defined and in which coding plays a lesser role have a lower entry barrier for a math major. For example, complexity theory, graph algorithms, information theory (perhaps especially for you!), concurrency, cryptography, and maybe even things like clustering and compressed sensing.
{ "domain": "cstheory.stackexchange", "id": 2732, "tags": "soft-question, advice-request, career" }
Network error when trying to launch a node on an other machine
Question: Hello, My distribution is ros-kinetic and I am using ubuntu 16.04 on both machines. I am trying to run a node on a second machine (a realsense camera node) by adding these lines in my launch file: <group> <machine name="suitee" address="192.168.0.100" default="true" /> <include file="$(find s_bringup)/launch/rs_aligned_depth.launch" machine="suitee"/> </group> I have followed the network setup tutorial here: http://wiki.ros.org/ROS/NetworkSetup Specifically, I have the following variables set on my master machine (where roscore is run): export ROS_HOSTNAME=192.168.0.142 export ROS_MASTER_URI=http://${ROS_HOSTNAME}:11311 I have these set on my slave machine: export ROS_IP=192.168.0.100 export ROS_MASTER_URI=http://192.168.0.142:11311 I also respectively modified the /etc/hosts file on both machines: For master: 192.168.0.100 suitee For slave: 192.168.0.142 suitee2 I can ping both machines correctly from each other. However when I try to run my launch file, I get the following error: started roslaunch server http://192.168.0.142:42619/ remote[192.168.0.100-0] starting roslaunch remote[192.168.0.100-0]: creating ssh connection to 192.168.0.100:22 /usr/lib/python2.7/dist-packages/Crypto/Cipher/blockalgo.py:141: FutureWarning: CTR mode needs counter parameter, not IV self._cipher = factory.new(key, *args, **kwargs) remote[192.168.0.100-0]: failed to launch on suitee: Unable to establish ssh connection to [192.168.0.100:22]: Server u'192.168.0.100' not found in known_hosts [192.168.0.100-0] killing on exit unable to start remote roslaunch child: 192.168.0.100-0 The traceback for the exception was written to the log file I have tried to remove the known_hosts files and connect via ssh, and I can connect from both machines using: ssh suitee@192.168.0.100 ssh suitee2@192.168.0.142 EDIT: I can correctly see all topics on both machines, and if I launch the node manually on the slave card, I can see the published topics correctly and it seems to have no problems finding 
the master. Can anyone help me on this ? Originally posted by Alrevan on ROS Answers with karma: 126 on 2019-12-12 Post score: 0 Answer: I finally found the answer for this issue: I had to add the tag "user" into the launch file: <group> <machine name="suitee" address="192.168.0.100" default="true" user="suitee" /> <include file="$(find s_bringup)/launch/rs_aligned_depth.launch" machine="suitee"/> </group> I also had to follow the answer here: https://answers.ros.org/question/41446/a-is-not-in-your-ssh-known_hosts-file/ Originally posted by Alrevan with karma: 126 on 2019-12-16 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 34139, "tags": "ros, ros-kinetic, network, network-setup, beginner" }
libboost_system.so.1.55 vs. 1.54 on armhf (rpi3)
Question: Hi, I'm compiling ROS Kinetic from source for the Raspberry Pi 3. Recently I started getting a lot of these warnings: Scanning dependencies of target rosout [ 50%] Building CXX object CMakeFiles/rosout.dir/rosout.cpp.o [100%] Linking CXX executable /home/pi/ros_catkin_ws/devel_isolated/rosout/lib/rosout/rosout /usr/bin/ld: warning: libboost_system.so.1.54.0, needed by /usr/lib/gcc/arm-linux-gnueabihf/4.9/../../../arm-linux-gnueabihf/libconsole_bridge.so, may conflict with libboost_system.so.1.55.0 /usr/bin/ld: warning: libboost_thread.so.1.54.0, needed by /usr/lib/gcc/arm-linux-gnueabihf/4.9/../../../arm-linux-gnueabihf/libconsole_bridge.so, may conflict with libboost_thread.so.1.55.0 [100%] Built target rosout These don't seem to affect anything, but I'd like to understand more about why these started appearing. Edit: I have headers for 1.55.0 installed via apt, and libraries for 1.54.0 and 1.55.0: pi@raspberry:~$ find / -name libboost_system.so* | less /usr/lib/arm-linux-gnueabihf/libboost_system.so.1.54.0 /usr/lib/arm-linux-gnueabihf/libboost_system.so /usr/lib/arm-linux-gnueabihf/libboost_system.so.1.55.0 pi@raspberry:~$ ls -l /usr/lib/arm-linux-gnueabihf/libboost_system.so lrwxrwxrwx 1 root root 25 Sep 24 2014 /usr/lib/arm-linux-gnueabihf/libboost_system.so -> libboost_system.so.1.55.0 Originally posted by clyde on ROS Answers with karma: 1247 on 2017-10-31 Post score: 0 Original comments Comment by gvdhoorn on 2017-11-04: Do you have multiple versions of Boost installed? Comment by clyde on 2017-11-05: 1.55.0 and 1.54.0, see edits Comment by clyde on 2017-11-06: Hmmm, looking at /var/log/apt/history.log, it appears that both versions were installed by rosdep. First "apt-get install -y libboost-all-dev", which pulled in all of 1.55, then "apt-get install -y libconsole-bridge-dev" which pulled in 2 libraries from 1.54. Answer: Multiple boost versions on the same machine can work, but typically only if you keep them strictly separate. 
See #q274016 for another question about that and some thoughts. Did you build the 'other Boost' from sources? It can't have just "turned up" on its own. Originally posted by gvdhoorn with karma: 86574 on 2017-11-05 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 29247, "tags": "ros, boost, raspberrypi, armhf" }
Lost track of the hydrogen in step 3 of the Krebs cycle
Question: I'm trying to follow the Krebs cycle step by step, accounting for the number of hydrogens in each molecule before and after reactions. I've run into a problem starting from step 3. Isocitrate has five hydrogens. It undergoes oxidisation to Alpha-Ketoglutarate, with NAD+ being reduced to NADH/H+. I would say this process entails the loss of two hydrogens from Isocitrate, and the gain of those hydrogens in NADH/H+. What I don't understand (and this lack of understanding cascades through steps 4 and 5) is that Alpha Ketoglutarate has four hydrogens. My arithmetic says that there should be only three. Where does the fourth hydrogen come from? Note: If I start from Step 6, my accounting method works, all the way round to Step 3. Step 6: Succinate (four hydrogens) is oxidised to Fumarate (two hydrogens), reducing FAD to FADH2. Step 7: Fumarate (two hydrogens) is reduced to Malate (four hydrogens), by the addition of H2O, gaining two hydrogens. Step 8: Malate (four hydrogens) is oxidised to Oxaloacetate (two hydrogens), reducing NAD+ to NADH/H. Etc. I came across a similar question at https://biology.stackexchange.com/questions/93775/what-exactly-happens-to-hydrogen-atoms-in-step-4-of-citric-acid-cycle but as this was about step 4, and for me the problem occurs one step earlier. Answer: Your problem stems from the statement "I would say this process entails the loss of two hydrogens from Isocitrate, and the gain of those hydrogens in NADH/H+." This statement is incorrect. In isocitrate, the carboxylic acid functional groups are deprotonated, as would be the case at neutral pH. The oxidation reaction thus involves loss of only $\ce{H-}$ to NAD. There is no loss of $\ce{H+}$, because that has already been removed. If you start with isocitric acid, which is isocitrate in its neutral form with $\ce{-CO2H}$ instead of $\ce{-CO2-}$, then you will need to remove $\ce{H-}$ and $\ce{H+}$ to accomplish the reaction and your above statement will be correct. 
The textbook image you showed takes a confusing approach of implicitly associating an $\ce{H+}$ with isocitrate even though it is not actually on the molecule, so that "NADH/H+" is indicated as a product even though only NADH has actually been produced in the reaction shown.
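The hydrogen bookkeeping in the answer can be checked mechanically by balancing atoms and charge for the half-reaction isocitrate³⁻ → α-ketoglutarate²⁻ + CO₂ + H⁻, using the anion formulas assumed here (C6H5O7³⁻ and C5H4O5²⁻, i.e. the fully deprotonated forms the answer argues for). A sketch:

```python
# Atom/charge bookkeeping for step 3 of the Krebs cycle (illustrative).
# ASSUMED species (fully deprotonated anions, as argued in the answer):
#   isocitrate           C6H5O7, charge -3  (5 hydrogens, as in the question)
#   alpha-ketoglutarate  C5H4O5, charge -2  (4 hydrogens)
#   CO2                  charge  0
#   hydride (to NAD+)    H1,     charge -1

def total(side):
    """Sum element counts and net charge over a list of (formula, charge)."""
    atoms, charge = {}, 0
    for formula, q in side:
        for el, n in formula.items():
            atoms[el] = atoms.get(el, 0) + n
        charge += q
    return atoms, charge

isocitrate = ({"C": 6, "H": 5, "O": 7}, -3)
akg        = ({"C": 5, "H": 4, "O": 5}, -2)
co2        = ({"C": 1, "O": 2}, 0)
hydride    = ({"H": 1}, -1)

lhs = total([isocitrate])
rhs = total([akg, co2, hydride])
print(lhs == rhs)  # True: balanced -- only H- leaves, no separate H+ removal
```

The balance closes with a single hydride transferred to NAD⁺, which is exactly the answer's resolution: the "missing" fourth hydrogen never existed on the anion, because the carboxylate protons were already gone at neutral pH.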
{ "domain": "chemistry.stackexchange", "id": 14810, "tags": "redox, hydrogen" }
Is superstrings on the $E_8$ torus dual to bosonic string theory on the Leech lattice torus?
Question: Two important unimodular lattices are $E_8$ and the Leech lattice. One can take 10D superstring theory and compactify it over the $E_8$ torus. One can also take 26D bosonic string theory and compactify it over the Leech lattice $\Lambda_{24}$. In both cases one ends up with a 2-dimensional theory. (Due to the various dualities, each of the 10D superstring theories is probably dual to the others when compactified down to 2 dimensions.) The question is then whether this pair of 2D field theories one ends up with is equivalent in some way. Yes, one started with N=1 supersymmetry and has fermions, but in 2D the distinction between bosons and fermions is less important (due to e.g. bosonization). Also, with heterotic string theory one can think of the left-moving modes as living in 26 dimensions anyway. We know the second one has connections with the Monster group. So either the first one is equivalent and also has connections with the Monster group, or it would be connected to some other group. So the question is: "Is there a duality between 10D superstring theory on the $E_8$ torus and 26D bosonic string theory on the Leech lattice torus?" I think the easiest way to disprove this would be to compare the degrees of freedom of the lowest energy level particles. Answer: The answer is no. The reason is supersymmetry. No matter how you compactify the theory of the bosonic string, tachyons sit ubiquitously in the spectrum. On the other hand, the five $d=10$ superstring theories are tachyon free; this property is preserved under compactification on a flat torus. The point is that dualities can't relate quantum, consistent, stable, UV-complete and anomaly-free backgrounds (namely, supersymmetric string compactifications) with theories that, indeed, don't fully exist as quantum mechanical systems ($d=26$ bosonic string theory).
{ "domain": "physics.stackexchange", "id": 72221, "tags": "string-theory, duality" }
What are pseudoknots?
Question: I'm trying to get my head around what a pseudoknot is and how I can identify them given some RNA string. For example, suppose I have a string s = CGUUGUGUACACGAUAGUACAU. Suppose the two longest substring inversions are identified in bold CGUUGUGUACACGAUAGUACAU, which form the stack of the hairpin, and everything after the first substring and before the start of the last substring makes up the hairpin as illustrated below.
 U G
U   G
 A U
  C
C - A
G - C
U - G
A
U A G U A C A U
By definition, a pseudoknot is a secondary structure formed by pairing between a loop and a region located outside of the stem flanking the loop. In this example, we would start the pairing using some substring of the outer tail and start matching bases around the hairpin. My question: is the pseudoknot the alignment of these base pairs, and what is the significance of this alignment? Answer: is the pseudoknot the alignment of these base pairs I'm not exactly sure what is meant by alignment, but if you mean base pairing from the flanking RNA to the loop of the hairpin you have depicted, then you would have an H-type pseudoknot. Since you are interested in predicting these structures, it's important to note that a nucleic acid sequence capable of forming the necessary base-pairs for a pseudoknot structure does not imply that the sequence indeed forms a pseudoknot. It's likely that the sequence has a minimum free energy (MFE) structure that is not pseudoknotted. So the base-pairing capability is necessary but not sufficient for predicting these structures. Take a look at the following (full-text) articles on pseudoknot prediction for additional information: Computational Analysis of Noncoding RNAs DotKnot: pseudoknot prediction using the probability dot plot under a refined energy model A fast and robust iterative algorithm for prediction of RNA pseudoknotted secondary structures KnotSeeker: Heuristic pseudoknot detection in long RNA sequences
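As the answer notes, base-pairing capability is necessary but not sufficient for a pseudoknot. That necessary condition can be checked mechanically: look for a substring of the region outside the stem whose reverse complement occurs in the hairpin loop. A minimal sketch (the loop/tail coordinates below are illustrative guesses for this sequence, not a real structure prediction):

```python
# Necessary-condition check for an H-type pseudoknot (illustrative only):
# does any k-mer of the loop have a reverse complement in the downstream tail?

def revcomp(rna):
    """Reverse complement of an RNA string (Watson-Crick pairs only)."""
    return rna.translate(str.maketrans("AUGC", "UACG"))[::-1]

def pseudoknot_candidates(loop, tail, k=3):
    """k-mers of the loop that could base-pair with some k-mer of the tail."""
    hits = []
    for i in range(len(loop) - k + 1):
        kmer = loop[i:i + k]
        if revcomp(kmer) in tail:
            hits.append((i, kmer, revcomp(kmer)))
    return hits

s = "CGUUGUGUACACGAUAGUACAU"
# Assumed coordinates for illustration: stem 5' arm s[0:3], loop s[3:10],
# stem 3' arm s[10:13], unpaired tail s[13:].
loop, tail = s[3:10], s[13:]
print(revcomp("CGU"))                    # "ACG": the stem pairing in the example
print(pseudoknot_candidates(loop, tail))
```

Finding a hit here only means the sequence *could* form the extra helix; as the answer stresses, the MFE structure may still be pseudoknot-free.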
{ "domain": "biology.stackexchange", "id": 3532, "tags": "molecular-biology, bioinformatics" }
Make the tree from a database into a tree of Java
Question: There is a task to get the data from the database (tree) and translate them into Java. They must maintain their properties (each entry has parent and descendants). I'm taking data from the database using Statement and ResultSet. The output is an ArrayList of RawNode: public class RawNode { int id; int parentId; String text; /* getters & setters here */ } Now we need to convert this array into the likeness of a tree. Here is how I do it: public List<Node> makeNewTree(ArrayList<RawNode> _rawNode) { List<Node> nodes = new ArrayList<>(_rawNode.size()); Map<Node, RawNode> rawNodesByNodes = new HashMap<>(_rawNode.size()+10, 0.98f); Map<Integer, Node> nodesById = new HashMap<>(_rawNode.size()+10, 0.98f); try { for (RawNode rawNode : _rawNode) { Node node = new Node(); node.setId(rawNode.getId()); node.setText(rawNode.getText()); rawNodesByNodes.put(node, rawNode); nodesById.put(rawNode.getId(), node); nodes.add(node); } for (Node node : nodes) { RawNode rawNode = rawNodesByNodes.get(node); Integer parentId = rawNode.getParentId(); if (parentId == null) { node.setParent(null); } else { Node parent = nodesById.get(parentId); node.setParent(parent); } } for (Node node : nodes) { Node parentNode = node.getParent(); if (parentNode == null) { continue; } parentNode.addChild(node); } } catch (Exception ex) {...} return nodes; } Where Node is: public class Node { private int id; private Node parent; private String text; private List<Node> children = new ArrayList<>(); } Isn't that code redundant? Is there a more elegant solution? Answer: OR-mapping You fall into the object-relational impedance problem. RawNode seems to be the same as Node but they are totally different in purpose. This doesn't even change if you use an OR-Mapper like Hibernate. The following definitions assume you do not want to have an anemic domain model. RawNode (mapping object) It provides a concrete representation of what came from the database. It is a datastructure only, with no logic.
It abstracts from the raw resultset where you access columns via column names or indices. A RawNode is used as a datastructure to communicate with the database through the DAOs in both directions. A RawNode object is a value object. It has no assertion of consistency. You may check the values for consistency (e.g. Java Validation API) and get a list of constraint violations. But the current state of the object may be inconsistent. An OR-Mapper can do validations, but as they are not able to enforce business rules I would not rely on this. Equality of these types of objects should be checked either on all values or on none. Node (business object) Equality of a business object is checked on a globally unique id. This is a real business object. It has the assertion to be immutable in the current version. If the non-unique values are different but the id is the same, you have a different version of the business object. But it is the same object. A business object is ALWAYS consistent with respect to the information it provides. If it is not, this is an error. Business objects enforce business rules and consistency when you try to make a change. They will process validation and structural checks to keep the whole system consistent. Useful assertions As you remap the business object to the mapping object in the DAO, you can be sure that the communication object is consistent as the business object was consistent. As you remap an unmodified mapping object from the database to a business object, you can assume consistency. Your code Avoid continue Multiple return, break and continue are not refactoring-friendly. They make it hard to extract methods if you want to subdivide a method. What kind of exception do you expect? You have all transient datastructures available. Do you expect a NullPointerException because a parent that was defined could not be found? If that is the case you should fix the algorithm that produces that inconsistency and not try to straighten it out for further algorithms.
The decision to fix the data depends on your influence on the data holder, of course. Extract methods Extract the following responsibilities into separate methods: 1. Build Nodes without parents and children from RawNodes 2. Build a temporary map with Node by id 3. Set the parent-child relationships
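A compact sketch of those three extracted responsibilities as separate functions (written in Python for brevity; the names are illustrative, not the original Java API):

```python
# Three extracted responsibilities for turning flat rows into a tree
# (illustrative sketch of the refactoring suggested above).

class Node:
    def __init__(self, id, text):
        self.id, self.text = id, text
        self.parent, self.children = None, []

def build_nodes(raw_rows):
    """Step 1: build Nodes without parent/child links from raw rows."""
    return [Node(r["id"], r["text"]) for r in raw_rows]

def index_by_id(nodes):
    """Step 2: temporary map from id to Node."""
    return {n.id: n for n in nodes}

def link(raw_rows, nodes):
    """Step 3: set the parent/child relationships."""
    by_id = index_by_id(nodes)
    for raw, node in zip(raw_rows, nodes):
        parent = by_id.get(raw["parent_id"])   # None for root rows
        node.parent = parent
        if parent is not None:
            parent.children.append(node)
    return nodes

rows = [{"id": 1, "parent_id": None, "text": "root"},
        {"id": 2, "parent_id": 1, "text": "child"}]
tree = link(rows, build_nodes(rows))
print([c.text for c in tree[0].children])  # ['child']
```

Each function has one responsibility, so no try/catch around the whole pipeline and no `continue` is needed.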
{ "domain": "codereview.stackexchange", "id": 24359, "tags": "java, tree" }
Inverse Kinematics - How to only find a unique joint angle solution in 4 dof robot?
Question: I have to develop an algorithm to determine the necessary joint angles to achieve a desired TCP position and orientation in a 4-joint manipulator. I have come across a concept called "degeneracies", and I have to think of a scheme that will handle degenerate points so that only one joint angle solution exists for any TCP position and orientation. For example: keep the shoulder tilted up at all times. How can I go about this? I understand the equations, it's mainly the "only one joint angle solution" I'm having a hard time with. Answer: If you have a 4-degrees-of-freedom system you will most probably solve the inverse kinematics equations for the $X, Y$ and $Z$ position and, additionally, one orientation; let's call it $\alpha$. If (and only if) the chosen orientation and the robot structure allow an analytical solution, you can proceed as follows: As you mentioned, you will get more than one solution for one set of inputs, so you must have a strategy to select one of the solutions. You can derive the equations to have the following form: $q_{11} = f_{11}^{-1}(x, y, z, \alpha)$ $q_{12} = f_{12}^{-1}(x, y, z, \alpha)$ Here you decide which value of $q_1$ to use, e.g. based on its sign. $ q_1 = \left\{ \begin{array}{c} q_{11} \text{ if } q_{11}>0;\\ q_{12} \text{ otherwise } \end{array} \right. $ Usually, for an analytical solution the next angle which gets calculated is $q_{3}$. You can formulate the equations so that they are dependent on $q_1$ also. This way the $q_1$ choice made earlier will be plugged into the calculations for the remaining joint angles. $q_{31} = f_{31}^{-1}(x, y, z, \alpha, q_1)$ $q_{32} = f_{32}^{-1}(x, y, z, \alpha, q_1)$ Also in this case you can make a choice of which solution to use: $ q_3 = \left\{ \begin{array}{c} q_{31} \text{ if } q_{31}>0; \\ q_{32} \text{ otherwise } \end{array} \right. $ You can continue solving the inverse kinematics problem by calculating $q_2$.
Here no multiple solutions are expected if $q_3$ has already been chosen: $q_2 = f_{2}^{-1}(x, y, z, \alpha, q_1, q_3)$ and you can continue this way until you have all angles.
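The branch-selection idea above can be sketched concretely on a planar 2-link arm (an illustrative stand-in for the 4-dof case, not the asker's robot). The elbow angle has two analytical solutions $q_2 = \pm\arccos(c_2)$; always picking the non-negative ("elbow-up") branch makes the inverse map unique:

```python
import math

# Branch selection in analytical IK, sketched on a planar 2-link arm.
# Two solutions exist for the elbow angle; we always pick q2 >= 0
# ("elbow-up"), so every reachable (x, y) maps to exactly ONE (q1, q2).

def ik_elbow_up(x, y, l1=1.0, l2=1.0):
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(c2) > 1:
        raise ValueError("target out of reach")
    q2 = math.acos(c2)   # acos returns the non-negative branch
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2),
                                       l1 + l2 * math.cos(q2))
    return q1, q2

def fk(q1, q2, l1=1.0, l2=1.0):
    """Forward kinematics, used to verify the chosen branch."""
    return (l1 * math.cos(q1) + l2 * math.cos(q1 + q2),
            l1 * math.sin(q1) + l2 * math.sin(q1 + q2))

q1, q2 = ik_elbow_up(1.0, 1.0)
print(fk(q1, q2))   # recovers (1.0, 1.0) up to rounding
```

The same pattern repeats per joint in the 4-dof case: resolve each multi-valued inverse with a fixed rule, then feed the choice into the remaining equations.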
{ "domain": "robotics.stackexchange", "id": 1778, "tags": "robotic-arm, inverse-kinematics, manipulator" }
wheeled vehicle forward velocity from RPM brushless data
Question: I'm trying to hack an electric RC car with the goal of making it autonomous, as part of my PhD research. The mounted ESC is able to send back telemetry data, including the spinning velocity (in RPM) of the brushless motor, to be used as a tracking system. I'm wondering if it could be possible to estimate the vehicle forward velocity from this information, noting that the vehicle has three differentials, which negates a trivial relation between wheel velocities and motor rotations. I'm aware that there is no way to detect slipping, but the car will work with modest velocities and smooth accelerations, therefore an estimate of the wheel velocities should still be significant information in e.g. a Kalman filter that fuses it with IMU data. I've searched for anything related in literature, but nothing seems to be released on this topic. Note: the vehicle differentials are supposed to be configured as open differentials, but I am not sure of this. Answer: This project does exactly that on an RC car. The author is a top competitor in the DIYRobocars community; it's the blue car in this video. He uses tachometry from the brushless motor, an IMU and visual odometry for localization. I don't know the code well enough to point you to any specific file.
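For straight-line driving with (near-)open differentials, the average wheel speed is just the motor speed divided by the total gear reduction, which gives a first-order velocity estimate. A sketch with HYPOTHETICAL drivetrain constants (on a real car they would come from the spec sheet or calibration):

```python
import math

# Rough forward-velocity estimate from brushless motor RPM (illustrative).
# GEAR_RATIO and WHEEL_DIAMETER are ASSUMED values for this sketch only.
# With open differentials and straight-line motion, the differentials
# average the wheel speeds, so motor RPM / total reduction is a usable
# mean wheel speed (slip and cornering are ignored here).

GEAR_RATIO = 9.0          # motor revolutions per wheel revolution (assumed)
WHEEL_DIAMETER = 0.11     # meters (assumed)

def forward_velocity(motor_rpm):
    wheel_rps = motor_rpm / GEAR_RATIO / 60.0      # wheel revolutions/second
    return wheel_rps * math.pi * WHEEL_DIAMETER    # meters/second

print(forward_velocity(9000))  # ~5.8 m/s with the assumed constants
```

This is exactly the kind of noisy-but-unbiased measurement that a Kalman filter can fuse with IMU accelerations, as the question suggests.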
{ "domain": "robotics.stackexchange", "id": 2285, "tags": "wheeled-robot, brushless-motor, esc, estimation" }
Is there a way to localise my robot using a combination of the navigation stack and robot localization package
Question: I'm using ROS melodic on a Raspberry Pi 3. Basically I need to keep track of the robot as it travels. Using the ROS navigation stack I can track where the robot is, but I have to manually give it the initial pose. I would like to do this with GPS and an IMU. I'll be using the robot in the presence of a magnetic field so I don't think the IMU will give correct data, so I imagine I would need to use this in combination with a lidar scanner. I'm quite new to ROS so any direction would be quite helpful. Originally posted by liambroek on ROS Answers with karma: 5 on 2021-01-25 Post score: 0 Answer: You can use robot_localization package, the link below is a similar topic fusing IMU and GPS. https://answers.ros.org/question/200071/how-to-fuse-imu-gps-using-robot_localization/ Originally posted by AmirSaman with karma: 130 on 2021-02-03 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 36009, "tags": "localization, navigation, ros-melodic, raspberrypi, 2d-pose-estimate" }
Calculating travel time of an object on the inclined plane
Question: Let's consider that we have a frictionless inclined plane and we pushed a small object up the plane with initial velocity $v_0$. The travel time while going up can be calculated from the following formula: $l = v_0 t_1 + \frac{at_{1}^2}{2}$, where $l$ is the distance the object traveled on the plane and $a$ is the acceleration. Now let's consider that the object stopped and started to slide down. The traveling time down will be given by $l = \frac{at_{2}^2}{2}$. $l$ obviously is the same, as well as $a$ (there is no friction, only gravity acts on the object). From these equations $t_1 \neq t_2$. Now let's think from the velocity point of view. In the moving-up case $0 = v_0 - at_1$ and in the down case $v_0 = 0 - at_2$; $v_0$ is the same in both cases since there is no friction and energy is conserved. Calculated this way, $t_1 = t_2$. I don't see where I have a mistake. To me both lines of logic seem correct and valid. Please help me with the confusion, thank you! Answer: I believe you made some mistakes involving signs. In the first place, for consistency between the equations, the velocity should be written $v = v_0 + a t$ (notice the $+$ sign instead of $-$). That being said, first fix some sign convention for the directions, e.g. positive up the plane and negative down (with this convention $a$ is negative.) Then, for the slide down, $-l = \frac{1}{2} a t_2^2$ (again, notice the $-$ sign in the LHS.) This is because $l$ is a positive number (the length of the plane) and a $-$ sign must be included to account for the fact that the RHS is negative, because $a$ is. I believe that after modifying this, there should be no inconsistency anymore.
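The answer's point is easy to check numerically with one consistent sign convention (positive up the incline, so $a < 0$ throughout); the sample values below are arbitrary:

```python
import math

# Frictionless incline with consistent signs: positive direction up the slope.
# v0 > 0, and a < 0 for the ENTIRE motion (gravity component along the slope).

g, theta, v0 = 9.81, math.radians(30.0), 4.0   # arbitrary sample values
a = -g * math.sin(theta)                        # a < 0, both up and down

t1 = -v0 / a                                    # time to stop: 0 = v0 + a*t1
l = v0 * t1 + 0.5 * a * t1 * t1                 # distance covered up the slope
t2 = math.sqrt(2 * l / (-a))                    # slide back: -l = (1/2)*a*t2^2
print(t1, t2)                                   # equal up to rounding
```

With the signs handled consistently, both bookkeeping routes give $t_1 = t_2 = v_0/|a|$, so the apparent contradiction in the question was a sign slip, as the answer says.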
{ "domain": "physics.stackexchange", "id": 76660, "tags": "homework-and-exercises, newtonian-mechanics" }
Calculating the distance crossed by a ball under changing gravitational acceleration
Question: Suppose a ball of finite mass is taken to such a height in space that the gravitational acceleration decreases significantly. Now, as you let go of the ball, it should head straight down towards the surface of the earth, and as it covers distance and gets closer to the earth's surface, its acceleration should increase. How would I, in this case, be able to calculate the distance traveled by the ball in a certain time? Answer: $$\begin{align}F&=-\frac {GMm}{x^2}\\ a&=-\frac {GM}{x^2}\\ v\frac {dv}{dx}&=-\frac {GM}{x^2}\\ \int v\;{dv}&=\int-\frac {GM}{x^2}\;dx\\ \frac {v^2}2&=\frac {GM}x\\ v&=\sqrt\frac {2GM}x\\ \int\sqrt x\;dx&=\int\sqrt {2GM}\;dt\\ \frac 23x^{\frac 32}&=t\sqrt {2GM} \end{align}$$ Here, you have an equation relating distance and time in a gravitational field.
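Inverting the last relation gives $x(t) = \left(\tfrac{3}{2}\sqrt{2GM}\,t\right)^{2/3}$, which can be sanity-checked numerically against the speed law $v = \sqrt{2GM/x}$ (units with $GM = 1$ chosen purely for illustration):

```python
# Numerical sanity check of the closed form derived above, in units GM = 1:
#   (2/3) x^(3/2) = t * sqrt(2GM)   <=>   x(t) = (1.5 * sqrt(2GM) * t)**(2/3)
# Its time derivative should reproduce v = sqrt(2GM / x) at every t.

GM = 1.0

def x_of_t(t):
    return (1.5 * (2 * GM) ** 0.5 * t) ** (2.0 / 3.0)

t, h = 1.0, 1e-6
v_numeric = (x_of_t(t + h) - x_of_t(t - h)) / (2 * h)   # central difference
v_formula = (2 * GM / x_of_t(t)) ** 0.5
print(v_numeric, v_formula)   # the two agree to high precision
```

Note the derivation drops the integration constant, i.e. it assumes the ball starts from rest at infinity; for a drop from a finite height the constant must be kept and the algebra gets messier.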
{ "domain": "physics.stackexchange", "id": 64798, "tags": "homework-and-exercises, newtonian-mechanics, acceleration, free-fall" }
Are mitochondria dead?
Question: In the video "What is Life? Is Death Real?", the subject of mitochondria is raised at 2:58. At 3:12, the narrator says "[mitochondria] are not alive any more: they are dead." What line of reasoning leads to this claim? When I search for mitochondria are dead on Google, I get many links about the role of mitochondria in cell death, but I don't see anywhere that the assertion in this video is discussed. Answer: SolarLunix posted an excellent answer detailing the criteria for being classified as "alive", and showed that by those criteria, mitochondria could be considered "dead". However, I would argue that the narrator's statement in your video does not make any sense. The currently-accepted theory of the evolution of mitochondria (and possibly other organelles) from free-living prokaryotes is called symbiogenesis and postulates that the mitochondrial progenitors began living symbiotically with the precursors to eukaryotes some 1.5 billion years ago. Life originated on Earth about 3.5 billion years ago, so eukaryotic cells (carrying mitochondria) have been around for a little less than half of that time. In that incredibly large span of time (if you counted 1 number per second, you'd reach 1.5 billion in about 47.5 years), what used to be an independent organism (we assume) has become an organelle - an integral part of the eukaryotic cell. It absolutely could not survive outside the cell on its own, and relies on signals from the cell (mainly, the cell's energy needs as detected by "sensor" proteins that respond to key molecules like glucose and ATP) to reproduce. Therefore, it is most appropriate to think of mitochondria simply as another organelle, just like the endoplasmic reticulum, lysosomes, or the nucleus. It doesn't make sense to think of those other structures as "alive" or "dead", just as it doesn't make sense to think of mitochondria in the same way.
Yes, they used to be independent, but are no longer; instead they are a well-integrated part of the whole cell.
{ "domain": "biology.stackexchange", "id": 4322, "tags": "mitochondria, life" }
Doubt on "wave-particle duality" in quantum mechanics
Question: I'm reading the book $[1]$ (which is not a popular-science book, but rather a student-friendly introduction to Quantum Mechanics). Jakob $[1]$ then writes: Many people unfamiliar with quantum mechanics may wonder how an electron could be a particle and a wave at the same time. Please ignore this kind of idle speculation. The situation is not as crazy as some would lead you to believe. Electrons, photons and all other elementary particles are particles. Period. This is what every experiment tells us. Our detectors make "click, click, click"$^{(*)}$. Waves are merely one convenient mathematical tool for describing the behavior of these particles. $^{(*)}$Here the author is talking about the double slit experiment using electrons. Considering the author's point, I conclude that when books (mostly modern physics texts, and also some introductory texts on quantum mechanics) state the famous idea "the nature of particles in quantum mechanics has a dual behavior: an electron can be a wave and a particle at the same time! This is called particle-wave duality", they actually mean: Electrons, photons and all other elementary particles are particles. Period. This is what every experiment tells us (...) Waves are merely one convenient mathematical tool for describing the behavior of these particles. So, can I say that particle-wave duality is mostly a mathematical formalism rather than a huge physical fact? $$ --\circ --$$ $[1]$ Jakob Schwichtenberg. No-Nonsense Quantum Mechanics. No-nonsense Books. 2ed. 2020. Answer: The definition of particles in QFT is a bit more technical than our usual notion of particles. A particle is an excitation of a field. For example, the Higgs boson is an excitation of the Higgs field. With this notion, we can say electrons are particles. However, the wave notion is also built into the excitation part of the definition.
In the usual sense, we cannot say that the electron is only a particle and the wave nature is just a mathematical tool. This is not a correct statement. In some experiments it behaves as a particle, and in some other experiments it behaves as a wave. This is because neither description is the full-fledged QFT description of electrons. The price we pay is that we have to choose the electron either as a particle or as a wave according to our needs, while in truth they are not two different things. For example, if you consider that the electron is a particle, you cannot have the double-slit experiment (just put a detector on one of the slits and the pattern will be destroyed), and if you consider the electron as a wave in the usual sense, the photoelectric effect cannot be explained. While the author is correct in saying that electrons are particles, his emphasis on the wave nature being just a mathematical convenience is a bit of an oversimplification to make the book readable to beginners, a trait that is often found in these books but can be harmful sometimes.
{ "domain": "physics.stackexchange", "id": 73403, "tags": "quantum-mechanics, wavefunction, wave-particle-duality" }
How can imparted energy be a stochastic quantity?
Question: It may be a silly question, but I have a dosimetry course and it started by defining deposited energy and imparted energy, and for both it says that they're stochastic quantities. The mathematical definitions are: $$\epsilon_i = \epsilon_{in} - \epsilon_{out} + Q$$ where $\epsilon_i$ is the deposited energy, $\epsilon_{in}$ is the energy of the incident particle, $\epsilon_{out}$ is the sum of the energies of the particles after the interaction, and $Q$ is the change in rest-mass energy. And for the imparted energy, it's simply the sum of the deposited energies over every interaction in the volume $V$. $$\epsilon = \sum_i \epsilon_i$$ Now I understand how the deposited energy could be stochastic, because we cannot predict which interaction will take place, but given that we have the information about the volume, we should be able to model the imparted energy. So how could the imparted energy be stochastic? After all, this is the objective of dosimetry. Is there something I'm missing, or is there a problem with how I'm seeing things? P.S.: it's a bit weird that there isn't a dosimetry tag. Answer: Averaged over small volumes or a small number of interactions, the quantity is stochastic. Averaged over a sufficiently large volume or number of interactions, it becomes a well-behaved average quantity. This is just the law of large numbers ... Or am I missing the point of your question?
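The answer's point, the law of large numbers, is easy to demonstrate with a toy Monte Carlo. The exponential deposit distribution below is an arbitrary illustration, not a physical interaction model:

```python
import random

# Toy illustration of the answer's point: each single-interaction energy
# deposit is random, but the imparted energy per interaction, averaged over
# many interactions, converges to a well-defined mean (law of large numbers).
# The deposit distribution here is ARBITRARY, not a physical model.

random.seed(42)
mean_deposit = 2.0   # arbitrary units

def imparted_energy(n_interactions):
    """Sum of n random single-interaction deposits in the volume."""
    return sum(random.expovariate(1.0 / mean_deposit)
               for _ in range(n_interactions))

few = imparted_energy(5) / 5               # very noisy per-interaction average
many = imparted_energy(100_000) / 100_000  # close to mean_deposit
print(few, many)
```

A small volume sees few interactions, so its imparted energy fluctuates strongly from realization to realization; only the many-interaction limit gives the deterministic quantity dosimetry works with.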
{ "domain": "physics.stackexchange", "id": 92999, "tags": "energy, radiation, radioactivity, stochastic-processes" }
Complexity of a variant of Subset Sum problem
Question: This is a variant of the subset sum problem (SSP): Given $n$ positive integers $a_1, \ldots, a_n$ which are all at most $n$, does there exist a subset $\{a_i\}_{i \in P}$ whose sum is exactly $n+1$? My question is, for general $n$, is this problem NP-hard? Answer: The problem is polynomial-time solvable using a reduction to the 0-1 knapsack problem. Take a knapsack of size $W = n+1$. Take $n$ items of size $a_i$ and value $a_i$. The maximum value obtained is $n+1$ if and only if there exists a subset of items that sums to $n+1$. The 0-1 knapsack problem can be solved in time $O(n \cdot W)$ using dynamic programming. Therefore, the running time of the algorithm here is $O(n^2)$.
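The reduction in the answer can be implemented directly as the standard $O(n \cdot W)$ boolean dynamic program over reachable sums, with $W = n + 1$:

```python
# O(n * W) dynamic program for the variant: with W = n + 1, decide whether
# some subset of a_1..a_n sums to exactly n + 1 (each item used at most once).

def hits_n_plus_1(a):
    n = len(a)
    target = n + 1
    reachable = [False] * (target + 1)
    reachable[0] = True          # the empty subset sums to 0
    for item in a:
        # iterate sums downward so each item is used at most once (0-1 style)
        for s in range(target, item - 1, -1):
            if reachable[s - item]:
                reachable[s] = True
    return reachable[target]

print(hits_n_plus_1([1, 2, 2]))  # True: 2 + 2 = 4 = n + 1
print(hits_n_plus_1([3, 3, 3]))  # False: no subset sums to 4
```

Since every $a_i \le n$ and $W = n + 1$, the table has $O(n)$ entries and the total work is $O(n^2)$, matching the answer.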
{ "domain": "cs.stackexchange", "id": 19023, "tags": "complexity-theory, subset-sum" }
Why does adding kinematical conditions between the coordinates prevent the Riemannian line element to "preserve the Euclidean structure"?
Question: I was reading Kinetic Energy and Riemannian Geometry in The Variational Principles of Mechanics by Cornelius Lanczos; here is the relevant excerpt: Let us define the line-element of a $3N$-dimensional space by the equation: $$\overline{\mathrm ds}^2 = \sum_{i\,=\,1}^Nm_i~(\mathrm dx_i^2 + \mathrm dy_i^2 + \mathrm dz_i^2)\tag{15.11}$$ [...] The form of the line-element $(15.11)$ of the $3N$-dimensional configuration space of $N$ free particles has a Euclidean structure, and the quantities $$\sqrt{m_i}x_i~~~ \sqrt{m_i}y_i~~~\sqrt{m_i}z_i $$ have to be considered as rectangular coordinates of that space. If the rectangular coordinates are changed to arbitrary curvilinear coordinates according to the transformation equations $$x_1= f_1(q_1,q_2,\ldots, q_n),\\ .......................\\ .......................\\ z_N= f_{3N}(q_1,q_2,\ldots, q_n),\tag{12.8} $$ the geometry remains Euclidean, although the line-element is given by the more general Riemannian form: $$ \overline{\mathrm ds}^2 = \sum_{i, \, k\,= \,1}^n g_{ik}~\mathrm dx_i\mathrm dx_k$$ with $n= 3N\,.$ Let us now consider a system with given kinematical conditions between the coordinates. We can handle such a system in two different ways. We may consider once more the previous configuration space of $3N$ dimensions but restrict the free movability of the C-point by the given kinematical conditions, which take the form $$f_1(x_1,\ldots, z_N) = 0,\\ ..................\\ f_m(x_1,\ldots, z_N)= 0\,.\tag{15.15}$$ Geometrically, each one of these restricting equations signifies a curved hyper-surface of the $3N$-dimensional space. The intersection of these hyper-surfaces determines a subspace of $3N-m= n$ dimensions, in which the C-point is forced to stay. This subspace is no longer a flat Euclidean but a curved Riemannian space.
Another way of attacking the same problem is to express from the very beginning the rectangular coordinates of the particles in terms of $n$ parameters $q_1,\ldots,q_n\,.$ These parameters are now the curvilinear coordinates of an $n$-dimensional space whose line element can be obtained by differentiating each side of $(12.8)$ and substituting them in the expression $(15.11)\,.$ The line-element takes the form: $$\overline{\mathrm ds}^2 = \sum_{i, \, k\,= \,1}^n a_{ik}~\mathrm dq_i\mathrm dq_k\,.\tag{15.16}$$ The $a_{ik}$ are here given functions of the $q_i\,.$ The line element is now truly Riemannian not only because the $q_i$ are curvilinear coordinates, but because the geometry of the configuration space does not preserve the Euclidean structure of the original $3N$-dimensional space, except in infinitesimal regions. I couldn't comprehend the above marked statements: From $(15.11),$ the author concludes that the $3N$-dimensional space is Euclidean. Then, transforming the coordinates to generalised coordinates $q_1,\ldots, q_n,$ he says the line-element $ \overline{\mathrm ds}^2 = \displaystyle\sum_{i, \, k\,= \,1}^n g_{ik}~\mathrm dx_i\mathrm dx_k$ still describes Euclidean geometry, though the line-element is in the more general Riemannian form. $\bullet$ How does the line-element $ \overline{\mathrm ds}^2 = \displaystyle\sum_{i, \, k\,= \,1}^n g_{ik}~\mathrm dx_i\mathrm dx_k$ describe Euclidean geometry, as stated by the author? For the line-element to describe Euclidean geometry, the mixed $\mathrm dx_i\mathrm dx_k$ terms must not be there, must they?
In the very next para he worked on a system with given kinematical conditions between the coordinates and got the form of the line-element as: $$\overline{\mathrm ds}^2 = \sum_{i, \, k\,= \,1}^n a_{ik}~\mathrm dq_i\mathrm dq_k\,.$$ He asserted that the line-element $(15.16)$ above is purely Riemannian and does not describe Euclidean geometry, since "the geometry of the space doesn't preserve the Euclidean structure". $\bullet$ Notice both the line-elements, viz. $ \overline{\mathrm ds}^2 = \displaystyle\sum_{i, \, k\,= \,1}^n g_{ik}~\mathrm dx_i\mathrm dx_k$ and $ \overline{\mathrm ds}^2 = \displaystyle\sum_{i, \, k\,= \,1}^n a_{ik}~\mathrm dq_i\mathrm dq_k$, are in the general Riemannian form and look almost alike. Still, the former preserves the Euclidean geometry while the latter describes a geometry which does not "preserve the Euclidean structure of the $3N$-dimensional space". Why is it so? Why does introducing kinematical conditions between the coordinates prevent the line-element $\overline{\mathrm ds}^2 = \displaystyle\sum_{i, \, k\,= \,1}^n a_{ik}~\mathrm dq_i\mathrm dq_k$, unlike $ \overline{\mathrm ds}^2 = \displaystyle\sum_{i, \, k\,= \,1}^n g_{ik}~\mathrm dx_i\mathrm dx_k$, from preserving the Euclidean structure? Of course, I wouldn't expect the same behaviour from both cases, as the first one deals with free particles while the second one deals with a system with given kinematical conditions, i.e., constraints. But I'm not getting why, in spite of appearing to be exactly the same, the first line-element can describe Euclidean geometry while the second one cannot. Could anyone shed light on this? I have discussed this in chat, where John Rennie put forth: The curvature is coordinate independent, so it does not depend on the choice of coordinates. $^\S$ Yes. Starting from a flat space it's possible to choose curved coordinates in which the metric doesn't look like a Euclidean metric. 
However the apparent curvature is due to your choice of coordinates and not a property of the space. $^{\S\S}$ I thought that if that is so, then the first case seems fine to me, as the $3N$-dimensional space was Euclidean, and after transforming the coordinates to generalised ones, the line-element $\overline{\mathrm ds}^2 = \displaystyle\sum_{i, \, k\,= \,1}^n g_{ik}~\mathrm dx_i\mathrm dx_k$ still describes Euclidean geometry. But for the second case, the author explicitly mentioned that the line-element $\overline{\mathrm ds}^2 = \displaystyle\sum_{i, \, k\,= \,1}^n a_{ik}~\mathrm dq_i\mathrm dq_k$ doesn't describe Euclidean geometry, and not just because of the curvilinear coordinates: The line element is now truly Riemannian not only because the $q_i$ are curvilinear coordinates, but because the geometry of the configuration space does not preserve the Euclidean structure of the original $3N$-dimensional space. That's the thing I don't get: why doesn't it preserve the Euclidean structure, "even though curvature is coordinate independent"? Where am I misinterpreting this and making a mistake? $^\S$ Link I $^{\S\S}$ Link II Answer: Lanczos is essentially saying that if one imposes constraints on a $3N$-dimensional Euclidean space (which is an affine space with a Euclidean metric of Euclidean signature), one gets (under certain regularity assumptions) an embedded $n$-dimensional submanifold, with the generalized coordinates as local coordinates. The submanifold inherits a Riemannian metric by pullback from the ambient Euclidean space. Be aware that the word Euclidean has different meanings in mathematics and physics, cf. my Phys.SE answer here.
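A concrete example (mine, not from the excerpt) may make the distinction vivid: take a single particle of mass $m$ constrained to a sphere of radius $l$ in ordinary Euclidean $3$-space. With the constraint $x^2+y^2+z^2=l^2$ and generalized coordinates $q_1=\theta$, $q_2=\varphi$,

$$x=l\sin\theta\cos\varphi,\qquad y=l\sin\theta\sin\varphi,\qquad z=l\cos\theta,$$

substituting into $\overline{\mathrm ds}^2=m(\mathrm dx^2+\mathrm dy^2+\mathrm dz^2)$ gives

$$\overline{\mathrm ds}^2=m\,l^2\left(\mathrm d\theta^2+\sin^2\theta\,\mathrm d\varphi^2\right),$$

which is the metric of a sphere, a space of constant positive curvature. No choice of two coordinates can bring this to the flat form $m(\mathrm du^2+\mathrm dv^2)$, so the constrained configuration space is genuinely (not just apparently) non-Euclidean. By contrast, merely passing to curvilinear coordinates in the unconstrained $3N$-dimensional space can make the metric coefficients position-dependent without ever producing curvature, which is exactly the distinction Lanczos is drawing.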
{ "domain": "physics.stackexchange", "id": 33856, "tags": "classical-mechanics, differential-geometry" }
Zero volume at zero Kelvin
Question: Why does the volume of a gas become zero at 0 Kelvin? Can a Bose-Einstein condensate be considered matter? (I mean, does its volume become zero?) Answer: At constant pressure the volume of an ideal gas is given by Charles' law: $$ V \propto T $$ and this law tells us that when the temperature $T$ falls to zero the volume $V$ also becomes zero. But no gas is ideal, and real gases show all sorts of non-ideal behaviour. For example, real gases liquefy and then solidify as the temperature falls. Real gases deviate from Charles' law and their volume does not fall to zero at absolute zero. Bose-Einstein condensates are indeed another form of matter, and they don't have zero volume.
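As a quick numerical sketch (a hypothetical illustration, not part of the answer), the ideal-gas relation $V = nRT/P$ makes the $V \to 0$ limit explicit:

```python
# Ideal-gas volume V = nRT/P at constant pressure (Charles' law: V ∝ T).
# Real gases deviate from this long before T reaches 0 K.
R = 8.314  # gas constant, J/(mol·K)

def ideal_volume(T, n=1.0, P=101325.0):
    """Volume in m^3 of n moles of an ideal gas at temperature T (K) and pressure P (Pa)."""
    return n * R * T / P

for T in (300.0, 150.0, 75.0, 0.0):
    print(T, ideal_volume(T))
```

The printed volumes halve each time $T$ halves, reaching exactly zero at $T = 0$, which is a property of the ideal model rather than of any real gas.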
{ "domain": "physics.stackexchange", "id": 48486, "tags": "thermodynamics, ideal-gas, bose-einstein-condensate" }
doubt about the 'white spaces' in the maps from satellite
Question: I have a doubt about the 'white spaces' in maps from satellite data. For example, for Atlantic areas I have plotted AOD, and many areas appear where there are systematically no data (white spaces). So I want to know why that happens, or which factors are behind it. Thank you for your answer. Answer: Expect every daily retrieval of AOD to have many missing pixels, mostly where clouds appear. Data is removed when the quality assurance flags do not meet some minimum criteria. This can happen when the retrieval is contaminated (e.g. from clouds or sun glint on the surface). If you are interested, you can obtain raw level-1 and level-2 data with all data, including arrays for quality assurance flags. There will also be systematic gaps in the data where the satellite had no coverage. Polar orbiting satellites will get complete coverage near the poles, but widening gaps will appear the closer you get to the equator.
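A minimal sketch of the QA screening the answer describes (the flag values and threshold here are made up for illustration, not taken from any particular product):

```python
# A row of AOD pixels with matching quality-assurance flags.
# 0 = no confidence (e.g. cloud-contaminated), 3 = best retrieval quality.
aod      = [0.12, 0.30, 0.25, 0.18, 0.41]
qa_flags = [3,    1,    3,    0,    2]

MIN_QA = 2  # hypothetical screening threshold

# Pixels failing QA become None; these are the "white spaces" on the map.
screened = [v if q >= MIN_QA else None for v, q in zip(aod, qa_flags)]
print(screened)
```

Real level-2 products apply the same idea per pixel, which is why the gaps move around from day to day as clouds do.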
{ "domain": "earthscience.stackexchange", "id": 893, "tags": "atmosphere, planetary-science, weather-satellites" }
CRISPR-Cas9 method and Nobel
Question: Since I had my first cell biology class at university, I have heard about the CRISPR-Cas9 method. But I am quite surprised about one fact: why hasn't it actually been awarded a Nobel Prize? Is it something like Einstein's relativity (too early to reward it)? Answer: My guess is that some people (likely Jennifer Doudna, Emmanuelle Charpentier and Feng Zhang) will eventually be awarded a Nobel Prize for the discovery of CRISPR and the development of its applications for genome editing, because it really is a major advance. But for now, the University of California Berkeley and the Broad Institute are still legally fighting over patent conflicts, and the patent situation worldwide is generally complicated. I think this is a possible reason why the Nobel committee decided to wait.
{ "domain": "biology.stackexchange", "id": 8348, "tags": "genetics" }
Different learning rates converging to the same minima
Question: I am optimizing some loss function using the gradient descent method. I am trying it with different learning rates, but the objective function's value converges to the same exact point. Does this mean that I am stuck in a local minimum? The loss function is non-convex, so it is unlikely that I would converge to a global minimum. Answer: This is the expected behavior. Different learning rates should converge to the same minimum if you are starting at the same location. If you're optimizing a neural network and you want to explore the loss surface, randomize the starting parameters. If you always start your optimization algorithm from the same initial value, you will reach the same local extremum unless you really increase the step size and overshoot.
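A small self-contained sketch (a made-up one-dimensional loss, not the asker's) showing both effects: two learning rates from the same start reach the same local minimum, while a different start reaches a different one:

```python
def grad(x):
    # Gradient of the non-convex loss f(x) = (x**2 - 1)**2,
    # which has two local minima, at x = -1 and x = +1.
    return 4.0 * x * (x * x - 1.0)

def descend(x, lr, steps=2000):
    """Plain gradient descent from starting point x with learning rate lr."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

a = descend(0.5, lr=0.01)   # small learning rate
b = descend(0.5, lr=0.05)   # larger learning rate, same start
c = descend(-0.5, lr=0.01)  # different starting point

print(a, b, c)  # a and b both end near +1; c ends near -1
```

Changing the learning rate only changes how fast the same basin is descended; changing the starting point changes which basin is entered, which is the answer's point about randomizing initial parameters.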
{ "domain": "datascience.stackexchange", "id": 965, "tags": "optimization, gradient-descent" }
Reimplementing enumerate() to produce a sequence or iterator
Question: Apparently it is possible to implement enumerate so that it produces a sequence when given a sequence, and produces an iterator otherwise. A sequence can be safely reversed. This works by replacing the builtin function. Other similar functions could be made this way, too.

import __builtin__

class EnumeratedSequence:
    def __init__(self, items):
        self.items = items

    def __getitem__(self, index):
        return (index, self.items[index])

    def __len__(self):
        return len(self.items)

def enumerate(items):
    if hasattr(items, '__getitem__'):
        print 'Sequence detected'
        return EnumeratedSequence(items)
    else:
        print 'Iterator detected'
        return __builtin__.enumerate(items)

print list(reversed(enumerate('abcdef')))
print list(enumerate(reversed('abcdef')))

Which outputs:

Sequence detected
[(5, 'f'), (4, 'e'), (3, 'd'), (2, 'c'), (1, 'b'), (0, 'a')]
Iterator detected
[(0, 'f'), (1, 'e'), (2, 'd'), (3, 'c'), (4, 'b'), (5, 'a')]

This was a solution to the problem of reversed(enumerate(...)) failing.

Answer: In enumerate, the test

hasattr(items, '__getitem__')

isn't quite right: you need __len__ as well as __getitem__ for reversed() to work. The intention would be clearer if you wrote the test like this:

isinstance(items, Sequence)

using collections.abc.Sequence. If you're going to create a sequence, then you should implement the whole sequence interface, including __contains__, __iter__, __reversed__, index and count. This is most easily done by inheriting from collections.abc.Sequence, which provides implementations for all of these in terms of __getitem__ and __len__. However, you have the opportunity to do better than this, by providing \$O(1)\$ implementations of __contains__, index, and count. Here's an example of how to do this for __contains__:

def __contains__(self, item):
    if not isinstance(item, tuple) or len(item) != 2:
        return False
    try:
        value = self.items[item[0]]
    except IndexError:
        return False
    return value == item[1]
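For what it's worth, here is a Python 3 sketch along the lines the review suggests, inheriting from collections.abc.Sequence (the details beyond what the review states, such as the negative-index normalization, are my own):

```python
from collections.abc import Sequence
import builtins  # Python 3 name for the builtin module

class EnumeratedSequence(Sequence):
    """Lazy (index, value) view of a sequence.

    Inheriting from Sequence gives __contains__, __iter__, __reversed__,
    index and count for free, in terms of __getitem__ and __len__.
    """
    def __init__(self, items):
        self.items = items

    def __getitem__(self, index):
        if index < 0:  # normalize so the reported index is the positive position
            index += len(self.items)
        return (index, self.items[index])

    def __len__(self):
        return len(self.items)

def enumerate(items):
    if isinstance(items, Sequence):
        return EnumeratedSequence(items)
    return builtins.enumerate(items)

print(list(reversed(enumerate('abc'))))  # [(2, 'c'), (1, 'b'), (0, 'a')]
```

Since str is registered as a Sequence, reversed() works directly on the wrapper, while iterators still fall back to the builtin.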
{ "domain": "codereview.stackexchange", "id": 17861, "tags": "python, iterator" }
The Plus/Minus sign on Forces in a Cartesian coordinate system
Question: I have been struggling with forces in a Cartesian coordinate system and with understanding what signs to use to solve simple problems mathematically. Let's take a simple one-dimensional problem along the y-axis. The $+y\,\hat{\imath}$ direction points up, and we have a gravitational force that points down in this frame of reference. Most books will write it as $\vec{F}_y = m(-g)\,\hat{\imath}$. Why is the minus sign attached only to the gravitational acceleration in the expression? Is the sign always opposite in algebraic expressions in situations like this? Meaning, $\vec{F}_y$ carries no minus sign, but the acceleration on the other side of the equation does, even though both the force and the acceleration point down. Do the unit vectors have any role in telling in which direction the forces point, and do they determine the signs? Please help! Answer: In a Cartesian coordinate system, the sign convention for forces and accelerations is crucial to ensure consistency and accurate calculations. In your problem, the $+y$ direction is up, and gravity points down. To maintain a consistent sign convention, we typically define the following conventions: Positive and Negative Directions: Positive direction: upward along the $+y$ axis. Negative direction: downward along the $-y$ axis. Gravitational Acceleration: the symbol $g$ is taken to be a positive magnitude; because gravity always acts downward, its direction along the negative $y$ axis is written explicitly with a minus sign. Forces: when you write the equation for the force acting on an object near the Earth's surface, such as $\vec{F}_y = m(-g)\hat{\imath}$, the negative sign in front of $g$ ensures that the force is correctly directed opposite to the positive $y$ axis. It indicates that the force acts downward, consistent with the convention that positive forces act in the positive direction.
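A short worked example (mine, not the answerer's) of how the convention plays out. Drop a ball of mass $m$ with up taken as $+y$ and unit vector $\hat{\imath}$ along it, as in the question:

$$\vec{F} = -mg\,\hat{\imath},\qquad \vec{F}=m\vec{a}\ \Longrightarrow\ \vec{a} = -g\,\hat{\imath}.$$

Both the force and the acceleration carry the minus sign; writing $\vec F_y = m(-g)\,\hat{\imath}$ simply places that sign next to $g$, while $g \approx 9.8\ \mathrm{m/s^2}$ itself stays positive. If one instead chose $+y$ pointing down, the same physics would read $\vec F = +mg\,\hat{\imath}$ and $\vec a = +g\,\hat{\imath}$: the signs belong to the coordinate convention, not to $g$.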
{ "domain": "physics.stackexchange", "id": 97445, "tags": "newtonian-mechanics, reference-frames, coordinate-systems, conventions, geometry" }
Random generator considerations in the design of randomized algorithms
Question: It is well known that the efficiency of randomized algorithms (at least those in BPP and RP) depends on the quality of the random generator used. Perfect random sources are unavailable in practice. Although it is proved that for all $0 < \delta \leq \frac{1}{2}$ the identities BPP = $\delta$-BPP and RP = $\delta$-RP hold, it is not true that the original algorithm designed for a perfect random source can be directly used with a $\delta$-random source. Instead, some simulation has to be done. This simulation is polynomial, but the resulting algorithm is not as efficient as the original one. Moreover, to my knowledge, the random generators used in practice are usually not even $\delta$-sources, but pseudo-random sources that can behave extremely badly in the worst case. According to Wikipedia: In common practice, randomized algorithms are approximated using a pseudorandom number generator in place of a true source of random bits; such an implementation may deviate from the expected theoretical behavior. In fact, the implementations of randomized algorithms that I have seen up to now were mere implementations of the algorithms for perfect random sources run with the use of pseudorandom sources. My question is whether there is any justification for this common practice. Is there any reason to expect that in most cases the algorithm will return a correct result (with the probabilities as in BPP resp. RP)? How can the "approximation" mentioned in the quotation from Wikipedia be formalized? Can the deviation mentioned be somehow estimated, at least in the expected case? Is it possible to argue that a Monte Carlo randomized algorithm run on a perfect random source will turn into a well-behaved stochastic algorithm when run on a pseudorandom source? Or are there any other similar considerations? Answer: Here is one good justification. 
Suppose you use a cryptographic-strength pseudorandom number generator to generate the random bits needed by some randomized algorithm. Then the resulting algorithm will continue to work, as long as the crypto algorithm is secure. A cryptographic-strength pseudorandom number generator is a standard tool from cryptography that accepts a short seed (say, 128 bits of true randomness) and generates an unlimited number of pseudorandom bits. It comes with a very strong security guarantee: as long as the underlying cryptographic primitive is not broken, the pseudorandom bits will be completely indistinguishable from true-random bits by any feasible process (and, in particular, no efficient algorithm can distinguish its output from a sequence of true random bits). For instance, we might get a guarantee that says: if factoring is hard (or, if RSA is secure; or, if AES is secure), then this is a good pseudorandom generator. This is a very strong guarantee indeed, since it is widely believed to be very hard to break these cryptographic primitives. For instance, if you can figure out an efficient way to factor very large numbers, then that would be a breakthrough result. For all practical purposes, you can act as though the cryptographic primitives are unbreakable. This means that, for all practical purposes, you can act as though the output of a cryptographic-strength pseudorandom number generator is basically the same as a sequence of true-random bits. In particular, this is a good source of the randomness needed by a randomized algorithm. (I've glossed over the fact that, to use a crypto-strength PRNG, you still need to find 128 bits of true randomness on your own to form the seed. But usually this is not hard, and indeed, there are cryptographic tools to assist with that task as well.) In practice, getting extremely good pseudorandom bits is as simple as $ cat /dev/urandom
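As a concrete sketch of the practice being justified (a toy randomized algorithm of my own choosing): Python's random.SystemRandom draws from the OS's cryptographic randomness pool (/dev/urandom on Unix), so a Monte Carlo algorithm can consume it exactly as it would consume "true" random bits:

```python
import random

def monte_carlo_pi(rng, samples=200_000):
    """Estimate pi by sampling points in the unit square."""
    hits = sum(1 for _ in range(samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / samples

csprng = random.SystemRandom()  # OS-provided cryptographic randomness
prng = random.Random(12345)     # Mersenne Twister: fast, but not crypto-strength

print(monte_carlo_pi(csprng))
print(monte_carlo_pi(prng))
```

Both estimates land near 3.14, and by the indistinguishability argument above, no efficient statistical test run on this algorithm's behavior could tell the CSPRNG apart from a true random source.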
{ "domain": "cs.stackexchange", "id": 1363, "tags": "randomized-algorithms, randomness, pseudo-random-generators" }
Neat Python - Population size explodes because of high mutation rate
Question: Someone has already asked a question about this, but I implemented the suggestion made in the comments without success. So I was wondering if anybody had a better idea. I changed the mutation power for the bias and the weights to 5, and the number of genomes roughly doubles after every generation. Consequently, the genetic algorithm is taking ages. Is there any solution for this? Answer: One can overcome the problem of an exploding population by increasing the compatibility threshold. This way one need not compromise the mutation rate.
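In neat-python this is a config-file setting; a sketch of the relevant fragment (the values are illustrative, and the section names assume the default classes):

```ini
[DefaultSpeciesSet]
# Raise this so that heavily mutated genomes still fall into existing
# species instead of each founding a new one (which inflates the population).
compatibility_threshold = 4.0

[DefaultGenome]
# The large mutation powers that triggered the speciation explosion
# can then be kept as they are.
bias_mutate_power   = 5.0
weight_mutate_power = 5.0
```

The intuition: a higher mutation power spreads genomes further apart in genome space, so with a fixed threshold, more of them exceed the speciation distance and spawn new species each generation.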
{ "domain": "ai.stackexchange", "id": 3815, "tags": "genetic-algorithms, neat" }
kinect2 Service call failed ,roslaunch kinect2_bridge kinect2_bridge.launch
Question: hello,I am a studnet . I recently learning ROS,I use kinectv2 . but sometimes I can not start kinectv2 because some reasome. roslaunch kinect2_bridge kinect2_bridge.launch It's alway : List item[ INFO] [1453219160.622347895]: [Kinect2Bridge::initDevice] Kinect2 devices found: [ INFO] [1453219160.622389523]: [Kinect2Bridge::initDevice] 0: 506327542542 (selected) [Info] [Freenect2DeviceImpl] opening... [Info] [Freenect2DeviceImpl] opened [ INFO] [1453219160.721061188]: [Kinect2Bridge::initDevice] starting kinect2 [Info] [Freenect2DeviceImpl] starting... [Info] [Freenect2DeviceImpl] enabling usb transfer submission... [Info] [Freenect2DeviceImpl] submitting usb transfers... [Info] [Freenect2DeviceImpl] started [ INFO] [1453219161.068072837]: [Kinect2Bridge::initDevice] device serial: 506327542542 [ INFO] [1453219161.068125397]: [Kinect2Bridge::initDevice] device firmware: 4.0.3911.0 [Info] [Freenect2DeviceImpl] stopping... [Info] [Freenect2DeviceImpl] disabling usb transfer submission... [Info] [Freenect2DeviceImpl] canceling usb transfers... [Info] [Freenect2DeviceImpl] stopped [ WARN] [1453219161.274574060]: [Kinect2Bridge::initCalibration] using sensor defaults for color intrinsic parameters. [ WARN] [1453219161.274684781]: [Kinect2Bridge::initCalibration] using sensor defaults for ir intrinsic parameters. [ WARN] [1453219161.274753169]: [Kinect2Bridge::initCalibration] using defaults for rotation and translation. [ WARN] [1453219161.274817813]: [Kinect2Bridge::initCalibration] using defaults for depth shift. [ INFO] [1453219161.337075616]: [DepthRegistration::New] Using CPU registration method! 
nodelet: /usr/include/eigen3/Eigen/src/Core/DenseStorage.h:78: Eigen::internal::plain_array<T, Size, MatrixOrArrayOptions, 16>::plain_array() [with T = double; int Size = 16; int MatrixOrArrayOptions = 0]: Assertion `(reinterpret_cast<size_t>(eigen_unaligned_array_assert_workaround_gcc47(array)) & 0xf) == 0 && "this assertion is explained here: " "http://eigen.tuxfamily.org/dox-devel/group__TopicUnalignedArrayAssert.html" " **** READ THIS WEB PAGE !!! "' failed. [FATAL] [1453219161.454741504]: Service call failed! [FATAL] [1453219161.454745591]: Service call failed! [FATAL] [1453219161.454747922]: Service call failed! [FATAL] [1453219161.455476087]: Service call failed! [kinect2-1] process has died [pid 28527, exit code -6, cmd /opt/ros/indigo/lib/nodelet/nodelet manager __name:=kinect2 __log:=/home/exbot/.ros/log/f81d454c-bea8-11e5-bf34-74d43571568e/kinect2-1.log]. log file: /home/exbot/.ros/log/f81d454c-bea8-11e5-bf34-74d43571568e/kinect2-1.log [kinect2_bridge-2] process has died [pid 28555, exit code 255, cmd /opt/ros/indigo/lib/nodelet/nodelet load kinect2_bridge/kinect2_bridge_nodelet kinect2 __name:=kinect2_bridge __log:=/home/exbot/.ros/log/f81d454c-bea8-11e5-bf34-74d43571568e/kinect2_bridge-2.log]. log file: /home/exbot/.ros/log/f81d454c-bea8-11e5-bf34-74d43571568e/kinect2_bridge-2.log [kinect2_points_xyzrgb_qhd-4] process has died [pid 28599, exit code 255, cmd /opt/ros/indigo/lib/nodelet/nodelet load depth_image_proc/point_cloud_xyzrgb kinect2 rgb/camera_info:=kinect2/qhd/camera_info rgb/image_rect_color:=kinect2/qhd/image_color_rect depth_registered/image_rect:=kinect2/qhd/image_depth_rect depth_registered/points:=kinect2/qhd/points __name:=kinect2_points_xyzrgb_qhd __log:=/home/exbot/.ros/log/f81d454c-bea8-11e5-bf34-74d43571568e/kinect2_points_xyzrgb_qhd-4.log]. 
log file: /home/exbot/.ros/log/f81d454c-bea8-11e5-bf34-74d43571568e/kinect2_points_xyzrgb_qhd-4.log [kinect2_points_xyzrgb_hd-5] process has died [pid 28672, exit code 255, cmd /opt/ros/indigo/lib/nodelet/nodelet load depth_image_proc/point_cloud_xyzrgb kinect2 rgb/camera_info:=kinect2/hd/camera_info rgb/image_rect_color:=kinect2/hd/image_color_rect depth_registered/image_rect:=kinect2/hd/image_depth_rect depth_registered/points:=kinect2/hd/points __name:=kinect2_points_xyzrgb_hd __log:=/home/exbot/.ros/log/f81d454c-bea8-11e5-bf34-74d43571568e/kinect2_points_xyzrgb_hd-5.log]. log file: /home/exbot/.ros/log/f81d454c-bea8-11e5-bf34-74d43571568e/kinect2_points_xyzrgb_hd-5.log [kinect2_bridge-2] restarting process process[kinect2_bridge-2]: started with pid [28762] [kinect2_points_xyzrgb_qhd-4] restarting process process[kinect2_points_xyzrgb_qhd-4]: started with pid [28763] [kinect2_points_xyzrgb_hd-5] restarting process process[kinect2_points_xyzrgb_hd-5]: started with pid [28764] [ INFO] [1453219161.713041317]: Loading nodelet /kinect2_bridge of type kinect2_bridge/kinect2_bridge_nodelet to manager kinect2 with the following remappings: [ INFO] [1453219161.716821221]: waitForService: Service [/kinect2/load_nodelet] could not connect to host [lin-MSH87TN-00:58081], waiting... [kinect2_points_xyzrgb_sd-3] process has died [pid 28571, exit code 255, cmd /opt/ros/indigo/lib/nodelet/nodelet load depth_image_proc/point_cloud_xyzrgb kinect2 rgb/camera_info:=kinect2/sd/camera_info rgb/image_rect_color:=kinect2/sd/image_color_rect depth_registered/image_rect:=kinect2/sd/image_depth_rect depth_registered/points:=kinect2/sd/points __name:=kinect2_points_xyzrgb_sd __log:=/home/exbot/.ros/log/f81d454c-bea8-11e5-bf34-74d43571568e/kinect2_points_xyzrgb_sd-3.log]. 
log file: /home/exbot/.ros/log/f81d454c-bea8-11e5-bf34-74d43571568e/kinect2_points_xyzrgb_sd-3.log [kinect2_points_xyzrgb_sd-3] restarting process But sometimes it starts normally; the Kinect2 is intact, and ./bin/Protonect can open it. How can I solve this problem? Thanks. Originally posted by study_science on ROS Answers with karma: 3 on 2016-01-19 Post score: 0 Answer: Can you start it like rosrun kinect2_viewer kinect2_viewer sd cloud without using roslaunch? Also try commenting on iai_kinect2 on GitHub. Originally posted by crazymumu with karma: 214 on 2016-01-20 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 23485, "tags": "ros, iai-kinect2" }
What's the name of this org. compound RR'-C=NNH2?
Question: What's the name of this compound? It's an intermediate of the Wolff-Kishner reduction. The complete reaction is: Answer: The (now deleted) answer by NotCorey has already explained that the compound that is shown in the picture is a hydrazone that is derived from acetone (propan-2-one). According to the current version of Nomenclature of Organic Chemistry – IUPAC Recommendations and Preferred Names 2013 (Blue Book), the preferred IUPAC name (PIN) for a hydrazone is generated substitutively as ‘ylidene’ derivatives of hydrazine rather than by functional class nomenclature as in previous recommendations. P-68.3.1.2.2 Hydrazones Compounds having the general structure $\ce{RCH=N-NH2}$ or $\ce{RR'C=N-NH2}$ are called ‘hydrazones’ and are named in two ways: (1) substitutively as derivatives of the parent hydride ‘hydrazine’, $\ce{H2N-NH2}$; (2) by functional class nomenclature using the class name ‘hydrazone’. Method (1) generates preferred IUPAC names. Therefore, the PIN for the compound that is given in the question is (propan-2-ylidene)hydrazine.
{ "domain": "chemistry.stackexchange", "id": 7130, "tags": "nomenclature" }
How do we know that a matter is acidic or basic without knowing their $pH$ value?
Question: There are many examples of acidic substances, as seen below: $$HCl \tag {1}$$ $$HNO_3 \tag {2}$$ $$H_2SO_4 \tag {3}$$ And examples of basic substances: $$NH_3 \tag {4}$$ $$NaOH \tag {5}$$ Given this, how do we differentiate between acidic and basic substances without knowing their $pH$ values? Regards Answer: pH is simply a measure of H+/H3O+ concentration in a solution. I will use the two terms interchangeably below. The acidity or basicity of a compound has to do with its chemical behaviour. There are three definitions of acidity. The simplest definition is the Arrhenius definition: An acid dissociates in water to form H+. A base ionizes in water to form OH-. This definition is limited to aqueous solutions, and so is usually ignored in favor of the Brønsted-Lowry definition, where: An acid is a hydrogen ion donor. A base is a hydrogen ion acceptor. This definition is more general, because it can be used to describe acid-base reactions that occur in solvents other than water. There is a third definition, Lewis: A Lewis acid is an electron pair acceptor. A Lewis base is an electron pair donor. The Lewis definition is the broadest definition of acidity. However, most entry-level courses stick with the Brønsted-Lowry definition since it does an excellent job of describing what we typically consider to be acid-base chemistry. To summarize, we recognize acids and bases from their chemical structure. Recognizing Acids Acids generally come in one of three forms: Binary Acids These have the form HX, like HCl or HF. Oxyacids These have an oxygen-containing polyatomic anion. Examples include H2SO4 or HNO3. Organic Acids You can't tell if an organic (carbon-containing) compound is an acid based on its formula alone. However, organic acids are often written RCOOH. The COOH represents the carboxylic acid functional group, which is what makes the compound an acid. For example, acetic acid is often written CH3COOH. 
Recognizing Bases Metal Hydroxides Soluble metal hydroxides are all bases (since when they dissociate in water they generate OH-). Nitrogen-containing Bases Nitrogen-containing compounds similar to ammonia (NH3) are also bases. $$\ce {NH3 + H2O <=> NH4+ + OH-}$$
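The structural rules above can be codified as a toy classifier (a sketch of my own; real chemistry has many exceptions, e.g. this heuristic would happily call H2O an acid):

```python
def classify(formula):
    """Crude acid/base guess from a formula string, following the rules above."""
    if formula.endswith("COOH"):
        return "acid"   # organic acids: the -COOH carboxylic group
    if formula.startswith("H"):
        return "acid"   # binary acids (HCl, HF) and oxyacids (HNO3, H2SO4)
    if formula.endswith("OH"):
        return "base"   # soluble metal hydroxides, e.g. NaOH, KOH
    if formula.startswith("NH"):
        return "base"   # ammonia-like nitrogen bases
    return "unknown"

for f in ("HCl", "HNO3", "H2SO4", "CH3COOH", "NaOH", "NH3"):
    print(f, classify(f))
```

Note the ordering matters: the COOH check must come before the generic OH check, or acetic acid would be misread as a hydroxide.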
{ "domain": "chemistry.stackexchange", "id": 9128, "tags": "acid-base" }
Finding the right questions to increase accuracy in classification
Question: Let's say I have a list of 100k medical cases from my hospital, where each row = a patient with symptoms (such as fever, funny smell, pain, etc.) and my labels are medical conditions such as head trauma, cancer, etc. A patient comes and says "I have fever", and I need to predict his medical condition according to the symptoms. According to my data set I know that both fever and vomiting go with condition X, so I would like to ask him if he is vomiting to increase the certainty of my classification. What is the best algorithmic approach to finding the right question (generating questions from my data set of historical data)? I thought about trying active learning on the features, but I am not sure that it is the right direction. Answer: The problem you're trying to address can, in some sense, be viewed as a Feature Selection problem. If you look for literature using only those words, you're not going to find what you're looking for though. In general, "Feature Selection" simply refers to the problem where you already have a large number of features, and you're simply deciding which ones to keep and which ones to throw away (because they're not informative, or because you don't have the processing power to train with all features, for example). I'd recommend looking around for a combination of "Feature Selection" and "Cost-Sensitive". This is because, in your case, there are costs associated with selecting features; values may be costly to obtain for some features. Searching for this combination leads to publications which look interesting for you, such as: Cost-sensitive feature selection using random forest: Selecting low-cost subsets of informative features Cost-sensitive Dynamic Feature Selection Cost-Effective Feature Selection and Ordering for Personalized Energy Estimates and probably much more... I cannot personally vouch for any of those techniques since I've never used them, but those papers certainly look relevant for your problem. 
When you're looking around for more literature, terms like "cost", "cost-based", maybe "budgeted" are crucial to include. If you don't include those, you're just going to get papers on problems like: Feature Selection: given a set of features/columns, which ones am I going to use across all samples/instances/rows? Feature Extraction: given data (typically without clear human-defined features, like images, sound, etc.), how am I going to extract relevant features from this? Active Learning: given a bunch of samples without labels but feature values already assigned, which one would I like an oracle/human expert/etc. to have a look at so that they can tell me what the true label is? Those kinds of problems all do not really appear to be relevant in your case. Active Learning may be somewhat interesting in that it is about trying to figure out which rows would be valuable to learn from, whereas your problem is about which columns would be valuable to learn from. There does seem to be a connection there, Active Learning techniques might to some extent be able to inspire techniques for your problem, but just that; inspire, they likely won't be 100% directly applicable without additional work.
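One simple way to operationalize "which question to ask next" (a sketch of my own, essentially one step of decision-tree induction rather than any of the cost-sensitive methods cited above) is to pick the not-yet-asked symptom with the highest information gain over the historical records:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of condition labels."""
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def best_next_symptom(rows, labels, asked):
    """Index of the unasked symptom whose answer most reduces
    the entropy of the condition labels (maximum information gain)."""
    base = entropy(labels)
    n = len(rows)
    best, best_gain = None, -1.0
    for j in range(len(rows[0])):
        if j in asked:
            continue
        # Partition patients by their value for symptom j.
        parts = {}
        for row, label in zip(rows, labels):
            parts.setdefault(row[j], []).append(label)
        gain = base - sum(len(p) / n * entropy(p) for p in parts.values())
        if gain > best_gain:
            best, best_gain = j, gain
    return best, best_gain

# Toy records: columns = (fever, vomiting), labels = condition.
rows   = [(1, 1), (1, 1), (1, 0), (0, 0)]
labels = ['X', 'X', 'flu', 'healthy']
print(best_next_symptom(rows, labels, asked={0}))  # vomiting separates X from flu
```

A cost-sensitive variant would divide each gain by the cost of obtaining that answer, which is where the cited literature picks up.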
{ "domain": "ai.stackexchange", "id": 606, "tags": "machine-learning, classification, statistical-ai" }
Is the Lagrangian "math" or "science"?
Question: I've seen in class that we can use the Lagrangian to derive equations of motion (I know it's used elsewhere in physics, but I haven't seen that yet). It's not clear to me whether the Lagrangian itself follows from the equations of motion, or whether it represents a fundamentally different approach: whether it's a different "model", or whether it's largely a mathematical observation (it may illuminate/integrate what came before, but for all intents and purposes it is equivalent). I'm just curious about how these relationships are generally understood. I ask because it's not at all obvious to me that the principle of least action should be true. I'd also be curious whether the answer to this question would be the same across different fields of physics, though if the answer is "no", much detail would probably go over my head. Answer: Short answer: Lagrangian mechanics only applies to a subset of classical mechanics problems, but when it does, it is mathematically equivalent to Newtonian mechanics (which I take to mean direct application of $\vec{F} = m\vec{a}$). More detailed rambling: Your class and whatever text you're using ought to cover the equivalence of the Newtonian and Lagrangian (and Hamiltonian) approaches, so I'll just give the overview. We begin by expressing the state of a system by specifying the locations of all the relevant parts: $\vec{r}_1, \ldots, \vec{r}_M$. In $d$ dimensions there are $dM$ components to worry about, but let's just say $d = 3$. Now there will also be some number $C$ of constraints on the system (things like "the mass won't fall through the table" or "the pendulum mass is always a distance $l$ from the pivot"), so the system actually has $N = 3M - C$ degrees of freedom. We should be able to come up with $N$ generalized coordinates, often written $q_k$, and express the system in terms of $N$ equations in those coordinates. Here's where the first of two important restrictions comes into play. 
We are only interested in holonomic constraints - those that depend only on positions but not on velocities or other derivatives. In this way we can express each $\vec{r}_i$ as a function of $q_1, \ldots, q_{N}$ and possibly $t$, with no $\dot{q}_i$'s appearing. (Bonus vocabulary lesson: the holonomic constraints are rheonomic or rheonomous if there is explicit time dependence; they are scleronomic or scleronomous otherwise.) The classical Lagrangian method doesn't really apply to nonholonomic constraints. It's simple enough to calculate the kinetic energy $T$ as a function of the $q_k$'s and $\dot{q}_k$'s. The other important restriction is that the forces on the system are conservative - i.e., that they come from the gradient of a potential $U$. We need there to be such a $U$ expressible in terms of the $q_k$'s, otherwise we're stuck. (Actually, there are some methods involving "generalized potentials" that get around this in a few cases.) If we have holonomic constraints and conservative forces, the calculus of variations tells us that the $N$ (possibly coupled) differential equations of motion for the system are $$ \frac{\mathrm{d}}{\mathrm{d}t} \left(\frac{\partial L}{\partial\dot{q}_k}\right) - \frac{\partial L}{\partial q_k} = 0, $$ where $L = T - U$ is considered a function of the $2N + 1$ variables $q_1, \ldots, q_N, \dot{q}_1, \ldots, \dot{q}_N, t$. These give exactly the same motion as $\vec{F}_i = m \ddot{\vec{r}}_i$ applied to each of the $M$ original coordinates. In this sense, Lagrangian mechanics is just some mathematical trickery for easily getting equations of motion in certain cases. It's not new physics in any way. Final note: Now this was all done long ago in a purely classical setting, considering systems with small numbers of mechanical parts. It's a very different way of thinking, emphasizing the global properties of the system rather than just the local properties at points of interest. 
As it turns out, this method, together with the related Hamiltonian approach, lend themselves to quantum mechanics quite nicely. Quantum field theory and its offshoots are all about constructing Lagrangians to derive equations of motion, and this applies to settings where $\vec{F} = m\vec{a}$ doesn't even make much sense anymore.
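As a minimal worked instance of the Euler-Lagrange recipe quoted above (this example is an added illustration, not part of the original answer), take the one-dimensional harmonic oscillator with a single generalized coordinate $q$:

```latex
% T = (1/2) m \dot{q}^2, U = (1/2) k q^2, so
\[
L = T - U = \tfrac{1}{2} m \dot{q}^2 - \tfrac{1}{2} k q^2 .
\]
% The two partial derivatives appearing in the Euler-Lagrange equation:
\[
\frac{\partial L}{\partial \dot{q}} = m\dot{q}, \qquad
\frac{\partial L}{\partial q} = -kq .
\]
% Plugging into d/dt(dL/dq-dot) - dL/dq = 0:
\[
\frac{\mathrm{d}}{\mathrm{d}t}\!\left(\frac{\partial L}{\partial \dot{q}}\right)
- \frac{\partial L}{\partial q}
= m\ddot{q} + kq = 0,
\]
% which is exactly Newton's second law with the spring force F = -kq.
```

The same bookkeeping, with more coordinates and a potential expressed in those coordinates, is all the Lagrangian method ever does in the holonomic, conservative case described above.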
{ "domain": "physics.stackexchange", "id": 4759, "tags": "lagrangian-formalism, variational-principle, action" }
What is this insect looking like a wasp or an ant?
Question: It looks like a black wasp or big ant with wings. It produces no noise. It mostly sits at one place on the wall, does not move. The young (or another gender) form has disproportionately long back legs, making it look like a spider. It has a big head with big eyes. Answer: These are two different kinds of insects: First insect There is one pair of translucent wings (admittedly, the number of wing pairs is not visible from the photo), which tells us it must be some kind of fly (Diptera). Further, the antenna is rather short, from which we can deduce it must be a brachyceran fly. Even further, it has three segments, so we can conclude it is a member of the family "soldier flies" (Stratiomyidae). A friend (a dipterologist), upon seeing the photo, immediately argued it must be a "black soldier fly" (Hermetia illucens), a now-cosmopolitan species that has gained economic importance in organic recycling. Second insect This one is much trickier, I think. It does look like a shore bug (Saldidae): the large eyes, slender and long antenna and legs, the flat and oval body - everything fits. But shore bugs are tightly bound to their habitats (shores of all kinds of waterbodies). And at least in Europe, it is very unlikely (although not impossible) to find one inside a building. Also, shore bugs are very agile and are fast runners, jumpers and flyers - which contradicts the OP's description ("It mostly sits [...], does not move").
{ "domain": "biology.stackexchange", "id": 12428, "tags": "species-identification" }
The chirality of the standard model fermions
Question: I read 'The Standard Model Effective Field Theory at Work' by Isidori, Wilsch, and Wyler. In a footnote, they say that, in principle, right-handed neutrinos could be included in the Standard Model by extending the fermion content. But these would be completely neutral under the gauge group of the Standard Model. Why is that a problem? Why is the chiral fermion such a necessity? Answer: ...right-handed neutrinos could be included in the Standard Model by extending the fermion content. But these would be completely neutral under the gauge group of the Standard Model. Why is that a problem? Why is the chiral fermion such a necessity? Most QFT texts covering the SM, such as M Schwartz's, deal with this. It is not a problem. Modern texts could, or should, include the unresolved theoretical possibility of right-chiral neutrinos. These are possible, completely inert under both the SU(2) and the hypercharge U(1) of the SM, and can couple to their left-chiral active mates and the Higgs in a gauge-invariant way, to produce conventional Dirac mass terms, exactly like up-like quarks do in the SM. (However, $u_R$ quarks do have non-vanishing hypercharge, so they are not fully inert/sterile in the SM, by contrast to R neutrinos.) People are reluctant to introduce completely sterile/inert d.o.f., but, of course, they now speculate about such all the time. There is no logical necessity in sticking to purely L-chiral fermions, and, as seen above, this does not obtain for quarks anyway. The historical reason bad science reporters made a hash of the issue about 20 years ago is that R neutrinos were unnecessary in the simplest version of the SM covered in books, with no indication around of neutrino masses. So, texts left their logical possibility out, like zoology books leaving unicorns out. 
Upon the discovery of neutrino masses, R neutrinos became an almost attractive possibility, and clueless science reporters started perorating on particles "beyond the standard model", a misunderstanding your footnote attempts to moderate. R-chiral neutrinos do not violate anything about the SM, which is defined through its symmetries rather than its particle content.
{ "domain": "physics.stackexchange", "id": 97193, "tags": "particle-physics, standard-model, neutrinos, beyond-the-standard-model, chirality" }
Topic classification on text data with no/few labels
Question: I would like to achieve a classification of a text input into predefined categories. From what I understand, unsupervised approaches are infeasible if my target labels are something very rare in pretrained models (I have labels about specific industrial processes). Is this true? Otherwise, I could try an approach in which I label, for example, 1000 input texts using all the different labels and use a supervised approach with very little labeled data. Would this help the learning process in some way? And what methods could I use in this case? Answer: A feasible approach would be to take a pre-trained model, like BERT, and fine-tune it on a small labeled dataset. For that, you may use Huggingface's Transformers, which makes all the steps in the process relatively easy (see their tutorial on doing exactly that: https://huggingface.co/docs/transformers/training)
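To get a feel for why even a small labeled set helps, here is a stdlib-only toy sketch of a few-labels baseline: a bag-of-words nearest-centroid classifier. This is *not* the BERT fine-tuning approach from the answer, and the miniature "industrial process" texts and labels are invented for illustration; the point is only that a supervised signal from a handful of labeled examples already beats having no labels at all.

```python
from collections import Counter
import math

def vectorize(text):
    # Bag-of-words term counts; a stand-in for real features (TF-IDF, embeddings).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def train_centroids(labeled):
    # Sum the bag-of-words vectors per class ("nearest centroid" baseline).
    centroids = {}
    for text, label in labeled:
        centroids.setdefault(label, Counter()).update(vectorize(text))
    return centroids

def classify(text, centroids):
    v = vectorize(text)
    return max(centroids, key=lambda lbl: cosine(v, centroids[lbl]))

# Made-up miniature corpus standing in for the ~1000 labeled texts.
labeled = [
    ("furnace temperature exceeded limit", "heat_treatment"),
    ("quench bath temperature stable", "heat_treatment"),
    ("weld seam inspection passed", "welding"),
    ("arc welding current fluctuation", "welding"),
]
centroids = train_centroids(labeled)
print(classify("furnace temperature drifting", centroids))  # heat_treatment
```

With real data you would swap `vectorize` for a pretrained encoder (as the answer suggests), but the train/classify structure stays the same.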
{ "domain": "datascience.stackexchange", "id": 11509, "tags": "nlp, unsupervised-learning, supervised-learning, text-classification, semi-supervised-learning" }
rostest does not show log message
Question: Hi everyone, I have an annoying problem of displaying the ros log message during testing. I have some functions showing ROS_ERROR when errors occur and a test program to test these error cases. If I run a pure gtest program, all ros log messages show up normally in the console. As soon as I run a test node with rostest, all ros log messages disappear from the console. From the rostest documentation about the test tag: no output attribute as tests use their own output logging mechanism. rostest --text also does not help. But if I put std::cout in the test program, rostest will display these messages. Thanks for any suggestions. kai Originally posted by khu on ROS Answers with karma: 11 on 2017-07-24 Post score: 1 Answer: I have had this issue with setting up GTests in ROS before. You need to properly initialize the logger and set the logger level in ROS To initialise the logger either ros::init(argc, argv, "node_name"); or ROSCONSOLE_AUTOINIT; To set the logger level either ros::start(); or if(ros::console::set_logger_level(ROSCONSOLE_DEFAULT_NAME, ros::console::levels::Debug)) { ros::console::notifyLoggerLevelsChanged(); } Of course you can just use rosconsole directly #include <ros/console.h> #include <log4cxx/logger.h> int main(int argc, char* argv[]) { ROSCONSOLE_AUTOINIT; log4cxx::LoggerPtr my_logger = log4cxx::Logger::getLogger(ROSCONSOLE_DEFAULT_NAME); my_logger->setLevel(ros::console::g_level_lookup[ros::console::levels::Debug]); ROS_INFO("test logger"); return 0; } Originally posted by phillip with karma: 36 on 2018-04-07 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 28425, "tags": "ros, rostest, rosconsole" }
magnetic moment of proton
Question: I just tried to calculate the magnetic moment of a proton. I took the proton g-factor of $g=5.585694$ nuclear magneton of $\mu_k = 5.050783 * 10^{−27}$ J/T proton spin of $I=1/2$ At first I calculated the norm of the proton spin $|\vec{I}|=\hbar \sqrt{I*(I+1)}=\hbar \frac{1}{2}\sqrt{3}$ And then I put everything together in $\mu=g\mu_k\frac{|\vec{I}|}{\hbar}$ and obtain 2.44134228 × 10^-26 instead of 1.410606 × 10^-26 ... Interestingly enough, I obtain the correct value if I divide by $\sqrt{3}$. But I see no reason to do this... It would be great if you could help me. Thanks in advance ftiaronsem Answer: It is only a question of definition. There is the operator of interaction of a particle with an externally-produced magnetic field: $\hat{H}_{int}=-\hat{\boldsymbol{\mu}}\cdot\mathbf{H}$, where $\mathbf{H}$ is a magnetic field and $\hat{\boldsymbol{\mu}}$ is an operator: $\hat{\boldsymbol{\mu}}=\displaystyle \frac{g e}{2 m} \hat{\mathbf{s}}$ By the «value» of the magnetic moment of a particle, people usually imply the maximum of the following diagonal matrix element: $\mu=\langle\psi\vert \hat{\mu}_{z} \vert\psi\rangle,$ which is of course $g \mu_N/2$
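The answer's point, checked numerically: the conventional (tabulated) moment is the maximum projection $g\mu_N m_I$ with $m_I = I = 1/2$, while the questioner used the full spin norm $\hbar\sqrt{I(I+1)} = \hbar\sqrt{3}/2$, which is larger by exactly $\sqrt{3}$:

```python
import math

g = 5.585694          # proton g-factor (from the question)
mu_N = 5.050783e-27   # nuclear magneton, J/T (from the question)

# Convention for "the" magnetic moment: maximum projection, m_I = I = 1/2.
mu = g * mu_N * 0.5
print(mu)  # ≈ 1.4106e-26 J/T, the tabulated proton moment

# The questioner instead used |I| = ħ sqrt(I(I+1)) = ħ sqrt(3)/2:
mu_wrong = g * mu_N * 0.5 * math.sqrt(3)
print(mu_wrong)  # ≈ 2.44e-26 J/T — off by exactly a factor of sqrt(3)
```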
{ "domain": "physics.stackexchange", "id": 4085, "tags": "quantum-mechanics, protons, hydrogen" }
What is the mechanism for atoms absorbing light?
Question: Apart from saying that electron orbitals have specific energy levels and can only absorb light of that energy/wavelength, what actually causes electrons to absorb the energy? Answer: Apart from saying that electron orbitals have specific energy levels and can only absorb light of that energy/wavelength, Please note that we call them electron orbitals because we assume a framework where the nucleus is at rest. They are actually "atomic orbitals" It is the whole atom that changes in energy when a photon is absorbed. what actually causes electrons to absorb the energy? Electrons do not absorb energy. The atom does, and the electron is raised into a higher energy orbital. The cause is the impinging of a photon that carries the energy packet that separates the two orbitals.
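A numeric illustration of "the atom absorbs a photon whose energy matches the spacing between two orbitals". The choice of hydrogen's $n=1 \to n=2$ transition (Lyman-alpha) and the Bohr-model level formula here are my own assumed example, not part of the answer:

```python
# Bohr-model energy levels of hydrogen, in eV.
def energy_level(n):
    return -13.6 / n ** 2

# The photon must carry exactly the level spacing for the atom to absorb it.
delta_E = energy_level(2) - energy_level(1)
print(delta_E)  # 10.2 eV

# Matching photon wavelength: lambda = hc / E, with hc ≈ 1239.84 eV·nm.
wavelength = 1239.84 / delta_E
print(wavelength)  # ≈ 121.6 nm, the Lyman-alpha line
```

A photon with noticeably more or less energy than this spacing simply is not absorbed by that transition, which is why absorption spectra are lines rather than a continuum.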
{ "domain": "physics.stackexchange", "id": 37199, "tags": "particle-physics" }
Show that a language with union is not regular by using pumping lemma
Question: Given the language $L := \{ c^{2k} w \mid k \ge 1,\ w \in \{a,b,c\}^* \text{ and } \vert w\vert_a = \vert w\vert_b \} \cup \{ a,b \}^*$ I'm really unsure how to even start because of the union. I tried it with $w=a^nb^n$ but my correction said that it's pumpable because it is in $\{ a,b \}^*$ which makes sense. What would be a good word to start with? I guess there are several cases I need to show? Answer: If $L$ were regular, so would be $L \cap c^+\Sigma^* = \{c^{2k}w\mid k\geq 1, w\in\{a,b,c\}^* \text{ and }|w|_a = |w|_b\}$. That means that if you prove that this language is not regular, then $L$ cannot be regular too (that way, we got rid of the union). Now you can take it from here with the pumping lemma, starting from $c^2a^nb^n$!
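To sanity-check which candidate words land in which branch of the union, here is a small membership checker for $L$. The checker is an added illustration (it plays no role in the proof itself); it just makes it easy to see why $a^n b^n$ is useless (it sits in $\{a,b\}^*$) while words starting with $c$'s are not:

```python
def in_L(s):
    # L = { c^{2k} w : k >= 1, w in {a,b,c}*, |w|_a == |w|_b }  union  {a,b}*
    if set(s) <= {"a", "b"}:
        return True                      # the {a,b}* branch of the union
    # Count the leading c's; we need some even, positive prefix c^{2k}.
    lead = len(s) - len(s.lstrip("c"))
    for k2 in range(2, lead + 1, 2):     # try every even c-prefix as c^{2k}
        w = s[k2:]                        # the remainder w may itself contain c's
        if w.count("a") == w.count("b"):
            return True
    return False

print(in_L("aaabbb"))             # True  — pumpable, it's in {a,b}*
print(in_L("cc" + "aaa" + "bbb")) # True  — the answer's word c^2 a^n b^n
print(in_L("caaabbb"))            # False — a single leading c fits neither branch
```

This also shows concretely why intersecting with $c^+\Sigma^*$ removes the easy branch: any word that survives the intersection contains a $c$, so it can never escape into $\{a,b\}^*$ when pumped.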
{ "domain": "cs.stackexchange", "id": 18439, "tags": "formal-languages, pumping-lemma" }
US customary units below 1 mil/thou?
Question: I have been asked to provide some of our technical docs in both metric and US units. For the most part, that is easy enough: approx 1cm → approx 1/2" 500μm → 0.02" But then I start getting stuck: 50μm → 0.002" or 2 thou or 2 mils? Is there a common symbol for thou/mills? 20nm → ??? I imagine that people who care about nanometers have mostly gone metric now anyway, but it would still be good to provide US measurements for everything. What units are commonly used for lengths smaller than 0.001" in the US, and what symbols are used for them? If it varies across different fields, the relevant ones would be optics, precision machining and mechanical engineering. Answer: 50μm → 0.002" or 2 thou or 2 mils? I've seen both 0.002" and 2 mils. On a drawing it would always be 0.002". In a specification document it could be either. I've never seen 2 thou written in a formal specification (but I have heard people say it in the shop). But that might just be my experience. There could be variation from industry to industry. Is there a common symbol for thou/mills? I'm not aware of any. We just always wrote "mils" (not "mills") What units are commonly used for lengths smaller than 0.001" in the US, and what symbols are used for them? Colloquially, I know that people in precision machine shops refer to 0.1 mils (0.0001") as "tenths" (as in one ten thousandth of an inch). I've never seen that written in a formal specification document though, just as talking between machinists. For US customary units, there's nothing smaller than an inch, you just start adding more zeros, applying prefixes, or using scientific notation. For example, it would be common to specify surface roughness in microinches. 20nm → ??? I would write it as "7.87e-7 inches"
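The conversions discussed above, checked numerically. The only facts used are definitional: 1 inch = 25.4 mm exactly, and 1 mil = 0.001 inch:

```python
MM_PER_INCH = 25.4  # exact by definition

def meters_to_inches(m):
    # metres -> millimetres -> inches
    return m * 1000.0 / MM_PER_INCH

print(meters_to_inches(500e-6))  # 500 um -> ~0.0197 in  (the "0.02 inch" above)
print(meters_to_inches(50e-6))   # 50 um  -> ~0.00197 in (~2 mils)
print(meters_to_inches(20e-9))   # 20 nm  -> ~7.87e-7 in (~0.787 microinch)
```

Note the 500 um figure rounds to 0.0197", so quoting it as 0.02" loses a little precision; for drawings you would keep the extra digits.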
{ "domain": "engineering.stackexchange", "id": 3037, "tags": "distance-measurement, unit, international" }
Binary tree struct in C
Question: It's been done to death, but the particular prompt (see Assignment 4) guiding me has strict+simple requirements that lead to a nice reduced environment to critique poor form. The project implements a heap-based binary tree structure with insert, find, and remove functions. Both the binary tree nodes' IDs as well as data are int. Children are called left and right. The project is split into 3 files, user.c, bintree.c, and bintree.h. The header file is #include'd in both C source files. user.c is where all the interaction is done - see bintree.h for the available functions as well as the binary tree node struct's definition. I will leave the original instructor comments in for clarity (the boilerplate can be obtained from the link above). user.c #include <stdio.h> #include "bintree.h" #include <stdlib.h> int main() { /* Insert your test code here. Try inserting nodes then searching for them. When we grade, we will overwrite your main function with our own sequence of insertions and deletions to test your implementation. If you change the argument or return types of the binary tree functions, our grading code won't work! */ return 0; } bintree.h #ifndef BINTREE_H #define BINTREE_H // Node structure should have an int node_id, an int data, and pointers to left and right child nodes typedef struct node { int node_id; int data; struct node *left; struct node *right; } node; ///*** DO NOT CHANGE ANY FUNCTION DEFINITIONS ***/// // Declare the tree modification functions below... void insert_node(int node_id, int data); int find_node_data(int node_id); void remove_node(int node_id); void delete_tree(); #endif bintree.c // bintree.c interfaces with the node struct defined in bintree.h // to create and manipulate a heap-based binary tree. All functions // meant for interfacing by the user are defined in the header file. // // There is only ONE root (ie one tree), so there is only // one node of the tree with no parents. 
// // bintree.c assumes binary tree was built USING THE INSERT_NODE ALGORITHM! // It is *theoretically* possible to construct binary trees obeying // the (left,right)~(<,>)node_id equivalence (ie left-child has smaller node id than node_id, // right-child has larger node id than node_id) that lose the 'useful' property // of these trees: the extremely fast find_node_data algorithm, // relying on all nodes to the right of the root having // a larger node_id, and all nodes to the left having a smaller node_id. // That is, a binary tree could theoretically locally preserve the // equivalence property without it globally holding. Therefore, bintree.c // cannot be handed a root pointer from another heap-based binary tree interface // and be expected to work, unless the tree also globally preserves the equivalence. #include <stddef.h> // NULL def #include <stdlib.h> // malloc #include <stdio.h> // printf/scanf #include "bintree.h" #define ERROR_VAL -999999 // junk data value for find_node_data ///*** DO NOT CHANGE ANY FUNCTION DEFINITIONS ***/// // Initialize tree node *root = NULL; // Insert a new node into the binary tree with node_id and data void insert_node(int node_id, int data) { // reserve memory for node node *newNode = (node *)malloc(sizeof(node)); // initialize node values newNode->node_id = node_id; newNode->data = data; newNode->left = NULL; newNode->right = NULL; // if tree empty, use as root if (root == NULL) { root = newNode; return; } // if tree not empty, traverse tree // find parent and attach child node *tmp = root; while (1) { // go to right-child if higher id if (node_id > tmp->node_id) { // if no right-child, attach child if (tmp->right == NULL) { tmp->right = newNode; return; } else { tmp = tmp->right; } // go to left-child if lower id } else if (node_id < tmp->node_id) { // if no left-child, attach child if (tmp->left == NULL) { tmp->left = newNode; return; } else { tmp = tmp->left; } // if ids are equal, ask to replace data (keeps children) } 
else if (node_id == tmp->node_id) { char ans; while(1) { printf("node id occupied. replace data? (y)/(n): "); scanf("%c", &ans); if ( ans == 'y') { tmp->data = data; return; } else if ( ans != 'n') { printf("invalid answer.\n"); continue; } } } } } // Returns pointer to node's parent given node id node *find_parent(int node_id) { if (root == NULL) { printf("root is NULL\n"); return NULL; } else if (root->node_id == node_id) { return NULL; } // root has no parent node *tmp = root; node *parent = NULL; while (1) { if (tmp == NULL) { printf("no node found with node_id %d\n", node_id); return NULL; } // if node_id higher, go right; if lower, go left; if match, return else if ( (node_id > tmp->node_id) && (tmp->right != NULL) ) { parent = tmp; tmp = tmp->right; } else if ( (node_id < tmp->node_id) && (tmp->left != NULL) ) { parent = tmp; tmp = tmp->left; } else if (node_id == tmp->node_id) { // check to ensure is actually parent before return if ( (parent->left == tmp) || (parent->right == tmp) ) { return parent; } } else { // catch no matches tmp = NULL; } } } // Returns pointer to node given node_id node *find_node(int node_id) { if (root == NULL) { return NULL; } // no nodes at all else if (root->node_id == node_id) { return root; } // root has no parent node *tmp = find_parent(node_id); if (tmp == NULL) { return NULL; } // parent has node w/ node_id as left- or right-child else if ( (tmp->left != NULL) && ((tmp->left)->node_id == node_id) ) { return tmp->left; } else if (tmp->right != NULL) { return tmp->right; } // catch no matches return NULL; } // Find the node with node_id, and return its data int find_node_data(int node_id) { node *tmp = find_node(node_id); if (tmp == NULL) { return ERROR_VAL; } return tmp->data; } /* OPTIONAL: Challenge yourself w/ deletion if you want Find and remove a node in the binary tree with node_id. Children nodes are fixed appropriately. 
*/ void remove_node(int node_id) { node *parent = find_parent(node_id); node *tmp = find_node(node_id); if (tmp == NULL) { printf("remove_node failed, node DNE\n"); return; } // • 'left-child' and 'right-child' refer to the // children of the node being deleted. // • 'appropriate leg' refers to whichever (left or right) // side of the parent the node being deleted is attached to. // // find the right-most descendent of the left-child (hence the highest node id less than node_id). // // - if no left-child, attach right-child to appropriate leg of tmp's parent. // // - otherwise, attach right-child to right-most descendent of left-child's right leg. // attach left-child to appropriate leg of tmp's parent. // // free tmp and return if (tmp->left == NULL) { if (tmp->right != NULL) { // catch root case if (parent == NULL) { root = tmp->right; } // otherwise attach right-child to appropriate leg of parent else if (parent->left == tmp) { parent->left = tmp->right; } else { parent->right = tmp->right; } } } else { // catch root case if (parent == NULL) { root = tmp->left; } // otherwise attach left-child to appropriate leg of parent else if (parent->left == tmp) { parent->left = tmp->left; } else { parent->right = tmp->left; } // and attach right-child to right-most descendent of left-child's right leg node *next = tmp->left; while (next->right != NULL) { next=next->right; } next->right = tmp->right; } free(tmp); return; } // For internal use by delete_tree, // recursively frees nodes of tree void delete_node(node *tmp) { if (tmp->left != NULL) { delete_node(tmp->left); } if (tmp->right != NULL) { delete_node(tmp->right); } free(tmp); } // There is only one tree and it starts where 'root' points, // delete_tree unallocates all memory for every node connected // to root, including root. 
void delete_tree() { delete_node(root); } I would have liked to change find_node_data to return a pointer so that NULL could be used to represent no result, but I wanted to stick to the grading scheme. This is a rudimentary assignment, so any discussion/critique is appreciated! How are my comments? Is the code concise, clear, efficient, and consistent (are there shorter algorithms, are my control structures convoluted, are my logic checks sound, is it readable, correct usage of macros)? Within the bounds of the guidelines, are there any gaps in my implementation? Answer: General Observations The use of the typedef for the node struct is a good practice. The fact that all of the loops and if statements contain the code as blocks by using the braces ({ and }) is a good practice. It would be much easier to do a code review if we had the working test code. The lack of the working test code in the main() function in user.c makes it more difficult to review the code. Some users on this site might close the question as Missing Review Context. It really isn't clear why there is a node_id element in the node struct. Generally binary trees are ordered by the value of their contents (the data field or member). Insertion into the tree should be based on the value of the data rather than the node ID. The code would be more portable if stdlib.h was included in user.c and the code returned either EXIT_FAILURE or EXIT_SUCCESS from main(). Since delete_node() is using recursion, it would probably be better if insert_node() used recursion to find the correct location for the node rather than a while loop. Convention When Using Memory Allocation in C When using malloc(), calloc() or realloc() in C a common convention is to use sizeof(*PTR) rather than sizeof(PTR_TYPE); this makes the code easier to maintain and less error prone, since less editing is required if the type of the pointer changes. 
node* newNode = malloc(sizeof(*newNode)); In C there is no reason to cast the return value of malloc(). Test for Possible Memory Allocation Errors In modern high-level languages such as C++, memory allocation errors throw an exception that the programmer can catch. This is not the case in the C programming language. While it is rare in modern computers because there is so much memory, memory allocation can fail, especially if the code is working in a limited memory application such as embedded control systems. In the C programming language when memory allocation fails, the functions malloc(), calloc() and realloc() return NULL. Referencing any memory address through a NULL pointer results in undefined behavior (UB). Possible unknown behavior in this case can be a memory page error (in Unix this would be called a Segmentation Violation), corrupted data in the program and in very old computers it could even cause the computer to reboot (corruption of the stack pointer). To prevent this undefined behavior a best practice is to always follow the memory allocation statement with a test that the pointer that was returned is not NULL. node* newNode = malloc(sizeof(*newNode)); if (!newNode) { fprintf(stderr, "Malloc failed in insert_node()\n"); exit(EXIT_FAILURE); } A better practice would be to have a function that tests the allocation and returns status (NULL pointer in this case). static node* create_node(int data) { node* newNode = malloc(sizeof(*newNode)); if (!newNode) { fprintf(stderr, "Malloc failed in create_node()\n"); return NULL; } else { newNode->data = data; newNode->left = NULL; newNode->right = NULL; } return newNode; } Avoid Global Variables It is very difficult to read, write, debug and maintain programs that use global variables. Global variables can be modified by any function within the program and therefore require each function to be examined before making changes in the code. 
In C and C++ global variables impact the namespace and they can cause linking errors if they are defined in multiple files. The answers in this stackoverflow question provide a fuller explanation. In this case root is a global variable. You can restrict it to file scope (internal linkage) by using the static keyword. // Initialize tree static node* root = NULL; Any functions that should be local such as delete_node() should be declared static as well. I am assuming delete_node() is local since it isn't included in bintree.h. This is also true for the find_parent() function.
{ "domain": "codereview.stackexchange", "id": 45437, "tags": "beginner, c, homework, binary-tree" }
Classification of very similar images
Question: I have two groups of images, each one with 1000 samples. The speckle pattern, in this context, is the same as a random pattern or "white noise" image. So these images are fundamentally different. In group one, each figure is generated by considering a random function that returns something similar to a speckle pattern (see fig. 1). In group two we follow the same procedure as group 1, but we plot a small point on top that can be positioned anywhere and with any color (see fig. 2). I want to classify both groups and I already tried to do it with simple neural networks, but I have been unsuccessful. What is the best technique for this kind of problem? Fig. 1: Fig. 2: Answer: I found the answer in the paper linked below. The authors use a CNN to solve the problem. I will post the code. https://link.springer.com/article/10.1007/s00170-017-0882-0
{ "domain": "datascience.stackexchange", "id": 2715, "tags": "python, neural-network, classification, computer-vision" }
Trying to grasp why internal energy decreases in an isochoric, isentropic process
Question: So there's this equation: $dU_{V,S}≤0$ which comes from $dS≥\frac{dQ}{T}$ $dQ = dU$ since isochoric $dS≥\frac{dU}{T}$ $TdS≥dU$ $dU≤0$ if $dS=0$ Firstly, is this set of equations valid? If so, how can the internal energy of a system decrease if there is no work done and no change in entropy? I'm having a hard time wrapping my head around this concept so does anyone have a simple visualization or example of this process? Answer: Thermodynamic inequalities The inequalities involving $U$, or $H$, or $A$, or $G$ require careful explanation to avoid confusion. First, we have $$dU = T dS - P d V \tag{1}$$ which is true (we are assuming a closed system, $dN=0$ throughout). But then sometimes we also see $$dU \leq T dS - P d V \tag{2}$$ which, as written, contradicts (1). The proper way to write Eq (2) is $$ \Big(\Delta U\Big)_{S,V} \leq 0 $$ and the proper way to read it is: at equilibrium $U$ is at a minimum with respect to partitioning of $S$ and $V$. This is illustrated above: all boxes are in internal equilibrium, but unless parts $A$ and $B$ are in equilibrium with each other, $U$ of the combined system is less than the sum of the parts before they were combined, under the constraints $S_A+S_B=\text{const}$, $V_A+V_B=\text{const}$. By straightforward application of the method of Lagrange multipliers, minimization of $U_A+U_B$ under these constraints gives $$ \frac{\partial U_A}{\partial S_A} = \frac{\partial U_B}{\partial S_B} = T \quad\text{and}\quad \frac{\partial U_A}{\partial V_A} = \frac{\partial U_B}{\partial V_B} = -P $$ which are the familiar equilibrium conditions. But why does internal energy decrease? For the combined system the transition from two parts to one is isochoric, therefore, there is no work done. If we did this in an adiabatic experiment by removing the wall between the parts, the entropy of the combined system would increase because the expansion is irreversible. 
But we are requiring constant entropy, so heat must be rejected to the surroundings, and this is why internal energy decreases.
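For completeness, the Lagrange-multiplier step mentioned above can be spelled out (an added sketch; $\lambda_S$ and $\lambda_V$ are the multipliers for the two constraints):

```latex
% Minimize U_A(S_A, V_A) + U_B(S_B, V_B) subject to
% S_A + S_B = S_tot and V_A + V_B = V_tot.
\[
\mathcal{L} = U_A + U_B
  - \lambda_S\,(S_A + S_B - S_{\mathrm{tot}})
  - \lambda_V\,(V_A + V_B - V_{\mathrm{tot}})
\]
% Stationarity in each of the four variables gives
\[
\frac{\partial U_A}{\partial S_A} = \frac{\partial U_B}{\partial S_B} = \lambda_S,
\qquad
\frac{\partial U_A}{\partial V_A} = \frac{\partial U_B}{\partial V_B} = \lambda_V .
\]
% Identifying (dU/dS)_V = T and (dU/dV)_S = -P, the common multipliers
% lambda_S and lambda_V are the shared temperature T and (minus) pressure -P.
```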
{ "domain": "physics.stackexchange", "id": 94800, "tags": "thermodynamics, entropy" }
Find the resultant force in a motion problem with a loop
Question: In the approximate diagram, the block with mass $m$ starts at rest at position $P$ at the top of a hill with height $5R$. After that, the block falls down through the hill, and goes through a loop until it arrives at position $Q$ with a height of $R$. If the loop is a circle of radius $R$ and there is no friction, what is the value of the resultant force at the point $Q$? My try I tried to use the law of conservation of energy in the points $P$ and $Q$, with this I get the speed, but I don't know how to continue in this problem. Any hints? Answer: Hint: the net force will be the resultant of the normal force, which is horizontal (equal to the centripetal force) at $Q$, and obviously the weight (= mg) which acts vertically downwards.
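Following the hint through numerically, under the assumption that $Q$ sits at height $R$ on the side of the loop (so the centripetal direction there is horizontal); the numeric values of $m$, $g$, $R$ below are placeholders, since the result scales as $mg$:

```python
import math

m, g, R = 1.0, 9.8, 1.0  # placeholder values; the answer scales as m*g

# Energy conservation from height 5R down to height R (no friction):
# (1/2) m v^2 = m g (5R - R)  =>  v^2 = 8 g R
v2 = 2 * g * (5 * R - R)

# At Q the centripetal acceleration points horizontally toward the center,
# so the normal force supplies N = m v^2 / R = 8 m g (horizontal).
N = m * v2 / R
W = m * g  # weight, vertical

resultant = math.hypot(N, W)
print(resultant / (m * g))  # sqrt(8^2 + 1) = sqrt(65) ≈ 8.06
```

So the resultant at $Q$ has magnitude $mg\sqrt{65}$, directed mostly horizontally toward the loop's center with a small downward component.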
{ "domain": "physics.stackexchange", "id": 58057, "tags": "forces, kinematics, energy-conservation, centripetal-force" }
Converting from RGB to depth
Question: hello, I know that given a (u,v,depth) in the depth image it is easy to retrieve the corresponding rgb pixel by unprojecting the depth pixel and projecting to the RGB image using the [R T] now my question is, is it possible to do the inverse? (having an rgb pixel, retrieve the corresponding depth pixel?) and if yes, how? thx Originally posted by caspersky on ROS Answers with karma: 21 on 2011-02-28 Post score: 2 Answer: openni_camera already provides what you are looking for by default in its current version! If you have a look at the rgb and depth image you will find that the depth image is already transformed and projected to camera coordinates of the rgb camera: Note that you can turn this off by changing the setting for depth_registration after running: rosrun dynamic_reconfigure reconfigure_gui Make sure to have a look at the wiki page as well. Originally posted by sebsch with karma: 790 on 2011-03-01 This answer was ACCEPTED on the original site Post score: 9
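A toy pinhole-camera sketch of the forward mapping the question describes (depth pixel → 3-D point → pixel). All intrinsics here are invented, and the extrinsics are taken as identity R and zero T purely to keep the round trip checkable; the point of the sketch is that going the other way needs a depth value in the RGB frame (or a search along the RGB ray), which is why openni_camera's pre-registered depth image is the practical answer:

```python
# Toy pinhole model; focal lengths and principal point are made up.
FX = FY = 512.0
CX, CY = 320.0, 240.0

def unproject(u, v, depth):
    # depth pixel (u, v) with depth -> 3-D point in the depth camera frame
    x = (u - CX) * depth / FX
    y = (v - CY) * depth / FY
    return (x, y, depth)

def project(p):
    # 3-D point -> pixel; a real setup would first apply [R | T] here
    x, y, z = p
    return (FX * x / z + CX, FY * y / z + CY)

print(project(unproject(400, 300, 2.0)))  # round-trips to (400.0, 300.0)
```

Given an RGB pixel alone, `project` cannot be inverted: every depth along the ray maps to that same pixel, so you either need registered depth at that pixel or must search the depth image along the epipolar line.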
{ "domain": "robotics.stackexchange", "id": 4897, "tags": "ros, navigation, mapping, kinect, depth" }
What is the difference between "ground truth" and "ground-truth labels"?
Question: I'm aware that the ground-truth of the example at the top left-hand corner of the image below is "zero" However, I am confused about the meaning of the terms ground truth and ground-truth labels. What is the difference between them? Answer: These two terms could easily refer to the same thing, depending on the context. For example, a lazy person could easily say something like this We compute the loss/error between the prediction (of the model) and the ground truth. Here, the ground-truth refers to the "officially correct" label (categorical or numerical) for a given input with which you compute the prediction. So, in this case, ground-truth would be a synonym for a ground-truth label. However, in general, ground-truth refers to anything, not just labels, that are correct or true (hence the name), so it could be used more generally. For instance, you could say something like this We assume that the ground-truth underlying probability distribution from which the data is sampled is a Gaussian. However, in this case, you could also leave out the ground-truth part, as it's more or less implied by the fact that you're assuming something. So, the difference between the two is that "ground-truth" can be used more generally to refer to anything that is "true".
{ "domain": "ai.stackexchange", "id": 2860, "tags": "machine-learning, comparison, terminology, data-labelling" }
How to determine the direction of components of force while solving the problems related to hinge force?
Question: There are two different directions of horizontal force in two different situations. Why? Answer: The horizontal component of the tension force on the left digram is to the left. Therefore the horizontal component of the reaction of the hinge must be to the right in order for the sum of the horizontal components to be zero. On the other hand, the horizontal component of the tension force on the right diagram is to the right. Therefore the horizontal component of the reaction of the hinge must be to the left in order for the sum of the horizontal components to be zero. Hope this helps.
{ "domain": "physics.stackexchange", "id": 60765, "tags": "newtonian-mechanics, forces, free-body-diagram, statics" }
Proof of an Infinite Binary Sequence
Question: I have a problem where an infinite binary sequence S ∈ {0, 1}∞ is defined to be "prefix-repetitive" if there are infinitely many strings w ∈ {0, 1}* such that ww is a prefix of S. I need to prove that if the bits of a sequence S ∈ {0, 1}∞ are chosen by independent tosses of a fair coin, then Prob[S is prefix-repetitive] = 0. My first instinct in tackling this problem was that the probability is 0 because of Cantor's diagonal argument, since we can construct a sequence s0 that is not in the set S. This would mean that S is countably infinite and the set of "all" possible infinite binary sequences is uncountable. Any help or suggestions to see if I'm on the right track to proving this problem would be great! Answer: Here is a formal or informal proof, depending on how the probability distribution is defined. Let $S_k=\left\{vvs\in\{0,1\}^\infty\mid v\in\{0,1\}^k,\ s\in\{0,1\}^\infty \right\}$, the sequences of infinite binary digits that repeat their initial $k$ binary digits immediately once (or more), where $k\ge1$. $$\begin{align} \text{Prob[$S_k$]} &\le\frac1{2^{2k}}\#\left\{w\in\{0,1\}^{2k}\mid w\text{ can be extended to an element in } S_k\right\}\\ &\le\frac1{2^{2k}}\#\left\{w\in\{0,1\}^{2k}\mid w=vv \text{ for some } k \text{ binary digits } v\right\}\\ &=\frac1{2^{2k}}\#\left\{v\in\{0,1\}^{k}\right\}\\ &=\frac1{2^{2k}}2^{k}\\ &=\frac1{2^k}\,. \end{align}$$ Given an integer $i\ge1$ and a prefix-repetitive toss $S$, we know $S$ must be contained in $S_k$ for some $k\ge i$ since there are infinitely many initial segments $v$ such that $vv$ is a prefix of $S$. Hence, $$\begin{align} \text{Prob[$S$ is prefix-repetitive]} &\le\sum_{k=i}^\infty\text{Prob[$S_k$]}\\ &\le\sum_{k=i}^\infty\frac1{2^{k}}\\ &=\frac1{2^{i-1}}\,.\\ \end{align}$$ Letting $i$ go to infinity, we see that $$\text{Prob[$S$ is prefix-repetitive]} = 0\,.$$ Here are three related exercises. Exercise 1. Show that the number of all prefix-repetitive tosses is more than countably infinite. 
Hence the approach mentioned in the question is unlikely to succeed. Exercise 2. Show that if the bits of a sequence $S \in \{0, 1\}^\infty$ are chosen by independent tosses of an unfair coin, then Prob[S is prefix-repetitive] = 0. Exercise 3. An infinite binary sequence $S \in \{0, 1\}^\infty$ is prefix-palindrome if there are infinitely many strings $v \in \{0, 1\}^*$ such that $vv^R$ is a prefix of $S$, where $v^R$ is the reverse of $v$. Show that if the bits of a sequence $S \in \{0, 1\}^\infty$ are chosen by independent tosses of a coin, then Prob[S is prefix-palindrome] = 0.
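The counting step in the bound above — that only $2^k$ of the $2^{2k}$ strings of length $2k$ have the form $vv$ — can be sanity-checked by brute force for small $k$ (a sketch, not part of the proof):

```python
from itertools import product

def square_count(k):
    """Count strings w of length 2k with w = vv for some v of length k."""
    return sum(
        1
        for bits in product("01", repeat=2 * k)
        if bits[:k] == bits[k:]
    )

for k in range(1, 6):
    # Matches the #{v in {0,1}^k} = 2^k step, so Prob[S_k] <= 2^k / 2^(2k) = 2^(-k).
    assert square_count(k) == 2 ** k
print("counts match 2^k for k = 1..5")
```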
{ "domain": "cs.stackexchange", "id": 13218, "tags": "proof-techniques, probability-theory, sets" }
Can something (again) ever fall through the event horizon?
Question: Since I am more confused by the answers given in this site to the many variants and duplicates of this question, with some arguing that from the point of view of the falling observer it happens in finite time, and the issue is a matter of GR frame of reference (in Can black holes form in a finite amount of time?), and others saying that everything falling into a black hole will always asymptotically fall towards the event horizon, never actually crossing it (in How can anything ever fall into a black hole as seen from an outside observer?), I am going to pose this question as a thought experiment, hoping that I will be able to make sense out of the answer, and get to a conclusion myself: Imagine I am standing at altitude $h$ above a non-rotating black hole of mass $M$. I am not in orbit, but I am not falling because I am in a rocket that perfectly counters the gravity, keeping me stationary. I have with me a magic ball. It is magic because it can fly like Superman, thrusting with any finite amount of force. So, no matter how close it gets to the event horizon, as long as it doesn't cross it, it can escape flying radially outwards. Now I drop my ball from the rocket, and it free falls radially into the black hole. It can decide at any moment to use its powers to try to climb back up to the rocket, but I don't know when or if that will happen. So, how much time must I wait to be completely sure that my ball crossed the event horizon and will never return? Answer: Since another answer claims that a massive magic device would form in finite time I have to disagree. You have to wait forever, but only because your device is magic. The simplest problems are the spherically symmetric ones. And if you can get things close to an event horizon and magically bring them away as long as they stay outside, then it is possible to not even know if the black hole forms. 
It is widely known that it takes finite time for two black holes to merge into a single black hole; this has been demonstrated in the corresponding numerical computations. This question wasn't about the real world, it was about the real world where there are magic devices that can move on timelike curves whenever they feel like it. Which is a useful thought experiment for understanding the geometry of a black hole. Step one. Draw a Kruskal-Szekeres diagram for a star of mass M+m and pick an event of Schwarzschild $r=r_0$ and Schwarzschild $t=t_0.$ Step two. Draw a timelike curve heading to the event horizon. Consider the region that has Schwarzschild t bigger than $t_0$ and has $r$ bigger than that curve at that Schwarzschild $t.$ This is a region of spacetime that sees a spherical shell of mass $m$ starting at $r=r_0$ and $t=t_0$ and heading down into an event horizon of a mass $M$ black hole. Step three. Now pick any event in this region of spacetime. Which is any point outside the black hole event horizon provided it is farther out than the thing lower down. So it is ancient, waiting for the new bigger black hole to form. Say it has an $r=r_{old}$ and a $t=t_{old}.$ Step four. Trace its past light cone. Now pick any $\epsilon>0$ and trace that cone back until it reaches the surface of Schwarzschild $r=(M+m)(2+\epsilon).$ And find the Schwarzschild $t_{young}$ where that event (the past light cone intersecting the surface Schwarzschild $r=(M+m)(2+\epsilon)$) occurs. As long as the magic spherically symmetric shell of mass $m$ stays at Schwarzschild r smaller than $r=(M+m)(2+\epsilon)$ until after Schwarzschild $t=t_{young}$, then it can engage its magic engines, come back up, and say hi to the person at $r=r_{old}.$ And the person won't see it until after the event $r=r_{old},$ $t=t_{old}.$ Which means. 
No matter how long you wait outside, the magic spherical shell of mass $m$ could still return to you, so it most definitely has not crossed the event horizon of the original mass $M$ black hole, and not even the larger event horizon of the mass $M+m$ black hole formed from it plus the original black hole. We do use the magic ability to come up. If you are willing to leave some of the substance behind, it could shoot off a large fraction of itself and use that to have the rest of it escape. But real everyday substances can't get thin enough to fit into that small region just outside the horizon, so you can't make a device that does this out of ordinary materials. But as far as your logic goes, this process would take infinite time and therefore is impossible. We want to know if you can tell whether the magic device joined the black hole. The answer is no, exactly because it takes an infinite amount of Schwarzschild time. Earlier answer follows ... For instance imagine a bunch of thin shells of matter. You can have flat space on the inside and then have a little bit of curvature between the two innermost shells. And have it get more and more curved on the outside of each successive shell until outside all of them it looks like a star of mass $M.$ Each shell is like two funnels sewn together, with a deeper funnel always on the outside, and all joined where they have the same circumference. So now how do I know we can never know if anything crosses an event horizon? If they crossed an event horizon then the last bit to cross has a final view, what they see with their eyes or cameras as they cross. And if there is something they see that hasn't crossed yet when they cross, that thing can run away and wait as many millions or billions of years as you want. And wherever and whenever they are, they, the people outside, will still see the collapsing shells from before they crossed the event horizon. So now imagine a different universe. 
One where they didn't form a black hole or cross an event horizon. But all the shells got really close, so close that everything up to that point looks the same to the person in the future. Then they turn around and come back. So we never saw a single thing cross the event horizon. And if there are magic ways to get away as long as you haven't crossed the event horizon, then there is no amount of time to wait before you know they crossed. Because no matter how long you wait, they still might not cross the horizon, or they might cross it and you don't know yet. With the spherical symmetry it is easy to see that what I say works, because there are really nice pictures for the spherically symmetric case where you can see what is and isn't possible. So you can pick a radius and a time, and I can draw a point on a graph and trace back to find out how close the magic device has to get before it turns around. As long as things can wait until they are really, really close, then you can't tell if they have crossed an event horizon. The other answer is just plain wrong. If you take a collapsing star of mass $M+m$ then you can find where an arbitrarily distant time sees the infalling body. And as long as it waited until that point, the magic device can escape.
{ "domain": "physics.stackexchange", "id": 72129, "tags": "general-relativity, black-holes, reference-frames, observers, event-horizon" }
Magnetic field in a wire with constant current
Question: I assume I have a wire parallel to the $z$ axis and with radius $R$. A constant current $I$ flows through it in the $z$ direction. I want to know the magnetic field inside the wire at distance $r<R$. In the figure, the pink dots represent the flow of electrons in the z direction for $r<R$. The red dots represent the flow of electrons in the z direction for $r>R$. I added the green magnetic field $B_o$, so that one might be wrong. The formula I came across is: $$ B=\frac{\mu_{0} I r}{2 \pi R^2} = \frac{\mu_{0} I_{enc} }{2 \pi r} \text{ for } r<R$$ Why does the magnetic field (blue in figure) at $r$ only depend on the magnetic field caused by the enclosed current $I_{enc}$ (the pink dots)? Why isn't there an influence by the magnetic field caused by the remainder of the current [$ I-I_{enc}$] (red dots)? For example, from $r \angle 40$ radially towards $R \angle 40$ there is current flowing in the $z$ direction (3 consecutive red dots in figure) which at $r \angle 40$ causes a magnetic field (green $B_o$ in figure) that is opposite to the one caused by the enclosed current (blue in figure), or am I wrong? Answer: The relationship you propose only applies if the current has symmetry with respect to the axis of the wire. Ampère's law says that the line integral of the B-field around a closed loop equals the enclosed current times $\mu$. $$\oint \vec{B}\cdot d\vec{l} = \mu I$$ To get your relationship you need to assume that, on any circular path around the axis, (I) the B-field is parallel to the path and (II) it is of constant magnitude. These conditions are satisfied for circular symmetry of the current distribution. Effectively what happens is that the B-field due to the current in the annulus around the circle exactly cancels to zero inside the circle. But I stress again, this only happens when everything is nicely symmetric. In your diagram, I count 7 dots in the outer part of the wire in the top left quadrant and 5 in the bottom right. 
Therefore the current distribution does not have circular symmetry, the B-field in the wire will not have circular symmetry, and you cannot easily use Ampère's law to calculate it! I suspect this is either careless drawing, or done by someone who doesn't (yet) completely understand when and how you can apply Ampère's law. So, assuming the drawing is accurate, then you are not (completely) wrong. Though of course one would have to integrate the contributions from all parts of the wire.
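For the symmetric case the answer describes, the piecewise formula is easy to check numerically; the current and radius below are invented values:

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability in T*m/A
I = 2.0                # total current in A (invented value)
R = 0.01               # wire radius in m (invented value)

def B(r):
    """|B| for an infinite straight wire with a circularly *symmetric*
    current density -- the assumption Ampere's law needs here."""
    if r < R:
        # Enclosed current scales with cross-sectional area: I_enc = I * r**2 / R**2,
        # so B = mu0 * I_enc / (2*pi*r) = mu0 * I * r / (2*pi*R**2).
        return MU0 * I * r / (2 * math.pi * R ** 2)
    return MU0 * I / (2 * math.pi * r)

# The field grows linearly inside, falls off as 1/r outside,
# and the two branches agree at the surface r = R.
inside, outside = B(R * (1 - 1e-9)), B(R * (1 + 1e-9))
print(math.isclose(inside, outside, rel_tol=1e-6))
```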
{ "domain": "physics.stackexchange", "id": 20711, "tags": "electromagnetism" }
Fourier Series representation of a signal
Question: Use the defining equation for the Fourier Series coefficients to evaluate the Fourier Series representation of the following signal: $$x(t)=\sum_{m=-\infty}^{+\infty}\left(\delta(t-m/3)+\delta(t-2m/3)\right)$$ I calculated $T=2/3$ and $w_0=3\pi$, however, I'm not sure whether $$X[k]=3/2 * \int_{0}^{2/3}(2\delta(t)+\delta(t-1/3)+2\delta(t-2/3))*e^{-jk3\pi t}dt$$ or $$X[k]=3/2 * \int_{0}^{2/3}(\delta(t)+\delta(t-1/3)+\delta(t-2/3))*e^{-jk3\pi t}dt$$ And if $X[k]=3/2 * \int_{0}^{2/3}(2\delta(t)+\delta(t-1/3)+2\delta(t-2/3))*e^{-jk3\pi t}dt$, is the following calculation correct (since I never did an integral with $\delta$ before)? $$X[k]=3/2*(2e^{-jk3\pi 0} + e^{-jk3\pi 1/3} + 2e^{-jk3\pi 2/3})=3+3/2*e^{-jk\pi} + 3e^{-jk2\pi}$$ Answer: You're getting yourself into unnecessary trouble by choosing the integration limits exactly at those values of $t$ where you have Dirac impulses. Note that you can choose any integration limit as long as you integrate over one period of the given function. E.g., if you choose some positive $\epsilon$ satisfying $0<\epsilon\le\frac16$ and you integrate from $-\epsilon$ to $\frac13+\epsilon$ then you have the relevant portion of the signal during one period inside the integral, and there's no question as to the scaling factor of the Dirac impulses. The result of the integration will be a real-valued constant for even $k$, and another real-valued constant for odd $k$. I'm sure you can derive the final result yourself.
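Following the answer's suggestion, here is a quick numeric check (a sketch; it assumes the shifted window $[-\epsilon, \frac13+\epsilon]$ contains an impulse of weight 2 at $t=0$, where the two sums coincide, and weight 1 at $t=1/3$):

```python
import cmath

T = 2 / 3              # period of the impulse train
w0 = 2 * cmath.pi / T  # fundamental frequency, 3*pi

# Impulse weights inside the shifted window [-eps, 1/3 + eps]:
# both sums put an impulse at t = 0 (weight 2), one sum at t = 1/3 (weight 1).
impulses = {0.0: 2, 1 / 3: 1}

def X(k):
    """Fourier coefficient: the Dirac impulses turn the defining
    integral into a finite sum."""
    return (1 / T) * sum(w * cmath.exp(-1j * k * w0 * t) for t, w in impulses.items())

for k in range(4):
    print(k, round(X(k).real, 10), round(X(k).imag, 10))
# One real-valued constant for even k and another for odd k,
# as the answer predicts.
```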
{ "domain": "dsp.stackexchange", "id": 8019, "tags": "continuous-signals, homework, fourier-series" }
Pseudo-Generic Array Stack in C
Question: I have implemented an array based pseudo-generic stack in C using macros. The code works fine for all data types. Is it a good idea to implement such a data structure using macros?

array_stack.h

#ifndef ARRAY_STACK_H
#define ARRAY_STACK_H
#include<stdlib.h>

#define array_stack(type) struct{size_t _size;size_t _capacity;type*_arr;}

#define stack_init(stack) do{\
    stack._capacity=1;\
    stack._size=0;\
    stack._arr=calloc(stack._capacity,sizeof(*stack._arr));\
}while(0)

#define stack_push(stack,data) do{\
    if(stack._size==stack._capacity)\
    {\
        stack._capacity*=2;\
        void*new_array=realloc(stack._arr,stack._capacity*sizeof(*stack._arr));\
        stack._arr=new_array;\
    }\
    stack._arr[stack._size]=data;\
    stack._size++;\
}while(0)

#define stack_pop(stack) if(stack._size!=0) stack._size--
#define stack_top(stack) (stack._size>0) ? stack._arr[stack._size-1] : *stack._arr //returns address of array if stack is empty
#define stack_empty(stack) (stack._size==0)
#define stack_length(stack) stack._size
#endif

Usage in main.c

#include<stdio.h>
#include<stdlib.h>
#include"array_stack.h"
#include<string.h>

int main(int argc,char**argv)
{
    array_stack(char)chars;
    array_stack(double)nums;
    stack_init(chars);
    stack_init(nums);
    const char*text="AzZbTyU";
    for (size_t i = 0; i < strlen(text); i++)
        stack_push(chars,text[i]);
    stack_push(nums,3.14);
    stack_push(nums,6.67);
    stack_push(nums,6.25);
    stack_push(nums,0.00019);
    stack_push(nums,22.2222);
    printf("Printing character stack: ");
    while(!stack_empty(chars))
    {
        printf("%c ",stack_top(chars));
        stack_pop(chars);
    }
    printf("\n");
    printf("Printing double stack: ");
    while(!stack_empty(nums))
    {
        printf("%lf ",stack_top(nums));
        stack_pop(nums);
    }
    printf("\n");
    return 0;
}

Answer: A better than usual implementation. Is it a good idea to implement such a data structure using macros? It is tricky to do well. User code looks like it is using non-macro code, yet the usual concerns about multiple execution of arguments and lack of function addresses occur. 
Unnecessary asymmetry

stack_init() and stack_push() are wrapped in a do { ... } while (0). Why is stack_pop() not wrapped? Do you want to allow:

stack_pop(stack) else puts("Hmmm");

How to create a pointer to a function? How to create a pointer to the stack?

// Does not work
size_t (*f)() = stack_empty;
array_stack(char)chars;
what_type_here *p = &chars;

Misleading comment

#define stack_top(stack) (stack._size>0) ? stack._arr[stack._size-1] : *stack._arr //returns address of array if stack is empty

Code does not return an address ... if stack is empty. Instead it returns the data type, like char or double. (Comment hidden at the far right. Consider formatting to a smaller nominal line length.)

Lack of documentation

array_stack.h deserves comments describing the overall functionality and limitations. I'd comment each "function" as well.

Other

Test include *.h independence. Someplace, add code as follows to test that "array_stack.h" does not rely on the .c file first including other include files.

// #include<stdio.h>
// #include<stdlib.h>
// #include"array_stack.h"
#include"array_stack.h"
#include<stdio.h>
#include<stdlib.h>

Overly compact style

// array_stack(char)chars;
array_stack(char) chars;
//                ^ space

Growth

Consider starting the stack with size 0.

#define stack_init(stack) do{\
    stack._capacity=0;\
    stack._size=0;\
    stack._arr=NULL;\
}while(0)

Grow with ._capacity = ._capacity*2 + 1.

Avoid repeated O(n) calls

// for (size_t i = 0; i < strlen(text); i++)
for (size_t i = 0; text[i]; i++)
{ "domain": "codereview.stackexchange", "id": 43137, "tags": "c, array, stack, macros" }
What is the sequence similarity between humans and chimps in the noncoding genome?
Question: I've seen some papers discuss extensively how human and chimp genomes compare in many of their features, eg Suntsova & Buzdin 2020, but I have not been able to find a paper that specifically studies and determines the sequence similarity percentage between humans and chimps specifically at noncoding regions (of course coding regions are known to have similarity around 97-99% depending on the metric). So I am curious to ask here: what is the percentage of noncoding sequence similarity between humans and chimps? Answer: According to "MUMmer4: A fast and versatile genome alignment system" (Plos Computational Biology 2018): "First we aligned the current assemblies of human and chimpanzee, using the default nucmer4 options with 32 parallel threads ... We used human as the reference and chimpanzee as the query sequence. The human GRCh38 assembly contains 3.088 Gb of sequence while chimpanzee assembly, with 3.31 Gb, contains 7% more DNA. (Note that the chimpanzee genome is far less polished than human, and much of the extra DNA might be explained by haplotype variants or incompletely merged regions; thus the two genomes might be much closer in size than these numbers indicate.) MUMmer had 2.782 Gb of the sequence in mutual best alignments, where each location in the chimp was aligned to its best hit in human and vice versa, with an average identity of 98.07%. The 1.93% nucleotide-level divergence found here is higher than the 1.23% reported in the original chimpanzee genome paper [25]. Our higher divergence is likely due to two factors: first, the 2005 report was based on 2.4 Gb of aligned sequence from older versions of both genomes, while ours is based on 2.782 Gb (16% more sequence) aligned between the current, more-complete versions of both genomes. Second, the original report used different methods, and may have counted fewer small indels than were counted in our alignments. 
Approximately 306 Mb (9.91%) of the human sequence did not align to the chimpanzee sequence, while 138 Mb (4.15%) of the chimpanzee sequence did not align to human. We detected 390 Mb in alignments where multiple sequences from chimpanzee aligned to the same location in human sequence and thus only one was chosen as the best alignment based on alignment identity. The genomes are very similar across all chromosomes, with the percent identity varying only slightly, from 97.5% to 98.2% for chromosomes 1-22 and X. Chromosome Y was an outlier at 96.6% identity over 84.6% of its length; however this is likely due to the fact that the chimpanzee Y chromosome is much less complete than the human Y." A more recent 2020 paper (https://doi.org/10.3389/fgene.2020.00292) found: "Since centromeres and unsequenced regions correspond to biological features, calculations including N’s and centromeres are listed in parentheses. Initial alignment coverage is 95.57% (90.9%) of the Hg38 reference, and identity within the alignment of 98.65%." So, it seems that if you include the noncoding genome, the sequence identity between human and chimp genomes is around ~90–91%.
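As a toy illustration of what "percent identity over aligned columns" means in these reports — the two short sequences below are invented; the real numbers come from whole-genome aligners such as MUMmer/nucmer:

```python
# Invented toy alignment; '-' marks a gap (an indel column).
seq_a = "ACGT-TGCAACGT"
seq_b = "ACGTATGCTACG-"

# Percent identity as matches over the gap-free aligned columns.
# (Real pipelines differ in whether and how indel columns are counted.)
columns = [(a, b) for a, b in zip(seq_a, seq_b) if a != "-" and b != "-"]
matches = sum(1 for a, b in columns if a == b)
identity = 100 * matches / len(columns)
print(f"{identity:.2f}% identity over {len(columns)} aligned columns")
```

Note how the headline figure also depends on how much of each genome aligned at all — which is exactly why the ~98–99% "within alignments" numbers and the ~90% "including unaligned/noncoding regions" numbers can coexist.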
{ "domain": "biology.stackexchange", "id": 12225, "tags": "human-genetics, sequence-analysis, homology" }
Does hot water dissolve Copper more than cold water?
Question: I read from WA state dept health https://www.doh.wa.gov/portals/1/Documents/pubs/331-178.pdf that hot water dissolves more copper than cold water. Use cold water for drinking and cooking. Because hot water dissolves more copper than cold water, limit consumption of water from the hot water tap. Is it correct that hot water dissolves copper more than cold? Answer: Actually, the topic of this question involves an important health care issue surrounding exposure to copper sources in varying conditions. For example, in the case of acidic drinking water with copper plumbing, to quote a 1980 article in Archives of Disease in Childhood: The boy's copper toxicosis was attributed to the ingestion of water which had a high copper concentration. Although the family's well water contained little copper, it had a pH of 3.8-4.8, and dissolved copper from the domestic plumbing. Three explanations may be offered... Thirdly, he had been fed on cows' milk diluted with water, and it is known that the availability for absorption of copper in milk is greater than in other foods.[4] Generally, applying heat is known to increase the speed of chemical reactions and, relatedly, increased copper poisoning is found in cultures employing copperware, especially in combination with common milk consumption, to quote a source: Indian childhood cirrhosis: One manifestation of copper toxicity, cirrhosis of the liver in children (Indian childhood cirrhosis), has been linked to boiling milk in copper cookware. So, caution and education are especially required in environments with potential copper exposure involving acidic conditions, warming and, apparently, foods (like milk) noted to promote copper accumulation.
{ "domain": "chemistry.stackexchange", "id": 14739, "tags": "water, heat" }
Effect of the translation operator affected by spin?
Question: I'm reading an introductory review on quantum walks and at some point it incorporates spin into the translation operator in a way that I don't follow. Initially it states that the translation by distance $l$ is defined as $$ U_l|\psi_x\rangle=|\psi_{x-l}\rangle $$ $$ U_l=\exp(-iPl) $$ where $|\psi_x\rangle$ is the position wavefunction and $P$ is the momentum operator. The discussion then shifts to an object with a position and spin parts to its wavefunction written as $|\Psi\rangle=\alpha^\uparrow|\uparrow\rangle\otimes|\psi_x\rangle+\alpha^\downarrow|\downarrow\rangle\otimes|\psi_x\rangle$. The article then goes on to say that the translation of this object is now described by $$ U_l=\exp(-2iS_z\otimes Pl) $$ meaning that $$ U_l|\uparrow\rangle\otimes|\psi_x\rangle=|\uparrow\rangle\otimes|\psi_{x-l}\rangle $$ $$ U_l|\downarrow\rangle\otimes|\psi_x\rangle=|\downarrow\rangle\otimes|\psi_{x+l}\rangle. $$ Now I understand why this new operator behaves the way it does but I don't get why it's used in the first place. Why has the $S_z$ operator suddenly become part of the generator of translation? Why does spin affect this at all? Also where did that factor of 2 come from in the exponent? This is the review article: http://arxiv.org/pdf/quant-ph/0303081v1.pdf The relevant section begins on page 2. Answer: Why has the $S_z$ operator suddenly become part of the generator of translation? Because the authors want their $U$ operation to make the spin and the position interact. The process that moved the $S_z$ into the generator wasn't anything physical or simulated, it was just the authors defining a useful operation. Also where did that factor of 2 come from in the exponent? Think of it as the strength or duration of the interaction. It's the $t$ in $U(t) = e^{-itH}$ that you end up computing when converting from a Hamiltonian to a unitary matrix (more generally, see Schrödinger equation). 
I don't know why the authors picked 2 specifically, but there's nothing stopping them from doing it. Maybe it's just to undo the factor of $\frac{1}{2}$ they introduced in the definition of $S_z$?
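Because $S_z$ is diagonal, $\exp(-2iS_z\otimes Pl)$ is block diagonal: the spin-up block feels $\exp(-iPl)$ and the spin-down block feels $\exp(+iPl)$. On a discrete ring of sites that is just a shift one way or the other; a sketch with an invented lattice size and state:

```python
import numpy as np

N, l = 8, 2       # ring of N sites, shift distance l (invented values)
psi = np.zeros(N)
psi[5] = 1.0      # position wavepacket localized at x = 5

# Spin-up block: exp(-iPl), i.e. |psi_x> -> |psi_{x-l}>  (amplitude moves 5 -> 3)
up_shifted = np.roll(psi, -l)
# Spin-down block: exp(+iPl), i.e. |psi_x> -> |psi_{x+l}> (amplitude moves 5 -> 7)
down_shifted = np.roll(psi, +l)

print(int(np.argmax(up_shifted)), int(np.argmax(down_shifted)))
```

This is exactly the "coin-conditioned shift" a discrete quantum walk needs: the spin state decides the direction of the step.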
{ "domain": "physics.stackexchange", "id": 32269, "tags": "quantum-information, quantum-spin" }
Infinitesimal transformation of differential forms
Question: The infinitesimal differential forms $dx_1$ and $dx_2$ span a two-dimensional vector space (cotangent space). The transformation $f$ acts on $(x_1,x_2)$ like a coordinate transformation. Thus, $$ dx_i \to du_i = \frac{\partial u_i}{\partial x_j}dx_j,\qquad i = 1,2.$$ Let's look at the infinitesimal transformation of $x_i$, i.e. $$ u_i(x) = x_i +\epsilon_i(x).$$ Show that if $f$ is a conformal transformation: $$ \omega(x)\delta_{ij} = -\frac{\partial \epsilon_j}{\partial x^i}-\frac{\partial \epsilon_i}{\partial x^j},$$ where $\omega(x) \in \mathbb{R}$ is a scale factor. Hint: First, show that $$\delta_{ij}du^idu^j=\left(1+\omega(x)\right)\delta_{kl}dx^kdx^l.$$ I've already tried different things such as substituting the $du$'s in the expression given as a hint. All of this basically led to nothing, so it would be great if somebody could help me find the solution. How should I start? Maybe I should mention that this is an exercise from a physics workbook, so this question has to be answered using the information given above. Answer: You already have all the elements that you need. You just need to equate two expressions for the metric after a conformal transformation. Let's see it. 
First, its change under any transformation $x\mapsto u(x)$ can be calculated as \begin{align} \delta_{ij}dx_idx_j\mapsto& \,\delta_{ij}du_idu_j = \delta_{ij}\frac{\partial u_i}{\partial x_k} \frac{\partial u_j}{\partial x_l}dx_kdx_l = \left(\delta_{ik}+\frac{\partial \epsilon_i}{\partial x_k}\right) \left(\delta_{jl}+\frac{\partial \epsilon_j}{\partial x_l}\right) \delta_{ij}dx_kdx_l\\ =& \left(\delta_i^k\delta_j^l+ \delta_{jl}\frac{\partial \epsilon_i}{\partial x_k}+ \delta_{ik}\frac{\partial \epsilon_j}{\partial x_l}+ O\left(\left(\frac{\partial\epsilon}{\partial x} \right)^2\right)\right)\delta_{ij}dx_kdx_l \\ =& \left(\delta_{kl}+\frac{\partial\epsilon_l}{\partial x_k} +\frac{\partial\epsilon_k}{\partial x_l}+ O\left(\left(\frac{\partial\epsilon}{\partial x}\right)^2\right)\right) dx_kdx_l \end{align} On the other side, if it is a conformal transformation it should satisfy that after the transformation the metric is equal to \begin{equation} \exp(-\omega(x))\delta_{ij}dx_idx_j=\left(1- \omega(x)+O\left(\omega(x)^2\right)\right)\delta_{ij}dx_idx_j. \end{equation} Now, neglecting quadratic or higher powers of $\omega$ and the derivatives of $\epsilon$ one gets: \begin{equation} (1+\omega)\delta_{ij}dx_idx_j=\left(\delta_{kl}- \frac{\partial \epsilon_l}{\partial x_k} -\frac{\partial \epsilon_k}{\partial x_l}\right)dx_kdx_l \end{equation} And because the coefficient of each $dx_idx_j$ has to be equal on both sides: \begin{equation} \omega\delta_{ij}= -\frac{\partial \epsilon_j}{\partial x_i} -\frac{\partial \epsilon_i}{\partial x_j} \end{equation}
{ "domain": "physics.stackexchange", "id": 36270, "tags": "homework-and-exercises, conformal-field-theory, coordinate-systems" }
A photon cannot exist if it travels at speed lower than $\mathrm{c}$ in a vacuum, why?
Question: It is known in physics that photons travel only at the fixed speed $\mathrm{c}$ in vacuum, and also inside a medium, going from one atom to the next by taking advantage of the vacuum that exists between atoms in a material medium. At any speed lower than $c$ the photons would cease to exist and get "destroyed". Why is that? This question is fundamental and I have a hard time deriving an analysis or physical explanation. Why is a photon possible only at propagation speed $\mathrm{c}$, independent of whether there is a medium or not? There must be a deeper connection to vacuum spacetime. Answer: The property of the photon moving with the speed of light can be derived through the relationship between the Lagrangian and the Hamiltonian of a free particle (with mass or without mass): $$ H = p\cdot \dot{q} -\cal{L} \equiv p\cdot v -\cal{L}$$ where $v$ is the velocity as the time derivative of the canonical coordinate $q$ of the free particle. $p$ is the momentum of the free particle. Now in relativistic mechanics the Lagrangian of the free particle is $$\cal{L} = const \int ds $$ the constant term const is only necessary to keep up with the units and $ds$ is the invariant line element: $$ds = \sqrt{c^2dt^2 -dx^2 -dy^2 -dz^2}$$ So for photons the line element $ds$ vanishes: $ds=0$, so the Lagrangian is zero. So the Hamiltonian is simply: $$H = p\cdot v$$ Furthermore we know from relativistic mechanics that the energy of a free particle is: $$ E =\sqrt{ (pc)^2 + m^2 c^4}$$ For a massless photon $m=0$ we get: $$E =p\cdot c$$ Knowing that the Hamiltonian of a particle corresponds to its energy we get: $$p\cdot v = H \equiv E = pc$$ which finally provides us with the answer: $v =c$. The main reason why photons move with the speed of light is that they are massless, $m=0$. 
But one could also just "define" photons as free particles (possibly adding another property to distinguish them from other massless particles -- gravitons for instance) whose line element is zero --- since this property is essential for the proof.
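The closing step of the derivation — $v = pc^2/E$, which equals $c$ exactly when $m=0$ and stays below $c$ for any massive particle — is easy to check numerically; the momentum and mass values below are invented:

```python
import math

c = 299_792_458.0   # speed of light, m/s

def E(p, m):
    """Relativistic energy E = sqrt((pc)^2 + (m c^2)^2)."""
    return math.sqrt((p * c) ** 2 + (m * c ** 2) ** 2)

def v(p, m):
    """Hamilton's equation: v = dH/dp = p c^2 / E."""
    return p * c ** 2 / E(p, m)

p = 1e-20           # some momentum in kg*m/s (invented value)
print(math.isclose(v(p, 0.0), c))   # massless case: v = c
print(v(p, 9.11e-31) < c)           # an electron-scale mass: v < c
```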
{ "domain": "physics.stackexchange", "id": 84187, "tags": "special-relativity, particle-physics, photons, speed-of-light, vacuum" }
Why are the total functions not enumerable?
Question: We learned about the concept of enumerations of functions. In practice, they correspond to programming languages. In a passing remark, the professor mentioned that the class of all total functions (i.e. the functions that always terminate for every input) is not enumerable. That would mean that we cannot devise a programming language that allows us to write all total functions but no others---which would be nice to have! So how is it that we (apparently) have to accept the potential for non-termination if we want decent computational power? Answer: Because of diagonalization. If $(f_e: e \in \mathbb{N})$ were a computable enumeration of all total computable functions from $\mathbb{N}$ to $\mathbb{N}$, such that every $f_e$ was total, then $g(i) = f_i(i)+ 1$ would also be a total computable function, but it would not be in the enumeration. That would contradict the assumptions about the sequence. Thus no computable enumeration of functions can consist of exactly the total computable functions. Suppose we think of a universal computable function $h(e,i)$, where "universal" means $h$ is a computable binary function and that for every total computable unary function $f(n)$ there is some $e$ such that $f(i) = h(e,i)$ for all $i$. Then there must also be some $e$ such that $g(n) = h(e,n)$ is not a total function, because of the previous paragraph. Otherwise $h$ would give a computable enumeration of total computable unary functions that includes all the total computable unary functions. Thus the requirement that every function in a system of functions is total is incompatible with the existence of a universal function in that system. For some weak systems, such as the primitive recursive functions, every function is total but there are no universal functions. Stronger systems that have universal functions, such as Turing computability, simply must have partial functions in order to allow the universal function to exist.
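The diagonalization in the first paragraph can be mimicked on a finite list of total functions (a toy stand-in for the hypothetical enumeration; the three functions are invented): $g(i) = f_i(i) + 1$ disagrees with every listed function.

```python
# A finite, invented list of total functions standing in for (f_e : e in N).
fs = [lambda n: n, lambda n: 2 * n, lambda n: n * n + 7]

def g(i):
    # The diagonal function: g(i) = f_i(i) + 1
    return fs[i](i) + 1

# g differs from each f_i at input i, so it cannot equal any f_i.
for i, f in enumerate(fs):
    assert g(i) != f(i)
print([g(i) for i in range(len(fs))])
```

In the real argument the list is infinite and computably enumerated, and g is still computable — which is exactly the contradiction.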
{ "domain": "cs.stackexchange", "id": 25, "tags": "computability, semi-decidability, enumeration" }
Is it possible to modify the linux assembler in order to modify what functions do?
Question: I was wondering if it were possible to modify the linux assembler in order to change the function of "-" to "+" and vice versa. I was also wondering if this would affect the whole system, making it collapse. I apologize if this question is not appropriate here. Please explain why if that is the case. E.g: 2+2 = 0 2-2 = 4 My objective is to see what applications this could have for a potential computer virus (I hope this is allowed here). Answer: While this is trivially possible, the fundamental flaw in your idea is that assemblers are not interpreters. Python programs are run by the Python interpreter. Swapping - and + in the Python interpreter would likely crash all Python programs. But normal programs are executed directly by the CPU. The assembler is not involved in their execution.
{ "domain": "cs.stackexchange", "id": 16335, "tags": "assembly" }