How to determine if the amount of manganese chloride will change neural resting potential
Question: If I am treating an organism with MnCl2 dissolved in water, how do I determine whether the amount of Cl (in MnCl2) will change the neural resting potential, or whether it will influence motoneurons? Answer: For theoretical calculations with added ions, you need to know the concentration and permeability of each of the major ions inside and outside the cell, both with and without the added substance (in your case MnCl2), and then use the Goldman equation (GHK equation) to calculate the resting potential before and after. Approximations would probably be sufficient. However, chloride can be a bit complicated because the intracellular chloride concentration can fluctuate with extracellular chloride more so than the other major ions. Therefore, depending on the sensitivity of your experiment, you may need to perform whole-cell patch-clamp recordings to measure the resting potential directly. If the change in chloride is a small percentage of the total, you can probably consider the influence negligible.
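The GHK calculation described above can be sketched numerically. A minimal Python example, assuming illustrative squid-axon-like concentrations and relative permeabilities (these numbers are for demonstration only, not measurements from any particular preparation):

```python
import math

def ghk_potential(P, conc_out, conc_in, T=298.15):
    """Goldman-Hodgkin-Katz voltage equation for K+, Na+, Cl-.

    P, conc_out, conc_in: dicts keyed by 'K', 'Na', 'Cl'.
    The chloride terms are swapped (intracellular in the numerator)
    because Cl- carries a negative charge.
    Returns the resting potential in volts.
    """
    R, F = 8.314, 96485.0
    num = (P['K'] * conc_out['K'] + P['Na'] * conc_out['Na']
           + P['Cl'] * conc_in['Cl'])
    den = (P['K'] * conc_in['K'] + P['Na'] * conc_in['Na']
           + P['Cl'] * conc_out['Cl'])
    return (R * T / F) * math.log(num / den)

# Illustrative relative permeabilities and concentrations (mM):
P = {'K': 1.0, 'Na': 0.04, 'Cl': 0.45}
out_mM = {'K': 20.0, 'Na': 440.0, 'Cl': 560.0}
in_mM = {'K': 400.0, 'Na': 50.0, 'Cl': 52.0}

vm = ghk_potential(P, out_mM, in_mM)
print(f"Resting potential: {vm * 1000:.1f} mV")
```

To estimate the effect of added MnCl2, one would recompute `vm` with the extracellular (and, per the caveat above, possibly the intracellular) chloride raised, and compare.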
{ "domain": "biology.stackexchange", "id": 9100, "tags": "neuroscience, neurophysiology" }
How extensive is CD47?
Question: CD47, aka the "don't eat me" signal, has recently been claimed to be expressed on all tumor cells. This doesn't seem consistent with other cell-biology experiments. On what other cells is CD47 expressed? Answer: I don't know how extensive its expression is, so let's run a simple data query and find out. Go to GEO at NCBI. In the "Gene profiles" window, type CD47 and hit enter to launch the query. At the top of the resulting page, use the link labeled "Limits" to restrict the 7000+ results to human by entering the term "human" in the "DataSet organism" window. That leaves 3700+ results; look through them to get an idea of which cell types and under which conditions CD47 is expressed. Let's try a second method. Go to the BioGPS portal. Enter CD47 in the appropriate window and run the query. From the results, select the row with ID # 961, as this represents the human gene CD47. The resulting gene expression/activity chart for Hs (human) will show you in more general terms where CD47 is expressed.
{ "domain": "biology.stackexchange", "id": 313, "tags": "cell-biology, gene-expression" }
How is the rate of current change in an inductor present in a circuit maintained / decreased?
Question: Consider an ideal circuit of a DC voltage source and an inductor connected together with a switch in between. When the switch is closed at t=0, the current starts increasing, which causes an induced EMF and hence a non-conservative electric field. To maintain zero electric field inside an (ideal) conductor, electric charges accumulate to cancel the non-conservative field, so a voltage develops across the inductor (as voltage is defined for static fields), but the net electric field inside the ideal inductor is zero. From the equation EMF = -L(di/dt), I understand that the current must change for the EMF to exist, but from the argument above I feel that there is no electric field inside, so the current should remain constant, which makes the induced EMF zero (as there is no change in current now) and also makes the field due to the accumulated surface charges vanish. So how is the rate of current change maintained? What causes it to exist? Please give an intuitive explanation of how this is happening. I have read that the induced EMF causes the rate of change of current to decrease (in general), but how, given the argument above? And what does the rate of change of current mean: an increase in the velocity of the charges, or an increase in the number of charges? I know there is some flaw in my reasoning, I am surely missing something, so please help me out. Answer: "there is no electric field inside and hence the current should remain constant" - in the case of an ideal inductor and an ideal DC source, the increase of current does not require the presence of a substantial net macroscopic electric field in the wire. There is a strong Coulomb field due to the battery and surface charges, but inside the wires this field is almost cancelled out by the electric field induced by the charge carriers in the inductor.
Of course, on the microscopic level of description, there has to be some small non-zero force that accelerates the current carriers in the direction of the current to make them move faster. Thus the Coulomb field of the battery and surface charges is slightly greater than the induced electric field, so some net work is done on the charge carriers, increasing their kinetic energy. But this accelerating force is, on the macroscopic scale, negligible, because the charge carriers are extremely light and there is (by assumption) no ohmic resistance. When you calculate the kinetic energy of the mobile electrons in an inductor, it is many orders of magnitude smaller than the magnetic energy stored in the inductor. So the net force (the sum of the Coulomb forces and the forces of the induced electric field) needed to accelerate them is negligible compared to the net Coulomb force, and thus the net electric field is effectively zero.
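For the ideal circuit, the macroscopic picture above reduces to di/dt = V/L. A quick Euler-integration sketch with illustrative component values, showing that the current ramps at a constant rate while the back-EMF stays equal and opposite to the source voltage:

```python
# Ideal DC source V across ideal inductor L: L di/dt = V.
V = 10.0   # volts (illustrative)
L = 0.5    # henries (illustrative)
dt = 1e-4  # time step, s

i, t = 0.0, 0.0
for _ in range(10000):      # integrate for 1 second
    didt = V / L            # no resistance, so di/dt stays constant
    i += didt * dt
    t += dt

back_emf = -L * (V / L)     # induced EMF opposes the source: exactly -V
print(f"i({t:.2f} s) = {i:.3f} A, back-EMF = {back_emf:.1f} V")
```

The "rate of change of current" here is the carriers drifting faster, not more carriers appearing: the current grows linearly (i = Vt/L) without the back-EMF ever exceeding the source.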
{ "domain": "physics.stackexchange", "id": 59715, "tags": "electromagnetism, electric-circuits, voltage, electromagnetic-induction, inductance" }
Parent/child relationships: adding/updating details to an answer
Question: Answer is the parent table and AnswerDetail is the child. The code below works, but I'm wondering if there is a better way to do this using EF? My method signature is this (it is coming from an MVC 4 application):

```csharp
[HttpPost]
public ActionResult Evaluator(EvaluationVM evaluation, string command)
```

```csharp
Answer AnswerRecord;
AnswerRecord = (db.Answers).Where(x => x.TeacherID.Equals(evaluation.CurrentTeacher.ID))
    .Where(y => y.LeaderID != null).FirstOrDefault<Answer>();
if (AnswerRecord == null)
{
    //Add New Parent Record:
    AnswerRecord.CreatedBy = userName;
    AnswerRecord.CreateStamp = DateTime.Now;
    db.Entry(AnswerRecord).State = EntityState.Added;
    AnswerRecord.UpdatedBy = userName;
    AnswerRecord.UpdateStamp = DateTime.Now;
}
AnswerRecord.UpdatedBy = userName;
AnswerRecord.UpdateStamp = DateTime.Now;

//I use this block if I know the child record is new:
adItem = db.AnswerDetails.Find(item.LeaderAnswerDetailKey);
if (adItem != null)
{
    adItem.Comment = item.LeaderComment;
    adItem.AnswerOptionKey = item.LeaderAnswerOptionKey.Value;
    adItem.UpdatedBy = userName;
    adItem.UpdateStamp = DateTime.Now;
    db.Entry(adItem).State = EntityState.Modified;
}

foreach (EvaluationObject item in evaluation.ResultSet)
{
    //I use this block if I am updated the child record
    adItem = new AnswerDetail();
    adItem.QuestionID = item.QuestionID.Value;
    adItem.AnswerOptionKey = item.TeacherAnswerOptionKey.Value;
    adItem.Comment = item.TeacherComment;
    adItem.CreatedBy = userName;
    adItem.CreateStamp = DateTime.Now;
    adItem.UpdatedBy = userName;
    adItem.UpdateStamp = DateTime.Now;
    adItem.AnswerKey = AnswerRecord.AnswerKey;
    AnswerRecord.AnswerDetails.Add(adItem);
    db.Entry(adItem).State = EntityState.Added;

    //This is called at the end of my method:
    db.SaveChanges();
} //End of ForEach Loop
```

EvaluationVM:

```csharp
public class EvaluationVM
{
    public bool IsAdmin { get; set; }
    public bool IsLeader { get; set; }
    public int TeacherStatus { get; set; }
    public int LeaderStatus { get; set; }
    public bool IsPublicTeacher { get; set; }
    public bool IsFinalTeacher { get; set; }
    public bool IsPublicLeader { get; set; }
    public bool IsFinalLeader { get; set; }
    public List<EvaluationObject> ResultSet { get; set; }
    public TeacherInfo CurrentTeacher { get; set; }
    public LeaderInfo CurrentLeader { get; set; }
    public List<EvaluationRating> RatingSet { get; set; }
}
```

EvaluationObject:

```csharp
public class EvaluationObject : IEvaluationObject
{
    public int? QuestionID { get; set; }
    public string IndicatorID { get; set; }
    public string QuestionDescription { get; set; }
    public string TeacherID { get; set; }
    public int? TeacherStatusKey { get; set; }
    public int? TeacherAnswerKey { get; set; }
    public int? TeacherAnswerDetailKey { get; set; }
    public int? TeacherAnswerOptionKey { get; set; }
    public string TeacherComment { get; set; }
    public string LeaderID { get; set; }
    public int? LeaderStatusKey { get; set; }
    public int? LeaderAnswerKey { get; set; }
    public int? LeaderAnswerDetailKey { get; set; }
    public int? LeaderAnswerOptionKey { get; set; }
    public string LeaderComment { get; set; }
}
```

Answer:

Separation of Concerns

I like that you're using EF's built-in unit-of-work implementation. However, db (your DbContext class) looks like it's declared at instance level in your controller - make sure it's disposed properly at the end of the request (you could inject the context in the controller's constructor and use an IoC container to ensure per-request instantiation & disposal). That doesn't mean the controller's [HttpPost] methods should be doing all the work! Looking at the method's code, I think you can extract an entire service class that exposes at least 3 methods:

```csharp
Answer CreateNewAnswer(string userName)
AnswerDetail FindByLeaderId(int leaderId)
AnswerDetail CreateNewAnswerDetail(int answerId, AnswerDetailVM item)
```

Extracting these methods into their own class will make it much easier to follow your controller methods' code, by adding a level of abstraction - the controller shouldn't be dealing with minute details; rather, it should call into more specialized objects that do their specialized stuff. Shortly put, separate the concerns ;)

SaveChanges

You don't need to call db.SaveChanges() for every entity you create in the loop - EF's DbContext is a unit of work that encapsulates a transaction, so calling SaveChanges is like saying "I'm done, now commit all these pending changes!" - call it once (or as sparingly as possible - e.g. the parent would typically need to exist in the db before a child can be added), when you're done.

Naming & Other Nitpicks

```csharp
Answer AnswerRecord;
AnswerRecord = (db.Answers).Where(x => x.TeacherID.Equals(evaluation.CurrentTeacher.ID)).Where(y => y.LeaderID != null).FirstOrDefault<Answer>();
```

For readability, I prefer to split these across multiple lines:

```csharp
AnswerRecord = (db.Answers).Where(x => x.TeacherID.Equals(evaluation.CurrentTeacher.ID))
                           .Where(y => y.LeaderID != null)
                           .FirstOrDefault<Answer>();
```

This is confusing, because x and y refer to the same object. Also I'd prefer == over .Equals in most cases, so I'd write it like this instead:

```csharp
var answer = db.Answers.Where(answer => answer.TeacherID == evaluation.CurrentTeacher.ID
                                     && answer.LeaderID != null)
                       .FirstOrDefault();
```

The type parameter for FirstOrDefault is inferred from usage and doesn't need to be specified ;)

Comments

If these comments are real, in-code comments...

```csharp
//I use this block if I know the child record is new:
//I use this block if I am updated the child record
```

...then they're both lying - the code block under each comment seems to be doing what the other comment is saying! Remove these misleading comments; calling _service.CreateNewAnswerDetail should be clear enough that you're creating a new AnswerDetail entry ;) As for this one:

```csharp
//Add New Parent Record:
```

and this one:

```csharp
//End of ForEach Loop
```

...they both say nothing that the code doesn't say already - remove them as well, and thank yourself later ;)
{ "domain": "codereview.stackexchange", "id": 7326, "tags": "c#, entity-framework" }
Power loss due to eddy currents
Question: I am curious about estimating power losses due to eddy currents. Looking on Wikipedia I find an expression for power dissipation under limited circumstances, $$ P = \frac{\pi^2 B^2 d^2 f^2}{6k\rho D} $$ where $P$ is the power in watts per kilogram, $B$ is the peak field, $d$ is the thickness of the conductor, $f$ is the frequency, $k\sim1$ is a dimensionless constant which depends on the geometry, $\rho$ is the resistivity, and $D$ is the mass density. However, I can't make the units work out. The tricky ones are usually the electromagnetic units. From the Lorentz force $\vec F = q\vec v\times\vec B$ I find $$ \mathrm{ 1\,T = 1 \frac{N\cdot s}{C\cdot m} }. $$ From Ohm's law $V=IR$, $$ \mathrm{ 1\,\Omega = 1\,\frac{V}{A} = 1\,\frac{N\cdot s}{C^2} }, $$ and the dimension of $\rho$ is $\mathrm{\Omega\cdot m}$. So the dimensions of $P$ should be \begin{align*} \left[ B^2 (d\,f)^2 \rho^{-1} D^{-1} \right] &= \mathrm{ \left( \frac{N\cdot s}{C\cdot m} \right)^2 \left( \frac ms \right)^2 \left( \frac{C^2}{N\cdot s\cdot m} \right) \left( \frac{m^3}{kg} \right) }\\ &= \mathrm{ \left( \frac{N^2}{C^2} \right) \left( \frac{C^2}{N\cdot s\cdot m} \right) \left( \frac{m^3}{kg} \right) }\\ &= \mathrm{ \left( \frac{N}{m\cdot s} \right) \left( \frac{m^3}{kg} \right) } = \mathrm{ \frac{N\cdot m^2}{s\cdot kg} = \frac{W\cdot m}{kg} } \end{align*} This is different from the stated units of $\mathrm{W/kg}$. Am I making some stupid mistake? Is the formula wrong? What's happening here? Answer: It is very good practice to double-check Wikipedia, but in this case Wikipedia is right. Those E&M units are tricky. From $F=Eq$ we get that $V=\frac{N \cdot m}{C}$, so $\Omega = \frac{N \cdot s \cdot m}{C^2}$, which solves your problem.
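This kind of bookkeeping can be checked mechanically by tracking SI base-unit exponents. A small Python sketch representing each quantity as exponents of (kg, m, s, A) and confirming that $B^2 d^2 f^2 / (\rho D)$ indeed has the dimensions of W/kg:

```python
BASE = ('kg', 'm', 's', 'A')

def dim(kg=0, m=0, s=0, A=0):
    """An SI dimension as a dict of base-unit exponents."""
    return {'kg': kg, 'm': m, 's': s, 'A': A}

def mul(a, b):
    return {u: a[u] + b[u] for u in BASE}

def div(a, b):
    return {u: a[u] - b[u] for u in BASE}

tesla   = dim(kg=1, s=-2, A=-1)       # B: T = kg s^-2 A^-1
metre   = dim(m=1)                    # d
hertz   = dim(s=-1)                   # f
rho     = dim(kg=1, m=3, s=-3, A=-2)  # resistivity: ohm * m
density = dim(kg=1, m=-3)             # D

# Dimensions of B^2 d^2 f^2 / (rho * D):
r = dim()
for q in (tesla, tesla, metre, metre, hertz, hertz):
    r = mul(r, q)
for q in (rho, density):
    r = div(r, q)

print(r)
assert r == dim(m=2, s=-3)   # W/kg = (kg m^2 s^-3) / kg = m^2 s^-3
```

With the corrected ohm (N·s·m/C², i.e. kg·m³·s⁻³·A⁻² for Ω·m) the exponents reduce to m²·s⁻³, which is exactly W/kg.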
{ "domain": "physics.stackexchange", "id": 62615, "tags": "homework-and-exercises, electromagnetism, units" }
Can we define potential for all conservative forces?
Question: I know that defining a potential for non-conservative forces is not possible, and that we can define a potential and potential energy for conservative forces only. But can we define it for all conservative forces? Answer: A conservative vector field is, by definition, a vector field that can be written as the gradient of a scalar function. Since conservative forces are conservative vector fields, they can all be written as the gradient of a function, and that function (with the usual physics sign convention, $\vec F = -\nabla U$) is the potential.
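An equivalent characterization is that a conservative force does path-independent work, which is what makes the potential well defined. A quick numerical check for the (conservative) field $\vec F = \nabla(x^2 y) = (2xy,\, x^2)$, comparing the work done along two different paths between the same endpoints:

```python
def work(force, path, n=20000):
    """Approximate the line integral of `force` along a path
    parametrized by t in [0, 1], using the midpoint rule."""
    total = 0.0
    x0, y0 = path(0.0)
    for k in range(1, n + 1):
        x1, y1 = path(k / n)
        fx, fy = force((x0 + x1) / 2, (y0 + y1) / 2)
        total += fx * (x1 - x0) + fy * (y1 - y0)
        x0, y0 = x1, y1
    return total

# F = grad(x^2 y) = (2xy, x^2); its potential function is phi = x^2 y.
force = lambda x, y: (2 * x * y, x * x)

straight = lambda t: (t, t)        # (0,0) -> (1,1) along the diagonal
bent     = lambda t: (t, t ** 3)   # same endpoints, different route

w1, w2 = work(force, straight), work(force, bent)
print(w1, w2)   # both approach phi(1,1) - phi(0,0) = 1
```

Both integrals converge to the same value, the difference of the potential function between the endpoints; for a non-conservative field the two results would differ.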
{ "domain": "physics.stackexchange", "id": 56691, "tags": "forces, classical-mechanics, potential, potential-energy, conservative-field" }
How to convert my classes to Dependency injection?
Question: I am still learning to develop my skills in OOP. It uses a combination of the factory and its real singletons? As I did more research, I have realized this design pattern is bad because of global state and it is hard to test. I can't figure how how to convert this OOP to Dependency injection? Section Class class Section { // Array contains of instantiations of Section protected static $instance; // Validation instance of validation public $validation; public $something; // instance method returns a new section instance. public static function instance($name = "default") { if ($exist = static::getInstance($name)) { return $exist; } static::$instance[$name] = new static($name); return static::$instance[$name]; } // Return a specific instance public static function getInstance($instanceKey = null) { if (!isset(static::$instance[$instanceKey])) { return false; } return static::$instance[$instanceKey]; } // Get Validation instance or create it public function validation() { if ($this->validation instanceof Validation) { return $this->validation; } if (empty($this->validation)) { $this->validation = Validation::instance($this); } return $this->validation; } public function add($something) { $this->something = $something; } } Validation Class (Just a random name for now) class Validation { // Related section instance public $section; protected function __construct($section) { $this->section = $section; } public static function instance($section = "default") { if (is_string($section)) { $section = Section::instance($section); } return new static($section); } public static function getInstance($instanceKey = null) { $temp = Section::getInstance($instanceKey); if (!$temp) { return false; } return $temp->validation(); } // Alias for $this->section->add() public function add($name) { $this->section->add($name); return $this->section->something; } } Testing: $section = Section::instance("Monkey"); $validation = $section->validation(); echo $validation->add("This is 
just a test"); Answer: First, a few minor comments: Class properties should almost always be protected or private. This forces you to use a known tested interface (the class methods) to interact with the state stored in the class. The properties section, something and validation should have all been protected or private. Your method add does not add, it sets. add might be appropriate for concatenation (although I'd prefer append or prepend). It is best used for real addition. Your add implementation would have been better named set. I have added to your example class to highlight the benefits of Dependency Injection. I have used text and title in place of your something. Validation Interface The validation interface defines the way to check for validity. If we need to check for validity and we are passed an object that implements this interface we can be sure that it will work. By relying on an interface rather than a specific object we reduce our coupling from a specific object to any object that implements the interface. interface IValidation { public function isValid($str); } Section Class The constructor accepts all of the parameters that the class needs (the dependencies are injected). I have used an array in the constructor (as a personal preference to remove a required order of parameters and to use named parameters with the associative array). Standard SPL exceptions are thrown if the validation objects do not match the Validation interface. class Section { // Text for the section. protected $text; // Validation for the text. protected $textValidation; /// Title for the section. protected $title; // Validation for the title. protected $titleValidation; public function __construct(Array $setup) { $setup += array('Text' => '', 'Text_Validation' => NULL, 'Title' => '', 'Title_Validation' => NULL); if (!$setup['Text_Validation'] instanceof IValidation) { throw new \InvalidArgumentException( __METHOD__ . 
' requires Text_Validator'); } if (!$setup['Title_Validation'] instanceof IValidation) { throw new \InvalidArgumentException( __METHOD__ . ' requires Title_Validation'); } $this->text = $setup['Text']; $this->textValidation = $setup['Text_Validation']; $this->title = $setup['Title']; $this->titleValidation = $setup['Title_Validation']; } public function setText($text) { $this->text = $text; } public function setTitle($title) { $this->title = $title; } // Return whether the section is valid. public function isValid() { return $this->textValidation->isValid($this->text) && $this->titleValidation->isValid($this->title); } } Validation Class Notice how this class implements the interface. class Validation implements IValidation { // The regex for a valid match. protected $validMatch; public function __construct($validMatch) { $this->validMatch = $validMatch; } public function isValid($str) { return preg_match($this->validMatch, $str); } } Usage We can see from the usage some of the benefits of dependency injection. There is no hardcoded dependencies in any of the classes and we can glue the components together when they are used. This allows us to define the exact type of validations that we require and pass them to the Section class with injection. $alphanumUnderscores = new Validation('/^[[:alnum:]_]*$/'); $digits = new Validation('/^[[:digit:]]+$/'); $section = new Section(array('Text_Validation' => $alphanumUnderscores, 'Title' => 'InvalidTitle', 'Title_Validation' => $digits)); if (!$section->isValid()) { // The title should be digits. echo 'As expected the title is invalid.' . "\n"; } $section->setTitle(9876); if ($section->isValid()) { echo 'As expected the title is valid with digits.' . "\n"; }
{ "domain": "codereview.stackexchange", "id": 1396, "tags": "php, design-patterns, object-oriented, php5, classes" }
Streamlines & Pathlines Problem
Question: I have a 2D flow velocity field $\bar{V} = y^2\hat{i} + 2\hat{j}$. I'd like to find the equations for the streamlines and pathlines. Since $\bar{V}$ has zero time derivative, the flow is steady, and so the equations for the streamlines and pathlines should be identical, right? Streamlines: $$\left. \frac{dy}{dx} \right)_{streamline} = \frac{v}{u} = \frac{2}{y^2}$$ $$ \therefore y^2 dy = 2 dx \Rightarrow \int_{y_0}^y y^2 \, dy = \int_{x_0}^x 2 \, dx $$ $$ \therefore \tfrac{1}{3}y^3 - \tfrac{1}{3}y_0^3 = 2(x - x_0) \Rightarrow y^3 - 6x = y_0^3 - 6x_0$$ Pathlines ($x_p$ and $y_p$ the particle coordinates): $$ \frac{dx_p}{dt} = y^2 \Rightarrow x_p = y^2(t-t_0) + x_{p,0} $$ $$ \frac{dy_p}{dt} = 2 \Rightarrow y_p = 2(t - t_0) + y_{p,0} $$ Now eliminating $(t-t_0)$: $$x_p = y^2 \left( \frac{y_p - y_{p,0}}{2} \right) + x_{p,0}$$ Without expanding this final equation for the pathline, it is obviously different from the equation of the streamlines; but it shouldn't be, as the flow is steady? Normally this might suggest I've been fast and loose with the math, but I just can't see where I'm wrong here... Answer: $y$ is changing in time, but in your second solution you integrated the $x$ equation as if $y$ were constant, which is illegitimate. $${dy_p\over dt} = 2 \implies y=2t+y_0$$ $${dx_p\over dt} = y^2 = (2t+y_0)^2 \implies x_p = {4\over 3}t^3 + 2y_0 t^2 + y_0^2 t + x_0$$ This is consistent with the first solution.
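The consistency can be verified numerically: integrate the particle ODEs and check that the streamline invariant $y^3 - 6x$ stays constant along the pathline, as it must for a steady flow. A small hand-rolled RK4 sketch:

```python
def rk4_step(f, state, t, dt):
    """One classical Runge-Kutta step for dstate/dt = f(t, state)."""
    k1 = f(t, state)
    k2 = f(t + dt / 2, [s + dt / 2 * k for s, k in zip(state, k1)])
    k3 = f(t + dt / 2, [s + dt / 2 * k for s, k in zip(state, k2)])
    k4 = f(t + dt,     [s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

# Velocity field V = (y^2, 2): dx/dt = y^2, dy/dt = 2.
f = lambda t, s: [s[1] ** 2, 2.0]

x0, y0 = 0.5, 1.0                 # arbitrary starting point
invariant0 = y0 ** 3 - 6 * x0     # streamline invariant y^3 - 6x
state, t, dt = [x0, y0], 0.0, 0.001
for _ in range(2000):             # integrate to t = 2
    state = rk4_step(f, state, t, dt)
    t += dt

x, y = state
print(y ** 3 - 6 * x, invariant0)   # equal: the pathline traces a streamline
```

The invariant is preserved along the trajectory, confirming that the corrected pathline lies on a streamline, while the erroneous "constant y" solution would drift off it.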
{ "domain": "physics.stackexchange", "id": 1554, "tags": "fluid-dynamics" }
Are the collisions between the real gas particles perfectly elastic?
Question: My question is simple: if two real gas particles collide head-on, will kinetic energy be conserved, i.e. will it be a perfectly elastic collision? Answer: Well, what is an inelastic collision, really? Suppose you have two balls made of steel; they collide, then fly away with some lasting deformation, so some energy is lost. With molecules, it is not quite like that. You can't leave a dent on a molecule. It has certain discrete energy levels, and that's it. You either excite the molecule to one of these levels, or you don't excite it at all. To sum it up, some collisions of molecules are perfectly elastic, and others are accompanied by excitation of some rotational or (more probable at higher temperatures) vibrational mode in one of the molecules, or maybe in both. Noble gases, which are monatomic and hence have neither rotational nor vibrational modes, may enjoy perfectly elastic collisions up to pretty high temperatures.
{ "domain": "chemistry.stackexchange", "id": 4624, "tags": "molecules, intermolecular-forces" }
Hyperconjugation vs. steric hindrance: which is stronger?
Question: If we talk about a series of molecules: ethene, prop-1-ene, but-2-ene, etc., we see both hyperconjugation (which increases stability) and steric hindrance (which decreases stability). So which one is more powerful, i.e. which of these molecules will be most stable? Answer: It is contradictory to explain the stability of substituted alkenes using both hyperconjugation and steric hindrance at once. It is better to consider the heat of hydrogenation when ranking substituted alkenes: the stability of substituted alkenes is explained by hyperconjugation, while the relative stability of geometrical isomers (cis/trans) is explained by steric hindrance. The stability of an alkene can be determined by measuring the amount of energy released on hydrogenation of the molecule. Since the double bond is broken in this reaction, the energy released in hydrogenation reflects the energy of the double bond. This is a useful tool because heats of hydrogenation can be measured very accurately; ΔH° is usually around -30 kcal/mol for alkenes. Stability is simply a measure of energy: lower-energy molecules are more stable than higher-energy molecules. More substituted alkenes are more stable than less substituted ones due to hyperconjugation, and they have a lower heat of hydrogenation. In disubstituted alkenes, trans isomers are more stable than cis isomers because of reduced steric hindrance: in the trans isomer the bulky groups are on opposite sides, so there is less repulsion between them than in the cis isomer, where they are on the same side. Also, internal alkenes are more stable than terminal ones, as the isomers of butene show. Overall stability: trans-but-2-ene > cis-but-2-ene > propene > ethene.
{ "domain": "chemistry.stackexchange", "id": 4830, "tags": "stereochemistry, hyperconjugation" }
Conversion from octal numerical system to binary numerical system
Question: I know that in order to convert a numeral in base-8 to a numeral in base-2 I can write each octal digit as a binary word of 3 bits. I know that 2^3 = 8, which is exactly the base of the octal system. I just cannot understand the fundamental reason why it works. Is it just a coincidence? Answer: It's not a coincidence. It is a general result: for any number represented in a positive base $b$, its representation in base $b^k$ for a positive integer $k$ is obtained simply by grouping $k$ digits of its base-$b$ representation, from the least significant digit to the most significant digit. Take base $10$ for example. $427428$ in base $10$ is $(4,2,7,4,2,8)$. $427428$ in base $10^2$ is $(42,74,28)$. $427428$ in base $10^3$ is $(427,428)$. You can now find the analogy to your problem.

Edit: Proof of the above claim. Let a number $n$ be represented in the base-$b$ positional system as $n = (a_{m-1},a_{m-2},\space...\space,a_1,a_0)$ such that $0 \le a_i \lt b \space \forall i\in\{0,1,\space...\space,m-1\}$, where $m$ is the number of digits of $n$ in the base-$b$ representation. We wish to represent $n$ in base $B = b^k$ for some positive integer $k$. For simplicity of the argument, let $m$ be a multiple of $k$, i.e. $m = ck$ for some integer $c$. (If $m$ is not a multiple of $k$, the representation of $n$ in base $b$ can be prefixed with $0$'s until the number of digits is a multiple of $k$.) From the representation in base $b$, we can write $n$ as follows:

$n = a_{m-1} b^{m-1} + a_{m-2} b^{m-2} + \space ... \space + a_2 b^2 + a_1 b + a_0$
$\space\space= a_{ck-1} b^{ck-1} + a_{ck-2} b^{ck-2} + \space ... \space + a_2 b^2 + a_1 b + a_0$
$\space\space = a_{ck-1}b^{ck-1} + a_{ck-2}b^{ck-2} + \space ... \space + a_{(c-1)k+1} b^{(c-1)k+1} + a_{(c-1)k} b^{(c-1)k}$
$\space\space\space + a_{(c-1)k-1}b^{(c-1)k-1} + a_{(c-1)k-2}b^{(c-1)k-2} + \space ... \space + a_{(c-2)k+1} b^{(c-2)k+1} + a_{(c-2)k} b^{(c-2)k}$
$\space\space\space\vdots$
$\space\space\space + a_{2k-1}b^{2k-1} + a_{2k-2}b^{2k-2} + \space ... \space + a_{k+1} b^{k+1} + a_k b^k$
$\space\space\space + a_{k-1}b^{k-1} + a_{k-2}b^{k-2} + \space ... \space + a_1 b + a_0$

We can rearrange the last expression by taking out powers of $b^k$ from each sequence of $k$ terms, as follows:

$n = \{\space a_{ck-1}b^{k-1} + a_{ck-2}b^{k-2} + \space ... \space + a_{(c-1)k+1} b + a_{(c-1)k} \space\}\cdot b^{(c-1)k}$
$\space\space\space + \{\space a_{(c-1)k-1}b^{k-1} + a_{(c-1)k-2}b^{k-2} + \space ... \space + a_{(c-2)k+1} b + a_{(c-2)k} \space\}\cdot b^{(c-2)k}$
$\space\space\space\vdots$
$\space\space\space + \{\space a_{2k-1}b^{k-1} + a_{2k-2}b^{k-2} + \space ... \space + a_{k+1} b + a_k \space\}\cdot b^k$
$\space\space\space + \{\space a_{k-1}b^{k-1} + a_{k-2}b^{k-2} + \space ... \space + a_1 b + a_0 \space\}$
$\space\space = p_{c-1} B^{c-1} + p_{c-2} B^{c-2} + \space ... \space + p_1 B + p_0$

where $p_i = a_{(i+1)k-1} b^{k-1} + a_{(i+1)k-2} b^{k-2} + \space ... \space + a_{ik+1} b + a_{ik}$ $\space\space\forall i \in \{0,1,\space ...\space ,c-1\}$. We can see that this is a representation of $n$ in base $B$, i.e. base $b^k$, with $c$ digits: $n = (p_{c-1},p_{c-2},\space ... \space, p_1,p_0)$. For this to be a valid representation in base $B$, we need $\forall i:\space0 \le p_i \lt B$. We can verify this by finding the maximum value of any of the groups of terms in the above expression. Since $\forall i:\space 0 \le a_i \lt b$, the maximum value of any $a_i$ is $b-1$. Then for any $i$,

$p_i = a_{(i+1)k-1}b^{k-1} + a_{(i+1)k-2}b^{k-2} + \space ... \space + a_{ik+1} b + a_{ik}$
$\le (b-1)b^{k-1} + (b-1)b^{k-2} + \space ... \space + (b-1)b + (b-1)$
$= (b-1)(b^{k-1}+b^{k-2}+\space...\space+b+1)$
$= (b-1)\frac{b^k-1}{b-1}$
$= b^k - 1$

which is strictly less than $b^k = B$. Hence the claim is proved.
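The grouping argument is easy to exercise in code. A short Python sketch converting octal to binary by the 3-bit-per-digit rule (since 8 = 2**3) and checking the result against direct conversion:

```python
def octal_to_binary(octal_str):
    """Convert a base-8 numeral to base-2 by expanding each octal
    digit into a 3-bit group; this is the k-digit grouping rule
    (read in reverse) for b = 2, k = 3."""
    bits = ''.join(format(int(d, 8), '03b') for d in octal_str)
    return bits.lstrip('0') or '0'

# Cross-check against Python's own base conversion:
for s in ('0', '7', '52', '7341', '177777'):
    assert int(octal_to_binary(s), 2) == int(s, 8)

print(octal_to_binary('52'))   # 52 (octal) = 101010 (binary) = 42 (decimal)
```

Running the reverse direction, grouping a binary string into 3-bit chunks from the right and reading each chunk as one octal digit, recovers the original numeral, exactly as the proof predicts.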
{ "domain": "cs.stackexchange", "id": 14588, "tags": "binary, base-conversion, numeral-representations" }
How are the covariant Pauli matrices defined?
Question: When doing calculations with Weyl spinors, terms like $\theta\sigma^\mu\theta^\dagger$ appear. I know that for 3+1 spacetime dimensions, $\sigma^\mu = (\textbf{1}, \sigma^i)$ with $i=1,2,3$ the usual Pauli matrices. But what if we consider 1+1 or even $D$+1 dimensions? Answer: A covariant Pauli matrix $\sigma^\mu$ is defined as $$\sigma^\mu=e^\mu_a\,\sigma^a\, \tag{1}$$ where $e^\mu_a$ is a vielbein with a Lorentz index $\mu$, and a flat spacetime (tangent space) index $a$. $\sigma^a$ is a Pauli matrix in flat spacetime. $$\sigma^a=({\bf{1}},\,\sigma^i)\,\tag{2}$$ where ${\bf 1}$ is an identity matrix and $\sigma^i=(\sigma^1,\,\sigma^2,\,\sigma^3)$ are the usual Pauli matrices. The above definitions do not make any reference to the number of dimensions of the spacetime. Hence they are true in all spacetime dimensions. You just need to use the tangent space Pauli matrices $\sigma^a$ shown in $(2)$ as per the number of spacetime dimensions you are working in. This discussion might be helpful in getting $\sigma^a$ in higher dimensions.
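Equation (1) is just an index contraction, which can be sketched directly in code. A minimal pure-Python example for 3+1 dimensions, assuming the trivial (flat-spacetime) vielbein, in which case σ^μ reduces to σ^a:

```python
# Tangent-space Pauli matrices sigma^a = (1, sigma^i) of eq. (2),
# stored as 2x2 complex matrices.
I2 = [[1, 0], [0, 1]]
s1 = [[0, 1], [1, 0]]
s2 = [[0, -1j], [1j, 0]]
s3 = [[1, 0], [0, -1]]
sigma_a = [I2, s1, s2, s3]

# A vielbein e^mu_a; in flat Minkowski space it is the identity.
e = [[1 if mu == a else 0 for a in range(4)] for mu in range(4)]

# sigma^mu = e^mu_a sigma^a, eq. (1): contract over the flat index a.
sigma_mu = [
    [[sum(e[mu][a] * sigma_a[a][i][j] for a in range(4))
      for j in range(2)] for i in range(2)]
    for mu in range(4)
]

assert sigma_mu == sigma_a   # trivial vielbein reproduces eq. (2)
```

In a curved background one would replace `e` with the actual vielbein components, and in 1+1 dimensions simply truncate `sigma_a` to (1, σ¹) and use a 2x2 vielbein.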
{ "domain": "physics.stackexchange", "id": 94693, "tags": "conventions, spinors, dirac-matrices, clifford-algebra" }
Particle in a box/Quantum confinement/Surface plasmon resonance - What is the difference?
Question: I am fascinated by many colour phenomena. When reading about several of them, I have come across all the explanations mentioned in the title. For example: conjugated chains in molecules (specifically indicators): colour explained by the particle-in-a-box model (HOMO-LUMO transitions). Colours of quantum dots vary with the size of the particle, explained by quantum confinement. Colour of gold colloids: colour depends on particle size and shape, explained by the surface plasmon resonance frequency. My understanding of all of these phenomena is limited; as I am not myself able to solve the Schrödinger equation, I simply try to understand the results. My understanding of the last two explanations is particularly poor, and I struggle to point out the differences between the models. They all seem to describe electrons occupying a confined space, and the Schrödinger equation solved with these constraints gives discrete energy levels. The absorption/emission of photons as electrons move between these energy levels results in the observed colour. Edit: Can someone point out the essence of each of these models, in a way that makes the differences clear? Answer: Colors are determined by the energy levels of the light-absorbing or light-emitting systems. Quantum dots are (semiconductor) structures which are comparable in size to the de Broglie wavelength in the respective crystal. Depending on the finite size and shape (boundary conditions) of the quantum dots, you have standing electron waves in these quantum dots which correspond to a number of specific energy levels for the absorption and emission of light. In gold particles, a number of energy levels for the absorption of light similarly occurs due to standing surface plasmon-polariton waves. These waves consist of coupled oscillations of electron density and electromagnetic fields which can propagate along the interface of a metal with a negative real part of permittivity and a dielectric (air).
On small gold particles, like spheres, these surface waves can form standing waves (not related to de Broglie waves) with discrete frequencies which are related to absorption frequencies of light and thus determine the color of these particles.
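For the first model (particle in a box), the prediction is concrete enough to compute. A sketch estimating the HOMO→LUMO absorption wavelength of a free-electron chain, assuming an illustrative 1 nm box holding 6 π-electrons (these numbers are for demonstration, not a specific molecule):

```python
h = 6.626e-34      # Planck constant, J s
m = 9.109e-31      # electron mass, kg
c = 2.998e8        # speed of light, m/s

L = 1.0e-9         # assumed box (conjugated chain) length, m
n_electrons = 6    # assumed number of pi electrons
n_homo = n_electrons // 2          # two electrons per level

def E(n):
    """Particle-in-a-box level: E_n = n^2 h^2 / (8 m L^2)."""
    return n ** 2 * h ** 2 / (8 * m * L ** 2)

gap = E(n_homo + 1) - E(n_homo)    # = (2 n_homo + 1) h^2 / (8 m L^2)
wavelength = h * c / gap
print(f"absorption near {wavelength * 1e9:.0f} nm")
```

The gap scales as 1/L², so longer conjugated chains absorb at longer wavelengths; the quantum-dot case is the same confinement idea in a 3D semiconductor box, while the plasmon case involves standing charge-density waves rather than single-electron levels.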
{ "domain": "physics.stackexchange", "id": 35857, "tags": "quantum-mechanics, phonons" }
Hydride Donating Ability
Question: We know that some compounds can donate an $\ce{H-}$ anion to attain stability. What are the conditions under which this occurs, and how stable must the product be for this pathway to be viable? Answer: Many compounds are good hydride donors if the donation results in carbonyl bond formation or extended conjugation. A compound can also donate an $\ce{H-}$ ion if doing so results in the formation of a stable $\pi$ bond (e.g. the tert-butyl carbanion).
{ "domain": "chemistry.stackexchange", "id": 8733, "tags": "organic-chemistry, reaction-mechanism, reaction-control" }
Unable to install rmf_demos package
Question: I get this error below when i do colcon build with rmf_demos package: Starting >>> rmf_task_msgs Finished <<< rmf_workcell_msgs [1min 39s] Finished <<< building_map_msgs [1min 54s] Starting >>> building_map_tools Starting >>> rmf_gazebo_plugins Finished <<< building_map_tools [2.17s] Starting >>> rmf_demo_maps Starting >>> test_maps --- stderr: building_gazebo_plugins /home/ip3d/rmf_demos_ws/src/rmf/traffic_editor/building_gazebo_plugins/src/door.cpp:5:10: fatal error: gazebo_ros/node.hpp: No such file or directory #include <gazebo_ros/node.hpp> ^~~~~~~~~~~~~~~~~~~~~ compilation terminated. make[2]: *** [CMakeFiles/door.dir/src/door.cpp.o] Error 1 make[1]: *** [CMakeFiles/door.dir/all] Error 2 make[1]: *** Waiting for unfinished jobs.... /home/ip3d/rmf_demos_ws/src/rmf/traffic_editor/building_gazebo_plugins/src/slotcar.cpp:5:10: fatal error: gazebo_ros/node.hpp: No such file or directory #include <gazebo_ros/node.hpp> ^~~~~~~~~~~~~~~~~~~~~ compilation terminated. make[2]: *** [CMakeFiles/slotcar.dir/src/slotcar.cpp.o] Error 1 make[1]: *** [CMakeFiles/slotcar.dir/all] Error 2 make: *** [all] Error 2 --- Failed <<< building_gazebo_plugins [ Exited with code 2 ] Aborted <<< test_maps Aborted <<< rmf_traffic Aborted <<< rmf_gazebo_plugins Aborted <<< rmf_demo_maps Aborted <<< rmf_task_msgs Aborted <<< rmf_traffic_msgs Summary: 12 packages finished [2min 45s] 1 package failed: building_gazebo_plugins 6 packages aborted: rmf_demo_maps rmf_gazebo_plugins rmf_task_msgs rmf_traffic rmf_traffic_msgs test_maps 3 packages had stderr output: building_gazebo_plugins rmf_gazebo_plugins rmf_traffic 8 packages not processed But the gazebo_ros packages is installed. and the header file node.hpp can be found in /opt/ros/eloquent/include/gazebo_ros/. Kindly help to resolve this issue. Thanks. 
Originally posted by webvenky on ROS Answers with karma: 117 on 2020-04-15 Post score: 0 Answer: Adding ${gazebo_ros_INCLUDE_DIRECTORIES} in the target_include_directories solves the problem. Originally posted by webvenky with karma: 117 on 2020-04-16 This answer was ACCEPTED on the original site Post score: 0
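A sketch of what that fix might look like in the plugin's CMakeLists.txt (a hypothetical reconstruction: the target names door and slotcar come from the failing object files, and depending on how gazebo_ros exports its headers the variable may be the conventional ${gazebo_ros_INCLUDE_DIRS} rather than ${gazebo_ros_INCLUDE_DIRECTORIES}; use whichever your find_package actually defines):

```cmake
# Assumes gazebo_ros has been located so its result variables are populated
find_package(gazebo_ros REQUIRED)

# Make gazebo_ros/node.hpp visible to the targets that failed to compile
target_include_directories(door PUBLIC ${gazebo_ros_INCLUDE_DIRS})
target_include_directories(slotcar PUBLIC ${gazebo_ros_INCLUDE_DIRS})
```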
{ "domain": "robotics.stackexchange", "id": 34767, "tags": "gazebo, ros2" }
What force causes entropy to increase?
Question: What force causes entropy to increase? I realize that the second law of thermodynamics requires the entropy of a system to increase over time. For example, gas stored in a canister, if opened inside a vacuum chamber, will expand to fill the chamber. But I’m not clear on what force, exactly, is acting upon the molecules of gas that causes them to fly out of the opened canister and fill the chamber. Just looking for a concise explanation as to what is going on at the fundamental level, since obviously, the second law of thermodynamics is not a force and therefore does not cause anything to happen. Answer: This might not be as detailed as you want, but really all the second law says is that the most likely thing will happen. The reason we can associate certainty with something that seems random is that, for systems with such a large number of particles, states, etc., anything other than the most likely outcome is essentially so unlikely that we would have to wait for times longer than the age of the universe to observe it happen by chance. Therefore, as you say in your last paragraph, there is no force associated with entropy increase. It's just a statement of how systems will move towards more likely configurations. For the specific example you give of Joule expansion, the (classical) gas molecules are just moving around according to Newton's laws as they collide with each other and the walls of the container. There is no force "telling" the gas to expand to the rest of the container. It's just most likely that we will end up with a uniform gas concentration in the container.
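To put numbers on "most likely" (an added illustration, not part of the original answer): for a toy gas of N = 100 particles, each independently in the left or right half of the chamber, all 2^N arrangements are equally likely, yet near-even splits utterly dominate the count.

```python
from math import comb

N = 100                    # toy particle count; a real gas has ~1e23
total = 2 ** N             # equally likely left/right arrangements
p_all_left = 1 / total     # every particle crowded into one half
# probability of a roughly even split, 40:60 through 60:40
p_near_even = sum(comb(N, k) for k in range(40, 61)) / total
```

Already at N = 100, the all-in-one-half state has probability below 10^-30 while a near-even split has probability above 0.95; with ~10^23 particles the dominance becomes so extreme that deviations are simply never observed, which is all the second law asserts here.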
{ "domain": "physics.stackexchange", "id": 57476, "tags": "thermodynamics, forces, statistical-mechanics, entropy" }
a small noiseless motor with accurate RPM control?
Question: What kind of motor is suitable for the below requirements? 2700 rpm, and it needs to be accurate, without even a few rpm of deviation. I think the best approach is to use a hall effect sensor and adjust current/voltage on the fly? Or is there a better idea? It should be as noiseless as possible; the noise 12 V square fan motors make is perfect. Torque is not much of an issue: even though it is rotating a 100 g load on top, the load is supported by 4 bearings. What motor am I looking for? Answer: A brushless motor controlled by an electronic speed controller can produce an accurate and relatively powerful system. Most BLDC motors used for hobby r/c equipment would be suitable, and these are capable of supporting a significant axial and tangential load. When operating at low RPMs (2700 RPM is slow) good quality motors are virtually silent. The speed controller (ESC) can be controlled by PWM or a coded signal such as sBus, which would also provide an interface for a feedback system via a microcontroller. BLDC motors are specified by the can size and the revs per volt rating. For example, a 2216-900kv motor would be approximately 22 mm dia, with 16 mm magnets, rotating at 900 rpm per volt unloaded. Small motors operate on 2S-4S LiPo batteries, or 7 V to 14 V.
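A quick sanity check on the kv arithmetic (added; the 900kv figure is the answer's own example): the unloaded speed is kv times volts, so the target speed needs only about 3 V of equivalent throttle, comfortably inside what an ESC on a 2S-4S pack can regulate, with closed-loop feedback trimming out the sag under load.

```python
kv = 900               # rpm per volt, unloaded (the answer's 2216-900kv example)
target_rpm = 2700
volts_unloaded = target_rpm / kv   # ideal no-load estimate, before any load sag
```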
{ "domain": "engineering.stackexchange", "id": 2050, "tags": "motors" }
However what is so acidic about CaO and basic(no pun intended) about SiO2 while calculating basicity of slag?
Question: I have seen Basicity to be calculated as $\mathrm{B} = wt\%\:\ce{CaO}/wt\%\:\ce{SiO2}$ particularly in slag basicity/acidity calculations. Now I do not think that $\ce{CaO}$ is the most basic oxide that we have or $\ce{SiO2}$ is the most acidic oxide, so why are they calculated accordingly? On a lighter note, in my geology classes, I have been taught to classify minerals as acidic on the basis of Silica content. Are they related? Please explain with close reference to context. Answer: What is so acidic...and basic...? Nothing. This derives from the historical perception that silica in geological systems and in melts was in the form of silicic acid ($\ce{H4SiO4}$) and the alkali and alkaline-earth elements were considered as bases. We now know that in high-temperature silicate liquids (that eventually solidify into slag, or rocks, or glass) there is no acid–base chemistry, at least in the form that we know it from low-temperature aqueous acid–base chemistry. However, the usage of the terms acid and base in relation to rocks and slags still persists, primarily with older professors (who still teach) or in certain parts of the world, such as Russia. Nowadays geologists usually prefer the term "felsic" over the term "acidic" for silica-rich rocks. Nonetheless, the measure of $\ce{CaO}$ and $\ce{SiO2}$ contents in slags, glasses and rocks is still useful for various reasons. Therefore, the ratio is still calculated and used, and the old name persists. A slightly better parameter, called optical basicity, takes into account the compositional variability other than $\ce{CaO}$ and $\ce{SiO2}$, by adding other oxides such as $\ce{Na2O}$ and $\ce{K2O}$ into the calculation. It is widely used and has many applications.
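For illustration (an editorial addition with an invented slag analysis, not data from the question), the binary basicity is just the ratio of the two weight percentages:

```python
# Hypothetical slag analysis (wt%); the numbers are invented for illustration
composition = {"CaO": 42.0, "SiO2": 35.0, "MgO": 8.0, "Al2O3": 12.0}

B = composition["CaO"] / composition["SiO2"]   # binary basicity ratio
# Conventionally, B > 1 is read as a "basic" slag and B < 1 as an "acidic" one
```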
{ "domain": "chemistry.stackexchange", "id": 13453, "tags": "inorganic-chemistry, physical-chemistry, metal, metallurgy, geochemistry" }
Is the 0-1 Knapsack problem where value equals weight NP-complete?
Question: I have a problem which I suspect is NP-complete. It is easy to prove that it is in NP. My current train of thought revolves around using a reduction from knapsack, but it would result in instances of 0-1 knapsack with the value of every item being equal to its weight. Is this still NP-complete? Or am I missing something? Answer: Yes. This special case is the subset-sum problem, which is NP-hard; since it is also in NP, it is NP-complete.
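For intuition (an editorial sketch, not from the original answer): with value equal to weight, the 0-1 knapsack objective collapses to "find the largest subset sum not exceeding the capacity", which is exactly subset-sum. The pseudo-polynomial routine below does not contradict NP-hardness, since its running time grows with the numeric capacity rather than the input length.

```python
def best_subset_sum(weights, capacity):
    """0-1 knapsack with value == weight: the best attainable value is the
    largest subset sum that does not exceed the capacity (subset-sum)."""
    reachable = {0}                         # sums attainable so far
    for w in weights:
        reachable |= {s + w for s in reachable if s + w <= capacity}
    return max(reachable)

best = best_subset_sum([3, 34, 4, 12, 5, 2], 10)   # 3 + 5 + 2 = 10
```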
{ "domain": "cs.stackexchange", "id": 1274, "tags": "complexity-theory, np-complete, decision-problem, packing" }
alpha tubulin molecular weight problem
Question: Is there any academic reference that shows α-tubulin is around 50-55 kDa? The only thing I found is some data sheets from companies. I need the real reference. Answer: The paper titled Identification of α-tubulin as a granzyme B substrate during CTL-mediated apoptosis mentions it as 51 kDa and the paper titled Carboxy-terminal amino acid sequence of α-tubulin from porcine brain mentions it as 55,000 Da. Hope that suffices.
{ "domain": "biology.stackexchange", "id": 2370, "tags": "biochemistry, homework, proteins" }
Why the statement "there exist at least one bound state for negative/attractive potential" doesn't hold for 3D case?
Question: Previously I thought this was a universal theorem, for one can prove it in the one-dimensional case using the variational principle. However, today I'm doing a homework problem involving a potential like this:$$V(r)=-V_0\quad(r<a)$$$$ V(r)=0\quad(r>a)$$ and found that there is no bound state when $V_0a^2<\pi^2\hbar^2/8m$. So what's the condition that we have at least one bound state for 3D and 2D? Answer: The precise theorem is the following, cf. e.g. Ref. 1. Theorem 1: Given a non-positive (=attractive) potential $V\leq 0$ with negative spatial integral $$ v~:=~\int_{\mathbb{R}^n}\! d^n r~V({\bf r}) ~<~0 ,\tag{1} $$ then there exists a bound state$^1$ with energy $E<0$ for the Hamiltonian $$\begin{align} H~=~&K+V, \cr K~=~& -\frac{\hbar^2}{2m}{\bf \nabla}^2\end{align}\tag{2} $$ if the spatial dimension $\color{Red}{n\leq 2}$ is smaller than or equal to two. Theorem 1 does not hold for dimensions $n\geq3$. E.g. it can be shown that already a spherically symmetric finite well potential does not$^2$ always have a bound state for $n\geq3$. Proof of theorem 1: Here we essentially use the same proof as in Ref. 2, which relies on the variational method. We can for convenience use the constants $c$, $\hbar$ and $m$ to render all physical variables dimensionless, e.g. $$\begin{align} V~\longrightarrow~& \tilde{V}~:=~\frac{V}{mc^2}, \cr {\bf r}~\longrightarrow~&\tilde{\bf r}~:=~ \frac{mc}{\hbar}{\bf r},\end{align}\tag{3} $$ and so forth. The tildes are dropped from the notation from now on. (This effectively corresponds to setting the constants $c$, $\hbar$ and $m$ to 1.) Consider a 1-parameter family of trial wavefunctions $$\begin{align} \psi_{\varepsilon}(r)~=~&e^{-f_{\varepsilon}(r)}~\nearrow ~e^{-1}\cr &\text{for}\quad \varepsilon ~\searrow ~0^{+} , \end{align}\tag{4}$$ where $$\begin{align} f_{\varepsilon}(r)~:=~& (r+1)^{\varepsilon} ~\searrow ~1\cr &\text{for}\quad \varepsilon ~\searrow ~0^{+}\end{align} \tag{5} $$ $r$-pointwise. 
Here the $\nearrow$ and $\searrow$ symbols denote increasing and decreasing limit processes, respectively. E.g. eq. (4) says in words that for each radius $r \geq 0$, the function $\psi_{\varepsilon}(r)$ approaches monotonically the limit $e^{-1}$ from below when $\varepsilon$ approaches monotonically $0$ from above. It is easy to check that the wavefunction (4) is normalizable: $$\begin{align}0~\leq~~&\langle\psi_{\varepsilon}|\psi_{\varepsilon} \rangle\cr ~=~~& \int_{\mathbb{R}^n} d^nr~|\psi_{\varepsilon}(r)|^2 \cr ~\propto~~& \int_{0}^{\infty} \! dr ~r^{n-1} |\psi_{\varepsilon}(r)|^2\cr ~\leq~~& \int_{0}^{\infty} \! dr ~(r+1)^{n-1} e^{-2f_{\varepsilon}(r)} \cr ~\stackrel{f=(1+r)^{\varepsilon}}{=}&~ \frac{1}{\varepsilon} \int_{1}^{\infty}\!df~f^{\frac{n}{\varepsilon}-1} e^{-2f}\cr ~<~~&\infty,\qquad \varepsilon~> ~0.\end{align}\tag{6} $$ The kinetic energy vanishes $$\begin{align} 0~\leq~~&\langle\psi_{\varepsilon}|K|\psi_{\varepsilon} \rangle \cr ~=~~& \frac{1}{2}\int_{\mathbb{R}^n}\! d^nr~ |{\bf \nabla}\psi_{\varepsilon}(r) |^2\cr ~=~~& \frac{1}{2}\int_{\mathbb{R}^n}\! d^nr~ \left|\psi_{\varepsilon}(r)\frac{df_{\varepsilon}(r)}{dr} \right|^2 \cr ~\propto~~& \varepsilon^2\int_{0}^{\infty}\! dr~ r^{n-1} (r+1)^{2\varepsilon-2}|\psi_{\varepsilon}(r)|^2\cr ~\leq~~&\varepsilon^2 \int_{0}^{\infty} \!dr ~ (r+1)^{2\varepsilon+n-3}e^{-2f_{\varepsilon}(r)}\cr ~\stackrel{f=(1+r)^{\varepsilon}}{=}&~ \varepsilon \int_{1}^{\infty}\! 
df ~ f^{1+\frac{\color{Red}{n-2}}{\varepsilon}} e^{-2f}\cr ~\searrow ~~&0\quad\text{for}\quad \varepsilon ~\searrow ~0^{+},\end{align} \tag{7}$$ when $\color{Red}{n\leq 2}$, while the potential energy $$\begin{align}0~\geq~&\langle\psi_{\varepsilon}|V|\psi_{\varepsilon} \rangle\cr ~=~& \int_{\mathbb{R}^n} \!d^nr~|\psi_{\varepsilon}(r)|^2~V({\bf r}) \cr ~\searrow ~& e^{-2}\int_{\mathbb{R}^n} \!d^nr~V({\bf r})~<~0 \cr &\text{for}\quad \varepsilon ~\searrow ~0^{+} ,\end{align}\tag{8} $$ remains non-zero due to assumption (1) and Lebesgue's monotone convergence theorem. Thus by choosing $ \varepsilon \searrow 0^{+}$ smaller and smaller, the negative potential energy (8) beats the positive kinetic energy (7), so that the average energy $\frac{\langle\psi_{\varepsilon}|H|\psi_{\varepsilon}\rangle}{\langle\psi_{\varepsilon}|\psi_{\varepsilon}\rangle}<0$ eventually becomes negative for the trial function $\psi_{\varepsilon}$. A bound state$^1$ can then be deduced from the variational method. Note in particular that it is absolutely crucial for the argument in the last line of eq. (7) that the dimension $\color{Red}{n\leq 2}$. $\Box$ Simpler proof for $\color{Red}{n<2}$: Consider an un-normalized (but normalizable) Gaussian test/trial wavefunction $$\psi(x)~:=~e^{-\frac{x^2}{2L^2}}, \qquad L~>~0.\tag{9}$$ Normalization must scale as $$||\psi|| ~\stackrel{(9)}{\propto}~ L^{\frac{n}{2}}.\tag{10}$$ The normalized kinetic energy scales as $$0~\leq~\frac{\langle\psi| K|\psi \rangle}{||\psi||^2} ~\propto ~ L^{-2}\tag{11}$$ for dimensional reasons. Hence the un-normalized kinetic energy scales as $$0~\leq~\langle\psi| K|\psi \rangle ~\stackrel{(10)+(11)}{\propto} ~ L^{\color{Red}{n-2}}.\tag{12}$$ Eq. (12) means that $$\begin{align}\exists L_0>0 \forall L\geq L_0:~~0~\leq~& \langle\psi|K|\psi\rangle\cr ~ \stackrel{(12)}{\leq} ~&-\frac{v}{3}~>~0\end{align}\tag{13}$$ if $\color{Red}{n<2}$. 
The un-normalized potential energy tends to a negative constant $$\begin{align}\langle\psi| V|\psi \rangle ~\searrow~&\int_{\mathbb{R}^n} \! \mathrm{d}^nx ~V(x)~=:~v~<~0\cr &\quad\text{for}\quad L~\to~ \infty.\end{align}\tag{14}$$ Eq. (14) means that $$\exists L_0>0 \forall L\geq L_0:~~ \langle\psi| V|\psi\rangle ~\stackrel{(14)}{\leq}~ \frac{2v}{3} ~<~ 0.\tag{15}$$ It follows that the average energy $$\begin{align}\frac{\langle\psi|H|\psi\rangle}{||\psi||^2} ~=~~&\frac{\langle\psi|K|\psi\rangle+\langle\psi|V|\psi\rangle}{||\psi||^2}\cr ~\stackrel{(13)+(15)}{\leq}&~ \frac{v}{3||\psi||^2}~<~0\end{align}\tag{16}$$ of trial function must be negative for a sufficiently big finite $L\geq L_0$ if $\color{Red}{n<2}$. Hence the ground state energy must be negative (possibly $-\infty$). $\Box$ References: K. Chadan, N.N. Khuri, A. Martin and T.T. Wu, Bound States in one and two Spatial Dimensions, J.Math.Phys. 44 (2003) 406, arXiv:math-ph/0208011. K. Yang and M. de Llano, Simple variational proof that any two‐dimensional potential well supports at least one bound state, Am. J. Phys. 57 (1989) 85. -- $^1$ The spectrum could be unbounded from below. $^2$ Readers familiar with the correspondence $\psi_{1D}(r)=r\psi_{3D}(r)$ between 1D problems and 3D spherically symmetric $s$-wave problems in QM may wonder why the even bound state $\psi_{1D}(r)$ that always exists in the 1D finite well potential does not yield a corresponding bound state $\psi_{3D}(r)$ in the 3D case? Well, it turns out that the corresponding solution $\psi_{3D}(r)=\frac{\psi_{1D}(r)}{r}$ is singular at $r=0$ (where the potential is constant), and hence must be discarded.
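Returning to the concrete well in the question, a quick numerical cross-check (an editorial addition, with $\hbar = m = a = 1$): the s-wave threshold $V_0a^2=\pi^2\hbar^2/8m$ corresponds to the interior solution $\sin(kr)$ just flattening out at the well edge, because the $E\to 0^-$ exterior solution becomes a constant; the quarter-wave condition $ka=\pi/2$ then reproduces the quoted threshold exactly.

```python
import math

# hbar = m = a = 1 throughout
V0_threshold = math.pi ** 2 / 8      # the question's V0 * a^2 = pi^2 hbar^2 / (8 m)
k = math.sqrt(2 * V0_threshold)      # interior wavenumber as binding energy E -> 0-

# psi_1D(r) = sin(k r) must have zero slope at r = a = 1 to match the
# constant (kappa -> 0) exterior solution: the quarter-wave condition
quarter_wave = k * 1.0
```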
{ "domain": "physics.stackexchange", "id": 17305, "tags": "quantum-mechanics, mathematical-physics, wavefunction, schroedinger-equation, potential" }
Calculate exact date and time from position of the sun - 88° degrees
Question: I am currently working on an astrology project and need to figure out the exact date and time based on the sun's position of birth - 88° degree (approx. 89 days) before the moment of birth. I am using the swisseph python library (https://astrorigin.com/pyswisseph/pydoc/index.html) to calculate planet postions. For example: the following date (1991.01.07 09:27:00 UT) gives the longitude of the sun 286.57609582639475. Now I would need a formula to calculate the UT time from this longitude of sun - 88° degree. Does anybody have an idea? Answer: You could try successive approximation using your Python library. Put in your best guess date/time and keep adjusting it to get 88 degrees.
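The suggested successive approximation can be sketched as a bisection on the Julian day (an editorial sketch: the helper names are hypothetical, and the mock ephemeris below is a crude linear stand-in at the Sun's mean motion of about 0.9856 deg/day; in practice sun_longitude would wrap a pyswisseph call such as swe.calc_ut(jd, swe.SUN)).

```python
def find_jd_for_longitude(sun_longitude, target_deg, jd_lo, jd_hi, tol=1e-7):
    """Bisect for the Julian day at which sun_longitude(jd) == target_deg.

    Assumes the longitude increases monotonically on [jd_lo, jd_hi],
    i.e. the interval does not straddle the 360-degree wrap-around.
    """
    while jd_hi - jd_lo > tol:
        mid = 0.5 * (jd_lo + jd_hi)
        if sun_longitude(mid) < target_deg:
            jd_lo = mid
        else:
            jd_hi = mid
    return 0.5 * (jd_lo + jd_hi)

# Crude stand-in ephemeris for illustration only (~0.9856 deg/day mean motion)
def mock_sun_longitude(jd):
    return 280.0 + 0.9856 * jd

jd = find_jd_for_longitude(mock_sun_longitude, 290.0, 0.0, 100.0)
```

Near the target longitude minus 88 degrees, about 30 bisection steps pin the time down to well under a second; the 360-degree wrap can be handled by unwrapping the longitude before bisecting.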
{ "domain": "astronomy.stackexchange", "id": 5779, "tags": "observational-astronomy, python" }
Why can we assume a current direction when using kirchhoff's circuit law?
Question: Reference direction: When analyzing electrical circuits, the actual direction of current through a specific circuit element is usually unknown. Consequently, each circuit element is assigned a current variable with an arbitrarily chosen reference direction. When the circuit is solved, the circuit element currents may have positive or negative values. A negative value means that the actual direction of current through that circuit element is opposite that of the chosen reference direction. Why can we assume a direction and get the correct value and sign? Is there a simple proof of this fact? Answer: Is there a simple proof of this fact? Sure; when you insert an ammeter into a circuit branch, there are two choices of polarity which amount to choosing a reference direction. If two identical ammeters, connected in series with opposite polarity, are inserted into a circuit branch, they will measure the same current but give the opposite sign since each has a different reference direction. However, they both give the same information. For one ammeter, current enters the positive lead and this ammeter gives a positive reading. For the other ammeter, current exits the positive lead and this ammeter gives a negative reading. In either case, the ammeter reading gives you the correct direction of the current.
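The two-ammeter argument can be mirrored in a one-line calculation (an added sketch): solving the same single loop with either reference direction yields values differing only in sign, and both carry the same information.

```python
# One-loop circuit: a 9 V source driving a 3 ohm resistor
V, R = 9.0, 3.0

# Reference direction chosen WITH the source's push: KVL reads -V + I*R = 0
I_forward = V / R          # +3.0 A: current really flows in this direction

# Reference direction chosen AGAINST the push: KVL reads V + I*R = 0
I_reverse = -V / R         # -3.0 A: the sign flags the opposite actual direction
```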
{ "domain": "physics.stackexchange", "id": 36017, "tags": "electric-circuits, conventions, coordinate-systems" }
getting type error on nav topic from move base
Question: Update: So I think my question is if ekf is a combined odometry why does move base not accept it in its published type , as a pose. How would I use the combined odometry if not in move base. Is this what the RoadMap part of the ekf webpage is talking about,yes? Any ideas on what this error is means? Is this ekf node? [ERROR] [1386956256.093192603, 0.774000000]: Client [/move_base] wants topic /rrbot_combined_odom/odom to have datatype/md5sum [nav_msgs/Odometry/cd5e73d190d741a2f92e81eda573aca7], but our version has [geometry_msgs/PoseWithCovarianceStamped/953b798c0f514ff060a53a3498ce6246]. Dropping connection. my launch file is: <launch> <!-- Use simulation time from gazebo --> <rosparam param="use_sim_time=true"/> <!-- Load robot description here ; used by rviz / rqt_gui --> <param name="robot_description" command="$(find xacro)/xacro.py '$(find rrbot_description)/urdf/rrbot.xacro'" /> <node pkg="tf" type="static_transform_publisher" name="link1_broadcaster" args="-.1 0 .1 0 0 0 base_link tower_link 10" /> <node pkg="tf" type="static_transform_publisher" name="link2_broadcaster" args="-.1 0 .2 0 0 0 tower_link hokuyo_frame 10" /> <node pkg="tf" type="static_transform_publisher" name="link3_broadcaster" args="-.1 0 .2 -1.57 0 -1.57 tower_link camera_frame 10" /> <node pkg="tf" type="static_transform_publisher" name="link4_broadcaster" args="-.13 -.13 .1 0 0 0 base_link left_wheel 10" /> <node pkg="tf" type="static_transform_publisher" name="link5_broadcaster" args="-.13 .13 .1 0 0 0 base_link right_wheel 10" /> <!-- Start rqt : note rrbot.rviz file must reside in launch dir --> <node name="rqt_gui" pkg="rqt_gui" type="rqt_gui" respawn="true"> </node> <!-- load the controllers --> <!-- and Load joint controller configurations from YAML file to parameter server --> <rosparam file="$(find rrbot_control)/config/rrbot_control.yaml" command="load"/> <node name="controller_spawner" pkg="controller_manager" type="spawner" respawn="true" output="screen" ns="/rrbot" 
args="--namespace=/rrbot joint1_position_controller joint2_position_controller"/> <!-- ******************************************************************************************** --> <!-- start mapping server and pass odom frame : creates map from laser --> <node name="gmapping_node" pkg="gmapping" type="slam_gmapping" output="screen" respawn="true"> <param name="occ_thresh" value="0.05"/> <param name="map_update_interval" value="0.05"/> </node> <!-- Moves robot based on goal command : Listen for goals posted as twist to robot base --> <node pkg="move_base" type="move_base" name="move_base" output="screen" respawn="true"> <param name="controller_frequency" type="double" value="50.0" /> <rosparam file="$(find rrbot_control)/config/move_base/costmap_common_params.yaml" command="load" ns="global_costmap" /> <rosparam file="$(find rrbot_control)/config/move_base/costmap_common_params.yaml" command="load" ns="local_costmap" /> <rosparam file="$(find rrbot_control)/config/move_base/local_costmap_params.yaml" command="load"/> <rosparam file="$(find rrbot_control)/config/move_base/global_costmap_params.yaml" command="load"/> <rosparam file="$(find rrbot_control)/config/move_base/base_local_planner_params.yaml" command="load"/> <remap from="odom" to="/rrbot_combined_odom/odom"/> </node> <!-- Take gazebo model state and setup map-odom-base_link-senors transforms --> <node pkg="rrbot_control" type="robot_odometry" name="rrbot_odometry" output="screen" respawn="true"/> <node pkg="robot_pose_ekf" type="robot_pose_ekf" name="rrbot_combined_odom"> <param name="output_frame" value="odom"/> <param name="freq" value="10.0"/> <param name="sensor_timeout" value="1.0"/> <param name="odom_used" value="true"/> <param name="vo_used" value="false"/> <param name="imu_used" value="true"/> <param name="debug" value="true"/> <param name="self_diagnose" value="true"/> <remap from="imu_data" to="/rrbot/imu_data"/> </node> </launch> Originally posted by rnunziata on ROS Answers with karma: 713 
on 2013-12-13 Post score: 0 Answer: I added a translation of the odom_combined from a PoseWithCovarianceStamped to navigation odometry. Originally posted by rnunziata with karma: 713 on 2013-12-15 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by yalan on 2017-10-16: Hi rnunziata, can you tell us how do you add a translation of the odom_combined from a PoseWithCovarianceStamped to sensor_msgs/Odometry? Thanks in advance and best regards. Comment by rnunziata on 2017-10-16: Sorry .... that project is long gone now....I did not save the code.
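Since the original translation node is lost, here is a hypothetical reconstruction of its core (not the author's code): subscribe to the geometry_msgs/PoseWithCovarianceStamped topic, copy its header and pose into a nav_msgs/Odometry (both types carry the pose as a PoseWithCovariance), and republish on the topic move_base expects. The field-copying logic is shown with duck-typed stand-ins so it runs without ROS installed:

```python
from types import SimpleNamespace

def pose_to_odom(pose_msg, odom_msg, child_frame_id="base_link"):
    """Copy a PoseWithCovarianceStamped-shaped message into an Odometry-shaped one.

    The EKF output carries no velocities, so odom_msg.twist is left untouched
    (zeros by default in a freshly constructed nav_msgs/Odometry).
    """
    odom_msg.header = pose_msg.header
    odom_msg.child_frame_id = child_frame_id
    odom_msg.pose = pose_msg.pose   # both types share a PoseWithCovariance field
    return odom_msg

# Duck-typed stand-ins exercising the logic without ROS message classes:
pose = SimpleNamespace(header=SimpleNamespace(frame_id="odom", stamp=0),
                       pose=SimpleNamespace(pose=None, covariance=[0.0] * 36))
odom = SimpleNamespace(header=None, child_frame_id="", pose=None, twist=None)
odom = pose_to_odom(pose, odom)
```

In a real node, the function would sit inside a subscriber callback and the filled Odometry would be published on the remapped /rrbot_combined_odom/odom topic.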
{ "domain": "robotics.stackexchange", "id": 16453, "tags": "ros" }
Why is allolactose the LacI inducer?
Question: For what reason(s) is allolactose, instead of lactose, the "natural" inducer of lac operon repressor? Answer: When lactose is present in a cell, some of it is enzymatically converted by $\beta$-galactosidase from the $\beta(1,4)$ linkage (typical of lactose) to the $\beta(1,6)$ glycosidic linkage (becoming allolactose). Allolactose and other analogues can then bind LacI to induce the appropriate conformational change and unbind the lac operator (one such review here). Why does allolactose act as a signal of lactose presence instead of just lactose? From the perspective of the lac operon, there is no operational difference between lactose and allolactose -- either way one molecule binds LacI and cannot be metabolized while bound. The full paradox is explained by Edgel and summarized by well-known biochemistry textbook author and scientist, Larry Moran, on his personal blog. The short version is that neither allolactose nor lactose are the intended substrates for $\beta$-gal and instead a different sugar entirely is the "natural" inducer and substrate. Juers DH, Matthews BW, Huber RE. LacZ β-galactosidase: structure and function of an enzyme of historical and molecular biological importance. Protein Sci. 2012 1792-807. Egel R. The 'lac' operon: an irrelevant paradox? Trends Genet. 1988 Feb;4(2):31.
{ "domain": "biology.stackexchange", "id": 1106, "tags": "biochemistry, molecular-biology, molecular-evolution" }
Could two identical stars revolve around each other in a common orbit if we only account for Newtonian physics?
Question: Both a parent star and its planet revolve around the center of mass of the system, the reason we see stellar wobble. But if we take this to be true, which it is, there can be a configuration in which two identical stars revolve around their center of mass, in a common orbit. What I find astonishing in this case is that they will be revolving around something having no mass at all, in a shared orbit, like two runners trying to catch each other but never quite being able to do so. In that case, in a purely Newtonian system, the centripetal force must be provided by the gravitational attraction between the stars. Now if I assume both the stars to have a mass $m$, at a separation of $d$ from each other, revolving in the common orbit diametrically opposite to each other, $$\frac{mv^2}{r} = \frac{Gm^2}{d^2}\; {\rm where}\; r = \frac{d}{2}$$ solving for $v$, we get a velocity where a stable orbit is formed, $v = \sqrt{\frac{Gm}{2d}}$ Note: I have failed to find such an expression, I am not certain about the math. I did find a system described in such a way that two stars orbit a common point in separate ellipses though. Is my conventional wisdom correct, or is my derivation of this expression somehow intrinsically flawed? Does this expression already exist? And I suppose the odds of observing such a system are 'astronomical', but has something like it ever been observed? Answer: Yes, that can happen. It is somewhat realized in positronium, a bound electronic state where an electron and a positron revolve around each other. Both have the same mass, so they could (classically) have the same circular orbit. With Newton, you have an attractive force for two equal bodies of mass $m$ of $$ F = G \frac{m^2}{d^2}. $$ The centripetal force that would be needed in an orbit of radius $d/2$ is $$ F = m \frac{v^2}{d/2} = m \omega^2 r, $$ where $v$ is the tangential velocity and $\omega$ the angular frequency. 
Set them equal and you get: $$ G \frac{m^2}{d^2} = 2 m \frac{v^2}{d} \iff G \frac{m}{2d} = v^2 \iff v = \sqrt{\frac{Gm}{2d}}, $$ which is what you obtained. You can get a deeper understanding if you look at the Jacobi method for the two-body problem. There, you separate the center-of-mass motion from the relative motion. You define a relative distance $d$ between the two bodies. Then you need the reduced mass $$ \mu := \frac{m_1 m_2}{m_1 + m_2}, $$ which is just $\mu = m/2$ in the case of two equal masses. Then your problem is not two bodies in their mutual gravitational field, but one fictitious body of reduced mass $\mu$ that orbits at radius $d$ and feels the full attractive force $$ G \frac{m_1 m_2}{d^2}~=~G \frac{m^2}{d^2}. $$ Its speed is the relative speed $v_{\rm rel} = 2v$, because the two stars move in opposite directions at $v$ each. Therefore, the result is the exact same. The derivation now goes: $$ G \frac{m^2}{d^2} = \mu \frac{v_{\rm rel}^2}{d} = \frac{m}{2}\frac{(2v)^2}{d} = 2 m \frac{v^2}{d} \iff G \frac{m}{2d} = v^2, $$ which I had before.
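The derived speed can also be sanity-checked numerically (an added illustration, in units G = m = d = 1): launching the two stars tangentially at $v=\sqrt{Gm/2d}$ and integrating Newton's equations with a symplectic Euler step keeps their separation essentially constant over a couple of orbits.

```python
import math

G = m = d = 1.0
v = math.sqrt(G * m / (2 * d))            # the derived orbital speed
r1, r2 = [-d / 2, 0.0], [d / 2, 0.0]      # stars opposite the massless centre
v1, v2 = [0.0, -v], [0.0, v]
dt, steps = 1e-4, 100_000                 # roughly two orbital periods

for _ in range(steps):
    dx, dy = r2[0] - r1[0], r2[1] - r1[1]
    dist = math.hypot(dx, dy)
    a = G * m / dist ** 2                 # acceleration magnitude on each star
    ax, ay = a * dx / dist, a * dy / dist
    v1[0] += ax * dt; v1[1] += ay * dt    # star 1 accelerates toward star 2
    v2[0] -= ax * dt; v2[1] -= ay * dt
    r1[0] += v1[0] * dt; r1[1] += v1[1] * dt
    r2[0] += v2[0] * dt; r2[1] += v2[1] * dt

separation = math.hypot(r2[0] - r1[0], r2[1] - r1[1])
```

By symmetry the center of mass stays put at the massless point the stars circle around, and the separation stays at d up to the integrator's small error.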
{ "domain": "physics.stackexchange", "id": 15160, "tags": "classical-mechanics, astronomy, newtonian-gravity, orbital-motion, centripetal-force" }
Download a zip archive and extract one file from it
Question: I wrote a function that downloads a file https://www.ohjelmointiputka.net/tiedostot/junar.zip if it is not already downloaded, unzips it and returns the content of junar1.in found in this zip. I have PEP8 complaints about the length of lines that I would like to fix. Is there a way to make the code more readable? My code : import os.path import urllib.request import shutil import zipfile def download_and_return_content(): if not os.path.isfile('/tmp/junar.zip'): url = 'https://www.ohjelmointiputka.net/tiedostot/junar.zip' with urllib.request.urlopen(url) as response, open('junar.zip', 'wb') as out: data = response.read() # a `bytes` object out.write(data) shutil.move('junar.zip','/tmp') with zipfile.ZipFile('/tmp/junar.zip', 'r') as zip_ref: zip_ref.extractall('/tmp/') with open('/tmp/junar1.in') as f: return f.read() Answer: Let's start refactoring/optimizations: urllib should be replaced with the requests library, which is the de facto standard for making HTTP requests in Python and has a rich and flexible interface. instead of moving from an intermediate location (shutil.move('junar.zip','/tmp')) we can just save the downloaded zip file to a destination path with open('/tmp/junar.zip', 'wb') as out decompose the initial function into 2 separate routines: one for downloading the zipfile from a specified location/url and the other for reading a specified (passed as an argument) member/inner file of the zipfile read from zipfile.ZipFile.open directly to avoid intermediate extraction. 
Otherwise, the zipfile contents would have to be extracted first, and the extracted files then read as regular files (with the "reading" function adjusted accordingly). From theory to practice: import os.path import requests import zipfile def download_zipfile(url): if not os.path.isfile('/tmp/junar.zip'): with open('/tmp/junar.zip', 'wb') as out: out.write(requests.get(url).content) def read_zipfile_item(filename): with zipfile.ZipFile('/tmp/junar.zip') as zip_file: with zip_file.open(filename) as f: return f.read().decode('utf8') # Testing url = 'https://www.ohjelmointiputka.net/tiedostot/junar.zip' download_zipfile(url=url) print(read_zipfile_item('junar1.in')) The actual output (while the input url is accessible): 10 6 1 4 10 7 2 3 9 5 8
{ "domain": "codereview.stackexchange", "id": 36381, "tags": "python, python-3.x, file" }
What are kernel initializers and what is their significance?
Question: I was looking at code and found this: model.add(Dense(13, input_dim=13, kernel_initializer='normal', activation='relu')) I was keen to know about kernel_initializer but wasn't able to understand its significance. Answer: The neural network needs to start with some weights and then iteratively update them to better values. The term kernel_initializer is a fancy term for which statistical distribution or function to use for initialising the weights. In the case of a statistical distribution, the library will generate numbers from that distribution and use them as starting weights. For example, in the above code, a normal distribution will be used to initialise the weights. You can use other functions (constants like 1s or 0s) and distributions (uniform) too. All possible options are documented here. Additional explanation: The term kernel is a carryover from other classical methods like SVM. The idea is to transform data in a given input space to another space where the transformation is achieved using kernel functions. We can think of neural network layers as non-linear maps doing these transformations, so the term kernels is used.
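What the initializer actually produces can be sketched without Keras (an editorial illustration using only the standard library; the stddev 0.05 default for 'normal' and the glorot_uniform bound are stated from the Keras docs and should be treated as assumptions here):

```python
import math
import random

random.seed(0)                      # reproducible starting weights
fan_in, fan_out = 13, 13            # matches Dense(13, input_dim=13)

# 'normal': Gaussian starting weights; Keras's RandomNormal defaults to
# mean 0.0 and stddev 0.05 (the exact stddev is an assumption here)
w_normal = [[random.gauss(0.0, 0.05) for _ in range(fan_out)]
            for _ in range(fan_in)]

# 'glorot_uniform' (the usual Dense default): uniform draws bounded by
# sqrt(6 / (fan_in + fan_out)), scaling the spread to the layer size
limit = math.sqrt(6 / (fan_in + fan_out))
w_glorot = [[random.uniform(-limit, limit) for _ in range(fan_out)]
            for _ in range(fan_in)]
```

Either way the network starts from small random numbers; the choice mainly affects how well gradients flow in the first epochs of training.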
{ "domain": "datascience.stackexchange", "id": 10217, "tags": "machine-learning, python, neural-network, deep-learning, keras" }
What are typical ranges of rainfall drop sizes, speeds, and areal or volume densities?
Question: I'd like to look into the possibility of using a Raspberry Pi and the PiCam module to detect rain, and to try to make some measurements of droplet size and rate. This could be done using an LED flash or a (modified/pulsed) laser pointer equipped with a fan-out element, synchronizing the flash/pulse with the Pi camera's electronic shutter for a "freeze frame" effect, followed by some Python image analysis (e.g. PIL). This is for fun, and not meant to be a quantitative rain gauge necessarily. In order to better estimate the challenge, I'd like to get some feel for the distribution of sizes, speeds, and either areal rates (drops per sec per square meter) or number densities (drops per cubic meter). I can convert between various units and histograms, but I don't know where to find a good survey to understand over what ranges these can vary. See also Will long-term viewing of a sunny sky hurt the Pi Camera? for pics of a cool Pi sky camera by @ThomasJacquin as described here. Edit: per this comment I should point out my Raspberry Pi will be within a few meters of the Earth's surface, where I can keep an eye on it. Answer: I listed Harnessing of Kinetic Energy of Raindrops as a reference when I answered one of your recent questions, Ways to make a “How hard is it raining?” detector for personal use?. Serendipitously, that reference has information relevant to this question. The sizes and velocities of rain drops vary according to the type of rainfall event. For Light Stratiform Rain, light rain had drops 0.5 mm in size (diameter) and a terminal velocity of 2.06 m/s. The size of large drops is 2.0 mm and their terminal velocity is 6.49 m/s. For Moderate Stratiform Rain, light rain had drops 1.0 mm in size and a terminal velocity of 4.03 m/s. The size of large drops is 2.6 mm and their terminal velocity is 7.57 m/s. For Heavy Thundershowers, light rain had drops 1.2 mm in size and a terminal velocity of 4.64 m/s.
The size of large drops is 4.0 mm and their terminal velocity is 8.83 m/s. When rain drops grow larger than about 4.5 mm in size they split in two, as illustrated by this diagram, which relates rain drop shape to its size. Below 1 mm in diameter, rain drops "are almost spherical".
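For back-of-the-envelope planning (e.g. what a camera or a piezo sensor would have to resolve), the quoted size/velocity pairs can be turned into per-drop kinetic energies, assuming spherical drops of liquid water at 1000 kg/m³. The category labels are mine; the numbers are the ones quoted above.

```python
import math

# (diameter mm, terminal velocity m/s) pairs quoted above
DROPS = {
    "light stratiform, small": (0.5, 2.06),
    "light stratiform, large": (2.0, 6.49),
    "moderate stratiform, small": (1.0, 4.03),
    "moderate stratiform, large": (2.6, 7.57),
    "thundershower, small": (1.2, 4.64),
    "thundershower, large": (4.0, 8.83),
}

def drop_energy_uj(diameter_mm, velocity_ms, rho=1000.0):
    """Kinetic energy of a spherical water drop, in microjoules."""
    r = diameter_mm / 2000.0                        # radius in metres
    mass = rho * (4.0 / 3.0) * math.pi * r ** 3     # kg
    return 0.5 * mass * velocity_ms ** 2 * 1e6

for name, (d, v) in DROPS.items():
    print(f"{name}: {drop_energy_uj(d, v):.2f} uJ")
```

The spread is striking: a large thundershower drop carries roughly four orders of magnitude more kinetic energy than a small drizzle drop, which is why detector dynamic range matters here.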
{ "domain": "earthscience.stackexchange", "id": 2304, "tags": "meteorology, rainfall, rain" }
Image Histogram - What Is the Interpretation of PDF
Question: For digital images (assume a 2D gray scale image), the normalised intensity histogram is often treated as the probability distribution function of the intensity, i.e., intensity is treated as a random variable (RV). Where is the randomness in intensity coming from? Should I treat this RV as the intensity value at any pixel in the overall image, i.e., can I infer this statement from the PDF - “choose any pixel. Probability that the intensity at that pixel has a value of 100 is 0.35”? If so, it would seem that all pixels have the same PDF. Shouldn’t the overall structure of the image have a bearing on the PDF? E.g., if the image is black at a pixel, shouldn't we expect the PDF to be concentrated only around 0 at that pixel and zero everywhere else? Answer: Well, if you model your image as a realization of a random variable generator then the Histogram is the best estimation (assuming no other information like a prior, etc.) you have for the PDF of the random variable. For instance, you can see this model is used when doing Histogram Equalization (transforming the realization into a realization of a uniform generator). Pay attention that this is a very simple model. For instance it doesn't take care of the correlation between adjacent pixels in the image. Indeed your interpretation is correct given the model.
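Under this model the normalized histogram is just the empirical PMF of the "pick a pixel uniformly at random" experiment, and the histogram-equalization use mentioned above maps each intensity through the empirical CDF. A minimal sketch on a toy flattened "image" (the pixel values are hypothetical):

```python
from collections import Counter

def empirical_pmf(pixels, levels=256):
    """Normalized histogram: estimate of P(intensity = i) over all pixels."""
    counts = Counter(pixels)
    n = len(pixels)
    return [counts.get(i, 0) / n for i in range(levels)]

def equalize(pixels, levels=256):
    """Map each intensity through the empirical CDF (histogram equalization)."""
    pmf = empirical_pmf(pixels, levels)
    cdf, total = [], 0.0
    for p in pmf:
        total += p
        cdf.append(total)
    return [round(cdf[v] * (levels - 1)) for v in pixels]

# Toy "image" flattened to a pixel list
img = [0, 0, 0, 100, 100, 200, 200, 255]
pmf = empirical_pmf(img)
# "choose any pixel: P(intensity == 100)" under this model:
print(pmf[100])
```

Note that the PMF is a property of the whole image, not of one pixel: the spatial structure only enters through which intensities occur and how often, which is exactly the "very simple model" caveat in the answer.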
{ "domain": "dsp.stackexchange", "id": 6318, "tags": "image-processing, histogram" }
Generating violet noise with a specific PSD coefficient
Question: I am trying to generate a time-domain violet noise signal with the following power spectral density (PSD): $$ S_n(f) = A^2f^2 $$ Unfortunately, I am having trouble finding the right amplitude coefficient to get the correct value of $A$. I am generating the signal by: Creating a white-noise signal array $\mathcal{w}(t)$ with sample frequency $f_s$ and $\sigma = 1$. Performing numerical differentiation on this signal (which is equivalent to multiplying by $f$ in frequency domain). Multiplying by $1/f_s$ to renormalize after differentiating. Multiplying this signal by the root-mean square value of: $$ \begin{aligned} \bar{\mathcal{v}}_n &= \left(\int_0^{f_s/2} S_n(f) df\right)^{1/2} = \left(\int_0^{f_s/2} A^2f^2 df\right)^{1/2} \\ &= \left(A^2 \frac{1}{3} \left(\frac{f_s}{2}\right)^3 \right)^{1/2} \\ &= \frac{1}{2\sqrt{6}}A{f_s}^{3/2} \end{aligned} $$ so the final expression is: $$ \mathcal{v}(t) = \bar{\mathcal{v}}_n \frac{1}{f_s}\frac{d\mathcal{w}(t)}{dt} $$ My problem is that the resulting PSD from this signal is off by a factor of $\pi$ (or maybe 3?) with respect to the expected response. Here is my code in python: import numpy as np from scipy import signal import allantools as aln from matplotlib import pyplot as plt rng = np.random.default_rng() fs = 10e3 # Sampling freq [Hz] N = 1e5 # Number of points A = 1 # Amplitude spectral density coefficient of violet noise [a.u./(Hz^(3/2)] time = np.arange(N)/fs # time array [s] vn = np.sqrt(1/3*A**2*(fs/2)**3) # RMS value of signal [a.u.] 
# Time-domain violet noise signal vn_t = vn*np.diff(rng.normal(size=time.shape[0]+1)) # Compute PSD f, Sn_f = signal.welch(vn_t, fs, nperseg=2048) plt.loglog(f,Sn_f) plt.loglog(f,A**2*f**2,'tab:red') plt.xlabel('frequency [Hz]') plt.ylabel('PSD [(A.U.)**2/Hz]') plt.legend(('Simulated PSD','Expected PSD'),loc='lower right') plt.xlim([1e1,5e3]) plt.grid() plt.show() Which results in the following plot: These are the results if I divide the simulated PSD by $\pi$: My guess is that I am missing something in the differentiation step, as this is for a Gaussian-distributed random process, but after doing a lot of searching, most of the references I see say that either white-noise signals are non-differentiable, or have just some complicated stochastic differential calculus equations that don't really point to anything practical (like this). Any help would be greatly appreciated. Note: I know that I could generate the frequency-domain signal and then perform an ifft to get the time-domain signal of interest, but I am asking this question because I am interested in knowing what would be the correct procedure to generating the time-domain signal directly. Answer: Upgraded to full answer. The diff function implements the difference equation $$y[n] = x[n]-x[n-1]$$ The transfer function is simply $$H(z) = 1 - z^{-1}$$ or $$H(\omega) = 1 - e^{-j\omega}$$ where $\omega$ is the normalized frequency. We can write this as $$H(\omega) = e^{-j\omega /2} \left( e^{+j\omega/2} - e^{-j\omega/2}\right) = e^{-j\omega /2} \cdot 2 j \cdot \sin(\omega/2) $$ (sorry, I had the factor $2j$ inversed in my original comment). There are a few things to note here: The linear phase term $ e^{-j\omega/2}$ is equivalent to a half sample delay. That is caused by the fact that the difference is centered around $n = 1/2$ and not around $n = 0$. If you estimate the derivative as $y[n] = x[n+1]-x[n-1]$, that problem would go away (but make other things worse). 
For small frequencies we can use $\sin(x) \approx x$ and we'd get $$H(\omega) \approx j\omega e^{-j\omega/2}$$ which matches the continuous derivative other than the half sample delay. At higher frequencies you will run into some sort of aliasing. In order to sample a signal without loss, the signal needs to be bandlimited, which is not the case. When you represent a signal in a computer as an array of numbers, it's discrete, and if it's discrete in one domain it's periodic in the other. Hence, it flattens out at the Nyquist frequency: the frequency domain periodicity enforces that (which is exactly what aliasing is). EDIT: matching the amplitudes I think your goal is to energy-match the signal before and after the spectral shaping. So if $y[n] = A \cdot (x[n]-x[n-1])$ you have $$\sum y^2[n] = \sum x^2[n]$$ That's really simple if $x[n]$ is white noise. White noise is uncorrelated with itself other than at a lag of zero and specifically we have $r_{xx}[-1] = 0$. That means you are simply subtracting two uncorrelated sequences and the energy of the sum (or difference) is the sum of the energies. Assuming $x[n]$ has an RMS of 1, then $x[n]-x[n-1]$ has a power of 2 or an RMS of $\sqrt{2}$. So the scale factor simply becomes $$A = \frac{1}{\sqrt{2}}$$ Your original method doesn't work because you are using a continuous model to solve a discrete problem. It's simply not applicable. You can do it in the discrete frequency domain. Let's assume a DFT length of $N$ which is sufficiently large with the transform pairs $x[n] \leftrightarrow X[k]$ and $y[n] \leftrightarrow Y[k]$. We also assume DFT scaling of $1/\sqrt{N}$ in both directions which preserves Parseval's Theorem between discrete time and frequency.
We have $$X[k] = 1 \\ Y[k] = Ae^{-j2 \pi/N \cdot k/2} \cdot 2 j \cdot \sin(2\pi/N \cdot k/2) $$ The magnitude squares (or PSDs) are $$|X[k]|^2 = 1 \\ |Y[k]|^2 = 4A^2\sin^2(2\pi/N \cdot k/2) = 2A^2(1 - \cos (2\pi/N \cdot k)) $$ The integration turns into a sum, so we get $$E_x = \sum_0^{N-1} |X[k]|^2 = N \\ E_y = \sum_0^{N-1} |Y[k]|^2 = 2A^2( \sum_0^{N-1} 1 - \sum_0^{N-1} \cos (2\pi/N \cdot k)) = 2A^2 N $$ (the cosine sum vanishes). Again we see that the one-sample difference of white noise simply doubles the power and that $A = 1/\sqrt{2}$ will match the input power. Your expected PSD becomes $$|Y[k]|^2 = 2\sin^2(2\pi/N \cdot k/2) = 1 - \cos(2\pi/N \cdot k)$$ Below is a graph that shows the PSD for both white noise and the diff()'ed version with proper scale. Measurement and expectation match well. Code: %% violet noise fs = 48000; % sample rate nx = 2^16; % FFT size % create signals rng(1); % make it reproducible x = randn(nx+1,1); % one extra sample for diff() y = diff(x)/sqrt(2); x = x(1:nx); % cut down to desired length % calculated FFT and PSD fx = fft(x)/sqrt(nx); fy = fft(y)/sqrt(nx); psdx = fx.*conj(fx); psdx = psdx(1:nx/2+1); psdy = fy.*conj(fy); psdy = psdy(1:nx/2+1); % plot it clf; k = (0:nx/2)'/nx; % index 0 ... 0.5 fr = k*fs; % frequency vector semilogx(fr(2:end),10*log10([psdx(2:end) psdy(2:end)])); hold('on'); grid('on'); xlabel('Frequency in Hz'); ylabel('Level in dB'); set(gca,'ylim',[-100 20]); set(gca,'xlim',[fr(2) fr(end)]); % expectation for PSD white is 1 or 0dB plot(fr,0*fr,'Linewidth',2); plot(fr,10*log10(1-cos(2*pi*k)),'LineWidth',2); legend('White actual','Diff actual','White expected','Diff expected', ... 'Location','SouthEast'); title('white and violet noise, unit power, fs = 48 kHz');
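The key claim - a one-sample difference of unit-variance white noise has power 2, so scaling by $1/\sqrt{2}$ restores unit power - is easy to verify numerically. A Python sketch paralleling the MATLAB above:

```python
import math
import random

random.seed(0)
n = 200_000
x = [random.gauss(0.0, 1.0) for _ in range(n + 1)]

# y[n] = (x[n] - x[n-1]) / sqrt(2): diff'ed, power-matched violet noise
y = [(x[i] - x[i - 1]) / math.sqrt(2) for i in range(1, n + 1)]

power = lambda s: sum(v * v for v in s) / len(s)
print(power(x[1:]), power(y))  # both close to 1.0
```

With 200k samples the sample powers agree to a fraction of a percent, confirming that the $1/\sqrt{2}$ factor (rather than the continuous-time integral) is the right normalization for the discrete difference.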
{ "domain": "dsp.stackexchange", "id": 10558, "tags": "noise, power-spectral-density, random-process" }
Programming a gas mass flow controller rig
Question: I'm trying to upgrade a multiple gas flow controller, which must be capable of controlling the mass flowrates of three gases independently through one outlet line to our system, and am in need of a bit of advice on programming it. I'm using an arduino and a few DACs to tell the proportioning solenoid valves how open they should be. The current setup schematic is below, though I have more pressure sensors and mass flow sensors at my disposal if necessary. My colleagues' advice is to use PID loops for each of the gases. This would be easy, particularly as there's an arduino code already available - however I'm not convinced that's either necessary or the best option, because: There is hardly any inertia in the system, unlike the thermal inertia in an oven for example. If it were just one gas, with one inlet pressure and one outlet, just the P part of a PID would be enough I think. The flow rate of each gas is dependent on the pressure differential before and after the valve, which is then dependent on the mass flow of the other gases and the outlet pressure, hence a PID could reach the optimal flow for one gas but in doing so change the flow rates of the other gases - leaving oscillation in the system. (the pressure in the system at the outlet may be changing slightly also) I don't know if its possible, or if so then how, to write a PID loop for simultaneous multiple input/outputs. Hence I would appreciate any ideas, or solutions that exist which I was unaware of, for how my program should go to control the proportioning solenoid valves? P.S. I'm not asking anyone to actually write a code for me, but just a basic idea of how I should do it would be great please? P.P.S. 
(There is a formula for gas flow rate here, so I suppose you could try to work out theoretically what the optimal signal to the valves would be based on some kind of simultaneous solution of this formula for the three gases, taking into account friction factors for all the little fittings and parts in the system and expansion factors for the particular gas mixtures concerned (I don't think any of the gas cylinders are absolutely pure mixtures) etc. but I would think it's possible not to go that complex here!) Answer: My best stab at this would be to attach parallel inputs of: Gas1(100psi) -> Flow Meter1 -> Valve1 -> Manifold Input1 Gas2(100psi) -> Flow Meter2 -> Valve2 -> Manifold Input2 Gas3(100psi) -> Flow Meter3 -> Valve3 -> Manifold Input3 ...where: I picked "100psi" to stress that the input pressure should be >> the output pressure. Could be 50psi, or whatever... Valve[n] is controlled by your spiffy Arduino code taking Flow Meter[n] as its input. All Valve[n] outputs go to a manifold that effectively connects together all inputs. The Manifold output (not shown) connects to your artificial lung. In addition: Manifold Output -> Lung -> Px Sensor ' | Purge Valve Strategy: Open purge valve. Control Proportional Valve[n] so each gas has its proportional mass flow input. (This is where the spiffy software comes in, more on that later). Allow purge to stay open while all 3 gasses are flowing, until steady state has been achieved. Close the purge valve. Monitor the Px sensor while the pressure builds up. While in the background, maintain proper flow proportions. Once the target Px is reached, close all Valve[n] valves simultaneously. Control Algorithm I would suspect that you could get away with a straight P controller. I assume that you are not too concerned with small errors (you didn't say). I do not think that you will need to work the pressure sensor data into any of your calculations. 
However, if delta-p starts getting too small, then maybe you would need to build in a higher order control system, or perhaps the Px data to scale the valve outputs as delta-p goes down. TBD. You will spend 80% of your time prototyping 1 control system to work. Then another 80% of your time integrating the hardware. Getting the proportional valve to work as you expect, well that's the golden goose. Stability will be dependent on a few things, but macroscopically I would be concerned about: Valve Cv value (Ahh, Cv numbers. Mean different things to different people, and nothing to most.) Are the valves ported properly for the desire flow rates? Are the valve orifices a suitable size for the flow rates and control range? Arduino limitations. We love Arduino's. They're fun. They also suck. Especially when you want to do something real, and real fast. I'd bet dollars-to-donuts that you could get this to work with some flavor of Arduino. I'd also bet that you will end up working around some wonky limitation that you could have avoided by spinning your own electronics, and programming in native c. But heck, it's fun. The parts, and more importantly your time, are really cheap - so why not. Is it all really stable under all conditions? Everyone will scream "Model it!". Instead, just try it and find out. A few hundred times. It will be a good experience. Even if you do take the modeling route, you will still need to try it out a few hundred times, so leave the expensive modeling software to the interns, where it'll do less damage. Purchase a $30 adjustable pressure relief valve, in case something goes awry. No need to blow the poor fella's lungs apart. It sounds like a fun project. Good luck!
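As a toy illustration of the per-gas control strategy (all numbers are hypothetical, the plant is a static valve with no coupling between the gases, and the valve command is *incremented* by Kp times the error each step - which behaves like integral action and drives the steady-state error to zero):

```python
def run_controller(setpoints, gains, kp=0.3, steps=200):
    """Drive each valve so that flow = gain * opening reaches its setpoint.

    setpoints: target mass flows per gas; gains: hypothetical static valve
    gains (flow per unit opening). Each step, the flow is re-measured and
    the opening nudged by kp * error, so the loop integrates the error away.
    """
    openings = [0.0] * len(setpoints)
    for _ in range(steps):
        flows = [g * u for g, u in zip(gains, openings)]
        errors = [sp - f for sp, f in zip(setpoints, flows)]
        openings = [max(0.0, u + kp * e) for u, e in zip(openings, errors)]
    return [g * u for g, u in zip(gains, openings)]

flows = run_controller(setpoints=[2.0, 1.0, 0.5], gains=[1.5, 1.0, 2.0])
```

In the real rig the three loops are coupled through the shared manifold pressure, which this sketch ignores; but because each loop re-measures its own flow every cycle, moderate coupling shows up only as a disturbance that the incremental update works off, provided kp is small enough that |1 - kp*gain| < 1 for every valve.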
{ "domain": "engineering.stackexchange", "id": 2341, "tags": "control-engineering, pressure, gas, pid-control, flow-control" }
Do gravitational time dilation and SR time dilation always cancel each other out - on every planet?
Question: In the show Gravity and Me, he discusses that on Earth the bulge at the equator leads the SR time dilation (from moving faster) to exactly equal the gravitational time dilation (from increased gravity at the poles), from the poles to the equator. Is this true on every planet? It seems like a very big coincidence that the speed of time is exactly equal at every place on Earth at sea level regardless of how fast the clock is moving, from the poles where a clock is not moving at all, to the equator where the clock is moving at a thousand miles an hour. Answer: No, it's not a coincidence. The earth's surface is an equipotential, and the time-time component of the metric depends only on the potential.
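A sketch of why (standard weak-field, slow-rotation approximation; this is the usual geoid argument, not taken from the show): the rate of a clock at rest on the rotating surface is $$\frac{d\tau}{dt} \approx 1 + \frac{\Phi}{c^2} - \frac{v^2}{2c^2} = 1 + \frac{\Phi_\text{eff}}{c^2}, \qquad \Phi_\text{eff} = \Phi - \frac{v^2}{2},$$ where $\Phi$ is the Newtonian potential and $v$ the speed due to the planet's rotation. But $\Phi_\text{eff}$ is precisely the effective potential (gravity plus centrifugal) whose level surfaces define the geoid, i.e. sea level, on any rotating body in fluid equilibrium. All clocks on one such level surface therefore tick at the same rate: the "cancellation" holds on any planet whose surface has relaxed to an equipotential of $\Phi_\text{eff}$, and would fail on a rigid body whose surface is not such an equipotential.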
{ "domain": "physics.stackexchange", "id": 59758, "tags": "special-relativity" }
Using the household_objects_database with diamondback
Question: Hello I'd like to use the grasping-database on our PR2 (or gazebo to start with). I installed the database according to this tutorial: http://www.ros.org/wiki/household_objects_database/Tutorials/Install%20the%20household_objects%20database%20on%20your%20local%20database%20server and the node via the object_manipulation-package. This is my server config and launchfile for the db: household_objects_database: database_host: localhost database_port: 5432 database_user: willow database_pass: willow database_name: household-objects-0.4 (was 0.2 in earlier version) and <launch> <!-- load database connection parameters --> <rosparam command="load" file="$(find object_detection)/config/my_server.yaml"/> <!-- start the database wrapper node --> <node pkg="household_objects_database" name="objects_database_node" type="objects_database_node" respawn="true" output="screen"/> </launch> A first check on the DB looks good: rosservice call /objects_database_node/get_model_list REDUCED_MODEL_SET return_code: code: -1 model_ids: [18665, 18685, 18691, (...)] But I can't get any gripping positions: rosservice call /objects_database_node/database_grasp_planning "{arm_name: right_arm, target: {type: 1, model_pose:{model_id: 18744 } } }" ERROR: Incompatible arguments to call service: No field name [target.model_pose] Provided arguments are: * {'arm_name': 'right_arm', 'target': {'model_pose': {'model_id': 18744}, 'type': 1}} (type dict) Service arguments are: [arm_name target.reference_frame_id target.potential_models target.cluster.header.seq (...) ] If I use the db within the pick&place-demo, the demo aborts with the message Object manipulator failed to call planner at /objects_database_node/database_grasp_planning which seems rather related to the db error. Has anyone used the db with diamondback and can give me a hint? 
Nikolas // Update (see comments) In Standalone: rosservice call /objects_database_node/database_grasp_planning "{arm_name: right_arm, target: {potential_models: [{ model_id: 18744 }] } }" returns grasps: [] error_code: value: 2 and the service itself prints those messages: [ERROR] [1307478152.866188465, 25974.284000000]: Database grasp planning: database query error [ERROR] [1307478758.305219944, 26535.359000000]: Hand description: could not find parameter /hand_description/right_arm/hand_database_name [ERROR] [1307478758.305614745, 26535.360000000]: Database get list: query failed. Error: ERROR: column "grasp_compliant_copy" does not exist LINE 1: ...sp_cluster_rep, grasp_table_clearance, hand_name, grasp_comp... ^ [ERROR] [1307478758.305675838, 26535.360000000]: Database grasp planning: database query error // Update 2: Results for the 0.4-2 db: grasps: [] error_code: value: 0 and [ERROR] [1307617317.355176736]: Hand description: could not find parameter /hand_description/right_arm/hand_database_name [ INFO] [1307617317.359785243]: Database object node: retrieved 0 grasps from database [ INFO] [1307617317.359861977]: Database grasp planner: pruned 0 grasps for table collision or gripper angle above threshold [ INFO] [1307617317.359905145]: Database grasp planner: returning 0 grasps Originally posted by NikolasEngelhard on ROS Answers with karma: 106 on 2011-06-06 Post score: 1 Original comments Comment by Matei Ciocarlie on 2011-06-10: This is strange, I was pretty sure that the 0.4 version has the grasp_compliant_copy field. In any case, you can always check by hand using PGAdmin3. If the table grasp does not have that column, can you please add it by hand, type boolean, and set the default to False? Comment by NikolasEngelhard on 2011-06-08: This already was the result for the 0.4 db (diamondback-prerelease-backup), I just forgot to change the name in the config-file after I restored the db from the 0.4-backup. 
Comment by Matei Ciocarlie on 2011-06-08: I think this is caused by using the 0.2 version of the database, which works with cturtle. Can you please give it a try with the 0.4 prerelease version, which you can download from the same spot: https://code.ros.org/svn/data/trunk/household_objects/ Comment by NikolasEngelhard on 2011-06-07: Thanks for your help :) I updated the question. I'll add the error messages from the demo tomorrow. Comment by Matei Ciocarlie on 2011-06-07: When calling it standalone, can you please try instead: rosservice call /objects_database_node/database_grasp_planning "{arm_name: right_arm, target: {potential_models: [{ model_id: 18744 }] } }" Comment by Matei Ciocarlie on 2011-06-07: Any other error messages when you try running it from the pick and place demo? I am trying to get some insight into why the service call is failing. Answer: I now installed the database (household_0.4) on our PR2 and still get some errors, though this time, they are a bit different: engelhar@marvin:~$ rosservice call /objects_database_node/database_grasp_planning "{arm_name: right_arm, target: {potential_models: [{ model_id: 18780 }] } }" grasps: [] error_code: value: 2 And the service says: [ERROR] [1312116981.010510658]: Hand description: could not find parameter /hand_description/right_arm/hand_database_name [ERROR] [1312116981.011074672]: Database get list: query failed. Error: ERROR: column "fingertip_object_collision" does not exist LINE 1: ...rasp_compliant_original_id, grasp_scaled_quality, fingertip_... ^ [ERROR] [1312116981.011722189]: Database grasp planning: database query error I checked for the "grasp_compliant_copy" value and its boolean and false by default. 
Originally posted by NikolasEngelhard with karma: 106 on 2011-07-31 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Matei Ciocarlie on 2011-08-01: To get rid of this, you should be able to also add the "fingertip_object_collision" boolean field to the "grasp" table and set its default to FALSE. Comment by Matei Ciocarlie on 2011-08-01: This is caused by a mismatch between the code and the database schema, sorry about this... It seems to be complaining about the "fingertip_object_collision" field, which only appears in the prerelease_2 version of the backup file. Is that the one you downloaded?
{ "domain": "robotics.stackexchange", "id": 5767, "tags": "ros, household-objects-database, ros-diamondback, grasping" }
Retrieving data from SQL DB based on user input
Question: I wrote this to connect to a local DB using my local login credentials. It pulls back card information based on the user's input. using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; using System.Data.SqlClient; namespace SQL_Console{ public class Info { //SQL Sever connection public static string connstring = @"Data Source=localhost\MAYDAY;Initial Catalog=NV; Integrated Security=SSPI; Connection Timeout=5"; public static string ConnectionString { get { return ConnectionString; } set { connstring = value; } } } public class Lookup { //SQL Query public static string squery = "Select Cardnumber, CardHolder, ExpireDate from CreditCardTransactionHistory where clientnumber = (select clientnumber from foliolink where folionumber = @fnumber)"; } //Execution class Program { static void Main(string[] args) { string sqlconn = Info.connstring; Console.WriteLine("Enter Folio Number:"); int fnum = Convert.ToInt32(Console.ReadLine()); Console.Clear(); SqlConnection sqlcon1 = new SqlConnection(sqlconn); sqlcon1.Open(); SqlCommand sql1 = new SqlCommand(Lookup.squery, (sqlcon1)); sql1.Parameters.AddWithValue("@fnumber", fnum); SqlDataReader reader = sql1.ExecuteReader(); while (reader.Read()) { Console.Write("Card Number: {0}\nCard Holder: {1}\nExpiration: {2}\n\n", reader[0], reader[1], reader[2]); } Console.ReadLine(); sqlcon1.Close(); } } } Answer: I don't get using class with static properties to store some strings. 
Use Using blocks for classes that need disposing: public static void SimpleDB() { string sqlconn = @"Data Source=localhost\MAYDAY;Initial Catalog=NV; Integrated Security=SSPI; Connection Timeout=5"; string query = "Select Cardnumber, CardHolder, ExpireDate " + Environment.NewLine + "from CreditCardTransactionHistory " + Environment.NewLine + "where clientnumber = (select clientnumber from foliolink where folionumber = @fnumber)"; Console.WriteLine("Enter Folio Number:"); int fnum = Convert.ToInt32(Console.ReadLine()); Console.Clear(); using (SqlConnection sqlcon1 = new SqlConnection(sqlconn)) { sqlcon1.Open(); using (SqlCommand sql1 = new SqlCommand(query, (sqlcon1))) { sql1.Parameters.AddWithValue("@fnumber", fnum); using (SqlDataReader reader = sql1.ExecuteReader()) { if (reader.HasRows) { while (reader.Read()) { Console.Write("Card Number: {0}\nCard Holder: {1}\nExpiration: {2}\n\n", reader[0], reader[1], reader[2]); } } else { Console.WriteLine("no results"); } } } } Console.ReadLine(); }
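The two ideas here - deterministic disposal of the connection and a parameterized query instead of string concatenation - carry over directly to other stacks. For comparison, a sketch in Python against the stdlib sqlite3 module (not SQL Server; the table and column names are reused from the question purely for illustration):

```python
from contextlib import closing
import sqlite3

def lookup_cards(db_path, folio_number):
    query = """
        SELECT Cardnumber, CardHolder, ExpireDate
        FROM CreditCardTransactionHistory
        WHERE clientnumber = (SELECT clientnumber FROM foliolink
                              WHERE folionumber = ?)
    """
    # contextlib.closing guarantees conn.close() on exit, even on error,
    # analogous to the C# using blocks above
    with closing(sqlite3.connect(db_path)) as conn:
        rows = conn.execute(query, (folio_number,)).fetchall()
    if not rows:
        print("no results")
    for number, holder, expires in rows:
        print(f"Card Number: {number}\nCard Holder: {holder}\nExpiration: {expires}\n")
    return rows
```

Note that sqlite3's connection object used directly as a context manager only manages the transaction, not closing; hence the explicit `closing()` wrapper to match the disposal semantics of `using`.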
{ "domain": "codereview.stackexchange", "id": 29533, "tags": "c#, beginner, sql-server" }
What is the complexity to show this theorem?
Question: Given a sum of regular expressions, where each regular expression in the sum is n-1 concatenations of 0, 1 and (0+1). We need to show that the sum of all regular expressions is either equal to or not equal to the regular expression which is n-1 concatenations of (0+1). I want to know what the complexity of showing this theorem is. Actually it is possible to open up all brackets and parentheses, but this increases the number of regular expressions in the sum to some exponential number. A proof done this way is very long and requires a lot of large papers. I want to know if there is an easier, more efficient and faster way to do this proof? Answer: This is just SAT in disguise. Given a clause, you can encode the set of assignments falsifying the clause as a sum of the type you indicated. The formula is unsatisfiable iff every assignment falsifies some clause, that is, iff the sum of the regular expressions is $(0+1)^{n-1}$.
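The reduction is easy to see on small examples. Below is a brute-force checker (exponential by design, for illustration only, not an efficient decision procedure): each clause over n variables yields one summand, a pattern over 0, 1 and (0+1) that matches exactly the clause's falsifying assignments, and the sum equals the all-strings expression iff the CNF formula is unsatisfiable.

```python
from itertools import product

def covers_all(patterns, n):
    """Does the union of the patterns match every string in {0,1}^n?

    Each pattern is a string over '0', '1', '*', where '*' stands for a
    (0+1) factor. A clause corresponds to the pattern matching exactly its
    falsifying assignments, so the union covers {0,1}^n iff the formula
    is unsatisfiable.
    """
    def matches(pat, s):
        return all(p == '*' or p == c for p, c in zip(pat, s))
    return all(
        any(matches(p, s) for p in patterns)
        for s in (''.join(bits) for bits in product('01', repeat=n))
    )

# (x1) AND (NOT x1): falsifying patterns '0' and '1' -> unsatisfiable,
# so the sum equals (0+1)^1:
print(covers_all(['0', '1'], 1))
# (x1 OR x2): only '00' falsifies it -> satisfiable, sum is strictly smaller:
print(covers_all(['00'], 2))
```

Since the coverage question is coNP-complete via this encoding, one should not expect a subexponential proof method in general.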
{ "domain": "cs.stackexchange", "id": 9447, "tags": "complexity-theory, time-complexity, runtime-analysis, space-complexity, space-analysis" }
What is exact sync message filter?
Question: I am writing a ROS node for a stereo camera. It publishes sensor_msgs::Image objects to two topics, one for the left camera and another for the right camera. The subsequent node uses an ExactSync synchronizer to subscribe to the published topics. I assume it is for making sure all the published images have the same timestamp. But this is just a guess. Can someone here help me understand what an ExactSync message filter is? Answer: Publishers and subscribers in ROS are agnostic to each other. So I assume it is for making sure all the published images have the same timestamp. No. In ROS, what you do on the subscriber's end will have no effect on what the publisher does (or how). Here is the message_filter wiki page. A message filter is defined as something which a message arrives into and may or may not be spit back out of at a later point in time. What the ExactSync message filter does is that it will only spit out messages with the exact same timestamp. If they are a few nanoseconds apart, they will not be identified as a matching pair. To quote the documentation, The message_filters::sync_policies::ExactTime policy requires messages to have exactly the same timestamp in order to match. Your callback is only called if a message has been received on all specified channels with the same exact timestamp. The timestamp is read from the header field of all messages (which is required for this policy). Therefore this subscriber will only work if your publisher explicitly synchronizes the left and right images using the timestamps. That is something you have to implement on your publisher node. If the left and right images are not explicitly synchronized, but you want to logically synchronize them based on the timestamp on the subscriber node, perhaps you can use the Approximate Time Policy instead.
Originally posted by janindu with karma: 849 on 2021-09-27 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by shankk on 2021-09-27: Okay. I understand this better now. Since I have a while loop which publishes image messages for both cameras in each iteration, I think the appropriate thing to do is to explicitly set the same value for the header timestamp of both. I think the reason why we use the ExactTime policy is to make sure that the left image from one capture does not erroneously get matched with the right image from another capture instance. Right? Comment by janindu on 2021-09-27: Yes, that should work if you explicitly set the same timestamp to both image messages.
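The matching logic behind the two policies can be sketched in a few lines (a heavy simplification of the real message_filters implementation, which also manages queues, out-of-order arrival, and callback dispatch; messages here are just hypothetical (stamp, data) tuples):

```python
def exact_sync(left_msgs, right_msgs):
    """Pair (stamp, data) messages whose stamps are *identical* (ExactTime)."""
    right_by_stamp = {stamp: data for stamp, data in right_msgs}
    return [
        (stamp, data, right_by_stamp[stamp])
        for stamp, data in left_msgs
        if stamp in right_by_stamp
    ]

def approx_sync(left_msgs, right_msgs, slop):
    """Greedy sketch of ApproximateTime: match within +/- slop seconds."""
    pairs, used = [], set()
    for stamp, data in left_msgs:
        best = min(
            (r for r in right_msgs
             if r[0] not in used and abs(r[0] - stamp) <= slop),
            key=lambda r: abs(r[0] - stamp),
            default=None,
        )
        if best is not None:
            used.add(best[0])
            pairs.append((stamp, data, best[1]))
    return pairs

left = [(1.000, "L0"), (1.100, "L1")]
right = [(1.000, "R0"), (1.103, "R1")]
print(exact_sync(left, right))          # only the 1.000 pair matches
print(approx_sync(left, right, 0.01))   # both pairs match within 10 ms
```

This makes the difference concrete: the second capture (1.100 vs 1.103) is silently dropped by the exact policy but paired by the approximate one.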
{ "domain": "robotics.stackexchange", "id": 36954, "tags": "ros, message-filter, sensor-msgs#image" }
Can I use euclidean distance for Latent Dirichlet Allocation document similarity?
Question: I have a Latent Dirichlet Allocation (LDA) model with $K$ topics trained on a corpus with $M$ documents. Due to my hyper parameter configurations, the output topic distribution for each document is heavily concentrated on only 3-6 topics and all the rest are close to zero ($K$~$\mathcal{O}(100)$). What I mean by this is that the 3-6 highest contributing topics for all documents are orders of magnitude (about 6 orders) greater than the rest of the topic contributions. If I use the Jensen-Shannon distance to compute the similarity between documents, I need to store all values of the topic distribution as non-zero, even the very small values of the non-contributing topics, because Jensen-Shannon divides by each discrete value in the distribution. This requires a lot of storage and is inefficient. If, however, I store the topic distributions of each document as a sparse matrix (the 3-6 highest contributing topics are non-zero and the rest are zero) where each row is a unique document and each column is a topic, then this uses far less space. But I can no longer use the Jensen-Shannon metric, because we would be dividing by 0. In this case: Can I use the euclidean distance between documents' topic distributions to compare similarity between documents? Using the euclidean distance would require far less storage and is extremely fast to compute. I appreciate that Jensen-Shannon is one of the "correct" metrics to compare discrete probability distributions, as well as the Bhattacharyya distance and Hellinger distance. But ultimately, the output of LDA is a discrete topic distribution for each document - each document is a vector (or point) in a $K$ dimensional space. By this argument, is it valid to use the euclidean distance to calculate document similarities? Is there something blatantly wrong with this method? I have tested the euclidean distance to compare documents, and it yielded good results, which works well for my industrial application.
But I want to know the academic reasoning behind such a method. Thanks in advance! Answer: Euclidean distance (by which, in this application, I assume you mean the Euclidean distance in an $n$-dimensional space defined by the distribution of document contents among the $n$ topics considered) is a valid measure to use in comparing the topics represented within two documents. What you're doing by applying this method is quantifying a topic frequency difference within this newly defined space, and so interpretation of these quanta will require analysis of the space. For example, what Euclidean distance indicates that documents are relatively similar? In distinction, the normalized result of something like the Hellinger distance provides an easily interpretable framework by which to evaluate the results: a score of 0 indicates perfect overlap between the two documents' distributions over the topics in question, and a 1, no overlap at all. As for the efficiency concerns, it's not clear to me why you couldn't truncate your topics considered to the crucial topics and then calculate any of the metrics on the distributions over only those topics, rather than the entire universe of considered topics.
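The trade-off described in the answer can be sketched in a few lines. The topic distributions below are made up for illustration (not taken from the question's model); both metrics rank the sparse documents the same way, but only the Hellinger distance has a normalized absolute scale:

```python
import math

def euclidean(p, q):
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

def hellinger(p, q):
    # Bounded in [0, 1]: 0 = identical distributions, 1 = disjoint support.
    return math.sqrt(0.5 * sum((math.sqrt(pi) - math.sqrt(qi)) ** 2
                               for pi, qi in zip(p, q)))

doc_a = [0.6, 0.3, 0.1, 0.0, 0.0]
doc_b = [0.5, 0.4, 0.1, 0.0, 0.0]
doc_c = [0.0, 0.0, 0.0, 0.7, 0.3]

# Both metrics agree on the ranking: doc_a is closer to doc_b than to doc_c...
assert euclidean(doc_a, doc_b) < euclidean(doc_a, doc_c)
assert hellinger(doc_a, doc_b) < hellinger(doc_a, doc_c)
# ...but only Hellinger gives an absolute interpretation (1.0 = no overlap).
assert abs(hellinger(doc_a, doc_c) - 1.0) < 1e-12
```

So a ranking-based application (like the asker's) can get away with Euclidean distance, while an application that needs an interpretable absolute similarity benefits from the normalized metric.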
{ "domain": "datascience.stackexchange", "id": 2229, "tags": "nlp, lda, distance, similar-documents" }
Retrieving a list, by iterating through a list
Question: I'm using VB.Net, MVC 5, EF 6, and Linq. I have a list of Integers (category attribute IDs). I need to create a second list of String (values). There will be one string for each integer. I am currently accomplishing my task like this: Function getValues(catAttIDs As List(Of Integer), itemID As Integer) As List(Of String) Dim db As New Model1 Dim values = New List(Of String) For i As Integer = 0 To catAttIDs.Count - 1 Dim catAttID = catAttIDs(i) Dim currentValue = (From row In db.tblEquipment_Attributes Where row.Category_Attribute_Identifier = catAttID _ And row.Unique_Item_ID = itemID Select row.Value).SingleOrDefault() values.Add(currentValue) Next Return values End Function I have a strong feeling that there is a better way to do this, but I have not been able to find the information I'm looking for. I'm particularly interested in changing this code so that the database is called once for the list, instead of calling the database 5 or 6 times as I work my way through the list. Answer: You're looking for the LINQ-equivalent of an IN clause in SQL. So something like this: SELECT value FROM tblEquipment_Attributes WHERE Category_Attribute_Identifier IN (<list of integers>) AND Unique_Item_ID = itemID; So what you could do is write your LINQ statement to see if the Category_Attribute_Identifier is in the list. Then your function will look something like this: Function getValues(catAttIDs As List(Of Integer), itemID As Integer) As List(Of String) Dim db As New Model1 Dim currentValues as List(Of String) = (From row In db.tblEquipment_Attributes Where catAttIDs.Contains(row.Category_Attribute_Identifier) _ And row.Unique_Item_ID = itemID Select row.Value).ToList() Return currentValues End Function Note that ToList will create a List<T>, where T is the type of the elements. As long as Value in your db is a varchar, it'll be a List(Of String).
{ "domain": "codereview.stackexchange", "id": 16090, "tags": "linq, vb.net" }
NP-hard proof: Polynomial time reduction
Question: As I understand, to show that a certain problem $P$ is NP-hard we can reduce a known NP-hard problem $Q$ to $P$. This reduction, say $f$, has to be polynomial time. Could someone please explain: is it necessary that $f$ maps each instance $q\in Q$ to $P$, or can we map a selected subset $\tilde Q \subseteq Q$ to $P$? I.e., does the pre-image of $f$ have to be all of $Q$, or can it be a subset of $Q$? Thanks Answer: Yes and no. To cut a long story short, it's enough that the pre-image of $f$ is NP-hard. Intuitively, the point of NP-hardness is that, if you had an efficient algorithm for an NP-hard problem, then you would have an efficient algorithm for all problems in NP. Let's suppose you've come up with a new problem, Triomphe's Problem (TP), and you want to prove that it's NP-hard. You need to show that every problem in NP can be reduced to TP. There are, on the face of it, two ways of doing this. The direct way. Show that there is a polynomial-time computable function $f$ with the following property: for any nondeterministic polynomial-time Turing machine $M$ and every input $x$, $f(M,x)$ is an instance of TP and $f(M,x)$ is a "yes" instance of TP if, and only if, $M$ accepts $x$. This is how Cook proved NP-completeness of Boolean satisfiability and how Fagin proved NP-completeness of evaluation of formulas of existential second-order logic. The indirect way. Show that there is an NP-hard problem $P$ and a polynomial-time computable function $f$ with the following property: for any instance $x$ of $P$, $f(x)$ is an instance of TP and it is a "yes" instance of TP if, and only if, $x$ is a "yes" instance of $P$. This is how just about every other NP-hard problem, apart from the two listed above, was proven NP-hard. The indirect way works through a chain of reductions. We need to establish that every problem in NP can be reduced to TP. So, we start with our nondeterministic polynomial-time Turing machine $M$ and its input $x$. 
We convert that to an instance of Boolean satisfiability. Then we convert that into, say, an instance of 3-SAT. Then we convert that into, say, an instance of 3-colourability. Then maybe we convert that into an instance of $P$ and, finally, convert that into an instance of our fictional problem TP. Because all of these reductions work for every instance of the problem, we have a reduction from our generic NP problem to TP. Both in theory and in practice, that is how reductions are done: you need to translate every instance of the problem. But we don't actually need that much. Look at the first step of the chain of reductions in the previous paragraph. We started with any Turing machine at all, and we converted it into a Boolean formula. Without looking closer, all we know is that we've produced some Boolean formula, and we don't know any details about it. However, looking more closely at the reduction, we see that the formula is in conjunctive normal form (CNF) (or that the proof can easily be modified to make it so). For the next step, converting to 3-CNF, the definition of reductions tells us that we have to be able to translate every Boolean formula into one in 3-CNF, but we know we don't need to do that much. It would suffice to translate only the formulas that are already in conjunctive normal form, because those are the only ones that the translation from Turing machines will produce. And that's actually what the standard proof does. Normally, when a new problem is proven NP-hard, a full reduction is given from some known NP-hard problem to the new problem, which translates all instances. However, in principle, you could get away with a reduction that does less than that, as long as it covers enough instances to establish the chain back to the generic Turing machine $M$ and its input $x$. To give another example, 3-colourability is NP-hard because of a standard reduction from 3-SAT. 
You could prove a new problem to be NP-hard by a reduction that only translates the instances of 3-colourability that could be produced by that reduction from 3-SAT. This works because 3-colourability is already NP-hard when its input is restricted to be from the class of graphs that can arise from the reduction from 3-SAT. However, if you're a student doing exercises and exam questions, I'd recommend that you always produce reductions that map the whole problem, rather than just a subset of it.
{ "domain": "cs.stackexchange", "id": 2184, "tags": "complexity-theory" }
Why doesn't KI(aq) react with HCl(aq)?
Question: When concentrated sulfuric acid is added to anhydrous potassium chloride and the fumes produced are bubbled into aqueous potassium iodide solution, the observed result would be a colourless solution. I think the first reaction is: $$\ce{2KCl(aq) + H2SO4(aq) -> K2SO4 + 2HCl(g)}$$ I assumed the second reaction would be: $$\ce{HCl + KI -> KCl + \frac{1}{2}I2 + \frac{1}{2}H2 }$$ However the second reaction is wrong according to the answer book, as $\ce{HCl}$ won't react with $\ce{KI}$. From what I know, $\ce{Cl}$ is a stronger oxidising agent than $\ce{I}$, so shouldn't $\ce{I-}$ in $\ce{KI}$ be oxidized to $\ce{I2}$? I am an A-levels student so would appreciate a simpler answer. Answer: In your first reaction, you added concentrated sulfuric acid to anhydrous potassium chloride. In your equation, you wrote $\ce{KCl (aq)}$, which is incorrect. Under dilute conditions, that reaction would not take place because all reagents and products would be in aqueous ionic state, meaning there is no reaction. In your second reaction, the equation is completely wrong. It is true that elemental $\ce{Cl2}$ is a stronger oxidizing agent than elemental $\ce{Br2}$ or elemental $\ce{I2}$. But the current situation is ionic $\ce{Cl-}$ versus ionic $\ce{I-}$, where both are aqueous. If all reagents are in ionic form and expected products are also ionic, there is no reaction. See your reaction below: $$\ce{H+(aq) + Cl-(aq) +K+(aq) + I-(aq) <=> H+(aq) + I-(aq) +K+(aq) + Cl-(aq)}$$ No gas, liquid or solid is formed; everything remains as ions. Therefore, there is no reaction.
{ "domain": "chemistry.stackexchange", "id": 10086, "tags": "inorganic-chemistry, acid-base" }
How is hinge loss related to primal form / dual form of SVM
Question: I'm learning SVM and many classic tutorials talk about the formulation of SVM problem as a convex optimization problem: i.e. We have the objective function with slack variables and subject to constraints. Most tutorials go through the derivation from this primal problem formulation to the classic formulation (using Lagrange multipliers, get the dual form, etc...). As I followed the steps, they make sense eventually after some time of learning. But then an important concept for SVM is the hinge loss. If I'm not mistaken, the hinge loss formula is completely separate from all the steps I described above. I can't find where the hinge loss comes into play when going through the tutorials that derive the SVM problem formulation. Now, I only know SVM as a classic convex optimization / linear programming problem with its objective function and slack variables that is subject to constraints. How is that related to hinge loss?? Answer: Hinge loss for sample point $i$: $$l( y_i, z_i) = \max(0, 1-y_iz_i)$$ Let $z_i=w^Tx_i+b$. We want to minimize $$\min \frac1n \sum_{i=1}^nl(y_i, w^Tx_i+b)+\|w\|^2$$ which can be written as $$\min \frac1n \sum_{i=1}^n\max(0,1-y_i (w^Tx_i+b))+\|w\|^2$$ which can be written as $$\min \frac1n \sum_{i=1}^n \zeta_i + \|w\|^2$$ subject to $$\zeta_i \ge 0$$ $$\zeta_i \ge 1-y_i (w^Tx_i+b)$$ The constraint comes from hinge loss. It is a reformulation of a minimax optimization problem.
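A quick numerical sanity check of the last step above: for any sample, the hinge loss $\max(0, 1-yz)$ equals the smallest slack $\zeta$ that satisfies both constraints $\zeta \ge 0$ and $\zeta \ge 1 - yz$. The sample values below are made up just to exercise both cases (margin satisfied and margin violated):

```python
def hinge(y, z):
    # Hinge loss for one sample: max(0, 1 - y*z)
    return max(0.0, 1.0 - y * z)

def min_feasible_slack(y, z, step=1e-3):
    # Brute-force the smallest zeta with zeta >= 0 and zeta >= 1 - y*z,
    # starting at the first constraint's bound and stepping upward.
    zeta = 0.0
    while zeta < 1.0 - y * z:
        zeta += step
    return zeta

samples = [(+1, 2.3), (+1, 0.4), (-1, -1.7), (-1, 0.9)]
for y, z in samples:
    # The tightest feasible slack reproduces the hinge loss (up to step size).
    assert abs(hinge(y, z) - min_feasible_slack(y, z)) < 2e-3
```

This is exactly why the slack-variable primal and the "hinge loss + regularizer" formulation are the same optimization problem: at the optimum, each $\zeta_i$ sits at its tightest feasible value.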
{ "domain": "datascience.stackexchange", "id": 5145, "tags": "svm, hinge-loss" }
How does one write Adjoint, Self-adjoint and Hermitian operators in Dirac notation?
Question: The following portion is paraphrased from Mathematical Methods for Physics and Engineering by Riley, Hobson, and Bence. The adjoint of a linear operator $\hat{A}$, denoted by $A^\dagger$, is an operator that satisfies $$\int_{a}^{b}\psi_1^*(\hat{A}\psi_2)dx =\int_{a}^{b}(\hat{A}^\dagger\psi_1)^*\psi_2 dx+\text{boundary terms}\tag{1}$$ where the boundary terms are evaluated at the end-points of the interval $[a,b]$. An operator is said to be self-adjoint if $A^\dagger=A$. Therefore, for self-adjoint operators, $$\int_{a}^{b}\psi_1^*(\hat{A}\psi_2)dx -\int_{a}^{b}(\hat{A}\psi_1)^*\psi_2 dx=\text{boundary terms}.\tag{2}$$ In addition, if certain boundary conditions are met by the functions $\psi_1$ and $\psi_2$ on which the self-adjoint operator acts, or by the operator itself, such that the boundary terms vanish, then the operator is said to be Hermitian in the interval $a\leq x\leq b$. In that case, $$\int_{a}^{b}\psi_1^*(\hat{A}\psi_2)dx =\int_{a}^{b}(\hat{A}\psi_1)^*\psi_2 dx.\tag{3}$$ My question is: in terms of Dirac's abstract bra and ket notation, how does one write each of these defining equations? Answer: This is a non-standard definition of (self-)adjointness. From context, your definitions are supposed to be for operators $A$ on the space of square-integrable complex-valued functions $L^2(\mathbb{R})$. This is a Hilbert space, and the abstract definition of the adjoint $A^\dagger$ of any operator $A$ on any Hilbert space $H$ with inner product $\langle -,-\rangle$ is $$ \langle Av,w\rangle = \langle v, A^\dagger w\rangle \label{Ad}\tag{Ad}$$ for all $v \in D(A), w\in D(A^\dagger)$, where $D(A)$ is the domain of definition of $A$ and $D(A^\dagger)$ the inferred domain of definition of the adjoint. The operator $A$ may be a bounded operator defined on the entire Hilbert space, or it may be densely defined only on some dense subspace $D(A)\subset H$. This is usually the case for unbounded operators like the position and momentum operators. 
Note the absence of any "boundary terms" in $\eqref{Ad}$. This is because the notion of "boundary terms" only makes sense for the specific case of $L^2(\mathbb{R})$, but not for a generic Hilbert space. The text you are using likely wants to side-step the discussion of domains of definition - for vectors outside the domain of definition of $A$ or $A^\dagger$, you usually get such boundary terms when trying to apply the naive definition of operators on $L^2(\mathbb{R})$. However, in some contexts it is really crucial to pay attention to this subtlety, and the operators really are only defined on the subspace of functions where these boundary terms vanish. See this answer of mine for a detailed discussion of a case where not paying attention to this leads to an apparent contradiction between the operators and their commutation relations. It appears that this - the case when the boundary terms vanish - is when your text wants to call the operator "Hermitian". This is a decidedly non-standard usage and I would strongly recommend against accepting that usage into your vocabulary. In almost all other usages, "Hermitian" is either synonymous with self-adjoint or is a weaker condition, e.g. some people call an operator Hermitian (or symmetric) when $A=A^\dagger$ but $D(A)\neq D(A^\dagger)$, and self-adjoint when additionally $D(A) = D(A^\dagger)$. Other people do not use the word "Hermitian" for operators on infinite-dimensional spaces at all and reserve its usage for finite-dimensional matrices on $\mathbb{C}^n$, where there are no domain issues and it is always equivalent to self-adjointness. Dirac notation has trouble expressing a lot of the things that go on with adjoints because in $\langle v \vert A\vert w\rangle$, you can't really tell whether $A$ acts to the right on $\lvert w\rangle$ or to the left on $\langle v\vert$, i.e. it is ambiguous whether this is $\langle v, Aw\rangle$ or $\langle Av,w\rangle$. 
For self-adjoint operators $A=A^\dagger$ this doesn't matter since $\langle Av,w\rangle = \langle v,A^\dagger w\rangle = \langle v,Aw\rangle$, and so Dirac notation is only unambiguous when you use only self-adjoint operators. People usually - but not always - assume that $\langle v\vert A\vert w\rangle$ for non-self-adjoint $A$ means $\langle v,Aw\rangle$, and would express the adjoint condition as something like $$ \langle A^\dagger v\vert w\rangle = \langle v\vert A\vert w\rangle$$ or $$ (A\lvert v\rangle)^\dagger \lvert w\rangle = \langle v\vert A\vert w\rangle$$ or $$ (\langle v\vert A^\dagger)\lvert w\rangle = \langle v\vert A\vert w\rangle$$ but all of these are suboptimal and could arguably be interpreted wrongly. In my opinion, it is best just to not use Dirac notation in cases where this ambiguity can happen.
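As a concrete sanity check of the defining relation, here is a minimal finite-dimensional sketch on $\mathbb{C}^2$, where the adjoint is just the conjugate transpose and no domain or boundary subtleties arise (the matrix and vectors are arbitrary examples):

```python
# Physicists' convention: the inner product is conjugate-linear in the FIRST slot.
def inner(v, w):
    return sum(vi.conjugate() * wi for vi, wi in zip(v, w))

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def adjoint(A):
    # In finite dimensions the adjoint is the conjugate transpose.
    n, m = len(A), len(A[0])
    return [[A[i][j].conjugate() for i in range(n)] for j in range(m)]

A = [[1 + 2j, 3j], [0j, 4 - 1j]]
v = [1j, 2 + 0j]
w = [3 + 0j, -1j]

lhs = inner(matvec(A, v), w)            # <A v, w>
rhs = inner(v, matvec(adjoint(A), w))   # <v, adjoint(A) w>
assert abs(lhs - rhs) < 1e-12           # the defining relation, no boundary terms
```

On $L^2$, the same relation only holds once the boundary terms vanish, which is exactly the distinction the answer is drawing.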
{ "domain": "physics.stackexchange", "id": 93186, "tags": "quantum-mechanics, hilbert-space, operators, definition, notation" }
Can expanding space stretch the wavelength of GWs?
Question: I have read this question: Redshifting of Light and the expansion of the universe Now analogously, we could talk about GWs traveling in the empty voids of space, where the expansion of space is dominant. There is a debate over whether gravitons exist or not, and whether gravitons are the quanta of GWs or not, but let's assume yes. Just as EM waves' wavelengths get stretched as they travel in expanding space, GWs' wavelengths could get stretched. We are actually able to measure the redshifting of EM waves from far away sources. We should likewise be able to measure the redshifting of GWs from far away sources, as we have already detected GWs. Question: Can expanding space stretch the wavelength of GWs? When we detected the first GWs, did we measure the actual amount of redshift they went through as they traveled from the source? Answer: According to general relativity, gravitational waves are affected by redshift in exactly the same manner and to the same extent as electromagnetic waves. A complicating matter is that the effect of redshift on a gravitational wave signal is fully degenerate with the total mass of its sources. Consequently, it is not possible to determine the redshift from a gravitational wave observation alone.
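The mass-redshift degeneracy mentioned in the answer is simple to state: the waveform depends on the redshifted (detector-frame) mass, $M_\text{det} = (1+z)\,M_\text{source}$. A toy illustration (the numbers are arbitrary):

```python
# The observed waveform encodes only the redshifted mass,
# M_det = (1 + z) * M_source, so redshift and source-frame mass
# cannot be separated from the GW signal alone.
def detector_frame_mass(m_source_solar, z):
    return (1.0 + z) * m_source_solar

# A 30-solar-mass source at z = 0.1 produces the same detector-frame
# mass as a 33-solar-mass source at z = 0:
assert abs(detector_frame_mass(30.0, 0.1) - detector_frame_mass(33.0, 0.0)) < 1e-9
```

Breaking this degeneracy requires extra information, such as an electromagnetic counterpart or an assumed cosmology.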
{ "domain": "physics.stackexchange", "id": 57341, "tags": "general-relativity, space-expansion, gravitational-waves, redshift" }
A custom RxJS operator which emits (and completes) once a condition has been satisfied for all previous values of the source
Question: The code is written in TypeScript. export function waitFor<T, R> ( reducer: (acc: R, value: T) => R, initial: R, condition: (accumulated: R) => boolean, ): OperatorFunction<T, R> { return (source$: Observable<T>) => { return new Observable<R>((subscriber) => { let accumulated: R = initial return source$.subscribe({ next (value) { accumulated = reducer(accumulated, value) if (condition(accumulated)) { subscriber.next(accumulated) subscriber.complete() } }, error (error) { subscriber.error(error) }, complete () { subscriber.complete() }, }) }) } } Here's a passing marbles test which should make it more obvious what the operator is supposed to do. import { marbles } from 'rxjs-marbles/jest' import { waitFor } from './utils' describe(`waitFor`, () => { it(`waits for sum to be over 12`, marbles(m => { const source = m.cold(' ----a----b----c----d----e----f----g------|', { a: 1, b: 2, c: 3, d: 4, e: 5, f: 6, g: 7 }) const expected = m.cold('------------------------(x|)', { x: 15 }) const actual = source.pipe(waitFor((a, b) => a + b, 0, sum => sum > 12)) m.expect(actual).toBeObservable(expected) })) }) I'm mostly interested in the best practices of writing the operator. Did I miss unsubscribing from something and introduced a memory leak? Did I reinvent the wheel, i.e. could I have achieved the same versatility of the operator by combining the existing standard ones? I realize now this is just a scan followed by a skipUntil and first. Ignoring this, I'd still like to know if my logic was sound and if I missed something important in my custom operator. Answer: Yes, this implementation looks sound to me. That said, you should first try to implement new operators like this by combining existing operators, since that leaves less room for mistakes and oversights, and lets you piggyback on all the years of improvements the library went through. You already mentioned you re-implemented scan + something extra. Myself, I would use find. 
The resulting operator below already has 2 minor improvements: export function waitFor<T, R, S> ( reducer: (acc: R | S, value: T, index: number) => R, seed: S, condition: (accumulated: R, index: number) => boolean, ): OperatorFunction<T, R> { return (source$: Observable<T>) => source$.pipe( scan(reducer, seed), find(condition) ) } The reducer and the condition now have an index parameter. The scan operator gave me inspiration to separate the type of the seed and the accumulated.
{ "domain": "codereview.stackexchange", "id": 41521, "tags": "javascript, typescript, rxjs" }
local minima vs saddle points in deep learning
Question: I heard Andrew Ng (in a video I unfortunately can't find anymore) talk about how the understanding of local minima in deep learning problems has changed in the sense that they are now regarded as less problematic because in high-dimensional spaces (encountered in deep learning) critical points are more likely to be saddle points or plateaus rather than local minima. I've seen papers (e.g. this one) that discuss assumptions under which "every local minimum is a global minimum". These assumptions are all rather technical, but from what I understand they tend to impose a structure on the neural network that makes it somewhat linear. Is it a valid claim that, in deep learning (incl. nonlinear architectures), plateaus are more likely than local minima? And if so, is there a (possibly mathematical) intuition behind it? Is there anything particular about deep learning and saddle points? Answer: This is simply trying to convey my intuition, i.e. no rigor. The thing with saddle points is that they are a type of critical point that combines minima along some dimensions with maxima along others. Because the number of dimensions is so large in deep learning, the probability that a critical point is a minimum along every dimension is very low. This means 'getting stuck' in a local minimum is rare. At the risk of oversimplifying, it's harder to 'get stuck' in a saddle point because you can 'slide down one of the dimensions'. I think the Andrew Ng video you refer to comes from the Coursera course on Deep Learning by him.
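The intuition in the answer can be made concrete with a deliberately crude toy model (an assumption for illustration, not a theorem about real loss surfaces): treat the sign of each Hessian eigenvalue at a critical point as an independent coin flip.

```python
# Toy model: if each of the n Hessian eigenvalues at a critical point were
# positive or negative independently with probability 1/2, the chance that
# ALL are positive (a local minimum rather than a saddle) would be 2**-n,
# vanishing rapidly as the dimensionality grows.
def p_all_positive(n_dims):
    return 0.5 ** n_dims

assert p_all_positive(2) == 0.25      # plausible to get stuck in 2D
assert p_all_positive(100) < 1e-30    # essentially never in 100D
```

Real Hessian eigenvalues are of course not independent coin flips, but random-matrix-style arguments in the literature lead to the same qualitative conclusion: in high dimensions, critical points are overwhelmingly saddles.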
{ "domain": "datascience.stackexchange", "id": 2975, "tags": "machine-learning, deep-learning, optimization, convergence" }
Inter-processor communication for robotic arm
Question: I'm building a hobby 6-DOF robotic arm and am wondering what the best way is to communicate between the processors (3-4 AVRs, 18 inches max separation). I'd like to have the control loop run on the computer, which sends commands to the microprocessors via an Atmega32u4 USB-to-??? bridge. Some ideas I'm considering: RS485 Pros: all processors on same wire, differential signal more robust Cons: requires additional chips, need to write (or find?) protocol to prevent processors from transmitting at the same time UART loop (ie, TX of one processor is connected to RX of next) Pros: simple firmware, processors have UART built in Cons: last connection has to travel length of robot, each processor has to spend cycles retransmitting messages CANbus (I know very little about this) My main considerations are hardware and firmware complexity, performance, and price (I can't buy an expensive out-of-box system). Answer: You want to use USB for communications with the computer. If you have a number of microcontrollers, you will probably only connect one of the microcontrollers directly to the computer. The other microcontrollers will need to get their commands from the main microcontroller. The communication you choose will depend on a number of factors: required bandwidth (we will assume you are running them at 16MHz) complexity (wiring and coding) bi-directional, or master-slave Almost all options have built-in support on the AVR microcontroller. There is no option you might reasonably prefer over the built-in options which would require additional hardware. Because they have built-in support, the software complexity is all similar, in that you just configure the port (using registers), put the data to transmit in another register, then trigger the transmission by setting a bit in another register. Any data received is found in another register, and an interrupt is triggered so you can handle it. 
Whichever option you choose, the only difference is the change in register locations, and some changes to the configuration registers. A USART loop has the following features: Maximum baud rate of CLK/16 = 1MHz (at 16MHz clock) which is a transfer rate of around 90KB/s fully bi-directional communications (no master or slave designation) requires separate wires between each pair of microcontrollers - the Atmega32u4 supports two USART ports natively, limiting the number of microcontrollers you can connect in a network in practice (or else you end up with a long string of microcontrollers - ie. connected in a linked list manner) Note: this is also what you would use to get RS232 communication, except that because RS232 requires 10V, it requires a driver to get those voltage levels. For communication between microcontrollers, this is not useful (only voltage levels are changed). RS485: Essentially, you use the USART port in a different mode - there is no advantage in bandwidth, and it may only simplify the wiring slightly, but it also complicates it. This is not recommended. Two-wire interface: This is also referred to as I2C. This means that all devices share the same two wires. You need a pull-up resistor on both wires It is slow (because the pull-up resistors are limited in value, and there is increasing capacitance as the number of devices increases, and the wire length increases). For this AVR microcontroller, the speed is up to 400 kHz - slower than USART (but this speed depends on limiting your capacitance). The reason is that although a device drives the data wire low, the opposite transition is accomplished by letting the wire float high again (the pull-up resistor). It is even slower when you consider that ALL communication shares the same limited bandwidth. Because all communication shares the same limited bandwidth, there may be delays in communication where data must wait until the network is idle before it can be sent. 
If other data is constantly being sent, it may also block the data from ever being sent. It does rely on a master-slave protocol, where a master addresses a slave, then sends a command/request, and the slave replies afterwards. Only one device can communicate at a time, so the slave must wait for the master to finish. Any device can act as both a master and/or a slave, making it quite flexible. SPI This is what I would recommend/use for general communication between microcontrollers. It is high speed - up to CLK/2 = 8MHz (for CLK at 16MHz), making it the fastest method. This is achievable because of its separate wire solely for the clock. The MOSI, MISO data, and SCK clock wires are shared across the whole network, which means it has simpler wiring. It is a master-slave format, but any device can be a master and/or slave. However, because of the slave select complications, for shared wiring (within the network), you should only use it in a hierarchical manner (unlike the two-wire interface). IE. if you organise all devices into a tree, a device should only be master to its children, and only a slave to its parent. That means that in slave mode, a device will always have the same master. Also, to do this correctly, you need to add resistors to MISO/MOSI/SCK to the upstream master, so that if the device is communicating downstream (when not selected as a slave), the communications will not affect communications in other parts of the network (note the number of levels you can do this using resistors is limited, see below for better solution using both SPI ports). The AVR microcontroller can automatically tri-state the MOSI signal when it is slave-selected, and switch to slave mode (if in master). 
Even though it might require a hierarchical network, most networks can be organised in a tree-like manner, so it is usually not an important limitation. The above can be relaxed slightly, because each AVR microcontroller supports two separate SPI ports, so each device can have different positions in two different networks. Having said this, if you need many levels in your tree/hierarchy (more than 2), the above solution using resistors gets too fiddly to work. In this case, you should change the SPI network between each layer of the tree. This means each device will connect to its children on one SPI network, and its parent on the other SPI network. Although it means you only have a single tree of connections, the advantage is that a device can communicate with both one of its children and its parent at the same time and you don't have fiddly resistors (always hard to choose the right values). Because it has separate MOSI and MISO wires, both the master and slave can communicate at the same time, giving it a potential factor of two increase in speed. An extra pin is required for the slave-select for each additional slave, but this is not a big burden; even 10 different slaves require only 10 extra pins, which can be easily accommodated on a typical AVR microcontroller. CAN is not supported by the AVR microcontroller you have specified. As there are other good options, it is probably not important in this case anyway. The recommendation is SPI, because it is fast, the wiring isn't too complex, and it doesn't require fiddly pull-up resistors. In the rare case where SPI doesn't fully meet your needs (probably in more complicated networks), you can use multiple options (eg. use both SPI ports, along with the two-wire interface, as well as pairing some of the microcontrollers using a USART loop!) 
In your case, using SPI means that naturally, the microcontroller with the USB connection to the computer can be the master, and it can just forward the relevant commands from the computer to each slave device. It can also read the updates/measurements from each slave and send these to the computer. At 8MHz, and 0.5m wire length, I don't think it will become a problem. But if it is, try to be more careful of capacitance (keep ground and signal wires from getting too close, and also be careful of connections between different conductors), and also signal termination. In the unlikely event that it remains a problem, you can reduce the clock rate, but I don't think it is necessary.
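A back-of-the-envelope throughput comparison using the clock divisors quoted in the answer (16 MHz AVR clock; USART max is CLK/16, SPI max is CLK/2), assuming the standard 10-bits-per-byte USART framing (start + 8 data + stop):

```python
# Peak signalling rates from the answer's divisors.
F_CPU = 16_000_000
usart_baud = F_CPU // 16      # 1 MHz
spi_clock = F_CPU // 2        # 8 MHz

# USART spends 10 bit-times per byte (start + 8 data + stop), which is
# where the answer's "around 90 KB/s" figure comes from; SPI shifts a
# full data bit on every clock, 8 clocks per byte.
usart_bytes_per_s = usart_baud // 10   # 100,000 B/s
spi_bytes_per_s = spi_clock // 8       # 1,000,000 B/s

assert spi_bytes_per_s // usart_bytes_per_s == 10
```

So SPI is roughly an order of magnitude faster per byte before even counting its full-duplex advantage.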
{ "domain": "robotics.stackexchange", "id": 248, "tags": "microcontroller, electronics, communication, robotic-arm" }
Confusion regarding mass-spring system
Question: The force equation for the above system is $$ \Sigma F=ma $$ which is $$ m\frac{d^2x}{dt^2}=-kx $$ This should always be true, but I became confused as I thought more about it. When I set $a=\frac{d^2x}{dt^2}=g$ $$ mg=-kx $$ does this make sense? Since $m>0$, $g>0$, and $k>0$, $x$ should be $<0$ for this to hold. What is the correct way to interpret this? Answer: Assuming that your system is in a gravitational field directed in the $x$ direction, which you did not specify, you have two forces acting on your body: the elastic force of the spring and the gravitational force. It is the sum of these two that you need to equate to $ma$. So the equation of motion you want to solve is $$m \frac{d^2 x}{dt^2}=-kx + mg$$ The gravitational force is just one of several forces that can act on a body, while the equation you wrote means that the sum of all the forces acting on a body equals the gravitational force. This is false.
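One consequence of the corrected equation of motion is easy to compute: setting $\frac{d^2x}{dt^2} = 0$ in $m\ddot{x} = -kx + mg$ gives the static equilibrium stretch $x_\text{eq} = mg/k$ (the example numbers below are arbitrary):

```python
# Static equilibrium of a hanging mass on a spring: from m*x'' = -k*x + m*g,
# setting x'' = 0 gives x_eq = m*g / k.
def equilibrium_stretch(m_kg, k_n_per_m, g=9.81):
    return m_kg * g / k_n_per_m

# e.g. a 2 kg mass on a 100 N/m spring stretches it by about 19.6 cm:
assert abs(equilibrium_stretch(2.0, 100.0) - 0.1962) < 1e-9
```

This is the positive $x$ the asker was looking for: gravity shifts the equilibrium point of the oscillator rather than contradicting the spring equation.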
{ "domain": "physics.stackexchange", "id": 33708, "tags": "newtonian-mechanics, forces, spring, free-body-diagram" }
Does the law of Universal Gravitation apply to every matter?
Question: If I have a big ball of 20,000,000kg and another of 100g, does it mean that the big ball will pull the small ball towards it? Answer: The magnitude of the force for both balls is $F = G \frac{M m}{r^2}$. Here $M$ is the mass of the big ball and $m$ the mass of the small ball, both assumed to be point masses. As you can see, the force is the same on both balls. So $F_{Big} = F_{small}$. Combined with $F = m\cdot a$, this leads to $m a_{small} = M a_{big}$, or in a much nicer way: $$\frac{a_{small}}{a_{big}} = \frac M m = 200,000,000$$ As you can see, both balls will move, but the smaller ball has a much bigger acceleration. With this you can assume that the big ball stays at rest and only the small ball is moving.
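Plugging the question's masses into $F = G\frac{Mm}{r^2}$ makes the point numerically (the 10 m separation is an arbitrary choice for illustration):

```python
# Newtonian gravity: equal and opposite forces, very unequal accelerations.
G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
M, m, r = 2.0e7, 0.1, 10.0    # 20,000,000 kg, 100 g, 10 m apart

F = G * M * m / r**2          # same force magnitude on each ball
a_small = F / m               # acceleration of the 100 g ball
a_big = F / M                 # acceleration of the 20,000,000 kg ball

# The ratio of accelerations is the mass ratio, 200,000,000:
assert abs(a_small / a_big - 2.0e8) < 1.0
```

Both accelerations are tiny at this distance, but the small ball's is 200 million times larger, which is why treating the big ball as stationary is a good approximation.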
{ "domain": "physics.stackexchange", "id": 8363, "tags": "newtonian-mechanics, newtonian-gravity" }
Can radio waves be formed into a pencil beam?
Question: Laser beams are said to have high "spatial coherence". This means that the beam is highly concentrated even at long distances (low spread). Can this be achieved with radio waves (much longer waves) or is it due to laser's stimulated emission? Answer: Laser light is spatially and temporally coherent. The stimulated emission is mainly responsible for the temporal coherence. So the answer is yes, you can create an electromagnetic beam that is spatially but not temporally coherent by placing a pinhole close to the source, and then another pinhole in the far field of the first pinhole. This beam will not spread out very much. (But also remember that laser light does spread out.) Note that for RF frequencies, a "pinhole" is probably several meters in diameter. The far field distance is given by this inequality: $L \gg a^2/\lambda$, where L is the distance, a is the diameter of the hole, and $\lambda$ is the wavelength. However, creating a RF pencil beam is probably not practical. The term "pencil beam" mentioned in the Wikipedia article is explained as being diffraction-limited. The size of a diffraction-limited beam gets larger with, I believe, the square root of the wavelength. It would be more like a gas-pipeline beam than a pencil beam.
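To get a feel for the numbers, here is the far-field condition evaluated for assumed values (a 3 m aperture at 3 GHz; both numbers are illustrative, not from the answer):

```python
# Far-field condition L >> a^2 / lambda for an RF "pinhole" (assumed values).
c = 3.0e8        # m/s, speed of light
f = 3.0e9        # Hz, assumed frequency (3 GHz)
lam = c / f      # wavelength: 0.1 m
a = 3.0          # m, assumed aperture diameter

L_far = a**2 / lam
print(lam, L_far)   # 0.1 90.0 -> the second pinhole belongs well beyond ~90 m
```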
{ "domain": "physics.stackexchange", "id": 18134, "tags": "electromagnetic-radiation, laser" }
What does it mean when people say a cost function is something you want to minimize?
Question: I am having a lot of trouble understanding this. Does it mean you should not use the cost function very often? Answer: No, it means you are trying to find the inputs that make the output of the cost function the smallest. It doesn't mean that you should "minimize" use of it.
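A minimal illustration of what "minimize" means here, using plain gradient descent on a toy cost $J(w) = (w-3)^2$ (the function and numbers are made up for the example):

```python
# "Minimizing the cost function" means searching for the input that makes
# its output smallest -- here with gradient descent on J(w) = (w - 3)^2,
# whose minimum is at w = 3.
def cost(w):
    return (w - 3.0) ** 2

def grad(w):
    return 2.0 * (w - 3.0)

w = 0.0
for _ in range(100):
    w -= 0.1 * grad(w)   # step downhill along the gradient

print(round(w, 4))  # 3.0 -- the minimizing input
```

The cost function itself is evaluated on every step; it is its *output* that gets driven down, not its usage.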
{ "domain": "datascience.stackexchange", "id": 939, "tags": "machine-learning, beginner, linear-regression" }
PSD Magnitude with Welch's Method
Question: I'm having some trouble getting consistent results with PSD calculations. If I just take a basic sine wave fs = 20000; len = 16384; fsig = 1000; Sig = 1.5 * sin( (1:len) * 2 * pi * fsig / fs); fsig = 700; Sig = Sig + 0.9 * sin( (1:len) * 2 * pi * fsig / fs); and I run it through the following two Welch's-method calls with MATLAB's pwelch: [matWelchD1, matWelchF1] = pwelch(Sig, length(Sig), 0, [], fs); [matWelchD2, matWelchF2] = pwelch(Sig, ceil(length(Sig/2)), 0, [], fs); I get the $1\textrm{ kHz}$ peak at approximately $0.65$ and $0.26$. I would love to blame frequency leakage due to how the frequencies line up, and there is some support for this because the $0.65$ peak has a much narrower base, but I see a similar result on real world data with FFTs that have broadband frequencies. Anyone know why the two implementations would be different? Answer: Your value for $f_s$ is missing. Assuming you work above the Nyquist sampling rate (i.e. $f_s > 2 f_{sig}$), I obtain the following results: fs = 20000; len = 16384; fsig = 1000; Sig = 1.5 * sin( (1:len) * 2 * pi * fsig / fs); fsig = 700; Sig = Sig + 0.9 * sin( (1:len) * 2 * pi * fsig / fs); [matWelchD1, matWelchF1] = pwelch(Sig, length(Sig), 0, [], fs); [matWelchD2, matWelchF2] = pwelch(Sig, ceil(length(Sig)/2), 0, [], fs); subplot(2,1,1); plot(matWelchF1, (matWelchD1), '-o'); grid; subplot(2,1,2); plot(matWelchF2, (matWelchD2), '-o'); grid; df1 = matWelchF1(2) - matWelchF1(1); df2 = matWelchF2(2) - matWelchF2(1); df1 df2 Output: df1 = 1.2207 df2 = 2.4414 As you can see, both frequencies are nicely represented in the figures. One more comment on the window length (i.e. the second parameter to pwelch): it determines how big the sections are into which the signal is divided. Each section is multiplied by a Hamming window and then the FFT is taken. Afterwards, the periodograms of all sections are averaged, yielding the PSD estimate. I.e. using the signal length as the window length would result in a single section. 
For your stationary signal this is fine, but in reality you would want to have a smaller value, or leave the parameter out (or use the empty vector []), so that the PSD can be more accurately estimated. As you can see, different window lengths create a different number of frequency samples: the longer the window, the more frequency samples. Note that pwelch calculates the power spectral density, i.e. the power per hertz. In order to get the power of a frequency you need to multiply by the bandwidth of each bin: >> sum(matWelchD1) * (matWelchF1(2)-matWelchF1(1)) ans = 1.5300 >> sum(matWelchD2) * (matWelchF2(2)-matWelchF2(1)) ans = 1.5300 Both PSDs contain the same overall power. Since the frequency samples in the first PSD are closer to each other, the PSD has higher peaks for each frequency (to deliver the same overall power).
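For readers without MATLAB, the same bandwidth-times-PSD check can be sketched with SciPy's welch. Note that SciPy's default window is Hann rather than MATLAB's default Hamming, so individual peak heights differ slightly, but the integrated power comes out the same:

```python
import numpy as np
from scipy.signal import welch

# Rebuild the two-tone test signal and repeat the power check with SciPy.
fs, n = 20000, 16384
t = np.arange(n) / fs
sig = 1.5 * np.sin(2 * np.pi * 1000 * t) + 0.9 * np.sin(2 * np.pi * 700 * t)

f1, p1 = welch(sig, fs=fs, nperseg=n, noverlap=0)       # one long segment
f2, p2 = welch(sig, fs=fs, nperseg=n // 2, noverlap=0)  # two segments

# Integrating each PSD over frequency recovers the total signal power:
tot1 = np.sum(p1) * (f1[1] - f1[0])
tot2 = np.sum(p2) * (f2[1] - f2[0])
print(tot1, tot2)   # both ~1.53 (= 1.5**2/2 + 0.9**2/2)
```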
{ "domain": "dsp.stackexchange", "id": 4465, "tags": "matlab, fft, discrete-signals, power-spectral-density" }
Is AT or GC content important in electrophoresis?
Question: Will it make a difference in running speed if we run samples with the same number of bases but different AT/GC content? Answer: It does make a difference on polyacrylamide. A and C are faster while G and T are slower. Image from publication: The "standards" are AAA/TTT/GGG/CCC molecules.
{ "domain": "biology.stackexchange", "id": 2060, "tags": "molecular-biology, homework" }
IMU data to be used with robot_localization
Question: I am using the package robot_localization with an IMU. In the docs, under Preparing Your Data for Use with robot_localization, it says Adherence to specifications: As with odometry, be sure your data adheres to REP-103 and the sensor_msgs/Imu specification. Double-check the signs of your data, and make sure the frame_id values are correct. REP-103 says the axes should be oriented in the following fashion: x forward y left z up Which means when an IMU is placed in its neutral position (flat on the surface), the axes should look like this: So for acceleration due to gravity, it should measure - (minus) 9.8 meters per second squared for the Z axis. However the docs say: Acceleration: Be careful with acceleration data. The state estimation nodes in robot_localization assume that an IMU that is placed in its neutral right-side-up position on a flat surface will: Measure +9.81 meters per second squared for the Z axis. If the sensor is rolled +90 degrees (left side up), the acceleration should be +9.81 meters per second squared for the Y axis. If the sensor is pitched +90 degrees (front side down), it should read -9.81 meters per second squared for the X axis. This would mean the axes are oriented in the following manner, implying a left-handed coordinate system: I am definitely missing something here. Can anyone help? Originally posted by Subodh Malgonde on ROS Answers with karma: 512 on 2018-07-24 Post score: 2 Original comments Comment by Subodh Malgonde on 2018-07-24: @Tom Moore I avoided posting an issue in the robot_localization repository since most recent issues have been directed towards answers.ros.org. Please help. Answer: I found the answer to my question. See these 2 posts: IMU convention for robot_localization Why do 3-axis accelerometers seemingly have a left-handed coordinate system? To summarize, in static conditions the IMU measures the opposite of gravity acceleration.
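A tiny sketch of why the sign comes out positive: an ideal accelerometer reports specific force $f = a - g$, so a static, level IMU with gravity along $-z$ reads $+9.81$ on $z$ (axes per REP-103; the vectors below are assumed for the illustration):

```python
# Specific force reported by an ideal accelerometer: f = a - g.
g = (0.0, 0.0, -9.81)   # gravity in the REP-103 body frame (z up)
a = (0.0, 0.0, 0.0)     # the IMU is static, so its true acceleration is zero

f = tuple(ai - gi for ai, gi in zip(a, g))
print(f)   # (0.0, 0.0, 9.81): +9.81 on z, matching what the docs describe
```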
Originally posted by Subodh Malgonde with karma: 512 on 2018-07-24 This answer was ACCEPTED on the original site Post score: 4 Original comments Comment by Martin Günther on 2018-07-24: That's the perfect answer to the question. Thanks for taking the time to answer your own question and helping others! :) Comment by Tom Moore on 2018-07-30: Apologies for not responding in a timely fashion, but I'm glad you were able to track down the correct answer!
{ "domain": "robotics.stackexchange", "id": 31347, "tags": "imu, navigation, ros-melodic, acceleration, robot-localization" }
Reduction from $A$ to $B$ as execution of Turing machines
Question: As explained in answers to this question, a reduction $A \le B$ can be represented in the following way. But in this example: from here At least as I understand it: The reduction is from $\overline{HP}$ to $L_2$, so $A$ is $\overline{HP}$ and $B$ is $L_2$. But it's implemented in the way that $M'$ is the TM of $L_2$ and $M$ is the TM of $\overline{HP}$, and $M'$ executes $M$ (I mean $L_2$ is the external and $\overline{HP}$ is the internal). Where am I wrong? Answer: What confuses you is that the words in the languages are encodings of machines that simulate runs of other machines, but at the end of the day, these are just words. Specifically, given input $y = \langle M, x\rangle$ for the reduction, the reduction itself does not simulate the run of $M$ on $x$; the reduction only outputs $f(y) = \langle M'\rangle$, which is an encoding of a machine. The machine $M'$, by definition, simulates the run of $M$ on $x$, given any input $w$ for $M'$. You can think of this reduction as a python code $f$ that is given as input another python code $M$ and an input $x$ for $M$. Then, $f$ halts and outputs a python code $M'$. Note that $M'$ may never halt on any input $w$, but this is okay as $f$ never runs $M'$. Please note that the first attached figure is irrelevant here as it does not describe how the reduction operates (how $M_f$ computes $f(x)$, given $x$), it only describes how you can define a machine $M_A$ for $\overline{HP}$ assuming that you already have: 1) a machine $M_B$ for $L_2$, and 2) a machine $M_f$ that computes the reduction $f$.
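To make the python analogy concrete, here is a toy version of such a reduction (the name M_prime and the helper structure are invented for the illustration): f receives the source of a program M and an input x, and merely builds the text of a new program M'. It never runs M or M', so it always halts, even when M would loop forever.

```python
def f(M_source: str, x: str) -> str:
    # Build (but never execute) the source code of M'. M' ignores its own
    # input w and simulates M on the fixed input x.
    return (
        "def M_prime(w):\n"
        f"    M_source = {M_source!r}\n"
        f"    x = {x!r}\n"
        "    env = {}\n"
        "    exec(M_source, env)   # define M\n"
        "    return env['M'](x)    # run M on x, regardless of w\n"
    )

# A program that never halts -- f still halts instantly, because it only
# manipulates M's source as a string.
looping_M = "def M(x):\n    while True:\n        pass\n"
code_of_M_prime = f(looping_M, "some input")
print("M_prime" in code_of_M_prime)   # True: f produced a program and halted
```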
{ "domain": "cs.stackexchange", "id": 17572, "tags": "complexity-theory, turing-machines, computability, reductions" }
Do stars have an exactly spherical shape?
Question: Planets (at least, some of them like Earth) aren't exactly spherical - but what about stars? Is the Sun perfectly spherical, for example? What may be the reasons if it isn't? What about other stars? Answer: No, no stars have an exactly spherical shape. The reason for this is that the centrifugal force of the star's rotation is much greater at the equator of the star than it is at the poles, for the simple reason that the rotational velocity is greater. This greater centrifugal force pushes the equator outwards, stretching the star into an oblate shape. One consequence is that the poles are hotter and brighter than the equator; this is called gravity darkening. Because we have only visually resolved the surfaces of a few other stars, it's not something that is commonly directly observed (though the effect can also be observed from stellar spectra). Regulus is one star that has been observed as an oblate spheroid spectrographically. The image below shows the star Altair, directly imaged using the CHARA interferometric array. Some stars are also non-spherical due to the effects of a nearby star in a close binary orbit. Much like how the Moon causes tides on Earth, two stars can stretch each other's surfaces. If they're very close to one another, as in the picture below, there can even be mass transfer between them (see common envelope).
{ "domain": "astronomy.stackexchange", "id": 258, "tags": "star" }
Evaluating $\texttt{FermionOperator}$ equality in Openfermion
Question: Apologies if this is the wrong place to ask this kind of question. I have a simple question about Openfermion. I have two normal ordered FermionOperators A and B in Openfermion which are not equal. However, when I evaluate A == B, it returns True. Usually, the equality function seems to work fine, but in this specific case it doesn't seem to work. Below is the source code to replicate this problem. My question is: why is this happening? I'm wondering if this is maybe a problem with my installation, or perhaps a bug (although, looking at the source code, I cannot see any problems with the equality function). Or maybe I have some sort of conceptual misunderstanding? Source code: from openfermion import FermionOperator, normal_ordered #string representation of FermionOperators A_string = '(4.525943261204361+0j) [0^ 0] +\n(-0.3849231244033131+0j) [0^ 1] +\n(0.8547260787716595+0j) [0^ 2] +\n(-1.1299596129151999+0j) [0^ 3] +\n(-0.3849231244033129+0j) [1^ 0] +\n(-4.381884800061626+0j) [1^ 0^ 1 0] +\n(-4.105905711065374+0j) [1^ 0^ 2 0] +\n(-0.944626497232035+0j) [1^ 0^ 2 1] +\n(3.5938875653898616+0j) [1^ 0^ 3 0] +\n(0.5559380952173161+0j) [1^ 0^ 3 1] +\n(-5.184266085346127+0j) [1^ 0^ 3 2] +\n(0.3677762971945129+0j) [1^ 1] +\n(3.6630156211728777+0j) [1^ 2] +\n(-2.5183774065038045+0j) [1^ 3] +\n(0.8547260787716596+0j) [2^ 0] +\n(-4.105905711065371+0j) [2^ 0^ 1 0] +\n(2.0232821070880056+0j) [2^ 0^ 2 0] +\n(0.8540908922866308+0j) [2^ 0^ 2 1] +\n(1.2460804358695565+0j) [2^ 0^ 3 0] +\n(-5.690396818975691+0j) [2^ 0^ 3 1] +\n(3.2577629759360036+0j) [2^ 0^ 3 2] +\n(3.663015621172878+0j) [2^ 1] +\n(-0.360964409584815+0j) [2^ 1^ 0^ 2 1 0] +\n(-0.515120856381232+0j) [2^ 1^ 0^ 3 1 0] +\n(1.2472039494390206+0j) [2^ 1^ 0^ 3 2 0] +\n(1.2551065744601149+0j) [2^ 1^ 0^ 3 2 1] +\n(-0.9446264972320322+0j) [2^ 1^ 1 0] +\n(0.8540908922866324+0j) [2^ 1^ 2 0] +\n(6.764363308219267+0j) [2^ 1^ 2 1] +\n(-0.5061307336295583+0j) [2^ 1^ 3 0] +\n(1.0665005168981174+0j) [2^ 1^ 3 1] 
+\n(-1.5870659190452852+0j) [2^ 1^ 3 2] +\n(-1.624576120995937+0j) [2^ 2] +\n(2.9075472932497357+0j) [2^ 3] +\n(-1.1299596129151999+0j) [3^ 0] +\n(3.5938875653898643+0j) [3^ 0^ 1 0] +\n(1.2460804358695592+0j) [3^ 0^ 2 0] +\n(-0.5061307336295612+0j) [3^ 0^ 2 1] +\n(4.19902020363552+0j) [3^ 0^ 3 0] +\n(2.44500249921506+0j) [3^ 0^ 3 1] +\n(-1.4836026643174591+0j) [3^ 0^ 3 2] +\n(-2.518377406503804+0j) [3^ 1] +\n(-0.5151208563812295+0j) [3^ 1^ 0^ 2 1 0] +\n(-0.3481302294514168+0j) [3^ 1^ 0^ 3 1 0] +\n(1.6862584012597264+0j) [3^ 1^ 0^ 3 2 0] +\n(3.6148854702404463+0j) [3^ 1^ 0^ 3 2 1] +\n(0.5559380952173167+0j) [3^ 1^ 1 0] +\n(-5.69039681897569+0j) [3^ 1^ 2 0] +\n(1.066500516898118+0j) [3^ 1^ 2 1] +\n(2.445002499215056+0j) [3^ 1^ 3 0] +\n(0.0213735881799324+0j) [3^ 1^ 3 1] +\n(6.043172803857193+0j) [3^ 1^ 3 2] +\n(2.9075472932497357+0j) [3^ 2] +\n(1.2472039494390192+0j) [3^ 2^ 0^ 2 1 0] +\n(1.686258401259725+0j) [3^ 2^ 0^ 3 1 0] +\n(-4.660442362479951+0j) [3^ 2^ 0^ 3 2 0] +\n(-6.470812728417476+0j) [3^ 2^ 0^ 3 2 1] +\n(-5.184266085346124+0j) [3^ 2^ 1 0] +\n(1.2551065744601169+0j) [3^ 2^ 1^ 2 1 0] +\n(3.6148854702404485+0j) [3^ 2^ 1^ 3 1 0] +\n(-6.470812728417465+0j) [3^ 2^ 1^ 3 2 0] +\n(-3.4393374365242755+0j) [3^ 2^ 1^ 3 2 1] +\n(3.2577629759360045+0j) [3^ 2^ 2 0] +\n(-1.5870659190452812+0j) [3^ 2^ 2 1] +\n(-1.4836026643174585+0j) [3^ 2^ 3 0] +\n(6.043172803857192+0j) [3^ 2^ 3 1] +\n(-0.7285308515736753+0j) [3^ 2^ 3 2] +\n(0.996525236164558+0j) [3^ 3]' B_string = '(4.525943261204361+0j) [0^ 0] +\n(-0.3849231244033139+0j) [0^ 1] +\n(0.8547260787716593+0j) [0^ 2] +\n(-1.1299596129151999+0j) [0^ 3] +\n(-0.3849231244033138+0j) [1^ 0] +\n(-4.381884800061625+0j) [1^ 0^ 1 0] +\n(-4.105905711065372+0j) [1^ 0^ 2 0] +\n(-0.9446264972320364+0j) [1^ 0^ 2 1] +\n(3.5938875653898648+0j) [1^ 0^ 3 0] +\n(0.5559380952173187+0j) [1^ 0^ 3 1] +\n(-5.184266085346124+0j) [1^ 0^ 3 2] +\n(0.36777629719451244+0j) [1^ 1] +\n(3.6630156211728773+0j) [1^ 2] +\n(-2.518377406503804+0j) [1^ 3] 
+\n(0.8547260787716591+0j) [2^ 0] +\n(-4.105905711065371+0j) [2^ 0^ 1 0] +\n(2.023282107088007+0j) [2^ 0^ 2 0] +\n(0.8540908922866277+0j) [2^ 0^ 2 1] +\n(1.2460804358695592+0j) [2^ 0^ 3 0] +\n(-5.690396818975689+0j) [2^ 0^ 3 1] +\n(3.257762975936007+0j) [2^ 0^ 3 2] +\n(3.6630156211728773+0j) [2^ 1] +\n(-0.36096440958481857+0j) [2^ 1^ 0^ 2 1 0] +\n(-0.5151208563812304+0j) [2^ 1^ 0^ 3 1 0] +\n(1.2472039494390195+0j) [2^ 1^ 0^ 3 2 0] +\n(1.2551065744601182+0j) [2^ 1^ 0^ 3 2 1] +\n(-0.944626497232038+0j) [2^ 1^ 1 0] +\n(0.8540908922866295+0j) [2^ 1^ 2 0] +\n(6.764363308219268+0j) [2^ 1^ 2 1] +\n(-0.5061307336295606+0j) [2^ 1^ 3 0] +\n(1.0665005168981163+0j) [2^ 1^ 3 1] +\n(-1.5870659190452843+0j) [2^ 1^ 3 2] +\n(-1.624576120995937+0j) [2^ 2] +\n(3.9075472932497366+0j) [2^ 3] +\n(-1.1299596129151999+0j) [3^ 0] +\n(3.5938875653898634+0j) [3^ 0^ 1 0] +\n(1.2460804358695587+0j) [3^ 0^ 2 0] +\n(-0.5061307336295595+0j) [3^ 0^ 2 1] +\n(4.199020203635514+0j) [3^ 0^ 3 0] +\n(2.445002499215058+0j) [3^ 0^ 3 1] +\n(-1.483602664317454+0j) [3^ 0^ 3 2] +\n(-2.518377406503804+0j) [3^ 1] +\n(-0.5151208563812286+0j) [3^ 1^ 0^ 2 1 0] +\n(-0.34813022945142125+0j) [3^ 1^ 0^ 3 1 0] +\n(1.6862584012597162+0j) [3^ 1^ 0^ 3 2 0] +\n(3.6148854702404476+0j) [3^ 1^ 0^ 3 2 1] +\n(0.5559380952173192+0j) [3^ 1^ 1 0] +\n(-5.690396818975689+0j) [3^ 1^ 2 0] +\n(1.066500516898117+0j) [3^ 1^ 2 1] +\n(2.4450024992150574+0j) [3^ 1^ 3 0] +\n(0.02137358817993462+0j) [3^ 1^ 3 1] +\n(6.043172803857192+0j) [3^ 1^ 3 2] +\n(2.9075472932497366+0j) [3^ 2] +\n(1.247203949439014+0j) [3^ 2^ 0^ 2 1 0] +\n(1.6862584012597197+0j) [3^ 2^ 0^ 3 1 0] +\n(-4.660442362479957+0j) [3^ 2^ 0^ 3 2 0] +\n(-6.470812728417478+0j) [3^ 2^ 0^ 3 2 1] +\n(-5.184266085346124+0j) [3^ 2^ 1 0] +\n(1.25510657446012+0j) [3^ 2^ 1^ 2 1 0] +\n(3.614885470240445+0j) [3^ 2^ 1^ 3 1 0] +\n(-6.470812728417472+0j) [3^ 2^ 1^ 3 2 0] +\n(-3.439337436524271+0j) [3^ 2^ 1^ 3 2 1] +\n(3.2577629759360067+0j) [3^ 2^ 2 0] +\n(-1.5870659190452825+0j) [3^ 2^ 2 1] 
+\n(-1.483602664317455+0j) [3^ 2^ 3 0] +\n(6.043172803857192+0j) [3^ 2^ 3 1] +\n(-0.7285308515736739+0j) [3^ 2^ 3 2] +\n(0.996525236164558+0j) [3^ 3]' #create FermionOperators A = normal_ordered(FermionOperator(A_string)) B = normal_ordered(FermionOperator(B_string)) #if A and B are equal, A - B should be zero up to numerical precision print("A - B = {}".format(A - B)) #since A and B are not equal, A == B should return False print("A == B: {}".format(A == B)) The output is A - B = (-1.0000000000000009+0j) [2^ 3] A == B: True Answer: This looks like a bug$^1$. Look at following lines of code from the v1.3.0 release branch for the implementation of SymbolicOperator.isclose (which implements FermionOperator.__eq__): if not (isinstance(a, sympy.Expr) or isinstance(b, sympy.Expr)): tol *= max(1, abs(a), abs(b)) This occurs inside of a for loop comparing A and B componentwise. tol is provided to isclose once at call time, then gets modified by this line to grow larger every time a term of A or B has a (non-sympy) coefficient greater than 1. If the coefficients of each term in A or B are all greater than 1, this will eventually corrupt the tolerance after some number of terms and the comparison becomes vacuous. Here's a minimal reproducing example x = FermionOperator("0^ 0") y = FermionOperator("0^ 0") print(EQ_TOLERANCE) # should be 1e-08 for # construct two identical operators up to some number of terms num_terms_before_ineq = 30 for i in range(num_terms_before_ineq): x += FermionOperator(f" (10+0j) [0^ {i}]") y += FermionOperator(f" (10+0j) [0^ {i}]") # add a final term that is equal within tol but gets missed by the isclose check xfinal = FermionOperator(f" (1+0j) [0^ {num_terms_before_ineq + 1}]") yfinal = FermionOperator(f" (2+0j) [0^ {num_terms_before_ineq + 1}]") # these two terms are not equal within tol.. 
print(xfinal == yfinal) # >>> False print(xfinal - yfinal) # (-1+0j) [0^ 31] # ...but these two terms will be, because the `tol` argument to `isclose` was corrupted x += xfinal y += yfinal print(x == y) # >>> True print(x - y) # (-1+0j) [0^ 31] If you make num_terms_before_ineq a bit smaller then this issue doesn't come up. But in your case there's a ton of terms, so this seems to be exactly what's happening. $^1$ Their intention here almost looks like comparing the terms based on a relative difference compared to $\lVert \mathbf{a} \rVert_2$, i.e. check if each term agrees within a relative tolerance $(a_i - b_i) / \text{max}(\lVert \mathbf{a} \rVert_2, \lVert \mathbf{b} \rVert_2) \leq \epsilon$. But in that case the docstring for equality checking is wrong, Comparison is done for each term individually. Return True if the difference between each term in self and other is less than EQ_TOLERANCE
{ "domain": "quantumcomputing.stackexchange", "id": 3441, "tags": "programming, chemistry, openfermion" }
3rd analytical group of cations
Question: If a mixture of Mn$^{2+}$, Zn$^{2+}$, Co$^{2+}$, Ni$^{2+}$ and Al$^{3+}$ salts is dissolved in water, a cloudy white suspension is obtained. I guess that the aluminium and zinc ions hydrolysed and the respective hydroxides were formed. Am I right, or might one of them not hydrolyze enough to make the solution cloudy, or could some other cation have hydrolysed, too? Answer: Aluminium and zinc will produce a white precipitate due to formation of the hydroxide. Both of these hydroxides dissolve in excess acid or alkali due to their amphoteric character. Manganese, nickel and cobalt probably remain in solution, as these cations are precipitated in the 4th group as sulphides.
{ "domain": "chemistry.stackexchange", "id": 3010, "tags": "inorganic-chemistry, solubility, analytical-chemistry, identification" }
Doesn't the use of a thermometer alter the temperature of the system?
Question: If I place a mercury thermometer in hot water, heat energy will transfer from the water to the mercury inside the thermometer. Will this continue until thermal equilibrium is reached, so that the mercury shows the temperature of the water? However, if this is so, will the thermometer show the right temperature, given that some of the heat energy is transferred to the thermometer and this in turn will cause the original temperature of the water to fall? Please correct me if I am wrong. Answer: You are right. The thermal equilibrium will eventually be reached. In this process, heat is transferred from the water to the thermometer. This increases the temperature of the thermometer and decreases the temperature of the water until they are equal. However, generally, the amount of water is large so that the heat it loses is too small to significantly change its temperature.
{ "domain": "physics.stackexchange", "id": 10477, "tags": "thermodynamics, experimental-physics, temperature, measurements" }
What really goes on in a vacuum?
Question: I've been told that a vacuum isn't actually empty space, rather that it consists of antiparticle pairs spontaneously materialising then quickly annihilating, which leads me to a few questions. Firstly, is this true? And secondly, if so, where do these particles come from?... (do the particles even have to come from anywhere?) Answer: I don't think the particle-anti-particle picture is a very good one to grasp what's going on. Essentially, it's a consequence of zero-point energy. In classical physics, the lowest energy state of a system, its ground state, is zero. In quantum mechanics, it's a non-zero (but very small) value. The easiest way to see how this zero-point energy arises is through an elementary problem in quantum mechanics, the quantum harmonic oscillator. The classical harmonic oscillator is a system in which there is a restorative force proportional to the displacement. For example, a spring — the further you pull the end of a spring, the more force the spring resists your pull. Modeling this system in classical physics is very easy. Things are a bit different in quantum mechanics — the state of a particle is specified by its wavefunction, which encodes the probabilities of finding the particle in certain positions. Another property of quantum systems is that their energies come in discrete energy levels. If you're interested in how it is worked out, you can see here. You can derive the following result for the energy levels of the particle $$E=\hbar \omega\left( n+\frac {1}{2}\right).$$ Since $n$ specifies the energy level, setting $n$ to zero will give us the ground state. However, we can see this isn't zero — so the lowest possible state of a quantum system still contains some energy. In a practical example, liquid helium does not freeze under atmospheric pressure at any temperature because of its zero-point energy. One very important thing to note is the following: zero-point energy does not violate the conservation of energy. 
A common explanation is that the uncertainty principle allows particles to violate it 'if they're quick enough!'. This simply isn't true. From the Wiki page on conservation of energy: In quantum mechanics, energy of a quantum system is described by a self-adjoint (Hermite) operator called Hamiltonian, which acts on the Hilbert space (or a space of wave functions) of the system. If the Hamiltonian is a time independent operator, emergence probability of the measurement result does not change in time over the evolution of the system. Thus the expectation value of energy is also time independent. The local energy conservation in quantum field theory is ensured by the quantum Noether's theorem for energy-momentum tensor operator. Note that due to the lack of the (universal) time operator in quantum theory, the uncertainty relations for time and energy are not fundamental in contrast to the position momentum uncertainty principle, and merely holds in specific cases (See Uncertainty principle). Energy at each fixed time can be precisely measured in principle without any problem caused by the time energy uncertainty relations. Thus the conservation of energy in time is a well defined concept even in quantum mechanics. Now, on to your question — in quantum field theory, all particles are modeled as excitations of fields. That is, every particle has an associated field. For the particles that carry forces, these are the familiar force fields — such as the electromagnetic field. Fields take a value everywhere in space. Now, in classical mechanics, this value would be zero in most places. However, as we saw above, the ground state of a quantum field is non-zero. So, even in empty space (or 'free space') these fields have a very small value. So, empty space has vacuum energy.
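For a sense of scale, here is the zero-point energy $E_0 = \hbar\omega/2$ evaluated for an assumed oscillator frequency of 1 THz (the frequency is illustrative, not from the answer):

```python
import math

# Ground-state (n = 0) energy from E = hbar * omega * (n + 1/2).
hbar = 1.054571817e-34        # J*s, reduced Planck constant
omega = 2 * math.pi * 1.0e12  # rad/s, assumed 1 THz oscillator

E0 = 0.5 * hbar * omega
print(E0)   # ~3.3e-22 J: tiny, but not zero
```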
{ "domain": "physics.stackexchange", "id": 4306, "tags": "quantum-field-theory, vacuum, virtual-particles" }
Understanding bremsstrahlung spectrum in Landau-Lifschitz
Question: I am reading through Landau, Lifschitz "The classical theory of fields", in particular I am reading about bremsstrahlung spectrum in $\S70$ "Radiation in the case of Coulomb interaction". The book says that the radiation emitted by a single particle is given by: $$d\mathcal{E}_\omega=\dfrac{\pi \mu^2\alpha^2\omega^3}{6c^3\mathcal{E}^2}\left( \dfrac{e_1}{m_1}-\dfrac{e_2}{m_2} \right)^2\left[ (H^{(1)'}_{i\nu}(i\nu\epsilon))^2- \dfrac{\epsilon^2-1}{\epsilon^2} (H^{(1)}_{i\nu}(i\nu\epsilon))^2\right]d\omega, \tag{70.18}$$ where $e_i$ and $m_i$ ($i=1,2$) are the particles' charges and masses, $\alpha=|e_1e_2|$, $\mu$ is the reduced mass, $\mathcal{E}$ is the energy, $\nu=\dfrac{\omega\alpha}{\mu v^3}$ ($v$ is the relative velocity) and $$\epsilon=\sqrt{1+\dfrac{2\mathcal{E}M^2}{\mu \alpha^2}},$$ where $M$ is the angular momentum. There are two things I do not understand: Is the angular momentum $M$ a parameter? I cannot make sense of the unit of measurement in the equation for $d\mathcal{E}_\omega$: for example $\nu$ should be a number but from its definition it is far from being clear. Can anyone enlighten me? Answer: Landau uses Gaussian units (see $\S27$, this is stated right before equation $(27.4)$), in which the unit of charge, the statcoulomb, is defined as $$1\,\mathrm{statC} = 1\,\mathrm{g}^{1/2}\,\mathrm{cm}^{3/2}\,\mathrm{s}^{-1}.$$ This, in particular, gives Coulomb's law in the form $$F=\frac{e_1e_2}{r^2},$$ without a special constant of proportionality. With this, all the equations, including $(70.12)$ that defines the semimajor axis as $a=\frac{\alpha}{2\mathcal{E}}$, make sense with the $\alpha$ defined as $\alpha=|e_1e_2|$. And $\nu$ then simply becomes a dimensionless number, which can be passed to transcendental functions. As for whether the angular momentum $M$ is a parameter, yes, it is. It's quite usual to "hide" it inside the eccentricity of the orbit when analyzing motions in Coulomb-type fields.
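The dimensional bookkeeping for $\nu$ can be checked mechanically by tracking exponents of (gram, centimetre, second), using $[\alpha]=[e^2]=\mathrm{g\,cm^3\,s^{-2}}$ from the statcoulomb definition. This is just a sketch of the unit check, not part of the original derivation:

```python
# Track dimensions as (gram, cm, second) exponent tuples and verify that
# nu = omega * alpha / (mu * v^3) is dimensionless in Gaussian units.
def mul(a, b): return tuple(x + y for x, y in zip(a, b))
def div(a, b): return tuple(x - y for x, y in zip(a, b))

alpha = (1, 3, -2)   # [e^2] = g cm^3 s^-2 (since 1 statC = g^1/2 cm^3/2 s^-1)
omega = (0, 0, -1)   # s^-1
mu    = (1, 0, 0)    # g (reduced mass)
v3    = (0, 3, -3)   # (cm/s)^3

nu = div(mul(omega, alpha), mul(mu, v3))
print(nu)   # (0, 0, 0): dimensionless, so it can be fed to the Hankel functions
```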
{ "domain": "physics.stackexchange", "id": 69187, "tags": "electromagnetism, electromagnetic-radiation" }
If a force is 1 newton metre, what is it at 2 metres?
Question: If I have a force, say 24 kg/cm what would that equate to at 2cm? I would like to know the formula for calculating this. For example: if a motor can hold an object of 24 kg at 1 cm from its pivot point, what is it capable of holding at 5 cm? And how is it calculated? Answer: The units you mean are probably kg*cm (sometimes written kg.cm in robotics). Your original specification of 24 kg*cm is a torque and not a force. The difference in practice is that, as the units imply, the resulting force at a point decreases with its distance from the "pivot point". So 24 kg*cm means that it can hold 24 kg at 1 cm or 12 kg at 2 cm etc. Notice that strictly speaking kg is not a standard unit of force in physics since the force of gravity on a kilogram varies. In robotics, this seems to be the standard unit of servo torque though.
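In code, the relation is just the torque rating divided by the lever arm (values taken from the question):

```python
# A servo rated 24 kg*cm can hold torque/d kilograms at d centimetres.
torque = 24.0            # kg*cm, the servo rating from the question
for d in (1, 2, 5):      # cm from the pivot
    print(d, torque / d)   # 1 -> 24.0, 2 -> 12.0, 5 -> 4.8
```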
{ "domain": "physics.stackexchange", "id": 1087, "tags": "homework-and-exercises, forces, torque, unit-conversion, spring" }
What is $E$ in the Planck-Einstein relation?
Question: The Planck-Einstein relation was first given for photons $$E = h\nu$$ But later, de Broglie extended it to matter waves, and showed that it would hold for all particles as well. The $E$ for a photon is simple enough to define (the relativistic KE, if I'm right), but what is $E$ for other particles, say for an electron? Is it the kinetic energy, potential energy, or total energy? Answer: $\newcommand{\ket}[1]{\left| #1 \right>}$ $\newcommand{\bra}[1]{\left< #1 \right|}$ $\newcommand{\bk}[2]{\left< #1 \big| #2 \right> }$ At this level of QM you really don't have kinetic energy and potential energy. Some like to call the expectation value of the operator $\frac{\hat{p}^2}{2m}$ on the state $\ket{\Psi}$ the kinetic energy and the expectation value of the operator $V(\hat{x})$ on the state $\ket{\Psi}$ the potential energy. However, they are not like the classical kinetic and potential energies. The energy in the equation $E=\hbar \omega=h \nu$ is the total energy of the particle that you are talking about. For example, for the harmonic oscillator you have $E=\hbar \underbrace{\omega_0 \left(n+\frac{1}{2}\right)}_{:=\omega} = \hbar \omega$, where $\omega_0$ is the angular frequency of the harmonic oscillator. From this you can see that $\hbar \omega$ gives the total energy.
{ "domain": "physics.stackexchange", "id": 20815, "tags": "quantum-mechanics, special-relativity, energy" }
Cr(II) and Mn(III) - their oxidizing and reducing properties?
Question: My textbook states that $\ce{Cr^2+}$ is a reducing agent while $\ce{Mn^3+}$ is an oxidizing agent in spite of both having a $\ce{d^4}$ configuration. The explanation states that when $\ce{Cr^2+}$ gets oxidized to $\ce{Cr^3+}$, it attains stable half-filled $\mathrm{t_{2g}}$ orbitals, but when $\ce{Mn^3+}$ gets reduced to $\ce{Mn^2+}$, it attains stable half-filled $\mathrm{d}$ orbitals. First, is it a reasonable explanation? If this is the case, why can't $\ce{Cr^2+}$ get reduced to $\ce{Cr+}$ to attain stable half-filled $\mathrm{d}$ orbitals and $\ce{Mn^3+}$ get oxidized to $\ce{Mn^4+}$ to attain stable half-filled $\mathrm{t_{2g}}$ orbitals, i.e., in short, is the vice versa true? Answer: Related question with the same answer but in a different context of the 4f block: Why don't we see these lanthanide species? You have a misconception regarding the stability of oxidation states. The factors you have listed are honestly not very important in determining the stability of a certain oxidation state. They will tip a delicate balance in favour of one oxidation state or another, but they are hardly the sole determining factors. If you are going from $\ce{M+}$ to $\ce{M^2+}$, you need to consider two main things: (1) you need to ionise the second electron from $\ce{M+}$, denoted $I_2$, which represents an input of energy; (2) $\ce{M^2+}$ is smaller and more highly charged than $\ce{M+}$, so you can gain some energy back on the basis that the $\ce{M^2+}$ ion is more strongly solvated or forms stronger ionic/covalent bonds with anions. Loosely speaking, if the energy you get out of (2) is enough to compensate for the energy you lose in (1), then $\ce{M+}$ will act as a reducing agent. And the converse is true as well. It is for this reason that one does not observe Mg(I) compounds (apart from some esoteric organometallics). The $I_2$ of Mg is so small that it can be recouped extremely easily. 
Likewise, the $I_2$ of K is so large that there's no way any stronger bonding can compensate for that. For the cases you have described: $\ce{Cr^2+}$ simply does not act as an oxidising agent because $I_2$ of Cr is rather small. The same is true of the whole 3d block. How many 3d compounds do you know of with the metal in a +1 oxidation state? Without resorting to organometallic compounds, all I can suggest is copper(I). The reason why none of the other transition metals exist in the +1 oxidation state is precisely because their $I_2$ values are fairly small. Similarly, $\ce{Mn^3+}$ does not act as a reducing agent because $I_4$ is way too large. Remember that successive ionisation is extremely difficult. What your textbook should have written is: The $I_3$ of Mn is anomalously large because you are "losing the stability of a half-filled 3d subshell". Hence, $\ce{Mn^3+}$ is a good oxidising agent. The hydration enthalpy of $\ce{Cr^3+}$ is anomalously large (for a 3+ ion) because there is a large ligand-field stabilisation energy associated with the $(t_{2g})^3$ configuration. (This is factor number 2, not factor number 1). Hence, $\ce{Cr^2+}$ is a reducing agent. These factors are small compared to the raw size of the ionisation energies. Yes, the $d^3$ ion $\ce{Mn^4+}$ also enjoys a large LFSE, and its hydration enthalpy would be larger than expected. However, that 4th ionisation energy is just too big, and that simply cannot be compensated for by the comparatively tiny increase in LFSE.
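To put rough numbers on the argument above, here is a small sketch (with third ionisation energies quoted approximately from standard tables, in kJ/mol; treat the exact values as illustrative) showing the anomaly the answer describes: $I_3$ jumps at Mn because ionising $\ce{Mn^2+}$ breaks the half-filled $3d^5$ shell, then drops again at Fe.

```python
# Approximate third ionisation energies (kJ/mol) for part of the 3d row,
# from standard tables (rounded values; illustrative only).
I3 = {"V": 2828, "Cr": 2987, "Mn": 3248, "Fe": 2957, "Co": 3232}

def anomalies(energies):
    """Return elements whose I3 exceeds that of the *next* element,
    i.e. where the third ionisation gets easier going right -- the
    signature of losing a specially stable (half-filled d^5) core."""
    order = list(energies)
    return [a for a, b in zip(order, order[1:]) if energies[a] > energies[b]]

print(anomalies(I3))  # Mn sticks out: I3(Mn) > I3(Fe)
```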
{ "domain": "chemistry.stackexchange", "id": 5058, "tags": "redox, transition-metals" }
What is the best Query to retrieve DNA from NCBI?
Question: I want to retrieve a sequence for many species from the Nucleotide database in NCBI. I'm using a command-line approach and I have to figure out the best query that will return exclusively the region of DNA I am interested in and filter out unwanted noisy sequences. I am using a wide range of species (that I have stored as a list of TaxIDs of about 2000 species): it includes small crustaceans, invertebrates, some algae and small vertebrates (reptiles, amphibians and a few mammals). My final goal is to obtain a phylogenetic tree for all or most of the species. It has been suggested that I use these genes to build such a phylogeny: CO1 mitochondrial DNA, 16S rRNA, 18S rRNA. I want to formulate a specific query that will return exclusively those sequences. I'm using the GenBank query builder to visually check the accuracy of my search, and when I find a good query I will use it in the API. So far I came up with the following queries: (COX1[Title] OR CO1[Title]) AND complete[Title] --> 63/2000 species 16S[title] AND complete[title] AND rRNA[title] NOT partial[title] --> 9/2000 species 18S[title] AND complete[title] AND rRNA[title] NOT partial[title] --> 15/2000 species As you can see, the number of species that I get a sequence for is very low compared to the initial 2000 species. I doubt that we have so few available sequences (especially for COX1, which is used for barcoding). Can you help me understand whether my queries are good or not? 
And, if possible, suggest a better alternative. More Info A subset of 10 of my species of interest is Rasbora heteromorpha Elasmopus rapax Gasterosteus aculeatus Palaemonetes pugio Catostomus commersoni Daphnia magna Oryzias latipes Xenopus laevis Tigriopus japonicus Oncorhynchus mykiss Out of these 10, only 2 have sequences available using the COX1 query: Name SeqID Oncorhynchus mykiss EU186789.1 Xenopus laevis AB278691.1 But Daphnia magna is one of the most commonly used organisms in the lab, and I found this paper regarding the complete mitogenome of a specific strain of D. magna, thus implying that the complete mitogenome of the species is already known. That means that there must be a way to retrieve all the other species that have a full mitogenome but no COX1 gene mentioned in their GenBank title. Answer: The search term you need is [Gene Name] rather than [Title]. For example, (Gasterosteus aculeatus[Organism] AND cox1[Gene Name]) OR (Gasterosteus aculeatus[Organism] AND coi[Gene Name]) This results in 266 hits, all of which contain COX1/COI. Using Daphnia magna ... (Daphnia magna[Organism] AND cox1[Gene Name]) OR (Daphnia magna[Organism] AND coi[Gene Name]) This results in 775 hits. Daphnia magna has 61 mtDNA genomes among these; perform the above search, sort by sequence length, and set the sequences per page to 200 (easy). Hope that resolves the issue.
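Since the goal is a command-line pipeline over ~2000 TaxIDs, it may help to generate the [Gene Name]-based terms programmatically and feed the resulting string to the E-utilities. A minimal sketch (the species names and gene synonyms here are just examples; the generated string is what you would pass as the query term):

```python
def build_query(species, genes=("cox1", "coi")):
    """Build a GenBank query term combining an organism filter with
    several gene-name synonyms, following the pattern in the answer:
    (Organism AND gene1) OR (Organism AND gene2) ..."""
    clauses = [f"({species}[Organism] AND {g}[Gene Name])" for g in genes]
    return " OR ".join(clauses)

q = build_query("Daphnia magna")
print(q)
# (Daphnia magna[Organism] AND cox1[Gene Name]) OR (Daphnia magna[Organism] AND coi[Gene Name])
```

Looping this over the TaxID/species list gives one query string per species, ready for whichever E-utilities client is being used.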
{ "domain": "bioinformatics.stackexchange", "id": 2616, "tags": "phylogenetics, phylogeny, sequence-annotation, genbank, barcode" }
SI type safe unit calculations
Question: I wrote a small type-rich MKS Unit system for the consistent and safe calculation of physical units in everyday use. I implemented some of the operators via the Barton-Nackman trick, while defining types via a unique template parameter fixed upon object construction. This prevents e.g. the addition of inconsistent units etc. #include <string> #include <sstream> template<typename Value> struct OperatorFacade { friend constexpr bool operator!=(Value const &lhs, Value const &rhs) noexcept { return !(lhs==rhs); } friend constexpr bool operator>(Value const &lhs, Value const &rhs) noexcept { return rhs < lhs; } friend constexpr bool operator<=(Value const &lhs, Value const &rhs) noexcept { return !(rhs > lhs); } friend constexpr bool operator>=(Value const &lhs, Value const &rhs) noexcept { return !(rhs < lhs); } friend constexpr auto &operator<<(std::ostream &os, Value const other) noexcept { return os << static_cast<long double>(other); } friend constexpr auto operator-(Value const &lhs, Value const &rhs) noexcept { return Value{lhs} -= rhs; } friend constexpr auto operator+(Value const &lhs, Value const &rhs) noexcept { return Value{lhs} += rhs; } }; // Type-safety at compile-time template<int M = 0, int K = 0, int S = 0> struct MksUnit { enum { metre = M, kilogram = K, second = S }; }; template<typename U = MksUnit<>> // default to dimensionless value class Value final : public OperatorFacade<Value<U>> { public: constexpr explicit Value() noexcept = default; constexpr explicit Value(long double magnitude) noexcept : magnitude_{magnitude} {} //constexpr auto &magnitude() noexcept { return magnitude_; } constexpr explicit operator long double() const noexcept { return magnitude_; } friend bool operator==(Value const &lhs, Value const &rhs) { return static_cast<long double>(lhs)==static_cast<long double>(rhs); } friend bool operator<(Value const &lhs, Value const &rhs) { return static_cast<long double>(lhs) < static_cast<long double>(rhs); } auto 
&operator+=(Value const &other) { magnitude_ += static_cast<long double>(other); return *this; } auto &operator-=(Value const &other) { magnitude_ -= static_cast<long double>(other); return *this; } auto const &operator*(long double scalar) const { magnitude_ *= scalar; return *this; } friend auto &operator*(long double scalar, Value const &other) { return other.operator*(scalar); } private: long double mutable magnitude_{0.0}; }; // Some handy alias declarations using DimensionlessQuantity = Value<>; using Length = Value<MksUnit<1, 0, 0>>; using Area = Value<MksUnit<2, 0, 0>>; using Volume = Value<MksUnit<3, 0, 0>>; using Mass = Value<MksUnit<0, 1, 0>>; using Time = Value<MksUnit<0, 0, 1>>; using Velocity = Value<MksUnit<1, 0, -1>>; using Acceleration = Value<MksUnit<1, 0, -2>>; using Frequency = Value<MksUnit<0, 0, -1>>; using Force = Value<MksUnit<1, 1, -2>>; using Pressure = Value<MksUnit<-1, 1, -2>>; using Momentum = Value<MksUnit<1, 1, -1>>; // A couple of convenient factory functions constexpr auto operator "" _N(long double magnitude) { return Force{magnitude}; } constexpr auto operator "" _ms2(long double magnitude) { return Acceleration{magnitude}; } constexpr auto operator "" _s(long double magnitude) { return Time{magnitude}; } constexpr auto operator "" _Ns(long double magnitude) { return Momentum{magnitude}; } constexpr auto operator "" _m(long double magnitude) { return Length{magnitude}; } constexpr auto operator "" _ms(long double magnitude) { return Velocity{magnitude}; } constexpr auto operator "" _kg(long double magnitude) { return Mass{magnitude}; } constexpr auto operator "" _1s(long double magnitude) { return Frequency{magnitude}; } // Arithmetic operators for consistent type-rich conversions of SI-Units template<int M1, int K1, int S1, int M2, int K2, int S2> constexpr auto operator*(Value<MksUnit<M1, K1, S1>> const &lhs, Value<MksUnit<M2, K2, S2>> const &rhs) noexcept { return Value<MksUnit<M1 + M2, K1 + K2, S1 + S2>>{ static_cast<long 
double>(lhs)*static_cast<long double>(rhs)}; } template<int M1, int K1, int S1, int M2, int K2, int S2> constexpr auto operator/(Value<MksUnit<M1, K1, S1>> const &lhs, Value<MksUnit<M2, K2, S2>> const &rhs) noexcept { return Value<MksUnit<M1 - M2, K1 - K2, S1 - S2>>{ static_cast<long double>(lhs)/static_cast<long double>(rhs)}; } // Scientific constants auto constexpr speedOfLight = 299792458.0_ms; auto constexpr gravitationalAccelerationOnEarth = 9.80665_ms2; void applyMomentumToSpacecraftBody(Momentum const &impulseValue) {}; int main(){ std::cout << "Consistent? " << 10.0_ms - 5.0_m << std::endl; } Do you mind taking a look and telling me what you think and where I can improve? Answer: Value issues friend bool operator==(const Value& lhs, const Value& rhs) can be noexcept. Also, why use those static_casts instead of simply comparing lhs.magnitude == rhs.magnitude? That's why it's a friend in the first place: To allow access to non-public members. Similar for operator<. operator+= and operator-=: Both can be noexcept, and in both the static_cast can be replaced by accessing other.magnitude. auto const &operator*(long double scalar) const Just that signature gives me a headache. A multiplication is supposed to return a new value, not modify one of its operands! If I do c = b * a; (and a != 1), I wouldn't expect b == c afterwards. So let's drop the const & part of the return type, and change the function body to return a new Value with the adjusted magnitude: auto operator*(long double scalar) const noexcept { return Value{ _magnitude * scalar }; } Similar for friend auto& operator*(long double scalar, Value const& other): drop the reference from the return type. long double mutable magnitude_{0.0};: Why does this need to be mutable (other than to make the "wrong" scalar multiplication work)? General stuff Please put the user-defined literals into their own namespace. This allows users to choose which literals should apply. 
I can already see collisions with literals from the <chrono> header! Is there a reason long double gets passed by value, but Value isn't? They should be the same size, after all. As @TobySpeight mentioned in a comment, there are other SI base units, like candela or ampere. It's surprising that those are missing. Also, is there a reason for using kilograms instead of grams as the base unit? That factor of 1000 shouldn't make that much of a difference. The operators *=, /=, % and %= are missing throughout the implementation. With some effort, most if not all of this library could be made constexpr, thus allowing better optimization (or precalculation of values at compile time). Some of those "values" not only have a magnitude, but also a direction (i.e. they're a vector, not a scalar). The current system can't really handle those.
{ "domain": "codereview.stackexchange", "id": 31707, "tags": "c++, template-meta-programming, type-safety, unit-conversion" }
If I throw an object upward, will its acceleration change or will it stay constant at $-g$?
Question: I mean, if, for example, I throw a rock upward, will its acceleration always be $-g$, or will it be $-g+a$ because I apply a force on the object when I throw it? (without considering friction) Answer: Its acceleration will be $-g+a$ while you are applying the force on it, but it will be only $-g$ once you stop applying the force. If there were friction, the object would experience an additional acceleration in both cases.
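As a worked example of the two phases (the rock mass and hand force below are assumed numbers, just for illustration):

```python
g = 9.81          # m/s^2, magnitude of gravitational acceleration
m = 0.5           # kg, assumed rock mass
F_hand = 20.0     # N upward, assumed force applied only while the rock is in the hand

# Phase 1: hand still pushing -> net acceleration is -g + a, with a = F/m
a_contact = -g + F_hand / m      # = -9.81 + 40.0 = +30.19 m/s^2 (upward)

# Phase 2: after release -> only gravity acts
a_free = -g                      # = -9.81 m/s^2

print(a_contact, a_free)
```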
{ "domain": "physics.stackexchange", "id": 77399, "tags": "newtonian-mechanics, gravity, newtonian-gravity, acceleration" }
Line follower robot with 2 or 4 motors?
Question: I am developing a line follower robot and I am not sure how many motors I should use: two or four. I'm thinking of using four, but I do not know if it's worth it (it will make the car heavier, consume more power...). Does anyone have an idea? I'm planning to use something like this design here, of Aniki Hirai: http://anikinonikki.cocolog-nifty.com/.shared/image.html?/photos/uncategorized/2014/11/19/cartsix04.jpg. The engine I'll use is a micro-metal motor, from Pololu, just like in the link: https://www.pololu.com/product/3048. I know the question is a little bit vague, but I don't know another way to ask this. Answer: The best option would be to use two wheels, with one motor for each wheel (two in total). Then you can use a caster (a free-rolling wheel) at the front; you do not need to provide power to that. Just provide power to the two back wheels. By varying the speeds of the two back wheels (differential drive) you can move straight and turn as well.
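The differential-drive idea can be sketched numerically: given a desired forward speed v and turn rate omega, the left/right wheel speeds follow from the wheel separation. The kinematics below are the standard differential-drive equations; the numbers are just examples:

```python
def wheel_speeds(v, omega, track_width):
    """Standard differential-drive kinematics: convert a body forward
    speed v (m/s) and yaw rate omega (rad/s, CCW positive) into left and
    right wheel surface speeds, for wheels separated by track_width (m)."""
    v_left = v - omega * track_width / 2.0
    v_right = v + omega * track_width / 2.0
    return v_left, v_right

# Going straight: both wheels equal. Turning left: right wheel faster.
print(wheel_speeds(0.3, 0.0, 0.1))
print(wheel_speeds(0.3, 2.0, 0.1))
```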
{ "domain": "robotics.stackexchange", "id": 1274, "tags": "motor, design, line-following" }
Why do mutations in genes involved in general processes like DNA repair increase the risk of developing specific types of cancer?
Question: For example, mutation in MSH2, which encodes a protein involved in the repair of mismatches that occur during DNA replication, dramatically increases the risk of developing colon cancer. (There are many other examples, like the RB gene, which encodes a tumor suppressor protein and is correlated to retinoblastoma, hence its name.) My question is: how does a mutation in a gene like MSH2, which participates in a general mechanism of DNA repair, increase the risk of a specific type of cancer? Why doesn't it increase the risk of developing cancer in other tissues as well? Answer: I can't fault @WYSIWYG for mentioning the cited Vogelstein article in providing an answer. It points to what seems like a great explanation for why certain cancers arise in some tissues but not others. However, for those who look closely this paper has some serious errors in its derivation of the model, and for good reason it has come under strong fire in the last couple of months. See this for a really important rebuttal: http://ameyer.me/science/2015/01/02/vogel.html Unfortunately, the paper provides an elegantly specious explanation for, in part, the longstanding paradox of why cancers more readily arise in some tissues like the intestines and from seemingly identical mechanisms, but do not seem to arise in other tissues. My favorite example, though @El Cid you do list some good ones, is inactivating BRCA1 mutations; BRCA1 normally plays a role in repairing double-stranded DNA breaks. And yet even when mutated in the germline this only seems to increase cancer risk in the breast, and really only in one sex (females). So the answer is still quite certainly that the field does not understand this paradox, and the cited Vogelstein paper was a total tragedy of a publication. There have been a number of officially submitted critiques of the paper and the Vogelstein lab has been trying to defend it as best they can. 
This is however a great example of how big name labs can seem to get anything published in Cell/Nature/Science. Another thing to consider and @El Cid points to a good example, is the pRb mutations (in a very central tumor suppressor pathway for all cells) cause retinoblastoma and not for instance intestinal or blood cancers very readily, and the retina is not a rapidly dividing tissue. So the Vogelstein paper cannot explain this.
{ "domain": "biology.stackexchange", "id": 3560, "tags": "molecular-biology, molecular-genetics, cancer, mutations" }
c++ CLI Ascii image renderer
Question: I am wondering if this is written well or not; I am just a beginner and would like some feedback on what I should improve. Thanks. #define STB_IMAGE_IMPLEMENTATION #include<iostream> #include "stb_image.h" #include "vector" #define LOG(x) std::cout << x << std::endl const char* density = "#$lho,. "; struct Vec3{ float x, y, z; float Average(){ return (x+y+z)/3.0f; } Vec3& operator/(float other){ x /= other; y /= other; z /= other; return *this; } }; std::ostream& operator<<(std::ostream& stream, const Vec3& other){ stream << other.x << ", " << other.y << ", " << other.z; return stream; } int main(){ std::ios_base::sync_with_stdio(false); int w, h, c; unsigned char *data = stbi_load("image.jfif", &w, &h, &c, STBI_rgb); std::vector<Vec3> vec; for (int i = 0; i < w*h*c; i+=3) vec.push_back(Vec3{ float(data[i]), float(data[i+1]), float(data[i+2]) }); int counter = 0; for (Vec3 v : vec){ if (counter%w == 0) { std::cout << "\n"; } std::cout << density[ int((v/255).Average()*(strlen(density)-1)) ] << " "; counter++; } LOG(counter); stbi_image_free(data); return 0; } Answer: Overview. This is probably not your fault, but the stbi interface is badly designed C++. This is more like a C interface. The problems here are: The "Resource Allocation"/"Resource Release" done manually There is no encapsulation of the data. The resource allocation is the big thing to me (you need to use RAII). As a consequence, the code is not exception safe, and you are going to leak the resource in any non-trivial application. Additionally, the data is actually 4 items. int w, h, c; unsigned char *data; But they are not protected from misuse and thus likely to be messed up. I'm not sure running across the data converting char to float and then running across the data again is worth it. Simply run across the data once, computing an average and printing as you go. Code Review What does this do? #define STB_IMAGE_IMPLEMENTATION It should be made clear why you are doing this. Please be neat in your code. 
#include<iostream> Add an extra space after the include. Vector is standard library. It is probably not something you want to define yourself (as implied by the quotes). Or this should use <> around vector to show this is a standard library. #include "vector" Don't use macros for this: #define LOG(x) std::cout << x << std::endl Macros should normally be reserved for handling machine/architecture differences, not for writing pseudo-functions. template<typename T> inline void log(T const& value) {std::cout << value << std::endl;} Don't use std::endl. std::cout << value << std::endl; This outputs a new line then flushes the buffer. This will make using the streams very inefficient (and if used a lot will significantly deteriorate the performance of an application). I would say that if you want to flush the buffer you should explicitly do it. If you want your logging to force a flush, fine, then keep it. But normally you should simply output a new line. std::cout << value << "\n"; Then if you actually need to flush, ask the engineer to explicitly flush it at important points (but usually the system does it at the correct times). If this is for debugging, use std::cerr; this will auto-flush (as it has no buffer to flush). That's a short density string. const char* density = "#$lho,. "; Sure. But is "Average" the best function for this? float Average(){ return (x+y+z)/3.0f; } See this video where he discusses some alternatives: Yes. std::ios_base::sync_with_stdio(false); How I would do it // I am going to use the same type of class for a point // You did but I am going to make the data points // the same as the data you get from the stbi library // so there is no need to do any conversion. struct Vec3 { unsigned char x, y, z; float Average() const { return (x+y+z)/3.0f; } }; // You are effectively manipulating an image. // So let's encapsulate that data in its own class. 
class Image { int w; int h; int c; unsigned char *data; public: // We know that we are doing resource management // So we need a constructor and destructor to // correctly handle the resource. // // Note we could do something fancy with a smart // pointer. But I think that is overkill for now. // But may be worth thinking about down the road. Image(std::string const& fileName) { data = stbi_load(fileName.c_str(), &w, &h, &c, STBI_rgb); } ~Image() { stbi_image_free(data); } // Need to think about the rule of 3/5 // So I am going to delete the copy operations. // You may do something more sophisticated in the // long run. Image(Image const&) = delete; Image& operator=(Image const&) = delete; // The only other operation you do is scan over the // data for the image. This is usually done via iterators // in C++. I am going to write the simplest iterator I can; // it's probably not strictly standards-compliant, but it will work // for this situation. You should fix it up to be better. struct Iterator { // Really the iterator is simply a pointer to // a location in the raw data of the image. // So that is all we need to store. unsigned char* data; // Moving the iterator you want to move by // three data points (as the three points represent // one pixel (I believe)). Iterator& operator++() { data+=3; return *this; } // You can add post-increment and/or the decrement // operators. // This is the operator that gives you back a reference // to one point in the data. // We simply convert it to be a pointer of the correct // type and then convert it to a reference before // returning. Vec3& operator*() { return *reinterpret_cast<Vec3*>(data); } // The main comparison for iterators is checking // if they are not equal. You should probably also // test for equivalence. bool operator!=(Iterator const& rhs) const { return data != rhs.data; } }; // In C++ ranges are defined by the beginning // and one past the end. So we create iterators // at these points. 
Iterator begin() {return Iterator{data};} Iterator end() {return Iterator{data + w*h*c};} }; int main() { Image image("image.jfif"); for (auto const& point: image) { std::cout << point.Average() << "\n"; } } Side notes: The range-based for() uses the begin() and end() methods on an object to iterate across the range. So the above for loop is actually equivalent to: { auto begin = std::begin(image); // this by default calls image.begin(). auto end = std::end(image); // this by default calls image.end(). for (auto loop = begin; loop != end; ++loop) { auto const& point = *loop; std::cout << point.Average() << "\n"; } } This shows that we are using all the methods I define above: begin() and end() to get the range, operator++() to increment the iterator, operator!=() to check we are not at the end, and finally operator*() to get a reference to a point.
{ "domain": "codereview.stackexchange", "id": 43247, "tags": "c++, console, image, ascii-art" }
Selection of base model for transfer learning
Question: Is there a golden rule which gives intuition on which base model needs to be used for a given image classification problem? Most of the articles give details on how to train the model based on the dataset. However, I was not able to find a good reference for the selection of the base model. Thank you Answer: There is no specific rule associated with base model selection for transfer learning. It is generally a trade-off between model precision and resource allocation. As the number of layers increases, the number of parameters increases and the model becomes more and more resource-heavy, but deeper models tend to have better accuracy than shallower counterparts. Here's a comparison: Apart from that, if the speed of the model matters, also refer to this question: Which is the fastest image pre-trained model?
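One way to make the trade-off concrete is to pick, from published ImageNet statistics, the most accurate base model that fits a parameter budget. The numbers below are approximate top-1 accuracies and parameter counts commonly quoted for these architectures (treat them as illustrative, not authoritative):

```python
# (approx. parameters in millions, approx. ImageNet top-1 accuracy in %)
MODEL_STATS = {
    "MobileNetV2":     (3.5, 71.8),
    "EfficientNet-B0": (5.3, 77.1),
    "ResNet50":        (25.6, 76.1),
    "VGG16":           (138.0, 71.6),
}

def pick_base_model(max_params_millions):
    """Return the most accurate model whose parameter count fits the budget."""
    candidates = {name: acc for name, (params, acc) in MODEL_STATS.items()
                  if params <= max_params_millions}
    if not candidates:
        raise ValueError("no model fits the budget")
    return max(candidates, key=candidates.get)

print(pick_base_model(4))    # only MobileNetV2 fits under 4M parameters
print(pick_base_model(30))   # EfficientNet-B0 edges out ResNet50 on these numbers
```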
{ "domain": "datascience.stackexchange", "id": 5228, "tags": "deep-learning, image-classification, transfer-learning" }
FTL Communication using Quantum Entanglement (A new Approach)
Question: OK, I have a proposition: Imagine there are two people, one on Earth and the other in space. They synchronised their clocks before they left Earth and agreed that they will factor in time dilation and regularly update their clocks to account for the other. Now they agree that in 10 years the first person to collapse the entanglement will be the guy in space, and he can do that before the guy on Earth every time, as he has a clock that keeps track of the time on Earth as mentioned above. They also agree that if the spin is up then the guy in space will head to a certain planet, and if down then not. Now after ten years the guy in space will first make his observation and decide to go or not based on the spin. This info is communicated to the guy on Earth, as he necessarily opens it after the guy in space. In this case the information about the choice the guy makes in space travels faster than light. I observe no problems with this system. Very interested in the counterpoints. Edit: it works classically as well. It was a dumb question. Thanks for indulging it regardless. Answer: There is no communication happening at a distance here: the communication happened beforehand, when they agreed how to interpret the results. We can easily do this classically as well. I will write the same word, either "up" or "down", on two pieces of paper and seal them in envelopes. I will use a fair coin flip to decide what word to write. I then give one envelope to "Earth guy" and one envelope to "space guy". I ask them to open their envelopes 10 years from now, at the same time, and if they see "up" they will both know instantly that "space guy" is going to Planet X, and if they see "down" they know he won't. Neither knew before they opened the envelope. There is nothing magical about this.
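The envelope analogy can even be simulated: the correlation is perfect every time, yet nothing either party does changes the other's statistics, so no information travels. A tiny sketch:

```python
import random

def run_trial(rng):
    """Seal the same pre-agreed word in both 'envelopes' (shared
    randomness) -- the classical analogue of the entangled-pair correlation."""
    word = rng.choice(["up", "down"])
    return word, word   # (earth_envelope, space_envelope)

rng = random.Random(42)
trials = [run_trial(rng) for _ in range(1000)]

# Perfectly correlated every time...
assert all(earth == space for earth, space in trials)

# ...but Earth's marginal statistics are just a fair coin, carrying no
# message from the space guy: roughly half "up", half "down".
ups = sum(earth == "up" for earth, _ in trials)
print(ups)
```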
{ "domain": "physics.stackexchange", "id": 97757, "tags": "special-relativity, quantum-entanglement, faster-than-light" }
What are the optimal conditions to fuel your car?
Question: I was filling my car earlier today, and noticed a sticker posted on the pump. This pump dispenses fuel at a volumetric amount measured in standard gallons (231 cubic inches). It does not adjust for temperature, or other factors that may affect the energy content of each gallon. This got me thinking. What would be the optimal conditions to fuel your car so that you can get the most energy per gallon? Let's assume we measure this in miles per gallon achieved traveling at 60mph in a standard car that advertises 30mpg; a Honda Civic, perhaps. Why would temperature affect the energy of each gallon? Answer: Why would temperature affect the energy of each gallon? The energy content depends on the mass (i.e. on the number of molecules available for combustion). The volume of a kilogram mass of gasoline depends on its temperature - gasoline expands and becomes less dense as it gets warmer. So a litre of warm gasoline contains less mass than a litre of cold gasoline. The difference is very slight, and underground gasoline storage tanks maintain a fairly even temperature day and night. The difference may be less than the volumetric accuracy of the gasoline pump. When gasoline is delivered in large tanker-trucks, the temperature is taken into account when calculating the value of the delivered volume. What would be the optimal conditions to fuel your car so that you can get the most energy per gallon? As explained above, in practice this isn't worth doing. A better strategy is to drive smoothly, plan ahead whilst driving, use throttle and brakes as little as possible, use the highest possible gear, and avoid high speeds. An overly literal answer might be: At the end of your previous journey, remove the fuel from your car and put it into a chiller. Just before your next journey, put into your tank just sufficient mass to complete your journey. 
By using chilled fuel, the volume is less, so the number of gallons is fewer. This, however, does not save money or fuel (as measured by mass); it just gets you furthest using the least number of gallons. By not carrying fuel you won't use, you reduce the mass of the vehicle and its contents; less mass means less force needed to achieve a specific acceleration, which means a reduction in fuel consumption. This doesn't increase the energy per gallon, but it does save you money.
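To see just how slight the temperature effect is, here is a rough calculation. The expansion coefficient and reference density below are commonly quoted approximate figures for gasoline, used here only for illustration:

```python
beta = 9.5e-4     # 1/degC, approx. volumetric expansion coefficient of gasoline
rho_15 = 0.74     # kg/L at 15 degC (typical assumed value)

def density_at(temp_c, ref_temp_c=15.0, rho_ref=rho_15):
    """Density falls as the fuel warms: rho = rho_ref / (1 + beta * dT)."""
    return rho_ref / (1.0 + beta * (temp_c - ref_temp_c))

gallon_litres = 3.785
mass_cold = density_at(15) * gallon_litres   # kg of fuel in one 15 degC gallon
mass_warm = density_at(30) * gallon_litres   # kg of fuel in one 30 degC gallon

shortfall = (mass_cold - mass_warm) / mass_cold
print(f"{shortfall:.1%}")   # roughly 1.4% less fuel mass per warm gallon
```

A change of around one percent over a 15 degC swing is comparable to pump metering tolerances, which is why chasing cold fuel isn't worth the effort.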
{ "domain": "physics.stackexchange", "id": 5532, "tags": "energy, temperature, everyday-life" }
Any existing Reeds-Shepp implementations?
Question: Does anyone know of any open source implementations for finding the optimal path of a Reeds-Shepp car? I'm trying to implement the formulas myself, but I'm having trouble with one of them. I think it's a typo in their paper; their formula just doesn't spit out the expected result. The specific formula is found in Section 8.3 in Reeds and Shepp's 1990 paper, found here. I'm trying to find the optimal path from (x = 0, y = 0, theta = 0) to (x = 0, y = 0, theta = -pi), or put differently: the car goes back to the starting position, but pointed in the opposite direction. The correct solution is shown in Figure D of Section 1. The formula in Section 8.3 should give the same turn lengths, but it doesn't. The three turn lengths should all be pi / 3, but working it out by hand gives me completely different values. Answer: Copy of my comment above (I suppose that the question can be accepted/closed): I think that the question is not research level, however perhaps this can help you: http://msl.cs.uiuc.edu/~lavalle/cs326a/rs.c (README: http://msl.cs.uiuc.edu/~lavalle/cs326a/README_RS). You can also take a look at Chapter 13 of the book "Planning algorithms".
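While debugging the Section 8.3 formula, a numerical sanity check can help. Below is a sketch assuming a unit turning radius and reading Figure D as an L+ R- L+ path (forward-left, backward-right, forward-left); the pose updates are standard circular-arc kinematics, not the Reeds-Shepp formulas themselves. It confirms that three arcs of length π/3 take the car from (0, 0, 0) back to the origin with reversed heading:

```python
from math import sin, cos, pi

def left_fwd(x, y, th, t):
    """Forward motion turning left by arc angle t, unit turning radius."""
    return x - sin(th) + sin(th + t), y + cos(th) - cos(th + t), th + t

def right_back(x, y, th, t):
    """Backward motion steering right: the same right-hand circle is
    traversed in reverse, so the heading *increases* by t."""
    return x + sin(th) - sin(th + t), y - cos(th) + cos(th + t), th + t

pose = (0.0, 0.0, 0.0)
pose = left_fwd(*pose, pi / 3)
pose = right_back(*pose, pi / 3)
pose = left_fwd(*pose, pi / 3)

x, y, th = pose
print(x, y, th)   # back at the origin with heading pi (equivalent to -pi)
```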
{ "domain": "cstheory.stackexchange", "id": 1269, "tags": "graph-algorithms, implementation" }
Is the following molecule aromatic or antiaromatic?
Question: This molecule has the same chemical formula $\ce{C20H10}$ as corannulene (which also is non-planar), and corannulene is aromatic, so I'm pretty sure that this molecule should be aromatic too. I tried to analyse it using the MoCubed app on my phone (in my previous posts, I wrote "WebMo" instead of "MoCubed" because I have both on my phone and sometimes get confused) but it keeps adding unnecessary hydrogen atoms and I can't rotate the molecule to remove all of them, so I have to ask here. P.S. I do know Hückel's rules, but they don't apply to polycyclic systems. Answer: It turns out this question has some subtleties. The analysis presented below, based on Huckel MO theory, indicates that: 1. The compound as presented by the OP is not expected to be aromatic and would be anti-aromatic by the Huckel model -- meaning in practice it would probably distort out of full conjugation to avoid this calamity. 2. The dication, however, does appear to be aromatic. Part of the reason is that while we ordinarily suppose that five-membered rings must be negatively charged in an aromatic system, a phenanthrene-like accumulation of three such rings can become aromatic with a positive charge instead. Same formula, but ... The OP suggests a comparison with the aromatic compound corannulene, which like the compound in question has the formula $\ce{C20H10}$ and twenty conjugated pi electrons. Let's compare the Huckel energy levels for these two compounds, keeping in mind that an aromatic system will have (1) a stabilized highest occupied energy level and (2) a wide energy gap between this level and unoccupied states (figure made by the author): Occupying the lowest available orbitals in corannulene on the left (including two degenerate pairs between $\beta$ and zero that are close together, thus appear as thick bars), we see the desired effect for aromaticity: stabilized highest occupied orbitals, wide gap to the unoccupied ones. 
Not so on the right with the OP's compound: here there are nine bonding orbitals, so putting in twenty pi electrons means having a mixture of unoccupied and occupied states at the nonbonding level where there is a degeneracy. This is a classic antiaromatic situation (similar, actually, to triangulene, which is invoked in another answer), and thus the answer to the original question is that the compound we are examining will not be aromatic. It will form a diradical or find a way to get out of antiaromaticity instead. Thrown for a loop What happened? In multicyclic systems aromatic coupling is not just a matter of having the right number of electrons. Hidden in the energy calculations is a topological difference. In corannulene five of the conjugated atoms are in an interior closed loop surrounded by the rest of the molecule, while the OP's compound lacks such an internal loop. This topological change can alter aromatic coupling, which is sensitive to how loops of conjugated atoms are connected to each other. For this particular combination of molecular formula and pi-electron count, corannulene has a good topology for forming an aromatic system. The OP's compound does not. Making lemonade out of lemons Many Huckel-antiaromatic systems become aromatic if one places the proper charge on them, adjusting the number of pi electrons and thus occupied pi orbitals. The classic case is cyclooctatetraene, which goes from an antiaromatic/distorted nonaromatic neutral molecule to an aromatic dianion when we combine it with potassium. What can be done in this direction for the OP's compound? According to the Huckel energy levels we ought to either fill the degenerate orbitals as in cyclooctatetraene, or empty them. We see that in this case the latter is what works. The dication, as opposed to the dianion, has a stabilized highest orbital and a wide gap up to what are now unoccupied degenerate states. In other words, the hallmarks of aromaticity.
But where can these positive charges reside? We can make the seven-membered ring aromatic with one positive charge, but shouldn't the benzene rings be neutral and the five-membered rings negative like the cyclopentadienyl anion? We crack open our topology textbook again and we discover that three separate rings are not connected in the same way as the accumulated five-membered rings in the OP's molecule/dication. We'd better check the Huckel orbitals inherent to a cumulated five-membered ring system like the one in the OP's molecule, having the formula $\ce{C11H7}$ (this figure also by the author): The seemingly similar phenanthrene system is a fourteen-pi-electron aromatic system. Here, however, with fewer input atomic orbitals we see the widest gap, thus likely the greatest stabilization, occurs with only ten pi electrons. This means the eleven-carbon system will have a positive overall charge. The chain of three "cyclopentadienyl" rings generates an aromatic cation and thus can take on the second positive charge in the dicationic, predicted aromatic form of the OP's compound.
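The monocyclic electron counting used above (an aromatic five-membered anion, an aromatic seven-membered cation) can be sanity-checked with the closed-form Hückel levels for a single ring, $E_k = \alpha + 2\beta\cos(2\pi k/n)$. This is only a sketch for isolated rings, not the author's fused-ring diagrams (those require diagonalizing the full connectivity matrix); $\alpha = 0$ and $\beta = -1$ are the usual arbitrary units:

```python
import math

def huckel_ring_levels(n, alpha=0.0, beta=-1.0):
    """Huckel energies E_k = alpha + 2*beta*cos(2*pi*k/n) for a monocyclic
    n-membered conjugated ring (beta < 0, so E < alpha means bonding)."""
    return sorted(alpha + 2 * beta * math.cos(2 * math.pi * k / n)
                  for k in range(n))

def bonding_electron_capacity(n):
    """Electrons needed to exactly fill all bonding (E < 0) levels."""
    return 2 * sum(1 for e in huckel_ring_levels(n) if e < -1e-9)

# Cyclopentadienyl: 3 bonding levels -> 6 electrons -> aromatic as the ANION
# (5 neutral carbons would contribute only 5 pi electrons).
print(5, huckel_ring_levels(5), bonding_electron_capacity(5))
# Tropylium: also 3 bonding levels -> 6 electrons -> aromatic as the CATION
# (7 neutral carbons would contribute 7 pi electrons).
print(7, huckel_ring_levels(7), bonding_electron_capacity(7))
```

Both the five- and seven-membered rings want six pi electrons, which is why the charge sign flips between them -- the same bookkeeping that lets the cumulated five-membered chain carry a positive charge here.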
{ "domain": "chemistry.stackexchange", "id": 13127, "tags": "organic-chemistry, aromaticity" }
Explain it to me like I'm a physics grad: Greenhouse Effect
Question: What is the mechanism by which increasing $\rm CO_2$ (or other greenhouse gases) ends up increasing the temperature at (near) the surface of the Earth? Mostly what I'm looking for is a big-picture explanation of how increasing $\rm CO_2$ affects the Earth's energy transfer balance that goes a step or two beyond Arrhenius's derivation. I've read Arrhenius's 1896 derivation of the greenhouse effect in section III here. It assumes that there is non-negligible transmission of the long wavelength radiation from the surface through the full thickness of the atmosphere to space. In the band of $\rm CO_2$ vibrational lines (wavenumbers between about $\rm 600cm^{-1}$ and $\rm 800cm^{-1}$), it is my impression that for most (some? almost all?) of the wavelengths in this band, the atmosphere is optically thick, so the outgoing long wave radiation, e.g. as observed by IRIS on Nimbus 4, had its "last scattering" somewhere up in the atmosphere, and thus Arrhenius's "the surface can't radiate into space as efficiently" argument doesn't apply uniformly across this band. How does this kind of saturation effect modify Arrhenius's description of the greenhouse effect? If this line of reasoning is correct, then the net outgoing long wave emissions in the $\rm CO_2$ band of vibrational lines is some complicated mix of radiation from different altitudes. If my inference is correct, how does this affect the response of the Earth to changes in CO2 concentration? Maybe there is some sort of statistical-mechanics picture in terms of the photons doing a random walk to escape the atmosphere (for wavelengths where the atmosphere is optically thick), but I don't know how to connect that idea to overall radiative efficiency. The issue in my understanding that I'm trying to resolve is that Arrhenius's derivation assumes a non-negligible amount of transmission from the surface directly to space. 
My, admittedly cursory and thus potentially incorrect, understanding of the absorption spectrum of CO2 is that for a range of IR wavelengths the atmosphere (taken as a whole) is effectively opaque. For the portions of the spectrum where there is only some absorption, Arrhenius's argument applies; is the best model to describe the impact of small changes to CO2 concentration to only consider the portions of the IR spectrum that are (partially) transparent and basically ignore the bands that are opaque? I'm mostly interested in the direct effect of $\rm CO_2$ on an Earth-like planet, so we're dealing with a planet whose blackbody temperature is $\rm \approx 250K$ (in order to emit the short wavelength (visible and above) radiation it absorbed from the Sun), but whose surface temperature is more like $\rm 280K$, and has concentrations of $\rm CO_2$ in the $\rm 300ppm-400ppm$ range, but I'm willing to ignore the effects of water vapor (I figure that might overly complicate things), so assuming a dry atmosphere, i.e. just $\rm N_2/O_2$ and $\rm CO_2$, would be fine. I'm not being cheeky with the "physics grad"; assume I know, or can learn, any of the relevant physical or mathematical relationships required to understand the relationship between greenhouse gas concentrations and the heat transfer properties of the Earth. Answer: Executive summary: Carbon dioxide in the atmosphere absorbs some of the energy radiated by the Earth; when this energy is re-emitted, part of that is directed back to Earth. More carbon dioxide $\rightarrow$ more energy returns to Earth. This is the "greenhouse effect". The full answer is very, very complex; I will try a slight simplification. 
The sun can be treated as a black body radiator, with the emission spectrum following Planck's Law: $$H(\lambda, T) = \frac{2hc^2}{\lambda^5}\frac{1}{e^{\frac{hc}{\lambda kT}}-1}$$ The integral of emission over all wavelengths gives us the Stefan-Boltzmann law, $$j^* = \sigma T^4$$ where $j^*$ is the radiance and $\sigma$ is the Stefan-Boltzmann constant ($5.67\times10^{-8} ~\rm{W~ m^{-2}~ K^{-4}}$). If we considered the Earth to be itself a black body radiator with no atmosphere (like the moon), then it is receiving radiation from just a small fraction of the space surrounding it (solid angle $\Omega$), but emitting radiation in all directions (solid angle $4\pi$). Because of this, the equilibrium temperature for a black sphere at 1 a.u. from the sun can be calculated from Stefan-Boltzmann: $$4\pi \sigma T_e^4 = \Omega \sigma T_s^4\\ T_e = T_s \sqrt[4]{\frac{\Omega}{4\pi}}$$ Now the solid angle of the sun as seen from Earth is computed from the radius of the sun and the radius of the Earth's orbit: $$\Omega = \frac{\pi R_{sun}^2}{R_o^2}$$ With $R_{sun}\approx 7\times 10^8 ~\rm{m}$ and $R_o\approx 1.5\times 10^{11}~\rm{m}$ we find $\Omega \approx 6.8\times 10^{-5}$; given the sun's surface temperature of 5777 K, we get the temperature of the "naked" earth as $$T_e = 278~\rm{K}$$ [updated calculation... removed a stray $4\pi$ that had snuck in to my earlier expression. Thanks David Hammen!] Note that this assumes that the Earth is spinning sufficiently fast that the temperature is the same everywhere on the surface - that is, the sun is heating all parts of the Earth evenly. That is not true of course - the poles consistently get less than their "fair share" and the equator more. 
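As a quick numerical check of the expression above (the values for $R_{sun}$ and the orbital radius are rounded standard figures, not taken from the answer):

```python
import math

# Equilibrium temperature of an airless, fast-rotating blackbody sphere at
# 1 au, using T_e = T_s * (Omega / (4*pi))**0.25 from the derivation above.
# Note this simplifies algebraically to T_e = T_s * sqrt(R_sun / (2 * R_orbit)).
T_SUN = 5777.0       # solar surface temperature, K
R_SUN = 6.96e8       # solar radius, m
R_ORBIT = 1.496e11   # mean Earth-Sun distance, m

omega = math.pi * R_SUN**2 / R_ORBIT**2   # solid angle of the Sun seen from Earth
T_e = T_SUN * (omega / (4 * math.pi)) ** 0.25
print(f"solid angle = {omega:.2e} sr, T_e = {T_e:.0f} K")
```

This reproduces the answer's ~278 K "naked Earth" temperature (a degree or so of wiggle depending on the rounding of the inputs).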
Taking that into account, you would expect a lower average temperature, as the hotter equator would emit disproportionately more energy (the correct value for the "naked earth black body" is 254.6 K as David Hammen pointed out in a comment); but the (relatively) rapid rate of rotation, plus presence of a lot of water and the atmosphere, does prevent some of the extreme temperatures that you see on the moon (where the difference between "day" and "night" can be as high as 276 K...) Now we need to look at the role of the atmosphere, and how it modifies the above. Clearly, we are alive on Earth, and temperatures are much higher than would be calculated absent an atmosphere. This means the "greenhouse effect" is a good thing. How does it work?
- Clouds in the atmosphere reflect part of the incoming sunlight. This means less solar energy reaches Earth, keeping us cooler.
- As Earth's surface heats up, it re-emits energy back into the atmosphere.
- Because Earth is much cooler than the sun, the spectrum of radiation of the surface is shifted towards the IR part of the spectrum.
Here is a plot of the spectrum of the Sun and Earth (assumed at 20 °C), with their peaks normalized for easy comparison, and with the visible light range overlaid. Now for the "greenhouse effect". I already mentioned that clouds stopped some of the Sun's light from reaching the Earth's surface; similarly, the radiation from Earth will in part be absorbed/re-emitted by the atmosphere. The critical thing here is absorption followed by re-emission (when there is equilibrium, the same amount of energy that is absorbed must be re-emitted, although not necessarily at the same wavelength). When there is re-emission, some of the photons "return" to Earth. This has the effect of making the fraction of "cold sky" that the Earth sees smaller, so the expression for the temperature (which had $\sqrt[4]{\frac{\Omega}{4\pi}}$ in it) will be modified - we no longer "see" $4\pi$ of the atmosphere. 
The second effect is absorption. The absorption spectrum of $\rm{CO_2}$ can be found for example at Clive Best's blog. As you can see, much of the energy emitted by Earth is absorbed by the atmosphere: $\rm{CO_2}$ is not the only culprit, but it does have an absorption peak that is quite close to the peak emission of Earth's surface, so it plays a role. Increase the $\rm{CO_2}$ and you increase the amount of energy that is captured by the atmosphere. Now when that energy is re-emitted, roughly half of it will be emitted towards the Earth, and the other half will be emitted to space. As energy is re-emitted back to Earth, the effective mean temperature that the surface has to reach before there is equilibrium (given a constant influx of energy from the Sun) goes up. There are many complicating factors. Hotter surface may mean more clouds and thus more reflected sunlight; on the other hand, increased water vapor also implies increased absorption in the IR. But the basic idea that absorption of IR by the atmosphere will lead to an increased equilibrium temperature of the surface should be pretty clear.
Update
The question "If the atmosphere is already so opaque to IR radiation, why does it matter if we add more CO2?" deserves more thought. There are three things I can think of.
Spectral broadening
First - there is the issue of spectral broadening. According to [this lecture](http://irina.eas.gatech.edu/EAS8803_Fall2009/Lec6.pdf) and references therein, there is significant pressure broadening of the absorption lines in $\rm{CO_2}$. Pressure broadening is the result of frequent collisions between molecules - if the time between collisions is short compared to the lifetime of the decay (which sets a lower limit on the peak width), then the absorption peak becomes broader. 
The link gives an example of this for $\rm{CO_2}$ at 1000 mb (sea level) and 100 mb (about 10 km above sea level): This tells me that as the concentration of $\rm{CO_2}$ in the atmosphere increases, there will be more of it in the lower (high pressure) layers, where it effectively has no "windows". At lower pressures, the gaps between the absorption peaks would let more of the energy escape without interaction. This will be more important in the upper atmosphere - not so much near Earth's surface where pressure broadening is significant.
Near IR absorption bands
In the analysis above, I was focusing on the radiation of Earth, and its interaction with $\rm{CO_2}$ absorption bands around 15 µm - what is usually called the "greenhouse effect". However, there are also absorption bands in the near-IR, at 1.4, 1.9, 2.0 and 2.1 µm (see [Carbon Dioxide Absorption in the Near Infrared](http://jvarekamp.web.wesleyan.edu/CO2/FP-1.pdf)). These bands will absorb energy of the sun "on the way down", and result in atmospheric heating. Increase the concentration of carbon dioxide, and you effectively make the earth a little better at capturing the sun's energy. In the higher layers of the atmosphere (above the clouds) this is particularly important because this is energy absorbed before clouds get a chance to reflect it back into space. Since these bands have lower absorption (but the incident flux of sunlight is so much higher), they play a role in atmospheric modeling (as described more fully in the paper linked above).
More absorption from "side bands"
This is really well explained in [the answer by @jkej](https://physics.stackexchange.com/a/300125/26969) but worth reiterating: besides the spectral broadening that I described above, given the shape of a spectral peak, the lower absorptivity as you move away from the center frequency becomes more significant as the total number of molecules increases. 
This means that the part of the spectrum that was only 10% absorbed will become 20% absorbed when the concentration doubles. As the linked answer explains, this only leads to a "square root of concentration" effect for a single line in the spectrum, and an even smaller amount when spectral lines overlap - but it should not be ignored. I think there may also be an argument that can be made regarding treating the atmosphere as a multi-layered insulator, with each layer at its own temperature (with lapse rate controlled mostly by convection and gravity); as carbon dioxide concentration increases, this will change the effective emissivity of different layers of the atmosphere, and this might expose the surface of the earth to different amounts of heat flux depending on the concentration. But this is something I will have to give some more thought to... and maybe run some simulations for. Finally, in a nod to "the other side", here is a link to a website that attempts to argue that carbon dioxide (let alone man-made carbon dioxide) cannot possibly explain global warming - and that global warming in fact does not exist at all. Writing a full refutation of the arguments in that site is beyond the scope of this answer... but it might make a good exercise for another day.
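The "half re-emitted back down" argument from earlier in the answer can be turned into the textbook one-layer slab model: an atmosphere transparent to sunlight that absorbs a fraction $\varepsilon$ of the surface's IR and re-radiates it equally up and down. Solving the two energy balances gives $T_{surface} = T_e/(1-\varepsilon/2)^{1/4}$. A minimal sketch (the value $\varepsilon = 0.78$ is an illustrative choice, not a number derived in the answer):

```python
# One-layer "slab" greenhouse model: the slab absorbs a fraction eps of the
# surface IR and emits half of it up, half back down.
#   Slab balance:    eps * s * Ts**4 = 2 * eps * s * Ta**4  ->  Ta**4 = Ts**4 / 2
#   Surface balance: s * Ts**4 = s * Te**4 + eps * s * Ta**4
# which combine to Ts = Te / (1 - eps/2)**0.25.
T_e = 255.0   # effective emission temperature of Earth, K

def surface_temperature(eps):
    """Surface temperature for slab IR emissivity/absorptivity eps in [0, 1]."""
    return T_e / (1 - eps / 2) ** 0.25

for eps in (0.0, 0.5, 0.78, 1.0):
    print(f"eps = {eps:.2f} -> T_surface = {surface_temperature(eps):.1f} K")
```

With eps around 0.78 the model lands near Earth's actual ~288 K surface temperature, and it makes the qualitative point of the answer concrete: anything that raises the slab's IR absorptivity raises the equilibrium surface temperature.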
{ "domain": "physics.stackexchange", "id": 96728, "tags": "thermodynamics, thermal-radiation, atmospheric-science, geophysics, climate-science" }
Learning by translating - Follow the Rubberduck - Part 2: Beta
Question: This project is my learning place for a few things: MVP (model view presenter) XML (parsing, editing and leveraging) deeper swing functionality Concerning the XML part I have already received a very nice review by rolfl on my previous question. Since then quite a lot of things have changed and the current state of the code is available on github. I implemented a few features, the most significant change since back then may be the free choice of translated locale. In addition to that I now support an "Unsaved Changes" dialog upon closing. Furthermore I have removed interfaces that have only a single implementation (so basically, all), except for the OverviewView, which I want to implement with a different UI provider than swing. Enter the Translation Helper. The entry point is your trusty Main class: public class Main { public static final String RUBBERDUCK_PATH = "RetailCoder.VBE/UI"; public static final String ARGUMENT_MISMATCH = "Arguments do not match up. Please provide one single path to read the Rubberduck resx from"; public static final String ILLEGAL_FOLDER = "Rubberduck .resx files can only be found under RetailCoder.VBE/UI. Please give a path that points to a Rubberduck UI folder"; private Main() { } public static void main(final String[] args) { // parsing the first argument given into a proper path to load the resx // from if (args.length != 1 && args.length != 3) { // don't even bother! 
System.out.println(ARGUMENT_MISMATCH); return; } Path resxFolder = Paths.get(args[0]); // normalize path to allow checking resxFolder = resxFolder.normalize(); if (!resxFolder.endsWith(RUBBERDUCK_PATH)) { System.out.println(ILLEGAL_FOLDER); return; } TranslationPresenter tp = new TranslationPresenter(); OverviewModel m = new OverviewModel(); OverviewView v = new SwingOverviewView(); OverviewPresenter p = new OverviewPresenter(m, v, tp); p.initialize(); p.loadFiles(resxFolder); // set the selected locales if they were specified on commandline // check whether they are available before that and fall back if they aren't if (args.length == 3) { final String leftLocale = args[1]; final String rightLocale = args[2]; if (m.getAvailableLocales().contains(leftLocale) && m.getAvailableLocales().contains(rightLocale)) { p.onTranslationRequest(leftLocale, Side.LEFT); p.onTranslationRequest(rightLocale, Side.RIGHT); } // "fallback" } p.show(); } } Main does quite some things actually. The arguments given are parsed and run through a sanity check. Then we fire up the Presenter, Model and View and wire them together. Surely I could clean this up a little, but I didn't find it necessary yet... Candidates, present yourself: The presenter has a significant working field. It is the access point for the application and controls View as well as Model and is managing their interactions. User actions that cannot be handled by the View get propagated to the presenter. 
There a decision between 3 Options is made: Handle yourself Delegate to model Delegate to a separate presenter This gets us to following class public class OverviewPresenter { public static final String DEFAULT_TARGET_LOCALE = "de"; public static final String DEFAULT_ROOT_LOCALE = ""; private final Map<Side, String> chosenLocale = new EnumMap<>(Side.class); private final OverviewModel model; private final OverviewView view; private final TranslationPresenter translationPresenter; private boolean initialized = false; public OverviewPresenter(final OverviewModel m, final OverviewView v, final TranslationPresenter p) { model = m; view = v; translationPresenter = p; view.initialize(); } public void show() { if (!initialized) { initialize(); } view.show(); } public void initialize() { // initialization shall only happen once! if (initialized) { return; } view.register(this); model.register(this); translationPresenter.register(this); initialized = true; } public void onTranslationRequest(final String locale, final Side side) { chosenLocale.put(side, locale); rebuildView(); } public void onException(final Exception e, final String message) { view.displayError(message, e.getMessage()); } public void onParseCompletion() { rebuildView(); } private void rebuildView() { List<Translation> left = model.getTranslations(chosenLocale.getOrDefault(Side.LEFT, DEFAULT_ROOT_LOCALE)); List<Translation> right = model.getTranslations(chosenLocale.getOrDefault(Side.RIGHT, DEFAULT_TARGET_LOCALE)); view.rebuildWith(left, right); } public void loadFiles(final Path resxFolder) { model.loadFromDirectory(resxFolder); } public String[] getLocaleOptions() { return model.getAvailableLocales().toArray(new String[]{}); } public void onTranslationSubmit(final Translation t) { translationPresenter.hide(); model.updateTranslation(t.getLocale(), t.getKey(), t.getValue()); rebuildView(); } public void onTranslationAbort() { translationPresenter.hide(); } public void onTranslateRequest(final String key) { 
translationPresenter.setRequestedTranslation( model.getSingleTranslation(chosenLocale.getOrDefault(Side.LEFT, DEFAULT_ROOT_LOCALE), key), model.getSingleTranslation(chosenLocale.getOrDefault(Side.RIGHT, DEFAULT_TARGET_LOCALE), key) ); translationPresenter.show(); } public void onSaveRequest() { model.saveAll(); } public void onWindowCloseRequest(WindowEvent windowEvent) { if (model.isNotSaved()) { // prompt to save changes int choice = JOptionPane.showConfirmDialog(windowEvent.getWindow(), "You have unsaved changes. Do you wish to save before exiting?", "Unsaved Changes", JOptionPane.YES_NO_CANCEL_OPTION); switch (choice) { case JOptionPane.YES_OPTION: model.saveAll(); // fallthrough intended case JOptionPane.NO_OPTION: view.hide(); System.exit(0); break; case JOptionPane.CANCEL_OPTION: // do nothing break; } } else { System.exit(0); } } } No presenting without a view: Whoever expected goodies now, will be sorely disappointed. The Translation Helper is incredibly ugly. Well at least it resizes nicely, has two columns and looks okay enough while doing that. That's enough for me. To make this as simple as possible for me I settled on using a GridBagLayout to enable resizing without any additional code from my side. On initialization I set some constraints, and that's it. Well not quite. After the layouting is done there's basically two things that can happen: The locales to display change The presenter changes For both cases I need to ensure integrity of functionality, and as such these two things happen in methods called externally. That said here's the Swing code. 
Anyone not interested in boring manual layouting and event-bindings should skip this block: public class SwingOverviewView implements OverviewView { private static final Dimension MINIMUM_WINDOW_SIZE = new Dimension(800, 500); private static final Dimension DEFAULT_WINDOW_SIZE = new Dimension(1000, 700); private static final Dimension MENU_BAR_DIMENSION = new Dimension(800, 100); private static final Dimension BUTTON_DIMENSION = new Dimension(100, 40); private final JFrame window; private final JTable translationContainer; private final JPanel menuBar; private final JButton saveButton; private final JButton chooseLeft; private final JButton chooseRight; private OverviewPresenter presenter; public SwingOverviewView() { window = new JFrame("Rubberduck Translation Helper"); window.setDefaultCloseOperation(JFrame.DO_NOTHING_ON_CLOSE); translationContainer = new JTable(); translationContainer.setModel(new TranslationTable()); menuBar = new JPanel(); saveButton = new JButton("save"); chooseLeft = new JButton("choose left"); chooseRight = new JButton("choose right"); } @Override public void register(final OverviewPresenter p) { presenter = p; saveButton.addActionListener(event -> presenter.onSaveRequest()); chooseLeft.addActionListener(event -> chooseAndLoadLanguage(Side.LEFT)); chooseRight.addActionListener(event -> chooseAndLoadLanguage(Side.RIGHT)); window.addWindowListener(new WindowListener() { @Override public void windowOpened(WindowEvent windowEvent) { // nothing } @Override public void windowClosing(WindowEvent windowEvent) { p.onWindowCloseRequest(windowEvent); } @Override public void windowClosed(WindowEvent windowEvent) { // nothing } @Override public void windowIconified(WindowEvent windowEvent) { // nothing } @Override public void windowDeiconified(WindowEvent windowEvent) { // nothing } @Override public void windowActivated(WindowEvent windowEvent) { // nothing } @Override public void windowDeactivated(WindowEvent windowEvent) { // nothing } }); } private 
void chooseAndLoadLanguage(Side side) { String locale = chooseLocale(); presenter.onTranslationRequest(locale, side); } private String chooseLocale() { String[] localeOptions = presenter.getLocaleOptions(); int selectedOption = JOptionPane.showOptionDialog(window, "Please choose the Locale out of following options:", "Choose Locale", JOptionPane.DEFAULT_OPTION, JOptionPane.QUESTION_MESSAGE, null, localeOptions, null); return localeOptions[selectedOption]; } @Override public void initialize() { window.setLayout(new GridBagLayout()); window.setSize(DEFAULT_WINDOW_SIZE); window.setMinimumSize(MINIMUM_WINDOW_SIZE); window.setBackground(new Color(0.2f, 0.3f, 0.7f, 1.0f)); addMenuBar(); addTranslationContainer(); window.doLayout(); } private void addTranslationContainer() { GridBagConstraints constraints = new GridBagConstraints(); constraints.insets = new Insets(15, 15, 15, 15); constraints.weightx = 1.0; constraints.weighty = 1.0; constraints.fill = BOTH; constraints.gridx = 0; constraints.gridy = 1; JScrollPane scroller = new JScrollPane(translationContainer); scroller.setMinimumSize(new Dimension(800, 400)); scroller.setSize(new Dimension(800, 400)); window.add(scroller, constraints); bindEventListener(); translationContainer.setDefaultRenderer(Object.class, new TranslationTableRenderer()); } private void bindEventListener() { translationContainer.addMouseListener(new MouseListener() { @Override public void mouseClicked(final MouseEvent event) { if (event.getClickCount() != 2) { // only react to doubleclicks! 
return; } final int row = translationContainer.rowAtPoint(event .getPoint()); final String key = ((TranslationTable) translationContainer .getModel()).getKeyAt(row); presenter.onTranslateRequest(key); } @Override public void mouseEntered(final MouseEvent arg0) { // IGNORE } @Override public void mouseExited(final MouseEvent arg0) { // IGNORE } @Override public void mousePressed(final MouseEvent arg0) { // IGNORE } @Override public void mouseReleased(final MouseEvent arg0) { // IGNORE } }); } private void addMenuBar() { GridBagConstraints constraints = new GridBagConstraints(); constraints.insets = new Insets(15, 15, 15, 15); constraints.gridx = 0; constraints.gridy = 0; constraints.weightx = 1.0; constraints.weighty = 0.0; constraints.fill = BOTH; menuBar.setLayout(new GridBagLayout()); menuBar.setBackground(new Color(0.4f, 0.2f, 0.4f, 0.2f)); addToGridBag(menuBar, window, MENU_BAR_DIMENSION, constraints); GridBagConstraints buttonConstraints = (GridBagConstraints) constraints.clone(); buttonConstraints.gridx = GridBagConstraints.RELATIVE; addToGridBag(chooseLeft, menuBar, BUTTON_DIMENSION, buttonConstraints); addToGridBag(chooseRight, menuBar, BUTTON_DIMENSION, buttonConstraints); addToGridBag(saveButton, menuBar, BUTTON_DIMENSION, buttonConstraints); } @Override public void rebuildWith(final List<Translation> left, final List<Translation> right) { translationContainer.setModel(new TranslationTable(left, right)); } @Override public void displayError(final String title, final String errorMessage) { JOptionPane.showMessageDialog(window, errorMessage, title, JOptionPane.ERROR_MESSAGE); } @Override public void show() { window.setVisible(true); } @Override public void hide() { window.setVisible(false); } } But what should I show? Exactly. That's what the Model is responsible for. 
It does a significant bit of the actually interesting functionality, namely: Parsing .resx files at a location Writing edited .resx files back to that location For that it relies on the java.nio-API and JDOM, as well as the new Streams. This is the interesting part that would need a whole rewrite to support arbitrary files and other interesting stuff. Luckily that is not what I want :) public class OverviewModel { public static final String VALUE_NAME = "value"; public static final String KEY_NAME = "name"; public static final String SINGLE_TRUTH_LOCALE = ""; private static final String ELEMENT_NAME = "data"; private static final String FILE_NAME_FORMAT = "RubberduckUI%s.resx"; private static final String FILENAME_REGEX = "^.*RubberduckUI\\.?([a-z]{2})?\\.resx$"; private static final Pattern localeFinder = Pattern.compile(FILENAME_REGEX); private final Map<String, Document> translations = new HashMap<>(); private final XPathFactory xPathFactory = XPathFactory.instance(); private final XPathExpression<Element> valueExpression = xPathFactory.compile("/*/" + ELEMENT_NAME + "[@" + KEY_NAME + "=$key]/" + VALUE_NAME, Filters.element(), Collections.singletonMap("key", "")); private OverviewPresenter presenter; private Path currentPath; private final AtomicBoolean saved = new AtomicBoolean(true); public static final XMLOutputter XML_PRETTY_PRINT = new XMLOutputter(Format.getPrettyFormat()); private static String parseFileName(final Path path) { final Matcher localeMatcher = localeFinder.matcher(path.getFileName().toString()); if (localeMatcher.find()) { // should always be true, since we check beforehand final String locale = localeMatcher.group(1) == null ? 
SINGLE_TRUTH_LOCALE : localeMatcher.group(1); return locale; } throw new IllegalArgumentException("Argument was not a conform resx file"); } public void register(final OverviewPresenter p) { presenter = p; } public void loadFromDirectory(final Path resxFolder) { this.currentPath = resxFolder; translations.clear(); try (Stream<Path> resxFiles = Files.find(resxFolder, 1, (path, properties) -> path.toString().matches(FILENAME_REGEX), FileVisitOption.FOLLOW_LINKS)) { translations.putAll(resxFiles.collect(Collectors.toMap( OverviewModel::parseFileName, this::parseFile) )); } catch (IOException ex) { String errorMessage = String.format( "Could not access %s due to %s", resxFolder, ex); System.err.println(errorMessage); presenter.onException(ex, errorMessage); } normalizeDocuments(); presenter.onParseCompletion(); } private void normalizeDocuments() { final Set<String> singleTruth = translations .get(SINGLE_TRUTH_LOCALE) .getRootElement() .getChildren(ELEMENT_NAME) .stream() .map(el -> el.getAttribute(KEY_NAME).getValue()) .collect(Collectors.toSet()); translations.values().forEach( doc -> normalizeDocument(doc, singleTruth)); saved.lazySet(false); } private void normalizeDocument(final Document doc, final Set<String> singleTruth) { final List<Element> localeElements = doc.getRootElement().getChildren(ELEMENT_NAME); Set<String> localeKeys = new HashSet<>(); // remove keys not present in the Single truth for (Iterator<Element> it = localeElements.iterator(); it.hasNext(); ) { final Element el = it.next(); if (!singleTruth.contains(el.getAttribute(KEY_NAME).getValue())) { it.remove(); continue; } localeKeys.add(el.getAttribute(KEY_NAME).getValue()); } singleTruth.stream() .filter(key -> !localeKeys.contains(key)) .map(OverviewModel::createNewElement) .forEach(doc.getRootElement()::addContent); } private static Element createNewElement(String key) { Element newElement = new Element(ELEMENT_NAME); Element valueContainer = new Element(VALUE_NAME); valueContainer.setText(""); 
newElement.setAttribute(KEY_NAME, key); newElement.addContent(valueContainer); return newElement; } private Document parseFile(final Path path) { final Path xmlFile = path.getFileName(); final SAXBuilder documentBuilder = new SAXBuilder(); final Document doc; try { doc = documentBuilder.build(path.toFile()); return doc; } catch (JDOMException e) { presenter.onException(e, "Unspecified Parsing error"); throw new IllegalStateException("Unable to parse " + xmlFile, e); } catch (IOException e) { presenter.onException(e, "Unspecified I/O Error"); throw new UncheckedIOException("Unable to read" + xmlFile, e); } } public List<Translation> getTranslations(final String locale) { Document document = translations.get(locale); final List<Element> translationElements = document.getRootElement() .getChildren(ELEMENT_NAME); return translationElements.stream() .map(el -> new Translation(locale, el)) .sorted(Comparator.comparing(Translation::getKey)) .collect(Collectors.toList()); } public void updateTranslation(final String locale, final String key, final String newTranslation) { Element translationToUpdate = getValueElement(locale, key); translationToUpdate.setText(newTranslation); } private Element getValueElement(final String locale, final String key) { valueExpression.setVariable("key", key); return valueExpression.evaluateFirst(translations.get(locale)); } public void saveAll() { for (Map.Entry<String, Document> entry : translations.entrySet()) { final Path outFile = currentPath.resolve(fileNameString(entry .getKey())); try (OutputStream outStream = Files.newOutputStream(outFile, StandardOpenOption.TRUNCATE_EXISTING, StandardOpenOption.WRITE)) { XML_PRETTY_PRINT.output(entry.getValue(), outStream); saved.lazySet(true); } catch (IOException e) { e.printStackTrace(System.err); presenter.onException(e, "Could not save File"); } } } private String fileNameString(final String locale) { return String.format(FILE_NAME_FORMAT, locale.isEmpty() ? "" : "." 
+ locale.toLowerCase()); } public Translation getSingleTranslation(final String locale, final String key) { final String currentValue = getValueElement(locale, key).getText(); return new Translation(locale, key, currentValue); } public List<String> getAvailableLocales() { return new ArrayList<>(translations.keySet()); } public boolean isNotSaved() { return !saved.get(); } } I left out a few things that are available on github for full executability, namely a small UI helper for common code in the gridbag setup process. Also I left out declarations for Side, Translation, the OverviewView interface and the TranslationPresenter that's responsible for the actual editing of a translation. I plan to incorporate the changes suggested here and pack the code into a Jar for simple distribution and use as beta version. I am especially interested in: Separation of concerns between Model, View and Presenter Necessity of a proper command line argument parser Swing tricks to simplify the View And as usual: all feedback is appreciated :) Answer: Main What is the purpose of: private Main() { } I can currently think of two possibilities: To prevent inheritance: Easily fixed with the final keyword. To prevent other parts of code to instantiate your Main class: It doesn't really matter, does it? You have no code that is not static in the class, so instantiation does not matter. public static final String RUBBERDUCK_PATH = "RetailCoder.VBE/UI"; public static final String ARGUMENT_MISMATCH = "Arguments do not match up. Please provide one single path to read the Rubberduck resx from"; public static final String ILLEGAL_FOLDER = "Rubberduck .resx files can only be found under RetailCoder.VBE/UI. Please give a path that points to a Rubberduck UI folder"; Two things: Your lines are long. 
To be in the 80 character limit (or as close as possible) while still avoiding string concatenation, do: public static final String RUBBERDUCK_PATH = "RetailCoder.VBE/UI"; public static final String ARGUMENT_MISMATCH = "Arguments do not match up. Please provide one single path to read the Rubberduck resx from"; public static final String ILLEGAL_FOLDER = "Rubberduck .resx files can only be found under RetailCoder.VBE/UI. Please give a path that points to a Rubberduck UI folder"; Why are they public? It has no real use as a public field. Make the fields that aren't supposed to be seen private. OverviewPresenter public void initialize() { // initialization shall only happen once! if (initialized) { return; } view.register(this); model.register(this); translationPresenter.register(this); initialized = true; } I think it looks better this way: public void initialize() { // initialization shall only happen once! if (!initialized) { view.register(this); model.register(this); translationPresenter.register(this); initialized = true; } } I don't really like seeing empty return statements in Java, as there is always a way around them. It's my opinion; you may think different, and that's fine. public void onWindowCloseRequest(WindowEvent windowEvent) { if (model.isNotSaved()) { // ... switch (choice) { case JOptionPane.YES_OPTION: model.saveAll(); // fallthrough intended case JOptionPane.NO_OPTION: view.hide(); System.exit(0); break; case JOptionPane.CANCEL_OPTION: // do nothing break; } } else { System.exit(0); } } The last case is not required if it does nothing. If you really want to tell a reviewer/code-reader that it will do nothing, simply use a comment. It is also understandable, as only the yes and no options should do anything, and the cancel button should be completely ignored, as it is in many real-life applications. I cannot think of a single situation where a cancel button will do anything... 
SwingOverviewView window.addWindowListener(new WindowListener() { @Override public void windowOpened(WindowEvent windowEvent) { // nothing } @Override public void windowClosing(WindowEvent windowEvent) { p.onWindowCloseRequest(windowEvent); } @Override public void windowClosed(WindowEvent windowEvent) { // nothing } @Override public void windowIconified(WindowEvent windowEvent) { // nothing } @Override public void windowDeiconified(WindowEvent windowEvent) { // nothing } @Override public void windowActivated(WindowEvent windowEvent) { // nothing } @Override public void windowDeactivated(WindowEvent windowEvent) { // nothing } }); Horrendous useless methods... Use a WindowAdapter instead; it's pretty much the same thing, the only difference being you don't need to specify all the methods: window.addWindowListener(new WindowAdapter() { @Override public void windowClosing(WindowEvent windowEvent) { p.onWindowCloseRequest(windowEvent); } }); Same thing here: translationContainer.addMouseListener(new MouseListener() { @Override public void mouseClicked(final MouseEvent event) { if (event.getClickCount() != 2) { // only react to doubleclicks! return; } final int row = translationContainer.rowAtPoint(event .getPoint()); final String key = ((TranslationTable) translationContainer .getModel()).getKeyAt(row); presenter.onTranslateRequest(key); } @Override public void mouseEntered(final MouseEvent arg0) { // IGNORE } @Override public void mouseExited(final MouseEvent arg0) { // IGNORE } @Override public void mousePressed(final MouseEvent arg0) { // IGNORE } @Override public void mouseReleased(final MouseEvent arg0) { // IGNORE } }); Use a MouseAdapter: translationContainer.addMouseListener(new MouseAdapter() { @Override public void mouseClicked(final MouseEvent event) { if (event.getClickCount() != 2) { // only react to doubleclicks! 
return; } final int row = translationContainer.rowAtPoint(event .getPoint()); final String key = ((TranslationTable) translationContainer .getModel()).getKeyAt(row); presenter.onTranslateRequest(key); } }); OverviewModel private static final Pattern localeFinder = Pattern.compile(FILENAME_REGEX); static final fields are usually ALL_CAPS_WITH_UNDERSCORES_AS_SPACES. You do fine with that everywhere, but here... localeFinder should be LOCALE_FINDER. EDIT: I know that you don't like the spacing, but I will leave it here as a reference as it is the standard java conventions (eclipse formatting implies that too). private void normalizeDocuments() { final Set<String> singleTruth = translations .get(SINGLE_TRUTH_LOCALE) .getRootElement() .getChildren(ELEMENT_NAME) .stream() .map(el -> el.getAttribute(KEY_NAME).getValue()) .collect(Collectors.toSet()); translations.values().forEach( doc -> normalizeDocument(doc, singleTruth)); saved.lazySet(false); } Code that is part of the same command but is on a separate line should be 8-spaced: private void normalizeDocuments() { final Set<String> singleTruth = translations .get(SINGLE_TRUTH_LOCALE) .getRootElement() .getChildren(ELEMENT_NAME) .stream() .map(el -> el.getAttribute(KEY_NAME).getValue()) .collect(Collectors.toSet()); translations.values().forEach( doc -> normalizeDocument(doc, singleTruth)); saved.lazySet(false); } Again here: private void normalizeDocument(final Document doc, final Set<String> singleTruth) { // ... 
singleTruth.stream() .filter(key -> !localeKeys.contains(key)) .map(OverviewModel::createNewElement) .forEach(doc.getRootElement()::addContent); } And here: public List<Translation> getTranslations(final String locale) { Document document = translations.get(locale); final List<Element> translationElements = document.getRootElement() .getChildren(ELEMENT_NAME); return translationElements.stream() .map(el -> new Translation(locale, el)) .sorted(Comparator.comparing(Translation::getKey)) .collect(Collectors.toList()); } And a lot of other parts of your code, not just this class...
{ "domain": "codereview.stackexchange", "id": 16562, "tags": "java, xml, swing, i18n, rubberduck" }
Help on unit conversion problem
Question: This is a problem from school. I will show my attempt. The question: "The gas constant for dry air R is 287 $\frac{m^2}{s^2*K}$. Assuming the temperature is 330 K and the pressure is 1050 hPa, what is the atmospheric density?" The professor said DO NOT produce an answer by finding a formula, but to use the magic of unit conversion to try to solve things. I know density is measured in kg/m^3 or thereabouts, so I tried the following: 1050 hPa = 105,000 Pa 1 Pa = 1 kg/m*s^2 105,000 $\frac{kg}{m*s^2}$ * 330 K * 287 $\frac{m^2}{s^2*K}$. This cancels some units... but not enough... in fact it cancels just K, which, so far as I understand, is far from what I need for my density unit. Any ideas on what I'm doing foolishly here? Answer: The line you wrote, 105,000 $\frac{kg}{m*s^2}$ * 330 K * 287 $\frac{m^2}{s^2*K}$, should in fact read $\frac{105,000 \frac{kg}{m*s^2} }{ 330 K * 287 \frac{m^2}{s^2*K}} = 1.11 \frac{kg}{m^3}$. This comes from the gas law $p=\rho \ R \ T $ where $p$ is the air pressure and $\rho$ is the air density. Solving for $\rho$ you get $\rho =\frac{p}{R T} $, from which the numerical solution follows.
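The numbers are easy to sanity-check; a minimal sketch of the final calculation (just the SI conversion and division from the answer above):

```python
# Ideal gas law for dry air: p = rho * R * T  =>  rho = p / (R * T)
R = 287.0         # gas constant for dry air, m^2/(s^2*K) (equivalently J/(kg*K))
T = 330.0         # temperature, K
p = 1050 * 100.0  # 1050 hPa in Pa (1 hPa = 100 Pa)

rho = p / (R * T)     # kg/m^3; units: (kg/(m*s^2)) / (m^2/s^2) = kg/m^3
print(round(rho, 2))  # -> 1.11
```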
{ "domain": "physics.stackexchange", "id": 4424, "tags": "homework-and-exercises, units, unit-conversion" }
What is the difference between a Rodin coil and a Rodin starship?
Question: I've seen various designs for a Rodin coil and a 'Rodin starship'. Are these just regular electromagnets? Or something different? How do they differ from regular electromagnets? Answer: I have never heard the term 'Rodin coil' but what you can see in the linked videos are normal electromagnets. The advantage of these toroidal coils (wikipedia) is that you can build transformers with almost no stray field. The magnetic flux is completely contained within the inside of the windings. This is relatively easy to understand if you think about a long normal coil and then bend both ends together so that the whole coil is closed. Such a coil can also be used to compare currents without changing or influencing the circuit too much. You probably have one at home as a Residual-current device to protect you from any current flow from a broken AC device: Relay 1 cuts off the current flow if the currents through the L and N lines do not cancel each other, which means there is some other current path to ground.
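For reference, the flux containment can be made quantitative with Ampère's law (a standard textbook result, not part of the original answer): a circular path inside the windings links all $N$ turns, while any path outside links zero net current, so

```latex
\oint \vec{B}\cdot\mathrm{d}\vec{\ell} = \mu_0 I_{\mathrm{enc}}
\quad\Longrightarrow\quad
B_{\mathrm{inside}}(r) = \frac{\mu_0 N I}{2\pi r},
\qquad
B_{\mathrm{outside}} = 0,
```

where $r$ is the distance from the toroid's central axis and $I$ is the winding current. This is why an ideal toroidal transformer shows almost no stray field.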
{ "domain": "physics.stackexchange", "id": 2312, "tags": "electromagnetism" }
Set node parameter in launch file
Question: Hi, I was trying to run a node through a launch file and set its node parameters. The corresponding launch file code: <node pkg="hector_quadrotor_demo" type="uav1_pathplan" name="uav1_pathplan" args="-d $(find hector_quadrotor_demo)/src/uav1_pathplan.cpp"> <param name="ref.x" type="double" value="7.0" /> <param name="ref.y" type="double" value="-8.0"/> <param name="ref.z" type="double" value="4.0" /> When I start the launch file, the node just doesn't receive the parameter values. In my node files, ros::init(argc, argv, "uav1_pathplan"); ros::NodeHandle n; destination[1]=ref.x; destination[2]=ref.y; destination[3]=ref.z; I just want the node to read "ref" into "destination[]". Then I can do further calculation. I'm not sure what's wrong. Any advice appreciated. Glen Originally posted by Glen on ROS Answers with karma: 40 on 2014-07-24 Post score: 1 Original comments Comment by sterlingm on 2014-07-24: Do "rosparam list" - can you see the parameters? The NodeHandle might just be looking in the wrong namespace. Comment by Glen on 2014-07-24: yes, I can read the parameter values through "rosparam list". Then, should I modify anything in my .cpp file? Comment by sterlingm on 2014-07-24: Whenever you load them in with your NodeHandle you need to include their namespace. If in "rosparam list" it says "uav1_pathplan/ref.x" you need to do loadParam("uav1_pathplan/ref.x"). Comment by demmeln on 2014-07-24: No you should use a private node handle instead. Comment by Glen on 2014-07-24: Do you mean use: loadParam("uav1_pathplan/ref.x") in .cpp file? But function loadParam() is not declared yet. Answer: How do you retrieve the ref object in your code? I don't see you accessing the parameters at all. The launch file is fine; in your code do something like: ros::NodeHandle nh("~"); double ref_x; nh.getParam("ref.x", ref_x); destination[1] = ref_x; Edit: Best read up on retrieving parameters in the wiki.
Originally posted by demmeln with karma: 4306 on 2014-07-24 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Glen on 2014-07-24: I retrieve the object as you said. When roslaunch the launch file, error shows "process has died[pid 21550, exit code...]". It is close to run successfully and I'm still trying to figure it out. Thanks! Comment by Glen on 2014-07-24: It finally works. I just redefine all the variables "destination and ref_" and it worked! Thanks
{ "domain": "robotics.stackexchange", "id": 18755, "tags": "roslaunch, hector-quadrotor" }
Why measuring a liquid volume from the bottom of the meniscus isn't considered an underestimation?
Question: I know from my OL chemistry and physics classes that the reading of a liquid volume should always be taken from the bottom of the meniscus, but isn't this an underestimation of the real volume? I mean, there's some water sticking upward to the surface of the container and, say, if this water fell and made a flat surface, it would add to the volume of water. I realize that if we try to take the reading from the highest point the liquid reached, this would be an overestimation, so I was wondering if measuring from a point that's between the meniscus and the highest point the liquid reached would be more accurate. Answer: The markings on volumetric glassware are already calibrated to account for the volume above the meniscus*. The location of the mark on the glass is where the bottom of the meniscus should be in order to have the "true" volume be what's marked. So by measuring from the bottom of the meniscus, you're synchronizing your measurement procedure with the procedure of the people who originally calibrated the markings on the glass. *For volumetric flasks where this kind of precision is very important, they are usually calibrated for a specific solution with a specific meniscus height. Otherwise, the general assumption, as far as I can tell, is that your solution will have a vaguely similar surface tension to water, with a vaguely similar meniscus height. For example, calibration of general graduated cylinders is often done with distilled water. Typically the error introduced by the difference in meniscus height is not the dominant source of measurement error, so this is often ignored. In cases where you're using something with a very different surface tension than water, and precision to that level actually matters, explicit calibration is usually recommended.
As an aside, while researching this answer, I came across the following entertaining scenario on the official USGS Water Science School site: In your high-school chemistry final exam you mistakenly read a meniscus as 72 milliliters (ml) instead of the correct 66 ml (in this picture), and thus you get an 89 on the test instead of a 90. Your GPA falls from 4.00 to 3.99 and you don't get into that engineering college program you wanted. Consequently, you don't get that prestigious engineering job, where, 20 years later, you would have invented a new water-based chemical to allow rubber to grip better. Sadly, 10 years later, a mother and her adorable 4-year old daughter are leaving the ice cream store and the little girl, whose shoes don't have your un-invented coating, slips on a napkin and drops her ice cream cone. She cries at her loss ... because you misread the meniscus in the 12th grade. The moral of this fictional tale is that it is important to read the measurement correctly, and yes, in the picture (top right) the true volume in the graduated cylinder is at the bottom of the water level—21.7 milliliters, not 21.9.
{ "domain": "physics.stackexchange", "id": 70661, "tags": "measurements" }
6D pose estimation of a known 3D CAD object
Question: I'm looking for a codebase for 6DOF pose estimation of a known 3D CAD object with RGB or RGBD. It must be: -Usable commercially (licensed under BSD, MIT, BOOST, etc.), not GPL. -Easy to set up and use (having a running colab example would be great) -The training time required for a new CAD object should be on the order of hours, not days. -State of the art or near state of the art results. (See https://bop.felk.cvut.cz/home/ for benchmarks) Are there any libraries fitting these requirements? Answer: I would like to know about this too. Vuforia combines libraries for several platforms (Android, iOS) and provides pose estimation. It uses 2D features extracted from CAD, and the feature extraction is fairly quick (minutes, not hours), but it is not CAD-based pose estimation and the results are average (drift, stability, accuracy). I came across this: http://track.virnect.com/1.2.1/ Not tested
{ "domain": "robotics.stackexchange", "id": 2546, "tags": "computer-vision, pose, precise-positioning, reference-request" }
Rosws error trying to overlay common_msgs
Question: Hi, i just tried to setup my Fuerte workspace using rosws. This also includes an overlay of common_msgs and one package that depends on common_msgs. When I add the common_msgs stack to my workspace using "rosws merge" and try to build the package that depends on geometry_msgs, i get the following error when trying to rosmake: [rosbuild] Building package morsetesting [rosbuild] Cached build flags older than manifests; calling rospack to get flags Failed to invoke /opt/ros/fuerte/bin/rospack cflags-only-I;--deps-only morsetesting Package geometry_msgs was not found in the pkg-config search path. Perhaps you should add the directory containing `geometry_msgs.pc' to the PKG_CONFIG_PATH environment variable No package 'geometry_msgs' found After removing the overlay of common_msgs (using "rosws remove"), i can build the package again... Is this a bug or am i missing something here? Originally posted by michikarg on ROS Answers with karma: 2108 on 2012-05-06 Post score: 0 Original comments Comment by michikarg on 2012-05-06: I should mention that for my project, an overlay of common_msgs does not make sense any more since the python code is now stored outside of the rospackage anyway, but still i´m interested in an answer... Answer: The common_msgs stack builds using catkin, not rosmake, now. If you need to build it from source as an overlay, you will have to use the new tools. Originally posted by joq with karma: 25443 on 2012-05-06 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by michikarg on 2012-05-06: Tanks. So i cannot mix packages that are build by catkin with packages built by rosmake... Would you recommend to replace rosmake-builds completely by catkin? Comment by joq on 2012-05-06: Not yet. Catkin building is not yet fully documented. I believe you can build and install something using catkin, then refer to the new version in other packages.
{ "domain": "robotics.stackexchange", "id": 9263, "tags": "ros, rosws, ros-fuerte" }
What does the molecular orbital scheme of beryllium chloride and hydride look like?
Question: Beryllium is a somewhat fascinating element, since it is the only member of the second group that behaves in a somewhat non-metallic way and, e.g., forms a somewhat covalent chloride and a covalent hydride. In an answer to a different and unrelated question, I tried guessing what the bonding picture of $\ce{BeCl2}$ would be. Given the monomer’s linear shape, I was inclined to assume something that introduces a four-electron-three-centre bond, which has the nice side-advantage of allowing beryllium to contribute to bonding orbitals using its s orbital only. However, I would guess that the more traditional picture would include two $\mathrm{sp}$-type bonds to the neighbouring chlorines and then a nonzero amount of π backbonding, making the MO scheme similar to that of $\ce{CO2}$. What does the actual, calculated MO scheme of $\ce{BeCl2}$ look like, what does $\ce{BeH2}$’s look like, how do they compare and what can we learn from them concerning the bonding properties of beryllium? Answer: I calculated molecular orbitals for $\ce{BeCl2}$ at the MP2/jun-cc-pVDZ level of theory; given below are the doubly occupied MOs at this level of theory along with their symmetry designations and energies in a.u. For some reason my Avogadro kept crashing when I tried to select an iso value I thought appropriate for the last two MOs; anyway, these are completely localised on the $\ce{Be}$ atom. If we look at the 8 valence orbitals, there is indeed delocalised $\pi$ bonding as you suggest, and the MOs do bear a striking resemblance to those of $\ce{CO2}$. For $\ce{BeH2}$ I again ran into trouble generating nice visuals using the cube file in Avogadro. I will update this post with visuals as soon as possible (it would be great if someone could help out). I got the following doubly occupied MOs at the MP2/jun-cc-pVDZ level of theory: -4.644660 ($\ce{a_g}$) ; -0.504366 ($\ce{a_g}$) ; -0.469857 ($\ce{b_{3u}}$). However, it is evident from the symmetry designations that $\pi$-type orbitals aren't involved in bonding.
P.S: If you are interested, I can provide the input, output, and cube files on request.
{ "domain": "chemistry.stackexchange", "id": 10250, "tags": "inorganic-chemistry, molecular-orbital-theory, halides" }
Transform a MongoDb parent reference tree into a deep nested tree
Question: Given a mongodb collection that stores trees as nodes with parent references this methods returns a deep nested tree where every child nodes are stored in a property childs[Seq] def getTree(rootId: BSONObjectID): Future[CategoryTreeNode] = { def collect(parent: CategoryTreeNode): Future[CategoryTreeNode] = { // Query the database - returns a Future[Seq[CategoryTreeNode]] getChilds(parent._id.get).map { // Seq[CategoryTreeNode] childs => Future.sequence( childs.map(child => collect(child)) // Recursion ).map(childSeq => parent.copy(childs = Some(childSeq))) }.flatMap(x => x).map(y => y) } // Find the root node and start the recursion findOne[CategoryTreeNode](Json.obj("_id" -> rootId)).map(maybeNode => maybeNode match { case None => throw new RuntimeException("Current node not found by id!") case Some(node) => collect(node) }).flatMap(x => x) } Answer: flatMap (x => x) would probably better be written as flatten. map (x => x) is essentially a no-op, and could be removed. map ( ... ) flatten is equivalent to flatMap (...) It doesn't seem useful to put a type into a comment – you can specify the type for parameters of lambdas as well: { (childs: Seq[CategoryTreeNode]) => ... However, this style is generally discouraged. It would be more useful to store the future from the DB in a variable which gets typed explicitly. Higher-order method like map should not be invoked with a leading .: do collection map (x => ...) instead of collection.map(x => ...) (source). (foo => foo match { ... }) can be simplified to (_ match { ... 
}) (child => collect(child)) can be simplified to (collect(_)) or even just collect Together, I'd clean up your code to this: def getTree(rootId: BSONObjectID): Future[CategoryTreeNode] = { def collect(parent: CategoryTreeNode): Future[CategoryTreeNode] = { // Query the database val ourChilds: Future[Seq[CategoryTreeNode]] = getChilds(parent._id.get) ourChilds flatMap { childs => Future.sequence(childs map collect) map { childSeq => parent.copy(childs = Some(childSeq)) } } } // Find the root node and start the recursion findOne[CategoryTreeNode](Json.obj("_id" -> rootId)) flatMap (_ match { case None => throw new RuntimeException("Current node not found by id!") case Some(node) => collect(node) }) }
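For what it's worth, the core pattern in the cleaned-up code (recursively fanning out with Future.sequence and folding the children back into the parent) can be exercised in isolation against a purely in-memory stand-in. The Node case class, parentRefs map, and getChilds stub below are hypothetical placeholders for illustration, not the asker's actual model or database layer:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

case class Node(id: Int, childs: Seq[Node] = Nil)

// Stand-in for the MongoDB parent-reference collection: parent id -> child ids.
val parentRefs: Map[Int, Seq[Int]] = Map(1 -> Seq(2, 3), 2 -> Seq(4))

// Stand-in for the async getChilds database query.
def getChilds(id: Int): Future[Seq[Int]] =
  Future.successful(parentRefs.getOrElse(id, Seq.empty))

// Same shape as the reviewed collect: fetch children, recurse, fold back in.
def collect(id: Int): Future[Node] =
  getChilds(id) flatMap { childIds =>
    Future.sequence(childIds map collect) map (cs => Node(id, cs))
  }

val tree = Await.result(collect(1), 5.seconds)
// tree: Node(1, Seq(Node(2, Seq(Node(4))), Node(3)))
```

Note that Future.sequence preserves the order of the child futures, so the nested tree mirrors the order in which the children were returned.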
{ "domain": "codereview.stackexchange", "id": 6456, "tags": "scala, tree, mongodb" }
How can I prove $\hat{a}^\dagger(\vec{k})$ is a complex scalar field?
Question: I want to prove $\hat{a}^\dagger(\vec{k})$, the creation operator for real Klein-Gordon bosons transforms like a complex scalar field under Lorentz transformations, so $$\exp\left\{-\frac{\mathrm{i}}{2\hbar}\omega^{\mu\nu}\hat{M}_{\mu\nu}\right\}\space\hat{a}^\dagger(\vec{k})\space\exp\left\{\frac{\mathrm{i}}{2\hbar}\omega^{\mu\nu}\hat{M}_{\mu\nu}\right\} = \hat{a}^\dagger(\Lambda\vec{k}),$$ and I'd like to do it, if possible, algebraically manipulating the left-hand side expression. I thought maybe I could use a formula like $$\mathrm{e}^{-\hat{A}}\hat{B}\space\mathrm{e}^\hat{A} = \mathrm{e}^{\mathrm{ad}(\hat{A})}\hat{B}$$ but I don't know how to get the adjoint representation of the (proper, orthochronous) Lorentz group as $\hat{M}_{\mu\nu}$ has two indicies and it makes it hard to see what the structure constants are (I do know the commutation relation of those, though). Is this a good idea, or should I proceed in another way? Can anyone give me a hint? Answer: To derive the Lorentz transformation of the creation and annihilation operators, you can start with the transformation single particle states. For scalar particles, with appropriate normalization, you can have: $$ U(\Lambda)|k\rangle=|\Lambda k\rangle $$ On the other hand, by making use of the definition of the creation operator: \begin{align} U(\Lambda)|k\rangle=&U(\Lambda)a^\dagger(k)|0\rangle\\ =&U(\Lambda)a^\dagger(k)U^\dagger(\Lambda)U(\Lambda)|0\rangle\\ =&U(\Lambda)a^\dagger(k)U^\dagger(\Lambda)|0\rangle \end{align} From this, you can show that: $$ U(\Lambda)a^\dagger(k)U^\dagger(\Lambda) = a^\dagger(\Lambda k) $$
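One convention note worth spelling out (an addition, not part of the original answer): the phrase "with appropriate normalization" is doing real work here, because the factor in the final identity depends on how single-particle states are normalized:

```latex
% Covariant normalization: no extra factor appears.
\langle k'|k\rangle = (2\pi)^3\,2\omega_{\vec k}\,\delta^3(\vec k'-\vec k),
\quad \omega_{\vec k}=\sqrt{\vec k^2+m^2}
\;\Longrightarrow\;
U(\Lambda)\,a^\dagger(\vec k)\,U^\dagger(\Lambda) = a^\dagger(\Lambda\vec k).

% Non-covariant normalization: a kinematic factor appears instead.
\langle k'|k\rangle = \delta^3(\vec k'-\vec k)
\;\Longrightarrow\;
U(\Lambda)\,a^\dagger(\vec k)\,U^\dagger(\Lambda)
  = \sqrt{\frac{\omega_{\Lambda k}}{\omega_{k}}}\;a^\dagger(\Lambda\vec k).
```

Both conventions are common in textbooks, so it is worth checking which one a given source uses before comparing results.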
{ "domain": "physics.stackexchange", "id": 66422, "tags": "quantum-field-theory, special-relativity, klein-gordon-equation" }
Time Card Web Application Offline
Question: About & Purpose Purpose: As the title says, this is just a simple web application for use by a SINGLE user. It has the features of calculating total hours, clocking in, and clocking out. It also has some functions that help with the calculations. This is for OFFLINE USE ONLY. This is by no means for commercial use. List of Functions and their Purpose Main Functions initialize() - Sole purpose is to make sure that the user is in the proper state. If they already started their shift, then it will lock the CLOCK IN button and enable the CLOCK OUT button, and vice-versa. (TL;DR Check session state.) populateTimeCard() - Populates the template with the data in timeCardDatas (localStorage). This is done using Template-Engine. login() - Returns true if the user types in the correct password. This is done by passing the input through the md5(input) function. Afterwards it compares the result to the MD5 hash that is hardcoded. reset() - Simply removes the startShift. This is triggered by pressing the Clock Out button. calculateHours(single) - Takes param single. This is for calculating single values instead of an entire array. Ex: (mill1 - mill2) => 50hr. General purpose is to add up all the time and return that. markTime() - Gets the current time and marks it as start or end of shift depending on user session state. Utility Functions: militaryToStandard(time) - Converts military time to standard time. dowToWord(dow) - Turns time.getDay() into a word. time.getDay() returns the day of the week as an int. padZero(number) - As the function name says, pads numbers less than 10 with a leading zero (1-digit numbers). constructTime(withDOW) - Constructs the date in a clear format (MM / DD / YYYY [DAY_OF_THE_WEEK]). Takes a param that either adds in the Day of the Week or not.
Codes index.html <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> <title>Time Card</title> <link rel="stylesheet" type="text/css" href="css/main.css"/> <link rel="stylesheet" type="text/css" href="css/bootstrap.css"/> <style> html, body { margin : 0; height: 100%; } .container-fluid { height: 100%; } .row { height: 100%; } .box-header { background-color: #6C7E8F; text-align: center; padding: 8px; color: #FFF; } </style> </head> <body style="background-color: #2c3e50;"> <div class="container-fluid"> <div class="row"> <div class="col-sm-8" style="padding: 0; border-right: 1px solid black; height: 100%;"> <div class="card" style="margin: 16px; height: 95%; "> <h3 class="card-header">Interface</h3> <div class="card-block"> <h4 class="card-title">Instructions</h4> <p class="card-text">Press the <b>Clock In</b> button when starting shift. This will get the current time and add it in your <i>Time Card</i>.</p> <p class="card-text">Press the <b>Clock Out</b> button when ending shift. This will get the current time and add it in your <i>Time Card</i>.</p> <!-- <hr> <h4>Datas</h4> <p>This is just showing all the datas available. Will be removed after final build. [For development purposes only.]</p> <br> <b style="color: blue">Date </b> <span id="clockInDate">N/A</span><br> <b style="color: green">Clock In </b> <span id="clockInData">N/A</span><br> <b style="color: red">Clock Out </b> <span id="clockOutData">N/A</span> --> <hr> <h4>Password</h4> <p>Please enter your password before pressing either <b>Clock In</b> or <b>Clock Out</b>. 
<div class="form-group" id="password-group"> <input class="form-control" type="password" id="password" required/> </div> </div> <div class="card-footer text-muted"> <button id="clockIn" class="btn btn-primary btn-block">Clock In</button> <button id="clockOut" class="btn btn-danger btn-block">Clock Out</button> </div> </div> </div> <div class="col-sm-4" style="background-color: #34495e; padding: 0;"> <!-- <div class="header-container"> <h4 class="box-header">Time Card</h4> </div> <div class="datas-container"> <span id="time"></span> </div> --> <div class="card" style="margin: 16px; height: 95%; "> <h3 class="card-header">Time Card</h3> <div class="card-block" style="padding: 0"> <div id="accordion" role="tablist" aria-multiselectable="true"> </div> </div> <div class="card-footer"> <b>Total Hours: </b> <span id="totalHours"></span> </div> </div> <!-- <div class="card-footer text-muted"> <button id="clockIn" class="btn btn-primary btn-block">Clock In</button> <button id="clockOut" class="btn btn-danger btn-block">Clock Out</button> </div> --> </div> </div> </div> <script src="js/jquery.min.js"></script> <script src="js/tether.js"></script> <script src="js/bootstrap.js"></script> <script src="js/en.js"></script> <script src="js/tmpl.min.js"></script> <script src="js/timecard.js"></script> <!-- TEMPLATE --> <script type="text/x-tmpl" id="tmpl-demo"> <div class="card"> <div class="card-header" role="tab" id="heading{%= o.id %}"> <h5 class="{%= o.id %}"> <a class="collapsed" data-toggle="collapse" data-parent="#accordion" href="#collapse{%= o.id %}" aria-expanded="true" aria-controls="collapse{%= o.id %}"> {%= o.title %} </a> </h5> </div> <div id="collapse{%= o.id %}" class="collapse" role="tabpanel" aria-labelledby="heading{%= o.id %}"> <div class="card-block"> <b>Date: </b> {%= o.title %}<br> <b style="color: green">Clock In: </b> {%= o.clockIn %}<br> <b style="color: red">Clock Out: </b> {%= o.clockOut %}<br> <hr> <b style="color: blue">Total: </b> {%= o.total %}<br> 
</div> </div> </div> </script> <script type="text/javascript"> $(document).ready(function(){ if (localStorage.getItem("timeCardDatas") != null) { populateTimeCard(); calculateHours(); } $("#totalHours").text(localStorage.getItem("totalHours")); }); $("#clockIn").on("click", function(){ if ($("#password").val().length > 0) { if (login($("#password").val())) { markTime("start"); $("#password").val(""); toggleClockIn(); } else { triggerFormControl(); } } else { triggerFormControl(); } }); $("#clockOut").on("click", function(){ if ($("#password").val().length > 0) { if (login($("#password").val())) { markTime("end"); $("#password").val(""); toggleClockOut(); /*-- Populate Base --*/ populateTimeCard(); } else { triggerFormControl(); console.log("[DEBUG] Triggered Form Control"); } } else { triggerFormControl(); console.log("[DEBUG] Triggered Form Control"); } }); </script> </body> </html> timecard.js /* * Time Card web application. * ------------------------------ * This program is not secure as it's only JS at work. */ initialize(); function initialize() { if (localStorage.getItem("startShift") != null) { toggleClockIn(); } else { toggleClockOut(); } } function populateTimeCard() { var tcDatas = JSON.parse(localStorage.getItem("timeCardDatas")); for (var i = tcDatas.length - 1; i >= 0; i--) { if (i >= 2) { var datas = { "id": i, "title": tcDatas[i][0][2], "clockIn": tcDatas[i][0][0], "clockOut": tcDatas[i][1][0], "total": calculateHours(tcDatas[i][0][1] - tcDatas[i][1][1]) }; $("#accordion").append(tmpl("tmpl-demo", datas)); } } } function login(password) { if (md5(password) == "<MD5 Hash Here>") { return true; } else { return false; } } function reset() { localStorage.removeItem("startShift"); } function calculateHours(single) { var timeCardDatas = JSON.parse(localStorage.getItem("timeCardDatas")), totalHour = 0; single = (typeof single !== 'undefined') ? 
single : false; if (!single) { for (var i = timeCardDatas.length - 1; i >= 0; i--) { if (i >= 2) { var curHour = ((timeCardDatas[i][0][1] - timeCardDatas[i][1][1]) / 1000 / 60 / 60).toFixed(2); if (curHour < 0) { totalHour += parseFloat((curHour * -1).toFixed(2)); } else { totalHour += parseFloat(curHour).toFixed(2); } } } localStorage.setItem("totalHours", totalHour.toFixed(2)); } else { var tHour = ((parseFloat(single)) / 1000 / 60 / 60).toFixed(2); if (tHour < 0) { tHour = parseFloat((tHour * -1)).toFixed(2); } else { tHour = parseFloat(tHour).toFixed(2); } return tHour; } } function markTime(shift) { var curTime = new Date(); var h = curTime.getHours(); var m = curTime.getMinutes(); var s = curTime.getSeconds(); m = padZero(m); s = padZero(s); var shiftData = [militaryToStandard(h + ":" + m + ":" + s), curTime.getTime(), constructTime(true)]; if (shift == "start") { localStorage.setItem("startShift", JSON.stringify(shiftData)); } else if (shift == "end") { var tcDatas = [JSON.parse(localStorage.getItem("startShift")), shiftData]; if (localStorage.getItem("timeCardDatas") == null) { var fillerDatas = [["FILLER", 0000000000], ["FILLER", 0000000000]]; localStorage.setItem("timeCardDatas", JSON.stringify(fillerDatas)); var oldItems = JSON.parse(localStorage.getItem("timeCardDatas")); oldItems.push(tcDatas); localStorage.setItem("timeCardDatas", JSON.stringify(oldItems)); reset(); } else { var oldItems = JSON.parse(localStorage.getItem("timeCardDatas")); oldItems.push(tcDatas); localStorage.setItem("timeCardDatas", JSON.stringify(oldItems)); reset(); } } else { console.log("[Time Card] An error has occured."); } } /* * Error Controls */ function triggerFormControl() { $("#password").addClass("form-control-danger"); $("#password-group").addClass("has-danger"); } function toggleClockIn() { $("#clockIn").addClass("disabled"); $("#clockOut").removeClass("disabled"); $('#clockIn').prop('disabled', true); $('#clockOut').prop('disabled', false); } function toggleClockOut() 
{ $("#clockOut").addClass("disabled"); $("#clockIn").removeClass("disabled"); $('#clockIn').prop('disabled', false); $('#clockOut').prop('disabled', true); } /* * Utility */ function militaryToStandard(time) { time = time.split(':'); var hours = Number(time[0]); var minutes = Number(time[1]); var seconds = Number(time[2]); var timeValue; if (hours > 0 && hours <= 12) { timeValue= "" + hours; } else if (hours > 12) { timeValue= "" + (hours - 12); } else if (hours == 0) { timeValue= "12"; } timeValue += (minutes < 10) ? ":0" + minutes : ":" + minutes; timeValue += (seconds < 10) ? ":0" + seconds : ":" + seconds; timeValue += (hours >= 12) ? " P.M." : " A.M."; return timeValue; } function dowToWord(dow) { switch (dow) { case 0: return "Sunday"; break case 1: return "Monday"; break; case 2: return "Tuesday"; break; case 3: return "Wednesday"; break; case 4: return "Thursday"; break; case 5: return "Friday"; break; case 6: return "Saturday"; break; } } function padZero(number) { if (number <= 9) { number = "0" + number; } return number; } function constructTime(withDOW) { var curTime = new Date(); var construct = padZero((curTime.getMonth() + 1)) + "/" + curTime.getDate() + "/" + curTime.getFullYear(); var a = (typeof withDOW !== 'undefined') ? true : false; if (a) { construct += " " + dowToWord(curTime.getDay()); } return construct; } JS Libraries Libraries that I installed. Bootstrap 4 Required Tether JQuery 3.2.1 JavaScript-Templates JavaScript-MD5 Answer(s) I'm Looking For Code Improvements General Tips References to Guides (Naming Convention, etc.) AND any other meaningful and useful stuff. If I am missing any other information, do let me know what other information you need down in the comments. I will be sure to answer them as fast as I can. Answer: Unnecessary loop cycles In this loop in populateTimeCard: for (var i = tcDatas.length - 1; i >= 0; i--) { if (i >= 2) { // ...
} } It would be better to change the loop condition and remove the if statement from the loop body, like this: for (var i = tcDatas.length - 1; i >= 2; i--) { Review the other similar loops as well. Use boolean expressions directly Instead of this: if (md5(password) == "<MD5 Hash Here>") { return true; } else { return false; } You can write simply: return md5(password) == "<MD5 Hash Here>"; Bug or feature? The two branches of this condition look a bit suspicious: if (curHour < 0) { totalHour += parseFloat((curHour * -1).toFixed(2)); } else { totalHour += parseFloat(curHour).toFixed(2); } Did you misplace the parentheses in the if branch? Perhaps you meant to write this: if (curHour < 0) { totalHour += parseFloat(-curHour).toFixed(2); } else { totalHour += parseFloat(curHour).toFixed(2); } In this case I would suggest not repeating yourself, as totalHour += parseFloat(...).toFixed(2); is the same in both branches. You could write: totalHour += parseFloat(Math.abs(curHour)).toFixed(2); Notice that when duplicate logic is eliminated, it's impossible to misplace parentheses. This is one of the reasons why it's always good to eliminate duplication. Don't repeat yourself The parseFloat(...).toFixed(2) snippet appears in many places in the code. It would be good to encapsulate this logic in a helper function and eliminate the duplications. Multiplying by -1 Instead of x * -1 you can write simply -x. Returning from case in a switch When you return from a case in a switch, you don't need to add a break statement.
{ "domain": "codereview.stackexchange", "id": 26313, "tags": "javascript, beginner, jquery, twitter-bootstrap" }
Differences between ros::Duration::sleep and std::this_thread::sleep_for?
Question: My team is developing in modern C++ and this is my first major project using ROS, but not my first major project using modern C++. I generally prefer using standard library functions to third party library functions, and so I generally use std::this_thread::sleep_for in order to sleep the current thread context. This is different from much of the existing codebase, which uses ROS's ros::Duration::sleep and similar methods. I've looked through the source a little bit, but can't seem to find the actual implementation of ROS's sleep method. Is there a difference between the implementations of these two methods I should be aware of? If not, have many developers stopped using ros::Duration in favor of standard functions? Originally posted by ssnover on ROS Answers with karma: 31 on 2019-03-13 Post score: 1 Original comments Comment by VictorLamoine on 2019-03-13: Implementation is here: https://github.com/ros/roscpp_core/blob/8b9365756d3757441defd0a90992f3c5e426fb8f/rostime/src/time.cpp#L447 Comment by jayess on 2019-03-13: One thing to keep in mind IIRC is that ROS's Time and Duration classes and associated methods take into account simulated time if you're running in a sim or using a bag file which standard library functions won't take into account Comment by knxa on 2019-03-14: @jayess: I think your comment is the answer, and I suggest reposting it as an answer Comment by gvdhoorn on 2019-03-17: @VictorLamoine: +1 for the code reference, but please use permalinks. Otherwise future readers could be directed to completely unrelated lines due to code refactoring, addition, etc. I've updated the link to be a permalink. Answer: If I recall correctly, ROS's Time and Duration classes and associated methods take into account simulated time if you're running in a sim or using a bag file. This is something that the standard library functions won't take into account.
I can't find anything official stating this, but I've seen it around this site that it's encouraged to use ros::Duration::sleep and similar methods in favor of the standard functions. Now, maybe you don't care about simulated time or bag files right now, but if you end up caring about them later you may have to go back and do significant rewrites which is no fun. Originally posted by jayess with karma: 6155 on 2019-03-14 This answer was ACCEPTED on the original site Post score: 2
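The difference is easy to picture with a toy, non-ROS sketch in plain Python (the SimClock class and its rate are made up for illustration; this is not the ROS API): a sleep that polls an injected clock source waits the requested amount of that clock's time, which is essentially what a sim-time-aware sleep has to do.

```python
import time

class SimClock:
    """Toy simulated clock that runs `rate` times faster than wall-clock time."""

    def __init__(self, rate):
        self.rate = rate
        self._start = time.monotonic()

    def now(self):
        return (time.monotonic() - self._start) * self.rate

def clock_sleep(clock, duration):
    """Sleep for `duration` seconds as measured by the supplied clock."""
    deadline = clock.now() + duration
    while clock.now() < deadline:
        time.sleep(0.001)  # poll; a real implementation would wait more cleverly

# Sleeping one "simulated second" on a 10x clock takes roughly 0.1 wall seconds:
sim = SimClock(rate=10.0)
t0 = time.monotonic()
clock_sleep(sim, 1.0)
wall_elapsed = time.monotonic() - t0
```

A sleep hard-wired to the wall clock (like std::this_thread::sleep_for) has no way to react if the clock source is swapped out, which is exactly the bag-file/simulation situation described above.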
{ "domain": "robotics.stackexchange", "id": 32644, "tags": "ros, ros-melodic, sleep" }
Tension is not equal to $m g \cos \theta$ in a simple pendulum. Why?
Question: $l$ is constant but $T(t) \neq m g \cos \theta(t)$. Why? Please explain intuitively. Answer: Whenever an object follows a circular path, a centripetal force is required to account for the change in direction. That force is given by $F_{\text{centripetal}} = m \frac{v^2}{l}$. Therefore, rather than $T - m g \cos \theta = 0$, here we have $T - m g \cos \theta = m \frac{v^2}{l}$. [Note: Here $v$ is the instantaneous speed, which varies from one moment to the next (as there is a tangential component of gravity acting on the bob).]
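A quick numeric sketch of the consequence (hypothetical values): for a pendulum released from rest at angle $\theta_0$, energy conservation gives $v^2 = 2gl(\cos\theta - \cos\theta_0)$, so the tension works out to $T = mg(3\cos\theta - 2\cos\theta_0)$, which exceeds $mg\cos\theta$ everywhere except at the turning points.

```python
import math

def tension(m, g, l, theta, theta0):
    """Tension at angle theta for a pendulum released from rest at theta0.

    Energy conservation: v^2 = 2*g*l*(cos(theta) - cos(theta0)).
    Newton's law along the string: T - m*g*cos(theta) = m*v^2/l,
    so T = m*g*(3*cos(theta) - 2*cos(theta0)).
    """
    v_squared = 2 * g * l * (math.cos(theta) - math.cos(theta0))
    return m * g * math.cos(theta) + m * v_squared / l

m, g, l, theta0 = 1.0, 9.81, 2.0, math.radians(30)
T_bottom = tension(m, g, l, 0.0, theta0)   # lowest point: bob is fastest, T > m*g
T_turn = tension(m, g, l, theta0, theta0)  # turning point: v = 0, T = m*g*cos(theta0)
```

Only at the turning points, where $v = 0$, does $T = mg\cos\theta$ hold.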
{ "domain": "physics.stackexchange", "id": 62779, "tags": "newtonian-mechanics, forces, free-body-diagram, string" }
Heisenberg uncertainty principle in German
Question: I have heard that uncertainty was not the actual translation for the word which Heisenberg had used to describe his original principle (in German). The translated meaning is a bit different. What was the actual German word which Heisenberg had used for 'uncertainty'? Answer: It is called the « Unschärferelation ». Some people prefer that to the English version because uncertainty makes you think that if you made better measurements, you wouldn’t get this uncertainty, which is not the case. In fact, it is better to think about it as a relation between the dispersion of the momenta and the dispersion of the positions of a particle, whose product has to be at least $\frac{\hbar}{2}$. In other words, it is all about the relation of the unsharpness of these two distributions. That’s why « unsharpness relation » seems to describe it better than « uncertainty principle ».
{ "domain": "physics.stackexchange", "id": 74460, "tags": "terminology, soft-question, heisenberg-uncertainty-principle" }
Sorting numeric tags without omitting the attributes
Question: I used the following JavaScript to sort an unordered list of numeric li-tags as follows: // 1. get the <UL>-element from the BODY var nList = document.getElementById("allItems"); // 2. extract all the <LI>-elements from that <UL> and put it in a NodeList var nEntry = nList.getElementsByTagName('li'); // 3. we can't sort a NodeList, so first make it an Array var nEntryArray = Array.prototype.slice.call(nEntry, 0); // 4. sort the array, the normal sort()-function won't do because it is an alphabetical sort // to sort() numeric values, see http://www.w3schools.com/jsref/jsref_sort.asp example, as "Default sort order is alphabetic and ascending. When numbers are sorted alphabetically, "40" comes before "5". To perform a numeric sort, you must pass a function as an argument when calling the sort method." // the numeric value of the <LI> nodes can be located in nEntryArray[i].firstChild.nodeValue , so compare those nEntryArray.sort(function(a,b){ return a.firstChild.nodeValue - b.firstChild.nodeValue }) // 5. empty the nList and refill it with those in the correct order at the nEntryArray while (nList.lastChild) { nList.removeChild(nList.lastChild); } for (i=0; i<nEntryArray.length; i++) { nList.appendChild(nEntryArray[i]); } If HTML code is given with a list such as: <ul id='allItems'> <li class="black">100</li> <li id="note10">10</li> <li>1</li> <li>20</li> <li class="order2">16</li> </ul> it will order the list numerically without omitting the attributes of the <LI>-tags. It is one of my first JavaScript-attempts, and I was wondering if this is indeed correct or whether it could be simplified. Especially the conversion from NodeList to Array seems redundant to me, but I haven't found a more graceful solution. I post this code-block as I was looking on StackOverflow and through Google for a piece of code to help me solve this particular problem, but I couldn't find something that was understandable to a beginner like me.
I easily found ways to sort lists, but then the attributes would be omitted, and in the end it took me two days (yep, beginner :-)) to come up with this solution. Any comments? Answer: There are a few clear improvements I can see when you are reinserting the nodes back into the DOM tree. for (i=0; i<nEntryArray.length; i++) { nList.appendChild(nEntryArray[i]); } 1) You may benefit from using a documentFragment to build up your elements and then submit them all at once to the nList. 2) (micro): cache the length of your array in the for loop. var df = document.createDocumentFragment(); for( var i = 0, l = nEntryArray.length; i < l; i++) { df.appendChild(nEntryArray[i]); } nList.appendChild(df); Otherwise I do not see any issues with this code. http://jsfiddle.net/rlemon/faY3Z/
{ "domain": "codereview.stackexchange", "id": 2294, "tags": "javascript, sorting" }
How to construct a space which is translation invariant but not rotation invariant
Question: I am just confused by the following idea. Consider a 3-dimensional translation-invariant space; we now have 3 translation generators. Then let us start with a point: the full 3-dimensional space should be generated by these 3 generators, namely by translating the single point in a 3-dimensional abstract parameter space. But when we have a 3-dimensional translation- and rotation-invariant space, we can also generate the full space by translating a point. Since the 3 translation generators already generate a 3-dimensional space, what do the other 3 rotation generators do? Where is this going wrong? Put another way, could someone construct a space which is translation invariant but not rotation invariant? Answer: A 3D cube with pacman (3-torus) topology is translationally invariant but not rotationally invariant. A space like this is a possible (but unlikely) flat spatial part of a cosmological spacetime.
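A toy way to see this concretely (a hypothetical 2D version of the pacman metric): with wrap-around distances, translating both points leaves their separation unchanged, while a generic rotation about the origin does not, because rotations do not respect the identified edges.

```python
import math

L = 1.0  # size of the periodic box

def torus_dist(p, q):
    """Shortest distance on a square with opposite edges identified."""
    deltas = [min(abs(a - b) % L, L - abs(a - b) % L) for a, b in zip(p, q)]
    return math.hypot(*deltas)

def translate(p, t):
    return tuple((a + b) % L for a, b in zip(p, t))

def rotate(p, angle):
    c, s = math.cos(angle), math.sin(angle)
    return ((p[0] * c - p[1] * s) % L, (p[0] * s + p[1] * c) % L)

p, q = (0.1, 0.5), (0.9, 0.5)   # the wrap-around shortcut gives distance 0.2, not 0.8
d0 = torus_dist(p, q)
d_translated = torus_dist(translate(p, (0.3, 0.7)), translate(q, (0.3, 0.7)))
d_rotated = torus_dist(rotate(p, math.radians(30)), rotate(q, math.radians(30)))
```

The translated pair keeps the distance 0.2; after a 30-degree rotation the wrap-around shortcut no longer lines up with the box edges and the distance changes.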
{ "domain": "physics.stackexchange", "id": 31821, "tags": "spacetime, group-theory, space, lie-algebra" }
Does XNOR of three variables equal XOR of the same three variables
Question: I came across the following excerpt: $(x'y'+xy)'z'+(x'y'+xy)z=x\oplus y\oplus z$ What I see is that the left-hand side is the XNOR of $x,y$ XNORed with $z$, and on the right I get the XOR of $x,y$ and $z$!!! In other words, $XNOR(XNOR(x,y),z)=XOR(XOR(x,y),z)$. Somehow I am surprised by the result, as I didn't come across this earlier; no text stated it explicitly, and it was unexpected. It was unexpected because I knew some other equalities and inequalities, stated below, and so had never thought of the existence of the above equality. $XNOR(x,y)=NOT(XOR(x,y))$ $NAND(NAND(x,y),z)\neq AND(AND(x,y),z)$ $NOR(NOR(x,y),z)\neq AND(OR(x,y),z)$ Q1. Am I interpreting it correctly that the XNOR of three variables equals the XOR of the same three variables? Q2. If yes, can someone shed more light on why such a relationship is not true for other gates? Is it just that their definitions do not permit such a relationship? Or is it because XOR is not a basic gate, while AND and OR are? PS: Sorry for a naive, possibly stupid question. The full excerpt: $(x'y'+xy)'z'+(x'y'+xy)z$ $(x'y')'(xy)'z'+x'y'z+xyz$ $(x+y)(x'+y')z'+x'y'z+xyz$ $xy'z'+x'yz'+x'y'z+xyz$ $=x\oplus y\oplus z$ Answer: XOR is addition modulo 2, and XNOR computes the sum modulo 2 of its inputs and 1. Since $$ (x+y+1)+z+1 \equiv x+y+z \pmod{2}, $$ we see that XORing three variables is the same as XNORing them. The same holds for any odd number of variables. When XNORing an even number of variables, you get the negation of their XOR. Nothing of this sort happens for AND or OR. There isn't any particular reason. You should think of it the other way around: XOR satisfies this surprising property, but other gates do not. XOR is special. It has absolutely nothing to do with AND and OR being "basic gates", whatever that means (mathematically, probably not much).
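The parity argument is easy to verify exhaustively; a small sketch (assuming 0/1-valued signals):

```python
from itertools import product

def xor(a, b):
    return a ^ b

def xnor(a, b):
    return 1 - (a ^ b)  # NOT(XOR), i.e. a + b + 1 (mod 2)

# XNORing an odd number of inputs gives the same result as XORing them...
odd_case = all(xnor(xnor(x, y), z) == xor(xor(x, y), z)
               for x, y, z in product((0, 1), repeat=3))
# ...while for an even number you get the negation of the XOR.
even_case = all(xnor(x, y) == 1 - xor(x, y) for x, y in product((0, 1), repeat=2))
```

Both checks pass over all input combinations, and the same exhaustive test immediately shows the NAND and NOR analogues failing.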
{ "domain": "cs.stackexchange", "id": 10704, "tags": "boolean-algebra" }
Isopropyl and Butyl Groups - Relative Priorities
Question: This does not compute for me: Why would the four-carbon butyl group have a lower relative priority than the 3-carbon isopropyl group? If we compare the carbons one by one between the isopropyl and the butyl group, we'd run out of carbons in the isopropyl group first! How then can isopropyl possibly have a higher priority than the butyl group? Answer: Comparing isopropyl and n-butyl: at the first position, both have C. At the second position, isopropyl has two C's whereas n-butyl has only one C. Thus isopropyl has the higher priority, and we never look further along the chain.
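This "first point of difference" comparison from the Cahn-Ingold-Prelog rules can be mimicked with a toy sketch: at each atom, list the atomic numbers of the attached atoms in decreasing order and compare lexicographically; chain length beyond the first difference never enters the comparison.

```python
# CIP-style comparison at the first branching atom: compare the atomic numbers
# of the attached atoms, highest first. The longer chain could only win if the
# comparison were still tied when you got there.
C, H = 6, 1
isopropyl_first_atom = sorted((C, C, H), reverse=True)  # -CH(CH3)2: attached C, C, H
n_butyl_first_atom = sorted((C, H, H), reverse=True)    # -CH2CH2CH2CH3: attached C, H, H
higher_priority = "isopropyl" if isopropyl_first_atom > n_butyl_first_atom else "n-butyl"
```

Python's list comparison is lexicographic, so (6, 6, 1) beats (6, 1, 1) at the second entry, mirroring how the extra carbon on isopropyl's first atom settles the question before the butyl chain's length can matter.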
{ "domain": "chemistry.stackexchange", "id": 1332, "tags": "organic-chemistry, chirality" }
Filtering a recursive directory listing, discarding subfolders
Question: I needed a function that would take a list of folders and remove all subfolders, so that only the top-level folders stay in the list. For example, given the list: c:\stuf\morestuff c:\stuf\morestuff\sub1 c:\otherstuf c:\otherstuf\sub1 c:\otherstuf\sub2 I wanted the list to be reduced to: c:\stuf\morestuff c:\otherstuf So I came up with this solution: // remove empty strings and sub folders private static void CleanUpFolders(List<string> uniqueFolders) { uniqueFolders.RemoveAll( delegate(string curFolder) { // remove empty if (string.IsNullOrEmpty(curFolder)) return true; // remove sub paths if (uniqueFolders.Exists( delegate(string s) { if (!string.IsNullOrEmpty(s) && curFolder.StartsWith(s) && string.Compare(s, curFolder) != 0) return true; return false; } )) return true; return false; } ); } This seems to work (not very well tested though) but I was left wondering about some things: is there an issue with using variables inside anonymous methods that were declared outside? any potential issues with nested anonymous methods? any other issues or best practices worth mentioning? Answer: is there an issue with using variables inside anonymous methods that were declared outside? No, this is what anonymous methods are designed for! It is a very useful trick to keep up your sleeve. Read up on Closures. There are all sorts of things you can do with them. Obviously there are issues with doing anything when you don't understand it fully, but the way you are using them in your code is what these things are all about! any potential issues with nested anonymous methods? Same thing. any other issues or best practices worth mentioning? Unless you are still using C# 2, the syntax has been simplified to use what is known as a lambda. Instead of using delegate(string curFolder) { ..code.. } you can just go : curFolder => ..code.. As RemoveAll takes a Predicate, you can also lose the return keyword.
As long as the statement evaluates to True or False, it will take that as the return. You have some code that is basically going : if x == true return true return false This can be simplified to : return x With those two things, your code could be simplified to : uniqueFolders.RemoveAll( curFolder => string.IsNullOrEmpty(curFolder) || uniqueFolders.Exists( s=> !string.IsNullOrEmpty(s) && curFolder.StartsWith(s) && string.Compare(s, curFolder) != 0) ); It's a bit of a mouthful. You may want to factor out a new method. uniqueFolders.RemoveAll( curFolder => IsNotRootFolder(uniqueFolders, curFolder ) ); bool IsNotRootFolder(List<string> uniqueFolders, string curFolder) { return string.IsNullOrEmpty(curFolder) || uniqueFolders.Exists( s=> !string.IsNullOrEmpty(s) && curFolder.StartsWith(s) && string.Compare(s, curFolder) != 0); }
{ "domain": "codereview.stackexchange", "id": 144, "tags": "c#, strings, collections" }
Why is it useful to scale seismic ground motion data
Question: I am learning structural dynamics by myself but am confused about the concept of scaled ground motion data. I can't get the reason why the acceleration data should be scaled. I mean, isn't it better to use the original data "as it is" to compute the response of structures so that the results can reflect the actual response of a specific structure? EDIT: Is it the case that when we design a structure, scaled data should be used, while unscaled data can be utilized when we conduct post-seismic analysis of a structure? Could someone please summarize the situations when we should use scaled or unscaled data? Answer: A ground motion record (accelerogram) is a measurement with high specificity to the fault that caused the earthquake. The recorded peak acceleration, frequencies, duration, etc., depend on the fault geometry and movement. Of course, no two faults are identical, and a fault does not produce the same ground motion every time it is activated. Therefore, the value of designing a structure by one "as recorded" ground motion is small, since it is highly unlikely to encounter it in the future. Typically, design with real ground motions requires a set of them (at least 5). Why scale a ground motion In an ideal world a structural engineer would have a vast number of historical ground motions to select from, for a particular site, when designing a new structure. There would be no need to use any records from other parts of the world. Of course this is not the case, because severe earthquakes are not an everyday phenomenon and the recording of ground accelerations started not too far in the past. There are sites where only one or two good records are available. So the solution is to use available records from all over the world, but then the problem is that the seismic risk is not the same all over the planet. Typically, the seismic code provides us with the expected intensity of the seismic excitation, in the form of peak ground acceleration.
A solution is to scale an accelerogram to match the code requirements. A problem arises though, because a distortion in the frequencies is unavoidable. Therefore, it is better to avoid too much scaling, or alternatively to use artificial accelerograms. Also, scaling of the ground motions is inherently needed for some advanced types of analyses (incremental dynamic analysis).
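A minimal sketch of the simplest flavor of this, amplitude scaling to a target PGA (the record and target value here are made up; real code-based selection also checks spectral shape over a period range, not just the peak):

```python
def scale_to_pga(accel, target_pga):
    """Linearly scale an accelerogram so its peak matches target_pga.

    A single scalar factor multiplies the whole record. The scaled trace may
    no longer be physically consistent with any real event of that intensity,
    which is the distortion issue mentioned above.
    """
    factor = target_pga / max(abs(a) for a in accel)
    return [factor * a for a in accel]

# Hypothetical recorded accelerations (in g), scaled to a code-required PGA of 0.35 g:
record = [0.02, -0.11, 0.27, -0.19, 0.08]
scaled = scale_to_pga(record, 0.35)
```

Keeping the scale factor close to 1 (i.e., selecting records whose natural PGA is already near the target) is the usual way to limit how unrealistic the scaled record becomes.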
{ "domain": "engineering.stackexchange", "id": 1269, "tags": "structural-engineering, dynamics, seismic" }
Binary Search Tree implementation in Python 3
Question: Please review my BST Code

1. Class node

class Node:

2. Constructor

    def __init__(self, data):
        self.left = None
        self.right = None
        self.data = data

3. Insert node

    def insert(self, data):
        if self.data:
            if data < self.data:
                if not self.left:
                    self.left = Node(data)
                else:
                    self.left.insert(data)
            elif data > self.data:
                if not self.right :
                    self.right = Node(data)
                else:
                    self.right.insert(data)
        else:
            self.data = data

4. Node in delete any value

    def getMinValue(self,node):
        current = node
        while(current.left is not None):
            current = current.left
        return current

    def delValue(self,data):
        if data < self.data:
            self.left = self.left.delValue(data)
        elif data > self.data:
            self.right = self.right.delValue(data)
        else:
            if self.left is None:
                temp = self.right
                self = None
                return temp
            elif self.right is None:
                temp = self.left
                self = None
                return temp
            temp = self.getMinValue(self.right)
            self.data = temp.data
            self.right = self.right.delValue(temp.data)
        return self

5. Node in search any value

    def getSearchValue(self,data):
        if data == self.data:
            return print(self.data,"True")
        if data < self.data:
            if self.left:
                self.left.getSearchValue(data)
        if data > self.data:
            if self.right:
                self.right.getSearchValue(data)

6. Print tree

    def print_tree(self):
        if self.left:
            self.left.print_tree()
        print(self.data)
        if self.right:
            self.right.print_tree()

Answer: A few quick points that came to my mind when looking at your code:

Documentation

Your code has no documentation whatsoever. Python has so-called documentation strings, which are basically """triple quoted text blocks""" immediately following def whatever(...). Example:

def print_tree(self):
    """Print the content of the tree

    This method performs an in-order traversal of the tree
    """
    # ... your code here

Since your question title indicates that you're working with Python 3, also consider using type hints to document your code.
Naming

There is the infamous Style Guide for Python Code (aka PEP 8) which recommends using lower_case_with_underscores for variable names and (member) functions. You do this for print_tree, but use camelCase for the other member functions.

Searching the tree

Your getSearchValue function is a little bit awkward in that it always returns None. Although your code promises to "get" the value, you instead print it to the console (together with the string "True") and return the return value of print, which is None (aka no return value in that case). Your function also only returns something (other than the implicit None) if the value was found. In my opinion something like

def has_value(self, data):
    """Return True or False indicating whether the value is in the BST"""
    if data == self.data:
        return True
    if data < self.data:
        if self.left is not None:
            return self.left.has_value(data)
    if data > self.data:
        if self.right is not None:
            return self.right.has_value(data)
    return False

would be a more appropriate approach. As you can see, this function returns an appropriate bool value to signal the result. Another minor tweak: this implementation uses if ... is not None: to explicitly check for None as a signaling value. Since None is a singleton in Python, you should always use is (not) to check for equality.

Unnecessary parentheses

The parentheses around the condition in while(current.left is not None): are not needed. while works the same way as if in that regard. They are sometimes used for longer conditions that span multiple lines, since Python does implicit line joining in that case.

I'm also not fully convinced about your delValue function, but unfortunately I'm a little bit short on time at the moment.
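To see the boolean-returning search in action, here is a condensed, self-contained version of the question's Node class with the suggested has_value method dropped in (the sample values are arbitrary):

```python
class Node:
    """Condensed BST node from the question, with the boolean-returning search."""

    def __init__(self, data):
        self.left = None
        self.right = None
        self.data = data

    def insert(self, data):
        # Same shape as the question's insert, minus the self.data guard
        if data < self.data:
            if self.left is None:
                self.left = Node(data)
            else:
                self.left.insert(data)
        elif data > self.data:
            if self.right is None:
                self.right = Node(data)
            else:
                self.right.insert(data)

    def has_value(self, data):
        """Return True or False indicating whether the value is in the BST"""
        if data == self.data:
            return True
        if data < self.data:
            if self.left is not None:
                return self.left.has_value(data)
        if data > self.data:
            if self.right is not None:
                return self.right.has_value(data)
        return False

root = Node(8)
for value in (3, 10, 1, 6, 14):
    root.insert(value)
```

Returning a bool lets callers write if root.has_value(6): ... instead of parsing printed output, which also makes the method trivially unit-testable.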
{ "domain": "codereview.stackexchange", "id": 36092, "tags": "python, python-3.x, tree, binary-search" }
Orbital quadrature and relative velocities
Question: In Robert J. Sawyer's sf novel The Oppenheimer Alternative, a bunch of physicists try to find the composition of the Martian atmosphere with spectroscopy. They say that to find water vapour or oxygen, a good technique is to use orbital quadrature, i.e., when the line from the Sun to the Earth and the line from Mars to Earth cross at 90 degrees. At that point, Mars's relative velocity with respect to Earth will be a maximum, so the Doppler shift will allow them to see the water vapour or oxygen truly from Mars, not that from Earth. I didn't know of this technique, but I have been thinking about the fact that the relative velocity of Mars with respect to Earth will be a maximum in quadrature, but I can't find this to be necessarily true. Do you know/have a proof of it? Thx. Answer: Assume for simplicity that Earth ($E$) and Mars ($M$) orbit the Sun ($S$) on circular orbits in the same plane, with radii $r_E$ and $r_M$, respectively. Call $\alpha$ the angle $\widehat{SEM}$ and $\beta$ the angle $\widehat{SME}$. The law of sines in the triangle $\Delta SEM$ gives us the relation $$ r_E\sin\alpha = r_M\sin\beta. $$ The velocity of the Earth is perpendicular to the line $SE$, so the velocity of Earth along the line $EM$ is given by $$ \bar{v}_E = v_E\cos(\pi/2-\alpha) = v_E\sin\alpha. $$ Likewise, the velocity of Mars along the line $EM$ is $$ \bar{v}_M = v_M\cos(\pi/2-\beta) = v_M\sin\beta. $$ Therefore, the relative velocity between Earth and Mars along the line $EM$ is $$ \bar{v} = \bar{v}_E - \bar{v}_M = \left(v_E - \frac{r_E}{r_M}v_M\right)\sin\alpha, $$ which is maximal if $\alpha = \pi/2$, since the factor in parentheses is constant for circular orbits.
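The result is also easy to check numerically with a short sketch (made-up but representative circular, coplanar orbits: Earth at 1 au, Mars at 1.52 au, angular speeds from Kepler's third law):

```python
import math

r_E, r_M = 1.0, 1.52            # orbital radii in au
w_E = 2 * math.pi               # Earth's angular speed (rad/yr)
w_M = 2 * math.pi / r_M**1.5    # Kepler's third law: w proportional to r^(-3/2)

def radial_speed_and_alpha(t):
    """Line-of-sight speed |d(EM)/dt| and the angle alpha = S-E-M, in degrees."""
    E = (r_E * math.cos(w_E * t), r_E * math.sin(w_E * t))
    M = (r_M * math.cos(w_M * t), r_M * math.sin(w_M * t))
    vE = (-r_E * w_E * math.sin(w_E * t), r_E * w_E * math.cos(w_E * t))
    vM = (-r_M * w_M * math.sin(w_M * t), r_M * w_M * math.cos(w_M * t))
    d = (M[0] - E[0], M[1] - E[1])
    dv = (vM[0] - vE[0], vM[1] - vE[1])
    dist = math.hypot(*d)
    v_los = (d[0] * dv[0] + d[1] * dv[1]) / dist
    cos_a = (-E[0] * d[0] - E[1] * d[1]) / (r_E * dist)
    alpha = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
    return abs(v_los), alpha

# Scan one synodic period (~2.14 yr): the line-of-sight speed peaks at quadrature
ts = [0.001 * i for i in range(2200)]
best_t = max(ts, key=lambda t: radial_speed_and_alpha(t)[0])
alpha_at_max = radial_speed_and_alpha(best_t)[1]
```

Scanning a full Earth-Mars synodic cycle, the maximum line-of-sight speed indeed lands at $\alpha \approx 90°$, and its value matches $r_E(\omega_E - \omega_M)$, which is what the boxed formula reduces to at $\sin\alpha = 1$.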
{ "domain": "physics.stackexchange", "id": 73590, "tags": "orbital-motion, doppler-effect" }
QuTiP: How to multiply a symbol with a matrix
Question: I am trying to multiply a symbol with a matrix which is defined as a QuTiP quantum object, but I got this error: TypeError: Incompatible object for multiplication I used: from qutip import * import sympy as sp w0 = sp.Symbol('\omega_{0}') w0*destroy(4) Did I miss something? Answer: Well, you can either convert destroy(4) to a sympy matrix or a numpy array, like this: a = destroy(4) destroy_ = sp.Matrix(a) destroy_ = w0*destroy_ destroy_ Or try a numpy array: import numpy as np destroy = np.array(a) result = w0*destroy (If the direct conversion complains about the Qobj type, a.full() returns the underlying dense numpy array.) After you have finished all computations (like finding w0), you can convert your final matrix back to a Qobj.
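You can also sidestep QuTiP entirely and build the truncated lowering operator symbolically from its matrix elements, $\langle n|a|n+1\rangle = \sqrt{n+1}$ (a plain-SymPy sketch, not QuTiP API; the symbol name is simplified here):

```python
import sympy as sp

def destroy_sym(N):
    """Truncated harmonic-oscillator lowering operator as a SymPy matrix.

    Nonzero entries sit on the first superdiagonal: <n|a|n+1> = sqrt(n+1).
    """
    a = sp.zeros(N, N)
    for n in range(N - 1):
        a[n, n + 1] = sp.sqrt(n + 1)
    return a

w0 = sp.Symbol('omega_0')
H_like = w0 * destroy_sym(4)  # a symbolic scalar times a SymPy matrix works natively
```

SymPy matrices accept symbolic scalars directly, so no conversion error can arise; the trade-off is that you lose QuTiP's operator algebra until you convert back.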
{ "domain": "quantumcomputing.stackexchange", "id": 3491, "tags": "programming, qutip" }
What is the cause of reflection?
Question: A wave that meets the boundary of two substances with different propagating velocities is partially reflected. Does anybody know the reason behind reflection? Answer: The question is general, so one can start with the simplest waves, the ones that are easiest to picture: transverse waves on a string. First one needs to consider reflection at the ends of strings. There are two cases: where the end is fixed, or where the end is free. Those are the extreme ends of boundary conditions: where the string on the right is infinitely heavy or where the string on the right is massless. It often takes a bit of math to solve differential equations with boundary conditions, but at these types of ends it can be done with picture-type arguments, and the principle of superposition. In both of these cases, one can add a mirror wave, a wave traveling in the opposite direction, like in the animation below: The wave is on the blue string. In the first case, the string end is fixed at the black dot. One could imagine the string continuing behind this point (pictured in red), and a wave coming from the other side. When the distance is the same and the phase is inverted, the dot remains stationary in the superposition of the two waves traveling through each other. So this satisfies the wave equation with the boundary condition of a fixed end. It is similar for a free end, but now the reflected wave has the same phase. Now to the question. Imagine ropes with different masses connected at the dot. Then part of the wave continues, but there will also be reflection. The phase depends on whether the wave velocity to the right is higher or lower, and the amplitude depends on how large the difference is. This is very similar to water waves (the wave speed changes where the depth of the water changes), or sound, or light. Or even the quantum mechanical problem of a particle transmitted at a potential step.
See also this page (with animations): http://www.acs.psu.edu/drussell/Demos/reflect/reflect.html More mathematically: http://www.people.fas.harvard.edu/~djmorin/waves/transverse.pdf Section 4.2 there also explains the boundary condition: the displacement of the string must obviously be continuous at the boundary, and the slope must be continuous as well.
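The limiting cases above can be packaged in the standard amplitude reflection and transmission coefficients for a junction where the wave speed changes from v1 to v2 (a sketch; these are the formulas derived in the Morin notes linked above):

```python
def reflection_transmission(v1, v2):
    """Amplitude coefficients for a transverse wave on a string hitting a
    junction where the wave speed changes from v1 (incoming side) to v2."""
    r = (v2 - v1) / (v2 + v1)   # reflected amplitude / incident amplitude
    t = 2 * v2 / (v2 + v1)      # transmitted amplitude / incident amplitude
    return r, t

# Fixed end (second string infinitely heavy, v2 -> 0): full inverted reflection
r_fixed, _ = reflection_transmission(1.0, 1e-9)
# Free end (second string massless, v2 -> infinity): full upright reflection
r_free, _ = reflection_transmission(1.0, 1e9)
# No speed change: no reflection at all
r_same, t_same = reflection_transmission(1.0, 1.0)
```

The sign of r encodes the phase statement in the answer: inverted when the wave slows down (heavier string ahead), upright when it speeds up, and zero only when the two media are impedance-matched.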
{ "domain": "physics.stackexchange", "id": 36404, "tags": "waves, reflection" }