anchor | positive | source |
|---|---|---|
Intuition behind phase difference in AC circuits | Question: So I've been studying AC Circuits lately and I've come across a few things that I'm not able to digest. Let's take a pure inductive circuit for example.
This is the first time I've seen current and voltage out of sync with each other, and it's really weird. Why is the inductor doing that?
I mean, if I think about certain points, there are points where the voltage in the circuit is zero but the current is at its maximum. How does that make any sense?
And then you have these points where the voltage in the circuit is at its maximum but the current is zero. Just what is going on here?
I've never experienced anything like this in previous circuits, so I'm really trying to figure out what's going on here. Any help will be appreciated.
Answer: This is because in an inductor, the current does not follow the voltage according to Ohm's law.
Voltage and current are simply two very different things; voltage characterizes how much electric charges are separated, it is a measure of single-sign charge concentration and of its Coulomb electric field; while current characterizes how much the mobile charges are moving, it is a measure of electric charge flow. In general these two things could be totally unrelated, but in special cases they are related.
Ohm's law is an intuitive and simple relation between voltage and current:
$$
I = U/R
$$
valid in metals when the current does not change in time too quickly. But you have to realize that in general, current does not require a force to cause it; current can simply be, just like asteroids keep moving around the Sun without any force pushing them along. Current can flow due to inertia alone. It can also be driven by forces other than the force of the Coulomb field (quantified by voltage in AC circuits). For example, it can be driven by chemical reactions (inside an electrochemical cell, or a battery of them), even against the voltage. Or it can be driven by an induced electric field, which is a separate kind of electric field, independent of the Coulomb field and thus independent of the voltage concept.
In the simplest case, where an ideal inductor is connected to a harmonic AC voltage source, the current is driven both by the AC voltage source and by the inductor's induced electric field. These two forces cancel each other so that a finite current can exist in the circuit. This is because inside the conducting body of an ideal inductor, the electric field has to vanish, so the voltage on the inductor is exactly minus the induced EMF. This induced EMF is proportional to the rate of change of the current (Faraday's law). So when the current is changing the most, the induced EMF has the greatest magnitude; consequently, the voltage has the greatest magnitude as well. This happens when the current is zero. | {
"domain": "physics.stackexchange",
"id": 83538,
"tags": "electric-circuits, electric-current, electrical-resistance, inductance, intuition"
} |
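The phase relation described in this answer lends itself to a quick numerical check. Below is a minimal sketch (all component values are hypothetical) that assumes a sinusoidal current through an ideal inductor and evaluates $V = L\,dI/dt$, confirming that the voltage magnitude peaks exactly where the current crosses zero:

```python
import math

# Ideal inductor carrying I(t) = I0*sin(w*t); Faraday's law gives
# V(t) = L*dI/dt = L*I0*w*cos(w*t): voltage leads current by 90 degrees.
L_henry, I0, w = 0.5, 2.0, 100.0   # hypothetical values: H, A, rad/s

def current(t):
    return I0 * math.sin(w * t)

def voltage(t):
    return L_henry * I0 * w * math.cos(w * t)

t_current_max = (math.pi / 2) / w   # sin peaks: current at its maximum
t_current_zero = math.pi / w        # sin crosses zero: current vanishes

print(current(t_current_max), voltage(t_current_max))    # I = I0, V ~ 0
print(current(t_current_zero), voltage(t_current_zero))  # I ~ 0, |V| = L*I0*w
```

At the instant of maximum current the voltage vanishes, and vice versa, which is exactly the "weird" behaviour the question describes.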
Bias in Naive Bayes classifier | Question: I am building a document classifier using Naive Bayes. There are 10 classes. My questions are:
1. Should each class contain the same number of documents for training?
What if the number of training examples in each class is different?
2. Do the number of classes and the choice of classification algorithm have any relation? Say, is there a rule of thumb like: if there are 100 classes, algorithm 'X' will perform better than 'Y'?
Answer: Unbalanced class distributions
First, unbalanced datasets will cause your model to have a bias towards the over-represented classes. If the class distribution is not too skewed, this should not cause a significant problem with any algorithm you employ. However, as the imbalance becomes more severe, you should expect more false negatives for the under-represented class. Consider this: you are trying to have the model adequately identify what it means for a specific example to belong to a class. If you do not provide sufficient examples, the model will not be able to capture the extent of the variation that exists among the examples of that class.
If the class distribution is very different, then I would suggest anomaly detection techniques. These techniques allow you to learn the distribution of a single classification and then identify when novel examples fall within this distribution or not.
Choosing an algorithm
More classes will result in a higher-dimensional output, thus contributing to the complexity of your model. For example, if you have a model which discriminates between 2 classes on a dataset of fixed size, then further discrimination (increasing the number of output classes) will cause the model to have higher bias. You should thus expect to see greater test error if you do not increase the size of your dataset.
If you have a fixed dataset X, then you need to find the right balance between bias and model complexity to get optimal results. For example, a neural-network-based technique (highly complex) is not a good algorithm to use for a limited dataset with many output classes. However, Naive Bayes or Random Forest would be. | {
"domain": "datascience.stackexchange",
"id": 2202,
"tags": "python, classification, naive-bayes-classifier"
} |
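The prior-driven bias described in this answer can be illustrated with a toy multinomial Naive Bayes built from scratch (the corpus, class names, and words below are all made up for illustration). With 9 training documents in one class and 1 in the other, the class prior dominates the prediction for a word both classes have seen:

```python
import math
from collections import Counter

# Toy multinomial Naive Bayes trained on an unbalanced corpus
# (documents and labels are hypothetical).
train = [("spam offer", "big")] * 9 + [("spam invoice", "small")]

docs_per_class = Counter(label for _, label in train)
total_docs = sum(docs_per_class.values())
word_counts = {c: Counter() for c in docs_per_class}
for text, label in train:
    word_counts[label].update(text.split())
vocab = {w for wc in word_counts.values() for w in wc}

def log_posterior(text, c):
    # log P(c) + sum of log P(w|c), with Laplace smoothing
    score = math.log(docs_per_class[c] / total_docs)
    n = sum(word_counts[c].values())
    for w in text.split():
        score += math.log((word_counts[c][w] + 1) / (n + len(vocab)))
    return score

# "spam" occurs in every document of both classes, so the word model
# barely discriminates; the unbalanced prior decides the prediction.
pred = max(docs_per_class, key=lambda c: log_posterior("spam", c))
print(pred)  # "big": the over-represented class wins
```

Balancing the training counts (or overriding the priors) removes this effect, which is the practical upshot of the answer's first point.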
Create a PDA for the given language | Question: The task is to create a PDA for this language. $|u|_a$ refers to the number of $a$'s in the word $u$. I have tried working on it as two separate languages that I can later combine, but I fail to construct even one of the two.
I would appreciate any help.
Answer: Since you mention splitting $L$ in two different languages and combining the resulting PDAs, I assume you are fine with nondeterministic PDAs.
Let me show you a context-free grammar for $L$ instead (there are standard and completely mechanical ways to convert CFG grammars to PDAs).
Let $\Sigma = \{a,b,c\}$ and split $L$ into two languages $L_1 = \{ uv \in \Sigma^* : 3|u|_a = |v|_b + |v|_c \}$ and $L_2 = \{ uv \in \Sigma^* : |u|_c = 2|v|_a \}$.
We can then design a context-free grammar for $L_1$ and $L_2$ separately.
A CFG for $L_1$ is the following (where the axiom is $S_1$):
$$
\begin{align*}
S_1 &\to XS_1A \mid aS_1YAYAY \mid \epsilon \\
X & \to bX \mid cX \mid \epsilon \\
A & \to aA \mid \epsilon \\
Y & \to b \mid c
\end{align*}
$$
A CFG for $L_2$ is the following (where the axiom is $S_2$):
$$
\begin{align*}
S_2 &\to WS_2Z \mid cWcS_2a \mid \epsilon \\
W & \to aW \mid bW \mid \epsilon \\
Z & \to bZ \mid cZ \mid \epsilon
\end{align*}
$$
To get a grammar for $L$, it suffices to combine the former two grammars and add the production $S \to S_1 \mid S_2$ (where $S$ is the new axiom). | {
"domain": "cs.stackexchange",
"id": 16400,
"tags": "formal-languages, pushdown-automata"
} |
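As a sanity check on the definition of $L_1$ used above, here is a small brute-force membership test. Note this is not derived from the grammar itself; it checks the defining condition $3|u|_a = |v|_b + |v|_c$ directly by trying every split point:

```python
def in_L1(w):
    """Membership in L1 = { uv : 3|u|_a = |v|_b + |v|_c }, by trying every split."""
    for i in range(len(w) + 1):
        u, v = w[:i], w[i:]
        if 3 * u.count("a") == v.count("b") + v.count("c"):
            return True
    return False

print(in_L1(""))        # True: u = v = empty word, 0 == 0
print(in_L1("abbc"))    # True: split as u = "a", v = "bbc" gives 3 == 3
print(in_L1("ab"))      # False: no split satisfies the condition
```

The same pattern (with the condition $|u|_c = 2|v|_a$) checks $L_2$; such a checker is handy for testing strings that a candidate grammar or PDA should accept or reject.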
Resolution problem at SBPL Lattice planner | Question:
I would like to try the SBPL Lattice planner on my map, but I have problems with the resolutions (scaling). My map.pgm has a resolution of 0.05, but in the SBPL Lattice planner the resolution is 0.025. Where can I change that to 0.05?
Originally posted by Ico on ROS Answers with karma: 23 on 2014-05-16
Post score: 0
Answer:
The SBPL resolution is specified in the primitives file; you'll have to update the code that generated the primitives (usually a matlab program) to use a different resolution, and re-generate your primitives.
Originally posted by ahendrix with karma: 47576 on 2014-05-16
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 17975,
"tags": "navigation, mapping, rviz, resolution, sbpl"
} |
Extracting NER from a Spanish language text file | Question: I am trying to extract various Named Entities from a Spanish language text file. I tried using nltk but with no success. I am using Python 2 with nltk 3.x.
Answer: I suggest you take a look at the Python library spaCy; it has a model for the Spanish language that includes NER. | {
"domain": "datascience.stackexchange",
"id": 2231,
"tags": "nlp, information-retrieval"
} |
Start to teleop My own Robot Model | Question:
Hi ,
I built my own robot model as a urdf file .. Like In this Tutorial
It looks In RVIZ :
Now, I want to start testing apps on my model, like teleop, to start moving it with the keyboard.
What should I do?
I think firstly I should open a world in Gazebo.
Secondly, load my robot into the world... I don't know how!
Third, start a teleop node to start moving it... I want code that fits my robot model!
Actually, I can't find a stack that can run apps on my robot
thanks
Originally posted by salma on ROS Answers with karma: 464 on 2014-03-31
Post score: 3
Original comments
Comment by raissi on 2016-06-07:
hello salma, I am now working on the same project as you. I have found problems with the control for moving it. I would like your help, if you have time, to give me some indications! Thank you. My gmail: raissimohamedimam@gmail.com to contact me. I await your help eagerly! Thank you
Comment by Zahraa23 on 2023-06-17:
Hello there,
I want to ask you if the teleop package of turtlebot worked with your project as I am facing the same problem now ?
Answer:
You will need Gazebo to simulate your robot. This is not trivial and might need some work. However, I think the Gazebo tutorials are actually quite good. There is a lot of documentation. I would start with the general Gazebo tutorials and then, once you understand how it works, check out the tutorials which are specific to Gazebo+ROS http://gazebosim.org/wiki/Tutorials#ROS_Integration.
Originally posted by demmeln with karma: 4306 on 2014-04-04
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by salma on 2014-04-04:
Hi demmeln, I have finished my robot description files. As for the environment world in Gazebo, that's not a problem; I will use the default Gazebo world file. The issue now is that I want to use a teleop app, like the one in the turtlebot SDK, on my own robot, but I didn't find any apps for a similar robot based on the Wild Thumper base on ROS.
Comment by Raul Gui on 2014-04-09:
Salma, if your version of ROS is Hydro you need to install Gazebo 1.9; this version is the one compatible with Hydro.
http://gazebosim.org/wiki/1.9/install
http://gazebosim.org/wiki/Tutorials/1.9/Installing_gazebo_ros_Packages
Comment by demmeln on 2014-04-09:
@salma: Did you go through all the gazebo tutorials?
Comment by Raul Gui on 2014-04-10:
In my case I did all gazebo tutorials of gazebo version 1.9.
The important ones are: 1. Building a Robot With SDF, which explains how to build your robot and add a sensor to it; 2. Making a World, since you need to create a world to run in Gazebo and spawn your robot in it; 3. ROS Integration, which explains how to write a roslaunch file to run your world and spawn your robot.
My advice is, if you don't understand what they are saying, install the turtlebot package and see how it works. http://wiki.ros.org/Robots/TurtleBot
Ahhh, another piece of advice: use xacro to build your robot, not SDF, because with xacro you can find more support info.
In this moment I am working with turtlebot, and I can run him in gazebo without any problems.
Comment by salma on 2014-04-12:
Hi Raul, demmeln
I have finished my robot model description and a world in Gazebo. I have tried the turtlebot teleop package and it works well now in Gazebo :).
Now i want to build a map by gmapping .
I have opened RViz but some errors occur in the RobotModel and LaserScan plugins.
I am working to fix them.
thanks all for your help.
Comment by Raul Gui on 2014-04-14:
Hi Salma, do you have an error on Global Status too? If yes, maybe your Global Options "Fixed Frame" is set to "map"; you need to put a link of your robot there. In turtlebot, which is my case, that is "base_link". This will fix the Global Status and RobotModel. The LaserScan issue is likely a topic problem too; in normal cases the topic that publishes laser data is "/scan". Search using "rostopic list" or "rqt_graph" to find which topic is publishing, and use "rostopic echo /nametopic" to see the values.
I hope this helps.
Comment by salma on 2014-04-15:
Hi Raul, Yes this helps me thanks very much. | {
"domain": "robotics.stackexchange",
"id": 17474,
"tags": "ros, gazebo, ubuntu, ubuntu-precise, keyboard-teleop"
} |
Small kernel (i.e. proof-verifier) for Agda? | Question: Proof-assistants usually include a lot of machinery that assists in the creation of proofs. The creation process may be unsound without risking the soundness of the proof-assistant if the alleged proof that was created is later verified by a sound proof-verifier. The smaller this verifier is, the less likely it is to be unsound due to coding or reasoning errors. The term small kernel was coined to refer to such a verifier in Barendregt and Geuvers (the idea apparently goes back to De Bruijn).
I have found literature mentioning this concept in relation to Coq and HOL Light (among others), but not to Agda, hence the question:
How close is Agda to having a small kernel? How does it compare to other proof-assistants such as Coq and HOL Light?
Geuvers, H., Proof assistants: history, ideas and future, Sādhanā 34, No. 1, 3-25 (2009). ZBL1192.68629.
Barendregt, Henk; Geuvers, Herman, Proof-assistants using dependent type systems, Robinson, Alan (ed.) et al., Handbook of automated reasoning. In 2 vols. Amsterdam: North-Holland/ Elsevier; 0-444-50812-0 (vol. 2); 0-444-50813-9 (set)). 1149-1238 (2001). ZBL1005.03011.
Adams, Mark, Proof auditing formalised mathematics, ZBL07106502.
Answer: It is true that Agda currently has a much shakier foundation than say Coq or Lean. It does have an internal term syntax that could be seen as a core language (https://github.com/agda/agda/blob/master/src/full/Agda/Syntax/Internal.hs). It even has an independent typechecker for internal syntax (https://github.com/agda/agda/blob/master/src/full/Agda/TypeChecking/CheckInternal.hs).
However, this story is far from complete because it takes the signature of defined types and functions for granted. In particular, functions by dependent pattern matching are represented as case trees that need to pass coverage checking and termination checking, which means those checks are also part of Agda's 'trusted core'. In our ICFP paper last year (https://dl.acm.org/citation.cfm?doid=3243631.3236770) Andreas and I describe a potential core language for case trees, but so far there is no independent typechecker for case trees as we have for internal syntax (one day...).
Another important part of Agda's trusted core is the conversion checker, which in Agda shares most code with the constraint solver (https://github.com/agda/agda/blob/master/src/full/Agda/TypeChecking/Conversion.hs), while in Coq these are two separate algorithms. Recently I added the possibility to run the conversion checker in 'pure' mode, which is guaranteed to be free of side-effects (https://github.com/agda/agda/blob/master/src/full/Agda/TypeChecking/Conversion/Pure.hs).
As you can see, there are bits and pieces that could be part of a core language for Agda, but no real coherent story yet. In general, any attempt to build a core language will also be an exercise in selecting which features of Agda should be included and which ones should not (e.g. irrelevance, sized types, Prop, rewrite rules, cubical, ...). Yet this selection of features would have been much harder to implement if Agda had had a core language from the beginning, so not having a core language has advantages as well as disadvantages. | {
"domain": "cstheory.stackexchange",
"id": 4865,
"tags": "dependent-type, proof-assistants, agda"
} |
Ratio of bleach to water required to disinfect COVID-19? | Question: I am confused about the needed ratio of bleach to water required to disinfect COVID-19 from surfaces.
The guide on CDC states that: https://www.cdc.gov/coronavirus/2019-ncov/community/home/cleaning-disinfection.html
Diluted household bleach solutions can be used if appropriate for the surface. Follow manufacturer’s instructions for application and proper ventilation. Check to ensure the product is not past its expiration date. Never mix household bleach with ammonia or any other cleanser. Unexpired household bleach will be effective against coronaviruses when properly diluted.
Prepare a bleach solution by mixing:
5 tablespoons (1/3rd cup) bleach per gallon of water or
4 teaspoons bleach per quart of water
This is a 2% ratio. My problem is that it is not clear what "bleach" means. Chemically, I guess it's sodium hypochlorite. What's not clear is the strength of the solution I can buy in shops.
Wikipedia states that bleach in stores can be anything between 3-25%.
Does the CDC recommend the final strength of sodium hypochlorite : water to be 2%, or "bleach in stores" : water to be 2% (which can be anything between 0.06% and 0.5%)?
Disregarding this confusing CDC direction: What is the scientific recommendation for the required sodium hypochlorite : water ratio needed for disinfecting COVID-19 from surfaces?
Answer: Perhaps I can help answer your microbiology question, which is really about SARS-CoV-2, an enveloped RNA virus. There is a lot of misunderstanding about SARS-CoV-2 and the medical disease it causes, COVID-19. The confusion is understandable, as this is a novel coronavirus with a high rate of transmission, and people around the globe are still on a learning curve about it. It's important to understand how to decimate SARS-CoV-2 whenever possible. I'm an RDN, and years ago I went to grad school on an Allied Health Traineeship thanks to citizens of the U.S.A. Although now retired, as well as having been a clinician, I'm a former prof. I believe in giving back to the society which has supported me.
Many pieces in the press unfortunately use terms of SARS-CoV-2 and then COVID-19 interchangeably, which they should not. This adds to public confusion. Keep in mind that disinfection in this instance pertains to decimation of the virus SARS-CoV-2 per se.
Cleaning should precede disinfection. Disinfection is biocidal elimination on fomite surfaces of microorganisms, in this instance SARS-CoV-2. After applying disinfecting dilution be sure to allow it adequate action time and then allow surface to air dry.
When CDC says to use “5 tablespoons (aka 1/3 cup) bleach per gallon of water” or “4 teaspoons bleach per quart of water”, that is in effect 1000 ppm. CDC guidance assumes household bleach concentration to be 5-6.25%, even though most NaOCl manufacturers in the U.S.A. went to 8.25% back around 2012-2013. Check any bottle you have (or the box it came in) for the NaOCl concentration of that batch of bleach. When first manufactured the concentration is higher, but over time the actual available free chlorine level drops, and adequate potency is gone by 1 year out from the manufacturing date. If stored at a higher room temperature, cut that down to 6 months, which is more realistic. An adequate 1000 ppm of NaOCl for enough time (minimum 1 minute) can kill SARS-CoV-2. If the non-porous fomite surface happens to be a countertop where food might inadvertently be placed later, then after first cleaning, then disinfecting, be sure to rinse with potable H2O, then use a sanitizing solution dilution and allow to air dry.
See: "COVID-19 – Disinfecting with Bleach" from Michigan State U Center for Research on Ingredient Safety which will explain how to read date code of manufacturing on a household bleach (NaOCl) bottle and how much bleach to add to cold water in order to reach what is effectively a 1000 ppm dilution if bleach is still ‘fresh enough’ aka adequate concentration in bottle as purchased. The bleach concentration bottle code information can be read as follows (quoted or paraphrased from article):
Example: code E619337.
●First two characters E6 identify the company facility that manufactured the bleach.
●Second two numbers 19 tell the year the company manufactured the bleach.
●Last three numbers 337 tell the day of the year the company manufactured the bleach.
So, code E619337 tells us this bottle of bleach was manufactured at facility E6 in 2019 on the 337th day of the year (using a Julian calendar), which is December 3.
This bottle of bleach technically expires one year from December 3, 2019, so it needs to be used or disposed of by December 2, 2020.
Similarly, a product code A420027 tells us the product was manufactured at facility A4 in 2020 on the 27th day of the year, which is January 27. The product expires one year from January 27, 2020, so it needs to be used or disposed of by January 26, 2021.
Adjust the Julian calendar keeping in mind the year as 2020 is a leap year.
From experience using test strips with bleach concentrate dilutions, often that expiration timeline is closer to 6 months (commercial settings open bottle only 1 month).
canr.msu.edu/news/covid-19-disinfecting-with-bleach
Public Health Ontario has an online "Chlorine Dilution Calculator" to help anyone determine how much NaOCl "household" bleach of a given concentration level to add to how much safe tap water to achieve a certain ppm of diluted chlorine sol'n. One can calculate sanitizing or even disinfecting diluted chlorine sol'n levels using the online calculator. Although the site is in Canada, they kindly include both English and Metric units. https://www.publichealthontario.ca/en/health-topics/environmental-occupational-health/water-quality/chlorine-dilution-calculator
Advisement of 1000 ppm for a NaOCl dilution based on publication:
Kampf, G. et al. Persistence of coronaviruses on inanimate surfaces and their inactivation with biocidal agents. Journal of Hospital Infection, Volume 104, Issue 3, 246 - 251 (March 2020). https://doi.org/10.1016/j.jhin.2020.01.022
Fomite transmission (contact w/ inanimate or nonpathogenic object surface transmission exclusive of food components) reduction via biocidal agents such as 0.1%-0.5% NaOCl aka 1000-5000 ppm. “...Human coronaviruses on inanimate
surfaces can be effectively inactivated by surface disinfection procedures with 62-71% ethanol, 0.5% hydrogen peroxide or 0.1% sodium hypochlorite within 1 minute.” from pre-print https://doi.org/10.1016/j.jhin.2020.01.022
Edit: The question is really about disinfecting for SARS-CoV-2, the RNA virus which causes COVID-19 in humans.
The edit is to:
1) Clarify the actual question, which deals w/ microbiology relating to SARS-CoV-2, not a medical disease condition COVID-19;
2) Acknowledge that the press has incorrectly used the two different terms interchangeably creating public confusion & explain why many professionals are responding in forums around the world trying to clear up that confusion by providing accurate information;
3) Include current full rather than shortened links to details of NaOCl disinfection of SARS-CoV-2 with enough detail that if links are later unavailable readers will still know what to do. | {
"domain": "biology.stackexchange",
"id": 10442,
"tags": "biochemistry, microbiology, virology, coronavirus"
} |
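The claim that the CDC recipe works out to roughly 1000 ppm is easy to verify arithmetically. A sketch, assuming the bottle strengths mentioned in the answer (5.25-6% NaOCl) and approximating the total diluted volume as one gallon:

```python
# CDC recipe: 5 tablespoons (1/3 cup) of household bleach per gallon of water.
# 1 US gallon = 256 tablespoons; treating the total volume as one gallon,
# the bleach fraction is ~5/256. Bottle strengths are the answer's assumed
# 5.25% and 6% NaOCl.
TBSP_PER_GALLON = 256

def diluted_ppm(bleach_tbsp, bottle_strength_percent):
    fraction = bleach_tbsp / TBSP_PER_GALLON
    return fraction * (bottle_strength_percent / 100) * 1_000_000

print(round(diluted_ppm(5, 5.25)))  # 1025 ppm
print(round(diluted_ppm(5, 6.00)))  # 1172 ppm -- both near the 1000 ppm target
```

Either bottle strength lands close to the 1000 ppm minimum cited from Kampf et al., which is the answer's point: the recipe already bakes in the assumed bottle concentration.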
Finding power using work as $Fd$ or as change in energy | Question: This is a question I had been given a few years ago and I have found two different answers depending on how I find work. If I use $W=Fd$ vs $W=E_f-E_i$, I get two separate answers. I must be missing some assumption. This is for grade 11 Physics.
A man is lowering a 38 kg pail of nails down 15 m to the ground by a rope. He does not want the pail to hit the ground in free fall but he is not strong enough to completely stop it. Even though the pail starts at rest, when it hits the ground it is moving at 8.3 m/s. Assume no friction or air resistance. If the whole process takes 2.5 minutes, what was the power output of the man?
Answer 1
$$P=\frac{Ef-Ei}{t}$$
$$P=\frac{1/2mv^2-mgh}{t}$$
$$P=\frac{1/2*38*8.3^2-38*9.81*15}{150}$$
$$P=-29W$$
Answer 2
$$Acceleration=\frac{8.3}{150}$$
$$a=0.055...$$
$$F_{net}=Fg-Fa$$
$$F_a=F_g-F_{net}$$
$$F_a=mg-ma$$
$$F_a=370.677...$$
Now,
$$P=\frac{F_a*d}{t}$$
$$P=\frac{370.677..*15}{150}$$
$$P=-37W$$
Answer: The problem is that you have assumed the acceleration was constant and then calculated it using:
$$ a = \frac{v_{final} - v_{initial}}{time} = \frac{8.3}{150} = 0.0553~\text{m/s}^2 $$
But suppose we calculate the distance that the bucket would have moved if the acceleration had this value. That distance is:
$$ s = ut + \tfrac12 at^2 = \tfrac12 \times 0.0553 \times 150^2 = 623~\text{m} $$
But the bucket only moved $15$ metres, so the conclusion is that the acceleration cannot have been constant. And if the acceleration does vary with time the data you have been given is not enough for you to calculate this variation. That means you cannot work out the net force on the bucket and therefore you cannot use that net force to calculate the work done. | {
"domain": "physics.stackexchange",
"id": 78983,
"tags": "homework-and-exercises, newtonian-mechanics, energy, work, power"
} |
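Both of the question's calculations, plus the answer's consistency check, can be reproduced in a few lines (variable names are ad hoc):

```python
# Reproduce the two calculations from the question.
m, v, h, t, g = 38.0, 8.3, 15.0, 150.0, 9.81

# Answer 1: power from the change in mechanical energy.
p_energy = (0.5 * m * v**2 - m * g * h) / t   # about -29 W

# Answer 2: assumes the acceleration was constant, a = v/t.
a = v / t
f_applied = m * g - m * a
p_force = -f_applied * h / t                  # about -37 W

# The answer's consistency check: with that constant acceleration the
# bucket would travel 622.5 m in 150 s, not the 15 m given, so the
# constant-acceleration assumption behind Answer 2 is invalid.
s = 0.5 * a * t**2
print(p_energy, p_force, s)
```

The mismatch between the 622.5 m implied by constant acceleration and the 15 m actually travelled is exactly why the two methods disagree: only the energy-based calculation is valid here.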
Solving particle in a ring problem | Question: While solving the particle in a ring we get a general solution of the form:
$$\psi(x) = A\exp(imx) + B \exp(-imx)$$
Where $m=\left(\frac{2IE}{\hbar^2}\right)^{1/2}$ and $I$ is the moment of inertia. Imposing the boundary condition, I get that $m$ should be an integer, but most of the books drop one of the terms in the general solution. Why is that? They write the solution as $\psi(x)=A\exp(imx)$. I understand $m$ is an integer, but this is obviously not the general solution. How are they getting it?
Answer: Since you haven't provided a reference, I can only guess why this is done. However, I am quite sure that it's just shorthand, as @baponkar points out in the comment above. $m$ can be either positive or negative, and so the linearly independent solutions can be compactly written as $e^{i m x}$ without loss of generality.
The solution that you have written is the solution to the time-independent Schrodinger Equation $$-\frac{\hbar^2}{2 M R^2}\frac{\partial^2\psi}{\partial\theta^2} = E\,\psi.$$
The solutions to this equation are $e^{im\theta}$ and $e^{-im\theta}$ where the boundary conditions force $m$ to be an integer. The more general state of definite energy can therefore be written as $$\psi_m(\theta) = A e^{i m \theta} + B e^{-i m \theta},$$ with an energy eigenvalue of $$E_m = \frac{\hbar^2 m^2}{2MR^2}.$$
As you can see, in general when $m \neq 0$, $\psi_m$ and $\psi_{-m}$ are not the same, but they give the same energy eigenvalue, and thus such pairs of states are degenerate. ($\psi_0$ is just a constant, and corresponds to the energy 0, and thus isn't degenerate.)
An arbitrary state of the system (not a state of definite energy) can be expressed as a linear combination of the states of definite energy $\psi_m$ in the following way:
$$\Psi(\theta,t) = \sum_{m=0}^\infty \left(A_m e^{im\theta} + B_m e^{-im\theta}\right) e^{-iE_mt/\hbar},$$
but since $E_m=E_{-m}$, you could just as well write it as $$\Psi(\theta,t) = \sum_{m=-\infty}^\infty C_m e^{im\theta}e^{-iE_mt/\hbar}.$$
You could interpret the last decomposition as decomposing the wavefunction in the basis $\{ e^{im\theta}, m\in\mathbb{Z}\},$ which is a perfectly valid basis that spans the space. (Any arbitrary periodic function can be decomposed in terms of complex exponentials.)
There is a slight advantage to using this basis, as I have pointed out in my answer to Wavefunction of a particle on a polar potential: the eigenfunctions $e^{im\theta}$ correspond not only to eigenfunctions of $\hat{H}$ but also of the ($z-$component of the) angular momentum $\hat{L}_z$. Thus, unlike $\psi_m$, the states $e^{im\theta}$ have a specific value of angular momentum, whose sign is given by the sign of $m$. (You could very crudely think of them as "right-rotating" or "left-rotating" solutions, depending on whether $m$ is positive or negative respectively.) | {
"domain": "physics.stackexchange",
"id": 73196,
"tags": "quantum-mechanics, schroedinger-equation, boundary-conditions"
} |
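The degeneracy and orthogonality claims in this answer can be checked numerically. A small sketch (using unit values $\hbar = M = R = 1$ for simplicity) that approximates $\int_0^{2\pi} e^{-im\theta} e^{in\theta}\, d\theta$ on a uniform grid and compares $E_m$ with $E_{-m}$:

```python
import cmath
import math

def inner(m, n, steps=1000):
    """Approximate the overlap integral of e^{imθ} and e^{inθ} over [0, 2π)."""
    dtheta = 2 * math.pi / steps
    total = 0j
    for k in range(steps):
        theta = k * dtheta
        total += cmath.exp(-1j * m * theta) * cmath.exp(1j * n * theta) * dtheta
    return total

def energy(m, M=1.0, R=1.0, hbar=1.0):
    # E_m = hbar^2 m^2 / (2 M R^2): even in m, so m and -m are degenerate
    return hbar**2 * m**2 / (2 * M * R**2)

print(abs(inner(2, 2)))          # ~2π: normalisation of e^{i2θ}
print(abs(inner(2, -2)))         # ~0: e^{i2θ} and e^{-i2θ} are orthogonal
print(energy(2) == energy(-2))   # True: yet they share one energy eigenvalue
```

This is the content of the answer in miniature: $e^{im\theta}$ and $e^{-im\theta}$ are genuinely different (orthogonal) states, yet degenerate in energy, which is why either one alone is an equally valid energy eigenfunction.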
How does rosdep install decide which packages to apt-get install? | Question:
When running rosdep install on a clean machine it pulls down lots of packages via apt-get or whatever package manager is installed. As far as I can determine, it is something connected with the package.xml, but sometimes a <depend>foo</depend> ends up installing libfoo-dev.
Where is the code that does this? Is there an option to apt-get remove afterwards as a Docker image won't usually need the -dev variants?
I would guess there isn't an automatic way to do this, but if I could find the logic I could cobble something together for my own needs.
Originally posted by KenYN on ROS Answers with karma: 541 on 2019-03-19
Post score: 0
Answer:
When running rosdep install on a clean machine it pulls down lots of packages from apt-get or whatever is installed. As far as I can determine, it is something connected with the package.xml, but sometimes a <depend>foo</depend> ends up installing libfoo-dev.
Please see whether #q215059 and #q217475 answer your questions sufficiently.
Is there an option to apt-get remove afterwards as a Docker image won't usually need the -dev variants?
No, there is no option for that (in rosdep at least), but package manifests should split out their dependencies into exec and build dependencies. If they do that properly, you could automate removing the build dependencies. Unfortunately, this would also depend on the rosdep db containing separate keys for devel and runtime dependencies, which isn't always the case (and, in addition, on all the platforms that you'd like to deploy to actually splitting dependencies such that they are different for the build and run phases, which isn't always the case either).
I would guess there isn't an automatic way to do this, but if I could find the logic I could cobble something together for my own needs.
You might be interested in taking a slightly different approach, seeing as you're already using Docker.
See the Hermetic Robot Deployment Using Multi-Stage Dockers presentation from ROSCon'18: video (slides).
Originally posted by gvdhoorn with karma: 86574 on 2019-03-19
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by KenYN on 2019-03-19:
#q215059 has a lot of useful info. The slides look good too. Building our own distribution in one container then copying over the /opt/ros/ directory to another and installing things until it stops breaking seems the way to go.
Comment by gvdhoorn on 2019-03-19:
Building our own distribution in one container then copying over the /opt/ros/ directory to another and installing things until it stops breaking seems the way to go.
I'm not sure that is what the presenter means / they are doing. I believe the video of the presentation should clear things up a bit. | {
"domain": "robotics.stackexchange",
"id": 32675,
"tags": "rosdep, ros-kinetic"
} |
Calculating charge density | Question: When calculating uniform charge density, I arrived at a solution with units: $$VC^2 / Nm^2$$
The answer is telling me that this is equivalent to: $$C/m^2$$
where:
$$m : meter$$
$$C : Coulomb$$
$$V : Volume$$
Perhaps someone more experienced with the field of charge density could explain why the units simplify.
Answer: $N=CV/m$, hence your solution turns into $\frac{C}{m}$, which doesn't look right.
(electric field in V/m = force in N / charge in C)
I suppose that answers your question. You can use Wolfram Alpha for unit conversion, for example in your case.
And here you'll see how force relates to voltage:
http://hyperphysics.phy-astr.gsu.edu/hbase/electric/elefie.html | {
"domain": "physics.stackexchange",
"id": 20985,
"tags": "homework-and-exercises, charge, units, density, unit-conversion"
} |
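The unit simplification in this answer can be checked mechanically by tracking exponents of the base units and substituting $N = C \cdot V / m$ as the answer does (a small ad hoc helper, not a real units library):

```python
from collections import Counter

def combine(*terms):
    """Multiply units together: each term is (unit_dict, power)."""
    total = Counter()
    for unit, power in terms:
        for base, exp in unit.items():
            total[base] += exp * power
    return {b: e for b, e in total.items() if e}

V, C, m = {"V": 1}, {"C": 1}, {"m": 1}
N = combine((C, 1), (V, 1), (m, -1))   # the answer's relation N = C*V/m

# The question's units: V * C^2 / (N * m^2)
result = combine((V, 1), (C, 2), (N, -1), (m, -2))
print(result)  # {'C': 1, 'm': -1}, i.e. C/m, exactly as the answer finds
```

The volt and one power of the coulomb cancel through the newton, leaving C/m rather than the expected C/m², which supports the answer's suspicion that something upstream in the original calculation is off by one length dimension.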
Definition of major, minor, and trace elements in analytical chemistry | Question: What is the definition of major, minor, and trace elements in a sample? I am unable to find this anywhere on the internet. Is it something like:
Major elements are those > 0.1 %
Minor elements are those between 100 ppm and 0.1 %
Trace elements are those < 100 ppm
Answer: There is no definite classification of what the major, minor and trace elements are. While the numbers you mentioned are indeed the rule of thumb, they are rather loosely applied; it depends on the field (geochemistry, solid state physics, analytical chemistry). There are, however, several guidelines.
Major elements are the elements that define the material in question. Some examples are:
Gold and silver in electrum,
Yttrium, aluminum and oxygen in YAG,
Hydrogen and oxygen in the ocean.
If you change the major elements, you are essentially changing the material. If you replace the hydrogen with potassium then instead of water you will get potash, which is not the same material and has vastly different properties (although this example might be a bit extreme).
Major elements are usually measured in percentage (either mass or molar), and are commonly above 1% of the chemical composition of the material.
Trace elements are elements that occur in such small concentrations that they do not change the essence of what a material is. Some examples:
Copper and platinum in electrum. It's still a gold-silver alloy, just with some other metals. The properties (such as colour) may change slightly.
Neodymium, erbium, etc. in YAG. YAG is used to generate lasers, and doping it with trace amounts of other rare earth elements may change certain properties of the laser (wavelength). However, it's still a YAG.
Almost every other element other than H, C, Cl and Na in ocean water.
Trace elements usually have concentrations of below 0.1%. In this case it's easier to measure them in units of ppm (parts per million, this mostly refers to mass). Remember: 1% = 10000 ppm. In extreme cases ppb (parts per billion) may be the appropriate unit.
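A tiny helper (an illustrative sketch, not from the original answer; the function names and the 1%/0.1% cut-offs are the loose rule-of-thumb values discussed here, not a standard) capturing the 1% = 10000 ppm conversion:

```python
def percent_to_ppm(percent):
    """Mass fraction in percent -> parts per million (1% = 10000 ppm)."""
    return percent * 10_000

def classify_element(concentration_percent):
    """Rough major/minor/trace label using the rule-of-thumb cut-offs
    from the text (above 1%: major; 1%-0.1%: minor; below 0.1%: trace).
    Real usage is field- and context-dependent."""
    if concentration_percent >= 1.0:
        return "major"
    if concentration_percent >= 0.1:
        return "minor"
    return "trace"

# Chlorine in seawater (~1.9% by mass) comes out "major" by this cut-off
label = classify_element(1.9)
```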
Minor elements are everything in between. Technically, this means things between 1% and 0.1%. You might think of sodium and chlorine in ocean water as an example of minor elements (even though they're both slightly above 1%). To be honest, other than a mention in introductory textbooks about analytical chemistry and geochemistry, I rarely see this term used in the professional literature. | {
"domain": "chemistry.stackexchange",
"id": 2363,
"tags": "analytical-chemistry, terminology"
} |
Longest substring with at most 2 distinct characters | Question: I think I solved the following problem. However, I was wondering if there was a faster or more efficient way to solve this.
I believe the runtime is O(n) and space is O(1) (O(256) assuming ASCII)
# Longest substring with at most 2 distinct characters
#
# Given a string S, find the length of the longest substring T that contains at most 2 distinct characters
#
# For example,
# Given S = "eceba"
# T is "ece", whose length is 3
def longest_substring(string):
if string is None or len(string) == 1 or len(string) == 0:
return 0
distinct_set = set()
longest = 0
current = 0
for i in string:
set_len = len(distinct_set)
if set_len < 2:
if i in distinct_set:
current += 1
if i not in distinct_set:
distinct_set.add(i)
current += 1
if set_len == 2:
if i in distinct_set:
current += 1
if i not in distinct_set:
distinct_set.clear()
distinct_set.add(i)
if current > longest:
longest = current
current = 1
if current > longest:
longest = current
return longest
Input:
test_cases = [None, "", "a", "ab", "aab", "bab", "babc", "bbbbcccac"]
Output:
0, 0, 0, 2, 3, 3, 3, 7
Answer: Two style issues before we continue on improving your algorithm:
Use docstrings to document functions, not comments in front of the function. In addition, don't let the comment use one name for a variable and the function another. Keep them in sync, as otherwise the comment/docstring will confuse more than help.
Add blank lines in longer code segments. This can greatly enhance the reading and understanding of your code. I tend to add a blank line before for, while and if (and elif and else), usually.
Other than these your variable and function names are OK.
Regarding improvements to your algorithm, here are some other comments:
Allow texts with only one character, and length 1 – Both of these conform to at most two distinct characters, and should be included in the final set
Bug: When a character not in the set appears, you set length to 1 – You clear the set, set it to the current character, and set the length to one, but if your text is ceeeeebbbbbb (and you come to the b) the length should really be 5 (or 6) and your distinct set should consist of e and b. You forget that the e could still be part of the longest substring, even though the combination c and e didn't yield the longest one.
Use of set is kind of heavy – It kind of makes sense to use a set to keep the two distinct characters, and if it were a larger subset I would argue that is wise, but as it is only two characters, it is faster to just keep track of the last two characters, and use those for length calculations.
Move increment of current out of loop – As you increment it in all the if statements except the last, you could move this to the front of the loop. The function will work the same, but look nicer. (Do still keep the current = 1 near the end)
Move i in distinct_set out a level – This is another block of code which is the same, and could be kept at an outer level. The point I'm trying to make is to try to avoid repeating code; if multiple if blocks share the same code, move it out to the outer level.
This leads to the following version of your code:
def longest_substring_mod(text):
"""Longest substring with at most 2 distinct characters."""
if text is None or len(text) == 0:
return 0
distinct_characters = set()
longest = 0
current = 0
for i in text:
if i not in distinct_characters:
if len(distinct_characters) == 2:
distinct_characters.clear()
if current > longest:
longest = current
# This should've been corrected to more than
# one if previous character is repeated
current = 1
distinct_characters.add(i)
if i in distinct_characters:
current += 1
if current > longest:
longest = current
return longest
This behaves mostly the same as your code; the difference is that this also returns 1 when the text is only one character long. The other flaws/features are kept as is. Do notice how it has simpler logic, and is a little easier to read, whilst still doing the same stuff.
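For comparison (this is an added sketch, not part of the original answer, and the function name is made up): the textbook sliding-window technique solves the same problem in O(n) without the reset-and-recount subtleties discussed above.

```python
def longest_two_distinct(text):
    """Length of the longest substring with at most 2 distinct characters,
    using a classic sliding window."""
    if not text:
        return 0
    counts = {}   # character -> occurrences inside the current window
    left = 0
    best = 0
    for right, ch in enumerate(text):
        counts[ch] = counts.get(ch, 0) + 1
        while len(counts) > 2:             # shrink until <= 2 distinct chars
            counts[text[left]] -= 1
            if counts[text[left]] == 0:
                del counts[text[left]]
            left += 1
        best = max(best, right - left + 1)
    return best

# Handles the tricky cases from this thread:
# "ceeeeebbbbb" -> 10 and "abbaaaaeeeeeeeeee" -> 14
```

Because the window only ever grows on the right and shrinks on the left, each character is visited at most twice, so the whole pass stays linear.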
Alternate algorithm
In my first naive approach at a different algorithm, I used a loop going through the string once, and stored the position of where I first saw a given character. This worked rather nicely, until I considered the case "abbaaaaeeeeeeeeee", where the first occurrence of "a" is at the start, whilst the longest substring should start with the "a" at position 3. This made me change the length calculation from using positions to stripping off the characters from the end of the string. This led to the following code:
def longest_substring(text):
"""Longest substring of text with at most 2 distinct characters.
>>> ### Using doctests test cases
>>> # Test cases from OP
>>> [longest_substring(s) for s in [None, "", "a", "ab", "aab",
... "bab", "babc", "bbbbcccac"]]
[0, 0, 1, 2, 3, 3, 3, 7]
>>> # Test cases illustrating fail in OP code from 200_success
>>> # See http://codereview.stackexchange.com/a/111277/78136
>>> [longest_substring(s) for s in ["eeeeebbbbb", "eeeeebbbbbc",
... "ceeeeebbbbb"]]
[10, 10, 10]
>>> # Single character tests
>>> [longest_substring(s) for s in ["a", "aa", "aaaaaaaaaa"]]
[1, 2, 10]
>>> # Icky test case, as it needs to get length of last a-repeats
>>> longest_substring("abbaaaaeeeeeeeeee")
14
"""
if text is None or len(text) == 0:
return 0
first_character = text[0]
second_character = None
max_length = 0
for idx, character in enumerate(text):
# If character is either of the two last seen characters, continue
if character == first_character or character == second_character:
continue
# ... else if we have no previous character, set it and continue
elif second_character is None:
second_character = character
continue
# Now we have a different character at idx, so compare length of
# previous substring, and set max_length accordingly
max_length = max(max_length,
idx - len(text[:idx].rstrip(first_character +
second_character)))
# Shift the distinct set of characters
first_character, second_character = text[idx-1:idx], character
# Return max_length within text, or length of last subtext
if second_character:
return max(max_length,
len(text) - len(text.rstrip(first_character +
second_character)))
else:
# No prev_character is set, meaning we only have one character in text
return len(text)
def doctest():
"""Do the doctests on this module."""
import doctest
doctest.testmod()
if __name__ == '__main__':
doctest()
print([longest_substring(test) for test in [None, "", "a", "ab", "aab",
"bab", "babc", "bbbbcccac"]])
print([longest_substring(test) for test in ["eeeeebbbbb", "eeeeebbbbbc",
"ceeeeebbbbb"]])
print([longest_substring(test) for test in ["aaaaa", "edfraaaaaa", "baacba",
"babababababaaaabbbabz", "babc", "bbbbcccac"]])
print(longest_substring("abbaaaaeeeeeeeeee"))
Some extra comments related to my code:
I've implemented testing using the doctest module, as indicated by 200_success in a comment on another answer. In addition I've extended the test cases with some from other answers, and some of my own. Note that testing the OP's code with these test cases fails...
This implementation is faster in some cases, and similar in others. I do however think it is a slightly cleaner implementation, and it is a correct implementation.
It could probably be made even faster by finding a way to avoid the rstrip() function, but that would also require some extra variables to keep track of both the first time the first character is seen, and the first time we've seen the current character. This should allow for a simple subtraction instead of doing text slicing and calculation. | {
"domain": "codereview.stackexchange",
"id": 16956,
"tags": "python, performance, strings"
} |
create an executable with custom msgs | Question:
Hi,
I want to create a ROS executable that uses a custom msg. I used catkin to make a package with a custom message. I ran catkin_make and copied the executable to another location; the node seems to run fine (I put in print statements that are continuously executing). The topic I'm publishing appears in the rostopic list, but a rostopic echo mytopic results in the following:
ERROR: Cannot load message class for mynode/mycustommsg. Are your messages built?
However, a rosmsg show mycustommsg does show the msg.
What should I be doing ?
Thanks.
Originally posted by canatan on ROS Answers with karma: 41 on 2013-07-31
Post score: 0
Answer:
Make sure you are in the same environment for both rosmsg and rostopic.
And be careful about copying executables around. Many use shared libs and need the environment in which they were either built or installed.
Originally posted by tfoote with karma: 58457 on 2013-10-11
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 15120,
"tags": "ros, catkin, msg"
} |
Implementation of a shared pointer: constructors and destructor | Question: I am writing my own simple shared pointer. I am asking for a review of the existing functions (I understand that my implementation is not complete; e.g. there is no operator* yet).
Please review the following for correctness:
Copy/move constructors and operator=
Destructors
My code
template<typename T>
class my_shared_ptr {
public:
my_shared_ptr() {}
my_shared_ptr(T* ptr) {
ptr_ = ptr;
counter_ = new size_t(1);
}
my_shared_ptr(const my_shared_ptr& other) {
destroy_current();
ptr_ = other.ptr_;
counter_ = other.counter_;
if (other.ptr_ != nullptr) {
*counter_ += 1;
}
}
my_shared_ptr(my_shared_ptr&& other) {
ptr_ = other.ptr_;
counter_ = other.counter_;
other.ptr_ = nullptr;
other.counter_ = nullptr;
}
my_shared_ptr& operator=(const my_shared_ptr& other) {
destroy_current();
ptr_ = other.ptr_;
counter_ = other.counter_;
if (other.ptr_ != nullptr) {
*counter_ += 1;
}
}
my_shared_ptr&& operator=(my_shared_ptr&& other) {
ptr_ = other.ptr_;
counter_ = other.counter_;
other.ptr_ = nullptr;
other.counter_ = nullptr;
}
~my_shared_ptr() {
destroy_current();
}
private:
void destroy_current() {
if (counter_ and *counter_ > 1) {
*counter_ -= 1;
} else {
delete ptr_;
delete counter_;
}
}
private:
T* ptr_ = nullptr;
size_t* counter_ = nullptr;
};
template<typename T, typename... Args>
my_shared_ptr<T> make_my_shared(Args&&... args) {
auto ptr = new T(std::forward<Args>(args)...);
return my_shared_ptr<T>(ptr);
}
struct A {
int val_ = 0;
A(int val) : val_(val) { std::cout << "A(" << val_ << ")" << std::endl ; }
~A() { std::cout << "~A(" << val_ << ")" << std::endl ; }
};
int main() {
std::cout << "-----------\n";
{
my_shared_ptr<A> a1;
//my_shared_ptr<A> a2 = make_my_shared<A>(2);
my_shared_ptr<A> a2(new A(2));
{
my_shared_ptr<A> a3(new A(3));
std::cout << "log 1" << std::endl;
a1 = a3;
std::cout << "log 2" << std::endl;
a2 = a3;
std::cout << "log 3" << std::endl;
}
std::cout << "log 4" << std::endl;
}
std::cout << "Program finished!" << std::endl ;
}
Answer: Overview
Self assignment is broken.
Code Review
First issue is here:
my_shared_ptr(const my_shared_ptr& other) {
// You are creating this object for the first time
// So there is never going to be anything to destroy.
destroy_current();
ptr_ = other.ptr_;
counter_ = other.counter_;
if (other.ptr_ != nullptr) {
*counter_ += 1;
}
}
You really want move semantics to be exception safe.
I don't see any reason why it could not be exception safe.
my_shared_ptr(my_shared_ptr&& other) {
// So this function should be marked as noexcept.
ptr_ = other.ptr_;
counter_ = other.counter_;
// There is already a command to set and move (see: std::exchange)
other.ptr_ = nullptr;
other.counter_ = nullptr;
}
This is actually broken.
Self assignment is going to release the memory, then increment the counter through a pointer to the newly released memory.
my_shared_ptr& operator=(const my_shared_ptr& other) {
destroy_current();
ptr_ = other.ptr_;
counter_ = other.counter_;
if (other.ptr_ != nullptr) {
*counter_ += 1;
}
}
Try:
my_shared_ptr<int> first(new int{4});
my_shared_ptr<int>& ref = first;
first = ref; // bang.
Again, this should probably be exception safe, so mark it as such. Also, you can use std::exchange() to do most of the work.
my_shared_ptr&& operator=(my_shared_ptr&& other) {
ptr_ = other.ptr_;
counter_ = other.counter_;
other.ptr_ = nullptr;
other.counter_ = nullptr;
}
How I would write it:
template<typename T>
class my_shared_ptr
{
    T* ptr = nullptr;
std::size_t* size = nullptr;
bool decrementCountReachedZero() const
{
--(*size);
return *size == 0;
}
public:
~my_shared_ptr()
{
if (size && decrementCountReachedZero()) {
delete ptr;
delete size;
}
}
// Note: Default constructor
// Uses the default initialization values.
my_shared_ptr() {}
// Take ownership of a pointer.
explicit my_shared_ptr(T* take)
: ptr(take)
, size(new std::size_t{1})
{}
// You may want a constructor that takes a `nullptr`
// Because the above will not allow nullptr because of
// the explicit
my_shared_ptr(std::nullptr_t)
: ptr(nullptr)
, size(nullptr)
{}
// Standard Copy Constructor.
my_shared_ptr(my_shared_ptr const& copy)
: ptr(copy.ptr)
, size(copy.size)
{
// OK we have constructed now.
// So increment the size if it exists
if (size) {
++(*size);
}
}
// Standard Move Constructor
my_shared_ptr(my_shared_ptr&& move) noexcept
: ptr(std::exchange(move.ptr, nullptr))
, size(std::exchange(move.size, nullptr))
{
// No action required
// as original objects pointers have been set to null
// Thus effectively transferring ownership of any
// pointer.
}
// NOTE HERE
//
// I am writing both the copy and move assignment
// operators out in full. This is NOT the best way
// to do it. I am doing this to help illustrate the
// correct way. (See below)
// For normal copy assignment.
// Use the copy and swap idiom.
my_shared_ptr& operator=(my_shared_ptr const& copy)
{
my_shared_ptr tmp{copy}; // increment of any counters
// happens in the copy.
// decrement of any counters
// happens when this object
// is destroyed (which will not
// be the same as the object created :-)
// Do the work.
// Swap the tmp and the current object.
// everything should be in the correct place.
tmp.swap(*this);
return *this;
}
// For normal move assignment.
// Make sure the current object owned by "this" is destroyed.
// Overwrite "this" one and set "move" old version to nullptr.
my_shared_ptr& operator=(my_shared_ptr&& move) noexcept
{
my_shared_ptr tmp{std::move(*this)};// This moves the current object
// into a temp. This will be
// destroyed at end of scope
// and thus reduce the counter
// if there is one.
// Do the work.
// The current object is now null
// So we swap with the source("move") thus putting nulls
// into "move" and setting this object with the new value.
move.swap(*this);
return *this;
}
// NOTE: You will notice that both versions of
// the assignment operator are identical.
// So there is a neat trick to simplify both these
// into a single assignment operator that handles
// both situations:
// Simply pass by value.
// If it is an lvalue it is copied.
// If it is an rvalue it will be moved.
// Which has the identical operation as the tmp variable
my_shared_ptr& operator=(my_shared_ptr tmp) noexcept
{
tmp.swap(*this);
return *this;
}
// At this point tmp goes out of scope.
// destroying the tmp copy or the old value
// Utility function.
void swap(my_shared_ptr& other) noexcept
{
std::swap(ptr, other.ptr);
std::swap(size, other.size);
}
friend void swap(my_shared_ptr& lhs, my_shared_ptr& rhs)
{
lhs.swap(rhs);
}
}; | {
"domain": "codereview.stackexchange",
"id": 44557,
"tags": "c++, reinventing-the-wheel, memory-management, pointers"
} |
Researching Black Hole Spin/Mass | Question: I'm a high school student investigating how one can predict the mass and spin of the product of a binary black hole system. I'm fluent in Calculus (Single Variable), Number Theory, Graph Theory, and High School Physics.
I understand that Black Holes involve Angular Momentum, Schwarzschild Radii, Einstein's Field Equations and much more. However, I have no knowledge of any of these topics. As such, it's quite overwhelming and I'm unsure where to begin (I don't have much, if any, experience in Astronomy). Despite this hurdle, I'm prepared to learn any new ideas/concepts required to make significant progress on my research.
What books/articles/papers/topics should I read to begin understanding black holes and to begin making progress on my research?
Answer: Don't let people tell you that you need to study 5 years of math to understand anything about general relativity. That's not true. I recently got a chance to teach a gen ed course at my college on special and general relativity, which was a lot of fun. Here is a reading list based on that course, which would probably be at a good level for you, or maybe slightly too easy. These books are all fairly cheap, except possibly the last 2. On special relativity:
Takeuchi, An illustrated guide to relativity, ch. 1-9 only
Stannard, Relativity: a very short introduction
After this we did mostly cosmology, not as much on black holes. A good book on GR at this level (although we didn't use it) is
Geroch, General relativity from A to B.
I wrote up my lecture notes on the course as a brief book, which is free online:
http://lightandmatter.com/poets/
To get into a somewhat more mathematical treatment of black holes:
Taylor and Wheeler, Exploring black holes: introduction to general
relativity
After that, a good GR book that is written for undergrads (not grad students, like most GR books) is:
Hartle, Gravity: an introduction to Einstein's general relativity
This would give you a pretty decent beginner's understanding of GR, including some of the math. At that point you would have more of a mental map of where to go next to achieve your goal. Good luck, and have fun! | {
"domain": "physics.stackexchange",
"id": 62954,
"tags": "general-relativity, black-holes, astrophysics, resource-recommendations, education"
} |
Do languages in $\mathsf{coRE} \setminus \mathsf{R}$ have Turing machines? | Question: What can we say about languages in $\mathsf{coRE} \setminus \mathsf{R}$? Are there Turing machines for these languages?
I know that $\overline{HP} \in \mathsf{coRE}$ doesn't have a Turing machine, and also that all the languages that do have Turing machines are in $\mathsf{RE}$, so is it true that for any language in $\mathsf{coRE} \setminus \mathsf{R}$ there isn't a Turing machine? I wonder why that is so; can someone elaborate?
Answer: We can associate a language to a Turing machine in several ways.
If the Turing machine halts on all inputs, then the language accepted by the Turing machine consists of all words which cause the Turing machine to halt in an accepting state. The class $\mathsf{R}$ consists of all languages which are accepted by some Turing machine that halts on all inputs.
For an arbitrary Turing machine, the language recognized by the Turing machine consists of all words that cause the Turing machine to halt (in any state). The class $\mathsf{RE}$ consists of all languages which are recognized by some Turing machine.
If $L \in \mathsf{coRE} \setminus \mathsf{R}$, then in particular $L \notin \mathsf{R}$, and so no Turing machine accepts $L$. If $L$ were recognized by some Turing machine then $L \in \mathsf{RE}$. However, this is impossible, since then $L \in \mathsf{RE} \cap \mathsf{coRE} = \mathsf{R}$. | {
"domain": "cs.stackexchange",
"id": 16939,
"tags": "turing-machines, computability"
} |
Entanglement between what? | Question: According to the standard definition of "Entropy of Entanglement"
https://en.wikipedia.org/wiki/Entropy_of_entanglement
one starts from the density matrix of a pure state
$$
\rho=|\psi\rangle\langle\psi|,
$$
then divides the system into two parts, $A$ and $B$, traces away the degrees of freedom of one of the two subsystems, say subsystem B, and thus obtains the reduced density matrix of the remaining subsystem
$$
\rho_A=\mathrm{Tr}_B(\rho).
$$
Eventually, the entanglement between subsystem $A$ and subsystem $B$ is given by the Von Neumann entropy of $\rho_a$:
$$
S(\rho_a)=-\mathrm{Tr}[\rho_A\,\log\rho_A].
$$
My question is: does the choice of the bipartition play an essential role? I think that the final result strongly depends on how one chooses subsystem $A$ (and, of course, the complementary subsystem $B$). To my knowledge, in fact, books do not emphasize how important the choice of the partition is. Is there a reason?
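As a concrete numerical illustration of the definitions above (an added sketch, not part of the original post; the helper names are made up), here is the bipartite entropy computed for a two-qubit Bell state and for a product state. The amplitudes are real, so complex conjugation is omitted:

```python
import math

def reduced_density_matrix(psi, dim=2):
    """psi maps basis labels (i, j) of subsystems A x B to (real) amplitudes.
    Returns rho_A with entries rho_A[i][j] = sum_k psi(i,k) * psi(j,k)."""
    return [[sum(psi.get((i, k), 0.0) * psi.get((j, k), 0.0) for k in range(dim))
             for j in range(dim)]
            for i in range(dim)]

def von_neumann_entropy(eigenvalues):
    """S = -sum_i lambda_i log(lambda_i), skipping zero eigenvalues."""
    return -sum(l * math.log(l) for l in eigenvalues if l > 0)

# Bell state (|00> + |11>)/sqrt(2): tracing out B leaves rho_A = I/2,
# which is diagonal here, so its eigenvalues are the diagonal entries.
bell = {(0, 0): 1 / math.sqrt(2), (1, 1): 1 / math.sqrt(2)}
rho_a = reduced_density_matrix(bell)
entangled_S = von_neumann_entropy([rho_a[0][0], rho_a[1][1]])   # = log 2

# Product state |00>: zero entanglement entropy for the same bipartition.
rho_p = reduced_density_matrix({(0, 0): 1.0})
separable_S = von_neumann_entropy([rho_p[0][0], rho_p[1][1]])   # = 0
```

The same machinery applied to a different split of the degrees of freedom generally gives a different value, which is exactly the partition dependence asked about.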
Answer: Indeed, the definition is pointless without a choice of partition of the underlying space. It is true that sometimes this is not made explicit. For example, people often talk about "entangled particles", but they should be really talking about specific properties of the particles being entangled, not the particles themselves. Similarly, you can have entanglement between different degrees of freedom of the same particle (though you might not have "nonlocality" in such cases). | {
"domain": "physics.stackexchange",
"id": 64948,
"tags": "quantum-information, entropy, quantum-entanglement"
} |
Calculate the number of models of a KB | Question:
Suppose we had a domain with two individuals, $x$ and $y$. Suppose we had two predicate symbols $p$ and $q$ and three constants $a$, $b$, and $c$. Suppose we had the knowledge base KB defined by:
$p(X) \leftarrow q(X)$
$q(a)$
How many of these interpretations are models of KB?
So we know that in total there are 128 interpretations (Models and non-models).
Constants $a,b,c$ can have two different individuals $\{x,y\}$: $2^3 = 8$
There are two possible values $\{true,false\}$ each for: $\pi(p(x)), \pi(p(y))$: $2^2 = 4$
There are two possible values $\{true,false\}$ each for: $\pi(q(x)), \pi(q(y))$: $2^2 = 4$
$8 \cdot 4 \cdot 4 = 128$
But now we have to subtract all the interpretations that are not acceptable (no models) from 128. Supposedly the solution should be 24, but I cannot wrap my head around it.
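A brute-force enumeration over all 128 interpretations confirms the target count (an added illustration, not part of the original post; the helper name is made up):

```python
from itertools import product

def count_models():
    """Count interpretations satisfying KB = { p(X) <- q(X), q(a) }
    over individuals {x, y} and constants a, b, c."""
    individuals = ("x", "y")
    count = 0
    # choose denotations for a, b, c and truth values for p, q on x, y
    for a, b, c in product(individuals, repeat=3):
        for px, py, qx, qy in product((True, False), repeat=4):
            p = {"x": px, "y": py}
            q = {"x": qx, "y": qy}
            fact_holds = q[a]                    # q(a) must be true
            rule_holds = all(p[i] or not q[i]    # q(X) implies p(X)
                             for i in individuals)
            if fact_holds and rule_holds:
                count += 1
    return count

# count_models() returns 24, matching the expected solution
```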
Answer: Here's how to count the models:
There are two options for $a$. Denote the option not chosen by $\bar{a}$ (so if $a = x$ then $\bar{a} = y$, and if $a = y$ then $\bar{a} = x$).
There is one option for $p(a),q(a)$: both have to hold.
There are three options for $p(\bar{a}),q(\bar{a})$.
There are two options each for $b,c$.
In total, we get $2 \cdot 1 \cdot 3 \cdot 2 \cdot 2 = 24$. | {
"domain": "cs.stackexchange",
"id": 15901,
"tags": "artificial-intelligence"
} |
Why is the sky never green? It can be blue or orange, and green is in between! | Question: I, like everybody I suppose, have read the explanations why the colour of the sky is blue:
... the two most common types of matter present in the atmosphere are
gaseous nitrogen and oxygen. These particles are most effective in
scattering the higher frequency and shorter wavelength portions of the
visible light spectrum. This scattering process involves the
absorption of a light wave by an atom followed by reemission of a
light wave in a variety of directions. The amount of multidirectional
scattering that occurs is dependent upon the frequency of the light.
... So as white light.. from the sun passes through our atmosphere,
the high frequencies become scattered by atmospheric particles while
the lower frequencies are most likely to pass through the atmosphere
without a significant alteration in their direction. This scattering
of the higher frequencies of light illuminates the skies with light on
the BIV end of the visible spectrum. Compared to blue light, violet
light is most easily scattered by atmospheric particles. However, our
eyes are more sensitive to light with blue frequencies. Thus, we view
the skies as being blue in color.
and why sunsets are red:
... the light that is not scattered is able to pass through our
atmosphere and reach our eyes in a rather non-interrupted path. The
lower frequencies of sunlight (ROY) tend to reach our eyes as we sight
directly at the sun during midday. While sunlight consists of the
entire range of frequencies of visible light, not all frequencies are
equally intense. In fact, sunlight tends to be most rich with yellow
light frequencies. For these reasons, the sun appears yellow during
midday due to the direct passage of dominant amounts of yellow
frequencies through our atmosphere and to our eyes.
The appearance of the sun changes with the time of day. While it may
be yellow during midday, it is often found to gradually turn color as
it approaches sunset. This can be explained by light scattering. As
the sun approaches the horizon line, sunlight must traverse a greater
distance through our atmosphere; this is demonstrated in the diagram
below. As the path that sunlight takes through our atmosphere
increases in length, ROYGBIV encounters more and more atmospheric
particles. This results in the scattering of greater and greater
amounts of yellow light. During sunset hours, the light passing
through our atmosphere to our eyes tends to be most concentrated with
red and orange frequencies of light. For this reason, the sunsets have
a reddish-orange hue. The effect of a red sunset becomes more
pronounced if the atmosphere contains more and more particles.
Can you explain why the colour of the sky passes from blue to orange/red skipping altogether the whole range of green frequencies?
I have only heard of the legendary 'green, emerald line/ flash'
that appears in particular circumstances
Green flashes are enhanced by mirage, which increase refraction... is more likely to be seen in stable, clear air,... One might expect to see a blue flash, since blue light is
refracted most of all, and ... is
therefore the very last to disappear below the horizon, but the blue
is preferentially scattered out of the line of sight, and the
remaining light ends up appearing green
but I have never seen it, nor do I know anybody who ever did.
Answer: The sky does not skip over the green range of frequencies. The sky is green. Remove the scattered light from the Sun and the Moon and even the starlight, if you so wish, and you'll be left with something called airglow (check out the link, it's awesome, great pics, and nice explanation).
Because the link does such a good job explaining airglow, I'll skip the nitty gritty.
So you might be thinking, "Jim, you half-insane ceiling fan, everybody knows that the night sky is black!" Well, you're only half right. The night sky isn't black. The link above explains the science of it, but if that's not good enough, try to remember back to a time when you might have been out in the countryside. No bright city lights, just the night sky and trees. Now when you look at the horizon, can you see the trees? Yes, they're black silhouettes against the night sky. But how could you see black against black? The night sky isn't black. It's green thanks to airglow (or, if you're near a city, orange thanks to light pollution).
Stop, it's picture time. Here's an above the atmosphere view of the night sky from Wikipedia:
And one from the link I posted, just in case you didn't check it out:
See, don't be worried about green. The sky gets around to being green all the time. | {
"domain": "physics.stackexchange",
"id": 61813,
"tags": "visible-light, atomic-physics, atmospheric-science, meteorology"
} |
Writing a C++ class with message_filters member | Question:
Hi,
I have the following code:
#include ....
using namespace sensor_msgs;
using namespace message_filters;
std::vector<boost::shared_ptr<capygroovy::Ticks const> > ticks;
ros::Time last = ros::Time::now();
void callback(const ImageConstPtr& msg,const ImageConstPtr& msg2,const capygroovy::TicksConstPtr& msg3)
{
......
ticks = ticks_cache.getInterval(last,msg3->header.stamp);
......
}
int main(int argc, char** argv)
{
ros::init(argc, argv, "db_creator_node");
ros::NodeHandle nh;
message_filters::Subscriber<capygroovy::Ticks> ticks_sub(nh,"/ticks",1);
message_filters::Cache<capygroovy::Ticks> ticks_cache(ticks_sub,10);
message_filters::Subscriber<Image> rgb_sub(nh,"/camera/rgb/image_rect",1);
message_filters::Subscriber<Image> dpt_sub(nh,"/camera/depth/image_rect_raw",1);
typedef sync_policies::ApproximateTime<Image, Image, capygroovy::Ticks> MySyncPolicy;
Synchronizer<MySyncPolicy> sync(MySyncPolicy(10), rgb_sub, dpt_sub, ticks_sub);
sync.registerCallback(boost::bind(&callback, _1, _2, _3));
......
return 0;
}
That, of course, is wrong because I declare ticks_cache in the main and when I call the method getInterval() in the callback I get the error:
/home/dede/catkin_ws/src/db_creator/src/db_creator_node.cpp:24:
error: ‘ticks_cache’ was not declared
in this scope
I suppose that a nice solution would be to write a class with the message_filters as members, but in order to initialize them you need to pass the node handle and the topic names to their constructors. The problem is that I really can't figure out how this could be done, so any help would be appreciated! Thanks!
Originally posted by schizzz8 on ROS Answers with karma: 183 on 2014-06-04
Post score: 1
Answer:
Without adding a class (which would be best) or globals, you can easily pass an extra parameter to your callback, something like this:
void callback(const ImageConstPtr& msg,const ImageConstPtr& msg2,const capygroovy::TicksConstPtr& msg3,
message_filters::Cache<capygroovy::Ticks> &ticks_cache)
{
......
ticks = ticks_cache.getInterval(last,msg3->header.stamp);
......
}
int main(int argc, char** argv)
{
......
    // boost::ref is needed: boost::bind copies its arguments by value otherwise
    sync.registerCallback(boost::bind(&callback, _1, _2, _3, boost::ref(ticks_cache)));
......
return 0;
}
It really is as simple as that!
BTW, to make classes easier to create, we use a template like this:
template <typename T>
int NodeMain(int argc, char **argv, std::string const &nodeName)
{
ros::init(argc, argv, nodeName);
ros::NodeHandle nh;
ros::NodeHandle pnh("~");
T a(nh, pnh);
ros::spin();
return EXIT_SUCCESS;
}
Now you just need:
class MyNode
{
private:
void callback(const ImageConstPtr& msg,const ImageConstPtr& msg2,const capygroovy::TicksConstPtr& msg3,
message_filters::Cache<capygroovy::Ticks> &ticks_cache)
{ ... }
public:
MyNode(ros::NodeHandle publicNodeHandle, ros::NodeHandle privateNodeHandle)
{
....
sync.registerCallback(boost::bind(&MyNode::callback, this, _1, _2, _3, boost::ref(ticks_cache)));
....
}
};
int main(int argc, char **argv)
{
NodeMain<MyNode>(argc, argv, "MyNode");
}
We also have a slightly more complicated template for a nodelet, and it means we can use class MyNode as either a stand-alone node or a nodelet without any code changes.
Originally posted by KenYN with karma: 541 on 2019-11-05
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 18165,
"tags": "message-filters"
} |
What is the Schwarzschild metric in cylindrical coordinates? | Question: I was researching online for different metrics of spacetime out of curiosity, and I found one that was said to be Schwarzschild metric in cylindrical coordinates:
$$ds^2 = -\left(1-\frac{r_s}{r}\right)dt^2 + \frac{r}{r-r_s}dr^2 + r^2d\theta^2 + r^2 dz^2.$$
I cannot remember from which site it was or find it in my history, but when I started to try to understand it, I found it to be wrong.
I was amazed to see that there are such metrics out there. I never learned string theory, but I also discovered that cylindrical metrics are used to model the spacetime around cosmic strings.
So does the Schwarzschild metric in cylindrical coordinates describe cosmic strings? What is the Schwarzschild metric in cylindrical coordinates?
Edit: I should have mentioned this before, but I did not. I am asking for the Schwarzschild metric in cylindrical coordinates specifically for a cylindrical source/mass aligned/centered along the $z$ axis.
Answer: As you noted, that's not the Schwarzschild metric in cylindrical coordinates.
In spherical coordinates, where the corresponding Cartesian coordinates would be
$$(x,y,z) = r (\sin θ \cos φ, \sin θ \sin φ, \cos θ),$$
the metric is given by the line element
$$\frac{r}{r - r_s} dr^2 + (r dθ)^2 + (r \sin θ dφ)^2 - \frac{r - r_s}{r} (c dt)^2.$$
To express the metric in cylindrical coordinates, where
$$(x,y) = ρ (\cos φ, \sin φ),$$
set
$$(ρ,z) = r (\sin θ, \cos θ).$$
Then
$$ρ dρ + z dz = r dr, \hspace{1em} z dρ - ρ dz = r^2 dθ, \hspace{1em} ρ dφ = r \sin θ dφ, \hspace{1em} r = \sqrt{ρ^2 + z^2},$$
and the line element becomes:
$$\frac{\sqrt{ρ^2 + z^2}}{\sqrt{ρ^2 + z^2} - r_s} \frac{(ρ dρ + z dz)^2}{ρ^2 + z^2} + \frac{(z dρ - ρ dz)^2}{ρ^2 + z^2} + (ρ dφ)^2 - \frac{\sqrt{ρ^2 + z^2} - r_s}{\sqrt{ρ^2 + z^2}} (c dt)^2.$$
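A quick sanity check of this cylindrical form (a sketch using sympy, with the differentials $dρ$ and $dz$ treated as formal symbols): setting $r_s = 0$ must collapse the $(ρ, z)$ part of the line element to flat space, $dρ^2 + dz^2$.

```python
import sympy as sp

rho, z, rs = sp.symbols('rho z r_s', positive=True)
drho, dz = sp.symbols('drho dz')      # differentials as formal symbols
r = sp.sqrt(rho**2 + z**2)

# The (rho, z) part of the cylindrical line element derived above
ds2 = (r / (r - rs)) * (rho*drho + z*dz)**2 / (rho**2 + z**2) \
    + (z*drho - rho*dz)**2 / (rho**2 + z**2)

# With r_s = 0 the spatial metric must be flat: drho^2 + dz^2
flat = sp.simplify(ds2.subs(rs, 0))
print(flat)
assert sp.simplify(flat - (drho**2 + dz**2)) == 0
```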
There is also the underlying question of what the cylindrical *generalization* of the Schwarzschild metric is. That's the Kerr metric which, when given in Boyer-Lindquist coordinates, is:
$$ds^2 = Σ \left(\frac{dr^2}{Δ} + dθ^2\right) + \left(r^2 + a^2\right) (\sin θ dφ)^2 + \frac{r_s r}Σ \left(c dt - a \sin^2 θ dφ\right)^2 - (c dt)^2,$$
where the corresponding Cartesian coordinates are
$$(x,y,z) = \left(\sqrt{r^2 + a^2} \sin θ \cos φ, \sqrt{r^2 + a^2} \sin θ \sin φ, r \cos θ\right),$$
and
$$r_s = \frac{2GM}{c^2}, \hspace{1em} a = \frac{J}{Mc}, \hspace{1em} Σ = r^2 + (a \cos θ)^2, \hspace{1em} Δ = r^2 - r_s r + a^2,$$
and this $r_s$ being the same as the $r_s$ for the Schwarzschild metric.
This describes a source with angular momentum $J$ and mass $M$ in relativity, with $c$ being the in-vacuo light speed. You can work out what that comes out to in cylindrical coordinates, with the coordinates modified to the following form:
$$ρ = \sqrt{r^2 + a^2} \sin θ, \hspace{1em} z = r \cos θ,$$
and use the cylindrical version of the Schwarzschild metric to check this against the $J = 0$ (and $a = 0$) case. | {
"domain": "physics.stackexchange",
"id": 93259,
"tags": "homework-and-exercises, general-relativity, black-holes, metric-tensor, coordinate-systems"
} |
$NP \ vs \ co-NP$: tautology to SAT and vice versa? | Question: Let us define a formula $\Phi$ given in CNF and its complement $\overline \Phi$.
$\Phi$ is satisfiable iff $\overline \Phi$ is not tautology and vice versa.
$\Phi$ can be converted to $\overline \Phi$ in polynomial time using the following method:
Replace all literals with their negations and vice versa: $\forall i:x_i \Rightarrow \overline {x_i}; \ \overline {x_i} \Rightarrow x_i$.
Replace all disjunctions with conjunctions and vice versa.
This property is a corollary of CNF/DNF definitions.
This will give us a DNF $\ \overline \Phi\ $ of the same length as $\ \Phi$.
Assuming DIMACS format, a SAT solver can decide tautology this way:
Multiply all variables by $-1$.
Solve SAT.
Return $\overline {answer}$, where $answer$ is the overall (final) answer for SAT.
Example 1:
$\Phi = (x \lor y \lor \overline z) \land (\overline x \lor t) \land (\overline y \lor \overline t) = 1100.1010.0101.0000$ - is satisfiable.
$\overline \Phi = (\overline x \land \overline y \land z) \lor (x \land \overline t) \lor (y \land t) = 0011.0101.1010.1111$ - is not tautology.
Example 2:
$\Phi = (x \lor y) \land (\overline x \lor y) \land (\overline x \lor \overline y) \land (x \lor \overline y)= 0000$ - is not satisfiable.
$\overline \Phi = (\overline x \land \overline y) \lor (x \land \overline y) \lor (x \land y) \lor (\overline x \land y) = 1111$ - is tautology.
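Both examples can be checked exhaustively with a small script (a sketch; clauses are encoded DIMACS-style as tuples of signed integers, with $x=1, y=2, z=3, t=4$):

```python
from itertools import product

def eval_cnf(clauses, assign):
    # CNF: AND over clauses, OR over signed-integer literals
    return all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses)

def eval_dnf(terms, assign):
    # DNF: OR over terms, AND over literals
    return any(all(assign[abs(l)] == (l > 0) for l in t) for t in terms)

def negate_cnf(clauses):
    """Flip every literal and swap AND/OR: a DNF of the same length (De Morgan)."""
    return [tuple(-l for l in c) for c in clauses]

def assignments(n):
    for bits in product([False, True], repeat=n):
        yield dict(enumerate(bits, start=1))

# Example 1 -- satisfiable, so its negation is not a tautology
phi1 = [(1, 2, -3), (-1, 4), (-2, -4)]
assert any(eval_cnf(phi1, a) for a in assignments(4))
assert not all(eval_dnf(negate_cnf(phi1), a) for a in assignments(4))

# Example 2 -- unsatisfiable, so its negation is a tautology
phi2 = [(1, 2), (-1, 2), (-1, -2), (1, -2)]
assert not any(eval_cnf(phi2, a) for a in assignments(2))
assert all(eval_dnf(negate_cnf(phi2), a) for a in assignments(2))
print("both examples verified")
```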
Does this prove that $NP = co-NP$? Or am I perhaps wrong somewhere with the complexity classes?
Answer: Your algorithm doesn't prove NP = co-NP because you're using a Cook reduction to go from DNF-TAUTOLOGIES to CNF-SAT. NP and co-NP aren't separate complexity classes under Cook reductions; they are only (potentially) separate under Karp reductions. Under a Karp reduction, negating the output of the Turing machine after you've performed the polynomial-time transformation is specifically disallowed.
So while your algorithm is a valid way to use a SAT solver to decide tautologies, it doesn't say anything about whether NP = co-NP. | {
"domain": "cs.stackexchange",
"id": 9151,
"tags": "complexity-theory, np-complete, satisfiability, np"
} |
Obstacles in camera blind spot are cleared from local costmap | Question:
I'm trying to implement obstacle avoidance using the move_base package on ROS 1 Melodic. The current issue is that when the robot approaches a short obstacle, the obstacle is first correctly drawn into the local costmap, but the closer the robot gets to it, the more it is cleared from the costmap. I believe this happens because the camera has a blind spot (~30 cm) in front of the robot. When the camera no longer sees the obstacle, the move_base obstacle layer raytraces over it to the next nearest obstacle and clears the closest one from the costmap. The desired behavior would be that the robot does not remove the obstacle from the local costmap even when it is in the blind spot.
So the question is, is there a way to prevent this from happening? I believe one solution would be to have a minimum distance for the raytracing (== 30 cm, the length of the blind spot), below which raytracing is not done, but move_base currently doesn't seem to have this sort of parameter.
I'm using Intel D435i depth camera which publishes a pointcloud from short distance and a laser scan from longer distance. This is how the obstacles layer looks like:
obstacles_laser:
observation_sources: camera_1_obstacles camera_1_laser
camera_1_laser: {data_type: LaserScan, clearing: true, marking: true, topic: /camera_1/laser_scan, obstacle_range: 5.5, raytrace_range: 6.0 }
camera_1_obstacles: {data_type: PointCloud2, clearing: true, marking: true, topic: /camera_1/obstacles_cloud, obstacle_range: 5.5, raytrace_range: 6.0}
Robot approaching a short obstacle:
Robot has moved close enough for the obstacle to disappear from the local costmap
Visualization of the blind spot of the Intel camera. Visible area marked with orange points.
Originally posted by jannkar on ROS Answers with karma: 36 on 2021-03-30
Post score: 0
Answer:
I know this doesn't help you for ROS1, but we added minimum obstacle and raycasting ranges for ROS2 Nav2's costmap_2d package https://navigation.ros.org/configuration/packages/costmap-plugins/obstacle.html such that you can stop clearing in the sensor's deadzone.
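To make that concrete, a Nav2 obstacle-layer fragment mirroring the configuration above might look roughly like this (a sketch: parameter names per the Nav2 obstacle-layer docs linked above, values illustrative for the ~30 cm blind spot):

```yaml
obstacle_layer:
  plugin: "nav2_costmap_2d::ObstacleLayer"
  observation_sources: camera_1_obstacles
  camera_1_obstacles:
    topic: /camera_1/obstacles_cloud
    data_type: "PointCloud2"
    marking: true
    clearing: true
    obstacle_max_range: 5.5
    obstacle_min_range: 0.3   # no marking inside the ~30 cm blind spot
    raytrace_max_range: 6.0
    raytrace_min_range: 0.3   # ...and no clearing inside it either
```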
Originally posted by stevemacenski with karma: 8272 on 2021-03-30
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by jannkar on 2021-03-31:
Nice! We will switch to ROS2 in the near future, so it is great to hear that this feature is already supported. So maybe I'll try to come up with some quick workaround while using ROS1. Thanks! | {
"domain": "robotics.stackexchange",
"id": 36260,
"tags": "navigation, ros-melodic, costmap, local-costmap, move-base"
} |
List of NP-Complete graph problems/ properties? | Question: Is there a good source to find various decision problems on graphs and networks? For a project I'm doing it'd be useful to be able to look at lots of different problems. Is there a good source for finding them?
Answer: A general list of NP-complete problems can be found in Garey & Johnson's book "Computers and Intractability". It contains an appendix that lists roughly 300 NP-complete problems, and despite its age is often suggested when one wants a list of NP-complete problems.
I haven't read the book, but based on its reputation it would be a quite good start to any investigation. | {
"domain": "cstheory.stackexchange",
"id": 4928,
"tags": "graph-theory, np"
} |
How to calculate velocity for a tracked robot | Question:
Hi to all,
I've built my own skid-steering tracked robot and I wrote a simple ROS node which allows me to control it by using my keyboard.
Since I'm using encoders for both motors, I would like to be able to calculate the tracks velocity and displacement so I can publish these values as ROS topics.
The problem is that I'm not able to find the correct formula to use, can you help me please?
The total tracks length is 275 cm.
After a full driving wheel rotation, the rubber tracks displacement is 125 cm.
The tracks rubber footprint is 85 cm.
Encoder pulses for a full driving wheel rotation: 200,000.
Max output driving wheel RPMs are 51, but I think this is not relevant.
What kind of formula do I have to use if I want to calculate velocity and displacement for each track?
I know I should use the classical formula V = S / T and S = V x T but I can't understand how to use these relations for my robot since it uses tracks instead of wheels.
Is it possible to calculate the overall angular velocity for the robot?
I hope you can help me..
Originally posted by Marcus Barnet on ROS Answers with karma: 287 on 2016-03-10
Post score: 0
Answer:
For the translation part, tracks are identical to wheels; the track length does not matter here, because the distance traveled equals that of the wheel inside the track plus the track thickness.
So for the translation you just calculate the distance traveled using the circumference of the wheel with r = wheel radius + track thickness.
For the velocity you already have the right formula. If you are still unsure, you could just drive the robot 1 m or 10 m, count the pulses on each track, and divide that by the distance traveled.
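A minimal sketch of those formulas, using the numbers given in the question (200,000 pulses per driving-wheel rotation, 125 cm of track travel per rotation):

```python
TICKS_PER_REV = 200_000   # encoder pulses per full driving-wheel rotation
METERS_PER_REV = 1.25     # track displacement per rotation (from the question)

def track_distance_m(ticks):
    """Distance traveled by one track: S = (ticks / ticks_per_rev) * circumference."""
    return ticks / TICKS_PER_REV * METERS_PER_REV

def track_velocity_mps(ticks_now, ticks_before, dt_s):
    """V = delta_S / delta_T between two encoder readings dt_s seconds apart."""
    return (track_distance_m(ticks_now) - track_distance_m(ticks_before)) / dt_s

# One full rotation in 2 s -> 1.25 m traveled at 0.625 m/s
print(track_distance_m(200_000))            # 1.25
print(track_velocity_mps(200_000, 0, 2.0))  # 0.625
```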
Now the problem is the rotation part. Tracks have the bad property of slipping a lot while turning. (Try to turn your robot by hand without slipping on the ground to understand why.) Possible solutions include:
Turn the robot by 360° and count the pulses. The problem is that this value is different for each surface.
Get the rotation from another sensor, like an IMU.
Originally posted by Humpelstilzchen with karma: 1504 on 2016-03-11
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Marcus Barnet on 2016-03-11:
Thank you for your support and help!
My total circumference is 260 cm, so I'm using this formula to calculate the traveled distance:
Dist = (Enc_ticks/360) * Circ
For the velocity V = S/T, how can I calculate the time? Can you suggest me any solution, please?
Comment by Humpelstilzchen on 2016-03-11:
Sorry I don't think I understood the question on how to calculate the time.
Comment by Marcus Barnet on 2016-03-12:
I mean: in order to calculate the velocity, I need to use the equation: V = S/T where [S = (Enc_ticks/360) * Circ], but how can I calculate the time? Is there any specific function in ROS which allows me to calculate time or do I have to use a timer?
Comment by Humpelstilzchen on 2016-03-12:
Take a look at the Navigation/Odometry tutorial on how to use ros::Time for this. | {
"domain": "robotics.stackexchange",
"id": 24059,
"tags": "ros, linear-velocity"
} |
Can this Google Hurdles code be made to run any faster? | Question: Anyone can play Google Hurdles today. Here is my score: 1.1 seconds. Is there a way of improving this score and running faster than a second?
http://www.google.com/doodles/hurdles-2012
import java.awt.AWTException;
import java.awt.Robot;
import java.awt.event.KeyEvent;
public class RunOlim {
private volatile static boolean run=true;
public static void main(String[] args) {
Robot robot = null;
Thread th = new Thread(new Runnable(){
@Override
public void run() {
try {
Thread.sleep(10000);
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}finally {
run=false;
}
}
});
th.start();
try {
robot = new Robot();
} catch (AWTException e) {
e.printStackTrace();
}
while(run){
robot.keyPress(KeyEvent.VK_RIGHT);
robot.keyRelease(KeyEvent.VK_RIGHT);
robot.keyPress(KeyEvent.VK_LEFT);
robot.keyRelease(KeyEvent.VK_LEFT);
}
}
}
Answer: First off, this may not really be an answer to your question. You want to play it fast, but I can tell you how you get a great score. Good enough for showing off, if that's what you're after!
You can vastly improve your score, even on a very slow network.
And you could even gain even more (secret) medals.
Just play Hurdles once so it shows your score, then - e.g. using jQuery - have fun with the DOM:
$('#hplogo_sbt').html('0.1');
$('#hplogo_sb').find('.hplogo_smh').removeClass('hplogo_smh').addClass('hplogo_smg');
That's about it, 0.1 seconds and 3 gold medals (you could get up to 9 without breaking the layout if you have a 0.1 second time, just insert more "medal" dom nodes). | {
"domain": "codereview.stackexchange",
"id": 2271,
"tags": "java, performance"
} |
Why do complex ions not emit light from de-excitations of electrons? | Question: I understand that complex ions are coloured due to d-orbital splitting which results in electrons being able to absorb wavelengths of visible light and become excited to the higher energy state meaning the transmitted light is coloured. However I don't see why the electron would not simply de-excite and emit the same wavelength of light that it absorbed which would result in no visible difference. Is there some reason why the same wavelength is not emitted or perhaps the electron does not de-excite at all?
Answer: The fate of electronically excited states is not immediately obvious.
First of all, the rate at which the electron emits a photon and falls down a level depends on the energy difference between the two levels, but it is only an expectation value of a macroscopic observable: on an individual atomic scale it is impossible to predict when the electron in the upper level will emit a photon, although many repetitions of the same measurement will tend towards the expectation value. The direct result of this is that there is a path difference between the emitted photon and the 'rest' of the photons that were not absorbed by the complex; the result is a different interference between photons in the transmitted light compared to the incident light, and hence a new colour is observed.
There is another fate of excited electrons. The Franck-Condon principle invokes the very plausible assumption that the nuclei do not respond immediately (at least on the time scale of electronic transitions) to the new equilibrium bond position that is determined by the new electronic state of the complex. This means the excited electronic state will most likely not be in the lowest vibrational state of the new equilibrium bond configuration. The splitting of vibrational energy levels is orders of magnitude smaller than that of electronic transitions, and since the same law that governs electronic emission applies, the electron is much more likely to fall down these vibrational energy levels first, without emitting photons of visible light.
By the time it occupies the lowest vibrational state of the excited electronic state, the electron is then expected to undergo a transition down to the ground-state energy level. The energy of the excited state is described in a basic form by
$$H=E^*+v$$
where $H$ is the energy of the absorbed photon, $E^*$ is the energy of the excited state in the lowest vibrational mode and $v$ is the contribution from the initial vibrational state of the excited complex. | {
"domain": "chemistry.stackexchange",
"id": 3269,
"tags": "physical-chemistry, transition-metals, color"
} |
Designing an IIR to generate a specific data sequence | Question: I know that it's possible to design an IIR with specific poles and zeroes to create specific frequency responses.
Is it also possible to design an IIR such that when you give it an impulse, it generates values of a desired sequence? Or at least a sequence within some tolerance of those values?
Answer: Yes, there are time-domain design methods for IIR filters. One of the best-known ones is Prony's method. It is well described in the book Digital Filter Design by T.W. Parks and C.S. Burrus (ch. 7.5).
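One time-domain construction is simple enough to show outright: if the target sequence is periodic with period $N$, an IIR with numerator equal to one period and denominator $1 - z^{-N}$ reproduces it exactly from a unit impulse. A sketch (the particular sequence is just an example):

```python
import numpy as np
from scipy.signal import lfilter

period = np.array([1.0, -2.0, 3.0, 0.5])   # one period of the desired sequence
N = len(period)

b = period                    # numerator B(z): plays out one period
a = np.zeros(N + 1)
a[0], a[-1] = 1.0, -1.0       # denominator 1 - z^-N: recirculates it forever

impulse = np.zeros(3 * N)
impulse[0] = 1.0
h = lfilter(b, a, impulse)    # impulse response of the IIR
print(h)                      # the period, repeated three times
```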
If the desired sequence is (right-sided) periodic, then there's a very simple and exact solution possible (which is usually an exercise for DSP students). | {
"domain": "dsp.stackexchange",
"id": 4869,
"tags": "infinite-impulse-response"
} |
Does Sugru (Formerol) decay underwater? | Question: Sugru is described as waterproof and dishwasher-safe, but I still wonder if it will decay quickly (within months) if submerged in room-temperature tap water.
Aren't silicone rubbers susceptible to hydrolysis?
Answer: First, the silicone monomer does hydrolyze in water, causing crosslinking. However, the resultant crosslinked polymer is stable.
Second, silicone is, one might say, rabidly hydrophobic. For water to cause degradation it would have to penetrate, but liquid water is confined to the surface. However, silicones are highly permeable to water vapor, so degradation might occur, depending on the formulation and additives.
As for the specifics of the proprietary formulation of Sugru, you'd need to test whether it meets a specific need. It's ~50-75% silanes, the remainder talc and other substances, so that would change its water resistance. | {
"domain": "chemistry.stackexchange",
"id": 17886,
"tags": "organic-chemistry, polymers, hydrolysis, carbon-family, organosilicon-compounds"
} |
Are there problems that can be solved in time $2^{n-q^c}$ with $q$ qubits? | Question: This is another attempt to formalize my former question on the topic.
I'm looking for a problem for which all known classical algorithms take exponential time, but given ANY number of few qubits (think around 53), we can achieve a speed-up that is exponential in their number.
So if the problem requires time $2^n$ on a classic computer, then I would hope for a hybrid quantum-classical algorithm that uses $q$ qubits and takes $2^{n-q^c}$ time for some constant $c$.
Here $c$ is independent of $q$, which can be any number up to $n^{1/c}$ or so, at which point the problem becomes polynomial on the quantum computer.
Are there such problems?
Answer: I think the scaling $2^{n/q^c}$ is too much to ask for. Even $poly(q) 2^{O(n-q)}$ would represent an exponential speedup for each additional qubit.
And indeed, such a problem is known: simulating a quantum circuit of $n$ logical qubits on a small hybrid quantum-classical computer with only a few (perfect) physical qubits $q\leq n$ has this scaling. See: https://arxiv.org/abs/1506.01396 | {
"domain": "cstheory.stackexchange",
"id": 4904,
"tags": "cc.complexity-theory, quantum-computing, tradeoff, speed-up"
} |
Mechanism of Dehydrogenation Reaction | Question: In the hydrogenation of alkenes over a metal catalyst, the two hydrogen atoms are added with syn stereochemistry.
In the metal-catalysed dehydrogenation of alkanes, is the removal of hydrogen likewise concerted with syn stereochemistry? And if so, why?
Answer: The answer you seek can be found at uwyo.edu/roddick/site/alkane_dehydrogenation.html: the catalytic cycle of dehydrogenation of alkanes by iridium complexes does not follow a simple concerted syn pathway like the one in the hydrogenation of alkenes. In the hydrogenation of alkenes, the $\ce{H2}$ dissociates and is added syn to the substrate in a well-understood mechanism.
In the dehydrogenation, though, the coordination of the alkane to the Ir metal center occurs first, then a migratory insertion forms a 5-coordinate trigonal-bipyramidal complex, then a β-elimination occurs, forming an unstable 6-coordinate complex which readily loses the alkene (the final product). The catalyst is then regenerated by a hydrogen acceptor, which can be a bulky alkene (so that its product can't participate in the cycle due to steric hindrance). If no regio-inducing group is present in the catalyst, formation of the Saytzeff product (the most substituted alkene) is favored. | {
"domain": "chemistry.stackexchange",
"id": 8568,
"tags": "organic-chemistry, reaction-mechanism, hydrocarbons"
} |
How do I Calculate Distance Using 2 GPS Topics | Question:
Hello,
So currently I have two GPS modules and they each have their own node. The two nodes publish their GPS information onto their own topics, so there are 2 nodes and 2 topics. I'm able to subscribe to both topics, but I'm having trouble using the variables from the callback functions to calculate the distance.
I would get this error:
NameError: global name 'gps_lat' is not defined
Here's my code below
#!/usr/bin/env python
import rospy
from std_msgs.msg import String
from nmea_msgs.msg import Sentence
from sensor_msgs.msg import NavSatFix
import math
def callback(data):
gps_lat = round(data.latitude, 6)
gps_lon = round(data.longitude, 6)
def callback1(data):
gps1_lat = round(data.latitude, 6)
gps1_lon = round(data.longitude, 6)
def listener():
rospy.init_node('gps_monitor', anonymous=True)
rospy.Subscriber("/fix", NavSatFix, callback)
rospy.Subscriber("/fix1", NavSatFix, callback1)
rospy.spin()
delta_lat = (gps_lat*(108000) - gps1_lat*(108000))
delta_lon = (gps_lon*(108000) - gps1_lon*(108000))
hyp_m = (delta_lat**2 + delta_lon**2)**0.5
hyp_ft = (hyp_m*3.2800839)
rospy.loginfo("Distance is %s in ft.", hyp_ft)
if __name__ == '__main__':
listener()
Originally posted by SupermanPrime01 on ROS Answers with karma: 27 on 2017-04-26
Post score: 0
Answer:
You can't use variables local to one function (your callback functions, in this case) in another function (the listener() function).
For working with data from multiple topics, there are several ways you can structure your program.
The first is to use global variables. Make gps_lat, gps_lon, gps1_lat and gps1_lon global variables that are updated in the callbacks and used in listener(). However, this approach does not synchronise the data, so you might get inaccurate values for the calculated distance if it uses a mix of new and old data.
The second is to make your node a class, with the gps data stored in member variables and the callbacks methods of the class. This would essentially be the same as the above, but without the use of global variables.
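A sketch of that class-based structure, with the rospy wiring omitted so the distance logic is self-contained (in the real node, callback/callback1 would be registered as rospy subscriber callbacks for /fix and /fix1; the 108000 m-per-degree factor follows the question's code):

```python
class GpsMonitor(object):
    """Caches the latest fix from each GPS; computes distance once both are known."""

    def __init__(self):
        self.fix = None    # latest (lat, lon) from the first GPS
        self.fix1 = None   # latest (lat, lon) from the second GPS

    def callback(self, lat, lon):
        self.fix = (lat, lon)

    def callback1(self, lat, lon):
        self.fix1 = (lat, lon)

    def distance_ft(self):
        if self.fix is None or self.fix1 is None:
            return None    # not enough data yet
        # Same flat-earth approximation as the question's code
        dlat = (self.fix[0] - self.fix1[0]) * 108000
        dlon = (self.fix[1] - self.fix1[1]) * 108000
        return (dlat**2 + dlon**2) ** 0.5 * 3.2800839

m = GpsMonitor()
assert m.distance_ft() is None   # no data yet
m.callback(60.0, 24.000)
m.callback1(60.0, 24.001)
print(m.distance_ft())           # ~354.25 ft (108 m * 3.2800839)
```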
The third is to use message filters to synchronise the data received from the two sensors and calculate the distance when data is received from both. With a message filter, you provide it a set of inputs (your two topics, /fix and /fix1), and register a callback. The callback is called when the message filter has data meeting its condition, such as a message on each topic with similar timestamps. If your GPS sensors are not producing data in sync, then you might need to use the approximate time policy filter. You can use it like this:
#!/usr/bin/env python
import rospy
import message_filters
from sensor_msgs.msg import NavSatFix
def calc_distance(gps1, gps2):
gps1_lat = round(gps1.latitude, 6)
gps1_lon = round(gps1.longitude, 6)
gps2_lat = round(gps2.latitude, 6)
gps2_lon = round(gps2.longitude, 6)
delta_lat = (gps1_lat*(108000) - gps2_lat*(108000))
delta_lon = (gps1_lon*(108000) - gps2_lon*(108000))
hyp_m = (delta_lat**2 + delta_lon**2)**0.5
hyp_ft = (hyp_m*3.2800839)
rospy.loginfo("Distance is %s in ft.", hyp_ft)
def listener():
rospy.init_node('gps_monitor', anonymous=True)
gps1 = message_filters.Subscriber('/fix', NavSatFix)
gps2 = message_filters.Subscriber('/fix1', NavSatFix)
ts = message_filters.ApproximateTimeSynchronizer([gps1, gps2], 10, 5)
ts.registerCallback(calc_distance)
rospy.spin()
if __name__ == '__main__':
listener()
This will print a new distance calculation every time it gets a message on each topic that are within 5 seconds of each other.
Originally posted by Geoff with karma: 4203 on 2017-04-26
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by SupermanPrime01 on 2017-04-26:
Thank you so much Geoff. This was exactly what I needed. When I tried the global variables method the distances would continuously increase so you're right about that too. | {
"domain": "robotics.stackexchange",
"id": 27729,
"tags": "gps"
} |
Calculating a circle with bezier curves | Question: I am new to Haskell, and have been reading Learn You a Haskell for Great Good!.
I've rewritten a problem that I recently solved in JavaScript in Haskell to practice what I have been reading.
A little bit of a back story without rambling on too much: I wanted to create a circle using bezier curve drawing commands, similar to SVG but for an Android vector drawable. The reason I needed the circle in a path command is for animation. I was using AnimatedVectorDrawable to animate a circle into a square. If your drawable has the same path with the same number of commands in two different files, it can animate fluidly between them. Making a square with curve commands is easy enough, but the circle needed some math.
What I did to quickly generate my circle path was create a script in Javascript to do it. Being a math problem I felt this was a good candidate for Haskell exercise.
Here is my Haskell script:
-- Usage:
-- beziercircle [circumfrance] [offest x] [offset y]
--
-- Example:
-- beziercircle 500
--
--
-- Calculates a circle using bezier paths for using in android
-- vector drawables
--
import System.Environment
import Text.Printf
bz = 0.552284749831
zero = read "0" :: Float
main = do
args <- getArgs
let c = parseArg 0 args
let d = c / 2
let ox = parseArg 1 args
let oy = parseArg 2 args
let ps = points d ox oy
let cs = controls zero (ip d) (op d) c ox oy
putStrLn $ (showMove (zero + ox, d + oy)) ++ (showAllCurves ps cs) ++ "Z"
where
parseArg i args = (if length args >= i + 1 then read (args !! i) :: Float else zero)
ip d = d - (d * bz)
op d = d + (d * bz)
controls a b c d x y = map (\(a, b, c, d) -> (ox a,oy b,ox c,oy d)) [c1, c2, c3, c4]
where ox = (+x)
oy = (+y)
c1 = (a, b, b, a)
c2 = (c, a, d, b)
c3 = (d, c, c, d)
c4 = (b, d, a, c)
points d x y = map (\(x,y) -> (ox x, oy y)) [p2, p3, p4, p1]
where ox = (+x)
oy = (+y)
p1 = (zero, d)
p2 = rotate90 p1
p3 = offset d p2
p4 = offset d p1
offset o (x, y) = (x + o, y + o)
rotate90 (x, y) = (y, x * (-1))
showMove (x, y) = printf "M %f %f \n" x y
showCurve (x, y) (cx1, cy1, cx2, cy2) = do
printf "C %f %f %f %f %f %f \n" cx1 cy1 cx2 cy2 x y
showAllCurves as bs = concat $ zipWith (showCurve) as bs
The script is based on this question How to create circle with Bezier Curves
Being new, I am sure there are lots of places I can make this script better. I am hoping to learn from feedback! I know it can be commented more; I'm mostly looking for ways to shrink the code using techniques that may not be familiar to me. Even if there is a math equation that I am not seeing that can solve this better, that would be great to learn as well!
Here is my original Javascript (Node.js) for reference as well:
// Creates a Vector path command for a circle based on based in circumfrence
var args = process.argv.slice(2);
if (args.length < 1) {
console.log("Please supply a width for the vector circle path");
}
// The second and third argument can be used to offset the circle by X adn Y number of pixels
var offsetXBy = 0;
var offsetYBy = 0;
if (args.length >= 2) {
offsetXBy = parseFloat(args[1]);
}
if (args.length >= 3) {
offsetYBy = parseFloat(args[2]);
}
function Coord(x, y) {
var self = this;
this.x = parseFloat(x);
this.y = parseFloat(y);
this.offset = function(offsetX, offsetY) {
self.x += offsetX;
self.y += offsetY;
}
}
Coord.prototype.toString = function () {
return `${this.x} ${this.y}`
}
function BezierCurve(x, y, c1, c2) {
var self = this;
this.x = x;
this.y = y;
this.c1 = c1;
this.c2 = c2;
this.offset = function(offsetX, offsetY) {
self.x += offsetX;
self.y += offsetY;
self.c1.offset(offsetX, offsetY);
self.c2.offset(offsetX, offsetY);
}
}
BezierCurve.prototype.toString = function () {
return `${this.c1.toString()} ${this.c2.toString()} ${this.x} ${this.y}`
}
var BEZIER_CONTROL_POINT = 0.552284749831;
var dimension = parseFloat(args[0]); // Circles circumfrence
var halfDimen = dimension / 2;
var controlPointOffset = halfDimen * BEZIER_CONTROL_POINT;
// from Middle Left
var firstMove = new Coord(0, halfDimen);
// curve to Top Middle
var curve1 = new BezierCurve(halfDimen, 0,
new Coord(0, halfDimen - controlPointOffset),
new Coord(halfDimen - controlPointOffset, 0));
// curve to Middle Right
var curve2 = new BezierCurve(dimension, halfDimen,
new Coord(halfDimen + controlPointOffset, 0),
new Coord(dimension, halfDimen - controlPointOffset));
// curve to Bottom Middle
var curve3 = new BezierCurve(halfDimen, dimension,
new Coord(dimension, halfDimen + controlPointOffset),
new Coord(halfDimen + controlPointOffset, dimension));
// curve back to Middle Left
var curve4 = new BezierCurve(0, halfDimen,
new Coord(halfDimen - controlPointOffset, dimension),
new Coord(0, halfDimen + controlPointOffset));
if (offsetXBy > 0) {
firstMove.offset(offsetXBy, offsetYBy);
curve1.offset(offsetXBy, offsetYBy);
curve2.offset(offsetXBy, offsetYBy);
curve3.offset(offsetXBy, offsetYBy);
curve4.offset(offsetXBy, offsetYBy);
}
console.log(`M ${firstMove.toString()}`);
console.log(`C ${curve1.toString()}`);
console.log(`C ${curve2.toString()}`);
console.log(`C ${curve3.toString()}`);
console.log(`C ${curve4.toString()}`);
console.log("Z");
Answer: Let's start with the back story:
I wanted to create a circle using bezier curve drawing commands. Similar to SVG but for an Android vector drawable. The reason I needed the circle in a path command is for animation.
The documentation for Android's VectorDrawable says that it uses exactly the same path syntax as SVG. That means that it supports A and a for circular arcs. Have you actually tested that they don't work?
The variable names are not as helpful as they could be. What is bz? Given that one of the inputs is documented as circumfrance (correct spelling is circumference), I can't understand why I don't see pi anywhere: it is actually the diameter instead?
Edit: having compiled and tested the code, it's definitely the diameter rather than the circumference, so the comment is buggy.
SVG paths have relative and absolute versions of the movement instructions. If you were to use the relative c instead of the absolute C you could eliminate the offsets, simplifying the code.
Edit: to justify my claim that using relative movement rather than absolute simplifies the code, here's the full program using relative movement, minus the header comment:
import System.Environment
import Text.Printf
bz = 0.552284749831
main = do
args <- getArgs
let d = parseArg 0 args
let r = d / 2
let ox = parseArg 1 args
let oy = parseArg 2 args
putStrLn $ (showMove (ox, r + oy)) ++ (showAllQuadrants r) ++ "Z"
where
parseArg i args = (if length args >= i + 1 then read (args !! i) :: Float else 0.0)
quadrant r = [(0.0, -bz*r), ((1.0-bz)*r, -r), (r, -r)]
quadrants r = [q1, q2, q3, q4]
where rotate90AC (x, y) = (-y, x)
q1 = quadrant r
q2 = map rotate90AC q1
q3 = map rotate90AC q2
q4 = map rotate90AC q3
showMove (x, y) = printf "M %f %f \n" x y
showQuadrant [(cx1, cy1), (cx2, cy2), (x, y)] = do
printf "c %f %f %f %f %f %f \n" cx1 cy1 cx2 cy2 x y
showAllQuadrants r = concat $ (map showQuadrant (quadrants r))
I'm sure quadrants can be simplified further by someone who knows the Haskell standard library better than me.
You say that you're looking for techniques to shrink the code. The simplest one is to not overcomplicate. Consider
controls a b c d x y = map (\(a, b, c, d) -> (ox a,oy b,ox c,oy d)) [c1, c2, c3, c4]
where ox = (+x)
oy = (+y)
vs
controls a b c d x y = map (\(a, b, c, d) -> (a+x, b+y, c+x, d+y)) [c1, c2, c3, c4]
I find the second easier to read as well as shorter. | {
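For what it's worth, the relative-movement approach translates easily to other languages; here is a rough Python sketch of the same idea (the function name and output format are mine, not from the question; the constant is the standard cubic-bezier circle approximation factor):

```python
# Build an SVG-style path for a circle of diameter d using four relative
# cubic bezier curves, rotating one quadrant's control points 90 degrees
# anticlockwise three times -- the same trick as the Haskell `quadrants`.
KAPPA = 0.552284749831  # cubic-bezier circle approximation constant

def circle_path(d, ox=0.0, oy=0.0):
    r = d / 2.0
    # control point 1, control point 2, endpoint -- all relative offsets
    quadrant = [(0.0, -KAPPA * r), ((1.0 - KAPPA) * r, -r), (r, -r)]
    rot90 = lambda p: (-p[1], p[0])  # rotate a vector 90 degrees anticlockwise
    parts = [f"M {ox} {r + oy}"]
    for _ in range(4):
        (c1x, c1y), (c2x, c2y), (x, y) = quadrant
        parts.append(f"c {c1x} {c1y} {c2x} {c2y} {x} {y}")
        quadrant = [rot90(p) for p in quadrant]
    parts.append("Z")
    return " ".join(parts)
```

Because every command after the initial M is relative, no offset bookkeeping is needed; the four endpoint deltas sum to zero, so the path closes exactly.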
"domain": "codereview.stackexchange",
"id": 22004,
"tags": "beginner, haskell, computational-geometry"
} |
Is there physical evidence to distinguish between the expansion of space and an anthropocentric universe? | Question: When we look in all directions, we see distant objects red-shifted, with the size of the red-shift correlated with the distance from us.
As I understand it, the consensus among cosmologists is that this observation is best explained by the expansion of space across intergalactic distances, a.k.a "Hubble flow", described by the FLRW metric. This makes sense to me.
However, it seems that this could also be explained by the Earth being some kind of special spot in the universe, and for some other reason everything around us is physically moving away from us, and the farther things are moving away faster, without any expansion of space; with simply static space-time. To be clear, this anthropocentric theory does not seem compelling to me, and I am not advocating for it.
My question is this:
Is there any physical evidence that can distinguish between "Hubble flow" and some kind of bizarre anthropocentric universe where everything just happens to have been propelled away from the Earth in a very specific way?
I can see that the expanding space theory is simpler and more elegant, but I'm looking for some kind of observable evidence (that has been observed, or could be observed in the future) that could falsify one or the other theory.
Thanks!
Answer:
I'm looking for some kind of observable evidence (that has been observed, or could be observed in the future) that could falsify one or the other theory.
You seem to be requiring one (very high) standard for FLRW and a different one for your anthropocentric universe. That's cheating.
There is no anthropocentric theory to test. We can test the FLRW model (and GR) and have done so. But what you are treating as an anthropocentric theory is not a theory in any scientific sense. It provides no rules or laws to make predictions from, and hence no way to make tests.
Is there any physical evidence that can distinguish between "Hubble flow" and some kind of bizarre anthropocentric universe where everything just happens to have been propelled away from the Earth in a very specific way?
By definition you are basically saying that we can construct an arbitrary scenario where everything just happens to look the way it does in terms of expansion. It will always be possible to invent such "magic" circumstances if you do not have to provide a theoretical and experimental basis for them.
The difference is that FLRW uses GR to construct a general model that explains what we see and which can be checked and verified against observation.
How do you define a center?
Also note that the anthropocentric idea runs slap bang into the problem of exactly what the center of this universe is. Is it exactly at the Earth? The Sun? The barycenter of the Earth-Moon system? The (ever changing) barycenter of the Earth-Sun-Moon system or the solar system? Something else? And whatever you choose, why that choice?
Even if someone did not accept the motion of the Earth around the Sun (etc.) and claimed everything revolved around the Earth, we are immediately left with no explanation for why that would be the case.
For me the issue of defining a center of the universe is the reason why an anthropocentric idea makes no sense (quite apart from the complete lack of any useful theory to make predictions from). | {
"domain": "astronomy.stackexchange",
"id": 6728,
"tags": "cosmology, expansion, redshift, hubble-constant"
} |
How to get velocity from PSD graph | Question:
Hello everyone!
I have a graph of the spectrum inside a cavity: PSD vs. frequency.
I need to get velocity [m/s] from PSD [dB/Hz].
Does anyone know how to do that?
Answer: You can get the sound pressure level from PSD, and the velocity is related to the sound pressure. According to the equation of motion,
$$
\rho \frac{\mathrm{d}\vec{v}}{\mathrm{d}t} = -\nabla p \tag{1}
$$
where $\rho$ is the density of the medium, $\vec{v}$ is the particle velocity, $\nabla=\frac{\partial}{\partial x}\vec{i}+\frac{\partial}{\partial y}\vec{j}+\frac{\partial}{\partial z}\vec{k}$ is the gradient operator, and $p$ is the sound pressure. When the sound is not so loud, the equation of motion can be linearized and the total derivative $\mathrm{d}\vec{v}/\mathrm{d}t$ becomes partial derivative $\partial{\vec{v}}/\partial{t}$.
Furthermore, if you make a plane wave assumption, then the particle velocity amplitude is
$$
v = \pm \frac{p}{\rho_0 c_0} \tag{2}
$$
for plane waves of forward and backward propagation, respectively. $\rho_0$ and $c_0$ are respectively the medium density and sound speed without acoustic disturbance. Their product $\rho_0 c_0$ is the characteristic specific acoustic impedance of the medium.
However, your data is acquired in a cavity with an extremely high level of PSD, thus the linearization is obviously inappropriate. The total derivative in Eq. (1) should be
$$
\frac{\mathrm{d}\vec{v}}{\mathrm{d}t} = \frac{\partial \vec{v}}{\partial t} + \frac{\partial\vec{v}}{\partial x} \frac{\partial x}{\partial t}+ \frac{\partial\vec{v}}{\partial y} \frac{\partial y}{\partial t}+ \frac{\partial\vec{v}}{\partial z} \frac{\partial z}{\partial t} \\
= \frac{\partial \vec{v}}{\partial t} + v_x\frac{\partial\vec{v}}{\partial x}+ v_y\frac{\partial\vec{v}}{\partial y}+ v_z\frac{\partial\vec{v}}{\partial z}
\tag{3}
$$
In addition, velocity is also related to frequency, but it seems that you want an overall velocity. You can obtain the time-domain sound pressure according to Parseval's theorem, and then calculate the particle velocity.
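As a rough numerical illustration of the linearized plane-wave estimate in Eq. (2) (a Python sketch; the 20 µPa airborne reference pressure and the air properties below are assumptions, since the question does not state the medium or the PSD reference):

```python
import math

P_REF = 20e-6            # Pa, standard airborne reference pressure (assumed)
RHO0, C0 = 1.21, 343.0   # air density (kg/m^3) and sound speed (m/s), assumed

def pressure_from_spl(spl_db):
    """RMS sound pressure from a sound pressure level in dB re 20 uPa."""
    return P_REF * 10.0 ** (spl_db / 20.0)

def plane_wave_velocity(p_rms):
    """Particle velocity amplitude for a travelling plane wave, Eq. (2)."""
    return p_rms / (RHO0 * C0)

def overall_spl(psd_db_per_hz, df):
    """Integrate a PSD (dB/Hz, same reference) over frequency bins of width df."""
    linear = [10.0 ** (x / 10.0) for x in psd_db_per_hz]
    return 10.0 * math.log10(sum(linear) * df)
```

For example, an overall level of 94 dB corresponds to roughly 1 Pa RMS, giving a plane-wave particle velocity of about 2.4 mm/s in air; at the very high levels found in a cavity, the nonlinear terms of Eq. (3) would modify this estimate.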
"domain": "dsp.stackexchange",
"id": 10057,
"tags": "frequency-spectrum, frequency, power-spectral-density, autoregressive-model"
} |
Why don't we use space filling curves for high-dimensional nearest neighbor search? | Question: Some space filling curves like the Hilbert Curve are able to map an n-dimensional space to a one dimensional line whilst preserving locality. Does that mean that we could map a dataset of high dimensional points to a line and expect the order of the nearest neighbors to be preserved?
If so, wouldn't that be more efficient than building a Ball tree?
Answer: Space filling curves are sometimes used for nearest neighbor search. See these applications of Z-order curves and Hilbert curves.
The idea is as follows. Let $f$ be a space-filling curve. Given a point $x$, index it as $f^{-1}(x)$.* Given a query point $y$, return all points indexed in an interval around $f^{-1}(y)$. If $y$ is close to $x$, there is a good chance that $x$ will be returned so long as $f^{-1}$ tends to preserve locality. Different space filling curves have this property to different degrees.
* Note that space-filling curves are not injective so the inverse is not uniquely defined. But in practice we choose a finite grid on $[0, 1]^n$ and an appropriate iterate that is bijective so we don't have a problem. | {
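A minimal sketch of the scheme with a Z-order (Morton) curve (in Python; the grid resolution, window width, and function names are illustrative choices, not from any particular library):

```python
import bisect

def morton2d(x, y, bits=16):
    """Interleave the bits of two non-negative grid coordinates into one index."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)       # x bits go to even positions
        z |= ((y >> i) & 1) << (2 * i + 1)   # y bits go to odd positions
    return z

def candidates(points, query, window=4):
    """Return points whose Z-index lies in an interval around the query's."""
    indexed = sorted(points, key=lambda p: morton2d(*p))
    keys = [morton2d(*p) for p in indexed]
    pos = bisect.bisect_left(keys, morton2d(*query))
    return indexed[max(0, pos - window):pos + window]
```

The candidate set is then re-ranked by true distance. Because a single curve can still separate some nearby points, practical systems often query several shifted or rotated copies of the curve and merge the results.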
"domain": "datascience.stackexchange",
"id": 6090,
"tags": "machine-learning, dimensionality-reduction"
} |
DeprecationWarning in ROS 2 Foxy | Question: I am working on a project where I remote from my laptop into a Jetson Nano on an RC car to run ROS 2 nodes.
So far everything is working fine despite this warning, but I'm wondering if it can lead to problems later on.
Below is the demo talker node:
anassq@anassq-ThinkPad:~$ ros2 run demo_nodes_cpp talker
[INFO] [1693382772.600519335] [talker]: Publishing: 'Hello World: 1'
[INFO] [1693382773.600493124] [talker]: Publishing: 'Hello World: 2'
[INFO] [1693382774.600495367] [talker]: Publishing: 'Hello World: 3'
^C[INFO] [1693382775.563863256] [rclcpp]: signal_handler(signal_value=2)
anassq@anassq-ThinkPad:~$
Below is the demo listener node:
jetson@nano:~$ ros2 run demo_nodes_cpp listener
/opt/ros/foxy/bin/ros2:6: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
from pkg_resources import load_entry_point
[INFO] [1693382772.604064646] [listener]: I heard: [Hello World: 1]
[INFO] [1693382773.596332466] [listener]: I heard: [Hello World: 2]
[INFO] [1693382774.595766385] [listener]: I heard: [Hello World: 3]
^C[INFO] [1693382891.440914460] [rclcpp]: signal_handler(signal_value=2)
jetson@nano:~$
Please note: the Ubuntu image I flashed on the Jetson Nano comes with Python and many other packages already installed. Here is the source of that image: https://github.com/Qengineering/Jetson-Nano-Ubuntu-20-image
I tried updating the system and rosdep, but it didn't help.
Does anyone have a suggestion as to how this could be solved?
Thanks a lot for the read
Answer: Welcome to Robotics Stack Exchange!
It is a warning coming from the colcon-core module of ROS 2. Please ignore it as it does not affect any functionality of ROS. Feel free to read more at https://github.com/colcon/colcon-core/issues/552 | {
"domain": "robotics.stackexchange",
"id": 38533,
"tags": "ros2, nvidia, jetson, ros-foxy"
} |
Prepare a cross-platform QT C-wrapper class for unit testing and mocking | Question: The situation
I recently started a cross platform QT project (arm, linux-x86, windows) that
aims to interact with CAN-Bus hardware. I want to learn and get used to unit testing from scratch as well as possible while working on that project.
As I have very limited experience in writing unit tests as well as writing well-designed code to test, it is challenging for me to design my code well, especially because I constantly interact with low-level hardware. This requires mocking or emulating as well as unit testing.
In my main CAN bus class, which I do not want to talk about here (yet), I need to interface with the low-level C library libsocketcan. Thus I wrote a little wrapper class that works well.
Now I think, that little class would be perfect to learn good testable design.
The testing and mocking framework I use is Googletest, however I think that's not that important from a general point of view when discussing testable design.
My goals
I want to be able to unit test my SocketCan-class itself in an elegant way. So I hope for helpful reviews that might lead to a good re-design for the class.
Probably I have to mock the class somehow to make unit tests work reasonably for it.
When I use that SocketCan-class as a dependency in my main class which acts as a high-level abstraction layer for CAN, I also want the testability of that class not to be reduced by the usage of the SocketCan class.
I want to unit test the whole class, even on a platform, where SocketCan is not available (Windows). How to design it properly for that goal?
My thoughts
I am reading a lot about unit testing and mocking. For example, there is an article about not mocking what you do not own. So do I have to wrap my wrapper class again to make it easy to mock?
Should I even unit test that SocketCan class at all or does it in your eyes not provide enough functionality to make tests for it?
Bonus topic: I am a bit doubtful whether my implementation of the cross-platform ability is a reasonable way to go. I have a single generic header file that I use for all platforms, and two different implementation .cpp files that are selected for compilation by the build system depending on the target platform.
My Code
mycanbus_socketcan.h
#ifndef MYCANBUS_SOCKETCAN_H
#define MYCANBUS_SOCKETCAN_H
#include <QObject>
#include <QString>
#include <QByteArray>

class CANLIBSHARED_EXPORT SocketCan {
Q_GADGET
public:
enum SocketCanState {
ErrorActive,
ErrorWarning,
ErrorPassive,
BusOff,
Stopped,
Sleeping,
RequestFailed
};
Q_ENUM(SocketCanState)
static bool prepareInterface(const QString interface, const int baudrate);
private:
static QByteArray getInterfaceNameFromQString(const QString interfaceName);
static SocketCanState getState(const QString interface);
static bool setBitrate(const QString interface, const int baudrate);
static bool interfaceUp(const QString interface);
static bool interfaceDown(const QString interface);
};
#endif // MYCANBUS_SOCKETCAN_H
mycanbus_socketcan_windows.cpp
#include "mycanbus_socketcan.h"
#include <QCoreApplication>
#include <QLoggingCategory>
Q_LOGGING_CATEGORY(lcSocketCan, "my.can.socketcan")
bool SocketCan::prepareInterface(const QString interface, const int baudrate)
{
qCInfo(lcSocketCan) << QCoreApplication::translate("", "No socketcan implementation for your operating system."
" Ignoring interface %1, baudrate %2")
.arg(interface)
.arg(baudrate);
return false;
}
mycanbus_socketcan_linux.cpp
#include "libsocketcan.h"
#include "mycanbus_socketcan.h"
#include <QCoreApplication>
#include <QLoggingCategory>
#include <QMetaEnum>
Q_LOGGING_CATEGORY(lcSocketCan, "my.can.socketcan")
bool SocketCan::prepareInterface(const QString interface, const int baudrate)
{
bool result;
SocketCanState state;
// Shutting down, reconfiguring, bringing up, state check
result = interfaceDown(interface);
state = getState(interface);
if (state == SocketCanState::RequestFailed){
qCWarning(lcSocketCan) << QCoreApplication::translate("","Could not get the current state of interface %1, aborting.").arg(interface);
return false;
}
result = setBitrate(interface, baudrate);
if (result == false){
qCWarning(lcSocketCan) << QCoreApplication::translate("","Could not set baudrate %1 for interface %2").arg(baudrate).arg(interface);
return false;
}
result = interfaceUp(interface);
if (result == false){
qCWarning(lcSocketCan) << QCoreApplication::translate("","Could not bring interface %1 up.").arg(interface);
return false;
}
state = getState(interface);
if (state == SocketCanState::RequestFailed){
qCWarning(lcSocketCan) << QCoreApplication::translate("","Could not get the current state of interface %1, aborting.").arg(interface);
return false;
}else
{
return true;
}
}
SocketCan::SocketCanState SocketCan::getState(const QString interface)
{
//Checking for the interface state
int libSocketCanState;
SocketCanState state = SocketCanState::RequestFailed;
int callSuccessfully = can_get_state(getInterfaceNameFromQString(interface), &libSocketCanState);
if (callSuccessfully != 0) {
qCWarning(lcSocketCan) << QCoreApplication::translate("", "Socketcan state request failed for interface %1").arg(interface);
state = RequestFailed;
} else {
switch (libSocketCanState) {
case CAN_STATE_ERROR_ACTIVE:
state = ErrorActive;
break;
case CAN_STATE_ERROR_WARNING:
state = ErrorWarning;
break;
case CAN_STATE_ERROR_PASSIVE:
state = ErrorPassive;
break;
case CAN_STATE_BUS_OFF:
state = BusOff;
break;
case CAN_STATE_STOPPED:
state = Stopped;
break;
case CAN_STATE_SLEEPING:
state = Sleeping;
break;
}
}
QMetaEnum stateEnum = QMetaEnum::fromType<SocketCan::SocketCanState>();
qCDebug(lcSocketCan) << QCoreApplication::translate("", "Socketcan state for interface %1 is: %2").arg(interface).arg(QString(stateEnum.name()) + "::" + stateEnum.valueToKey(state));
return state;
}
bool SocketCan::setBitrate(const QString interface, const int baudrate)
{
qCDebug(lcSocketCan) << QCoreApplication::translate("", "Trying to set baudrate %1 for interface %2").arg(baudrate).arg(interface);
if (can_set_bitrate(getInterfaceNameFromQString(interface), baudrate) == 0) {
qCDebug(lcSocketCan) << QCoreApplication::translate("", "Baudrate set successfully");
return true;
} else {
qCWarning(lcSocketCan) << QCoreApplication::translate("", "Baudrate could not be set");
return false;
}
}
bool SocketCan::interfaceUp(const QString interface)
{
qCDebug(lcSocketCan) << QCoreApplication::translate("", "Trying to bring interface %1 up").arg(interface);
if (can_do_start(getInterfaceNameFromQString(interface)) == 0) {
qCDebug(lcSocketCan) << QCoreApplication::translate("", "Interface brought up successfully");
return true;
} else {
qCWarning(lcSocketCan) << QCoreApplication::translate("", "Interface could not be brought up!");
return false;
}
}
bool SocketCan::interfaceDown(const QString interface)
{
qCDebug(lcSocketCan) << QCoreApplication::translate("", "Trying to shut interface %1 down").arg(interface);
if (can_do_stop(getInterfaceNameFromQString(interface)) == 0) {
qCDebug(lcSocketCan) << QCoreApplication::translate("", "Interface shut down successfully");
return true;
} else {
qCWarning(lcSocketCan) << QCoreApplication::translate("", "Interface could not be shut down!");
return false;
}
}
QByteArray SocketCan::getInterfaceNameFromQString(const QString interfaceName){
QByteArray ba = interfaceName.toLocal8Bit();
return ba;
}
libsocketcan.h
3rd party: https://github.com/lalten/libsocketcan/blob/master/include/libsocketcan.h
Answer: This answer doesn't particularly address your questions, but it does talk about some generic C++ stuff. To make this clearer, I'm going to divide it between 'Feedback' and 'Opinion'.
Feedback
When passing const QString interface, pass it as a reference. Nearly all class instances should be passed as references.
#include "mycanbus_socketcan.h"
#include <QCoreApplication>
bool result;
SocketCanState state;
// Shutting down, reconfiguring, bringing up, state check
result = interfaceDown(interface);
state = getState(interface);
should simply be
bool result = interfaceDown(interface);
SocketCanState state = getState(interface);
It's not (old) C, so initialize and declare things where they're used, not at the beginning of the function.
if (state == SocketCanState::RequestFailed){
qCWarning(lcSocketCan) << QCoreApplication::translate("","Could not get the current state of interface %1, aborting.").arg(interface);
return false;
}else
{
return true;
}
The else here is not needed. Simply return true, because the previous block will have already returned false. This happens elsewhere in your code as well.
Opinion
#ifndef MYCANBUS_SOCKETCAN_H
All modern C++ compilers support #pragma once. I prefer to use it. You can weigh the pros and cons.
In my opinion, system header includes should be done before user includes. C++ is order-sensitive when it comes to includes. | {
"domain": "codereview.stackexchange",
"id": 32870,
"tags": "c++, unit-testing, wrapper, qt, mocks"
} |
Why is an alkaline battery more likely to damage a quartz clock? | Question: When I purchase a quartz table/wall clock, most of the time it has a sticker saying not to use an alkaline battery, and from this Quora question there are also mixed opinions about it. In theory (I think) alkaline and zinc-carbon batteries should not have any differences, because both of them are $1.5\,\text{V}$; so if there really are differences, I would love to understand the physics behind them.
Answer: Alkaline zinc cells use a different electrochemical system than ordinary acidic zinc-carbon cells.
The 1.5 V is just a nominal value. The actual open-circuit voltage depends largely on the cell type and its long- and short-term discharge and aging history. A new cell can have up to 1.6 V; an old, used cell can be down to 1.2 V.
Another factor is the voltage under load. Alkaline cells generally have much lower internal resistance, so the voltage drop while supplying current is smaller.
As a practical consequence, the effective voltage can be higher for alkaline cells.
Personally, I am not sure I have ever noticed such a warning against alkaline battery usage on long-running devices, unless they were designed for NiCd/NiMH rechargeables; otherwise such devices are ideal for alkaline batteries.
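A toy load-voltage comparison makes the point (the EMF and internal-resistance figures below are assumed, order-of-magnitude values for illustration, not datasheet numbers):

```python
# Assumed, illustrative cell parameters
CELLS = {
    "alkaline":    {"emf": 1.55, "r_internal": 0.15},  # internal resistance in ohms
    "zinc-carbon": {"emf": 1.55, "r_internal": 0.60},
}

def load_voltage(cell, current_a):
    """Terminal voltage of a cell delivering current_a amperes: V = EMF - I*R."""
    c = CELLS[cell]
    return c["emf"] - current_a * c["r_internal"]
```

During a 100 mA pulse, the alkaline cell in this model sags only 15 mV while the zinc-carbon cell sags 60 mV, so the mechanism effectively sees a higher voltage from the alkaline cell.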
"domain": "physics.stackexchange",
"id": 73359,
"tags": "electricity, electrochemistry"
} |
Building a Crystal Radio Questions | Question: I have been reading several books and articles about building a crystal radio, and the explanations of the inner workings of the circuit seem vague. All the articles and books mention the coil and the capacitor in the circuit, but I still don't know what they do. It is mentioned that the coil, depending on the windings, will only receive a certain type of electromagnetic waveform; how is this possible? What does the number of windings in the coil have to do with the type of waveform? Additionally, how exactly does the capacitor work to pick out only certain frequencies?
Several questions in one but this is very troublesome and can't find answers anywhere. Thanks to all who contribute.
Answer: The coil is an inductor, which stores energy in a magnetic field. The coil & capacitor together form a basic electronic oscillator.
If you start out with a charged capacitor, it will start to discharge, forcing a current through the coil. This sets up a magnetic field in the coil, which takes up some of the energy that used to be stored in the capacitor. When the capacitor has discharged and is no longer forcing a current, the magnetic field starts to collapse. This causes the coil to induce (hence the name inductor) a current in the circuit in the same direction, which keeps charge flowing to the other side of the capacitor. Once the inductor has fully discharged, the capacitor is charged again, but to the opposite of its original polarity, and the cycle starts over in reverse.
The natural frequency of this oscillator depends on the size of the capacitor and the size of the inductor. Thus, changing the number of coils will change the circuits inductance which changes the natural frequency.
When you add in an antenna to collect radio waves, the radio waves will drive the oscillation of the circuit at their own frequency. For a wave whose frequency is very close to the natural frequency of the oscillator, each cycle of the radio wave will drive the oscillator in the same direction, and they'll constructively interfere, adding more energy to the oscillator on each cycle. Waves whose frequencies don't match up with the natural frequency of the oscillator circuit will, however, destructively interfere.
It's like pushing a swing- if you always push back when it reaches its highest height, then you keep it going and push it higher and higher. If, however, you push on it halfway through a swing, or at random intervals, then it becomes erratic and doesn't go much of anywhere.
So, by changing the number of windings in the coil, you tune it to amplify different frequencies of radio waves, which determines what the radio picks up for you to hear. Other frequencies are not amplified, so you don't hear them. | {
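To make the tuning relation concrete, the natural frequency of the LC oscillator is $f = 1/(2\pi\sqrt{LC})$. A small Python sketch (the sample inductance, capacitance, and the simple turns-squared scaling are illustrative assumptions for a single-layer coil):

```python
import math

def resonant_frequency(L, C):
    """Natural frequency of an LC oscillator in Hz: f = 1/(2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

def coil_inductance(turns, base_turns=50, base_henries=240e-6):
    """Toy model: inductance grows roughly as the square of the turn count."""
    return base_henries * (turns / base_turns) ** 2
```

With an assumed 240 µH coil and a 365 pF tuning capacitor, the circuit resonates near 540 kHz, at the bottom of the AM broadcast band; removing turns lowers L and raises the resonant frequency, which is exactly how the winding count selects which station is amplified.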
"domain": "physics.stackexchange",
"id": 20654,
"tags": "electromagnetic-radiation, oscillators, crystals, radio-frequency"
} |
Evaporation of large charged black holes | Question: Black holes evaporate (Hawking Radiation) acting as black bodies with the temperature inversely proportional to the mass.
No physical process, be it evaporation or any other "trick", can make a black hole "super-extremal", namely it can't make the square of the angular momentum or the charge too large compared to the mass. As the mass drops due to evaporation the charge and angular momentum must also drop.
Angular momentum can be shed with photons, because photons have spin and can be emitted in a non-radial direction.
Charge is trickier. There are three methods to shed charge:
Accrete oppositely charged particles. This can be stopped, hypothetically at least, by isolating the hole.
Emit charged particles. There are no massless charged particles, so a temperature $\ll 511\,\text{keV}$ strongly suppresses electron and positron generation; making the hole large enough should stop this.
Break down the vacuum with the electric field (requires ~10^18 V/m). Again, making the hole large enough should suppress this because the electric field in the vicinity of a near-extremal charged hole scales as 1/M.
If all three of these are suppressed, we have a new candidate for the longest-lived objects possible! So will a large enough black hole evaporate itself toward extremality, after which there will be essentially zero Hawking radiation?
Answer: Short answer: Yes, if an isolated black hole is large enough (supermassive) and has an initial charge comparable to its mass, then it would lose mass through Hawking radiation much more quickly than it would lose charge, and it would eventually reach a nearly extreme state. It would still continue to lose mass and charge, though at much slower rates, and would remain in a near-extreme state nearly until the end of its long, but still finite, lifetime, which exceeds by many orders of magnitude the lifetime of an uncharged black hole of the same initial mass.
Longer answer: In what follows, we are using Planck units $G=\hbar=c=1$. $Q$ and $M$ are the charge and mass of the black hole, and $e$ and $m$ are the charge and mass of the electron, the lightest charged particle.
First, let us emphasize that in a realistic astrophysical environment free electrons/positrons would quickly neutralize any significant charge that the black hole may possess, so OP's condition 1 makes the situation quite artificial.
If we consider charged particle orbits in the Reissner–Nordström metric, the condition
$$ \frac{e Q }{ r_+} > m $$
makes it energetically favorable for a pair of charged particles to form with one escaping to infinity and another falling into the black hole. Here, $r_+$ is the horizon radius, so $Q/r_+$ is the electrostatic potential at the horizon.
If the Compton wavelength of an electron is much smaller than $r_+$, then pair production can be described by Schwinger's equations. The rate of pair production is exponentially suppressed if the maximal field strength is lower than $E_{S}\sim \frac{m^2}e$. Since the field strength at the horizon is $\frac{Q}{r^2_+}$ and for a RN black hole $M\leq r_+ \leq 2M$, a black hole can carry a geometrically significant charge ($Q$ comparable to mass $M$) for a long time only if
$$
M > \frac{e}{m^2} \approx 5 \cdot 10^5 M _\odot.
$$
This also automatically enforces OP's condition 2. Such a black hole would fall into a SMBH range.
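The quoted threshold is easy to check numerically (a Python sketch; rounded SI constants, with the electron charge taken as $e=\sqrt{\alpha}$ in Planck units, the Gaussian-unit convention used above):

```python
import math

# Physical constants (SI), rounded values
alpha = 1.0 / 137.035999   # fine-structure constant
m_e_kg = 9.1093837e-31     # electron mass
m_planck_kg = 2.176434e-8  # Planck mass
M_sun_kg = 1.98892e30      # solar mass

e = math.sqrt(alpha)       # electron charge in Planck units
m = m_e_kg / m_planck_kg   # electron mass in Planck units

threshold_planck = e / m**2                                 # e/m^2 in Planck masses
threshold_msun = threshold_planck * m_planck_kg / M_sun_kg  # ... in solar masses
q0_msun = threshold_msun / math.pi  # attractor charge scale Q_0 = e/(pi m^2)
```

This reproduces both the $\sim 5\cdot 10^5 M_\odot$ bound above and the $Q_0 \approx 1.7\cdot 10^5 M_\odot$ scale that appears in the lifetime formula later in the answer.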
Evolution of charge and mass for such massive isolated black hole has been considered in literature:
Hiscock, W. A., & Weems, L. D. (1990). Evolution of charged evaporating black holes. Physical Review D, 41(4), 1142, doi:10.1103/PhysRevD.41.1142.
The rate of charge loss is obtained by integrating the Schwinger pair production rate over the volume near the horizon, while the mass loss is a sum of thermal radiation from massless particles and energy carried away by the charged particles. The resulting system is then integrated numerically. The overall evolution for the system is best illustrated by the following plot:
FIG. 2. Evolution paths followed by evaporating charged black holes. The charged-black-hole configuration space is divided into two regions: a “charge dissipation zone” in the upper left, where black holes rapidly discharge, and a “mass dissipation zone” to the lower right, in which evaporation causes the charge-to-mass ratio of the black holes to increase. The boundary area between these two regions is a dissipative attractor, which all charged black holes evolve toward as they evaporate.
While here is a sample evolution of charge and mass through the black hole lifetime:
FIG. 7. Mass and charge as functions of time for a black hole with $M= 168 \times 10^{6} M_\odot$ and $(Q/M)^2=0.1$ initially, and $n_\nu=3$. The charge-to-mass ratio of the black hole reaches a maximum at $(Q/M)^2=0.9999$ just as it reaches the attractor. The black hole spends most of its lifetime very close to the extreme Reissner-Nordstrom limit.
We see that a very heavy black hole with a significant initial charge $Q<M$ would first lose most of its “excess” mass ($M-Q$) and then spend most of its lifetime in a near extreme Reissner–Nordström state, evolving along the attractor trajectory. The temperature of the black hole, of course, never reaches zero, so there is no violation of the third law of thermodynamics.
The total lifetime of such charged black hole is dominated by the initial charge $Q_i$ and could be approximated as:
$$
T\simeq \frac{2 \pi^2 \hbar^2}{e^3} \exp \left(\frac{Q_i}{Q_0} \right)= 10^{47} \exp \left(\frac{Q_i}{Q_0} \right) \,\text{yr},
$$
where $Q_0=\frac{\hbar e}{\pi m^2}\approx 1.7\cdot 10^5 M_\odot$, the equation starts being valid for $Q_i> 60\cdot 10^6M_\odot$. This lifetime is exponentially longer than the lifetime of an uncharged black hole, that scales as $M^3$. | {
"domain": "physics.stackexchange",
"id": 59789,
"tags": "general-relativity, black-holes, charge, hawking-radiation, qft-in-curved-spacetime"
} |
Query to find users from Australia | Question: After a discussion of which users were from Australia, I wrote my first SQL query to find out:
SELECT u.DisplayName'Display Name', u.Reputation'Rep', u.Location'Location'
FROM Users u
WHERE u.Location LIKE '%Australia%'
ORDER BY 'Rep' DESC
As always, please tell me the good, the bad, and the ugly.
The query can be found here
Answer: I don't like how you're specifying the column aliases. I expect a whitespace between the column and the alias.
I like that you're not specifying the optional AS keyword though - I find it only adds clutter when it's there.
Also I would have used [square brackets] instead of single quotes, and layout the field names on separate lines, like this:
SELECT
u.DisplayName [Display Name]
,u.Reputation [Rep]
,u.Location [Location]
That way you can easily add, reorder, or comment-out a column if you need to. | {
"domain": "codereview.stackexchange",
"id": 13100,
"tags": "sql, stackexchange"
} |
Terminal based game: Part 2 | Question: Follow up to: Terminal based game
Finished up the framework.
Still sticking with the X-Term based version.
As I want a very simple framework to use for teaching (not this part initially).
But my next question is the snake game implemented using this class.
#ifndef THORSANVIL_GAMEENGINE_GAME_H
#define THORSANVIL_GAMEENGINE_GAME_H
#include <chrono>
#include <iostream>
#include <stdexcept>
#include <signal.h>
#include <unistd.h>
#include <termios.h>
#include <sys/select.h>
static volatile sig_atomic_t done = 0;
extern "C" void signalHandler(int signal)
{
if (!done) {
done = signal;
}
}
namespace ThorsAnvil::GameEngine
{
bool installHandler(int signal)
{
struct sigaction action;
sigemptyset(&action.sa_mask);
action.sa_flags = 0;
action.sa_handler = signalHandler;
int result = sigaction(signal, &action, nullptr);
return result == 0;
}
class Game
{
using Time = std::chrono::time_point<std::chrono::high_resolution_clock>;
using Duration = std::chrono::duration<double, std::micro>;
using Step = std::chrono::duration<double, std::milli>;
static constexpr char clearScreen[] = "\033[2J";
static constexpr char moveToZeroZero[] = "\033[0;0H";
static constexpr char hideCursor[] = "\033[?25l";
static constexpr char showCursor[] = "\033[?25h";
bool gameOver;
termios originalConfig;
Time nextDrawTime;
Time nextStepTime;
Duration durationDrawTime;
Duration durationStepTime;
Duration timeNeeded;
int sleep;
int sleepTime()
{
Time now = std::chrono::high_resolution_clock::now();
Duration sleepForDraw = nextDrawTime - now - timeNeeded;
Duration sleepForStep = nextStepTime - now;
Duration smallestSleep = sleepForDraw < sleepForStep ? sleepForDraw : sleepForStep;
int microSeconds = smallestSleep.count();
return microSeconds > 0 ? microSeconds : 1;
}
void draw()
{
Time lastDrawTime = std::chrono::high_resolution_clock::now();
std::cout << moveToZeroZero;
drawFrame();
std::cout << "Sleep: " << sleep << " \n";
durationDrawTime = std::chrono::high_resolution_clock::now() - lastDrawTime;
timeNeeded = durationDrawTime + durationStepTime;
nextDrawTime = lastDrawTime + std::chrono::milliseconds(redrawRateMilliSeconds());
}
void input()
{
if (done) {
gameOver = true;
return;
}
fd_set input;
FD_ZERO(&input);
FD_SET(STDIN_FILENO, &input);
sleep = sleepTime();
timeval timeout{sleep / 1'000'000, sleep % 1'000'000};
if (select(STDIN_FILENO + 1, &input, nullptr, nullptr, &timeout) > 0) {
char key = std::cin.get();
handleInput(key);
}
}
void logic()
{
int timeUp = (nextStepTime - std::chrono::high_resolution_clock::now()).count();
if (timeUp <= 0) {
Time lastStepTime = std::chrono::high_resolution_clock::now();
handleLogic();
durationStepTime = std::chrono::high_resolution_clock::now() - lastStepTime;
timeNeeded = durationDrawTime + durationStepTime;
nextStepTime = lastStepTime + std::chrono::milliseconds(gameStepTimeMilliSeconds());
}
}
protected:
virtual void drawFrame() = 0;
virtual int gameStepTimeMilliSeconds() {return 500;}
virtual int redrawRateMilliSeconds() {return gameStepTimeMilliSeconds();}
virtual void handleInput(char k)
{
if (k == 'Q') {
gameOver = true;
}
}
virtual void handleLogic() {}
void setGameOver()
{
gameOver = true;
}
void newGame()
{
gameOver = false;
}
public:
Game()
: gameOver(false)
, sleep(0)
{
if (!installHandler(SIGINT) || !installHandler(SIGHUP) || !installHandler(SIGTERM)) {
throw std::runtime_error("Fail: Installing signal handlers");
}
termios config;
if (tcgetattr(STDIN_FILENO, &originalConfig) != 0 || tcgetattr(STDIN_FILENO, &config) != 0) {
throw std::runtime_error("Fail: Getting keyboard state");
}
config.c_lflag &= ~(ICANON | ECHO);
config.c_cc[VMIN] = 1; /* Blocking input */
config.c_cc[VTIME] = 0;
if (tcsetattr(STDIN_FILENO, TCSANOW, &config) != 0) {
throw std::runtime_error("Fail: Setting keyboard state");
}
}
virtual ~Game()
{
tcsetattr(STDIN_FILENO, TCSAFLUSH, &originalConfig);
}
void run()
{
std::cout << clearScreen
<< hideCursor;
while (!gameOver) {
draw();
input();
logic();
}
std::cout << showCursor;
}
};
}
#endif
Not 100% confident in the chrono stuff. Any feedback on how to do that better would be really appreciated.
Also: I am not sure whether I should have two timers: one for display refresh redrawRateMilliSeconds() and one for step refresh gameStepTimeMilliSeconds(). Their interaction seems minor, but I would love input from anybody with experience in this area.
Note: The linked Snake game has an (unoptimized) brute-force draw time of around 950 microseconds, so potentially we could do 1000 frames a second. This weekend I will explore the timings of an optimized draw (i.e. only updating the characters that would change).
Answer: Missing #includes
My compiler complains about a lot of undefined functions and types, because you forgot to add all the necessary #include statements.
Since this is a header-only file, create a .cpp file that only includes your header, and try to compile it to an object file.
Use std::chrono::steady_clock
Unfortunately, it is not well-defined what kind of clock std::chrono::high_resolution_clock actually is. It is best to avoid it. Instead, use std::chrono::steady_clock; it is guaranteed to be steady (i.e., it doesn't have jumps because of NTP updates, daylight savings time changes or leap seconds), and likely has the same resolution as std::chrono::high_resolution_clock anyway.
Avoid specifying time resolution unless really necessary
You are dealing with explicit milliseconds and microseconds way too early. Try to keep durations in unspecified std::chrono::duration variables for as long as possible. You should only convert it to a concrete value at the last possible moment, which is right before calling select().
I recommend using this pattern:
using Clock = std::chrono::steady_clock;
using Time = Clock::time_point;
using Duration = Clock::duration;
using Rep = Duration::rep;
…
Duration sleepTime()
{
auto now = Clock::now();
auto sleepForDraw = nextDrawTime - now - timeNeeded;
auto sleepForStep = nextStepTime - now;
return std::min(sleepForDraw, sleepForStep);
}
void input() {
…
auto sleep_us = std::max(Rep{1},
std::chrono::duration_cast<std::chrono::microseconds>(sleepTime()).count());
timeval timeout{sleep_us / 1'000'000, sleep_us % 1'000'000};
…
}
Note how much simpler sleepTime() is now. Also, why was there a Step type to begin with? You are not using it anywhere.
Issues with select()
It is possible for select() to return 1 even if there is no character available to read from std::cin.
Another issue that is more likely to occur is that std::cin is allowed to buffer its input. Consider that I press two keys in very short succession; they might both get read into std::cin's underlying stream buffer at the same time. So at first select() returns 1, and you call std::cin.get() which will return the first key. But when you call select() again, since both keys have already been read into a buffer, it will wait for a third key to be pressed.
There is no way you can safely mix select() with std::cin. The best you can do without an external library is to make the POSIX file descriptor 0 non-blocking, and then to read() characters from it.
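A sketch of that approach (the helper names here are made up for illustration, POSIX only, and not tied to any particular simulator):

```cpp
#include <fcntl.h>
#include <unistd.h>

// Put a file descriptor into non-blocking mode; returns false on failure.
bool setNonBlocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    return flags != -1 && fcntl(fd, F_SETFL, flags | O_NONBLOCK) != -1;
}

// Read one pending byte from fd.
// Returns the byte (0..255) if one was available, or -1 if the read
// would have blocked (no input yet) or failed.
int readPendingByte(int fd)
{
    unsigned char c;
    return read(fd, &c, 1) == 1 ? static_cast<int>(c) : -1;
}
```

After select() reports readiness, the input handler can loop on readPendingByte(STDIN_FILENO) until it returns -1, so two quick keypresses are both consumed in the same iteration instead of one of them lingering in a buffer.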
Move setup and cleanup to the constructor and destructor
In run() you hide the cursor and clear the screen before doing the actual run loop, and then afterwards you show the cursor again. These things should be done in the constructor and destructor instead. Consider what happens if an exception is thrown while the game is running.
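Either way, the cleanup is easy to make exception-safe with a small guard class. A possible sketch (the stream parameter exists only so the class can be unit-tested; the escape sequences match the ones in the reviewed code):

```cpp
#include <iostream>
#include <sstream>

// RAII guard: clears the screen and hides the cursor on construction,
// shows the cursor again on destruction -- even when an exception
// unwinds through the game loop.
class CursorGuard
{
    std::ostream& out;
public:
    explicit CursorGuard(std::ostream& os = std::cout)
        : out(os)
    {
        out << "\033[2J"      // clear screen
            << "\033[?25l";   // hide cursor
    }
    ~CursorGuard()
    {
        out << "\033[?25h";   // show cursor
    }
    CursorGuard(const CursorGuard&) = delete;
    CursorGuard& operator=(const CursorGuard&) = delete;
};
```

run() would then simply start with `CursorGuard guard;` and drop the manual escape-code output.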
If you really want to keep it inside run(), then use an RAII object to manage the cursor visibility state. | {
"domain": "codereview.stackexchange",
"id": 45586,
"tags": "c++, game, console"
} |
Why does the pen does not move straight? | Question: If i put a pen on a table in its horizontal position and then i try to move it horizontally by giving it a small push, so that it would fall off a table, i expect it to move horizontally but my pen ( and all other pens too! ) moves diagonally when it starts moving down the table!When i remove the notebook , the pen moves like,its shown in the picture ( if i keep it horizontally also, it gives the same result)-
Why does this happen? Why does it not move horizontally ?
Answer: Because your pen is not a cylinder, but a portion of a cone. Since it is also rigid, both ends have to complete one cycle of rolling simultaneously. This means that for each cycle, if the narrow and thick ends are separated by the pen's length $L$ and have radii $r_1$ and $r_2$, respectively, they roll $2\pi r_1$ and $2\pi r_2$, respectively. The only way this can be accomplished (without skidding) is for the ends to roll in a circular motion around a common center.
If I haven't miscalculated (rolling without skidding over one revolution about the common center requires both ends to complete the same number of rotations, $2\pi d/2\pi r_1 = 2\pi(d+L)/2\pi r_2$, i.e. $d/r_1 = (d+L)/r_2$), then the distance $d$ from the "small" end to this center is
$$d = \frac{L}{r_2/r_1 - 1}$$. | {
"domain": "physics.stackexchange",
"id": 27466,
"tags": "classical-mechanics, rotational-dynamics"
} |
Thermodynamic stability | Question: Thermodynamically, a gaseous state is said to be more stable than a solid state for a given substance, but according to the minimum potential energy principle a solid should be more stable than a gaseous state. I am unable to reconcile these two notions of stability.
Answer: The gaseous state has a higher entropy than the solid state for a given substance, but we can't say that it's necessarily more stable. The most stable equilibrium phase (for a given temperature $T$ and pressure $P$) is the one with the lowest Gibbs free energy $G=H-TS$, where $H$ is the enthalpy (representing the bonding strength, among other factors) and $S$ is the entropy (representing the number of available microstates for our given $T$ and $P$). At sufficiently low temperatures, the solid state is always more stable; roughly speaking, the crossover happens where $\Delta G = \Delta H - T\Delta S = 0$, i.e. near $T \approx \Delta H_{\text{sub}}/\Delta S_{\text{sub}}$, with the enthalpy term dominating below that temperature and the entropy term above it. | {
"domain": "physics.stackexchange",
"id": 48882,
"tags": "thermodynamics"
} |
Conjugate Theory and Redox | Question: I'm trying to apply conjugate theory, which I can apply very well to acids/bases, to redox. Can you verify my logic, which I have broken down below?
1) I know that solid sodium is a strong reducer.
2) I know this because solid sodium when placed in water reacts violently, forming, among other products, sodium ion.
3) Reducers are oxidized. Reducers lose electrons. Correct; $\ce{Na^{+}}$ is the product of placing solid sodium in water.
4) So in this half-reaction: $\ce{Na<=>Na^{+} +e^-}$, sodium is the reducer. $\ce{Na^{+}}$ is the conjugate oxidizer.
5) Oxidizers are reduced. Oxidizers gain electrons.
6) Because sodium is a strong reducer, which means it has a strong tendency to be oxidized, or a strong tendency to lose electrons, its conjugate oxidizer must be weak. Oxidizers are reduced; oxidizers gain electrons. The potential for $\ce{Na^{+}}$ to gain the electron that $\ce{Na}$ just lost must be small, or $\ce{Na}$ would not be a good reducer.
7) Therefore, a strong reducer's conjugate oxidizer must be weak.
8) Likewise, a strong oxidizer's conjugate reducer must be weak.
9) Likewise, a weak reducer's conjugate oxidizer must be strong.
10) Likewise, a weak oxidizer's conjugate reducer must be strong.
Also, if anyone could point me to a good reference on gaining insight into redox, that would be great!
And while we're on the topic of redox, what's the mechanism for this reaction? Are both bonds cleaved homolytically? I see how the nucleophilic chlorines attack the electrophilic hydrogens but how's the bond broken?
$\ce{H_2 + Cl_2 ->2HCl}$
Answer: Your question covers two different topics:
Redox properties can be well described with redox potentials attributed to a given reaction. It is essentially the same concept that you are talking about: we assign a number (conveniently, it is an actual measurable potential) to an electrode reaction which includes an oxidized and a reduced form of the species we are talking about. This number, the redox potential, can show you how strong an oxidizer/reducer the corresponding form is, and just as you guess, strong oxidizers generally have a weak reducer pair.
The $\ce{HCl}$ formation reaction is generally a radical chain reaction, triggered e.g. by light, rather than an ionic electron-transfer process. The key here is to split the $\ce{Cl2}$ (e.g. photochemically); the obtained free radicals then propagate a chain reaction with elementary steps like
$\ce{Cl + H2 -> HCl + H}$ or
$\ce{H + Cl2 -> HCl + Cl}$
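(Added here for completeness: the two steps not written out above are the initiation, i.e. the photochemical split just mentioned, and a typical termination.)
$\ce{Cl2 ->[h\nu] 2Cl}$ and
$\ce{2Cl -> Cl2}$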
The two gases, once mixed, are more or less inert in total darkness. | {
"domain": "chemistry.stackexchange",
"id": 1323,
"tags": "electrochemistry, redox"
} |
Accessing multiple dynamic libraries with the same extern C methods | Question: I have multiple pre-compiled dynamic libraries that use the same extern "C" function names. The functions can behave differently for each dynamic library. Ultimately these methods will be accessed in a SystemVerilog simulation via DPI (Direct Program Interface).
When trying to link all the libraries to my simulator, I noticed that the visibility of the methods is dependent on link order. This blocks me from accessing the desired method of the same name if it is not the first library.
One solution would have been to require each extern "C" function to have a unique name. But this fails in my case for two reasons:
I do not own the source code of the dynamic libraries. I can make requests, but there is no guarantee whether or when they will be fulfilled.
It would add a lot of verbosity and static code to my SystemVerilog code as each component would need to figure out which DPI method it needs to access. It also doesn't scale well if a library is added or removed.
My solution is to create my own dynamic library that uses dlopen() and dlsym() (from dlfcn.h) to dynamically access methods from the pre-compiled dynamic libraries. Fortunately, all the pre-compiled library flavors use the same root class. They also contain a publicly accessible variable identifying the compiled flavor. I can use this identifier to decide which library to dynamically reference.
I cannot share the real code. Below is a runnable proof of concept. I am getting the desired output. It has been a few years since I have done this kind of coding with C++, so I'm hoping I'm not missing something.
my_model.h
#include <stdio.h>
#include <iostream>
#include <unordered_map>
#define CONCAT_(A, B) A ## B
#define CONCAT(A, B) CONCAT_(A, B)
#define EXPORTED __attribute__((visibility("default")))
#ifndef FLAVOR
#define FLAVOR PASSION
#endif
namespace my_ns{
enum Flavor { PASSION=0, ORANGE=1, GUAVA=2, POG=-1 };
class my_model {
public:
my_model(int cfg);
virtual ~my_model() { close(); }
virtual void close() { um.clear(); }
virtual int get_info(std::string key, int* value) { return -1; }
virtual int set_info(std::string key, int value) { return -1; }
virtual int del_info(std::string key) { return -1; }
const Flavor flavor = FLAVOR;
protected:
std::unordered_map<std::string,int> um;
};
#if FLAVOR>=0
class CONCAT(my_model_,FLAVOR) : public my_model {
public:
CONCAT(my_model_,FLAVOR)(int cfg) : my_model(cfg) {}
virtual int get_info(std::string key, int* value);
virtual int set_info(std::string key, int value);
virtual int del_info(std::string key);
};
#endif
extern "C" {
int EXPORTED create_model(void** model_h, int flavor, int cfg);
int EXPORTED delete_model(void** model_h);
int EXPORTED get_info(const void* model_h, const char* key, int* value);
int EXPORTED set_info(const void* model_h, const char* key, int value);
int EXPORTED del_info(const void* model_h, const char* key);
}
}
my_model.cpp
#include "my_model.h"
namespace my_ns {
my_model::my_model(int cfg) {
um["cfg"] = cfg;
um["flavor"] = static_cast<int>(FLAVOR);
printf("Info: cfg:%0d FLAVOR:%0d (%s:%s:%d)\n", um["cfg"],FLAVOR,__func__,__FILE__,__LINE__);
}
int CONCAT(my_model_,FLAVOR)::get_info(std::string key, int* value) {
printf("Info: %s(%s,%%d) : %s:%d\n", __func__,key.c_str(), __FILE__,__LINE__);
if (um.count(key)==0) return -1;
*value = um[key] + static_cast<int>(FLAVOR);
return 0;
}
int CONCAT(my_model_,FLAVOR)::set_info(std::string key, int value) {
printf("Info: %s(%s,%0d) : %s:%d\n", __func__,key.c_str(),value, __FILE__,__LINE__);
um[key] = value * um["cfg"] * um["cfg"];
return 0;
}
int CONCAT(my_model_,FLAVOR)::del_info(std::string key) {
printf("Info: %s(%s):%s:%d)\n", __func__,key.c_str(), __FILE__,__LINE__);
if (um.count(key)==0) return -1;
um.erase(key);
return 0;
}
int create_model(void** model_h, int flavor, int cfg) {
printf("Info: cfg:%0d FLAVOR:%d (%s:%s:%d)\n", cfg, FLAVOR, __func__,__FILE__,__LINE__);
*model_h = (void*) new CONCAT(my_model_,FLAVOR)(cfg);
return (!*model_h ? -1 : 0);
}
int delete_model(void** model_h) {
CONCAT(my_model_,FLAVOR)* _model = (CONCAT(my_model_,FLAVOR)*) *model_h;
if (!_model) { fprintf(stderr, "Error: NULL model_h"); return -1; }
delete _model;
*model_h = (void*) NULL;
return 0;
}
int get_info(const void* model_h, const char* key, int* value) {
CONCAT(my_model_,FLAVOR)* _model = (CONCAT(my_model_,FLAVOR)*) model_h;
if (!_model) { fprintf(stderr, "Error: NULL model_h"); return -1; }
return _model->get_info(std::string(key), value);
}
int set_info(const void* model_h, const char* key, int value) {
CONCAT(my_model_,FLAVOR)* _model = (CONCAT(my_model_,FLAVOR)*) model_h;
if (!_model) { fprintf(stderr, "Error: NULL model_h"); return -1; }
return _model->set_info(std::string(key), value);
}
int del_info(const void* model_h, const char* key) {
CONCAT(my_model_,FLAVOR)* _model = (CONCAT(my_model_,FLAVOR)*) model_h;
if (!_model) { fprintf(stderr, "Error: NULL model_h"); return -1; }
return _model->del_info(std::string(key));
}
}
my_top.cpp
#include "my_model.h"
#include <dlfcn.h>
namespace my_ns {
my_model::my_model(int cfg) {}
const char lib[3][16] = {"./libPassion.so","./libOrange.so","./libGuava.so"};
void *my_so[3];
int link_so(Flavor flavor) {
if (my_so[flavor] == NULL) {
printf("linking %s (%s:%s:%d)\n", lib[flavor], __func__,__FILE__,__LINE__);
my_so[flavor] = dlopen(lib[flavor], RTLD_NOW);
}
if (!my_so[flavor]) {
/* fail to load the library */
fprintf(stderr, "Error: %s\n", dlerror());
return -1;
}
return 0;
}
int create_model(void** model_h, int flavor, int cfg) {
link_so(static_cast<Flavor>(flavor));
int (*dlsym_create_model)(void** model_h, int flavor, int cfg);
*(void**)(&dlsym_create_model) = dlsym(my_so[flavor], "create_model");
return dlsym_create_model(model_h, flavor, cfg);
}
int delete_model(void** model_h) {
my_model* _model = (my_model*) model_h;
link_so(_model->flavor);
int (*dlsym_delete_model)(void** model_h);
*(void**)(&dlsym_delete_model) = dlsym(my_so[_model->flavor], "delete_model");
return dlsym_delete_model(model_h);
}
int get_info(const void* model_h, const char* key, int* value) {
my_model* _model = (my_model*) model_h;
link_so(_model->flavor);
int (*dlsym_get_info)(const void* model_h, const char* key, int* value);
*(void**)(&dlsym_get_info) = dlsym(my_so[_model->flavor], "get_info");
return dlsym_get_info(model_h, key, value);
}
int set_info(const void* model_h, const char* key, int value) {
my_model* _model = (my_model*) model_h;
link_so(_model->flavor);
int (*dlsym_set_info)(const void* model_h, const char* key, int value);
*(void**)(&dlsym_set_info) = dlsym(my_so[_model->flavor], "set_info");
return dlsym_set_info(model_h, key, value);
}
int del_info(const void* model_h, const char* key) {
my_model* _model = (my_model*) model_h;
link_so(_model->flavor);
int (*dlsym_del_info)(const void* model_h, const char* key);
*(void**)(&dlsym_del_info) = dlsym(my_so[_model->flavor], "del_info");
return dlsym_del_info(model_h, key);
}
}
my_dpi.sv
package my_dpi_pkg;
import "DPI-C" function int create_model( output chandle handle, input int flavor, cfg );
import "DPI-C" function int delete_model( inout chandle handle );
import "DPI-C" function int get_info( input chandle handle, input string key, output int value );
import "DPI-C" function int set_info( input chandle handle, input string key, input int value );
import "DPI-C" function int del_info( input chandle handle, input string key );
endpackage : my_dpi_pkg
module tb;
import my_dpi_pkg::*;
initial begin
chandle passion_h,orange_h, guava_h;
string str;
int flavor,val;
$display("Create");
assert(create_model(passion_h, 0, 10)==0);
assert(create_model( orange_h, 1, 16)==0);
assert(create_model( guava_h, 2, 8)==0);
$display("\nInfo via set_info()");
assert(set_info(passion_h, "alpha", 'd13)==0);
assert(set_info( orange_h, "beta", 'h13)==0);
assert(set_info( guava_h, "gamma", 'o13)==0);
$display("\nInfo via get_info()");
assert(get_info(passion_h,"flavor", flavor)==0);
assert(get_info(passion_h,"alpha", val)==0);
$display("passion_h flavor:%0d, alpha:'d%0d",flavor,val);
assert(get_info(orange_h,"flavor", flavor)==0);
assert(get_info(orange_h,"beta", val)==0);
$display("orange_h flavor:%0d, beta:'h%0h",flavor,val);
assert(get_info(guava_h,"flavor", flavor)==0);
assert(get_info(guava_h,"gamma", val)==0);
$display("guava_h flavor:%0d, gamma:'o%0o",flavor,val);
assert(get_info(guava_h,"cfg", flavor)==0);
val = 0;
assert(get_info(guava_h,"alpha", val)==-1);
$display("guava_h cfg:%0d, alpha:'d%0d",flavor,val);
$display("Delete");
assert(delete_model(passion_h)==0);
assert(delete_model(orange_h)==0);
assert(delete_model(guava_h)==0);
$finish(0);
end
endmodule
Commands to build and run:
g++ -fvisibility=hidden -fvisibility-inlines-hidden -s -shared -fPIC -std=gnu++11 -DFLAVOR=PASSION -o libPassion.so my_model.h my_model.cpp -Wall -g || exit 1
g++ -fvisibility=hidden -fvisibility-inlines-hidden -s -shared -fPIC -std=gnu++11 -DFLAVOR=ORANGE -o libOrange.so my_model.h my_model.cpp -Wall -g || exit 1
g++ -fvisibility=hidden -fvisibility-inlines-hidden -s -shared -fPIC -std=gnu++11 -DFLAVOR=GUAVA -o libGuava.so my_model.h my_model.cpp -Wall -g || exit 1
g++ -fvisibility=hidden -fvisibility-inlines-hidden -s -shared -fPIC -std=gnu++11 -DFLAVOR=POG -o libPOG.so my_model.h my_top.cpp -Wall -g
<sv_simulator> -<dpi_keyword> libPOG.so my_dpi.sv
Output:
Create
linking ./libPassion.so (link_so:my_top.cpp:11)
Info: cfg:10 FLAVOR:0 (create_model:my_model.cpp:29)
Info: cfg:10 FLAVOR:0 (my_model:my_model.cpp:7)
linking ./libOrange.so (link_so:my_top.cpp:11)
Info: cfg:16 FLAVOR:1 (create_model:my_model.cpp:29)
Info: cfg:16 FLAVOR:1 (my_model:my_model.cpp:7)
linking ./libGuava.so (link_so:my_top.cpp:11)
Info: cfg:8 FLAVOR:2 (create_model:my_model.cpp:29)
Info: cfg:8 FLAVOR:2 (my_model:my_model.cpp:7)
Info via set_info()
Info: set_info(alpha,13) : my_model.cpp:17
Info: set_info(beta,19) : my_model.cpp:17
Info: set_info(gamma,11) : my_model.cpp:17
Info via get_info()
Info: get_info(flavor,%d) : my_model.cpp:11
Info: get_info(alpha,%d) : my_model.cpp:11
passion_h flavor:0, alpha:'d1300
Info: get_info(flavor,%d) : my_model.cpp:11
Info: get_info(beta,%d) : my_model.cpp:11
orange_h flavor:2, beta:'h1301
Info: get_info(flavor,%d) : my_model.cpp:11
Info: get_info(gamma,%d) : my_model.cpp:11
guava_h flavor:4, gamma:'o1302
Info: get_info(cfg,%d) : my_model.cpp:11
Info: get_info(alpha,%d) : my_model.cpp:11
guava_h cfg:10, alpha:'d0
Delete
./my_dpi.sv:42 $finish(0);
Answer: The code looks correct. It looks like a weird mix of C and C++ though.
Assumptions
There was a statement that they share the same base class, but I am not sure what that means if they all use a C-style interface (not only C linkage, but dealing in void*). I will assume that the library only has the free-standing functions and that the classes are free for modification.
The ABI looks quite safe with it using only C style built in types, so I will assume that ABI needs to be preserved.
Code Review
Dangers of dynamic loading. Dynamic loading has a lot of weird implicit behavior. If there are common dependency libraries and they have different versions, it is a one-way ticket to DLL hell. dlclose is also tricky, because calling it can leave the library loaded if it is a dependency of something else. There is also RPATH vs RUNPATH vs environment variables (LD_PRELOAD, LD_LIBRARY_PATH) vs system-wide config ... Well, it is a pain to deal with dynamic loading.
Not using RTLD_LOCAL. If there is a common dependency symbol and one has it defined and the other does not, instead of failing it will silently link the wrong one.
Inefficiency. The code calls dlsym on every function call. It would be better to look the functions up once and reuse the retrieved function pointers.
Propagating C-style interface. Is the code meant to be used on the C++ side? If so, I believe it would be better to use C++-style error handling (there are expected-like libraries, std::optional with an error code taken by reference, or straight-up exceptions if they are supported). The reason I'm saying this is that the classes provide minimal abstraction even though it seems they could do more.
"Sticking out" symbols. I believe it would be better not to declare the functions to be linked from the loaded libraries. The last time I used dynamic loading I just declared a local function-pointer variable and cast the result of dlsym. When the functions are declared, they might be accidentally linked and then cause some confusion. It is better not to expose symbols that are meant to be dynamically loaded, to avoid accidental linkage.
No lifetime management. If libraries are loaded and unloaded, it is important to keep track of the instances that use a to-be-unloaded library. If the binary for a function to be called is unloaded, the program will terminate, probably with SIGSEGV, as the page containing the binary no longer belongs to the process.
Better abstraction
I cannot believe I'm saying this, but the GObject architecture looks good here. The idea is that the class needs to be loaded and linked exactly once; afterwards the function pointers will be reused. I did this outside of GObject where I had a factory function dlsym'ed and retrieved all of the information from there. After dlopening another library I would just ask for another factory function pointer. It just worked.
The idea is to deal in local pointers that the linker does not see; otherwise it will think we want to link them to something. The function definitions need to be in a cpp file; I wrote them inline for brevity (overall it is an untested sketch to illustrate the proposed interface with some implementation guidelines).
class Model {
public:
virtual int get_info(std::string_view key, int* value) = 0;
virtual void set_info(std::string_view key, int value) = 0;
virtual bool del_info(std::string_view key) = 0;
virtual ~Model() {}
};
class ModelClass {
const Flavor flavour;
void* so_handle;
int (*create_model)(void**, int, int);
/* and the others */
public:
ModelClass(const ModelClass& other) = delete;
ModelClass& operator=(const ModelClass& other) = delete;
/* move operations are automatically deleted due to copy being deleted too*/
std::unique_ptr<Model> create_instance(/*args?*/) {
/*call the linked function and wrap it in something depending on the flavor,
cpp file will hide the definition of the class so the class is better defined there*/
}
Flavour get_flavour() const noexcept;
~ModelClass() {
/*unload the library, preferably track if no instances of this class are left out*/
}
private:
ModelClass(const char* lib_path) {
so_handle = dlopen(lib_path, RTLD_NOW | RTLD_LOCAL);
if (!so_handle) {
throw std::runtime_error(dlerror());
}
create_model = reinterpret_cast<int (*)(void**, int, int)>(dlsym(so_handle, "create_model"));
if (!create_model) {
throw std::runtime_error(dlerror());
}
/*link the rest*/
}
};
There is also unnecessary copying when passing std::string by value where it is clearly meant to be read-only. As JDługosz mentioned, it should be std::string_view.
After having all of the above, one could create a manager class and befriend it with ModelClass. The manager can be a singleton that loads everything needed at program start, providing either static variables of ModelClass or a map if needed. The manager class could store the paths, constants for function names, etc.
If Boost.DLL is an option, I would just use that. I do not have experience with it, but I believe it should contain the boilerplate I wrote above. | {
"domain": "codereview.stackexchange",
"id": 42805,
"tags": "c++, c++11, library, dynamic-loading, system-verilog"
} |
How to determine the Bravais lattice and atom basis vectors from a CIF file? | Question: Say I have a CIF file describing some material in terms of its symmetry space group, lattice parameters and in-cell atom positions. A simple example might be,
data_global
_chemical_name 'Graphene'
_cell_length_a 2.46
_cell_length_b 2.46
_cell_length_c 1
_cell_angle_alpha 90
_cell_angle_beta 90
_cell_angle_gamma 120
_symmetry_space_group_name_H-M 'P 3 m 1'
loop_
_atom_site_label
_atom_site_fract_x
_atom_site_fract_y
_atom_site_fract_z
C 0.00000 0.00000 0.00000
C 0.33333 0.66667 0.00000
I would like to extract from this Bravais lattice vectors and atom basis vectors. For this specific example, this might look like,
2.456 0.0 0.0
-1.228 2.126 0.0
C
0.0 0.0 0.0
0.0 1.418 0.0
where the first two lines give the two lattice vectors (in some basis), while the next two give the positions of the atoms in the unit cell.
I have tried using Open Babel for this, converting from the CIF format to a VASP one. This works well for the example above, but fails for a material such as tin sulfide (SnS), which has a structure similar to black phosphorus---four atoms in a unit cell, only two of which are explicitly listed in the CIF file. The positions of the other two are implied by the symmetry group, and programs such as Mercury correctly visualize the structure based on the CIF file. However, converting the CIF into VASP only gives you two of the four unit cell atoms. I tried all other formats in Open Babel, with no success.
Is there a reliable way to convert a CIF file into a description in terms of a Bravais lattice plus atom basis vectors?
Answer: You can use Open Babel. You just need to "fill" the unit cell (i.e., generate all the symmetric atoms).
You just need the --fillUC option, documented here:
For a crystal structure, add atoms to fill the entire unit cell based on the unique positions, the unit cell and the spacegroup. The parameter can either be strict (the default), which only keeps atoms inside the unit cell, or keepconnect, which fills the unit cell but keeps the original connectivity.
e.g.,:
obabel SnS.cif -oVASP --fillUC >POSCAR
Result is:
Herzenbergite
1.000
4.330000000000000 0.000000000000000 0.000000000000000
0.000000000000001 11.180000000000000 0.000000000000000
0.000000000000000 0.000000000000000 3.980000000000000
Sn S Sn S
1 1 3 3
Cartesian
0.4979500000000001148 1.3192399999999999682 0.9949999999999999956
2.0697400000000003573 9.5030000000000001137 0.9949999999999999956
0.4979500000000001148 1.3192399999999999682 0.9949999999999999956
0.4979500000000001148 1.3192399999999999682 0.9949999999999999956
0.4979500000000001148 1.3192399999999999682 0.9949999999999999956
2.0697400000000003573 9.5030000000000001137 0.9949999999999999956
2.0697400000000003573 9.5030000000000001137 0.9949999999999999956
2.0697400000000003573 9.5030000000000001137 0.9949999999999999956 | {
"domain": "chemistry.stackexchange",
"id": 3364,
"tags": "crystal-structure, cheminformatics"
} |
Why using L1 regularization over L2? | Question: When fitting a linear regression model using a loss function, why should I use $L_1$ instead of $L_2$ regularization?
Is it better at preventing overfitting? Is it deterministic (so always a unique solution)? Is it better at feature selection (because producing sparse models)? Does it disperse the weights among the features?
Answer: Basically, we add a regularization term in order to prevent the coefficients from fitting so perfectly that the model overfits.
The difference between L1 and L2 is that L1 is the sum of the absolute values of the weights, while L2 is the sum of the squares of the weights.
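Written out for a linear-regression loss with penalty strength $\lambda$ (notation added here, not from the original answer), the two penalized objectives are
$$J_{L2}(w)=\sum_i \left(y_i - w^\top x_i\right)^2 + \lambda\sum_j w_j^2 \qquad\text{and}\qquad J_{L1}(w)=\sum_i \left(y_i - w^\top x_i\right)^2 + \lambda\sum_j \lvert w_j\rvert$$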
L1 cannot be used directly in plain gradient-based approaches, since its penalty is not differentiable at zero (subgradient or proximal methods are used instead), unlike L2.
L1 helps perform feature selection in sparse feature spaces. Feature selection means working out which features are helpful and which are redundant.
The difference between their properties can be summarized as follows: L1 tends to drive some weights exactly to zero (producing sparse, implicitly feature-selecting models), while L2 shrinks all weights smoothly toward zero without eliminating any of them. | {
"domain": "datascience.stackexchange",
"id": 2112,
"tags": "linear-regression, regularization"
} |
Date range validator | Question: I have some code which works fine. First, it checks that the date formats are valid, and then it checks that the first date is lower than the second date. I had to use free text input and a custom validator because multiple date formats are supported, such as quarter numbers (2014 Q1), etc.
But I find my code too long for what it's doing. Any advice on code improvement?
DEMO
function check_handler(){
var from = $('#zeDateFrom').val(), to = $('#zeDateTo').val();
if(adu_date_format(from) && adu_date_format(to)){
if( check_date_period(from.trim(), to.trim()) ){
alert('OK');
}
else alert('Start Date must be lower than End Date.');
}
else{
alert('Please use a valid date format. For your information, valid formats are:\ndd/mm/yyyy (e.g 11/12/2011)\n yyyy (e.g 2010)\n mm/yyyy (e.g 11/2009)\n and yyyy qq (e.g 2012 Q2).');
}
}
function check_date_period(from, to){
var result = false;
var yearFrom, yearTo, monthFrom, monthTo, dayFrom, dayTo;
// 0) If something to compare...
if(from==""||to=="") return true;
// 1) Test if input is type year only
if(from.length==4){
yearFrom = from; monthFrom = 1; dayFrom = 1;
}
if(to.length==4){
yearTo = to; monthTo = 12; dayTo = 0; // 0 for last day of month
}
// 2) Test if input is type year[space] quarter
var mySplit = from.split(" ");
if(mySplit.length == 2){
yearFrom = mySplit[0];
dayFrom = 1;
if(mySplit[1]=='Q1'){
monthFrom = 1;
}
if(mySplit[1]=='Q2'){
monthFrom = 4;
}
if(mySplit[1]=='Q3'){
monthFrom = 7;
}
if(mySplit[1]=='Q4'){
monthFrom = 10;
}
}
mySplit = to.split(" ");
if(mySplit.length == 2){
yearTo = mySplit[0];
dayTo = 0;
if(mySplit[1]=='Q1'){
monthTo = 3;
}
if(mySplit[1]=='Q2'){
monthTo = 6;
}
if(mySplit[1]=='Q3'){
monthTo = 9;
}
if(mySplit[1]=='Q4'){
monthTo = 12;
}
}
// 3) Test if input is type month/year
mySplit = from.split("/");
if(mySplit.length == 2){
yearFrom = mySplit[1]; monthFrom = mySplit[0]; dayFrom = 1;
}
mySplit = to.split("/");
if(mySplit.length == 2){
yearTo = mySplit[1]; monthTo = mySplit[0]; dayTo = 0; // 0 for last day of month
}
// 4) test if input is type dd/mm/yyyy
mySplit = from.split("/");
if(mySplit.length == 3){
yearFrom = mySplit[2]; monthFrom = mySplit[1]; dayFrom = mySplit[0];
}
mySplit = to.split("/");
if(mySplit.length == 3){
yearTo = mySplit[2]; monthTo = mySplit[1]; dayTo = mySplit[0];
}
// FINALLY: Compare dates
// Note: 00 is month i.e. January
monthFrom--;
if(dayTo!=0) monthTo--;
var dateOne = new Date(yearFrom, monthFrom, dayFrom); //Year, Month, Day
var dateTwo = new Date(yearTo, monthTo, dayTo); //Year, Month, Day
//alert(dayFrom+'/'+(monthFrom-1)+'/'+yearFrom+'---------'+dayTo+'/'+(monthTo-1)+'/'+yearTo)
//alert(dateOne+'-'+dateTwo);
if (dateOne < dateTwo) {
result = true;
}
return result;
}
function adu_date_format(text){
var result = false;
// 0) If something to test...
if(text=="") return true;
// 1) Test if input is type year only
if(text.length == 4){
if(text >= 2000 && text <= 2030)
result = true;
}
// 2) Test if input is type year[space] quarter
var mySplit = text.split(" ");
if(mySplit.length == 2){
if((mySplit[0] >= 2000 && mySplit[0] <= 2030) && (mySplit[1].length == 2 && mySplit[1] >= 'Q1' && mySplit[1] <= 'Q4'))
result = true;
}
// 3) Test if input is type month/year
mySplit = text.split("/");
if(mySplit.length == 2){
if((mySplit[0] >= 1 && mySplit[0] <= 12) && (mySplit[1] >= 2000 && mySplit[1] <= 2030))
result = true;
}
// 4) test if input is type dd/mm/yyyy
mySplit = text.split("/");
if(mySplit.length == 3){
if(mySplit[2] >= 2000 && mySplit[2] <= 2030)
result = true;
}
return result;
}
Answer: What jumps out at me most about this code is that everything is done in just two functions.
Consider moving your "parsing" operations to separate functions: one for validating the format required for parsing and one for the actual parsing:
function isValidQuarterInput(datestring){
return datestring.split(" ").length == 2;
}
function parseQuarterInputToMonth(datestring){
var mySplit = datestring.split(" ");
//move your Q1,Q2,... mapping here
}
then in check_date_period() you can just call the following
if(isValidQuarterInput(from)){monthFrom = parseQuarterInputToMonth(from);}
// alternatively, using the ternary operator
monthTo = isValidQuarterInput(to) ? parseQuarterInputToMonth(to) : monthTo;
Your var result = false is unnecessary, as you never use it elsewhere; you can just return the comparison directly:
return dateOne < dateTwo;
I personally would use a boolean wasParsed that is set to true on successful parsing of an input, to jump straight to the new Date() part. This can potentially save execution time, but that's minor. You would have to reorder the parsing as follows, though:
var wasParsed = false;
if(from.length==4){
yearFrom = from; monthFrom = 1; dayFrom = 1;
wasParsed = true;
}
if(!wasParsed){
if(isValidQuarterInput(from)){
wasParsed = true;
monthFrom = parseQuarterInputToMonth(from);
yearFrom = from.split(" ")[0];
dayFrom = 1;
}
}
if(!wasParsed){
//here is the full date parsing
}
var dateOne = new Date(yearFrom, monthFrom, dayFrom);
wasParsed = false;
//repeat with to | {
"domain": "codereview.stackexchange",
"id": 6330,
"tags": "javascript, datetime, validation"
} |
Question about the formal proof of the inorder traversing | Question: In Don Knuth's famous series of books, The Art of Computer Programming, section 2.3.1, he describes an algorithm to traverse a binary tree in inorder, making use of an auxiliary stack:
T1 [Initialize.] Set stack $\rm A$ empty and set the link variable $\rm P\gets T$
T2 [$\rm P=\Lambda$?] If $\rm P=\Lambda$, go to step T4.
T3 [Stack$\rm \;\Leftarrow P$] (Now $\rm P$ points to a nonempty binary tree that is to be traversed.) push the value of $\rm P$ onto stack $\rm A$, then set $\rm P\gets LLINK(P)$
T4 [$\rm P\Leftarrow Stack$] If stack $\rm A$ is empty, the algorithm terminates; otherwise pop the top of $\rm A$ to $\rm P$.
T5 [Visit $\rm P$] Visit $\rm NODE(P)$. Then set $\rm P\gets RLINK(P)$ and return to step T2.
We can plot a flow chart of the algorithm. In the succeeding paragraph, he gives a formal proof of the algorithm:
Starting at step T2 with $\rm P$ a pointer to a binary tree of $n$ nodes and with the stack $\rm A$ containing $\rm A[1]\dotsc A[m]$ for some $m\ge 0$, the procedure of steps T2-T5 will traverse the binary tree in question, in inorder, and will then arrive at step T4 with stack $\rm A$ returned to its original value $\rm A[1]\dotsc A[m]$.
However, as far as I know, such a formal proof is quite different from the general method described in section 1.2.1:
for each box in the flow chart, that if an assertion attached to any arrow leading into the box is true before the operation in that box is performed, then all of the assertions on relevant arrows leading away from the box are true after the operation.
In fact, such a method is somewhat equivalent to Hoare logic, which is used to formally check the validity of algorithms.
Can we turn the statement used to prove the traversal algorithm into a schema of Hoare logic, or into the assertion-attachment style of a flow chart?
Thanks!
Answer: It is definitely possible to analyze this algorithm using Hoare triples. The first step would be to replace the VISIT procedure calls with some more reasonable accounting mechanism, say a list that lists the visited nodes in order. You then define formally what a binary tree is and what an inorder traversal is, something along the following lines:
Tree = Leaf N | Node N LTree RTree
Inorder(Leaf N) = N
Inorder(Node N LTree RTree) = Inorder(LTree) || N || Inorder(RTree)
Here N is the "name" of the node, and || is list concatenation. Armed with these notions, it is an exercise to construct the required Hoare triples. You will probably need to come up with even more notions (for example, you will need to explain what the contents of the stack are when a node P is popped).
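One concrete way to begin is to transcribe steps T1–T5 into executable form next to the recursive specification; the postcondition to establish is then that the two functions agree on every tree. A Python sketch (the tuple node layout and all names are mine, not Knuth's):

```python
# Hypothetical node layout: a node is a tuple (LLINK, name, RLINK),
# with None standing in for the null link Lambda.

def inorder_knuth(t):
    """Iterative inorder traversal, transcribed from steps T1-T5."""
    visited = []
    stack = []                      # T1: set stack A empty
    p = t                           # T1: P <- T
    while True:
        while p is not None:        # T2: if P = Lambda, go to T4
            stack.append(p)         # T3: push P onto A
            p = p[0]                #     P <- LLINK(P)
        if not stack:               # T4: empty stack -> terminate
            return visited
        p = stack.pop()             # T4: pop the top of A into P
        visited.append(p[1])        # T5: visit NODE(P)
        p = p[2]                    # T5: P <- RLINK(P), back to T2

def inorder_spec(t):
    """The recursive specification of inorder, for comparison."""
    if t is None:
        return []
    return inorder_spec(t[0]) + [t[1]] + inorder_spec(t[2])
```

The Hoare-style claim is then {True} visited := inorder_knuth(t) {visited = Inorder(t)}, which one would prove with a loop invariant describing the stack contents.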
What do we gain from this exercise? Do we understand the algorithm any better? Probably not. But we understand how to reason precisely about algorithms, something which is useful if you plan on doing software verification or programming language theory, areas forming the so called "Theory B". If you're more of a "Theory A" type (algorithms and complexity) then, like me, you will find such exercises somewhat beside the point. | {
"domain": "cs.stackexchange",
"id": 1491,
"tags": "algorithms, correctness-proof, graph-traversal, hoare-logic"
} |
When I have already found the beamforming vector, do I still need to use MRC? I have some problems with this | Question: I have run into some problems while learning MIMO. I know that beamforming design and MRC can both ensure a better received signal, but they are not the same thing, so I have some questions about them.
1. I know that MRC is one method to ensure a better received signal, so if I have already found the beamforming direction, do I still apply MRC when calculating the SNR, like the example below?
2. Suppose the transmitter uses $N_T$ antennas to transmit one signal to a receiver with $N_R$ antennas. I apply the SVD to the MIMO channel $\mathbf H$ (an $N_R \times N_T$ matrix) and find the best beamforming direction $\vec f_A$ (an $N_T \times 1$ column vector) from it. Since I now have beamforming, do I no longer need to do MRC?
3. The received signal is now $\vec y=\sqrt{P}\mathbf H \vec f_A x+\vec n$, so is the SNR given by $SNR=\frac{P\|\mathbf H\vec f_A\|^2}{\sigma^2 _n}$?
My classmate says that the SNR is not $\frac{P\|\mathbf H\vec f_A\|^2}{\sigma ^2 _n}$ but $\frac{P\|\vec u^H\mathbf H\vec f_A\|^2}{\sigma ^2_n}$, where $\vec u$ is also calculated from the SVD: it is the first column of $U$ in $\mathbf H=U\Sigma V^H$. His reasoning is that $\mathbf H\vec f_A x$ is still a vector, the SNR must be computed from a scalar, and $\vec u^H\mathbf H\vec f_A$ is a scalar.
Is my SNR formula right, or is my classmate's? Does anyone know the answer to my question? It has confused me for months.
Answer: You are doing transmit beamforming and choosing your vector $\mathbf{f}_A$ so that you beamform in the "best" direction. So you transmit the signal $\mathbf{z}=\sqrt{P} \mathbf{f}_A x$, where $\mathbf{z}$ is a length-$N_T$ vector. You then receive the signal $\mathbf{y} = \sqrt{P}\mathbf{H}\mathbf{f}_A x + \mathbf{n}$, which is a length-$N_R$ vector.
The post-MRC SNR should still be a vector. You will have a SNR for each transmit stream, so $N_T$ SNR values. To compute the SNR of the $k^{\text{th}}$ transmit stream after MRC you take the channel gains from transmit antenna $k$ to each of the $N_R$ receive antennas, $\mathbf{h}_k$, and compute its output power: $\gamma_k = P ||\mathbf{h}_k^H \mathbf{H} \mathbf{f}_A||^2$. Or, you can also equivalently do it all at once by: $||\mathbf{H}^H \mathbf{H} \mathbf{f}_A||^2$.
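As a numerical aside (my own sketch, not part of the original answer): if $\vec f_A$ is taken as the leading right singular vector of $\mathbf H$ and the receiver combines with the leading left singular vector $\vec u$ — which is matched-filter/MRC combining for this rank-one transmission — then $\|\mathbf H \vec f_A\|$ and $|\vec u^H \mathbf H \vec f_A|$ both equal the largest singular value $\sigma_1$, so the two SNR expressions in the question give the same number. The channel below is random, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
Nr, Nt = 4, 3
# Illustrative random complex channel H (Nr x Nt)
H = rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))

U, s, Vh = np.linalg.svd(H)
f_A = Vh.conj().T[:, 0]   # best transmit beamforming direction (first column of V)
u = U[:, 0]               # MRC / matched-filter combiner at the receiver

P, sigma2 = 1.0, 1.0
snr_norm = P * np.linalg.norm(H @ f_A) ** 2 / sigma2        # ||H f_A||^2 form
snr_comb = P * np.abs(u.conj() @ H @ f_A) ** 2 / sigma2     # |u^H H f_A|^2 form

# Both equal P * sigma_max^2 / sigma_n^2, since H f_A = sigma_1 * u
print(snr_norm, snr_comb, s[0] ** 2)
```

This works because $\mathbf H \vec f_A = \sigma_1 \vec u$ by the SVD, so the norm and the combined scalar carry the same power.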
Your formula is actually computing the pre-MRC SNR, whereas your friend's shows how to get the post-MRC SNR. Hope this helps! | {
"domain": "dsp.stackexchange",
"id": 8051,
"tags": "digital-communications"
} |
C - Improving (really) big number addition | Question: I was lucky enough to stumble into this task during a job interview. I was gently guided to use string reversals and a few other things to speed up my process. I kept thinking that I wouldn't have done it the way I was coached. It bothered me enough that I went home and coded up my version.
How could I improve this code?
* EDIT * Added a size_t for carry. Reduced the number of calls to strlen().
Rules:
Program must be in C
Can't use atoi, and only basic types
Must be able to add arbitrarily large numbers
Output must be a string
Example usage:
addbig 12345678901234567891234567891234567890123456789123456789 9876543210987654321098765432198765432109876543210987654321
Here's the code I came up with:
#include <stdio.h> // printf
#include <stdlib.h> // malloc
#include <string.h> // strlen
char * addStrings(char* string1, char* string2) {
char *maxString = string1;
char *minString = string2;
// Override our larger if we were wrong.
if (strlen(string1) < strlen(string2)) {
maxString = string2;
minString = string1;
}
int maxSize = strlen(maxString);
// Allocate enough storage to include our carry and the null char at the end.
char * myResult = malloc(maxSize + 2);
// Null terminate the string
myResult[maxSize + 1] = 0;
size_t carry = 0;
int mindex = strlen(minString) - 1;
char x = 0;
// One loop. Avoid -std=c99 flag
int i = maxSize - 1;
for (; i >= 0; i--) {
// Default our char
x = maxString[i] - '0';
// We still have something left to add.
if (mindex >= 0){
x = x + (minString[mindex] - '0');
mindex--;
}
if (carry) x++;
if (x >= 10) {
x -= 10;
carry = 1;
} else {
carry = 0;
}
myResult[i + 1] = x + '0';
}
// Take care of any leftover carry.
if (carry) myResult[0] = '1';
return myResult;
} // END addStrings()
int main(int argc, char **argv) {
if (argc < 2) {
printf("Usage: %s num1 num2\n", argv[0]);
printf("Utility for big numbers.");
printf("Adds num1 to num2, for any length of numbers.\n");
return 1;
}
char * result = addStrings(argv[1], argv[2]);
// Removing the initial space allocated by storage if we didn't overflow
printf("%s\n", (result[0] == ' ') ? result + 1 : result);
if (result) {
free(result);
return 0;
}
// Returned an error state, memory not freed.
return -1;
}
Answer: Function Error
What is myResult[0] when carry == 0?
if (carry) myResult[0] = '1';
Improvements
As the code is not changing the contents of the string, use const for potential compiler optimization and clarity to reviewers that the code is in fact not changing the characters.
// char * addStrings(char* string1, char* string2)
char * addStrings(const char* string1, const char* string2)
Code that allocates memory should clearly say that as a comment and/or function name.
// Returns an allocated string representing the sum of ....
char * addStrings(
// or
char * addStrings_alloc(
Really big strings can exceed length INT_MAX. Use size_t for indexing arrays and string lengths. size_t is the return type of strlen().
// int maxSize = strlen(maxString);
size_t maxSize = strlen(maxString);
// int i = maxSize - 1;
// for (; i >= 0; i--) {
size_t i;
for (i = maxSize; i > 0; ) {
i--;
Check the return value of malloc()
char * myResult = malloc(maxSize + 2);
if (myResult == NULL) return NULL;
No need for carry to be `size_t`.
// size_t carry = 0;
int carry = 0;
No provision for - numbers. (or ones with a leading '+')
No provision for trimming leading '0's in the answer, with input like addStrings("007", "007")
No provision for non-digits as in addStrings("x", "007")
Simplification
// x = maxString[i] - '0';
// if (carry) x++;
x = maxString[i] - '0' + carry;
The processing below should have happened inside the function.
// Removing the initial space allocated by storage if we didn't overflow
... (result[0] == ' ') ? result + 1 : result)
Minor
Nomenclature: size is the size needed for the array; length is the length of the string (not counting the null character). Suggest maxLength.
// int maxSize = strlen(maxString);
int maxLength = strlen(maxString);
Why the comment? Seems unneeded // Avoid -std=c99 flag
No provision for NULL input addStrings(NULL, "007") - but then NULL is not a valid string. For an interview question, I'd at least suggest that inputs may need sanitizing before use.
Style: Even single if() are easier to debug/maintain with {}
// if (carry) myResult[0] = '1';
if (carry) {
myResult[0] = '1';
}
Test driver simplification. free(NULL) is OK.
// if (result) {
// free(result);
// return 0;
//}
free(result);
Test driver function error - off by 1
// if (argc < 2) {
if (argc < 3) {
// or better
if (argc != 3) {
I'd expect the code to be tested and to give a reasonable (or at least defined) result with addStrings("", "7") and addStrings("", "") | {
"domain": "codereview.stackexchange",
"id": 19960,
"tags": "c"
} |
Source and Destination IP of TCP connection | Question: Five Tuple Identifier of TCP connection is (TCP, local IP, local port, remote IP, remote port).
I have made 3 computers in virtual box and set BOX1 for client, BOX2 for median, and BOX3 for server. And made interfaces between them to communicate.
So I have executed the server program on BOX3 and client program in BOX1. The client program calls 10.0.2.2.
When I captured packets from BOX1 and BOX3, the IP Source and Destination IP address was like this.
BOX1's SYN
Src : 10.0.1.2
Dest : 10.0.2.2
BOX3's SYN
Src : 10.0.1.2
Dest : 10.0.2.2
What I expected was that in BOX1, Src would be 10.0.1.2 and Dest would be 10.0.1.1, and in BOX3, Src would be 10.0.2.1 and Dest would be 10.0.2.2, because BOX1 communicates with BOX2 and BOX3 communicates with BOX2.
Why are the IP addresses like this?
Thanks for your help.
Answer: Consider the format of the TCP/IP datagram.
Source Address: The 32-bit IP address of the originator of the datagram. Note that even though intermediate devices such as routers may handle the datagram, they do not normally put their address into this field—it is always the device that originally sent the datagram.
Destination Address: The 32-bit IP address of the intended recipient of the datagram. Again, even though devices such as routers may be the intermediate targets of the datagram, this field is always for the ultimate destination.
You can get more understanding here. | {
"domain": "cs.stackexchange",
"id": 16032,
"tags": "network-analysis, tcp"
} |
Simple Physics 1 question regarding position of a particle at a time $t$ | Question: I am going over my physics homework before an exam, and noticed I genuinely had no clue how to do the following:
Given the vector V, plot the path on an xy plane, where t is in seconds.
$$V = (5m)\sin(2\pi t)i + (5m)\cos(2\pi t)j$$
So I tried thinking about it critically and then I ran into something that made me question this whole problem.
Let x and t be some different constants,
$x(\sin(2\pi t)) = 0$, regardless of what x or t you give it.
$x(\cos(2\pi t)) = x$, regardless of what x or t you give it.
From this I substitute the previous findings into the original equation
$$V = 0i + xj$$
This doesn't really make sense to me, and I am not sure how to go about plotting this.
Any help would be appreciated.
Answer: Your “regardless” statements about sin and cos are both wrong.
For example, when $t=1/4$, $\sin{2\pi t}=\sin{\pi/2}=1$.
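Tabulating a few points shows the shape immediately: every sample satisfies $x^2+y^2=5^2$, so the path is a circle of radius 5 m, traversed once per second starting from $(0,5)$. A quick sketch in Python:

```python
import math

for k in range(11):
    t = k / 10.0
    x = 5 * math.sin(2 * math.pi * t)
    y = 5 * math.cos(2 * math.pi * t)
    # every sample lies on the circle x^2 + y^2 = 25
    print(f"t={t:.1f}  x={x:+.3f}  y={y:+.3f}  x^2+y^2={x*x + y*y:.3f}")
```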
Get a calculator and plot the points $(5\sin{2\pi t},5\cos{2\pi t})$ for $t=0.0,0.1,0.2,0.3,...,1.0$. | {
"domain": "physics.stackexchange",
"id": 65077,
"tags": "homework-and-exercises, kinematics, vectors"
} |
kinect_tools compilation problem | Question:
Hello,
I'm trying to
rosmake kinect_tools --rosdep-install
and I get the following error:
[ rosmake ] Packages requested are: ['kinect_tools']
[ rosmake ] Logging to directory/home/gsaponaro/.ros/rosmake/rosmake_output-20110525-194202
[ rosmake ] Expanded args ['kinect_tools'] to:
['kinect_tools']
[ rosmake ] Generating Install Script using rosdep then executing. This may take a minute, you will be prompted for permissions. . .
Failed to find stack for package [eigen3]
Failed to load rosdep.yaml for package [eigen3]:Cannot locate installation of package eigen3: [rospack] couldn't find package [eigen3]. ROS_ROOT[/opt/ros/diamondback/ros] ROS_PACKAGE_PATH[/home/gsaponaro/kinect/ros:/opt/ros/diamondback/stacks]
rosdep executing this script:
{{{
set -o errexit
#No Packages to install
}}}
[ rosmake ] rosdep successfully installed all system dependencies
[rosmake-0] Starting >>> kinect_tools [ make ]
[ rosmake ] All 22 linesinect_tools: 0.1 sec ] [ 1 Active 0/1 Complete ]
{-------------------------------------------------------------------------------
mkdir -p bin
cd build && cmake -Wdev -DCMAKE_TOOLCHAIN_FILE=`rospack find rosbuild`/rostoolchain.cmake ..
[rosbuild] Building package kinect_tools
Failed to invoke /opt/ros/diamondback/ros/bin/rospack deps-manifests kinect_tools
[rospack] couldn't find dependency [eigen3] of [kinect_tools]
[rospack] missing dependency
CMake Error at /opt/ros/diamondback/ros/core/rosbuild/public.cmake:113 (message):
Failed to invoke rospack to get compile flags for package 'kinect_tools'.
Look above for errors from rospack itself. Aborting. Please fix the
broken dependency!
Call Stack (most recent call first):
/opt/ros/diamondback/ros/core/rosbuild/public.cmake:183 (rosbuild_invoke_rospack)
CMakeLists.txt:12 (rosbuild_init)
So, having read here that in DiamondBack we should use eigen instead of eigen3, I removed the 3 from manifest.xml, and I still get errors:
[rosmake-0] Starting >>> kinect_tools [ make ]
[ rosmake ] Last 40 linesnect_tools: 6.4 sec [ 1 Active 53/54 Complete ]
{-------------------------------------------------------------------------------
/home/gsaponaro/kinect/ros/kinect_demos/mit-ros-pkg-experimental/kinect_tools/src/analyze_hands.cpp: In member function ‘void HandProcessor::segFingers(double, int)’:
/home/gsaponaro/kinect/ros/kinect_demos/mit-ros-pkg-experimental/kinect_tools/src/analyze_hands.cpp:267: error: ‘centroid’ was not declared in this scope
/home/gsaponaro/kinect/ros/kinect_demos/mit-ros-pkg-experimental/kinect_tools/src/analyze_hands.cpp:283: error: ‘class Finger’ has no member named ‘centroid’
/home/gsaponaro/kinect/ros/kinect_demos/mit-ros-pkg-experimental/kinect_tools/src/analyze_hands.cpp:283: error: ‘arm’ was not declared in this scope
/home/gsaponaro/kinect/ros/kinect_demos/mit-ros-pkg-experimental/kinect_tools/src/analyze_hands.cpp:283: error: ‘class Finger’ has no member named ‘centroid’
/home/gsaponaro/kinect/ros/kinect_demos/mit-ros-pkg-experimental/kinect_tools/src/analyze_hands.cpp:299: error: ‘class Finger’ has no member named ‘centroid’
/home/gsaponaro/kinect/ros/kinect_demos/mit-ros-pkg-experimental/kinect_tools/src/analyze_hands.cpp: In member function ‘void HandProcessor::identfyFingers()’:
/home/gsaponaro/kinect/ros/kinect_demos/mit-ros-pkg-experimental/kinect_tools/src/analyze_hands.cpp:313: error: ‘class Finger’ has no member named ‘centroid’
/home/gsaponaro/kinect/ros/kinect_demos/mit-ros-pkg-experimental/kinect_tools/src/analyze_hands.cpp:313: error: ‘class Finger’ has no member named ‘centroid’
/home/gsaponaro/kinect/ros/kinect_demos/mit-ros-pkg-experimental/kinect_tools/src/analyze_hands.cpp:314: error: ‘class Finger’ has no member named ‘centroid’
/home/gsaponaro/kinect/ros/kinect_demos/mit-ros-pkg-experimental/kinect_tools/src/analyze_hands.cpp:314: error: ‘class Finger’ has no member named ‘centroid’
/home/gsaponaro/kinect/ros/kinect_demos/mit-ros-pkg-experimental/kinect_tools/src/analyze_hands.cpp: In member function ‘void HandAnalyzer::getEigens(const kinect_tools::Hand&)’:
/home/gsaponaro/kinect/ros/kinect_demos/mit-ros-pkg-experimental/kinect_tools/src/analyze_hands.cpp:392: error: ‘Eigen3’ has not been declared
/home/gsaponaro/kinect/ros/kinect_demos/mit-ros-pkg-experimental/kinect_tools/src/analyze_hands.cpp:392: error: expected ‘;’ before ‘centroid’
/home/gsaponaro/kinect/ros/kinect_demos/mit-ros-pkg-experimental/kinect_tools/src/analyze_hands.cpp:394: error: expected primary-expression before ‘__attribute__’
/home/gsaponaro/kinect/ros/kinect_demos/mit-ros-pkg-experimental/kinect_tools/src/analyze_hands.cpp:394: error: expected ‘;’ before ‘__attribute__’
/home/gsaponaro/kinect/ros/kinect_demos/mit-ros-pkg-experimental/kinect_tools/src/analyze_hands.cpp:395: error: expected primary-expression before ‘__attribute__’
/home/gsaponaro/kinect/ros/kinect_demos/mit-ros-pkg-experimental/kinect_tools/src/analyze_hands.cpp:395: error: expected ‘;’ before ‘__attribute__’
/home/gsaponaro/kinect/ros/kinect_demos/mit-ros-pkg-experimental/kinect_tools/src/analyze_hands.cpp:396: error: ‘Eigen3’ has not been declared
/home/gsaponaro/kinect/ros/kinect_demos/mit-ros-pkg-experimental/kinect_tools/src/analyze_hands.cpp:396: error: expected ‘;’ before ‘cov’
/home/gsaponaro/kinect/ros/kinect_demos/mit-ros-pkg-experimental/kinect_tools/src/analyze_hands.cpp:397: error: ‘centroid’ was not declared in this scope
/home/gsaponaro/kinect/ros/kinect_demos/mit-ros-pkg-experimental/kinect_tools/src/analyze_hands.cpp:398: error: ‘cov’ was not declared in this scope
/home/gsaponaro/kinect/ros/kinect_demos/mit-ros-pkg-experimental/kinect_tools/src/analyze_hands.cpp:399: error: ‘eigen_vectors’ was not declared in this scope
/home/gsaponaro/kinect/ros/kinect_demos/mit-ros-pkg-experimental/kinect_tools/src/analyze_hands.cpp:399: error: ‘eigen_values’ was not declared in this scope
/home/gsaponaro/kinect/ros/kinect_demos/mit-ros-pkg-experimental/kinect_tools/src/analyze_hands.cpp:400: error: ‘direction’ was not declared in this scope
/home/gsaponaro/kinect/ros/kinect_demos/mit-ros-pkg-experimental/kinect_tools/src/analyze_hands.cpp:403: error: ‘armvector’ was not declared in this scope
/home/gsaponaro/kinect/ros/kinect_demos/mit-ros-pkg-experimental/kinect_tools/src/analyze_hands.cpp:404: error: ‘flipvec’ was not declared in this scope
In file included from /home/gsaponaro/kinect/ros/kinect_demos/mit-ros-pkg-experimental/kinect_tools/src/analyze_hands.cpp:48:
/home/gsaponaro/kinect/ros/kinect_demos/mit-ros-pkg/pcl_tools/include/pcl_tools/segfast.hpp: In function ‘void extractEuclideanClustersFast2(pcl::PointCloud<PointT>&, std::vector<std::vector<int, std::allocator<int> >, std::allocator<std::vector<int, std::allocator<int> > > >&, double, int) [with PointT = pcl::PointXYZ]’:
/home/gsaponaro/kinect/ros/kinect_demos/mit-ros-pkg-experimental/kinect_tools/src/analyze_hands.cpp:273: instantiated from here
/home/gsaponaro/kinect/ros/kinect_demos/mit-ros-pkg/pcl_tools/include/pcl_tools/segfast.hpp:1273: warning: comparison between signed and unsigned integer expressions
/home/gsaponaro/kinect/ros/kinect_demos/mit-ros-pkg-experimental/kinect_tools/src/analyze_hands.cpp:273: instantiated from here
/home/gsaponaro/kinect/ros/kinect_demos/mit-ros-pkg/pcl_tools/include/pcl_tools/segfast.hpp:1314: warning: comparison between signed and unsigned integer expressions
make[3]: *** [CMakeFiles/analyzehands.dir/src/analyze_hands.o] Error 1
make[3]: Leaving directory `/home/gsaponaro/kinect/ros/kinect_demos/mit-ros-pkg-experimental/kinect_tools/build'
make[2]: *** [CMakeFiles/analyzehands.dir/all] Error 2
make[2]: Leaving directory `/home/gsaponaro/kinect/ros/kinect_demos/mit-ros-pkg-experimental/kinect_tools/build'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/home/gsaponaro/kinect/ros/kinect_demos/mit-ros-pkg-experimental/kinect_tools/build'
-------------------------------------------------------------------------------}
[ rosmake ] Output from build of package kinect_tools written to:
[ rosmake ] /home/gsaponaro/.ros/rosmake/rosmake_output-20110525-193810/kinect_tools/build_output.log
[rosmake-0] Finished <<< kinect_tools [FAIL] [ 6.41 seconds ]
Thank you very much for your help.
Originally posted by Giovanni Saponaro on ROS Answers with karma: 68 on 2011-05-25
Post score: 0
Answer:
Kinect_tools is obsolete. If you want the hand tracking library, you should check out hand_interaction, as referred to by www.ros.org/wiki/mit-ros-pkg/KinectDemos
Originally posted by Garratt Gallagher with karma: 26 on 2011-06-23
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Giovanni Saponaro on 2011-06-27:
Fair enough, thanks. | {
"domain": "robotics.stackexchange",
"id": 5662,
"tags": "ros, kinect, mit-ros-pkg, skeletal-tracker"
} |
If two cars have the same acceleration at time $t$, are the velocities of the cars the same at time $t$? | Question: When I look at the question logically, I reason that if I had two cars, a Range Rover and a BMW, then at time t I could pump the gas and make them both accelerate at the same rate of 0.5 m/s^2, yet they could be accelerating at the same rate while moving at different speeds, e.g. the Range Rover from 100 m/s and the BMW from 50 m/s.
So my conclusion would be that cars with the same acceleration at time $t$ could be moving at different speeds at time $t$.
Is my logic supported by any physics or is my reasoning false?
But then again for the acceleration of the BMW and Range Rover to be the same at time t, the velocity has to be the same where velocity is speed but with direction noted. Therefore, the BMW and Range Rover would have the same speed.
Answer: Accelerations being equal doesn't necessarily mean that the velocities are equal, or vice versa. For example, your two cars could have the same acceleration, but if one starts before the other, the one that got going earlier will obviously be moving faster. An even simpler example: if one car is standing still and the other one is moving at constant speed, the acceleration is zero in both cases but the speed is different!
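The examples above can be made concrete with the constant-acceleration relation $v(t) = v_0 + at$: under identical acceleration the velocity gap between the two cars stays fixed at its initial value. A small sketch (the numbers are arbitrary):

```python
def velocity(v0, a, t):
    """Constant-acceleration kinematics: v(t) = v0 + a*t."""
    return v0 + a * t

a = 0.5  # both cars accelerate at 0.5 m/s^2
v_range_rover = [velocity(100.0, a, t) for t in range(6)]
v_bmw = [velocity(50.0, a, t) for t in range(6)]

# Same acceleration at every instant, yet the speeds always differ
gaps = [vr - vb for vr, vb in zip(v_range_rover, v_bmw)]
print(gaps)  # the gap never changes: [50.0, 50.0, 50.0, 50.0, 50.0, 50.0]
```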
What is true, however, is that if the accelerations are the same at all times and at some point the velocities were equal, then they will remain equal forever. This is because the definition of acceleration is precisely that it is the change in velocity per unit time. If the velocities start out the same and change in the same way, they will stay equal to each other. | {
"domain": "physics.stackexchange",
"id": 31049,
"tags": "kinematics, acceleration, velocity"
} |
Can we ignore the scalar field (dilaton) term in the Polyakov sigma-model action when deriving the classical equations of motion? | Question: I have the full Polyakov sigma model action:
\begin{equation}
\begin{split}
S &= S_P + S_B + S_\Phi \\
&= - \frac{1}{4 \pi \alpha'} \int_\Sigma d^2\sigma \Big[ \sqrt{-g}\, g^{ab} \partial_a X^\mu \partial_b X^\nu G_{\mu\nu}(X) \\
&\qquad + \epsilon^{ab} B_{\mu\nu}(X) \partial_a X^\mu \partial_b X^\nu + \alpha' \sqrt{-g}\, \Phi(X) R^{(2)}(\sigma) \Big] \,.
\end{split}
\end{equation}
and I want to derive the classical equations of motion by varying $X \mapsto X + \delta X$. I am confused as to what to do with the last term. It is of higher order in $\alpha'$, so I am thinking it can just be ignored, as its variation will be of higher order. Is this thinking correct?
Does this question even make sense, as I'm trying to derive classical equations from a sigma-model, which as far as I have seen, is used when quantizing the string?
Answer: Strings in strong gravitational fields - Gary T. Horowitz and Alan R. Steif, Phys. Rev. D 42, 1950 confirms the argument in the question:
"For the remainder of this section we
consider a purely classical string. Since the dilaton term
is multiplied by $\alpha'$, it is a quantum correction and does
not directly affect the motion of a classical string." | {
"domain": "physics.stackexchange",
"id": 59032,
"tags": "string-theory, sigma-models, equations-of-motion"
} |
Could anyone say if this is a reliable shuffling code? | Question: I am trying to prove the famous "Monty Hall Paradox":
The Monty Hall problem is a probability puzzle named after Monty Hall, the original host of the TV show Let’s Make a Deal. It’s a famous paradox that has a solution that is so absurd, most people refuse to believe it’s true.
private static void suffle(ulong numberOfTries)
{
int correctHits = 0;
int wrongHits = 0;
var prizeIndex = new Random();
for (ulong i = 1; i <= numberOfTries; i++)
{
int[] doors = { 1, 2, 3 };
var prizeDoor = doors[prizeIndex.Next(0, doors.Length)] ;
var selectedDoor = doors[prizeIndex.Next(0, doors.Length)];
int discardedDoor = doors[prizeIndex.Next(0, doors.Length)];
while (discardedDoor == prizeDoor || discardedDoor == selectedDoor)
{
discardedDoor = doors[prizeIndex.Next(0, doors.Length)];
}
var correctGuess = selectedDoor == prizeDoor;
if (correctGuess) correctHits++;
else wrongHits++;
Console.WriteLine($"Prize Door: {prizeDoor}\nSelected Door: {selectedDoor}\nDiscarded Door: {discardedDoor}\nRight Guess? {correctGuess}\n\n");
}
Console.WriteLine($"Right Guesses: {correctHits} / {numberOfTries}\nWrong Guesses: {wrongHits} / {numberOfTries}");
}
Answer: Your code is pretty neat and readable. The shuffling algorithm looks reliable to me; however, I have a few comments on your C# code.
You pass the argument numberOfTries as a ulong, but correctHits and wrongHits are int. This could lead to wrong output if numberOfTries is large enough for the counters to overflow int.MaxValue.
The doors array is re-initialized in every loop iteration. It could be initialized once in a higher scope, e.g. before the for loop.
I can see the repetitive code doors[prizeIndex.Next(0, doors.Length)]. This piece of code could be refactored into an isolated method. For example:
private static int GetDoorNumber() => _doors[_prizeIndex.Next(0, _doors.Length)];
You can save a couple of lines of code by introducing a ternary operator. Then you do not need the variable correctGuess.
correctHits += (selectedDoor == prizeDoor) ? 1 : 0;
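One point beyond style: the posted loop only ever measures the 1/3 chance that the initial guess is right — it never models the switch that makes the paradox interesting. A sketch of the full experiment (in Python rather than C#, purely for brevity; the names are mine):

```python
import random

def monty_hall(trials, switch, rng):
    """Simulate the Monty Hall game, optionally switching doors."""
    wins = 0
    for _ in range(trials):
        doors = [1, 2, 3]
        prize = rng.choice(doors)
        pick = rng.choice(doors)
        # Host opens a door that is neither the pick nor the prize
        opened = rng.choice([d for d in doors if d != pick and d != prize])
        if switch:
            pick = next(d for d in doors if d != pick and d != opened)
        wins += (pick == prize)
    return wins / trials

rng = random.Random(42)  # seeded for reproducibility
stay = monty_hall(100_000, switch=False, rng=rng)
swap = monty_hall(100_000, switch=True, rng=rng)
print(stay, swap)  # roughly 1/3 vs 2/3
```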
You can calculate wrongHits as the difference between numberOfTries and correctHits. But your solution works too. | {
"domain": "codereview.stackexchange",
"id": 37998,
"tags": "c#, random"
} |
Existence Of Phase Flow Provided Potential Energy is Positive | Question: I am reading through Arnold's "Mathematical Methods Of Classical Mechanics". In the section 4D on p. 21 concerning Phase Flow there is a question that reads as follows:
Show that if the potential energy is positive, then there is a phase flow. Hint: Use the law of conservation of energy to show that a solution can be extended without bound.
I am stumped. Can somebody give me any other hints?
For context, Arnold defines a phase flow as a one-parameter group of diffeomorphisms of the phase plane to itself (the parameter in question being time so that the position of a point $M$ in the phase plane can be traced for all time $t$).
Answer: Hint: Given the energy integral $\frac{1}{2}\dot{x}^2+U(x)=E$, if the potential energy is bounded from below (here $U\ge 0$), the speed is bounded from above, $|\dot{x}|\le\sqrt{2E}$, so $|x(t)-x(0)|\le t\sqrt{2E}$ and a solution cannot reach spatial infinity in finite time, i.e. the flow can in principle be extended to the whole time-axis $\mathbb{R}$, which is one of the conditions of a phase flow listed on p. 20 (and the last line of OP). | {
"domain": "physics.stackexchange",
"id": 93000,
"tags": "classical-mechanics, phase-space, differential-equations"
} |
How to prove: If $\textsf{EXP} \subseteq \textsf{P/poly} $ then $\textsf{EXP} = \Sigma^p_2$ | Question: Following is a theorem from Sanjeev Arora and Boaz Barak I am unable to prove :
If $\textsf{EXP} \subseteq \textsf{P/poly}$ then $\textsf{EXP} = \Sigma^p_2$.
The previous similar theorem was
If $\textsf{NP} \subseteq \textsf{P/poly}$ then $\textsf{PH}=\Sigma_2^p$.
The second theorem was easy to prove since $\textsf{NP}=\Sigma_1^p$, but $\textsf{EXP}$ has no such characterization. How do I prove the first theorem? Any hints?
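For reference, the intended route is the standard tableau argument (Meyer's theorem). The sketch below is my paraphrase and should be checked against the textbook proof:

```latex
% One inclusion is unconditional: \Sigma_2^p \subseteq \mathsf{PSPACE}
% \subseteq \mathsf{EXP}. For the other, let L \in \mathsf{EXP} be decided
% by a TM M in time 2^{n^c}, and define the tableau function
%   T(x, i, j) = \text{the content of cell } (i, j)
%   \text{ of } M\text{'s computation tableau on input } x,
% where i, j are n^c-bit indices. T is itself computable in \mathsf{EXP},
% so under the hypothesis \mathsf{EXP} \subseteq \mathsf{P/poly} it is
% computed by some polynomial-size circuit family. Then
%   x \in L \iff \exists C \;\forall (i, j):\;
%   C(x, i, j) \text{ is consistent with } M\text{'s transition function}
%   \text{ on the neighbouring cells, with the initial configuration,}
%   \text{ and with acceptance in the final cell},
% where C ranges over polynomial-size circuits. The inner check runs in
% polynomial time, so the right-hand side is a \Sigma_2^p predicate.
```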
Answer: The more classical statement is that if $\textsf{EXP} \subseteq \textsf{P/poly}$ then $\textsf{EXP} = \textsf{MA}$, due to Babai, Fortnow and Lund. Impagliazzo, Kabanets and Wigderson showed that $\textsf{NEXP} \subseteq \textsf{P/poly}$ iff $\textsf{NEXP} = \textsf{MA}$. See lecture notes of Bogdanov for proof sketches. | {
"domain": "cs.stackexchange",
"id": 10337,
"tags": "complexity-theory, nondeterminism"
} |
Modeling a pool ball rolling and slipping given initial velocity | Question: I am trying to model a pool game. Currently my physics is faked by giving it an initial velocity and just subtracting from it every second. To make it more realistic I want to use physics to determine how fast it is going every second.
Given an initial velocity vi, the mass of the cue ball, the coefficient of friction between the ball and the surface, and the radius from the center at which the cue has struck, how would I determine the change in velocity?
Below is my understanding of the system. I would like some help modeling the equation as I have limited knowledge of physics past grade 12.
Answer:
Given an initial velocity vi, the mass of the cue ball, the coefficient of friction between the ball and the surface, and the radius from the center at which the cue has struck, how would I determine the change in velocity?
Unfortunately it is not enough to know where the cue has struck the cue ball, as the spin on the cue ball depends on many more factors (for instance, the spin on the cue ball is generated by accelerating through it with the cue). Perhaps you can replace this parameter by the spin on the ball, given as its angular momentum vector.
The way I would then model this problem is the following. Assume that the balls are identified by the following parameters
radius $r$;
mass $m$;
position $\mathbf r$;
velocity vector $\mathbf v$ of the centre of mass;
angular velocity pseudovector $\boldsymbol\omega$.
The vertical component of the spin $\boldsymbol\omega$ is dissipated through friction, and this leads to the introduction of a friction parameter, say $\mu_z$.
As for the motion of the ball one has to distinguish between two fundamental regimes: sliding and rolling. When the ball is sliding the velocity vector changes direction because of the horizontal component of the spin, which together with the grip on the cloth causes the ball to steer left/right and/or accelerate/decelerate. The force is in the direction of the vector $\mathbf k\times\boldsymbol\omega$, where $\mathbf k$ is in the direction of the vertical axis. The magnitude depends on another coefficient, say $\mu_s$, so something of the form
$$\mathbf F_s = \mu_sr m g\widehat{\boldsymbol\omega\times\mathbf k}$$
This force is responsible for changing both the angular velocity $\boldsymbol\omega$ and the velocity of the centre of mass of the ball $\mathbf v$. Furthermore, another force that acts on the centre of mass is dynamical friction, which goes against the direction of the motion.
As soon as the ball starts rolling, it continues in a straight line along the direction of $\mathbf v$ until it comes to a halt. In this case the acting force is just friction; however, due to the different nature of this friction, another coefficient, say $\mu_d$, might be necessary.
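The sliding-to-rolling transition described above can be integrated numerically. The sketch below is a one-dimensional special case with no sidespin, and all numerical parameter values are my own illustrative assumptions, not values from this answer:

```python
# 1-D sketch of the sliding -> rolling transition for a struck ball.
# Illustrative, assumed values; only mu_s matters for the 5/7 result below.
g = 9.81              # gravity, m/s^2
r = 0.028575          # ball radius, m
mu_s = 0.2            # sliding-friction coefficient
I_FACTOR = 2.0 / 5.0  # uniform sphere: I = (2/5) m r^2

def slide_until_rolling(v0, w0, dt=1e-5):
    """Integrate the sliding phase; return the speed at which rolling starts.

    v0: initial centre-of-mass speed (m/s); w0: initial topspin (rad/s).
    """
    v, w = v0, w0
    while True:
        slip = v - r * w                    # contact-point slip velocity
        if abs(slip) < 1e-4:                # rolling condition v = r*w reached
            return v
        a = -mu_s * g * (1.0 if slip > 0 else -1.0)  # friction on the c.o.m.
        alpha = -a / (I_FACTOR * r)         # the same force torques the ball
        v += a * dt
        w += alpha * dt

# A centre-ball hit (no initial spin) leaves the ball sliding; friction
# converts slip into roll, and the classic result is that natural roll
# starts at 5/7 of the initial speed.
v_roll = slide_until_rolling(1.0, 0.0)
print(round(v_roll, 3))  # 0.714
```

The same loop extends to the full 2-D problem by making `v` and the spin vectors and applying the $\mathbf k\times\boldsymbol\omega$ direction from the answer.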
Hope these few ideas helped. I also recommend reading through this paper for further ideas. | {
"domain": "physics.stackexchange",
"id": 26567,
"tags": "newtonian-mechanics"
} |
Equivalence principle near a black hole | Question: At every spacetime point, there is a locally inertial frame in which the effect of gravitation is absent. Can this point be taken near the center of a black hole?
Answer: The locally inertial coordinates are the normal coordinates and these can be defined anywhere in the manifold including inside an event horizon arbitrarily close to the singularity. The singularity is excluded from the manifold so no coordinate system can be defined there.
A simple intuitive choice of normal coordinates are the coordinates of a freely falling observer. These are the Fermi normal coordinates. These coordinates can be defined at all points along the trajectory of an observer falling into the black hole up to but not including the singularity. | {
"domain": "physics.stackexchange",
"id": 97833,
"tags": "general-relativity, black-holes, coordinate-systems, singularities, equivalence-principle"
} |
Warning with hector_exploration_node (problem with the tf tree) | Question:
Hello,
I recently started using the nodes of the hector algorithms, and today I tried to use the hector_exploration_node on my 3-wheel differentially driven robot on which I have put a Hokuyo lidar. I am generally new to ROS, so some follow-up questions may occur.
After roslaunching my model in gazebo and the state_publisher, I did the following:
roslaunch hector_exploration_node exploration_planner.launch
After the command, I got the following warnings:
[ WARN] [1471047689.018501831, 134.995000000]: Timed out waiting for transform from base_link to map to become available before running costmap, tf error: . canTransform returned after 134.995 timeout was 0.1.
[ WARN] [1471047695.188449599, 140.033000000]: Timed out waiting for transform from base_link to map to become available before running costmap, tf error: Could not find a connection between 'map' and 'base_link' because they are not part of the same tree.Tf has two or more unconnected trees.. canTransform returned after 0.102 timeout was 0.1.
[ WARN] [1471047701.360674352, 145.134000000]: Timed out waiting for transform from base_link to map to become available before running costmap, tf error: Could not find a connection between 'map' and 'base_link' because they are not part of the same tree.Tf has two or more unconnected trees.. canTransform returned after 0.105 timeout was 0.1.
As the warning suggested I checked my tf tree by running:
rosrun tf view_frames
The result of the command was this:
As you can see, I have 3 unconnected tf trees, and the weird part is that I have 2 different base_footprints (one as /labrob/base_footprint and one as base_footprint), which I don't understand, since in my urdf code I just mention the frames as 'base_footprint', 'base_link', 'r_wheel', 'l_wheel' etc. Labrob is the name of my robot. I also don't want to forget to mention that I am using the following state_publisher, which I roslaunch after I roslaunch the robot model in gazebo:
<launch>
<param name="robot_description" command="cat $(find labrob_description)/urdf/labrob.urdf" />
<node name="robot_state_publisher" pkg="robot_state_publisher" type="state_publisher" />
</launch>
The warning also mentions that there is no connection between map and base_link, which is correct according to my tree.
To sum up, I have the following questions regarding all the above:
What does a tf connection between map and base_link practically mean? (I mean the robot moves on the map, if that's the practical connection between the two.)
What tutorial should I follow to establish a connection between these 2 frames (map and base_link)?
Is there something wrong in the launch file of my state_publisher?
Could you please explain to me why I see two different base_footprints?
Thank you for your time and for your answers in advance,
Chris
EDITED:
About my 4th question: I changed some things in the urdf code of the robot and saw that the issue arises because I have 2 different broadcasters (gazebo and the state_publisher).
To be more specific, at the end of my urdf file I use the following plugin:
<gazebo>
<plugin name="differential_drive_controller" filename="libgazebo_ros_diff_drive.so">
<rosDebugLevel>Debug</rosDebugLevel>
<publishWheelTF>True</publishWheelTF>
<publishTF>1</publishTF>
<publishWheelJointState>true</publishWheelJointState>
<alwaysOn>true</alwaysOn>
<updateRate>100.0</updateRate>
<leftJoint>joint_l_wheel</leftJoint>
<rightJoint>joint_r_wheel</rightJoint>
<wheelSeparation>0.22</wheelSeparation>
<wheelDiameter>0.16</wheelDiameter>
<broadcastTF>1</broadcastTF>
<wheelTorque>30</wheelTorque>
<commandTopic>/labrob/cmd_vel</commandTopic>
<odometryFrame>odom</odometryFrame>
<odometryTopic>odom</odometryTopic>
<robotBaseFrame>base_footprint</robotBaseFrame>
<legacyMode>true</legacyMode>
<robotNamespace>labrob</robotNamespace>
</plugin>
</gazebo>
which causes all the problems and the first unconnected part of the tree. To remove the issue of the unconnected tree, I removed the tag
<robotNamespace>labrob</robotNamespace>
and I got this tree:
I will rephrase my 4th question:
How can I stop base_footprint from being the parent of l_wheel and r_wheel?
EDITED (2nd time) :
I will rephrase my 4 questions as a summary, with some issues I think I solved on my own:
What does a tf connection between map and base_link practically mean? (I mean the robot moves on the map, if that's the practical connection between the two.)
I need to broadcast this tf, am I correct? If yes, I know about the tf tutorial setup.
Is there something wrong in the launch file of my state_publisher? Just to check, because it is the first time I am trying the state_publisher.
Is the fact that base_footprint is the parent of l_wheel and r_wheel normal? From the ros_diff_drive plugin this seems normal. This part of the tf tree doesn't interfere with using the hector_exploration_node, I believe.
Originally posted by patrchri on ROS Answers with karma: 354 on 2016-08-12
Post score: 0
Answer:
After testing various things I managed to solve all of my problems on my own, so I will answer my questions in case someone meets the same issues as me:
The connection between the map frame and base_link, as I understand it, is the pose of the base_link frame within the map. So you have to find a topic like /slam_out_pose, for example, to get that position of the robot on the map (pose and orientation) and broadcast it. The tf tutorials are really helpful with that, but you also need to have understood what actually happens in the code of the tutorials. It's important to note that this connection between these two frames is not based on odometry, but on the high frequency of the lidar samples.
This tutorial helps a lot. Also check this link regarding Quaternion and this link regarding Transforms, they help a lot on writing our own code.
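An untested sketch of such a broadcaster, in the spirit of the linked tf tutorial; the topic name /slam_out_pose, the message type, and the frame ids are assumptions matching this question, so adjust them to your setup (this needs a ROS environment and is not runnable standalone):

```python
#!/usr/bin/env python
# Untested sketch: rebroadcast the SLAM pose as the map -> base_link tf.
import rospy
import tf
from geometry_msgs.msg import PoseStamped

def handle_pose(msg, br):
    p = msg.pose.position
    q = msg.pose.orientation
    br.sendTransform((p.x, p.y, p.z),
                     (q.x, q.y, q.z, q.w),
                     msg.header.stamp,
                     "base_link",   # child frame
                     "map")         # parent frame

if __name__ == "__main__":
    rospy.init_node("map_to_base_link_broadcaster")
    broadcaster = tf.TransformBroadcaster()
    rospy.Subscriber("/slam_out_pose", PoseStamped,
                     handle_pose, callback_args=broadcaster)
    rospy.spin()
```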
The state publisher I have posted works well. No problems have occurred till now, so I think it's correct.
This tree linkage doesn't prevent hector's packages from doing their job. I solved the issue temporarily with the broadcaster I wrote. I don't know how it later got solved on its own, but the job was done.
Chris
Originally posted by patrchri with karma: 354 on 2016-08-13
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 25513,
"tags": "ros, urdf, tf-tree"
} |
Probability Flux reduced to imaginary part? | Question: I'm looking at Sakurai, page 400.
The probability flux $j$ can be reduced to the imaginary part of its first term. Can somebody explain this?
j(x,t) $=-\frac{i\hbar}{2m}[\psi^*\nabla\psi-(\nabla\psi^*)\psi]$
$=\frac{\hbar}{m}Im(\psi^*\nabla\psi)$
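Before deriving it, one can sanity-check the claimed identity numerically at a single point. Here `psi` and `dpsi` are arbitrary complex stand-ins for $\psi$ and $\nabla\psi$, with the common prefactor $\hbar/m$ divided out:

```python
import random

# Check: (-i/2) * (conj(psi)*dpsi - conj(dpsi)*psi) == Im(conj(psi)*dpsi)
# for arbitrary complex psi, dpsi; the hbar/m prefactor is divided out.
random.seed(0)
for _ in range(1000):
    psi = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    dpsi = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    lhs = (-1j / 2) * (psi.conjugate() * dpsi - dpsi.conjugate() * psi)
    rhs = (psi.conjugate() * dpsi).imag
    # lhs comes out purely real and equal to rhs
    assert abs(lhs - rhs) < 1e-12
print("identity holds")
```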
Answer: Hint: A general complex number $z$ can be written as
\begin{align}
z = a + i b
\end{align}
where $a$ and $b$ are real. Now consider the complex conjugate of $z$, i.e. $\overline{z}$:
\begin{align}
\overline{z} = a - ib
\end{align}
As a result, adding and subtracting the above two equations gives the following relations.
\begin{align}
z + \overline{z} \; = \; 2a &\implies \mathrm{Re}(z) \equiv a = \frac{z + \overline{z}}{2} \\
z -\overline{z} \; = \; 2ib &\implies \mathrm{Im}(z) \equiv b = \frac{z - \overline{z}}{2i}
\end{align} | {
"domain": "physics.stackexchange",
"id": 52629,
"tags": "quantum-mechanics, scattering, probability"
} |
Three versions of a counting-up timer in JavaScript | Question: I made 3 versions of a counting-up timer in JavaScript:
Version #1
All units of time (seconds, minutes, hours, and days) are updated every second, whether they change or not.
Pro: Shorter code.
Con: Worse performance than version #2.
const timeIntervals = [
["day", 86400],
["hour", 3600],
["minute", 60],
["second", 1]
];
var counter = 1;
var tick = () => {
var gucci = counter;
for (unit of timeIntervals) {
$("#" + unit[0]).html(Math.floor(gucci / unit[1]));
gucci %= unit[1];
}
counter++;
};
var timer = window.setInterval(tick, 1000);
Version #2
Each unit of time (seconds, minutes, hours, and days) is updated only at fixed intervals when they change.
Pro: Better performance than version #1.
Con: Longer code.
const timeIntervals = [
["day", 86400],
["hour", 3600],
["minute", 60],
["second", 1]
];
var counter = 1;
var tick = () => {
let zero = false;
for (unit of timeIntervals) {
let element = $("#" + unit[0]);
if (zero) {
element.html("0");
}
else if (counter % unit[1] === 0) {
element.html(
parseInt(element.html()) + 1
);
zero = true;
}
}
counter++;
};
var timer = window.setInterval(tick, 1000);
Version #3
Each unit of time (seconds, minutes, hours, and days) is incremented by 1 when the counter before it reaches its maximum limit.
Pro: Doesn't require a counter variable.
Con #1: It will break if the user clicks "Inspect Element" and messes with the HTML.
Con #2: Long code.
const timeIntervals = [
["second", 60],
["minute", 60],
["hour", 24],
["day", Infinity]
];
var tick = () => {
$("#second").html(
parseInt($("#second").html()) + 1
);
for (let i = 0; i < timeIntervals.length; i++) {
let currentElement = $("#" + timeIntervals[i][0]);
if (parseInt(currentElement.html()) >= timeIntervals[i][1]) {
currentElement.html("0");
let nextElement = $("#" + timeIntervals[i + 1][0]);
nextElement.html(parseInt(nextElement.html()) + 1);
}
}
};
var timer = window.setInterval(tick, 1000);
Which of these 3 codes has the best performance, readability, and structure? Do any of them have security vulnerabilities?
Answer: Performance is not a significant concern, since all three callbacks have very little code, and they execute only once per second. Rather, you should aim for clarity. In my opinion, Version 1 is simplest and easiest to follow.
All three techniques suffer from the same weakness with window.setInterval(…, delay):
delay
The time, in milliseconds (thousandths of a second), the timer should delay in between executions of the specified function or code. If this parameter is less than 10, a value of 10 is used. Note that the actual delay may be longer; see Reasons for delays longer than specified in WindowOrWorkerGlobalScope.setTimeout() for examples.
In particular, the interval may be throttled to 10 seconds for long-running scripts in background tabs, or the callback may be delayed if the JavaScript engine is busy executing other tasks. Furthermore, the whole machine might go to sleep.
Rather, you should check the time difference with every tick. (As a side benefit, this should help address your concern about casual Web Inspector tampering.)
Use destructuring assignment to write more meaningful names than unit[0] and unit[1]. Note that in for (unit of timeIntervals), you neglected to localize unit in any way, so it's global.
The tick function should probably be declared as const rather than var. I'd also prefer to use let rather than var, as better software engineering practice.
Also, as better practice, use jquery.text() rather than .html(), if you know that the content is text without HTML markup.
const timeIntervals = [
["day", 86400000], // milliseconds
["hour", 3600000],
["minute", 60000],
["second", 1000]
];
const tick = (start) => () => {
let elapsed = Date.now() - start;
for (let [unit, ms] of timeIntervals) {
$("#" + unit).text(Math.floor(elapsed / ms));
elapsed %= ms;
}
};
let timer = window.setInterval(tick(Date.now()), 1000);
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.0/jquery.min.js"></script>
<p>
<span id="day">0</span> days,
<span id="hour">0</span> hours,
<span id="minute">0</span> minutes,
<span id="second">0</span> seconds
</p> | {
"domain": "codereview.stackexchange",
"id": 31039,
"tags": "javascript, jquery, comparative-review, timer"
} |
Would it be possible to connect a HiTechnic prototype board to an Arduino? | Question: Does anyone know if this is possible? It's just an i2c device, right? I mean, you would have to cut the cable and make it so you could plug into the pins on the Arduino, but you should just be able to use the Wire library and say something like:
Wire.beginTransmission(0x10);
The NXT Hardware Developer Kit tells you which pins are which: http://mindstorms.lego.com/en-us/support/files/default.aspx
Thanks
EDIT: Turns out this is very possible. The main problem was that HiTechnic says the address is 0x10 when it is actually 0x08, but here is a short sketch that reads and prints some info about the device, i.e. the manufacturer and version.
#include <Wire.h>
#define ADDRESS 0x08
void setup()
{
Wire.begin();
Serial.begin(9600);
}
void loop()
{
readCharData(0, 7);
Serial.println();
readCharData(8, 8);
Serial.println();
readCharData(16, 8);
Serial.println();
Serial.println("-----------------------------");
delay(1000);
}
void readCharData(int startAddress, int bytesToRead)
{
Wire.beginTransmission(ADDRESS);
Wire.write(startAddress);
Wire.endTransmission();
Wire.requestFrom(ADDRESS, bytesToRead);
while(Wire.available())
{
char c = Wire.read();
Serial.print(c);
}
}
Answer: Per the schematics of the Ultrasonic Sensor, P1.3/SCL is DIGIAI0 (J1.5) and P3.0/SDA is DIGIAI1 (J1.6). The Developer Kit Manual states it is I2C per the original Philips standard, detailing all the memory addresses of the ESC015 chip along with all the recommended interfacing circuitry. The only note I see is that they state the I2C SCL is 9600, which is kind of slow, but all very doable for an Arduino. Check out http://www.openelectrons.com/index.php?module=pagemaster&PAGE_user_op=view_page&PAGE_id=7 as they have a shield to directly connect and libraries for the Arduino.
"domain": "robotics.stackexchange",
"id": 105,
"tags": "arduino"
} |
CV written in LaTeX | Question: I've created a CV and am eager for corrections. Once it is properly written, it'll be converted into a template in order to share it with the community.
Can you suggest some starting corrections or ideas? (I'm willing to implement them on my own, if I can.) A brief, compilable version is pasted below:
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage[margin=2.3cm,top=1.6cm, textheight=1050pt]{geometry}
%sets margin of text for the whole document
\usepackage{titlesec}
\usepackage{url}
\usepackage{hyperref}
\usepackage{bold-extra}
\usepackage[dvipsnames]{xcolor}
\pagenumbering{gobble}
\usepackage{microtype}
\usepackage{blindtext}
\titleformat{\section}[block] % Customise the \section command
{\normalfont\LARGE\bfseries\filcenter} % Make the \section headers large (\Large),
% small capitals (\scshape) and left aligned (\raggedright)
{}{0em} % A label like 'Section ...'. Here we omit labels
{} % Can be used to insert code before the heading
\titlespacing*{\section}{1em}{1em}{1em}[0em]
\titleformat{\subsection}{\color{Mahogany}\Large\raggedright\scshape\bfseries}{}{0em}{}
[\vspace{0.3ex}\titlerule]
\titlespacing*{\subsection}{0pt}{0.75em}{0.75em}
%load icons package for mail, phone and address.
\title{\textbf{Mr. Nobody\\[1em] \footnotesize DoB January 23, 1995 \hspace{1.3em} NBL City, NBL. \hspace{1.3em} 666666 \hspace{1.3em}
\url{mrnobody@gmail.com}\vspace{-1cm}}}
\date{ }
\author{}
\begin{document}
\maketitle
\subsection*{Education}
\paragraph{National University of X, Y.} MSc. in X Y (2012--2017). Thesis: \textit{Thermodynamics of
chicken in Aqueous Phase Using Computational Tools, 2017.}
\paragraph{High School, Dr XYZW.} Graduation in Natural Sciences (2011).
\subsection*{Work Experience}
\paragraph{Institute for XY Z (2018--2019).}
Introduction to chicken Teacher, NBL, NBL.
\subsection*{Projects}
\paragraph{Hazards and Safety in the corridor (2018--2019), Coordinator.}
Oriented to students of the Institute of NBL. The purpose was to
do Xperiments, reflect upon hazard and safety in the corridor (H \& S rules) and discuss results.
\paragraph{webdeveloper (2018--On Going), x.}
equis and the general equisequis site is Directed by equisen.
The aim is producing an Orange External Resource.
\paragraph{StackExchange Member (2017--On Going).}
Q \& A sites. {\href{https://stackexchange.com/users/6538373/santimirandarp}{Link to profile-overview}}
\subsection{Language}
\begin{tabular}{lr}
\textbullet\textbullet\textbullet\textbullet & \hspace{1em} A, B.\\
\textbullet\textbullet & \hspace{1em} C, D.
\end{tabular}
\subsection{Few Skills}
\begin{tabular}{lr}
\textbullet\textbullet\textbullet & A: with B, C, and D, and pandas.\\
\textbullet\textbullet & Linux Shell\slash . \\
%\item[\textbullet\textbullet\textbullet] \hspace{1em} Chemistry Laboratory Tasks.
%\paragraph{\textbullet\textbullet\textbullet} \hspace{1em} Excel, OfficeCalc.
\textbullet\textbullet\textbullet & HTML, CSS.
\end{tabular}
\subsection{Hobbies}
%\raggedright
Literature and Philosophy favorite authors: B. Russell, A. Huxley, W. Whitman, J.L. Borges.
Poetry Channel at \href{emptylink}{YouTube}.
Music Post-Rock. Play little violin and guitar.\\[1em]
I've written a gutenmorgen with a short biography of Marvin Schr\"{o}dinger available
\href{emptylink2}{here}.
\subsection{Summary}
\blindtext{3}
\end{document}
The output so far looks like this:
Answer: Some minor comments:
hyperref should be loaded after the other packages (there are only a few exceptions, see https://tex.stackexchange.com/q/1863/36296)
loading url is not really necessary because you already load hyperref
you specify too many values for the geometry package. The paperheight is implicitly given by the default value of your TeX distribution, the bottom margin is set to 2.3cm, the top margin to 1.6cm, and the textheight to 1050pt; this is one value too many because there is no free length left to adjust. I would suggest either removing the explicit declaration of the textheight or changing margin=2.3cm to hmargin=2.3cm to make sure that there is at least one free length
\begin{tabular}{@{}lr@{}} will ensure that the bullets are nicely aligned with the left border of the surrounding text
using formatting instructions like \hspace{} in the argument of \title{} is hacky and can cause problems with the pdf meta data. As a quick fix you can provide an alternative string to be used in the pdf meta data with \texorpdfstring{tex code here}{pdf meta data here}. The clean way would be to redefine \maketitle and include all the formatting instructions there
If you make your work available to the community, please consider adding version information and a suitable license. For example the LPPL (Latex Project Public License) encourages the users to rename a source file before editing it. Not having multiple different versions with the same name floating around the internet will make it much easier to help users on platforms like tex.se.
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage[margin=2.3cm,top=1.6cm]{geometry}
%sets margin of text for the whole document
\usepackage{titlesec}
% \usepackage{url}
\usepackage{bold-extra}
\usepackage[dvipsnames]{xcolor}
\pagenumbering{gobble}
\usepackage{microtype}
\usepackage{blindtext}
\usepackage{hyperref}
\titleformat{\section}[block] % Customise the \section command
{\normalfont\LARGE\bfseries\filcenter} % Make the \section headers large (\Large),
% small capitals (\scshape) and left aligned (\raggedright)
{}{0em} % A label like 'Section ...'. Here we omit labels
{} % Can be used to insert code before the heading
\titlespacing*{\section}{1em}{1em}{1em}[0em]
\titleformat{\subsection}{\color{Mahogany}\Large\raggedright\scshape\bfseries}{}{0em}{}
[\vspace{0.3ex}\titlerule]
\titlespacing*{\subsection}{0pt}{0.75em}{0.75em}
%load icons package for mail, phone and address.
\title{\texorpdfstring{\textbf{Mr. Nobody\\[1em] \footnotesize DoB January 23, 1995 \hspace{1.3em} NBL City, NBL. \hspace{1.3em} 666666 \hspace{1.3em}
\url{mrnobody@gmail.com}\vspace{-1cm}}}{Mr. Nobody}}
\date{ }
\author{}
\begin{document}
\maketitle
\subsection*{Education}
\paragraph{National University of X, Y.} MSc. in X Y (2012--2017). Thesis: \textit{Thermodynamics of
chicken in Aqueous Phase Using Computational Tools, 2017.}
\paragraph{High School, Dr XYZW.} Graduation in Natural Sciences (2011).
\subsection*{Work Experience}
\paragraph{Institute for XY Z (2018--2019).}
Introduction to chicken Teacher, NBL, NBL.
\subsection*{Projects}
\paragraph{Hazards and Safety in the corridor (2018--2019), Coordinator.}
Oriented to students of the Institute of NBL. The purpose was to
do Xperiments, reflect upon hazard and safety in the corridor (H \& S rules) and discuss results.
\paragraph{webdeveloper (2018--On Going), x.}
equis and the general equisequis site is Directed by equisen.
The aim is producing an Orange External Resource.
\paragraph{StackExchange Member (2017--On Going).}
Q \& A sites. {\href{https://stackexchange.com/users/6538373/santimirandarp}{Link to profile-overview}}
\subsection{Language}
\begin{tabular}{@{}lr@{}}
\textbullet\textbullet\textbullet\textbullet & \hspace{1em} A, B.\\
\textbullet\textbullet & \hspace{1em} C, D.
\end{tabular}
\subsection{Few Skills}
\begin{tabular}{@{}lr@{}}
\textbullet\textbullet\textbullet & A: with B, C, and D, and pandas.\\
\textbullet\textbullet & Linux Shell\slash . \\
%\item[\textbullet\textbullet\textbullet] \hspace{1em} Chemistry Laboratory Tasks.
%\paragraph{\textbullet\textbullet\textbullet} \hspace{1em} Excel, OfficeCalc.
\textbullet\textbullet\textbullet & HTML, CSS.
\end{tabular}
\subsection{Hobbies}
%\raggedright
Literature and Philosophy favorite authors: B. Russell, A. Huxley, W. Whitman, J.L. Borges.
Poetry Channel at \href{emptylink}{YouTube}.
Music Post-Rock. Play little violin and guitar.\\[1em]
I've written a gutenmorgen with a short biography of Marvin Schr\"{o}dinger available
\href{emptylink2}{here}.
\subsection{Summary}
\blindtext{3}
\end{document} | {
"domain": "codereview.stackexchange",
"id": 34960,
"tags": "data-visualization, tex"
} |
Simplified partial trace of two operators | Question: If I have two operators A and B living in the Composite Hilbert Space $H_I \bigotimes H_{II} $ and I want to take the partial trace of $C=AB$ over the subspace $H_I$, i.e., $Tr_I[AB]$, is there any identity that can help me do this in terms of $Tr_I[A]$ and $Tr_I[B]$. Actually what I am interested in is the partial trace of the commutator $[A,B]$.
Answer: Let $H$ and $K$ be Hilbert spaces with bases $|e_a\rangle$ and $|f_i \rangle$, respectively.
Let $A,B: H \otimes K \to H \otimes K$ be two operators, and let $C=A\circ B$ be their composition. This means that they are of the form
$$ A ~=~|e_a\rangle \otimes |f_i \rangle ~ A^{ai}{}_{bj}~ \langle e^b| \otimes \langle f^j |, $$
$$B ~=~|e_b\rangle \otimes |f_j \rangle ~ B^{bj}{}_{ck}~ \langle e^c| \otimes \langle f^k |, $$
$$ C ~=~|e_a\rangle \otimes |f_i \rangle ~ A^{ai}{}_{bj}~ B^{bj}{}_{ck}~ \langle e^c| \otimes \langle f^k |, $$
where there are implicitly summed over repeated indices. The partial traces over $H$ are
$$ Tr_{H}A~=~ |f_i \rangle ~ A^{ai}{}_{aj}~ \langle f^j|, $$
$$Tr_{H}B ~=~ |f_j \rangle ~ B^{bj}{}_{bk}~ \langle f^k| , $$
$$ Tr_{H}C ~=~ |f_i \rangle ~ A^{ai}{}_{bj}~ B^{bj}{}_{ak}~ \langle f^k|. $$
$Tr_{H}C$ in general contains off-diagonal information that is not included in $Tr_{H}A$ and $Tr_{H}B$, so $Tr_{H}C$ can in general not be written as a function of $Tr_{H}A$ and $Tr_{H}B$ only.
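A small numerical illustration of this point; the dimensions and the choice of operators are my own arbitrary picks ($\sigma_x$ is used only because it is traceless):

```python
import numpy as np

dH, dK = 2, 3  # arbitrary dimensions for H and K

def partial_trace_H(M):
    """Partial trace over the first (H) factor of an operator on H (x) K."""
    return np.trace(M.reshape(dH, dK, dH, dK), axis1=0, axis2=2)

sx = np.array([[0.0, 1.0], [1.0, 0.0]])  # traceless Pauli sigma_x on H
A = B = np.kron(sx, np.eye(dK))          # A = B = sigma_x (x) 1_K
A2 = B2 = np.zeros((dH * dK, dH * dK))   # A2 = B2 = 0

# Both pairs have identical (vanishing) partial traces of the factors...
assert np.allclose(partial_trace_H(A), 0)
assert np.allclose(partial_trace_H(A2), 0)
# ...yet different partial traces of the product: Tr_H(1 (x) 1) = dH * 1_K.
assert np.allclose(partial_trace_H(A @ B), dH * np.eye(dK))
assert np.allclose(partial_trace_H(A2 @ B2), 0)
print("Tr_H(AB) is not determined by Tr_H(A) and Tr_H(B)")
```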
Similar reasoning applies to the commutator $A\circ B-B\circ A$. | {
"domain": "physics.stackexchange",
"id": 1882,
"tags": "quantum-mechanics, mathematical-physics, quantum-information"
} |
What force prevents particles from penetrating other particles? | Question: I understand that what prevents objects from penetrating each other is the electromagnetic force between the electrons in the respective objects.
But what if we don't have electrons? For example, a proton: what force prevents an electron or a muon from going through it?
Answer:
What force prevents particles from penetrating other particles
This is the basic question that gave rise to the need for a mechanics different from Newtonian mechanics. In different words: why does the electron not fall into the proton (or, for higher-Z atoms, the nucleus) so that the charge disappears? It led to the development of quantum mechanics.
Quantum mechanics is a probabilistic theory which gives probabilities for a process to be seen, dependent on the energies in the system. For example, to the question "what is the probability for the electron of the hydrogen atom to fall on the proton", the mathematics of QM answers "zero". There is a lowest-energy state, the ground-state orbital, where the electron will stay forever unless some energy enters the system.
Forces enter quantum mechanical equations as potentials (electromagnetic, strong, weak), and the solutions depend on these potentials. For example, for the orbitals of the hydrogen atom, the Schrodinger equation is solved with the 1/r potential of the classical electric field, and it gives the solutions which yield the probability of finding the electron at a specific (x,y,z) around the proton, the orbitals in the link above.
The answer to "what is the probability for an electron of high energy, higher than orbital energies where it can be captured, to penetrate a proton" is "it depends on the energy" It is called proton electron scattering and QM gives us the probabilities for its interaction. With high enough energy the quarks were seen in the proton.
"Penetrate" also has a different meaning than in classical physics. The electron of the hydrogen atom, if its orbital has no angular momentum, has a probability of overlapping the proton, but nothing happens because of quantum mechanical stability. The same is true of heavy nuclei, but some of these are unstable, i.e. lower energy levels exist into which the bag of neutrons and protons can settle; in that case electron capture exists, a probability of capturing the electron, turning a nuclear proton into a neutron, and transmuting to a nucleus with the charge number Z lower by 1.
So it is a combination of energy and quantum mechanics which controls penetration, and forces act within this system (which also includes conservation of various quantum numbers, giving zero probabilities for some guesses).
"domain": "physics.stackexchange",
"id": 29979,
"tags": "quantum-field-theory, forces, quantum-electrodynamics"
} |
Simple, generic background thread worker class | Question: While I'm not aiming for a drop-in replacement for AsyncTask, I would like a utility class that accomplishes some of the same goals.
Considering the criticisms of AsyncTask, in many cases I just deferred responsibility to the user - if the work you're pushing to a background Thread needs to be aware of your Activity lifecycle, save a reference to the AsynchronousOperation and explicitly cancel it in onPause, and make sure you're checking for - and reacting to - cancellation in the performWorkInBackgroundThread method. If you want to send results back to the main thread, use the provided runOnUiThread method.
I believe I've used volatile and synchronized blocks correctly here, but would be happy to hear any feedback, even if it's just to point out that I've done it badly.
Other than that - does this feel useful? Safe? Any obvious places it could be improved?
import android.os.Handler;
import android.os.Looper;
import android.os.Process;
import java.util.concurrent.BlockingDeque;
import java.util.concurrent.LinkedBlockingDeque;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
/**
* Usage:
* new AsynchronousOperation(){
* @Override
* public void performWorkInBackgroundThread(){
* // do stuff...
* // if you can break things into small steps, do so -
* // e.g., maybe download by looping through chunks instead of a library method
* // at every opportunity, check #shouldQuit, if true, bail out
* // (or use the convenience #quitIfCancelledOrInterrupted)
* // for any operation you want to publish, use runOnUiThread and a Runnable
* // this would be for things like AsyncTask.onProgressUpdate or AsyncTask.onPostExecute
* // remember to keep references to cancel if the operation depends on the lifecycle of a View or Activity
* }
* }.start();
*/
public abstract class AsynchronousOperation implements Runnable {
protected static final int INITIAL_POOL_SIZE = 1;
protected static final int MAXIMUM_POOL_SIZE = Runtime.getRuntime().availableProcessors();
protected static final int KEEP_ALIVE_TIME = 2;
protected static final TimeUnit KEEP_ALIVE_TIME_UNIT = TimeUnit.SECONDS;
protected static final BlockingDeque<Runnable> BLOCKING_DEQUE = new LinkedBlockingDeque<>();
protected static ThreadPoolExecutor sThreadPoolExecutor;
protected static Handler sHandler;
protected volatile Thread mThread;
private volatile boolean mCancelled;
/**
* Lazily instantiate the ThreadPoolExecutor, constructed with default values. If customization of these values is
* required, override this getter method in the implementation subclass.
*
* This needs to be synchronized in case an AsynchronousOperation instance is started from within another thread -
 * we only ever want a single instance of this class, and it should only be accessible to AsynchronousOperation
* instances.
*
* // TODO: this can probably be volatile, rather than synchronized
*
* @return A ThreadPoolExecutor instance used by all AsynchronousOperation instances.
*/
protected ThreadPoolExecutor getThreadPoolExecutor() {
synchronized(AsynchronousOperation.class) {
if(sThreadPoolExecutor == null) {
sThreadPoolExecutor = new ThreadPoolExecutor(
INITIAL_POOL_SIZE,
MAXIMUM_POOL_SIZE,
KEEP_ALIVE_TIME,
KEEP_ALIVE_TIME_UNIT,
BLOCKING_DEQUE
);
}
return sThreadPoolExecutor;
}
}
/**
* Lazily instantiate a new Handler on the main thread. This Handler instance is common to and shared between
* all AsynchronousOperation instances, and is only accessible to those instances.
*
* Synchronize it in case someone subclasses and calls getHandler from outside of #run.
*
* @return A Handler instance used by all AsynchronousOperation instances to communicate with the main thread.
*/
protected Handler getHandler() {
synchronized(AsynchronousOperation.class) {
if(sHandler == null) {
sHandler = new Handler(Looper.getMainLooper());
}
return sHandler;
}
}
/**
* This will usually be the Thread provided by the ThreadPoolExecutor when submitted to it, but since #run
* is a public method, it might be the main thread (or any thread) if used inappropriately. Assuming this does
 * not happen, you can rely on this referencing the background Thread provided to it.
*
* @return The Thread that owned this instance the moment #run was invoked.
*/
public Thread getThread() {
return mThread;
}
/**
* Cancels an operation.
*
* This is neither synchronized nor an AtomicBoolean because the boolean primitive for the cancelled flag is
* volatile and only ever set to true (never set back to false), which should be thread-safe here.
*
* Cancellation by itself will attempt to interrupt the background thread this worker is on, but by itself will
* not interrupt any work being performed - the user should test for cancellation frequently within the
* #performWorkInBackgroundThread method.
*
* @param mayInterrupt True if cancelling this operation should also interrupt its owner Thread.
* @return True if the operation was cancelled (and had not previously been cancelled).
*/
public boolean cancel(boolean mayInterrupt) {
if(mayInterrupt && mThread != null) {
mThread.interrupt();
}
boolean alreadyCancelled = mCancelled;
mCancelled = true;
return !alreadyCancelled;
}
/**
* @return True if this AsynchronousOperation has been explicitly cancelled.
*/
public boolean isCancelled() {
return mCancelled;
}
/**
* @return True if this AsynchronousOperation instance's owner thread has been interrupted.
*/
public boolean isInterrupted() {
return mThread != null && mThread.isInterrupted();
}
/**
* @return True if this AsynchronousOperation has been explicitly cancelled or its owner thread has been interrupted.
*/
public boolean isCancelledOrInterrupted(){
return isCancelled() || isInterrupted();
}
/**
* Tests for explicit cancellation or thread interruption - if either are true, it cancels and offers another
* opportunity to interrupt the owner thread.
*
* @param mayInterrupt True if cancelling this operation should also interrupt its owner Thread.
* @return True if this AsynchronousOperation has been explicitly cancelled or its owner thread has been interrupted.
*/
public boolean quitIfCancelledOrInterrupted(boolean mayInterrupt){
boolean shouldQuit = isCancelledOrInterrupted();
if(shouldQuit){
cancel(mayInterrupt);
}
return shouldQuit;
}
/**
* Executes a Runnable instance's #run method on the main thread.
* @param runnable The Runnable instance whose #run method should be invoked on the main thread.
*/
public void runOnUiThread(Runnable runnable) {
getHandler().post(runnable);
}
/**
* Creates a reference to the current thread, sets that thread's priority, and initiates the
* #performWorkInBackgroundThread method.
*
* Unlike most Runnable implementations, this method should not be commonly overridden. It is not
 * marked as final in case a subclass wants to hook into this process, but in almost all cases the
* subclass should do its work in #performWorkInBackgroundThread rather than #run.
*/
public void run() {
mThread = Thread.currentThread();
Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
performWorkInBackgroundThread();
}
/**
* Passes this instance to the common ThreadPoolExecutor, which will provide a worker thread and call this
* instance's #run method.
*/
public void start() {
getThreadPoolExecutor().execute(this);
}
/**
* Subclasses should override this method to perform work in the background thread provided by this class when
* #start is called.
*
* Any time work needs to be published to the main thread from within the method body, use #runOnUiThread.
*
* Work within this method should tend to be non-atomic and test for #quitIfCancelledOrInterrupted as often as
* possible, returning immediately if that method returns true.
*/
public abstract void performWorkInBackgroundThread();
}
Answer: 1) best practice lazy instantiation
A better lazy field instantiation pattern is the following: https://en.wikipedia.org/wiki/Double-checked_locking
Example:
static volatile ThreadPoolExecutor helper;
public ThreadPoolExecutor getHelper() {
    ThreadPoolExecutor result = helper;
    if (result != null) {
        return result;    // fast path: no locking once initialized
    }
    synchronized(AsynchronousOperation.class) {    // the field is static, so lock on the class
        result = helper;
        if (result == null) {
            helper = result = new ThreadPoolExecutor(...);
        }
    }
return result;
}
The same pattern should be used for getHandler.
Or save yourself the trouble and just always create the ThreadPoolExecutor and Handler.
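If you want to keep the laziness but skip the double-checked boilerplate altogether, the initialization-on-demand holder idiom lets the class loader do the synchronization for you. A plain-Java sketch (the pool parameters below are placeholders, not a recommendation):

```java
import java.util.concurrent.LinkedBlockingDeque;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Initialization-on-demand holder idiom: the JVM guarantees that Holder
// is initialized exactly once, on first access, with no explicit locking.
final class ExecutorHolder {
    private ExecutorHolder() {}

    private static class Holder {
        // Placeholder pool parameters; substitute your own values.
        static final ThreadPoolExecutor INSTANCE = new ThreadPoolExecutor(
                1, Runtime.getRuntime().availableProcessors(),
                2, TimeUnit.SECONDS, new LinkedBlockingDeque<>());
    }

    static ThreadPoolExecutor get() {
        return Holder.INSTANCE; // first call triggers Holder initialization
    }
}
```

Every caller then just uses ExecutorHolder.get(); there is no volatile field and no lock on the fast path.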
2) mCancelled
Are you sure you need the whole workflow surrounding mCancelled? Relying on isInterrupted might be enough.
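To illustrate: a minimal plain-Java sketch (no Android classes) of a worker whose only cancellation flag is its interrupt status; the 50 ms pause is an arbitrary choice for the demo:

```java
import java.util.concurrent.atomic.AtomicLong;

// The thread's interrupt status serves as the only cancellation flag.
// The worker polls isInterrupted() between units of work and exits
// cooperatively once interrupt() has been called.
final class InterruptOnlyWorker {
    static long runUntilInterrupted() {
        AtomicLong unitsOfWork = new AtomicLong();
        Thread worker = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                unitsOfWork.incrementAndGet(); // stand-in for one small work step
            }
        });
        worker.start();
        try {
            Thread.sleep(50);   // let the worker run briefly
            worker.interrupt(); // "cancel"
            worker.join(5000);  // worker notices the interrupt and exits
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return unitsOfWork.get();
    }
}
```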
3) runOnUiThread
Why is runOnUiThread an instance method? It might as well be static.
4) one line creation & storing
You could let your start method return this; to do the following:
AsynchronousOperation x = new AsynchronousOperation(){ /* ... */ }.start();
x.cancel(true);
Personally I hate that I have to put Java Thread creation and thread starting on two different lines! | {
"domain": "codereview.stackexchange",
"id": 20050,
"tags": "java, android"
} |
X86_64 implementation of STRCHR using NASM | Question: As this will eventually become part of an operating system I'm developing, there is no specific adherence to any particular ABI, even though this is being developed and tested on Ubuntu 16.04. My OS will however adopt a SYSTEM V paradigm insomuch as passing values by register, but they will be passed in the one most suitable to the algorithm.
One of my methodologies is to utilize as many bits of the register as possible, especially when 16 or fewer are significant. I have found, too, that in a lot of cases this can simplify the calling process. It does, however, complicate things a little, e.g.:
mov rdi, Pointer_to_String
mov ecx, 784 << 8 | '.'
call strlen
This would traverse the buffer pointed to by RDI for the first occurrence of a period, for a maximum of 784 characters. This snippet had evolved from just scanning for NULL for a maximum of FFFFFFFFH bytes, to scanning for any of the values (0 - FF), to traversing the buffer for a specified number of bytes.
Question
Do you think I've achieved the maximum utility with the least amount of code?
; Determine length, including terminating character EOS. Result may include
; VT100 escape sequences.
; ENTER: RDI = Pointer to ASCII string.
; RCX Bits 31 - 08 = Max chars to scan (1 - 1.67e7)
; 07 - 00 = Terminating character (0 - FF)
; LEAVE: RAX = Pointer to next string (optional).
; FLAGS: ZF = Terminating character found, NZ otherwise (overrun).
; DF = Unmodified in case it was already set.
strlen: pushf ; Preserve DF (Direction flag)
push rcx ; Preserve registers used by proc so
push rdi ; its non-destructive except for RAX.
mov al, cl ; Byte to scan for in AL.
shr ecx, 8 ; Shift max count into bits 23 - 00
std ; Auto decrement.
repnz scasb ; Scan for AL or until ECX = 0
mov rax, rdi ; Return pointer to EOS + 1
pop rdi ; Original pointer from prologue
jz .exit ; ZF indicates EOS was found
mov rax, rdi ; RAX = RDI, NULL string
.exit:
pop rcx
popf ; Restore direction flag.
ret
Answer: Bug #1
You set the DF flag, so your string operation is going to run backwards. You should clear the DF flag to scan your string in the forwards direction.
Bug #2
If you reach the byte limit, your function will return the original string pointer in RAX. The comment says it should return NULL. Perhaps you meant xor rax, rax instead of mov rax, rdi?
Bug #3
According to the comments, your function should return with ZF set according to the result of the scan. However, since you do pushf and then popf, the ZF result is wiped out by the popf.
Your own ABI
I would suggest following the standard X64 ABI rather than inventing a new ABI per function. For one thing, you will be able to use high level languages to interface with your assembly routines. Another thing is that it would be hard for you to remember details such as "when I call strlen, ZF set means that the terminating character was found".
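To make the interop point concrete: if the routine followed a C-callable contract, its documented behavior could be modeled (and unit-tested) directly from C. This is a hedged sketch of the contract described in the comment header, not a drop-in for the assembly:

```c
#include <assert.h>
#include <stddef.h>

/* Model of the documented contract: scan at most max_len bytes of s for
 * the byte c. On a hit, return a pointer one past the match (the
 * "EOS + 1" from the comment header); on overrun, return NULL, which is
 * what Bug #2 suggests the assembly intended. */
static const char *scan_past(const char *s, unsigned char c, size_t max_len)
{
    for (size_t i = 0; i < max_len; i++) {
        if ((unsigned char)s[i] == c)
            return s + i + 1;
    }
    return NULL;
}
```

A reference model like this also doubles as a test oracle for the assembly version.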
Don't call it strlen
Your function is closer to strchr or strnchr than strlen. Any reasonable person would expect a function called strlen to return a length rather than a pointer. | {
"domain": "codereview.stackexchange",
"id": 22574,
"tags": "strings, assembly"
} |
Why can a contrail only form below -40 degrees Celsius? | Question: I read that contrails can only form if the outside temperature is below -40 degrees Celsius.
But why not earlier?
Water is already super-cooled below 0 degrees Celsius. If a condensation nucleus touches the water, it will instantly freeze, from what I understood from this website:
Contrails form at -40 degrees Fahrenheit (which is also -40 Celsius), or colder. At that temperature the tiny drops of condensed water will instantly freeze.
Now, an aircraft outputs lots of soot, which can also act as condensation nuclei.
So, why aren't contrails forming at 0 degrees Celsius already?
EDIT:
For example, let's just take a look at this weather balloon data:
-----------------------------------------------------------------------------
PRES HGHT TEMP DWPT RELH
hPa m C C %
-----------------------------------------------------------------------------
955.0 614 3.4 -0.6 75
932.0 812 2.2 -2.0 74
925.0 873 1.8 -2.4 74
885.0 1227 -0.3 -5.3 69
850.0 1548 -3.3 -6.2 80
842.0 1622 -4.0 -6.6 82
797.0 2051 -7.8 -8.9 92
781.0 2209 -9.2 -9.8 96
769.0 2330 -10.3 -10.4 99
As we can see, the temperature at a pressure of 769 hPa, which is 2330 meters on that day, is -10 degrees and the relative humidity is 99%.
My question is: Why can't contrails form in this condition? Or can they? The air is very humid, and the engine's exhaust vapor would push it past saturation, making the water vapor condense on the soot, and the temperature would freeze the droplets.
Every internet source says they can only exist at or below -40 degrees Celsius. But what about this condition? What stops them from forming at this temperature and humidity?
I need to know this because I'm writing my pre-scientific work about it. I understand contrail formation pretty well so far, but some small puzzle pieces are still missing.
Answer: The -36.5 Celsius figure came from a theory describing contrail formation developed by Appleman in 1953. This theory gives rise to the Appleman chart, as shown below, which predicts whether contrails will form at a given pressure (i.e. altitude), temperature, and humidity. As you can see, -36.5 Celsius corresponds roughly to the highest possible temperature where contrail formation is allowed at 400 hPa, just below normal cruising altitudes (assuming 100% relative humidity). For a summary of Appleman's theory, here's a reexamination of his work (unfortunately the original paper has not been digitized outside of JSTOR as far as I know): http://journals.ametsoc.org/doi/pdf/10.1175/1520-0450%281995%29034%3C2400%3AAROTFO%3E2.0.CO%3B2
Essentially, the reason that contrails need colder temperatures to form is a combination of two factors:
Water's vapor pressure increases with temperature, and
The relative humidity decreases with temperature.
In order for persistent contrails to form, the water droplets need to not immediately evaporate or sublime. If water's vapor pressure is too high and the relative humidity is too low, then your contrails evaporate away quickly. The point at which these two factors are balanced is given by the curve in the Appleman chart.
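To put rough numbers on the first factor, here is a quick estimate using the Magnus approximation for saturation vapor pressure over liquid water (Alduchov and Eskridge coefficients; the exact constants vary slightly between sources):

```python
import math

def saturation_vapor_pressure_hpa(t_celsius):
    """Magnus approximation for saturation vapor pressure over liquid
    water, in hPa (Alduchov and Eskridge coefficients)."""
    return 6.1094 * math.exp(17.625 * t_celsius / (t_celsius + 243.04))

for t in (0, -10, -40):
    print(f"{t:>4} C: {saturation_vapor_pressure_hpa(t):6.3f} hPa")
```

At -40 C the air holds roughly 30 times less water vapor at saturation than at 0 C, so the same amount of exhaust moisture supersaturates cold air far more easily.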
Regarding your edit, according to Appleman, contrails should not form under those temperature and pressure conditions. At a high enough humidity, water vapor will indeed condense out of the engine exhaust, but the question you should be asking is: will the water droplets have enough time to freeze before they disperse into regions of lower humidity and evaporate? Appleman says the answer to that question is no. Even small water droplets don't necessarily freeze instantly. | {
"domain": "physics.stackexchange",
"id": 46138,
"tags": "temperature, freezing"
} |
How can the process of hypertuning XGBoost parameters be automated? | Question: I'm using xgboost for training a model on data with extreme class imbalance, after referring to the approach here.
After performing grid search and some manual settings, I found that the following parameters work the best for me:
weight <- as.numeric(labels) * nrow(test) / length(labels)
upscale <- sum(weight * (labels == 1.0))
xgb_params = list(
objective = 'binary:logistic',
eta = 0.1,
max_depth = 4,
eval_metric = 'auc',
max_delta_step = 10,
scale_pos_weight = upscale
)
How can the process of setting optimal hyperparameters for xgboost be automated for best AUC? Please note that some of these parameters aren't supported by the caret implementation of xgboost but are very important for the model I have to design.
Answer: In general, if you want to automate fine tuning a model's hyper parameters, its best to use a well tested package such as caret or MLR.
I've used the caret package extensively. Here is a reference of the parameters supported by caret for tuning a xgboost model.
To automatically select parameters using caret, do the following:
First define a range of values of each parameter you would want caret to search. Define this in the tuning grid.
Start model training using caret after specifying a measure to optimize, e.g. accuracy or Kappa statistic, etc.
Plot or print the performance comparison for various parameter values, refine and repeat if required.
Refer to the caret guide here to get step-by-step instructions on using it.
For handling class imbalance, I've found from my experience that adjusting weights is not as helpful as under-sampling the majority class and over-sampling the minority class, or a combination of the two. However, it all depends on the size of the data available and the case at hand.
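As an aside, the sampling idea is easy to prototype in any language before reaching for a dedicated package. A toy sketch (in Python for brevity; the 2x threshold is an arbitrary, illustrative choice):

```python
import random

def rebalance(data, seed=0):
    """Toy rebalancing for a binary problem. `data` is a list of
    (features, label) pairs with labels 0/1. Under-samples the majority
    class down to 2x the minority count, then over-samples the minority
    class (with replacement) to match. Illustrative only."""
    rng = random.Random(seed)
    pos = [d for d in data if d[1] == 1]
    neg = [d for d in data if d[1] == 0]
    minority, majority = (pos, neg) if len(pos) <= len(neg) else (neg, pos)
    majority = rng.sample(majority, min(len(majority), 2 * len(minority)))
    extra = [rng.choice(minority) for _ in range(len(majority) - len(minority))]
    balanced = majority + minority + extra
    rng.shuffle(balanced)
    return balanced
```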
In case you need to tune some parameters which are not supported by caret, then, you could write your own iterative loop to train and test the model for different values of that parameter and then choose one that works best. I think most of the really relevant parameters have already been included in caret.
You would need to adjust these parameters in case the population itself changes over time. Or, the methods to gather data and their accuracy may change which could result in performance deterioration. You could run a simple check by comparing the performance of your model over the current dataset vs. a 6 month older dataset. If the performance is similar, then you may not need to update the model in the future. | {
"domain": "datascience.stackexchange",
"id": 935,
"tags": "r, optimization, xgboost, hyperparameter, weighted-data"
} |
Parsing from one enum to another | Question: I face a problem I think I am not approaching appropriately.
I have two independent enums that will both contain the same members, the only difference being the values assigned.
I have to be able to parse from one enum type to the other on the fly and I've seen the following code works as intended:
(enumType)Enum.Parse(typeof(enumType), enum1.ToString());
But I feel like there's something I am doing wrong, I believe this code is prone to errors and I feel like I need some help approaching this problem.
The code involved is the following:
public enum AdjustCircuitsCurrent
{
_2V = 0x0001,
_1_28V = 0x0002,
_333mV = 0x0003,
_0_25A = 0x0004,
_0_01V = 0x0005,
_0_1V = 0x006,
_1V = 0x0007,
}
public enum VerificacionCircuitsCurrent
{
_2V = 0x000B,
_1_28V = 0x000C,
_333mV = 0x000D,
_0_25A = 0x000E,
_0_1V = 0x00F,
_0_01V = 0x0011,
_1V = 0x0012,
}
public VerificacionCircuitsCurrent ConvertToVerificationCircuit(AdjustCircuitsCurrent circuit)
{
return (VerificacionCircuitsCurrent)Enum.Parse(typeof(VerificacionCircuitsCurrent), circuit.ToString());
}
Again, this code works as intended, I can convert from one enum to the other, but I feel there's something wrong with it.
Answer: There's nothing particularly wrong with the code. It happens often enough that this question has plenty of upvotes.
Marc Gravell gives the exact code that you have. But you can go take a look at the link for ideas about using extension methods to make calling the conversion nicer and possibly adding an "IsDefined" check in case the enums get out of sync sometime in the future. | {
"domain": "codereview.stackexchange",
"id": 17181,
"tags": "c#, parsing, converting, enum"
} |
Brainf**k to Ruby converter -- v2 | Question: Previous iteration.
You know, I think this is the fastest I've ever pushed out an update to anything. This is Version 2 of my Brainf**k to Ruby converter, and the generated code looks... Well, like Brainf**k, converted directly to Ruby, with no attempt at making it more readable.
I'm looking for any tips on making things more idiomatic, both in the generator and the generated code. The nested if/cases really bug me, but I'm not quite sure how to get rid of them, especially since just two characters are blindly replaced. I'd also like advice on making it run faster.
bf_to_ruby.rb
input_file, output_file = ARGV
code = IO.read(input_file).tr('^+-<>.,[]', '')
open(output_file, File::CREAT | File::WRONLY) do |output|
output.puts <<-END.gsub(/^[ \t]*\||\s*#@.*$/, '')
|#!/usr/bin/env ruby
|class Mem < Hash #@ `Hash` because it's more memory-efficient and allows negative values.
| def initialize; super(0); end
| def []=(i, val); super(i, val & 255); end
|end
|data = Mem.new
|pointer = 0
END
indent_level = 0
code.scan(/(\++)|(\-+)|(<+)|(>+)|([.,\[\]])/)
.map do |string|
if string[0]
next "#{' ' * indent_level}data[pointer] += #{string[0].length}"
elsif string[1]
next "#{' ' * indent_level}data[pointer] -= #{string[1].length}"
elsif string[2]
next "#{' ' * indent_level}pointer -= #{string[2].length}"
elsif string[3]
next "#{' ' * indent_level}pointer += #{string[3].length}"
elsif string[4]
case string[4]
when '['
ret = "#{' ' * indent_level}until data[pointer] == 0"
indent_level += 1
next ret #Split it so that it's clear that indent is increased *after* the line
when ']'
indent_level -= 1
next "#{' ' * indent_level}end"
when ','
next "#{' ' * indent_level}data[pointer] = $stdin.readbyte"
when '.'
next "#{' ' * indent_level}putc data[pointer]"
end
end
end.each { |line| output.puts(line) }
end
Demo
Input:
++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]>>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++.
Output:
#!/usr/bin/env ruby
class Mem < Hash
def initialize; super(0); end
def []=(i, val); super(i, val & 255); end
end
data = Mem.new
pointer = 0
data[pointer] += 8
until data[pointer] == 0
pointer += 1
data[pointer] += 4
until data[pointer] == 0
pointer += 1
data[pointer] += 2
pointer += 1
data[pointer] += 3
pointer += 1
data[pointer] += 3
pointer += 1
data[pointer] += 1
pointer -= 4
data[pointer] -= 1
end
pointer += 1
data[pointer] += 1
pointer += 1
data[pointer] += 1
pointer += 1
data[pointer] -= 1
pointer += 2
data[pointer] += 1
until data[pointer] == 0
pointer -= 1
end
pointer -= 1
data[pointer] -= 1
end
pointer += 2
putc data[pointer]
pointer += 1
data[pointer] -= 3
putc data[pointer]
data[pointer] += 7
putc data[pointer]
putc data[pointer]
data[pointer] += 3
putc data[pointer]
pointer += 2
putc data[pointer]
pointer -= 1
data[pointer] -= 1
putc data[pointer]
pointer -= 1
putc data[pointer]
data[pointer] += 3
putc data[pointer]
data[pointer] -= 6
putc data[pointer]
data[pointer] -= 8
putc data[pointer]
pointer += 2
data[pointer] += 1
putc data[pointer]
pointer += 1
data[pointer] += 2
putc data[pointer]
You'll notice that the generated code is nearly the same as the last version's, but with a lot of duplicate lines merged and indentation added.
Answer: You're correct that nesting a case in an elsif ladder is a bit clunky here, but the two things that jump out at me as making the code difficult to comprehend are: 1) we have to know everything that's going on with the Regexp passed to scan() in order to figure out the intent of the if clauses, and 2) the variable string is not a String, but an Array.
You can remove the capture groups from your Regexp to just get an Array of Strings back from scan() (instead of an Array of Arrays of Strings), and figure out what exactly was matched by looking at the strings themselves - which means you don't need a nested case anymore, and makes it a little more obvious what scan() is doing without having to actually parse its argument.
Like this:
# example.rb
code = '--+++.,,'
code.scan(/\++|\-+|[.,]/).map { |str|
case str[0]
when '+'
"Plus signs: #{str.length}"
when '-'
"Minus signs: #{str.length}"
when '.'
'Dot!'
when ','
'Comma!'
end
}.each {|ll| puts ll}
Produces:
$ ruby example.rb
Minus signs: 2
Plus signs: 3
Dot!
Comma!
Comma! | {
"domain": "codereview.stackexchange",
"id": 14288,
"tags": "ruby, converting, brainfuck"
} |
Animated Score Amounts for Game | Question: This is a simple class for a label with a score that animates counting up or down. When someone in the game scores points, the numbers count up or down to the new total.
Here is an example of what it looks like:
I ran into a few problems when building this class. It was important to prevent the building of further animations while a current animation was playing, because otherwise the score would be incremented too many times. It was also important to set the correct values at the end of the sequence of animations in order to make sure that the score was always accurate when the animation completed.
BZAnimatedScoreLabel.h
#import <SpriteKit/SpriteKit.h>
@interface BZAnimatedScoreLabel : SKLabelNode
+(BZAnimatedScoreLabel *) labelWithText:(NSString *)text score:(int)score size:(int)fontSize;
-(void) updateForScore:(int)newScore;
@end
BZAnimatedScoreLabel.m
#import "BZAnimatedScoreLabel.h"
@implementation BZAnimatedScoreLabel {
int _score;
SKLabelNode *_scoreLabel;
NSMutableArray *_actionQueue;
BOOL _isAnimationPlaying;
}
#pragma mark - Initialization
+(BZAnimatedScoreLabel *) labelWithText:(NSString *)text score:(int)score size:(int)fontSize {
return [[BZAnimatedScoreLabel alloc]initWithText:text score:score size:fontSize];
}
-(instancetype) initWithText:(NSString *)text score:(int)score size:(int)fontSize {
self = [super initWithFontNamed:@"Arial"];
if (self) {
self.text = text;
self.fontSize = fontSize;
self.fontColor = [SKColor whiteColor];
_score = score;
_scoreLabel = [[SKLabelNode alloc]initWithFontNamed:@"Arial"];
_scoreLabel.fontSize = fontSize;
_scoreLabel.fontColor = [SKColor whiteColor];
_scoreLabel.text = [NSString stringWithFormat:@"%i", _score];
_scoreLabel.position = CGPointMake((fontSize * 4), 0);
[self addChild:_scoreLabel];
_isAnimationPlaying = NO;
_actionQueue = [[NSMutableArray alloc]init];
}
return self;
}
#pragma mark - Animation
-(void) updateForScore:(int)newScore {
if (!_isAnimationPlaying) {
if (newScore > _score) {
[self updateForHigherScore:newScore];
} else {
[self updateForLowerScore:newScore];
}
}
}
-(void) updateForHigherScore:(int)newScore {
for (int i = _score; i <= newScore; i+=10) {
[self addAnimationToQueueForAmount:i];
}
[self playQueuedAnimationsFinalScore:newScore];
}
-(void) updateForLowerScore:(int)newScore {
for (int i = _score; i >= newScore; i-=10) {
[self addAnimationToQueueForAmount:i];
}
[self playQueuedAnimationsFinalScore:newScore];
}
-(void) addAnimationToQueueForAmount:(int)amount {
[_actionQueue addObject:[SKAction runBlock:^(void){
_scoreLabel.text = [NSString stringWithFormat:@"%i", amount];
}]];
}
-(void) playQueuedAnimationsFinalScore:(int)finalScore {
_isAnimationPlaying = YES;
[self runAction:[SKAction sequence:_actionQueue] completion:^(void) {
_score = finalScore;
_scoreLabel.text = [NSString stringWithFormat:@"%i", _score];
_isAnimationPlaying = NO;
[_actionQueue removeAllObjects];
}];
}
@end
Here is an example usage in the SKScene:
//object creation
_pointsLabel = [BZAnimatedScoreLabel labelWithText:@"Points = " score:0 size:20];
_pointsLabel.position = CGPointMake(_initialScreenSize.width/1.8, _initialScreenSize.height/28);
[self addChild:_pointsLabel];
//usage
[_pointsLabel updateForScore:_game.currentScore];
I thought about expanding the initialization method of the class to include the size of the scene (for proper spacing of the points from the text) and also the position of the label, but the initialization method is already pretty long and adding two further arguments feels like too much, but I am not sure.
Answer:
If the score label is updated while the animation is still running, then
the update is simply ignored. For example, with
[_pointsLabel updateForScore:100];
[_pointsLabel updateForScore:200];
the label will animate to 100 and stay there, instead of animating to 200.
Instead of pre-computing all actions from the current score to the final value,
I would start only a single action that will display the next intermediate score, e.g. from 100 to 110. When that action has completed, start a new action.
This approach solves the problem of simultaneously running actions, and makes
both _isAnimationPlaying and the _actionQueue obsolete. Each time an action is created,
it can check whether the counter has to be incremented or decremented.
The animation is always in steps of 10, e.g. an update from 13 to 51 will
display 13, 23, 33, 43, 51. It would look nicer if multiples of 10 are displayed
where possible, in this case 13, 20, 30, 40, 50, 51.
The animation is too fast. At least on my Simulator, it was running so fast that
not all intermediate steps could be recognized. I would add a small delay
between the actions.
The updateForScore: method is not really necessary. I would make score
a (public) property and override the setter method, so that
_pointsLabel.score = newValue;
updates the score and starts the animation.
Then your implementation could look like this:
BZAnimatedScoreLabel.h
@interface BZAnimatedScoreLabel : SKLabelNode
+(BZAnimatedScoreLabel *) labelWithText:(NSString *)text score:(int)score size:(int)fontSize;
@property (nonatomic) int score;
@end
BZAnimatedScoreLabel.m
#import "BZAnimatedScoreLabel.h"
@implementation BZAnimatedScoreLabel {
SKLabelNode *_scoreLabel;
int _currentScore; // The currently displayed score
}
#pragma mark - Constants
static NSString *kAnimationKey = @"BZLabelAnimationKey";
static const NSTimeInterval kAnimationDelay = 0.02;
#pragma mark - Initialization
+(BZAnimatedScoreLabel *) labelWithText:(NSString *)text score:(int)score size:(int)fontSize {
return [[BZAnimatedScoreLabel alloc]initWithText:text score:score size:fontSize];
}
-(instancetype) initWithText:(NSString *)text score:(int)score size:(int)fontSize {
self = [super initWithFontNamed:@"Arial"];
if (self) {
self.text = text;
self.fontSize = fontSize;
self.fontColor = [SKColor whiteColor];
_currentScore = _score = score;
_scoreLabel = [[SKLabelNode alloc] initWithFontNamed:@"Arial"];
_scoreLabel.fontSize = fontSize;
_scoreLabel.fontColor = [SKColor whiteColor];
_scoreLabel.position = CGPointMake(fontSize * 4, 0);
_scoreLabel.text = [NSString stringWithFormat:@"%i", score];
[self addChild:_scoreLabel];
}
return self;
}
#pragma mark - Animation
-(void)setScore:(int)score {
_score = score;
[self updateDisplay];
}
// Compute next multiple of 10 from _currentScore in the direction of _score:
-(int)computeNextScore {
int next;
if (_score > _currentScore) {
if (_currentScore >= 0) {
next = ((_currentScore + 10)/ 10) * 10;
} else {
next = ((_currentScore + 1)/ 10) * 10;
}
if (next > _score) {
next = _score;
}
} else if (_score < _currentScore) {
if (_currentScore <= 0) {
next = ((_currentScore - 10) / 10) * 10;
} else {
next = ((_currentScore - 1) / 10) * 10;
}
if (next < _score) {
next = _score;
}
} else {
next = _score;
}
return next;
}
-(void)updateDisplay {
if (_score != _currentScore) {
SKAction *wait = [SKAction waitForDuration:kAnimationDelay];
SKAction *update = [SKAction runBlock:^() {
_currentScore = [self computeNextScore];
_scoreLabel.text = [NSString stringWithFormat:@"%i", _currentScore];
}];
SKAction *checkAgain = [SKAction performSelector:@selector(updateDisplay) onTarget:self];
[self runAction:[SKAction sequence:@[wait, update, checkAgain]] withKey:kAnimationKey];
} else {
[self removeActionForKey:kAnimationKey];
}
}
@end
_currentScore is the currently displayed score, and the computeNextScore
method computes the next value to be displayed. It looks a bit complicated,
but it works correctly for both positive and negative scores.
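Since computeNextScore is plain C arithmetic, its stepping behavior is easy to sanity-check outside SpriteKit. A direct C transcription of the method above:

```c
#include <assert.h>

/* Direct C transcription of computeNextScore: step from `current`
 * toward `target`, landing on multiples of 10 where possible and
 * clamping at the target. */
static int next_score(int current, int target)
{
    int next;
    if (target > current) {
        next = (current >= 0) ? ((current + 10) / 10) * 10
                              : ((current + 1) / 10) * 10;
        if (next > target)
            next = target;
    } else if (target < current) {
        next = (current <= 0) ? ((current - 10) / 10) * 10
                              : ((current - 1) / 10) * 10;
        if (next < target)
            next = target;
    } else {
        next = target;
    }
    return next;
}
```

Iterating from 13 toward 51 yields 20, 30, 40, 50, 51, the sequence described earlier.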
updateDisplay starts a sequence of three actions (if necessary): wait
is for the delay, update computes the next score to be displayed and
updates the label, and checkAgain causes the updateDisplay method to
be called again.
The sequence action is created with a key. This has two advantages:
The action can be removed, and starting a new action with the same key
will automatically stop the previous action. | {
"domain": "codereview.stackexchange",
"id": 11971,
"tags": "game, objective-c, animation, sprite-kit"
} |
Getting final robot state from a moveit plan | Question:
I want to plan two consecutive motions on a move group.
How can I do that without executing the first plan?
Note that I cannot put the first goal as a via point, because I need to execute some other actions unrelated to the robot in between.
On the tutorials I also saw something like this:
start_state.setFromIK(joint_model_group, start_pose2);
move_group.setStartState(start_state);
But obviously this cannot guarantee continuous joint motion if there are multiple IK solutions, I think.
My current code is this:
moveit::planning_interface::MoveGroupInterface::Plan plan1;
move_group.setPoseTarget(pose1);
bool success = (move_group.plan(plan1) == moveit::planning_interface::MoveItErrorCode::SUCCESS);
if (!success) {
ROS_ERROR_STREAM("plan1 failed");
return false;
}
moveit::planning_interface::MoveGroupInterface::Plan plan2;
/*
I want to get endState of plan1 here, something like:
move_group.setStartState(plan1.getEndState());
*/
move_group.setPoseTarget(pose2);
success = (move_group.plan(plan2) == moveit::planning_interface::MoveItErrorCode::SUCCESS);
if (!success) {
ROS_ERROR_STREAM("plan2 failed");
return false;
}
Originally posted by kky on ROS Answers with karma: 61 on 2019-09-06
Post score: 1
Answer:
I found the solution after a while.
It is not as clean as I would like it to be, but here it is.
robot_state::RobotState start_state(*move_group.getCurrentState());
moveit::planning_interface::MoveGroupInterface::Plan plan2;
robot_state::RobotState state(start_state);
const std::vector<double> joints = plan1.trajectory_.joint_trajectory.points.back().positions;
state.setJointGroupPositions(move_group_name, joints);
move_group.setStartState(state);
move_group.setPoseTarget(pose2);
Originally posted by kky with karma: 61 on 2019-09-06
This answer was ACCEPTED on the original site
Post score: 4 | {
"domain": "robotics.stackexchange",
"id": 33734,
"tags": "moveit, ros-melodic"
} |
Brute-force Integer Diophantine equations solver | Question: I want to improve the performance of my equation solver
So I have an expression: 42a + 75b - 30c + 80d + 25e + 50f; let's call it D.
Variables a, b, c, d, e, f are positive integers with values from 0 to 150.
I need to find and filter solutions to the equation D = -(v + 1) for each v, where v is an integer in the range from 0 to 3000. Solutions are filtered by constraining each variable to be either exactly 0 or at least 1; in other words, there are a total of 64 permutations for each v. I'm searching for the solution with the lowest a + b + c + d + e + f sum. Some permutations might not include any solutions, in which case it just returns Impossible.
This is more or less a typical Diophantine equation but I couldn't find a solver that would handle it. hackmath.net has a solver, but it doesn't take in more than 4 variables with constraints and also doesn't seem to have any API so that didn't help.
That's why I decided to make my own solver. Since I'm limiting the values of a, b, c, d, e, f, a brute-force algorithm didn't seem like such a bad idea, so I made one.
using System.Data;
const int c_maxVal = 150;
// valueSet[,] answerArray = new valueSet[3000 , 64];
// Lower size for testing
valueSet[,] answerArray = new valueSet[1 , 64];
for (int variation = 0; variation < answerArray.GetLength(0); variation++)
{
// FINDING SOLUTIONS
List<valueSet> results = new List<valueSet>();
Parallel.For(0, c_maxVal, a =>
{
for(int b = 0; b <= c_maxVal; b++)
{
for (int c = 0; c <= c_maxVal; c++)
{
for (int d = 0; d <= c_maxVal; d++)
{
for (int e = 0; e <= c_maxVal; e++)
{
for (int f = 0; f <= c_maxVal; f++)
{
if (variation - 42 * a + 75 * b - 30 * c + 80 * d + 25 * e + 50 * f == -1)
{
lock (results)
{
results.Add(new() { _a = a, _b = b, _c = c, _d = d, _e = e, _f = f });
}
}
}
}
}
}
}
});
// FILTERING VALUES
for (int s = 0; s < answerArray.GetLength(1); s++)
{
BitArray bArr = new BitArray(new int[] { s });
bool[] bits = new bool[bArr.Length];
bArr.CopyTo(bits, 0);
int[] aRange = new int[] { bits[0] ? 1 : 0, bits[0] ? c_maxVal : 0 };
int[] bRange = new int[] { bits[1] ? 1 : 0, bits[1] ? c_maxVal : 0 };
int[] cRange = new int[] { bits[2] ? 1 : 0, bits[2] ? c_maxVal : 0 };
int[] dRange = new int[] { bits[3] ? 1 : 0, bits[3] ? c_maxVal : 0 };
int[] eRange = new int[] { bits[4] ? 1 : 0, bits[4] ? c_maxVal : 0 };
int[] fRange = new int[] { bits[5] ? 1 : 0, bits[5] ? c_maxVal : 0 };
List<valueSet> finalList = results.Where(set =>
set._a >= aRange[0] && set._a <= aRange[1] &&
set._b >= bRange[0] && set._b <= bRange[1] &&
set._c >= cRange[0] && set._c <= cRange[1] &&
set._d >= dRange[0] && set._d <= dRange[1] &&
set._e >= eRange[0] && set._e <= eRange[1] &&
set._f >= fRange[0] && set._f <= fRange[1]
).ToList();
valueSet finalSet = finalList.Find(set => set.count == finalList.Min(set => set.count));
answerArray[variation, s] = finalSet;
}
}
// Console output for testing
Console.WriteLine();
for (int s = 0; s < answerArray.GetLength(1); s++)
{
Console.WriteLine($"Permutation #{s+1}");
Console.WriteLine(answerArray[0, s]);
}
struct valueSet
{
public int _a;
public int _b;
public int _c;
public int _d;
public int _e;
public int _f;
public int count { get { return _a + _b + _c + _d + _e + _f; } }
public override string ToString() => count > 0 ? $"a=[{_a}] b=[{_b}] c=[{_c}] d=[{_d}] e=[{_e}] f=[{_f}]" : "Impossible";
}
The main solution finder is the Parallel.For loop, and I have no complaints about that (my CPU spikes up to 90% load when the loop is running, but otherwise it does its job and it does it fast).
Filtering through all the solutions to find each permutation with the lowest variable sum is what's significantly slowing the program. My approach is to use a for loop, convert its iterator to a bit array on each step, and then use each bit as a binary check for each variable. If the bit is set, then the range is from 1 to the maximum value. If the bit is not set, then the range is from 0 to 0. My PC isn't that old, but it still takes a couple of minutes to go through that search when testing just 1 variation (v from the prior explanation).
What I thought of doing:
Removing certain unnecessary permutations.
Some of the permutations will never have a solution. For example if only b is used, then the expression D can only have positive answers even though it's supposed to stay negative.
Is there any way to significantly improve the performance of my filtering algorithm?
A couple things to note before answering:
I know that b, e, f variables can substitute each other as they're all multiples of 25, but I still need them to be separate variables.
I plan on storing answerArray in a CSV file later and use it as a lookup table.
Answer: I'll let someone more familiar with the domain discuss better algorithms, so I'll just talk about easy, generally-applicable ideas.
The obvious place to start is to find the minimal solution for each subset within the Parallel.For as part of the main loop: there's no need to record all the solutions (you only need to keep the small ones), and no need for the threads to talk until they've finished processing. You can maintain a bit-mask of non-zero variables as you go, use an array (just like answerArray) to keep track of the best solution found so far in the local thread, and aggregate the candidate solutions when the threads finish (e.g. feed the data into a shared array as the last work of the thread, or accumulate all the results and aggregate them all at once).
This way, you remove the filtering stage, most of the inter-thread communication, and - depending on the problem - potentially a lot of unnecessary memory work.
Keeping track of the minimal solution also means you can filter out candidate solutions before trying them: there's no point evaluating a larger candidate solution if you already have a smaller one. If this proved effective at reducing runtime, then you could consider heuristics for changing the order in which you test solutions (e.g. try small candidates first); parallelise over the solution array, rather than over values of a (so that different threads can't be looking for the minimal solution for the same combination of variables); or reintroduce thread communication so that the threads share a table of minimal solutions (consider CAS or other methods to minimise thread contention).
Misc
The ToList on finalList is unnecessary and will probably just increase memory load, and you will be evaluating finalList.Min(set => set.count) for each execution of the outer lambda, which is completely unnecessary: instead, use an ArgMin function (if using .NET 7, you have MinBy in LINQ and can just wrap the whole thing in a try...catch to trap the error case): it'll be clearer and faster. Addressing this alone seems to provide a significant improvement: it takes the filtering from quadratic in the number of solutions to linear in the number of solutions.
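To illustrate the arg-min idea outside C#, here is a Python sketch (names mine): a single pass keeps the best matching candidate — which is what MinBy does — instead of re-evaluating Min inside Find for every element:

```python
def argmin_filtered(solutions, predicate, key=sum):
    """Single O(n) pass: the smallest solution satisfying predicate, or None."""
    best = None
    for s in solutions:
        if predicate(s) and (best is None or key(s) < key(best)):
            best = s
    return best
```

The None return also covers the "Impossible" case directly, with no need for a try...catch.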
Rather than using Where to filter the solutions for each of the possible zero/non-zero variable combinations, you should group them somehow. The best thing to do would be to never put them all into one list in the first place, but you could also just use GroupBy, grouping by an integer mask rather than all the comparisons in your current code.
Note also that you are missing an obvious opportunity to parallelise the filtering.
Don't worry about your output format: you're not outputting a lot of data so you can afford to transform it later: choose data-structures that suit the data processing.
The loop over the last parameter is redundant: you can evaluate the last parameter directly (and then check that it's an integer)
valueSet doesn't obey typical .NET naming conventions: types and public members should be in ProperCamelCase
Refit (no filtering stage)
A simple refit based on the paragraphs at the top (I haven't touched valueSet or e.g. changed the code to compute f directly):
public static valueSet[,] VM(int variations, int maxValue)
{
Console.WriteLine($"VM");
valueSet[,] answerArray = new valueSet[variations, 64];
for (int variation = 0; variation < answerArray.GetLength(0); variation++)
{
Parallel.For(0, maxValue, a =>
{
valueSet[] candidates = new valueSet[64];
int mask = a > 0 ? (1 << 0) : 0;
mask &= ~0b111110;
for (int b = 0; b <= maxValue; b++)
{
mask &= ~0b111100;
for (int c = 0; c <= maxValue; c++)
{
mask &= ~0b111000;
for (int d = 0; d <= maxValue; d++)
{
mask &= ~0b110000;
for (int e = 0; e <= maxValue; e++)
{
mask &= ~0b100000;
for (int f = 0; f <= maxValue; f++)
{
if (variation - 42 * a + 75 * b - 30 * c + 80 * d + 25 * e + 50 * f == -1)
{
var s = new valueSet() { _a = a, _b = b, _c = c, _d = d, _e = e, _f = f };
var t = candidates[mask].count;
if (t == 0 || t > s.count)
{
candidates[mask] = s;
if (f > 0)
break;
}
}
mask |= (1 << 5);
}
mask |= (1 << 4);
}
mask |= (1 << 3);
}
mask |= (1 << 2);
}
mask |= (1 << 1);
}
lock (answerArray)
{
for (int i = 0; i < 64; i++)
{
var s = candidates[i];
var t = answerArray[variation, i].count;
if (s.count > 0 && (t == 0 || t > s.count))
answerArray[variation, i] = s;
}
}
});
}
return answerArray;
}
Ran on my machine in ~127s for maxVal = 100.
I really didn't put much effort into this, so it's not the nicest code ever, but should provide a clear example of how to do this without the explicit filtering stage and reduced opportunity for thread contention (though this clearly isn't a big deal, so possibly worth having a single array of solutions, so that they can 'share' the minimum solutions and further prune the search space, though I couldn't immediately get an improvement with some simple changes).
Faster Filtering
Lazily changing the filtering to use MinBy and try...catch helps a great deal; I've not de-duplicated the Where with e.g. a GroupBy because it makes more sense not to put all entries with the same mask into one list in the first place, and I've not parallelised the code so that relative performance is more comparable with your original filtering code (parallelisation will help to get closer to the solution without filtering, as everything will be parallelised):
try
{
valueSet finalSet = results.Where(set =>
set._a >= aRange[0] && set._a <= aRange[1] &&
set._b >= bRange[0] && set._b <= bRange[1] &&
set._c >= cRange[0] && set._c <= cRange[1] &&
set._d >= dRange[0] && set._d <= dRange[1] &&
set._e >= eRange[0] && set._e <= eRange[1] &&
set._f >= fRange[0] && set._f <= fRange[1]
).MinBy(set => set.count);
answerArray[variation, s] = finalSet;
}
catch { }
Ran on my machine in ~182s for maxVal = 100.
I didn't let the original code run to completion for maxVal = 100, but it was running for about half an hour at least. It took 236s to run maxVal = 60 (compared to 7s for the filterless refit and 12s for the faster filtering) (I guess my CPU is slower than yours!)
Solving for all variations
The filter-free method lends itself to a modification to find solutions for a large number of variations, by computing the variation that is satisfied by each candidate solution. This answer is pretty long and intentionally focusses on the filtering per the OP, but an example of such a change in this regard can be found at https://gist.github.com/VisualMelon/71dab52a8657ac497724432207cde61a | {
"domain": "codereview.stackexchange",
"id": 44345,
"tags": "c#, performance, beginner"
} |
Bending moment in a cantilever beam | Question: If I have a cantilever beam of length L fixed at the left end to a wall and I hang a weight W from it's right free end then why should the bending moment at a point x units to right of the wall be W(L-x)?
If I understand correctly, the bending moment at a point on the beam should be the total torque of the forces acting on cross surface at that point about an axis passing through the geometric center and perpendicular to the plane of bending, then how is this equal to W(L-x)?
Answer: If you split the beam in two at the position $x$ and do Free Body Diagrams you will understand why the internal moment is such.
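In symbols (using the question's $W$, $L$ and $x$; the $F(\ell - x)$ below is the same expression in different notation): for the free-body segment to the right of the cut, the only external load is $W$ at the free end, a lever arm of $L - x$ from the cut, so moment balance about the cut requires an internal moment

$$M(x) = W\,(L - x).$$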
Each split body needs to be in balance. To balance the part of the beam between the split and the end where the load is applied a moment of $F \left( \ell -x \right)$ is needed. | {
"domain": "physics.stackexchange",
"id": 57642,
"tags": "classical-mechanics, torque, stress-strain"
} |
How is the gradient in density gradient centrifugation made? | Question: From the literature I read, I came to understand the following. Please correct me if wrong.
In CsCl gradient centrifugation, the gradient is achieved by centrifuging the CsCl solution at high rpm, which causes the heavy Cs ions to distribute unevenly throughout the solution, due to sheer centrifugal force.
In Sucrose gradient, solutions of decreasing concentrations of sucrose are layered one over the other slowly.
Why can't we achieve the sucrose gradient by simply centrifuging it like in the CsCl gradient?
Answer: Sucrose gradient separations are an example of a rate-zonal centrifugation technique (fair technical document on centrifugation separations). The idea is you layer lighter sucrose solutions on top of one another, for example, 35% at the bottom of the tube (the most dense zone) and 15% at the top band (the least dense zone). There might be 5 or 6 layers. You then layer the sample, which may contain cell lysate, at the very top.
This method is done on a centrifuge, and the idea is that in order to penetrate down to a more dense layer, the component of the lysate must exceed the density of the preceding sucrose layer. Quite literally the dense components are pushing down through the sucrose layers. The method is suitable for organelles and proteins. The layering must be done deliberately and carefully, however, because the sucrose layers and sample will easily mix if they're disturbed. The same concept applies to Ficoll density gradient separations: if there's no density of solution below the sample for it to pass through, there's no separation.
On the other hand, the CsCl method is an example of an isopycnic centrifugation suitable for the separation of nucleic acids; see the above mentioned reference. This is because the DNA migrates to where the density of the DNA equals the density of the gradient, referred to as the neutral buoyancy or isopycnic point. | {
"domain": "biology.stackexchange",
"id": 8393,
"tags": "biochemistry, lab-techniques, experimental-design"
} |
Eloquent JavaScript chessboard | Question: Is this a good way to solve the quiz "Chessboard" from http://eloquentjavascript.net/02_program_structure.html ?
Write a program that creates a string that represents an 8×8 grid, using newline characters to separate lines. At each position of the grid there is either a space or a “#” character. The characters should form a chess board.
When you have a program that generates this pattern, define a variable size = 8 and change the program so that it works for any size, outputting a grid of the given width and height.
This is my code:
var size = 10;
var grid = "";
for (var i = 1; i <= size; i++) {
for (var j = 1; j <= size; j++) {
if (i % 2 === 0) {
grid+= "# "
} else {
grid+= " #"
}
}
grid+= "\n"
}
console.log(grid)
Answer: Fun question;
you should write a function that takes a parameter instead of just writing the code
A chessboard has lots of repetition, take a minute to ponder how String.repeat could make this code much simpler.
Your indentation is not perfect, consider using a site like http://jsbeautifier.org/
I am not a big fan of var within the loop, I would declare var up front.
This is a possible solution that provides the right size of the board:
function createChessboardString(size){
const line = ' #'.repeat( size ),
even = line.substring(0,size),
odd = line.substring(1,size+1);
let out = '';
while(size--){
out = out + ((size % 2) ? odd: even ) + '\n';
}
return out;
}
console.log(createChessboardString(8));
You could consider for very large boards that the board in essence repeats
odd + '\n' + even, so you could repeat that as well. The problem for me is that there are too many corner cases to consider. So personally I would go for the above for any board size < 1000. | {
"domain": "codereview.stackexchange",
"id": 26878,
"tags": "javascript, programming-challenge, ascii-art"
} |
How can I add a plugin to an existing model? | Question:
What should be contained inside the <plugin> tags of the .world file?
Can I use <include> tags and inside them use <plugin> tags?
Originally posted by meha on Gazebo Answers with karma: 13 on 2016-02-23
Post score: 0
Answer:
All sdf tags are described in detail here
To answer your specific question... you can use include tags within a world/sdf file to include model elements, but I don't think it works for plugin elements. The most common usage of include is for models, since people typically include a model file from a world file.
Examples:
some world files
some model files
Originally posted by Peter Mitrano with karma: 768 on 2016-02-23
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by meha on 2016-02-23:
I guess, I am not clear with my question.
See http://gazebosim.org/tutorials?tut=plugins_model&cat=write_plugin#RunningthePlugin
In the .world file, plugin is applied to the box, but in my case I want the plugin to be applied onto a quadrotor (an existing model), how do I do that?
Comment by Peter Mitrano on 2016-02-23:
you just put the plugin tag in the model tag of the quadrotor model. | {
"domain": "robotics.stackexchange",
"id": 3871,
"tags": "gazebo"
} |
Time complexity of min() and max() on a list of constant size? | Question: If you use min() or max() on a constant sized list, even in a loop, is the time complexity O(1)?
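To make the question concrete: `min` on $n$ elements performs $n-1$ comparisons, which is constant only when $n$ itself is fixed. A small counting sketch (mine, not the built-in):

```python
def min_with_count(items):
    """Return (minimum, number of comparisons made)."""
    it = iter(items)
    best = next(it)  # raises StopIteration on empty input
    comparisons = 0
    for x in it:
        comparisons += 1
        if x < best:
            best = x
    return best, comparisons
```

Calling this in a loop on the same 3-element list always costs 2 comparisons per call, however many loop iterations there are.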
Answer: That depends what exactly you mean by "constant sized". The time to find the minimum of a list with 917,340 elements is $O(1)$ with a very large constant factor. The time to find the minimum of various lists of different constant sizes is $O(n)$ and likely $\Theta(n)$ where $n$ is the size of each list. Finding the minimum of a list of 917,340 elements takes much longer than finding the minimum of a list of 3 elements. | {
"domain": "cs.stackexchange",
"id": 17954,
"tags": "time-complexity, python"
} |
Recursively find files of certain types and log their paths (C++) | Question: I've written a function that takes a list of extensions and recursively finds files of those types, and logs their paths to a text file.
Usage example (finding image files in a home directory):
// set up filestream for unicode
const std::locale utf8_locale = std::locale(std::locale(), new std::codecvt_utf8<wchar_t>());
std::wofstream log("image_paths.txt", std::ios::app); // append mode
log.imbue(utf8_locale);
const std::set<std::wstring> image_extensions = {L".jpeg", L".jpg", L".tiff", L".gif", L".bmp", L".png"};
get_files(L"C:\\Users\\username", image_extensions, log);
Output to image_paths.txt:
C:\Users\username\image.jpg
C:\Users\username\image.png
C:\Users\username\directory\ajdsk.bmp
C:\Users\username\directory\subdirectory\other file.tiff
C:\Users\username\directory with spaces\file with spaces.jpeg
Function Code:
// return 0 -- all good
// return 1 -- root doesn't exist
// return 2 -- root isn't a directory
// return 3 -- no matching files found or error opening first file
// return 4 -- hit recursion limit
auto get_files(_In_ const std::wstring root, // root dir of search
_In_ const std::set<std::wstring> &ext, // extensions to search for
_Out_ std::wofstream &log, // file to write paths to
_In_ unsigned limit = 10 /* default recursion limit */) -> int
{
if(limit == 0) return 4;
// check root path
{
DWORD root_attrib = GetFileAttributesW(root.c_str());
if(root_attrib == INVALID_FILE_ATTRIBUTES) return 1; // root doesn't exist
if(!(root_attrib & FILE_ATTRIBUTE_DIRECTORY)) return 2; // root isn't a directory
}
LPCWSTR dir; // root directory + "\*"
HANDLE file = INVALID_HANDLE_VALUE; // handle to found file
WIN32_FIND_DATAW file_info; // attributes of found file
// prepare path for use with FindFile functions
std::wstring root_slash = root;
root_slash.append(L"\\*");
dir = root_slash.c_str();
file = FindFirstFileW(dir, &file_info);
if(file == INVALID_HANDLE_VALUE) return 3; // no matching files found or error opening first file
do { // for each file in directory
// for some reason
// file_info != L"." && file_info != L".."
// won't work unless file_info.cFileName is assigned to a var
std::wstring name = file_info.cFileName;
std::wstring path = root; // full path to current file
path.append(L"\\").append(name);
if(!(file_info.dwFileAttributes & FILE_ATTRIBUTE_READONLY) // not read-only
&& !(file_info.dwFileAttributes & FILE_ATTRIBUTE_OFFLINE) // not physically moved to offline storage
&& !(file_info.dwFileAttributes & FILE_ATTRIBUTE_SYSTEM) // not a system file
&& file_info.dwFileAttributes != INVALID_FILE_ATTRIBUTES // not invalid
&& (name != L"." && name != L"..")) { // not "." or ".."
if(file_info.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY) // file is a directory
get_files(path, ext, log, --limit);
else // file is not a directory
if(ext.find(PathFindExtensionW(path.c_str())) != ext.end()) // extension matches
log << path << '\n' << std::flush; // log path to file
}
} while(FindNextFileW(file, &file_info) != 0);
FindClose(file);
return 0;
}
Answer: The declarations for dir and file can be moved lower, where they are first assigned values. However, since dir is only used in one place, it can be eliminated and the value used directly.
HANDLE file = FindFirstFileW(root_slash.c_str(), &file_info);
When testing file attributes, the check for invalid attributes should be first, and you can combine several individual tests into one:
if (file_info.dwFileAttributes != INVALID_FILE_ATTRIBUTES &&
!(file_info.dwFileAttributes & (FILE_ATTRIBUTE_READONLY | FILE_ATTRIBUTE_OFFLINE | FILE_ATTRIBUTE_SYSTEM)) &&
name != L"." && name != L"..")
Since there are three attributes you want to ignore, you can define a constant to hold them rather than list them out in your if statement.
Incidentally, since cFileName is a C array, you need to assign it to a string variable to be able to use the equality comparisons with it. Or you could leave it in cFileName and use basic comparisons like wcscmp (the wide-character strcmp). Since there are two similar strings that are very short, you could also do direct character comparison, but that makes the code larger and harder to understand and should only be done when absolutely necessary.
Since you decrement limit with each recursive call to get_files, you reduce the limit for the current directory as well. If your initial directory has 16 subdirectories, your search will skip the 10th, may skip some of the subdirectories of the first 9, and will search all of the directories under the 11th and later subdirectories. You should use
get_files(path, ext, log, limit - 1);
instead. If any of the recursive calls fail in some way (return nonzero) you ignore the error and keep going. This is reasonable in this instance, but does make the "recursion limit reached" error somewhat pointless since it will never be returned to the caller. This one value should probably be handled differently, so that if any search reaches the recursion limit, this value is returned to the original caller to indicate that the results are incomplete.
Potentially more serious is that your extension comparison is case sensitive. A file called "IMAGE.JPG" will not be listed, because the extension is in uppercase and you're looking for a lowercase one.
Unless there's an absolute need for it, you should omit the std::flush from the log outputs. This will reduce the performance as every filename will be written one at a time, instead of in larger chunks. | {
"domain": "codereview.stackexchange",
"id": 36401,
"tags": "c++, recursion, file-system, windows, winapi"
} |
Why do we write T cos theta = mg for this Q instead of mg cos theta = T | Question:
For this question, the author of the textbook writes that $T \cos 45^\circ = mg$,
but he also states that $mg \cos 45^\circ$ is not equal to $T$. I noticed this because that is the way I solved it first.
My Q1 is: why is $mg \cos 45^\circ$ not equal to the tension $T$?
The main point to notice is that $T \cos 45^\circ = mg$ doesn't give the same value of $T$ as $T = mg \cos 45^\circ$.
Why I think $mg \cos 45^\circ$ should be equal to $T$:
There is no acceleration along $T$, therefore the acceleration there is zero.
$T$ and $mg \cos 45^\circ$ also lie along the same line.
Also, as Q2: can we say that the tangential acceleration $a_t$ is along the z axis for this question?
Name of author: Dc pandey
Answer:
Isn't this a math problem that can be rearranged as
$$T \cos 45^\circ = mg \;\Longrightarrow\; \frac{T \cos 45^\circ}{\cos 45^\circ} = \frac{mg}{\cos 45^\circ} \;\Longrightarrow\; T = \frac{mg}{\cos 45^\circ}$$
There are two accelerations - centripetal (towards the center) and tangential (along the circumference).
The planes and axes of the forces | {
"domain": "engineering.stackexchange",
"id": 4083,
"tags": "mechanical-engineering, applied-mechanics"
} |
How to construct Feynman diagram for decay of pseudoscalar $\phi$ meson? | Question: I am struggling to understand the construction of the Feynman diagrams for the following decays:
The answer is given as follows:
I do not understand how the quark anti-quark pair in each diagram is produced. In the two weak interactions, the W boson changes the quark flavour but what is producing the other quarks? In the other interaction there is no mediating particle at all. Is it possible the answer is wrong and a gluon is missing? Is anyone able to explain these diagrams?
Answer: The quark anti-quark pair can come from a gluon radiated from one of the other quarks or a photon radiated from any of the intermediate or final state particles. These "pair production" parts of Feynman diagrams are sometimes omitted because there are several options for this with the same number of vertices. | {
"domain": "physics.stackexchange",
"id": 78045,
"tags": "particle-physics, standard-model, feynman-diagrams, quarks, mesons"
} |
Is maximum matching problem equivalent to maximum independent set problem in its dual graph? | Question: A hypergraph $H = (V,E)$ consists of a set $V = \{v_1, v_2, \cdots, v_n\}$ of vertices and a set $E = \{e_1, e_2, \cdots , e_m\}$ of edges, each being a subset of $V$.
A subset $M \subseteq E(H)$ is a matching if every pair of edges from $M$ has an empty intersection.
The dual $H^*$ of $H$ is a hypergraph whose vertices and edges are interchanged, so that the vertices are given by $\{e_1, e_2, \cdots , e_m\}$ and whose edges are given by $X = \{X_1, X_2, \cdots, X_n\}$ where $X_j = \{e_i | v_j \in e_i \}$, that is $X_j$ is the collection of all edges containing $v_j$.
My question: Is maximum matching problem equivalent to maximum independent set problem in its dual graph?
Are both NP-hard and cannot be approximated to a constant factor in polynomial time (unless P = NP)?
Thank you!
Answer: To start with possible NP-hardness (where for each problem, we want a matching/independent set of size at least $k$):
Independent set is NP-hard on "normal" graphs (and also on hypergraphs)
Maximum matching is polynomial-time solvable on "normal" graphs, see the wikipedia page on matching.
Maximum matching is NP-hard in hypergraphs (as shown in this wikipedia page, it is even hard for hypergraphs where each edge contains only 3 vertices).
I believe both problems are equivalent in the following sense: set $S \subseteq E(H)$ is a matching in $H$, if and only if $S$ forms an independent set in $H^*$.
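That equivalence can be checked mechanically on a small example (Python sketch; here "independent" is taken in the strong sense that no dual edge contains two chosen vertices, which is exactly the disjointness condition defining a matching):

```python
from itertools import chain, combinations

def dual_edges(vertices, edges):
    """X_j = set of indices of the original edges that contain vertex v_j."""
    return [frozenset(i for i, e in enumerate(edges) if v in e) for v in vertices]

def is_matching(edges, picked):
    return all(edges[i].isdisjoint(edges[j]) for i, j in combinations(picked, 2))

def is_independent(dual, picked):
    # strongly independent: no dual edge X_j contains 2+ picked vertices
    return all(len(X & picked) <= 1 for X in dual)

# Check the claim on a small hypergraph:
V = [1, 2, 3, 4]
E = [frozenset({1, 2}), frozenset({3, 4}), frozenset({2, 3})]
D = dual_edges(V, E)
subsets = chain.from_iterable(combinations(range(len(E)), r) for r in range(len(E) + 1))
agree = all(is_matching(E, s) == is_independent(D, frozenset(s)) for s in subsets)
```

For instance, edges 0 and 2 above share vertex 2, so {0, 2} is neither a matching in H nor independent in H*.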
(If you need further explanation or if this is not your definition of equivalence, please clarify) | {
"domain": "cs.stackexchange",
"id": 8286,
"tags": "complexity-theory, graphs, approximation"
} |
Top Down Insertion in a B Tree | Question: I have a B-Tree of order 5, so the number of keys satisfies $\lceil n/2 \rceil - 1 \leq \text{keys} \leq n - 1$ and the number of children satisfies $\lceil n/2 \rceil \leq \text{children} \leq n$. Am I doing it right? So a full node would have 4 keys; how do I split that? Because when I split that full node, I get an uneven number of keys in the children.
Example:
$\{G\}$ at level $0$,
$\{A, C, E\}$ and $\{H, K, N, Q\}$ at level $1$
Here $\{H, K, N, Q\}$ is a full node; when I insert T, I must pre-split $\{H, K, N, Q\}$, and I get only one key in one of the subtrees, which is wrong as the constraints above say the number of keys must be between 2 and 4.
What am I doing wrong?
Answer: It seems you are doing nothing wrong. Like you say, for a B-tree of order $n$ the number of children is between $\lceil n/2 \rceil$ and $n$, the number of keys is one less.
For $n=5$, keys are between $2$ and $4$. When you add a key to a node of size $4$, you get two new nodes of size $2$ and one new key which is pushed upwards. When this happens at the root, a new root with a single key is formed.
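That split step can be sketched for a leaf node (Python, names mine):

```python
def insert_into_leaf(keys, new_key, max_keys=4):
    """Insert new_key into a sorted leaf of a B-tree of order 5.

    Returns ('ok', keys) if the node still fits, otherwise
    ('split', left, middle, right) with middle pushed up to the parent."""
    keys = sorted(keys + [new_key])
    if len(keys) <= max_keys:
        return ('ok', keys)
    mid = len(keys) // 2  # median of the 5 overflowing keys
    return ('split', keys[:mid], keys[mid], keys[mid + 1:])
```

Inserting T into {H, K, N, Q} gives ('split', ['H', 'K'], 'N', ['Q', 'T']): both halves have the minimum 2 keys, and N goes up.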
Your example. Add $T$ to the tree. At the root $G$ go right, to $\{H,K,N,Q\}$. With $T$ this becomes $\{H,K,N,Q,T\}$ which is too heavy and we split into $\{H,K\}$ and $\{Q,T\}$. Middle key $N$ is pushed upwards to the root. In the new configuration the root $\{G,N\}$ with two keys has three children $\{A,C,E\}$, $\{H,K\}$ and $\{Q,T\}$. | {
"domain": "cs.stackexchange",
"id": 2878,
"tags": "data-structures, search-trees"
} |
What's the difference between linearly polarised and plane-polarised waves? | Question: To explain polarisation, my book gives an example of a transverse wave in a string, and explains as:
Since each point on the string moves on a straight line, the wave is also referred to as a linearly polarised wave. Further, the string always remains confined to the x-y plane and therefore it is also referred to as a plane polarised wave
The image given is somewhat like this:
The definitions for both these terms are different, so it seems to me that they are not same. But I wasn't able to find an example which illustrates the difference between these two.
I found this Quora question, but the answers don't seem convincing.
So what exactly is the difference between a linearly polarised wave and a plane-polarised wave? Because according to me, a linearly polarised wave will oscillate in only one plane.
Answer: A linearly polarized wave is the same thing as a plane polarized wave.
Why do we call it linearly polarized?
Because the oscillation takes place along a line.
Why do we call it plane polarized?
Because the oscillation takes place along one axis while the wave travels forward along another axis. Two axes are used, so we call it a plane. | {
"domain": "physics.stackexchange",
"id": 62375,
"tags": "optics, waves, electromagnetic-radiation, terminology, polarization"
} |
Harmonics of 50 Hz | Question: I have a signal which clearly shows harmonics of 25 Hz (or 50 Hz?), this is actually my question.
I do not think the 50 Hz resonance comes from power supply leakage, since my power supply works at some tens of kHz. I also do not have an explanation for the 25 Hz peak; I rather think it could be a manifestation of the 50 Hz which probably comes from the power line, however the 25 Hz magnitude is dominating...
Have you ever faced a spectra like this?
Since I am filtering my data with a notch digital filter, I would like to know the optimal way to reject the frequency with major influence. A 25Hz and 50Hz centered notch work, however I would like to understand what could be the cause of this noise resonances.
Thanks!
Answer:
It is almost certainly from the mains power. Even if your power supply is switching at a high frequency you can still pick up interference on the probes or sensor or other circuitry. Get a long electrical lead and go outside and see if the problem persists. – geometrikal
what @geometrikal said. Even if you have a really good power supply, some -60dB of what happens on the grid side will leak to your internal supply power. Now, guessing from your diagram ($f_\text{max} \approx 2\,\text{kHz}$, $\Delta_f = 2\,\text{Hz}$) I'd say I'm looking at a signal sampled at 4kHz, which has been subjected to a 2048-point FFT, abs(), semilogx plot.
So the plot contains half a second of information, and yet your highest peak is around $10^{-9}$. (By the way, I'm assuming that the data behind this plot are processed samples, which don't directly represent the full ADC span as $[-1;+1]$. If they do, you should probably use an amplifier -- I don't think you have an ADC with a dynamic voltage range of 220dB -- that would be unusual.)
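That reading of the plot can be checked with a toy signal (assuming the round numbers $f_s = 4096\,\text{Hz}$ and a 2048-point FFT, so $\Delta_f = 2\,\text{Hz}$ and a 50 Hz mains tone lands exactly on bin 25):

```python
import numpy as np

fs, N = 4096, 2048                 # assumed sample rate and FFT length
df = fs / N                        # frequency resolution: 2 Hz per bin
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 50.0 * t)   # a pure 50 Hz "mains" tone

spectrum = np.abs(np.fft.rfft(x))
peak_bin = int(np.argmax(spectrum))
peak_freq = peak_bin * df          # recovers 50 Hz on an exact bin
```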
So, especially if the observed phenomena might be shorter than half a second, the relative strength of the power line harmonics might simply be caused by them being there throughout the whole measurement. This all comes down to you explaining (and maybe understanding) the nature of the signals you're visualizing. As a side note, I'd say the plot doesn't do a very good job at that -- you can barely see how the power in the highest frequencies seems to be higher than in the rest of the spectrum, and all I can say from this plot is "over the 100Hz to 2kHz range, power varies", which is not really much information. | {
"domain": "dsp.stackexchange",
"id": 3064,
"tags": "filters, signal-detection, noise"
} |
Grounding system of conducting plates | Question: So, I always make mistakes on problems such as this (the grounding part), so I'm hoping someone could really explain to me how the process works.
There are $n$ large parallel plate conductors carrying charges $Q_1, Q_2$,...... $Q_n$ respectively.
If the left conductor (conductor $Q_1$) is grounded, then we
have to find the magnitude of charge flowing from plate to ground.
If any conductor is grounded, we have to find the magnitude of
charge flowing from plate to ground.
I actually managed to solve the problem.
Question 1:
Initial charge on grounded conductor is $Q_1$
Assuming that both outermost plates (i.e., left surface of $Q_1$ and right surface of $Q_n$) have zero charge after grounding, we can write out the charge distributions as follows:
Final charge on grounded conductor is -($Q_2+Q_3+....Q_n$)
So, difference is $-(Q_1+Q_2+...Q_n)$ which will cause $+(Q_1+....Q_n)$ to flow from ground.
Question 2:
I assumed that some $r^{\text{th}}$ plate (plate $Q_r$) is grounded.
Assuming that both outermost plates (i.e., left surface of $Q_1$ and right surface of $Q_n$) have zero charge after grounding, and writing out the charge distributions for zero field:
Initial charge on grounded conductor $Q_r = Q_r$
Final charge (on left surface) = $-(Q_1 + Q_2 + \dots + Q_{r-1})$
Final charge (on right surface) = $-(Q_{r+1} + Q_{r+2} + \dots + Q_{n-1} + Q_n)$
Total final charge = $-(Q_1 + Q_2 + \dots + Q_n)$ (with $Q_r$ missing from the sum)
Change in charge = $-(Q_1 + Q_2 + .... + Q_n)$
So, charge flown = $(Q_1 + Q_2 + .... + Q_n)$
Numerically, the answers work out. But I don't get:
Why do the outermost charges become zero when any conductor, not even necessarily the first, is grounded?
I'd appreciate an intuitive explanation of why the charge that flows from the ground is independent of which conductor is grounded.
I thought of one approach:
Approach: as noted, the approximation used is that all the plates are at the same potential. If any one plate is grounded then all plates are at zero potential, and the sum of their charges is zero.
Is my approach correct? Are there any other approaches?
What will happen when more than one plate is grounded?
Answer: Charges tend to distribute in such a way that the total energy of the electric field is minimized. The energy density of the electric field is proportional to $E^2$.
Take a look at the system of your plates from a big distance. It looks like one thin plate, and it produces an electric field proportional to its total charge. It does have some internal structure, and the electric field inside is different, but the volume of all the internals is arbitrarily small. The total energy of the electric field is therefore determined by the electric field outside of the system of plates! The minimum of this energy is achieved when the total charge of the system of plates is zero. This state is reachable: any charge can come to/from the grounded plate. And we know it is a minimum because it is zero!
So, the result is: whenever you ground one of the plates (any of them!), the total charge of all the plates becomes zero.
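This conclusion can be spot-checked numerically. The sketch below (my own bookkeeping, not from the answer) parametrizes the final state by the charges on the facing gap surfaces, with the outer surfaces forced to zero, and confirms that the charge change on the grounded plate is $-(Q_1 + \dots + Q_n)$ no matter which plate is grounded:

```python
def grounded_plate_charge(Q, r):
    """Final charge on plate r (0-based) after it is grounded.

    Bookkeeping: g[j] is the charge on the right surface of plate j
    (the facing surface of plate j+1 carries -g[j]).  Grounding kills
    the field outside, so both outer surfaces are uncharged, and
    charge conservation on each ungrounded plate fixes every g[j].
    """
    n = len(Q)
    g = [0.0] * (n - 1)
    for j in range(r):              # gaps to the left of plate r
        g[j] = sum(Q[:j + 1])
    for j in range(r, n - 1):       # gaps to the right of plate r
        g[j] = -sum(Q[j + 1:])
    left = -g[r - 1] if r > 0 else 0.0    # left surface of plate r
    right = g[r] if r < n - 1 else 0.0    # right surface of plate r
    return left + right

Q = [3.0, -1.0, 4.0, 2.0]           # arbitrary example charges
changes = [grounded_plate_charge(Q, r) - Q[r] for r in range(len(Q))]
```

For these example charges the change is $-8$ (minus the total charge) for every choice of grounded plate.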
After that it's obvious why the charge of outermost surfaces is zero: electric field outside the system of plates is zero, the electric field inside any plate (including outermost ones) is zero, so the charge on outermost surfaces should be zero. | {
"domain": "physics.stackexchange",
"id": 37336,
"tags": "electrostatics, electric-fields, potential, capacitance, conductors"
} |
Degeneracies in 2D Infinite square well | Question: In quantum mechanics, a particle of mass $m$ in a 2D infinite square well has an energy spectrum of
$$E_{n_x,n_y} = \frac{n_x^2 \hbar^2 \pi^2}{2 m L_x^2} + \frac{n_y^2 \hbar^2 \pi^2}{2 m L_y^2}$$
where $n_x$ and $n_y$ are positive integers, and $L_x$ is the width of the well in the $x$ direction, while $L_y$ is the width of the well in the $y$ direction.
If $(L_x / L_y)^2$ is irrational, it is straightforward to show that there are no states with degenerate energy levels by assuming that one exists, and showing that $(L_x / L_y)^2$ can then be written as a ratio of integers, arriving at a contradiction.
If $L_x / L_y = p/q$ is rational, then choosing $n_x = Np$ and $n_y = q$ gives the same energy as $n_x' = p$ and $n_y' = Nq$ (with integer $N$), and so there do exist degenerate states.
I haven't been able to figure out what happens if $L_x / L_y$ is irrational, but $(L_x / L_y)^2$ is rational. Are degenerate states possible in such a case?
Answer: Ok, so knowing that degeneracies can exist (thanks Paul G!), I managed to cook up the following.
Let $(L_x / L_y)^2 = p/q$ be rational. Then $L_x / L_y = \sqrt{p/q} = \sqrt{pq} / q \equiv \sqrt{p'} / q$. Or alternatively, we can just write $L_x = L_y \sqrt{p} / q$ by letting $p' \rightarrow p$.
The task is then to find integers $n_x, n_y, n_x', n_y'$ that satisfy
$$
q^2 n_x^2 + p n_y^2 = q^2 n_x'^2 + p n_y'^2.
$$
We can always scale out the $q$ by letting $n_y = N_y q$ and $n_y' = N_y' q$ for integers $N_y$ and $N_y'$, so we are looking for solutions to
$$
n_x^2 + p N_y^2 = n_x'^2 + p N_y'^2.
$$
This can be rearranged to form
$$
n_x^2 - n_x'^2 = p (N_y'^2 - N_y^2)
\\
(n_x + n_x')(n_x - n_x') = p (N_y' + N_y)(N_y' - N_y).
$$
Without loss of generality, choose $n_x > n_x'$ and $N_y' > N_y$, so that all factors here are positive integers. Letting $a = n_x + n_x'$, $b = n_x - n_x'$, $c = N_y' + N_y$ and $d = N_y' - N_y$, we are then looking for solutions to
$$
ab = p cd.
$$
We can always let $a = pA$, so that we are left with solving the system $Ab = cd$. This can always be solved by choosing an $A$ and a $b$ such that at least one of $A$ and $b$ is composite, and constructing $c$ and $d$ by interchanging factors. In order to get $n_x$ etc to be integers, we also need $A$ and $b$ to be both odd or both even. (For further details on these types of systems, see http://www.inference.eng.cam.ac.uk/mackay/abstracts/sumsquares.html.)
The equation $ab = cd$ also arises in the case that $L_x = L_y$ (simply choosing $p = q = 1$), and describes "accidental degeneracies". Hence, for $(L_x / L_y)^2$ rational, accidental degeneracies will also always arise.
So, to summarize:
$L_x / L_y$ rational: degeneracies from both symmetry and accidental degeneracies
$(L_x / L_y)^2$ rational, $L_x / L_y$ irrational: accidental degeneracies only
$(L_x / L_y)^2$ irrational: no degeneracies possible | {
"domain": "physics.stackexchange",
"id": 34655,
"tags": "quantum-mechanics, homework-and-exercises, wavefunction, schroedinger-equation, potential"
} |
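The middle case of the summary above can be spot-checked by brute force. Taking $L_x/L_y = \sqrt{2}$ (so $(L_x/L_y)^2 = 2$ is rational while the ratio itself is irrational), the energies are proportional to $n_x^2 + 2 n_y^2$ after clearing denominators, and accidental collisions such as $(n_x, n_y) = (5,1)$ vs $(3,3)$ do appear:

```python
from collections import defaultdict

# L_x / L_y = sqrt(2): after clearing denominators the energies are
# proportional to n_x^2 + 2*n_y^2, so equal integers mean degeneracy.
levels = defaultdict(list)
for nx in range(1, 30):
    for ny in range(1, 30):
        levels[nx * nx + 2 * ny * ny].append((nx, ny))

degenerate = {e: states for e, states in levels.items() if len(states) > 1}
```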
Symmetric functions in NC¹ | Question: A boolean function $f \colon \{0,1\}^n \rightarrow \{0,1\}$ is symmetric if $f(x)$ depends only on the number of $1$s in $x$.
It is known that every boolean function is in $\mathrm{NC}^1$, i.e. there is a circuit of depth $O(\log n)$ computing it.
What is known about the constant inside the $O()$ notation? Specifically, can one construct, for every $c$, a symmetric function requiring a circuit of depth at least $c \log n$? Or there is some constant $c_0$ such that every symmetric function has a circuit of depth at most $c_0\log n$?
Answer: We can treat the input $x$ as a boolean array. Using an $O(\log n)$ depth sorting network (such as the AKS network), we can sort $x$ in nonincreasing order. Calling the result $y$, using $O(1)$ more depth we can compute $z_i = y_i \land \lnot y_{i+1}$ (extended in the right way to the boundary). The vector $z_0,\ldots,z_n$ is the indicator vector of the Hamming weight of $x$. At this point you can compute any symmetric function using $O(\log n)$ more depth.
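The construction can be simulated in Python (sorting stands in for the AKS network; the helper name and the weight-table representation are my own):

```python
def symmetric_via_sorting(bits, f_of_weight):
    """Evaluate a symmetric Boolean function the way the circuit does:
    sort the bits in nonincreasing order, turn the sorted vector into
    the indicator of the Hamming weight, then select f's value there."""
    n = len(bits)
    ypad = [1] + sorted(bits, reverse=True) + [0]   # boundary handling
    # z[w] = 1 exactly when the Hamming weight of the input is w
    z = [ypad[i] & (1 - ypad[i + 1]) for i in range(n + 1)]
    out = 0
    for w in range(n + 1):          # OR over weights of z[w] AND f(w)
        out |= z[w] & f_of_weight[w]
    return out

# Example: majority on 5 bits, i.e. f(w) = 1 iff w >= 3.
maj5 = [1 if w >= 3 else 0 for w in range(6)]
```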
This shows that there is a universal constant $C$ such that all symmetric functions can be computed in depth $C\log n$ using constant fan-in circuits. | {
"domain": "cs.stackexchange",
"id": 17626,
"tags": "complexity-theory, circuits, nc"
} |
How to calculate torque needed to stop a rotating beam over x seconds | Question: I have a large boom of a radio antenna weighing $3,500 kg$ which is rotating at $0.5 RPM$. The antenna is a T shape and the boom radius is $14m$. The power to the motor is cut off suddenly and the motor acts as a rigid brake. The boom is not rigid and it continues to move/bend in the same direction for 1 second (at which point it bounces back and forth until the oscillation damps out). I am trying to work out the max torque this would apply to the gearbox shaft, assuming there is no friction.
For angular movement, I have designated the centroid of each boom radius as a point load (so a point load at a radius of 7 m on both ends of the boom).
I can work out the angular momentum, which is $m \cdot r \cdot v$; this works out to $1750\,\mathrm{kg} \cdot 7\,\mathrm{m} \cdot 0.367\,\mathrm{m/s}$, or $4489.86\,\mathrm{kg\,m^2/s}$, per point load.
This is the part I am not sure about, though. I have read that torque is simply the change in angular momentum over time. So would that be $4489.86\,\mathrm{N \cdot m}$?
Would this be an accurate way to determine the torque in the shaft? This is for a real scenario so If I have missed any big considerations please let me know.
Answer: You make one major false assumption, which is that the torque is uniform over the 1-second deceleration of the beam. If the motor is truly a rigid brake (doesn't back-drive at all) then the bottom of the beam stops instantaneously (infinite torque) and the far end of the beam stops 1 second later.
To get the shock load right as you stop may require testing or detailed analysis of the rigidity of your gear box, etc.
Even if we assume your idealized scenario where all the mass is at a radius of 7 meters, attached to the gearbox by a massless beam, the bending force varies with the deflection. So zero deflection (time $t=0$) means zero force on the beam, hence zero torque, and maximum deflection means maximum force and maximum torque. I suggest using beam-in-bending calculations to give you the torque curve vs. time.
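For reference, the angular-impulse calculation from the question gives only the average torque over the stop, not the peak that this answer is warning about. A sketch with both point loads included (so $I = 2 m r^2$) and consistent SI units:

```python
import math

# Average braking torque from angular impulse: tau_avg = L / dt.
m_half = 1750.0                     # kg, half the boom as a point load
r = 7.0                             # m, centroid radius of each half
omega = 0.5 * 2.0 * math.pi / 60.0  # 0.5 rpm in rad/s

I = 2.0 * m_half * r**2             # both point loads, kg*m^2
L = I * omega                       # angular momentum, kg*m^2/s
dt = 1.0                            # s, assumed uniform stopping time
tau_avg = L / dt                    # N*m -- an average, NOT the peak
```

This gives roughly $9\,\mathrm{kN \cdot m}$ average; the peak during the bounce can be much higher, as described above.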
https://www.engineersedge.com/beam_calc_menu.shtml | {
"domain": "engineering.stackexchange",
"id": 2117,
"tags": "torque"
} |
How to perform one hot encoding on multiple categorical columns | Question: I am trying to perform one-hot encoding on some categorical columns. From the tutorial I am following, I am supposed to do LabelEncoding before One hot encoding. I have successfully performed the labelencoding as shown below
#categorical data
categorical_cols = ['a', 'b', 'c', 'd']
from sklearn.preprocessing import LabelEncoder
# instantiate labelencoder object
le = LabelEncoder()
# apply le on categorical feature columns
data[categorical_cols] = data[categorical_cols].apply(lambda col: le.fit_transform(col))
Now I am stuck with how to perform one hot encoding and then join the encoded columns to the dataframe (data).
Please how do I do this?
Answer: LabelEncoder is not made to transform the data but the target (also known as labels) as explained here. If you want to encode the data you should use OrdinalEncoder.
If you really need to do it this way:
categorical_cols = ['a', 'b', 'c', 'd']
from sklearn.preprocessing import LabelEncoder
# instantiate labelencoder object
le = LabelEncoder()
# apply le on categorical feature columns
data[categorical_cols] = data[categorical_cols].apply(lambda col: le.fit_transform(col))
from sklearn.preprocessing import OneHotEncoder
ohe = OneHotEncoder()
#One-hot-encode the categorical columns.
#Unfortunately this outputs a sparse matrix instead of a dataframe.
array_hot_encoded = ohe.fit_transform(data[categorical_cols])
#Convert it to a df (densify the sparse matrix first)
data_hot_encoded = pd.DataFrame(array_hot_encoded.toarray(), index=data.index)
#Extract only the columns that didnt need to be encoded
data_other_cols = data.drop(columns=categorical_cols)
#Concatenate the two dataframes :
data_out = pd.concat([data_hot_encoded, data_other_cols], axis=1)
Otherwise:
I suggest you to use pandas.get_dummies if you want to achieve one-hot-encoding from raw data (without having to use OrdinalEncoder before) :
#categorical data
categorical_cols = ['a', 'b', 'c', 'd']
#import pandas as pd
df = pd.get_dummies(data, columns = categorical_cols)
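A minimal demonstration of the pd.get_dummies route on a toy frame (the column and value names here are made up):

```python
import pandas as pd

# Toy frame: one categorical column and one pass-through column.
toy = pd.DataFrame({
    'color': ['red', 'blue', 'red'],
    'keep': [1, 2, 3],
})
encoded = pd.get_dummies(toy, columns=['color'])
```

The result has one indicator column per category ('color_blue', 'color_red') while 'keep' passes through unchanged.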
You can also use drop_first argument to remove one of the one-hot-encoded columns, as some models require. | {
"domain": "datascience.stackexchange",
"id": 11267,
"tags": "scikit-learn, pandas"
} |
Rearranging array elements depending on their indexes | Question: I am (still!) going through these CodingBat exercises for Java. Here is the one I have just done:
Return an array that contains the exact same numbers as the given array, but rearranged so that all the zeros are grouped at the start of the array. The order of the non-zero numbers does not matter. So {1, 0, 0, 1} becomes {0, 0, 1, 1}. You may modify and return the given array or make a new array.
And here is my code:
public int[] zeroFront(int[] nums){
int zeroCount = 0;
int[] resultantArray = new int[nums.length];
int[] noZerosArray = new int[nums.length];
//Count zeros
for (int i = 0; i < nums.length; i++) {
if (nums[i] == 0) {
zeroCount++;
}
}
//Make an array without any zeros
int j = 0;
for (int i = 0; i < nums.length; i++) {
if(nums[i] != 0){
noZerosArray[j] = nums[i];
j++;
}
}
//To resultant array, add zeros first, then add remaining numbers of original
for (int i = 0; i < resultantArray.length; i++) {
if (i < zeroCount) {
resultantArray[i] = 0;
} else {
resultantArray[i] = noZerosArray[i-zeroCount];
}
}
return resultantArray;
}
Please bear in mind I am doing it without importing anything extra like java.util.Arrays etc., as, primarily, this is not accepted by the assessor and, secondarily, I want to get to grips with arrays without importing anything extra yet.
Regarding my code, I would like to know how this can be improved. Is this a good solution? It feels like it could be more efficient, bearing in mind I have three for loops and create two extra arrays.
Even though this one seems trivial, I found it really difficult!
Answer: The trick you are missing here is a simple swap.
All you need to do is keep a "fast" and a "slow" index in the array. The "fast" index scans forwards looking for 0 values. The "slow" index is an insert-point for where the next 0 would belong.
Consider this loop:
public int[] zeroFront(int[] nums){
int slow = 0; // next zero inserted here
for (int fast = 0; fast < nums.length; fast++) {
if (nums[fast] == 0) {
nums[fast] = nums[slow];
nums[slow] = 0;
slow++;
}
}
return nums;
}
That loop finds each zero value and swaps it with the non-zero value sitting at the front of the array, ending with a situation where the zeros are all at the front.
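The same single-pass swap, ported to Python as a quick property check (the port is mine, not part of the review):

```python
def zero_front(nums):
    """Single-pass version of the swap trick: 'slow' marks where the
    next zero belongs; each zero found by 'fast' is swapped there."""
    slow = 0
    for fast in range(len(nums)):
        if nums[fast] == 0:
            nums[fast], nums[slow] = nums[slow], 0
            slow += 1
    return nums
```

Zeros end up grouped at the front, and the multiset of values is preserved, with no extra arrays at all.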
Sometimes, tricks like these are really hard to spot, but, when pointed out, make you think: Neat! | {
"domain": "codereview.stackexchange",
"id": 13090,
"tags": "java, beginner, programming-challenge, array"
} |
Concrete and simple applications for bipartite graphs | Question: I am looking for concrete and simple problems that may be solved using bipartite graphs or bipartite graph properties. Any idea along with explanations are welcome.
Answer: Assignment Problem would be one such example:
There are a number of agents and a number of tasks. Any agent can be
assigned to perform any task, incurring some cost that may vary
depending on the agent-task assignment. It is required to perform all
tasks by assigning exactly one agent to each task and exactly one task
to each agent in such a way that the total cost of the assignment is
minimized.
Hall's Marriage Theorem would be another:
Imagine two groups; one of n men, and one of n women. For each woman,
there is a subset of the men, any one of which she would happily
marry; and any man would be happy to marry a woman who wants to marry
him. Consider whether it is possible to pair up (in marriage) the men
and women so that every person is happy.
The Mutilated Chessboard Problem can be solved using The Hall's Theorem:
Suppose a standard 8x8 chessboard has two diagonally opposite corners
removed, leaving 62 squares. Is it possible to place 31 dominoes of
size 2x1 so as to cover all of these squares? | {
"domain": "cs.stackexchange",
"id": 2750,
"tags": "graphs, bipartite-matching"
} |
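The impossibility in the mutilated chessboard example above comes down to a counting argument that is easy to check in code: each domino covers one white and one black square, but the two removed corners share a color, leaving unequal color counts, so the bipartite graph of white squares vs black squares has no perfect matching.

```python
# 8x8 board with two diagonally opposite corners removed.
squares = {(r, c) for r in range(8) for c in range(8)}
squares -= {(0, 0), (7, 7)}          # opposite corners share a color

white = sum(1 for (r, c) in squares if (r + c) % 2 == 0)
black = len(squares) - white
# A domino covers one square of each color, so white != black means
# 31 dominoes cannot cover the remaining 62 squares.
```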