| anchor | positive | source |
|---|---|---|
Why are holes in a semiconductor associated with a net positive charge? | Question: I am clear with the idea that holes move in the direction of the electric field because of the motion of bound electrons that gain just enough energy to jump into the next hole. But I am confused about assigning holes a positive charge.
I understood that in an intrinsic semiconductor the removal of an electron is associated with less screening of some protons, so there is a positive-charge effect that can be associated with the location the electron left, i.e., the hole. So some people and textbooks say the hole has a net positive charge, like what is shown in my book and in the answer to the question "Why do holes have an effective positive charge?"
But my question is: if we apply this positive charge to the holes in a p-type semiconductor, then the semiconductor is no longer neutral, which contradicts the property given in my textbook. Please consider explaining how, in this case, we can associate the hole with a positive charge.
In the picture above we can see how the neutral charge of the semiconductor is affected when we associate a positive charge with holes. (Note: in the picture, 10 protons are already screened by 10 electrons, so it shows +4.)
picture of textbook that says semiconductor should be neutral.
Answer: Well, think of it this way,
you've got a bunch of atoms all lined up, each having an equal number of protons and electrons (say, Si atoms), and hence neutral as a whole
Now, you add a trivalent atom (which, by the way, is also neutral)-like Al-and this leaves a "space" in the arrangement, as seen in your textbook.
If an electron in the neighborhood jumps into the gap, it leaves a gap at the point where it originally was. Since the electron is no longer in the gap, the previously neutral region becomes a "positive gap", or what we call a hole.
But at the same time, the Al atom becomes negatively charged, as it has 14 electrons and 13 protons.
Therefore, the net charge on the semiconductor adds up to zero. | {
"domain": "physics.stackexchange",
"id": 65794,
"tags": "semiconductor-physics"
} |
Good Software Projects for Aspiring Engineers | Question: With the pandemic going on, our school has decided to officially shutdown all sponsored club meetings. Now that we are all digital, our robotics club was looking for projects that students can do at home on their computers.
Do you have any suggestions for projects that you think do a good job at teaching important robotics concepts? So far many of the ideas we have revolve around computer vision and implementing path finding algorithms, however we just want to make sure we give our students as many options as possible.
Thanks!
Answer: I guess famous toy problems in the robotics/control literature which could easily be implemented in Matlab or Python are the linear inverted pendulum, the two-link robotic arm, and the cart-pole problem. If you need any further help with these topics, let me know. | {
"domain": "robotics.stackexchange",
"id": 2109,
"tags": "control, slam, pid, computer-vision, software"
} |
how to check all parameters of running nodes? | Question:
I'm playing with ROS and my Robotino with the robotino stacks.
I launch robotino_node i.e. launch/robotino_node.launch and edit the following line to suit my system:
"param name="hostname" value="192.168.5.5"
changed to
"param name="hostname" value="192.168.1.12"
however, after launching this node (robotino_node) it showed several parameters, including "hostname", but I don't know how to check its value to confirm it has been set.
According to RobotinoNode.cpp, this value is fixed to "192.168.5.5"; I don't know whether the robotino_node.launch file can re-set this value.
Originally posted by roskidos on ROS Answers with karma: 110 on 2012-10-12
Post score: 0
Answer:
OK, I got the answer now. It's "rosparam"... :D I had skipped this tutorial. It's my mistake.
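For future readers, the parameter server can be inspected with the rosparam command-line tool. A quick sketch (the private-namespace path below is an assumption based on the node name in the question; confirm it against the output of `rosparam list`):

```shell
# List every parameter currently loaded on the parameter server
rosparam list

# Read back the value the node picked up; the exact namespace is an
# assumption here -- confirm it against the `rosparam list` output
rosparam get /robotino_node/hostname
```

These commands require a running ROS master, so they are shown as a command fragment rather than a runnable script.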
Originally posted by roskidos with karma: 110 on 2012-10-13
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 11349,
"tags": "ros, robotino"
} |
Oxidation states of boron | Question: I have just been looking at https://en.wikipedia.org/wiki/List_of_oxidation_states_of_the_elements and found that boron has a -5 oxidation state. I would like to know which boron compounds form this oxidation state.
Answer: As I remember, I have read somewhere that $\ce{Al3BC}$ is known to have boron in the -5 oxidation state.
You can also refer to this PDF, page no. 139.
(PDF in German) | {
"domain": "chemistry.stackexchange",
"id": 11301,
"tags": "inorganic-chemistry, oxidation-state, boron-family"
} |
Why should a solution to the wave equation be finite? | Question:
A function which represents a wave must satisfy the following
differential equation:
$$\frac{\partial^2 y}{\partial t^2} = k\frac{\partial^2 y}{\partial x^2}$$
Any function that satisfies the wave differential equation represents
a wave provided that it is finite everywhere at all times.
What does "it is finite everywhere at all times" mean?
Question:Which of the following functions represent a wave?
a) $(x - vt)^2$
b) $\ln(x + vt)$
c) $e^{-(x - vt)^2}$
d) $(x + vt)^{-1}$
Only option (c) is given as the answer though all 4 satisfy the differential equation.
I believe I did not understand the significance of "the function should be finite everywhere at all times", which is why I am unable to answer the aforementioned question.
Answer: It's semantics. Whoever wrote the problem prefers to refer to a wave as "A function which satisfies the wave equation and which is bounded" instead of "a function which satisfies the wave equation".
Unfortunately there are bound to be conventions which you disagree with, but in academics (undergrad and lower) the only way to deal with it is to figure out which conventions the professor (or problem writer) is working with before you read the problems. It's too easy for conversations on convention to turn into, "technically, it is a wave even though it's not physical" countered with "technically, it's not a wave because it's not bounded." The best you can do is recognize an issue in terminology ASAP and deal with it in a constructive way.
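A quick numerical way to see which of the four options stays finite is to evaluate each candidate over a wide range of the argument $u = x \mp vt$ (a sketch of my own, not a proof; the sample points and threshold are arbitrary illustrative choices):

```python
import math

# u stands for (x - vt) or (x + vt); sweep it over a wide range
candidates = {
    "a) (x - vt)^2":       lambda u: u * u,
    "b) ln(x + vt)":       lambda u: math.log(u) if u > 0 else float("-inf"),
    "c) exp(-(x - vt)^2)": lambda u: math.exp(-u * u),
    "d) 1/(x + vt)":       lambda u: 1.0 / u if u != 0 else float("inf"),
}

for name, f in candidates.items():
    values = [abs(f(u)) for u in [-1e6, -1.0, 1e-9, 1.0, 1e6]]
    bounded = max(values) < 1e3  # crude threshold for this sweep
    print(name, "bounded on sample:", bounded)
```

Only option (c) stays bounded on the sweep; (a), (b) and (d) all blow up somewhere, which is exactly the "finite everywhere at all times" criterion the problem writer had in mind.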
A better statement, which is more objectively true, would be: "Functions like $(x-vt)^2$ solve the wave equation, but generally don't come up and are not useful in physical solutions." | {
"domain": "physics.stackexchange",
"id": 18732,
"tags": "homework-and-exercises, waves"
} |
Behaviour of $i$ in polarization identity in complex Hilbert space | Question: I'm working through this video on youtube: https://www.youtube.com/watch?v=fjj6XNRtA40
But there's a step she makes that I don't understand, and when I try to do the proof myself I get stuck. This is what the video says:
$$ ||\Psi-i\Phi||^2 = ||\Psi||^2 + ||\Phi||^2 - \langle\Psi,-i\Phi\rangle - \langle -i\Phi,\Psi\rangle $$
She says that the $-i\Phi$ term turns into $+||\Phi||^2$ because $i \cdot i = -1$ and that cancels the negative sign. As far as I know, the minus sign already cancels itself, so there is a triple minus sign and the expression should be:
$$ ||\Psi-i\Phi||^2 = ||\Psi||^2 - ||\Phi||^2 - \langle\Psi,-i\Phi\rangle - \langle -i\Phi,\Psi\rangle $$
Obviously I'm wrong, but what step am I missing?
edit: one more question. At one point she makes the step
$$ \langle\Psi,i\Phi\rangle = i\langle\Psi,\Phi\rangle $$
And I don't understand why that is not negative:
$$ \langle\Psi,i\Phi\rangle = -i\langle\Psi,\Phi\rangle $$
because of the property
$$ \langle x,y\rangle = \overline{\langle y,x\rangle} $$
I would say
$$ \langle x,iy\rangle = \overline{\langle iy,x\rangle} = \langle -iy,x\rangle = -i\langle y,x\rangle = -i\overline{\langle x,y\rangle} = -i\langle x,y\rangle $$
Answer: Hint:
$$\left<a\phi\mid b\psi\right>=a^{\ast}b\left<\phi\mid\psi\right>$$ | {
"domain": "physics.stackexchange",
"id": 58811,
"tags": "hilbert-space, complex-numbers"
} |
Conflicting models about how energy transferred in an electric circuit | Question: I've been studying DC circuits recently, but I've run into some confusion concerning how batteries deliver energy to circuit elements.
From what I've read online, it seems that energy in circuits comes from waves that propagate through charges rather than the charges themselves. However, my book seems to convey that charges "pick up" potential energy at a battery and lose it in circuit elements.
These models of how energy is transferred seem to contradict each other. Which is right? And if the wave theory is right, then what does current or voltage have to do with it? Rather, how do current, voltage, and resistance explain the strength and properties of this wave?
If this matters, I'm using Halliday-Resnick-Krane 5th Edition Physics textbook.
Answer:
So assuming charge carriers are having their energy reduced through
circuit elements, what is that energy, exactly? I know the voltage
drop means there's a dip in potential energy per charge between those
points, but what is that energy converted into?
Like @Aaron Stevens, I've never heard anything about waves in DC circuits. I confess to not knowing anything about the non-zero Poynting vectors that @Alfred Centauri mentioned.
Some of the potential energy that a battery supplies to electrons is inevitably converted to heat in the resistance that all circuit elements have according to
$$E_{R}=I^{2}Rt$$
Some of the potential energy of the electrons may be converted to energy stored in the magnetic field of any inductance in the circuit, according to
$$E_{L}=\frac{LI^2}{2}$$
And finally some of the potential energy of the electrons may be stored as potential energy in the electric fields of capacitors according to
$$E_{C}=\frac{CV^2}{2}$$
Also, if a circuit has no elements, does the charge not lose any
energy? That seems intuitive, but I know that goes against KVL.
At a minimum, there has to be some circuit resistance. If nothing else, all real batteries have internal resistance. If there was no circuit resistance, the charge would accelerate without opposition under the influence of the electric field, resulting in limitless current.
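For concreteness, here is a small numeric sketch of the three energy expressions above; all component values are illustrative assumptions, not taken from the question:

```python
# Energy dissipated in a resistor, stored in an inductor, and stored in a
# capacitor, using the three expressions from the answer.
# All values below are illustrative.
I = 2.0    # current, A
R = 5.0    # resistance, ohm
t = 10.0   # time, s
L = 0.1    # inductance, H
C = 1e-3   # capacitance, F
V = 12.0   # capacitor voltage, V

E_R = I**2 * R * t   # heat dissipated in the resistor (J)
E_L = L * I**2 / 2   # energy in the inductor's magnetic field (J)
E_C = C * V**2 / 2   # energy in the capacitor's electric field (J)

print(E_R, E_L, E_C)  # → 200.0 0.2 0.072
```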
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 60042,
"tags": "energy, electric-circuits, electric-current, charge, batteries"
} |
Nyquist theorem, sample meaning | Question: Given that this wave was sampled at a sampling frequency f:
Why does the wave sampled at a sampling frequency 3f/2 look like this?
What does 3f/2 mean? Does it mean that we take 3 samples every 2 waves?
Answer: The wave is the whole curve: you shouldn't think of your first diagram as showing "four waves" as if you were counting waves from the sea hitting the beach. Although your graphs aren't labelled, the horizontal axis is likely intended to be time, not distance, and the graph shows the oscillation of a single particle over time. So the dashed line shows the movement of the particle over time: think of it as moving up and down.
The sampling frequency is the number of times per second that you measure the particle's position. In the first diagram, the measurements all happen when the particle is in the same place so sampling at that particular frequency causes you to believe that the particle isn't moving at all (the orange line). In the second diagram, we're sampling $3/2=1.5$ times more often so, now we see the particle in different positions and, in particular, we see that it's not stationary.
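This can be sketched numerically; the wave frequency below is an arbitrary choice of mine, and each sample is the particle's position at a measurement instant:

```python
import math

f_wave = 4.0  # wave frequency (Hz), illustrative choice

def position(t):
    """Displacement of the oscillating particle at time t."""
    return math.sin(2 * math.pi * f_wave * t)

def sample(fs, n=12):
    """Positions measured n times at sampling frequency fs."""
    return [round(position(k / fs), 6) for k in range(n)]

# Sampling at fs = f_wave: every measurement lands at the same phase,
# so the particle appears stationary (the orange line in the diagram).
print(sample(f_wave))

# Sampling at fs = 1.5 * f_wave: samples hit different phases, so the
# motion becomes visible (though fs is still below the Nyquist rate).
print(sample(1.5 * f_wave))
```

The first list is all identical values; the second varies, which is exactly the difference between the two diagrams.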
Note that the two diagrams are drawn at different scales, which isn't very helpful. | {
"domain": "cs.stackexchange",
"id": 12307,
"tags": "sampling"
} |
Why are tri-organotin chloride compounds so dangerous? | Question: Organotin compounds are compounds with a tin-carbon bond, and some of them (specifically the tri-organo chloride ones) are as toxic as hydrogen cyanide. Why is this? I don't really understand some of the information that this link gave me. Can someone please help?
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1475111/#:~:text=The%20organotin%20compounds%20can%20be,of%20the%20central%20nervous%20system.
Answer: To get a sense of how toxic organotin compounds can be, let us compare them with inorganic tin compounds. Most inorganic tin compounds are non-toxic because of their low solubility and low absorption in the human body. Workers mining tin oxide suffer from "stannosis", a benign form of pneumoconiosis without any tissue reaction or harmful effect inside the human body. But don't get me wrong! They are still harmful: high intakes may cause abdominal pain and anaemia. Inhaling tin hydride can cause nerve damage. Tin(II) chloride was found to induce vomiting and diarrhea in cats. But they are nothing when compared to organotin compounds.
Organotin compounds are more harmful because they can undergo small chemical and biochemical reactions in the environment that convert them to more toxic forms. In the human body, they are far more dangerous. The toxicity of these compounds depends on their structure and their physicochemical action on the human body. Short-chain alkyltin compounds are by far the most toxic, especially trimethyl- and triethyltin compounds, because they can be absorbed in the gastrointestinal tract. Triethyltin in particular produces status spongiosus of the white matter of the central nervous system and causes encephalopathy and cerebral edema. Workers exposed to tributyltin have reported suffering from dermatitis. Long-chain alkyl- and aryltin compounds are poorly absorbed and are less dangerous, but they are still classified as neurotoxins.
So, why are organotin compounds toxic?
Organotin compounds are known to penetrate cell membranes due to their lipophilicity; they damage the cell membrane, interrupt oxidative phosphorylation, and damage mitochondria. They can inhibit the synthesis of heme oxygenase and can be immunotoxic and genotoxic. It is also reported that they may have carcinogenic or teratogenic effects, although this is not proved. You can read more on the adverse effects of organotin compounds on the human body by reading papers 3 and 4.
References
Chapter 42 - Tin, Elena A. Ostrakhovitch and M. George Cherian, Handbook on the Toxicology of Metals (Third Edition), 2007, pages 839-859. DOI: 10.1016/B978-012369413-3/50097-5.
Organotin Compounds Toxicity: Focus on Kidney, Carolina Monteiro de Lemos Barbosa, Fernanda Magalhães Ferrão and Jones B. Graceli, Front. Endocrinol., 22 May 2018. DOI: 10.3389/fendo.2018.00256.
Toxicity of organotin compounds: Shared and unshared biochemical targets and mechanisms in animal cells, Alessandra Pagliarani, Salvatore Nesci and Vittoria Ventrella, Toxicology in Vitro, Volume 27, Issue 2, March 2013, Pages 978-990. DOI: 10.1016/j.tiv.2012.12.002. | {
"domain": "chemistry.stackexchange",
"id": 15589,
"tags": "organometallic-compounds, toxicity, carbon-family"
} |
Laser beam deflection calculations | Question:
I would like to use laser beam deflection to measure a small movement (100s nm) in a cantilever system as shown in the diagram, but I am unsure of the calculations. This may be something simple to do and I am making it more complicated in my head.
I would like to be able to calculate how the movement of the laser point on the detector/screen (PSD in the diagram) relates to the displacements I expect of the cantilever. So I can work out a rough set up and to where to place the equipment to achieve sufficient sensitivity.
Any advice or help would be appreciated, whether calculations or setup, as this is not an area I am familiar with.
Answer: The change in $D$ will be negligible (unless the angle of incidence is very small) because the deflection is very small. The angle of incidence of the light onto the cantilever mirror equals the angle of reflection. It appears that your cantilever is tilted relative to the laser beam and the screen, so you need to do your trigonometry in 3D. In your calculation, just assume that D does not change and that the mirror tilts by a small angle. However, the angle of tilt as a function of the distance the cantilever is deflected actually will depend on the structure of the cantilever. If I were handed this problem as a homework assignment, I would ask for more specifics about the cantilever -- or would state in my answer that I'm assuming a specific structure (e.g., a uniform rectangular cross section "diving board"). | {
"domain": "physics.stackexchange",
"id": 51192,
"tags": "optics, geometric-optics"
} |
My script is about as scary as the XML it's reading | Question: This was a revision, made when the code failed after an update (at least that is what I think happened).
So I fixed it and now it works, but I think that this code can be written better and more efficiently, and perhaps made easier to read. Note that VBScript/VBA/VB6 isn't (aren't) my primary language(s), the closest thing I do is C#.
I presume that the If statements can be whittled down a little bit (I know they can).
I am unsure of the version/flavor of VB this is, but given that it is written inside <script> tags I presume it's VBScript.
Public Function GetParameterXml()
GetParameterXml = _
"<Parameters>" &_
" <Parameter Value='TYPE' Code='T' Description='Type' Type='Combo' Tooltip='Select parameters from the drop down list.'>" &_
" <Options>" &_
" <Option Code='1' Description='Assignee Name' Value='A' />" &_
" <Option Code='2' Description='Creditor Name' Value='C' />" &_
" <Option Code='3' Description='Debtor Name' Value='D' />" &_
" </Options>" &_
" </Parameter>" &_
" <Parameter Value='NAMEFORMAT' Code='T' Description='Name Format' Type='Combo' Tooltip='Select parameters from the drop down list.'>" &_
" <Options>" &_
" <Option Code='11' Description='First Mid Last' Value='1' />" &_
" <Option Code='12' Description='Last, First Mid' Value='2' />" &_
" <Option Code='13' Description='Last, First' Value='3' />" &_
" <Option Code='14' Description='Last, First, Mid' Value='4' />" &_
" </Options>" &_
" </Parameter>" &_
"</Parameters>"
End Function
Dim Parameter: Set Parameter = Parameters.Item(Bookmark,"TYPE")
Dim Parameter2: Set Parameter2 = Parameters.Item(Bookmark,"NAMEFORMAT")
Dim sTemp
Dim oNodes: Set oNodes = Nothing
Dim oNode: Set oNode = Nothing
Dim PartyName()
Dim ArrayIndex
Dim varPartyName
Dim FirstName
Dim MidName
Dim LastName
Dim ParameterValue
Dim Suffix
ReturnData = ""
If Parameter2 is Nothing Then
ParameterValue = "1"
Else
ParameterValue = Parameter2.value
End If
ArrayIndex = 0
If Not Parameter is Nothing Then
Set oNodes = xmlDoc.selectNodes("/Record/CelloXml/Integration/Case/JudgmentEvent/Judgment/Additional/SDMonetaryAward/SDMonetaryAwardParties/SDMonetaryAwardParty[PartyConnection[@Word='"& uCase(parameter.Value) &"']]")
For each oNode in oNodes
For each oNode2 in oNode.childNodes
If oNode2.nodeName = "PartyName" Then
ReDim Preserve PartyName(ArrayIndex)
PartyName(ArrayIndex) = oNode2.text
ArrayIndex = ArrayIndex + 1
End If
Next
Next
For each varPartyName in PartyName
dim FirstMiddleName
LastName = Split(varPartyName, ",")(0)
FirstMiddleName = Trim(Split(varPartyName,",")(1))
Suffix = Trim(Split(VarPartyName,",")(2))
MidName = Trim(Split(FirstMiddleName, " ")(1))
FirstName = Trim(Split(FirstMiddleName, " ")(0))
If ParameterValue = "1" Then
If IsNull(FirstName) and IsNull(MidName) Then
ReturnData = ReturnData & LastName & Chr(13) & Chr(10)
ElseIf IsNull(MidName) Then
If IsNull(Suffix) Then
ReturnData = ReturnData & FirstName & " " & LastName & Chr(13) & Chr(10)
Else
ReturnData = ReturnData & FirstName & " " & LastName & " " & Suffix & Chr(13) & Chr(10)
End If
Else
If IsNull(Suffix) Then
ReturnData = ReturnData & FirstName & " " & MidName & " " & LastName & Chr(13) & Chr(10)
Else
ReturnData = ReturnData & FirstName & " " & MidName & " " & LastName & " " & Suffix & Chr(13) & Chr(10)
End If
End If
ElseIf ParameterValue = "2" Then
If IsNull(FirstName) and IsNull(MidName) Then
ReturnData = ReturnData & LastName & Chr(13) & Chr(10)
ElseIf IsNull(MidName) Then
If IsNull(Suffix) Then
ReturnData = ReturnData & LastName & ", " & FirstName & Chr(13) & Chr(10)
Else
ReturnData = ReturnData & LastName & ", " & FirstName & " " & Suffix & Chr(13) & Chr(10)
End If
Else
If IsNull(Suffix) Then
ReturnData = ReturnData & LastName & ", " & FirstName & " " & MidName & Chr(13) & Chr(10)
Else
ReturnData = ReturnData & LastName & ", " & FirstName & " " & MidName & " " & Suffix & Chr(13) & Chr(10)
End If
End If
ElseIf ParameterValue = "3" Then
If IsNull(FirstName) Then
ReturnData = ReturnData & LastName & Chr(13) & Chr(10)
Else
If IsNull(Suffix) Then
ReturnData = ReturnData & LastName & ", " & FirstName & Chr(13) & Chr(10)
Else
ReturnData = ReturnData & LastName & ", " & FirstName & " " & Suffix & Chr(13) & Chr(10)
End If
End If
ElseIf ParameterValue = "4" Then
If IsNull(FirstName) and IsNull(MidName) Then
ReturnData = ReturnData & LastName & Chr(13) & Chr(10)
ElseIf IsNull(MidName) Then
If IsNull(Suffix) Then
ReturnData = ReturnData & LastName & ", " & FirstName & Chr(13) & Chr(10)
Else
ReturnData = ReturnData & LastName & ", " & FirstName & Suffix & Chr(13) & Chr(10)
End If
Else
If IsNull(Suffix) Then
ReturnData = ReturnData & LastName & ", " & FirstName & ", " & MidName & Chr(13) & Chr(10)
Else
ReturnData = ReturnData & LastName & ", " & FirstName & " " & Suffix & ", " & MidName & Chr(13) & Chr(10)
End If
End If
End If
FirstName = Null
LastName = Null
MidName = Null
FirstMiddleName = Null
Suffix = Null
Next
End If
Erase PartyName
If Len(ReturnData) > 0 Then
ReturnData = Left(ReturnData, Len(ReturnData)-2)
End If
Again, this is code that I fixed and added to.
Input from the XML for the name comes in this format:
Johnson, James T, {optional-Suffix}
Answer: I'm assuming this is VBScript, and I'm not familiar at all with VBScript, and it seems pretty different from VBA/VB6 as far as iterating arrays (can't do For Each on an array in VBA/VB6) and string nullability is concerned - and VBA/VB6 isn't exactly crystal-clear as to what means what.
From this source (dates back from 2000, quotes a dead Microsoft link - emphasis & formatting mine):
"": A zero-length string (commonly called an "empty string") is technically a zero-length BSTR that actually uses six bytes of memory. In general, you should use the constant vbNullString instead, particularly when calling external DLL procedures.
Empty: A variant of VarType 0 (vbEmpty) that has not yet been initialized. Test whether it is "nil" using the IsEmpty function.
Nothing: Destroys an object reference using the Set statement. Test whether it is "nil" using the Is operator.
Null: A variant of VarType 1 (vbNull) that means "no valid data" and generally indicates a database field with no value. Don't confuse this with a C NULL, which indicates zero. Test whether it is "nil" using the IsNull function.
vbNullChar: A character having a value of zero. It is commonly used for adding a C NULL to a string or for filling a fixed-length string with zeroes.
vbNullString: A string having a value of zero, such as a C NULL, that takes no memory. Use this string for calling external procedures looking for a null pointer to a string. To distinguish between vbNullString and "", use the VBA StrPtr function: StrPtr(vbNullString) is zero, while StrPtr("") is a nonzero memory address.
So, it being a script, I'm not going to try to go and define classes here, although it seems to be supported. I'm still trying to wrap my head around all those conditions!
The conditions (and the fact that they are repeated in all main 4 branches) are actually hiding the intent.
So I would write a little Coalesce function and do away with all those checks:
Public Function Coalesce(ByVal value)
If IsNull(value) Then
Coalesce = vbNullString
Else
Coalesce = CStr(value)
End If
End Function
You seem to have a fixed number of possible values for ParameterValue; to me this looks like a job for a Select Case rather than an If...ElseIf...ElseIf... block. Also, separate the concern of figuring out a line's content from that of appending it to the result: it's the ParameterValue that determines the order of the names, and whether they're present or not doesn't matter - they can all be appended, as long as we ensure to replace Null values with the non-value vbNullString:
'note: LastName can't ever be null... unless nothing was supplied at all.
Dim lineResult
Select Case ParameterValue
Case "1":
'First Mid Last [Suffix]
lineResult = Trim(Coalesce(FirstName) & " " _
& Coalesce(MidName) & " " _
& Coalesce(LastName) & " " _
& Coalesce(Suffix))
Case "2":
'Last, First Mid [Suffix]
lineResult = Trim(Coalesce(LastName) & ", " _
& Coalesce(FirstName) & " " _
& Coalesce(MidName) & " " _
& Coalesce(Suffix))
Case "3":
'Last, First [Suffix] ~> Middle name ignored?
lineResult = Trim(Coalesce(LastName) & ", " _
& Coalesce(FirstName) & " " _
& Coalesce(Suffix))
Case "4":
'Last, First [Suffix], Mid
lineResult = Trim(Coalesce(LastName) & ", " _
& Coalesce(FirstName) & " " _
& Coalesce(Suffix) & ", " _
& Coalesce(MidName))
End Select
This leaves us, in the worst cases, with a lineResult that's ended with ", ," or ",". Easy fix - not so neat, but hey between that and a bunch of nested If statements, I go with this:
If Right(lineResult, 2) = " ," Then lineResult = Left(lineResult, Len(lineResult) - 2)
If Right(lineResult, 1) = "," Then lineResult = Left(lineResult, Len(lineResult) - 1)
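For comparison, the same coalesce-join-trim idea can be sketched in Python (entirely my own illustration, covering only format codes 1 and 2; it is not part of the original script):

```python
def coalesce(value):
    # Map None to an empty string, mirroring the VBScript Coalesce helper.
    return "" if value is None else str(value)

def format_name(fmt, first, mid, last, suffix=None):
    # Build the line for each format code, then strip the dangling
    # separators that empty parts leave behind (same trick as the
    # Right(...)/Left(...) cleanup in the VBScript version).
    if fmt == "1":    # First Mid Last [Suffix]
        line = " ".join([coalesce(first), coalesce(mid),
                         coalesce(last), coalesce(suffix)])
    elif fmt == "2":  # Last, First Mid [Suffix]
        line = coalesce(last) + ", " + " ".join(
            [coalesce(first), coalesce(mid), coalesce(suffix)])
    else:
        raise ValueError("unsupported format code: " + fmt)
    line = " ".join(line.split())          # collapse doubled spaces
    return line.strip(" ,")                # trim stray commas/spaces at the edges

print(format_name("1", "James", "T", "Johnson"))   # James T Johnson
print(format_name("2", "James", None, "Johnson"))  # Johnson, James
```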
Now all we're missing is lineResult = lineResult & vbNewLine, and then we can do ReturnData = ReturnData & lineResult, and then we can iterate again. | {
"domain": "codereview.stackexchange",
"id": 4907,
"tags": "xml, vbscript"
} |
Why is picric acid more explosive than TNT? | Question: I've recently read that trinitrophenol (picric acid - relative effectiveness factor $1.20$) is more explosive than trinitrotoluene (TNT - relative effectiveness factor $1.00$).
Personally, I think that the mesomeric effect of the oxygen's lone pairs in picric acid should keep the nitro groups 'happy', because of the increase in electron density on them. This should enhance the overall stability of the molecule, and reduce the relative effectiveness factor. However, this is not the case. Why is this? My guess is that the electron withdrawing nature of oxygen somehow causes this.
Answer: You're trying to define the stronger explosive by comparing the reagents, but in this case comparing the products is far more important.
Trinitrotoluene is a rather oxygen-poor explosive, to the point that when detonated as a pure substance, it produces carbon soot and hydrogen gas:
$$\ce{2C7H5N3O6 -> 3N2 + 5H2 + 12CO + 2C}$$
The production of $\ce{CO}$, $\ce{CO2}$ and $\ce{H2O}$ is highly exothermic, so a lot of extra energy could be obtained if there were more oxygen to burn the soot and hydrogen. However, in detonations there's not nearly enough time for atmospheric oxygen to partake extensively in the reaction. This means that a fair amount of the explosive potential of TNT is wasted, decreasing its detonation yield per gram of substance. One way to compensate this yield loss in oxygen-poor explosives is to mix them with an oxidiser, usually an oxygen-rich substance such as $\ce{KClO3}$.
Now, on to picric acid. The structure and molar mass are very similar to TNT, but replacing the methyl group with a hydroxyl means the oxygen content of picric acid is higher. While still overall oxygen-poor, the detonation proceeds with much less soot and hydrogen generation, meaning a more complete burn, a higher energy release, and consequently a greater explosive yield.
$$\ce{2C6H3N3O7 -> 3N2 + H2 + 2H2O + 12 CO}$$
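The "oxygen-poor" comparison can be made quantitative with the standard oxygen-balance rule of thumb, $OB\% = -1600\,(2x + y/2 - z)/M$ for a $\ce{C_xH_yN_wO_z}$ explosive (the sketch below is my own illustration, not from the answer; the commonly quoted values are about -74% for TNT and -45% for picric acid):

```python
# Oxygen balance (to CO2) for a CxHyNwOz molecule:
#   OB% = -1600 * (2*C + H/2 - O) / M
# The more negative the value, the more oxygen-starved the explosive.
MASSES = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def oxygen_balance(c, h, n, o):
    m = c * MASSES["C"] + h * MASSES["H"] + n * MASSES["N"] + o * MASSES["O"]
    return -1600.0 * (2 * c + h / 2 - o) / m

tnt = oxygen_balance(7, 5, 3, 6)     # trinitrotoluene, C7H5N3O6
picric = oxygen_balance(6, 3, 3, 7)  # picric acid, C6H3N3O7
print(round(tnt, 1), round(picric, 1))  # roughly -74.0 and -45.4
```

Picric acid's less negative balance reflects the more complete burn described above.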
In reality, the decomposition reactions are more complex. The balanced equations above follow a rule of thumb presented here. However, a more thorough analysis should agree with this simplified result. | {
"domain": "chemistry.stackexchange",
"id": 3443,
"tags": "organic-chemistry, stability, explosives, nitro-compounds"
} |
Inconsistency in Lagrangian vs Hamiltonian formalism? | Question: Can both Lagrangian and Hamiltonian formalisms lead to different solutions?
I have a simple system described by the Lagrangian
\begin{equation}
L(\eta,\dot{\eta},\theta,\dot{\theta})=\eta\dot{\theta}+2\theta^2.
\end{equation}
The equations of motion are obtained from Euler-Lagrange eq.:
\begin{eqnarray}
4\theta-\dot{\eta}=0\; \mathrm{and}\; \dot{\theta}=0,
\end{eqnarray}
yielding the solution $\eta(t)=4\theta_0t+\eta_0$ where $\eta_0$ and $\theta_0$ are constants.
But when I obtain one of the equations of motion from the Hamiltonian (via Legendre transformation),
\begin{equation}
H=\left(\frac{\partial L}{\partial\dot\eta}\right)\dot\eta+\left(\frac{\partial L}{\partial\dot\theta}\right)\dot\theta - L =-2\theta^2,
\end{equation}
\begin{equation}
\dot\eta=\frac{\partial H}{\partial p_\eta}=0,
\end{equation}
the situation is surprisingly different from the Lagrangian approach because $\eta$ is now a constant!
Can someone give a proper explanation for this inconsistency? Am I doing something wrong here?
Answer: The problem here is that, because there exist constraints of the form $f(q,\,p)=0$, the phase space coordinates of the usual Hamiltonian formulation aren't independent. I'm not sure how you encountered this Lagrangian, but this issue is a common hiccup in electromagnetism and (if you'll pardon a more obscure example) BRST quantisation. The good news is you can still form a Hamiltonian description equivalent to the Lagrangian one. The trick is to append suitable terms to the "naïve" Hamiltonian, as explained here, and as a result the Poisson brackets are upgraded to what are called Dirac brackets.
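As a numerical sanity check of the constrained-Hamiltonian fix (my own sketch; the Hamiltonian $H=-2\theta^2+4\theta p_\eta$ is the one derived in this answer, and the initial values and step size are arbitrary illustrative choices):

```python
# Integrate Hamilton's equations for H = -2*theta**2 + 4*theta*p_eta
# with a forward-Euler step, and compare against the Lagrangian solution
# eta(t) = eta0 + 4*theta0*t.
dt, steps = 1e-4, 10_000
eta, theta = 1.0, 0.5       # eta0, theta0
for _ in range(steps):
    eta_dot = 4 * theta     # d(eta)/dt = dH/d(p_eta)
    theta_dot = 0.0         # d(theta)/dt = dH/d(p_theta)
    eta += eta_dot * dt
    theta += theta_dot * dt

t = dt * steps              # total elapsed time = 1.0
print(eta, 1.0 + 4 * 0.5 * t)  # both ≈ 3.0
```

The trajectory matches $\eta(t)=4\theta_0 t+\eta_0$, confirming that the augmented Hamiltonian reproduces the Lagrangian dynamics.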
For your problem the full Hamiltonian is $H=-2\theta^2+c_1 p_\eta+c_2( p_\theta-\eta)$, where the $c_i$ remain to be computed as functions of undifferentiated phase space coordinates. In fact $c_1=\frac{\partial H}{\partial p_\eta}=\dot{\eta}=4\theta$ while $c_2=\frac{\partial H}{\partial p_\theta}=\dot{\theta}=0$, so $H=-2\theta^2+4\theta p_\eta$. You can verify this gives you the right equations of motion. | {
"domain": "physics.stackexchange",
"id": 40519,
"tags": "homework-and-exercises, classical-mechanics, lagrangian-formalism, hamiltonian-formalism, constrained-dynamics"
} |
Why would rosserial_python fail when it connect to arduino? | Question:
Hi! I'm running ROS Indigo on Ubuntu 14.04 and I need a hand to fix this problem:
I followed the ROS Arduino tutorial and tried to create a publisher, but it failed:
$rosrun rosserial_python serial_node.py /dev/ttyACM0
[ERROR] [WallTime: 1509590980.692109] Creation of publisher failed: line:
yaml https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/osx-homebrew.yaml osx
unsupported pickle protocol: 4
I tried using $rosdep update to fix it, but nothing happened.
Arduino code is here:
#include <SPI.h>
#include <MFRC522.h>
/////setup/////
#include <ros.h>
#include <std_msgs/String.h>
#include <std_msgs/Byte.h>
#include <std_msgs/Char.h>
#include <std_msgs/Int32.h>
ros::NodeHandle nh;
std_msgs::String str_msg;
ros::Publisher chatter("RFID", &str_msg);
char hello[13] = "hello world!";
char RFID[32];
////////////////
constexpr uint8_t RST_PIN = 9; // Configurable, see typical pin layout above
constexpr uint8_t SS_PIN = 10; // Configurable, see typical pin layout above
MFRC522 rfid(SS_PIN, RST_PIN); // Instance of the class
MFRC522::MIFARE_Key key;
// Init array that will store new NUID
byte nuidPICC[4];
void setup()
{
//////node setup///////
nh.initNode();
nh.advertise(chatter);
/////////////
SPI.begin(); // Init SPI bus
rfid.PCD_Init(); // Init MFRC522
for (byte i = 0; i < 6; i++) {
key.keyByte[i] = 0xFF;
}
} // closing brace for setup() -- missing in the posted code, which would cause a compile error
void loop()
{
/////////ROS setup/////////
//str_msg.data = RFID;
//chatter.publish( &str_msg );
nh.spinOnce();
//delay(1000);
///////////////////////////
// Look for new cards
if ( ! rfid.PICC_IsNewCardPresent())
return;
// Verify if the NUID has been read
if ( ! rfid.PICC_ReadCardSerial())
return;
if (rfid.uid.uidByte[0] != nuidPICC[0] ||
rfid.uid.uidByte[1] != nuidPICC[1] ||
rfid.uid.uidByte[2] != nuidPICC[2] ||
rfid.uid.uidByte[3] != nuidPICC[3] ) {
nh.loginfo("A new card has been detected.");
nh.loginfo("The NUID tag is:");
sprintf(RFID,"%d %d %d %d",rfid.uid.uidByte[0],rfid.uid.uidByte[1],rfid.uid.uidByte[2],rfid.uid.uidByte[3]);
str_msg.data = RFID;
//show data
nh.loginfo(RFID);
chatter.publish( &str_msg );
// Store NUID into nuidPICC array
for (byte i = 0; i < 4; i++) {
nuidPICC[i] = rfid.uid.uidByte[i];
}
}
else
nh.loginfo("Card read previously.");
// Halt PICC
rfid.PICC_HaltA();
// Stop encryption on PCD
rfid.PCD_StopCrypto1();
}
And my launch file is here
<launch>
<node pkg="rosserial_python" type="serial_node.py" name="serial_node" output="screen">
<param name="port" value="/dev/ttyACM0"/>
<param name="baud" value="57600"/>
</node>
</launch>
But when I run these scripts on another computer, this doesn't happen...
Can somebody help me?
Originally posted by Liuche on ROS Answers with karma: 44 on 2017-11-01
Post score: 0
Original comments
Comment by ahendrix on 2017-11-01:
That's a very strange error message. Is there anything different between the computer where it works and where it doesn't work? Are they using different operating systems?
Comment by Liuche on 2017-11-02:
They are the same.
Both are 14.04 with the rosserial package installed, and the Arduino IDE is 1.8.4.
It's very strange. I can't find a similar question on the website.
But there is an important thing: it worked before...
Comment by Liuche on 2017-11-03:
I solved the problem! I reinstalled the entire ROS system!
Answer:
I solved the problem! I reinstalled the entire ROS system.
Maybe I had deleted some file in ROS?
I don't know, but finally I solved it.
Originally posted by Liuche with karma: 44 on 2017-11-03
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 29258,
"tags": "ros, arduino, rosserial-arduino, rosserial-python"
} |
Platform-agnostic windowing library | Question: I am making a C library that abstracts window creation, with support for the new Vulkan API, under a unified API.
I have a github repository that you can check out.
main.c
#include "vkwf.h"
int main()
{
VKWFWindow* window = VKWFCreateWindow("Test Window", 800, 600);
while (!VKWFWindowShouldClose(window))
{
VKWFWindowUpdate(window);
}
free(window);
return 0;
}
The way I am handling this is creating a general vkwf.h file, that includes a list of functions, like this:
vkwf.h
#pragma once
#ifdef __cplusplus
extern "C" {
#endif
#ifdef VKWF_PLATFORM_WINDOWS
#include "win32_window.h"
#elif VKWF_PLATFORM_MACOS
#include "macos_window.h"
#elif VKWF_PLATFORM_LINUX
#include "linux_window.h"
#endif
VKWFWindow* VKWFCreateWindow(const char* title, int width, int height)
{
return VKWFPlatformCreateWindow(title, width, height);
}
VKWFBool VKWFWindowShouldClose(VKWFWindow* window)
{
return VKWFPlatformWindowShouldClose(window);
}
void VKWFWindowUpdate(VKWFWindow* window)
{
VKWFPlatformUpdate(window);
}
void VKWFDestroyWindow(VKWFWindow* window)
{
VKWFPlatformDestroyWindow(window);
}
#ifdef __cplusplus
}
#endif
All VKWFPlatformX() functions are functions that get defined by the platform_window.h files, like this:
win32_window.h (short)
#define VKWFPlatformCreateWindow(title,width,height) VKWFWin32CreateWindow(title,width,height)
#define VKWFPlatformWindowShouldClose(window) VKWFWin32WindowShouldClose(window)
#define VKWFPlatformUpdate(window) VKWFWin32Update(window)
#define VKWFPlatformDestroyWindow(window) VKWFWin32DestroyWindow(window)
Does this look like a good approach?
Are there any drawbacks, or things that I could improve?
Answer: I wouldn't use #define macros. They expose the internal naming convention of your OS dependent functions to the world. Once you send out your library to others [as a shared library], you can never change your internal OS dependent names. The important thing is that public facing names be functions.
For example, if two systems used gcc and created ELF binaries that only called your library functions, you could compile on (e.g.) FreeBSD and that binary would run without a rebuild on Linux. There are other issues with doing this, so it [probably] isn't practical, but it's something to think about.
A better way [what I've done when faced with a similar problem] is to put the OS dependent code in a .c and add static to the definitions.
The public functions just call the static ones. The static function names are the same, regardless of platform.
The optimizer will either inline the static OS dependent function or will use tail call optimization. So, it's just as fast as macros but a lot cleaner.
Side note: Your naming convention is a bit [MS] Windows centric (i.e. camel hump case). I prefer snake case (e.g. see GTK, etc.). For an example, see the bottom.
vkwf.h:
#pragma once
#ifdef __cplusplus
extern "C" {
#endif
VKWFWindow* VKWFCreateWindow(const char* title, int width, int height);
VKWFBool VKWFWindowShouldClose(VKWFWindow* window);
void VKWFWindowUpdate(VKWFWindow* window);
void VKWFDestroyWindow(VKWFWindow* window);
#ifdef __cplusplus
}
#endif
vkwf.c:
#include "vkwf.h"
#ifdef VKWF_PLATFORM_WINDOWS
#include "win32_window.c"
#elif VKWF_PLATFORM_MACOS
#include "macos_window.c"
#elif VKWF_PLATFORM_LINUX
#include "linux_window.c"
#endif
VKWFWindow* VKWFCreateWindow(const char* title, int width, int height)
{
return VKWFPlatformCreateWindow(title, width, height);
}
VKWFBool VKWFWindowShouldClose(VKWFWindow* window)
{
return VKWFPlatformWindowShouldClose(window);
}
void VKWFWindowUpdate(VKWFWindow* window)
{
VKWFPlatformUpdate(window);
}
void VKWFDestroyWindow(VKWFWindow* window)
{
VKWFPlatformDestroyWindow(window);
}
win32_window.c:
#include "vkwf.h"
static VKWFWindow*
VKWFPlatformCreateWindow(const char* title, int width, int height)
{
// ...
}
static VKWFBool
VKWFPlatformWindowShouldClose(VKWFWindow* window)
{
// ...
}
static void
VKWFPlatformUpdate(VKWFWindow* window)
{
// ...
}
static void
VKWFPlatformDestroyWindow(VKWFWindow* window)
{
// ...
}
Here's an example of the snake case for the public functions. Note that since the platform-specific functions are static, they can use shorter prefixes:
#include "vkwf.h"
#ifdef VKWF_PLATFORM_WINDOWS
#include "win32_window.c"
#elif VKWF_PLATFORM_MACOS
#include "macos_window.c"
#elif VKWF_PLATFORM_LINUX
#include "linux_window.c"
#endif
VKWFWindow* VKWF_create_window(const char* title, int width, int height)
{
return platform_create_window(title, width, height);
}
VKWFBool VKWF_window_should_close(VKWFWindow* window)
{
return platform_window_should_close(window);
}
void VKWF_window_update(VKWFWindow* window)
{
platform_update(window);
}
void VKWF_destroy_window(VKWFWindow* window)
{
platform_destroy_window(window);
} | {
"domain": "codereview.stackexchange",
"id": 32887,
"tags": "c, library, gui, portability"
} |
Why is the spectrum of a blue flame the way it is? | Question: In the spectrum of the blue part in a candle flame, there’s a violet emission at 432 nm due to excited CH* molecules (chemiluminescence). Why 432? Why not 400 or 500? There are emissions at 436, 475 and 520 nm too. Why these numbers? Is it because the energies of the photons emitted correspond to these wavelengths, as E = hc/λ?
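A quick numeric check of the E = hc/λ relation mentioned in the question (the constants below are rounded standard values):

```python
# Photon energy E = h*c/lambda for the emission lines quoted above.
h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electron-volt

energies = {}
for nm in (432, 436, 475, 520):
    E_joule = h * c / (nm * 1e-9)
    energies[nm] = E_joule / eV
    print(f"{nm} nm -> {energies[nm]:.2f} eV")
```

All four lines land in the roughly 2.4-2.9 eV range, i.e. the scale of electronic energy-level spacings in small molecules such as CH*.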
Answer: Atoms and small molecules have discrete energetic states. When an excited molecule relaxes to the ground state by emitting a photon, the energy (wavelength) of this photon is equal to the energy difference between the 2 states. | {
"domain": "physics.stackexchange",
"id": 20504,
"tags": "visible-light, electromagnetic-radiation, electrons, molecules, combustion"
} |
[ROS2] Any idea on how to pass parameters (YAML) to a Xacro file via launch python script? | Question:
In my launch python script I can load and pass parameters from a .yaml file into my code without any problem:
from ament_index_python.packages import get_package_share_path
from launch_ros.parameter_descriptions import ParameterValue
from launch_ros.actions import Node
from launch.substitutions import Command, LaunchConfiguration
from launch.actions import DeclareLaunchArgument, ExecuteProcess
from launch import LaunchDescription
import os
def generate_launch_description():
package_p = get_package_share_path('my_pkg')
urdf_path = os.path.join(package_p, 'my_robot.urdf')
config_path = os.path.join(package_p, 'my_params.yaml')
prog = Node(package = "my_pkg",
executable = "my_pkg",
output = "screen",
parameters = [config_path])
return LaunchDescription([prog])
The parameters defined in the my_params.yaml file are passed to my node.
The problem is that I need to pass the same parameters to my xacro file, where I defined my robot.
<?xml version="1.0"?>
<robot name="my_robot" xmlns:xacro="http://www.ros.org/wiki/xacro">
<xacro:arg name="first_parameter"/>
</robot>
As you can see I don't know how to "catch" the parameters into the xacro/urdf file.
Do you have any idea?
Originally posted by Andromeda on ROS Answers with karma: 893 on 2022-06-05
Post score: 1
Answer:
Is your end goal to use xacro for your robot_state_publisher at runtime? Xacro actually has a function (load_yaml()) that allows you to read in a yaml from a file path. In this case you can hand the path of the file through your xacro call using the xacro_arg_name:=param_to_pass notation. If you are running Galactic or later here is how I would do it (with launch configurations for full runtime flexibility):
EDIT:
For Foxy you may need to use an opaque function as described in the second answer of this question
Launch File:
#!/usr/bin/env python3
from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument, LogInfo
from launch.substitutions import LaunchConfiguration, Command, PathJoinSubstitution
from launch_ros.actions import Node
from launch_ros.descriptions import ParameterValue
from launch_ros.substitutions import FindPackageShare
def generate_launch_description():
this_pkg = FindPackageShare('test_package')
model_path = LaunchConfiguration('model_path')
params_path = LaunchConfiguration('params_path')
return LaunchDescription([
DeclareLaunchArgument(
'model_path',
default_value=PathJoinSubstitution([this_pkg,'xacro','model.xacro']),
description='Full path to model xacro file including filename and extension'
),
DeclareLaunchArgument(
'params_path',
default_value=PathJoinSubstitution([this_pkg,'params','model.yaml']),
description='Full path to parameter yaml file including filename and extension'
),
Node(
package='robot_state_publisher',
executable='robot_state_publisher',
name='robot_state_publisher',
output='screen',
parameters=[{
'robot_description': ParameterValue(Command(['xacro ',model_path,' params_path:=',params_path]), value_type=str)
}],
)
])
Xacro File: model.xacro
<?xml version="1.0"?>
<robot name="model" xmlns:xacro="http://www.ros.org/wiki/xacro">
<!--
The xacro:arg "params_path" exposes the arg to launch file.
The xacro:property "params_path" is required to get the evaluated symbol into xacro before "load_yaml"
The xacro:property "mp" (which stands for "model parameters") gets the dictionary from "params_path"
and is accessed using ${mp['key_name']}, which will evaluate to the key value.
The xacro:arg and xacro:property "params_path" share a name but do not seem to clobber each other.
They are not required to have the same name; it was just convenient.
-->
<xacro:arg name="params_path" default=""/> <!-- Need argument to get from launch file -->
<xacro:property name="params_path" value="$(arg params_path)"/> <!-- Need separate property for xacro's in-order processing -->
<xacro:property name="mp" value="${load_yaml(params_path)}"/> <!-- Read in the yaml dict as mp (short for model parameters) -->
<xacro:property name="PI" value="3.1415926535897931" />
<xacro:property name="wheel_mass" value="${mp['wheel_mass']}"/>
<xacro:property name="wheel_width" value="${mp['wheel_width']}"/>
<xacro:property name="wheel_radius" value="${mp['wheel_radius']}"/>
<link name="base_link"/>
<link name="body_link">
<!-- Visual box geometry -->
<visual name="visual">
<geometry>
<box size="1 1 0.5"/>
</geometry>
</visual>
<!-- Collision box geometry-->
<collision name="collision">
<geometry>
<box size="1 1 0.5"/>
</geometry>
</collision>
<!-- Inertia for a box -->
<inertial>
<origin xyz="0 0 0" rpy="0 0 0"/>
<mass value="1645.0"/>
<inertia
ixx="${(1.0/12.0) * 1645.0 * ( 1 * 1 + 0.5 * 0.5 )}"
ixy="0"
ixz="0"
iyy="${(1.0/12.0) * 1645.0 * ( 1 * 1 + 0.5 * 0.5 )}"
iyz="0"
izz="${(1.0/12.0) * 1645.0 * ( 1 * 1 + 1 * 1 )}" />
</inertial>
</link>
<joint name="body_joint" type="fixed">
<origin xyz="0 0 0.25" rpy="0 0 0" />
<parent link="base_link"/>
<child link="body_link"/>
</joint>
<link name="axle_link">
<inertial>
<!-- Pose -->
<origin xyz="0 0 0" rpy="0 0 0"/>
<!-- Dummy Mass -->
<mass value="10.0"/>
<!-- Dummy Inertia Sphere -->
<inertia
ixx="${(2.0/5.0) * 10.0 * 0.1 * 0.1}"
ixy="0"
ixz="0"
iyy="${(2.0/5.0) * 10.0 * 0.1 * 0.1}"
iyz="0"
izz="${(2.0/5.0) * 10.0 * 0.1 * 0.1}" />
</inertial>
</link>
<joint name="axle_joint" type="fixed">
<origin xyz="0 0 2.0" rpy="0 0 0"/>
<parent link="body_link"/>
<child link="axle_link"/>
</joint>
<link name="wheel_link">
<visual>
<origin xyz="0 0 0" rpy="0 ${PI/2} ${PI/2}"/>
<geometry>
<cylinder length="${wheel_width}" radius="${wheel_radius}"/>
</geometry>
</visual>
<collision>
<origin xyz="0 0 0" rpy="0 ${PI/2} ${PI/2}"/>
<geometry>
<cylinder length="${wheel_width}" radius="${wheel_radius}"/>
</geometry>
</collision>
<inertial>
<origin xyz="0 0 0" rpy="0 ${PI/2} ${PI/2}"/>
<mass value="${wheel_mass}"/>
<inertia
ixx="${(1/12) * wheel_mass * (3 * wheel_radius * wheel_radius + wheel_width * wheel_width) }"
ixy="0.0"
ixz="0.0"
iyy="${(1/12) * wheel_mass * (3 * wheel_radius * wheel_radius + wheel_width * wheel_width) }"
iyz="0.0"
izz="${(1/2) * wheel_mass * wheel_radius * wheel_radius}"/>
</inertial>
</link>
<joint name="wheel_joint" type="fixed">
<origin xyz="0 0.25 0" rpy="0 0 0" />
<parent link="axle_link"/>
<child link="wheel_link"/>
<axis xyz="0 1 0" rpy="0 0 0 "/>
</joint>
</robot>
YAML File: model.yaml
wheel_mass: 10.0
wheel_width: 0.25
wheel_radius: 0.5
Originally posted by djchopp with karma: 328 on 2022-06-13
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 37745,
"tags": "ros, ros2, yaml, parameters, xacro"
} |
Can simultaneous double pathogen infections happen, or are they prevented? | Question: Is there something in immunology that prevents a simultaneous infection with a 2nd pathogen? For example, I've never heard of someone getting both dengue and malaria together. Or, say, Ebola and Marburg fever. Of course, sheer probability makes it unlikely but is there something more fundamental that prevents it?
Maybe a more tractable version of the question is within the class of viruses or one type e.g. respiratory viruses. Is there something that prevents getting infected by multiple pathogens at once?
Answer: You can absolutely have two infections occurring together.
One term which is used is 'co-infection'. Also 'secondary infection', in case where either the first infection or the treatment made it more likely for the second infection to occur. For example, secondary infections causing pneumonia were a big issue with Covid19.
I have done a quick google search on 'dengue malaria co-infection', and cases do exist. In this paper (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3614227/ - Concurrent malaria and dengue infection: a brief summary and comment), the authors comment on the fact that for this particular combination of diseases, the co-occurrence is indeed quite low. They have various hypotheses:
The vectors for either disease are specific to geographically distinct areas, e.g. mosquitoes that prefer city vs rural areas.
(I've seen it contradicted in another paper but no time to search).
Pure statistics: the likelihood of getting both diseases is much less likely than getting one, especially if getting one disease results in behaviour modification (e.g. staying at home sick).
Underdiagnosis: if you go to the doctor with one obvious disease, you probably will not get tested for other co-infections you might have.
It is possible that, for other pairs of diseases, immunity may play a role (your body is already on high alert from one infection) but that can also provide a risk (your body is so exhausted from fighting infection #1 that you can't fight infection #2). I am not aware, though, of any general statement that would apply to all diseases. | {
"domain": "biology.stackexchange",
"id": 12272,
"tags": "immunology, virology, medicine, infection"
} |
Can I add expert data to the replay buffer used by the DDPG algorithm in order to make it converge faster? | Question: I am working on a restricted reinforcement learning environment, i.e. the environment breaks very often (i.e.: the communication between the simulator and reinforcement learning agent breaks after some time). So, it is getting difficult for me to continue training in this environment.
The continuous state-space is $\mathcal{S} \subseteq \mathbb{R}^{10}$ and the continuous action-space $\mathcal{A} \subseteq \mathbb{R}^{2}$.
What I want to know is whether I can add expert data to the replay buffer, given that DDPG is an off-policy algorithm?
Or should I go with the behavior cloning technique to train only the actor network, so that it converges rapidly?
I just want to get the work done first and then I can think of exploring the environment.
Answer:
What I want to know is whether I can add expert data to the replay buffer, given that DDPG is an off-policy algorithm?
You certainly can, that is indeed one of the advantages of off-policy learning algorithms; they're still "correct", regardless of which policy generated the data that you're learning from (and a human expert providing the experience to learn from can also be viewed as such a "policy").
There are potential issues to be aware of though. For example, if you just put some expert-generated data in there and don't allow your agent to explore by itself, the experiences that you can learn from may be quite limited in the parts of the state-action space that they explore. So if your expert does not sufficiently explore the entire space, you cannot expect the agent to learn how to act if for whatever reason it ever ends up in some unexplored space. This is no different from what would happen if you trained with an agent that had too little exploration (like a greedy agent).
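As a concrete sketch of the first option: seeding an off-policy replay buffer with expert data is just a matter of pushing those transitions in before (or alongside) the agent's own experience. The buffer below is a hypothetical minimal FIFO implementation, not tied to any specific DDPG library, and the transition contents are placeholder values matching the 10-dim state / 2-dim action spaces from the question:

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal FIFO replay buffer holding (s, a, r, s_next, done) tuples."""
    def __init__(self, capacity=100_000):
        self.storage = deque(maxlen=capacity)

    def add(self, transition):
        self.storage.append(transition)

    def sample(self, batch_size):
        return random.sample(self.storage, min(batch_size, len(self.storage)))

buffer = ReplayBuffer()

# 1) Pre-fill with expert demonstrations (placeholder values).
expert_transitions = [
    ((0.0,) * 10, (0.1, -0.2), 1.0, (0.1,) * 10, False),
]
for t in expert_transitions:
    buffer.add(t)

# 2) The agent's own rollouts go into the same buffer during training,
#    so each minibatch mixes expert and self-generated experience.
buffer.add(((0.1,) * 10, (0.0, 0.3), 0.5, (0.2,) * 10, True))

batch = buffer.sample(2)
print(len(batch))  # 2
```

A common refinement is to keep the expert transitions in a separate buffer and sample a fixed fraction from each, so the expert data is never evicted as training experience accumulates.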
Or should I go with the behavior cloning technique to train only the actor network, so that it converges rapidly?
I cannot confidently say which approach would work better, so I cannot really answer this... I imagine the answer may also be different for specific different problem domains. But the basic principle of learning from expert data with an off-policy algorithm is not inherently wrong. | {
"domain": "ai.stackexchange",
"id": 2604,
"tags": "reinforcement-learning, ddpg, off-policy-methods, experience-replay, behavioral-cloning"
} |
Any package that takes lidar, IMU, odometry, and the occupancy grid map as input and outputs the robot's location? | Question:
I am a total ROS newbie, and I am wondering if you could help me find a package that takes the lidar, IMU, odometry, and occupancy grid map as input and gives me the exact location of the robot as output. Thank you in advance.
Originally posted by Amr on ROS Answers with karma: 11 on 2017-02-11
Post score: 1
Original comments
Comment by pavel92 on 2018-08-23:
Welcome to ROS :)
In the future, don't post the question in your title. The title should be a short description of your problem. You can try now to edit your question and change the title. In regards to your question, check the amcl localization package as @Subodh Malgonde pointed out in his answer
Answer:
amcl does exactly what you are looking for.
Originally posted by Subodh Malgonde with karma: 512 on 2018-08-23
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 26985,
"tags": "localization, imu, navigation, lidar, package"
} |
Are there gases from the Earth useful to generate electricity besides "natural gas"? | Question: Natural gas:
...is a naturally occurring hydrocarbon gas mixture consisting primarily of methane, but commonly including varying amounts of other higher alkanes, and sometimes a small percentage of carbon dioxide, nitrogen, hydrogen sulfide, or helium.
I saw the graphic below in the BBC News article Trump climate: Challenges loom after Obama policies scrapped. It lists natural gas as the source of about 33% of the US electricity generation in 2015, but also lists "Other gases" as the source of less than 1%. Presumably it is a meaningful fraction of that 1% or else the BBC would not have included it. While the source of the data is listed as the US Energy Information Agency, the graphic itself was prepared by the BBC if I understand correctly.
Are there other gases available from the Earth that could account for this fraction of 1%? I don't mean gases that are produced during an industrial refinement process, but perhaps gases that were simply separated. Just for example, could it be natural propane? The same Wikipedia article mentions heavier hydrocarbons, but I don't understand if these are already present in the Earth and just being separated, or if they are produced primarily as reaction products.
EDIT: Based on comments, a substantially different source of gas that was chemically similar to natural gas would still be of interest. So just for example, methane from a bog should count in this case, since it does not involve many of the geological processes involved in the production of fossil fuels (e.g. the same timescales or temperatures).
Answer: Fred's answer looks correct, but in this case, the source is right there under the chart.
US Energy Information Administration (type "other gases" in the search box)
Explained here in the footnotes
Other Gas includes blast furnace gas and other manufactured and waste
gases derived from fossil fuels.
0.3% if anyone is interested. Prior to 2011 the definition of other gases was different. | {
"domain": "earthscience.stackexchange",
"id": 1123,
"tags": "gas, fossil-fuel"
} |
Printing an arrow of asterisks in Haskell | Question: This code will print an arrow of asterisks, like:
*
**
***
****
***
**
*
raisingAsterisks, decreasingAsterisks, arrow :: Int -> [String]
raisingAsterisks n = take n $ iterate ('*' :) "*"
decreasingAsterisks = reverse . raisingAsterisks
arrow n = raisingAsterisks n ++ (tail (decreasingAsterisks n))
main :: IO()
main = mapM_ putStrLn $ arrow 4
Answer: The way you're actually generating the list of Strings seems fine, but the formatting, at least in my opinion, can be improved: since raisingAsterisks and decreasingAsterisks are both relatively small "helper" functions for arrow, I would suggest putting them inside a where clause to make things a little easier to read.
arrow :: Int -> [String]
arrow n = increasing ++ tail decreasing
where increasing = take n $ iterate ('*' :) "*"
decreasing = reverse increasing
In this context, the variable names could be shortened, due to the fact that it's easy to understand what's happening. Note that the n variable no longer needs to be passed, so increasing and decreasing actually are no longer even functions!
I hope you agree that changing your code just a little greatly improves readability. | {
"domain": "codereview.stackexchange",
"id": 15710,
"tags": "haskell, ascii-art"
} |
Is there any nongeneral CFG parsing algorithm that recognises EPAL? | Question: EPAL, the language of even palindromes, is defined as the language generated by the following unambiguous context-free grammar:
$S \rightarrow a a$
$S \rightarrow b b$
$S \rightarrow a S a$
$S \rightarrow b S b$
EPAL is the 'bane' of many parsing algorithms: I have yet to encounter any parsing algorithm for unambiguous CFGs that can parse any grammar describing the language. It is often used to show that there are unambiguous CFGs that cannot be parsed by a particular parser. This inspired my question:
Is there some parsing algorithm accepting only unambiguous CFGs that works on EPAL?
Of course, one can design an ad-hoc two-pass parser for the grammar that parses the language in linear time. I'm interested in parsing methods that have not been designed specifically with EPAL in mind.
Answer: Consider the following sketch of a parsing strategy at your own risk.
Instead of reading the input only from one end, we read from both sides and look for matching rules. We can do this in recursive descent style; in a call to $A()$, find prefix $w$ and suffix $v$ to the input such that there is a rule $A \to wBv$, descend to $B()$ on the remaining word. If there is no matching rule, reject the word.
This algorithm parses all linear, unambiguous grammars. It takes linear time if all rule pairs $A \to wBv$ and $A \to w'B'v'$ have $w \not\equiv_p w'$ or $v \not\equiv_s v'$¹. This includes EPAL. Otherwise we need to look ahead so we might take $\Theta(n^2)$ time.
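Specialized to the EPAL grammar above, the both-ends strategy reduces to a simple linear-time recognizer. This is only a sketch of the idea (recognition, not parse-tree construction):

```python
def is_epal(s: str) -> bool:
    """Recognize S -> aa | bb | aSa | bSb by matching symbols from both ends."""
    if len(s) == 0 or len(s) % 2 != 0:
        return False  # the grammar only generates non-empty even-length words
    i, j = 0, len(s) - 1
    while i < j:
        # A rule S -> aSa / bSb / aa / bb applies only if both ends agree.
        if s[i] not in "ab" or s[i] != s[j]:
            return False
        i += 1
        j -= 1
    return True

print(is_epal("abba"), is_epal("aba"), is_epal("abab"))  # True False False
```

Each step consumes one symbol from each end and there is never a choice between rules, which is exactly the "no lookahead needed" condition described above.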
The idea does not work for non-linear grammars at all. Linear but ambiguous grammars can in general not be parsed without backtracking (for negative inputs at least).
$w \not\equiv_p v$ means here that $w \not\sqsubseteq v$ and $v \not\sqsubseteq w$, i.e. neither word is a prefix of the other. $\not\equiv_s$ is similar for suffixes. | {
"domain": "cs.stackexchange",
"id": 10,
"tags": "formal-languages, formal-grammars, parsers"
} |
Why does a force not do any work if it's perpendicular to the motion? | Question: I have a book that says the Moon's orbit is [in this context assumed to be] circular. The Earth does no work on the Moon. The gravitational force is perpendicular to the motion. Why is there no work done if the force is perpendicular to the motion?
Answer: As explained by SchrodingersCat, mathematically work is proportional to the scalar product of force and line element. Therefore any forces acting perpendicular to the path do not contribute to the work.
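In symbols, W = F · d = |F||d| cos θ, which vanishes at θ = 90°. A tiny numeric illustration (the force and displacement components below are arbitrary example values):

```python
def work(force, displacement):
    # Work over a small displacement is the dot product F . d.
    return sum(f * d for f, d in zip(force, displacement))

# Gravity pointing toward the centre (here -x), motion tangential (+y):
print(work((-9.8, 0.0), (0.0, 0.01)))   # 0.0 -> perpendicular force, no work
# The same force along the displacement does do work:
print(work((-9.8, 0.0), (-0.01, 0.0)))  # positive, about 0.098
```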
Now you might want to ask why work is defined like this. I would like to justify this definition taking your example of the moon.
In physics work is intimately related to energy: basically if you want to change the energy of an object you need to do work on it. Now in the case of the moon there are two relevant energies, (1) kinetic energy of the moon related to the magnitude (but not direction) of the moon's velocity, i.e. its speed; and (2) gravitational energy related to the position of the moon in the earth's gravitational field; this one depends on the distance moon-earth.
For (1), since perpendicular forces do not change the magnitude of velocity (only their direction), the perpendicular force should not enter into the equation of work (since it does not contribute to the energy change).
For (2) if you displace the moon always perpendicular to the direction of the gravitational force, you stay at the same distance, i.e. at the same gravitational potential energy. Therefore such perpendicular displacements do not change the energy and should not enter in the expression for work. | {
"domain": "physics.stackexchange",
"id": 37728,
"tags": "newtonian-mechanics, forces, energy, rotational-dynamics, work"
} |
array with reusable elements | Question: I need an array to store elements, but I don't want to make memory allocations at runtime, so I reuse elements. Some fields never change, so as I reuse elements I need to assign those fields just once.
This is what I wrote (only a skeleton; error checks etc. still need to be added), but it's enough to demonstrate the idea:
public sealed class ResizeableArray<T> where T : class, new()
{
// made public to use in "foreach", but better to be private
public T[] array;
public int Count { get; private set; }
public ResizeableArray(uint maxLength = 128)
{
if (maxLength <= 0) throw new ArgumentOutOfRangeException("maxLength");
array = new T[maxLength];
for (int i = 0; i < maxLength; i++)
{
array[i] = new T();
}
Count = 0;
}
public T this[int key]
{
get { return array[key]; }
}
public void Clear()
{
Count = 0;
}
public T Add()
{
return array[Count++];
}
public void RemoveRange(int index, int count)
{
var newSize = Count - count;
for (int i = index; i < newSize; i++)
{
array[i] = array[i + count];
}
Count = newSize;
}
public void RemoveAt(int index)
{
throw new NotImplementedException();
}
}
Usage is simple, for example
var element = array.Add();
// configure, assign only "mutable" fields
element.field1 = value1;
element.field2 = value2;
What do you think? Is it good class or you can suggest something better?
upd: posting the final version I use in production. I think this class can be useful if you constantly need to reconfigure some array. It will probably be faster not to allocate a new object over and over (but I need to measure). Also, it's probably less error-prone to always have the same instance of the object.
public sealed class ResizeableArray<T> where T : class, new()
{
private T[] array;
public int Count { get; private set; }
public ResizeableArray(uint maxLength = 128)
{
if (maxLength <= 0) throw new ArgumentOutOfRangeException("maxLength");
array = new T[maxLength];
for (int i = 0; i < maxLength; i++)
{
array[i] = new T();
}
MaxLength = maxLength;
Count = 0;
}
public uint MaxLength { get; private set; }
public T this[int key]
{
get { return array[key]; }
}
public void Clear()
{
Count = 0;
}
public T Add()
{
return array[Count++];
}
// sloooow. don't use it
public T InsertAt0()
{
var zeroOrder = array[Count];
for (int i = Count; i > 0; i--)
{
array[i] = array[i - 1];
}
array[0] = zeroOrder;
return zeroOrder;
}
}
Answer:
my question is about idea in general
It's flawed, in several ways.
Premature Optimization
The code smell I get is that the whole point is avoiding instantiating and then disposing Array elements. Reasons we stay away from optimizing up front
We end up spending too much time on it
Corrupting our business design for the sake of something that we don't even know is an issue.
We don't know yet where and how much actual performance is affected.
We cannot know if our up front optimization is in fact better than what we would have done; it could be worse!
Resizing?
I thought this is what we're trying to avoid.
Where is the "resizing"? I assume it will be in Add() and Remove(). If there is no resizing - the size is fixed at instantiation, then the class name is wrong.
The conventional wisdom says resizing Array is less performant than resizing a List. In fact the C#/.NET team went to great lengths to make sure List automatic resizing performs well. Do you have the time and other resources to make your home-spun Array resizing worth not using what the .NET framework already gives you?
Wrong Perspective
I see the client having to write his code in terms of Arrays and array elements when it should be in terms of your business objects - TradeOrder, for example.
The structure - the array - should not be the emphasis in your design. If you keep going down this road you are in for ugly maintenance problems over time.
Give the business classes "collection friendly" capabilities
I imagine sorting, searching, preventing duplicates, uniqueness, etc. might be important qualities when we make a collection of things. So TradeOrder should implement IEquatable and IComparable.
ResizeableArray, or any other TradeOrder user's code, should not be doing this:
if (trade1.stockName == trade2.stockName && trade1.shares == trade2.shares && ...)
Client code should be able to do this:
if (trade1.Equals(trade2))
Should ResizableArray inherit Array?
Array already implements foreach, for example. You'll get all the built-in goodness and it will be exposed at the 'class level' to the client. There's lots of nifty search and sort methods that take advantage of ICompareable implementation.
Some fields are never changes so as I reuse elements I need to assign those fields just once
Memory Leaks
As ResizableArray gets used it will have empty and occupied elements scattered throughout the Array. You will end up writing code to scan the array for every Add(). Otherwise you'll be adding new elements when there are empty elements available.
It looks like we leave TradeOrder objects in unused array elements. This is the case at instantiation, clearly. Also, removing things involves Count but not nulling the reference there. I guarantee you'll be writing lots of extra code and spending lots of debugging time trying to keep the Count in sync with the actual active objects.
We don't know what an empty element is
The Count is being incremented/decremented but we're leaving objects in place. I'm assuming that at some point we're done with a given object, in which case it should be disposed of. Besides the memory issue, how do we know what array elements are in use and which ones we can over-write?
// made public to use in "foreach", but better to be private
public T[] array;
NO.
ResizeableArray should implement IEnumerable.
You are forcing client code to be written like this:
foreach (var trade in tradeOrders.array)
When they should write:
foreach (var trade in tradeOrders)
This is a violation of OO principles. Single Responsibility, Law of Demeter (least knowledge), Encapsulation ... a discussion for another time.
As a practical matter do not force the client to have to know how to manipulate the class properties. ResizableArray should know how to iterate itself.
The client should not tell the ResizeableArray object to resize, or how. | {
"domain": "codereview.stackexchange",
"id": 4799,
"tags": "c#, array"
} |
Reliability of a service | Question: Assume that I need to determine the reliability of a service. The service includes component a (software reliability=0.95) and component b (software reliability=0.98). I have 2 computers: Computer A (hardware reliability=0.99) and Computer B (hardware reliability=0.99).
I have two following cases:
Case 1: Deploy both a and b on computer A. For this case, the service
reliability is around 0.923.
Case 2: Deploy a on computer A, and b on computer B. For this case, the service reliability is around 0.912
I really wonder why the service reliability in case 2 is lower than in case 1. The thing is A and B have the same hardware reliability. Can someone clarify that?
Answer: In Case 1, for the service to not fail we need three things to not fail: computer A, component a and component b. In contrast, in Case 2 we also need computer B to not fail. Plainly the probability in the latter case is smaller, by a factor of 0.99.
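Concretely, the series-reliability arithmetic is just a product of non-failure probabilities (variable names below are mine):

```python
r_a, r_b = 0.95, 0.98   # software reliability of components a and b
r_A = r_B = 0.99        # hardware reliability of computers A and B

case1 = r_A * r_a * r_b           # a and b both on computer A
case2 = r_A * r_B * r_a * r_b     # one more machine that must not fail
print(case1, case2)               # ≈ 0.92169 and ≈ 0.9124731
```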
You can calculate the non-failure probability by just multiplying the non-failure probabilities for the relevant objects:
Case 1: $0.99 \cdot 0.98 \cdot 0.95 = 0.92169$.
Case 2: $0.99 \cdot 0.99 \cdot 0.98 \cdot 0.95 = 0.9124731$. | {
"domain": "cs.stackexchange",
"id": 13998,
"tags": "software-engineering, reliability"
} |
What is the difference between joint_state_publisher and joint_state_controller? | Question:
So the question is in the title. It looks like both of them do the same thing, don't they? Is there any reason to prefer one over another?
Thanks!
Originally posted by Long Smith on ROS Answers with karma: 75 on 2018-09-14
Post score: 4
Answer:
So the question is in the title. It looks like both of them do the same thing, don't they?
well .. sort of, but not really actually.
joint_state_publisher (JSP) is a stand-alone node, which has no way of interacting with hardware directly. In fact, the node itself is never really involved when you work with real robots, unless you have multiple publishers of JointState messages (on multiple topics) and have a desire to create a single topic onto which the complete state of all your JointState publishers is broadcast. The JSP can take all those topics and coalesce the msgs on them into a single one, containing all joints and all the state and publish that to a different topic.
The JSP can also be used with its GUI, in which case you can influence the values of the various joints it has found in the robot_description parameter using sliders. This is typically done when testing URDFs, or when you don't yet have an actual driver for a particular piece of hardware but still want to be able to generate JointState messages for that piece of hw. It's not a simulator of course, more of a UI for a data generator component.
The joint_state_controller however does not support any of this. It is a part of ros_control and cannot be used without it (of course you can, but that would need some work).
Its job is to use the data that comes out of the RobotHW::read(..) method and (eventually) convert that into a JointState message and publish it. It knows about transmissions and other configurable aspects of ros_control. It only does its work for joints for which it has been configured to do that, and it only does that in a ros_control context.
So summarising:
the joint_state_publisher is a stand-alone node, which does not interface with hardware directly. It is typically used when you don't have hardware, or when you have multiple publishers of JointState msgs and want a single, coherent view over all those joint_state topics.
the joint_state_controller is a class in one of the packages of ros_control. It is not a stand-alone node. This one does interface with hardware. It's called a "controller" but it's not: it is a class that transforms data from an internal ros_control representation to JointState messages and publishes those.
As to your question:
Is there any reason to prefer one over another?
Which one you use will depend entirely on what you are intending to do. But there is also not really a choice here, as I don't believe the two entities overlap in any really meaningful way.
If you have real hardware and need JointState messages published for it, and using ros_control makes sense (because you also need to actually control actuators, for instance), then using joint_state_controller would be the way to go. The JSP cannot help you in this case, as it does not interface with hw.
If you just want to test a URDF, or want to generate JointState messages for joints you don't yet have a driver for, or want to coalesce multiple JointState publications into one, then the JSP can help.
Originally posted by gvdhoorn with karma: 86574 on 2018-09-14
This answer was ACCEPTED on the original site
Post score: 19 | {
"domain": "robotics.stackexchange",
"id": 31770,
"tags": "ros-melodic, joint-state-publisher"
} |
Which wavelength is the most quiet, for a ground-based radio telescope? | Question: This might seem like a backwards question, but I'm interested in what wavelength to select in a (hypothetical) ground-based radio telescope observation to expect detecting as little as possible! :)
It should still be reachable from space, i.e. a wavelength with a high atmospheric absorption is probably a bad candidate.
I guess this could be answered by looking at a broad averaged spectrum collected by a radio telescope, and comparing against an atmospheric absorption spectrum chart.
Answer: You may be looking for the "water hole". See http://www.setileague.org/general/waterhol.htm. | {
"domain": "astronomy.stackexchange",
"id": 685,
"tags": "radio-astronomy"
} |
What nomenclature do names like fumarate and malate derive from? | Question: When reviewing the citric acid cycle, I find names like glutarate and succinate to be helpful in thinking about the structure of the molecules because I was taught about the sequential dicarboxylic acid naming scheme: oxalic (2 carbon), malonic (3C), succinic (4C), glutaric (5C) etc.
But I often struggle with fumarate and malate because I have not learned the numbering or nomenclature system that these names are derived from. Can someone point me in the right direction so that I can learn the background behind these names?
Knowing that will be helpful to my recall of the molecular structure. Then I can stop memorizing and start understanding.
Answer: Unfortunately, you will have to memorize the structures and the names because there is no chemical structure hidden in those names.
From Elsevier's Dictionary of Chemoetymology
fumaric acid $\ce{C4H4O4}$, derived from the genus name Fumaria (fumitory), from fumus (Latin:
smoke)
Fumaria plant
malic acid $\ce{C4H6O5}$, derived from malum (Latin: apple)
The same goes for all other common names like oxalic acid, acetic acid etc. | {
"domain": "chemistry.stackexchange",
"id": 14740,
"tags": "nomenclature"
} |
Coherent state of second quantized hamiltonian | Question: In my preparation for the exam I tried to solve the exercise 2.4 in Coleman's Introduction to Many-Body Physics. I like diagonalizing Hamiltonian, so I picked this problem. Also to learn more about coherent states, which we just accidentally discussed.
In this problem one has the bosonic Hamiltonian
\begin{align}
H = \omega \left( a ^ { \dagger } a + \frac { 1 } { 2 } \right) + \frac { 1 } { 2 } \Delta \left( a ^ { \dagger } a ^ { \dagger } + a a \right)
\end{align}
and transforms it with the Bogoliubov transformation
\begin{align}
\begin{aligned} b & = u a + v a ^ { \dagger } \\ b ^ { \dagger } & = u a ^ { \dagger } + v a \end{aligned}
\end{align}
to \begin{align}
H = \tilde { \omega } \left( b ^ { \dagger } b + \frac { 1 } { 2 } \right)
\end{align}
This is done by $\tilde \omega = \frac{1}{2uv} \Delta$. Now the coherent state comes into play:
The Hamiltonian has a boson pairing term. Show that the ground state of $H$ can be written as a coherent condensate of paired bosons, given by
\begin{align}
| \tilde { 0 } \rangle = e ^ { - \alpha \left( a ^ { \dagger } a ^ { \dagger } \right) } | 0 \rangle
\end{align}
Calculate the value of $\alpha$ in terms of $u$ and $v$. (Hint: $ | \tilde { 0 } \rangle $ is the vacuum for $b$, i.e. $ b | \tilde { 0 } \rangle = \left( u a + v a ^ { \dagger } \right) | \tilde { 0 } \rangle = 0 $. Calculate the commutator of $ \left[ a , e ^ { - \alpha a ^ { \dagger } a ^ { \dagger } } \right] $ by expanding the exponential as a power series. Find a value of $\alpha$ that guarantees that $b$ annihilates the vacuum $ | \tilde { 0 } \rangle$.)
Why is this a coherent state? - I don't know much about coherent states, but basically it means that $ \hat { a } | \alpha \rangle = \alpha | \alpha \rangle $. So I don't see this condition being fulfilled. They state it's the ground state because $b$ annihilates the vacuum and simultaneously it should be the coherent state. But Wikipedia says:
Physically, this formula ($ \hat { a } | \alpha \rangle = \alpha | \alpha \rangle $) means that a coherent state remains unchanged by the annihilation of field excitation or, say, a particle.
As far as I understood this, the coherent state shouldn't be zero after application of $b$.
How to calculate the commutator? - I tried the following: $$ \left[ a , \mathrm { e } ^ { - \alpha a ^ { \dagger } a ^ { \dagger } } \right] = \left[ a , \sum _ { k = 0 } ^ { \infty } \frac { \left( - \alpha a ^ { \dagger } a ^ { \dagger } \right) ^ { k } } { k ! } \right] = \left[ a , - \alpha a ^ { \dagger } a ^ { \dagger } + \frac { \alpha ^ { 2 } } { 2 } \left( a ^ { \dagger } a ^ { \dagger } \right) ^ { 2 } + \ldots \right] = - 2 \alpha a ^ { \dagger } + 2 \alpha ^ { 2 } \left( a ^ { \dagger } \right) ^ { 3 } - \ldots $$
After this I cannot compute the next element in the sum and didn't know how to continue.
Answer: A coherent state (in the Perelomov sense) is a displaced ground state. Here, the operators
$$
K_0=\frac{1}{2}\left(a^\dagger a+a a^\dagger\right)\, ,
K_+=a^\dagger a^\dagger\, ,\qquad K_-=a\,a
$$
span an $\mathfrak{su}(1,1)$ algebra. The displacement operator for this is an $SU(1,1)$ transformation, which can be normal-ordered to give the (unnormalized) states
$$
\vert\xi\rangle = e^{\xi a^\dagger a^\dagger}\vert 0\rangle\, ,\qquad \hbox{or}\qquad
\vert\xi\rangle = e^{\xi a^\dagger a^\dagger}\vert 1\rangle\, .
$$
In the unitary form one can write
$$
e^{-i\alpha K_0}e^{-i\beta K_y}\vert 0\rangle
$$
where $2iK_y=K_+-K_-$. The overlap
$$
\langle n\vert e^{-i\alpha K_0}e^{-i\beta K_y}\vert 0\rangle
$$
is actually an $SU(1,1)$ group function; such functions have a closed-form expression closely related to the usual Wigner $D$-functions for $SU(2)$. For $SU(1,1)$ they can be found (along with other details) in
Ui, Haruo. "Clebsch-Gordan formulas of the SU (1, 1) group." Progress of Theoretical Physics 44.3 (1970): 689-702.
Searching for "su(1,1) coherent states" in Google will produce multiple helpful hits.
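As a cross-check of the hint in the original problem: requiring $b\,\vert\tilde 0\rangle = 0$ together with $[a, e^{-\alpha a^\dagger a^\dagger}] = -2\alpha\, a^\dagger e^{-\alpha a^\dagger a^\dagger}$ points to $\alpha = v/2u$. That candidate value is easy to sanity-check numerically in a truncated Fock space (the numbers below are arbitrary; $u=\cosh r$, $v=\sinh r$ enforces the bosonic normalization $u^2-v^2=1$):

```python
import numpy as np

dim = 60                                        # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1.0, dim)), k=1)  # annihilation operator
ad = a.T                                        # creation operator

r = 0.3
u, v = np.cosh(r), np.sinh(r)                   # u^2 - v^2 = 1
alpha = v / (2 * u)                             # candidate coefficient

# |0~> = exp(-alpha a†a†)|0>, built term by term from the power series
vac = np.zeros(dim)
vac[0] = 1.0
state, term = vac.copy(), vac.copy()
for n in range(1, 30):
    term = (-alpha / n) * (ad @ (ad @ term))
    state = state + term

b = u * a + v * ad
residual = np.linalg.norm(b @ state) / np.linalg.norm(state)
print(residual)   # round-off tiny: b annihilates |0~>
```

The residual is limited only by round-off and the Fock-space truncation, confirming that $b$ annihilates the pair condensate for this $\alpha$.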
The key hint is that, if you write your $H$ in terms of $K_0$ and $2K_x=K_++K_-$, then the $SU(1,1)$ transformation $e^{-i\beta K_y}$ will diagonalize $H$ for some $\beta\in \mathbb{R}$. This is similar to the way a Hamiltonian $J_0+bJ_x$ is diagonalised by a rotation about $\hat y$. So the key is to work through commutators like $[K_\pm,K_y]$ to unwrap the effect of the exponential on $K_0$ and $K_x$; if done correctly some magic will happen. | {
"domain": "physics.stackexchange",
"id": 55746,
"tags": "quantum-mechanics, coherent-states"
} |
Why do I get different results from two calculation methods? | Question: I am wondering what the reason for the following is
We know that ,
exponential has a taylor representation :
$$\exp(x)=1+x+\frac{x^{2}}{2!}+\frac{x^{3}}{3!}+\cdots$$
Using the first n terms , in R , I compute
$\exp(-10)$ and get one answer.
However, if I compute $\frac{1}{\exp(10)}$, I get a slightly different answer.
For example, when I use the first 30 terms of the Taylor Series,
I get
$\exp(-10)=0.0009703416$
but $\frac{1}{\exp(10)}=4.539993\times10^{-5}$
Once I get to a large enough number of iterations, both converge to the correct value.
So why do we get differing answers when I use the same method, only replacing by a division of 1 over?
Thanks!
Answer: For $\exp(-10)$, more terms have to be computed since the 30-th term $\dfrac{(-10)^{30}}{30!}\approx 0.00377$ is several times bigger than the partial sum so far, 0.0009703416.
For $\exp(10)$, all terms are positive. The 30-th term $\dfrac{(10)^{30}}{30!}\approx 0.00377$ is already negligible compared to the partial sum so far, 22026.464.
Here is the output of my Python program that computes $\exp(-10)$ by that Taylor series. The first column gives the ordinal of each term, the second column shows the value of that term, and the third column shows the partial sum so far.
0 1.0 1.0
1 -10.0 -9.0
2 50.0 41.0
3 -166.66666666666666 -125.66666666666666
4 416.66666666666663 291.0
5 -833.33333333333326 -542.33333333333326
6 1388.8888888888887 846.55555555555543
7 -1984.1269841269839 -1137.5714285714284
8 2480.1587301587297 1342.5873015873012
9 -2755.7319223985887 -1413.1446208112875
10 2755.7319223985887 1342.5873015873012
11 -2505.2108385441716 -1162.6235369568703
12 2087.6756987868098 925.05216182993945
13 -1605.9043836821616 -680.8522218522221
14 1147.0745597729726 466.22233792075053
15 -764.71637318198179 -298.49403526123126
16 477.94773323873864 179.45369797750737
17 -281.14572543455211 -101.69202745704473
18 156.19206968586229 54.500042228817563
19 -82.206352466243317 -27.706310237425754
20 41.103176233121658 13.396865995695904
21 -19.572941063391266 -6.1760750676953613
22 8.8967913924505755 2.7207163247552142
23 -3.8681701706306848 -1.1474538458754706
24 1.6117375710961186 0.46428372522064798
25 -0.64469502843844739 -0.18041130321779941
26 0.24795962632247978 0.067548323104680369
27 -0.09183689863795548 -0.024288575533275111
28 0.032798892370698378 0.00851031683742327
29 -0.011309962886447721 -0.0027996460490244501
30 0.0037699876288159102 0.00097034157979145996
31 -0.0012161250415535199 -0.00024578346176205997
32 0.00038003907548547002 0.00013425561372341999
33 -0.00011516335620772 1.90922575157e-05
34 3.3871575355210003e-05 5.2963832870910003e-05
35 -9.6775929586300006e-06 4.3286239912280003e-05
36 2.6882202662899999e-06 4.5974460178560002e-05
37 -7.2654601791999997e-07 4.5247914160649999e-05
38 1.9119632050000001e-07 4.5439110481150002e-05
39 -4.9024697569999999e-08 4.539008578359e-05
40 1.225617439e-08 4.5402341957980002e-05
41 -2.9893108300000001e-09 4.5399352647149997e-05
42 7.1174066999999995e-10 4.5400064387830001e-05
43 -1.6552109000000001e-10 4.5399898866740002e-05
44 3.761843e-11 4.539993648517e-05
45 -8.3596500000000006e-12 4.5399928125519999e-05
46 1.81732e-12 4.539992994283e-05
47 -3.8665999999999999e-13 4.5399929556169997e-05
48 8.0549999999999995e-14 4.5399929636719997e-05
49 -1.6440000000000001e-14 4.5399929620280003e-05
50 3.2899999999999998e-15 4.5399929623570001e-05
51 -6.4000000000000005e-16 4.5399929622930003e-05
52 1.2e-16 4.5399929623049997e-05
53 -2.0000000000000001e-17 4.539992962303e-05
54 0.0 4.539992962303e-05
55 -0.0 4.539992962303e-05
4.539992962303116e-05
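A short script along these lines reproduces the table (a sketch, not necessarily the exact program used; each term is derived from the previous one to avoid computing $10^k$ and $k!$ separately):

```python
import math

term, total = 1.0, 0.0        # term = (-10)^k / k!, starting at k = 0
for k in range(60):
    total += term
    print(k, term, total)
    term *= -10.0 / (k + 1)   # next term from the previous one

print(total)           # stable, but not all digits are correct
print(math.exp(-10))   # 4.5399929762484854e-05
```

The final partial sum agrees with `math.exp(-10)` only to about eight significant digits — exactly the cancellation effect discussed above.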
Although the partial sum becomes stable after the 55-th term, not all digits of 4.539992962303e-05 are correct, because of the precision of terms such as the 8-th term. The actual value of $\exp(-10)$ should be something like 4.5399929762485e-05
In the end, it is faster and easier to compute $\exp(-x)$ to a specified precision using $\dfrac1{\exp(x)}$ instead of directly for $x>0$, especially when $x$ is large. | {
"domain": "cs.stackexchange",
"id": 13014,
"tags": "floating-point, numerical-analysis"
} |
Work-Energy Theorem but Work is $F.S_{COM}$ (Extended) | Question: From the definition of Work Done, we know it is $\vec F . \vec s$ , where $\vec s$ is displacement of point of application of force.
Everything was alright until I read these texts from Kleppner and Kolenkow:
highlighted texts show clearly, they were considering the displacement of centre of mass. Also, below the highlighted text they clear say that "dR is the displacement of the centre of mass".
Also,
Moreover, In this example they have considered the work done by static friction, the point of contact is at rest because of pure rolling.
Now, I did read this answer: https://physics.stackexchange.com/a/121906 which is indirectly related to this and does somehow give intuition that it is the point of application of force (I held the same view until I read these), but the answer doesn't seem to clarify what they did.
Why did KK decide to take the centre of mass displacement? Is it correct? If so, when can we take like that?
I want to know whether the formula they used is valid or not. If so, doesn't it contradict with the answer i linked? If it is valid, in what cases? Only for rigid bodies?Also, how would you justify the definition of work here?... Pls explain all these points keeping in mind the answer that I linked.
Answer: The work-energy theorem is a difficult concept, and it is frequently misunderstood. Note that before equation 5.15 we are given equation 5.14, which is: $$\mathbf{ F} = M \ddot{\mathbf{R}}$$ This is Newton's 2nd law. The $\mathbf{F}$ in Newton's 2nd law is the net force $$\mathbf{F_{net}}=\sum_{i=1}^n \mathbf{F_i}$$ To me, this is a little confusing already. The authors should always (in my opinion) write the net force as $\mathbf{F_{net}}$ or with some other similar indication that it is a net force and not an individual force.
Similarly, the $\mathbf{R}$ in Newton's 2nd law is the position center of mass (COM). While it is not as critical as the net-force distinction, it would also help in clarity if it were written $\mathbf{R_{com}}$. So if we write 5.15 with full clarity then we get $$\int_{\mathbf{R_{com,a}}}^{\mathbf{R_{com,b}}}\mathbf{F_{net}}\cdot d\mathbf{R_{com}}=\frac{1}{2}M \mathbf{V_{com,b}}^2-\frac{1}{2}M \mathbf{V_{com,a}}^2$$
This is clearer, but less concise and more effort to write.
Why did KK decide to take the centre of mass displacement? Is it correct? If so, when can we take like that?
They took the center of mass displacement because that is the $\mathbf{R}$ in Newton's 2nd law. As such, it is correct whenever you are using Newton's 2nd law. Indeed, that is the meaning of the 2nd law.
However, although it is always correct in Newtonian mechanics, the point that causes confusion is the following. The quantity on the left of 5.15 is called the "net work" where the word "net" refers to the "net force". In turn, the word "net" in "net force" is used because the "net force" is the sum of all of the individual forces acting on the system. This inevitably leads to the confusion that students believe that the "net work" is also the sum of all of works done by each individual force acting on the system. This is false.
Let's use the term "total work" to refer to the sum of all works done by each individual force acting on the system. The "net work" is not necessarily equal to the "total work". They are two separate concepts, and they can even differ when there is only a single force acting on the system!
The total work gives the total change in energy. The net work gives only the portion of the change of total energy corresponding to a change in the kinetic energy of the center of mass, often called the translational kinetic energy.
For example, consider a spring being compressed at a constant rate, $v$, by a force $F$ from my hand while the other side is attached to a fixed wall. By Newton's 2nd law, the force from the wall is $-F$. From these we can calculate both the "net work" and the "total work". The "net work" is the net force times the displacement of the COM, and since the net force is $F-F=0$ the "net work" is also zero regardless of the displacement. The work energy theorem says that the spring is not gaining KE. The "total work" is the sum of the work from each force. The wall has a displacement of zero, so the wall's work is $0$. The hand has a work of $\int F \ ds$, which from Hooke's law is $1/2 \ k x^2$, assuming that the spring started at equilibrium. So the total work is $0+1/2 \ kx^2$. Both the "net work" and the "total work" are correct. The "net work" says that the spring is not gaining KE. The "total work" says that the spring is gaining total energy. So, in this case, the difference is the increase in internal energy, the elastic potential energy.
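To attach numbers to this example (all values made up; the compression is treated as quasi-static, so the hand's force is $F = kx$):

```python
k, X = 200.0, 0.1        # made-up spring constant (N/m) and total compression (m)

# Total work: integrate the hand's force F = kx over the hand's displacement
N = 100_000
dx = X / N
W_hand = sum(k * (i + 0.5) * dx * dx for i in range(N))  # midpoint sum of ∫ kx dx
W_wall = 0.0             # the wall's contact point never moves
W_total = W_hand + W_wall

# Net work: the net force is zero, so it is zero for any COM displacement
W_net = 0.0

print(W_total, 0.5 * k * X**2)   # both ≈ 1.0 J of elastic PE; the KE change is W_net = 0
```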
If it is valid, in what cases? Only for rigid bodies? Also, how would you justify the definition of work here?
In the question about the drum, they are also using the rotational version of the work energy theorem. This would be the "net rotational work" about the COM. It is also valid when the rotational version of Newton's 2nd law is valid. So, in this case $fb\theta$ is the "net torque" not the torque due to the single force $f$. Again, this "net rotational work" is a separate concept from the "total work" ("total work" is not divided into rotational and translational parts). Those two quantities differ, even in this case where there is a single force providing torque. For the "net rotational work" it doesn't matter that the point of application of the force providing the torque is not moving, that is relevant for the separate concept of "total work". | {
"domain": "physics.stackexchange",
"id": 100422,
"tags": "newtonian-mechanics, rotational-dynamics, work"
} |
base32 implementation in PHP | Question: I don't actually know much about how base32 (or base64) works, but I noticed that there was no official base32 implementation in PHP, so I figured I'd make one.
I Googled around a bit to figure out how it works, and found this page. Using the examples at the bottom, I hacked up this base32 class. GitHub project: https://github.com/NTICompass/PHP-Base32
<?php
/**
* NTICompass' crappy base32 library for PHP
*
* http://labs.nticompassinc.com
*/
class Base32{
var $encode, $decode, $type;
// Supports RFC 4648 (default) or Crockford (http://www.crockford.com/wrmg/base32.html)
function __construct($alphabet='rfc4648'){
$alphabet = strtolower($alphabet);
$this->type = $alphabet;
// Crockford's alphabet removes I,L,O, and U
$crockfordABC = range('A', 'Z');
unset($crockfordABC[8], $crockfordABC[11], $crockfordABC[14], $crockfordABC[20]);
$crockfordABC = array_values($crockfordABC);
$alphabets = array(
'rfc4648' => array_merge(range('A','Z'), range(2,7), array('=')),
'crockford' => array_merge(range(0,9), $crockfordABC, array('='))
);
$this->encode = $alphabets[$alphabet];
$this->decode = array_flip($this->encode);
// Add extra letters for Crockford's alphabet
if($alphabet === 'crockford'){
$this->decode['O'] = 0;
$this->decode['I'] = 1;
$this->decode['L'] = 1;
}
}
private function bin_chunk($binaryString, $bits){
$binaryString = chunk_split($binaryString, $bits, ' ');
if($this->endsWith($binaryString, ' ')){
$binaryString = substr($binaryString, 0, strlen($binaryString)-1);
}
return explode(' ', $binaryString);
}
// String <-> Binary conversion
// Based off: http://psoug.org/snippet/PHP-Binary-to-Text-Text-to-Binary_380.htm
private function bin2str($binaryString){
// Make sure binary string is in 8-bit chunks
$binaryArray = $this->bin_chunk($binaryString, 8);
$string = '';
foreach($binaryArray as $bin){
// Pad each value to 8 bits
$bin = str_pad($bin, 8, 0, STR_PAD_RIGHT);
// Convert binary strings to ascii
$string .= chr(bindec($bin));
}
return $string;
}
private function str2bin($input){
$bin = '';
foreach(str_split($input) as $s){
// Return each character as an 8-bit binary string
$s = decbin(ord($s));
$bin .= str_pad($s, 8, 0, STR_PAD_LEFT);
}
return $bin;
}
// starts/endsWith from:
// http://stackoverflow.com/questions/834303/php-startswith-and-endswith-functions/834355#834355
private function startsWith($haystack, $needle){
$length = strlen($needle);
return substr($haystack, 0, $length) === $needle;
}
private function endsWith($haystack, $needle){
$length = strlen($needle);
return substr($haystack, -$length) === $needle;
}
// base32 info from: http://www.garykessler.net/library/base64.html
// base32_encode
public function base32_encode($string){
// Convert string to binary
$binaryString = $this->str2bin($string);
// Break into 5-bit chunks, then break that into an array
$binaryArray = $this->bin_chunk($binaryString, 5);
// Pad array to be divisible by 8
while(count($binaryArray) % 8 !== 0){
$binaryArray[] = null;
}
$base32String = '';
// Encode in base32
foreach($binaryArray as $bin){
$char = 32;
if(!is_null($bin)){
// Pad the binary strings
$bin = str_pad($bin, 5, 0, STR_PAD_RIGHT);
$char = bindec($bin);
}
// Base32 character
$base32String .= $this->encode[$char];
}
return $base32String;
}
// base32_decode
public function base32_decode($base32String){
$base32Array = str_split(str_replace('-', '', strtoupper($base32String)));
$binaryArray = array();
$string = '';
foreach($base32Array as $str){
$char = $this->decode[$str];
if($char !== 32){
$char = decbin($char);
$string .= str_pad($char, 5, 0, STR_PAD_LEFT);
}
}
while(strlen($string) %8 !== 0){
$string = substr($string, 0, strlen($string)-1);
}
return $this->bin2str($string);
}
}
?>
This code works, I've tested it using this page, and it gives the same result, but I don't think this is the best way of doing base32.
Is there a better way to do base32 that's maybe more efficient than what I have?
Answer: Your constructor going through all the work to build both alphabets and then throwing one away seems odd. I'd probably have a Base32 base class, and have the two alphabets be subclasses.
Using the binary conversion does seem problematic. This is especially true since the numbers are already in binary inside the computer.
I can see a few different approaches:
Make an array of 5 bit numbers:
value = 0
bits_remaining = 0
while more data or bits_remaining >= 5:
    if bits_remaining < 5:
        value = (value << 8) | ord(next letter in data)
        bits_remaining += 8
    remove the top five bits of value and place into array
    bits_remaining -= 5
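To make that concrete, here is a working sketch of the accumulator approach in Python (illustration only — the same bit-twiddling translates directly to PHP; RFC 4648 alphabet and '=' padding assumed):

```python
import base64

def b32encode(data: bytes) -> str:
    abc = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567"   # RFC 4648 alphabet
    value, bits, out = 0, 0, []
    for byte in data:
        value = (value << 8) | byte             # append 8 bits to the accumulator
        bits += 8
        while bits >= 5:                        # emit complete 5-bit groups
            bits -= 5
            out.append(abc[(value >> bits) & 0x1F])
    if bits:                                    # leftover bits, zero-padded on the right
        out.append(abc[(value << (5 - bits)) & 0x1F])
    while len(out) % 8:                         # '=' padding to a multiple of 8 chars
        out.append("=")
    return "".join(out)

print(b32encode(b"hello"))                      # NBSWY3DP, same as base64.b32encode
```

Cross-checking against the standard library's `base64.b32encode` is a cheap way to validate any home-grown implementation.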
Hand-code each case (5 input bytes → 8 output characters):
codes = array(
(value[0] & 0xfd)) >> 3,
(value[0] & 0x3) << 3 | value[1] & (0x7) >> 3,
...
Since 5 bytes is exactly 40 bits, i.e. eight 5-bit groups, you can chunk your data into five-byte pieces and just hand-code the necessary bit operations to figure out which alphabet index should be fetched.
Use GMP
value = gmp_init(0)
for( letter in data)
{
value = gmp_or( gmp_mult( value, gmp_pow(2, 8)), ord(letter))
}
gmp_strval(value, 32)
(Actually these algorithms have not been fully thought through, but perhaps this might give you an idea of things to try) | {
"domain": "codereview.stackexchange",
"id": 4013,
"tags": "php, algorithm"
} |
Is there a way to calculate $\sum_{i<j<k\leq n} A_i \cdot A_j \cdot A_k$ faster than $O(n^3)$ | Question: Given an array $A$ of size $n$, we want to calculate $\sum_{i<j<k\leq n} A_i \cdot A_j \cdot A_k$. Is there a way to speed this up beyond the standard $O(n^3)$ calculation with 3 for loops? I haven't worked with sums so I don't know their properties, but I was thinking there might be some way to rewrite the formula to get faster calculations.
Answer: Let $A = \sum_{i=1}^n A_i$. We have
$$
\begin{align*}
A^3 &= \sum_{i=1}^n A_i^3 + 3\sum_{i=1}^n \sum_{j \neq i} A_i^2 A_j + 6 \sum_{i < j < k} A_i A_j A_k \\ &=
\sum_{i=1}^n A_i^3 + 3 \sum_{i=1}^n A_i^2 (A - A_i) + 6 \sum_{i < j < k} A_i A_j A_k.
\end{align*}
$$
This gives a linear time algorithm.
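Rearranging that identity for the triple sum gives a direct implementation (the function name is mine), easy to verify against the brute-force triple loop:

```python
from itertools import combinations

def e3(A_list):
    """Evaluate sum_{i<j<k} A_i A_j A_k in linear time, from the identity above."""
    A = sum(A_list)                                  # = sum_i A_i
    cubes = sum(x ** 3 for x in A_list)
    cross = sum(x * x * (A - x) for x in A_list)     # sum_i A_i^2 (A - A_i)
    return (A ** 3 - cubes - 3 * cross) // 6         # exact for integer input

vals = [2, -1, 4, 3, 7]
brute = sum(x * y * z for x, y, z in combinations(vals, 3))
print(e3(vals), brute)   # the two agree
```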
More generally, the theory of symmetric polynomials ensures that every symmetric polynomial can be computed in linear time, since it is a polynomial in the quantities $B_d := \sum_{i=1}^n A_i^d$. For example,
$$
\sum_{i<j<k} A_i A_j A_k = \frac{B_1^3+2B_3-3B_1B_2}{6}.
$$ | {
"domain": "cs.stackexchange",
"id": 11418,
"tags": "algorithms, mathematical-programming"
} |
What are some methods to reduce a dataframe so I can pass it as one sample to an SVM? | Question: I need to classify participants in an NLP study into 3 classes, based on multiple sentences spoken by the participant. I performed a feature extraction on each sentence, and so I am left with a matrix of length (# of sentences spoken x feature vector length for each sentence) for each participant. So, for me, each sample is represented by a matrix of varying length, since some participants spoke more sentences than others. What are some ways for me to reduce the dimensionality of each matrix, and also standardize the length, so I can perform an SVM with each participant as a sample?
I am also interested in learning about other methods to classify my samples, if SVMs are not the best fit. Thank you.
Answer: SVMs are not meant to solve "arbitrarily long" classification problems, so you have a few choices:
use PCA for sequences, however it takes very long since it has to build a giant matrix over which it can perform PCA
change model, and pick a better suited one (eg RNN)
pad and cut your data (most often is not needed the whole phrase to predict the output)
introduce some prior knowledge, for example order the most recurrent words, and remove all of those that don't add anything to the meaning (be careful, this might lead to many problems, for example if you remove "not")
use recurrent autoencoders, to transform a sentence to a fixed size vector, over which you can perform any ML algorithm (but this might cause some problem from the POV of explainability)
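For example, the pad-and-cut option might look like this (all names and sizes below are hypothetical):

```python
import numpy as np

def pad_or_cut(mat, n_rows):
    """Truncate or zero-pad a (sentences x features) matrix to exactly n_rows rows,
    then flatten it into one fixed-length vector."""
    mat = np.asarray(mat)[:n_rows]                      # cut extra sentences
    missing = n_rows - mat.shape[0]
    if missing > 0:                                     # pad short participants
        mat = np.vstack([mat, np.zeros((missing, mat.shape[1]))])
    return mat.ravel()

# hypothetical participants with 3, 5 and 2 sentences, 4 features per sentence
rng = np.random.default_rng(0)
participants = [rng.random((k, 4)) for k in (3, 5, 2)]

X = np.array([pad_or_cut(p, n_rows=4) for p in participants])
print(X.shape)   # (3, 16): one fixed-size sample per participant, e.g. for sklearn's SVC
```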
In my opinion, cut&pad is the best option to start with, simple to implement and often very powerful, but this might change from context to context | {
"domain": "datascience.stackexchange",
"id": 10929,
"tags": "nlp, svm, reshape"
} |
Can this FindString function be optimized further, in terms of speed? | Question: int FindStr(char* str, int strsize, char* fstr, int from)
{
for(int i=from, j=0; i<strsize; i++)
{
if(str[i]==fstr[j])
j++;
else
{i-=j; j=0;}
if(fstr[j]=='\0')
return i-j+1;
}
return -1;
}
The function searches for a string fstr in str and returns its index in str if found, otherwise, it will return -1. It's also possible to specify where to start searching in the string.
My question is, can I optimize this function further? Also, do you see any potential problems in this function?
Answer: Some comments to add to the first two of @200_success:
strstr does a good job. Why reinvent?
sizes are often passed as size_t rather than int
modifying the loop variable within the loop is generally considered bad
practice
i -= j executes on every loop unless there is a match. Mostly in this
case j is zero so the line has no effect, but it still executes
if fstr is an empty string it returns the wrong result (1)
add some spaces around operators and after if, for | {
"domain": "codereview.stackexchange",
"id": 5498,
"tags": "c++, optimization, strings, search"
} |
Why doesn't calculating the de Broglie wavelength work with h in eV·s? | Question: I'm trying to calculate the de Broglie wavelength of a particle with a known momentum $p = 980.93 \,\mathrm{GeV}/\mathrm s$. The de Broglie relation is:
$$
\lambda = \frac{h}{p}
$$
but I notice that if I use the value of $4.136\times 10^{-15}\, \mathrm{eV}\cdot\mathrm s$ for Planck's constant, the answer that I get from the equation is in units of $\mathrm s^2$, not $\mathrm m$. Specifically:
$$
\lambda = \frac{4.136 \times 10^{-15} \mathrm{eV}\cdot \mathrm s}{980.93 \, \mathrm{GeV}/\mathrm s} = 4.25\times 10^{-27}\, \mathrm s^2
$$
which is not at all equivalent to the value that one gets when using $h = 6.626\times 10^{-34} \,\mathrm J\cdot\mathrm s$ instead (since $\mathrm J\cdot\mathrm s$ can be converted into $\frac{\mathrm{kg}\cdot \mathrm m^2}{\mathrm s^2}\cdot \mathrm s$).
$$
\lambda = \frac{6.626\times10^{-34}\,\frac{\mathrm{kg}\cdot \mathrm m^2}{\mathrm s^2}\cdot \mathrm s}{5.237\times10^{-16}\frac{\mathrm{kg}\cdot \mathrm m}{\mathrm s}} = 1.27 \times 10^{-18} \,\mathrm m
$$
What's going on here?
Answer: You're running into trouble because, in order to give momentum units of energy, you're setting the speed of light equal to 1, $c=1$. If you keep the units of $c$, the momentum should be given in units of $\text{eV}/c$. By dimensional analysis you can check for yourself that eV/s does not have units of momentum (kg$\cdot$m/s).
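As a quick numerical sanity check (using the rounded constants from the post), $h$ in eV·s times $c$, divided by $pc$ in eV, does come out in metres:

```python
h = 4.136e-15    # Planck's constant in eV*s
c = 2.99e8       # speed of light in m/s (rounded)
pc = 980.93e9    # p*c in eV, i.e. p = 980.93 GeV/c

lam = h * c / pc          # (eV*s)*(m/s)/eV -> metres
print(lam)                # ~1.26e-18 m
```

This agrees, up to rounding, with the $1.27\times10^{-18}$ m obtained above in SI units.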
Therefore, in your case the momentum is actually given by $p = 980.93~ \text{GeV}/c$ which yields
$$\lambda = \frac{4.136 \times 10^{-15} \text{eV}\cdot \text{s}}{980.93 \, \text{GeV}/c} = \frac{4.136 \times 10^{-15} \text{eV}\cdot \text{s} \cdot 2.99 \times 10^8 \text{m/s}}{980.93 \, \text{GeV}} = 1.27\times 10^{-18}\, \text{m}. $$ | {
"domain": "physics.stackexchange",
"id": 48529,
"tags": "special-relativity, wave-particle-duality"
} |
ros launch issue | Question:
I have just started learning ROS with the help of the ROS tutorials. The problem I am facing is that, following the tutorial at http://www.ros.org/wiki/ROS/Tutorials/UsingRxconsoleRoslaunch,
when I type the command in a new terminal it gives me this error:
danishraza@ubuntu:~/ros_workspace$ roscd beginner_tutorials
roscd: No such package 'beginner_tutorials'
I have Ubuntu 10.04 LTS on my machine and ROS Electric.
Originally posted by danish on ROS Answers with karma: 11 on 2012-02-21
Post score: 0
Answer:
This tutorial assumes you have created the 'beginner_tutorials' package in the preceding tutorial on Creating a ROS Package.
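If the package has been created but roscd still cannot find it, the workspace directory is usually missing from ROS_PACKAGE_PATH. An illustrative check; the paths are only examples for a rosbuild-era (ROS Electric) setup:

```shell
# Example paths for a rosbuild-era (ROS Electric) setup; adjust to your install.
export ROS_PACKAGE_PATH=~/ros_workspace:/opt/ros/electric/stacks

# The workspace that contains beginner_tutorials must appear in the path:
echo "$ROS_PACKAGE_PATH" | grep -q ros_workspace && echo "workspace on path"
```

Adding the export line to ~/.bashrc makes the setting persist across terminals.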
Originally posted by Stefan Kohlbrecher with karma: 24361 on 2012-02-21
This answer was ACCEPTED on the original site
Post score: 5
Original comments
Comment by danish on 2012-02-22:
I have already created the package but the problem is still there; kindly suggest another solution.
Comment by Stefan Kohlbrecher on 2012-02-22:
Then you most likely don't have your ROS_PACKAGE_PATH set up correctly, so your 'beginner_tutorials' package is not found by ROS. Have a look at this guide: http://www.ros.org/wiki/ROS/Tutorials/InstallingandConfiguringROSEnvironment | {
"domain": "robotics.stackexchange",
"id": 8318,
"tags": "roslaunch"
} |
$N$-body problem in 2 dimensions | Question: I have been trying to implement a 2D $N$-body physics simulator and am currently using Newton's law of gravitation to calculate the magnitude of the force a pair of particles experiences.
$$F=\frac{GM_1M_2}{r^2}. \tag{1}$$
However, as I have found in this thread, it seems more accurate to alter the inverse square law to simply $1/r$ instead, which makes sense to me since we are working in 2 dimensions instead of 3. Something along the lines of
$$F=\frac{GM_1M_2}{r}. \tag{2}$$
However, every other 2D implementation of the $N$-body problem uses the former equation, and I am wondering whether one is more correct than the other? Which one is more suitable for my case?
Answer: It depends on what you want to simulate. When you use the inverse square law
$$F = \frac{G M_1 M_2}{r^2},$$
the particles will behave as they do in our 3-dimensional space as viewed from a certain angle, thereby "ignoring" the spatial coordinate that is perpendicular to the plane of view.
If you actually want to see how particles would behave under the influence of gravity if our universe had only 2 spatial dimensions, then you would use
$$F = \frac{G M_1 M_2}{r}.$$
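As an illustration of the two choices, here is a minimal sketch of the pairwise force magnitude; $G$, the masses, and the separation are arbitrary demo values:

```python
# Only the exponent distinguishes the two force laws discussed above.
def gravity(m1, m2, r, G=1.0, exponent=2):
    """exponent=2: 3-D gravity viewed edge-on; exponent=1: 'true 2-D' gravity."""
    return G * m1 * m2 / r ** exponent

f_3d = gravity(1.0, 1.0, 2.0, exponent=2)   # 0.25
f_2d = gravity(1.0, 1.0, 2.0, exponent=1)   # 0.5
print(f_3d, f_2d)
```

Note how the $1/r$ law falls off much more slowly with distance, which is what produces the different dynamics.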
It's interesting to play around with both cases, as they give you very different dynamics. | {
"domain": "physics.stackexchange",
"id": 96635,
"tags": "newtonian-mechanics, newtonian-gravity, gauss-law, spacetime-dimensions"
} |
Species identification - What is this "bug" | Question: The title says it all. What the heck is it?
Edit:
I found this on my car, in Kansas City.
Answer: After conducting a small investigation, it would appear most likely that the spider captured in the attached photograph is an arachnid in the family 'Thomisidae', the crab spiders.
Although the genus Araniella proposed above by user137 shares a superficial similarity with this family, thomisids generally do not construct webs to ensnare prey; I might hazard that the web on your car does not share the structure, shown below, of an orb-weaver spider's web (Araniella).
From the available information it is challenging to propose a genus, let alone a species. Based on the location of the car, I might suggest the genus Misumenoides or the genus Thomisus.
The general characteristics of crab spiders are the ambush of prey as opposed to trapping prey; the inverted V-shape under the cephalothorax; and the sideways movement of certain species in the Thomisidae family. It is of interest to note that any member in the population could vary appreciably in colour.
Wikipedia article concerning Crab Spiders available from: https://en.wikipedia.org/wiki/Thomisidae
Britannica regarding Crab Spiders available from: https://www.britannica.com/animal/crab-spider
Information regarding Misumenoides is available from: https://en.wikipedia.org/wiki/Misumenoides | {
"domain": "biology.stackexchange",
"id": 8771,
"tags": "species-identification, entomology"
} |
What is the meaning of Schrodinger equation solution for bound state of delta potential well? | Question: Let's assume that we have a delta potential well with $V = -\lambda\delta(x)$, where $\lambda >0$. Now if we solve the Schrodinger equation, we get one eigenvalue $E_b=-\frac{m\lambda^2}{2\hbar^2}$ with only one eigenfunction $\psi(x) = \sqrt{\frac{m\lambda}{\hbar^2}}\exp(-\frac{m\lambda}{\hbar^2}|x|)$. What does that even mean?
Having only one eigenfunction means that no matter how many times we measure the energy of the system, we would get $E_b$. So on average we will have $<E> = E_b$. It seems a bit problematic, since we can produce an electron beam with $E<0$ where $E$ can be any number like $E_p$; it is not restricted to only $E_b$. And that would mean conservation of energy is violated. In other words we have $<E>\neq E_p$.
Do note that in quantum mechanics it doesn't matter if in the first few measurements we get an energy like $E_m$ where $E_m \neq E_p$; that is even natural. But on average we expect $<E> = E_p$. At least that seems to be the case for other stationary systems (potential constant w.r.t. time). Or am I wrong, and should we throw out conservation of energy altogether?
After all it's possible to prove this point by Ehrenfest theorem.
$$\frac{d}{dt}<A> = \frac{i}{\hbar}<[H,A]>+<\frac{\partial A}{\partial t}>$$
We have $A=H$ here, so
$$\frac{d}{dt}<H> = 0$$
in other words, we do have conservation of energy, and it has nothing to do with the uncertainty principle, which is something else entirely. If you don't like it, that's OK; just assume that as $\Delta t \to \infty$, we don't have $<E> = E_p$ for this particular system.
One possible answer is that only an electron with energy $E_b$ will be bound by this system. That is, if we produce a beam with energy $E_b$ we get a bound state; otherwise we don't get a bound state at all. But this is not satisfying, since one can ask: what will happen to a beam with $E<0$ and $E \neq E_b$ in this system? It is neither a bound state nor an unbound state (just look at the Schrodinger equation). How can I explain the behavior of this kind of electron with the Schrodinger equation?
Edit for comments:
So in short I would like to ask: what is the meaning of $E_p<0$ where $E_p \neq E_b$? Is it a bound state?
If it's, then does that mean conservation of energy will be violated here? (As I stated in second paragraph).
If it is not a bound state, then what is it?! It cannot be an unbound state since $E_p<0$. Is it neither a bound state nor an unbound state?!
Answer:
we can produce an electron beam with $E<0$ where $E$ can be any number like $E_p$.
We can't. All the states with $E<0$ are evanescent waves. They can only decay exponentially towards e.g. $x\to+\infty$, but then they'll grow exponentially towards $x\to-\infty$.
The potential well bends the wavefunction, so that exponential growth can be altered ($\exp(x)$ becomes a mix of $\exp(-x)$ and $\exp(x)$ after the well) and, when $E=E_b$, be turned into exponential decay.
So in short I would like to ask what's meaning of $E_p<0$ where $E_p \neq E_b$. Is it a bound state?
These are not stationary states. Actually, they are not quantum states at all. All solutions of the Schrödinger's equation with such values of $E$ are exponentially divergent at least on one side. This makes these "wavefunctions" not only non-square-integrable: none of these functions will even solve the boundary value problem if you set homogeneous Dirichlet or Neumann boundary conditions at finite points $x=\pm a$.
If it is not a bound state, then what is it?! It cannot be an unbound state since $E_p<0$. Is it neither a bound state nor an unbound state?!
A question to you: what are the solutions of the particle-in-a-box problem where $E$ is not one of the eigenvalues $E_n$? Exactly the same: they are not solutions of the boundary value problem.
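This divergence can also be checked numerically. Below is a rough shooting sketch (an illustration, not part of the original answer) in simplified units $\hbar = m = \lambda = 1$, for which the standard result $E_b = -m\lambda^2/2\hbar^2$ puts the bound state at $E = -1/2$: start from the decaying solution far on the left, apply the derivative jump $\psi'(0^+) = \psi'(0^-) - 2\psi(0)$ imposed by the delta well, and integrate outward.

```python
import math

def rk4_step(psi, dpsi, h, E):
    def f(p, d):
        return d, -2.0 * E * p            # psi'' = -2*E*psi away from x = 0
    k1 = f(psi, dpsi)
    k2 = f(psi + 0.5 * h * k1[0], dpsi + 0.5 * h * k1[1])
    k3 = f(psi + 0.5 * h * k2[0], dpsi + 0.5 * h * k2[1])
    k4 = f(psi + h * k3[0], dpsi + h * k3[1])
    return (psi + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            dpsi + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def psi_at_plus_L(E, L=10.0, n=4000):
    q = math.sqrt(-2.0 * E)
    psi, dpsi = math.exp(-q * L), q * math.exp(-q * L)  # decaying branch at -L
    h = L / n
    for _ in range(n):                    # integrate from -L up to the well
        psi, dpsi = rk4_step(psi, dpsi, h, E)
    dpsi -= 2.0 * psi                     # derivative jump across the delta
    for _ in range(n):                    # integrate from the well out to +L
        psi, dpsi = rk4_step(psi, dpsi, h, E)
    return psi

print(abs(psi_at_plus_L(-0.5)))   # small: decays on both sides
print(abs(psi_at_plus_L(-0.3)))   # enormous: exponentially divergent
```

Only at (numerically) the eigenvalue does the wavefunction stay bounded on the right; any other $E<0$ picks up the growing exponential.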
if we solve Schrodinger equation, we get one eigenvalue $E_b=-\frac{m\lambda^2}{2\hbar^2}$ with only one eigenfunction $\psi(x) = \sqrt{\frac{m\lambda}{\hbar^2}}\exp(-\frac{m\lambda}{\hbar^2}|x|)$. What does that even mean?
This means that the only state when a particle doesn't escape to infinity is $E=E_b$. All the other states correspond to infinite motion. A particle with $E\ge0$ can emit a photon and transition to this $E=E_b$ state. Conversely, a particle in $E=E_b$ state can absorb a photon and transition to a state with $E\ge0$. | {
"domain": "physics.stackexchange",
"id": 68297,
"tags": "quantum-mechanics, wavefunction, schroedinger-equation, quantum-interpretations, dirac-delta-distributions"
} |
Publishing tf from Gazebo camera to Rviz | Question:
Hello,
I am using the braccio_arm (https://github.com/grassjelly/ros_braccio_urdf) and trying to learn and simulate pick and place. I am able to launch the arm independently and work with it. Now, I created a world in Gazebo and would like to publish the PCL data from gazebo to Rviz and then plan to use pick and place. I face 2 issues,
I am unable to transform data from camera_link in Gazebo to base_link of the robot. Kindly help. I followed the tutorial here and added camera_link to my Gazebo file and URDF file.
Gazebo world has a kinect at 1.1 m in z and a table with 3 shapes on it. The kinect plugin code:
<plugin name='camera_link_plugin' filename='libgazebo_ros_openni_kinect.so'>
<baseline>0.2</baseline>
<alwaysOn>true</alwaysOn>
<updateRate>0.0</updateRate>
<cameraName>camera_ir</cameraName>
<imageTopicName>/camera/depth/image_raw</imageTopicName>
<cameraInfoTopicName>/camera/depth/camera_info</cameraInfoTopicName>
<depthImageTopicName>/camera/depth/image_raw</depthImageTopicName>
<depthImageInfoTopicName>/camera/depth/camera_info</depthImageInfoTopicName>
<pointCloudTopicName>/camera/depth/points</pointCloudTopicName>
<frameName>camera_link</frameName>
<pointCloudCutoff>0.05</pointCloudCutoff>
<distortionK1>0</distortionK1>
<distortionK2>0</distortionK2>
<distortionK3>0</distortionK3>
<distortionT1>0</distortionT1>
<distortionT2>0</distortionT2>
<CxPrime>0</CxPrime>
<Cx>0</Cx>
<Cy>0</Cy>
<focalLength>0</focalLength>
<hackBaseline>0</hackBaseline>
</plugin>
</sensor>
My URDF links for the robot:
<robot xmlns:xacro="http://www.ros.org/wiki/xacro"
xmlns:sensor="http://playerstage.sourceforge.net/gazebo/xmlschema/#sensor"
xmlns:controller="http://playerstage.sourceforge.net/gazebo/xmlschema/#controller"
xmlns:interface="http://playerstage.sourceforge.net/gazebo/xmlschema/#interface"
name="braccio">
<xacro:property name="camera_link" value="0.05" /> <!-- Size of square 'camera' box -->
<xacro:property name="damping_value" value="241.35"/>
<xacro:property name="friction_value" value="24.135"/>
<xacro:property name="kinect_box_length" value="0.3556" />
<xacro:property name="kinect_box_width" value="0.1778" />
<xacro:property name="kinect_box_height" value="0.0762" />
<xacro:property name="kinect_box_mass" value="1.274595" />
<xacro:macro name="inertial_matrix_cuboid" params="mass box_length box_width">
<inertial>
<mass value="${mass}" />
<inertia ixx="${mass/12*(box_length*box_length)}"
ixy = "0" ixz = "0"
iyy="${mass/12*(box_width*box_width)}" iyz = "0"
izz="${mass/12*(box_length*box_length + box_width*box_width)}" />
</inertial>
</xacro:macro>
<xacro:macro name="transmission_block" params="joint_name idx">
<transmission name="tran_${idx}">
<type>transmission_interface/SimpleTransmission</type>
<joint name="${joint_name}">
<hardwareInterface>hardware_interface/PositionJointInterface</hardwareInterface>
</joint>
<actuator name="motor__${idx}">
<hardwareInterface>hardware_interface/PositionJointInterface</hardwareInterface>
<mechanicalReduction>1</mechanicalReduction>
</actuator>
</transmission>
</xacro:macro>
<link name="base_footprint"/>
<joint name="base_footprint_joint" type="fixed">
<parent link="base_footprint"/>
<child link="base_link"/>
</joint>
<link name="base_link">
<visual>
<geometry>
<cylinder length="0.01" radius=".053" />
</geometry>
<material name="black"/>
<origin rpy="0 0 0" xyz="0 0 0"/>
</visual>
<inertial>
<mass value="2"/>
<inertia ixx="0.015" ixy="0" ixz="0" iyy="0.00666666666667" iyz="0" izz="0.0216666666667"/>
</inertial>
</link>
<link name="camera_link">
<xacro:inertial_matrix_cuboid mass="${kinect_box_mass}" box_length="${kinect_box_length}" box_width="${kinect_box_width}"/>
</link>
<joint name="camera_joint" type="fixed">
<origin xyz="0 0 1.1" rpy="0 0 0"/>
<parent link="base_link"/>
<child link="camera_link"/>
</joint>
<link name="braccio_base_link">
<visual>
<geometry>
<mesh filename="package://braccio_arduino_ros_rviz/stl/braccio_base.stl" scale="0.001 0.001 0.001"/>
</geometry>
<material name="orange"/>
<origin rpy="0 0 3.1416" xyz="0 0.004 0" />
</visual>
<inertial>
<mass value="2"/>
<inertia ixx="0.015" ixy="0" ixz="0" iyy="0.00666666666667" iyz="0" izz="0.0216666666667"/>
</inertial>
<collision>
<origin rpy="0 0 3.1416" xyz="0 0.004 0"/>
<geometry>
<mesh filename="package://braccio_arduino_ros_rviz/stl/braccio_base.stl" scale="0.001 0.001 0.001"/>
</geometry>
</collision>
</link>
<link name="shoulder_link">
<visual>
<geometry>
<mesh filename="package://braccio_arduino_ros_rviz/stl/braccio_shoulder.stl" scale="0.001 0.001 0.001"/>
</geometry>
<material name="orange"/>
<origin rpy="0 0 0" xyz="-0.0045 0.0055 -0.026"/>
</visual>
<inertial>
<mass value="0.1"/>
<inertia ixx="0.000348958333333" ixy="0" ixz="0" iyy="0.000348958333333" iyz="0" izz="3.125e-05"/>
</inertial>
<collision>
<origin rpy="0 0 0" xyz="-0.0045 0.0055 -0.026"/>
<geometry>
<mesh filename="package://braccio_arduino_ros_rviz/stl/braccio_base.stl" scale="0.001 0.001 0.001"/>
</geometry>
</collision>
</link>
<link name="elbow_link">
<visual>
<geometry>
<mesh filename="package://braccio_arduino_ros_rviz/stl/braccio_elbow.stl" scale="0.001 0.001 0.001"/>
</geometry>
<material name="orange"/>
<origin rpy="0 0 0" xyz="-0.0045 0.005 -0.025"/>
</visual>
<inertial>
<mass value="0.1"/>
<inertia ixx="0.000348958333333" ixy="0" ixz="0" iyy="0.000348958333333" iyz="0" izz="3.125e-05"/>
</inertial>
<collision>
<origin rpy="0 0 0" xyz="-0.0045 0.005 -0.025"/>
<geometry>
<mesh filename="package://braccio_arduino_ros_rviz/stl/braccio_elbow.stl" scale="0.001 0.001 0.001"/>
</geometry>
</collision>
</link>
<link name="wrist_pitch_link">
<visual>
<geometry>
<mesh filename="package://braccio_arduino_ros_rviz/stl/braccio_wrist_pitch.stl" scale="0.001 0.001 0.001"/>
</geometry>
<material name="orange"/>
<origin rpy="0 0 0" xyz="0.003 -0.0004 -0.024"/>
</visual>
<inertial>
<mass value="0.1"/>
<inertia ixx="0.000348958333333" ixy="0" ixz="0" iyy="0.000348958333333" iyz="0" izz="3.125e-05"/>
</inertial>
<collision>
<origin rpy="0 0 0" xyz="0.003 -0.0004 -0.024"/>
<geometry>
<mesh filename="package://braccio_arduino_ros_rviz/stl/braccio_wrist_pitch.stl" scale="0.001 0.001 0.001"/>
</geometry>
</collision>
</link>
<link name="wrist_roll_link">
<visual>
<geometry>
<mesh filename="package://braccio_arduino_ros_rviz/stl/braccio_wrist_roll.stl" scale="0.001 0.001 0.001"/>
</geometry>
<material name="white"/>
<origin rpy="0 0 0" xyz="0.006 0 0.0"/>
</visual>
<inertial>
<mass value="0.1"/>
<inertia ixx="0.000348958333333" ixy="0" ixz="0" iyy="0.000348958333333" iyz="0" izz="3.125e-05"/>
</inertial>
<collision>
<origin rpy="0 0 0" xyz="0.006 0 0.0"/>
<geometry>
<mesh filename="package://braccio_arduino_ros_rviz/stl/braccio_wrist_roll.stl" scale="0.001 0.001 0.001"/>
</geometry>
</collision>
</link>
<link name="left_gripper_link">
<visual>
<geometry>
<mesh filename="package://braccio_arduino_ros_rviz/stl/braccio_left_gripper.stl" scale="0.001 0.001 0.001"/>
</geometry>
<material name="white"/>
<origin rpy="0 1.5708 0" xyz="0 -0.012 0"/>
</visual>
<inertial>
<mass value="0.1"/>
<inertia ixx="0.000348958333333" ixy="0" ixz="0" iyy="0.000348958333333" iyz="0" izz="3.125e-05"/>
</inertial>
<collision>
<origin rpy="0 1.5708 0" xyz="0 -0.012 0"/>
<geometry>
<mesh filename="package://braccio_arduino_ros_rviz/stl/braccio_left_gripper.stl" scale="0.001 0.001 0.001"/>
</geometry>
</collision>
</link>
<link name="right_gripper_link">
<visual>
<geometry>
<mesh filename="package://braccio_arduino_ros_rviz/stl/braccio_right_gripper.stl" scale="0.001 0.001 0.001"/>
</geometry>
<material name="white"/>
<origin rpy="0 1.5708 0" xyz="0 -0.012 0.010"/>
</visual>
<inertial>
<mass value="0.1"/>
<inertia ixx="0.000348958333333" ixy="0" ixz="0" iyy="0.000348958333333" iyz="0" izz="3.125e-05"/>
</inertial>
<collision>
<origin rpy="0 1.5708 0" xyz="0 -0.012 0.010"/>
<geometry>
<mesh filename="package://braccio_arduino_ros_rviz/stl/braccio_right_gripper.stl" scale="0.001 0.001 0.001"/>
</geometry>
</collision>
</link>
<joint name="base_joint" type="revolute">
<axis xyz="0 0 1"/>
<limit effort="1000.0" lower="0.0" upper="3.1416" velocity="1.0"/>
<origin rpy="0 0 0" xyz="0 0 0"/>
<parent link="base_link"/>
<child link="braccio_base_link"/>
<dynamics damping="${damping_value}" friction="${friction_value}"/>
</joint>
<joint name="shoulder_joint" type="revolute">
<axis xyz="1 0 0"/>
<limit effort="1000.0" lower="0.2618" upper="2.8798" velocity="1.0"/>
<origin rpy="-1.5708 0 0" xyz="0 -.002 0.072"/>
<parent link="braccio_base_link"/>
<child link="shoulder_link"/>
<dynamics damping="${damping_value}" friction="${friction_value}"/>
</joint>
<joint name="elbow_joint" type="revolute">
<axis xyz="1 0 0"/>
<limit effort="1000.0" lower="0" upper="3.1416" velocity="1.0"/>
<origin rpy="-1.5708 0 0" xyz="0 0 0.125"/>
<parent link="shoulder_link"/>
<child link="elbow_link"/>
<dynamics damping="${damping_value}" friction="${friction_value}"/>
</joint>
<joint name="wrist_pitch_joint" type="revolute">
<axis xyz="1 0 0"/>
<limit effort="1000.0" lower="0" upper="3.1416" velocity="1.0"/>
<origin rpy="-1.5708 0 0" xyz="0 0 0.125"/>
<parent link="elbow_link"/>
<child link="wrist_pitch_link"/>
<dynamics damping="${damping_value}" friction="${friction_value}"/>
</joint>
<joint name="wrist_roll_joint" type="revolute">
<axis xyz="0 0 -1"/>
<limit effort="1000.0" lower="0.0" upper="3.1416" velocity="1.0"/>
<origin rpy="0 0 1.5708" xyz="0 0.0 0.06"/>
<parent link="wrist_pitch_link"/>
<child link="wrist_roll_link"/>
<dynamics damping="${damping_value}" friction="${friction_value}"/>
</joint>
<joint name="gripper_joint" type="revolute">
<axis xyz="0 -1 0"/>
<limit effort="1000.0" lower="0.1750" upper="1.2741" velocity="1.0"/>
<origin rpy="0 -0.2967 0" xyz="0.010 0 0.03"/>
<parent link="wrist_roll_link"/>
<child link="right_gripper_link"/>
<dynamics damping="${damping_value}" friction="${friction_value}"/>
</joint>
<joint name="sub_gripper_joint" type="revolute">
<axis xyz="0 1 0"/>
<mimic joint="gripper_joint"/>
<limit effort="1000.0" lower="1.2741" upper="2.3732" velocity="1.0"/>
<origin rpy="0 3.4383 0" xyz="-0.010 0 0.03"/>
<parent link="wrist_roll_link"/>
<child link="left_gripper_link"/>
<dynamics damping="${damping_value}" friction="${friction_value}"/>
</joint>
<material name="orange">
<color rgba="0.57 0.17 0.0 1"/>
</material>
<material name="white">
<color rgba="0.8 0.8 0.8 1.0"/>
</material>
<material name="black">
<color rgba="0 0 0 0.50"/>
</material>
<xacro:transmission_block joint_name="base_joint" idx="1"/>
<xacro:transmission_block joint_name="shoulder_joint" idx="2"/>
<xacro:transmission_block joint_name="elbow_joint" idx="3"/>
<xacro:transmission_block joint_name="wrist_pitch_joint" idx="4"/>
<xacro:transmission_block joint_name="wrist_roll_joint" idx="5"/>
<xacro:transmission_block joint_name="gripper_joint" idx="6"/>
<xacro:transmission_block joint_name="sub_gripper_joint" idx="7"/>
<gazebo>
<plugin name="gazebo_ros_control" filename="libgazebo_ros_control.so">
<robotNamespace>/braccio</robotNamespace>
</plugin>
</gazebo>
</robot>
My launch file in Gazebo:
<launch>
<!-- these are the arguments you can pass this launch file, for example paused:=true -->
<arg name="paused" default="false"/>
<arg name="use_sim_time" default="true"/>
<arg name="gui" default="true"/>
<arg name="headless" default="false"/>
<arg name="debug" default="false"/>
<!-- We resume the logic in empty_world.launch -->
<include file="$(find gazebo_ros)/launch/empty_world.launch">
<arg name="world_name" default="$(find braccio_gazebo)/worlds/pick_place_multi.world"/>
<arg name="debug" value="$(arg debug)" />
<arg name="gui" value="$(arg gui)" />
<arg name="paused" value="$(arg paused)"/>
<arg name="use_sim_time" value="$(arg use_sim_time)"/>
<arg name="headless" value="$(arg headless)"/>
</include>
<!-- Convert an xacro and put on parameter server -->
<param name="robot_description" command="$(find xacro)/xacro --inorder $(find braccio_arduino_ros_rviz)/urdf/braccio_arm.xacro" />
<!-- Spawn a robot into Gazebo -->
<node name="spawn_urdf" pkg="gazebo_ros" type="spawn_model" args="-param robot_description -urdf -x 1.3 -y -0.3 -z 0.8 -model braccio" />
</launch>
My launch file in Rviz,
<launch>
<arg name="model" default="$(find braccio_arduino_ros_rviz)/urdf/braccio_arm.xacro"/>
<arg name="gui" default="true"/>
<arg name="rvizconfig" default="$(find braccio_arduino_ros_rviz)/rviz/urdf.rviz" />
<param name="robot_description" command="$(find xacro)/xacro.py $(arg model)"/>
<param name="use_gui" value="gui"/>
<node name="joint_state_publisher" pkg="joint_state_publisher" type="joint_state_publisher">
<param name="use_gui" value="gui"/>
</node>
<node name="robot_state_publisher" pkg="robot_state_publisher" type="state_publisher" />
<node name="rviz" pkg="rviz" type="rviz" args="-d $(arg rvizconfig)" required="true" />
</launch>
When I visualize the PCL data it is rendered in Rviz perpendicular to the y-axis; what could be the issue here?
Originally posted by dpakshimpo on ROS Answers with karma: 161 on 2018-03-12
Post score: 1
Original comments
Comment by Delb on 2018-03-15:
I would suggest you change your link world to base_footprint (just change the names); Gazebo is able to link base_footprint to world (see #q208051).
Also, your camera link needs the inertial tag too.
Answer:
Look at #q283961; you missed the inertial tag, which is required by Gazebo.
For your PCL data, have you tried rotating your box in the urdf ?
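If the camera frame still never appears in the tf tree (for instance because the Kinect is spawned as a separate Gazebo model rather than through the robot's URDF), a common workaround is to publish the transform by hand. The snippet below is illustrative only; the translation values mirror the 0 0 1.1 camera_joint origin in the URDF, and the node would go in the RViz launch file:

```xml
<!-- Illustrative only: static transform base_link -> camera_link,
     matching the 0 0 1.1 offset of the URDF camera_joint.
     args are: x y z yaw pitch roll frame_id child_frame_id period_ms -->
<node pkg="tf" type="static_transform_publisher" name="camera_tf"
      args="0 0 1.1 0 0 0 base_link camera_link 100" />
```

With the transform available, RViz can resolve the point cloud's frameName into the robot's fixed frame.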
Originally posted by Delb with karma: 3907 on 2018-03-12
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by dpakshimpo on 2018-03-12:
Thanks for your input Delb; unfortunately, it did not work either.
Comment by dpakshimpo on 2018-03-12:
How do I get to 5 points, so that I can upload images?
Comment by Delb on 2018-03-13:
Can you edit your question to show the modifications ? Also I noticed you defined the same origin in the joint tag and link tag which means you won't have your box at the same position as the tf which could be an issue (put origin of link and collision tag to 0 0 0
Comment by dpakshimpo on 2018-03-15:
Hello Delb,
I have modified the URDF/XACRO file to remove the camera link visual element and tried to visualise the PCL data in Rviz, as reported earlier. I tried to write a node to transform the data from camera_link to world; that also did not help.
Comment by dpakshimpo on 2018-03-16:
For some reason my comment on the other answer is missing. I updated the xacro file with the inertial tag for the camera link (based on the actual dimensions of the Kinect) but the issue is not resolved. My Gazebo has a Kinect spawned at the same height as mentioned in the URDF, and that has the plugin code.
Comment by Delb on 2018-03-16:
Can you add your launch file too please ? The urdf seems fine though.
Do you have this line somewhere ? xmlns:sensor="http://playerstage.sourceforge.net/gazebo/xmlschema/#sensor"
Comment by dpakshimpo on 2018-03-18:
Hello Delb,
Thanks for your time again. I edited the question with the launch files for Gazebo and Rviz. I added the above xml tag to my XACRO now. | {
"domain": "robotics.stackexchange",
"id": 30275,
"tags": "ros-kinetic"
} |
Cover a graph with complete graphs | Question: I want to find the smallest possible function $k(n,m)$ such that for any graph $G$ with $n$ vertices and $m$ edges, there exists $n$ vertex sets $S_1,S_2,...,S_n\subseteq V$ each with size $k(n,m)$ and every edge $(u,v)$ has $u,v$ both contained in some $S_i$. In other words, $n$ complete graphs suffice to cover all edges.
The first question is whether $k(n,m)$ can be $n^{1/2-\epsilon}$ when $m$ is sub-quadratic. The question is interesting because $n$ complete subgraphs, each of size $O(n^{1/2})$, suffice to cover any graph.
I searched for clique edge covering, but most of the results only consider the case where cliques cannot cover non-edges. I wonder if there exists any similar research in the setting where cliques can cover non-edges.
Answer: Here are asymptotic bounds for $k(n, m)$ that are tight up to a logarithmic factor.
Note the threshold around $m = \Theta(n^{3/2})$:
Theorem 1.
$~~~~\frac{1}{21}\min(\lceil\sqrt n\rceil, \lceil m/(n\log n)\rceil)
~\le~
k(n, m) ~\le ~\min(\sqrt {n}, 2\lceil m/n\rceil).$
Here's the proof. We show the upper bounds (Lemma 1) then the lower bounds (Lemma 2).
Lemma 1. $k(n, m) \le \min(\sqrt {n}, 2\lceil m/n\rceil)$
Proof. First we show $k(n, m) \le \sqrt n$.
Partition the vertex set greedily into some $p$ parts, each of size at most $\sqrt n/2$. Then, for each of the ${p\choose 2} \le n$ unordered pairs $\{A, B\}$ of distinct parts, create a vertex set $S_i$ consisting of $A\cup B$ (having size at most $\sqrt n$). There are at most $n$ such pairs $\{A, B\}$, and every edge has both endpoints in $A\cup B$ for some pair. This shows $k(n, m) \le \sqrt n$.
To finish we show $k(n, m) \le 2\lceil m/n\rceil$.
Greedily partition the edge set $E$ into $n$ sets $E_1, \ldots, E_n$
such that each contains at most $\lceil m/n\rceil$ edges.
For each $i\in [n]$, let $S_i$ be the set of vertices used in edge set $E_i$.
Then $|S_i| \le 2 |E_i| \le 2\lceil m/n\rceil$, as desired.$~~~~\Box$
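The pair-of-parts construction from the first half of the proof is easy to check in code. An illustrative sketch with the constants glossed over: the part size here is $\lceil\sqrt n\rceil$ rather than $\sqrt n/2$, so set sizes stay within a constant factor of the lemma's bound:

```python
import itertools
import math
import random

def pair_cover(n):
    """Split the vertices into parts of size ~sqrt(n); one set per pair of parts."""
    size = max(1, math.isqrt(n))
    verts = list(range(n))
    parts = [verts[i:i + size] for i in range(0, n, size)]
    if len(parts) == 1:
        return [set(parts[0])]
    return [set(a) | set(b) for a, b in itertools.combinations(parts, 2)]

def is_covered(sets, edges):
    """Every edge must have both endpoints inside some set."""
    return all(any(u in s and v in s for s in sets) for u, v in edges)

random.seed(1)
n = 40
edges = [e for e in itertools.combinations(range(n), 2) if random.random() < 0.2]
sets = pair_cover(n)
print(is_covered(sets, edges), max(len(s) for s in sets))
```

An edge with both endpoints in the same part is also covered, since that part appears in every pair-union it belongs to.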
Here is the lower bound.
Lemma 2. $k(n, m) \ge \min(\sqrt n, \lceil m/(n\ln n)\rceil)/21$
Proof.
Let $m' = n^{3/2}\ln n$.
The lower bound (of $\sqrt n/21$) for $m\ge m'$ follows from the lower bound for $m=m'$.
And a lower bound of $1$ holds trivially for all positive $m \le n\ln n$.
So assume without loss of generality that $n\ln n \le m \le n^{3/2}\ln n$.
Let $k=k(n, m)$ and $p = m/n^2$.
Let $G=(V, E)$ be a random graph where each edge is independently present with probability $p$.
Claim 1: With probability $1-o(1)$ $G$ has the following two properties:
$m/3 = p n^2/3 \le |E| \le p n^2 = m$
For every vertex subset $S\subseteq V$ of size $k$,
we have $|E_S| \le 7 k \ln n$,
where $E_S$ denotes the set of edges with both endpoints in $S$.
Suppose the claim is true. Let $G$ have these two properties.
By definition of $k$ there are $n$ $k$-vertex induced subgraphs in $G$ that collectively contain all of $G$'s edges.
By Property 2, each of these $n$ subgraphs contains at most $7 k \ln n$ edges,
so collectively they contain at most $7 n k \ln n$ edges.
But they contain $E$, and (by Property 1) $|E|\ge m/3$.
So $7 n k \ln n \ge m/3$. Simplifying implies $k \ge m/(21 n \ln n)$, as desired.
To complete the proof we show the claim.
The expected number of edges in $G$ is ${n\choose 2}p \sim n^2p/2$.
By a standard Chernoff bound the probability
that $n^2p/3 \le |E| \le n^2 p$ fails to hold is at most $2\exp(-\Omega(n^2 p))$, which (by the choice of $p$) is $o(1)$.
So Property 1 fails with probability $o(1)$.
The number of $k$-vertex subgraphs of $G$ is ${n\choose k} \le n^k$.
For each, (using $k \le n^{1/2}$ and $m\le n^{3/2}\ln n$)
its expected number of edges is
${k \choose 2} p \le 3.5 k \ln n$,
so by a standard Chernoff bound the probability that the number is more than $2\times 3.5 k\ln n$ is at most
$\exp(-3.5 k\ln (n)/3) \le \exp(-1.1 k \ln n)$.
So by the naive union bound the probability that Property 2 fails is at most
$n^k \exp(-1.1 k\ln n) = \exp(-0.1 k\ln n) = o(1).$
So Property 2 fails with probability $o(1)$.
By the naive union bound, the probability that either Property 1 or Property 2 fails is $o(1)$.
This proves the claim, and the lemma. $~~~~\Box$ | {
"domain": "cstheory.stackexchange",
"id": 5635,
"tags": "graph-theory, graph-algorithms, co.combinatorics, combinatorics, extremal-combinatorics"
} |
Transfer of moments in beams with internal hinges | Question:
We have a beam with 1 internal hinge, just like the picture. We know the internal hinge cannot transfer moments between AB and BD. We know the 30 kN force creates some moments on BD & AB. If B (the internal hinge) cannot transfer moments between the two elements, then why does the 30 kN force (which is applied on BD) cause moments in AB?
I don't know if I'm clear enough, but the 30 kN force creates some moments on AB while we know that no moments can transfer between AB & BD.
Answer: Imagine if AB didn't exist, so all we have is BD with the pinned support at D and the force at C. In this case, BD is hypostatic and becomes a mechanism, rotating around D.
Obviously, we know that doesn't happen when AB is there. This tells us that the hinge is supporting BD, apparently generating an upwards vertical force at B.
However, Newton's Third Law tells us that every action causes an equal and opposite reaction. So if BD "feels" an upwards vertical force at B, then AB will "feel" an equal downwards vertical force at B as well.
This "downwards force" at B (as seen by AB) will then cause a bending moment in AB.
For a more visual demonstration, notice that this structure can be replaced by two individual beams AB and BD. BD is a simply-supported beam, and AB is a cantilever with a concentrated load equal to the reaction found in BD's support at B.
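That two-beam decomposition can be put into numbers. A sketch with made-up dimensions, since the figure's actual spans are not reproduced here: the 30 kN load sits a distance a from hinge B on the simply-supported span BD of length b, and AB is a cantilever of length L_ab:

```python
# Illustrative statics; all dimensions are assumed, not from the figure.
P = 30.0     # kN, load applied on BD
b = 4.0      # m, length of BD (assumed)
a = 1.0      # m, distance from B to the load (assumed)
L_ab = 2.0   # m, length of cantilever AB (assumed)

R_B = P * (b - a) / b   # upward force the hinge must supply to BD
M_A = R_B * L_ab        # bending moment that reaction induces at the fixed end A
print(R_B, M_A)         # 22.5 45.0
```

The hinge transmits no moment, only the force R_B; it is that force, acting at the tip of cantilever AB, that produces the bending moment in AB.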
Diagram obtained with Ftool, a free 2D frame analysis program. | {
"domain": "engineering.stackexchange",
"id": 3100,
"tags": "structural-analysis, structures, beam, moments"
} |
Correct definition of an 'acoustic mode'? | Question: I am reading 'The Oxford Solid State Basics' by S.H.Simon in which on page 92 defines an acoustic mode as:
... any mode that has linear dispersion as $k\rightarrow 0$.
Whilst on page 94 he defines it as:
... one mode will be acoustic (goes to zero energy at $k=0$).
Unless every mode that tends to zero does so linearly (and vice versa), these two definitions do not coincide. Thus my question is as follows: does one of these conditions imply the other, and if not, what is the correct definition of an acoustic mode?
Answer: No, one does not imply the other, and I disagree with the first definition.
For example, the dispersion relation of the ZA mode in graphene goes to zero like $k^2$, so the energy goes to zero as $k \to 0$ but does not do so linearly.
The 'A' in 'ZA' stands for acoustic, so that's an example of a nonlinear acoustic mode.
(That said, the first definition has some merit. The slope of a linear dispersion relation as $k \to 0$ is the speed of sound, which is a constant -- at least in isotropic materials. "Acoustic" modes get their name because they behave like sound at long wavelengths, and non-linear dispersion relations don't have a speed of sound. So there is logic in saying that non-linear dispersion relations are not acoustic. However, I don't think that's the common definition.) | {
"domain": "physics.stackexchange",
"id": 40454,
"tags": "solid-state-physics, terminology, definition, vibrations, phonons"
} |
Does Torricelli's law assume constant height? | Question: I am planning on doing an experiment where I derive the optimal height for a hole in a container filled with water so that the stream travels the furthest. In order to determine this, I am planning on using Torricelli's law ($v = \sqrt{2gh}$) to determine the initial velocity; then, by using the distance traveled and the time elapsed (the time for the stream to touch the ground), I can determine a relationship between $h/(H-h)$ and $R$, thus finding a general rule for other situations.
My question is: Does Torricelli's law assume that the height is constant, or does it only use the initial height of the water?
Answer: Torricelli's law applies at each height. As the height $h$ changes, the velocity $v$ of the water coming out of the hole also changes, and along with this so does the range $R$.
This means that if the cross-sectional area $a$ of the hole is comparable with the cross-sectional area $A$ of the water tank, then both $h$ and $R$ are likely to change while you try to measure them. If you start by measuring $h$ then by the time you measure $R$ the value of $h$ will have changed.
To get round this you could make $A \gg a$. Then $h$ and $R$ will change slowly, giving you time to measure both approximately at the same time.
There is actually no need to measure $v$ at all. Simply record $h$ vs $R$ and plot $h/H$ vs $R$. Although $H$ also varies there is no need to measure it because $H=h+Y$ where $Y$ is the height of the hole above the base, which is constant.
What you can use Torricelli's Law for is to predict the optimal value of $h/H$ then compare this with the value found in your experiment and try to explain any difference.
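As a sketch of that prediction (my own toy script, not part of the answer): assuming a small hole, a water surface at height $H$ above the ground, and a hole at depth $h$ below the surface, the range is R = v*t = 2*sqrt(h*(H-h)), which a brute-force scan confirms is largest at h/H = 1/2.

```python
import math

g = 9.81  # m/s^2

def spurt_range(h, Y):
    """Torricelli speed v = sqrt(2*g*h); fall time t = sqrt(2*Y/g);
    horizontal range R = v*t = 2*sqrt(h*Y)."""
    return math.sqrt(2 * g * h) * math.sqrt(2 * Y / g)

H = 1.0  # height of the water surface above the ground, metres
# Scan hole depths h below the surface; the hole sits Y = H - h above the ground.
best_R, best_h = max((spurt_range(h, H - h), h) for h in (i / 1000 for i in range(1, 1000)))
print(best_h / H)  # optimal h/H, expect ~ 0.5
```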
Note that the water below the level of the hole plays no part in this experiment (or does it?) so you can simply raise the water tank to a new height instead of looking for a taller tank. You can place the hole a few cm from the bottom of the tank. This makes the best use of the tank which you have got. Also, for the same value of $h$, the larger you make the value of $Y$ then the greater the value of $R$ which will result. This means that your proportional errors in measuring $R$ will be smaller.
Another technique is to make a video recording of the experiment, with one metre rule fixed beside the tank for reading water level and one along the ground for measuring $R$. Then you can make a truly simultaneous measurement of $h$ and $R$, and you can obtain a better measurement of the centre of the stream for $R$. | {
"domain": "physics.stackexchange",
"id": 64819,
"tags": "fluid-dynamics, fluid-statics, flow, bernoulli-equation"
} |
cmake fails: Inconsistency detected by ld.so | Question:
I recently set up a new Ubuntu 14 VM to install indigo on. I installed indigo, and since I'm working with an Aldebaran Nao humanoid robot, I also downloaded the naoqi SDK. Then, I downloaded these packages: nao_robot, nao_extra, roboticsgroup_gazebo_plugins, pal_msgs, and pal_gazebo_plugins. I tried a catkin_make after this, and got the following error:
Base path: /home/amstrudy/catkin_ws
Source space: /home/amstrudy/catkin_ws/src
Build space: /home/amstrudy/catkin_ws/build
Devel space: /home/amstrudy/catkin_ws/devel
Install space: /home/amstrudy/catkin_ws/install
Running command:
"cmake /home/amstrudy/catkin_ws/src -DCATKIN_DEVEL_PREFIX=/home/amstrudy/catkin_ws/devel -DCMAKE_INSTALL_PREFIX=/home/amstrudy/catkin_ws/install -G Unix Makefiles" in "/home/amstrudy/catkin_ws/build"
Inconsistency detected by ld.so: dl-version.c: 224: _dl_check_map_versions: Assertion `needed != ((void *)0)' failed!
Invoking "cmake" failed
I uninstalled the packages, commented out the source lines and LD_LIBRARY_PATH and PYTHON_PATH lines in my bashrc (which pointed to stuff in naoqi), deleted and reinstalled indigo, but still no luck. I know that catkin_make worked as some point because I used it to setup my catkin_ws. I've researched around but haven't found a solution.
Thanks.
EDIT:
I tried deleting the naoqi SDK from my VM, and voila, cmake (and catkin_make) are both back up and running. I have no idea how this could be. I need naoqi, so any ideas as to how to fix this?
Originally posted by amstrudy on ROS Answers with karma: 43 on 2017-09-12
Post score: 0
Original comments
Comment by jayess on 2017-09-12:
I edited your question to make it easier to read. I did my best but I feel that I may have mangled it a bit. Feel free to redo it, but please don't use code tags. Just highlight the code/errors/commands and click the 101010 button.
Comment by amstrudy on 2017-09-12:
@jayess thank you!
Comment by jayess on 2017-09-12:
What does running catkin_make from the root of your workspace give you?
Answer:
Why are you directly invoking cmake? You should be using catkin_make from the root of your workspace.
Originally posted by jayess with karma: 6155 on 2017-09-12
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by amstrudy on 2017-09-12:
I'm not directly invoking cmake, but catkin_make. When I was investigating the error I narrowed the problem down to cmake. Any attempt to use the command results in the above error.
Also, please see my edits above! | {
"domain": "robotics.stackexchange",
"id": 28833,
"tags": "catkin-make, ros-indigo, nao"
} |
Editing system files in Linux (as root) with GUI and CLI text editors | Question: My intention is to POSIX-ly write one generalized function for running various text editors I use for different purposes through sudoedit, i.e. editing files as root safely. Safely = for instance, if a power loss occurs during the file edit; another example could be lost SSH connection, etc.
Originally, I had these Bash functions defined for this purpose in my .bash_aliases file:
function sucode
{
export SUDO_EDITOR='/usr/bin/code --wait'
sudoedit "$@"
}
function susubl
{
export SUDO_EDITOR='/opt/sublime_text/sublime_text --wait'
sudoedit "$@"
}
function suxed
{
export SUDO_EDITOR='/usr/bin/xed --wait'
sudoedit "$@"
}
Since yesterday, I'm trying to generalize that solution for other Linux users to be able to take advantage of it. Momentary peek:
# Text editing as root; The proper way through `sudoedit`.
sudoedit_internal()
{
[ "${#}" -lt 3 ] && { printf '%s\n' 'sudoedit_internal(): Invalid number of arguments.' 1>&2; return; }
editor_path=$( command -v "${1}" )
[ -x "${editor_path}" ] || { printf '%s\n' "sudoedit_internal(): The editor path ${editor_path} does not exist on this system." 1>&2; return; }
editor_wait_option=${2}
shift 2
env SUDO_EDITOR="${editor_path} ${editor_wait_option}" sudoedit "${@}"
}
# CLI
suvi() { sudoedit_internal vi '' "${@}"; }
sunano() { sudoedit_internal nano '' "${@}"; }
# GUI
sucode() { sudoedit_internal code -w "${@}"; }
susubl() { sudoedit_internal subl -w "${@}"; }
suxed() { sudoedit_internal xed -w "${@}"; }
These 5 editors I use. Please take that as an example only.
As I should not update this question any further, you can find the up-to-date version of this script snippet in my Unix & Linux Answer.
Answer: It's good form to return non-zero on error. The not-optional option is a little ugly and an environment variable may work better.
Some extraneous syntax can go:
1 before >&2 is implied
{} around unsubstituted dereferences doesn't add anything
echo is an alias for printf "%s\n"
testing for error instead of success allows test && echo && return without braces
command -v tests validity for you; no need to test again
you've moved the complexity into a function already; reward yourself by using aliases to invoke it
sudoedit_internal()
{
[ $# -lt 2 ] && echo "sudoedit_internal(): Invalid number of arguments." >&2 && return 1
! command -v "$1" >/dev/null && echo "sudoedit_internal(): The editor $1 does not exist on this system." >&2 && return 1
editor="$1"; shift
SUDO_EDITOR="$editor $opt" sudoedit "$@"
}
for ed in vi nano ; do alias su$ed="opt= sudoedit_internal $ed"; done
for ed in code subl xed ; do alias su$ed="opt=-w sudoedit_internal $ed"; done | {
"domain": "codereview.stackexchange",
"id": 34145,
"tags": "linux, posix, sh, text-editor"
} |
In calculating policy gradients, wouldn't longer trajectories have more weight according to the policy gradient formula? | Question: In Sergey Levine's lecture on policy gradients (Berkeley deep RL course), he shows that the policy gradient can be evaluated according to the formula $$\nabla_\theta J(\theta) \approx \frac{1}{N}\sum_{i=1}^{N}\left(\sum_{t=1}^{T}\nabla_\theta \log \pi_\theta(a_{i,t}\mid s_{i,t})\right)\left(\sum_{t=1}^{T}r(s_{i,t},a_{i,t})\right)$$
In this formula, wouldn't longer trajectories get more weight (in finite horizon situations), since the middle term, the sum over log pi, would involve more terms? (Why would it work like that?)
The specific example I have in mind is pacman, longer trajectories would contribute more to the gradient. Should it work like that?
Answer:
wouldn't longer trajectories get more weight?
Not necessarily. The gradient $\triangledown_{\theta}$ could be negative or positive (in a 1D analogy); therefore, a larger number of gradient terms can sum to a smaller weight, which makes sense. A consistent short trajectory is more informative (has more weight) than an inconsistent long trajectory with sign-alternating policy gradients.
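A toy numeric illustration of that cancellation effect (my own 1-D sketch, not from the lecture): a short run of same-sign per-step terms outweighs a much longer run of alternating-sign terms, because the trajectory contributes the sum of its terms.

```python
# Per-step terms grad log pi(a_t | s_t) for two imaginary trajectories (1-D analogy).
consistent_short = [1.0] * 5                         # 5 steps, all pushing the same way
alternating_long = [(-1.0) ** t for t in range(50)]  # 50 steps, signs alternate

# A trajectory contributes the *sum* of its per-step terms (scaled by its return),
# so sign cancellation matters more than raw trajectory length.
print(abs(sum(consistent_short)))  # 5.0
print(abs(sum(alternating_long)))  # 0.0
```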
Why would it work like that?
If we are comparing two consistent trajectories, where most gradients are in the same direction, this formula makes sense again. A long consistent trajectory contains more useful information (more steps that confirm each other) than a short one. In real life, compare the informativeness of a successful week to a successful year for your policy learning. | {
"domain": "datascience.stackexchange",
"id": 4837,
"tags": "reinforcement-learning, policy-gradients"
} |
Median filter problems about the output image | Question:
An image has an isolated cluster of dark pixels on a light background. The area of the cluster
is $(n - 1)/2$ pixels where $n$ is an odd positive integer. What happens to the cluster when it is filtered with a median filter of size of $n \times n$? Explain why this happens.
I guess the output picture will be blurred since there are dark pixels.
Answer: To give you a hint, here is how you can do this calculation with a PC, to check your assumptions:
import cv2
import numpy as np
import matplotlib.pyplot as plt
N = 8
n = 5
n12 = (n - 1) // 2  # integer division, so the slice indices below are ints
I = np.ones((N,N), dtype=np.uint8)
I[2:(2+n12),2:(2+n12)] = 0
I2 = cv2.medianBlur(I, n)
print("Input image:")
print(I)
print()
print("Output image")
print(I2)
The program output is:
Input image:
[[1 1 1 1 1 1 1 1]
[1 1 1 1 1 1 1 1]
[1 1 0 0 1 1 1 1]
[1 1 0 0 1 1 1 1]
[1 1 1 1 1 1 1 1]
[1 1 1 1 1 1 1 1]
[1 1 1 1 1 1 1 1]
[1 1 1 1 1 1 1 1]]
Output image
[[1 1 1 1 1 1 1 1]
[1 1 1 1 1 1 1 1]
[1 1 1 1 1 1 1 1]
[1 1 1 1 1 1 1 1]
[1 1 1 1 1 1 1 1]
[1 1 1 1 1 1 1 1]
[1 1 1 1 1 1 1 1]
[1 1 1 1 1 1 1 1]]
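A quick count (my own back-of-envelope check, taking the question's cluster area at face value) shows why no $n \times n$ window can ever have a dark median:

```python
n = 5
window = n * n              # each median window contains n*n = 25 samples
majority = window // 2 + 1  # 13 dark samples needed to drag the median down to 0
cluster = (n - 1) // 2      # at most 2 dark pixels, per the question's cluster area
print(cluster < majority)   # True, so every window's median stays light
```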
So, the median filter completely removes the dark area. Now, it's up to you to understand how the median filter works and why it completely removes the dark area. You can ask specific questions about whatever your problem is now. | {
"domain": "dsp.stackexchange",
"id": 4598,
"tags": "image-processing, homework, median-filter"
} |
movement detection in ROS | Question:
Hello everyone, I tried a program in OpenCV for movement detection and it works pretty well, since it captures 2 camera frames and computes the difference between them (converting the image to grayscale, then thresholding, then taking the absolute difference, plus other variables like the sensitivity of the differentiation). However, when I try to move the same program to ROS, it's not working.
My assumption about this is that when I use image transport nodes, the image callback function just receives one image per call, so the comparison would always be between the same frame. Can you guys please suggest something I could use? I don't have any idea what to do, and I checked on the Internet and didn't find anything.
Originally posted by Diego on ROS Answers with karma: 46 on 2015-05-26
Post score: 0
Answer:
I guess you could create a global "image" variable and then, if the variable is not set yet, just use the callback to store the variable. If the variable is set, use the callback to compare the new image with the stored image, and afterwards set the saved image to the new image.
Some untested python pseudocode:
old_img = None

def image_cb(msg):
    global old_img
    if old_img is None:
        old_img = msg
        return
    movement_result = compare_imgs(old_img, msg)
    old_img = msg
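compare_imgs is left undefined above; here is one hypothetical frame-differencing version in the spirit of the question (NumPy-based, with arbitrary threshold and sensitivity values). In a real node the ROS image messages would first be converted to arrays, e.g. via cv_bridge.

```python
import numpy as np

def compare_imgs(old_img, new_img, threshold=25, sensitivity=500):
    """Return True if enough pixels changed between two grayscale frames."""
    diff = np.abs(old_img.astype(np.int16) - new_img.astype(np.int16))
    return int((diff > threshold).sum()) > sensitivity

# Toy frames: a bright square "moves" between the two frames.
a = np.zeros((100, 100), dtype=np.uint8)
b = np.zeros((100, 100), dtype=np.uint8)
a[10:40, 10:40] = 255
b[30:60, 30:60] = 255
print(compare_imgs(a, b))  # True: many pixels differ
print(compare_imgs(a, a))  # False: identical frames
```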
Originally posted by daenny with karma: 376 on 2015-05-27
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Diego on 2015-06-03:
Hello, thanks for your answers. I had done something similar to this and it worked. | {
"domain": "robotics.stackexchange",
"id": 21772,
"tags": "opencv"
} |
Using a service interface to get user input from dialogs | Question: I haven't worked with WPF/MVVM for a while, but now I'm back fixing some issues with one of my old apps and remembering the struggles I always had when dealing with asking for user input (eg. showing a dialog) from a view model.
I've put some thought into it and decided to try routing user input requests through a service interface so that it can be mocked/tested, while keeping the view logic out of my view model.
public interface IUserInputService
{
UserRegistration PromptUserRegistration();
}
I then wrote a generic method that creates a new instance of a view and view model, connects them together, shows the dialog and then returns the dialog view model and result (eg. accepted, cancelled) back to the caller. Finally, the caller can extract information from the returned view model and return the actual desired result.
The view models have events which are called when the view model is "done", eg: Accepted and Cancelled. These are hooked up using the generic method.
public class UserInputService : IUserInputService
{
// Implementation of the service interface using the generic method below
public UserRegistration PromptUserRegistration()
{
var result = ShowDialog<UserRegistrationDialog, UserRegistrationViewModel>((v, vm) =>
{
vm.Accepted += (s, e) => v.DialogResult = true;
vm.Cancelled += (s, e) => v.DialogResult = false;
});
if (result.State != DialogState.Accepted)
{
return null;
}
return new UserRegistration
{
FirstName = result.ViewModel.FirstName,
LastName = result.ViewModel.LastName
};
}
// The generic method
private DialogResult<TViewModel> ShowDialog<TView, TViewModel>(
Action<TView, TViewModel> viewModelConnected
)
where TView : Window, new()
where TViewModel : ViewModelBase, new()
{
TViewModel viewModel = new TViewModel();
TView view = new TView();
view.DataContext = viewModel;
viewModelConnected(view, viewModel);
view.ShowDialog();
DialogState state;
switch (view.DialogResult)
{
case true:
state = DialogState.Accepted;
break;
case false:
state = DialogState.Declined;
break;
default:
state = DialogState.Cancelled;
break;
}
return new DialogResult<TViewModel>(state, viewModel);
}
}
Now I can pass around the service interface using DI/IoC and view models can ask for user input without caring how (eg. via dialog or mocking).
The other types (DialogResult and DialogState) are these:
public enum DialogState
{
Accepted,
Declined,
Cancelled
}
public class DialogResult<TViewModel>
{
public DialogResult(DialogState state, TViewModel viewModel)
{
State = state;
ViewModel = viewModel;
}
public DialogState State { get; }
public TViewModel ViewModel { get; }
}
So my questions are...
Is this a good approach?
What would you do differently and why?
Any other suggestions regarding code quality?
I want to stick to the principles of MVVM as much as possible (though within reason) while keeping things testable... Let me know what you think.
Answer: I have worked in a project where such an IUserInputService was used. After few years, the implementation was a big class containing the dialog calling logic for all modules.
Today, I would prefer a generic service which is able to connect a view with the view model without implementing the logic for each concrete dialog.
Code Review
Your code looks clean and tidy
The communication between view model and view via events may be OK in your case, because the view model is just an internal structure not known outside the method. However, in general I would prefer a way where the view is not referenced by the view model (which is indirectly the case here, because the delegates contain a reference to the view). Usually, I use a view model property "bool Close" and bind it to the view so that the view can react on changes and close itself.
Your approach does not allow creation of view models via DI container. If the VM has dependencies, they must be passed to the method PromptUserRegistration.
Configuring the VM with default values requires to parameterize the PromptUserRegistration as well.
All things considered, I would prefer an approach where the view model is created / configured by the calling logic and shown via a generic service, because it is more flexible and involves less distributed code.
Note that there are also frameworks available that adress that problem. E.g. https://github.com/FantasticFiasco/mvvm-dialogs | {
"domain": "codereview.stackexchange",
"id": 29759,
"tags": "c#, wpf, mvvm"
} |
What caused a measurable patch of Mt. St. Helens' ash to be deposited across Oklahoma? | Question: Looking at the ashfall distribution map from the 1980 eruption of Mt. St. Helens (below), there is a measured 'patch' of ash depth significant enough to be measured occurring in Oklahoma.
What caused a measurable patch of Mt. St. Helens' ash to be deposited across Oklahoma?
Answer: I will propose an explanation. From experience I would say it has something to do with the US meteorological systems (systems as in masses of air) active on the 18th of May 1980 and the week(s) after.
A bit of background: air masses can be high pressure (cool and stable) or low pressure (warmer, more energy, more volatile, less stable). The next figure shows in red a low, unstable and warm system, and in blue a cool, stable system. (full website here)
Further, let's take into account the meteorological nature of Oklahoma - an area in a volatile sector, where warm wet air masses from the gulf collide with colder, dryer masses from the west and the north - an area that is part of what is broadly known as the tornado alley. On the wiki page for OK we see a typical system condition favorable for volatile weather. Green masses are warm, blue are cold, and red is where the action is likely to occur.
Tornadoes or other frequent wild weather events are the result of a clash between those widely different air masses, temperatures, moisture levels and pressures, often leading to storm cells or other unpleasant windy/stormy situations.
Now let's get back to our volcano. I found those daily meteorological maps from the NOAA. When entering the proper dates, let's say 18th of May 1980 and the week after, we see the various systems in action. To keep things simple, I picked the 18th of May, and the 21st.
On the 18th of May there was a low pressure system about right over the volcano - which was ideal to transport and displace a lot of ash very far. On the 20th, at 7:00 am, there was a Low in formation over Oklahoma which would become full-fledged on the 21st (see figure), and there was bad weather that day over Oklahoma City because of clashing cells, eventually causing precipitation (rain and ash?).
I am not a climate-meteo-specialist, but I have good confidence that this scenario could explain a part of the reason why there is ash in Oklahoma. | {
"domain": "earthscience.stackexchange",
"id": 408,
"tags": "volcanology, volcanic-hazard"
} |
Grooves on water tank | Question: Why are there grooves on almost every water tank?
Just typing water tank in Google images would reveal what I am talking about. And here is a sample picture!
I think it would mostly be regarding water pressure.
Please share your knowledge.
Answer: Of all the reasons I can think of, I came up with two advantages that the grooves may provide.
The grooves increase the distribution of material away from the neutral axis, and therefore increase the second moment of area. An increased second moment of area means that the wall is more resistant to bending.
$\sigma=\frac{My}{I}$
It's just like a sheet of paper. If you hold a sheet of paper from an end, it easily bends over. But if you roll it up into a cylinder, then hold it from an end; that should keep it straight like a cylinder.
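As a toy illustration with made-up dimensions (not from the answer itself), the parallel-axis theorem shows how moving material away from the neutral axis raises the second moment of area (the $I$ in the bending formula):

```python
# Hypothetical wall strip: width b, thickness t (mm). Compare a flat strip with
# the same material split into two half-thickness strips offset +/- d from the
# neutral axis, roughly what a corrugation does (parallel-axis theorem).
b, t, d = 100.0, 2.0, 5.0

I_flat = b * t**3 / 12
I_corrugated = 2 * (b * (t / 2)**3 / 12 + b * (t / 2) * d**2)
print(I_corrugated / I_flat)  # >> 1: far stiffer in bending for the same material
```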
But why don't they just bulge the whole thing out? (That should provide even more resistance to bending.) The answer is that though it does increase the resistance to bending, it also increases the shear stress between adjacent layers. The right balance is obtained by implementing an $I$-beam shape. They make multiple $I$-beam shapes in the form of grooves.
In short, the grooves increase resistance to bending and decreases shear stress between adjacent layers of the tank.
The grooves provide holding support, allowing for better transportation of the tank. | {
"domain": "physics.stackexchange",
"id": 58367,
"tags": "thermodynamics, pressure, everyday-life, water, material-science"
} |
Orientation of a rocket going in a circle? | Question: This problem is from my physics homework:
A rocket is moving at constant speed in a perfect circle in deep space, far away from any planets or stars. The diagram below shows the circle the ship follows along with 4 points labeled A, B, C, and D during the rocket's motion.
Basically here' there's a diagram of a circle with the top, bottom, left, and four points evenly spaced around the edges, dividing the circle into quarters.
Sketch the circle on your written homework and on your sketch show how the rocket must be oriented at each of the 4 labeled points. Clearly show which way the front end of the rocket points and also clearly show which end of the rocket has the rocket engine. Tell whether the engine is firing, or not. (This part is graded by hand, not online.)
There's a clip art of a rocket, but it only has one engine, so I'm not quite sure how the propulsion is supposed to work.
So here's what I know: if the rocket is going around in a circle, it's constantly accelerating towards the center. But it's not actually going towards the center, so I assume it's not pointed towards the center. Would it be pointed tangent to the circle? Or would it be aimed inwards slightly?
Answer: It is undergoing uniform circular motion, so its linear velocity is tangential to the circle, and its acceleration is towards the centre of the circle.
You can easily prove that it is tangential using calculus on complex numbers;
Let $z=Re^{i \theta}=x+iy$
Differentiating both sides with respect to time,
$\frac{dz}{dt}=iR\omega e^{i \theta}=i \omega z$, where $\omega=\frac{d\theta}{dt}$ is the angular velocity.
This means that the linear velocity is tangential to any point on the locus given by $|z|=R$.
Differentiating again, $\frac{d^2z}{dt^2}=-\omega^2z$, so the acceleration is antiparallel to the position vector, directed inwards towards the centre.
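As a numerical sanity check of the two derivatives above (my own sketch, with arbitrary values of the radius, angular velocity and time):

```python
import cmath

R, omega, t0, dt = 2.0, 0.7, 1.3, 1e-5  # arbitrary radius, angular velocity, time

def z(t):
    return R * cmath.exp(1j * omega * t)  # position on the circle

v = (z(t0 + dt) - z(t0 - dt)) / (2 * dt)           # numerical dz/dt
a = (z(t0 + dt) - 2 * z(t0) + z(t0 - dt)) / dt**2  # numerical d^2 z/dt^2

# Velocity is tangential: Re(v * conj(z)) ~ 0, i.e. v is perpendicular to z.
# Acceleration is centripetal: a ~ -omega^2 * z, antiparallel to z.
print(abs((v * z(t0).conjugate()).real) < 1e-6)  # True
print(abs(a + omega**2 * z(t0)) < 1e-3)          # True
```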
Now, the rocket is going in uniform circular motion- hence it has an acceleration directed towards the centre of the circle. In addition, as it is in deep space far away from other masses, we can neglect the gravitational attraction due to those objects. The only possible source for the centripetal force is hence the propulsive force of the engines.
As it is directed towards the centre of the circle, I imagine that the rocket would be pointed inwards towards the centre. | {
"domain": "physics.stackexchange",
"id": 16887,
"tags": "homework-and-exercises, kinematics, acceleration, velocity, vectors"
} |
Identifying Nucleophiles and Electrophiles | Question: Are all polar molecules both nucleophilic and electrophilic, depending on which atom you are talking about? Take water for example. Oxygen bears the partial negative charge. Oxygen is nucleophilic. Because oxygen isolates electron density from the hydrogens, the hydrogens bear partial positive charges. So would the hydrogens be electrophilic?
Answer: Nucleophilicity and electrophilicity are not absolute measurable quantities - they are assessed on a per-case basis and usually in reference to some kind of reaction. So basically you are right in assuming that polar molecules have both properties. In the special case of water these properties also stem from the autoionisation $$\ce{2 H2O <=> OH- + H3O+}$$
But let us have a look at some molecules:
$\ce{CO}$ pretty good nucleophile especially towards transition metals. It has quite a small dipole towards the carbon, but a massive overload of electron density due to the $\sigma$-type HOMO
$\ce{CO2}$ also quite polar bonds, but no (permanent) dipole - still quite a potent electrophile given the right nucleophile
$\ce{SO3}$ has no permanent dipole (though a transient one), although it has quite polar bonds. However, it is only a strong electrophile.
$\ce{NH3}$ quite polar, hence a reasonably good nucleophile. But it is also almost always way too strong a base to act as an electrophile
$\ce{NH(Et)2}$ not so different from ammonia, quite polar, but yet a terrible nucleophile, because it is too big
$\ce{BH3}$ not very polar (no permanent dipole) but quite a good electrophile
$\ce{I-}$ a monopol, very polarisable, good nucleophile, good leaving group.
...
The consideration of the strength of nucleophiles and electrophiles involves many variables. It might be polarity, local electron density, molecular orbitals, or polarisabilities. It is often highly dependent on the solvent and the other reactants. If you add $\ce{I-}$ to an acyl chloride you will certainly have a substitution to some extent:
$$\ce{R-COCl + I- <=> R-COI + Cl-}$$
If you add hydroxide ions, then you will certainly end up with the acid:
$$\ce{R-COCl + I- + {}^{-}OH <=> R-COOH + I- + Cl-}$$
Look at it nucleophilicity or electrophilicity from a case to case basis. | {
"domain": "chemistry.stackexchange",
"id": 1230,
"tags": "acid-base"
} |
What happens to an electron in vacuum? | Question: What happens to a free electron in vacuum? Does it accelerate? Does it keep absorbing energy from vacuum fluctuations? Or does it lose all its energy and ceases to be an electron? Please avoid technical terms, as I am relatively new to this field. Thank you!!
Answer: All systems described by relativistic four-vectors (i.e. with a fixed invariant mass) retain their energy and momentum, by energy-momentum conservation, until they interact; and an electron, like all elementary particles, has a fixed invariant mass. So the electron, if it is at rest, remains at rest; if in motion, it keeps its momentum.
If you are thinking of renormalization, that is used in describing interactions, to avoid infinities in the calculations. There are no interactions for a free electron in vacuum.
"domain": "physics.stackexchange",
"id": 68712,
"tags": "electrons, vacuum"
} |
What does the set {n | n is an integer and n = n + 1} represent? | Question: I am reading Michael Sipser's book Introduction to the Theory of Computation, which mentions the set $$S = \{ n \mid \text{$n$ is an integer and $n = n + 1$}\}.$$ This doesn't make any sense to me.
I would understand if $n$ were equal to infinity or something, so it probably wouldn't matter if we added $1$ to it.
Am I understanding it correctly?
Or is this just an empty set?
Answer: But infinity isn't an integer. Since there is no integer $n$ such that $n=n+1$, you're right that the set is empty. | {
"domain": "cs.stackexchange",
"id": 21443,
"tags": "terminology, computability, number-theory"
} |
How can we directly add half cell potentials to measure the EMF of a galvanic cell? | Question: We know that $E^\circ_{\text{cell}}=E^\circ_{\text{reduction at cathode}}+E^\circ_{\text{oxidation at anode}}$. For example, in a cell $\ce{Zn(s) | Zn^2+(aq) || Cu^2+(aq) | Cu(s)}$, $$E^\circ_{\text{cell}}=E^\circ_{\text{reduction at cathode}}+E^\circ_{\text{oxidation at anode}}=E^\circ_{\ce{Cu^2+/Cu}}+E^\circ_{\ce{Zn/Zn^2+}}$$
This makes sense to me. I think of $E^\circ_{\text{cell}}$ as the $E^\circ$ value of the net cell reaction i.e. of $\ce{Zn(s) + Cu^2+(aq) -> Zn^2+(aq) + Cu(s)}$.
But, consider this cell: $\ce{Zn(s) | Zn^2+(aq) || Ag^+(aq) | Ag(s)}$ According to my textbook, the $E^\circ_{\text{cell}}$ value here is again defined by:
$$E^\circ_{\text{cell}}=E^\circ_{\text{reduction at cathode}}+E^\circ_{\text{oxidation at anode}}=E^\circ_{\ce{Ag+/Ag}}+E^\circ_{\ce{Zn/Zn^2+}}$$
But, I think this is not correct.
Consider the fact that this cell is made up of two half cell reactions:
$$
\begin{array}{}
\text{Oxidation}&\ce{Zn(s)}&\ce{-> Zn^2+(aq) + 2e-}&E_1^\circ\\
\text{Reduction}&\ce{Ag^+(aq) + e-}&\ce{-> Ag(s)}&E_2^\circ\\
\text{Net cell reaction}&\ce{Zn(s) + 2Ag^+(aq)}&\ce{-> Zn^2+(aq) + 2Ag(s)}&E_3^\circ\\
\end{array}
$$
and that we can't add $E^\circ$ values directly, since they are an intensive property. Instead, we need to say that $\Delta G_3=\Delta G_1+\Delta G_2$ and then write $2E_3^\circ=2E_1^\circ+E_2^\circ$ or $E_3^\circ=\frac{2E_1^\circ+E_2^\circ}2$, which is definitely $\neq E_1^\circ+E_2^\circ$, as my textbook says.
I believe I have correctly presented everything here, and cannot figure out my mistake. Where am I wrong?
Answer: You are writing $\Delta G_3=\Delta G_1+\Delta G_2$, but it should be $\Delta G_3=\Delta G_1+2\Delta G_2$ (I am considering $\Delta G_2$ as the free energy change for the conversion of $\ce{Ag+}$ to $\ce{Ag}$, which occurs 2 times in the final reaction). Because $\Delta G$ also depends on the number of moles of reactants and products, if you add the second reaction two times, its $\Delta G$ must also be counted twice. That is where you've made a mistake.
Thus the equation will be $$2E_3^0 = 2E_1^0 + 2E_2^0$$ So you will have $E_3^0 = E_1^0 + E_2^0$. | {
"domain": "chemistry.stackexchange",
"id": 9793,
"tags": "electrochemistry"
} |
Can well-formed formulas in predicate logic for a given signature be recognized in LOGSPACE? | Question: I read that visibly pushdown languages are supposed to model the typical simple formal languages like XML better than deterministic context free languages. The visibly pushdown languages can be recognized in LOGSPACE. I wonder whether the well-formed formulas in predicate logic for a given signature are a visibly pushdown language, or can at least be recognized in LOGSPACE.
Here is a typical well-formed formula in predicate logic, for the given signature $(f(\cdot,\cdot), g(\cdot))$:
$\forall x (f(x,y)=z \land g(z)=x)$
A corresponding context free grammar would be
$P \to T=T$
$P \to (P \land P)$
$P \to (P \lor P)$
$P \to \lnot P$
$P \to \forall V P$
$P \to \exists V P$
$T \to V$
$V \to x$
$V \to y$
$V \to z$
$T \to f(T,T)$
$T \to g(T)$
Some people prefer to use reverse polish notation:
$xyfz=zgx=\land x\forall$
A corresponding context free grammar would be
$P \to TT=$
$P \to PP\land$
$P \to PP\lor$
$P \to P\lnot$
$P \to PV\forall$
$P \to PV\exists$
$T \to V$
$V \to x$
$V \to y$
$V \to z$
$T \to TTf$
$T \to Tg$
Because I have now written down two explicit grammars, let the question just be whether these two grammars generate visibly pushdown languages, or can at least be recognized in LOGSPACE.
Answer: Yes, well-formed formulas in predicate logic for a given signature can be recognized in LOGSPACE, at least for the two notations (grammars) given in the question!
The normal notation is indeed a visibly pushdown language, with $\Sigma_c=\{(\}$, $\Sigma_r=\{)\}$. When we encounter the symbol $($, we push the number of still-expected occurrences of terms or propositions (and whether we expect terms or propositions) onto the stack. When we encounter the symbol $)$, we first check that exactly the expected number of terms or propositions occurred, then overwrite the counter with the value on the stack and decrease it by one. The counter is finite, because its maximum value is the maximum arity of the signature, and its minimum value is zero, because we reject whenever the counter would become negative.
The given RPN grammar is also a visibly pushdown language, because the maximum arity of the given signature doesn't exceed two. You push on the stack the variables, pop from the stack for the binary function, the quantifier and the equality relation, and do nothing for the rest. What we actually push on the stack is the type (empty, term, or proposition) of the top element of the "real RPN" stack, before the new element is pushed. Convince yourself that we can always know this type (variable is not a type here).
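As a concrete (though not logarithmic-space) illustration, the type-tracking idea for the RPN grammar can be sketched with an explicit stack. In this sketch `&`, `|`, `~`, `A`, `E` stand in for $\land$, $\lor$, $\lnot$, $\forall$, $\exists$, and a separate type `'V'` is tracked as a small variation so that quantifiers only bind bare variables; the LOGSPACE recognizer replaces the stack with the bounded bookkeeping described above.

```python
def is_wff_rpn(formula):
    """Check an RPN formula over the signature (f binary, g unary)
    with variables x, y, z, using an explicit stack of types.
    Types: 'V' variable, 'T' compound term, 'P' proposition."""
    stack = []

    def pop_term():
        # a variable also counts as a term (T -> V)
        return bool(stack) and stack.pop() in ('V', 'T')

    for sym in formula:
        if sym in 'xyz':                       # V -> x | y | z
            stack.append('V')
        elif sym == 'g':                       # T -> Tg
            if not pop_term():
                return False
            stack.append('T')
        elif sym in ('f', '='):                # T -> TTf,  P -> TT=
            if not (pop_term() and pop_term()):
                return False
            stack.append('T' if sym == 'f' else 'P')
        elif sym == '~':                       # P -> P~  (negation)
            if not (stack and stack.pop() == 'P'):
                return False
            stack.append('P')
        elif sym in ('&', '|'):                # P -> PP& | PP|
            if not (stack and stack.pop() == 'P'
                    and stack and stack.pop() == 'P'):
                return False
            stack.append('P')
        elif sym in ('A', 'E'):                # P -> PVA | PVE (quantifiers)
            if not (stack and stack.pop() == 'V'
                    and stack and stack.pop() == 'P'):
                return False
            stack.append('P')
        else:
            return False                       # unknown symbol
    return stack == ['P']

# The example formula from the question, with A standing for the final forall:
print(is_wff_rpn("xyfz=zgx=&xA"))              # True
```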
Also RPN formulas for signatures with higher arity can be recognized in LOGSPACE, because they can be converted in constant time and space to formulas over an equivalent signature whose arity doesn't exceed two, by using a pair constructor. | {
"domain": "cs.stackexchange",
"id": 4709,
"tags": "complexity-theory, formal-languages, first-order-logic"
} |
Are Triceratops and Eotriceratops similar enough to be considered as the same genus? | Question: Everybody knows Triceratops, the horned dinosaur from Late Cretaceous North America, who lived 68-66 MYA.
In 2007, Eotriceratops xerinsularis was named and described (Wikipedia). It lived in (what is now) Canada about 68 MYA. It is a little bigger than Triceratops but in many aspects very similar; e.g., both have long horns, a solid frill, and similar postcranial skeletons. Gregory S. Paul, in his Princeton Field Guide to Dinosaurs (2016), suggested putting Eotriceratops xerinsularis in the same genus as Triceratops, renaming it Triceratops xerinsularis. I find this suggestion appealing, but I know Paul's taxonomy is unorthodox and not mainstream (e.g. I do not agree with the thesis that Torosaurus is merely a fully adult Triceratops).
So the question is: can Eotriceratops xerinsularis be put in the same genus as Triceratops, and is this suggestion plausible enough?
Answer: Taxonomists have wide leeway over assigning higher taxa. The publication process in which Eotriceratops was described as a new genus made it a new genus. However, a different set of reviewers might have rejected the new generic description and suggested that the author place it in Triceratops. The process is somewhat arbitrary, and there are some random elements to higher-taxon assignment. But it is the system we use. If somebody were to write a paper synonymizing Eotriceratops with Triceratops, and that paper were accepted, then Eotriceratops would cease to be a valid name. It's all about publication precedence. Gregory Paul may have good reason for his suggestion, but it must be published before a change is made. | {
"domain": "biology.stackexchange",
"id": 9377,
"tags": "taxonomy, palaeontology, dinosaurs"
} |
Largest alkane having a given alkane as its base name | Question: What is the largest (most carbon atoms) alkane having heptane as its base name?
For example, 2,2,3,3-tetramethylbutane is the largest (most carbon atoms) alkane retaining butane as its base name.
Answer: Any alkyl substituent of butane in position 2 or 3 cannot be longer than $\ce{CH3}$ since that would lead to a longer parent chain. And obviously, there cannot be any alkyl substituent at all in the first or the last position of the butane chain. Therefore, the largest structure based on a butane parent chain is 2,2,3,3-tetramethylbutane.
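This counting argument is easy to automate. The sketch below assumes that a substituent at interior position $i$ of an $n$-carbon parent chain may contain chains of at most $\min(i-1,\,n-i)$ carbons (so that the parent chain remains the longest chain), and that each interior chain carbon offers two substituent slots:

```python
def branch_carbons(depth):
    """Carbons in the largest alkyl substituent whose internal chains are
    limited to `depth` carbons: depth 1 -> methyl (1 C), depth 2 ->
    tert-butyl (4 C), depth 3 -> 13 C; each level adds three maximal
    branches to a single carbon."""
    if depth == 0:
        return 0
    return 1 + 3 * branch_carbons(depth - 1)

def largest_alkane(n):
    """(carbons, hydrogens) of the largest alkane whose IUPAC parent
    chain is the n-carbon linear alkane."""
    carbons = n
    for pos in range(2, n):                    # interior positions 2..n-1
        depth = min(pos - 1, n - pos)          # chain-length limit here
        carbons += 2 * branch_carbons(depth)   # two slots per chain carbon
    return carbons, 2 * carbons + 2            # alkanes are CnH(2n+2)

print(largest_alkane(4))   # (8, 18): 2,2,3,3-tetramethylbutane, C8H18
print(largest_alkane(7))   # (53, 108): the C53H108 heptane-based structure
```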
This principle can be expanded to a heptane parent chain. The maximum length for alkyl substituent chains are 0 for position 1 and 7, 1 for position 2 and 6, 2 for position 3 and 5, and 3 for position 4. Therefore, the largest theoretical structure based on a heptane parent chain is 3,3,5,5-tetra-tert-butyl-4,4-bis[3-(tert-butyl)-2,2,4,4-tetramethylpentan-3-yl]-2,2,6,6-tetramethylheptane ($\ce{C53H108}$). | {
"domain": "chemistry.stackexchange",
"id": 11267,
"tags": "organic-chemistry, nomenclature"
} |
How to show $T(\rho,\sigma)≥\sum_i|r_i − s_i|$ with $r_i,s_i$ eigenvalues of $\rho,\sigma$? | Question: The proof of the Fannes' inequality replies on the formula $T(ρ, σ)≥\sum_i|r_i − s_i|$, where $r_i,s_i$ are the eigenvalues of $\rho,\sigma$, in the descending order.
In the proof given in Box 11.2, Page 512, Chapter 8, Quantum Computation and Quantum Information by Nielsen and Chuang, it is given that
By the spectral decomposition, we may decompose $\rho−\sigma=Q−R$, where $Q$ and $R$ are positive operators with orthogonal support, so $T(\rho,\sigma)=tr(R)+tr(Q)$. Defining $V\equiv R+\rho=Q+\sigma$, we have $T(\rho, \sigma) = tr(R) + tr(Q) = tr(2V) − tr(\rho) − tr(\sigma)$. Let $t_1\geq t_2\geq \cdots \geq t_d$ be the eigenvalues of $T$. Note that $t_i\geq \max(r_i,s_i)$, so $2t_i\geq r_i + s_i + |r_i−s_i|$, and it follows that
$$
T(\rho,\sigma)\geq\sum_i|r_i − s_i|
$$
My understanding
Let $H=\rho-\sigma$, which is hermitian since both $\rho$ and $\sigma$ are positive semidefinite.
The spectral decomposition of $H$ can be written as, $H=\rho-\sigma=VD_HV^\dagger$.
Let $D_Q$ be the diagonal matrix that has all the positive eigenvalues of $H$, and $D_R$ contains the negation of all the negative eigenvalues of $H$, such that we can define $VD_QV^\dagger=Q$ and $VD_RV^\dagger=R$. Therefore,
$$
D_H=D_Q-D_R\\
H=\rho-\sigma=VD_HV^\dagger=V(D_Q-D_R)V^\dagger=VD_QV^\dagger-VD_RV^\dagger=Q-R
$$
$\implies D_Q,D_R$, and thereby $Q,R$ have orthogonal supports, since $Q$ only supports the positive eigenspace and $R$ only supports the negative eigenspace.
$\therefore D_QD_R=D_RD_Q=0\implies QR=RQ=0$
So we have, $\rho-\sigma=Q-R$, where $Q$ and $R$ are positive operators with orthogonal supports.
The trace distance between $\rho$ and $\sigma$ is defined as, $T(\rho,\sigma)=\dfrac{1}{2}tr|\rho-\sigma|=\dfrac{1}{2}tr\sqrt{(\rho-\sigma)^\dagger(\rho-\sigma)}$
\begin{align}
(\rho-\sigma)^\dagger(\rho-\sigma)&=(Q-R)^\dagger(Q-R)=(Q^\dagger-R^\dagger)(Q-R)\\
&=Q^\dagger Q+R^\dagger R-Q^\dagger R-R^\dagger Q\\
&=QQ+RR-QR-RQ=Q^2+R^2\text{ , since }QR=RQ=0\\
&=QQ+RR+QR+RQ=(Q+R)^2\\
\sqrt{(\rho-\sigma)^\dagger(\rho-\sigma)}&=Q+R
\end{align}
So the trace distance becomes $T(\rho,\sigma)=\dfrac{1}{2}tr(Q+R)=\dfrac{1}{2}\Big(tr(Q)+tr(R)\Big)$
Defining $V\equiv R+\rho=Q+\sigma$ yields $2T(\rho,\sigma)=tr(Q)+tr(R)=tr(2V)-tr(\rho)-tr(\sigma)$
How do I proceed further to obtain $T(\rho,\sigma)\ge\sum_i|r_i − s_i|$? Or is it rather $2T(\rho,\sigma)\ge\sum_i|r_i − s_i|$?
Min-Max theorem
The Rayleigh quotient for any vector $|x\rangle$ is defined to be the ratio $r(x)=\dfrac{\langle x|V|x\rangle}{\langle x|x\rangle}$, where $r(x)$ is scaling invariant, ie., $r(tx)=\dfrac{\langle tx|V|tx\rangle}{\langle tx|tx\rangle}=\dfrac{t^2\langle x|V|x\rangle}{t^2\langle x|x\rangle}=\dfrac{\langle x|V|x\rangle}{\langle x|x\rangle}=r(x)$
$\therefore$ it is sufficient to study the special case $\langle x|x\rangle=1$, so that the critical points of the function $\dfrac{\langle x|V|x\rangle}{\langle x|x\rangle}$ is the same as that of $\langle x|V|x\rangle$ subjected to the constraint ${\langle x|x\rangle}=1$.
We can prove that the critical points of $r(x)$ are the eigenvectors $|u_i\rangle$ of the operator $V$. Therefore, $t_{\min}=\min_{u\neq 0}r(u)$ and $t_{\max}=\max_{u\neq 0}r(u)$.
If $t_1\geq t_2\geq \cdots\geq t_k\geq\cdots\geq t_d$ be the eigenvalues of $V$ in the descending order then this implies
\begin{align}
t_d&=\min_{u\neq 0}r(u)=\min_{x\neq 0}\{r(x):|x\rangle\in U\text{ and }\dim(U)=d\}\\
t_{d-1}&=\min_{u\neq 0\in u_d^\perp}r(u)=\max\{\min_{x\neq 0}\{r(x):|x\rangle\in U\text{ and }\dim(U)=d-1\}\}\\
\vdots\\
t_{k}&=\min_{u\neq 0\in \{u_d,u_{d-1},\cdots, u_{k+1}\}^\perp}r(u)\\
&=\max\{\min_{x\neq 0}\{r(x):|x\rangle\in U\text{ and }\dim(U)=k\}\}\\
&=\max\{\min_{x\neq 0}\{\langle x|V|x\rangle:|x\rangle\in U\text{ and }\dim(U)=k\}\}\\
&=\max\{\min_{x\neq 0}\{\langle x|R+\rho|x\rangle:|x\rangle\in U\text{ and }\dim(U)=k\}\}\\
&=\max\{\min_{x\neq 0}\{\langle x|R|x\rangle+\langle x|\rho|x\rangle:|x\rangle\in U\text{ and }\dim(U)=k\}\}\\
&\geq\max\{\min_{x\neq 0}\{\langle x|\rho|x\rangle:|x\rangle\in U\text{ and }\dim(U)=k\}\}\\
&=r_k
\end{align}
Answer: By the min-max theorem, we have$^1$
$$
\begin{align}
t_k&=\max_{\quad U\\\dim U=k}\min_{|x\rangle\in U\\\langle x|x\rangle=1}\langle x|V|x\rangle\tag1\\
&=\max_{\quad U\\\dim U=k}\min_{|x\rangle\in U\\\langle x|x\rangle=1}\left[\langle x|R|x\rangle+\langle x|\rho|x\rangle\right]\tag2\\
&\ge\max_{\quad U\\\dim U=k}\min_{|x\rangle\in U\\\langle x|x\rangle=1}\langle x|\rho|x\rangle\tag3\\
&=r_k\tag4
\end{align}
$$
where we used $V:=R+\rho$ and the fact that $R$ is positive semi-definite. This is a rigorous statement of the intuitive fact that adding a positive semi-definite operator cannot reduce eigenvalues. Since $V=Q+\sigma$, an analogous argument gives us $t_k\ge s_k$. Therefore,
$$
2t_k\ge2\max(r_k, s_k)=r_k+s_k+|r_k-s_k|.\tag5
$$
This allows us to bound the trace distance $2T(\rho,\sigma)=\mathrm{tr}(2V)-\mathrm{tr}(\rho)-\mathrm{tr}(\sigma)$ as follows
$$
\begin{align}
2T(\rho,\sigma)&=\mathrm{tr}(2V)-\mathrm{tr}(\rho)-\mathrm{tr}(\sigma)\tag6\\
&=\left(\sum_k2t_k\right)-\mathrm{tr}(\rho)-\mathrm{tr}(\sigma)\tag7\\
&\ge\left(\sum_kr_k+s_k+|r_k-s_k|\right)-\mathrm{tr}(\rho)-\mathrm{tr}(\sigma)\tag8\\
&=\left(\mathrm{tr}(\rho)+\mathrm{tr}(\sigma)+\sum_k|r_k-s_k|\right)-\mathrm{tr}(\rho)-\mathrm{tr}(\sigma)\tag9\\
&=\sum_k|r_k-s_k|.\tag{10}
\end{align}
$$
This agrees with your expectation that $2T(\rho,\sigma)\ge\sum_k|r_k-s_k|$ rather than $T(\rho,\sigma)\ge\sum_k|r_k-s_k|$ as in $(11.46)$ in Nielsen & Chuang. It appears there is a mistake in Nielsen & Chuang, which you can confirm by comparing with the Wikipedia article. Also, compare with the last paragraph on page $404$, where a similar argument is made without the error. Finally, we can see that $(11.46)$ cannot be true by substituting $\rho=|0\rangle\langle 0|$ and $\sigma=\frac{I}{2}$. It looks as if box $11.2$ might be using $T(\rho,\sigma):=\|\rho-\sigma\|_1$, without the $\frac12$ factor.
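The counterexample can be checked directly. Both $\rho=|0\rangle\langle0|$ and $\sigma=I/2$ are diagonal in the computational basis, so the eigenvalues of $\rho-\sigma$ are just the differences of the diagonal entries:

```python
# rho = |0><0| and sigma = I/2, both diagonal in the computational basis
r = [1.0, 0.0]    # eigenvalues of rho, descending
s = [0.5, 0.5]    # eigenvalues of sigma, descending

# Since the operators commute, |rho - sigma| is diagonal with entries
# |r_i - s_i|, so the trace distance follows from eigenvalues alone.
diffs = [abs(a - b) for a, b in zip(r, s)]
T = 0.5 * sum(diffs)       # trace distance
bound = sum(diffs)         # sum_i |r_i - s_i|

print(T, bound)            # 0.5 1.0: (11.46) fails, but 2T >= bound holds
```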
$^1$ There is a typo in box $11.2$. It says
Let $t_1\ge t_2\ge \dots\ge t_d$ be the eigenvalues of $T.$
but it should read
Let $t_1\ge t_2\ge \dots\ge t_d$ be the eigenvalues of $V.$
Note that $T$ denotes the trace distance which is not a linear function and hence doesn't have eigenvalues. | {
"domain": "quantumcomputing.stackexchange",
"id": 4853,
"tags": "textbook-and-exercises, nielsen-and-chuang, information-theory, entropy, trace-distance"
} |
N numbers, N/2 pairs. Minimizing the maximum sum of a pairing. Proving greedy algorithm | Question: So say I have n numbers, where n is even. I want to pair the numbers such that the maximum sum of the pairs is minimized. For example -2, 3, 4, 5. The ideal pairing is (-2, 5), (3, 4), since its maximum sum is 3 + 4 = 7, and that is the minimal sum possible for a max sum in any pairing. The key to the algorithm is to sort the values from least to greatest. Then pair the least with the greatest, and so on, until you reach the center of the ordering.
Example: 3, -2, 4, 5
Algorithm sorts the values: -2 , 3, 4, 5
Then pairs first with last: (-2, 5)
Then pairs the next available first and last: (3, 4)
Terminates since no pairs left.
This is a greedy algorithm and I am trying to prove that it is always correct using a "greedy stays ahead" approach. My issue is that I am struggling to show that the algorithm's maximum sum is always $\leq$ optimal maximum sum. My intention was to suppose for contradiction that the optimal maximum sum is $<$ the algorithm's maximum sum. But I'm not sure how to find a contradiction. How would this proof go?
Answer: Can you see why $\max((-2)+5, 3+4) \lt \max(-2+3, 4+5)$?
The reason is simple. Because on the right hand side, the maximum number 5 is not paired with the minimum number.
Let the numbers be $a_1\le a_2\le \cdots\le a_n$, and suppose they are paired in some way.
If $a_n$ is paired with $a_1$, we are done at this round.
Suppose $a_n$ is paired with $a_j$, $j\not= 1$. Then $a_1$ is paired with some $a_k$, $k\not= n$. So we have two pairs, $\{a_n, a_j\}$ and $\{a_1, a_k\}$, with sums $a_n + a_j$ and $a_1 + a_k$, the larger of which is $a_n+a_j$.
Let us switch $a_j$ and $a_1$ so that $a_n$ pairs with $a_1$ and $a_j$ pairs with $a_k$. The sums of the two new pairs are $a_n + a_1$ and $a_j + a_k$, each of which is at most $a_n+a_j$. So after the switch, the maximum sum among the pairs involving $a_n, a_j, a_k, a_1$ does not increase, and since the other pairs stay the same, the maximum sum over all pairs does not increase either.
Continuing this process, at each round we make the largest remaining number pair with the smallest remaining number, and the maximum pair sum never increases. After $n/2$ rounds, we reach the pairing where $a_k$ is paired with $a_{n+1-k}$.
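The exchange argument is easy to sanity-check against brute force on small inputs (a quick sketch; the brute force enumerates every permutation, so keep $n$ small):

```python
from itertools import permutations

def greedy_max_pair_sum(nums):
    """Sort, then pair smallest with largest, working inward."""
    s = sorted(nums)
    return max(s[i] + s[-1 - i] for i in range(len(s) // 2))

def brute_force_max_pair_sum(nums):
    """Minimum over all pairings of the maximum pair sum."""
    best = float("inf")
    for p in permutations(nums):
        worst = max(p[i] + p[i + 1] for i in range(0, len(p), 2))
        best = min(best, worst)
    return best

print(greedy_max_pair_sum([3, -2, 4, 5]))        # 7, from the pair (3, 4)
print(brute_force_max_pair_sum([3, -2, 4, 5]))   # 7 as well
```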
You can see the above approach is indeed the "greedy stays ahead" approach. | {
"domain": "cs.stackexchange",
"id": 15961,
"tags": "algorithms, correctness-proof, greedy-algorithms"
} |
What may cause voltage between 2 distant probes of multimeter in apartment room? | Question: In an empty apartment room,my multimeter shows 3.6 mV voltage when I put 2 probes with 2 meters distance.
I am curious,what may cause voltage between 2 distant probes of multimeter in apartment room?
Thanks for any hint.
Answer: Small 'ghost' or 'phantom' voltages may be seen on modern digital multimeters (DMMs) when set to a low voltage range, especially inside buildings with AC wiring in walls, under floors, etc. DMMs tend to have high input impedance. This is something that users of older-style meters with a moving pointer did not experience.
The cause is usually capacitive coupling between the meter leads and household AC wiring in the walls, under floors, etc. You may notice a change in the reading if you move the wires about, short the probes together, touch one of the probes, etc.
This 'feature' of modern DMMs can confuse people when testing conductors in AC power wiring to see if they are alive or dead. For example, a wire disconnected at both ends but running parallel to energised circuits carrying power may give quite a large reading to ground (e.g. 75 volts in a 110-volt installation). The very high input impedance of a DMM does not load the conductor, so the DMM picks up the induced voltage. An old moving-coil meter would present only a tiny load, but still enough to collapse the induced voltage.
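To get a feel for the magnitudes involved, here is a rough numeric sketch. All values are assumptions for illustration: a few picofarads of stray lead-to-wiring capacitance, a 110 V / 60 Hz source, and a typical 10 MΩ DMM input forming a capacitive voltage divider:

```python
import math

v_source = 110.0     # volts RMS on the nearby mains wiring (assumed)
freq = 60.0          # mains frequency in hertz (assumed)
c_stray = 5e-12      # farads of stray coupling capacitance (assumed)
z_meter = 10e6       # ohms, typical DMM input impedance

# Capacitive reactance of the stray coupling, about 530 Mohm at 60 Hz
x_cap = 1.0 / (2 * math.pi * freq * c_stray)

# Simple series divider: the meter sees a small fraction of the mains voltage
v_ghost = v_source * z_meter / math.hypot(z_meter, x_cap)
print(round(v_ghost, 2))   # on the order of a couple of volts
```

Shorting the probes together collapses this reading, which is one quick way to tell a phantom voltage from a real source.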
It also means you need to be careful and take steps in lab work if measuring very small AC voltages. | {
"domain": "physics.stackexchange",
"id": 90678,
"tags": "electromagnetism, electromagnetic-radiation, magnetic-fields, nuclear-physics, voltage"
} |
Which heuristics guarantee the optimality of A*? | Question: The following is a statement and I am trying to figure out if it's true or false and why.
Given a non-admissible heuristic function, A* will always give a solution if one exists, but there is no guarantee it will be optimal.
I know that a non-admissible heuristic is one with $h(n) > h^*(n)$ for some $n$ (where $h^*(n)$ is the real cost to the goal), but I do not know whether there is such a guarantee.
Which heuristics guarantee the optimality of A*? Is the admissibility of the heuristic always a necessary condition for A* to produce an optimal solution?
Answer:
Given a non-admissible heuristic function, A* will always give a solution if one exists, but there is no guarantee it will be optimal.
I won't duplicate the proof here, but it isn't too hard to prove that any best-first search will find a solution for any measure of "best", given that a path to the solution exists and memory is unbounded. A* is a best-first search algorithm, so it will always find a solution if one exists.
Which heuristics guarantee the optimality of A*? Is the admissibility of the heuristic always a necessary condition for A* to produce an optimal solution?
Admissibility is not a necessary condition. Take any admissible heuristic $h_1$ and make a new function $h(n) = h_1(n)+5$. This heuristic is not admissible, but if you run A* on it, it will still find optimal solutions.
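This is easy to demonstrate on a toy example. The graph and heuristic values below are invented for illustration; $h_1$ is admissible, $h_1+5$ is not, yet A* returns the same optimal cost with both (a constant shift leaves the expansion order unchanged):

```python
import heapq

def a_star(graph, h, start, goal):
    """Minimal A*: graph maps node -> list of (neighbor, edge_cost)."""
    frontier = [(h(start), 0, start)]          # (f, g, node)
    best_g = {start: 0}
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g                           # cost of the path found
        if g > best_g.get(node, float("inf")):
            continue                           # stale queue entry
        for nbr, cost in graph.get(node, []):
            ng = g + cost
            if ng < best_g.get(nbr, float("inf")):
                best_g[nbr] = ng
                heapq.heappush(frontier, (ng + h(nbr), ng, nbr))
    return None

# Toy graph, invented for illustration; the optimal S->G path costs 4.
graph = {"S": [("A", 1), ("B", 2)], "A": [("G", 5)], "B": [("G", 2)]}
h1 = {"S": 3, "A": 4, "B": 2, "G": 0}          # admissible estimates

cost_admissible = a_star(graph, lambda n: h1[n], "S", "G")
cost_shifted = a_star(graph, lambda n: h1[n] + 5, "S", "G")
print(cost_admissible, cost_shifted)           # 4 4
```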
But, we also have to ask what you mean by "the optimality of A*", because optimality can have two senses here. My point in the previous paragraph is in the sense of returning optimal paths. An alternate interpretation is that no algorithm performs fewer expansions than A* with the same information. This is probably not what was meant, and the answer in that context is far more complicated. But, with an inconsistent (but admissible) heuristic, A* can perform exponentially more expansions than other known algorithms, and thus is not the optimal algorithm to use. | {
"domain": "ai.stackexchange",
"id": 2140,
"tags": "search, heuristics, a-star, admissible-heuristic"
} |
Is every subset of a RE language also RE, in general? | Question: I'm trying to understand the question in my title in an intuitive way:
If I have an RE language A, then some TM, say TM(A), accepts it. If I take a subset of A, say A2, then all elements of A2 will cause TM(A) to halt in the accept state.
However, is it in general possible to create a TM for A2, say TM(A2), such that A2 is exactly the set of inputs that make TM(A2) halt in the accept state, thus making A2 its language? And would this language be RE?
Thanks
Answer: No, it is not true that every subset of a recursively enumerable (RE) language is also RE. In fact, in a certain sense, a subset of an RE language fails to be RE more often than not.
For an example, let us consider the RE language $\Sigma^*$, which contains all words. It is the language of the Turing machine that always halts and accepts in its first step. Note that every language is a subset of $\Sigma^*$, so every language that is not RE is an example.
Here are a few exercises for you to check.
Exercise 1. Is every subset of a regular language regular?
Exercise 2. Is every subset of a context-free language context-free?
Exercise 3. Is every subset of a recursive language recursive? | {
"domain": "cs.stackexchange",
"id": 12987,
"tags": "formal-languages, formal-grammars, chomsky-hierarchy"
} |
Difference between thermodynamic and kinetic stability | Question: What is the difference between thermodynamic and kinetic stability? I'd like a basic explanation, but not too simple. For example, methane does not burn until lit -- why?
Answer: To understand the difference between kinetic and thermodynamic stability, you first have to understand potential energy surfaces, and how they are related to the state of a system.
A potential energy surface is a representation of the potential energy of a system as a function of one or more of the other dimensions of a system. Most commonly, the other dimensions are spatial. Potential energy surfaces for chemical systems are usually very complex and hard to draw and visualize. Fortunately, we can make life easier by starting with simple 2-d models, and then extend that understanding to the generalized N-d case.
So, we will start with the easiest type of potential energy to understand: gravitational potential energy. This is easy for us because we live on Earth and are affected by it every day. We have developed an intuitive sense that things tend to move from higher places to lower places, if given the opportunity. For example, if I show you this picture:
You can guess that the rock is eventually going to roll downhill, and eventually come to rest at the bottom of the valley.
However, you also intuitively know that it is not going to move unless something moves it. In other words, it needs some kinetic energy to get going.
I could make it even harder for the rock to get moving by changing the surface a little bit:
Now it is really obvious that the rock isn't going anywhere until it gains enough kinetic energy to overcome the little hill between the valley it is in, and the deeper valley to the right.
We call the first valley a local minimum in the potential energy surface. In mathematical terms, this means that the first derivative of potential energy with respect to position is zero:
$$\frac{\mathrm dE}{\mathrm dx} = 0$$
and the second derivative is positive:
$$\frac{\mathrm d^2E}{\mathrm dx^2} \gt 0$$
In other words, the slope is zero and the shape is concave up (or convex).
The deeper valley to the right is the global minimum (at least as far as we can tell). It has the same mathematical properties, but the magnitude of the energy is lower – the valley is deeper.
If you put all of this together, (and can tolerate a little anthropomorphization) you could say that the rock wants to get to the global minimum, but whether or not it can get there is determined by the amount of kinetic energy it has.
It needs at least enough kinetic energy to overcome all of the local maxima along the path between its current local minimum and the global minimum.
If it doesn't have enough kinetic energy to move out of its current position, we say that it is kinetically stable or kinetically trapped. If it has reached the global minimum, we say it is thermodynamically stable.
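The rock-and-valley picture can be made concrete with a toy one-dimensional potential energy surface (all numbers are arbitrary illustration values):

```python
# Toy 1-D potential energy surface: index = position, value = energy
pes = [5.0, 2.0, 3.0, 1.0, 4.0, 0.5, 6.0]

# Local minima sit lower than both neighbours; the global minimum is
# simply the lowest point overall.
local_minima = [i for i in range(1, len(pes) - 1)
                if pes[i] < pes[i - 1] and pes[i] < pes[i + 1]]
global_minimum = min(range(len(pes)), key=pes.__getitem__)

def barrier(i, j):
    """Energy the system at position i must gain to reach position j:
    the highest point on the path between them, relative to pes[i]."""
    lo, hi = min(i, j), max(i, j)
    return max(pes[lo:hi + 1]) - pes[i]

print(local_minima, global_minimum)   # [1, 3, 5] 5
print(barrier(1, 5))                  # 2.0: the system at position 1 is
                                      # kinetically trapped unless it gains
                                      # at least 2.0 units of kinetic energy
```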
To apply this concept to chemical systems, we have to change the potential energy that we use to describe the system. Gravitational potential energy is too weak to play much of a role at the molecular level. For large systems of reacting molecules, we instead look at one of several thermodynamic potential energies. The one we choose depends on which state variables are constant. For macroscopic chemical reactions, there is usually a constant number of particles, constant temperature, and either constant pressure or volume (NPT or NVT), and so we use the Gibbs Free Energy ($G$ for NPT systems) or the Helmholtz Free Energy ($A$ for NVT systems).
Each of these is a thermodynamic potential under the appropriate conditions, which means that it does the same thing that gravitational potential energy does: it allows us to predict where the system will go, if it gets the opportunity to do so.
For kinetic energy, we don't have to change much - the main difference between the kinetic energy of a rock on a hill and the kinetic energy of a large collection of molecules is how we measure it. For single particles, we can measure it using the velocity, but for large groups of molecules, we have to measure it using temperature. In other words, increasing the temperature increases the kinetic energy of all molecules in a system.
If we can describe the thermodynamic potential energy of a system in different states, we can figure out whether a transition between two states is thermodynamically favorable – we can calculate whether the potential energy would increase, decrease, or stay the same.
If we look at all accessible states and decide that the one we are in has the lowest thermodynamic potential energy, then we are in a thermodynamically stable state.
In your example using methane gas, we can look at Gibbs free energy for the reactants and products and decide that the products are more thermodynamically stable than the reactants, and therefore methane gas in the presence of oxygen at 1 atm and 298 K is thermodynamically unstable.
However, you would have to wait a very long time for methane to react without some outside help. The reason is that the transition states along the lowest-energy reaction path have a much higher thermodynamic potential energy than the average kinetic energy of the reactants. The reactants are kinetically trapped, i.e. stable simply because they are stuck in a local minimum. The minimum amount of energy that you would need to provide in the form of heat (a lit match) to overcome that barrier is called the activation energy.
We can apply this to lots of other systems as well. One of the most famous and still extensively researched examples is glasses.
Glasses are interesting because they are examples of kinetic stability in physical phases. Usually, phase changes are governed by thermodynamic stability. In glassy solids, the molecules would have a lower potential energy if they were arranged in a crystalline structure, but because they don't have the energy needed to get out of the local minimum, they are "stuck" with a liquid-like disordered structure, even though the phase is a solid. | {
"domain": "chemistry.stackexchange",
"id": 1396,
"tags": "thermodynamics, kinetics, stability"
} |
Python list permutations in lexicographic order | Question: This code is about Lexicographic Ordering algorithms and its works but how can i change the style of the code to professional coding style?
from typing import List
def swap(_list: List, index1: int, index2: int):
_list[index1], _list[index2] = _list[index2], _list[index1]
return _list
def lexico_graphic(items: List):
paths = [items]
while True:
# step 1
largest_i = None
for i in range(len(items) - 1):
if items[i] < items[i + 1]:
largest_i = i
if largest_i is None:
return paths
# step 2
largest_j = 0
for j in range(len(items)):
if items[largest_i] < items[j]:
largest_j = j
# step 3
items = swap(items, largest_i, largest_j)
# step 4
items = items[:largest_i + 1] + items[:largest_i:-1]
paths.append(items)
thanks for your time
Answer: In Python, objects are passed into functions by creating a reference that points to the same value, meaning that when you invoke your function swap() on the list items, the local _list refers to the exact same items object you passed in. When you mutate _list on the next line (_list[index1], _list[index2] = _list[index2], _list[index1]), the underlying object changes, which means your function is not pure: you were trying to return a new list with 2 items swapped, but instead modified the original list.
If you don't want this function to mutate the original list you can write it like this:
def swap(_list: List, index1: int, index2: int):
new_list = _list[:] # equivalent to `_list.copy()`
new_list[index1], new_list[index2] = new_list[index2], new_list[index1]
return new_list
As you can see the original list never changes.
If you instead wanted the function to mutate your list, you can get rid of the return statement:
def swap(_list: List, index1: int, index2: int):
_list[index1], _list[index2] = _list[index2], _list[index1]
and use it without the assignment:
swap(items, largest_i, largest_j)
# instead of
items = swap(items, largest_i, largest_j)
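Putting the pure approach together, here is a hypothetical rewrite of the whole routine as a generator that never mutates previously produced lists (the name and signature are only suggestions):

```python
from typing import Iterable, Iterator, List

def lexicographic_permutations(items: Iterable) -> Iterator[List]:
    """Yield successive permutations in lexicographic order, starting
    from the given arrangement, without mutating shared state."""
    current = list(items)
    yield list(current)
    while True:
        # step 1: rightmost index with a strictly larger right neighbour
        pivot = max((i for i in range(len(current) - 1)
                     if current[i] < current[i + 1]), default=None)
        if pivot is None:
            return
        # step 2: rightmost index holding a value larger than the pivot's
        succ = max(j for j in range(len(current))
                   if current[pivot] < current[j])
        nxt = current[:]                               # copy, stay pure
        nxt[pivot], nxt[succ] = nxt[succ], nxt[pivot]  # step 3: swap
        nxt[pivot + 1:] = reversed(nxt[pivot + 1:])    # step 4: reverse tail
        current = nxt
        yield list(current)

print(list(lexicographic_permutations([1, 2, 3])))
# [[1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 1, 2], [3, 2, 1]]
```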
The algorithm looks fine, but naming may need some adjustments:
_list - a leading underscore in Python conventionally marks a name as internal (here it mainly avoids shadowing the built-in list); this parameter could simply be named items
lexico_graphic - the name doesn't convey what this function does; something like lexicographic_permutations would be clearer | {
"domain": "codereview.stackexchange",
"id": 44060,
"tags": "python, python-3.x"
} |
SIC C command checker programming fragment | Question: This program gets a string then checks if that's a command by comparing it to the 8 options available. It also checks if it has parameters. It these things with the help of the getting to know the length and getting checking for string equality.
/* BASIC HELPERS START */
/* BASIC LENGTH */
//a simple function to get total length of a string/command
int helperLength(char a[])
{
for(int c = 0; c<1000; c++)
if (a[c] == '\0')
return c;
}
/* BASIC STRING EQUALITY */
//most equality comparison between 2 strings
//programmingsimplified basically gave EQ for the most part
int CCHelperEQ(char a[], char b[])
{
int c = 0;
while (a[c] == b[c])
{
if (a[c] == '\0' || b[c] == '\0')
break;
c++;
}
if (a[c] == '\0' && b[c] == '\0')
return 0;
else
return -1;
}
/* BASIC HELPERS END */
/* WITH and WITHOUT PARAMETERS START */
/* COMMANDS WITH PARAMETERS */
//check if the parameters have errors
//might need to use three new functions for each of the three parameters checkers
int CCHelperWP(char commWP[],char WCommand[], int Pstart, int Cend)
{//command with parameter, which command, start of parameters, end of command
//return 1 if paramets work
int parC = Pstart;//parameter counter
char string1[] = "load "; //1
char string4[] = "dump "; //4
char string6[] = "assemble "; //6
if (CCHelperEQ(commWP, string1) == 0)
{
while (parC < Cend)
{//checks if the parameter can work for the command
if (commWP[parC] == '\0')
return 0;
parC++;
}
return 1;
}
else if (CCHelperEQ(commWP, string4) == 0)
{
int dumpC=0;
while (parC < Cend)
{//checks if the parameter can work for the command
//dump needs two hex values but at this point I do not need to check the parameters are hex only that they exist so this is enough
if (commWP[parC] == '\0' && dumpC == 0)
return 0;
else if (commWP[parC] == ' ' && dumpC == 0)
dumpC++;
else if (commWP[parC] == ' ' && dumpC == 1)
return 0;
parC++;
}
return 1;
}
else if (CCHelperEQ(commWP, string6) == 0)
{
while (parC < Cend)
{//checks if the parameter can work for the command
if (commWP[parC] == '\0')
return 0;
parC++;
}
return 1;
}
return 0;
}
/* COMMANDS WITHOUT PARAMETERS */
//check the command we are using is one without parameters
//needed otherwise error where functions that need parameters could give a bug with the set-up on compare_command
int CCHelperWO(char commNP[])
{//command no parameter
char string1[] = "execute";
char string2[] = "debug";
char string3[] = "help";
char string4[] = "directory";
char string5[] = "exit";
if (CCHelperEQ(commNP, string1) == 0)
{
return 1;
}
else if (CCHelperEQ(commNP, string2) == 0)
{
return 1;
}
else if (CCHelperEQ(commNP, string3) == 0)
{
return 1;
}
else if (CCHelperEQ(commNP, string4) == 0)
{
return 1;
}
else if (CCHelperEQ(commNP, string5) == 0)
{
return 1;
}
else return 0;
}
/* WITH and WITHOUT PARAMETERS END */
//compare_command main helper function
int CCHelper(char a[], char b[])
{
int c = 0;
int length = helperLength(a);
while (a[c] == b[c])
{
if (a[c] == '\0' && b[c] == '\0')
{//checks if the command is one that doesn't need parameters
if(CCHelperWO(b) == 1)
{ return 0; }
else return -1;
}
if (b[c] == '\0')
if (1 == CCHelperWP(a, b, c, length))
return 0;
c++;
}
}
int compare_command(char co[]) {
/*
going to give an int value to each command
with lots of if statements to check if the words is the command
after the words is the command I will have other functions
that check for the different parameters required for each command.
*/
char stringLF[] = "load "; //1
char stringExe[] = "execute"; //2
char stringDeb[] = "debug"; //3
char stringDSE[] = "dump "; //4
char stringH[] = "help"; //5
char stringAF[] = "assemble "; //6
char stringDir[] = "directory"; //7
char stringExit[] = "exit"; //8
if (CCHelper(co, stringLF) == 0)
{//With Parameter
return 1;
}
else if (CCHelper(co, stringExe) == 0)
{//Without Parameter
return 2;
}
else if (CCHelper(co, stringDeb) == 0)
{//Without Parameter
return 3;
}
else if (CCHelper(co, stringDSE) == 0)
{//With Parameter
return 4;
}
else if (CCHelper(co, stringH) == 0)
{//Without Parameter
return 5;
}
else if (CCHelper(co, stringAF) == 0)
{//With Parameter
return 6;
}
else if (CCHelper(co, stringDir) == 0)
{//Without Parameter
return 7;
}
else if (CCHelper(co, stringExit) == 0)
{//Without Parameter
return 8;
}
else return 9;
}
I do not know if this programming fragment is a bit overcomplicated, or if using pointers would help. Any tips would be greatly appreciated since I feel I'm using too many functions for what I'm doing.
Answer: You say:
I feel I'm using too many functions for what I'm doing.
I think you're right. You appear to have reimplemented 2 standard library functions with minor changes. Your helperLength() function could be written as:
int helperLength(char a[])
{
int result = strlen(a);
if (result >= 1000)
{
result = 1000;
}
return result;
}
The question is, why are you limiting the length to be 1000 in this way? It seems like you could replace calls to helperLength() with a call to strlen() and do the limiting at the call site (if it even needs to be done) unless you're going to be reusing the function in lots of places.
Likewise, your CCHelperEQ function could be written as:
int CCHelperEQ(char a[], char b[])
{
return strcmp(a, b) != 0;
}
Additionally, you seem to be doing the same work repeatedly. In CCHelperWP() the while loop in the first if block looks identical to the one in the last if block. You should make it a function and call it from both. And it could also probably be replaced with this logic:
return strlen(&commWP[Pstart]) >= (Pend - Pstart);
Your CCHelperWO() function makes good use of the CCHelperEQ() function. Nice reusability there! (But I'd still replace those calls with just a call to strcmp().)
The CCHelper() function would be easier to read if you wrote it like this:
int CCHelper(char a[], char b[])
{
if (strcmp(a, b) == 0)
{
if (CCHelperWO(b) == 1)
{
return 0;
}
return -1;
}
int length = strlen(a);
int bLen = strlen(b);
if (CCHelperWP(a, b, bLen, length) == 1)
{
return 0;
}
// DO SOMETHING HERE!
}
Notice that at the end, it's possible that you've reached a state where you should return something, but there's no return statement. That's an error and you need to put something there.
Finally your compare_command() function could be rewritten using a table driven design. It would look something like this:
int compare_command(char co[]) {
const char* commands[] = {
"load ", //1
"execute", //2
"debug", //3
"dump ", //4
"help", //5
"assemble ", //6
"directory", //7
"exit" //8
};
const int numEntries = sizeof(commands) / sizeof(commands[0]);
for (int i = 0; i < numEntries; ++i)
{
if (strcmp(co, commands[i]) == 0)
{
return i + 1;
}
}
return numEntries + 1;
}
Once you've done all of that, I recommend rethinking your naming strategy. The names in your code are very difficult to read. Using the CC prefix is fine, since this is a "command checker". But names like CCHelperWO and commNP are very terse. I recommend using longer, more descriptive names, like CCParseCommandNoParam() and command for those particular names.
Also, you might want to rethink whether you should really have to handle commands with parameters differently from commands without parameters. It seems like a loop that gets the command and then checks how many parameters it should have, and then parses that many parameters would collapse this down to 1 code path for all commands. | {
"domain": "codereview.stackexchange",
"id": 27597,
"tags": "c, strings"
} |
In the Bohr model of the atom, does the centripetal force balance out the electrostatic force? | Question: Is this picture correct?
It's the Bohr Model for electron moving around the nucleus.
In my opinion, the centripetal force should be directed towards the center, and the electrostatic force should be directed outward.
And I found a counterexample:
Plus, in all my classical mechanics classes, the centripetal force was inward.
So can someone check this picture and tell me if it's wrong or right.
And if it's right, can you elaborate on why it is?
Answer: Yes, the arrow representing a centripetal force should point inwards.
The electrostatic force is the centripetal force here. There is only one force (the electrostatic force); the centripetal force is not something different but a way of describing what the actual force does (i.e. that it forces the object onto an orbit).
If one only wants to show real forces, then there is only one arrow showing the electrostatic force (like in the second picture), but if you want to represent both aspects, then the arrow representing the centripetal force should be exactly the same (equal in size and direction).
The first picture implies (wrongly) that there is a balance between two (equal but opposite) forces. If this was true, then the electron would not move in an orbit but in a straight line. An object on an orbit is continually accelerated towards the centre, otherwise it wouldn't follow a curved path. | {
"domain": "physics.stackexchange",
"id": 83157,
"tags": "quantum-mechanics, electrostatics, electrons, atomic-physics, orbitals"
} |
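The point made in the answer above, that the Coulomb force *is* the centripetal force rather than a second force balancing it, can be checked numerically for the hydrogen ground state in the Bohr model. A sketch using rounded standard constants (the orbital speed follows from the Bohr condition $mvr=\hbar$):

```python
k = 8.9875e9       # Coulomb constant, N m^2 / C^2
e = 1.602e-19      # elementary charge, C
m = 9.109e-31      # electron mass, kg
r = 5.292e-11      # Bohr radius, m
hbar = 1.0546e-34  # reduced Planck constant, J s

v = hbar / (m * r)               # Bohr-model orbital speed, ~2.19e6 m/s
f_centripetal = m * v**2 / r     # force required for the circular orbit
f_coulomb = k * e**2 / r**2      # the one real force acting on the electron

# The two agree to within rounding of the constants:
print(f_centripetal, f_coulomb)  # both ~8.2e-8 N
```

The agreement shows there is nothing left over to "balance": the single electrostatic force supplies exactly the centripetal acceleration of the orbit.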
Using stoichiometry to split a complex mineral | Question: Here is the problem I am stuck with, in particular just the first part of it.
I have calculated the moles $4.492 * 10^{-3} \,\mathrm{mol}$ and I am quizzed with regards to the equation that occurs. First assumption I am making is that the gas released is carbon dioxide.
Is it possible to separate out $\ce{(Mg_{x}Cu_{1-x})2(CO3)(OH)2}$ into simply the $\ce{(Mg_{x}Cu_{1-x})(CO3)}$ part that reacts with the dilute sulfuric acid to produce the carbon dioxide, the moles of which I know?
Answer:
Is it possible to separate out $\ce{(Mg_{x}Cu_{1-x})2(CO3)(OH)2}$ into simply the $\ce{(Mg_{x}Cu_{1-x})(CO3)}$ part that reacts with the dilute sulfuric acid to produce the carbon dioxide, the moles of which I know?
$4.492 * 10^{-3} \,\mathrm{mol}$ of $\ce{CO2}$ implies $4.492 * 10^{-3} \,\mathrm{mol}$ of $\ce{(Mg_{x}Cu_{1-x})2(CO3)(OH)2}$.
$4.492 * 10^{-3} \,\mathrm{mol}$ of $\ce{(Mg_{x}Cu_{1-x})2(CO3)(OH)2}$ is 0.8167 g.
Then, considering the atomic weights of C, O and H, subtract the contribution of these elements from 0.8167 g to determine the mass of $4.492 * 10^{-3} \,\mathrm{mol}$ of $\ce{(Mg_{x}Cu_{1-x})2}$.
Then divide by 2 to get the mass of $4.492 * 10^{-3} \,\mathrm{mol}$ of $\ce{Mg_{x}Cu_{1-x}}$;
Then use the atomic weights of Mg and Cu to solve for $x$. | {
"domain": "chemistry.stackexchange",
"id": 2811,
"tags": "inorganic-chemistry, analytical-chemistry, stoichiometry"
} |
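The arithmetic steps listed in the answer above can be carried through numerically, using the mass quoted in the answer (0.8167 g) and rounded atomic weights:

```python
n = 4.492e-3   # mol of CO2 released, hence mol of (MgxCu1-x)2(CO3)(OH)2
mass = 0.8167  # g of mineral (value quoted in the answer)
M = mass / n   # molar mass of the mineral, ~181.8 g/mol

# Subtract the C, H and O contributions: one CO3 and two OH per formula unit
M_C, M_O, M_H = 12.011, 15.999, 1.008
cho = M_C + 3 * M_O + 2 * (M_O + M_H)  # ~94.0 g/mol

metal2 = M - cho    # molar mass of the (MgxCu1-x)2 part
metal = metal2 / 2  # molar mass of one MgxCu1-x "average atom"

# Solve x*M_Mg + (1-x)*M_Cu = metal for x
M_Mg, M_Cu = 24.305, 63.546
x = (M_Cu - metal) / (M_Cu - M_Mg)
print(x)  # ~0.50
```

With these numbers the mineral comes out as roughly an equal mix of Mg and Cu, $x \approx 0.5$.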
Why does the naive bayes algorithm make the naive assumption that features are independent to each other? | Question: Naive Bayes is called naive because it makes the naive assumption that features have zero correlation with each other. They are independent of each other. Why does naive Bayes want to make such an assumption?
Answer: By doing so, the joint distribution can be found easily by just multiplying the probability of each feature whilst in the real world they may not be independent and you have to find the correct joint distribution. It is naive due to this simplification. | {
"domain": "datascience.stackexchange",
"id": 4154,
"tags": "machine-learning, probability, naive-bayes-classifier"
} |
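The simplification described in the answer above is easy to make concrete. A minimal sketch (the two-feature dataset is invented purely for illustration) comparing the naive per-feature product with the true joint likelihood:

```python
# Tiny invented dataset: (feature1, feature2, class)
data = [
    ("sunny", "hot", "yes"), ("sunny", "hot", "no"),
    ("rainy", "cool", "yes"), ("rainy", "hot", "no"),
    ("sunny", "cool", "yes"), ("rainy", "cool", "yes"),
]

def naive_likelihood(x1, x2, c):
    """P(x1|c) * P(x2|c): the naive-independence approximation."""
    rows = [d for d in data if d[2] == c]
    p1 = sum(d[0] == x1 for d in rows) / len(rows)
    p2 = sum(d[1] == x2 for d in rows) / len(rows)
    return p1 * p2

def joint_likelihood(x1, x2, c):
    """P(x1, x2 | c): the true joint, which needs far more data to estimate."""
    rows = [d for d in data if d[2] == c]
    return sum(d[0] == x1 and d[1] == x2 for d in rows) / len(rows)

# The naive product only needs per-feature counts...
print(naive_likelihood("sunny", "cool", "yes"))  # 0.5 * 0.75 = 0.375
# ...but differs from the true joint when the features are correlated.
print(joint_likelihood("sunny", "cool", "yes"))  # 1/4 = 0.25
```

The per-feature counts needed by the naive product grow linearly with the number of features, while estimating the full joint requires counts for every feature combination, which is exactly why the naive assumption is made.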
Phase and Amplitude estimation | Question: Context:
I have used FFT many times, but for real, non-periodic signals I consider it a poor estimator.
For most of my applications I am only interested on the power spectrum, so I use the Welch's method. I really like its approach to take overlapping segments, apply a window to each of them, calculate the FFT and finally average the result to get the power spectrum estimation. The method is easy to understand and its results are smooth and stable.
However, I have one application where I am also interested in the phase of the signal. In fact, I am not interested in the phase estimate of a particular signal segment; I want to see how the phases of very similar spread-spectrum signals vary over time.
My signals' spectra are spread, but they always have a peak in the PSD; however, this peak frequency varies over time. I will probably only need to trend the phase related to the frequency of this peak.
Questions:
1) Does it make sense to use Welch's method for magnitude and phase, instead of power? I mean average magnitude and phase, instead of power.
2) Is there any magnitude and phase spectrum estimator that is smooth/stable like Welch's method?
Answer: Answer to 1: No. The phase of a tone isn't constant from one FFT to the next, so averaging phase provides no reduction of error unless there is reason to expect coherence, as in a coherent algorithm like SAR.
Answer to 2 is a bit more complicated, but yes, because you need to do carrier and timing recovery in communications systems.
For something less structured, there are several frequency estimation algorithms that work on FFT data, like:
Macleod, Malcolm D. "Fast nearly ML estimation of the parameters of
real or complex single tones or resolved multiple tones." IEEE
Transactions on Signal processing 46.1 (1998): 141-148.
and this is just a personal favorite. There are 270 papers that cite that paper and many before that. This method is nearly ML, so not Welch, but not shabby.
La Scala, Barbara F., and Robert R. Bitmead. "Design of an extended
Kalman filter frequency tracker." IEEE Transactions on Signal
Processing 44.3 (1996): 739-742.
Is more along the lines of tracking frequency and there are 170 papers that cite this paper.
a good book is:
Quinn, Barry G., and Edward James Hannan. The estimation and tracking
of frequency. Vol. 9. Cambridge University Press, 2001.
A recent tutorial paper
Vila-Valls, Jordi, et al. "Are PLLs dead? A tutorial on Kalman filter-based techniques for digital carrier synchronization." IEEE Aerospace and Electronic Systems Magazine 32.7 (2017): 28-45.
might be useful.
Phase introduces complications because phase is relative to some reference and having the same reference for multiple tones may or may not be meaningful.
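Point 1 can be seen in a small simulation (a pure-Python sketch with no FFT library: the complex amplitude of a single tone bin is computed directly from its DFT sum). Averaging the magnitudes across segments is stable, Welch-style, while averaging the complex, phase-bearing values of segments whose phases are unrelated collapses toward zero:

```python
import cmath
import math
import random

random.seed(0)
N = 64  # segment length
k = 8   # tone bin: 8 cycles per segment

def tone_bin(x):
    """Complex amplitude of bin k: a single row of the DFT, normalized by N."""
    return sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N)) / N

# 50 segments of the same tone, each with an unrelated starting phase
segs = []
for _ in range(50):
    phi = random.uniform(0, 2 * math.pi)
    segs.append([math.cos(2 * math.pi * k * n / N + phi) for n in range(N)])

coeffs = [tone_bin(s) for s in segs]
mean_mag = sum(abs(c) for c in coeffs) / len(coeffs)  # Welch-style: stays ~0.5
mag_mean = abs(sum(coeffs) / len(coeffs))             # complex average: collapses
print(mean_mag, mag_mean)
```

Each segment contributes the same magnitude (0.5 for a unit cosine with this normalization), so the magnitude average is a useful estimate; the complex average shrinks roughly as $1/\sqrt{M}$ with the number of segments $M$, which is why phase averaging buys nothing without a coherence mechanism.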
One of the regular contributors here, Peter K has a web page (which I don't recall where) that has a lot of material that may be of interest. | {
"domain": "dsp.stackexchange",
"id": 6483,
"tags": "frequency-spectrum, power-spectral-density, phase, estimation"
} |
Number of Geostationary Orbits | Question: It is stated that there is only one geostationary orbit whose height can be calculated using:-
$H = [\frac{GM_ET^2}{4π^2}]^{\frac{1}{3}} - R$
But there can be more than one geostationary orbits if I use a different trick.
$ω = \frac{2π}{T}$
$ω = \frac{v_c}{R+H}$
$\therefore H = \frac{v_cT}{2π} - R$
$H$ represents height of geostationary orbit;
$G$ represents Gravitational constant;
$M_E$ represents Mass of Earth;
$T$ represents time period of rotation of Earth;
$v_c$ represents centripetal velocity;
$R$ represents Radius of Earth;
$\omega$ represents angular frequency;
My first equation states that there can be only one geostationary orbit as everything on Right Hand Side is constant.
My second equation states that there can be more than one geostationary orbit depending on the centripetal velocity.
Question:-
Can there be more than one geostationary orbits depending on the centripetal velocity?
Answer: Orbit is not like a circular racetrack where you can choose however fast you'd like to move along a fixed path - in orbit, your speed and trajectory are inextricably linked. Your error is in assuming that you can define an orbital path at whatever height you want, and then choose the speed/acceleration required to complete it in 24 hours.
That's not how it works - orbit is freefall, meaning the only acceleration comes from gravity alone. If you are too far away, gravity is too weak and your orbit will take more than 24 hours, and if you are too close, gravity is too strong and your orbit will take less than 24 hours. For a circular orbit, the centripetal acceleration, and therefore period, is determined solely by the height. Geosynchronous orbit can only occur at one particular height.
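This one-height conclusion can be checked numerically from the first formula in the question, plugging in standard values (a quick sketch; constants rounded):

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M_E = 5.972e24  # mass of Earth, kg
T = 86164.1     # sidereal day, s
R = 6.371e6     # mean radius of Earth, m

# H = (G M_E T^2 / (4 pi^2))^(1/3) - R
H = (G * M_E * T**2 / (4 * math.pi**2)) ** (1 / 3) - R
print(H / 1e3)  # ~35,800 km: the single geostationary altitude
```

Any satellite higher than this orbits too slowly, and any satellite lower orbits too quickly, to stay above a fixed point on the ground.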
At an intuitive level, you're suggesting that something in low earth orbit like the ISS (orbital altitude 400km, period 90 minutes) could be in a synchronous orbit instead just by slowing down so that the same trajectory takes 24 hours instead of 90 minutes. But that's not possible, the ISS would just crash to the ground if it reduced its speed without changing its altitude. Similarly, you couldn't put the moon into synchronous obit just by speeding it up, as it would need to go so fast it would escape the planet entirely. Orbital velocities and trajectories are not independent. | {
"domain": "physics.stackexchange",
"id": 96132,
"tags": "orbital-motion, celestial-mechanics, satellites"
} |
$ \hbar^2$ Correction to the Bohr-Sommerfeld Quantization Condition | Question: We can get the Bohr-Sommerfeld quantization from the WKB method as answered. Since we use an approximation, there should be an error in the result; I know this is not the case all the time, since under some conditions the approximation can yield the exact result. So, is there a way to find the $\hbar^2$ correction?
Answer: OK, from the point of view of WKB, if we could find $S_2$, the coefficient of the term $(\hbar/i)^2$, and plug it into the Schrödinger equation, we would find the correction. From WKB we already know that $S_0 = \int p(x)dx$ and $S_1 = -\frac{1}{2}\ln p + C$. We also have the condition $S_1''+(S_1')^2+2S_0'S_2' = 0$, so plugging in $S_1$ and $S_0$ written in terms of $p$, we get $S_2 = \frac{p'}{4p^2} + \frac{1}{8}\int\frac{p'^2}{p^3}dx$. WKB says that $\psi = e^{\frac{i}{\hbar}S}$ with $S = S_0 + \frac{\hbar}{i}S_1 + \left(\frac{\hbar}{i}\right)^2 S_2$. If we plug the $S$ containing the $S_2$ term into the Schrödinger equation, we can find the correction. | {
"domain": "physics.stackexchange",
"id": 98593,
"tags": "quantum-mechanics, atomic-physics, approximations, semiclassical, deformation-quantization"
} |
Difference between and diffusion and heat equations? | Question: I read everywhere that diffusion and heat equations are similar. The same differential equations can be solved for both.
Consider a finite one-dimensional diffusion or heat transfer where one end is insulated and the other end is kept with a constant flux.
The boundary conditions are the same for diffusion or heat transfer: flux is zero at one end and constant at the other.
$$\frac{\partial u}{\partial x} = C; x = 0$$
$$\frac{\partial u}{\partial x} = 0; x = l$$
and the initial state of
$$u(x,0) = 0\ ||\ u(x,0) = K $$
It can be any constant value (initial concentration or temperature).
In the case of heat transfer, the temperature can rise indefinitely. But in the case of diffusion, there is a capacity limit.
How should the diffusion and heat equations be solved for these boundary conditions? Is the solution is the same in both cases?
Answer: They are both solved by means of Separation of Variables.
Assume we're looking for a function of the form $u(x,t)$ that satisfies:
$$\frac{\partial u}{\partial t}=\alpha\frac{\partial^2u}{\partial x^2}$$
And satisfies some boundary conditions.
Assume the function to be the product of two functions: $X(x)$ and $T(t)$:
$$u(x,t)=X(x)T(t)$$
Insert into the original PDE:
$$X(x)T'(t)=\alpha T(t)X''(x)$$
Divide both sides by $X(x)T(t)$, rearrange and introduce a separation factor $-\lambda^2$:
$$\frac{1}{\alpha}\frac{T'}{T}=\frac{X''}{X}=-\lambda^2$$
We now have two ODEs:
$$\frac{X''}{X}=-\lambda^2$$
And:
$$T'+\alpha \lambda^2T=0$$
For $X(x)$ we get:
$$X(x)=A\sin(\lambda x)+B\cos(\lambda x)$$
This will be solved by finding the eigenvalues of $\lambda$, using the boundary conditions (example).
With the eigenvalues of $\lambda$ the second ODE for $T(t)$ can also be solved easily:
$$T(t)=C_1e^{-\alpha \lambda^2t}$$
A third boundary condition for the initial state $u(x,0)$ will also be required, to put it all together.
You can find the full derivation here (which I wrote a while back), applied to a 1D ($x$) diffusion problem.
Whether the solutions of the heat equation and diffusion equation are the same (or at least similar in form, bar the material constants) will depend on boundary and initial conditions.
The boundary conditions I used in the linked example do yield eigenvalues for $\lambda$ without particular problems.
$$\frac{\partial u}{\partial x}=0\:\text{at } x=0, x=L$$
A good discussion of various types of boundary conditions and the consequences for the heat equation can be found here.
Another set of boundary conditions as suggested in the comments could be:
$$\Big(\frac{\partial u}{\partial x}\Big)_{x=l}=0$$
$$u(0,t)=C$$
This corresponds to a rod heated to constant temperature $C$ at one end and insulated at the other end.
First we make a transformation of temperature by defining:
$$u=u_{real}-C$$
That then means that the second BC becomes homogeneous, always desirable:
$$u(0,t)=0$$
At the end of our toil we'll simply find:
$$u_{real}(x,t)=u(x,t)+C$$
With the second BC: $u(0,t)=0$:
$$X(0)T(t)=0$$
Assume $T(t)\neq 0$, then:
$$X(0)=0$$
$$A\sin(\lambda 0)+B\cos(\lambda 0)=0$$
$$\implies B=0$$
And:
$$X(x)=A\sin(\lambda x)$$
With the first BC, $u_x(l)=0$:
$$X'(l)T(t)=0$$
Assume $T(t)\neq 0$:
$$A\cos(\lambda l)=0$$
Assuming $A\neq 0$, then:
$$\lambda l=\frac{(2n-1)\pi}{2}$$
So our eigenvalues become:
$$\lambda_n=\frac{(2n-1)\pi}{2l}$$
For $n=1,2,3,...$
The functions $X_n(x)$ are:
$$X_n(x)=A_n\sin\Big(\frac{(2n-1)\pi x}{2l}\Big)$$
So we have:
$$u_n(x,t)=A_ne^{-\Big(\frac{(2n-1)\pi}{2l}\Big)^2\alpha t}\sin\Big(\frac{(2n-1)\pi x}{2l}\Big)$$
And using the superposition principle:
$$u(x,t)=\displaystyle \sum_{n=1}^{\infty}A_ne^{-\Big(\frac{(2n-1)\pi}{2l}\Big)^2\alpha t}\sin\Big(\frac{(2n-1)\pi x}{2l}\Big)$$
As regards the need for an initial condition $u(x,0)$, it should be fairly self-evident that if we're looking to find the time evolution of the distribution of temperature or concentration, we must know what that distribution was at $t=0$. This is much like wanting to know where a car driving at velocity $v(t)$ will be at time $t$: we need to know where it was at $t=0$, otherwise the question can have no definite answer.
In our problem, the initial condition $u(x,0)=f(x)$ will be used to determine the coefficients $A_n$. At $t=0$ the exponential factors all drop out and we get:
$$u(x,0)=f(x)=\displaystyle \sum_{n=1}^{\infty}A_n\sin\Big(\frac{(2n-1)\pi x}{2l}\Big)$$
The coefficients are determined by Fourier, from:
$$A_n=\frac{2}{l}\int_0^lf(x)\sin\Big(\frac{(2n-1)\pi x}{2l}\Big)dx$$
Remember that at the end of all of this:
$$u_{real}(x,t)=u(x,t)+C=C+\displaystyle \sum_{n=1}^{\infty}A_ne^{-\Big(\frac{(2n-1)\pi}{2l}\Big)^2\alpha t}\sin\Big(\frac{(2n-1)\pi x}{2l}\Big)$$ | {
"domain": "physics.stackexchange",
"id": 44511,
"tags": "thermodynamics, differential-equations, diffusion"
} |
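The boundary-value setup in the answer above can be sanity-checked with a simple explicit finite-difference scheme (a sketch; the step sizes are chosen to respect the explicit stability limit $\alpha\Delta t/\Delta x^2 \le 1/2$). Starting from $u=0$ with a fixed value $C$ at one end and an insulated far end, the solution relaxes to the steady state $u=C$ everywhere, exactly as the decaying exponentials in the series solution predict:

```python
alpha, C, l = 1.0, 1.0, 1.0
nx = 21
dx = l / (nx - 1)
dt = 0.4 * dx**2 / alpha  # satisfies the explicit stability limit
r = alpha * dt / dx**2

u = [0.0] * nx  # initial condition u(x,0) = 0
u[0] = C        # fixed temperature/concentration at x = 0
for _ in range(4000):  # integrate to t = 4000*dt = 4.0
    new = u[:]
    for i in range(1, nx - 1):
        new[i] = u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
    # insulated end du/dx = 0 at x = l, via a mirrored ghost node
    new[-1] = u[-1] + 2 * r * (u[-2] - u[-1])
    u = new

print(max(abs(v - C) for v in u))  # close to 0: u -> C everywhere
```

The slowest mode decays like $e^{-(\pi/2l)^2\alpha t}$, so by $t=4$ (with these parameters) the residual is far below 1% of $C$.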
Dijkstra's algorithm runtime for dense graphs | Question: The runtime for Dijkstra's algorithm implemented with a priority queue on a sparse graph is $O((E+V)\log V)$. For a dense graph such as a complete graph, there can be $V(V-1)/2$ edges.
Since $E \sim V^2$, is the runtime $O((V+V^2)\log V)$?
Answer: The runtime of Dijkstra's algorithm (with Fibonacci Heaps) is $O(|E|+|V|\log|V|)$, which is different from what you were posting.
If $|E|\in \Theta(|V|^2)$, that is, your graph is very dense, then this gives you a runtime of $O(|V|^2+|V|\log|V|)=O(|V|^2)$. A better runtime would be "surprising", since you have to look at every edge at least once.
When using binary heaps, you get a runtime of $O((|E|+|V|)\log|V|)$ which for $|E|\in \Theta(|V|^2)$ gives $O(|V|^2\log |V|)$. | {
"domain": "cs.stackexchange",
"id": 8193,
"tags": "graphs, algorithm-analysis, asymptotics, landau-notation"
} |
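A binary-heap Dijkstra of the kind analyzed above can be sketched with Python's heapq; each edge relaxation pushes at most one heap entry, and each pop costs $O(\log|V|)$, giving the $O((|E|+|V|)\log|V|)$ bound:

```python
import heapq

def dijkstra(graph, source):
    """graph: {u: [(v, w), ...]} with non-negative weights w.
    Returns shortest distances from source as a dict."""
    dist = {source: 0}
    pq = [(0, source)]  # (distance, vertex) binary heap
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale entry; skip
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

g = {
    "a": [("b", 7), ("c", 9), ("f", 14)],
    "b": [("c", 10), ("d", 15)],
    "c": [("d", 11), ("f", 2)],
    "d": [("e", 6)],
    "e": [],
    "f": [("e", 9)],
}
print(dijkstra(g, "a"))
```

Since stale entries are skipped rather than decreased in place, the heap can hold up to $|E|$ entries, which is where the $\log$ factor on dense graphs comes from.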
Cartesian velocity control of robot arm | Question:
I have a 6DOF robot arm, and would like to start driving the end effector around with a joystick. I think the most natural way to do this would be to map my joystick axes to Cartesian velocities of the end effector. I have tried executing very small paths in the desired directions using MoveGroupCommander.compute_cartesian_path(), but I end up getting very jumpy results. Are there any facilities built into MoveIt or otherwise to help me do this?
Originally posted by rand on ROS Answers with karma: 38 on 2015-02-01
Post score: 0
Answer:
I actually found a good solution to this in the code for pr2_teleop_general. The basic idea is to:
Grab the current robot state using RobotCommander.get_current_state()
Call the GetPositionFK service of your move_group to get the end effector position using this state as input.
Scale your Cartesian velocity by a constant and add it onto this position. Call the GetPositionIK service of your move_group to get an inverse kinematics solution for this.
Subtract your current joint angles from the IK joint angles, and multiply by some constant to get a velocity for every joint.
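Stripped of the ROS service calls, the same FK-offset-IK loop can be sketched for a planar 2-link arm with analytic forward and inverse kinematics (all numbers and names here are illustrative, not part of the original answer):

```python
import math

L1 = L2 = 1.0  # link lengths, m

def fk(q1, q2):
    """End-effector position of a planar 2-link arm (stands in for GetPositionFK)."""
    return (L1 * math.cos(q1) + L2 * math.cos(q1 + q2),
            L1 * math.sin(q1) + L2 * math.sin(q1 + q2))

def ik(x, y):
    """Elbow-down analytic inverse kinematics (stands in for GetPositionIK)."""
    d = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    q2 = math.acos(max(-1.0, min(1.0, d)))
    q1 = math.atan2(y, x) - math.atan2(L2 * math.sin(q2), L1 + L2 * math.cos(q2))
    return q1, q2

q = (0.5, 1.0)
vx, vy, dt = 0.1, 0.0, 0.01  # joystick: commanded Cartesian velocity

for _ in range(100):  # 1 second of teleoperation
    x, y = fk(*q)                        # steps 1-2: current pose via FK
    target = (x + vx * dt, y + vy * dt)  # step 3: add scaled velocity
    q_new = ik(*target)                  # step 3: IK for the offset pose
    qdot = [(b - a) / dt for a, b in zip(q, q_new)]  # step 4: joint velocities
    q = tuple(a + v * dt for a, v in zip(q, qdot))   # send to joint controller

x, y = fk(*q)
print(x, y)  # end effector moved ~0.1 m in +x, y unchanged
```

On a real 6DOF arm the IK is numeric and may jump between solution branches, which is one source of the "jumpy" behavior mentioned in the question; keeping the per-cycle offset small helps the solver stay on the same branch.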
Originally posted by rand with karma: 38 on 2015-02-02
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by sangfuu on 2015-04-07:
rand, did this method worked for you or did you find something else?
I am trying to do the same thing with my 7DOF robot. Thanks | {
"domain": "robotics.stackexchange",
"id": 20752,
"tags": "ros, inverse-kinematics, moveit, move-group"
} |
PCL Header to Sensor_msg convertion problem | Question:
Guys,
I have this code snippet (I wrote it):
#include "depth_image_proc.h"
#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <pcl_ros/point_cloud.h>
#include <pcl/point_types.h>
#include <sensor_msgs/image_encodings.h>
#include <image_geometry/pinhole_camera_model.h>
#include <sensor_msgs/PointCloud2.h>
generatePCL::generatePCL(const sensor_msgs::ImageConstPtr& depth_msg,
const sensor_msgs::CameraInfoConstPtr& info_m)
{
cloud_msg.header = depth_msg->header;
cloud_msg.height = depth_msg->height;
cloud_msg.width = depth_msg->width;
cloud_msg.is_dense = false;
cloud_msg.points.resize(cloud_msg.height * cloud_msg.width);
convert(depth_msg);
}
Where cloud_msg is defined as:
pcl::PointCloud<pcl::PointXYZ> cloud_msg;
But after I run the code I get this error:
depth_image_proc.cpp: In constructor ‘generatePCL::generatePCL(const ImageConstPtr&, const CameraInfoConstPtr&)’:
depth_image_proc.cpp:16:19: error: no match for ‘operator=’ (operand types are ‘pcl::PCLHeader’ and ‘const _header_type {aka const std_msgs::Header_<std::allocator<void> >}’)
cloud_msg.header = depth_msg->header;
^
Is there a library I am missing? Or a new format?
This code was running in Fuerte, and now I am trying to make it run in Indigo.
Originally posted by Pototo on ROS Answers with karma: 803 on 2015-01-21
Post score: 0
Answer:
There were incompatible changes to the way ROS uses PCL between Groovy and Hydro.
Please consult the Hydro migration guide for details.
EDIT: since Groovy there is a new pcl_conversions package that may help with your problem. The API doc is here.
Originally posted by joq with karma: 25443 on 2015-01-21
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Pototo on 2015-01-21:
I still can't find the way to convert sensor_msgs::ImageConstPtr& to pcl::PointCloud<pcl::PointXYZ>.
Every function I use (such as pcl::toPCL) always gives me an error that there is no matching function.
"domain": "robotics.stackexchange",
"id": 20646,
"tags": "c++, pcl, camera, ros-fuerte, camera-depth-points"
} |
Streaming H264 video from PiCamera to a JavaFX ImageView | Question: I'm currently working on a robotics application where a video feed is being displayed from a Raspberry Pi 3.
I've been working on a way to stream the video directly into JavaFX (the rest of the UI is created in this); however, my knowledge of video streaming is very limited. The goal for the video system is to maintain decent video quality and FPS while reducing latency as much as possible (looking for sub 100 ms). H264 video was chosen as the format for its speed, but I hear that sending raw video could be faster as there is no compression (I could not get raw video to work well at all).
Running my code I am capable of streaming a Pi camera at about 120-130ms of latency and ~48 frames per second. I would like to continue to reduce the latency of this application, and would like to make sure that I'm making decisions for the correct reasons.
The largest issue I have so far is start-up time; it takes about 15-20 seconds for the video to initially launch and catch up to the latest frame.
The following code is an MCVE of the video system. If anyone is interested in reproducing this, you can get it running on a Raspberry Pi (mine is a Raspberry Pi 3) with python-picamera installed. You'll also need a Java Client with JavaCV installed. My version info is org.bytedeco:javacv-platform:1.3.2.
Python side:
We decided to use a Python library to control the video stream because it provides a nice wrapper around the picamera command-line tool. The output from the video is being sent over a TCP connection and will be received by a Java client. (The way we remotely launch this application has been left out of the review because I just wanted this post to focus on the video aspects)
import picamera
import socket
import signal
import sys
with picamera.PiCamera() as camera:
camera.resolution = (1296, 720)
camera.framerate = 48
soc = socket.socket()
soc.connect((sys.argv[1], int(sys.argv[2])))
file = soc.makefile('wb')
try:
camera.start_recording(
file,
format='h264',
intra_period=0,
quality=0,
bitrate=25000000)
while True:
signal.pause()
finally:
file.close()
soc.close()
Why did I choose these values?:
camera.resolution = (1296, 720), camera.framerate = 48 were the largest images I could output at a frame-rate fast enough to reduce latency.
intra_period=0 Wanted the images to remain small, and by setting the intra_period to zero, no I frames/full frames (apart from the first frame) will be sent; reducing the time between frames
quality=0 from the docstring: Quality 0 is special and seems to be a "reasonable quality" default
bitrate=25000000 Wanted to set the bitrate as high as possible to not slow down video transfer when lots of changes in the frames (when P frames/partial frames become large)
Java side:
The Java decoder was written using JavaCV and sends the TCP H264 stream into an FFmpegFrameGrabber. The decoder then converts the Frame into a BufferedImage, and then into a WritableImage for JavaFX.
public class FFmpegFXImageDecoder {
private FFmpegFXImageDecoder() { }
public static void streamToImageView(
final ImageView view,
final int port,
final int socketBacklog,
final String format,
final double frameRate,
final int bitrate,
final String preset,
final int numBuffers
) {
try (final ServerSocket server = new ServerSocket(port, socketBacklog);
final Socket clientSocket = server.accept();
final FrameGrabber grabber = new FFmpegFrameGrabber(
clientSocket.getInputStream());
) {
final Java2DFrameConverter converter = new Java2DFrameConverter();
grabber.setFrameRate(frameRate);
grabber.setFormat(format);
grabber.setVideoBitrate(bitrate);
grabber.setVideoOption("preset", preset);
grabber.setNumBuffers(numBuffers);
grabber.start();
while (!Thread.interrupted()) {
final Frame frame = grabber.grab();
if (frame != null) {
final BufferedImage bufferedImage = converter.convert(frame);
if (bufferedImage != null) {
Platform.runLater(() ->
view.setImage(SwingFXUtils.toFXImage(bufferedImage, null)));
}
}
}
}
catch (final IOException e) {
e.printStackTrace();
}
}
}
This can then be placed into a JavaFX view like below:
public class TestApplication extends Application {
static final int WIDTH = 1296;
static final int HEIGHT = 720;
@Override
public void start(final Stage primaryStage) throws Exception {
final ImageView imageView = new ImageView();
final BorderPane borderPane = new BorderPane();
imageView.fitHeightProperty().bind(borderPane.widthProperty()
.divide(WIDTH).multiply(HEIGHT));
imageView.fitWidthProperty().bind(borderPane.widthProperty());
borderPane.setPrefSize(WIDTH, HEIGHT);
borderPane.setCenter(imageView);
final Scene scene = new Scene(borderPane);
primaryStage.setScene(scene);
primaryStage.show();
new Thread(() -> FFmpegFXImageDecoder.streamToImageView(
imageView, 12345, 100, "h264", 96, 25000000, "ultrafast", 0)
).start();
}
}
Why did I choose these values?:
frameRate=96 Wanted the framerate of the Client to be twice the speed of the stream such that I'm not waiting on frames
bitrate=25000000 to match the stream
VideoOption preset="ultrafast" To try and reduce the startup time for the stream.
Final Questions:
What are some ways I improve the latency of this system?
How can I reduce the start-up time of this stream? It currently takes about 15 seconds to launch and catch up.
Are the parameters chosen for JavaCV and PiCamera logical? Is my understanding of them correct?
Answer: Architectural Ideas
Let's start with the architecture of your application and the data transfer.
There's basically two places where we can optimize the performance of your application. I'm ignoring latency for now, since that is mostly determined by the network and the performance of the image processing.
This means if we can improve the speed of image processing, latency will also go down. That's the first component.
The second component is to make the network transfer less vulnerable to overhead and stalling. A great recap of TCP versus UDP is this joelonsoftware article.
Taking the information there into account, I'd think you're better off sending your video over UDP.
Performance review
It's problematic to require a full rendering of an image to update the image-view. Most realtime rendering applications use the idea of a frame-/backbuffer. What happens there is the following:
An image is rendered to the backbuffer.
When it's finished, the framebuffer and the backbuffer are swapped
The next image is rendered to the backbuffer while the framebuffer is displayed.
As far as I can tell, you're missing out on a lot of performance by ineffectively handling how data is passed between network and image-view. It'd help to see what Java2DFrameConverter does exactly.
Code style review
There's a few things that struck me as odd in your code. The following is a review without taking the performance into account directly:
streamToImageView takes a lot of arguments. You can drastically reduce their number by partially applying them outside of the method. Additionally the converter can be a static field, though I can understand if you want to have an instance for every invocation of the method.
This might also be the place for the backbuffer idea, since you can reuse Image instances when rendering. I'm not sure, but you might be able to just set the Image to the imageView once and then reuse the already set instance.
The method then looks like this:
public static void streamImageToView(ImageView view, int port, int socketBacklog, Consumer<FrameGrabber> grabberSettings) {
try (final ServerSocket server = new ServerSocket(port, socketBacklog);
final Socket clientSocket = server.accept();
final FrameGrabber grabber = new FFmpegFrameGrabber(clientSocket.getInputStream());
) {
grabberSettings.accept(grabber);
grabber.start();
while (!Thread.interrupted()) {
final Frame frame = grabber.grab();
if (frame == null) {
continue;
}
final BufferedImage bufferedImage = converter.convert(frame);
if (bufferedImage != null) {
Platform.runLater(() -> {
SwingFXUtils.toFXImage(bufferedImage, (WritableImage) view.getImage());
view.setImage(view.getImage());
// might not be required. Forces repaint
});
}
}
} catch (IOException ex) {
// same as before
}
}
I like very much that you're making all the variables final wherever possible.
FWIW I leave the partial application of the streamToImageView arguments to you as an exercise ;) | {
"domain": "codereview.stackexchange",
"id": 29104,
"tags": "java, python, raspberry-pi, video, opencv"
} |
Differences in the behaviour of pinching a garden hose and closing a tap | Question: Let's say you have a garden hose connected to an ordinary water tap which is opened fully. If you pinch the end of the hose, water leaves the hose at a higher speed (and this can be useful while watering plants, to reach pots which are further away). However when a tap (with no hose connected) is opened only slightly, water flows out at a low speed, possibly even in drops.
The actions of pinching the end of a hose and of almost-closing an open tap seem similar, so why the difference in behaviour?
Answer: This diagram shows the difference between closing the tap and pinchng the end of the hose:
In both cases you are reducing the area the water has to flow through, and this increases the water velocity in the constriction. The upper diagram shows what happens when you close the tap. Closing the tap increases the velocity of the water at the constriction, but as soon as the water is past the constriction is slows down again and it emerges from the end of the hosepipe with a relatively low velocity.
The lower diagram shows what happens when you pinch the end of the pipe. The constriction increases the velocity of the water but because the constriction is right at the end the water doesn't have a chance to slow down again so it leaves the end of the pipe with a relatively high velocity. | {
"domain": "physics.stackexchange",
"id": 71867,
"tags": "fluid-dynamics"
} |
Intuitive explanation of subspace | Question: There are many techniques in signal processing that use eigen analysis (MUSIC, SVD, eigen decomposition, etc) that result in signal and noise subspaces.The mathematical definitions for signal subspaces are abundant, but what is the intuitive, tangible explanation of what a subspace is representing? More importantly, how does one interpret the values of a subspace? What exactly does this result provide and what is an example of how one would use it? Nearly any topic I can think of in signal processing has very intuitive explanations of complex topics - but I've yet to see a good one for subspaces.
EDIT: The crux of the question is, what is the intuitive explanation of subspace as it applies to signal processing algorithms and applications (i.e., not the linear algebra explanation)?
Answer: TL;DR: Subspaces are low-dimensional, linear portions of the entire signal space that are expected to contain (or be close to) a large part of the observable and useful signals or transformations thereof, with additional tools that allow us to compute interesting things on the data
We are given a set of data. To manipulate them more easily, it is common to embed them, or represent them, in a well-adapted mathematical structure (from the many structures we have in algebra or geometry), to perform operations, prove things, develop algorithms, etc. For instance, in channel coding, group or ring structures can be better adapted. In a domain called mathematical morphology, one uses lattices.
Here, for standard signals or images, we often suppose a linear structure: signals can be weighted, added: $\alpha x+ \beta y$. This is the base for linear systems, like traditional windowing, filtering (convolution), differentiating, etc.
So, a mathematical structure of choice is the vector space: vector spaces equipped with tools, a dot product (that can be used to compare data) and a norm (to measure distances). These tools help us compute. Indeed, energy minimization and linearity are strongly related.
Then, a record of $N$ samples naturally lives in the classical linear space of dimension $N$. It is quite big (think of million-pixel images). It contains an awful lot of other "uninteresting" data: any $N$-dimensional "random" vector. Most of them are and will never be observed, have no meaning, etc.
The reasonable quantity of signals that you can record, up to variations, is very small relative to the big space.
Even more, we are often interested in structured information. So if you subtract noise effects, unimportant variations, the proportion of useful signals is very very tiny within the whole potential signal space.
One very useful hypothesis (heuristic, to help discover) is that those interesting signals live close together, or at least along regions of the space that "make sense". An example: suppose that some extraterrestrial intelligence has no other detection system than a very precise dog detector. They will get, across the Solar system, almost nothing, except many points located on something vaguely looking like a sphere, with large empty spaces (oceans), and sometimes very concentrated (urban areas). And the point cloud moves around a center, with a constant periodicity, and rotating on itself. Those aliens have discovered something!
Anyway, the partial-sphere looking point cloud is interpretable... maybe a planet?
So, our dog point cloud could have been fully 3D, but they are concentrated on a 2D surface (lower dimension), that seems relatively regular (in altitude) and smooth: most dogs live at intermediate altitudes.
These smooth low-dimensional parts of space are sometimes called smooth manifolds or varieties. Their structure and operators allow to compute things. For instance: distances, distributions, etc. Inter-dog distances make more sense when computed along the Earth surface (in spherical 2D coordinates) than directly through the planet with the standard 3D norm! But this can still be complicated to deal with. Let us simplify this a bit more.
Looking a little closer, the dog points are almost located on close-to-flat surfaces: countries, even continents. Those flat surfaces are portions of linear (or affine) subspaces. Still, you can now compute inter-dog distance, more easily, and design an algorithm for dog matching that will make you rich.
The story continues a bit. Sometimes, natural data does not assemble around a clear structure, directly. Unveiling this inherent structure is at the core of DSP. To help us in this direction, we can resort to data transformations to concentrate them better (Fourier, time-frequency, wavelets), filtering.
And if we find a suitable subspace, most algorithms become simpler, more tractable, and so on: adaptive filtering, denoising, matching.
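As a concrete sketch of this idea (my own illustration, not an algorithm from the answer): if the useful signals live in a low-dimensional linear subspace, projecting noisy observations onto the dominant singular subspace of a data matrix removes most of the noise, because the noise spreads over all dimensions while the signal concentrates in a few.

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 observations in N = 64 dimensions that really live in a 3-D subspace,
# plus additive noise (all parameters are illustrative).
N, n_obs, r = 64, 200, 3
basis, _ = np.linalg.qr(rng.standard_normal((N, r)))   # orthonormal signal subspace
clean = basis @ rng.standard_normal((r, n_obs))
noisy = clean + 0.1 * rng.standard_normal((N, n_obs))

# Estimate the signal subspace from the r dominant left singular vectors...
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
P = U[:, :r] @ U[:, :r].T          # projector onto the estimated subspace

# ...and project: noise outside the subspace is discarded.
denoised = P @ noisy
err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)
print(err_denoised < err_noisy)    # True
```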
[ADDITION] A typical use is the following: a signal can be better concentrated with a well-chosen orthogonal transform. In the meantime, a zero-mean random Gaussian noise remains Gaussian under an orthogonal transformation. Typically, the covariance matrix can be diagonalized. If you sort the eigenvalues in decreasing order, the smallest ones tend to flatten out (they correspond to noise), and the highest more or less correspond to the signal. Hence, by thresholding the eigenvalues, it becomes possible to remove the noise. | {
"domain": "dsp.stackexchange",
"id": 9157,
"tags": "music, eigendecomposition"
} |
catkin src/* vcs update | Question:
Hey everyone,
I'm probably missing something here...
Is there/What is the equivalent of rosbuild's rosws update for catkin
in the sense of updating all repository checkouts in the source folder simultaneously?
Originally posted by v4hn on ROS Answers with karma: 2950 on 2013-05-02
Post score: 1
Answer:
Use wstool with catkin workspaces.
Use rosws with rosbuild workspaces.
The two types of workspaces and tools are completely orthogonal.
The equivalent to rosws update is wstool -t path/to/src update.
Originally posted by William with karma: 17335 on 2013-05-02
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by v4hn on 2013-05-02:
Ah, thank you for the quick answer.
Seems like I overlooked it here [http://ros.org/wiki/catkin/Tutorials/workspace_overlaying]. | {
"domain": "robotics.stackexchange",
"id": 14036,
"tags": "ros, catkin, packages, update"
} |
In general which laser would cut clear glass more easily, a visible spectrum laser or an infrared laser? | Question: Assuming the two lasers have identical characteristics and power output except their wavelength, which one would be more successful and easy in cutting clear glass?
In case there is doubt that you can even cut clear glass with a laser, see here these two video demonstrations:
Visible spectrum blue laser cutting glass
Infrared laser cutting glass
Secondly, what is the refractive index of glass at infrared wavelengths (~1.52 at visible white light)?
Answer: You don't define what it means to cut more easily, but I will address how to select a wavelength to cut faster, and how to select a wavelength for more precise cuts (although this precision likely does not matter for the vast majority of laser cutting applications).
For lasers that differ only in wavelength, you would typically want to select the wavelength with the highest absorption in the material you want to cut, if everything else were equal. Photons that are reflected by or transmitted through the material won't transfer their energy to the material you are cutting and therefore won't contribute to the cut. This way you maximize cutting speed. Obviously other considerations come into play with practical lasers, with laser power being a big one.
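To make the absorption argument quantitative, the Beer-Lambert law gives the transmitted fraction through thickness $d$ as $e^{-\alpha d}$ for absorption coefficient $\alpha$, so the deposited fraction is roughly $1 - e^{-\alpha d}$ (neglecting surface reflection). The coefficients below are hypothetical illustrative values, not measured properties of any real glass:

```python
import math

def fraction_absorbed(alpha_per_mm, thickness_mm):
    """Beer-Lambert estimate of the fraction of incident power absorbed
    in the material (surface reflection neglected)."""
    return 1.0 - math.exp(-alpha_per_mm * thickness_mm)

# Hypothetical absorption coefficients for a 3 mm pane:
weak = fraction_absorbed(0.01, 3.0)   # weakly absorbed (visible-like) beam
strong = fraction_absorbed(2.0, 3.0)  # strongly absorbed (mid-IR-like) beam
print(weak, strong)                   # roughly 3% vs roughly 99.8%
```

With identical output power, the strongly absorbed beam deposits nearly all of its energy in the pane, while the weakly absorbed one mostly passes through.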
If you want to do extremely high precision cutting (micro/nanoscale), you will tend to want a shorter wavelength. Lasers with a shorter wavelength can be focused to a smaller spot size, increasing precision. This is only important at micro or nanoscale cutting.
There are other ways a laser can cut a material than by direct absorption in the material to cut, but I will ignore those for this answer since you have control of the laser wavelength. | {
"domain": "physics.stackexchange",
"id": 84673,
"tags": "optics, condensed-matter, laser, infrared-radiation, glass"
} |
rosbag file freezes while playing | Question:
Hey,
I want to work with an uncompressed 3.9 GB bag file (72 s long).
The file is played with the --clock option and the use_sim_time parameter is set to true.
When I just start a roscore and play back the file, it's not freezing; that is, the time at the display is running till the end without lags.
The messages are published as they should be, but if I now start another node, rviz for example, the time at the display freezes, even after rviz is started, or even if I first start rviz and then start the bag file.
And the messages are no longer published; even the clock topic doesn't publish messages any more most of the time. Randomly it publishes a few messages.
Did anyone have the same problem? Or do you have any idea what the problem could be?
I am working with ros-electric (the latest version) and Ubuntu 10.04.
I also tried a smaller bag file (380 MB) and got smaller lags. Do you think the problem is only that the bag file is too big?
Here ist the rosbag info output:
version: 2.0
duration: 1:12s (72s)
start: Jul 28 2011 20:13:19.65 (1311876799.65)
end: Jul 28 2011 20:14:32.63 (1311876872.63)
size: 3.9 GB
messages: 141043
compression: none [3643/3643 chunks]
types: nav_msgs/Odometry [cd5e73d190d741a2f92e81eda573aca7]
sensor_msgs/CameraInfo [c9a58c1b0b154e0e6da7578cb991d214]
sensor_msgs/Image [060021388200f6f0f447d0fcd9c64743]
sensor_msgs/Imu [6a62c6daae103f4ff57a132d6f95cec2]
sensor_msgs/LaserScan [90c7ef2dc6895d81024acba2ac42f369]
std_msgs/Float64 [fdb28210bfa9d7c91146260178d9a584]
stereo_msgs/DisparityImage [04a177815f75271039fa21f16acad8c9]
tf/tfMessage [94810edda583a504dfda3829e70d7eec]
visualization_msgs/MarkerArray [f10fe193d6fac1bf68fad5d31da421a7]
topics: /camera/depth/camera_info 1209 msgs : sensor_msgs/CameraInfo
/camera/depth/disparity 1208 msgs : stereo_msgs/DisparityImage
/camera/depth/image 1209 msgs : sensor_msgs/Image
/camera/rgb/camera_info 1225 msgs : sensor_msgs/CameraInfo
/camera/rgb/image_color 1225 msgs : sensor_msgs/Image
/cortex_marker_array 21853 msgs : visualization_msgs/MarkerArray
/cur_tilt_angle 35823 msgs : std_msgs/Float64
/imu 35825 msgs : sensor_msgs/Imu
/pose 723 msgs : nav_msgs/Odometry
/scan 679 msgs : sensor_msgs/LaserScan
/tf 40064 msgs : tf/tfMessage
Originally posted by Runje on ROS Answers with karma: 41 on 2011-10-12
Post score: 4
Answer:
I've had this happen on larger bags, if the bags have lots of data throughput. It looks like the bag is a normal size, but depending on your hardware, maybe that's the issue.
Try setting the rosbag playback rate down and see if it plays.
Originally posted by phil0stine with karma: 682 on 2012-02-28
This answer was ACCEPTED on the original site
Post score: 5
Original comments
Comment by Mac on 2012-02-28:
I second this. The bagfile itself isn't too big; I've worked with much larger bagfiles without a problem. On the other hand, it looks like your bagfile is fairly data-intensive (three image topics), and (particularly if the frame rate is high) that's going to hurt. Setting -r0.5 or 0.25 might help. | {
"domain": "robotics.stackexchange",
"id": 6945,
"tags": "rosbag"
} |
Finding the Pythagorean triplet that sums to 1000 | Question: Project Euler problem 9 says:
A Pythagorean triplet is a set of three natural numbers, a < b < c,
for which,
a^2 + b^2 = c^2
For example, 3^2 + 4^2 = 9 + 16 = 25 = 5^2.
There exists exactly one Pythagorean triplet for which a + b + c =
1000. Find the product abc.
I am a beginner in programming, especially Python. I got the answer for the problem, but how can I optimize it?
a=0
b=a+1
exit=False
#a loop
for x in xrange(1000):
    a+=1
    b=a+1
    c=b+1
    #b loop
    for y in xrange(1000):
        b+=1
        if(1000-b-a < 0):
            break
        c = 1000-a-b
        if(a**2 + b**2 == c**2):
            print "The number is ", a*b*c
            x=1000
            exit = True
        else:
            print "A is ", a ,".\n B is ", b, ".\n C is ",c,"."
        if exit:
            break
    if exit:
        break
Answer: Well, you got the answer, but your code could be tidier. The main weaknesses I see are:
Too many variables: The problem calls for numbers a, b, and c. Introducing variables x and y needlessly complicates the code. It's always the case that a = x + 1, so x is redundant. Then, with y (but not really using y), you loop up to 1000 times with b starting from a + 1.
Your loop structure can be distilled down to…
for a in xrange(1, 1001):
    for b in xrange(a + 1, a + 1001):
        if a + b > 1000:
            break
        c = 1000 - a - b
        # etc.
However, the if-break could still be improved upon — see below.
Using variables for flow control: It's cumbersome to set an exit flag, then have code elsewhere inspect the flag. There are more assertive techniques to go where you want to go immediately. You want to break out from a nested loop, so search Stack Overflow and read this advice.
As previously mentioned, there is an even better way to enumerate possibilities for a, b, and c such that their sum is always 1000. That way, you don't need the if-break in your code.
def euler9():
    # Ensure a < c
    for c in xrange(2, 1000):
        for a in xrange(1, c):
            # Ensure a + b + c == 1000. Since a is counting up, the first
            # answer we find should have a <= b.
            b = 1000 - c - a
            # Ensure Pythagorean triple
            if a**2 + b**2 == c**2:
                print("a = %d, b = %d, c = %d. abc = %d" % (a, b, c, a * b * c))
                return
euler9()
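For Python 3 readers (where xrange is gone and print is a function), the same search might be written as follows; this adaptation is mine, not part of the original answer:

```python
def euler9():
    """Return a*b*c for the unique Pythagorean triple with a + b + c == 1000."""
    for a in range(1, 1000):
        for c in range(a + 2, 1000 - a):
            b = 1000 - a - c  # force the sum to be 1000
            # a < b < c plus the Pythagorean identity pin down the triple
            if a < b < c and a * a + b * b == c * c:
                return a * b * c
    return None

print(euler9())  # 31875000
```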
Alternatively:
def euler9():
    for a in xrange(1, 1000):
        for b in xrange(a + 1, 1000 - a):
            # Ensure a + b + c == 1000. Since b is counting up, the first
            # answer we find should have b <= c.
            c = 1000 - a - b
            # etc. | {
"domain": "codereview.stackexchange",
"id": 5351,
"tags": "python, optimization, beginner, project-euler, mathematics"
} |
What are the necessary and sufficient conditions for a wavefunction to be physically possible? | Question: Often times it is stated in books that a quantum state is physically realizable only if it is square integrable. For example, in Griffiths (2018 edition) page 14 he stated
Physically realizable states correspond to the
square-integrable solutions to Schrödinger’s equation.
But when we have an operator with continuous eigenvalues like the position operator $\hat{X}$ our eigenstates are not square-integrable. Like the eigenfunction for the eigenvalue $x'$ of $\hat{X}$ is $\delta(x-x')$ which is not square-integrable and we also know -
$$ \langle x|x'\rangle=\delta(x-x')\Rightarrow \langle x|x\rangle=\infty$$
But clearly $|x\rangle$ is a physical state, as the wavefunction collapses to it after measurement. So the definition by Griffiths is not completely correct. I came across Rigged Hilbert spaces while searching this site, but I am still not completely sure whether all physical states lie in a Rigged Hilbert space, or whether all states in a Rigged Hilbert space are physical. So, my question is: what are the necessary and sufficient conditions for a wavefunction to be physically possible?
Answer: If you want to use the theory of probability, a necessary condition for a wavefunction to be physically meaningful is $$\psi \in L^2(\mathbb{R}^3,d^3x)\:.$$ That is because, as a basic postulate of QM, we have that:
$\qquad\qquad\qquad\qquad$ $|\psi(x)|^2$ is the probability density to find the particle at $x$,
and the total probability must be $1$:
$$\int_{\mathbb{R}^3} |\psi(x)|^2 d^3x =1 <+\infty$$
(Values different from $1$ can then be obtained by considering non-normalized wavefunctions.)
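As a quick numerical illustration of the normalization condition (a 1D sketch of my own, not part of the answer), the Gaussian $\psi(x) = \pi^{-1/4} e^{-x^2/2}$ integrates to total probability $1$:

```python
import numpy as np

# 1D analogue of the square-integrability / normalization condition.
x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]
psi = np.pi ** -0.25 * np.exp(-x**2 / 2)   # Gaussian wavefunction

total_probability = np.sum(np.abs(psi) ** 2) * dx   # Riemann sum of |psi|^2
print(total_probability)  # ~1.0
```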
That this condition is also sufficient is a much more delicate issue which largely depends on the physical hypotheses you assume on realizable pure states.
In principle all vectors
$\psi \in L^2(\mathbb{R}^3,d^3x)$ are admitted.
No continuity or differentiability requirements make sense, even if it is sometimes erroneously stated. That is because all observables, as another basic postulate of QM, are self-adjoint operators and no differential operator is self-adjoint: in fact, one should deal with self-adjoint extensions of these differential operators, whose domains are made of non-differentiable functions, generally speaking.
Rigged Hilbert spaces, i.e., the rigorous version of the (fantastic!) Dirac formalism elaborated by Gelfand and coworkers, have to be considered a mere formal/mathematical tool.
In particular, distributions such as $\delta(x-x')$ are not physically meaningful, as they do not satisfy the condition to be elements of $L^2$. | {
"domain": "physics.stackexchange",
"id": 76301,
"tags": "quantum-mechanics, hilbert-space, wavefunction, schroedinger-equation, quantum-states"
} |
Segmentation Fault in tf::MessageFilter | Question:
Program received signal SIGSEGV, Segmentation fault.
0x00007ffff6536f2b in std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(std::string const&) () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
line:
laser_notifier_ = new tf::MessageFilter<sensor_msgs::LaserScan>(laser_sub_,listener_, target_frame_, 10);
Backtrace
---------
0 0x00007ffff6536f2b in std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(std::string const&) () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
1 0x0000000000442f60 in getTFPrefix (this=<optimized out>)
at /opt/ros/fuerte/stacks/geometry/tf/include/tf/tf.h:329
2 tf::MessageFilter<sensor_msgs::LaserScan_<std::allocator<void> > >::setTargetFrames (this=0x7fffffffc998,
target_frames=...) at /opt/ros/fuerte/stacks/geometry/tf/include/tf/message_filter.h:211
3 0x0000000000451c91 in setTargetFrame (target_frame=..., this=0x7fffffffc998)
at /opt/ros/fuerte/stacks/geometry/tf/include/tf/message_filter.h:195
4 MessageFilter<message_filters::Subscriber<sensor_msgs::LaserScan_<std::allocator<void> > > > (max_rate=...,
nh=<error reading variable: access outside bounds of object referenced via synthetic pointer>, queue_size=10,
target_frame=..., tf=..., f=..., this=0x7fffffffc998)
at /opt/ros/fuerte/stacks/geometry/tf/include/tf/message_filter.h:156
5 LaserLineExtractor::LaserLineExtractor (this=0x7fffffffc3b0, n=..., base_link=..., odom_link=...)
at src/lines.cpp:148
6 0x0000000000430b87 in main (argc=1, argv=<optimized out>)
at src/lines.cpp:885
It seems as though tf_prefix_ is not initialized properly which is causing this segfault.
Ros version - fuerte
Distributor ID: Ubuntu
Description: Ubuntu 12.04.2 LTS
Release: 12.04
Codename: precise
uname -a:
Linux desktop 3.0.0-32-generic #50-Ubuntu SMP Thu Feb 28 22:32:30 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
Originally posted by itzsid on ROS Answers with karma: 1 on 2013-03-30
Post score: 0
Answer:
If this is still an issue. Please create a minimal demonstration and submit an issue on github: https://github.com/ros/geometry/issues
Originally posted by tfoote with karma: 58457 on 2015-01-18
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 13610,
"tags": "ros"
} |
Children's Mini Game Hand-Eye Coordination | Question: This is a game for children 3 years+ to learn hand-eye coordination with mouse movement.
Goal is to catch all dinosaurs in the jungle, get the egg reward and then repeat the game.
I tried to add sounds, but unfortunately Chrome's rules say that if the user doesn't interact with the page (i.e. click, etc.), no sound playback is allowed, so I wasn't able to play a hover sound for collecting a dino.
Same goes for background music. Getting an initially hidden button to press for restarting the game also didn't work, so I did a delayed 8-second reload.
Maybe I should do it in PHP?
For best experience press F11 for Browser Full Screen :D
With Pics
https://github.com/CodeLegend27/Hand-Eye-Coordination-Child-3y-plus
Pure Code
https://codepen.io/CodeLegend27/pen/RwRRGNa
Kind regards
HTML
<!DOCTYPE html>
<html>
<head>
<meta charset='utf-8'>
<meta http-equiv='X-UA-Compatible' content='IE=edge'>
<title>Child Game</title>
<script src="https://code.jquery.com/jquery-3.5.1.min.js" integrity="sha256-9/aliU8dGd2tb6OSsuzixeV4y/faTqgFtohetphbbj0=" crossorigin="anonymous"></script>
<meta name='viewport' content='width=device-width, initial-scale=1'>
<link rel='stylesheet' type='text/css' media='screen' href='dist/main.css'>
<script src='main.js'></script>
</head>
<body style="overflow: hidden;">
<div id="egg" style="display: none;">
</div>
<div id="egg" style="display: none;">
</div>
<img id="gg" src="img/net-3850048_960_720.png" style="position:absolute; width: 50px; overflow: hidden;">
<script>
$(document).mousemove(function(e){
$("#gg").css({left:e.pageX, top:e.pageY});
});
</script>
<div class="main">
<div class="row">
<div class="square"></div>
<div class="square"></div>
<div class="square"></div>
<div class="square"></div>
<div class="square"><p>😀</p></div>
<div class="square"></div>
<div class="square"></div>
<div class="square"></div>
<div class="square"></div>
<div class="square"></div>
</div>
<div class="row">
<div class="square"></div>
<div class="square"></div>
<div class="square"></div>
<div class="square"></div>
<div class="square"></div>
<div class="square"></div>
<div class="square"></div>
<div class="square"></div>
<div class="square"></div>
<div class="square"></div>
</div>
<div class="row">
<div class="square"></div>
<div class="square"></div>
<div class="square"></div>
<div class="square"></div>
<div class="square"></div>
<div class="square"></div>
<div class="square"></div>
<div class="square"></div>
<div class="square"></div>
<div class="square"></div>
</div>
<div class="row">
<div class="square"></div>
<div class="square"></div>
<div class="square"></div>
<div class="square"></div>
<div class="square"></div>
<div class="square"></div>
<div class="square"></div>
<div class="square"></div>
<div class="square"></div>
<div class="square"></div>
</div>
<div class="row">
<div class="square"></div>
<div class="square"></div>
<div class="square"></div>
<div class="square"></div>
<div class="square"></div>
<div class="square"></div>
<div class="square"></div>
<div class="square"></div>
<div class="square"></div>
<div class="square"></div>
</div>
</div>
</body>
</html>
CSS
.main {
height: 100vh;
width: 100%;
}
.row {
display: flex;
}
.square {
display: flex;
width: 10vw;
height: 20vh;
align-items: center;
justify-content: center;
}
p {
font-size: 15em;
height: 100%;
cursor: pointer;
width: 100%;
display: none;
}
body {
cursor: url("../img/net-3850048_960_720.png"), auto;
height: 100vh;
/* background: rgb(18, 4, 250);
background: linear-gradient(180deg, rgba(18, 4, 250, 1) 0%, rgba(0, 212, 255, 1) 100%); */
background-image: url("../img/jungle-4003374_1920.jpg");
background-size: cover;
}
* {
margin: 0;
padding: 0;
}
#egg {
position: absolute;
bottom: 30px;
display: inline-block;
width: 126px;
height: 60px;
padding-top: 120px;
text-align: center;
text-shadow: 1px 1px rgba(250, 249, 246, 0.9);
color: rgba(168, 168, 168, 0.8);
font-weight: 900;
border-radius: 63px 63px 63px 63px/108px 108px 72px 72px;
background: radial-gradient(75% 100% at 63px 15px, #f7f7f2 0%, #eaeae3 30%, #b1ac9d 100%);
box-shadow: 0px 3px 18px -4px rgba(40, 40, 40, 0.9);
margin: 20px;
margin-right: 45px;
transition: all 0.2s linear 0;
-webkit-transition: 0.2s;
-webkit-transition-timing-function: linear;
-webkit-animation: anime_egg_move 3s infinite;
-webkit-animation-timing-function: linear;
-webkit-animation-direction: alternate;
}
@-webkit-keyframes anime_egg_move {
to {
-webkit-transform: rotate(360deg);
margin-left: 700px;
}
}
JS
//
if( /Android|webOS|iPhone|iPad|iPod|BlackBerry|IEMobile|Opera Mini/i.test(navigator.userAgent) ) {
alert("Please Play on Desktop PC")
}
// IMG ARRAY
var images = ['dino8.png', 'dino1.png', 'dino2.png','dino3.png', 'dino4.png', 'dino5.png', 'dino6.png', 'dino7.png'];
$( document ).ready(function() {
// POPULATE SITE WITH IMAGES -- Randomly, Random size
$('.square').each(function() {
var randNumb = Math.floor(Math.random() * 71) + 30;
var rand = Math.floor(Math.random() * images.length);
$(this).append('<img class="dino" src="img/' + images[rand] + '"/ width="'+ randNumb +'%" height="'+randNumb+'%">');
});
// COUNT TILL GAME FINISH, Show Egg and smiley, RELOAD PAGE
let x = 0;
$('.dino').hover(function(){
$(this).remove();
x++
if (x == 50){
$("#egg").css("display", "block");
$("p").css("display", "block");
setTimeout(location.reload.bind(location), 8000);
}
console.log(x);
});
});
Answer: Indentation Much of the indentation in the HTML, CSS, and JS is inconsistent, which makes the code moderately harder to read than would be ideal. Consider using an IDE which automatically formats code properly so things can be made readable without having to mess with it manually.
Duplicate IDs are invalid HTML You have two elements with div id="egg", which isn't permitted in HTML. The $("#egg") will only select the first one, because jQuery is only expecting one such element to exist in the document. Maybe just completely remove the second one?
gg? You have img id="gg". It's not entirely clear what this refers to just from reading the markup. I initially thought it might be a reference to "good game" and would appear after the game is over, but it's not, it's an image that follows the mouse. Maybe call it mouseImage instead, or something like that?
Or, you could set the whole cursor to the image via CSS, which might be even better, allowing you to remove that element entirely:
cursor:url(/img/net-3850048_960_720.png), auto;
Smiley face? You render the smiley by putting it inside a square:
<div class="square"><p>😀</p></div>
and by hiding the <p> until the game is over. This is very confusing; I'd expect the squares to contain only the dino pictures. Something completely unrelated should be in a separate element, outside of the <div class="row">s. It's not at all clear what the purpose of the HTML entity is either - consider using an <img> instead so it can be understood at a glance.
<img src="smiley.png">
$('img[src="smiley.png"]').css('display', 'block');
Inline CSS You have a number of inline styles, like
<body style="overflow: hidden;">
<div id="egg" style="display: none;">
</div>
Best to separate concerns; better to put them all in one place, in the CSS file, so that you can then do
<body>
<div id="egg">
</div>
in the HTML.
Refuse to run on mobile Since hover events are required for the app to work, rather than just alerting that the app should be run on a desktop, consider not running the app at all if mobile is detected. For example, you could do something like:
if (/Android|webOS|iPhone|iPad|iPod|BlackBerry|IEMobile|Opera Mini/i.test(navigator.userAgent)) {
    $('.error').text("Please Play on Desktop PC");
} else {
    main();
}
where main runs the main script.
As shown above, better to display messages to the user in the HTML rather than alert, which blocks the browser and is quite user-unfriendly.
Readiness Rather than wrapping everything in:
$( document ).ready(function() {
Consider using the more standard form nowadays of
$(function() {
Or, even better, remove it entirely, and use the HTML to ensure the JS only runs once the DOM is populated: either give your script the defer attribute (best option IMO), or move your script to right above the bottom of the <body>. Then there's no need for $( document ).ready(function() {, or to wait for DOMContentLoaded, or anything like that.
Images array Rather than hard-coding:
var images = ['dino8.png', 'dino1.png', 'dino2.png','dino3.png', 'dino4.png', 'dino5.png', 'dino6.png', 'dino7.png'];
Consider simply choosing a random number from 1 to 8 instead, and interpolating that into the middle of dino#.png (see below).
randNumb and rand? What's the difference? It's not obvious at a glance what the difference is or what those variables mean without continuing below. Give them better names that indicate what they're used for:
const totalImages = 8;
$('.square').each(function() {
    const widthAndHeight = Math.floor(Math.random() * 71) + 30;
    const src = `img/dino${1 + Math.floor(Math.random() * totalImages)}.png`;
    $(this).append(`<img class="dino" src="${src}" width="${widthAndHeight}%" height="${widthAndHeight}%">`);
});
You could also use .attr instead of directly concatenating HTML strings - concatenating HTML strings can result in odd display issues, unexpected elements appearing in the DOM, and arbitrary code execution when the input isn't trustworthy. While the input happens to be trustworthy here, it'd probably a good habit to get into to avoid concatenating when possible:
const src = `img/dino${1 + Math.floor(Math.random() * totalImages)}.png`;
$('<img />')
    .appendTo(this)
    .attr({
        src,
        width: widthAndHeight + '%',
        height: widthAndHeight + '%',
    });
That's not only better practice, it's also easier to read.
Note the use of ES2015 syntax - you can count on the vast majority of users to be using browsers that support ES2015. Very few people are still using IE11, and those that are (old business enterprise networks, mostly) are less likely to have overlap with teaching children. If you want IE11 support, the most maintainable solution IMO is to write in (clean, readable, concise) modern syntax and then automatically transpile down to ES5 for production.
End condition Rather than doing if (x == 50){ to check if all dinos have been removed, consider checking whether the .dino selector string returns any elements instead - this'll let you avoid hard-coding the number, and will let you remove the x variable entirely:
if (!$('.dino').length) {
    // show egg
}
No need for hover Since you aren't attaching a mouseleave event, the hover handler isn't so appropriate - you only want to listen for mouseenter events:
$('.dino').on('mouseenter', function() {
// ... | {
"domain": "codereview.stackexchange",
"id": 39698,
"tags": "javascript, jquery, game, html"
} |
Not understanding an argument on words of a certain form | Question: In my book they use the following argument, which I don't understand:
Let $L$ be a language such that there are $m_1, ..., m_k \in \Sigma^*$ such that $L \subset m_1^*...m_k^*$. Now choose the minimal $n$ such that there exist $l_1,..., l_n$ such that every word of $L$ is a prefix of a word in $l_1^*...l_n^*$.
Hence by the minimality of $n$ there exists a word $u \in L$ such that $u$ is not a prefix of a word in $l_1^*...l_{n-1}^*$.
What I don't understand is why such a $u$ exists. I don't see why the minimality of $n$ proves the existence of such a $u$...
For example, if I take $L = ab^*c^*d$ then I think the minimal $n$ is $4$, with the words $l_1 = a, l_2 = b, l_3 = c, l_4 = d$, and in this case such a $u$ clearly does not exist.
So is my book wrong, or am I completely missing something here?
Thank you!
Answer: I think the book is right. In your example, the word $u=abcd$ satisfies the conditions: it is not a prefix of a word in $a^* b^* c^*$.
In general, I think that proof is valid. I'll try to explain why the minimality of $n$ proves the existence of such a word $u \in L$. If there did not exist such a word $u \in L$, then that would mean that every word $u \in L$ is a prefix of $l_1^* \cdots l_{n-1}^*$, which in turn would mean that $n$ wasn't the minimum possible (we could have used $n-1$), which contradicts how $n$ was chosen in the second sentence. That's a contradiction. If you start from an assumption and derive a contradiction, then you can conclude that the assumption must have been wrong. In particular, the only alternative that remains possible is that there does exist such a word $u \in L$.
Put another way, there are only two possibilities: either there does exist such a word $u \in L$, or there doesn't. I've shown why the second choice is always impossible. That means the first choice must actually always be true. | {
"domain": "cs.stackexchange",
"id": 13416,
"tags": "automata, word-combinatorics"
} |
Bloch sphere representation of uncertainty | Question: If we consider the Bloch sphere in quantum mechanics, which is a two-level representation of a quantum mechanical system, then any state can be represented as $$| \psi \rangle = \cos\left(\theta/2\right)|0 \rangle + e^{i \phi} \sin\left(\theta/2\right) | 1 \rangle.$$ We can show that for any state $| \psi \rangle$, the expectation values $\langle \hat{S}_z \rangle$, $\langle \hat{S}_x \rangle$ and $\langle \hat{S}_y \rangle$ are the projections onto the $z$, $x$ and $y$ axes of the Bloch sphere, respectively. Is there any clear graphical representation for the uncertainty in measurements of $\hat{S}_z, \hat{S}_x$ and $\hat{S}_y$?
Answer: Yes, there is something, and it holds even for mixed states, not only for pure states.
Take an arbitrary density matrix ${\hat \rho}$ corresponding to a point A = $(a_x, a_y, a_z)$ within the unit Bloch sphere, $|{\vec a}| \le 1$, such that
$$
{\hat \rho} = \frac{1}{2} \Big( {\hat I} + {\vec a}\cdot {\hat {\vec \sigma}}\Big)
$$
Since the spin averages along directions $x$, $y$, $z$ are the components $a_x$, $a_y$, $a_z$,
$$
\langle {\hat \sigma}_i \rangle = a_i\;, \;\;\; i = x,\;y\;,z
$$
the corresponding uncertainties read
$$
\langle (\Delta {\hat \sigma}_i)^2 \rangle = Tr [({\hat\sigma}_i^2 - \langle {\hat \sigma}_i \rangle^2) {\hat \rho}] = 1 - a_i^2\;, \;\;\; i = x,\;y\;,z
$$
Consider for example $\langle (\Delta {\hat \sigma}_x)^2 \rangle = 1 - a_x^2$. Slice the Bloch sphere with a plane perpendicular to the x-axis and passing through point A. Then the radius of the resulting circular cut is $\sqrt{1-a_x^2} = \sqrt{\langle (\Delta {\hat \sigma}_x)^2 \rangle}$.
For pure states, when $|{\vec a}| =1$ and
$$
\langle (\Delta {\hat \sigma}_x)^2 \rangle = 1 - a_x^2 = a_y^2 + a_z^2
$$
the circular cut crosses through point A, and $\sqrt{a_y^2 + a_z^2}$ is just the distance from A to the x-axis. Similarly for the other axes. So in general,
For a pure state $\psi$ the spin uncertainty along any direction ${\vec n}$ is the distance from its representative point A = $(\langle {\hat \sigma}_x \rangle, \langle {\hat \sigma}_y\rangle, \langle {\hat \sigma}_z\rangle)$ on the Bloch sphere to that axis.
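This geometric statement is easy to check numerically. Below is a small sketch (Python with NumPy; the state and direction are arbitrary examples chosen for illustration, not taken from the question):

```python
import numpy as np

def spin_variance(a, n):
    """Variance of sigma_n for Bloch vector a and direction n.

    Uses sigma_n^2 = I and <sigma_n> = a.n, so Var = 1 - (a.n)^2.
    """
    a, n = np.asarray(a, float), np.asarray(n, float)
    n = n / np.linalg.norm(n)
    return 1.0 - np.dot(a, n) ** 2

def distance_to_axis(a, n):
    """Euclidean distance from the point a to the axis through the origin along n."""
    a, n = np.asarray(a, float), np.asarray(n, float)
    n = n / np.linalg.norm(n)
    return np.sqrt(np.dot(a, a) - np.dot(a, n) ** 2)

# Pure state with theta = pi/3, phi = 0 on the Bloch sphere
theta = np.pi / 3
a = [np.sin(theta), 0.0, np.cos(theta)]  # |a| = 1

var_x = spin_variance(a, [1, 0, 0])
d_x = distance_to_axis(a, [1, 0, 0])
print(var_x, d_x ** 2)  # the two numbers agree for a pure state
```

For a mixed state ($|\vec a| < 1$) the variance $1 - (\vec a \cdot \vec n)^2$ exceeds the squared distance to the axis by $1 - |\vec a|^2$, consistent with the circular-cut picture above.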
Obviously the only direction for which this distance and the corresponding uncertainty is null is the direction of ${\vec a}$ itself. | {
"domain": "physics.stackexchange",
"id": 39338,
"tags": "quantum-mechanics, quantum-spin, bloch-sphere"
} |
How to optimize C# console application | Question: I'm wondering: is there any way to optimize this code?
It must be able to work with big files; for example, formatting a raw 140 KB .txt file (12.5k words) takes 2 seconds (measured with the Stopwatch class).
Text example http://pastebin.com/2p88v8EN
Maybe I used some bad techniques here, or there is some part to simplify? Maybe multithreading? I'm not familiar with it yet.
I would be grateful for help!
Code below:
class TextManipulations
{
public string[] wordsDist; // main array, contains words in alphabetic order and output lines
public void TextFormat(string sourcePath) // creating method that will format our source text according to task
{
string textInput = System.IO.File.ReadAllText(sourcePath).ToLower(); // reading text from file, lowercased at start for precise search
MatchCollection m = Regex.Matches(textInput, @"\b[\w']+\b"); // exact search of all alphanumeric "words" including words with apostrophe
List<string> words = new List<string>(); // creating List<T> for containing unknown amount of words
foreach (Match match in m) // assigning all matches to List<string>
{
words.Add(match.ToString());
}
words.Sort(); // sorting words in alphabetic order
wordsDist = words.Distinct().ToArray(); // assigning words to main array without duplicates
System.IO.File.WriteAllLines(@"D:\output.txt", wordsDist); // writing words into txt file to edit in setLineNumbers method
}
public void setLineNumbers(string sourcePath) // creating method for adding line numbers
{
string[] linesOutput = new string[wordsDist.Count()]; // creating array that will contain line numbers
string[] lines = System.IO.File.ReadAllLines(sourcePath); // assigning source text by lines
for (int j = 0; j < wordsDist.Count(); j++) // main cycle checking each word for presence in each line
{
for (int i = 0; i < lines.Count(); i++)
{
if (Regex.IsMatch(lines[i].ToLower(), "\\b" + wordsDist[j] + "\\b")) // using ToLower() here, because we can't use it in line 33
{
linesOutput[j] += (i + 1).ToString() + ", "; // adding line numbers according to word
}
}
}
for (int i = 0; i < wordsDist.Count(); i++) // connection of two relative arrays
{
wordsDist[i] += "_______________________________" + linesOutput[i];
wordsDist[i] = wordsDist[i].Remove(wordsDist[i].Length - 2); // removing last ',' char
}
System.IO.File.WriteAllLines(@"D:\output.txt", wordsDist); // writing final output result into txt file
}
}
Answer: I posted the basis of this answer to your question when you first asked it on StackOverflow.
The best answer is to remove both the Distinct() and the Regex and process everything by character.
class TextManipulations
{
public string[] WordsDist;
public void TextFormat(string sourcePath)
{
HashSet<string> Words = new HashSet<string>();
foreach (string line in File.ReadLines(sourcePath))
{
int wordStart = 0;
for (int i = 0; i < line.Length; i++)
{
char c = line[i];
if (c == ' ')
{
string word = line.Substring(wordStart, i - wordStart).Trim().ToLower();
Words.Add(word);
wordStart = i + 1;
}
}
if (wordStart < line.Length)
{
string word = line.Substring(wordStart, line.Length - wordStart).Trim().ToLower();
Words.Add(word);
}
}
WordsDist = Words.OrderBy(word => word).ToArray();
File.WriteAllLines("output.txt", WordsDist);
}
public void SetLineNumbers(string sourcePath)
{
Dictionary<string, List<int>> wordLineNumbers = WordsDist.ToDictionary(word => word, word => new List<int>());
int lineNo = 0;
foreach (string line in File.ReadLines(sourcePath))
{
lineNo++;
int wordStart = 0;
for (int i = 0; i < line.Length; i++)
{
char c = line[i];
if (c == ' ')
{
string word = line.Substring(wordStart, i - wordStart).Trim().ToLower();
wordLineNumbers[word].Add(lineNo);
wordStart = i + 1;
}
}
if (wordStart < line.Length)
{
string word = line.Substring(wordStart, line.Length - wordStart).Trim().ToLower();
wordLineNumbers[word].Add(lineNo);
}
}
File.WriteAllLines("output.txt", wordLineNumbers.Select(kvp => string.Format("{0}, {1}", kvp.Key, string.Join(", ", kvp.Value))));
}
}
However this changes the functionality slightly as it doesn't strip out punctuation and the like (and only splits on white spaces), but if you want to keep the exact same output then this will do the job:
class TextManipulations
{
private const string AlphanumericWords = @"\b[\w']+\b";
private static readonly Regex wordRegex = new Regex(AlphanumericWords, RegexOptions.Compiled);
public string[] WordsDist;
public void TextFormat(string sourcePath)
{
HashSet<string> Words = new HashSet<string>();
foreach (string line in File.ReadLines(sourcePath))
{
foreach (Match wordMatch in wordRegex.Matches(line))
{
Words.Add(wordMatch.Value.Trim().ToLower());
}
}
WordsDist = Words.OrderBy(word => word).ToArray();
File.WriteAllLines("output.txt", WordsDist);
}
public void SetLineNumbers(string sourcePath)
{
Dictionary<string, List<int>> wordLineNumbers = WordsDist.ToDictionary(word => word, word => new List<int>());
int lineNo = 0;
foreach (string line in File.ReadLines(sourcePath))
{
lineNo++;
foreach (string word in wordRegex.Matches(line).OfType<Match>().Select(wordMatch => wordMatch.Value.Trim().ToLower()).Distinct())
{
wordLineNumbers[word].Add(lineNo);
}
}
File.WriteAllLines("output.txt", wordLineNumbers.Select(kvp => string.Format("{0}_______________________________{1}", kvp.Key, string.Join(", ", kvp.Value))));
}
}
Some tests... While your Pastebin link is no longer available, I created a suitable 172 KB file with 4.2k unique words.
Performance of calling TextFormat followed by SetLineNumbers
Your code total runtime: 8689ms
Code by Peter Kiss total runtime: 440ms
My split only on white space code: 61ms
My code using a single Compiled Regex: 87ms
Additionally, my code reads the file line by line and reuses the same memory rather than loading the entire file into memory. This means that for very large files (say 100 MB+) my code will continue to scale better, since it avoids copying additional arrays around all over the place.
Testing with much larger files, this pattern continues, e.g. a 1446 KB input file with 12100 unique words.
Your code total runtime: test not run (would take too long)
Code by Peter Kiss total runtime: 9079ms
My split only on white space code: 279ms
My code using a single Compiled Regex: 651ms
So with a larger file my 2nd set of code performs 13 times faster compared to 5 times faster with a 172KB file. | {
"domain": "codereview.stackexchange",
"id": 5055,
"tags": "c#, optimization, console"
} |
What is the point of an active shape model in facial recognition? | Question: From what I understood, the Active Shape Model (ASM) is used in facial recognition. I figured out how the ASM works but I don't get why we use it.
How is this shape supposed to be interpreted for facial recognition? Should it be compared directly to the database?
I have read many research papers about ASM and facial recognition but none of them explains the point of it.
Answer: The active shape model algorithm is used to locate precisely the eyes, eyebrows, nose and mouth of a person. It is also used to align faces that are tilted.
It is not used directly for face matching and it has nothing to do with databases. It is a preprocessing step.
You (I) will need another algorithm to match its result with a database. (ICA for example) | {
"domain": "cs.stackexchange",
"id": 5284,
"tags": "algorithms, computer-vision"
} |
Impact of the sample rate of the emitting signal: communication between HackRF One and USRP NI 2921 | Question: I am transmitting a 1 kHz cosine wave with a HackRF One. I receive it with an NI 2921 USRP at a sample rate of 200 kHz. The whole process runs in GNU Radio Companion.
I changed the sample rate at emission to see the impact. I started at 200 kHz and lowered it by 1 kHz each time. The sample rate at reception stays the same: 200 kHz. Here are the results:
From 200kHz to 96kHz, nothing appeared. Then, at a sample rate of 95kHz at emission, the cosine spectrum showed two peaks at 13kHz and 87kHz, centered on 50kHz (DC component) giving a cosine of frequency 37kHz.
All the frequencies on the table above are the only ones showing something on the FFT of the received signal.
My questions are :
Why does the sample rate at emission impact the frequency seen at the receiver?
Why are there frequencies that do not make the cosine spectrum appear? (Only the frequencies present in the table work.) For example, if I use the same sample rate at emission and reception (200 kHz in our example), it does not work.
Why are the frequencies seen on the FFT plot different from 1kHz, the real cosine frequency emitted with the HackRF One?
Answer: The HackRF doesn't support those sampling rates. This follows from reading the firmware, then libhackrf, then the gr-osmocom hackrf_sink_c:
The set of supported sample rates is very limited:
osmosdr::meta_range_t hackrf_sink_c::get_sample_rates()
{
osmosdr::meta_range_t range;
/* we only add integer rates here because of better phase noise performance.
* the user is allowed to request arbitrary (fractional) rates within these
* boundaries. */
range += osmosdr::range_t( 8e6 );
range += osmosdr::range_t( 10e6 );
range += osmosdr::range_t( 12.5e6 );
range += osmosdr::range_t( 16e6 );
range += osmosdr::range_t( 20e6 ); /* confirmed to work on fast machines */
return range;
}
So, use 8 MS/s.
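As a small sketch of what a sample-rate mismatch does to an observed tone (generic DSP arithmetic, not HackRF-specific; the function name is made up for illustration):

```python
def apparent_frequency(f_true, fs_actual, fs_assumed):
    """Frequency at which a real tone appears when samples produced at
    fs_actual are (mis)interpreted as having been taken at fs_assumed.

    The hardware delivers a normalized frequency f_true/fs_actual in
    cycles per sample; relabelling the sample axis with fs_assumed
    rescales it, and aliasing folds it into [0, fs_assumed/2].
    """
    norm = (f_true / fs_actual) % 1.0  # cycles per sample, wrapped
    if norm > 0.5:                     # fold the aliased image into baseband
        norm = 1.0 - norm
    return norm * fs_assumed

# A 1 kHz tone sampled at 100 kHz but interpreted as 200 kHz shows up at 2 kHz
print(apparent_frequency(1e3, 100e3, 200e3))
```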
The hackrf is instructed to use a different sampling rate than you think it uses, which obviously "scales" the spectrum. | {
"domain": "dsp.stackexchange",
"id": 7565,
"tags": "gnuradio"
} |
What units are used for the Stefan-Boltzmann law? | Question: I have a star with given temperature in Kelvin and radius in solar radii. I tried to calculate the luminosity of the star using Stefan Boltzmann's law, and got an absurd number (over 1 million). What am I doing wrong, and are there any units that I should use instead of Kelvin and solar units?
Answer: The Stefan-Boltzmann constant $\sigma$ is not a dimensionless quantity, it comes with units. So whatever units you use, you must ensure that the value you use for the Stefan-Boltzmann constant is consistent with them.
So using the value expressed in terms of SI units:
$$\sigma = 5.670\,374\,419\ldots \times 10^{-8}\,\rm W\,m^{-2}\,K^{-4}$$
you would either have to work with radius, luminosity and temperature in metres, watts and kelvins, or convert $\sigma$ to the units you are actually using.
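As a quick sketch of that conversion in Python (the solar constants are the IAU nominal values $L_\odot = 3.828 \times 10^{26}\,$W and $R_\odot = 6.957 \times 10^8\,$m):

```python
import math

SIGMA_SI = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26           # IAU nominal solar luminosity, W
R_SUN = 6.957e8            # IAU nominal solar radius, m

# Convert sigma to solar units: L_sun R_sun^-2 K^-4
sigma_solar = SIGMA_SI * R_SUN**2 / L_SUN
print(sigma_solar)  # about 7.17e-17

def luminosity_solar(radius_rsun, temperature_K):
    """Luminosity in L_sun for a radius in R_sun, via L = 4 pi R^2 sigma T^4."""
    return 4.0 * math.pi * radius_rsun**2 * sigma_solar * temperature_K**4

# Sanity check: a star of one solar radius at T ~ 5772 K should give ~1 L_sun
print(luminosity_solar(1.0, 5772.0))
```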
For example, if you want to work in terms of solar radii and luminosities you have to account for the conversion factors $L_\odot = 3.828 \times 10^{26}\,\rm W$ and $R_\odot = 6.957 \times 10^8\,\rm m$, giving:
$$\sigma = 7.169\ldots \times 10^{-17}\ L_\odot\, R_\odot^{-2}\, \rm K^{-4}$$ | {
"domain": "astronomy.stackexchange",
"id": 5102,
"tags": "star, temperature, luminosity, mathematics, stefan-boltzmann-law"
} |
Is the speed of light dictated by Vacuum Permittivity, Vice Versa or Neither? | Question: Instinct, and my limited knowledge of Maxwell's Equations and the Wave Equation tell me that the first statement is true.
By my interpretation, the relationship between the frequencies and wavelengths of e.m. waves (and hence the speed of light) is dictated by the relationship between electric and magnetic fields, which is in turn dictated by Vacuum Permittivity, which I believe (possibly in error) to be an inherent property of our universe.
Is this right, or is the speed of light somehow dictating Vacuum Permittivity?
Or have I got something totally wrong?
Answer: At first sight, I understand it might seem plausible that permeability and permittivity are fundamental constants of spacetime that together form the speed of light.
However, the speed of light is the more fundamental parameter. The speed of light c is not limited to electromagnetic waves, it is equally the speed of gravitational waves which are not at all electromagnetic. Thus it is easy to see that $c$ is more fundamental than $ε_0$ and $µ_0$.
For an intuitive model you can think of EM waves as only one form among others for the propagation at the universal speed limit, however with the particularity that they are based on two kinds of forces so that the speed limit has to be distributed among two kinds of forces (electric and magnetic), giving $ε_0$ and $µ_0$. | {
"domain": "physics.stackexchange",
"id": 33183,
"tags": "electromagnetism, speed-of-light, maxwell-equations"
} |
Securing login PHP script | Question: I have made a login system where the inserted_id and inserted_password are sent to login.inc.php via XMLHttpRequest. I'm not sure if my PHP script is secure, and I need some advice on securing it.
login.inc.php:
<?php
session_start();
$conn = mysqli_connect("localhost", "root", "", "users");
$params = json_decode(file_get_contents('php://input'), true);
$inserted_id = $params['inserted_id'];
$inserted_password = $params['inserted_password'];
$stmt = mysqli_stmt_init($conn);
if (mysqli_stmt_prepare($stmt, "SELECT * FROM user WHERE account_name=? OR email=?;")) {
mysqli_stmt_bind_param($stmt, "ss", $inserted_id, $inserted_id);
mysqli_stmt_execute($stmt);
$row = mysqli_fetch_assoc(mysqli_stmt_get_result($stmt));
if ($row == null) {
echo ("DOESNT EXISTS");
} else {
if (password_verify($inserted_password, $row['password'])) {
$_SESSION['user_id'] = $row['id'];
echo("SUCCESS");
} else {
echo("PASSWORD_FAIL");
}
}
}
?>
signup.inc.php:
<?php
$conn = mysqli_connect("localhost", "root", "", "users");
$params = json_decode(file_get_contents('php://input'), true);
$inserted_first_name = $params['first_name'];
$inserted_last_name = $params['last_name'];
$inserted_dob = $params['dob'];
$inserted_email = $params['email'];
$inserted_account_name = $params['account_name'];
$inserted_password = $params['password'];
$stmt = mysqli_stmt_init($conn);
if (mysqli_stmt_prepare($stmt, "SELECT * FROM user WHERE email=?;")) {
mysqli_stmt_bind_param($stmt, "s", $inserted_email);
mysqli_stmt_execute($stmt);
if (mysqli_num_rows(mysqli_stmt_get_result($stmt)) > 0) {
echo("EMAIL_TAKEN");
} else {
$hashed_password = password_hash($inserted_password, PASSWORD_DEFAULT);
$created_id = rand(111111111, 999999999);
$stmt = mysqli_stmt_init($conn);
if (mysqli_stmt_prepare($stmt, "INSERT INTO user(id, first_name, last_name, dob, email, account_name, password) VALUES (?, ?, ?, ?, ?, ?, ?);")) {
mysqli_stmt_bind_param($stmt, "issssss", $created_id, $inserted_first_name, $inserted_last_name, $inserted_dob, $inserted_email, $inserted_account_name, $hashed_password);
$result = mysqli_stmt_execute($stmt);
echo ($result ? "SUCCESS" : "FAIL");
}
}
mysqli_stmt_close($stmt);
}
?>
Answer: The database access information (server address, user name, password, database name) is repeatedly hard-coded in each PHP file instead of living in one common place, where it can be easily changed and configured for each runtime environment (development, production, etc.)
Additionally, a (production) database should not be accessed using a root user with a blank password.
Personally, I never accessed a database in PHP using the built-in low-level functions. I found that using a database access library led to less boilerplate code; such libraries had a better readable API and, most importantly, built-in security measures that you can't forget to use. At the very least I would move database access into separate repository classes/modules/services.
A common security measure for logins is to have the script return the same result for both unknown account name and wrong password, so that an attacker can't find out if a specific user has an account or not. | {
"domain": "codereview.stackexchange",
"id": 33848,
"tags": "php"
} |
Why do we neglect higher order terms in Cauchy's Equation? | Question: Cauchy's Equation for finding the refractive index for a light of given wavelength is:
$$n(\lambda)=A+\dfrac{B}{\lambda^2}+\dfrac{C}{\lambda^4}.....$$
This formula however is simplified to
$n(\lambda)=A+\dfrac{B}{\lambda^2}$
by neglecting higher order terms.
This is what I don't understand. The wavelength of visible light is approximately $6\cdot10^{-7}\,\mathrm{m}$, which is less than $1$. Shouldn't the contribution of the higher-order terms then be greater than that of the lower-order terms?
Answer: The Cauchy equation is an empirical relationship.
However, the refractive index can be obtained from the classical Lorentz model, where a light wave creates oscillatory motion of the electrons and the electron displacements form dipole moments. This polarizes the medium, and the refractive index can be estimated theoretically.
(https://www.phys.ksu.edu/personal/cdlin/class/class02a/s2-jing-li.ppt)
$$n=\sqrt{1+\frac{\omega_p^2}{\omega_0^2-\omega^2-i\gamma\omega} }$$
$\omega_p$ is the plasma frequency and $\omega_0$ are the resonance absorption edges. If we can neglect the absorption $\gamma=0$ .
$$n=\sqrt{1+\frac{p^2}{1-x^2}}\propto a+bx^2+cx^4...$$
where $x=\frac{\omega}{\omega_0}\sim\frac{\lambda_0}{\lambda}$ and $p=\omega_p/\omega_0$ is a constant.
When we Taylor expand, we see that in some approximation the empirical equation is justified, and we can expect the Cauchy equation to fit the refractive index in limited spectral regions, for some materials.
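The key point for the original question is that, written as $A + B/\lambda^2 + C/\lambda^4$, the coefficients carry powers of the resonance wavelength ($B \sim \lambda_0^2$, $C \sim \lambda_0^4$), and $\lambda_0$ lies in the UV, so each successive term is suppressed by another factor of $(\lambda_0/\lambda)^2 < 1$. A numerical sketch (the values of $p$ and $\lambda_0$ are invented for illustration):

```python
import numpy as np

p = 1.2        # hypothetical ratio omega_p / omega_0
lam0 = 100e-9  # hypothetical resonance wavelength (UV), metres

def n_exact(lam):
    x = lam0 / lam  # x = omega/omega_0 ~ lambda_0/lambda
    return np.sqrt(1 + p**2 / (1 - x**2))

# Fit n over the visible range with a0 + a1*u + a2*u^2, where u = (lam0/lam)^2,
# i.e. the Cauchy form A + B/lambda^2 + C/lambda^4 in dimensionless variables
lam = np.linspace(400e-9, 700e-9, 200)
u = (lam0 / lam) ** 2
M = np.stack([np.ones_like(u), u, u**2], axis=1)
a0, a1, a2 = np.linalg.lstsq(M, n_exact(lam), rcond=None)[0]

# Size of each Cauchy term at lambda = 550 nm
uv = (lam0 / 550e-9) ** 2
terms = (a0, a1 * uv, a2 * uv**2)
print(terms)  # each term is much smaller than the previous one
```

The fitted terms shrink by roughly two orders of magnitude each, which is why truncating after $B/\lambda^2$ is usually harmless.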
Higher orders do not add much. More often the Sellmeier equation is used, which describes the behavior of the refractive index better. | {
"domain": "physics.stackexchange",
"id": 55811,
"tags": "optics, geometric-optics"
} |
Code that finds the largest palindrome from two three-digit factors | Question: My code is really ugly, but I don't know how to make it better.
#include "stdafx.h"
#include <iostream>
#include <math.h>
using namespace std;
bool palindromeTest(double numberVectorAddition){//tests if the factors of the palindrome are both below 3 digits each.
double p = numberVectorAddition;
double i = 999;
double c;
for (;; i--){
if (fmod(p, i) != 0)
continue;
if (fmod(p, i) == 0)
c = p / i;
if (1000 <= c){
cout << "One of the factors is too high. Looking for next largest palindrome..." << endl;
cout << endl;
return false;
}
else if (c <= 1000){
cout << "Factors of the number both below 1000 are " << i << " and " << c;
return true;
}
}
}
void nLargestPalindrome(double number){
int i = 5; //subtract "1" because array starts at zero
int intermediate = i; //intermediate value - equal to the array index;
double *decimalDigit = new double[i + 1];
while (0 <= i){ //acquire digits of number
decimalDigit[i] = (number - fmod(number, pow(10, i))) / pow(10, i);
cout << "Value of digit place " << i << " is " << decimalDigit[i] << endl;
number = fmod(number, pow(10, i));
i--;
}
for (int q = 0; q < 100; q++){ //decreases size of palindrome, and also tests after the decrease.
double numberVectorAddition = pow(10, 5)*decimalDigit[0] + pow(10, 4)*decimalDigit[1] + pow(10, 3)*decimalDigit[2] + pow(10, 2)*decimalDigit[3] + pow(10, 1)*decimalDigit[4] + decimalDigit[5];
cout << decimalDigit[0] << decimalDigit[1] << decimalDigit[2] << decimalDigit[3] << decimalDigit[4] << decimalDigit[5] << '\t';
cout << numberVectorAddition << endl;
if (palindromeTest(numberVectorAddition))
break;
decimalDigit[2]--;
decimalDigit[3]--;
if (decimalDigit[2] == 0){
numberVectorAddition = pow(10, 5)*decimalDigit[0] + pow(10, 4)*decimalDigit[1] + pow(10, 3)*decimalDigit[2] + pow(10, 2)*decimalDigit[3] + pow(10, 1)*decimalDigit[4] + decimalDigit[5];
cout << decimalDigit[0] << decimalDigit[1] << decimalDigit[2] << decimalDigit[3] << decimalDigit[4] << decimalDigit[5] << '\t';
cout << numberVectorAddition << endl;
if(palindromeTest(numberVectorAddition))
break;
if ((decimalDigit[1] == 0) && (decimalDigit[2] == 0)){
decimalDigit[1] = 9;
decimalDigit[4] = 9;
decimalDigit[2] = 9;
decimalDigit[3] = 9;
decimalDigit[0]--;
decimalDigit[5]--;
continue;
}
decimalDigit[2] = 9;
decimalDigit[3] = 9;
decimalDigit[1]--;
decimalDigit[4]--;
}
}
delete decimalDigit;
}
int main(){
double n = 997799; //997799 is equal to 999*999 - 2; palindrome is smaller.
nLargestPalindrome(n); //finds the largest palindrome below "n" that has both factors of three digits.
cin.get(); //stops the command window from closing prematurely
}
Answer: Code formatting
Your code seems to be poorly indented. I suggest you have a look at the documentation of your favorite text editor to see how to fix this.
Basic logic
It doesn't make much sense to check :
if (fmod(p, i) != 0)
continue;
if (fmod(p, i) == 0)
Similarly,
if (1000 <= c){
return false;
}
else if (c <= 1000){
is quite redundant.
using namespace std;
Using using namespace std; is usually frowned upon, some will say that it is ok in a cpp file, some will say the opposite. I'll let you decide.
The right type
You are using double, but long int is more than enough for what you are trying to do.
Right place to define variables
It is usually a good idea to define variables in the smallest possible scope. Among other things, it helps the reader not to have to keep too many things in mind. Also, it helps you see that intermediate is an unused variable. As a side-note, it is a good idea to activate all the warnings in your compiler to detect such a thing (and many other potential errors).
Define constants
Instead of having 5 in the middle of your code and need to comment it, you could store 6 in a constant named for instance NB_DIGITS and remove the need for any comment.
Use for loops when you can
Now, the loop to aquire digits can be rewritten :
for (int i = NB_DIGITS -1; 0 <= i; i--) { //acquire digits of number
decimalDigit[i] = (number - fmod(number, pow(10, i))) / pow(10, i);
cout << "Value of digit place " << i << " is " << decimalDigit[i] << endl;
number = fmod(number, pow(10, i));
}
Compute things only once
You call pow(10, i) way too many times compared to what you actually need. You can simply write :
double pow_ten = pow(10, i);
decimalDigit[i] = (number - fmod(number, pow_ten)) / pow_ten;
cout << "Value of digit place " << i << " is " << decimalDigit[i] << endl;
number = fmod(number, pow_ten);
Smarter decomposition of a number
Instead of going through the hassle of getting digits starting with the one with the greatest weight, it is much easier to start from the end. Also, this makes you perform the iteration in the logical order.
Better names
numberVectorAddition is a pretty bad name for a parameter. n conveys enough information as far as I can tell.
palindromeTests is also quite bad. hasDivisorsInRange is probably better.
Finally, decimalDigit is a bit redundant, as digits already include the notion of base 10; also, it probably should be plural to tell that it's a collection.
No need for new/delete here
You don't need to use new/delete here. If you just define it as long int digits[NB_DIGITS];, it will be destroyed when it goes out of scope.
Clearer responsability for functions
It could be interesting for your function to return values instead of printing them. This would lead to separation of concerns. One of the multiple advantages is to make your code easier to test.
At this point your code, looks like :
#include <iostream>
#include <math.h>
using namespace std;
const int NB_DIGITS = 6;
bool hasDivisorsInRange(long int n){//tests if the factors of the palindrome are both below 3 digits each.
for (long int i = 999;; i--){
if (n % i == 0)
{
long int c = n / i;
if (1000 <= c){
return false;
}
else if (c <= 1000){
cout << "Factors of the number both below 1000 are " << i << " and " << c << endl;
return true;
}
}
}
}
long int LargestPalindrome(long int number){
long int digits[NB_DIGITS];
for (int i = 0; i < NB_DIGITS; i++) { //acquire digits of number
digits[i] = number % 10;
number /= 10;
cout << "Value of digit place " << i << " is " << digits[i] << endl;
}
for (int q = 0; q < 100; q++){ //decreases size of palindrome, and also tests after the decrease.
long int n = pow(10, 5)*digits[0] + pow(10, 4)*digits[1] + pow(10, 3)*digits[2] + pow(10, 2)*digits[3] + pow(10, 1)*digits[4] + digits[5];
cout << digits[0] << digits[1] << digits[2] << digits[3] << digits[4] << digits[5] << '\t';
cout << n << endl;
if (hasDivisorsInRange(n))
return n;
digits[2]--;
digits[3]--;
if (digits[2] == 0){
long int n2 = pow(10, 5)*digits[0] + pow(10, 4)*digits[1] + pow(10, 3)*digits[2] + pow(10, 2)*digits[3] + pow(10, 1)*digits[4] + digits[5];
cout << digits[0] << digits[1] << digits[2] << digits[3] << digits[4] << digits[5] << '\t';
cout << n2 << endl;
if(hasDivisorsInRange(n2))
return n;
if ((digits[1] == 0) && (digits[2] == 0)){
digits[1] = 9;
digits[4] = 9;
digits[2] = 9;
digits[3] = 9;
digits[0]--;
digits[5]--;
continue;
}
digits[2] = 9;
digits[3] = 9;
digits[1]--;
digits[4]--;
}
}
return 0;
}
int main(){
long int n = 906608; // find the largest palindrome below this value
int pal = LargestPalindrome(n); //finds the largest palindrome below "n" that has both factors of three digits.
cout << "Solution is " << pal << endl;
}
Better algorithm
You are trying to iterate over palindromes in a quite complicated way (that I do not understand at all). Iterating over 6-digit palindromes is nothing but iterating over the values between 100 and 999 and appending the mirrored version of each number at the end.
This can be badly written :
long int LargestPalindrome(long int number){
for (int beg = (number / 1000); beg > 100; beg--)
{
// Could be done in a clean way with array but I'm lazy
int i = beg;
int a = i % 10; i/=10;
int b = i % 10; i/=10;
int c = i % 10; i/=10;
long int pal = beg * 1000 + 100 * a + 10 * b + c;
if (pal < number && hasDivisorsInRange(pal))
return pal;
}
return 0;
}
For n = 906608, this finds 888888 which happens to be a better solution that the one found by your code.
Also, you could perform a smarter search for divisors.
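One such smarter search: if both factors must be three-digit numbers and the candidate is $n$, a divisor $i$ only has to be tried in the range $\lceil n/999 \rceil \le i \le 999$, since otherwise the cofactor $n/i$ cannot be a three-digit number. A sketch of the idea (in Python for brevity; the function names are illustrative):

```python
def has_three_digit_factors(n):
    """True if n = a * b with 100 <= a, b <= 999.

    Only divisors i with ceil(n/999) <= i <= 999 can have a cofactor
    that is also a three-digit number, so the loop stops far earlier
    than a naive search down to i = 1.
    """
    lo = max(100, -(-n // 999))  # -(-n // 999) is ceil(n / 999)
    for i in range(999, lo - 1, -1):
        if n % i == 0 and 100 <= n // i <= 999:
            return True
    return False

def largest_palindrome_product():
    """Largest 6-digit palindrome that is a product of two 3-digit numbers."""
    for half in range(999, 99, -1):
        pal = int(str(half) + str(half)[::-1])  # e.g. 906 -> 906609
        if has_three_digit_factors(pal):
            return pal
    return None

print(largest_palindrome_product())  # 906609 = 913 * 993
```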
I have to go, I'll try to continue this answer at some point. | {
"domain": "codereview.stackexchange",
"id": 11198,
"tags": "c++, palindrome"
} |