Faster Algorithm to convolve/correlate two sparse 1-D signals in python (or any language)
Question: I have two signals which I need to correlate or convolve. Each signal is sampled non-uniformly, and the values of the signal I have with me are the timestamp and the magnitude of the signal at that timestamp. The values of the signal at all other times can be assumed to be zero. The timestamps of the signal have a resolution running in microseconds. An example of how the signal looks is shown below: As can be seen, the resolution of the signal is in microseconds and the signal is mostly sparse. If I were to convolve two signals of this type, I would first have to pad the signals with zeros (since I would have to discretise the signal). While the padding can be done with a resolution of microseconds, the number of values to be multiplied becomes too big and the operation becomes increasingly slow. Most of the multiplications in this convolution would be multiplications of zeros (which are pretty much useless). I have therefore chosen a round-off value of 2 places (0.xxxxxx becomes 0.xx), since I have to perform 40,000 similar convolutions. I have written my resampling function as shown below.

import numpy as np
import math

def resampled_signal_lists_with_zeros(signal_dict, xlimits):
    '''
    Resamples the given signal with precision determined by the round function.
    signal_dict is a dictionary with timestamp as key and signal magnitude as the value.
    xlimits is an array containing the start and stop time of the signal.
    '''
    t_stamps_list = list(signal_dict.keys())
    t_list = list(np.arange(int(math.floor(xlimits[0])), int(math.ceil(xlimits[1])), 0.005))
    t_list = [round(t, 2) for t in t_list]
    s_list = list()
    time_keys = [round(t, 2) for t in t_stamps_list]
    i = 0
    for t in t_list:
        if i < len(t_stamps_list):
            if t == time_keys[i]:
                s_list.append(signal_dict[t_stamps_list[i]])
                i += 1
            else:
                s_list.append(0)
        else:
            s_list.append(0)
    return t_list, s_list

The correlation of two signals padded in the above manner is done using scipy as follows:

from scipy.signal import correlate
output = correlate(s_1, s_2, mode='same')

The output calculated in the above manner is pretty slow. Since the signal is pretty sparse and most of the multiplications are multiplications by zero, I think there should be a better way to do the same operations. Is there a way to get the result of the convolution of the two sparse signals faster?

Answer: Interpolating discontinuous waveforms is usually not a good idea. The way I would approach your problem would be to recognize your signals as pulse trains, and to assume that the convolution of a single pulse from one waveform with a single pulse from the other waveform is also a pulse. Signals A and B can then be represented as trains of Dirac delta functions: $$ s_A(t)=\sum_{i=1}^{N-1} a_i \delta(t-\tau_A(i)) $$ $$ s_B(t)=\sum_{i=1}^{M-1} b_i \delta(t-\tau_B(i)) $$ where $\tau_A(i)$ and $\tau_B(i)$ are your non-uniform arrival times. There is an identity for delta functions, $$ f(t) \star \delta(t-a)= f(t-a) $$ where $\star$ denotes convolution, so $$ s_A(t)\star c_1\delta(t-a)= c_1\sum_{i=1}^{N-1} a_i \delta(t-(\tau_A(i)+a)) $$ and $$ s_A(t)\star \left[c_1\delta(t-a) + c_2\delta(t-b)\right]= c_1\sum_{i=1}^{N-1} a_i \delta(t-(\tau_A(i)+a))+c_2\sum_{i=1}^{N-1} a_i \delta(t-(\tau_A(i)+b)) $$ which generalizes to $s_A(t)\star s_B(t)$: you really just need to calculate the time shifts and amplitude products.
I will leave it up to you how to handle the amplitudes, but the code below assumes all the impulses have magnitude one.

clear all
close all
clc

M = 2;  % number of spikes in waveform A
N = 5;  % number of spikes in waveform B
T = 10; % arbitrary duration

waveform_a_times = rand(1,M)*T;
waveform_b_times = rand(1,N)*T;

for i = 1:N
    time_diff(i,:) = waveform_a_times + waveform_b_times(i);
end
conv_times = sort(reshape(time_diff,1,M*N));

figure(1)
subplot(3,1,1), stem(waveform_a_times, ones(1,M)); title('waveform A')
subplot(3,1,2), stem(waveform_b_times, ones(1,N)); title('waveform B')
subplot(3,1,3), stem(conv_times, ones(1,N*M)); title('convolution of waveform A with waveform B')

For correlation, use $$ f(t)\star \delta(t+a)=f(t+a). $$ You're also more likely to have coincident times for correlation, which you should test for.
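The answer's pulse-train recipe can be sketched in a few lines of NumPy (my own illustration; the function name, and the choice to carry the amplitude products along rather than assume unit magnitudes as the MATLAB snippet does, are assumptions):

```python
import numpy as np

def sparse_convolve(times_a, amps_a, times_b, amps_b):
    """Convolve two delta-pulse trains: the result has a pulse at every
    pairwise time sum, with amplitude equal to the pairwise product."""
    ta, aa = np.asarray(times_a, float), np.asarray(amps_a, float)
    tb, ab = np.asarray(times_b, float), np.asarray(amps_b, float)
    t = (ta[:, None] + tb[None, :]).ravel()   # all N*M time shifts
    a = (aa[:, None] * ab[None, :]).ravel()   # all N*M amplitude products
    order = np.argsort(t)
    return t[order], a[order]

# For correlation, use time differences instead: tb[None, :] - ta[:, None]
```

This costs O(NM) for N and M pulses, independent of the microsecond time resolution; coincident output times can then be merged by summing their amplitudes.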
{ "domain": "dsp.stackexchange", "id": 7830, "tags": "python, convolution, cross-correlation" }
An imperative program is analogous to how a Turing machine works?
Question: Since Turing machines have had a great influence on typical hardware architecture (von Neumann) and both use the concept of state, is it correct to say that an imperative program is analogous to how a Turing machine works? "Computability via Turing machines gave rise to imperative programming. Computability described via λ-calculus gave rise to functional programming." — Cooper, S. B., Leeuwen, J. (2013). Alan Turing: His Work and Impact. And "Imperative programming languages such as Fortran, Pascal etcetera as well as all the assembler languages are based on the way a Turing machine is instructed: by a sequence of statements." — Barendregt, H., Barendsen, E. (2000). Introduction to Lambda Calculus. Answer: At a high enough level and when contrasted with functional programming, sure. Turing machine models and imperative programs have in common that they start from an input and take a series of steps that change a state stored in memory, ending with some output. This contrasts with lambda calculus and functional programming, which generally and loosely do not have the above features. I think reading any more into the comparison than this, or trying to take it any farther, is probably a mistake. As Yuval says, Turing machines are quite different from modern machines and program execution environments.
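As a toy illustration of the contrast (my own example, not from the quoted texts): summing a list imperatively steps through explicit state updates, much like a machine executing a sequence of statements, while the functional version expresses the same result as nested function application.

```python
from functools import reduce

# Imperative: an explicit state (total) is updated step by step,
# like a machine changing the contents of its memory.
def sum_imperative(xs):
    total = 0
    for x in xs:
        total += x   # state mutation at each step
    return total

# Functional: no mutable state; the result is built by composing
# function applications, in the spirit of the lambda calculus.
def sum_functional(xs):
    return reduce(lambda acc, x: acc + x, xs, 0)

print(sum_imperative([1, 2, 3, 4]))  # 10
print(sum_functional([1, 2, 3, 4]))  # 10
```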
{ "domain": "cs.stackexchange", "id": 7640, "tags": "turing-machines, computer-architecture, imperative-programming" }
capture publish and subscribe image using ROS
Question: Hello everyone, I am a newbie in ROS. I have written C code for capturing images using OpenCV, and then using socket communication I send those images to a remote location. Now I have to do the same in ROS, so that one node can capture images and other nodes can publish and subscribe to multiple images. I don't know whether this is possible in ROS. Please guide me and share a link that I can easily work from. Originally posted by KDROS on ROS Answers with karma: 67 on 2014-11-17 Post score: 0 Answer: ROS has the image_transport library, which adds additional convenience methods for publishing and subscribing to images and their matching camera transformation matrices. In addition, there is the cv_camera package, which provides a ROS node that captures images through the OpenCV API and publishes them as ROS messages. This should take care of the capture and publish part of your pipeline. There are also many other camera drivers available for ROS which provide the same API for other types of cameras, and can expose vendor-specific APIs such as shutter, gain and synchronization control. You'll want to explore the image_transport tutorials to subscribe to an existing image stream, convert it to an OpenCV format and process it. Originally posted by ahendrix with karma: 47576 on 2014-11-17 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by KDROS on 2014-11-19: I was using usb_cam for capturing images. It is publishing images on the topic "image_raw", along with camera info. I don't need the camera info; I want to write a subscriber. Please suggest the changes required in the link. Comment by ahendrix on 2014-11-19: I'm sorry; I don't understand your question well enough to give a better answer. 
Comment by KDROS on 2014-11-19: Sorry. Go to the link and see the publisher topic in usb_cam_node.cpp; it publishes the image and camera info on this topic. How do I write a subscriber node for that? Comment by ahendrix on 2014-11-19: The image_transport subscriber tutorial should work. You may need to change the image topic, either in the tutorial or through remapping at runtime. Comment by KDROS on 2014-11-20: Since usb_cam publishes two things, image and camera info, I think the callback function needs to change. What is remapping at runtime? Comment by ahendrix on 2014-11-20: You said that you don't care about the camera info. Why do you want to subscribe to it? Comment by ahendrix on 2014-11-20: Remapping allows you to change topic names without recompiling. See: http://wiki.ros.org/Remapping%20Arguments for more info. Comment by KDROS on 2014-11-24: I want to ask a very basic question. When will usb_cam publish an image? After a fixed time, or whenever I save the image using right click in image_view? Comment by ahendrix on 2014-11-24: The ROS camera drivers publish images continuously. image_view is just another subscriber to this topic; it doesn't control how frequently images are published. You should be able to verify this using the ROS introspection tools such as rostopic echo and rostopic hz. Comment by KDROS on 2014-11-25: So is it possible to run image_view from another computer connected to the master computer (the master being where I am using usb_cam)? Comment by KDROS on 2014-11-25: The scenario is: the master will use usb_cam to capture images and publish them, and a remote computer connected to the master will subscribe to the published images. If running image_view from the remote computer is possible, then there is no need for anything else. Comment by KDROS on 2014-11-26: Thanks ahendrix. I will post a new question very soon. I have completed this job.
{ "domain": "robotics.stackexchange", "id": 20064, "tags": "ros, libopencv, image-transport" }
iOS Memory Game - Swift 4
Question: I'm learning iOS through this past Stanford CS193P course. I've just completed the first assignment and would love a code review, as I can't actually turn in an assignment for feedback since the course has already ended. The requirements can be found here, but any suggestions on best practices and my implementation would be helpful. The app works and is in a state I would submit if I could. I'm just looking for another set of eyes. GitHub

import Foundation

/**
 A game card that contains the following vars:
 - isFaceUp
 - isMatched
 - identifier
 */
struct Card: Hashable {
    var hashValue: Int { return identifier }

    static func == (lhs: Card, rhs: Card) -> Bool {
        return lhs.identifier == rhs.identifier
    }

    var isFaceUp = false
    var isMatched = false
    private var identifier: Int

    private static var identifierFactory = 0

    private static func getUniqueIdentifer() -> Int {
        identifierFactory += 1
        return identifierFactory
    }

    init() {
        self.identifier = Card.getUniqueIdentifer()
    }
}

import Foundation

struct Concentration {
    private(set) var cards = [Card]()
    private(set) var score = 0
    private(set) var flipCount = 0

    private var indexOfOneAndOnlyFaceUp: Int? {
        get {
            return cards.indices.filter({ cards[$0].isFaceUp }).oneAndOnly
        }
        set {
            for flipDownIndex in cards.indices {
                cards[flipDownIndex].isFaceUp = (flipDownIndex == newValue)
            }
        }
    }

    private var selectedIndex = Set<Int>()
    private var lastIndexWasSelected = false

    /// returns true if all cards have been matched
    var allCardsHaveBeenMatched: Bool {
        for index in cards.indices {
            if !cards[index].isMatched {
                return false
            }
        }
        return true
    }

    /**
     Choose a card at an index.
     Handles flip count, if a card is faced up, and matching of cards
     */
    mutating func chooseCard(at index: Int) {
        assert(cards.indices.contains(index), "Concentration.chooseCard(at:\(index)): index is not in the cards")
        let cardWasPreviouslySelected = selectedIndex.contains(index)
        if !cards[index].isMatched { // only flip cards that are visible
            flipCount += 1
            if let matchIndex = indexOfOneAndOnlyFaceUp, matchIndex != index {
                // 2 cards are face up, check if cards match
                if cards[index] == cards[matchIndex] {
                    cards[index].isMatched = true
                    cards[matchIndex].isMatched = true
                    if lastIndexWasSelected {
                        // add extra to account for subtracting earlier
                        score += 3
                    } else {
                        score += 2
                    }
                } else {
                    // no match
                    if cardWasPreviouslySelected { score -= 1 }
                }
                cards[index].isFaceUp = true
            } else {
                // one card is selected, turn down other cards and set this card
                if cardWasPreviouslySelected { score -= 1 }
                indexOfOneAndOnlyFaceUp = index
                lastIndexWasSelected = cardWasPreviouslySelected
            }
        }
        selectedIndex.insert(index)
    }

    init(numberOfPairsOfCards: Int) {
        assert(numberOfPairsOfCards > 0, "Concentraation.init(numberOfPairsOfCards:\(numberOfPairsOfCards) you must have multiple pairs of cards")
        for _ in 0..<numberOfPairsOfCards {
            let card = Card()
            cards += [card, card]
        }
        shuffleCards()
    }

    mutating private func shuffleCards() {
        for _ in 0..<cards.count {
            // sort seems better than .swap()
            cards.sort(by: { _, _ in arc4random() > arc4random() })
        }
    }
}

extension Collection {
    var oneAndOnly: Element? {
        return count == 1 ? first : nil
    }
}

import Foundation

class Theme {
    enum Theme: UInt32 {
        case halloween
        case love
        case animal
        case waterCreatures
        case plants
        case weather
    }

    /// get an array of icons by theme
    func getThemeIcons(by theme: Theme) -> [String] {
        switch theme {
        case .halloween: return ["", "", "", "", "", "☠️", "", ""]
        case .love: return ["", "", "", "", "❤️", "", "", ""]
        case .animal: return ["", "", "", "", "", "", "", ""]
        case .waterCreatures: return ["", "", "", "", "", "", "", ""]
        case .plants: return ["", "", "", "", "", "", "", ""]
        case .weather: return ["", "❄️", "☀️", "", "☔️", "", "☁️", "", "⛈"]
        }
    }

    private func random() -> Theme {
        let max = Theme.weather.rawValue
        let randomIndex = arc4random_uniform(max + UInt32(1))
        return Theme(rawValue: randomIndex) ?? Theme.halloween
    }

    /**
     get a random array of themed icons
     - Author: Anna
     */
    func getRandomThemeIcons() -> [String] {
        return getThemeIcons(by: random())
    }
}

import UIKit

class ViewController: UIViewController {
    private lazy var game = Concentration(numberOfPairsOfCards: numberOfPairsOfCards)

    var numberOfPairsOfCards: Int {
        return (cardButtons.count + 1) / 2
    }

    @IBOutlet weak var finishedLabel: UILabel!
    @IBOutlet weak var scoreCountLabel: UILabel!
    @IBOutlet weak var flipCountLabel: UILabel!

    private var flipCount = 0 {
        didSet { flipCountLabel.text = "Flip Count: \(flipCount)" }
    }

    private var scoreCount = 0 {
        didSet { scoreCountLabel.text = "Score: \(scoreCount)" }
    }

    @IBOutlet var cardButtons: [UIButton]!

    @IBAction func touchCard(_ sender: UIButton) {
        if let cardNumber = cardButtons.index(of: sender) {
            game.chooseCard(at: cardNumber)
            updateViewFromModel()
        } else {
            print("card is not in cardButton array")
        }
    }

    @IBAction func touchNewGame(_ sender: UIButton) {
        // reset game
        game = Concentration(numberOfPairsOfCards: (cardButtons.count + 1) / 2)
        // reset theme choices
        emojiChoices = theme.getRandomThemeIcons()
        // update view
        updateViewFromModel()
    }

    private func updateViewFromModel() {
        flipCount = game.flipCount
        scoreCount = game.score
        for index in cardButtons.indices {
            let button = cardButtons[index]
            let card = game.cards[index]
            if card.isFaceUp {
                button.setTitle(emoji(for: card), for: UIControlState.normal)
                button.backgroundColor = #colorLiteral(red: 1, green: 1, blue: 1, alpha: 1)
            } else {
                button.setTitle("", for: UIControlState.normal)
                button.backgroundColor = card.isMatched ? #colorLiteral(red: 1, green: 1, blue: 1, alpha: 0) : #colorLiteral(red: 1, green: 0.5763723254, blue: 0, alpha: 1)
            }
        }
        finishedLabel.textColor = game.allCardsHaveBeenMatched ? #colorLiteral(red: 1, green: 1, blue: 1, alpha: 1) : #colorLiteral(red: 0.9999960065, green: 1, blue: 1, alpha: 0)
        finishedLabel.text = game.score >= 0 ? "Nice work! " : "Phew, ly made it"
    }

    private var theme = Theme()
    private lazy var emojiChoices = theme.getRandomThemeIcons()
    private var emoji = [Card: String]()

    private func emoji(for card: Card) -> String {
        if emoji[card] == nil, emojiChoices.count > 0 {
            emoji[card] = emojiChoices.remove(at: emojiChoices.count.arc4random)
        }
        return emoji[card] ?? "?"
    }
}

extension Int {
    var arc4random: Int {
        if self > 0 {
            return Int(arc4random_uniform(UInt32(self)))
        } else if self < 0 {
            return -Int(arc4random_uniform(UInt32(abs(self))))
        } else {
            return 0
        }
    }
}

Answer: Concentration.swift — I have a few notes. It's better to describe what this entity is about; you can add a comment above the declaration:
import Foundation

struct Concentration {
    private(set) var cards = [Card]()
    private(set) var score = 0

This code can be rewritten:

    var allCardsHaveBeenMatched: Bool {
        for index in cards.indices {
            if !cards[index].isMatched {
                return false
            }
        }
        return true
    }

A better/shorter version:

    var allCardsHaveBeenMatched: Bool {
        return !cards.contains(where: { !$0.isMatched })
    }

This method is doing too much and it's hard to read; consider splitting it into a few smaller methods (Single Responsibility Principle):

    /**
     Choose a card at an index.
     Handles flip count, if a card is faced up, and matching of cards
     */
    mutating func chooseCard(at index: Int) {

Card.swift

Use UUID() instead of:

    private static var identifierFactory = 0

    private static func getUniqueIdentifer() -> Int {
        identifierFactory += 1
        return identifierFactory
    }

Theme.swift

Can be rewritten like this (using an enum without an enclosing class):

    import Foundation

    enum Theme: UInt32 {
        case halloween
        case love
        case animal
        case waterCreatures
        case plants
        case weather

        /// get an array of icons by theme
        var icons: [String] {
            switch self {
            case .halloween: return ["", "", "", "", "", "☠️", "", ""]
            case .love: return ["", "", "", "", "❤️", "", "", ""]
            case .animal: return ["", "", "", "", "", "", "", ""]
            case .waterCreatures: return ["", "", "", "", "", "", "", ""]
            case .plants: return ["", "", "", "", "", "", "", ""]
            case .weather: return ["", "❄️", "☀️", "", "☔️", "", "☁️", "", "⛈"]
            }
        }

        static func randomTheme() -> Theme {
            let max = Theme.weather.rawValue
            let randomIndex = arc4random_uniform(max + UInt32(1))
            return Theme(rawValue: randomIndex) ?? Theme.halloween
        }
    }

To summarize: your code is not bad, but it needs some improvements. You are using encapsulation to hide implementation details and preferring value types (like structs) over reference types (classes). It's a good starting point!

PS: Welcome to Code Review!!
{ "domain": "codereview.stackexchange", "id": 30810, "tags": "swift, ios" }
Minimum number of vectors needed in different planes for their resultant to be zero
Question: I was doing some problems in physics when I saw a question that asked: what is the minimum number of vectors needed in different planes for their resultant to be zero? I thought about this and came to the conclusion that it should be three. For example: suppose one vector is $3\mathbf{\hat{i}}+ 3\mathbf{\hat{j}}$ in the $xy$ plane, another $-3\mathbf{\hat{i}} + 3\mathbf{\hat{k}}$ in the $xz$ plane and another $-3\mathbf{\hat{j}} -3\mathbf{\hat{k}}$ in the $yz$ plane. Their resultant is zero. But the answer is $4$. I don't understand why. Please correct me if I am making a mistake. Thanks in advance. Answer: I think you may have some confusion about what you are being asked. To begin with, the three vectors you have described are coplanar, i.e. they lie in the same plane; to be specific, the three vectors are linearly dependent. In fact, any three vectors whose resultant is zero must be linearly dependent and hence lie in a common plane, so three vectors can never be "in different planes" and still cancel. You actually need one more vector to give a zero resultant with vectors that do not all lie in one plane. See the plots below: Both images are completely equivalent. The first is a plot of all three vectors and the second is a rotation showing they all lie in the same plane.
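A quick numerical check (a NumPy sketch; the variable names are mine) confirms both points: the three vectors from the question do cancel, but their scalar triple product vanishes, so they span only a single plane.

```python
import numpy as np

# The three vectors from the question
v1 = np.array([3.0, 3.0, 0.0])    # 3i + 3j, drawn in the xy plane
v2 = np.array([-3.0, 0.0, 3.0])   # -3i + 3k, drawn in the xz plane
v3 = np.array([0.0, -3.0, -3.0])  # -3j - 3k, drawn in the yz plane

resultant = v1 + v2 + v3                                 # [0, 0, 0]
triple_product = np.linalg.det(np.array([v1, v2, v3]))   # 0: coplanar
```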
{ "domain": "physics.stackexchange", "id": 79390, "tags": "homework-and-exercises, vectors" }
Interacting conformal field theories in spacetime dimensions higher than 6?
Question: Are there any papers which directly tackle the question of whether or not there exist interacting CFTs in spacetime dimensions higher than 6? It has been proven that there do not exist any superconformal field theories in spacetime dimensions higher than 6. This question is about conformal field theories in general, without supersymmetry. Answer: There are no papers in the literature that address the question directly. If you're interested, you can have a look at hep-th/1305.0004 by Fitzpatrick, Kaplan and Poland, which addresses a certain large-$D$ limit for conformal four-point functions, but they don't make any definite statements about whether such CFTs exist or not. Let me make an edit, since some comments mentioned some known examples in $D>6$. There are indeed various examples of interacting CFTs in $D>6$ that obey the axioms of CFT except for unitarity. The idea is that you take some field, say a scalar field $\phi$, with a kinetic term $$ L \sim \phi \Box^\alpha \phi $$ where $\alpha > 1$, either integer or fractional. (You can make rigorous sense of the operator $\Box^\alpha$ with some work.) The scaling dimension of $\phi$ is then $D/2-\alpha < D/2-1$, so the famous unitarity bound $\Delta \geq D/2-1$ is violated. Now for $\alpha$ sufficiently large and $n$ a given integer, you can write down interactions of the form $$ \delta L = g \phi^n $$ where $g$ is classically marginal for some $D = D_*$, and then you can repeat the usual RG story (do an expansion in $D = D_* - \epsilon$ dimensions etc.). The same extends to fermions and higher-spin fields. Such theories are interesting but probably not quite what OP is looking for.
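To spell out the power counting behind "classically marginal" (my own filling-in of the standard step, in the notation above): with $[\phi]=D/2-\alpha$, the coupling in $\delta L = g\,\phi^n$ has mass dimension
$$ [g] = D - n\left(\frac{D}{2}-\alpha\right), $$
which vanishes at
$$ D_* = \frac{2n\alpha}{n-2}. $$
As a sanity check, $\alpha = 1$, $n = 4$ recovers the familiar $D_* = 4$ of ordinary $\phi^4$ theory.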
{ "domain": "physics.stackexchange", "id": 56534, "tags": "quantum-field-theory, conformal-field-theory, spacetime-dimensions, interactions" }
Thermodynamic functions of state for freely jointed polymer chain derived from partition function
Question: I'm reading a stat thermo text (Terrel Hill) about the freely jointed chain problem. It all goes well until I hit the thermodynamic function of state derived from the canonical partition function. The problem goes like the following: Consider a linear polymer chain made up of M units, where M is large enough so that one chain can be considered a thermodynamic system. Each unit can exist in the states $i = 1, 2, . . . , n$ with partition functions $j_i(T)$ and lengths $l_i$. The total length of the chain is $l$. The system (chain) is characterized thermodynamically by $l, M, T$. Because this is a freely jointed chain, the canonical ensemble partition function for the system follows the formalism of the mixture of ideal gas species. Therefore, the partition function $Q$ is then: $$Q(l, M, T)=\sum_M M!\prod^n_{i=1}\frac{j_i^{M_i}}{M_i!} \tag{1}$$ where $M_i$ is the number of units with length $l_i$, and the sum is over all sets $M=\{M_1, M_2, ..., M_n\}$ consistent with the restrictions: \begin{align*}\sum_{i=1}^nM_i &=M, \tag{2} \\ \sum_{i=1}^nl_iM_i &=l \tag{3}\end{align*} If we choose $l$ as an independent variable, the text says that the appropriate thermodynamic equation is: $$dA=-SdT+\tau dl+\mu dM \tag{4}$$ With $\tau$ being the force pulling on the chain. My question is: Why "$+\tau dl$" rather than "$-\tau dl$"? Since we have $dA=-SdT -pdV + \mu dM$ for ideal gas. Answer: It's a tensile force. You do work by pulling on the chain, so as to extend it. For the ideal gas, you do work by compressing it.
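Spelling out the sign convention in the answer (my own elaboration): at fixed $T$ and $M$, $dA$ equals the reversible work done on the system,
$$ dW_{\text{on gas}} = -p\,dV \quad (\text{positive under compression, } dV<0), $$
$$ dW_{\text{on chain}} = +\tau\,dl \quad (\text{positive under extension, } dl>0), $$
so the tensile term enters $dA$ with the opposite sign to the pressure term.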
{ "domain": "physics.stackexchange", "id": 54745, "tags": "thermodynamics, statistical-mechanics" }
Combining the Einstein field equations with the geodesic equation
Question: I've seen this question How does Einstein field equations interact with geodesic equation?, but it doesn't make any sense to me. If spacetime is a Lorentzian manifold, then surely one thing general relativity tells us is what the possible manifolds are when gravity is the only "force". And in that context, the field equations themselves don't restrict the manifold at all -- any manifold has an Einstein tensor $G_{\mu\nu}$ that represents a possible matter distribution in that it is "conserved" (zero divergence). So the form of the manifold is restricted only by the geodesic equation. How so? Well, here's my thought process: First of all, we can say that $G(\vec u, \vec u)$, for a timelike unit vector $\vec u$, is the mass density flowing along $\vec u$. And all we require is that that mass follows a geodesic. So, if we take the geodesic along $\vec u$, the same mass should remain on it the whole time. But since the mass can spread out spatially over time, we actually have to consider a tight family of initially parallel geodesics, and say that the density integrated over their spatial cross-section (volume) $V$ is what remains constant. So we would basically get $$\nabla_{\vec u} \left(G(\vec u, \vec u) \cdot V_{\text{geodesic}}\right) = 0.$$ Now, the variation of the cross-sectional volume along a tight family of initially parallel geodesics is exactly what the Ricci curvature describes. Except that it is the second derivative of that volume, because the first derivative is identically zero, because they are initially parallel. But we wouldn't want the $\nabla_{\vec u}$ to be ${\nabla_{\vec u}}^2$, because that would allow the total mass to change linearly. So clearly something is off with the above equation. Or maybe I'm barking up the wrong tree, but the conceptual statement makes sense to me. Can anyone clarify this, and/or point me to an online reference that explains conceptually how the two equations are combined? 
Answer: I feel there is something interesting to explore here, but it is not physical to assert "any manifold ... represents a possible matter distribution". The equivalent statement in Newtonian gravity would be to assert that there are no constraints on mass, not even that it cannot be negative. But such a constraint is an important aspect of the theory as a physical theory; it cannot be dropped or ignored. In G.R. the constraints on $T^{ab}$ (going by such names as weak energy condition, strong energy condition and so on) are part of the story of general relativity considered as a physical theory, as opposed to merely some mathematical tools to deal with manifolds. But it has never been quite fully settled which of these constraints one should insist upon. It is an open area of research at the boundary of quantum theory and general relativity. A widely discussed aspect of G.R. is that the geodesic equation, with its interpretation as an equation of motion for particles, follows from the field equation (see e.g. MTW book). This is a bit like the way you can deduce the Lorentz force equation from the Maxwell equations plus energy conservation. So there are connections between geodesic equation, field equation and conservation. But that's about as far as this answer is going to take it. It is perhaps more of a long comment than strictly an answer, but I hope you may find it helpful.
{ "domain": "physics.stackexchange", "id": 76514, "tags": "general-relativity, spacetime, differential-geometry, curvature, geodesics" }
Time period of an anharmonic but periodic motion
Question: How do I find the time period of anharmonic motion given an expression of force as a function of $x$? This is the question I was solving: $$ U(x)=k|x|^3 $$ where $k$ is a positive constant. If the amplitude of the oscillation is $a$, then how is the time period $T(a)$ related to $a$? Answer: The standard approach for a conservative system is to write the full energy: $$E= \frac{m\dot{x}^2}{2}+U(x),$$ which allows one to express $$\frac{dt}{dx}=\frac{1}{\dot{x}}=\sqrt{\frac{m}{2}} \frac{1}{\sqrt{E-U(x)}}.$$ One can now integrate over half of a period, i.e. from the leftmost to the rightmost position, defined by $U(x_L)=E, U(x_R)=E$: $$\frac{T}{2} = \sqrt{\frac{m}{2}}\int_{x_L}^{x_R}dx \frac{1}{\sqrt{E-U(x)}}.$$ In the case of a symmetric potential with a minimum at $x=0$ ($U(-x) = U(x)$), we can define the amplitude as $U(\pm a) = E$ and rewrite the last equation as $$\frac{T}{2} = \sqrt{\frac{m}{2}}\int_{-a}^{a}dx \frac{1}{\sqrt{U(a)-U(x)}}= \sqrt{2m}\int_{0}^{a}dx \frac{1}{\sqrt{U(a)-U(x)}} .$$ For $U(x)=k|x|^3$ the amplitude satisfies $E = ka^3$; substituting $x = au$ in the last integral gives $$T = \sqrt{8m}\int_{0}^{a}\frac{dx}{\sqrt{k(a^3-x^3)}} = \sqrt{\frac{8m}{k}}\; a^{-1/2} \int_{0}^{1}\frac{du}{\sqrt{1-u^3}},$$ so the period scales as $T(a) \propto 1/\sqrt{a}$.
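A numerical evaluation of the period integral (my own sketch, assuming $m = k = 1$; scipy.integrate.quad copes with the integrable inverse-square-root singularity at $x=a$) shows that for $U(x)=k|x|^3$ the period scales as $T(a) \propto a^{-1/2}$:

```python
import numpy as np
from scipy.integrate import quad

def period(a, m=1.0, k=1.0):
    """Period of oscillation with amplitude a in the potential U(x) = k|x|^3,
    via T = sqrt(8m) * integral_0^a dx / sqrt(U(a) - U(x))."""
    U = lambda x: k * abs(x)**3
    integral, _ = quad(lambda x: 1.0 / np.sqrt(U(a) - U(x)), 0.0, a)
    return np.sqrt(8.0 * m) * integral

# Quadrupling the amplitude should halve the period, since T ~ a^(-1/2)
ratio = period(4.0) / period(1.0)   # ~ 0.5
```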
{ "domain": "physics.stackexchange", "id": 67008, "tags": "homework-and-exercises, classical-mechanics, time, oscillators, anharmonic-oscillators" }
Player VS player Tic-Tac-Toe game
Question: This program is supposed to be a player VS player Tic-Tac-Toe game. The computer then checks whether you've won, lost, or tied. The code does work, but I don't want it to work through hacking; I want it to be done the right way.

Sub Main()
    'Initialise variables
    Dim StrCoordinate(3, 3) As String
    Dim BooGameOver As Boolean = False
    Dim BooIsNextANaught As Boolean = False
    Dim IntLoopCounterX As Integer = 0
    Dim IntLoopCounterY As Integer = 0
    Dim IntTempStorage As Integer = 1
    Dim StrPlayAgain As String
    Dim IntTempStorage2 As Integer = 0
    Dim BooIsInputValid As Boolean = False

    Do
        'Set all coordinates' contents to "-"
        For IntLoopCounterY = 0 To 2
            For IntLoopCounterX = 0 To 2
                StrCoordinate(IntLoopCounterX, IntLoopCounterY) = ("-")
            Next
        Next

        'Display Instructions
        Console.WriteLine("╔═════════════════════════════════════════════╗")
        Console.WriteLine("║Welcome to this tic-tac-toe game!            ║")
        Console.WriteLine("║Please enter the coordinates as you are told ║")
        Console.WriteLine("║in order to place your X or 0 there.         ║")
        Console.WriteLine("╚═════════════════════════════════════════════╝")
        Console.WriteLine()
        Console.WriteLine("Coordinates are as follows:")
        Console.WriteLine(vbTab & "╔═══╦═══╦═══╗")
        Console.WriteLine(vbTab & "║1,1║2,1║3,1║")
        Console.WriteLine(vbTab & "╠═══╬═══╬═══╣")
        Console.WriteLine(vbTab & "║1,2║2,2║3,2║")
        Console.WriteLine(vbTab & "╠═══╬═══╬═══╣")
        Console.WriteLine(vbTab & "║1,3║2,3║3,3║")
        Console.WriteLine(vbTab & "╚═══╩═══╩═══╝")
        Console.ReadKey()
        Console.WriteLine("Press any key once you have understood the instructions.")
        Console.Clear()

        Do
            'Display Empty Table
            Console.WriteLine()
            Console.WriteLine(vbTab & "╔═══╦═══╦═══╗")
            For IntLoopCounterY = 0 To 2
                Console.Write(vbTab & "║ ")
                For IntLoopCounterX = 0 To 2
                    Console.Write(StrCoordinate(IntLoopCounterX, IntLoopCounterY) & " ║ ")
                Next
                Console.WriteLine()
                If IntLoopCounterY = 2 Then
                    Console.WriteLine(vbTab & "╚═══╩═══╩═══╝")
                Else
                    Console.WriteLine(vbTab & "╠═══╬═══╬═══╣")
                End If
            Next
            Console.WriteLine()

            'Ask player one or two for input
            If BooIsNextANaught = False Then
                Do
                    'Ask player 1 for input
                    Console.ForegroundColor = ConsoleColor.Cyan
                    Console.WriteLine(" PLAYER ONE")
                    Console.WriteLine("═══════════════════════════════════════════════════════════════════════════")
                    Console.WriteLine("Please enter coordinate X, then enter. Then enter coordinate Y, then enter.")
                    Console.WriteLine("Coordinates range from 1 to 3.")
                    'Input coordinates for X
                    Do
                        Console.Write("X (Column)> ")
                        IntTempStorage = Console.ReadLine
                        Console.Write("Y (Row)> ")
                        IntTempStorage2 = Console.ReadLine
                    Loop Until IntTempStorage > 0 And IntTempStorage < 4 And IntTempStorage2 > 0 And IntTempStorage2 < 4

                    'Check input
                    Select Case StrCoordinate((IntTempStorage - 1), (IntTempStorage2 - 1))
                        'Is input being used already?
                        Case Is = ("X"), ("0")
                            BooIsInputValid = False
                            Console.Clear()
                            Console.BackgroundColor = ConsoleColor.DarkRed
                            Console.WriteLine("══════════════════")
                            Console.WriteLine("ALREADY USED SPACE")
                            Console.WriteLine("Please try again: ")
                            Console.WriteLine("══════════════════")
                            Console.WriteLine()
                            Console.WriteLine()
                            Console.WriteLine(vbTab & "╔═══╦═══╦═══╗")
                            For IntLoopCounterY = 0 To 2
                                Console.Write(vbTab & "║ ")
                                For IntLoopCounterX = 0 To 2
                                    Console.Write(StrCoordinate(IntLoopCounterX, IntLoopCounterY) & " ║ ")
                                Next
                                Console.WriteLine()
                                If IntLoopCounterY = 2 Then
                                    Console.WriteLine(vbTab & "╚═══╩═══╩═══╝")
                                Else
                                    Console.WriteLine(vbTab & "╠═══╬═══╬═══╣")
                                End If
                            Next
                            Console.WriteLine()
                            Console.BackgroundColor = ConsoleColor.Black
                        Case Is = ("-")
                            'Set StrCoordinate(input) with X
                            StrCoordinate((IntTempStorage - 1), (IntTempStorage2 - 1)) = ("X")
                            BooIsInputValid = True
                            BooIsNextANaught = True
                            'check if winner:
                            For IntLoopCounterY = 0 To 2
                                'check all rows:
                                If StrCoordinate(0, IntLoopCounterY) = ("X") And StrCoordinate(1, IntLoopCounterY) = ("X") And StrCoordinate(2, IntLoopCounterY) = ("X") Then
                                    BooGameOver = True
                                    Console.WriteLine("══Player 1 wins══")
                                    Console.ReadKey()
                                'check both diagonals:
                                ElseIf StrCoordinate(0, 0) = ("X") And StrCoordinate(1, 1) = ("X") And StrCoordinate(2, 2) = ("X") Then
                                    BooGameOver = True
                                    Console.WriteLine("══Player 1 wins══")
                                    Console.ReadKey()
                                ElseIf StrCoordinate(0, 2) = ("X") And StrCoordinate(1, 1) = ("X") And StrCoordinate(3, 1) = ("X") Then
                                    BooGameOver = True
                                    Console.WriteLine("══Player 1 wins══")
                                    Console.ReadKey()
                                'check all columns
                                ElseIf StrCoordinate(IntLoopCounterY, 0) = ("X") And StrCoordinate(IntLoopCounterY, 1) = ("X") And StrCoordinate(IntLoopCounterY, 2) = ("X") Then
                                    BooGameOver = True
                                    Console.WriteLine("══Player 1 wins══")
                                    Console.ReadKey()
                                End If
                            Next
                        Case Else
                            Console.WriteLine("It's Dead, Jim!")
                    End Select
                    Console.ResetColor()
                Loop Until BooIsInputValid = True
            Else
                Do
                    'Ask player 2 for input
                    Console.ForegroundColor = ConsoleColor.Yellow
                    Console.WriteLine(" PLAYER TWO")
                    Console.WriteLine("═══════════════════════════════════════════════════════════════════════════")
                    Console.WriteLine("Please enter coordinate X, then enter. Then enter coordinate Y, then enter.")
                    Console.WriteLine("Coordinates range from 1 to 3.")
                    Do
                        Console.Write("X (Column)> ")
                        IntTempStorage = Console.ReadLine
                        Console.Write("Y (Row)> ")
                        IntTempStorage2 = Console.ReadLine
                    Loop Until IntTempStorage > 0 And IntTempStorage < 4 And IntTempStorage2 > 0 And IntTempStorage2 < 4

                    Select Case StrCoordinate((IntTempStorage - 1), (IntTempStorage2 - 1))
                        Case Is = ("X"), ("0")
                            BooIsInputValid = False
                            Console.Clear()
                            Console.BackgroundColor = ConsoleColor.DarkRed
                            Console.WriteLine("══════════════════")
                            Console.WriteLine("ALREADY USED SPACE")
                            Console.WriteLine("Please try again: ")
                            Console.WriteLine("══════════════════")
                            Console.WriteLine()
                            Console.BackgroundColor = ConsoleColor.Black
                            Console.WriteLine(vbTab & "╔═══╦═══╦═══╗")
                            For IntLoopCounterY = 0 To 2
                                Console.Write(vbTab & "║ ")
                                For IntLoopCounterX = 0 To 2
                                    Console.Write(StrCoordinate(IntLoopCounterX, IntLoopCounterY) & " ║ ")
                                Next
                                Console.WriteLine()
                                If IntLoopCounterY = 2 Then
                                    Console.WriteLine(vbTab & "╚═══╩═══╩═══╝")
                                Else
                                    Console.WriteLine(vbTab & "╠═══╬═══╬═══╣")
                                End If
                            Next
                            Console.WriteLine()
                        Case Is = ("-")
                            StrCoordinate((IntTempStorage - 1), (IntTempStorage2 - 1)) = ("0")
                            BooIsInputValid = True
                            BooIsNextANaught = False
                            'check if winner:
                            For IntLoopCounterY = 0 To 2
                                'check all rows:
                                If StrCoordinate(0, IntLoopCounterY) = ("0") And StrCoordinate(1, IntLoopCounterY) = ("0") And StrCoordinate(2, IntLoopCounterY) = ("0") Then
                                    BooGameOver = True
                                    Console.WriteLine("══Player 2 wins══")
                                    Console.ReadKey()
                                'check both diagonals:
                                ElseIf StrCoordinate(0, 0) = ("0") And StrCoordinate(1, 1) = ("0") And StrCoordinate(2, 2) = ("0") Then
                                    BooGameOver = True
                                    Console.WriteLine("══Player 2 wins══")
                                    Console.ReadKey()
                                ElseIf StrCoordinate(0, 2) = ("0") And StrCoordinate(1, 1) = ("0") And StrCoordinate(3, 1) = ("0") Then
                                    BooGameOver = True
                                    Console.WriteLine("══Player 2 wins══")
                                    Console.ReadKey()
                                'check all columns
                                ElseIf StrCoordinate(IntLoopCounterY, 0) = ("0") And StrCoordinate(IntLoopCounterY, 1) = ("0") And StrCoordinate(IntLoopCounterY, 2) = ("0") Then
                                    BooGameOver = True
                                    Console.WriteLine("══Player 2 wins══")
                                    Console.ReadKey()
                                End If
                            Next
                        Case Else
                            Console.WriteLine("It's Dead, Jim!")
                    End Select

                    If Not StrCoordinate((IntTempStorage - 1), (IntTempStorage2 - 1)) = ("-") Then
                    Else
                        StrCoordinate((IntTempStorage - 1), (IntTempStorage2 - 1)) = ("0")
                        BooIsInputValid = True
                        BooIsNextANaught = False
                    End If
                    Console.ResetColor()
                Loop Until BooIsInputValid = True
            End If

            IntTempStorage = 0
            IntTempStorage2 = 0
            Console.Clear()

            If BooGameOver = False Then
                For IntLoopCounterY = 0 To 2
                    For IntLoopCounterX = 0 To 2
                        If Not StrCoordinate(IntLoopCounterX, IntLoopCounterY) = ("-") Then
                            IntTempStorage2 = IntTempStorage2 + 1
                        End If
                    Next
                Next
                If IntTempStorage2 = 9 Then
                    BooGameOver = True
                    Console.WriteLine("It's a draw")
                End If
                IntTempStorage = 0
                IntTempStorage2 = 0
            End If
        Loop Until BooGameOver = True

        Console.WriteLine("Would you like to play again? Y/N")
        StrPlayAgain = UCase(Console.ReadLine)
        Select Case StrPlayAgain
            Case Is = ("Y")
                BooGameOver = False
            Case Else
                BooGameOver = True
        End Select
    Loop Until BooGameOver = True

    REM End
    Console.WriteLine()
    Console.ForegroundColor = ConsoleColor.Blue
    Console.Write("EndOfProgram. ")
    Console.ForegroundColor = ConsoleColor.DarkBlue
    Console.Write("Enter any key to close.")
    Console.WriteLine()
    Console.ReadKey()
    Console.ResetColor()
End Sub

Answer: You have a big bad block of monolithic code here, stuffed in a does-it-all Main() procedure that is begging to be broken down into smaller, more focused procedures and functions.
You need to first identify concerns: Writing to console Reading from console Evaluating game state Writing to the console involves several distinct operations: Rendering the game title/instructions screen. Rendering the game screen (the 3x3 grid). Rendering the game over screen. Reading from the console also involves several distinct operations: Displaying a message and waiting for Enter to be pressed. Displaying a message and getting valid coordinates from the user. Displaying a message and getting a Y or N answer from the user. Create classes to encapsulate each concern, and write methods for each operation. As far as best practices are concerned, I'm biased with IoC and dependency-injection, so my Sub Main() would probably look something like this: Public Class Program Sub Main() Dim ConsoleReader As New ConsoleUserInputProvider() Dim Game As New TicTacToeGame(ConsoleReader) Game.Run() End Sub End Class The Game object encapsulates the game's logic; the ConsoleUserInputProvider exposes methods that TicTacToeGame uses to get the user's input. It implements some IUserInputProvider interface, which could just as well be implemented by some WinFormsUserInputProvider or WpfUserInputProvider if you wanted some fancypants GUI instead of just a console; the way you have your code, the concerns of game logic and console input/output are so tightly coupled it could very well be simpler to just rewrite the application from scratch to give it a new UI. Your naming style is - I'll be gentle - from another era. Hungarian Notation is outdated and doesn't contribute in any way to make your code more readable: drop those type prefixes!
{ "domain": "codereview.stackexchange", "id": 5202, "tags": "vb.net, tic-tac-toe" }
Reflection through multiple mirrors
Question: Two plane mirrors are arranged at right angles to each other. A ray of light is incident on the horizontal mirror at an angle X. For what value of X does the ray emerge parallel to the incoming ray after reflection from the vertical mirror? Answer: If you assume the rules of geometrical optics (note that no change of medium is implied by the exercise), you may conclude that the angle of incidence equals the angle of reflection on both mirrors, such that if X is the first angle of incidence, measured with respect to the first mirror, the second angle of reflection is 90°-X, measured with respect to the second mirror. If the first incident ray is inclined at X with respect to the horizontal, it will strike the second mirror at an angle of 90°-X, so it will be reflected at 90°+X, i.e. with an inclination of X; so the ray incident on the first mirror and the ray reflected from the second will always be parallel, independently of X.
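A quick numerical check of that conclusion (an illustrative Python sketch, with the horizontal and vertical mirrors assumed to lie along the x- and y-axes): two successive reflections simply reverse the ray direction, so the outgoing ray is parallel to the incoming one for any X.

```python
import math

def double_reflect(angle_deg):
    # incoming ray direction, inclined at X to the horizontal mirror
    a = math.radians(angle_deg)
    d_in = (math.cos(a), -math.sin(a))
    # reflection off the horizontal mirror flips the vertical component
    d1 = (d_in[0], -d_in[1])
    # reflection off the vertical mirror flips the horizontal component
    d_out = (-d1[0], d1[1])
    return d_in, d_out

for x in (10.0, 30.0, 45.0, 80.0):
    d_in, d_out = double_reflect(x)
    # the outgoing direction is exactly the reverse of the incoming one,
    # hence the two rays are parallel regardless of X
    assert abs(d_out[0] + d_in[0]) < 1e-12
    assert abs(d_out[1] + d_in[1]) < 1e-12
```

This is just the familiar retroreflector (corner-cube) property of two perpendicular mirrors.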
{ "domain": "physics.stackexchange", "id": 46686, "tags": "reflection" }
Why do we feel hot because of sunlight?
Question: Sunlight, and light generally, is an electromagnetic wave which turns into heat when it contacts matter (solid, liquid, etc.). Is that right? Answer: Light like sunlight is an electromagnetic wave with components at different frequencies. These components follow a particular distribution of intensities. One portion of the energy of this light resides in what is called infrared radiation, and most materials absorb in that range (the link provides some extra information regarding this radiation and heat). This absorption predominantly causes the heating of matter that you mention.
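To put a rough number on "one portion of the energy resides in the infrared": the sketch below (an assumption-laden estimate, treating the Sun as a black body at about 5778 K and counting wavelengths above 700 nm as infrared) numerically integrates the Planck spectrum.

```python
import math

H, C, K = 6.626e-34, 2.998e8, 1.381e-23  # Planck, speed of light, Boltzmann

def planck(lam, T):
    # black-body spectral radiance at wavelength lam (metres)
    return (2 * H * C**2 / lam**5) / math.expm1(H * C / (lam * K * T))

def band_fraction(T, lam_lo, lam_hi, n=20000):
    # fraction of total radiated power between lam_lo and lam_hi,
    # via midpoint integration (1e-8 m .. 1e-4 m covers essentially all power)
    def integral(a, b):
        dx = (b - a) / n
        return sum(planck(a + (i + 0.5) * dx, T) for i in range(n)) * dx
    return integral(lam_lo, lam_hi) / integral(1e-8, 1e-4)

# roughly half the Sun's output is at infrared wavelengths (beyond ~700 nm)
ir_fraction = band_fraction(5778, 7e-7, 1e-4)
```

With these assumed values the infrared fraction comes out near one half, consistent with the point that a large share of the heating comes from the infrared part of sunlight.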
{ "domain": "physics.stackexchange", "id": 21229, "tags": "electromagnetism, thermodynamics" }
How to represent isobaric processes using the equation for the first law of thermodynamics?
Question: According to the Kaplan physics book, there are "multiple possible forms" for the FLoT equation with isobaric processes, but that doesn't make sense to me. I know that the FLoT equation is $$\Delta U = Q - W$$ I also know that in isobaric processes $W = P\Delta V$. So I'd assume that for any question involving isobaric processes, some combination of $P$, $V_f$, $V_i$, $Q$, and $\Delta U$ would be given and I would have to just use these two equations to find the desired variable. I'm not sure what other forms there would be. Answer: It's not clear what your goal is, but if it is to determine the work done for an isobaric process, then obviously if you know $\Delta U$ and $Q$ you can calculate $W$. In order to calculate the work directly from $W=P\Delta V$, the constant pressure $P$ is always the external pressure, and of course $\Delta V=V_{f}-V_{i}$. If the isobaric process is reversible, then the gas is in internal equilibrium and its pressure throughout will always equal the external pressure. For a reversible process involving an ideal gas, the ideal gas law applies at every point in the process. For an irreversible process, the ideal gas law only applies to the initial and final equilibrium states. Beyond that, without seeing Kaplan's statement "there are 'multiple possible forms' for FLoT equation with isobaric processes" in context, it's not clear what this means. For example, for an ideal gas reversible isobaric process one can substitute $mC_{v}\Delta T$ for $\Delta U$ and substitute $mC_{p}\Delta T$ for $Q$ and thereby change the form of the first law to: $$mC_{v}\Delta T=mC_{p}\Delta T-P\Delta V$$ For what it's worth, hope this helps.
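That last identity is easy to check numerically for an assumed example: one mole of a monatomic ideal gas in a reversible isobaric process, where $C_p = C_v + R$ and $P\Delta V = nR\Delta T$ by the ideal gas law.

```python
R = 8.314          # J/(mol K)
n = 1.0            # mol (assumed amount of gas)
Cv = 1.5 * R       # molar heat capacity of a monatomic ideal gas
Cp = Cv + R        # Mayer's relation
dT = 50.0          # K (assumed temperature rise)

dU = n * Cv * dT   # change in internal energy
Q = n * Cp * dT    # heat added at constant pressure
W = n * R * dT     # P * dV, via the ideal gas law at constant P

# the first law in its isobaric form: n Cv dT = n Cp dT - P dV
assert abs(dU - (Q - W)) < 1e-9
```

So the "multiple forms" are just the same first law with $\Delta U$, $Q$, and $W$ rewritten using the heat capacities and the gas law.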
{ "domain": "physics.stackexchange", "id": 75239, "tags": "thermodynamics, pressure, work, volume" }
What are good beginner packages for object avoidance?
Question: Hi there, I'm using ROS for a class project, but I'm very much a beginner, so I'm still getting adapted to the libraries and packages and all. I intend to use this package ...ros.org/wiki/cob_collision_velocity_filter?distro=groovy to use as the foundation for object avoidance; however, I'm unsure if that's a better start than building object avoidance from the ground up. The robot on which I'll use the code has a kinect, laptop camera, and proximity laser. I don't know how well the packages will match up. I'd like your suggestions on what I should try first so I won't have to invest too much time in a chicken egg or something over my head. Thank you! Originally posted by chaosprodigy on ROS Answers with karma: 1 on 2013-02-26 Post score: 0 Answer: I think the standard navigation stack should do the trick. (http://www.ros.org/wiki/navigation/Tutorials). I think it's difficult enough getting all of the navigation pieces working together, it's probably best not to re-invent the wheel, especially when you're just starting. Originally posted by Jon Stephan with karma: 837 on 2013-02-26 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 13082, "tags": "ros, packages, beginner" }
How to get the melody from a signal
Question: I'm trying to retrieve a melody from a signal (simple sine with 3 notes). I've windowed the signal and performed autocorrelation to get the fundamental frequency of each window and then converted frequency to note. So for a signal with notes "G A E" I am getting something like "G G G A A B E E E E F". It makes sense that notes are repeated and there are some mistakes... But how should I deal with this? If I use a bigger window then I might miss some notes with shorter duration. I can think of a really naive algorithm to get the melody for this example but what are some real techniques that are usually applied? Answer: Some common approaches: Simple post-processing Apply a mode filter on your resulting sequence of pitches. This will increase temporal inaccuracies though! Pre-segmentation Instead of analyzing your signal according to a grid of fixed-size windows, try to detect first the boundaries between notes using an onset detection algorithm. Once you have performed the onset detection, your signal is divided into segments in which you can make the assumption that the pitch is constant. You then apply your pitch detection algorithm to each segment. This works best for instruments with a sharp attack - like guitar or piano (playing monophonic melodies) - but not very well in which the pitch of a held note is modulated without sharp attacks between notes (flute, whistling, singing voice). Dynamic programming You define a function $C_i(f)$ which indicates how unlikely it is that frequency $f$ is the fundamental frequency of the $i$-th signal window. You can use, for example, the reciprocal of the autocorrelation function; or sort your pitch hypotheses. For example, if the 4 highest peaks in the autocorrelation function of the 6th signal window are at 220 Hz, 440 Hz, 500 Hz and 250 Hz, $C_6(220) = 0$, $C_6(440) = 1$, $C_6(500) = 2$ and so on. The higher the function, the most unlikely the argument is to be true fundamental frequency. 
You define a transition cost $T(f_1, f_2)$ which indicates how unlikely it is that a window with fundamental frequency $f_1$ will be followed by a window with fundamental frequency $f_2$. This transition cost can embed basic musicological knowledge (which notes are more likely to follow other notes) but will mostly serve as temporal smoothing. You use dynamic programming to find the sequence $f^*$ that minimizes: $$\sum_i C_i(f^*_i) + T(f^*_{i - 1}, f^*_i)$$ Intuitively, this searches for a sequence of pitches that is both compatible with your observations, but also doesn't change too much. Adjusting the relative weight between $C$ and $T$ will allow you to control the amount of smoothing. HMM Manually annotate a large collection of files. Train a HMM using autocorrelation vectors as your feature vectors, and musical notes as your states. To transcribe a melody, find the most likely sequence of states that explain your autocorrelation vectors. This is done by the Viterbi algorithm, which will take exactly the same form as the dynamic programming method given above. The only difference is that there is this time a rigorous statistical framework for defining and learning the scoring and transition functions.
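The dynamic-programming idea above can be sketched in Python. This is an illustrative implementation, not a standard library routine: the transition cost is assumed to grow with the jump in log-frequency (so octave-scale jumps are penalized), and the costs matrix stands in for whatever unlikelihood function you derive from the autocorrelation peaks.

```python
import math

def smooth_pitch_track(costs, candidates, switch_penalty=2.0):
    # costs[i][j]: how unlikely candidates[j] is as the pitch of frame i
    # (lower = better); candidates: candidate fundamental frequencies in Hz.
    n_frames, n_cand = len(costs), len(candidates)
    log_f = [math.log2(f) for f in candidates]

    def trans(k, j):
        # transition cost: proportional to the jump in log-frequency
        return switch_penalty * abs(log_f[k] - log_f[j])

    total = list(costs[0])
    back = [[0] * n_cand for _ in range(n_frames)]
    for i in range(1, n_frames):
        new_total = []
        for j in range(n_cand):
            k = min(range(n_cand), key=lambda k: total[k] + trans(k, j))
            back[i][j] = k
            new_total.append(total[k] + trans(k, j) + costs[i][j])
        total = new_total
    # backtrack the cheapest pitch sequence
    j = min(range(n_cand), key=lambda j: total[j])
    path = [j]
    for i in range(n_frames - 1, 0, -1):
        j = back[i][j]
        path.append(j)
    return [candidates[j] for j in reversed(path)]

# a single-frame outlier (330 Hz at frame 2) is smoothed away:
costs = [[0.0, 1.0, 1.0], [0.0, 1.0, 1.0], [0.5, 1.0, 0.0],
         [0.0, 1.0, 1.0], [0.0, 1.0, 1.0]]
assert smooth_pitch_track(costs, [196.0, 220.0, 330.0]) == [196.0] * 5
```

With `switch_penalty=0` each frame is picked independently and the outlier survives; raising the penalty trades temporal accuracy for smoothness, exactly as described above.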
{ "domain": "dsp.stackexchange", "id": 1865, "tags": "autocorrelation, pitch" }
Do death rate and birth rate become equal at replacement fertility rate?
Question: A population transitions from stage 3 to stage 4 when the fertility rate reaches the population replacement level of 2.1. My question is that if the population replacement level is reached, shouldn't the net growth rate be zero? i.e., birth rate = death rate? But in the figure, in stage 4, birth rate > death rate and there is still a population increase. How is this so? Answer: The transition to stationary populations is between stage 4 and stage 5 in all examples of the DT model that I've seen. Not between stage 3 and 4. This is also what is being implied in the figure that you've included. In your link they also write: Countries like India in the third phase of demographic transition have fertility rates that have declined significantly from previously high levels but have not reached the population-stabilizing "replacement level" of 2.1 children per woman. This is not saying that countries reach the replacement level 2.1 at the end of stage 3. The same stages are also described at the Wikipedia page on DT and at populationeducation.org. So you are correct; at replacement levels, populations shouldn't increase and the net growth should be zero.
{ "domain": "biology.stackexchange", "id": 8388, "tags": "ecology, population-dynamics, population-biology, demography" }
EMF Generated according to Faraday's Law
Question: According to Faraday's Law, due to a relative movement between the current carrying loop and the magnetic field, an EMF is induced in the loop causing a current flow. However, according to Maxwell-Faraday's Law, $$\nabla \times \bf E = -\frac{\partial \bf B}{\partial t}$$ Clearly, if the magnetic field changes, the electric field becomes non-conservative and electric potential is no longer defined. In that case, how can we even define the emf of the circuit? What does induced emf actually mean then? Answer: how can we even define the emf of the circuit? The EMF is defined as a line integral of the electric field. According to Faraday's Law, due to a relative movement between the current carrying loop and the magnetic field, an EMF is induced in the loop Notice the wording: The EMF is induced on (or around) the loop, not "between two points" as we would say about a potential difference. Of course we can also have an EMF for an open path between two points (as opposed to a loop), but we must specify the path to be taken between the points. The EMF between the points can be different for different paths between the points. Clearly, if the magnetic field changes, the electric field becomes non-conservative Again, this is exactly it. If the field were conservative, then the EMF around any loop in that field would be 0. Only with a non-conservative field can we have a non-zero EMF around a closed loop. Since we define the EMF for each path of integration between two points, the EMF will be well-defined even when the potential difference between two points in a system is not.
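The point that a non-conservative field gives a non-zero loop EMF can be illustrated numerically. The sketch below uses two made-up example fields (not from the question): a conservative field $\mathbf E=(x,y)$, whose line integral around any closed loop vanishes, and the constant-curl field $\mathbf E=(-y,x)$, whose integral around the unit circle is $2\pi$ (twice the enclosed area, by Stokes' theorem).

```python
import math

def emf_around_unit_circle(E, n=20000):
    # numerically integrate E . dl around the unit circle x^2 + y^2 = 1
    h = 2 * math.pi / n
    total = 0.0
    for i in range(n):
        t = i * h
        x, y = math.cos(t), math.sin(t)
        dlx, dly = -math.sin(t) * h, math.cos(t) * h
        ex, ey = E(x, y)
        total += ex * dlx + ey * dly
    return total

# conservative (curl-free) field: loop EMF is zero
assert abs(emf_around_unit_circle(lambda x, y: (x, y))) < 1e-9
# field with constant curl 2 (like a uniformly ramping B): EMF = 2 * area = 2*pi
assert abs(emf_around_unit_circle(lambda x, y: (-y, x)) - 2 * math.pi) < 1e-6
```

The EMF here is exactly the line integral the answer describes: it depends on the loop (or the specified open path), not on two endpoints alone.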
{ "domain": "physics.stackexchange", "id": 96858, "tags": "electromagnetism, electric-fields, potential, maxwell-equations, electromagnetic-induction" }
Where does mass go when energy is converted in to photons?
Question: If matter and anti-matter annihilate each other, they emit a photon with the energy that corresponds to the mass, right? This is the best example I could think of matter/energy being converted directly into photons. Now, if a photon doesn't have mass (because it goes at the speed of light), where does all that mass from the matter and anti-matter go? Doesn't it break the law of conservation of mass? If photons have energy why don't they have mass? Thank you in advance for your professional help Answer: From a relativistic point of view, the energy of a particle with mass $m$ and momentum $p$ is given by: $$E^2=m^2c^4+p^2c^2$$ where $c$ is the speed of light. You can clearly see that a massive particle will have some mass energy $E_0=mc^2$, but also some kinetic energy. The photon being massless, the above equation reduces to: $$E_\gamma=pc$$ One consequence of this equation is that if a photon exists, i.e. has a non-zero energy, it must be moving. There is no "conservation of mass" law in physics - instead, we use the more general concept of conservation of energy (since from the first equation above, mass can be understood as a form of energy). The energy of the photon is related to its frequency by the Planck-Einstein relation: $$E_\gamma=h\nu$$ where $h$ is Planck's constant and $\nu$ is the frequency of the associated electromagnetic wave (remember that the photon is "the particle of light", i.e. an excitation of a field). Since you know the speed of light, $c$, and its frequency $\nu$, you can also derive its wavelength: $$c=\lambda\nu$$ And the energy of the photon can then be expressed as: $$E_\gamma=h\nu=\dfrac{hc}{\lambda}$$ Finally we get: $$E_\gamma=pc=\dfrac{hc}{\lambda}\Longrightarrow \lambda=\dfrac{h}{p}$$ which is nothing else than the de Broglie wavelength, and the basis for quantum mechanics. Indeed, this postulates that a particle (not necessarily massless) with momentum $p$ can be represented as a wave with wavelength $\lambda$.
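As a worked number for the annihilation example (electron-positron annihilation, using standard constant values): the rest-mass energy of one electron is about 511 keV, and a photon carrying that energy has a wavelength equal to the electron's Compton wavelength, about 2.4 pm.

```python
H = 6.62607015e-34      # Planck constant, J s
C = 2.99792458e8        # speed of light, m/s
M_E = 9.1093837015e-31  # electron mass, kg
KEV = 1.602176634e-16   # joules per keV

E = M_E * C**2          # rest-mass energy carried off as photon energy
lam = H * C / E         # wavelength of a photon with that energy

assert abs(E / KEV - 511.0) < 1.0        # ~511 keV per photon
assert abs(lam - 2.426e-12) < 1e-14      # ~2.4 pm (the Compton wavelength)
```

This is why medical PET scanners look for back-to-back 511 keV gamma rays: the "missing" mass shows up, exactly, as photon energy.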
{ "domain": "physics.stackexchange", "id": 20719, "tags": "energy-conservation, mass-energy" }
What are possible other "point sources" in the Fermi-LAT paper?
Question: This Fermi-LAT Collaboration paper (from 2015) looks at the Galactic centre and fits the $\gamma$-ray data with smooth interstellar medium emission, and with "point sources". It mentions that some of them overlap with known supernova remnants, some with pulsars, and some could be attributed to mis-identified interstellar emission. Are there any other possible interpretations for these point sources? Answer: Sure, in general they could be any gamma-ray emitting point sources: general (non-known-pulsar) neutron stars, or stellar mass black holes, or even background AGN. I'm not sure what they have in mind for 'interstellar emission' but that might also include a hydrodynamic shock in the interstellar medium here or there.
{ "domain": "physics.stackexchange", "id": 38631, "tags": "experimental-physics, astrophysics, astronomy, gamma-rays" }
Would a spinning magnet generate a current?
Question: If you were to have a suspended magnet in the shape of an annulus with a wire going through the hole in the center: Would rotating the magnet like a wheel (staying in place but spinning) generate a current through the wire? If so, what formulas and theorems explain this phenomenon? Answer: Yes. Faraday's Law: Whenever the flux linked or associated with a circuit changes, a voltage is induced in the circuit. $$V_{Ind} = N B l v$$ N = Number of turns. In your case, N = 1, since it is a straight conductor. B = flux density in T (Wb/m$^2$). l = length of coil cutting magnetic field in m. v = velocity in m/s. Overall, you get $\mu$V or mV, but it is Faraday's Law. Edit... Magnetic field of a Toroid is shown, with a red wire. No rotation = No voltage. Rotation but wire is perfectly at center = No voltage. But if the wire is not perfectly at center, the wire will cut the lines of flux. The magnetic field changes and you will get $\mu$V or mV.
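Plugging assumed illustrative numbers into that formula (a 0.1 T field, 5 cm of wire cutting flux at 2 m/s, one turn) does indeed land in the millivolt range:

```python
def induced_emf(B, l, v, n_turns=1):
    # V = N B l v for a conductor of length l cutting flux at speed v
    return n_turns * B * l * v

emf = induced_emf(B=0.1, l=0.05, v=2.0)
assert abs(emf - 0.010) < 1e-12   # 10 mV
```

The numbers here are hypothetical; the point is only the order of magnitude.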
{ "domain": "physics.stackexchange", "id": 39486, "tags": "electromagnetism, magnetic-fields" }
Bead following a fixed wire under gravity: Equations of motion
Question: Suppose that we have a bead $p$ of mass $m$ following a fixed wire described by a graph of a smooth function $f:\mathbb{R} \to \mathbb{R}$ without friction under the force of gravity, i.e. $p(t) = (x(t),y(t))$ is constrained to the set $\{(x,f(x)) \mid x \in \mathbb{R}\}$ and the force $G = (0,-gm)$ is acting on it. Using Lagrangian mechanics and the generalized coordinate $q = x$ with the Lagrangian $$L(x,\dot{x}) = \frac{m}{2}(\dot{x}^2+f'(x)^2\dot{x}^2) - gmf(x),$$ it is easy to see from the Euler-Lagrange equation $\frac{d}{dt} \frac{\partial L}{\partial \dot{x}} = \frac{\partial L}{\partial x}$ that the equation of motion for $x$ is given by $$ \ddot{x}(1+f'(x)^2)+\dot{x}^2f'(x)f''(x) = -gf'(x)\,. $$ However, we can take another approach: Let $F(x)$ be the parallel and $N(x)$ be the normal force on the wire in the point $(x,f(x))$ such that $G = F(x) + N(x)$ for all $x \in \mathbb{R}$, i.e. we decompose the force of gravity. Its is easy to see that $$ F(x) = -gm \frac{f'(x)}{1+f'(x)^2} \binom{1}{f'(x)} $$ and since the bead is constrained to the wire, only $F(x)$ is acting on it (the normal force $N(x)$ is counteracted by the wire). Hence the equation of motion for the bead is $$ m \frac{d^2}{dt^2} \binom{x(t)}{y(t)} = F(x(t))\,, $$ i.e. in particular $$ \ddot{x} = -g\frac{f'(x)}{1+f'(x)^2}\,, $$ which is clearly a different equation of motion for $x$. Question: Where does the second approach go wrong (it seems very "intuitive")? What would be the correct way to solve this problem in Newton-mechanics? Answer: since the bead is constrained to the wire, only $F(x)$ is acting on it (the normal force $N(x)$ is counteracted by the wire). The bead is constrained to move along the wire precisely because the normal force acts on it. The normal force is whatever it needs to be so that the motion follows the wire. 
Furthermore, the normal force is not just a function of $x$, but also of the velocity; going faster will cause the constraint force to be larger (think of a bead going around a circle). Because of the above, unless you are able to reason through what $N$ should be before determining the motion, Newtonian Mechanics won't really help you solve problems like these. Usually to determine $N$ one uses Lagrangian mechanics and then either works backwards to find what $N$ needs to be, or uses techniques such as Lagrange multipliers to determine the constraint force(s).
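One way to convince yourself that the Lagrangian equation of motion is the consistent one: integrate it numerically and check that the energy $E = \frac{m}{2}(1+f'(x)^2)\dot{x}^2 + mgf(x)$ stays constant. A sketch (per unit mass, with an assumed wire $f(x)=x^2$ and a plain RK4 integrator):

```python
G = 9.81
f = lambda x: x * x        # assumed wire shape
fp = lambda x: 2.0 * x
fpp = lambda x: 2.0

def accel(x, v):
    # x'' solved from  x'' (1 + f'^2) + x'^2 f' f'' = -g f'
    return (-G * fp(x) - v * v * fp(x) * fpp(x)) / (1.0 + fp(x) ** 2)

def rk4_step(x, v, dt):
    k1x, k1v = v, accel(x, v)
    k2x, k2v = v + 0.5 * dt * k1v, accel(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v)
    k3x, k3v = v + 0.5 * dt * k2v, accel(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v)
    k4x, k4v = v + dt * k3v, accel(x + dt * k3x, v + dt * k3v)
    return (x + dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6,
            v + dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6)

def energy(x, v):
    # energy per unit mass, from the Lagrangian in the question
    return 0.5 * (1.0 + fp(x) ** 2) * v * v + G * f(x)

x, v, dt = 1.0, 0.0, 1e-3
e0 = energy(x, v)
for _ in range(10000):
    x, v = rk4_step(x, v, dt)
assert abs(energy(x, v) - e0) < 1e-6   # conserved to numerical precision
```

Integrating the question's second (incorrect) equation $\ddot{x} = -gf'/(1+f'^2)$ the same way does not conserve this energy, which is another way of seeing that it drops the velocity-dependent part of the constraint force.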
{ "domain": "physics.stackexchange", "id": 77616, "tags": "homework-and-exercises, classical-mechanics, lagrangian-formalism, constrained-dynamics" }
PHP function to access a database and return json
Question: <?php include_once('../dbInfo.php'); function getReport($user_table) { $tables = array( "day" => "p_day", "month" => "p_month" ... etc. ..... ); $table = $tables[$user_table]; if(!$table) { die(json_encode(array("error" => "bad table name"))); } $con = getConnection(); // getConnection is in '../dbInfo.php' $query = "select * from " . $table; $res = mysql_query($query, $con); if(!$res) { die(json_encode(array("error" => "no results from table"))); } $fields_num = mysql_num_fields($res); $fields = array(); for($i=0; $i < $fields_num; $i++) { $field = mysql_fetch_field($res); $fields[$i] = $field->name; } $i = 0; while($row = mysql_fetch_array($res)) { $rows[$i] = $row; $i++; } $json = array("rows" => $rows, "headers" => $fields); $jsontext = json_encode($json); return $jsontext; } ?> What this code is doing: access the database, selecting rows from a table, and returning them as a serialized json object a table name is looked up in $tables -- the keys are acceptable user input, the values are actual table/view names in the database data is selected from the table the data is put into a big hash the hash is serialized as a json string and returned Specific issues I'm concerned about: security -- is my DB connection info safe? This file is in the root directory of public content, so dbiInfo.php, with the database connection information, is not publicly accessible (I think) security -- am I open to SQL injection attacks? I build a SQL query with string concatenation security -- $user_table is untrusted input; is it safe? It's only used as a key to look up trusted input ... error handling -- have I dealt with all error conditions there are lots of versions of PHP functions -- am I using the right ones? General issues: following conventions quality/readability/comments Edit: the data is publicly available -- I'm worried about somebody getting more than read access to one of the listed tables, or any access to any other table in the DB. 
Answer: This: $tables = array( "day" => "p_day", "month" => "p_month" ... etc. ..... ); $table = $tables[$user_table]; if(!$table) { die(json_encode(array("error" => "bad table name"))); } will throw a notice if $user_table is not a valid array key, something you should have already tested. Rewrite as: $tables = array( "day" => "p_day", "month" => "p_month" ... etc. ..... ); if(!array_key_exists($user_table, $tables)) { die(json_encode(array("error" => "bad table name"))); } $table = $tables[$user_table]; I really hate it when functions die(), and in your case there's no point in that. You could simply do: if(!$table) { return json_encode(array("error" => "bad table name")); } since the function is expected to return a json formatted string. If you really want to die() you should do that where you call the function and the returned string is an error. Or you could just return false when an error occurs and the raw array when everything works, and when calling the function do something like: $result = json_encode(getReport("some_table")); That way getReport() will be useful even when you don't need json encoded output. But that's up to you and how you actually use it. As @LokiAstari mentions mysql_* functions are deprecated and should be avoided. I would skip mysqli_* functions too and use PDO which will give you the extra bonus of switching to another database engine if you ever need to and it has a nice OO interface. For everything else, I'm with @LokiAstari.
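For illustration only (this is a cross-language sketch, with Python's sqlite3 standing in for PDO, and hypothetical table names): the whitelist-then-query pattern is the right defence here precisely because table names can never be bound as query parameters in any driver - only values can.

```python
import json
import sqlite3

# acceptable user input -> actual table/view names (assumed examples)
TABLES = {"day": "p_day", "month": "p_month"}

def get_report(user_table, conn):
    # only whitelisted names ever reach the SQL string, so user input
    # cannot alter the shape of the query
    if user_table not in TABLES:
        return json.dumps({"error": "bad table name"})
    cur = conn.execute("SELECT * FROM " + TABLES[user_table])
    headers = [col[0] for col in cur.description]
    rows = [list(row) for row in cur.fetchall()]
    return json.dumps({"rows": rows, "headers": headers})
```

Note the function returns the JSON string (or an error payload) instead of dying mid-request, matching the advice above.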
{ "domain": "codereview.stackexchange", "id": 4399, "tags": "php, security" }
How can black hole increase its mass?
Question: From observer point of view an object, which falls into black hole never crosses its horizon. Then how does black hole appears and grows its mass? Or does any black hole looks (and feels by all other information sources) for us like a void sphere with all mass around its horizon? Answer: The Schwarzschild coordinates (which seem to suggest that no object ever crosses the event horizon when viewed from far outside) were derived for stationary case: no matter flows onto the black hole, the black hole has constant mass. In fact, Schwarzschild was assuming zero stress-energy tensor (vacuum solution). However if you start adding a lot of mass to the black hole, the situation changes. Imagine you throw a little object towards the event horizon. It "seems" to freeze on the surface of the horizon (it actually visually disappears due to the red shift). Later on, there is a huge amount of material streaming to the black hole. It is thousands of times more mass than the original mass of the black hole. At this point the conditions under which Schwarzschild found his solution no longer stand, because the stress-energy tensor is far from being zero. The event horizon will grow, since it forms wherever the gravitational potential reaches certain value. By adding more mass you unavoidably enlarge the volume where the potential has the required value to form the event horizon. The case of non-constant mass is described by the Vaidya metric. Mathematically this is described on pages 133-134 of this book.
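The statement that the horizon grows with added mass can be made quantitative with the Schwarzschild radius $r_s = 2GM/c^2$, which scales linearly with mass (rounded constants, for illustration):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_radius(mass_kg):
    return 2.0 * G * mass_kg / C**2

r10 = schwarzschild_radius(10 * M_SUN)
r20 = schwarzschild_radius(20 * M_SUN)
assert abs(r20 - 2.0 * r10) < 1e-6 * r10   # horizon radius doubles with mass
assert 29e3 < r10 < 31e3                   # a 10-solar-mass hole: ~30 km horizon
```

So doubling the mass of the hole (as in the infalling-stream scenario above) doubles the horizon radius, engulfing the material that "seemed" frozen at the old horizon.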
{ "domain": "physics.stackexchange", "id": 14296, "tags": "general-relativity, black-holes, mass-energy, event-horizon, observers" }
Could many widely separated space telescopes be combined for VLBI on IR/visible wavelengths?
Question: I have read about ground-based Very Long Baseline Interferometry telescope arrays able to achieve huge resolution at IR/visible wavelengths. There are also space-ground VLBI configurations in operation today that work at radio wavelengths. Is it feasible to employ many space-based optical/IR telescopes to achieve a high angular resolution visible light image of something the size of Jupiter? (Maybe during a transit or occultation, since it would be faint on its own.) What would the separation need to be for a space telescope array of that size? Would it be overly difficult to get precise enough timing, or to synchronize the relative motions of the orbiters or to deconvolve that motion from the sources? Could it be made of many considerably smaller widely separated mirrors, or would it be better to use a fewer number of large mirrors? Answer: Interferometry is dependent upon preserving the phase data from signals from widely separated locations. This can be done physically by relaying the actual EM radiation along waveguides or optical paths, or it can be done synthetically by precisely recording enough phase data from the signals to simulate the same process in a computer. This requires measuring the signal at a very high resolution and precisely measuring the exact 3 dimensional position of the telescope at the scale of the relevant wavelength. This is possible with radio waves but is not technologically possible with optical or IR light. Consider VLBI using a 1 micron wavelength signal (infrared light). You would need to precisely locate each observatory to sub-micron precision, which is far beyond the capacity of any GPS based system. You would need to record the phase data for the signal at a rate higher than 300 terahertz. We don't have sensors or computers that are capable of doing that. And that doesn't even get into the requirement of storing petabytes of data per second. 
Perhaps in the future someone will find a more clever way of doing it, but using the same method as radio VLBI is not even remotely feasible for optical wavelengths.
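As for the separation the question asks about: the standard diffraction-limited estimate $\theta \approx \lambda/B$ gives a surprisingly modest baseline just to resolve a Jupiter-sized disc as a single element (assumed illustrative numbers: 500 nm light, a target 10 light-years away).

```python
WAVELENGTH = 500e-9          # m, visible light (assumed)
JUPITER_DIAMETER = 1.4e8     # m
LIGHT_YEAR = 9.46e15         # m
distance = 10 * LIGHT_YEAR   # an assumed nearby target

theta = JUPITER_DIAMETER / distance   # angular size to resolve, in radians
baseline = WAVELENGTH / theta         # baseline that just resolves the disc

# a few hundred metres suffices to *detect* the disc; imaging surface
# detail needs proportionally longer baselines
assert 300 < baseline < 400
```

This underlines the answer's point: the obstacle is not achieving the separation but preserving sub-wavelength phase coherence across it.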
{ "domain": "physics.stackexchange", "id": 3060, "tags": "astronomy, telescopes, interferometry" }
What is the energy required to create mass of m at a height of h above the Earth?
Question: What is the energy required to create mass of m at a height of h above the Earth? Is it $E = mc^2$ or $E = mc^2 + mgh$? Let's reverse the process also. If you convert mass $m$ at $h = 0$ to energy then $$E=mc^2 \tag{1}$$ Now if you raise the mass to a height $h$ and convert it to energy which you are going to measure at the height $h$ then $$E=mc^2 + mgh \tag{2}$$ Is equation (2) correct? If this is correct, then if you take a rock of mass $m$ on the Earth to a very large distance or provide it with escape velocity so that it escapes the Earth's gravity (ignoring any other gravitational field), what is the energy contained in that rock? Is it equation (1) or $$ E = mc^2 + \dfrac{1}{2}mv^2, \tag{3}$$ where $v$ is the escape velocity? If equation (3) is the accurate one according to the discussion above, then once the mass has come out of the gravitational field the only way to store this extra energy will be by an increase in mass. So, $$ dm = mv^2/(2c^2) $$ or $$ dm = mgh/(c^2) $$ Answer: Think logically. Assume that you want to create a mass on the earth, where $h=0$ (assumption). Therefore: $$E=mc^2$$ You also need to expend some work to take the mass from $0 \to h$. So the energy needed is the energy you need to create it plus the one you need to "lift" it. So: $$\sum E = E - W_{spent} = E - (-mgh) = mc^2 + mgh = m(c^2 + gh)$$ Everyday example: Which state has more energy: a tidied or an untidied room? The answer is the tidied one because we've spent energy to tidy it. Since the gravitational field is a conservative one, the work done to do this action is always $-\Delta U = -mg \Delta h$, so if you were already at $h$ then the change in height is 0. It may be a bit confusing but it has to do with your choice of zero potential energy level.
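The size of the correction term in the question's $dm = mgh/c^2$ is easy to estimate: the fractional change is $gh/c^2$, which for everyday heights is fantastically small (assumed numbers below, just for scale):

```python
C = 2.99792458e8  # speed of light, m/s
G_ACC = 9.81      # gravitational acceleration, m/s^2

def fractional_mass_energy_increase(h):
    # dm/m = g*h / c^2, from the dm = m*g*h/c^2 relation in the question
    return G_ACC * h / C**2

# lifting a rock 1 km changes its mass-energy by about one part in 10^13
assert 1e-13 < fractional_mass_energy_increase(1000.0) < 1.2e-13
```

This is why the $mgh$ term never matters in practice even though it belongs in the energy bookkeeping in principle.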
{ "domain": "physics.stackexchange", "id": 11982, "tags": "special-relativity, gravity, potential-energy, mass-energy" }
Wavelength and resolution
Question: I'm reading some texts that seem to assume knowledge of light that I'm not too familiar with. How does the wavelength of light relate to the minimum distance span that can be observed (i.e. you cannot make a lens big enough to resolve individual atoms), and is this a light phenomenon or an intrinsic wave phenomenon? Answer: This is a wave phenomenon. Suppose that you have a plane water wave. Say it hits a small object. If the object is smaller than the wavelength, it won't disturb the wave much. If the wavelength is smaller, i.e. the object is the same size as or larger than the wavelength, then the wave will scatter off the object. I'm trying to find a video of this in a ripple tank but can't seem to find one online.
{ "domain": "physics.stackexchange", "id": 5060, "tags": "waves, visible-light" }
What is this Japanese study of the Pfizer vaccine measuring?
Question: https://www.pmda.go.jp/drugs/2021/P20210212001/672212000_30300AMX00231_I100_1.pdf If you put it into google translate, the abstract contains the following: "In the biodistribution test, luciferase RNA-encapsulated LNP was intramuscularly administered to BALB / c mice. theAs a result, the expression of luciferase was observed at the administration site, and the expression level was lower than that in the liver.Was also recognized. Expression at the administration site of luciferase was observed from 6 hours after administration, and 9 days after administration.Disappeared. Expression in the liver was also observed 6 hours after administration and disappeared by 48 hours after administration. Also,Intramuscular administration of radiolabeled LNP encapsulating luciferase RNA to rats to quantify biodistributionUpon evaluation, the radioactivity concentration was the highest at the administration site. Liver is highest except at the administration siteIt was good (up to 18% of the dose)." Another translation (via DeepL) of the first part of the same paragraph: In the biodistribution study, luciferase RNA-encapsulated LNP was administered intramuscularly to BALB/c mice, and the expression of luciferase was observed at the site of administration, and also in the liver, although at a lower level. The expression of luciferase at the site of administration was observed from 6 hours post-dose and disappeared by 9 days post-dose. The luciferase was also observed in the liver at 6 hours post-dose and disappeared by 48 hours post-dose. Does this mean that they tracked the actual distribution of the vaccine in the body of the rats, or was it just a study of the lipid byproducts? Answer: There are at least three types of studies being described. In a paragraph before the quoted one, they talk about testing for the PEGylated lipids ALC-0315 and ALC-0159 in plasma, liver, urine, and feces. 
The lipids rapidly moved from the blood to the liver, and were found in the feces; none were detected in the urine. The other two kinds of studies are described in the quoted paragraph. One studied the protein product produced by the animal's cells from the administered mRNA coding for luciferase. The cell product is detected by the light it emits. The luciferase was detected both at the site of injection and in the liver, and continued being detected at the injection site for much longer. The last study was of the same kind of mRNA nanoparticles, except that these were radioactively labeled. The radioactivity from them was detected at the injection site and in the liver. This indicates there were nanoparticles (or the radioactive components of them) present in both places. So they tracked the lipids, and either the whole vaccine nanoparticles or some other component of them, but also, and most significantly, they tracked the animal's cellular product of the vaccine's mRNA (luciferase, standing in for the true vaccine's spike protein).
{ "domain": "biology.stackexchange", "id": 11528, "tags": "pharmacokinetics" }
Weird projectile motion question
Question: The question is as follows: A ball is thrown from a point $O$ towards a vertical wall in such a way that, after rebounding from the wall, it returns to $O$ without striking the ground. The ball’s initial velocity has magnitude $U$ and is at an angle $θ$ above the horizontal. When the ball strikes the wall, the horizontal component of its velocity is reversed and halved, but the vertical component is unchanged. (i) Show that $U^2\sin{2\theta}=3gb$, where $b$ is the horizontal distance of the wall from $O$. (ii) The point $P$ at which the ball strikes the wall is at a height $\frac{2}{9}b$ above the level of $O$. Find $U$ in terms of $b$ and $g$. (iii) The ball is thrown again from $O$ with the same speed $U$, strikes the wall at the point $Q$, different from $P$ and returns to $O$ without striking the ground. Find, in terms of $b$, the height of $Q$ above the ground. I found parts (i) and (ii) relatively straightforward to solve, and I happened to get $U=\sqrt{5gb}$ for part (ii). My question is: how is it possible that a particle projected with the same speed from the same point is able to follow the same trajectory both ways but hit a different point on the wall? Or am I missing something here? Answer: I'm not sure I understand the problem exactly, so do let me know if this doesn't answer your question: If you've solved the first part, you should be convinced that the particle returns to $O$ when it's projected at a value of $\theta$ that satisfies the following equation: $$\sin{2\theta} = \frac{3 g b}{U^2}.$$ It can be shown that this equation has two roots in the regime $0<\theta<\pi/2$. See, for example, this Math.SE answer to Two roots of arcsin(x) in the range [0,2π].
Essentially, it all boils down to the fact that $\sin{\theta} = \sin{(\pi-\theta)},$ and therefore that $$\sin{2\theta} = \sin(\pi - 2\theta) = \sin\Big(2\left(\frac{\pi}{2} - \theta\right)\Big).$$ In other words, $\theta$ and $\pi/2 - \theta$ are both solutions to the equation, and therefore there are two values of $\theta$ that satisfy the specified relation and consequently two heights that do, too!
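A quick numerical check of the two launch angles, taking $U=\sqrt{5gb}$ from part (ii) (a sketch; $b$ is set to 1 so the heights come out in units of $b$):

```python
import math

g, b = 9.81, 1.0           # b = 1 so heights are in units of b
U = math.sqrt(5 * g * b)   # speed from part (ii)

s = 3 * g * b / U**2       # sin(2*theta) = 3/5
thetas = [0.5 * math.asin(s), 0.5 * (math.pi - math.asin(s))]

def height_at_wall(theta):
    # vertical position when the ball reaches the wall at x = b
    t = b / (U * math.cos(theta))
    return U * math.sin(theta) * t - 0.5 * g * t**2

heights = [height_at_wall(th) for th in thetas]
# heights come out as [2b/9, 2b]: P at 2b/9 (part ii) and Q at 2b above O
```

Both angles satisfy the return condition, yet they strike the wall at different heights, which is exactly what part (iii) exploits.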
{ "domain": "physics.stackexchange", "id": 71142, "tags": "homework-and-exercises, newtonian-mechanics, classical-mechanics, projectile" }
Is deciding whether a graph admits two vertex-disjoint spanning trees of bounded size difference NP-hard?
Question: I'd like to decide whether, given a connected graph $G = (V, E)$ and an integer $k$ as input, $G$ admits two vertex-disjoint subgraphs $T_1 = (V_1, E_1)$ and $T_2 = (V_2, E_2)$ such that $T_1$ and $T_2$ are trees, $V_1 \cup V_2 = V$ (so in some way they are spanning), and $\big||V_1| - |V_2|\big| \leq k$ (they aren't too different in size). Some graphs, such as perfect binary trees or Hamiltonian graphs, do admit such balanced spanning trees ($k=o(1)$), whereas others do not, such as star graphs ($k = n - 2$). Is this NP-hard? Is it already NP-hard when fixing $k=0$? Also, is the smallest $k$ for which such trees can be found a known parameter, like maybe ``balanced vertex-disjoint spanning tree distance'' or something? I did not find any information on vertex-disjoint spanning trees like this, I always end up with an article on edge-disjoint spanning trees (which each individually are spanning). EDIT: vertex-arboricity (resp. equitable vertex-arboricity) seems quite close, but here one aims to partition the vertices such that their induced subgraph is a forest (resp. forests of the same size). They don't seem interested in the exact number of partitions (whereas I fix it to two) and I'm not interested in induced subgraphs or forests. Answer: For $k = 0$, this problem is equivalent to the problem known as the Balanced Connected Vertex 2-Partition problem: given a graph $G = (V,E)$, find a partition of the vertex set $V = V_1 \cup V_2$ such that $|V_1| = |V_2| = |V|/2$ and the induced subgraphs $G[V_1]$ and $G[V_2]$ are connected. Note that if an induced subgraph $G[V_i]$ is connected then we can take a spanning tree of it as $T_i$. This problem is NP-hard even for bipartite graphs [1 Theorem 2.2]. For the approximation version of this problem, see this question on TCS.SE Partition a graph into 2 connected subgraphs. [1]: Dyer, Martin E., and Alan M. Frieze. "On the complexity of partitioning graphs into connected subgraphs."
Discrete Applied Mathematics 10.2 (1985): 139-153.
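For intuition on small instances, the $k$-balanced connected 2-partition can be checked by brute force (a sketch with hypothetical helper names; exponential in $n$, so only for tiny graphs):

```python
from itertools import combinations

def connected(vertices, edges):
    # DFS over the subgraph induced by `vertices`
    vertices = set(vertices)
    if not vertices:
        return False
    adj = {v: set() for v in vertices}
    for u, v in edges:
        if u in vertices and v in vertices:
            adj[u].add(v)
            adj[v].add(u)
    start = next(iter(vertices))
    seen, stack = {start}, [start]
    while stack:
        for w in adj[stack.pop()] - seen:
            seen.add(w)
            stack.append(w)
    return seen == vertices

def has_balanced_partition(n, edges, k=0):
    # V1 of size s, V2 of size n - s, with ||V1| - |V2|| <= k
    V = list(range(n))
    for s in range((n - k + 1) // 2, n // 2 + 1):
        for V1 in combinations(V, s):
            V2 = [v for v in V if v not in set(V1)]
            if connected(V1, edges) and connected(V2, edges):
                return True
    return False
```

A connected induced half can always be thinned to a spanning tree, so this matches the tree formulation above; it confirms that a path admits a $k=0$ partition while a star on 4 vertices needs $k = n - 2 = 2$.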
{ "domain": "cs.stackexchange", "id": 20938, "tags": "graphs, partitions, spanning-trees" }
Compiling single package under winros
Question: Hello guys, I have successfully installed winros on my Win7 system and need to create my own message type. This worked very well by creating a CMakeLists.txt in the directory and adding these two lines. add_message_files(DIRECTORY msg FILES message.msg) generate_messages() I tried to compile the package with cmake but that didn't work. I got stuck with an error about the message_generationConfig.cmake which I found under C:\opt\ros\groovy\x86\share\message_generation\cmake. The commandline output advised me to add that path to an environment variable like message_generation_DIR but that hasn't worked. Well, I finally compiled all packages and my own with the message types again with "winros_make" and then the header file was generated. Has anybody figured out how to compile one single package with cmake or has somebody used the rosmake component under winros? Best regards Originally posted by ChrisEule on ROS Answers with karma: 26 on 2013-06-09 Post score: 0 Answer: The winros_make command just makes a call out to cmake, but potentially passes a lot of variables. You can see part of winros_make's scripting which makes the call here. You should also see the output for the cmake command that winros_make executed directly on standard output (check the print lines in the script above). At any rate, that should give you an idea of what magic winros_make is doing under the hood. Note that rosmake is legacy - it is the tool that was used for the old build system. winros_make is effectively the Windows counterpart of catkin_make on Linux. It just adds a few extra Windows-specific features to make it easier for the user. We plan to perhaps have a script to simplify building of packages only. I did that with fuerte - just haven't gotten that far with groovy yet. Originally posted by Daniel Stonier with karma: 3170 on 2013-06-10 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 14486, "tags": "ros, compile, package, winros" }
Is it possible to have a dynamic $Q$-function?
Question: I am trying to use Q-learning for energy optimization. I only wish to have states that will be visited by the learning agent, and, for each state, I have a function that generates possible actions, so that I would have a Q-table in form of a nested dictionary, with states (added as they occur) as keys whose values are also dictionaries of possible actions as keys and Q-values as values. Is this possible? How would it affect learning? What other methods can I use? If it is possible and okay, and I want to update the Q-value, but the next state is one that was never there before and has to be added to my nested dictionary with all possible actions having initial Q-values of zero, how do I update the Q-value, now that all of the actions in this next state have Q-values of zero? Answer: I have a function that generates possible actions, so that I would have a Q-table in form of a nested dictionary, with states (added as they occur) as keys whose values are also dictionaries of possible actions as keys and q-values as values. Is this possible? How would it affect learning? What other method can I use? (Disclaimer: I provided this suggestion to OP here as an answer to a question on Data Science Stack Exchange) Yes this is possible. Assuming you have settled on all your other decisions for Q learning (such as rewards, discount factor or time horizon etc), then it will have no logical impact on learning compared to any other approach to table building. The structure of your table has no relevance to how Q learning converges, it is an implementation detail. This choice of structure may have a performance impact in terms of how fast the code runs - the design works best when it significantly reduces memory overhead compared to using a tensor that over-specifies all possible states and actions. 
If all parts of the state vector could take all values in any combination, and all states had the same number of allowed actions, then a tensor model for the Q table would likely be more efficient than a hash. I want to update Q-value, but the next state is one that was never there before and has to be added to my nested dictionary with all possible actions having initial Q-values of zero; how do I update the Q-value, now that all of the actions in this next state have Q-values of zero. I assume you are referring to the update rule from single step Q learning: $$Q(s,a) \leftarrow Q(s,a) + \alpha(r + \gamma \text{max}_{a'} Q(s',a') - Q(s,a))$$ What do you do when you first visit $(s,a)$, and want to calculate $\text{max}_{a'} Q(s',a')$ for the above update, yet all of $Q(s',a') = 0$, because you literally just created them? What you do is use the zero value in your update. There is no difference whether you create entries on demand or start with a large table of zeroes. The value of zero is your best estimate of the action value, because you have no data. Over time, as the state and action pair are visited multiple times, perhaps across multiple episodes, the values from experience will back up over time steps due to the way that the update formula makes a link between states $s$ and $s'$. Actually you can use any arbitrary value other than zero. If you have some method or information from outside of your reinforcement learning routine, then you could use that. Also, sometimes it helps exploration if you use optimistic starting values - i.e. some value which is likely higher than the true optimal value. There are limits to that approach, but it's a quick and easy trick to try and sometimes it helps explore and discover the best policy more reliably.
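The on-demand table with zero initialisation might look like this in Python (a sketch; `possible_actions`, the learning rate, and the discount are stand-ins for the questioner's own choices):

```python
alpha, gamma = 0.1, 0.9   # assumed learning rate and discount factor

def possible_actions(state):
    # placeholder for the questioner's action-generating function
    return ['up', 'down']

Q = {}  # state -> {action: q-value}, states added as they occur

def ensure_state(s):
    # first visit: create all actions for s with initial Q-value 0.0
    if s not in Q:
        Q[s] = {a: 0.0 for a in possible_actions(s)}

def update(s, a, r, s_next):
    ensure_state(s)
    ensure_state(s_next)  # brand-new next state: all zeros, so the max is 0
    best_next = max(Q[s_next].values())
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
```

The first update into a freshly created next state simply uses 0 for $\max_{a'} Q(s',a')$, exactly as described above; later visits back up real experience through the same rule.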
{ "domain": "ai.stackexchange", "id": 1254, "tags": "ai-design, q-learning, optimization" }
Could a "living planet" alter its own trajectory only by changing its shape?
Question: In Stanislaw Lem's novel Solaris the planet is able to correct its own trajectory by some unspecified means. Assuming its momentum and angular momentum is conserved (it doesn't eject or absorb any mass), would this be possible (in Newtonian mechanics) and how? If not, can it be proven? The assumption is that the planet orbits a star (or perhaps a binary star) system. Intuitively this seems possible to me. For example, tidal forces result in a planet losing its rotational energy, so it seems possible that by altering its shape, a body could alter at least its rotation speed. My ideas go as follows: Assume we have an ideal rod consisting of two connected mass points. The rod rotates and orbits around a central mass. When one of the points moves towards the central body, we extend the rod, getting it closer to the center. thus increasing the overall gravitational force that acts on the rod. When one of the points is getting away from the center, we shrink the rod again, thus decreasing the combined gravitational force. I haven't run any simulations yet, but it seems this principle could work. Update: An even more complex scenario (conserving momentum and angular momentum) would be if the planet ejected a piece of matter and absorbed it again after some time. Answer: If you allow for non-Newtonian gravity (i.e., general relativity), then an extended body can "swim" through spacetime using cyclic deformations. See the 2003 paper "Swimming in Spacetime: Motion by Cyclic Changes in Body Shape" (Science, vol. 299, p. 1865) and the 2007 paper "Extended-body effects in cosmological spacetimes" (Classical and Quantum Gravity, vol. 24, p. 5161). Even in Newtonian gravity, it appears to be possible. The second paper above cited "Reactionless orbital propulsion using tether deployment" (Acta Astronautica, v. 26, p. 307 (1992).) 
Unfortunately, the paper is paywalled and I can't access the full text; but here's the abstract: A satellite in orbit can propel itself by retracting and deploying a length of the tether, with an expenditure of energy but with no use of on-board reaction mass, as shown by Landis and Hrach in a previous paper. The orbit can be raised, lowered, or the orbital position changed, by reaction against the gravitational gradient. Energy is added to or removed from the orbit by pumping the tether length in the same way as pumping a swing. Examples of tether propulsion in orbit without use of reaction mass are discussed, including: (1) using tether extension to reposition a satellite in orbit without fuel expenditure by extending a mass on the end of a tether; (2) using a tether for eccentricity pumping to add energy to the orbit for boosting and orbital transfer; and (3) length modulation of a spinning tether to transfer angular momentum between the orbit and tether spin, thus allowing changes in orbital angular momentum. If anyone wants to look at the article and edit this answer accordingly with a more detailed summary, feel free. As pointed out by Jules in the comments, the "previous paper" mentioned in the abstract appears to be this one, which is freely available. The idea of "swimming in spacetime" was also discussed on StackExchange here and here.
{ "domain": "physics.stackexchange", "id": 31671, "tags": "classical-mechanics, gravity, newtonian-gravity" }
No Data from MAVLink_ROS
Question: Hi, I plan to use MAVLink-ROS to read the IMU data from APM2.5. Although it compiles successfully with rosmake, the data cannot be received. I use rqt_graph to check the topic and node, but there is no topic or node connected to the node "MAVLink_ros". As a beginner to ROS and Linux, I do not know how to solve the issue. My Ubuntu version is 12.10 and ROS is Hydro. Many thanks. Originally posted by Xiang on ROS Answers with karma: 46 on 2014-03-03 Post score: 0 Original comments Comment by Xiang on 2014-03-03: Should I rosmake the MAVLink_ROS_PKG in a "built" workspace? Answer: Hi, you could try this python generator: https://github.com/posilva/mav2rosgenerator Best Pedro Originally posted by pmosilva with karma: 56 on 2014-07-17 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 17144, "tags": "ros, mavlink, ros-hydro" }
Two HTTP notification methods
Question: I have two very similar methods, which make HTTP requests; the difference is that one makes a PATCH request and the other a GET. Is there any proper Ruby way not to repeat the setup code and not to pass the flag parameter? def notify_client(url, params) uri = URI.parse(url) https = Net::HTTP.new(uri.host, uri.port) https.use_ssl = !Rails.env.development? req = Net::HTTP::Patch.new(uri.path) req.body = {data: {attributes: params}}.to_json res = https.request(req) puts "Response #{res.code} #{res.message}: #{res.body}" end def notify_vendor(url, params) uri = URI.parse(url) https = Net::HTTP.new(uri.host, uri.port) https.use_ssl = !Rails.env.development? req = Net::HTTP::Get.new(uri.path) req.body = {data: {attributes: params}}.to_json res = https.request(req) puts "Response #{res.code} #{res.message}: #{res.body}" end Answer: In such cases I try to move repeated code to a separate method that accepts a block. def notify_client(url, params) notify(url, params) { |path| Net::HTTP::Patch.new(path) } end def notify_vendor(url, params) notify(url, params) { |path| Net::HTTP::Get.new(path) } end def notify(url, params) uri = URI.parse(url) https = Net::HTTP.new(uri.host, uri.port) https.use_ssl = !Rails.env.development? req = yield uri.path req.body = {data: {attributes: params}}.to_json res = https.request(req) puts "Response #{res.code} #{res.message}: #{res.body}" end That refactoring is quite simple. If you need more details - let me know.
{ "domain": "codereview.stackexchange", "id": 20255, "tags": "ruby, http" }
Determining the causality of a signal with it's pole-zero plot
Question: I have the following question: Pole-zero plots of $x(t)$ and $y(t)$ are given below: The signal $g(t)$ and $h(t)$ are defined as $g(t)=x(t)e^{-3t}$ and $h(t)=y(t)*e^{-t}u(t)$. If $g(t)$ and $h(t)$ are both absolutely integrable, determine whether the signals $g(t)$, $h(t)$ are left-sided/right-sided. My try: I take the laplace transform of both the signals and get $G(s)=X(s+3)$ and $H(s)=Y(s)\cdot \frac{1}{s+1}$ Also because both $g(t)$ and $h(t)$ are absolutely integrable, their transforms must be stable so both must have the $j\omega$-axis in their respective ROC. As we can see $X(s+3)$ shifts the pole-zero plot to the left by $3$ units so we have all the poles in the left $s$-plane and ROC would be $Re\{s\}>-1$ hence $g(t)$ is right-sided. Similarly $H(s)$ has all the poles in the left $s$-plane and ROC is again $Re\{s\}>-1$ hence $h(t)$ is right-sided. Is this reasoning correct? Answer: Looking at the pole-zero plots of the continuous-time signals $x(t)$ and $y(t)$, and the new signals $g(t)= x(t)e^{-3t}$ and $h(t) = y(t) \star e^{-t}u(t)$, the pole locations of $g(t)$ and $h(t)$ are found to be: $$g(t) = x(t)e^{-3t} \implies G(s) = X(s+3) \implies \text{ Re(poles) } = \{-1,-1 \} $$ $$h(t) = y(t) \star e^{-t}u(t) \implies H(s) = Y(s) \frac{1}{s+1} \implies \text{ Re(poles) } = \{-2,-2,-1 \} $$ From these pole locations we see the following: 1-$g(t)$ has two possible ROCs: $Re\{s\} <-1$, and $Re\{s\} > -1$, and only the second one includes the $j\omega$ axis and hence can be stable. 2-$h(t)$ has three possible ROCs: $Re\{s\}<-2$, $-2 < Re\{s\} <-1$, and $Re\{s\} > -1$, and only the last one includes the $j\omega$ axis and is stable. So for $h(t)$ and $g(t)$ to be absolutely integrable (stable), their ROCs must include the $j\omega$ axis, and this means their ROCs are to the right of the rightmost poles, which implies that the signals are causal right-sided signals.
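The ROC selection can be mechanised: the candidate ROCs are the vertical strips delimited by the pole real parts, and stability picks the one containing the $j\omega$ axis. A small sketch (pole locations taken from the reasoning above; `stable_roc` is a hypothetical helper name):

```python
def stable_roc(pole_reals):
    # candidate ROCs are vertical strips between consecutive pole real parts;
    # absolute integrability requires the strip containing Re(s) = 0
    rs = sorted(set(pole_reals))
    edges = [float('-inf')] + rs + [float('inf')]
    strips = list(zip(edges[:-1], edges[1:]))
    return [strip for strip in strips if strip[0] < 0 < strip[1]]

# g(t): poles at Re(s) = -1 after the shift by 3; h(t): poles at -2, -2, -1
roc_g = stable_roc([-1, -1])
roc_h = stable_roc([-2, -2, -1])
# both give Re{s} > -1, to the right of all poles: right-sided signals
```

A pole in the right half-plane would instead force the stable strip to lie to the left of it, giving a left-sided (anticausal) component.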
{ "domain": "dsp.stackexchange", "id": 6872, "tags": "laplace-transform" }
Do counter rotating galaxies have dark matter?
Question: Have counter rotating dark matter galaxies been observed? Counter rotating galaxies, you may already know, are galaxies where some stars or arms rotate in one direction and other stars or arms rotate in an opposite direction, possibly due to the merger of two or more galaxies. Answer: As you probably know, the presence of dark matter in galaxies can be assumed true due to the analysis of the velocity curves. In 1970, Freeman determined the velocity profiles of galaxies using the 21 cm line and he found that for NGC300 and M33 there should have been much more gravitational mass outside the last bright point. In the same year, Rubin and Ford (1970) determined the velocity profile for M31: the profile was flat until 24 kpc, which is much greater than the last photometric radius. The predicted physical model of a rotation curve of a galaxy must decrease smoothly following a Keplerian model after the last luminous radius. As you can see, most studied galaxies show that their velocity curves are flat outside of their last visible point. The most accepted idea to solve this discrepancy between the real and predicted models is the hypothesis of the presence of dark matter in the galaxy halo. Another important parameter to estimate the presence of dark matter is the mass/luminosity ratio. For our Galaxy it has an approximate value of $\sim 50\, M_{\odot}/L_{\odot}$ (Binney and Tremaine 2008). This means that there should be mass that is not visible, perhaps condensed into dark matter, brown dwarfs or other non-luminous bodies. In order to answer your question, counter rotating galaxies may have similar velocity curves and they can have the presence of gravitational but not luminous mass in their halo.
As you can see in this small paper on the counter rotating Sa NGC3539 galaxy, https://ned.ipac.caltech.edu/level5/March14/Corsini/Corsini2.html, there is a plot at the end, which perfectly shows the velocity profile: it stays flat outside of the last radius instead of decreasing as predicted. This can be explained assuming that the halo is filled with dark matter.
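The discrepancy is easy to quantify: with only the luminous mass, the Keplerian prediction $v=\sqrt{GM/r}$ falls as $1/\sqrt{r}$ outside the last luminous radius, while observed curves stay roughly flat. The mass and radii below are illustrative assumptions (roughly Milky-Way-like), not fitted values:

```python
import math

G = 6.674e-11   # gravitational constant, SI units
M = 2.0e41      # ~1e11 solar masses of luminous matter, kg (assumed)
kpc = 3.086e19  # metres per kiloparsec

def v_kepler(r):
    # circular speed if all the mass M sits inside radius r
    return math.sqrt(G * M / r)

v_10 = v_kepler(10 * kpc)   # ~210 km/s at 10 kpc
v_40 = v_kepler(40 * kpc)   # Keplerian: quadrupling r halves v
```

A flat observed curve at 40 kpc instead of half the inner speed is exactly the signature attributed to a dark halo.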
{ "domain": "physics.stackexchange", "id": 85956, "tags": "astrophysics, dark-matter, galaxies, galaxy-rotation-curve" }
How to predict NaN (missing values) of a dataframe using ARIMA in Python?
Question: I have a dataframe df_train of shape (11808, 1) that looks as follows: Datum Menge 2018-01-01 00:00:00 19.5 2018-01-01 00:15:00 19.0 2018-01-01 00:30:00 19.5 2018-01-01 00:45:00 19.5 2018-01-01 01:00:00 21.0 2018-01-01 01:15:00 19.5 2018-01-01 01:30:00 20.0 2018-01-01 01:45:00 23.0 2018-01-01 02:00:00 20.5 2018-01-01 02:15:00 20.5 and a second df nan_df of shape (3071, 1) that looks as follows: Datum Menge 2018-05-04 00:15:00 nan 2018-05-04 00:30:00 nan 2018-05-04 00:45:00 nan 2018-05-04 01:00:00 nan 2018-05-04 01:15:00 nan 2018-05-04 01:30:00 nan 2018-05-04 01:45:00 nan 2018-05-04 02:00:00 nan 2018-05-04 02:15:00 nan The nan values in the nan_df need to be predicted using time series forecasting. What I have done: The code below divides the df df_train and runs the ARIMA model on that to predict the values for the test set import pandas as pd from pandas import datetime import matplotlib.pyplot as plt from statsmodels.tsa.arima_model import ARIMA from sklearn.metrics import mean_squared_error def parser(x): return datetime.strptime(x,'%m/%d/%Y %H:%M') df = pd.read_csv('time_series.csv',index_col = 1,parse_dates =[1], date_parser = parser) df = df.drop(['Unnamed: 0'],axis=1) df_train = df.dropna() def StartARIMAForecasting(Actual, P, D, Q): model = ARIMA(Actual, order=(P, D, Q)) model_fit = model.fit(disp=0) prediction = model_fit.forecast()[0] return prediction NumberOfElements = len(df_train) TrainingSize = int(NumberOfElements * 0.7) TrainingData = df_train[0:TrainingSize] TrainingData = TrainingData.values TestData = df_train[TrainingSize:NumberOfElements] TestData = TestData.values #new arrays to store actual and predictions Actual = [x for x in TrainingData] Predictions = list() #in a for loop, predict values using ARIMA model for timepoint in range(len(TestData)): ActualValue = TestData[timepoint] Prediction = StartARIMAForecasting(Actual, 3, 1, 0) print('Actual=%f, Predicted=%f' % (ActualValue, Prediction)) Predictions.append(Prediction) 
Actual.append(ActualValue) Error = mean_squared_error(TestData, Predictions) print('Test Mean Squared Error (smaller the better fit): %.3f' % Error) # plot plt.plot(TestData) plt.plot(Predictions, color='red') plt.show() Now, I wanted to do the same to predict the nan values in the nan_df, this time using the entire df_train dataframe and I did it as follows: X = df_train.copy().values nan_df = df.iloc[11809:, :].values real = [x for x in X] nan_Predictions = list() #in a for loop, predict values using ARIMA model for timepoint in range(len(nan_df)): nan_ActualValue = nan_df[timepoint] nan_Prediction = StartARIMAForecasting(real, 3, 1, 0) print('real=%f, Predicted=%f' % (nan_ActualValue, nan_Prediction)) nan_Predictions.append(nan_Prediction) real.append(nan_ActualValue) When I do this, I get the following error: Traceback (most recent call last): File "<ipython-input-42-33f3e242230d>", line 4, in <module> nan_Prediction = StartARIMAForecasting(real, 3, 1, 0) File "<ipython-input-1-043dac0dd994>", line 17, in StartARIMAForecasting model_fit = model.fit(disp=0) File "C:\Users\kashy\Anaconda3\envs\py36\lib\site-packages\statsmodels\tsa\arima_model.py", line 1157, in fit callback, start_ar_lags, **kwargs) File "C:\Users\kashy\Anaconda3\envs\py36\lib\site-packages\statsmodels\tsa\arima_model.py", line 946, in fit start_ar_lags) File "C:\Users\kashy\Anaconda3\envs\py36\lib\site-packages\statsmodels\tsa\arima_model.py", line 562, in _fit_start_params start_params = self._fit_start_params_hr(order, start_ar_lags) File "C:\Users\kashy\Anaconda3\envs\py36\lib\site-packages\statsmodels\tsa\arima_model.py", line 539, in _fit_start_params_hr if p and not np.all(np.abs(np.roots(np.r_[1, -start_params[k:k + p]] File "C:\Users\kashy\Anaconda3\envs\py36\lib\site-packages\numpy\lib\polynomial.py", line 245, in roots roots = eigvals(A) File "C:\Users\kashy\Anaconda3\envs\py36\lib\site-packages\numpy\linalg\linalg.py", line 1058, in eigvals _assertFinite(a) File 
"C:\Users\kashy\Anaconda3\envs\py36\lib\site-packages\numpy\linalg\linalg.py", line 218, in _assertFinite raise LinAlgError("Array must not contain infs or NaNs") LinAlgError: Array must not contain infs or NaNs So, I would like to know how I can predict the nan values in the nan_df? Answer: You may apply Wolfram Language to your project. There is a free Wolfram Engine for developers and with the Wolfram Client Library for Python you can use these functions in Python. I will first create some data (too few rows provided in OP) in a pandas.DataFrame using a Python WolframLanguageSession to simulate an ARIMAProcess with RandomFunction. Imports import pandas as pd import iso8601 from wolframclient.evaluation import WolframLanguageSession from wolframclient.language import wl, wlexpr Start WolframLanguageSession wolfSession = WolframLanguageSession(); A simulation can be run by print( wolfSession.evaluate( wl.RandomFunction( wl.ARIMAProcess(1.4, [-0.9], 1, [-0.01, -0.08], 2.6), [0, 3] )('Values') ) ) [1.1054178694529107, 1.860340990531042, 1.5519448249118848, 5.088452598965132] Run a simulation of 100 steps (n) with dates in 15 minute intervals as in example data snip.
TimeSeriesModelFit expects a TimeSeries or a list of time-value pairs. Therefore, using Query the conversion is made to a list of time-value pairs. ts_model=wolfSession.evaluate( wl.Query( wl.RightComposition( wl.Values, wl.Function(wl.TimeSeriesModelFit(wl.Slot(1),'ARIMA')) ), wl.Values )(df) ); print(wolfSession.evaluate(ts_model('BestFit'))) ARIMAProcess[1.4524650139005593, [-0.9099324923212446], << 1 >>, [-0.07171874225371022], 2.507600357444524] 20 steps forward can be simulated with ts_forecast=wolfSession.evaluate(wl.TimeSeriesForecast(ts_model,[20])); print(wolfSession.evaluate(ts_forecast('Values'))) [69.49895300256293, 73.9505213906962, 71.35235968644419, 75.16897645534839, 73.14857786048489, 76.43946920329194, 74.89744525567367, 77.75304796344956, 76.60710728838431, 79.10230095679928, 78.28430817717482, 80.48109139973985, 79.93463198084233, 81.88433817573274, 81.56270217242249, 83.30783423643538, 83.17234688189897, 84.74809624199086, 84.76673571338941, 86.20224006662474] ts_forecast is a TemporalData object whose properties include 'Dates' and 'Values'. These can be use to convert it into a Python pandas.DataFrame for further processing in Python. df_forecast = pd.DataFrame( { 'Datum' : list(map( lambda d: iso8601.parse_date(d), wolfSession.evaluate( wl.Map( wl.Function(wl.DateString(wl.Slot(1),'ISODateTime')), ts_forecast('Dates') ) ) )), 'Menge' : wolfSession.evaluate(ts_forecast('Values')) }, ); print(df_forecast.iloc[:3,:]) Datum Menge 0 2018-01-02 01:00:00+00:00 69.498953 1 2018-01-02 01:15:00+00:00 73.950521 2 2018-01-02 01:30:00+00:00 71.352360 Further processing can also be continued with the Wolfram Engine. For example, 95% confidence interval bands for the forecast. 
conf = .95; quant = wolfSession.evaluate(wl.Quantile(wl.NormalDistribution(), 1 - (1 - conf) / 2)); errors = wolfSession.evaluate(wl.Sqrt(ts_forecast('MeanSquaredErrors'))); error_bands = wolfSession.evaluate( wl.TimeSeriesThread( wl.Function([wl.Dot([1, -quant], wl.Slot(1)), wl.Dot([1, quant], wl.Slot(1))]), [ts_forecast, errors] ) ); wolfSession.evaluate( wl.Export( '<path with image filename>', wl.DateListPlot( [wl.Query(wl.Values,wl.Values)(df), error_bands, ts_forecast], PlotStyle=[wl.Automatic, wl.Gray, wl.Automatic], Filling=[wl.Rule(2,[3])], PlotTheme='Detailed' ) ) ); Terminate the session wolfSession.terminate(); Hope this helps.
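As an aside for readers without a Wolfram Engine, the same fit-then-forecast pattern can be sketched in plain NumPy with a deliberately crude AR(1) model. Everything below (the synthetic series, its coefficients, the forecast horizon) is a stand-in for illustration, not the ARIMA fit from the session above:

```python
import numpy as np

# Synthetic series standing in for the 'Menge' column
rng = np.random.default_rng(0)
x = np.empty(100)
x[0] = 70.0
for t in range(1, 100):
    x[t] = 10.0 + 0.85 * x[t - 1] + rng.normal(0, 1.5)

# Fit x[t] = c + phi * x[t-1] by ordinary least squares
A = np.column_stack([np.ones(99), x[:-1]])
c, phi = np.linalg.lstsq(A, x[1:], rcond=None)[0]

# Iterate the fitted recursion to forecast 20 steps ahead
forecast = []
last = x[-1]
for _ in range(20):
    last = c + phi * last
    forecast.append(last)
```

A real workflow would use a proper ARIMA implementation with model selection and confidence bands, as the Wolfram session does; this only shows the shape of the computation.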
{ "domain": "datascience.stackexchange", "id": 5278, "tags": "machine-learning, python, forecasting, time-series" }
Why should the Fermi level of an n-doped semiconductor be below that of a p-doped one?
Question: In a pn-junction, the difference in Fermi level between the p-doped and the n-doped regions causes the appearance of a built-in electric field at equilibrium. This electric field goes from the n to the p (so the positive carriers, for example, would no longer feel the Coulomb attraction from the ionized donor atoms), meaning that the Fermi level of the n-doped region is below the one of the p-doped, but I don't see any elementary argument explaining that. Answer: Before the p-doped and n-doped materials are joined, maybe we can think that their conduction and valence bands are aligned (although that's probably a dubious assumption). We know that when they join the Fermi levels must be flat, so we need to lower the n-type material down in energy. We lower the n side because electrons roll downhill, that is to say we are minimising their energy. Or you could say we move the p-side up because holes roll uphill. The result is the same: the n-side is lower than the p-side. After the charge has equilibrated, the end result is that the bands bend to accommodate the flat Fermi level.
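The size of that Fermi-level offset is what becomes the built-in potential of the junction, $qV_{bi} = kT\ln(N_A N_D/n_i^2)$. As a rough numerical illustration (room-temperature silicon, with the doping levels below chosen arbitrarily):

```python
import math

k_T = 0.0259   # thermal voltage kT/q at ~300 K, in volts
n_i = 1.0e10   # approximate intrinsic carrier density of Si, cm^-3
N_A = 1.0e17   # assumed acceptor doping on the p side, cm^-3
N_D = 1.0e16   # assumed donor doping on the n side, cm^-3

# Built-in potential = Fermi-level offset before joining, in volts
V_bi = k_T * math.log(N_A * N_D / n_i**2)
print(f"V_bi ~ {V_bi:.2f} V")
```

For these assumed dopings this comes out a bit under the silicon band gap, which is the familiar ballpark for a silicon diode's built-in voltage.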
{ "domain": "physics.stackexchange", "id": 30929, "tags": "statistical-mechanics, condensed-matter, semiconductor-physics" }
Why can freezing something "fast enough" be approximated as adiabatic?
Question: I am reading Scott Shell's Thermodynamics and Statistical Mechanics, and I just saw this statement: If freezing happens fast enough, to a good approximation the process can be considered adiabatic. The author has mentioned this statement while talking about a problem regarding the spontaneous freezing of subcooled water at $-10^{\circ}$C. Why is this the case? I thought freezing is a constant temperature, constant pressure phenomenon. When you freeze something, a certain amount of heat $Q$ is removed from it and it escapes into an environment of some temperature $T$. How can freezing ever be adiabatic? Answer: If the liquid water starts out sub-cooled at -10 C, it is not at thermodynamic equilibrium, and there is a driving force for it to change spontaneously into a mixture of liquid water and ice at 0 C. Its own heat of freezing provides the energy necessary to bring about this change at constant enthalpy. There is no need to remove heat from the system by transferring it to the surroundings. Heat transfer is a process that takes time to occur, and the amount of heat transferred increases with both the temperature difference between the surroundings and the system, and the amount of time available for the heat to be transferred. In this case, the process takes place very rapidly, and even if there is a temperature difference between the surroundings and the system (assuming, for example, the surroundings are at 0 C), there is not enough time for a significant amount of heat to be transferred. Response Let M represent the amount of liquid water at -10 C and let m be the amount of ice resulting at 0 C, and (M-m) the amount of liquid water at 0 C. Take as a reference state for calculating enthalpy, liquid water at 0 C. Then, per unit mass relative to the reference state, the enthalpy per unit mass of the liquid water at -10 C is $$h=MC(-10-0)=-10MC$$ where C is the heat capacity of liquid water. 
In the final state, the enthalpy per unit mass of the ice is $$h=-m\Delta H_f$$where $\Delta H_f$ is the latent heat of fusion of ice. And the enthalpy per unit mass of the liquid water is $$h=(M-m)C(0-0)=0$$ So applying the first law of thermodynamics to this system with Q = 0 (adiabatic) yields: $$-10MC=-m\Delta H_f$$
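Solving that energy balance for the frozen fraction m/M gives a concrete number. The property values below are standard handbook figures for water, supplied here as an assumption (they are not quoted in the answer above):

```python
C = 4.19       # specific heat of liquid water, J/(g K), approximate
dH_f = 334.0   # latent heat of fusion of ice, J/g, approximate
dT = 10.0      # degrees of subcooling below 0 C

# 10*M*C = m*dH_f  =>  m/M = C*dT/dH_f
frozen_fraction = C * dT / dH_f
print(f"about {100 * frozen_fraction:.0f}% of the water freezes")
```

So only on the order of an eighth of the subcooled water freezes before the mixture reaches 0 C, consistent with the picture that the heat of fusion alone warms the system back to the melting point.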
{ "domain": "physics.stackexchange", "id": 71695, "tags": "thermodynamics, phase-transition" }
Context-free string that satisfies constraint?
Question: I'm not sure how to come up with a string in terms of $p$ for $a^ib^jc^k$ where $i+j = k^2$, where $p$ is the pumping length. I can't seem to find anything in terms of $p$ that satisfies the constraint. Any ideas or suggestions are appreciated! Answer: It is not necessary to define a string that has exactly length $p$. It only has to have length at least $p$. So $(i,j,k) = (2p^2,2p^2,2p)$ so that $2p^2+2p^2 = (2p)^2$ for a string of length $4p^2+2p$ would be OK.
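The arithmetic behind that choice is easy to sanity-check for a range of pumping lengths:

```python
def witness(p):
    # (i, j, k) = (2p^2, 2p^2, 2p): the string a^i b^j c^k with i + j = k^2
    i = j = 2 * p * p
    k = 2 * p
    return i, j, k

for p in range(1, 50):
    i, j, k = witness(p)
    assert i + j == k * k    # satisfies the language constraint
    assert i + j + k >= p    # string length 4p^2 + 2p is at least p
```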
{ "domain": "cs.stackexchange", "id": 10060, "tags": "context-free, pumping-lemma" }
Calculate compressive stress in shaft/key surface from torque
Question: When designing shaft keys we calculate compressive stress in hub/shaft/key and compare it with allowable stress. The usual formula is $$p = \cfrac{F}{A} = \cfrac{\cfrac{M}{R}}{L_e \cdot t}$$ where $p$ is compressive stress, $M$ is applied torque, $R$ is shaft radius, $L_e$ is effective key length and $t$ is keyway depth. The $t$ can sometimes be replaced by $\frac{h}{2}$, where $h$ is key height. I would like to try and calculate the force $F$ more precisely, although it is not necessary because of other uncertainties and simplifications in the calculations making this still an approximation. It is just an exercise for fun and out of curiosity. Using the following formulas $F = \cfrac{M}{r}$, $\cfrac{F_x}{F} = \cfrac{R-y}{r}$, $r = \sqrt{(R-y)^2 + (0.5 \cdot b)^2}$ I derived a formula for the normal force $F_x$ acting on the key's surface as a function of parameter $y$, where $y$ is the distance from the top of the shaft. $$F_x(y) = \cfrac{M \cdot (R-y)}{(R-y)^2 + 0.25 \cdot b^2}$$ $$[F_x] = \cfrac{\textrm{N} \cdot \textrm{mm} \cdot (\textrm{mm} - \textrm{mm})}{(\textrm{mm} - \textrm{mm})^2 + 1 \cdot \textrm{mm}^2} = \cfrac{\textrm{N} \cdot \textrm{mm}^2}{\textrm{mm}^2} = \textrm{N}$$ Now I would like to calculate the compressive stress $p$ with my newly defined normal force $F_x(y)$, but I do not know how to sum the force or what value of parameter $y$ I should choose. The questions are: Is my approach correct? Can I get a single force value, something like $F_{total}$ with its position $y_{final}$? Can I use it to calculate $p$? What value of parameter $y$ should I choose for the compressive stress calculation? I feel like questions 2 and 3 are dependent on each other. I was thinking about somehow summing the force from $y_1$ to $y_2$ with some form of integral. I know I can calculate the total force when given a distributed load $q(x)$ by using the formula $F = \int_{a}^{b}{q(x)dx}$. The position of said force would be in the centroid of the distribution graph. 
The problem is that $F_x(y)$ is not a distributed load, just by looking at the units ($\textrm{N}$ instead of $\textrm{N/mm}$). Maybe I could go straight for the compressive stress $p$ and avoid my current force problem, or maybe I am misunderstanding the whole problem? EDIT 1: I did the calculations with some numbers using the usual simple formula and NMech's approach and I got different results ($\approx$ one order of magnitude). Symbolic and numerical integration was done using the online tool Integral Calculator. Given $M = 10^5 \mathrm{\,Nmm}$, $R = 17.5 \mathrm{\,mm}$, $b = 10 \mathrm{\,mm}$, $L_e = 15 \mathrm{\,mm}$, $y_2 = t = 4.7 \mathrm{\,mm}$. $$y_1 = R - \sqrt{R^2 - (0.5\,b)^2} = 17.5 - \sqrt{17.5^2 - (0.5 \cdot 10)^2} = 0.73 \mathrm{\,mm}$$ Approach 1: Using the usual simple formula. $$F = F_x = \frac{M}{R} = \frac{10^5}{17.5} = \mathbf{5\,714.286} \mathrm{\,N}$$ Approach 2: Using $I_p$ with outer shaft diameter $r = R$, integrating $dF(y)$. $$I_p = \frac{\pi R^4}{2} = \frac{\pi \cdot 17.5^4}{2} = 147\,324 \mathrm{\,mm}^4$$ Here I don't know what to do with the expression $\ln(\mathrm{mm}^2)$. If it were equal to $1$, then the final unit would be $\mathrm{N}$, which would make sense. From my quick Google search it seems that we should only take the logarithm of a dimensionless quantity (1), (2). $$F = \int_{0.73}^{4.7}{\frac{10^5 \cdot 15}{147\,324} \sqrt{(17.5 - y)^2 + (0.5 \cdot 10)^2} dy} \approx \mathbf{631.051} \mathrm{\,N} $$ Approach 3: Substituting $I_p$ with $\frac{\pi \cdot r^4}{2}$ and then $r$ with $\sqrt{(R-y)^2 + (0.5 \cdot b)^2}$, integrating $dF(y)$. 
$$ F(y) = \int{\frac{2 \, L_e M}{{\pi}} \cdot \left(\left(R - y\right)^2 + (0.5 \, b)^2\right)^{-\frac{3}{2}}dy} = \frac{16 \, L_e M \cdot \left(y-R\right)}{\pi b^3 \sqrt{\frac{4 \, \left(y - R\right)^2}{b^2}+1}} + C$$ $$[F(y)] = \frac{1 \cdot \mathrm{mm} \cdot \mathrm{N} \cdot \mathrm{mm} \cdot\left(\mathrm{mm} - \mathrm{mm}\right)}{1 \cdot \mathrm{mm}^3\sqrt{\frac{1 \cdot \left(\mathrm{mm} - \mathrm{mm}\right)^2}{\mathrm{mm}^2}+1}} = \frac{\mathrm{N} \cdot \mathrm{mm}^3}{\mathrm{mm}^3} = \mathrm{N} $$ $$F = \int_{0.73}^{4.7}{\frac{2 \cdot 15 \cdot 10^5}{{\pi}\cdot\left(\left(17.5-y\right)^2+\frac{10^2}{4}\right)^\frac{3}{2}}} \approx \mathbf{1\,025.790} \mathrm{\,N}$$ Approach 4: Similar to #2, integrating $dF_x(y) = dF(y) \frac{R - y}{r(y)}$. $$ \newcommand{\mm}{\mathrm{mm}} \newcommand{\N}{\mathrm{N}} F_x(y) = \int{\frac{L_e M \cdot \left(R - y\right)}{I_p}} = \frac{L_e M y \cdot \left(2 \, R - y\right)}{2 \, I_p} + C $$ $$ [F_x(y)] = \frac{\mm \cdot \N \cdot \mm \cdot \mm \cdot \left(1 \cdot \mm - \mm\right)}{1 \cdot \mm^4} = \frac{\N \cdot \mm^4}{\mm^4} = \N $$ $$ F_x = \int_{0.73}^{4.7}{\frac{15 \cdot 10^5 \cdot \left(17.5 - y\right)}{147\,324}} \approx \mathbf{597.626} \mathrm{\,N} $$ Approach 5: Similar to #3, integrating $dF_x(y) = dF(y) \frac{R - y}{r(y)}$. $$ F_x(y) = \int{\frac{2 \, L_e M \cdot \left(R - y\right)}{{\pi} \cdot \left(\left(R - y\right)^2 + (0.5 \, b)^2\right)^2}} = \frac{L_e M}{{\pi} \cdot \left(\left(R - y\right)^2 + (0.5 \, b)^2\right)} + C $$ $$ [F_x(y)] = \frac{\mm \cdot \N \cdot \mm}{1 \cdot \left(\left(\mm - \mm\right)^2 + (1 \cdot \mm)^2\right)} = \frac{\N \cdot \mm^2}{\mm^2} = \N $$ $$ F_x = \int_{0.73}^{4.7}{\frac{2 \cdot 15 \cdot 10^5 \cdot \left(17.5 - y\right)}{{\pi} \cdot \left(\left(17.5 - y\right)^2 + (0.5 \cdot 10)^2\right)^2}} \approx \mathbf{969.253} \mathrm{\,N} $$ Result from approach #4 is smaller than result from #2, same with #5 and #3. 
This is to be expected because $F_x$ is a projection of the total force $F$ onto the x-axis. What staggers me is the huge difference between #1 and the other approaches. I thought the final force would be somewhere in the interval $F \in \langle4000, 8000\rangle \mathrm{\,N}$. Should I have expected these results or is there something wrong? Answer: The simple answer is $$ F=M/r $$ and that is acceptable for practical purposes. As OP correctly notes, that does not distinguish between $F$ and $F_x$. That distinction is not considered in typical design practice. The complicated answer starts with noting that the stress distribution in keys is not uniform. In the question, it's assumed that the compressive stress is uniform, but contact with the shaft and hub at the midplane of the key leads to singular stress concentrations. Finite element modeling is probably the best tool for answering your question with more sophistication than is offered by textbooks.
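The order-of-magnitude gap reported in EDIT 1 can be reproduced with a quick numerical cross-check of approaches 1 and 5 using the same given values. This only verifies the integrations, not which model is physically appropriate:

```python
import numpy as np

# Values from EDIT 1 above (N*mm and mm)
M, R, b, Le = 1e5, 17.5, 10.0, 15.0
y1 = R - np.sqrt(R**2 - (0.5 * b)**2)   # ~0.73 mm
y2 = 4.7

# Approach 1: the usual simple formula
F_simple = M / R

# Approach 5 integrand, integrated with the trapezoidal rule
y = np.linspace(y1, y2, 20001)
f = 2 * Le * M * (R - y) / (np.pi * ((R - y)**2 + (0.5 * b)**2)**2)
F_x = np.sum(0.5 * (f[:-1] + f[1:]) * np.diff(y))

print(F_simple, F_x)   # ~5714.3 N vs ~969.3 N
```

So the quoted numbers are internally consistent; the disagreement between the approaches is a modeling question, not an arithmetic one.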
{ "domain": "engineering.stackexchange", "id": 5045, "tags": "mechanical-engineering, stresses, torque, moments, forces" }
Advantages of 2-VSB vs BPSK?
Question: Both BPSK and 2-VSB transmit one bit of information per symbol and both modulate the real part of the signal. The difference is, as far as I understand, that BPSK does not modulate the imaginary part of the complex signal at all, while 2-VSB uses a Hilbert transform to suppress the lower sideband of the transmitted signal. Are there any advantages of doing this or is it just adding complexity without any advantages? Answer: Suppressing the lower sideband decreases the bandwidth by a factor of $2$ compared to BPSK. Since we're not talking about single-sideband (SSB) modulation, but vestigial sideband (VSB) modulation, we don't gain a factor of $2$, but a factor that is slightly smaller than two, because a part of one of the sidebands is retained. Note that both BPSK and 2-VSB carry the information only in the (bi-polar) amplitude of the signal, i.e., only in the in-phase component, not in the quadrature component. The complex baseband signal for 2-VSB only occurs because the larger part of one of the sidebands is filtered out. This, however, is not necessarily done by complex baseband processing (using the Hilbert transform), but it can be done by applying a band pass filter after modulating the baseband signal. The latter option only uses real-valued processing.
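The sideband-suppression idea can be illustrated numerically. Forming the analytic signal (the real signal plus $j$ times its Hilbert transform) removes the negative-frequency sideband entirely; that is the SSB limit, whereas VSB would apply a gentler filter that keeps a vestige of it. A minimal sketch, doing the Hilbert step directly with an FFT mask rather than a filter:

```python
import numpy as np

N = 1024
n = np.arange(N)
x = np.cos(2 * np.pi * 0.1 * n)   # real signal: symmetric two-sided spectrum

# Analytic signal: zero out negative frequencies, double the positive ones
X = np.fft.fft(x)
H = np.zeros(N)
H[0] = H[N // 2] = 1.0
H[1:N // 2] = 2.0
xa = np.fft.ifft(X * H)

# Energy in each half of the resulting spectrum
Xa = np.fft.fft(xa)
pos_energy = np.sum(np.abs(Xa[1:N // 2])**2)
neg_energy = np.sum(np.abs(Xa[N // 2 + 1:])**2)
```

The negative-frequency half carries essentially zero energy while the real part of the analytic signal is still the original waveform, so no information was lost by halving the bandwidth.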
{ "domain": "dsp.stackexchange", "id": 3995, "tags": "digital-communications, bandwidth, hilbert-transform, bpsk" }
Binary data utility class
Question: I have recently been trying to get a solid grasp of how to work with binary data, both for networking and file storage, in C++. In java, there are many utility classes that made this easy. Since these don't exist in C++, I figured making a limited class that does something similar would be a good way to learn how to work with binary data. BinaryBlob.h: #pragma once #include <vector> #include <string> typedef unsigned char byte; class BinaryBlob{ public: void loadFromFile(const std::string &filePath); void writeToFile(const std::string &filePath) const; void writeBool(bool b); bool readBool(); void writeInt(int b); int readInt(); void writeString(const std::string &str); std::string readString(); void writeBytes(byte *data, unsigned length); inline void readBytes(byte *data, unsigned length); //inline so that it is inlined with the other read functions, irrelevant to non-member functions since those won't be inlined in separate implementation files private: std::vector<byte> m_binaryData; unsigned m_readIndex = 0; constexpr static unsigned BUFFER_SIZE = 1024; }; BinaryBlob.cpp: #include "BinaryBlob.h" #include <cstdint> #include <fstream> void BinaryBlob::loadFromFile(const std::string &filePath){ std::ifstream fin(filePath, std::ios::binary); byte buffer[1024]; while(fin){ fin.read(reinterpret_cast<char*>(buffer), BUFFER_SIZE); writeBytes(buffer, fin.gcount()); } } void BinaryBlob::writeToFile(const std::string &filePath) const { std::ofstream fout(filePath, std::ios::binary); fout.write(reinterpret_cast<const char*>(m_binaryData.data()), m_binaryData.size()); } void BinaryBlob::writeBool(bool b) { byte byteData = b ? 
1 : 0l; writeBytes(&byteData, 1); } bool BinaryBlob::readBool(){ byte byteData; readBytes(&byteData, 1); return byteData == 1; } void BinaryBlob::writeInt(int b){ std::uint32_t absValue = std::abs(b); bool isPositive = b > 0; writeBool(isPositive); writeBytes(reinterpret_cast<byte*>(&absValue), sizeof(std::uint32_t)); } int BinaryBlob::readInt(){ bool isPositive = readBool(); std::uint32_t absValue; readBytes(reinterpret_cast<byte*>(&absValue), sizeof(std::uint32_t)); return absValue * (isPositive ? 1 : -1); } void BinaryBlob::writeString(const std::string &str) { writeInt(str.length()); //I know this would probably be better as an unsigned value, but because this is a simple practice, no method to write unsigned values is created for(unsigned i=0;i<str.length();++i){ char c = str[i]; writeBytes(reinterpret_cast<byte*>(&c), 1); } } std::string BinaryBlob::readString() { int length = readInt(); std::string str; str.reserve(length); char *characterData = new char[length]; readBytes(reinterpret_cast<byte*>(characterData), length); for(unsigned i=0;i<length;++i){ str.push_back(characterData[i]); } delete characterData; return str; } void BinaryBlob::writeBytes(byte *data, unsigned length){ for(unsigned i=0;i<length;++i){ m_binaryData.push_back(data[i]); } } void BinaryBlob::readBytes(byte *data, unsigned length){ for(unsigned i=0;i<length;++i){ data[i] = m_binaryData[m_readIndex+i]; } m_readIndex += length; } Test program 1 (designed to test if the file io functions work): #include <string> #include <iostream> #include "BinaryBlob.h" int main(int argc, char *argv[]){ if(argc != 3){ std::cerr << "Usage: " << argv[0] << " <input-file> <output-file>" << std::endl; return -1; } std::string inputPath = argv[1]; std::string outputPath = argv[2]; BinaryBlob binaryBlob; binaryBlob.loadFromFile(inputPath); binaryBlob.writeToFile(outputPath); return 0; } Test program 2 (designed to test if the type <-> binary data functions work): #include <iostream> #include "BinaryBlob.h" int 
main(int argc, char *argv[]){ if(argc != 3){ std::cerr << "Usage: " << argv[0] << " <file> <create/read>" << std::endl; return 0; } std::string file = argv[1]; std::string command = argv[2]; BinaryBlob binaryBlob; bool b; int i; std::string str; if(command == "create"){ std::cout << "Enter a boolean: "; std::cin >> b; std::cout << "Enter an integer: "; std::cin >> i; std::cout << "Enter a string: "; std::cin >> str; binaryBlob.writeBool(b); binaryBlob.writeInt(i); binaryBlob.writeString(str); binaryBlob.writeToFile(file); }else if(command == "read"){ binaryBlob.loadFromFile(file); b = binaryBlob.readBool(); i = binaryBlob.readInt(); str = binaryBlob.readString(); std::cout << "Boolean is: " << b << std::endl; std::cout << "Integer is: " << i << std::endl; std::cout << "String is: " << str << std::endl; } return 0; } The specific things I am interested in feedback on are (although any other feedback of any sort is most certainly welcome, as long as it's remotely useful): Because I only have access to a handful of Intel and AMD cpus, which are all very similar, I didn't test this code for portability (ie. running on an ARM CPU), to see if files generated on one CPU are readable on the other. Ignoring endianness, is there anything I missed? If I am writing code designed to be run on x86 CPUs only (both Intel and AMD), but with a variety of compilers and OSes, do I need to worry about anything more than I did here (including endianness)? Is my use of reinterpret_cast correct? For the string writing, I am just casting the string's data to its byte representation. Is this reliable? I know the C++ standard does not guarantee a character encoding, but do these encodings differ in practice on x86 CPUs? What about on broader ranges (ie. ARM?). Does this depend on the compiler or OS? If I wanted to correctly handle this, such that it is fully portable, how would I do so? How far, if at all, am I jumping outside the C++ standard with this code, into undefined behavior? 
Unless this is "not at all," how can I accomplish the same thing fully within the standard? Is this even possible? Answer: Nice question. I would hesitate to port the code as it is currently implemented. For Portability Don't Ignore Warnings When the library will be distributed on multiple platforms it might be better to use the -Werror flag to make all warnings into errors. Any warnings may become errors on other architectures. At a minimum, when compiling for a portable library the -Wall flag should be used. Using Xcode on El Capitan (Mac OSX) I got multiple warnings about integer versus long mismatch. It might be better to change the length parameter in these functions to take size_t rather than unsigned int: void writeBytes(byte *data, unsigned length); inline void readBytes(byte *data, unsigned length); void writeInt(int b); A common practice these days is to make all array indexes size_t; for example, std::string::reserve is defined using size_t. Inconsistent Use of Constants In the function BinaryBlob::loadFromFile(const std::string &filePath) the buffer size is 1024 while the read uses the defined constant BUFFER_SIZE; it would be best to define buffer in terms of BUFFER_SIZE as well. It might be best to use a system-defined constant for the buffer size to improve performance. Some systems may have a 4K, 8K or 16K file buffer size. Reading the file system block size and then processing the block within the program will definitely improve performance. Inline The inline keyword is now only a recommendation and the compiler might ignore it. An optimizing compiler may inline every one of these functions as well. Deleting an Allocated Array In std::string BinaryBlob::readString() the line delete characterData; is not doing what you think it should be doing; it would be best to change this to delete [] characterData; to delete the entire array. 
Testing Everything in One Shot By moving the code within the two different mains into functions, both tests could have been completed in one test run. Tests should always be repeatable, and they may not be repeatable using command line input. It might be good to check out open source test frameworks. One such test framework for C++ is CppUnit. Style and Readability All of the for loops are compressed: for(unsigned i=0;i<str.length();++i){ It is generally more acceptable to put spaces between symbols to make it more readable: for(unsigned i=0; i < str.length(); ++i){
{ "domain": "codereview.stackexchange", "id": 28401, "tags": "c++, file, io" }
[autoware.auto] vehicle command topic changed for LGSVL?
Question: I am no longer able to use joystick control in the simulator. It looks like the simulator is listening for vehicle commands on this topic: /vehicle_cmd But the joystick interface is now publishing them to this topic: /lgsvl/vehicle_control_cmd I can remap the topic, but is there a "right" way to change which control topic the sim listens to? Originally posted by Jeffrey Kane Johnson on ROS Answers with karma: 452 on 2020-03-18 Post score: 1 Answer: I ended up just deleting my entire adehome directory and reinstalling everything. That fixed it. Originally posted by Jeffrey Kane Johnson with karma: 452 on 2020-03-18 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 34606, "tags": "ros2" }
Jarvis's March Convex Hull
Question: I'm more concerned about the coding best practices and how it's written here than the actual algorithm and math. I'm concerned about stack constraints, passing arguments,..etc. If there is anything critically wrong in the way the code is written - I would really appreciate comments! #include <iostream> #include <vector> #include <math.h> using namespace std; struct point{ float x, y; point(){ x=0; y=0; } point(float aX, float aY){ x = aX; y = aY; } }; float computePolarAngle(point convex_pt, point candidate_pt){ //move origin to convex_pt float candidate_x = candidate_pt.x - convex_pt.x; float candidate_y = candidate_pt.y - convex_pt.y; float angle = (180/M_PI)*atan2(candidate_y,candidate_x); //incase the angle goes into the negative we always want to find the counterclockwise //angle from x-axis centered at the selected point if(angle<0){ angle = 360+angle; } return angle; } //implementation of Jarvis's March ("Gift Wrapping") void convexHull(const vector<point>& points, vector<point>& convex_points, vector<int>& convex_points_indices){ //checks if(points.size() <= 2){ cout<<"points should consist of more than 2 points"<<endl; } cout<<"******************* points: *******************"<<endl; for(int index=0; index<int(points.size()) ; index++){ cout<<(points.at(index)).x<<","<<(points.at(index)).y<<endl;//.x<<","<<points[index].y<<endl; } //compute lowest point (min y-coord) point lowestPt(points.at(0).x, points.at(0).y); int lowestIndex = -1; for(int index=0; index < int(points.size()) ; index++){ if(points.at(index).y < lowestPt.y) { lowestPt.x = points.at(index).x; lowestPt.y = points.at(index).y; lowestIndex = index; } } //add lowest point to convex hull convex_points.push_back(points.at(lowestIndex)); convex_points_indices.push_back(lowestIndex); //store remaining points (initially same as points) -> deep copy vector<point> remaining_points = points; //gift wrapping part loop while we have not reached the starting convex point again point 
convex_pt(lowestPt.x, lowestPt.y); bool flag_complete = false; float eps = 1e-10; while(!flag_complete){ //compute polar angles of all points float minIndex = -1; float minAngle = 360; for(int index=0; index < int(remaining_points.size()) ; index++){ float polarAngle = computePolarAngle(convex_pt, remaining_points.at(index)); if(polarAngle < minAngle && polarAngle!=0){//second condition is so that we dont compare a point with itself minAngle = polarAngle; minIndex = index; } } bool flagX = fabs(lowestPt.x - remaining_points.at(minIndex).x)<eps; bool flagY = fabs(lowestPt.y - remaining_points.at(minIndex).y)<eps; if(flagX && flagY){ flag_complete = true; continue; } else{ convex_points.push_back(remaining_points.at(minIndex)); convex_points_indices.push_back(minIndex); convex_pt.x = remaining_points.at(minIndex).x; convex_pt.y = remaining_points.at(minIndex).y; remaining_points.erase(remaining_points.begin()+minIndex); } } cout<<"******************* convex hull: *******************"<<endl; cout<<"found "<<convex_points.size()<<" points as the convex hull:"<<endl; for(auto it = convex_points.begin() ; it!=convex_points.end() ; it++){ cout<<(*it).x<<", "<<(*it).y<<endl; } } int main(){ //example set of points where the convex hull should be the 4 corners (a slightly deformed square) vector<point> points; points.push_back(point(1,1)); points.push_back(point(1.1,-1)); points.push_back(point(-1, 1)); points.push_back(point(-1.1,-1.1)); points.push_back(point(0.75,0.5)); points.push_back(point(-0.5,0.3)); points.push_back(point(0.25,-0.8)); points.push_back(point(0.1,-0.9)); vector<point> convex_points; vector<int> convex_points_indices; convexHull(points, convex_points, convex_points_indices); } Answer: General C++ advice There's a few general suggestions here that are for the most part independent of your algorithms. 
using namespace std using namespace std; This is usually a code smell, generally speaking the only time you would really want to do this is when you are making some small throwaway program for testing a concept or making an example (or similar). In that case the reduction in typing time actually has a positive ROI. But you don't want to do it in any production code because this pollutes the global namespace which is something you should try to avoid, any benefit from time saved typing is immediately wiped out the first time your program breaks because a name conflict. See this Stack Overflow question for more info on that. If the main purpose for doing this is to cut down typing std:: then you can selectively bring in just the names you need by doing: using std::cout; using std::cin; an so on. This cuts down on typing without the downside of polluting the global namespace. Prefer initialization list In this code you are using assignment to initialize members: struct point{ float x, y; point(){ x=0; y=0; } point(float aX, float aY){ x = aX; y = aY; } }; However using the member initializer list is usually a better way of initializing variables with the constructor. struct point{ float x, y; point(): x(0), y(0) { } point(float aX, float aY): x(aX), y(aY) { } }; Doing it this way ensures that members get correctly initialized and this format can also provide the compiler opportunities to optimize the code generated. For the inbuilt default types int, char, etc. there is no performance difference but for user defined types there can be. See the C++ FAQ entry for more on this topic. 
Prefer pass by reference to const You have a function: float computePolarAngle(point convex_pt, point candidate_pt){ //function doesn't change convex_pt } It's preferable to instead pass these parameters by reference to const like so: float computePolarAngle(point const& convex_pt, point const& candidate_pt){ //function doesn't change convex_pt } There are 2 main reasons to do this: You no longer need to make an unnecessary copy of the parameters for your function You more clearly state your intent with the code; any attempted change to these parameters will now throw an error at compile time. This can prevent undesirable things from happening. We all make mistakes, this just helps us catch one class of mistake sooner. Typedefs In many places in your code you have: vector<point> You might want to make a typedef for this type: typedef std::vector<point> point_container_t; Then use that typedef throughout your code. This makes it easier to make changes later on if you decide to change the data types you use. This isn't always a clear-cut decision; choose based on the ROI for doing so in your project. Initializer list With C++11 we can condense this code: vector<point> points; points.push_back(point(1,1)); points.push_back(point(1.1,-1)); points.push_back(point(-1, 1)); points.push_back(point(-1.1,-1.1)); points.push_back(point(0.75,0.5)); points.push_back(point(-0.5,0.3)); points.push_back(point(0.25,-0.8)); points.push_back(point(0.1,-0.9)); to: vector<point> points { point(1,1), point(1.1,-1), point(-1, 1), point(-1.1,-1.1), point(0.75,0.5), point(-0.5,0.3), point(0.25,-0.8), point(0.1,-0.9) }; Once again, C++11 removes a bunch of boilerplate. Documentation You don't have any; you might want to consider adding some. I'm a big fan of doxygen with C++, have a look into that. Convex hull function There's a few issues with this function so let's break them down: Function length One of the main problems with this function is that for what it does it's way too long in terms of lines of code. 
You really should be breaking out some of this code into smaller functions. Specifically, there's a few times where you are not following the don't repeat yourself principle. For example, things like this: point lowestPt(points.at(0).x, points.at(0).y); int lowestIndex = -1; for(int index=0; index < int(points.size()) ; index++){ if(points.at(index).y < lowestPt.y) { lowestPt.x = points.at(index).x; lowestPt.y = points.at(index).y; lowestIndex = index; } } should really be in their own functions: int compute_lowest_index(vector<point> const& points){ //calculate index return index; } This will help you out when you write unit tests for your code. Making as many of these small functions that are easily testable will let you greatly improve the confidence in the correctness of your code when paired with a good suite of unit tests. Bail out early with bad input You have this check for invalid input: if(points.size() <= 2){ cout<<"points should consist of more than 2 points"<<endl; } But then you don't do anything if the input is invalid. If you know you have bad input don't do any processing on it. When you know you have bad data return immediately at that point or throw an exception. Loops You have a few examples in your code like this: for(int index=0; index<int(points.size()) ; index++){ cout<<(points.at(index)).x<<","<<(points.at(index)).y<<endl; } I'm guessing you got a compilation warning for a comparison of signed with unsigned variable. If this is why you changed the code that's a good thing, it's always good to see people compiling with all warnings then taking those warnings seriously. 
However I'd fix that warning differently by using size_t and removing that ugly cast to int: for(size_t index=0; index<points.size(); index++){ cout<<(points.at(index)).x<<","<<(points.at(index)).y<<endl; } First, you probably want to take the check for points.size() outside the loop because you don't need to evaluate it every loop iteration: size_t points_size = points.size(); for(size_t index=0; index<points_size ; index++){ cout<<(points.at(index)).x<<","<<(points.at(index)).y<<endl; } Then if we were to do this in the C++11 way we could just use a range-based for loop: for(const point& pt : points){ cout << pt.x <<","<< pt.y << endl; } This is much more terse and removes a lot of the boilerplate. Going further though, you could just make an operator overload to print a point directly. You need to add an overload for ostream's operator<< for the point class: ostream& operator<<(ostream &os, const point &pt){ os << pt.x << "," << pt.y; // print the point return os; } Then you have to change the point struct as follows to allow this: struct point{ //rest of point struct friend ostream& operator<<(ostream &os, const point &pt); }; You can then just call it like this: cout << pt << endl; Conversion from radians to degrees Currently you are converting from radians to degrees: float angle = (180/M_PI)*atan2(candidate_y,candidate_x); But you only ever use the angle to determine if one angle is greater than another. Using degrees gives you no advantage over radians in this regard, but you waste CPU cycles making the conversion and you also lose accuracy, because floats are not exact representations of decimals. So not only does this waste CPU cycles, it also makes your results less accurate. See this classic paper for more on that topic. If you need to display the results somewhere as degrees, make the conversion at that point in time. If you don't need to convert it then don't waste the overhead of the conversion. 
Prefer returning values over references

In this function that computes the convex hull you are passing in a reference to store the convex hull points that you computed:

void convexHull(const vector<point>& points, vector<point>& convex_points, vector<int>& convex_points_indices);

Your display code is in this function too. This reduces flexibility for future users: what if someone just wants to compute the value and doesn't want to print out the results at the same time? Currently you don't give them that option, because you have the output tightly coupled with the computation. Ideally you should have a very clear separation between UI and processing. So, generally speaking, it's a good idea to return the results instead of passing a reference, and let the user of your function decide how to display them. Additionally, this makes your code closer to pure functional code and will greatly reduce the cognitive complexity for people who use your code later on. You want your functions to be like a black box where you can provide parameters and get a result out. When you have void functions that manipulate a reference, you can no longer treat the function that way, because it is manipulating a variable you pass into it: you now have to look into the internals of the function to see what it is doing to your variables. Fewer side effects lead to greater productivity, which is why I would suggest just returning a value:

vector<point> convexHull(const vector<point>& points, vector<int>& convex_points_indices){
    //return the result
}
{ "domain": "codereview.stackexchange", "id": 11101, "tags": "c++, algorithm, c++11, computational-geometry" }
Falling rotating object in higher order potential fields
Question: For which $n$ would an object with a non-zero rotation fall to the center of this field? $$\alpha >0\\ V(r) = \frac{\alpha}{r^n}$$ (Apparently it should never touch the center if it has non-zero rotation and $n=1$.) I am totally stumped, as I can't see why the order should change whether or not it falls into the center. Any ideas/explanations would be greatly appreciated. Answer: Consider an object with non-zero rotation and velocity vector $\bf{v}$. That it has a non-zero rotation means that, at least as long as $r \neq 0$, $\textbf{v}\times\textbf{r} = rv_\perp \neq 0$, where $\bf{r}$ is the radial position vector and $v_\perp$ is the component of $\bf v$ perpendicular to $\bf r$. As a simple consequence of conservation of angular momentum we must have $$ v_\perp \propto \frac{1}{r}. $$ This imposes $$ \dot{v}_\perp = \frac{dv_\perp}{dt} = \frac{dv_\perp}{dr}\frac{dr}{dt} \propto \frac{1}{r^2}v_r. $$ On the other hand, the potential \begin{align} V(\textbf{r}) &\propto \frac{1}{r^n} \end{align} imposes an acceleration $$ \textbf{a} \propto -\nabla V \propto \frac{1}{r^{n+1}}\textbf{e}_r, $$ whence $\dot{v}_r = |\textbf{a}|$. We thus have $$ \frac{\dot{v}_\perp}{\dot{v}_r} \propto r^{n-1}v_r. $$ For $n > 1$ we have \begin{align} \lim_{r\to 0}\frac{\dot{v}_\perp}{\dot{v}_r} = 0. \end{align} We can interpret this to mean that as the object is brought closer to the center the acceleration gradually becomes purely radial. If the potential acts attractively, this means that the object may eventually collide with the center. However, for $n = 1$ we have \begin{align} \lim_{r\to 0} \frac{\dot{v}_\perp}{\dot{v}_r} \propto \lim_{r\to 0} v_r \neq 0, \end{align} which means that even when the object is brought closer to the center the acceleration will never become radial: the object will never accelerate directly towards (nor directly away from) the center. And since it was originally not moving towards the center, it will never collide with the center.
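The same conclusion can be reached through the effective radial potential $U_{\rm eff}(r) = L^2/(2mr^2) + V(r)$. Here is a minimal numerical sketch of my own (not from the answer above); it assumes an *attractive* potential $V(r) = -\alpha/r^n$ and unit mass, angular momentum, and $\alpha$:

```python
def u_eff(r, n, alpha=1.0, L=1.0, m=1.0):
    """Effective radial potential for an attractive central force -alpha/r^n."""
    return L**2 / (2 * m * r**2) - alpha / r**n

# For n = 1 the centrifugal term ~1/r^2 dominates as r -> 0, producing an
# infinite barrier: the object can never reach the center.
# For n = 3 the attractive term ~1/r^3 dominates, U_eff -> -infinity, and a
# sufficiently energetic object can spiral into the center.
small_r = 1e-3
print(u_eff(small_r, n=1))  # large positive: centrifugal barrier
print(u_eff(small_r, n=3))  # large negative: capture is possible
```

The sign of $U_{\rm eff}$ at small $r$ mirrors the ratio $\dot{v}_\perp/\dot{v}_r$ argument above: for $n=1$ the barrier wins, for $n>2$ the attraction wins.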
{ "domain": "physics.stackexchange", "id": 34271, "tags": "homework-and-exercises, newtonian-mechanics, rotational-dynamics, potential" }
wait for service:Service [/gazebo/set_physics_properties] has not been advertised, waiting
Question: When I am trying to launch the my_world.launch I am facing this problem. Code for my_world.launch is <?xml version="1.0" encoding="UTF-8"?> <launch> <arg name="debug" default="false" /> <arg name="gui" default="false" /> <arg name="pause" default="true" /> <arg name="world" default="$(find my_gazebo)/world/empty_world.world" /> <include file = "$(find gazebo_ros)/launch/empty_world.launch"> <arg name="world_name" value="$(arg world)" /> <arg name="debug" value="$(arg debug)" /> <arg name="gui" value="$(arg gui)" /> <arg name="paused" value="$(arg pause)" /> <arg name="use_sim_time" value="true" /> </include> </launch> empty_world.world code is here: <?xml version="1.0" ?> <sdf version="1.6"> <world name="default"> <include> <uri>model://sun</uri> </include> <include> <uri>model://ground_plane</uri> </include> </world> </sdf> I am facing this problem: glv@VaraPrasad:~/catkin_ws$ roslaunch my_gazebo my_world.launch ... logging to /home/glv/.ros/log/09788d08-d7b3-11e8-a35c-9822ef7fc42b/roslaunch-VaraPrasad-4041.log Checking log directory for disk usage. This may take awhile. Press Ctrl-C to interrupt Done checking log file disk usage. Usage is <1GB. started roslaunch server http://VaraPrasad:44853/ SUMMARY ======== PARAMETERS * /rosdistro: kinetic * /rosversion: 1.12.14 * /use_sim_time: True NODES / gazebo (gazebo_ros/gzserver) ROS_MASTER_URI=http://localhost:11311 process[gazebo-1]: started with pid [4058] [ INFO] [1540402702.643569843]: Finished loading Gazebo ROS API Plugin. [ INFO] [1540402702.644173326]: waitForService: Service [/gazebo/set_physics_properties] has not been advertised, waiting... [ INFO] [1540402702.908795885, 0.024000000]: waitForService: Service [/gazebo/set_physics_properties] is now available. [ INFO] [1540402702.963873717, 0.078000000]: Physics dynamic reconfigure ready. ^C[gazebo-1] killing on exit shutting down processing monitor... ... 
shutting down processing monitor complete done glv@VaraPrasad:~/catkin_ws$ glv@VaraPrasad:~/catkin_ws$ roslaunch my_gazebo my_world.launch ... logging to /home/glv/.ros/log/09788d08-d7b3-11e8-a35c-9822ef7fc42b/roslaunch-VaraPrasad-4309.log Checking log directory for disk usage. This may take awhile. Press Ctrl-C to interrupt Done checking log file disk usage. Usage is <1GB. started roslaunch server http://VaraPrasad:37831/ SUMMARY ======== PARAMETERS * /rosdistro: kinetic * /rosversion: 1.12.14 * /use_sim_time: True NODES / gazebo (gazebo_ros/gzserver) ROS_MASTER_URI=http://localhost:11311 process[gazebo-1]: started with pid [4326] [ INFO] [1540402736.589018029]: Finished loading Gazebo ROS API Plugin. [ INFO] [1540402736.589433988]: waitForService: Service [/gazebo/set_physics_properties] has not been advertised, waiting... I am using ROS Kinetic, Ubuntu 16.04 and Gazebo 9. Thanks in advance. Originally posted by GLV on ROS Answers with karma: 33 on 2018-10-24 Post score: 0 Answer: Can you copy and paste the terminal output instead of a screenshot please? It's not searchable like that. For your issue, you launch Gazebo with pause set to true so you don't start the service. This will start it: roslaunch my_gazebo my_world.launch pause:=false The proper solution is to modify this value in your launch file. Moreover, you tried with verbose:=true, yet you didn't define the arg verbose in your launch file, so it's basically doing nothing. Finally, I don't know if you are aware of it, but you also set gui to false so you won't have the Gazebo GUI with your launch file; set it to true in your launch file to launch it, or simply type the command $gzclient after your roslaunch. Originally posted by Delb with karma: 3907 on 2018-10-24 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by GLV on 2018-10-24: Thank you Mr. Delb. Your answer worked for me.
Basically I don't have much idea about this. I am trying to launch my robot (a mobile robot created in Gazebo) in Gazebo with roslaunch, and I also want to control it with teleop keys. Any suggestions regarding arguments? Comment by Delb on 2018-10-24: Good, please mark the answer as correct too. For your question, you can create a new one if you have an issue, but I would suggest looking at this Gazebo tutorial
{ "domain": "robotics.stackexchange", "id": 31960, "tags": "gazebo, ros-kinetic, ubuntu, ubuntu-xenial" }
Why do we blink instead of winking each eye independently?
Question: Question Why do we blink both eyes at the same time rather than winking each eye as needed? Why would winking independently be better? The benefit would be a minor improvement whereby a person would receive continuous sight of the area of vision shared by both eyes. I'd also assume that it would be more likely for the wink-as-needed mechanism to evolve than the blink-as-needed mechanism were there no benefit to blinking over winking. Why then has evolution favoured blinking? My guess at why is that the overhead on our brain of switching between binocular vision to monocular vision is more significant than the overhead of momentarily pausing our visual processing. I also believe that some animals don't blink, but wink; e.g. chameleons. This supports my theory as their eyes operate independently, so their brains would have to cope with the variance in each eye's information anyway. That said, their wink is different; i.e. they roll their eye inside their head, rather than having an eyelid. Research so far I found a couple of Reddit posts: https://www.reddit.com/r/explainlikeimfive/comments/2kzx4s/eli5_why_does_blinking_take_me_almost_no_effort/ - explains why blinking is easier than winking; but only based on how we are rather than how we got there (i.e. why evolution would have resulted in different muscles for winking vs blinking). https://www.reddit.com/r/explainlikeimfive/comments/3mijgw/eli5_why_do_we_blink_with_both_eyes_at_same_time/ - doesn't have any hard fact answers; just discussion of opinion. Answer: Looking at it completely in terms of evolutionary pressure: 1.1 The whole idea of binocular vision for humans, (and some other species) is depth perception. If you suggest continuous, alternate winking, the cognition of the individual would be seriously affected. 
The 'wink as needed' mechanism (assuming all other visual processes are as they have evolved today) would be like continuously alternating between a pair of binoculars and a telescope while still trying to move forward. 1.2 If you are suggesting that there would still be an evolutionary pressure for winking, then it's very likely that the evolutionary pressure for coordinated motion/efficient cognition, acting together with the evolutionary pressure for depth perception, would be much stronger than the pressure for winking. (This would especially be true if eye-brain coordination evolved before the blinking mechanism, as is seen to be the case, which explains your point on 'how we are rather than how we got there'. Obviously fish have evolved some eye-brain coordination, but don't blink.) This link explains the evolutionary pressure for depth perception/stereopsis to an extent: https://en.wikipedia.org/wiki/Depth_perception#Survival 2.1 As for the chameleon, the winking/eye-rolling mechanism might be very closely linked to the evolution of chameleon vision. It is seen as the link for studying the evolution of binocular vision and stereopsis (depth perception) from monocular vision. https://en.wikipedia.org/wiki/Chameleon_vision
{ "domain": "biology.stackexchange", "id": 5103, "tags": "eyes, human-eye, human-evolution" }
On modifying imported STL files in Solidworks
Question: Some weeks ago I downloaded a couple of 3D models from Thingiverse in STL format and I wanted to make some minor modifications before printing. The modifications were: I wanted to split a multi-part STL file into separate models (I had trouble printing all of them together) I wanted to move a feature a few millimeters to improve (IMHO) the load-carrying capacity of the model. However, upon importing it into SolidWorks I realized that I was unable to edit it by applying new features based on the existing STL. (From what I recall, I was not even able to remove some of the parts with a cut-extrude feature.) I did a little search and realised that STL files are generally not editable. (In the end, I used Blender to edit one part, and I made the other part from scratch.) So my (main) question is: Is there a procedure I can follow so that I can convert an STL file to a native SLDPRT file? (FeatureWorks has similar functionality but I couldn't apply it.) Additionally, I have the following bonus questions (I can create separate posts if you feel the question needs more focus): Is the non-editability of the STL a feature meant to protect intellectual property (similar to a PDF)? What is the concept of STL in layman's terms? Does it store vertices/edges/faces/volumes? (I got the impression that the STL builds the model by creating basic tetrahedra.) Answer: Let's look at your bonus questions first: No. Intellectual property isn't relevant, and STLs are definitely "editable" - I use Meshmixer to edit them directly. It is meant to make a file safe to open on pretty much any device with predictable results, similar to a PDF. STLs store the vertices. SOLIDWORKS creates models using what's known as "Boundary Representation" or BREP, and creates a 'mathematically perfect' shape (to within floating point calculation tolerances).
You will notice that a cylinder created in SOLIDWORKS is a single smooth surface, where a cylinder saved as an STL is made up of numerous flat polygons, which bridge the gaps between the vertices. The size of these facets (and so the tradeoff between STL complexity and accuracy to the ideal geometry) is determined during export from BREP to Mesh. Mesh support is pretty new in SOLIDWORKS, it's improved hugely over the more recent software versions, but still has a way to go. The information below is based on SW2020. When you drag/drop an STL into SOLIDWORKS, by default, it imports as a graphic body. Your first step should be: INSERT -> FEATURES -> CONVERT TO MESH BODY. Select everything, hit OK, and then delete all graphic bodies from the feature tree, leaving only "Imported" bodies. Splitting multi-part STLs is usually handled directly in the slicing software. You can simply delete the "Imported" bodies that you don't want, or use a "Delete/Keep Bodies" feature to do so non-destructively if you prefer. Moving a feature is a little trickier, and may be best handled, again, with mesh-specific software such as Meshmixer or Blender, depending on your specific model. That said, it is possible to edit Mesh bodies with SOLIDWORKS. There are two main ways you can interact with Mesh bodies. These are with surfaces (e.g. Surface Cut, Split, etc.), or with another mesh body, using the "Combine" tool. So, to Add a feature to your mesh, you can model said lump as normal, use "Convert to Mesh Body" to create a mesh, with whatever resolution you require, and then use "Combine" and "Add" to join these together. To Cut a feature from your mesh, it's best to use the same process with the "Subtract" option, as this gives you control over the resolution. Alternatively, if the cutting tool sticks out of the original mesh, you can delete one face of the solid to make a surface, and use surface cut. 
To move a feature, therefore, your would need use a surface or plane with the "split" tool to separate the feature from the main body, use "move/copy bodies" to place it in your desired location, and then use "combine" to reattach it when you are done. With a native SOLIDWORKS file, you can use features like "move face", and the other faces around the feature being moved are able to 'grow' to fill in any gaps. STL files simply do not contain the data of where their faces are in the same way, and cannot afford this functionality. In a Mesh editing specific software, you would be able to select the STL faces/vertices directly and move these around, but this would likely leave deformed polygons either side of the moving feature - which requires additional steps to fix anyway! EDIT: I never answered your actual main question - can you convert an .STL to a native .SLDPRT, allowing you to interact using all tools? Yes and no. You can convert the mesh to a solid BREP made of lots of flat surfaces. This takes a very long time, and is not recommended except on the simplest of models. You can also use the Scan-To-3D Add in to assist on re-modelling a mesh part. You still need to model the part 'from scratch', but sketch generation is considerably sped up. This is a whole 'nother can of worms, and not really intended for 'Thingiverse modifications" http://help.solidworks.com/2020/english/SolidWorks/scanto3d/c_Scanto3d_overview.htm
{ "domain": "engineering.stackexchange", "id": 4049, "tags": "design, solidworks" }
PR2 object manipulation hand posture fail: `Hand posture controller timed out`
Question: I have been running the PR2 tabletop manipulation demo from the following link: http://www.ros.org/wiki/pr2_pick_and_place_demos/Tutorials/The%20Pick%20and%20Place%20Autonomous%20Demo Since I'm not using the actual pr2, I've altered the demo slightly to allow it to work. Instead of 'roslaunch /etc/ros/robot.launch' (since I can't seem to find it in my ros install), I run these four commands in order: roslaunch gazebo_worlds empty_world.launch roslaunch pr2_gazebo pr2.launch roslaunch gazebo_worlds table.launch roslaunch gazebo_worlds cylinder_object.launch Also, I adjusted the location of the cylinder object so it lies towards the edge of the table adjacent to the robot and in the left bisection of the table. This forces the demo to always choose the left arm to start. Then, I "start" the autonomous demo-- "start" being the command I send to the python script menu asking what action the PR2 robot should take. The command starts the autonomous pick and place demo. Both arms retract to a starting position behind the table, the PR2 head scans the area for objects, the scan results in 1 object found, the PR2 positions its left arm over the object, and finally it spouts this error: [ERROR] [1311602621.821849339, 143.811000000]: Hand posture controller timed out on goal (1) [ERROR] [1311602621.889935325, 143.811000000]: Grasp error; exception: grasp execution:mechanism:Hand posture controller timed out No matter where I position the cylinder object on the table, it always results in this error. I have not tried every table position for the cylindrical object, obviously. Do I need to spawn a different object? Put the object in just the right place? Why won't it pick up the cylindrical object? Thanks for your input on this frustrating matter. 
Originally posted by seanarm on ROS Answers with karma: 753 on 2011-07-25 Post score: 3 Original comments Comment by seanarm on 2011-07-25: The cylinder is gazebo_worlds/objects/cylinder_object.urdf.xacro. It has a base to make it stand on end without falling. The size of the actual cylinder, as defined in that file, is: (see next comment) Comment by seanarm on 2011-07-25: Tutorial doesn't work for me as written because I don't have a /etc/ros/robot.launch, nor do I have a robot.launch that I can find in my ROS installation. The change I made opens gazebo and spawns a PR2, a table, and a cylinder object. Comment by tfoote on 2011-07-25: How big is the cylinder? Comment by Asomerville on 2011-07-25: "I have altered..." Are you saying that the tutorial doesn't work as written, and you've made a change so that it does? Answer: That error is caused by the simulated gripper not "settling" on the grasped object - essentially, in simulation, the gripper joint keeps making small movements to account for simulation error, and therefore the controller never considers that it has stalled. You can change what the controller considers "stall" velocity by editing pr2_controller_configuration_gazebo/pr2_default_controllers.launch and putting in the following parameters: Originally posted by Matei Ciocarlie with karma: 586 on 2011-07-25 This answer was ACCEPTED on the original site Post score: 4 Original comments Comment by hsiao on 2011-07-26: Python demo script crashes? What was the error? Comment by Matei Ciocarlie on 2011-07-26: Unfortunately, in simulation, holding grasped objects is not very stable. The simulator keeps taking small corrective improvements until eventually the object slips out, especially if it's a cylinder... Maybe something like a cube with the faces parallel with the gripper pads will be held longer? Comment by seanarm on 2011-07-26: This worked.
The python demo script crashes near the end of the routine, but I suspect this has something to do with the position of the cylinder object on the table... maybe. Regardless, it picks up the cylinder object, then drops it a short while later.
{ "domain": "robotics.stackexchange", "id": 6246, "tags": "ros, object-manipulation, pr2-tabletop-manipulation-apps" }
CakePHP change 'miniMap' action and respect template
Question: I have some questions on how I can improve this "action" (method) in my "controller": My template has a navbar with dynamic content (if the user is logged in, a special button appears and his name appears in the navbar, among other customizations); the inclusion of modals is also dynamic, based on whether the user is logged in or not. To support this, my controller calls some methods that check whether the user is logged in and insert the data into the template. I am inserting code that searches the database (Query Builder) directly into the action. I am using a comment line like "//--------------------------" to separate concepts and operations. Is that correct? I'm using one full template for every action (e.g. one full template for the add action (method) of the Products controller). (Note: most pages (add, edit, delete, index) use different .js and .css files.) Controller:

<?php
namespace App\Controller;

use App\Controller\AppController;
use Cake\Event\Event;
use Cake\ORM\TableRegistry;

class StoresController extends AppController
{
    public function miniMap()
    {
        $setting = [
            'fields' => ['id', 'banner_description', 'path_banner', 'url_redirect'],
            'conditions' => ['banner_type_id' => 2],
            'limit' => 1
        ];
        $fullBanners = TableRegistry::get('Banners')
            ->find('all', $setting)->hydrate(false)->toArray();
        $this->set('fullBanners', $fullBanners);
        //-------------------------------------------------------------------------
        $setting = [
            'fields' => ['id', 'banner_description', 'path_banner', 'url_redirect'],
            'conditions' => ['banner_type_id' => 1],
            'limit' => 3
        ];
        $smallBanners = TableRegistry::get('Banners')
            ->find('all', $setting)->hydrate(false)->toArray();
        $this->set('smallBanners', $smallBanners);
        //-------------------------------------------------------------------------
        $this->set('userId', $this->Auth->user('id'));
        //-------------------------------------------------------------------------
        $this->set('username', $this->Auth->user('username'));
    }
}

Template:

<?php $this->layout =
false; ?>
<!DOCTYPE html>
<html>
<head>
    <?= $this->Html->charset() ?>
    <?= $this->Html->meta('viewport','width=device-width, initial-scale=1.0') ?>
    <?= $this->Html->meta('title',$pageTitle) ?>
    <?= $this->Html->meta('favicon.ico','/cart.png', ['type' => 'icon']) ?>
    <?= $this->Html->meta('keywords','') ?>
    <?= $this->Html->meta('description','') ?>
    <?= $this->Html->meta('robots','index,follow') ?>
    <?= $this->Html->css('library/datepicker/css/datepicker.css') ?>
    <?= $this->Html->css('library/bxslider-4-4.1.2/jquery.bxslider.css') ?>
    <?= $this->Shrink->css(['styles/style.css', 'styles/menu-plugin.css']) ?>
    <?= $this->Html->script('library/bxslider-4-4.1.2/jquery.bxslider.min.js',['defer' => true]) ?>
    <?= $this->Html->script('actions/main.js',['defer' => true]) ?>
    <?= $this->Shrink->fetch('css') ?>
</head>
<body>
    <?= $this->element('Navbar/navbar_main') ?>
    <div class="wrapper">
        <div class="container">
            <div class="row">
                <?= $this->element('Body/categories') ?>
                <?= $this->element('Body/stores') ?>
            </div>
        </div>
    </div>
    <?= $this->element('Footer/footer_information') ?>
    <?php if ($userId == false): ?>
        <?= $this->element('Modal/create_account_modal') ?>
        <?= $this->element('Modal/login_modal') ?>
    <?php else: ?>
        <?= $this->element('Modal/logout_modal') ?>
    <?php endif; ?>
</body>
</html>

Answer: There are numerous things you can do to make the code easier to maintain.

Put code in your models

The miniMap function calls find on the banners table twice, with config that doesn't change.
It'd make things a lot cleaner to do this (note the conventional CakePHP class name, BannersTable, matching the Banners alias used in the controller):

class BannersTable extends Table
{
    public function full()
    {
        $setting = [
            'fields' => ['id', 'banner_description', 'path_banner', 'url_redirect'],
            'conditions' => ['banner_type_id' => 2],
            'limit' => 1
        ];
        return $this
            ->find('all', $setting)->hydrate(false)->toArray();
    }

    public function small()
    {
        $setting = [
            'fields' => ['id', 'banner_description', 'path_banner', 'url_redirect'],
            'conditions' => ['banner_type_id' => 1],
            'limit' => 3
        ];
        return $this
            ->find('all', $setting)->hydrate(false)->toArray();
    }
}

As should be apparent written like this, the settings arrays are near duplicates - you can easily consolidate the two methods into one, or change them into finder methods as you see fit. The controller code then becomes:

public function miniMap()
{
    $this->loadModel('Banners');
    $fullBanners = $this->Banners->full();
    $smallBanners = $this->Banners->small();
    $userId = $this->Auth->user('id');
    $username = $this->Auth->user('username');
    $this->set(compact('fullBanners', 'smallBanners', 'userId', 'username'));
}

which is much more concise. This also does away with any need to delimit the code with comment lines (I wouldn't recommend that; if you feel it's necessary, put each delimited block of code in a separate method). Note that the auth component stores data in the session, which is accessible in the view. Consider reading the current user data directly out of the session rather than passing variables around which contain information that's duplicated elsewhere.

Use caching

There's no need to get the banner info from the db on every request, you can wrap that in a cache call:

use Cake\Cache\Cache;
...
public function miniMap()
{
    list($fullBanners, $smallBanners) = Cache::remember('banners', function () {
        $this->loadModel('Banners');
        $fullBanners = $this->Banners->full();
        $smallBanners = $this->Banners->small();
        return [$fullBanners, $smallBanners];
    });
    $userId = $this->Auth->user('id');
    $username = $this->Auth->user('username');
    $this->set(compact('fullBanners', 'smallBanners', 'userId', 'username'));
}

Template structure

The template is clean and easy to read but it is not a good habit to put the layout in the view file. Especially since the reason given is this:

Full template is because the css and js is different between other templates

A view file should look more like this:

<?php
$this->Html->css('library/datepicker/css/datepicker.css', ['block' => true]);
...
?>
<ul>
    <?php foreach($examples as $example): ?>
    <li><?= .. whatever code to output the content ?> </li>
    <?php endforeach; ?>
</ul>

With a layout that looks more like this:

<html>
<head>
    <?= $this->fetch('meta');?>
    <?= $this->fetch('css');?>
</head>
<body>
    <?= $this->fetch('content'); ?>
    <?= $this->fetch('script'); ?>
</body>
</html>

By making use of the block option, the same layout file can be used even though the css/js/whatever in each template is different. If the layout structure differs - that's a good reason to use a different layout file via:

$this->layout = 'different';

in the template file.

Miscellaneous comments

Don't use classes you aren't using - Event is not used.

Load models using loadModel - Using the table registry of course works, but loading models in controllers is normally done with loadModel - it is more efficient.

The template makes no reference to the variables being set in the method miniMap - fullBanners, smallBanners and username are unused - if they aren't used, don't set them.
Provide a complete example - The controller code and template don't relate to each other - it's hard to put either into an appropriate context as the code in the question, whilst complete enough to comment on, isn't complete enough to see how it's used.
{ "domain": "codereview.stackexchange", "id": 17206, "tags": "php, template, controller, cakephp" }
Why does the $L_2$ norm give the shortest path between 2 points?
Question: Why not the $L_1$ or $L_3$ distances? Is there some deep reason why the universe (at least at human scales) looks pretty much Euclidean? Could we imagine a different universe where a different $L_p$ metric would seem "natural"? I know it's kind of a deep question, but the specialness of 2 here has always made me wonder. Answer: If we want a sense of localness (or calculus to work), we'd like to be able to obtain the length by adding up the lengths of pieces of the path (for example using a ruler, or counting paces as we walk along the path between two points). However, even considering just two dimensions we see something interesting for $L_p$. $$\left(|x|^p + |y|^p\right)^{1/p} = \sum_{i=1}^N \left(\left|\frac{x}{N}\right|^p + \left|\frac{y}{N}\right|^p\right)^{1/p}$$ This trivially works with $p=1$, and due to a special symmetry at $p=2$ it works there as well. This will not work for other $p\neq 0$ (I am unsure of how to extend the definition to check $p=0$). The special symmetry at $p=2$ is that the distance measurement becomes rotationally invariant. So the seemingly mundane requirements (space has more than one dimension, locality, uniformity) seem to already select $L_2$ as special. Any other choice would give a preferred coordinate system, and possibly break locality. So what would a different universe, in which $L_1$ or something else is the natural choice, look like? If you imagined an N-dimensional Cartesian lattice world, one with discrete lengths and a clearly preferred coordinate basis, that would make $L_1$ a more natural choice. I'm not sure of a good picture for a universe in which $L_p, p>2$ would be a natural choice. There would be preferred directions, and you could only consider an object as a whole (not in parts), which seems to suggest that in such a hypothetical universe you couldn't even experience your life as a sequence of moments (which I guess would make sense if we have highly non-local physics and therefore causality is out the window).
Interesting question.
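The rotational-invariance claim is easy to check numerically. A small sketch of my own (not part of the original answer): rotate a 2-D vector and compare its $L_p$ norms before and after; only the $p=2$ norm is preserved.

```python
import math

def lp_norm(v, p):
    """L_p norm of a 2-D vector."""
    return sum(abs(x) ** p for x in v) ** (1.0 / p)

def rotate(v, theta):
    """Rotate a 2-D vector by angle theta (radians)."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

v = (3.0, 4.0)
w = rotate(v, 0.7)  # arbitrary rotation angle

for p in (1, 2, 3):
    print(p, lp_norm(v, p), lp_norm(w, p))
# Only the p = 2 norms agree. The p = 1 and p = 3 norms change under
# rotation, i.e. they single out a preferred coordinate system.
```

This is the concrete content of the "preferred coordinate system" remark: any $p \neq 2$ distance depends on how the axes are oriented.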
{ "domain": "physics.stackexchange", "id": 16683, "tags": "metric-tensor, geometry" }
Insect (I hope) identification
Question: Look, let's get something straight. I am not the world's biggest sissy when it comes to bugs, but I'm not David Attenborough either. Now that that's out of the way, down to business. Living in Michigan as I do, I was quite surprised to find what looked like the spawn of Predator and a cicada living in my back yard. Suffice it to say it made me nervous. In one of the pictures below, I have included a cigarette with about 3 drags smoked off of it for sizing. Please offer insight into the true nature of this animal. Answer: In the absence of someone more qualified, I believe this to be a prairie crayfish, based on where it was found (land) and what it is (a decapod of some kind that looks like a crayfish). Therefore it is unfortunately not an insect. It's in the right range and roughly the right terrain. I welcome entomologically qualified advice!
{ "domain": "biology.stackexchange", "id": 3945, "tags": "species-identification" }
Naimark's theorem - simulating POVMs with PVMs
Question: I am having trouble understanding Naimark's theorem (sometimes spelled Neumark's) from books (e.g. Watrous). The Wikipedia page is clearer but most calculations are not justified. I want to construct the PVM associated with the state discrimination POVM. Can someone explain to me where the expression ${\displaystyle {\sqrt {F_{?}}}={\sqrt {\frac {2|\langle \varphi |\psi \rangle |}{1+|\langle \varphi |\psi \rangle |}}}|\gamma \rangle \langle \gamma |}$ comes from, where ${\displaystyle |\gamma \rangle ={\frac {1}{\sqrt {2(1+|\langle \varphi |\psi \rangle |)}}}(|\psi \rangle +e^{i\arg(\langle \varphi |\psi \rangle )}|\varphi \rangle )}$? And why does $U_{UQSD}$ output what it does? Answer: The tl;dr of how to go from a given POVM to a PVM is the following: Take your state on which you do the measurement to a larger Hilbert space using a linear isometry $A$. Do a projective measurement in the ancilla registers of the larger Hilbert space. For a POVM with elements $\{E_i\}$ where $\sum_i E_i = I$, consider the operator $$A = \sum_i U\sqrt{E_i}\otimes \vert i\rangle,$$ where $U$ is an arbitrary unitary. Note that you have a second unitary freedom in defining the basis of the ancilla system i.e. you can replace all the states $\vert i\rangle$ with $U'\vert i\rangle$ for some other unitary $U'$ and everything that follows still works. You can work out that $A^\dagger A = I$ i.e. $A$ is an isometry. Having taken your quantum state to a larger Hilbert space using $A$, you can now do a projective measurement. The PVM elements $\{\Pi_i\}$ are given by $\Pi_i = I\otimes \vert i\rangle\langle i\vert$, which obviously satisfy the requirements of a PVM. Since $E_i = A^\dagger (I\otimes \vert i\rangle\langle i\vert)A$, we have that the post-measurement probabilities are correct i.e. $$\mathrm{Tr}(E_i\rho) =\mathrm{Tr}(\Pi_i (A\rho A^\dagger)).$$ The right hand side is manifestly a composition of the isometry followed by a projective measurement.
Following the notation from the Wikipedia page, for state discrimination between two states, you have a POVM with elements $\{F_\psi, F_\phi, F_?\}$. You first find these three POVM elements (see here for how). Next, take the square roots of those elements and append an ancilla register in order to construct the required isometry $A$. It is notationally cumbersome to write $A$ out so I will omit the explicit answer you desire but hopefully you understand how to get there. Once you have $A$, you do the projective measurement on the ancilla registers and this implements the POVM.
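The dilation recipe in the answer is easy to check numerically. The sketch below (my own illustration, not part of the original answer) uses NumPy with the symmetric "trine" POVM on a qubit; the specific POVM, the density matrix, and the choice $U = I$ are all assumptions made for the demo:

```python
import numpy as np

def psd_sqrt(M):
    """Matrix square root of a positive semidefinite Hermitian matrix."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.conj().T

# Trine POVM on a qubit: E_i = (2/3)|psi_i><psi_i| with three states 120 degrees apart.
angles = [0, 2 * np.pi / 3, 4 * np.pi / 3]
states = [np.array([np.cos(t), np.sin(t)]) for t in angles]
E = [(2 / 3) * np.outer(s, s) for s in states]
assert np.allclose(sum(E), np.eye(2))          # POVM completeness

# Isometry A = sum_i sqrt(E_i) tensor |i> (taking U = I): stacking the 2x2
# blocks vertically puts block i in the ancilla slot |i>, giving a 6x2 matrix.
A = np.vstack([psd_sqrt(Ei) for Ei in E])
assert np.allclose(A.conj().T @ A, np.eye(2))  # A^dagger A = I, so A is an isometry

# PVM on the ancilla: Pi_i = |i><i| tensor I, i.e. project onto block i of rows.
rho = np.array([[0.7, 0.2], [0.2, 0.3]])       # some density matrix
big_rho = A @ rho @ A.conj().T
for i, Ei in enumerate(E):
    p_povm = np.trace(Ei @ rho).real
    p_pvm = np.trace(big_rho[2 * i:2 * i + 2, 2 * i:2 * i + 2]).real
    assert np.isclose(p_povm, p_pvm)           # Tr(E_i rho) == Tr(Pi_i A rho A^dagger)
print("POVM and dilated PVM give identical outcome probabilities")
```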
{ "domain": "quantumcomputing.stackexchange", "id": 5379, "tags": "measurement, povm, state-discrimination" }
Relation between RAM and Turing machine
Question: Denote by $D$ a set of finite sequences of integers. In Papadimitriou's "Computational Complexity", Theorem 2.5 proves that if a RAM program $\Pi$ computes a function $\phi$ from $D$ to the integers in time $f(n)$, then there is a $7$-string Turing machine $M$ which computes $\phi$ in time $O(f(n)^3)$. Consider the following example: $\phi(S)$ is the sum of the first two numbers of a finite sequence $S$, or $0$ if $|S| < 2$. It seems to me that a RAM, as defined in Papadimitriou's book, computes $\phi$ in time $O(1)$. Though, any Turing machine computing $\phi$ must work for at least linear time (we can take an input of length $n$ with only two numbers in the sequence). I don't see how to cope with this contradiction. Could you please describe to me where I am wrong? Thanks a lot. Answer: You're right, there is a missing assumption that $f(n)$ is at least linear. Indeed, the last line of the proof states that the operands are of size $O(f(n))$, while what has been proven before is $O(f(n)+l(I)+l(B))$. Since $l(I)$ is $O(n)$, the hypothesis is needed for this simplification.
{ "domain": "cs.stackexchange", "id": 3052, "tags": "turing-machines, time-complexity, simulation, computation-models" }
Magnitude refers to number or number with units?
Question: This question is about terminology for physical quantities. When we talk about magnitude (while talking about scalars and vectors), do we refer to just the number, or the number along with its units? Example: if a person weighs 120 pounds, then "120" is the numerical value and "pound" is the unit. Which is the magnitude? 120? Or 120 pounds? EDIT: In the book I'm using it's written: The number indicates the magnitude of the scalar quantity and is inversely proportional to the unit chosen. This statement is wrong. Right? It's not the number alone. It's along with the units. Answer: After a bit of thinking I found the answer myself. Let us consider an example: the mass of an ant is 1000 milligrams. The mass of an elephant is 1 ton. If just the number were taken as the magnitude, then the magnitude of the mass of the ant would be greater than that of the elephant, which clearly shouldn't be the case. So magnitude is the number along with the units.
{ "domain": "physics.stackexchange", "id": 48201, "tags": "terminology, measurements, units" }
Computing the intersection of two big lists is not fast enough
Question: import random import string LENGTH = 5 LIST_SIZE = 1000000 def generate_word(): word = [random.choice(string.ascii_lowercase) for _ in range(LENGTH)] word = ''.join(word) return word list1 = [generate_word() for _ in range(LIST_SIZE)] list2 = [generate_word() for _ in range(LIST_SIZE)] intersection = [word for word in list1 if word in list2] print(len(intersection)) I have two big lists and I try to find out how many items they have in common. The code looks nice but is extremely slow. I computed that my PC can handle ~90M comparisons per second, so the code would run for about 3 hours. A colleague of mine told me that the code can be significantly sped up. Can you give me some tips on how to make it faster? Answer: Some issues are - Don't store things in memory unless necessary In this line: word = [random.choice(string.ascii_lowercase) for _ in range(LENGTH)] You're constructing a list in memory, which is bad for two reasons. First, a list is mutable, but you don't need the overhead of mutability, so you can use a tuple, which is slightly lighter. More importantly, this shouldn't land in memory at all. It should be stored as a generator, thus: word = (random.choice(string.ascii_lowercase) for _ in range(LENGTH)) Try cStringIO instead of a join I don't have an interpreter in front of me, so I can't guarantee that this will help, but try replacing your generator + join with a simple loop that writes to an instance of cStringIO. Use timeit to evaluate your performance. Use sets Lists are absolutely not the right thing to use here. Call set thus: list1 = set(generate_word() for _ in range(LIST_SIZE)) list2 = set(generate_word() for _ in range(LIST_SIZE)) intersection = list1 & list2
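To see the difference concretely, here is a small self-contained sketch (my own, with sizes shrunk from the question's 1,000,000 so it runs in a fraction of a second) comparing the quadratic list-membership approach with the set-based one; both count the same thing:

```python
import random
import string

LENGTH = 5

def generate_word():
    return ''.join(random.choice(string.ascii_lowercase) for _ in range(LENGTH))

random.seed(0)  # deterministic demo data
list1 = [generate_word() for _ in range(2000)]
list2 = [generate_word() for _ in range(2000)]

# O(n*m) approach from the question: each `w in list2` scans the whole list.
naive = sum(1 for w in list1 if w in list2)

# O(n + m) approach: hash-set membership tests are O(1) on average.
set2 = set(list2)
hashed = sum(1 for w in list1 if w in set2)

assert naive == hashed  # same count, drastically fewer comparisons
print(naive)
```

With the question's original sizes the set version finishes in seconds while the list version would take hours, because the total work drops from roughly n*m string comparisons to n + m hash lookups.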
{ "domain": "codereview.stackexchange", "id": 30798, "tags": "python, performance" }
Converting int values in Java to human readable strings in English
Question: (See also the next iteration.) I have rolled this short program that converts int values to the human readable strings: package com.github.coderodde.fun; import java.util.HashMap; import java.util.Map; import java.util.Scanner; public class IntToHumanReadableStringConverter { private static final Map<Integer, String> TENS_MAP = new HashMap<>(); static { TENS_MAP.put(1, "one"); TENS_MAP.put(2, "two"); TENS_MAP.put(3, "three"); TENS_MAP.put(4, "four"); TENS_MAP.put(5, "five"); TENS_MAP.put(6, "six"); TENS_MAP.put(7, "seven"); TENS_MAP.put(8, "eight"); TENS_MAP.put(9, "nine"); TENS_MAP.put(10, "ten"); TENS_MAP.put(11, "eleven"); TENS_MAP.put(12, "twelve"); TENS_MAP.put(13, "thirteen"); TENS_MAP.put(14, "fourteen"); TENS_MAP.put(15, "fifteen"); TENS_MAP.put(16, "sixteen"); TENS_MAP.put(17, "seventeen"); TENS_MAP.put(18, "eighteen"); TENS_MAP.put(19, "nineteen"); TENS_MAP.put(20, "twenty"); TENS_MAP.put(30, "thirty"); TENS_MAP.put(40, "fourty"); TENS_MAP.put(50, "fifty"); TENS_MAP.put(60, "sixty"); TENS_MAP.put(70, "seventy"); TENS_MAP.put(80, "eighty"); TENS_MAP.put(90, "ninety"); } public static String convert(int num) { StringBuilder sb = new StringBuilder(); if (num < 0) { sb.append("minus "); num = -num; } int billions = num / 1_000_000_000; num -= billions * 1_000_000_000; int millions = num / 1_000_000; num -= millions * 1_000_000; int thousands = num / 1_000; num -= thousands * 1_000; int units = num; if (billions > 0) { sb.append(convertUnitsImpl(billions)); sb.append(" billion "); } if (millions > 0) { sb.append(convertHundredsImpl(millions)); sb.append(" million "); } if (thousands > 0) { sb.append(convertHundredsImpl(thousands)); sb.append(" thousand "); } if (units > 0) { sb.append(convertHundredsImpl(units)); } return sb.toString(); } private static String convertUnitsImpl(int unit) { return TENS_MAP.get(unit); } // Converts a num in range [0, 999] to the human readable string: private static String convertHundredsImpl(int num) { StringBuilder sb 
= new StringBuilder(); int hundreds = num / 100; num -= 100 * hundreds; if (hundreds > 0) { sb.append(convertUnitsImpl(hundreds)); sb.append(" hundred "); } if (num > 0) { String tensString = TENS_MAP.get(num); if (tensString == null) { int units = num % 10; num -= units; sb.append(TENS_MAP.get(num)); sb.append("-"); sb.append(convertUnitsImpl(units)); } else { sb.append(tensString); } } return sb.toString(); } public static void main(String[] args) { Scanner scanner = new Scanner(System.in); while (true) { int num = scanner.nextInt(); String s = convert(num); System.out.println(s); } } } For example, when I input -123456, I get minus one hundred twenty-three thousand four hundred fifty-six. Critique request As always, I would like to hear whatever comes to mind. Answer: I'm not keen on the class name or method names - see my alternatives below. I find perhaps a little more logic in the code than I'd look for - normally I try to find some way of making this sort of process data-driven, using some tabular representation of ranges, limits, and so on. With reference to the string names, I don't believe you need a map at all. This is crying out for use of arrays, with some wrinkles for the peculiarities of spelling numbers between ten and twenty. I'd go for an array for small numbers (1 - 19), one for multiples of ten upto ninety. I'd have a tabular representation of the "big" numbers, and a bit of special-casing for hundreds. Then the code can be a bit more elegant and modular, at least in my opinion ;-) Note my use of "long" to handle big negatives. My example hard-codes the numbers for testing. Like "200_success" I'm not taken with the loop control in yours! I enjoyed thinking about this - here's how I might do it. (Working out how to add "and" in at points where an native speaker would put it is an interesting extension that I haven't tackled!) 
public class NumberSpeller { private static final String[] SMALL_NUMBERS = { null, // special case zero "one", "two", "three", "four", "five", "six", "seven", "eight", "nine", "ten", // "eleven", "twelve", "thirteen", "fourteen", "fifteen", "sixteen", "seventeen", "eighteen", "nineteen" // }; private static final String[] MULTIPLES_OF_TEN = { null, // special case zero "ten", "twenty", "thirty", "fourty", "fifty", "sixty", "seventy", "eighty", "ninety" }; private static final class BigNumber { // poor choice of name, perhaps private int size; private String name; private BigNumber(int size, String name) { this.size = size; this.name = name; } public int getSize() { return size; } public String getName() { return name; } } // int can't go over single-digit billions ... private static final BigNumber[] BIG_NUMBERS = { new BigNumber(1_000_000_000, "billion"), new BigNumber(1_000_000, "million"), new BigNumber(1_000, "thousand") }; // Special case, but try and be as consistent as possible private static final BigNumber HUNDRED = new BigNumber(100, "hundred"); public static String spell(int candidate) { if (candidate == 0) { return "zero"; } StringBuilder sb = new StringBuilder(); long num = candidate; // Handle full range of negative ints... 
if (num < 0) { sb.append("minus "); num = -num; } for (BigNumber bigNumber : BIG_NUMBERS) { long current = num / bigNumber.getSize(); num %= bigNumber.getSize(); if (current > 0) { spellBigNumber(sb, current); sb.append(bigNumber.getName()).append(" "); } } if (num >= HUNDRED.getSize()) { spellSmallNumber(sb, num / HUNDRED.getSize()); sb.append(HUNDRED.getName()).append(" "); num %= HUNDRED.getSize(); } spellMediumNumber(sb,num); return sb.toString(); } private static void spellBigNumber(StringBuilder sb, long current) { long hundreds = current / HUNDRED.getSize(); if (hundreds > 0) { spellSmallNumber(sb, hundreds); sb.append(HUNDRED.getName()).append(" "); current %= HUNDRED.getSize(); } spellMediumNumber(sb, current); } private static void spellMediumNumber(StringBuilder sb, long num) { if (num >= 20) { sb.append(MULTIPLES_OF_TEN[(int) (num / 10)]).append(" "); num %= 10; } if (num > 0) { spellSmallNumber(sb, num); } } private static void spellSmallNumber(StringBuilder sb, long l) { sb.append(SMALL_NUMBERS[(int) l]).append(" "); } public static void main(String[] args) { int[] candidates = { 2147483647, 147483648, 47483648, 7483648, 483648, 83648, 3648, 648, 48, 8, -2147483648, 2_000_000_000, 2_000_000, 2_000, 200, 20, 14, 1, 0 }; for (int candidate : candidates) { System.out.format("%d - %s%n", candidate, spell(candidate)); } } }
{ "domain": "codereview.stackexchange", "id": 43480, "tags": "java, integer, numbers-to-words" }
Why is this sequence of recurrence relevant?
Question: I am learning how to solve the time complexity for the recurrence relation $$ T(n) = 2T(n - 1) + n^2\text{, where }T(1) = 1 $$ The solution notes that I should begin by considering the following sequence: $$ \text{Define }T^k(n) = T^{k - 1}(n + 1) - T^{k - 1}(n) \\ \text{and let }T^{(0)}(n) = T(n) $$ How is the sequence $T^k(n)$ relevant to solving my original recurrence relation? This just seems to be complicating the problem by defining an entirely new sequence. Answer: Calculus of differences We make Kaya's nice observations slightly more formal. The sequence $T^k$ is the $k$th difference sequence of the original sequence $T$. You can think of it as a discrete $k$th derivative. For example, if $T$ is a $k$th degree polynomial then $T^k$ is constant. We will see below what happens when $T$ is exponential. For a sequence $T$ (all sequences range over the positive integers), let $\Delta(T)$ be defined by $$ \Delta(T)(n) = T(n+1)-T(n). $$ So $T^k = \Delta^{(k)}(T)$, that is $T^k$ is obtained by applying $k$ times the operator $\Delta$. The operator $\Delta$ has an inverse, which corresponds to indefinite integration: $$ T(n) = T(1) + \sum_{k=1}^{n-1} \Delta(T)(k). $$ If we are only given the sequence $\Delta(T)$ then we don't know $T(1)$, and we can think of it as a constant of integration. For concreteness, define another operator $\Sigma$ given by $$ \Sigma(T)(n) = \sum_{k=1}^{n-1} T(k). $$ The "fundamental theorem of calculus" then states that $$ \Delta(\Sigma(T)) = T $$ and $$ \Sigma(\Delta(T)) = T - T(1). $$ Another useful property of the operators $\Delta$ and $\Sigma$ is their linearity. Furthermore, we have the nice formula $$\Delta^{(k)}(n^k) = k!,$$ which can be proved by induction on $k$. This formula also implies that $$\Delta^{(k+1)}(n^k) = 0.$$ In fact, this is true for any polynomial of degree at most $k$. Conversely, $\Sigma(n^k)$ is some polynomial of degree $k$. 
We can do the same for exponentials: $$ \Delta(c^n) = (c-1)c^n, \quad \Sigma(c^n) = \frac{c^n-1}{c-1}.$$ Finally, it will be useful to use the shift operator $S$ given by $$S(T)(n) = T(n+1).$$ This operator commutes with $\Delta$: $S(\Delta(T)) = \Delta(S(T))$. Solving the recurrence Now to the problem at hand. The sequence $T$ satisfies $$S(T) = 2T + (n+1)^2. $$ Taking the third difference, we get $$S(\Delta^{(3)}(T)) = 2\Delta^{(3)}(T).$$ Here we used the linearity of $\Delta$, the commutativity of $\Delta,S$, and the fact that the third derivative of $(n+1)^2$ vanishes. We can conclude that $$ \Delta^{(3)}(T) = C 2^n, \quad C = \Delta^{(3)}(T)(1)/2. $$ We now want to repeatedly apply the operator $\Sigma$. Each time we apply $\Sigma$, we get some constant of integration. After the first time, we get $$ \Delta^{(2)}(T) = C 2^n + C_1 $$ for some $C_1$, using $\Sigma(2^n) = 2^n-1$. While we can express $C_1$ in terms of $T$, it is better not to. The next time we apply $\Sigma$, $C_1$ turns into a linear terms, and so $$ \Delta^{(1)}(T) = C 2^n + C_1 n + C_2 $$ for some $C_2$. The next time we apply $\Sigma$, $C_1 n$ turns into a quadratic term, and so $$ T = C 2^n + P_2 $$ for some quadratic polynomial $P_2$. It remains to find $C$ and $P_2$. Recall that $C = \Delta^{(3)}(T)(1)/2$. Kaya calculated $\Delta^{(3)}(T)(1) = 12$ and so $C = 6$. Therefore $$ T = 6 \cdot 2^n + P_2. $$ The first few values in $T$ are $1,6,21$, and so $P_2(1) = -11$, $P_2(2) = -18$, $P_2(3) = -27$. Write $P_2 = B_2 n^2 + B_1 n + B_0$, and so $\Delta^{(2)}(P_2) = 2B_2$. On the other hand, $\Delta(-11,-18,-27,\ldots) = (-7,-9,\ldots)$ and so $\Delta^{(2)}(-11,-18,-27,\ldots) = (-2,\ldots)$, and we conclude that $B_2 = -1$. We can now calculate $B_1 n + B_0 = (-10,-14,\ldots)$, and so $B_1 = -4$ and $B_0 = -6$. We can conclude that $P_2 = -n^2 - 4n - 6$, and we get Kaya's formula. 
Generalities More generally, if $T$ is a sequence satisfying $T(n+1) = c T(n) + P_k(n)$ for some polynomial $P_k$ of degree $k$, then the very same method shows that $$ T(n) = C c^n + P_{k+1}(n) $$ for some constant $C$ and polynomial $P_{k+1}$ of degree $k+1$. In order to find $C$ and $P_{k+1}$, we can either follow the route used above - express $C$ in terms of $c$, $k$ and $\Delta^{(k+1)}(T)(1)$ - or calculate $T(1),\ldots,T(k+2)$ and solve linear equations in the unknowns $C$ and the coefficients of $P_{k+1}$. Even more generally, we can handle recurrences of the form $T(n) = c_1 T(n-1) + \cdots + c_l T(n-l) + P_k(n)$. As before, after applying $\Delta^{(k+1)}$, the inhomogeneous term $P_k$ disappears, and we can solve the resulting recurrence relation. Applying $\Sigma^{(k+1)}$, we will get an "error term" of the form $P_{k+1}$ as before. General recurrence relations have solutions which are sums of expressions of the form $n^r c^n$ for integer $r \geq 0$, so we also need to calculate the (iterated) discrete integrals of these functions; the result of applying $\Sigma$ once is $P_{r+1}(n) c^n$ for some polynomial $P_{r+1}$ of degree $r+1$, and so applying it $k+1$ times we obtain $P_{r+k+1}(n) c^n$ for some polynomial $P_{r+k+1}$ of degree $r+k+1$. As before, we can find the coefficients of all polynomials by solving linear equations, given enough values of the sequence $T$.
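As a sanity check on the derivation, a few lines of Python (my addition, not part of the original answer) confirm that the closed form $T(n) = 6 \cdot 2^n - n^2 - 4n - 6$ matches the recurrence for many values of $n$:

```python
# Closed form derived above vs. the recurrence T(n) = 2*T(n-1) + n^2, T(1) = 1.
def T_rec(n, memo={1: 1}):
    if n not in memo:
        memo[n] = 2 * T_rec(n - 1) + n * n
    return memo[n]

def T_closed(n):
    return 6 * 2**n - n**2 - 4 * n - 6

assert all(T_rec(n) == T_closed(n) for n in range(1, 40))
print([T_closed(n) for n in range(1, 5)])  # [1, 6, 21, 58]
```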
{ "domain": "cs.stackexchange", "id": 1576, "tags": "proof-techniques, recurrence-relation" }
Instantaneous axis of rotation of a rigid body
Question: For the description of rigid body motion, any point $O$ of the rigid body could be taken as reference, since the velocity of a generic point $P$ can be written as a function of the angular velocity $\Omega$ and of the velocity of $O$, independently of the choice of $O$. $$\dot{P} = \dot{O} + \Omega \wedge (P - O)\tag{1}$$ That means that the motion of $P$ is seen as the composition of the translation of $O$, plus a rotational motion about an axis passing through $O$: let's call this axis $\gamma$. In which cases is it correct to say that $\gamma$ is an instantaneous axis of rotation of the rigid body? In order for that to hold, must the point $O$ (on the axis $\gamma$) have zero velocity (i.e. $\dot{O}=0$)? Or can I define $\gamma$ as an instantaneous axis of rotation in any case when I write down $(1)$? Answer: A body can have parallel velocity on the instantaneous axis of rotation. This parallel velocity is sometimes designated with a scalar pitch value $h$, such that $\vec{v}_O = h \vec{\omega}$. Consider a point O on the IAR and a point P outside of it. You have $$ \vec{v}_P = \vec{v}_O + (\vec{r}_P-\vec{r}_O) \times \vec{\omega} $$ Now I can prove that, given the motion $\vec{v}_P$ of an arbitrary point P and the rotational velocity $\vec{\omega}$, the motion can always be decomposed into a point $\vec{r}_O$ on the axis of rotation and a parallel velocity $\vec{v}_O$. Lemma Given $\vec{v}_P$ and $\vec{\omega}$ there exists a unique (relative) location $\vec{r}$ such that $$ \vec{v}_O = \vec{v}_P + \vec{\omega}\times \vec{r} = h \vec{\omega} $$ so the velocity $\vec{v}_O = h \vec{\omega} $ is parallel to $\vec{\omega}$ only. This location is on the instantaneous axis of rotation closest to P, such that $\vec{r}_O = \vec{r}_P - \vec{r}$.
Proof Take $\vec{r}= \frac{\vec{\omega} \times \vec{v}_P}{\| \vec{\omega}\|^2}$ in the transformation and expand out the terms $$ \vec{v}_O = \vec{v}_P + \vec{\omega}\times \left( \frac{\vec{\omega} \times \vec{v}_P}{\| \vec{\omega}\|^2} \right)$$ Now use the vector triple product identity $a \times (b \times c) = b (a\cdot c) - c (a \cdot b)$ $$\require{cancel} \vec{v}_O =\vec{v}_P +\frac{\vec{\omega}(\vec{\omega}\cdot \vec{v}_P) - \vec{v}_P (\vec{\omega}\cdot \vec{\omega})}{\| \vec{\omega}\|^2} = \cancel{\vec{v}_P} + \frac{\vec{\omega}(\vec{\omega}\cdot \vec{v}_P)}{\| \vec{\omega}\|^2}-\cancel{\vec{v}_P} $$ Now define the scalar pitch as $$h = \frac{\vec{\omega} \cdot \vec{v}_P}{\| \vec{\omega} \|^2}$$ and the above becomes $$\vec{v}_O = h \vec{\omega}$$ So the velocity on O is parallel to the rotation only. Example A body rotates by $\vec{\omega} = (0,0,10)$ and a point P located at $\vec{r}_P=(0.8,0.2,0)$ has velocity $\vec{v}_P = (2,-3,1)$. Find the IAR point O and the pitch value $h$. $$ \vec{r}_O = \vec{r}_P + \frac{\vec{v}_P \times \vec{\omega}}{\| \vec{\omega} \|^2} = (0.8,0.2,0) + \frac{(-30,-20,0)}{10^2} = (0.5,0,0)$$ $$h = \frac{\vec{\omega} \cdot \vec{v}_P}{\|\vec{\omega}\|^2} = \frac{10}{10^2} = 0.1$$ Confirmation $$\vec{v}_P = h \vec{\omega} + (\vec{r}_P - \vec{r}_O) \times \vec{\omega} = (0,0,1) + (0.3,0.2,0) \times (0,0,10) = (2,-3,1) \; \checkmark $$ NOTES: See also related answer to question Why is angular velocity of any point about any other point of a rigid body always the same?
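The worked example is easy to recompute with NumPy. The snippet below (my own verification, not from the original answer) plugs in the example's inputs and checks both the axis point, the pitch, and the final confirmation:

```python
import numpy as np

# Inputs of the worked example: omega = (0,0,10), r_P = (0.8,0.2,0), v_P = (2,-3,1).
omega = np.array([0.0, 0.0, 10.0])
r_P = np.array([0.8, 0.2, 0.0])
v_P = np.array([2.0, -3.0, 1.0])

w2 = omega @ omega                        # |omega|^2 = 100
r_O = r_P + np.cross(v_P, omega) / w2     # point on the instantaneous axis
h = (omega @ v_P) / w2                    # scalar pitch

assert np.allclose(r_O, [0.5, 0.0, 0.0])
assert np.isclose(h, 0.1)

# Confirmation: v_P = h*omega + (r_P - r_O) x omega
v_check = h * omega + np.cross(r_P - r_O, omega)
assert np.allclose(v_check, v_P)
print(r_O, h)
```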
{ "domain": "physics.stackexchange", "id": 29921, "tags": "rotational-dynamics, rotation, rigid-body-dynamics" }
Codeword length of a character with frequency more than $\frac{2}{5}$ in Huffman coding
Question: I read the following post. Now I encountered this question: what is the codeword length of a character with frequency more than $\frac{2}{5}$ in Huffman coding? I think it can be 2 bits or 1 bit, but I can't prove it. Any help with proving that is appreciated. Answer: Consider the classical algorithm that constructs the Huffman code and focus on the moment in which the singleton vertex $v$ corresponding to the character with frequency larger than $\frac{2}{5}$ was first merged with another tree $T$. At this point in time, either the frequency of $T$ was at least $\frac{2}{5}$ (which means that all trees had frequency at least $\frac{2}{5}$), or $T$ was the only tree with frequency smaller than $\frac{2}{5}$ (otherwise $T$ would have been merged with some other tree). This means that there can be at most one additional tree in the forest other than $v$ and $T$. The maximum depth of $v$ that can be obtained by combining (up to) $3$ trees is $2$. Clearly both $1$ and $2$ are possible lengths of a codeword, as shown by symbol $a$ in the following two examples (frequencies are in blue).
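The claim can be illustrated by running Huffman's algorithm on concrete frequency vectors. The sketch below is my own (the frequencies are made up for the demo): it computes each symbol's codeword length and shows a symbol with frequency $> \frac{2}{5}$ landing at depth 1 in one case and depth 2 in another, never deeper:

```python
import heapq
import itertools

def huffman_depths(freqs):
    """Return {symbol_index: codeword_length} for Huffman coding of freqs."""
    counter = itertools.count()  # tie-breaker so the heap never compares lists
    # Heap items: (frequency, tiebreak, [(symbol, depth_so_far), ...])
    heap = [(f, next(counter), [(i, 0)]) for i, f in enumerate(freqs)]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, s1 = heapq.heappop(heap)
        f2, _, s2 = heapq.heappop(heap)
        merged = [(sym, d + 1) for sym, d in s1 + s2]  # merging deepens every leaf
        heapq.heappush(heap, (f1 + f2, next(counter), merged))
    return dict(heap[0][2])

# Symbol 0 has frequency > 2/5 in both cases.
print(huffman_depths([0.45, 0.20, 0.20, 0.10, 0.05])[0])  # 1
print(huffman_depths([0.42, 0.43, 0.08, 0.07])[0])        # 2
```

In the second case the heavy symbol (0.42) is merged early with a light tree, exactly the scenario analysed in the answer, yet its depth still cannot exceed 2.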
{ "domain": "cs.stackexchange", "id": 18686, "tags": "algorithms" }
Why is infrared absorption a nonlinear technique?
Question: I am looking for a good explanation of why the infrared absorption technique is essentially nonlinear (e.g. for carbon monoxide quantification). When using the UV/visible/near-IR absorption technique, the Beer-Lambert law is often valid, and then you have a quite straightforward way to convert transmittance into absorbance, which responds linearly to concentration (provided the BLL's limits are not exceeded). The above technique relies on electronic transitions, while the IR absorption technique relies on vibrational and rotational transitions of molecules which have permanent or transient dipole moments. I would like to figure out why such a technique is essentially nonlinear. Answer: Edit: putting the summary of the discussion in comments into my answer. The Beer-Lambert law assumes that every photon has equal probability to be absorbed by every molecule. It is only valid for sufficiently monochromatic light – that is, the bandwidth of the light source should be smaller than the width of the absorption line. When absorption is measured with a laser tuned to a particular rovibrational line of a molecule, the Beer-Lambert law works well. The method discussed here uses a broadband light source that covers a spectral range containing many rovibrational lines of $CO$, but also regions between those lines, where light is not absorbed at all. There is no simple model that could describe such behavior. In theory, one can calculate the absorption spectrum of a molecule, including temperature- and pressure-dependent line broadening, and then calculate its convolution with the spectrum of the light source. In practice, using gas samples of known composition one can build a calibration curve of absorption vs. gas concentration and use it to convert absorption to gas concentration. My old answer described another case where the Beer-Lambert law may not be valid: The only nonlinear effect that comes to my mind is saturation. The Beer-Lambert law is valid if the population of the ground state is not depleted.
However, vibrational bands in the mid-IR have very strong absorption, and with an intense light source (like a laser) one could deplete the ground state - see the answer to this question.
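A toy calculation illustrates the broadband effect described in the answer. The two-component "spectrum" below is entirely made up (half the source power on an absorption line, half in the transparent region between lines), but it shows why the measured absorbance of a broadband source stops growing linearly with concentration:

```python
import numpy as np

c = np.linspace(0, 5, 6)   # concentration (arbitrary units)
eps_l = 1.0                # absorptivity times path length (assumed)

# Monochromatic light: Beer-Lambert gives absorbance exactly linear in c.
T_mono = np.exp(-eps_l * c)
A_mono = -np.log(T_mono)          # equals eps_l * c

# Broadband toy model: half the power on an absorption line, half between
# lines where nothing is absorbed. The averaged transmittance is no longer
# a single exponential, so -log(T) is no longer linear in c.
T_broad = 0.5 * np.exp(-eps_l * c) + 0.5
A_broad = -np.log(T_broad)

print(np.allclose(A_mono, eps_l * c))  # True: linear
print(A_broad.round(3))                # saturates toward log(2) instead of growing
```

This is why, in practice, a calibration curve (or a line-by-line spectral model convolved with the source spectrum) is used instead of a single Beer-Lambert fit.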
{ "domain": "physics.stackexchange", "id": 15228, "tags": "spectroscopy, absorption, infrared-radiation, non-linear-optics" }
Find all pairs of numbers such that the number of binary 1's in their OR is maximum
Question: So, I tried to solve this question and my code did not execute within time-limits. https://www.hackerrank.com/challenges/acm-icpc-team Here's my code, from itertools import combinations nm =input().split() n = int(nm[0]) m = int(nm[1]) mat=[] z=[] for i in range(n): mat.append(input()) a=combinations(range(1,n+1),2) for i in a: count=0 for j in range(m): if mat[i[0]-1][j]=="0" and mat[i[1]-1][j]=="0": count=count+1 z.append(m-count) print(max(z)) print(z.count(max(z))) How can I improve it? Answer: Needless for loop usage for loops in Python are heavy. Using for loops needlessly leads to performance loss. For instance, rather than doing this: for j in range(m): if mat[i[0]-1][j]=="0" and mat[i[1]-1][j]=="0": You could just count the number of ones using int(), bin() and | as below: orResult = int(mat[i[0] - 1], 2) | int(mat[i[1] - 1], 2) numberOfSubjects = bin(orResult).count("1") Confusing usage of count Rather than counting the number of 0's and then subtracting it from m, you could initialize count to m and then decrement it each time 0's are encountered. Then, in the end, count would denote the number of 1's. Variable naming Of course, I had to mention this. Variable names are not self-explanatory and it's difficult to understand what they represent exactly. For instance, z is meaningless to me. Please come up with better names.
For instance, change a = combinations(range(1,n+1),2) To indexPairs = combinations(range(1,n+1),2) Optimised Code from itertools import combinations nm = input().split() n = int(nm[0]) m = int(nm[1]) attendees=[] for i in range(n): attendees.append(input()) indexPairs = combinations(range(1,n+1),2) maxNumberOfSubjects = 0 numberOfBestTeams = 0 for pair in indexPairs: orResult = int(attendees[pair[0] - 1], 2) | int(attendees[pair[1] - 1], 2) numberOfSubjects = bin(orResult).count("1") if maxNumberOfSubjects == numberOfSubjects: numberOfBestTeams += 1 elif maxNumberOfSubjects < numberOfSubjects: maxNumberOfSubjects = numberOfSubjects numberOfBestTeams = 1 print (maxNumberOfSubjects) print (numberOfBestTeams) I tested the code and it passed all the test cases.
{ "domain": "codereview.stackexchange", "id": 37023, "tags": "python, algorithm, python-3.x, programming-challenge, time-limit-exceeded" }
References on second-order quantifier elimination and related topics
Question: I was wondering whether something like elimination of second-order quantifiers exists, and indeed it seems it does. I've found there's a workshop on this topic, and the webpage describes exactly what I need: Second-order quantifier elimination (SOQE) is the problem of equivalently reducing a formula with quantifiers upon second-order objects such as predicates to a formula in which these quantified second-order objects no longer occur. Unexpectedly (at least for me) the topic seems to be related to a lot of other things that I need for what I'm doing: Craig interpolants, uniform interpolants, Beth definability. So is there a reference (maybe a survey?) about second-order quantifier elimination and its connections with the topics above? Answer: Since there are no answers, I think it may be useful to post what I've found: Dov M. Gabbay, Renate A. Schmidt, Andrzej Szalas Second-Order Quantifier Elimination - Foundations, Computational Aspects and Applications. Studies in logic : Mathematical logic and foundations 12, College Publications 2008, ISBN 978-1-904987-56-7, pp. I-VIII, 1-308 It seems quite complete and well-written.
{ "domain": "cstheory.stackexchange", "id": 5750, "tags": "reference-request, first-order-logic" }
C# display a point cloud
Question: Hello! I got the point cloud data (a 9830400-byte array) on Windows, which came from Ubuntu ROS Indigo. As far as I know, the 9830400 bytes = 640 * 480 * 4 channels (rgba) * 8 (different z coordinates). And I wanted to display it in a Windows Form. Does anyone know a method for doing this? Thanks for your help! Beni Originally posted by benyke on ROS Answers with karma: 1 on 2014-11-02 Post score: 0 Original comments Comment by ahendrix on 2014-11-02: How are you getting your point cloud data? Do you have a ROS subscriber on Windows, or some other way of transferring the point cloud data? Comment by benyke on 2014-11-03: I have a ROS.NET client on Windows, and I subscribed to /camera/depth/points with a sensor_msgs/PointCloud2 message type. And I meant that the uint8[] data size is 9830400 bytes = 640 * 480 * 4 channels (rgba) * 8 (different z coordinates). And I wanted to display it, somehow, in a Windows Forms app. Answer: Point clouds come in many forms, with different sets of fields and different data layouts. The additional metadata in the sensor_msgs/PointCloud2 message describes how many rows and columns there are, which fields are present in each pixel, and what order they're in. EDIT PCL has a visualization library. It should be possible to use it from .NET: http://stackoverflow.com/questions/18203173/net-and-pcl-point-cloud-library . It will take a little massaging to convert a ROS message into a PCL message, but the formats are basically the same. Originally posted by ahendrix with karma: 47576 on 2014-11-02 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 19926, "tags": "pointcloud" }
Is it practical to build a giant telescope on the Moon? (Considering the fact that the Moon's atmosphere is very thin)
Question: China's FAST (Five-hundred-meter Aperture Spherical radio Telescope) is the largest radio telescope in the world. (This is huge!!!) As we all know, space telescopes generally outperform Earth-based telescopes (the Hubble Space Telescope was such a tremendous success). Even though ground-based observatories are usually located in highly elevated areas with minimal light pollution, they still contend with atmospheric turbulence, which limits the sharpness of images taken from this vantage point. What if all the space agencies collaborated to build a telescope on the Moon half the size of China's FAST (or even smaller)? Even then it would be more powerful than FAST, right (because the Moon's atmosphere is far less dense)? Would it be impractical? It may take decades to build one, but still the outcome would be huge (we might even find aliens!!). (Taking into account that Elon Musk is already planning the colonization of Mars, this project looks somewhat more tamable than that, right?) Is there a possibility of this happening? Is it feasible enough? Answer: It's very unlikely that large optical telescopes will ever be built on the Moon, because the Moon is almost the worst possible place to build them. (The surfaces of any of the planets other than Earth are worse.) It has no particular advantages over orbit and costs a lot more to build there. The Moon looked like a good location when observatory technology was film-based, because observatories required people, and people needed a base to live in and work best under gravity. But we now have the technology to shade an orbiting telescope from the Sun, point it exquisitely accurately, and take endless photos without changing the film -- all in orbit. Orbit is easier to get to than the lunar surface -- even the Webb (should it ever launch), which will be parked a million miles from Earth, is easier to get to than the surface of the Moon.
If we had the space capabilities to build a giant reflector on the Moon, we could build an even bigger and better one in orbit for the same cost. You spoke of FAST. FAST is a radio telescope. It's likewise much cheaper to build radio telescopes in orbit than on the lunar surface. The one possible advantage of the Moon is that its far side might be especially radio quiet. But even there, it seems likely that any space radio telescope would be a large array of radio telescopes joined to form a giant synthetic aperture, and that would almost certainly want to be much larger than Earth-based instruments -- for which the Moon is already too small.
{ "domain": "astronomy.stackexchange", "id": 2963, "tags": "the-moon, telescope, observational-astronomy, space-telescope, deep-sky-observing" }
Can lasers be combined to achieve higher power?
Question: For example, if an object's atoms require light with 100 nm to become ionized, can four 400 nm lasers concentrated at one point on the object achieve ionization? Or will the combined power vary based on different factors? Answer: ..., if an object's atoms require light with 100 nm to become ionized, can four 400 nm lasers concentrated at one point on the object achieve ionization? No, they cannot, unless we're talking extremely high powers (see Akhmeteli's Answer). This is the essence of the Photoelectric effect: that ionization (or any electronic transition between bound states) requires a threshold shortness of wavelength to make it happen. The intensity of the radiation concerned only affects the rate at which transition events happen. These basic experimental observations can be explained by assuming that the EM field can only transfer energy to electrons in discrete bundles of amount $h\,\nu$, where $h$ is Planck's Quantum of Action (aka Planck's Constant) and $\nu$ the light frequency. When the transition event in question is ionization, the threshold energy needed for a single event, i.e. the binding energy of the electron to the atom concerned, is called the Work Function. Further Question from OP Since you stated that only high powered lasers can ionize with longer wavelengths than would be required, is there a way of calculating how powerful the laser would need to be? If we still use the 400 nm laser as an example, how powerful would that laser need to be to ionize the atom that requires 100 nm light? It's the intensity rather than power that does it: generally multiphoton processes are highly concentrated around a focus of a system. You would need to look up the five or six photon cross section of the ion in question at the wavelength in question, and work out the six-photon event probability and rate from the intensity at the focus.
Note that I say five or six photon process: the photon energies don't add linearly for multiphoton processes owing to the weird mechanics of virtual states that mediate these processes. Pulsed lasers with very low duty cycle fraction are used to excite multiphoton processes. For two photon processes, typically hundreds of milliwatts focussed through a 0.3NA objective or higher give significant two photon processes in a micron-sized region at the focus when 5 femtosecond pulses are used at 10MHz pulse repetition rate (i.e. duty cycle of the order of $10^{-7}$). For each pulse, this is equivalent to about $10{\rm MW}$ through the focus, corresponding to an intensity of about $10^{19}{\rm W\,m^{-2}}$ at the focus. Four photon processes are orders of magnitude less probable, so we're probably talking about kilowatt / megawatt femtosecond lasers pulsed with a $10^{-7}$ duty cycle.
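The orders of magnitude quoted above can be sanity-checked with a few lines of arithmetic. The 0.5 W average power and 1 µm² focal spot below are assumed round numbers (the answer says "hundreds of milliwatts" and "a micron-sized region"), not exact figures:

```python
# Rough order-of-magnitude check of the pulsed-laser figures in the answer.
avg_power = 0.5          # W, "hundreds of milliwatts" (assumed round number)
pulse_length = 5e-15     # s, 5 fs pulses
rep_rate = 10e6          # Hz, 10 MHz repetition rate

duty_cycle = pulse_length * rep_rate   # fraction of the time the laser is "on"
peak_power = avg_power / duty_cycle    # power during each pulse
focal_area = 1e-6 ** 2                 # m^2, assumed micron-sized focal spot
intensity = peak_power / focal_area    # W/m^2 at the focus

print(f"duty cycle ~ {duty_cycle:.0e}")       # 5e-08, i.e. order 1e-7
print(f"peak power ~ {peak_power:.0e} W")     # 1e+07 W, order 10 MW
print(f"intensity  ~ {intensity:.0e} W/m^2")  # 1e+19 W/m^2
```

All three numbers land on the answer's quoted orders of magnitude.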
{ "domain": "physics.stackexchange", "id": 36479, "tags": "photons, laser, ionization-energy" }
Optics and Refraction
Question: A monochromatic red ray of light enters a beaker of water, $\gamma = \gamma' / \mu$, where $\mu$ is the refractive index. The wavelength should reduce (taking $\mu$ as $1.33$), so the light should become green, but we see it as red light. Why? Answer: This is an important question because it gets at the concept of color, and what that really means. Mitchell is correct that when you observe scattering from a laser beam sent through a beaker of water, you are really only seeing the photons which have left the water ($\mu\approx1.33$) and the beaker (glass $\mu\approx1.5$), traveled through the air ($\mu\approx1.00029$), entered your eye ($\mu\approx1.33$), and were absorbed by your retina. So which color do you see? And does it change if you repeat the experiment underwater or use oil ($\mu\approx1.5$) in the beaker instead? The key to understanding why we always see red light in this experiment rather than another color is the nature of your photoreceptors, which (like most light absorbers) are not concerned with the wavelength of the light. They instead respond to the frequency of the light, $\nu$ (or photon energy $E=h\nu$), which remains unchanged as light propagates from one (linear) medium to the next. The reason for this is that there are certain energies, determined by quantum mechanics, which are assigned to the electronic states in the absorbing material (your retina, in this case). In order to conserve energy, the photon needs to have the right energy to be absorbed, promoting the material from a low energy state to a high energy state. Wavelength has nothing to do with it. Indeed, when we say "color" in reference to light, we are often actually speaking of the frequency rather than wavelength. When we do refer to a wavelength as a color, it is always with an implicit assumed refractive index, usually $\mu=1$.
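The answer's point can be illustrated numerically: entering water shortens the wavelength by $\mu$ but slows the light by the same factor, so the frequency is unchanged. The 650 nm vacuum wavelength below is an assumed typical value for red light, not a number from the question:

```python
c = 3.0e8            # m/s, speed of light in vacuum (rounded)
mu = 1.33            # refractive index of water
lam_vacuum = 650e-9  # m, assumed typical red wavelength

freq_vacuum = c / lam_vacuum
lam_water = lam_vacuum / mu        # shortens to ~489 nm ("green" on a vacuum scale)
v_water = c / mu                   # the light slows down by the same factor...
freq_water = v_water / lam_water   # ...so the frequency is identical

print(f"{lam_water*1e9:.0f} nm")   # 489 nm
print(f"{freq_vacuum:.3e} Hz == {freq_water:.3e} Hz")
```

The retina responds to that unchanged frequency, which is why the ray still looks red inside the water.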
{ "domain": "physics.stackexchange", "id": 42607, "tags": "optics, refraction" }
Return row or column from a 2D array
Question: I have a 2D array class that is actually mapped to a 1D vector. For example, to obtain an element at a particular position, the code is as follows: template <typename T> T mat <T>::at(const size_t &row, const size_t &col) const { return m_values[row*ncol() + col]; } Here's the relevant section of the class in-context: template <typename T> class mat { private: std::vector <T> m_values; size_t m_nrow, m_ncol; public: mat(); size_t ncol() const{ return m_ncol; }; size_t nrow() const{ return m_nrow; }; T at(const size_t &row, const size_t &col) const; std::vector <T> row(const size_t &row) const; std::vector <T> col(const size_t &col) const; }; m_nrow and m_ncol contain the number of rows and columns of the 2D array, respectively, while m_values contains the values of the array. They're assigned in another (long) constructor. I now need to be able to return an entire row or column. Here's how that's currently implemented: template <typename T> std::vector <T> mat <T>::row(const size_t &row) const { std::vector <T> tmp; for(size_t j=0; j < ncol(); j++) { tmp.push_back(at(row, j)); } return tmp; } template <typename T> std::vector <T> mat <T>::col(const size_t &col) const { std::vector <T> tmp; for(size_t i=0; i < nrow(); i++) { tmp.push_back(at(i, col)); } return tmp; } These implementations are easy to reason about but not particularly efficient: there's the pushing back to a temporary vector and the repeated calculation of row*ncol() in at(). Is there a more idiomatic way of doing this in C++11? Answer: Lack of modifiability Your at(), row() and col() return copies. This means that once you construct your mat, it's permanently const. Also, for some Ts, this makes these functions unnecessarily expensive. Prefer instead for at() to return a reference: T& at(size_t row, size_t col); T const& at(size_t row, size_t col) const; Although, typically with the standard library, the at() member functions also do range checking. 
So consider some other non-throwing indexing mechanism (maybe operator()) and have at() defer to that one while doing bounds checking. Note that you don't need to pass the size_ts by reference-to-const, value is fine. Now, for the other two, prefer instead: std::vector<std::reference_wrapper<T>> row(size_t ); std::vector<std::reference_wrapper<const T>> row(size_t ) const; std::vector<std::reference_wrapper<T>> col(size_t ); std::vector<std::reference_wrapper<const T>> col(size_t ) const; reference_wrapper<T> since you can't have vector<T&>. Constructing those rows/cols Since you're storing everything in row order, returning a row can just use the iterator-pair constructor of std::vector: std::vector<std::reference_wrapper<T>> row(size_t r) { auto start = m_values.begin() + r * ncol(); return {start, start + ncol()}; } Returning a column is much more annoying, but you definitely want to use reserve() to avoid any extra allocations: std::vector<std::reference_wrapper<T>> col(size_t c) { std::vector<std::reference_wrapper<T>> result; result.reserve(nrow()); for (size_t r = 0; r < nrow(); ++r) { result.push_back((*this)(r, c)); } return result; } Where I'm assuming operator() is the non-throwing version of at.
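For readers less at home in C++, the row-major bookkeeping the answer relies on can be sketched in a few lines of Python (purely illustrative, not a translation of the class above): a row is a contiguous slice of the flat buffer, while a column is a strided slice with step equal to the number of columns.

```python
nrow, ncol = 3, 4
# Flat row-major storage of a 3x4 matrix; each element is 10*r + c.
flat = [10 * r + c for r in range(nrow) for c in range(ncol)]

def at(r, c):
    return flat[r * ncol + c]      # same index formula as the C++ at()

def row(r):
    return flat[r * ncol:(r + 1) * ncol]   # contiguous slice

def col(c):
    return flat[c::ncol]                   # strided slice, step = ncol

print(at(2, 3))  # 23
print(row(1))    # [10, 11, 12, 13]
print(col(2))    # [2, 12, 22]
```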
{ "domain": "codereview.stackexchange", "id": 17523, "tags": "c++, c++11, matrix" }
Why would lactate be high in diabetics?
Question: Why are lactate levels high in diabetes? For example, type II diabetics are resistant to insulin. If those patients are insulin resistant their gluconeogenesis should be working at a high rate and, because of that, lactate uptake by the liver should be removing lactate from the blood. Alternatively, type I diabetics don't produce insulin, so the ratio insulin/glucagon would always be very low and gluconeogenesis should be stimulated... So I don't understand why lactate levels are high in diabetes... Can someone help me? PS. This question came up after doing an experiment in school, with diabetic rats and normal rats. Diabetic rats had higher levels of lactate and my professor said that it was because the diabetic rats don't perform gluconeogenesis and so, lactate accumulates in the plasma. But it doesn't make sense to me. Answer: This condition is also known as "lactic acidosis" and can be pretty dangerous, since it influences the pH of the blood. When we metabolize glucose to produce ATP and NADH it is metabolized finally to pyruvate in a process called glycolysis (I am not going into detail here since this is nicely explained in the Wikipedia). Pyruvate can then be used further in the body in the Gluconeogenesis, the Citric acid cycle and other pathways. If a lot of energy is needed Pyruvate is converted by the pyruvate dehydrogenase into acetyl-CoA. The problem is that the pyruvate dehydrogenase can be inhibited in diabetes. If the body then needs a lot of energy pyruvate will be converted into lactate which is released by the cells into the bloodstream. Gluconeogenesis cannot be activated since this needs either Pyruvate, Acetyl-CoA or Oxaloacetate as starting material. The other problem with this process is that this happens in the liver and when the glucose is released into the bloodstream, it cannot be taken up by the cells due to the lack of insulin.
The conversion from Pyruvate to Lactate is catalysed by the Lactate dehydrogenase and needs NADH + H+ as a co-factor. The image below is from the Wikipedia article on Lactate dehydrogenase. This reaction is highly exergonic which means its preferred direction is to the right side. To change this and make Pyruvate from Lactate you need an excess of NAD+ which usually only happens in the liver. There is also an epidemiologic study available showing a strong correlation between elevated blood lactate levels and diabetes type II: "Association of blood lactate with type 2 diabetes: the Atherosclerosis Risk in Communities Carotid MRI Study."
{ "domain": "biology.stackexchange", "id": 1678, "tags": "biochemistry, metabolism, endocrinology, glucose" }
Linear molecule that is liquid at room temp. and pressure?
Question: I am trying to find a linear molecule that is a liquid at room temperature and pressure. Do you know of any? By linear I mean that all the bonds are co-linear (ie. simple alkane carbon chains don't count). Non-hazardous materials are preferable. Answer: Technically liquid, but close to the boiling point and a bit nasty to work with, is carbon disulfide. It melts at about -112°C and boils at +46°C.
{ "domain": "chemistry.stackexchange", "id": 13801, "tags": "molecules" }
How long would it take for a change to the DNA to take effect?
Question: I am not a Biology Student (sorry guys!) but I have thought about this question for a while and hope that maybe one of you can answer it. Let's say that I had my DNA changed so that I wouldn't be vulnerable to certain diseases or perhaps to make my body change so that I'd lose weight easier (just a hypothetical here; not sure if sci-fi has skewed my view on DNA). From when the change occurs to the DNA until the changes take effect, how long would that hypothetically be? Answer: OK, let's set up a specific hypothetical situation so we have some details: A virus (we'll call it Human Nasty Virus 1 (HNV-1)) can infect T cells (a type of white blood cell in the immune system) by binding to a certain receptor on its surface, which we'll call the Nasty Virus Receptor (NVR). Scientists have found that HNV-1 absolutely needs a certain protein sequence in NVR to bind, as people with a certain mutation lacking that sequence are completely immune to HNV-1 infection. You sign up for a clinical trial using gene editing to specifically remove that DNA sequence from the NVR gene. Once that changed gene is transcribed and translated into protein, it is expressed on the surface of the cell, and HNV-1 can no longer bind. T cells are derived from stem cells that live in the bone marrow. In order for the gene therapy to work, doctors will need to perform a bone marrow transplant - they will remove a sample of bone marrow, give it to the scientists for the gene editing process, then reinject the edited cells back into you. The editing process involves subjecting purified stem cells to a process called CRISPR, allowing them to grow in culture for a while, then screening the cells to select only those where the editing was successful. The successfully-edited cells are separated, cultured some more to expand their numbers, then given back to the doctors, who put them into your body, either into the bloodstream or directly back into the marrow.
So now you have your edited cells back inside you - how long will it take to become immune to HNV-1? Well, the honest answer is that, in the absence of ablative chemotherapy (see the bone marrow transplant link above for more info) prior to reinjection of the edited stem cells, you may never become completely immune. Here's why: Unless all of the bone marrow stem cells in your body are destroyed before reinjecting the edited ones, you will still have fairly high numbers of what are called "wild-type" or un-edited stem cells and mature T cells in your body, and those stem cells will continue making new wild-type T cells. Additionally, there are already wild-type T cells in circulation and in secondary lymphoid organs such as lymph nodes and the spleen, and some of these cells can be very long-lived (months to years). If for some reason you decide to undergo complete ablative chemotherapy to essentially destroy your immune system, the reinjected stem cells will eventually repopulate everything and you'll be immune to HNV-1, although you will have potentially lost your immunity to everything else you've ever been exposed to, unless the chemo is done in such a way as to preserve your immune memory cells. This is actually a best-case scenario, as we were working with cells that can (relatively) easily be removed from the body, edited, screened, and reinjected. If you were trying to cure a genetic disease that affected solid tissue like the muscles (see Duchenne muscular dystrophy for an example), you obviously wouldn't be able to remove all the muscles and edit them ex vivo, you'd have to do the editing in the body, most likely using a viral delivery system for the editing machinery. The viruses wouldn't infect every single cell, and the editing wouldn't work in every cell that got infected, so the best you could hope for would be a partial cure. For some diseases, this may be enough, and many companies are working on this from various angles. 
There are also other options besides CRISPR-mediated gene editing that have higher success rates in vivo. Finally, getting back to your original question of how long it takes from DNA alteration to expression of the gene product, the answer is that it depends on the gene and the context. In the case of our hypothetical situation earlier, as soon as the NVR gene's DNA is edited, the new sequence begins to be transcribed and translated into protein. There is still some wild-type protein left over initially, so it depends on the transcription rate of the gene and the turnover of the existing protein. Some proteins have an extremely long half-life, on the order of years or decades (or more!), while some degrade within minutes to hours of being synthesized.
{ "domain": "biology.stackexchange", "id": 5746, "tags": "dna" }
Extract data from Image.msg
Question: I need to process the data from a webcam (I used the usb_cam package). Initially I created a listener that extracts the uint8[] data array from the topic "/usb_cam/image_raw" ... but ... Here's the python code: #!/usr/bin/env python import roslib; roslib.load_manifest('webcam_image') import rospy from sensor_msgs.msg import Image def callback(data): print "I heard %s" % str(Image.data) def listener(): rospy.init_node('read_data', anonymous=True) rospy.Subscriber("usb_cam/image_raw", Image, callback) rospy.spin() if __name__ == '__main__': listener() And here is the output on the shell when I start it: I heard <member 'data' of 'Image' objects> If I write rostopic echo /usb_cam/image_raw, the message appears correctly: header: seq: 491 stamp: secs: 1372323952 nsecs: 364679466 frame_id: usb_cam height: 4 width: 4 encoding: rgb8 is_bigendian: 0 step: 12 data: [103, 111, 98, 104, 112, 99, 102, 110, 104, 103, 111, 105, 100, 112, 100, 101, 113, 101, 103, 109, 107, 102, 108, 106, 101, 108, 108, 102, 109, 109, 102, 109, 109, 102, 109, 109, 103, 110, 102, 102, 109, 101, 105, 113, 100, 100, 108, 95] What could be wrong? Originally posted by MarcoTe6 on ROS Answers with karma: 1 on 2013-06-26 Post score: 0 Original comments Comment by MarcoTe6 on 2013-06-28: I have to send image data to LabVIEW via Rosbridge and the Clearpath Toolkit. The problem is that the image is not well understood in the passage, so I'm trying to process the image data to transform the array into a string and send it on a specific topic. But first I need to extract the data from Image.msg Answer: What is the problem exactly? ROS uses an image_transport method for images, not a regular topic. Images can be viewed using rosrun image_view image_view image:=/usb_cam/image_raw Originally posted by davinci with karma: 2573 on 2013-06-26 This answer was ACCEPTED on the original site Post score: 0
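The `<member 'data' of 'Image' objects>` output is the clue to a second bug in the listener: `Image.data` looks up `data` on the message *class*, where it is a descriptor object, instead of on the message *instance* passed to the callback (i.e. `data.data`). A minimal stand-in class (not the real sensor_msgs type) reproduces the symptom:

```python
class Image:
    # Like a generated ROS message class, 'data' is declared on the class
    # itself (here via __slots__), so Image.data is a descriptor object.
    __slots__ = ("data",)

msg = Image()                    # plays the role of the message the callback receives
msg.data = [103, 111, 98, 104]   # plays the role of the pixel array

print(str(Image.data))  # <member 'data' of 'Image' objects>  -- the bug in the question
print(msg.data)         # [103, 111, 98, 104]                 -- what the callback should print
```

So inside the callback, printing `data.data` (the instance attribute) would show the actual pixel values.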
{ "domain": "robotics.stackexchange", "id": 14716, "tags": "ros, array, webcam" }
How does one actually 'fire an electron'?
Question: As a mathematician, I have practically zero experience and knowledge of experimental methods. Recently, learning about some famous experiments in the history of quantum physics, I heard about an experiment in Japan (the names of the people involved have long since departed from my memory) that confirmed de Broglie's idea of 'matter waves', i.e. that wave-particle duality isn't just for photons. The idea, as I remember, was that electrons were fired very slowly at a detector plate, but there was a small filament running perpendicular to the detector, so that it would block some electrons. As predicted, however, over time, the pattern on the detector showed interference patterns that we would normally expect from waves. My question is not about the quantum physics behind this, so much as the actual experimental methods. How do you actually prepare a bunch of electrons in a specific state, and then how do you actually 'fire' them all, from the same point, one after the other? Is there some sort of molecule that you can prepare that chucks out electrons when you hit it with something else, or something like this? More generally, I suppose, is there some sort of study/reference for experimental methods, or is it the sort of thing that you just learn as a physicist? For example, if I want to do an experiment but need to know how to make electrons, say, in a certain state, and fire them at a certain speed, and then prepare some photons in a certain state and do something with them, etc., would I turn to a certain book and look at the chapter 'How To Prepare And Fire Electrons At Various Speeds And In Certain Quantities', or would I look back at previous experiments, or would I just look at my notes from the lectures I followed during my under/post-graduate studies on experimental methods? Answer: Electrons have charge and thus their trajectory and speed can be manipulated by an electric field.
A standard procedure to make a beam of electrons would be to pass current through a conductor. Eventually the conductor will get hot enough that the electrons on the surface have enough energy to escape. You can align this disoriented "spray" of electrons by a controlled, external, electric field.
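For the "fire them at a certain speed" part of the question: after emission, the electrons are typically accelerated through a potential difference $V$, and the (non-relativistic) exit speed follows from $eV = \frac{1}{2}mv^2$. A sketch, with an assumed 100 V accelerating gap:

```python
import math

e = 1.602e-19    # C, elementary charge
m_e = 9.109e-31  # kg, electron rest mass

def exit_speed(volts):
    """Non-relativistic speed of an electron accelerated through `volts`."""
    return math.sqrt(2 * e * volts / m_e)

# Assumed 100 V gap between the hot filament and an anode:
v = exit_speed(100)
print("%.2e m/s" % v)  # ~5.93e6 m/s, about 2% of the speed of light
```

Tuning the accelerating voltage is how experimenters set the electron speed (and hence the de Broglie wavelength); at tens of kilovolts a relativistic correction becomes necessary.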
{ "domain": "physics.stackexchange", "id": 51545, "tags": "experimental-physics, experimental-technique" }
pysam pileup: what reads appear in the pileup?
Question: Cross-posted on Biostars (with no answers currently). I hope that's OK. I thought that iff a read matches the reference at a position, it would appear in the pileup column for that position. But tracking a single read in pysam shows that that's not the case. This read appears in some pileup columns for positions to which it does not map, and does not appear in some pileup columns to which it does map. What's wrong in my understanding? Updated code (and updated output) moved to Gist. Edit: Here's the raw SAM read. I wasn't separating the read from its mate when I calculated the figures above. E00489:44:HNNVYCCXX:1:2120:26524:19135 163 5 345248 0 48M102S = 345446 350 TGTGTGTGTGTGTGTGTGTGTCGGATGATGTCCCTGGCTGTGTGTGGGCGGGAGTGCGTGGGGGAGGGTGAGAGTGTGGATGTCGGTGGTCGCGGCTGCGTGAGAGAGGGGGTGTGTGGGGGGGGGGGGGGGGGGGGGTGTGGGTGGGCG AAAFFJFJJJJJJJJJJJJJJJJJJJJJJJJJFJJJJJJJJJJJ-J-7-777-7-7-7--7-77-77F---7-7AFJ7----7-7FAA7-7---7----77-777------7-7)7-AAA-A)7FA)<)<--)<)--<-FAJFJFFFF-< XA:Z:5,+345312,48M102S,0;5,+345554,48M102S,1;5,+345452,4S44M102S,0;5,+345192,48M102S,1; MC:Z:6M2D144M MD:Z:48 RG:Z:HNNVYCCXX.1 NM:i:0 MQ:i:60 AS:i:48 XS:i:48 E00489:44:HNNVYCCXX:1:2120:26524:19135 83 5 345446 60 6M2D144M = 345248 -350 TGGGGATGTGTGTGTGTGTGTCGGATGATGTCCCTGGTTGTGTGTGGGGATGTGTGTGTGTGTGGGTTGATGGTCCTGGCTGTGTGTGTGTGTGGGTGTGCGTGTGTGGGTGTGTGTGTGTGTGTGTCGGATGATGTCCCTGGCTGTGTG JJAF7-FJJJFJFF-AFJF7--<A--<F<7-7--FA--F-J--A-7--7-FJFJ7JFJJF<JA77-7----7----<F7-<JJF-<<FAJJJJA-7-<JAJJ<7FFJJ<JJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJFFFAA MC:Z:48M102S MD:Z:6^TG31C25C2A5T0C76 RG:Z:HNNVYCCXX.1 NM:i:7 MQ:i:0 AS:i:119 XS:i:69 I guess this explains why the read appears in more than 150 pileup columns -- in some of the columns, it's really its mate. But given the CIGAR strings for these paired reads, I would expect one of these mates to appear in 198 = (48 + 6 + 144) pileup columns, at each position where one of them maps?
Update: I re-ran this script, using @Bioathlete's suggestion of using the flags to distinguish the read and its mate. In case pileup() was filtering reads with low base quality, I also calculated how many positions the read has baseQ >= 13. It's 150 positions, but this read still appears in pileup columns for only 121 positions. Answer: The two entries in the sam file represent mate pairs for the given ID. You can tell the difference based on the sam flags. Picard has a nice tool to determine the meaning of the flags. You are missing coverage within the first read. I think that it is related to the low quality bases. samtools mpileup filters bases with a quality below 13 and pysam just wraps the samtools C functions.
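The 198-position expectation in the question can be reproduced by walking the two CIGAR strings: only M (and =/X) operations place read bases on the reference, while S consumes only the query and D only the reference. A small sketch using the CIGARs quoted above:

```python
import re

def aligned_ref_positions(cigar):
    """Count reference positions where the read contributes aligned bases
    (M/=/X ops; S and I consume only the query, D and N only the reference)."""
    return sum(int(n) for n, op in re.findall(r"(\d+)([MIDNSHP=X])", cigar)
               if op in "M=X")

pair = ["48M102S", "6M2D144M"]   # the two mates quoted in the question
counts = [aligned_ref_positions(c) for c in pair]
print(counts, sum(counts))       # [48, 150] 198
```

The 150-position budget for the second mate, minus positions filtered out for base quality below 13, is what brings the observed count down to 121.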
{ "domain": "bioinformatics.stackexchange", "id": 632, "tags": "python, sam, sequence-alignment, pysam" }
Is finding Logspace reductions harder than P reductions?
Question: Motivated by Shor's answer related to different notions of NP-completeness, I am looking for a problem that is NP-complete under P reductions but not known to be NP-complete under Logspace reductions (preferably for a long time). Also, is finding Logspace reductions between NP-complete problems harder than finding P reductions? Answer: Kaveh is correct in saying that all of the "natural" NP-complete problems are easily seen to be complete under (uniform) $\mathrm{AC}^0$ reductions. However, one can construct sets that are complete for NP under logspace reductions that are not complete under $\mathrm{AC}^0$ reductions. For instance, in [Agrawal et al., Computational Complexity 10(2): 117-138 (2001)] an error-correcting encoding of SAT was shown to have this property. As regards a "likely" candidate for a problem that is complete under poly-time reductions but not under logspace reductions, one can try to cook up an example of the form {$(\phi,b)$ : $\phi$ is in SAT and $z$ is in CVP [or some other P-complete set] iff $b=1$, where $z$ is the string that results by taking every 2nd bit of $\phi$}. Certainly the naive way to show that this set is complete will involve computing the usual reduction to SAT, and then constructing $z$ and computing the bit $b$, which is inherently poly-time. However, with a bit of work, schemes such as this can usually be shown to be complete under logspace reductions via some non-naive reduction. (I haven't worked out this particular example...)
{ "domain": "cstheory.stackexchange", "id": 5431, "tags": "cc.complexity-theory, np-hardness" }
Is there an RGB equivalent for smells?
Question: Millions of colors in the visible spectrum can be generated by mixing red, green and blue - the RGB color system. Is there a basic set of smells that, when mixed, can yield all, or nearly all detectable smells? Answer: There are about 100 (Purves, 2001) to 400 (Zozulya et al., 2001) functional olfactory receptors in man. While the total tally of olfactory receptor genes exceeds 1000, more than half of them are inactive pseudogenes. The combined activity of the expressed functional receptors accounts for the number of distinct odors that can be discriminated by the human olfactory system, which is estimated to be about 10,000 (Purves, 2001). Different receptors are sensitive to subsets of chemicals that define a "tuning curve." Depending on the particular olfactory receptor molecules they contain, some olfactory receptor neurons exhibit marked selectivity to particular chemical stimuli, whereas others are activated by a number of different odorant molecules. In addition, olfactory receptor neurons can exhibit different thresholds for a particular odorant. How these olfactory responses encode a specific odorant is a complex issue that is unlikely to be explained at the level of the primary neurons (Purves, 2001). So in a way, the answer to your question is yes, as there are approximately 100 to 400 olfactory receptors. Just like the photoreceptors in the visual system, each sensory neuron in the olfactory epithelium in the nose expresses only a single receptor gene (Kimball). In the visual system for color vision there are just three (red, green and blue cones - RGB) types of sensory neurons, so it's a bit more complicated in olfaction. References - Purves et al, Neuroscience, 2nd ed. Sunderland (MA): Sinauer Associates; 2001 - Zozulya et al., Genome Biol (2001); 2(6): research0018.1–0018.12 Sources - Kimball's Biology Pages
{ "domain": "biology.stackexchange", "id": 6516, "tags": "neuroscience, neurophysiology, vision, sensation, olfaction" }
failed to drive pr2 robot on multi_level_map_demo gazebo
Question: I followed pr2_simulator/Tutorials/MultiLeveMapWithRamps. I use ROS electric to run roslaunch pr2_gazebo_wg multi_level_map_demo.launch rosrun pr2_teleop teleop_pr2_keyboard But I can't move the PR2 with the teleop_pr2_keyboard node. How can I solve it? Thank you~ Originally posted by sam on ROS Answers with karma: 2570 on 2012-07-09 Post score: 0 Answer: I found one solution: roslaunch pr2_teleop teleop_keyboard.launch Thank you~ Originally posted by sam with karma: 2570 on 2012-07-09 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 10128, "tags": "ros" }
The statevector of three-qubit bit-flip encoding circuit for entangled state
Question: I was trying to implement a three-qubit bit-flip code for a shared entangled state. I was curious to analyze this mathematically, but the problem is I can't calculate the statevector after encoding. Here is the circuit: I used Qiskit to calculate the statevector, but it turns out to be 0's for every qubit. I am very confused because I don't know if this is valid or not! Could you please guide me a bit? Also, should this circuit have higher fidelity? Please comment on this as well. Answer: The method Statevector.evolve() returns the evolution result. It does not change the instance it is called on. So all you need is to change your code to: av = Statevector.from_label('000000') result = av.evolve(bit_flip_qc) result.data
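The gotcha is simply that evolve() is written in a functional style: it builds and returns a new object, leaving the receiver untouched. A toy class (not Qiskit's actual Statevector) illustrating the calling convention:

```python
class ToyState:
    """Stand-in for Qiskit's Statevector, to illustrate the calling convention."""
    def __init__(self, amplitudes):
        self.data = list(amplitudes)

    def evolve(self, gate):
        # Like Statevector.evolve(): build and RETURN a new state;
        # self.data is never modified in place.
        return ToyState([gate(a) for a in self.data])

sv = ToyState([1.0, 0.0])
result = sv.evolve(lambda a: a * 0.5)  # toy "gate": scale the amplitudes

print(sv.data)      # [1.0, 0.0] -- unchanged; reading this back is the mistake
print(result.data)  # [0.5, 0.0] -- the evolved state lives in the return value
```

Reading the original object after calling evolve() is why the question saw an un-evolved (all-zeros-label) state.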
{ "domain": "quantumcomputing.stackexchange", "id": 3740, "tags": "qiskit, error-correction" }
In a roller coaster, does the rear car have a higher acceleration/speed?
Question: I am wondering about this question since I asked myself: why do people feel more weightless in the rear car of a roller coaster than in the front car? To feel the effect of weightlessness, you must accelerate at the acceleration of gravity (around 9.8 m/s^2). Thus, you do not feel that effect in the front car but more likely in the rear car. But all the cars are connected together, and one individual car cannot accelerate faster or go faster because it will get pulled/pushed by the other cars. I am stuck right now trying to get the answer. If all cars must go at the same acceleration or same speed at different points on the track, why does the rear car feel more weightless? To have that feeling you must accelerate near the gravitational acceleration ... it doesn't make sense! I have left air friction and other frictional forces out of this, since I am guessing they shouldn't be taken into consideration in this kind of situation. Answer: The acceleration along the track is always equal for every car, but for each car that acceleration aligns with the hills/gravity in different ways. As the front car crests a hill, the coaster is decelerating; the front car is being pulled backward by the other cars. But as the rear car crests a hill, it's being pulled forward by the rest of the cars. The front car is accelerated down hills. The rear car is accelerated over hills. This is why they feel different to ride.
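The answer's argument can be made quantitative with a toy model: treat the train as equal point masses rigidly linked along the track, so the whole train shares one tangential acceleration, a = -g · mean(sin θ_i) over the cars' local slope angles. On a symmetric hill this is negative (braking) while the front car crests and positive while the rear car crests. A sketch with an assumed 30° hill and a 4-car train:

```python
import math

# Toy model: equal cars rigidly linked along the track share one tangential
# acceleration a = -g * mean(sin(theta_i)); friction and air drag ignored.
g = 9.81
slope = math.radians(30)  # assumed hill slope; positive = car pointing uphill

def train_accel(slopes):
    """Shared tangential acceleration of the whole train (m/s^2)."""
    return -g * sum(math.sin(s) for s in slopes) / len(slopes)

# 4-car train on a symmetric triangular hill (zero slope right at the crest):
front_at_crest = [0.0, slope, slope, slope]    # three cars behind, still climbing
rear_at_crest = [-slope, -slope, -slope, 0.0]  # three cars ahead, already descending

print(train_accel(front_at_crest))  # ~ -3.68: decelerating under the front riders
print(train_accel(rear_at_crest))   # ~ +3.68: accelerating under the rear riders
```

So a rider at the crest experiences deceleration when sitting in front and forward acceleration when sitting in back, even though at any instant every car shares the same acceleration.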
{ "domain": "physics.stackexchange", "id": 43884, "tags": "newtonian-mechanics, newtonian-gravity, acceleration, speed" }
Refactor embedded switch
Question: A particular application may refer to a specific property in one of three languages in different contexts: in lowercase English, in uppercase English, and in Hebrew. To facilitate ease of converting one language to another, this monstrosity exists: public string translatePersonType(string personType, string lang) { switch (personType) { case "employee": case "EMPLOYEE": case "עובד": switch (lang) { case "eng": return "employee"; case "ENG": return "EMPLOYEE"; case "heb": return "עובד"; } break; case "contractor": case "CONTRACTOR": case "קבלן": switch (lang) { case "eng": return "contractor"; case "ENG": return "CONTRACTOR"; case "heb": return "קבלן"; } break; case "supplier": case "SUPPLIER": case "ספק": switch (lang) { case "eng": return "supplier"; case "ENG": return "SUPPLIER"; case "heb": return "ספק"; } break; case "customer": case "CUSTOMER": case "לקוח": switch (lang) { case "eng": return "customer"; case "ENG": return "CUSTOMER"; case "heb": return "לקוח"; } break; default: return ""; }// end switch (personType) return ""; }// end translatePersonType Can this be refactored to be more concise and maintainable? For reference, this is the database field behind the property: personType SET('CUSTOMER','SUPPLIER','EMPLOYEE', 'CONTRACTOR') NOT NULL Thanks.
Answer: OK, this is Java and might take a little tickling to work in C#, but: final String[] employee = {"employee", "EMPLOYEE", "עובד"}; final String[] contractor = {"contractor", "CONTRACTOR", "קבלן"}; final String[] supplier = {"supplier", "SUPPLIER", "ספק"}; final String[] customer = {"customer", "CUSTOMER", "לקוח"}; final String[][] classifications = {employee, contractor, supplier, customer}; public String translatePersonType(String personType, String lang) { int index = 0; if (lang.equals("ENG")) index = 1; if (lang.equals("heb")) index = 2; for (String[] classification : classifications) { if (Arrays.asList(classification).contains(personType)) { return classification[index]; } } return null; }
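For comparison, here is the same table-driven idea as a Python sketch (illustrative only; the asker needs C#): the two nested switches collapse into one alias table plus a column index.

```python
# Table-driven version of the nested switch: one row per person type,
# columns = (lowercase English, uppercase English, Hebrew).
FORMS = [
    ("employee", "EMPLOYEE", "עובד"),
    ("contractor", "CONTRACTOR", "קבלן"),
    ("supplier", "SUPPLIER", "ספק"),
    ("customer", "CUSTOMER", "לקוח"),
]
COLUMN = {"eng": 0, "ENG": 1, "heb": 2}
ALIASES = {alias: row for row in FORMS for alias in row}  # any spelling -> its row

def translate_person_type(person_type, lang):
    row = ALIASES.get(person_type)
    return row[COLUMN[lang]] if row else ""

print(translate_person_type("EMPLOYEE", "heb"))  # עובד
print(translate_person_type("קבלן", "ENG"))      # CONTRACTOR
```

Adding a new person type or a new language then means editing the table, not the control flow.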
{ "domain": "codereview.stackexchange", "id": 1465, "tags": "c#" }
Premise of life cycle synchronization between predator and prey
Question: While reading about the predator satiation hypothesis of the periodical cicadas' 13/17 year life cycles, I started wondering about its premise. By the way, I understand the math behind the prime number life cycle. 13 and 17, being prime numbers are only divisible by themselves and 1, and thus have as few divisors as possible. But my question is about the premise of why a predator would even want to synchronize its life cycle with a divisor of the prey's life cycle in the first place. If the predator has a set life cycle, but its individuals are evenly distributed in regards to their development within that life cycle at any given time (for example there could be roughly even amounts of mature and immature predators at any given time), then what difference does the life cycle make? Here I am assuming that eating cicadas is most beneficial to a specific developmental stage of the predators, which is the only reason I can think of why there would be an advantage to this synchronization. By the way, is this even a correct assumption in regards to this hypothesis? To be clear, my main question is why life cycle synchronizations between predator and prey matter when/if predators aren't all undergoing the same stages of their life cycles simultaneously. edit: To be even more clear, I am asking what the benefits are to the predator to synchronize its life cycle with the prey (the cicadas, in my example). My reference: Wikipedia says the following: "It was hypothesized that the emergence period of large prime numbers (13 and 17 years) was a predator avoidance strategy adopted to eliminate the possibility of potential predators receiving periodic population boosts by synchronizing their own generations to divisors of the cicada emergence period." Goles, E.; Schulz, O.; Markus, M. (2001). "Prime number selection of cycles in a predator-prey model". Complexity 6 (4): 33–38. doi:10.1002/cplx.1040. I have interpreted "synchronizing their own generations to divisors..."
to mean synchronizing the length of the life cycle. I am happy to hear of another interpretation. Answer: New Answer, based on first comment by user2686410 and subsequent edits to the question. I have interpreted "synchronizing their own generations to divisors..." to mean synchronizing the length of the life cycle. I am happy to hear of another interpretation. First, the overall goal of Goles et al. (2001) does not seem to test hypotheses related to the evolution of life cycles in the cicadas per se. It seems their overall goal instead may have been to explore the generation of prime numbers from numerical theory, using biologically-based models. For example, from the abstract they state, The model marks an encounter of two seemingly unrelated disciplines: biology and number theory. A restriction to the latter, provides an evolutionary generator of arbitrarily large prime numbers. and at the end of the discussion they write, Although there are traditional methods for prime number detection (see e.g., Ref. 23) that are faster than the methods presented here, it is remarkable that the generation of prime numbers can be performed using a biological model. To more directly address your question and if I understand their models, Goles et al. (2001) do seem to allow the life cycles of both predators and prey to co-evolve, yet this is not always clear. We compare now a prey mutating to a cycle Y' with the resident prey (cycle length Y ) at constant X. Analogously, we compare mutant cycles X' with resident cycles X at constant Y (Goles et al. 2001, pg. 34; emphasis added). Here, they seem to be saying that the life cycle of one (predator or prey) can change while the other remains constant (prey or predator). The life cycles don't coevolve. Subsequent formulas and statements imply an interactive effect between the predator and prey life cycles.
(Honestly, their writing is somewhat opaque to me and they do not define all of their mathematical terms, making it difficult to follow the logic of their models.) Regardless, the results of their initial models, shown in the figure below, show the prey converging on a 17 year life cycle while the predators converge on a 4 year cycle. Somewhat more complex models tended to converge on 13 and 17 year cycles, although other primes like 11, 19 and 23 had similar probabilities of evolving. Goles et al. (2001) admit at the outset of their paper that there is no evidence for predators with periodical cycles that align with the life cycles of cicadas. This would be consistent with general ecological observations. Most predators consume multiple prey items. Highly specialized predators would be more vulnerable to extinction if their prey source went extinct. Another numerical model, developed by Tanaka et al. (2009) explicitly to explore life cycle evolution of cicadas, does not include predator life cycles at all. Their results still converge on 13 and 17 year life cycles although they restricted the possible range of life cycles to between 10 and 20 years. Thus, it does not appear to be necessary for predators to evolve a life cycle that coincides with prey life cycles for the predator satiation hypothesis to work. Of course, this is consistent with the real world observations that cicadas do not appear to have predators with concomitant life cycles. Original answer, lightly edited In the case of the cicadas, they are the prey, not the predators. They are not synchronizing their life cycles with the predators (if I am interpreting your entire question correctly). The argument is that cicadas have evolved synchronized life cycles to maximize survivorship. By having a large number of individuals emerge within a very short period of time, the very high density means most individuals will live long enough to reproduce because the predators (birds, small rodents, etc.)
will be satiated (see Williams and Simon 1995 for a review and pointers to early literature). Some predators of cicadas will be always present so the cicadas are not synchronizing to any specific predator. Goles et al. (2001) used numerical models to argue that such life cycles will tend to converge on prime numbers. (As discussed in the update above) Tanaka et al. (2009) recently argued, again using numerical models, that the prime number-based life cycles of cicadas evolved as a result of the Allee effect. The Allee effect basically states that there is a positive association between the fitness of individuals in the population and the population size or density. Below a certain size, a population is unable to sustain itself and is vulnerable to extinction. Tanaka et al. argue that the Allee effect results in prime-based life cycles under varying environmental parameters. They further argue that maintenance of alternate life cycles minimizes hybridization between groups with different life cycle timings. What really remains for understanding the evolution of cicada life cycles is to unravel the genetic mechanisms involved with timing. Literature Cited Goles, E. et al. 2001. Prime number selection of cycles in a predator-prey model. Complexity 6: 33-38. Tanaka, Y. et al. 2009. Allee effect in the selection for prime-number cycles in periodical cicadas. Proceedings of the National Academy of Sciences USA 106: 8975-8979. Williams, K.S. and C. Simon. 1995. The ecology, behavior, and evolution of periodical cicadas. Annual Reviews of Entomology 40: 269-295.
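As a footnote to the models discussed above, the "few divisors" intuition from the question can be sketched numerically: a predator with cycle p co-emerges with prey of cycle c every lcm(c, p) years, so prime prey cycles maximize the wait against every short predator cycle (all numbers illustrative):

```python
from math import lcm  # Python 3.9+

def coincidence_period(prey_cycle, predator_cycle):
    # the two generations line up every lcm(prey, predator) years
    return lcm(prey_cycle, predator_cycle)

# For each candidate prey cycle, the worst case (shortest wait) over
# predator cycles of 2-5 years:
worst_case = {c: min(coincidence_period(c, p) for p in range(2, 6))
              for c in (12, 13, 14, 15, 16, 17)}
# composite cycles (12, 14, 15, 16) give a predator a boost every cycle;
# the primes 13 and 17 force a wait of at least two prey generations
```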
{ "domain": "biology.stackexchange", "id": 2703, "tags": "ecology, population-dynamics, predation" }
Asymptotic runtime of a nested sum
Question: I have to calculate the asymptotic runtime of a nested sum: Sum $\sum_{i=1}^{log_{4}(n)} \sum_{j=1}^{\sqrt{n}}1$ My solution is $\Theta(log(n) \cdot \sqrt{n})$, which is wrong, because the solution says $\Theta(\sqrt{n})$. I don't understand where my mistake is: Sum $\sum_{i=1}^{log_{4}(n)} \sum_{j=1}^{\sqrt{n}}1 = \sum_{i=1}^{log_{4}(n)} \sqrt{n} = \sqrt{n} \sum_{i=1}^{log_{4}(n)}1 = \sqrt{n} \cdot (log(n)+1) =$ $ log(n) \cdot \sqrt{n} + \sqrt{n} \stackrel{?}{=} \Theta(log(n) \cdot \sqrt{n}) $ Maybe my formula is wrong. Here is the pseudo-code I have to analyse: someFunc(n) i = n while (i >= 1) i=i/4; //we do sqrt(i) operations for j=1 to sqrt(i) do_some_atomic_op() //O(1) end //i will reduce in every step! end end Answer: In fact you do change the value of $i$, see $i=i/4$, so the inner loop bound shrinks on every pass. In the $k$-th pass of the while-loop we have $i = n/4^{k}$, so the total work is $\sum_{k=1}^{\log_4{n}}{\sqrt{n/4^{k}}} = \sqrt{n}\left(\tfrac{1}{2}+\tfrac{1}{4}+\dots\right)$, a geometric series bounded between $\tfrac{1}{2}\sqrt{n}$ and $\sqrt{n}$, hence $\Theta(\sqrt{n})$. Your formula is wrong because it keeps the inner bound fixed at $\sqrt{n}$ for all $\log_4{n}$ iterations instead of letting it shrink with $i$.
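One way to sanity-check the Θ(√n) solution is to count the atomic operations of the pseudo-code directly (a sketch; names are illustrative):

```python
import math

def some_func_ops(n):
    """Count the atomic operations performed by someFunc(n)."""
    ops = 0
    i = n
    while i >= 1:
        i = i / 4                  # i takes the values n/4, n/16, ...
        ops += math.isqrt(int(i))  # inner loop does ~sqrt(i) atomic ops
    return ops

# total ~ sum over k of sqrt(n/4^k) = sqrt(n) * (1/2 + 1/4 + ...) ~ sqrt(n)
ratio = some_func_ops(4 ** 10) / math.sqrt(4 ** 10)
```

The ratio stays bounded by constants as n grows, which is exactly the Θ(√n) claim.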
{ "domain": "cs.stackexchange", "id": 9467, "tags": "asymptotics, runtime-analysis" }
ROS2 - Remote Node Setup
Question: Is it possible using ROS2, to have two nodes on different networks each connected to the Internet, communicate with each other (i.e. publish and subscribe messages and topics)? If so, any information or insights on how to accomplish this type of setup would be greatly appreciated. I understand how to setup two nodes on the same network so that they communicate using ROS2 but I can't find the settings or parameters needed for 'remote' nodes to locate and communicate together. Any and all help is greatly appreciated. Originally posted by cbquick on ROS Answers with karma: 41 on 2018-03-27 Post score: 4 Original comments Comment by coenig on 2019-02-19: @cbquick Have you been able to accomplish this in the meanwhile? If so, can you share some insights? Would be greatly appreciated! Comment by cbquick on 2019-02-19: Unfortunately no, I don't have a solution at this time. I wish you the best of luck! Comment by coenig on 2019-02-20: Thanks anyway! If I find a feasible solution I will write an answer here (but I won't dive too deep as I assume that there will be an official solution at some point...) Answer: We don't have any networking guides for ROS 2 yet. However, ROS 2 uses DDS by default, so you could draw on those resources to get help configuring your network to allow DDS traffic. For both ROS 1 and ROS 2, communication over the internet has two challenges, discovery and peer-to-peer communication. tl;dr I'd use VPN between the remote computers, and set it up to forward multicast UDP as well. discovery But, for the case you specifically laid out, it will be difficult. DDS does discovery over UDP multicast and UDP unicast. So doing discovery across the internet is not going to work that way. This is because UDP multicast will not work over the internet as far as I know.
In ROS 1 you could solve this by making the ROS master publicly accessible on the internet, and pointing the nodes on the remote machine to the public IP and port for the machine running the ROS master. However, the nodes still communicate peer-to-peer. In ROS 2, I don't know of a generic solution, but individual DDS implementations may have tools for relaying discovery information across the internet. Though I don't think that will be very easy to use with ROS 2 because there isn't a portable way to do it that of which I am aware. peer-to-peer communication After connecting discovery, you still need individual nodes to be able to communicate with each other. In ROS 1 this is very hard because nodes let the OS pick ports randomly, so you don't know how to NAT or open your firewall to let them talk. In ROS 2, the ports that are used are deterministic according to an algorithm, for example see: https://eprosima-fast-rtps.readthedocs.io/en/latest/pubsub.html#id4 So you could open the right range of ports for peer-to-peer in ROS 2 if you wanted. summary So in summary, it's possible to do discovery over the internet with ROS 1, but not realistic to do discovery with ROS 2 over the internet. For peer-to-peer, it's impossible to predict the ports ROS 1 will use, but you can predict and therefore open up ports for ROS 2. But for both ROS 1 and ROS 2, I wouldn't suggest exposing either publicly to the internet for security reasons. So in conclusion, I would recommend using VPN for either ROS 1 or ROS 2, for both simplicity and security. Originally posted by William with karma: 17335 on 2018-03-27 This answer was ACCEPTED on the original site Post score: 4 Original comments Comment by gvdhoorn on 2018-03-28: re: VPN: you could take a look at tinc. It's a relatively easy to setup peer-to-peer VPN, works on just about any platform and deals with NAT and other weird networking setups transparently. Comment by cbquick on 2018-03-28: Thank you both! 
@William Would it be possible, with the public IP address to the Internet based nodes, to use the fast RTPS 'Advanced Configuration' options for network configuration, listening locators, and sending locators to specify nodes on different networks and get ROS2 communication working? Comment by William on 2018-03-28: @cbquick, yes, though it depends on whether or not you can do this without changing the code itself (not sure off hand). It might be something the Fast-RTPS community could answer better than me. Comment by William on 2018-03-28: We don't expose those options through our API, but if you know you're using Fast-RTPS you can get the Fast-RTPS objects directly from our API like this: https://github.com/ros2/demos/blob/master/demo_nodes_cpp_native/src/talker.cpp Comment by shoemakerlevy9 on 2018-05-21: Does ROS2 still use the ros_master_uri environment variable? Comment by Dirk Thomas on 2018-05-21: No, ROS 2 doesn't use any master and therefore also not that environment variable.
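For reference, the deterministic port mapping mentioned in the answer follows the DDSI-RTPS well-known-ports formula; the constants below are the spec defaults (a sketch, not output captured from any particular DDS implementation):

```python
# DDSI-RTPS default port-mapping parameters
PB, DG, PG = 7400, 250, 2          # port base, domain gain, participant gain
d0, d1, d2, d3 = 0, 10, 1, 11      # spec-defined offsets

def rtps_ports(domain_id, participant_id):
    """Default UDP ports for a given DDS domain and participant."""
    base = PB + DG * domain_id
    return {
        "discovery_multicast": base + d0,
        "discovery_unicast": base + d1 + PG * participant_id,
        "user_multicast": base + d2,
        "user_unicast": base + d3 + PG * participant_id,
    }
```

For domain 0, participant 0, this gives 7400/7410 for discovery and 7401/7411 for user traffic, which is the range you would open in a firewall for a small number of participants.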
{ "domain": "robotics.stackexchange", "id": 30471, "tags": "ros2" }
How to connect ROS to ABB controller
Question: Dear all, I am trying to connect my Ubuntu machine running ROS Hydro to an ABB controller that is driving an IRB120 robotic arm, but I am unable to understand how to make it connect. I am looking at this. But I am unsure what the robot_ip parameter should be. Originally posted by Vinh K on ROS Answers with karma: 110 on 2017-06-09 Post score: 0 Original comments Comment by gvdhoorn on 2017-06-09: First: please don't use ROS Hydro. It's long been EOLd, and you're missing out on a lot of improvements and support. Second: can you please tell us where you found the text that you show in your question? I cannot find it in any of the official (ros-i) tutorials. Comment by Vinh K on 2017-06-09: I got it from here: http://dspace.uah.es/dspace/handle/10017/20806 Comment by Vinh K on 2017-06-09: in chapter 15 Comment by Lina on 2019-02-18: The link does not work anymore, do you have by any chance, the link updated? Answer: In short: after you have completed setting up your controller according to the official tutorial, the robot_ip argument should be set to the IP(v4) address of your IRC5 controller (either real or simulated). This means you'll have to have configured networking correctly on your IRC5, and be able to ping the controller from your ROS PC. If you only installed ROS Hydro because the PDF you linked instructs you to, I would recommend you start over with either Ubuntu Trusty (14.04) and ROS Indigo, or Ubuntu Xenial (16.04) and ROS Kinetic. The ABB packages are supported on both of these distributions and ROS releases. Then follow the official tutorial. Originally posted by gvdhoorn with karma: 86574 on 2017-06-10 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Vinh K on 2017-06-21: I was able to ping it. That is working
{ "domain": "robotics.stackexchange", "id": 28091, "tags": "robotic-arm, ros, abb, server" }
Why do we consider that if initial & final positions are same then displacement of an object is Zero?
Question: I was thinking about a basic high-school problem about motion along one axis and came across one of those problems where the displacement is zero because the final and initial positions are the same. My question is: why do we have a concept of displacement that takes the shortest path between the endpoints? It is the cause of many problems: if the "displacement" is zero then the work done will also be zero (even if some distance is covered), and this will be irrespective of energy consumed. Why don't we just ditch the idea of a hypothetical shortest path and only consider distances? One reason I came across was that distance is a scalar while velocity is a vector. I think we could devise a system where we define whether distance is being taken as a scalar or a vector (as in: if the motion is only in a straight line then distance is considered a vector, else a scalar). I know there is something I am missing, and the motive of this post is to rediscover what that thing is. Answer: (a) We are often interested in studying a body's motion throughout a period of time. Suppose that it followed a curved path. Then its displacements over successive time intervals would keep changing direction. So the body's velocity would keep changing direction, giving the body accelerations. That's one context in which we need the concept of displacement. (b) You seem particularly interested in cases where a body performs a 'round trip' so its overall displacement is zero. You say that the work done on the body is zero. That's true for the work done on the body by a gravitational field, or, if the body has a charge, by an electric field due to static charges. I don't see why that's a problem, and I don't understand what you mean by "this will be irrespective of the energy consumed". Note that not all forces on a body have this "conservative" property.
If you push a book across a table, and then push it back to its starting point, you will have done a finite amount of work against friction – which always opposes the relative motion between surfaces.
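A minimal numerical illustration of the round-trip asymmetry between friction and a conservative force (all values illustrative):

```python
# Out-and-back slide of a book across a table
mu, m, g, d = 0.3, 1.0, 9.8, 2.0   # friction coeff., mass, gravity, leg length

# Friction opposes the motion on BOTH legs, so its work never cancels:
w_friction = -mu * m * g * d - mu * m * g * d    # negative on each leg

# A constant conservative force F along +x does +F*d on the way out
# and -F*d on the way back:
F = 5.0
w_conservative = F * d - F * d                   # exactly zero over the loop
```

The conservative force's work over the closed path vanishes, while friction has drained a finite amount of energy, matching the distinction drawn above.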
{ "domain": "physics.stackexchange", "id": 80917, "tags": "newtonian-mechanics, kinematics, distance, displacement" }
Why is the $D^0$ oscillation so different from the $K^0$ and $B^0$?
Question: I have looked for this answer in many articles and books but I am not able to figure out why $D^0\to\bar{D}^0$ is so highly suppressed compared to the $B^0 \to \bar{B}^0$ and $K^0 \to \bar{K}^0$ diagrams. In principle, I guess that the GIM mechanism acts to cancel diagrams which include vertices with opposite sign CKM factors. However, this effect should be the same for $K^0$, $B^0$ and $D^0$ mesons. I suspect then that this difference could be introduced by the different masses of the quarks $c$, $b$, $s$, but I don't understand exactly how. Could anyone clarify this difference for me and also cite a reference? Answer: Finally, I am now able to provide an answer to my question. The weak charged current interaction is described by the gauge field $W_\mu^\pm$ through the interaction Lagrangian term: \begin{equation} \mathcal{L}_I = -\frac{g}{\sqrt{2}} (\overline{u}_L, \overline{c}_L, \overline{t}_L) \gamma^\mu {W_\mu}^- V_\text{CKM} \begin{pmatrix} d_L \\ s_L \\ b_L \end{pmatrix} - \frac{g}{\sqrt{2}} (\overline{d}_L, \overline{s}_L, \overline{b}_L) \gamma^\nu W_\nu^+ V_\text{CKM}^\dagger \begin{pmatrix} u_L \\ c_L \\ t_L \end{pmatrix}. \end{equation} Here, $V_\text{CKM}$ denotes the unitary Cabibbo-Kobayashi-Maskawa (CKM) matrix \begin{equation} V_\text{CKM} = \begin{pmatrix} V_\text{ud} & V_\text{us} & V_\text{ub} \\ V_\text{cd} & V_\text{cs} & V_\text{cb} \\ V_\text{td} & V_\text{ts} & V_\text{tb} \\ \end{pmatrix} , \end{equation} and $u_L, c_L, t_L, d_L, s_L, b_L$ represent the left-handed projections of the Dirac spinors associated with the quark fields.
Actually, the magnitudes of each $V_\text{CKM}$ term are given by \begin{equation} V_\text{CKM} \approx \begin{pmatrix} 0.97383^{+0.00024}_{-0.00023} & 0.2272^{+0.0010}_{-0.0010} & (3.96^{+0.09}_{-0.09})\times10^{-3} \\ 0.2271^{+0.0010}_{-0.0010} & 0.97296^{+0.00024}_{-0.00024} & (42.21^{+0.10}_{-0.80})\times10^{-3} \\ (8.14^{+0.32}_{-0.64})\times10^{-3} & (41.61^{+0.12}_{-0.78})\times10^{-3} & 0.999100^{+0.000034}_{-0.000034} \\ \end{pmatrix}. \end{equation} Moreover, it is useful to consider $V_\text{CKM}$ in the Wolfenstein parametrization \begin{equation} V_\text{CKM} \approx \begin{pmatrix} 1-\lambda^2/2 & \lambda & A\lambda^3(\rho-i\eta) \\ - \lambda & 1-\lambda^2/2& A \lambda^2 \\ A\lambda^3(1-\rho-i\eta) & -A\lambda^2 & 1 \\ \end{pmatrix} + \mathcal{O}(\lambda^4). \end{equation} It is possible to show that the mass difference related to the $K^0$ and $D^0$ mass eigenstates is approximated by the first-order box diagram through \begin{align} \Delta M_K & \approx \frac{G_F^2}{4\pi} m_K f_K^2 \sum_{q=u,c,t} m_q^2 |V_\text{qs}V_\text{qd}|^2,\\ \Delta M_D & \approx \frac{G_F^2}{4\pi} m_D f_D^2 \sum_{q=d,s,b} m_q^2 |V_\text{cq}V_\text{uq}|^2, \end{align} where $G_F$ represents the Fermi constant, $m_K, m_D$ the masses associated with $K^0, D^0$ and $f_K,f_D$ the corresponding decay constants. The latter are generally determined from the weak decays $K^\pm \to l^\pm \nu, D^\pm \to l^\pm \nu$ and they take the values \begin{equation} f_{K} = (156.1 \pm 0.12) \; \text{MeV}, \qquad f_{D} = (206.7 \pm 11) \; \text{MeV}.
\end{equation} Moreover, considering the different quark masses and the magnitudes of the CKM matrix elements $V_\text{ij}$, the sum in $\Delta M_K$ is dominated by the charm quark term while that in $\Delta M_D$ by the strange quark term: \begin{align*} \sum_{q=u,c,t} m_q^2 |V_\text{qs}V_\text{qd}|^2 & \approx m_c^2 |V_\text{cs}V_\text{cd}|^2 \propto m_c^2 \mathcal{O}(\lambda^2),\\ \sum_{q=d,s,b} m_q^2 |V_\text{cq}V_\text{uq}|^2& \approx m_s^2 |V_\text{cs}V_\text{us}|^2 \propto m_s^2 \mathcal{O}(\lambda^2). \end{align*} The ratio of the mass differences $\Delta M_K$ and $\Delta M_D$ is then dominated by the $m_s, m_c$ mass terms, \begin{equation} \frac{\Delta M_D}{\Delta M_K} \propto \frac{m_s^2}{m_c^2}\approx (7\cdot10^{-2})^2 \approx 5\cdot10^{-3}, \end{equation} which shows clearly why the $D^0$ oscillation is so different from the $K^0$: the charm quark is much heavier (roughly 14 times) than the strange quark, and the suppression enters through the squared mass ratio.
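A quick order-of-magnitude check of the mass-ratio argument (approximate MS-bar quark masses assumed; the common Cabibbo factor of order $\lambda^2$ cancels in the ratio):

```python
# approximate quark masses in GeV (assumed inputs, not fitted values)
m_s, m_c = 0.095, 1.27

linear_ratio = m_s / m_c        # ~7e-2: the charm is ~14x heavier
ratio = m_s**2 / m_c**2         # the mass difference goes like m_q^2
# squared ratio ~5e-3: D0 mixing is strongly suppressed relative to K0
```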
{ "domain": "physics.stackexchange", "id": 30152, "tags": "quantum-field-theory, particle-physics, standard-model, feynman-diagrams, weak-interaction" }
How would natural (resonant) frequencies affect amplitudes?
Question: I read $y=A\sin(2\pi ft)$, where $A$=Amplitude, $f$=Frequency, $t$=Time and $y$=$Y$ position of the wave. Since natural frequencies only take strong effect when the driving frequency is close to them, how would one natural frequency, or several natural frequencies, affect the equation? Would I be correct in thinking it's something to the effect of: y=Y_Position*NaturalFrequency where Y_Position is the first equation and NaturalFrequency is similar to the first equation but with a low amplitude? Answer: If you are driving a resonant linear system, which is characterized by a natural frequency $f_n$ and quality factor $Q$, with your specified sinusoidal input $y_{in}$ of amplitude $A$ and frequency $f$, the steady-state output $y_{out}$ will be: $$ y_{out} = \frac{A}{1+j \frac{1}{Q} \frac{f}{f_n} - \left(\frac{f}{f_n} \right)^2} $$ This equation gives a complex phasor quantity, which describes the amplitude and phase (with respect to the input) of the output. The higher the $Q$ of the system, the higher the output when the driving frequency is near resonance. At low frequencies (compared with the natural frequency) the output just tracks the input, while at high frequencies, the output falls off like $1/f^2$ and lags the input by a half-cycle (180 degrees).
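A short sketch of this transfer function and its three regimes (parameter values are illustrative):

```python
def response(f, f_n=1.0, Q=10.0, A=1.0):
    """Complex phasor output for drive frequency f: A / (1 + j r/Q - r^2)."""
    r = f / f_n
    return A / complex(1 - r * r, r / Q)

low  = abs(response(0.01))   # ~A: output tracks the input
peak = abs(response(1.0))    # = Q*A: resonant amplification
high = abs(response(10.0))   # ~A/100: rolls off like 1/f^2
```

At resonance the denominator reduces to j/Q, so the output magnitude is exactly Q times the drive amplitude.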
{ "domain": "physics.stackexchange", "id": 5711, "tags": "waves, acoustics, frequency, resonance" }
What is the largest practical unit of mass?
Question: I have seen on the internet and in books that the largest practical unit of mass is the Chandrasekhar limit, which is about 1.4 times the mass of the sun. It is roughly the maximum mass of a white dwarf, above which a star collapses further, e.g. into a neutron star. But there is a bigger unit, the Tolman-Oppenheimer-Volkoff (TOV) limit, roughly 3 times the mass of the sun, above which a neutron star collapses into a black hole. Are these units the largest natural units of mass? Would these units of mass be more useful than the solar mass?
{ "domain": "physics.stackexchange", "id": 48642, "tags": "mass, astrophysics, units" }
Is the work-done by a magnetic field on a moving point charge zero?
Question: Blue vector is the magnetic field vector. Red vector is the velocity vector of the point charge (light grey) q. Green vector is the force vector (force on moving charge q due to the magnetic field). I have heard that the work done by a magnetic field on a moving charge is zero. Let's say that a charge q which was already moving with a velocity 'V' came under the influence of a magnetic field 'B'. It started experiencing a force 'F' due to the magnetic field given by F = q(V vector X B vector). Now due to this it will also start moving along the 'y' axis with some acceleration. So wouldn't we say that the work done = F x displacement in y? Edit: What I am trying to say is that the displacement of the charge will have a 'y' component and the force here is also acting in the 'y' direction, so shouldn't the expression I wrote above be valid? Answer: The work done by the magnetic field in this case is zero. Why? Because the magnetic force always acts perpendicular to the velocity, thus $$W=\int_{\mathbf s_1}^{\mathbf s_2} \mathbf F\cdot \mathrm d\mathbf s=\int_{t_1}^{t_2} (\mathbf F\cdot \mathbf v) \mathrm d t=0 \tag{\(\because \mathbf F\perp \mathbf v\)}$$ Now, you might be wondering how this reconciles with the acceleration in the $y$ direction that you observed. Well, the reason why you ended up with wrong conclusions is because you only considered the work done by the component of the magnetic force in the $y$ direction.
The correct version of your statement should be \begin{align} W&=\int_{\mathbf s_1}^{\mathbf s_2} (\mathbf F_x +\mathbf F_y)\cdot (\mathrm d\mathbf s_x +\mathrm d \mathbf s_y)\\ &=\underbrace{\int_{s_{x1}}^{ s_{x2}} F_x\: \mathrm ds_x}_{\large{<\:0}}+\underbrace{\int_{ s_{y1}}^{ s_{y2}} F_y\: \mathrm ds_y}_{\large{>\:0}}=0 \end{align} (Here $\mathbf F_x$, $\mathbf F_y$ and $\mathbf s_x$ and $\mathbf s_y$ are the forces and displacements, respectively, in the $x$ and $y$ directions) As you can see, despite the vertical component of the magnetic force doing some positive work, the horizontal component does equal and opposite negative work, the two cancelling each other out to yield zero net work done by the magnetic force.
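This can be checked numerically: in a uniform field the magnetic force only rotates the velocity vector, so the kinetic energy, and hence the net work, never changes (values illustrative; the sense of rotation depends on the sign conventions):

```python
import math

# uniform B along z; cyclotron motion rotates the velocity in the xy-plane
q, m, B = 1.0, 1.0, 2.0
omega = q * B / m               # cyclotron frequency
vx0, vy0 = 3.0, 4.0

def v(t):
    """Initial velocity rotated by the angle omega*t."""
    c, s = math.cos(omega * t), math.sin(omega * t)
    return (vx0 * c + vy0 * s, -vx0 * s + vy0 * c)

ke = lambda vel: 0.5 * m * (vel[0] ** 2 + vel[1] ** 2)
# ke(v(t)) equals ke(v(0)) at every t, so W = delta KE = 0
```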
{ "domain": "physics.stackexchange", "id": 69769, "tags": "electromagnetism, magnetic-fields, kinematics" }
Have general relativistic effects of all of the components of the stress-energy tensor been measured?
Question: The stress-energy tensor is $T^{\mu\nu}$, whose components are the energy density ($T^{00}$), the momentum density ($T^{0i}$), the energy flux ($T^{i0}$), the pressure (the diagonal $T^{ii}$) and the shear stress (the off-diagonal $T^{ij}$). Have general relativistic effects of all of the components of the stress-energy tensor been measured? I've heard that the accelerating expansion of the universe is due to negative pressure, but I don't know which terms are involved in the precession of Mercury's orbit or frame dragging. Answer: Nice question. The gravitational effects of the pressure components have been directly verified in at least two ways that I know of. The early universe was radiation-dominated, so I don't think you can reproduce any of the relevant cosmological data (e.g., big bang nucleosynthesis) if you leave out the pressure terms. Kreuzer 1968 is a laboratory test, and Bartlett 1986 tested it using lunar laser ranging. I've written an exposition here of how Kreuzer can be interpreted as a test of pressure as a source, and this review article by Will summarizes the Bartlett experiment in section 3.7.3. Others might be able to comment on direct experimental tests of the shear stress and momentum density as gravitational sources, but if they didn't contribute, you'd be breaking Lorentz invariance, since a diagonal stress-energy tensor gains off-diagonal elements under a boost. The precession of Mercury's orbit can be explained in terms of the motion of a test particle in a Schwarzschild metric, so it doesn't test anything about the non-00 terms in the stress-energy tensor. It seems likely to me that Gravity Probe B's detection of frame dragging can be interpreted as a direct test of the momentum-density part (since I think it can be interpreted as a gravitomagnetic effect). Bartlett and van Buren, Phys. Rev. Lett. 57 (1986) 21. Kreuzer, Phys. Rev. 169 (1968) 1007
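The boost argument can be verified directly: take a diagonal tensor $\mathrm{diag}(\rho, p, p, p)$ and apply a Lorentz boost along $x$ (a minimal sketch in units with $c = 1$; values illustrative):

```python
import math

def matmul(A, B):
    """Plain 4x4 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

rho, p, beta = 1.0, 0.1, 0.5
gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
L = [[gamma, gamma * beta, 0, 0],        # boost along x
     [gamma * beta, gamma, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1]]
T = [[rho, 0, 0, 0], [0, p, 0, 0], [0, 0, p, 0], [0, 0, 0, p]]

# T' = L T L^T; L is symmetric here, so L^T = L
T_boosted = matmul(matmul(L, T), L)
# T_boosted[0][1] = gamma^2 * beta * (rho + p): a nonzero momentum-density
# component appears, even though T was diagonal in the rest frame
```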
{ "domain": "physics.stackexchange", "id": 8263, "tags": "general-relativity, experimental-physics, stress-energy-momentum-tensor" }
CHSH violation and entanglement of quantum states
Question: How is the violation of the usual CHSH inequality by a quantum state related to the entanglement of that quantum state? Say we know that there exist Hermitian and unitary operators $A_{0}$, $A_{1}$, $B_{0}$ and $B_{1}$ such that $$\mathrm{tr} ( \rho ( A_{0}\otimes B_{0} + A_{0} \otimes B_{1} + A_{1}\otimes B_{0} - A_{1} \otimes B_{1} )) = 2+ c > 2,$$ then we know that the state $\rho$ must be entangled. But what else do we know? If we know the form of the operators $A_{j}$ and $B_{j}$, then there is certainly more to be said (see e.g. http://prl.aps.org/abstract/PRL/v87/i23/e230402 ). However, what if I do not want to assume anything about the measurements performed? Can the value of $c$ be used to give a rigorous lower bound on any of the familiar entanglement measures, such as log-negativity or relative entropy of entanglement? Clearly, one could argue in a slightly circular fashion and define an entanglement measure as the maximal possible CHSH violation over all possible measurements. But is there anything else one can say? Answer: In a paper, C.-E. Bardyn et al., PRA 80(6): 062327 (2009), arXiv:0907.2170, they discuss constraints on the state, given how much the CHSH inequality is violated ($S=2+c$), but without putting any assumptions on the operators used. In general people consider schemes where operators (for a Bell-type measurement) are random or one or more parties cannot be trusted. One of the key phrases is device-independent and maybe also loophole-free (as even a slight misalignment of operators may change the results dramatically).
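As a concrete example of a state reaching $c > 0$: the textbook singlet-state correlation $E(a,b) = -\cos(a-b)$, evaluated at the standard optimal angles, attains the Tsirelson bound $2\sqrt{2}$:

```python
import math

# correlation of measurement outcomes on the singlet state at angles a, b
E = lambda a, b: -math.cos(a - b)

a0, a1 = 0.0, math.pi / 2          # Alice's settings
b0, b1 = math.pi / 4, -math.pi / 4  # Bob's settings

S = E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)
# |S| = 2*sqrt(2) ~ 2.83, i.e. c ~ 0.83, the maximum allowed by quantum theory
```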
{ "domain": "physics.stackexchange", "id": 3301, "tags": "research-level, quantum-information, quantum-entanglement, non-locality" }
Conservation of momentum when friction is present
Question: Conservation of momentum applies when the net force is zero. Suppose that there is a system of a cannon and a cannonball. The total momentum of the system is zero before the cannonball is fired. Now the cannonball is fired from the cannon, and in the frictionless case, the horizontal momentum of the whole system would be preserved. Now friction between the ground and the cannon is added. What happens to the momentum of the whole system in this case? I am confused because for an inelastic collision, in the case when a bullet hits a block, it is often said that momentum is preserved during the collision (though not after the collision) even when friction exists between the bullet and the block. Can I say that friction is negligible at the moment the cannonball is fired because $F\Delta t = \Delta p$ would be small, as $\Delta t$ is very small? Answer: The momentum of the whole system is still conserved -- it's just that when you add friction between the cannon and the ground, you have to include the ground (and in fact the whole planet that it's attached to) as part of "the system". When the cannon ball flies off in one direction, the cannon is pushed in the other. But due to friction the cannon pushes in turn on the Earth, which is not fixed to anything, so its velocity changes very slightly. However, the mass of the Earth is very large, so the change in velocity is minute. A typical cannonball would have a mass of the order $10\:\mathrm{kg}$. Assuming a muzzle velocity of $100\:\mathrm{m/s}$, the ball gains about $10^3\:\mathrm{kgm/s}$ of momentum. The Earth's momentum therefore changes by the same amount in the opposite direction, but since the Earth's mass is around $6\times 10^{24}\:\mathrm{kg}$, this corresponds to a velocity change of $10^3/(6\times 10^{24}) \sim 10^{-22}\:\mathrm{m/s}$, which is so small it's effectively unmeasurable. In fact, the above is a slight oversimplification.
The Earth isn't completely solid, so what actually happens is that the ground near the cannon will start moving first, and then set other parts of the Earth moving very shortly afterwards. The result is seismic waves, which echo around the planet until they eventually get dissipated. If the cannon is big enough these can be measured from very far away - potentially from the other side of the world. But once all the waves have died down, the end result is the same: the Earth's velocity has changed by a very small amount. Similar considerations can be made regarding angular momentum about the Earth's centre, i.e. firing the cannon will also slightly change the Earth's rotation. Of course, this tiny velocity change will be exactly cancelled out once the shot hits something and transfers its momentum back to the Earth through the same process.
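The order-of-magnitude estimate above takes only a few lines to verify (a sketch; the cannonball mass and muzzle velocity are the illustrative figures from the answer, not measured values):

```python
# Momentum conservation: Earth's recoil from firing a cannonball.
m_ball = 10.0        # kg, typical cannonball mass (illustrative)
v_ball = 100.0       # m/s, assumed muzzle velocity
m_earth = 6.0e24     # kg, mass of the Earth

p_ball = m_ball * v_ball        # momentum gained by the ball
dv_earth = p_ball / m_earth     # Earth's velocity change (opposite direction)

print(f"ball momentum: {p_ball:.0f} kg*m/s")
print(f"Earth recoil:  {dv_earth:.2e} m/s")
```

The recoil comes out around $10^{-22}\:\mathrm{m/s}$, as stated in the answer.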
{ "domain": "physics.stackexchange", "id": 9753, "tags": "newtonian-mechanics, momentum, projectile, drag" }
Dispersion relation of QM in the presence of a potential
Question: Correct me if I'm wrong, but equations in QM are almost always obtained by looking at the energy dependence of the problem of interest. For example, for Schrödinger's equation one just uses $E = \frac{p^2}{2m}$ which, when translated into the language of operators, gives the known formula (for a free particle). The same happens with Klein-Gordon. One starts with $E^2 = m^2c^4 + c^2 p^2$ and then, again, by translating to operators arrives at the KG equation (for a free particle, again). Here comes the question. When using the de Broglie relations $E = \hbar \omega$, $p = \hbar k$, one can find the dispersion relation of the solutions easily by solving the equations or by substituting these identities into the definition of energy. What happens in the presence of a potential? Does the dispersion relation change? Can we make, using the right potential, solutions of the Schrödinger equation that are not dispersive? Answer: When you have a position-dependent potential $V$, you have no simple wave solutions to the Schrödinger equation anymore and, in general, the de Broglie relations do not hold. However, if you have a potential that varies slowly with position, there exists the so-called Wentzel-Kramers-Brillouin (WKB) approximation for the solution of the Schrödinger equation, which leads to quasi-sinusoidal wave functions with wavelengths and amplitudes that change slowly with position. This is frequently used for obtaining approximate analytical (quasi-classical) solutions, e.g., for tunneling probabilities. More about the WKB approximation can be found here: https://en.wikipedia.org/wiki/WKB_approximation .
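For the free-particle case the dispersion relation $\omega(k) = \hbar k^2/2m$ can be checked numerically; the wave is dispersive because phase and group velocities differ. A minimal sketch (the electron mass is used purely for illustration):

```python
import numpy as np

hbar = 1.0545718e-34   # J*s
m = 9.109e-31          # kg, electron mass (illustrative)

k = np.linspace(1e9, 1e10, 1000)     # wavenumbers, 1/m
omega = hbar * k**2 / (2 * m)        # free-particle dispersion omega(k)

v_phase = omega / k                  # phase velocity  = hbar*k/(2m)
v_group = np.gradient(omega, k)      # group velocity ~= hbar*k/m

# Dispersive: the group velocity is twice the phase velocity at every k.
ratio = v_group[500] / v_phase[500]
print(f"v_group / v_phase ~ {ratio:.2f}")
```

The constant ratio of 2 is a signature of the quadratic (non-relativistic) dispersion; a linear dispersion $\omega = ck$ would give a ratio of 1 and no spreading.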
{ "domain": "physics.stackexchange", "id": 34719, "tags": "quantum-mechanics, waves, schroedinger-equation, dispersion, foundations" }
What is the brightest star (by absolute magnitude) that we can see by naked eye?
Question: Is it Rho Cassiopeiae or Mu Sagittarii? I see in Stellarium 0.13.0 that Mu Sagittarii has absolute magnitude -11.43!! Answer: Mu Sagittarii is a star system, not a single star. If that can be included, then Eta Carinae should be included, and it has an absolute magnitude of -12.0. It's a star system about 7,500 light-years from Earth. It looks like the brightest (absolute magnitude) single star visible to the unaided eye is WR 24 (in the Carina Nebula). Its absolute magnitude is -11.1 and its apparent magnitude is 6.48, so it is just barely visible. source: Wikipedia - List of most luminous known stars edit: Rho Cassiopeiae's absolute magnitude is -9.5.
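The quoted magnitudes are tied together by the distance modulus $m - M = 5\log_{10}(d/10\,\mathrm{pc})$. A quick sketch (note this ignores interstellar extinction, which is substantial toward Carina, so the derived distance comes out larger than the true one):

```python
import math

def distance_pc(m_app, M_abs):
    """Distance in parsecs from the distance modulus, ignoring extinction."""
    return 10 ** ((m_app - M_abs + 5) / 5)

# WR 24, using the apparent and absolute magnitudes quoted above
d = distance_pc(6.48, -11.1)
print(f"WR 24: roughly {d:.0f} pc if extinction is ignored")
```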
{ "domain": "astronomy.stackexchange", "id": 961, "tags": "star, magnitude" }
Find minimum value in a BST: using boolean operators, and using conditionals
Question: I've been studying the BST code in Paul Graham's ANSI Common Lisp. He provides (my comments): (defun bst-min (bst) (and bst ; [3] (or (bst-min (node-l bst)) bst))) ; [4] (defun bst-max (bst) (and bst ; [5] (or (bst-max (node-r bst)) bst))) ; [6] ;; [1] How do you find min and max of a bst? ;; Recall, the core point is that all items in left subtree are lower, ;; and all items in right subtree are higher, & this holds for the whole ;; tree, -otherwise binary search would miss-, and not just within a ;; single parent child relationship. ;; [2] Therefore, to find min, we just need to go left until we can go left ;; no more, & that's the min. If the above property didn't hold, you ;; might go left, then right, then left, and that final one there ;; could be as low as you like; but no, it must be greater than its ;; ancestor of which it is a right child. ;; [3] A null node has no min. The base case. Returns nil if bst is empty. ;; [4] For a non null node, the min is either the min of its left node, or ;; if that's null, then it's this node, instead. ;; [5] A null node has no max. The base case. Returns nil if bst is empty. ;; [6] For a non null node, the max is either the max of its right node, or ;; if that's null, then it's this node, instead. I find this code more or less fine, but I still find it not easy to read or cognitively process. I'm wondering whether that subjective fact is a feature of the code itself (i.e., is the code really hard to read), or whether it's something I need to stick with and then it will become very natural; remembering Hickey's advice about lisp that it is simple but not easy. So, I produced this version by translating from a java implementation. It's easier for me to read (currently). (defun bst-min (bst) (if (null (node-l bst)) bst (bst-min (node-l bst)))) (defun bst-max (bst) (if (null (node-r bst)) bst (bst-max (node-r bst)))) Is the first version somehow a relic of former days? Would anyone program like that now? 
Question I would like to ask helpful peeps here is: which version would you favour and why? Answer: You are asking: which version would you favour and why? Instead of answering your question directly, I will try to show why that formulation is not, at least in my opinion, "a relic of former days", but quite idiomatic for Common Lisp (and other "terse" languages). My attempt is done by first recalling an important concept of the language, and then by showing how to apply this concept to your definition of the function, by successively refining it. In Common Lisp there is the concept of "Generalized Boolean": the symbol nil (which also represents the empty list) represents false, and all other objects represent true. The concept is so deeply rooted in the language, and is so frequently used in defining primitive functions and special forms, that it has become a habit of programmers in this language to rely on it as much as possible, in order to shorten (and in some way simplify) the code. Let's start from your definition: (defun bst-min (bst) (if (null (node-l bst)) bst (bst-min (node-l bst)))) First of all, this definition does not work for the edge case in which the tree is empty. In this case, (node-l bst) causes the error: The value NIL is not of the expected type NODE. Let's try to correct it by adding a check before that case: (defun bst-min (bst) (cond ((null bst) nil) ((null (node-l bst)) bst) (t (bst-min (node-l bst))))) Now we can note that the first two branches of the conditional have the same result: bst (which is nil in the first case), so that we can simplify the code by oring the two conditions: (defun bst-min (bst) (if (or (null bst) (null (node-l bst))) bst (bst-min (node-l bst)))) Since both the conditions of the or test the "emptiness" of an object (i.e.
if it is equal to nil), for the concept of generalized boolean we can consider that (null x) is equivalent to (not x), and an or of two nots can be "simplified" to an and with positive tests and inversion of the branches of the if: (defun bst-min (bst) (if (and bst (node-l bst)) (bst-min (node-l bst)) bst)) Note that this version is conceptually simpler than the previous versions, correct, and more understandable (at least for me!). However, one annoying point remains: (node-l bst) is called twice. To remove the double call, we can note that, assuming bst is not null, the recursive call (bst-min (node-l bst)) now gives the correct result whether (node-l bst) is present or not (in fact we have modified the function to treat the nil case). So we can call the selector only once, by first trying bst-min on it and, if it returns nil, returning bst instead. This is done with the macro or, which evaluates its arguments from the left and returns the first non-nil one: (defun bst-min (bst) (if bst (or (bst-min (node-l bst)) bst) bst)) which is equivalent to: (defun bst-min (bst) (if bst (or (bst-min (node-l bst)) bst) nil)) ; <- note this, which is different from the previous definition The idiomatic way of writing the previous definition in Common Lisp is to use when instead of if, since the former returns nil when the condition is false: (defun bst-min (bst) (when bst (or (bst-min (node-l bst)) bst))) which is finally equivalent (from the point of view of the computation performed) to Graham's formulation. In fact, calling macroexpand-1 in SBCL on the two bodies gives exactly the same result: (IF BST (OR (BST-MIN (NODE-L BST)) BST)) So both definitions can be considered "idiomatic", with that of Graham a little more "lispy", given the homogeneous use of logical operators.
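The same generalized-boolean idiom carries over to Python, whose `and`/`or` also return their operands rather than strict booleans. A minimal sketch, with a hypothetical `Node` class standing in for Graham's node structure:

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def bst_min(node):
    # `and` guards against an empty tree (returns None);
    # `or` falls back to the current node when there is no left child --
    # the same shape as Graham's (and bst (or (bst-min ...) bst)).
    return node and (bst_min(node.left) or node)

tree = Node(8, Node(3, Node(1)), Node(10))
print(bst_min(tree).val)   # 1
print(bst_min(None))       # None
```

As in the Lisp version, the empty-tree case falls out of the logical operators rather than an explicit conditional.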
{ "domain": "codereview.stackexchange", "id": 35242, "tags": "comparative-review, common-lisp" }
Was there a period of global warming before the start of the last ice age?
Question: I am curious to know if there was a period of global warming that took place before the start of the last ice age, and I would like to know how long this period of global warming lasted. Answer: There was an interglacial before the last glaciation: glacial-interglacial cycles last ~100,000 years and consist of stepwise cooling events followed by rapid warmings, as seen in the time series (middle, black line) inferred from hydrogen isotopes in the Dome Fuji ice core from Antarctica. (Source: NOAA)
{ "domain": "earthscience.stackexchange", "id": 2320, "tags": "climate-change, ice-age" }
Should deep residual networks be viewed as an ensemble of networks?
Question: The question is about the architecture of Deep Residual Networks (ResNets), the model that won first place at the "Large Scale Visual Recognition Challenge 2015" (ILSVRC2015) in all five main tracks: ImageNet Classification: "Ultra-deep" (quoting Yann) 152-layer nets ImageNet Detection: 16% better than 2nd ImageNet Localization: 27% better than 2nd COCO Detection: 11% better than 2nd COCO Segmentation: 12% better than 2nd Source: MSRA @ ILSVRC & COCO 2015 competitions (presentation, 2nd slide) This work is described in the following article: Deep Residual Learning for Image Recognition (2015, PDF) The Microsoft Research team (developers of ResNets: Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun), in their article "Identity Mappings in Deep Residual Networks (2016)", state that depth plays a key role: "We obtain these results via a simple but essential concept - going deeper. These results demonstrate the potential of pushing the limits of depth." It is also emphasized in their presentation (deeper - better): - "A deeper model should not have higher training error." - "Deeper ResNets have lower training error, and also lower test error." - "Deeper ResNets have lower error." - "All benefit more from deeper features - cumulative gains!" - "Deeper is still better." Here is the structure of the 34-layer residual network (for reference): But recently I have found one theory that introduces a novel interpretation of residual networks, showing they are exponential ensembles: Residual Networks are Exponential Ensembles of Relatively Shallow Networks (2016) Deep ResNets are described as many shallow networks whose outputs are pooled at various depths. There is a picture in the article. I attach it with its explanation: Residual Networks are conventionally shown as (a), which is a natural representation of Equation (1). When we expand this formulation to Equation (6), we obtain an unraveled view of a 3-block residual network (b).
From this view, it is apparent that residual networks have O(2^n) implicit paths connecting input and output and that adding a block doubles the number of paths. In the conclusion of the article it is stated: It is not depth, but the ensemble that makes residual networks strong. Residual networks push the limits of network multiplicity, not network depth. Our proposed unraveled view and the lesion study show that residual networks are an implicit ensemble of exponentially many networks. If most of the paths that contribute gradient are very short compared to the overall depth of the network, increased depth alone can't be the key characteristic of residual networks. We now believe that multiplicity, the network's expressability in the terms of the number of paths, plays a key role. But it is only a recent theory that can be confirmed or refuted. It happens sometimes that theories are refuted and articles are withdrawn. Should we think of deep ResNets as an ensemble after all? Is it the ensemble or the depth that makes residual networks so strong? Is it possible that even the developers themselves do not quite perceive what their own model represents and what the key concept in it is? Answer: Imagine a genie grants you three wishes. Because you are an ambitious deep learning researcher your first wish is a perfect solution for a 1000-layer NN for Image Net, which promptly appears on your laptop. Now a genie induced solution doesn't give you any intuition how it might be interpreted as an ensemble, but do you really believe that you need 1000 layers of abstraction to distinguish a cat from a dog? As the authors of the "ensemble paper" mention themselves, this is definitely not true for biological systems. Of course you could waste your second wish on a decomposition of the solution into an ensemble of networks, and I'm pretty sure the genie would be able to oblige. The reason being that part of the power of a deep network will always come from the ensemble effect.
So it is not surprising that two very successful tricks to train deep networks, dropout and residual networks, have an immediate interpretation as an implicit ensemble. Therefore "it's not depth, but the ensemble" strikes me as a false dichotomy. You would really only say that if you honestly believed that you need hundreds or thousands of levels of abstraction to classify images with human accuracy. I suggest you use the last wish for something else, maybe a piña colada.
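The path-counting claim from the "unraveled view" is easy to reproduce: an n-block residual network has 2^n implicit paths, and the number of residual branches a path traverses follows a Binomial(n, 1/2) distribution, so the typical path is far shorter than the full depth. A sketch:

```python
from math import comb

n_blocks = 54   # e.g. a 110-layer ResNet has ~54 residual blocks

total_paths = 2 ** n_blocks

# Number of paths that pass through exactly k of the n residual branches
path_counts = [comb(n_blocks, k) for k in range(n_blocks + 1)]

mean_length = sum(k * c for k, c in enumerate(path_counts)) / total_paths
print(f"paths: 2^{n_blocks} = {total_paths}")
print(f"mean effective path length: {mean_length:.1f} blocks")  # n/2
```

The mean path length is only n/2 blocks, which is the quantitative core of the "multiplicity, not depth" argument.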
{ "domain": "ai.stackexchange", "id": 1431, "tags": "neural-networks, machine-learning, deep-learning, deep-neural-networks, residual-networks" }
Special relativity and massless particles
Question: I encountered an assertion that a massless particle moves with the fundamental speed c, and that this is a consequence of special relativity. Some authors (such as L. Okun) like to prove this assertion with the following reasoning: Let's have $$ \mathbf p = m\gamma \mathbf v ,\quad E = mc^{2}\gamma \quad \Rightarrow \quad \mathbf p = \frac{E}{c^{2}}\mathbf v \qquad (1) $$ and $$ E^{2} = p^{2}c^{2} + m^{2}c^{4}. \qquad (2) $$ For the massless case $(2)$ gives $p = \frac{E}{c}$. By using $(1)$ one can see that $|\mathbf v | = c$. But to me, this is non-physical reasoning. Relation $(1)$ is derived from the expressions for the momentum and energy of a massive particle, so its scope is limited to the massive case. We can show that a massless particle moves with the speed of light by introducing the Hamiltonian formalism: for a free particle $$ H = E = \sqrt{p^{2}c^{2} + m^{2}c^{4}}, $$ for a massless particle $$ H = pc, $$ and by using Hamilton's equation it's easy to show that $$ |\dot{\mathbf r}| = \frac{\partial H}{\partial p} = c. $$ But if I don't want to introduce the Hamiltonian formalism, what can I do to prove the assertion about the speed of a massless particle? Maybe the expression $\mathbf p = \frac{E}{c^{2}}\mathbf v$ can be derived without using the expressions for the massive case? But I can't imagine how to do it by using only SRT. Answer: For the reasons given in the comment above, I think the argument from the $m\rightarrow 0$ limit is valid. But if one doesn't like that, then here is an alternative. Suppose that a massless particle had $v<c$ in the frame of some observer A. Then some other observer B could be at rest relative to the particle. In that observer's frame of reference, the particle's three-momentum $\mathbf{p}$ is zero by symmetry, since there is no preferred direction for it to point. Then $E^2=p^2+m^2$ (in units with $c=1$) is zero as well, so the particle's entire energy-momentum four-vector is zero.
But a four-vector that vanishes in one frame also vanishes in every other frame. That means we're talking about a particle that can't undergo scattering, emission, or absorption, and is therefore undetectable by any experiment.
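The $m \to 0$ limit argument can also be seen numerically: fixing the energy and shrinking the mass, the velocity $v = pc^2/E$ tends to $c$. A sketch in units with $c = 1$:

```python
# Units with c = 1. Fix the energy and let the mass shrink toward zero.
E = 1.0
for m in [0.5, 0.1, 0.01, 1e-6]:
    p = (E**2 - m**2) ** 0.5      # from E^2 = p^2 + m^2
    v = p / E                     # from p = E * v  (with c = 1)
    print(f"m = {m:<8} v = {v:.12f}")
```

Each step of the loop uses only the two relations from the question; as m decreases, v climbs monotonically toward 1.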
{ "domain": "physics.stackexchange", "id": 9358, "tags": "special-relativity, mass, hamiltonian-formalism" }
CNNs: understanding feature visualization Channel Objectives (SOLVED)
Question: I'm trying to follow a paper on deep NN feature visualization using beautiful examples from the GoogLeNet/Inception CNN. see: https://distill.pub/2017/feature-visualization/ The authors use backpropagation to optimize an input image to maximize the activation of a particular (Inception) neuron/feature, or an entire channel. For example, Inception Layer 4a, Unit 11 is feature 12 of 192 from the 1x1 convolution path of Inception Layer 4a before filter concatenation (see: https://distill.pub/2017/feature-visualization/appendix/googlenet/4a.html#4a-11). For the Layer 4a 1x1 convolution the shapes are: # Layer 4a input: [14,14,480] output: [14,14,512] # 1x1 convolution kernel: [1,1,480] # total 192 kernels output: [14,14,192] # channels [0..191] of Layer 4a output Layer4a slice: tf.slice( layer4a_output, (0,0,0), (14,14,192) ) # Layer4a Unit 11 layer4a_unit_11 = tf.slice(layer4a_output, (11,0,0), (1,1,1)) # numpy [11,1,1] In a related article, the authors state (see: https://distill.pub/2018/building-blocks/), "We can think of each layer's learned representation as a three-dimensional cube. Each cell in the cube is an activation, or the amount a neuron fires. The x- and y-axes correspond to positions in the image, and the z-axis is the channel (or detector) being run." Furthermore, they offer a diagram which superimposes the cube of Layer 4a over the input image, with the (x,y) axes overlaying the image itself. I understand that the Neuron Objective is the input image that produces the highest activation for Layer 4a, Unit 11 which can be found at index=[11,0,0] of Layer 4a output=[14,14,512]. In this case, (x,y)=[11,0]. Each [1,1,480] kernel generates a feature map of shape=[14,14,1] with a total of 196 activations. kernel => channel or feature map and activation => neuron or feature. Question But what is the intuitive concept of the (Positive) Channel Objective?
In this example, Unit 11 sits in the same channel as 14x14=196 other neurons, but the channel objectives for all these neurons are different. If the optimized image for the Channel Objective maximizes the sum of neuron activations for channel 0, (e.g. slice=[14,14,0] of 192 1x1 convolutions or 512 total layer 4a channels) wouldn't it be the same for all 192 neurons in the same channel? Obviously, by the examples we see this is not true. How does the Channel Objective relate to the Neuron Objective for Unit 11? ANSWER I understand that the Neuron Objective is the input image that produces the highest activation for Layer 4a, Unit 11 which can be found at index=[11,0,0] of Layer 4a output=[14,14,512]. This is where my understanding went off the rails. Layer 4a Unit 11 is actually channel/feature 12 of 192 for the 1x1 convolution. It is NOT the 12th of the 196 neurons of channel 1. My fault for confusing 192 channels with 196 neurons/channel. Instead, as mentioned in the answer, Unit 11 is a single neuron in channel 11, usually located near the center, e.g. Neuron Objective is (x,y,z)=(7,7,11) and Channel Objective is (x,y,z)=(:,:,11) Answer: As you noticed, the activations on mixed4a for a normal sized input are [14,14,512]. Intuitively, these dimensions correspond to [x_position, y_position, channel]. When we talk about "channel" and "neuron" objectives for visualization of channel 11, we mean: Neuron objective: maximize activations[7, 7, 11] maximize channel 11 in one position (generally middle) Channel objective: maximize tf.reduce_sum(activations[:, :, 11]) maximize channel 11 everywhere This diagram from feature visualization also goes over the difference:
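The distinction the answer draws can be made concrete on a dummy activation tensor (a plain NumPy stand-in for the real mixed4a activations; shapes as in the question):

```python
import numpy as np

# Dummy activations standing in for GoogLeNet mixed4a output: [x, y, channel]
rng = np.random.default_rng(0)
acts = rng.standard_normal((14, 14, 512))

channel = 11

# Neuron objective: a single scalar -- channel 11 at one (center) position
neuron_obj = acts[7, 7, channel]

# Channel objective: channel 11 summed over all 14x14 = 196 spatial positions
channel_obj = acts[:, :, channel].sum()

print(neuron_obj, channel_obj)
```

During visualization the input image, not the weights, would be optimized to drive one of these two scalars up; the code above only shows which slice of the activation cube each objective reads.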
{ "domain": "datascience.stackexchange", "id": 6007, "tags": "neural-network, cnn, visualization, inception" }