anchor | positive | source |
|---|---|---|
Low-level differences between jointTrajectoryController and jointGroupPositionController | Question:
Hi everyone,
I'm having a problem when running servoing of a robotic manipulator, and I can't find an adequate explanation for
my difficulties. When running the robotic arm with MoveIt!, e.g. when I use the jointTrajectoryController, the arm (a Schunk LWA4P) executes the motion
perfectly, which implies that the position commands sent to each joint arrive correctly. However,
when I use the jointGroupPositionController (which works in Gazebo), the real arm moves only some joints, not all of them.
I'm wondering how I could discover/figure out the low-level differences between the jointTrajectoryController and the jointGroupPositionController?
Thank you for your time,
Have a nice weekend.
Originally posted by zozan on ROS Answers with karma: 1 on 2021-07-02
Post score: 0
Answer:
This question has very informative answers: https://answers.ros.org/question/356349/difference-between-arm_controller-and-joint_group_position_controller/.
Also, you can try using JointPositionController for each joint to verify that the joints are moving as expected.
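For context, both controllers come from ros_control and are typically declared side by side in the controller YAML; a minimal sketch (controller and joint names below are hypothetical, not taken from the question):

```yaml
# Hypothetical controllers.yaml sketch for a 6-DOF arm
arm_trajectory_controller:
  type: position_controllers/JointTrajectoryController
  joints: [arm_1_joint, arm_2_joint, arm_3_joint, arm_4_joint, arm_5_joint, arm_6_joint]

joint_group_position_controller:
  type: position_controllers/JointGroupPositionController
  joints: [arm_1_joint, arm_2_joint, arm_3_joint, arm_4_joint, arm_5_joint, arm_6_joint]
```

Both forward position commands to the same hardware interface; the trajectory controller interpolates between waypoints, while the group position controller forwards the latest commanded positions directly, which is why a joint-by-joint test helps isolate where commands get lost.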
Originally posted by Pratik Somaiya with karma: 146 on 2021-08-08
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 36630,
"tags": "ros, moveit, ros-melodic, ros-control"
} |
Lazy sub sequence | Question: For a genetic algorithm I'm writing, I need to sub-sequence a list, but I need it to be lazy so I can compose it with other lazy functions. If I introduce strictness into the chain, I risk potentially massive slow-downs since each step in the chain requires a full traversal of the population.
Amazingly, such a function doesn't appear to be built into the core, so I needed to write one:
(defn lsubseq
"Lazily sub-sequences any iterable collection.
The left-index is inclusive, while the right is exclusive."
[coll left-index right-index]
(map second
(filter #(<= left-index (first %) (dec right-index))
(map vector (range) coll))))
I find this to be simultaneously atrocious and beautiful. It works exactly as I expected, so I'm happy with it in that regard.
What I'm not really crazy about is the need to enumerate the collection, only to strip the enumerations before returning. I know it's lazy, so the overhead of this should be minimal, but it still seems like a roundabout way of achieving this.
What I want reviews on:
Is there really no built-in for this? This seems like something that I would expect in a standard library.
Are there any improvements that could be made?
Answer: One way to improve:
(defn lsubseq
[coll left-index right-index]
(take (- right-index left-index) (drop left-index coll)))
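As a quick check of the take/drop version (restated here so the snippet is self-contained, with the same inclusive-left, exclusive-right convention as the original):

```clojure
(defn lsubseq
  "Lazily sub-sequences coll; left-index inclusive, right-index exclusive."
  [coll left-index right-index]
  (take (- right-index left-index) (drop left-index coll)))

(lsubseq (range 10) 2 5)              ;; => (2 3 4)
;; still fully lazy, so it composes with infinite sequences:
(take 3 (lsubseq (range) 5 1000000))  ;; => (5 6 7)
```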
There is also a built-in function - 'subvec' - which should meet your performance objective if your input 'coll' is a vector. | {
"domain": "codereview.stackexchange",
"id": 22851,
"tags": "clojure"
} |
Cytokine responsiveness | Question: Why is it that cytokine responsiveness is lower in progenitor cells than in their ancestors (stem cells)?
What will be the benefit of such reduction in responsiveness?
Answer: My answer assumes you are asking about in-vivo effects of cytokine responsiveness.
It has to do with the regulation of the differentiation process. Cytokine responsiveness depends on other factors like cell-cell signaling. To be sure that the differentiation occurs in the right place, the progenitor cells need a signal feedback from the surrounding context (the actual tissue where the cells will differentiate). If the feedback matches, they differentiate.
Stem cells, on the contrary, are physically distant from that context, so they lack all the feedback needed and do not respond to cytokine stimulation as the progenitors do.
Here's some literature about it:
Developmental changes in progenitor cell responsiveness to cytokines.
Mechanisms regulating lineage diversity during mammalian cerebral cortical neurogenesis and gliogenesis.
Signal transduction pathways involved in the lineage-differentiation of NSCs: can the knowledge gained from blood be used in the brain?
Enhanced responsiveness of committed macrophage precursors to macrophage-type colony-stimulating factor (CSF-1) induced in vitro by interferons alpha + beta 1.
However, note that the whole process is still largely unclear and it may vary for different cell types, so consider my answer as a generalization of the concept. | {
"domain": "biology.stackexchange",
"id": 6264,
"tags": "cell-biology, human-physiology"
} |
Can you determine acceleration from positions and velocities only? | Question: I just began reading the Landau and Lifshitz book on classical mechanics. It states on the first page of Chapter 1 that:
Mathematically, this means that, if all the coordinates $q$ and velocities $\dot{q}$ are given at some instant of time, the accelerations $\ddot{q}$ at that instant are uniquely defined.
For given positions and velocities of a system of particles at a given instant, can't each particle have an arbitrary acceleration? Also, aren't accelerations determined from forces?
Answer: The explanation comes from earlier in that paragraph:
If all the co-ordinates and velocities are simultaneously specified, it is known from experience that the state of the system is completely determined and that its subsequent motion can, in principle, be calculated.
This is just saying the familiar thing that if you know the laws of physics for the system in question, you have to specify the (generalized) positions and velocities at one instant, and then you can predict the motion for all time -- including the acceleration.
So the text isn't entirely clear, but it's definitely not saying that knowing just the generalized coordinates and velocities is enough; it's also implicitly saying that you need an equation of motion. | {
"domain": "physics.stackexchange",
"id": 21791,
"tags": "newtonian-mechanics, classical-mechanics, acceleration"
} |
Effects of manually editing gmapping maps? | Question:
I have a map of my house generated by gmapping and now I am using it with amcl for navigation. Does it make sense to clean up the map in an image editor? For example, if you have a hallway wall that is slightly jagged in the original map, does it make sense to replace the original pixels with a perfectly straight line? I'm just wondering if this would help or hinder amcl for subsequent navigation.
Thanks!
patrick
Originally posted by Pi Robot on ROS Answers with karma: 4046 on 2011-12-05
Post score: 3
Answer:
Cleaning up the map as you're suggesting shouldn't hurt localization performance, but probably won't help either. By default, amcl uses a likelihood field model when interpreting laser data; this model has the effect of "blurring" the obstacles in the map, so moving a pixel a bit one way or the other won't have a huge effect.
If you compare performance with both maps, I'd be interested to know the results.
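If you do try editing the map, a minimal sketch of the kind of cleanup described (using Pillow; the pixel values follow map_server's trinary PGM convention of 0 = occupied, 254 = free, and all coordinates here are made up — a real map would be loaded with Image.open("map.pgm")):

```python
from PIL import Image, ImageDraw

# Synthetic stand-in for a gmapping map
img = Image.new("L", (300, 200), 254)        # 254 = free space
draw = ImageDraw.Draw(img)

# A "jagged" hallway wall, as gmapping might produce it
for x in range(40, 200, 4):
    draw.line([(x, 120), (x + 4, 121 if x % 8 else 119)], fill=0)

# Manual cleanup: erase the jagged strip, then draw a perfectly straight wall
draw.rectangle([40, 118, 200, 122], fill=254)
draw.line([(40, 120), (200, 120)], fill=0, width=1)

img.save("map_edited.pgm")                   # map_server can load the edited PGM
```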
Originally posted by Brian Gerkey with karma: 2916 on 2011-12-05
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by Pi Robot on 2011-12-05:
Thanks Brian! I'm about to run some amcl endurance tests with the TurtleBot. If I get time to run them with different versions of the map, I'll post back the results.
Comment by Kishore Kumar on 2016-08-30:
Can i draw a complete map(.pgm file) on my own and feed that to Robot for navigation? if so how can this be accomplished?
Comment by pallavbakshi on 2017-02-02:
@Kishore Kumar - Did you find a way? Even then it seems unlikely to work, since the map would be constructed without odometry and it will be hard to make the map interactive in RViz. | {
"domain": "robotics.stackexchange",
"id": 7524,
"tags": "navigation, gmapping, amcl"
} |
Exchange-correlation potential for one electron system | Question: For a classical ion, the DFT solution for the ground-state electronic system is given by
$$
\left[-\frac{1}{2}\nabla^2 + V_H(\mathbf{r}) + V_{ei}(\mathbf{r}) + V_{xc}(\mathbf{r})\right]\psi(\mathbf{r}) = \varepsilon \psi(\mathbf{r})
$$
where $V_{ei}(\mathbf{r})$ is the Coulomb potential for electron-ion interaction, $V_{xc}(\mathbf{r})$ is the potential due to the exchange-correlation energy, and $V_H(\mathbf{r})$ is the classical Hartree potential defined as
$$
V_H(\mathbf{r}) = \int\frac{n(\mathbf{r'})}{|\mathbf{r} - \mathbf{r'}|}\ \mathrm{d}^3\mathbf{r'}.
$$
Based on many references that I have read, I get the impression that the density in the Hartree potential is the total density from all orbitals, not the total density of electrons in the other orbitals only. That means that even if there is only one electron, the classical Hartree potential is non-zero.
By comparison with the exact Schrödinger equation for one electron, which is
$$
\left[-\frac{1}{2}\nabla^2 + V_{ei}(\mathbf{r})\right]\psi(\mathbf{r}) = \varepsilon \psi(\mathbf{r}),
$$
does that mean for systems with only one electron, $V_{xc}(\mathbf{r}) = -V_H(\mathbf{r})$?
Answer: As the electronic Hartree potential is accounting for the average electrostatic repulsion between the electrons, it is indeed completely spurious for a one electron system, as there should be no electron-electron interaction whatsoever.
And yes, using an exchange-correlation potential that exactly cancels the Hartree potential is the perfect choice for a one electron system.
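Spelled out at the level of energies: for any one-electron density $n(\mathbf{r}) = |\psi(\mathbf{r})|^2$, the exact functional must be free of self-interaction, i.e.

```latex
E_{xc}[n] = -E_H[n]
          = -\frac{1}{2}\iint
            \frac{n(\mathbf{r})\,n(\mathbf{r'})}{|\mathbf{r}-\mathbf{r'}|}
            \,\mathrm{d}^3\mathbf{r}\,\mathrm{d}^3\mathbf{r'},
```

so that taking the functional derivative gives $V_{xc}(\mathbf{r}) = -V_H(\mathbf{r})$, and the Kohn-Sham equation reduces to the exact one-electron Schrödinger equation above.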
Since $V_{xc}=-V_H$ would be a poor choice for multi-electron systems, meta-exchange-correlation functionals take advantage of the fact that one can approximately deduce from the Kohn-Sham kinetic energy density whether the charge density is locally composed mostly of a single orbital or of several. In the single-orbital limit, these meta-exchange-correlation functionals are typically constructed such that the exchange energy cancels the Hartree energy of the hydrogen atom (see e.g. page 8 of Sun, Ruzsinszky, & Perdew, "Strongly Constrained and Appropriately Normed Semilocal Density Functional", arXiv:1504.03028v3). | {
"domain": "physics.stackexchange",
"id": 65621,
"tags": "quantum-mechanics, schroedinger-equation, density-functional-theory"
} |
Classic two-player memory game | Question: This is a classic memory game with a points counter for the two players.
The app works fine, but since this is my first project in Swing, I would appreciate the critical opinion of some expert, as I'm sure there is plenty of space for code improvement/optimization.
What do you think about the code? What should I have done in a different/better way? What are your recommendations in terms of optimization?
public class Pixeso extends JFrame {
private JPanel contentPane;
ImageIcon[] iconarray;
ArrayList<Integer> list1 = new ArrayList<Integer>();
JButton[] buttonarray = new JButton[20];
Random rand1 = new Random();
JButton button1;
JButton button2;
int counter = 1;
Timer timer1;
int points1;
int points2;
boolean player1 = true;
private final JPanel panel1 = new JPanel();
private final JPanel panel2 = new JPanel();
private final JLabel label1 = new JLabel("0");
private final JLabel label2 = new JLabel("0");
private final JLabel lblNewLabel = new JLabel("Player 1");
private final JLabel lblNewLabel_1 = new JLabel("Player 2");
/**
* Launch the application.
*/
public static void main(String[] args) {
EventQueue.invokeLater(new Runnable() {
public void run() {
try {
Pixeso frame = new Pixeso();
frame.setVisible(true);
} catch (Exception e) {
e.printStackTrace();
}
}
});
}
/**
* Create the frame.
*
* @throws IOException
*/
public Pixeso() throws IOException {
setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
setBounds(100, 100, 1204, 908);
contentPane = new JPanel();
contentPane.setBackground(Color.GREEN);
contentPane.setBorder(new EmptyBorder(0, 0, 0, 0));
setContentPane(contentPane);
contentPane.setLayout(new FlowLayout(FlowLayout.CENTER, 5, 5));
panel1.setBackground(Color.GREEN);
contentPane.add(panel1);
panel1.setLayout(new GridLayout(5, 4, 5, 5));
panel2.setBorder(new LineBorder(new Color(0, 0, 0)));
contentPane.add(panel2);
lblNewLabel.setHorizontalAlignment(SwingConstants.CENTER);
lblNewLabel.setVerticalAlignment(SwingConstants.TOP);
lblNewLabel.setFont(new Font("Tahoma", Font.PLAIN, 16));
lblNewLabel.setBackground(Color.WHITE);
label1.setHorizontalAlignment(SwingConstants.CENTER);
label1.setVerticalAlignment(SwingConstants.TOP);
label1.setForeground(Color.RED);
label1.setFont(new Font("Tahoma", Font.PLAIN, 16));
label1.setBackground(Color.WHITE);
label2.setHorizontalAlignment(SwingConstants.CENTER);
label2.setForeground(Color.RED);
label2.setVerticalAlignment(SwingConstants.TOP);
label2.setFont(new Font("Tahoma", Font.PLAIN, 16));
label2.setBackground(Color.WHITE);
panel2.setLayout(new GridLayout(2, 2, 20, 10));
panel2.add(lblNewLabel);
panel2.add(label1);
lblNewLabel_1.setHorizontalAlignment(SwingConstants.CENTER);
lblNewLabel_1.setVerticalAlignment(SwingConstants.TOP);
lblNewLabel_1.setFont(new Font("Tahoma", Font.PLAIN, 16));
lblNewLabel_1.setBackground(Color.WHITE);
panel2.add(lblNewLabel_1);
panel2.add(label2);
// put the imagines in a URL[]
URL[] immagini = new URL [11];
immagini[0] = new URL("http://i.imgur.com/421DcmK.jpg");
immagini[1] = new URL("http://i.imgur.com/mpx0yXN.jpg");
immagini[2] = new URL("http://i.imgur.com/9i8UkrI.jpg");
immagini[3] = new URL("http://i.imgur.com/KN86BKv.jpg");
immagini[4] = new URL("http://i.imgur.com/KN86BKv.jpg");
immagini[5] = new URL("http://i.imgur.com/mS3dRj7.jpg");
immagini[10] = new URL("http://i.imgur.com/7vdVgHa.jpg");
immagini[7] = new URL("http://i.imgur.com/njAuT7Q.jpg");
immagini[8] = new URL("http://i.imgur.com/5hWZQG8.jpg");
immagini[9] = new URL("http://i.imgur.com/bwZAiyL.jpg");
immagini[6] = new URL("http://i.imgur.com/rHbAnOD.jpg");
iconarray = new ImageIcon[11];
// convert imagines in icons
for (int i = 0; i <= 10; i++) {
iconarray[i] = new ImageIcon(immagini[i]);
// inizializzo list1
list1.add(0);
}
// add 20 JButtons to panel 1 and set initial icon
for (int i = 0; i < 20; i++) {
panel1.add(new JButton(iconarray[10]));
// insert JButtons in buttonarray
buttonarray[i] = (JButton) panel1.getComponent(i);
// add ImageButtonListener method to each JButton
buttonarray[i].addActionListener(new ImageButtonListener());
}
// add a number between 0 and 9 for each JButton
int y = 0;
while (y < 20) {
int x = rand1.nextInt(10);
list1.set(x, list1.get(x).intValue() + 1);
if (list1.get(x) <= 2) {
buttonarray[y].setName(Integer.toString(x));
y++;
}
}
timer1 = new Timer(2000, new TimerListener());
}
// this timer show clicked cards for two seconds
private class TimerListener implements ActionListener {
public void actionPerformed(ActionEvent e) {
button1.setIcon(iconarray[10]);
button2.setIcon(iconarray[10]);
timer1.stop();
// active = true;
}
}
// method to change JButton image
class ImageButtonListener implements ActionListener {
public void actionPerformed(ActionEvent e) {
// waiting for timer to pop, user clicks not accepted
if (timer1.isRunning())
return;
for (int i = 0; i < 20; i++)
if (e.getSource() == buttonarray[i]) {
int x = Integer.parseInt(buttonarray[i].getName());
buttonarray[i].setIcon(iconarray[x]);
// button1= first clicked button
if (counter == 1) {
button1 = buttonarray[i];
counter++;
}
// button 2= second clicked button, check I didn't click same card twice
if (counter == 2 && buttonarray[i] != button1) {
button2 = buttonarray[i];
compareicons();
}
}
}
// check if icons match
private void compareicons() {
if (button1.getIcon() == button2.getIcon()) {
button1.setEnabled(false);
button2.setEnabled(false);
//add up points to player who found two matching icons
if (player1 == true) {
points1++;
label1.setText(Integer.toString(points1));
} else {
points2++;
label2.setText(Integer.toString(points2));
}
}
//if cards are different, switch to other player
else {
if (player1 == true) {
player1 = false;
} else {
player1 = true;
}
timer1.start();
}
//reset counter
counter = 1;
}
}
}
Answer: I have some suggestions on the code:
Make your fields private.
Do not specify the implementation class, as in ArrayList<Integer> list1 = new ArrayList<Integer>();. Use List<Integer> list1 = new ArrayList<Integer>(); instead.
Either initialize the fields in the declaration or in the constructor; don't mix both approaches. I personally prefer consistency in code.
Give more meaningful names to variables, especially fields. What are button1 and button2? points1 and points2?
In general, use Lists instead of arrays, even if the size is fixed; raw arrays are pretty legacy. As a substitute I would use List<JButton> buttons = Arrays.asList(new JButton[20]);, and speaking of which
Avoid hard-coding the number of things as literal. Especially in for-loop. Trouble ensues when the length changes. Use a constant like private static final int NUM_OF_BUTTONS = 20; and initialize the buttons as List<JButton> buttons = Arrays.asList(new JButton[NUM_OF_BUTTONS]);. Then for looping all the buttons, say for(int i = 0; i < buttons.size(); i++), or even better for(JButton button : buttons). In general, I would do that whenever I see I use the same literal twice.
Avoid superfluous boolean equality checking or assignment. E.g. if (player1 == true) can be replaced by if(player1), if (player1 == true) { player1 = false; } else { player1 = true; } can be replaced by player1 = !player1;
That's it off the top of my head. | {
"domain": "codereview.stackexchange",
"id": 15665,
"tags": "java, performance, beginner, swing"
} |
How many shortest distances change when adding an edge to a graph? | Question: Let $G=(V,E)$ be some complete, weighted, undirected graph. We construct a second graph $G'=(V, E')$ by adding edges one by one from $E$ to $E'$. We add $\Theta(|V|)$ edges to $G'$ in total.
Every time we add one edge $(u,v)$ to $E'$, we consider the shortest distances between all pairs in $(V, E')$ and $(V, E' \cup \{ (u,v) \})$. We count how many of these shortest distances have changed as a consequence of adding $(u,v)$. Let $C_i$ be the number of shortest distances that change when we add the $i$th edge, and let $n$ be the number of edges we add in total.
How big is $C = \frac{\sum_i C_i}{n}$?
As $C_i = O(|V|^2)=O(n^2)$, $C=O(n^2)$ as well. Can this bound be improved? Note that I define $C$ to be the average over all edges that were added, so a single round in which a lot of distances change is not that interesting, though it proves that $C = \Omega(n)$.
I have an algorithm for computing a geometric t-spanner greedily that works in $O(C n \log n)$ time, so if $C$ is $o(n^2)$, my algorithm is faster than the original greedy algorithm, and if $C$ is really small, potentially faster than the best known algorithm (though I doubt that).
Some problem-specific properties that might help with a good bound: the edge $(u,v)$ that is added always has larger weight than any edge already in the graph (not necessarily strictly larger). Furthermore, its weight is shorter than the shortest path between $u$ and $v$.
You may assume that the vertices correspond to points in the 2D plane and that the distances between vertices are the Euclidean distances between these points. That is, every vertex $v$ corresponds to some point $(x,y)$ in the plane, and for an edge $(u,v)=((x_1,y_1),(x_2,y_2))$ its weight is equal to $\sqrt{(x_2-x_1)^2 + (y_2-y_1)^2}$.
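Before hunting for a bound, it may help to measure $C$ empirically. A small sketch of the setting (random points in the plane, edges added in order of increasing Euclidean weight, one-edge incremental all-pairs update); note that adding edges in weight order automatically satisfies the stated properties, since the straight-line weight of $(u,v)$ is never longer than any existing path between $u$ and $v$:

```python
import itertools
import math
import random

def count_changes(n, edges):
    """For each added edge (u, v, w), count the ordered pairs whose shortest
    distance changes; edges are assumed sorted by non-decreasing weight."""
    INF = float('inf')
    d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    changes = []
    for u, v, w in edges:
        changed = 0
        new_d = [row[:] for row in d]
        for i in range(n):
            for j in range(n):
                # every new shortest path must use the edge (u, v)
                via = min(d[i][u] + w + d[v][j], d[i][v] + w + d[u][j])
                if via < new_d[i][j]:
                    new_d[i][j] = via
                    changed += 1
        d = new_d
        changes.append(changed)
    return changes

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(8)]
dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
all_edges = sorted(
    ((u, v, dist(pts[u], pts[v])) for u, v in itertools.combinations(range(8), 2)),
    key=lambda e: e[2])
cs = count_changes(8, all_edges[:8])   # add Theta(|V|) cheapest edges
C = sum(cs) / len(cs)
```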
Answer: Consider the following linear chain with $n+1$ nodes, $n$ edges and viciously chosen weights:
[figure omitted: a weighted linear chain through nodes $u_1,\dots,u_k$, $c_1$, $c_2$, $b_k,\dots,b_1$, with a dashed extra edge between $u_k$ and $b_k$]
Clearly, the edges could have been added in order of their weights and there are $n \in \mathcal{O}(|V|)$ of them. Adding the dashed edge (which is legal) creates shorter paths for all pairs $(u_i,b_j)$ with $i,j = 1,\dots,k$. As $k \approx \frac{n}{4}$ and assuming that $n \in \Theta(|V|)$, both first and last row contain $\Theta(|V|)$ many nodes each and the addition causes $\Theta(|V|^2)$ many shortest path changes.
We can now move "outwards", i.e. add the next edge with weight $n+2$ between $u_{k-1}$ and $b_{k-1}$ and so on; if we continue this to $(u_1,b_1)$, we cause $\Theta(|V|^3)$ shortest path changes in total.
If this does not convince you, note that you can actually start this "process" with $(c_1,c_2)$ and work outwards from there; this way you add $\approx n$ edges which cause in total $\approx \sum_{i=1}^{n}i^2 \in \Theta(n^3) = \Theta(|V|^3)$ many shortest path changes---this is just impossible to draw to fit on one screen. | {
"domain": "cs.stackexchange",
"id": 111,
"tags": "algorithms, graphs, shortest-path"
} |
Will ROS be able to support Linux Mint in the future? | Question:
I know that Linux Mint 12 is based on Ubuntu 11.10, so it can install ros-fuerte (I have tried to install it).
Will ROS always support Linux Mint in later updates?
Thank you~
Originally posted by sam on ROS Answers with karma: 2570 on 2012-04-29
Post score: 1
Answer:
Mint is "experimental". I doubt it will ever be officially "supported".
How well "experimental" platforms work depends on whether they have an active user community contributing fixes for rosdep and other issues. People have reported various problems with Mint. I can't tell how well they have been resolved.
Originally posted by joq with karma: 25443 on 2012-04-29
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 9181,
"tags": "ros, linuxmint"
} |
Decision Tree Optimize Deviation From Objective | Question: I have the following problem: I have three classes/modes, let's call them car, bike, and walking. For any given test data instance with some environmental variables such as distance, road quality etc, I would like to predict the cheapest mode. In all instances in the training and test data set, each of the modes is associated with a cost. I would like to use a decision tree due to it being easy-to-understand. Is it possible to change the "performance metric" during training such that the final tree minimizes the cost deviation due to wrongful classification rather than maximizing the accuracy?
I know that the goal of minimizing the cost deviation can be achieved by employing three (linear) regression models, and then choosing the cheapest mode, but I would like to keep the "easy-to-understand" property of decision trees. Also, some of the explaining variables are non-linear (for example, walking is preferred, if it is neither too hot nor too cold).
Is this possible in general?
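One concrete way to keep a single decision tree while biasing training toward cost, rather than implementing a custom split criterion: weight each training instance by the cost gap between its cheapest and second-cheapest mode, so misclassifications that are nearly free barely matter. A sketch with scikit-learn (all data here is synthetic):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 3))              # environmental variables (synthetic)
costs = rng.uniform(1, 10, size=(n, 3))  # per-instance cost of car / bike / walking
y = costs.argmin(axis=1)                 # label = cheapest mode

# Cost gap between best and second-best mode: instances where the modes are
# nearly tied contribute little to the split criterion.
sorted_costs = np.sort(costs, axis=1)
weights = sorted_costs[:, 1] - sorted_costs[:, 0]

tree = DecisionTreeClassifier(max_depth=4)
tree.fit(X, y, sample_weight=weights)
pred = tree.predict(X)
```

This is not the same as directly minimizing the expected cost deviation, but it keeps the tree interpretable; the custom-loss route mentioned in the answer below applies the same idea inside gradient boosting instead.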
Answer: It is possible; it is not an easy task, but it is possible.
Custom Loss Function
Gradient boosting is widely used in industry and has won many Kaggle competitions. The internet already has many good explanations of gradient boosting (we’ve even shared some selected links in the references), but we’ve noticed a lack of information about custom loss functions: the why, when, and how. This post is our attempt to summarize the importance of custom loss functions in many real-world problems — and how to implement them with the LightGBM gradient boosting package. | {
"domain": "datascience.stackexchange",
"id": 5194,
"tags": "decision-trees, cost-function"
} |
The Zero-Crossing rate threshold for a voiced/unvoiced decision | Question: I've implemented a function that calculates the zero-crossing rate for a given signal. I've used this same function to calculate the pitch. To differentiate voiced signals from unvoiced with reference to ZCR: A high ZCR means that the signal is unvoiced and a low ZCR means that it is voiced. My question is whether there is a threshold above which we can consider that a signal is unvoiced.
Answer: A plain ZCR criterion is not enough for robust and accurate voiced/unvoiced separation. That being said, your threshold should be adaptive; there is no fixed threshold that works well for all speech waveforms. It depends on the approach you follow, but a statistical threshold should do the job most of the time. You can search for relevant papers and see if their approach suits your purpose.
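A per-frame ZCR with a simple statistical (adaptive) threshold, as a sketch of the idea (the frame length and the k factor here are arbitrary illustrative choices, not established values):

```python
import numpy as np

def zcr(frame):
    # fraction of adjacent samples whose sign differs
    s = np.signbit(frame)
    return np.mean(s[1:] != s[:-1])

def voiced_mask(signal, frame_len=400, k=0.5):
    """True where a frame's ZCR falls below an adaptive, per-utterance
    threshold (mean + k * std of the frame ZCRs)."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, frame_len)]
    rates = np.array([zcr(f) for f in frames])
    thresh = rates.mean() + k * rates.std()
    return rates < thresh   # low ZCR => likely voiced
```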
Just a note, ZCR is not the best choice for pitch estimation - it just gives a rough estimation. | {
"domain": "dsp.stackexchange",
"id": 8437,
"tags": "pitch"
} |
Do alkali metals not form diatomic molecules? | Question: I know hydrogen forms a diatomic molecule $\ce{H2}$, where the electronic configuration of hydrogen is $\ce{1s}$.
But why doesn't lithium also form a diatomic molecule? Its electronic structure is $\ce{1s^2 2s}$, so can't two lithium atoms come together, share their outer electron and form $\ce{Li2}$? Same with $\ce{Na}$, $\ce{K}$ etc.
Am I missing something obvious?
Answer: Diatomic molecules of alkali metals are detected in the gas phase. However, it so happens that the bond in them is very weak, and at the temperatures at which alkali metal vapors form, only a few percent of the metal in the vapor exists as diatomic molecules. Metallic bonding allows the atoms to achieve an overall more energetically favored state.
The outer orbitals of the alkali metal atoms are very diffuse, so their bonds are weak. Only lithium, the smallest of the alkali metals, has practically meaningful covalent chemistry to speak of.
"domain": "chemistry.stackexchange",
"id": 9732,
"tags": "inorganic-chemistry, bond, molecules"
} |
Is it necessary to calculate lane normalisation factor when doing western blot data analysis? | Question: I have recently done my first western blot and I am doing data analysis to quantify my blot. I have labelled my membrane against inactive GSK3 and active GSK3 which are phosphoproteins so I am using total GSK3 as an internal loading control. I have read some guides and handbooks about western blot data analysis and I have seen that some of them calculate a lane normalisation factor to account for variations in signal intensities of the loading control. For all loading control bands in each lane, they divide by the loading control band with highest intensity to get the lane normalisation factor. Then for each band of the target protein of interest they divide by the lane normalisation factor to get the normalised intensity.
I was wondering in general with western blots is it good practice to calculate the lane normalisation factor when doing the data analysis? Any insights are appreciated.
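The procedure described above reduces to a couple of divisions; a numeric sketch with made-up band intensities:

```python
# Hypothetical band intensities from densitometry, one value per lane
loading = [1.00, 0.80, 0.90]   # loading-control bands (here: total GSK3)
target = [0.50, 0.60, 0.45]    # target bands (e.g. pGSK3)

# Lane normalisation factor: each loading-control band divided by the
# highest-intensity loading-control band
max_lc = max(loading)
norm_factors = [lc / max_lc for lc in loading]

# Normalised target intensity: target band / lane normalisation factor
normalised = [t / f for t, f in zip(target, norm_factors)]
print([round(x, 3) for x in normalised])  # [0.5, 0.75, 0.5]
```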
Answer: Generally speaking, the proper way to quantify a western blot is to normalize to a loading control such as Actin or GAPDH. In this case it would be (pGSK3/Actin)/(GSK3/Actin), as total GSK3 is not a loading control. A loading control is a protein that is accepted as unvarying in concentration across multiple samples if the same protein amount is loaded, and to my knowledge GSK3 would show more biological variability than accepted loading controls. This would be considered best practice for comparing protein concentrations across samples.
That being said, if all you care about is the ratio of active to total, pGSK3/Total GSK3 is probably sufficient to assess if your manipulation changed active/inactive:total ratios. For example, in autophagy assays you can measure autophagic flux by measuring LC3-I to LC3-II, and I've never seen them normalized to a loading control since its a ratiometric readout contained within a single sample. | {
"domain": "biology.stackexchange",
"id": 11273,
"tags": "cell-biology, western-blot"
} |
relationship between energy and sampling rate | Question: Excuse my silly question, but I really want to know: does changing the sampling rate affect the energy (bandwidth) of a signal, and therefore improve the cross-correlation output?
Answer: You must observe the Nyquist frequency when sampling a signal.
In order to sample a signal without introducing artifacts, you must first filter out everything in the signal that has a frequency higher than half of your sampling rate. If you sample at 1000 Hz, then you must first filter out everything above 500 Hz.
Yes, changing the sampling rate can change the bandwidth of digital representation of the signal. A lower sampling rate means a smaller bandwidth.
This will only improve cross-correlation if the noise is all high-frequency content and the signal you are trying to detect is in the lower frequencies. You could get the same effect, however, by using a low-pass filter on the sampled data. That would also remove the high frequencies and make the correlation clearer. | {
"domain": "dsp.stackexchange",
"id": 8077,
"tags": "audio, sampling"
} |
Empty Line delimiter, single line output | Question: I'm used to only processing, at most, one line of a file at a time. This is my first time changing the delimiter, and the objective here is to take a file containing lines such as:
Bubbles,
Blossom and
Buttercup
Nostalgic
Examples for the
Win.
Quick Brown Fox
Jumping Over Lazy
Dog.
and produce single line output:
Bubbles, Blossom and Buttercup.
Nostalgic Examples for the Win.
Quick Brown Fox Jumping Over Lazy Dog.
What I've done works, but it feels like a work-around to some superior alternative I'm sure exists. What do you think?
import java.io.File;
import java.io.FileNotFoundException;
import java.util.regex.Pattern;
import java.util.Scanner;
public class TestDelim {
public static void main(String[] args) throws FileNotFoundException {
Scanner input = new Scanner(new File(args[0]))
.useDelimiter(Pattern.compile("^\\s*$", Pattern.MULTILINE));
Scanner output;
StringBuilder sb = new StringBuilder();
while (input.hasNext()) {
output = new Scanner(input.next());
while (output.hasNextLine()) {
sb.append(' ').append(output.nextLine());
}
System.out.println(sb.toString().trim());
sb.setLength(0);
}
}
}
Answer: Multiline text processing takes some getting used to.
First up, if you are going to do multiline processing, then you should read all the data into a String and forget about Scanners, etc. One-line-at-a-time processing is convenient for many reasons, but mostly because it reduces the amount of data in memory at any one time. Consider the following:
Path source = Paths.get("poem.txt");
String poem = new String(Files.readAllBytes(source));
Now you have the complete poem in a single variable poem.
Now, a paragraph is identified by an empty line (or more) between texts. In Regex terms, this is two or more newlines and other whitespace:
private static final Pattern PARAGRAPH = Pattern.compile("\\s*^\\s*$\\s*", Pattern.MULTILINE);
Note that \n newline is part of the \\s pattern, so the pattern will match whitespace padded breaks containing at least two newlines.
Also, a pattern for replacing all whitespace with a single space, is:
private static final Pattern MULTISPACE = Pattern.compile("\\s+");
Now, what we need is a compaction routine to convert the input string to a formatted output:
public static String compactLines7(final String source) {
StringBuilder sb = new StringBuilder(source.length());
for (String para : PARAGRAPH.split(source)) {
sb.append(MULTISPACE.matcher(para).replaceAll(" ")).append("\n");
}
return sb.toString();
}
Note that the above will leave a trailing newline on the output.
I quite like the Java 8 way, though (which will not have a newline):
public static String compactLines(final String source) {
return Stream.of(PARAGRAPH.split(source))
.map(para -> MULTISPACE.matcher(para).replaceAll(" "))
.collect(Collectors.joining("\n"));
}
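A quick sanity check of compactLines on text shaped like the question's sample, wrapped in a hypothetical Demo class:

```java
import java.util.regex.Pattern;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class Demo {
    private static final Pattern PARAGRAPH =
            Pattern.compile("\\s*^\\s*$\\s*", Pattern.MULTILINE);
    private static final Pattern MULTISPACE = Pattern.compile("\\s+");

    static String compactLines(String source) {
        return Stream.of(PARAGRAPH.split(source))
                .map(para -> MULTISPACE.matcher(para).replaceAll(" "))
                .collect(Collectors.joining("\n"));
    }

    public static void main(String[] args) {
        String in = "Quick Brown Fox\nJumping Over Lazy\nDog.\n\nNostalgic\nExamples for the\nWin.";
        System.out.println(compactLines(in));
        // Quick Brown Fox Jumping Over Lazy Dog.
        // Nostalgic Examples for the Win.
    }
}
```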
Putting this together in code like your example, it is:
private static final Pattern PARAGRAPH = Pattern.compile("\\s*^\\s*$\\s*", Pattern.MULTILINE);
private static final Pattern MULTISPACE = Pattern.compile("\\s+");
public static String compactLines(final String source) {
return Stream.of(PARAGRAPH.split(source))
.map(para -> MULTISPACE.matcher(para).replaceAll(" "))
.collect(Collectors.joining("\n"));
}
public static final void main(String[] args) throws IOException {
String source = new String(Files.readAllBytes(Paths.get(args[0])));
System.out.println(compactLines(source));
} | {
"domain": "codereview.stackexchange",
"id": 12235,
"tags": "java, strings, regex, io"
} |
Teleoperating a Turtlebot without a wireless joystick | Question:
Does anyone know a way to teleoperate a TurtleBot through a joystick plugged into the workstation? I don't have a wireless joystick...
Originally posted by lucascoelho on ROS Answers with karma: 497 on 2011-09-16
Post score: 1
Answer:
If you take a look here and set up the roscore to run on the workstation and point the TurtleBot's ROS_MASTER_URI to the workstation, you can run the joy node and your teleop node on the workstation and the TurtleBot will receive those commands. Be careful, however, because I've seen significant lag while teleoping over the network, but it will work.
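For concreteness, the setup described above boils down to two environment variables on the TurtleBot (a sketch — "workstation" is a placeholder for your workstation's resolvable hostname or IP, and 11311 is the default roscore port):

```shell
# Run on the TurtleBot, in every shell that starts ROS nodes.
# "workstation" is a placeholder -- substitute your workstation's hostname or IP.
export ROS_MASTER_URI=http://workstation:11311
# Advertise a name the workstation can resolve back to the TurtleBot.
export ROS_HOSTNAME=$(hostname)
```

With those set, `rostopic echo /cmd_vel` on the TurtleBot should show the commands published by the joy/teleop nodes running on the workstation.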
Originally posted by DimitriProsser with karma: 11163 on 2011-09-16
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Jie Sky on 2014-11-25:
Do you know how to use a normal joystick to control a turtlebot based kobuki?I have some trouble with it . I refer to this website:http://wiki.ros.org/turtlebot_teleop/Tutorials/hydro/Joystick%20Teleop | {
"domain": "robotics.stackexchange",
"id": 6710,
"tags": "ros, turtlebot, joystick, teleoperation"
} |
How to measure propulsive efficiency of a human powered boat? | Question: I am asking for help to measure the effective propulsive force and its effect on boat glide on a human powered vessel using a paddle.
Senario
A person uses a paddle to propel a boat or canoe. Take the weight of the vessel and the body weight of the person as constant, and assume the water is flat and undisturbed.
Question
How can I effectively measure the propulsive power of the strokes in relation to how they help move the boat? How can I assess the propulsive power in relation to the momentum and drag once the vessel is in motion (to figure out what number of strokes becomes optimal for gliding the boat)? The goal is to measure how effective the strokes are, not necessarily how fast from a to b, which can be timed. The lessons from the study would help the adoption or modification of more effective techniques.
Is there a simple device I can make with your creative help to get this relatively objective measure?
As a thought, I am imagining an hourglass set sideways. This is theoretical to help with ideas as I know it won't work realistically. Suppose as the boat moves forward, the forward thrust is going to displace sand inside to move back to the rear part of the hourglass. The amount of sand per number of strokes will help with measure. This is theoretical but I am looking for something realistic that can be used. Maybe a drag scale, a spring loaded drag box with a dial?
I would also like to understand the physics involved in more lay terms, please. Thank you!
Answer: You are asking about the propulsive efficiency of the stroke, i.e. how much of the energy expended by the rower (input) results in useful work (output). The power meter seems to measure the input. The difficulty here seems to be how to define useful work.
Several articles available on the internet discuss how to measure rowing efficiency. For example, Propulsive Efficiency of Oars identifies the useful output as overcoming boat drag. It says that the maximum efficiency you can expect is about 80%.
To measure efficiency, you first need to calibrate the boat.
The power dissipated by boat drag is $P=Fv$ where $F$ is the force required to pull the boat along at speed $v$, the force being applied in the same direction as $v$. Experimentally you would need to tow the boat (laden with passive rower/s and oars - or the equivalent weight) with a constant force and measure the average speed through the water. Alternatively tow at constant speed relative to the water and note the average force.
You will need some kind of force meter. If towing from the water you can pull in the direction the boat moves, but you will need to avoid disrupting flow past the boat. If towing from land (eg on a canal towpath) you avoid disrupting the water ahead of the boat but the applied force is no longer in the direction the boat is moving, so you need to measure angles between the towlines and the direction of the boat, and apply geometry to calculate the applied force in the direction of motion. Ideally you would do the calibration in a water tunnel, varying water flow speed and measuring constant force in the direction of motion.
A graph of $P=Fv$ against $v$ gives you the calibration of boat dissipation power at various speeds.
You then measure the power exerted by the rower/s - eg using the power meter - and the speed achieved. The boat dissipation power (at the rowing speed) as a % of rowing power gives you the propulsive efficiency of the rower/s. | {
"domain": "physics.stackexchange",
"id": 37050,
"tags": "fluid-dynamics, momentum, drag, inertia, propulsion"
} |
How can we feel the effects of a Black Hole if all the mass is gone? | Question: This question may help me learn more about the subtleties involved between the notions of gravity, in the Newtonian sense and those of curved spacetime, in the General Relativity sense.
I will take the risk that it may also show my lack of understanding of basic GR concepts.
Unfortunately, any possible answers may be a matter of interpretation and/or opinion, as although GR has been confirmed in many ways, we are, as far as I know, lacking direct observational evidence of its more exotic aspects, such as Black Holes.
No offence intended, but personal opinions as to what is inside a black hole are not intended as part of the question, I just want to stick to the question on a physical basis only.
My question is based on a comment by Kip Thorne, in essence, saying that "inside" a Black Hole is empty and that thinking there is crushed matter of any kind inside is an incorrect intuitive picture.
If the material that is the source of the black hole no longer exists, (gone to another universe, down a wormhole to another part of our universe, or whatever, take your pick of possible outcomes), how can we still be affected by it, either gravity wise or curved spacetime wise?
In other words, if the mass is gone, it's gone, so how can we still feel the effects of it, unless time runs so slowly at the proposed event horizon that, for coordinate observers, its effects are always felt?
EDIT What Thorne actually says is "the matter is gone, it's completely destroyed, it no longer exists", Quantum Physics, PBS NOVA on YouTube so
Ernie's answer has validity, imo and
It's a NOVA production, not a peer reviewed article in a generally accepted publication, it may be taken as a broad popularisation. END EDIT
Apologies if there is a duplicate somewhere on this site, I could not see one in the suggestions as I wrote this question.
Answer: This is really Kip Thorne's interpretation of the black-hole collapse. Many other scientists and theorists would disagree and in many quantum gravities the interior of the black hole is not empty and the central singularity is somehow regularized.
In any case, what you really feel pulling on you is not the mass very far away, what you really feel is the space-time configuration immediately around you. One patch of space-time does not really know whether there is some mass far away, it really curves only due to the matter-energy contained in it and due to the conditions on it's boundary. In this way, through boundary conditions of very small patches of space-time, the information about a matter source is "passed on" to very far away regions.
But this also means that if you take a single point and "wrap" it around with some very extreme boundary condition, the space-time can warp and curve around it as if an infinite density of mass was there. Specifically, far away from such a "boundary point", you would not be able to tell whether the space-time is curved due to some star or due to this weird point. In this precise sense, theoretical physicists talk about black hole solutions as of "vacuum solutions" because there is no non-zero matter density anywhere in the space-time, only a singular point.
But whether the matter is indeed "squeezed out" of the space-time leaving only a relict in the form of a space-time singularity or whether it is somehow "in the singularity" (and whether such a singularity even exists) is a matter of interpretation and controversy. | {
"domain": "physics.stackexchange",
"id": 22803,
"tags": "general-relativity, gravity, black-holes"
} |
Unable to Locate ROS Packages? | Question:
Ok so I am completely new to ROS, and I am having some trouble with the tutorial. I managed to install from source, but now whenever I try to download the indigo tutorial package (sudo apt-get install ros-indigo-ros-tutorials) I get the following error:
E: Unable to locate package ros-indigo-ros-tutorials
How do I fix this? I can't quite figure it out and all the other questions I looked at didn't give me helpful information.
Thanks!
Originally posted by icepaka89 on ROS Answers with karma: 33 on 2016-05-22
Post score: 1
Answer:
When you want to install ROS packages from the repositories, you have to add them to your sources.list first, otherwise apt-get does not know where to find the packages.
See also: http://wiki.ros.org/indigo/Installation/Ubuntu
Like BennyRe said: if not necessary do not install from source.
Originally posted by JRikken with karma: 31 on 2016-05-23
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by icepaka89 on 2016-05-23:
I had to install from source because " sudo apt-get install ros-indigo-desktop-full " also did not work. This was the only way to even get any form of download going
Comment by JRikken on 2016-05-30:
So, are your repositories configured correctly?
If you run:
grep -h ^deb /etc/apt/sources.list /etc/apt/sources.list.d/*
The output should at least contain:
deb http://packages.ros.org/ros/ubuntu trusty main
Otherwise apt-get can never find ros-indigo-desktop-full or ros-indigo-ros-tutorials | {
"domain": "robotics.stackexchange",
"id": 24713,
"tags": "ros, tutorials"
} |
Detect optimized | Question: I've been trying to optimize this piece of code:
void detect_optimized(int width, int height, int threshold)
{
int x, y, z;
int tmp;
for (y = 1; y < width-1; y++)
for (x = 1; x < height-1; x++)
for (z = 0; z < 3; z++)
{
tmp = mask_product(mask,a,x,y,z);
if (tmp>255)
tmp = 255;
if (tmp<threshold)
tmp = 0;
c[x][y][z] = 255-tmp;
}
return;
}
So far I've tried "Blocking" and a few other things, but I can't seem to get it to run any faster.
Blocking resulted in:
for(yy = 1; yy<height-1; yy+=4){
for(xx = 1; xx<width -1; xx+=4){
for (y = yy; y < 4+yy; y++){
for (x = xx; x < 4+xx; x++){
for (z = 0; z < 3; z++)
{
tmp = mask_product(mask,a,x,y,z);
if (tmp>255)
tmp = 255;
if (tmp<threshold)
tmp = 0;
c[x][y][z] = 255-tmp;
}}}}}
Which ran at the same speed as the original program.
Any suggestions would be great.
mask_function cannot be changed, but here is its code:
int mask_product(int m[3][3], byte bitmap[MAX_ROW][MAX_COL][NUM_COLORS], int x, int y, int z)
{
int tmp[9];
int i, sum;
// ADDED THIS LINE (sum = 0) TO FIX THE BUG
sum = 0;
tmp[0] = m[0][0]*bitmap[x-1][y-1][z];
tmp[1] = m[1][0]*bitmap[x][y-1][z];
tmp[2] = m[2][0]*bitmap[x+1][y-1][z];
tmp[3] = m[0][1]*bitmap[x-1][y][z];
tmp[4] = m[1][1]*bitmap[x][y][z];
tmp[5] = m[2][1]*bitmap[x+1][y][z];
tmp[6] = m[0][2]*bitmap[x-1][y+1][z];
tmp[7] = m[1][2]*bitmap[x][y+1][z];
tmp[8] = m[2][2]*bitmap[x+1][y+1][z];
for (i=0; i<9; i++)
sum = sum + tmp[i];
return sum;
}
Answer: Do not expect much:
void detect_optimized(int width, int height, int threshold)
{
int x, y, z;
int tmp;
int widthM1= width-1;
int heightM1=height-1;
for (y = 1; y < widthM1; y++){
for (x = 1; x < heightM1; x++){
for (z = 0; z < 3; z++){
tmp = mask_product(mask,a,x,y,z);
if (tmp>255)
c[x][y][z] = 0;
else if (tmp<threshold)
c[x][y][z] = 255;
else
c[x][y][z] = 255 ^ tmp; // in this case xor is the same as -
}
}
}
return;
}
You can also unroll the z-loop by copying the inner body two more times.
If you can manage to change the mask_function:
int mask_product(int m[3][3], byte bitmap[MAX_ROW][MAX_COL][NUM_COLORS], int x, int y, int z)
{
int xp1=x+1;
int xm1=x-1;
int yp1=y+1;
int ym1=y-1;
// Note: no semicolons until the final term, so the whole sum is one return expression.
return m[0][0]*bitmap[xm1][ym1][z]
+ m[1][0]*bitmap[x][ym1][z]
+ m[2][0]*bitmap[xp1][ym1][z]
+ m[0][1]*bitmap[xm1][y][z]
+ m[1][1]*bitmap[x][y][z]
+ m[2][1]*bitmap[xp1][y][z]
+ m[0][2]*bitmap[xm1][yp1][z]
+ m[1][2]*bitmap[x][yp1][z]
+ m[2][2]*bitmap[xp1][yp1][z];
}
Also check whether you can make your compiler inline the mask_product function.
"domain": "codereview.stackexchange",
"id": 7549,
"tags": "optimization, c, image"
} |
2 player TicTacToe | Question: Can I optimize this Tic-tac-toe game or make it smaller?
Known optimizations: string turn = "X"; can be changed to char turn = 'x';
Form1.cs
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
namespace tictactoeAPP
{
public partial class Form1 : Form
{
string turn = "X";
byte turnno = 1;
void Again()
{
DialogResult result = MessageBox.Show("Do you wish to play again?", "Again", MessageBoxButtons.YesNo);
if (result == DialogResult.Yes)
{
Button[] buttons = new Button[] { button1, button2, button3, button4, button5, button6, button7, button8, button9 };
foreach (Button item in buttons) { item.Text = ""; }
turn = "X";
turnno = 1;
label1.Text = "Waiting for X";
}
else { this.Close(); }
}
void Winner()
{
/// 123
/// 456
/// 789
///
/// 159
/// 357
///
/// 147
/// 258
/// 369
///
string[][] conditions = new string[][]
{
new string[] { button1.Text, button2.Text, button3.Text },
new string[] { button4.Text, button5.Text, button6.Text },
new string[] { button7.Text, button8.Text, button9.Text },
new string[] { button1.Text, button5.Text, button9.Text },
new string[] { button3.Text, button5.Text, button7.Text },
new string[] { button1.Text, button4.Text, button7.Text },
new string[] { button2.Text, button5.Text, button8.Text },
new string[] { button3.Text, button6.Text, button9.Text }
};
foreach (string[] item in conditions)
{
if (item[0] == item[1] && item[1] == item[2] && item[0] != "") { MessageBox.Show(item[0] + " wins!", "Winner"); Again(); return; }
}
}
void check_win()
{
Winner();
if (turnno > 9) { MessageBox.Show("It is a tie.", "Tie"); Again(); return; }
}
void Assign(Button widget)
{
if (widget.Text == "")
{
turnno++;
widget.Text = turn == "X" ? "X" : "O";
turn = turn == "X" ? "O" : "X";
label1.Text = "Waiting for " + turn;
check_win();
}
}
public Form1()
{
InitializeComponent();
}
private void button1_Click(object sender, EventArgs e)
{
Assign(button1);
}
private void button2_Click(object sender, EventArgs e)
{
Assign(button2);
}
private void button3_Click(object sender, EventArgs e)
{
Assign(button3);
}
private void button4_Click(object sender, EventArgs e)
{
Assign(button4);
}
private void button5_Click(object sender, EventArgs e)
{
Assign(button5);
}
private void button6_Click(object sender, EventArgs e)
{
Assign(button6);
}
private void button7_Click(object sender, EventArgs e)
{
Assign(button7);
}
private void button8_Click(object sender, EventArgs e)
{
Assign(button8);
}
private void button9_Click(object sender, EventArgs e)
{
Assign(button9);
}
private void Form1_Load(object sender, EventArgs e)
{
}
}
}
Form1.Designer.cs (unchanged)
namespace tictactoeAPP
{
partial class Form1
{
/// <summary>
/// Required designer variable.
/// </summary>
private System.ComponentModel.IContainer components = null;
/// <summary>
/// Clean up any resources being used.
/// </summary>
/// <param name="disposing">true if managed resources should be disposed; otherwise, false.</param>
protected override void Dispose(bool disposing)
{
if (disposing && (components != null))
{
components.Dispose();
}
base.Dispose(disposing);
}
#region Windows Form Designer generated code
/// <summary>
/// Required method for Designer support - do not modify
/// the contents of this method with the code editor.
/// </summary>
private void InitializeComponent()
{
this.button1 = new System.Windows.Forms.Button();
this.button2 = new System.Windows.Forms.Button();
this.button3 = new System.Windows.Forms.Button();
this.button4 = new System.Windows.Forms.Button();
this.button5 = new System.Windows.Forms.Button();
this.button6 = new System.Windows.Forms.Button();
this.button7 = new System.Windows.Forms.Button();
this.button8 = new System.Windows.Forms.Button();
this.button9 = new System.Windows.Forms.Button();
this.label1 = new System.Windows.Forms.Label();
this.SuspendLayout();
//
// button1
//
this.button1.Location = new System.Drawing.Point(22, 12);
this.button1.Name = "button1";
this.button1.Size = new System.Drawing.Size(91, 92);
this.button1.TabIndex = 0;
this.button1.UseVisualStyleBackColor = true;
this.button1.Click += new System.EventHandler(this.button1_Click);
//
// button2
//
this.button2.Location = new System.Drawing.Point(119, 12);
this.button2.Name = "button2";
this.button2.Size = new System.Drawing.Size(91, 92);
this.button2.TabIndex = 1;
this.button2.UseVisualStyleBackColor = true;
this.button2.Click += new System.EventHandler(this.button2_Click);
//
// button3
//
this.button3.Location = new System.Drawing.Point(216, 12);
this.button3.Name = "button3";
this.button3.Size = new System.Drawing.Size(91, 92);
this.button3.TabIndex = 2;
this.button3.UseVisualStyleBackColor = true;
this.button3.Click += new System.EventHandler(this.button3_Click);
//
// button4
//
this.button4.Location = new System.Drawing.Point(22, 110);
this.button4.Name = "button4";
this.button4.Size = new System.Drawing.Size(91, 92);
this.button4.TabIndex = 3;
this.button4.UseVisualStyleBackColor = true;
this.button4.Click += new System.EventHandler(this.button4_Click);
//
// button5
//
this.button5.Location = new System.Drawing.Point(119, 110);
this.button5.Name = "button5";
this.button5.Size = new System.Drawing.Size(91, 92);
this.button5.TabIndex = 4;
this.button5.UseVisualStyleBackColor = true;
this.button5.Click += new System.EventHandler(this.button5_Click);
//
// button6
//
this.button6.Location = new System.Drawing.Point(216, 110);
this.button6.Name = "button6";
this.button6.Size = new System.Drawing.Size(91, 92);
this.button6.TabIndex = 5;
this.button6.UseVisualStyleBackColor = true;
this.button6.Click += new System.EventHandler(this.button6_Click);
//
// button7
//
this.button7.Location = new System.Drawing.Point(22, 208);
this.button7.Name = "button7";
this.button7.Size = new System.Drawing.Size(91, 92);
this.button7.TabIndex = 6;
this.button7.UseVisualStyleBackColor = true;
this.button7.Click += new System.EventHandler(this.button7_Click);
//
// button8
//
this.button8.Location = new System.Drawing.Point(119, 208);
this.button8.Name = "button8";
this.button8.Size = new System.Drawing.Size(91, 92);
this.button8.TabIndex = 7;
this.button8.UseVisualStyleBackColor = true;
this.button8.Click += new System.EventHandler(this.button8_Click);
//
// button9
//
this.button9.Location = new System.Drawing.Point(216, 208);
this.button9.Name = "button9";
this.button9.Size = new System.Drawing.Size(91, 92);
this.button9.TabIndex = 8;
this.button9.UseVisualStyleBackColor = true;
this.button9.Click += new System.EventHandler(this.button9_Click);
//
// label1
//
this.label1.AutoSize = true;
this.label1.Font = new System.Drawing.Font("Microsoft Sans Serif", 21.75F, System.Drawing.FontStyle.Regular, System.Drawing.GraphicsUnit.Point, ((byte)(0)));
this.label1.Location = new System.Drawing.Point(349, 36);
this.label1.Name = "label1";
this.label1.Size = new System.Drawing.Size(181, 33);
this.label1.TabIndex = 9;
this.label1.Text = "Waiting for X";
//
// Form1
//
this.AutoScaleDimensions = new System.Drawing.SizeF(6F, 13F);
this.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Font;
this.ClientSize = new System.Drawing.Size(691, 314);
this.Controls.Add(this.label1);
this.Controls.Add(this.button9);
this.Controls.Add(this.button8);
this.Controls.Add(this.button7);
this.Controls.Add(this.button6);
this.Controls.Add(this.button5);
this.Controls.Add(this.button4);
this.Controls.Add(this.button3);
this.Controls.Add(this.button2);
this.Controls.Add(this.button1);
this.Name = "Form1";
this.Text = "Form1";
this.Load += new System.EventHandler(this.Form1_Load);
this.ResumeLayout(false);
this.PerformLayout();
}
#endregion
private System.Windows.Forms.Button button1;
private System.Windows.Forms.Button button2;
private System.Windows.Forms.Button button3;
private System.Windows.Forms.Button button4;
private System.Windows.Forms.Button button5;
private System.Windows.Forms.Button button6;
private System.Windows.Forms.Button button7;
private System.Windows.Forms.Button button8;
private System.Windows.Forms.Button button9;
private System.Windows.Forms.Label label1;
}
}
Program.cs (unchanged)
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using System.Windows.Forms;
namespace tictactoeAPP
{
static class Program
{
/// <summary>
/// The main entry point for the application.
/// </summary>
[STAThread]
static void Main()
{
Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault(false);
Application.Run(new Form1());
}
}
}
Answer: There is not much to say about the code. It works as expected as a simple tic-tac-toe game, and you show a good understanding of how to code in C#.
One can discuss if a line like this:
if (item[0] == item[1] && item[1] == item[2] && item[0] != "") { MessageBox.Show(item[0] + " wins!", "Winner"); Again(); return; }
is good or bad or best practice. I find it hard to read because I'm used to having statements line by line, as:
if (item[0] == item[1] && item[1] == item[2] && item[0] != "")
{
MessageBox.Show(item[0] + " wins!", "Winner");
Again();
return;
}
According to optimization, I don't think that should be your major concern (there is really nothing to optimize). Instead I would focus on how to separate game logic from the UI. As for now, your code is relying solely on the state of the UI-controls, and that is commonly regarded as bad design, because you then are bound to a specific UI (WinForms). Instead you should build a Game model, that can keep track of the game state. As a template for that you could do something like:
public class TTTGame
{
Field[,] _fields;
TTTGame()
{
// TODO: Initialize fields
}
public void SetField(Player player, int row, int col)
{
_fields[row, col].Player = player;
}
public State GetState()
{
// TODO: check _fields to see if there is a winning "row" or if the game is over with a tie.
return new State(/* TODO with properties */);
}
}
public class Field
{
public Player Player { get; set; }
}
public enum Player
{
None,
X,
O
}
public class State
{
// TODO: implement whatever properties are needed to describe the current state (winning player, game over, tie, etc.)
}
The above may not be the best/state of the art solution, but is just meant as inspiration.
The next challenge would be to implement Player classes - a human player and an "AI" player, so you would be able to play against the "computer".
"domain": "codereview.stackexchange",
"id": 30981,
"tags": "c#, beginner, tic-tac-toe, winforms, gui"
} |
External force and Conservation of energy | Question: Suppose a body (say $5\text{ }kg$) is kept at rest on a horizontal table of some friction coefficient (say $0.2$). If a force of $20\text{ }N$ is applied to the body for just a moment (i.e. infinitesimal interval of time), it moves in the direction of the force. We know that it moved because it gained energy from the force. And when the whole energy (which was gained earlier) is released, it comes to rest. It means that the body didn't use its own energy and the external force also did some work. So the energy of the body is conserved.
But from the law of C.O.M.E. (conservation of mechanical energy), energy is not conserved when an external force does some work.
I am confused; kindly clarify my doubt.
Answer: Since the object begins and ends at rest the change in kinetic energy is zero. Per the work-energy theorem that means the net work done on the object is zero. All of the positive work done by the $20\text{ N}$ force equals the negative work done by friction. Friction takes all the mechanical energy supplied by the $20\text{ N}$ force and dissipates it as heat at the surfaces. Mechanical energy is not conserved but total energy is.
The kinetic friction force is constant and acts on the body over the entire distance it moves. If the total distance travelled starting from rest and ending at rest is $d$ then the total negative work done by friction is
$$W_{\text{frict}}=-\mu mgd$$
where $\mu$ is the coefficient of kinetic friction, $m$ is the mass ($5\text{ }kg$) and $g$ is the acceleration due to gravity. Per the work-energy theorem, this equals in magnitude the total positive work done by the $20\text{ N}$ force.
What I suspect is giving you difficulty is the $20\text{ N}$ force is removed short of the total distance travelled so that it only does work over the distance applied, which is true. But when the force is removed the object still has the kinetic energy given it due to it being accelerated from rest by the $20\text{ N}$ force.
While the force was applied positive net work was done equal to the change in kinetic energy of the object, $mv^2/2$. From that point on until the object stops, the only force acting on the object is the friction force. It brings the object to a stop. The net work is now negative converting all the kinetic energy primarily to heat.
Hope this helps. | {
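A quick numeric check of this bookkeeping, using the question's numbers and one assumption not in the original: the "$20\text{ N}$ for a moment" is modelled as leaving the body with an initial speed of 1 m/s.

```python
m, mu, g = 5.0, 0.2, 9.8   # mass (kg), friction coefficient, gravity (m/s^2)
v0 = 1.0                   # assumed speed just after the brief push (m/s)

ke = 0.5 * m * v0**2            # kinetic energy supplied by the push: 2.5 J
d = v0**2 / (2 * mu * g)        # stopping distance, from v0^2 = 2*mu*g*d
w_friction = -mu * m * g * d    # (negative) work done by friction over that distance

# Friction's negative work exactly cancels the kinetic energy supplied:
print(ke + w_friction)  # -> 0.0 (up to floating-point rounding)
```

Whatever initial speed you assume, the cancellation is exact: the energy put in by the push is entirely dissipated as heat by friction.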
"domain": "physics.stackexchange",
"id": 67095,
"tags": "newtonian-mechanics, forces, energy, energy-conservation"
} |
"Perfect waveform" for cross-correlations? | Question: I am well versed in cross-correlations, as my masters thesis heavily relied on them for music classification and beat detection. So this question focuses more on the signals that I am running an xcorr on as opposed to the actual cross correlation process.
Here is a simplified version of my setup:
Suppose I have the original signal; I time-shift it (add a delay) and add noise/distortion. Then I run a cross-correlation on the two signals to find this time delay and get some value of R.
Is there some formulation to come up with a signal that will reduce ambiguity errors and optimize my value of R?
I know it will be some non-repeating, pseudo-random signal, but is there a formal formulation for this? Or should I be looking at other techniques?
Perhaps something like this would be perfect: http://tedxtalks.ted.com/video/TEDxMIAMI-Scott-Rickard-The-Wor. Do you see any issues with this?
I'm not talking about cross correlation processing techniques to improve the performance. I'm strictly talking about the shape of the waveform
Answer: Finally, I found the answer I was looking for! A whole host of sequences have been analyzed and discussed in terms of their auto- and cross-correlation properties in the following thesis.
De Bruijn Sequences in Spread Spectrum Systems: Problems and Performance in Vehicular Applications by Stefano Andrenacci
(http://www.openarchive.univpm.it/jspui/bitstream/123456789/472/1/Tesi.Andrenacci.pdf)
Here are the possible candidates. I will have to research them further, but I will surely go with one of these:
M-Sequences
Gold Sequences
Chaos-Based Sequences
Kasami Sequences
OVSF Sequences
De Bruijn Sequences
The author advocates De Bruijn Sequences, but I believe my case is most suited to an M-Sequence.
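To illustrate why m-sequences are attractive here (a sketch of the standard property, not taken from the thesis): a maximal-length LFSR sequence mapped to ±1 has a two-valued periodic autocorrelation — N at zero lag and −1 at every other lag — which is exactly the sharp, unambiguous peak you want from a cross-correlator.

```python
def m_sequence(n_periods=1):
    """Length-7 m-sequence from the primitive polynomial x^3 + x + 1,
    via the GF(2) recurrence s[n] = s[n-2] XOR s[n-3]."""
    s = [1, 0, 0]                      # any nonzero seed works
    for n in range(3, 7):
        s.append(s[n - 2] ^ s[n - 3])
    return s * n_periods

seq = [1 - 2 * b for b in m_sequence()]   # map bits {0,1} -> {+1,-1}
N = len(seq)

# Periodic (circular) autocorrelation at every lag.
acf = [sum(seq[n] * seq[(n + k) % N] for n in range(N)) for k in range(N)]
print(acf)  # -> [7, -1, -1, -1, -1, -1, -1]
```

The same two-valued property holds for longer registers (length 2^m − 1), so a matched filter against an m-sequence gives a clean delay estimate even at low SNR.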
"domain": "dsp.stackexchange",
"id": 2224,
"tags": "signal-analysis, continuous-signals, signal-detection, cross-correlation"
} |
Does a spinning part affect the moment of inertia of a composite object? | Question: I have been going back through some Kleppner problems and have a doubt concerning problem 6.18. It states:
Find the period of a pendulum consisting of a disk of mass $M$ and radius $R$ fixed to the end of a rod of length $l$ and mass $m$. How does the period change if the disk is mounted to the rod by a frictionless bearing so that it is perfectly free to spin?
The first part (with the disk not free to spin) was reasonably straightforward. My only doubt was that I assumed the moments of inertia of the disk and rod could be added; this seems reasonable, but I don't quite know how to justify it rigorously (side-note: if anybody could give a hint on this, it would be highly appreciated). My result was:
$$T=2\pi\sqrt{\frac{MR^2/2+Ml^2+ml^2/3}{gl(M+m/2)}}$$
For the second part, the issue I had was mainly conceptual... So when the disk is free to spin, it's no longer part of the rigid body; so it won't contribute to the moment of inertia, right? The problem comes here: earlier, to calculate the torque on the rigid body about the pivot, I said that:
$$\tau=R_{CM}\times W$$
Where $R_{CM}$ is the center of mass of the rigid body. This formula comes from a summation over the torques on every small mass in the rigid body, so I figured that the torque on the rigid body, once the disk was no longer a part of it, would depend only on the center of mass of the rod (the disk would no longer affect the 'effective' center of mass that the torque acts on). This gives
$$T=2\pi\sqrt{\frac{2l}{3g}}$$
I checked my answer with this website (pages 5-7) afterwards, and the first part agreed but the second part was in disagreement; the problem was that in that site, $R_{CM}$ was still 'affected' by the spinning disk. Why is this so? (I explained above why I think that $R_{CM}$ should not contribute.)
Answer: You are very close.
Just to review what is going on, the period is given by
\begin{equation}
T = \frac{2\pi}{\omega} = 2\pi \sqrt{\frac{I}{k_{eff}}}
\end{equation}
where $I$ is the moment of inertia of the system and the torque is proportional to the angle by which the pendulum has been displaced with a coefficient that I'm calling $k_{eff}$ in analogy to Hooke's law:
\begin{equation}
\tau = -k_{eff} \theta.
\end{equation}
You have correctly identified that, when the disk is mounted on a frictionless bearing, it does not spin, so the $\frac{1}{2}MR^2$ spin term drops out; the disk's center of mass still swings with the rod, however, so it contributes as a point mass and the moment of inertia is $I=\frac{1}{3}ml^2+Ml^2$.
Let's compute the torque on the rod about the pivot. There are two contributions: one is the gravity acting directly on the rod, which (as you have correctly identified) gives a contribution $k_{eff,rod}=\frac{1}{2}mgl$.
However the disk also applies a torque to the rod. The disk is now essentially a weight hanging off the end of the rod, and so it contributes to the torque by adding to the effective Hooke's law constant: $k_{eff,disk}=Mgl$.
Thus
\begin{equation}
k_{eff} = k_{eff,rod} + k_{eff,disk} = \frac{1}{2}mgl + Mgl
\end{equation}
This resolves your issue. | {
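Numerically, treating the freely spinning disk as a non-spinning point mass at the end of the rod (the sample masses and lengths below are arbitrary), the bearing-mounted pendulum is always faster, since only the $\frac{1}{2}MR^2$ spin term is removed:

```python
from math import pi, sqrt

M, m, R, l, g = 2.0, 1.0, 0.1, 1.0, 9.8  # disk kg, rod kg, m, m, m/s^2

k_eff = (M + m / 2) * g * l              # restoring-torque coefficient from above

I_fixed = M * R**2 / 2 + M * l**2 + m * l**2 / 3   # disk rigidly attached
I_free  = M * l**2 + m * l**2 / 3                  # frictionless bearing:
                                                   # point mass, no spin term

T_fixed = 2 * pi * sqrt(I_fixed / k_eff)
T_free  = 2 * pi * sqrt(I_free / k_eff)

print(T_fixed > T_free)  # -> True: removing the spin term shortens the period
```

For a thin rod with a small disk ($R \ll l$) the difference is tiny, which matches the intuition that the spin term $\frac{1}{2}MR^2$ is small compared to $Ml^2$.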
"domain": "physics.stackexchange",
"id": 14956,
"tags": "homework-and-exercises, newtonian-mechanics, rotational-dynamics"
} |
Why does classical light always result in super-Poissonian statistics? | Question: It is a well-known result that classical light (which I take here to mean mixtures of coherent states) cannot produce sub-Poissonian photon-counting statistics, with a single beam of coherent light corresponding to a Poissonian photon-counting statistics (as discussed for example here), and other kinds of non-quantum light corresponding to super-Poissonian statistics.
However, I have never seen this fact proven formally. Usually, texts show how some common kinds of classical light, such as thermal light, result in super-Poissonian statistics, and how quantum states can produce sub-Poissonian ones, but they do not tackle the general case.
More specifically, consider a state which is a mixture of coherent states. This corresponds to a photon counting probability $P(n)$ of the form
$$P(n)=\sum_\lambda p_\lambda P_\lambda(n),$$
with $\sum_\lambda p_\lambda =1$, and $P_\lambda(n)$ being the Poisson distribution with expected value $\lambda$:
$$P_\lambda(n)\equiv e^{-\lambda}\frac{\lambda^n}{n!}.$$
A super-Poissonian distribution is characterised by the property that the variance is greater than the expected value, that is, $\sigma^2\ge\mu$. More precisely, in the considered case this means
$$\sum_n(n-\mu)^2P(n)\ge \mu,\quad \mu\equiv\sum_n nP(n).$$
Can this property be shown in full generality, without making reference to specific types of light?
Answer: Let us compute the first moments of $P(n)$:
$$\mu\equiv\sum_n nP(n)=\sum_n n\sum_\lambda p_\lambda P_\lambda(n)=\sum_\lambda p_\lambda \lambda,$$
where I used the property of the Poisson distribution $\sum_n n P_\lambda(n)=\lambda$.
Similarly, we have
$$\sum_n n^2 P(n)=\sum_\lambda p_\lambda \lambda(\lambda+1),$$
where I used $\sum_n n^2 P_\lambda(n)=\lambda(\lambda+1)$.
The variance $\sigma^2$ of the distribution thus reads
$$\sigma^2\equiv\sum_n (n-\mu)^2 P(n)=\sum_\lambda p_\lambda \lambda(\lambda+1)-\mu^2,$$
and finally the difference between variance and expected value, $\sigma^2-\mu$, is
$$\sigma^2-\mu=\sum_\lambda p_\lambda\lambda(\lambda+1)-\mu(\mu+1).\tag1$$
Defining $f(\lambda)\equiv\lambda(\lambda+1)$, (1) can be written as
$$\sigma^2-\mu=\sum_\lambda p_\lambda f(\lambda)-f\Big(\underbrace{\sum_\lambda p_\lambda \lambda}_{\mu}\Big).$$
The conclusion $\sigma^2-\mu\ge0$ now follows from $f$ being convex, together with Jensen's inequality.
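To make the derivation concrete, here is a quick numerical check (a plain-Python sketch; the mixture weights and means below are arbitrary choices of mine):

```python
# Exact moments of a mixture of Poissonians, P(n) = sum_l p_l * Pois_l(n),
# using E[n] = l and E[n^2] = l*(l+1) for a Poisson with mean l.
weights = [0.2, 0.5, 0.3]   # p_lambda: arbitrary, summing to 1
lams    = [1.0, 4.0, 9.0]   # the corresponding Poisson means

mu     = sum(p * l for p, l in zip(weights, lams))               # E[n]
second = sum(p * l * (l + 1.0) for p, l in zip(weights, lams))   # E[n^2]
var    = second - mu ** 2

# Eq. (1): sigma^2 - mu = sum_l p_l f(l) - f(mu), with convex f(l) = l(l+1)
f = lambda l: l * (l + 1.0)
gap = sum(p * f(l) for p, l in zip(weights, lams)) - f(mu)

assert abs((var - mu) - gap) < 1e-12   # identity (1) holds
assert var >= mu                       # super-Poissonian, by Jensen
```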
This proves that an arbitrary mixture (convex combination) of Poissonians gives a super-Poissonian distribution satisfying $\sigma^2\ge\mu$. | {
"domain": "physics.stackexchange",
"id": 100201,
"tags": "quantum-mechanics, photons, quantum-information, quantum-optics"
} |
Why there are no $uuu$ and $ddd$ baryons with spin 1/2? | Question: What is preventing $Δ^{++}$ and $Δ^-$ spin 3/2 baryons from going to a lower-energy state with spin 1/2 similar to that of protons and neutrons? I don't think the Pauli exclusion principle can prevent it because the quarks have different colors. The whole purpose of the quark color is to allow more than one quark to be in the same state. What's so special about protons and neutrons? What allows them to have lower energy compared to $Δ^+$ and $Δ^0$?
Answer: You are correct to point out that there's no symmetry that forbids a state with isospin 3/2 and spin 1/2; in the nomenclature, this is also called a $\Delta$ resonance. The Particle Data Group lists two such particles, with mass 1620 MeV and 1910 MeV. They exist, but they are heavier than the spin-3/2 $\Delta$ at 1232 MeV.
The reason why is isospin, although the exclusion principle is involved.
From the standpoint of the strong nuclear interaction, you can sometimes treat the proton and the neutron as two states of the same particle, the "nucleon." In quantum mechanics, a system with two internally available states usually tends to follow the same mathematical rules as a spinor with angular momentum ℏ/2; this is the case for the nucleon. So the strong interaction operator that distinguishes between them is a "rotation" in "isotope space," or isospin.
Isospin is a good quantum number for the ground states and excited states of many light nuclei. In heavy nuclei, where the energy due to electrostatic repulsion starts to compete with the nuclear binding energy, the symmetry between proton and neutron is broken and you can't assign a definite isospin to a particular state.
In isotope space the pion is a three-state triplet, obeying the same algebra as a spin-one system in angular momentum space. You can think of the $\pi^+$ and $\pi^-$ as the isotopic raising and lowering operators on the proton and the neutron.
Similarly, a $\Delta$ is a strongly-interacting particle with total isospin 3/2. The $\Delta$ has four projections onto the charge axis, corresponding to the four charge states: $\Delta^{++}, \Delta^+, \Delta^0, \Delta^-$. Historically I believe the existence of the $\Delta^{++}$ with spin 3/2 was a lynchpin in the argument for the existence of quark color. The $\Delta^{++}(1232)$ has spin 3/2, so its spin wavefunction is symmetric under exchange; its isospin wavefunction, for the same reason, is symmetric under exchange; therefore there must be another degree of freedom with three states so that the quark wavefunction can be antisymmetric.
So why is a spin-1/2 $\Delta$ heavier than the lightest spin-3/2 $\Delta$? You can compare with the case of the deuteron. Nucleons don't have the color degree of freedom, so exchange symmetry — the exclusion principle — requires that a two-nucleon system with spin 0 must have isospin 1, and vice-versa. Isospin symmetry tells us that a proton-neutron pair with spin 0 should have roughly the same energy as a diproton or a dineutron. Since neither of those systems is bound, we expect to find the deuteron with isospin 0 and spin 1. Which it has. Apparently, in baryons and light nuclei, total isospin contributes more to the total energy of a system than does total angular momentum. | {
"domain": "physics.stackexchange",
"id": 16534,
"tags": "quantum-spin, quarks, pauli-exclusion-principle, baryons, color-charge"
} |
VC dimension of complement | Question: Let $C\subseteq 2^X$ be a concept class over $X$ and let $\bar{C}:=\{X\setminus c\mid c\in C\}$ be the complement. Show that $VCdim(C)=VCdim(\bar{C})$.
Proof:
Let $d:=VC_{dim}(C)$, then there exists $S\subseteq X$, $|S|=d$, s.t. $S$ is shattered by $C$.
Let $d':=VC_{dim}(\bar{C})$, then there exists $S'\subseteq X$, $|S'|=d'$, s.t. $S'$ is shattered by $\bar{C}$.
Show that $d\leq d'$ and $d' \leq d$. I know that a set $S$ is shattered by $C$ iff $\Pi_C(S):=\{c\cap S\mid c\in C\}=2^S$, but I have no clue how to show the two sides. Can someone help me with that?
Answer: First, observe that it is enough to prove that $d\le d'$. Then, the converse inequality follows from the fact that $\overline{\overline{C}}=C$, and by applying the initial argument to $\overline{C}$.
To prove the claim, we actually prove something stronger: we prove that if $C$ shatters $S$, then $\overline{C}$ also shatters $S$.
Let $S$ be a set that is shattered by $C$, and consider $\Pi_{\overline{C}}(S)$. Let $T\subseteq S$; since $C$ shatters $S$, there exists $c\in C$ such that $c\cap S=S\setminus T$ (every subset of $S$ can be reached this way). Let $c'=X\setminus c\in \overline{C}$; then $c'\cap S=S\setminus(c\cap S)=S\setminus(S\setminus T)=T$. Therefore, $\Pi_{C}(S)\subseteq\Pi_{\overline{C}}(S)$.
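This containment can be brute-force checked on a tiny universe (a Python sketch; the concept class below is an arbitrary example of my own):

```python
from itertools import combinations

X = frozenset(range(4))
# An arbitrary concept class over X, and its complement class
C    = [frozenset(s) for s in [(0,), (1,), (0, 1), (2,), (0, 2), ()]]
Cbar = [X - c for c in C]

def projections(concepts, S):
    """Pi_C(S) = { c ∩ S : c ∈ C }."""
    return {c & S for c in concepts}

def shatters(concepts, S):
    return len(projections(concepts, S)) == 2 ** len(S)

# A set is shattered by C iff it is shattered by the complement class
for r in range(len(X) + 1):
    for s in combinations(X, r):
        S = frozenset(s)
        assert shatters(C, S) == shatters(Cbar, S)
```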
Applying the same argument to $\overline{C}$, we get $\Pi_{\overline{C}}(S)\subseteq \Pi_{\overline{\overline{C}}}(S)=\Pi_{C}(S)$, and thus we have the equality we wanted, and we conclude the claim. | {
"domain": "cs.stackexchange",
"id": 2376,
"tags": "machine-learning, vc-dimension"
} |
What are these stripes in swimming pool light lenses called which make the LEDs seem as if they are pointing towards us, no matter where we are? | Question:
What are these stripes in swimming pool light lenses called which make the LEDs seem as if they are pointing towards us, no matter where we are? How do these stripes work?
Also, should their angle be perpendicular to ground or parallel?
Answer: Those stripes are lenses that have been molded into the clear plastic or glass that protects the LEDs from the water. But instead of being circular lenses, they are stretched out in one direction into what you call "stripes". Such an arrangement is called a cylindrical lens and is used when you want to control the spread of a light beam in just one direction instead of two.
In this case, the objective of the lens is to spread out the light beam in the horizontal plane so it appears to be beamed sideways, and hence it seems to be focused on you no matter where you are located within that horizontal plane. | {
"domain": "physics.stackexchange",
"id": 86942,
"tags": "diffraction, lenses"
} |
Pioneer_p3dx simulation on Gazebo, not stopping on key release | Question:
I am working on a Pioneer P3dx simulation in Gazebo, using the package "ua_ros_p3dx". When I use the keyboard to control the robot, the robot moves, but once I release the key the robot keeps moving; it does not stop. Until I press another key, the robot continues executing the previous command. Please help me resolve the issue.
The link for the package I am using is https://github.com/RafBerkvens/ua_ros_p3dx
Thanks
Melvin
Originally posted by manuelmelvin on ROS Answers with karma: 33 on 2018-07-09
Post score: 0
Original comments
Comment by jayess on 2018-07-09:
Can you please update your question with a link to the package that you're using
Comment by manuelmelvin on 2018-07-09:
I have updated with the package link. Thank you
Answer:
Hello @manuelmelvin,
Unfortunately, using the differential driver gazebo plugin like this will not work. But the good news is that you can use it in a different way, like they do for the Husky or Jackal robots (https://github.com/s-mostafa-a/Original-Husky).
Basically, it's about using the diff_drive_controller (http://wiki.ros.org/diff_drive_controller)
By doing this, you can set the cmd_vel_timeout variable to suit your needs.
I've cloned your repo and worked on one of my own, see below the main modifications I've done or if you prefer, check in my public repository the merge I've done to master branch (https://bitbucket.org/theconstructcore/p3dx/commits/ed1e237e2a36a1c2d23cd5edcbed2059cf0c88f2)
Yet, you can run my ROSJect using this ROSDS link (https://rds.theconstructsim.com/tc_projects/use_project_share_link/438ff787-1c07-44ff-8fb0-b7a21f4014b2)
Or watch this video to follow the instructions: https://www.youtube.com/watch?v=x8iedoVgv8k
Here it goes the modifications:
p3dx_description/urdf/pioneer3dx.gazebo (just comment the plugin code)
<!--
<gazebo>
<plugin name="differential_drive_controller" filename="libgazebo_ros_diff_drive.so">
<alwaysOn>true</alwaysOn>
<updateRate>100</updateRate>
<leftJoint>base_right_wheel_joint</leftJoint>
<rightJoint>base_left_wheel_joint</rightJoint>
<wheelSeparation>0.39</wheelSeparation>
<wheelDiameter>0.15</wheelDiameter>
<torque>5</torque>
<commandTopic>${ns}/cmd_vel</commandTopic>
<odometryTopic>${ns}/odom</odometryTopic>
<odometryFrame>odom</odometryFrame>
<robotBaseFrame>base_link</robotBaseFrame>
</plugin>
</gazebo>
-->
p3dx_description/urdf/pioneer3dx_wheel.xacro (change hardwareInterface value to VelocityJointInterface)
<transmission name="${parent}_${suffix}_wheel_trans">
<type>pr2_mechanism_model/SimpleTransmission</type>
<joint name="base_${suffix}_wheel_joint">
<hardwareInterface>VelocityJointInterface</hardwareInterface>
</joint>
<actuator name="base_${suffix}_wheel_motor">
<mechanicalReduction>${reflect * 624/35 * 80/19}</mechanicalReduction>
</actuator>
</transmission>
p3dx_gazebo/launch/gazebo.launch (Spawn the differential drive controller)
<launch>
<!-- these are the arguments you can pass this launch file, for example
paused:=true -->
<arg name="paused" default="false" />
<arg name="use_sim_time" default="true" />
<arg name="gui" default="true" />
<arg name="headless" default="false" />
<arg name="debug" default="false" />
<!-- We resume the logic in empty_world.launch, changing only the name of
the world to be launched -->
<include file="$(find gazebo_ros)/launch/empty_world.launch">
<!--<arg name="world_name" value="$(find p3dx_gazebo)/worlds/p3dx.world" />-->
<arg name="debug" value="$(arg debug)" />
<arg name="gui" value="$(arg gui)" />
<arg name="paused" value="$(arg paused)" />
<arg name="use_sim_time" value="$(arg use_sim_time)" />
<arg name="headless" value="$(arg headless)" />
</include>
<group ns="/p3dx">
<!-- Load the URDF into the ROS Parameter Server -->
<param name="robot_description"
command="$(find xacro)/xacro.py --inorder '$(find p3dx_description)/urdf/pioneer3dx.xacro'" />
<!-- Run a python script to the send a service call to gazebo_ros to spawn
a URDF robot -->
<node name="urdf_spawner" pkg="gazebo_ros" type="spawn_model"
respawn="false" output="screen" args="-urdf -param robot_description -model p3dx" />
<rosparam command="load" file="$(find p3dx_control)/config/control.yaml" />
<node name="base_controller_spawner" pkg="controller_manager" type="spawner"
args="--namespace=/p3dx
p3dx_joint_publisher
p3dx_velocity_controller
--shutdown-timeout 3"
output="screen"/>
<!-- ros_control p3rd launch file -->
<!-- <include file="$(find p3dx_control)/launch/control.launch" /> -->
</group>
</launch>
p3dx_control/config/control.yaml (create a new file to configure the controller)
p3dx_joint_publisher:
type: "joint_state_controller/JointStateController"
publish_rate: 50
p3dx_velocity_controller:
type: "diff_drive_controller/DiffDriveController"
left_wheel: 'base_right_wheel_joint'
right_wheel: 'base_left_wheel_joint'
publish_rate: 50
pose_covariance_diagonal: [0.001, 0.001, 0.001, 0.001, 0.001, 0.03]
twist_covariance_diagonal: [0.001, 0.001, 0.001, 0.001, 0.001, 0.03]
cmd_vel_timeout: 0.25
wheel_separation : 0.39
wheel_radius : 0.15
I hope it can help you.
Cheers!
Originally posted by marcoarruda with karma: 541 on 2018-07-12
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by manuelmelvin on 2018-07-14:
Thank you very much for your effort. It worked for me. great job marcoarruda
Comment by marcoarruda on 2018-07-20:
Great to know it helped you! Do you mind to check my answer as correct? Thanks! | {
"domain": "robotics.stackexchange",
"id": 31232,
"tags": "ros-indigo"
} |
What is the optimal light setting for human vision? In other words, is my vision better during the day or at night? | Question: There seem to me to be two main effects that should be considered in answering this question.
The first is that the iris dilates in low-light settings, which causes an increase in the numerical aperture of the eyeball and therefore an increase in the amount of light that gets focused onto the retina. However, there is also a smaller amount of ambient light that will hit the eyeball under these low-light settings than in normal daylight.
The second factor to consider is that the eyeball itself is not a perfect lens. Rather, it is subject to quite large spherical aberrations, which distort the perceived image. In low-light scenarios, when the iris dilates, more of the eye (and thus more of its spherical aberrations) are exposed to the incoming light, thus altering the produced image even more than in daylight.
My feeling is that there has to be an ambient light level at which the iris is as dilated as it can be to allow the most light to hit the retina, while simultaneously not being so dilated as to introduce too much spherical aberration to the image. What is this level?
Answer: Vision in the daylight and vision during night are two different processes performed, in the eye, by two different kind of cells.
Vision under well-lit conditions is called photopic vision, while vision in low levels of light is called scotopic vision. Photopic vision is performed by cone cells, which are mainly concentrated in the fovea, that is, roughly speaking, the point on which the image you are looking at is focused in your eyes. On the other hand, scotopic vision is performed by rod cells, which are distributed over the whole retina (with a varying density) with the exception of the fovea, where they are less concentrated. The first consequence is that, while in photopic vision it is possible to stare directly at a certain point and focus on it, in scotopic vision (for example, star gazing with the naked eye) it often happens that objects visible in the peripheral field of view suddenly are no longer visible when we focus on them. (Actually, I am not a biologist, but this seems reasonable from an evolutionary point of view. In the dark, you have to monitor the whole environment for predators, so peripheral vision is extremely important, while in the light predators are more visible and the main goal is thus to focus on things.)
A second, important difference is that in photopic vision the cone cells are sensitive to certain wavelengths and they can distinguish between them, thus allowing color vision, while in scotopic vision different colors cannot be distinguished and the sensitivity as a function of the wavelength is different.
As a side note, it is remarkable to notice that in scotopic vision we can see (in this case, I mean perceive with a signal in our nerves) just a few photons (less than ten in the right spectral region), while in photopic vision this number is bigger. Notice that the active substance in rod cells, called rhodopsin, is actually sensitive to a single photon (in certain spectral regions), but a conscious response is obtained only when a few more photons reach the retina; otherwise the noise in our vision process would be too high.
To get back to your question, I think that from the point of view of a physicist it is actually quite difficult to answer. The process of vision is extremely complicated and it is not only related to the sensitivity of our eye to light. In fact, it involves also the response of the human brain to the optical stimulus and all the ways in which it interprets it.
Last, in this Wikipedia article, it states that:
Night vision is of a much poorer quality than day vision because it is limited by a reduced resolution and therefore provides the ability to only discriminate between shades of black and white.
The reference for this sentence (and probably references therein) could be a good starting point for understanding the factors that are playing a role in this difference of resolution. | {
"domain": "physics.stackexchange",
"id": 51169,
"tags": "optics, vision, biology"
} |
How do you handle unbalanced image datasets? | Question: I have an image data set on which I am training a CNN. The data set is slightly unbalanced. So, my solution up till now was to delete some images of the majority class.
I now realize that there are cleaner ways to deal with this, but I haven't been able to find ways to fix unbalanced image data sets, only structured data sets.
I would like someone to guide me in fixing the imbalance, other than deleting data from the majority class.
Answer: You can always adjust class weights accordingly. I know the reference is not for image data but it shouldn't matter if you are doing classification. Here is another answer more direct to the point. | {
"domain": "ai.stackexchange",
"id": 3040,
"tags": "machine-learning, convolutional-neural-networks, datasets, imbalanced-datasets"
} |
Refactor or simple code RoR | Question: I have a GiftCard class with two methods, can_deliver? and deliver!, that I think can be refactored into better-looking code:
def can_deliver?
(self.sent_at.nil? && self.scheduled_at.nil?) || (self.sent_at.nil? && self.scheduled_at && self.scheduled_at < Time.now)
end
def deliver!
return unless self.can_deliver?
begin
Notifier.gift_card_code_mail(self).deliver
self.sent_at = Time.now
self.save
rescue Exception
logger.error "Could not send email gift card ##{self.id}"
self.line_item.order.comments.create!(:content => "Can not send gift card mail")
end
begin
Notifier.gift_card_delivered(self).deliver
rescue Exception
logger.error "Could not send deliver confirmation email gift card #{self.id}"
self.line_item.order.comments.create!(:content => "Can not send confirmation delivery of gift card")
end
end
Answer: Some notes:
self: You write self.attribute to access attributes. I won't say that's bad practice, not at all, mainly because it's easier to see if you're accessing local variables or instance methods. However, in Ruby it's idiomatic not to use self in this case.
variable.nil?: this is something we often see, but 99% of the time it is unnecessarily verbose. You want to know if a variable is not set? Write !variable. The only case an explicit nil? is needed is when you want to tell false from nil, which is not the case here.
Layout: Don't write long lines, 80/100 is a sound limit. It usually pays off in form of clarity also: in can_deliver? for example, if you break on the || operator the boolean expression is much more clear.
Early returns: return unless self.can_deliver?. Again, I won't say that's bad, in a language that has no syntax for guards (like other languages have), it's a handy way to do early returns when pre-conditions are not met. As a rule of thumb, however, I'd recommend writing a full conditional. Yeah, I know, it's more verbose and you get an extra indentation level, but on the other hand the layout of the function/method helps you understand what the method is doing.
The cool thing about the way you wrote can_deliver? (using an expression instead of a bunch of imperative returns) is that you can apply boolean algebra. Notice here that (!p && !q) || (!p && q && q < t) -> !p && (!q || q < t). Of course this kind of simplification must only be done when the resulting expression is at least as declarative as the original (we are not trying to save some NAND gates here). If you read it out loud and it makes sense then it's ok.
You have two (and probably more throughout the code) almost identical begin/rescue blocks. That's a call for abstraction (in this case an abstraction using block wrappers).
Don't write rescue Exception, that's a common pitfall. Check this SO question.
Shouldn't deliver! return something? how would the caller know if it was successful? you should always return something, either a value (a boolean seems fit in this case) or, yes, return nil but raise an exception on error. Personally I like to return values and leave exceptions for really exceptional things (like, I don't know, the hard disk exploded).
Applying all these points, and with a declarative approach in mind, I'd write:
class GiftCard
def can_deliver?
!sent_at && (!scheduled_at || scheduled_at < Time.now)
end
def deliver!
can_deliver? && send_card_code_email && send_gift_card_email
end
private
def comment_on_exception(msg)
yield
rescue => exc
logger.error(msg)
line_item.order.comments.create!(:content => msg)
false
end
def send_card_code_email
comment_on_exception("Could not send email gift card #{id}") do
Notifier.gift_card_code_mail(self).deliver
update_attribute(:sent_at, Time.now)
end
end
def send_gift_card_email
comment_on_exception("Could not send deliver confirmation email gift card") do
Notifier.gift_card_delivered(self).deliver
end
end
end
If you found some of these advices useful, take a look at the RubyIdioms page I maintain (and its RubyFunctionalProgramming companion). Comments most welcome. | {
"domain": "codereview.stackexchange",
"id": 2848,
"tags": "ruby, ruby-on-rails"
} |
How to experimentally cool a nucleus | Question: It's possible to cool atoms experimentally (i.e. reduce their individual momenta) through laser cooling, getting matter composed of these atoms to do some strange things (e.g. superfluids and Bose-Einstein condensates).
But suppose you wanted, just for fun to make them EVEN COLDER. I came across: http://www.int.washington.edu/users/bertsch/general_interest/scientific_american_1983.pdf which describes (albeit in a very pop-sci way) the existence of vibrations in the nuclei of atoms.
How could one experimentally damp these nuclear vibrations and cool them to a ground state?
(I suppose one could use neutrinos to cool off stuff involving the weak interaction)
Answer: With the exception of meta-stable nuclear isomers (look for a mass-number with an 'm' at the end as in technetium-99m) almost every nucleus you meet is already in the ground state for that isotope. Most excited nuclear states decay very quickly, so that there is no chance of collecting a laboratory sample of nuclei in those states. The meta-stable states are the exception.
Of course some isotopes which are in their own ground state can get to an even lower energy state by some radioactive decay process, which they do. Eventually.
Short answer to the question asked: there is no means of cooling the nucleus. | {
"domain": "physics.stackexchange",
"id": 38884,
"tags": "quantum-mechanics, experimental-physics, nuclear-physics"
} |
Generate the Convolution Matrix of 2D Kernel for Convolution Shape of `same` | Question: I want to find a convolution matrix for a certain 2D kernel $ H $.
For example, for an image Img of size $ m \times n $, I want (in MATLAB):
T * Img = reshape(conv2(Img, H, 'same'), [], 1);
Where T is the convolution matrix and same means the Convolution Shape (Output Size) matched the input size.
Theoretically, H should be converted to a Toeplitz matrix; I'm using the MATLAB function convmtx2():
T = convmtx2(H, m, n);
Yet T is of size $ (m+2) (n+2) \times (mn) $ as MATLAB's convmtx2 generates a convolution matrix which matches Convolution Shape of full.
Is there a way to generate the convolution matrix that matches the output of conv2() with the same convolution shape parameter?
Answer: I cannot test this on my computer because I do not have the convmtx2 function; here is what the MATLAB help says:
http://www.mathworks.com/help/toolbox/images/ref/convmtx2.html
T = convmtx2(H,m,n) returns the convolution matrix T for the matrix H. If X is an m-by-n matrix, then reshape(T*X(:),size(H)+[m n]-1) is the same as conv2(X,H).
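For readers working outside MATLAB, the "pulling out" step can be sketched in Python/NumPy: build each column of T as the full convolution of a unit impulse with H, then keep only the centered m-by-n window. (The centering offsets below assume conv2's 'same' cropping convention, and NumPy's row-major ravel differs from MATLAB's column-major reshape, so this is a self-consistent sketch rather than a drop-in convmtx2 replacement.)

```python
import numpy as np

def conv2_full(X, H):
    """Full 2-D convolution, like MATLAB's conv2(X, H)."""
    m, n = X.shape
    p, q = H.shape
    out = np.zeros((m + p - 1, n + q - 1))
    for i in range(p):
        for j in range(q):
            out[i:i + m, j:j + n] += H[i, j] * X
    return out

def convmtx2_same(H, m, n):
    """T such that T @ X.ravel() is the 'same'-shape convolution of X with H."""
    p, q = H.shape
    r0, c0 = (p - 1) // 2, (q - 1) // 2      # top-left of the centered window
    T = np.zeros((m * n, m * n))
    for k in range(m * n):                   # column k = response to impulse k
        E = np.zeros((m, n))
        E.flat[k] = 1.0
        T[:, k] = conv2_full(E, H)[r0:r0 + m, c0:c0 + n].ravel()
    return T

rng = np.random.default_rng(1)
H = rng.standard_normal((3, 3))
X = rng.standard_normal((5, 4))
T = convmtx2_same(H, 5, 4)
same = conv2_full(X, H)[1:1 + 5, 1:1 + 4]    # centered crop for a 3x3 kernel
assert np.allclose(T @ X.ravel(), same.ravel())
```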
This would get the same resulting convolution of conv2(X,H) but then you would still have to pull out the correct piece of the convolution. | {
"domain": "dsp.stackexchange",
"id": 7155,
"tags": "image-processing, matlab, computer-vision, convolution"
} |
Am I checking incompressibility of a velocity flow correctly? | Question: My velocity flow is defined by $u_r$, $u_{\theta}$, $u_x$. This makes the strain rate tensor of the velocity flow equal to:
$J_{ij} = \begin{bmatrix} u_{rr} & u_{r\theta} & u_{rx} \\
u_{\theta r} & u_{\theta \theta} & u_{\theta x} \\
u_{xr} & u_{x\theta} & u_{xx} \end{bmatrix}$
Where $J$ can be split into a symmetrical part $\mathcal{D}$ and a anti symmetrical part $\Omega$ which are defined as:
$\mathcal{D_{ij}} = \frac{1}{2} (u_{ij} + u_{ji}), \quad \mathcal{D}^T = \mathcal{D}\\
\Omega_{ij} = \frac{1}{2} (u_{ij} - u_{ji}), \quad \Omega^T = - \Omega$
I now have to check whether or not the flow field is incompressible. And to me it seems that compressibility is defined by the diagonal terms in $J$, and vorticity is described by the off diagonal terms in $J$. Am I right when I say the following thing:
$\mathcal{D_{ij}} = \frac{1}{2} (u_{ij} + u_{ji}), \quad \mathcal{D}^T = \mathcal{D} \equiv \frac{1}{2}(2\cdot u_{rr} + 2\cdot u_{\theta \theta} + 2\cdot u_{xx})$
And that if I show that this equation is equal to zero the fluid flow is incompressible?
Answer: The continuity equation in cylindrical coordinates is $$\frac{1}{r}\frac{\partial (ur)}{\partial r}+\frac{1}{r}\frac{\partial v}{\partial\theta}+\frac{\partial w}{\partial z}=0$$ where $u$, $v$, and $w$ denote the radial, azimuthal, and axial velocity components ($u_r$, $u_\theta$, $u_x$ in your notation). | {
"domain": "physics.stackexchange",
"id": 46652,
"tags": "fluid-dynamics, flow"
} |
Custom world fails to load in hector_quadrotor_gazebo | Question:
Hi there. I am trying to load a custom world for hector_quadrotor_gazebo, let's say 3.world, which is basically roslaunch hector_quadrotor_gazebo quadrotor_empty_world.launch with some buildings and objects added, saved as the new world 3.world.
How can I launch this world later on? I followed the tutorial at 'http://learn.turtlebot.com/2015/02/03/6/', which worked fine for turtlebot but didn't work for hector_quadrotor.
I am new to ROS and Gazebo, so excuse my question.
Originally posted by Caesar84 on Gazebo Answers with karma: 3 on 2018-04-02
Post score: 0
Answer:
So if you look at the contents of the quadrotor_empty_world.launch take note of the following section:
<include file="$(find gazebo_ros)/launch/empty_world.launch">
<arg name="paused" value="$(arg paused)"/>
<arg name="use_sim_time" value="$(arg use_sim_time)"/>
<arg name="gui" value="$(arg gui)"/>
<arg name="headless" value="$(arg headless)"/>
<arg name="debug" value="$(arg debug)"/>
</include>
This .launch file passes arguments to gazebo_ros's empty_world.launch file. If you look at the empty_world.launch file, you'll see the following line:
<arg name="world_name" default="worlds/empty.world"/> <!-- Note: the world_name is with respect to GAZEBO_RESOURCE_PATH environmental variable -->
This line means that you can pass this file a world_name argument to open any .world file you'd like.
So to answer your question, add a line (or make a copy if you don't want to change the original file) to the quadrotor_empty_world.launch file like so:
<include file="$(find gazebo_ros)/launch/empty_world.launch">
<arg name="paused" value="$(arg paused)"/>
<arg name="use_sim_time" value="$(arg use_sim_time)"/>
<arg name="gui" value="$(arg gui)"/>
<arg name="headless" value="$(arg headless)"/>
<arg name="debug" value="$(arg debug)"/>
<arg name="world_name" value="path/to/3.world"/>
</include>
And then obviously input the correct path to your 3.world file.
Originally posted by Raskkii with karma: 376 on 2018-04-03
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 4250,
"tags": "gazebo-world"
} |
Audio template matching - power correlation? | Question: I'm looking at some code that matches audio templates in a longer audio file. The calculation correlates the power spectra of the template and audio file, maximizing over the possible alignments. This seems sub-optimal to me, because by going over to the power spectrum we're throwing away phase information. Yet when I try doing a full correlation I get worse results. I'm not 100% sure yet my implementation is correct; I could have made a silly mistake.
Is there any reason it could be better to correlate the power spectra than to do a full correlation? E.g. is it more noise resistant? (I don't see why it would be. As it happens, I do have something resembling white noise in my test data.)
The only obvious thing I can think of is that the power spectral correlation is probably better if the alignment error is >= 1/(the highest frequency in the signal), because then the template and signal will be out of phase. I don't think this should be the case for me: I'm optimizing to within more temporal precision than that.
Other ideas?
EDIT: in view of Peter K's comment, I should clarify that I'm using the short time Fourier transform, summing over windows of size around 0.01s. That's how the alignment dependence enters.
Answer: If you're cross correlating short time power spectra and disregard phase, then you're not really disregarding phase.
Phase of the global Fourier transform encodes the temporal structure, including the position of the transients, evolution of tones or the local incoherence of noise.
By capturing the time dependence in the moving window of the time-frequency power spectrum, the really important aspects of phase are represented in the temporal evolution of the frequency power density.
So what you are discarding is merely "local phase", which contains a lot of information that is however modified by even the most subtle processes like sound propagation, speaker reproduction, microphone recording, etc. These modifications don't affect the qualitative content of the sound a lot, and a robust sound recognition algorithm should be mostly insensitive to them.
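To make this concrete, here is a small NumPy sketch (my own toy setup: the window/hop sizes and the phase-destroying sign flip are arbitrary choices) showing that correlating short-time power spectra still recovers the alignment after a transformation that ruins plain waveform correlation:

```python
import numpy as np

def power_spectrogram(x, win=256, hop=128):
    """Short-time power spectra; the local phase is discarded here."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)) ** 2

def best_alignment(signal, template, win=256, hop=128):
    S = power_spectrogram(signal, win, hop)
    T = power_spectrogram(template, win, hop)
    scores = [np.sum(S[k:k + len(T)] * T)          # correlate power spectra
              for k in range(len(S) - len(T) + 1)]
    return int(np.argmax(scores)) * hop            # best offset, in samples

rng = np.random.default_rng(0)
template = rng.standard_normal(2048)
signal = np.concatenate([0.1 * rng.standard_normal(4096), template,
                         0.1 * rng.standard_normal(4096)])

# A global sign flip is inaudible and leaves every power spectrum unchanged,
# but it would make a plain waveform cross-correlation take a large *negative*
# value at the true offset. The power-spectral match still finds it:
offset = best_alignment(-signal, template)
assert abs(offset - 4096) <= 256
```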
That means discarding the local phase will make your recognition or correlation algorithm more robust and avoid misclassifications due to small inaudible phase errors, while at the same time preserving the total temporal structure of the signal. | {
"domain": "dsp.stackexchange",
"id": 3050,
"tags": "audio, cross-correlation"
} |
Why does an eutectic liquid not change composition on solidifying? | Question: Callen remarks that "If it is desired to have the solid precipitate with the same composition as the liquid, it is necessary to start with a liquid of" eutectic composition. Why is this so? I think this has something to do with how the composition of the liquid phase in general changes during the solidification process in an $r$ component system, $r>1$, but it's not clear to me how freezing at the eutectic composition prevents this.
Answer: On either side of a eutectic lie slushy (i.e., two-phase) regions (here marked α+L and L+β, with L being liquid and α and β being impure A and B, respectively):
(This and similar images from Shackelford.)
Here, both the solid and liquid Gibbs free energies $G$ are pretty low, so Nature splits the difference and lets the material at that composition decompose into a little of both:
(Image from Porter and Easterling.)
Progressive cooling from the liquid through one of these regions yields crystallites of the relevant solid, with composition determined by the lever rule (with constant composition assumed for simplicity, corresponding to slow cooling); then, the remaining liquid freezes into a fine eutectic microstructure because both α and β favorably exist as solids:
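(For reference, the lever rule invoked here reads as follows, in my notation: $X_0$ is the overall composition, and $X_\alpha$, $X_L$ are the solid- and liquid-boundary compositions at the temperature of interest.)

```latex
f_\alpha = \frac{X_L - X_0}{X_L - X_\alpha},
\qquad
f_L = \frac{X_0 - X_\alpha}{X_L - X_\alpha},
\qquad
f_\alpha + f_L = 1 .
```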
One can avoid the included crystallites by freezing at exactly the eutectic composition:
As shown in the Gibbs free energy plots above, this is the composition that happens to be the last point that intersects the convex hull formed by L, α, and β as L rises out of this contour, now energetically unfavorable. | {
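The lever rule mentioned above can be made concrete with a small sketch. The compositions below (in wt% B) are illustrative only, not taken from the figures:

```python
# Lever rule in a two-phase (alpha + L) region: for an overall composition
# c0 lying between the boundary compositions c_alpha (solid) and c_liq
# (liquid), the solid fraction equals the opposite lever arm divided by the
# tie-line length. All compositions here are made up for illustration.
def lever_rule(c0, c_alpha, c_liq):
    f_solid = (c_liq - c0) / (c_liq - c_alpha)
    return f_solid, 1.0 - f_solid

f_alpha, f_liq = lever_rule(c0=30.0, c_alpha=10.0, c_liq=60.0)
print(f_alpha, f_liq)  # 0.6 0.4
```

Note that when the overall composition coincides with the liquid-boundary composition, the primary-solid fraction is zero, which is the lever-rule analogue of freezing at exactly the eutectic composition with no included crystallites.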
"domain": "physics.stackexchange",
"id": 96480,
"tags": "thermodynamics, phase-transition, physical-chemistry"
} |
Bio-fuel efficiency as aviation gasoline | Question: Can biofuels replace petroleum for aviation fuel?
I've read in the news that Alaska Airlines was flying with 10~20% biofuel and 80~90% petroleum. Is biofuel efficiency comparable to the efficiency of petroleum? I am curious whether or not the maximum speed of the airplane is the same when it uses 100% biofuel.
Answer: The term biofuel refers both to the source of the fuel, as well as (currently, at least) the compounds contained in it. If the production quality is strictly controlled, eventually there should be no detectable difference between a fuel mixture derived directly from biological sources, and one derived from petroleum products (which, technically speaking, are of biological origin themselves, just a few million years removed). However, due to technological limitations, biofuels and petroleum-based fuels do have different chemical components today.
The reason we hear of differences in mileage or "energy density" in biofuels used to power automobiles as compared to gasoline/petrol is because they are not the same mixture of chemicals. This page goes into detail (the whole site is actually quite clear with its explanations), but the basic difference is that petroleum products are essentially 100% hydrocarbons (only containing hydrogen and carbon atoms) while nearly all biofuels contain oxygen as well, which has numerous effects on the fuel's physical and chemical characteristics, altering everything from polarity to reactivity to stability over time to the types of byproducts (and pollutants) produced during combustion.
There is much more to the story, but the basic answer to your question is yes, biofuels could replace petroleum-based products for aviation and many other types of fuels, and not impact the maximum speed of the aircraft. It may require engine modification, and the energy density may not be exactly the same (possibly reducing range), but for the long-term health of our planet we MUST reduce and eventually eliminate our use of petroleum.
By the way, the Aviation or Chemistry StackExchange sites would likely be able to provide you with much more in-depth answers, as ultimately this question is better suited to either or both of those sites than Biology, which would be more concerned with the production of biofuels by living organisms. | {
"domain": "biology.stackexchange",
"id": 4631,
"tags": "biotechnology, flight, vegetable"
} |
Electric power and resistance dependance | Question: According to the equations,
$$P=VI =I^2R\,\text{ and voltage } V=IR$$
it seems clear that when the resistance is lowered while the voltage is held constant, the current is therefore higher, generating higher power. But what confused me was that when the resistance is raised while the current is held constant, the voltage is therefore higher, which in turn leads to a higher power as well. Can anyone pull me out of this confusion?
Answer: The answer is "yes, that's what happens." There's no paradox. If you hold the current constant and increase the resistance, the power increases. This is because of V=IR and P=VI (Ohm's law and the power through a resistor). If you put these together, you can see that $P=I^2R$. If you hold the current constant and increase the resistance, power goes up. That's just how the equations work.
What makes this confusing is that it's not intuitive how to hold current constant. We typically don't think that way. Usually we think in terms of voltages. So one way to think of this is that our higher resistance forces the power supply to provide a higher voltage in order to push through the same current. Intuitively, it should make sense that a higher voltage supply can produce higher powers (though you would need the $P=I^2R$ equation to prove it).
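The two scenarios can be checked with a few lines of arithmetic (values chosen purely for illustration):

```python
# Numeric check of both scenarios: at constant current, P = I^2 * R grows
# with R; at constant voltage, P = V^2 / R shrinks with R.
I = 2.0                          # amperes, held constant (illustrative)
for R in (1.0, 10.0):
    V = I * R                    # Ohm's law: the supply must raise V
    print(R, V, I**2 * R)        # power rises: 4 W -> 40 W

V = 10.0                         # volts, held constant (illustrative)
for R in (1.0, 10.0):
    print(R, V**2 / R)           # power falls: 100 W -> 10 W
```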
You can use metaphors as well. Any metaphor where you can put a load on something works decently well. Take your own body. You can run at a fairly nice pace. The resistance on your body while running is quite low, so it doesn't take much effort. Now add resistance:
If you run at the same pace (the equivalent of keeping the current the same), you're going to have to push much harder (the equivalent of raising the voltage). And, if you notice, you get hot really fast (power is being dissipated). However, that's only because you kept the current the same. It would also be possible to slack off, not running as hard, in which case you could dissipate less power than before. But your original problem declares that you're keeping the current constant, so you're going to have to work harder and have more power! | {
"domain": "physics.stackexchange",
"id": 43816,
"tags": "electricity, electric-current, electrical-resistance, voltage, power"
} |
Is there a limit to how fast spacetime can bend/warp around an accelerating object? | Question: If in the future a spaceship can accelerate to nearly the speed of light, will it encounter any kind of resistance/drag from the spacetime in front of it not being able to bend/warp fast enough around it?
It is my understanding that as an object accelerates, its mass will increase and this increase in mass should result in the object having more gravity, and this gravity will continue to increase as the object's speed increases. The creation of more and more gravity should result in an increased area of spacetime surrounding the object being bent/warped.
So, if there is a limit to how fast spacetime can bend/warp around an accelerating object, along with an increased area of spacetime being affected by an accelerating object's increased gravity, will this spaceship start to heat up and begin to melt the faster it accelerates?
Answer: Changes in the bending of spacetime propagate through spacetime at the speed of light. If I wave my hand thus the (very, very, very tiny) changes to spacetime will propagate outwards as gravitational waves at the speed of light and won't reach the Sun for 8 minutes.
No matter how massive an object or how great the acceleration, spacetime elsewhere only "feels" that acceleration after it has passed into the acceleration's lightcone. | {
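The 8-minute figure is simply the light travel time over one astronomical unit; a quick check:

```python
# Light (and gravitational-wave) travel time from the Sun to the Earth,
# i.e. over one astronomical unit.
AU = 1.495978707e11      # meters
c = 2.99792458e8         # meters per second
t = AU / c
print(t, t / 60)         # ~499 s, i.e. ~8.3 minutes
```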
"domain": "astronomy.stackexchange",
"id": 3220,
"tags": "gravity, astrophysics, space-time, general-relativity"
} |
Group list of words by occurrence / inverse index of strings | Question: I'm currently working through Coding the matrix, though I'm trying to work through it using Haskell rather than Python. One of the exercises has you write a function where you take a list of strings, and then transform them into an inverse index of the words contained in each string, along with the index of the originating string:
> makeInverseIndex ["hello world", "world test", "hello test"]
fromList [("hello",fromList [0,2]),("test",fromList [1,2]),("world",fromList [0,1])]
You then implement an orSearch and an andSearch, which returns the indexes of the strings that match the search:
> orSearch ["hello", "world"] $ makeInverseIndex ["hello world", "world test", "hello test"]
fromList [0,1,2]
> andSearch ["hello", "world"] $ makeInverseIndex ["hello world", "world test", "hello test"]
fromList [0]
Here is my attempt in Haskell:
import Data.List as L
import Data.Map.Strict as M
import Data.Set as S
makeInverseIndex :: [String] -> Map String (Set Int)
makeInverseIndex = L.foldr (unionWith S.union . uncurry group) M.empty . zipWithIndex . L.map words
where
zipWithIndex :: [a] -> [(Int, a)]
zipWithIndex xs = zip [0..length xs] xs
group :: Ord k => v -> [k] -> Map k (Set v)
group n = M.fromList . L.map (\w -> (w, S.singleton n))
orSearch :: [String] -> Map String (Set Int) -> Set Int
orSearch words =
M.foldr S.union S.empty . pick words
andSearch :: [String] -> Map String (Set Int) -> Set Int
andSearch words index =
M.foldr S.intersection (S.fromList [0..length index-1]) . pick words $ index
pick :: Ord k => [k] -> Map k v -> Map k v
pick keys m =
restrictKeys m $ S.fromList keys
Answer: Prefer qualified imports for containers
The containers modules contain several functions that have the same names as their list counterpart. Therefore, they are usually included as qualified modules:
import Data.Map.Strict (Map)
import qualified Data.Map.Strict as M
import Data.Set (Set)
import qualified Data.Set as S
The types are imported unqualified to make the type signatures easier to read.
Next, it's easier to exchange your types later if you provide a type synonym:
type MultiMap k v = Map k (Set v)
type WordMap = MultiMap String Int
Now, let's have a look at your functions. zipWithIndex isn't optimal because it traverses xs twice. However, the result of zip is as long as the shorter of the two lists. We can therefore simply write
zipWithIndex xs = zip [0..] xs
Note that you don't need a type signature on those local functions. Indeed, they can be misleading, because the a in zipWithIndex is not related to the a in the outer function.
Don't shadow library function names
group is a name that's already imported via Data.List. Since we now import our containers as qualified, we can simply provide our own singleton function to get rid of group:
singleton :: k -> v -> MultiMap k v
singleton k v = M.singleton k (S.singleton v)
Our makeInverseIndex would now look like this:
makeInverseIndex :: [String] -> WordMap
makeInverseIndex = foldr (M.unionWith S.union . uncurry insert) M.empty . zipWithIndex . L.map words
where
zipWithIndex = zip [0..]
insert v = foldMap (flip singleton v)
Prefer functions that provide your functionality already
However, there's a function to convert a list of maps into a single map, unionsWith:
makeInverseIndex :: [String] -> WordMap
makeInverseIndex = M.unionsWith S.union . concat . zipWith go [0..] . map words
where
go index ws = map (flip singleton index) ws
I admit that go is a bad name in that context. indexer might be a better one. By the way, we cannot use foldMap or mconcat here, since that wouldn't merge the map values.
Try to relax your type signatures*
orSearch is fine, although you could relax its type. Also, words clashes with an existing function name. You can use ws safely in this context.
orSearch :: (Ord k, Ord v) => [k] -> MultiMap k v -> Set v
orSearch ws =
M.foldr S.union S.empty . pick ws
* unless it leads to performance problems or ambiguities
Prefer foldr1 instead of complicated start values
Now, andSearch is a little bit tricky. You use S.fromList [0..length index-1] in order to have a proper "zero" case. However, if the map is empty, the correct answer should be the empty set, not the complete set, right?
So let's handle that case first with M.null index and then use foldr1 from Map's Foldable instance:
andSearch :: (Ord k, Ord v) => [k] -> MultiMap k v -> Set v
andSearch ws index
  | M.null sets = S.empty
  | otherwise   = foldr1 S.intersection sets
  where
    sets = pick ws index
Other than that, well done. Keep in mind that there's an IntSet in Data.IntSet that might be more suitable for your use case.
"domain": "codereview.stackexchange",
"id": 28919,
"tags": "haskell"
} |
Algorithm to check two binary expression trees for equivalence | Question: Is there any known algorithm to check for equivalence of two binary expression trees over a field $\mathbb{F}$?
For example for the expression $a+b = b+a$ it should return true (since $\mathbb{F}$ is commutative) and $a^b = b^a$ should return false as well as for $a^2 = a$.
I can think of a naive implementation which is basically brute-force creating all equivalent binary expression trees for LHS and for RHS and check for a non empty intersection.
Is there a real-world-efficient algorithm to do this? I mean it isn't a must to be polynomial time but will work fast for common or relatively small problems (trees with at most ~100 nodes).
Wikipedia doesn't give any reference to the problem of comparing two for equivalence over some algebraic structure.
Answer: The answer depends heavily on what operations you allow to appear in the trees.
If the tree uses only addition, subtraction, and multiplication, this is an instance of the polynomial identity testing problem, which can be solved in polynomial time by a randomized algorithm. Basically, you pick random values for $a,b$ and check whether both trees return the same value; and repeat a few hundred times. See https://en.wikipedia.org/wiki/Schwartz%E2%80%93Zippel_lemma and Is there an efficient algorithm for expression equivalence?.
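As a sketch of the randomized idea (modeling expression trees simply as Python functions, which is an assumption made here for brevity):

```python
# Randomized polynomial identity testing in the spirit of Schwartz-Zippel:
# evaluate both expressions at random points over GF(P) and compare.
# A single disagreement proves non-equivalence; repeated agreement makes
# equivalence overwhelmingly likely for low-degree polynomials.
import random

P = 2_147_483_647  # a large prime, so we work in the field GF(P)

def equivalent(f, g, nvars, trials=200):
    for _ in range(trials):
        point = [random.randrange(P) for _ in range(nvars)]
        if f(*point) % P != g(*point) % P:
            return False          # definitely not equivalent
    return True                   # equivalent with high probability

print(equivalent(lambda a, b: a + b, lambda a, b: b + a, 2))         # True
print(equivalent(lambda a, b: (a + b)**2,
                 lambda a, b: a*a + 2*a*b + b*b, 2))                 # True
print(equivalent(lambda a, b: a*a, lambda a, b: a, 2))               # False (w.h.p.)
```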
Division can also be handled by expressing the tree as a rational polynomial (ratio of two polynomials), then again using polynomial identity testing.
If you can have addition, subtraction, multiplication, division, and exponentiation, then I don't know whether there are efficient algorithms. I think that problem, or some variant of it, is addressed here: Decidability of equality, and soundness of expressions involving elementary arithmetic and exponentials.
(Possibly loosely related: Decidability of Equality of Radical Expressions)
Let me also highlight that expressions like $a^b$ "smell fishy" and don't typecheck when $a,b \in \mathbb{F}$ are elements of a finite field. For instance, if $\mathbb{F}=GF(p)$, then exponentiation $a^b$ is well-defined if $a$ is taken modulo $p$ and $b$ is taken modulo $p-1$ (not modulo $p$). In particular, $(a+p)^b=a^b$ modulo $p$, but $a^{b+p} \ne a^b$ modulo $p$. So, to make an expression like $a^b$ make sense, $a^b$ has to be interpreted as $a^{f(b)}$ where $f$ maps from elements of $GF(p)$ to numbers $\{0,1,\dots,p-2\}$. There is a natural way to do that, e.g., $f(0)=0$, $f(1)=1$, ..., $f(p-2)=p-2$, $f(p-1)=0$. But then when you do that, exponentiation doesn't have the properties you might expect. In particular, $a^{b+c}$ is no longer equal to $a^b \times a^c$. For instance, if $b=1$ and $c=p-1$, then $a^{b+c}=a^{p-1+1}=a^0=1$, but $a^b = a^1 = a$ and $a^c = a^{p-1} = 1$, so $a^{b+c}=1$ but $a^b \times a^c = a$. So I would be pretty suspicious of any expression that contains exponentiation where the expression in the exponent is an element of $\mathbb{F}$. | {
"domain": "cs.stackexchange",
"id": 18328,
"tags": "algorithms, graphs"
} |
In the derivation of Noether theorem, why do we subtract the same quantity to obtain Noether current? | Question: In a general approach derivation of Noether theorem, we have
$$
\alpha \Delta \mathcal{L} = \alpha \partial _{\mu} \left( \frac{\partial \mathcal{L}}{\partial \left( \partial _{\mu} \phi \right)} \Delta \phi \right) + \alpha \left( \frac{\partial \mathcal{L}}{\partial \phi} - \partial _{\mu} \frac{\partial \mathcal{L}}{\partial \left( \partial _{\mu} \phi \right)} \right) \Delta \phi
$$
The second term vanishes because of Lagrange equation. Then we define $\partial _{\mu} \mathcal{J}^{\mu} \equiv \partial _{\mu} \left( \frac{\partial \mathcal{L}}{\partial \left( \partial _{\mu} \phi \right)} \Delta \phi \right)$. Therefore conclude $\partial _{\mu} j^{\mu} = 0$ for
$$
j^{\mu} = \frac{\partial \mathcal{L}}{\partial \left( \partial _{\mu} \phi \right)} \Delta \phi - \mathcal{J}^{\mu}
$$
This made me so confused, because isn't $j^{\mu}$ just identically ZERO? If it is, why do we then care about its four-divergence anyway?
Any thoughts would be appreciated!
Answer: You're misreading the cited source. An action-preserving transformation of $\phi$ adds a total derivative to $\mathcal{L}$. In particular, the fact that $\delta S=0$ doesn't use the Euler-Lagrange equation. The trick you're missing is that you should find, without using the ELE, a valid-on-shell choice of $\mathcal{J}^\mu$ for which $\delta\mathcal{L}=\alpha\,\partial_\mu\mathcal{J}^\mu$. I'm sure you'll see this at work if you go through their examples carefully.
"domain": "physics.stackexchange",
"id": 53081,
"tags": "quantum-field-theory, conservation-laws, noethers-theorem"
} |
Set an instant velocity on Gazebo | Question:
Hello,
I am working on Gazebo 7 with ROS kinetic.
So, here is my problem, I have that code function :
void SetVelocity(const double &_vel)
{
this->model->GetJointController()->SetVelocityTarget(this->joint->GetScopedName(), _vel);
}
It's used to set a velocity to my pioneer, it's working pretty well but I want the velocity to be instant.
For example, when I set it to a velocity of 30 and then a velocity of 0, it takes a long time to stop. What are the solutions? Here is the rest of the code:
this->model = _model;
this->joint = _model->GetJoints()[0];
this->pid = common::PID(0.1, 0, 0, 1000, -1000, 1000, -1000);
this->model->GetJointController()->SetVelocityPID(this->joint->GetScopedName(), this->pid);
Thank in advance.
Originally posted by shenki on ROS Answers with karma: 16 on 2017-04-06
Post score: 0
Answer:
For the moment I found that, it's working but it's not that good...
public: void SetVelocity(const double &_vel)
{
double __vel = _vel;
if(__vel>30)__vel=30;
this->joint->SetDamping(0,old_value/10);
this->joint->Update();
this->model->GetJointController()->SetVelocityTarget(this->joint->GetScopedName(), __vel);
old_value = __vel;
}
This is stupid, so if someone has better, i'll take it, thank you !
Originally posted by shenki with karma: 16 on 2017-04-06
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 27539,
"tags": "ros, gazebo7, ros-kinetic"
} |
Collision that governs kinetic model of gas | Question: From Atkins' Physical Chemistry 10th, it states that
The kinetic model is based on three assumptions:
The gas consists of molecules of mass $m$ in ceaseless random motion obeying the laws of classical mechanics.
The size of the molecules is negligible, in the sense that their diameters are much smaller than the average distance travelled between collisions.
The molecules interact only through brief elastic collisions.
I can't quite get the second assumption. The intermolecular distance between the molecules is greater than the diameter of the molecules, so we can neglect the volume. Given that Boyle's law applies (as omitting the size and volume of molecules is a property that an ideal gas has), many molecules will collide with the walls, and the molecules will hit other molecules after their collision with the wall (right?), thus the pressure of the system should be low. How often does a collision of which kind happen, and under what condition?
If the separation doesn't result in any influence on each molecule (because the third assumption says the collisions are elastic, so any interaction (hydrogen bonding, van der Waals, and so on) higher than the collision energy should be omitted), will the collisions come purely from collisions with the walls, not collisions between molecules?
Wikipedia states also the assumption that
The number of molecules is so large that statistical treatment can be applied.
Answer:
There will be many molecules collide with the walls and the molecules will hit another molecules after its collision with wall (right?), thus the pressure of the system should be low. How often the collision of what kind happens under what condition?
The collision condition doesn't need to hold strictly under Boyle's-law conditions: elastic collisions happen between anything and under any pressure condition. The requirement of a large number of molecules, with only collision interactions applied, implies that it is simply not applicable to apply Boyle's law in the kinetic model of a gas.
"domain": "chemistry.stackexchange",
"id": 12553,
"tags": "physical-chemistry"
} |
Could anyone help explain this current voltage graph for an LED in liquid nitrogen? | Question: I've been doing my coursework investigating LEDs at various temperatures and I've come across an interesting phenomenon which nobody I've asked has been able to explain thoroughly - whereas at room temperature, the LED gives a standard exponential response, when placed in liquid nitrogen (at -196 C) the graph is pretty strange. This data was recorded using a constant-current power supply, and is a combination of three different experiments - the LED still worked fine after each experiment at room temperature.
I've asked a couple of my teachers, and the answers they gave ranged from 'the lattice might change and contract at cooler temperatures' to 'the internal temperature of the LED might increase when it has higher currents'. I was wondering if anyone had any more domain-specific knowledge than my teachers and could help explain this :)
Here's the graph of this particular LED at room temperature (around 24 C in this case):
Here's the one in LN2:
If it helps, the LED was Cyan in colour and had a wavelength of roughly 485nm in LN2, and 497nm at 80 C.
Many thanks,
Tom
Answer: There are really two questions here (I think):
why is the voltage drop so different for the cold LED (notice it ranges from 3.5 to 4.5 V at LN temperature, but from 2.0 to 3.2 at room temperature)
why does the LN curve exhibit the strange curvature?
CuriousOne already hinted at the answer - this has to do with the temperature of the LED. In particular, the voltage developed at constant current is much greater at low temperatures.
In the case of the device immersed in LN, the heating will initially be quite small as the current is small. Thus the junction temperature does not change much. But once the current increases, so does the heat dissipation - and then the junction heats up and the forward voltage will drop. As CuriousOne suggested, very short pulses of current of different magnitude might remove some of that effect. In fact, LEDs are known to be more efficient (quite a bit!) when driven with a pulsed rather than constant current - this is mostly related to the saturation of absorption centers in the lattice (bleaching) but also somewhat to heating effects.
So what causes the overvoltage to be a function of temperature? This is explained in this article which shows that the overvoltage of a diode junction near room temperature changes by about 2.5 mV / °C. At LN temperatures (about -195 °C) you would expect the voltage to be at least 0.5 V higher than at room temperature: the fact that it is a bit more is because it is not a "small" temperature change so the simple expression doesn't quite apply.
The ideal diode equation states
$$I_D = I_S\left(e^{V_D/\eta V_T}-1\right)$$
In this equation, the thermal voltage $V_T = \frac{kT}{q}$ - the Boltzmann constant times the temperature divided by the charge on the electron.
Since the Boltzmann constant and charge are constant, it follows that the thermal voltage scales with temperature; and the saturation current $I_S$ depends strongly on temperature (it is much lower when the temperature is lower, as there are fewer electrons which have sufficient energy to cross the bandgap).
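As a quick illustration of that temperature scaling (my own numbers, using standard physical constants):

```python
# Thermal voltage V_T = kT/q at room temperature and at liquid-nitrogen
# temperature, showing how directly V_T scales with T.
k = 1.380649e-23       # Boltzmann constant, J/K
q = 1.602176634e-19    # elementary charge, C

for T in (300.0, 77.0):            # room temperature, boiling LN2
    print(T, k * T / q)            # ~25.9 mV at 300 K, ~6.6 mV at 77 K
```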
There is a better analysis of all this at this educypedia link where they explore and graph the behavior. Specifically, it explains that the strong temperature dependence of the forward voltage relates directly to the saturation current, which itself is a strong function of the bandgap (quite large for blue LEDs) and temperature. This is the effect that I believe dominates here. | {
"domain": "physics.stackexchange",
"id": 23260,
"tags": "electric-current, voltage, light-emitting-diodes"
} |
Quantum lambda calculus | Question: Classically, there are 3 popular ways to think about computation: Turing machine, circuits, and lambda-calculus (I use this as a catch all for most functional views). All 3 have been fruitful ways to think about different types of problems, and different fields use different formulation for this reason.
When I work with quantum computing, however, I only ever think about the circuit model. Originally, QC was defined in terms of quantum Turing machines but as far as I understand, this definition (although equivalent to quantum circuits if both are formulated carefully) has not been nearly as fruitful. The 3rd formulation (in terms of lambda-calculus or similar functional settings) I am completely unfamiliar with. Hence my questions:
What are useful definitions of quantum lambda-calculus (or other functional paradigms)?
What subfields of QIP gain deeper insight from using this formulation instead of the circuit model?
Notes
I am aware that I am ignoring many other popular formalisms like cellular automata, RAM-models, etc. I exclude these mostly because I don't have experience with thinking in terms of these models classically, let alone quantumly.
I am also aware that there are popular alternatives in the quantum setting, such as measurement-based, topological, and adiabatic. I do not discuss them because I am not familiar with the classical counterparts.
Answer: here is a half-baked answer:
I know that Ugo Dal Lago at University of Bologna has been studying quantum lambda calculus. You may want to check his publications and perhaps this one in particular:
Quantum implicit computational complexity by U. Dal Lago, A. Masini, M. Zorzi.
I am saying it's a half-baked answer, because I haven't had chance to read any of his works. | {
"domain": "cs.stackexchange",
"id": 4271,
"tags": "lambda-calculus, quantum-computing, reference-request, computation-models"
} |
Can a plant be induced to accelerate transpiration? | Question: Just what the title states.
I wonder whether it is possible to fire a chemical switch - sort-of like injecting adrenaline in a human, to accelerate a particular process in a plant. For example, transpiration.
Answer: With high(er) temperatures, especially in drier air with a bit of breeze, a plant will transpire more. While not a chemical solution, it is indeed a mechanism that serves to cool the plant. | {
"domain": "biology.stackexchange",
"id": 627,
"tags": "plant-physiology"
} |
Setting flags to show three buttons | Question: I have this loop that iterates and assigns a variable to true depending on the different conditions
for (const element of actionsReferences) {
if (element === 'accept') {
this.showAcceptButton = true
} else if (element === 'reject') {
this.showRejectButton = true
} else if (element === 'transfer') {
this.showTransferButton = true
}
}
How can I get the same result while avoiding if()?
Answer: You could use a string to function "map", in JavaScript that can be implemented with a simple object:
var map = {
'accept' : function(o) { o.showAcceptButton = true; },
'reject' : function(o) { o.showRejectButton = true; },
'transfer' : function(o) { o.showTransferButton = true; }
};
let thisObject = {}; // fake this object
map['accept'](thisObject);
map[element](this); // use within your loop
// ES6 map
const map6 = {
accept : (o) => o.showAcceptButton = true,
reject : (o) => o.showRejectButton = true,
transfer : (o) => o.showTransferButton = true
};
// alternative ES6 map
const map6a = {
accept(o) { o.showAcceptButton = true; },
reject(o) { o.showRejectButton = true; },
transfer(o) { o.showTransferButton = true; }
};
map6['reject'](thisObject);
map6a['transfer'](thisObject);
// check if function exists and really is a function
if ('accept' in map6 && typeof map6['accept'] === 'function') map6['accept'](thisObject); | {
"domain": "codereview.stackexchange",
"id": 28569,
"tags": "javascript, ecmascript-6"
} |
What is the longest detectable EM wavelength? | Question: What is the longest detectable (by today's technology) EM wavelength? and is there a limit of the energy that those with longer wavelengths that we cannot detect can carry? can there be a galactic or "Intergalactic space" scale standing EM waves? e.g. a standing wave between the BHs at the centers of the Milky-way and Andromeda?
Answer: The fundamental electromagnetic Schumann resonance has a wavelength approximately equal to the Earth's circumference. No longer wavelength can propagate in the waveguide formed by the Earth surface and the ionosphere. Lower frequency radiation from space cannot penetrate the ionosphere or even the less dense solar wind and interstellar medium. The intergalactic medium may allow electromagnetic waves with wavelengths of a few million kilometers, but those cannot penetrate the Galaxy.
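As a rough sanity check (my own estimate, not from the original answer), a wave whose wavelength equals Earth's circumference has a frequency of the same order as the measured fundamental Schumann resonance of about 7.83 Hz:

```python
# Ideal-cavity estimate of the fundamental Schumann frequency: a wavelength
# equal to Earth's circumference. The real resonance sits a bit higher
# because the Earth-ionosphere cavity is not an ideal waveguide.
c = 2.99792458e8             # speed of light, m/s
circumference = 4.0075e7     # Earth's equatorial circumference, m
print(c / circumference)     # ~7.5 Hz
```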
The limit in space is the plasma frequency, an effect of the free electrons in plasma. Plasma is ubiquitous in space. The Schumann resonances are only possible because the Earth's atmosphere contains almost no free electrons.
Note that very close to the plasma frequency, the phase velocity of electromagnetic radiation approaches infinity, so the wavelength can, in principle, be arbitrarily long. The group velocity, however, approaches zero, so such radiation cannot, in practice, propagate. | {
"domain": "physics.stackexchange",
"id": 94896,
"tags": "electromagnetic-radiation, astronomy, measurements, wavelength, dark-energy"
} |
cannot find -lOpenNI2Orbbec. ROS Orbbec Astra camera | Question:
I'm trying to interface my Orbbec Astra camera using ROS (c++).
I figured out I need to use the custom OpenNI2 by Orbbec.
I built it according to the instructions.
Then I tried to build ros_astra_camera.
catkin_make --pkg astra_camera gives me /usr/bin/ld: cannot find -lOpenNI2Orbbec
My CMakeLists.txt:
cmake_minimum_required(VERSION 2.8.3)
project(camera)
find_package(catkin REQUIRED COMPONENTS
roscpp
std_msgs
message_generation
image_transport
cv_bridge
)
find_package( OpenCV REQUIRED )
add_message_files(
FILES
TrackedPosition.msg
)
generate_messages(
DEPENDENCIES
std_msgs
)
catkin_package(
CATKIN_DEPENDS roscpp std_msgs message_runtime
)
include_directories(
${catkin_INCLUDE_DIRS}
${OpenCV_INCLUDE_DIRS}
)
add_executable(tracker src/tracker.cpp)
target_link_libraries(tracker ${OpenCV_LIBRARIES} ${catkin_LIBRARIES})
add_dependencies(tracker camera_generate_messages_cpp)
My package.xml:
<?xml version="1.0"?>
<package>
<name>camera</name>
<version>0.0.0</version>
<description>The camera package</description>
<maintainer email="jeff@todo.todo">jeff</maintainer>
<license>TODO</license>
<buildtool_depend>catkin</buildtool_depend>
<build_depend>roscpp</build_depend>
<build_depend>std_msgs</build_depend>
<build_depend>message_generation</build_depend>
<build_depend>image_transport</build_depend>
<build_depend>cv_bridge</build_depend>
<run_depend>roscpp</run_depend>
<run_depend>std_msgs</run_depend>
<run_depend>message_runtime</run_depend>
<run_depend>image_transport</run_depend>
<run_depend>cv_image</run_depend>
</package>
Any help is very much appreciated.
Please let me know if you need any additional information.
Originally posted by voxl on ROS Answers with karma: 1 on 2016-12-13
Post score: 0
Answer:
Had this issue myself - Rename Astra OpenNI2 file to OpenNI2Orbbec in /usr/lib
Mark
Originally posted by MarkyMark2012 with karma: 1834 on 2017-01-28
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by gvdhoorn on 2017-01-29:
Are you suggesting to rename the library itself?
Comment by MarkyMark2012 on 2017-01-29:
You can simply add a sym link to the OpenNI2 library file. Calling the link libOpenNI2Orbbec or there abouts
Comment by dwikyerl on 2017-01-29:
There is no Astra OpenNI2 file in my /usr/lib | {
"domain": "robotics.stackexchange",
"id": 26475,
"tags": "ros, openni, libopenni"
} |
Looking for the name of a particular device | Question: Please move this if it's not in the right location.
I'm looking for the name of a device that I frequently see in many scenarios, specifically that of an office/library which can be described as having multiple rings that rotate in various directions. I was thinking it was a gyroscope or perhaps a celestial globe, but something tells me that it's not quite what I'm looking for. I recall that there is a movie production company which uses this device as their symbolic figure of their logo.
Answer: The device you are describing is a Cardan suspension. | {
"domain": "physics.stackexchange",
"id": 2409,
"tags": "soft-question, terminology, gyroscopes"
} |
Velocity is zero, but acceleration is not? | Question: Imagine a block (of mass m) attached to a spring (of spring constant k) that is hanging from a fixed support on the ceiling. The spring is initially in its relaxed state (no compression or extension). At this moment, the velocity of the block is zero.
When I release the block from rest, the force due to gravity and the force due to the spring (the restoring force) are the two forces acting on the block.
According to me,when:
1. magnitude of Force due to gravity > magnitude of Restoring force: the velocity of the body increases.
2. magnitude of Force due to gravity = magnitude of Restoring force: the acceleration of the body is zero, and the body has its maximum velocity.
3. magnitude of Force due to gravity < magnitude of Force of the spring: the velocity of the body decreases until it stops.
However, when the spring reaches maximum extension, there is still a negative acceleration, but the velocity of the body is zero.
[ I used work-energy theorem to calculate maximum extension as $\frac{2mg}{k}$, and therefore, force due to spring = 2mg,
force due to gravity= mg,
and net force = force due to spring - force due to gravity = mg]
If there is a net upward force, then there is also an acceleration, but the velocity is zero. How is this possible?
Answer: It is possible because the velocity goes from a negative velocity to a positive velocity (depending on how you choose the axis). The object has a velocity towards the ground, but due to a force in the opposite direction, the object decelerates to zero. At zero velocity, there is still a net force acting on the object due to the spring, so the object will accelerate in the opposite direction.
Try to compare it with this: you throw an object vertically upwards with a velocity $v_{0}$, so the only force acting on the object is gravity, with a downward acceleration $-g$. At a certain point, the object will slow down to a velocity of $0$ and then fall back down to Earth. In this whole process, the acceleration is constant, but the velocity is still zero at the highest point.
More mathematically: $a = -g = \frac{dv}{dt}$. This is a simple differential equation and can be solved easily with integrals: $-gdt = dv$ so $\int_{t_0}^{t}-gdt = -g\int_{t_0}^{t}dt = -g\cdot(t-t_0) = -g\cdot t$ if we take $t_0 = 0s$ and also $\int_{v_0}^vdv = v(t)-v_0$, and so we get $-g\cdot t = v(t)-v_0$ or $v(t) = v_0 - g\cdot t$. Acceleration $a$ is constant and thus $a(t) = -g$. Now $v(t)$ will be equal to zero for $t = \frac{v_0}{g}$, so $v(\frac{v_0}{g}) = 0$ but also $a(\frac{v_0}{g}) = -g \neq 0$.
A mathematical equation like this can also be derived for your spring problem, and then you'll see that the velocity will in fact be zero when the acceleration is at its maximum, and that the velocity will be at its maximum for zero (net) acceleration. This is known as a harmonic oscillator but requires some knowledge of differential equations.
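The same conclusion can be checked symbolically. Below is a small sketch using `sympy` (an illustration, not part of the original answer): it takes the solution $x(t)=\frac{mg}{k}(1-\cos\omega t)$ of the spring problem, with $x$ measured downward from the release point, and evaluates position, velocity and acceleration at the moment of maximum extension.

```python
import sympy as sp

t, m, k, g = sp.symbols('t m k g', positive=True)
w = sp.sqrt(k / m)
# Solution of m*x'' = m*g - k*x with x(0) = x'(0) = 0,
# where x is measured downward from the release point.
x = (m * g / k) * (1 - sp.cos(w * t))
v = sp.diff(x, t)
a = sp.diff(v, t)
t_max = sp.pi / w  # time of maximum extension
print(sp.simplify(x.subs(t, t_max)))  # -> 2mg/k, the maximum extension
print(sp.simplify(v.subs(t, t_max)))  # -> 0, the velocity vanishes there...
print(sp.simplify(a.subs(t, t_max)))  # -> -g, ...while the acceleration does not
```

At $t=\pi/\omega$ the extension is $2mg/k$, the velocity is exactly zero, and the acceleration is $-g$ (a net upward force of magnitude $mg$), matching the work-energy result in the question.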
"domain": "physics.stackexchange",
"id": 44145,
"tags": "homework-and-exercises, forces, kinematics, work, spring"
} |
Kobuki auto-docking error | Question:
Hi,
I am currently working with a Turtlebot 2 with a Kobuki base,
I tried to follow the tutorial http://ros.org/wiki/kobuki/Tutorials/TestingAutomaticDocking
I can start the Kobuki node with no error, but as soon as I start the auto-docking nodelet I get this error :
Warning: class_loader::class_loader_private: SEVERE WARNING!!! A namespace collision has occured with plugin factory for class (null). New factory will OVERWRITE existing one. This situation occurs when libraries containing plugins are directly linked against an executable (the one running right now generating this message). Please separate plugins out into their own library or just dont link against the library and use either class_loader::ClassLoader/MultiLibraryClassLoader to open.
at line 180 in /opt/ros/groovy/include/class_loader/class_loader_core.h
/opt/ros/groovy/lib/nodelet/nodelet: symbol lookup error: /opt/ros/groovy/lib/libkobuki_auto_docking_ros.so: undefined symbol: _ZN9actionlib15GoalIDGeneratorC1Ev
[kobuki-2] process has died [pid 31630, exit code 127, cmd /opt/ros/groovy/lib/nodelet/nodelet manager __name:=kobuki __log:=/home/caroline/.ros/log/3fd781dc-b72b-11e2-a8e9-002354f2e9ef/kobuki-2.log].
log file: /home/caroline/.ros/log/3fd781dc-b72b-11e2-a8e9-002354f2e9ef/kobuki-2*.log
Did someone experience the same problem ?
Or does someone have some idea why I get that ?
Caroline
Originally posted by CarolineQ on ROS Answers with karma: 395 on 2013-05-07
Post score: 1
Answer:
The module was missing some links to the actionlib library after it got catkinized. This has been fixed, but of course, takes a while to filter through to the public debs. You can either work from source or from the shadow-fixed debs in the meantime.
For others who've had the same problem, the issue is being tracked here.
Originally posted by Daniel Stonier with karma: 3170 on 2013-05-07
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by CarolineQ on 2013-05-09:
Thanks for your answer, I just updated the package and now it seems to be solved. Thanks. | {
"domain": "robotics.stackexchange",
"id": 14095,
"tags": "kobuki"
} |
md5sum mismatch (954ba1b87d3a20757aed046b5b4078f1 != 4b2834b201a8e322d0b941a6eec7557c) on build/gmapping_r39.tar.gz; aborting | Question:
I am trying to install slam_gmapping package, but when I type rosmake, one error happens.
md5sum mismatch (954ba1b87d3a20757aed046b5b4078f1 != 4b2834b201a8e322d0b941a6eec7557c) on build/gmapping_r39.tar.gz; aborting
I have researched but I could not solve the problem. Anyone have a idea of how I can fix this?
Thanks
Originally posted by alexandre on ROS Answers with karma: 1 on 2013-12-14
Post score: 0
Answer:
How are you trying to install gmapping?
It looks like your download is being truncated. If you're compiling from source you should remove that file and try again.
Originally posted by tfoote with karma: 58457 on 2013-12-17
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 16461,
"tags": "navigation, gmapping"
} |
dipole field on axis twice the field on perpendicular bisector | Question: Why is the dipole field on axis twice the field on perpendicular bisector?
For the perpendicular bisector:
Let's assume $-q$ is to the right of the origin and $+q$ is to the left of the origin, both at a distance $a$ from the origin.
The electric field at a point on the y-axis(on the perpendicular bisector of the dipole) is
$$\vec E = \vec E_+ + \vec E_- = {kq\over(\sqrt{a^2+y^2})^2}\,{a\hat i-(-a\hat i)\over\sqrt{a^2+y^2}}={2akq\over(a^2+y^2)^{\frac32}}\hat i$$
$$\text{when }y>>a:\quad{{2akq\over y^3}\hat i}$$
$$\text{in terms of electric dipole moment }p=2aq:\quad{{kp\over y^3}\hat i}$$
For the electric field on the dipole's axis, the value my book gives for x>>a is twice the dipole field on the perpendicular bisector:
$${2kp\over x^3}\hat i$$
But they really don't explain why. I tried to derive it with no luck. Why is it twice the other value? Can you help me derive it?
Thank you already
Answer: The general equation for the electric field of a dipole at a distance $R$ and angle $\theta$ from the dipole axis is $$E=\frac{kp}{R^3}\sqrt{1+3\cos^2\theta}.$$ When $\theta=90^\circ$ the point lies on the equatorial line (the perpendicular bisector), and when $\theta=0$ it lies on the axial line. Substituting these values gives the required results, i.e. $\frac{kp}{R^3}$ and $\frac{2kp}{R^3}$.
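As a cross-check of the large-distance behaviour, the `sympy` sketch below (an illustration; it places $+q$ at $+a$ and $-q$ at $-a$, with $k$ the Coulomb constant) extracts the leading $1/r^3$ coefficient of the field on the axis and on the perpendicular bisector, and confirms the factor of two:

```python
import sympy as sp

r, a, q, kc = sp.symbols('r a q k', positive=True)
p = 2 * a * q  # dipole moment
# Field on the axis (+q at +a, -q at -a) and on the perpendicular bisector.
E_axial = kc * q / (r - a)**2 - kc * q / (r + a)**2
E_equat = 2 * kc * q * a / (a**2 + r**2)**sp.Rational(3, 2)
# Leading 1/r^3 coefficients for r >> a.
lead_axial = sp.limit(E_axial * r**3, r, sp.oo)
lead_equat = sp.limit(E_equat * r**3, r, sp.oo)
print(sp.simplify(lead_axial / lead_equat))  # -> 2: the axial field is twice as large
print(sp.simplify(lead_axial - 2 * kc * p)) # -> 0: the axial field tends to 2kp/r^3
```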
"domain": "physics.stackexchange",
"id": 21766,
"tags": "electric-fields, coulombs-law, dipole"
} |
What is "structural model checking"? | Question: In "Calculational Desing of a Regular Model Checker by Abstract Interpretation" (see) Cousot first defines a definition of model checking in his new settings at the page 9 then at the page 11 he says that the previous definition is impractical for structural model checking.
So what is structural model checking?
Answer: I hadn't seen that term before either. In essence it seems to mean a model checking algorithm taking a program $P$ (with its syntactic structure) and a property that exploits $P$'s structure in the actual model checking as opposed to more traditional algorithms that turn $P$ into a labelled transition system and then go to town on that.
Definition 3 in the paper gives the technical details in three installments, by induction on the program structure. | {
"domain": "cstheory.stackexchange",
"id": 5622,
"tags": "model-checking, formal-methods"
} |
How many types of specific heat can a gas have? | Question:
How many specific heats can a gas have? Which of the following three options is correct?
1. Only one. 2. Only two. 3. Infinite
We know that a gas has two specific heats - $C_p$ (Specific heat at constant pressure) and $C_v$ (Specific heat at constant volume). I answered option 2 but the correct option according to the question paper solution is option 3.
So, why does a gas have infinite number of specific heats and how do I conceptualise this? Is each specific heat related to a unique thermodynamic process?
Answer: The question may be aiming to expand one's scope beyond what's considered in introductory textbooks. This is the essence of scientific research.
The heat capacity $C_X$ at a condition of constant $X$ is $C_X\equiv T\left(\frac{\partial S}{\partial T}\right)_X$, with temperature $T$ and entropy $S$. We can interpret this as the heating required to obtain a certain temperature change (at constant $X$).
Introductory treatments often assume that a gas is enclosed in an an impermeable container (constant $N$); thus, we have the associated standard constant-volume and constant-pressure heat capacities
$$C_{V,N}\equiv T\left(\frac{\partial S}{\partial T}\right)_{V,N}\,\,\mathrm{and}$$
$$C_{P,N}\equiv T\left(\frac{\partial S}{\partial T}\right)_{P,N},$$
which for the ideal gas are conveniently equal to $\frac{dU}{dT}$ and $\frac{dH}{dT}$, respectively, with the internal energy $U$ and enthalpy $H$ potentials. (Note that no subscripts are required in this special case because they're irrelevant; for the ideal gas, the coefficients of $dV$ and $dP$ in the corresponding fundamental relations are zero.)
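As a concrete illustration of those convenient equalities, here is a short `sympy` sketch for the special case of a monatomic ideal gas (the choice $U=\frac{3}{2}nRT$ is an assumption made for illustration only); differentiating $U$ and $H$ recovers Mayer's relation $C_P-C_V=nR$:

```python
import sympy as sp

n, Rg, T = sp.symbols('n R T', positive=True)
U = sp.Rational(3, 2) * n * Rg * T  # internal energy of a monatomic ideal gas
H = U + n * Rg * T                  # H = U + PV, using PV = nRT
C_V = sp.diff(U, T)                 # C_V = dU/dT
C_P = sp.diff(H, T)                 # C_P = dH/dT
print(C_P - C_V)                    # -> equals n*R (Mayer's relation)
```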
OK, enough review of introductory material. Consider now a gas at equilibrium with an adjacent material. (The system boundaries still include only the gas.) The corresponding heat capacities are now relevant:
$$C_{V,\mu}\equiv T\left(\frac{\partial S}{\partial T}\right)_{V,\mu}\,\,\mathrm{and}$$
$$C_{P,\mu}\equiv T\left(\frac{\partial S}{\partial T}\right)_{P,\mu},$$
with constant chemical potential $\mu$, as the number of gas molecules is no longer constant; the gas can diffuse into and/or react with the other material. (Note that this analysis requires care because the system entropy is now affected not just by heating but also by mass transfer, as mass carries its own entropy. One obtains an infinite heat capacity for some simple condensation models, for example, because no amount of cooling can lower the gas temperature while it's condensing—it just accelerates the condensation rate at the boiling temperature.)
Still broader, consider a magnetic gas; that is, a gas for which another type of work other than pressure–volume work is relevant: magnetic field–magnetization work, or $B$–$M$ work.
We can define new potentials
$$\Phi\equiv U+MB=TS-PV+\mu N+MB;$$
$$\Psi\equiv\Phi-MB,$$
where we distinguish $\Psi$ from $U$ by the possibility of $M$–$B$ work when the former is used. We might now heat a gas at constant magnetic field $B$ or constant magnetization $M$, respectively corresponding to heat capacities
$$C_{Y,B}\equiv T\left(\frac{\partial S}{\partial T}\right)_{Y,B}\,\,\mathrm{and}$$
$$C_{Y,M}\equiv T\left(\frac{\partial S}{\partial T}\right)_{Y,M},$$
where $Y$ refers to some combination of variables we already discussed ($V$, $P$, $N$, $\mu$) being held constant. Note that it can still be convenient to work in terms of a potential: $C_{M}=\frac{\partial \Phi}{\partial T}$ and $C_{B}=\frac{\partial \Psi}{\partial T}$, where the natural variables are being held constant. This is analogous to $C_V=\frac{\partial U}{\partial T}$ and $C_P=\frac{\partial H}{\partial T}$. (In other words, $\Phi$ and $\Psi$ are just as valid physical properties of the gas as the more familiar $U$ and $H$!) Recognizing such analogies and how they can be extended to an arbitrary degree is how one's thermodynamic muscles strengthen.
This gives eight possible heat capacities, with the possibility of extending the framework further with other work types. And we haven't even gotten into ways that functions of the thermodynamic variables might be held constant, rather than just the variables themselves. An infinite number of variations are possible, so the answer to the original question is (3). | {
"domain": "physics.stackexchange",
"id": 91726,
"tags": "thermodynamics, energy, kinetic-theory"
} |
Are weak vector bosons produced in atomic transitions? | Question: In another question, I asked if gravitons could be produced in atomic transitions (an electron decaying to smaller energy orbitals). The energy taken away is astronomically small though. Can the same be said for W/Z particles? What about virtual W/Z particles? Can these produce neutrinos with an energy comparable to photons?
Answer: The W and Z bosons have masses of 80 and 91 GeV, giga ($10^9$) electron volts. Atomic energies are of the order of electron volts, so they certainly cannot be produced. These large masses, together with the smallness of the weak coupling constant, do not even allow weak interactions to appreciably affect energy levels in atomic states.
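To put the scale mismatch in numbers, here is a back-of-the-envelope comparison (the use of the 13.6 eV hydrogen binding energy as the representative atomic scale is an illustrative assumption, not taken from the answer above):

```python
m_W_eV = 80.4e9         # W boson rest energy, about 80.4 GeV expressed in eV
m_Z_eV = 91.2e9         # Z boson rest energy, about 91.2 GeV expressed in eV
atomic_scale_eV = 13.6  # hydrogen ground-state binding energy, a typical atomic scale
# The gap is nearly ten orders of magnitude, so atomic transitions
# cannot supply anywhere near the energy needed to create a real W or Z.
print(m_W_eV / atomic_scale_eV)
```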
"domain": "physics.stackexchange",
"id": 79964,
"tags": "atomic-physics, weak-interaction"
} |
How is current produced by one rotating charged sphere? | Question: I was doing this problem today:
This question came in the Dhaka University admission exam 20-21
Q) A sphere with charge $q$ is rotated by a non-conducting string at an angular speed $\omega$. What is the amount of current produced by the charged sphere?
(A) $\omega q$
(B) $2\pi\omega q$
(C) $\frac{q}{\omega}$
(D) $\frac{q\omega}{2\pi}$
Third-party question bank's attempt:
$$I=\frac{q}{T}=\frac{q}{\frac{2\pi}{\omega}}=\frac{q\omega}{2\pi}$$
So, (D).
My comments:
I have some problems with this question. According to Wikipedia,
An electric current is a stream of charged particles, such as
electrons or ions, moving through an electrical conductor or space.
There is no such stream of charged particles in the setup of the question. Only one charged sphere is moving/rotating. How will only one charged sphere produce current?
Moreover, according to Wikipedia,
For a steady flow of charge through a surface, the current I (in
amperes) can be calculated with the following equation:
$$I=\frac{Q}{t}$$
where Q is the electric charge transferred through the surface over
a time t.
In the setup of the question, there is no steady flow of charge through a surface. Only one charged sphere is moving/rotating. So, how can $I=\frac{Q}{t}$ be applicable here?
Answer: Current can be defined as amount of charge passing through an area per unit time. One thing that is not clear in the given question is whether the sphere is conductive (all charge is at the surface of the sphere) or insulator (the charge is somewhat distributed over the entire sphere).
The solution I suggest below works for both a conductor sphere and an insulator sphere in which the charge is homogenously distributed with a volume charge density $\rho$.
Assume that in a given time $dt$ the sphere is rotated an angle $d\theta$. The amount of charge $dQ$ that passes through the cross section (half disk) of the sphere is the volume of the spherical wedge times the volume charge density
$$dQ=\rho \frac{2}{3}R^3d\theta.$$
The volume charge density $\rho$ can be calculated as
$$\rho= \frac{q}{\frac{4}{3}\pi R^3}$$
therefore
$$dQ=\frac{qd\theta}{2\pi}.$$
Because current (as stated above) defined as $I=dQ/dt$, we have
$$I=\frac{dQ}{dt}=\frac{q}{2\pi}\frac{d\theta}{dt}=\frac{q\omega}{2\pi}.$$ | {
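The derivation above can be checked symbolically; the short `sympy` sketch below reproduces $I=\frac{q\omega}{2\pi}$ from the wedge-volume argument and shows that the radius $R$ cancels:

```python
import sympy as sp

q, R, w = sp.symbols('q R omega', positive=True)
rho = q / (sp.Rational(4, 3) * sp.pi * R**3)  # uniform volume charge density
dQ_dtheta = rho * sp.Rational(2, 3) * R**3    # charge in a wedge of opening angle dtheta
I = sp.simplify(dQ_dtheta * w)                # I = dQ/dt = (dQ/dtheta) * omega
print(I)                                      # -> q*omega/(2*pi), independent of R
```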
"domain": "physics.stackexchange",
"id": 89470,
"tags": "homework-and-exercises, electromagnetism, electric-current"
} |
Open source Anomaly Detection in Python | Question: Problem Background:
I am working on a project that involves log files similar to those found in the IT monitoring space (to my best understanding of IT space). These log files are time-series data, organized into hundreds/thousands of rows of various parameters. Each parameter is numeric (float) and there is a non-trivial/non-error value for each time point. My task is to monitor said log files for anomaly detection (spikes, falls, unusual patterns with some parameters being out of sync, strange 1st/2nd/etc. derivative behavior, etc.).
On a similar assignment, I have tried Splunk with Prelert, but I am exploring open-source options at the moment.
Constraints:
I am limiting myself to Python because I know it well, and would like to delay the switch to R and the associated learning curve. Unless there seems to be overwhelming support for R (or other languages/software), I would like to stick to Python for this task.
Also, I am working in a Windows environment for the moment. I would like to continue to sandbox in Windows on small-sized log files but can move to Linux environment if needed.
Resources:
I have checked out the following with dead-ends as results:
Some info here is helpful, but unfortunately, I am struggling to find the right package because:
Twitter's "AnomalyDetection" is in R, and I want to stick to Python. Furthermore, the Python port pyculiarity seems to cause issues in implementing in Windows environment for me.
Skyline, my next attempt, seems to have been pretty much discontinued (from github issues). I haven't dived deep into this, given how little support there seems to be online.
scikit-learn I am still exploring, but this seems to be much more manual. The down-in-the-weeds approach is OK by me, but my background in learning tools is weak, so would like something like a black box for the technical aspects like algorithms, similar to Splunk+Prelert.
Problem Definition and Questions:
I am looking for open-source software that can help me with automating the process of anomaly detection from time-series log files in Python via packages or libraries.
Do such things exist to assist with my immediate task, or are they imaginary in my mind?
Can anyone assist with concrete steps to help me to my goal, including background fundamentals or concepts?
Is this the best StackExchange community to ask in, or is Stats, Math, or even Security or Stackoverflow the better options?
EDIT [2015-07-23]
Note that the latest update to pyculiarity seems to be fixed for the Windows environment! I have yet to confirm, but should be another useful tool for the community.
EDIT [2016-01-19]
A minor update. I have not had time to work on this or to research further, but I am taking a step back to understand the fundamentals of this problem before continuing to research in specific details. For example, two concrete steps that I am taking are:
Starting with the Wikipedia articles for anomaly detection, understanding fully, and then either moving up or down in concept hierarchy of other linked Wikipedia articles, such as this, and then this.
Exploring techniques in the great surveys done by Chandola et al 2009 Anomaly Detection: A Survey and Hodge et al 2004 A Survey of Outlier Detection Methodologies.
Once the concepts are better understood (I hope to play around with toy examples as I go to develop the practical side as well), I hope to understand which open source Python tools are better suited for my problems.
EDIT [2020-02-04]
It has been a few years since I worked on this problem, and am no longer working on this project, so I will not be following or researching this area until further notice. Thank you very much to all for their input. I hope this discussion helps others that need guidance on anomaly detection work.
FWIW, if I had to do the same project now with the same resources (few thousand USD in expenses), I would pursue the deep learning/neural network approach. The ability of the method to automatically learn structure and hierarchy via hidden layers would've been very appealing since we had lots of data and (now) could spend the money on cloud compute. I would still use Python though ;).
Cheers!
Answer: Anomaly Detection or Event Detection can be done in different ways:
Basic Way
Derivative! If the deviation of your signal from its past and future values is high, you most probably have an event. This can be extracted by finding large zero crossings in the derivative of the signal.
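For instance, here is a minimal `numpy` sketch of that idea (the 3-sigma threshold on the differenced signal and the toy spike signal are illustrative choices, not from the original answer):

```python
import numpy as np

def derivative_events(series, threshold=3.0):
    """Return indices where the first difference is unusually large;
    threshold is in units of the std of the differences."""
    d = np.diff(series)
    limit = threshold * np.std(d)
    return np.where(np.abs(d) > limit)[0] + 1

# A flat signal with a single spike at index 50.
signal = np.concatenate([np.zeros(50), [10.0], np.zeros(50)])
print(derivative_events(signal))  # flags the jump into and out of the spike
```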
Statistical Way
The mean of a signal is its usual, baseline behavior; if something deviates from the mean, that indicates an event. Please note that the mean of a time-series is not trivial: it is not a constant but changes along with the time-series, so you need to use the "moving average" instead of the plain average. It looks like this:
The Moving Average code can be found here. In signal processing terminology you are applying a "Low-Pass" filter by applying the moving average.
You can follow the code below (TimeSeries is your input data):
import numpy as np
def movingaverage(values, window):
    # simple centered moving average via convolution
    weights = np.ones(window) / window
    return np.convolve(values, weights, mode='same')
MOV = movingaverage(TimeSeries, 5).tolist()
STD = np.std(MOV)
events = []
for ii in range(len(TimeSeries)):
    if TimeSeries[ii] > MOV[ii] + STD:
        events.append(TimeSeries[ii])
Probabilistic Way
These are more sophisticated, especially for people new to Machine Learning. A Kalman Filter is a great way to find anomalies. Simpler probabilistic approaches using "Maximum-Likelihood Estimation" also work well, but my suggestion is to stay with the moving-average idea. It works very well in practice.
I hope I could help :)
Good Luck! | {
"domain": "datascience.stackexchange",
"id": 3614,
"tags": "machine-learning, python, data-mining, anomaly-detection, library"
} |
Laravel 8 registration and login with user profiles | Question: I am working on a Laravel application (Github repo) that requires user registration and login.
After registration, the users can change their registration details (except password, for which there is the default password recovery functionality) and add more info.
They also have the option to replace the default avatar image with a picture of their choice.
In routes\web.php I have:
use Illuminate\Support\Facades\Route;
Route::get('/', [App\Http\Controllers\Frontend\HomepageController::class, 'index'])->name('homepage');
Auth::routes();
Route::get('/dashboard', [App\Http\Controllers\Dashboard\DashboardController::class, 'index'])->name('dashboard');
Route::get('/dashboard/profile', [App\Http\Controllers\Dashboard\UserProfileController::class, 'index'])->name('profile');
Route::post('/dashboard/profile/update', [App\Http\Controllers\Dashboard\UserProfileController::class, 'update'])->name('profile.update');
Route::post('/dashboard/profile/deleteavatar/{id}', [App\Http\Controllers\Dashboard\UserProfileController::class, 'deleteavatar'])->name('profile.deleteavatar');
In Controllers\Dashboard\UserProfileController.php I have:
namespace App\Http\Controllers\Dashboard;
use App\Http\Controllers\Controller;
use Illuminate\Http\Request;
use Auth;
use App\Models\UserProfile;
class UserProfileController extends Controller
{
// Guard this route
public function __construct() {
$this->middleware('auth');
}
public function index(UserProfile $user)
{
return view('dashboard.userprofile',
array('current_user' => Auth::user())
);
}
public function update(Request $request)
{
$current_user = Auth::user();
$request->validate([
'first_name' => ['required', 'string', 'max:255'],
'last_name' => ['required', 'string', 'max:255'],
'email' => ['required', 'email', 'max:100', 'unique:users,email,'. $current_user->id],
'avatar' => ['mimes:jpeg, jpg, png, gif', 'max:2048'],
]);
$current_user->first_name = $request->get('first_name');
$current_user->last_name = $request->get('last_name');
$current_user->email = $request->get('email');
$current_user->bio = $request->get('bio');
// Upload avatar
if (isset($request->avatar)) {
$imageName = md5(time()) . '.' . $request->avatar->extension();
$request->avatar->move(public_path('images/avatars'), $imageName);
$current_user->avatar = $imageName;
}
// Update user
$current_user->update();
return redirect('dashboard/profile')
->with('success', 'User data updated successfully');
}
// Delete avatar
public function deleteavatar($id) {
$current_user = Auth::user();
$current_user->avatar = "default.png";
$current_user->save();
}
}
The update profile form:
<form action="{{ route('profile.update') }}" enctype='multipart/form-data' method="post" novalidate>
{{csrf_field()}}
<div class="form-group">
<input type="text" id="first_name" name="first_name" placeholder="First name" class="form-control" value="{{old('first_name', $current_user->first_name)}}">
@if ($errors->has('first_name'))
<span class="errormsg text-danger">{{ $errors->first('first_name') }}</span>
@endif
</div>
<div class="form-group">
<input type="text" id="last_name" name="last_name" placeholder="Last name" class="form-control" value="{{old('last_name', $current_user->last_name)}}">
@if ($errors->has('first_name'))
<span class="errormsg text-danger">{{ $errors->first('last_name') }}</span>
@endif
</div>
<div class="form-group">
<input type="text" id="email" name="email" placeholder="E-mail address" class="form-control" value="{{old('email', $current_user->email)}}">
@if ($errors->has('email'))
<span class="errormsg text-danger">{{ $errors->first('email') }}</span>
@endif
</div>
<div class="form-group">
<textarea name="bio" id="bio" class="form-control" cols="30" rows="6">{{old('bio', $current_user->bio)}}</textarea>
@if ($errors->has('bio'))
<span class="errormsg text-danger">{{ $errors->first('bio') }}</span>
@endif
</div>
<label for="avatar" class="text-muted">Upload avatar</label>
<div class="form-group d-flex">
<div class="w-75 pr-1">
<input type='file' name='avatar' id="avatar" class="form-control border-0 py-0 pl-0 file-upload-btn" value="{{$current_user->avatar}}">
@if ($errors->has('avatar'))
<span class="errormsg text-danger">{{ $errors->first('avatar') }}</span>
@endif
</div>
<div class="w-25 position-relative" id="avatar-container">
<img class="rounded-circle img-thumbnail avatar-preview" src="{{asset('images/avatars')}}/{{$current_user->avatar}}" alt="{{$current_user->first_name}} {{$current_user->first_name}}">
<span class="avatar-trash">
@if($current_user->avatar !== 'default.png')
<a href="#" class="icon text-light" id="delete-avatar" data-uid="{{$current_user->id}}"><i class="fa fa-trash"></i></a>
@endif
</span>
</div>
</div>
<div class="form-group d-flex mb-0">
<div class="w-50 pr-1">
<input type="submit" name="submit" value="Save" class="btn btn-block btn-primary">
</div>
<div class="w-50 pl-1">
<a href="{{route('profile')}}" class="btn btn-block btn-primary">Cancel</a>
</div>
</div>
</form>
The deleting of the user's picture (reverting to the default avatar, in other words) is done via AJAX:
(function() {
//Delete Avatar
$('#delete-avatar').on('click', function(evt) {
evt.preventDefault();
var $avatar = $('#avatar-container').find('img');
var $topAvatar = $('#top_avatar');
var $trashIcon = $(this);
var defaultAvatar = APP_URL + '/images/avatars/default.png';
//Get user's ID
var id = $(this).data('uid');
if (confirm('Delete the avatar?')) {
var CSRF_TOKEN = $('meta[name="csrf-token"]').attr('content');
$.ajax({
url: APP_URL + '/dashboard/profile/deleteavatar/' + id,
method: 'POST',
data: {
id: id,
_token: CSRF_TOKEN,
},
success: function() {
$avatar.attr('src', defaultAvatar);
$topAvatar.attr('src', defaultAvatar);
$trashIcon.remove();
}
});
}
});
})();
Questions:
Could the code be significantly "shortened"?
Are there better alternatives to the means I have chosen to do the various "actions" (inserting/updating the avatar and bio, etc)?
How can this be improved (in any way)?
Answer: The suggestions below should allow the code to be shortened and improved.
Update method is a bit long
The UserProfileController::update() method is somewhat long. The sections below should allow it to be simplified.
Pass fields to update to model method update()
Presuming that the model UserProfile is a sub-class of Illuminate\Database\Eloquent\Model then the update() method can be passed an array of attributes to update. Instead of these lines:
$current_user->first_name = $request->get('first_name');
$current_user->last_name = $request->get('last_name');
$current_user->email = $request->get('email');
$current_user->bio = $request->get('bio');
Get an array of fields to update from $request->all(), then set the avatar on that array if the avatar needs to be updated.
Make a form request class for handling the validation
The validation rules could be moved out to a FormRequest subclass.
namespace App\Http\Requests;
use Auth;
use Illuminate\Foundation\Http\FormRequest;
class UserUpdateRequest extends FormRequest
{
public function rules()
{
return [
'first_name' => ['required', 'string', 'max:255'],
'last_name' => ['required', 'string', 'max:255'],
'email' => ['required', 'email', 'max:100', 'unique:users,email,'. Auth::user()->id],
'avatar' => ['mimes:jpeg, jpg, png, gif', 'max:2048'],
];
}
}
If the validation fails and the request was an XHR request, an HTTP response with a 422 status code will be returned to the user, including a JSON representation of the validation errors.
Then that subclass can be injected instead of Illuminate\Http\Request in the update method arguments and use $request->all() to get the fields to pass to $current_user->update().
public function update(UserUpdateRequest $request)
{
$current_user = Auth::user();
$fieldsToUpdate = $request->all();
// Upload avatar
if (isset($request->avatar)) {
$imageName = md5(time()) . '.' . $request->avatar->extension();
$request->avatar->move(public_path('images/avatars'), $imageName);
$fieldsToUpdate['avatar'] = $imageName;
}
// Update user
$current_user->update($fieldsToUpdate);
return redirect('dashboard/profile')
->with('success', 'User data updated successfully');
}
Middleware can be added to a group of routes
Instead of setting the middleware in the controller, a Middleware Group could be added to routes\web.php - e.g.
Route::group(['middleware' => ['auth']], function() {
Route::get('/dashboard', [App\Http\Controllers\Dashboard\DashboardController::class, 'index'])->name('dashboard');
Route::get('/dashboard/profile', [App\Http\Controllers\Dashboard\UserProfileController::class, 'index'])->name('profile');
Route::post('/dashboard/profile/update', [App\Http\Controllers\Dashboard\UserProfileController::class, 'update'])->name('profile.update');
Route::post('/dashboard/profile/deleteavatar/{id}', [App\Http\Controllers\Dashboard\UserProfileController::class, 'deleteavatar'])->name('profile.deleteavatar');
});
Also, for the sake of readability (e.g. see section 2.3 of PSR-12) it would be wise to alias the profile controller using a use statement.
use App\Http\Controllers\Dashboard\UserProfileController;
Then each reference can simply be UserProfileController instead of the fully qualified name.
Resource Controller
While it may not save many lines and would likely require updating the route paths, consider using a Resource controller. The Update route would instead be /dashboard/profile with the verb PUT or PATCH.
Remember to add testing
Laravel offers great support for writing feature tests to ensure the routes output what you expect. It appears there is already a factory for the user and a migration for the user table so those could be used with the RefreshDatabase trait in tests.
The tests could use the SQLite database engine for testing, simply by uncommenting lines 24 and 25 of phpunit.xml.
Feature tests can make great use of the HTTP test functions available- e.g. requesting routes acting as a user (see the example in that section about using a model factory to generate and authenticate a user) and ensuring the status is okay or redirected to a certain route with assertRedirect().
If using a formRequest subclass as suggested above an assertion could be made that the response code is 422 for invalid input (e.g. missing required field, wrong type of field, etc).
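A minimal feature-test sketch under those assumptions (route names taken from the routes above; the `name` field is a placeholder, and the 422 assertion presumes a FormRequest validating a JSON request — this is not a drop-in test):

```php
use App\Models\User;
use Illuminate\Foundation\Testing\RefreshDatabase;
use Tests\TestCase;

class UserProfileTest extends TestCase
{
    use RefreshDatabase;

    public function test_update_redirects_for_valid_input(): void
    {
        $user = User::factory()->create();

        $this->actingAs($user)
            ->post(route('profile.update'), ['name' => 'New Name'])
            ->assertRedirect();
    }

    public function test_update_returns_422_for_invalid_input(): void
    {
        $user = User::factory()->create();

        $this->actingAs($user)
            ->postJson(route('profile.update'), [])
            ->assertStatus(422);
    }
}
```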
JavaScript can be simplified slightly with jQuery shortcuts
The call to $.ajax() can be replaced with a call to $.post(). Then there is no need to specify the method, and the keys can be removed from the options:
$.post(
APP_URL + '/dashboard/profile/deleteavatar/' + id,
{
id: id,
_token: CSRF_TOKEN,
},
function() {
$avatar.attr('src', defaultAvatar);
$topAvatar.attr('src', defaultAvatar);
$trashIcon.remove();
}
);
"domain": "codereview.stackexchange",
"id": 41511,
"tags": "javascript, php, ajax, laravel"
} |
Vue - It's the Royal Game of Ur | Question: Background
After learning Kotlin for Advent of Code in December, I started looking into cross-compiling Kotlin for both the JVM and JavaScript. Then I wrote a game server in Kotlin and a simple implementation of a game known as The Royal Game of Ur. Game logic by itself doesn't do much good, though, without a beautiful client to play it with (few people like sending data manually). So I decided to make one in what has become my favorite JavaScript framework (everyone must have one, right?).
The repository containing both the client and the server can be found here: https://github.com/Zomis/Server2
Play the game
You can now play The Royal Game of Ur with a server (a simple AI just making a random move is also available to play against) or without a server. (If you can't get the server to work, play the version without a server).
Please note that these will be updated continuously and may not reflect the code in this question.
Rules of The Royal Game of Ur
Or rather, my rules.
Two players are fighting to be the first player who races all their 7 pieces to the exit.
The pieces walk like this:
v<<1 E<< 1 = First tile
>>>>>>>| E = Exit
^<<2 E<<
Only player 1 can use the top row, only player 2 can use the bottom row. Both players share the middle row.
The first tile for Player 1 is the '1' in the top row. Player 2's first tile is the '2' in the bottom row.
Players take turns in rolling the four boolean dice. Then you move a piece a number of steps that equals the sum of these four booleans.
Five tiles are marked with flowers. When a piece lands on a flower the player get to roll again.
As long as a piece is on a flower, another piece may not knock it out (only relevant for the middle flower).
Main Questions
Do I have too many / too few components? I am aiming to make several other games in Vue so I like to make things reusable.
How are my Vue skills?
Can anything be done better with regards to how I am using Vue?
I am nowhere near a UX-designer, but how is the user experience?
Any other feedback also welcome.
Code
Some code that is not included below:
require("../../../games-js/web/games-js"): This is the Kotlin code for the game model. This is code that has been transpiled to JavaScript from Kotlin.
import Socket from "../socket": This is an utility class for handling the potential WebSocket connection. The code below is checking if the Socket is connected and can handle both scenarios.
RoyalGameOfUR.vue
<template>
<div>
<h1>{{ game }} : {{ gameId }}</h1>
<div>
<div>{{ gameOverMessage }}</div>
</div>
<div class="board-parent">
<UrPlayerView v-bind:game="ur" v-bind:playerIndex="0"
:gamePieces="gamePieces"
:onPlaceNewHighlight="onPlaceNewHighlight"
:mouseleave="mouseleave"
:onPlaceNew="placeNew" />
<div class="ur-board">
<div class="ur-pieces-bg">
<div v-for="idx in 20" class="piece piece-bg">
</div>
<div class="piece-black" style="grid-area: 1 / 5 / 2 / 7"></div>
<div class="piece-black" style="grid-area: 3 / 5 / 4 / 7"></div>
</div>
<div class="ur-pieces-flowers">
<UrFlower :x="0" :y="0" />
<UrFlower :x="3" :y="1" />
<UrFlower :x="0" :y="2" />
<UrFlower :x="6" :y="0" />
<UrFlower :x="6" :y="2" />
</div>
<div class="ur-pieces-player">
<transition name="fade">
<UrPiece v-if="destination !== null" :piece="destination" class="piece highlighted"
:mouseover="doNothing" :mouseleave="doNothing"
:class="{['piece-' + destination.player]: true}">
</UrPiece>
</transition>
<UrPiece v-for="piece in playerPieces"
:key="piece.key"
class="piece"
:mouseover="mouseover" :mouseleave="mouseleave"
:class="{['piece-' + piece.player]: true, 'moveable':
ur.isMoveTime && piece.player == ur.currentPlayer &&
ur.canMove_qt1dr2$(ur.currentPlayer, piece.position, ur.roll)}"
:piece="piece"
:onclick="onClick">
</UrPiece>
</div>
</div>
<UrPlayerView v-bind:game="ur" v-bind:playerIndex="1"
:gamePieces="gamePieces"
:onPlaceNewHighlight="onPlaceNewHighlight"
:mouseleave="mouseleave"
:onPlaceNew="placeNew" />
<UrRoll :roll="lastRoll" :usable="ur.roll < 0 && canControlCurrentPlayer" :onDoRoll="onDoRoll" />
</div>
</div>
</template>
<script>
import Socket from "../socket";
import UrPlayerView from "./ur/UrPlayerView";
import UrPiece from "./ur/UrPiece";
import UrRoll from "./ur/UrRoll";
import UrFlower from "./ur/UrFlower";
var games = require("../../../games-js/web/games-js");
if (typeof games["games-js"] !== "undefined") {
// This is needed when doing a production build, but is not used for `npm run dev` locally.
games = games["games-js"];
}
let urgame = new games.net.zomis.games.ur.RoyalGameOfUr_init();
console.log(urgame.toString());
function piecesToObjects(array, playerIndex) {
var playerPieces = array[playerIndex].filter(i => i > 0 && i < 15);
var arrayCopy = []; // Convert Int32Array to Object array
playerPieces.forEach(it => arrayCopy.push(it));
function mapping(position) {
var y = playerIndex == 0 ? 0 : 2;
if (position > 4 && position < 13) {
y = 1;
}
var x =
y == 1
? position - 5
: position <= 4 ? 4 - position : 4 + 8 + 8 - position;
return {
x: x,
y: y,
player: playerIndex,
key: playerIndex + "_" + position,
position: position
};
}
for (var i = 0; i < arrayCopy.length; i++) {
arrayCopy[i] = mapping(arrayCopy[i]);
}
return arrayCopy;
}
export default {
name: "RoyalGameOfUR",
props: ["yourIndex", "game", "gameId"],
data() {
return {
highlighted: null,
lastRoll: 0,
gamePieces: [],
playerPieces: [],
lastMove: 0,
ur: urgame,
gameOverMessage: null
};
},
created() {
if (this.yourIndex < 0) {
Socket.send(
`v1:{ "type": "observer", "game": "${this.game}", "gameId": "${
this.gameId
}", "observer": "start" }`
);
}
Socket.$on("type:PlayerEliminated", this.messageEliminated);
Socket.$on("type:GameMove", this.messageMove);
Socket.$on("type:GameState", this.messageState);
Socket.$on("type:IllegalMove", this.messageIllegal);
this.playerPieces = this.calcPlayerPieces();
},
beforeDestroy() {
Socket.$off("type:PlayerEliminated", this.messageEliminated);
Socket.$off("type:GameMove", this.messageMove);
Socket.$off("type:GameState", this.messageState);
Socket.$off("type:IllegalMove", this.messageIllegal);
},
components: {
UrPlayerView,
UrRoll,
UrFlower,
UrPiece
},
methods: {
doNothing: function() {},
action: function(name, data) {
if (Socket.isConnected()) {
let json = `v1:{ "game": "UR", "gameId": "${
this.gameId
}", "type": "move", "moveType": "${name}", "move": ${data} }`;
Socket.send(json);
} else {
console.log(
"Before Action: " + name + ":" + data + " - " + this.ur.toString()
);
if (name === "roll") {
let rollResult = this.ur.doRoll();
this.rollUpdate(rollResult);
} else {
console.log(
"move: " + name + " = " + data + " curr " + this.ur.currentPlayer
);
var moveResult = this.ur.move_qt1dr2$(
this.ur.currentPlayer,
data,
this.ur.roll
);
console.log("result: " + moveResult);
this.playerPieces = this.calcPlayerPieces();
}
console.log(this.ur.toString());
}
},
placeNew: function(playerIndex) {
if (this.canPlaceNew) {
this.action("move", 0);
}
},
onClick: function(piece) {
if (piece.player !== this.ur.currentPlayer) {
return;
}
if (!this.ur.isMoveTime) {
return;
}
console.log("OnClick in URView: " + piece.x + ", " + piece.y);
this.action("move", piece.position);
},
messageEliminated(e) {
console.log(`Recieved eliminated: ${JSON.stringify(e)}`);
this.gameOverMessage = e;
},
messageMove(e) {
console.log(`Recieved move: ${e.moveType}: ${e.move}`);
if (e.moveType == "move") {
this.ur.move_qt1dr2$(this.ur.currentPlayer, e.move, this.ur.roll);
}
this.playerPieces = this.calcPlayerPieces();
// A move has been done - check if it is my turn.
console.log("After Move: " + this.ur.toString());
},
messageState(e) {
console.log(`MessageState: ${e.roll}`);
if (typeof e.roll !== "undefined") {
this.ur.doRoll_za3lpa$(e.roll);
this.rollUpdate(e.roll);
}
console.log("AfterState: " + this.ur.toString());
},
messageIllegal(e) {
console.log("IllegalMove: " + JSON.stringify(e));
},
rollUpdate(rollValue) {
this.lastRoll = rollValue;
},
onDoRoll() {
this.action("roll", -1);
},
onPlaceNewHighlight(playerIndex) {
if (playerIndex !== this.ur.currentPlayer) {
return;
}
this.highlighted = { player: playerIndex, position: 0 };
},
mouseover(piece) {
if (piece.player !== this.ur.currentPlayer) {
return;
}
this.highlighted = piece;
},
mouseleave() {
this.highlighted = null;
},
calcPlayerPieces() {
let pieces = this.ur.piecesCopy;
this.gamePieces = this.ur.piecesCopy;
let obj0 = piecesToObjects(pieces, 0);
let obj1 = piecesToObjects(pieces, 1);
let result = [];
for (var i = 0; i < obj0.length; i++) {
result.push(obj0[i]);
}
for (var i = 0; i < obj1.length; i++) {
result.push(obj1[i]);
}
console.log(result);
return result;
}
},
computed: {
canControlCurrentPlayer: function() {
return this.ur.currentPlayer == this.yourIndex || !Socket.isConnected();
},
destination: function() {
if (this.highlighted === null) {
return null;
}
if (!this.ur.isMoveTime) {
return null;
}
if (
!this.ur.canMove_qt1dr2$(
this.ur.currentPlayer,
this.highlighted.position,
this.ur.roll
)
) {
return null;
}
let resultPosition = this.highlighted.position + this.ur.roll;
let result = piecesToObjects(
[[resultPosition], [resultPosition]],
this.highlighted.player
);
return result[0];
},
canPlaceNew: function() {
return (
this.canControlCurrentPlayer &&
this.ur.canMove_qt1dr2$(this.ur.currentPlayer, 0, this.ur.roll)
);
}
}
};
</script>
<style>
.piece-0 {
background-color: blue;
}
.ur-pieces-player .piece {
margin: auto;
width: 48px;
height: 48px;
}
.piece-1 {
background-color: red;
}
.piece-flower {
opacity: 0.5;
background-image: url('../assets/ur/flower.svg');
margin: auto;
}
.board-parent {
position: relative;
}
.piece-bg {
background-color: white;
border: 1px solid black;
}
.ur-board {
position: relative;
width: 512px;
height: 192px;
min-width: 512px;
min-height: 192px;
overflow: hidden;
border: 12px solid #6D5720;
border-radius: 12px;
margin: auto;
}
.ur-pieces-flowers {
z-index: 60;
}
.ur-pieces-flowers, .ur-pieces-player,
.ur-pieces-bg {
display: grid;
grid-template-columns: repeat(8, 1fr);
grid-template-rows: repeat(3, 1fr);
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100%;
}
.ur-pieces-player .piece {
z-index: 70;
}
.piece {
background-size: cover;
z-index: 40;
width: 100%;
height: 100%;
}
.piece-black {
background-color: #7f7f7f;
}
.player-view {
width: 512px;
height: 50px;
margin: auto;
display: flex;
flex-flow: row;
justify-content: space-between;
align-items: center;
}
.side {
display: flex;
flex-flow: row;
}
.piece.highlighted {
opacity: 0.5;
box-shadow: 0 0 10px 8px black;
}
.side-out {
flex-flow: row-reverse;
}
.moveable {
cursor: pointer;
animation: glow 1s infinite alternate;
}
@keyframes glow {
from {
box-shadow: 0 0 10px -10px #aef4af;
}
to {
box-shadow: 0 0 10px 10px #aef4af;
}
}
.fade-enter-active, .fade-leave-active {
transition: opacity .5s;
}
.fade-enter, .fade-leave-to {
opacity: 0;
}
</style>
UrFlower.vue
<template>
<div class="piece piece-flower"
v-bind:style="{ 'grid-area': (y+1) + '/' + (x+1) }">
</div>
</template>
<script>
export default {
name: "UrFlower",
props: ["x", "y"]
};
</script>
UrPiece.vue
<template>
<transition name="fade">
<div class="piece"
v-on:click="click(piece)"
:class="piece.id"
@mouseover="mouseover(piece)" @mouseleave="mouseleave()"
v-bind:style="{ gridArea: (piece.y+1) + '/' + (piece.x+1) }">
</div>
</transition>
</template>
<script>
export default {
name: "UrPiece",
props: ["piece", "onclick", "mouseover", "mouseleave"],
methods: {
click: function(piece) {
console.log(piece);
this.onclick(piece);
}
}
};
</script>
UrPlayerView.vue
<template>
<div class="player-view">
<div class="side side-remaining">
<div class="number">{{ remaining }}</div>
<div class="pieces-container">
<div v-for="n in remaining" class="piece-small pointer"
:class="{ ['piece-' + playerIndex]: true, moveable: canPlaceNew && n == remaining }"
@mouseover="onPlaceNewHighlight(playerIndex)" @mouseleave="mouseleave()"
style="position: absolute; top: 6px;"
:style="{ left: (n-1)*12 + 'px' }" v-on:click="placeNew()">
</div>
</div>
</div>
<transition name="fade">
<div class="player-active-indicator" v-if="game.currentPlayer == playerIndex"></div>
</transition>
<div class="side side-out">
<div class="number">{{ out }}</div>
<div class="pieces-container">
<div v-for="n in out" class="piece-small"
:class="['piece-' + playerIndex]"
style="position: absolute; top: 6px;"
:style="{ right: (n-1)*12 + 'px' }">
</div>
</div>
</div>
</div>
</template>
<script>
export default {
name: "UrPlayerView",
props: [
"game",
"playerIndex",
"onPlaceNew",
"gamePieces",
"onPlaceNewHighlight",
"mouseleave"
],
data() {
return {};
},
methods: {
placeNew: function() {
this.onPlaceNew(this.playerIndex);
}
},
computed: {
remaining: function() {
return this.gamePieces[this.playerIndex].filter(i => i === 0).length;
},
out: function() {
return this.gamePieces[this.playerIndex].filter(i => i === 15).length;
},
canPlaceNew: function() {
return (
this.game.currentPlayer == this.playerIndex &&
this.game.isMoveTime &&
this.game.canMove_qt1dr2$(this.playerIndex, 0, this.game.roll)
);
}
}
};
</script>
<style scoped>
.player-active-indicator {
background: black;
border-radius: 100%;
width: 20px;
height: 20px;
}
.number {
margin: 2px;
font-weight: bold;
font-size: 2em;
}
.piece-small {
background-size: cover;
width: 24px;
height: 24px;
border: 1px solid black;
}
.pieces-container {
position: relative;
}
</style>
UrRoll.vue
<template>
<div class="ur-roll">
<div class="ur-dice" @click="onclick()" :class="{ moveable: usable }">
<div v-for="i in 4" class="ur-die">
<div v-if="rolls[i - 1]" class="ur-die-filled"></div>
</div>
</div>
<span>{{ roll }}</span>
</div>
</template>
<script>
function shuffle(array) {
// https://stackoverflow.com/a/2450976/1310566
var currentIndex = array.length,
temporaryValue,
randomIndex;
// While there remain elements to shuffle...
while (0 !== currentIndex) {
// Pick a remaining element...
randomIndex = Math.floor(Math.random() * currentIndex);
currentIndex -= 1;
// And swap it with the current element.
temporaryValue = array[currentIndex];
array[currentIndex] = array[randomIndex];
array[randomIndex] = temporaryValue;
}
return array;
}
export default {
name: "UrRoll",
props: ["roll", "usable", "onDoRoll"],
data() {
return { rolls: [false, false, false, false] };
},
watch: {
roll: function(newValue, oldValue) {
console.log("Set roll to " + newValue);
if (newValue < 0) {
return;
}
this.rolls.fill(false);
this.rolls.fill(true, 0, newValue);
console.log(this.rolls);
shuffle(this.rolls);
console.log("After shuffle:");
console.log(this.rolls);
}
},
methods: {
onclick: function() {
this.onDoRoll();
}
}
};
</script>
<style scoped>
.ur-roll {
margin-top: 10px;
}
.ur-roll span {
font-size: 2em;
font-weight: bold;
}
.ur-dice {
width: 320px;
height: 64px;
margin: 5px auto 5px auto;
display: flex;
justify-content: space-between;
}
.ur-die-filled {
background: black;
border-radius: 100%;
width: 20%;
height: 20%;
}
.ur-die {
display: flex;
justify-content: center;
align-items: center;
width: 64px;
border: 1px solid black;
border-radius: 12px;
}
</style>
Answer: \$\color{red}{\textrm{warning: cheesy meme with bad pun below - if you don't like those, then please skip it...}}\ \$
Ermagherd
Question responses
Do I have too many / too few components? I am aiming to make several other games in Vue so I like to make things reusable.
I think the current components are divided well. The existing components make sense.
How are my Vue skills?
Usage of Vue looks good. There are a few general JS aspects that I have feedback for (see below, under last "question") but usage of Vue components and other constructs looks good.
Can anything be done better with regards to how I am using Vue?
Bearing in mind I am not an expert VueJS user and have only been working with it on small projects in the past year, I can't really think of anything... If you really wanted you could consider using slots somehow, or an event bus if the components became more separated, but that might not be necessary since everything is contained in the main RoyalGameOfUR component.
If I think of anything else, I will surely update this answer.
I am nowhere near a UX-designer, but how is the user experience?
The layout of the game components is okay, though it would be helpful to have more text prompting the user what to do, or at least the rules and gameplay instructions somewhere (e.g. in a text box, linked to another page, etc.). In the same vein, I see an uncaught exception in the console if the user clicks the dice when it isn't time to roll. One could catch the exception and alert the user about what happened.
Any other feedback also welcome.
Feedback
Wow that is a really elegant application! Well done! I haven't used the grid styles yet but hope to in the future.
I did notice that after rolling the dice, when selecting a piece from a stack, it doesn't matter which player is the current player - I can click on either stack (though only a piece from the current player's stack will get moved).
I did notice an error once about this.onclick is not defined but I didn't observe the path to reproduce it. If I see it again I will let you know.
Suggestions
JS
let & const
I see the code utilizes let in a few places but otherwise just var. It would be wise to start using const anywhere a value is stored but never re-assigned - then use let if re-assignment is necessary. This helps avoid accidental re-assignment and other bugs.
Using var outside of a function declares a global variable1...I only spot one of those in your post (i.e. var games) but if there were other places where you wanted a variable in another file called games then this could lead to unintentional value over-writing.
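A minimal illustration of the distinction (not from the reviewed code — just a sketch): `const` prevents re-assignment of the binding, not mutation of the value it holds.

```javascript
// `const` means the binding cannot be re-assigned...
const pieces = [];
pieces.push("new piece");      // ...but the array itself is still mutable

let roll = 0;                  // use `let` only when the value genuinely changes
roll = 3;

let reassigned = true;
try {
  pieces = [];                 // TypeError: Assignment to constant variable
} catch (e) {
  reassigned = false;
}
console.log(pieces.length, roll, reassigned); // 1 3 false
```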
Array copying
In piecesToObjects(), I see these lines:
var arrayCopy = []; // Convert Int32Array to Object array
playerPieces.forEach(it => arrayCopy.push(it));
You could utilize Array.from() to copy the array, then use array.map() to call mapping() instead of using the for loop. Originally I was thinking that the forEach could be eliminated but there is a need to get a regular array instead of the typed array (i.e. Int32Array). If the array being copied (i.e. array) was a regular array, then you likely could just use .map() - see this jsPerf to see how much quicker that mapping could be.
return Array.from(playerPieces).map(mapping);
And that function mapping could be pulled out of piecesToObjects if playerIndex is accepted as the first parameter, and then playerIndex can be sent on each iteration using Function.bind() - i.e. using a partially applied function.
return Array.from(playerPieces).map(mapping.bind(null, playerIndex));
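Putting the two suggestions together, a runnable sketch of the refactor (the `mapping` here is deliberately simplified — the real one also computes `x`/`y` — but the `Array.from` + `bind` shape is the point):

```javascript
// Simplified mapping: playerIndex is partially applied, position comes from map()
function mapping(playerIndex, position) {
  return { player: playerIndex, position: position };
}

function piecesToObjects(array, playerIndex) {
  // filter() on an Int32Array returns another Int32Array...
  const playerPieces = array[playerIndex].filter(i => i > 0 && i < 15);
  // ...so Array.from converts it, then map applies the partially applied mapping
  return Array.from(playerPieces).map(mapping.bind(null, playerIndex));
}

const pieces = [new Int32Array([0, 3, 15, 7]), new Int32Array([0, 2, 15, 0])];
console.log(piecesToObjects(pieces, 0));
// [ { player: 0, position: 3 }, { player: 0, position: 7 } ]
```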
Nested Ternary operator
Bearing in mind that this might just be maintained by you, if somebody else wanted to update the code, that person might find the line below less readable than several normal if blocks. My former supervisor had a rule: no more than one ternary operator in one expression - especially if it made the line longer than ~100 characters.
var x =
y == 1
? position - 5
: position <= 4 ? 4 - position : 4 + 8 + 8 - position;
Something a little more readable might be:
var x;
if (y == 1) {
x = position - 5;
} else {
x = position <= 4 ? 4 - position : 4 + 8 + 8 - position;
}
0-based Flower grid areas
Why add 1 to the x and y values in UrFlower's template? Perhaps you are so used to 0-based indexes and wanted to keep those values in the markup orthogonal with your ways... Those flowers could be put into an array and looped over using v-for... but for 5 flowers that might be excessive...
CSS
Inline style vs CSS
There are static inline style attributes in UrPlayerView.vue - e.g.:
<div v-for="n in remaining" class="piece-small pointer"
:class="{ ['piece-' + playerIndex]: true, moveable: canPlaceNew && n == remaining }"
@mouseover="onPlaceNewHighlight(playerIndex)" @mouseleave="mouseleave()"
style="position: absolute; top: 6px;"
and
<div v-for="n in out" class="piece-small"
:class="['piece-' + playerIndex]"
style="position: absolute; top: 6px;"
The position and top styles could be put into the existing ruleset for .piece-small...
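For instance, the repeated inline declarations could simply join the existing ruleset (sketch based on the styles shown above):

```css
.piece-small {
  background-size: cover;
  width: 24px;
  height: 24px;
  border: 1px solid black;
  /* moved here from the repeated style="position: absolute; top: 6px;" */
  position: absolute;
  top: 6px;
}
```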
1https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/var#Description | {
"domain": "codereview.stackexchange",
"id": 32731,
"tags": "javascript, game, css, ecmascript-6, vue.js"
} |
Detection of anomalous data in text | Question: I work with texts where there is a dialogue between two people (a client and a call center employee; the beginning and end of each person's phrase is not defined). My goal is to classify texts in which a call center employee names words from my list.
If the texts are manually marked up, can such a classification problem be solved?
Are there any tricks to solve this type of problem?
Sample data:
"hello hello my name is Sam Chin I'm calling for pizza delivery Okay now check your order wait a minute Sam"
Answer: There are many ways to do this; one approach is to use token-based matching. You can use this to easily find any "tokens" in the text, such as names, places, or just plain words.
Methodology
I'd recommend using Rule-based Entity Recognition in spaCy. You'll define the "rules" of what the entity looks like, here's the example from the docs where we define the following patterns to find:
An entity type of Organization and the word Apple
An entity type of Location and the words san and francisco
Here's that in code (live example):
from spacy.lang.en import English
from spacy.pipeline import EntityRuler
nlp = English()
ruler = EntityRuler(nlp)
# These are the rules you define, look at the docs to see what your options are.
# You don't have to use the "label", you can just look for a "pattern" if you want.
patterns = [{"label": "ORG", "pattern": "Apple"},
{"label": "GPE", "pattern": [{"LOWER": "san"}, {"LOWER": "francisco"}]}]
ruler.add_patterns(patterns)
nlp.add_pipe(ruler)
# The text you're searching through to find your patterns
doc = nlp("Apple is opening its first big office in San Francisco.")
# This prints out the matches
print([(ent.text, ent.label_) for ent in doc.ents])
The output of this code is: [('Apple', 'ORG'), ('San Francisco', 'GPE')]
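For your use case, the patterns could be generated straight from your word list. A hedged, pure-Python sketch (the watch words below are made-up examples — substitute your own list; the dicts follow the same pattern shape as above):

```python
# Turn a plain word list into EntityRuler-style patterns, one per word.
watch_words = ["pizza", "order", "delivery"]
patterns = [{"label": "WATCH", "pattern": [{"LOWER": w.lower()}]}
            for w in watch_words]

print(patterns[0])
# {'label': 'WATCH', 'pattern': [{'LOWER': 'pizza'}]}
```

A transcript can then be flagged whenever `doc.ents` is non-empty after running the pipeline.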
Usage
Thankfully spaCy has some fantastic online tools for helping you write your patterns; I highly recommend you check these links out.
Install spaCy
Evaluate if you should use rules or a model (I suggest rules but I could be wrong), and if you should use token matcher or phrase matcher
Read the documentation to determine how to write your patterns
Test your patterns using the Live Rule-based Matcher Explorer
Use the code snippet above or the code samples in the docs to see how to use your new patterns | {
"domain": "datascience.stackexchange",
"id": 7100,
"tags": "deep-learning, nlp, speech-to-text"
} |
Two ships, one year apart, travelling from earth to a distant planet. What is the time difference of arrival? | Question: If a ship set off at a significant percentage of the speed of light, and landed on a distant planet, and then a second ship set off a year or so later, what would the time difference be at the destination planet?
Does the time dilation change depending on if the ships are travelling at the same time? Say if the 'on ship' duration of the journey was ten years, and the second ship set out while the first one was still travelling, would they arrive with the same gap as what they set out with?
Or, since the ship that has reached the destination has slowed down/ landed before the second ship, would the time dilation change the time gap at that end?
So, for instance, if a person were on the second ship, would their perception of the journey be 'a year ago the first ship set off, now we set off, it's a ten year journey, we arrive and find that the first ship arrived a year before we did' or would there be time differences due to the two ships setting off at different times?
Does travelling at a significant percentage of the speed of light and the time dilation effect change whether you're travelling toward or away from something? Like would the perception of the ship's journey be different from earth than from the target planet in terms of time?
I'd appreciate any answers, though I'm dumb as a brick and don't understand mathematics, so simple explanations would be very appreciated.
Answer: Persons A and B are traveling. Both would experience the same trip duration: ten years. B would spend a year waiting for his turn to depart. A would spend a year waiting for B to arrive and join him. Their clocks and calendars would read the same when B arrives. | {
"domain": "physics.stackexchange",
"id": 60223,
"tags": "homework-and-exercises, special-relativity, speed-of-light"
} |
Why do physicists use LHC? | Question:
My question is: why are we colliding particles in the LHC to produce new ones?
And these particles that they sometimes say live for only a fraction of a second: how do they exist in space then?
In space all these particles exist without smashing into each other, so why do we need to smash them to produce them?
Answer: Physicists collide particles to study their behavior under extreme conditions. New unknown particles can be created in high-energy collisions, or new unknown processes may be observed. Our equations describing the particles predict certain behavior, and physicists are testing whether the particles really behave like that. They are hoping to find a discrepancy that would locate a problem in our theories and allow us to improve them.
The short-lived particles are short-lived not only in our laboratories but also in space. They are created in collisions there too, just not in accelerators. Certain processes in space are powerful enough to accelerate and collide particles at much higher energies than our most powerful accelerator. If a short-lived particle is created in such a process, it decays quickly, just as in laboratories on Earth. Long-lived high-energy particles (protons, neutrinos, electrons) are flying through space everywhere, and they even hit the Earth. Every day lots of particles arrive at Earth from space with much higher energy than the protons in the LHC.
"domain": "physics.stackexchange",
"id": 15154,
"tags": "particle-physics, large-hadron-collider"
} |
Bottom up chart parser adding active arc step | Question: I am following the bottom-up chart parsing algorithm from the book Natural Language Understanding by James Allen. It is
I couldn't understand the 3rd step. I thought that when an active arc is added, the dot is advanced. But the dot is introduced in the third step. The example says that
This example shows the dot is advanced. But step 3 says otherwise
Answer: You have spotted a typo in that book.
The dot $\circ$ in step 3 of figure 3.11 should be after the constituent C. To be explicit and precise, step 3 should be stated as the following.
For each rule in the grammar of form X $\to$ C X$_1$ ... X$_n$, add an active arc of form X $\to$ C $\circ$ X$_1$ ... X$_n$ from position P$_1$ to P$_2$.
As you have observed, all examples following figure 3.11 show clearly that the dot $\circ$ is placed after the constituent C.
This can also be verified by the statement a few lines below.
Notice that active arcs are never removed from the chart.
We can check that we cannot, indeed, find any instance of an active arc in all of section 3, chapter 3 of the book that starts with a $\circ$.
There is also a rule that for each active arc from position $a$ to $b$, the symbols before the $\circ$ should correspond to the terminals between those positions inclusively. That rule also implies that a dot $\circ$ cannot be at the very front of an active arc.
By the way, the book is written so clearly that I have to say that glaring typo is almost a reverse testimonial.
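To make the corrected step 3 concrete, here is a minimal Python sketch (the grammar encoding and example rules are hypothetical, not from the book): when a constituent C is completed between positions P1 and P2, every rule whose right-hand side begins with C yields an active arc with the dot placed after C.

```python
# Hypothetical toy grammar: list of (left-hand side, right-hand side) rules.
grammar = [("NP", ["ART", "N"]), ("S", ["NP", "VP"])]

def introduce_arcs(completed, p1, p2):
    """Corrected step 3: add arcs X -> C . X1 ... Xn for each rule starting with C."""
    arcs = []
    for lhs, rhs in grammar:
        if rhs[0] == completed:
            # dot position 1: the first right-hand-side symbol is already matched
            arcs.append({"rule": (lhs, rhs), "dot": 1, "from": p1, "to": p2})
    return arcs

print(introduce_arcs("ART", 0, 1))
# [{'rule': ('NP', ['ART', 'N']), 'dot': 1, 'from': 0, 'to': 1}]
```

Note the dot is never at position 0: every active arc has at least one matched symbol to its left, consistent with the observation above.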
"domain": "cs.stackexchange",
"id": 13418,
"tags": "parsers, natural-language-processing"
} |
How good is current tsunami prediction? | Question: We all know that predicting tsunami and earthquake is difficult, with too many variables involved.
But with advances in data collection and computing power, and with better models, one should be able to predict tsunamis better than in the past. How accurate is current tsunami prediction?
Answer: Well, immediately after the earthquake, they created the map showing when the tsunami arrives at different places:
I guess that it ended up being pretty accurate but I haven't checked. The speed of ocean waves actually depends on the frequency...
Your ordering "tsunami and earthquake" is somewhat bizarre. You do acknowledge and realize that the tsunami was a consequence of the earthquake, don't you? ;-) Predicting earthquakes themselves is not really possible. I think that no one knew about the Japanese earthquake until the very moment when it took place. | {
"domain": "physics.stackexchange",
"id": 660,
"tags": "geophysics, tsunami"
} |
MoveIt MSA configuration files not as straight forward as they say? | Question:
I have been struggling to introduce my custom .xacro into MoveIt. I have followed the MSA to a T.
I attempted the Panda MoveIt tutorials first, and they work perfectly.
My robot already works in Rviz in conjunction with Joint State Publisher so I know it operates correctly in its range of motion.
I have also been able to manually boot it into an empty gazebo world. I am just having trouble creating and utilizing the files that are generated from the MSA.
When I follow the instructions from https://ros-planning.github.io/moveit_tutorials/doc/setup_assistant/setup_assistant_tutorial.html?highlight=moveit and attempt to "roslaunch moveit_config demo.launch" I get this response...
(I cannot post a screenshot without 5 points (?) so I C->P'd)
homefolder@ubuntu:~/manipulator_ws$ roslaunch version1_desc demo.launch
... logging to /home/homefolder/.ros/log/48303788-8948-11ea-a025-001c429fcd02/roslaunch-ubuntu-31411.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
Invalid <arg> tag: moveit_config
ROS path [0]=/opt/ros/kinetic/share/ros
ROS path [1]=/home/homefolder/manipulator_ws/src/ros-moveit-arm/moveit_plugin
ROS path [2]=/home/homefolder/manipulator_ws/src/moveit_tutorials
ROS path [3]=/home/homefolder/manipulator_ws/src/my-robotic-manipulator
ROS path [4]=/home/homefolder/manipulator_ws/src/ros-moveit-arm/my_arm_xacro
ROS path [5]=/home/homefolder/manipulator_ws/src/panda_moveit_config
ROS path [6]=/home/homefolder/manipulator_ws/src/version1_desc
ROS path [7]=/opt/ros/kinetic/share.
Arg xml is <arg default="$(find moveit_config)/default_warehouse_mongo_db" name="db_path"/>
The traceback for the exception was written to the log file
The workspace is named "manipulator_ws." My main package is "version1_desc" and when I started the MSA, I opened the new package into this "version1_desc". So the config files that the MSA has created are located in:
manipulator_ws/src/version1_desc/moveit_config/config
and
manipulator_ws/src/version1_desc/moveit_config/launch
I remember reading that creating a package inside a package usually brings up issues, so this part of the creation process confused me. As soon as the MSA created a package in a package, I assumed this would not work, and it didn't.
I am using Ubuntu 16.04 LTS
ROS Kinetic
Using Parallels Desktop on a 2019 macbook pro
This seems like a path problem but I am new to the system. I have seen other posts with similar issues but the solutions that have been deemed worthy of "closing" do not provide clear solutions to my issue.
Can anyone offer some assistance? I have spent the better part of 5 hours moving stuff around and erasing and recreating these moveit_config files with no luck. I'm about ready to quit and go be a construction worker again.
please help.
Originally posted by matthewmarkey on ROS Answers with karma: 68 on 2020-04-28
Post score: 0
Original comments
Comment by gvdhoorn on 2020-04-28:
(I cannot post a screenshot without 5 points (?) so I C->P'd)
Which is perfect, as there is no need to show a screenshot of an error message which is all text. It's also not allowed according to the support guidelines.
Comment by gvdhoorn on 2020-04-28:
Finally, as a suggestion: change the title of your question. As-is, it does not convey anything about what sort of problem you encountered. This makes it almost useless, as people searching will not get any hint about the topic of your question.
Comment by matthewmarkey on 2020-04-28:
This is great advice, all round. Thank you sir. I will work on my formatting as far at the questions go and appreciate the timely help.
Comment by gvdhoorn on 2020-04-28:
I re-opened your question, as it's unclear to me why you closed it as a duplicate.
If it really is a duplicate, please post a link to the Q&A it is a duplicate of here in a comment.
Answer:
manipulator_ws/src/version1_desc/moveit_config/launch
I remember reading that creating a package inside a package usually brings up issues, so this part of the creation process confused me.
Yes, this is most likely your problem.
Move the entire moveit_config directory (including all its contents) to manipulator_ws/src. Then delete the build, devel and install folders from your manipulator_ws (the last one only if it exists) and build your workspace again. Make sure to source devel/setup.bash after a successful build.
Now try again.
As to whether the MSA is "not as straight forward as they say?": in the end, it's the user which determines where the output of the MSA should be stored. If you direct it to generate the package inside another package, that's where the output will be generated.
I'm not entirely sure I understand why you did that, but that's not too important either.
Originally posted by gvdhoorn with karma: 86574 on 2020-04-28
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 34848,
"tags": "ros-kinetic"
} |
Is the set of Gödel numbers of computable constant functions recursively enumerable? | Question: I've been working on the following exercise:
$S = \{ x | f_x \text{ is constant} \}$. Is $S$ recursively enumerable?
Here, $f_x$ is the function computed by the $x$-th TM. So it is a computable function.
Intuitively, I think that to check if $f_x$ is constant, I would have to check if $f_x$ stops for every input. That procedure would run forever.
Important: Same thing about $\overline{S}$, if some $f_x$ is undefined for every input $y$ (it would not be constant, as far as I understand, it is constant if it has the same image for all input values) then I would not be able to list $f_x$ in an enumeration of $\overline{S}$.
In fact, suppose $f_x(y)$ is undefined for every $y$. This $f_x$ is computable - a TM can be constructed that loops forever. How can you determine that $f_x$ is not constant? There will be no input for which $f_x$ halts. Therefore, we cannot compare $f_x(x_1)$ and $f_x(x_2)$ to determine that it is not constant. Then, we can't list this $TM_x$ in the enumeration.
A solution in which this $f_x$ was put at the beginning of the enumeration was suggested. But there are infinitely many $TMs$ that are undefined for all inputs. Consequently, I can't put them at the beginning of the enumeration as other $TMs$ won't be enumerated.
I tried to reduce the Halting Problem to this problem without success. I believe that both $S$ and $\overline{S}$ are not r.e. (see my intuitive thoughts above).
How would you solve this problem?
Answer: Your intuition is good: checking whether $x \in S$ is stronger than checking $x \in K$ ($K$ the halting problem, i.e. $K = \{ x \mid f_x(x) \text{ halts}\}$). In other words,
$\qquad\displaystyle \langle \operatorname{sgn} \circ f\rangle \in S \implies f \in K$.
However, the reverse does not hold and no similar implication works for $\overline{K}$, the reduction partner we really want (as it's not semi-decidable).
A small interlude: the title you chose does not actually fit the question! The set of all constant functions is in fact recursively enumerable, e.g. by
$\qquad\displaystyle \varphi_i(x) = i$
which is clearly a computable function. Many sets of functions are enumerable like this, but not all.
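For concreteness, that enumeration can be written down in a couple of lines (plain Python used purely as illustration; `phi` is my name for $\varphi$):

```python
# Enumerate the constant functions directly: index i -> the function x |-> i
def phi(i):
    return lambda x: i

f3 = phi(3)
# f3 returns 3 regardless of its input
assert [f3(x) for x in range(4)] == [3, 3, 3, 3]
```

The map from index to function is itself computable, which is exactly what makes the set of constant functions enumerable in this sense.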
What you are really asking is: given a Gödel numbering (e.g. an encoding of all Turing machines), what about the set of all indices (read: programs) that compute this here set of functions? That's another thing entirely because of the properties such a Gödel numbering has. The distinction is important, see e.g. here.
The basic idea for a reduction is always this: build a function that depends on $x$ and whose encoding is in $S$ if and only if $x \in \overline{K}$ -- then we got a deal.
So, consider
$\qquad\displaystyle g_x(n) =
\begin{cases}
n, &f_x(x) \text{ halts after at most $n$ steps } \\
1, &\text{else}
\end{cases}
$
which is clearly computable. Note furthermore that given $x$, we can compute $y$ with $f_y = g_x$; such compilation is possible because we have (or can assume) a Gödel numbering. Now, clearly
$\qquad \langle g_x \rangle \in S \iff x \in \overline{K}$
holds; thus $S$ can't be semi-decidable since $\overline{K}$ would then be semi-decidable, too, contradicting what we know. | {
"domain": "cs.stackexchange",
"id": 2257,
"tags": "computability, semi-decidability"
} |
How to run the same package twice at the same time? | Question:
Hi
I'm currently running the sicktoolbox_wrapper package for the LMS200 for scanning purposes.
I have two LMS200s and would like to use them at the same time.
I know the connection ports, which are ttyUSB0 and ttyUSB1.
Let's say I'm running on port ttyUSB0.
When I execute the commands:
rosparam set /sicklms/port /dev/ttyUSB1
rosrun sicktoolbox_wrapper sicklms
the old one is killed because of
[new node registered with the same
name]
Is there any possible way to change the node name before executing it?
Or is it possible to copy and paste, then reinstall the same package under a different name, publishing a different node name?
I have been trying rosparam set but have not found a clue yet.
Please help.
Originally posted by fcl21 on ROS Answers with karma: 1 on 2014-04-11
Post score: 0
Answer:
See this answer, it shows how to set the node name both from the command line and from launch files.
Originally posted by Stefan Kohlbrecher with karma: 24361 on 2014-04-11
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by fcl21 on 2014-04-11:
let me make it clear
if i execute
rosrun sicktoolbox_wrapper sicklms __name:=sicklms
it is executable but doesnt change anything
if i execute this
rosrun sicktoolbox_wrapper sicklms __name:=sicklms2
i have
[ERROR] [1397226096.240203952]: Initialize failed! are you using the correct device path?
:/
Comment by fcl21 on 2014-04-11:
i just found that if i do
rosrun sicktoolbox_wrapper sicklms __name:=sicklms2
it says
attempting to open devic @ /dev/lms200
which is a path that i never told the computer to do
Comment by demmeln on 2014-04-11:
This is probably the default value. You will need to set the parameter rosparam set /sicklms2/port /dev/ttyUSB1 when you run rosrun sicktoolbox_wrapper sicklms __name:=sicklms2. It is probably best if you create a launch file for your nodes. | {
"domain": "robotics.stackexchange",
"id": 17626,
"tags": "ros, sicktoolbox-wrapper"
} |
Airplane with banner in a windy day | Question: Will the banner of this airplane be always in the proper direction if the airplane flies in any direction on a windy day?
Answer: Other than User58220's answer, I'm reading a lot of nonsense here.
When you fly an airplane (I and many other people on this site do), when you are cruising in the air, you center the rudder.
The plane has no awareness of the movement of the air mass over the ground (wind).
The plane has a vertical stabilizer (tail) which causes it to point into its relative wind, which is entirely unrelated to the motion of the air mass over the ground.
If it is towing a banner, that also flows in a line with the airplane.
If you deflect the rudder pedals, that pushes the tail of the plane left or right, causing the relative wind to blow more against one side of the airplane.
You don't do that to compensate for wind over the ground (except during landing and takeoff, when you don't have a banner).
The rudder is used in turns to compensate for adverse aileron yaw, during takeoff to compensate for asymmetric propeller thrust, and during landing to keep the nose lined up with the runway in a crosswind.
If you do anything with the rudder in cruise, you are not doing it to compensate for air movement over the ground.
EDIT: adding some educational videos:
Crosswind landing technique.
More than you wanted to know about banner towing.
Better late than never: Suppose the air mass is simply not moving, but the ground underneath is moving (with respect to the air).
The plane just flies in the still air, banner and all, irrespective of what the ground's doing.
However, if it is needed to fly from airport A to airport B, as User58220 pointed out, that's a navigation problem, not an aircraft-operation problem.
It is necessary to turn the plane to a direction such that it will arrive at the place where airport B will be when both the plane and the airport get there.
This is called the wind triangle problem, and there is a simple tool to help solve it, called an E6B.
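For the curious, the wind-triangle arithmetic an E6B performs can be sketched in a few lines (plain Python; the airspeed, course, and wind values below are made-up illustrations, not from the answer):

```python
from math import sin, cos, asin, radians, degrees

tas = 100.0        # true airspeed, knots (illustrative)
course = 90.0      # desired track over the ground: due east
wind_from = 180.0  # wind blowing FROM the south, i.e. pushing the plane north
wind_speed = 20.0  # knots

# Crosswind component perpendicular to the desired course
cross = wind_speed * sin(radians(wind_from - course))
# Wind correction angle: crab into the wind so the track stays on course
wca = degrees(asin(cross / tas))
heading = (course + wca) % 360
# Groundspeed: along-track airspeed minus the headwind component
gs = tas * cos(radians(wca)) - wind_speed * cos(radians(wind_from - course))

# A direct 20 kt crosswind at 100 kt TAS needs roughly 11.5 degrees of crab
assert 11.0 < wca < 12.0 and 97.0 < gs < 99.0
```

With these numbers the pilot points the nose at about 101.5° to track 090° over the ground, which is the "go where you're carried" point made above.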
Further edit: I almost forgot that when I first started using the Microsoft Flight Simulator, the plane wouldn't go where I pointed it, which made it very hard to land. It would always drift off center.
Later I learned a fundamental difference between cars and planes.
Cars go where they are pointed.
Planes go where they are carried.
There's always some sort of wind, so the way you get where you're going is by looking at the ground to see where you're being carried.
If you are going to the right of where you want to go, you adjust your heading to the left, and vice versa.
You never do it by pointing the plane at your destination, except in a general sense.
I only mention this because not having that understanding could have led to the OP's question. | {
"domain": "physics.stackexchange",
"id": 9910,
"tags": "aircraft, relative-motion, equilibrium"
} |
Confusion implementing inverse z transform in MATLAB | Question: I am trying to use the MATLAB commands ztrans and iztrans, but I am not getting proper results.
My code is below. Why am I not getting H1 = H2? Keep in view that "H2" is the "inverse Z transform" of the "Z transform" of "H1".
clc
clear
syms z n
H1=(z.*(z-1))/((z+1)*(z+1/3))
pretty(H1);
f=iztrans(H1,n);
pretty(f)
H2=ztrans(f)
This outputs
Answer: The two answers are equivalent and related by partial fraction expansion.
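The recombination $10z(z+1/5) - 9z(z+1/3) = z(z-1)$ used below can be double-checked mechanically with exact rationals (plain Python rather than MATLAB; the little polynomial helpers are mine):

```python
from fractions import Fraction

# Polynomials as coefficient lists [c0, c1, c2, ...] meaning c0 + c1*z + c2*z^2
def add(p, q):
    n = max(len(p), len(q))
    p = p + [Fraction(0)] * (n - len(p))
    q = q + [Fraction(0)] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def mul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def scale(c, p):
    return [Fraction(c) * a for a in p]

z = [Fraction(0), Fraction(1)]  # the polynomial "z"

# 10*z*(z + 1/5) - 9*z*(z + 1/3)
lhs = add(scale(10, mul(z, add(z, [Fraction(1, 5)]))),
          scale(-9, mul(z, add(z, [Fraction(1, 3)]))))
rhs = mul(z, add(z, [Fraction(-1)]))  # z*(z - 1)
assert lhs == rhs  # both expand to z^2 - z
```

Using `Fraction` avoids any floating-point doubt about the 1/3 and 1/5 terms.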
This is clear when you multiply out the numerators over a common denominator:
$$
\begin{align}10z(z+1/5)& - 9z(z+1/3) \\&= 10z^2 + 2z - 9z^2 - 3z\\
&= z^2 - z\\
&= z(z-1)
\end{align}
$$ | {
"domain": "dsp.stackexchange",
"id": 11008,
"tags": "matlab, z-transform"
} |
How do you mix two pure states to obtain a mixed state? | Question: If we have the following two states
\begin{equation}
|\psi\rangle_1 = \frac{1}{\sqrt{2}}|0\rangle_A|0\rangle_B + \frac{1}{\sqrt{2}} |1\rangle_A |1\rangle_B
\end{equation}
\begin{equation}
|\psi\rangle_2 = \frac{1}{\sqrt{2}}|0\rangle_A|0\rangle_B - \frac{1}{\sqrt{2}} |1\rangle_A |1\rangle_B
\end{equation}
How do you mix them with the same proportion to create a mixed state? What would be the resulting density operator?
Answer: You can prepare the mixed state as follows. Flip a fair coin. If it comes up heads, prepare $|\psi\rangle_1$, otherwise prepare $|\psi\rangle_2$. Finally, forget the result of flipping the coin.
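Carrying that recipe out numerically (an illustrative plain-Python sketch, vectors as lists in the basis order $|00\rangle, |01\rangle, |10\rangle, |11\rangle$): the cross terms $|00\rangle\langle 11|$ cancel between the two pure states, leaving a classical mixture.

```python
from math import sqrt

def outer(v):  # |v><v| for a real vector v
    return [[a * b for b in v] for a in v]

s = 1 / sqrt(2)
psi1 = [s, 0, 0, s]    # (|00> + |11>)/sqrt(2)
psi2 = [s, 0, 0, -s]   # (|00> - |11>)/sqrt(2)

# Equal-weight mixture: rho = (|psi1><psi1| + |psi2><psi2|) / 2
rho = [[0.5 * (a + b) for a, b in zip(r1, r2)]
       for r1, r2 in zip(outer(psi1), outer(psi2))]

# The off-diagonal |00><11| terms cancel; only |00><00| and |11><11| survive
expected = [[0.5, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0.5]]
assert all(abs(rho[i][j] - expected[i][j]) < 1e-12
           for i in range(4) for j in range(4))
```

The result is $\frac{1}{2}(|00\rangle\langle 00| + |11\rangle\langle 11|)$, i.e. a classical mixture rather than a superposition.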
The corresponding density operator is
$$
\rho = \frac{1}{2} |\psi\rangle_1\langle\psi|_1 + \frac{1}{2} |\psi\rangle_2\langle\psi|_2.
$$ | {
"domain": "quantumcomputing.stackexchange",
"id": 2524,
"tags": "quantum-state, entanglement, textbook-and-exercises, density-matrix"
} |
Has physics ever tried to explain how we get "sensorial experiences"? | Question: To be clear about what I mean by "sensorial experiences", let's take for example our visual experiences. Certainly, physics (and other sciences) explains a whole process which involves light arriving at our eyes, a transformation into electric signals that go into our brain, where it is further processed and then, kind of magically, we experience colours, shapes and stuff that obviously are much more meaningful to us than mere lightwaves and electricity. So, has physics (or science, in general) ever tried to explain how this is achieved? Although I took the visual experience example, similar examples obviously exist for the other senses.
Answer: This question may be in threat of being closed as not being physics related and being related, rather, to neuroscience, biology, or philosophy.
I'll give what I see as the physics answer to the question:
Has physics ever tried to explain how we get “sensorial
experiences”?
It seems you are asking about the hard problem of consciousness and whether physics posits any sort of solution to it.
The answer is no.
Subjective experience (consciousness) is outside of the scope of physics. As you correctly identify, physics does posit laws for how the matter, electromagnetic fields, etc. in our bodies and brains will react (in a physical sense) to various stimuli, but it makes no claim about how these physical processes relate to what you call our sensorial experiences.
Warning: Speculation ahead
I might even hazard to say that science itself has never really made progress on the hard problem of consciousness either. It is tricky for science to access because science typically deals with things that are observable for many people, while one's own sensorial experience is limited to one person. Note that neuroscience and other fields have much to say about the soft problem of consciousness.
All of that said, a solution to the hard problem of consciousness will probably come from a nexus of hard sciences, social sciences, and philosophy. | {
"domain": "physics.stackexchange",
"id": 68277,
"tags": "everyday-life, biophysics, biology"
} |
Current distribution of Hertz dipole NOT in origin of coordinate system | Question: Greetings, I am having trouble understanding what the distribution should be if the dipole is not at the origin and not along the z-axis. For a Hertz dipole with length $d \ll \lambda$ along the z-axis at the origin of the coordinate system we can say it has a distribution $\mathbf{J}=I_0 \mathbf{z}_0$. Assume now that the dipole is located along the y-axis and at $z=a$, i.e., at $a$ above the coordinate origin. How should the current distribution be expressed now? Thank you.
Answer: Your current distribution is incorrect (even for the origin), both unit-wise (if $I_0$ current) and meaning-wise. You are implying constant current distribution in the whole space.
For point-like z-aligned dipole at position $\mathbf{r}_0$,
$$
\mathbf{J}\left(\mathbf{r}\right)=\mathbf{\hat{z}}I_0 l \delta^{(3)}\left(\mathbf{r}-\mathbf{r}_0\right)
$$
where $\delta^{(3)}$ is the delta-function, and $l$ is some sort of a measure of how 'long' the dipole is. Normally you would pack $\mathbf{\hat{z}}I_0 l = \mathbf{\dot{p}}$, where $\mathbf{p}$ is the dipole moment. Such 'packing' allows one to hide the uncomfortable truth that if $l\to0$, then $I_0\to\infty$ for finite radiated power.
Alternatively you could use something like the Heaviside function $\mathcal{H}$ and define:
$$
\begin{align}
S\left(\zeta\right)=&\mathcal{H}\left(\zeta+\frac{1}{2}\right) - \mathcal{H}\left(\zeta-\frac{1}{2}\right) \\
\mathbf{J}\left(\mathbf{r}\right)=&\mathbf{\hat{z}}I_0 S\left(\frac{z-z_0}{l}\right) \delta\left(x-x_0\right)\delta\left(y-y_0\right)\cos\left(\frac{\pi}{l}\cdot\left(z-z_0\right)\right) \\
\end{align}
$$
Where $l$ is now a 'proper' length of the dipole and $\cos$ is needed for charge conservation.
I am implying harmonic time-dependence in both cases.
Following comments. I think Balanis is not doing the best job of explaining here. Let's back up. In general you are trying to solve
$$
\nabla^2 \mathbf{A}+k^2\mathbf{A}=-\mu\mathbf{J}
$$
I am assuming only electric and no magnetic current densities here. A solution in case of scattering boundary conditions (basically antenna radiates into free-space and no incoming waves) with no outside excitation is then:
$$
\mathbf{A}\left(\mathbf{r}\right)=\frac{\mu}{4\pi}\int d^3 r' \mathbf{J}\left(\mathbf{r}'\right)\cdot\frac{\exp\left(-jk\left|\mathbf{r}-\mathbf{r}'\right|\right)}{\left|\mathbf{r}-\mathbf{r}'\right|}
$$
This tells you the 'amount' of vector potential $\mathbf{A}$ you will see at position $\mathbf{r}$ due to current density at position $\mathbf{r}'$. You have to integrate over the whole space to pick up all the current density. In practice you often define the current density to fit inside a volume $V$, in which case you write $\int_V d^3 r'$. This is Balanis's equation Eq. 3-27, but with more explicit notation.
Now you see why spatial dependence of current density matters - it tells you how to limit your volume integral.
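To see this concretely, here is a hedged numerical sketch (plain Python; all values are illustrative and the common prefactor $\mu I_0/4\pi$ is dropped): integrating $e^{-jk|\mathbf{r}-\mathbf{r}'|}/|\mathbf{r}-\mathbf{r}'|$ over a short $z$-directed segment reproduces the point-dipole value $l\, e^{-jkR}/R$.

```python
import cmath

k = 2.0     # wavenumber, rad per unit length (illustrative)
l = 1e-3    # dipole length, << 1/k and << R
R = 5.0     # observation point on the x-axis, dipole centered at the origin

def integrand(zp):
    # exp(-jk|r - r'|)/|r - r'| for r = (R, 0, 0), r' = (0, 0, zp)
    r = (R * R + zp * zp) ** 0.5
    return cmath.exp(-1j * k * r) / r

# Midpoint rule over the segment z' in [-l/2, l/2]
N = 200
dz = l / N
A_num = sum(integrand(-l / 2 + (i + 0.5) * dz) for i in range(N)) * dz

# Point-dipole limit of the same integral: l * exp(-jkR)/R
A_point = l * cmath.exp(-1j * k * R) / R
assert abs(A_num - A_point) / abs(A_point) < 1e-6
```

For a segment much shorter than both the wavelength and the observation distance, the integral and the delta-function (point-dipole) form agree to high accuracy, which is why the limit can be taken before integrating.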
Next, how to represent a point-dipole antenna? Balanis talks about end-plates on the wires, Fig. 4-1. This allows him to define a constant current distribution in the wire with finite length. A proper expression for such a distribution would be:
$$
\begin{align}
S\left(\zeta\right)=&\mathcal{H}\left(\zeta+\frac{1}{2}\right) - \mathcal{H}\left(\zeta-\frac{1}{2}\right) \\
\mathbf{J}\left(\mathbf{r}\right)=&\mathbf{\hat{z}}I_0 S\left(\frac{z-z_0}{l}\right) \delta\left(x-x_0\right)\delta\left(y-y_0\right) \\
\end{align}
$$
However Balanis does nothing interesting with this finite length of the antenna, so we can take the limit before computing the integral. This strays outside your question, so I will leave it for now.
Let us get back to solution and sub in the last current density. Using the properties of the delta-functions:
$$
\begin{align}
\mathbf{A}\left(\mathbf{r}\right)=&\frac{\mu}{4\pi}\int d^3 r' \mathbf{J}\left(\mathbf{r}'\right)\cdot\frac{\exp\left(-jk\left|\mathbf{r}-\mathbf{r}'\right|\right)}{\left|\mathbf{r}-\mathbf{r}'\right|} \\
=&\mathbf{\hat{z}}\frac{\mu I_0}{4\pi}\int^{z_0+l/2}_{z_0-l/2} dz'\frac{\exp\left(-jk\sqrt{\left(x-x_0\right)^2+\left(y-y_0\right)^2+\left(z-z'\right)^2}\right)}{\sqrt{\left(x-x_0\right)^2+\left(y-y_0\right)^2+\left(z-z'\right)^2}}
\end{align}
$$ | {
"domain": "physics.stackexchange",
"id": 73349,
"tags": "electromagnetism"
} |
What does the 'displacement' refer to in the definition of work? | Question: The definition of work given in books is The work is said to be done by a force on a body, when the body is moved by the force through some 'displacement'.
Now consider a body of mass $m$ at rest. When a force $F$ is applied on it, it gets accelerated and starts moving. It will keep moving until another force is applied on the body. If no other force is applied on the body to prevent its state of motion, it will be continuously covering more and more displacement (in the direction of applied force).
So, in the above definition what does the word 'displacement' refer to?
Answer: It's true that terminology can sometimes be confusing. Let's go to the clear math:
If the body moves in straight line, you have to take the component along the force's direction.
For example, if you push a block during a 3m path in front of you, then the distance is 3m.
However, if the block moves 3 m to the north-west but the force is north-directed, then you must take only the component along the direction of the force, i.e. the northern component. You'll have $3\,\mathrm{m}\cdot \cos(45°)$
In sum, if the trajectory $s$ and the force $F$ make an angle $\alpha$, then
$$W=F\cdot s \cdot \cos(\alpha)$$
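That formula can be cross-checked against the component (dot-product) picture from the north-west example above (plain Python; the 10 N force magnitude is an assumed illustrative value):

```python
from math import cos, sqrt, radians, isclose

F = 10.0   # N, force pointing north (+y); illustrative value
s = 3.0    # m, path length, directed to the north-west

# Displacement vector in (east, north) components: north-west at 45 degrees
d = (-s / sqrt(2), s / sqrt(2))
Fv = (0.0, F)

W_dot = Fv[0] * d[0] + Fv[1] * d[1]       # W = F . d
W_formula = F * s * cos(radians(45))       # W = F s cos(alpha)
assert isclose(W_dot, W_formula, rel_tol=1e-9)
```

Both routes give about 21.2 J, confirming that only the component of displacement along the force does work.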
This applies only to straight lines. If the movement is curved, you must divide the curve into very small segments of small distance $ds$, and the small amount of work in each segment is:
$$dW= F \cdot ds \cdot \cos(\alpha) $$ | {
"domain": "physics.stackexchange",
"id": 50526,
"tags": "newtonian-mechanics, forces, work, definition, displacement"
} |
Occupancy grid not displaying in Rviz | Question:
I have a custom model which I created from my xacro files and I can move around using a teleop node. BUT I now want to map a virtual room and I added the AMCL and Gmapping nodes to my launch file but I notice a number of things in Rviz which are definitely NOT right:
The occupancy grid is not visible in Rviz so the room is not being mapped
The /map topic has no data published into it
2D pose estimates in Rviz does not show particle filters
The following is my launch file:
<?xml version="1.0"?>
<launch>
<!-- PARAMETERS -->
<param name="robot_description" command="$(find xacro)/xacro $(find sweep_bot)/urdf/cleaner.xacro"/>
<!-- RVIZ -->
<node name='robot_state_publisher' pkg='robot_state_publisher' type='robot_state_publisher'/>
<node name='joint_state_publisher_gui' pkg='joint_state_publisher_gui' type='joint_state_publisher_gui'/>
<node name='rviz' pkg='rviz' type='rviz'/>
<!-- GAZEBO -->
<include file="$(find gazebo_ros)/launch/willowgarage_world.launch"></include>
<node name='sweeper_bot' pkg='gazebo_ros' type='spawn_model' args='-urdf -model sweeper -param robot_description'/>
<!--- Run AMCL -->
<include file="$(find amcl)/examples/amcl_omni.launch" />
<!-- GMAPPING -->
<node pkg="gmapping" type="slam_gmapping" name="gmapping" output="screen"></node>
<!-- Move Base -->
<node pkg="move_base" type="move_base" respawn="false" name="move_base" output="screen"/>
</launch>
My /tf and /odom topics show data is being published into them.
My terminal prints the following message repeatedly:
[ WARN] [1612313212.923369324, 5.005000000]: Timed out waiting for transform from base_link to map to become available before running costmap, tf error: canTransform: source_frame base_link does not exist.. canTransform returned after 0.1 timeout was 0.1.
As shown in the image above, rviz shows the laser detects obstacles but the occupancy grid does not display.
My questions:
I know my /tf is generated from joints as defined in my urdf file(s) but is the data in the /odom topic also auto generated from the urdf ?
Can anyone give me any insight into why my /map topic is empty ?
**UPDATE**
Below is the tf tree:
Originally posted by sisko on ROS Answers with karma: 247 on 2021-02-02
Post score: 1
Original comments
Comment by Delb on 2021-02-02:
First AMCL is for localization when you already know the map and gmapping is for localization while creating the map so they shouldn't be used at the same time. You should try to create the map first using only gmapping and then use AMCL with the map you've created.
Now you might have something wrong with your TFs, can you show us your tf tree ? (Using rosrun rqt_tf_tree rqt_tf_tree)
Comment by sisko on 2021-02-02:
@Delb: Thanks for your input. I added my tf tree as requested.
Answer:
I figured it out.
The GMapping node was missing configuration parameters which would tell it, amongst other things, what topic to publish map data to. See below:
<node pkg="gmapping" type="slam_gmapping" name="gmapping" output="screen">
<param name="map_frame" value="map"/>
<param name="base_frame" value="chasis"/>
<param name="odom_frame" value="odom"/>
</node>
All available parameters are listed at http://wiki.ros.org/gmapping.
Originally posted by sisko with karma: 247 on 2021-02-03
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 36034,
"tags": "ros, navigation, rviz, ros-melodic, gmapping"
} |
Examples of the price of abstraction? | Question: Theoretical computer science has provided some examples of "the price of abstraction." The two most prominent are for Gaussian elimination and sorting. Namely:
It is known that Gaussian elimination is optimal for, say, computing the determinant if you restrict operations to rows and columns as a whole 1. Obviously Strassen's algorithm does not obey that restriction, and it is asymptotically better than Gaussian elimination.
In sorting, if you treat the elements of the list as black boxes that can only be compared and moved around, then we have the standard $n \log n$ information-theoretic lower bound. Yet fusion trees beat this bound by, as far as I understand it, clever use of multiplication.
Are there other examples of the price of abstraction?
To be a bit more formal, I'm looking for examples where a lower bound is known unconditionally in some weak model of computation, but is known to be violated in a stronger model. Furthermore, the weakness of the weak model should come in the form of an abstraction, which admittedly is a subjective notion. For example, I do not consider the restriction to monotone circuits to be an abstraction. Hopefully the two examples above make clear what I'm looking for.
1 Klyuyev, V. V., and N. I. Kokovkin-Shcherbak: On the minimizations of the number of arithmetic operations for the solution of linear algebraic systems of equations. Translation by G. I. Tee: Technical Report CS 24, June 14, 1965, Computer Science Dept., Stanford University. Available online.
Answer: Purely functional programming is a popular abstraction that offers, at least according to its proponents, a great increase in the expressive power of code, among other benefits. However, since it is a restrictive model of the machine — in particular, not allowing mutable memory — it raises the question of asymptotic slowdown compared to the usual (RAM) model.
There's a great thread on this question here. The main takeaways seem to be:
You can simulate mutable memory with a balanced binary tree, so the worst case slowdown is O(log n).
With eager evaluation, there are problems for which this is the best you can do.
With lazy evaluation, it is not known whether or not there is a gap. However, there are many natural problems for which no known purely functional algorithm matches the optimal RAM complexity.
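To make the first bullet concrete, here is a minimal sketch of such a simulation (plain Python standing in for a purely functional language: no mutation is used, and the size is restricted to powers of two for simplicity). An update copies only the $O(\log n)$ nodes on one root-to-leaf path and shares everything else:

```python
# A persistent "array" as a complete binary tree (size must be a power of two).
# get and set_ are O(log n); set_ returns a NEW tree sharing untouched subtrees.

def build(xs):
    if len(xs) == 1:
        return ('leaf', xs[0])
    mid = len(xs) // 2
    return ('node', build(xs[:mid]), build(xs[mid:]))

def get(t, i, n):
    if t[0] == 'leaf':
        return t[1]
    half = n // 2
    return get(t[1], i, half) if i < half else get(t[2], i - half, half)

def set_(t, i, n, v):
    if t[0] == 'leaf':
        return ('leaf', v)
    half = n // 2
    if i < half:
        return ('node', set_(t[1], i, half, v), t[2])
    return ('node', t[1], set_(t[2], i - half, half, v))

t0 = build([10, 20, 30, 40])
t1 = set_(t0, 2, 4, 99)       # "write" index 2 without mutating t0
assert get(t1, 2, 4) == 99
assert get(t0, 2, 4) == 30    # the old version is still intact
```

Each read or write of the simulated array costs a logarithmic number of tree steps instead of the RAM model's O(1), which is exactly the slowdown the thread discusses.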
It seems to me that this is a surprisingly basic question to be open. | {
"domain": "cstheory.stackexchange",
"id": 5086,
"tags": "ds.algorithms, reference-request, big-list, lower-bounds"
} |
What do Alice and Bob receive after an entangled pair of qubits (generated by any source)? | Question: I'm a bit confused regarding the statement "Alice and Bob receive one qubit each from an entangled pair of qubits". For example, Alice has 2 qubits = |0⟩, |1⟩, and Bob has 2 qubits = |0⟩, |1⟩. After superposition, an entangled pair is generated, i.e., a qubit pair = 1/sqrt(2) [|00⟩ + |11⟩] is formed. What do Alice and Bob receive at their ends after entanglement? I would appreciate a diagram of the qubits received at both ends.
If Alice and Bob receive one qubit each from an entangled pair of qubits, how is this known or measured? As it is stated that measurement collapses the qubits' states.
Answer: An initial source creates entangled qubits, e.g., in this description here:
$$\vert \Psi \rangle = \frac{1}{\sqrt{2}}(\vert 00\rangle + \vert 11\rangle)$$
I think the confusion arises from the fact that the above state is the state for two qubits; Alice and Bob each receive one of the physical qubits that, overall, must be described by the state above.
The subtlety then is the difference between a physical implementation of an entangled pair of qubits, and a qubit itself, with respect to their information-theoretic representation.
An entangled state of the form above might be describing the state of a pair of photons. These are two photons, then one is sent to Alice and another is sent to Bob. The overall description of the state must be the one from above; but Alice and Bob will only be able to act on the photonic modes that they receive. Let us use labels to describe the situation better:
$$\vert \Psi \rangle = \frac{1}{\sqrt{2}}(|0\rangle_A\vert 0 \rangle_B + \vert 1\rangle_A \vert 1 \rangle_B)$$
These labels are to represent the idea that the two parties each have the corresponding qubit, but the state must be characterized by this overall entangled state. Alice then might act on her part of the qubit, and Bob equivalently, meaning that they can operate with $U_A \otimes \mathbb{1}$ (for Alice) and $\mathbb{1} \otimes U_B$ (for Bob).
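One way to see what each party holds locally (an illustrative plain-Python computation, not part of the original answer): tracing out Bob's qubit from $\vert \Psi \rangle\langle \Psi \vert$ leaves Alice with the maximally mixed state $\mathbb{1}/2$, so her local measurement outcomes look like fair coin flips.

```python
from math import sqrt

s = 1 / sqrt(2)
psi = [s, 0, 0, s]          # (|00> + |11>)/sqrt(2), basis order 00, 01, 10, 11

rho = [[a * b for b in psi] for a in psi]   # the full two-qubit state |Psi><Psi|

# Partial trace over B: (rho_A)[a][a'] = sum over b of rho[ab][a'b]
rho_A = [[sum(rho[2 * a + b][2 * ap + b] for b in range(2)) for ap in range(2)]
         for a in range(2)]

# Alice's reduced state is 1/2 on the diagonal, 0 off-diagonal
assert all(abs(rho_A[i][j] - (0.5 if i == j else 0.0)) < 1e-12
           for i in range(2) for j in range(2))
```

This is why neither party can detect the entanglement from local statistics alone: each sees a maximally mixed qubit, and the correlations only show up when they compare outcomes.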
Physically, in the example I gave, this amounts for instance for the photon that Alice has passing through beam-splitters or phase-shifters on her side and Bob doing similar things on his side of the laboratory.
Important to note is that the entangled state representing the two photons is created and later shared, in this setup, and it is absolutely different from statistically representing mixtures of $|00\rangle$ or $|11\rangle$.
Finally, to discuss measurement. Indeed, measurements collapse the wave function, and the state would not be entangled anymore. In the case above, measuring over $\{\vert 0\rangle \langle 0 \vert, \vert1\rangle\langle 1 \vert\}$, the state would collapse into either $|00\rangle$ or $|11\rangle$ depending on the outcomes each party gets. I hope this helped clarify things a bit, but it will take some more examples and reflection to understand entanglement more deeply. | {
"domain": "quantumcomputing.stackexchange",
"id": 4626,
"tags": "entanglement"
} |
What is the difference between Statistical Mechanics and Quantum Mechanics | Question: What is the difference between Statistical and Quantum Mechanics?
In both we try to study the properties of small particles using probability and then apply the results to macroscopic systems.
Answer: In statistical mechanics the system at any time is in a definite microstate (e.g. positions and velocities of all the particles in a gas), yet we don't know what this state is. Instead, we define certain global properties of the system that are defined on longer time scales (like total energy, entropy, temperature, volume) that are useful in many processes and try to predict them (in equilibrium) from the microscopic degrees of freedom.
In quantum mechanics, on the other hand, there are many options for what we mean by "states". The most intuitive definition is to define them in terms of things we can measure, like positions or velocities of particles, etc. However, the state is actually a wave in the space of these states and any given particle can actually spread out in state space and occupy many of these states simultaneously, with a different "amplitude" $\psi$, just as a wave can spread out over space with a different amplitude at any point. (There are also restrictions on these wave functions, such as the fact that it must spread both in position $\Delta x$ and in momentum $\Delta p$ such that $\Delta x \Delta p \geq \hbar /2$, and similarly for other variables.) But basically the system can spread over what we would call "measurable" states.
Probabilities enter quantum mechanics, for example, when you try to measure the position of a particle that is spread over many different positions. This is where quantum mechanics gets confusing and leads to endless discussions about reality, but in short, the wave function "collapses" and you only measure one position, with probability $|\psi(x)|^2$. | {
"domain": "physics.stackexchange",
"id": 57120,
"tags": "quantum-mechanics, statistical-mechanics"
} |
This LoginPane is a Pain | Question: Well, it really isn't a big pain: but I fear of security risks (if that is even possible).
Background:
I decided to (sort of) abandon my Sudoku project (because I accidentally deleted it from disk), and was given the idea of a JavaFX Helper Library. I started with the LoginPane, as I find I use login popups quite often.
The code is here:
import javafx.geometry.HPos;
import javafx.geometry.Insets;
import javafx.geometry.Pos;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.control.Label;
import javafx.scene.control.PasswordField;
import javafx.scene.control.TextField;
import javafx.scene.layout.ColumnConstraints;
import javafx.scene.layout.GridPane;
import javafx.scene.layout.HBox;
import javafx.stage.Stage;
public class LoginPane {
public static final String DEFAULT_TITLE = "Login";
public static final String USERNAME_LABEL_DEFAULT_TEXT = "Username: ";
public static final String PASSWORD_LABEL_TEXT = "Password: ";
public static final String DEFAULT_LOGIN_TEXT = "Login";
public static final String CANCEL_TEXT = "Cancel";
public static final boolean DEFAULT_HAS_CANCEL = true;
public static final boolean DEFAULT_CAN_CLOSE = true;
private static final Insets MAIN_PANE_PADDING = new Insets(10);
private static final int MAIN_PANE_GAP = 10;
private static final int BUTTON_PREF_WIDTH = 60;
private static final int BUTTON_PANE_SPACING = 10;
private static final boolean IS_RESIZABLE = false;
private static final ColumnConstraints COL_1_CONSTRAINS = new ColumnConstraints(
70);
private static final ColumnConstraints COL_2_CONSTRAINS = new ColumnConstraints(
200);
private static String username = null;
private static char[] password = null;
public static UserInfo showLoginPane() {
return showLoginPane(DEFAULT_TITLE);
}
public static UserInfo showLoginPane(String title) {
return showLoginPane(title, USERNAME_LABEL_DEFAULT_TEXT);
}
public static UserInfo showLoginPane(String title,
String usernameLabelText) {
return showLoginPane(title, usernameLabelText, DEFAULT_LOGIN_TEXT);
}
public static UserInfo showLoginPane(String title,
String usernameLabelText, String loginText) {
return showLoginPane(title, usernameLabelText, loginText, DEFAULT_HAS_CANCEL, DEFAULT_CAN_CLOSE);
}
public static UserInfo showLoginPane(String title,
String usernameLabelText, String loginText, boolean hasCancel,
boolean canClose) {
Stage stage = new Stage();
GridPane mainPane = new GridPane();
mainPane.setPadding(MAIN_PANE_PADDING);
mainPane.setHgap(MAIN_PANE_GAP);
mainPane.setVgap(MAIN_PANE_GAP);
mainPane.getColumnConstraints().addAll(COL_1_CONSTRAINS,
COL_2_CONSTRAINS);
Label userLabel = new Label(usernameLabelText);
GridPane.setHalignment(userLabel, HPos.RIGHT);
TextField usernameField = new TextField();
GridPane.setHalignment(usernameField, HPos.LEFT);
mainPane.addRow(0, userLabel, usernameField);
Label passwordLabel = new Label(PASSWORD_LABEL_TEXT);
GridPane.setHalignment(passwordLabel, HPos.RIGHT);
PasswordField passwordField = new PasswordField();
GridPane.setHalignment(passwordField, HPos.LEFT);
mainPane.addRow(1, passwordLabel, passwordField);
HBox buttonPane = new HBox(BUTTON_PANE_SPACING);
Button login = new Button(loginText);
login.setPrefWidth(BUTTON_PREF_WIDTH);
buttonPane.getChildren().add(login);
login.setOnAction(e -> {
username = usernameField.getText();
password = passwordField.getText().toCharArray();
});
if (hasCancel) {
Button cancel = new Button(CANCEL_TEXT);
cancel.setPrefWidth(BUTTON_PREF_WIDTH);
buttonPane.getChildren().add(cancel);
cancel.setOnAction(e -> stage.close());
}
buttonPane.setAlignment(Pos.CENTER_RIGHT);
mainPane.add(buttonPane, 1, 2);
GridPane.setHalignment(buttonPane, HPos.RIGHT);
Scene scene = new Scene(mainPane);
stage.setTitle(title);
stage.setScene(scene);
stage.setResizable(IS_RESIZABLE);
stage.setOnCloseRequest(e -> {
if (canClose) {
stage.close();
} else {
e.consume();
}
});
stage.showAndWait();
return new UserInfo(username, password);
}
}
and if you are wondering what UserInfo is, it's here:
import java.util.Arrays;
public class UserInfo {
private final String username;
private final char[] password;
public UserInfo(String username, char[] password) {
this.username = username;
this.password = password;
}
public String getUsername() {
return username;
}
public char[] getPassword() {
return Arrays.copyOf(password, password.length);
}
}
Snapshot (showLoginPane() with no arguments):
Explanation:
The showLoginPane() method is the basic method. It provides default values for all the possible editable values. The editable values are as follows:
String title: the title of the popup.
String usernameLabelText: the text that replaces the default "Username: ".
String loginText: the text that replaces the default "Login" text on the button.
boolean hasCancel: if true, the cancel button is added. If false, then the cancel button is not shown.
boolean canClose: if true, the popup is closeable through the X button. Otherwise, it will not close.
Questions:
Are the static methods, constants, and return values username and password good practice? I couldn't find a way around this.
Which constants should be public, and which private?
Naming?
Anything else?
Answer: Constants
All the public statics should be private. If clients really care, you can document the default label, but then you can't change the label without it being an API spec change. I wouldn't expect most clients need to know the default. Either they care to specify or they don't.
For the privates, my personal preference would be to inline all the ones that will only ever be used in one place, but I can see the argument for extracting them.
Design
I would suggest making a LoginPane an instantiable class, rather than just a helper method. Conceptually, a LoginPane is a thing, and so representing instances as objects would seem to be reasonable. It would have a constructor and a public UserInfo show() method, which would make it reusable.
You should also consider using the Builder pattern to avoid the telescoping static method problem that you have. It would allow clients to specify only the values they care about. Documentation can indicate defaults - a good idea for the booleans, at least, though per above the labels are debatable.
Naming
For consistency, canCancel would be better than hasCancel. The documentation can indicate that the cancel option is only made available if canCancel is true. Non-abbreviated names are generally better - COLUMN_1_CONSTRAINTS > COL_1_CONSTRAINTS.
Correctness
The passwordLabel and cancelButton don't use the non-default label values, even if they're provided as method arguments.
I'm not sure how much making the password a char[] actually helps you, because it's already in memory as a String from getText(). I'm not a JavaFX expert, so I'm not sure if there's a more correct way to do it, but a quick search implies there is not.
You have a bug at the intersection of canClose and hasCancel. If hasCancel is true and canClose is false, you render a cancel button which does nothing.
You also don't differentiate at all between somebody hitting [Ok] without entering any values and somebody hitting [Cancel]. It's unclear if that's relevant to clients, but probably it is.
Here's something I slapped together that addresses many of the issues I discussed.
public final class LoginPane {
private static final Insets MAIN_PANE_PADDING = new Insets(10);
private static final int MAIN_PANE_GAP = 10;
private static final int BUTTON_PREFERRED_WIDTH = 60;
private static final int BUTTON_PANE_SPACING = 10;
private static final boolean IS_RESIZABLE = false;
private static final ColumnConstraints COLUMN_1_CONSTRAINTS = new ColumnConstraints(70);
private static final ColumnConstraints COLUMN_2_CONSTRAINTS = new ColumnConstraints(200);
private String username = "";
private char[] password = new char[0];
private final Stage stage = new Stage();
private LoginPane(final Builder builder) {
final GridPane mainPane = new GridPane();
mainPane.setPadding(MAIN_PANE_PADDING);
mainPane.setHgap(MAIN_PANE_GAP);
mainPane.setVgap(MAIN_PANE_GAP);
mainPane.getColumnConstraints().addAll(COLUMN_1_CONSTRAINTS, COLUMN_2_CONSTRAINTS);
final Label userLabel = new Label(builder.usernameLabel);
GridPane.setHalignment(userLabel, HPos.RIGHT);
final TextField usernameField = new TextField();
GridPane.setHalignment(usernameField, HPos.LEFT);
mainPane.addRow(0, userLabel, usernameField);
final Label passwordLabel = new Label(builder.passwordLabel);
GridPane.setHalignment(passwordLabel, HPos.RIGHT);
final PasswordField passwordField = new PasswordField();
GridPane.setHalignment(passwordField, HPos.LEFT);
mainPane.addRow(1, passwordLabel, passwordField);
final HBox buttonPane = new HBox(BUTTON_PANE_SPACING);
final Button login = new Button(builder.loginText);
login.setPrefWidth(BUTTON_PREFERRED_WIDTH);
buttonPane.getChildren().add(login);
login.setOnAction(e -> {
username = usernameField.getText();
password = passwordField.getText().toCharArray();
});
if (builder.canCancel) {
final Button cancel = new Button(builder.cancelText);
cancel.setPrefWidth(BUTTON_PREFERRED_WIDTH);
buttonPane.getChildren().add(cancel);
cancel.setOnAction(e -> stage.close());
}
buttonPane.setAlignment(Pos.CENTER_RIGHT);
mainPane.add(buttonPane, 1, 2);
GridPane.setHalignment(buttonPane, HPos.RIGHT);
final Scene scene = new Scene(mainPane);
this.stage.setTitle(builder.title);
this.stage.setScene(scene);
this.stage.setResizable(IS_RESIZABLE);
this.stage.setOnCloseRequest(e -> {
if (builder.canCancel || builder.canClose) {
stage.close();
} else {
e.consume();
}
});
}
public UserInfo showAndWait() {
this.stage.showAndWait();
return new UserInfo(this.username, this.password);
}
public static final class Builder {
private String title = "Login";
private String usernameLabel = "Username: ";
private String passwordLabel = "Password: ";
private String loginText = "Login";
private String cancelText = "Cancel";
private boolean canCancel = true;
private boolean canClose = true;
public Builder() {
}
public Builder title(final String title) {
this.title = title;
return this;
}
public Builder usernameLabel(final String usernameLabel) {
this.usernameLabel = usernameLabel;
return this;
}
public Builder passwordLabel(final String passwordLabel) {
this.passwordLabel = passwordLabel;
return this;
}
public Builder loginText(final String loginText) {
this.loginText = loginText;
return this;
}
public Builder cancelText(final String cancelText) {
this.cancelText = cancelText;
return this;
}
/**
* Whether or not the user can cancel this login dialog without entering values. True by default.
* @param canCancel if true, the user can cancel the login dialog without entering values.
* @return the instance of this Builder for method chaining. Will never return null.
*/
public Builder canCancel(final boolean canCancel) {
this.canCancel = canCancel;
return this;
}
public Builder canClose(final boolean canClose) {
this.canClose = canClose;
return this;
}
public LoginPane build() {
return new LoginPane(this);
}
}
}
Using it might look something like:
final LoginPane loginPane = new LoginPane.Builder().title("New Title").canCancel(false).build();
loginPane.showAndWait();
If you wanted to be really fancy, you could add public static methods to the LoginPane class which create and return a Builder with that value set. Then the client call would look more like:
final LoginPane loginPane = LoginPane.title("New Title").canCancel(false).build();
loginPane.showAndWait();
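The pattern itself is independent of JavaFX; here is a minimal, self-contained sketch of the idea with purely illustrative names (Dialog standing in for LoginPane, with just two properties):

```java
// Illustrative sketch only: "Dialog" stands in for LoginPane, to show
// static entry points that return a pre-seeded Builder.
class Dialog {
    final String title;
    final boolean canCancel;

    private Dialog(Builder b) {
        this.title = b.title;
        this.canCancel = b.canCancel;
    }

    // Static entry points: each creates a Builder with one value already set.
    static Builder title(String title) {
        return new Builder().title(title);
    }

    static Builder canCancel(boolean canCancel) {
        return new Builder().canCancel(canCancel);
    }

    static final class Builder {
        private String title = "Login";   // defaults live here
        private boolean canCancel = true;

        Builder title(String title) {
            this.title = title;
            return this;
        }

        Builder canCancel(boolean canCancel) {
            this.canCancel = canCancel;
            return this;
        }

        Dialog build() {
            return new Dialog(this);
        }
    }
}
```

Each static entry point just creates a Builder and forwards one value, so the chain reads naturally from the class name.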
It makes the code a bit easier to read at the cost of a slightly messier API. | {
"domain": "codereview.stackexchange",
"id": 16296,
"tags": "java, authentication, javafx"
} |
Java Magic square program | Question: Here is my improved version of my Magic square program, following this earlier version.
I also added a few comments.
Any help for improvements would be really appreciated.
import java.util.HashSet;
import java.util.Scanner;
public class MagicSquare {
private int[] square;
private int[] row_sum;
private int[] col_sum;
private int magicNumber;
private int size;
private boolean[] usedNumbers;
private int solutions=0;
private int squareSize;
public MagicSquare(int size) {
this.size = size;
this.usedNumbers = new boolean[size * size + 1];
this.square = new int[size * size];
this.row_sum = new int[size];
this.col_sum = new int[size];
this.magicNumber = ((size * size * size + size) / 2);
this.squareSize = size * size;
}
private boolean solve(int x) {
if (x == squareSize && checkDiagonals()) {
for (int i = 0; i < size; i++) {
if (row_sum[i] != magicNumber || col_sum[i] != magicNumber) {
return false; // no solution, backtrack
}
}
solutions++;
System.out.println("Solution: "+solutions);
printSquare();
return false; // search for next solution
}
// the 1d square is mapped to 2d square
HashSet<Integer> validNumbers = new HashSet<Integer>(); // all valid Numbers from one position
if(x%size == size-1 && magicNumber-row_sum[(x/size)] <= squareSize &&
usedNumbers[magicNumber-row_sum[x/size]] == false) {
validNumbers.add(magicNumber-row_sum[(x/size)]); // All values in a row, except for the last one were set
}
if(x/size == size-1 && magicNumber-col_sum[(x%size)] <= squareSize && //
usedNumbers[magicNumber-col_sum[x%size]] == false) {
validNumbers.add(magicNumber-col_sum[x%size]); // All values in a col, except for the last one, were set
}
if(x%size != size-1 && x/size != size-1) { // for all other positions
for(int i=1; i<usedNumbers.length; i++) {
if (usedNumbers[i]== false) validNumbers.add(i);
}
}
if(validNumbers.size()==0) {
return false; // no valid numbers, backtrack
}
for (int v : validNumbers) {
row_sum[x/size] += v;
col_sum[x%size] += v;
if (row_sum[x/size] <= magicNumber && col_sum[x%size] <= magicNumber) {
square[x] = v;
usedNumbers[v] = true;
if (solve(x + 1) == true) {
return true;
}
usedNumbers[v] = false;
square[x] = 0;
}
row_sum[x/size] -= v;
col_sum[x%size] -= v;
}
return false;
}
private boolean checkDiagonals() {
int diagonal1 = 0;
int diagonal2 = 0;
for(int i=0; i<squareSize; i=i+size+1) {
diagonal1 = diagonal1 + square[i];
}
for(int i=size-1; i<squareSize-size+1; i = i+size-1) {
diagonal2 = diagonal2 + square[i];
}
return diagonal1==magicNumber && diagonal2==magicNumber;
}
private void printSquare() {
for (int i = 0; i < squareSize; i++) {
if(i%size ==0) {
System.out.println();
}
System.out.print(square[i] + " ");
}
System.out.println();
}
public static void main(String[] args) {
try {
Scanner sc = new Scanner(System.in);
int size = sc.nextInt();
MagicSquare m = new MagicSquare(size);
sc.close();
long start = System.currentTimeMillis();
m.solve(0);
long duration = System.currentTimeMillis() - start;
System.out.println("Runtime in ms : " + duration+" = "+duration/1000 + "sec");
System.out.println("There are "+m.solutions+" solutions with mirroring");
} catch (Exception ex) {
ex.printStackTrace();
}
}
}
Answer: Don't repeat yourself
You use x/size and x%size a lot. You can easily assign them to local variables, and your code becomes much more readable. x is not the best name for an index; consider using i.
int x = i % size;
int y = i / size;
if(x == size-1 && magicNumber-row_sum[y] <= squareSize &&
usedNumbers[magicNumber - row_sum[y]] == false) {
validNumbers.add(magicNumber - row_sum[y]); // All values in a row, except for the last one were set
}
if(y == size-1 && magicNumber - col_sum[x] <= squareSize && //
usedNumbers[magicNumber - col_sum[x]] == false) {
validNumbers.add(magicNumber - col_sum[x]); // // All values in a col, except for the last one were set
}
.... | {
"domain": "codereview.stackexchange",
"id": 33215,
"tags": "java, performance, recursion, iteration, backtracking"
} |
How to prove that a system is real-time? | Question: I'm writing a paper on the system I designed which, upon receiving an event, performs a number of calculations and publishes the result to several subscribers. I can calculate the amount of time it takes from the moment the system is informed about the event to the moment when all subscribers have successfully received the calculation results. I perform multiple experiments to measure the amount of time for various numbers of subscribers.
In my paper, I'm claiming that my system is capable of real-time responses to the first event. Could you please give me some hints on how I can prove that based on the data I collected? Thanks
Answer: You need to provide a more precise description of your system and real-time
requirements.
If you are interested in only the timeliness and timeliness predictability of
the response to the first event, your question is about that one event response
and not about the subsequent events responses and thus the system.
You didn't specify your required timeliness and timeliness predictability for
responding to that event. It sounds like you are satisfied with your empirical
measurements for subsequent responses to varied numbers of recipients. So for
the first event response, you need to specify your desired response time
constraint, such as a deadline. Then you need to specify the desired
predictability of that response's timeliness.
Perhaps you want to know a priori that the deadline will always be met--i.e.,
that the response is deterministic (deterministic is one end-point on the scale
of predictability). If so, you need to identify what presumptions about your
system and its execution environment you make for that prediction; that is
called a system model.
Your question indicates that your measurements do not cause you to expect that
subsequent responses be deterministic--instead you expect that if the first
response is deterministic, your subsequent response measurements have been
empirically predicted accurately.
There is a large body of literature about determining worst case execution times
(the first response latency in your case), and the presumptions that are
necessary for those times to be accurate. Formal proof that your first response
time is deterministic may or may not be feasible, depending on the unstated
presumptions you are making. (In general, such proofs get rapidly more difficult
as more things are presumed to be more dynamic instead of all being static.)
But if instead of requiring that the first response time be deterministic, you
are either satisfied with or forced to concede that your system model provides a
non-deterministic first response time, you have to explicitly deal with the
response time predictability.
There is a vast body of theory and practice on that topic in general (although not in the conventional real-time computing field).
In your case, you could simply do sensitivity experiments to measure and establish bounds on the variability of the first response timeliness, by varying parameters of your system model.
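For instance, the empirical-bounds step can be as simple as sample statistics over the measured first-response latencies (a sketch; the latency values are made up):

```java
import java.util.Arrays;

class LatencyBounds {
    // Returns {min, max, mean, sample standard deviation, mean + 3*std}
    // for a set of measured latencies.
    static double[] stats(double[] xs) {
        double min = Arrays.stream(xs).min().getAsDouble();
        double max = Arrays.stream(xs).max().getAsDouble();
        double mean = Arrays.stream(xs).average().getAsDouble();
        // Sample variance (n - 1 denominator).
        double var = Arrays.stream(xs)
                .map(x -> (x - mean) * (x - mean))
                .sum() / (xs.length - 1);
        double std = Math.sqrt(var);
        return new double[] { min, max, mean, std, mean + 3 * std };
    }

    public static void main(String[] args) {
        // Made-up first-response latencies in milliseconds.
        double[] ms = { 1.9, 2.1, 2.0, 2.4, 1.8, 2.2, 2.0, 2.1 };
        double[] s = stats(ms);
        System.out.printf("min=%.2f max=%.2f mean=%.3f std=%.3f bound=%.3f%n",
                s[0], s[1], s[2], s[3], s[4]);
    }
}
```

A bound like mean plus three sample standard deviations is only meaningful under a distributional assumption, which is exactly what distribution fitting makes explicit.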
To convert that into analytical predictability, you can do probability distribution function fitting. For example, perhaps your measured first response times are normally distributed, with a mean and variance based on your measurements. While not a proof, that gives you an analytical way to reason about that response time. | {
"domain": "cs.stackexchange",
"id": 9276,
"tags": "real-time"
} |
Can contemporary technology simulate the spectrum of a star? | Question: The ability to manipulate light spectra (as in mercury lamps, sodium lamps, infrared) is within the ambit of contemporary technology. Stars are said to be unique in the spectrum they emit;
http://skyserver.sdss.org/dr1/en/proj/basic/spectraltypes/ writes to say
The best tool we have for studying a star's light is the star's spectrum. A spectrum (the plural is "spectra") of a star is like the star's fingerprint. Just like each person has unique fingerprints, each star has a unique spectrum.
With my limited understanding of physics, I may have understood this wrong!
Can contemporary technology simulate the spectrum of a star?
Answer: It depends how good of an approximation you want. If you just want something that looks like starlight to the human eye then it's not too hard - you can buy Solar spectrum bulbs at any hardware store. But of course, this isn't going to give you a great approximation, and it's only going to be anywhere close in the visible wavelengths.
If you want something that's going to be a fairly good approximation across a very wide range of wavelengths then just heat any random object up to between $3600\:\mathrm{K}$ and $50,000\:\mathrm{K}$, depending on the star. (Those massive blue stars at $50,000\:\mathrm{K}$ will present a challenge, but I think it's within the bounds of experimental possibility.) This works because stars and other hot object both emit spectra that are close to the ideal black body spectrum. You can get an idea of how close by comparing the curve on this graph to the edge of the yellow region:
(image source.)
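For reference, the ideal black-body curve in that comparison is Planck's law; a small sketch evaluating it in SI units (the physical constants and the 5778 K solar temperature are standard values):

```java
class Planck {
    static final double H = 6.62607015e-34; // Planck constant, J*s
    static final double C = 2.99792458e8;   // speed of light, m/s
    static final double K = 1.380649e-23;   // Boltzmann constant, J/K

    // Spectral radiance B(lambda, T) of an ideal black body, W / (sr * m^3).
    static double radiance(double lambdaMetres, double tempKelvin) {
        double prefactor = 2.0 * H * C * C / Math.pow(lambdaMetres, 5);
        return prefactor / Math.expm1(H * C / (lambdaMetres * K * tempKelvin));
    }

    public static void main(String[] args) {
        // Wien's displacement law locates the peak: lambda_max = b / T.
        double lambdaMax = 2.897771955e-3 / 5778.0; // the Sun, ~501 nm
        System.out.printf("solar peak ~ %.0f nm, B = %.3e%n",
                lambdaMax * 1e9, radiance(lambdaMax, 5778.0));
    }
}
```

Wien's displacement law gives the wavelength of maximum radiance used in `main`.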
If you want to reproduce all those deviations from the ideal black body curve then it's going to be a bit harder, but it's probably doable if you have a good enough reason to bother. I would guess that a good technique would be to surround your black body with gases similar to the star's corona, in order to reproduce the absorption lines. Emission lines will be a bit more tricky, but I guess if there's no other way you could simply heat those gases up to the appropriate temperature.
The uniqueness of a star's spectrum comes mostly from its temperature and its composition, i.e. the gases that make it up, so by using this method you could probably more or less simulate the spectrum of a specific star.
This method would simulate the spectrum of light that the star emits, but if you wanted to simulate the spectrum that we actually see it would be much harder. This is because the spectra of distant stars are modified by a redshift, caused by the fact that distant galaxies are moving away from us. The redshift is basically the optical equivalent of the doppler effect, and it causes us to see frequencies lower than what the star emits. If you wanted to simulate this in the laboratory you would have to use a different method than the one I've described, such as the customised diffraction gratings described in Rod Vance's answer.
Of course, if you meant "simulate on a computer" then it's a different question. I think this is probably not too hard - you just need to look up the appropriate emission and absorption spectra and add them up in the right way. I'm sure people researching stars' spectra do this all the time. | {
"domain": "physics.stackexchange",
"id": 9436,
"tags": "visible-light, stars"
} |
Why Is the DFT of a Signal Symmetric About Its Central Bin? | Question: When I take an N-point DFT of a signal, it comes out to be conjugate symmetric about the point N/2. Could someone please tell me how to understand this intuitively or mathematically?
Answer: First of all, it is only conjugate symmetric if the input signal is a real signal.
The reason for that is that the DFT is built from complex exponentials, which can be decomposed into sine and cosine.
Since the sine governs the imaginary part and is odd and $N$-periodic, one can see that it creates the conjugate symmetry of the DFT.
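This is easy to check numerically with a naive $O(N^2)$ DFT on an arbitrary real input (a sketch; the signal values are made up):

```java
class DftSymmetry {
    // Naive O(N^2) DFT; returns {real parts, imaginary parts}.
    static double[][] dft(double[] x) {
        int n = x.length;
        double[] re = new double[n];
        double[] im = new double[n];
        for (int k = 0; k < n; k++) {
            for (int t = 0; t < n; t++) {
                double angle = -2.0 * Math.PI * k * t / n;
                re[k] += x[t] * Math.cos(angle);
                im[k] += x[t] * Math.sin(angle);
            }
        }
        return new double[][] { re, im };
    }

    public static void main(String[] args) {
        double[] x = { 1.0, 2.5, -0.3, 4.0, 0.0, -1.2, 3.3, 0.7 }; // arbitrary real signal
        double[][] X = dft(x);
        int n = x.length;
        // Conjugate symmetry for real input: X[N-k] == conj(X[k]).
        for (int k = 1; k < n; k++) {
            System.out.printf("k=%d: re diff=%.2e, im sum=%.2e%n", k,
                    Math.abs(X[0][n - k] - X[0][k]),
                    Math.abs(X[1][n - k] + X[1][k]));
        }
    }
}
```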
You just need to follow the definition of the DFT: for real $x[n]$, $X[N-k] = \sum_{n=0}^{N-1} x[n]\, e^{-j 2\pi n (N-k)/N} = \sum_{n=0}^{N-1} x[n]\, e^{j 2\pi n k/N} = X^*[k]$, since $e^{-j 2\pi n} = 1$ for integer $n$. | {
"domain": "dsp.stackexchange",
"id": 5525,
"tags": "fourier-transform, dft"
} |
Empty batteries magically resurrect after reinserting them | Question: It has happened to me quite often (most recently with my wireless keyboard) that a battery stopped working and then, if I unplug it and plug it back in, it works again, not just for a couple of minutes but even for a day or two. Then you can repeat the process, but this time it will last less, and so on until it finally dies. The same trick works for my TV remote. I can't really understand why; can somebody help?
Answer: For low-current applications that are sensitive to voltage, like modern electronics, the resistance of the contacts can be significant.
So when you reinsert the batteries you clean off any dirt, moisture or corrosion on the contacts, and the resistance drops. Then, when a current flows, the voltage drop across the contacts is reduced and more of the battery voltage is available to the product.
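The effect is plain Ohm's law across the contact resistances in series; a tiny sketch with illustrative values (the per-contact resistances here are assumptions, with corroded contacts taken to be several times worse than clean ones):

```java
class ContactDrop {
    // Voltage lost across n series contact resistances carrying current amps.
    static double drop(int contacts, double ohmsPerContact, double amps) {
        return contacts * ohmsPerContact * amps;
    }

    public static void main(String[] args) {
        // 3 cells in series -> 6 contacts; load of 100 mA.
        // Per-contact resistances are assumed, illustrative values.
        System.out.printf("clean (~0.15 ohm each):   %.2f V%n", drop(6, 0.15, 0.1));
        System.out.printf("corroded (~0.8 ohm each): %.2f V%n", drop(6, 0.8, 0.1));
    }
}
```

With six contacts at an assumed 0.8 ohm each and a 100 mA load, roughly half a volt is lost across the contacts alone.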
Remember that the multiple (2 or 3 AA/AAA) batteries are in series, so there are two contacts for each battery and thus 4 or 6 contact resistances (of perhaps 0.1 or 0.2 ohms) in series. If your electronics needs 3 V or 4.5 V and takes 50-100 mA then you could lose 0.5 V! | {
"domain": "physics.stackexchange",
"id": 13727,
"tags": "soft-question, batteries"
} |
Transforming words in sentences into vector form to prepare a model | Question: I want to build a simple classifier that classifies whether the text is a question or just a simple message. I understand logistic regression and can work to create a simple neural network.
I have the labeled input data in English, Japanese, Korean, Thai. How could I transform this data before I feed it into the classifier?
Answer: An approach would be to sort all the words in your data according to how often they appear, i.e. their "frequency". After that, pick the "X" most frequent words in your dataset to use for the classification.
Assuming that you are working with Python and Keras, you should use the Embedding layer. For more details about how to use that layer, check this.
Shortly, what this layer does is that it maps the input to a high dimensional vector domain. A word is converted to a real-valued vector and word similarity is evaluated by the "closeness" of two word-vectors in the high-dimensional vector space.
Also make sure that your dataset consists of texts of fixed length, by truncating long sequences or zero-padding short ones.
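The fixed-length step is language-agnostic; a minimal sketch of pad-or-truncate over integer word indices (mirroring the common convention of keeping the end of long sequences and left-padding short ones, though libraries differ):

```java
import java.util.Arrays;

class PadSequences {
    // Left-truncate or left-zero-pad a sequence of word indices to maxLen.
    static int[] padOrTruncate(int[] seq, int maxLen) {
        int[] out = new int[maxLen];             // zero-filled by default
        int copy = Math.min(seq.length, maxLen); // how many ids survive
        // Keep the *last* `copy` ids, placed at the end of `out`.
        System.arraycopy(seq, seq.length - copy, out, maxLen - copy, copy);
        return out;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(padOrTruncate(new int[]{7, 42, 3}, 5)));
        System.out.println(Arrays.toString(padOrTruncate(new int[]{1, 2, 3, 4, 5, 6}, 4)));
    }
}
```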
After all this is done, you can train a recurrent neural network with LSTM neurons as a text classifier. LSTMs have been proven very successful in text processing due to their inherent memory.
A hands-on Python/Keras tutorial that demonstrates all the above can be found here, I am sure it will be of high help :) | {
"domain": "datascience.stackexchange",
"id": 3262,
"tags": "machine-learning, python, supervised-learning"
} |
pcl 3D Object Recognition and Tracking using Kinect | Question:
Can anyone advise me? I have a Kinect and I want to use it to do object recognition and determine the object's position; PCL is available. 3D object recognition and tracking using Kinect. Can anyone help me?
Originally posted by Roslj on ROS Answers with karma: 1 on 2015-06-13
Post score: 0
Original comments
Comment by Chaos on 2015-07-06:
Hi @Roslj! Have you tried this?
Answer:
Try ORK (Object Recognition Kitchen).
Originally posted by dinesh with karma: 932 on 2016-07-01
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 21916,
"tags": "pcl"
} |
Commutator of $[\hat p, F(\hat x)]$ without using $\hat p=-i\hbar\frac\partial{\partial x}$? | Question: I have been able to prove this relation by using a certain method, but it uses the fact that $$\hat p=-i\hbar\frac\partial{\partial x},\tag{1}$$ which is a relation I have avoided so far, so I wish to prove it without using that ($\hat p$ has simply been defined as the generator of spatial translation). Is it possible to prove that $$[\hat p,F(\hat x)]=-i\hbar\frac{\partial F(\hat x)}{\partial x}.\tag{2}$$ without using this relation? I can't see how to get even the first step.
Answer: Write $F(\hat{x})$ as a power series with arbitrary coefficients. Use linearity to express $[\hat{p}, F(\hat{x})]$ as a linear combination of commutators of the form $[\hat{p}, \hat{x}^n]$. You can calculate the latter commutator using induction and the identities $[A, BC] = B [A, C] + [A, B]C$ and $[\hat{x}, \hat{p}] = i \hbar$: the base case is $[\hat{p}, \hat{x}] = -i\hbar$, and the identity gives $[\hat{p}, \hat{x}^n] = \hat{x}[\hat{p}, \hat{x}^{n-1}] + [\hat{p}, \hat{x}]\hat{x}^{n-1}$, so by induction $[\hat{p}, \hat{x}^n] = -i\hbar\, n\hat{x}^{n-1}$. Then recombine the power series into a single expression. | {
"domain": "physics.stackexchange",
"id": 44088,
"tags": "quantum-mechanics, homework-and-exercises, operators, momentum, commutator"
} |
Pizza ordering program recursion on the GUI | Question: This program is a basic pizza ordering program with seven topping options, four size options, and five crust options. It pulls the options from a database and populates the GUI on start.
I connected the images saved in the program to the strings associated with each topping. I used a recursive method to stack the topping images that will update each time a topping selection is made. The recursive method was the only solution I found that would restack the images without a blank space when a topping was deselected. It is working really well but I wanted to make it available for review.
The recursive method I used for the GUI is on the bottom of the code. There is also a lot of code written for the check box buttons. I'm not sure if I could have made that any shorter.
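For the checkbox question specifically, the seven near-identical branches in the handler all follow one pattern (a topping name, a selected flag, and a 50-cent delta), so a single shared handler could collapse them; a sketch, not the code under review:

```java
import java.util.LinkedHashMap;
import java.util.Map;

class ToppingTally {
    private static final double PRICE_PER_TOPPING = 0.5;

    // One entry per topping: name -> currently selected? (mirrors checkboxes)
    private final Map<String, Boolean> selected = new LinkedHashMap<>();
    private double toppingsPrice = 0.0;

    // Single handler shared by every checkbox instead of seven branches.
    void toggle(String topping, boolean isSelected) {
        selected.put(topping, isSelected);
        toppingsPrice += isSelected ? PRICE_PER_TOPPING : -PRICE_PER_TOPPING;
    }

    double price() {
        return toppingsPrice;
    }

    // Rebuilds the choice string from current state; no string replace needed.
    String choices() {
        StringBuilder sb = new StringBuilder();
        selected.forEach((name, on) -> {
            if (on) {
                sb.append(name).append(". ");
            }
        });
        return sb.toString();
    }
}
```

Each checkbox would then call toggle with its own topping string, and rebuilding the overview string from the map avoids the fragile string replace on deselection.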
This is how the program looks:
import java.net.URL;
import java.util.ResourceBundle;
import javafx.event.ActionEvent;
import javafx.fxml.FXML;
import javafx.fxml.Initializable;
import javafx.scene.control.CheckBox;
import javafx.scene.control.RadioButton;
import javafx.scene.control.TextArea;
import javafx.scene.control.TextField;
import javafx.scene.control.ToggleGroup;
import javafx.scene.image.Image;
import javafx.scene.image.ImageView;
public class FXMLDocumentController implements Initializable {
// variables with fx:id names for image views
@FXML
private ImageView ivOne;
@FXML
private ImageView ivTwo;
@FXML
private ImageView ivThree;
@FXML
private ImageView ivFour;
@FXML
private ImageView ivFive;
@FXML
private ImageView ivSix;
@FXML
private ImageView ivSeven;
@FXML
private ImageView ivSize;
@FXML
private ImageView ivCrust;
// create an image view array for the recursive method used to update the
// toppings images.
@FXML
private final ImageView[] iva = new ImageView[7];
// array for toppings, size, and crust images
private Image[] topImgs = new Image[8];
private Image[] sizeImgs = new Image[4];
private Image[] crustImgs = new Image[5];
// variables for java text fields displaying price breakdown and total of pizza
@FXML
private TextField tfSizePrice;
@FXML
private TextField tfToppingsPrice;
@FXML
private TextField tfExtrasPrice;
@FXML
private TextField tfTotalPrice;
@FXML
private TextArea taPizzaOverview;
// variables for java radio button elements in display for pizza size
@FXML
private RadioButton radioButtonSizeOne;
@FXML
private RadioButton radioButtonSizeTwo;
@FXML
private RadioButton radioButtonSizeThree;
@FXML
private RadioButton radioButtonSizeFour;
// variables for java radio button elements in display for crusts
@FXML
private RadioButton rbCrustOne;
@FXML
private RadioButton rbCrustTwo;
@FXML
private RadioButton rbCrustThree;
@FXML
private RadioButton rbCrustFour;
@FXML
private RadioButton rbCrustFive;
// variables for java check box elements in display for toppings
@FXML
private CheckBox cbTopOne;
@FXML
private CheckBox cbTopTwo;
@FXML
private CheckBox cbTopThree;
@FXML
private CheckBox cbTopFour;
@FXML
private CheckBox cbTopFive;
@FXML
private CheckBox cbTopSix;
@FXML
private CheckBox cbTopSeven;
// radio button groups for size and crust. Only one may be selected
private final ToggleGroup sizeGroup = new ToggleGroup();
private final ToggleGroup crustGroup = new ToggleGroup();
// arrays that hold the database data
private String[] pizzaSize;
private String[] pizzaCrust;
private String[] pizzaToppings;
// strings of pizza choices to be displayed at the end.
private String sizeChoice = "Extra-Large";
private String crustChoice = "Deep Dish";
private String toppingsChoice = "";
private String pizzaOverview = "";
// variables for cost of toppings, size, crust, and total cost for the pizza
private double toppingsPrice = 0;
private double extrasPrice = 0;
private double sizePrice = 13;
private double totalPrice = 0;
/**
* The action method for every button on the display that sorts the data
* and returns the price and overview to the user.
* @param event
*/
@FXML
private void handleButtonAction(ActionEvent event) {
Object source = event.getSource();
// toppings buttons, adds 50 cents per topping or takes off 50 cents if user
// changes their mind. These action events also add to the toppings string
// or take off what the user deselected.
if (source == cbTopOne){
if (cbTopOne.isSelected() == true){
toppingsChoice = toppingsChoice + pizzaToppings[0] + ". ";
toppingsPrice = toppingsPrice + 0.5;
}else{
toppingsChoice = toppingsChoice.replace( pizzaToppings[0] + ". ", "");
toppingsPrice = toppingsPrice - 0.5;
}
}else if (source == cbTopTwo){
if (cbTopTwo.isSelected() == true){
toppingsChoice = toppingsChoice + pizzaToppings[1] + ". ";
toppingsPrice = toppingsPrice + 0.5;
}else{
toppingsChoice = toppingsChoice.replace( pizzaToppings[1] + ". ", "");
toppingsPrice = toppingsPrice - 0.5;
}
}else if (source == cbTopThree){
if (cbTopThree.isSelected() == true){
toppingsChoice = toppingsChoice + pizzaToppings[2] + ". ";
toppingsPrice = toppingsPrice + 0.5;
}else{
toppingsChoice = toppingsChoice.replace( pizzaToppings[2] + ". ", "");
toppingsPrice = toppingsPrice - 0.5;
}
}else if (source == cbTopFour){
if (cbTopFour.isSelected() == true){
toppingsChoice = toppingsChoice + pizzaToppings[3] + ". ";
toppingsPrice = toppingsPrice + 0.5;
}else{
toppingsChoice = toppingsChoice.replace( pizzaToppings[3] + ". ", "");
toppingsPrice = toppingsPrice - 0.5;
}
}else if (source == cbTopFive){
if (cbTopFive.isSelected() == true){
toppingsChoice = toppingsChoice + pizzaToppings[4] + ". ";
toppingsPrice = toppingsPrice + 0.5;
}else{
toppingsChoice = toppingsChoice.replace( pizzaToppings[4] + ". ", "");
toppingsPrice = toppingsPrice - 0.5;
}
}else if (source == cbTopSix){
if (cbTopSix.isSelected() == true){
toppingsChoice = toppingsChoice + pizzaToppings[5] + ". ";
toppingsPrice = toppingsPrice + 0.5;
}else{
toppingsChoice = toppingsChoice.replace( pizzaToppings[5] + ". ", "");
toppingsPrice = toppingsPrice - 0.5;
}
}else if (source == cbTopSeven){
if (cbTopSeven.isSelected() == true){
toppingsChoice = toppingsChoice + pizzaToppings[6] + ". ";
toppingsPrice = toppingsPrice + 0.5;
}else{
toppingsChoice = toppingsChoice.replace( pizzaToppings[6] + ". ", "");
toppingsPrice = toppingsPrice - 0.5;
}
// radio buttons for pizza size, changes the base price of the pizza and the
// text value of the size choice
}else if (source == radioButtonSizeOne){
sizeChoice = pizzaSize[0];
ivSize.setImage(sizeImgs[0]);
sizePrice = 13;
}else if (source == radioButtonSizeTwo){
sizeChoice = pizzaSize[1];
ivSize.setImage(sizeImgs[1]);
sizePrice = 11;
}else if (source == radioButtonSizeThree){
sizeChoice = pizzaSize[2];
ivSize.setImage(sizeImgs[2]);
sizePrice = 9;
}else if (source == radioButtonSizeFour){
sizeChoice = pizzaSize[3];
ivSize.setImage(sizeImgs[3]);
sizePrice = 7;
// radio buttons for pizza crust, special crusts cost extra
// and changes the text value of the crust choice variable
}else if (source == rbCrustOne){
crustChoice = pizzaCrust[0];
ivCrust.setImage(crustImgs[0]);
extrasPrice = 0;
}else if (source == rbCrustTwo){
crustChoice = pizzaCrust[1];
ivCrust.setImage(crustImgs[1]);
extrasPrice = 2;
}else if (source == rbCrustThree){
crustChoice = pizzaCrust[2];
ivCrust.setImage(crustImgs[2]);
extrasPrice = 0;
}else if (source == rbCrustFour){
crustChoice = pizzaCrust[3];
ivCrust.setImage(crustImgs[3]);
extrasPrice = 1;
}else if (source == rbCrustFive){
crustChoice = pizzaCrust[4];
ivCrust.setImage(crustImgs[4]);
extrasPrice = 0;
}
this.toppingImageOrder(toppingsChoice);
// get total price and final overview
totalPrice = toppingsPrice + sizePrice + extrasPrice;
pizzaOverview = "Your Pizza: \n Size: " + sizeChoice + "\n Crust: " + crustChoice
+ "\n Your toppings choices are " + toppingsChoice;
// display new costs separated into parts so the user can see the breakdown
// of the price.
tfSizePrice.setText(Double.toString(sizePrice));
tfToppingsPrice.setText(Double.toString(toppingsPrice));
tfExtrasPrice.setText(Double.toString(extrasPrice));
tfTotalPrice.setText(Double.toString(totalPrice));
taPizzaOverview.setText(pizzaOverview);
}
/**
* Sets the display using data from the database to label the options
* available to the user.
* @param url
* @param rb
*/
@Override
public void initialize(URL url, ResourceBundle rb) {
// run method to connect to the database and collect
// the data into the arrays.
try {
this.getData();
} catch (SQLException ex) {
Logger.getLogger(FXMLDocumentController.class.getName()).log(Level.SEVERE, null, ex);
}
// call the image class to collect the images and fill the image arrays
Images im = new Images();
im.Images();
// call the array return methods from the image class to fill the arrays
// in this class.
topImgs = im.getTopImages();
sizeImgs = im.getSizeImages();
crustImgs = im.getCrustImages();
// add the image values to the imageView array
iva[0] = ivOne;
iva[1] = ivTwo;
iva[2] = ivThree;
iva[3] = ivFour;
iva[4] = ivFive;
iva[5] = ivSix;
iva[6] = ivSeven;
// set the default size and crust choice images.
ivCrust.setImage(crustImgs[0]);
ivSize.setImage(sizeImgs[0]);
// radio buttons for size added to group and labeled by data from database
radioButtonSizeOne.setToggleGroup(sizeGroup);
radioButtonSizeOne.setText(pizzaSize[0]);
radioButtonSizeTwo.setToggleGroup(sizeGroup);
radioButtonSizeTwo.setText(pizzaSize[1]);
radioButtonSizeThree.setToggleGroup(sizeGroup);
radioButtonSizeThree.setText(pizzaSize[2]);
radioButtonSizeFour.setToggleGroup(sizeGroup);
radioButtonSizeFour.setText(pizzaSize[3]);
radioButtonSizeOne.setSelected(true);
// radio buttons for crust added to group and labeled by data from database
rbCrustOne.setToggleGroup(crustGroup);
rbCrustOne.setText(pizzaCrust[0]);
rbCrustTwo.setToggleGroup(crustGroup);
rbCrustTwo.setText(pizzaCrust[1]);
rbCrustThree.setToggleGroup(crustGroup);
rbCrustThree.setText(pizzaCrust[2]);
rbCrustFour.setToggleGroup(crustGroup);
rbCrustFour.setText(pizzaCrust[3]);
rbCrustFive.setToggleGroup(crustGroup);
rbCrustFive.setText(pizzaCrust[4]);
rbCrustOne.setSelected(true);
// buttons for toppings labeled by data from database
cbTopOne.setText(pizzaToppings[0]);
cbTopTwo.setText(pizzaToppings[1]);
cbTopThree.setText(pizzaToppings[2]);
cbTopFour.setText(pizzaToppings[3]);
cbTopFive.setText(pizzaToppings[4]);
cbTopSix.setText(pizzaToppings[5]);
cbTopSeven.setText(pizzaToppings[6]);
}
/**
* Calls database connector method and sorts data into arrays
* @throws SQLException
*/
public void getData() throws SQLException{
// connect to the database
PizzaDatabase pd = new PizzaDatabase();
pd.connect();
// connection variable
Connection con = pd.getConnection();
pd.createTables(con);
// create an instance of each class used to collect db data and add it to
// its array.
PizzaSize ps = new PizzaSize();
ps.pizzaSize(con);
PizzaCrust pc = new PizzaCrust();
pc.pizzaCrust(con);
PizzaToppings pt = new PizzaToppings();
pt.pizzaToppings(con);
// fill the arrays
pizzaSize = ps.getSize();
pizzaCrust = pc.getCrust();
pizzaToppings = pt.getToppings();
// close the connection
con.close();
}
// a method to gain the amount of toppings and which are chosen
// that calls the recursive method that displays the corresponding images
// in order selected.
public void toppingImageOrder(String topChoiceString){
// first clear the images in order to avoid leaving an image if the user
// deselects a topping
iva[0].setImage(null);
iva[1].setImage(null);
iva[2].setImage(null);
iva[3].setImage(null);
iva[4].setImage(null);
iva[5].setImage(null);
iva[6].setImage(null);
try {
// split the current topping choices into an array so that the program
// can see how many toppings there are and will be able to determine how
// many images it will update.
String[] curTop = topChoiceString.split(" ");
// gain the number of toppings
int i = curTop.length;
// pass the string to know which toppings and the integer to know
// how many toppings
imgLoop(i, curTop);
}catch (PatternSyntaxException ex){
}
}
/**
* this is a recursive method that takes in how many images will be changed
* and which toppings were chosen so it knows which images to display.
* it uses the ImageView array in order to keep the toppings stacked close
* to the pizza instead of just randomly placed on the GUI.
* @param i
* @param curTop
*/
public void imgLoop(int i, String[] curTop){
int img;
if ( i > 0){
i--;
if(curTop[i].contains("sausage")){
img = 0;
}else if(curTop[i].contains("ham")){
img = 1;
}else if(curTop[i].contains("pepperoni")){
img = 2;
}else if(curTop[i].contains("green")){
img = 3;
}else if(curTop[i].contains("mushrooms")){
img = 4;
}else if(curTop[i].contains("olives")){
img = 5;
}else if(curTop[i].contains("chicken")){
img = 6;
}else{
img = 7;
}
iva[i].setImage(topImgs[img]);
imgLoop(i, curTop);
}
}
}
Answer: There's a lot that can be said here, but I'll keep it to the main points of attention.
Yes, some of the data is loaded from the database, but it doesn't really make any difference; how many options there are per category is hard coded, and the toppings must match the hard coded images. The solution would be to add the options to the UI dynamically (this may be challenging).
One big class that does almost everything: managing presentation logic, getting data from the DB, keeping track of the pizza composition, and pricing. You should take a look at the MVC or MVP patterns.
The behavior of every single button is squeezed into one event handling method. Each button can have its own event handler, which avoids the parade of if-elses to determine which button it is. Similar event handlers can be instances of the same class with different parameters.
Empty catch block for PatternSyntaxException. It probably cannot occur, but if a bug is introduced and it does occur, you'll never find it. It's a RuntimeException, so just don't catch it. (I'm not saying RuntimeException should never be caught, I'm saying RuntimeExceptions that should never occur, shouldn't be caught) | {
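As an illustration of the parameterized-handler idea, here is a minimal sketch in Python rather than JavaFX, since the pattern itself is not framework-specific; every name in it is invented for the example:

```python
# Sketch: one parameterized handler object per topping checkbox, replacing a
# long if/else chain that inspects the event source. All names are illustrative.
class ToppingHandler:
    """Toggles one topping on a shared order; same class, different parameters."""

    PRICE = 0.5  # each topping costs 50 cents

    def __init__(self, order, topping):
        self.order = order
        self.topping = topping

    def __call__(self, selected):
        # Called with the checkbox state instead of branching on the source.
        if selected:
            self.order["toppings"].append(self.topping)
            self.order["price"] += self.PRICE
        else:
            self.order["toppings"].remove(self.topping)
            self.order["price"] -= self.PRICE


order = {"toppings": [], "price": 0.0}
handlers = {name: ToppingHandler(order, name) for name in ["sausage", "ham", "olives"]}

handlers["sausage"](True)   # user checks sausage
handlers["olives"](True)    # user checks olives
handlers["sausage"](False)  # user changes their mind
```

In JavaFX the same idea amounts to registering a separate handler instance on each control, so the dispatch happens in the framework instead of in one big method.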
"domain": "codereview.stackexchange",
"id": 24238,
"tags": "java, performance, recursion, image, gui"
} |
Could rapid switching of opposing lasers heat a substance | Question: https://en.m.wikipedia.org/wiki/Laser_cooling
Lasers can affect the momentum of atoms, so could you have two sets of lasers pointing in opposing directions, rapidly alternating which set is on, to increase the vibration of the atoms or molecules and therefore heat them rapidly and efficiently?
If it were for dry air, which wavelength and pulse rates would be best?
Answer: If you are attempting to heat a substance, the lasers would have to have a frequency matching that substance. I think by matching the frequency you would negate the purpose of alternating the directions of the lasers. This is similar to how microwaves heat food by sending waves at a frequency absorbed by water, so if you wanted to emulate this using lasers you wouldn't need to alternate them; you would just need lasers that match the frequency of the substance you wanted to heat.
Regarding heating air specifically, it would depend on the volume of the container. Once you determine the volume of the container, you could then calculate, using the equation $Q = m c_p \Delta T$, the amount of energy it would take to heat that volume to a certain temperature and how long it would take. You could then use this to judge whether lasers of this energy are feasible.
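A rough sketch of that calculation in Python; the dry-air property values are approximate room-temperature figures, and the absorbed laser power is an arbitrary assumption:

```python
# Back-of-the-envelope estimate using Q = m * c_p * dT for dry air.
c_p = 1005.0   # specific heat of dry air at constant pressure, J/(kg*K), approx.
rho = 1.2      # density of dry air, kg/m^3, approx.
volume = 1.0   # container volume, m^3
dT = 10.0      # desired temperature rise, K

m = rho * volume    # mass of the air in the container, kg
Q = m * c_p * dT    # energy required, J (on the order of 1e4 J here)
power = 100.0       # assumed absorbed laser power, W
t = Q / power       # heating time at that power, s (about two minutes)

print(Q, t)
```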
"domain": "physics.stackexchange",
"id": 39550,
"tags": "thermodynamics, laser, kinetic-theory"
} |
dynamic_reconfigure group defaults | Question:
I stumbled upon the not-quite-as-well-documented feature of dynamic_reconfigure groups.
As I understand, whenever you set up a server, it should call the callback and initialize the config parameter with the values found in the cfg file. However, my groups are not being set to defaults, only my "bare" parameters are. If I change one of the group parameters, then the change takes effect.
Is there something I'm missing, or should I file a ticket?
(btw this ticket mentions inconsistent behavior, but it's talking about states rather than defaults, so I'm not sure it's related.)
void dynamic_cb(pkgConfig &config, uint32_t){
a = Duration(config.a);
b1 = config.groups.b.part1;
b2 = config.groups.b.part2;
}
cfg file:
gen.add("a", double_t, 0, "a_description", 1, 0, 10000)
group = gen.add_group("b")
group.add("part1", double_t, 0, "b1", 3.0, 0.0, 10.0)
group.add("part2", double_t, 0, "b2", 3.0, 0.0, 10.0)
Originally posted by thebyohazard on ROS Answers with karma: 3562 on 2015-07-09
Post score: 4
Original comments
Comment by ffurrer on 2016-06-07:
any update on this?
Answer:
Ok, I figured it out :).
In the current implementation, the generated <name>Config class contains the variables of all groups directly as members. The variables nested under the corresponding group (class), the ones you (and I as well) accessed, never get assigned the default values.
So what you should do instead (at least until this code gets cleaned up):
void dynamic_cb(pkgConfig &config, uint32_t){
a = Duration(config.a);
b1 = config.part1;
b2 = config.part2;
}
You can take a look at the implementation of the code generator here.
Just adding this small note here: you also cannot add parameters with the same name to different groups, as they would get the same variable name in the main <name>Config class.
Cheers
Originally posted by ffurrer with karma: 115 on 2016-06-07
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 22134,
"tags": "dynamic-reconfigure"
} |
L4&L5 positions? | Question: I know that, in L4&L5, the distances to the two main bodies should be equal. Still, how can I calculate that distance with accuracy? How can I know that that distance is inside the equipotential surface?
Let's take, as an example, the Sun-Earth system and the Earth-Moon system.
Thank you in advance!
Answer: The distances to the two main bodies are not only equal, they are equal to the distance between the two main bodies. In other words, the two main bodies and L4 (or L5) form an equilateral triangle. | {
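A quick numeric check of that geometry (Python sketch; the Sun-Earth distance is rounded):

```python
import math

# The two primaries and L4 form an equilateral triangle, so with the primaries
# at (0, 0) and (d, 0), L4 sits at (d/2, d*sqrt(3)/2). The Sun-Earth
# separation is used as an illustrative example (1 au, rounded).
d = 1.496e11  # Sun-Earth distance in metres (approximate)

l4 = (d / 2, d * math.sqrt(3) / 2)

r_sun = math.hypot(l4[0], l4[1])        # distance from the Sun to L4
r_earth = math.hypot(l4[0] - d, l4[1])  # distance from the Earth to L4

# Both distances equal the primary separation itself.
print(r_sun / d, r_earth / d)  # both ratios are ~1.0
```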
"domain": "astronomy.stackexchange",
"id": 756,
"tags": "distances, lagrange-point"
} |
Defining acceleration in gravity-free space | Question: Without information from outside a closed spaceship, an astronaut cannot distinguish A from B.
A) In gravity-free space, the floor accelerates upwards at $a=g$ and hits a dropped watch.
B) On earth's surface, a dropped watch accelerates downwards at $a=g$ and hits the floor.
My question is, what is the acceleration relative to in A? Since acceleration is the second derivative of displacement with respect to time, what is the displacement relative to? If no other object is used to define the displacement then how can we know the spaceship is accelerating? If another object is used to define displacement, then wouldn't there be a gravitational force affecting the spaceship, if only weakly? The concept of "inertial reference frame" seems like circular logic in this case.
Answer:
My question is, what is the acceleration relative to in A?
Acceleration isn't relative, it's absolute. You can detect it with an accelerometer.
If another object is used to define displacement, then wouldn't there be a gravitational force affecting the spaceship, if only weakly?
One way to build an accelerometer is to use a free-falling test mass as your other object. The gravitational effect can be made as small as desired by making the test mass small enough. | {
"domain": "physics.stackexchange",
"id": 90713,
"tags": "general-relativity, gravity, reference-frames, observers, equivalence-principle"
} |
Why doesn't the upper block move when force less than limiting friction is applied, in two block problem (further explanation below)? | Question: In my school, I learned that when two blocks are placed on the ground with one block above the other, if a force is applied to the lower block, two opposing forces of friction act on it: one from the ground and the other from the upper block's surface. Consequently, according to Newton's third law, the upper block experiences a friction force in the forward direction. However, I have a question regarding this scenario. If the external force applied to the lower block is significantly less than the limiting friction of the ground, the lower block won't be set into motion due to the opposition from the static friction of the ground. In addition, I believe that the static friction of the upper block also plays a role in opposing the motion (as it does when the blocks do move). Consequently, the upper block should experience an equal and opposite reaction that sets it into motion as well. However, this doesn't seem to happen in reality. What misconception do I have in this situation?
Answer:
In my school, I learned that when two blocks are placed on the ground
with one block above the other, if a force is applied to the lower
block, two opposing forces of friction act on it: one from the ground
and the other from the upper block's surface.
It is correct that the friction force from the ground opposes the applied force on the lower block (block b), but there will be no friction forces between the two blocks unless the lower block is set into motion. If the ground friction force is static friction, it will match the applied force $F$ for a net horizontal force of zero and there will be no force causing friction to arise between the blocks. Only if the maximum possible static friction force between the lower block and the ground is exceeded setting the lower block into motion, will there be friction between the blocks to oppose such motion. It is important to understand that static friction only exists in opposition to a net force that would act on the object in the absence of any friction.
Consequently, according to Newton's third law, the upper block
experiences a friction force in the forward direction.
The upper block will experience a friction force in the forward direction only if the lower block experiences a friction force in the backwards direction by the upper block. But as indicated above, there will be no backwards direction friction force on the lower block unless the maximum possible static friction force on the ground is exceeded so that the lower block accelerates forward.
However, I have a question regarding this scenario. If the external
force applied to the lower block is significantly less than the
limiting friction of the ground, the lower block won't be set into
motion due to the opposition from the static friction of the ground.
That is correct.
In addition, I believe that the static friction of the upper block
also plays a role in opposing the motion (as it does when the blocks do
move).
That is correct, but only in opposing motion of the lower block that is actually occurring (sliding), not in preventing motion of the lower block from occurring. When the lower block is set into motion, the ground friction becomes kinetic, which is generally less than the static friction that initiated the motion, and the applied force $F$ on the lower block will be greater than the kinetic friction force, call it $f_{kG}$, from the ground on the lower block. Now there will be a net force on the lower block for static friction imposed by the upper block on the lower block, call it $f_{s-ab}$, to oppose. Then the net force on the lower block becomes:
$$F-f_{kG}-f_{s-ab}$$
And from Newton's 2nd law its acceleration becomes:
$$a_{b}=\frac{F-f_{kG}-f_{s-ab}}{m_b}$$
Consequently, the upper block should experience an equal and opposite
reaction that sets it into motion as well. However, this doesn't seem
to happen in reality. What misconception do I have in this situation?
The misconception originates from the original statement of what you say you learned in school. There is no static friction force acting on the lower block by the upper block's surface unless the lower block is set into motion, as I discussed above.
In closing, I suggest that in the future your first step should be to draw free body diagrams for the individual blocks. In this example, one where the system is stationary (i.e., the maximum possible static friction force between the lower block and ground is not exceeded). The other for when the lower block is set into motion (i.e., the maximum ground static friction is reached and friction becomes kinetic) and you want to determine whether the two blocks move together with the same acceleration, or the top block slides on the lower when the applied force is great enough, as in the example of the "table cloth trick" shown in this video: https://www.google.com/search?q=table+cloth+trick&oq=table+cloth+trick&gs_lcrp=EgZjaHJvbWUyBggAEEUYOTIHCAEQABiABDIJCAIQABgKGIAEMgkIAxAAGAoYgAQyCQgEEAAYChiABDIJCAUQABgKGIAEMgkIBhAAGAoYgAQyCQgHEAAYChiABDIJCAgQABgKGIAEMgoICRAAGAoYFhge0gENMjA2OTAwMjFqMGoxNagCALACAA&sourceid=chrome&ie=UTF-8
Hope this helps. | {
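A small numeric sketch of the equations above (Python; all masses, coefficients, and forces are made up for illustration), for the case where the two blocks move together so that $f_{s-ab}$ is exactly the force needed to accelerate the upper block:

```python
# Numerical check of the net-force expression for the lower block once it is
# sliding on the ground while static friction from the upper block still holds.
mu_k_ground = 0.2   # kinetic friction coefficient, lower block vs ground (assumed)
g = 9.81            # gravitational acceleration, m/s^2
m_upper = 2.0       # mass of the upper block, kg (assumed)
m_lower = 4.0       # mass of the lower block, kg (assumed)
F = 30.0            # applied force on the lower block, N (assumed)

# Ground kinetic friction acts on the combined weight of both blocks.
f_kG = mu_k_ground * (m_upper + m_lower) * g

# If the blocks move together with common acceleration a, the static friction
# from the upper block on the lower block is the reaction to the force that
# accelerates the upper block: f_s_ab = m_upper * a. Substituting into
#   F - f_kG - f_s_ab = m_lower * a
# gives a = (F - f_kG) / (m_upper + m_lower).
a = (F - f_kG) / (m_upper + m_lower)
f_s_ab = m_upper * a

print(a, f_s_ab)
```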
"domain": "physics.stackexchange",
"id": 95978,
"tags": "newtonian-mechanics, friction, free-body-diagram"
} |
Paritioning a graph into clique and independent set | Question: I am interested in the complexity of the following problems:
Input: an undirected graph $G = \langle V, E \rangle$
Query 1: is there a partition of $V$ into a clique $C$ and an independent set $I$ of equal size?
Query 2: are there a clique $C \subseteq V$ and an independent set $I \subseteq V$ whose sizes are at least $\frac{|V|}{4}$?
Answer: Question (1) is easy: it is solvable in polynomial time. As Juho has already mentioned in comments, the graphs that can be partitioned into a clique and an independent set are the split graphs. They can be recognized and partitioned in polynomial time, and all valid partitions (if there are more than one) differ by only a single vertex and can also be found in polynomial time (see the Wikipedia article for details). So you simply test whether any of these partitions satisfies your additional constraint.
As for Question (2), it seems you already know that HALF-CLIQUE is NP-complete, so taking an n-vertex hard instance for this problem and adding another n independent vertices produces a hard instance for your problem. That is, the answer is that it is indeed NP-complete.
One could more-or-less mechanically compose this with the hardness reduction for HALF-CLIQUE to get a reduction from a SAT-like problem, but why is that an interesting or useful thing to do? | {
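For the polynomial-time split-graph recognition mentioned above, one well-known route is the Hammer-Simeone degree-sequence criterion; a minimal Python sketch (this only decides split-ness, recovering the actual partition takes a little more bookkeeping):

```python
# Split-graph test via the Hammer-Simeone degree-sequence criterion:
# with degrees sorted d_1 >= ... >= d_n and m = max{ i : d_i >= i - 1 },
# the graph is split iff
#     sum_{i <= m} d_i == m*(m - 1) + sum_{i > m} d_i.
def is_split(adjacency):
    """adjacency: dict mapping each vertex to the set of its neighbours."""
    degs = sorted((len(nbrs) for nbrs in adjacency.values()), reverse=True)
    m = max((i for i, d in enumerate(degs, start=1) if d >= i - 1), default=0)
    return sum(degs[:m]) == m * (m - 1) + sum(degs[m:])

# A 4-cycle is the classic non-split example; a 3-vertex path is split
# (clique {0, 1}, independent set {2}).
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
p3 = {0: {1}, 1: {0, 2}, 2: {1}}
print(is_split(c4), is_split(p3))  # False True
```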
"domain": "cstheory.stackexchange",
"id": 3722,
"tags": "cc.complexity-theory, ds.algorithms, graph-theory, clique, independent-set"
} |
Why car has sharp separation edges at the back? | Question: Most new cars designed for low drag have sharp separation edges at the back.
If the edges are round, the wake is smaller but the curvature produces low pressure; if the edges are sharp, the wake is larger but there is no low-pressure region.
So which solution is better, and why?
You can see one sharp line at the rear bumper that stops the air from following the curvature of the bumper.
Answer: On a car you want a controlled separation. The sharp edge produces a shallow pressure profile up to the edge, at which the flow separates cleanly. If the contour were round, separation would happen somewhere along the contour and would be asymmetrical between the left and right sides in a crosswind. With the high pressures at high speed, such asymmetrical pressure has the potential to produce unwanted side forces which negatively affect handling.
"domain": "physics.stackexchange",
"id": 83031,
"tags": "flow, drag, aerodynamics, turbulence"
} |
tf/tutorials/adding a frame | Question:
After I run the program, all I can see and do is move turtle1; turtle2 is always stationary no matter where I move turtle1. Is there anything I missed doing? I am running ROS Groovy.
Thank you
Edwin
Originally posted by elthef on ROS Answers with karma: 26 on 2014-02-26
Post score: 0
Answer:
Ok, I have solved the problem...
Originally posted by elthef with karma: 26 on 2014-02-27
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 17109,
"tags": "turtlesim, transform"
} |