| anchor (string, 0-150 chars) | positive (string, 0-96k chars) | source (dict) |
|---|---|---|
Ideal reconstruction after down sampling | Question: The signal $x_a(t) = \cos(2\pi450t)$ is sampled.
F = 450
Fs = 1000 Hz
f = F/Fs = 450/1000 // Sampling theorem is fulfilled
x(n) = cos(2*pi*(450/1000)*n)
The signal is then down sampled with a factor 3.
fNew = f*3 = 450*3/1000 = 1.35
xNew(n) = cos(2*pi*1.35*n)
Now the signal is prepared. How to make an ideal reconstruction using 1000Hz?
Answer: If you take your input signal and downsample it by a factor of 3, your original frequency of 450 Hz will be aliased. But since your signal contains only a single frequency, it can be reconstructed.
For example (in MATLAB), let's generate 1024 samples of your signal:
Fs = 1000;
t = (0:1023)/Fs;
xa = cos(2*pi*450*t);
figure; pwelch(xa,[],[],[],Fs);
Now if you down sample your signal you notice that the frequency has changed.
xad = xa(1:3:end);
figure; pwelch(xad,[],[],[],Fs/3)
And if you try to interpolate the signal back to the original sampling frequency, you can see that there is some kind of "ambiguity":
xadu = upsample(xad,3);
figure; pwelch(xadu,[],[],[],Fs);
Also note that the amplitude is lower (by a factor of 3 actually).
All that is left is to filter the signal using a high-pass filter:
h = fir1(32,350/500,'high');
xaduf = filter(3*h,1,xadu);
xar = xaduf(37:end); | {
"domain": "dsp.stackexchange",
"id": 2699,
"tags": "reconstruction"
} |
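As a side check on the aliasing claim in this answer (a stdlib-only sketch, not part of the original post; the helper name is my own): after downsampling to Fs/3 ≈ 333.3 Hz, the 450 Hz tone appears at roughly 116.7 Hz.

```python
import math

def alias_freq(f, fs):
    # Apparent frequency of a real-valued tone f (Hz) when sampled at rate fs (Hz)
    f_mod = f % fs
    return min(f_mod, fs - f_mod)

fs_new = 1000 / 3                    # sample rate after downsampling by 3
f_alias = alias_freq(450, fs_new)    # ~116.7 Hz: the 450 Hz tone is aliased
```

At the original 1000 Hz rate, `alias_freq(450, 1000)` returns 450, consistent with the sampling theorem being fulfilled before decimation.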
Delete (Reverse Backspace) Button | Question: I have built a delete button in Android, just like in Windows, which removes characters to the right of the cursor (one at a time). (Instead of the mainstream Backspace, which removes characters on the left side.)
Tell me if you have any better ideas.
package com.example.app;
import android.annotation.SuppressLint;
import android.app.Activity;
import android.content.Context;
import android.os.Bundle;
import android.text.Editable;
import android.view.Menu;
import android.view.View;
import android.view.View.OnClickListener;
import android.view.inputmethod.InputMethodManager;
import android.widget.Button;
import android.widget.EditText;
import android.widget.TextView.BufferType;
import android.widget.Toast;
public class MainActivity extends Activity {
public Toast t, t1,t2;
Editable a, b, d, f, a1;
String e;
public int c, d1;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
final EditText ed;
Button dlt;
dlt = (Button) findViewById(R.id.button1);
ed = (EditText) findViewById(R.id.editText1);
ed.clearFocus();
dlt.setOnClickListener(new OnClickListener() {
@SuppressLint("NewApi")
public void onClick(View v) {
InputMethodManager mgr = (InputMethodManager) getSystemService(Context.INPUT_METHOD_SERVICE);
boolean check = mgr.isActive(ed);
if(check==true){
Toast.makeText(getApplicationContext(), "Welcome", t2.LENGTH_SHORT)
.show();
}
a = ed.getEditableText();
b = a;
c = ed.getSelectionStart();
String a11 = b.toString().substring(c);
Toast.makeText(getApplicationContext(), a11, t1.LENGTH_SHORT)
.show();
String a22 = b.toString().substring(0, c);
boolean daj = b.toString().isEmpty();
if (a11 != null && !a11.trim().equals("")) {
int strChar = a11.length();
String strcut = a11.substring(1, strChar);
e = a22.concat(strcut);
ed.setText(e, BufferType.EDITABLE);
b = ed.getEditableText();
ed.setSelection(c);
} else {
Toast.makeText(getApplicationContext(), "Cool",
t.LENGTH_SHORT).show();
}
}
});
}
@Override
public boolean onCreateOptionsMenu(Menu menu) {
// Inflate the menu; this adds items to the action bar if it is present.
getMenuInflater().inflate(R.menu.main, menu);
return true;
}
}
Answer:
Name your variables and fields meaningfully. I have no idea what a, b, e, etc. are.
Don't split your variable declaration and initialization. Instead of
final EditText ed;
Button dlt;
dlt = (Button) findViewById(R.id.button1);
ed = (EditText) findViewById(R.id.editText1);
do
final EditText ed = (EditText) findViewById(R.id.editText1);
final Button dlt = (Button) findViewById(R.id.button1);
And be consistent with your final usage. Either use it everywhere where appropriate or don't. Otherwise I start to wonder why ed is final and dlt is not.
There is a TextUtils.isEmpty() method available on Android. Use it instead of
a11 != null && !a11.trim().equals("")
Access static fields statically: instead of t1.LENGTH_SHORT use Toast.LENGTH_SHORT. The fields t, t1, and t2 are all unnecessary.
All your fields should be local variables. | {
"domain": "codereview.stackexchange",
"id": 7900,
"tags": "java, android"
} |
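The forward-delete operation being reviewed above reduces to simple string surgery around the cursor; a minimal language-agnostic sketch in Python (the helper name is my own, for illustration only):

```python
def forward_delete(text, cursor):
    """Return text with the character just right of the cursor removed.

    Mirrors the Windows-style Delete key; the cursor position is unchanged.
    """
    if cursor >= len(text):
        return text  # nothing to the right of the cursor
    return text[:cursor] + text[cursor + 1:]
```

On Android the same effect is usually achieved in place on the Editable rather than by rebuilding the string, which also preserves the cursor automatically.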
Is there a way to solve the following differential equation for a sphere rising in a fluid? | Question: Given the boundary conditions, how do I find the analytical solution (for the velocity) of the following expression:
$$ \left(\frac{2}{3} \pi \rho_f a^3 + \frac{4}{3} \pi \rho_s a^3\right) \frac{d ^2 x}{d t^2} = \frac{4}{3} \pi a^3 (\rho_f - \rho_s)g - \frac{1}{2} \rho_f C_d \pi a^2 \left(\frac{d x}{d t}\right)^2$$
where $a$ is the radius of the sphere, and $\rho_s$ and $\rho_f$ are the density of the sphere and fluid respectively. Is this possible? If not, are there assumptions I can try to make so that I can get an approximation to the analytical solution?
Answer: The differential equation
A general trick in these cases is to solve the differential equation first for the speed, which is easy and first-order, and then integrate the speed to get the position.
Indeed, this is a differential equation of the form
$$d^2x/dt^2=A-B(dx/dt)^2$$
If we write $v=dx/dt$ this becomes
$$dv/dt = A-Bv^2$$ which can be solved, although the general solution is not especially nice.
Of course in your case we have:
$$A={{4\over 3}\pi a^3(\rho_f-\rho_s)g \over {2\over 3}\pi a^3(\rho_f+2\rho_s)} = 2g { (\rho_f-\rho_s) \over (\rho_f+2\rho_s)}$$
$$B={{1\over 2} \rho_f C_d \pi a^2 \over {2\over 3}\pi a^3(\rho_f+2\rho_s)} = {3\over 4} {C_d \over a} {\rho_f \over (\rho_f+2\rho_s)}$$
notice that, dimensionally, $A$ is an acceleration and $B$ the inverse of a position (because $Bv^2$ is an acceleration). Also, $\sqrt{A/B}$ is a speed and $\sqrt{AB}$ the inverse of a time. We can use this fact later on to check the validity of our solutions.
Let us now see how we can reach that solution and how much better it looks in the case in which $v(0)=0$ i.e. your sphere starts without any initial velocity (for the more general case, refer to the link above and find the constant $c_1$ using any initial condition you might need).
The analytical solution for the speed
Very briefly, you can solve it by rewriting
$${dv \over A-Bv^2}=dt$$
and then by integrating this on both sides from our initial time $t_0$ to $t$:
$$\int_{t_0}^t {dv \over A-Bv^2} = t-t_0$$
By solving the integral on the left hand side:
$${1\over\sqrt{AB}} \tanh^{-1}\left(\sqrt{B\over A} v\right)+c = t-t_0$$
where of course we added the necessary constants due to integration.
Now if we put $t_0=0$ and $c=0$ (equivalent to $v(0)=0$) we can solve it for $v$ and get
$$\tanh^{-1}\left(\sqrt{B\over A} v\right)=\sqrt{AB}t$$ which then, applying $\tanh$ on both sides
$$\left(\sqrt{B\over A} v\right) = \tanh(\sqrt{AB}t)$$ and finally
$$v(t)=\sqrt{A\over B}\tanh(\sqrt{AB}t)$$ which is our solution for the speed.
A sketch of the solution for the velocity in the case where $v(0)=0$ (figure not included here; its units are in terms of $A$ and $B$ - put in the right numbers for your constants to get the actual result you need) is a hyperbolic tangent: it rises decently fast at the beginning but saturates at a limit speed given by $dv/dt=0$, i.e. when $v=\sqrt{A/B}$ (when the $v$-dependent term equals the constant $A$).
The analytical solution for the position
Then of course you need to integrate again the speed $v(t)$ to get the position $x(t)=\int v(t)dt$. Again of course we use the initial condition $x(0)=0$ which is easy and also general, as starting at any other height would lead to the same solution albeit with a $+x(0)$ term, as there is no position-dependent force here.
If you then do also the speed integration you get a solution of the form
$$x(t) = {1\over B}\ln(\cosh(\sqrt{AB}t))$$
meaning that you move with almost constant acceleration at the beginning (so $x(t)\approx \tfrac{1}{2}At^2$ for $t\ll 1/\sqrt{AB}$ - that is why it starts out looking like a parabola) and then, once you reach terminal velocity, you go on at constant speed $v=\sqrt{A/B}$ (linear behavior)
Putting in the values
So, using our definition of $A$ and $B$
$$A = 2g { (\rho_f-\rho_s) \over (\rho_f+2\rho_s)}$$
$$B = {3\over 4} {C_d \over a} {\rho_f \over (\rho_f+2\rho_s)}$$
we get (in the $v(0)=0$ and $x(0)=0$ assumption):
$$v(t)=\sqrt{ {8 g a\over 3 C_d} { (\rho_f-\rho_s) \over \rho_f } } \tanh \left(\sqrt{{3g C_d \over 2a} { \rho_f (\rho_f-\rho_s) \over (\rho_f+2\rho_s)^2} } t \right)$$
and for the position
$$x(t)=
{4\over 3} {a \over C_d} { (\rho_f+2\rho_s)\over \rho_f }\log\left(\cosh \left(\sqrt{{3g C_d \over 2a} { \rho_f (\rho_f-\rho_s) \over (\rho_f+2\rho_s)^2} } t \right)\right)$$
Summing up
So the sphere starts without speed, accelerates under the effect of Archimedes' law but is sooner or later slowed down by the $-v^2$ term (friction/drag) which brings it to limit velocity, at which it moves until it reaches the surface. Reaching limit speed is a feature of most $v$-dependent friction forces. | {
"domain": "physics.stackexchange",
"id": 76423,
"tags": "newtonian-mechanics, fluid-dynamics, velocity, drag, differential-equations"
} |
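As a quick numerical consistency check (a sketch with illustrative values for $A$ and $B$, not from the original answer), one can verify that $v(t)=\sqrt{A/B}\tanh(\sqrt{AB}\,t)$ really satisfies $dv/dt=A-Bv^2$:

```python
import math

A, B = 9.81, 0.5   # illustrative values: an acceleration (m/s^2) and an inverse length (1/m)

def v(t):
    # Claimed solution for the speed with v(0) = 0
    return math.sqrt(A / B) * math.tanh(math.sqrt(A * B) * t)

t, h = 0.3, 1e-6
dvdt = (v(t + h) - v(t - h)) / (2 * h)  # central-difference derivative
residual = dvdt - (A - B * v(t) ** 2)   # ~0 if v(t) solves the ODE
```

For large $t$ the speed saturates at $\sqrt{A/B}$, the terminal velocity discussed above.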
What is the degree of freedom? | Question: In here, https://en.wikipedia.org/wiki/Degrees_of_freedom_(mechanics),
the degree of freedom is defined as "the number of independent parameters that define its configuration." So, if $N$ particles are in the system, the degree of freedom is $3N$.
But here, https://en.wikipedia.org/wiki/Degrees_of_freedom_(physics_and_chemistry),
defined as "The set of all dimensions of a system is known as a phase space, and degrees of freedom are sometimes referred to as its dimensions." In this sense, the degree of freedom is $6N$.
What is the definition of the degree of freedom?
Answer: A degree of freedom is basically a system variable that's unbound (free).
We say "degrees of freedom" rather than just "variables" to clarify that we're referring to that freeness of the system rather than a specific count of variables.
For example, consider a 2-D grid with a particle at $\left(x,y\right)$. We can also refer to that particle's location in terms of polar coordinates, $\left(r,\theta\right)$. So that's 4 variables: $\left\{x,y,r,\theta\right\}$; however, at most 2 of them can be chosen independently. This is what we mean by the system having "2 degrees of freedom": there are more than 2 variables, but only 2 of them are free.
Example: $3n$ vs. $6n$ from the question
If you have a system of $n$ particles, then their positions have $3n$ degrees-of-freedom:
1 for each $x$ coordinate;
1 for each $y$ coordinate; and
1 for each $z$ coordinate.
But what if you want to include their velocities? Then you need $3n$ more for the components of velocity: $v_x$, $v_y$, and $v_z$. That brings it to $6n$.
However, neither $3n$ nor $6n$ is particularly fundamental or worth memorizing. You'll generally want to think out the number of degrees of freedom every time you consider a physical situation. | {
"domain": "physics.stackexchange",
"id": 47422,
"tags": "definition, degrees-of-freedom"
} |
Deriving $Tds$ relations in thermodynamics | Question: The Tds relations I refer to are, $$Tds = du + Pdv$$$$Tds = dh - vdP$$
The first equation is derived (assuming an internally reversible process) from the definition of entropy $ds = \delta Q/T$ and the idea that heat supplied is used to do work and increase internal energy. Note that work here refers to work at constant $P$. The second $Tds$ equation is obtained using the definition of enthalpy $h = u + Pv$ => $dh = du + Pdv + vdP$ and substituting $du + Pdv$ with $Tds$ as per the first equation => $dh = Tds+vdP$. Note that unlike the first relation, $P$ is allowed to vary here (the term $vdP$ appears as a result).
How is this substitution correct where we assume $P$ to be constant as well as a variable within the same relation?
The above approach to deriving the Tds relations is from Thermodynamics: An Engineering Approach, Cengel and Boles, 8e.
Answer: The first relation $T~ds=du+P~dv$ is the statement of the first law of thermodynamics. It is to be taken as a given, as a starting point from which you derive all other equations (such as the second equation you have derived for $dh$). In other words, nothing is supposed constant in $T~ds=du+P~dv$; it is applicable to arbitrary infinitesimal process. | {
"domain": "physics.stackexchange",
"id": 44311,
"tags": "thermodynamics, pressure, entropy, volume"
} |
What am I missing in this application of Dijkstra? | Question: When running Dijkstra against the following graph:
  1    3
A -- B -- C
 \       /
2 \     / 1
   \   /
     D
...I come up with the following:
Current    A    B          C          D
A               1(via A)   -          2(via A)
B               1(A)       4(via B)   2(via A)
C               1(A)       4(via B)   2(via A)   (5 via C ignored)
The shortest path from A to B is 1 via A.
The shortest path from A to C is 4 via B.
The shortest path from A to D is 2 via A.
...but there is a path from A to C via D that is of magnitude 3 (2 + 1).
What am I missing here?
Answer: current should be the unexplored node with the lowest cost, which means that after you explored $B$ you should have explored $D$, because $2<4$. | {
"domain": "cs.stackexchange",
"id": 9089,
"tags": "shortest-path"
} |
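The corrected procedure can be reproduced with a standard heapq-based Dijkstra (a sketch, not part of the original answer; the edge weights are transcribed from the question's graph):

```python
import heapq

def dijkstra(graph, source):
    # Classic Dijkstra: always settle the unexplored node with the lowest cost.
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry, node already settled cheaper
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

graph = {
    "A": {"B": 1, "D": 2},
    "B": {"A": 1, "C": 3},
    "C": {"B": 3, "D": 1},
    "D": {"A": 2, "C": 1},
}
distances = dijkstra(graph, "A")
```

Here `distances["C"]` is 3 (via D), matching the shorter path the asker found by hand.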
What is the sign of voltage if I move from the positive part to the negative part? | Question: In this example:
I want to calculate the voltage that exists between point a and b.
Of course this is not about getting some homework done, I really want to understand this.
So, this is my reasoning:
The tension $U_{ab}$ is the tension from a to b.
It has the same value as the tension from b to a, just with an opposite sign.
So, the value will be plus or minus one volt (there is a one volt difference, we just don't know the sign).
Now, I suppose this is a common scenario where electrons are the ones carrying the charge.
Electrons carry a negative charge and they move from negative charged zones to more positive charged zones to feel more relaxed there / to reach an equilibrium with their environment.
So, electrons move from the - to the +.
If I go from a to b I am doing the opposite thing, going from + to -.
So, the math go as follows:
$$U_{ab} = - (-2 V) - 3V = -1 V$$
The solutions sheet in this example say it's +1 V, but not why.
May you please help me to understand it?
Visualizing it:
In the next image from Wikipedia we can visualize the situation.
Suppose it's a real battery where electrons are going out at the negative side and being attracted in the positive side.
The arrow that represents voltage is very clearly drawn as a pushing force from - to +.
Sign conventions:
I only found passive and active sign conventions. Both talk about what is considered positive for current. Current in or out. But it does not talk about tension. Actually, it represents tension in both cases going from $-$ to $+$.
Passive: current is being consumed.
Active: current is being created.
Answer:
I want to calculate the voltage that exists between point a and b.
I'm just going to try to answer this question without digging in to any of the side issues you raised.
It will help to remember a schematic is a highly abstracted view of an electric circuit. You can think of it as a way of visually representing a set of equations.
For example, a resistor designated R1 with value $R$ connected with its (arbitrarily chosen) positive node at a and negative node at b is a visual representation of the equation
$$I({\rm R1}) = \frac{V_a-V_b}{R}.$$
(For your problem this is actually irrelevant, since they haven't given you the value $R$, they've just told you one terminal is at -2 V relative to the other terminal)
Similarly, an ideal voltage source with value 3 V connected with its positive terminal at node e and negative terminal at node b is shorthand for
$$V_e - V_b = 3\ {\rm V}$$
The advantage of using abstract models like these schematic diagrams is that it saves you having to consider numerous physical details like whether the charge carriers are positively or negatively charged, what electric fields are present around the devices, etc. You should take advantage of this to focus your attention on the information presented in the schematic diagram and how it can be used to solve the problem, rather than complicate the problem by bringing in details not needed to find the solution.
May you please help me to understand it?
So in this specific problem you want to find the voltage between a and b. You have a diagram that shows you that
$$V_e - V_b = 3\ {\rm V}$$
and
$$V_a - V_e = -2\ {\rm V}$$
From simple arithmetic you know
$$ V_a - V_b = (V_a - V_e) + (V_e - V_b)$$
so
$$ V_a - V_b = -2\ {\rm V} + 3\ {\rm V} = +1\ {\rm V}$$
No information about the type of charge carriers in the system, or the passive sign convention, or even the actual behavior of resistors is needed to solve the problem from the given information. | {
"domain": "physics.stackexchange",
"id": 63061,
"tags": "electric-circuits, electric-current, electrical-resistance, voltage, conventions"
} |
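The telescoping-sum arithmetic in this answer can be written out in a few lines (illustrative only; the variable names mirror the node-voltage differences):

```python
# Each labeled element on the schematic gives a signed node-voltage difference;
# differences telescope when summed along any path from a to b.
V_a_minus_e = -2.0   # volts (given)
V_e_minus_b = 3.0    # volts (given)
V_a_minus_b = V_a_minus_e + V_e_minus_b   # = +1.0 V
```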
Add lines to star with fixed coordinates maximizing smallest angle | Question: I have the following problem:
There are existing stars (as in graph-theory stars) with a fixed representation in a 2D coordinate space, meaning that angles between the edges are not allowed to change.
Now I want to add additional edges from the star center. Both the amount of existing edges and the amount of edges to add can vary.
How can the new edges be added in a way that maximizes the smallest angle between edges in the resulting star? When determining this smallest angle, the angles between newly added edges, as well as the angles between newly added and existing edges, are relevant.
Here are two trivial examples, the black edges are the existing ones, the blue ones the newly added:
However, there could also be more complex examples; consider adding different numbers of edges (1, 2, 3, ...) to this star:
Ideally, the algorithm is fast and easy to implement in object-oriented programming languages.
Background:
The underlying use case is drawing of Hydrogens for 2D Molecule depictions. In a software, a 2D molecule depiction might be loaded from a file or even drawn by a user, then, the user can trigger an action, adding missing Hydrogen atoms to all existing atoms.
For each atom (star), a varying number of hydrogens, depending on the atom type and charge (usually 1-3), should be added and drawn in a visually pleasing way.
The existing edges and angles can not be modified, because they are given by the user. Small angles tend to look bad, therefore, the goal is to maximize the smallest angle.
Currently, this is done by choosing the largest angle, then adding an edge in the middle. Then repeating this until no more edges are to be added. This is very fast and looks reasonably well, but obviously doesn't always find the best solution.
Answer: Here is an algorithm that simulates the process of adding the edges one by one, keeping the smallest angle between all adjacent edges as large as possible at all times.
List all $m$ existing edges clockwise as $e_0, e_1, \cdots, e_{m-1}, e_m=e_0$.
Create an empty priority queue $P$ for elements of the form $(i, \alpha, j)$ with associated priority $\dfrac\alpha{i+1}$.
For each $0\le j\lt m$, insert $(0, \alpha_j, j)$ into $P$, where $\alpha_j$ is the measure of the angle between edge $e_j$ and $e_{j+1}$. The first number, which is 0 for now, means the number of edges that will be added evenly-spaced between $e_j$ and $e_{j+1}$.
Repeat the following procedure $n$ times, where $n$ is the number of edges to be added.
Select the top element $T=(i,\alpha,\_)$ of the priority queue.
Add 1 to $i$, which lowers the priority of $T$.
Remove and insert $T$ so as to keep $P$ a valid priority queue. (These two operations together can be implemented more efficiently without actually removing $T$ first.)
At the end of the algorithm, the elements in $P$ tell us how many edges should be added, evenly spaced, between each pair of adjacent edges.
Most popular languages have built-in support for priority queues. For example, there is a heapq in Python. There is a PriorityQueue in Java. | {
"domain": "cs.stackexchange",
"id": 13235,
"tags": "algorithms, graphs, computational-geometry"
} |
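A minimal Python sketch of the algorithm above using the standard-library heapq (the function name and the choice of degrees for angles are my own; heapq is a min-heap, so the priority $\alpha/(i+1)$ is stored negated):

```python
import heapq

def spread_new_edges(angles, n):
    """Distribute n new edges among the gaps between existing edges.

    angles: gap angles (e.g. degrees) between adjacent existing edges, clockwise.
    Returns how many edges to insert, evenly spaced, into each gap.
    """
    # Each entry is (-priority, i, j): i edges added so far in gap j,
    # priority = alpha / (i + 1) = smallest sub-angle if i edges are spread evenly.
    heap = [(-a, 0, j) for j, a in enumerate(angles)]
    heapq.heapify(heap)
    for _ in range(n):
        neg_p, i, j = heapq.heappop(heap)   # gap whose sub-angles are largest
        a = -neg_p * (i + 1)                # recover the full gap angle
        heapq.heappush(heap, (-a / (i + 2), i + 1, j))
    counts = [0] * len(angles)
    for _, i, j in heap:
        counts[j] = i
    return counts
```

For instance, with existing gaps of 180, 90 and 90 degrees, the first new edge goes into the 180-degree gap, splitting it into two 90-degree gaps.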
Is it possible to change the logger without building ROS packages (ROS2) | Question:
Is it possible to change the logger from spdlog to log4cxx without having to rebuild individual ROS packages?
Originally posted by MHx on ROS Answers with karma: 45 on 2020-11-25
Post score: 2
Answer:
Unfortunately you'll need to rebuild rcl. rcl picks the logger to use at compile time, which by default is rcl_logging_spdlog.
Add rcl_logging_log4cxx and rcl to your workspace, then choose the logging implementation by setting the environment variable RCL_LOGGING_IMPLEMENTATION.
export RCL_LOGGING_IMPLEMENTATION=rcl_logging_log4cxx
colcon build --cmake-clean-cache --packages-select rcl_logging_log4cxx rcl
There may be more relevant info in this thread: https://discourse.ros.org/t/ros2-logging/6469/24
Originally posted by sloretz with karma: 3061 on 2020-12-18
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 35799,
"tags": "ros, ros2, logger"
} |
Is it only me or matlab's periodogram's function is confusing? | Question: I am really confused with matlab's periodogram.
First of all, it says in the documentation that its unit is "dB per unit frequency". What does that really mean? I just don't quite get it.
Second of all, I'm trying to evaluate an ADC. I tried to measure its linearity, but that won't work: when I reduce the level of the input signal by, let's say, -3 dB, I expect to see a -3 dB reduction in the highest peak of the spectrum, but that's not the case.
Last but not least, with this "dB per unit frequency", when is the highest peak actually 0 dB? It should reach that at some point, right? Because as far as I know, dB is always a ratio of some kind. How can I modify it so as to give me the values in dBFS, for example, so I could get a sense of what my data looks like?
Sorry for the long post; the documentation for MATLAB's periodogram just doesn't state this clearly.
Thank you all.
Answer: As the documentation states, periodogram provides a power spectral density estimate pxx:
[pxx, w] = periodogram(x);
meaning that it shows how the total variance of the signal, var(x), is distributed over the frequency w. This implies that the integral of the estimated spectral density over the frequency axis
trapz(w, pxx)
should correspond to that total variance.
This also means that the psd is not normalized, neither to a total variance of 1, nor to a maximum peak of 1 (= 0 dB). That the result is plotted (not computed!) in dB merely means that the vertical axis shows $10 \, \log_{10}($ pxx $)$, not that it shows a ratio. There is therefore no reason to expect a peak to align with 0 dB; depending on the range of values of the signal it can be both much higher and much lower.
To produce a plot from pxx and w, use
plot(w, 10 * log10(pxx))
xlabel('normalized frequency [rad / sample]')
ylabel('power spectral density [(rad/sample)^{-1}] in dB')
Reducing the level of the total signal by 3 dB means dividing the signal x by a factor of sqrt(2), which halves its power. If you do this numerically
periodogram(x)
hold all
periodogram(x / sqrt(2))
you will see that the second psd estimate has exactly the same shape as the first, it's just 3 dB lower. If this is not what you find with your measurement data, then this means that either your volume regulation or your ADC does not work correctly.
If you are really sure you want a normalized spectrum, do the following: First, compute the psd estimate:
[pxx, w] = periodogram(x);
Then, normalize the psd, e.g. with respect to the highest peak:
npxx = pxx / max(pxx);
You can then plot the normalized psd on a logarithmic scale like this:
plot(w, 10 * log10(npxx))
xlabel('normalized frequency [rad / sample]')
ylabel('normalized power spectral density')
Note that npxx does not have a unit.
A note about units: The unit of numbers returned as pxx by periodogram & friends depends on the units of the input signal x. Let's say these are dimensionless numbers; then the unit of the numbers in the pxx output is $\rm \frac{1}{rad / sample}$, because it is a density over w which is in units of $\rm rad / sample$. Or let's say these are voltage measurements coming from a calibrated ADC, so that the numbers in x are in volts. Then the unit of the numbers in the pxx output is $\rm \frac{V^2}{rad / sample}$.
If you additionally specify the sampling frequency in the call to periodogram:
[pxx, f] = periodogram(x, [], [], fs);
and fs is in Hz, the pxx is a density over frequency f, and its unit is $\rm \frac{V^2}{Hz}$.
Now if pxx is plotted, it is often plotted logarithmically, like stated above. If the function $10 \log_{10}$ is used, then it is said that the result is in decibel, dB. Then sometimes the unit of 10 * log10(pxx) is written as $\rm \frac{dB}{rad / sample}$. Strictly speaking, this expression doesn't make sense because dB is not just a unit (a different measurement stick) but denotes the use of a different quantity: the logarithm. That's why above I wrote the unit as [(rad/sample)^{-1}] in dB. | {
"domain": "dsp.stackexchange",
"id": 2655,
"tags": "matlab, fft, frequency-spectrum, power-spectral-density, dbfs"
} |
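The 3 dB point can be checked without any toolboxes (an illustrative stdlib-only sketch, not MATLAB; the tone frequency and length are arbitrary): dividing a signal's amplitude by sqrt(2) halves its mean-square power, i.e. lowers it by 10·log10(2) ≈ 3.01 dB.

```python
import math

def power_db(x):
    # Mean-square power of a sampled signal, expressed in dB (10*log10)
    p = sum(v * v for v in x) / len(x)
    return 10 * math.log10(p)

fs, f, n = 1000, 50, 1000                       # arbitrary rate, tone, length
x = [math.cos(2 * math.pi * f * t / fs) for t in range(n)]
x_att = [v / math.sqrt(2) for v in x]           # amplitude divided by sqrt(2)

drop = power_db(x) - power_db(x_att)            # ~3.01 dB: power halved
```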
How to get laser data of Hokuyo UST-20LX | Question:
The Hokuyo UST-20LX's data interface is Ethernet 100BASE-TX. I have installed the Hokuyo node, but an Ethernet interface is not a serial port device, so /dev/ttyACM0 didn't show up. Are there any drivers for that device under ROS? How can I get the data of the UST-20LX?
Originally posted by Thomaswang on ROS Answers with karma: 23 on 2014-08-18
Post score: 0
Original comments
Comment by dornhege on 2014-08-18:
Can you try urg_node?
Answer:
urg_node is the correct option. Should be as simple as installing it (apt-get or what have you), and then running it, specifying an IP address.
rosrun urg_node urg_node _ip_address:=192.168.0.10 (the ip address is the factory default).
Originally posted by Bradley Powers with karma: 422 on 2014-08-31
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 19087,
"tags": "ros, laser, ethernet, hokuyo"
} |
Basic ul li elements in JavaScript listeners | Question: I am learning JavaScript and I have written a code snippet. Can someone suggest some improvements?
Array.prototype.pushIfDoesntExist = function (item){
if(this.indexOf(item) === -1) {
this.push(item);
return true;
}
return false;
}
var Testing = {
clickedList: [],
applyListeners: function() {
var self = this;
var nodes = document.getElementsByTagName('li');
for (var i = 0; i < nodes.length; i++) {
document.getElementsByTagName('li')[i].addEventListener('click', function() {
//we only want to display an item once
self.clickedList.pushIfDoesntExist(this.textContent)
self.displayData();
});
}
},
displayData: function() {
var textBox = document.getElementById('clickedElements');
textBox.textContent = this.clickedList.join(', ');
}
}
Testing.applyListeners();
<ul>
<li>Milk</li>
<li>Eggs</li>
<li>Bacon</li>
<li>Cheese</li>
</ul>
<div id="clickedElements">
</div>
Answer: Don't Modify Array.prototype
Writing custom functions to built in prototypes is generally a bad idea. The basic reasoning behind this is that it is a bad practice to modify something you don't own, but check out this Stack Overflow Question for more info.
Instead of changing the prototype you could just write a global function:
function pushIfDoesntExist(array, item) {
if (array.indexOf(item) === -1) {
array.push(item);
return true;
}
return false;
}
Weigh Out Pros and Cons before Abstracting
The function pushIfDoesntExist is well named, so it can act as a comment for what the code is doing. Also, I can think of several cases where it could be useful. BUT is it really worth writing a function for it? After all, you wouldn't write a function add1:
function add1(number) {
return number + 1;
}
And that's a well named function with even more applications than pushIfDoesNotExist. What I'm trying to get at here is that abstracting a part of code is not always a good idea. There are some downsides to making a function for something:
The function delocalizes code. Someone who is reading through your code to better understand it needs to hunt down where pushIfDoesntExist is defined to see how it works. (good documentation can help with this)
It adds a dependency. Functions that call pushIfDoesntExist rely on pushIfDoesntExist being defined properly. (unit tests can help with this)
It takes time to write/debug/unit-test/document/maintain/think-about pushIfDoesntExist.
Is the value added by the function pushIfDoesntExist really worth the cost? (especially since you only call it once)
Inline Module Constructor
If you want private member variables, or you don't like using this or self you can use an inline function to construct your Testing object.
var Testing = (function () {
var clickedList = []; //hidden inside inline function scope
function applyListeners() {
Array.prototype.forEach.call(document.getElementsByTagName("li"), function (node) {
if (clickedList.indexOf(node.textContent) === -1) {
clickedList.push(node.textContent);
displayData();
}
});
}
function displayData() {
document.getElementById("clickedElements").textContent = clickedList.join(", ");
}
return {
applyListeners: applyListeners // public
};
})(); | {
"domain": "codereview.stackexchange",
"id": 20802,
"tags": "javascript, html, html5"
} |
Motion of an object in rotating frame | Question: Yesterday I was watching an old Sloan video that describes motion in inertial and non-inertial frames. One experiment went like this: two people sit on opposite sides of a table fixed to a turning platform. The platform rotates in uniform circular motion. Now Guy 1 pushes a ball over the frictionless surface of the table in a straight line towards Guy 2. The question is: what will the motion of the ball look like to a viewer inside the rotating frame, and to someone outside in a fixed frame of reference? I got a little confused about how to conceive of the fictitious force. What will the motion be? In general, how does one derive equations of motion in a non-inertial frame? Please also add some good references for an intuitive understanding of these types of problems.
Answer: The ball will move in a straight line according to an observer outside. But since the observer on the platform is rotating, he will see the ball deflect away. This is called the $\textbf{Coriolis effect}$. Watch this video to understand it properly.
coriolis effect | {
"domain": "physics.stackexchange",
"id": 27359,
"tags": "newtonian-mechanics, reference-frames, rotational-kinematics"
} |
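For the general question of deriving equations of motion in a non-inertial frame (not spelled out in the original answer), the standard textbook result for a frame rotating with angular velocity $\vec\Omega$ is:

```latex
% Equation of motion in a frame rotating with angular velocity \Omega:
% the real force F is supplemented by three fictitious forces.
m\,\vec{a}_{\mathrm{rot}} = \vec{F}
  \underbrace{-\,2m\,\vec{\Omega}\times\vec{v}_{\mathrm{rot}}}_{\text{Coriolis}}
  \underbrace{-\,m\,\vec{\Omega}\times(\vec{\Omega}\times\vec{r})}_{\text{centrifugal}}
  \underbrace{-\,m\,\dot{\vec{\Omega}}\times\vec{r}}_{\text{Euler}}
```

For the uniformly rotating platform $\dot{\vec\Omega}=0$, so the Euler term vanishes and the curved path seen on the platform comes from the Coriolis and centrifugal terms alone.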
How is angular momentum conserved when torque is zero? | Question: According to the law of conservation of angular momentum, the angular momentum of a particle is conserved when the torque on it is zero. Please make it easier for me to visualise this condition.
Answer: We can start with the definition of angular momentum $\vec{L} = \vec{r} \times \vec{p}$. Differentiate both sides with respect to $t$ to get
\begin{equation}
\frac{d\vec{L}}{dt} = \vec{v} \times \vec{p} + \vec{r} \times \frac{d\vec{p}}{dt}
\end{equation}
The first term on the right hand side is zero because $\vec{v}$ and $\vec{p}$ are parallel to each other. Further, by Newton's second law, $d\vec{p}/dt$ is the force $\vec{F}$. We therefore have,
\begin{equation}
\frac{d\vec{L}}{dt} = \vec{r} \times \vec{F}
\end{equation}
Now, $\vec{r} \times \vec{F}$ is the torque $\vec{N}$. If the torque is zero, $\vec{L}$ is conserved.
One way to visualize it: torque is what changes turning, so if the torque is zero, a particle either does not turn or, if it is turning, keeps turning at the same rate. | {
"domain": "physics.stackexchange",
"id": 16902,
"tags": "angular-momentum, torque"
} |
Does the death of the kilogram ($kg$) affect us in any way in our day-to-day life? | Question: Recently, the sleek cylinder of platinum-iridium metal has been retired, and the kilogram is set to be redefined, along with the ampere for electric current and the kelvin for temperature. Hereafter the kilogram artefact is dead; does this affect us in any way?
Answer: Short answer: no
Slightly longer answer: One of the primary goals of the BIPM is to ensure continuity in all redefinitions of the SI. So the numerical value of Planck’s constant chosen to define the kilogram was chosen precisely to prevent any discontinuity. In other words, avoiding any “day to day life” impact is an intentional part of the redefinition. | {
"domain": "physics.stackexchange",
"id": 53919,
"tags": "mass, conventions, si-units, metrology"
} |
Android UI code for a test job | Question: I wrote my first Android UI application for a test job, but it was declined; the employer said that he didn't like "the code quality". He didn't specify what he meant, but I'm very interested to know what's wrong with the code.
Here is the link to the Android Studio project.
Here is the text of this test:
Design and build android activity for serving full screen html ads.
When Activity is shown request for ad by sending HTTP POST request
with parameter id with value of sim card IMSI (subscriber id)
to http://www.505.rs/adviator/index.php
Example:
POST /adviator/index.php HTTP/1.1
Host: www.505.rs
Cache-Control: no-cache
Postman-Token: abd93bb8-2857-2fd0-7679-0b25087e1d35
Content-Type: application/x-www-form-urlencoded
id=85950205030644900
{ "status":"OK", "message":"display full screen ad",
"url":"http://www.505.rs/adviator/ad.html" }
If status is equal "OK" use returned "url" and load ad into Activity
webView.
If status is not equal "OK" show dialog with 'message' text and OK
button. Clicking OK button will dismiss both dialog and activity.
While requesting for ad and loading ad html show spinner in center of
screen and transparent background.
Once html ad is loaded hide spinner and show ad.
When user clicks ad link close activity and open native android
browser with clicked url.
Activity should work in both portrait and landscape mode.
AdActivity.java:
package ru.cityads.test.activities;
import android.app.AlertDialog;
import android.content.Context;
import android.content.Intent;
import android.net.Uri;
import android.os.Bundle;
import android.support.annotation.StringRes;
import android.telephony.TelephonyManager;
import android.view.View;
import android.webkit.WebResourceError;
import android.webkit.WebResourceRequest;
import android.webkit.WebView;
import android.webkit.WebViewClient;
import android.widget.ProgressBar;
import com.trello.rxlifecycle.components.RxActivity;
import ru.cityads.test.R;
import ru.cityads.test.services.AdResponse;
import ru.cityads.test.services.AdsService;
import rx.Observable;
import rx.android.schedulers.AndroidSchedulers;
import rx.schedulers.Schedulers;
/**
* Activity that shows ads.
*/
public class AdActivity extends RxActivity
{
@Override
protected void onCreate(Bundle savedInstanceState)
{
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_ad);
setupWebView();
requestAd();
getDeviceID();
}
//region Private behaviour methods
private void finishDisplayAd()
{
this.finish();
}
//endregion
//region Private ad request methods
private void requestAd()
{
showProgress();
final AdsService adsService = new AdsService();
final Observable<AdResponse> adRequest = adsService.requestAd(getDeviceID());
adRequest.compose(bindToLifecycle())
.subscribeOn(Schedulers.newThread())
.observeOn(AndroidSchedulers.mainThread())
.subscribe(adResponse -> { acceptAdResponse(adResponse); }, error -> handleAdRequestError(error));
}
private void handleAdRequestError(Throwable e)
{
showErrorMessage(R.string.error_dialog_unknown_error_message);
}
private void acceptAdResponse(AdResponse adResponse)
{
if(adResponse.checkStatus())
{
loadUrl(adResponse.getUrl());
}
else
{
showErrorMessage(adResponse.getMessage());
}
}
//endregion
//region Private view helpers
private void setupWebView()
{
getWebView().setWebViewClient(new WebViewClient()
{
@Override
public void onPageFinished(WebView view, String url)
{
AdActivity.this.showWebView();
}
@Override
public void onReceivedError(WebView view, WebResourceRequest request, WebResourceError error)
{
AdActivity.this.showErrorMessage(R.string.error_dialog_web_load_fail_message);
}
@Override
public boolean shouldOverrideUrlLoading(WebView view, String url)
{
final Uri uri = Uri.parse(url);
final Intent intent = new Intent(Intent.ACTION_VIEW, uri);
startActivity(intent);
finishDisplayAd();
return true;
}
});
}
private void showProgress()
{
getProgressBar().setVisibility(View.VISIBLE);
getWebView().setVisibility(View.GONE);
}
private void showErrorMessage(@StringRes int errorMessageId)
{
final String message = getResources().getString(errorMessageId);
showErrorMessage(message);
}
private void showErrorMessage(String errorMessage)
{
final AlertDialog.Builder dialog = new AlertDialog.Builder(this);
dialog.setMessage(errorMessage);
dialog.setTitle(R.string.error_dialog_title);
dialog.setPositiveButton(R.string.error_dialog_button_text, (a, b) -> finishDisplayAd());
dialog.create();
dialog.show();
}
private void loadUrl(String url)
{
getWebView().loadUrl(url);
}
private void showWebView()
{
getProgressBar().setVisibility(View.GONE);
getWebView().setVisibility(View.VISIBLE);
}
//endregion
//region Private view getters
private ProgressBar getProgressBar()
{
return (ProgressBar)findViewById(R.id.progressBar);
}
private WebView getWebView()
{
return (WebView)findViewById(R.id.webView);
}
//endregion
//region Private device id getter method
public String getDeviceID()
{
String deviceID = null;
try
{
String serviceName = Context.TELEPHONY_SERVICE;
TelephonyManager m_telephonyManager = (TelephonyManager) getSystemService(serviceName);
deviceID = m_telephonyManager.getDeviceId();
}
catch(Throwable e)
{
}
if(deviceID == null)
{
deviceID = "000000000000000";
}
return deviceID;
}
//endregion
}
AdsService.java:
package ru.cityads.test.services;
import com.squareup.okhttp.OkHttpClient;
import java.util.concurrent.TimeUnit;
import retrofit.GsonConverterFactory;
import retrofit.Retrofit;
import retrofit.RxJavaCallAdapterFactory;
import retrofit.http.Field;
import retrofit.http.FormUrlEncoded;
import retrofit.http.POST;
import ru.cityads.test.BuildConfig;
import rx.Observable;
/**
* Service used to request ads.
*
* @see AdResponse
*/
public class AdsService
{
public AdsService()
{
final OkHttpClient httpClient = new OkHttpClient();
httpClient.setConnectTimeout(BuildConfig.HTTP_CLIENT_CONNECT_TIMEOUT, TimeUnit.SECONDS);
httpClient.setWriteTimeout(BuildConfig.HTTP_CLIENT_WRITE_TIMEOUT, TimeUnit.SECONDS);
httpClient.setReadTimeout(BuildConfig.HTTP_CLIENT_READ_TIMEOUT, TimeUnit.SECONDS);
final Retrofit retrofit = new Retrofit.Builder()
.addCallAdapterFactory(RxJavaCallAdapterFactory.create())
.addConverterFactory(GsonConverterFactory.create())
.baseUrl(BuildConfig.API_BASE_URL)
.client(httpClient)
.build();
mRemoteInterface = retrofit.create(RemoteInterface.class);
}
public Observable<AdResponse> requestAd(String requesterId)
{
return mRemoteInterface.requestAd(requesterId);
}
//region Interface representing remote ad service. Implemented by Retrofit.
private interface RemoteInterface
{
@FormUrlEncoded
@POST("/adviator/index.php")
Observable<AdResponse> requestAd(@Field("id") String id);
}
//endregion
//region Private data
private final RemoteInterface mRemoteInterface;
//endregion
}
AdResponse.java:
package ru.cityads.test.services;
import com.google.gson.annotations.SerializedName;
/**
* Represents response to ad request, made by {@link AdsService}
*
* @see AdsService
*/
public class AdResponse
{
public String getStatus()
{
return mStatus;
}
public String getMessage()
{
return mMessage;
}
public String getUrl()
{
return mUrl;
}
public boolean checkStatus()
{
return getStatus().equals("OK");
}
//region Private data
@SerializedName("status")
private String mStatus;
@SerializedName("message")
private String mMessage;
@SerializedName("url")
private String mUrl;
//endregion
}
Answer: You seem to write your code C-style. You prefix your instance variables with m, like mStatus and mMessage. You have things like //region Private data and //endregion.
Java tends to follow a different style guide, where, most of the time, the following things apply...
No prefixing of instance variables
4-space tabs
class definition, then instance variables, then constructor, then methods
1 class/interface per file
What you wrote works, that's for sure. But what you seem to be doing is writing C or C#-styled Java code.
I recommend you look up some Code Conventions of Java. That way you'd be able to write code which is more "natural".
As for getDeviceID():
public String getDeviceID()
{
String deviceID = null;
try
{
String serviceName = Context.TELEPHONY_SERVICE;
TelephonyManager m_telephonyManager = (TelephonyManager) getSystemService(serviceName);
deviceID = m_telephonyManager.getDeviceId();
}
catch(Throwable e)
{
}
if(deviceID == null)
{
deviceID = "000000000000000";
}
return deviceID;
}
From what I can see from the documentation, neither getSystemService nor getDeviceId() will throw any exceptions. As such, I think you could get rid of the try-catch you've got:
public String getDeviceID()
{
String serviceName = Context.TELEPHONY_SERVICE;
TelephonyManager m_telephonyManager = (TelephonyManager) getSystemService(serviceName);
String deviceID = m_telephonyManager.getDeviceId();
if(deviceID == null)
{
deviceID = "000000000000000";
}
return deviceID;
}
serviceName seems only extra here, to me, so we can remove that...
public String getDeviceID()
{
TelephonyManager m_telephonyManager = (TelephonyManager) getSystemService(Context.TELEPHONY_SERVICE);
String deviceID = m_telephonyManager.getDeviceId();
if(deviceID == null)
{
deviceID = "000000000000000";
}
return deviceID;
}
You already use m_variable or mVariable for your instance variables, so if you're going to do that, at least be consistent and don't name method-scoped variables like they are instance scoped...
public String getDeviceID()
{
TelephonyManager telephonyManager = (TelephonyManager) getSystemService(Context.TELEPHONY_SERVICE);
String deviceID = telephonyManager.getDeviceId();
if(deviceID == null)
{
deviceID = "000000000000000";
}
return deviceID;
}
Additionally, according to the documentation, if TelephonyService is not available, then you will get back null. To deal with this, add a check:
public String getDeviceID()
{
TelephonyManager telephonyManager = (TelephonyManager) getSystemService(Context.TELEPHONY_SERVICE);
if(telephonyManager == null){
return "000000000000000";
}
String deviceID = telephonyManager.getDeviceId();
if(deviceID == null)
{
deviceID = "000000000000000";
}
return deviceID;
}
Now, I don't know about how you wanna deal with bad device id's, but if you want to provide a default implementation, then 000000000000000 is fine. If that's the case, I'd recommend putting it in a separate constant so that you don't make mistakes with the amount of zeros. If you don't want to handle a default case, you should just return null straight away. In that case, you can reduce the function getDeviceID() to this:
public String getDeviceID()
{
TelephonyManager telephonyManager = (TelephonyManager) getSystemService(Context.TELEPHONY_SERVICE);
if(telephonyManager == null){
return null;
}
return telephonyManager.getDeviceId();
}
Lastly...
@Override
protected void onCreate(Bundle savedInstanceState)
{
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_ad);
setupWebView();
requestAd();
getDeviceID();
}
Here you call getDeviceID(), but it does nothing! You get the deviceId and then throw it away again!
You should remove the unneeded function call. | {
"domain": "codereview.stackexchange",
"id": 18092,
"tags": "java, android, rx-java"
} |
Difference between null and recessive allele? | Question: I get that in a single gene locus, an individual can have RR, Rr, or rr as the two alleles for that gene. R is "wild type" because it is the allele occurring most frequently. r is the allele that is not WT.
RR and Rr show dominant phenotypes, whereas rr shows the recessive phenotype.
But what's the difference between r and a null allele (an allele generated by a loss-of-function mutation, resulting in the complete loss of the WT phenotype)? Where _ is a null allele, my questions are below:
R _ would produce the same as Rr, correct or not?
r _ would produce the same as rr, correct or not?
_ _ would produce the same as rr, correct or not?
Answer: Good question +1. Unfortunately, the mechanisms by which dominance work is relatively poorly understood and it is likely that the mechanism differs from one locus to another.
You might want to have a look at the posts
Why are some genes dominant over others? What is the mechanism behind it?
Evolution of dominance
or some papers such as
Llaurens et al. (2009)
I don't think one can make any general prediction about the phenotype of R_, r_ or __ without having a priori knowledge of the biological pathway (incl. allele interaction (see Llaurens et al. (2009)) and gene interaction network) by which this particular locus is affecting the phenotype. It is tempting to say that R_ is alike Rr or RR, and r_ is alike rr but this is not necessarily true. | {
"domain": "biology.stackexchange",
"id": 5559,
"tags": "genetics, molecular-genetics"
} |
Can plants live forever? | Question: I know that some plants, like a lettuce, die of old age. But there are trees like baobabs or the larger Ficus of the tropics whose age we don't know, and trees like a spruce reaching 9,950 years old that die from environmental factors and not from age.
However, when we propagate an olive or cacao tree vegetatively, for example, we use a fragment or clone from a unique individual, passing on the same genetic information again and again.
Then, can these plants live forever?
Answer: The answers to these questions often boil down to "what do you mean by live forever?".
You've included vegetative cloning, so I infer that counts as one living organism for your purposes. In that case, the answer is absolutely.
Pando is at least 10 thousand years old and only getting larger. The Cavendish banana is about 150 years old, but produces at least 70 million tons of bananas yearly.
See here for other giant clones. Of particular interest is King's Lomatia, represented as a species by a single, sterile individual. Fossilized triploid leaves are too old for carbon dating (at least 45 thousand years old), so it's essentially unknown how old it is.
I would be comfortable declaring Pando immortal, since it's way too big to be eaten, too resilient to burn down, and has already survived the middle and end(at least!) of an ice age. That just leaves (ha) disease and basically nothing else as possible threats. | {
"domain": "biology.stackexchange",
"id": 4078,
"tags": "genetics, plant-physiology, senescence"
} |
How does HIV mutate into other strains while keeping their virulent phenotype? | Question: How does a virus like HIV mutate into so many strains, and yet all of them are harmful to our immune system? What gives this virus the ability to mutate so efficiently?
Answer: Others have already touched the important points. Consider this as a summary.
What gives HIV the ability to mutate?
All organisms mutate by two mechanisms:
Replication errors
Mutagenesis by physical/chemical agents that cause a chemical change
(lesion) on DNA
The main enzyme responsible for HIV replication is reverse transcriptase, which makes a DNA copy of its RNA genome. All RNA and DNA polymerases make some amount of error, but the error rate of reverse transcriptase is much higher than that of the usual DNA-dependent DNA polymerases because it does not have a proofreading mechanism.
How does a virus like HIV mutate into so many strains, and yet all of
them are harmful to our immune system?
As indicated in previous answers, the mutations will produce virions with a spectrum of infectivity/pathogenicity (some can even be non-infective).
However, the immune system acts as a selective barrier which selects only those mutants that can survive (similar to what happens in the evolutionary process of natural selection). The selected strains expand their population, and that is how these strains become established. | {
"domain": "biology.stackexchange",
"id": 6059,
"tags": "mutations, virus, hiv, retrovirus, aids"
} |
html and css of login form | Question: I am new to Bootstrap and trying to design a login page. Please check my link below and let me know if I did it the wrong way.
I have one doubt. I think I am doing it the wrong way because when I inspect my HTML page (right click, Inspect Element) on the form, I notice the form's width is less than the input field's. <form class="form-vertical" role="form"></form> is my form, and I notice my input field's width is larger than the form's.
https://jsfiddle.net/vaaibhavk32/yej8d8r3/
<!DOCTYPE html>
<html>
<head>
<title>Login</title>
<meta name="viewport" content="width=device-width, initial-scale=1" charset="utf-8" >
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" >
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js" ></script>
<style type="text/css">
h1,div{
margin: 0px;
padding: 0px;
}
body {
padding-top: 70px;
}
.login-containt {
margin: 0 auto;
box-shadow: 0 15px 20px rgba(0, 0, 0, 0.1);
width: 40%;
}
.outer-form{
padding: 0px 5px;
}
.login-containt h1 {
text-align: center;
font-weight: bold;
font-size: 26px;
padding-bottom: 10px;
}
@media screen and (max-width: 600px){
.login-containt{
width: 100%
}
}
</style>
</head>
<body>
<!-- Fixed navbar -->
<nav class="navbar navbar-default navbar-fixed-top">
<div class="container">
<div class="navbar-header">
<button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#navbar" aria-expanded="false" aria-controls="navbar">
<span class="sr-only">Toggle navigation</span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
<a class="navbar-brand" href="#">Project name</a>
</div>
<div id="navbar" class="navbar-collapse collapse">
<ul class="nav navbar-nav">
<li class="active"><a href="#">Home</a></li>
<li><a href="#about">About</a></li>
<li><a href="#contact">Contact</a></li>
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-haspopup="true" aria-expanded="false">Dropdown <span class="caret"></span></a>
<ul class="dropdown-menu">
<li><a href="#">Action</a></li>
<li><a href="#">Another action</a></li>
<li><a href="#">Something else here</a></li>
<li role="separator" class="divider"></li>
<li class="dropdown-header">Nav header</li>
<li><a href="#">Separated link</a></li>
<li><a href="#">One more separated link</a></li>
</ul>
</li>
</ul>
<ul class="nav navbar-nav navbar-right">
<li><a href="../navbar/">Default</a></li>
<li><a href="../navbar-static-top/">Static top</a></li>
<li class="active"><a href="./">Fixed top <span class="sr-only">(current)</span></a></li>
</ul>
</div><!--/.nav-collapse -->
</div>
</nav>
<div class="container">
<div class="jumbotron login-containt">
<div class="outer-form">
<h1>Login</h1>
<form class="form-vertical" role="form">
<div class="row">
<div class="form-group input-group">
<span class="input-group-addon" id="basic-addon1">@</span>
<input type="email" name="email" id="email" placeholder="Email" class="form-control">
</div>
</div>
<div class="row">
<div class="form-group input-group">
<span class="input-group-addon"><i class="glyphicon glyphicon-lock"></i></span>
<input type="password" name="password" id="password" placeholder="Password" class="form-control">
</div>
</div>
<div class="row">
<div class="form-group">
<input type="checkbox" aria-label="..."> Remember Me
<a href="forgot_password" class="pull-right">I forgot my password</a><br>
</div><!-- /input-group -->
</div>
<div class="row text-center">
<div class="form-group ">
<input type="button" class="btn btn-success" style="width: 100%" value="Login" name="">
</div>
</div>
<div class="row text-center">
OR
</div>
<div class="row">
<div class="form-group">
<a href="/register/" class="text-center">Register a new membership</a>
</div>
</div>
</form>
</div>
</div>
</div>
</body>
</html>
Answer: Within a form, you don't use the .row class. The documentation examples always use .form-group instead. Keep the .form-group div but remove the parent element (.row).
I have updated your Fiddle with the correct code. | {
"domain": "codereview.stackexchange",
"id": 30490,
"tags": "html, css"
} |
Unit Testing Search and Sort method(s) | Question: I just decided to write a unit test, and see how to do it. This is my unit test:
[TestMethod]
public void TestSearchSort()
{
System.IO.File.WriteAllText(@"C:\Users\Hosch250\Documents\Visual Studio 2013\Projects\ConsoleApplication14\ConsoleApplication14\TestFile.cs", string.Empty);
var TestInstance = new Program();
var titles = new List<string>();
string[] query = { "main", "menu" };
Program.Search(ref titles, ref query);
var expectedTitles = new List<string>();
string[] expectedTitleStrings = { "The Main Menu", "OneNote", "The Text Menu",
"The Text Block Menu", "The Table Menu", "The Table Cells Menu",
"The Draw Menu", "The Drawn Items Menu", "The Picture Menu",
"The File Menu", "Draw", "Windows Phone Notebooks",
"Windows Phone Sections", "Windows Phone Pages" };
expectedTitles.AddRange(expectedTitleStrings);
using (var file = new System.IO.StreamWriter(@"C:\Users\Hosch250\Documents\Visual Studio 2013\Projects\ConsoleApplication14\ConsoleApplication14\TestFile.cs", true))
{
file.WriteLine("Expected Count: " + expectedTitles.Count);
file.WriteLine("Actual Count: " + titles.Count);
}
Assert.AreEqual(expectedTitles.Count, titles.Count);
for (var i = 0; i < titles.Count; i++)
{
using (var file = new System.IO.StreamWriter(@"C:\Users\Hosch250\Documents\Visual Studio 2013\Projects\ConsoleApplication14\ConsoleApplication14\TestFile.cs", true))
{
file.WriteLine("Expected Title: " + expectedTitles[i]);
file.WriteLine("Actual Title: " + titles[i]);
}
Assert.AreEqual(expectedTitles[i], titles[i]);
}
}
This is the method it tests:
public static void Search(ref List<string> resultTitles, ref string[] query)
{
List<int> weight = new List<int>();
int position = -1;
foreach (string[] array in SearchKeys.Keys)
{
position++;
int length = array.Length;
int middle = length / 2;
char firstCharMidArray = array[middle][0];
foreach (string s in query)
{
int min = array[middle][0] < s[0] ? middle : 0;
int max = array[middle][0] <= s[0] ? array.Length : middle + 1;
for (int i = min; i < max; i++)
{
weight.Add(0);
if (array[i] == s)
{
if (weight[position] == 0)
{
resultTitles.Add(array[0]);
}
weight[position]++;
}
}
}
}
StableSort(ref resultTitles, ref weight);
}
I write to the file in the test method so I can see where/how it failed. How did I do? Should I be doing anything different? Should I have more tests? I most certainly cannot run every possible combination of search terms.
Answer:
var expectedTitles = new List<string>();
string[] expectedTitleStrings = { "The Main Menu", "OneNote", "The Text Menu",
"The Text Block Menu", "The Table Menu", "The Table Cells Menu",
"The Draw Menu", "The Drawn Items Menu", "The Picture Menu",
"The File Menu", "Draw", "Windows Phone Notebooks",
"Windows Phone Sections", "Windows Phone Pages" };
expectedTitles.AddRange(expectedTitleStrings);
This can just be written as
var expectedTitles = new List<string>
{
"The Main Menu", "OneNote", "The Text Menu",
"The Text Block Menu", "The Table Menu", "The Table Cells Menu",
"The Draw Menu", "The Drawn Items Menu", "The Picture Menu",
"The File Menu", "Draw", "Windows Phone Notebooks",
"Windows Phone Sections", "Windows Phone Pages"
};
var TestInstance = new Program();
This isn't used and can be removed.
There's a convenience method CollectionAssert.AreEqual that can be used.
Two collections are equal if they have the same elements in the same order and quantity. Elements are equal if their values are equal, not if they refer to the same object. The values of elements are compared using Equals by default.
It will give you error messages like this
CollectionAssert.AreEqual failed. (Different number of elements.)
CollectionAssert.AreEqual failed. (Element at index 0 do not match.)
One way of writing unit tests is called arrange-act-assert. Following that method and using the above recommendations, the code would look like this
var expectedTitles = new List<string>
{
"The Main Menu", "OneNote", "The Text Menu",
"The Text Block Menu", "The Table Menu", "The Table Cells Menu",
"The Draw Menu", "The Drawn Items Menu", "The Picture Menu",
"The File Menu", "Draw", "Windows Phone Notebooks",
"Windows Phone Sections", "Windows Phone Pages"
};
var titles = new List<string>();
string[] query = { "main", "menu" };
Program.Search(ref titles, ref query);
CollectionAssert.AreEqual(expectedTitles, titles); | {
"domain": "codereview.stackexchange",
"id": 12028,
"tags": "c#, unit-testing"
} |
What happens when a compass is suspended inside a current carrying solenoid? | Question: Suppose I have a current carrying solenoid with a strong magnetic field inside and outside it.
Now I bring a good compass inside that solenoid. I would like to know in which direction the north pole of that compass will be deflected: towards the south pole or the north pole of the solenoid.
Please explain your answer.
For more details please view this:-
Where is the deflection of compass needle when placed inside a current carrying solenoid?
Answer: The compass will line up so that its field lines align with the field lines of the solenoid. This means the north pole of the compass will point towards the north pole of the solenoid. | {
"domain": "physics.stackexchange",
"id": 43097,
"tags": "electromagnetism, magnetic-fields"
} |
Fourier Transform and Delta Function | Question: I am very new to Fourier analysis, but I understand that through the use of the Fourier transform a signal in the time domain is displayed in the frequency domain, where frequency values are normally displayed along the x-axis, and amplitude is displayed along the y-axis. However, at one point in the textbook I am using, the following is stated:
Let us assume that we have the function $f(t) = \cos(\omega_0 t)$. The spectrum then consists of two delta-functions
$$F(\omega) = \pi \delta(\omega - \omega_0) + \pi \delta(\omega + \omega_0)$$
__
This confuses me. When we have $f(t) = \cos(\omega_0 t)$, then I would assume that the Fourier transform should yield an amplitude of $1$ at $\omega = \omega_0$ and $0$ elsewhere. But the delta function is defined as:
$$\delta(\omega - \omega_0) = \left\{ \begin{array}{ll} \infty & \quad \omega = \omega_0 \\ 0 & \quad \omega \neq \omega_0 \end{array} \right.$$
So wouldn't this give an infinite value at $\omega = \omega_0$?
If anyone can explain the intuition behind the statement in my textbook, then I would be very grateful!
Answer: The textbook is right. A sine wave in the time domain has infinite energy since it continues over an infinite amount of time. When you transform into the frequency domain, all this energy is concentrated at a single frequency (or two). Hence the value there is indeed infinite.
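One way to see where the infinity comes from (a sketch added for illustration; the bin index and window sizes are arbitrary choices): truncate the cosine to $N$ samples and compute its DFT. The spectral peak equals $N/2$, so it grows without bound as the observation window lengthens; the delta function is the idealized infinite-window limit.

```python
import cmath
import math

def dft(x):
    """Naive O(N^2) DFT -- fine for these small sizes."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def peak_at(N, k0=5):
    """|DFT| at bin k0 for N samples of cos(2*pi*k0*n/N)."""
    x = [math.cos(2 * math.pi * k0 * n / N) for n in range(N)]
    return abs(dft(x)[k0])

# Doubling the window doubles the peak: 32 for N=64, 64 for N=128
print(peak_at(64), peak_at(128))
```

In other words, the finite-window spectrum always has finite peaks; only the idealized eternal cosine concentrates infinite energy at $\pm\omega_0$, which is what the $\pi\delta(\omega \mp \omega_0)$ terms express.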
Of course these are all theoretical considerations. In the real world, ideal sine waves do not exist since they all have a beginning and an end. | {
"domain": "dsp.stackexchange",
"id": 1385,
"tags": "fourier-transform, frequency-spectrum"
} |
Converting inches to feet | Question: This was a homework assignment that I'm now done with - I submitted it as is. However the fact that I needed to use the same code twice bugged me... The double code is:
printf("Enter a distance in inches (0 to quit): ");
scanf("%f",&input);
Is there a better way to do the same thing in my loop instead of the double scanf/printf? It does need to quit the program immediately if a 0 is entered.
#include <stdio.h>
int main()
{
float distance, floatFeet, input;
int feet;
printf("Enter a distance in inches (0 to quit): ");
scanf("%f", &input);
while (input != 0)
{
feet = input/12;
distance = (input-feet*12);
floatFeet = input/12;
printf("%d feet and %f inches or %f feet \n\n", feet, distance, floatFeet);
printf("Enter a distance in inches (0 to quit): ");
scanf("%f", &input);
}
}
Answer: You could move the duplicated lines into a function, something like this should work:
#include <stdio.h>
float get_input(void)
{
float input;
printf("Enter a distance in inches (0 to quit): ");
scanf("%f", &input);
return input;
}
int main()
{
float inches, floatFeet, input;
int feet;
while (input = get_input())
{
feet = input/12;
inches = (input-feet*12);
floatFeet = input/12;
printf("%d feet and %f inches or %f feet \n\n",feet,inches,floatFeet);
}
printf("Goodbye!\n");
} | {
"domain": "codereview.stackexchange",
"id": 3273,
"tags": "c"
} |
Decomposition of the symmetric part of a tensor | Question: The rate of strain tensor is given as $$e_{ij} = \frac{1}{2}\Big[\frac{\partial v_i}{\partial x_j}+ \frac{\partial v_j}{\partial x_i}\Big]$$ where $v_i$ is the $i$th component of the velocity field and $x_i$ is the $i$th component of the position vector. From what I read, I understand that $e_{ij}$ is the rate of strain tensor or the symmetric part of the deformation tensor i.e $\nabla \bf{v}$.
The rate of strain tensor can be decomposed in the following form: $$e_{ij} = [e_{ij} - \frac{1}{3}e_{kk}\delta_{ij}] + \frac{1}{3}e_{kk}\delta_{ij} $$ From what I could gather, $e_{kk}$ can be written as $\nabla \cdot \bf{v}$, which represents the pure volumetric expansion of a fluid element, and the first term is some kind of strain which does not encompass volumetric change. Is this correct, or is there more to it? What is the correct physical interpretation, and why is it useful?
Furthermore, I read that any such symmetric tensor can be decomposed into an "isotropic" part and an "anisotropic" part. I am unable to understand why we can do this and what it represents physically. I would like to have a mathematical as well as a physical understanding of this sort of decomposition. I am very new to tensors and fluid mechanics and would like a complete understanding of this. Thank you for the answers.
Answer: There are many different answers to your question (since usefulness is subjective), but here's what I would consider the "main" one.
Very often we assume fluids are incompressible: that is, that the density $\rho$ is constant, and consequently $\nabla \cdot \mathbf{v} = 0$ from the mass continuity equation. By splitting the strain rate tensor $\bf{D}$ into a sum of an isotropic tensor $\mathbf{P}$ and a trace-less deviatoric tensor $\mathbf{S}$,
$$\mathbf{D} = \mathbf{P} + \mathbf{S} = \frac{1}{3}\text{tr}(\mathbf{D})\mathbf{I} + \left(\mathbf{D} - \frac{1}{3}\text{tr}(\mathbf{D})\mathbf{I}\right) = \frac{1}{3}(\nabla\cdot\mathbf{v})\mathbf{I} + \mathbf{S}$$ we can isolate the source of compressibility effects as $\mathbf{P}$ and ignore it in the case where $\rho$ is constant, simplifying constitutive equations considerably.
This can be useful, for example, to give us a straightforward way to mathematically analyze the behavior of fluids in the regime where they become slightly compressible: we know the effects will show up in the strain rate tensor as an extra diagonal term $\epsilon \mathbf{I}$ where $\epsilon \ll 1$, and we can use perturbation theory to see how compressibility propagates into the mechanics.
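The split itself is mechanical, as this short sketch shows (the tensor entries are made-up values, not from the answer): subtract one third of the trace times the identity, and the remainder is automatically trace-free, while the two parts sum back to the original tensor.

```python
# Split a symmetric 3x3 strain-rate tensor D into an isotropic part P
# and a trace-free deviatoric part S, following D = (tr(D)/3) I + S.
D = [[2.0, 0.5, 0.1],
     [0.5, 1.0, 0.3],
     [0.1, 0.3, 4.0]]   # assumed example values

tr = sum(D[i][i] for i in range(3))
P = [[(tr / 3.0 if i == j else 0.0) for j in range(3)] for i in range(3)]
S = [[D[i][j] - P[i][j] for j in range(3)] for i in range(3)]

trace_S = sum(S[i][i] for i in range(3))
print(trace_S)   # the deviatoric part has zero trace by construction
```

For an incompressible flow $\operatorname{tr}(\mathbf{D}) = \nabla\cdot\mathbf{v} = 0$, so $\mathbf{P}$ vanishes and only the deviatoric part $\mathbf{S}$ survives in the constitutive law.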
From a more general perspective, when formulating constitutive laws involving tensors of arbitrary type in classical mechanics, we seek to formulate such laws so that they satisfy objectivity (Galilean transformation invariance). Such laws can only depend on the invariants of tensors, and as a result it's useful to isolate the terms which depend on each individual invariant, of which the trace is one. | {
"domain": "physics.stackexchange",
"id": 50703,
"tags": "fluid-dynamics, tensor-calculus, stress-energy-momentum-tensor"
} |
Launching a Python program without a #! at the top | Question:
I am currently porting some of our code/infrastructure from Diamondback to Electric. When trying to launch my new Electric stuff, I get a problem launching one of our third-party libraries (dynamixel_controllers). It looks like none of the Python programs in this package have #!/usr/bin/python at the top of the file, and now they will not launch, giving the following error message:
If it is a script, you may be missing a '#!' declaration at the top.
I haven't changed anything related to this, and while I can add the #! to the top of each file, I'd rather not as it is an external library, and I don't want to be running our own fork of it. Is there something new to electric that I must do to launch python files?
[Update]
The specific file I'm attempting to run is the joint_trajectory_action_controller.py file from the dynamixel_controllers package. Should this be launched instead by the controller spawner or something? Here's the associated (failing) launch file:
<launch>
  <!-- Load joint names from yaml file -->
  <rosparam file="$(find cyton_arm_driver)/config/joint_controllers.yaml" command="load"/>
  <node name="cyton_joint_action_controller" pkg="dynamixel_controllers" type="joint_trajectory_action_controller.py" required="true">
    <param name="~controller_namespace" type="str" value=""/>
  </node>
</launch>
Originally posted by John Hoare on ROS Answers with karma: 765 on 2011-09-15
Post score: 0
Original comments
Comment by John Hoare on 2011-09-15:
Please see my update, I am trying to run the joint_trajectory_action_controller.py file.
Comment by arebgun on 2011-09-15:
John, what files are you trying to run that don't have #! line? The two nodes that are designed to run as a script have that line (controller_manager.py and controller_spawner.py), all other modules are not supposed to be run standalone.
Answer:
You are correct, joint_trajectory_action_controller was redesigned in the Electric version of the dynamixel_controllers package. In the Diamondback version it was a standalone node that communicated with joint controllers through ROS topics. In the Electric version it was rewritten to be more like a standard joint controller, to gain direct access to the serial port. The new version is much smoother than the old one because commands (both velocity and position) for all controlled joints are packed into a single packet that gets sent over the serial bus (before, we had to send a separate command for each velocity/position change for each joint).
With that said, the way you start joint_trajectory_action_controller is also different in Electric: now you have to use the controller_spawner.py script to start it up. When the trajectory controller starts up, it expects the joint controllers for all joints it will control to be up and running. I haven't yet written proper documentation for it, but there are a couple of examples for 2 different arms that will help you get started:
Wubble robot's arm: launch file, joint controllers configuration and trajectory controller configuration
Pi robot's arm: launch file, joint controller configuration (also includes trajectory controller configuration at the very bottom)
Basically, when launching joint trajectory controller you need to specify all its dependencies (all joint controllers that it will use to carry out the desired trajectory).
Let me know if you have any problems running this.
Originally posted by arebgun with karma: 2121 on 2011-09-15
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by arebgun on 2011-09-15:
Great! Send some videos, I'd like to see how Cyton arm works :)
Comment by John Hoare on 2011-09-15:
Thanks, following this post and those examples I've successfully ported everything over to the Electric version. Thanks. | {
"domain": "robotics.stackexchange",
"id": 6703,
"tags": "python, roslaunch"
} |
Pythonic Prime Generation | Question: I am using a sieve to return all primes under N:
def prime_gen(n):
    a = [x for x in range(2, n+1)]
    for b in range(2, round(n**.5)):
        c = 2
        while b*c <= n:
            if b*c in a:
                a.remove(b*c)
            c += 1
    return(a)
Is the conditional if b*c in a iterating through the list again? Without using a different algorithm entirely, how do I make this code more efficient and pythonic?
Answer: Use Python's math library
You can either import the whole math library, or just import the modules you need.
from math import sqrt, floor
Use a dictionary instead of a list
Yes, if b*c in a iterates through the list. However, if you use a dictionary, it takes only a single operation to look up any value. This is because Python stores dictionary keys and values as a hash table.
You can use Python's dict comprehension syntax.
d = {key: value for (key, value) in iterable}
The result looks like this:
from math import sqrt, floor

def prime_gen(n):
    a = {k: 0 for k in range(2, n+1)}
    for b in range(2, floor(sqrt(n)) + 1):  # +1 so floor(sqrt(n)) itself is checked (e.g. n = 25)
        c = 2
        while b*c <= n:
            if b*c in a:
                del a[b*c]  # dicts have no .remove(); deleting a key is O(1)
            c += 1
    return list(a) | {
"domain": "codereview.stackexchange",
"id": 28433,
"tags": "python, python-3.x, primes"
} |
Reducing power of ammoniated electrons | Question: I came across a question asking which of the following conversion/reductions can be accomplished using ammoniated electrons.
$\ce{O2}$ to $\ce{O2^{2-}}$
$\ce{K2[Ni(CN)4]}$ to $\ce{K4[Ni(CN)4]}$
Aromatic ring
Non-terminal alkyne
I have been able to find on Wikipedia that $\ce{O2}$ is reduced to $\ce{O^{2-}}$. So I believe 1 should be incorrect. I am unsure of 2 and could not find any literature regarding the same. About 3, I know about Birch Reduction. About 4 I don't know.
The answer is 1, 2, 3, 4.
Answer: The reducing power of alkali metals such as $\ce{Na}$ and $\ce{K}$ in liquid $\ce{NH3}$ is due to the solvated electrons ($e^-$), which can reduce certain compounds into unusual oxidation states. One such example is the reduction of oxygen ($\ce{O2}$). The solvated electrons can reduce $\ce{O2}$ to $\ce{O2^{.-}}$ (superoxide ion) first and then to $\ce{O2^2-}$ (peroxide ion) (Ref.1). However, when a solution of $\ce{Na}$ in liquid $\ce{NH3}$ is treated with oxygen, the peroxide ion is generally formed (Ref.2). When a solution of $\ce{Li}$ in liquid $\ce{NH3}$ is rapidly oxidized with $\ce{O2}$ at $\pu{-78 ^\circ C}$, a bright lemon-yellow solution is formed (which is believed to be a solution of $\ce{LiO2}$). When the solution is warmed towards $\pu{-33 ^\circ C}$, the yellow color fades and a white suspension forms (which is believed to be solid $\ce{Li2O2}$). The color change is not reversible (pp. 131-132, Ref.2).
Another example is conversion of $\ce{Ni^{II}}$ to $\ce{Ni^{I}}$ and $\ce{Ni^{II}}$ to $\ce{Ni^{0}}$. Potassium tetracyanonickelate(II) ($\ce{K2[Ni(CN)4]}$) has been reduced by $\ce{K}$ in liquid $\ce{NH3}$ at $\pu{-33 ^\circ C}$ to the red potassium tetracyanonickelate(I) $(\ce{K3[Ni(CN)4]})$ (Ref.3), which is slowly reduced further to yellow potassium tetracyanonickelate(0) $(\ce{K4[Ni(CN)4]})$ at $\pu{0 ^\circ C}$ (Ref.4).
It is a well-known fact that solutions of alkali metals in liquid $\ce{NH3}$ reduce several organic compounds, including alkynes to trans-alkenes and aromatics to 1,4-cyclohexadienes (Birch reduction; e.g., reduction of benzene to cyclohexa-1,4-diene).
Therefore, correct answer to the question is all of (1)-(4).
References:
William H. Schechter, Harry H. Sisler, Jacob Kleinberg, “The Absorption of Oxygen by Sodium in Liquid Ammonia: Evidence for the Existence of Sodium Superoxide,” J. Am. Chem. Soc. 1948, 70(1), 267-269 (https://doi.org/10.1021/ja01181a083).
Nils-Gösta Vannerberg, “Chapter 3: Peroxides, Superoxides, and Ozonides of the metals of groups Ia, IIa, and IIb,” In Progress in Inorganic Chemistry; Volume 8; F. Albert Cotton, Ed.; John Wiley & Sons, Inc.: Easton, PA, 1962, pp. 125-198 (ISBN:978-04-7017-6733).
John W. Eastes, Wayland M. Burgess, “A Study of the Products Obtained by the Reducing Action of Metals upon Salts in Liquid Ammonia Solutions. VII. The Reduction of Complex Nickel Cyanides: Mono-valent Nickel,” J. Am. Chem. Soc. 1942, 64(5), 1187-1189 (https://doi.org/10.1021/ja01257a053).
George W. Watt, James L. Hall, Gregory R. Choppin, Philip S. Gentile, “Mechanism of the Reduction of Potassium Tetracyanonickelate(II) and Potassium Hexacyanocobaltate(III) with Potassium in Liquid Ammonia,” J. Am. Chem. Soc. 1954, 76(2), 373-375 (https://doi.org/10.1021/ja01631a016).
Additional reference: V. Gutmann, "Chapter III: Coordination Chemistry in Proton-containing Donor Solvents," In Coordination Chemistry in Non-Aqueous Solutions; Springer-Verlag: Wien, Austria, 1968, pp.35-58 (ISBN-13: 978-3-7091-8196-6). | {
"domain": "chemistry.stackexchange",
"id": 13721,
"tags": "inorganic-chemistry, organic-reduction, group-theory"
} |
Nonuniform circular motion | Question: A ball rocks around an arc. In the following illustration, the ball reaches the end of the arc (its velocity magnitude is zero at that particular moment).
Now, I want to know which forces are acting on that ball at that particular moment. We have the tension force $\vec T_2$ acting on the ball, which is the centripetal force. We also have gravity, which consists of two components: radial $mg \cos\alpha$ and tangential $mg \sin \alpha$. In the radial axis our net force $\vec T_2 - mg \cos \alpha = mv^2 / R $ is zero because $v = 0$. However, in the tangential axis our net force, $mg \sin \alpha$, is not zero. My question is: how can this be if the ball at that particular moment is not moving, having reached the edge of its trajectory? If it doesn't move then there should be some opposite force acting on it. My intuition says that the tangential force $mg \sin \alpha$ forces the body to slow down in that direction, so it slowly "cancels" the force which caused the body to move initially. But how can I describe this with formulas and/or illustrate it from the point of view of an inertial frame (i.e., the Earth)?
Answer: You are right that the force balance is non-zero and that the pendulum bob is not moving, but this does not mean that the bob is not accelerating. So, at the moment the bob is momentarily at rest at the end of the arc, its velocity is zero but its tangential acceleration is at its maximum, directed back toward the centre-line of the oscillation, where the bob will reach its maximum speed.
See here for more information and a nice animation of this phenomenon.
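The point is easy to confirm numerically: integrating $\ddot\theta = -(g/L)\sin\theta$ from rest shows that at the turning point the speed is zero while the tangential acceleration is at its largest, and that the maximum speed occurs at the centre-line (a Python sketch; all parameter values are arbitrary):

```python
from math import sin

# Euler-Cromer integration of a pendulum, theta'' = -(g/L) sin(theta),
# released from rest at theta0 = 0.5 rad.
g, L, dt = 9.81, 1.0, 1e-4
theta, omega = 0.5, 0.0                 # at the turning point: v = 0 ...

alpha_at_rest = -(g / L) * sin(theta)   # ... but the acceleration is not
theta_at_max, max_speed = theta, 0.0
for _ in range(20000):                  # roughly one full period
    omega += -(g / L) * sin(theta) * dt
    theta += omega * dt
    if abs(omega) > max_speed:
        max_speed, theta_at_max = abs(omega), theta
# The bob is fastest where it crosses the centre-line (theta near 0),
# exactly where the net tangential force mg*sin(theta) vanishes.
```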
I hope this helps. | {
"domain": "physics.stackexchange",
"id": 6829,
"tags": "newtonian-mechanics"
} |
How to put this metric in matrix form? | Question: Given the metric
$$ds^{2}=dt^{2}-2 dr dt-r^{2}(d\theta^{2}+\sin^{2}\theta \,d\phi^{2})$$
How to put this metric in matrix form?
I ask this because the metric is obviously not diagonal, so what will the components $g_{rr}$, $g_{tr}$, $g_{rt}$ be?
Answer: $$\left(\begin{matrix}
1 & -1 & 0 & 0 \\
-1 & 0 & 0 & 0 \\
0 & 0 & -r^2 & 0 \\
0 & 0 & 0 & -r^2\sin^2\theta
\end{matrix}\right)$$ | {
"domain": "physics.stackexchange",
"id": 33038,
"tags": "homework-and-exercises, general-relativity, metric-tensor"
} |
Are there super-maxwell's equations? | Question: Suppose you are given a configuration of electric charges and currents. How do you go about solving it? First you find the fields using Maxwell's equations. Then you solve for the forces and find the accelerations.
This changes the configuration and you have to go through the above mentioned procedure from start once again and then again. Is there some way in which you could find the entire future of the configuration all in one go? Some super-maxwell's equations where if you input the time it will tell you the future like all scientific theories are supposed to do?
Answer: It depends on what you mean by solving for the future state. Even in Newtonian gravity, the three body problem requires approximations and with many bodies there really is no shortcut other than to calculate numerically with a simulation.
The equations of classical electrodynamics can become messy even faster than Newtonian gravity, as there is radiation. But changes in the fields propagate at a finite velocity, so you can use the equations to locally update the fields and the forces on particles, take a small step in time, and repeat to run a simulation.
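The update-and-step loop just described can be sketched in miniature (a toy, non-relativistic Python example with a prescribed uniform field; a real EM code would also update the fields from the charges and currents each step):

```python
# Toy version of the "update locally, take a small time step, repeat" loop:
# a single non-relativistic charge in a fixed uniform E field.
q, m, E = 1.0, 2.0, 3.0        # charge, mass, field strength (arbitrary units)
dt, steps = 1e-4, 10000        # total simulated time: 1.0
x, v = 0.0, 0.0
for _ in range(steps):
    a = q * E / m              # force from the field as it is *now*
    v += a * dt                # small step in time...
    x += v * dt                # ...and repeat
# Analytically v = (q*E/m)*t = 1.5 at t = 1.0; the stepped result agrees.
```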
So if you were hoping for an awesome method that can give an exact formula solving the field equations and giving charge density changes, then no, we have no such thing. The closest we have to a "super solver" is a computer simulation.
Now, I should mention that there are situations in classical electrodynamics where the "feed-back" between charges and fields can run awry. In Griffiths's EM textbook, he argues that point particles in classical electrodynamics lead to difficulties where accelerating the particle can make it interact with its own radiation in ways that are not fully resolvable in classical E&M. So even a single particle and fields can run into trouble.
It's not always possible to avoid approximations. | {
"domain": "physics.stackexchange",
"id": 38812,
"tags": "electromagnetism, classical-electrodynamics, maxwell-equations"
} |
Bogosort vs bogobogosort | Question: A question on Stack Overflow had me look up the article on bogosort on Wikipedia.
There they describe the bogosort algorithm and the bogobogosort algorithm. They say, about this last algorithm, that:
bogobogosort was designed not to succeed before the heat death of the universe on any sizable list.
I wrote a program to compare these 2 algorithms and got somewhat conflicting results: bogosort performs worse than bogobogosort for arrays up to 12 elements.
I am more inclined to believe my code is wrong than Wikipedia. Of course it can be that 12 is not a sizable enough list and the difference starts being noticeable with larger arrays.
#include <errno.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
void quit(const char *msg) __attribute__((noreturn));
void datainit(int *data, size_t n);
void acopy(int *dst, const int *src, size_t n);
unsigned long bogosort(int *data, size_t n);
unsigned long bogobogosort(int *data, size_t n);
double timedelta(const struct timespec *b, const struct timespec *a);
void shuffle(int *data, size_t n);
int sorted(int *data, size_t n);
int randto(int n);
int main(int argc, char **argv) {
    int data[20], data2[20];
    unsigned long n;
    unsigned long s[2];
    struct timespec t[3];
    char *err;
    if (argc != 2) quit("Specify number of elements.");
    errno = 0;
    n = strtoul(argv[1], &err, 10);
    if (errno || n < 2 || n > 20) quit("number between 2 and 20");
    srand((unsigned)time(0));
    datainit(data2, n);
    clock_gettime(CLOCK_MONOTONIC, t + 0);
    acopy(data, data2, n);
    s[0] = bogosort(data, n);
    clock_gettime(CLOCK_MONOTONIC, t + 1);
    acopy(data, data2, n);
    s[1] = bogobogosort(data, n);
    clock_gettime(CLOCK_MONOTONIC, t + 2);
    printf(" bogosort shuffled %12lu cards in %f seconds.\n", s[0], timedelta(t + 1, t + 0));
    printf("bogobogosort shuffled %12lu cards in %f seconds.\n", s[1], timedelta(t + 2, t + 1));
    return 0;
}

void quit(const char *msg) {
    if (msg) fprintf(stderr, "%s\n", msg);
    exit(EXIT_FAILURE);
}

void datainit(int *data, size_t n) {
    for (size_t i = 0; i < n; i++) data[i] = (int)i;
    shuffle(data, n);
}

void acopy(int *dst, const int *src, size_t n) {
    for (size_t i = 0; i < n; i++) dst[i] = src[i];
}

unsigned long bogosort(int *data, size_t n) {
    unsigned long c = 0;
    while (!sorted(data, n)) {
        c += n;
        shuffle(data, n);
    }
    return c;
}

unsigned long bogobogosort(int *data, size_t n) {
    unsigned long c = 0;
    for (size_t b = 2; b <= n; b++) {
        while (!sorted(data, b)) {
            c += b;
            shuffle(data, b);
            b = 2;
        }
    }
    return c;
}

double timedelta(const struct timespec *b, const struct timespec *a) {
    double aa = a->tv_sec + (a->tv_nsec / 1000000000.0);
    double bb = b->tv_sec + (b->tv_nsec / 1000000000.0);
    return bb - aa;
}

void shuffle(int *data, size_t n) {
    if (n == 1) return;
    int p = randto((int)n);
    int tmp = data[p];
    data[p] = data[n - 1];
    data[n - 1] = tmp;
    shuffle(data, n - 1);
}

int sorted(int *data, size_t n) {
    int r = 1;
    for (size_t i = 1; i < n; i++) {
        if (data[i - 1] > data[i]) {
            r = 0;
            break;
        }
    }
    return r;
}

int randto(int n) {
    int mx = (RAND_MAX / n) * n;
    int p;
    do p = rand(); while (p >= mx);
    return p % n;
}
And this is a possible result for an array with 10 elements
% ./a.out 10
bogosort shuffled 13816640 cards in 0.400548 seconds.
bogobogosort shuffled 1102233 cards in 0.027740 seconds.
So, unexpectedly bogosort shuffled more cards than bogobogosort and took more time doing it.
Is my bogosort() wrongly implemented?
Is my bogobogosort() wrongly implemented?
Is my shuffle() wrongly implemented? (but it would influence both functions)
Is my randto() wrongly implemented? I believe there's no bias.
Answer: Not bogobogosort
The way you implemented your bogobogosort, it is actually faster than bogosort because of a few reasons:
After you successfully bogosort k elements, you immediately check whether k+1 elements are sorted. This means that if on a previous shuffle, you happened to shuffle m elements all in order, then after you sort the first two elements you will immediately reach m.
For example, suppose you reach 5 cards and then fail. You shuffle 5 cards and then start over at 2. If those 5 cards happen to be in sorted order, you will immediately reach 6 as soon as you put the first 2 cards in order.
Because of point #1, your bogobogosort is actually faster than the bogosort because it only needs to sort n-1 cards instead of n. After it sorts n-1 cards, it checks the order of n cards and may fail. But in failing, it reshuffles n cards so that each time it sorts n-1 cards it has a 1/n chance of eventually succeeding. So the total number of cards shuffled is on the order of (n-1) * (n-1)! * n which simplifies to (n-1) * n!, compared to the bogosort which shuffles n * n! cards.
I believe that the same principle applies at each step, so the time is even less than (n-1) * n!. I'm not sure of the exact math but from running your program it appears that the bogobogosort runs in approximately the same time as the bogosort with one fewer cards. I.e. Your_bogobogosort(n) = bogosort(n-1).
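The \$n \cdot n!\$ estimate for plain bogosort is easy to sanity-check with a quick Monte Carlo (a Python sketch, independent of the C code; the seed and trial count are arbitrary):

```python
import random

def bogosort_cards(data):
    """Cards shuffled by plain bogosort until the list happens to be sorted."""
    cards = 0
    while data != sorted(data):
        random.shuffle(data)
        cards += len(data)
    return cards

random.seed(12345)                      # fixed seed for reproducibility
n, trials = 4, 1500
mean = sum(bogosort_cards(random.sample(range(n), n))
           for _ in range(trials)) / trials
# Mean should come out near n * n! = 96 (more precisely (1 - 1/n!) * n * n!,
# since a deck that happens to start sorted needs no shuffles at all).
```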
A proper bogobogosort
I rewrote your bogobogosort function to be like this:
unsigned long bogobogosort(int *data, size_t n) {
    unsigned long c = 0;
    size_t b = 2;
    while (1) {
        if (sorted(data, b)) {
            if (b == n)
                break;
            b++;
        } else {
            b = 2;
        }
        c += b;
        shuffle(data, b);
    }
    return c;
}
The key difference here is that after each success where it increments b, it reshuffles the deck to prevent point #1 above from happening. With this change, I get output like this:
% ./a.out 6
bogosort shuffled 1044 cards in 0.000013 seconds.
bogobogosort shuffled 54464568 cards in 0.500339 seconds. | {
"domain": "codereview.stackexchange",
"id": 14823,
"tags": "algorithm, c, random, sorting"
} |
MTW's Gravitation, Box 11.2, etc.: Relative acceleration depends only on $\mathbf{u}$ and $\mathbf{n}$ at the fiducial point? | Question: My question relates to MTW's Gravitation, Box 11.2 (copied below) and the discussion on page 271. This is my paraphrase of the gist of box 11.2:
Consider a family $\Lambda$ of timelike geodesics affinely parameterized
by $\lambda$ and selected by the parameter $n$. The fiducial geodesic
is designated by the selector value $n$. The set of ordered pairs $\left\langle \lambda+\Delta\lambda,n+\Delta n\right\rangle $ may serve as coordinates near $\mathscr{M}.$ In these coordinates,
$\mathscr{M}$ has coordinates $\left\langle \lambda,n\right\rangle $.
Define the vectors $\mathbf{u}=\frac{\partial}{\partial\lambda}$
and $\mathbf{n}=\frac{\partial}{\partial n}$. The separation between
the fiducial point $\left\langle \lambda,n\right\rangle $ and the
point with the same $\lambda$ value on the geodesic designated by
$n+\Delta n$ is $\vec{n}=\Delta n\mathbf{n}$. The separation $\vec{n}$
is then parallel transported along the geodesic $n$ by $\Delta\lambda,$
so that the tail of its image is at $\mathscr{N}$ and the tip is
at $\mathscr{B}$. A similar image is produced by parallel transport
of $\vec{n}$ along the geodesic $n$ by $-\Delta\lambda,$ with tail
and tip designated $\mathscr{L}$ and $\mathscr{A}$ respectively.
Beginning at the point $\mathscr{Q}$ which has coordinates $\left\langle \lambda,n+\Delta n\right\rangle$
the point $\mathscr{R}$ is determined by a parameter change $\Delta\lambda$
along the geodesic $n+\Delta n.$ The point $\mathscr{P}$ is determined
by a change $-\Delta\lambda$ along the same $n+\Delta n$ geodesic.
The points $\mathscr{A}$ and $\mathscr{B}$ are determined by parallel
transporting $\vec{n}=\Delta n\mathbf{n}$ along the geodesic $n$
by $-\Delta\lambda$ and $\Delta\lambda$ respectively. The vectors
$\mathscr{B}\mathscr{R}$ and $\mathscr{A}\mathscr{P}$ are then parallel
transported (along unspecified routes) to $\mathscr{Q}$ where they
are summed to produce
$$
\delta_{2}=\mathscr{B}\mathscr{R}+\mathscr{A}\mathscr{P}=\left(\Delta\lambda\right)^{2}\Delta n\left(\nabla_{\mathbf{u}}\nabla_{\mathbf{u}}\mathbf{n}\right),
$$
where $\nabla_{\mathbf{u}}\nabla_{\mathbf{u}}\mathbf{n}$ is defined
to be the relative-acceleration vector.
On page 271 we find:
[Examine Box 11.4] Thereby arrive at the remarkable equation (11.6)
$$
\nabla_{\mathbf{u}}\nabla_{\mathbf{u}}\mathbf{n}+\left[\nabla_{\mathbf{n}}\nabla_{\mathbf{u}}\right]\mathbf{u}=0.
$$
This equation is remarkable, because at first sight it seems crazy.
The term $\left[\nabla_{\mathbf{n}}\nabla_{\mathbf{u}}\right]\mathbf{u}$
involves second derivatives of $\mathbf{u}$ and first derivatives
of $\nabla_{\mathbf{n}}:$
$$
\left[\nabla_{\mathbf{n}}\nabla_{\mathbf{u}}\right]\mathbf{u}=\nabla_{\mathbf{n}}\nabla_{\mathbf{u}}\mathbf{u}-\nabla_{\mathbf{u}}\nabla_{\mathbf{n}}\mathbf{u}.
$$
It thus must depend on how $\mathbf{u}$ and $\mathbf{n}$vary from
point to point. But the relative acceleration it produces, $\nabla_{\mathbf{u}}\nabla_{\mathbf{u}}\mathbf{n},$
is known to depend only on the values of $\mathbf{u}$ and $\mathbf{n}$
at the fiducial point, not on how $\mathbf{u}$ and $\mathbf{n}$
vary (see box 11.2 F).
I do not understand what this means. The relative acceleration is
derived by multiple parallel transport steps, and taking differences
of vectors at different locations. That is, its establishment involves
more than simply the values of $\mathbf{u}$ and $\mathbf{n}$ at
one point. Therefore, if we interpret "the fiducial point" to
mean, $\mathscr{Q}$ alone, the statement doesn't make sense to me,
at all.
Even if we allow "the fiducial point" to mean any arbitrary point
on the fiducial geodesic, the derivation still involves points not
on that geodesic, and not determined solely by parallel transport
along the fiducial geodesic.
What does it mean to say "$\nabla_{\mathbf{u}}\nabla_{\mathbf{u}}\mathbf{n},$
is known to depend only on the values of $\mathbf{u}$ and $\mathbf{n}$
at the fiducial point, not on how $\mathbf{u}$ and $\mathbf{n}$
vary"?
PS: I believe the essential conclusion is that relative acceleration can be expressed as the operation of a multilinear form (tensor field $\mathbf{\text{Riemann}}$) on $\mathbf{u}$ and $\mathbf{n}$. What I'm not getting is the reasoning leading to that conclusion.
The situation seems similar to that of the differential of a multivariable mapping being a linear mapping associated with its derivative matrix.
Answer: Just means it's linear in u and n.
Edit
Your relative acceleration vector is just geodesic deviation. It is the physical manifestation of
the Riemann tensor:
$R(\mathbf{u}, \mathbf{n})\mathbf{u}$, which is a vector-valued 2-form. It tells you the deviation of a neighboring geodesic, separated by $\mathbf{n}$ and with tangent $\mathbf{u}$, at each point $p(t)$, where $t$ is the affine parameter corresponding to the proper time. If and only if the Riemann tensor is nonzero will inertial geodesics accelerate with respect to each other (the tidal effect).
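This pointwise dependence is just function-linearity ("tensoriality") of the curvature operator in each slot; a standard computation (independent of sign conventions) shows, for any scalar function $f$,
$$\begin{aligned}
R(f\mathbf{u},\mathbf{n})\mathbf{u}
&= \nabla_{f\mathbf{u}}\nabla_{\mathbf{n}}\mathbf{u} - \nabla_{\mathbf{n}}\nabla_{f\mathbf{u}}\mathbf{u} - \nabla_{[f\mathbf{u},\mathbf{n}]}\mathbf{u} \\
&= f\,\nabla_{\mathbf{u}}\nabla_{\mathbf{n}}\mathbf{u} - (\mathbf{n}f)\,\nabla_{\mathbf{u}}\mathbf{u} - f\,\nabla_{\mathbf{n}}\nabla_{\mathbf{u}}\mathbf{u} - f\,\nabla_{[\mathbf{u},\mathbf{n}]}\mathbf{u} + (\mathbf{n}f)\,\nabla_{\mathbf{u}}\mathbf{u} \\
&= f\,R(\mathbf{u},\mathbf{n})\mathbf{u},
\end{aligned}$$
using the Leibniz rule and $[f\mathbf{u},\mathbf{n}] = f[\mathbf{u},\mathbf{n}] - (\mathbf{n}f)\mathbf{u}$. Every derivative of $f$ cancels, so the value of $R(\mathbf{u},\mathbf{n})\mathbf{u}$ at a point depends only on the values of $\mathbf{u}$ and $\mathbf{n}$ there: exactly the sense in which the relative acceleration depends only on the fiducial-point data.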
So you just want the deviation at a given point, not the deviation of the deviation. That's why the tensor contains no derivatives of $\mathbf{u}$ and $\mathbf{n}$: $R(\lambda\mathbf{u}, \mathbf{n})\mathbf{u} = \lambda R(\mathbf{u}, \mathbf{n})\mathbf{u}$, where $\lambda$ is a scalar function. | {
"domain": "physics.stackexchange",
"id": 86892,
"tags": "general-relativity, differential-geometry, curvature, vector-fields"
} |
disabling kinect accelerometer | Question:
Hi ROS folks,
I wonder if there is any way to shut down the Kinect accelerometer. I need to mount the Kinect at an angle and still have the full tilting range. The accelerometer prevents the Kinect from tilting up even though it is looking down! I looked at kinect_aux; it has all of the controls except this one.
Any advice?
Originally posted by Amin on ROS Answers with karma: 93 on 2012-04-24
Post score: 1
Answer:
Hi Again,
After some tests, I realized that even using the trick above you are not able to get more than -60 degrees when tilting the Kinect. In other words, the Kinect is able to tilt down -30 degrees itself (according to the kinect_aux stack) and at the same time you are also able to mount the Kinect stand at about -30 degrees (more than this would have no effect, since the accelerometer does not allow the Kinect to go down further even though it could!), so both of them together give 60 degrees.
My goal was to mount the Kinect on the ceiling of my room in such a way that the FOV of the Kinect would be vertical against the ground. Doing that, with 60 degrees (30 tilting + 30 mounting angle) and a half-FOV of (43/2 ~ 22) I could get to -82 and not -90!
Any advice on how to work around the accelerometer is appreciated.
Originally posted by Amin with karma: 93 on 2012-05-02
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 9121,
"tags": "kinect"
} |
Returning the common divisors of two integers | Question: I have written this function that returns the common divisors of two integers. When called recursively 10 times with big integers ranging in 100000000, it takes over 5 seconds. How can I improve it so that it is faster (less than 5 seconds)?
commonDivisors :: Int -> Int -> [Int]
commonDivisors x y = if x > y then divisors y x else divisors x y
  where divisors :: Int -> Int -> [Int]
        divisors z n = filter (\x -> n `mod` x == 0 && z `mod` x == 0) [1..n]
Answer: I don't know your application, but it seems rather uncommon to me that one would want a list of all common divisors.
To optimize your code, you could use Euclid's algorithm to find the greatest common divisor of the two numbers; this is much faster than iterating through all numbers less than your arguments. Even more, the common divisors of your two numbers happen to be the divisors of the greatest common divisor. Thus, the two-step process of finding the greatest common divisor and then finding all divisors of that greatest common divisor will be faster than your current method, since it does not usually have to consider all numbers up to the smaller factor.
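The two-step process just described can be sketched as follows (Python for concision, since the structure carries over to Haskell directly; the function name is illustrative, and divisors of the gcd are collected in pairs up to its square root):

```python
from math import gcd, isqrt

def common_divisors(x, y):
    """Common divisors of x and y = divisors of gcd(x, y)."""
    g = gcd(x, y)
    # Each divisor d <= sqrt(g) pairs with g // d >= sqrt(g).
    small = [d for d in range(1, isqrt(g) + 1) if g % d == 0]
    large = [g // d for d in reversed(small) if d != g // d]
    return small + large

# common_divisors(12, 18) -> [1, 2, 3, 6]
```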
An optimization for finding factors of a number \$n\$ would be only to iterate up to \$\sqrt n\$, adding to the list any factor \$k\$ you find as well as \$n / k\$. A separate idea would be to compute the prime factorization of \$n\$ and finding the factors from that. | {
"domain": "codereview.stackexchange",
"id": 18252,
"tags": "performance, haskell"
} |
What would a red giant Sun look like from Proxima b? | Question: As of now, Proxima b is the only confirmed planet (or dwarf planet if it didn't clear its orbit - sorry, couldn't resist) around Proxima Centauri, the nearest-known star to the Sun about 4.2 ly away. From that distance, the Sun looks just like an average star in the night sky. Now if the Sun became a red giant, at 256 times its current diameter (if I recall correctly), what would it look like from Proxima b's night sky? On a good telescope, Betelgeuse is recognizable as more than a dot, its surface is visible, that would surely be the case for a red giant Sun too. What would it take to recognize the Sun's surface from the Proxima system? Would the Sun perhaps even cast a visible shadow?
Answer: The sun, when it becomes a red giant, will be more than 100 times more luminous than it is now, but that only gives it an absolute magnitude of about -1, and an apparent magnitude at 4 light-years of about -5: similar to Venus. It is possible for this to cast a very faint shadow, but you need a very dark place to see it.
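The figures are easy to reproduce from the distance-modulus relation (a Python sketch; $M \approx -1$ and $d \approx 1.30$ pc, i.e. 4.24 ly, are the values assumed above):

```python
from math import log10

def apparent_magnitude(M_abs, d_parsec):
    """Distance-modulus relation: m = M + 5*log10(d / 10 pc)."""
    return M_abs + 5 * log10(d_parsec / 10.0)

# Red-giant Sun (absolute magnitude ~ -1) seen from ~1.30 pc:
m_app = apparent_magnitude(-1.0, 1.30)   # about -5.4, Venus-like
```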
The sun will still be much too small to appear as a disc: it will subtend only about 1 arcsecond. (And Betelgeuse is not visible as a disc; even in a "good telescope" you need special techniques, such as interferometry, to see its surface.)
But it would be bright! Unfortunately(!) the sun won't become a red giant for billions of years, and by the time it does, Proxima will have moved away and could be on the other side of the galaxy, and the sun is unlikely to be visible at all. | {
"domain": "astronomy.stackexchange",
"id": 4944,
"tags": "observational-astronomy, the-sun, telescope, red-giant"
} |
Why will tissue from identical twin be rejected in some cases? | Question:
Also, transplants from one identical twin to another are almost never rejected.
[ Source: Medline Plus ]
Why can a tissue from an identical twin be rejected ?
Answer: MattDMo, I think you're right on. Also remember that the thymus is where T-cell recombination and maturation occur during fetal development, and rates are highest from the neonatal period through pre-adolescent childhood, long after sharing a womb.
The immune system keeps developing during childhood. It's the immune system that's responsible for tissue rejection in transplant.
In our immune system, T cells attack and kill foreign cells in the body. They're supposed to ignore cells that carry molecules belonging to their own body.
T cells go through selection in the thymus during these years to eliminate the T cells that attack their own molecules; the rest of the T cells, which recognize foreign molecules, are allowed to live.
Next, consider that environmentally-induced epigenetics influence the expression of proteins and other products. Even twins have different experiences and exposures, so there may be a difference in protein components in their bodies.
Also, this is conjecture, but the same concepts involved in autoimmune disorders (which are another topic entirely) might also be involved in some of these situations as well. | {
"domain": "biology.stackexchange",
"id": 2192,
"tags": "human-biology, immunology, tissue, transplantation, twins"
} |
What do the terms "offline" and "online" refer to in the field of high energy physics data analysis? | Question: The title says it - I've encountered these terms several times but have never found an explanation anywhere. An example of use is this ATLAS note.
If I may hazard a guess: Data rate is high at the LHC so only a fraction can be analysed in real time (online) while some is stored for later analysis (offline).
Answer: Online data analysis is cursory analysis done as the data is collected. It is often used for the purpose of selecting which events to save to disk or tape to be analyzed later (an event "filter"). Given that the current CERN experiments will be taking, in the next run, data at rates exceeding a terabyte per second, this notion is essential.
In fact, the online analysis stream is performed in several steps, each discarding a high percentage of the event data. The trigger itself can be considered a hardware implementation of the level-0 event selection. CDF and D0 had three levels of online (plus the trigger); the third level also assigned events to data streams, so that, for example, potential top candidates were one stream.
The offline analysis is done afterward, on the stored data, on "farms" of computers, which in the days of CDF and D0 were mainly in the Feynman Computing Center at Fermilab, but today are tens of thousands of CPUs distributed across universities and institutions around the world. The term "offline" is generally reserved for the process of event reconstruction, where the raw data from the detectors are processed to determine what happened in the event, e.g., "19 electrons came out from the primary vertex with these momenta; there are secondary vertices at this point and that point, a hadron 'jet' of this energy and momentum came from this secondary vertex..." and so forth. This offline reconstruction is in principle done once and for all for each event; in practice, reconstruction is run a couple of times as techniques are refined based on experience with the events.
The results of event reconstruction are saved as what in Fermilab days were known as DSTs (data summary tapes) and are greatly compactified relative to the original data. Nowadays this summary data is kept on disk, distributed around the world. Then there are many projects of offline analysis (using the DSTs) coded or designed by many experimenters, to extract the actual physics from those summarized events. | {
"domain": "physics.stackexchange",
"id": 33775,
"tags": "particle-physics, experimental-physics, terminology, data-analysis"
} |
Base or foundation? | Question: Please help, what is the correct term for:
The bottom/foot part of building or structure which carries loads from the building/structure and transfers the loads to the soil/earth/ground beneath it?
The soil massive to which said load from the structure is transferred.
In my native tongue, the two meanings have different terms; in dictionaries, however, both are named base and/or foundation. The question is how to distinguish the part of the structure from the soil.
Answer: The part of the building that is in contact with the soil is the foundation:
the lowest division of a building, wall, or the like, usually of masonry and partly or wholly below the surface of the ground.
As you'll see from the same link, that might also be used for the ground itself:
the natural or prepared ground or base on which some structure rests.
However, I would suggest that definition 2 (referring to the ground as the foundation) is not in use as a technical engineering definition.
I would suggest using "foundation" or "base" for the part of the structure. I am struggling to think of a good term for the soil/ground on which the foundation sits. You might need to stick to simple terms such as "supporting ground". Possibly "bearing strata" or "founding strata" might be better (though maybe not applicable in all situations). | {
"domain": "engineering.stackexchange",
"id": 1275,
"tags": "structural-engineering, civil-engineering, structures, geotechnical-engineering, terminology"
} |
What is viscosity in terms of chemistry? | Question: In physics, viscosity is force per unit area multiplied by time, basically pascal times second (Pa·s). We could say that it is a measure of a fluid's resistance to gradual deformation by shear stress or tensile stress.
But what does it mean in the chemistry world?
I believe it is related to the distance between molecules in a solution. When molecules are farther apart then viscosity is lower than when molecules are closer.
Could you tell me if my concept of viscosity is on the right track or far off? Could you also define viscosity in terms of chemistry?
Answer: More or less on the right track… I will try to put down some brief concepts of viscosity in terms of pharmaceutics, which is in a sense another branch of chemistry, where viscosity is very much applicable in drug design.
Viscosity is an expression of the resistance to flow of a system under an applied stress. The more viscous a liquid is, the greater is the applied force required to make it flow at a particular rate.
Viscosity is defined in terms of the force required to move one plane
surface past another under specified conditions when the space between
is filled by the liquid in question. More simply, it can be considered
as a relative property, with water as the reference material.
Viscosity implies that the liquid flows even under the smallest stress and does not return to its original shape or form once the stress is removed.
Viscous deformation, i.e. viscous flow, occurs in accordance with Newton's law,
$$\sigma = \eta \dot{\gamma}$$
where the applied stress $\sigma$ results in flow with a velocity gradient (rate of shear) $\dot{\gamma}$. The proportionality constant $\eta$ is termed viscosity, while its reciprocal is called fluidity.
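Newton's law above can be sketched numerically; a minimal example (the water viscosity figure is the commonly quoted approximate value at 20 °C, used here for illustration only):

```python
def shear_stress(eta_pa_s, shear_rate_per_s):
    # Newton's law of viscosity: sigma = eta * gamma_dot
    return eta_pa_s * shear_rate_per_s

# water at ~20 C has eta of roughly 1.0e-3 Pa*s; at a shear rate of 100 1/s:
sigma = shear_stress(1.0e-3, 100.0)
print(sigma)  # ~0.1 Pa
```

The linearity in $\dot{\gamma}$ is what makes a fluid "Newtonian"; many pharmaceutical systems mentioned later are non-Newtonian and need a more general model.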
Viscosity has also been
described as the internal friction in the fluid as it corresponds to the resistance of the fluid to the relative motion of adjacent layers
of liquid.
The viscosity of simple liquids (i.e., pure liquids consisting of small molecules, and solutions where solute and solvent are small molecules) depends only on composition, temperature, and pressure. It increases moderately with increasing pressure and markedly with decreasing temperature.
There is quite a lot to discuss on viscosity in depth but these are just shallow concepts to understanding the term.
References
Martin’s Physical Pharmacy and Pharmaceutical Sciences : Rheology
Remington: Essentials of Pharmaceutics: Rheology
Ansel’s Pharmaceutical Dosage Forms and Drug Delivery Systems : Special Solutions and Suspensions
Aulton's Pharmaceutics: The Design and Manufacture of Medicines, 4th ed.: Viscosity, Rheology, and Flow of Fluids | {
"domain": "chemistry.stackexchange",
"id": 9028,
"tags": "physical-chemistry, viscosity"
} |
Preparing Position Time Graph and Velocity Time Graph | Question: So, I'm a little confused on how this would actually work.
For a homework question, I needed to analyze a short simulation where I collect data to prepare a velocity-time plot and a position-time plot. I'm a little confused about what I should do, particularly for the velocity-time plot part.
The Data I collected was:
Seconds (s) --- Fall (m)
0 --- 0
1 --- 4.9
2 --- 19.4
3 --- 44.1
4 --- 79.2
5 --- 123
6 --- 176
7 --- 240
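(An illustrative aside, not part of the original exchange.) The fall data above can be differenced row by row to get the mean velocity over each one-second interval, plotted at the interval midpoint:

```python
t = [0, 1, 2, 3, 4, 5, 6, 7]                    # seconds
s = [0, 4.9, 19.4, 44.1, 79.2, 123, 176, 240]   # metres fallen

# mean velocity over each 1 s interval, plotted at the interval midpoint
mid_t = [(a + b) / 2 for a, b in zip(t, t[1:])]
v = [(s2 - s1) / (t2 - t1) for s1, s2, t1, t2 in zip(s, s[1:], t, t[1:])]
print(mid_t[4], round(v[4], 1))  # 4.5 43.8
```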
Answer: the position-time plot: x axis t in s, y axis s in m
for the velocity, calculate it for every time interval; since each interval is just 1 s, you get the mean velocity, for example (s5 - s4)/1 s = (123 - 79.2) m / 1 s ≈ 43.8 m/s, which you plot at t = 4.5 s | {
"domain": "physics.stackexchange",
"id": 72716,
"tags": "homework-and-exercises, kinematics"
} |
What is a distributed force? | Question: I was reading a book called "Properties of Matter" by D.S. Mathur. In the chapter titled "Elasticity", the book says that "stress" is measured per unit area due to it being a distributed force. What does that mean? I also had trouble understanding why stress is measured as force per unit area.
Answer:
What does that (a distributed force) mean? I also had trouble
understanding why Stress is measured as force per unit area.
Think about stress in the same way as pressure. In mechanics of materials, stress or pressure expresses the internal forces between neighboring particles (atoms, molecules, etc.) of materials. The units of stress are the same as pressure.
The diagram below shows a vertically oriented cylinder of cross section area $A$ with a disc of mass $m$ placed on top. The downward force of the disc is $mg$. The axial or normal stress on the cross section area of the cylinder is the result of the disc force being distributed uniformly over the cross section area and equals $F/A=mg/A$.
For any given cross section area within the cylinder material, the stress will be the sum of the weight of the disc plus the weight of cylinder above the cross section, divided by cross sectional area.
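The calculation described above is simple enough to sketch; the disc mass and cross-section values below are hypothetical, chosen only to illustrate the force-over-area idea:

```python
g = 9.81  # m/s^2, standard gravity

def normal_stress(mass_kg, area_m2):
    # stress = distributed force / area = m*g / A, in pascals
    return mass_kg * g / area_m2

# hypothetical numbers: a 10 kg disc on a 2 cm^2 (2e-4 m^2) cross section
print(normal_stress(10, 2e-4))  # about 4.9e5 Pa
```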
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 64133,
"tags": "terminology, elasticity"
} |
How many yellow objects are there? | Question: I thought and read about color mixing today. I made some counterintuitive discoveries, and I now have some thought experiments which I cannot test.
I could print a yellow image but use a different printing technique on each half. On the left side I use a color which is true yellow (yellow pigments which reflect the yellow wavelength); on the right I use small red and green squares next to each other (not overlapping) which from afar also appear as the same yellow. So both sides appear yellow to me because the cone cells on my retina are tricked. If I now hold a monochromatic yellow filter in front of me, the right side would appear black because red and green can't get through the filter, and the left side would be ... still the same yellow. So I could now distinguish between the left and right side.
So my question is, what would I see when I take multiple monochromatic filters with me and look around?
Are there materials which appear a specific color because they are that color (i.e., the atoms absorb every other color or emit just this specific color), and other materials which reflect or emit different wavelengths that then appear as a specific color?
Answer: Every object has a spectrum of emitted light, which is a function $I(\lambda)$ that tells you the intensity $I$ of the light of wavelength $\lambda$ that is emitted by the object. Many different objects have many different spectra, and there are huge swaths of physics devoted to predicting, analyzing, and generally studying the spectra of various objects, from minerals to the atmosphere to stars and galaxies.
This makes it difficult to say what color an object is, because it's always emitting light composed of a bunch of different colors. That said, we can reasonably say that an object "appears a certain color because it is that color" when its spectrum is relatively sharply peaked in one color. A great example of this is a sodium vapor lamp, which gets its yellow color from the electrons in sodium atoms jumping between two energy levels. As such, its spectrum is very sharply peaked at 590 nm:
The above is an emission spectrum; the higher the points on the graph, the more intense the emitted light of that color is. For context, this is what the lamp looks like:
And for some more context, here are the spectra of a lot of other common light sources. Though they all look basically white, you can see that there are clear differences in their spectra that our eyes generally can't resolve:
In contrast, one of the green pigments in grass and most plants is chlorophyll b, which has a much broader and more complicated spectrum. Below I present the absorption spectrum, which tells you how much light is absorbed by the pigment. The convention is reversed: the higher the points on the graph, the more the light of that color is absorbed, so the parts of the graph that tell you the color that you see are the low points:
You can see from this graph that there are actually three component colors to chlorophyll b in the visible range: there's the obvious green color between 500-600 nm, there's a red portion past 650 nm, and there's a purple contribution from light lower than 450 nm. Your eyes take in this complicated red-green-purple mix and output "this looks green." | {
"domain": "physics.stackexchange",
"id": 57943,
"tags": "visible-light, vision"
} |
How to perform SLS check with strut-and-tie method? | Question: I am using strut-and-tie method to design a corbel. I noticed that Eurocode mentions strut-and-tie only in section 6, which is ultimate limit state design. How do I perform serviceability limit state design for the corbel using strut-and-tie method?
Answer: One important thing to remember about the strut-and-tie method is that as a lower-bound method it is based on the plasticity theory. That is, the principle is to find a safe and statically admissible stress distribution, and if any such distribution is possible the structure will not collapse at this load. This principle only makes sense for the ultimate limit state. However, if the assumed load path is close to what linear elasticity theory would give, the stresses will be a decent approximation to a detailed finite element calculation and may very well be sufficiently accurate. Therefore, the difficult part of the calculation is knowing how well your strut-and-tie model aligns with elasticity theory.
There is a bit more on this in EN 1992-1-1 section 5.6.4.
But if you do your SLS check with a strut-and-tie model, the actual calculations are quite straightforward. Calculate the stress in concrete and reinforcement and check them against the SLS limits in section 7.2. Use the reinforcement stress to calculate a crack width and check that against the limit from section 7.3. And you're done. The strut-and-tie approach will not give you a way to calculate a deflection but for a typical corbel that wouldn't be very interesting anyway. | {
"domain": "engineering.stackexchange",
"id": 2558,
"tags": "structural-engineering, reinforced-concrete, eurocodes"
} |
Should I be concerned about a galvanic reaction between silver solder and stainless steel? | Question: I would like to solder a drain fitting onto my ultrasonic cleaner's tank. The fitting and the tank are both stainless steel. During its operation, the tank is filled with a cleaning solution and heat is applied to accelerate the process.
I'm concerned the silver solder joint and/or the tank itself may become compromised over time. Could there be a galvanic reaction between the solder and the tank, especially in the presence of mildly corrosive cleaning agents and heat?
Answer: The compositions of silver solders themselves, as presented in this reference, show large differences in the percentages of the anodic zinc and the cathodic metals, with respect to possible galvanic corrosion:
The compositions for the most commonly used jewelry grade silver solders are as follows, in descending order of melting point, from Knuth’s “Jewelers’ Resource”. Their compositions may vary with the different manufacturers, especially in the zinc content:
IT (extremely hard) 80Ag, 16Cu, 4Zn (sometimes no zn)
Hard 75Ag, 22Cu, 3Zn
Medium 70Ag, 20Cu, 10 Zn
Easy 65 Ag, 20Cu, 15 Zn
Extra Easy 56 Ag, 22 Cu, 17 Zn, 5 Sn
Easy Flo 45 Ag, 15 Cu, 16 Zn, 24 Cd
There is at least one study, to quote from Corrosion Issues in Solder Joint Design and Service:
Corrosion is an important consideration in the design of a solder joint. It must be addressed with respect to the service environment or, as in the case of soldered conduit, the nature of the medium being transported within piping or tubing. Galvanic-assisted corrosion is of particular concern, given the fact that solder joints are comprised of different metals or alloy compositions that are in contact with one another.....
Functionally, however, it is the loss of material, be it the filler metal or the loss of nearby substrate material, that most significantly impacts solder joint performance and reliability. Material loss degrades the joint's capacity to support a mechanical load, provide hermeticity for a container structure, or sustain continuity in an electrical circuit.....
The predominance of one or more corrosion mechanisms, be it uniform, pitting, or crevice corrosion, as well as the rates of material loss, are very sensitive to the alloy properties (composition, phase distribution, oxide layer chemistry and thickness, etc.) and the service environment.
So, to answer your question, there is likely cause for your concern, but based on the last comment, a correct assessment likely requires knowing the precise nature of the silver solder employed and which chemicals, and associated mixtures, could act on the solder joint over time. | {
"domain": "chemistry.stackexchange",
"id": 13374,
"tags": "metallurgy, silver, corrosion"
} |
Can empty space be a frame of reference to measure velocity? | Question: I am wondering: if there is only one object in the universe, then does it make any sense to talk about its velocity? If empty space can be thought of as a reference for measuring its velocity, then it might.
I read somewhere that in empty space someone cannot tell if she is moving unless she has some acceleration. Now if it does not make sense to talk about velocity how can it make sense to talk about acceleration?
Answer: The Equivalence Principle of General Relativity holds that acceleration and gravity can be described identically. With an accelerometer, you can tell whether or not you are accelerating in empty space, regardless of whether another object is available to act as a reference point. Under acceleration, your weight will change just as though you were approaching or standing in a gravitational field.
Velocity, however, has meaning only in relation to relative movement between two or more objects. No one has yet discovered an edge to empty space which can be used as a reference point for measuring the velocity of an object. Therefore, a reference point is necessary to measure velocity, and the velocity will be attributable to both objects.
Acceleration is the rate of change of velocity. You do not need to know the magnitude of velocity, only its rate of change in order to measure acceleration. | {
"domain": "physics.stackexchange",
"id": 28196,
"tags": "acceleration, velocity, inertial-frames"
} |
Inapproximability of an optimization problem | Question: Suppose we have an optimization problem $\mathcal{P}$ in which we must cover all points in the plane with $k$ disjoint rectangles and optimize a distance function over each rectangle. Now, suppose there is a problem $\mathcal{P}'$ that just needs to cover all points in the plane with $k$ disjoint rectangles.
It has already been proved that $\mathcal{P}'$ is NP-hard and that there is no constant-factor approximation algorithm for $\mathcal{P}'$. Can we conclude that $\mathcal{P}$ has no constant-factor approximation algorithm? Why?
I think as follow:
$\mathcal{P}$ is at least as hard as $\mathcal{P}'$, so if there is a constant-factor approximation algorithm for $\mathcal{P}$, then each feasible solution $\mathcal{I}$ of $\mathcal{P}$ is a solution for the decision version of $\mathcal{P}'$; hence we solve the decision version of $\mathcal{P}'$ in polynomial time, and hence $P=NP$. Finally, we conclude that $\mathcal{P}$ has no constant-factor approximation algorithm.
Answer: Here is a similar example. Let $\mathcal{Q}$ be the problem of maximizing the number of satisfied clauses in an input CNF given that the assignment is the FALSE assignment. Let $\mathcal{Q'}$ be the problem of maximizing the number of satisfied clauses in an input CNF, without any other constraints. It is known that $\mathcal{Q'}$ has no PTAS (unless P=NP). Does it follow that $\mathcal{Q}$ has no PTAS? | {
"domain": "cs.stackexchange",
"id": 19784,
"tags": "complexity-theory, approximation"
} |
Generate secure URL safe token | Question: using Microsoft.AspNetCore.WebUtilities;
using System.Security.Cryptography;
namespace UserManager.Cryptography
{
public class UrlToken
{
private const int BYTE_LENGTH = 32;
/// <summary>
/// Generate a fixed length token that can be used in url without endcoding it
/// </summary>
/// <returns></returns>
public static string GenerateToken()
{
// get secure array bytes
byte[] secureArray = GenerateRandomBytes();
// convert in an url safe string
string urlToken = WebEncoders.Base64UrlEncode(secureArray);
return urlToken;
}
/// <summary>
/// Generate a cryptographically secure array of bytes with a fixed length
/// </summary>
/// <returns></returns>
private static byte[] GenerateRandomBytes()
{
using (RNGCryptoServiceProvider provider = new RNGCryptoServiceProvider()) {
byte[] byteArray = new byte[BYTE_LENGTH];
provider.GetBytes(byteArray);
return byteArray;
}
}
}
}
I've created the above class (C#, .NET Core 2.0) to generate a cryptographically secure string token that is URL safe, so it can be used in a URL without needing to be encoded.
I will use that token as a GET parameter (e.g. www.site.com/verify/?token=v3XYPmQ3wD_RtOjH1lMekXloBGcWqlLfomgzIS1mCGA) in a user manager application where I use the token to verify the user email or to recover a user password.
The above link will be sent as email to the user that has requested the service.
I store the token into a DB table with an associated expiration datetime.
I've seen other implementations on this and other sites but all seem to be unnecessarily complicated. Am I missing something?
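(An illustrative aside, not part of the original post.) The same scheme, 32 cryptographically random bytes encoded as URL-safe base64 with the padding stripped, takes only a few lines in Python's standard library:

```python
import base64
import secrets

def generate_token(n_bytes: int = 32) -> str:
    # secrets draws from the OS CSPRNG; urlsafe_b64encode uses '-' and '_'
    raw = secrets.token_bytes(n_bytes)
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode("ascii")

print(len(generate_token()))  # 43 characters for 32 bytes
```

In practice `secrets.token_urlsafe(32)` collapses the whole function into a single standard-library call.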
Answer: Minor suggestions:
public class UrlToken
The class has no instance data, so it could be made static:
public static class UrlToken
Microsoft's Naming Guidelines and their Framework Design Guidelines suggest not using underscores and also using PascalCasing for constants, so
private const int BYTE_LENGTH = 32;
could be:
private const int ByteLength = 32;
However, even that name doesn't tell us much of what it is for. Let's try again:
private const int NumberOfRandomBytes = 32;
Typo/misspelling in the XML doc comment: "encoding" is written as "endcoding".
There is mixed curly brace formatting. Microsoft guidelines (see links above) suggest the opening and closing curly braces should be on their own line.
using (RNGCryptoServiceProvider provider = new RNGCryptoServiceProvider()) {
to:
using (RNGCryptoServiceProvider provider = new RNGCryptoServiceProvider())
{
By the way, kudos to you on your proper use of the using construct! Looks fantastic! | {
"domain": "codereview.stackexchange",
"id": 33383,
"tags": "c#, asp.net-core"
} |
Basic binary search in python | Question: I just started learning to code during quarantine and whilst learning python, I created this program for a binary search. This is really basic so I assume that there is a way easier method. If anyone knows how I can make this way simpler, your help would be appreciated.
list = [1,15,37,53,29,22,31,90,14,6,37,40]
finished = False
target = 37
list.sort()
print(list)
while finished == False:
n = len(list)
print(n)
if target < list[int(n/2)]:
for items in range(int(n / 2), n):
list.pop()
print('Item is too large.')
elif target == list[int(n/2)]:
finished = True
print('The item has been found.')
else:
list.reverse()
for items in range(int(n / 2), n):
list.pop()
list.reverse()
print('Item is too small.')
print(list)
Answer:
Avoid using list as the variable name, because it shadows the
built-in list type in Python.
You don't need to manipulate the list, such
as list.pop() and list.reverse(). It is inefficient. You can
determine the updated search range with index.
When target is assigned a value not within list, there will be IndexError: list index out of range. It means you didn't handle the case well.
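As an aside (not part of the original answer): once the list is sorted, the standard-library bisect module performs the halving for you:

```python
import bisect

def binary_search(sorted_items, target):
    # bisect_left returns the leftmost insertion point; verify it holds target
    i = bisect.bisect_left(sorted_items, target)
    return i if i < len(sorted_items) and sorted_items[i] == target else -1

data = sorted([1, 15, 37, 53, 29, 22, 31, 90, 14, 6, 37, 40])
print(binary_search(data, 37))  # 7, the leftmost of the two 37s
print(binary_search(data, 2))   # -1, not present
```

Because bisect_left returns the leftmost matching position, duplicates resolve to the first occurrence, and a miss is handled without any IndexError.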
Modified code:
search_list = [1,15,37,53,29,22,31,90,14,6,37,40]
target = 37
start = 0
end = len(search_list)-1
search_list.sort()
print('The sorted list:',search_list)
while start<=end:
n = int((end + start)/2)
if target < search_list[n]: # the target is small
end = n-1
elif target > search_list[n]: # the target is large
start = n+1
elif target == search_list[n]: # the target is found
print('The item {} has been found at the index {} in the sorted list.'.format(target,n))
break
else:
print('The item is not found.') | {
"domain": "codereview.stackexchange",
"id": 38299,
"tags": "python, array, search, binary-search"
} |
Possible to use Master theorem? $T(n) = aT(\lfloor \frac{n}{b} \rfloor) + g(n)$ | Question: The master theorem can be used in case of a recurrence relation of the form $T(n) = aT(\frac{n}{b}) + g(n)$
But is it possible to use the master theorem for recurrence relations of the form $T(n) = aT(\lfloor \frac{n}{b} \rfloor) + g(n)$?
Answer: Yes, you can, as David has pointed out. However, his answer only mentions the case of the upper bounds explicitly, leaving the case of the lower bounds dangling.
In fact, the conclusion of master theorem, which gives asymptotic approximation of the given function $T$ in $\Theta$ notation, still holds if we change the recurrence relation from
$$ T(n) = aT(\frac{n}{b}) + g(n)$$
to
$$T(n) = aT(\lfloor\frac{n}{b}\rfloor) + g(n),$$
or
$$T(n) = aT(\lceil\frac{n}{b}\rceil) + g(n),$$
or, even more generally,
$$T(n) = aT\left(\frac{n}{b}+ h(n)\right) + g(n),\text{ where } h(n)\in O\left(\frac n{\log^2n}\right).$$
These variants of the master theorem can be obtained by applying Akra–Bazzi method. | {
"domain": "cs.stackexchange",
"id": 12677,
"tags": "recurrence-relation, recursion, master-theorem"
} |
Electrophilic or nucleophilic addition? | Question: In this reaction:
$$\ce{H2C=O + H2O -> H2C-(OH)2}$$
My textbook says it's a nucleophilic addition reaction.
But in this reaction:
$$\ce{H2C=CH2 + HBr -> H3C-CH2Br}$$
is an electrophilic addition reaction.
How do we know when a reaction is via an electrophilic or nucleophilic addition?
Answer: Learning organic chemistry is about learning key modes of reactivity and being about to identify key features of reactants that may trigger relevant modes.
Relevant examples here are:
Nature of orbitals: Carbonyl groups have a low-lying $\pi^{*}$ orbital that is quite susceptible to nucleophilic attack. Alkenes, on the other hand, do not. What alkenes do have is a lot of electron density, so they are susceptible to protonation to form cations.
Acidity. Water is very mildly acidic while hydrogen bromide is quite acidic. Therefore different modes of reactivity should be considered.
The real question here is: Why are you trying to compare these two reactions at all? They're using different reagents with very different functional groups. Naturally, very different mechanisms were involved between the two reactions. | {
"domain": "chemistry.stackexchange",
"id": 9753,
"tags": "organic-chemistry"
} |
Does ROS Tango work? | Question:
It's been a couple years and I have come back to see if ROS Tango works. Unfortunately, the tutorials fail miserably, requiring old versions of everything from Android Studio 0.6 to unsupported gradle versions and plugins. Even with those it generates run-time errors.
The tutorials page has not been updated since September, 2014, so I think it is an abandoned project. That's too bad, as it would be a useful thing to have working. Does anyone have a Tango that has a ROS interface?
Originally posted by dan on ROS Answers with karma: 875 on 2017-01-01
Post score: 3
Original comments
Comment by gvdhoorn on 2017-01-02:
This is a bit of a 'me too' comment, but I've not seen any development in that direction either. It's definitely a shame, as Tango devices are very interesting. I'd be interested to see whether you get any more informative responses, especially since the OSRF worked on Tango with Google.
Comment by gvdhoorn on 2017-01-02:
See OSRF Blog - Project Tango Announced fi.
Comment by gijsje170 on 2017-01-03:
I remember I was able to send my pointcloud and position over wifi using the Yellowstone tablet. I could not merge the RGB data with the pointcloud though. If this is what you are looking for, I could search for what tutorials I used.
Answer:
Take a look around here: http://wiki.ros.org/tango_ros_streamer.
That is an Android application that will publish Tango data to ROS topics. The core of that application is a ROSJava node that you can integrate into another application (http://wiki.ros.org/tango_ros_streamer/tango_ros_node).
The code of the application is open source, and you can install it into your Tango enabled device from the Play store for free.
A bit late, but hope it helps!
Originally posted by jubeira with karma: 1054 on 2017-10-18
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by dan on 2018-01-27:
That works, but not for the orginal Tango development kit. For an explanation of why it fails for that, see this link:
https://github.com/Intermodalics/tango_ros/issues/384
Comment by jubeira on 2018-01-29:
@dan previous releases of the application used to work with the development kit; I've used them myself. Don't expect them to be feature-complete nor completely bug-free, but they work.
Take a look here: https://github.com/Intermodalics/tango_ros/releases, download the apk and install it with adb.
Comment by dan on 2018-02-27:
@jubeira Great call! It works with this version of tango_ros_streamer:
https://github.com/Intermodalics/tango_ros/releases/tag/v1.0-15
See this thread for more details:
https://github.com/Intermodalics/tango_ros/issues/384 | {
"domain": "robotics.stackexchange",
"id": 26618,
"tags": "java, android"
} |
How can I get historical weather data for the United States? | Question: I went to the web site of the National Weather Service and clicked around but it was fruitless. How can I get historical weather data in the US?
I want to get daily high and low and precipitation for a particular weather station for time periods such as 1930 to 1940.
Answer: This is the portal for the NOAA historical records:
https://www.ncdc.noaa.gov/cdo-web/search
You can specify what you are looking for (Summaries, or other), the time range, the area.
They have records from the 1800s to today. | {
"domain": "earthscience.stackexchange",
"id": 936,
"tags": "meteorology, open-data"
} |
Finding equivalent capacitance between A and B | Question: Following is the question; the task is to find the equivalent capacitance between A and B.
I've tried a couple of times and keep getting 7/3 microfarads as the answer, while the correct answer is 8/3 microfarads. Please explain the method to solve such questions.
Answer: A lot of these problems are best done by first redrawing the circuit so that it is in a more accessible form.
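Once the network has been redrawn into nested series/parallel groups, the combination rules are mechanical to apply. A generic helper, illustrative only since the actual circuit from the question is not reproduced here:

```python
def series(*caps):
    # capacitors in series combine reciprocally: 1/C_eq = sum(1/C_i)
    return 1 / sum(1 / c for c in caps)

def parallel(*caps):
    # capacitors in parallel simply add
    return sum(caps)

# e.g. a 1 uF capacitor in parallel with two 2 uF capacitors in series:
print(parallel(1, series(2, 2)))  # 2.0 (uF)
```

Note that capacitors combine the opposite way to resistors: series connections reduce capacitance, parallel connections add it.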
The correct answer is $\frac 8 3 \;\mu$F. | {
"domain": "physics.stackexchange",
"id": 31077,
"tags": "homework-and-exercises, electric-circuits, capacitance"
} |
rospy service multiple argument | Question:
Hello,
It might be a total newbie question but I can't find the solution. I know that for Subscriber I can call callback_args but how am I suppose to do it for Services ?
I'm creating 2 services that are going to do the same thing BUT I want the callback to receive an id so it knows from which robot the service was requested. I could pass it in the message, but for me a better way is to add a constant number associated with each callback. I thought of something like this, but it obviously doesn't work:
service.append(rospy.Service(name, sentobject, self.check, callback_args=0) ) #Here 0 would be my id
service.append(rospy.Service(name, sentobject, self.check, callback_args=1) ) #Here 1 would be my id
With check being defined as :
def check(self, rep, robot_id):
How can I do it in python ?
Thanks a lot
Originally posted by Maya on ROS Answers with karma: 1172 on 2014-06-26
Post score: 1
Answer:
Hi,
I am not sure why you want two different services that perform the same function.
Other than that, I would do as you mentioned and include this data in the service definition (.srv file).
Edit:
When you look at the definition of the rospy services you can clearly see that there are no callback arguments expected. You can always use your Service definition in the .srv you are using to add a field like an string or int for the robot ID to be passed to the service call. For example you could take the AddTwoInts.srv
int64 a
int64 b
---
int64 sum
and modify it to look like this
int64 a
int64 b
int64 robotid
---
int64 sum
In your service callback you can than access this value and change the response message accordingly. For this you should rename the file, add it to the CMakeLists.txt in the package the .srv file is in and make a catkin_make to generate the needed include files.
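As an aside not in the original answer: since rospy.Service accepts any callable as its handler, another common pattern is functools.partial, which bakes extra arguments into the callback. Sketched below with a plain function standing in for the actual service plumbing:

```python
from functools import partial

def check(req, robot_id):
    # req comes from the service caller; robot_id was bound at registration
    return "robot %d handled %s" % (robot_id, req)

# each 'service' gets its own callback with a different id baked in, e.g.
# rospy.Service(name, SentObject, partial(check, robot_id=i))
handlers = [partial(check, robot_id=i) for i in range(2)]
print(handlers[0]("ping"))  # robot 0 handled ping
print(handlers[1]("ping"))  # robot 1 handled ping
```

This keeps the robot id out of the message definition entirely, which is closer to what the question's callback_args attempt was after.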
Originally posted by bajo with karma: 206 on 2014-06-26
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Maya on 2014-06-30:
That's exactly what I ended up doing. Why can't we use arguments in services? :/ | {
"domain": "robotics.stackexchange",
"id": 18398,
"tags": "ros, rospy, service, multiple"
} |
Counting game with hardcoded AI | Question: The previous failed attempt
This question derives from this other question I made.
In the previous question I tried to use a Monte Carlo algorithm;
sadly it did not work.
I am doing a hardcoded AI
I therefore decided to try a hardcoded AI, but not for the 21 game, because that would have been too trivial; instead I chose a more general game similar to it. With the default settings you see below, the game is equal to the 21 game.
Starting assumption
This AI was built completely by me, knowing from the start only that in the 21 game, to play as well as possible, you have to always make the number divisible by 4. Ex: (2 -> 4), (13 -> 16).
I tried to come up with a more general answer.
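As an aside not present in the original post: for the standard variant, where the minimum move is 1 and whoever reaches END_VALUE loses, the known safe strategy is modular rather than divisor-based: always leave a running total congruent to (END_VALUE - 1) modulo (MAX_PLAYABLE_NUMBER + 1). A sketch:

```python
def optimal_move(current, max_move=3, end=21):
    # safe totals are congruent to (end - 1) modulo (max_move + 1);
    # assumes the minimum playable number is 1
    move = ((end - 1) - current) % (max_move + 1)
    return move if 1 <= move <= max_move else None  # None: losing position

print(optimal_move(2))   # 2  -> total 4
print(optimal_move(13))  # 3  -> total 16
print(optimal_move(16))  # None (no safe reply exists)
```

With END_VALUE = 21 and MAX_PLAYABLE_NUMBER = 3 this reproduces the "divisible by 4" rule above; the divisor-based heuristic coincides with this modulus only for particular combinations of the constants, which may explain why the AI plays well only sometimes.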
Sometimes plays well sometimes not
This AI mysteriously plays well only in some cases (example: END_VALUE = 21, MINIMUM_PLAYABLE_NUMBER = 1, MAX_PLAYABLE_NUMBER = 4).
Coding style part
If you know little about AI's and Math games, feel free to comment only about my coding style.
import random
END_VALUE = 21
MINIMUM_PLAYABLE_NUMBER = 1
MAX_PLAYABLE_NUMBER = 3
INTRODUCTION = """
In this game you and the computer
alternate counting up to """ + str(END_VALUE) + """ choosing a number
from """+ str(MINIMUM_PLAYABLE_NUMBER) + """ to """ + str(MAX_PLAYABLE_NUMBER) + """.
The first that arrives to """ + str(END_VALUE) + """ LOSES.
"""
def dividers(n):
"""
Given a number n returns all its dividers from 2 to the number
>>> dividers(46)
[2, 23, 46]
>>> dividers(100)
[2, 4, 5, 10, 20, 25, 50, 100]
"""
return [i for i in range(2,n+1) if n%i==0]
def max_divider_less_than_or_equal_max_playable_number(n):
"""
Core component of the AI.
Tries to get the max divider less than MAX_PLAYABLE_NUMBER.
If fails plays random.
>>> MAX_PLAYABLE_NUMBER = 6
>>> max_divider_less_than_or_equal_max_playable_number(65)
5
>>> MAX_PLAYABLE_NUMBER = 8
>>> max_divider_less_than_or_equal_max_playable_number(17)
[random] # note that 1 is not included in the dividers
"""
try:
return max([i for i in dividers(n) if i <= MAX_PLAYABLE_NUMBER])
except ValueError: # There are no divisors
return random.randint(MINIMUM_PLAYABLE_NUMBER,MAX_PLAYABLE_NUMBER)
def AI(current_value):
"""
Plays a move trying to make the user have to play
at END_VALUE - 1 so that the player loses and he wins.
He does so by playing the move so that
(current_value + move) % max_divider_less_ \
than_or_equal_max_playable_number(END_VALUE - 1) == 0:
is True.
This AI mysteriously works only for some values of the
constants at the start.
"""
moves = range(MINIMUM_PLAYABLE_NUMBER,MAX_PLAYABLE_NUMBER + 1)
for move in moves:
if (current_value + move) % max_divider_less_than_or_equal_max_playable_number(END_VALUE - 1) == 0:
return move
return random.randint(1,MAX_PLAYABLE_NUMBER)
def get_input_and_handle_errors():
"""
Asks the user for input.
Checks if it is a number, if it isn't raises error with message.
Checks if it is in the correct range, if it isn't raises error with message.
"""
try:
temp = int(input("Enter a number "))
except ValueError:
raise TypeError("You must enter a number.")
if MINIMUM_PLAYABLE_NUMBER <= temp <= MAX_PLAYABLE_NUMBER:
player_move = temp
else:
raise ValueError("You must play a number from " + str(MINIMUM_PLAYABLE_NUMBER) + " to " + str(MAX_PLAYABLE_NUMBER) + " both limits included.")
return temp
def show_information_after_turn(value,who):
print("After " + who + "'s turn value is " + str(value))
def check_if_game_over(value):
if value >= END_VALUE:
return True
def ask_user_if_he_wants_to_be_first():
temp = input("Do you want to be first? ")
# I check "y" in temp.lower()
# instead of temp.lower() == "yes"
# To give more freedom to the user
if "y" in temp.lower():
return True
elif "n" in temp.lower():
return False
def main():
"""
Main interface.
"""
value = 0
print(INTRODUCTION)
player_is_first = ask_user_if_he_wants_to_be_first()
while 1:
if not player_is_first:
#############################################
# Computer's turn
value += AI(value)
show_information_after_turn(value,"computer")
if check_if_game_over(value):
print("Computer has lost")
break
player_is_first = False
##############################################
# Player's turn
player_move = get_input_and_handle_errors()
value += player_move
show_information_after_turn(value,"player")
if check_if_game_over(value):
print("You have lost")
break
if __name__ == "__main__":
main()
Answer: Since you have already acknowledged deficiencies in the AI algorithm, I'll just review the implementation.
You have a latent bug in the AI() function:
return random.randint(1,MAX_PLAYABLE_NUMBER)
The lower bound should be MINIMUM_PLAYABLE_NUMBER. I am puzzled by the naming inconsistency. The constants should be MIN_PLAYABLE_NUMBER and MAX_PLAYABLE_NUMBER, or they should be MINIMUM_PLAYABLE_NUMBER and MAXIMUM_PLAYABLE_NUMBER.
Conventionally, there should be a space after each comma. This is not explicitly stated in PEP 8, but implied in all of the examples.
To generate the introductory message, use str.format() instead of string concatenation.
There is a lot of similarity between the human and the AI. The main loop could therefore benefit from converting them into objects.
import itertools
import random
import string
END_VALUE = 21
MINIMUM_PLAYABLE_NUMBER = 1
MAX_PLAYABLE_NUMBER = 3
class AI:
@property
def name_possessive_case(self):
return "computer's"
@property
def losing_message(self):
return 'Computer has lost.'
@staticmethod
def _dividers(n):
"""
Given a number n returns all its dividers from 2 to the number
>>> dividers(46)
[2, 23, 46]
>>> dividers(100)
[2, 4, 5, 10, 20, 25, 50, 100]
"""
return [i for i in range(2,n+1) if n%i==0]
@staticmethod
def _max_divider_less_than_or_equal_max_playable_number(n):
"""
Core component of the AI.
Tries to get the max divider less than MAX_PLAYABLE_NUMBER.
If fails plays random.
>>> MAX_PLAYABLE_NUMBER = 6
>>> _max_divider_less_than_or_equal_max_playable_number(65)
5
>>> MAX_PLAYABLE_NUMBER = 8
>>> _max_divider_less_than_or_equal_max_playable_number(17)
[random] # note that 1 is not included in the dividers
"""
try:
return max([i for i in AI._dividers(n) if i <= MAX_PLAYABLE_NUMBER])
except ValueError: # There are no divisors
return random.randint(MINIMUM_PLAYABLE_NUMBER, MAX_PLAYABLE_NUMBER)
def play(self, current_value):
"""
Plays a move trying to make the user have to play
at END_VALUE - 1 so that the player loses and he wins.
He does so by playing the move so that
(current_value + move) % max_divider_less_ \
than_or_equal_max_playable_number(END_VALUE - 1) == 0:
is True.
This AI mysteriously works only for some values of the
constants at the start.
"""
moves = range(MINIMUM_PLAYABLE_NUMBER, MAX_PLAYABLE_NUMBER + 1)
for move in moves:
if (current_value + move) % AI._max_divider_less_than_or_equal_max_playable_number(END_VALUE - 1) == 0:
return move
return random.randint(MINIMUM_PLAYABLE_NUMBER, MAX_PLAYABLE_NUMBER)
class Human:
@property
def name_possessive_case(self):
return 'your'
@property
def losing_message(self):
return 'You have lost.'
def play(self, current_value):
"""
Asks the user for input.
Checks if it is a number, if it isn't raises error with message.
Checks if it is in the correct range, if it isn't raises error with message.
"""
try:
temp = int(input("Enter a number "))
except ValueError:
raise TypeError("You must enter a number.")
if MINIMUM_PLAYABLE_NUMBER <= temp <= MAX_PLAYABLE_NUMBER:
player_move = temp
else:
raise ValueError("You must play a number from " + str(MINIMUM_PLAYABLE_NUMBER) + " to " + str(MAX_PLAYABLE_NUMBER) + " both limits included.")
return temp
def determine_player_order():
yes_no = input("Do you want to be first? ")
if "y" in yes_no.lower():
return itertools.cycle((Human(), AI()))
else:
return itertools.cycle((AI(), Human()))
def main():
print("""
In this game you and the computer
alternate counting up to {END_VALUE} choosing a number
from {MINIMUM_PLAYABLE_NUMBER} to {MAX_PLAYABLE_NUMBER}.
The first that arrives to {END_VALUE} LOSES.
""".format(END_VALUE=END_VALUE, MINIMUM_PLAYABLE_NUMBER=MINIMUM_PLAYABLE_NUMBER, MAX_PLAYABLE_NUMBER=MAX_PLAYABLE_NUMBER))
value = 0
turns = determine_player_order()
for player in turns:
value += player.play(value)
print("After %s turn value is %d" % (player.name_possessive_case, value))
if value >= END_VALUE:
print(player.losing_message)
break
if __name__ == "__main__":
main() | {
"domain": "codereview.stackexchange",
"id": 10441,
"tags": "python, ai"
} |
Public key chunked encryption with C# method to decrypt | Question: I have written a method in PHP which breaks data up into chunks smaller than the byte size of the key of the certificate used to encrypt, and a method in C# which will decrypt said chunks using the corresponding private key.
What I would like to be reviewed is the C# decrypt() function to improve the performance of decrypting each chunk.
In my tests using Stopwatch this part takes about 3 seconds to decrypt all chunks in the test string and store them back as a string. If I change the line
decrypted = decrypted + Encoding.UTF8.GetString( rsa.Decrypt( buffer, false ) );
to
byte[] tmp = rsa.Decrypt( buffer, false );
... the performance increases a bit to 2.8 seconds.
Generate private and public key pair (run this code)
GenerateKeyPair(true);
// generate public and private key
function GenerateKeyPair($display = false) {
$config = array(
"digest_alg" => "sha512",
"private_key_bits" => 4096,
"private_key_type" => OPENSSL_KEYTYPE_RSA
);
$ssl = openssl_pkey_new( $config );
openssl_pkey_export($ssl, $privKey);
$pubKey = openssl_pkey_get_details($ssl)['key'];
if($display == true) {
// display keys in textareas
echo '
<textarea rows="40" cols="80">' . $privKey . '</textarea>
<textarea rows="20" cols="80">' . $pubKey . '</textarea>
';
}
// return keys as an array
return array(
'private' => $privKey,
'public' => $pubKey
);
}
Save the results to private.txt and public.txt
PHP: Encrypt using Public key (public.txt)
// replace this line with the one from http://pastebin.com/6Q2Zb3j6
$output = '';
echo encrypt($data, 'public.txt');
// encrypt string using public key
function encrypt($string, $publickey, $chunkPadding = 16) {
$encrypted = '';
// load public key
$key = file_get_contents($publickey);
$pub_key = openssl_pkey_get_public($key);
$keyData = openssl_pkey_get_details($pub_key);
$chunksize = ($keyData['bits'] / 8) - $chunkPadding;
openssl_free_key( $pub_key );
// split string into chunks
$chunks = str_split($string, $chunksize);
// loop through and encrypt each chunk
foreach($chunks as $chunk) {
$chunkEncrypted = '';
//using for example OPENSSL_PKCS1_PADDING as padding
$valid = openssl_public_encrypt($chunk, $chunkEncrypted, $key, OPENSSL_PKCS1_PADDING);
if($valid === false){
$encrypted = '';
return "failed to encrypt";
break; //also you can return and error. If too big this will be false
} else {
$encrypted .= $chunkEncrypted;
}
}
return bin2hex($encrypted);
}
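The chunk-size arithmetic that encrypt() relies on can be sanity-checked with a quick sketch (illustrative Python, not real RSA; note that PKCS#1 v1.5 padding needs at least 11 bytes per block, so the 16-byte reserve used above stays on the safe side):

```python
# Chunking arithmetic only -- no actual encryption happens here.

def max_chunk_size(key_bits, chunk_padding=16):
    """Largest plaintext chunk that fits in one RSA block."""
    return key_bits // 8 - chunk_padding

def split_into_chunks(data, chunk_size):
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

size = max_chunk_size(4096)            # 496 bytes per chunk for a 4096-bit key
chunks = split_into_chunks(b"x" * 1000, size)
print(size, [len(c) for c in chunks])  # 496 [496, 496, 8]
```

Each plaintext chunk then encrypts to a full key-size block (512 bytes here), which is why the decrypting side can walk the ciphertext in fixed-size strides.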
C#: Class to load private key and use it to decrypt
using System;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Security.Cryptography;
using System.IO;
using System.Security.Cryptography.X509Certificates;
public class crypt {
public static string decrypt(string encrypted, string privateKey) {
string decrypted = "";
try {
RSACryptoServiceProvider rsa = DecodePrivateKeyInfo( DecodePkcs8PrivateKey( File.ReadAllText( privateKey ) ) );
byte[] encryptedBytes = Enumerable.Range( 0, encrypted.Length -1 )
.Where( x => x % 2 == 0 )
.Select( x => Convert.ToByte( encrypted.Substring( x, 2 ), 16 ) )
.ToArray();
byte[] buffer = new byte[( rsa.KeySize / 8 )]; // the number of bytes to decrypt at a time
int bytesRead = 0;
using ( Stream stream = new MemoryStream( encryptedBytes ) ) {
while ( (bytesRead = stream.Read( buffer, 0, buffer.Length )) > 0 ) {
decrypted = decrypted + Encoding.UTF8.GetString( rsa.Decrypt( buffer, false ) );
}
}
return decrypted;
} catch (CryptographicException ce) {
return ce.Message;
} catch (FormatException fe) {
return fe.Message;
} catch (IOException ie) {
return ie.Message;
} catch (Exception e) {
return e.Message;
}
}
//-------- Get the binary PKCS #8 PRIVATE key --------
private static byte[] DecodePkcs8PrivateKey( string instr ) {
const string pemp8header = "-----BEGIN PRIVATE KEY-----";
const string pemp8footer = "-----END PRIVATE KEY-----";
string pemstr = instr.Trim();
byte[] binkey;
if ( !pemstr.StartsWith( pemp8header ) || !pemstr.EndsWith( pemp8footer ) )
return null;
StringBuilder sb = new StringBuilder( pemstr );
sb.Replace( pemp8header, "" ); //remove headers/footers, if present
sb.Replace( pemp8footer, "" );
string pubstr = sb.ToString().Trim(); //get string after removing leading/trailing whitespace
try {
binkey = Convert.FromBase64String( pubstr );
} catch ( FormatException ) { //if can't b64 decode, data is not valid
return null;
}
return binkey;
}
//------- Parses binary asn.1 PKCS #8 PrivateKeyInfo; returns RSACryptoServiceProvider ---
private static RSACryptoServiceProvider DecodePrivateKeyInfo( byte[] pkcs8 ) {
// encoded OID sequence for PKCS #1 rsaEncryption szOID_RSA_RSA = "1.2.840.113549.1.1.1"
// this byte[] includes the sequence byte and terminal encoded null
byte[] SeqOID = { 0x30, 0x0D, 0x06, 0x09, 0x2A, 0x86, 0x48, 0x86, 0xF7, 0x0D, 0x01, 0x01, 0x01, 0x05, 0x00 };
byte[] seq = new byte[15];
// --------- Set up stream to read the asn.1 encoded SubjectPublicKeyInfo blob ------
MemoryStream mem = new MemoryStream( pkcs8 );
int lenstream = (int)mem.Length;
BinaryReader binr = new BinaryReader( mem ); //wrap Memory Stream with BinaryReader for easy reading
byte bt = 0;
ushort twobytes = 0;
try {
twobytes = binr.ReadUInt16();
if ( twobytes == 0x8130 ) //data read as little endian order (actual data order for Sequence is 30 81)
binr.ReadByte(); //advance 1 byte
else if ( twobytes == 0x8230 )
binr.ReadInt16(); //advance 2 bytes
else
return null;
bt = binr.ReadByte();
if ( bt != 0x02 )
return null;
twobytes = binr.ReadUInt16();
if ( twobytes != 0x0001 )
return null;
seq = binr.ReadBytes( 15 ); //read the Sequence OID
if ( !CompareBytearrays( seq, SeqOID ) ) //make sure Sequence for OID is correct
return null;
bt = binr.ReadByte();
if ( bt != 0x04 ) //expect an Octet string
return null;
bt = binr.ReadByte(); //read next byte, or next 2 bytes is 0x81 or 0x82; otherwise bt is the byte count
if ( bt == 0x81 )
binr.ReadByte();
else
if ( bt == 0x82 )
binr.ReadUInt16();
//------ at this stage, the remaining sequence should be the RSA private key
byte[] rsaprivkey = binr.ReadBytes( (int)( lenstream - mem.Position ) );
RSACryptoServiceProvider rsacsp = DecodeRSAPrivateKey( rsaprivkey );
return rsacsp;
} catch ( Exception ) {
return null;
} finally { binr.Close(); }
}
//------- Parses binary ans.1 RSA private key; returns RSACryptoServiceProvider ---
private static RSACryptoServiceProvider DecodeRSAPrivateKey( byte[] privkey ) {
byte[] MODULUS, E, D, P, Q, DP, DQ, IQ;
// --------- Set up stream to decode the asn.1 encoded RSA private key ------
MemoryStream mem = new MemoryStream( privkey );
BinaryReader binr = new BinaryReader( mem ); //wrap Memory Stream with BinaryReader for easy reading
byte bt = 0;
ushort twobytes = 0;
int elems = 0;
try {
twobytes = binr.ReadUInt16();
if ( twobytes == 0x8130 ) //data read as little endian order (actual data order for Sequence is 30 81)
binr.ReadByte(); //advance 1 byte
else if ( twobytes == 0x8230 )
binr.ReadInt16(); //advance 2 bytes
else
return null;
twobytes = binr.ReadUInt16();
if ( twobytes != 0x0102 ) //version number
return null;
bt = binr.ReadByte();
if ( bt != 0x00 )
return null;
//------ all private key components are Integer sequences ----
elems = GetIntegerSize( binr );
MODULUS = binr.ReadBytes( elems );
elems = GetIntegerSize( binr );
E = binr.ReadBytes( elems );
elems = GetIntegerSize( binr );
D = binr.ReadBytes( elems );
elems = GetIntegerSize( binr );
P = binr.ReadBytes( elems );
elems = GetIntegerSize( binr );
Q = binr.ReadBytes( elems );
elems = GetIntegerSize( binr );
DP = binr.ReadBytes( elems );
elems = GetIntegerSize( binr );
DQ = binr.ReadBytes( elems );
elems = GetIntegerSize( binr );
IQ = binr.ReadBytes( elems );
// ------- create RSACryptoServiceProvider instance and initialize with public key -----
RSACryptoServiceProvider RSA = new RSACryptoServiceProvider();
RSAParameters RSAparams = new RSAParameters();
RSAparams.Modulus = MODULUS;
RSAparams.Exponent = E;
RSAparams.D = D;
RSAparams.P = P;
RSAparams.Q = Q;
RSAparams.DP = DP;
RSAparams.DQ = DQ;
RSAparams.InverseQ = IQ;
RSA.ImportParameters( RSAparams );
return RSA;
} catch ( Exception ) {
return null;
} finally { binr.Close(); }
}
private static int GetIntegerSize( BinaryReader binr ) {
byte bt = 0;
byte lowbyte = 0x00;
byte highbyte = 0x00;
int count = 0;
bt = binr.ReadByte();
if ( bt != 0x02 ) //expect integer
return 0;
bt = binr.ReadByte();
if ( bt == 0x81 )
count = binr.ReadByte(); // data size in next byte
else
if ( bt == 0x82 ) {
highbyte = binr.ReadByte(); // data size in next 2 bytes
lowbyte = binr.ReadByte();
byte[] modint = { lowbyte, highbyte, 0x00, 0x00 };
count = BitConverter.ToInt32( modint, 0 );
} else {
count = bt; // we already have the data size
}
while ( binr.ReadByte() == 0x00 ) { //remove high order zeros in data
count -= 1;
}
binr.BaseStream.Seek( -1, SeekOrigin.Current ); //last ReadByte wasn't a removed zero, so back up a byte
return count;
}
private static bool CompareBytearrays( byte[] a, byte[] b ) {
if ( a.Length != b.Length )
return false;
int i = 0;
foreach ( byte c in a ) {
if ( c != b[i] )
return false;
i++;
}
return true;
}
private static async Task<string> DecryptX(string encrypted, string keyFile) {
string decrypted = "";
byte[] data = Convert.FromBase64String( encrypted );
byte[] cert = pem2bytes( File.ReadAllText( keyFile ) );
RSACryptoServiceProvider rsa = null;
if(rsa == null) {
try {
X509Certificate2 cer = new X509Certificate2( cert );
if ( cer.HasPrivateKey ) {
rsa = (RSACryptoServiceProvider)cer.PrivateKey;
} else {
rsa = (RSACryptoServiceProvider)cer.PublicKey.Key;
}
} catch (CryptographicException ce) {
return ce.Message;
}
}
if (rsa == null) { return "No decoder hack worked"; }
try {
byte[] buffer = new byte[100]; // the number of bytes to decrypt at a time
int bytesReadTotal = 0;
int bytesRead = 0;
byte[] decryptedBytes;
using ( Stream stream = new MemoryStream( data ) ) {
while ( ( bytesRead = await stream.ReadAsync( buffer, 0, buffer.Length ) ) > 0 ) { // read into the start of the buffer each time
decryptedBytes = rsa.Decrypt( buffer, false );
bytesReadTotal = bytesReadTotal + bytesRead;
decrypted = decrypted + Encoding.UTF8.GetString( decryptedBytes );
}
}
} catch ( CryptographicException ce) {
return ce.Message;
} catch ( Exception e) {
return e.Message;
}
return decrypted;
}
private static byte[] pem2bytes( string publicKey ) {
string[] stripstrings = new string[] {
"PRIVATE KEY",
"PUBLIC KEY",
"CERTIFICATE"
};
string pemstr = publicKey.Trim();
StringBuilder sb = new StringBuilder( pemstr );
foreach(string strip in stripstrings) {
sb.Replace( "-----BEGIN " + strip + "-----", "" );
sb.Replace( "-----END " + strip + "-----", "" );
sb.Replace( "-----BEGIN RSA " + strip + "-----", "" );
sb.Replace( "-----END RSA " + strip + "-----", "" );
}
string pubstr = sb.ToString().Trim(); //get string after removing leading/trailing whitespace
try {
return Convert.FromBase64String( pubstr );
} catch ( FormatException ) { //if can't b64 decode, data is not valid
return null;
}
}
}
C#: Sample usage to download and decrypt PHP output
string key = @"C:\path\to\private.txt";
using(WebClient wc = new WebClient()) {
// download encrypted string from php script
string encrypted = wc.DownloadString( "http://127.0.0.1/encrypt.php" );
// start stopwatch
var watch = System.Diagnostics.Stopwatch.StartNew();
string decrypted = crypt.decrypt( encrypted, key );
Console.WriteLine( "Elapsed: " + watch.Elapsed.TotalSeconds.ToString() );
Console.WriteLine(
decrypted
);
}
Answer: Replace all this junk:
byte[] encryptedBytes = Enumerable.Range( 0, encrypted.Length -1 )
.Where( x => x % 2 == 0 )
.Select( x => Convert.ToByte( encrypted.Substring( x, 2 ), 16 ) )
.ToArray();
byte[] buffer = new byte[( rsa.KeySize / 8 )]; // the number of bytes to decrypt at a time
int bytesRead = 0;
using ( Stream stream = new MemoryStream( encryptedBytes ) ) {
while ( (bytesRead = stream.Read( buffer, 0, buffer.Length )) > 0 ) {
decrypted = decrypted + Encoding.UTF8.GetString( rsa.Decrypt( buffer, false ) );
}
}
with
using System.Runtime.Remoting.Metadata.W3cXsd2001;
byte[] encryptedBytes = SoapHexBinary.Parse(encrypted);
byte[] decryptedBytes = new byte[encryptedBytes.Length];
int blockSize = rsa.KeySize >> 3;
int blockCount = 1 + (encryptedBytes.Length - 1) / blockSize;
Parallel.For(0, blockCount, (i) => {
var offset = i * blockSize;
var buffer = new byte[Math.Min(blockSize, encryptedBytes.Length - offset)];
Buffer.BlockCopy(encryptedBytes, offset, buffer, 0, buffer.Length);
Buffer.BlockCopy(rsa.Decrypt(buffer, false), 0, decryptedBytes, offset, buffer.Length);
});
We use a highly-optimized builtin class for converting hexadecimal description of a byte array into that byte array. Much simpler, much faster.
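For comparison, the same hex-to-bytes conversion is a single builtin call in Python too (illustrative; it replaces a per-character conversion loop the same way SoapHexBinary.Parse does on the .NET side):

```python
# bytes.fromhex converts a hex string straight into the byte array it encodes.
encrypted_hex = "48656c6c6f"
data = bytes.fromhex(encrypted_hex)
print(data)  # b'Hello'
```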
Then, we skip the MemoryStream entirely, since extra wrappers can only slow things down, and streams force us into sequential operation.
Finally, we use Parallel.For to process each block, passing it through rsa.Decrypt, and storing the output at the same array index the input came from.
The result's no longer a string; you can turn it into one after the end of the parallel for if you need to, but generally a byte array is better for storing arbitrary data anyway.
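The same divide-and-conquer shape can be sketched in Python (illustrative; a placeholder transform stands in for rsa.Decrypt — the point is only that the blocks are independent, so they can be processed concurrently and written back at the index they came from):

```python
# Split data into fixed-size blocks, transform each block concurrently,
# and reassemble the outputs in their original order.
from concurrent.futures import ThreadPoolExecutor

def process_blocks(data, block_size, transform):
    block_count = 1 + (len(data) - 1) // block_size  # ceiling division
    out = [None] * block_count

    def work(i):
        out[i] = transform(data[i * block_size:(i + 1) * block_size])

    with ThreadPoolExecutor() as pool:
        list(pool.map(work, range(block_count)))  # force evaluation
    return b"".join(out)

# Placeholder "decrypt": reverse each block.
result = process_blocks(b"abcdefgh", 3, lambda block: block[::-1])
print(result)  # b'cbafedhg'
```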
If your block operation changes the size (how does that work? and how did you figure out how much to stick into each block on the encryption side?) then
int blockSize = rsa.KeySize >> 3;
int blockCount = 1 + (encryptedBytes.Length - 1) / blockSize;
var decryptedChunks = new byte[blockCount][];
Parallel.For(0, blockCount, (i) => {
var offset = i * blockSize;
var buffer = new byte[Math.Min(blockSize, encryptedBytes.Length - offset)];
Buffer.BlockCopy(encryptedBytes, offset, buffer, 0, buffer.Length);
decryptedChunks[i] = rsa.Decrypt(buffer, false);
});
var decryptedBytes = decryptedChunks.SelectMany(x => x).ToArray();
Note that the parallelization will only work if your rsa.Decrypt method is thread-safe/stateless. If it's not, find one that is. | {
"domain": "codereview.stackexchange",
"id": 23681,
"tags": "c#, performance, php"
} |
Error compiling openni2_camera -> libopenni2_wrapper.so | Question:
I'm trying to get openni2_camera with ROS Hydro running on a Raspberry Pi. The other files just went fine; now the executable is failing with the following error:
Scanning dependencies of target openni2_camera_node
[ 92%] Building CXX object openni2_camera/CMakeFiles/openni2_camera_node.dir/ros/openni2_camera_node.cpp.o
/tmp/ccU7U6HY.s: Assembler messages:
/tmp/ccU7U6HY.s:82: Warning: swp{b} use is deprecated for this architecture
Linking CXX executable /home/pi/catkin_ws/devel/lib/openni2_camera/openni2_camera_node
/home/pi/catkin_ws/devel/lib/libopenni2_wrapper.so: undefined reference to `oniStreamRegisterNewFrameCallback'
/home/pi/catkin_ws/devel/lib/libopenni2_wrapper.so: undefined reference to `oniStreamGetProperty'
/home/pi/catkin_ws/devel/lib/libopenni2_wrapper.so: undefined reference to `oniDeviceIsCommandSupported'
/home/pi/catkin_ws/devel/lib/libopenni2_wrapper.so: undefined reference to `oniDeviceSetProperty'
I couldn't use apt-get install libopenni2-dev, which is why I compiled everything from source; the OpenNI2 sample apps such as SimpleRead are working.
I also added something to the CMakeLists.txt like that:
set(PC_OPENNI2_LIBRARIES /home/pi/OpenNI2/)
include_directories(${PC_OPENNI2_LIBRARIES}/Include)
include_directories(${PC_OPENNI2_LIBRARIES}/Source)
Thanks for any advice in advance
Originally posted by honky on ROS Answers with karma: 51 on 2014-05-09
Post score: 0
Answer:
After adding this to my CMakeLists.txt it worked:
include_directories(/home/pi/kalectro/OpenNI2/Include)
include_directories(/home/pi/kalektro/OpenNI2/Source)
##Find pack for OpenNI2
find_path(OpenNI2_INCLUDEDIR
NAMES OpenNI.h
HINTS /home/pi/kalectro/OpenNI2/Include)
find_library(OpenNI2_LIBRARIES
NAMES OpenNI2 DummyDevice OniFile PS1090
HINTS /usr/lib/ /usr/lib/OpenNI2/Drivers
PATH_SUFFIXES lib)
message (STATUS ${OpenNI2_LIBRARIES})
include_directories(${OpenNI2_INCLUDEDIR})
link_directories(${OPENNI2_DIR}/Redist)
include_directories(${OPENNI2_DIR}/Include)
Originally posted by honky with karma: 51 on 2014-05-09
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 17902,
"tags": "ros, ros-hydro, openni2-camera, openni2, raspberrypi"
} |
How to design internal wall separation in the PVC tube to use it as a water tanker | Question: I am working on the design of a column which contains the water tank and the pump + electronics at the bottom.
I have tried different solutions and they failed. I would like your advice on this R&D problem, with the goal of achieving robustness, an easy manufacturing process, and low cost.
Objective:
The user puts the tube in a vertical position, as a column.
The tube is separated inside into 2 parts by a false bottom. The upper part is used as a water tank. The lower part contains the water pump.
The false bottom contains the hole which is used to pass the water from the upper section to the lower section with the help of the pump. The water pipe
fitting hose connector is fixed through the hole.
Requirements: Tube + false bottom
The material of the tube shall be one of the next list:
PVC, uPVC, ABS, Acrylic/PMMA, PC/Polycarbonate
The material of the wall separation shall be one of the next list:
PVC, uPVC, ABS, Acrylic/PMMA, PC/Polycarbonate
The tube shape shall be cylindrical
The tube shall be separated inside into 2 sections: the upper section to contain the water (used as a water tank), and the lower section to contain the water pump
The false bottom shall not pass the water from the upper section to the lower
The false bottom thickness shall not exceed 6 mm
The upper tube section shall contain minimum 4 liters of volume
The upper tube section shall contain the water with no leakage
The lower tube section shall contain maximum 1 liter of volume
The lower tube section height shall be maximum 100 mm
The tube diameter shall be >= 110 and <= 150 mm
The tube height shall be >= 400 and <= 700 mm
The tube wall thickness (the difference between outer diameter and inner diameter) shall be >= 2 and <= 6 mm
The false bottom shall be resistant to the weight of 10 kg placed on the wall separation inside the upper section
The false bottom shall be functional after 10 drop tests with different orientations from the height of 1 m
I have designed this custom plug and 3D printed it with different sealing ring diameters. The problem is that:
It is very difficult to insert the plug with sealing in the PVC pipe (12 cm diameter).
The PVC pipe is not uniform, so the sealing does not work perfectly and there is water leakage
This solution doesn't seem to be a good one for production
I am going to search more on:
existing plumbing solutions to separate the water from the dry section
factories if they are able to produce the tube with the inner separation wall
extruded plastic disks which could be fixed to the tube walls by the heater
Do you know any existing plumbing solutions, or could you advise me on how to solve this problem in the most efficient manner?
Note: The solution with a silicone sealing is acceptable for a prototype but not the good one for production
Answer: Consider the use of a double-lip seal.
Figure 1. A double-lipped seal in the false base of the tube.
This can then be inserted from the top of the tube to take advantage of the tapered edge. It seems to me that increasing the pressure in the top side will increase the seal tightness but check with someone that uses these things. (I'm an electrical engineer.)
Figure 2. Having three legs will simplify horizontal alignment of the base.
Figure 3. Doing it this way avoids the internal seal problem completely.
With the arrangement of Figure 3 the assembly, sealing and inspection become much simpler. With PVC, for example, a solvent can be used to fuse the plastics. This used to be quite common on plumbing waste pipes, for example. In addition, if it does leak it's visible and external to the electronics. I would consider dipping the end of the pipe into a flat-bottomed container of solvent at the right depth just prior to assembly. There is a fire risk and solvent risk here so you require ventilation and a flame-proof dispenser.
Figure 4. A bench can with spring loaded flame arrester plate. Image: JustRite.
Read the description in the linked article for details.
Figure 5. This design while it makes the two parts visible has the aesthetic advantage that there is no step in the outline. The disadvantage is increased difficulty in solvent bonding. | {
"domain": "engineering.stackexchange",
"id": 3355,
"tags": "mechanical-engineering, fluid-mechanics, applied-mechanics, pipelines, product-engineering"
} |
Matter-antimatter annihilation | Question: What happens if different size atoms meet? We've just created anti-helium, I think. What if one atom of anti-helium collided with one atom of iron? Would some of the iron be left over as a new element?
Answer: Yes, some of the nucleons (protons and neutrons) from the iron would remain because of baryon number conservation. If you start with 4 antibaryons and 56 baryons, you're most likely going to end up with 52 baryons. (Of course, you could end up with 2 antibaryons and 54 baryons, or whatever, but it's impossible to end up with 0 baryons.)
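The baryon-number bookkeeping behind those figures is just a count (this is an illustrative tally, not a simulation of the annihilation dynamics):

```python
# Baryon number is conserved: each annihilation removes one baryon and
# one antibaryon, so the net count fixes how many baryons must survive.
baryons_fe = 26 + 30       # protons + neutrons in iron-56
antibaryons_he = 2 + 2     # antiprotons + antineutrons in anti-helium-4
net_baryon_number = baryons_fe - antibaryons_he
print(net_baryon_number)   # 52 baryons must remain in some form
```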
However, you can't just say "oh, I started with 26 protons and 2 antiprotons, and 30 neutrons and 2 antineutrons, so I must end up with 24 protons and 28 neutrons, or chromium-52". Although it's theoretically possible to end up with a chromium-52 nucleus and a bunch of gamma rays, that must be very unlikely because there are many other allowed possibilities for the products of the reaction, and the final state with a 52Cr nucleus and gamma rays corresponds to only a small fraction of the parameter space of possible products.
More likely, the 56Fe nucleus is going to explode into a bunch of smaller fragments because so much energy is released inside it. Most of these fragments will probably be smaller nuclei, but it's possible to get other kinds of particles too: pions, muons... anything there's enough energy to create can be created (as long as the whole process doesn't violate any conservation laws).
Also note that the numbers of protons and neutrons don't have to be separately conserved. It's completely possible to produce some smaller nuclei which are unstable and subsequently beta-decay, changing protons into neutrons or vice versa.
So basically the answer is "yes, some of the baryons would be left over as new elements, but there's no way to tell which ones and it will be a messy, complicated process". | {
"domain": "physics.stackexchange",
"id": 901,
"tags": "nuclear-physics"
} |
Designing a lowpass filter to minimize aliasing in pre-decimated streaming audio | Question: I need to apply a low-pass filter to PCM files. There are several methods such as FIR filters and IIR (Butterworth, Chebyshev...) filters, but it seems to me that applying a Fast Fourier transform and eliminating higher frequencies is the closest way to an ideal filter.
What is the fastest and closest to ideal filtering method?
It is required for anti-aliasing before changing the sampling rate of the sound (Fcutoff = Fs/2), and will be applied to every block of 1 s of data. The main requirement is that, after changing the sample rate, the new audio quality must be as close as possible to the original audio quality. (Not noisy.)
Thanks.
Answer: The best choice of filter depends on your specific application requirements. There are two basic choices: FIR and IIR. IIR will be much more efficient; however, it will result in phase distortions. The phase distortions are completely inaudible (unless it's a bizarre outlier case) but clearly measurable. So it depends on whether you can tolerate this or not.
In either case you need to decide how close you need to get to the new Nyquist frequency and how much aliasing noise you can tolerate. A typical example would be that you want the passband to extend to 90% of the new Nyquist frequency and that you would like your aliasing products to be below -80dB. Based on these specifications you can then design the appropriate filter. Other considerations include how much pass band ripple you can accept and if you have any constraints on maximum group delay and/or latency.
Here is an example: Let's say you want to downsample from 44.1 kHz to 32 kHz and the new Nyquist frequency is 16kHz. Going to 90% Nyquist (14400 Hz), with 0.1dB pass band ripple and 80 dB of attenuation at 16 kHz could be done with an elliptical filter of 9th order.
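Working the numbers of that example through (illustrative arithmetic only — the filter-order figure itself comes from a design tool, not from these formulas):

```python
# Spec arithmetic for downsampling to 32 kHz with a 90%-of-Nyquist passband.
fs_out = 32000
new_nyquist = fs_out / 2                       # 16000 Hz
passband_edge = 0.9 * new_nyquist              # 14400 Hz
transition_band = new_nyquist - passband_edge  # 1600 Hz for the roll-off
print(new_nyquist, passband_edge, transition_band)  # 16000.0 14400.0 1600.0
```

The narrower you make that 1600 Hz transition band (i.e. the closer the passband gets to Nyquist), the higher the filter order needed to reach the same 80 dB of stopband attenuation.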
As nibot has pointed out, zeroing FFT bins is a poor choice for a low pass filter since the resulting low pass has very big side lobes and aliasing rejection will be quite poor. It would also require a proper implementation of an overlap-add or overlap-save algorithm to deal with a continuous signal. | {
"domain": "dsp.stackexchange",
"id": 239,
"tags": "filters, audio"
} |
Redux: button click potentially fires 3 actions for different reducers | Question: I’m using Redux and thunk, and I have a list component:
class List extends Component {
render() {
//code
}
return(
<Aux>
<Button
type="button"
click={() => this.props.addNew(data)}
btnType="add" >
Add
</Button>
<Button
type="button"
click={() => this.props.deleteB(id, type)}
btnType="Delete" >
Delete
</Button>
<Button
type="button"
click={() => this.props.editStart(id)}
btnType="edit" >
Edit
</Button>
<Button
type="button"
click={() => this.props.editSave(data, type)}
btnType="save" >
save
</Button>
</Aux>
)
}
const mapStateToProps = state => {
return {
editing: state.list.editing
};
};
const mapDispatchtoProps = dispatch => {
return {
deleteB: (id, type) => {dispatch(actions.deleteB(id, type))},
editStart: (id) => {dispatch(actions.editStart(id))},
editSave: (data, type) => {dispatch(actions.editSave(data, type))},
addNew: (data) => dispatch(actions.addNew(data))
};
};
export default connect(mapStateToProps, mapDispatchtoProps)(List);
A reducer
const addNew = ( state, action ) => {
//immutable state
//add new
//return updated state
};
const deleteB = ( state, action ) => {
//immutable state
//delete
//return updated state
};
const editStart = ( state, action ) => {
//immutable state
//update editing to true
//return updated state
};
const editSave = ( state, action ) => {
//immutable state
if(action.type === state.editing.type) {
//update object with user data
} else {
//delete old data same code of deleteB
//add data user like addNew
//update editing to false
}
//return updated state
};
const reducer = ( state = initialState, action ) => {
//switch case
};
export default reducer;
When the user clicks on the “save” button, if the type of the data changed, the reducer uses the same functions that the “add” and “delete” buttons use.
I’ve created a utility function for deleting and adding data, but it still looks ugly.
I was wondering if when the user clicks “Save” it is better to call a function in the List component that calls “addNew” and “deleteB” and finally “editSave” to only update the state for the “editing” state property.
I think that a reducer needs to know only the state it needs to update, so editSave should only update the editing slice of state, and I need to reuse the other reducers.
But I don’t know if there is a better way, or if I wrote a bad pattern for the reducers.
Answer:
I was wondering if when the user clicks “Save” it is better to call a function in the List component that calls “addNew” and “deleteB” and finally “editSave” to only update the state for the “editing” state property.
I would avoid this. Ideally, all your logic should live either in the reducer or in the thunk, NOT in components. This way, your logic is in one place, is easily and uniformly testable, and is not affected by things such as re-implementation of the UI (i.e. changing of components, replacing the UI library, etc).
Reducers are just pure functions - they take in old state and an action, and they return new state. As long as you follow that basic principle, you're good. When you call a reducer from another reducer, you're effectively just composing reducers - which isn't a strange concept in Redux. So it's not weird to have something like this:
export const add = (state, action) => { ... }
export const edit = (state, action) => { ... }
// "delete" is a reserved word in JavaScript, so name this reducer something else:
export const remove = (state, action) => { ... }
export const someWeirdCaseOfAddEditDelete = (state, action) => {
if(action.type === state.editing.type) {
return edit(state, action)
} else {
const intermediateState = add(remove(state, action), action)
const finalState = doMoreStuffWith(intermediateState, action)
return finalState
}
}
As an added bonus, when the time comes when someWeirdCaseOfAddEditDelete starts to deviate from your regular add, edit and delete, you can simply replace its implementation and its tests without having to meddle with the other three reducers.
To address your concerns in the comments:
I thought that was an anti-pattern because it's like dispatching an action inside a reducer that got executed from an action.
Calling dispatch in a reducer is the antipattern. But composing functions (i.e. breaking up your reducer into subfunctions that deal with specific parts of the state) is totally fine.
It may be easier to wrap your head around the idea by dropping the assumption that the common operation is a reducer. Think of it as just a common utility function:
const thatCommonFunction = (somePartOfTheState) => ({
thatOnePropertyYouNeedToChange: { ... }
})
const reducer1 = (state, action) => ({
...state,
...thatCommonFunction(state.some.part)
})
const reducer2 = (state, action) => ({
...state,
...thatCommonFunction(state.some.part)
})
Every time I call "deleteB" or "add" from "someWeirdCaseOfAddEditDelete" the state will update, before returning the intended "finalState".
If the state updates before all of your reducers return the new state to Redux, then there's something wrong with your code.
Redux receives the updated state only after it runs through all the reducers. The state should never update while execution is still in the reducer.
Under the hood, at the very basic, Redux does something like this:
const createStore = (rootReducer, initialState = {}) => {
let currentState = initialState;
const subscriptions = [];
return {
dispatch(action) {
// This part is where control is handed over to your reducers.
currentState = rootReducer(currentState, action);
// Only after the line above should your state be updated.
// After that one line, the UI is informed of the update.
subscriptions.forEach(s => s(currentState))
},
subscribe(fn) {
subscriptions.push(fn)
}
})
} | {
"domain": "codereview.stackexchange",
"id": 33335,
"tags": "javascript, react.js, redux"
} |
Can I speed up this multi-LCSS (Longest Common Substring) algorithm? | Question: I'm creating a plagiarism detection algorithm in Python, which scores how much of document A has been plagiarised in document B, and returns all the copied sections in B that were plagiarised from A.
Here's my working version:
from textdistance import LCSStr
DEFAULT_MIN_SUBSTRING_LEN = 30
def get_doc_lcs_matches(
doc1: str,
doc2: str,
min_substring_len: int = DEFAULT_MIN_SUBSTRING_LEN
):
get_lcs = LCSStr()
while True:
# Find a satisfactory LCS:
lcs = get_lcs(doc1, doc2)
lcs_len = len(lcs)
if lcs_len < min_substring_len:
break
yield lcs
# Remove the found LCS substring from both strings:
lcs_idx_1 = doc1.find(lcs)
lcs_idx_2 = doc2.find(lcs)
doc1 = doc1[:lcs_idx_1] + doc1[lcs_idx_1 + lcs_len:]
doc2 = doc2[:lcs_idx_2] + doc2[lcs_idx_2 + lcs_len:]
def get_similarity(
doc1: str,
doc2: str,
min_substring_len: int = DEFAULT_MIN_SUBSTRING_LEN,
stop_score: float = None
):
matches = []
doc1_len = len(doc1)
count = 0
score = 0.0
for match in get_doc_lcs_matches(doc1, doc2, min_substring_len):
matches.append(match)
count += len(match)
score = count / doc1_len
if stop_score and score >= stop_score:
break
return {
"count": count,
"score": score,
"matches": matches
}
I've added stop_score as a way to stop early if a certain plagiarism threshold has been reached, which helps speed this up considerably for my use case.
My problem is, however, this is still extremely slow for large documents (1 MB+), and I have to process quite a few!
I'm wondering if anyone can suggest anything that would improve efficiency? I'm also open to alternative solutions to my original problem.
--- UPDATE ---
@Pseudonym suggested tokenising. I considered this, but it won't suit my use case unfortunately:
The documents are technical in nature and are about the same subject, so the vocabulary set is rather small.
The kind of plagiarism I'm looking for is exclusively the copy & paste variety. I don't particularly care to catch slight modifications, so I don't need fuzzy or semantic matching.
Hence my current algorithm focuses on LCSS.
Answer: Questions about specific programs or libraries are off-topic here, but let's talk about text similarity algorithms.
I have two suggestions that may help.
First off, consider tokenising the text into words first. It obviously depends on your goals, but I suspect that a score based on the number of matching words would be more informative than a score based on the number of matching characters, because it ignores whitespace. Case-folding is probably a good idea, and you could also consider stemming, if you want to catch near-similarities like changing the tense of a verb.
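A minimal sketch of the tokenising step in plain Python (stemming would need an extra library such as NLTK, which is an assumption here):

```python
import re

def tokenize(text: str) -> list[str]:
    # Case-fold and keep only word characters, so whitespace and
    # punctuation differences no longer produce spurious mismatches.
    return re.findall(r"[a-z0-9]+", text.lower())

# Two "copies" that differ only in case and spacing now match token-for-token:
assert tokenize("The  QUICK, brown fox.") == tokenize("the quick brown fox")
```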
Second, if your goal is to find long runs of matches, LCSS is probably not the best way of doing this for a reason that you may not have considered: no matter how you compute it, a longest common substring may not be a cleanest common substring.
Consider two documents: Document A contains 10 'a' characters, and document B contains 20 'a' characters.
What you might hope is that a LCSS algorithm would match the 10 characters in document A with the first or last 10 characters in document B. But it's also a correct answer to match every second character in document B. That is, after all, still a common substring with the maximum possible length of 10 characters.
This specific simple case probably won't happen with most efficient LCSS algorithms, but it happens in practice on more complicated examples. The GNU diff program actually has a post-pass on its LCSS algorithm to make the diff "pretty".
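An alternative that avoids these artifacts is a generalized suffix array over both documents; here is a naive sketch (O(n^2 log n) construction for clarity; MB-sized inputs need a linear-time builder, and the separator character is assumed to appear in neither document):

```python
def longest_common_substring(a: str, b: str) -> str:
    sep = "\x00"  # assumed to occur in neither document
    s = a + sep + b
    boundary = len(a)  # suffixes starting before this index belong to a

    # Naive suffix "array": sort all suffix start positions lexicographically.
    suffixes = sorted(range(len(s)), key=lambda i: s[i:])

    def lcp(i: int, j: int) -> int:
        # Length of the longest common prefix of the two suffixes;
        # it can never cross the unique separator character.
        n = 0
        while i + n < len(s) and j + n < len(s) and s[i + n] == s[j + n]:
            n += 1
        return n

    # The longest substring common to a and b shows up as the longest
    # common prefix of two *adjacent* suffixes from different documents.
    best_len, best_pos = 0, 0
    for x, y in zip(suffixes, suffixes[1:]):
        if (x < boundary) != (y < boundary):
            n = lcp(x, y)
            if n > best_len:
                best_len, best_pos = n, x
    return s[best_pos:best_pos + best_len]
```

Unlike subsequence-style matching, this only ever reports contiguous runs, so the 10-a's example above comes back as one clean block.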
A better approach would be to use suffix arrays. If you sort both documents into the same suffix array, which takes linear time and space, then substrings end up together in the sorted order, and finding long common substrings is straightforward. | {
"domain": "cs.stackexchange",
"id": 21890,
"tags": "python, longest-common-substring"
} |
Find hamming codewords in r=2^k dimensions | Question: There is the original problem, and an equivalent problem.
The equivalent problem: construct a set $A$ that contains bit arrays of length $r-1$, where $|A|=2^{r-1}/r$ and $hamming \space distance (i, j) = 3 \quad \forall i, j \in A, \space i \neq j$.
Misha Lavrov gave an excellent answer on Math.SE that solves the original problem. However, I don't get his construction.
If you could prove that his algorithm solves the problem for any $r=2^k$, or provide your own algorithm that solves the problem / equivalent problem for any $r=2^k$, I'd really appreciate it.
The original problem: choose $2^r/r$ bit arrays, where each bit array is of length $r$, and $r=2^k$ for all $k \ge 2$, such that the chosen arrays can be split into 2 sets, $A$ and $B$, where
$|A|=|B|=2^{r-1}/r$
$\forall x \in A.(\exists y \in B.(hamming \space distance (x, y) = 1))$
and $\forall y \in B.(\exists x \in A.(hamming \space distance (x, y) = 1))$
$hamming \space distance (i, j) = 3 \quad \forall i, j \in A, \space i \neq j$
$hamming \space distance (i, j) = 3 \quad \forall i, j \in B, \space i \neq j$
Here is the answer for $k = 2$, where blue circles are chosen bit arrays.
(P.S. Instead of $n=2^r$ dimensions in my question on Math.SE, I used $r=2^k$ here, because Misha's answer referred to bit arrays of length $r$. I hope this is not too confusing.)
Answer: I completely misunderstood Misha's answer. Here is my updated understanding of the equivalent problem. All quotes are from his answer.
Let $S = \{0,1\}^{r-1}$, $|S| = 2^{r-1}$ bit arrays, each bit array has length $r-1$.
Let $S_i = \{x\ |\ x \in S\ and\ x_i = 1\}$, where $x_i$ is the $i^{th}$ bit of bit array $x$. We set the $i^{th}$ bit of array $x$ to $1$.
An even number of bit arrays is always chosen from $S_i$. If initially an odd number of bit arrays is chosen, then choose the bit array corresponding to the parity bit as well, so finally an even number of bit arrays are chosen.
This agrees with Misha's answer, as the coordinates numbered off exactly match the parity-bit coverage.
Among the coordinates numbered $0\dots001, 0\dots011, 0\dots101, 0\dots111, \dots, 1\dots111$, an even number are $1$'s.
Among the coordinates numbered $0\dots010, 0\dots011, 0\dots110, 0\dots111, \dots, 1\dots111$, an even number are $1$'s.
Repeat step 2 for all $i \in [1,\ r-1]$.
Let $A$ be the set of chosen bit arrays from $S_i$ for all $i$, where chosen bit arrays are hamming codewords.
The Hamming code is an even-parity, single-error-correcting code for arrays of length $r - 1$.
The following describes single error correction, where "vertex" means "bit arrays", and "picked vertices" means "bit array in $A$" or "hamming codeword".
If a vertex is not one of the picked vertices, find all the parity conditions that fail, let $i$ be the coordinate with a $1$ in all the bits corresponding to the failed conditions; change the $i^{\text{th}}$ coordinate, and you will get a picked vertex.
This solves the equivalent problem of finding set $A$.
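As a concrete sanity check of the construction for $k=3$ ($r=8$): the set $A$ is then the classic Hamming(7,4) code, with $2^{7}/8 = 16$ codewords of length 7 and minimum pairwise distance 3 (reading the distance condition as a minimum distance). A Python sketch:

```python
from itertools import product, combinations

r = 8  # r = 2^k with k = 3; bit arrays have length r - 1 = 7

def is_codeword(bits):
    # Even parity over each group S_i: the (1-indexed) positions whose
    # binary representation has bit i set, for i = 0, 1, 2.
    for i in range(3):
        group_sum = sum(b for pos, b in enumerate(bits, start=1) if (pos >> i) & 1)
        if group_sum % 2 != 0:
            return False
    return True

A = [bits for bits in product([0, 1], repeat=r - 1) if is_codeword(bits)]

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))
```

Checking `len(A)` gives the expected $2^{r-1}/r = 16$ codewords, and the minimum pairwise Hamming distance over `A` is 3.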
Now, to solve the original problem.
(I originally thought the $00\dots00^{\text{th}}$ coordinate was for double error detection, and was wondering why I needed that in a single error correction code. I was completely wrong.)
Given a vertex, if it is one of the picked vertices, change the $00\dots00^{\text{th}}$ coordinate, and you will get another picked vertex.
This is equivalent to prepending a new bit, $b$, for each bit array within the set $A$ that we derived earlier, thus making all bit arrays have length $r$.
Arbitrarily let bit $b = 0$. Define $A_{new} = \{b\ x_1\dots x_{r-1}\ |\ x \in A\}$.
Let $B = \{\neg b\ x_1\dots x_{r-1}\ |\ x \in A\}$, so all bit arrays are same as $A_{new}$, except for every array, the $b$ bit is flipped.
This fits well with Misha's explanation that bit arrays from $A_{new}$ and $B$ are nearly-identical, because the only difference is the flipped bit $b$.
the $00\dots00^{\text{th}}$ coordinate of a vertex [...] is the "free" coordinate where we pick a vertex with $0$ and also pick a nearly-identical vertex with $1$.
This solves the original problem, such that the $2^r/r$ bit arrays are in sets $A_{new}$ and $B$. | {
"domain": "cs.stackexchange",
"id": 9403,
"tags": "graphs, optimization, combinatorics, colorings, hamming-code"
} |
Which CFTs have AdS/CFT duals? | Question: The AdS/CFT correspondence states that string theory in an asymptotically anti-De Sitter spacetime can be exactly described as a CFT on the boundary of this spacetime.
Is the converse true? Does any CFT in a suitable number of spacetime dimensions have an AdS/CFT dual? If no, can we characterize the CFTs which have such a dual?
Answer: The answer is not known, but many believe it is: "Yes, every CFT has an AdS dual." However, whether the AdS dual is weakly-coupled and has low curvature -- in other words whether it's easy to do calculations with it -- is a different question entirely. We expect, based on well-understood examples (like $\mathcal N=4$ SYM dual to Type IIB strings on $\mathrm{AdS}_5 \times S^5$), that the following is true:
For the AdS dual to be weakly-coupled, the CFT must have a large gauge group.
For the AdS curvature scale to be small (so that effective field theory is a good approximation), the CFT must be strongly-coupled. In well-understood examples, the CFT has an exactly marginal coupling which when taken to infinity decouples stringy states from the bulk spectrum. By contrast, at weak CFT coupling, the AdS dual description would involve an infinite number of fields and standard EFT methods would not apply. (This doesn't necessarily mean calculations are impossible: we would just need to better understand string theories in AdS -- something which is actively being worked on.)
As far as I know, appropriate conditions for CFTs without exactly marginal couplings to have good AdS EFTs are not known. Also, well-understood AdS/CFT dual pairs where the CFT violates one or both of the above conditions are scarce. | {
"domain": "physics.stackexchange",
"id": 3377,
"tags": "string-theory, research-level, conformal-field-theory, ads-cft"
} |
How to compute disparity on a specific region of interest only? | Question:
Is it possible, using stereo_image_proc, to compute disparities on a specific region of interest only?
The setup is as follow:
we detect a color blob in an image
we use stereo to determine its 3d position
It would greatly improve performances if we could just run the stereo on a subwindow. Is there a way to tell stereo_image_proc to do so?
We could probably just pass to stereo_image_proc a new pair of images matching the zone we want but it would be cumbersome to recompute the appropriate camera parameters and crop the images by hand...
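For what it's worth, if you do crop by hand, adjusting the intrinsics is simple for a pinhole model: cropping a sub-window only shifts the principal point, while the focal lengths are unchanged. A plain NumPy sketch (not a ROS API):

```python
import numpy as np

def crop_camera_matrix(K, x_offset, y_offset):
    # Intrinsic matrix for a sub-window whose top-left corner is at
    # (x_offset, y_offset) in the original image: only (cx, cy) shift.
    Kc = np.array(K, dtype=float)
    Kc[0, 2] -= x_offset
    Kc[1, 2] -= y_offset
    return Kc
```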
Originally posted by Thomas on ROS Answers with karma: 4478 on 2012-03-29
Post score: 1
Answer:
AFAIK the roi parameter in CameraInfo.msg can be used to choose a region of interest. Try republishing a camera info with the roi set to your object region, and use that to feed stereo_image_proc. If you just need some sampled 3D points, you should use StereoCameraModel::projectDisparityTo3d. This is way faster than constructing a whole disparity image.
Originally posted by Stephan with karma: 1924 on 2012-03-30
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 8786,
"tags": "stere-image-proc"
} |
ROS Planner Trajectory Sampling Rate | Question: When using ROS Noetic (ROS2 answer great though too) with the Pilz industrial planner, I noticed that the plan output trajectory has 100ms between samples. I searched the web (and the source-code) but couldn't figure out where this 100ms comes from. Is there a parameter to adjust it or is it hard-coded somewhere?
I didn't see a parameter for the other planners (CHOMP, STOMP, ...) either. I get that post-processing can "fill-in" these trajectories, but I'm curious how to increase the planner output density instead.
The pilz source-code has a "plan(...)" function that takes a sampling rate, which seems to be called by a "generate(...)" function which also takes a sampling rate, but I couldn't figure out what calls the "generate(...)" function.
Extra tags: frequency, duration, sampling rate, sample rate, hz, publish rate,
Answer: In case of ROS2
Moveit2 planners inherit the planning_interface::PlanningContext and planning_interface::PlannerManager classes as shown here.
When a planning problem has to be solved, the planning_interface::PlanningContext::solve method of the PlanningContext of the planner is called.
In case of the pilz planner the pilz_industrial_motion_planner::PlanningContextBase inherits from planning_interface::PlanningContext and pilz_industrial_motion_planner::CommandPlanner inherits from planning_interface::PlannerManager.
The solve method of pilz_industrial_motion_planner::PlanningContextBase directly calls the generate method of a pilz_industrial_motion_planner::TrajectoryGenerator, which then calls its own plan method (note: the plan method is virtual in the TrajectoryGenerator class and only implemented in the TrajectoryGenerator{PTP|LIN|CIRC} classes respectively).
Depending on the used motion planning strategy of pilz (e.g. point-to-point motion, linear trajectory or circular trajectory) the generator then plans the motion.
In summary, a simplified call stack looks like this (types are left out and names shortened):
PlanningContext::solve(res)
↓ is implemented by pilz
PlanningContextBase::solve(res)
↓ calls
TrajectoryGenerator::generate(scene, req, res, sampling_time=0.1)
↓ calls
TrajectoryGenerator::plan(scene, req, plan_info, sampling_time, joint_trajectory)
where res is a reference to the motion plan response, scene is the planning scene of the context, req is the motion planning request of the context, sampling_time is the sampling rate of your question and plan_info is a pilz collection of some scene and req information.
Note that the sampling_time parameter is introduced in the TrajectoryGenerator::generate method with a default value of 0.1 (the 100 ms you mentioned) and is forwarded unchanged to the call of TrajectoryGenerator::plan; the point-to-point, linear and circular trajectory generators all end up using that default, and the value is never read from anywhere such as a ROS parameter. Thus the value appears truly hardcoded in ROS2.
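Since the value is hardcoded, densifying the trajectory after planning is the practical workaround. A generic linear-interpolation sketch in plain NumPy (not the MoveIt time-parameterization API):

```python
import numpy as np

def resample_trajectory(times, positions, dt):
    """Linearly interpolate joint positions onto a denser uniform grid.

    times:     (N,) waypoint timestamps in seconds
    positions: (N, J) joint positions per waypoint
    dt:        new sample period, e.g. 0.01 s for 100 Hz instead of 10 Hz
    """
    times = np.asarray(times, dtype=float)
    positions = np.asarray(positions, dtype=float)
    # Tiny epsilon keeps the final waypoint despite floating-point rounding.
    new_times = np.arange(times[0], times[-1] + 1e-9, dt)
    new_positions = np.column_stack(
        [np.interp(new_times, times, positions[:, j])
         for j in range(positions.shape[1])])
    return new_times, new_positions
```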
The implementation of moveit for ROS1 is different - so it still might be possible there. | {
"domain": "robotics.stackexchange",
"id": 39018,
"tags": "ros, moveit, ros-industrial"
} |
rosjava builds fail, says I'm missing ant, but I'm not | Question:
I had typed up a long explanation, but the webpage did something weird and I lost it.
Anyway, I have two machines, a desktop and a laptop, both running linuxmint 12. The desktop installed correctly and works well with rosjava. The laptop always fails during builds. I have installed rosjava several different ways (including the exact method used on the desktop machine, from which I took notes during installation). I also tried installing rosjava-core from synaptic, and following this tutorial, except using electric instead of diamondback: http://www.cs.bham.ac.uk/internal/courses/int-robot/2011/notes/install.php
All of my builds fail when they run ant. If I run rosmake rosjava --rosdep-install, I get the following:
Failed to find rosdep ant for package rosjava on OS:ubuntu version:11.10
Failed to find rosdep boost for package rosjava on OS:ubuntu version:11.10
[ rosmake ] rosdep install failed: Rosdep install failed
When I run my package I get the following:
...BUILD FAILED
/home/embedded/ros_workspace/rosjava_core/rosjava/build.xml:42: Compile failed; see the compiler error output for details.
Total time: 4 seconds
Executing command: ['ant'] ...
ANT IS INSTALLED.
ant -version
Apache Ant(TM) version 1.8.2 compiled on August 19 2011
I even copied the exact same (working) package from the working machine to the laptop, and got the same failures.
Any help would be very much appreciated. I'm pulling my hair out over this thing. Meanwhile, I'll be looking for a more appropriate tool to work on this laptop... Like a large hammer.
Here is my package:my_package.tar.gz.jpg
Oh, and I have the correct bootstrap line in the Makefile.
Originally posted by morrowsend on ROS Answers with karma: 56 on 2012-01-23
Post score: 0
Answer:
The rosdep error is not critical; it just means that ROS doesn't know about the correct package for your Ubuntu version. Just compile without the --rosdep-install flag, i.e. rosmake rosjava. To me the error doesn't look like the system cannot find ant; I don't see anything like ant missing when you are compiling. In particular, the message says that the error happens in an ant file, which indicates that ant was actually called.
After calling rosmake once on your package, you can directly call ant to just compile it. Did you try that? Also, you didn't paste the compiler error output which might really help.
Your current code will not compile against current rosjava. Rosjava has been refactored a lot over the last few weeks and the NodeMain interface changed. DefaultNodeFactory is gone, too.
EDIT:
The output definitely shows a problem when compiling rosjava:
...
[javac] RosoutLogger.java:188: package org.ros.message.rosgraph_msgs does not exist
[javac] publish(org.ros.message.rosgraph_msgs.Log.FATAL, message);
[javac] ^
[javac] RosoutLogger.java:196: package org.ros.message.rosgraph_msgs does not exist
[javac] publish(org.ros.message.rosgraph_msgs.Log.FATAL, message, t);
[javac] ^
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] 24 errors
Some of the required messages could not be found. There is a certain chance that the following steps could work:
Check that rosgraph_msgs is installed on your system: 'rospack find rosgraph_msgs' should print a path.
Remove all build leftovers from previous rosjava builds:
rm -rf ~/.ros/rosjava
Rebuild everything:
rosmake --pre-clean rosjava
Now, the compiler shouldn't complain anymore. However, your package isn't built yet. You can try compiling it with 'rosmake my_package'. With your current code, this will fail however because your package uses a deprecated rosjava API.
Originally posted by Lorenz with karma: 22731 on 2012-01-23
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by morrowsend on 2012-01-24:
Here is my bash output: http://www.pastie.org/3245781 and here is the log file: http://www.pastie.org/3245797 I've been trying things in the time between my post, so this log file is slightly different than before.
Comment by Lorenz on 2012-01-24:
Can you please post the complete error output?
Comment by morrowsend on 2012-01-24:
I've tried compiling without -rosdep--install, but it still fails. I removed the code in the src folder and tried rosmake my_package. This failed exactly the same as before. I started from scratch using the tutorial on the rosjava page with the same results as well. | {
"domain": "robotics.stackexchange",
"id": 7967,
"tags": "ros, linuxmint"
} |
ROS2 on Windows with NVidia GPU | Question:
Hi everyone!
I'm trying to understand the best way to setup my environment for ROS2+Gazebo.
Up until now I was using an Ubuntu 22.04 VM with Virtualbox to get simple exercises up and running in ROS2, with no major problem.
Since I have to go up a step and start a more complex system, plus I'll need Gazebo for simulations, I thought about how to improve the environment, given that there is always some kind of performance issue using VMs.
It seems my options are:
dual boot
Docker
Git Bash from CMD, as if I was actually on Unix
Since for simplicity of use, budget (can't get another laptop) and resources (my laptop's SDD is not that big) the dual boot is not an option, I moved to Docker. I set up the container (osrf/ros), I installed XLaunch for the GUI and successfully integrated Docker with WSL2. Everything following this guide
However, I can't seem to use my Nvidia graphic card. I've tried launching the turtlesim node + teleop, but it's laggy and slow and Task Manager does not show the process as executing with the Nvidia card.
Has anyone managed to successfully setup a similar environment? Or is there any other option that is actually better than this?
Thanks in advance!
Originally posted by slim71 on ROS Answers with karma: 18 on 2023-04-22
Post score: 0
Answer:
To anybody interested, I gave up.
Getting everything set up on Windows is such a pain, and gives no benefits.
I've found a SSD drive empty and setup a detachable dual-boot with Ubuntu there.
Originally posted by slim71 with karma: 18 on 2023-05-20
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 38356,
"tags": "ros, gazebo, ros2, windows10, nvidia"
} |
Should I use ideal or non-ideal filters for offline filtering? | Question: I've got an offline signal that I want to high-pass filter. Should I use a Butterworth filter, or could I use the fact that the whole signal is known and use an ideal (step) filter?
Using a high-degree Butterworth filter is, I think, practically the same thing as using an ideal filter, and a higher degree means a better filter if there's no implementation and processing cost. Could the sharp change in impulse response cause a problem, though?
The signal is a bioelectric signal that is almost periodic, and I've got 100 periods or so.
Answer: For most practical applications, filter responses that approach the behavior of the ideal "brick-wall" filter are overkill. I know it's tempting to try to design a really sharp filter when you've got all the time in the world (i.e. for offline applications), but if you really look at the characteristics of your problem, you can most likely get away with something much more reasonable.
Another good reason not to use a brick-wall filter: its impulse response is infinite in length, and it has the form of a $sinc$ function. When you apply such a filter to your signal, you may notice long-duration ringing in the signal at the output of the filter. This comes from the fact that the $sinc$ function in the filter's impulse response is infinitely long and doesn't decay very fast; the resulting effect is most likely not desirable. In general, filters with long impulse responses are not very suitable for analysis of short signals, as the filter output is dominated by the transient behavior of the filter.
The aforementioned effects aren't limited strictly to ideal filter responses; if you design a filter with a really sharp cutoff, it's still possible that you will get time-domain artifacts that you don't want. This intuitively makes sense, because you can view the frequency response of the "not-quite-ideal" filter as the response of the ideal filter convolved with a narrow main lobe that smears the response out a bit. Frequency-domain convolution is equivalent to windowing in the time domain; if you look at the impulse response of a very sharp filter that you've designed, it's likely to look much like a $sinc$ function with a window applied to taper the impulse response off at a faster rate than the $sinc$ does on its own.
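To make the windowing picture concrete, here is a windowed-sinc sketch (shown for a low-pass; the high-pass case is analogous, and scipy.signal.firwin automates essentially this):

```python
import numpy as np

def windowed_sinc_lowpass(cutoff_hz, fs_hz, numtaps):
    # The ideal low-pass impulse response is an infinite sinc; truncating it
    # and applying a window tapers the tails, trading a wider transition
    # band for much shorter ringing in the time domain.
    n = np.arange(numtaps) - (numtaps - 1) / 2.0
    fc = cutoff_hz / fs_hz                  # cutoff in cycles/sample
    h = 2.0 * fc * np.sinc(2.0 * fc * n)    # truncated ideal response
    h *= np.hamming(numtaps)                # window tapers the sinc tails
    return h / np.sum(h)                    # normalize for unity gain at DC
```

Because the taps are symmetric, the result is a linear-phase FIR filter, which is often exactly what you want for offline analysis.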
I give the same advice as with most filter-design problems: you should really try to tackle the problem as quantitatively as possible. Using intuition on what might seem like a good approach can often steer you down the wrong path. Instead, think about the following:
Where is my signal of interest in the spectrum?
What unwanted signals are in the spectrum? Where are they?
How large are the unwanted signals relative to the signal of interest? In order to accomplish my ultimate goal, how much must it be suppressed?
How much distortion can I tolerate on my signal of interest (both in amplitude and phase)?
What computational limitations do I have?
The first four questions will give you filter performance specifications that you can use with any number of design methods to arrive at filters that will achieve your goals. The last question is also important, and can be used to choose between different filter topologies (i.e. FIR versus IIR) to find one that is implementable for your application. | {
"domain": "dsp.stackexchange",
"id": 70,
"tags": "filters"
} |
What material reduces electromagnetic wave? | Question: I am looking for a material that can cover an antenna to reduce electromagnetic waves. Cement reduces the wave but is not a good choice in this case. Aluminium foil works too, but it is too strong and blocks all waves at such a high frequency. It would be better if it were adjustable in thickness or number of layers. Is there any proper choice?
Answer: See: http://journals.aps.org/prapplied/abstract/10.1103/PhysRevApplied.3.037001 for APS article on meta materials.
Depending on what you want, see commercial-quality absorbers at http://tdkrfsolutions.com/products/absorbers
Or the wiki small article on it, at https://en.wikipedia.org/wiki/Electromagnetic_absorbers
Bottom line, metals will reflect most RF, and you can try tuned band absorbers or some more broadband, depending on what you want. Lower freqs will be harder, you would need thicker absorbers.
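The thickness comment can be made quantitative via the skin depth, delta = sqrt(rho / (pi * f * mu)), over which fields in a conductor decay by 1/e; it grows as frequency drops, so lower frequencies need much thicker metal. A quick sketch (copper resistivity assumed):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def skin_depth(freq_hz, resistivity_ohm_m, mu_r=1.0):
    # delta = sqrt(rho / (pi * f * mu)); a shield needs several deltas
    # of thickness to attenuate effectively.
    return math.sqrt(resistivity_ohm_m / (math.pi * freq_hz * mu_r * MU0))

# Copper (rho ~ 1.68e-8 ohm*m): roughly 2 micrometers at 1 GHz,
# but roughly 2 millimeters at 1 kHz.
```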
Or you could disconnect the antenna, or place filters behind it (Google RF filters), or limiters.
So, not sure what you want this to cover the antenna for, maybe get smaller antenna, plenty of devices will lower the power in/out. If you say what you are trying to do maybe we can chime in on some answer. | {
"domain": "physics.stackexchange",
"id": 33995,
"tags": "electromagnetism, electromagnetic-radiation"
} |
Align Point clouds from multiple kinects | Question:
Hi,
I am using two Kinects to capture an object. In this scenario the cameras are rigidly attached to a surface and their poses are also fixed. I have obtained the relative camera poses using "camera_pose_calibration" package, calibrated for rgb images. I want to visualize the point clouds from both Kinects in Rviz but the point clouds i obtain are not aligned. In some cases I get one of the point clouds completely inverted with respect to the other when viewed from the /world as fixed frame in Rviz. In other cases the alignment is not satisfactory.
How can i achieve reasonable alignment between the point clouds from multiple Kinects and visualize the result in Rviz?
Additional info: I am using ROS fuerte and platform is Ubuntu 12.04. In Rviz I am displaying PointCloud2 and the depth registration is turned on so the topics are /kinect1/depth_registered/points and /kinect2/depth_registered/points respectively.
Originally posted by Faizan A. on ROS Answers with karma: 68 on 2013-05-04
Post score: 2
Original comments
Comment by georgebrindeiro on 2013-05-05:
Are you able to see both the /kinect1 and /kinect2 tf frames simultaneously?
Comment by Faizan A. on 2013-05-06:
yes. i would upload the result of tf view_frames here but i dont have enough karma so I uploaded it on Dropbox here
https://www.dropbox.com/s/cvn1nilsn089xyo/calibration_2Kinects.pdf
Comment by georgebrindeiro on 2013-05-09:
I think your problem is related to the fact that you have three separate tf trees. The point clouds are defined in terms of the depth_frame, but you only have the rgb_frames' position relative to world. It is reasonable to expect that this could be easily resolved, but unfortunately it isn't...
Comment by georgebrindeiro on 2013-05-09:
Ideally you would get the tf between /world and /kinectX_rgb_frame and things would connect in a graph-like form, but that's not possible currently. We can only have one parent to each child, as seen here: http://answers.ros.org/question/56646/relative-tf-between-two-cameras-looking-at-the-same-ar-m
Comment by Faizan A. on 2013-05-12:
That's true. It can also be verified. The /world is arbitrarily chosen, and sometimes during the configuration it was aligned perfectly with one of the /kinect frames. i.e. the /world and /kinect1 had the same coordinates and orientation. At the point the point clouds were perfectly aligned.
Comment by Faizan A. on 2013-05-12:
So do you have a work around? how can I align multiple point clouds using camera_pose_calibration?
As I understand you are also trying to achieve similar results. Were you able to achieve that using multiple AR markers to estimate camera poses relative to each other?
Any help would be appreciated.
Comment by georgebrindeiro on 2013-05-13:
I haven't continued in that direction yet because I have been involved in some other developments in my lab, but you should be able to write a node that reads in all clouds you are interested in merging, read all relevant transforms and publish a single cloud on a different topic.
Answer:
So I found a work around and posting it here so it would be useful to others.
First calculate the camera poses using a temporary namespace for the cameras. e.g i used kinect1_temp and kinect2_temp. Once the calibration file is saved you would have transforms from /world to /kinect1_temp_rgb_optical_frame and /kinect2_temp_rgb_optical_frame. Now launch the tf publisher and launch the cameras using namespaces that you would use later e.g. kinect1 and kinect2. Calculate the transforms from kinect1_rgb_optical_frame to kinect1_link. Use this transform to link kinect1_temp_rgb_optical_frame and kinect1_link. The corresponding tf tree can be seen here.
Since the tf tree does not support cycles, this is worked around using dummy namespaces, assuming the camera positions remain constant.
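The frame-chaining this workaround relies on is just composition of rigid transforms. A minimal numpy sketch (all numeric values below are made up for illustration and do not come from any real calibration):

```python
import numpy as np

def transform(yaw_deg, t):
    """4x4 homogeneous transform: rotation about z by yaw_deg, then translation t."""
    c, s = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = t
    return T

# /world -> kinect1_temp_rgb_optical_frame (as obtained from camera_pose_calibration)
T_world_rgb = transform(30.0, [0.5, 0.2, 1.0])
# kinect1_temp_rgb_optical_frame -> kinect1_link (the static transform measured once)
T_rgb_link = transform(0.0, [0.0, -0.02, 0.0])

# Chaining the two gives /world -> kinect1_link, which is what the extra static
# publisher provides so that both point clouds resolve into the /world frame.
T_world_link = T_world_rgb @ T_rgb_link

# A point expressed in kinect1_link, mapped to /world in one step or in two,
# must agree.
p_link = np.array([1.0, 2.0, 3.0, 1.0])
p_world_direct = T_world_link @ p_link
p_world_chained = T_world_rgb @ (T_rgb_link @ p_link)
```

The frame names (`T_world_rgb`, `T_rgb_link`) are illustrative labels for the transforms described above, not actual tf frame ids.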
Hope it helps someone.
Originally posted by Faizan A. with karma: 68 on 2013-05-21
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 14057,
"tags": "ros, rviz, camera-pose-calibration, pointclouds"
} |
Implementation of counting sort | Question: I'm trying to learn more about counting sort and I just implemented the example given in CLRS, my question is: How can I improve this code?
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
void counting_sort(int *, int, int);
int find_max(int *, int);
int main(void)
{
int l_size;
int i;
scanf("%d", &l_size);
int *num_list = calloc(l_size, sizeof(int));
for (i = 0; i < l_size; i++)
scanf("%d", &num_list[i]);
int max = find_max(num_list, l_size);
counting_sort(num_list, l_size, max);
puts("Sorted:");
for (i = 0; i < l_size; i++)
printf("%d ", num_list[i]);
printf("\n");
return 0;
}
void counting_sort(int *num_list, int l_size, int max)
{
int i, j;
int *count_list = calloc(max + 1, sizeof(int)); // auxiliary array C
int *sorted_list = calloc(l_size, sizeof(int));
for (i = 0; i < l_size; i++)
count_list[num_list[i]]++;
for (i = 1; i < max + 1; i++)
count_list[i] += count_list[i - 1];
for (i = l_size - 1; i >= 0; i--)
{
sorted_list[count_list[num_list[i]] - 1] = num_list[i];
count_list[num_list[i]]--;
}
memcpy(num_list, sorted_list, l_size * sizeof(int));
}
int find_max(int *num_list, int l_size)
{
int i;
int max = num_list[0];
for (i = 1; i < l_size; i++)
{
if (num_list[i] > max)
max = num_list[i];
}
return max;
}
Answer: You can avoid modifying the input parameter num_list and the memcpy by returning the sorted list pointer. All relevant changes to your code are marked with // (1) in the code below.
You also have a memory leak in counting_sort since you don't release count_list. The fix is marked // (2).
Same for the lists in main // (3)
Note: The program also segfaults when negative data is entered. You either have to guard against that, or also find the minimum data value, allocate count_list big enough, and offset the indices when accessing it.
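A sketch of that negative-value fix, in Python for brevity — the same min-offset idea carries over to the C code one-for-one:

```python
def counting_sort_signed(nums):
    """Counting sort that tolerates negative inputs by shifting indices by the minimum."""
    if not nums:
        return []
    lo, hi = min(nums), max(nums)
    counts = [0] * (hi - lo + 1)
    for n in nums:
        counts[n - lo] += 1          # index shifted so that lo maps to 0
    out = []
    for i, c in enumerate(counts):
        out.extend([i + lo] * c)     # shift back when emitting
    return out
```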
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int* counting_sort(int *, int, int); // (1)
int find_max(int *, int);
int main(void)
{
int l_size;
int i;
scanf("%d", &l_size);
int *num_list = calloc(l_size, sizeof(int));
for (i = 0; i < l_size; i++)
scanf("%d", &num_list[i]);
int max = find_max(num_list, l_size);
int* sorted = counting_sort(num_list, l_size, max); // (1)
puts("Sorted:");
for (i = 0; i < l_size; i++)
printf("%d ", sorted[i]); // (1)
printf("\n");
free(sorted); // (3)
free(num_list); // (3)
return 0;
}
int* counting_sort(int *num_list, int l_size, int max) // (1)
{
int i, j;
int *count_list = calloc(max + 1, sizeof(int)); // auxiliary array C
int *sorted_list = calloc(l_size, sizeof(int));
for (i = 0; i < l_size; i++)
count_list[num_list[i]]++;
for (i = 1; i < max + 1; i++)
count_list[i] += count_list[i - 1];
for (i = l_size - 1; i >= 0; i--)
{
sorted_list[count_list[num_list[i]] - 1] = num_list[i];
count_list[num_list[i]]--;
}
free(count_list); // (2)
return sorted_list; // (1)
}
int find_max(int *num_list, int l_size)
{
int i;
int max = num_list[0];
for (i = 1; i < l_size; i++)
{
if (num_list[i] > max)
max = num_list[i];
}
return max;
} | {
"domain": "codereview.stackexchange",
"id": 30925,
"tags": "c, sorting"
} |
Can TensorFlow minimize "symbolically" | Question: From https://stackoverflow.com/questions/36370129/does-tensorflow-use-automatic-or-symbolic-gradients, I understood TensorFlow requires all the operations in the Graph to be explicit formulas (instead of black-boxes, such as raw python functions) to do Automatic Differentiation. Then it will do some kind of Gradient Descent based on that to minimization.
I'm wondering, since it already know all the explicit formulas, can it directly find out the minimum by examining the equation itself? Like computing the points where gradient is zero or do not exist, then do some kind of processing to find out the minimum.
I found it is simple to do this "symbolic minimization" above with a few variables, such as minimizing Σ(a_i - v)^2 where v is the trainable variable and the a_i are the training samples. I'm not sure whether there is a general way, though.
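(A quick numerical check of that small example — added here for illustration: the stationary point of $\sum_i (a_i - v)^2$ is the sample mean, and no nearby $v$ does better.)

```python
import numpy as np

a = np.array([1.0, 4.0, 7.0, 10.0])      # toy training samples

def loss(v):
    return np.sum((a - v) ** 2)

v_star = a.mean()                        # where the gradient -2*sum(a_i - v) vanishes
grad = -2.0 * np.sum(a - v_star)
```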
Answer: If by "symbolic" you mean finding an analytical solution, that is, an equation for each weight, then the answer is no. The example you chose results in a system of linear equations, which can be solved analytically. However, once you introduce non-linearities (by using activation functions with more than one layer), most non-trivial cases will have no analytical solution and will need to be solved numerically. This is not a problem specific to TensorFlow; it is a mathematical issue, and it will not be possible in any language, current or future. Unless there is some revolution in math first. | {
"domain": "ai.stackexchange",
"id": 524,
"tags": "tensorflow, optimization"
} |
package.xml: use 'replace' tag for non-ROS package | Question:
When I do this:
foo/package.xml:
...
<replace>bar</replace>
...
I get this:
$ dpkg-deb --info ros-kinetic-foo_0.1.0_amd64.deb
...
Replaces: ros-kinetic-bar
...
What do I do if I don't want that ros-kinetic- prefix on the replaced package name?
Originally posted by rubicks on ROS Answers with karma: 193 on 2019-03-05
Post score: 0
Original comments
Comment by gvdhoorn on 2019-03-06:
I believe that ros-kinetic- prefix is added there by the rosdebian generator in Bloom. I'm not sure it can be removed.
Perhaps @William can add something here.
Comment by gvdhoorn on 2019-03-06:
REP 127 does say this though (here):
Declares a rosdep key or ROS package name that your package replaces
"rosdep key" could only mean "system dependency" (ie: non-ROS pkg), so it would seem this should work.
Is bar a ROS pkg?
Comment by rubicks on 2019-03-06:
@gvdhoorn , in this example, bar need not be a ROS package. It's better (for my use case, at least) if bar is not a ROS package. This would allow me to <replace> arbitrary packages by name and not just the proper subset of packages known to rosdep.
Comment by gvdhoorn on 2019-03-06:
Hm.
I can think of one reason why we would not want to do/support/allow this (but this is just me speculating): if arbitrary pkgs could be replaced that way, I would create a "rogue" package.xml, release it and replace any arbitrary pkg on a system. That wouldn't be very nice.
Comment by rubicks on 2019-03-06:
...unless, of course, that was the desired behavior. Any person (or software tool) capable of creating and/or editing a debian/control file can effect package removal. Maintainers and system administrators are responsible for package hygiene and installation, respectively.
Comment by gvdhoorn on 2019-03-06:
In principle I agree. But we have to remember that releasing a ROS pkg and getting it distributed through the package repositories to thousands of user's machines is way easier than getting something into Debian/Ubuntu -- which have stricter quality control mechanisms.
Comment by gvdhoorn on 2019-03-06:
But again: it was just me speculating.
What you've observed could just be a bug in the generator.
Comment by rubicks on 2019-03-06:
Submitted issue asking for clarification: https://github.com/ros-infrastructure/bloom/issues/520
Comment by Dirk Thomas on 2019-03-06:
I assume bar isn't a rosdep key in your case? Have you tried using an existing rosdep key? I would expect that to get resolved to whatever system package(s) the rosdep db maps it to on the target platform.
Comment by rubicks on 2019-03-06:
I have not tried using an existing rosdep key because the package I want to replace is not a ROS package.
Answer:
The strings used for dependencies are not system packages. Every name is a rosdep key which is then resolved by the rosdep database to an actual package name. So even if your package is not a ROS package, you need a rosdep key for it in order to be able to use it in the manifest.
Originally posted by Dirk Thomas with karma: 16276 on 2019-03-06
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by rubicks on 2019-03-06:
@dirk-thomas, okay, that explains it. I'll see if I can work around this with some rosdep trickery and (if successful) post an answer. Thanks very much for clarifying.
Comment by gvdhoorn on 2019-03-06:
I'd actually assumed that @rubicks was already using recognised rosdep keys, but if that wasn't the case then that would indeed explain it. | {
"domain": "robotics.stackexchange",
"id": 32595,
"tags": "ros, package.xml, ros-kinetic, catkin"
} |
Complexity class of efficient streaming algorithms | Question: Consider the class of problems $\mathsf{StreamL}$ which can be solved in logarithmic space reading the input in a single pass from left to right. In other words:
$L \in \mathsf{StreamL}$ if there exists a Turing machine $M$ which decides $L$, where:
There are two tapes, the read-only input tape and the working tape
$M$ moves only to the right on the input tape (a single left-to-right pass), and uses at most $O(\log n)$ space on the working tape.
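(For intuition — an illustration, not part of the original question — a non-regular language in this class is balanced parentheses, decidable in one left-to-right pass keeping only a counter, i.e., $O(\log n)$ bits of state:)

```python
def balanced(s: str) -> bool:
    """One left-to-right pass; the only working state is a counter of O(log n) bits."""
    depth = 0
    for ch in s:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth < 0:        # a close with no matching open: reject early
                return False
    return depth == 0
```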
Has this class been studied?
My assumption is that the answer is yes, but I'm not yet aware of a definition of the class in the literature.
Most literature on streaming algorithms that I am aware of considers the complexity of solving specific algorithmic problems, and does not tackle structural complexity i.e. defining classes such as the above and determining their relationships.
There is also a large body of work on communication complexity classes. In this domain there is a relevant class called $\mathsf{P}^{cc}$ (see Babai, Frankl, and Simon 1986: Complexity classes in communication complexity theory), which contains functions of two variables $f(x,y)$ which can be solved using a small amount of communication between $x$ and $y$. This is related to $\mathsf{StreamL}$ above (for functions of two variables, $\mathsf{StreamL}$ is contained in $\mathsf{P}^{cc}$), but the class above is not limited to functions of two variables and enforces a stricter computational requirement.
The obvious inclusions are $\mathsf{REG} \subseteq \mathsf{StreamL} \subseteq \mathsf{L}$, and no apparent inclusion either way between $\mathsf{StreamL}$ and $\mathsf{NC}^1$.
Answer: Along with my comment above (noting that not even AC0 is in "StreamL"), let me say that this class has been studied before; you just need to know what they used to call it.
Search for "one-way logspace" and you will find plenty of references. (Typically, past work treats it as a reducibility concept.) The papers that those papers reference should cover what you need. | {
"domain": "cstheory.stackexchange",
"id": 4988,
"tags": "cc.complexity-theory, reference-request, complexity-classes, space-bounded, streaming-algorithms"
} |
Why are the spherical and cartesian galactic coordinates in the ATNF Pulsar Catalogue different? | Question: I am trying to relate Galactic Latitude (b) and Longitude (l) (spherical coordinates) to galactic cartesian x,y,z coordinates in the ATNF pulsar catalog.
This query shows some sample data with G_l, G_b, XX, YY, and ZZ; excerpt:
------------------------------------------------------------------
# NAME Gl Gb ZZ XX YY
(deg) (deg) (kpc) (kpc) (kpc)
------------------------------------------------------------------
1 J0002+6216 cwp+17 117.33 -0.07 -0.00 0.00 8.50
2 J0006+1834 cnt96 108.17 -42.98 -0.59 0.60 8.70
3 J0007+7303 aaa+09c 119.66 10.46 0.25 1.20 9.18
The description of the variables from the ATNF documentation are as follows:
GL: Galactic longitude (degrees)
GB: Galactic latitude (degrees)
[...]
ZZ: Distance from the Galactic plane, based on Dist
XX: X-Distance in X-Y-Z Galactic coordinate system (kpc)
YY: Y-Distance in X-Y-Z Galactic coordinate system (kpc)
My understanding is that these variables should be related by the following equations:
tan(G_L) = YY/XX
tan(G_b) = XX/ZZ
However, when I test this assumption my calculated values are very different:
I have tried exploring the possibility that the x,y,z coordinate system may be oriented differently than I expect, but I can find no orientation that yields similar results for G_l or G_b:
Where could I have gone wrong? I feel like I am losing my mind not being able to convert these with simple trig.
Answer: The ATNF Pulsar Catalogue's galactic longitude and latitude are heliocentric,
but the origin of their rectangular coordinates is near the center of our galaxy.
The catalogue documentation,
section 6. Distances, says:
The Galactocentric coordinate system (XX, YY, ZZ) is right-handed
with the Sun at (0.0, 8.5 kpc, 0.0)
and the ZZ axis directed toward the north Galactic pole.
From a heliocentric point of view, (U, V, W) = (8.5 - YY, XX, ZZ).
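Plugging the catalogue row for J0006+1834 from the table above through this convention reproduces the tabulated values (a sketch; the heliocentric distance is recovered from ZZ and Gb, and the catalogue values are rounded to two decimals):

```python
import math

Gl, Gb = 108.17, -42.98            # degrees, heliocentric
XX, YY, ZZ = 0.60, 8.70, -0.59     # kpc, Galactocentric, from the table above

l, b = math.radians(Gl), math.radians(Gb)
d = ZZ / math.sin(b)               # heliocentric distance implied by ZZ

# Heliocentric rectangular coordinates (U axis pointing toward the Galactic centre)
U = d * math.cos(b) * math.cos(l)
V = d * math.cos(b) * math.sin(l)
W = d * math.sin(b)

XX_calc, YY_calc, ZZ_calc = V, 8.5 - U, W
```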
Then $\tan{l} = V / U$ and $\tan{b} = W / \sqrt{U^2 + V^2}$
as you'd expect. | {
"domain": "astronomy.stackexchange",
"id": 2840,
"tags": "coordinate, data-analysis, pulsar"
} |
The Simon's Algorithm, confusing equation | Question: I'm approaching the Simon's Algorithm and have troubles with understanding a logic in an introduction.
Above the eq. 6.5.4 they introduce that set $S$ which has 2 elements. As far as I understand, these are: $n$ zeroes ($\mathbf{0}$) and an arbitrary string of $n$ [zeroes and ones] ($\mathbf{s}$). As 6.5.4 suggests, the set $S$ contains vectors which are 'forbidden' for the $\mathbf{z}$ (in the sum they indicate that $\mathbf{z}$ belongs to $S^\bot$, which is orthogonal to $S$). The idea behind introducing that subspace $S$ is to eliminate kets for which the phase is 0 ($\mathbf{s}\cdot\mathbf{z}=1$). But if $\mathbf{z}$ takes $00\ldots$, the bitwise inner product with anything is 0 and it seems to be what we want, so why is it in the 'forbidden' set $S$?
Could you please bring me back on the right track?
Answer: Here, the idea behind introducing the subspace $S$ and its orthogonal complement $S^\bot$ was to show that all vectors on the RHS of equation 6.5.1 and 6.5.2 form a vector space of dimension $n-1$.
As you say, indeed, the zero vector $\mathbf{0}$ is a vector that we need, but you are wrong in saying that it is 'forbidden' because it appears in the subspace $S$.
Let's go over the definitions again:
$S^\bot$ is the vector space in $\mathbb{Z}_2^n $ such that
$$
S^\bot = \{\mathbf{z} \in \mathbb{Z}_2^n | \mathbf{s}\cdot \mathbf{z} = 0 \}
$$
This includes the zero vector since it satisfies the definition.
$S$ is the subspace that is orthogonal to $S^\bot$, and this is the set $\{ \mathbf{0}, \mathbf{s} \}$.
Note that it includes the zero vector, because its scalar product with itself is 0. This is not surprising because all subspaces of a vector space must have the zero vector.
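These sets are easy to enumerate for small $n$ (an illustration; the choice $\mathbf{s} = 110$, which has even weight, is arbitrary):

```python
from itertools import product

n = 3
s = (1, 1, 0)                      # a hidden string, chosen arbitrarily for the demo

def dot(u, v):
    return sum(a * b for a, b in zip(u, v)) % 2   # bitwise inner product mod 2

# S-perp: all z in Z_2^n with s.z = 0; exactly half of the 2^n vectors qualify.
S_perp = [z for z in product((0, 1), repeat=n) if dot(s, z) == 0]
```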
Ultimately, you can see that the entire vector space of $\mathbb{Z}_2^n$ breaks into two sets: one whose scalar product with $\mathbf{s}$ is 0, and another whose scalar product is 1. Both sets have the same number of elements, thus reducing the dimension of the RHS in 6.5.4 from $n$ to $n-1$. | {
"domain": "physics.stackexchange",
"id": 19926,
"tags": "quantum-information, linear-algebra, algorithms"
} |
What happens when a battery is in space? | Question: What would happen if you take a battery into space? Would current flow between the terminals in the vacuum of space?
Answer: Check out this post over at NASA. It's directed towards kids but it gives a good overview.
Most relevantly,
"In space, batteries must work in both very hot and very cold conditions. They must withstand a lot of radiation from the Sun. They must work in a vacuum without leaking or blowing up! They must be rugged enough to withstand the severe vibrations of a rocket launch."
So batteries would function normally, but there are some special conditions that have to be considered when manufacturing batteries for use in space.
To answer your question in the comments of your question:
Yes there is electrical resistance in space. That resistance is a property of the circuitry not the environment the circuit is in (normally). | {
"domain": "physics.stackexchange",
"id": 14802,
"tags": "electric-current, space, batteries"
} |
Using generic methods for basic crud operations | Question: Regarding re-usability, is this OK? What might go wrong? What would be a better design? Performance-related issues and any other comments are welcome.
package sample.library.dao.util;
import java.io.Serializable;
import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.orm.hibernate3.HibernateTemplate;
import org.springframework.transaction.annotation.Transactional;
/**
* @author mulean
*
*/
@Transactional(value = "transactionManager", timeout = 30,
rollbackFor = java.lang.Exception.class)
public class DAOUtil {
@Autowired
HibernateTemplate template;
public <T> T updateData(T t){
template.update(t);
return t;
}
public <T> T delete(T t){
template.delete(t);
return t;
}
public <T> Boolean save(T t){
boolean success = false;
try{
template.saveOrUpdate(t);
success = true;
}catch(Exception e){
success = false;
return success;
}
return success;
}
@SuppressWarnings("unchecked")
public <T> List<T> listData(String className){
List<T> items = null;
items = template.find("from "+className.toUpperCase());
return items;
}
}
Answer: What about having a generic abstract DAO class which will contain these methods? You will most likely have to create some concrete DAO classes anyway to have specific methods.
We are using this approach in our projects (behind JPA facade)
public abstract class AbstractJpaDao<T extends AbstractEntity> implements GenericDao<T> {
@PersistenceContext
protected EntityManager entityManager;
protected final Class<T> entityClass;
public AbstractJpaDao(Class<T> entityClass) {
this.entityClass = entityClass;
}
@Override
public T get(Serializable id) {
Assert.notNull(id, "Entity ID cannot be null");
return entityManager.find(entityClass, id);
}
@Override
public List<T> findAll() {
CriteriaBuilder builder = entityManager.getCriteriaBuilder();
CriteriaQuery<T> query = builder.createQuery(entityClass);
Root<T> root = query.from(entityClass);
query.select(root);
return entityManager.createQuery(query).getResultList();
}
EDIT
Also do not put @Transactional on your DAO classes but on Services. Service method describes unit of work which should be done whole or rollbacked.
Consider this simple code
public void save() {
dao1.delete(entity1);
dao2.delete(entity2);
}
When the second DAO fails to delete and you have transactional DAOs rather than a transactional service, the first entity is lost.
And your save method should not return a boolean; let the exception bubble up into the service layer and let the service handle it. | {
"domain": "codereview.stackexchange",
"id": 4754,
"tags": "java, generics, hibernate"
} |
teb local planner installation | Question:
Hi everyone
I am kind of new to ROS, and when I tried to follow the instructions from the ROS wiki to install the teb_local_planner, I encountered an error. I am currently using Ubuntu 14.04 LTS.
So here is the prompt I used and the reply I receive
samping@samping-HP-250-G4-Notebook-PC://$ sudo apt-get install ros-{distro}-teb-local-planner
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package ros-{distro}-teb-local-planner
Can anyone provide me some help on installing this package?
Originally posted by samping on ROS Answers with karma: 33 on 2016-04-27
Post score: 1
Original comments
Comment by croesmann on 2016-05-02:
hey, please accept the answer if it has solved your issue
Comment by samping on 2016-05-02:
How do I accept it, I can't find the button
Answer:
Hey,
just replace {distro} in the command with your ROS-Distribution (indigo, jade):
On indigo:
sudo apt-get install ros-indigo-teb-local-planner
On jade:
sudo apt-get install ros-jade-teb-local-planner
Originally posted by croesmann with karma: 2531 on 2016-04-27
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by croesmann on 2016-04-27:
I've updated the tutorials page to include buttons for version selection and hence avoiding {distro} in commands.
Comment by zkytony on 2016-07-12:
Hi croesmann, I'm still not able to install teb_local_planner for indigo (on Ubuntu 16.04), even with the command sudo apt-get install ros-indigo-teb-local-planner. Can you check if it works for you now?
Comment by zkytony on 2016-07-12:
I also searched on Ubuntu's App Directory, but could not find any package relating to "teb"
Comment by croesmann on 2016-07-12:
hi @kaiyu, if you are on Ubuntu 16.04, you are probably running ROS Kinetic since Indigo is not available for 16.04. After successfully installing ROS, invoke sudo apt-get install ros-kinetic-teb-local-planner. | {
"domain": "robotics.stackexchange",
"id": 24486,
"tags": "ros, teb-local-planner"
} |
How to initialize my circuit with two random complex numbers? | Question: Working through Lab 3 in the Qiskit text, I have been attempting to initialize my one-qubit circuit with two random numbers. My first attempt, as directed, is using the .initialize function, but I keep getting an error: 'Sum of amplitudes-squared does not equal one.'
My code
from qiskit import QuantumCircuit
import numpy as np
import random
qc = QuantumCircuit(1)
#### your code goes here
#Create random initial state
SignChoice=[1,-1]
a=random.choice(SignChoice)*random.random()
b=random.choice(SignChoice)*random.random()
p=complex(a,b)
q=np.sqrt(1-p**2)
initial_state = [p,q] # Define state |q_0>
qc.initialize(initial_state, 0) # Initialise the 0th qubit in the state `initial_state`
In attempting to fix this error I have used the code below to see my amplitudes and the sum of their squares and though I can see that rounding may sometimes be an issue, it does often result in exactly 1+0j, which leaves me wondering why it will not initialize!
print(p,q,p**2+q**2)
My second attempt will be with random_statevector, but I have not yet completed it.
Answer: You can do it slightly differently and then you will have no error. (The original attempt fails because, for complex $p$, $p^2 + q^2 = 1$ does not imply $|p|^2 + |q|^2 = 1$, and the latter is the normalization that initialize checks.)
Generate 4 random numbers instead of 2, and set q in the same way as p.
Then you can compute the norm of p and q using np.linalg.norm() method and set initial_state to be a list of p and q divided by the norm.
Here's the code fixed:
from qiskit import QuantumCircuit
import numpy as np
import random
qc = QuantumCircuit(1)
#### your code goes here
#Create random initial state
SignChoice = [1,-1]
a=random.choice(SignChoice)*random.random()
b=random.choice(SignChoice)*random.random()
c=random.choice(SignChoice)*random.random()
d=random.choice(SignChoice)*random.random()
p=complex(a,b)
q=complex(c,d)
amplitudes = [p,q]
norm = np.linalg.norm(amplitudes)
initial_state = amplitudes/norm # Define state |q_0>
qc.initialize(initial_state, 0) # Initialise the 0th qubit in the state `initial_state` | {
"domain": "quantumcomputing.stackexchange",
"id": 4052,
"tags": "qiskit, programming, textbook-and-exercises"
} |
can $e^{j\phi_0}$ be incorporated into the sine and cosine terms? | Question: This is from "Communication Systems" by Carlson, fifth edition, page 109:
If transfer function of a channel be like this:
$H(f)= Ae^{j\phi_0}e^{-j2\pi ft_g} $
and input to this system be:
$x(t)=x_1(t)\cos(\omega_c t) - x_2(t)\sin(\omega_c t) $
will output be like this?
$y(t) = Ax_1(t-t_g)\cos[\omega_c (t-t_g)+\phi_0] - Ax_2(t-t_g)\sin[\omega_c (t-t_g)+\phi_0]$
my problem is with this sentence in the book:
$e^{j\phi_0}$ can be incorporated into the sine and cosine terms...
Answer: Your problem might be the unmentioned fact that the given frequency response $(4)$ in Carlson's book is only valid for positive frequencies. The complete definition should be
$$H(f)=\begin{cases}Ae^{j\phi_0}e^{-j2\pi ft_g},&f>0\\Ae^{-j\phi_0}e^{-j2\pi ft_g},&f<0\end{cases}\tag{1}$$
Now we have $H(f)=H^*(-f)$, which is necessary for the corresponding system (channel) to be real-valued.
If you split the system into two subsystems $H_1(f)=Ae^{j\textrm{sgn}(f)\phi_0}$ and $H_2(f)=e^{-j2\pi ft_g}$, where $H_1(f)$ is a (scaled) phase shifter, and $H_2(f)$ is a pure delay, it's easy to see the impact of the total system on the input signal.
Consider the input signal $x_1(t)\cos(\omega_ct)$. The output of the scaled phase shifter is $Ax_1(t)\cos(\omega_ct+\phi_0)$, if the signal $x_1(t)$ is a lowpass signal with a maximum frequency smaller than the carrier frequency $\omega_c$. Delaying that signal by $t_g$ gives
$$y_1(t)=Ax_1(t-t_g)\cos[\omega_c(t-t_g)+\phi_0]\tag{2}$$
The result for the signal $x_2(t)\sin(\omega_ct)$ follows in a completely analogous way.
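The claim is easy to check numerically with an FFT (a sketch; the carrier, delay, $A$ and $\phi_0$ values are arbitrary, and the carrier is placed exactly on an FFT bin so that the circular delay is exact):

```python
import numpy as np

Fs, N = 1000.0, 1000
t = np.arange(N) / Fs
fc, A, phi0, tg = 50.0, 2.0, 0.7, 0.005   # carrier on an FFT bin; 5-sample delay

x = np.cos(2 * np.pi * fc * t)

# Frequency response per equation (1): sign(f) makes H Hermitian, hence real output.
f = np.fft.fftfreq(N, 1 / Fs)
H = A * np.exp(1j * np.sign(f) * phi0) * np.exp(-2j * np.pi * f * tg)
y = np.fft.ifft(np.fft.fft(x) * H)

# Predicted output: scaled, delayed carrier with the phase shift phi0.
expected = A * np.cos(2 * np.pi * fc * (t - tg) + phi0)
```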
In sum, the two necessary conditions for the given formula to be true are
$H(f)$ corresponds to a real-valued system, i.e., $H(f)=H^*(-f)$, and
$x_1(t)$ and $x_2(t)$ are lowpass signals with maximum frequencies smaller than the carrier frequency $\omega_c$. | {
"domain": "dsp.stackexchange",
"id": 10205,
"tags": "fourier-transform, digital-communications, phase, group-delay, envelope"
} |
Find the velocity of rope falling off table | Question: A uniform rope of total length $L = 2\,\mathrm{m}$ sits on a table. A portion of the rope, of length $l = 2/3\,\mathrm{m}$, hangs off of the edge of the table. The coefficient of static friction between the rope and the table is $K_s = 0.5$, and the coefficient of kinetic friction is $K_k = 0.4$.
How fast will the rope be moving when it completely falls off of the top surface of the table?
What I've tried
I've tried finding the total force so that I get a net force downwards by subtracting the friction force off of gravitational force
Note: I have set the mass of the rope to 2 kg so that the linear density is 1 kg/m, i.e. the mass of any piece equals its length.
$F_g - F_f = F_t = (2/3)(9.8) - (0.4)(4/3)(9.8) = 1.30666N$
But I quickly realised that the amount of friction and mass is going to decrease as other part of rope accelerates towards the ground.
Which means the acceleration is not going to be constant but increasing due to the extra mass falling off.
How do I deal with this?
Answer: Consider an energy approach: the gravitational potential of the rope just hanging before dropping away, minus the work done by friction, is the kinetic energy. From that, you find speed.
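Putting numbers to this for the values in the question (a sketch; it assumes the hanging part stays vertical, and uses the average normal force since the on-table weight decreases linearly — the mass density cancels):

```python
import math

g = 9.8
L, l = 2.0, 2.0 / 3.0        # total length, initial overhang (m)
mu_k = 0.4
lam = 1.0                    # linear mass density; it cancels, so any value works

# Gravitational PE released as the hanging length grows from l to L
# (hanging centre of mass drops from l/2 to L/2 below the table top).
dPE = lam * g * (L**2 - l**2) / 2

# Friction work: the normal force falls linearly from lam*(L-l)*g to 0 over the
# sliding distance (L - l), so its average is half the initial value.
W_f = mu_k * (lam * (L - l) * g / 2) * (L - l)

# Kinetic energy of the whole rope when the last bit leaves the table top.
v = math.sqrt(2 * (dPE - W_f) / (lam * L))
```

With these numbers the rope leaves the table top at roughly 3.73 m/s.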
The friction force varies as the rope moves because the normal force (I.e. weight) varies with the length of rope still on the table. That varies from the full weight for the first bit of distance to zero for the last. You can handle that with calculus, or realize that this is a linear effect which allows you to average: the average weight, 1/2 the total, times the distance times the kinetic coefficient is the work done. | {
"domain": "physics.stackexchange",
"id": 48133,
"tags": "homework-and-exercises, newtonian-mechanics, newtonian-gravity, friction"
} |
Is there a general version of Bell's inequality, like the general version of uncertainty principle? | Question: By the general version of uncertainty principle, I mean the result involving general operators $A$ and $B$, which says that the products of standard deviations is equal to the sum of the expected value of the commutator plus the expected value of the anti-commutator (the result is not exactly this. I forgot the exact expression).
The Bell inequality is proved for spin measurements. Is there a general version of the inequality? Maybe something involving the commutator? I want to understand the key reason behind the weird correlations in Quantum mechanics. I think it might have to do with non-commutative observables, just like the uncertainty principle has to do with non commutativity
Answer: Probably the most generic statement about correlation that is still useful in this context is Tsirelson's bound for the CHSH inequality:
The setup is four observables $A_0,A_1,B_0,B_1$ with possible outcomes $\pm 1$ and $[A_i,B_j] = 0$ (but not, crucially, $[A_0,A_1] = 0$ or $[B_0,B_1] = 0$). This isn't as restrictive as it might seem because you can convert any observable with discrete spectrum into a family of such observables by taking the projectors onto the eigenspaces of that observable and adding minus the projector onto the orthogonal complement to each projector, and likewise you can convert any experiment where you want to measure some quantity into a series of binary questions asking whether that quantity is inside or outside some interval and then assign +1 to "inside" and -1 to "outside".
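(For concreteness — a standard textbook strategy, not taken from the answer — measuring a maximally entangled pair with $A_0=Z$, $A_1=X$, $B_{0,1}=(Z\pm X)/\sqrt{2}$ saturates the quantum bound stated below:)

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

A0, A1 = Z, X
B0 = (Z + X) / np.sqrt(2)
B1 = (Z - X) / np.sqrt(2)

phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # |00> + |11>, normalized

def corr(A, B):
    """Correlation <phi| A (x) B |phi> for the shared entangled state."""
    return np.real(phi.conj() @ np.kron(A, B) @ phi)

# CHSH combination of the four correlators
S = corr(A0, B0) + corr(A0, B1) + corr(A1, B0) - corr(A1, B1)
```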
Then Tsirelson's bound says that the correlations of these observables are bounded as
$$ \langle A_0 B_0\rangle + \langle A_0 B_1\rangle + \langle A_1 B_0\rangle - \langle A_1 B_1\rangle \leq c$$
where $c = 2$ if $[A_0,A_1] = 0$ and $[B_0,B_1] = 0$ and $c = 2\sqrt{2}$ for the general case. The vanishing commutator corresponds to a classical local realist theory. Hence the proof of Tsirelson's bound shows us that the reason local realist theories cannot reproduce quantum mechanics indeed is that the commutativity of local realist observables places stricter bounds on correlation functions than general quantum theory. | {
"domain": "physics.stackexchange",
"id": 91457,
"tags": "quantum-mechanics, correlation-functions, bells-inequality"
} |
Image overlap (RViz and Gazebo world) | Question:
Hello people,
I am working with a robot equipped with a stereo camera rig, but when trying to see the image in RViz, I get an overlap of the two worlds, RViz and Gazebo:
When running image_view to view the image, I don't get this problem:
I am not understanding why, and if someone could help me I would be very grateful.
Originally posted by aba92 on ROS Answers with karma: 29 on 2015-06-23
Post score: 0
Answer:
Use an Image display, not a Camera display. The Camera display deliberately overlays RViz's 3D scene on top of the camera image, which is exactly the overlap you are seeing.
Originally posted by dornhege with karma: 31395 on 2015-06-23
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 21990,
"tags": "ros, gazebo, rviz, 2d-image"
} |
Approach to find minimum set in one table consisting of all items in second table | Question: I am wondering if you can help me with the following problem.
I have one table consisting of items in each person's bag:
name | item1 | item2 | item3
____________________________________
jack | pen | pencil | eraser
____________________________________
jane | phone | pen | camera
____________________________________
leia | pencil | eraser | glasses
I have another list with all items that can be in a bag:
pen
pencil
eraser
phone
camera
glasses
What is the best approach to find the minimum set of names whose bags together contain all the items in the list? In this toy example, the output would be {jane, leia}.
Your help is much appreciated. Please also let me know if I should formulate the problem differently.
Answer: This is essentially the Set Cover problem, which is NP-hard, so you should not expect an algorithm that is always both exact and efficient (polynomial-time).
If you really need the minimum set, you have to resort to algorithms that sooner or later have to explore an exponentially large solution space. You might want to cast your problem as an Integer Linear Program (ILP) and use an ILP solver, or cast it as a Satisfiability (SAT) problem and use a SAT solver. If you want to implement the algorithm yourself from scratch, I suggest you have a look into branch and bound methods.
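If an approximation is acceptable, the standard greedy heuristic is easy to sketch. A minimal version in Python, using the toy bags from the question (the function name and data layout are my own illustration); note that on this very instance greedy happens to pick three names even though two suffice, which illustrates that it is only an approximation:

```python
def greedy_cover(universe, bags):
    """Repeatedly pick the bag covering the most still-uncovered items."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(bags, key=lambda name: len(bags[name] & uncovered))
        if not bags[best] & uncovered:
            raise ValueError("remaining items appear in no bag")
        chosen.append(best)
        uncovered -= bags[best]
    return chosen

bags = {
    "jack": {"pen", "pencil", "eraser"},
    "jane": {"phone", "pen", "camera"},
    "leia": {"pencil", "eraser", "glasses"},
}
universe = set().union(*bags.values())
print(greedy_cover(universe, bags))  # ['jack', 'jane', 'leia'] -- optimum is ['jane', 'leia']
```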
If you don't really need the minimum cardinality set, you can just greedily pick the name whose bag covers the largest number of still-uncovered items, mark those items covered, and repeat until all items are covered. This won't do too badly in a specific theoretical sense (greedy achieves a logarithmic approximation ratio, which is essentially the best possible for a polynomial-time algorithm), although in practice it may produce a relatively low quality solution. | {
"domain": "cs.stackexchange",
"id": 13199,
"tags": "algorithms, graphs"
} |
What is the Units for Thermal conductivity? | Question: What are the units for thermal conductivity and why?
Answer: Thermal conductivity has dimensions of $\mathrm{Power / (length * temperature)}$. Power is the rate of heat flow, (i.e.) energy flow in a given time. Length represents the thickness of the material the heat is flowing through, and temperature is the difference in temperature through which the heat is flowing.
In SI units, it is commonly expressed as $\mathrm{Watts / (meter * Kelvin)}$, and in US units, it is commonly given in $\mathrm{BTU/(hr \cdot ft \cdot {}^{\circ}F)}$.
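As a quick sanity check of how the two unit systems relate (the conversion constants below are standard values):

```python
# 1 W/(m*K) expressed in BTU/(hr*ft*degF)
BTU_PER_HR_PER_WATT = 3.412142  # 1 W = 3.412142 BTU/hr
FT_PER_M = 3.28084              # 1 m = 3.28084 ft
DEGF_PER_K = 1.8                # a 1 K temperature difference = 1.8 degF

factor = BTU_PER_HR_PER_WATT / (FT_PER_M * DEGF_PER_K)
print(round(factor, 4))  # 0.5778
```

So a material with a conductivity of 1 W/(m·K) conducts about 0.578 BTU/(hr·ft·°F).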
It expresses the rate at which heat is conducted through a unit thickness of a particular medium. That rate will vary linearly based on the temperature difference across the material, so it is expressed as a value per degree of temperature difference, thus Heat Rate per unit thickness per degree of temperature difference. | {
"domain": "physics.stackexchange",
"id": 9718,
"tags": "thermodynamics, units, thermal-conductivity"
} |
Print path (leaf to root) with max sum in a binary tree | Question: I want you to pick my code apart and give me some feedback on how I could make it better or more simple.
public class MaxSumPath {
private TreeNode root;
private static class TreeNode {
TreeNode left;
TreeNode right;
int item;
TreeNode (TreeNode left, TreeNode right, int item) {
this.left = left;
this.right = right;
this.item = item;
}
}
private static class TreeInfo {
TreeNode node;
int maxCount;
TreeInfo (TreeNode node, int maxCount) {
this.node = node;
this.maxCount = maxCount;
}
}
public void maxSumPath () {
final TreeInfo treeInfo = new TreeInfo(null, 0);
fetchMaxSumPath(root, treeInfo, root.item);
printPath (root, treeInfo);
}
private void fetchMaxSumPath (TreeNode node, TreeInfo treeInfo, int sumSoFar) {
if (node == null) return;
sumSoFar = sumSoFar + node.item;
if (node.right == null && node.left == null) {
if (sumSoFar > treeInfo.maxCount) {
treeInfo.maxCount = sumSoFar;
treeInfo.node = node;
}
return;
}
fetchMaxSumPath(node.left, treeInfo, sumSoFar);
fetchMaxSumPath(node.right, treeInfo, sumSoFar);
}
private boolean printPath (TreeNode node, TreeInfo treeInfo) {
if (node != null && (node == treeInfo.node || printPath(node.left, treeInfo) || printPath(node.right, treeInfo))) {
System.out.print(node.item + ", ");
return true;
}
return false;
}
}
Also considering additional storage (TreeInfo), is it right to say that the space complexity is \$O(1)\$?
Answer: You should have a constructor MaxSumPath(TreeNode root). Otherwise, there's no way for any other class to call the code.
The TreeNode(left, right, item) constructor is awkward. Consider rearranging the TreeNode constructor arguments to TreeNode(left, item, right) or TreeNode(item, left, right). I suggest creating a TreeNode(item) constructor as well.
Your TreeInfo could be eliminated by moving its variables into MaxSumPath itself. I'd rename the variables to TreeNode maxLeaf and int maxSum.
In the maxSumPath() method, you call fetchMaxSumPath(root, treeInfo, root.item). It should be fetchMaxSumPath(root, treeInfo, 0) if you don't want to double-count the root node. (Fortunately, this bug happens not to affect the output.)
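The second traversal for printing can also be avoided entirely by having the recursion return the best path together with the best sum. A language-agnostic sketch in Python (a tuple (item, left, right) stands in for TreeNode; this is an illustration of the idea, not Java code):

```python
def best_leaf_path(node):
    """Return (max_sum, path) for the root-to-leaf path with the largest sum."""
    item, left, right = node
    children = [c for c in (left, right) if c is not None]
    if not children:                     # leaf
        return item, [item]
    sub_sum, sub_path = max(best_leaf_path(c) for c in children)
    return item + sub_sum, [item] + sub_path

#       1
#      / \
#     2   3
#        /
#       4
tree = (1, (2, None, None), (3, (4, None, None), None))
print(best_leaf_path(tree))  # (8, [1, 3, 4])
```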
The space complexity is actually O(h), where h is the height of the tree (O(log n) for a balanced tree, but O(n) in the worst case), when you take into account the stack used in recursion. The time complexity is O(n), because you visit every node once when discovering the leaf for the path with the maximum sum, and every node again to find that leaf for printing. You might want to use O(h) space (which, remember, is consumed anyway by the recursion stack) to keep track of the best path while discovering the best sum, which lets you avoid the second O(n) pass. | {
"domain": "codereview.stackexchange",
"id": 4619,
"tags": "java, algorithm, tree"
} |
Applications of CNN for detecting crime from video surveillance cameras | Question: Inspired by this discussion about recognizing human actions, I have found the Fall-Detection project which detects humans falling on the ground from a CCTV camera feed, and which can consider alerting the hospital authorities.
My question is, are there any existing real-life implementations or research projects which specifically use live video feed from the surveillance cameras in order to detect crime using convnets (or similar approaches)? If so, how do they work, briefly? Do they automatically inform the police about the crime with the details what happened and where?
For example car accidents, physical assaults, robberies, violent disturbances, weapon attacks, etc.
Answer: After a bit of research I found something kind of close:
Artificially intelligent security cameras are spotting crimes before they happen
New surveillance cameras will use computer eyes to find 'pre crimes' by detecting suspicious behaviour and calling for guards
CCTV 'fightcams' detect violence 'before it happens' at the Daily Mail; also check the Telegraph
These articles, however, make no mention of what specific methods they use.
So a crime detection system as such does not exist, but abnormal behaviour detection systems do.
An accurate generalized system seems intuitively infeasible, however. Committing a crime, unlike falling, is a complex behavior, and takes many forms. A camera watching a store's counter, like at a 7-11, could perhaps see that the 'customer's' arm is strangely reaching across the counter and that the attendant is suddenly moving a lot more than usual, but aside from very specific cases like this, such a system is currently quite infeasible.
Crimes are unusual, relatively speaking, and their dramatic nature means that even the simplest crimes play out in very different ways. Perhaps in this case you could try to look for images of a gun, or someone with their hands up. So, looking for unusual, detectable behavioural mannerisms may be possible, but not general crime detection.
Ultimately, while you may be able to make (possibly pretty good) systems to detect specific crimes in specific environments, that's all we got for now.
P.S. - Do these cameras also get audio signals? That is also an interesting facet to consider ("PUT YOUR HANDS UP / GIVE ME ALL YOUR MONEY") | {
"domain": "ai.stackexchange",
"id": 90,
"tags": "convolutional-neural-networks, computer-vision, action-recognition"
} |
Places for the up-to-date topics in robotics | Question: Are there specific places/websites where you read about the topics being researched in robotics, especially the newest research? I would like to know the places where the hottest topics in robotics are studied, for both theoretical and experimental work. The only place I know is the IEEE community. They are doing great, especially their magazine, but I'm curious if there are any alternatives for robotics scientists. Please include journals.
Answer: This sounds a bit like a shopping question, but I'll provide what I know.
The IEEE societies have always provided me a wealth of information. In addition to the Robotics Society, with its magazine and journal (IEEE Transactions on Robotics and Automation), they also host a fantastic annual conference. But there are other societies with robotics-related content, also. For example, their Systems, Man, and Cybernetics society has several journals, and many topics directly related to robotics. They also have societies which focus on systems, medical devices, and industrial applications with relevant content to an aspiring roboticist.
ASME has similar societies with a broad array of relevant content. Check out their Journal of Mechanisms and Robotics, or papers from their biannual Design Technical Conferences.
The International Journal of Robotics Research was a seminal journal, and continues to be a leading publisher of robotics research. There's also the Robotics and Autonomous Systems journal, which I have not personally found too related to my interests, but is an active publication.
Springer-Verlag also publishes some interesting collections of research papers, often in book form. Their series on Advances in Robot Kinematics has been quite interesting to me.
Also, don't forget to include patent office searches.
I'm sure there are dozens more good publications. I'd recommend finding a topic you're interested in studying, then find a seminal paper on that topic. Tracing back through the chains of references should point you to the publications that have been most active for that topic. | {
"domain": "robotics.stackexchange",
"id": 1634,
"tags": "research"
} |
A trait class to detect whether a template is specialized for a given type | Question: Today's question will just be about a small utility that I needed at some point for one of my projects: a template to detect whether a template is specialized for a given type. The utility uses std::void_t from C++17, but this alias template is simple enough to be implemented in your favourite C++ revision, whichever it is:
template<typename...>
struct voider
{
using type = void;
};
template<typename... TT>
using void_t = typename voider<TT...>::type;
Here is the template utility I wrote to detect whether a template has a specialization for a given type, for SFINAE purpose:
template<
template<typename...> class,
typename,
typename=void
>
struct is_specialized:
std::false_type
{};
template<
template<typename...> class Template,
typename T
>
struct is_specialized<Template, T, std::void_t<decltype(Template<T>{})>>:
std::true_type
{};
And here is a small example of how it can be used:
template<typename T>
struct foo;
template<>
struct foo<int> {};
template<>
struct foo<float> {};
int main()
{
// Should print 1 1 0
std::cout << is_specialized<foo, int>::value << ' '
<< is_specialized<foo, float>::value << ' '
<< is_specialized<foo, double>::value;
}
Is there any way I could improve this specialization trait? Any kind of review is welcome :)
Answer: I think there's an issue with wording here. What you're testing is if a given instantiation of a class template is default constructible. You're not testing if it's "specialized". You cannot check for that. How would you differentiate between:
template <typename T> struct foo { };
template <> struct foo<int> { };
// is foo<X> the primary or the specialization?
Back to your example - just because you cannot construct an object of a particular instantiation (e.g. foo<double>) doesn't mean that you can't name the type. You can do that just fine. That lets you simplify your trait down to:
template <typename, typename=void>
struct is_specialized : std::false_type
{};
template<typename T>
struct is_specialized<T, void_t<decltype(T{})>>
: std::true_type
{};
Which you would use as is_specialized<foo<int>>::value (true) or is_specialized<foo<double>>::value (false).
But really, since we're just testing for constructibility, we should just use what's in the standard:
template <typename T>
using is_specialized = std::is_default_constructible<T>; | {
"domain": "codereview.stackexchange",
"id": 14935,
"tags": "c++, template, c++14, sfinae"
} |
What is context window size? | Question: I am trying to implement a recurrent neural network machine translation system, and I am just starting to learn.
I am creating a word embedding matrix. In order to do that, I should know my vocabulary size, the dimension of the embedding space, and the context window size.
What is context window?
Answer: A context window refers to the number of words you use to determine the context of each word. For example, if your sentence is "the quick brown fox", a context window of two would mean your samples are (the, quick) and (the, brown). Then you slide one word and your samples become (quick, the), (quick, brown) and (quick, fox), and so on. I would suggest reading up on this word2vec tutorial to understand the training method and terminology. | {
"domain": "datascience.stackexchange",
"id": 1404,
"tags": "rnn, machine-translation"
} |
Is the input of ESC really limited at 50 Hz and will the PID controllers work properly? | Question: Based on the wiki page of ESC, the ESC generally accepts a nominal 50 Hz PWM servo input signal whose pulse width varies from 1 ms to 2 ms
http://en.wikipedia.org/wiki/Electronic_speed_control
For our project, we integrate a flight controller for our UAV, Naza m-lite and we want to implement position control. We already have localization and we can control the quadrotor by applying servo width to roll, pitch, yaw and thrust throttle. Since the ESC only accepts 50 Hz, will the PID controller work at only 50 Hz?
Answer: First things first: the information on wikipedia refers to ESCs in general, and not a specific one. You should consult the datasheet on your particular ESC model to make sure that it does in fact use 50 Hz.
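Regardless of the rate the control loop runs at, its output just needs to be mapped into the 1 ms to 2 ms servo pulse range; the latest value is what gets latched into each 20 ms (50 Hz) PWM frame. A minimal sketch of that mapping (the controller output range here is an assumption):

```python
def pid_to_pulse_width_us(u, u_min=-1.0, u_max=1.0):
    """Map a controller output u in [u_min, u_max] to a 1000-2000 us pulse width."""
    u = min(max(u, u_min), u_max)            # saturate the command
    frac = (u - u_min) / (u_max - u_min)     # normalize to [0, 1]
    return 1000.0 + 1000.0 * frac            # 1 ms .. 2 ms

print(pid_to_pulse_width_us(-1.0), pid_to_pulse_width_us(0.0), pid_to_pulse_width_us(1.0))
# 1000.0 1500.0 2000.0
```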
You also seem to have a misconception about how the PID controller and ESC are coupled together; in fact, the PID controller can work at whatever frequency it wants. The output of the PID will be some value, and in order to send that value to the ESC you will need to convert it to a pulse-width modulated (PWM) signal on the frequency that the ESC uses for input. | {
"domain": "robotics.stackexchange",
"id": 480,
"tags": "pid"
} |
Superposing waves to create visible light | Question: I have a question: Is it possible to create visible light using two other non-coherent waves from the electromagnetic spectrum for example superposing infrared and ultraviolet to create visible light?
Answer: Color vision is based on the response of cone cells in the eye to different frequencies of light. For humans (with full color vision) there are three types of cones sensitive to different frequency ranges at different levels. Each individual photon in your superposed wave will be of a specific frequency, which for your question you've assumed is outside the visible range, i.e. outside the sensitivity region of any of the cones. Hence even if one of those photons hits a cone cell, it will not generate a response at the cellular level and will therefore not be perceived by the brain at all. It doesn't matter how many different frequencies of photons you superpose - if none of them are in the visible range then none of them will stimulate any of the cones. | {
"domain": "physics.stackexchange",
"id": 78898,
"tags": "waves, visible-light, superposition"
} |
Confused about Q and K in Attention Mechanism | Question: The following equation computes the attention scores:
A = softmax(QK^T / sqrt(d_k))
I think Q and K are interchangeable, but why is one called Query and the other called Key?
Answer: First of all, it is necessary to figure out what kind of attention mechanism we are talking about.
If this is a «classical» attention mechanism, then Q and K are calculated for completely different sequences: input and output.
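In either case, Q and K enter the score computation asymmetrically, because they come from different learned projections. A minimal numpy sketch (the weight matrices here are random placeholders): swapping the Q and K projections transposes the attention pattern rather than leaving it unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                    # 4 tokens, embedding dim 8
Wq, Wk = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))

def attention_weights(Wq, Wk):
    Q, K = X @ Wq, X @ Wk                      # project the same tokens two ways
    scores = Q @ K.T / np.sqrt(K.shape[-1])    # row i: token i's query vs every key
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)   # softmax over keys, rows sum to 1

A = attention_weights(Wq, Wk)
A_swapped = attention_weights(Wk, Wq)          # exchange the two projections
print(np.allclose(A, A_swapped))               # False: the roles are not symmetric
```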
If this is self-attention, then, indeed, they are calculated for the same sequence, but for different tasks: Q represents the token for which attention is being computed, while K represents the tokens that can be attended to. | {
"domain": "datascience.stackexchange",
"id": 12060,
"tags": "transformer, attention-mechanism"
} |
Parallel Connection of Capacitor | Question: Assume two capacitors filled with different dielectrics, with dielectric constants $K_1$ and $K_2$ respectively, in a parallel connection. Then we have the same voltage $V$ on both capacitors. Therefore, $$\dfrac{E}{K_1}d=\dfrac{E}{K_2}d.$$ So, $K_1= K_2$. This means the two dielectrics always have the same dielectric constant. This is a clear contradiction; where am I mistaken?
Answer: You seem to assume both capacitors have the same plate separation $d$, so let's assume that. Assume there is no dielectric material. Then, nicely, $Ed = Ed$ in both capacitors. Which is nice. :)
Now, I think I understand your confusion. Take an isolated capacitor with an electric field $E$ between its plates and insert a dielectric $K$. In this case, the electric field falls to $E/K$. But in this scenario there is also a voltage drop. Why? Because the capacitor is isolated, so its charge is conserved.
If instead you force the potential to stay the same, then the charge increases by a factor of $K$, while the dielectric reduces the field by a factor of $K$; the two effects cancel, so $E_1 = E_2$.
Since $Q = CV$, the electric field after dielectric:
$$
E_{after} =
\frac{\sigma}{\epsilon} =
\frac{Q_{after}}{\epsilon_0 KA} =
\frac{C_{after}V}{\epsilon_0 KA} =
\frac{KC_{before}V}{\epsilon_0 KA} =
\frac{C_{before}V}{\epsilon_0 A} =
E_{before}
$$
So, under the condition of constant $V$, the decrease of the electric field by a factor of $K$ after the dielectric is inserted does not happen. Anyway, the neat answer of @AnubhavGoel addresses this far more simply than mine. | {
"domain": "physics.stackexchange",
"id": 26151,
"tags": "electrostatics, electric-fields, capacitance, dielectric"
} |
How to download BAIR action free robot pushing dataset? | Question: I'm trying to download the BAIR action-free robot pushing dataset. I tried downloading from here. In the browser, it shows its size is 30GB, but it downloads some data and then fails. I tried multiple attempts with no success. Then I tried to download using wget
wget http://rail.eecs.berkeley.edu/datasets/bair_robot_pushing_dataset_v0.tar
Even with this, it shows total size is 30GB, but after downloading some 199MB, it ended saying download is complete
wget http://rail.eecs.berkeley.edu/datasets/bair_robot_pushing_dataset_v0.tar
--2019-05-16 12:30:50-- http://rail.eecs.berkeley.edu/datasets/bair_robot_pushing_dataset_v0.tar
Resolving rail.eecs.berkeley.edu (rail.eecs.berkeley.edu)... 128.32.189.73
Connecting to rail.eecs.berkeley.edu (rail.eecs.berkeley.edu)|128.32.189.73|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 32274964480 (30G) [application/x-tar]
Saving to: ‘bair_robot_pushing_dataset_v0.tar’
bair_robot_pushing_dataset_v0.tar 0%[ ] 189.95M 456KB/s in 10m 59s
2019-05-16 12:41:50 (295 KB/s) - Connection closed at byte 199172826. Retrying.
--2019-05-16 12:41:51-- (try: 2) http://rail.eecs.berkeley.edu/datasets/bair_robot_pushing_dataset_v0.tar
Connecting to rail.eecs.berkeley.edu (rail.eecs.berkeley.edu)|128.32.189.73|:80... connected.
HTTP request sent, awaiting response... 416 Requested range not satisfiable
The file is already fully retrieved; nothing to do.
Also, I found a script that downloads BAIR dataset here. But I encountered the same problem here as well.
I'm confused now. Is the dataset so small or am I doing something wrong?
Answer: BAIR dataset can be downloaded here
https://sites.google.com/berkeley.edu/robotic-interaction-datasets
Additionally, here is the code to extract data from the dataset
import datetime
import os
import time
import cv2
import numpy as np
import skvideo.io
import tensorflow as tf
from PIL import Image
from tensorflow.python.platform import gfile
def get_next_video_data(data_dir):
filenames = gfile.Glob(os.path.join(data_dir, '*'))
if not filenames:
raise RuntimeError('No data files found.')
for f in filenames:
k = 0
for serialized_example in tf.python_io.tf_record_iterator(f):
example = tf.train.Example()
example.ParseFromString(serialized_example)
# print(example) # To know what all features are present
actions = np.empty((0, 4), dtype='float')
endeffector_positions = np.empty((0, 3), dtype='float')
frames_aux1 = []
frames_main = []
i = 0
while True:
action_name = str(i) + '/action'
action_value = np.array(example.features.feature[action_name].float_list.value)
if action_value.shape == (0,): # End of frames/data
break
actions = np.vstack((actions, action_value))
endeffector_pos_name = str(i) + '/endeffector_pos'
endeffector_pos_value = list(example.features.feature[endeffector_pos_name].float_list.value)
endeffector_positions = np.vstack((endeffector_positions, endeffector_pos_value))
aux1_image_name = str(i) + '/image_aux1/encoded'
aux1_byte_str = example.features.feature[aux1_image_name].bytes_list.value[0]
aux1_img = Image.frombytes('RGB', (64, 64), aux1_byte_str)
aux1_arr = np.array(aux1_img.getdata()).reshape((aux1_img.size[1], aux1_img.size[0], 3))
frames_aux1.append(aux1_arr.reshape(1, 64, 64, 3))
main_image_name = str(i) + '/image_main/encoded'
main_byte_str = example.features.feature[main_image_name].bytes_list.value[0]
main_img = Image.frombytes('RGB', (64, 64), main_byte_str)
main_arr = np.array(main_img.getdata()).reshape((main_img.size[1], main_img.size[0], 3))
frames_main.append(main_arr.reshape(1, 64, 64, 3))
i += 1
np_frames_aux1 = np.concatenate(frames_aux1, axis=0)
np_frames_main = np.concatenate(frames_main, axis=0)
yield f, k, actions, endeffector_positions, np_frames_aux1, np_frames_main
k = k + 1
def extract_data(data_dir, output_dir, frame_rate):
"""
Extracts data in tfrecord format to gifs, frames and text files
:param data_dir:
:param output_dir:
:param frame_rate:
:return:
"""
if os.path.exists(output_dir):
if os.listdir(output_dir):
raise RuntimeError('Directory not empty: {0}'.format(output_dir))
else:
os.makedirs(output_dir)
seq_generator = get_next_video_data(data_dir)
while True:
try:
_, k, actions, endeff_pos, aux1_frames, main_frames = next(seq_generator)
except StopIteration:
break
video_out_dir = os.path.join(output_dir, '{0:03}'.format(k))
os.makedirs(video_out_dir)
# noinspection PyTypeChecker
np.savetxt(os.path.join(video_out_dir, 'actions.csv'), actions, delimiter=',')
# noinspection PyTypeChecker
np.savetxt(os.path.join(video_out_dir, 'endeffector_positions.csv'), endeff_pos, delimiter=',')
skvideo.io.vwrite(os.path.join(video_out_dir, 'aux1.gif'), aux1_frames, inputdict={'-r': str(frame_rate)})
skvideo.io.vwrite(os.path.join(video_out_dir, 'main.gif'), main_frames, inputdict={'-r': str(frame_rate)})
skvideo.io.vwrite(os.path.join(video_out_dir, 'aux1.mp4'), aux1_frames, inputdict={'-r': str(frame_rate)})
skvideo.io.vwrite(os.path.join(video_out_dir, 'main.mp4'), main_frames, inputdict={'-r': str(frame_rate)})
# Save frames
aux1_folder_path = os.path.join(video_out_dir, 'aux1_frames')
os.makedirs(aux1_folder_path)
for i, frame in enumerate(aux1_frames):
filepath = os.path.join(aux1_folder_path, 'frame_{0:03}.bmp'.format(i))
cv2.imwrite(filepath, cv2.cvtColor(frame.astype('uint8'), cv2.COLOR_RGB2BGR))
main_folder_path = os.path.join(video_out_dir, 'main_frames')
os.makedirs(main_folder_path)
for i, frame in enumerate(main_frames):
filepath = os.path.join(main_folder_path, 'frame_{0:03}.bmp'.format(i))
cv2.imwrite(filepath, cv2.cvtColor(frame.astype('uint8'), cv2.COLOR_RGB2BGR))
print('Saved video: {0:03}'.format(k))
def main():
data_dir = '../softmotion30_44k/test'
output_dir = '../ExtractedData/test'
frame_rate = 4
extract_data(data_dir, output_dir, frame_rate)
return
if __name__ == '__main__':
print('Program started at ' + datetime.datetime.now().strftime('%d/%m/%Y %I:%M:%S %p'))
start_time = time.time()
main()
end_time = time.time()
print('Program ended at ' + datetime.datetime.now().strftime('%d/%m/%Y %I:%M:%S %p'))
print('Execution time: ' + str(datetime.timedelta(seconds=end_time - start_time)))
References:
https://github.com/edenton/svg/blob/master/data/convert_bair.py | {
"domain": "datascience.stackexchange",
"id": 9497,
"tags": "dataset"
} |
Has Hawking radiation and black hole evaporation been observed by astronomers? | Question: Has Hawking radiation ever been observed, or has there been any attempt to observe it? Do we have any evidence that black holes can evaporate?
I wonder how one would distinguish Hawking radiation from some other radiation. Moreover, Hawking radiation causes a black hole to evaporate. Is there any hint in astronomy of the occurrence of such events? Are there chances of observing them in the future, in the sense that there are people/collaborations dedicatedly looking for Hawking radiation?
Answer:
Has Hawking radiation ever been observed, or has there been any attempt to observe it?
No, it has never been detected. The Hawking radiation from a stellar-mass black hole is so weak that it can never be detected by any foreseeable technology.
Moreover, Hawking radiation causes a black hole to evaporate. Is there any hint in astronomy for the occurrence of such events?
No. Any stellar-mass black hole that has existed in the universe so far will have been absorbing mass-energy faster than it lost it through Hawking radiation, even if the only infalling energy was from the cosmic microwave background.
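A quick back-of-the-envelope check (standard SI constants) of why: the Hawking temperature of a solar-mass black hole is roughly 60 nanokelvin, far below the 2.7 K of the cosmic microwave background, so such a hole absorbs more energy than it radiates:

```python
import math

hbar = 1.0545718e-34   # J*s
c = 2.99792458e8       # m/s
G = 6.674e-11          # m^3/(kg*s^2)
k_B = 1.380649e-23     # J/K
M_sun = 1.989e30       # kg

def hawking_temperature(M):
    """Hawking temperature (K) of a Schwarzschild black hole of mass M (kg)."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

T = hawking_temperature(M_sun)
print(f"{T:.2e} K")  # ~6.17e-08 K, versus 2.7 K for the CMB
```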
Are there chances of observing them in the future, in the sense that there are people/collaborations dedicatedly looking for Hawking radiation?
The only realistic possibility I've seen suggested was that in some scenarios involving large extra dimensions, the LHC could have produced microscopic black holes. Microscopic black holes could evaporate relatively quickly, and the radiation might be detectable. However, they don't seem to have been produced at the LHC. | {
"domain": "physics.stackexchange",
"id": 68708,
"tags": "black-holes, experimental-physics, astrophysics, astronomy, hawking-radiation"
} |
Why do only unsaturated hydrocarbons undergo addition reactions? | Question: Why do only unsaturated hydrocarbons undergo addition reactions? For example, ethene or ethyne will undergo an addition reaction with chlorine, whereas ethane will not.
Furthermore, why does benzene not undergo addition reactions, even though it is not saturated?
Answer: To what would you add given a saturated hydrocarbon? You must substitute, abstract, or displace.
Benzene is aromatic — different pattern of reactivity. It photochlorinates. It UV reacts with maleic anhydride, then adds another mole thermally. Palladium catalysis will swap appropriate substituents. | {
"domain": "chemistry.stackexchange",
"id": 1016,
"tags": "organic-chemistry, reactivity"
} |