| anchor | positive | source |
|---|---|---|
Adiabatic expansion of steam through a valve | Question: I'm working on a homework problem, and I have a suspicion the textbook is trying to trick me.
The question is:
"Steam at 20 bar and 300 °C is to be continuously expanded to 1 bar. Compute the entropy generated and the work obtained per kilogram of steam if this expansion is done by passing the steam through an adiabatic expansion valve."
My energy and entropy balances give:
$$\dot{W} = \dot{M}(\hat{H}_2 - \hat{H}_1)$$
$$\dot{S}_{gen} = \dot{M}(\hat{S}_2 - \hat{S}_1)$$
where
$$\dot{W} = \dot{W}_s - P\frac{dV}{dt}$$
... but, there is no $\dot{W}_s$ since it's an expansion valve, right? If it was a turbine (which the next question asks), then there would be shaft work, but unless I'm mistaken, there is no shaft work for an expansion valve, right?
Answer: Yes. An expansion valve does no work, so the final enthalpy equals the initial enthalpy. Together with the final pressure, that gives you $\hat{S}_2$ from the steam tables :). | {
"domain": "physics.stackexchange",
"id": 6550,
"tags": "thermodynamics, adiabatic"
} |
Verifying IPv6 addresses | Question: While trying to learn Python I stumbled upon a question which asks you to write a code that is able to take an input and verify it to see whether it meets the IPv6 Criteria.
I wrote the following code:
valid_characters = ['A', 'B', 'C', 'D', 'E', 'F', 'a', 'b', 'c', 'd', 'e', 'f', ':', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
address = input('Please enter an IP address: ')
is_valid = []
for i in range(len(address)):
    current = address[i]
    for j in range(len(valid_characters)):
        check = valid_characters[j]
        if check == current:
            a = 1
            is_valid.append(a)
address_list = address.split(":")
invalid_segment = False
for i in range(len(address_list)):
    current = address_list[i]
    if len(current) > 4:
        invalid_segment = True
if len(address) == len(is_valid) and len(address_list) == 8 and invalid_segment == False:
    print("It is a valid IPv6 address.")
elif invalid_segment:
    print("It is not a valid IPv6 address.")
else:
    print("It is not a valid IPv6 address.")
Although the code works and, for the sample cases provided, successfully confirms whether the entered IP address is a valid IPv6 address, I feel my solution is not elegant and that there should be a better way to do this.
I would really appreciate it if someone could point me in the right direction for how I should condense or improve my code.
Please note that I am new to Python, with little to no experience.
Answer: Python has a philosophy of "Batteries Included". Don't reinvent what's in the standard library!
In particular, this code is much more easily written if we take advantage of IPv6Address:
import ipaddress
address = input('Please enter an IP address: ')
try:
    addr = ipaddress.IPv6Address(address)
except ipaddress.AddressValueError:
    print(address, 'is not a valid IPv6 address')
else:
    if addr.is_multicast:
        print(address, 'is an IPv6 multicast address')
    if addr.is_private:
        print(address, 'is an IPv6 private address')
    if addr.is_global:
        print(address, 'is an IPv6 global address')
    if addr.is_link_local:
        print(address, 'is an IPv6 link-local address')
    if addr.is_site_local:
        print(address, 'is an IPv6 site-local address')
    if addr.is_reserved:
        print(address, 'is an IPv6 reserved address')
    if addr.is_loopback:
        print(address, 'is an IPv6 loopback address')
    if addr.ipv4_mapped:
        print(address, 'is an IPv6 mapped IPv4 address')
    if addr.sixtofour:
        print(address, 'is an IPv6 RFC 3056 address')
    if addr.teredo:
        print(address, 'is an IPv6 RFC 4380 address') | {
"domain": "codereview.stackexchange",
"id": 30333,
"tags": "python, beginner, python-3.x, validation, ip-address"
} |
What does the "depth" mean on earth? | Question:
I am really confused about the meaning of "depth" as acquired by an RGB-D camera.
I think there are two meanings,
1: The distance from a point in the 3-D world to the infrared camera center.
2: The Z value of the 3-D world point in the camera frame.
Which is right?
If the second meaning is right, the depth values of all the points on a wall that is parallel to the RGB-D camera should be equal, but in practice they are not. However, many 3-D vision books indicate the second meaning is right. I am really confused. Can anyone help me?
Originally posted by somebodyus on ROS Answers with karma: 113 on 2015-07-16
Post score: 0
Answer:
The data being provided should have a frame_id defined which will tell you which coordinate frame the data is referenced against.
Originally posted by tfoote with karma: 58457 on 2018-01-11
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 22207,
"tags": "ros, depth"
} |
How many trees should be generated in a random forest? | Question: What are ways of determining the number of trees to be generated in a random forest algorithm?
Answer: The number of estimators (trees) in a random forest is a hyperparameter. If you are using scikit-learn's RandomForestClassifier, you can use one of the following techniques to find a (near-)optimal hyperparameter setting (note: you can tweak other hyperparameters, such as min_samples_leaf, with the same approach):
GridSearchCV: You specify a grid of hyperparameter values and a scoring criterion. This function then evaluates all combinations of those values for you and returns the setting that performed best on the validation set.
RandomizedSearchCV: You specify the hyperparameter values (or distributions) and a scoring criterion. This function then evaluates n randomly chosen combinations for you and returns the setting that performed best on the validation set.
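As a sketch of the grid-search approach (assuming scikit-learn is installed; the grid values and the toy dataset are arbitrary placeholders, not recommendations):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Toy data standing in for a real dataset
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Try a few candidate values for n_estimators (the number of trees)
param_grid = {"n_estimators": [10, 50, 100]}
search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=3)
search.fit(X, y)

print(search.best_params_)  # the tree count that scored best across the CV folds
```

The same `param_grid` dictionary can carry several hyperparameters at once, in which case the grid search evaluates their cross product.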
Bayesian optimization: You can treat the mapping from hyperparameter settings to score as a black-box function and search it with an exploration-exploitation strategy using Bayesian optimization. | {
"domain": "ai.stackexchange",
"id": 810,
"tags": "machine-learning, decision-trees, hyper-parameters, random-forests"
} |
How is the work done by a battery (when connected to a capacitor) independent of time? | Question: We know that W bybattery = V(It) = VQ, which is time-dependent. But when the battery is connected to a capacitor then W bybattery = 1/2 (CV^2), where C and V both are time-independent, hence W bybattery is coming to be time independent in this case. I am not able to understand why this happens.
Answer: This independence from time arises only when the circuit is in 'steady state', which is essentially after infinite time. Steady state refers to the situation in which the capacitor in the circuit is completely charged and current no longer flows through it (assuming direct current, not alternating current).
Otherwise, the following equation holds:
$$Q = Q_0(1-e^{-t/\tau})$$
where $Q_0$ is the theoretical maximum charge on the capacitor and $\tau$ is the 'time constant' of the R-C circuit, given by $RC$, with $R$ the resistance in the circuit and $C$ the capacitance of the capacitor. Clearly, this equation is not time-independent. | {
"domain": "physics.stackexchange",
"id": 90127,
"tags": "electrostatics, capacitance, batteries"
} |
Systems of equations solver with sympy | Question: So this is my first Python project and hoped for a bit of feedback.
I'm trying to make a little program, where I can insert n equations and run a numerical solve. I ended up using tkinter for the GUI as it seemed very approachable, and sympy for the math part.
I'm unsure if the overall structure of the program, is good or bad.
And some functions caused me a bit of trouble. Especially the define_eqsys_vars I feel like became needlessly complicated.
import tkinter as tk
import sympy as sp
from tkinter import Text
def clean_input(x: str) -> list:
    temp = x.replace(' ', '').split('\n')
    cleaned = list(filter(None, temp))
    return cleaned
def create_res(x: str):
    split = x.split('=')
    res_eq = (sp.parse_expr(split[0], evaluate=False) - sp.parse_expr(split[1], evaluate=False))
    return res_eq
def define_eqsys_vars(eqsys: list):
    unique_vars = set()
    for eq in eqsys:
        unique_vars = unique_vars.union(eq.atoms(sp.Symbol))
    # create a list with all symbols converted to text, and join - var() takes a string
    var_string = ', '.join([repr(eq) for eq in unique_vars])
    variables = sp.var(var_string)
    return variables
def create_eqsys(x: list) -> tuple:
    equation_system = [create_res(eq) for eq in x]
    variables = define_eqsys_vars(equation_system)
    return equation_system, variables
def create_guess(eqsys: list) -> tuple:
    unique_vars = set()
    for eq in eqsys:
        unique_vars = unique_vars.union(eq.atoms(sp.Symbol))
    guess = [1] * len(unique_vars)
    return tuple(guess)
def solve_eqsys(eqsys, symbols, guess):
    result = sp.nsolve(tuple(eqsys), symbols, guess)
    return result
def main():
    # input, from tkinter window
    text_input = inputtxt.get("1.0", "end-1c")
    # clean text
    cleaned_text = clean_input(text_input)
    # create system of equations and try to solve it
    eqsys, eqsys_vars = create_eqsys(cleaned_text)
    guess = create_guess(eqsys)
    solution = solve_eqsys(eqsys, eqsys_vars, guess)
    # text output
    Output.insert(tk.END, f"Solution: {solution}")
    return
# Build GUI
root = tk.Tk()
toplabel = tk.Label(text="Start variable name with a letter")
inputtxt = Text(root, height=30, width=50, bg="light yellow")
Output = Text(root, height=30, width=25, bg="light cyan")
Display = tk.Button(root, height=2,
                    width=20,
                    text="Solve system of equations",
                    command=lambda: main())
toplabel.pack()
inputtxt.pack()
Display.pack()
Output.pack()
tk.mainloop()
Answer: You shouldn't list() your filter. You can just return the iterable.
Rather than forming a subtraction from your user's equation, why not just... form an equation, and use the default solve instead of nsolve?
An Equality object can give you its free_symbols directly, which will be simpler than your use of atoms.
Don't sp.var and don't form a string to give it - just use free_symbols directly.
You don't need to make a (fairly unsafe, since it's uninformed) guess if you just call solve.
Your application doesn't render very well for me - it fills the entire height of the screen. I haven't bothered to try fixing this.
Don't leave state or setup code in the global namespace. The quickest fix in your case is to make main a closure.
Suggested
import tkinter as tk
from typing import Iterable, Sequence
import sympy as sp
from tkinter import Text
def clean_input(x: str) -> Iterable[str]:
    temp = x.replace(' ', '').split('\n')
    return filter(None, temp)
def create_res(x: str) -> sp.Equality:
    lhs, rhs = x.split('=')
    res_eq = sp.Eq(
        sp.parse_expr(lhs, evaluate=False),
        sp.parse_expr(rhs, evaluate=False)
    )
    return res_eq
def define_eqsys_vars(eqsys: Iterable[sp.Equality]) -> set[sp.Symbol]:
    unique_vars = set()
    for eq in eqsys:
        unique_vars |= eq.free_symbols
    return unique_vars
def create_eqsys(x: list) -> tuple[
    list[sp.Equality],
    set[sp.Symbol],
]:
    equation_system = [create_res(eq) for eq in x]
    variables = define_eqsys_vars(equation_system)
    return equation_system, variables
def solve_eqsys(eqsys: Sequence[sp.Equality], symbols: Sequence[sp.Symbol]) -> dict:
    return sp.solve(eqsys, symbols)
def setup() -> None:
    def main() -> None:
        # input, from tkinter window
        text_input = inputtxt.get("1.0", "end-1c")
        # clean text
        cleaned_text = clean_input(text_input)
        # create system of equations and try to solve it
        eqsys, eqsys_vars = create_eqsys(cleaned_text)
        solution = solve_eqsys(eqsys, eqsys_vars)
        # text output
        Output.insert(tk.END, f"Solution: {solution}")
    # Build GUI
    root = tk.Tk()
    toplabel = tk.Label(text="Start variable name with a letter")
    inputtxt = Text(root, height=30, width=50, bg="light yellow")
    Output = Text(root, height=30, width=25, bg="light cyan")
    Display = tk.Button(
        root, height=2, width=20,
        text="Solve system of equations",
        command=main,
    )
    toplabel.pack()
    inputtxt.pack()
    Display.pack()
    Output.pack()
    tk.mainloop()
if __name__ == '__main__':
    setup() | {
"domain": "codereview.stackexchange",
"id": 43123,
"tags": "python-3.x, sympy"
} |
Is it possible for a car to pull down a helicopter? | Question: https://youtu.be/jzUFCQ-P1Zg?t=426
from this great BMW ad, I was curious if the helicopter being pulled by a car is even remotely possible?
Answer: Yes, in this instance it can
So the question here is: can the tractive force from the car overcome the lifting capacity of the helicopter?
The comments say the car is a BMW 540i, clearly it is a sedan model and it seems to be an automatic. The weight of that car is — according to BMW USA — 3847 lbs.
What tractive force can that achieve? On a dry surface, you can achieve a friction coefficient of up to 0.9 with car tires.
So the tractive force a BMW 540i can achieve should be around 1500 kg under ideal conditions.
I have not been able to definitively identify the make and model of the helicopter — and the registration number is clearly "borrowed" from another make and model of helicopter — though I suspect it is an Airbus H125/AS350 variant, possibly dressed up slightly to look better for the film.
Airbus AS350
The useful load for the AS350 / H125 is about 2400 lbs, or 1100 kg.
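A quick back-of-the-envelope check of these numbers (a sketch; the quoted masses and the 0.9 friction coefficient are the estimates stated above, not measured values):

```python
LB_TO_KG = 0.4536

car_weight_kg = 3847 * LB_TO_KG          # BMW 540i curb weight, about 1745 kg
mu = 0.9                                  # assumed dry-surface friction coefficient
tractive_limit_kgf = mu * car_weight_kg   # about 1570 kgf of tractive force

heli_useful_load_kg = 2400 * LB_TO_KG     # AS350/H125 useful load, about 1089 kg

# A positive margin means the car can out-pull the helicopter's spare lift
margin_kg = tractive_limit_kgf - heli_useful_load_kg
print(f"car traction ~{tractive_limit_kgf:.0f} kgf, "
      f"heli useful load ~{heli_useful_load_kg:.0f} kg, "
      f"margin ~{margin_kg:.0f} kg")
```

The margin comes out to several hundred kilograms in the car's favour, consistent with the conclusion that follows.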
This means that it is possible for the BMW to be overpowering the helicopter by several hundred kilograms. Add to that the confusion and chaos of the situation, along with the turbulence of being so close to the ground, and it is more than likely that suddenly gaining a 1.5 tonne load on the helicopter will drag it down before the pilot can take appropriate action, like detaching the sling assembly (if that is even possible).
What I want to know though is how the edge of the sun-roof survived pulling over a tonne — focused on that little hook — without buckling. ;-D
Spherical Cow considerations:
I am being unfair and assuming most favourable conditions for the car and least favourable conditions for the helicopter. This is not a fair fight.
I assume the car's engine can overpower the car's own traction force (i.e. it can spin its wheels).
I assume the driving wheels can take the entire weight. If we assume this car has all-wheel drive (that option is available for the 540i), this assumption is plausible.
I assume that the angles of the cable relative to the lifting/traction forces are small enough that the resulting force components remain within the relevant ranges. Since a deviation as large as 30 degrees still keeps 87% of the lift/traction, I assume this is the case here. Note that the car is at an advantage here: if it starts tilting, its weight adds to the composite force on the cable, while any deviation puts the helicopter at a disadvantage, since it needs to tilt to avoid being pulled sideways (which in the film contributes to the crash), which in turn decreases the lifting force it has available to counter gravity.
I have not included the weight of fuel and passengers in the car, which would give the car further advantage.
Another approach
Why they would want to hook the car to the helicopter is a mystery to me because that helicopter cannot lift that car. The helicopter's lifting capacity is short by about 1500 lbs / 675 kg.
Now assume that they have gotten to a standoff. The helicopter is hovering at full power. The car is standing still directly under it. The weight on the wheels is 675 kg, and with a friction coefficient of slightly less than 0.5, this still means a possible traction force of 300 kg / 3000 N.
At this point the driver steps on the gas. If the helicopter intends to remain stationary, it must start tilting to counter the sideways force that the car is exerting on the wire. But that means the lifting force is diminished. This relieves the car of lift and more weight is put on the wheels. This means the car can exert an even larger sideways force. The helicopter must tilt more. Less lift is put on the car... and so forth.
The helicopter has entered into a version of a dynamic rollover.
Examples of dynamic rollover.
So unless the pilot intends to crash really quickly, they must follow the car obediently. The car is now driving around with a helicopter, like a balloon on a string!
So in short: this scene makes no sense because — unless the helicopter can lift the car in its entirety — a car can always either drag it down or, force it to follow. | {
"domain": "physics.stackexchange",
"id": 46042,
"tags": "newtonian-mechanics, forces, aerodynamics, estimation"
} |
Target that topples like real person | Question: I'm designing a throwing target that will behave like a person when struck.
Nudge it and it won't noticeably move.
Hit it decently and it will lean a little and then return to its original position, like a person taking a step back.
Strike it with all you've got and it will fall backwards.
I'm looking for an equation that will determine the force needed to topple a given design.
I've decided on triangular cuboid laid on its side as base (see pic). It will prevent any sideways movement and gives a window for back-and-forth movement.
I can see that there's more than one way to set that up, but I need to avoid unreasonable height or mass. For that I need equations to tinker with, which I can't come up with by myself.
Assume that:
Ground is rough enough so it will never slide
The mass centre, the applied force, and the target's bullseye are all aligned at the top of the cuboid.
Force will be applied perpendicularly to target. If that's unrealistic then it should be any angle from parallel to perpendicular to the ground. Sketch:
Answer: There's a lot more to this question than OP imagines.
If $F$ was a continuous force, then from the geometry and with trigonometry $F_1$ could easily be calculated:
$$F_1=F\cos (\pi-2\alpha)$$
This creates counterclockwise torque about the forward pivot point of the stand:
$$\tau_1=F_1r$$
Which tries to topple the stand.
The weight $mg$ provides an opposing clockwise torque $\tau_2$:
$$\tau_2=mgr\cos\alpha$$
If there is a net, positive torque:
$$\tau_{net}=\tau_1-\tau_2>0$$
Then angular acceleration around the forward pivot point will occur, as per Newton. The ensemble will topple because as rotation proceeds, $\tau_2$ actually vanishes.
But that's far from the end of it.
I'm designing a throwing target that will behave like a person when struck.
This suggests that the target will be struck by a mass-bearing projectile. In that case $F$ is not constant but an impact force, a short-lived impulse. Its size and duration cannot be calculated accurately or easily because they depend on how elastic the collision is: I assume the projectile will bounce off the target (only in the case of a very sticky target would that not be true).
Possibly the easiest approach would be to assume the projectile transfers some of its kinetic energy to the ensemble, so that it will have a certain amount of rotational kinetic energy $K_R$, immediately after impact:
$$K_R=\frac12 I\omega_0^2$$
Where $I$ is the inertial moment of the ensemble about the forward pivot point and $\omega_0$ the angular velocity of the ensemble, immediately after impact.
Since the ensemble is now rotating, the previously mentioned $\tau_2$ provides a decelerating torque:
$$\tau_2=I\dot{\omega}$$
As the point $m$ increases in height during rotation, its potential energy $U$ increases. When the forward bar has become vertical, the height increase $\Delta y$ is:
$$\Delta y=r-h\:\text{with }h=r\sin\alpha$$
And the corresponding change in potential energy is:
$$\Delta U=mg\Delta y=mg(r-h)=mgr(1-\sin\alpha)$$
As during the rotation rotational kinetic energy is converted to potential energy, the ensemble will topple if:
$$K_R>\Delta U$$
Or:
$$\frac12 I\omega_0^2>mg(r-h)$$
This is the condition for toppling.
Let's assume a projectile of mass $M$ is thrown at the target at speed $v$. Its kinetic energy would be:
$$K_P=\frac12 Mv^2$$
Now we assume a fraction $\epsilon$ of this is transferred to the ensemble. Its rotational kinetic energy $K_R$ would then become:
$$K_R=\frac12 \epsilon Mv^2$$
The remaining fraction of kinetic energy would be carried off by the bouncing projectile:
$$K_P=K_R+K'_P\implies K'_P=\frac12 (1-\epsilon)Mv^2$$
And the condition for toppling:
$$\frac12 \epsilon Mv^2>mg(r-h)=mgr(1-\sin\alpha)$$
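To make the condition concrete, here is a numerical sketch; every parameter value below (masses, geometry, and the energy-transfer fraction $\epsilon$) is an assumed placeholder, not something derived in this answer:

```python
import math

g = 9.81                  # gravitational acceleration, m/s^2
m = 10.0                  # target mass concentrated at the top (kg) -- assumed
r = 1.5                   # distance from forward pivot to the mass (m) -- assumed
alpha = math.radians(60)  # base angle of the stand -- assumed
M = 0.5                   # projectile mass (kg) -- assumed
eps = 0.3                 # fraction of projectile KE transferred -- assumed

# Potential-energy barrier the rotation must climb: dU = m*g*r*(1 - sin(alpha))
dU = m * g * r * (1 - math.sin(alpha))

# Toppling condition 0.5*eps*M*v^2 > dU gives the minimum projectile speed
v_min = math.sqrt(2 * dU / (eps * M))
print(f"barrier {dU:.1f} J, minimum throw speed {v_min:.1f} m/s")
```

With these placeholder numbers the barrier is about 20 J, so a half-kilogram projectile would need to arrive at roughly 16 m/s for the assumed energy-transfer fraction.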
The problem remains to find a useful expression for $\epsilon$ from all relevant parameters. | {
"domain": "physics.stackexchange",
"id": 34883,
"tags": "newtonian-mechanics"
} |
Capturing a light beam | Question: For a given container made of an extremely reflective surface, is it possible to shine a beam of light in, and with no 'fiddling' (i.e. closing the hole, tilting the object) to contain the beam for an infinite amount of time (not a very long time, but such that it will never escape). Consider the following
Something like this. Except I feel like the light will escape given enough time. Also, the object has to be finite in size (infinite is cheating). If there's a proof that no such container exists then that's fine too.
Ignore factors such as dissipation by heat, or quantum tunneling, and just assume a perfect environment with perfect materials.
Answer: With classical ray optics, yes. For example, it is possible to trap a ray between two reflecting spheres. Here are two pictures I created with one of my old computer programs (actually this was a teamwork project).
On these two pictures, the incident ray was directed at slightly different angles. It is quite clear that there exists some limiting case between these two angles in which the light will escape neither downwards nor upwards. Obviously, we could also use such a system in a bottle.
However, it should be noted that the path of the ray is unstable. If we had also taken wave optics into account, the wave would have "leaked" away very quickly from the unstable equilibrium. It turns out that, taking quantum mechanics into account, such a system could be used to heat a hotter body (inside the bottle) with a colder body (outside), which is a contradiction. Thus, in real life, this is impossible. | {
"domain": "physics.stackexchange",
"id": 26419,
"tags": "optics, visible-light, reflection"
} |
What does it mean that light "will not reach your eye unless your eye is positioned at just the right place" in specular reflection? | Question: I am confused about the following passage from my textbook:
When light is incident upon a rough surface, even microscopically rough such as this page, it is reflected in many directions, as shown in Fig. 23-3. This is called diffuse reflection. The law of reflection still holds, however, at each small section of the surface. Because of diffuse reflection in all directions, an ordinary object can be seen at many different angles by the light reflected from it. When you move your head to the side, different reflected rays reach your eye from each point on the object (such as this page), Fig. 23-4a. Let us compare diffuse reflection to reflection from a mirror, which is known as specular reflection. ("Speculum" is Latin for mirror.) When a narrow beam of light shines on a mirror, the light will not reach your eye unless your eye is positioned at just the right place where the law of reflection is satisfied, as show in in Fig. 23-4b. This is what gives rise to the special image-forming properties of mirrors.
The highlighted part is unclear. It is said that vision doesn't occur if I am not at the right position but I see the image as well as I am in front the mirror. So what is the meaning of the highlighted part?
Answer:
but I see the image as well as I am in front the mirror
It's because the light that reflects off you before striking the mirror comes from diffuse reflection. Each point of your body acts as an ambient light source, sending a bundle of rays in many directions; some of those rays happen to hit the mirror at just the right angle to be reflected toward your eyes. So the mirror performs specular reflection, but your body performs diffuse reflection. That is why you can see yourself in the mirror even while standing directly in front of it.
To verify that the mirror's reflection is specular, shine parallel rays (for example, a laser) at the mirror: you will notice the laser dot appears only at the reflected angle, which illustrates that other directions do not receive the reflected laser light. | {
"domain": "physics.stackexchange",
"id": 70614,
"tags": "optics, visible-light, reflection, geometric-optics"
} |
Code to modify DOM element using DOM.style property | Question: So, I've a code that is below which works just fine:
<html>
<head>
<title> Javascript Practice </title>
</head>
<body>
<div id="myDiv">
</div>
<script>
var a = 0;
var b = 255;
var c = 0;
var color2 = 'RGB(' + a + ',' + b + ',' + c + ')';
document.getElementById("myDiv").style.backgroundColor = color2;
document.getElementById("myDiv").style.width = 50;
document.getElementById("myDiv").style.height = 300;
</script>
</body>
</html>
I've got a lot of feedback saying that when setting values through the DOM style property, I need to use strings with units, e.g. set DOM.style.width not to a bare number like I've done here, but to something like w + "px".
I'm puzzled because I'm using Brackets as a code editor and running this in Chrome. My code runs just fine: every time I load the HTML page, it creates a div with the required dimensions. Everyone tells me it won't work, which is what's confusing me.
TIA.
Answer: Inline style lengths require units. In quirks mode, browsers will assume pixels as the unit if given a bare number instead of a length, which is why your page happens to work: your HTML has no doctype, so Chrome renders it in quirks mode. In standards mode, the CSS parser rejects unitless numbers for lengths (except for line-height and any other properties where a plain number has its own meaning, and except in shorthands). If you are not sure how the browser will handle the value, specify the units.
Bonus: Here is an extensible version of your script.
const state = {
    color : { r : 0, g : 255, b : 0 },
    dimensions : { width : 50, height : 300 }
}
const applyState = (el, state) => {
    Object.assign(el.style, {
        backgroundColor: `rgb(${state.color.r}, ${state.color.g}, ${state.color.b})`,
        width: `${state.dimensions.width}px`,
        height: `${state.dimensions.height}px`
    })
}
applyState(document.getElementById('my-div'), state)
<div id="my-div"></div> | {
"domain": "codereview.stackexchange",
"id": 37653,
"tags": "javascript, dom"
} |
What is meant by the term "computational basis"? | Question: What is meant by the term "computational basis" in the context of quantum computing and quantum algorithms?
Answer: When we have just one qubit, there's nothing particularly special about the computational basis; it's just nice to have a canonical basis. In practice you could think that first you implement a gate $Z$ with $Z^2 = I$ and $Z\neq I$, and then you say that the computational basis is the eigenbasis of this gate.
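As a small numerical illustration of this point (using NumPy; writing $Z$ in the standard basis is itself the conventional choice being discussed):

```python
import numpy as np

# Pauli-Z in the standard (computational) basis: Z^2 = I and Z != I
Z = np.diag([1.0, -1.0])
assert np.allclose(Z @ Z, np.eye(2)) and not np.allclose(Z, np.eye(2))

# Its eigenvectors are (up to sign and ordering) exactly |0> and |1>
vals, vecs = np.linalg.eigh(Z)
print(vals)          # eigenvalues -1 and +1
print(np.abs(vecs))  # columns are the computational basis vectors
```

Declaring the eigenbasis of such a gate to be "the" computational basis is exactly the canonical-but-arbitrary choice described above.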
However, when we talk about multi-qubit systems, the computational basis is meaningful. It comes from picking a basis for each qubit, and then taking the basis which is the tensor product of all these bases. Picking the same basis for each qubit is nice just to keep everything uniform, and calling them $0$ and $1$ is a nice notational choice. What's really important is that our basis states are product states across our qubits: the computational basis states can be prepared by initializing our qubits separately and then bringing them together. This isn't true for arbitrary states! For example, the cat state $\frac1{\sqrt2}\left(|0^n\rangle + |1^n\rangle\right)$ requires a log-depth circuit in order to prepare it from a product state. | {
"domain": "quantumcomputing.stackexchange",
"id": 42,
"tags": "quantum-state, terminology-and-notation"
} |
How to prove this simple randomized algorithm is 2-approximate for MAS? | Question: The Maximum Acyclic Subgraph (MAS) problem is:
Given a directed graph $G = (V, E)$, find
the largest subset of edges which are acyclic.
In this paper the authors state the following algorithm:
A simple randomized algorithm achieves a factor 1/2 for
this problem: Simply pick a random ordering of the vertices. In fact, one can achieve factor 1/2 by an even simpler
algorithm: Pick an arbitrary ordering of the vertices $\pi$ and
its reverse $\pi^R$. One of them has at least 1/2 fraction of the
edges in the forward direction.
How can we prove that this is indeed a 2-approximation algorithm? I'm having trouble understanding what a reverse of an ordering could be, and what a "forward" direction is (since we're not dealing with network flow).
Answer: To prove the approximation guarantee for any algorithm, we mostly aim to find a lower bound on the optimal value (for minimization problem) or an upper bound on the optimal value (for maximization problem).
Since yours is a maximization problem, a trivial upper bound on the optimal value is $|E|$, since any solution is a subset of $E$ and thus contains at most $|E|$ edges.
Now, let us see how we get $2$-approximation. Take any arbitrary ordering of vertices: $v_1, \dotsc, v_n$.
Let $E_f$ be the set of edges that go forward, i.e., all edges in $E_f$ are of the form $(v_i, v_j)$ with $i < j$. It is easy to see that this subset of edges forms an acyclic subgraph.
Similarly, let $E_b$ be the set of edges that go backward, i.e., all edges in $E_b$ are of the form $(v_i, v_j)$ with $i > j$. This subset of edges forms an acyclic subgraph as well.
It is easy to see that either $|E_f| \geq |E|/2$ or $|E_b| \geq |E|/2$.
Since $\mathsf{OPT} \leq |E|$, we get $|E_f| \geq \mathsf{OPT}/2$ or $|E_b| \geq \mathsf{OPT}/2$.
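A minimal sketch of this ordering trick (assuming the graph is given as an edge list over vertices $0, \dotsc, n-1$ and taking the identity ordering as the arbitrary $\pi$, so its reverse $\pi^R$ corresponds to the backward set):

```python
def mas_ordering(edges):
    """Keep the larger of the forward/backward edge sets for the ordering 0, 1, ..., n-1."""
    forward = [(u, v) for (u, v) in edges if u < v]   # edges consistent with pi
    backward = [(u, v) for (u, v) in edges if u > v]  # edges consistent with pi reversed
    return forward if len(forward) >= len(backward) else backward

# Example: a directed 3-cycle. No acyclic subgraph keeps all 3 edges,
# but the algorithm always keeps at least half of them.
kept = mas_ordering([(0, 1), (1, 2), (2, 0)])
print(kept)  # [(0, 1), (1, 2)] -- 2 of 3 edges, acyclic
```

Every edge lands in exactly one of the two sets, so the larger set has at least $|E|/2 \geq \mathsf{OPT}/2$ edges, matching the argument above.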
In other words, either $E_f$ or $E_b$ is a $2$-approximation. The algorithm simply chooses the one with the maximum cardinality. | {
"domain": "cs.stackexchange",
"id": 21195,
"tags": "approximation"
} |
How to time-index the ROS output txt file | Question:
Hello
I'm using ROS Fuerte and Ubuntu 10.04. I would like to save some outputs of my nodes as a text file. Is there any option to time-index those outputs?
Thanks
Originally posted by Astronaut on ROS Answers with karma: 330 on 2012-10-16
Post score: 0
Answer:
rostopic echo /data > data.txt
Originally posted by Zargol with karma: 206 on 2014-04-10
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 11398,
"tags": "ros"
} |
ROS Navigation Stack: Global Planner | Question:
Hello,
The tutorial for the Navigation Stack states that, since the global planner assumes a circular robot, it produces waypoints that are optimistic for the actual robot footprint, and therefore the generated path may be infeasible to follow.
I was wondering if this is still the case if we use the circumscribed circle of the robot's footprint. In that case, isn't it even more conservative?
Thanks.
Originally posted by ROSCMBOT on ROS Answers with karma: 651 on 2014-06-04
Post score: 0
Answer:
If you pass in the circumscribed circle as the inner radius, it will be conservative, i.e. plans should be feasible. On the other hand, you might lose plans that would work, i.e. it's incomplete.
All this relates to global planners such as NavFn. There are global planners that plan actual trajectories in the configuration space and are thus correct (but usually slower).
Originally posted by dornhege with karma: 31395 on 2014-06-05
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by ROSCMBOT on 2014-06-05:
Thanks.
1- Does NavFn consider the radius of the inscribed circle of the robot's footprint?
2- Could you name Global Planners in ROS that plan actual trajectories in the configuration space?
Comment by dornhege on 2014-06-05:
1. Yes. 2. One example that can do that: http://wiki.ros.org/sbpl | {
"domain": "robotics.stackexchange",
"id": 18169,
"tags": "ros, navigation, stack"
} |
Comparing IDs in two arrays | Question: I have two arrays that I need to compare if their IDs are equal. This is how I am currently doing it:
PFQuery *earnedQuery = [PFQuery queryWithClassName:@"EarnedAchievement"];
[earnedQuery whereKey:@"user" equalTo:[PFUser currentUser]];
[earnedQuery findObjectsInBackgroundWithBlock:^(NSArray *objects, NSError *error) {
    for (Achievement *achievement in allAchievements) {
        for (PFObject *earnedAchievement in objects) {
            if ([achievement.id isEqualToString:earnedAchievement[@"achievmentId"]]) {
                //user has earned this achievement
                achievement.earned = [NSNumber numberWithBool:YES];
                achievement.earnedDate = earnedAchievement[@"earnedOn"];
            }
        }
    }
    [context save:&error];
}];
Is there a better or more efficient way to do this outside of the two for in loops?
Answer: The first thing we can do to improve the efficiency of this code is to remember to check for an error and make sure that the returned objects array actually contains something before we do any work. Generally, code from Apple libraries would guarantee our objects array to be empty if there were any sort of error, but some libraries don't make that guarantee.
We can also store our allAchievements locally as an NSDictionary rather than an array. If allAchievements is an NSDictionary, our code looks something more like this:
[earnedQuery findObjectsInBackgroundWithBlock:^(NSArray *objects, NSError *error) {
for (PFObject *earnedAchievement in objects) {
Achievement *achievement = allAchievements[earnedAchievement[@"achievmentId"]];
if (achievement) {
achievement.earned = @YES;
achievement.earnedDate = earnedAchievement[@"earnedOn"];
}
}
[context save:&error];
}];
Now we eliminate the nested loop. And we only run through the outer loop once per earnedAchievement (which may be zero times).
Also, keep in mind that everything within the findObjectsInBackgroundWithBlock: is executing in the background.
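The same optimization can be sketched language-agnostically (a Python sketch with hypothetical data; none of these names come from the original post): index one collection by the join key once, then replace the nested scan with constant-time lookups.

```python
# Index-then-lookup sketch of the optimization described above.
# All data and key names here are hypothetical placeholders.
achievements = [
    {"id": "a1", "earned": False},
    {"id": "a2", "earned": False},
]
earned_records = [{"achievementId": "a2", "earnedOn": "2014-06-01"}]

# One pass to build the index (the NSDictionary in the Objective-C version).
by_id = {a["id"]: a for a in achievements}

# One pass over the earned records; no inner loop.
for record in earned_records:
    match = by_id.get(record["achievementId"])
    if match is not None:
        match["earned"] = True
        match["earnedOn"] = record["earnedOn"]

print(achievements[1]["earned"])  # True
```

This turns the O(n·m) nested loop into O(n + m): one pass to build the index, one pass to consult it.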
Some other comments...
I would eliminate the use of magic strings. Define all of your keys as constants somewhere. This serves multiple purposes.
If we ever decide to actually change the key, it's changed everywhere all at once.
Typing magic strings is prone to typos, whereas with a constant variable a typo simply won't compile (unless we typo into another variable name).
Typing magic strings won't auto-complete (which makes typos more likely, see point 2), but constant variables will. | {
"domain": "codereview.stackexchange",
"id": 13628,
"tags": "objective-c"
} |
Which was the first restriction endonuclease to be isolated? | Question: I'm confused about the distinction between the first restriction endonuclease to be isolated and the first to be discovered.
In class XII NCERT, it says that,
"In the year 1963, the two enzymes responsible for restricting the growth of bacteriophage in Escherichia coli were discovered. One of these added methyl groups to the DNA, while the other cut DNA. The latter was called restriction endonuclease".
Now, "one of these added methyl groups to the DNA" refers to DNA methylase clearly and "the other cut DNA" must refer to EcoR1, I think?
In the next paragraph it is given that,
"The first restriction endonuclease, Hind II, whose functioning depended on a specific DNA nucleotide sequence was isolated and characterised five years later".
What I understand from this is that,
Hind II was the first restriction endonuclease to be discovered but EcoR1 the first to be isolated?
I went on many sites, but they all use these terms interchangeably.
Answer: The restriction endonuclease (RE) isolated from E. coli K strain was a type I restriction endonuclease, identified/characterized by Meselson and Yuan1 in 1968. Type I REs are site specific, but cleave DNA randomly.
The one isolated from Haemophilus influenzae by Smith and Wilcox2 in 1968 was the first type II RE and is now known by the name HindII - the isolate they had was actually a heterogeneous mixture with HindIII, for which there are no sites in one of the viruses they were digesting, so they were able to work out the specific site of digestion for HindII, but not HindIII. Type II REs are site specific and cleave at specific sites too, which makes them useful in molecular biology, as opposed to type I REs, which aren't so useful.
EcoRI (type II RE) wasn't isolated until 1971, and was done by Yoshimori as his PhD thesis at University of California, San Francisco (see reference 27 in Roberts; ref 3 below).
Roberts (2005) gives a succinct history of it all3. See the sections on Restriction Enzymes and Polyacrylamide Gel Electrophoresis for information about the discovery of type I and II REs respectively.
Refs:
Meselson M, Yuan R. DNA restriction enzyme from E. coli. Nature. 1968 Mar 23;217(5134):1110-4. doi: 10.1038/2171110a0. PMID: 4868368.
Smith HO, Wilcox KW. A restriction enzyme from Hemophilus influenzae. I. Purification and general properties. J Mol Biol. 1970 Jul 28;51(2):379-91. doi: 10.1016/0022-2836(70)90149-x. PMID: 5312500.
Roberts RJ. How restriction enzymes became the workhorses of molecular biology. Proc Natl Acad Sci U S A. 2005 Apr 26;102(17):5905-8. doi: 10.1073/pnas.0500923102. Epub 2005 Apr 19. PMID: 15840723; PMCID: PMC1087929. | {
"domain": "biology.stackexchange",
"id": 12325,
"tags": "biotechnology, restriction-enzymes"
} |
Starting point for a derivation of fictitious forces | Question: I came across this expression at the start of a derivation of fictitious forces:
$$(dA/dt)_L = (dA/dt)_R + \omega \times A$$
Where the $L$ subscript refers to the laboratory (inertial) reference frame and the $R$ subscript is the rotating frame. $\omega$ is the angular velocity and $A$ is a vector.
This equation makes intuitive sense to me, the change in the laboratory frame has to be some combination of the change in the rotating frame as well as being augmented by the angular velocity. However, I would like to see an actual derivation of this.
Answer: You haven't given enough information: What you are asking for is that missing bit. I think I can guess just what that is: If $\vec{A}$ is a position vector (e.g. points from the origin to a point that is represented in both the inertial and the non-inertial frame of reference) and your non-inertial frame of reference rotates at a constant angular velocity $\left| \vec{\omega} \right|$ around an axis $\vec{\omega}$ through the origin, it all happens to make sense because then $-\vec{\omega} \times \vec{A}$ is the difference in the time-derivative $\frac{d}{dt} \left( \vec{A}_R - \vec{A}_L \right)$ where I use the subscript L and R the way you did.
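One standard way to carry out that derivation (a sketch of my own: I expand $\vec{A}$ on the rotating frame's orthonormal basis vectors $\hat{e}_i(t)$, which the answer does not name explicitly) is:

```latex
% Write A in the rotating basis: \vec{A} = \sum_i A_i(t)\,\hat{e}_i(t).
% Each basis vector is carried along by the rotation:
% (d\hat{e}_i/dt)_L = \vec{\omega}\times\hat{e}_i.
\begin{aligned}
\left(\frac{d\vec{A}}{dt}\right)_{\!L}
&= \sum_i \frac{dA_i}{dt}\,\hat{e}_i
 + \sum_i A_i \left(\frac{d\hat{e}_i}{dt}\right)_{\!L} \\
&= \left(\frac{d\vec{A}}{dt}\right)_{\!R}
 + \sum_i A_i\,\vec{\omega}\times\hat{e}_i
 \;=\; \left(\frac{d\vec{A}}{dt}\right)_{\!R} + \vec{\omega}\times\vec{A},
\end{aligned}
% since differentiating only the components A_i is exactly what an
% observer fixed in the rotating frame does.
```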
To make this obvious, it would be neat to go to the previous (zeroth?) step, writing an equation relating $\vec{A}_L$ and $\vec{A}_R$ before taking the time-derivative. You probably have to do it yourself, because there is more than one way to express it, and to see what is happening, you should choose a representation that you are sufficiently at ease with to do the vectorial differentiation. | {
"domain": "physics.stackexchange",
"id": 20661,
"tags": "homework-and-exercises, newtonian-mechanics, reference-frames, rotation, rotational-kinematics"
} |
What are "electron holes" in semiconductors? | Question: I'm tutoring senior high school students. So far I've explained them the concepts of atomic structure (Bohr's model & Quantum mechanical model) very clearly. Now the next topic to be taught is semiconductors.
I myself am not convinced by the concept of electron holes. If there is no electron then there is no electron. How can it be a hole? We define a hole when there is something everywhere except at one place. But inside an atom, how can we define a HOLE?
Kindly explain it with the help of Bohr's model.
What was the need for introducing such an abstract concept in semiconductors?
Answer: The notion of a particle in nonrelativistic quantum mechanics is very general: anything that can have a wavefunction, a probability amplitude for being at different locations, is a particle. In a metal, electrons and their associated elastic lattice deformation clouds travel as a particle. These effective electron-like negative carriers are electron quasiparticles, and these quasiparticles have a negative charge, which can be seen by measuring the Hall conductivity. Their velocity gives rise to a potential difference transverse to a wire in an external magnetic field which reveals the sign of the carriers.
But in a semiconductor, the objects which carry the charge can be positively charged, which is physically accurate--- a current in such a material will give an opposite sign Hall effect voltage.
To understand this, you must understand that the electron eigenstates in a periodic lattice potential are defined by bands, and these bands have gaps. When you have an insulating material, the band is fully filled, so that there is an energy gap for getting electrons to move. The energy gap generically means that an electron with wavenumber k will have energy:
$$ E= A + B k^2 $$
Where A is the band gap, and B is the (reciprocal of twice the) effective mass. This form is generic, because electrons just above the gap have a minimum energy, and the energy goes up quadratically from a minimum. This quadratic energy dependence is the same as for a free nonrelativistic particle, and so the motion of the quasiparticles is described by the same Schrodinger equation as a free nonrelativistic particle, even though they are complicated tunneling excitations of electrons bound to many atoms.
Now if you dope the material, you add a few extra electrons, which fill up these states. These electrons fill up k up to a certain amount, just like a free electron Fermi-gas and electrons with the maximum energy can be easily made to carry charge, just by jumping to a slightly higher k, and this is again just like a normal electron Fermi gas, except with a different mass, the effective mass. This is a semiconductor with a negative current carrier.
But the energy of the electrons in the previous band has a maximum, so that their energy is generically
$$ E = -Bk^2$$
Since the zero of energy is defined by the location of the band, and as you vary k, the energy goes down. These electrons have a negative nonrelativistic effective mass, and their motion is crazy--- if you apply a force to these electrons, they move in the opposite direction! But this is silly--- these electron states are fully occupied, so the electrons don't move at all in response to an external force, because all the states are filled, they have nowhere to move to.
So in order to get these electrons to move, you need to remove some of them, to allow electrons to fill these gaps. When you do, you produce a sea of holes up to some wavenumber k. The important point is that these holes, unlike the electrons, have a positive mass, and obey the usual Schroedinger equation for fermions. So you get effective positively charged positive effective mass carrier. These are the holes.
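The sign flip can be stated in one line, in the document's own convention ($\hbar = 1$, so $B$ is the reciprocal of twice the effective mass, and the filled band has $E_e(k) = -Bk^2$); this is my compact restatement, not text from the original answer:

```latex
% Removing an electron with wavevector k_e creates a hole with
% k_h = -k_e whose energy is minus the removed electron's energy:
E_h(k_h) \;=\; -\,E_e(-k_h) \;=\; +\,B\,k_h^2,
\qquad
m^*_{\text{hole}} \;=\; \frac{1}{2B} \;>\; 0,
% so the hole disperses like an ordinary free particle with positive
% effective mass, as stated in the paragraph above.
```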
The whole situation is caused by the generic shape of the energy as a function of k in the vicinity of a maximum/minimum, as produced by a band-gap.
Bohr model holes
You can see a kind of electron hole already in the Bohr model when you consider Moseley's law, but these holes are not the physical holes of a semiconductor. If you knock out an electron from a K-shell of an atom, the object you have has a missing electron in the 1s state. This missing electron continues to orbit the nucleus, and it is pretty stable, in that the decay takes several orbits to happen.
The many-electron system with one missing electron can be thought of as a single-particle hole orbiting the nucleus. This single particle hole has a positive charge, so it is repelled by the nucleus, but it has a negative mass, because we are not near a band-gap, it's energy as a function of k is the negative of a free electron's energy.
This negative-mass hole can be thought of as orbiting the nucleus, held in place by its repulsion from the nucleus (remember that negative mass means that the force is in the opposite direction to the acceleration). This crazy system decays as the hole moves down in energy by moving out from the nucleus to higher Bohr orbits.
This type of hole-description does not appear in the literature for Moseley's law, but it is a very simple approximation which is useful, because it gives a single particle model for the effect. The approximation is obviously wrong for small atoms, but it should be exact in the limit of large atoms. There are unexplained regularities in Moseley's law that might be explained by the single-hole picture, although again, this "hole" is a negative mass hole, unlike the holes in a positive doped semiconductor. | {
"domain": "physics.stackexchange",
"id": 42781,
"tags": "solid-state-physics, semiconductor-physics"
} |
asctec_drivers and mav_tools Build errors on Ubuntu 11.04 running ROS | Question:
For our pelican running ROS on Ubuntu 11.04, I've got similar errors from building both asctec_drivers and mav_tools from git cloned versions of both pkgs.
Below is the errors after calling "rosmake asctec_drivers --rosdep-install". Presently I have an impression that it might be an issue of having the right path to pkgs or stacks ? But I still cannot figure this out.
NB: I posted this yesterday at the asctec-users forum before discovering this forum today.
[ rosmake ] Packages requested are: ['asctec_drivers']
[ rosmake ] Logging to directory/home/inf4/.ros/rosmake/rosmake_output-20110721-221425
[ rosmake ] Expanded args ['asctec_drivers'] to:
['asctec_msgs', 'pelican_urdf', 'asctec_proc', 'asctec_mon', 'asctec_autopilot']
[ rosmake ] Generating Install Script using rosdep then executing. This may take a minute, you will be prompted for permissions. . .
rosdep executing this script:
{{{
set -o errexit
#No Packages to install
}}}
[ rosmake ] rosdep successfully installed all system dependencies
[rosmake-0] Starting >>> rosbuild [ make ]
[rosmake-0] Finished <<< rosbuild ROS_NOBUILD in package rosbuild
No Makefile in package rosbuild
[ rosmake ] [ make ] [ rosbuild: 0.0 sec ] [ 1 Active 1/50 Complete ]
[rosmake-0] Starting >>> roslang [ make ]
[rosmake-1] Starting >>> cpp_common [ make ]
[rosmake-0] Finished <<< roslang ROS_NOBUILD in package roslang
No Makefile in package roslang
[rosmake-1] Finished <<< cpp_common ROS_NOBUILD in package cpp_common
[rosmake-0] Starting >>> roslib [ make ]
[rosmake-1] Starting >>> roscpp_traits [ make ]
[ rosmake ] [ make ] [ roslib: 0.0 sec ] [ roscpp_... [ 2 Active 4/50 Complete ]
[rosmake-0] Finished <<< roslib ROS_NOBUILD in package roslib
[rosmake-1] Finished <<< roscpp_traits ROS_NOBUILD in package roscpp_traits
[rosmake-1] Starting >>> rostime [ make ]
[rosmake-1] Finished <<< rostime ROS_NOBUILD in package rostime
[rosmake-0] Starting >>> xmlrpcpp [ make ]
[rosmake-1] Starting >>> roscpp_serialization [ make ]
[rosmake-1] Finished <<< roscpp_serialization ROS_NOBUILD in package roscpp_serialization
[rosmake-0] Finished <<< xmlrpcpp ROS_NOBUILD in package xmlrpcpp
[rosmake-1] Starting >>> rosconsole [ make ]
[rosmake-1] Finished <<< rosconsole ROS_NOBUILD in package rosconsole
[rosmake-0] Starting >>> std_msgs [ make ]
[rosmake-0] Finished <<< std_msgs ROS_NOBUILD in package std_msgs
[ rosmake ] [ make ] [ std_msgs: 0.0 sec ] [ 1 Active 10/50 Complete ]
[rosmake-1] Starting >>> pelican_urdf [ make ]
[rosmake-0] Starting >>> rosgraph_msgs [ make ]
[rosmake-0] Finished <<< rosgraph_msgs ROS_NOBUILD in package rosgraph_msgs
[rosmake-0] Starting >>> roscpp [ make ]
[rosmake-0] Finished <<< roscpp ROS_NOBUILD in package roscpp
[rosmake-0] Starting >>> asctec_msgs [ make ]
[ rosmake ] [ make ] [ pelican_urdf: 0.1 sec ] [ ... [ 2 Active 13/50 Complete ]
[ rosmake ] All 2 lines
{-------------------------------------------------------------------------------
mkdir: kann Verzeichnis „build“ nicht anlegen: Keine Berechtigung [German: cannot create directory "build": permission denied]
-------------------------------------------------------------------------------}
[ rosmake ] Output from build of package pelican_urdf written to:
[ rosmake ] /home/inf4/.ros/rosmake/rosmake_output-20110721-221425/pelican_urdf/build_output.log
[rosmake-1] Finished <<< pelican_urdf [FAIL] [ 0.10 seconds ]
[ rosmake ] Halting due to failure in package pelican_urdf.
[ rosmake ] Waiting for other threads to complete.
[ rosmake ] All 2 lines
{-------------------------------------------------------------------------------
mkdir: kann Verzeichnis „build“ nicht anlegen: Keine Berechtigung [German: cannot create directory "build": permission denied]
-------------------------------------------------------------------------------}
[ rosmake ] Output from build of package asctec_msgs written to:
[ rosmake ] /home/inf4/.ros/rosmake/rosmake_output-20110721-221425/asctec_msgs/build_output.log
[rosmake-0] Finished <<< asctec_msgs [FAIL] [ 0.10 seconds ]
[ rosmake ] Halting due to failure in package asctec_msgs.
[ rosmake ] Waiting for other threads to complete.
[ rosmake ] Results:
[ rosmake ] Built 15 packages with 2 failures.
[ rosmake ] Summary output to directory
[ rosmake ] /home/inf4/.ros/rosmake/rosmake_output-20110721-221425
Originally posted by william nguatem on ROS Answers with karma: 1 on 2011-07-21
Post score: 0
Answer:
The problem you have is that the package you're trying to build has read-only permissions on your filesystem. If you're using a binary installation you shouldn't need to build the package; it's already built. If you want to modify the package you should be downloading the packages you want to modify from source and adding them to the front of your ROS_PACKAGE_PATH.
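A minimal sketch of that overlay approach (the directory names here are examples only, not real install locations):

```shell
# Put a writable checkout of the source at the front of ROS_PACKAGE_PATH
# so rosmake builds your copy instead of the read-only system install.
mkdir -p "$HOME/ros_overlay"
# ... clone or copy the asctec_drivers source into $HOME/ros_overlay ...
export ROS_PACKAGE_PATH="$HOME/ros_overlay:$ROS_PACKAGE_PATH"
echo "$ROS_PACKAGE_PATH"   # the overlay directory should now be listed first
```

You would typically add the `export` line to `~/.bashrc` so it survives new shells.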
Originally posted by tfoote with karma: 58457 on 2011-07-30
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 6229,
"tags": "ros"
} |
How is chemical and dynamic equilibrium related? | Question: I have been told that dynamic equilibrium is attained when the reactants and products of a system has achieved the same concentration or when their rates of formation are the same. And I also know that even though nothing seems to go on in a dynamic equilibrium there's still so much activity.
Now assuming I've got that right, I want to know how chemical equilibrium is any different from what I mentioned about dynamic equilibrium.
Also, can chemical and dynamic equilibrium coexist?
Answer: Background: It is generally the case that there is an energy barrier between reactants and products, sometimes called an activation energy, and this is smaller for reactants than products so that a reaction proceeds towards products.
Suppose now that there is a reaction such that $\ce{ A \to B}$ and initially only A is present so that the rate of forward reaction is large, i.e. lots of A and almost no B make the forwards reaction faster than the reverse reaction. As soon as some B is present it can react back to make A, but $\ce{ B \to A}$ will be slow compared to $\ce{ A \to B}$ because there is less B (smaller concentration) and the activation barrier is larger so the 'rate constant' is smaller.
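In symbols, the balance point being approached is (a first-order sketch; the rate constants $k_f$ and $k_r$ are my notation, not the answer's):

```latex
% Forward rate k_f[A], reverse rate k_r[B]. "Dynamic equilibrium" is the
% condition that the two opposing rates are equal, which fixes only the
% ratio of concentrations (the equilibrium constant), not the
% concentrations themselves:
k_f\,[\mathrm{A}]_{\mathrm{eq}} \;=\; k_r\,[\mathrm{B}]_{\mathrm{eq}}
\quad\Longrightarrow\quad
K \;=\; \frac{[\mathrm{B}]_{\mathrm{eq}}}{[\mathrm{A}]_{\mathrm{eq}}}
   \;=\; \frac{k_f}{k_r}.
```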
Eventually the amount of A is reduced and B increased until both rates are the same and equilibrium is achieved. Note that the concentrations need not be the same, in fact they usually are not. Also note that molecules of A continuously convert to B; $\ce{ A \to B}$ and similarly B convert to A; $\ce{ B \to A}$ which is to say that the normal state of affairs is the 'dynamic equilibrium' and simply using the word 'equilibrium' implicitly assumes that this is the case. | {
"domain": "chemistry.stackexchange",
"id": 9447,
"tags": "equilibrium"
} |
Jetson Nano comes with OpenCV 4.1.1., do I need to downgrade to 3.2. for melodic? | Question:
I just got a Jetson Nano running Ubuntu 18.04 and it comes with OpenCV 4.1.1. pre installed. I've read ROS melodic is meant to work with OpenCV 3.2. and I'm getting some catkin make errors due to conflict between versions, for example:
usr/bin/ld: warning: libopencv_imgcodecs.so.3.2, needed by /opt/ros/melodic/lib/libcv_bridge.so, may conflict with libopencv_imgcodecs.so.4.1
Should I downgrade my system OpenCV to 3.2.?
Originally posted by jorgemia on ROS Answers with karma: 98 on 2020-03-28
Post score: 0
Original comments
Comment by floda on 2020-06-21:
Do you mind sharing your recipe in recompiling cv_bridge? I am badly stuck. Thank you.
Comment by Akr2712 on 2020-08-18:
Hi, Please check the this vision_opencv to fix the issues with cv_bridge.
Answer:
I encountered the same issue lately.
You can downgrade with sudo apt -y --allow-downgrades install libopencv-dev=3.2.0+dfsg-4ubuntu0.1 and hold (avoid it being upgraded) with sudo apt-mark hold libopencv-dev.
If you have time, try to work on making your packages compatible with OpenCV 4 (e.g. by removing usage of the C API) since focal, and therefore noetic, drops support for OpenCV 3. To make everything work on melodic, your best bet right now is to downgrade and hold.
Originally posted by kmfrick with karma: 26 on 2020-05-08
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by jorgemia on 2020-05-09:
Thanks! Managed to find and install from source releases of certain packages that were compatible with OpenCV 4 but it has been a real pain to get it to work because had to rebuild CV Bridge and many others...I think downgrading is probably better as you say until more packages work with OpenCV 4.
Will the downgrade command you posted remove OpenCV 4 from the Nano?
Comment by kmfrick on 2020-05-10:
Yes, it should remove OpenCV 4 and install version 3.2.0.
Comment by floda on 2020-06-21:
@jorgemia Do you mind posting your steps in recompiling cv_bridge? I got badly stuck there.
Comment by jorgemia on 2020-06-22:
@floda Hey I did it a while back but basically installed someone's fork from github where they had changed files to support OpenCV 4. You can find info here that might be useful: Github Issue and you could check some of the pull requests on github. Some of them might have been merged already so cv_bridge might already support OpenCV 4. Also, the latest noetic release of cv_bridge supports OpenCV 4, you might be able to install that. | {
"domain": "robotics.stackexchange",
"id": 34658,
"tags": "ros, ros-melodic, catkin-make, libopencv, cuda"
} |
Why Does the Model not Improve in PyTorch? | Question: I have a simple curve fitting problem in hand. I wrote some code in PyTorch as follows:
class MyDatasetV1(torch.utils.data.Dataset):
def __init__(self, dataset):
# Initialize a dataset
assert isinstance(dataset, list), '"dataset" must be of "list" type'
assert isinstance(dataset[0], torch.Tensor), '"x" must be of "torch.Tensor" type!'
assert isinstance(dataset[1], torch.Tensor), '"y" must be of "torch.Tensor" type!'
self.x = dataset[0]
self.y = dataset[1]
self.length = self.x.shape[0]
def __len__(self):
# Get the number of elements in entire dataset
return self.length
def __getitem__(self, index):
return self.x[index], self.y[index]
class MyModelV2(torch.nn.Module):
def __init__(self, input_size, output_size, hiddens, weights, biases, batchnorms, activations, dropouts):
# Initialize a custom fully-connected model
super(MyModelV2, self).__init__()
assert len(hiddens) + 1 == len(weights), 'Number of hidden layers must match the number of "weights" units/tensors!'
assert len(hiddens) + 1 == len(biases), 'Number of hidden layers must match the number of "bias" units/scalars!'
assert len(hiddens) + 1 == len(batchnorms), 'Number of hidden layers must match the number of "batch normalization" units!'
assert len(hiddens) + 1 == len(activations), 'Number of hidden layers must match the number of "activation" functions!'
assert len(hiddens) + 1 == len(dropouts), 'Number of hidden layers must match the number of "dropout" units!'
self.weights = weights
self.biases = biases
self.batchnorms = batchnorms
self.activations = activations
self.dropouts = dropouts
self.layers_size = [input_size]
self.layers_size.extend(hiddens)
self.layers_size.append(output_size)
self.layers = torch.nn.ModuleList()
def build(self):
# Build a model with given specifications
for index in range(len(self.layers_size) - 1):
layer = torch.nn.Linear(self.layers_size[index], self.layers_size[index + 1])
if self.weights[index]:
self.weights[index](layer.weight)
if self.biases[index]:
self.biases[index](layer.bias)
self.layers.append(layer)
if self.batchnorms[index]:
self.layers.append(torch.nn.BatchNorm1d(self.layers_size[index + 1]))
if self.dropouts[index]:
self.layers.append(torch.nn.Dropout(self.dropouts[index]))
self.layers.append(self.activations[index])
def forward(self, x):
# Forward pass for a given input
for layer in self.layers:
x = layer(x)
return x
def set_weight(weights):
return torch.nn.init.xavier_uniform_(weights)
def set_bias(biases):
return torch.nn.init.zeros_(biases)
torch.manual_seed(7)
model = MyModelV2(1, 1, [64, 64], 3 * [set_weight], 3 * [set_bias], 3 * [False], 3 * [torch.nn.ReLU()], 3 * [False])
model.build()
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters())
x = torch.linspace(-10, 10, 1000 * 1).reshape((1000, 1))
y = 0.1 * x * torch.cos(x) + 0.05 * torch.normal(1, 2, size=(1000, 1))
ds = MyDatasetV1([x, y])
ds_loader = torch.utils.data.DataLoader(ds, batch_size=32, shuffle=True)
def get_training_loss(model, training_loader, criterion, optimizer):
# Training loop for a given model
model.train()
training_loss = 0.0
for x_train, y_train in training_loader:
optimizer.zero_grad()
y_hat_train = model(x_train)
train_loss = criterion(y_hat_train, y_train)
train_loss.backward()
optimizer.step()
training_loss += train_loss.item()
# Calculate the average training loss
training_loss /= len(training_loader)
return training_loss
EPOCHS = 100
for epoch in range(1, EPOCHS + 1):
tr = get_training_loss(model, ds_loader, criterion, optimizer)
print(f'Epoch number: {epoch} - Training error/loss: {tr:.6e}')
def predictor(model, x):
# Predict after training for a given model
model.eval()
with torch.no_grad():
x = model(x)
return x
y_hat = predictor(model, x)
plt.figure(dpi=120)
plt.plot(x.numpy(), y.numpy(), 'ro', markersize=1.5, label='(x, y)')
plt.plot(x.numpy(), y_hat.numpy(), 'bo', markersize=1.5, label=r'(x, $\hat{y}$)')
plt.xlabel('x')
plt.ylabel('y')
plt.tight_layout()
plt.legend()
plt.show()
Even though I tried different models, i.e., 32, 32 or 64, 32, 16 neurons at hidden layers, etc., I ended up having a zero prediction from all as shown in the figure below.
I reviewed my model many times, but I could not figure out the issue here. What is wrong with it? Thanks in advance!
Answer: For anyone to whom this post and reply might be useful,
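here is a minimal, framework-free sketch (my own illustration, not the original poster's code) of why a ReLU placed after the output layer produces the flat predictions shown above: every negative pre-activation is clamped to zero, so targets like $y = 0.1\,x\cos x$, which are often negative, can never be matched.

```python
def relu(v: float) -> float:
    """Rectified linear unit: clamps negative values to zero."""
    return v if v > 0.0 else 0.0

def identity(v: float) -> float:
    """Pass-through, i.e. no activation on the output layer."""
    return v

# Hypothetical final-layer pre-activations for three samples.
pre_activations = [-1.3, 0.0, 2.7]

print([relu(v) for v in pre_activations])      # [0.0, 0.0, 2.7]
print([identity(v) for v in pre_activations])  # [-1.3, 0.0, 2.7]
```

In the posted model this corresponds to passing, e.g., `2 * [torch.nn.ReLU()] + [torch.nn.Identity()]` as the activations argument instead of `3 * [torch.nn.ReLU()]` (a suggested fix under the assumption the rest of the code is unchanged).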
If the model is examined carefully, the ReLU activation function at the end before the output neurons is what causes the problem. Since the dataset consists of positive and negative values, using ReLU activation function just before the output neurons prevents the model from learning the negative values of y in the dataset as seen in the plot above. | {
"domain": "ai.stackexchange",
"id": 3971,
"tags": "deep-learning, pytorch, curve-fitting"
} |
$\frac{\partial x_{\mu}}{\partial \omega_{a}}$ doesn't make any sense | Question: In the book Condensed Matter Field Theory by Altland, on page 32, it is given while explaining Noether's theorem that
To understand the impact of a symmetry transformation, it is fully sufficient to consider its infinitesimal version. (Any finite transformation can be generated by successive application of infinitesimal ones.) Consider, thus, the two mappings
$$
\begin{aligned}
x_{\mu} \rightarrow x_{\mu}^{\prime} &=x_{\mu}+\left.\frac{\partial x_{\mu}}{\partial \omega_{a}}\right|_{\omega=0} \omega_{a}(x) \\
\phi^{i}(x) \rightarrow \phi^{\prime i}\left(x^{\prime}\right) &=\phi^{i}(x)+\omega_{a}(x) F_{a}^{i}[\phi]
\end{aligned}
$$
expressing the change of both fields and coordinates to linear order in a set of parameter functions $\left\{\omega_{a}\right\}$ characterizing the transformation. (For a three-dimensional rotation, $\left(\omega_{1}, \omega_{2}, \omega_{3}\right)=(\phi, \theta, \psi)$ would be the rotation angles, etc.) The functionals $\left\{F_{a}^{i}\right\}-$ which need not depend linearly on the field $\phi$, and may explicitly depend on the coordinate $x-$ define the incremental change $\phi^{\prime}\left(x^{\prime}\right)-\phi(x)$.
We now ask how the action Eq. (1.16) changes under the transformation Eq. (1.42), i.e. we wish to compute the difference
$$
\Delta S=\int d^{m} x^{\prime} \mathcal{L}\left(\phi^{\prime i}\left(x^{\prime}\right), \partial_{x_{\mu}^{\prime}} \phi^{\prime i}\left(x^{\prime}\right)\right)-\int d^{m} x \mathcal{L}\left(\phi^{i}(x), \partial_{x_{\mu}} \phi^{i}(x)\right)
$$
However, $\frac{\partial x_{\mu}}{\partial \omega_{a}}$ in the first equation doesn't make any sense; it means changing the mapped point if the transformation at that point were to change.
Could someone help me to decipher what kind of transformation the author is talking about in here?
Answer: Consider the example that Altland suggests: let $\theta,\phi,\psi$ be the Euler angles specifying a rotation. Then $x,y,z \to x(\theta,\phi,\psi),y(\theta,\phi,\psi), z(\theta,\phi,\psi)$ and the derivatives $\partial x/\partial \theta$ etc. give the infinitesimal form of the rotation. | {
"domain": "physics.stackexchange",
"id": 78323,
"tags": "symmetry, field-theory, coordinate-systems, notation, noethers-theorem"
} |
PHP Response Wrapper | Question: I'm writing a routing system that may or may not be part of a public API later for a personal project. A main part of the routing system is a response object for the user to send headers, status code, and the body of the response that is sent from the server.
I've created a response class that I think covers everything the user would want to do with the response (it could use a few more "convenience" functions however). I wanted to post it here to make sure my code is clean, and that I didn't miss anything that the end user might want to do.
An instance of the request class is provided to the user when a request is made to a specified URL (see my last code review)
I present my response class:
class Response {
private $headers = array();
private $code = 200;
private $body = '';
private $sent = false;
private $log;
public function __construct() {
$this->log = Logger::getLogger(get_class($this));
}
public function headers ($headers) { // Add some headers to the headers array
$this->headers = array_merge($this->headers, $headers);
return $this;
}
public function header ($key, $value) { // set a single header in the headers array
$this->headers[$key] = $value;
return $this;
}
public function code ($code) { // Set the status code
$this->code = $code;
return $this;
}
public function status ($code) { // Alternate method for setting the status code
return $this->code($code);
}
public function json($str) { // respond with json, set the body text and set the content-type header
$this->header('Content-Type', 'application/json');
if(is_array($str)) { // handle either raw JSON text, or php arrays
$this->body = json_encode($str);
} else {
$this->body = $str;
}
return $this;
}
public function html ($str){ // respond with HTML
$this->header('Content-Type', 'text/html');
$this->body = $str;
return $this;
}
public function form ($str) { // Respond with form data
$this->header('Content-Type', 'application/x-www-form-urlencoded');
// TODO: Allow the user to user an array
$this->body = $str;
return $this;
}
public function render ($file){ // Render an HTML file from the templates folder
//TODO: Restrict file access to the templates folder
//TODO: Add server-side rendering code
$this->body = file_get_contents($file);
return $this;
}
public function sent() { // Check if the response has been sent
return $this->sent;
}
public function send () { // send the response
if($this->sent()){
$this->log->error('Attempted to call send on a sent response!');
exit('Attempted to call send on a sent response!');
}
// Log the request for debugging
$this->log->info($this->headers);
$this->log->info('HTTP Response Code: ' . $this->code);
$this->log->info($this->body);
// Set the headers
foreach($this->headers as $key => $value) {
header($key .': ' . $value);
}
// Set the status code
http_response_code($this->code);
// Send out the body
echo $this->body;
// Set the sent variable
$this->sent = true;
}
}
EDIT:
The idea with the short and clear function names that all return $this is to be able to have clear and simple chainable responses (e.g. $res->code(200)->json(... Json ...);).
Answer: I would tend to agree with comments from @sensorario that fluent interface may not make much sense here.
I am going to operate under the assumption that this Response class is being instantiated from a controller or the like. If that is the case, why would it make sense to have this fluent style? Presumably that controller knows all the information it needs in order to create a response, and there are not going to be other things that decorate the response during its short lifecycle.
If this is the case, why make the code more complex in the controller and make this Response object more fragile in terms of being potentially put into a bad state (i.e. incomplete field completion, etc.)?
I would prefer to see the constructor enforce the data to be set on the object at the point of instantiation, such that you know the object has all the dependencies it needs to fulfill its main method calls. For example:
public function __construct(
$content,
int $statusCode,
Logger $logger,
    array $headers = []
) {
$this->content = $content;
$this->statusCode = $statusCode;
$this->logger = $logger;
$this->headers = $headers;
}
By doing this, you guarantee that all dependencies are met and you can totally eliminate all your setters.
This makes the code to instantiate this object much cleaner and more foolproof as well, as it is far easier for someone to see all the dependencies they need to pass to an object while working in an IDE (via code completion and method signatures) than it is for a coder to have to know that they must not only instantiate an object, but also call methods a, b, c, d, e, etc. on it to get it set up in a proper state.
Note the suggested use of dependency injection for your Logger. I assume the controller has access to the Logger already, so why should this class have to know how to instantiate it? Note that you only have the happy-path case here for the logger: what if instantiation fails? I also agree with the other suggestion about using a PSR-3 compliant logger.
You have no validation on your public method parameters at all, either via type hinting or checking the passed values themselves. An object of this class can easily be put into a bad state, something I think would be critical for something as important as the response sending mechanism.
Do you really need key value format for your headers? You are kind of implying some level of header management here which does not really exist. What value do you get from storing headers as key-value store considering you have to transform them back to values usable by header() anyway? Unless you are performing more sophisticated header management (header content validation, ASCII-encoding of headers, header replacement, etc.) why not deal with accepting headers from caller transparently (i.e "send me headers as formatted for header()") as opposed to creating your own sort of header API.
If you decide to keep setters, why have two methods that do the same thing in code() and status()? Why have one break method chaining? There really isn't much value in returning to the caller the value it just sent you.
The render() method seems out of place in this class. It should be up to controller or caller up the stack to prepare the content and inject into the Response object.
Your different format-specific methods seem oddly designed and incongruent with regard to sending headers. I would think that only a send()-type method should begin output to the browser.
Right now, the caller could do some wonky stuff like:
$response->json(...); // sends JSON header
$response->html(...); // sends HTML header
$response->send(); // sends other headers and response body
If you follow suggestion above about injecting dependencies (including content) at point of Response instantiation, then perhaps this simply looks like:
$response = new Response(...);
$response->sendJson();
// or
$response->sendHtml();
With the format-specific methods being simple wrappers around the main send() method:
public function sendJson() {
    $this->headers[] = 'Content-Type: application/json';
return $this->send('json_encode');
}
public function send(callable $transformation = null) {
if(!is_null($transformation)) {
$this->content = call_user_func($transformation, $this->content);
}
$this->sendHeaders();
echo $this->content;
// if you need to pass back success to caller
// alternately, perhaps this just exits
return true;
}
exit('Attempted to call send on a sent response!');
Don't output system-level messages to standard out. Consider throwing exceptions in this class and letting calling code handle user-friendly error messaging.
Why log every request? Oftentimes, code to perform debug-level logging might be triggered in a conditional controlled by an application-wide configuration setting. It seems odd to take on a logger dependency just for informational logging anyway. Your server logs should be able to give you the header and status information (if properly formatted). Logging each and every response seems like a bit much and is also a good way to potentially leak secure information into your application logs.
if(is_array($str)) { // handle either raw JSON text, or php arrays
$this->body = json_encode($str);
} else {
$this->body = $str;
}
Why conditionally encode JSON? If you are sending an application/json header, you had better be sending well-formatted JSON to the client (why would you break your own API contract?). What about passing objects? JSON allows for encoding more than just arrays.
You have a couple styling issues that you really should address.
inconsistent spacing in function signatures (spacing around parentheses for parameters and bracing).
a lot of comments on the same line as the code they are commenting on. This makes the code hard to read due to horizontal scrolling. Comments should go on the line before the code they reference.
"domain": "codereview.stackexchange",
"id": 25692,
"tags": "php, url-routing"
} |
Generating uniformly random bits from a stream of arbitrarily biased bits | Question: Say we have a function called GenBiasedBit. This function returns 1 with probability p (where p is an unknown real number between 0 and 1 exclusive) and returns 0 with probability 1 − p. How could I write a Las Vegas algorithm (called GenUnbiasedBit) that returns 1 or 0 with equal (nonzero) probability, using calls from GenBiasedBit as a source of randomness? I'm not really sure how to approach this problem. Since 1 and 0 must be generated with equal chance, I'm assuming they must both have a 50/50 chance of being selected. Since this is a Las Vegas algorithm, I'm not sure how to guarantee that its output is correct, if we don't know the probability of GenBiasedBit generating a 1 or 0.
Answer: We can simulate an unbiased coin as follows. Toss a pair of coins, and repeat until they produce different outcomes. If the outcome was H,T (heads followed by tails) output heads, and if it was T,H output tails.
Conditioning on the event that the above simulation halts (i.e., at some iteration we receive different outcomes), both H and T have equal probability. The probability of a "bad outcome" in a single iteration (the same result from both coins) is $p^2+(1-p)^2 = 1-2p(1-p)$, which is less than $1$ for $p\in (0,1)$. Thus, the probability of "failure" after $n$ iterations is bounded by $(1-2p(1-p))^{n}$.
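A minimal Python sketch of this von Neumann-style extractor (the function names and the bias p = 0.3 are illustrative stand-ins, not from the question):

```python
import random

def gen_biased_bit(p=0.3):
    """Stand-in for GenBiasedBit: returns 1 with (unknown) probability p."""
    return 1 if random.random() < p else 0

def gen_unbiased_bit():
    """Las Vegas extractor: toss the biased coin twice until the outcomes
    differ; P(1,0) = P(0,1) = p(1-p), so the returned bit is unbiased."""
    while True:
        a, b = gen_biased_bit(), gen_biased_bit()
        if a != b:
            return a  # 1 for the (1,0) outcome, 0 for (0,1)
```

The expected number of iterations is $1/(2p(1-p))$, so the algorithm halts with probability 1 but has no worst-case bound on its running time, which is exactly the Las Vegas flavour.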
"domain": "cs.stackexchange",
"id": 11092,
"tags": "randomized-algorithms, randomness"
} |
Definition of electric current | Question: As I am taught in school that electric current is the flow of electrons, but in some places I faced another definitions like transfer of energy between electrons. I thought that it is the transfer of charges between electrons in a specific time. So what is really the electric current? Because I still somehow uncertained about this idea.
Answer: Current is not exactly a flow of electrons but rather a flow of charge.
Electrons carry charge, so it is related, and you might hear it described like that here and there. But remember that not only electrons can carry charge; other types of particles can do that too. We still call it current in those cases, because current is just charge per second moving through.
Current is measured in amperes, or amps. $1\;\mathrm{A}$ equals one coulomb per second, $1\;\mathrm{C/s}$. Since the electron has a charge of:
$$q_{electron}=1.6\times10^{-19} \,\mathrm{C}$$
then $1\;\mathrm{A}$ which is $1\;\mathrm{C/s}$ corresponds to $1/q_{electron}=6.2\times 10^{18}$ electrons flowing through per second (if we are talking about a system, where electrons are the charge-carriers of course). That is a lot of electrons. | {
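The arithmetic in that last step can be checked directly in pure Python:

```python
q_electron = 1.6e-19   # electron charge in coulombs
current = 1.0          # 1 A = 1 C/s

electrons_per_second = current / q_electron
print(f"{electrons_per_second:.2e} electrons per second")  # 6.25e+18
```

which is consistent with the $\sim 6.2\times 10^{18}$ figure quoted above.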
"domain": "physics.stackexchange",
"id": 26673,
"tags": "electricity, electric-circuits, electric-current"
} |
multiple pointcloud2 topics for Navigation Stack with teb_local_planner | Question:
I'm trying to set up move_base to work with a quadrotor that has 4 stereo camera sets under four different namespaces (/front, /right, /back, and /left)
I'm currently using stereo_image_proc to generate pointcloud2 topics, which generates four separate topics:
/front/points2
/right/points2
/back/points2
/left/points2
I'd like to use all 4 topics for obstacle detection in my navigation stack with the teb_local_planner. Can the navigation stack handle multiple point clouds, or should I merge them into one topic?
Does the following costmap setup look correct?
costmap_common_parameters.yaml
robot_radius: 0.5
transform_tolerance: 0.2
map_type: costmap
obstacle_layer:
enabled: true
obstacle_range: 3.0
raytrace_range: 4.0
max_obstacle_height: 2.5 # I have it set just below door height
min_obstacle_height: 1.5 # I have it set above my min flight height
inflation_radius: 0.2
track_unknown_space: true
combination_method: 1
observation_sources: point1 point2 point3 point4 laser1 laser2
point1: {sensor_frame: front_camera, data_type: PointCloud, topic: /front/points2, marking: true, clearing: true}
point2: {sensor_frame: right_camera, data_type: PointCloud, topic: /right/points2, marking: true, clearing: true}
point3: {sensor_frame: back_camera, data_type: PointCloud, topic: /back/points2, marking: true, clearing: true}
point4: {sensor_frame: left_camera, data_type: PointCloud, topic: /left/points2, marking: true, clearing: true}
laser1: {sensor_frame: base_link, data_type: LaserScan, topic: /scan1, marking: true, clearing: true}
laser2: {sensor_frame: base_link, data_type: LaserScan, topic: /scan2, marking: true, clearing: true}
inflation_layer:
enabled: true
cost_scaling_factor: 10.0 # exponential rate at which the obstacle cost drops off (default: 10)
inflation_radius: 0.5 # max. distance from an obstacle at which costs are incurred for planning paths.
global_costmap_params.yaml
global_costmap:
global_frame: /map
robot_base_frame: base_link
update_frequency: 1.0
publish_frequency: 0.5
static_map: true
transform_tolerance: 0.5
plugins:
- {name: static_layer, type: "costmap_2d::StaticLayer"}
- {name: obstacle_layer, type: "costmap_2d::VoxelLayer"}
- {name: inflation_layer, type: "costmap_2d::InflationLayer"}
local_costmap_params.yaml
local_costmap:
global_frame: /map
robot_base_frame: base_link
update_frequency: 5.0
publish_frequency: 2.0
static_map: false
rolling_window: true
width: 5.5
height: 5.5
resolution: 0.1
transform_tolerance: 0.5
plugins:
- {name: static_layer, type: "costmap_2d::StaticLayer"}
- {name: obstacle_layer, type: "costmap_2d::ObstacleLayer"}
Since my robot is a quadrotor, it also has omni-directional movement like a holonomic robot, so I need a planner setup that allows y movements as well.
teb_local_planner_params.yaml
TebLocalPlannerROS:
# Trajectory
teb_autosize: True
dt_ref: 0.3
dt_hysteresis: 0.1
global_plan_overwrite_orientation: True
max_global_plan_lookahead_dist: 3.0
feasibility_check_no_poses: 5
# Robot
max_vel_x: 3.0
max_vel_x_backwards: 3.0
max_vel_y: 3.0
max_vel_theta: 1.5
acc_lim_x: 3.0
acc_lim_y: 3.0
acc_lim_theta: 3.0
min_turning_radius: 0.0 # omni-drive robot (can turn on place!)
footprint_model:
type: "point"
# GoalTolerance
xy_goal_tolerance: 0.2
yaw_goal_tolerance: 0.1
free_goal_vel: False
# Obstacles
min_obstacle_dist: 0.7 # This value must also include our robot radius, since footprint_model is set to "point".
include_costmap_obstacles: True
costmap_obstacles_behind_robot_dist: 1.0
obstacle_poses_affected: 30
costmap_converter_plugin: ""
costmap_converter_spin_thread: True
costmap_converter_rate: 5
# Optimization
no_inner_iterations: 5
no_outer_iterations: 4
optimization_activate: True
optimization_verbose: False
penalty_epsilon: 0.1
weight_max_vel_x: 2
weight_max_vel_y: 2
weight_max_vel_theta: 1
weight_acc_lim_x: 1
weight_acc_lim_y: 1
weight_acc_lim_theta: 1
weight_kinematics_nh: 1 # WE HAVE A HOLONOMIC ROBOT, JUST ADD A SMALL PENALTY
weight_kinematics_forward_drive: 1
weight_kinematics_turning_radius: 1
weight_optimaltime: 1
weight_obstacle: 50
# Homotopy Class Planner
enable_homotopy_class_planning: True
enable_multithreading: True
simple_exploration: False
max_number_classes: 4
selection_cost_hysteresis: 1.0
selection_obst_cost_scale: 1.0
selection_alternative_time_cost: False
roadmap_graph_no_samples: 15
roadmap_graph_area_width: 5
h_signature_prescaler: 0.5
h_signature_threshold: 0.1
obstacle_keypoint_offset: 0.1
obstacle_heading_threshold: 0.45
visualize_hc_graph: False
launch file
<launch>
<master auto="start"/>
<!-- Run the map server, right now I don't have a map -->
<!--- Run AMCL, do I need to run AMCL? My UAV provides its own internal odometry-->
<!--- We load AMCL here with diff=true to support our differential drive robot -->
<include file="$(find amcl)/examples/amcl_diff.launch" />
<node pkg="move_base" type="move_base" respawn="false" name="move_base" output="screen">
<rosparam file="costmap_common_params.yaml" command="load" ns="global_costmap" />
<rosparam file="costmap_common_params.yaml" command="load" ns="local_costmap" />
<rosparam file="local_costmap_params.yaml" command="load" />
<rosparam file="global_costmap_params.yaml" command="load" />
<rosparam file="base_local_planner_params.yaml" command="load" />
</node>
</launch>
Finally, something that deviates from the traditional navigation stack. My quadrotor has velocity-based controls, but they're not that great. I tried running on the default parameters from the robot setup in the navigation stack tutorial, and the quadrotor keeps making large sweeping arc motions that deviate from the trajectory when using velocity commands. On the other hand, the quadrotor has very good position-based navigation via waypoints. Is it possible to follow the trajectory based on the path generated by the teb_local_planner instead of using cmd_vel?
Originally posted by uwleahcim on ROS Answers with karma: 101 on 2016-07-03
Post score: 0
Answer:
In general, adding multiple sources is fine and should be preferred over fusing them manually.
After having a brief look at your configuration, I recognized that your observation_sources are not part
of the obstacle_layer param namespace. This could be the reason why they are ignored.
See here and here (-> part of the obstacle layer).
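If the source definitions currently sit at the top level of costmap_common_parameters.yaml, a sketch of the corrected nesting would look like the fragment below (values copied from the question). As an aside, note that stereo_image_proc's points2 topics are sensor_msgs/PointCloud2 messages, so data_type: PointCloud2 rather than PointCloud is probably what you want:

```yaml
obstacle_layer:
  enabled: true
  obstacle_range: 3.0
  raytrace_range: 4.0
  observation_sources: point1 point2 point3 point4 laser1 laser2
  point1: {sensor_frame: front_camera, data_type: PointCloud2, topic: /front/points2, marking: true, clearing: true}
  # ... point2, point3 and point4 follow the same pattern ...
  laser1: {sensor_frame: base_link, data_type: LaserScan, topic: /scan1, marking: true, clearing: true}
  laser2: {sensor_frame: base_link, data_type: LaserScan, topic: /scan2, marking: true, clearing: true}
```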
The planner has supported holonomic movements since kinetic (are you already running kinetic?). Otherwise, the source code is backwards compatible (up to now) if you compile it from source.
Regarding your second question:
If you just need waypoints, you can probably subscribe to the local_plan topic. If you also need temporal information, you could subscribe to the teb_feedback topic (but you need to turn it on by setting the parameter publish_feedback to true).
But both messages/topics are only published while navigation is active.
PS: the navigation stack is specialized to motions in the 2d plane. So I guess, you are planning for a fixed height.
Originally posted by croesmann with karma: 2531 on 2016-07-12
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by uwleahcim on 2016-07-12:
Yes, I'm currently planning on performing 2D navigation for now.
As for the namespace issue, how do I got about making the observation_sources part of the obstacle_layer param namespace? According to link two, I should put the obs under the plugins, correct? Update: I added "obstacle_layer:" works | {
"domain": "robotics.stackexchange",
"id": 25125,
"tags": "ros, quadcopter, teb-local-planner, pointcloud"
} |
Should I be balancing the data before creating the vocab-to-index dictionary? | Question: My question is about when to balance training data for sentiment analysis.
Upon evaluating my training dataset, which has 3 labels (good, bad, neutral), I noticed there were twice as many neutral labels as the other 2 combined, so I used a function to drop neutral labels randomly.
However, I wasn't sure if I should do this before or after creating the vocab2index mappings.
To explain, I am numericizing my text data by creating a vocabulary of the words in the training data and linking them to numbers using enumerate. I then use that dictionary of vocab2index values to numericize the training data. I also use that same dictionary to numericize the testing data, dropping any words that do not exist in the dictionary.
When I took a class on this, they balanced the training data AFTER creating the vocab2index dictionary. However, when I thought about this in my own implementation, it did not make sense. What if some words from the original vocabulary are gone completely? Then we aren't training the machine learning classifier on those words, but they would not be dropped from the testing data either (since words are dropped from X_test based on whether they are in the vocab2index dictionary).
So should I be balancing the data BEFORE creating the vocab2index dictionary?
I have included the code to create X_train and X_test below in case it helps.
def create_X_train(training_data='Sentences_75Agree_csv.csv'):
data_csv = pd.read_csv(filepath_or_buffer=training_data, sep='.@', header=None, names=['sentence','sentiment'], engine='python')
list_data = []
for index, row in data_csv.iterrows():
dictionary_data = {}
dictionary_data['message_body'] = row['sentence']
if row['sentiment'] == 'positive':
dictionary_data['sentiment'] = 2
elif row['sentiment'] == 'negative':
dictionary_data['sentiment'] = 0
else:
dictionary_data['sentiment'] = 1 # For neutral sentiment
list_data.append(dictionary_data)
dictionary_data = {}
dictionary_data['data'] = list_data
messages = [sentence['message_body'] for sentence in dictionary_data['data']]
sentiments = [sentence['sentiment'] for sentence in dictionary_data['data']]
tokenized = [preprocess(sentence) for sentence in messages]
bow = Counter([word for sentence in tokenized for word in sentence])
freqs = {key: value/len(tokenized) for key, value in bow.items()} #keys are the words in the vocab, values are the count of those words
# Removing 5 most common words from data
high_cutoff = 5
K_most_common = [x[0] for x in bow.most_common(high_cutoff)]
filtered_words = [word for word in freqs if word not in K_most_common]
# Create vocab2index dictionary:
vocab = {word: i for i, word in enumerate(filtered_words, 1)}
id2vocab = {i: word for word, i in vocab.items()}
filtered = [[word for word in sentence if word in vocab] for sentence in tokenized]
# Balancing training data due to large number of neutral sentences
balanced = {'messages': [], 'sentiments':[]}
n_neutral = sum(1 for each in sentiments if each == 1)
N_examples = len(sentiments)
# print(n_neutral/N_examples)
keep_prob = (N_examples - n_neutral)/2/n_neutral
# print(keep_prob)
for idx, sentiment in enumerate(sentiments):
message = filtered[idx]
if len(message) == 0:
# skip this sentence because it has length 0
continue
elif sentiment != 1 or random.random() < keep_prob:
balanced['messages'].append(message)
balanced['sentiments'].append(sentiment)
token_ids = [[vocab[word] for word in message] for message in balanced['messages']]
sentiments_balanced = balanced['sentiments']
# Unit test:
unique, counts = np.unique(sentiments_balanced, return_counts=True)
print(np.asarray((unique, counts)).T)
print(np.mean(sentiments_balanced))
##################
# Left padding and truncating to the same length
X_train = token_ids
for i, sentence in enumerate(X_train):
if len(sentence) <=30:
X_train[i] = ((30-len(sentence)) * [0] + sentence)
elif len(sentence) > 30:
X_train[i] = sentence[:30]
return vocab, X_train, sentiments_balanced
def create_X_test(test_sentences, vocab):
tokenized = [preprocess(sentence) for sentence in test_sentences]
filtered = [[word for word in sentence if word in vocab] for sentence in tokenized] # X_test filtered to only words in training vocab
# Alternate method with functional programming:
# filtered = [list(filter(lambda a: a in vocab, sentence)) for sentence in tokenized]
token_ids = [[vocab[word] for word in sentence] for sentence in filtered] # Numericise data
# Remove short sentences in X_test
token_ids_filtered = [sentence for sentence in token_ids if len(sentence)>10]
X_test = token_ids_filtered
for i, sentence in enumerate(X_test):
if len(sentence) <=30:
X_test[i] = ((30-len(sentence)) * [0] + sentence)
elif len(sentence) > 30:
X_test[i] = sentence[:30]
return X_test
Answer: If you look at the words in your dictionary (vocab) before/after pruning, most likely you'd see there isn't a lot of difference, certainly not enough to affect your model performance.
In fact, creating a dictionary and training a model are two more or less independent processes. To make your life easier, you could use the largest dev set you can find (excluding the test set) for building your vocab, and freeze it for all subsequent ETL/modeling. This way you don't have to deal with dictionary versioning after, for example, choosing different subsets of your training data.
Also, if you have the compute capacity, I'd suggest upsampling the positive/negative classes instead, because those neutral samples you're dropping do carry signal about the use of language and about which borderline-ambiguous samples should be treated as neutral.
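As a sketch of that upsampling alternative (pure Python; the function name and the choice of target count are my own, not from the question):

```python
import random
from collections import Counter

def upsample(messages, sentiments, seed=42):
    """Oversample minority classes (with replacement) up to the
    majority-class count, instead of dropping neutral examples."""
    rng = random.Random(seed)
    target = max(Counter(sentiments).values())
    by_label = {}
    for msg, label in zip(messages, sentiments):
        by_label.setdefault(label, []).append(msg)
    out_msgs, out_labels = [], []
    for label, msgs in by_label.items():
        extra = rng.choices(msgs, k=target - len(msgs))  # [] for the majority class
        for msg in msgs + extra:
            out_msgs.append(msg)
            out_labels.append(label)
    return out_msgs, out_labels
```

Each class ends up with the same number of examples, and no neutral sentences (or the vocabulary they contribute) are thrown away.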
"domain": "ai.stackexchange",
"id": 1839,
"tags": "natural-language-processing, python, training, sentiment-analysis"
} |
Multiple criteria in a VLOOKUP in Excel VBA | Question: I have made the following custom defined function in Excel. It works somewhat like VLOOKUP(), but it takes two criteria. I think the code is a bit of a mess. Does anyone have any comments, suggestions, or improvements?
Public Function VLOOKUPMC(ByVal return_col_num As Long, ByRef table_array_1 As Range, ByVal lookup_value_1 As Variant, ByRef table_array_2 As Range, ByVal lookup_value_2 As Variant) As Variant
Dim rCell1 As Range
For Each rCell1 In table_array_1
With rCell1
If .Value = lookup_value_1 Then
If .Offset(0, table_array_2.Column - .Column) = lookup_value_2 Then
VLOOKUPMC = .Offset(0, return_col_num - .Column)
Exit Function
End If
End If
End With
Next rCell1
VLOOKUPMC = CVErr(xlErrNA)
End Function
Answer: Here's a different version, though I won't claim that it is significantly better. The code seemed to work just fine and was mostly easy to understand (see my comment on the original question). My only big suggestion would be to add some comments to better describe what you are doing and how this works.
For a brief description of my changes, see the comments in the code. I can elaborate more if necessary.
Public Function VLOOKUPMC(ByVal return_col_num As Long, _
ByRef table_array_1 As Range, ByVal lookup_value_1 As Variant, _
ByRef search_column_2 As Long, ByVal lookup_value_2 As Variant) As Variant
'Changed table_array_2 to search_column_2 because that's all that was used in the code below.
Dim rCell1 As Range
VLOOKUPMC = CVErr(xlErrNA)
'Left the For loop as-is, but maybe you want it to look only in the first column, like the normal VLOOKUP would?
For Each rCell1 In table_array_1
'Modified logic to all fit on one line/remove nesting.
'This organization may not be preferred, but it's one way to clean up the so-called 'mess'.
VLOOKUPMC = _
IIf(rCell1.Value = lookup_value_1 And _
rCell1.Offset(0, search_column_2 - rCell1.Column) = lookup_value_2, _
rCell1.Offset(0, return_col_num - rCell1.Column), _
VLOOKUPMC)
If VarType(VLOOKUPMC) <> vbError Then Exit Function
Next rCell1
End Function | {
"domain": "codereview.stackexchange",
"id": 3939,
"tags": "vba, excel, lookup"
} |
Does evaporation violate the second law? | Question: Suppose I leave a glass of water on a table at STP. The water molecules at the surface of the water can be assumed to obey the Maxwell-Boltzmann distribution (at the least they obey a distribution of speeds comparable to the MB distribution). Only the most energetic molecules, near the surface of the liquid, have enough kinetic energy to overcome the attractive inter-molecular forces present in the liquid. As these highly energetic molecules escape from the liquid (i.e., evaporate), the average kinetic energy of the molecules in the liquid decreases. The temperature of the water is nothing but a measure of the average kinetic energy of the molecules in that water, and hence the water's temperature decreases. This is effectively the mechanism behind sweating.
But now, if only the most energetic molecules were able to escape the inter-molecular forces and end up as water vapour, does this not mean that the water vapour above the surface now has a higher temperature than that of the water beneath it, since the vapour is composed only of the most energetic molecules and the remaining liquid contains only the least energetic molecules? If this is the case, then surely it is an example of energy flowing from a cold sink to a hot sink? I realise this is impossible, and so my thinking is definitely incorrect, although I can't seem to build an intuition for why. All I can do is simply state the second law and tell myself I'm wrong. But that's no way to understand, so if anyone can help me out on this issue it would be most appreciated!
Answer:
But now, if only the most energetic molecules were able to escape the inter-molecular forces and end up as water vapour, does this not mean that the water vapour above the surface now has a higher temperature than that of the water beneath it since the vapour is composed only of the most energetic molecules and the remaining liquid contains only the least energetic molecules? [emph added]
No. Some energy went into breaking the intermolecular bonds of the liquid to obtain the gaseous state. At equilibrium, the temperatures, pressures, and chemical potentials (meaning the partial pressures) of the liquid and vapor are identical. This is actually a great example of the Second Law at work: gradients in the intensive variables (temperature, pressure, chemical potential) are eliminated through shifting and exchange of the corresponding extensive variables (entropy, volume, matter).
Note that this equilibrium temperature (of the liquid and gas) will be lower than the original temperature of the liquid. The reason is that the molecules in the liquid were bonded to each other to some degree, meaning that they were in a low-energy state relative to a gas. Upon evaporation, energy was required to break these bonds, and—assuming a simple isolated system—this energy could come only from the thermal energy of the substance. So yes, you might find that the liquid originally at 25°C is now a liquid–gas mixture at 24.8°C.
"domain": "physics.stackexchange",
"id": 76377,
"tags": "thermodynamics, statistical-mechanics, entropy, water, phase-transition"
} |
Is it possible to tune the amplitude of superposition generated by Hadamard gates? | Question: I had a question earlier about generating the superposition of all the possible states: Here. In that case, we could apply $H^{\otimes n}$ to the state $|0\rangle^{\otimes n}$, and each state has the same amplitude in the superposition: $|0\rangle^{\otimes n} \to \dfrac{1}{\sqrt{2^n}}\sum_{i=0}^{2^n-1} |i\rangle $. However, is it possible for us to tune the amplitudes of certain states in the superposition? Say I have 4 qubits and 4 Hadamard gates (one on each); that would generate a superposition of 16 states. Can I add some additional procedures to increase the amplitudes of $|0110\rangle$ and $|1001\rangle$, with the rest of the states having the same, reduced amplitude?
Thanks!!
Answer: Yes indeed you can!
First, a simple example: if you want to increase the amplitudes of all the states that look like $|\cdot \cdot \cdot 1\rangle$, then you just need to apply an $R_y$ gate on the final qubit.
If on the other hand you want to increase the amplitudes of the specific states $|0110\rangle$ and $|1001\rangle$ by specific amounts, then you need to apply a series of controlled $R_y$ gates which isn't so trivial, and generically takes an exponential depth. If you want to increase their amplitudes, but you don't care by how much specifically, you can use Grover's algorithm as mentioned by Bertrand Einstein IV. | {
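For the single-qubit case this is easy to check numerically: $R_y(\theta)|0\rangle = \cos(\theta/2)|0\rangle + \sin(\theta/2)|1\rangle$, so the rotation angle tunes the amplitudes, and $\theta=\pi/2$ reproduces the Hadamard amplitudes $1/\sqrt{2}$. A pure-Python sketch (no quantum SDK assumed):

```python
import math

def ry_amplitudes(theta):
    """Amplitudes of R_y(theta)|0> = cos(theta/2)|0> + sin(theta/2)|1>."""
    return math.cos(theta / 2), math.sin(theta / 2)

# theta = pi/2 gives the same equal superposition as H|0>
a0, a1 = ry_amplitudes(math.pi / 2)   # both equal to 1/sqrt(2)

# choosing theta = 2*arcsin(0.9) puts amplitude 0.9 on |1>
b0, b1 = ry_amplitudes(2 * math.asin(0.9))
```

Doing the same for specific multi-qubit basis states, as in the question, is what requires the cascade of controlled $R_y$ gates mentioned above.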
"domain": "quantumcomputing.stackexchange",
"id": 2356,
"tags": "hadamard, superposition, amplitude-amplification"
} |
How is the expectation value defined in relativistic quantum mechanics? | Question: Since the norm of a wavefunction in relativistic quantum mechanics is defined as:
$$|\psi|^2=i\int\left(\psi^*\frac{\partial \psi}{\partial t}-\frac{\partial \psi^*}{\partial t}\psi\right)dx$$
How is the expectation value of an operator defined? I tried searching online but didn't find an answer.
Answer: Yes, that is, in fact, the mathematically well defined norm of one-particle quantum states in relativistic quantum mechanics (and relativistic QFT) for scalar Klein-Gordon particles.
However there is a substantial difference with respect to the non-relativistic case concerning the Hilbert space which does not contain all possible solutions of the scalar KG equation.
To appreciate this difference it is convenient to start from the momentum representation. Consider a sufficiently smooth real solution of the KG equation. If it decays sufficiently fast in space, it can be expanded as follows (I henceforth assume a 4-dimensional Minkowski spacetime, referring to standard Minkowski coordinates with $c=\hbar=1$, $kx := \sum_{a=1}^3 k^ax^a$, $E(k):= \sqrt{k^2+m^2}$):
$$\psi(t,x) = \int_{\mathbb{R}^3} \frac{\phi(k)}{\sqrt{2E(k)}} e^{i(kx - E(k)t)} + \frac{\overline{\phi(k)}}{\sqrt{2E(k)}} e^{-i(kx - E(k)t)} \:\:\frac{d^3k}{(2\pi)^{3/2}}\:.\tag{1}$$
The quantum state (at time $t$) associated to this solution is just one half of this decomposition:
$$\Psi_t(x):= \int_{\mathbb{R}^3} \frac{\phi(k)}{\sqrt{2E(k)}} e^{i(kx - E(k)t)} \:\:\frac{d^3k}{(2\pi)^{3/2}}\:.\tag{2}$$
I stress that,
(a) differently from the real field $\psi$, $\Psi$ is complex in general;
(b) $\Psi$ is still a solution of KG equation, but it cannot have spatial compact support;
(c) in spite of the differences above, $\psi$ and $\Psi$ carry exactly the same amount of information.
As a matter of fact, with some elementary computations, one sees that
$$i\int_{\mathbb{R}^3} \left(\overline{\Psi_t(x)}\, \frac{\partial \Psi'_t(x)}{\partial t}
- \Psi'_t(x)\, \frac{\partial \overline{\Psi_t(x)}}{\partial t}\right) d^3x = \int_{\mathbb{R}^3} \overline{\phi(k)}\phi'(k)\, d^3k\:.\tag{3}$$
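For the record, the "elementary computations" amount to inserting (2) into the left-hand side and using the identity $\int_{\mathbb{R}^3} e^{i(k'-k)x}\, d^3x = (2\pi)^3\delta^3(k'-k)$:
$$i\int_{\mathbb{R}^3} \overline{\Psi_t(x)}\,\frac{\partial \Psi'_t(x)}{\partial t}\, d^3x = i\iint \frac{\overline{\phi(k)}\,(-iE(k'))\,\phi'(k')}{2\sqrt{E(k)E(k')}}\, e^{i(E(k)-E(k'))t}\,\delta^3(k-k')\, d^3k\, d^3k' = \frac{1}{2}\int_{\mathbb{R}^3} \overline{\phi(k)}\,\phi'(k)\, d^3k\:,$$
while the second term, $-i\int_{\mathbb{R}^3} \Psi'_t\,\partial_t\overline{\Psi_t}\, d^3x$, contributes an identical $\frac{1}{2}\int \overline{\phi}\,\phi'\, d^3k$; the sum is the right-hand side of (3).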
Let us focus on the right-hand side.
(a) It is evidently positive definite, so that the norm is positive, as it should be;
(b) it is also Poincaré-invariant, in view of the structure of the unitary representation of the Lorentz group (the translational part acts trivially, in terms of standard phases). Passing to the notation $K = (K^0,\vec{K})$ for the four-momentum, so that $\vec{K}=k$ and $K^0:= E(k)$, for $\Lambda \in O(1,3)_+$ we have
$$(U_{\Lambda}\phi)(\vec{K}):= \sqrt{\frac{E(\vec{\Lambda K})}{E(\vec{K})}} \phi (\vec{\Lambda K})\:.$$
(Notice that $\frac{d\vec{K}}{E(\vec{K})}=\frac{d\vec{\Lambda K}}{E(\vec{\Lambda K})}$ as is well known and this fact assures the Poincaré invariance of the considered scalar product.)
(c) It does not depend on $t$.
Hence we have a properly defined Hilbert space (independent of $t$). Technically speaking, one has to deal with a conveniently smooth vector space of functions $\psi$ which admit a Fourier transform when divided by $E^{1/2}$, and finally one should take the completion of this space with respect to the said scalar product.
In particular, the expression for the scalar product given in the left-hand side of (3) is valid only for quantum states (2) and not for complete solutions of the KG equation as in (1).
REMARK. Relativistic QM has a problematic status in view of several issues in particular related to the definition of the position operators (there are several possibilities, but none is completely convincing and all are non-local). In particular $|\Psi_t(x)|^2$ cannot be interpreted as the probability density to find the particle at $x$ (when time is $t$). Generally speaking only one-particle states whose energy content is smaller than the mass of the considered particle (thus photons are ruled out) have some chances to have some physically meaningful interpretation.
However, the Hilbert space constructed above has a deep relevance also in the standard approach to QFT. Indeed, the symmetric Fock space of QFT is exactly constructed upon this one-particle Hilbert space. Together with the vacuum state, that Hilbert space is the crucial building block of the Fock space construction.
Let us come to the issue regarding the expression of the expectation value of an observable. It is clear from the discussion above that, if we deal with the momentum representation, where the scalar product is
$$\langle \phi , \phi' \rangle := \int_{\mathbb{R}^3} \overline{\phi(k)} \phi'(k) d^3k$$
nothing relevant changes with respect to the standard formalism. The expectation value $<A>_\phi $, defined from the spectral theory for the selfadjoint operator $A$, satisfies the identity (if $\phi$ stays in the domain of the operator)
$$<A>_\phi = \int_{\mathbb{R}^3} \overline{\phi(k)} (A\phi)(k) d^3k\:.$$
To export this identity in the spacetime representation is a hard issue and it strictly depends on the nature of $A$. Abstractly speaking, from (2), the spacetime representation $A_{st}$ of $A$ is defined as
$$(A_{st}\Psi)_t(x):= \int_{\mathbb{R}^3} \frac{(A\phi)(k)}{\sqrt{2E(k)}} e^{i(kx - E(k)t)} \:\:\frac{d^3k}{(2\pi)^{3/2}}\:.$$
With this definition, the expectation value of $A$ in the spacetime representation (provided a number of mathematical hypotheses are fulfilled) reads, from (3),
$$<A>_\phi = i\int_{\mathbb{R}^3} \left[\overline{\Psi_t(x)} \frac{\partial (A_{st}\Psi)_t(x)}{\partial t}
- (A_{st}\Psi)_t(x) \frac{\partial \overline{\Psi_t(x)}}{\partial t}\right] d^3x\:.$$
To find the explicit expression of $A_{st}$ is usually difficult, barring trivial cases such as the momentum.
A nice fact is that the Schroedinger equation with Hamiltonian $H:= \sqrt{k^2+m^2}$ in momentum representation, when represented in spacetime and for the considered states implies the KG equation.
However, the spacetime representation of the said Hamiltonian is a pseudodifferential operator, that is a non-local operator
$$H_{st} = \sqrt{-\Delta_x + m^2I}$$
where $\Delta_x$ is the spatial Laplacian:
$$i\frac{\partial \Psi_t}{\partial t}= \left(\sqrt{-\Delta_x + m^2I} \Psi_t\right)(x)$$ | {
"domain": "physics.stackexchange",
"id": 86596,
"tags": "quantum-mechanics, hilbert-space, operators, wavefunction, klein-gordon-equation"
} |
How to prove the emptiness of intersection of two context free languages is undecidable? | Question: Where can I find a proof that the emptiness problem for the intersection of two context free languages is undecidable? I searched on the internet but could not find anything helpful.
Do you maybe have a book or paper I should investigate?
Answer: A popular reference is the article Undecidable Problems for Context-free Grammars by Hendrik Jan Hoogeboom.
The following is a proof taken from this note by Rob van Glabbeek.
Theorem: It is undecidable whether or not the languages generated by two given context-free grammars have an empty intersection.
Proof: By a reduction from the Post correspondence problem (which is known to be undecidable) to the empty-intersection problem.
Given a set $d_1,\cdots,d_n$ of dominos where, for $i=1,\cdots,n$, the top string of $d_i$ is $w_i$ and the bottom string of $d_i$ is $x_i$, consider the context-free grammars
$$W\to w_1Wd_1\mid w_2Wd_2\mid\cdots\mid w_nWd_n\mid w_1d_1\mid w_2d_2\mid\cdots\mid w_nd_n$$
and
$$X\to x_1Xd_1\mid x_2Xd_2\mid\cdots\mid x_nXd_n\mid x_1d_1\mid x_2d_2\mid\cdots\mid x_nd_n.$$
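As a concrete illustration of the reduction, a tiny classic PCP instance can be checked by brute force (a sketch; the search bound is arbitrary):

```python
from itertools import product

# A classic PCP instance: top strings w_i and bottom strings x_i (0-based here).
w = ["a", "ab", "bba"]
x = ["baa", "aa", "bb"]

def find_match(max_len=5):
    """Search for an index sequence i_1..i_n with w_{i_1}...w_{i_n} = x_{i_1}...x_{i_n}."""
    for n in range(1, max_len + 1):
        for seq in product(range(len(w)), repeat=n):
            if "".join(w[i] for i in seq) == "".join(x[i] for i in seq):
                return seq
    return None

match = find_match()
print(match)  # → (2, 1, 2, 0), i.e. d_3 d_2 d_3 d_1 with 1-based indices
```

A match $(i_1,\ldots,i_n)$ exists exactly when the word $w_{i_1}\cdots w_{i_n}d_{i_n}\cdots d_{i_1}$ is generated by both grammars.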
Now notice that the given instance of PCP has a match exactly when the intersection of the languages generated by the resulting grammars above is nonempty. | {
"domain": "cs.stackexchange",
"id": 13982,
"tags": "formal-languages, context-free, reference-request, undecidability"
} |
Discrete Real Sinusoid Formula | Question: I have a basic question about the formulas represented for discrete sinusoids. In some textbooks we see this form
$$x[n]= A\cos (2\pi f nT + \phi ) $$
and in others we have
$$x[n]= A\cos (2\pi k/N n + \phi ) $$
What do $N$ and $T$ stand for? And what are the differences between these two formulas?
Answer: In the first formula, $x[n]$ is written as if it were a sampled version of the continuous-time function
$$x_c(t)=A\cos(2\pi ft+\phi)\tag{1}$$
If you sample $(1)$ at $t=nT$, $n\in\mathbb{Z}$, you get the first formula. $T$ is the sampling period, or the inverse of the sampling frequency $f_s=1/T$.
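A quick numerical check with illustrative values where $fT = k/N = 1/8$ makes the correspondence concrete:

```python
import math

A, phi = 1.0, 0.3
f, fs = 1000.0, 8000.0          # illustrative: f*T = 1/8, i.e. k = 1, N = 8
T = 1.0 / fs
x = [A * math.cos(2 * math.pi * f * n * T + phi) for n in range(32)]

k, N = 1, 8                      # second form with the same f*T
y = [A * math.cos(2 * math.pi * k / N * n + phi) for n in range(32)]

assert all(abs(a - b) < 1e-12 for a, b in zip(x, y))         # same sequence
assert all(abs(x[n] - x[n + N]) < 1e-12 for n in range(24))  # period N
print("formulas agree; period =", N)
```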
If you compare the two formulas in your question you see that $fT=k/N$. Assuming that $k$ and $N$ are integers, then the second formula is a special case of the first one. The discrete-time signal obtained by sampling $(1)$ is not necessarily periodic. However if $fT$ is rational, as in your second formula, then $x[n]$ is periodic with period $N$. | {
"domain": "dsp.stackexchange",
"id": 7770,
"tags": "discrete-signals"
} |
Why don't we just build a giant wind turbine? | Question: The power generated by a wind turbine is given by:
$$\mathrm{Power} = \frac{1}{2}C\rho AV^3$$
Where:
$\rho = \text{Air density}$
$C = \text{Coefficient of performance}$
$A = \text{Frontal area}$
$V = \text{Velocity of the wind}$
In other words, the power is proportional to the square of the length of the blades and the cube of the velocity of the wind. As we know, wind velocities are higher at high altitudes. So instead of building many smaller wind turbines, why can't we just build a giant wind turbine that is, say, 1000 m tall? That may be an engineering challenge at first, but won't that be more economical in the end? After all, the Burj Khalifa in Dubai is 828 m tall.
Why can't we build three pillars instead of one to support such a structure? Why can't we build one at sea?
The Vestas V164 has a rated capacity of 8.0 MW, has an overall height
of 220 m (722 ft), a diameter of 164 m (538 ft), is for offshore use,
and is the world's largest-capacity wind turbine since its
introduction in 2014.
Answer: Not everything scales linearly. In particular, the cross-sectional area of supports required scales faster than height of a structure, all else held constant. This explains why ants have tiny thin legs compared to elephants. An ant linearly scaled up to elephant size would not be able to stand, or would snap its legs trying.
The same thing happens to wind turbines. You get some advantages to making them bigger, as you mention, but you are also ignoring the disadvantages. Not only must the structural support for a large turbine be disproportionately larger than a smaller one, there is also more wind loading, and that loading is higher up. That puts disproportionately more torque on the mounting that has to be countered somehow.
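For a sense of scale, here is the question's formula evaluated with illustrative numbers (the parameter values are my own, chosen only to show the scaling):

```python
import math

def wind_power(C, rho, A, V):
    """P = 1/2 * C * rho * A * V**3, the formula from the question."""
    return 0.5 * C * rho * A * V**3

rho = 1.225   # kg/m^3, sea-level air (illustrative)
C = 0.4       # coefficient of performance, below the Betz limit of 16/27
V = 10.0      # m/s

# Doubling the rotor radius quadruples the swept area and hence the power...
P1 = wind_power(C, rho, math.pi * 40.0**2, V)
P2 = wind_power(C, rho, math.pi * 80.0**2, V)
print(P2 / P1)             # → 4.0

# ...but the tower, blades and mounting must then carry far larger loads.
print(round(P1 / 1e6, 2))  # ≈ 1.23 MW at only 10 m/s
```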
Then there are manufacturing and maintenance issues. Building 500 m blades will be difficult, especially considering they would need to be assembled in the field where it will be more costly and more difficult to do well than in a controlled environment in a factory.
Wind turbines have gotten very large in recent years, for the reasons you mention. Material and manufacturing advances may allow them to get even larger and still make economic sense, but due to the non-linear nature of how various things scale, there will always be some finite sweet spot. | {
"domain": "engineering.stackexchange",
"id": 4379,
"tags": "electrical-engineering, civil-engineering, structural-engineering, renewable-energy, wind-power"
} |
Age of the universe | Question: We know that the universe is approximately 13.7 billion years old. What methods did we use to discover this? How sure are we about its accuracy?
Answer: The correct version of your first sentence would be something like:
Given our knowledge and the standard cosmological model, we estimate that the age of the universe is about 13.7 billion years.
Every work quotes slightly different values for the age, depending on methods, observations used, assumptions, etc. As an example, the first results from the satellite Planck report an age of 13.813 billion years, with an uncertainty of 58 million years (about 0.5%; to my knowledge one of the best).
So, first question is answered.
What methods did we use to discover this?
About 100 years ago a number of people realized that our Galaxy is not the only one in the Universe and that all1 other galaxies are receding from us with a speed increasing with distance (credit for this work is generally given to Edwin Hubble for his Redshift-Distance relation for galaxies, formulated in 1929). This, together with theoretical results from general relativity, brought scientists to believe that our Universe is expanding. This means that in the past it was smaller, much smaller than it is now 2.
The age of the Universe depends on the rate of expansion at each time. To have this, you must estimate the mean composition of the universe (in various components like radiation, matter, dark energy, and curvature) and the present day expansion rate (also known as the Hubble Constant). Once you have this, it's easy3 to compute the age of the Universe.
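As a sketch of that computation, assuming a flat ΛCDM model with illustrative Planck-like parameters (radiation neglected):

```python
import math

H0 = 67.4              # km/s/Mpc (illustrative Planck-like value)
Om, OL = 0.315, 0.685  # matter and dark-energy fractions, flat universe

# Hubble time 1/H0 in Gyr
MPC_KM, GYR_S = 3.0857e19, 3.1557e16
hubble_time = MPC_KM / H0 / GYR_S

# t0 = (1/H0) * Integral_0^1 da / (a * E(a)),  E(a) = sqrt(Om*a^-3 + OL)
def integrand(a):
    return 1.0 / (a * math.sqrt(Om * a**-3 + OL))

n = 200000  # midpoint rule
integral = sum(integrand((i + 0.5) / n) for i in range(n)) / n
age = hubble_time * integral
print(round(age, 2), "Gyr")  # ≈ 13.8 Gyr
```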
footnotes
except the ones in the local group that are gravitationally bound to us
side note: the cosmic microwave background (a radiation emitted about 13 and a half billions of years ago) provided in the 1964 a striking confirmation that the universe once was indeed much hotter and smaller
knowing what an integral is, and with some basics of programming, it shouldn't be too difficult to solve equation 1 here | {
"domain": "astronomy.stackexchange",
"id": 82,
"tags": "early-universe"
} |
Proof of Minsky Papert Symmetrization technique | Question: I frequently hear about the Minsky-Papert Symmetrization technique in many papers with a reference to the book of Minsky. I could not locate the book online. Could someone supply me a proof of the symmetrization technique?
For instance, it is used in Lemma $5$ in this paper http://www.csee.usf.edu/~tripathi/Publication/polynomial-degree-conference.pdf
Answer: Over $0/1$ inputs we have
$$
\begin{align*}
(y_1+\cdots+y_N)^0 &= 1 \\
(y_1+\cdots+y_N)^1 &= \sum_i y_i \\
(y_1+\cdots+y_N)^2 &= \sum_i y_i+2\sum_{i<j} y_iy_j \\
(y_1+\cdots+y_N)^3 &= \sum_i y_i+6\sum_{i<j} y_iy_j + 6\sum_{i<j<k} y_iy_jy_k
\end{align*}
$$
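These identities can be verified exhaustively over $0/1$ inputs; a quick sketch for the quadratic and cubic cases:

```python
from itertools import product

def check_power_identities(N):
    """Verify the 0/1 expansions of (y1+...+yN)^2 and (y1+...+yN)^3."""
    for ys in product((0, 1), repeat=N):
        s = sum(ys)
        pairs = sum(ys[i] * ys[j]
                    for i in range(N) for j in range(i + 1, N))
        triples = sum(ys[i] * ys[j] * ys[k]
                      for i in range(N) for j in range(i + 1, N)
                      for k in range(j + 1, N))
        assert s ** 2 == s + 2 * pairs
        assert s ** 3 == s + 6 * pairs + 6 * triples
    return True

print(check_power_identities(5))  # → True
```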
And so on. It follows that for $0/1$ inputs, $p_{sym}$ can be written as a linear combination of $(y_1+\cdots+y_N)^0,\ldots,(y_1+\cdots+y_N)^d$, where $d$ is its degree. This linear combination can also be viewed as a polynomial $\tilde{p}$ in $y_1+\cdots+y_N$, which is equal to $p_{sym}$ for $0/1$ inputs. | {
"domain": "cs.stackexchange",
"id": 3696,
"tags": "polynomials"
} |
SemaphoreSlim throttling | Question: I am trying to throttle SemaphoreSlim (i.e. allow initialization with a negative initialCount). A scenario could be that you're hitting an API and may notice degradation due to a server overload, so you'd want to start throttling requests to, say, 10 concurrent requests. However, you would be keeping count of the concurrent requests and know that at this time there are 25 concurrent requests, but you can't start a SemaphoreSlim(-15, 10).
I tried making an implementation that allows this, but I'm not 100% sure whether or not this is thread-safe and if it could be optimized (e.g. doing without the locks).
public class SemaphoreSlimThrottle : SemaphoreSlim
{
private volatile int _throttleCount;
private readonly object _lock = new object();
public SemaphoreSlimThrottle(int initialCount)
: base(initialCount)
{
}
public SemaphoreSlimThrottle(int initialCount, int maxCount)
: base(Math.Max(0, initialCount), maxCount)
{
_throttleCount = Math.Min(0, initialCount);
}
public new int CurrentCount => _throttleCount + base.CurrentCount;
public new int Release()
{
if (_throttleCount < 0)
{
lock (_lock)
{
if (_throttleCount < 0)
{
_throttleCount++;
return _throttleCount - 1;
}
}
}
return base.Release();
}
public new int Release(int releaseCount)
{
if (releaseCount < 1)
{
base.Release(releaseCount); // throws exception
}
if (releaseCount + _throttleCount <= 0)
{
lock (_lock)
{
if (releaseCount + _throttleCount <= 0)
{
_throttleCount += releaseCount;
return _throttleCount - releaseCount;
}
}
}
if (_throttleCount < 0)
{
lock (_lock)
{
if (_throttleCount < 0)
{
int output = CurrentCount;
base.Release(releaseCount + _throttleCount);
_throttleCount = 0;
return output;
}
}
}
return base.Release(releaseCount);
}
}
I've packaged this as a NuGet package with source available on GitHub.
Answer: First, the "new" keyword. If this class is cast back down to SemaphoreSlim, then none of the code that has the new keyword would be used; just the base SemaphoreSlim code would execute.
Since we want the new code to execute, I would suggest not inheriting from SemaphoreSlim but rather wrapping it and then chaining the calls down into the wrapped SemaphoreSlim. Something like the following (I didn't put in all the methods; this is just an example of the approach).
public class SemaphoreSlimThrottle : IDisposable
{
private volatile int _throttleCount;
private readonly SemaphoreSlim _semaphore;
private readonly object _lock = new object();
private bool _turnOffThrottleCheck = false;
public SemaphoreSlimThrottle(int initialCount, int maxCount)
{
if (initialCount < 0)
{
_semaphore = new SemaphoreSlim(0, maxCount);
_throttleCount = initialCount;
// _turnOffThrottleCheck stays false so throttling remains active
}
else
{
_semaphore = new SemaphoreSlim(initialCount, maxCount);
_turnOffThrottleCheck = true;
}
}
public WaitHandle AvailableWaitHandle => _semaphore.AvailableWaitHandle;
public int CurrentCount => _throttleCount + _semaphore.CurrentCount;
public bool Wait(TimeSpan timeout, CancellationToken cancellationToken) => _semaphore.Wait(timeout, cancellationToken);
public int Release() => Release(1);
public void Dispose()
{
_semaphore.Dispose();
}
}
The other optimization that can be performed is to not access the volatile field once the "extras" have been used up. Also, call into the base outside the lock to keep the locks as short as possible.
Warning: I did not test this code; there could be typos or bugs in it - use it as a gist
public int Release(int releaseCount)
{
// using bool property to avoid unnecessary volatile accesses in happy path
if (releaseCount < 1 || _turnOffThrottleCheck)
{
return _semaphore.Release(releaseCount);
}
int remainingCount;
var returnCount = 0;
lock (_lock)
{
var throttleCount = _throttleCount;
if (throttleCount == 0) // A different thread already released all of them; just call into base
{
remainingCount = releaseCount;
}
else if (releaseCount + throttleCount < 0) // Releasing less than throttle just decrease
{
_throttleCount += releaseCount;
remainingCount = 0;
returnCount = throttleCount;
}
else // releasing all the throttles
{
_throttleCount = 0;
_turnOffThrottleCheck = true;
returnCount = throttleCount;
remainingCount = releaseCount + throttleCount;
}
}
// doing outside lock
if (remainingCount > 0) // call into base if more locks to be released
{
return _semaphore.Release(remainingCount) + returnCount; // release only the part not absorbed by the throttle
}
return returnCount + _semaphore.CurrentCount;
}
This way we don't access _throttleCount in the happy path. We also release the lock and just call into the SemaphoreSlim outside the lock. | {
"domain": "codereview.stackexchange",
"id": 43228,
"tags": "c#"
} |
Minimum Phase - All Pass Decomposition For Large Linear Phase Filters | Question: UPDATE:
I am looking for a robust approach to decompose linear phase FIR filters with 100s of coefficients into its minimum phase and all pass components.
I originally thought determining all the zeros from the coefficients would be a mathematical challenge, so this question was initially focused on approaches to find the zeros. Through comments from @MattL I've learned that determining the zeros from 100s of coefficients may not necessarily be the numerical challenge (roots in Python and Octave appear to have no problem doing this) but the task of going from the roots back to a polynomial is significantly challenging.
I posted a related question on SE. Mathematics for this challenge of roots to polynomial.
A best answer will provide a feasible path for linear phase decomposition when dealing with a very large number of coefficients. We are given the coefficients for a large filter with 100s of coefficients, and from that I would like to be able to decompose it into minimum phase and maximum phase (all-pass) components.
Subsequent Update: In a response on the Mathematics site it was demonstrated that the issue may be in the precision of the roots function itself in MATLAB, Octave and Python. This elevates my original thoughts on precision root finding, within the high-level question of decomposing large linear-phase FIR filters into min-phase/max-phase components, to the question of the best approaches for finding the roots of such a large polynomial.
Original Question:
I thought that it may be relatively trivial and efficient to do a gradient descent search for the zeros within the unit circle, since we know exactly how many zeros we are searching for, we only need to search within and up to the unit circle (bounded), and that the surface of the z-transform is analytic, so we are guaranteed to have a smooth descent as we slide down the magnitude of the surface toward each zero. I can envision starting at the origin and moving outward until all zeros are located (counting number of zeros for each radius, which we could get from the DTFT after scaling the coefficients to the radius under test). Further binary search algorithms on the space could make this task very efficient and fast, even for thousands of zeros.
Before I detail this algorithm and put it to practice, I wanted to find out if (a) this already exists and is available, or (b) there are alternate efficient algorithms that would not have any trouble with finding hundreds of zeros within the unit circle (factoring a polynomial of hundredth order and higher), or (c) practical examples where this task is not as trivial as I envision with my approach and no such algorithm exists.
Below is a graphic to picture what I am envisioning, here with a simple case of four zeros inside the unit circle, as depicted on the typical pole-zero diagram on the z-plane, and its associated analytic surface. Realizable (causal) linear phase FIR filters with $N$ zeros inside the unit circle would also have $N$ poles at the origin, but for this purpose of finding the zeros they needn't be included and are not shown in the graphic below.
Determining the Minimum Phase Component Using the Hilbert Transform
@ZRHann makes the good suggestion to use the Hilbert Transform to extract the minimum phase system. This suggestion has merit since the minimum phase system frequency response $G_{min}(\omega)$ for any magnitude response $|G(\omega)|$ is given as:
$$G_{min}(\omega) = |G(\omega)|e^{j \phi (\omega)}$$
$$\phi(\omega) = H\{\ln(|G(\omega)|\}$$
I made an attempt of this approach using a test case provided by @MattL in response to another question, and used in his response to this one, which is the coefficients of an equiripple filter provided by:
import numpy as np
import numpy.fft as fft
import scipy.signal as sig

coeff = sig.remez(668, [0, .3, .31, 1], [1, 0], [1, 2800], fs=2)
And results in the following frequency response (FIR order 667):
As a quick test of @ZRHann's suggestion I did the following:
w, g = sig.freqz(coeff, 1, 668*N, whole=True) # DTFT to establish mag response g (N: integer oversampling multiplier, see below)
phase = np.imag(sig.hilbert(np.log(np.abs(g)))) # min phase
mincoeff = fft.ifft(np.conj(np.abs(g)*np.exp(1j*phase))) # ifft to get coeff
(Note a potential drawback of this approach: the result is an FIR-only system; we can't follow this procedure to extract a minimum-phase IIR.)
I extended the multiplier N to reduce aliasing effects in the inverse FFT. This approach has reasonable results for comparison to other solutions. Below shows the results from simply truncating the resulting impulse response back to the original filter length, while further improvements by using windowing of this response could likely be achieved:
Zooming in on the passband also shows a ripple that was consistent with the original filter which had passband ripple +/-0.25 dB. Windowing should reduce the additional variation from the equiripple response both in passband and stopband.
Answer: Computing the polynomial coefficients from the roots of the polynomial is a potentially ill-conditioned problem. However, it turns out that the order of the roots supplied to the poly function can change things considerably. I learned this from these lecture notes by Ivan Selesnick. The so-called Leja ordering [1] helps to reduce numerical errors even for high polynomial orders.
In this answer I designed an FIR filter of order $667$. I used this filter to test the Octave functions roots and poly, and to see if accuracy can be improved by Leja ordering of the roots. For the Leja ordering I used a Matlab/Octave implementation found here. I believe that it is the one that was originally published in [2].
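For reference, the greedy Leja ordering itself is short; here is a simplified Python sketch (the cited Matlab/Octave implementation is more careful about ties and scaling):

```python
import numpy as np

def leja_order(z):
    """Greedy Leja ordering: start from a largest-modulus point, then repeatedly
    pick the remaining point whose product of distances to the already chosen
    points is largest (tracked as a sum of log-distances to avoid overflow)."""
    remaining = np.asarray(z, dtype=complex).copy()
    logprod = np.zeros(len(remaining))        # running sum of log|distances|
    idx = int(np.argmax(np.abs(remaining)))
    ordered = []
    while len(remaining):
        picked = remaining[idx]
        ordered.append(picked)
        remaining = np.delete(remaining, idx)
        logprod = np.delete(logprod, idx)
        if len(remaining) == 0:
            break
        logprod += np.log(np.abs(remaining - picked) + 1e-300)
        idx = int(np.argmax(logprod))
    return np.array(ordered)
```

Given the roots r of the filter polynomial, leja_order(r) can be passed to numpy.poly in place of the Matlab leja function used below.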
This is how I tested the influence of Leja ordering of the polynomial roots on the accuracy of the polynomial coefficients obtained by poly:
h = remez(667, [0,.3,.31,1], [1,1,0,0], [1,2797]);
r = roots(h);
h1 = poly(r);
h1 = h1 * sum(h)/sum(h1);
rl = leja(r);
h2 = poly(rl);
h2 = h2 * sum(h)/sum(h2);
The figure below shows the differences between the original impulse response h and the two others: h1 is obtained by simple application of roots followed by poly, and h2 is obtained by applying Leja ordering to the roots before the call to poly. Note the scaling: the maximum error is in the order of $10^{15}$ in the first case, and $10^{-13}$ in the second.
Extraction of minimum-phase component:
Using the Leja ordering it was possible to extract the minimum-phase component of the linear-phase FIR filter of order $667$ mentioned above. We need to reflect the zeros that are outside the unit circle and compute the filter coefficients from the modified polynomial. Note that the resulting minimum-phase component is not an optimal filter for the given design problem, because the optimal minimum-phase filter should have a smaller order than the given filter. The following Octave commands were used to extract the minimum phase component:
sn = 1e-10;
rm = leja(r); % Leja ordering of roots of original filter
Io = find( abs(rm) > 1+sn ); % reflect zeros with |r|>1
rm(Io) = 1./conj(rm(Io));
% reconstruct and normalize minimum-phase response
hm = poly( rm );
hm = hm * sum(h)/sum(hm);
The figures below show the impulse responses and the magnitude responses of the linear phase filter and its minimum-phase component. The magnitude responses are virtually identical, as should be the case.
[1] F. Leja, "Sur certaines suites liées aux ensembles plan et leur application à la représentation conforme," Ann. Polon. Math., vol. 4:8-13, 1957.
[2] M. Lang, and B. - C. Frenzel, "Polynomial Root Finding", IEEE Signal Processing Letters, Oct., 1994. | {
"domain": "dsp.stackexchange",
"id": 10736,
"tags": "finite-impulse-response, linear-phase, decomposition, allpass, minimum-phase"
} |
Can Hooke's law go along with Newton's 3rd law & how is momentum conserved in case of spring? | Question: Let there be a spring in its relaxed state- that is neither compressed nor extended. One end is fixed on the left and other end is free in the right. Now, I stretch the spring by pulling the free-end towards right. As soon as I do it, the spring exerts force ie. the restoring force on me towards left. But, I willn't move on the left but rather expand the spring towards right & it will exert more restoring force acc. to Hooke's law.
So, in this spring-me system, is the linear momentum conserved? If so, how? And also, does Hooke's law go along with Newton's 3rd law? That is, if I exert a force on the spring, the spring exerts a restoring force on me; then can I say, by Newton's third law, that at any instant $$ \text{restoring force} = - \text{my applied force}$$ ? Please help me by explaining these two things: how the momentum is conserved, and whether the relation above is true.
Answer: The above relation is correct. Regarding momentum, you need to consider the momentum of the entire system, including the wall, because the spring is not isolated.
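A toy numerical model makes this concrete (1-D, illustrative masses and spring constant of my own choosing, semi-implicit Euler integration):

```python
# "You" (m1) and the wall (m2) joined by a spring, no friction.
m1, m2 = 70.0, 1000.0      # kg (illustrative)
k, L0 = 50.0, 1.0          # spring constant (N/m) and rest length (m)
x1, v1 = 0.0, 0.0
x2, v2 = 1.5, 0.0          # spring starts stretched by 0.5 m
dt = 1e-3

p0 = m1 * v1 + m2 * v2     # total momentum (zero here)
for _ in range(10000):
    F = k * ((x2 - x1) - L0)   # Hooke's law: force on m1, toward the wall
    v1 += (F / m1) * dt        # Newton's 3rd law: equal and opposite forces
    v2 += (-F / m2) * dt
    x1 += v1 * dt
    x2 += v2 * dt

print(abs(m1 * v1 + m2 * v2 - p0))  # stays ~0: momentum is conserved
```

The light body drifts right, the heavy wall barely moves left, and the total momentum never changes; pinning the wall (or adding floor friction) is what introduces an external force.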
Just imagine the wall is very heavy but not infinitely so. And suppose both you and the wall are on a frictionless surface; otherwise there will be additional friction forces and momentum will not be conserved. In the frictionless case, when you pull the spring to the right, it will move to the right together with your hand, but the rest of your body will slightly move to the left to compensate (because of the reaction force). Also, the wall will move slightly to the left, only minimally if it is very heavy, so the spring will actually expand. If you take into account all these motions, momentum is conserved (because no external forces act on the whole system). Momentum will not be conserved if you are standing on a regular floor, because friction (an external force) will act. | {
"domain": "physics.stackexchange",
"id": 17970,
"tags": "newtonian-mechanics, momentum, spring"
} |
Strategy pattern in C++ - the Duck simulator | Question: I'm studying design patterns from Head First Design Patterns and, in order to get confident, I plan to implement the each pattern in C++ after studying the corresponding chapter.
As regards the Strategy pattern, this is the result:
Duck.hpp:
#ifndef DUCK_H
#define DUCK_H
#include <memory>
#include "FlyBehavior.hpp"
#include "QuackBehavior.hpp"
class Duck {
private:
std::unique_ptr<FlyBehavior> flyBehavior;
std::unique_ptr<QuackBehavior> quackBehavior;
public:
Duck(std::unique_ptr<FlyBehavior>, std::unique_ptr<QuackBehavior>);
void performFly();
void performQuack();
void setFlyBehavior(std::unique_ptr<FlyBehavior>);
void setQuackBehavior(std::unique_ptr<QuackBehavior>);
virtual void display() = 0;
virtual ~Duck() = default;
};
#endif /* ifndef DUCK_H */
Duck.cpp:
#include <algorithm>
#include <memory>
#include "Duck.hpp"
Duck::Duck(std::unique_ptr<FlyBehavior> fb,
std::unique_ptr<QuackBehavior> qb)
: flyBehavior(std::move(fb))
, quackBehavior(std::move(qb))
{}
void Duck::performFly() {
flyBehavior->fly();
};
void Duck::performQuack() {
quackBehavior->quack();
};
void Duck::setFlyBehavior(std::unique_ptr<FlyBehavior> fb) {
flyBehavior = std::move(fb);
};
void Duck::setQuackBehavior(std::unique_ptr<QuackBehavior> qb) {
quackBehavior = std::move(qb);
};
RubberDuck.hpp:
#ifndef RUBBERDUCK_H
#define RUBBERDUCK_H
#include "Duck.hpp"
class RubberDuck : public Duck {
public:
RubberDuck();
void display() override;
};
#endif /* ifndef RUBBERDUCK_H */
RubberDuck.cpp:
#include <iostream>
#include <memory>
#include "FlyNoWay.hpp"
#include "RubberDuck.hpp"
#include "Squeak.hpp"
RubberDuck::RubberDuck()
: Duck(std::make_unique<FlyNoWay>(),
std::make_unique<Squeak>()) {
};
void RubberDuck::display() {
std::cout << "Hello, I am a rubber duck!\n";
};
SupersonicDuck.hpp:
#ifndef SUPERSONICDUCK_H
#define SUPERSONICDUCK_H
#include "Duck.hpp"
class SupersonicDuck : public Duck {
public:
SupersonicDuck();
void display() override;
};
#endif /* ifndef SUPERSONICDUCK_H */
SupersonicDuck.cpp:
#include <iostream>
#include <memory>
#include "FlyRocketPowered.hpp"
#include "SupersonicDuck.hpp"
#include "SupersonicSqueak.hpp"
SupersonicDuck::SupersonicDuck()
: Duck(std::make_unique<FlyRocketPowered>(),
std::make_unique<SupersonicSqueak>()) {
}
void SupersonicDuck::display() {
std::cout << "Hello, I am a supersonic duck!\n";
};
FlyBehavior.hpp:
#ifndef FLYBEHAVIOR_H
#define FLYBEHAVIOR_H
class FlyBehavior {
public:
virtual void fly() = 0;
virtual ~FlyBehavior() = default; // needed: instances are deleted through a base-class pointer
};
#endif /* ifndef FLYBEHAVIOR_H */
FlyNoWay.hpp:
#ifndef NOFLY_H
#define NOFLY_H
#include "FlyBehavior.hpp"
class FlyNoWay : public FlyBehavior {
public:
void fly() override;
};
#endif /* ifndef NOFLY_H */
FlyNoWay.cpp:
#include <iostream>
#include "FlyNoWay.hpp"
void FlyNoWay::fly() {
std::cout << "I can't fly!\n";
};
FlyRocketPowered.hpp:
#ifndef FLYROCKETPOWERED_H
#define FLYROCKETPOWERED_H
#include "FlyBehavior.hpp"
class FlyRocketPowered : public FlyBehavior {
void fly() override;
};
#endif /* ifndef FLYROCKETPOWERED_H */
FlyRocketPowered.cpp:
#include <iostream>
#include "FlyRocketPowered.hpp"
void FlyRocketPowered::fly() {
std::cout << "WHOOOOOOMMMMMM\n";
};
QuackBehavior.hpp
#ifndef QUACKBEHAVIOR_H
#define QUACKBEHAVIOR_H
class QuackBehavior {
public:
virtual void quack() = 0;
virtual ~QuackBehavior() = default; // needed: instances are deleted through a base-class pointer
};
#endif /* ifndef QUACKBEHAVIOR_H */
Squeak.hpp
#ifndef SQUEAK_H
#define SQUEAK_H
#include "QuackBehavior.hpp"
class Squeak : public QuackBehavior {
public:
void quack() override;
};
#endif /* ifndef SQUEAK_H */
Squeak.cpp
#include <iostream>
#include "Squeak.hpp"
void Squeak::quack() {
std::cout << "Squeak!\n";
};
SupersonicSqueak.hpp
#ifndef SUPERSONICSQUEAK_H
#define SUPERSONICSQUEAK_H
#include "QuackBehavior.hpp"
class SupersonicSqueak : public QuackBehavior {
public:
void quack() override;
};
#endif /* ifndef SUPERSONICSQUEAK_H */
SupersonicSqueak.cpp
#include <iostream>
#include "SupersonicSqueak.hpp"
void SupersonicSqueak::quack() {
std::cout << "SQUEEEEAAAAK!\n";
};
main.cpp
#include <memory>
#include "RubberDuck.hpp"
#include "SupersonicDuck.hpp"
#include "FlyRocketPowered.hpp"
#include "Squeak.hpp"
int main() {
RubberDuck rd;
rd.display();
rd.performFly();
rd.performQuack();
SupersonicDuck sd;
sd.display();
sd.performFly();
sd.performQuack();
// enhance rd with rockets:
rd.setFlyBehavior(std::make_unique<FlyRocketPowered>());
rd.display();
rd.performFly();
rd.performQuack();
}
The code works, hence I'm posting here and not on Stack Overflow; however, I have some concerns about the following choices:
I've used make_unique/unique_ptr, but I could also use make_shared/shared_ptr;
I've tried to always split a translation unit in header and implementation files;
I've not defined a default constructor, but rather a two parameter constructor, as no duck should exist without the two behaviors;
The methods setFlyBehavior and setQuackBehavior can change the objects' behavior at runtime, such that the only two things that cannot change are the type itself of the object, and those methods (such as display) which have not been factored out of the Duck class.
Answer: The code looks good. It is probably a good idea to make the include guard macros agree with the file names (the guards end in _H while the files are .hpp).
There is a simpler way to implement the strategy pattern — to store std::functions rather than smart pointers to base classes:
#include <functional>

struct Duck {
using behavior_t = std::function<void()>;
behavior_t fly;
behavior_t quack;
behavior_t display; // maybe const; but that disables moving
};
Now everything is much simpler:
// for example (assumes the Duck definition above plus <iostream> and <string>)
const auto plain_fly = [] { std::cout << "(flies)\n"; };
const auto plain_quack = [] { std::cout << "(quacks)\n"; };
// can even determine operation dynamically
struct hello_display {
std::string name;
void operator()() const
{
std::cout << "Hello, I am a " << name << "!\n";
}
};
int main()
{
Duck plain_duck{plain_fly, plain_quack, hello_display{"plain duck"}};
plain_duck.fly();
plain_duck.quack();
plain_duck.display();
}
(live demo)
I've used make_unique/unique_ptr, but I could also use make_shared/shared_ptr;
If the resource is owned by only one smart pointer rather than shared by multiple smart pointers, then unique_ptr is a reasonable choice.
I've tried to always split a translation unit in header and implementation files;
For simple operations, you can alternatively define the methods inline in the header file.
I've not defined a default constructor, but rather a two parameter
constructor, as no duck should exist without the two behaviors;
Consider checking for null pointers.
The methods setFlyBehavior and setQuackBehavior can change the
objects' behavior at runtime, such that the only two things that
cannot change are the type itself of the object, and those methods
(such as display) which have not been factored out of the Duck class.
This is fine if it fits the semantics of your program. Note that you can also use the strategy pattern for immutable attributes by not providing the corresponding modifier method. | {
"domain": "codereview.stackexchange",
"id": 37685,
"tags": "c++, design-patterns, inheritance"
} |
Can photons scattering off quarks produce resonant states? | Question: An exam question about photons scattering off protons showed a resonance graph and asked why there was a lack of structure for energies much higher than the resonance peak. It alluded that the answer had something to do with the fact that at these energies the photons scatter off the constituent quarks rather than the proton as a whole.
Does that mean that, since quarks are elementary particles, there are no resonant states that can be formed from a photon interacting directly with a quark?
Answer: Quarks are most peculiar elementary particles: they cannot be asymptotic states, and be observed in isolation outside hadrons.
So, no, a photon cannot excite them to some type of resonant hadronic bound state. How could it?
(A photon could, and often does, knock them out of the hadron, but they drag the requisite gluons and quarks out with them to generate their own color-singlet hadron cocoon.) | {
"domain": "physics.stackexchange",
"id": 78907,
"tags": "quantum-mechanics, quantum-field-theory, particle-physics, atomic-physics, quantum-electrodynamics"
} |
several catkin workspaces | Question:
Hi all,
Is it possible to have several catkin workspaces and to source them sequentially in the .bashrc without conflicts, or might we experience issues with that?
source /ws1/devel/setup.bash
source /ws2/devel/setup.bash
source /ws3/devel/setup.bash
# ...
Thanks
Originally posted by courrier on ROS Answers with karma: 454 on 2014-06-27
Post score: 3
Original comments
Comment by Dirk Thomas on 2014-06-28:
Effectively the last line will "overwrite" all previously sources workspaces. You should chain your workspaces on top of each other as described in the answer from @joq and only source the leaf workspace.
Answer:
For people whose packages come from different git/subversion repositories with their own, non-catkin-compliant file hierarchies, and who wanted to create several catkin workspaces because of that: I suggest creating a single "central" catkin workspace with symbolic links pointing to your git/svn repos. This way you don't break the file hierarchy of your repos.
courrier@zbook:~/ros_ws/src$ ls -l
total 29
lrwxrwxrwx 1 courrier courrier 42 Sep 16 12:22 baxter_common -> /home/courrier/packages_ros/src/baxter_common/
lrwxrwxrwx 1 courrier courrier 48 Sep 9 19:25 CMakeLists.txt -> /opt/ros/hydro/share/catkin/cmake/toplevel.cmake
lrwxrwxrwx 1 courrier courrier 47 Sep 10 15:02 lemon_moveit_config -> /home/courrier/inria/asv1/src/lemon_moveit_config/
lrwxrwxrwx 1 courrier courrier 26 Sep 11 13:34 lemon_ros -> /home/courrier/university/lemon_ros/
[...]
Originally posted by courrier with karma: 454 on 2014-09-17
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 18420,
"tags": "ros, multiple, catkin"
} |
Why doesn't the Earth accelerate towards us? | Question: Newton's third law of motion states that every action has an equal and opposite reaction.
So, if the Earth exerts a gravitational pull on us (people) then even we should exert a force equal and opposite (in terms of direction) on the Earth.
It is intuitive to think that this force is really small to get the Earth to move. But, if we take a look at the second law of motion that states that F = ma we see that however small the force, there will be some amount of acceleration. Therefore, even though we exert a very small gravitational force on the Earth it should be enough to get the Earth to move even though the acceleration is a very small amount.
But what I said clearly does not happen. So there must be some flaw with my reasoning. What is that flaw?
Answer: The acceleration that your gravitational pull causes in the Earth is tiny, tiny, tiny because the Earth's mass is enormous. If your mass is, say, $70\;\mathrm{kg}$, then you cause an acceleration of $a\approx 1.1\times 10^{-22}\;\mathrm{m/s^2}$.
A tiny, tiny, tiny acceleration does not necessarily mean a tiny, tiny, tiny speed, since, as you mention in comments, the velocity accumulates. True. It doesn't necessarily mean that - but in this case it does. The speed gained after 1 year at this acceleration is only $v\approx 3.6×10^{-15}\;\mathrm{m/s}$. And after a lifetime of 100 years it is still only around $v\approx 3.6×10^{-13}\;\mathrm{m/s}$.
If all 7.6 billion people on the planet suddenly lifted off of Earth and stayed hanging in the air on the same side of the planet for 100 years, the planet would reach no more than $v\approx 2.8\times 10^{-3}\;\mathrm{m/s}$; that is around 3 millimeters-per-second in this obscure scenario of 100 years and billions of people's masses.
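These numbers are easy to reproduce. A minimal sketch (the constants are rounded, so the results only match the quoted figures approximately):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.97e24    # mass of the Earth, kg
R_earth = 6.371e6    # radius of the Earth, m
m_person = 70.0      # kg
year = 3.156e7       # seconds in one year

# By Newton's 3rd law the person pulls on the Earth with F = G*M*m/R^2,
# so the Earth's acceleration is a = F/M_earth = G*m/R^2.
F = G * M_earth * m_person / R_earth**2
a = F / M_earth
print(a)                        # ~1.15e-22 m/s^2

print(a * 100 * year)           # speed after 100 years: ~3.6e-13 m/s
print(a * 7.6e9 * 100 * year)   # 7.6 billion levitating people, 100 years: ~2.8e-3 m/s
```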
Now, with all that being said, note that I had to assume that all those people are not just standing on the ground - they must be levitating above the ground.
Because, while levitating (i.e. during free-fall), they only exert the gravitational force $F_g$:
$$\sum F=ma\quad\Leftrightarrow\quad F_g=ma,$$
causing a net acceleration according to Newton's 2nd law. If they were standing on the ground, on the other hand, they apart from their gravitational force also exert a downwards pushing force equal to their weight, $w$:
$$\sum F=ma\quad\Leftrightarrow\quad F_g-w=ma.$$
Then there are two forces exerted on the planet, pushing in opposite directions. And in fact, the weight exactly equals the gravitational force (because those two correspond directly to an action-reaction pair via Newton's 3rd law). So the pressing force exerted on the planet cancels out the gravitational pull in the planet. Then the above formula results in zero acceleration. The forces cancel out and nothing accelerates any further.
In general, no system can ever accelerate solely via its own internal forces. If we consider the Earth-and-people as one system, then their gravitational forces on each other are internal. Each part of the system may move individually - the Earth can move towards the people and the free-falling people can move towards the Earth. But the system as a whole - defined by the centre-of-mass of the system - will not move anywhere.
So, the Earth can move a tiny, tiny, tiny bit towards you while you move a much larger distance towards the Earth during your free-fall so the combined centre-of-mass is still stationary. But when standing on the ground, nothing can move because that would require you to break through the ground and move inwards into the Earth. If the Earth was moving but you weren't, then the centre of mass would be moving (accelerating) and that is simply impossible both because of the momentum conservation law as well as the energy conservation law. The system would be gaining kinetic energy without any external energy input; creating free energy out of thin air is just not possible. So this never happens. | {
"domain": "physics.stackexchange",
"id": 64984,
"tags": "newtonian-mechanics, forces, newtonian-gravity, reference-frames, acceleration"
} |
Error while installing DRCSIM | Question:
Hi, when I was installing DRCSIM, I got the following error:
Duplicate sources.list entry http://packages.osrfoundation.org/drc/ubuntu/ precise/main i386 Packages (/var/lib/apt/lists/packages.osrfoundation.org_drc_ubuntu_dists_precise_main_binary-i386_Packages)
I tried uninstalling DRCSIM and installing it again, but I got the same error.
How can I solve this problem?
Originally posted by Jey_316 on Gazebo Answers with karma: 55 on 2013-03-18
Post score: 0
Answer:
Google says http://askubuntu.com/questions/183007/how-to-fix-duplicate-sources-list-entry-warning.
Originally posted by nkoenig with karma: 7676 on 2013-03-18
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 3137,
"tags": "gazebo"
} |
What is the math behind the formula for the concentration of the strong titrant when calculating theoretical buffer capacity of a diprotic system? | Question: I have a question related to finding the theoretical buffer capacity of a diprotic system, specifically about the formula for the concentration of the strong titrant. In this post:
What is the formula for theoretical buffer capacity for a diprotic buffer system?
The top answer, given by grsousajunior, gives the equation for a buffer capacity, calculated by taking the derivative of the concentration of strong base as a function of pH. All of the math makes sense to me, except for the given equation for the concentration of strong base, $C_\mathrm{B}$. Here’s how my math for that equation worked out (most of this is reiterated in grsousajunior’s answer):
The first ionization is represented by the equation:
$\mathrm{H_2A+H_2O\rightleftharpoons HA^-+H_3O^+}$, and therefore has a $K_\mathrm{a1}$ of $K_\mathrm{a1}=\mathrm{\frac{[H_3O^+][HA^-]}{[H_2A]}}$
The second ionization is represented by the equation:
$\mathrm{HA^-+H_2O\rightleftharpoons A^{2-}+H_3O^+}$, and therefore has a $K_\mathrm{a2}$ of $K_\mathrm{a2}=\mathrm{\frac{[H_3O^+][A^{2-}]}{[HA^-]}}$
Shifting around the equation for $K_\mathrm{a1}$ to find $\mathrm{[HA^-]}=\frac{K_\mathrm{a1}\mathrm{[H_2A]}}{\mathrm{[H_3O^+]}}$ and putting that into the equation $\mathrm{[A^{2-}]}=\frac{K_\mathrm{a2}\mathrm{[HA^-]}}{\mathrm{[H_3O^+]}}$, $\mathrm{[A^{2-}]}=\frac{K_\mathrm{a1} K_\mathrm{a2}\mathrm{[H_2A]}}{\mathrm{[H_3O^+]^2}}$, at least by my math.
As far as I can tell, grsousajunior after that used the equation for the charge balance:
$\mathrm{[H_3O^+]+[B^+]=[OH^-]+[HA^-]+2[A^{2-}]}$
Which I think is correct, solved it for $\mathrm{[B^+]}$, and got $\mathrm{[B^+]}=C_\mathrm B=\mathrm{[OH^-]-[H_3O^+]+[HA^-]+2\,[A^{2-}]}$. Again, this is just speculation on my part. This brings me to my source of confusion. In the answer, grsousajunior gives the equation $C_\mathrm B=\frac{K_\mathrm w}{\mathrm{[H_3O^+]}}-\mathrm{[H_3O^+]}+\frac{K_\mathrm{a1}\,\mathrm{[H_2A]}\left(\mathrm{[H_3O^+]}+2K_\mathrm{a2}\right)}{\mathrm{[H_3O^+]^2}+K_\mathrm{a1}\mathrm{[H_3O^+]}+K_\mathrm{a1}K_\mathrm{a2}} $. To me, it looks like $\mathrm{[OH^-]}$ from the charge balance equation became $\frac{K_\mathrm w}{\mathrm{[H_3O^+]}}$, which makes perfect sense.
What I don’t get is how $\mathrm{[HA^-]+2[A^{2-}]}$ became $\frac{K_\mathrm{a1}\,\mathrm{[H_2A]}\left(\mathrm{[H_3O^+]}+2K_\mathrm{a2}\right)}{\mathrm{[H_3O^+]^2}+K_\mathrm{a1}\mathrm{[H_3O^+]}+K_\mathrm{a1}K_\mathrm{a2}} $.
When I add the equations that I have for those two concentrations, doubling the latter, I instead get $\frac{K_\mathrm{a1}\,\mathrm{[H_2A]}\left(\mathrm{[H_3O^+]}+2K_\mathrm{a2}\right)}{\mathrm{[H_3O^+]^2}}$. Everything is the same, except for the denominator. To me, it looks like the denominator somehow ended up as a quadratic, and I have no idea why. My best guess is that the concentrations of hydronium are separate between the two ionizations, but I don't know how to wrangle out the math for that.
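(To rule out a plain algebra slip, I also evaluated both expressions numerically with arbitrary made-up constants - they really do disagree, by exactly the ratio of the two denominators:)

```python
# Arbitrary illustrative constants for a hypothetical diprotic acid
Ka1, Ka2 = 1e-3, 1e-7
H = 1e-5        # [H3O+]
H2A = 0.05      # the concentration appearing in both expressions

numerator = Ka1 * H2A * (H + 2 * Ka2)
mine = numerator / H**2                           # my denominator
book = numerator / (H**2 + Ka1 * H + Ka1 * Ka2)   # the answer's denominator
print(mine, book)   # ~5.1 vs ~0.05
```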
Does anyone know if or how the math I did is wrong? Thank you!
P.S. - Everything after this step makes sense to me, but the equation for buffer capacity that I would end up with using my equation for the concentration of strong base would look a bit different.
Answer: The equation that you have derived (the last one you show)
\begin{equation}
\frac{K_\mathrm{a1}\mathrm{[H_2A]}\left(\mathrm{[H_3O^+]}+2K_\mathrm{a2}\right)} {\mathrm{[H_3O^+]^2}} = \color{blue}{\left(\frac{\ce{[H2A]}}{\ce{[H3O^+]}^2}\right)}
K_\mathrm{a1}([\ce{H3O+}] + 2K_\mathrm{a2}) \tag{1}
\end{equation}
is correct. However
$C_\ce{H2A}$ refers to the initial concentration of the species
$\ce{[H2A]}$ refers to the equilibrium concentration of the species. This is the term that appears in the equilibrium constants.
The relation between both can be found by a mass balance, and replacing $\ce{[HA-]}$ and $\ce{[A^2-]}$ with the expressions that you have obtained
\begin{align}
C_\ce{H2A} &= \ce{[H2A]} + \ce{[HA-]} + \ce{[A^2-]} \\
&= \ce{[H2A]} + \frac{K_\mathrm{a1}\ce{[H2A]}}{\ce{[H3O^+]}} +
\frac{K_\mathrm{a1}K_\mathrm{a2}\ce{[H2A]}}{\ce{[H3O^+]}^2} \\
&= \frac{\ce{[H3O^+]}^2\ce{[H2A]}}{\ce{[H3O^+]}^2} +
\frac{\ce{[H3O^+]} K_\mathrm{a1}\ce{[H2A]}}{\ce{[H3O^+]}^2} +
\frac{K_\mathrm{a1}K_\mathrm{a2}\ce{[H2A]}}{\ce{[H3O^+]}^2} \\
&= \frac{[\ce{H2A}]}{\ce{[H3O^+]}^2} (\ce{[H3O^+]}^2 + K_\mathrm{a1}
\ce{[H3O^+]} + K_\mathrm{a1}K_\mathrm{a2}) \\
\color{blue}{\frac{[\ce{H2A}]}{\ce{[H3O^+]}^2}} &=
\frac{C_\ce{H2A}}{{\ce{[H3O^+]}^2 + K_\mathrm{a1} \ce{[H3O^+]} +
K_\mathrm{a1}K_\mathrm{a2}}} \tag{2}
\end{align}
and combining Eqs. (1) and (2) in blue
\begin{equation}
\frac{C_\mathrm{H_2A}K_\mathrm{a1}([\ce{H3O+}] + 2K_\mathrm{a2})}
{{\ce{[H3O^+]}^2 + K_\mathrm{a1} \ce{[H3O^+]} + K_\mathrm{a1}K_\mathrm{a2}}}
\end{equation}
which is the term that you are searching for. | {
"domain": "chemistry.stackexchange",
"id": 17457,
"tags": "acid-base, buffer"
} |
When can a natural satellite be in polar orbit around its primary? | Question: A circular/inclined to the equator/non-polar orbit is typical for natural satellites; apparently a consequence of the energy/velocity requirements. This is applicable to natural satellites around planets, planets around Sol ... and perhaps upwards on the scale.
When can a natural satellite possibly be in polar orbit around its primary?
Are there any such observations on record?
Answer:
When can a natural satellite possibly be in polar orbit around its primary?
Well, normally you don't expect to see this. As explained in Why are our planets in the solar system all on the same disc/plane/layer? and the questions linked to it (such as Accretion disk physics - Stellar formation), we expect that when clouds of gas and dust condense into stars, planets, and moons, everything will be confined to a disk with the same angular momentum (or at least the same direction of the vector).
If you do see a polar orbit, or more generally a highly inclined* orbit, then it is an indication that one of the following probably occurred:
The smaller body was captured after the main body formed,
The smaller body's orbit was altered by (usually gravitational) interactions with other bodies, or
The larger body's spin was changed after the formation of the system.
Are there any such observations on record?
Why, yes indeed! In our own Solar System, my favorite example (there are many others) is Neptune's moon Triton. It is in a retrograde orbit, and the general consensus is that it falls into category (1) above. It was orbiting the Sun in the outskirts of the Solar System in what was probably an eccentric orbit, it got too close to Neptune, and its close approach to Neptune completely by chance had it on the side of the planet such that its orbit ended up going opposite to Neptune's spin.
While the planets in our Solar System aren't particularly inclined with respect to one another or the Sun's spin, this is not true of some other planetary systems we have observed. The past few years have seen a small explosion of measurements of exoplanetary systems' "spin-orbit misalignment" using the Rossiter–McLaughlin effect. For a nice light summary of this effect, you can take a look at [1], especially the figures.
Since that paper was written, many more exoplanets have been discovered, and many more astronomers have taken spin-orbit alignment data. The general consensus is that most systems are aligned, but there are some notable retrograde and polar orbits. It remains to be seen what this implies for our models of planetary formation and migration. There may even be an explanation for these inclined systems in (uninteresting) observer bias or (interesting) stellar physics, as discussed in [2], which also gives a list of measured inclinations for systems with relatively precise data.
* Inclination is defined as the angle between the main body's spin axis and the orbiting body's orbital axis. $0^\circ$ means perfect alignment in an equatorial orbit, $90^\circ$ is for a polar orbit, and $180^\circ$ is for a retrograde orbit (you are back in the equatorial plane, but going around the "wrong" way).
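In code, this definition is just the angle between two axis vectors. A small self-contained sketch (the vectors here are made up for illustration):

```python
import math

def inclination_deg(spin_axis, orbit_axis):
    """Angle between the primary's spin axis and the satellite's orbital axis."""
    dot = sum(s * o for s, o in zip(spin_axis, orbit_axis))
    ns = math.sqrt(sum(s * s for s in spin_axis))
    no = math.sqrt(sum(o * o for o in orbit_axis))
    return math.degrees(math.acos(dot / (ns * no)))

print(inclination_deg((0, 0, 1), (0, 0, 1)))    # ~0   -> equatorial, prograde
print(inclination_deg((0, 0, 1), (1, 0, 0)))    # ~90  -> polar
print(inclination_deg((0, 0, 1), (0, 0, -1)))   # ~180 -> retrograde (like Triton)
```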
[1] Winn, 2006, "Exoplanets and the Rossiter-McLaughlin Effect."
[2] Winn et al., 2010, "Hot Stars with Hot Jupiters Have High Obliquities." | {
"domain": "physics.stackexchange",
"id": 9311,
"tags": "orbital-motion, satellites"
} |
Input image size in MATLAB | Question: Hi, I am trying to get the image size of an uploaded image in MATLAB. So far I have got:
read=imread(image.png)
ymax=size(read) (1);
xmax=size(read) (2);
However this gives me an error. The image size is 300px by 256px, and I want my ymax and xmax to automatically get the pixel values. How do I do this?
Answer: That's not a very MATLAB-ish way of doing it; try:
[ymax, xmax, nchan] = size(read);
Where nchan is the number of channels. Note that size returns the number of rows (your ymax) first. It can also be done by:
ymax = size(read, 1);
xmax = size(read, 2);
Or even:
[ymax, xmax, ~] = size(read);
"domain": "dsp.stackexchange",
"id": 1732,
"tags": "image-processing, matlab, image-segmentation"
} |
Dueling DQN - Advantage Stream, why use average and not the tanh? | Question: For Dueling DQN (page 5), why do the authors use an average for the Advantage stream, and not simply "activate" the Advantage stream (with a $\tanh$, for example)?
Would "activating" work in theory, and is it a similar idea to what the authors intended to achieve, or am I missing the point?
To remind, this is the equation, which Produces a Q-value for the taken action $a$ by combining the Value stream with the Advantage stream:
$$Q(s,a; \theta, \alpha, \beta) = V(s; \theta, \beta) + \biggl( A(s, a; \theta, \alpha) - \frac{1}{N}\sum_{a'}^{N}A(s, a'; \theta, \alpha) \biggr)$$
where
$s$ is the current state we are in
$a$ is the action we've decided to take
$a'$ ranges over all the actions we could have taken (including the action we've taken)
$\theta$ are parameters (weights) of the network before the "splitting" into two separate streams
$\alpha$ are the parameters of Advantage stream
$\beta$ are the parameters of the Value stream
Answer: The goal of these models is to estimate the value of each action choice. They chose the average function to estimate the update value because the average produces a single scalar influenced by every value.
The tanh function is not an appropriate choice here: it only takes a scalar as input, and thus would not weight each value.
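The mean-subtracted aggregation in the equation above is simple to write down. A minimal sketch in plain Python (the stream outputs are made-up numbers, not from a trained network):

```python
def dueling_q(value, advantages):
    # Q(s,a) = V(s) + (A(s,a) - mean over a' of A(s,a'))
    mean_adv = sum(advantages) / len(advantages)
    return [value + (adv - mean_adv) for adv in advantages]

V = 1.5                      # scalar output of the Value stream
A = [0.2, -0.1, 0.5, 0.0]    # Advantage stream: one value per action
Q = dueling_q(V, A)
print(Q)                     # ~[1.55, 1.25, 1.85, 1.35]
# Subtracting the mean forces the advantages to average to zero,
# so the mean of the Q-values equals V.
print(sum(Q) / len(Q))       # ~1.5
```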
The authors tried an appropriate nonlinear activation function for many values - softmax:
We also experimented with a softmax version of equation (8), but found it to deliver similar results to the simpler module of equation (9). | {
"domain": "datascience.stackexchange",
"id": 3387,
"tags": "reinforcement-learning"
} |
Knots and strength of a rope | Question: I read a few times that a knot can reduce the strength of a rope, but I can't understand why this happens. Can someone explain to me what happens to a rope tied with a generic knot and stretched? Is there a way to calculate the reduction of resistance from the form of the knot?
Answer: A knot always requires the rope in the knot to be curved. This increases the stress on the outside of the curved bit of rope, and decreases the stress in the inside. This increase in the stress in a knot means the rope breaks at a lower overall stress than a straight rope would. | {
"domain": "physics.stackexchange",
"id": 2854,
"tags": "newtonian-mechanics, everyday-life"
} |
Why do two flames kept side by side merge into a single flame? | Question: I have read about why a candle flame is in the shape of a tear drop in the presence of gravity. The top portion which is sharp is directly above the wick, where the soot particles from the wick rise up due to buoyancy and burn.
However, when I keep two flames side by side, with a space of about 3mm between the actual yellow boundaries of the two flames, I observe that the two flames are trying to get merged and the widths of the flames increase in order to mix with each other.
When I bring the flames a little closer to the point where the initial yellow boundaries would have just touched, there is only a single flame now, a much bigger tear drop and the highest "sharp" point being right in the middle of the two candles/lamps.
Could someone please explain this.
Answer: I did some experimenting (playing ? :-)).
The effect is "ill conditioned" and, while the result when the wicks are in close proximity is always a joined flame, the results when the separation is increased slightly are very 'time variable'. Using even quite thin candles (thicker than tapers - about 10mm od), flame proximity could not be brought close enough to cause flame joining when the bodies were vertical. I angled two candles (see photos) and mounted them on bases which allowed X (and Z) separations to be varied. I took about 90 photos. Best results for this purpose seemed to be given by using flash for the basic image and reducing shutter speed below what would usually be used, to get a degree of time averaging of the flame motion. At higher shutter speeds a flame that was flickering and that visually looked far taller than when the candles were widely separated was almost always far shorter in the photos than it appeared to the eye. At shutter speeds of 1/20s or longer the perceived and photographed flames appear similar.
I believe that the mechanisms that I described originally (material below) were generally correct but the impression given by experiment is that heat transfer between flames is the main factor in flame growth which escalates into flame combination at very low separations.
Apart from flame size there are no apparent gross indications of decreasing proximity. Flame colouration on the lower curve of each flame on the side facing the other candle tends to change, with more of the red outer layer that is consistently seen further up the flame, but this effect is variable with flame flickering from air currents or flame interactions.
At very low separations and prior to joining, flame sizes suddenly increase substantially and flames may become very unstable with pronounced interactions between flames, but also may coexist stably for extended periods. [Somewhere in Brazil the Lorenz butterfly is enjoying itself].
Shorter:
In the region between the two candles a number of effects combine to produce increased vertical gas flow (of both air and combustion products) and higher temperatures at a given height. This raises the height of the combustion zones compared to elsewhere in the flame and the results are "regenerative" and continue until a steady state is reached. Factors which cause the above temperature rise and increased flow include:
Blocking of incoming radial air due to the other candle,
two streams of approximately tangential air into the shared zone,
radiative heating of incoming air further away than elsewhere due to two energy sources,
greater convection in this zone due to increased energy input,
greater volatilised fuel feed (less so than for air feed)
Longer:
A candle flame is a high temperature chemical reaction between atmospheric Oxygen and gaseous 'fuel', consisting of volatilised solids - typically paraffin or bees 'wax' - that is liquefied by radiated energy from the reaction above it and drawn vertically by capillary action into the reaction zone to replace fuel that has reacted. The high temperature of reaction relative to the surrounding Oxygen source produces low-density, high-temperature combustion products which undergo classic "convective heat transfer" as the hotter, less dense combustion products rise vertically and are displaced by cold air which is input from all sides.
Consider a single ideal candle burning in isolation in still air:
The flame radiates energy down into the 'wax' below it causing it to melt (aka solid to liquid phase change). The liquid is drawn up the provided "wick" by capillary action until the closeness to the combustion zone raises its temperature to vaporisation point (aka liquid to gaseous phase change).
For an isolated ideal symmetrical candle with a vertical wick in otherwise still air, cool relatively dense air flows into the combustion zone equally from all sides and hotter, less dense combustion products become 'buoyant' due to density differences and as incoming air is entering from all radial directions, the hotter gases rise vertically above the centre of the candle. Fuel feed is vertical into the combustion zone, air feed is horizontal (& radial inwards) into the combustion zone, and the logical and only place left for the hot less dense combustion products is 'up'. Radiant energy escapes radially in a rotationally symmetrical pattern as light and heat (the two are the same in nature, being differentiated only by wavelength). The radiant energy output is not symmetrical when viewed in cross section due to the flow of reactants, changes in temperature and varying opacity of the flame zone.
Because the flame zone is "rotationally" symmetrical, it appears the same from any radial direction, but to a radially positioned observer (the safest place to be when looking at a candle) it appears wider than thick (ie wide and flattened in depth) as the width of the combustion zone is easily seen, but the depth is hidden by the 'flame'.
A non ideal candle may not be truly symmetrical and the wick may be at an angle and fuel flow up the wick may not enter the zone symmetrically relative to the candle body, but the above effects are observed to occur 'well enough' in everyday candles.
Now consider two identical burning candles placed a distance "d" apart.
When d is large the two candles burn independently and appear as before.
As d is diminished the air between the two candles starts to be affected by both flames. Instead of drawing air from infinity, the centre air must be fed from "either side" of the centre line, as "there is a candle in the way out towards infinity". Also the air between the candles starts to be preheated by two sources of radiation rather than one so is hotter than at other points the same distance from a candle centre. As d is decreased the increase in air temperature near the common zone becomes increasingly hotter than elsewhere so air starts to rise convectively sooner prior to meeting the main combustion zone, so that combustion happens higher up in the inter-candle region. This effect can be seen quantitatively in this crop from the main image below.
As d is further reduced to, say, under a body diameter, there is no air path from the direction of the other candle and all air along the joining line must enter approximately tangentially. The incoming air is substantially preheated both by radiation and by gases carried into this hotter faster rising zone from further around the candle and as d further decreases any point in the inter candle region becomes essentially indistinguishable from a point at somewhat less distance from the centre elsewhere on the circumference. At low enough separations, two unavoidably become one.
This image is non ideal but shows in the inter candle zone:
The lack of air path between the two candles (there's another candle in the way!),
The transfer of radiative energy from two sources (place your finger that close and you know what would happen.)
Higher level of equivalent combustion zones due to greater air flow leading to ...
Image is from Wikipedia (unrelated article).
| {
"domain": "physics.stackexchange",
"id": 52155,
"tags": "thermodynamics, combustion"
} |
Why does the pressure change on uniformly mixing two liquids? | Question:
The two cylinders are connected; the upper cylinder has a cross-section of A and the lower one has a cross-section of 2A (I've taken the cross-sections to be A and 2A as they are easier to work with than the radii r and 2r).
Initially the liquid present in the upper cylinder has a density of $\rho$ and the lower one has a density of $2\rho$. The pressure at the bottom in this scenario can be calculated as:
$$P_1=\rho\cdot g\cdot 2h+2\rho\cdot g \cdot h=4\rho\cdot g\cdot h$$
Now, if we were to mix the two liquids present in the cylinders to create a new solution with a uniform density of $\frac32\rho$ (The volume of both the cylinders is the same so we can simply take the mean of the two densities).
The pressure in this case can be calculated as:
$$P_2=\frac32\rho\cdot g \cdot 3h= \frac92\rho\cdot g \cdot h$$
This was one approach to calculating the pressures; another method is to simply calculate $$\frac{F}{A}$$
Since the entire liquid system is in equilibrium and the only external force balancing the gravitational force is the normal force applied at the bottom surface, the pressure will be:
$$P=\frac{Mg}{A}$$
$$P=\frac{\rho\cdot 2h \cdot A\cdot g + 2\rho\cdot 2A \cdot h \cdot g}{2\cdot A }= 3\rho \cdot g \cdot h$$
This matches neither of the cases, but I can't see why it is wrong.
Answer: First consider the case of a cylinder of uniform cross section $A$, in which liquids of density $\rho_1,\rho_2,$ with corresponding volumes $V_1,V_2,$ are present. Using the hydrostatic equation, the pressure at the bottom is $P_1=\rho_1gh_1+\rho_2gh_2=\rho_1g(V_1/A)+\rho_2g(V_2/A)=(\rho_1V_1+\rho_2V_2)g/A$. If after mixing the total volume doesn't change then mass conservation says that the final density should be $\rho_f=(\rho_1V_1+\rho_2V_2)/(V_1+V_2)$. The corresponding pressure at the bottom will be: $\rho_fgh_f=\rho_fg(h_1+h_2)=\rho_fg(V_1+V_2)/A=(\rho_1V_1+\rho_2V_2)g/A,$ the same as before. Therefore, when the cylinder has uniform cross-section, mixing doesn't change the pressure at the bottom, which is logical since the same weight of fluid is supported by the bottom in both cases.
In your problem the cross-section varies. Let $\rho_1,V_1,A_1$ and $\rho_2,V_2,A_2$ be the density, volume and cross-section of the two fluids. The interface between the two fluids lies exactly where the cross-section changes. Using the hydrostatic equation, the pressure is: $P_1=\rho_1gh_1+\rho_2gh_2=\rho_1g(V_1/A_1)+\rho_2g(V_2/A_2)$, which can't be simplified further. After mixing the density becomes $\rho_f=(\rho_1V_1+\rho_2V_2)/(V_1+V_2)$ as before. But now the pressure at the bottom changes to $\rho_fgh_f\neq P_1,$ as you can verify by substitution.
Since the same weight of fluid is being supported before and after mixing, is the change in pressure a contradiction? No, it isn't.
The result derived above using the hydrostatic equation is correct. If you want to derive the pressure on the bottom using a force balance, then you must recognise that when the cross-section of the container varies, the weight of the fluid is not the only force acting on the bottom. There is also a downward reaction force due to the horizontal wall where the cross-section changes (denoted by black arrows in the figure below). This reaction force is equal to the product of the hydrostatic pressure at the horizontal wall (equal to $\rho_1gh_1$ before mixing and $\rho_fgh_1$ after mixing) and its area. When this is accounted for you get the same answer as above.
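Both routes can be checked numerically with the numbers from the question. A quick sketch, working in units where $\rho = g = h = A = 1$:

```python
rho = g = h = A = 1.0

# Before mixing: density rho over height 2h (area A) on top of 2*rho over h (area 2A).
P_hydro = rho * g * 2 * h + 2 * rho * g * h            # hydrostatic route: 4*rho*g*h
weight = (rho * 2 * h * A + 2 * rho * h * 2 * A) * g   # total weight of the fluid
shoulder = rho * g * 2 * h * (2 * A - A)               # reaction of the horizontal wall
P_force = (weight + shoulder) / (2 * A)                # force-balance route
print(P_hydro, P_force)    # 4.0 4.0

# After mixing: uniform density (3/2)*rho over total height 3h.
rho_f = 1.5 * rho
P_hydro2 = rho_f * g * 3 * h                           # 4.5*rho*g*h
shoulder2 = rho_f * g * 2 * h * (2 * A - A)            # wall pressure has changed
P_force2 = (weight + shoulder2) / (2 * A)
print(P_hydro2, P_force2)  # 4.5 4.5
```

The change in the wall reaction (from 2 to 3 in these units) accounts exactly for the change in bottom pressure.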
In short, the change in pressure at the bottom subsequent to mixing is entirely due to a change in the hydrostatic pressure at the horizontal wall. | {
"domain": "physics.stackexchange",
"id": 54638,
"tags": "pressure, density, fluid-statics"
} |
Differentiating relativistic momentum | Question: I am struggling to differentiate relativistic momentum formula.
Considering special relativity,
$$ \vec{F}=\frac{d\vec{P}}{dt}=\frac{d}{dt}\frac{m\vec{v}}{\sqrt{1-v^2/c^2}}$$
which I understand.
The textbook proceeds, "when the net force and velocity are both along the x-axis,"
$$ F=\frac{m}{(1-v^2/c^2)^{3/2}}a $$
This is where I am stuck.
I am not sure how to compute the derivative $$F=\frac{d}{dt}\frac{m\vec{v}}{\sqrt{1-v^2/c^2}}$$ to get $$ F=\frac{m}{(1-v^2/c^2)^{3/2}}a $$
Answer: If net force and velocity are both along the $x$-axis, $F_y,F_z,v_y,v_z$ are 0 and there remains
$$
F = F_x = \partial_t \frac{m v_x}{\sqrt{1 - v^2/c^2}}~,
$$
where also $v = v_x$. This means
$$
F = \partial_t \frac{m v}{\sqrt{1 - v^2/c^2}} = \partial_t \frac{m}{\sqrt{\frac 1 {v^2} - \frac 1 {c^2}}} \overset{\text{chain rule}}= - \frac{m}{2\sqrt{\frac 1 {v^2} - \frac 1 {c^2}}^3} \partial_t \left( \frac{1}{v^2} - \frac{1}{c^2} \right) \overset{\text{again chain rule}}= - \frac{m}{2\sqrt{\frac 1 {v^2} - \frac 1 {c^2}}^3} \left( \frac{-2}{v^3} \right) \underbrace{\partial_t v}_{=a}~.
$$
Multiplying the $v^3$ back into the root and canceling out the signs and the twos yields the result from the textbook. The trick here is to initially multiply both the denominator and the numerator by $1/v$, which leaves the fraction unchanged but reduces the number of occurrences of $v$. | {
"domain": "physics.stackexchange",
"id": 81958,
"tags": "special-relativity, momentum"
} |
Navigation script and template | Question: Overview:
I'm using the following page template and scripts for navigation. I use a single page that loads the menus and context content, and then use Ajax or a GET variable to load the main div's content, by passing the $p variable to a PHP switch. Browser history is handled with the pushState function.
Main page:
<!DOCTYPE html>
<html>
<head>
</head>
<body>
<div id="main-div">
<?php
if (isset($_GET['p'])) {
$p = $_GET['p'];
} else {
$p = "main";
}
require_once("assets/switch.php");
?>
</div>
<script>
// Define state on first load
function defineState() {
"use strict";
window.history.replaceState("<?php echo $p ?>", "", "");
}
window.onload = defineState;
function navegacion(dashboard, history) {
"use strict";
var urlPath = dashboard;
var xmlhttp;
var ActiveXObject;
if (window.XMLHttpRequest) {
// code for IE7+, Firefox, Chrome, Opera, Safari
xmlhttp = new XMLHttpRequest();
} else {
// code for IE6, IE5
xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
}
xmlhttp.onreadystatechange = function() {
if (xmlhttp.readyState === 4 && xmlhttp.status === 200) {
window.scrollTo(0, 0);
//document.title = "TITLE";
document.getElementById("main-div").innerHTML = xmlhttp.responseText;
if(history !== 1) {
window.history.pushState(dashboard, "", "index.php?p="+urlPath);
}
}
};
xmlhttp.open("POST","assets/ajaxapi.php", true);
xmlhttp.setRequestHeader("Content-type","application/x-www-form-urlencoded");
xmlhttp.send("dashboard="+dashboard);
}
// Allows for back and forward browser buttons
window.onpopstate = function(e){
"use strict";
if(e.state){
navegacion(e.state, 1);
}
};
</script>
</body>
</html>
ajaxapi.php:
<?php
if (isset($_POST['dashboard'])) {
$p = $_POST['dashboard'];
} else {
$p = NULL;
}
require_once("./switch.php");
Could there be any performance or error issues with this code, or how could this script be enhanced?
Thanks.
Answer: Security: XSS
You are open to reflected XSS via this payload:
p=foo","","");alert(1)//
Note that the injection takes place into a JavaScript context, which is worse than injections into HTML context because browser filters have a hard time catching it (if you try the payload with eg chrome, it will execute, while "</script><script>alert(1)</script> would not).
Standard HTML-encoding would solve your problem, but OWASP recommends escaping to prevent XSS in a JS context.
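In PHP, one common fix is to pass the value through json_encode() (with the JSON_HEX_TAG/JSON_HEX_QUOT family of flags) before splicing it into the script block. The sketch below illustrates the principle with Python's standard json module (illustrative only, not the original PHP code); note that for inline `<script>` blocks you must additionally guard against `</script>` sequences in the data:

```python
import json

# The payload from above, which breaks out of a naive string interpolation:
payload = 'foo","","");alert(1)//'

# Naive concatenation: the quote inside the payload terminates the JS string
# literal early, so alert(1) becomes live code.
naive = 'window.history.replaceState("%s", "", "");' % payload

# Serializing the value escapes the quotes, so the payload stays inert data.
safe = 'window.history.replaceState(%s, "", "");' % json.dumps(payload)

print(naive)
print(safe)
```

The escaped version keeps the attacker-controlled text inside a single string literal instead of letting it close the call and start a new statement.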
Misc
p isn't a great variable name. If it stands for page, just use $page instead (also for the GET index).
As your other variable and function names are in english, navegacion should be as well. | {
"domain": "codereview.stackexchange",
"id": 18730,
"tags": "javascript, php, ajax"
} |
Is there sufficient evidence to confirm additional Planet in solar system | Question: I have watched this scishow episode where Hank says that astronomers believe there is another planet in our solar system.
Link
He says that it's because of how asteroids and comets are being pushed towards the Sun (and us) from similar angles. Is this true? Do astronomers really have data that confirms the 'new ninth' planet?
EDIT: I did read the other related answers and articles. The way I understand it, this theory is based on the tilt of 6 objects, which could be explained by something massive enough pulling on them. My question is: do astronomers believe this is sufficient to confirm that there is a planet?
Answer: This question already has multiple answers on this site...
However: yes, astronomers DO have enough data to speculate about a possible Ninth Planet.
The orbits of six KBOs (Kuiper belt objects) are correlated, and a possible ninth planet could be the reason for those peculiar orbits. See the image below for the computed results of the possible orbit of the ninth planet:
However, since the planet has not yet been SEEN or DETECTED, its existence is only conjectured. We have nothing to prove it really exists. But if it exists, we know where it is :p | {
"domain": "astronomy.stackexchange",
"id": 1354,
"tags": "solar-system, the-sun, planet, comets, data-analysis"
} |
one-to-many matching in bipartite graphs? | Question: Consider having two sets $L$ (left) and $R$ (right).
$R$ nodes have a capacity limit.
Each edge $e$ has a cost $w(e)$.
I want to map each of the $L$ vertices to one node from $R$ (one-to-many matching), with minimum total edge-costs.
Each vertex in $L$ must be mapped to one vertex in $R$ (but each node in $R$ can be assigned to multiple $L$-nodes).
Examples: Consider the capacity of $R$ nodes is $2$.
1) This is NOT correct, since one node from $L$ has not assigned to a node in $R$.
2) This is NOT correct, since the capacity of a node in $R$ is violated.
3) This IS correct. All $L$ nodes are assigned to a node in $R$, and the capacity of $R$ nodes is not violated.
Any idea how can I solve this?
Answer: This problem is called the B-matching problem. Where you are given a function $b:V \rightarrow \mathbb{N}$ that assign a capacity to each vertex and a function $u:E \mapsto \mathbb{N}$ that assigns a weight to each edge.
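To make the objective concrete before the reduction, here is a tiny brute-force reference implementation (exponential in $|L|$, illustration only; the reduction described next is the polynomial route). Vertex names and costs are made up for the example:

```python
from itertools import product

def min_cost_b_matching(L, R, cap, cost):
    """Brute force over all assignments of L-nodes to R-nodes.
    Only viable for tiny instances, but it pins down the objective exactly."""
    best = None
    for choice in product(R, repeat=len(L)):
        if any(choice.count(v) > cap[v] for v in R):
            continue  # capacity of some right-hand vertex violated
        if any((u, v) not in cost for u, v in zip(L, choice)):
            continue  # a required edge does not exist
        c = sum(cost[(u, v)] for u, v in zip(L, choice))
        if best is None or c < best[0]:
            best = (c, dict(zip(L, choice)))
    return best

# toy instance: capacity 2 on every right-hand vertex
cap = {"a": 2, "b": 2}
cost = {(1, "a"): 1, (1, "b"): 5,
        (2, "a"): 1, (2, "b"): 5,
        (3, "a"): 1, (3, "b"): 2}
print(min_cost_b_matching([1, 2, 3], ["a", "b"], cap, cost))
# → (4, {1: 'a', 2: 'a', 3: 'b'})
```

Every $L$-vertex gets exactly one $R$-vertex, no capacity is exceeded, and the total cost is minimized — exactly the constraints from the question.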
The problem is solvable in polynomial time. An easy solution is to reduce the problem to minimum weight maximum matching. Create $b(v)$ copies of each vertex $v$ and connect each of them to all neighbors of the original vertex v. We get a polynomial time reduction, since for $b(v) \geq \mathrm{deg}(v)$ we can set $b(v) := \mathrm{deg}(v)$ since we can not match $v$ with more vertices than its neighborhood anyway and that each of its neighbors is matched with at most one vertex in your special case. | {
"domain": "cs.stackexchange",
"id": 15217,
"tags": "algorithms, graphs, matching, string-matching"
} |
Recurrence relation (not solvable by the master theorem) | Question: Consider the following recursion:
$\begin{cases}
T(n) = 2T(\frac{n}{2}) + \frac{n}{\log n} &n > 1 \\
O(1) &n = 1
\end{cases}$.
The master theorem doesn't work, as the exponent of $\log n$ is negative. So I tried unfolding the relation and finally got the equation:
$T(n) = n[1 + \frac{1}{\log(\frac{n}{2})} + \frac{1}{\log(\frac{n}{4})} + ... + \frac{1}{\log(2)}]$.
I do not know how to simplify (inequalities to use???) from here. A trivial method would be to assume that all reciprocals of the log terms are $< \frac{1}{\log(2)}$, and since there are $\log n$ terms, the sum of all the reciprocal-log terms is $< \frac{\log n }{\log(2)} = \log_2 n$, which gives $T(n) = O(n \log n)$. However, this is a very loose bound, since by the master theorem the recurrence $T(n) = 2T(\frac{n}{2}) + n$ — with a strictly larger additive term — already gives $O(n \log n)$. Can someone find a tighter correct upper bound?
Answer: Wikipedia has a slight extension of the master theorem which covers your case: case 2b here.
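With $a=b=2$ and $f(n)=n/\log n$, that case yields $\Theta(n\log\log n)$. A quick numerical sanity check (assuming $T(1)=1$ and $n$ a power of two — an illustration, not a proof):

```python
import math

def T(n):
    # exact value of the recurrence for n a power of two, with T(1) = 1
    if n <= 1:
        return 1.0
    return 2 * T(n // 2) + n / math.log2(n)

# If T(n) = Theta(n log log n), the ratio below should flatten toward a constant.
for k in (10, 20, 40, 80):
    n = 2 ** k
    print(k, T(n) / (n * math.log(math.log(n))))
```

The printed ratios decrease slowly and level off as $k$ grows, consistent with the $n\log\log n$ bound (and inconsistent with $n\log n$, for which the ratio would diverge).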
For the recurrence $T(n)=aT(n/b)+f(n)$ where $f(n)=\Theta(n^{\log_b a}/\log n)$, it gives $T(n)=\Theta(n^{\log_b a}\log\log n)$. | {
"domain": "cs.stackexchange",
"id": 14606,
"tags": "time-complexity"
} |
Could we extract energy/heat from the mantle? | Question: The question is: if lava in the Earth's mantle is so hot, could we extract this energy to produce heat?
We would introduce some tubes that can withstand temperatures of 10,000 K or more; then, by a difference of pressure, the heat or lava would come up, and we could use this heat to produce energy.
Answer: Well, this would probably be very inefficient. Even if the tubes could sustain temperatures of 10,000 K, no one could touch such a tube, and it would get extremely hot. It would also take time for all that heat to reach the surface. Eventually the tubes would get so hot that they might undergo some chemical reaction with the mantle, and a tube that could sustain those high temperatures before would not be able to after reacting with the lava. Furthermore, the tube would experience wear and tear faster than ever. A much more efficient way is to send water down there: it evaporates, and the steam would not carry so much energy as to burn the surroundings, but enough energy to do important things like power a city. | {
"domain": "physics.stackexchange",
"id": 67427,
"tags": "thermodynamics, energy, geophysics"
} |
What does rotational balance mean? | Question: For example, in many badminton rackets, it says that the balance point is rotational. How does this actually work?
Answer: I am not sure but here is a guess. It may mean the racket is dynamically as well as statically balanced about the balance point. The racket is balanced statically in that the racket is stationary with a fulcrum at the balance point. The racket is dynamically balanced in that rotation (spinning the racket) about a principal axis through the balance point is steady without requiring a restraining torque. | {
"domain": "physics.stackexchange",
"id": 83999,
"tags": "newtonian-mechanics, rotational-dynamics, reference-frames, terminology"
} |
Why we assign a rotation transformation and not any other when we derive Lorentz factor | Question: I can't quite understand why, when we derive the Lorentz factor, we should assume that the coordinate transformation is a rotation.
In the book "Relativity demystified" they say that: "In some sense we would like to think of this transformation as a rotation". And they come to the next equations:
$$x'=x\cos(\phi)-y\sin(\phi)$$
$$y'=-x\sin(\phi)+y\cos(\phi)$$
So, to satisfy these equations, they choose the following values for the coefficients $A,B,C,D$ of the linear transformation: $$A=D=\cosh(\phi)$$ $$B=C=-\sinh(\phi)$$
I should note that the conditions constraining these coefficients were the following: $$D^2-B^2=1$$ $$A^2-C^2=1$$ $$CD=AB$$
It is understandable why they chose hyperbolic functions (because $\cosh^2(\phi)-\sinh^2(\phi)=1$). But the choice of signs for these functions comes only from the assumption that our transformation is a rotation; the signs of the coefficients are based only on this.
So I don't understand why the transformation should be a rotation, and this is the important part. Could you please show me a simple example of why this is true and why we should choose a rotational transformation?
Answer: We assume that the two sets of coordinates are related by transformations that preserve the metric, namely all the matrices $\Lambda$ such that $g' = \Lambda^{T}g\Lambda = g$. This follows in turn from the assumptions that the speed of light does not depend on the reference frame of the observer.
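A quick numerical illustration in the $2\times 2$ case (a sketch in units where $c=1$): the book's hyperbolic rotation satisfies $\Lambda^{T}g\Lambda = g$, while an ordinary Euclidean rotation does not.

```python
import math

def boost(phi):
    # the transformation from the book: A = D = cosh(phi), B = C = -sinh(phi)
    c, s = math.cosh(phi), math.sinh(phi)
    return [[c, -s], [-s, c]]

def preserves_minkowski_metric(L, tol=1e-9):
    g = [[1.0, 0.0], [0.0, -1.0]]
    # compute M = L^T g L entry by entry and compare with g
    M = [[sum(L[k][i] * g[k][l] * L[l][j]
              for k in range(2) for l in range(2))
          for j in range(2)] for i in range(2)]
    return all(abs(M[i][j] - g[i][j]) < tol for i in range(2) for j in range(2))

print(preserves_minkowski_metric(boost(0.7)))  # hyperbolic rotation: True
# an ordinary Euclidean rotation does NOT preserve the Minkowski metric:
c, s = math.cos(0.7), math.sin(0.7)
print(preserves_minkowski_metric([[c, -s], [s, c]]))  # False
```

This is exactly the condition $\Lambda^{T}g\Lambda=g$ above, checked numerically: the hyperbolic sign pattern is what makes $\cosh^2\phi-\sinh^2\phi=1$ reproduce the $+,-$ signature of the metric.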
Hyperbolic rotations are a special case of the above. | {
"domain": "physics.stackexchange",
"id": 32148,
"tags": "special-relativity"
} |
Input data from file, calculating average by a function | Question: I want to thank you guys firstly for your previous comments and feedback. Really appreciate it. I am learning C with school at the moment, but I plan to be "fluent" in C at the end of august of this year.
For school I have to write the following program:
Open up a file. The file is a *.txt one consisting of 32 numbers. The first number corresponds to the number of tests distributed among the students. The second number says how many have returned the test. The other 30 numbers can be separated into a 5-by-6 table: there were 6 questions, each with 5 possible answers, and the numbers represent how many students chose each answer. The file is posted at the end of this post.
Now I have to open this file and making a display like this:
number of passed tests: 20
Number of returned tests: 19
Response is 95%.
A B C D E grade passed
question 1 2 2 2 2 2 2.4 yes
question 2 2 2 2 2 2 2.4 yes
question 3 2 2 2 2 2 2.4 yes
question 4 2 2 2 2 2 2.4 yes
question 5 2 2 2 2 2 2.4 yes
question 6 2 2 2 2 2 2.4 yes
Also note that the "comment" has three options: fail if grade < 3.0, warning if grade < 3.5, and pass if 3.5 or higher.
The code I Have written is like this :
#include <stdio.h>
#define RIJ 6
#define KOLOM 5
int invoer[RIJ][KOLOM] , rij, kolom;
float ingeleverd ;
float functie(int* i ) ;
int main(void)
{
float uitgereikt=0 , percentage ;
char bestandpad[100]= "" ;
FILE *inputbestand;
printf("Dit programma verwerkt de data uit verschillende enquetes.\n");
printf("\n");
printf("Voer de locatie van het bestand in.\n");
scanf("%s", &bestandpad);
inputbestand=fopen(bestandpad, "r" );
if (inputbestand == NULL)
{ printf("Bestand niet gevonden.");
return 0;
}
fscanf(inputbestand, "%f", &uitgereikt);
fscanf(inputbestand, "%f", &ingeleverd);
percentage = (ingeleverd/uitgereikt) *100 ;
//------- overige data inlezen in een array van 5 bij 6.
for (rij=0; rij<RIJ; rij++)
{
for (kolom=0; kolom<KOLOM ; kolom++)
{
fscanf(inputbestand, "%d", &invoer[rij][kolom]);
}
}
fclose(inputbestand);
printf("Aantal uitgereikte evaluaties: %0.f\n" , uitgereikt);
printf("Aantal ingeleverde evaluaties: %0.f\n" , ingeleverd);
printf("Response is: %.1f %% \n\n" , percentage);
printf(" \t A\t B \t C \t D \t E \t cijfer \t opmerking\n\n");
for (rij=0; rij<RIJ; rij++)
{
printf("vraag %d " ,rij+1);
for (kolom=0; kolom<KOLOM ; kolom++)
{
printf("%d ", invoer[rij][kolom]);
if (kolom==KOLOM-1) {
printf("%c \n", functie(invoer[rij]));
}
}
}
}
float functie(int* i )
{
int a ;
float gem, totaal;
for (a=0; a<5 ; a++)
{totaal= totaal+ ( i[a] * (a+1) ) ;}
gem = totaal/ingeleverd ;
if (gem<3.0) {printf("%.1f \t \t Voldoet niet\n", gem);}
else if (gem<3.5) {printf("%.1f \t \t opmerking\n", gem);}
else if (gem>3.5) {printf("%.1f \t \t Voldoet\n", gem);}
// return gem ;
}
It is working completely. I have one minor bug though: the results are not aligned properly. What do you guys think? Apart from the bad indenting... I am also very sorry it is in Dutch, but I think you will understand alright.
the inputfile is like this:
20
19
1
2
4
5
7
0
1
8
7
3
8
5
4
2
0
2
2
4
11
0
2
3
3
5
6
0
0
4
12
3
Answer: I would think about breaking this up into more functions that do specific things. For example, I would break out the reading of the header information (the number of tests passed and the number returned) into one function, and the reading of the actual test data into another function:
void readHeader (FILE* inputbestand, float* uitgereikt, float* ingeleverd)
{
fscanf(inputbestand, "%f", uitgereikt);
fscanf(inputbestand, "%f", ingeleverd);
}
This brings up a question I have for you - why is ingeleverd a global variable? In fact, why do you have any global variables? They all look like they belong inside the main() function to me.
In any event, reading the table of data would then look something like this:
void readTable (FILE* inputbestand, int invoer[RIJ][KOLOM])
{
//------- overige data inlezen in een array van 5 bij 6.
int rij, kolom;
for (rij = 0; rij < RIJ; rij++)
{
for (kolom = 0; kolom < KOLOM; kolom++)
{
fscanf(inputbestand, "%d", &invoer[rij][kolom]);
}
}
}
For what it's worth, I like to put spaces around operators (like comparison operators - <, >, etc.), as I find it much easier to read. That's just a style decision and might not suit you, but that's what I prefer.
I would also move the output into various functions. The introductory text could go into a function like this:
#define RESULT_OK 1 // <- These can go at the top of the file
#define RESULT_ERROR 0 // <- with the others
int printIntro (FILE** inputbestand, char bestandpad[100])
{
int result = RESULT_OK;
printf("Dit programma verwerkt de data uit verschillende enquetes.\n");
printf("\n");
printf("Voer de locatie van het bestand in.\n");
scanf("%s", bestandpad);
*inputbestand = fopen(bestandpad, "r" );
if (*inputbestand == NULL)
{
printf("Bestand niet gevonden.");
result = RESULT_ERROR;
}
else
{
result = RESULT_OK;
}
return result;
}
Although, perhaps a better function name would be getFileName() (or would it be getBestandpad()? I don't know Dutch for "get").
Since you were originally returning 0 from main() when the file could not be read, you need to return any errors from the above function and use that in main() to decide what to return. Which reminds me - you aren't returning a value on success in your main() function. So let's look at that next. After breaking out the various functions, it will look something like this:
int main (void)
{
FILE* inputbestand;
int invoer[RIJ][KOLOM];
char bestandpad[100];
int result = printIntro(&inputbestand, bestandpad);
if (result == RESULT_OK)
{
float uitgereikt;
float ingeleverd;
readHeader(inputbestand, &uitgereikt, &ingeleverd);
readTable(inputbestand, invoer);
fclose(inputbestand);
printStatistics(uitgereikt, ingeleverd);
printResults(invoer, ingeleverd);
}
return result;
}
The function printStatistics() should print out the information about how many tests were passed and how many were returned:
void printStatistics (const float uitgereikt, const float ingeleverd)
{
printf("Aantal uitgereikte evaluaties: %0.f\n" , uitgereikt);
printf("Aantal ingeleverde evaluaties: %0.f\n" , ingeleverd);
float percentage = (ingeleverd/uitgereikt) *100 ;
printf("Response is: %.1f %% \n\n" , percentage);
}
Likewise, printResults() should print out the resulting table:
void printResults(int invoer[RIJ][KOLOM], const float ingeleverd)
{
printf(" \t A\t B \t C \t D \t E \t cijfer \t opmerking\n\n");
int rij, kolom;
for (rij=0; rij<RIJ; rij++)
{
printf("vraag %d " ,rij+1);
for (kolom=0; kolom<KOLOM ; kolom++)
{
printf("%d ", invoer[rij][kolom]);
if (kolom==KOLOM-1)
{
printf("%c \n", functie(invoer[rij], ingeleverd));
}
}
}
}
It's not clear to me what the functie() function is doing. It looks like it's supposed to calculate an average, but it doesn't look to me like it's actually doing that: totaal is read before it is ever given a value (undefined behavior in C), and the return gem; line is commented out, so the function falls off the end without returning anything. Also, it's declared to return a float, but the result is being used as a character in the printf() statement where it's called. That's almost certainly incorrect. And you've listed 3.0 and 3.5 as the breakpoints between fail, warning, and pass, but in the example, all the grades are 2.4, yet marked as "pass". So I'd double-check to make sure you understand the requirement there.
In addition to the above, you should know that there are various ways that reading from the file could fail. You could have truncated data, or there might be the wrong kind of data in the file. I'm not sure if your teacher wants you to worry about that yet, or not. If you want to, you can check the return value of fscanf() and make sure that it read the argument you expected. | {
"domain": "codereview.stackexchange",
"id": 13772,
"tags": "beginner, c, file, formatting, statistics"
} |
Can polyatomic ions (CO₃, PO₄, SO₄, NO₃) be considered conjugated systems? | Question: From my perspective these resonance structures allow these specific polyatomic ions to act as donor-acceptor molecules. Many donor-acceptor molecules also tend to be conjugated systems because they have chains of alternating conjugated π orbitals.
So does this imply that these ions (due to their resonance structures) can act as though they had a conjugated system or am I making too big of an assumption?
Curious on others perspectives/if I'm totally of my rocker.
Answer: "Conjugated" implies a 1,3-shift to move an electron or hole. It is a remnant of LCAO modeling that is obviously wrong but fantastically useful short of the Woodward-Hoffmann rules. I'm not sure inorganikers would like the name as such. MO modeling is accurate but unwieldy. "Delocalized" or "resonance hybrid" is good, certainly for inorganic systems that may have (virtual) d-orbital participation.
Consider nitrate. Inorganikers would say $\ce{N^{5+}}$, $\ce{^{-}O-N(=O)2}$, with five bonds to the nitrogen. The negative charge 1,3-shifts around all three oxygens. Organikers would see it as $\ce{N^{3+}}$, $\ce{[^{-}O-]_2N^{+}=O}$, with four bonds to the nitrogen. 1,3 shifts, etc. Is nitrate ever a bidentate ligand? YES! But it is just another resonance structure.
Inorg. Chem. 35(24) 6964 (1996)
http://pubs.acs.org/doi/abs/10.1021/ic960587b | {
"domain": "chemistry.stackexchange",
"id": 858,
"tags": "inorganic-chemistry, resonance, ionic-compounds"
} |
How do I send the results of a convolutional layer and non-deep-learning features into a dense layer in Keras? | Question:
I understand that I can set up a convolutional network for 1-dimensional sequence/time series.
model = Sequential()
model.add(Conv1D())
model.add(GlobalMaxPooling1D())
model.add(Dense())
Let's say I'd like to use "regular" (non-deep-learning) features too in my model, how should I best combine the two at a dense layer?
Concretely, let's assume that, for each row of my dataset, there are 1k points in the time series, along with 100 "regular" features.
To generalize my question, let's say there are now two kinds of time series plus regular features for each row in my dataset. If I would like to have a separate convolutional block for each time series, how do I combine all three?
Answer: This can be done with the Keras functional API.
In this example, the "merge_1" layer gets input from:
Output of LSTM Layer (lstm_1)
aux_input layer
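The same pattern, adapted to this question's Conv1D setup, can be sketched as follows (a hedged sketch: layer sizes, names, and the single Dense output are my assumptions, not from the original post):

```python
# Sketch of the Keras functional-API pattern the answer refers to.
from tensorflow import keras
from tensorflow.keras import layers

def build_model(series_len=1000, n_features=100):
    # convolutional branch for the time series
    series_in = keras.Input(shape=(series_len, 1), name="series")
    x = layers.Conv1D(32, 7, activation="relu")(series_in)
    x = layers.GlobalMaxPooling1D()(x)

    # the plain (non-deep-learning) features enter through a second input
    aux_in = keras.Input(shape=(n_features,), name="aux_features")

    # join both branches before the dense head; a third branch for a second
    # time series would be concatenated here in exactly the same way
    merged = layers.concatenate([x, aux_in])
    out = layers.Dense(1)(merged)
    return keras.Model(inputs=[series_in, aux_in], outputs=out)

model = build_model()
```

Training then takes a list (or dict, keyed by input name) of arrays, one per input.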
More info : https://keras.io/getting-started/functional-api-guide/ | {
"domain": "datascience.stackexchange",
"id": 4634,
"tags": "deep-learning, keras, convolution"
} |
The viscosity of Natural gas | Question: At low pressures, gas viscosity increases as temperature increases, which is mainly due to an increase in intermolecular collisions caused by an increase in molecular friction. However, at high pressures, gas viscosity decreases as temperature increases. So at low pressures the gas is free to move and the distance between molecules is large, but at high pressures it is vice versa. So in the first case intermolecular collisions create friction, which is why they hinder the mobility of the gas. But the same collisions in the second case can make molecules move faster because the molecules are packed, which leads to an increase in kinetic energy. Is my understanding consistent with this statement? Or is there another reason why viscosity changes with T and P?
Answer: Your understanding is partially relevant to the statement, but it needs some clarification. The statement is about the effect of pressure on the mobility of gas molecules, not the effect of intermolecular collisions. Intermolecular collisions are a factor that affects the mobility of gas molecules, but they are not the only factor. The statement also mentions kinetic energy, which is related to the speed and direction of gas molecules.
At low pressures, gas molecules are free to move around and collide with each other frequently. This means that they have a high kinetic energy and a high probability of changing their direction and speed with each collision. This also means that they have a low mean free path, which is the average distance that a molecule travels between collisions. A low mean free path means that gas molecules can diffuse quickly through a space.
At high pressures, gas molecules are forced to move closer together (making it simpler for them to pass one another) and collide with each other less frequently. This means that they have a low kinetic energy and a low probability of changing their direction and speed with each collision. This also means that they have a high mean free path, which is the average distance that a molecule travels between collisions. A high mean free path means that gas molecules can diffuse slowly through a space.
Therefore, your understanding is correct in saying that intermolecular collisions create friction and hinder the mobility of gas at low pressures, but it is incorrect in saying that they make molecules move faster at high pressures. In fact, it is the opposite: at high pressures, intermolecular collisions reduce friction and increase the mobility of gas by making molecules move faster. | {
"domain": "physics.stackexchange",
"id": 99138,
"tags": "thermodynamics, fluid-dynamics, gas, viscosity"
} |
conversion of sensor_msgs::PointCloud2ConstPtr | Question:
Hi all
How can I convert sensor_msgs::PointCloud2ConstPtr
to sensor_msgs::PointCloud2?
Originally posted by xelda1988 on ROS Answers with karma: 5 on 2013-06-13
Post score: 0
Answer:
Say you have input as sensor_msgs::PointCloud2ConstPtr and output as sensor_msgs::PointCloud2. output = *input;
Originally posted by Jiayi Liu with karma: 166 on 2013-06-13
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by Bernhard on 2013-06-14:
Just a small addition to this answer. That's because the first (PointCloud2ConstPtr) is a pointer (Ptr), hence the *
Comment by xelda1988 on 2013-06-19:
Thanks guys! Is this copying the pointcloud, which allocates new memory? If yes, is there another way to do it?
Comment by ashish on 2017-06-01:
This might be a really 'new' comment.
ConstPtr types are typedef(s) for boost::shared_ptr (shared pointer) to manage the life-cycle of pointer-memories. There won't be any new memory allocated but reuse of an already allocated memory. | {
"domain": "robotics.stackexchange",
"id": 14552,
"tags": "sensor-msgs, pointcloud"
} |
Append Existing Columns to another Column in Pandas Dataframe | Question: I have a data that looks like this:
The T2M indicates the temperature, and the following row is the year. I want to append all columns with the same parameter under a single column containing all the years, so I will end up with only one T2M column, and the final dataframe would look like this:
Parameter | T2M | ...
Year | 1981 | ...
Jan
Feb
.
.
Year | 1982 | ...
.
.
.
I tried the following but it doesn't work:
dff = df.copy()
temp = df.iloc[:,1]
dff.append(temp)
I get this error :
ValueError: cannot reindex from a duplicate axis
which doesn't make sense because here in the first example similar indices were used.
Answer: Ok, I figured out the problem. The duplicate axis error was coming up because the dataframe has multiple columns with the name 'T2M', so append() could not figure out to which column it should append the new values.
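A minimal reproduction of the error and of the de-duplicate-then-stack fix (the column names and values here are illustrative, not the original data):

```python
import pandas as pd

# Two columns share the label "T2M": pandas cannot reindex against an axis
# that contains the same label twice, which raises ValueError.
df = pd.DataFrame([[1981, 20.1, 19.5]], columns=["Year", "T2M", "T2M"])
try:
    df.reindex(columns=["Year", "T2M"])
except ValueError as e:
    print("ValueError:", e)

# With unique labels, stacking the yearly columns under one T2M column is a
# straightforward concat:
df.columns = ["Year", "T2M_1981", "T2M_1982"]
long = pd.concat(
    [df[["Year", col]].rename(columns={col: "T2M"})
     for col in ("T2M_1981", "T2M_1982")],
    ignore_index=True,
)
print(long)
```

Renaming every duplicate to a unique label first (as the answer describes) is what makes the reindex/append machinery unambiguous again.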
Instead, I copied the dataframe, in the copy I deleted all columns to be appended, and extracted the data from the original df to the copied one. Since in the copy all columns are unique, everything went fine. | {
"domain": "datascience.stackexchange",
"id": 11249,
"tags": "pandas, data-cleaning, dataframe"
} |
Custom scrollbar | Question: I have a function that will .destroy() a custom scrollbar and then recreate the scrollbar with a new theme. My IDE (Eclipse) is telling me that
my function contains undefined variables. The error is not stopping me from running my program, and I know that if the variable is not there, my try statement will run different code to create the scrollbar variable. I also know I can use #@UndefinedVariable to tell my IDE not to worry about the undefined variable.
Keep 2 things in mind:
My scrollbar is custom. It is not the tkinter scrollbar. I have this custom scrollbar so I can change the colors(theme) of the sliders, background, and arrows on the scrollbar as the tkinter scrollbar cannot do this on Windows or Mac machines.
My custom scrollbar does not currently have a way to manipulate the colors once it has been initialized. Because of this, I decided the best way to change the theme of my scrollbar was to create a try statement that would first try to destroy the scrollbars and recreate them with the new theme, or, in the except branch, create the scrollbars because there were none to begin with.
My question is this:
Is it a problem for me to manage my scrollbar this way? Should I be going about this a different way?
I just feel like I am using the try statement in a way it was not meant to be used. Maybe I am just overthinking this and it is fine, but it's best to know for sure so I don't make a habit of doing things the wrong way.
Below is the chopped down version of how I create and manage my scrollbars:
from tkinter import *
import scrollBarClass #Custom scrollbar class
pyBgColor = "#%02x%02x%02x" % (0, 34, 64)
pyFrameColor = "#%02x%02x%02x" % (0, 23, 45)
root = Tk()
root.title("MINT: Mobile Information & Note-taking Tool")
root.geometry("500x500")
root.config(bg = pyFrameColor)
root.columnconfigure(0, weight=1)
root.rowconfigure(0, weight=1)
currentTextColor = 'orange'
def doNothing():
print("Do lots of nothing?")
# ~~~~~~~~~~~~~~~~~< Theme >~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def MintThemeDefault(mainBG, textBG, txtColor):
# Some theme configs
# More theme configs
# and so on...
root.text.config(bg = textBG, fg = txtColor)
try:
vScrollBar.destroy() #@UndefinedVariable
hScrollBar.destroy() #@UndefinedVariable
makeScrollBars(textBG, txtColor, mainBG)
except:
makeScrollBars(textBG, txtColor, mainBG)
def makeScrollBars(textBG,txtColor,mainBG):
vScrollBar = scrollBarClass.MyScrollbar(root, width=15, command=root.text.yview, troughcolor = textBG,
buttontype = 'square', thumbcolor = txtColor, buttoncolor = mainBG)
vScrollBar.grid(row = 0, column = 1, columnspan = 1, rowspan = 1, padx =0, pady =0, sticky = N+S+E)
root.text.configure(yscrollcommand=vScrollBar.set)
vScrollBar.config(background = mainBG)
hScrollBar = scrollBarClass.MyScrollbar(root, height=15, command=root.text.xview, orient='horizontal', troughcolor = textBG,
buttontype = 'square', thumbcolor = txtColor, buttoncolor = mainBG)
hScrollBar.grid(row = 1 , column = 0, columnspan = 1, rowspan = 1, padx =0, pady =0, sticky = S+W+E)
root.text.configure(xscrollcommand=hScrollBar.set)
hScrollBar.config(background = mainBG)
# ~~~~~~~~~~~~~~~~~< THEMES >~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def MintTheme1():
mainBGcolor = "#%02x%02x%02x" % (64,89,82)
textBGcolor = "#%02x%02x%02x" % (17,41,41)
txtColor = "#%02x%02x%02x" % (175, 167, 157)
MintThemeDefault(mainBGcolor,textBGcolor,txtColor)
def MintTheme2():
global currentTextColor
mainBGcolor = "#%02x%02x%02x" % (14, 51, 51)
textBGcolor = "#%02x%02x%02x" % (4, 22, 22)
txtColor = "#%02x%02x%02x" % (223, 171, 111)
MintThemeDefault(mainBGcolor,textBGcolor,txtColor)
# ~~~~~~~~~~~~~~~~~< Theme Menu >~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def libraryMenu():
menu = Menu(root)
root.config(menu=menu)
prefMenu = Menu(menu, tearoff=0)
menu.add_cascade(label="Preferences", menu=prefMenu)
prefMenu.add_command(label = "Mint Theme 1", command = MintTheme1)
prefMenu.add_command(label = "Mint Theme 2", command = MintTheme2)
libraryMenu()
# ~~~~~~~~~~~~~~~~~< FRAMES >~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
root.text = Text(root, undo = True)
root.text.grid(row = 0, column = 0, rowspan = 1, columnspan = 1, padx =0, pady =0, sticky = N+S+E+W)
root.text.config(bg = pyFrameColor, fg = "white", font=('times', 16), insertbackground = "orange")
root.text.config(wrap=NONE)
# ~~~~~~~~~~~~~~~~~< Default Theme >~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
MintThemeDefault("#%02x%02x%02x"%(64,89,82),"#%02x%02x%02x"%(0, 23, 45),"#%02x%02x%02x"%(175, 167, 157))
root.mainloop()
In order for you to test this code you will need the scrollBarClass.py file. Here is my Github Link for the file. Just put the scrollBarClass.py file in the same directory as the main.py file you are using to test the code with.
I am adding the complete code for review. Note that this program works fine without any major errors but does require a few files to function. See my GitHub for this project called MINT.
from tkinter import *
import time
import tkinter.messagebox
import tkinter.simpledialog
import json
from string import ascii_letters, digits
import os
import scrollBarClass
# Created on Mar 21, 2017
# @author: Michael A McDonnal
pyBgColor = "#%02x%02x%02x" % (0, 34, 64)
pyFrameColor = "#%02x%02x%02x" % (0, 23, 45)
root = Tk()
root.title("MINT: Mobile Information & Note-taking Tool")
root.geometry("1050x900")
root.minsize(800,600)
root.config(bg = pyFrameColor)
root.columnconfigure(0, weight=0)
root.columnconfigure(1, weight=1)
root.rowconfigure(0, weight=0)
root.rowconfigure(1, weight=1)
#root.rowconfigure(2, weight=1)
#~~~~~~~~~~~~~~~~~~~< Windows stuff >~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# row0label = Label(root)
# row0label.grid(row = 0 , column = 0 )
# row0label.configure(text = " ")
#~~~~~~~~~~~~~~~~~~~< Global Variables Being Uses >~~~~~~~~~~~~~~~~~~~~~~~~~~
path = "./NotesKeys/"
colorPath = "./Colors/"
notebook = dict()
currentWorkingLib = ""
currentWorkingKeys = ""
currentWorkingButtonColor = "orange"
selectedTextColor = "orange"
selectedBGColor = "#%02x%02x%02x"
postUpdate = False
#~~~~~~~~~~~~~~~~~~~< USE TO open all files in Directory >~~~~~~~~~~~~~~~~~~~
with open("%s%s"%(path,"list_of_all_filenames"), "r") as listall:
list_of_all_filenames = json.load(listall)
def openAllFiles():
global path
for filename in os.listdir(path):
with open(path+filename, "r") as f:
notebook[filename] = json.load(f)
openAllFiles()
#~~~~~~~~~~~~~~~~~~~< Prompt For New Library >~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
valid_filename = ""
def new_lib_prompt():
global valid_filename, list_of_all_filenames, path
a_name = tkinter.simpledialog.askstring("Create New Note Library", "Alphanumeric and '_' only", initialvalue = "Name_Here")
VALID_CHARS = "-_.() {}{}".format(ascii_letters, digits)
valid_filename = ("".join(c for c in a_name if c in VALID_CHARS)).replace(" ", "_").lower()
if valid_filename != "" and valid_filename != "name_here":
if valid_filename not in list_of_all_filenames:
createNewNotesAndKeys(valid_filename)
list_of_all_filenames.append(valid_filename)
with open("%s%s"%(path,"list_of_all_filenames"), "r+" ) as f:
json.dump(list_of_all_filenames, f, indent = "")
libraryMenu()
else:
print ("Library already exist")
else:
print ("No Name Given")
def createNewNotesAndKeys(name):
global path, list_of_all_filenames
nName = name+"_notes"
kName = name+"_keys"
with open("./NotesKeys/default_notes", "r") as defaultN:
nBase = json.load(defaultN)
with open("./NotesKeys/default_keys", "r") as defaultK:
kBase = json.load(defaultK)
with open("%s%s"%(path,nName), "w") as outNotes:
json.dump(nBase, outNotes, indent = "")
with open("%s%s"%(path,kName), "w") as outNotes:
json.dump(kBase, outNotes, indent = "")
openAllFiles()
#~~~~~~~~~~~~~~~~~~~< USE TO CLOSE PROGRAM >~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def closeprogram():
answer = tkinter.messagebox.askquestion("Leaving MINT?","Are you sure you want to leave MINT")
if answer == "yes":
root.destroy()
else:
tkinter.messagebox.showinfo("MINTy Fresh!","Welcome Back XD")
def doNothing():
print("Do lots of nothing?")
#~~~~~~~~~~~~~~~~~~~< Message Box >~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def ihnb():
answer = tkinter.messagebox.askquestion("Do you want to be a Python Programmer?","Do you want to program?")
if answer == "yes":
a1 = "Then be prepared to spend countless hours hating life!"
root.text.delete(1.0, "end-1c")
root.text.insert("end-1c", a1)
root.text.see("end-1c")
else:
a2= "Smart move. Now go away!"
root.text.delete(1.0, "end-1c")
root.text.insert("end-1c", a2)
root.text.see("end-1c")
#~~~~~~~~~~~~~~~~~~~< UPDATE keyword display >~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def update_kw_display():
pass
listToPass = ["chose a library","chose a library_keys","chose a library_notes",""]
if currentWorkingKeys not in listToPass:
keys_to_be_updated = notebook[currentWorkingKeys]
root.textSideL.delete(1.0, "end-1c")
root.textSideR.delete(1.0, "end-1c")
contr = 0
for item in keys_to_be_updated:
if contr == 0:
root.textSideL.insert("end-1c",item+"\n")
root.textSideL.see("end-1c")
contr += 1
else:
root.textSideR.insert("end-1c",item+"\n")
root.textSideR.see("end-1c")
contr = 0
else:
print("In the list to pass")
#~~~~~~~~~~~~~~~~~~~< Search for words and highlight >~~~~~~~~~~~~~~~~~~~~~~~~
def searchTextbox(event=None):
root.text.tag_configure("search", background="green")
root.text.tag_remove('found', '1.0', "end-1c")
wordToSearch = searchEntry.get().lower()
idx = '1.0'
while idx:
idx = root.text.search(wordToSearch, idx, nocase=1, stopindex="end-1c")
if idx:
lastidx = '%s+%dc' % (idx, len(wordToSearch))
root.text.tag_add('found', idx, lastidx)
idx = lastidx
root.text.tag_config('found', font=("times", 16, "bold"), foreground ='orange')
#~~~~~~~~~~~~~~~~~~~< UPDATE selected_notes! >~~~~~~~~~~~~~~~~~~~
def append_notes():
global currentWorkingLib, currentWorkingKeys, path
e1Current = keywordEntry.get().lower()
e1allcase = keywordEntry.get()
e2Current = root.text.get(1.0, "end-1c")
answer = tkinter.messagebox.askquestion("Update Notes!","Are you sure you want update your Notes for "+e1allcase+" This cannot be undone!")
if answer == "yes":
if e1Current in notebook[currentWorkingLib]:
statusE.config(text = "Updating Keyword & Notes for the "+currentWorkingLib+" Library!")
dict_to_be_updated = notebook[currentWorkingLib]
dict_to_be_updated[e1Current] = e2Current
with open("%s%s"%(path,currentWorkingLib),"w") as working_temp_var:
json.dump(dict_to_be_updated, working_temp_var, indent = "")
statusE.config(text = "Update Complete")
else:
statusE.config(text= "Creating New Keyword & Notes for the "+currentWorkingLib+" Library!")
dict_to_be_updated = notebook[currentWorkingLib]
dict_to_be_updated[e1Current] = e2Current
with open("%s%s"%(path,currentWorkingLib), "w" ) as working_temp_var:
json.dump(dict_to_be_updated, working_temp_var, indent = "")
keys_to_be_updated = notebook[currentWorkingKeys]
keys_to_be_updated.append(e1allcase)
with open("%s%s"%(path,currentWorkingKeys), "w" ) as working_temp_keys:
json.dump(keys_to_be_updated, working_temp_keys, indent = "")
statusE.config(text = "Update Complete")
update_kw_display()
else:
tkinter.messagebox.showinfo("...","That was close!")
#~~~~~~~~~~~~~~~~~~~< Entry Widget >~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def kw_entry(event=None):
global currentWorkingLib
e1Current = keywordEntry.get().lower()
#e1IgnoreCase = keywordEntry.get()
if currentWorkingLib in notebook:
note_var = notebook[currentWorkingLib]
if e1Current in note_var:
#tags_list=[r"(?:<<)",r"(?:>>)",r"(?:<)",r"(?:>)"]
root.text.delete(1.0, "end-1c")
root.text.insert("end-1c", note_var[e1Current])
root.text.see("end-1c")
else:
root.text.delete(1.0, "end-1c")
root.text.insert("end-1c", "Not a Keyword")
root.text.see("end-1c")
else:
root.text.delete(1.0, "end-1c")
root.text.insert("end-1c", "No Library Selected")
root.text.see("end-1c")
#~~~~~~~~~~~~~~~~~~~< Preset Themes >~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
baseBGimage=PhotoImage(file="./Colors/pybgbase.png")
bgLable = Label(root, image= baseBGimage)
bgLable.place(x = 0, y = 0)
bgLable.config(image = baseBGimage)
bgLable.image = baseBGimage
currentTextColor = 'orange'
def MintThemeDefault(mainBG, textBG, txtColor,bgimage):
global currentTextColor
currentTextColor = txtColor
themeBGimage = bgimage
textFrame.config(bg = textBG)
entryBGimage.config(image = themeBGimage)
entryBGimage.image = themeBGimage
kwBGimage.config(image = themeBGimage)
kwBGimage.image = themeBGimage
bgLable.config(image = themeBGimage)
bgLable.image = themeBGimage
#entryBGimage.config(image = themeBGimage)
#entryBGimage.image = themeBGimage
root.config(bg = mainBG)
root.text.config(bg = textBG, fg = txtColor)
root.textSideL.config(bg = textBG, fg = txtColor)
root.textSideR.config(bg = textBG, fg = txtColor)
searchEntry.config(fg = txtColor, bg = textBG)
keywordEntry.config(fg = txtColor, bg = textBG)
statusFrame.config(bg = textBG)
statusE.config(fg = txtColor, bg = textBG)
statusW.config(fg = txtColor, bg = textBG)
searchLabel.config(fg = txtColor, bg = textBG)
keywordLabel.config(fg = txtColor, bg = textBG)
UpdateKeywordsButton.config(fg = txtColor, bg = textBG)
try:
vScrollBar.destroy() #@UndefinedVariable
hScrollBar.destroy() #@UndefinedVariable
makeScrollBars(textBG, txtColor, mainBG)
except:
makeScrollBars(textBG, txtColor, mainBG)
def makeScrollBars(textBG,txtColor,mainBG):
vScrollBar = scrollBarClass.MyScrollbar(textFrame, width=15, command=root.text.yview, troughcolor = textBG,
buttontype = 'square', thumbcolor = txtColor, buttoncolor = mainBG)
vScrollBar.grid(row = 0, column = 2, columnspan = 1, rowspan = 1, padx =0, pady =0, sticky = N+S+E)
root.text.configure(yscrollcommand=vScrollBar.set)
vScrollBar.config(background = mainBG)
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
hScrollBar = scrollBarClass.MyScrollbar(textFrame, height=15, command=root.text.xview, orient='horizontal', troughcolor = textBG,
buttontype = 'square', thumbcolor = txtColor, buttoncolor = mainBG)
hScrollBar.grid(row = 1 , column = 0, columnspan = 1, rowspan = 1, padx =0, pady =0, sticky = S+W+E)
root.text.configure(xscrollcommand=hScrollBar.set)
hScrollBar.config(background = mainBG)
def MintTheme1():
mainBGcolor = "#%02x%02x%02x" % (64,89,82)
textBGcolor = "#%02x%02x%02x" % (17,41,41)
txtColor = "#%02x%02x%02x" % (175, 167, 157)
bgimage=PhotoImage(file="./Colors/theme1bg.png")
MintThemeDefault(mainBGcolor,textBGcolor,txtColor,bgimage)
def MintTheme2():
global currentTextColor
mainBGcolor = "#%02x%02x%02x" % (14, 51, 51)
textBGcolor = "#%02x%02x%02x" % (4, 22, 22)
txtColor = "#%02x%02x%02x" % (223, 171, 111)
bgimage=PhotoImage(file="./Colors/theme2bg.png")
MintThemeDefault(mainBGcolor,textBGcolor,txtColor,bgimage)
#~~~~~~~~~~~~~~~~~~~< Menu function >~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def updateWorkingLibKeys(filename):
global currentWorkingLib,currentWorkingKeys
currentWorkingLib = "{}_notes".format(filename).lower()
currentWorkingKeys = "{}_keys".format(filename).lower()
update_kw_display()
def libraryMenu():
menu = Menu(root)
root.config(menu=menu)
fileMenu = Menu(menu, tearoff=0)
menu.add_cascade(label="File", menu=fileMenu)
fileMenu.add_command(label="Save", command=doNothing)
fileMenu.add_command(label="Save As", command=doNothing)
fileMenu.add_separator()
fileMenu.add_command(label="Exit", command= closeprogram)
libMenu = Menu(menu, tearoff=0)
menu.add_cascade(label="Note Libraries", menu=libMenu)
libMenu.add_command(label="Library Help Page - Not Implemented Yet", command=doNothing)
libMenu.add_separator()
libMenu.add_command(label="New Library", command=new_lib_prompt)
libMenu.add_command(label="Lock Library - Not Implemented Yet", command=doNothing)
libMenu.add_command(label="Delete Library! - Not Implemented Yet", command=doNothing)
libMenu.add_separator()
prefMenu = Menu(menu, tearoff=0)
menu.add_cascade(label="Preferences", menu=prefMenu)
prefMenu.add_command(label="Mint Theme 1", command=MintTheme1)
prefMenu.add_command(label="Mint Theme 2", command=MintTheme2)
helpMenu = Menu(menu, tearoff=0)
menu.add_cascade(label="Help", menu=helpMenu)
helpMenu.add_command(label="Info", command=doNothing)
for filename in list_of_all_filenames:
libMenu.add_command(label = "%s"%(filename), command = lambda filename=filename: updateWorkingLibKeys(filename))
libraryMenu()
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
textFrame = Frame(root, borderwidth = 0, highlightthickness = 0)
textFrame.grid(row = 0, column = 1, columnspan = 1, rowspan = 2, padx =0, pady =0, sticky = W+E+N+S)
textFrame.columnconfigure(0, weight=1)
textFrame.rowconfigure(0, weight=1)
textFrame.columnconfigure(1, weight=0)
textFrame.rowconfigure(1, weight=0)
entryFrame = Frame(root)
entryFrame.grid(row = 0, column = 0, rowspan = 1, columnspan = 1, padx =0, pady =0, sticky = W+E+N+S)
entryFrame.columnconfigure(0, weight=0)
entryFrame.columnconfigure(1, weight=0)
entryFrame.rowconfigure(0, weight=0)
entryFrame.rowconfigure(1, weight=0)
entryFrame.rowconfigure(2, weight=0)
entryBGimage = Label(entryFrame, image= baseBGimage, borderwidth = 0, highlightthickness = 0)
entryBGimage.image = baseBGimage
entryBGimage.place(x = 0, y = 0)
entryBGimage.config(image = baseBGimage)
kwListFrame = Frame(root, borderwidth = 0, highlightthickness = 0)
kwListFrame.grid(row = 1, column = 0, rowspan = 1, columnspan = 1, padx =0, pady =0, sticky = W+E+N+S)
kwListFrame.columnconfigure(0, weight=1)
kwBGimage = Label(kwListFrame, image= baseBGimage, borderwidth = 0, highlightthickness = 0)
kwBGimage.image = baseBGimage
kwBGimage.place(x = 0, y = 0)
kwBGimage.config(image = baseBGimage)
root.textSideL = Text(kwListFrame, width = 10, height = 20)
root.textSideL.place( x = 5, y = 5)
root.textSideL.config(wrap=NONE)
root.textSideR = Text(kwListFrame, width = 10, height = 20)
root.textSideR.place( x = 95, y = 5)
root.textSideR.config(wrap=NONE)
statusFrame = Frame(root)
statusFrame.config(bg = pyFrameColor)
statusFrame.grid(row = 3, column = 0, rowspan = 3, columnspan = 2, padx =0, pady =0, sticky = W+E+N+S)
statusFrame.columnconfigure(0, weight=1)
statusFrame.columnconfigure(1, weight=1)
statusFrame.rowconfigure(0, weight=0)
root.text = Text(textFrame, undo = True)
root.text.grid(row = 0, column = 0, rowspan = 1, columnspan = 1, padx =0, pady =0, sticky = W+E+N+S)
root.text.config(bg = pyFrameColor, fg = "white", font=('times', 16), insertbackground = "orange")
root.text.config(wrap=NONE)
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
statusW = Label(statusFrame, font=("times", 16, "bold"), fg = "white", bg = "black", relief = SUNKEN, anchor = W)
statusW.grid(row = 0, column = 0, padx =1, pady =1, sticky = W+S)
statusW.config(text = "Operation Status", bg = "#%02x%02x%02x"%(0, 23, 45))
statusE = Label(statusFrame, font=("times", 16, "bold"), fg = "white", bg = "black", relief = SUNKEN, anchor = E)
statusE.grid(row = 0, column = 1, padx =1, pady =1, sticky = E+S)
statusE.config(bg = "#%02x%02x%02x"%(0, 23, 45))
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
searchLabel = Label(entryFrame)
searchLabel.grid(row = 1, column = 0, padx =5, pady=5)
searchLabel.config(text="Search Text Field")
searchEntry = Entry(entryFrame, width = 20)
searchEntry.bind("<Return>", searchTextbox)
searchEntry.grid(row = 1, column = 1, padx =5, pady=5)
keywordLabel = Label(entryFrame)
keywordLabel.grid(row = 0, column = 0, padx =5, pady=5)
keywordLabel.config(text="Keyword Search")
keywordEntry = Entry(entryFrame, width = 20)
keywordEntry.bind("<Return>", kw_entry)
keywordEntry.grid(row = 0, column = 1, padx =5, pady=5)
UpdateKeywordsButton = tkinter.Button(entryFrame, fg = 'Black', bg = 'Orange', text = "Update Notes", command = append_notes)
UpdateKeywordsButton.grid(row = 2, column = 0, padx =5, pady =5)
MintThemeDefault("#%02x%02x%02x"%(64,89,82),"#%02x%02x%02x"%(0, 23, 45),"#%02x%02x%02x"%(175, 167, 157),PhotoImage(file="./Colors/pybgbase.png"))
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
status_time = ""
def tick():
global status_time
time2 = time.strftime("%H:%M:%S")
if time2 != status_time:
status_time = time2
statusE.config(text=time2+" Preparing to do nothing...")
statusE.after(200, tick)
tick()
#~~~~~~~~~~~~~~~~~~~< root Main Loop >~~~~~~~~~~~~~~~~~~~~~~~~~~~~
root.mainloop()
Answer: Idiomatic Python:
The only lines that should be outside a method or class are global imports and these:
if __name__ == '__main__':
sys.exit(main())
This makes it much easier to understand what the state is once the program starts, and makes it easier to follow as it changes while going through the code. As it stands I can't see the relationship between the top-level variables at a glance.
Avoid globals.
Use readable names. Names like ihnb are WTF moments waiting to happen.
Is it intentional that searchTextbox's event parameter defaults to None? Unless you know it will be called without an argument it should not be defaulted.
Putting Class as a suffix in a class name is redundant.
The various GUI elements should be put together in an object.
Trying to interact with possibly undefined variables is a definite code smell, indicating that the flow of your program is weird. Manually destroying objects is another smell in garbage-collected languages like Python. Sometimes it is necessary, but most of the time you should be able to stamp over variables (or use with statements) without fear that your application will leak resources.
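For the scroll-bar case specifically, one way to avoid the try/destroy/except dance is to keep the widget references on a small object and stamp over them. This is only a sketch; the class, method, and factory names are mine, not from the reviewed code:

```python
# Sketch: keep widget references on one object instead of relying on
# possibly-undefined module-level names (class/method names are illustrative).
class ScrollBars:
    def __init__(self):
        self.v = None
        self.h = None

    def rebuild(self, make_v, make_h):
        # Stamp over the old widgets; no try/except around destroy() needed.
        if self.v is not None:
            self.v.destroy()
        if self.h is not None:
            self.h.destroy()
        self.v = make_v()
        self.h = make_h()
```

MintThemeDefault could then call bars.rebuild(...) with two factories that build and grid the MyScrollbar widgets, with no reference to possibly-undefined globals.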
PEP8 stuff:
Variable names should be all_lower_case_and_underscore_separated.
This and various other stuff will be reported by the pep8 tool.
Personal preference:
You should not have commented code in checked in code.
I never import * from third party code and usually not even from my own libraries, to avoid polluting the name space and possibly creating collisions. root = tkinter.Tk() allows a reader to know instantly that it's an object based on outside code as opposed to something in the same file.
Nested if statements can easily be pulled out as methods, clearing up the code.
Comments should not include fancy separators. Those were only really necessary in the days of editors which didn't support code highlighting, and nowadays just distract from the content. | {
"domain": "codereview.stackexchange",
"id": 25742,
"tags": "python, tkinter"
} |
Does the complex conjugate of a vector have the same direction as the vector? | Question: Looking at reflected and transmitted optic waves, the $\overset{\rightharpoonup }{E}_t$ vector is always perpendicular to $\overset{\rightharpoonup }{k}_t$ (as seen in the attached image). So $\overset{\rightharpoonup }{E}_t\cdot \overset{\rightharpoonup }{k}_t=0$ always. Can the same be said for the complex conjugate of $\overset{\rightharpoonup }{E}_t $ ?
Specifically, does $\overset{\rightharpoonup }{E}_t{}^*\cdot \overset{\rightharpoonup }{k}_t$ always equal $0$?
Answer: I assume that you mean the complex vector $\vec{E}$ represents amplitude $E$ and phase $p$ of a plane wave $E\cos(\vec{k}\cdot\vec{r} - \omega t + p)$, where $\vec{k}$ is the propagation vector perpendicular to the field vector. In complex notation it is usually written as $E e^{i(\vec{k}\cdot\vec{r} - \omega t + p)}$. Taking the conjugate of this just gives $E e^{i(-\vec{k}\cdot\vec{r} + \omega t - p)}$. However, the real part of this is unchanged, so the field is the same and still perpendicular to $\vec{k}$.
If you on the other hand just regard $E$ and $k$ as complex numbers, then they could be written as $E e^{ip}$ and $k e^{i(p + 90^\circ)} = i k\, e^{ip}$. The conjugate of $E$ would then be $E e^{-ip}$. The angular difference, which was $90^\circ$ before, now becomes $2p + 90^\circ$, which of course is not $90^\circ$.
"domain": "physics.stackexchange",
"id": 28736,
"tags": "electromagnetism, optics, electromagnetic-radiation, vectors, calculus"
} |
What is the difference between the positional encoding techniques of the Transformer and GPT? | Question: I know the original Transformer and the GPT (1-3) use two slightly different positional encoding techniques.
More specifically, in GPT they say positional encoding is learned. What does that mean? OpenAI's papers don't go into detail very much.
How do they really differ, mathematically speaking?
Answer: The purpose of introducing positional encoding is to insert a notion of the location of a given token in the sequence. Without it, due to permutation equivariance (symmetry under token permutation), there will be no notion of relative order inside a sequence.
Given a token at the $\text{pos}$-th position, we would like to make the model understand that this token is at a particular position. See the pretty nice blog post here - https://kazemnejad.com/blog/transformer_architecture_positional_encoding/.
Fixed encoding
In the original Transformer one uses a fixed map from the token position $i$ to the embedding vector added to the original embedding:
$$
\begin{aligned}
PE(\text{pos}, 2i) &= \sin(\text{pos} / 10000^{2i / d_{\text{model}}}) \\
PE(\text{pos}, 2i + 1) &= \cos(\text{pos} / 10000^{2i / d_{\text{model}}})
\end{aligned}
$$
Here $\text{pos}$ is an index of the token in sequence, and $2i, 2i+1$ correspond to the dimension inside the embedding.
Learned encoding
Another strategy is to make the map from $\text{pos}$ to the embedding vector of dimension $d_{\text{model}}$ learnable. One initializes (in some way) a positional embedding vector for each position from $0$ to $\text{max\_length}$, and during training these vectors are updated by gradient descent.
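A minimal NumPy sketch of both strategies (the helper name and shapes are mine; in a real model the learned table would be a framework parameter updated by the optimizer):

```python
import numpy as np

def sinusoidal_pe(max_len, d_model):
    """Fixed (non-learned) encoding from the original Transformer paper."""
    pe = np.zeros((max_len, d_model))
    pos = np.arange(max_len)[:, None]     # (max_len, 1)
    i = np.arange(0, d_model, 2)          # even embedding dimensions
    angle = pos / (10000.0 ** (i / d_model))
    pe[:, 0::2] = np.sin(angle)           # PE(pos, 2i)
    pe[:, 1::2] = np.cos(angle)           # PE(pos, 2i + 1)
    return pe

fixed_pe = sinusoidal_pe(max_len=512, d_model=64)

# A learned encoding is just a trainable table of the same shape,
# initialised randomly and updated by gradient descent during training
# (plain NumPy stands in for a framework parameter here):
learned_pe = np.random.default_rng(0).normal(size=(512, 64))

print(fixed_pe.shape, learned_pe.shape)  # (512, 64) (512, 64)
```

Either table is simply added to the token embeddings, which is what breaks the permutation symmetry described above.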
"domain": "ai.stackexchange",
"id": 3206,
"tags": "comparison, transformer, gpt, positional-encoding"
} |
Are there measurements or calculations that suggest atmospheric ice plates would be horizontal to within 0.1 degrees? | Question: This question describes a recently released explanation for flashes of light seen at the sub-solar point above Earth from the DSCOVR satellite, which is located in a special orbit between the Earth and Sun at a distance of about 1.5 million kilometers from the Earth.
I won't reproduce the full question here. I'm asking about the discussion in the following paper about flat ice plates of order 100 microns in size at about 5,000 to 8,000 meters in altitude in thin clouds over land. Specifically the idea that areas that are tens of kilometers wide could contain plates that are all coplanar - parallel to the local Earth's surface and more importantly to each other, to within about 0.1 degrees.
While here is part of the discussion section of the paper, further explanation of the geometry of the satellite and Sun are in the linked question.
The discussion section of the very recent, on-line-available Terrestrial glint seen from deep space: oriented ice crystals detected from the Lagrangian point Alexander Marshak, Tamás Várnai and Alexander Kostinski, doi: 10.1002/2017GL073248 contains the following text:
Based on in-situ measurements of cirrus clouds, [Korolev et al., 2000; McFarquhar et al., 2002], tiny hexagonal platelets of ice, floating in air in nearly perfect horizontal alignment, are likely responsible for the glints observed by EPIC over land. Because the EPIC instrument has a field of view of 0.62 degrees (see, https://epic.gsfc.nasa.gov/epic) and a 2048x2048 pixels CCD, the specular signal within an angle of only $\sim 3\times10^{-4}$ degree (Fig. 2) must either contain smooth large oriented ice plates or smaller oriented platelets sending back diffracted light. Size distribution of such crystals depends greatly on cloud temperature and humidity but the range is from tens of microns to mm. Taking the wavelength of 0.5 micron and ice platelet size of 50 microns, yields the ratio or angular half-width of the diffraction lobe around the specular direction $10^{-2}$ or on the order of a degree [Crawford, 1968, p.486]. This is broader than the angular width of a pixel but narrower than change in the zenith direction over the area covered by a pixel (0.1°).
Question: Is this collective horizontal alignment a known effect at even 1 degree, much less 0.1 degrees? Has this ever been measured to be true over such large areas, or even calculated?
Although the result is preliminary and the discussion short, the authors suggest that this uniform horizontal alignment would not be an unusual occurrence at all. For example, later in the discussion:
Out of total 4106 images collected, 336 contain land-glint for the blue channel, which was chosen because it has the highest spatial resolution (to reduce the amount of data transmitted from DSCOVR, for all other channels four pixels are averaged onboard the spacecraft). Can one interpret this ratio 336/4106 = 8.2%? To exclude images with ocean at the location of possible glint, we divide the 8.2% by the land fraction in EPIC tropical band (1/4), yielding 32.8%. Hence, roughly one in three images with land in the center contains a glint from an ice cloud. This matches the fraction of Earth covered by ice clouds in tropics which is also about a third [King et al., 2013]. This agreement suggests that terrestrial glints seen from deep space supply efficient means of detecting cloud ice, reflecting at least a factor of 5-6 stronger that surrounding pixels and may substantially increase cloud albedo [Takano and Liou, 1989], relative to diffuse reflectance from randomly oriented ice particles. This is significant as cirrus clouds, composed mostly of aspherical particles, cover over 30% of the Earth surface and play a major role in the radiation budget [Stephens et al., 1990].
I'm interested if there are there any explanations for why this might happen. I know there is a phenomenon called a "Sun pillar" that involves reflection off of such ice plates in the atmosphere, and I don't quite understand how the pillar is tall vertically but not wide laterally, but that is perhaps a different question. However, it may none-the-less involve the same kind of ice plate. So I've added this image of "Sun pillar crystals" to help with the discussion. In the explanation for the bright spots seen from DSCOVR, the light is at normal incidence to the planes of the crystals instead of oblique incidence shown here.
above: "Sun pillar crystals" From here. The present question involves reflection at normal incidence, not oblique incidence shown here.
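As a side note, the paper's back-of-envelope diffraction estimate is easy to reproduce; the numbers below are taken from the quoted paragraph:

```python
import math

# Numbers from the quoted paper (representative values, not measurements):
wavelength_m = 0.5e-6   # 0.5 micron visible light
plate_size_m = 50e-6    # 50 micron ice platelet

half_width_rad = wavelength_m / plate_size_m  # lambda / D
half_width_deg = math.degrees(half_width_rad)
print(half_width_rad, half_width_deg)  # 0.01 rad, ~0.57 degrees
```

This matches the paper's claim that the diffraction lobe is "on the order of a degree" - broader than a pixel's angular width but narrower than the 0.1° change in zenith direction across a pixel.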
Answer: While the horizontal alignment of ice crystals has been observed in several previous studies listed in the Marshak et al. paper, it is not clear how much ice crystals wobble around the perfectly horizontal position. Determining the range of tilt angles would be very interesting indeed!
The reason why ice plates can float with a horizontal alignment is discussed in the 1998 article "Subsuns and Low Reynolds Number Flow” by J. I. Katz, 1998, (Journal of Atmospheric Sciences, Volume 55, 3358). The article includes the following paragraph:
Under what conditions will a small, thin falling plate of ice maintain an accurately horizontal orientation? At high Reynolds number Re > 1 a falling plate leaves a turbulent wake (Willmarth et al. 1964; Pruppacher and Klett 1978). Its center of drag lies close to its leading edge or surface; any steady orientation is unstable; it tumbles, and its path is irregular because of large horizontal forces arising during its tumbling (ice crystals, unlike airplanes, rockets, and arrows, are not equipped with stabilizing tails!). This is readily verified by dropping a penny into a jar of water, an experiment in which Re ≈ 3000; it tumbles and usually hits the sides. For Re ≈ 100 a falling disc may oscillate periodically about a horizontal orientation as it leaves behind a regular vortex street. This may be seen by dropping aluminum foil discs of various radii into water. At Re < 100, however, tumbling and oscillations are strongly damped by viscosity. Intuitive concepts from our everyday experience with high Re flows are still qualitatively applicable and show that a vertical orientation (edge on to the flow) is unstable; if the plate tilts the hydrodynamic force on its leading edge acts to amplify the tilt. However, the horizontal orientation (face on to the flow) is stable; if the plate tilts the wake of the leading edge partly shields the trailing edge from the flow, reducing the drag on it; the resulting torque restores the horizontal orientation and the disturbance is quickly damped.
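For scale, here is a rough Reynolds-number check for such a platelet. Every input below is an order-of-magnitude assumption of mine, not a value from the papers:

```python
# Order-of-magnitude estimate only; none of these inputs come from the
# papers above. Kinematic viscosity is for near-surface air; it is larger aloft.
plate_diameter_m = 100e-6    # ~100 micron plate
fall_speed_m_s = 0.05        # assumed small-crystal terminal velocity
nu_air_m2_s = 1.5e-5         # kinematic viscosity of air

reynolds = fall_speed_m_s * plate_diameter_m / nu_air_m2_s
print(reynolds)  # ~0.3
```

A value well below 100 puts such platelets in the viscosity-damped regime Katz describes, where the horizontal orientation is stable.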
"domain": "earthscience.stackexchange",
"id": 1117,
"tags": "atmosphere, clouds, ice, atmospheric-optics"
} |
Zwiebach quick calculation 2.5 | Question: I am working through Zwiebach's a first course in string theory. It's been a while since I did any math (or physics!), and I am stuck on the following problem (quick calculation 2.5 in the book):
Consider the plane $(x,y)$ with the identification
$$
(x,y)\sim(x + 2\pi R, y + 2\pi R).
$$
What is the resulting space? Hint: the space is most clearly exhibited using a fundamental domain for which the line $x+y=0$ is the boundary.
My thoughts
Now I understood all the previous examples he gave for identifications, but I can't figure this one out. Why is the line $x+y=0$ a boundary for the domain?
Points from either side of this boundary can be included in the domain no?
Answer: It's not a priori clear why $x+y=0$ should be a boundary. But let's investigate it. Draw the line yourself. You could explicitly parametrise this line, call it $L$, for example as $$p(t) = (t,-t)$$ but it's not necessary.
The equivalence relation tells us to identify the points $p \sim \sigma(p)$ where $$\sigma: (x,y) \mapsto (x + 2\pi R,y+2\pi R).$$
Now it's an elementary exercise to see that this means that we identify the line $L$ with a parallel line as follows:
$$ L \sim L' = L + (2\pi R,2\pi R).$$
Now it should be straightforward to see that [one choice of] fundamental domain is the area between two such lines. (Apply $\sigma$ and its inverse several times if you're still confused.)
But such a fundamental domain with two boundary lines identified is just the cylinder, except that it's rotated. Of course you can prove it mathematically, but this is simply a piece of paper with its edges glued together - you can visualise it. | {
"domain": "physics.stackexchange",
"id": 11284,
"tags": "homework-and-exercises, string-theory, geometry, topology"
} |
Implementing foldl1 in Haskell | Question: As an exercise (scroll to the first set of exercises) for learning Haskell, I have to implement foldl1.
I believe I have implemented it successfully and while there is an answer available, it would be great to have the eye of an expert and more importantly, the thought process of why certain decisions were made.
Below is my code.
foldl1' :: (a -> a -> a) -> [a] -> a
foldl1' f [a] = a
foldl1' f xs =
foldl1' f ((f (xs !! 0) (xs !! 1)):(drop 2 xs))
Answer: First, I like the explicit statement of the type signature. That's a good habit to get into, and makes it easier to capitalise on perhaps the greatest strength of using Haskell, which is all the compile-time checking. The provided signature is as general as it can be for lists.
Second, the single element base case is written cleanly and correctly.
The recursive case has room for a bit of picking apart. It is convention to use pattern matching syntax x:xs (or x1:x2:xs) for list recursive functions. As well as being cleaner to read, the behaviour is slightly different in that it can work out the first, second, and remainder of the list in a single pass without having to separately call !! twice and drop once.
foldl1' f (x1:x2:xs) = foldl1' f ((f x1 x2):xs)
One other improvement that I would suggest, taken directly from the prelude function by the same name, is explicitely handling the failure case when provided with an empty list. For comparison the inbuilt function produces an exception:
foldl1 (+) []
*** Exception: Prelude.foldl1: empty list | {
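Putting these suggestions together, one possible complete version (the error message mirrors the Prelude's) would be:

```haskell
foldl1' :: (a -> a -> a) -> [a] -> a
foldl1' _ [x]        = x
foldl1' f (x1:x2:xs) = foldl1' f (f x1 x2 : xs)
foldl1' _ []         = error "foldl1': empty list"
```

The pattern-match cases replace both the `!!` indexing and the `drop 2`, and the empty-list case makes the failure mode explicit instead of leaving a non-exhaustive-pattern error.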
"domain": "codereview.stackexchange",
"id": 32569,
"tags": "beginner, haskell, reinventing-the-wheel"
} |
Keras functional model returning unexpected output dimensions | Question: So, this is my model:
#input layers
inputs = Input(shape = 2, name = "Input")
#hidden layers
x = Dense(6, activation = "relu", name = "dense_layer_1")(inputs)
x = Dense(4, activation = "relu", name = "dense_layer_2")(x)
punishment = Dense(3, activation = "relu", name = "punishment")(x)
#output layers
y_1 = Dense(1, activation = "sigmoid", name = "y_1")(punishment)
y_2 = Dense(1, activation = "sigmoid", name = "y_2")(punishment)
y_3 = Dense(1, activation = "sigmoid", name= "y_3")(punishment)
y_4 = Dense(1, activation = "sigmoid", name = "y_4")(x)
#functional model declaration
model = Model(inputs = inputs, outputs = [y_1, y_2, y_3, y_4])
and, when I call input_shape and output_shape on this:
print(model.input_shape)
print(model.output_shape)
(None, 2)
[(None, 1), (None, 1), (None, 1), (None, 1)]
But, when I call my model:
print(np.array([normalize_in([290360000, 0])], dtype = "float32")[0].shape)
print(denormalize_out(model(np.array([normalize_in([290360000, 0])], dtype = "float32")[0])))
it gives me:
[[ 1.724373 -0.39440534]]
(1, 2)
tf.Tensor(
[[[8.4842375e+05 1.7916626e+01 3.6546925e+01 1.6306080e+01]]
[[7.9671431e+05 1.7595743e+01 3.5045170e+01 1.5826001e+01]]
[[7.6195538e+05 1.7380047e+01 3.4035694e+01 1.5503292e+01]]
[[9.3114094e+05 1.8429928e+01 3.8949211e+01 1.7074041e+01]]], shape=(4, 1, 4), dtype=float32)
I've been trying to go around the internet to find it, but there was just way too few examples about multi-output keras functional models, and I couldn't figure out what is wrong with my model.
Answer: Based on the code and model architecture you have provided, it seems your model takes a single input and gives four outputs: the shape of the input layer is (batch_size, 2), while the output shape of each output node is (batch_size, 1).
Your result should be a list of four outputs, i.e. a list containing the output tensors of all four nodes, but you are getting a single tensor of shape (4, 1, 4).
I think it may be an issue with the way you are giving the input to the model. Can you check the input shape? I used the same input as you and got a valid result: a list of output tensors, each of shape (batch_size, 1). I also think you should verify that the denormalize_out function is not stacking your output list.
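One way a stacked shape like (4, 1, N) can arise is if denormalize_out converts the list of outputs into a single array. A minimal NumPy sketch of that effect (the values are dummies; denormalize_out is your function, not something I have seen):

```python
import numpy as np

# Four model outputs, each of shape (1, 1), as a multi-output
# functional model returns them for a batch of size 1.
outputs = [np.array([[0.38]]), np.array([[0.61]]),
           np.array([[0.57]]), np.array([[0.46]])]

# Passing the *list* through np.stack (or np.array) silently adds a
# leading axis: four separate outputs become one (4, 1, 1) array.
stacked = np.stack(outputs)
print(stacked.shape)                  # (4, 1, 1)

# Keeping them separate preserves the per-output shapes.
print([o.shape for o in outputs])     # [(1, 1), (1, 1), (1, 1), (1, 1)]
```

If a denormalization step then broadcasts each scalar against several constants, the trailing dimension grows, which could explain the (4, 1, 4) tensor you see.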
To verify the model architecture, I also loaded your model in Netron; it appears to have the correct architecture.
Below is my code and its output:
inputs = Input(shape = 2, name = "Input")
#hidden layers
x = Dense(6, activation = "relu", name = "dense_layer_1")(inputs)
x = Dense(4, activation = "relu", name = "dense_layer_2")(x)
punishment = Dense(3, activation = "relu", name = "punishment")(x)
#output layers
y_1 = Dense(1, activation = "sigmoid", name = "y_1")(punishment)
y_2 = Dense(1, activation = "sigmoid", name = "y_2")(punishment)
y_3 = Dense(1, activation = "sigmoid", name= "y_3")(punishment)
y_4 = Dense(1, activation = "sigmoid", name = "y_4")(punishment)
#functional model declaration
model = Model(inputs = inputs, outputs = [y_1, y_2, y_3, y_4])
#Dummy Input
dummy_input = np.random.rand(1,2)
print("Input shape",dummy_input.shape)
#If we store the outputs in different variables, each is a separate output tensor
out1,out2,out3,out4 = model.predict(dummy_input)
#If we store all outputs in a single variable, it is a list of output tensors
preds = model.predict(dummy_input)
print("output 1 shape",out1.shape)
print("output 2 shape",out2.shape)
print("output 3 shape",out3.shape)
print("output 4 shape",out4.shape)
#making predictions
dummy_input_2 = np.array([[1.724373 , -0.39440534]])
print("dummy_input_2 shape",dummy_input_2.shape)
predictions = model(dummy_input_2)
print(predictions)
The output of the above script is
Input shape (1, 2)
output 1 shape (1, 1)
output 2 shape (1, 1)
output 3 shape (1, 1)
output 4 shape (1, 1)
dummy_input_2 shape (1, 2)
[<tf.Tensor: shape=(1, 1), dtype=float32, numpy=array([[0.3822088]], dtype=float32)>, <tf.Tensor: shape=(1, 1), dtype=float32, numpy=array([[0.60975766]], dtype=float32)>, <tf.Tensor: shape=(1, 1), dtype=float32, numpy=array([[0.5706901]], dtype=float32)>, <tf.Tensor: shape=(1, 1), dtype=float32, numpy=array([[0.45608807]], dtype=float32)>] | {
"domain": "ai.stackexchange",
"id": 4074,
"tags": "machine-learning, keras"
} |
Approximating max degree $3$ perfect matching count? | Question: We do not have a deterministic constant factor approximation scheme for general $n\times n$ $0/1$ permanent.
What is the best factor in deterministic approximation schemes if we only care about counting bipartite perfect matchings with average degree in $[2,3]$ and max degree $3$?
Answer: Dagum and Luby show (using a construction credited to Dahlhaus and Karpinski) how to construct, given a bipartite graph $G$, a bipartite graph $G'$ of maximum degree $3$ such that $G'$ has exactly as many perfect matchings as $G$ (see Theorem 6.2.). Then from $G'$ you can construct a graph $G''$ with average degree arbitrarily close to $2$ and twice as many perfect matchings as $G'$, as I explained here. Both constructions are in polynomial time. Therefore, the best polytime deterministic approximation to the number of perfect matchings of a graph of maximum degree 3, and average degree in $[2,3]$ is equal to the best polytime deterministic approximation to the permanent of a 0-1 matrix. As far as I know, this is the factor $2^n$ approximation achieved by Gurvits and Samorodnitsky.
(It may be helpful to note that a $\exp(n^\varepsilon)$ approximation for $\varepsilon < 1$ would imply an FPTAS.) | {
"domain": "cstheory.stackexchange",
"id": 4097,
"tags": "approximation-algorithms, permanent"
} |
Unstable vs. stable nuclei plotted on a graph | Question: The enclosed graph shows the number of protons (Z) on the x-axis and the number of neutrons (N) on the y-axis for all elements. Stable combinations are marked by black squares, whereas unstable ones are marked by grey squares. I understand why there is a grey "line" located above the black "line", as adding electrically positive, mutually repelling protons would make the nucleus less stable.
However, I do not understand why there is a grey "line" to the right of the black "line": isn't adding neutrons supposed to make a nucleus more stable (as a result of the additional Strong Force and greater distance between the protons)?
Can anyone help out?
Thank you!
from Yoram Kirsh, Fundamentals of Physics B, Tel Aviv, 1998, p. 111.
Answer: What your analysis is missing is that the nuclear attraction between a neutron and a proton is somewhat larger than the attraction between two neutrons or two protons. In nuclear physics this difference is called the symmetry energy. Because of this symmetry interaction the most tightly bound nuclei result from a balanced competition between the attractive symmetry energy and the coulomb repulsion between protons. As you deviate from this balance by having too many or too few neutrons, the resulting nuclei are less stable. | {
"domain": "physics.stackexchange",
"id": 87695,
"tags": "radioactivity, neutrons, protons, strong-force"
} |
Physical quantity that can be expressed using multiple fundamental units | Question: Any physical quantity can be represented as a product of powers of fundamental SI units.For example, Force has dimensions $[\text{kg}\ \text{m}\ \text{s}^{-2}]$ and has three fundamental units. Likewise, what is the physical quantity that has the most fundamental SI units? I can create an arbitrary physical quantity with dimensions $[\text{kg}\ \text{m}\ \text{s}\ \text{A}\ \text{mol}]$ but I am looking for a quantity that is meaningful.
Answer: You could make any arbitrary unit out of such things, but they may not be particularly useful units, of course. The unit listed in the SI standard with the most "fundamental units" is the molar entropy, $\text{m}^2\,\text{kg}\,\text{s}^{-2}\,\text{K}^{-1}\,\text{mol}^{-1}$.
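To see where those five base units come from, note that molar entropy is entropy per amount of substance, and entropy carries units of joules per kelvin:

$$[S_m] = \frac{[S]}{[n]} = \frac{\mathrm{J\,K^{-1}}}{\mathrm{mol}} = \frac{\mathrm{kg\,m^2\,s^{-2}}}{\mathrm{K\,mol}} = \mathrm{m^2\,kg\,s^{-2}\,K^{-1}\,mol^{-1}}.$$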
"domain": "physics.stackexchange",
"id": 40436,
"tags": "units, dimensional-analysis, si-units"
} |
How to tell the robot to go to a location on the map | Question:
http://wiki.ros.org/navigation/Tutorials/SendingSimpleGoals
This tutorial shows how to tell the robot to move forward. However, if I want this robot to move to a certain coordinate on the map, what should I do? I tried subtracting the current location (from /odom) from the target location (hardcoded) and sending that to the navigation stack, but for reasons unknown to me it does not work well.
Thanks!
Originally posted by rozoalex on ROS Answers with karma: 113 on 2017-08-08
Post score: 1
Original comments
Comment by jayess on 2017-08-08:
This would be easier to answer if you posted your attempted solution (code).
Answer:
I think sendingSimpleGoals would help; you can add goals with specific coordinates and send the next goal after one is complete
Originally posted by marine0131 with karma: 48 on 2017-08-09
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 28556,
"tags": "ros, navigation, 3d-slam, turtlebot"
} |
Rapidly mixing Markov chains on 3-colorings of a cycle | Question: The Glauber dynamics is a Markov chain on the colorings of a graph in which at each step one attempts to recolor a randomly chosen vertex with a random color. It does not mix for the 3-colorings of a 5-cycle: there are 30 3-colorings, but only 15 of them can be reached by single-vertex recoloring steps. More generally, it can be shown not to mix for 3-colorings of an n-cycle unless n=4.
The Kempe chain or Wang-Swendsen-Kotecký dynamics is only a little more complicated: at each step one chooses a random vertex v and a random color c, but then one finds the subgraph induced by two of the colors (c and the color of v) and swaps these colors within the component containing v. It is not hard to see that, unlike the Glauber dynamics, all 3-colorings of a cycle can be reached.
Is the Wang-Swendsen-Kotecký dynamics rapidly mixing on 3-colorings of an n-vertex cycle graph?
I know of the results e.g. by Molloy (STOC 2002) that Glauber is rapidly mixing when the number of colors is at least 1.489 times the degree (true here) and the graph to be colored has high girth (also true), but they also require that the degree be at least logarithmic in the size of the graph (not true for cycle graphs), so they don't seem to apply.
Answer: I got the following solution by email from Dana Randall, so any credit for the solution should go to her (which I guess means: don't upvote this answer) and any bugs were likely introduced by me.
The short version of Dana's solution is: instead of using the Markov chain I described, in which potentially-large two-colored regions are recolored, use a "heat bath" in which we repeatedly remove the colors of two vertices and then choose a valid coloring for them at random. It's not hard to show that, if this chain mixes, then the other one does as well. But a standard path coupling argument turns out to work to show that the heat bath does indeed mix.
The long version is too long to include here, so I put it in a blog post instead. | {
"domain": "cstheory.stackexchange",
"id": 126,
"tags": "graph-theory, markov-chains"
} |
Config and test rosemacs | Question:
Hello,
So, through some miracle I installed roslisp and rosemacs... I think (it's kinetic). What I'd really like to do is test it. I've never used emacs and lisp before but I hear it's the best development environment ever. Where do I start? How do I know if emacs is configured to connect to (or start) a lisp session?
===== Update after last reply ===========
Actually, I was thinking more along the lines of getting emacs to talk to SBCL with the ROS configuration through slime, not the internal emacs lisp. I did have some success with this. I edited the .emacs file in my home directory to include this:
(add-to-list 'load-path "/opt/ros/kinetic/share/emacs/site-lisp")
;; or whatever your install space is + "/share/emacs/site-lisp"
(require 'rosemacs-config)
It added some interesting ros-related buffers to emacs. I'm still trying to figure out slime and connecting to sbcl
======== Update after Gaya´s suggestion to just do rosrun roslisp_repl roslisp_repl ========
Yes. I tried this earlier but couldn't get roslisp_repl to install. HOWEVER, it finally ALMOST installed. I discovered that nothing (like knowrob) is properly installing (catkin making) on my system because rosjava didn't come with the kinetic installation. Nor is there any way to install it, that I can see. There is not even a kinetic rosjava installation available from source:
wstool init -j4 ~/rosjava/src https://raw.githubusercontent.com/rosjava/rosjava/kinetic/rosjava.rosinstall
Using initial elements from: https://raw.githubusercontent.com/rosjava/rosjava/kinetic/rosjava.rosinstall
ERROR in config: Unable to download URL [https://raw.githubusercontent.com/rosjava/rosjava/kinetic/rosjava.rosinstall]: HTTP Error 404: Not Found
So, it looks like I may be screwed here. If everything requires rosjava and there is no rosjava available for kinetic... well... punt.
Sincerely,
-Todd
Originally posted by toddcpierce on ROS Answers with karma: 51 on 2016-08-08
Post score: 0
Original comments
Comment by gaya on 2016-08-09:
You followed the setup instructions for non-lisp developers of rosemacs. What you should try instead is the instructions for lisp developers. You can simply try to rosrun roslisp_repl roslisp_repl and that will start the lisp shell in Emacs. There are also roslisp tutorials around; try those.
Answer:
Emacs ships with lisp and you can always evaluate lisp in the scratch buffer. Emacs also ships with "An Introduction to Programming in Emacs Lisp" which you can find using the info command.
Originally posted by mcshicks with karma: 51 on 2016-08-09
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 25474,
"tags": "ros, roslisp"
} |
What change does $dQ$ represent in definition of current $i$ | Question: The definition of current $i$ is
$$i=\frac{dQ}{dt}.$$
According to calculus, whenever we write one variable as a derivative of another, we are calculating the rate of change of the former with respect to the latter. But in the definition of current, $dQ$ doesn't seem to represent any change; rather, it is the amount of charge passing through a particular area. Since we write $Q$ as a derivative with respect to time, we are apparently calculating the rate of change of $Q$ with respect to time, but this is not what we desire to calculate. So why do we write $i$ as the derivative of $Q$ with respect to time, even though $dQ$ does not represent any change?
The same argument applies to definition of rates of flow (for example water).
I may be going wrong somewhere, since I am a newbie to current electricity, so please correct me where I am mistaken so that I can understand why we write $Q$ as a derivative with respect to time.
Answer: $Q(t)$ can be regarded as the total charge which has flowed through a cross-sectional area and perpendicular to it from some time $t=t_{0}$ to $t=t$, where $t_0<t$. In general, $t_{0}$ would be the time when you turn on the current. So, while you can think of $dQ$ as being the differential amount of charge flowing through the cross-section in differential time $dt$, you can also think of $dQ$ as being the change in the total charge which has flowed through the cross-section, which occurs in time $dt$. Therefore, $\frac{dQ}{dt}$ is the rate of change of "the total charge which has flowed through the cross-section" with respect to time.
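A concrete (made-up) numerical example may help: take a hypothetical cumulative charge $Q(t) = 2t + 0.1t^2$ coulombs; the current at any instant is just the rate of change of that running total, which a finite difference approximates:

```python
# i(t) = dQ/dt: the limiting rate of change of the cumulative charge
# Q(t) that has passed through the cross-section since t0 = 0.
def Q(t):
    # hypothetical cumulative charge in coulombs (illustration only)
    return 2.0 * t + 0.1 * t**2

def current(t, dt=1e-6):
    # central finite difference approximating the derivative dQ/dt
    return (Q(t + dt) - Q(t - dt)) / (2 * dt)

# The exact derivative is 2 + 0.2 t, so i(5) is approximately 3.0 A.
print(current(5.0))
```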
"domain": "physics.stackexchange",
"id": 69086,
"tags": "electric-current, charge, differentiation, calculus"
} |
Hindsight Experience Replay: what the reward w.r.t. to sample goal means | Question: Referring to the paper on Hindsight Experience Replay
Is it right that sampled goals which are visited states should be followed by a positive (or non-negative) reward in order to allow an agent to learn?
On page 5 of the paper, a "Algorithm 1 Hindsight Experience Replay (HER)" scheme reads in particular:
for t = 0, T-1 do
r_t := r(s_t, a_t, g)
Store the transition (s_t || g, a_t, r_t, s_(t+1) || g) in R
Sample a set of additional goals for replay G := S(current episode)
for g' ∈ G do
r' := r(s_t, a_t, g')
Store the transition (St||g', a_t, r', s_(t+1)||g') in R
end for
end for
where:
g : current goal
R : replay buffer
All other symbols with a dash indicate that they were sampled in addition to the actual current goal within the current episode.
It means (as far as I understand) that for the sampled goals ($g'$) the reward is now a function of the action taken in the state given the sampled goal. It is not very clear whether the agent will learn the task in case the reward is still the old function (which is non-positive for all states that are different from the final goal).
As an example, in a grid-world an agent gets -1 reward while not in the final cell of its destination, but with the new goals introduced, the agent's reward with respect to its current state is not r, but r' (reward after reaching goal).
Illustrating example (grid-world):
Answer:
... the reward is now a function of action taken in state given the sampled goal.
I believe the action taken is that from the original goal, not from the newly sampled goal (as you say you understand). Otherwise I think you have everything more or less correct.
We see in the first block of the algorithm, that each action $a_t$, given the current goal, g, results in the reward, $r_t$ (as usual). This is stored, along with the new state $s_{t+1}$ concatenated with the current goal (shown by the || symbol). This is highlighted as being standard experience replay.
In the second block, using the sampled (virtual) goals $g'$, we receive a virtual reward for our performance using the same action as previously $a_t$. This is repeated for some number of simulated goals, selected by a sampling strategy, of which several are discussed in Section 4.5.
I myself was wondering how many replays are sampled, as it seems that the key there is to sample enough, so that the buffer itself sees the right balance of additional goals (to reduce the reward sparsity), but not so much that the virtual HER recordings from the second for-loop drown out the real performed goal-action pairs from the first loop. In the paper (Section 4.5), this seems to be around the $k=8$ mark, where $k$ is the ratio of sampled/virtual goals to the original goals.
So I believe the sampled goals that are indeed visited states from the original goal, would indeed receive a non-negative reward.
I think the following is a key statement to help explain the intuition:
Notice that the goal being pursued influences the agent's actions but not the environment dynamics and therefore we can replay each trajectory with an arbitrary goal assuming we have an off-policy RL algorithm like DQN ...
This is very true in life. Imagine you try to throw a frisbee straight across a field to a friend. It doesn't make it, instead flying off to the right. Although you failed, you could learn that the wind is perhaps blowing left to right. If that had just so happened to be the task at hand, you would've received some positive reward for it!
The authors sample many additional goals, which in my analogy, may be the flight dynamics of that particular frisbee, the air density/humidity etc.
The main contribution of this paper, is a method to increase the density of the reward function i.e. to reduce how sparse the reward is for the model while training. Sampling these additional goals after each attempt (failed or otherwise) gives the framework the opportunity to teach the model something in each episode.
In the grid-based example, if the agent doesn't reach the final goal (its original goal), it records -1 to the replay buffer. Then other goals are sampled from the possible next steps according to a sampling strategy, $\mathbb{S}$. If you were close to the goal, it makes sense that by sampling from future states of the same episode (selected at random, after the transition) you would likely end up at the goal. It is important here to realise that the goal has changed, which allows reward to be received. I point this out because the goal usually doesn't change in grid-based games; however, the experiments in the paper were performed on a robotic arm with 7-DOF in continuous space (only the reward was discrete).
EDIT
Below is a sketch of an example path, where we reach the final goal after 10 transitions (blue arrows). I set $k = 4$, so in each of the states $s_t$, we also have 4 randomly selected goals. We then take the corresponding action $a_t$ for the current state, which is the blue arrow. If the randomly sampled goal, $g'$, happens to be the same as $s_{t+1}$, we get a non-negative reward - these are the orange arrows. Otherwise, a negative reward is returned: the green arrows.
This is an example of the random sampling strategy, as my sampled goals $G$ are states that have been encountered in the whole training procedure (not just the current episode), even though you cannot see it in my sketch.
So here we see there are 4 sampled goals, which do indeed return non-negative reward. That was chance. The authors do say that:
In the simplest version of our algorithm we replay each trajectory with the goal $m(s_T)$, i.e. the goal which is achieved in the final state of the episode.
In that case, it would mean $k=1$ and the goal is always simply where the episode ended. This would mean negative rewards in the HER portion of the algorithm for all time steps excluding the final one, $t=T$, where we would reach the sampled goal.
That would indeed equate to the model having learned from an otherwise failed episode. In every single episode! | {
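To make the mechanics concrete, here is a minimal, self-contained sketch of the relabelling step for a 1-D corridor (the environment, the reward shape, and all names are my own illustration, not the paper's code):

```python
import random

# Sparse reward, as in the paper: 0 when the goal is reached, -1 otherwise.
def reward(state, goal):
    return 0.0 if state == goal else -1.0

def her_relabel(episode, k=4):
    """episode: list of (state, action, next_state) transitions with an
    implicit original goal; returns relabelled transitions for the buffer."""
    buffer = []
    states_after = [s2 for (_, _, s2) in episode]
    for t, (s, a, s2) in enumerate(episode):
        # 'future' strategy: sample virtual goals from states visited
        # at or after this transition within the same episode.
        future = states_after[t:]
        for g_virtual in random.sample(future, min(k, len(future))):
            buffer.append((s, g_virtual, a, reward(s2, g_virtual), s2))
    return buffer

# A failed episode that walked right but never reached the true goal:
episode = [(0, +1, 1), (1, +1, 2), (2, +1, 3)]
relabelled = her_relabel(episode)
# Transitions whose next state equals a sampled virtual goal now carry
# reward 0 instead of -1: a dense learning signal from a failed episode.
assert any(r == 0.0 for (_, _, _, r, _) in relabelled)
```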
"domain": "datascience.stackexchange",
"id": 3619,
"tags": "reinforcement-learning"
} |
Complexity of parity game solving compared to PLS, PPA, and PPAD | Question: Since parity game solving is in TFNP ("Total Function Nondeterministic Polynomial") (and the decision version is in NP ∩ coNP), I wonder whether it is contained in PLS ("Polynomial Local Search") or PPA ("Polynomial Parity Argument")? Add PPP ("Polynomial Pigeonhole Principle") if you want, though this would probably mean that it is already contained in PPAD ("Polynomial Parity Arguments on Directed graphs"), and hence in PPA.
Or is it rather the other way round and parity game solving can be shown to be hard for PLS or PPAD? But that would be surprising, since a recursive algorithm that solves parity games is known (even if it is not efficient in the worst case).
Edit 12 March 2017: I recently learned that parity game solving has been shown to be possible in quasipolynomial time. Here are the quoted references:
Deciding Parity Games in Quasipolynomial Time (PDF), by Cristian S. Calude, Sanjay Jain, Bakhadyr Khoussainov, Wei Li, and Frank Stephan.
A short proof of correctness of the quasi-polynomial time algorithm for parity games, by Hugo Gimbert and Rasmus Ibsen-Jensen.
Succinct progress measures for solving parity games, by Marcin Jurdziński and Ranko Lazić.
An implementation and comparison with previous approaches is available (classic strategy improvement "wins" on random instances, but gets "slow" on Friedmann’s trap examples):
An Ordered Approach to Solving Parity Games in Quasi Polynomial Time and Quasi Linear Space, by John Fearnley, Sanjay Jain, Sven Schewe, Frank Stephan, Dominik Wojtczak
Answer: Yes, solving parity games is known to be in PPAD (and thus PPA and PPP too) and PLS, and is thus unlikely to be hard for either (since this would imply containment of one of these classes in the other).
See, e.g.,
Daskalakis, Constantinos, and Christos Papadimitriou. "Continuous local search." Proceedings of the twenty-second annual ACM-SIAM symposium on Discrete Algorithms. SIAM, 2011.
and combine the membership of Simple Stochastic Games (SSGs) in CLS (which is in PPAD and PLS) with the well-known observation that solving parity games can be reduced to solving SSGs in polynomial time.
The reason that these problems are in PPAD is that they admit "optimality equations", rather like Bellman equations, that characterize solutions as fixed points. The reason these problems are in PLS is that they can be solved with local improvement algorithms like strategy improvement (a two-player generalization of policy iteration for MDPs). | {
"domain": "cs.stackexchange",
"id": 7445,
"tags": "complexity-theory, game-theory, game-semantics"
} |
Testing Goldbach's Conjecture hypothesis | Question: A book1 that I'm reading states Goldbach's conjecture as follows:
Every even integer > 2 could be represented as a sum of two prime numbers.
Every integer > 17 could be represented as a sum of three unique prime numbers.
Every integer could be represented as a sum consisted of maximally six prime numbers.
Every odd integer > 5 could be represented as a sum of three prime numbers.
Every even integer could be represented as a difference between two prime numbers.
and then asks for a code that checks each of the five statements. Here is the code:
Goldbach.c
// upper bound of the interval we search for primes
#define MAX 100
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include "HelperArray.h"
#include "HelperPrime.h"
#include "Goldbach.h"
int main(){
testGoldbackHypothesis();
return 0;
}
HelperArray.h
#ifndef HELPERARRAY_H
#define HELPERARRAY_H
/*
Function: pi()
It returns the approximate
number of primes up to the
paramter x.
(To be used to estimate the size of the array to store primes.)
*/
long int pi(int x){
return x / (log((double) x) - 1);
}
//-----------------------------------------------------------
/*
Function: arraySize()
It returns the size of the
array that will hold primes.
x/logx always > prime number density pi(x)/x.
*/
long int arraySize(int x){
return x / log((double) x);
}
//-----------------------------------------------------------
/*
initArray();
*/
void initArray(int* primes, unsigned int size, int initValue){
unsigned int i;
for (i = 0; i < size; ++i){
primes[i] = initValue;
}
}
//-----------------------------------------------------------
/*
Function: printArray();
*/
void printArray(int* primes, unsigned int size){
unsigned int i;
int* ptrToArray = primes;
int fieldWidth = 1 + log10((double)MAX);
printf("{");
for (i = 0; i < size; ++i){
printf("%*d", fieldWidth, ptrToArray[i]);
if (i < size - 1){
// exclude unassigned values at the end
if (ptrToArray[i+1] == 0){
break;
}
printf(", ");
}
if (i % 20 == 0 && i != 0){
printf("\n");
}
}
printf(" }\n");
}
//-----------------------------------------------------------
/*
Function: binarySearch()
It returns true if target is
found in the array named primes.
Otherwise returns false.
*/
char binarySearch(unsigned int target, int* primes, unsigned int size){
int* ptrToArray = primes;
int first = 0;
int last = size;
while (first <= last){
unsigned int middle = first + (last - first) / 2;
if (ptrToArray[middle] == target){
return 1;
}
if (ptrToArray[middle] < target){
first = middle + 1;
}else{
last = middle - 1;
}
}
return 0;
}
#endif
HelperPrime.h
#ifndef HELPERPRIME_H
#define HELPERPRIME_H
/*
Function isPrime();
It returns true if the argument
is prime, otherwise returns false.
*/
char isPrime(int n){
unsigned int denom = 2;
if (n < 2){
return 0;
}
if (n == 2){
return 1;
}
while (denom <= sqrt((double) n)){
if (n % denom == 0){
return 0;
}
++denom;
}
return 1;
}
//-----------------------------------------------------------
/*
Function: findPrimesTill()
Finds all primes up to given number, n,
and returns them collected in array.
*/
void findPrimesTill(int* primes, unsigned int size, unsigned int upperBound){
unsigned int index = 0;
//int* ptrToArray = primes;
unsigned int i = 0;
for (i = 2; i < upperBound; ++i){
if (isPrime(i)){
primes[index++] = i;
if (index >= size){
printf("realloc on i = %d.\n", i);
break;
}
}
}
}
//-----------------------------------------------------------
/*
Function: isSumOfTwoPrimes()
Checks if argument is a sum of two prime.
*/
char isSumOfTwoPrimes(unsigned int target, int* primes, unsigned int size){
unsigned int i;
unsigned int remainder;
int* ptrToArray = primes;
for (i = 0; i < size; ++i){
if (ptrToArray[i] < target){
remainder = target - ptrToArray[i];
}else{
break;
}
if (binarySearch(remainder, primes, size)){
printf("%d = %d + %d", target, ptrToArray[i], remainder);
return 1;
}
}
return 0;
}
//-----------------------------------------------------------
/*
isSumOfUniqueThreePrimes();
*/
char isSumOfUniqueThreePrimes(unsigned int target, int* primes, unsigned int size){
unsigned int i;
unsigned int j;
unsigned int remainder;
int* ptrToArray = primes;
for (i = 0; i < size; ++i){
for (j = 0; j < size; ++j){
if (ptrToArray[i] + ptrToArray[j] < target){
remainder = target - ptrToArray[i] - ptrToArray[j];
}else{
break;
}
// check uniqueness
if (ptrToArray[i] != ptrToArray[j] && ptrToArray[j] != remainder && ptrToArray[i] != remainder){
if (binarySearch(remainder, primes, size)){
printf("%d = %d + %d + %d", target, ptrToArray[i], ptrToArray[j], remainder);
return 1;
}
}
}
}
return 0;
}
//-----------------------------------------------------------
/*
isSumOfThreePrimes();
*/
char isSumOfThreePrimes(unsigned int target, int* primes, unsigned int size){
unsigned int i;
unsigned int j;
unsigned int remainder;
int* ptrToArray = primes;
for (i = 0; i < size; ++i){
for (j = 0; j < size; ++j){
if (ptrToArray[i] + ptrToArray[j] < target){
remainder = target - ptrToArray[i] - ptrToArray[j];
}else{
break;
}
if (binarySearch(remainder, primes, size)){
printf("%d = %d + %d + %d", target, ptrToArray[i], ptrToArray[j], remainder);
return 1;
}
}
}
return 0;
}
//-----------------------------------------------------------
/*
Function: isSumOfTheMostSixPrimes();
It could probably be a recursive function.
Complexity: O(n^6)
*/
char isSumOfTheMostSixPrimes(unsigned int target, int* primes, unsigned int size){
int* ptrToArray = primes;
unsigned int bound = 6;
unsigned int i, j, k , l ,m ,n;
unsigned int currentSum = 0;
for (i = 0; i < size; ++i){
unsigned int currentSum = ptrToArray[i];
if (currentSum == target) return 1;
else if (currentSum > target) break;
for (j = 0; j < size; ++j){
currentSum = ptrToArray[i] + ptrToArray[j];
if (currentSum == target) return 1;
else if (currentSum > target) break;
for (k = 0; k < size; ++k){
currentSum = ptrToArray[i] + ptrToArray[j] + ptrToArray[k];
if (currentSum == target) return 1;
else if (currentSum > target) break;
for (l = 0; l < size; ++l){
currentSum = ptrToArray[i] + ptrToArray[j] + ptrToArray[k] + ptrToArray[l];
if (currentSum == target) return 1;
else if (currentSum > target) break;
for (m = 0; m < size; ++m){
currentSum = ptrToArray[i] + ptrToArray[j] + ptrToArray[k] + ptrToArray[l] + ptrToArray[m];
if (currentSum == target) return 1;
else if (currentSum > target) break;
for (n = 0; n < size; ++n){
currentSum = ptrToArray[i] + ptrToArray[j] + ptrToArray[k] + ptrToArray[l] + ptrToArray[m] + ptrToArray[n];
if (currentSum == target) return 1;
else if (currentSum > target) break;
}
}
}
}
}
}
return 0;
}
//-----------------------------------------------------------
/*
Function: isDifferenceOfPrimes();
*/
char isDifferenceOfPrimes(unsigned int target, int* primes, unsigned int size){
int* ptrToArray = primes;
unsigned int i, j;
for (i = 0; i < size - 1; ++i){
for (j = i + 1; j < size; ++j){
if (target == ptrToArray[j] - ptrToArray[i]){
printf("%d = %d - %d", target, ptrToArray[j], ptrToArray[i]);
return 1;
}
}
}
return 0;
}
#endif
Goldbach.h
#ifndef GOLDBACH_H
#define GOLDBACH_H
// probably all upperBounds in the for loops could be doubled
/*
Function: First();
Test first hypothesis.
*/
void First(int* primes, unsigned int size, unsigned int upperBound){
unsigned int even;
for (even = 4; even <= upperBound; even += 2){
if (isSumOfTwoPrimes(even, primes, size)){
printf("\nFirst Goldback's hypothesis not disproved!\n");
}else{
printf("\n?Exception: %d\n", even);
}
}
}
//-----------------------------------------------------------
/*
Function: Second();
Test second hypothesis.
*/
void Second(int* primes, unsigned int size, unsigned int upperBound){
unsigned int natural;
for (natural = 17; natural <= upperBound; ++natural){
if (isSumOfUniqueThreePrimes(natural, primes, size)){
printf("\nSecond Goldback's hypothesis not disproved!\n");
}else{
printf("\n?Exception:: %d\n", natural);
}
}
}
//-----------------------------------------------------------
/*
Function: Third()
*/
void Third(int* primes, unsigned int size, unsigned int upperBound){
int* ptrToArray = primes;
unsigned int integer;
for (integer = 0; integer < upperBound; ++integer){
if (isSumOfTheMostSixPrimes(integer, primes, size)){
printf("\nThird Goldback's hypothesis not disproved!\n");
}else{
printf("\n?Exception:: %d\n", integer);
}
}
}
//-----------------------------------------------------------
/*
Function: Fourth()
*/
void Fourth(int* primes, unsigned int size, unsigned int upperBound){
unsigned int odd;
for (odd = 7; odd <= upperBound; odd += 2){
if (isSumOfThreePrimes(odd, primes, size)){
printf("\nFourth Goldback's hypothesis not disproved!\n");
}else{
printf("\n?Exception:: %d\n", odd);
}
}
}
//-----------------------------------------------------------
/*
Function: Fifth();
*/
void Fifth(int* primes, unsigned int size, unsigned int upperBound){
unsigned int even;
for (even = 2; even <= upperBound; even += 2){
if(isDifferenceOfPrimes(even, primes, size)){
printf("\nFifth Goldback's hypothesis not disproved!\n");
}else{
printf("\n?Exception:: %d\n", even);
}
}
}
//-----------------------------------------------------------
/*
Function: testFirstGoldbackHypothesis(void)
*/
void testGoldbackHypothesis(void){
// calculate size of array
int error = MAX / 10; // uses adding of error rather than memory reallocation
int size = arraySize(MAX) + error;
// allocate memory for the array storing the primes
int* primes = 0;
primes = (int*)malloc(sizeof(int) * size);
// check allocation
if (!primes){
printf("Failed to allocate memory for array!\n");
}
initArray(primes, size, 0);
findPrimesTill(primes, size, MAX);
printArray(primes, size);
// First(primes, size, MAX);
// Second(primes, size, MAX);
// Third(primes, size, MAX);
// Fourth(primes, size, MAX);
Fifth(primes, size, MAX);
printf("\nup to the number: %d.\n", MAX);
// free allocated memory
free(primes);
}
#endif
Questions:
Would it be better if memory is reallocated for each prime outside of the current array size? (Currently, there are a few garbage values at the end of the array.)
Is the current approach of checking right, are there more effective algorithms?
Is the code written according to the C coding standard?
1. Programming = ++Algorithms
Answer:
Would it be better if memory is reallocated for each prime outside of the current array size?
No. See following.
Is the current approach of checking right, are there more effective algorithms?
char isPrime(int n){ is barely run-time efficient. Rather than seek the next prime with ++denom, maintain a prime list and use the next one. For code using only int, could use a bit accessed array uint8_t IsPrime[(INT_MAX-1)/8 + 1] (about 512M bytes) or something smaller based on MAX, populated with Sieve of Eratosthenes. The question becomes what is "efficient". Is that speed, memory usage, code space usage, source code terseness, small stack usage? OP did not specify - assume speed.
A compiler may not recognize that sqrt() has no side effects and that n is constant, thus repetitively calling sqrt(). Call sqrt() once. Better to round() result too.
//while (denom <= sqrt((double) n)){
unsigned limit = (unsigned) round(sqrt(n));
while (denom <= limit) {
...if (n % denom == 0){
}
Good use of unsigned int middle = first + (last - first) / 2; to avoid overflow issues - even though (first + last) / 2; appears faster, the latter can fail.
Minor. Returning char rather than int/unsigned is rarely faster/less code as that type is usually the processor's "preferred" type. Return int or bool. Profile code if this optimization is in doubt.
// char isSumOfTheMostSixPrimes(...
int isSumOfTheMostSixPrimes(...
Is the code written according to the C coding standard?
.h files are best for defines and declarations. It is non-standard practice to put code in a .h file.
HelperArray.h does not include needed .h files. .h files should not depend on the code that includes them to have included certain files. This file should include them.
// add for `log()`, etc.
#include <math.h>
long int pi(int x){
return x / (log((double) x) - 1);
}
MAX is used, but not defined in HelperArray.h. The .h file needs to 1) not depend on other non-included declarations or defines, or 2) error intelligently.
#ifndef MAX
#error Define `MAX` before including HelperArray.h
#endif
void printArray(int* primes, unsigned int size){
unsigned int i;
int* ptrToArray = primes;
int fieldWidth = 1 + log10((double)MAX);
...
Change of type without checking range - a candidate bug. Similar unqualified type changes are used elsewhere. The loose use of int/unsigned permeates the code.
char binarySearch(unsigned int target, int* primes, unsigned int size){
...
// What if `last` > INT_MAX?
int last = size;
Invalid code. Likely missing )
void findPrimesTill(int* primes, unsigned int size, unsigned int upperBound{
char isSumOfThreePrimes(unsigned int target, int* primes, unsigned int size{
Unused variables are OK, but why have them?
char isSumOfTheMostSixPrimes(unsigned int target, int* primes, unsigned int size){
// Unused
unsigned int bound = 6;
unsigned int currentSum = 0;
void Third(int* primes, unsigned int size, unsigned int upperBound){
// Unused
int* ptrToArray = primes;
Cast is OK, but not needed per the standard
// primes = (int*)malloc(sizeof(int) * size);
// Better
primes = malloc(sizeof(int) * size);
// Even better: Avoid coding the wrong type.
primes = malloc(sizeof *primes * size);
Unclear why functions return type long. The C standard uses size_t as the Goldilocks type (not too narrow, not too wide) for array indexing and size.
// long int pi(int x){
// long int arraySize(int x){
size_t pi(int x){
size_t arraySize(int x){ | {
"domain": "codereview.stackexchange",
"id": 21799,
"tags": "algorithm, c, primes"
} |
Can the axis of rotation of a celestial body point in any arbitrary direction? | Question: I am developing a small computer program that involves moderately simple simulation of elliptical Kepler orbits for fictional, generated star systems. I'm doing this without much prior knowledge of orbits (apart from some basic equations) or astrophysics in general.
I'm attempting to create loosely realistic orbits in the sense that they are not all in one plane and that they are elliptical with the parent body at one focus. The program assumes there are only interactions between a body and the one it orbits. Planets do not affect other planets' orbits. All other forces are explicitly ignored with orbits following only Kepler's laws.
The axis of rotation of each body in this simulation will be static (ie. without precession) and the axial tilt will be pseudorandomly generated. I wish to align an orbit with zero inclination with the equatorial plane of the parent body.
The real question, then, is the following: are there any important constraints on the direction of the axis of rotation of an arbitrary hypothetical body orbiting another hypothetical body of significantly greater mass that I should take into consideration when determining the axis and axial tilt?
Which is to say, can the axis of rotation of, say, a planet, "point" in any arbitrary direction?
(The axis can be assumed to be a vector in the direction of the north pole. The north pole is here simply the pole that is "above" the orbital plane when axial tilt is zero.)
Answer: Uranus has an axial tilt of 97.77 degrees (it's on its side). So we have axial tilts ranging from 23.5 degrees (Earth) to Uranus's axis to Venus's retrograde rotation. I think it's safe to say that the axis of rotation can point in any direction. Remember that the axis of rotation is in fact the direction of the angular momentum, which can be changed by torque, i.e. collisions that are not directed right at a planet's center of gravity. Considering that the early solar system featured many of these types of collisions, I imagine that over time the axial tilt of a planet may end up in any conceivable direction. For more re: Uranus's tilt, check this. | {
"domain": "physics.stackexchange",
"id": 8262,
"tags": "astronomy, simulations, orbital-motion"
} |
What's a reasonable popular-science way to describe the unit $\rm T*ha*y$? | Question: I'm doing a popular-science article on saving the climate through photosynthesis. For this I need to use the convenient unit ha*y (and because we've got so much area on the planet I want to do T*ha*y). I can of course use km^2 instead of ha, but that doesn't reduce the complexity. Please don't suggest anything about miles or football fields, I'm from Europe.
Pretty much everybody understands kWh as a unit to some extent, but when I write Thay, or punctuated T*ha*y, I get blank stares. The concept of sequestering carbon at a certain rate per area for a certain amount of time goes down well, but as soon as I use a unit for it, the lights go out.
I'm sure the physics isn't very challenging at all, it's mostly my way with people that's a hurdle. I hope that is appropriate for this forum. Tips?
Answer: Assuming that you meant "terahectare" by Tha, that is not a valid unit at all. You are generally not allowed to use more than one metric prefix on a single unit, and ha already has the "hecto" (h) prefix attached to the area unit "are" (a). (See, e.g., https://en.wikipedia.org/wiki/Hectare#Conversions) The symbol - I won't call it a unit - Tha - has both a tera and a hecto prefix.
Ultimately if you want to "humanize" the unit for general readership, it will be more important to compare to something they know. You said you're from Europe, so maybe compare to the size of the continent or to one of the countries? | {
"domain": "physics.stackexchange",
"id": 81105,
"tags": "dimensional-analysis, units, si-units"
} |
How to find the percentage error of an equivalent resistor? | Question: The resistors of $R_1=100\pm3Ω $ and $R_2=200\pm4Ω $ are connected in parallel. Then express the equivalent resistance with its percentage error.
I know how to find the percentage error if the resistors are connected in series.
Can anyone help me find the percentage error if the resistors are connected in parallel?
Answer: There are two approaches you can take here.
The simple way is just to consider the max and min values and calculate the resistance in each case. This is probably fine for this case but is unsatisfactory for more complicated cases where it is not clear which values maximise or minimise the resistance.
The more correct approach is to apply propagation of uncertainty. In this case (and probably 95% of things you encounter) you need two rules. In each case it is the square of the uncertainty that appears, because these are properties of the variance, while the uncertainty is the standard deviation:
1) For addition and subtraction the squared uncertainties add, i.e. $u_{A+B}^2 = u_A^2 + u_B^2$. Note this formula is identical for $u_{A-B}$; your uncertainty doesn't get smaller.
2) For multiplication and division the squared fractional uncertainties add, i.e. $(\frac{u_{AB}}{AB})^2=(\frac{u_A}{A})^2+(\frac{u_B}{B})^2$
For your case, note that the uncertainty in the constant 1 is zero.
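Concretely, for the parallel pair in the question the rules give the following worked sketch (using the fact that the fractional uncertainty of a reciprocal equals that of the original quantity, so $u_{1/R_1} = u_{R_1}/R_1^2$):

$$\frac{1}{R} = \frac{1}{R_1} + \frac{1}{R_2} = \frac{1}{100} + \frac{1}{200}\ \Omega^{-1} \implies R \approx 66.7\,\Omega$$

$$u_{1/R}^2 = \left(\frac{3}{100^2}\right)^2 + \left(\frac{4}{200^2}\right)^2 \implies u_{1/R} \approx 3.2\times10^{-4}\,\Omega^{-1}$$

$$u_R = R^2\,u_{1/R} \approx 66.7^2 \times 3.2\times10^{-4} \approx 1.4\,\Omega \quad (\approx 2.1\%)$$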
Finally, putting my pedant hat on, I feel obliged to point out that you are talking about uncertainty, not error. An error is what you have when you measure your $100\Omega$ resistor and find it is actually $102\Omega$. | {
"domain": "physics.stackexchange",
"id": 13756,
"tags": "electrical-resistance, error-analysis"
} |
If an object had an acceleration only in the time dimension relative to an observer, what would it look like to an observer? | Question: The other day I was thinking about how an object moving away from an observer in spatial dimensions appears to get smaller as it gets further away.
That made me wonder- if there was an object that was stationary in the spatial dimensions relative to some observer, but moving away in the time dimension only, what would that look like? Is it even possible, or am I misunderstanding the nature of spacetime? Would the object get smaller even as it appeared to stay in the same physical location?
Answer: Both velocity and acceleration are not purely spatial concepts; they are concepts related to both space and time. In spacetime terms they relate to the relative gradient of one worldline or another. So what a non-physicist might refer to as a "spatial" acceleration is really a statement about what is going on in time as well as space.
With this in mind, one cannot easily give sense to the idea of a "purely temporal" acceleration, but perhaps one way to do it is to say it refers to getting further away in time in the sense of aging more quickly according to internal dynamics. For example, an object located higher up in a gravitational field will age more quickly. The observation here would be that as you sit at your location experiencing life at what feels to you like a normal pace, your friend at a higher location is getting through more heart-beats, and having faster cell division, and faster chemistry and particle physics and everything, as indicated by all your observations of them from your location. The effect would only be noticeable in extreme conditions of course. | {
"domain": "physics.stackexchange",
"id": 82189,
"tags": "kinematics, spacetime, thought-experiment"
} |
Why sine wave leakage in FFT spectrum | Question: The input is three sine waves with different amplitudes and frequencies. After the FFT, the spectrum shows the correct characteristic of each wave without leakage error. Why does the other spectrum show leakage when the input is multiplied by a Hamming window function? Thanks
Answer: Yes, we use windows to reduce spectral leakage and you show an example where using a window increases it.
There is more than one way to explain it.
If you express a DFT as a matrix vector product
$$ \mathbf{y}=\mathbf{Wx}
$$
where
$$
\mathbf{WW^H}=c\mathbf{I}
$$
where $c$ is a constant that depends on how you define your DFT. The point is that $\mathbf{W}$ is an orthogonal matrix. Your 3 original sine waves correspond to 3 rows of $\mathbf{W}$. Your 3 sines are perpendicular to all the other $N-3$ rows, so there is no leakage.
When you use a window like a Hamming window, it's equivalent to multiplying each row element by element by the window. This modified matrix $\mathbf{\tilde{W}}$ isn't orthogonal any more.
$$
\mathbf{\tilde{W}\tilde{W}^H}\ne c \mathbf{I}
$$
The resulting matrix is nearly equal to a constant times $\mathbf{I}$ but the off diagonals are nonzero. Your 3 original sines now project on all the rows of $\mathbf{\tilde{W}}$. These nonzero projections are another way to say leakage.
The nonorthogonality of the windowed matrix is easy to verify in MATLAB using the dftmtx command. | {
"domain": "dsp.stackexchange",
"id": 7923,
"tags": "frequency-spectrum"
} |
Building a tree structure based on flat objects | Question: You can find a follow-up question here
Description
A List<UnrelatedObects> is returned by a 3rd-party webservice, which is then transformed into a List<ArchiveDefinition>. These ArchiveDefinition objects are connected by 'Parent'.ArchiveNodeId == 'Child'.ParentId.
The root objects' TypeOfArchive property will always have the value ArchiveType.Archive.
The goal of the class given below is to build a List<ArchiveTreeNode> from this flat object list to fill a treeview control.
The class in question
public class ArchiveBuilder
{
public static List<ArchiveTreeEntry> Build(List<ArchiveDefinition> entries)
{
List<ArchiveTreeEntry> rootArchiveTreeEntries = new List<ArchiveTreeEntry>();
if (entries != null && entries.Count > 0)
{
List<ArchiveDefinition> rootEntries = GetRootEntries(entries);
foreach (ArchiveDefinition definition in rootEntries)
{
rootArchiveTreeEntries.Add(new ArchiveTreeEntry(definition));
entries.Remove(definition);
}
foreach (ArchiveTreeEntry parent in rootArchiveTreeEntries)
{
FillChildren(parent, entries);
}
}
return rootArchiveTreeEntries;
}
private static void FillChildren(ArchiveTreeEntry parent,
List<ArchiveDefinition> entries)
{
if (entries.Count > 0)
{
List<ArchiveDefinition> children = GetChildren(entries, parent.Id);
if (children.Count > 0)
{
RemoveChildren(entries, parent.Id);
foreach (ArchiveDefinition child in children)
{
ArchiveTreeEntry treeEntryChild = new ArchiveTreeEntry(child);
parent.AddChild(treeEntryChild);
FillChildren(treeEntryChild, entries);
}
}
}
}
private static List<ArchiveDefinition> GetRootEntries(List<ArchiveDefinition> entries)
{
return entries.FindAll(e => e.TypeOfArchive == ArchiveType.Archive);
}
private static List<ArchiveDefinition> GetChildren(List<ArchiveDefinition> entries, string parentID)
{
return entries.FindAll(e => e.ParentId == parentID);
}
private static void RemoveChildren(List<ArchiveDefinition> entries, string parentID)
{
entries.RemoveAll(e => e.ParentId == parentID);
}
}
Related classes and enums
public class ArchiveDefinition
{
public string ArchiveNodeId { get; private set; }
public string ParentId { get; private set; }
public ArchiveType TypeOfArchive { get; private set; }
public ArchiveDefinition (String parentId, String archiveNodeId,
ArchiveType type)
{
ParentId = parentId;
TypeOfArchive = type;
ArchiveNodeId = archiveNodeId;
}
}
public enum ArchiveType
{
Archive, ArchiveGroup, ArchiveEntry
}
public class ArchiveTreeEntry
{
public ArchiveType ArchiveEntryType { get; private set; }
public string Id { get; private set; }
public ReadOnlyCollection<ArchiveTreeEntry> Children
{
get
{
return new ReadOnlyCollection<ArchiveTreeEntry>(mChildren);
}
}
private List<ArchiveTreeEntry> mChildren = new List<ArchiveTreeEntry>();
public ArchiveTreeEntry(ArchiveDefinition archiveDefinition)
{
Id = archiveDefinition.ArchiveNodeId;
ArchiveEntryType = archiveDefinition.TypeOfArchive;
}
internal void AddChild(ArchiveTreeEntry child)
{
if (child != null)
{
mChildren.Add(child);
}
}
}
I would like to get a review for the ArchiveBuilder class. If you also want to review the related classes and enums, I won't mind.
Answer: I think your implementation is pretty clear. Nonetheless I tried some changes, hoping my version is more readable.
I agree with Stuart's post. (using IEnumerable<> if possible, non static Build() function, ...).
In Addition I think you have broken this principles:
Single Responsibility
--> Your FillChildren(...) function is looking for children, removing them from the original list, and adding them to the parent's children collection. That's three responsibilities.
Least Astonishment
--> In a function called FillChildren I don't expect anything to be removed.
I also changed many names, but this may be a matter of taste.
Here is the changed code:
public class ArchiveBuilder
{
public List<ArchiveTreeEntry> Build(List<ArchiveDefinition> availableArchiveDefinitions)
{
List<ArchiveTreeEntry> rootArchiveTreeEntries = null;
if (availableArchiveDefinitions != null && availableArchiveDefinitions.Count > 0)
{
rootArchiveTreeEntries = CreateRootArchiveTreeEntries(availableArchiveDefinitions);
availableArchiveDefinitions = RemoveRootArchiveDefinitions(availableArchiveDefinitions);
foreach (var entry in rootArchiveTreeEntries)
{
HandleAvailableEntriesForGivenParent(availableArchiveDefinitions, entry);
}
}
return rootArchiveTreeEntries;
}
private static void AssignChildrenToParent(ArchiveTreeEntry parent,
IEnumerable<ArchiveDefinition> children)
{
parent.AddChildRange(children.Select(x => new ArchiveTreeEntry(x)));
}
private static List<ArchiveTreeEntry> CreateRootArchiveTreeEntries(
IEnumerable<ArchiveDefinition> availableArchiveDefinitions)
{
var rootArchiveTreeEntries = new List<ArchiveTreeEntry>();
rootArchiveTreeEntries.AddRange(
availableArchiveDefinitions.Where(e => e.TypeOfArchive == ArchiveType.Archive)
.Select(x => new ArchiveTreeEntry(x)));
return rootArchiveTreeEntries;
}
private static IEnumerable<ArchiveDefinition> GetChildren(
IEnumerable<ArchiveDefinition> availableArchiveDefinitions,
string parentId)
{
return availableArchiveDefinitions.Where(e => e.ParentId == parentId);
}
private static void HandleAvailableEntriesForGivenParent(
List<ArchiveDefinition> availableArchiveDefinitions,
ArchiveTreeEntry parent)
{
if (availableArchiveDefinitions.Count > 0)
{
var children = GetChildren(availableArchiveDefinitions, parent.Id);
AssignChildrenToParent(parent, children);
RemoveAssignedItemsFromAvailabeEntries(availableArchiveDefinitions, parent.Id);
foreach (var nextParent in parent.Children)
{
HandleAvailableEntriesForGivenParent(availableArchiveDefinitions, nextParent);
}
}
}
private static void RemoveAssignedItemsFromAvailabeEntries(
List<ArchiveDefinition> availableArchiveDefinitions,
string parentId)
{
availableArchiveDefinitions.RemoveAll(e => e.ParentId == parentId);
}
private static List<ArchiveDefinition> RemoveRootArchiveDefinitions(
List<ArchiveDefinition> availableArchiveDefinitions)
{
var newEntries =
availableArchiveDefinitions.Except(
availableArchiveDefinitions.Where(e => e.TypeOfArchive == ArchiveType.Archive))
.ToList();
return newEntries;
}
} | {
"domain": "codereview.stackexchange",
"id": 35207,
"tags": "c#, design-patterns, tree"
} |
Why doesn't contact adhesive stick well to cured modified silane polymer? | Question: A contact adhesive (dissolved neoprene rubber) sticks well to rubbery smooth materials like rubber, so why doesn't contact adhesive stick well to a cured modified silane polymer (a caulk like Hybrifix Super 7)?
Is there any adhesive that can stick to a cured piece of ms polymer?
(besides the ms polymer itself)
Thanks.
EDIT.
MS polymer caulk does not stick well to all materials, so if you bond two objects (of different materials) with MS polymer and, after it has cured, one of the objects comes loose, what type of adhesive can you use to bond the loosened object to the cured MS polymer?
And why doesn't contact cement stick better to MS polymer?
Answer: Sorry for the downvotes, but what you may have is a more polar siloxane sealer, which would tend to repel "organic" carbon/hydrogen compounds.
The repulsion is similar to that of fluoropolymers such as Teflon.
It may be possible to find a more compatible match for your contact adhesive by finding a sealant polymer with side groups that are more chemically matching. | {
"domain": "chemistry.stackexchange",
"id": 16892,
"tags": "adhesion"
} |
Is there a way to define a function of a meniscus curvature? | Question: If you cut a thin slit in thin opaque material and then put it into water and pull it out, a meniscus will be formed in the slit. For my research I need to know if it is analytically possible to define the curvature of the meniscus (is it a hyperbola, parabola, circle or something else).
I need the function of curvature so I will be able to make calculations of how the light passes through the meniscus.
I know there are numerical methods to make an approximate function, but I want to know if there is any other way.
Answer: I believe you are referring to a problem in the calculus of variations known as Plateau's problem.
This involves solutions of boundary value problems for the Laplace equation that satisfy the Dirichlet principle, and for simple boundary conditions you should be able to find analytic solutions in the literature. | {
"domain": "physics.stackexchange",
"id": 8323,
"tags": "optics, surface-tension"
} |
De Broglie wavelength and wave function | Question: Is the De Broglie wavelength of a quantum entity same as the wavelength of its wave function?
If yes, why? If no, why? If it is true only under certain circumstances, what are the conditions?
Answer: If the wave function is a plane wave then it is a De Broglie wave, and its wave length is the same as that of the De Broglie wave. However, wave functions can be very different from plane waves. De Broglie theory was just an early milestone in the development of quantum theory, and today it is primarily of historical interest. | {
"domain": "physics.stackexchange",
"id": 73033,
"tags": "quantum-mechanics, wavefunction, wavelength, wave-particle-duality"
} |
Which trac component to use when submitting tickets for object_recognition stack? | Question:
I'd like to submit feature requests for packages in stack object_recognition but cannot find any suitable component in https://code.ros.org/trac/wg-ros-pkg. Leaving it unspecified is not exactly what I want. Suggestions?
Originally posted by Julius on ROS Answers with karma: 960 on 2011-03-09
Post score: 0
Answer:
The component should be the stack name, object_recognition, but it is not currently defined.
Try creating it yourself. If you don't have permission, add that to your question (as a comment) and someone with admin authority can do it for you.
Originally posted by joq with karma: 25443 on 2011-03-09
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Julius on 2011-03-13:
Ticket has already found an owner and component is changed to object_recognition. Thanks.
Comment by tfoote on 2011-03-09:
I've created object_recognition as a component. If you opened a ticket it should get to the right place eventually.
Comment by Julius on 2011-03-09:
Yes, it's not defined. I followed your suggestion, opened a ticket and will see what happens and whether it's necessary to find someone taking care of it. Thanks. | {
"domain": "robotics.stackexchange",
"id": 5011,
"tags": "ros, object-recognition"
} |
Profile and pressure angle definition in gear geometry | Question: In gear geometry there are 3 main angles, which can be misleading:
Profile angle
Pressure angle
Operating pressure angle
Further for simplicity I would restrict the area of interest to "spur gears". It is quite obvious that for standard mating gears these above mentioned angles are equal, but how do they differ when a profile shift occurs?
We can distinguish 2 types of corrections:
$V_0$ shift $\longrightarrow \sum x = 0$
$V_+ V_-$ shift $\longrightarrow \sum x \neq 0$
So how these angles:
are geometrically defined?
change during profile shift?
can be calculated?
Answer: In spur gears, the profile angle, pressure angle, and operating pressure angle usually align for standard gears. But with profile shift, things change.
The profile angle, which is the slope of the gear tooth relative to the gear base circle, remains constant even with a profile shift. The pressure angle, the angle at which gear teeth transmit force, and the operating pressure angle, the pressure angle during gear operation, are the ones affected by profile shift.
In a V0 shift (∑x=0), the gears are modified but the center distance remains the same, whereas in a V+V− shift (∑x≠0), the center distance changes. These shifts alter the effective working pressure angle: a positive shift increases it, and a negative shift decreases it.
Calculating these angles post-shift involves gear tooth geometry and the involute function. It's complex and often handled by specialized gear design software, but can be done manually through detailed geometric analysis if necessary. | {
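For reference, the manual calculation can be sketched with the usual closed-form relations (common gear-handbook notation, stated here as an illustration: $z_i$ are tooth counts, $x_i$ the profile-shift coefficients, $\alpha$ the reference pressure angle, $a_{ref}$ and $a_w$ the reference and working centre distances, and $\operatorname{inv}\alpha = \tan\alpha - \alpha$ the involute function):

$$\operatorname{inv}\alpha_w = \operatorname{inv}\alpha + \frac{2(x_1+x_2)\tan\alpha}{z_1+z_2}$$

$$\cos\alpha_w = \frac{a_{ref}}{a_w}\cos\alpha$$

So for a $V_0$ shift ($\sum x = 0$) the working pressure angle $\alpha_w$ stays equal to $\alpha$, while a net positive or negative shift raises or lowers it, with the working centre distance following from the second relation.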
"domain": "engineering.stackexchange",
"id": 5364,
"tags": "gears, terminology, geometry"
} |
Injections and query | Question: I made a class that connects to my DB and inserts some values. Is it secure or how can I protect this further from injections? The object declaration will come from variables with POST from a form, after being validated, ofc. Just want to know if this is secure.
That, and also: should I make a method for every query I need? Or is there a better and more secure way?
<?php
include "db/db_info.php";
/*$DBServer
$DBUser
$DBPass
$DBName*/
class WorkDB {
private $server;
private $user;
private $pass;
private $name;
private $conn;
public function __construct( $server, $user, $pass, $name ) {
$this->server=$server;
$this->user=$user;
$this->pass=$pass;
$this->name=$name;
}
public function tryconn() {
$this->conn = new mysqli( $this->server, $this->user, $this->pass, $this->name );
if ( mysqli_connect_error()) {
die( '*************Connection Error (' . mysqli_connect_errno() . '):'
.mysqli_connect_error() );
}
else echo 'ok';
}
public function query_register( $user, $pass, $email ) {
$stmt = $this->conn->prepare( "INSERT INTO `users` (`username`, `password`, `email`) VALUES (?, ?,?)" );
$stmt->bind_param( "sss", $user, $pass, $email );
$stmt->execute();
$stmt->close();
}
}//end of class
$a=new WorkDB( $DBServer, $DBUser, $DBPass, $DBName );
$a->tryconn();
$a->query_register( 'a', 'b', 'c' );
?>
Answer: What you did well
Storing sensitive information, such as the database connection parameters, separately from the source code is a good idea.
You used parameterized queries with placeholders, which are not vulnerable to SQL injection.
Things to work on
Your indentation is inconsistent.
tryconn() should not echo 'ok' on success, as that pollutes the output.
$a is a horrible name for your database handle. It's conventionally called $db or something like that. | {
"domain": "codereview.stackexchange",
"id": 8028,
"tags": "php, sql, mysql, sql-injection"
} |
Serialization and deserialization of a doubly-linked list with a pointer to a random node in C++ | Question: I tried to serialize a doubly linked list. Can you rate it and tell me what could be improved?
I open the file with fopen(path, "wb") and write all data in binary mode.
#include <string>
#include <unordered_map>
#include <vector>
#include <random>
#include <iostream>
struct ListNode {
ListNode* prev = nullptr;
ListNode* next = nullptr;
ListNode* rand = nullptr;
std::string data;
};
class List {
public:
void Serialize(FILE* file)
{
std::unordered_map<ListNode*, int> uMap;
auto cur = head;
for (int i = 1; i <= count; i++)
{
uMap.insert(std::make_pair(cur, i));
cur = cur->next;
}
std::vector <std::pair<std::string, int>> vNode;
vNode.reserve(count);
cur = head;
while (cur)
{
int randEl{ 0 };
if (cur->rand != nullptr)
{
auto search = uMap.find(cur->rand);
randEl = search->second;
}
vNode.push_back(std::make_pair(cur->data, randEl));
cur = cur->next;
}
fwrite(&count, sizeof(count), 1, file);
for (auto& a: vNode)
{
fwrite(&a.second, sizeof(a.second), 1, file);
int size = a.first.size();
fwrite(&size, sizeof(size), 1, file);
fwrite(a.first.c_str(), 1, size, file);
}
}
void Deserialize(FILE* file)
{
std::unordered_map<int, ListNode*> uMap;
std::vector<int> vRandNode;
int rCount{ 0 };
fread(&rCount, sizeof(rCount), 1, file);
vRandNode.reserve(rCount);
rCount = 1;
for (; rCount <= vRandNode.capacity(); rCount++)
{
int randNode{ 0 };
fread(&randNode, sizeof(randNode), 1, file);
int len{ 0 };
fread(&len, sizeof(len), 1, file);
std::string temp{ "" };
while (len > 0)
{
char ch ;
fread(&ch, sizeof(ch), 1, file);
temp += ch;
len--;
}
Add(temp);
vRandNode.push_back(randNode);
uMap.insert(std::make_pair(rCount, tail));
}
auto cur = head;
for(auto a: vRandNode)
{
if (a != 0)
cur->rand = uMap.find(a)->second;
else
cur->rand = nullptr;
cur = cur->next;
}
}
void Add(std::string str)
{
ListNode* node = new ListNode;
node->data = std::move(str);
count++;
if (head == nullptr)
{
head = node;
}
else
{
tail->next = node;
node->prev = tail;
}
tail = node;
}
List(){}
List(std::string str)
{
Add(std::move(str));
}
~List()
{
while (head)
{
auto temp = head->next;
delete head;
head = temp;
}
}
void ShufleRandom()
{
std::random_device rd;
std::mt19937 gen(rd());
auto cur = head;
while (cur)
{
auto randNode = head;
int randNum = gen() % (count + 1);
for (int i = 1; i < randNum; i++)
{
randNode = randNode->next;
}
if (randNum == 0)
cur->rand = nullptr;
else
cur->rand = randNode;
cur = cur->next;
}
}
void PrintListAndRand() const
{
auto cur = head;
while (cur)
{
std::cout << "===Data->" << cur->data;
if (cur->rand)
std::cout << " | RandData->" << cur->rand->data << "===" << std::endl;
else
std::cout << " | RandData->nullptr===" << std::endl;
cur = cur->next;
}
std::cout << "_______________________________________________________________" << std::endl;
}
private:
ListNode* head = nullptr;
ListNode* tail = nullptr;
int count = 0;
};
Answer: Thanks for posting your code and not being afraid of feedback. The indentation of your code is nice and you're already using a lot of stuff present in the C++ standard library. That's great.
Unclear application
I don't understand how the structure would be beneficial for me. What would I use a linked list pointing to random nodes for?
And while I can add something to the list, it seems there's no way of accessing the elements of the list except saving and printing.
Your list does not act like a C++ container and thus its use is limited. We can't use it with any algorithms of the standard library.
Use of FILE and fopen()
These probably come from <cstdio>, which you should include, following the "include what you use" (IWYU) principle. And, as the name of the library suggests, they are C functions. For C++ you might want to #include <fstream> and use std::ifstream and std::ofstream instead.
Rule of 5
You have implemented a custom destructor, which means that you should think about a copy constructor, copy assignment operator, move constructor and move assignment operator as well.
See: The rule of three/five/zero
Long function Serialize
The function is 30 lines long and from its structure, it looks like this could be split into 3 smaller private functions:
conversion of the list into a Hashmap.
conversion of the Hashmap into a flat list
writing the flat list
Use emplace_back
The line vNode.push_back(std::make_pair(cur->data, randEl)); can be replaced by the shorter and more efficient vNode.emplace_back(cur->data, randEl);.
Long method Deserialize
I expected to see the 3 operations of Serialize() in reverse order, but somehow they are interleaved. At the moment, I can't judge whether that makes your implementation more efficient, but at least it hurts readability.
Try 3 private methods in the reverse order of Serialize.
count not set when deserializing

In Deserialize, why is the variable called rCount instead of count? At first sight, it seemed to me that count is never set, until I found that Add() is called, which increases the counter.
Wrong use of capacity
The loop for (; rCount <= vRandNode.capacity(); rCount++) is incorrect. After vRandNode.reserve(rCount);, the capacity of that vector may be larger than the original requested size. As a consequence, you may read more items than available in the file.
[...] to a value that's greater or equal to new_cap.
Source: https://en.cppreference.com/w/cpp/container/vector/reserve
This brings me to an important topic:
Missing tests
It seems that there are no unit tests for your class. Although I find unit tests quite hard to do in C++ (compared to other languages), I still think they are useful and you should look into that.
Using C++ streams (as suggested before) will actually help you with testing, since you can use a string stream instead of a file stream during the test.
Uniform initialization
std::string temp{ "" }; can just be std::string temp;.
int randNode{ 0 }; can be int randNode{}; and similar.
You can also do that for ListNode: ListNode* prev{}; will use a nullptr.
Missing error checks / basic exception guarantee
The return value of fread() is never checked. Your code assumes that the file always has correct content. If you implement a unit test, also implement one for partial / broken files.
Try to achieve at least Basic exception guarantee level.
Make the loop complete
Instead of
rCount = 1;
for (; rCount <= vRandNode.capacity(); rCount++)
write
for (rCount = 1; rCount <= vRandNode.capacity(); rCount++)
so that the for loop looks like a normal for loop.
Except for capacity(), as discussed before.
Use auto
You already use auto a lot. This applies to the principle "almost always auto" (AAA). But you can use it more, e.g. auto node = new ListNode; instead of ListNode* node = new ListNode;
ShufleRandom
Shuffle is written with 2 Fs. ShuffleRandom() also seems to say the same thing twice; Shuffle() is enough.
Random number generation
I don't know about your requirements on the distribution of the random numbers. Nor did I fully understand why that list has random pointers in the first place. The modulo approach typically doesn't give you a uniform distribution. Have a look at this Stack Overflow question to find out how to generate random numbers in a better way.
gen() gives you an unsigned integer, but randNum is a signed integer. Both should have the same type: auto const randNum = gen() % (count + 1);
I'd also like to see
if (randNum == 0)
{
cur->rand = nullptr;
}
else
{
cur->rand = nthNode(randNum);
}
Const correctness
The PrintListAndRand() function is const, so it seems that you're at least a bit familiar with const correctness. Seeing that, I would consider that Serialize() should also be const. | {
"domain": "codereview.stackexchange",
"id": 44527,
"tags": "c++, serialization"
} |
How is focal length defined for a two-lens system, separated by a distance $d$? | Question: I have found the formula for the effective focal length $f$ of two thin lenses with focal lengths $f_1$ and $f_2$ separated by distance $d$ to be
$$
\frac 1f=\frac 1{f_1}+\frac 1{f_2}-\frac d{f_1f_2}.
$$
However, I can't seem to find how $f$ is defined. Is it the distance from the first lens to the final focal point or the distance from the second lens to the final focal point? Or neither?
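To get a feel for the formula, here is a quick numerical check (a sketch with arbitrary but consistent units; the helper name is mine):

```python
# Sanity check of 1/f = 1/f1 + 1/f2 - d/(f1*f2); numbers are illustrative.
def effective_focal_length(f1, f2, d):
    return 1.0 / (1.0 / f1 + 1.0 / f2 - d / (f1 * f2))

# With d = 0 this reduces to the thin-lenses-in-contact formula:
print(effective_focal_length(10.0, 10.0, 0.0))  # 5.0
# Separating the lenses lengthens the effective focal length:
print(effective_focal_length(10.0, 10.0, 5.0))  # ≈ 6.667
```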
Answer: It is the distance from the image plane to the rear principal plane. You can find the location of this plane by projecting the image ray backwards through the system to where it crosses the projection of the object ray. This is sometimes also referred to as the effective focal length ($v'$) of the system, and is true for both simple as well as complicated systems. The distance from the rear lens to the image plane is simply the back focal distance ($v''$). The difference between $v'$ and $v''$ can be found from the formula:
$\delta = -\frac{d}{n}\frac{f}{f_1} = v'' - v'$, where $n = 1$ in air. | {
"domain": "physics.stackexchange",
"id": 99749,
"tags": "optics, definition, geometric-optics, lenses"
} |
Generating Dinosaur names with Tensorflow RNN | Question: I'm trying to adapt the "Text generation with an RNN" tutorial to generate new dinosaur names from a list of existing ones. For training, the tutorial's text is divided into example character sequences of equal length:
import numpy as np
import tensorflow as tf

# Read, then decode for py2 compat.
text = open(path_to_file, 'rb').read().decode(encoding='utf-8')
# The unique characters in the file
vocab = sorted(set(text))
idx2char = np.array(vocab)
# Creating a mapping from unique characters to indices
char2idx = {u:i for i, u in enumerate(vocab)}
# The maximum length sentence we want for a single input in characters
seq_length = 100
examples_per_epoch = len(text)//(seq_length+1)
# Create training examples / targets
text_as_int = np.array([char2idx[c] for c in text])
char_dataset = tf.data.Dataset.from_tensor_slices(text_as_int)
# Convert to sequences of the same length
sequences = char_dataset.batch(seq_length+1, drop_remainder=True)
# Sequences as text
for item in sequences.take(2):
print("----")
print(repr(''.join(idx2char[item.numpy()])))
Output:
----
'First Citizen:\nBefore we proceed any further, hear me speak.\n\nAll:\nSpeak, speak.\n\nFirst Citizen:\nYou '
----
'are all resolved rather to die than to famish?\n\nAll:\nResolved. resolved.\n\nFirst Citizen:\nFirst, you k'
My problem differs from tutorial in that I have a list of names of different length instead of monolith of text:
aachenosaurus
aardonyx
abdallahsaurus
abelisaurus
abrictosaurus
abrosaurus
abydosaurus
acanthopholis
In my case the character sequences are the names. Since I can't train an RNN on sequences of different lengths (please correct me if I am wrong here), I need to pad all my names with spaces to the length of the longest name, which is 26.
My longest name is lisboasaurusliubangosaurus, so, for example, aardonyx should be padded as:
"lisboasaurusliubangosaurus"
"aardonyx "
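The space-padding itself I can produce in plain Python, for example with str.ljust:

```python
# Pad each name with trailing spaces to the length of the longest one.
names = ["aachenosaurus", "aardonyx", "lisboasaurusliubangosaurus"]
max_len = max(len(name) for name in names)  # 26 for this list
padded = [name.ljust(max_len) for name in names]
assert all(len(p) == max_len for p in padded)
print(repr(padded[1]))  # 'aardonyx' followed by 18 spaces
```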
I tried to pad my sequences with:
# Convert individual characters to sequences of the desired size.
sequences = char_dataset.padded_batch(seq_length+1, padded_shapes=seq_length, drop_remainder=True)
Which results in error:
ValueError: The padded shape (26,) is not compatible with the corresponding input component shape ().
Questions:
Is it possible to train Tensorflow RNN with sequences of variable length?
How to pad short sequences?
Thanks!
Answer: Check out this answer:
https://stackoverflow.com/a/60230236/12642230
Alternative Solution:
Tensorflow provides a method pad_sequences() to do that:
https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/sequence/pad_sequences
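To see concretely what that method produces, here is a plain-Python sketch of 'post' padding (toy integer encodings; TensorFlow is not required for the illustration):

```python
# A plain-Python sketch of what pad_sequences(maxlen=..., padding='post')
# produces: integer-encoded sequences truncated/extended to one length,
# with the pad value appended at the end.
def pad_post(seqs, maxlen, value=0):
    out = []
    for s in seqs:
        s = list(s[:maxlen])                 # truncate if too long
        out.append(s + [value] * (maxlen - len(s)))  # pad at the end
    return out

encoded = [[1, 2, 3], [4, 5, 6, 7, 8]]  # toy char-to-index encodings
print(pad_post(encoded, maxlen=6))
# [[1, 2, 3, 0, 0, 0], [4, 5, 6, 7, 8, 0]]
```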
The default value of padding is 'pre'; you might want to change that to 'post' to do what you want, along with providing the maximum length, which is 26 in your case. You would also need to add a special padding character to your dictionary of characters to indices, and use its index as the padding value for the method. | {
"domain": "datascience.stackexchange",
"id": 7580,
"tags": "python, tensorflow, rnn"
} |
How to implement parametric iSWAP gate in Qiskit? | Question: I'm trying to implement the parametric $\text{iSWAP}$ gate, also known as $\text{XY}(\theta)$, in Qiskit.
$$
\text{XY}(\theta)=
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & \cos{\theta/2} & i\sin{\theta/2} & 0 \\
0 & i\sin{\theta/2} & \cos{\theta/2} & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}.
$$
Furthermore, once I've implemented this gate I would like to use it in order to decompose quantum circuits in terms of single-qubit gates and $\text{XY}(\theta)$ as the only two-qubit gate.
What are the steps that I need to do in order to achieve this?
Cheers!
Answer: Update
Qiskit 0.35 introduced a new gate XXPlusYYGate.
$$R_{XX+YY}(\theta, \beta)\ q_0, q_1 =
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & \cos\left(\frac{\theta}{2}\right) & i\sin\left(\frac{\theta}{2}\right)e^{i\beta} & 0 \\
0 & i\sin\left(\frac{\theta}{2}\right)e^{-i\beta} & \cos\left(\frac{\theta}{2}\right) & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}$$
So, you can now add parameterized $\text{XY}$ to your circuit as follows:
from qiskit import QuantumCircuit
from qiskit.circuit import Parameter
from qiskit.circuit.library import XXPlusYYGate

theta = Parameter('θ')
circ = QuantumCircuit(2)
circ.append(XXPlusYYGate(theta, 0), [0, 1])
Original Answer
For the first part of your question, we have
$$XY(\theta) = \exp(-i {\frac{\theta}{2}} (X{\otimes}X + Y{\otimes}Y))$$
And since $X{\otimes}X$ and $Y{\otimes}Y$ commute, we can write it as
$$XY(\theta) = \exp(-i {\frac{\theta}{2}} X{\otimes}X) \exp(-i {\frac{\theta}{2}} Y{\otimes}Y)$$
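Both facts used here, that $X{\otimes}X$ and $Y{\otimes}Y$ commute and that the exponential therefore factorizes, are easy to verify numerically. This NumPy sketch is independent of Qiskit (the helper expm_herm is mine):

```python
import numpy as np

# Verify that X⊗X and Y⊗Y commute, so exp(-iθ/2 (XX+YY)) splits into
# exp(-iθ/2 XX) · exp(-iθ/2 YY); theta's value is arbitrary.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
XX, YY = np.kron(X, X), np.kron(Y, Y)

def expm_herm(H, t):
    # exp(-i t H) for Hermitian H via eigendecomposition
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * t * w)) @ V.conj().T

theta = 0.7
lhs = expm_herm(XX + YY, theta / 2)
rhs = expm_herm(XX, theta / 2) @ expm_herm(YY, theta / 2)
print(np.allclose(XX @ YY, YY @ XX), np.allclose(lhs, rhs))  # True True
```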
Qiskit already has these two gates:
$$R_{XX}(\theta) = \exp(-i {\frac{\theta}{2}} X{\otimes}X)$$
And,
$$R_{YY}(\theta) = \exp(-i {\frac{\theta}{2}} Y{\otimes}Y)$$
Hence, the implementation of $XY(\theta)$ as a parameterized gate in Qiskit will be as simple as
from qiskit import QuantumCircuit
from qiskit.circuit import Parameter
theta = Parameter('θ')
circuit = QuantumCircuit(2)
circuit.rxx(theta, 0, 1)
circuit.ryy(theta, 0, 1)
param_iswap = circuit.to_gate()
Another Solution
If you want to use more basic gates than rxx and ryy, you can use Qiskit's Operator Flow:
from qiskit.circuit import Parameter
from qiskit.opflow import X, Y, PauliTrotterEvolution, Suzuki

H = 0.5 * ((X ^ X) + (Y ^ Y))
theta = Parameter('θ')
evolution_op = (theta * H).exp_i() # exp(-iθH)
trotterized_op = PauliTrotterEvolution(trotter_mode = Suzuki(order = 1, reps = 1)).convert(evolution_op)
circuit = trotterized_op.to_circuit()
circuit.draw('mpl')
The composition:
And as before
param_iswap = circuit.to_gate()
For the second part of your question, I think the best answer you can have is the one mentioned in the comments by @epelaaez, as it is recent and from a member of Qiskit's development team. | {
"domain": "quantumcomputing.stackexchange",
"id": 2886,
"tags": "qiskit, programming, quantum-gate"
} |