| anchor | positive | source |
|---|---|---|
bilateral filter for color images | Question: Fast Bilateral Filter
I came across the above code of the bilateral filter for grayscale images. I tried to implement it for color images by splitting the image into BGR channels and applying the filter to each channel separately, but I didn't get the desired output.
Could anybody help?
import numpy as np
from scipy import signal, interpolate
def bilateral(image, sigmaspatial, sigmarange, samplespatial=None, samplerange=None):
"""
:param image: np.array
:param sigmaspatial: int
:param sigmarange: int
:param samplespatial: int || None
:param samplerange: int || None
:return: np.array
Note that sigma values must be integers.
The 'image' np.array must be given in grayscale. It is suggested to use OpenCV.
"""
height = image.shape[0]
width = image.shape[1]
samplespatial = sigmaspatial if (samplespatial is None) else samplespatial
samplerange = sigmarange if (samplerange is None) else samplerange
flatimage = image.flatten()
edgemin = np.amin(flatimage)
edgemax = np.amax(flatimage)
edgedelta = edgemax - edgemin
derivedspatial = sigmaspatial / samplespatial
derivedrange = sigmarange / samplerange
xypadding = round(2 * derivedspatial + 1)
zpadding = round(2 * derivedrange + 1)
samplewidth = int(round((width - 1) / samplespatial) + 1 + 2 * xypadding)
sampleheight = int(round((height - 1) / samplespatial) + 1 + 2 * xypadding)
sampledepth = int(round(edgedelta / samplerange) + 1 + 2 * zpadding)
dataflat = np.zeros(sampleheight * samplewidth * sampledepth)
(ygrid, xgrid) = np.meshgrid(range(width), range(height))
dimx = np.around(xgrid / samplespatial) + xypadding
dimy = np.around(ygrid / samplespatial) + xypadding
dimz = np.around((image - edgemin) / samplerange) + zpadding
flatx = dimx.flatten()
flaty = dimy.flatten()
flatz = dimz.flatten()
dim = flatz + flaty * sampledepth + flatx * samplewidth * sampledepth
dim = np.array(dim, dtype=int)
dataflat[dim] = flatimage
data = dataflat.reshape(sampleheight, samplewidth, sampledepth)
weights = np.array(data, dtype=bool)
kerneldim = derivedspatial * 2 + 1
kerneldep = 2 * derivedrange * 2 + 1
halfkerneldim = round(kerneldim / 2)
halfkerneldep = round(kerneldep / 2)
(gridx, gridy, gridz) = np.meshgrid(range(int(kerneldim)), range(int(kerneldim)), range(int(kerneldep)))
gridx -= int(halfkerneldim)
gridy -= int(halfkerneldim)
gridz -= int(halfkerneldep)
gridsqr = ((gridx * gridx + gridy * gridy) / (derivedspatial * derivedspatial)) \
+ ((gridz * gridz) / (derivedrange * derivedrange))
kernel = np.exp(-0.5 * gridsqr)
blurdata = signal.fftconvolve(data, kernel, mode='same')
blurweights = signal.fftconvolve(weights, kernel, mode='same')
blurweights = np.where(blurweights == 0, -2, blurweights)
normalblurdata = blurdata / blurweights
normalblurdata = np.where(blurweights < -1, 0, normalblurdata)
(ygrid, xgrid) = np.meshgrid(range(width), range(height))
dimx = (xgrid / samplespatial) + xypadding
dimy = (ygrid / samplespatial) + xypadding
dimz = (image - edgemin) / samplerange + zpadding
return interpolate.interpn((range(normalblurdata.shape[0]), range(normalblurdata.shape[1]),
range(normalblurdata.shape[2])), normalblurdata, (dimx, dimy, dimz))
Answer: It seems to work for me.
If I give it the input image on the left and use parameters sigmaspatial of 20 and sigmarange of 200, I get the output on the right.
The trick was to make sure the output image is a) normalized and b) the right type (I went for int).
Python code below
import cv2
import numpy as np
import matplotlib.pyplot as plt
# read image
image = cv2.imread('example.jpg', cv2.IMREAD_COLOR)
image = image[1:1440:20,1:1440:20,:]
RGB_img = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.figure(1)
plt.imshow(RGB_img)
print("Hello")
x0 = bilateral(RGB_img[:,:,0], 20, 200)
print("x1 done")
x1 = bilateral(RGB_img[:,:,1], 20, 200)
print("x2 done")
x2 = bilateral(RGB_img[:,:,2], 20, 200)
fully_processed = cv2.merge([x0,x1,x2])
print("Done")
normalizedImg = np.zeros((1440, 1440, 3))
normalizedImg = cv2.normalize(fully_processed, normalizedImg, 0, 255, cv2.NORM_MINMAX)
plt.figure(2)
plt.imshow(normalizedImg.astype(int)) | {
"domain": "dsp.stackexchange",
"id": 10673,
"tags": "image-processing, filters, opencv"
} |
When deriving the power spectral density of stochastic processes, why does taking an expectation allow the $T\rightarrow\infty$ limit to be taken? | Question: I am following the arguments presented in the paper AN-255 Power Spectra Estimation, from Texas Instruments, to learn how to derive the power spectral density for a stationary stochastic process, and do not understand one of their steps. I am looking at page 4 only of the document for the purposes of this question.
The derivation is done by considering only a single realisation (sample function), denoted by $x(t)$, of the stochastic process. Then they do the usual thing of only considering a truncated version of this signal $x_T(t)$, which allows the Fourier transform of it to exist, denoted by $X_T(f)$ (because one of the requirements for FT to exist is absolute integrability, which isn't satisfied for random signals).
I am confused with how they justify taking the limit $T\rightarrow\infty$ in the right hand column of page 4, shown here:
[screenshots of the derivation steps from page 4 of the paper]
I understand that you cannot take the limit $T\rightarrow\infty$ directly on the truncated sample function's Fourier transform $X_T(f)$, because it is not defined in that limit (indeed, that was the reason for doing the truncation in the first place).
I also understand that $X_T(f)$ results from a single realisation only of the stochastic process, and if you were to repeat the experiment again you would obtain a different $X_T(f)$. As such, this function is actually a random variable itself (at each frequency $f$), and therefore it makes sense to do an ensemble average over many realisations, and consequently motivates taking an expectation.
What I don't understand is why the expectation solves the problem, and allows you to take the limit $T\rightarrow\infty$ after you have taken the expectation:
[screenshot of the expectation and limit expression from the paper]
What is it about the expectation of $X_T(f)$ that means the limit $T\rightarrow\infty$ can then be taken? I'm having a hard time seeing it... Could someone spell this out plainly for me?
Answer: It is not the expectation operator that makes sure that the limit exists. The expectation just results in an ensemble average, which we need to obtain a deterministic function $S(f)$ for the power spectrum.
Assume we're given a deterministic power signal $x(t)$, i.e., a signal with finite non-zero power, and, consequently, infinite energy. Its Fourier transform generally doesn't exist. We can define a truncated Fourier transform
$$X_T(\omega)=\int_{-T}^Tx(t)e^{-j\omega t}dt\tag{1}$$
By assumption, $\lim_{T\to\infty}X_T(\omega)$ doesn't exist. However, the limit
$$\lim_{T\to\infty}\frac{\left|X_T(\omega)\right|^2}{2T}\tag{2}$$
exists because of the finite power assumption, and this is also the way the power spectrum is defined for such deterministic power signals.
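A quick numerical sketch of the random-signal case discussed next (my own illustration, using discrete-time white noise instead of the paper's continuous-time setup): a single realisation's normalized periodogram never settles down as the record length grows - its standard deviation stays on the order of its mean - but the ensemble average sits at the flat PSD value for every record length. That is what taking the expectation before the limit buys you.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 1.0  # true (flat) power spectral density of the white noise

def periodogram_at_bin(N, n_trials, k=5):
    """|X_N|^2 / N at one interior FFT bin, for n_trials independent realisations."""
    x = rng.normal(0.0, np.sqrt(sigma2), size=(n_trials, N))
    X = np.fft.rfft(x, axis=1)
    return np.abs(X[:, k]) ** 2 / N

for N in (256, 1024, 4096):
    p = periodogram_at_bin(N, 2000)
    # ensemble mean stays near sigma2 for every N, while the per-realisation
    # spread stays on the order of sigma2 -- no pointwise convergence
    print(N, round(p.mean(), 2), round(p.std(), 2))
```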
If $x(t)$ is modeled as a random signal, we only need to modify $(2)$ by taking the expectation of the numerator to obtain the power spectrum of $x(t)$. | {
"domain": "dsp.stackexchange",
"id": 8595,
"tags": "fourier-transform, continuous-signals, power-spectral-density, random-process, stochastic"
} |
Doubts on representations of the Poincaré group and QFT | Question: I am studying the Poincaré group and encountered the term massless representations of the Poincaré group. I know the Poincaré group is studied by studying the little group of various momenta, massless and massive. Do massless representations of the Poincaré group mean representations of the little group for $p^{\mu} = (1,1,0,0)$ (the first coordinate is the energy)?
This is the second question: if we are given a Lagrangian for a field (say the electromagnetic field), then we can study the representations of the relevant field. To study these representations, we can study the representations of the little group for the various cases and, using the method of induced representations, obtain the full representations. Did I get the picture right?
My question is
1. We know one of the Casimir elements of the Poincaré group is $p^{\mu}p_{\mu} = m^2$. How does one identify $m$ with the mass of the field we are studying?
2. If the Lagrangian involves a mass term (like in the Proca equation), how do the representations change? I don't see any reason why the representations should change because of the mass. (A relevant explanation for the scalar field is enough for me.)
Answer: You are asking how to connect the "mass term" for the field in the Lagrangian with the "mass value" for the particle state given by the irreducible unitary Poincaré representation it transforms in.
The connection is through the four-momentum operator. Doing canonical quantization starting from the Lagrangian and without ever thinking about any representations on particles, you can show that the four-momentum operator for a scalar field can be expressed in terms of creation/annihilation operators as
$$ P^\mu = \int p^\mu a^\dagger(\vec p) a(\vec p) \frac{\mathrm{d}^3 p}{(2\pi)^3}$$
so that
$$ P^2 = P_\mu P^\mu = \int \frac{\mathrm{d}^3 p}{(2\pi)^3}\frac{\mathrm{d}^3 q}{(2\pi)^3}\left(q_\mu p^\mu a^\dagger(\vec q)a(\vec q)a^\dagger(\vec p)a(\vec p)\right).$$
Using $a(\vec q) a^\dagger(\vec p) = (2\pi)^3 \delta(\vec p - \vec q) + a^\dagger(\vec p)a(\vec q)$, we have that
$$ P^2 = \int \frac{\mathrm{d}^3 p}{(2\pi)^3}\left(m^2 a^\dagger(\vec p)a(\vec p) + \text{term involving two annihilators on the right}\right)$$
and applying this operator to a one-particle state corresponding to the field (i.e. created by $a^\dagger$) therefore yields $m^2$ since $\int a^\dagger a$ is just a number operator.
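To spell out the delta-function step above (my own filling-in; the one-particle momenta are on the mass shell, $p^0=\sqrt{\vec p^{\,2}+m^2}$): the $(2\pi)^3\delta(\vec p-\vec q)$ term collapses the $\vec q$ integral,

$$\int \frac{\mathrm{d}^3 q}{(2\pi)^3}\, q_\mu p^\mu\,(2\pi)^3\delta(\vec p-\vec q)\, a^\dagger(\vec q)a(\vec p) = p_\mu p^\mu\, a^\dagger(\vec p)a(\vec p) = m^2\, a^\dagger(\vec p)a(\vec p),$$

which is where the $m^2$ in the integrand above comes from.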
Therefore, the mass value from the Lagrangian is indeed the Casimir value that appears in the Poincaré representation for a one-particle state. | {
"domain": "physics.stackexchange",
"id": 97844,
"tags": "quantum-field-theory, group-representations, poincare-symmetry"
} |
How inflation creates a universe from nothing? | Question: I have a basic, mostly purely conceptual understanding of Quantum Field Theory, and after lots of Youtube (thanks PBS Spacetime!) I have an idea of how inflation works to turn the vacuum into a universe. Please correct me if I'm wrong.
There are quantum fields present everywhere in the universe at once. Excitations on those fields, caused by energy, are the vibrations that we perceive as particles
A field at its lowest possible energy state still has quantum fluctuations since, due to the laws of Quantum Mechanics, it is impossible for the field to have precisely zero energy. These quantum fluctuations are small vibrations in the field that usually quickly form and cancel each other out, and they can be thought of as (although this is mostly to aid in visualization, since they don't have the exact same properties) 'virtual particles' popping in and out of existence.
At the event horizon of a black hole, some of the vibrations are 'cut off' because of the one-way boundary that is the event horizon, so the vibrations that would have been cancelled out by these aren't, and they are perceived as 'real particles', what we call Hawking Radiation.
So my understanding is this. An early universe is in the vacuum state, with only quantum fluctuations permeating the cosmos. The process of inflation causes points that were previously very close to suddenly become extremely far apart, going from a distance in the quantum scale of things to lightyears apart in a tiny fraction of a second. Since points of the quantum field which were very close together are suddenly way too far away to communicate, this effectively 'cuts off' the vibrations in one spot from the rest of the universe, turning those quantum fluctuations into 'real particles'
Is this correct?
Answer: I think it is not possible to give a clear "yes or no" answer to your question, because it is a question about a research area where there remain many models which do not agree with one another, and we simply don't know which if any are right. The research area being inflation theory.
Inflation or something like it may have happened, or it may not have happened. The biggest unknowns here are to do with entropy. Attempts to model the early universe in detail typically invoke (without always realising that they have done so) extremely special states of affairs. This makes it hard to assess whether a given theory has not so much explained something as shown that it would be the outcome of something even more inexplicable. Inflation does not escape this problem.
I think the main message here is that something rather odd is happening in our day in the interaction between research science and the wider public. The distinction between carefully constructed and tested ideas and mere speculation is blurred in many popular books, and YouTube channels are even worse. In elementary particle physics, progress over the last 80 years has required a partnership between experiment and theory. There are occasional examples where theoretical understanding put in place something well out of the range of experiment but which proved to be correct (the Higgs mechanism being a good example). But there are also many examples of cases where experiments yielded surprises. Inflation is an attempt to grapple with physics at the energy scale $\ge 10^{15}$ GeV. Experiments have accessed up to $10^4$ GeV.
I think the best way to respond to your question is to encourage continuing interest in these areas, but also to encourage a greater role for the attitude "well we really don't know yet".
But one thing we do know is that every scientific model ever put forward for anything has invoked a continuity between one thing and another, between a prior situation and a consequent situation. The idea that physics suggests that something could come from nothing is simply a misdirection, a deliberate misuse of words, presumably in an effort to gain readers or something like that. I mention this simply because the title of your question suggests that you may have been misdirected into this sort of juggling with the meanings of words.
Among the authors well-placed to comment here, and who does a reasonably balanced job I think, is Sean Carroll.
Added edit to answer specific point at the end of the question.
Either with or without inflation, space is reckoned to have started from an early state presumably described by quantum gravity, and it grew extremely fast at early times. This resulted in energy density fluctuations being present on pretty much all distance scales. This is modeled theoretically by using quantum theory to provide a value for the standard deviation of the distribution, and then subsequently treating that distribution as a classical field having fluctuations over space and time with the given standard deviation. The move here from quantum to classical is rather glossed-over in the research literature; it is connected to the subtleties involved in the process called symmetry-breaking.
(What is spontaneous symmetry breaking in QUANTUM systems?)
Anyway the main point for your question is that this is not like Hawking radiation. The fluctuations are already reckoned to be classical, or are treated as classical, whether or not there was a subsequent inflation to stretch them out. (I don't work directly in this research area; I got the above information from a book by Hobson, Efstathiou and Lasenby, and from various review and other papers). | {
"domain": "physics.stackexchange",
"id": 59202,
"tags": "quantum-field-theory, cosmology, big-bang, cosmological-inflation"
} |
Operating on a list of files using recursion | Question: I'm starting just now with CoffeeScript and I found out that I can't solve problems like looping and recursion with just one line. I would like to improve the code that I just wrote using built-in CoffeeScript helpers.
'use strict'
file_system = require 'fs'
Types = require './types'
Extract =
each_file: (index = 0) ->
length = @files.length
file = @files[index]
if index < length
Types.read file, @next.bind @
@each_file index + 1
next: (file) ->
@result.push file
if @result.length == @files.length
return @cb.clean.call @cb, @result
@each_file()
return
init: (files, cb) ->
@files = files
@cb = cb
@result = []
@each_file()
return
module.exports = Extract
The script speaks for itself; I'm using a recursive function to send files to Types.read, and I store the result in the result array.
Answer: Fake for-loops and comprehensions
At the beginning of your post, you say that you can't always achieve everything with looping in one line in CoffeeScript. When you say that, I assume you are talking about the each_file:
each_file: (index = 0) ->
length = @files.length
file = @files[index]
if index < length
Types.read file, @next.bind @
@each_file index + 1
This is kinda ugly right now because you seem to be using a method to fake a for-loop. Luckily, with CoffeeScript's comprehensions, we can turn this into a simple 1-line expression.
First, we need to be iterating through all of the @files. That can be written simply like this:
Types.read(file, @next.bind this) for file in @files
This is a comprehension that will go through all of the @files and substitute the file in the Types.read call with the current file it's looping over.
Now that we have this, you can remove that
file = @files[index]
line, along with that single index parameter. Why? Because, now that we have this loop, we no longer have any of that method-recursion-fake-for-loop-idness.
Along with those, since this is now a comprehension that loops through all the values in an array, we don't need to do any checking to make sure the index is less than the length.
if index < length
Can be removed. Guess what your method looks like now?
each_file: () ->
Types.read(file, @next.bind @) for file in @files
The above method will do exactly what it was doing before: it will go through all of the files in @files and pass each one into Types.read along with @next.bind @.
What's different now?
It's much shorter and much simpler than what you were doing before. As I already stated, you seemed to be reinventing the for loop with that recursion you were doing.
It's more idiomatic. CoffeeScript has those comprehensions so you can simplify long tasks into a few small and readable lines.
Misc.
init: (files, cb) ->
@files = files
@cb = cb
This can be shortened to this:
init: (@files, @cb) ->
The CoffeeScript compiler treats parameters with a @ before them as a name of a property to set to the parameter. For example, @files will become:
this.files = files
in the method body. | {
"domain": "codereview.stackexchange",
"id": 16165,
"tags": "recursion, node.js, file, coffeescript, callback"
} |
Getting error code "string" from ArmNavigationErrorCodes for MoveArm action | Question:
The MoveArm action returns an error_code in its result:
more MoveArmResult.msg
#An error code reflecting what went wrong
motion_planning_msgs/ArmNavigationErrorCodes error_code
which I can access then with:
result->error_code.val;
Is there a way to get a string (which is more meaningful than an int) from this error code? Something like the ENUM defined here:
/opt/ros/cturtle/stacks/motion_planning_common/motion_planning_msgs/msg/ArmNavigationErrorCodes.msg
# overall behavior
int32 PLANNING_FAILED=-1
int32 SUCCESS=1
int32 TIMED_OUT=-2
# start state errors
int32 START_STATE_IN_COLLISION=-3
int32 START_STATE_VIOLATES_PATH_CONSTRAINTS=-4
Thanks in advance,
Originally posted by felix on ROS Answers with karma: 71 on 2011-02-27
Post score: 0
Answer:
If you are in unstable and using C++, there is a convenience function in motion_planning_msgs that does this for you:
#include <motion_planning_msgs/convert_messages.h>
....
std::string err_string = motion_planning_msgs::armNavigationErrorCodeToString(error_code);
....
Originally posted by Sachin Chitta with karma: 1304 on 2011-02-28
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Sachin Chitta on 2011-03-01:
This will go into the release of the arm_navigation stacks against diamondback (which is essentially a copy of unstable).
Comment by felix on 2011-02-28:
Thanks. I guess I am not in unstable, it is not defined in my motion_planning_msgs/convert_messages.h . I am using the ubuntu virtual box appliance I grabbed from WG site a while ago (but I am up to date). Will this make it in the stable version sometime? | {
"domain": "robotics.stackexchange",
"id": 4886,
"tags": "ros, pr2-arm-navigation"
} |
Flow down an incline - Understanding boundary conditions | Question: After working with some problems regarding flow, I came across a similar problem to the one presented here:
In solving the problem, we assume a laminar flow in steady state.
When using Navier-Stokes equations, to fully determine the velocity profile, I get a differential equation of order two. Meaning I need two boundary conditions in order to fully describe the velocity profile. First of all, if we assume the incline is stationary, no-slip forces us to set $v_x(0) = 0$ where $v_x$ is the velocity in the $x$-direction and a function of $y$. Now, since I've seen examples of solving this before, they also assumed that $v_x'(h)=0$, and it was somehow related to the shear stress.
First, I'd like to understand how certain partial derivatives of the velocity with combinations of different directions relate to shear stresses (i.e. derivative of $v_x$ with respect to $y$ or $v_y$ with respect to $x$).
Secondly, I want to understand why this boundary conditions holds. I assume that it has to do with friction and viscosity of the fluid. Since no fluid is above that region, we will have no shear force from any layer.
Answer: Recall that the viscosity stress tensor for an incompressible Newtonian fluid is (by definition):
$$
\sigma=\eta\left(\nabla\otimes v+(\nabla\otimes v)^T\right)
$$
You can derive this from a microscopic theory (transport phenomena) or phenomenologically (assume linearity, isotropy, and dependence only on first-order spatial derivatives).
Btw, the resulting viscous shear forces at the boundary of a region $\Omega$ is:
$$
F=\oint_{\partial\Omega} \sigma \cdot \vec{n}\, d^2x
$$
You obtain the Navier-Stokes equations by considering momentum balance in an infinitesimal domain or, equivalently, by using Stokes' formula.
In your case, using the equations gives the shear stress at the free surface:
$$
\sigma_{xy}=\eta(\partial_xv_y+\partial_yv_x)
$$
Using the laminar hypothesis, this simplifies to:
$$
\sigma_{xy}=\eta v_x'
$$
So the boundary condition of the free surface is equivalent to:
$$
\sigma_{xy}(y=h)=0
$$
As you’ve written, this is justified physically. It essentially assumes that the air is inviscid so cannot apply a shear stress at the interface.
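For completeness (my own addition, assuming the usual setup: incline angle $\theta$, film thickness $h$, and gravity component $\rho g\sin\theta$ driving the flow), integrating $\eta v_x'' = -\rho g\sin\theta$ with the two boundary conditions $v_x(0)=0$ and $v_x'(h)=0$ gives the familiar half-parabolic profile

$$v_x(y)=\frac{\rho g\sin\theta}{2\eta}\,y\,(2h-y),$$

with the maximum velocity at the free surface $y=h$, as expected.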
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 95272,
"tags": "fluid-dynamics, flow, boundary-conditions, navier-stokes"
} |
How does capacitance work? | Question: I have a circuit with an AC source, a capacitor, and a resistor, all in series. I find that the potential difference between the capacitor leads begins to change after some instants, as it should. But my question is: if the resistance is very high, say infinite, the capacitor has no voltage across it per se, but it experiences a high voltage as a whole with respect to ground. What I don't understand is why there is no electric field between the plates.
I'm sorry if the question isn't clear, but I really cannot understand the implications of the definition of "potential".
Answer: If you choose $R\gg 1/(\omega C)$, then most of the voltage will drop across the resistor. You may now argue that, for any voltage to drop across a resistor, there needs to be some current in it. And there is, in fact, such a current.
Imagine the capacitor plates as two antennas. Yes, pretty strange antennas, but antennas they are. What you essentially do is: you send an AC signal from one of these antennas over to the other one. If the receiver antenna has no significant load to feed (that is, if $R$ is big) then its potential will pretty much exactly follow the potential of the sender antenna. In other words,
All the voltage you send into the circuitry will end up at the resistor.
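A quick phasor sanity check of this (my own illustration with made-up component values): in a series R-C divider, $V_C = V/(1+j\omega RC)$, which goes to zero as $R\to\infty$.

```python
import numpy as np

# Made-up illustration values; only the trend with R matters.
V = 1.0               # source amplitude
w = 2 * np.pi * 50.0  # 50 Hz
C = 1e-6              # 1 uF

def cap_voltage(R):
    # series R-C divider: V_C = V * Z_C / (Z_R + Z_C) = V / (1 + j*w*R*C)
    return V / (1 + 1j * w * R * C)

for R in (1e3, 1e6, 1e9):
    print(f"R = {R:.0e} ohm -> |V_C| = {abs(cap_voltage(R)):.3g}")
```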
The voltage along the capacitor will be very small. | {
"domain": "physics.stackexchange",
"id": 1208,
"tags": "electromagnetism, homework-and-exercises"
} |
Difference between Fourier and Laplace transforms in analyzing data | Question: I have a set of displacement-time graphs from an experiment to convert to the frequency domain. Both the Fourier and Laplace transforms seem to do this, so what's the difference between them (difference in end result, not the mathematical difference)? Also, is there even a way to perform a Laplace transform and output a graph?
Answer: The Fourier transform will better represent your data if there are oscillations in the displacement- time graphs and you want the period of those oscillations. The Laplace transform will better represent your data if it is made up of decaying exponentials and you want to know decay rates and other transient behaviors of your response. | {
"domain": "physics.stackexchange",
"id": 45171,
"tags": "computational-physics, fourier-transform, data-analysis, laplace-transform"
} |
Get the last activity per user per day in a dataframe | Question: I have many users. Each time a user uses their smartphone, the usage is registered. I am determining the last time each user used their smartphone each day.
Additionally, smartphone usage from 18:00 to 06:00 the next day should be counted as an entry on the previous day. I have created a dummy example.
I did the following:
First subtract the number of hours.
Sort the data frame based on user and date time.
Get the last row.
Is there a more efficient approach to this? Are there other tips I can follow to improve my code?
import datetime
import pandas as pd

df_example = {'id': [1,1,1,1,1],
'activity': [datetime.datetime(2019, 12, 1, 19, 30, 1),
datetime.datetime(2019, 12, 1, 20, 22, 2),
datetime.datetime(2019, 12, 2, 2, 13, 2),
datetime.datetime(2019, 12, 3, 19, 12, 2),
datetime.datetime(2019, 12, 3, 21, 3, 1)
]}
df_example = pd.DataFrame(df_example, columns = ['id', 'activity'])
df_example['activity'] = df_example['activity'] - datetime.timedelta(hours=6, minutes=0)
df_example['date'] = df_example['activity'].apply(lambda x: x.date())
df_example.sort_values(by=['id', 'activity'])
df_example.groupby(['id', 'date']).tail(1)
Answer: Instead of using tail, if you only need one item there are the first and last methods, which do exactly what you think they would do with grouped dataframes:
df_example.groupby(['id', 'date']).last()
I doubt there is a faster way to create the activity column. And you have to create a new one because of your requirement about which day an activity counts towards.
But you can speed up getting the date. Using apply with a lambda is about the second slowest way to work with pandas (manual Python for loops are slower). Instead use the vectorized datetime functions:
df_example['date'] = df_example['activity'].dt.date
Potential bug:
Note that df_example.sort_values(by=['id', 'activity']) returns the sorted dataframe, it does not modify it inplace. Either assign it back to df_example, or use inplace=True.
The same is true for groupby, you probably want to assign the result to a variable as well in order to do something else with it afterwards.
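Putting the pieces together, a minimal end-to-end sketch (the data is copied from the question; the original activity column is kept unshifted so the reported timestamps are the real ones):

```python
import datetime
import pandas as pd

# Data reproduced from the question.
df = pd.DataFrame({
    'id': [1, 1, 1, 1, 1],
    'activity': [datetime.datetime(2019, 12, 1, 19, 30, 1),
                 datetime.datetime(2019, 12, 1, 20, 22, 2),
                 datetime.datetime(2019, 12, 2, 2, 13, 2),
                 datetime.datetime(2019, 12, 3, 19, 12, 2),
                 datetime.datetime(2019, 12, 3, 21, 3, 1)]})

shifted = df['activity'] - datetime.timedelta(hours=6)  # 18:00-06:00 -> previous day
df['date'] = shifted.dt.date                            # vectorized, no apply
df = df.sort_values(by=['id', 'activity'])              # note: assigned back
last_per_day = df.groupby(['id', 'date'], as_index=False).last()
print(last_per_day)
```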
Python has an official style-guide, PEP8. While it recommends using spaces around the = operator when using it for assignment, it recommends using no spaces when using it for keyword arguments. | {
"domain": "codereview.stackexchange",
"id": 38547,
"tags": "python, python-3.x, numpy, pandas"
} |
What is a 'dynamic entity' in computer science? | Question: I am doing some reading into computer science and how the computing system works in order to be ready for my 1st year at university doing a Computer Science degree. I came across a term, 'dynamic entity', within the definition of a computer system that I tried looking up on the internet. The problem is, I couldn't find a definition that could explain simply what it meant. If there is someone who could explain this term for me in a simple, easy to understand way, I would be very grateful. Thank you.
The link I got the term from: https://books.google.co.uk/books?id=3ls6K2cJW_0C&printsec=frontcover&dq=computer+science+easy+reading&hl=en&sa=X&ved=0ahUKEwjz2r_cuIfkAhUxonEKHWBZAu8Q6AEIKzAA#v=onepage&q&f=true
The term can be found on page 4 of the book, where they explain what a computing system is.
Answer: There is no scientific term "dynamic entity", so the meaning of this pair of words can be stretched quite far. In this particular case they could have written something like "A computing system ... is a dynamic system", but apparently they didn't like the repetition, so they decided to use the fuzzy word "entity". Also, there is a mathematical term, Dynamical System, which doesn't have anything in common with what they are trying to say.
Essentially they mean that there are a lot of parts in a real computing system which can change with time - hardware and software can be installed and uninstalled, software can be configured in many different ways, data comes into the system and goes out of it, and so on. Sometimes even the users and the people supporting a complex computing system can be considered part of it.
"domain": "cs.stackexchange",
"id": 14506,
"tags": "operating-systems"
} |
Cost function for LTI system identification | Question: I am currently reading and trying to understand a paper (Kulkarni and Colburn, 2004) that utilizes system identification methods to approximate head-related transfer functions.
The general approach is to
Compute an autoregressive (all-pole-) estimate of the transfer function using the autocorrelation method for linear prediction.
Use the AR estimate as a starting point to compute a pole-zero-representation of the system transfer function iteratively.
Evaluate the result of the estimation process on a logarithmic scale (Error in dB).
For the iterative procedure, the authors are proposing a cost function
$\hat{C} = \frac{1}{2\pi}\int_{-\pi}^{\pi}|H(e^{j\omega})A(e^{j\omega}) - B(e^{j\omega})|^2 d\omega $,
where $H$ is the system transfer function, $A$ is the DTFT of the recursive coefficients and $B$ the DTFT of the transversal coefficients.
I understand this approach originates from this paper (Kalman, 1958).
This cost function is then extended for the iterative process as
$\hat{C}_i = \frac{1}{2\pi}\int_{-\pi}^{\pi}|\frac{H(e^{j\omega})A_i(e^{j\omega})}{A_{i-1}(e^{j\omega})} - \frac{B_i(e^{j\omega})}{A_{i-1}(e^{j\omega})}|^2 d\omega $,
where the index $i$ denotes variables corresponding to the $i$th iteration.
The iterative modification originates from this paper (Steiglitz and McBride, 1965).
In order to find a solution on a decibel scale, a weighting function $W$ is introduced:
$\hat{C}_i = \frac{1}{2\pi}\int_{-\pi}^{\pi} |W_i(e^{j\omega})|^2 |\frac{H(e^{j\omega})A_i(e^{j\omega})}{A_{i-1}(e^{j\omega})} - \frac{B_i(e^{j\omega})}{A_{i-1}(e^{j\omega})}|^2 d\omega $,
which is introduced in the paper as
$W_i(e^{j\omega}) = \frac{\log\left( H(e^{j\omega})\right) - \log\left( \frac{B_{i-1}(e^{j\omega})}{A_{i-1}(e^{j\omega})}\right)}{|H(e^{j\omega})A_{i-1}(e^{j\omega}) - B_{i-1}(e^{j\omega})|^2}$.
I understand this weighting function is the squared error in logarithmic scale between true and approximated transfer function, divided by the first cost function.
However, I have trouble understanding the process of arriving at the iterative cost function for several reasons. I would like to ask the following questions:
Why is the first cost function $\hat{C}$ preferred to, say, $\frac{1}{2\pi}\int_{-\pi}^{\pi} |H(e^{j\omega}) - B(e^{j\omega})/A(e^{j\omega})|^2 d\omega$ in the first place? What does it do?
In the iterative cost function $\hat{C}_i$, what is the purpose of dividing by the recursive (denominator) part of the previous transfer function estimate?
For what reason is a weighting function introduced for logarithmic error minimization, rather than just using its numerator as a cost function directly?
I would really appreciate any help or pointers into the right direction.
Answer: The chosen cost function is the mean squared error, i.e., the integral over a squared magnitude of the difference between frequency responses. The function
$$E(e^{j\omega})=H(e^{j\omega})-\frac{B(e^{j\omega})}{A(e^{j \omega})}\tag{1}$$
depends on frequency, so you can't minimize it directly, unless you want to minimize it for exactly one frequency $\omega$, which is of course pointless. You can choose several error measures depending on $E(e^{j\omega})$ given in $(1)$. Two common choices are
$$\varepsilon_1=\max_{\omega}W(\omega)|E(e^{j\omega})|\tag{2}$$
and
$$\varepsilon_2=\int_{0}^{\pi}W(\omega)|E(e^{j\omega})|^2d\omega\tag{3}$$
with some positive weighting function $W(\omega)$. Note that $\varepsilon_1$ and $\varepsilon_2$ given by $(2)$ and $(3)$ do not depend on frequency.
The authors of the paper you refer to chose to minimize the weighted mean square error given by $(3)$. However, instead of using the linear difference $(1)$, they chose to minimize the average logarithmic difference, i.e., the average error on a dB scale.
The problem with the minimization of $(3)$ is that for IIR filters, it results in a non-linear optimization problem, which is much harder to solve directly, and which might also have locally optimal solutions that are far from the global optimum. The cost function $\hat{C}$ in your question is linear in the filter coefficients. The point of the iteration is now to solve a sequence of linear minimization problems (which is simple, just solve a system of linear equations) in order to compute the solution of the originally non-linear optimization problem (the minimization of $(3)$). Note that if convergence is achieved, then $A_{i-1}(e^{j\omega})=A_{i}(e^{j\omega})$, so the cost function $\hat{C}_i$ is identical to the cost function of the original non-linear problem. Yet, only linear minimization problems are solved in each iteration.
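As an illustration of this answer's point (not the authors' exact algorithm, and omitting the logarithmic weighting $W_i$ for brevity), the iteration can be sketched directly on a frequency grid: each pass freezes $A_{i-1}$ and solves the resulting *linear* weighted least-squares problem for the coefficients of $B_i$ and $A_i$.

```python
import numpy as np

def steiglitz_mcbride_freq(H, w, nb, na, n_iter=10):
    # Fit b (numerator, nb+1 taps) and a (denominator, na+1 taps, a[0] = 1)
    # so that B/A matches H on the frequency grid w.  Each pass solves the
    # linear problem  min sum |(H*A_i - B_i) / A_{i-1}|^2  with A_{i-1} fixed.
    A_prev = np.ones_like(H)
    for _ in range(n_iter):
        W = 1.0 / A_prev                                      # frozen weight
        Eb = np.exp(-1j * np.outer(w, np.arange(nb + 1)))     # basis for B
        Ea = np.exp(-1j * np.outer(w, np.arange(1, na + 1)))  # basis for A - 1
        # H*A - B = 0  rearranged to  Eb@b - H*(Ea@a_tail) = H
        M = np.hstack([Eb, -H[:, None] * Ea]) * W[:, None]
        rhs = H * W
        # real-valued least squares over stacked real/imag parts
        Mr = np.vstack([M.real, M.imag])
        rr = np.concatenate([rhs.real, rhs.imag])
        theta = np.linalg.lstsq(Mr, rr, rcond=None)[0]
        b = theta[:nb + 1]
        a = np.concatenate([[1.0], theta[nb + 1:]])
        A_prev = np.exp(-1j * np.outer(w, np.arange(na + 1))) @ a
    return b, a

# sanity check on an exactly rational response: the fit should recover b and a
w = np.linspace(0.1, 3.0, 64)
z = np.exp(-1j * w)
H_true = (1 + 0.5 * z) / (1 - 0.5 * z)
b, a = steiglitz_mcbride_freq(H_true, w, nb=1, na=1, n_iter=3)
# b should be close to [1, 0.5] and a close to [1, -0.5]
```

Note how the snippet mirrors the answer: only linear systems are solved, and at convergence ($A_i = A_{i-1}$) the weighted residual equals the original non-linear cost.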
I'm not sure I completely understand your last question, but the weighting function is there to change the problem from minimizing the average squared difference between the frequency responses to minimizing the average squared difference between the logarithms of the magnitude responses. The necessary weighting function is unknown, but if the procedure converges - note that there is no guarantee that it does in all cases - the final weighting function is such that the mean squared logarithmic error is minimized. Note that this latter problem is highly non-linear, whereas the proposed procedure only solves linear subproblems. | {
"domain": "dsp.stackexchange",
"id": 7440,
"tags": "linear-systems, transfer-function, system-identification, autoregressive-model"
} |
Galactic Habitable Zone | Question: Do galaxies have habitable zones the same as stars do? Say in a galaxy with a very active nucleus producing a lot of heat and radiation, would there be a point at which no star's planets could harbor life do to the effects of the black hole? Also would there possibly be a habitable zone for galaxy clusters? If there were many galaxies with extremely active galactic nuclei condensed closely together would this possibly also hinder the evolution of life?
Answer: Stars themselves, like our Sun, have a heliosphere. The size of this heliosphere varies depending on the size/strength of the star and its position in the galaxy, and what happens outside the heliosphere usually has little effect inside, in terms of radiation. So the presence of a black hole or an AGN near the star doesn't matter as long as the star itself is strong enough to create a large heliosphere (in which the planets reside). We can talk about the problem case by case, but I don't know of a general galactic version of the Goldilocks zone. | {
"domain": "astronomy.stackexchange",
"id": 1085,
"tags": "solar-system, exoplanet, planet, earth-like-planet, habitable-zone"
} |
What is the purpose of learning G-code? | Question: I might be wrong here. The solid modeling done in CAD software like Catia, NX, Solidworks, etc. can be taken as input by a CNC machine, and a piece of the desired shape and size can be obtained. The CAD software itself converts the model into G-code. So why do we need to learn CNC programming if CAD can do it? Stay safe.
Answer: Because some programs for some operations can be written more quickly than the equivalent CAD file can be produced.
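For instance, a few lines of parametric code can replace an unbounded family of CAD/CAM exports. A hypothetical sketch (the G-code dialect and the numbers are made up for illustration) that emits a circular toolpath for any diameter:

```python
def circle_gcode(diameter, depth=-1.0, feed=200):
    # Hypothetical minimal toolpath: rapid to the start point, plunge, cut one
    # full G2 arc of the given diameter (I/J = offset to the arc centre), retract.
    r = diameter / 2
    return [
        f"G0 X{r:.3f} Y0.000",
        f"G1 Z{depth:.3f} F{feed}",
        f"G2 X{r:.3f} Y0.000 I{-r:.3f} J0.000",
        "G0 Z5.000",
    ]

print("\n".join(circle_gcode(20.0)))   # one program, any diameter
```

Calling `circle_gcode(30.0)` or any other diameter reuses the same four lines, which is exactly the advantage over exporting one CAD file per size.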
Also, the use of variables makes the program more useful, i.e. a program to produce a cylinder can be written to take the diameter as the controlling argument, while producing many CAD files for various diameters takes time. | {
"domain": "engineering.stackexchange",
"id": 3582,
"tags": "mechanical, cnc"
} |
A monad is just a monoid in the category of endofunctors, what's the enlightenment? | Question: Pardon the word play. I'm a little confused about the implication of the claim and hence the question.
Background: I ventured into Category Theory to understand the theoretical underpinnings of various categorical constructs and their relevance to functional programming (FP). It seems (to me) that one of the "crowning gems" at the intersection of Cat and FP is this statement:
A monad is just a monoid in the category of endofunctors
What is the big deal about this observation and what are its programmatic/design implications? Sources like sigfpe and many texts on FP seem to imply the mindblowingness of this concept but perhaps I'm unable to see the subtlety that's being alluded to.
Here's how I understand it:
Knowing something is a monoid allows us to extrapolate the fact that we can work within a map-reduce setting where the associativity of the operations allows us to split/combine the computation in arbitrary order i.e., (a1+a2)+a3 == a1+(a2+a3). It can also allow one to distribute this across machines and achieve high parallelization. (Thus, I could mentally go from a theoretical construct -> computer science understanding -> practical problem solving.)
For me it was obvious (as a result of studying Cat) to see that monads have a monoidal structure in the category of endofunctors. However, what is the implication one can draw from this and what is its programmatic/design/engineering impact when we're coding with such a mental model?
Here's my interpretation:
Theoretical Implication: All computable problems at their heart are monoidal in a sense.
Is this correct? If so, I can understand the enlightenment. It's a different perspective on understanding the notion/structure of computable problems that wouldn't be obvious if coming from only a Turing/Lambda model of computation and I can be at peace.
Is there more to it?
Practical Implication: Is it simply to provide a case for the do-notation style of programming? That is, if things are monoidal we can better appreciate the existence of the do/for constructs in Haskell/Scala. Is that it? Even if we didn't know about the monoidal underpinnings, we needn't invoke the monoidalness to make this claim since bind >>= and flatMap constructs are defined to be associative. So what gives? Or is it more to do with the foldability of monadic constructs and that is the indirect enlightenment that is being alluded to?
Question(s): What am I missing here? Is it simply the recognition of the fact that monads are generalized monoids and that they can be combined in any order similar to map-reduce operations like monoids? How does knowing about the monoidal property help improve the code/design in any way? What's a good example of before/after to show this difference (before knowing about monads/monoidality and after)?
Answer: This answer may not be exactly what you are looking for. That is, I think perhaps the importance of this characterisation is being overemphasised here. The quote
a monad in X is just a monoid in the category of endofunctors of X
is originally from Mac Lane's Categories for the Working Mathematician, where it appears as a helpful intuition for the definition of monad, which, alone, can seem quite unfamiliar at first. By characterising it as a monoid in a particular monoidal category, the reader is given an alternative perspective. Note that the chapter on monads actually comes before the chapter on monoidal categories: the remark is intended to be helpful, rather than precise (it is made precise only later).
The quote was then rephrased in James Iry's infamous article Brief, Incomplete and Mostly Wrong History of Programming Languages.
A monad is just a monoid in the category of endofunctors, what's the problem?
Presented out of context, as it is in the article, it is meant to amuse. The quote has since become a meme in the functional programming community, primarily because it is amusing, rather than a key insight for functional programming (though it does also serve to pique the curiosity of functional programmers, drawing them into the wonderful world of category theory). My view is that this characterisation, while helpful and interesting, is not as important as one might imagine from its popularity.
However, this is not to say there is no insight to be gained from this characterisation. First, let me point out that, while the presentation of a monad with a multiplication and unit is clearly suggestive of a monoid, as you point out, the Kleisli presentation, with a bind and return operation, is not. It is the Kleisli presentation that is common in functional programming, so certainly the characterisation as a monoid is more interesting from this perspective.
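To see the two presentations side by side, here is a sketch for the list monad (Python standing in for a functional language): unit/join is the monoid presentation, and bind, the familiar flatMap, is the Kleisli one, recovered as join-after-map.

```python
def unit(x):            # eta : a -> m a   (the monoid "unit")
    return [x]

def join(xss):          # mu : m (m a) -> m a   (the monoid "multiplication")
    return [x for xs in xss for x in xs]

def bind(xs, f):        # Kleisli presentation: >>= recovered from join + map
    return join([f(x) for x in xs])

# Associativity of the monoid: join . fmap join == join . join
xsss = [[[1], [2, 3]], [[4]]]
assert join([join(xs) for xs in xsss]) == join(join(xsss))  # both [1, 2, 3, 4]

# Unit laws: join . fmap unit == id == join . unit
xs = [1, 2, 3]
assert join([unit(x) for x in xs]) == xs == join(unit(xs))

# The Kleisli view is the familiar flatMap
assert bind([1, 2], lambda x: [x, -x]) == [1, -1, 2, -2]
```

The monoid laws instantiated here are exactly the associativity and unit axioms of the monad, while `bind` is what do-notation desugars to.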
From a theoretical perspective, one of the insights is indeed that many natural structures in computer science (and mathematics) are monoidal. From the perspective of monads (particularly with their relation to cartesian operads and Lawvere theories), monoidal structure corresponds to substitution structure (equivalently, composition structure). Substitution is ubiquitous in computer science (for example, capture-avoiding substitution in a type theory, or grafting of trees). Monads are just one more example.
While understanding formally the statement in question may not be enlightening, I suggest that it is enlightening to understand the different perspectives on monads, as it allows you to see the different ways in which monads might be used (e.g. as containers, describing algebraic structure, describing multi-compositional systems, etc.). It can be hard to appreciate just how often they pop up without having seen the different lights in which they can be seen. In this sense, monads as monoids is just one perspective (and probably not the most enlightening).
Finally, while monads themselves are inarguably very useful in pure functional programming generally, I'm not sure the perspective as monads as monoids is helpful per se. I think the Kleisli perspective, which happens to be equivalent, is the most enlightening perspective here.
In summary, this response may be a little disappointing: I don't think that understanding this relationship is all that helpful or enlightening practically (i.e. for programming). However, it is a useful perspective to keep in mind, along with the others, when considering monads theoretically. | {
"domain": "cs.stackexchange",
"id": 16663,
"tags": "functional-programming, category-theory"
} |
SRA toolkit and SRA- prefix | Question: I am trying to download raw sequence data from the SRA database using the SRA toolkit (v.3.0.2). Unfortunately, the authors of the article I am reading gave SRA numbers (i.e. with the SRA prefix, not the SRR). This does not seem to work with the prefetch command:
prefetch --max-size 100000000 SRAxxxxxx -O downloads_SRAxxxxxx/
I ended up going on NCBI and manually finding the relevant SRR number for each sample; however, I was wondering if I could have done it programmatically. I appreciate your help.
Edit: Thank you for your comments, here are a couple of examples:
SRA030738 ; SRA030736 ;
Searching for them on SRA I found the SRR numbers, respectively:
SRR135604 ; SRR135602.
The paper is Hasan et al, 2012: DOI: 10.1073/pnas.1207359109
Answer: Using Entrez Direct E-utils, run the following command for your two examples:
esearch -db SRA -query 'SRA030738 OR SRA030736' | efetch -format docsum | xtract -pattern DocumentSummary -element Runs | xtract -pattern Run -element Run@acc
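If you prefer to stay in Python, the same two-stage extraction can be sketched with the standard library's XML parser. The docsum fragment below is hypothetical and heavily simplified; the point is that the Runs field contains escaped XML and therefore needs a second parse, which is exactly why xtract is run twice:

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified docsum fragment: real efetch output has many more
# fields, but similarly stores the <Run> elements as escaped XML text in <Runs>.
docsum = """<DocumentSummarySet>
  <DocumentSummary><Runs>&lt;Run acc="SRR135604"/&gt;</Runs></DocumentSummary>
  <DocumentSummary><Runs>&lt;Run acc="SRR135602"/&gt;</Runs></DocumentSummary>
</DocumentSummarySet>"""

accs = []
for runs in ET.fromstring(docsum).iter("Runs"):
    # the element text is itself XML once unescaped: parse it a second time
    inner = ET.fromstring("<r>" + runs.text + "</r>")
    accs += [run.get("acc") for run in inner.iter("Run")]

print(accs)  # → ['SRR135604', 'SRR135602']
```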
(I know, it seems strange to run xtract twice, but the formatting on the plain xml output is dodgy. The first pass parsed out the unwanted > or < or " characters. The second pass worked on the re-formatted stdout) | {
"domain": "bioinformatics.stackexchange",
"id": 2472,
"tags": "sratoolkit"
} |
Why is it faster (as in proportion to volume) to boil 4 cups of water than to boil 2 cups? | Question: I did an experiment where I boiled two cups (500ml) of water in a kettle, and it took 1:30 minutes to reach around 98 C, average. However, when I boiled 4 cups of water, (1L) it only took me 2:30 minutes, when I expected it to be 3:00 minutes. Does this mean that the more water I boil, the faster it will reach 100 C (proportional to its volume, of course)? The kettle and the thermometer used were cooled down first before boiling another batch of 24.5 C water, and I did a few trials.
Can you tell me the reason for this? And is there an equation I can use, to figure out, for example, how long it would take to boil 6 cups?
Answer: Double the amount of water does not need double the amount of time to heat, since while the energy needed is indeed doubled, losses due to vaporization and radiation from the kettle should be approximately constant.
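As a quick illustration with the question's own two data points (2 cups in 1:30 = 90 s, 4 cups in 2:30 = 150 s), a naive straight-line fit and extrapolation to 6 cups:

```python
import numpy as np

cups = np.array([2.0, 4.0])
seconds = np.array([90.0, 150.0])      # 1:30 and 2:30 from the question

slope, intercept = np.polyfit(cups, seconds, 1)
# slope ~ 30 s per cup (the part that scales with volume),
# intercept ~ 30 s (roughly constant overhead/losses)

t6 = slope * 6 + intercept             # ~ 210 s, i.e. about 3:30 for 6 cups
print(slope, intercept, t6)
```

With only two points the line fits perfectly by construction, so this says nothing about whether the linear model is right; measurements at 1 and 3 cups would let you judge the fit.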
You can plot the time needed for a given amount of water to boil and try to fit a function to that data. With two data points you can only fit a straight line, corresponding to linear growth, although I do not expect that to be a good fit. Try doing measurements with 1 and 3 cups, too. Then you have more data and can see what kind of function fits best. That way you can extrapolate to larger amounts. | {
"domain": "physics.stackexchange",
"id": 62391,
"tags": "thermodynamics, heat, water, volume"
} |
Fourier Series Coefficients | Question: Question:
The fourier series coefficients is given as:
$$c_k= \begin{cases}
1 \qquad & k \ \text{ even} \\
2 \qquad & k \ \text{ odd} \\
\end{cases}$$
the period of the signal is $T=4$, what is signal $x(t)$?
Attempt:
When I try to find the signal by applying the general synthesis formula, I end up with exponential terms, and I find it hard to convert them back into impulses. I know the answer for this signal will be an impulse train, but I have no idea how to get there. Please help with this.
Answer: You should use the synthesis equation of an impulse train with period $T$ (which is easy to derive):
$$x(t)=\sum_{k=-\infty}^{\infty}\delta(t-kT)=\sum_{k=-\infty}^{\infty}\frac{1}{T}e^{jk\frac{2\pi}{T} t}\tag{1}$$
That is: the Fourier coefficients of all terms are a constant ($\frac{1}{T}$).
Now assume that there are two impulses with different amplitudes $a$ and $b$ per period, i.e. $$x(t)=a\delta(t)+b\delta(t+2),\ -4< t \le0$$ In that case we have
$$\begin{align}
x(t)&=\sum_{k=-\infty}^{\infty}a\cdot\delta(t-kT)+b\cdot\delta(t+\frac{T}{2}-kT)\\&=\sum_{k=-\infty}^{\infty}a\cdot\delta(t-4k)+b\cdot\delta(t+2-4k)\\
&=\sum_{k=-\infty}^{\infty}\frac{a}{4}e^{jk\frac{2\pi}{4} t}+\sum_{k=-\infty}^{\infty}\frac{b}{4}e^{jk\frac{2\pi}{4} (t+2)}\\
&=\sum_{k=-\infty}^{\infty}\frac{1}{4}e^{jk\frac{\pi}{2} t}\left(a+be^{jk\pi}\right)\\
&=\sum_{k=-\infty}^{\infty}\frac{1}{4}e^{jk\frac{\pi}{2} t}\left(a+b(-1)^k\right)\\
&=\begin{cases}
\displaystyle\sum_{k=-\infty}^{\infty}\frac{a+b}{4}e^{jk\frac{\pi}{2} t},& k\text{ even}\\[10pt]
\displaystyle\sum_{k=-\infty}^{\infty}\frac{a-b}{4}e^{jk\frac{\pi}{2} t},& k\text{ odd}
\end{cases}
\end{align}$$
Now referring again to $(1)$ and comparing it with the question, we should have
$$c_k=\begin{cases}
\frac{a+b}{4}=1\\[10pt]
\frac{a-b}{4}=2
\end{cases}$$
which gives $a=6$ and $b=-2$, i.e. $x(t)=6\delta(t)-2\delta(t+2)$ over one period. | {
"domain": "dsp.stackexchange",
"id": 5371,
"tags": "fourier-series, periodic"
} |
Zincblende structure, is it FCC or diamond face centred? | Question: I want to calculate the number of atoms or lattice points of the zincblende structure, but
I can't distinguish the crystal shape or structure of zincblende because:
According to the information on this website, zincblende/sphalerite is based on an fcc lattice of anions.
As we know, FCC has just 4 atoms per unit cell.
On the other hand, my teacher said that zincblende is similar to the diamond structure, which has 8 atoms per unit cell.
Answer: It is usual to call a lattice fcc even if it has more than one atom per primitive unit cell. That is, it is common to talk about the underlying Bravais lattice in crystals with non-trivial basis (i.e. more than one atom per primitive unit cell).
In this sense, diamond is an fcc lattice with two atoms per primitive unit cell, and the same applies to the zincblende structure. Wikipedia has a nice picture that shows the fcc nature of the lattice.
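The two counts in the question can be checked by brute force: the conventional cubic cell contains 4 fcc lattice points, and attaching the two-atom basis of diamond/zincblende gives 8 atoms. A quick numpy sketch in fractional coordinates:

```python
import numpy as np

# fcc lattice points of the conventional cubic cell (fractional coordinates)
fcc = np.array([[0, 0, 0], [0, .5, .5], [.5, 0, .5], [.5, .5, 0]])
# two-atom basis of diamond/zincblende: b1 = (0,0,0), b2 = (1/4,1/4,1/4)
basis = np.array([[0, 0, 0], [.25, .25, .25]])

# every lattice point carries the full basis: 4 x 2 = 8 distinct positions
atoms = (fcc[:, None, :] + basis[None, :, :]).reshape(-1, 3) % 1.0
print(len(fcc), "atoms for plain fcc,", len(atoms), "for diamond/zincblende")
```

So both statements in the question are consistent: 4 lattice points per conventional cell, 8 atoms once the two-atom basis is included.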
To extend: The positions of atoms in a crystal can be described by giving a Bravais lattice, specified by vectors $\vec a_1, \vec a_2, \vec a_3$ and a "basis" of $n$ atoms with offsets $\vec b_1, \ldots, \vec b_n$ from the lattice site (the name is badly chosen as this has nothing to do with the basis of a vector space). That is, an atom of kind $i$ is found at the location
$$ \vec r = k \vec a_1 + l \vec a_2 + m \vec a_3 + \vec b_i $$
for any integers $k, l, m$.
The vectors $\vec a_1, \vec a_2, \vec a_3$ span the unit cell (note that this choice is not unique, for example we could use $2\vec a_1$ instead of $\vec a_1$ and use a larger basis).
The unit cell is called primitive if we cannot choose such a representation with a basis with fewer atoms (this choice is also not unique, we could for example use $\vec a_1, \vec a_2 + \vec a_3, \vec a_2 - \vec a_3$ instead). The Bravais lattice described by these vectors, however, is unique and tells us which kind of lattice the crystal possesses. In the case of zincblende and diamond this is an fcc lattice and we need two atoms placed at $\vec b_1$ and $\vec b_2$ (one of those can obviously be chosen to be zero).
The customary unit cell is chosen so as to make the symmetry of the system apparent. So for fcc lattices we use a cubic unit cell which consists of 4 primitive unit cells (and therefore contains 4 atoms in copper and 8 in diamond or zincblende). | {
"domain": "physics.stackexchange",
"id": 34697,
"tags": "x-ray-crystallography"
} |
Anagrams for a given input | Question: This question is the first draft, my new revised code is here:
Anagrams for a given input 2.0
Here is my code that I wrote, which basically lists the anagrams for a given input.
I did this by reading each line in my dictionary file and comparing it to the anagram. Only if it matched did I add it to the list, instead of adding all the words to the list and then sorting through them.
I compared the words by first checking the length of both words; if the lengths matched, I put each word into a list, sorted both lists, and then compared the two lists using the equals() function and incremented my counter.
The function of my getOutPut() was just to neaten up the printout.
What I would like are some critiques on my work and what I should/could have done differently, especially pertaining to semantic efficiency, code correctness, common practices that I may be unaware of, and code efficiency.
This is only my second project, so I want to learn from it to become a better programmer. That's why I'm asking.
The methods are in the order they are called in:
(Note that I split the code up to make it easier to see, all the methods are inside the Anagramatic Class)
package anagramatic;
/**
* IDE : NETBEANS
* Additional Libraries : Guava
* @author KyleMHB
*/
import java.io.File;
import java.io.FileNotFoundException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Scanner;
import javax.swing.JOptionPane;
public class Anagramatic {
static int k;
-
public static void main(String[] args) throws FileNotFoundException{
String anagram;
anagram=input();
ArrayList<String> words=readFile(anagram);
String output= getOutPut(words);
JOptionPane.showMessageDialog(null,
"The anagram "+anagram+" has "+k+" matches, they are:\n\n"+output);
}//PVSM
-
private static String input() {
String input;
input = JOptionPane.showInputDialog(null,
"Enter the Word or words you would like to be processed");
return input;
}//input
-
private static ArrayList readFile(String anag) throws FileNotFoundException{
k=0;
ArrayList<String> list;
ArrayList<Character> a;
ArrayList<Character> b;
try (Scanner s = new Scanner(new File("words.txt"))){
list = new ArrayList<>();
while (s.hasNext()){
String word= s.next();
if (word.length()==anag.length()){
a = new ArrayList<>();
b = new ArrayList<>();
for(int i=0;i!=word.length();i++){
a.add(anag.charAt(i));
b.add(word.charAt(i));
}//forloop to make two lists of the words
Collections.sort(a);
Collections.sort(b);
if(a.equals(b)){
list.add(word);
k++;
}//comparing the two lists
}//if length
}//while
}//try
return list;
}//readfile
-
private static String getOutPut(ArrayList<String> words) {
String wordz="[";
int x=0;
int y=0;
for(int i=0; i!=words.size()-1;i++){
if(x!=7){
wordz+=words.get(i)+", ";
x++;
}else{wordz+="\n";x=0;}
}//for
wordz+=words.get(words.size()-1)+"]";
return wordz;
}//getOutPut
}//Anagramatic
Answer: I'll focus on the heart of your solution first, which is your readFile() method. Here's how I would write it:
public static List<String> anagramsInFile(String word, File f)
throws FileNotFoundException {
char[] canonical = canonicalize(word);
ArrayList<String> anagrams = new ArrayList<String>();
try (Scanner s = new Scanner(f)) {
while (s.hasNext()) {
String candidate = s.next();
if ( (candidate.length() == word.length()) &&
(Arrays.equals(canonical, canonicalize(candidate))) ) {
anagrams.add(candidate);
}
}
}
return anagrams;
}
private static char[] canonicalize(String original) {
char[] array = original.toCharArray();
Arrays.sort(array);
return array;
}
Points to note:
Naming is important. readFile() is vague; anagramsInFile() is meaningful. Also, I find anag to be a confusing parameter name.
Hard-coding the filename inside the method is a bad idea.
Unless the caller has a reason to require an ArrayList, your method signature should just commit to returning some kind of List. Better yet, consider returning a Set.
You only have to canonicalize the original string once, not once per word in the file.
Canonicalization deserves a helper method, since you have to do it to both the original string and to the contents of the file.
Canonicalizing to a char[] avoids working character by character, since you can take advantage of String.toCharArray(). It's probably more efficient as well.
There's no point in maintaining k: it's just the size() of the returned list. Maintaining k as a class variable is particularly egregious, as there's no reason for that count to be part of the state of the class.
If your blocks are so lengthy and deeply nested that you need comments on the closing braces, consider that a Bad Code Smell and address that issue instead. (By the way, your braces were misaligned, which causes confusion that even your close-brace comments can't compensate for.)
Next, let's look at getOutPut(). Here's how I would write it:
private static String formatOutput(List<String> words) {
StringBuilder out = new StringBuilder("["); // "[" (String), not '[' (char): a char argument selects the capacity constructor
int wordsPrinted = 0;
Iterator<String> w = words.iterator();
while (w.hasNext()) {
out.append(w.next());
if (w.hasNext()) {
out.append((++wordsPrinted % 8 == 0) ? "\n" : ", ");
}
}
return out.append(']').toString();
}
Points to note:
When concatenating so many strings, you really need to use a StringBuilder. Repeated string + string would create a temporary string each time.
Your output is buggy: you drop every eighth word, replacing it with a newline.
Be careful with edge cases: your code crashes when words is empty.
Try to structure your code so it's obvious that certain properties hold true. For example, in my solution, you can see that every word gets printed (because if .hasNext() is true, the next word will get appended). You can also see that every word except the last word is followed by a delimiter (either newline or comma).
Avoid cryptically named variables like x and y. In fact, you don't even use y.
Naming your result wordz when there's another variable named words makes the code a pain to read.
Finally:
public static void main(String[] args) throws FileNotFoundException{
String word = input("Enter the word or words you would like to be processed");
List<String> anagrams = anagramsInFile(word, new File("words.txt"));
display(String.format("The word %s has %d matches. They are:\n\n%s",
word, anagrams.size(), formatOutput(anagrams)));
}
private static String input(String prompt) {
return JOptionPane.showInputDialog(null, prompt);
}
private static void display(String message) {
JOptionPane.showMessageDialog(null, message);
}
Observations:
I've decomposed the work differently. There's input() and display(), which are analogous to each other, and encapsulate the Swing UI. (If you ever want to convert the program to work in the text console, you just modify those functions.)
More importantly, you can now see at a glance what the whole program does just by looking at main(). Note that all string constants are there as well: the prompt, the filename, and the output.
Don't declare any more variables than you need to. If possible, when you declare variables, assign them in the same statement.
Use String.format(). | {
"domain": "codereview.stackexchange",
"id": 4665,
"tags": "java, optimization, array"
} |
How can I use pandas agg here to avoid iteration? | Question: can anyone help me improve this pandas code?
import pandas as pd
df = pd.DataFrame(
    [
        ['chr1', 222],
        ['chr1', 233],
        ['chr1', 2123],
        ['chr2', 244],
    ], columns=['chrom', 'pos']
)
df2 = pd.DataFrame(
    [
        ['chr1', 221, 223],
        ['chr1', 230, 240],
    ], columns=['chrom', 'start', 'end']
)
Gives me 2 dfs with genomic coordinates. The first one is an exact position:
chrom pos
0 chr1 222
1 chr1 233
2 chr1 2123
3 chr2 244
and the second is a range:
chrom start end
0 chr1 221 223
1 chr1 230 240
I need to find the count of exact coordinates that are in one of the ranges (in the same chrom)
This works but is slow:
c=0
for chrom, data in df.groupby('chrom'):
tmp = df2.query(f'chrom == "{chrom}"')
for p in data.pos:
for s, e in zip(tmp.start, tmp.end):
if s < p < e:
c+=1
Then c = 2
I think I can use agg to do this without iteration (and hopefully faster) but I can't get it working. Can anyone show me how?
PS I am also asking this on stackoverflow.
Answer: # pip install pyranges or conda install -c bioconda pyranges
import pyranges as pr
g1 = pr.from_string("""Chromosome Start End
chr1 222 223
chr1 223 224
chr1 233 234
chr1 235 236
chr1 2237 238
chr1 2123 2124
chr2 244 245""")
g2 = pr.from_string("""Chromosome Start End
chr1 221 223
chr1 230 240
chr2 0 1000""")
r = g2.count_overlaps(g1)
r
# +--------------+-----------+-----------+------------------+
# | Chromosome | Start | End | NumberOverlaps |
# | (category) | (int32) | (int32) | (int64) |
# |--------------+-----------+-----------+------------------|
# | chr1 | 221 | 223 | 1 |
# | chr1 | 230 | 240 | 2 |
# | chr2 | 0 | 1000 | 1 |
# +--------------+-----------+-----------+------------------+
# Unstranded PyRanges object has 3 rows and 4 columns from 2 chromosomes.
# For printing, the PyRanges was sorted on Chromosome.
r.df
# 0 chr1 221 223 1
# 1 chr1 230 240 2
# 2 chr2 0 1000 1 | {
"domain": "bioinformatics.stackexchange",
"id": 1726,
"tags": "python, pandas, chromosomes"
} |
Ligand exchange reactions... are they one way or reversible? | Question: Are ligand exchange reactions one-way reactions or reversible? I know this is a very silly question, but it's not said outright anywhere...
For example, in my high school chemistry book, these two ligand exchange reactions are shown:
$$\ce{[Cu(H2O)6]^{2+} (aq) + 4Cl- (aq) <=> [CuCl_4]^{2-} (aq) + 6H2O (l)}$$
$$\ce{[Cu(H2O)6]^{2+} (aq) + 4NH3 (aq)-> [Cu(NH3)4(H2O)2]^{2+} (aq) + 4H2O (l)}$$
The first is reversible but the second is one-way
Also, my teacher stated that ligand exchange reactions are irreversible... this confused me further
Can someone please tell me what is right...
Answer: Let's take the binding of $\ce{O2}$ and $\ce{CO}$ to hemoglobin as an example. $\ce{CO}$ binds much better than $\ce{O2}$!
We can consider the reaction of oxygen-loaded hemoglobin with $\ce{CO}$ a ligand exchange reaction. If that were completely irreversible, the logical consequence in the case of an intoxication would be: [...] six feet under ;)
Instead, intoxicated patients are treated with oxygen at higher pressure.
To me, that sounds like Le Chatelier in the ICU and it only makes sense in the case of equilibria. | {
"domain": "chemistry.stackexchange",
"id": 853,
"tags": "coordination-compounds, transition-metals"
} |
TypeError: stop() takes exactly 3 arguments (1 given) | Question:
Hi everyone,
Currently, I'm trying to modify the Baxter Joint Trajectory Client to directly use the left arm to perform the hard-coded position, instead of asking the user to add an argument to choose the arm. After I made my modification and ran the code, it shows me: TypeError: stop() takes exactly 3 arguments (1 given)
Of course, I had run the server node at the same time.
Is there any part of my program that is wrong, or is it because the server node requires the argument at the same time?
This is the example in the Baxter wiki.
And this is my program node link: https://drive.google.com/open?id=0B6JF01xXRuNbZ0JqWG1WMGNsUWM
Thank you!
Originally posted by Zero on ROS Answers with karma: 104 on 2016-06-14
Post score: 1
Answer:
Quoting the rospy documentation:
rospy.on_shutdown(h)
Register handler to be called when rospy process begins shutdown. h is a function that takes no arguments. You can request a callback using rospy.on_shutdown() when your node is about to begin shutdown. This will be invoked before actual shutdown occurs, so you can perform service and parameter server calls safely. Messages are not guaranteed to be published.
Isn't your callback method expecting 3 arguments (self, position, time)? But the handler h must take no arguments (the self argument of a bound method is supplied automatically).
When the callback is called, the TypeError is thrown because the callback method is invoked with an insufficient number of arguments. And I'm not sure you need those two arguments, because you don't use them inside the method anyway.
Inside the Trajectory class, change the stop() definition to this:
def stop(self):
    self._client.cancel_goal()
It will solve the problem.
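The mismatch is easy to reproduce without ROS at all. A minimal sketch (the hook list below is a stand-in for rospy's shutdown-handler mechanism, not the real implementation; Python 2 phrased the error as in the question, while Python 3 reports missing positional arguments):

```python
class Trajectory:
    def stop(self, position, time):   # extra parameters the hook never supplies
        print("stopping at", position, time)

hooks = []
def on_shutdown(h):                   # stand-in for rospy.on_shutdown(h)
    hooks.append(h)

t = Trajectory()
on_shutdown(t.stop)                   # bound method: self is already provided
try:
    for h in hooks:
        h()                           # shutdown: handlers are called with no arguments
except TypeError as e:
    print(type(e).__name__, e)        # fails just like the question's traceback
```

Removing the unused position/time parameters, as above, makes the bound method callable with zero arguments and the error disappears.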
Originally posted by janindu with karma: 849 on 2016-06-14
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Zero on 2016-06-14:
Hi Janindu,
Your answer works for me, and I'm able to run my code. But it just didn't perform the position. At my service terminal it shows: [WARN] [WallTime: 1465966247.239178] Inbound TCP/IP connection failed: connection from sender terminated before handshake header received.
Comment by Zero on 2016-06-14:
0 bytes were received. Please check sender for additional details.
So this is mean that my program didn't send out the data?
Comment by janindu on 2016-06-15:
In your ylj_joint_traTest.py, you have commented the following line.
#server_up = self._client.wait_for_server(time=rospy.Duration(10.0))#Blocks until the action server connects to this client.
What's the reason behind that?
Comment by Zero on 2016-06-15:
this line was not used in my program; that line was used to check that the server is up, if I put it in a try to catch the error.
Comment by Zero on 2016-06-15:
In the actual example they use this to check the error. So since I don't need to take out the error, I just don't use it.
Comment by janindu on 2016-06-15:
I think you need that test.
self._client.wait_for_server() will block until the action server connects to this client. The error you see now looks like a connection error.
I suggest you uncomment it and check for error. At least it will give us more information.
Comment by Zero on 2016-06-15:
it shows that TypeError: wait_for_server() got an unexpected keyword argument 'time'
Comment by janindu on 2016-06-15:
I think you have it as
server_up = self._client.wait_for_server(time=rospy.Duration(10.0))
It should be
server_up = self._client.wait_for_server(timeout=rospy.Duration(10.0))
Comment by Zero on 2016-06-15:
Oh! I found the problem! I had to change time=rospy.Duration(10.0) to timeout=rospy.Duration(10.0). And yes, you are right! I should not comment out that line; that line had its purpose! Thank you so much!
Comment by janindu on 2016-06-15:
No worries. Is everything sorted now?
Comment by Zero on 2016-06-15:
Yes! You are right! I made a silly mistake! Thank you for patiently explaining and solving my problem!
Comment by Zero on 2016-06-15:
yes, now should be fine.
Comment by Zero on 2016-06-15:
Hi, janindu.
Would you mind giving your email address to me? You look so good in this area. So if I have any Baxter-related question I can ask you directly. That will save my life.
Comment by janindu on 2016-06-15:
Hey Zero, to be frank I'm a beginner myself. I only heard about ROS in March 2016 (yep, I'm that new) when I started my PhD. And I didn't answer any Baxter related questions here either.
But yeah I'm always willing to help.
janindu1@gmail.com
Comment by Zero on 2016-06-15:
Oh... I see. But your explanation and answer were really useful for me. I really appreciate it!
"domain": "robotics.stackexchange",
"id": 24923,
"tags": "ros"
} |
Model bounces when calling the "reset_world" service in gazebo | Question:
Hi,
I am trying to reset a model to its original position, but when I call the "/reset_world" service (without a large time delay with time.sleep) the model bounces in the air.
Any help to prevent this bounce/wobble would be appreciated, as many resets will be needed in the simulation (after every episode for Q-Learning), so it would save a lot of time.
(Below is the code for controlling the model and resetting the simulation as well as the URDF)
for episode in range(5000):
episode_rewards = 0
rclpy.spin_once(self.imu_sub)
self.observation = self.imu_sub.get_latest_observation()
while(not self.is_episode_finished()):
current_state = self.discretise_observation()
action = self.choose_action(current_state)
self.cmd_vel_pub.publish(action) #move_robot()
rclpy.spin_once(self.imu_sub)
self.observation = self.imu_sub.get_latest_observation()
new_state = self.discretise_observation()
reward = self.get_reward(new_state[0])
episode_rewards += reward
lr = self.get_learning_rate(episode+1)
self.Q_table[current_state][action] = self.Q_table[current_state][action] + lr * (reward + self.DISCOUNT_FACTOR * np.max(self.Q_table[new_state]) - self.Q_table[current_state][action])
self.get_logger().info(f"{episode}| Episode rewards: {episode_rewards}")
self.epsilon = max(0.01, self.epsilon * self.epsilon_decay)
self.all_rewards.append(episode_rewards)
self.resetSim.send_request()
class ResetSim(Node):
def __init__(self):
super().__init__("minimum_reset_client")
self.client = self.create_client(Empty, "/reset_world")
while not self.client.wait_for_service():
self.get_logger().info("Service not available, waiting again...")
self.req = Empty.Request()
def send_request(self):
self.future = self.client.call_async(self.req)
rclpy.spin_until_future_complete(self, self.future)
return self.future.result()
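The tabular update used in the training loop above can be sanity-checked in isolation. This is a minimal pure-Python sketch with toy numbers of my own choosing (no ROS or numpy needed):

```python
# Standalone check of the tabular Q-learning update from the training loop:
# Q[s][a] += lr * (reward + discount * max(Q[s']) - Q[s][a])
def q_update(q, state, action, reward, new_state, lr, discount):
    best_next = max(q[new_state])
    q[state][action] += lr * (reward + discount * best_next - q[state][action])
    return q

q = [[0.0, 0.0], [0.0, 0.0]]   # toy table: 2 states x 2 actions, all zeros
q = q_update(q, state=0, action=1, reward=1.0, new_state=1, lr=0.5, discount=0.9)
print(q[0][1])   # 0.5 * (1.0 + 0.9 * 0.0 - 0.0) = 0.5
```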
<robot name="m2wr" xmlns:xacro="https://www.ros.org/wiki/xacro" >
<link name="link_chassis">
<!-- pose and inertial -->
<pose>0 0 0 0 0 0</pose>
<inertial>
<mass value="5"/>
<origin rpy="0 0 0" xyz="0 0 0.1"/>
<inertia ixx="0.0395416666667" ixy="0" ixz="0" iyy="0.106208333333" iyz="0" izz="0.106208333333"/>
</inertial>
<collision name="collision_chassis">
<geometry>
<box size="0.5 0.5 0.8"/>
</geometry>
</collision>
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<box size="0.5 0.5 0.8"/>
</geometry>
<material name="blue"/>
</visual>
</link>
<!-- Create wheel right -->
<link name="link_right_wheel">
<inertial>
<mass value="0.2"/>
<origin rpy="0 1.5707 1.5707" xyz="0 0 0"/>
<inertia ixx="0.00052666666" ixy="0" ixz="0" iyy="0.00052666666" iyz="0" izz="0.001"/>
</inertial>
<collision name="link_right_wheel_collision">
<origin rpy="0 1.5707 1.5707" xyz="0 0 0" />
<geometry>
<cylinder length="0.1" radius="0.2"/>
</geometry>
</collision>
<visual name="link_right_wheel_visual">
<origin rpy="0 1.5707 1.5707" xyz="0 0 0"/>
<geometry>
<cylinder length="0.1" radius="0.2"/>
</geometry>
</visual>
</link>
<!-- Joint for right wheel -->
<joint name="joint_right_wheel" type="continuous">
<origin rpy="0 0 0" xyz="0 0.3 -0.4"/>
<child link="link_right_wheel" />
<parent link="link_chassis"/>
<axis rpy="0 0 0" xyz="0 -1 0"/>
<limit effort="10000" velocity="1000"/>
<joint_properties damping="1.0" friction="1.0" />
</joint>
<!-- Left Wheel link -->
<link name="link_left_wheel">
<inertial>
<mass value="0.2"/>
<origin rpy="0 1.5707 1.5707" xyz="0 0 0"/>
<inertia ixx="0.00052666666" ixy="0" ixz="0" iyy="0.00052666666" iyz="0" izz="0.001"/>
</inertial>
<collision name="link_left_wheel_collision">
<origin rpy="0 1.5707 1.5707" xyz="0 0 0" />
<geometry>
<cylinder length="0.1" radius="0.2"/>
</geometry>
</collision>
<visual name="link_left_wheel_visual">
<origin rpy="0 1.5707 1.5707" xyz="0 0 0"/>
<geometry>
<cylinder length="0.1" radius="0.2"/>
</geometry>
</visual>
</link>
<!-- Joint for left wheel -->
<joint name="joint_left_wheel" type="continuous">
<origin rpy="0 0 0" xyz="0 -0.3 -0.4"/>
<child link="link_left_wheel" />
<parent link="link_chassis"/>
<axis rpy="0 0 0" xyz="0 1 0"/>
<limit effort="10000" velocity="1000"/>
<joint_properties damping="1.0" friction="1.0" />
</joint>
<gazebo>
<plugin name="differential_drive_controller" filename="libgazebo_ros_diff_drive.so">
<update_rate>20</update_rate>
<left_joint>joint_left_wheel</left_joint>
<right_joint>joint_right_wheel</right_joint>
<wheel_separation>0.4</wheel_separation>
<wheel_diameter>0.2</wheel_diameter>
<wheel_torque>0.1</wheel_torque>
<command_topic>cmd_vel</command_topic>
<odometry_topic>odom</odometry_topic>
<odometry_frame>odom</odometry_frame>
<robot_base_frame>link_chassis</robot_base_frame>
</plugin>
</gazebo>
<gazebo reference="link_chassis">
<sensor name="my_imu" type="imu">
<always_on>true</always_on>
<!-- Publish at 30 hz -->
<update_rate>30</update_rate>
<plugin name="my_imu_plugin" filename="libgazebo_ros_imu_sensor.so">
<ros>
<!-- Will publish to /imu/data -->
<namespace>/imu</namespace>
<remapping>~/out:=data</remapping>
</ros>
<frame_name>link_chassis</frame_name>
<initial_orientation_as_reference>false</initial_orientation_as_reference>
</plugin>
</sensor>
</gazebo>
Originally posted by vertical_beef576 on ROS Answers with karma: 17 on 2023-07-24
Post score: 0
Answer:
By wobble/bounce, do you mean a small movement at the robot's start position? If that is the case, it is probably because the offset from the ground may be too low, and the wheels of the robot rub against the ground_plane of the world, causing a weight shift.
Node(
package='gazebo_ros', executable='spawn_entity.py',
arguments=[
'-topic', 'robot_description',
'-entity', 'm2wr',
'-x', '0',
'-y', '0',
'-z', '0.0' # No offset
],
output='screen'
),
There is indeed a small bounce/wobble when you try calling the service with no offset.
To combat this, you could spawn the robot at a small offset from the origin (say 5cm):
Node(
package='gazebo_ros', executable='spawn_entity.py',
arguments=[
'-topic', 'robot_description',
'-entity', 'm2wr',
'-x', '0',
'-y', '0',
'-z', '0.05' # Minimal offset
],
output='screen'
),
As indicated above, spawning at a minimal offset removes all bounce.
Note
If you want to reset only the robot without resetting all the poses of the models in the simulation, you can use the /set_entity_state service after adding the following block to your world file:
Snippet to be added to your world:
<plugin name="gazebo_ros_state" filename="libgazebo_ros_state.so">
<ros>
<namespace>/</namespace>
<remapping>model_states:=model_states_demo</remapping>
<remapping>link_states:=link_states_demo</remapping>
</ros>
<update_rate>50.0</update_rate>
</plugin>
ROS2 Service Call:
ros2 service call /set_entity_state gazebo_msgs/SetEntityState "state: {name: m2wr, pose: {position: {x: 0.0, y: 0.0, z: 0.0}, orientation: {x: 0.0, y: 0.0, z: 0.0, w: 1.0}}, reference_frame: world}"
Originally posted by Gaurav Gupta with karma: 276 on 2023-07-26
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 38465,
"tags": "ros, gazebo, ros2, service"
} |
Is this simplified consensus problem easier than the original? | Question: There is a famous Consensus Problem in Distributed Computing.
Let's consider and try to find the best possible algorithm for a simplified version of the consensus problem.
Assumptions: a process may undergo only crash failures (a process abruptly stops and does not resume), and the processes form a complete graph.
Simplification: a crash failure may occur only between rounds, so there is no case where a process succeeds in sending some messages but fails to send others during the same round.
The algorithm for the general case, FloodSet, where a process may crash during a round, is described in Distributed Algorithms by Nancy Ann Lynch. Its analysis shows that $f+1$ rounds are enough to reach consensus.
Intuitively, it looks like the simplification completely changes the approach to the solution. It may be enough to use just one round: every process sends its input to every other process, and all agree on the minimal value.
What is the simplest possible algorithm for the simplified problem?
What can we do in the case of a general graph?
Answer: What you're describing as the simplified problem is usually called consensus with clean crashes.
You can just compute the maximum of all received messages once. To see that this is sufficient, observe that everyone receives the exact same messages after the first round. Thus everyone agrees on the same maximum value and all nodes decide correctly.
To solve crash-fault consensus on a general graph you need to assume $(f+1)$-connectivity. Then you could simulate each round of the FloodSet algorithm by flooding through the entire graph (taking $D$ rounds), yielding $D(f+1)$ rounds in total. I would be surprised if there's a more efficient way to do this (without additional assumptions on the graph, e.g., high expansion).
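The one-round algorithm is short enough to sketch directly. This is my own toy simulation of the clean-crash assumption: a crashed process sends to either all neighbors or none, so every live process sees the exact same message set:

```python
# One-round consensus under clean crashes: since crashes happen only between
# rounds, every live process receives identical messages, and taking the max
# (or min) of them already yields agreement.
def one_round_consensus(inputs, crashed):
    # Messages from crashed processes never arrive; all others broadcast.
    received = [v for pid, v in inputs.items() if pid not in crashed]
    return max(received)        # same value computed at every live process

inputs = {0: 3, 1: 7, 2: 5}     # process id -> input value (toy data)
print(one_round_consensus(inputs, crashed={1}))   # 5: process 1's input is lost
print(one_round_consensus(inputs, crashed=set())) # 7
```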
"domain": "cs.stackexchange",
"id": 1415,
"tags": "algorithms, algorithm-analysis, distributed-systems"
} |
When do we have rigid body rotation in fluids? | Question: I'm studying articles about spinning drop method. In most of approaches, the fluid movements are taken as rigid body rotation.
My question is not about only spinning drop device, I want to know, when in nature do we have rigid body rotations for fluids. Should our fluids have large viscosity to considered rigid body rotation for them? If yes, why? or if no, so what features should they have?
Answer: Rigid body motion is something you can assume for the spinning drop method in part because, almost certainly, the actual instruments you use for it have small-diameter capillaries, the speed of rotation isn't insanely fast, and the instrument lets the drop settle down a bit before the measurement is actually taken. Correct me if I'm wrong.
For most fluids at most familiar physical scales, rigid-body rotation is something you have to work at. If you take a pail of water and put it on a record turntable at 33 1/3 rpm, you'd have to wait a fairly long time for rigid-body motion to set in (depending on how close to "rigid" you want). Even then, when it does settle down, it's not primarily from viscous forces, but from Ekman pumping (large-scale circulations caused by differential rotation); if you could somehow turn that off, it would take an even longer time.... you could calculate it given the equation I give below if you want.
But if you crank up the viscosity, or reduce the size, then rigid-body rotation can set in much faster.
If you put the fluid in a nice rotationally-symmetric container and rotate the container around its axis of symmetry, the e-folding time-scale for rigid body rotation to set in due purely to viscous forces is of order
$$
\tau \simeq \frac{D^2\rho}{\mu}
$$
where $D$ is the diameter of the container, $\mu$ is the dynamic viscosity, and $\rho$ is the density of the fluid. So, decreasing the diameter of the container (capillary) makes a huge difference. This relation by the way just comes from the Stokes term in the Navier-Stokes equations.
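Plugging illustrative numbers (my own, not the answerer's) into that relation shows why capillary-scale instruments can assume rigid-body rotation while a pail of water cannot:

```python
# Order-of-magnitude check of the viscous spin-up time scale
# tau ~ D^2 * rho / mu from the answer, with illustrative values for water.
def spinup_timescale(diameter_m, density_kg_m3, dynamic_viscosity_pa_s):
    return diameter_m**2 * density_kg_m3 / dynamic_viscosity_pa_s

tau_pail = spinup_timescale(0.30, 1000.0, 1.0e-3)        # 30 cm pail of water
tau_capillary = spinup_timescale(1.0e-3, 1000.0, 1.0e-3) # 1 mm capillary
print(tau_pail, tau_capillary)   # ~9e4 s (about a day) vs ~1 s
```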
In the case of the spinning-drop method it's a bit more complex b/c you have two fluids, but that's the general idea.
It's also worth noting that certain vortices in fluids, esp 2D fluids, will have vortex cores that nearly rotate like rigid bodies, but that's a whole 'nuther topic.
(I had to look up "spinning drop method", which is something I had never heard of. Interesting! I had never heard of it, and I've been studying fluid dynamics for years. Nice question.)
Edit: to answer the question more precisely,
a) the answer depends on how close to "rigid" suffices,
b) higher viscosity helps - a 2x viscous fluid reaches rigid-body rotation 2x as fast,
c) smaller in diameter helps tremendously - a 2x smaller diameter reaches rigid-body rotation 4x as fast,
d) at fixed dynamic viscosity, reducing the density helps (not that you have much choice typically)
e) having corrugations or other surface roughness probably helps, but this gets a bit more complex b/c it probably won't make much difference for very viscous fluids, where the Reynolds number for the initial rotation speed is already small to begin with. | {
"domain": "physics.stackexchange",
"id": 67616,
"tags": "fluid-dynamics, rotational-dynamics, rigid-body-dynamics"
} |
Algorithm to find the saddle point in a Binary tree | Question: How to find a saddle point in a binary tree. where saddle point is a node in a tree whose value = min(the node and all its ancestors) = max ( the node and all its descendants)
Answer: Here is a sketch.
Given a node $u$, let $p_u,\ell_u,r_u,v_u$ denote its parent, its left child, its right child, and the value stored in $u$, respectively. Let $m(u)$ denote the minimum among all the values stored in the ancestors of $u$ ($u$ is an ancestor of itself). Let $M(u)$ denote the maximum among all the values stored in all the descendants of $u$ ($u$ is a descendant of itself).
Notice that $m(u) = \min\{v_u, m(p_u)\}$ and that $M(u)=\max\{ v_u, M(\ell_u), M(r_u) \}$ and hence all values $m(\cdot)$ can be computed in linear time by a preorder visit, while all values $M(\cdot)$ can be computed in linear time by postorder visit.
Once you know $m(u)$ and $M(u)$ for a node $u$, you can check whether $u$ is a saddle. This should be more than enough to get you started. You can fill in the other details by yourself. | {
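Filling in those details might look like the following sketch (the dict-based tree representation and function names are mine, not the answerer's); it computes $m(u)$ on the way down and $M(u)$ on the way up in a single linear-time traversal:

```python
# Two-pass saddle search: ancestor minimum flows down (preorder),
# descendant maximum flows up (postorder); a node is a saddle when
# its value equals both.
def find_saddles(node, ancestor_min=float('inf'), saddles=None):
    if saddles is None:
        saddles = []
    if node is None:
        return float('-inf'), saddles
    m = min(node['value'], ancestor_min)           # m(u): min over ancestors
    left_max, _ = find_saddles(node.get('left'), m, saddles)
    right_max, _ = find_saddles(node.get('right'), m, saddles)
    M = max(node['value'], left_max, right_max)    # M(u): max over descendants
    if m == node['value'] == M:
        saddles.append(node['value'])
    return M, saddles

tree = {'value': 5,
        'left': {'value': 3, 'left': {'value': 2}, 'right': {'value': 4}},
        'right': {'value': 8, 'left': {'value': 6}}}
print(find_saddles(tree)[1])   # [2]: 2 is <= all its ancestors and >= all its descendants
```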
"domain": "cs.stackexchange",
"id": 19155,
"tags": "algorithms, binary-trees"
} |
How does conservation of angular momentum not violate the time invariance of Newtonian mechanics? | Question: Richard Feynman states:
This follows from the fact that if we substitute −t for t in the original differential equation, nothing is changed, since only second derivatives with respect to t appear. This means that if we have a certain motion, then the exact opposite motion is also possible. In the complete confusion which comes if we wait long enough, it finds itself going one way sometimes, and it finds itself going the other way sometimes. There is nothing more beautiful about one of the motions than about the other. So it is impossible to design a machine which, in the long run, is more likely to be going one way than the other, if the machine is sufficiently complicated.
One might think up an example for which this is obviously untrue. If we take a wheel, for instance, and spin it in empty space, it will go the same way forever. So there are some conditions, like the conservation of angular momentum, which violate the above argument. This just requires that the argument be made with a little more care. Perhaps the walls take up the angular momentum, or something like that, so that we have no special conservation laws. Then, if the system is complicated enough, the argument is true. It is based on the fact that the laws of mechanics are reversible.
Richard Feynman 46-3 paragraphs 2 & 3 (https://www.feynmanlectures.caltech.edu/I_46.html).
I am not convinced that, if angular momentum is conserved, that a wheel will eventually (on an enormous enough time span) reverse itself in the absence of other forces or temperature gradients.
If a wheel is spinning alone in empty space, it must conserve its angular momentum such that it can never reverse. Doesn't this violate Feynman's argument? I do not understand his refutation, how can we avoid having "special conservation laws?"
Answer: Feynman isn't saying that a wheel spinning in empty space will eventually reverse itself. As he says, that is an example for which the earlier claim is "obviously untrue," essentially because it is too simple a system.
His argument is that if we add complexity - perhaps we have $10^{23}$ wheels which may exchange angular momentum with each other or with the environment (e.g. the walls, some background radiation field, external particles) - then the fact that the total angular momentum in the universe is conserved is not sufficient to constrain the dynamics of the individual wheels in any meaningful way. When we restrict our focus only to the wheels and not to the surrounding environment with which they may exchange angular momentum, angular momentum is no longer a conserved quantity. | {
"domain": "physics.stackexchange",
"id": 92282,
"tags": "newtonian-mechanics, rotational-dynamics, angular-momentum, conservation-laws"
} |
Validity of Friedmann's equation | Question: Recently, a friend gives me the Friedmann's equations under the following form:
\begin{gathered}
\left(\frac{\dot{a}}{a}\right)^{2}=\frac{8 \pi G}{3 c^{4}} \rho+\frac{\Lambda}{3}-\frac{k}{3 a^{2}}\quad(1) \\
\frac{\ddot{a}}{a}=-\frac{4 \pi G}{c^{4}} P-\frac{1}{2}\left(\frac{\dot{a}}{a}\right)^{2}-\frac{k}{2 a^{2}}+\frac{\Lambda}{2}\quad(2)
\end{gathered}
I have difficulty convincing myself that they are right, since on my side I know them in the form:
\begin{aligned}
&\frac{R^{\prime \prime}}{R}=-\frac{4 \pi G}{3}\left(\rho+\frac{3 p}{c^{2}}\right)+\frac{\Lambda}{3}\quad(3) \\
&\left(\frac{R^{\prime}}{R}\right)^{2}=\frac{8 \pi G \rho}{3}+\frac{\Lambda}{3}-\frac{k}{R^{2}}\quad(4)
\end{aligned}
In order to make the link, I think I have to take $a(t)=\dfrac{R(t)}{R_{0}}$, but even with this convention, I can't manage to recover equations (1) and (2). There are factors and terms that differ between eq. (2) and eq. (3), as well as between eq. (1) and eq. (4).
EDIT: So finally, could someone tell me whether (1) and (2) are wrong and whether (3) and (4) are right?
Answer: (1) and (4) are almost the same, except for factors of $c^4$ and $3$ in the first and third terms on the right hand side.
If you add zero in the form $\frac12(8πGρ/3 + Λ/3 - k/R^2) - \frac12(R'/R)^2$ to the right hand side of (3), you get (2), except for a factor of $c^2$ difference in the first term.
So (1) and (2) are almost right, aside from a wrong factor of $3$ and some misplaced factors of $c$. | {
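The "add zero" step can be verified numerically. This is my own sanity check with arbitrary values in units where $G = c = 1$ (so the misplaced $c$-factors drop out): the bracket from eq. (4) is used as $(R'/R)^2$, and the right-hand sides of (3) and the (2)-pattern then agree identically:

```python
import math

# Numeric check that adding the "zero" built from eq. (4) turns the RHS of
# eq. (3) into the RHS of eq. (2), using arbitrary values with G = c = 1.
G = c = 1.0
rho, p, Lam, k, R = 2.0, 0.5, 0.3, 1.0, 2.0

H2 = 8 * math.pi * G * rho / 3 + Lam / 3 - k / R**2        # (R'/R)^2 from (4)
rhs3 = -(4 * math.pi * G / 3) * (rho + 3 * p / c**2) + Lam / 3          # eq. (3)
rhs2 = -4 * math.pi * G * p / c**2 - H2 / 2 - k / (2 * R**2) + Lam / 2  # eq. (2) pattern

print(abs(rhs3 - rhs2) < 1e-9)   # True: the two forms agree
```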
"domain": "physics.stackexchange",
"id": 87064,
"tags": "cosmology, coordinate-systems, space-expansion, cosmological-constant"
} |
compiling pr2_build_map_gazebo_demo on debian | Question:
The compilation stops on debian/amd64 because the package pr2_mechanism_model requests the file hardware_interface/hardware_interface.h.
Where is this file?
Originally posted by Fabien R on ROS Answers with karma: 90 on 2013-03-06
Post score: 0
Answer:
I installed ros_control from git to get this file.
Originally posted by Fabien R with karma: 90 on 2013-06-13
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 13227,
"tags": "ros-fuerte, debian"
} |
C# AES Encryption | Question: I've been researching AES encryption a bit over the past several days. The official (MSDN) examples I've seen are encrypting and decrypting using the same AES instance. They don't go in to what to do when generating and saving an encrypted value with AES and needing to decrypt it later with another AES instance.
I came up with the following and am wondering if there is anything wrong with it, aside from defaulting the password to a static value (I will be developing something else to manage encryption passwords)? It generates a random salt on encryption and stores it with the encrypted cipher prior to Base64 encoding. This assures that running the encryption twice on the same input does not result in the same cipher text.
public static string Encrypt(string plainText, string password = "BadgersAreAwesome")
{
if (plainText == null)
throw new ArgumentNullException("plainText");
if (password == null)
throw new ArgumentNullException("password");
// Will return the cipher text
string cipherText = "";
// Utilizes helper function to generate random 16 byte salt using RNG
byte[] salt = GenerateSaltBytes(SaltSize);
// Convert plain text to bytes
byte[] plainBytes = Encoding.Unicode.GetBytes(plainText);
// create new password derived bytes using password/salt
using (Rfc2898DeriveBytes pdb = new Rfc2898DeriveBytes(password, salt))
{
using (Aes aes = AesManaged.Create())
{
// Generate key and iv from password/salt and pass to aes
aes.Key = pdb.GetBytes(aes.KeySize / 8);
aes.IV = pdb.GetBytes(aes.BlockSize / 8);
// Open a new memory stream to write the encrypted data to
using (MemoryStream ms = new MemoryStream())
{
// Create a crypto stream to perform encryption
using (CryptoStream cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
{
// write encrypted bytes to memory
cs.Write(plainBytes, 0, plainBytes.Length);
}
// get the cipher bytes from memory
byte[] cipherBytes = ms.ToArray();
// create a new byte array to hold salt + cipher
byte[] saltedCipherBytes = new byte[salt.Length + cipherBytes.Length];
// copy salt + cipher to new array
Array.Copy(salt, 0, saltedCipherBytes, 0, salt.Length);
Array.Copy(cipherBytes, 0, saltedCipherBytes, salt.Length, cipherBytes.Length);
// convert cipher array to base 64 string
cipherText = Convert.ToBase64String(saltedCipherBytes);
}
aes.Clear();
}
}
return cipherText;
}
public static string Decrypt(string cipherText, string password = "BadgersAreAwesome")
{
if (cipherText == null)
throw new ArgumentNullException("cipherText");
if (password == null)
throw new ArgumentNullException("password");
// will return plain text
string plainText = "";
// get salted cipher array
byte[] saltedCipherBytes = Convert.FromBase64String(cipherText);
// create array to hold salt
byte[] salt = new byte[SaltSize];
// create array to hold cipher
byte[] cipherBytes = new byte[saltedCipherBytes.Length - salt.Length];
// copy salt/cipher to arrays
Array.Copy(saltedCipherBytes, 0, salt, 0, salt.Length);
Array.Copy(saltedCipherBytes, salt.Length, cipherBytes, 0, saltedCipherBytes.Length-salt.Length);
// create new password derived bytes using password/salt
using (Rfc2898DeriveBytes pdb = new Rfc2898DeriveBytes(password, salt))
{
using (Aes aes = AesManaged.Create())
{
// Generate key and iv from password/salt and pass to aes
aes.Key = pdb.GetBytes(aes.KeySize / 8);
aes.IV = pdb.GetBytes(aes.BlockSize / 8);
// Open a new memory stream to write the encrypted data to
using (MemoryStream ms = new MemoryStream())
{
// Create a crypto stream to perform decryption
using (CryptoStream cs = new CryptoStream(ms, aes.CreateDecryptor(), CryptoStreamMode.Write))
{
// write decrypted data to memory
cs.Write(cipherBytes, 0, cipherBytes.Length);
}
// convert decrypted array to plain text string
plainText = Encoding.Unicode.GetString(ms.ToArray());
}
aes.Clear();
}
}
return plainText;
}
Answer:
You're obtaining more than 20 bytes from PBKDF2-HMAC-SHA-1 and the attacker doesn't need the data from the second block (12 IV bytes), so your code slows down defenders by a factor 2 without affecting attackers.
generate random 16 bit salt using RNG
16 bits is very short. You should use 16 bytes or 128 bits. I suspect this is a typo in the comment, since Rfc2898DeriveBytes rejects salts shorter than 8 bytes.
1000 iterations of PBKDF2-HMAC-SHA-1 is pretty low. I'd use at least 10000.
You don't have a MAC, leaving you open to active attacks, such as padding oracles
if you use aes.CreateEncryptor().TransformFinalBlock you can throw out all those streams
I'd consider using UTF-8 over UTF-16. It's shorter for most non-Asian texts.
You're using password based encryption. That can be the right choice, but depending on your application you should consider:
Key based encryption, with the key stored in a password-encrypted key file (similar to SSH keys).
This means you only need to run the expensive key derivation operation when opening the password file, instead of per message. So you can probably afford more iterations, improving security.
Key based encryption with a randomly generated key
An encrypted network protocol like TLS
Consider using a better password hash, like bcrypt or scrypt and/or a faster (native) implementation with more iterations. That way you strengthen the password more, making password guessing attacks more expensive.
Rfc2898DeriveBytes generates a random salt if you pass in the length of the salt, so you don't need your own method. | {
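Several of these points can be sketched together. The following is my own Python-stdlib construction (not the poster's C#, and not a drop-in fix): a single PBKDF2 call sized to exactly the bytes needed, a random 16-byte salt, a high iteration count, and an encrypt-then-MAC tag over the ciphertext so tampering is detected before decryption:

```python
import hashlib, hmac, os

# One PBKDF2 run yields both the encryption key and a separate MAC key;
# the MAC covers salt + ciphertext (encrypt-then-MAC).
def derive_keys(password: bytes, salt: bytes, iterations: int = 100_000):
    material = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=64)
    return material[:32], material[32:]              # AES key, MAC key

salt = os.urandom(16)
enc_key, mac_key = derive_keys(b"BadgersAreAwesome", salt)
ciphertext = b"...AES-CBC output would go here..."   # placeholder, no real AES
tag = hmac.new(mac_key, salt + ciphertext, hashlib.sha256).digest()
print(len(enc_key), len(mac_key), len(tag))          # 32 32 32
```

On decryption you would recompute the tag with hmac.compare_digest before touching the ciphertext, which closes the padding-oracle avenue mentioned above.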
"domain": "codereview.stackexchange",
"id": 11803,
"tags": "c#, .net, security, cryptography, aes"
} |
Query rosparam for public/private status | Question:
Can the rosparam command line tool tell me whether a parameter is private or public? If so, how?
Originally posted by dinosaur on ROS Answers with karma: 233 on 2017-08-18
Post score: 0
Answer:
"Private" parameters aren't really "private." The concept of private parameters is just useful for bookkeeping. The "private" namespace of a node is just the namespace that has the same path as the node. The parameter server doesn't hide private parameters from other nodes.
You could, perhaps, infer that a parameter is private if there exists a node whose ROS graph path is a prefix of the parameter. However, this would be prone to errors. For example, the following launch file is valid:
<launch>
<node name="my_node" pkg="my_pkg" type="my_exe" />
<group ns="my_node">
<param name="my_private_param" value="foo"/>
<node name="my_inception_node" pkg="my_pkg" type="my_other_exe">
<param name="my_super_private_param" value="foo"/>
</node>
</group>
<node name="another_node" pkg="my_pkg" type="yet_another_exe"/>
</launch>
The parameter /my_node/my_private_param is private to /my_node, but it's public to /my_node/my_inception_node. Even /another_node can see that parameter by querying the fully-qualified path, i.e. /my_node/my_private_param.
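The same point can be shown with a toy resolver. This helper is my own illustration (not a ROS API): a "private" name is just the node's own graph path used as a namespace prefix, so any node that writes out the fully-qualified key can reach it:

```python
# Toy version of ROS graph-name resolution: "~name" resolves against the
# node's own path, "/name" is already absolute, anything else resolves
# against the node's parent namespace. No access control anywhere.
def resolve(name: str, node_path: str) -> str:
    if name.startswith("~"):                  # "private" name
        return node_path + "/" + name[1:]
    if name.startswith("/"):                  # already fully qualified
        return name
    ns = node_path.rsplit("/", 1)[0] or "/"   # relative: parent namespace
    return ns.rstrip("/") + "/" + name

print(resolve("~my_private_param", "/my_node"))              # /my_node/my_private_param
print(resolve("/my_node/my_private_param", "/another_node")) # same key, still reachable
```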
Originally posted by Ed Venator with karma: 1185 on 2017-08-18
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by gvdhoorn on 2017-08-19:
While there is no explicit hiding, I don't believe that you can say that:
parameter /my_node/my_private_param is private to /my_node, but it's public to /my_node/my_inception_node.
private parameters are private to the node by definition. The fact that my_inception_node lives in the ..
Comment by gvdhoorn on 2017-08-19:
.. same namespace doesn't change that fact.
If you create a private NodeHandle in my_inception_node, it will (should) not be able to find my_private_param.
Comment by Ed Venator on 2017-08-19:
My point is that there is such a thing as a private NodeHandle, but no such thing as a private parameter. You can create a NodeHandle scoped to any namespace you like. A private NodeHandle is just a special case where you create a NodeHandle scoped to the path of the node.
Comment by Ed Venator on 2017-08-19:
You're correct that a private NodeHandle in my_inception_node can't find /my_node/my_private_param. That's what I meant when I called it "public" in that context. | {
"domain": "robotics.stackexchange",
"id": 28641,
"tags": "rosparam"
} |
Where are the map files stored when rosrun map_server map_saver is run | Question:
I have used this command to store map
rosrun map_server map_saver static_map:=dynamic_map
This is the output i got:
Waiting for the map
Received a 1984 X 1984 map @ 0.050 m/pix
Writing map occupancy data to map.pgm
Writing map occupancy data to map.yaml
Done
Where are these maps physically stored? If i want to use them again how do i load them?
Originally posted by Krush on ROS Answers with karma: 1 on 2020-12-28
Post score: 0
Answer:
Hi,
By default the maps are stored in the same directory where you run the command, with the names map.pgm and map.yaml.
If you want to specify a different path and filename, you can use the -f option. There are quite a few more parameters, and instructions on how to use map_server, on its wiki page: http://wiki.ros.org/map_server
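For reference, the generated map.yaml is a small text file that points at the image; a typical one looks like the following (the resolution matches the 0.050 m/pix from the question's output, but the origin and threshold values here are illustrative, not taken from the question). To load the map again later, run the map_server node with this yaml file as its argument.

```yaml
image: map.pgm
resolution: 0.050000
origin: [-49.600000, -49.600000, 0.000000]
negate: 0
occupied_thresh: 0.65
free_thresh: 0.196
```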
Originally posted by Mario Garzon with karma: 802 on 2020-12-29
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by bekirbostanci on 2020-12-29:
map yaml and pgm files do not have to be in the same path. If you edit yaml file you can see pgm file path in there. | {
"domain": "robotics.stackexchange",
"id": 35911,
"tags": "ros, map-saver"
} |
Calculating frequency and damping ratio from transfer function given eigenvalues | Question: I have the following standard transfer function for a damped linear oscillator:
$$G(s) = \dfrac{\omega_0^2}{s^2 + 2\zeta\omega_0s + \omega_0^2}$$
Now I have two eigen values at locations $-100 \pm 100i$.
I want to calculate the damping coefficient, $\zeta$, and natural frequency $\omega_0$.
I thought that the characteristic polynomial of matrix $A$ from the linear system in state space form is the denominator of $G(s)$. However, when solving $s^2 + 2\zeta\omega_0s + \omega_0^2=0$ for $s = -100 - 100i$, I get an equation with two unknowns right? What theoretical point am I missing here? how should I go about this?
Thanks in advance
Answer: If I understand you correctly your system has two complex conjugate poles at $p=-100+100i$ and $p^*=-100-100i$ (where $^*$ denotes complex conjugation). Consequently, your denominator polynomial can be written as
$$\begin{align}(s-p)(s-p^*)&=s^2-(p+p^*)s+|p|^2\\&=s^2-2\text{Re}\{p\}s+|p|^2\\&=s^2-2|p|\cos(\phi)s+|p|^2\end{align}\tag{1}$$
where $\phi$ is the pole angle: $p=|p|e^{i\phi}$. For the given poles you have $\phi=3\pi/4$. Comparing $(1)$ with the denominator in your question immediately gives $\omega_0=|p|$ and $\zeta=-\cos\phi$. | {
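For the concrete poles in the question, those two formulas can be evaluated directly (my own numeric check, not part of the answer):

```python
import cmath, math

# For pole p = -100 + 100i: omega_0 = |p| and zeta = -cos(angle(p)).
p = complex(-100, 100)
omega0 = abs(p)                      # sqrt(100^2 + 100^2) ~ 141.42 rad/s
zeta = -math.cos(cmath.phase(p))     # phase is 3*pi/4, so zeta ~ 0.7071
print(round(omega0, 2), round(zeta, 4))   # 141.42 0.7071
```

As a cross-check, $2\zeta\omega_0 = 200$ and $\omega_0^2 = 20000$, and the roots of $s^2 + 200s + 20000$ are indeed $-100 \pm 100i$.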
"domain": "dsp.stackexchange",
"id": 3179,
"tags": "linear-systems, transfer-function, control-systems"
} |
Error propagation in cone width for kinematic neutron imaging | Question: I'm trying to figure out the error in the opening angle for a cone created with kinematic neutron imaging. The angle is defined as:
$$
\theta = \sin^{-1}\left(\sqrt{\frac{Ep}{E}}\right)\,.
$$
And I want to find the error in this angle. I don't know how to propagate error through an inverse sine function so I made a substitution where:
$$
u = \sin^2(\theta)\,,\qquad u = \frac{Ep}{E}\,.
$$
My work is attached, but my delta-theta at the end doesn't have units of radians or degrees. Where did I go wrong?
Answer: The rule of thumb is to just take the differential of whatever equation relates the different variables. That tells you how small changes in the variables must be related.
I.e. if your new variable $q$ is related to the old variables $x$, $y$, and $z$ by $q = f(x,y,z)$, then
$$
dq = \frac{\partial f}{\partial x}dx +\frac{\partial f}{\partial y}dy+ \frac{\partial f}{\partial z}dz.
$$
I think the key for you will be the differential of $\arcsin(x)$:
$$
d \arcsin(x) = \frac{dx}{\sqrt{1-x^2}}.
$$
and that of $\sqrt{E_p/E}$:
$$
d\sqrt{\frac{E_p}{E}} = \frac{dE_p}{2\sqrt{E_p E}} -\frac{1}{2}\sqrt{\frac{E_p}{E}} \frac{dE}{E}
$$
Try using those to expand out the differential of $\theta=\arcsin\left(\sqrt{\frac{E_p}{E}}\right)$. | {
"domain": "physics.stackexchange",
"id": 57374,
"tags": "error-analysis, imaging"
} |
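The chain-rule recipe in the answer above can be sanity-checked numerically. This is a sketch (the energy values are arbitrary placeholders); it also illustrates why the propagated $\delta\theta$ comes out in radians — both pieces of the differential are dimensionless ratios of energies:

```python
import math

def theta(Ep, E):
    return math.asin(math.sqrt(Ep / E))

# Analytic differential assembled from the two pieces in the answer:
# d(theta) = [ dEp/(2*sqrt(Ep*E)) - (1/2)*sqrt(Ep/E)*dE/E ] / sqrt(1 - Ep/E)
def dtheta(Ep, E, dEp, dE):
    inner = dEp / (2 * math.sqrt(Ep * E)) - 0.5 * math.sqrt(Ep / E) * dE / E
    return inner / math.sqrt(1 - Ep / E)

# Compare against central finite differences at placeholder energies.
Ep, E, h = 1.0, 4.0, 1e-6
num_dEp = (theta(Ep + h, E) - theta(Ep - h, E)) / (2 * h)
num_dE = (theta(Ep, E + h) - theta(Ep, E - h)) / (2 * h)
print(num_dEp, dtheta(Ep, E, 1, 0))   # both ≈ 0.2887
print(num_dE, dtheta(Ep, E, 0, 1))    # both ≈ -0.0722
```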
YCbCr to jpeg-YCbCr | Question: I'm sampling an image in YCbCr from an ov7670 camera and using the jpegant library to encode a JPEG file.
Whenever I do it directly using Y,Cb,Cr values from my camera, I get a pink-violet version of the image.
As far as I can understand on Wikipedia, there are different YCbCr standards, so I guess my camera use a different YCbCr standard.
Since the jpegant library comes with an RGB example, if I convert my YCbCr values to RGB and then RGB back to YCbCr (and then JPEG) through the provided example, I get a decent result.
Is there any way to get jpeg-YCbCr from my camera YCbCr (or any other YCbCr standard)?
EDIT:
The camera sends me YCbCr in 4:2:2 format (that means that two
sequential pixel share the same Cb and Cr channels), I proceed to
downsample them to 4:2:0 ;
Encode those samples into jpeg and get a pink-violet image ;
I convert camera YCbCr to RGB this way :
R = Y + 1.402 * (Cr - 128)
G = Y + 1.772 * (Cb - 128)
B = Y - 0.34414 * (Cb - 128) - 0.71414 * (Cr- 128)
Those values are good, I checked.
I then convert them back into YCbCr using the formula provided with the library example:
Y' = 0.299*R + 0.587*G + 0.114*B
Cb' = -0.1687*R - 0.3313*G + 0.5*B + 128
Cr' = 0.5*R - 0.4187*G - 0.0813*B + 128
With new values, it encodes correctly .
Example YCbCr of values :
before:
156 144 145
after:
177 105 129
Answer: The matrix that will convert your camera provided YCbCr samples to normalized Y'Cb'Cr' values can be produced by the following Matlab code
a = 1.402; b = 1.772; c=-0.34414; d=-0.71414;
M2 = [1 0 a a ; 1 b 0 b; 1 c d (c+d); 0 0 0 -1];
M1 = [0.299 0.587 0.114 0; -0.1687 -0.3313 0.5 1; 0.5 -0.4187 -0.0813 1];
M = M1*M2
You should provide an input vector of the form:
vin = [Y Cb Cr -128]
Then your output will be:
vout = M*vin;
For example vin = [156 144 145 -128]' produces the output :
M*[156 144 145 -128]'
ans =
177.7573
105.7629
129.4807
Rounding to integer will produce your expected result.
$$
M_1=
\begin{bmatrix}
0.2990 &~~~0.5870 &~~~0.1140 &0.0000\\
-0.1687 &-0.3313 &~~~0.5000 &1.0000\\
0.5000 &-0.4187 &-0.0813 &1.0000\\
\end{bmatrix}
$$
$$
M_2=
\begin{bmatrix}
1.000 &0.0 &1.402 &1.402\\
1.000 &1.772 &0.000 &1.772\\
1.000 &-0.34414 &-0.71414 &-1.0583\\
0.000 &0.000 &0.000 &-1.000
\end{bmatrix}
$$ | {
"domain": "dsp.stackexchange",
"id": 5412,
"tags": "jpeg"
} |
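The answer's Matlab snippet translates directly to numpy. This sketch rebuilds $M = M_1 M_2$ and reproduces the worked example from the answer:

```python
import numpy as np

# Coefficients from the question's camera-YCbCr -> RGB formulas
a, b, c, d = 1.402, 1.772, -0.34414, -0.71414
M2 = np.array([[1, 0, a, a],
               [1, b, 0, b],
               [1, c, d, c + d],
               [0, 0, 0, -1.0]])
# Standard RGB -> YCbCr matrix from the library example (offsets folded in)
M1 = np.array([[ 0.299,   0.587,   0.114,  0],
               [-0.1687, -0.3313,  0.5,    1],
               [ 0.5,    -0.4187, -0.0813, 1]])
M = M1 @ M2

vin = np.array([156, 144, 145, -128])   # [Y Cb Cr -128] from the question
vout = M @ vin
print(vout)   # ≈ [177.757, 105.763, 129.481]
```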
Terminology of Orbits | Question: A hopefully simple terminology question: When you have an object that orbits another object in space, what do you call the object being orbited in relation to the object orbiting?
For instance, you might call the object orbiting the "satellite" of the orbited object, but what would you call the object being orbited?
Orbitee is all I've been able to come up with, but it doesn't seem quite right ;)
Answer: When one body has a much larger mass, so its motion is negligible, it can be called the central body (even for eccentric orbits, where the central body is at the focus of the ellipse, and far from the centre).
This is the terminology used by Wikipedia editors
The central body in an orbital system can be defined as the one whose mass (M) is much larger than the mass of the orbiting body (m)...
Also by Glasgow university
The most straightforward orbit calculations occur when the central body is much more massive than the orbiting body.
and elsewhere
The angular momentum of a satellite depends on both the mass of orbiting satellite and central body as well as the radius of the orbit.
Normally, in a specific situation "planet", or "star" can be used. | {
"domain": "astronomy.stackexchange",
"id": 2979,
"tags": "orbit, terminology"
} |
Can tidal forces significantly alter the orbits of satellites? | Question: I would assume that there are other larger, more significant, forces acting on artificial satellites, but can tidal forces drastically alter the orbit of a satellite over time?
I was thinking this could especially be an issue for a satellite in geostationary orbit, because they have to be extremely precisely positioned. However, I could see this being an issue for satellites in other orbits as well, just not to the same degree.
Answer: Tidal force acting on a natural satellite, like the moon around the earth, is the result of the deformability of the earth as the moon affects it; slowly, the moon recedes from the earth. In general these tidal forces can be accelerating or decelerating:
their orbital period is shorter than their planet's rotation. In other words, they revolve faster around the planet than the planet rotates. In this case the tidal bulges raised by the moon on their planet lag behind the moon, and act to decelerate it in its orbit.
The size of artificial satellites is such that this type of effect is very small in disturbing the orbit. After all, the moon, with all its size, is still here and will be in orbit forever, though at a distance, unless there is a collision with a third body or the sun turns nova.
The energy losses due to friction with the matter in their orbit (there is no complete vacuum) are more important and will mask any tidal effect, since the orbits are continually corrected for those losses, as Whatroughbeast says in his/her answer.
The tidal bulges due to the Moon on the earth do affect satellites and have to be taken into account as discussed here. | {
"domain": "physics.stackexchange",
"id": 55119,
"tags": "orbital-motion, tidal-effect"
} |
Is there a better flow for returning the response? | Question: So I have the getSessionNamespace method in the class below, in which I am trying to handle different scenarios but return one JSON response. My solution is to store the required message and status code in a response array and vary it depending on the situation.
Is there a cleaner more obvious method?
<?php
namespace Drupal\auth\Controller;
use Drupal\Core\Controller\ControllerBase;
use Drupal\auth\Service\AuthGroupService;
use Symfony\Component\HttpFoundation\Cookie;
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\DependencyInjection\ContainerInterface;
use Symfony\Component\HttpFoundation\JsonResponse;
use GuzzleHttp\Exception\ClientException;
use Symfony\Component\HttpKernel\Exception\HttpException;
/**
* Class SessionController
* @package Drupal\my_moduke\Controller
*/
class SessionController extends ControllerBase
{
/**
* @var AuthGroupService
*/
protected static $authGroupService;
/**
* @param ContainerInterface $container
*
* @return static
*/
public static function create(ContainerInterface $container)
{
self::$authGroupService = $container->get('auth.auth_group');
return parent::create($container);
}
/**
* @param Request $request
* @param bool $internal
*
* @return mixed|JsonResponse
*/
public function getSessionNamespace(Request $request, $internal = false)
{
try {
if (!$request->headers->has('host')) {
throw new \HttpHeaderException('Header \'host\' not found.');
}
$host = $request->headers->get('host');
// Query datastore for the authGroup data for the given host.
$result = $this->getAuthGroupData($host);
$configId = \Drupal::config('auth.settings')->get('DS5.auth_group_id');
if (!$this->allowedAccess($configId, $result['authGroupId'])){
throw new HttpException(403, 'Not allowed.');
}
$response = [
'result' => $result,
'status' => 200
];
} catch (ClientException $e) {
$response = [
'result' => 'The endpoint responded with a ' . $e->getCode(),
'status' => $e->getCode()
];
} catch (\Exception $e) {
$response = [
'result' => $e->getMessage(),
'status' => $e->getStatusCode()
];
}
if ($internal) {
return $response['result'];
}
$jsonResponse = new JsonResponse($response['result'], $response['status']);
if (!$_COOKIE[$response['result']['cookieName']] && $response['status'] === 200) {
$this->createSessionCookie($jsonResponse, $response['result']);
}
return $jsonResponse;
}
/**
* @param JsonResponse $response
* @param array $values
*
* @return JsonResponse
*/
private function createSessionCookie(JsonResponse $response, array $values)
{
$dateTime = new \DateTime('+'. $values['sessionTimeToLiveSeconds'] .' seconds');
$sessionCookie = new Cookie(
$values['cookieName'],
session_id(),
$dateTime,
'/',
$values['domain']
);
$response->headers->setCookie($sessionCookie);
}
/**
* @param $configId
* @param $authGroupId
*
* @return bool
*/
private function allowedAccess($configId, $authGroupId)
{
return $configId === $authGroupId;
}
/**
* @param $host
*
* @return mixed
*/
private function getAuthGroupData($host)
{
// Fetch from authGroup datastore endpoint.
return self::$authGroupService->fetchAuthGroupData($host);
}
}
Answer: I think this generally looks pretty good. I would consider two things:
First I don't like the "internal" option here which leads to mixed return values. Have you considered having getSessionNameSpaceInternal() which would call this method and extract the desired return value from the JsonResponse?
Doing the above would allow you to directly return from catch blocks like this:
} catch (ClientException $e) {
return new JsonResponse(
'The endpoint responded with a ' . $e->getCode(),
$e->getCode()
);
} catch (\Exception $e) {
return new JsonResponse( $e->getMessage(), $e->getStatusCode() );
} | {
"domain": "codereview.stackexchange",
"id": 24218,
"tags": "php"
} |
move_group plan for 2 or more waypoints then execute | Question:
Hi all,
I want to move arm to pose1 and pose2, the straightforward way is
move_group.setStartState(*group.getCurrentState());
move_group.setPoseTarget(pose1);
move_group.plan(plan1);
move_group.execute(plan1);
move_group.setStartState(*group.getCurrentState());
move_group.setPoseTarget(pose2);
move_group.plan(plan2);
move_group.execute(plan2);
The problem is that if plan1 succeeds, and plan2 fails, plan1 is still executed. Is there a way to first check plan1 and plan2? If both succeed, then execute the plans.
I saw that the argument of setPoseTargets() can be a list of poses, but the planner will only choose one randomly from the list as the goal...
Originally posted by xibeisiber on ROS Answers with karma: 137 on 2020-07-01
Post score: 0
Answer:
The following works. (reference)
#include<moveit/robot_trajectory/robot_trajectory.h>
#include<moveit/robot_state/robot_state.h>
group.setStartState(*group.getCurrentState());
group.setPoseTarget(target_pose);
success = group.plan(plan);
moveit::core::RobotModelConstPtr robot_model=group.getRobotModel();
robot_trajectory::RobotTrajectory trajectory(robot_model, planning_group);
trajectory.setRobotTrajectoryMsg(*(group.getCurrentState()),plan.trajectory_);
group.setStartState(*trajectory.getLastWayPointPtr());
group.setPoseTarget(target_pose2);
success2 = group.plan(plan2);
if(success and success2){
success = group.execute(plan);
if(success)
success2 = group.execute(plan2);
}
Originally posted by xibeisiber with karma: 137 on 2020-07-02
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 35214,
"tags": "ros, moveit, ros-kinetic, move-group"
} |
Pair together light curve and radial velocity data of specified star | Question: I would like to create a web application that works with light curves and radial velocities of stars with exoplanets. I found a NASA bulk data download with both light curves and radial velocities (it's exactly what I'm looking for). However, I don't know how to pair together the light curve and radial velocity of a single star (without downloading the whole dataset).
For example, using the script below I can download the light curve of KIC 10666592, where there is 1 exoplanet:
wget -O 'kplr010666592-2009131110544_slc.fits' 'http://exoplanetarchive.ipac.caltech.edu:80/data/ETSS//Kepler/005/755/19/kplr010666592-2009131110544_slc.fits' -a search_33681064.log
Now, how can I download radial velocity data for this specific star? Scripts for radial velocity look like (there is no KIC in URL):
wget -O UID_0000522_RVC_001.tbl http://exoplanetarchive.ipac.caltech.edu:80/data/ExoData/0000/0000522/data/UID_0000522_RVC_001.tbl -a RADIAL.log
Is there any way to connect KIC (Kepler input catalog) and radial velocity UID?
Answer: An excellent page to get most or all names for a star is Simbad which also happens to know the Kepler IDs (KID).
On the other hand, each of the RV data files contains info in its header on the star's identifier, e.g.:
\STAR_ID='HD 4628'
So using Simbad you should be able to do a cross-match: it includes all alias names for stars, so query for one known star name and look for identifiers found in the list of RV-observed stars, or vice versa. Simbad can be queried via scripts, so you can write a short programme to do this task for you.
However I'm not 100% convinced that you necessarily find many matches between those two data sets; it's easier to do photometry than spectroscopy. | {
"domain": "astronomy.stackexchange",
"id": 4484,
"tags": "exoplanet, kepler, nasa, light-curve, radial-velocity"
} |
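The cross-match itself is just a lookup through alias sets. A minimal sketch in Python — note that the alias table below is entirely made up for illustration (in practice each row would come from a scripted Simbad identifier query):

```python
# Hypothetical alias table keyed by a canonical name; the pairings here are
# invented placeholders, NOT real Simbad data.
alias_table = {
    "Star A": {"HD 4628", "KIC 1234567"},
    "Star B": {"HD 209458", "KIC 7654321"},
}

def kic_for_star_id(star_id):
    """Return the KIC alias of the star matching star_id, or None."""
    for aliases in alias_table.values():
        if star_id in aliases:
            for name in aliases:
                if name.startswith("KIC "):
                    return name
    return None

print(kic_for_star_id("HD 4628"))   # "KIC 1234567" (made-up pairing)
```

The same lookup works in the other direction (KIC to `STAR_ID`), which is what you would run over the headers of the downloaded RV tables.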
Would someone be able to see where they are headed once they crossed the event horizon? | Question: Let us say an explorer was studying a supermassive black hole and ended up blundering past the event horizon. From my understanding, which may be wrong, the only paths allowed after crossing the event horizon are paths that bring the object closer to the singularity. Therefore, if the hapless explorer faced the direction of travel, would none of the photons ahead of the explorer be able to reach the explorer's eyes?
Answer: If there is other infalling matter, then certainly they may be able to see it. A simple example is that if they hold their own hand in front of them, they will still be able to see their hand. This is an example of the equivalence principle, one form of which states that spacetime is always locally flat, so that on small enough scales, gravitational effects are not observable.
The infalling observer may also be able to see things at fairly large distances, if those things fell in at a late enough time. The easy way to tell that this is true is to look at a Penrose diagram. | {
"domain": "physics.stackexchange",
"id": 54918,
"tags": "black-holes, event-horizon"
} |
Visualising the skeleton from openni_tracker in rviz | Question:
I'm trying to display the skeleton that the openni_tracker detects in rviz. I just installed pi_tracker and ran the skeleton_tracker node, which uses its own program for visualisation, but I know you can track the skeleton in rviz.
The TF display in rviz should be ok, it reports that it successfully found all the /tf-s used for different body parts, but now I don't know how to represent them and visualise them with markers, but I can't find any published topics with the message type that the Marker needs.
First thing I'd like to ask is: do I need another node or something that could publish the topics readable by the Marker?
Second: Do I need to use a Marker display type or something else, like MarkerArray instead?
Originally posted by kameleon on ROS Answers with karma: 68 on 2012-03-08
Post score: 0
Original comments
Comment by kameleon on 2012-03-14:
Now I tried again, and this time I can't even get the tf's to work. When I rosrun openni_tracker, it works, but in rviz, there are no fixed frames available. When I roslaunch openni_camera.launch, I can find the /openni_camera frame, but the TF display can't find the tracker's body parts transforms
Comment by kameleon on 2012-03-14:
I don't remember what exactly I did last time that got it working. I bet that when it worked if I had zoomed out as Pi robot says, I would definitely be able to see the /tf's.
Answer:
You can visualize tf by setting the Fixed Frame to /openni_camera or something like /head in RViz! It shows every part of the tracked human with a tf related to that part!
Originally posted by Alireza with karma: 717 on 2012-03-08
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Pi Robot on 2012-03-08:
@kameleon If you still don't see the TF frames after following @Alireza's suggestion, make sure you zoom way out in RViz using your mouse wheel--I once found that the frames were just out of the default view. | {
"domain": "robotics.stackexchange",
"id": 8526,
"tags": "ros, rviz, openi-tracker, pi-tracker"
} |
Which force does a weighing scale measure? | Question: I have come across several answers on the internet, which address the question "Does a weighing scale measure mass or weight?" but I assure you that this is not one such question.
My doubt is this; while solving problems involving free body diagrams of weighing machines, when asked to find the reading of the scale, I'm a bit confused; exactly which force does the scale measure? The $mg$ downwards, due to the weight of the body on the scale alone or the normal force? I need this concept to solve problems such as the one that the following picture describes, in which I am required to find the reading of the weighing scale marked (1).
Please help! Much thanks in advance :) Regards.
Answer: Generally, a scale will measure the normal force it supplies to the object resting on it. In the special case where the scale is stationary (as it appears in your picture), this is equal to $mg$, or the weight of the object.
If the system is accelerating, the normal force (and thus the reading of the scale) will increase or decrease appropriately. However, this normal force is no longer equal to the weight. | {
"domain": "physics.stackexchange",
"id": 33310,
"tags": "newtonian-mechanics, forces, weight"
} |
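The answer's distinction fits in one line. A sketch, assuming an upward-positive vertical acceleration $a$ of the scale-plus-object system:

```python
# Scale reading = normal force N = m * (g + a); a = 0 recovers N = m*g.
def scale_reading(m, g=9.81, a=0.0):
    return m * (g + a)

print(scale_reading(70))        # ≈ 686.7 N (stationary: reading equals weight)
print(scale_reading(70, a=2))   # ≈ 826.7 N (accelerating upward: reads heavier)
```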
Mono16 images displayed as 8 bits in rostopic echo | Question:
Hi there !
I generate a 16-bit image (stored in an OpenCV CV_16U) that I convert to a msg using cv_bridge with the right encoding (mono16). When I do a rostopic echo on the image topic, I get this kind of output:
header:
  seq: 481
  stamp:
    secs: 1456138732
    nsecs: 716579801
  frame_id: ''
height: 250
width: 1
encoding: mono16
is_bigendian: 0
step: 2
data: [0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 2, 0, 7, 0, 7, 0,
9, 0, 9, 0, 7, 0, 9, 0, 7, 0, 9, 0, 9,
0, 9, 0, 7, 0, 11, 0, 7, 0, 9, 0, 11,
0, 9, 0, 9, 0, 9, 0, 13, 0, 9, 0, 11,
0, 11, 0, 9, 0, 11, 0, 11, 0, 11, 0,
13, 0, 11, 0, 11, 0, 13, 0, 11, 0, 13,
0, 13, 0, 13, 0, 11, 0, 13, 0, 14, 0,
13, 0, 13, 0, 14, 0, 14, 0, 13, 0, 16,
0, 14, 0, 14, 0, 16, 0, 14, 0, 16, 0,
14, 0, 192, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0]
so the image appears as mono16, but the values displayed never go over 255 and there are 0s appearing between values that I can see are wrong. So looking at this, it looks like it's displaying the image as an 8-bit image instead of a 16-bit image. I can then read the image in my node and the data looks fine, so it's just a display issue really.
I haven't seen any mention of this in any answer here, so I suspect it's a bug. Am I the only one to observe this?
Configuration :
Ubuntu 14.04 LTS - ros-indigo full-desktop
Thanks
Originally posted by Thomas Guerneve on ROS Answers with karma: 11 on 2016-02-22
Post score: 1
Answer:
This is just the encoding of the image data as a byte array. I would assume you'll find twice as many byte entries here as you have image pixels. You just have to interpret each pair of bytes as one 16-bit pixel.
Originally posted by dornhege with karma: 31395 on 2016-02-22
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Thomas Guerneve on 2016-02-24:
Absolutely, as I said I can read the image and use it without any issue, but it should be displayed properly in rostopic echo, it's handy for debugging...
Comment by dornhege on 2016-02-24:
It is displayed correctly in rostopic - in the sense that rostopic shows you, what exactly is send as the message. What you are looking for is a semantic interpretation of messages. rostopic cannot do that as it would need to know how to interpret any message. It's also the tool to show raw data. | {
"domain": "robotics.stackexchange",
"id": 23865,
"tags": "ros, rostopic-echo, rostopic"
} |
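To illustrate the answer: with `is_bigendian: 0` each pair is little-endian, so a run like `..., 2, 0, 7, 0, 9, 0, ..., 192, 0` in the echoed `data` decodes to pixel values 2, 7, 9, ..., 192. A sketch using only the standard library (the byte string below is a short excerpt-like placeholder, not the full message):

```python
import struct

# Bytes as they appear in the echoed `data` field (LSB first, per is_bigendian: 0)
data = bytes([0, 0, 2, 0, 7, 0, 9, 0, 192, 0])   # 5 mono16 pixels, step = 2
is_bigendian = 0

# One unsigned 16-bit value ('H') per byte pair, with the matching byte order
fmt = ('>' if is_bigendian else '<') + 'H' * (len(data) // 2)
pixels = struct.unpack(fmt, data)
print(pixels)   # (0, 2, 7, 9, 192)
```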
Electrical field between two conductor halves inside a capacitor | Question: If we place a conductor between the plates of a capacitor, the conductor reaches an electrostatic equilibrium with the surrounding electric field. At this equilibrium state, the charges within the conductor have redistributed such that the electric field inside the conductor is nullified. Now, what happens if we separate this conductor into two halves, each containing the redistributed charges corresponding to one side of the conductor (meaning we have one half that is positively charged and one half that is negatively charged). We did not take the conductors out of the field. Would there be an electric field between the separated halves of the conductor if we look at the complete system (Capacitor and its field and conductors and their interaction)? My book says there is no field because it would cancel out the external field of the capacitor but I always thought that electric field lines always end on charges, which would not make a superposition possible.
Answer: If I've understood you aright, the 'final' system consists of 4 flat conducting plates in 'stack' configuration but with gaps between them. Let's call them A, B, C and D. A and D are the original capacitor plates, with (I assume) equal and opposite charges (call them $Q$ and $-Q$ on their 'inner' surfaces (those facing B and C). B and C will have 'induced' charges, $-Q$ and $Q$ on their outer faces (those facing A and D). B and C will have no charge on their 'inner' surfaces (facing each other).
Symmetry shows that in the places where there is an electric field in this set-up it will be normal to the plates (as long as we're well away from the edges of the plates).
Now consider a Gaussian surface in the form of a box with four of its walls normal to plate B and partly inside plate B but sticking out into the gap between B and C. The other two walls are parallel to the plates. One of them (wall X, say) is inside B, the other (wall Y, say) is in the gap between B and C. No charge is enclosed in the box, because there is no charge on the inner surface of B. Therefore no electric flux enters or leaves the box. No flux crosses wall X because there is no electric field in the metal. No flux crosses the four walls normal to the plates either, therefore no flux crosses wall Y. There is therefore no electric field in the gap between B and C. | {
"domain": "physics.stackexchange",
"id": 96095,
"tags": "electromagnetism, electrostatics, electric-fields, charge"
} |
Continuity equation for the conservation of energy from the conservation of the energy-momentum tensor | Question: I am working through the book Cosmology by Daniel Baumann, and in the subsection that covers the continuity equation (part of section 2.3.1 on perfect fluids) the author makes a claim that confuses me. He starts by stating that, in Minkowski space, energy and momentum are conserved, and therefore:
The energy density $\rho c^2$ satisfies the continuity equation, which means that the rate of change of the density equals the divergence of the energy flux:
$$\dot{\rho}=-\partial_i\pi^i$$
The evolution of the momentum density satisfies the Euler equation:
$$\dot{\pi}_i=\partial_iP$$
And here comes the claim:
These conservation laws can be combined into a four-component conservation equation for the energy-momentum tensor:
$$\partial_\mu T^\mu_{\ \ \ \ \nu}=0$$
In the previous subsection, all the necessary information about the energy-momentum tensor was provided, including that the assumption of homogeneity and isotropy of the universe forces this tensor to take the form:
$$T_{00}\equiv\rho(t)c^2\qquad T_{i0}\equiv c\pi_i=0\qquad T_{ij}=P(t)g_{ij}(t,\vec{x})$$
So, what I am trying to do is to recover the equations
$$\dot{\rho}=-\partial_i\pi^i\qquad \text{and}\qquad \dot{\pi}_i=\partial_i P$$
from the four-component equation $\partial_\mu T^\mu_\nu=0$ (where I know we are using Einstein's summation convention for the index $\mu$ and that $\nu=0,...,3$).
For the $\nu=0$ component, this is my attempt:
$$\partial_\mu T^\mu_{\ \ \ \ 0}=0\ \Rightarrow\ \partial_0(-\rho c^2)+\partial_i(c\pi^i)=0\ \Rightarrow\ \dfrac{1}{c}\partial_t(-\rho c^2)+\partial_i(c\pi^i)=0\ \Rightarrow$$
$$\Rightarrow\ -c\dot{\rho}+c\partial_i\pi^i=0\ \Rightarrow\ \dot{\rho}=\partial_i\pi^i$$
where I have used that $\partial_0=c^{-1}\partial_t$ since $x^0=ct$, and where I have raised one of the indices in the energy-momentum tensor by using the metric tensor:
$T^0_{\ \ \ \ 0}=g^{0\beta}T_{\beta 0}=-T_{00}=-\rho c^2$
$T^i_{\ \ \ \ 0}=g^{i\beta}T_{\beta 0}=g^{i\beta}c\pi_\beta=c\pi^i$
So, instead of $\dot{\rho}=-\partial_i\pi^i$, I get $\dot{\rho}=\partial_i\pi^i$. I'm tempted to say this is an errata in the book but I suppose I'm probably wrong somewhere.
I have read that the energy-momentum tensor, written as $T^{\mu\nu}$, is symmetric and therefore $T^{\mu\nu}=T^{\nu\mu}$, but this means it's symmetric with both indices as upper indices, not that $T_{\mu\nu}$ is symmetric too, right? Because we need to use the metric to lower indices, and the product of two symmetric matrices isn't symmetric in general. So am I right in assuming that $T_{0j}=T_{j0}$, which I would need to deduce the second equation?
I'm thoroughly confused, any help would be greatly appreciated.
Edit: I'm not convinced that $T^\mu_{\ \ \ \ \nu}$ is also symmetric if I assume that $T_{\mu\nu}$ is symmetric. My calculations are as follows. I start with the components of $T_{\mu\nu}$, given by:
$$T_{00}\equiv\rho c^2\qquad T_{i0}\equiv c\pi_i\qquad T_{0j}=T_{j0}=c\pi_j \qquad T_{ij}=P g_{ij}$$
where the rest of the components equal zero. If I raise one of the indices, I get:
$T^0_{\ \ \ \ 0}=g^{0\beta}T_{\beta 0}=-T_{00}=-\rho c^2$
$T^i_{\ \ \ \ 0}=g^{i\beta}T_{\beta 0}=g^{i\beta}c\pi_\beta=c\pi^i$
$T^0_{\ \ \ \ j}=g^{0\beta}T_{\beta j}=-T_{0j}=-c\pi_j$
$T^i_{\ \ \ \ j}=g^{i\beta}T_{\beta j}=g^{ik}T_{kj}=Pg^{ik}g_{kj}=P\delta^i_j$
If $T^\mu_{\ \ \ \ \nu}$ is symmetric, then in particular $T^i_{\ \ \ \ 0}=T^0_{\ \ \ \ i}$, which means $\pi^i=-\pi_i$. But that isn't the case, since:
$$\pi^i=g^{i\beta}\pi_\beta=g^{ii}\pi_i=\pi_i$$
Then, where is my mistake in the calculations?
Answer: You are on the right track. Based on your calculations, your metric $g \equiv \eta = \mathrm{diag}(-1,+1,+1,+1)$ in cartesian co-ordinates. Here, I take $c=1$ as this makes calculations easier to write down. You can always re-insert $c$ at the end with some simple dimensional analysis. Two things to keep in mind now -
The error in sign may well be due to an erratum in the book. It ought to be $T_{i0}\equiv -\pi_i$ for the conventions used. This sign is positive for the (+---) signature used by Baumann in his lecture notes, from which, I presume, this book was developed.
The symmetry is a property of the object in this case and not an artefact of the representation. In other words, it is the energy-momentum tensor that is symmetric here and not specifically its covariant form, but in general. Thus your assumption $T_{0j}=T_{j0}$ holds. It might be a good exercise to prove that the symmetry holds in general (for this case) for clarity.
Hope this helps :) | {
"domain": "physics.stackexchange",
"id": 94352,
"tags": "general-relativity, cosmology, conservation-laws, tensor-calculus, stress-energy-momentum-tensor"
} |
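The asker's index-raising can be checked numerically. A sketch with placeholder values ($\rho=3$, $\pi=(5,7,11)$, $P=13$, $c=1$) and the $(-,+,+,+)$ metric; it confirms that the mixed tensor $T^\mu{}_\nu$ is not symmetric as a matrix even though $T_{\mu\nu}$ is:

```python
import numpy as np

# Arbitrary placeholder values (c = 1)
rho, P = 3.0, 13.0
pi = np.array([5.0, 7.0, 11.0])

# Covariant tensor T_{mu nu}, symmetric: T_{0i} = T_{i0} = pi_i
T_low = np.zeros((4, 4))
T_low[0, 0] = rho
T_low[1:, 0] = pi
T_low[0, 1:] = pi
T_low[1:, 1:] = P * np.eye(3)

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
T_mixed = np.linalg.inv(eta) @ T_low    # T^mu_nu = eta^{mu alpha} T_{alpha nu}

# T^0_0 = -rho, T^i_0 = +pi_i, but T^0_i = -pi_i: not a symmetric matrix.
print(T_mixed[0, 0], T_mixed[1, 0], T_mixed[0, 1])
```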
Why is the install_aarch64/share directory not created? - cross compile tool | Question:
Hi, I want to compile a ROS2 foxy demo on my PC running a VM with Ubuntu 18.04 and then run it on my 64-bit ARM processor running Ubuntu 20.04 arm64. I am using this tool. I installed the prerequisites; when I install docker I use this method and then I use these steps to run containers as a non-root user, as mentioned in the prerequisites part:
Prerequisites
This tool requires that you have already installed
Docker
Follow the instructions to add yourself to the docker group as well, so you can run containers as a non-root user
Python 3.6 or higher
then I install the ROS cross-compile tool and follow this tutorial; everything goes OK, but I realize 2 things:
1.- the expected outputs are not the same
I don't have the tutorial.repos
2.- the install_aarch64/share path is not created, I realize this when I try to run the build in my target, in this step:
rosdep install --from-paths install_aarch64/share --ignore-src --rosdistro foxy -y
I got an error that says the install_aarch64/share directory does not exist, and when I enter the install_aarch64 directory I can confirm that
what could be the problem?
Originally posted by jg_spitfire on ROS Answers with karma: 31 on 2021-10-31
Post score: 0
Answer:
The tool had a bug, now it is fixed
Originally posted by jg_spitfire with karma: 31 on 2021-11-09
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by gvdhoorn on 2021-11-10:
Could you link to where it is fixed? A pull request perhaps?
Comment by jg_spitfire on 2021-11-10:
https://github.com/ros-tooling/cross_compile
Comment by gvdhoorn on 2021-11-10:
That's a link to the repository. If the bug was fixed in that repository, could you link to where it was fixed?
Comment by jg_spitfire on 2021-11-10:
I don't know, I am not the repo owner or maintainer; you could search in the issues tab in the repo
Comment by gvdhoorn on 2021-11-10:
You wrote:
The tool had a bug, now it is fixed
so how did you notice it is now fixed?
You just updated it and it happened to work when you tried again?
Comment by jg_spitfire on 2021-11-11:
I was trying to use it and I could not do it so I open an issue, read the issues
Comment by gvdhoorn on 2021-11-11:
Why not just link us to those issues? I'm not sure I understand what the problem was to do that in your initial answer.
Comment by jg_spitfire on 2021-11-11:
You could check the issues in the repo, I shared the link before
Comment by gvdhoorn on 2021-11-11:
There seems to be only a single issue opened by you: ros-tooling/cross_compile#342. | {
"domain": "robotics.stackexchange",
"id": 37072,
"tags": "ros2"
} |
Basic question about Markov Localization, probability and belief distribution shift | Question: I am starting in robotics, and reading about Markov Localization, I have one doubt, probably very stupid, but, well, I want to solve it.
Let's take the CMU Website example: https://www.cs.cmu.edu/afs/cs/project/jair/pub/volume11/fox99a-html/node3.html
Basically and very summarized:
The robot does not know its location (uniform probability of being
at any point), but knows there are 3 doors.
It senses a door, and since that door could be any one out of the 3, the belief distribution spikes at those 3 doors.
It moves to the right, and the belief distribution shifts to the right also, let's say "following the robot movement", and then the convolution is done when finding the 2nd door... This gets described in the next graphic, from the CMU:
But why does the belief distribution get shifted to the right, and not to the left, as the door is left behind?
Shouldn't the robot sense that there's no door between door 1 and 2 (starting from the left)?
Is there something about probability theory I've forgotten (I studied it like 14 years ago)?
Answer: The robot has sensed a door, so the initial belief distribution matches the three possible door positions. i.e. the only three places that it is possible for the robot to be in that scenario.
The robot moves to the right so, since the belief distribution matches the possible positions of the robot, the belief distribution must also move to the right. As the robot moves, so the belief distribution must move with it.
Notice that the three peaks are no longer quite as well defined at this point because of that motion. Movement is inherently noisy, which introduces uncertainty. The more the robot moves, the less certain it becomes about its position, and the belief distribution thus tends to flatten or smooth out.
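This motion step can be sketched as a discrete (histogram-filter) prediction: the belief array is shifted along with the robot and then convolved with a motion-noise kernel, which is exactly what flattens the peaks. The grid size, door positions, and noise kernel below are my own illustrative choices:

```python
import numpy as np

def predict(belief, shift, kernel):
    """Motion update: shift the belief with the robot, then blur with motion noise."""
    moved = np.roll(belief, shift)        # belief moves WITH the robot
    # (np.roll wraps around at the array edge; harmless here since edge cells are zero)
    blurred = np.convolve(moved, kernel, mode="same")
    return blurred / blurred.sum()        # renormalize to a probability distribution

# Belief spiked at three door positions on a 100-cell corridor
belief = np.zeros(100)
belief[[10, 40, 70]] = 1 / 3

kernel = np.array([0.1, 0.8, 0.1])        # simple motion-noise kernel
after = predict(belief, shift=5, kernel=kernel)

print(after[[15, 45, 75]])            # peaks moved right by 5 cells, each now 0.8/3
print(after.max() < belief.max())     # True: motion noise flattened the peaks
```

The peaks move in the direction of travel because the belief tracks where the robot could now be, and each peak is lower than before because the motion added uncertainty.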
Finally, the robot senses the second door. Given the (previously known) possible door positions, the belief distribution now also centres on the second door in the diagram. | {
"domain": "robotics.stackexchange",
"id": 2358,
"tags": "localization, movement, probability"
} |
writing a driver for 4-wheels differential drive robot | Question:
I want to make a custom robot and my problem is publishing odometry values. I have an Ackermann steering car robot, but I read that it is hard to integrate that in ROS.
Thus, I switched to this one: http://mikroelectron.com/Product/4WD-Drive-Aluminum-Mobile-Car-Robot/
As you see, it is a 4WD differential drive robot chassis with wheel encoders. I've read about this package here: http://wiki.ros.org/diff_drive_controller
but I don't know if it is a "driver" or not. Can you help me in this? I just want to integrate such a robot with ROS in order to do other stuff that I want which will include using Kinect and stuff... so getting accurate poses are essential.
I looked for ready solutions like the Clearpath Jackal, but its price is simply insane!
thanks!
Originally posted by VEGETA on ROS Answers with karma: 11 on 2016-07-05
Post score: 0
Answer:
You might be better off looking at this package to start
http://wiki.ros.org/differential_drive and this related page
https://code.google.com/archive/p/knex-ros/wikis/ProjectOverview.wiki
I looked briefly at the web page and I didn't see where the encoders were included. Anyway, if you have the encoder input you either do the odometry on the Arduino or use rosserial to send the values to your PC and do it there. In either case you will have to use rosserial to send the topics to the computer you are running ROS on.
Originally posted by mcshicks with karma: 51 on 2016-07-05
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by VEGETA on 2016-07-05:
nice help there, I thought of using Arduino but I guess Raspberry Pi 3 is going to be used as main controller with ROS so why not just use it directly?
my question is how to calculate odometry? what are the required values and so on?
it is better if there is an example.
thanks!
Comment by mcshicks on 2016-07-06:
Look at this example: How to perform odometry on an arduino for a differential wheeled robot?. You need a method (encoder) to measure the wheel rotation and the distance between the wheels
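For reference, the odometry update that kind of example implements is essentially the following pose integration; the formula is standard for differential drive, but the wheel distances and wheel base below are hypothetical values, not taken from the linked robot:

```python
import math

def update_odometry(x, y, theta, d_left, d_right, wheel_base):
    """Integrate the robot pose from the distance traveled by each wheel side."""
    d_center = (d_left + d_right) / 2.0        # forward motion of the chassis
    d_theta = (d_right - d_left) / wheel_base  # change in heading
    # Midpoint integration: advance along the average heading of this step
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta

# Hypothetical numbers: both sides travel 0.5 m, so the robot drives straight
x, y, theta = update_odometry(0.0, 0.0, 0.0, 0.5, 0.5, wheel_base=0.3)
print(x, y, theta)   # -> 0.5 0.0 0.0
```

For a 4-wheel skid-steer chassis, a common approximation is to average the encoder distances of the two wheels on each side and treat the result as a two-wheel differential drive; wheel slip makes this noisier than on a true two-wheel robot.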
Comment by VEGETA on 2016-07-19:
so just this short amount of code is necessary? then use rosserial to send it to navigation packages?
I read somewhere here that there is a ready-to-use differential drive driver software... do you know about it?
Comment by VEGETA on 2016-07-24:
My robot is not 2 wheels but 4 wheels... is that still diff drive or is it skid steer?
Comment by tooght on 2018-01-21:
hi @VEGETA did you find the answer to this question? I have found some examples for this problem. However, they are all made using 2 wheels. And I think it's very difficult to make a 4-wheel robot using these examples.
https://goo.gl/9wknlr
https://goo.gl/JtEWFC
https://goo.gl/UUVdzA | {
"domain": "robotics.stackexchange",
"id": 25151,
"tags": "ros, differential-drive, driver"
} |
Does ROS support MPI? Can high-performance computing be integrated into ROS? | Question:
ROS Fuerte had a related thing about MPI; why do later versions of ROS no longer support MPI?
Originally posted by lligen on ROS Answers with karma: 1 on 2014-06-30
Post score: 0
Answer:
ROS doesn't natively support using MPI for message-passing between nodes. It uses sockets for inter-process communication, and shared pointers for intra-process communication. These are generally fast enough for most users.
Judging by the wiki page, the MPI stack was probably experimental, and it looks like it wasn't fully integrated with ROS. There's no in-depth documentation, no tutorials, and it isn't released.
Originally posted by ahendrix with karma: 47576 on 2014-06-30
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by lligen on 2014-06-30:
Thank you for your reply. Do you mean that there is no need to integrate MPI into ROS? Then how can we accelerate our programs when they contain large amounts of calculation? Distributed computing, cloud computing: can they work with ROS?
Comment by ahendrix on 2014-07-01:
ROS nodes are simply C++ or Python programs; you can do anything you would in a normal C++ program, including using libraries outside of ROS. | {
"domain": "robotics.stackexchange",
"id": 18443,
"tags": "ros"
} |
What is the $\lambda$ parameter in the $U3$ gate used for? | Question: The most general single-qubit gate is $\mathrm{U3}$, given by the matrix
$$
\mathrm{U3}=
\begin{pmatrix}
\cos(\theta/2) & -\mathrm{e}^{i\lambda}\sin(\theta/2) \\
\mathrm{e}^{i\phi}\sin(\theta/2) & \mathrm{e}^{i(\phi+\lambda)}\cos(\theta/2)
\end{pmatrix}.
$$
If the gate is applied to a qubit in state $|0\rangle$, the most general description of a quantum state is again obtained, i.e.
$$
|\varphi_0\rangle = \cos(\theta/2)|0\rangle + \mathrm{e}^{i\phi}\sin(\theta/2)|1\rangle,
$$
where the angles $\phi$ and $\theta$ describe the position of the state on the Bloch sphere.
When the gate is applied to a qubit in state $|1\rangle$, the result is
$$
|\varphi_1\rangle = \mathrm{e}^{i\lambda}(-\sin(\theta/2)|0\rangle + \mathrm{e}^{i\phi}\cos(\theta/2)|1\rangle)
$$
Obviously the term $\mathrm{e}^{i\lambda}$ can be ignored because it is a global phase of the state.
I can imagine that a global phase can be useful for constructing a controlled global phase gate, but that can be implemented as $\mathrm{Ph}(\lambda) \otimes I$,
where
$$
\mathrm{Ph}(\lambda) =
\begin{pmatrix}
1 & 0 \\
0 & \mathrm{e}^{i\lambda}
\end{pmatrix}.
$$
My question is: what is the parameter $\lambda$ in $\mathrm{U3}$ used for?
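One way to probe this question numerically is to apply $\mathrm{U3}$ with different values of $\lambda$ to a basis state and to a superposition. A sketch in plain numpy, with the matrix typed out from the definition above:

```python
import numpy as np

def u3(theta, phi, lam):
    return np.array([
        [np.cos(theta / 2),                    -np.exp(1j * lam) * np.sin(theta / 2)],
        [np.exp(1j * phi) * np.sin(theta / 2),  np.exp(1j * (phi + lam)) * np.cos(theta / 2)],
    ])

one = np.array([0, 1])                 # basis state |1>
plus = np.array([1, 1]) / np.sqrt(2)   # superposition (|0> + |1>)/sqrt(2)

for name, state in [("|1>", one), ("|+>", plus)]:
    p0 = np.abs(u3(np.pi / 2, 0.0, 0.0) @ state) ** 2     # lambda = 0
    p1 = np.abs(u3(np.pi / 2, 0.0, np.pi) @ state) ** 2   # lambda = pi
    print(name, p0, p1)
```

On $|1\rangle$ the two probability vectors coincide (there $\lambda$ only contributes a global phase), while on the superposition they are completely different: [0, 1] versus [1, 0].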
Answer: The parameter $e^{i\lambda}$ is only a global phase if it's acting on the basis state $|1\rangle$. Act on any superposition of states, and it's a relative phase, and absolutely critical. | {
"domain": "quantumcomputing.stackexchange",
"id": 1246,
"tags": "quantum-gate, quantum-state, ibm-q-experience"
} |
How to avoid falling into the "local minima" trap? | Question: How do I keep my gradient descent algorithm from falling into the "local minima" trap while backpropagating on my neural network?
Are there any methods which help me avoid it?
Answer: There are several elementary techniques to try and move a search out of the basin of attraction of local optima. They include:
Probabilistically accepting worse solutions in the hope that this will jump out of the current basin (like Metropolis-Hastings acceptance in Simulated Annealing).
Maintaining a list of recently-encountered states (or attributes thereof) and not returning to a recently-encountered one (like Tabu Search).
Performing a random walk of a length determined by the current state of the search (an explicit 'Diversification strategy', e.g. as used in 'Reactive Tabu Search').
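The first of these, probabilistic acceptance of worse moves, is the core of simulated annealing. A minimal sketch; the objective function, neighborhood, and cooling schedule below are arbitrary choices for illustration:

```python
import math
import random

def simulated_annealing(f, x0, neighbor, t0=1.0, cooling=0.995, steps=5000, seed=0):
    """Minimize f, sometimes accepting worse moves to escape local basins."""
    rng = random.Random(seed)
    x = best = x0
    t = t0
    for _ in range(steps):
        cand = neighbor(x, rng)
        delta = f(cand) - f(x)
        # Metropolis rule: always accept improvements; accept worse moves
        # with probability exp(-delta / T), which shrinks as T cools.
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = cand
        if f(x) < f(best):
            best = x
        t *= cooling
    return best

# Toy double-well objective: a shallow local minimum near x ~ 1.35 and a
# deeper global minimum near x ~ -1.47, separated by a barrier near x ~ 0.1
f = lambda x: x**4 - 4 * x**2 + x
best = simulated_annealing(f, x0=1.4, neighbor=lambda x, r: x + r.uniform(-0.5, 0.5))
print(round(best, 2), round(f(best), 2))
```

Starting in the shallower basin, the uphill acceptances early on (while the temperature is high) give the search a chance to cross the barrier; whether it does on a given run depends on the seed and schedule.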
See the excellent (and free online) book 'Essentials of Metaheuristics' by Sean Luke for more details on these kinds of techniques and some rules of thumb about when and how to use them. | {
"domain": "ai.stackexchange",
"id": 33,
"tags": "neural-networks, backpropagation, optimization, gradient-descent"
} |
Can you assign a type to any term of the λEA-calculus? | Question: The untyped language of System-F and similar is the λ-calculus. That language has terms that can't be typed in System-F, λx.(x x) λx.(x x) being the most obvious example. The λEA-calculus, as described here, and its variants, have an interesting property: all stratified terms of the untyped language are total and strongly normalizing. That raises 2 questions:
Does that mean that, for all terms of λEA, there is a valid type derivation?
Is this a common phenomenon, i.e., are there other systems with this same characteristic, or is it something particular to linear-logic-based proof languages?
Answer: For question 1, the answer is no, and is no for almost any type discipline (except certain intersection types): the fact that a term is (strongly or weakly) normalizable does not imply in general that it is typable. Typability implies termination, not the other way around.
The specific case of $\lambda EA$, however, brings up another issue which may be the object of confusion, so let me discuss it. Let us start with a counter-example for your question 1: the term
$$[M]_{\lambda x.N=y,z}$$
is not typable. This is because the construct $[M]_{P=y,z}$ represents contraction of variables $y$ and $z$ in $M$ (i.e., $y$ and $z$ are morally equated in $M$ and $P$ is "plugged in" on them), so for such a term to be typable $P$ must have type $!A$, i.e., it must be duplicable. Now, in my example, $P$ is an abstraction and can never receive type $!A$ (its type will always have the form $B\multimap C$).
There are many more examples like this in $\lambda EA$. Intuitively, typability ensures more than termination: it also avoids "clashes", i.e., a term may stop its reduction because it "gets stuck". Typically, this happens when a destructor of a certain form meets a constructor of a different form (e.g., a head::tail pattern that is matched, I don't know, against a binary tree). For example, in the above case, the constructor matching the destructor $[M]_{P=y,z}$ is $!(-)$, not $\lambda$, i.e., if $P$ ever reduces to something with a constructor at the root, one expects $P=!(N)[\ldots]$, not $P=\lambda x.N$.
In truth, typability is orthogonal to termination. There may be a misunderstanding about this because, in the context of the pure $\lambda$-calculus, which is the one used for showcasing type disciplines to a beginner, there are no "clashes" (there is only one constructor and one destructor), so clash-avoidance is completely hidden and only termination shows up. But as soon as one spices things up a bit, adding more constructors and destructors, the distinction between termination and clash-avoidance arises, and types are seen to actually ensure the latter more than the former (for instance, PCF is a typed language which does not terminate, as is every real-world functional programming language). More precisely, types always ensure clash-avoidance and might, in many cases of interest, also ensure termination.
Regarding your question 2, I am guessing that by "same characteristic" you mean the fact that the untyped calculus terminates? In this case, it may not be very common for the programming languages one considers in practice, but it is not very hard to come up with examples, which need not have anything to do with linearity (for instance: consider any programming language with no gotos and having a distinction between "for" and "while" loops; eliminate while loops; the language you obtain terminates, whether you have types or not). The interest of $\lambda EA$ is not that it terminates but that it does so with an interesting complexity bound (elementary).
"domain": "cstheory.stackexchange",
"id": 4135,
"tags": "lambda-calculus, functional-programming, linear-logic"
} |
Why do simple carburetors richen the mixture as altitude increases? | Question: (Question copied largely verbatim from https://aviation.stackexchange.com/questions/59328/why-do-carburetors-tend-to-produce-richer-mixture-at-higher-altitude. I have the identical question and it's really eating at me!)
Simple first-principles analysis of fluid dynamics suggest that the pressure driving fuel into a carburetor venturi should change linearly with air density.
The pressure drop across the venturi is proportional to air density and the fuel is at ambient pressure in the float chamber, so I would expect the fuel flow to reduce proportionally with density, and that response to preserve the fuel-air ratio over changing altitude.
But in practice that does not seem to be the case. Proper response to altitude requires additional modification that most airplane carburetors don't have, so the pilot usually has to manually lean out the engine during the climb. What am I missing here?
More specifically, I would expect that at the same RPM, the volume flow rate will be the same—because the engine pulls in its displacement per revolution. Now velocity in the venturi $v$ is just
$$ v = \frac{\dot V}{A} $$
Where $\dot V$ is the volume flow rate and $A$ is the cross-section of the venturi. So it will also be the same independent of altitude. Since dynamic pressure
$$ P_d = \frac 1 2 \rho v^2 $$
And that is also the pressure that pulls in the fuel (when the float chamber is open to ambient pressure). Substituting the mass flux (mass flow per unit area)
$$ \dot m = \rho v $$
$$ P_d = \dot m \frac 1 2 v $$
and as long as $v$ is mostly constant,
$$ P_d \sim \dot m $$
So in other words, the pressure which drives the fuel through the jets and into the venturi varies somewhat linearly with the mass of air going through the carburetor. This would suggest that the fuel/air ratio will stay relatively constant... which of course is the opposite of observed behavior.
So this leaves open the dependence of fuel flow on $P_d$. If the relation is reasonably close to linear, it should mean the venturi mixes properly by mass. I can see a reason why higher pressure should cause less than linear increase in fuel flow, but not much why it should cause more than linear increase in fuel flow—but that is what the actual behavior would need.
Answer: TLDR; It's complex, I don't have a simple physical description. It basically comes down to the fuel-air ratio being a function of the square root of pressure whereas air mass is simply proportional to pressure. Maybe someone can add an intuitive explanation in the comments?
Preface
I figured it out. The answer is subtle and requires a decent dive into the math behind fluid flow.
Simple carburetor
Let's consider the constant throttle setting of the below carb:
For simplicity, we will assume both engine RPM and ambient temperature are constant.
We will also assume that the system has no head losses.
Fluid dynamics
Liquid
An ideal liquid has no viscosity and its density is constant. Through the Bernoulli Equation, the ideal flow of such a liquid through a passage can be written as:
$$ \dot{m_l} = A_l \rho_l u_l = A_l \sqrt{2 \rho_l \Delta P_l} $$
where
$m_l$ is the liquid mass flow
$\Delta P_l = P_{01,l} - P_{2,l}$
$\rho_l$ is the fluid density
$u_l$ is the fluid velocity
$A_l$ is the flow area (in this case, the carburetor jet)
Gas
Using Bernouilli, the equation for an ideal gas is almost the same:
$$ \dot{m_g} = A_g \rho_g u_g \phi_g = A_g \phi_g \sqrt{2 \rho_g \Delta P_g} $$
where subscripts have changed from l to g and:
$\phi_g$ is the gas flow compressibility parameter
Fuel/air ratio
Definition
We'll assume air is an ideal gas, and that the liquid is gasoline. (I apologize in advance for the fact that gas can mean both a phase of matter and a shortened version of "gasoline". Throughout this answer, I will use gas to mean only a phase of matter and gasoline only for petroleum fuel.)
$$ F_M = \frac{\dot{m_l}}{\dot{m_g}} = \frac{A_l \sqrt{2 \rho_l \Delta P_l}}{A_g \phi_g \sqrt{2 \rho_g \Delta P_g}} $$
where:
$F_M$ is the fuel/air ratio
Simplification
Simplifying this expression by eliminating variables, abstracting out the constant coefficients $A_l$, $A_g$, and $\rho_l$, and, most importantly, noting that the pressure differentials $\Delta P_l$ and $\Delta P_g$ are identical (because in the ideal carburetor both are taken from the air inlet to the venturi body):
$$ F_M \propto \frac{1}{\phi_g \sqrt{\rho_g}} $$
Analysis at different pressures, i.e altitudes
Let's look at the $F_M$ at two different pressures, $P_0$ and $P_1$:
$ F_{M,0} \propto \frac{1}{\phi_{g,0} \sqrt{\rho_{g,0}}} $
$ F_{M,1} \propto \frac{1}{\phi_{g,1} \sqrt{\rho_{g,1}}} $
Dividing these two, and noting that the coefficients eliminated in the Simplification step now fully cancel out, resulting in equality:
$$ \frac{ F_{M,0}}{F_{M,1}} = \frac{\frac{1}{\phi_{g,0} \sqrt{\rho_{g,0}}}}{\frac{1}{\phi_{g,1} \sqrt{\rho_{g,1}}}} = \frac{\phi_{g,1} \sqrt{\rho_{g,1}}}{\phi_{g,0} \sqrt{\rho_{g,0}}}$$
I won't go into the derivation of $\phi$, but suffice it to say that for the conditions given in the Simple carburetor section, it is constant at all altitudes. This results in the very simple equation:
$$ \frac{ F_{M,0}}{F_{M,1}} = \sqrt{\frac{\rho_{g,1}}{\rho_{g,0}}}$$
Conclusion
Assuming that condition 0 is at altitude, condition 1 is the sea-level reference, and that density, $\rho$, is a linear function of pressure, the ratio of the fuel/air ratio at altitude to that at sea level is:
$$ \frac{ F_{M,alt}}{F_{M,S.L}} = \sqrt{\frac{P_{S.L}}{P_{alt}}}$$
Compare this to the ratio of air mass flow at altitude and note the lack of square roots:
$$ \frac{ \dot{m_{g,alt}}}{\dot{m_{g,S.L}}} = \frac{P_{alt}}{P_{S.L}}$$
This is why the ratios get out of whack as altitude changes.
Worked example
What is the richening percentage going from sea level on an STP day to 5000m (assuming perfect dry adiabatic lapse)?
At 5000m, according to https://www.mide.com/air-pressure-at-altitude-calculator this is 54kPa, or ~0.5atm.
$$ \sqrt{\frac{P_{S.L.}}{P_{alt}}} = \sqrt{\frac{101}{54}} = 1.37 $$
So the carburetor is pushing a 37% richer mixture than at sea level.
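The worked example, as a quick computation (pressures in kPa, from the cited calculator):

```python
import math

p_sea_level = 101.0   # kPa, standard sea-level pressure
p_altitude = 54.0     # kPa at roughly 5000 m

# Fuel/air ratio scales with the square root of the pressure ratio
enrichment = math.sqrt(p_sea_level / p_altitude)
print(f"{(enrichment - 1) * 100:.0f}% richer")   # -> 37% richer
```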
NOTE that this is not the 50% which would happen if the carb simply pumped in the same amount of fuel for every engine stroke, no matter the air density. | {
"domain": "engineering.stackexchange",
"id": 4873,
"tags": "carburetor"
} |
Maxwell Boltzmann speed distribution: why isn't speed element integrated when converting from velocity distribution? | Question: Maxwell Boltzmann velocity distribution is given by $$f_{\vec v}(v_x,v_y,v_z)=A^{3/2}\exp{[B(v_{x}^{2}+v_{y}^{2}+v_{z}^{2})]}$$
To convert the velocity distribution into speed distribution, spherical coordinates were used, with
$v^2=v_{x}^{2}+v_{y}^{2}+v_{z}^{2}$, and volume element $v^2 \sin\theta dv d\theta d\phi$.
The integral is $$f(v)=A^{3/2} \exp{[Bv^2]} \int_{0}^{2\pi} \int_{0}^{\pi} v^2 \sin\theta \,dv\, d\theta\, d\phi$$
, which becomes $$f(v)=A^{3/2} \exp{[Bv^2]} 4\pi v^2$$
Why isn't speed element integrated here?
Answer: 'Integrating out' the angular components leads to a result that depends only on the magnitude of the velocity vector (i.e. the speed). That is what you want: a function that describes the distribution of speeds. If you integrated wrt to the speed, then you would not have a function of speed anymore (you would get instead the probability of finding a particle with speed between $v$ and $v + dv$).
If, for example, you then want to calculate the average speed, you would multiply that function (which is a probability distribution) by the variable 'speed' (v) and integrate over all speeds. | {
"domain": "physics.stackexchange",
"id": 93568,
"tags": "thermodynamics, statistical-mechanics, integration"
} |
Why do so many people have group O blood? | Question: Please forgive me in case my question wouldn't make much sense. I was reading about ABO blood groups on Wikipedia, where I learnt that O is a recessive allele, and that it seems the A allele predates the O allele. My question is, therefore, how is it possible that almost half of the human population is group O?
For instance, I would expect (quite naively I reckon), if only alleles A and O existed, that around 75% of the population should be A, and 25% should be O.
Answer: There are many different variants of O (all loss-of-function) indicating that this mutation has arisen many times in the human population. The prevalence of O is indeed taken as evidence of balancing selection. Various pathogens use the A or B antigens as receptors. The cited paper presents evidence about the phylogeny of the ABO gene in human populations and reports that there is clear evidence of balancing selection.
We propose several hypotheses for the cause of [balancing selection], which most likely involved interactions with multiple pathogens at different geographic regions and timescales.
Calafell et al (2008) Evolutionary dynamics of the human ABO gene. Human Genetics 124: 123-135 | {
"domain": "biology.stackexchange",
"id": 2167,
"tags": "human-genetics, hematology"
} |
Unit tests for a function that calculates demerit points for speeding | Question: I'm currently working on a unit testing course (NUnit 3.x). I've been tasked with a simple class to test all edge cases.
using System;
namespace TestNinja.Fundamentals
{
public class DemeritPointsCalculator
{
private const int SpeedLimit = 65;
private const int MaxSpeed = 300;
public int CalculateDemeritPoints(int speed)
{
if (speed < 0 || speed > MaxSpeed)
throw new ArgumentOutOfRangeException();
if (speed <= SpeedLimit) return 0;
const int kmPerDemeritPoint = 5;
var demeritPoints = (speed - SpeedLimit)/kmPerDemeritPoint;
return demeritPoints;
}
}
}
And -
using System;
using NUnit.Framework;
using TestNinja.Fundamentals;
namespace TestNinja.UnitTests
{
[TestFixture]
public class CalculateDemeritPointsTests
{
private DemeritPointsCalculator _demeritPointsCalculator;
[SetUp]
public void SetUp()
{
_demeritPointsCalculator = new DemeritPointsCalculator();
}
// my solutions
[Test]
[TestCase(65,0)]
[TestCase(70,1)]
[TestCase(75,2)]
[TestCase(80,3)]
[TestCase(0,0)]
[TestCase(60,0)]
[TestCase(66,0)]
public void CalculateDemeritPoints_WhenCalled_ReturnsExpectedInt(int speed, int expectedResult)
{
var result = _demeritPointsCalculator.CalculateDemeritPoints(speed);
Assert.That(result, Is.EqualTo(expectedResult));
}
[Test]
public void CalculateDemeritPoints_SpeedLessThan0_ThrowsException()
{
Assert.That(() => _demeritPointsCalculator.CalculateDemeritPoints(-1), Throws.InstanceOf<ArgumentOutOfRangeException>());
}
[Test]
public void CalculateDemeritPoints_SpeedGreaterThan300_ThrowsException()
{
Assert.That(() => _demeritPointsCalculator.CalculateDemeritPoints(301), Throws.InstanceOf<ArgumentOutOfRangeException>());
}
}
}
Am I using too many test cases for CalculateDemeritPoints_WhenCalled_ReturnsExpectedInt?
I was thinking about merging the final two tests into one, as they would also work with test cases, but I wasn't sure if I was doing too many.
Answer:
Am I using too many test cases for CalculateDemeritPoints_WhenCalled_ReturnsExpectedInt?
Yes. I'd say so, anyway.
Let's say I'm maintaining code you've written. When a test case fails, ideally it will give me exactly the information I need to fix it, and it will give me that information as quickly as possible. The very first thing I will see when I learn a test has failed? That test's name.
Currently if I see that this test method has failed, all I learn from the name is that the calculator didn't return what was expected. That doesn't give me very much information about what kind of bug to look for, so I'll have to dig a little deeper to find what what speed was passed in, what demerit value was returned, and why that value was unexpected.
You can save me that digging by adding a test called CalculateDemeritPoints_LegalSpeed_GivesNoDemerits, and putting all your TestCase(_,0) cases there. As a bonus, [TestCase(60)] on top of a method named "Legal Speed" is more instantly readable than [TestCase(60,0)] on top of a method named "Expected".
In general, if a single test method is testing different kinds of behavior, I will say it's too much. As a rule of thumb, different branches of code probably deserve different test methods.
As a side note, I think it would be fine if you combined the two exception test methods into one.
On the other hand, I'd also be fine if you skipped test parameterization and made separate test methods for every single test case.
The key, in my opinion, is to look at it through the maintainer's eyes. How much can you assist that person in understanding what your code is doing, and what's going wrong with it?
As another side note, I also dislike (although it's not a strong dislike) the common practice of class-level setup and teardown methods. Especially in a case like this; the [Setup] is saving you exactly one line of code per test, and test code is cheap.
As a maintenance programmer, if I see that a test has failed, I will always have to go inspect the failing test method. Will you also make me search for any other methods that might have fired as a part of the test? For example: If a test is failing because the test object was constructed with a strange argument, and the call to the constructor was in a setup method, that means cause of the failure has been hidden from me.
For that reason, I like to err on the side of verbose test methods, in order to make each method self-contained (and therefore, easier to follow). | {
"domain": "codereview.stackexchange",
"id": 35329,
"tags": "c#, unit-testing, nunit"
} |
Spiralling of electrons into the nucleus | Question: A drawback (and a major one) of Rutherford's model was that the electrons, being accelerated charged particles, would gradually spiral into the nucleus and collapse into it. My question is how did Bohr rectify this drawback in his own model.
In Bohr's model too, electrons move in a circular path, which means they are accelerated. Thus, going by Maxwell's electromagnetic theory, those accelerated electrons should radiate away their energy and collapse into the nucleus, just as they did in Rutherford's model. However, every book mentions that this does not happen in Bohr's model. Why not?
Please tell me where I am wrong, because this problem has been disturbing me for quite long now.
Answer: As a matter of fact, Bohr essentially sidestepped this issue. In 1913, he proposed his model, postulating that electrons could only have certain classical motions:
Electrons in atoms orbit the nucleus.
The electrons can only orbit stably, without radiating, in certain orbits (called by Bohr the "stationary orbits") at a certain discrete set of distances from the nucleus. These orbits are associated with definite energies and are also called energy shells or energy levels. In these orbits, the electron's acceleration does not result in radiation and energy loss as required by classical electromagnetic theory (This was based upon Planck's quantum theory of radiation).
Electrons can only gain and lose energy by jumping from one allowed orbit to another, absorbing or emitting electromagnetic radiation with a frequency ν determined by the energy difference of the levels according to the Planck relation:
$$ \Delta {E}=E_{2}-E_{1}=h\nu \ $$
where h is Planck's constant.
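As an illustration of that Planck relation: the frequency of the photon emitted in hydrogen's n=2 → n=1 jump, using the textbook Bohr energies $E_n = -13.6\ \text{eV}/n^2$:

```python
h = 6.62607015e-34      # Planck constant, J*s
eV = 1.602176634e-19    # joules per electron-volt

def E(n):
    """Bohr energy of hydrogen level n, in eV."""
    return -13.6 / n**2

delta_E = (E(2) - E(1)) * eV   # photon energy for the n=2 -> n=1 jump
nu = delta_E / h               # Planck relation: delta_E = h * nu

print(f"{E(2) - E(1):.1f} eV")   # -> 10.2 eV
print(f"{nu:.2e} Hz")            # -> 2.47e+15 Hz (ultraviolet, the Lyman-alpha line)
```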
As remarked above, these are postulates (statements assumed to be true from which further reasoning follows), and (even though that part was left unexplained) its acceptance was due to the fact that it correctly explained many experimental results. It was later superseded by modern quantum theory.
Hope it helps, it also puzzled me a few years ago! :P
Reference: https://en.wikipedia.org/wiki/Bohr_model | {
"domain": "chemistry.stackexchange",
"id": 14809,
"tags": "electrons, atoms"
} |
Pushing Buttons Remotely over Ethernet | Question: So I want to program something that will simply push a button, but controllable over ethernet. I'm new to robotics so I don't know where to start. What's the best way to control an actuator over a network connection?
Answer: First, let's consider what actuator you need to physically push a button. One straightforward solution is to use an RC-servo motor. An RC-servo motor is a high-torque actuator (a DC motor with a gearbox) which can be instructed to rotate to a specific angle. It is controlled through a PWM signal. So you need to figure out the duty cycle of the PWM signal to do the desired movement and push the button you want.
Secondly we need a controller that:
is able to generate a PWM signal
communicate through the Internet
First thing that comes to my mind is to use a Raspberry Pi. Of course it is huge overkill to use it just for pushing a button, but in the process you will learn a lot of interesting things about setting up a Raspberry Pi. You can set up a minimal web server using Python in less than 5 minutes.
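As a sketch of the "figure out the duty cycle" step: a typical hobby RC servo expects a 50 Hz PWM signal whose pulse width selects the angle. The 0.5–2.5 ms pulse range assumed below is common but not universal, so check your servo's datasheet:

```python
def angle_to_duty(angle_deg, period_ms=20.0, min_pulse_ms=0.5, max_pulse_ms=2.5):
    """Map a servo angle (0-180 deg) to a PWM duty cycle in percent."""
    if not 0 <= angle_deg <= 180:
        raise ValueError("servo angle must be between 0 and 180 degrees")
    pulse = min_pulse_ms + (max_pulse_ms - min_pulse_ms) * angle_deg / 180.0
    return 100.0 * pulse / period_ms

print(angle_to_duty(0))      # -> 2.5   (0.5 ms pulse at 50 Hz)
print(angle_to_duty(90))     # -> 7.5   (1.5 ms pulse, servo centered)
print(angle_to_duty(180))    # -> 12.5  (2.5 ms pulse)
```

On a Raspberry Pi this duty cycle could then be fed to a PWM output (e.g. RPi.GPIO's PWM class) from a small web-server request handler.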
An Arduino with an Ethernet shield, as DaemonMaker suggested, is also a great setup. | {
"domain": "robotics.stackexchange",
"id": 1082,
"tags": "arduino, actuator"
} |
Dehydration of carboxylic acids using P2O5 | Question: Why isn’t concentrated $\ce{H2SO4}$ used for the dehydration of carboxylic acids to anhydrides? What is special about $\ce{P2O5}$?
Answer: $\ce{P2O5}$ is a solid. Hence, it is easy to separate from the reaction products. Moreover, $\ce{P2O5}$ is a stronger dehydrating agent than $\ce{H2SO4}$. | {
"domain": "chemistry.stackexchange",
"id": 10125,
"tags": "organic-chemistry, carbonyl-compounds"
} |
How does the diameter have any effect on the first harmonic frequency? | Question:
The answer is C. What I don't understand is how the diameter would have any effect on the first harmonic frequency, since the formula $$f=\frac{1}{2L}\sqrt{\frac{T}{\mu}}$$ does not appear to involve the diameter.
Answer: The key is that $\mu$ is the linear density of the string, i.e. the mass per unit length. Both strings are made of the same material, so they have the same density $\rho$. The mass of a cylinder of density $\rho$, length $L$, and diameter $d$ is $\frac{\pi}{4}\rho d^2L$. This means that the linear density is
$$\mu=\frac{m}{L}=\frac{\pi}{4}\rho d^2$$ | {
"domain": "physics.stackexchange",
"id": 67577,
"tags": "waves, frequency"
} |
How appropriate/adaptable is HPLC for in situ sample analysis on Titan? | Question: I have a limited background in chemistry (nothing beyond general Chem 2), so I am asking for help with a group project. We have two chemists on our team, but communication is limited at the time and I am trying to answer some specific questions for myself so that we can move forward. We're designing a probe to study the chemical composition of Titan's lakes and sediment deposits in the lake beds, among other functions. We would like to adapt HPLC for the project using UV-Vis spectroscopy as the detection method. However, from what I understand, a UV-Vis spectrophotometer must be calibrated using all known solutes, which we can't know in this situation. I read that when the detector is showing a peak, some of what is passing through the detector can be diverted to a mass spectrometer and the fragmentation pattern can be used to identify unknown compounds, which is what we would like to do. However, I'm not sure how this would work or whether it would be wise given the constraints of our project. Our motivation for adopting HPLC is that we can keep all substances in their original phase for analysis. But if I'm not mistaken, mass spectrometry will require vaporization. My question then is, is there a way to use HPLC to identify unknown compounds suspended in the lake without disturbing the state of the sample? Any advice would be appreciated.
Answer: It is a very good question because this is an active area of research. Having heard talks from the people who actually make analytical instruments for space analysis, let me mention the practical issues before coming to your actual question, so that you are aware of the latest problems. As far as I know, HPLCs have not been sent, but GC systems have been successfully used.
Key problems
(i) Weight. It costs an enormous amount of money (in the millions) per kg to send anything into space. Routine HPLCs are very heavy; it takes two strong people to lift a standard equipment stack.
(ii) The latest HPLCs can now be reduced to the size of a small briefcase. Check capillary chromatography. Also check HPLC on a chip, which is the size of the palm of a hand.
(iii) Electric power consumption: Think how much power is consumed by the motors in the pump (huge) and the other analytical equipment, including the mass spec. You don't have a lot of electrical power in space. This is the biggest problem besides the weight issue.
(iv) Think of excessive radiation. Can the plastics and equipment tolerate the huge amounts of radiation from the Sun and elsewhere?
(v) Temperatures: Think of temperature extremes with liquids as solvents.
(vi) There is no HPLC column (or GC column) in the world which can separate everything. Maybe you would attach at least 7-8 columns at once and see which one separates the sample (systems with multiple columns at once are commercially available; this is done by pharma people).
Our motivation for adopting HPLC is that we can keep all substances in
their original phase for analysis.
Yes, HPLC is a non-destructive technique. Mass spectrometry is not. However, UV detection is useless for unknowns because you have no clue about their absorption spectra. Even if you have an absorption spectrum, it provides very little information about the full structure. Only a mass spectrometer can be used in the case of complete unknowns. Keep in mind that a typical HPLC C-18 column can only separate hydrophobic compounds under ideal conditions. The sole job of HPLC or GC is to simplify the mass spectrum of a sample. Imagine you inject a sample containing 50 analytes into an MS. Imagine the complicated mess. With an HPLC, only one compound will reach the detector at a time, and the mass spectrum is then very simple.
My question then is, is there a way to use HPLC to identify unknown compounds suspended in the lake without disturbing the state of the sample? Any advice would be appreciated.
Short answer no. HPLC cannot detect or separate suspended matter. If you meant substances dissolved in the lake, yes, then it is possible. Every sample has to be carefully filtered to remove any particles.
Since you have two chemists, why don't you just do a real experiment on lake water? | {
"domain": "chemistry.stackexchange",
"id": 12259,
"tags": "spectroscopy, chromatography"
} |
Can filter "depth" be adjusted by mixing dry and wet signals? | Question: Can filter "depth" be adjusted by mixing dry and wet signals?
I.e. can I simulate e.g. a +6dB bandshelf/peak filter at 1kHz by mixing in some of the dry unequalized signal and some of a wet signal that has been bandpass filtered at 1kHz and the filter has around the same shape as the bandshelf/peak.
Can it be theoretically the same?
Answer: adding the input to the output of a scaled 2nd-order bandpass IIR will get you the classic peak/cut EQ curve.
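this identity can be checked numerically. a short Python sketch (the sample rate, center frequency, Q, and the use of scipy's `iirpeak` resonator, which has unity gain and zero phase at its center frequency, are illustrative choices, not part of the answer):

```python
import numpy as np
from scipy import signal

fs = 48000.0          # sample rate (assumed)
f0 = 1000.0           # peak center frequency
Q = 1.0
boost_db = 6.0        # desired gain at f0

# 2nd-order bandpass resonator: unity gain and zero phase at f0
b_bp, a_bp = signal.iirpeak(f0, Q, fs=fs)

# dry/wet mix: H(z) = 1 + g * BP(z), so |H(f0)| = 1 + g
g = 10 ** (boost_db / 20.0) - 1.0
b_mix = a_bp + g * b_bp        # common denominator: (a + g*b) / a

# evaluate the combined response exactly at the center frequency
_, h = signal.freqz(b_mix, a_bp, worN=[f0], fs=fs)
peak_db = 20 * np.log10(abs(h[0]))   # should come out at the requested 6 dB
```

with g > 0 this boosts by exactly 20·log10(1+g) dB at f0. note that a negative g (a cut) is generally not the log-symmetric inverse of the boost; a symmetric cut typically puts the bandpass in a feedback arrangement instead.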
adding the input to the output of a scaled 1st-order LPF or HPF will get you a 1st-order low-shelf or high-shelf EQ.
adding the input to the output of a scaled 2nd-order LPF or HPF will get you a particular form of a 2nd-order low or high shelving EQ, but there are many others. doing it this way will often leave an unintended null or lump in the frequency response. if you want a symmetric (in log frequency) and perfectly monotonic gain vs. frequency curve, i might suggest referring to the Audio EQ Cookbook. | {
"domain": "dsp.stackexchange",
"id": 2946,
"tags": "filtering"
} |
What are efficient approaches to implement unit propagation in DPLL-based SAT solvers? | Question: I'm trying to decompose deduction steps of DPLL algorithm -- unit propagation and pure literal elimination -- for parallelization. However, I want a baseline and asymptotic analysis to compare to my parallel approach for these two deduction procedures.
My naive approach to unit propagation would be to perform linear scan on all clauses, and that of pure literal elimination is again to linear scan and count the number of + and - polarities. But, it wouldn't make sense to compare my algorithm to the most naive approach available.
After reading this paper -- "The Quest for Efficient Boolean Satisfiability Solvers" -- I came to know of two more approaches:
Head-tail lists
Two-watched literals (which disables pure literal search)
But I believe there can be better data structures to represent the input formula and thus better algorithms and complexity analyses.
I'm only concerned with the approaches to deduce assignments at the beginning of DPLL procedure pseudocode as given on Wikipedia's page, and not branching heuristics or CDCL.
Are there any resources and comparisons of approaches to implement unit propagation (and possibly PLE) in DPLL? Or is two-watched literals still the most popular/ efficient technique till date?
Answer: Theory
Unit Propagation
For unit propagation, two-watched literals with circular updates is asymptotically optimal. I suggest reading "Optimal Implementation of Watched Literals and More General Techniques" which generalizes the concept and proves optimality. (It can also be helpful to read "An Efficient Algorithm for Unit Propagation" to see why head/tail meets the same optimality, though with larger storage costs and backtracking costs.)
When a watcher is falsified, it must find some unassigned literal in the clause to replace it. (If it cannot, then in a two-watcher scheme the remaining watcher has become unit; in a one-watcher scheme the clause has become empty.) Critically, when doing this search for a replacement literal one must begin where the last search ended, treating the clause as a circular buffer.
When done this way, updating watchers takes constant time when amortized across the search tree. Therefore unit resolution takes amortized linear time. (Older solvers did not do this circular tracking, and instead always began the search from the start of the clause; this leads to quadratic worst-case complexity instead.)
When backtracking, watched literals require no updates.
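To make the mechanics concrete, here is a minimal Python sketch of two-watched-literal propagation. It is a simplification of what the answer describes: the replacement search restarts at the start of the clause tail each time (i.e., without the circular position tracking, so it has the older quadratic worst case), and there is no backtracking.

```python
from collections import defaultdict

def value(assign, lit):
    # truth value of a signed literal under a partial assignment, or None
    v = assign.get(abs(lit))
    return None if v is None else (v if lit > 0 else not v)

def init_watches(clauses):
    # each clause (length >= 2) watches its first two literals
    w = defaultdict(list)                    # literal -> clause indices watching it
    for ci, c in enumerate(clauses):
        w[c[0]].append(ci)
        w[c[1]].append(ci)
    return w

def propagate(clauses, watches, assign, queue):
    """queue holds literals already assigned true; returns False on conflict."""
    while queue:
        false_lit = -queue.pop()             # this literal just became false
        pending, watches[false_lit] = watches[false_lit], []
        for ci in pending:
            c = clauses[ci]
            if c[0] == false_lit:            # keep the falsified watcher at c[1]
                c[0], c[1] = c[1], c[0]
            if value(assign, c[0]) is True:  # clause already satisfied
                watches[false_lit].append(ci)
                continue
            for k in range(2, len(c)):       # search the tail for a replacement
                if value(assign, c[k]) is not False:
                    c[1], c[k] = c[k], c[1]
                    watches[c[1]].append(ci)
                    break
            else:                            # no replacement: unit or conflict
                watches[false_lit].append(ci)
                other = value(assign, c[0])
                if other is False:
                    return False             # conflict: clause fully falsified
                if other is None:            # unit: assign the remaining watcher
                    assign[abs(c[0])] = c[0] > 0
                    queue.append(c[0])
    return True
```

For example, on the chain `[[-1, 2], [-2, 3], [-3, 4]]`, assigning 1 true and propagating forces 2, 3, and 4 true.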
Pure Literal Elimination
For pure literal elimination, after unit propagation we need to determine all unassigned literals with zero instances of a polarity. Polarity count can only change when a clause transitions between satisfied and unsatisfied, so when a literal is assigned/unassigned we must necessarily scan over all the clauses it occurs in (not just watches, but occurs) and update polarity counts. Care must be taken not to double-count clauses, but updating literals in a clause is already linear so this doesn't affect asymptotic worst case.
When a literal has zero instances of a polarity, it may be assigned. No unit propagation is necessary, since a pure literal has no clauses to resolve with.
When backtracking, polarity counts do require updates.
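For illustration, a non-incremental Python sketch of pure-literal detection: it recomputes polarity counts from scratch over unsatisfied clauses, whereas the scheme described above would maintain these counts under assignment and unassignment instead.

```python
from collections import Counter

def value(assign, lit):
    # truth value of a signed literal under a partial assignment, or None
    v = assign.get(abs(lit))
    return None if v is None else (v if lit > 0 else not v)

def pure_literals(clauses, assign):
    """Unassigned literals whose negation occurs in no unsatisfied clause."""
    counts = Counter()
    for c in clauses:
        if any(value(assign, l) is True for l in c):
            continue                          # satisfied clauses don't constrain purity
        for l in c:
            if value(assign, l) is None:
                counts[l] += 1
    return [l for l in counts if counts[-l] == 0]
```

Note that with a non-empty `assign` this finds conditional pure literals in the sense discussed below: literals that become pure only under the current partial assignment.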
Conditional Autarkies
As an aside, the kind of pure literals DPLL seeks to find are actually a special case of a more general concept called conditional autarkies.
An autarky α is a (partial) assignment that satisfies all clauses it touches. A conditional autarky is an autarky with an assumption, i.e. if α_aut is an autarky for F ∧ α_con, then α_con ∪ α_aut is a conditional autarky.
Given a conditional autarky for F, the formula F ∧ (α_con → α_aut) is not implied but is equisatisfiable. Note that the conditional part may be empty, so vanilla autarkies may also be assumed.
Pure literals are just a special case of autarky, satisfying every clause they touch. If the literal is only pure after assigning some variables then that is a conditional pure literal, a special case of a conditional autarky.
Practice
Unit Propagation
Modern solvers spend a bulk of their time doing unit propagation, so this is a highly-optimized section of code. Although the worst case for unit resolution is amortized linear, in practice the number of clauses updated is less than this. With watched literals you might even hit a best case of not being a watcher for any clauses! That's one of the reasons why this is so efficient in practice.
To optimize storage, a clause will store the two watchers as the first two literals in its elements list. This partitions the clause into the watcher half (size 2) and the remainder half. This allows the search for a new watcher to never consider the other watcher as viable.
Memory is slow, so cache misses are a significant cause of latency in watchlist updates. Iterating a watchlist is fast, since it is contiguous and the prefetcher does its thing well. But to actually iterate the literals of a clause (as well as retrieve the clause search position for circular updates) requires an indirection which is almost certainly going to be a cache miss.
To remedy this, each watch in a watchlist stores a cached literal from the clause called a 'blocking literal'. If this literal is satisfied, we don't need to do anything and move on. This check is likely to hit cache and be ~free. Otherwise we go ahead and perform the indirection to find a new watcher. (And the blocking literal will be updated to a more likely useful literal as a side effect.)
The next optimization is to note that for binary clauses, the watcher along with the blocking literal is enough space for the entire clause. No indirection is required at all and no clause ID is needed, provided we know this is a binary clause. To do that, we declare that our solver doesn't support more than 2^31 literals. This lets us store literals in a u32 with the MSB free for other uses; in our case we will set this MSB if the clause is binary.
That is, if the MSB is set then this value (with the MSB cleared) is the other literal of a binary clause, else this value is a blocking literal for a long clause and the clause ID is the next u32 after it. The result is that we get a watchlist containing variable-length elements:
[Other][Blocking][Clause ID][Blocking][Clause ID][Other][Other]etc.
As we iterate, we check the MSB to know which case we are in. Large real-world problems contain huge numbers of binary clauses. Not only does this speed up solving such problems due to more things fitting in cache and fewer indirections, but it saves a substantial amount of memory as well since binary clauses never allocate clause IDs.
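A small Python simulation of this tagged, variable-length watchlist layout (u32 semantics are simulated with plain ints; the particular literal encoding `2*var + sign` is an assumption for illustration, as any injective map into 31 bits works):

```python
MSB = 1 << 31          # tag bit: set means the entry is an inline binary clause

def encode_lit(lit):
    # map a signed DIMACS-style literal to an unsigned code below 2**31
    return (abs(lit) << 1) | (lit < 0)

def decode_lit(code):
    var, neg = code >> 1, code & 1
    return -var if neg else var

def binary_entry(other_lit):
    # the whole binary clause stored inline: one tagged word, no clause ID
    return [MSB | encode_lit(other_lit)]

def long_entry(blocking_lit, clause_id):
    # a long clause: blocking literal word followed by the clause ID word
    return [encode_lit(blocking_lit), clause_id]

def iter_watchlist(words):
    # walk the variable-length entries, branching on the MSB tag
    i = 0
    while i < len(words):
        w = words[i]
        if w & MSB:
            yield ('binary', decode_lit(w & ~MSB))
            i += 1
        else:
            yield ('long', decode_lit(w), words[i + 1])
            i += 2
```

For example, `binary_entry(-3) + long_entry(5, 42) + binary_entry(7)` decodes back to a binary clause entry, a long clause entry, and another binary entry, in order.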
Allocating clauses is an entirely other thing that is left as an exercise for the reader. (Hint: storing data together means it lives in cache together, so store the clause metadata like search position contiguous with the clause literals.)
I'd recommend reading over CaDiCaL's propagation code with these concepts in mind.
Pure Literal Elimination
Pure literal elimination during the assignment search loop is practically never worth it. Given everything outlined above, tacking on more tracking to count polarities nullifies the efficiency of propagation for something that almost never provides value. It also requires occurrence lists instead of just watch lists. You can have both, but the memory costs for occurrence lists are difficult to minimize.
This inefficiency also rears its head during backtracking, because now you have to increase polarity counts on clauses no longer satisfied. Unit propagation with watched literals doesn't require any work for backtracking.
All of this is especially true with clause learning. If p is some conditional pure variable, then (α_con → p) is merely a single clause and the solver can learn a lot more than this on its own.†
PLE can be cheaply performed once at the start of search. Since no unit propagation is occurring, the propagation engine remains unscathed. This is fairly common in basic solvers, but advanced solvers do not explicitly do this.
Bounded Variable Elimination
A much more powerful technique called bounded variable elimination can be performed during preprocessing or inprocessing. This takes a variable and generates all of the resolvents over it, adding them to the formula and removing the original clauses uttering this variable. The variable now no longer appears anywhere; the 'bounded' part comes in when we decide not to do this if the formula would become unhelpfully larger if we went through with it.
BVE is one of the more important processing techniques to apply in modern solving. As part of BVE, pure literals are trivially detected because they have no resolvents to produce. This technique subsumes performing any explicit PLE at the start of search.
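As an illustration, a minimal Python sketch of one elimination step (the bound used here, "no more clauses than before", is one simple choice made for the sketch; real solvers use more refined size bounds):

```python
def resolve(c1, c2, var):
    # resolvent of c1 (containing var) and c2 (containing -var)
    r = set(c1) | set(c2)
    r -= {var, -var}
    return None if any(-l in r for l in r) else sorted(r)   # drop tautologies

def eliminate(clauses, var):
    """Eliminate var by resolution if the formula does not grow."""
    pos = [c for c in clauses if var in c]
    neg = [c for c in clauses if -var in c]
    rest = [c for c in clauses if var not in c and -var not in c]
    resolvents = [r for p in pos for n in neg
                  if (r := resolve(p, n, var)) is not None]
    if len(resolvents) <= len(pos) + len(neg):   # the 'bounded' check
        return rest + resolvents
    return clauses                               # would grow: skip this variable
```

A pure literal falls out trivially: if `pos` or `neg` is empty there are no resolvents, so all clauses containing the variable are simply removed.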
This isn't quite the same as assigning pure literals during assignment search, but given this technique is applied arbitrarily often during search it probably approximates it pretty well at a negligible cost. Due to this, research into integrating PLE into propagation is, as far as I know, dead.
†
There is a bleeding-edge paradigm called satisfiability-driven clause learning (SDCL), which not only learns clauses from conflicts but learns clauses by solving small generated formulas during search. This allows it to learn not just implied clauses, but also equisatisfiable ones (called 'redundant clauses').
The generated formulas are called pruning predicates, and are a function of the formula being solved and the current partial assignment. If the predicate is satisfiable, then the current partial assignment may be pruned from the search space.
One kind of redundant clause, SBC (sometimes SET), can be found by solving a so-called positive reduct. All satisfying assignments for the positive reduct are conditional autarkies for F. So in effect this is finding conditional autarkies (rather than just conditional pures) during the search.
Incorporating this with practical effectiveness is an open problem.
For more on this I suggest reading "Encoding Redundancy for Satisfaction-Driven Clause Learning". | {
"domain": "cs.stackexchange",
"id": 19856,
"tags": "time-complexity, satisfiability, sat-solvers"
} |
Prove Minimum AABB Construction | Question: Assume a point set $P[n]$ with $n$ points which lie in one plane in Euclidean space, so $p(x,y) \in \mathbb{R}^2$. Looking for an algorithm to construct an AABB (axis-aligned bounding box) I did not find anything. But I thought that the simplest approach could be the best: traverse all $p \in P$ and keep track of the minimum and maximum value of each dimension. The AABB is spanned by the two points $(x_{min},y_{min})$ and $(x_{max},y_{max})$. This clearly runs in $\mathcal{O}(n)$.
Now I struggle to prove this kind of obvious fact (that the best algorithm runs in $\mathcal{O}(n)$). It is clear that all points of the set have to be checked, which the algorithm does, given that the set is not preprocessed in any way (e.g., sorted).
What is a good formal approach for proving this?
Side note: Is there a better algorithm for constructing an AABB?
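For reference, the single-pass min/max scan described above might look like this in Python:

```python
def aabb(points):
    """Axis-aligned bounding box of a non-empty sequence of 2D points, in O(n)."""
    it = iter(points)
    x0, y0 = next(it)                 # requires a non-empty point set
    xmin = xmax = x0
    ymin = ymax = y0
    for x, y in it:                   # one comparison pair per dimension per point
        if x < xmin: xmin = x
        elif x > xmax: xmax = x
        if y < ymin: ymin = y
        elif y > ymax: ymax = y
    return (xmin, ymin), (xmax, ymax)
```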
Answer: Suppose that the points are $(x_1,y_1),\ldots,(x_n,y_n)$. You are looking for the smallest (in some sense) bounding box $[X_m,X_M] \times [Y_m,Y_M]$ containing all your points. For every $i$, we must have $X_m \leq x_i \leq X_M$ and $Y_m \leq y_i \leq Y_M$. In particular,
$$
\begin{align*}
X_m \leq \min_i x_i =: x_{\min}, \\
X_M \geq \max_i x_i =: x_{\max}, \\
Y_m \leq \min_i y_i =: y_{\min}, \\
Y_M \geq \max_i y_i =: y_{\max}.
\end{align*}
$$
Summarizing, this shows that every bounding box must satisfy
$$
X_m \leq x_{\min} \leq x_{\max} \leq X_M, \\
Y_m \leq y_{\min} \leq y_{\max} \leq Y_M,
$$
that is, it must contain $B := [x_{\min},x_{\max}] \times [y_{\min},y_{\max}]$. Conversely, $B$ is itself a bounding box, since $x_{\min} \leq x_i \leq x_{\max}$ and $y_{\min} \leq y_i \leq y_{\max}$ for all $i$. We conclude the following fact:
$B$ is the intersection of all bounding boxes. | {
"domain": "cs.stackexchange",
"id": 11202,
"tags": "computational-geometry"
} |
Don't see scan message on Rviz even though it's published | Question:
I'm trying to solve this puzzle. It seems to be simple but it doesn't work.
So, the OdomNode is subscribed to the original LaserScan message on the /sick_lrs_36x1/scan topic. It receives the message and calls the LaserScan publisher to publish the message on the /scan topic. A customized message is published with a new frame_id and with the current timestamp.
The original scan message is getting played using the bag tool.
Running
ros2 topic echo /sick_lrs_36x1/scan
seems to give the exact same message as
ros2 topic echo /scan
Interestingly, I can observe the /sick_lrs_36x1/scan in rviz, but cannot see the customized message.
Below is my code:
class OdomNode : public rclcpp::Node
{
public:
OdomNode() : Node("odom")
{
scan_ = this->create_subscription<sensor_msgs::msg::LaserScan>("/sick_lrs_36x1/scan", rclcpp::SensorDataQoS(),
std::bind(&OdomNode::callbackPose, this, std::placeholders::_1));
publisher_ = this->create_publisher<sensor_msgs::msg::LaserScan>("scan", rclcpp::SensorDataQoS());
}
private:
void callbackPose(const sensor_msgs::msg::LaserScan::SharedPtr msg)
{
now = this->get_clock()->now();
sensor_msgs::msg::LaserScan laser;
laser.header.frame_id = "cloud";
laser.header.stamp = now;
laser.angle_min = msg->angle_min;
laser.angle_max = msg->angle_max;
laser.angle_increment = msg->angle_increment;
laser.scan_time = msg->scan_time;
laser.time_increment = msg->time_increment;
laser.range_min = msg->range_min;
laser.range_max = msg->range_max;
laser.ranges = msg->ranges;
publisher_->publish(laser);
}
rclcpp::Time now;
rclcpp::Subscription<sensor_msgs::msg::LaserScan>::SharedPtr scan_;
rclcpp::Publisher<sensor_msgs::msg::LaserScan>::SharedPtr publisher_;
rclcpp::TimerBase::SharedPtr transform_timer_;
};
int main(int argc, char **argv)
{
rclcpp::init(argc, argv);
auto node = std::make_shared<OdomNode>();
rclcpp::spin(node);
rclcpp::shutdown();
return 0;
}
Originally posted by nigeno on ROS Answers with karma: 76 on 2022-03-04
Post score: 0
Answer:
I figured it out.
You need to set the Reliability Policy of the LaserScan topic in Rviz to Best Effort or System Default
Originally posted by nigeno with karma: 76 on 2022-03-07
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 37483,
"tags": "ros, rviz, scan"
} |
C code using process synchronization and multi-threading | Question: This is a matrix-vector multiplication program using multi-threading. It takes the matrixfile.txt and vectorfile.txt names, a buffer size, and a number of splits as input, and splits the matrix file into split files in the main function (the matrix is divided into smaller parts). Then mapper threads write the values into a buffer and a reducer thread writes the result into resultfile.txt. The resultfile algorithm is not efficient, but the code works; I tested it with various inputs.
I appreciate any correction and comment.
Program:
/* -*- linux-c -*- */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <math.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <time.h>
#include <pthread.h>
#include <errno.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <semaphore.h>
#include "common.h"
#include "stdint.h"
const int ROWS = 10000;
const int COLS = 3;
int twodimen[10000][3];
int count_lines;
int vector[10000];
int vector_lines;
int NUMBER_OF_ROWS;
int splitnum;
int INPUT_BUF_SIZE;
sem_t *sem_mutex; /* protects the buffer */
sem_t *sem_full; /* counts the number of items */
sem_t *sem_empty; /* counts the number of empty buffer slots */
void * mapperThread(void * xx){
int filecount = (intptr_t)xx;
char filename[20] = "splitfile";
char txt[5] = ".txt";
char num[10];
sprintf(num, "%d", filecount);
strcat(filename, num);
strcat(filename, txt);
printf ("mapper thread started with: %s \n", filename);
struct buffer * bp = find(filecount);
// OPENING SPLIT FILE
FILE *splitfileptr;
char *sline = NULL;
size_t slen = 0;
ssize_t sread;
splitfileptr = fopen(filename, "r");
if (splitfileptr == NULL){
exit(EXIT_FAILURE);
}
while ((sread = getline(&sline, &slen, splitfileptr)) != -1) {
char *line_copy = strdup(sline);
if (SYNCHRONIZED) {
sem_wait(sem_empty);
sem_wait(sem_mutex);
// CRITICAL SECTION BEGIN
bp->buf[bp->in] = line_copy;
bp->count = bp->count + 1;
bp->in = (bp->in + 1) % INPUT_BUF_SIZE; // incrementing buffer count, updating
// CRITICAL SECTION END
sem_post(sem_mutex); // releasing the mutex
sem_post(sem_full); // incrementing full count, sem_post is signal operation
}
}
printf("producer ended; bye...\n");
pthread_exit(0);
}
void * reducerThread(char* resultfilename){
printf("reducer thread started\n");
FILE *resultfileptr;
char *line = NULL;
size_t len = 0;
ssize_t read;
char* item;
int index = 0;
while (index < count_lines) {
for(int i = 0; i < splitnum; i++){
struct buffer * bp = find(i);
if (SYNCHRONIZED && bp->count != 0) {
sem_wait(sem_full); // checks whether buffer has item to retrieve, if full count = 0, this statement will cause consumer to wait
sem_wait(sem_mutex); // makes sure when we are executing this section no other process executes at the buffer
// CRITICAL SECTION BEGIN
item = bp->buf[bp->out]; // just retrieving the buffer. putting into item.
bp->count = bp->count - 1;
bp->out = (bp->out + 1) % INPUT_BUF_SIZE; // updating out index variable, this is a circular bufer
index++;
printf("retrieved item is: %s", item);
twodimen[atoi(&item[0]) - 1][0] = atoi(&item[0]);
twodimen[atoi(&item[0]) - 1][2] = twodimen[atoi(&item[0]) - 1 ][2] + atoi(&item[4]) * vector[atoi(&item[2]) - 1];
// CRITICAL SECTION END
sem_post(sem_mutex); //
sem_post(sem_empty); // number of empty cells in the buffer should be 1 more. incrementing empty size.
}
}
}
// WRITING TO RESULTFILE
resultfileptr = fopen(resultfilename, "w+");
for(int i = 0; i < NUMBER_OF_ROWS; i++){
for(int j = 0; j < COLS; j++){
if(twodimen[i][j] != 0 && twodimen[i][j + 2] != 0){
char str[10];
sprintf(str, "%d %d \n", twodimen[i][j], twodimen[i][j + 2]);
fprintf(resultfileptr, "%s", str);
}
}
}
printf("consumer ended; bye...\n");
fflush (stdout);
pthread_exit(NULL);
}
int main(int argc, char**argv)
{
clock_t start_time = clock();
const char *const matrixfilename = argv[1];
const char *const vectorfilename = argv[2];
const char *const resultfilename = argv[3];
const int K = atoi(argv[4]);
INPUT_BUF_SIZE = atoi(argv[5]);
splitnum = K;
printf ("mv started\n");
printf ("%s\n", matrixfilename);
printf ("%s\n", vectorfilename);
printf ("%s\n", resultfilename);
printf ("K is %d\n", K);
printf ("splitnum is %d\n", splitnum);
printf ("INPUT_BUF_SIZE is %d\n", INPUT_BUF_SIZE);
if(INPUT_BUF_SIZE > BUFSIZE || INPUT_BUF_SIZE < 100){
printf("Buffer input should be between 100 and 10000, BUFSIZE = 10000 will be used as default \n");
INPUT_BUF_SIZE = BUFSIZE;
}
FILE *fileptr;
count_lines = 0;
char filechar[10000], chr;
fileptr = fopen(matrixfilename, "r");
// extract character from file and store it in chr
chr = getc(fileptr);
while(chr != EOF)
{
// count whenever new line is encountered
if(chr == '\n')
{
count_lines = count_lines + 1;
}
// take next character from file
chr = getc(fileptr);
}
printf("countlines is %d \n", count_lines);
fclose(fileptr); // close file
printf("There are %d lines in in a file\n", count_lines);
int s = count_lines / K;
int remainder = count_lines % K;
printf("S is %d \n", s);
FILE *fw, *fr;
char *line = NULL;
size_t len = 0;
ssize_t read;
// CREATING SPLIT FILES AND WRITING TO THEM
for(int i = 0; i < K; i++){
char filename[20] = "splitfile";
char txt[5] = ".txt";
char its[10];
sprintf(its, "%d", i);
strcat(filename, its);
strcat(filename, txt);
fw = fopen(filename, "w+");
fr = fopen(matrixfilename, "r");
if(i == K - 1){
for(int j = 0; j < count_lines; j++){
while(((read = getline(&line, &len, fr)) != -1) && j >= (i * s)){
char *line_copy = strdup(line);
fprintf(fw, "%s", line_copy);
j++;
}
}
}
else{
for(int j = 0; j < count_lines; j++){
while(((read = getline(&line, &len, fr)) != -1) && j >= (i * s) && j <= (i + 1) * s - 1){
char *line_copy = strdup(line);
fprintf(fw, "%s", line_copy);
j++;
}
}
}
fclose(fw);
fclose(fr);
}
FILE *vectorfileptr;
vector_lines = 0;
char vchr;
vectorfileptr = fopen(vectorfilename, "r");
vchr = getc(vectorfileptr);
line = NULL;
len = 0;
// COUNTING THE SIZE OF VECTOR
while(vchr != EOF)
{
// count whenever new line is encountered
if(vchr == '\n')
{
vector_lines = vector_lines + 1;
}
// take next character from file
vchr = getc(vectorfileptr);
}
fclose(vectorfileptr);
printf("There are %d lines in vector file\n", vector_lines);
vector[vector_lines];
vectorfileptr = fopen(vectorfilename, "r");
if (vectorfileptr == NULL)
exit(EXIT_FAILURE);
int linenumber = 0;
while ((read = getline(&line, &len, vectorfileptr)) != -1) {
char *line_copy = strdup(line);
vector[linenumber] = atoi(line_copy);
linenumber++;
}
fclose(vectorfileptr);
for(int i = 0; i < vector_lines; i++){
printf("vector %d: %d\n", i, vector[i]);
}
FILE *countfileptr;
countfileptr = fopen(matrixfilename, "r");
NUMBER_OF_ROWS = 0;
while ((read = getline(&line, &len, countfileptr)) != -1) {
char *line_copy = strdup(line);
if(atoi(&line_copy[0]) > NUMBER_OF_ROWS){
NUMBER_OF_ROWS = atoi(&line_copy[0]);
}
}
fclose(countfileptr);
/* first clean up semaphores with same names */
sem_unlink (SEMNAME_MUTEX);
sem_unlink (SEMNAME_FULL);
sem_unlink (SEMNAME_EMPTY);
/* create and initialize the semaphores */
sem_mutex = sem_open(SEMNAME_MUTEX, O_RDWR | O_CREAT, 0660, 1);
if (sem_mutex < 0) {
perror("can not create semaphore\n");
exit (1);
}
printf("sem %s created\n", SEMNAME_MUTEX);
sem_full = sem_open(SEMNAME_FULL, O_RDWR | O_CREAT, 0660, 0);
if (sem_full < 0) {
perror("can not create semaphore\n");
exit (1);
}
printf("sem %s created\n", SEMNAME_FULL);
sem_empty = sem_open(SEMNAME_EMPTY, O_RDWR | O_CREAT, 0660, BUFSIZE); // initially bufsize items can be put
if (sem_empty < 0) {
perror("can not create semaphore\n");
exit (1);
}
printf("sem %s create\n", SEMNAME_EMPTY);
for(int i = 0; i < splitnum; i++){
insertFirst(0,0,0,i);
}
int err;
pthread_t tid[splitnum];
printf ("starting thread\n");
for(int i = 0; i < splitnum; i++){
err = pthread_create(&tid[i], NULL, (void*) mapperThread, (void*)(intptr_t)i);
if(err != 0){
printf("\n Cant create thread: [%s]", strerror(err));
}
}
pthread_t reducertid;
pthread_create(&reducertid, NULL, (void*) reducerThread, (char*) resultfilename);
for(int i = 0; i < splitnum; i++){
pthread_join(tid[i],NULL);
}
pthread_join(reducertid,NULL);
// join reducer thread
// closing semaphores
sem_close(sem_mutex);
sem_close(sem_full);
sem_close(sem_empty);
/* remove the semaphores */
sem_unlink(SEMNAME_MUTEX);
sem_unlink(SEMNAME_FULL);
sem_unlink(SEMNAME_EMPTY);
fflush( stdout );
exit(0);
}
HEADER file:
/* -*- linux-c -*- */
#ifndef COMMON_H
#define COMMON_H
#define TRACE 1
#define SEMNAME_MUTEX "/name_sem_mutex"
#define SEMNAME_FULL "/name_sem_fullcount"
#define SEMNAME_EMPTY "/name_sem_emptycount"
#define ENDOFDATA -1 // marks the end of data stream from the producer
// #define SHM_NAME "/name_shm_sharedsegment1"
#define BUFSIZE 10000 /* bounded buffer size */
#define MAX_STRING_SIZE
// #define NUM_ITEMS 10000 /* total items to produce */
/* set to 1 to synchronize;
otherwise set to 0 and see race condition */
#define SYNCHRONIZED 1 // You can play with this and see race
struct buffer{
struct buffer *next;
char * buf[BUFSIZE]; // string array
int count; /* current number of items in buffer */
int in; // this field is only accessed by the producer
int out; // this field is only accessed by the consumer
int source; // index of the producer
};
struct buffer *head = NULL;
struct buffer *current = NULL;
void printList(){
struct buffer *ptr = head;
while(ptr != NULL){
printf("items of buffer %d: \n", ptr->source);
printf("buffer count is : %d \n", ptr->count);
printf("buffer in is : %d \n", ptr->in);
printf("buffer out is : %d \n", ptr->out);
for(int i = 0; i < ptr->count; i++){
printf("%s", ptr->buf[i]);
}
ptr = ptr->next;
}
}
void insertFirst(int count, int in, int out, int source){
struct buffer *link = (struct buffer*) malloc(sizeof(struct buffer));
for(int i = 0; i < BUFSIZE; i++){
link->buf[i] = "";
}
link->count = count;
link->in = in;
link->out = out;
link->source = source;
link->next = head;
head = link;
}
struct buffer* find(int source){
struct buffer* current = head;
if(head == NULL){
return NULL;
}
while(current->source != source){
if(current->next == NULL){
return NULL;
}
else{
current = current->next;
}
}
return current;
}
#endif
Answer: A few semi-random observations.
You define functions in your header file, so if you include that in multiple source files in one project you'll get multiple definition errors from the linker. (Also, those functions are using standard library functions without including the required header files.)
main makes assumptions about the number of parameters passed to the program. If you don't pass enough, you'll dereference a NULL or out-of-bounds pointer (e.g., an invalid value for argv[5]). You should verify that you have enough parameters (by checking argc) before attempting to access any of the parameters.
Rather than the verbose count_lines = count_lines + 1;, you can just use ++count_lines;.
Your code for building the split filename is nearly identical in the two places you use it. You can put it in a function to avoid the duplication, and simplify it by using sprintf to build the entire filename rather than using sprintf and strcat.
sprintf(buf, "splitfile%d.txt", n);
where buf and n are passed as parameters to the function. buf should be long enough to hold any value for n, 9 + 4 + 1 + 11 = 25 characters, assuming n is no larger than 32 bits. (That's 9 bytes for the base filename, 4 for the extension, 1 for the terminating nul, and 11 for a signed 32 bit integer printed as a decimal.)
You don't verify that fw and fr (and some of your other file handles) have successfully been opened before making use of them.
Most of your strdup calls will leak, and are not necessary.
At one point in main you call atoi(&line_copy[0]) twice - one inside an if, and once in the following statement. This should be called once, stored in a local variable:
int nr = atoi(line_copy);
if (nr > NUMBER_OF_ROWS)
NUMBER_OF_ROWS = nr;
reducerThread will be an infinite loop if SYNCHRONIZED is 0. | {
"domain": "codereview.stackexchange",
"id": 38213,
"tags": "c, multithreading, thread-safety"
} |
Disk Partition type lookup table | Question: I am using the following code to look up the description based on a "Partition Type" given as a string, an integer, or a hex value, by calling parttype(parttype) in the code below.
Is there a more pythonic way for this?
__PARTTYPE_TO_DESCRIPTION__ = {
"00": "Empty",
"01": "DOS 12-bit FAT",
"02": "XENIX root",
"03": "XENIX /usr",
"04": "DOS 3.0+ 16-bit FAT (up to 32M)",
"05": "DOS 3.3+ Extended Partition",
"06": "DOS 3.31+ 16-bit FAT (over 32M)",
"07": "Windows NTFS | OS/2 IFS | exFAT | Advanced Unix | QNX2.x pre-1988",
"08": "AIX boot | OS/2 (v1.0-1.3 only) | SplitDrive | Commodore DOS | DELL partition spanning multiple drives | QNX 1.x and 2.x ('qny')",
"09": "AIX data | Coherent filesystem | QNX 1.x and 2.x ('qnz')",
"0a": "OS/2 Boot Manager | Coherent swap partition | OPUS",
"0b": "WIN95 OSR2 FAT32",
"0c": "WIN95 OSR2 FAT32, LBA-mapped",
"0d": "SILICON SAFE",
"0e": "WIN95: DOS 16-bit FAT, LBA-mapped",
"0f": "WIN95: Extended partition, LBA-mapped",
"10": "OPUS (? - not certain - ?)",
"11": "Hidden DOS 12-bit FAT | Leading Edge DOS 3.x logically sectored FAT",
"12": "Configuration/diagnostics partition",
"14": "Hidden DOS 16-bit FAT <32M | AST DOS with logically sectored FAT",
"16": "Hidden DOS 16-bit FAT >=32M",
"17": "Hidden IFS",
"18": "AST SmartSleep Partition",
"19": "Claimed for Willowtech Photon coS",
"1b": "Hidden WIN95 OSR2 FAT32",
"1c": "Hidden WIN95 OSR2 FAT32, LBA-mapped",
"1e": "Hidden WIN95 16-bit FAT, LBA-mapped",
"20": "Rumoured to be used by Willowsoft Overture File System",
"21": "Reserved for: HP Volume Expansion, SpeedStor variant | Claimed for FSo2 (Oxygen File System)",
"22": "Claimed for Oxygen Extended Partition Table",
"23": "Reserved - unknown",
"24": "NEC DOS 3.x",
"26": "Reserved - unknown",
"27": "PQservice | Windows RE hidden partition | MirOS partition | RouterBOOT kernel partition",
"2a": "AtheOS File System (AFS)",
"2b": "SyllableSecure (SylStor)",
"31": "Reserved - unknown",
"32": "NOS",
"33": "Reserved - unknown",
"34": "Reserved - unknown",
"35": "JFS on OS/2 or eCS ",
"36": "Reserved - unknown",
"38": "THEOS ver 3.2 2gb partition",
"39": "Plan 9 partition | THEOS ver 4 spanned partition",
"3a": "THEOS ver 4 4gb partition",
"3b": "THEOS ver 4 extended partition",
"3c": "PartitionMagic recovery partition",
"3d": "Hidden NetWare",
"40": "Venix 80286 | PICK | Linux/MINIX",
"41": "Personal RISC Boot | PPC PReP (Power PC Reference Platform) Boot",
"42": "Windows dynamic extended partition | Linux swap | SFS (Secure Filesystem)",
"43": "Linux native",
"44": "GoBack partition",
"45": "Boot-US boot manager | Priam | EUMEL/Elan ",
"46": "EUMEL/Elan",
"47": "EUMEL/Elan",
"48": "EUMEL/Elan",
"4a": "Mark Aitchison's ALFS/THIN lightweight filesystem for DOS | AdaOS Aquila (Withdrawn)",
"4c": "Oberon partition",
"4d": "QNX4.x",
"4e": "QNX4.x 2nd part",
"4f": "QNX4.x 3rd part | Oberon partition",
"50": "OnTrack Disk Manager (older versions) RO | Lynx RTOS | Native Oberon (alt)",
"51": "OnTrack Disk Manager RW (DM6 Aux1) | Novell",
"52": "CP/M | Microport SysV/AT",
"53": "Disk Manager 6.0 Aux3",
"54": "Disk Manager 6.0 Dynamic Drive Overlay (DDO)",
"55": "EZ-Drive",
"56": "Golden Bow VFeature Partitioned Volume | DM converted to EZ-BIOS | AT&T MS-DOS 3.x logically sectored FAT",
"57": "DrivePro | VNDI Partition",
"5c": "Priam EDisk",
"61": "SpeedStor",
"63": "Unix System V (SCO, ISC Unix, UnixWare, ...), Mach, GNU Hurd",
"64": "PC-ARMOUR protected partition | Novell Netware 286, 2.xx",
"65": "Novell Netware 386, 3.xx or 4.xx",
"66": "Novell Netware SMS Partition",
"67": "Novell",
"68": "Novell",
"69": "Novell Netware 5+, Novell Netware NSS Partition",
"70": "DiskSecure Multi-Boot",
"71": "Reserved - unknown",
"72": "V7/x86",
"73": "Reserved - unknown",
"74": "Scramdisk partition | Reserved - unknown",
"75": "IBM PC/IX",
"76": "Reserved - unknown",
"77": "M2FS/M2CS partition | VNDI Partition",
"78": "XOSL FS",
"7e": "Claimed for F.I.X.",
"7f": "Proposed for the Alt-OS-Development Partition Standard",
"80": "MINIX until 1.4a",
"81": "MINIX since 1.4b, early Linux | Mitac disk manager",
"82": "Linux swap | Solaris x86 | Prime",
"83": "Linux native partition",
"84": "OS/2 hidden C: drive | Hibernation partition",
"85": "Linux extended partition",
"86": "Old Linux RAID partition superblock | FAT16 volume set",
"87": "NTFS volume set",
"88": "Linux plaintext partition table",
"8a": "Linux Kernel Partition (used by AiR-BOOT)",
"8b": "Legacy Fault Tolerant FAT32 volume",
"8c": "Legacy Fault Tolerant FAT32 volume using BIOS extd INT 13h",
"8d": "Free FDISK 0.96+ hidden Primary DOS FAT12 partitition",
"8e": "Linux Logical Volume Manager partition",
"90": "Free FDISK 0.96+ hidden Primary DOS FAT16 partitition",
"91": "Free FDISK 0.96+ hidden DOS extended partitition",
"92": "Free FDISK 0.96+ hidden Primary DOS large FAT16 partitition",
"93": "Hidden Linux native partition | Amoeba",
"94": "Amoeba bad block table",
"95": "MIT EXOPC native partitions",
"96": "CHRP ISO-9660 filesystem",
"97": "Free FDISK 0.96+ hidden Primary DOS FAT32 partitition",
"98": "Free FDISK 0.96+ hidden Primary DOS FAT32 partitition (LBA) | Datalight ROM-DOS Super-Boot Partition",
"99": "DCE376 logical drive",
"9a": "Free FDISK 0.96+ hidden Primary DOS FAT16 partitition (LBA)",
"9b": "Free FDISK 0.96+ hidden DOS extended partitition (LBA)",
"9e": "ForthOS partition",
"9f": "BSD/OS",
"a0": "Laptop hibernation partition",
"a1": "Laptop hibernation partition | HP Volume Expansion (SpeedStor variant)",
"a3": "HP Volume Expansion (SpeedStor variant)",
"a4": "HP Volume Expansion (SpeedStor variant)",
"a5": "BSD/386, 386BSD, NetBSD, FreeBSD",
"a6": "OpenBSD | HP Volume Expansion (SpeedStor variant)",
"a7": "NeXTStep",
"a8": "Mac OS-X",
"a9": "NetBSD",
"aa": "Olivetti Fat 12 1.44MB Service Partition",
"ab": "Mac OS-X Boot partition | GO! partition",
"ad": "RISC OS ADFS",
"ae": "ShagOS filesystem",
"af": "MacOS X HFS | ShagOS swap partition",
"b0": "BootStar Dummy",
"b1": "HP Volume Expansion (SpeedStor variant) | QNX Neutrino Power-Safe filesystem",
"b2": "QNX Neutrino Power-Safe filesystem",
"b3": "HP Volume Expansion (SpeedStor variant) | QNX Neutrino Power-Safe filesystem",
"b4": "HP Volume Expansion (SpeedStor variant)",
"b6": "HP Volume Expansion (SpeedStor variant) | Corrupted Windows NT mirror set (master), FAT16 file system",
"b7": "Corrupted Windows NT mirror set (master), NTFS file system | BSDI BSD/386 filesystem",
"b8": "BSDI BSD/386 swap partition",
"bb": "Boot Wizard hidden",
"bc": "Acronis backup partition",
"bd": "BonnyDOS/286",
"be": "Solaris 8 boot partition",
"bf": "New Solaris x86 partition",
"c0": "CTOS | REAL/32 secure small partition | NTFT Partition | DR-DOS/Novell DOS secured partition",
"c1": "DRDOS/secured (FAT-12)",
"c2": "Hidden Linux",
"c3": "Hidden Linux swap",
"c4": "DRDOS/secured (FAT-16, < 32M)",
"c5": "DRDOS/secured (extended)",
"c6": "DRDOS/secured (FAT-16, >= 32M) | Windows NT corrupted FAT16 volume/stripe set",
"c7": "Windows NT corrupted NTFS volume/stripe set | Syrinx boot",
"c8": "Reserved for DR-DOS 8.0+",
"c9": "Reserved for DR-DOS 8.0+",
"ca": "Reserved for DR-DOS 8.0+",
"cb": "DR-DOS 7.04+ secured FAT32 (CHS)",
"cc": "DR-DOS 7.04+ secured FAT32 (LBA)",
"cd": "CTOS Memdump",
"ce": "DR-DOS 7.04+ FAT16X (LBA)",
"cf": "DR-DOS 7.04+ secured EXT DOS (LBA)",
"d0": "REAL/32 secure big partition | Multiuser DOS secured partition",
"d1": "Old Multiuser DOS secured FAT12",
"d4": "Old Multiuser DOS secured FAT16 <32M",
"d5": "Old Multiuser DOS secured extended partition",
"d6": "Old Multiuser DOS secured FAT16 >=32M",
"d8": "CP/M-86",
"da": "Non-FS Data | Powercopy Backup",
"db": "Digital Research CP/M, Concurrent CP/M, Concurrent DOS | CTOS (Convergent Technologies OS -Unisys) | KDG Telemetry SCPU boot",
"dd": "Hidden CTOS Memdump",
"de": "Dell PowerEdge Server utilities (FAT fs)",
"df": "DG/UX virtual disk manager partition | BootIt EMBRM",
"e0": "Reserved by STMicroelectronics for a filesystem called ST AVFS",
"e1": "DOS access or SpeedStor 12-bit FAT extended partition",
"e3": "DOS R/O | SpeedStor",
"e4": "SpeedStor 16-bit FAT extended partition < 1024 cyl.",
"e5": "Tandy MSDOS with logically sectored FAT",
"e6": "Storage Dimensions SpeedStor",
"e8": "LUKS",
"eb": "BeOS BFS",
"ec": "SkyOS SkyFS",
"ed": "plans to use this for an OS called Sprytix",
"ee": "Indication that this legacy MBR is followed by an EFI header",
"ef": "Partition that contains an EFI file system",
"f0": "Linux/PA-RISC boot loader",
"f1": "Storage Dimensions SpeedStor",
"f2": "DOS 3.3+ secondary partition",
"f3": "Storage Dimensions SpeedStor",
"f4": "SpeedStor large partition | Prologue single-volume partition",
"f5": "Prologue multi-volume partition",
"f6": "Storage Dimensions SpeedStor",
"f7": "DDRdrive Solid State File System",
"f9": "pCache",
"fa": "Bochs",
"fb": "VMware File System partition",
"fc": "VMware Swap partition",
"fd": "Linux raid partition with autodetect using persistent superblock",
"fe": "SpeedStor > 1024 cyl. | LANstep | IBM PS/2 IML (Initial Microcode Load) partition, located at the end of the disk. | Windows NT Disk Administrator hidden partition | Linux Logical Volume Manager partition (old)",
"ff": "Xenix Bad Block Table"}
def parttype_2_description(parttype):
try:
"""
returns the Partition Type Description
based on a two character (hex) string Partition type
"""
return __PARTTYPE_TO_DESCRIPTION__[parttype.lower()]
except KeyError:
return 'Unknown partition type: ' + parttype.lower()
def parttype_int_2_description(parttype):
"""
returns the Partition Type Description
based on an integer partition type
"""
return parttype_2_description(str(hex(parttype))[2:].rjust(2, '0'))
def parttype_hex_2_description(parttype):
"""
    returns the Partition Type Description
based on a hex partition type
"""
return parttype_2_description(str(parttype)[2:].rjust(2, '0'))
def ishex(value):
if not str(value)[:2] == '0x':
return False
try:
hexval = int(value, 16)
return True
except:
return False
def isint(value):
return isinstance(value, int)
def isstr(value):
return isinstance(value, str)
def parttype(parttype):
"""
returns the partition type descriptor based on
a string, int or hex partition type
"""
if ishex(parttype): return parttype_hex_2_description(parttype)
if isint(parttype): return parttype_int_2_description(parttype)
if isstr(parttype): return parttype_2_description(parttype)
return
def main():
print('do not run this interactively')
print('import and call the parttype() function')
return
if __name__ == '__main__':
main ()
Answer: The dictionary
The naming rules in PEP 8 state:
__double_leading_and_trailing_underscore__: "magic" objects or attributes that live in user-controlled namespaces. E.g. __init__, __import__ or __file__. Never invent such names; only use them as documented.
If your intention is to simply indicate that the dictionary is "private", use a _single_leading_underscore. Coupled with the convention to use ALL_CAPS for constants, I would name it _PARTTYPE_TO_DESCRIPTION.
The keys in the dictionary represent numbers, right? Then why not write them as numbers? It is easier to normalize strings into integers than to format integers as strings, since there are a multitude of ways to write 15 (e.g. "0f", "0F", "0x0f", "0x0F").
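That normalization is a one-liner, because `int` with an explicit base already accepts an optional `0x` prefix; a quick check:

```python
# All common spellings of partition type 15 parse to the same integer.
forms = ["0f", "0F", "0x0f", "0x0F", "0X0F"]
assert {int(s, 16) for s in forms} == {15}

# Going the other way forces you to pick one of the many string formats:
assert format(15, "02x") == "0f"
```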
The lookup functions
I'm not a fan of the parttype_…_2_description(…) naming. The 2 looks like it's supposed to be some version number.
Instead of three lookup functions, why not offer one function that just does the "right thing" depending on the argument value?
I don't think that you should return 'Unknown partition type: 13' as if it were a valid result. You could either raise an exception, or let the caller specify the fallback value. When composing the exception string, don't mess with the input (.lower()) — it's confusing.
The parttype_2_description docstring is botched. It needs to be the very first thing inside the function.
Suggested solution
I would write one function that handles all the cases, and include a docstring with doctests to thoroughly describe how to use it.
_PARTTYPE_TO_DESCRIPTION = {
0x00: "Empty",
0x01: "DOS 12-bit FAT",
0x02: "XENIX root",
0x03: "XENIX /usr",
0x04: "DOS 3.0+ 16-bit FAT (up to 32M)",
0x05: "DOS 3.3+ Extended Partition",
0x06: "DOS 3.31+ 16-bit FAT (over 32M)",
0x07: "Windows NTFS | OS/2 IFS | exFAT | Advanced Unix | QNX2.x pre-1988",
0x08: "AIX boot | OS/2 (v1.0-1.3 only) | SplitDrive | Commodore DOS | DELL partition spanning multiple drives | QNX 1.x and 2.x ('qny')",
0x09: "AIX data | Coherent filesystem | QNX 1.x and 2.x ('qnz')",
0x0A: "OS/2 Boot Manager | Coherent swap partition | OPUS",
0x0B: "WIN95 OSR2 FAT32",
0x0C: "WIN95 OSR2 FAT32, LBA-mapped",
0x0D: "SILICON SAFE",
0x0E: "WIN95: DOS 16-bit FAT, LBA-mapped",
0x0F: "WIN95: Extended partition, LBA-mapped",
0x10: "OPUS (? - not certain - ?)",
…
0xFF: "Xenix Bad Block Table",
}
def partition_description(type, unknown_description=None):
"""
Return the Partition Type Description for the partition type, given either
as an integer or as a hex string.
>>> partition_description(15)
'WIN95: Extended partition, LBA-mapped'
>>> partition_description(0x0f)
'WIN95: Extended partition, LBA-mapped'
>>> partition_description('0x0f')
'WIN95: Extended partition, LBA-mapped'
>>> partition_description('0x0F')
'WIN95: Extended partition, LBA-mapped'
>>> partition_description('0F')
'WIN95: Extended partition, LBA-mapped'
>>> partition_description('0f')
'WIN95: Extended partition, LBA-mapped'
If unknown_description is also given, then it will be returned if there is
no such partition type.
>>> partition_description(0x13, 'Bogus partition!')
'Bogus partition!'
If unknown_description is None or is omitted, then ValueError will be
raised for unrecognized partition types.
>>> partition_description('0x13')
Traceback (most recent call last):
...
ValueError: Unknown partition type: 0x13
"""
type_num = type if isinstance(type, int) else int(type, base=16)
description = _PARTTYPE_TO_DESCRIPTION.get(type_num, unknown_description)
if description is None:
raise ValueError('Unknown partition type: ' + str(type))
return description | {
"domain": "codereview.stackexchange",
"id": 20817,
"tags": "python, python-3.x"
} |
Recommended number of features for regression problem | Question: In the following link, the answer recommends N/3 as the number of features for regression (or quotes that recommendation).
Where N corresponds to the sample size:
How many features to sample using Random Forests
Is there any paper which quotes this?
Answer: Not sure what is meant by a paper. Are you asking if there is mathematical proof that this is the best setting all of the time?
There are some experiments pointed to from here, a text book quote in that answer, and in the link you posted.
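If you want to tune it for a concrete problem, a scikit-learn sketch looks like the following (the toy data, candidate values, and use of the out-of-bag score as a cheap stand-in for cross-validation are all my choices, not from the references above; n/3 here is read as number-of-features over 3):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy regression data: 12 features, only the first two informative.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 12))
y = X[:, 0] + 2 * X[:, 1] + 0.1 * rng.normal(size=400)

# Candidate values for max_features, seeded at n_features / 3.
candidates = [12 // 3, int(np.sqrt(12)), 12]
scores = {}
for m in candidates:
    model = RandomForestRegressor(n_estimators=200, max_features=m,
                                  oob_score=True, random_state=0)
    model.fit(X, y)
    scores[m] = model.oob_score_  # out-of-bag R^2

best = max(scores, key=scores.get)
```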
The answer is "it depends". You can tune this parameter for your data and problem. Perhaps n/3 is a good place to start, and maybe close enough to the optimum that it does not need further tuning, given the other parameters and the time you have. | {
"domain": "datascience.stackexchange",
"id": 10426,
"tags": "regression, statistics, random-forest, optimization, sampling"
} |
Help writing a dynamic programming algorithm in English | Question: It’s Friday night and you have $n$ parties to go to. Party $i$ has a start time $s_i$ end time $t_i$ and a value $v_i \ge 0$. Think of the value as an indicator of how excited you are about that party. You want to pick a subset of the parties to attend so that you get maximum total value. The only constraint is that in order to get a value $v_i$ from party $i$ you need to attend the party from start to finish. Hence you cannot drop in midway and leave before the party ends. This means that if two parties overlap, you can only attend one of them. For example, if party 1 has a start time of 7pm and ends at 9pm and party 2 starts at 7:30pm and ends at 10pm, you can only attend one of them. On the other hand, if party 2 starts at 9pm or later then you can attend both. Given as input start times, end times and values of each of the n parties, design an $O(n \log n)$ time algorithm to plan your night optimally. Describe in English the idea of your algorithm. If you use dynamic programming define the memo table, base case and recursive steps. You don’t need to write the pseudo code or provide a proof of correctness.
I am having difficulty beginning this problem. My initial thought was to have the base case be to choose the value with the initial start time and the greatest $v_i$, however if party $i$ runs for your entire start to end time, you might not maximize $v_i$. I would greatly appreciate some insight into how to do this problem.
Answer: Since this problem is a learning exercise of yours, I will just write the ordered subproblems used in dynamic programming. In general, I consider a nontrivial problem solvable by dynamic programming to be more than half done once the ordered subproblems have been specified clearly, at least in terms of the "difficult" thinking, since the rest is more routine.
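(For reference, a runnable Python sketch of the standard $O(n\log n)$ solution follows; it spoils part of the exercise, so skip it if you want to fill in the details yourself.)

```python
import bisect

def best_total_value(parties):
    """parties: list of (start, end, value); returns the maximum total value."""
    parties = sorted(parties, key=lambda p: p[1])  # sort by end time: O(n log n)
    ends = [end for _, end, _ in parties]
    best = [0] * (len(parties) + 1)  # best[i]: optimum over the first i parties
    for i, (start, end, value) in enumerate(parties, 1):
        # Number of parties ending no later than this one starts (end <= start
        # is compatible, per the 9pm example): one binary search, O(log n).
        j = bisect.bisect_right(ends, start)
        best[i] = max(best[i - 1],       # skip party i
                      best[j] + value)   # attend party i
    return best[-1]

# The example from the question: 7-9pm vs 7:30-10pm overlap; 9pm onward does not.
assert best_total_value([(19, 21, 5), (19.5, 22, 4)]) == 5
assert best_total_value([(19, 21, 5), (21, 23, 4)]) == 9
```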
The subproblems are computing $v[i]$, the maximal total value you can get when you just finished party $P_i$. To help computing these subproblems, that is, from earlier subproblems to later subproblems, you may sort the parties by their ending times so that $P_1,P_2,\cdots,P_n$ end in order of time. After the sorting, $v[i]$ will only depend on those $v[j]$ where $j<i$ or, to be more precise, those $P_j$ whose ending time is before $P_i$'s starting time. (This condition can be made even more restrictive, which may be unnecessary for your exercise and which may be necessary to get a better grade.) There are several more details such as the base case and the recurrence relation, which I will leave for you to fill in. | {
"domain": "cs.stackexchange",
"id": 12494,
"tags": "algorithms, dynamic-programming, knapsack-problems, pseudocode"
} |
Invariance, covariance and symmetry | Question: Though often heard, often read, often felt being overused, I wonder what are the precise definitions of invariance and covariance. Could you please give me an example from field theory?
Answer: The definitions of these terms are somewhat context-dependent. In general, however, invariance in physics refers to when a certain quantity remains the same under a transformation of things out of which it is built, while covariance refers to when equations "retain the same form" after the objects in the equations are transformed in some way.
In the context of field theory, one can make these notions precise as follows. Consider a theory of fields $\phi$. Let a transformation $T$
$$
\phi \to\phi_T
$$
on fields be given. Let a functional $F[\phi]$ of the fields be given (consider the action functional for example). The functional is said to be invariant under the transformation $T$ of the fields provided
$$
F[\phi_T] = F[\phi]
$$
for all fields $\phi$. On the other hand, the equations of motion of the theory are said to be covariant with respect to the transformation $T$ provided that if the fields $\phi$ satisfy the equations, then so do the fields $\phi_T$; the form of the equations is left the same by $T$.
For example, the action of a single real Klein-Gordon scalar $\phi$ is Lorentz-invariant meaning that it doesn't change under the transformation
$$
\phi(x)\to\phi_\Lambda(x) = \phi(\Lambda^{-1}x),
$$
and the equations of motion of the theory are Lorentz-covariant in the sense that if $\phi$ satisfies the Klein-Gordon equation, then so does $\phi_\Lambda$.
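(As a sketch of why: writing $y=\Lambda^{-1}x$, the chain rule gives
$$
\partial_\mu \phi_\Lambda(x) = (\Lambda^{-1})^\nu{}_\mu\,(\partial_\nu\phi)(\Lambda^{-1}x),
$$
and since $\Lambda$ preserves the metric, $\eta^{\mu\sigma}(\Lambda^{-1})^\nu{}_\mu(\Lambda^{-1})^\rho{}_\sigma=\eta^{\nu\rho}$, so
$$
(\Box + m^2)\,\phi_\Lambda(x) = \big[(\Box + m^2)\phi\big](\Lambda^{-1}x),
$$
which vanishes whenever $\phi$ solves the Klein-Gordon equation.)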
Also, I'd imagine that you'd find this helpful. | {
"domain": "physics.stackexchange",
"id": 30453,
"tags": "symmetry, tensor-calculus, definition, covariance, invariants"
} |
How many sets of vectors can be represented as the solutions of a Horn-SAT instance? | Question: Let the solution space of a SAT instance be the set of Boolean vectors of satisfying assignments of $\{0,1\}$ to the variables (that result in the formula evaluating to TRUE).
In other words, a solution space represents all the solutions of an instance.
There are at most $2^n$ solutions, so $n$-variable SAT (denoted here by SAT$_n$) can represent at most $2^{2^n}$ different solution spaces.
Moreover, it is easy to see that this number can actually be reached, by considering instances where every clause contains all $n$ variables.
We know that Horn-SAT is less expressive than SAT, for instance there is no Horn-SAT$_2$ representation of the solution space of $a \lor b$, which contains the 3 vectors $01, 10, 11$.
Hence Horn-SAT$_n$ must be able to represent fewer than $2^{2^n}$ solution spaces.
What is the number of solution spaces of a Horn-SAT$_n$ instance?
For completeness, recall that a literal is either a variable or its negation,
and that $n$-variable SAT consists of formulas built from at most $n$ variables (we can fix the set of variable names, e.g. $\{1,2,\dots,n\}$), in conjunctive normal form (a conjunction of disjunctions), where each disjunction of literals is known as a clause.
Horn-SAT$_n$ is the fragment of $n$-variable SAT where each clause in an input formula contains at most one non-negated variable.
An equivalent counting formulation of the question is: how many different values can #Horn-SAT$_n$ have?
However, note that I am not interested here in the complexity of #Horn-SAT, just its range of values for each number of variables.
If one tries to work syntactically, then one faces the same challenge as for SAT.
Suppose one tries to count the number of different SAT instances syntactically.
There are $2^{3^n}$ different ways to write down a SAT instance with $n$ variables and no repeated clauses (if the variable order is fixed and no variable may appear more than once in a clause), so one has to find the equivalence classes of syntactically different instances that lead to the same solution space, to retrieve the $2^{2^n}$ number.
One then needs to similarly take the quotient of the set of different Horn-SAT instances with $n$ variables to obtain the correct value of the number of different solution spaces.
It is not clear to me how to take quotients for SAT, let alone how to do so for Horn-SAT.
Answer: $\def\pw{\mathcal P}$
It is an easy exercise to show that if we identify Boolean assignments to $n$ variables with subsets of $[n]=\{0,\dots,n-1\}$, then the solution sets of Horn CNFs are exactly the families $S\subseteq\pw([n])$ that are closed under nonempty intersections.
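For small $n$ this claim can be checked exhaustively; here is a throwaway Python sketch for $n=2$ (the clause encoding is mine):

```python
from itertools import combinations, product

n = 2
# Encode each assignment as the frozenset of variables set to true.
assignments = [frozenset(v for v in range(n) if bits[v])
               for bits in product([0, 1], repeat=n)]

# A Horn clause has at most one positive literal: (pos, negs), where pos is a
# variable index or None, and negs is a tuple of distinct other variables.
horn_clauses = []
for pos in [None] + list(range(n)):
    others = [v for v in range(n) if v != pos]
    for r in range(len(others) + 1):
        for negs in combinations(others, r):
            if pos is not None or negs:   # exclude the empty clause
                horn_clauses.append((pos, negs))

def satisfies(a, clause):
    pos, negs = clause
    return (pos is not None and pos in a) or any(v not in a for v in negs)

# Solution spaces over all Horn instances (all subsets of clauses).
horn_spaces = set()
for mask in range(1 << len(horn_clauses)):
    inst = [c for i, c in enumerate(horn_clauses) if mask >> i & 1]
    horn_spaces.add(frozenset(a for a in assignments
                              if all(satisfies(a, c) for c in inst)))

# Families of assignment-sets closed under pairwise (= nonempty) intersection.
closed = set()
for fam_bits in product([0, 1], repeat=len(assignments)):
    fam = frozenset(a for a, b in zip(assignments, fam_bits) if b)
    if all(x & y in fam for x in fam for y in fam):
        closed.add(fam)

assert horn_spaces == closed   # the two collections coincide (14 families)
```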
The picture is a little nicer for conjunctions of clauses each of which contains exactly one positive literal. Their solution sets are families closed under arbitrary intersections (where in particular the empty intersection is the full set), also known as closure systems or Moore families. These are in 1–1 correspondence with closure operators (functions $C\colon\pw([n])\to\pw([n])$ satisfying $X\subseteq C(Y)\iff C(X)\subseteq C(Y)$), consequence relations (relations ${\vdash}\subseteq\pw([n])\times[n]$ satisfying a handful of natural conditions), and I guess there are more similar combinatorial characterizations (https://oeis.org/A102896 or https://oeis.org/A102897 mention semilattices).
These two numbers only differ by a factor of two (given a closure system $S$, we may either retain $[n]\in S$ or throw it out without affecting anything else).
Alekseev proved that the number $\alpha(n)$ of closure systems on an $n$-element set satisfies
$$2^{\binom{n}{\lfloor n/2\rfloor}}\le\alpha(n)\le2^{\binom{n}{\lfloor n/2\rfloor}(1+o(1))}.$$
See http://www.renyi.hu/~ohkatona/paper_72.pdf for related information. | {
"domain": "cstheory.stackexchange",
"id": 3170,
"tags": "cc.complexity-theory, co.combinatorics, sat"
} |
In this lecture, does Feynman suggest that absolute position exists? | Question: I apologize if this question has been asked before, but I was unable to find it if it has.
I am watching this 1964 Character of Physical Law lecture (link is to minute 41), and in this portion Feynman discusses Heisenberg's uncertainty principle. As he describes it, it sounds like it is a limitation on measurement, not on the nature of reality (he talks about not being able to measure without "disturbing" the electrons too much). However, as discussed in other questions on this site, the uncertainty principle is much more fundamental than a limitation on measurement.
So, my question is: Am I misunderstanding Feynman, or has the understanding of Heisenberg's uncertainty principle changed since 1964? If I am misunderstanding, I'd appreciate a correction.
I'm aware that Feynman goes on to say that hidden variable theories don't work, but it seems that there could be a single trajectory, unpredictable at the onset, that we could still measure.
Answer: Some context is necessary here. Feynman was lecturing to undergraduate students in an introductory course of physics. Most of these students would have had no prior exposure to quantum mechanics (or any physics for that matter). As a teaching strategy, it is often useful to talk about experiments and other physical situations that are easy to imagine. With this in mind, it is easier for students to understand that in order to measure the position of an electron, you have to hit it with a photon. But, that transfers momentum to the electron, disturbing its prior motion.
Furthermore, the question of whether the Uncertainty Principle was fundamental or instrumental was still unsettled at the time of the Feynman lectures. In fact, the question had been unanswered for so long that most physicists considered it nonsense (in technical jargon, a "philosophical question"). It wasn't until 1964 (after the lectures) that John Stewart Bell published his theorem that allowed the question to be answered experimentally. That is, he found a way to measure whether a particle actually has a definite position (or any other property subject to the Uncertainty Principle) prior to measurement. The answer, confirmed in many experiments up to the present, is no. | {
"domain": "physics.stackexchange",
"id": 39904,
"tags": "quantum-mechanics, heisenberg-uncertainty-principle"
} |
Spectral Networks and Deep Locally Connected Networks on Graphs | Question: I’m reading the paper Spectral Networks and Deep Locally Connected Networks on Graphs and I’m having a hard time understanding the notation shown in the picture below (the scribbles are mine):
Specifically, I don’t understand the notation for the matrix F. Why does it include an i and a j?
Answer: In case anyone was curious, I found my answer in the Appendix in this paper, which explains the notation better: Neural Message Passing for Quantum Chemistry | {
"domain": "datascience.stackexchange",
"id": 8975,
"tags": "research, graph-neural-network"
} |
Is the coefficient of restitution a constant? (/Validity over range) | Question: We were taught about the coefficient of restitution and its definition, and that it is treated as a constant.
This was termed "Newton's law of restitution".
However, I don't understand why that should be. It seems pretty arbitrary that it would hold true (there is no rigorous or intuitive proof).
If there is one, I haven't been able to find it online (as there is for other laws like Ohm's law and Newton's law of cooling*).
Although I have accepted it as a fact (to solve questions), I cannot imagine it (COR) being anything more than a fancy way to give us an extra piece of information which was previously missing (as I have noticed with impulse-momentum equations).
I would like to gain some theoretical insight into why it is treated as a constant (or over a range of values).
(*By the proofs of the laws, what I meant:
Newton's law of cooling is an approximation of the Stefan-Boltzmann equation, which is well understood.
Also, Ohm's law can be "derived" through knowledge of drift velocity/relaxation time/electric field, which is what I am looking for.)
Answer: The Law of Restitution is usually stated as a constant ratio $e$ between the relative velocities of separation and approach for a particular pair of colliding objects. A more intuitive formulation is that a constant fraction $1-e^2$ of the total kinetic energy is lost in the collision.
Like Ohm's Law and Hooke's Law, the Law of Restitution is empirical. It is an approximation which is valid over a limited range of relative speeds of approach, typically 0.1 to 100 m/s. For some materials, notably rubber, which does not obey Hooke's law, it is not a good approximation over this range. The law is not based on theory, but models of deformable materials can be used to "explain" how $e$ depends on the material, geometric and kinetic properties of the colliding bodies.
According to K L Thornton in Contact Mechanics, CUP 1987, chapter 11, p 363 :
The coefficient of restitution is not a material property, but depends on the severity of the impact. At sufficiently low velocities $V \lt V_Y$ the deformation is elastic and $e \approx 1$. The coefficient of restitution falls gradually with increasing velocity. When a fully plastic indentation is formed our theory suggests that $e \propto V^{-1/4}$.
For low impact speeds all colliding bodies behave like a spring and dashpot. The deformation is elastic, in the sense that the body returns to its original shape, but some energy is lost due to internal friction (elastic hysteresis). The amount of frictional loss relates to the speed of sound waves $c$ in the material. If the material is very hard (stiff), so that the deformation is small and $c \gg V$ then almost all of the stored energy is returned as kinetic energy ($e \approx 1$). For 'soft' materials ($V \approx c$) a significant portion of the KE is lost ($e \lt 1$).
Thornton's analysis suggests that collisions cease to be elastic ($e \approx 1$) when $p=\rho V^2/Y \lt 10^{-6}$, where $\rho$ is density, $V$ is impact speed and $Y$ is the yield stress of the softer material. For a hard steel sphere striking a medium hard steel floor the impact speed at which plastic deformation starts is $V_Y \approx 0.14 m/s$. Thereafter permanent deformation occurs in part of the contact region, and the size of this region increases as impact velocity increases. Fully plastic deformation occurs for $p \approx 10^{-3}$ when typically $V \approx 5m/s$, above which the $V^{-1/4}$ law dominates.
The shallow indentation theory breaks down at $p \approx 10^{-1}, V \approx 100m/s$. The onset of hydrodynamic flow at $p \approx 10, V \approx 1000m/s$ is marked by plastic deformation outside the contact region. | {
"domain": "physics.stackexchange",
"id": 47258,
"tags": "newtonian-mechanics, classical-mechanics, collision"
} |
What happens to entangled particles when one of them is measured? | Question: Imagine two entangled photons came into existence and one of them is measured. We have altered one of the quantum states of that photon and caused its wavefunction to collapse, and we measured it to be spin up. We know the other photon must be spin down because the two photons are correlated. Mathematically speaking, I am not sure whether any physical exchange is going on, but I trust SR: no information can beat the cosmic speed limit.
Now I want to know: what happens to the other photon? Did its wavefunction collapse too, given that we know it is spin down although we never directly interacted with it?
Answer: This is a very interesting question, and to answer it I am going to consider the following example. Suppose, as you said, that we have a source that somehow (there are different techniques which allow us to obtain entangled photons, such as employing nonlinear crystals) generates a pair of photons that are in the following global state
\begin{equation}
|\Phi\rangle = \dfrac{1}{\sqrt{2}}\big[|0_A1_B\rangle + |1_A0_B\rangle\big],
\end{equation}
where the $0$ represents vertical polarization and $1$ horizontal polarization, for instance. This is an example of a maximally entangled state, which belongs to the basis of Bell states (see the book Quantum Computation and Quantum Information by Nielsen and Chuang, for example). Each of the photons is sent to system $A$ (usually known as Alice) and system $B$ (usually known as Bob), which explains the presence of the subscripts that I have considered above. Also, I assume that Alice and Bob are far away from each other and that they can only communicate classically (via telephone, for example).
Now, as you said, Alice performs a random polarization measurement, so with probability $p_A = \frac{1}{2}$ she can obtain either horizontal or vertical polarization. If she obtains horizontal polarization after the measurement, i.e., a $1$, the global state of the system collapses to
\begin{equation}
|\Phi'\rangle = |1_A 0_B\rangle.
\end{equation}
But what about Bob? He doesn't know what the result of Alice's measurement was, so if we want to put ourselves in his position, we have to somehow eliminate Alice's degrees of freedom. This is done in quantum information via the density matrix representation, which in our case is given by
\begin{equation}
\rho = |\Phi\rangle \langle \Phi |
= \dfrac{1}{2}
\Big[
|1_A0_B\rangle \langle 1_A 0_B| + |0_A1_B\rangle \langle 0_A1_B|
+ |1_A0_B\rangle\langle 0_A1_B| + |0_A1_B\rangle \langle 1_A 0_B|\Big],
\end{equation}
and taking the partial trace with respect to $A$, that is, summing over Alice's degrees of freedom, we get that Bob sees the state
\begin{equation}
\rho_B = \text{tr}_A \rho = \sum_{i=0,1} \langle i |\rho|i \rangle =
\dfrac{1}{2}\big(|0\rangle \langle 0 | + |1\rangle \langle 1 |\big) = \dfrac{\boldsymbol{1}}{2},
\end{equation}
where $\boldsymbol{1}$ represents the identity. Hence, no matter what Alice does on her state, if Bob does not know what the result of her measurement was, every horizontal or vertical polarization measurement that he does on his state will have outcome probabilities equal to $\frac{1}{2}$.
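As a quick sanity check (a sketch, not part of the derivation), the same partial trace can be computed with NumPy, ordering the basis as $|00\rangle, |01\rangle, |10\rangle, |11\rangle$:

```python
import numpy as np

# |Phi> = (|01> + |10>) / sqrt(2), with the basis ordered |00>, |01>, |10>, |11>
phi = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2.0)
rho = np.outer(phi, phi)                       # the global state |Phi><Phi|

# Partial trace over Alice: view rho as rho[a_i, b_i, a_j, b_j] and
# contract Alice's two indices against each other.
rho_B = np.einsum('iaib->ab', rho.reshape(2, 2, 2, 2))

print(rho_B)   # diag(0.5, 0.5) -- the maximally mixed state 1/2
```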
Nevertheless, if after the measurement Alice tells Bob her result (in our case horizontal polarization), then the density matrix of the global system will be
\begin{equation}
\rho = |1_A 0_B\rangle \langle 1_A 0_B|,
\end{equation}
and then Bob gets with probability $1$ vertical polarization.
Hope this answers your question satisfactorily! | {
"domain": "physics.stackexchange",
"id": 64242,
"tags": "quantum-mechanics, wavefunction, quantum-entanglement"
} |
$\nabla \times \mathbf{u} \neq 0$ but $\oint_{C} \mathbf{u} \cdot d\mathbf{r} = 0$? | Question:
Consider the vector field $\vec{u}=(xy^2,x^2y,xyz^2)$
The curl of the vector field is $$\nabla \times\vec{u}=(xz^2,-yz^2,0)$$
Consider the line integral of $\vec{u}$ around the ellipse $C$ $x^2+4y^2=1, z=-1$.
With $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1, a=1, b=\frac{1}{2}$, the parameterisation gives $$\vec{r}=(x,y,z)=(\cos\theta,\tfrac{1}{2}\sin\theta,-1)$$
$$d\vec{r}=(-\sin\theta,\tfrac{1}{2}\cos\theta,0)\,d\theta$$
$$\vec{u}=(\tfrac{1}{4}\cos\theta \sin^2\theta,\tfrac{1}{2}\cos^2\theta \sin\theta,\tfrac{1}{2}\sin\theta \cos\theta) $$
$$\oint_{C} \vec{u} \cdot d\vec{r}=\frac{1}{4}\int^{2\pi}_0\sin\theta \cos\theta(\cos^2\theta -\sin^2\theta)\,d\theta=0$$
(I got the same result by working in Cartesian coordinates without parameterizing the curve.)
But this does not make sense because
$$(\nabla\times \vec{u})\cdot\hat{n}=\lim_{\delta S \to 0}\frac{1}{\delta S}\oint_{\delta C}\vec{u} \cdot d\vec{r}$$
so if a vector field is conservative, its curl should be zero.
Can someone please explain where my conceptual errors lie?
Answer: The path you have chosen is special, in a way, for this vector field: it is symmetric under $(x,y)\to(-x,-y)$. The fact that $z$ is constant on your path means that $u_z$ becomes irrelevant to the integral. What remains is $(xy^2,x^2y)$, which is odd in $x,y$. Thus, if the path is symmetric under $(x,y)\to(-x,-y)$ (as your centered ellipse is), the contribution from the positive half of the path is cancelled exactly by the negative half.
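This can be checked numerically; here is a quick sketch. (Note that the in-plane part $(xy^2, x^2y)$ is the gradient of $\tfrac12 x^2y^2$, so any loop at constant $z$ integrates to zero, while a loop along which $z$ varies picks up the non-vanishing curl components.)

```python
import numpy as np

def loop_integral(r, dr, n=100_000):
    """Crude closed line integral of u = (x y^2, x^2 y, x y z^2) for t in [0, 2*pi)."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    x, y, z = r(t)
    dx, dy, dz = dr(t)
    integrand = x * y**2 * dx + x**2 * y * dy + x * y * z**2 * dz
    return np.sum(integrand) * (2.0 * np.pi / n)

# The ellipse from the question (z = -1 constant): the integral vanishes.
ellipse = loop_integral(
    lambda t: (np.cos(t), 0.5 * np.sin(t), -np.ones_like(t)),
    lambda t: (-np.sin(t), 0.5 * np.cos(t), np.zeros_like(t)),
)

# A unit circle in the plane y = 1, where z varies: the integral is pi/4.
tilted = loop_integral(
    lambda t: (np.cos(t), np.ones_like(t), np.sin(t)),
    lambda t: (-np.sin(t), np.zeros_like(t), np.cos(t)),
)
print(ellipse, tilted)   # ~0.0  ~0.785
```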
But for a vector field to be conservative, the line integral must vanish for every closed path. Note, though, that at constant $z$ the in-plane part $(xy^2, x^2y)$ is the gradient of $\tfrac12 x^2y^2$, so any loop lying in a plane of constant $z$ also gives zero, symmetric or not; to get a non-zero value you must choose a closed path along which $z$ varies, so that the non-vanishing curl components $(xz^2, -yz^2, 0)$ can contribute. | {
"domain": "physics.stackexchange",
"id": 68545,
"tags": "differential-geometry, vector-fields, calculus, vortex, conservative-field"
} |
Updating the total when any of a set of spinners changes its value | Question: I'm new to jQuery and I need your help.
$('.spinner-input, #flight-class').change(function() {
var ap = parseInt($('#adult-passenger').val());
var sp = parseInt($('#student-passenger').val());
var cp = parseInt($('#child-passenger').val());
var bp = parseInt($('#baby-passenger').val());
var fc = $('#flight-class option:selected').text();
var totalCount = ap + sp + cp + bp;
$('#kisi-sayisi').val(totalCount + ' - ' + fc);
});
You can see my code in here
And working demo is here
So, here is the situation: there is nothing bad about this code; it's working normally when you click "number of people" in the form. You can see, if you click the minus or plus sign, that the form updates itself. But when I look at this code, I feel like I'm repeating myself.
When I tried to turn this code into DRY format, I did this:
$('.spinner-input, #flight-class').change(function() {
var passengerId = parseInt($(this).val());
});
Now I get the value of the input. I need to get the sum of the values, but I can't, because something is missing here.
How can I DRY this code?
Answer: You can list the four types of input spinners in an array.
$('.spinner-input, #flight-class').change(function() {
var ids = ['adult', 'student', 'child', 'baby'];
var totalCount = ids.reduce((prev, id) => parseInt($(`#${id}-passenger`).val()) + prev , 0);
var fc = $('#flight-class option:selected').text();
$('#kisi-sayisi').val(totalCount + ' - ' + fc);
}); | {
"domain": "codereview.stackexchange",
"id": 24769,
"tags": "javascript, beginner, jquery, event-handling"
} |
scipy.optimize.minimize throwing error | Question: I used minimize to optimize the ansatz parameters. I wrote the line like
result = minimize(cost, init_params, method="COBYLA", options={'maxiter':200})
and it is throwing this error: ValueError: Mismatching number of values and parameters. For partial binding please pass a dictionary of {parameter: value} pairs. Although I'm sure this is how maxiter is given, as shown in Qiskit notebooks.
For further information, this is my circuit
and I'm sure init_params has been given 9 initial parameters, one for each $R_y$ gate in the ansatz. So I'm wondering where I went wrong.
Answer: Depending on how you've created this circuit, it has either 0 or 1 parameter at most. 0 if you just perform a rotation of angle $1$, and 1 if you've created your gates using a parameter named "1". Create your circuit like this and it should work:
from qiskit.circuit import QuantumCircuit, Parameter
qc = QuantumCircuit(3, 3)
for i in range(3):
qc.ry(Parameter(f"$\\theta_{i}$"), i)
qc.cz(0, 1)
qc.cz(0, 2)
for i in range(3):
qc.ry(Parameter(f"$\\theta_{i + 3}$"), i)
qc.cz(1, 2)
qc.cz(0, 2)
for i in range(3):
qc.ry(Parameter(f"$\\theta_{i + 6}$"), i)
qc.draw("mpl")
This will give you the following result:
As a side note, you can check the number of parameters your circuit expects using the num_parameters attribute:
print(qc.num_parameters) # Should print 9 | {
"domain": "quantumcomputing.stackexchange",
"id": 4705,
"tags": "qiskit, quantum-circuit, optimization"
} |
send int16 via std_msg/string msg | Question:
Dear All
I am trying to send an int16 from the host PC to an Arduino target. I first converted the int16 to two chars and then packaged them into one std_msgs/String and sent it to the target. My problem is that when I send data which is < 256 and > 0, the high byte will be zero. In this case, the string length will not be two chars anymore, which causes the communication to fail. Does anyone know how to fix this problem? Thanks a lot.
I understand that my approach is a little awkward. I could use std_msgs/Int16 to do that job directly. I am doing this aiming to use the same string to carry multiple commands and data.
Originally posted by jayson ding on ROS Answers with karma: 29 on 2011-12-28
Post score: 1
Answer:
Encode the data as either a hex or decimal character string, then parse it on the other end. That is a lot more effort than just sending std_msgs/int16, but seems necessary for multiple commands using the same string message type.
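For example, a fixed-width hex encoding might look like the following sketch (illustrative Python, not code from the original thread; the Arduino side would parse the four characters back with, e.g., strtol):

```python
def encode_int16(value: int) -> str:
    """Pack a signed 16-bit integer into a fixed-width, 4-character hex string.
    Zero-padding keeps the message length constant, so values below 256 no
    longer shorten the string and break the receiver's framing."""
    if not -0x8000 <= value <= 0x7FFF:
        raise ValueError('value does not fit in an int16')
    return format(value & 0xFFFF, '04x')   # two's complement, always 4 chars

def decode_int16(text: str) -> int:
    """Inverse of encode_int16."""
    raw = int(text, 16)
    return raw - 0x10000 if raw >= 0x8000 else raw

print(encode_int16(5))        # '0005' -- constant length even for small values
print(decode_int16('fffe'))   # -2
```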
Maybe there is a better way to approach your design problem. Could you instead use multiple message types? Multiple topics?
Originally posted by joq with karma: 25443 on 2011-12-29
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by ahendrix on 2011-12-30:
Agreed; using multiple topics with the appropriate message types is the right way to do this. And you get demultiplexing for free rather than having to do it yourself. | {
"domain": "robotics.stackexchange",
"id": 7755,
"tags": "arduino, rosserial"
} |
Bouncing square in a box | Question: I wrote my first animation in Java FX and would like a code review.
I created StackPane with Rectangle. Rectangle starts to move inside StackPane to right-bottom direction and changes direction after hitting the StackPane bound.
Please review the code that moves Rectangle. Is it a correct implementation of a Java FX animation?
//...
new RectangleMover(pane, shape);
// ...
RectangleMover.java
public class RectangleMover extends Timer
{
private final StackPane pane;
private final Rectangle shape;
private boolean moveRight = true;
private boolean moveBottom = true;
public RectangleMover(StackPane pane, Rectangle shape)
{
super(true);
this.pane = pane;
this.shape = shape;
this.scheduleAtFixedRate(new Task(), 0, 10);
}
private static int boolToInt(boolean bool)
{
return bool ? 1 : -1;
}
private final class Task extends TimerTask
{
@Override
public void run()
{
shape.setLayoutX(shape.getLayoutX() + boolToInt(moveRight));
shape.setLayoutY(shape.getLayoutY() + boolToInt(moveBottom));
if (shape.getLayoutX() >= pane.getWidth() - shape.getWidth())
{
moveRight = false;
}
if (shape.getLayoutY() >= pane.getHeight() - shape.getHeight())
{
moveBottom = false;
}
if (shape.getLayoutX() <= 0)
{
moveRight = true;
}
if (shape.getLayoutY() <= 0)
{
moveBottom = true;
}
}
}
}
JFXTester.java (main class)
import javafx.application.Application;
import javafx.event.ActionEvent;
import javafx.event.EventHandler;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.layout.StackPane;
import javafx.scene.paint.Color;
import javafx.scene.shape.Rectangle;
import javafx.stage.Stage;
public class JFXTester extends Application
{
StackPane root;
@Override
public void start(Stage primaryStage)
{
root = new StackPane();
root.getChildren().add(createStartButton());
Scene scene = new Scene(root, 300, 250);
primaryStage.setTitle("Hello World!");
primaryStage.setScene(scene);
primaryStage.show();
}
private Rectangle createRectangle()
{
Rectangle r = new Rectangle();
r.setX(0);
r.setY(0);
r.setWidth(30);
r.setHeight(30);
r.setFill(new Color(0.8, 0.7, 0.6, 0.5));
return r;
}
private Button createStartButton()
{
final Button btn = new Button();
btn.setText("START!");
btn.setOnAction(new EventHandler<ActionEvent>()
{
@Override
public void handle(ActionEvent event)
{
btn.setVisible(false);
Rectangle r = createRectangle();
root.getChildren().add(r);
new RectangleMover(root, r);
}
});
return btn;
}
public static void main(String[] args)
{
launch(args);
}
}
How to run application (java 1.7.0_51-b13):
D:\.temporary\java>dir /a-d /b /s
D:\.temporary\java\jfxtester\JFXTester.java
D:\.temporary\java\jfxtester\RectangleMover.java
D:\.temporary\java>%JAVA_HOME%\bin\javac -cp .;%JRE_HOME_7%\lib\jfxrt.jar ./jfxtester/JFXTester.java
D:\.temporary\java>%JRE_HOME_7%\bin\java -cp .;%JRE_HOME_7%\lib\jfxrt.jar jfxtester.JFXTester
Answer: No, this is not the correct way to do it. I would recommend that you read the JavaFX Transitions documentation.
I personally could not get your code to work properly (Using Java 8, but that should not matter as Java is always backwards compatible). The rectangle showed on the screen, but it did not move at all.
I got this exception:
Exception in thread "Timer-0" java.lang.IllegalStateException: Not on FX application thread; currentThread = Timer-0
at com.sun.javafx.tk.Toolkit.checkFxUserThread(Unknown Source)
at com.sun.javafx.tk.quantum.QuantumToolkit.checkFxUserThread(Unknown Source)
at javafx.scene.Scene.addToDirtyList(Unknown Source)
at javafx.scene.Node.addToSceneDirtyList(Unknown Source)
at javafx.scene.Node.impl_markDirty(Unknown Source)
at javafx.scene.shape.Shape.impl_markDirty(Unknown Source)
at javafx.scene.Node.impl_transformsChanged(Unknown Source)
at javafx.scene.Node$13.invalidated(Unknown Source)
at javafx.beans.property.DoublePropertyBase.markInvalid(Unknown Source)
at javafx.beans.property.DoublePropertyBase.set(Unknown Source)
at javafx.scene.Node.setLayoutX(Unknown Source)
Once I fixed that problem to make it run on the JavaFX thread, the rectangle still did not move a pixel.
About the cleanliness of your code:
public class RectangleMover extends Timer
extends Timer is a very bad smell here. There's absolutely no reason to extend Timer. Use a timer as a field inside your class instead. Prefer composition over inheritance.
Speaking of Timer, you could say that it is more or less deprecated, it is often better to use a ScheduledExecutorService. However, if you do the animation properly, you will not need a Timer or ExecutorService at all.
I do like how you're using your boolToInt method (just a shame that it does not seem to be working). I also like how you have separated your methods overall.
I just don't like that your code does not seem to be working for me... | {
"domain": "codereview.stackexchange",
"id": 8403,
"tags": "java, animation, javafx"
} |
Lorentz Transformations for Polar coordinates or Inertial Frame in Polar Coordinates | Question: Do polar coordinates define an inertial frame or not?
Everywhere in GR, the authors of all the books talk about bringing the metric to diag(-1, 1, 1, 1), which shows that a local inertial frame exists at each point on the manifold. The coordinates in this local frame would be $(t, x, y, z)$.
But can't polar coordinates define an inertial frame? Can't the metric be brought to \begin{equation} \text{diag}(-1, 1, r^2, r^2 \sin^2(\theta))\end{equation} at every point, with the frame attached to the point still being inertial, just using spherical coordinates instead of Cartesian ones? Is this wrong?
In fact, books begin constructing inertial frames using perpendicular rods, and thus Cartesian coordinates. Can't we construct an inertial frame using polar coordinates?
Edit- after an answer
Lorentz transformations are transformations between different frames. Would a transformation from Cartesian coordinates to polar coordinates be called a Lorentz transformation, or just a coordinate transformation? Now, I have read that Lorentz transformations are linear. Transformations from Cartesian to polar, or from $(r, \theta, \phi) \to (r', \theta', \phi')$, would be non-linear.
So would they be Lorentz transformations?
Answer:
Can't we construct inertial frame using polar coordinates.
Yes, polar coordinates can be used as coordinates for an inertial frame. For example, when we solve the gravitational two-body problem in Newtonian mechanics we typically use polar coordinates to write $\mathbf F=m\mathbf a$. The equations then take a nicer form than in Cartesian components. Simply using polar coordinates in a non-rotating frame does not introduce any non-inertial forces, as does happen when one considers a rotating frame.
Courses in Special Relativity typically restrict their discussion of Lorentz transformations to Cartesian coordinates for simplicity; one is mainly interested in what happens in the direction of the relative motion between the two frames, and perpendicular to it, so Cartesian coordinates are natural, especially when one takes one of the Cartesian axes to be along the relative velocity. But the geometry of Minkowski spacetime is the same regardless of whether its metric is written in Cartesian coordinates, spherical polar coordinates, or any other coordinates describing a flat spacetime.
For details of the Lorentz transformation in cylindrical and spherical coordinates see this paper. | {
"domain": "physics.stackexchange",
"id": 74619,
"tags": "general-relativity, special-relativity, coordinate-systems, inertial-frames"
} |
Why is there no end user Application, yet? | Question: Machine learning has been hyped since the rise of deep neural networks.
It seems to me that you have to program in order to do machine learning.
But isn't the process of training and labeling data the same for every problem? Why isn't there an Excel-like application that enables thousands of non-experts to do machine learning?
Disclaimer: I am not a data scientist.
Answer: Here are two examples:
IBM Watson Analytics
Amazon ML use case
Preparing data for supervised learning requires skill. Not all data comes labeled and in a form ready to be used for solving the problem at hand.
Also, many more platforms/APIs are on the market now, but you certainly can't solve a problem with only one algorithm; much more is needed... Hope it helps. | {
"domain": "datascience.stackexchange",
"id": 678,
"tags": "machine-learning"
} |
What is the order of the genetic operations in NEAT? | Question: I was trying to implement NEAT, but I got stuck at the speciating of my clients/genomes.
What I got so far is:
the distance function implemented,
each genome can mutate nodes/connections,
two genomes can give birth to a new genome.
I've read a few papers, but none explicitly explains the order in which the steps are done. What is the order of the genetic operations in NEAT?
I know that for each generation, all the similar genomes will be put together into one species.
I have other questions related to NEAT.
Which neural networks are killed (or not) at each generation?
Who is being mutated and at what point?
I know that these are a lot of questions, but I would be very happy if someone could help me :)
Answer:
What is the order of the genetic operations in NEAT?
You start by evaluating all of the initial neural networks and compute their initial fitness.
Then you speciate,
kill off the worst neural networks,
mutate and crossover to produce offspring, and
evaluate again.
The order of events is described on page 109 onwards in the original NEAT paper.
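Schematically, one generation can be sketched like this (with toy stand-ins for the genetic operators, not NEAT's real ones):

```python
import random

def run_generation(population, fitness, speciate, mutate, crossover, survival=0.5):
    """One NEAT-style generation: evaluate -> speciate -> cull -> reproduce.
    `speciate` gets the genomes sorted best-first and returns a list of species."""
    ranked = sorted(population, key=fitness, reverse=True)      # 1. evaluate
    survivors = []
    for members in speciate(ranked):                            # 2. speciate
        keep = max(1, int(len(members) * survival))             # 3. cull the worst
        survivors.extend(members[:keep])
    offspring = []
    while len(offspring) < len(population):                     # 4. reproduce
        if len(survivors) > 1 and random.random() < 0.75:
            a, b = random.sample(survivors, 2)                  # crossover ("sexual")
            offspring.append(mutate(crossover(a, b)))
        else:                                                   # mutation only
            offspring.append(mutate(random.choice(survivors)))
    return offspring

# Toy demo: a "genome" is just a float and fitness is its magnitude.
pop = [random.uniform(-1.0, 1.0) for _ in range(20)]
for _ in range(5):
    pop = run_generation(
        pop,
        fitness=abs,
        speciate=lambda genomes: [genomes],          # single species, for brevity
        mutate=lambda g: g + random.gauss(0.0, 0.1),
        crossover=lambda a, b: (a + b) / 2.0,
    )
print(len(pop))  # population size is preserved: 20
```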
Which neural networks are killed (or not) at each generation?
The neural networks with the worst performance are killed off after speciation. None of the neural networks survive - the entire population is replaced with the offspring of the nets remaining after the culling stage. That said, you can implement elitism, where you keep some small portion of the best-performing nets and carry them over to the next generation without mutating them, but that is optional.
Who is being mutated and at what point?
At the end of each generation, after speciation and culling. To produce offspring, some of the remaining nets are subjected to mutation (think asexual reproduction - like single-celled organisms - but the offspring is mutated so that it differs from the parent). The rest are subjected to crossover in random pairs, so this would be the equivalent of sexual reproduction where you need two parents.
Hope that helps. | {
"domain": "ai.stackexchange",
"id": 576,
"tags": "evolutionary-algorithms, neat, neuroevolution"
} |
Nuclear Physics Question | Question: How could I calculate the chance of a proton actually joining the nucleus and 2 protons and 3 and so on? Assuming a large number of protons are fired at a substance?
Answer: What you're looking for is the absorption cross section $\sigma$ of the target. This quantity has units of area, and is typically measured in barns (1 barn = 1 b = $10^{-28}\ \mathrm{m}^2$). There are many such cross-sections one can define, for any particle interaction you might care to name. If you have a beam of protons with a flux of $f$ (i.e., $f$ protons per area per time), and you send this beam at a target for a time $\Delta t$, then the expected number of absorptions is
$$
\langle N \rangle = f \sigma \Delta t.
$$
Calculating these cross sections from first principles is basically impossible (but then, that's true of most calculations in nuclear physics.) Instead, they must be experimentally measured, or looked up in the literature. Note that the cross section usually depends on the energy of the incoming protons; it also depends on the target, so once the target has absorbed one proton, the cross-section to absorb a second proton will be different since the target has changed.
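To get a feel for the numbers, here is the formula with entirely hypothetical values for the flux and cross section:

```python
# Expected number of absorptions per target nucleus: <N> = f * sigma * dt.
BARN = 1e-28  # m^2

flux = 1e16          # protons per m^2 per second (hypothetical beam)
sigma = 0.1 * BARN   # assumed proton-capture cross section, 0.1 b
dt = 3600.0          # one hour of beam time

expected_absorptions = flux * sigma * dt
print(expected_absorptions)   # ~3.6e-10 per nucleus: capture is a rare event
```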
Finally, note that proton capture is relatively rare compared to the more common neutron capture, since the proton and the target are both positively charged and tend to repel each other. Cross sections for scattering of protons off of nuclei can be defined in much the same way. There may not be a huge amount of data out there for proton capture cross sections, compared to either proton scattering or neutron capture. | {
"domain": "physics.stackexchange",
"id": 59422,
"tags": "particle-physics, nuclear-physics"
} |
Why are GRU and LSTM better than standard RNNs? | Question: It seems that older RNNs have a limitation for their use cases and have been outperformed by other recurrent architectures, such as the LSTM and GRU.
Answer: These newer RNNs (LSTMs and GRUs) have greater memory control, allowing previous values to persist or to be reset as necessary for many sequences of steps, avoiding "gradient decay" or eventual degradation of the values passed from step to step. LSTM and GRU networks make this memory control possible with memory blocks and structures called "gates" that pass or reset values as appropriate. | {
"domain": "ai.stackexchange",
"id": 1485,
"tags": "comparison, recurrent-neural-networks, long-short-term-memory, gated-recurrent-unit"
} |
Why do functional languages disallow reassignment of local variables? | Question: Fair warning: I don't actually know a functional language so I'm doing all the pseudocode in Python
I'm trying to understand why functional languages disallow variable reassignment, e.g. x = x + 1. Referential transparency, pure functions, and the dangers of side effects are all mentioned, but the examples tend to go for the low-hanging fruit of functions that depend on mutable globals, which are also discouraged in imperative languages.
My question involves variables created and mutated within the function. For example:
def numsum1(n):
sum = 0
i = 1
while i <= n:
sum = sum + i
i = i + 1
return sum
The functional way of doing this seems to be tail recursion, where the updated sum and i are passed from function call to function call. I know that there are existing higher-order functions for this, but I think this illustrates the similarity to numsum1 more plainly:
def numsum2(n): return numsumstep(0, 1, n)
def numsumstep(sum, i, n):
if i <= n:
return numsumstep(sum + i, i + 1, n)
else:
return sum
numsum1 and numsum2 do the exact same thing (with tail call optimization) and are both referentially transparent. I do see why numsum1 is internally referentially opaque; the expressions i + 1 and sum + i change in value with each iteration and thus cannot be replaced by a constant value. But why does that matter if numsum1 itself is referentially transparent? Are there examples of functions that become referentially opaque solely because of reassigning local variables?
Answer: In a pure functional programming language, there is no real notion of time at all. So, saying that a variable x has value a at one point and then b later simply doesn't make any sense – it's like asking a character in a painting why she always stares in the same direction.
The advantage of having no time is that you never† need to worry about the order in which computations happen. If a variable is in scope then it also has the correct value, i.e. the value it has been assigned. (Which assignment may actually be “after” the computation in which it is needed – definitions can be reordered at will.)
Whereas in an imperative language – well, consider this program:
def numsum1(n):
sum = 0
i = 1
while i <= n:
sum = sum + i
i = i + 1
midterm = sum
while i >= 0:
sum = sum - i
i = i - 1
return (midterm, i)
If for some reason you need to refactor and pull the midterm definition behind the second loop, overlooking that it actually mutates sum again, then you would get the wrong result.
Now, you might well argue that this is defeated if you need to use recursion to basically fake mutation. Isn't there just as much, or even more, potential for mistakes if you have a recursive call using a parameter still called x that is effectively the same variable anyway?
– Not quite, because outside of the recursive calls the variable is guaranteed to stay the same. The refactoring problem with the above example wouldn't happen in a functional language.
Furthermore, as Odalrick already wrote, recursion isn't actually what's normally used to replace loops in functional languages. The idiomatic Haskell version of your program is
import Data.List (foldl')
numsum :: Int -> Int
numsum n = sum [1..n]
...or, using more general-purpose tools,
numsum n = foldl' (+) 0 . take n $ iterate (+1) 1
†That's a bit of an exaggeration. Of course, you do sometimes need to take time into account even in a functional language. Obviously, if it runs somehow interactively (IO monad in Haskell), then those parts are subject to latency considerations. And even for completely pure computations, one side effect that you can't possibly avoid is memory consumption. And that's indeed the one thing that Haskell truely isn't good at: it's really easy to write code that typechecks, works, is correct, but takes gigabytes of memory (when a few kilobytes should have been enough) because some thunks are never garbage-collected. | {
"domain": "cs.stackexchange",
"id": 18864,
"tags": "functional-programming, imperative-programming"
} |
Do all objects in a system need to have the same acceleration? | Question: What is the definition of a system? Could multiple objects accelerating at different magnitudes and directions still be considered a system?
Answer:
Could multiple objects accelerating at different magnitudes and directions still be considered a system?
Yes, definitely. A system is whatever set of objects you want to group together. Whether or not this is a useful way to group objects depends on the problem you are trying to solve. | {
"domain": "physics.stackexchange",
"id": 90031,
"tags": "newtonian-mechanics, classical-mechanics, linear-systems"
} |
Identity 3.51 in Peskin/Schroeder | Question: This identity is used when solving the Dirac equation in Peskin & Schroeder and other texts:
$$(p\cdot \sigma)(p\cdot \bar \sigma)=p^2=m^2 \tag{3.51},$$
and although it seems simple enough I cannot for the life of me arrive at the answer. This is as close as I can get:
$$(p\cdot \sigma)(p\cdot \bar \sigma)=(p^0+p^i\sigma_i)(p^0-p^j\sigma_j)=(p^0)^2-p^0p^j\sigma_j+p^i\sigma_ip^0-p^ip^j\sigma_i\sigma_j, \tag{1}$$
From here the second and third term cancel, and we use the standard Pauli matrix identity on the fourth term:
$$\sigma_i\sigma_j=\delta_{ij}\Bbb I+i\epsilon_{ijk}\sigma^k, \tag{2}$$
which gives:
$$(p^0)^2-p^ip^j\sigma_i\sigma_j=E^2-p^ip^j(\delta_{ij}\Bbb I+i\epsilon_{ijk}\sigma^k)=E^2-\vec p^2+ip^ip^j\epsilon_{ijk}\sigma^k=m^2+ip^ip^j\epsilon_{ijk}\sigma^k \tag{3}.$$
From here I need the last term to vanish in order to obtain the correct answer, but I cannot see any way to do so. Could the solutions perhaps be:
$$i\epsilon_{kij}p^ip^j\sigma^k=i(\vec p\times \vec p)_k\sigma^k=0? \tag{4}$$
Unfortunately I thought of this as I was typing the question, but just in case this isn't valid and/or this question is helpful to other people I will still post it.
Answer: The product of a term that's anti-symmetric in two indices and a term that's symmetric in two indices is always zero. If $A_{ij} = -A_{ji}$ and $S^{ij} = S^{ji}$, then
$$ A_{ij} S^{ij} = - A_{ji}S^{ji},$$
but we can just rename the indices we're summing over by exchanging $i\leftrightarrow j$ and so $A_{ij} S^{ij} = -A_{ij} S^{ij} = 0$.
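A quick numerical illustration of this (random symmetric and antisymmetric matrices; the values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(3, 3))
A = M - M.T                      # antisymmetric: A_ij = -A_ji
S = M + M.T                      # symmetric:     S_ij =  S_ji

contraction = np.einsum('ij,ij->', A, S)   # the contraction A_ij S^ij
print(contraction)                          # 0 up to rounding error

p = rng.normal(size=3)
print(np.cross(p, p))                       # the special case: [0. 0. 0.]
```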
The cross product of a vector with itself being zero is just a special case of that. | {
"domain": "physics.stackexchange",
"id": 75546,
"tags": "dirac-equation"
} |
Encode non-negative integer to a string of ascii letters and digits | Question: I want to encode a non-negative integer to ASCII letters and digits.
I wrote the below code for that; it works, but I assume it is too immature to use in production.
#! /usr/bin/env python3
from string import digits, ascii_letters
def encode(number: int, pool: str) -> str:
"""Encode a non-negative integer as string."""
result = ''
pool_size = len(pool)
while number:
number, remainder = divmod(number, pool_size)
result += pool[remainder]
return result
def main():
size = (1 << (8 * 2)) - 1
codes = set()
for i in range(size):
print(i, code := encode(i, digits+ascii_letters))
codes.add(code)
print('Unique:', len(codes) == size)
if __name__ == '__main__':
main()
I also tried the encoding functions in the base64 library, to no avail. Most of them will create strings containing characters other than the desired ones at some point, and b16encode() has too small an alphabet, resulting in too-long codes (no lower-case).
If you can provide a better solution to convert an arbitrary non-negative integer into a string of characters exclusively from string.ascii_letters + string.digits, that would be great.
Update
Project on GitHub
Answer: Your current encode doesn't have validation, and won't produce meaningful output for numbers below 1.
size and the way you use it in a range don't entirely make sense; I would sooner expect size = 1 << (8 * 2) so that 65535 is included. Also your range should start at 1 and not 0, since 0 produces an empty string. (Or modify your encode so that 0 does indeed produce a non-empty string.)
Given your goal of
easy-to-remember and easy-to-type PINs
I don't think that digits, ascii_letters constitute an appropriate pool. Disambiguation of zero and capital O, lower-case L and upper-case I, etc. is problematic. Best to write your own pool literal that avoids these.
Make a judgement call as to which is easier to remember: a shorter string that includes mixed case, or a longer string with one case only. With a pool of 54 upper-case, lower-case and numeric characters, and an ID of at most 16 bits, the maximum encoded length is
$$
\left\lceil \frac{16 \log 2}{\log 54} \right\rceil = 3
$$
In the other direction, if you want to limit the number of encoded characters to 3, the minimum pool size is
$$
\left\lceil 2^{16/3} \right\rceil = 41
$$
Suggested
import math
from typing import Iterator
LEGIBLE = (
'ABCDEFGHJKLMNPRTUVWXYZ'
'abcdefghijkmnopqrstuvwxyz'
'2346789'
)
# One above the maximum ID.
ID_LIMIT = 1 << (8 * 2)
def encode_each(number: int, pool: str = LEGIBLE) -> Iterator[str]:
if number < 0:
raise ValueError(f'{number} is non-encodable')
pool_size = len(pool)
while True:
number, remainder = divmod(number, pool_size)
yield pool[remainder]
if not number:
break
def encode(number: int, pool: str = LEGIBLE) -> str:
return ''.join(encode_each(number, pool))
def all_codes(size: int = ID_LIMIT) -> Iterator[str]:
for i in range(size):
yield encode(i)
def test() -> None:
codes = tuple(all_codes())
assert len(codes) == 65536
assert len(set(codes)) == 65536
n_symbols = math.ceil(math.log(65536) / math.log(len(LEGIBLE)))
for code in codes:
assert 0 < len(code) <= n_symbols
def demo() -> None:
ids = 0, 1, 100, 1000, 60000, 65535
for i in ids:
print(i, '->', encode(i))
if __name__ == '__main__':
test()
demo()
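If you ever need to map a PIN back to its ID, the inverse is straightforward because encode emits the least-significant symbol first; a self-contained sketch (repeating the pool and encoder so it runs on its own):

```python
LEGIBLE = (
    'ABCDEFGHJKLMNPRTUVWXYZ'
    'abcdefghijkmnopqrstuvwxyz'
    '2346789'
)

def encode(number: int, pool: str = LEGIBLE) -> str:
    """Same scheme as above: emit the least-significant symbol first."""
    if number < 0:
        raise ValueError(f'{number} is non-encodable')
    result = ''
    while True:
        number, remainder = divmod(number, len(pool))
        result += pool[remainder]
        if not number:
            return result

def decode(code: str, pool: str = LEGIBLE) -> int:
    """Inverse of encode: fold the symbols back up, most significant first."""
    number = 0
    for symbol in reversed(code):
        number = number * len(pool) + pool.index(symbol)
    return number

# Round-trips over the full 16-bit ID range.
assert all(decode(encode(i)) == i for i in range(1 << 16))
```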
Output
0 -> A
1 -> B
100 -> zB
1000 -> gW
60000 -> GjY
65535 -> mda | {
"domain": "codereview.stackexchange",
"id": 43008,
"tags": "python, python-3.x, formatting, integer"
} |
Rademacher Complexity of the Composition with an Indicator | Question: Consider the statistical learning setting where you have an arbitrary hypothesis space $\mathcal{H}$, a data space $\mathcal{Z}$, and a bounded loss function $\ell: \mathcal{H}\times \mathcal{Z} \rightarrow [0,1]$. Further, for $c\in(0,1)$, let $\mathcal{F}_c$ be the function class defined by
\begin{align}
\mathcal{F}_c := \{ z \mapsto \mathbb{I}\{\ell(h,z) \leq c\}: h \in \mathcal{H}\}.
\end{align}
Question. Is it in any way possible to relate the Rademacher complexity of the function class $\mathcal{F}_c$, to that of $\ell \circ \mathcal{H}:= \{z\mapsto \ell(h,z): h \in \mathcal{H}\}$?
My goal is to show that when the complexity of the latter class is small, so is the complexity of the former.
Rademacher Complexity. The Rademacher complexity of a function class $\mathcal{F}$ is defined as
\begin{align}
\mathfrak{R}_n(\mathcal{F}) := \mathbb{E}\left[\sup_{f\in \mathcal{F}}\frac{1}{n} \sum_{i=1}^n \sigma_i f(z_i)\right], \quad n \in \mathbb{N},
\end{align}
where $(\sigma_i,z_i)$ are i.i.d. random variables with $(\sigma_i)$ having a Rademacher distribution.
Failed Attempt. There are results on the Rademacher complexity of the composition of functions, but these typically rely on some Lipschitzness properties, which do not hold for our function class $\mathcal{F}_c$ since we compose with an indicator function.
Answer: In general, composing a threshold function with a function class can arbitrarily increase the Rademacher complexity (so the answer to your question is negative). Let $\Omega$ be a finite set, $|\Omega|=N$, and let $F=[-a,a]^\Omega$ be the set of all functions from $\Omega$ to $[-a,a]$.
When $N\gg n$, the Rademacher complexity (either empirical or expected) of $F$ will be $\Theta(a)$. Let $F'$ be $F$ composed with the sign function; now $F'=\{-1,1\}^\Omega$ and its Rademacher complexity in this regime is $\Theta(1)$. Since $a>0$ can be taken arbitrarily small, this shows that the multiplicative gap can be arbitrarily large. | {
"domain": "cstheory.stackexchange",
"id": 5106,
"tags": "complexity-classes, machine-learning"
} |
Can the Fermi energy lie into the band gap? | Question: Fermi energy $\rightarrow$ highest energy level filled at $T=0K$
Fermi level $\rightarrow$ Energy level where we have a chance of $50\%$ to find an electron.
Now in my course text they say that for an insulator, the Fermi energy is inside a band gap. But since this zone is forbidden, how can this be the Fermi energy? If the Fermi energy is the highest occupied level, how can it lie inside a forbidden zone?
So I assume that it is the Fermi level that will lie inside the band gap? But then, if this is a forbidden zone, how can you have $50\%$ chance of finding an electron if this zone is forbidden?
My last question, if the Fermi level is inside the band gap, does the Fermi energy lie in the band below the band gap?
Answer: Part of the confusion arises because (not necessarily different) authors use the words Fermi level and Fermi energy inconsistently. Some use them interchangeably (usually in the first sense), some use them the way you defined them (it is also common to use the term chemical potential instead of Fermi level).
In your nomenclature:
In a prototypical semiconductor (one conduction band and one valence band), at $T = 0$ the Fermi level lies exactly in the center of the band gap (and moves around slightly when you heat the system). The Fermi level defined this way is the chemical potential of the electrons.
The Fermi energy will be the highest energy in the valence band (as that is the highest energy level occupied at $T = 0$).
But don't let authors confuse you by their inconsistent use of the terms. In calculations the Fermi energy usually does not appear (it is rather a qualitative feature of Fermionic systems), the Fermi level does appear, as the chemical potential in the Fermi distribution.
On to the question of how the probability of finding an electron can be 50% if there are no states there: this statement is only true if the density of states is constant. The actual probability of finding an electron at energy $E$ is the Fermi distribution (which has the value $0.5$ at the Fermi level) multiplied by the density of states. If you now turn on temperature, the number of particles has to be conserved; since the Fermi distribution is symmetric, the Fermi level has to be in the middle of the band gap at $T = 0$ to ensure this.
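The symmetry claim about the Fermi distribution is easy to check directly. A small sketch (function name and example values are mine) of $f(E) = 1/(e^{(E-\mu)/k_BT}+1)$, which equals $1/2$ at $E = \mu$ regardless of temperature:

```python
import math

def fermi_dirac(E, mu, kT):
    """Occupation probability of a state at energy E (E, mu, kT in the same units)."""
    return 1.0 / (math.exp((E - mu) / kT) + 1.0)

mu = 0.5  # chemical potential ("Fermi level"), e.g. in eV -- example value
for kT in (0.01, 0.025, 0.1):
    print(kT, fermi_dirac(mu, mu, kT))  # always 0.5 at E = mu
```

Multiplying this by the density of states (zero inside the gap) gives the actual probability of finding an electron, which is why the 50% value at the Fermi level causes no contradiction.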
Properly formulated, in a semi-conductor the Fermi energy will lie on top of the valence band (this language is necessary, as saying the Fermi energy lies in the band usually means that the system in question is a conductor). There again is a problem with inconsistent language. When saying the Fermi energy lies in the band gap, what is meant is this: It lies at the highest possible energy of the band (or in my terms on top of the band). | {
"domain": "physics.stackexchange",
"id": 22418,
"tags": "semiconductor-physics, conductors, insulators"
} |
Numerical solution for free quantum particle | Question: I'm struggling with finding another approach for numerically simulating the free quantum particle (with cyclic boundary conditions). I have made a short document on this:
https://github.com/Ch3shireDev/PyQuanta/blob/master/PyQuanta.ipynb
In short - the standard, working approach to simulating the Psi function is to apply the evolution as an exponential of the wave numbers multiplying Psi in its frequency representation. It looks like this:
$$\psi(t+dt) = ifft(exp(-ik^2 dt/2)\cdot fft(\psi))$$
Which is great - the $\psi$ function is complex, the Fourier transform is great for differentiating cyclic complex functions, etc. The problem is - I want to describe the $\psi$ function as two real functions - ultimately as $Re^{i\theta}$, but for now only as $\psi = f+ig$. Sadly, I don't have any working solution that doesn't blow the functions up to infinity, and none that gives similar results. I'm desperate and I'm asking for help. Could you show me any good working methods for numerical differentiation of functions?
Answer: Define
$$
\psi(x, t) = \int_{-\infty}^{+\infty}{\rm d}k~ e^{2\pi i k x}\hat{\psi}_k(t)
$$
such that the equation
$$
i\partial_t\psi(x, t) = -\frac{1}{2}\partial_x^2\psi(x,t)
$$
becomes
$$
\int_{-\infty}^{+\infty}{\rm d}k~\left[i\partial_t \hat{\psi}_k(t) + \frac{1}{2}(2\pi i k)^2 \hat{\psi}_k(t)\right] e^{2\pi i k x} = 0
$$
from this you can conclude that
$$
\partial_t\hat{\psi}_k(t) = -2i \pi^2k^2 \hat{\psi}_k(t) \tag{1}\label{1}
$$
with solution
$$
\hat{\psi}_k(t) = e^{-2i\pi^2 k^2 t}\hat{\psi}_k(0) \tag{2}\label{2}
$$
So to calculate $\psi(x,t)$ use $\ref{2}$ and to calculate its derivative use $\ref{1}$. Below there's small code to do this
import numpy as np

def propagate(dt, x, psi):
    # evolve one step in Fourier space: psi_k(t+dt) = exp(-2i pi^2 k^2 dt) psi_k(t), eq. (2)
    k = np.fft.fftfreq(x.shape[-1], x[1] - x[0])
    f1 = np.fft.fft(psi)
    f2 = np.exp(-2j * (np.pi * k) ** 2 * dt)
    psik = f1 * f2
    psi = np.fft.ifft(psik)
    # time derivative from eq. (1): d/dt psi_k = -2i pi^2 k^2 psi_k
    dpsi = np.fft.ifft(-2j * (np.pi * k) ** 2 * psik)
    return psi, dpsi
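Regarding the wish to store $\psi$ as two real functions $f + ig$: that representation is fully compatible with this spectral method - keep two real arrays between steps and recombine them only for the FFT. A standalone sketch (my own variable names and grid parameters, using the question's convention $\hat\psi \leftarrow e^{-ik^2\,dt/2}\hat\psi$ with angular wavenumber $k$):

```python
import numpy as np

def step_real(f, g, k2, dt):
    # advance psi = f + i*g by one step; only two real arrays persist between calls
    psik = np.fft.fft(f + 1j * g)
    psik *= np.exp(-0.5j * k2 * dt)
    psi = np.fft.ifft(psik)
    return psi.real, psi.imag

n, L = 256, 20.0                      # example grid size and domain length
x = np.linspace(-0.5 * L, 0.5 * L, n, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(n, x[1] - x[0])   # angular wavenumbers
f, g = np.exp(-x ** 2), np.zeros_like(x)           # Gaussian at t = 0, purely real
norm0 = np.sum(f ** 2 + g ** 2)
for _ in range(100):
    f, g = step_real(f, g, k ** 2, 0.01)
# the evolution is unitary, so the norm is conserved to machine precision
print(abs(np.sum(f ** 2 + g ** 2) - norm0))
```

The whole state lives in the two real arrays f and g; the complex array exists only transiently inside each step, so nothing blows up.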
You can test it with
import matplotlib.pyplot as plt

L, n = 10.0, 512  # domain length and grid size (example values; not given above)
x = np.linspace(-0.5 * L, 0.5 * L, num=n)
s = 0.05 * L
psi0 = np.exp(-0.5 * (x / s) ** 2) / np.sqrt(2 * np.pi * s * s) + 0j
# propagation
psi1, dpsi1 = propagate(0.1, x, psi0)
plt.plot(x, np.abs(psi0), 'b-')
plt.plot(x, np.abs(psi1), 'r-')
plt.show() | {
"domain": "physics.stackexchange",
"id": 40098,
"tags": "schroedinger-equation, computational-physics, simulations, fourier-transform, software"
} |
Understanding electric field and potential inside a half-connected wire | Question: Let's say we have a 9 V battery and a wire as shown in the image below:
Let's assume H is the reference for measuring potential. I know the potential in F is 9 V, and I know the electric field in F and H are undefined (this is a controversial thing, I want to avoid that, I don't care for the electric field on F and H). I want to know what is the electric field and electric potential on A, B, C, D, E, G and I, and most importantly, why they assume those values.
I think the electric field is zero on A, B, C, D and E, because otherwise there would be current, which would be odd. I also think the electric potential is constant at those points (a consequence of what I just said), and equal to the potential at F (which is the battery value, 9 V).
I have no clue for the other points though.
If I'm correct for points A to E, I still have questions regarding them. My thoughts don't really explain why the electric field is zero; it's just that it must be zero, otherwise something absurd happens. But I can't believe nature "thinks": "oh hey, the field must be zero there, otherwise something absurd will happen". I want a "true" explanation.
I've read this, this and this questions, and learned a lot from them, but in my situation I have an open circuit, and I still can't understand it.
Answer:
I think the electric field is zero on A, B, C, D and E, because otherwise there would be current, which would be odd
And you are totally right for an electrostatic system (with no current). Instead of explaining it by "this would be odd", let's have a look at what happens at the instant you add the wire to the battery pole:
Before the wire touches the pole, electrons are spread evenly in the wire.
At the instant the wire touches, the large negative charge at the pole suddenly repels the electrons. They move towards the other end. By now there is an electric field $E=\frac{F}{q}$, caused by the battery, moving the electrons.
After a very short time, enough electrons have moved to the other end. Now they repel each other just as much as the repulsion from the pole. In other words, an equal electric field has been set up at the other end, cancelling the battery's field. The net field at all points is now zero, and everything is still (electrostatic).
I also think the electric potential is constant at those points
Also true. It's like putting two balls on the same shelf. They will not roll around on this shelf; they would only roll if one shelf was lower than the other. For the charges, there is only an electric potential difference if they "want to move there".
So with reference to point E or D or C or any other point within the wire, there is no electric potential difference, since no area has less repulsion than another. If that were the case, it would very quickly be evened out by electrons moving around until that potential has been used up (like the balls rolling until they are as far down as possible, lowering the potential as much as possible).
I have no clue for the other points though.
If the wire end is very close to an area of lower potential, which seems to be the case, then this acts like a capacitor. The electric field in between the capacitor plates is non-zero. Put a charge there and it will be simultaneously repelled from one side and attracted to the other, and will move. This is the case for point I.
About point G. It is just as for point I, so it depends on the distance between the two plates. But as you say, those plates are in reality a battery and not simply two separated poles. If point G is inside the battery, then it might experience the so-called electromotive force. This is the work done by the battery on the charges at the lower pole to bring them back to the upper pole, ready to move again through the circuit. This is a "potential" in the sense that they are added energy. G must be very specifically located to say any more, though.
and I know the electric field in F and H are undefined (this is a controversial thing, I want to avoid that, I don't care for the electric field on F and H)
Interesting, if you have a link, please provide it :) | {
"domain": "physics.stackexchange",
"id": 22254,
"tags": "electrostatics, electricity, potential"
} |
Number Guessing Game Version 1 | Question: I made a really basic number guessing game based off a list of recommended projects, and I want to see if it follows the normal coding conventions before I aim to improve it (with lives, hints, etc.). This is also my first attempt at loops. I am wondering if this follows the normal coding conventions, whether I was unnecessarily repetitive with the code, and whether there is anything I can do to improve it.
#Guessing Game!
#The code has the user guess a number from 1-100
#if they get it right, they are congratulated and asked to play again
#if they get it wrong, they must keep guessing
#Game code
import random as r

play = True
#rnum = The random number
#gnum = The guessed number
while play:
    x = r.randrange(1,100,1)
    rnum = str(x)
    #this exists for testing
    print(rnum)
    gnum = input("Guess the number!")
    if gnum == rnum:
        print("Good Job! Play Again?")
        play_again = input("y/n")
        if play_again == "y":
            continue
        else:
            break
    if gnum != rnum:
        gnum = input("Nope, guess again!")
        if gnum == rnum:
            print("Good Job! Play Again?")
            play_again = input("y/n")
            if play_again == "y":
                continue
            else:
                break
print("Thank you for playing!")
Answer: Broadly: whereas the game does work, it isn't very fun, since no feedback is given to the user (e.g. that the number guessed is too high or too low, or that you're close or far). The only way to win is to try every single number from 1 to 100.
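One way to add that feedback is a small helper of my own devising (not part of the original code); keeping it a pure function also makes it easy to unit-test:

```python
def check_guess(guess: int, target: int) -> str:
    """Return a hint so the player can home in instead of brute-forcing all 100 numbers."""
    if guess < target:
        return "Too low, guess again!"
    if guess > target:
        return "Too high, guess again!"
    return "Good Job!"

# inside the game loop, after validating the input:
#     print(check_guess(int(gnum), rnum))
```

With hints like these, a player can binary-search and win in at most seven guesses.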
Your play loop variable can be deleted since you can break out of the loop directly when needed.
r is not a very good alias for the random module. Either keep it as random, or use import from syntax.
randrange is not the right function to call; since you care about an inclusive range call randint instead. As it stands, your code doesn't do what you say it does since the maximum will only be 99, not 100.
You have repeated code that should be centralised - your "play again" section.
Rather than casting your rnum as a string, you should do the opposite and validate and cast the user input to an integer.
Suggested
"""
Guessing Game!
The code has the user guess a number from 1-100
if they get it right, they are congratulated and asked to play again
if they get it wrong, they must keep guessing
"""
from random import randint
while True:
# rnum = The random number
rnum = randint(1, 100)
while True:
# gnum = The guessed number
gnum = input("Guess the number! ")
if not gnum.isnumeric():
print("Invalid integer")
elif int(gnum) == rnum:
break
else:
print("Nope, guess again!")
print("Good Job! Play Again?")
play_again = input("y/n")
if play_again != "y":
break
print("Thank you for playing!") | {
"domain": "codereview.stackexchange",
"id": 42960,
"tags": "python, number-guessing-game"
} |
Area spanned by unitarity triangle of CKM matrix | Question: I recently read an article in which they mentioned that, using the unitarity relation between the 1st and 3rd columns of the CKM matrix, one can easily show that the area spanned by the unitarity triangle is given by $2A = |\mathrm{Im}(V_{ub}V^*_{ud}V^*_{cb}V_{cd})|$. So I tried to show that myself. I have absolutely no idea if this is gonna work, but here is my attempt:
I started with the unitarity relation of the 1st and 3rd columns:
$$V^*_{ub}V^{\,}_{ud} + V^*_{cb}V^{\,}_{cd} + V^*_{tb}V^{\,}_{td} = 0$$
This relation can be represented as a triangle in the complex plane, called the unitarity triangle. I then made the following two vectors:
$$\vec{a} = \begin{pmatrix}\mathrm{Re}(V^*_{ub}V_{ud})\\\mathrm{Im}(V^*_{ub}V_{ud}) \\ 0\end{pmatrix},
\vec{b} = \begin{pmatrix}\mathrm{Re}(V^*_{cb}V_{cd})\\\mathrm{Im}(V^*_{cb}V_{cd}) \\ 0\end{pmatrix}$$
Then taking the cross product of these two vectors should yield the mentioned area in the article:
\begin{align}2A&=|\vec{a}\times \vec{b}|= |\begin{pmatrix}0\\ 0 \\ \mathrm{Re}(V^*_{ub}V_{ud})\mathrm{Im}(V^*_{cb}V_{cd})-\mathrm{Im}(V^*_{ub}V_{ud})\mathrm{Re}(V^*_{cb}V_{cd})\end{pmatrix}|\\[10pt]
&= |\mathrm{Re}(V^*_{ub}V_{ud})\mathrm{Im}(V^*_{cb}V_{cd})-\mathrm{Im}(V^*_{ub}V_{ud})\mathrm{Re}(V^*_{cb}V_{cd})|\end{align}
Now I don't see how that equals $|\mathrm{Im}(V_{ub}V^*_{ud}V^*_{cb}V_{cd})|$.
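One can at least check numerically that the two expressions agree; a throwaway sketch with arbitrary complex stand-ins for the CKM products (not actual CKM values):

```python
import numpy as np

rng = np.random.default_rng(1)
# arbitrary complex numbers standing in for the triangle's sides (not real CKM data)
a = rng.normal() + 1j * rng.normal()   # stands for V_ub^* V_ud
b = rng.normal() + 1j * rng.normal()   # stands for V_cb^* V_cd

cross = a.real * b.imag - a.imag * b.real   # z-component of the 2D cross product
invariant = (np.conj(a) * b).imag           # Im(V_ub V_ud^* V_cb^* V_cd)
print(abs(cross), abs(invariant))           # identical up to rounding
```

So the equality holds numerically; the question is how to see it algebraically.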
Answer: You use that $\text{Im}(ab) = \text{Re}(a)\text{Im}(b) + \text{Re}(b)\text{Im}(a)$, for
$$a = V_{ub}V^*_{ud}\qquad\qquad b=V^*_{cb}V_{cd}$$
to get
\begin{align*}
\text{Im}(V_{ub}V^*_{ud}V^*_{cb}V_{cd})&=\text{Re}(V_{ub}V^*_{ud})\text{Im}(V^*_{cb}V_{cd}) + \text{Re}(V^*_{cb}V_{cd})\text{Im}(V_{ub}V^*_{ud}) \\
&=\text{Re}(V_{ub}^*V_{ud})\text{Im}(V^*_{cb}V_{cd}) - \text{Re}(V^*_{cb}V_{cd})\text{Im}(V_{ub}^*V_{ud})
\end{align*}
Where in the second equation I used
\begin{align*}
\text{Re}(a^*) = \text{Re}(a) \qquad\qquad \text{Im}(a^*) = -\text{Im}(a)
\end{align*} | {
"domain": "physics.stackexchange",
"id": 20570,
"tags": "standard-model"
} |
Is the law of conservation of energy broken by the side-effects of heating and cooling a liquid that evaporates and condenses? | Question: Let's say there's a puddle of water on the ground. I use a magical device to give it enough thermal energy to vaporize into water vapor. The water vapor floats up into the sky. I then use the magical device to absorb the same amount of thermal energy I previously gave it. The water vapor then condenses to water and falls back on the ground and forms the same puddle of water on the ground.
My device absorbed and gave the same amount of energy, therefore the net energy in the system should be the same. However, it seems like the system's energy should increase from the water vapor floating up into the sky and producing thermal energy from friction with the air molecules. It should also produce more energy from friction with the air when it falls as rain drops towards the Earth and also when it hits the ground and disperses more thermal energy from its kinetic energy.
This breaks the law of conservation of energy, but I don't see what's wrong with my model. I thought about this when I read that the rain produces a lot of thermal energy from friction with the air.
Answer: The first device is not so magical - you can accomplish the same result with a fire under a pot.
The second device, on the other hand, is indeed magical: you are converting heat into usable energy without any side effects. This is prohibited by the second law of thermodynamics.
You are also violating the first law, though, and the thermodynamics police are coming for you ;)
The problem lies in making an unwarranted assumption.
The vapour rises in the air, and you say you want to consider effects such as "friction", or the exchange of heat between water and other air molecules.
A proper description of this phenomenon will show that any energy gained by the air is lost by the water, so when you activate your second-law-violating device the vapour will yield less energy than what you put in initially - unless, of course, you are also able to retrieve the energy which was lost to the air. | {
"domain": "physics.stackexchange",
"id": 83225,
"tags": "thermodynamics, energy, energy-conservation"
} |
Why is the Electric Field from Point Charges vs a Continuous Line Different? | Question: I was doing some calculations regarding the difference between the electric field generated by point charges vs a continuous line of charge (trying to come up with a cool/challenging physics problem), and I came across this interesting result which seems incorrect to me:
Imagine you have an infinite line of point charges each with charge q, separated by distance x. At one end of the line, you measure the electric field at distance x from the last charge. To visualize, the measurement point is m and the charges are c:
(m) --x-- (c) --x-- (c) --x-- (c) ...
When calculating the electric field at point m, you get:
$$
\frac{kq}{x^2}+\frac{kq}{(2x)^2}+\frac{kq}{(3x)^2}+\dots=\frac{kq}{x^2}\left(1+\frac{1}{2^2}+\frac{1}{3^2}+\dots\right)=\frac{kq\pi^2}{6x^2}
$$
This is because $1+\frac{1}{2^2}+\frac{1}{3^2}+\dots=\frac{\pi^2}{6}$, a classic result (the Basel problem) with some non-trivial proofs.
Now let's imagine that instead of the line of point charges, you have a continuous infinite line of charge which starts a distance x away from the measurement point, with charge density q/x. To visualize, the measurement point is m:
(m) --x-- (---------Infinite line of charge with charge density q/x---------)
Now to calculate the field:
$$
\int_{x}^{\infty}\frac{kq}{xr^2}dr=-(\frac{kq}{x\infty})+(\frac{kq}{x^3})=\frac{kq}{x^3}
$$
Which is clearly quite different from the above result.
I feel like I have made a simple error in either my calculations or in my conception of the problem. It seems that both situations should have the same (or at least similar) results, as both have the same linear charge density. So I guess my question is this: are my above calculations valid, and if so, is there an easy way to understand why these results are different?
Answer: You simply made a mistake in the integral:
$$\int_x^\infty \frac{kq}{xr^2} dr = \frac{kq}{x} \left.\frac{1}{r}\right|^x_\infty = \frac{kq}{x^2}.$$
You can see the mistake immediately from the fact that your result doesn't have the right units - always always check your units!
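The $\pi^2/6$ factor is easy to confirm numerically; a throwaway sketch in units where $kq/x^2 = 1$:

```python
import math

# partial sum of kq/(n x)^2 over the point charges, in units where kq/x^2 = 1
discrete = sum(1.0 / n ** 2 for n in range(1, 200001))
continuous = 1.0  # the corrected integral gives kq/x^2
print(discrete / continuous, math.pi ** 2 / 6)  # both about 1.6449
```

The partial-sum tail beyond $N$ terms is of order $1/N$, so 200,000 terms pin the ratio down to several decimal places.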
So what changes? The discrete distribution produces a field that is larger by a factor of $\pi^2/6 \approx 1.6$. This makes sense: picture dividing the continuous line into segments of length $x$ and charge $q$. Going from this to the discrete distribution means concentrating all the charge of each segment at its left endpoint, thus moving it closer to the measurement point and strengthening the field. | {
"domain": "physics.stackexchange",
"id": 85400,
"tags": "electromagnetism, electrostatics, electric-fields"
} |
About 'de Broglie hypothesis' and the double slit experiment | Question: EDIT: As I mentioned in my original question, I do not have the background to fully understand @Timaeus's answer (which was very detailed indeed).
I would appreciate it if someone could give a more 'classical physics' answer, even if not so detailed, in order to clarify some things for myself a little bit more.
In addition I would like to know the difference between the terms 'wave' and 'wavefunction', and how two uncharged particles would interfere in the experiment demonstrated by a single-particle emission source.
Without enough theoretical background in physics, I post this question, which actually has two related parts.
Wavefunction and 'de Broglie hypothesis':
As far as I can understand, the wavefunction of massless particles is described by the change in magnitude of a certain particle property; i.e. the wavefunction of a photon is described as the changing intensity of its EM field over time. This wave moves through space with velocity $C$ and carries energy equal to $hf$.
On the other hand 'de Broglie hypothesis' suggests that 'all matter has wave properties' and the wavelength of this function is equal to $\lambda=h/p=h/mv\cdot \gamma^{-1}$.
Now here comes my first question: In which particle's property is this wavefunction related to?
Or does this wavefunction actually describes the particle's motion (~) with it's simultaneous transport through
space with velocity $v$,carrying $\rm{KE}\;?$
b) In the double slit experiment that is demonstrated by a single's electrons emission source, what physical property is the interference pattern related to ?
Is this pattern the result of the interference of accelerating electron's EM wave or something else?
If the source emits 'n' single electrons, how many arrivals of matter do we detect on the screen?
Answer: There are two approaches to quantum mechanics: nonrelativistic quantum mechanics, and quantum field theory.
In quantum field theory, there is one wave in physical space for each type of particle/antiparticle. A photon field, an electron/positron field, a muon/antimuon field, and so on. But the fields are operator valued, and quite complex. And there isn't a clean thing that corresponds to a single particle. The one field collectively represents all the photons and such in the whole universe.
In nonrelativistic quantum mechanics you have to pick a number of particles, say $n$; each particle has a spin space $\mathbb C,$ $\mathbb C^2,$ $\mathbb C^3$, or in general $\mathbb C^k$, and then the wavefunction is a function from $\mathbb R^{3n}$ (note this is configuration space, which is much larger than physical space) into $\mathbb C^{k_1}\otimes\mathbb C^{k_2}\otimes\dots\otimes \mathbb C^{k_n}$.
This wave moves through space with velocity $C$ and carries energy equal to $hf$.
That never happens.
On the other hand 'de Broglie hypothesis' suggests that 'all matter has wave properties' and the wavelength of this function is equal to $\lambda=h/p=h/mv\cdot \gamma^{-1}$.
The $p$ is canonical momentum, not mechanical momentum. And even if it were mechanical momentum, the correct formulas are things like $\vec p=E\vec v/c^2$ (which holds for all particles) not $\vec p=\gamma m\vec v$ (which only holds for massive particles).
Just because some formulas are more popular and they hold for some special cases doesn't make them the correct formulas.
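For a massive particle the two momentum formulas do coincide, since $E = \gamma m c^2$ gives $E\vec v/c^2 = \gamma m \vec v$; a quick check with example numbers (electron mass, arbitrary speed):

```python
import math

c = 299792458.0          # speed of light, m/s
m = 9.109e-31            # electron mass, kg
v = 0.6 * c              # an arbitrary sub-light speed
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
E = gamma * m * c ** 2   # total relativistic energy

print(E * v / c ** 2, gamma * m * v)  # same mechanical momentum either way
```

The point of $\vec p = E\vec v/c^2$ is that it also covers massless particles, where $\gamma m \vec v$ is undefined.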
Now here comes my first question: In which particle's property is this wavefunction related to?
The wavefunction describes all properties of all particles.
Or does this wavefunction actually describes the particle's motion (~) with it's simultaneous transport through
space with velocity $v$,carrying $\rm{KE}\;?$
It's a wave in configuration space. So the probability current describes flows in configuration space. A point in configuration space is an assignment of locations to all particles. So it's a flow from a configuration of all particles to another configuration of all particles.
b) In the double slit experiment that is demonstrated by a single's electrons emission source, what physical property is the interference pattern related to ?
The interference pattern is in the residuals of the locations of the particles.
Is this pattern the result of the interference of accelerating electron's EM wave or something else?
Something else. The interference of the spin in configuration space. With a residual over the locations of the screen. As measured by the frequency of a statistical ensemble which is related to the overall amplitude of a single instance of the ensemble.
If the source emits 'n' single electrons, how many arrivals of matter do we detect on the screen?
Less than $n$ if some hit the barrier on the way to the screen. The ensemble is all $n$ and in nonrelativistic quantum mechanics you get the prediction of the frequency of different locations from the wavefunction for just one electron. | {
"domain": "physics.stackexchange",
"id": 29126,
"tags": "quantum-mechanics, waves, wavefunction, double-slit-experiment, wave-particle-duality"
} |