| anchor | positive | source |
|---|---|---|
What is the difference between the uniform-cost search and Dijkstra's algorithm? | Question: Every computer science student (including myself, when I was doing my bachelor's in CS) probably encountered the famous single-source shortest path Dijkstra's algorithm (DA). If you also took an introductory course on artificial intelligence (as I did a few years ago, during my bachelor's), you should have also encountered some search algorithms, in particular, the uniform-cost search (UCS).
A few articles on the web (such as the Wikipedia article on DA) say that DA (or a variant of it) is equivalent to the UCS. The famous Norvig and Russell's book Artificial Intelligence: A Modern Approach (3rd edition) even states
The two-point shortest-path algorithm of Dijkstra (1959) is the origin of uniform-cost search. These works also introduced the idea of explored and frontier sets (closed and open lists).
How exactly is DA equivalent to UCS?
Answer: The answer to my question can be found in the paper Position Paper: Dijkstra's Algorithm versus Uniform Cost Search or a Case Against Dijkstra's Algorithm (2011), in particular section Similarities of DA and UCS, so you should read this paper for all the details.
DA and UCS are logically equivalent (i.e. they process the same vertices in the same order), but they do it differently. In particular, the main practical difference between the single-source DA and UCS is that, in DA, all nodes are initially inserted in a priority queue, while in UCS nodes are inserted lazily.
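This eager-versus-lazy difference can be sketched in a few lines of Python (an illustrative sketch, not the cited paper's pseudocode; `heapq` stands in for the priority queue, and the graph representation is assumed):

```python
import heapq

def ucs(graph, source):
    """Uniform-cost search: nodes enter the frontier lazily, only when
    first discovered. `graph` maps node -> {neighbor: edge_cost}."""
    dist = {source: 0}
    frontier = [(0, source)]          # priority queue seeded with the source only
    explored = set()
    while frontier:
        d, n = heapq.heappop(frontier)
        if n in explored:
            continue                  # stale queue entry; a cheaper path was found
        explored.add(n)
        for m, w in graph.get(n, {}).items():
            if m not in dist or d + w < dist[m]:
                dist[m] = d + w
                heapq.heappush(frontier, (d + w, m))
    return dist
```

Classic DA would instead initialize the queue with every vertex at distance infinity; both variants pop vertices in the same order, which is the sense in which they are logically equivalent.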
Here is the pseudocode (taken from the cited paper) of DA
Here is the pseudocode of the best-first search (BFS), of which UCS is just a particular case. Actually, this is the pseudocode of UCS where $g(n)$ is the cost of the path from the source node to $n$ (although the title indicates that this is the pseudocode of BFS). | {
"domain": "ai.stackexchange",
"id": 3956,
"tags": "comparison, search, uniform-cost-search, dijkstras-algorithm, shortest-path-problem"
} |
Why is exclusive breastfeeding recommended for 6 months only? | Question: The WHO recommends exclusive breastfeeding for the first 6 months of a child's life.
Review of evidence has shown that, on a population basis, exclusive breastfeeding for 6 months is the optimal way of feeding infants. Thereafter infants should receive complementary foods with continued breastfeeding up to 2 years of age or beyond.
Their website mentions "review of evidence", but what is that evidence? Why 6 months and not 4 or 5 or 7 or 8 or 9 or anything like that?
Can anyone tell which review the WHO is talking about?
Answer: I guess this is the review (ISBN: 92 4 156211 0) that they are referring to.
We found no objective evidence of a “weanling’s dilemma.” Infants who
are exclusively breastfed for 6 months experience less morbidity from
gastrointestinal infection than those who are mixed breastfed as of 3
or 4 months, and no deficits have been demonstrated in growth among
infants from either developing or developed countries who are
exclusively breastfed for 6 months. Moreover, the mothers of such
infants have more prolonged lactational amenorrhea. Although infants
should still be managed individually so that insufficient growth or
other adverse outcomes are not ignored and appropriate interventions
are provided, the available evidence demonstrates no apparent risks in
recommending, as public health policy, exclusive breastfeeding for the
first 6 months of life in both developing and developed country
settings. Large randomized trials are recommended in both types of
setting to rule out small adverse effects on growth and to confirm the
reported health benefits of exclusive breastfeeding for 6 months.
However, another report suggests that exclusive breastfeeding for a long time may lead to deficiency of some nutrients which cannot be supplemented via maternal diet.
The dual dependency on exogenous dietary sources and endogenous stores
to meet requirements needs to be borne in mind particularly when
assessing the adequacy of iron and zinc in human milk. Human milk,
which is a poor source of iron and zinc, cannot be altered by maternal
supplementation with these two nutrients. It is clear that the
estimated iron requirements of infants cannot be met by human milk
alone at any stage of infancy. The iron endowment at birth meets the
iron needs of the breastfed infant in the first half of infancy, i.e. 0
to 6 months. If an exogenous source of iron is not provided,
exclusively breastfed infants are at risk of becoming iron deficient
during the second half of infancy. Net zinc absorption from human milk
falls short of zinc needs, which appear to be subsidized by prenatal
stores.
Both these reviews are from the WHO website; you can search for different WHO research materials on the site.
Note: WHO does not say that breastfeeding should not be continued till a later stage. As noted in the review, exclusive breastfeeding for more than six months can cause some nutritional deficiencies. However, reducing exclusive breastfeeding to 3-4 months can reduce the immunity of the infant. | {
"domain": "biology.stackexchange",
"id": 9544,
"tags": "human-biology, nutrition, literature"
} |
Laravel build a route from a model | Question: I've created a library, TomHart/laravel-route-from-model, and am looking to get a review of it. As well as the usual code review, I'm also looking for feedback from a user's point of view: if you were to use the library, is there anything extra you wish it did, or anything you'd want done differently?
The key class is:
<?php

namespace TomHart\Routing;

use Illuminate\Database\Eloquent\Model;
use Illuminate\Routing\Router;
use Illuminate\Routing\UrlGenerator;
use Symfony\Component\Routing\Exception\RouteNotFoundException;

class RouteBuilder
{
    /**
     * Get the router instance.
     *
     * @return Router
     */
    private function getRouter(): Router
    {
        return app('router');
    }

    /**
     * Get the UrlGenerator.
     *
     * @return UrlGenerator
     */
    private function getUrlGenerator(): UrlGenerator
    {
        return app('url');
    }

    /**
     * This allows a route to be dynamically built just from a Model instance.
     * Imagine a route called "test":
     *     '/test/{name}/{id}'
     * Calling:
     *     routeFromModel('test', Site::find(8));
     * will successfully build the route, as "name" and "id" are both attributes on the Site model.
     *
     * Furthermore, once routeFromModel is in use, the route can be changed without changing the call:
     *     routeFromModel('test', Site::find(8));
     * You can change the route to be:
     *     '/test/{name}/{id}/{parent->relationship->value}/{slug}/{otherParent->value}'
     * and the route will still build successfully, as all the extra parts can be extracted from the Model.
     * Relationships can be called and/or chained with "->" (imagine the Model is an Order):
     *     {customer->address->postcode}
     * would get the postcode of the customer who owns the order.
     *
     * @param string $routeName The route you want to build
     * @param Model $model The model to pull the data from
     * @param mixed[] $data Data to build into the route when it doesn't exist on the model
     *
     * @return string The built URL.
     */
    public function routeFromModel(string $routeName, Model $model, array $data = [])
    {
        $router = $this->getRouter();
        $urlGen = $this->getUrlGenerator();

        $route = $router->getRoutes()->getByName($routeName);
        if (!$route) {
            throw new RouteNotFoundException("Route $routeName not found");
        }

        $params = $route->parameterNames();
        foreach ($params as $name) {
            if (isset($data[$name])) {
                continue;
            }

            $root = $model;

            // Split the name on -> so we can set URL parts from relationships.
            $exploded = collect(explode('->', $name));

            // Remove the last one; this is the attribute we actually want to get.
            $last = $exploded->pop();

            // Change $root to whatever relationship is necessary.
            foreach ($exploded as $part) {
                $root = $root->$part;
            }

            // Get the value.
            $data[$name] = $root->$last;
        }

        return rtrim($urlGen->route($routeName, $data), '?');
    }
}
Answer: Laravel coupling
What a shame that you decided to tie this to laravel. You can decouple the entire library from laravel framework and only provide a bundle for laravel.
routeFromModel accepts Model, but it can actually work for any object.
$route = $router->getRoutes()->getByName($routeName);
if (!$route) {
    throw new RouteNotFoundException("Route $routeName not found");
}
$params = $route->parameterNames();
This means you really don't need the router, you just need something that gives you an "array of parameter names" based on a "name".
return rtrim($urlGen->route($routeName, $data), '?');
Returning just the data here would make it more flexible.
IoC
You are pulling the RouteBuilder's dependencies from the DI container; why not have the container inject those deps? The way it is now, it could just be a static class with only static methods...
"domain": "codereview.stackexchange",
"id": 36810,
"tags": "php, library, laravel"
} |
Why are magnitudes normalised during synthesis (IDFT), not analysis (DFT)? | Question: In most examples and FFT code that I've seen, the output (frequency magnitudes) of the forward DFT operation is scaled by N -- i.e. instead of giving you the magnitude of each frequency bin, it gives you N times the magnitude.
Operationally, this is simply because the DFT is calculated by taking the inner product of the signal with each basis sine (i.e. un-normalised correlation). However, that doesn't answer the philosophical question of why don't we just divide by N before returning the output?
Instead, most algorithms divide by N when re-synthesising.
This seems counter-intuitive to me, and (unless I'm missing something) it makes all explanations of the DFT very confusing.
In every scenario I can dream up, the actual magnitude (not the magnitude * N) is the value I need from a DFT operation, and the normalised magnitude is the value I want to input into an IDFT operation.
Why isn't the DFT defined as DFT/N, and the IDFT defined as a simple sum of normalised-magnitude sinusoids?
Answer: Whether you scale the output of your DFT, forward or inverse, has nothing to do with convention or what is mathematically convenient. It has everything to do with the input to the DFT. Allow me to show some examples where scaling is either required or not required for both the forward and inverse transform.
Must scale a forward transform by 1/N.
To start with, it ought to be clear that to analyze a simple sine wave, the length of the transform should be irrelevant, mathematically speaking. Suppose N=1024, Freq=100 and your signal is:
f(n) = cos(Freq * 2*Pi * n/N)
If you take a 1024 point DFT of f(n), you will find that bin[100] = 512. But this isn't a meaningful value until you scale it by 1/N: 512/1024 = 1/2, and of course, the other 1/2 is in the negative spectrum in bin[924].
If you double the length of the DFT, N=2048, the output values would be twice those of the 1024 point DFT, which again makes the results meaningless unless we scale by 1/N. The length of the DFT should not be a factor in these sorts of analyses. So in this example, you must scale the DFT by 1/N.
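A quick numeric check of this claim, using a naive DFT (N and the bin frequency here are small illustrative values, not the 1024-point example above):

```python
import cmath
import math

def dft(x):
    # Naive O(N^2) DFT, enough to demonstrate the scaling behaviour.
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

N, freq = 64, 5
f = [math.cos(2 * math.pi * freq * n / N) for n in range(N)]
X = dft(f)

# Unscaled, bin[freq] has magnitude N/2; scaled by 1/N it is the true 1/2
# (the other 1/2 sits in the negative-frequency bin N - freq).
print(abs(X[freq]))        # ~32.0
print(abs(X[freq]) / N)    # ~0.5
```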
Must not scale a forward transform.
Now suppose you have the impulse response of a 32 tap FIR filter and want to know its frequency response. For convenience, we will assume a low pass filter with a gain of 1. We know that for this filter, the DC component of the DFT must be 1. And it should be clear that this will be the case no matter the size of the DFT because the DC component is simply the sum of the input values (i.e. the sum of the FIR coefficients).
Thus, for this input, the DFT is not scaled by 1/N to get a meaningful answer. This is why you can zero pad an impulse response as much as you want without affecting the outcome of the transform.
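The same naive-DFT check illustrates the zero-padding point: the DC bin is just the sum of the taps, whatever the transform length. (The 4-tap averaging filter here is an assumed toy example, not the 32-tap filter discussed above.)

```python
import cmath
import math

def dft(x):
    # Naive O(N^2) DFT for demonstration purposes.
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

taps = [0.25, 0.25, 0.25, 0.25]          # toy FIR filter with gain 1 at DC
dc_short = dft(taps)[0]                  # 4-point transform
dc_padded = dft(taps + [0.0] * 28)[0]    # zero-padded to 32 points

# Both DC values equal the sum of the coefficients: no 1/N scaling needed.
print(abs(dc_short), abs(dc_padded))     # ~1.0 ~1.0
```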
What is the fundamental difference between these two examples?
The answer is simple. In the first case, we supplied energy for every input sample. In other words, the sine wave was present for all 1024 samples, so we needed to scale the DFT's output by 1/1024.
In the second example, by definition, we only supplied energy for 1 sample (the impulse at n=0). It took 32 samples for the impulse to work its way through the 32 tap filter, but this delay is irrelevant. Since we supplied energy for 1 sample, we scale the DFT's output by 1. If an impulse were defined with 2 units of energy instead of 1, we would scale the output by 1/2.
Must not scale an inverse transform.
Now let's consider an inverse DFT. As with the forward DFT, we must consider the number of samples we are supplying energy to. Of course, we have to be a bit more careful here because we must fill both the positive and negative frequency bins appropriately. However, if we place an impulse (i.e. a 1) in two appropriate bins, then the resulting output of the inverse DFT will be a cosine wave with an amplitude of 2 no matter how many points we use in the inverse DFT.
Thus, as with the forward DFT, we don't scale the inverse DFT's output if the input is an impulse.
Must scale an inverse transform.
Now consider the case where you know the frequency response of a low pass filter and want to do an inverse DFT to get its impulse response. In this case, since we are supplying energy at all points, we must scale the DFT's output by 1/N to get a meaningful answer. This isn't quite as obvious because the input values will be complex, but if you work through an example, you will see that this is true. If you don't scale by 1/N you will have peak impulse response values on the order of N which can't be the case if the gain is 1.
The four situations I've just detailed are end point examples where it is clear how to scale the DFT's output. However, there is a lot of gray area between the end points. So let’s consider another simple example.
Suppose we have the following signal, with N=1024, Freq=100:
f(n) = 6 * cos(1*Freq * 2*Pi * n/N) n = 0 - 127
f(n) = 1 * cos(2*Freq * 2*Pi * n/N) n = 128 - 895
f(n) = 6 * cos(4*Freq * 2*Pi * n/N) n = 896 - 1023
Notice the amplitude, frequency, and duration differences for the three components. Unfortunately, the DFT of this signal will show all three components at the same power level, even though the 2nd component has 1/36 the power level of the other two.
The fact that all three components are supplying the same amount of energy is obvious, which explains the DFT results, but there is an important point to be made here.
If we know the duration for the various frequency components, then we can scale the various frequency bins accordingly. In this case, we would do this to accurately scale the DFT's output:
bin[100] /= 128; bin[200] /= 768; bin[400] /= 128;
Which brings me to my final point; in general, we have no idea how long a particular frequency component is present at the input to our DFT, so we can't possibly do this sort of scaling. In general however, we do supply energy for every sample point, which is why we should scale the forward DFT by 1/N when analyzing a signal.
To complicate matters, we would almost certainly apply a window to this signal to improve the DFT's spectral resolution. Since the first and third frequency components are at the beginning and end of the signal, they get attenuated by 27 dB while the center component gets attenuated by only 4 dB (Hanning window).
To be clear, the output of the DFT can be a pretty poor representation of the input, scaled or not.
In the case of the inverse DFT, which is usually a pure mathematics problem, as opposed to the analysis of an unknown signal, the input to the DFT is clearly defined, so you know how to scale the output.
When analyzing a signal with a spectrum analyzer, analog or FFT, the problems are similar. You don't know the power of the signal displayed unless you also know its duty cycle. But even then, the windowing, span, sweep rates, filtering, the detector type, and other factors all work to goof the result.
Ultimately, you have to be quite careful when moving between the time and frequency domains. The question you asked regarding scaling is important, so I hope I have made it clear that you must understand the input to the DFT to know how to scale the output. If the input isn't clearly defined, the DFT's output has to be regarded with a great deal of skepticism, whether you scale it or not. | {
"domain": "dsp.stackexchange",
"id": 1304,
"tags": "fft, dft, magnitude, normalization"
} |
Common Ion Effect - Ionic Equilibrium | Question: Question
In which of the aqueous solutions of the following, dissociation of $\ce{NH4OH}$ will be minimum?
A) $\ce{NaOH}$
B) $\ce{H2O}$
C) $\ce{NH4Cl}$
D) $\ce{NaCl}$
My Thoughts
My book says that the answer is option C '$\ce{NH4Cl}$' giving the reason as common ion effect.
But I think that option A '$\ce{NaOH}$' also has a common ion as $\ce{OH-}$.
What should be the right answer and how do we compare which will cause more suppression by common ion effect?
I get why $\ce{H2O}$ will not suppress dissociation of $\ce{NH4OH}$ that much since its equilibrium constant is very low (of the order of $10^{-14}$). But then why not $\ce{NaOH}$ (which is a very strong base and thus will almost completely dissociate into constituent ions)? Which one out of $\ce{NaOH}$ and $\ce{NH4Cl}$ is a stronger electrolyte?
Is it because $\ce{NH4Cl}$ will form a buffer with $\ce{NH4OH}$?
I am feeling really confused about this. Any help will be highly appreciated!
Final Question
I now understand that both $\ce{NaOH}$ and $\ce{NH4Cl}$ will cause decrease in dissociation of $\ce{NH4OH}$. Hence my final question is this: Which will cause more decrease in dissociation and why?
Answer: Let's consider an aqueous solution, the concentration of which is $\pu{1 M}$ in $\ce{NH3}$ and $\pu{1 M}$ in $\ce{NaOH}$. Thus, following equilibrium would be taken place:
$$\ce{NH3 (aq) + H2O <=> NH4+ (aq) + OH- (aq)}\tag1$$
$$\ce{NaOH (aq) -> Na+ (aq) + OH- (aq)}\tag2$$
The $\mathrm{p}K_\mathrm{b}$ of equilibrium $(1)$ is 4.75, thus $K_\mathrm{b} = \pu{1.78E{-5}}$. In pure ammonia solution, from equation $(1)$:
$$K_\mathrm{b} = \pu{1.78E{-5}} = \frac{[\ce{NH4+}][\ce{OH-}]}{[\ce{NH3}]}\tag3$$
If ionized amount at equilibrium is $\alpha$, then
$$K_\mathrm{b} = \pu{1.78E{-5}} = \frac{\alpha \times \alpha}{1-\alpha} = \alpha^2 \ \Rightarrow \ \therefore \ \alpha = \sqrt{\pu{1.78E{-5}}} = \pu{4.22E{-3}}$$
Assumptions: $\alpha \lt\lt 1$, and thus $1-\alpha \approx 1$; and $\alpha \gt\gt \pu{1.00E{-7}}$, so the autoionization of water can be ignored. Since $[\ce{NH4+}] = [\ce{OH-}] = \alpha = \pu{4.22E{-3}}$ at equilibrium, both of these assumptions hold.
Now consider what happens if you have $\ce{NaOH}$ in your solution. Since it is a strong base, it dissociates completely according to equation $(2)$. Thus, there is a common ion in this solution: $[\ce{OH-}] = \pu{1 M}$. Hence, from equation $(3)$:
$$K_\mathrm{b} = \pu{1.78E{-5}} = \frac{\beta \times (1+\beta)}{1-\beta} \approx \beta \ \Rightarrow \ \therefore \ \beta \approx \pu{1.78E{-5}} $$
Hence ($\alpha \gt \beta$), the ionization amount of $\ce{NH3}$ in presence of the common ion $\ce{OH-}$ is less than that of the solution when no common ions are present.
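Solving both equilibria exactly (no small-$x$ approximation) confirms $\alpha \gt \beta$. A short Python check, with the quadratics rearranged from the $K_\mathrm{b}$ expressions above:

```python
from math import sqrt

Kb = 1.78e-5

# Pure 1 M NH3:  Kb = a^2 / (1 - a)   =>   a^2 + Kb*a - Kb = 0
alpha = (-Kb + sqrt(Kb**2 + 4 * Kb)) / 2

# 1 M NH3 + 1 M NaOH:  Kb = b*(1 + b) / (1 - b)   =>   b^2 + (1 + Kb)*b - Kb = 0
beta = (-(1 + Kb) + sqrt((1 + Kb)**2 + 4 * Kb)) / 2

print(alpha)   # ~4.21e-3
print(beta)    # ~1.78e-5
```

The exact roots land essentially on the approximate values quoted in the answer, so the simplifying assumptions were justified.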
In a similar way, you can prove that the ionization amount of $\ce{NH3}$ is smaller in the presence of the common ion $\ce{NH4+}$, using the following equilibria:
$$\ce{NH3 (aq) + H2O <=> NH4+ (aq) + OH- (aq)}\tag1$$
$$\ce{NH4Cl (aq) -> NH4+ (aq) + Cl- (aq)}\tag4$$
$$\ce{NH4+ (aq) + H2O <=> H3O+ (aq) + NH3 (aq)}\tag5$$ | {
"domain": "chemistry.stackexchange",
"id": 14357,
"tags": "physical-chemistry, equilibrium, aqueous-solution, kinetics, ph"
} |
Are there any compound lists searchable by elemental content? | Question: As I say in the question, I am looking for lists of compounds that I can search based on elemental content. Essentially, I want to find candidate compounds for boron doping liquid scintillator, we have a few likely candidates already but we want to see the available chemical space and see if there are better/cheaper candidates. I want to avoid high electron densities, so essentially I want to find compounds that only have boron, carbon, and hydrogen. I will filter later for hazard/reactivity/flammability.
Does such a list / service exist? If not, is there a general list of compounds with their formulas that I could then parse with a python script or something?
Answer: After some more searching, followed by asking reddit, I got a few good suggestions.
ChemSpider's advanced search allows you to, among other things, search for compounds containing specific elements and lacking other elements.
Also, for large lists of compounds I can use python tools to extract the tables from the CRC Handbook pdfs and then search that data. | {
"domain": "chemistry.stackexchange",
"id": 5138,
"tags": "organic-chemistry"
} |
What is the complexity class most closely associated with what the human mind can accomplish quickly? | Question: This question is something I've wondered about for a while.
When people describe the P vs. NP problem, they often compare the class NP to creativity. They note that composing a Mozart-quality symphony (analogous to an NP task) seems much harder than verifying that an already-composed symphony is Mozart-quality (which is analogous to a P task).
But is NP really the "creativity class?" Aren't there plenty of other candidates? There's an old saying: "A poem is never finished, only abandoned." I'm no poet, but to me, this is reminiscent of the idea of something for which there is no definite right answer that can be verified quickly...it reminds me more of coNP and problems such as TAUTOLOGY than NP or SAT. I guess what I'm getting at is that it's easy to verify when a poem is "wrong" and needs to be improved, but difficult to verify when a poem is "correct" or "finished."
Indeed, NP reminds me more of logic and left-brained thinking than creativity. Proofs, engineering problems, Sudoku puzzles, and other stereotypically "left-brained problems" are more NP and easy to verify from a quality standpoint than than poetry or music.
So, my question is: Which complexity class most precisely captures the totality of what human beings can accomplish with their minds? I've always wondered idly (and without any scientific evidence to support my speculation) if perhaps the left-brain isn't an approximate SAT-solver, and the right-brain isn't an approximate TAUTOLOGY-solver. Perhaps the mind is set up to solve PH problems...or perhaps it can even solve PSPACE problems.
I've offered my thoughts above; I'm curious as to whether anyone can offer any better insights into this. To state my question succinctly: I am asking which complexity class should be associated with what the human mind can accomplish, and for evidence or an argument supporting your viewpoint. Or, if my question is ill-posed and it doesn't make sense to compare humans and complexity classes, why is this the case?
Thanks.
Update: I've left everything but the title intact above, but here's the question that I really meant to ask: Which complexity class is associated with what the human mind can accomplish quickly? What is "polynomial human time," if you will? Obviously, a human can simulate a Turing machine given infinite time and resources.
I suspect that the answer is either PH or PSPACE, but I can't really articulate an intelligent, coherent argument for why this is the case.
Note also: I am mainly interested in what humans can approximate or "do most of the time." Obviously, no human can solve hard instances of SAT. If the mind is an approximate X-solver, and X is complete for class C, that's important.
Answer: I don't claim this is a complete answer, but here are some thoughts that are hopefully along the lines of what you're looking for.
NP roughly corresponds to "puzzles" (viz. the NP-completeness of Sudoku, Minesweeper, Free Cell, etc., when these puzzles are suitably generalized to allow $n \to \infty$). PSPACE corresponds to "2-player games" (viz. the PSPACE-completeness of chess, go, etc.). This is not news.
People generally seem to do alright with finite instances of NP-complete puzzles, and yet find them non-trivial enough to be entertaining. The finite instances of PSPACE-complete games that we play are considered some of the more difficult intellectual tasks of this type. This at least suggests that PSPACE is "hitting the upper limits" of our abilities. (Yet our opponents in these PSPACE-complete games are generally other people. Even when the opponents are computers, the computers aren't perfect opponents. This heads towards the question of the power of interactive proofs when the players are computationally limited. There is also the technicality that some generalizations of these games are EXP-complete instead of PSPACE-complete.)
To an extent, the problem sizes that arise in actual puzzles/games have been calibrated to our abilities. 4x4 Sudoku would be too easy, hence boring. 16x16 Sudoku would take too much time (not more than the lifetime of the universe, but more than people are generally willing to sit to solve a Sudoku puzzle). 9x9 seems to be the "Goldilocks" size for people solving Sudoku. Similarly, playing Free Cell with a deck of 4 suits of 13 cards each and 4 free cells seems to be about the right difficulty to be solvable yet challenging for most people. (On the other hand, one of the smartest people I know is able to solve Free Cell games as though she were just counting natural numbers "1,2,3,4,...") Similarly for the size of Go and Chess boards.
Have you ever tried to compute a 6x6 permanent by hand?
I suppose the point is that if you take natural problems in classes significantly above PSPACE (or EXP), then the only finite instances that people are capable of solving seem to be so small as to be uninteresting. Part of the reason "natural" is necessary here is that one can take a natural problem, then "unnaturally" modify all instances of size $< 10^{10}$ so that for all instances a human would ever try, the problem becomes totally intractable, regardless of its asymptotic complexity.
Conversely, for problems in EXP, any problem size below the "heel of the exponential" has a chance of being solvable by most people in reasonable amounts of time.
As to the rest of PH, there aren't many (any?) natural games people play with a fixed number of rounds. This is also somehow related to the fact that we don't know of many natural problems complete for levels of PH above the third.
As mentioned by Serge, FPT has a role to play here, but (I think) mostly in the fact that some problems naturally have more than one "input size" associated with them. | {
"domain": "cstheory.stackexchange",
"id": 1560,
"tags": "cc.complexity-theory, complexity-classes, soft-question"
} |
variation of electrostatic potential on moving radially outwards from the nucleus of an atom | Question: I was wondering how would the electrostatic potential change on moving radially outwards from the nucleus in an atom, considering the effect of the electron clouds around it.
Answer: The atom has some charge distribution $\rho(r)$. We don't know what form the function $\rho(r)$ has, but we do know it depends only on $r$ because an atom is spherically symmetric.
When you have a spherical charge distribution the potential at a distance $r$ is simply due to the total charge inside the distance $r$:
$$ V(r) = -\frac{1}{4\pi\epsilon_0}\frac{Q(r)}{r} \tag{1} $$
There is the positive charge due to the nucleus, which doesn't depend on $r$, and the negative charge due to the electron cloud is:
$$ Q_e(r) = \int_0^r \rho(R)4\pi R^2 dR \tag{2} $$
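As a concrete check of equation (2): for hydrogen (one electron) the ground-state density is known analytically, so the enclosed charge can be integrated numerically and compared against the closed form. This is a sketch in atomic units ($a_0 = 1$, charges $\pm 1$); the trapezoid rule and step count are illustrative choices.

```python
import math

def rho_e(r):
    # Hydrogen 1s electron charge density in atomic units:
    # rho(r) = -|psi_100|^2 = -(1/pi) * exp(-2 r)
    return -math.exp(-2.0 * r) / math.pi

def enclosed_charge(r, steps=2000):
    # Equation (2) plus the nuclear charge: Q(r) = 1 + int_0^r rho(R) 4 pi R^2 dR,
    # evaluated with the trapezoid rule.
    dr = r / steps
    total = 0.0
    for i in range(steps):
        R1, R2 = i * dr, (i + 1) * dr
        f1 = rho_e(R1) * 4 * math.pi * R1**2
        f2 = rho_e(R2) * 4 * math.pi * R2**2
        total += 0.5 * (f1 + f2) * dr
    return 1.0 + total

# Analytic result for comparison: Q(r) = exp(-2r) * (1 + 2r + 2r^2)
r = 1.5
print(enclosed_charge(r))                          # ~0.4232
print(math.exp(-2 * r) * (1 + 2 * r + 2 * r**2))   # same value
```

For a multi-electron atom the only change is that `rho_e` would come from a numerical (e.g. Hartree–Fock) calculation rather than a formula.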
To calculate $V(r)$ you need to know the form of the charge distribution $\rho(r)$. For any atom with more than one electron there is no analytic formula for $\rho(r)$. We have to compute the charge distribution numerically, typically by doing a Hartree-Fock calculation. The HF calculation gives us $\rho(r)$, and we can then numerically integrate equation (2) to get $V(r)$. | {
"domain": "physics.stackexchange",
"id": 23747,
"tags": "quantum-mechanics, electrostatics, electrons, atomic-physics, potential"
} |
Why can an object not be pulled beyond the polygon which is forming from the attachment points? | Question: I am looking for the physical equations that explains why a 2D object on a 2D surface, e.g. a rectangle cannot be pulled further than the greatest/smallest x/y coordinates of the points it is attached to. This becomes particularly obvious if the points are not placed on a rectangle but form an irregular polygon. The ropes can be shortened and lengthen without any limitation.
As my sketch indicates, I assume that the X and Y components of the forces acting through the ropes have to be looked at. I also know that in the first sketch, with equal forces acting on all ropes, I have an equilibrium state that I would like my system to be in. I also assume that the y component in the second sketch does not simply disappear, but what happens to it? Does it become indefinitely large or small?
Can someone help me to structure this problem, I am struggling to put it into the relevant physical context. Thanks so much.
EDIT: In the case of irregularly arranged anchoring points:
Would it be possible to reach an equilibrium state with the rectangle still being axis aligned but at the same y-height as the top left anchoring point?
What causes the shape to rotate if the forces are not balanced and is this caused by the momentum induced by a lever?
What is that lever?
Answer: You have to distinguish carefully between ropes (black in your sketches) and forces. The red arrows in your sketches are not necessarily forces because a long rope can be under no tension at all (no force acting on it), but in the right sketch, the downward red arrows suggest a large force acting downward. If such a force was acting with no corresponding equal force acting upwards, the box would start moving.
In fact, in the right picture there must be no downward force at all because the upper ropes can't provide an upward force to create equilibrium.
You can indeed get the box beyond the pivots if you consider dynamic movement. | {
"domain": "physics.stackexchange",
"id": 49936,
"tags": "forces, kinematics, geometry, equilibrium"
} |
Python bloom filter | Question: I was hoping to get some feedback on my bloom filter implementation using mmh3. mmh3 is a hashing library based on the murmur hash which is a fast non-cryptographically secure hashing algorithm.
I tested it with some simple words and it seems to be working.
import math
import hashlib
import bitarray
import mmh3


def calc_optimal_hash_func(length_of_entries):
    m = (-length_of_entries * math.log(0.01)) / ((math.log(2)) ** 2)
    k = (m / length_of_entries) * math.log(2)
    return int(m), int(math.ceil(k))


def lookup(string, bit_array, seeds):
    for seed in range(seeds):
        result = mmh3.hash(string, seed) % len(bit_array)
        # print "seed", seed, "pos", result, "->", bit_array[result]
        if bit_array[result] == False:
            return string, "Def not in dictionary"
    return string, "Probably in here"


def load_words():
    data = []
    word_loc = '/usr/share/dict/words'
    with open(word_loc, 'r') as f:
        for word in f.readlines():
            data.append(word.strip())
    return data


def get_bit_array():
    words               = load_words()
    w_length            = len(words)
    array_length, seeds = calc_optimal_hash_func(w_length)
    bit_array           = array_length * bitarray.bitarray('0')
    for word in words:
        try:
            for seed in range(seeds):
                # print "word", word, "seed", seed, "pos", result, "->", bit_array[result]
                pos = mmh3.hash(word, seed) % array_length
                bit_array[pos] = True
        except:
            pass
    return seeds, bit_array


if __name__ == '__main__':
    seeds, ba = get_bit_array()
    print(lookup('badwordforsure', ba, seeds))
    print(lookup('cat', ba, seeds))
    print(lookup('hello', ba, seeds))
    print(lookup('jsalj', ba, seeds))
Answer: Sadly I'm no really into Bloom filter, so those aspects needs to be reviewed by someone else. However, there are some idiomatic stuff I would like to address in your code.
First let me mention some style issues (mostly my preferences, and not really large issues):
Consider specifying imports – You are using limited parts of bitarray and mmh3, so you could consider using from bitarray import bitarray and from mmh3 import hash. However, this is based on personal preferences, and using mmh3.hash() does clearly indicate which hash function you're using.
Do you need the hashlib import? – It doesn't seem like you're using anything from it. Is it needed?
Don't use more parentheses than needed – In calc_optimal_hash_func() you use a lot of parentheses. Are all of those really needed? It seems a little too much, and I'd prefer not to use that many, as it kind of clutters up the formulas. This is still more general advice, though.
Don't test for == False – Use the not operator instead, and do if not bit_array[result]:. It simply makes more sense.
Variable naming – Naming a variable result, or bit_array, or string doesn't convey anything about the purpose of the variable. These and some of the other could benefit from better naming.
Comment on the non-obvious stuff – It would be nice to see some comments describing what actually happens in your code. What kind of a lookup do you do?
Mostly good spacing – Most of your code is reasonably easy to read, but I don't like the alignment at the start of get_bit_array(). I think it would be better to use the default way of words = load_words() and so on.
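A minimal illustration of the == False point above, using a plain list of booleans in place of a bitarray (both behave the same way under truth testing):

```python
bits = [False, True, False]

def membership(pos):
    # Rely on truthiness instead of writing `if bits[pos] == False`
    if not bits[pos]:
        return "Def not in dictionary"
    return "Probably in here"

print(membership(1))  # Probably in here
print(membership(0))  # Def not in dictionary
```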
Code smells
In addition to the smaller style issues, there are some bigger code smells in your code which I would like to address:
load_words() hides a global constant – It always loads /usr/share/dict/words, which kind of removes the need for a function. It would make a little more sense if you had load_words(dictionary_file).
load_words() reads the whole file into memory – If I'm not mistaken, part of the reason you're wanting to use a Bloom Filter is to verify membership within a really large data structure. When you load the entire thing into memory, there is no need for the filter, you'd be better off checking for membership in the array directly!
Why the try...except around the mmh3.hash()? – This seems a little strange, as you pass the catch all the time. Does this serve some unknown purpose? If so, it should have been documented. And if not, it should be removed.
Consider making a class, instead of functions – Having get_bit_array() return two variables, which you need to shuffle around later on, makes me think that this would be better served by a class. Imagine something like the following main section:
bloom_filter = BloomFilter('/usr/share/dict/words')
print(bloom_filter.lookup('badwordforsure'))
print(bloom_filter.lookup('cat'))
print(bloom_filter.lookup('hello'))
print(bloom_filter.lookup('jsalj'))
This would also expose what you're filtering towards, and it would allow for a better interface and handling in general. It does seem like most of the functions are only used once, with the exception of the lookup function.
Join load_words() and get_bit_array() – In order to avoid keeping the entire dictionary in memory, I would build the bit_array directly when reading the file. This would really ease the memory requirements of your code.
Alternative implementation
Here is an implementation where I've taken most of this advice into account:
import mmh3
import bitarray
import math


class BloomFilter:
    """By building a bit array based upon a dictionary, this class
    allows for probable membership, and certain non-membership of
    any lookups within the filter."""

    def __init__(self, dictionary_file):
        """Based on the dictionary_file, builds a bit array to
        be used for testing membership within the file for a given
        percentage, and accurate non-membership."""
        # Skip through the file to get the number of words
        number_words = sum(1 for line in open(dictionary_file))

        # Initialize some variables
        self.calc_optimal_hash(number_words)
        self.bit_array = self.array_length * bitarray.bitarray('0')

        # Reread file, and build bit array
        with open(dictionary_file, 'r') as dict_file:
            for word in dict_file.readlines():
                for seed in range(self.seed_count):
                    hashIndex = mmh3.hash(word, seed) % self.array_length
                    self.bit_array[hashIndex] = True

    def calc_optimal_hash(self, number_words):
        """Calculate array_length and seed_count."""
        # If I'm mistaken in precedence, re-add parentheses :-)
        m = -number_words * math.log(0.01) / math.log(2) ** 2
        k = m / number_words * math.log(2)
        self.array_length = int(m)
        self.seed_count = int(math.ceil(k))

    def probable_member(self, word):
        """Test whether word probably is in the dictionary, or
        is surely not in the dictionary."""
        for seed in range(self.seed_count):
            candidateHash = mmh3.hash(word, seed) % self.array_length
            if not self.bit_array[candidateHash]:
                return False
        return True

    def lookup(self, word):
        """Return a text saying whether word probably is in the
        dictionary, or is surely not in the dictionary."""
        if self.probable_member(word):
            return '"{}" is most likely in dictionary'.format(word)
        else:
            return '"{}" is not in dictionary'.format(word)


def main():
    bloom_filter = BloomFilter('/usr/share/dict/words')
    print(bloom_filter.lookup('badwordforsure'))
    print(bloom_filter.lookup('cat'))
    print(bloom_filter.lookup('hello'))
    print(bloom_filter.lookup('jsalj'))

main()
If you're using Python 2.x, I would also consider using xrange(seed_count) if seed_count is somewhat large, to avoid creating that list in memory. This is handled better by default in Python 3.
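In Python 3 this laziness is the default: range() produces values on demand, which a quick check makes visible (a small illustrative snippet, not part of the review itself):

```python
import sys

lazy = range(10**6)          # constant-size object; values produced on demand
print(sys.getsizeof(lazy))   # small, independent of the range length
print(len(lazy), lazy[-1])   # length and last value still available cheaply
```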
In the suggested code I read the file twice in order to get the number of words. In a lot of cases this is better than reading the entire file into memory, and reading is usually cheap. If an exact number of words is not needed, I would estimate this number by taking the file size and dividing by the average word length.
Finally, I've also created the probable_member() to return a boolean regarding the membership, as this allows for other uses rather than just presenting a text.
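About the "re-add parentheses" comment in calc_optimal_hash(): the parenthesis-free forms evaluate exactly as the fully parenthesized originals did, which a quick numeric check confirms (n below is just an arbitrary example word count):

```python
import math

n = 235000  # example value, roughly the size of /usr/share/dict/words

m_orig = (-n * math.log(0.01)) / ((math.log(2)) ** 2)
m_bare = -n * math.log(0.01) / math.log(2) ** 2
k_orig = (m_bare / n) * math.log(2)
k_bare = m_bare / n * math.log(2)

assert m_orig == m_bare and k_orig == k_bare  # precedence is identical
print(int(m_bare), int(math.ceil(k_bare)))    # array length and seed count
```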
Hopefully you'll see the benefit of packaging this into a class. This implementation should have a much smaller memory footprint, and it would also more easily allow for multiple filters to be used in parallel by instantiating several filters simultaneously, should that be wanted. | {
"domain": "codereview.stackexchange",
"id": 25447,
"tags": "python, bloom-filter"
} |
What is the difference between a piston and a plunger compressor/pump? | Question: My research has found a Quora link and another source giving two different, albeit not contrasting, definitions. Also, as far as I have seen, looking up "plunger compressor" also refers me to pumps.
I am looking to understand the proper definitions, and see some animated GIFs should you have some, that exactly tell these two apart.
Answer: Per the Plunger Pump Wiki:
A plunger pump is a type of positive displacement pump where the high-pressure seal is stationary and a smooth cylindrical plunger slides through the seal. This makes them different from piston pumps and allows them to be used at higher pressures. | {
"domain": "engineering.stackexchange",
"id": 2856,
"tags": "mechanical-engineering, pumps, compressors"
} |
How do scientists study liquid tungsten? | Question: Inspired by this What If article: https://what-if.xkcd.com/50/
In the above article, Randall Munroe mentions that liquid tungsten is difficult to study because of its extremely high melting point. Because of this property, containers for the tungsten tend to melt before the metal itself. This might be a fairly elementary question, but given this difficulty, what methods do modern chemists use to study it, or other materials with extreme melting points?
Answer: One way is to use electrostatic or electromagnetic forces to hold the liquid in place. This abstract refers to a "non-contact method" and an "electrostatic levitator", which is enough to reveal the basic strategy. Unfortunately the article is behind a paywall, so you have to pay up to get the details on the good stuff.
Industrially, metals like tungsten are not processed as liquids. They are ... see here. | {
"domain": "chemistry.stackexchange",
"id": 12234,
"tags": "melting-point"
} |
PHP MySQLi wrapper class | Question: I've created a minimal PHP-MySQLi database wrapper class that can be used to run insert, select, update and delete queries via prepared methods with ease.
Here's the wrapper class:
<?php
/**
* MySQLi Database Class
* @category Database Access
* @package Database
* @author AashikP
* @copyright Copyright (c) 2018 AashikP
* @license https://opensource.org/licenses/MIT The MIT License
* @version 0.1
*/
namespace database;
class MySQLiDB
{
// Mysqli instance.
private $mySqli;
// Save Prefix if defined.
private $prefix = '';
// Generate an array from given $data values for bindParam
private $bind_arr = array(''); // Create the empty 0 index
// Set type to use in bindPar function
private $type;
// Set table with prefix if exists
private $table;
// array to generate bind_results
private $result_arr = array('');
// array to catch multiple rows of results
private $multi_result_arr = array();
// array to fetch values
private $fetch = array();
public function __construct()
{
// Create a connection
$this->connect();
// Check if a database prefix is defined. If defined, set prefix value
defined('DB_PREFIX') ? $this->setPrefix(DB_PREFIX) : null;
}
// Connect using a mysqli instance
private function connect()
{
$this->mySqli = new \mysqli(DB_HOST, DB_USER, DB_PASSWORD, DB_NAME);
// Is there an issue connecting to the database?
if ($this->mySqli->connect_errno) {
echo '<br/>', 'Error: Unable to connect to Database.' , '<br>';
echo "Debugging errno: " . $this->mySqli->connect_errno , '<br>';
echo "Debugging error: " . $this->mySqli->connect_error , '<br>';
unset($this->mySqli);
exit;
}
}
// Set prefix for the table name if there's a prefix setup in the config file
private function setPrefix($value = '')
{
$this->prefix = $value;
}
// Function to insert data into table
public function insert($args)
{
// set type
$this->type = 'insert';
// set table and configure prefix, if available
$this->setTable($args['table']);
// generate insert query
$query = $this->genQuery($args);
// prepare query statement
$stmt = $this->mySqli->prepare($query);
if ($this->mySqli->errno) {
die('Unable to insert data:<br /> '.$this->mySqli->errno .' : '. $this->mySqli->error);
}
// generate the bind_arr to be used to bind_param
$this->bindPar($args);
// bind parameters for statement execution
call_user_func_array(array($stmt, 'bind_param'), $this->returnRef($this->bind_arr));
// execute the statement (return error if execution failed)
if (!$stmt->execute()) {
die('Error : ('. $this->mySqli->errno .') '. $this->mySqli->error);
}
// close statement
$stmt->close();
$this->reset();
}
// Function to update data
public function update($args)
{
// set type for use in query generator
$this->type = 'update';
// set table and configure prefix, if available
$this->setTable($args['table']);
// generate update query
$query = $this->genQuery($args);
// prepare query statement
$stmt = $this->mySqli->prepare($query);
if ($this->mySqli->errno) {
die('Unable to update data:<br /> '.$this->mySqli->errno .' : '. $this->mySqli->error);
}
// generate the bind_arr to be used to bind_param
$this->bindPar($args);
// bind parameters for statement execution
call_user_func_array(array($stmt, 'bind_param'), $this->returnRef($this->bind_arr));
// execute the statement (return error if execution failed)
if (!$stmt->execute()) {
die('Error : ('. $this->mySqli->errno .') '. $this->mySqli->error);
}
// close statement
$stmt->close();
$this->reset();
}
// Function to select data from the table
public function select($args)
{
// set type for use in query generator
$this->type = 'select';
// set table and configure prefix, if available
$this->setTable($args['table']);
// generate select query
$query = $this->genQuery($args);
// prepare query statement
$stmt = $this->mySqli->prepare($query);
if ($this->mySqli->errno) {
die('Unable to select data:<br /> '.$this->mySqli->errno .' : '. $this->mySqli->error);
}
// generate the bind_arr to be used to bind_param
$this->bindPar($args);
// bind parameters for statement execution if bind_arr is not empty
// bind_arr will be empty if you're trying to retrieve all the values in a row
if (!empty($this->bind_arr)) {
call_user_func_array(array($stmt, 'bind_param'), $this->returnRef($this->bind_arr));
}
// execute the statement (return error if execution failed)
if (!$stmt->execute()) {
die('Error : ('. $this->mySqli->errno .') '. $this->mySqli->error);
}
// if you've manually defined the data that you need to retrieve, generate result set
if (is_array($args['data'])) {
// generate the result set as an array to be
$this->genResultArr($args);
call_user_func_array(array($stmt, 'bind_result'), $this->returnRef($this->result_arr));
if ($this->mySqli->errno) {
die('Unable to select data:<br /> '.$this->mySqli->errno .' : '. $this->mySqli->error);
}
$this->fetch = array(); // making sure the array is empty
$i=0;
while ($stmt->fetch()) {
$this->multi_result_arr = array_combine($args['data'], $this->result_arr);
// Get the values and append it to fetch array $i denotes the row number
foreach ($this->multi_result_arr as $arr => $val) {
$this->fetch[$i][$arr] = $val;
}
$i++;
}
// if there's just one row of results retrieved, just reset the array
// so that you can directly call the value by $fetch['column_name']
if (count($this->fetch) == 1) {
$this->fetch = $this->fetch[0];
}
} elseif ($args['data'] == '*') {
// Generate a result metadata variable to be used to fetch column names in the array
$res = $stmt->result_metadata();
// Copy the column tables as an array into the fields variable to generate bind_result later
$fields = $res->fetch_fields();
// Field count for iteration
$count = $res->field_count;
// row count to chose type of array (multidimensional if more than one row found)
$row = $res->num_rows;
for ($i = 0; $i < $count; $i++) {
$this->multi_result_arr[$i] = $this->result_arr[$i] = $fields[$i]->name;
}
call_user_func_array(array($stmt, 'bind_result'), $this->returnRef($this->result_arr));
if ($this->mySqli->errno) {
die('Unable to select data:<br /> '.$this->mySqli->errno .' : '. $this->mySqli->error);
}
$this->fetch = array(); // making sure the array is empty
$i=0;
// create a fetch array that combines the required db column names with the retrieved results
while ($stmt->fetch()) {
$this->fetch[$i] = array_combine($this->multi_result_arr, $this->result_arr);
$i++;
}
// if there's just one row of results retrieved, just reset the array
// so that you can directly call the value by $fetch['column_name']
if (count($this->fetch) == 1) {
$this->fetch = $this->fetch[0];
}
}
$stmt->close();
// reset values for next query
$this->reset();
return $this->fetch;
}
// Function to delete values from a Database
public function delete($args)
{
// delete function must not be used to truncate tables
if (!isset($args['where'])) {
echo 'If you really want to delete all the contents, use truncate() method.';
return;
} elseif (isset($args['data'])) { // if you're just deleting fields, use update statement instead
echo 'If you want to delete certain column in a row, use the update statement instead';
}
// set type for use in query generator
$this->type = 'delete';
// set table and configure prefix, if available
$this->setTable($args['table']);
// generate delete query
$query = $this->genQuery($args);
// prepare query statement
$stmt = $this->mySqli->prepare($query);
if ($this->mySqli->errno) {
die('Unable to delete data:<br /> '.$this->mySqli->errno .' : '. $this->mySqli->error);
}
// generate the bind_arr to be used to bind_param
$this->bindPar($args);
// bind parameters for statement execution
call_user_func_array(array($stmt, 'bind_param'), $this->returnRef($this->bind_arr));
// execute the statement (return error if execution failed)
if (!$stmt->execute()) {
die('Error : ('. $this->mySqli->errno .') '. $this->mySqli->error);
}
// close statement
$stmt->close();
$this->reset();
}
// Deletes all the data and resets the table. Please use with caution
public function truncate($table)
{
// set table and configure prefix, if available
$this->setTable($table);
// query to truncate the entire table
// NOTE: This is irreversible
$query = 'TRUNCATE ' . $this->table;
// prepare query statement
$stmt = $this->mySqli->prepare($query);
// execute the statement (return error if execution failed)
if (!$stmt->execute()) {
die('Error : ('. $this->mySqli->errno .') '. $this->mySqli->error);
}
// close statement
$stmt->close();
$this->reset();
}
// prefix table name if db prefix is setup
private function setTable($table)
{
$this->table = $this->prefix . $table;
}
// Generates the mysqli query statement
private function genQuery($args)
{
switch ($this->type) {
case 'insert':
$query = "INSERT INTO `" . $this->table .'` ';
$query .= $this->genInsert($args['data']);
$query .= " VALUES " . $this->genInsVal($args['data']);
break;
case 'select':
$query = "SELECT " . $this->genSelect($args) . " FROM " . $this->table;
if (isset($args['where'])) {
$query .= $this->genWhere($args);
}
if (isset($args['order'])) {
$query .= $this->genOrder($args);
}
if (isset($args['group'])) {
$query .= $this->genGroup($args);
}
if (isset($args['limit'])) {
$query .= " LIMIT " . $args['limit'];
}
break;
case 'update':
$query = "UPDATE `" . $this->table . "` SET";
$query .= $this->genUpdate($args['data']);
if (isset($args['where'])) {
$query .= $this->genWhere($args);
}
break;
case 'delete':
$query ="DELETE FROM `" . $this->table . '` ';
if (isset($args['where'])) {
$query .= $this->genWhere($args);
}
break;
default:
$query ='';
break;
}
return $query;
}
// Generate insert query
private function genInsert($data)
{
$ins_query = '( ';
foreach ($data as $key => $value) {
if ($data[$key] == end($data)) {
$ins_query .= ' ' . $key . ' ';
continue;
}
$ins_query .= ' ' . $key . ', ';
}
$ins_query .= ')';
return $ins_query;
}
// generate the value part of the insert query to be used as a prepared statement
// Eg (? , ?, ?)
private function genInsVal($data)
{
$ins_value = '(';
foreach ($data as $k => $v) {
if ($data[$k] == end($data)) {
$ins_value .= '?';
continue;
}
$ins_value .= '?, ';
}
$ins_value .=')';
return $ins_value;
}
// generate update query
private function genUpdate($data)
{
$update_query = '';
foreach ($data as $key => $value) {
$update_query .= ' ' .$key .' =?,' ;
}
$update_query = rtrim($update_query, ',');
return $update_query;
}
// Generate select query
private function genSelect($sel_array)
{
$sel_string = '';
if (is_array($sel_array['data'])) {
foreach ($sel_array['data'] as $value) {
$sel_string .= $value . ', ';
}
$sel_string = rtrim($sel_string, ', ');
} elseif ($sel_array['data'] == '*') {
$sel_string = '*';
}
return $sel_string;
}
// Generate where condition for query generator (genQuery)
private function genWhere($where_arr)
{
$where_query = ' WHERE';
if (isset($where_arr['whereOp'])) {
$opr = $where_arr['whereOp'];
} else {
$opr = '=';
}
// Check if the given array is associative
if ($this->isAssoc($where_arr['where'])) {
foreach ($where_arr['where'] as $key => $value) {
$where_query .= ' ' . $key . $opr . '? ';
}
} else {
foreach ($where_arr['where'] as $value) {
$where_query .= ' ' . $value . $opr . '? ';
}
}
if (isset($where_arr['and']) && !empty($where_arr['and'])) {
$where_query .= $this->andWhere($where_arr);
}
if (isset($where_arr['or']) && !empty($where_arr['or'])) {
$where_query .= $this->orWhere($where_arr);
}
return $where_query;
}
// Generate and condition for query generator (genQuery)
private function andWhere($and_arr)
{
$and_query = ' AND';
if (isset($and_arr['andOp'])) {
$opr = $and_arr['andOp'];
} else {
$opr = '=';
}
foreach ($and_arr['and'] as $key => $value) {
$and_query .= ' ' . $key . $opr . '? ';
}
return $and_query;
}
// Generate OR condition for query generator (genQuery)
private function orWhere($or_arr)
{
$or_query = ' OR';
if (isset($or_arr['orOp'])) {
$opr = $or_arr['orOp'];
} else {
$opr = '=';
}
foreach ($or_arr['or'] as $key => $value) {
$or_query .= ' ' . $key . $opr . '? ';
}
return $or_query;
}
// Generate order by condition
private function genOrder($args)
{
$order_query = ' ORDER BY ' . $args['order'] .' ';
if (isset($args['oType']) && (($args['oType'] == 'ASC') || ($args['oType'] == 'DESC'))) {
$order_query .= $args['oType'];
}
return $order_query;
}
// Generate group by conditions
private function genGroup($args)
{
$grp_query = ' GROUP BY ' . $args['group'] .' ';
if (isset($args['gType']) && (($args['gType'] == 'ASC') || ($args['gType'] == 'DESC'))) {
$grp_query .= $args['gType'];
}
return $grp_query;
}
// Check the input array and forward it to bindParam for further processing
private function bindPar($args)
{
if (isset($args['data']) && $this->type != 'select') {
$this->bindParam($args['data']);
}
if (isset($args['where'])) {
$this->bindParam($args['where']);
}
if (isset($args['and'])) {
$this->bindParam($args['and']);
}
if (isset($args['or'])) {
$this->bindParam($args['or']);
}
if ($this->type == 'select' && !isset($args['where']) && !isset($args['and']) && !isset($args['or'])) {
unset($this->bind_arr);
}
}
// Organize generation of bind_arr in the below method based on $data
private function bindParam($data)
{
if (is_array($data)) {
if ($this->isAssoc($data)) {
foreach ($data as $key => $value) {
$this->bindValues($value);
}
} else {
foreach ($data as $value) {
$this->bindValues($value);
}
}
} else {
$this->bindValues($data);
}
}
// Detect type and push values inside the bind_arr to be submitted as bind parameters
private function bindValues($value)
{
$this->bind_arr[0] .= $this->detectType($value);
array_push($this->bind_arr, $value);
}
// Detect value type to generate bind parameter
protected function detectType($value)
{
switch (gettype($value)) {
case 'string':
return 's';
break;
case 'integer':
return 'i';
break;
case 'blob':
return 'b';
break;
case 'double':
return 'd';
break;
}
return '';
}
protected function returnRef(array &$arr)
{
//Referenced data array is required by mysqli since PHP 5.3+
if (strnatcmp(phpversion(), '5.3') >= 0) {
$refs = array();
foreach ($arr as $key => $value) {
$refs[$key] = & $arr[$key];
}
return $refs;
}
return $arr;
}
// Generate a result array with selected values from database for given data
private function genResultArr($args)
{
$this->result_arr = array();
foreach ($args['data'] as $value) {
array_push($this->result_arr, $value);
}
}
// Check if an array is associative
private function isAssoc(array $array)
{
$keys = array_keys($array);
return array_keys($keys) !== $keys;
}
// Reset to default values after an operation
private function reset()
{
$this->type = null;
$this->table = '';
$this->bind_arr = array('');
$this->result_arr = array();
$this->multi_result_arr = array();
}
// Disconnects the active connection
private function disconnect()
{
if (isset($this->mySqli)) {
$this->mySqli->close();
unset($this->mySqli);
}
}
// Making sure we don't have open connections
public function __destruct()
{
if (isset($this->mySqli)) {
// if there's an active connection, close it
if ($this->mySqli->ping()) {
$this->disconnect();
}
}
}
}
Here's the optional config.php file that goes with it:
<?php
/**
* This is an example configuration file. Even though the file is optional,
* the constants defined below are required for the wrapper class to work.
*/
/** MySQL database name */
define('DB_NAME', 'DATABASE NAME HERE');
/** MySQL database username */
define('DB_USER', 'DATABASE USER NAME HERE');
/** MySQL database password */
define('DB_PASSWORD', 'DATABASE PASSWORD HERE');
/** MySQL hostname */
define('DB_HOST', 'localhost');
/** [Optional] MySQL database prefix */
define('DB_PREFIX', '');
How it works:
Available functions
insert();
update();
select();
delete();
truncate();
USAGE
Please note
You'll either need to define the required configs in the class file or include the config.php file along with this code in order to make a successful db connection as shown below:
<?php
$db = new MySQLiDB;
?>
INSERT
To insert content, create an array with table and the data to be inserted
and then call the insert function
'table' => 'table name';
'data' is an array with the 'keys' as field names and 'values',
as the values that need to be entered in the fields
<?php
$args = [
'table' => 't1',
'data' => [
'f11' => '123',
'f12' => 'hello'
]
];
// Calling the below function will submit f11 = 123, f12 ='hello' etc into the table 't1'
$db->insert($args);
?>
UPDATE
To update content, create an array with table and the data to be inserted
and then call the update function
'table' => 'table name';
'data' is an array with the keys as field names and values as the values that need to be updated in the database fields
if you need to update specific values, you can specify where, and and or properties
<?php
$args = [
'table' => 't1',
'data' => [
'f11' => '123',
'f14' => '456',
],
// [Optional] However, if you do not define a where condition, fields in every row will
// be overwritten with the arg contents.
'where' => [
'id' => 10
],
// [Optional] where operator is '=' by default, you only need to specify
// this if you would like to use a different operator.
'whereOp' => '=',
// [Optional] 'and' condition, works the same way as where condition
// andOp is '=' by default
'and' => [
],
'andOp' => '=',
// [Optional] 'or' condition, works the same way as where condition
// orOp is '=' by default
'or' => [
],
'orOp' => '',
];
?>
Calling the below function would update value of 'f1' to 'test' where id = 10 and 'f2' = 'foo'
<?php
$args = [
'table' => 't1',
'data' => [
'f1' => 'test',
],
'where' => [
'id' => 10,
],
'and' => [
'f2' => 'foo',
]
];
$db->update($args);
?>
You can even set value to empty if you want to delete certain value from a row.
The below code will set the value of field one to '' and field two to 'foo' where id = 1
(f1 and f2 are the fields)
<?php
$args = [
'table' => 'test',
'data' => [
'f1' => '',
'f2' => 'foo'
],
'where' => [
'id' => 1
]
];
$db->update($args);
?>
SELECT
To select content, create an array with table and the data to be selected
and then call the select function
'table' => 'table name';
'data' is an array with the keys as field names that need to be retrieved
if you need to select everything, you can use 'data' => '*'
<?php
// available options
$args = [
'table' => 'table_name',
// data can either be an array with values defining field names that need to be retrieved, or just 'data' => '*'
'data' => [
'field1', 'field2'
],
'where' => [
'field3' => 'foo'
],
'whereOp' => '!=', // (only need to be defined if its anything other than =)
'and' => [
],
'andOp' => '', // (only need to be defined if its anything other than =)
'or' => [
],
'orOp' => '', // (only need to be defined if its anything other than =)
'limit' => 2, // this will limit the rows returned
'order' => '', // order by
'oType' => '', // ASC or DESC
'group' => '', // group by
'gType' => '', // ASC or DESC
];
// Example
$args = [
'table' => 't1',
'data' => [
'f1'
],
'where' => [
'f1' => 'hi',
]
];
$fetch = $db->select($args);
foreach ($fetch as $res) {
// below code will dump all the rows. If you want a specific output,
// check the echo statement below
var_dump($res);
}
// Or you can choose to display them row-wise
echo $fetch[0]['f2'];
?>
In the above code, we're displaying the value of field f2 from the first row of returned result set
If there's only one row in the result set, you can access it directly as shown in the below echo statement
DELETE
This is used to delete a row. If you only want to remove a single field from a row, use the update statement instead and set the value of that field to '' (empty string)
<?php
$args = [
'table' => 't1',
'where' => [
],
'or' => [
],
];
$db->delete($args);
?>
Example
<?php
// Below statements should delete the row where id = 10, in table 'test'
$args = [
'table' => 'test',
'where' => [
'id' => 10
]
];
$db->delete($args);
?>
TRUNCATE
If you want to delete an entire table, you will need to use the truncate function instead.
<?php
$db->truncate($table_name);
?>
Improvements I'm unsure about:
Minimize connection: Should I remove the disconnect option from my __destruct() and then make a check in the connect function to see if there's an active connection, and only make the connection if there's none at the moment? Would it be any better performance-wise or in terms of code readability?
<?php
// Example
private function connect()
{
if (!$this->mySqli->ping()) {
// make the connection
}
}
?>
Unnecessary comments: I've repeated the comments in multiple cases where there are same functions. I personally believe these comments are unnecessary, but I want to know if this is the recommended method
This is not currently used in production. I'm still learning the art of PHP and I wrote this piece of code as a learning experience. As of now the code works as intended. However, what I would like to know is whether the code can be used in a production environment.
Link to repo: Github
Answer: This is not so much a wrapper. Rather, call it a wannabe Query Builder.
I don't know the reason, but many people are constantly trying to write something like this. And I don't understand why.
Okay, for the DML queries it makes sense - it always gives a cool feeling when you automate a routine task using a template. So, the insert query asks for the method insert(). But select?
Do you really want to write
$args = [
'table' => 't1',
'data' => [
'f1'
],
'where' => [
'f1' => 'hi',
]
];
$fetch = $db->select($args);
instead of just
$fetch = $db->select("SELECT f1 FROM t1 WHERE f1='hi'");
Really, really? No kidding? But why? Do you think it looks cool? Or does it make you write less code? Or help another programmer understand it better?
Do you really want to split that neat and universal SQL into an array with gibberish keys? WTF is "data"? Okay, I am working on a project with you. Why should I puzzle myself with such questions? Why can't I use the familiar SQL that reads as plain English?
And where are JOINs? And what will your $args array look like when you add them?
Come on, you've gone too far. I understand the logic that led you here but it's time to turn back. Leave SELECT queries alone and let them be written as plain SQL with parameters.
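The point about plain parameterized SQL can be shown in a few lines; the sketch below uses Python's built-in sqlite3 only because it makes a self-contained demo, but the principle is exactly the same with mysqli's ? placeholders:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (f1 TEXT)")
conn.execute("INSERT INTO t1 (f1) VALUES (?)", ("hi",))

# The query stays readable as plain SQL; user input travels separately
rows = conn.execute("SELECT f1 FROM t1 WHERE f1 = ?", ("hi",)).fetchall()
print(rows)  # [('hi',)]
```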
Besides, such a class should be ten times shorter. In recent years PHP has improved a lot, making most of the tricks you are using obsolete. Not to mention rather silly code blocks like
if ($this->isAssoc($data)) {
foreach ($data as $key => $value) {
$this->bindValues($value);
}
} else {
foreach ($data as $value) {
$this->bindValues($value);
}
}
here, the condition is useless, as both loops are doing literally the same thing.
This is not currently used in production.
This is actually the main problem. You are writing code to handle some imaginary queries. That's the worst idea a developer could have. One should start from writing raw API calls with raw SQL, get the idea of what queries are going to be used, and only then start to think about automation.
Here is the similar review I made a while ago. Please check it out, it explains a lot of pitfalls in your code.
Besides, you may refer to my article on common database wrapper mistakes, as your code suffers from most of them, namely:
Flawed error reporting.
Type-hinted binding.
Select function
SQL injection in Insert and Update methods
Statefulness (what if you would need to run a nested query)
Protected mysqli instance
In a nutshell, a good mysqli wrapper should be just a handful of lines (taken from the linked above answer):
class DB
{
public function __construct($host, $user, $pass, $db, $charset)
{
mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT);
$this->mysqli = new mysqli($host, $user, $pass, $db);
$this->mysqli->set_charset($charset);
}
public function query($sql, $params, $types = "")
{
$types = $types ?: str_repeat("s", count($params));
$stmt = $this->mysqli->prepare($sql);
$stmt->bind_param($types, ...$params);
$stmt->execute();
return $stmt;
}
}
while anything else should be added after strong consideration only. | {
"domain": "codereview.stackexchange",
"id": 29167,
"tags": "php, object-oriented, mysqli, wrapper"
} |
Will a reaction happen slower if it has a higher enthalpy change than its counterparts? | Question:
Suppose both reactions are conducted in identical conditions, they have the same activation energy, and B has zero enthalpy change. Which one will happen faster? My experiment showed that reaction A happened slower than B, but I have no idea why. Is it because of the difference in enthalpy change, or other factors?
Thank you!
Answer: In Diagram A, one puts in X amount of energy to get "over the hump", and, as the reaction completes, gets out somewhat less than X. That means one has to continually feed in energy to keep the reaction going... some energy is consumed.
In Diagram B, it appears that as much energy comes out of the reaction as was put into it, implying that energy can be recovered for the reaction to continue.
If more energy is released than added, then the reaction is not just self-sustaining, but can proceed very rapidly, as more and more heat accumulates. An example would be thermite: $\ce{Fe2O3 + 2Al -> Al2O3 + 2Fe}$.
This is not exactly as simple as stated, though. For example, in Diagram B, perhaps the energy source is heat, but the reaction emits light (chemiluminescence), allowed to escape, so more heat must be added -- some heat is being lost as light. | {
"domain": "chemistry.stackexchange",
"id": 18061,
"tags": "reaction-mechanism, enthalpy"
} |
Generator iterator with push back function | Question: In a compiler project for an LL(1/2) grammar, I have had a need for a generator iterator with a push back function. I have been surprised not to find any perfectly applicable solution (clean, simple, obvious, standard, etc.). On the other hand, I was surprised at how easy it was to create exactly what I wanted after I decided to invest a little time.
Here is what I came up with:
class Back_pushable_iterator:
    """Class whose constructor takes an iterator as its only parameter, and
    returns an iterator that behaves in the same way, with added push back
    functionality.

    The idea is to be able to push back elements that need to be retrieved once
    more with the iterator semantics. This is particularly useful to implement
    LL(k) parsers that need k tokens of lookahead. Lookahead or push back is
    really a matter of perspective. The pushing back strategy allows a clean
    parser implementation based on recursive parser functions.

    The invoker of this class takes care of storing the elements that should be
    pushed back. A consequence of this is that any elements can be "pushed
    back", even elements that have never been retrieved from the iterator.
    The elements that are pushed back are then retrieved through the iterator
    interface in a LIFO-manner (as should logically be expected).

    This class works for any iterator but is especially meaningful for a
    generator iterator, which offers no obvious push back ability.

    In the LL(k) case mentioned above, the tokenizer can be implemented by a
    standard generator function (clean and simple), that is completed by this
    class for the needs of the actual parser.
    """

    def __init__(self, iterator):
        self.iterator = iterator
        self.pushed_back = []

    def __iter__(self):
        return self

    def __next__(self):
        if self.pushed_back:
            return self.pushed_back.pop()
        else:
            return next(self.iterator)

    def push_back(self, element):
        self.pushed_back.append(element)


def main():
    it = Back_pushable_iterator(x for x in range(10))

    x = next(it)  # 0
    print(x)
    it.push_back(x)
    x = next(it)  # 0
    print(x)
    x = next(it)  # 1
    print(x)
    x = next(it)  # 2
    y = next(it)  # 3
    print(x)
    print(y)
    it.push_back(y)
    it.push_back(x)
    x = next(it)  # 2
    y = next(it)  # 3
    print(x)
    print(y)
    for x in it:
        print(x)  # 4-9
    it.push_back(x)
    y = next(it)  # 9
    print(x)

if __name__ == "__main__":
    main()
Any feedback is appreciated. A better name? ;-)
Note: I have already posted this code as a very late answer to https://stackoverflow.com/questions/2425270/how-to-look-ahead-one-element-in-a-python-generator, but got no feedback.
Answer: This is to expand on Austin Hastings comment.
According to Python Standard Library documentation,
The intention of the protocol is that once an iterator’s next() method raises StopIteration, it will continue to do so on subsequent calls. Implementations that do not obey this property are deemed broken.
It means that once the underlying iterator raises StopIteration, your object shall not accept any more push backs.
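To make that caveat concrete, here is a minimal sketch demonstrating the protocol violation (the class body is copied from the question, renamed only for brevity):

```python
class BackPushableIterator:
    # body copied from the question's class, renamed for brevity
    def __init__(self, iterator):
        self.iterator = iterator
        self.pushed_back = []

    def __iter__(self):
        return self

    def __next__(self):
        if self.pushed_back:
            return self.pushed_back.pop()
        return next(self.iterator)

    def push_back(self, element):
        self.pushed_back.append(element)


it = BackPushableIterator(iter(range(2)))
assert list(it) == [0, 1]  # the underlying iterator has now raised StopIteration

# Pushing back after exhaustion makes next() succeed again, which is
# exactly the behavior the iterator protocol deems "broken".
it.push_back(1)
assert next(it) == 1
```

A stricter variant could remember that StopIteration has occurred and reject subsequent push backs.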
I would call the list lookahead rather than pushed_back, but it is a matter of taste.
Otherwise, LGTM. | {
"domain": "codereview.stackexchange",
"id": 35140,
"tags": "python, iterator, generator"
} |
How much of the underwater land masses within the arctic ocean is considered continental shelf? | Question: Right now I'm in the middle of writing my master thesis which is about modelling the ocean bottom pressure (obp) under areas with sea-ice coverage.
In most Oceanic General Circulation Model's (ogcm) obp is calculated prognostically, so it's basically just the sum of the sea-surface height $\zeta$.
Recent studies show an increase in $\zeta$ for regions with continental shelf within a reasonable timespan, so I was wondering how much of the arctic sea is above continental shelf? I know it is probably a lot, but are there any numbers, units of $\text{km}^{2}$?
Answer: Yool & Fasham in An Examination of the Continental shelf pump in an open ocean general circulation model Global Biogeochemical Cycles Volume 15, Issue 4, pages 831–844, divide the continental shelves into 32 named regions (see Fig. 2), and give areas for each (see table 1).
The regions that are completely in the arctic ocean are:
Russian arctic shelf 1,503,000 sq. km.
Siberian arctic shelf 1,825,000 sq. km.
North American arctic shelf 498,000 sq. km.
There are three other regions that are partially in the arctic ocean,
Iceland and Greenland shelf, North Sea shelf, and Bering Sea shelf, but looking at Fig. 2 and table 1 these seem like small contributors.
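Summing the three fully-Arctic regions from table 1 gives the rough figure quoted next (a trivial check, not from the paper):

```python
# Areas (sq. km) from Yool & Fasham, table 1, for the three regions
# lying entirely within the Arctic Ocean.
shelf_areas = {
    "Russian arctic shelf": 1_503_000,
    "Siberian arctic shelf": 1_825_000,
    "North American arctic shelf": 498_000,
}

total = sum(shelf_areas.values())
print(total)  # 3826000, i.e. roughly 4 million sq. km
```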
Roughly 4 million sq. km total. | {
"domain": "earthscience.stackexchange",
"id": 306,
"tags": "ocean, oceanography, sea-level, sea-ice"
} |
How does dsDNA become ssDNA when binding to a nitrocellulose membrane? | Question: I would like to know how double stranded DNA becomes single stranded when binding to nitrocellulose membrane in southern blots. Does it require a special reagent? Is denaturing a special property of the membrane itself?
Answer: The DNA is denatured in-gel prior to transfer using alkali conditions, most commonly a solution containing 0.5 M NaOH (buffered with 1.5 M NaCl). Following that the pH of the gel is neutralised in 0.5 M Tris-HCl, pH 7.5; 1.5 M NaCl.
I have found the Roche DIG application manual to be a well-written and reliable resource for performing Southern and Northern blots. The section on Southern blots starts at page 94 and you can modify the protocol to suit other detection systems. | {
"domain": "biology.stackexchange",
"id": 6069,
"tags": "genetics, dna"
} |
Why doesn't the fridge condense water like an air conditioner? | Question: Simple question, why doesn't the fridge condense water like an air conditioner?
I know that my ac condenses water (even in cool or heat) and don't know why the fridge doesn't. Aren't they both supposed to work the same way?
If not, then why not build ac's like fridges? I mean, they both do the same, cool some room space.
Maybe the fridge does condense water and I just don't see it, if this is the case, then where does the water go?
Answer: Short version: it does.
The first difference is that the fridge has a small, enclosed volume of air that it cools. There is a pretty limited amount of water vapour in the fridge to be condensed out at any time. Once it is all gone, there is no more until the fridge door is opened to let some new air in (although only some of the air will be replaced each time). The volume of air in a fridge is unlikely to contain more than a gram or two of water (a few cubic centimeters at most).
The AC on the other hand has an effectively unlimited source of water vapour.
Water that condenses out within the fridge can go on to the inner surfaces of the fridge, but can also go on to items in the fridge (and thus get taken out when items are removed). Some cardboard containers in my fridge are noticeably damp when I remove them. Milk containers usually have some water condensed on them.
I've no idea whether vegetables or fruit in a fridge can account for absorbing some of the water vapour (equally they might be a source of water vapour) - they are a possible confounding factor.
Basically, the rate at which new water vapour is added to the fridge via the door opening is pretty low, and removing items from the fridge is likely to be a net removal of condensed water from the fridge, so the reservoir of water vapour condensed into liquid within the fridge at any time is pretty small, and when spread over all the potential surfaces and objects within the fridge, is not that noticeable.
Plus every fridge I've ever had has also had a small drainage hole at the back of the compartment (the back wall being the coldest), although I've no idea where it drains off to. | {
"domain": "physics.stackexchange",
"id": 28770,
"tags": "water, cooling, condensation"
} |
Why is it so hard to accelerate macroscopic objects? | Question: It seems all we're capable of accelerating currently are atomic particles. Why can't we, say, accelerate a clock to relativistic speeds?
Answer: The obvious answer is their mass, accelerating a subatomic particle to relativistic velocity takes many orders of magnitude less energy than accelerating a macroscopic particle.
More subtly, we can only accelerate subatomic particles that we can manipulate with electromagnetic fields; nobody has proposed viable experiments to study relativistic neutron or neutrino reactions because we really have no way to manipulate them. What this means is that we need relatively light, charged particles that we can accelerate to relativistic velocity in a reasonable space with a reasonable amount of energy.
Which brings us to our next point. Modern accelerators use what are called "RF cavities" to accelerate charged particles like protons and electrons; these are effectively giant pulsed capacitors with the particle flying in the space between the plates. We are governed by material science and the permittivity of the vacuum and have a limit to the maximum voltage we can put across the plates, and hence the maximum energy we can impart to a particle per unit length of the cavity. For modern accelerators this is on the order of 1 MeV per cm. 1 MeV on an electron goes a long way (which is why your TV works). 1 MeV on a baseball barely does anything. If the LHC seems big, an accelerator to collide two baseballs with the same velocity would be wider than the solar system (using modern technology).
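To put numbers on the baseball claim, here is a back-of-the-envelope sketch (gamma = 2 is an arbitrary stand-in for "relativistic", and the 1 MeV/cm gradient is taken from the paragraph above):

```python
# Back-of-the-envelope: accelerator length needed to make a baseball
# "relativistic" at a ~1 MeV/cm gradient. gamma = 2 is chosen arbitrarily.
c = 2.998e8          # speed of light, m/s
eV = 1.602e-19       # joules per electronvolt
m_baseball = 0.145   # kg (regulation baseball)

rest_energy_eV = m_baseball * c**2 / eV       # ~8e34 eV
kinetic_eV = (2 - 1) * rest_energy_eV         # KE = (gamma - 1) * m * c^2
gradient_eV_per_m = 1e6 * 100                 # 1 MeV/cm = 100 MeV/m

length_m = kinetic_eV / gradient_eV_per_m
print(f"{length_m:.1e} m")
```

That comes out around 10^27 m of RF cavities; for comparison, Neptune's orbit is only about 10^13 m across.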
More realistically though, we accelerate particles so we can study the physics during collisions of two particles. When we collide electrons and positrons, for instance, nearly 100% of the energy carried by the particles is deposited into the reaction if they collide head on; it's this fact that lets us see the creation of new and exotic particles following the collision (and why LEP began seeing Z bosons as soon as they crossed the 180GeV threshold). When we move to the LHC for example, we are colliding two protons, which, roughly speaking, are sacks consisting of 3 quarks, and the momentum (and therefore the energy) of the proton is more or less evenly distributed among the three quarks. So the energy available to us in the collision is actually about 6 times less than in an equivalent electron-positron collider. Imagine doing this with a baseball, which is composed of trillions of trillions of particles. The energy that each particle carries is a tiny fraction of the total energy of the baseball, therefore every single collision event only has a tiny fraction of the total energy of the system, and in the end we get no useful reaction out of a collision that could not have just as easily been accomplished by putting two televisions face to face. | {
"domain": "physics.stackexchange",
"id": 35,
"tags": "general-relativity, accelerator-physics"
} |
A theoretical object that doubles in length pushing an object through space | Question: Suppose you had a theoretical object that could double itself along a chain: you start off with one, then it doubles to two, four, eight, etc. Eventually the increase in length would be so large that the end would move a huge distance, even though each individual component only moves relative to the piece it is connected to.
Could you use this object to push against a solid object and a projectile to accelerate past the speed of light?
Answer: You are actually close to something that is very important in physics, and your intuition is mostly right.
A rigid pole that doubles its length every so often does not exist. However, in most modern theories of cosmology the universe itself is expanding. This means that a given stretch of vacuum (say 1 meter across) will grow exponentially with time: it will double in length over some (very long) timescale, then double again in the same time, then again and again. This repeated doubling is very much like your proposed "doubling pole".
Now, a consequence of this model of universal expansion is that there are some places in the universe we can never reach. Imagine some galaxy that is "far far away". This galaxy is so far away, in fact, that even though the distance between us and it takes a very long time to double, the rate at which extra space is produced by this doubling is faster than the speed of light (which is the maximum speed we can travel).
To understand why, an analogy might help. Imagine that you earn 3 dollars per day (this represents you travelling at light speed, $3\times 10^8 m/s$); if you have a loan that grows by 1% per day then (for a sufficiently large debt, in this case over 300 dollars) the amount of money you owe will grow faster than you can possibly pay it off, because of the power of compound interest. The 300 dollars is the distance you initially had to cover; the 1% interest is the expansion of the universe.
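The same arithmetic in a few lines (numbers taken from the analogy above):

```python
# The loan analogy in numbers: pay 3 dollars/day against 1%/day interest.
# Break-even debt is 300 dollars (0.01 * 300 = 3): above it the interest
# outruns the payments, below it the debt shrinks.
def debt_after(initial, days, payment=3.0, rate=0.01):
    debt = initial
    for _ in range(days):
        debt = debt * (1 + rate) - payment
    return debt

print(debt_after(350, 100))  # > 350: the debt grows despite the payments
print(debt_after(250, 100))  # < 250: the debt shrinks
```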
So there are galaxies that are, in a sense, moving away from us faster than the speed of light by roughly the mechanism you describe. In fact, everything beyond the "Hubble Radius" is effectively receding from us faster than light speed because of this "compound interest".
For more material see:
https://en.wikipedia.org/wiki/Expansion_of_the_universe
and:
https://en.wikipedia.org/wiki/Hubble_volume | {
"domain": "physics.stackexchange",
"id": 63324,
"tags": "speed-of-light"
} |
CNN: training accuracy vs. validation accuracy | Question: I just finished training two models, while the one is pretrained and the other trained from scratch and created two diagrams afterward with their data, but as I am very new to machine learning, I don't get what they state.
Why is the training accuracy so low? Did I use too little data? I had about 7200 pictures for training and 800 for validation!
What does it mean that the validation accuracy of the pretrained algorithm is so much higher than that of the other one? Does it mean the pretrained one is two times better than the one trained from scratch?
Answer:
Why is the training accuracy so low?
This is because your model is underfit. Few of the reasons for this could be,
you might be using a small learning rate.
your model architecture is simple (small) and not big enough to recognize patterns in the data. Try adding layers.
try removing regularization if any.
As per the best of my knowledge and assumptions, I think following could be some of the reasons for validation accuracy to be higher than training accuracy. You might consider investigating in these areas.
The dataset domain might not be consistent. This means that there might be different types of images present in your dataset. For some images (or one type of images) the model is able to learn correctly (hence the ~50% accuracy on the train set), while for the rest of the images the model gets confused, i.e. it is difficult for the model to recognize the other 50% of images. And there might be a possibility that the particular type of images that are easier for the model to recognize is the one present in the validation set. You should ensure that the domain of the train and validation sets is the same.
The dataset might not be properly split. This means the domain might be consistent but the dataset has imbalanced classes: the train set might contain some classes having many instances (majority classes) and some classes having very few instances (minority classes). Generally, the model has a hard time recognizing these minority classes, hence the lower train accuracy. And perhaps the validation set contains only majority classes, which are very easy for the model to recognize.
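A quick way to check for the split problems described above is simply to count labels per split; the label lists here are made up for illustration:

```python
from collections import Counter

# Made-up label lists; substitute the labels of your own train/validation splits.
train_labels = ["cat"] * 3000 + ["dog"] * 3000 + ["bird"] * 1200
val_labels = ["cat"] * 400 + ["dog"] * 400

print(Counter(train_labels))  # bird is a minority class in training
print(Counter(val_labels))    # bird is absent from validation entirely

missing_in_val = set(train_labels) - set(val_labels)
print(missing_in_val)  # {'bird'}
```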
What does it mean, that the validation accuracy of the pretrained algorith is so much higher as the other one? Does it mean the pretrained is two times better then the one trained from scratch?
Yes, it means that, given the same 800 unseen images, the pretrained model's predictions are two times better than those of the one trained from scratch.
Edited as per the suggestion from Nikos M. | {
"domain": "datascience.stackexchange",
"id": 9664,
"tags": "python, tensorflow, pytorch"
} |
How to write an algorithm which doubles stack in order to accommodate for new members where push is in constant time in the worst case? | Question:
We're given an algorithm which uses a stack to store data (the stack is implemented via an array). When the stack is full the algorithm does the following:
Creates a new array double the size of the original one.
Copies all elements of the old array to the new array, preserving the order.
Adjust the algorithm so that the insertion of new element will be in constant time in the worst case.
The limitation here is we must use arrays as stack implementation.
The worst case is when, for example, the old stack of size $n$ is full and now we need to insert a new element. So we create a new stack $S_2$ of size $2n$ and we want to insert the new element at $S_2[n+1]$.
I'm stuck with the constant time requirement, I'd appreciate any hints to the right direction.
NB: I'm aware that the title is a bit general, I couldn't think of anything more specific, if you know under which category of algorithms it falls under please let me know and I'll edit the heading.
Answer: I don't think that it is possible to push an element to a full array-based stack in worst case $\mathcal{O}(1)$ time. However, you can rest assured that each push runs in constant amortized time whenever you multiply the length of the full array by a factor of $q > 1$ (like you do; $q = 2$ in your case). This is why:
Suppose the initial array capacity is $m$. Next we choose $q > 1$ such that $\lfloor qm \rfloor > m$, or namely, $q$ must be sufficiently large in order to trigger an array expansion.
Suppose the total accumulated work of adding $n$ elements to the stack is
$$W = m + mq + mq^2 + \dots + \overbrace{mq^k}^n$$.
We require $k$ to be the smallest integer such that $mq^k \geq n$, which leads us to the following inequalities:
$$
\begin{aligned}
mq^k &\geq n \\
q^k &\geq \frac{n}{m} \\
\log_q q^k &\geq \log_q \Bigg( \frac{n}{m} \Bigg) \\
k &\geq \log_q n - \log_q m.
\end{aligned}
$$
Since $k$ is required to be the smallest integer satisfying the above inequality, we can set $k = \lceil \log_q n - \log_q m \rceil$. Also,
$$
\begin{align}
qW = mq + mq^2 + \dots + mq^{k + 1} &\Rightarrow W - qW = m(1 - q^{k+1}) \\
&\Rightarrow W = m\frac{1 - q^{k+1}}{1-q}.
\end{align}
$$
Since $k = \lceil \log_q n - \log_q m \rceil$, we obtain
$$
\begin{align}
W &= m\frac{1 - q^{\lceil \log_q n - \log_q m \rceil + 1}}{1 - q} \\
&\leq m\frac{1 - q \cdot q^{\lceil \log_q n \rceil}}{1 - q} \\
&\leq m \frac{1 - q \cdot q^{\log_q n + 1}}{1- q} \\
&= m \frac{1 - q^2n}{1 - q}.
\end{align}
$$
Now we have that
$$
\begin{align}
\frac{1}{n} W &\leq \frac{1}{n} \Bigg[ m \frac{1-q^2n}{1-q} \Bigg] \\
&= \frac{1}{n} \Bigg[ \frac{m}{1-q} - \frac{nmq^2}{1-q} \Bigg] \\
&= \frac{m}{(1-q)n} - \frac{mq^2}{1-q} \\
&\leq \frac{m}{1-q} - \frac{mq^2}{1-q} \\
&= \frac{m(1-q^2)}{1-q} \\
&= \frac{m(1+q)(1-q)}{1-q} \\
&= m(1+q),
\end{align}
$$
which is constant since $m$ and $q$ are fixed parameters independent of $n$. | {
"domain": "cs.stackexchange",
"id": 9167,
"tags": "algorithms, time-complexity, stacks"
} |
Using time series to predict house prices vs. multiple linear regression | Question: Re-post.
Machine Learning Courses often teach house prices prediction using multiple linear regression - when we want to predict the value of a variable based on the value of two or more other variables. Indeed the one I did on coursera and that makes sense to me.
I note that others state that a technique like time series analysis is also a suitable method to predict house prices, given a set of input variables.
I am not clear as to why this could be so, simply because there is no notion of a house being sold at regular intervals, as far as I know. That is to say, the sale date (one of the variables) can be at any time, not at reasonably regular intervals. Or is it possible because it is not just one house but many houses that can contribute to such an interval?
So, looking for a clarification as to why time series could be an approach for house price prediction. To borrow from wikipedia: A time series is a series of data points indexed (or listed or graphed) in time order. Most commonly, a time series is a sequence taken at successive equally spaced points in time. Thus it is a sequence of discrete-time data. Examples of time series are heights of ocean tides, counts of sunspots, and the daily closing value of the Dow Jones Industrial Average.
Answer: I agree, the simple version of this problem isn't really a time series problem. There are some ways to deal with irregular intervals, but houses are not all the same and we do not have a series of sale prices for any given house in most cases.
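One pragmatic middle ground is to feed time into the regression as ordinary features rather than treating the data as an indexed series; a sketch with made-up sales:

```python
from datetime import date

# Made-up, irregularly spaced sales: (sale date, price).
sales = [(date(2014, 1, 15), 200_000),
         (date(2014, 7, 3), 215_000),
         (date(2015, 2, 20), 221_000)]

EPOCH = date(2014, 1, 1)

def time_features(d):
    # linear trend in days (long-term drift) plus the month
    # (a crude stand-in for seasonality)
    return {"trend": (d - EPOCH).days, "month": d.month}

rows = [{**time_features(d), "price": p} for d, p in sales]
print(rows[0])  # {'trend': 14, 'month': 1, 'price': 200000}
```

These rows can then go into any ordinary multiple regression, with no requirement that the sale dates be evenly spaced.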
What may of course be very predictive is time. Housing prices probably have a long-term trend that's linear over the space of a few years, at least. There is probably a seasonality component, so month might be meaningful if used correctly. | {
"domain": "datascience.stackexchange",
"id": 6957,
"tags": "time-series, regression"
} |
How to determine directions of vectors of an electromagnetic wave | Question: I did an exercise which probably is quite popular,
in which you draw an electromagnetic wave and prove that it should
propagate at the speed of light $1 \over \sqrt {\mu_0\epsilon_0}$ using Faraday's law and Ampere's law.
Basically if this is the wave:
Let's say the E-field (red) is in the X direction, the B-Field (blue) is in the Y direction,
and the velocity of the wave is in the Z direction.
You take, for example for Ampere's law, a surface in the ZY plane with a length L equal to the amplitude of the wave, and a width equal to $\lambda\over 4$.
You do a similar thing with Faraday's law and you get the speed of light,
assuming you know that the E-field and B-field propagate in this manner.
I got the right answer but I wondered about this:
Let's say I only had the E-field and I know the wave propagates at the speed of light, I assume this is enough information to draw the B-field at each point.
But how will I know the direction? Both Faraday's law and Ampere's law say you need a closed loop integral and the rules I've been taught say
you go over the loop in a clockwise direction for example and take the
normal to the surface according to the right hand rule etc.
But clockwise and counter-clockwise direction don't really give me much information in this case, so how can I determine the direction of the B-field
if I only have the E-field?
Answer: If you're careful about how you define the surface, then you will get the correct direction out of Maxwell's equations. In vector calculus (and generally in math and physics), a surface has an orientation, which also specifies the orientation of the loop that forms its boundary. So you can't just pick a direction to go around the loop at random. The direction in which you go around the loop is related to the orientation of the normal vector to the surface.
Of course, you don't actually need to do a surface integral to figure this out. The electric and magnetic fields in an EM wave (at any given position and moment in time) are related by the equation
$$\mathbf{B} = \frac{1}{c}\hat{\mathbf{k}}\times\mathbf{E}$$
where $\hat{\mathbf{k}}$ is a unit vector that points in the direction of propagation of the wave. If you don't know which direction the wave is moving, you can't tell which way the magnetic field points. | {
"domain": "physics.stackexchange",
"id": 14528,
"tags": "electromagnetism, electromagnetic-radiation"
} |
Why is the frequency bandwidth of the environment important for Markovianity? | Question: In the derivation of Spontaneous Emission in two level systems in Quantum Optics (be it Wigner Weisskopf or a different approach, such as density operators to find the master equation), one makes (several) assumptions. One of the more prominent ones here is the Markov approximation, which I think is most easily described in the density operator context. Here one assumes that to find the state of the atomic system $S$ at time $t$, one does not have to integrate $\rho_s(t')$ from $0$ to $t$, but instead can just take it to be $\rho_s(t)$. The way I see it, this means that the system does not have a memory of what has happened to it before time $t$. But what I do not completely understand is what is required for this to hold.
What I can find from other sources is that it requires 'a broad range of frequencies' to be present. Why is this the case? I read one hypothetical scenario in which one would construct a photonic crystal such that the density of modes would be 0 up to $\omega_{eg}$ (the transition of the two level system), and constant afterwards. The writer then claimed that the Markov approximation would not hold, because we have a real difference between lower and higher than the emitter frequency, meaning that the emitter has a real memory of what has happened to it. I personally do not understand this line of reasoning, but I assume it to be true. Could someone explain why this is so, and perhaps also elaborate on the conditions the Markov approximation requires?
Edit: Perhaps another example to illustrate what I don't understand: another source writes that if you were to put your emitter in a bandgap, so that the only radiation it could couple to would be in the range $\omega_{eg}-\gamma,\omega_{eg}+\gamma$, spontaneous emission would also not occur. I really do not see why not; why do we need to have it couple to so many different modes of the field?
Answer: A short, mathematical answer to the question is found in the properties of Fourier transforms. The temporal response of the environment to a perturbation is given by the Fourier transform of its frequency response to the same perturbation. Therefore, if a broad range of frequencies in the bath are perturbed, the response occurs over a narrow range of times. Let me try to briefly explain how this mathematical structure arises from the physics.
Spontaneous emission can be understood from the following hand-wavy arguments. The electron in an excited state produces an electric field. This field fluctuates over time; these fluctuations drive the transitions in the electronic state. The spontaneous emission therefore arises from the effect of the electron on its environment, which in turn produces a back-action that affects the electron.
The response of the electromagnetic field to a perturbation $\mathbf{E} = E\hat{\mathbf{z}}$ (I have arbitrarily chosen polarisation in the $z$ direction) is captured by the response function:
$\Gamma(t) = \langle E(t) E(0)\rangle,$ where $E(t)$ denotes the Heisenberg-picture operator. This function is central to the theory of linear response to a small perturbation. For example, if one introduces a classical electric dipole that oscillates with a time-dependent dipole moment $d(t)$, the resulting electric field at that point is given by the convolution
$$ \delta\langle E(t) \rangle \approx -i\int_0^t\mathrm{d}s\; d(t-s) \Gamma(s).$$
The previous paragraphs serve merely to motivate the appearance of the response function $\Gamma(t)$. In a physically realistic case where we have a quantum dipole (e.g. an atom) with two states separated by a frequency $\epsilon$, the response function determines the rate of spontaneous emission, which is proportional to the quantity:
$$ \gamma(t) \sim \int_0^t\mathrm{d}s\; e^{i\epsilon s} \Gamma(s). $$
Assuming that $\Gamma(s)$ decays much more rapidly than $1/\epsilon$, for times $t\gg 1/\epsilon$ we can make the Markov approximation
$$ \gamma(t) \to \gamma = \int_0^{\infty}\mathrm{d}s\; e^{i\epsilon s} \Gamma(s), $$
so that we effectively have a constant spontaneous emission rate over time, leading to pure exponential decay.
When does $\Gamma(s)$ decay rapidly enough so that we can make the Markov approximation? The electric field $E(t)$ contains many components (normal modes) which oscillate at different frequencies. If we make this decomposition we get a Fourier representation like
$$\Gamma(t) = \int_0^\infty\mathrm{d}\omega\; e^{-i\omega t} J(\omega),$$
where the spectral density $J(\omega)$ quantifies the degree to which the field at frequency $\omega$ is perturbed by a dipole. For an atom interacting with the electromagnetic field in free space, you will normally get something like
$$J(\omega) \sim \lambda \frac{\omega^3}{\omega_c^2}e^{-\omega/\omega_c}.$$
Here $\lambda$ is a small dimensionless coupling parameter, and $\omega_c$ is a large frequency cutoff on the order of $c/a_0$, where $a_0$ is the Bohr radius. From purely dimensional arguments you can see that
$$ \Gamma(t) \sim \frac{\lambda \omega_c^2}{(\omega_c t)^4}. $$
This tells you that $\Gamma(t)$ vanishes after times much bigger than $\tau = 1/\omega_c$. This time $\tau$ is called the memory time. Since here $\omega_c \approx 10^{18} \text{Hz}$, while the typical optical frequencies are $\epsilon \approx 10^{14} \text{Hz}$, the Markov approximation is well justified.
The extreme example of Markovian noise (white noise) corresponds to $J(\omega) = \text{const.}$, in which case $\Gamma(t) = \delta(t)$, i.e. the bath memory time is infinitesimally small. The opposite extreme is something like a photonic crystal, where the environment has a sharp band edge at frequency $\Omega$ where $J(\omega)$ goes to zero. In that case the response function ends up something like
$$ \Gamma(t) = \int_0^\Omega e^{-i\omega t} J(\omega) \sim f(t) e^{-i\Omega t}$$
where $f(t)$ is some function of time. Now if $\Omega$ is comparable to $\epsilon$, you can imagine that there will be resonance effects, and there will be no smooth irreversible transfer of energy into the environment. Rather $\gamma(t)$ becomes a complicated function of time and you will see non-Markovian dynamics. If the frequency $\epsilon$ lies deep within the band-gap then there is no spontaneous emission at all, since there are simply no electromagnetic field modes to couple to, i.e. effectively $J(\omega) = 0$ in the relevant frequency range.
Hopefully these examples should convince you that the frequency scale which sets the bandwidth of perturbed modes in the environment ($\omega_c$, $\Omega$) is on the order of the inverse memory time. Thus, large bandwidths correspond to shorter memory times, i.e. more Markovian environments.
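As a numerical illustration of that last point (my own sketch, using the J(ω) form quoted above with λ = 1 and arbitrary units):

```python
import cmath

def response(t, omega_c, steps=20000):
    # Numerically integrate Gamma(t) = int_0^inf e^{-i w t} J(w) dw
    # with J(w) = w**3 / wc**2 * exp(-w / wc)   (lambda = 1)
    w_max = 25.0 * omega_c
    dw = w_max / steps
    total = 0j
    for i in range(steps):
        w = (i + 0.5) * dw  # midpoint rule
        total += cmath.exp(-1j * w * t) * (w**3 / omega_c**2) * cmath.exp(-w / omega_c) * dw
    return total

def memory(t, omega_c):
    # normalized correlation |Gamma(t)| / |Gamma(0)|: the "memory" left at time t
    return abs(response(t, omega_c)) / abs(response(0.0, omega_c))

print(memory(1.0, 1.0))   # ~0.25
print(memory(1.0, 10.0))  # ten times the bandwidth, far shorter memory
```

Increasing the cutoff tenfold makes the remaining correlation at fixed t drop by several orders of magnitude, consistent with the 1/(ω_c t)⁴ tail quoted earlier.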
DISCLAIMER: All equations given here are based purely on memory and minimal back-of-the-envelope consistency checks. The proportionality factors and various terms which I arbitrarily deemed irrelevant are almost certainly missing. | {
"domain": "physics.stackexchange",
"id": 19540,
"tags": "quantum-electrodynamics, quantum-optics, density-operator, cavity-qed, open-quantum-systems"
} |
Does IMU data improve odometry in Gazebo? | Question:
Hello,
Would using /robot_pose_ekf to fuse odometry and IMU data give me a better pose estimate in a Gazebo simulation? Or does Gazebo provide perfect odometry data without slip and drift?
Thanks!
-Alan
Originally posted by ajhamlet on ROS Answers with karma: 3 on 2013-06-05
Post score: 0
Answer:
The question is a bit unclear. Gazebo can give you a ground-truth pose with respect to the gazebo defined origin. If you are defining your own origin and using a package to calculate the odometry, then depending on the noise the IMU data can improve or degrade the odometry calculations.
Originally posted by astaranowicz with karma: 238 on 2013-06-06
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by ajhamlet on 2013-06-06:
Oh, I was unaware of that, so that actually answers my question! Cheers! | {
"domain": "robotics.stackexchange",
"id": 14440,
"tags": "imu, gazebo, navigation, odometry, robot-pose-ekf"
} |
Evaluate Ricci tensor at specific coordinate | Question: Let's use 2D space and 1D time. There is a point mass at the origin $(0, 0, 0)$ with mass $M$ just to keep things simple. What are the Ricci tensor elements at $(x, y, 0)$? I've found lots of ways to symbolically calculate the tensor elements in various scenarios, but what if I want specific values, e.g. for visualising on a grid or similar. In 2+0D space, it seems only the Ricci scalar is needed, if I understand things correctly.
Answer: I'm assuming in this answer that we are considering standard General Relativity, albeit in $2+1$ dimensional spacetime. By that I mean we assume Einstein's Equations,
$$R_{\mu\nu} - \frac{1}{2}Rg_{\mu\nu} = 8 \pi T_{\mu\nu}$$
to hold.
If that is indeed the case, then let us pick an arbitrary point $(x,y,0)$ away from the origin. In this point, the RHS vanishes, since you asked for an isolated mass at the origin. Hence, we get
$$R_{\mu\nu} - \frac{1}{2}Rg_{\mu\nu} = 0.$$
If we multiply by $g^{\mu\nu}$ and contract, we'll find that
$$\begin{align}
g^{\mu\nu}R_{\mu\nu} - \frac{1}{2}R g^{\mu\nu}g_{\mu\nu} &= 0, \\
R - \frac{3}{2}R &= 0, \\
R &= 0,
\end{align}$$
and replacing this result in the original equation we find $R_{\mu\nu} = 0$.
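The contraction step relies only on the identity $g^{\mu\nu}g_{\mu\nu} = \delta^\mu_\mu = 3$ in three spacetime dimensions; a minimal stdlib sketch of it, using the flat $2+1$D metric (the identity itself is metric-independent):

```python
# g = diag(-1, 1, 1) is its own inverse, so contracting it with itself
# gives the trace of the identity matrix: 3 in 2+1 dimensions.
g = [[-1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
g_inv = g  # diag(-1, 1, 1) equals its own inverse

trace = sum(g_inv[mu][nu] * g[mu][nu] for mu in range(3) for nu in range(3))
print("g^{mu nu} g_{mu nu} =", trace)  # 3, hence R - (3/2) R = 0 forces R = 0
```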
For a $(2+1)$D spacetime, the Ricci tensor (not scalar) contains all the information available in the Riemann tensor. Hence, $R_{\mu\nu} = 0$ means the metric is flat in every point that is not the origin. I'm not certain, but I'm quite sure that if you want the metric to be continuous (if it isn't the Riemann tensor won't be well-defined), it will have to be flat at the origin as well. Notice that $(2+1)$D doesn't allow a Newtonian approximation.
As for 2 dimensional, the same calculation I wrote above will lead to
$$R_{\mu\nu} - \frac{1}{2}R g_{\mu\nu} = 0.$$
The reason being, as you stated, that the Ricci scalar completely determines the curvature. As a consequence, Einstein equations will become
$$T_{\mu\nu} = 0,$$
and the only solutions are trivial.
For more on this, you might want to take a look at Sec. 15.2 of Thanu Padmanabhan's Gravitation: Foundations and Frontiers. | {
"domain": "physics.stackexchange",
"id": 84416,
"tags": "general-relativity, differential-geometry, metric-tensor, coordinate-systems, curvature"
} |
FIR filter digital differentiator with low cutoff | Question: I'm trying to design a digital differentiator FIR filter. It features a lowpass, such that above the cutoff frequency the amplification is very low. I get the coefficients by a linear program minimizing the Chebyshev error between the desired and actual frequency response.
It works really well, but I cannot place the cutoff frequency below some 0.1*pi rad/sample. Small cutoff frequencies still have very steep rising amplitude responses at low frequencies and thus need a broad transition band.
The picture shows such a design and the very broad transition band. The red curve is the desired and the blue the obtained frequency response. I've weighted the bands accordingly. Also I'm not talking about a bandpass, nor a lowpass; I design a differentiator - thus the linear slew rate at low frequencies.
There are limits to the possible lowpass frequency, correct? How can I make the cutoff even smaller, or even better: why is this degradation happening?
I know, that the frequency response in my formulation has the form
$$
H(e^{j\omega}) = 2\sum_{k=0}^M j \,h(k)\, \sin(k \omega)
$$
where $M$ is $(N-1)/2$ for an order-$N$ filter. And thus the shape can be traced better by having longer filters. But the gain is actually very small.
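A concrete instance of this antisymmetric form (a numpy sketch using the simplest type-III differentiator, the causal central difference $h = [1/2, 0, -1/2]$, not an optimized design): its amplitude response is $\sin\omega$, which tracks the ideal differentiator slope $\omega$ only at low frequencies.

```python
import numpy as np

# Causal central-difference differentiator. Its frequency response is
# H(e^{jw}) = 0.5 - 0.5 e^{-2jw} = e^{-jw} * j*sin(w), i.e. amplitude sin(w).
h = np.array([0.5, 0.0, -0.5])
w = np.linspace(0.0, np.pi, 512)
H = np.array([np.sum(h * np.exp(-1j * wk * np.arange(len(h)))) for wk in w])

amplitude = np.abs(H)              # equals |sin(w)|
low = w < 0.3                      # "low frequency" band, chosen arbitrarily
err_low = np.max(np.abs(amplitude[low] - w[low]))
print("max deviation from the ideal |H| = w below w = 0.3:", err_low)
```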
Also I read, that with a derived then sampled Blackman Window (without control over cutoff frequency) one obtains a cutoff of around $\omega_C \approx 0.005$, while I struggle with $0.1$... I want to know why exactly.
This document suggests a method for a first order differentiator, where "only" the derivatives at $\omega = 0$ are matched to the ideal one. It results in an earlier drop off. However, as I understand it, this cannot be achieved with higher order differentiators, since second order is a quadratic function and I am not sure if a (basically taylor) approx in derivatives is sufficient for that. Let alone even higher orders.
Answer: What you want is what I call a Jekyll and Hyde design. The low pass is parsimoniously represented in terms of cosines and the derivative in sines. So let Jekyll be Jekyll and Hyde be Hyde.
The way to go very low would be a multirate approach. Low pass and decimate for a few cascades until the high pass part of your differentiator is a reasonable fraction of the band. If you need the original sample rate, upsample appropriately. Your filter plot implies that you really aren't interested in most of your original band, so why retain it? | {
"domain": "dsp.stackexchange",
"id": 5388,
"tags": "lowpass-filter, finite-impulse-response"
} |
(homework) finding location of third point charge | Question: having trouble with the following
There are three point charges along the x-axis:
q1 is +3µC, located at the origin
q2 is -5µC, located at x = +0.200m
q3 is -8µC
Where is q3 located if the net force on it is 7.00 N in the -x direction?
The way I see to go about this is as follows:
It is not located at x>0.2, as q2 repels it and is closer and larger than q1
At 0 < x < 0.2 the force from both q1 and q2 goes in -x direction
At x < 0 It is repulsed by q2 but attracted by the closer q1. Given q2's larger size it may still be in this range
If q3 is at x < 0 (which I know to be correct), then F1 is +x direction and F2 is -x direction, so the problem is expressed as F1 - F2 = -7 N where F1 is the force from q1 on q3 and F2 is the force from q2 on q3.
By Coloumb's Law we have that:
F1 = k(q1q3)/sq(r1) and F2 = k(q2q3)/sq(r2)
r1 equals -x where x is the location of q3 on the x-axis (negative), and r2 equals -x+0.2
F1 - F2 = -7
k(q1q3)/sq(-x) - k(q2q3)/sq(-x+0.2) = -7
k(q1q3) and k(q2q3) are known numbers, and gives:
(2.16*10^-1)/sq(-x) - (3.60*10^-1)/sq(-x+0.2) = -7
Now in earlier problems in the book the steps were simple from here, as the right side was 0; so you could just multiply the equation by sq(-x+0.2)*sq(-x), set the polynomial equal to 0 and solve for x. In this case though that leaves a fourth-power polynomial on the right side since it isn't zero, which I have no idea how to practically solve.
Also, plotting the left-side equation I get here into a graph program and identifying F=-7 gives me an x value slightly different from the book (at -0.161, while the book says the solution is -0.144); larger than any rounding errors I can think of would imply.
Basically I've been scratching my head over this for a few hours and can't seem to figure out what I am misunderstanding, if it's the mathematics or some aspect of the physics of it that I'm not getting.
Answer: Let's do this step by step and take care of the signs!
Let $q_1=3$µC, $q_2=-5$µC and $q_3=-8$µC.
The formula for the force acting on particle one due to the presence of particle two is given by
$$\vec{F}_{12}=\frac{1}{4\pi\epsilon_0}\frac{q_1q_2}{{| \vec{r}_{21}|}^2} {\hat{r}}_{21},$$
where $\hat{r}_{21}$ is the unit vector pointing from charge two to charge one. The overall force on one particle is the sum of all (Coulomb-) forces.
The distance $r_{13}$ is the unknown we are looking for. The distance $r_{23}$ can be written as $r_{23}=0.2-r_{13}$, since we know from your reasoning, that particle 3 is to the left of particle 1, i.e. it has a negative $x$-coordinate. As we are in 1D, we do not have to care about the full vector expression. The total force is then given as
$$F_{tot}=F_{13}+F_{23} = \frac{1}{4\pi\epsilon_0} \left( \frac{q_1 \cdot q_3}{|r_{13}|^2} + \frac{q_2 \cdot q_3}{|r_{23}|^2}\right) \\
=8.988\times 10^9\ \mathrm{N\cdot m^2\cdot C}^{-2} \left( - \frac{24\times 10^{-12} \mathrm{C^2}}{|r_{13}|^2} + \frac{40\times 10^{-12} \mathrm{C^2}}{|0.2\mathrm{m}-r_{13}|^2}\right)\\
=\left(-\frac{0.215712}{|r_{13}|^2} + \frac{0.35952}{|0.2\mathrm{m}-r_{13}|^2} \right) \mathrm{N\cdot m^2}\\
\stackrel{!}{=}-7\mathrm{N} \quad \text{(force is in the negative direction)}$$
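The root of this condition can be checked with a few lines of stdlib Python (a sketch; the constants are copied from the expression above and the bracket $[-0.30, -0.10]$ was chosen by inspecting the sign of the expression):

```python
# Force-balance condition from above (SI units, r < 0):
#   -0.215712 / r**2 + 0.35952 / (0.2 - r)**2 = -7
def f(r):
    return -0.215712 / r**2 + 0.35952 / (0.2 - r) ** 2 + 7.0

lo, hi = -0.30, -0.10   # f(lo) > 0, f(hi) < 0, so a root lies in between
for _ in range(60):     # plain bisection
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0.0:
        hi = mid
    else:
        lo = mid

root = 0.5 * (lo + hi)
print(f"r_13 = {root:.6f} m")
```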
You should realize that the signs differ from yours! Solving this equation, with whatever method, yields something like $r_{13}=-0.146\,972 \, \mathrm{m}$, which is close to what your book says. | {
"domain": "physics.stackexchange",
"id": 24432,
"tags": "homework-and-exercises, electrostatics"
} |
Why don't anti-viral drugs like "Acyclovir" work against coronaviruses? | Question: I've always used Acyclovir to treat cold sores, why doesn't it work on other viruses?
How do coronaviruses differ from herpesviruses?
Answer: Aciclovir specifically targets HSV-family viruses.
It is metabolized by a viral-specific enzyme into an inhibitor of the specific DNA polymerase expressed by the virus:
in infected cells, HSV or VZV coded thymidine kinase facilitates
the conversion of aciclovir to aciclovir monophosphate, which is then converted to aciclovir triphosphate by cellular enzymes. Aciclovir triphosphate acts as an inhibitor of and substrate for the herpes specified DNA polymerase, preventing further viral DNA synthesis.
https://www.ebs.tga.gov.au/ebs/picmi/picmirepository.nsf/pdf?OpenAgent&id=CP-2009-PI-00595-3
Other viruses don't express these specific enzymes, and even if they have similar enzymes they may not interact the same way with the drug. Even within the herpesvirus family, the drug is not equally effective against every virus.
Coronaviruses are RNA viruses, so even though the drug is not effective for DNA viruses outside the herpesvirus family, coronaviruses are definitely not going to be affected because they don't even have a DNA polymerase to inhibit. | {
"domain": "biology.stackexchange",
"id": 10410,
"tags": "pharmacology, virology, coronavirus"
} |
Minesweeper with GUI | Question: I created the famous Minesweeper-game in Java, for which I used java-swing to create the GUI.
Here's the code:
Control.java: This class contains the main-method, which just opens the GUI.
import javax.swing.SwingUtilities;
public class Control {
//Just to start GUI
public static void main(String args[]) {
SwingUtilities.invokeLater(Gui::new);
}
}
Minesweeper.java: This class is responsible for creating the field, placing the mines and calculating the "neighbor-mines".
import java.util.Random;
public class Minesweeper {
//Saves the places of the mines and the number of neighbor-mines
private int[][] neighbors = new int[Gui.size][Gui.size];
private boolean[][] memory = new boolean[Gui.size][Gui.size];
//Places the bombs/mines randomly in the field
public void placeBombs() {
Random random = new Random();
int i = 0;
while(i < Gui.size * 3) {
int x = random.nextInt(Gui.size);
int y = random.nextInt(Gui.size);
if(neighbors[x][y] == 0) {
neighbors[x][y] = -1;
i++;
}
}
}
//Counts the "neighbor-mines"
public void countNeighbors() {
for(int x = 0; x < Gui.size; x++) {
for(int y = 0; y < Gui.size; y++) {
memory[x][y] = true;
if(neighbors[x][y] != -1) {
neighbors[x][y] = 0;
for(int i = x - 1; i <= x + 1; i++) {
for(int j = y - 1; j <= y + 1; j++) {
if(i >= 0 && j >= 0 && i < Gui.size && j < Gui.size ) {
if(neighbors[i][j] == -1) {
neighbors[x][y]++;
}
}
}
}
}
}
}
}
public int getterNeighbors(int x, int y) {
return neighbors[x][y];
}
public boolean getterMemory(int x, int y) {
return memory[x][y];
}
public void setterMemory(int x, int y, boolean value) {
memory[x][y] = value;
}
}
Gui.java: The name is pretty self-explanatory: This class is responsible for the GUI.
import javax.swing.BorderFactory;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JOptionPane;
import javax.swing.JPanel;
import javax.swing.JTextField;
import javax.swing.SwingUtilities;
import java.awt.Color;
import java.awt.Dimension;
import java.awt.FlowLayout;
import java.awt.Font;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;
import javax.swing.Timer;
public class Gui {
Minesweeper minesweeper = new Minesweeper();
public static final int size = 15;
private JFrame frame = new JFrame("Minesweeper");
private JButton[][] buttons = new JButton[size][size];
JTextField counter = new JTextField();
final private int delay = 1000;
private int seconds = 0;
private int minutes = 0;
Timer timer;
//Timer
final private ActionListener taskPerformer = new ActionListener() {
public void actionPerformed(ActionEvent evt) {
if(seconds < 59) {
seconds++;
}
else {
minutes++;
seconds = 0;
}
counter.setText(minutes + " : " + seconds);
}
};
public Gui() {
minesweeper.placeBombs();
minesweeper.countNeighbors();
timer = new Timer(delay, taskPerformer);
timer.start();
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
JPanel panel = new JPanel();
JPanel panel1 = new JPanel();
//Creates the buttons
for(int i = 0; i < size; i++) {
for(int j = 0; j < size; j++) {
buttons[i][j] = new JButton();
buttons[i][j].setText("");
buttons[i][j].setBackground(Color.GRAY);
buttons[i][j].setName(i + "" + j);
buttons[i][j].setBorder(BorderFactory.createLineBorder(Color.BLACK));
final int x = i;
final int y = j;
//Right-click
buttons[i][j].addMouseListener(new MouseAdapter() {
boolean test = true;
public void mouseClicked(MouseEvent e ) {
if(SwingUtilities.isRightMouseButton(e) && buttons[x][y].isEnabled()) {
if(test) {
buttons[x][y].setBackground(Color.ORANGE);
test = !test;
}
else {
buttons[x][y].setBackground(Color.GRAY);
test = !test;
}
}
}
});
//Left-click
buttons[i][j].addActionListener(
new ActionListener() {
public void actionPerformed(ActionEvent e) {
buttonClicked(x, y);
}
});
panel1.add(buttons[i][j]);
}
}
//Layout
frame.setSize(600,450);
panel.setLayout(new java.awt.FlowLayout());
panel.setSize(600, 400);
panel1.setLayout(new java.awt.GridLayout(size, size));
panel1.setPreferredSize(new Dimension(400, 400));
JPanel panel2 = new JPanel();
panel2.setLayout(new java.awt.GridLayout(2,1, 100, 100));
panel2.setSize(200,400);
//Restart button
JButton restart = new JButton();
restart.addActionListener(
new ActionListener() {
public void actionPerformed(ActionEvent e) {
frame.setVisible(false);
frame.dispose();
SwingUtilities.invokeLater(Gui::new);
}
});
restart.setText("Restart");
//More Layout
counter.setHorizontalAlignment(JTextField.CENTER);
counter.setEditable(false);
counter.setText("Counter");
Font font1 = new Font("SansSerif", Font.BOLD, 30);
counter.setFont(font1);
restart.setFont(font1);
panel2.add(counter);
panel2.add(restart);
panel.add(panel1, FlowLayout.LEFT);
panel.add(panel2);
frame.add(panel);
frame.setVisible(true);
}
private void buttonClicked(int i, int j) {
buttons[i][j].setEnabled(false);
buttons[i][j].setBackground(Color.WHITE);
if(minesweeper.getterNeighbors(i, j) == -1) {
youLost(i, j);
}
else if(minesweeper.getterNeighbors(i, j) == 0) {
zeroNeighbors(i, j);
checkWin();
}
else {
buttons[i][j].setText(Integer.toString(minesweeper.getterNeighbors(i, j)));
checkWin();
}
}
//Recursive function to reveal more fields
private void zeroNeighbors(int x, int y) {
minesweeper.setterMemory(x, y, false);
for(int i = x - 1; i <= x + 1; i++) {
for(int j = y - 1; j <= y + 1; j++) {
if(i >= 0 && j >= 0 && i < Gui.size && j < Gui.size) {
buttons[i][j].setEnabled(false);
buttons[i][j].setBackground(Color.WHITE);
buttons[i][j].setText(Integer.toString(minesweeper.getterNeighbors(i, j)));
if(minesweeper.getterNeighbors(i, j) == 0 && minesweeper.getterMemory(i, j)) {
zeroNeighbors(i, j);
}
}
}
}
}
private void youLost(int x, int y) {
for(int i = 0; i < size; i++) {
for(int j = 0; j < size; j++) {
buttons[i][j].setEnabled(false);
buttons[i][j].setBackground(Color.WHITE);
}
}
timer.stop();
buttons[x][y].setBackground(Color.RED);
JOptionPane.showMessageDialog(frame, "You Lost!");
}
private void checkWin() {
boolean test = true;
for(int i = 0; i < size; i++) {
for(int j = 0; j < size; j++) {
if(buttons[i][j].isEnabled() && minesweeper.getterNeighbors(i, j) != -1) {
test = false;
}
}
}
if(test) {
for(int i = 0; i < size; i++) {
for(int j = 0; j < size; j++) {
buttons[i][j].setEnabled(false);
buttons[i][j].setBackground(Color.WHITE);
}
}
timer.stop();
JOptionPane.showMessageDialog(frame, "You Won!");
}
}
}
I would appreciate any suggestions on improving the code and especially the general code-structure.
Answer: To review your code, I opened it in IntelliJ, which is an integrated development environment (IDE). One of its main features is the thousands of inspections it has for making code simpler and shorter. For example, it suggests:
In Control.java, instead of writing String args[], the usual way is to write String[] args. Changing this does not affect the code execution in any way, it only makes the code easier to read for humans.
In Minesweeper.java, instead of writing private int[][], you can write private final int[][] to document that this variable is only ever assigned once, which also helps the human reader since otherwise this variable might be modified in any of the other 60 lines.
In Gui.java, instead of writing new ActionListener() { … }, you can replace that code with a much shorter form, which is called a lambda expression. That's a terribly unhelpful name if you don't know what it is about. A much better name is unnamed method, or in some other programming languages, anonymous function. Basically it's just a piece of code that can be run.
So much for the simple transformations. Having these code transformations at your finger tips makes it easy to experiment with your code and apply these suggestions from the IDE, as well as undo them if you don't like them.
An IDE can also format the source code, so that it has a consistent look that is familiar to many other readers. For example, in your code you write for(int i, while the common form is to have a space after the for, which makes it for (int i.
On a completely different topic, the label that displays the elapsed time sometimes jumps around on the screen. This is because the seconds "0" is thinner than the seconds "00". To avoid this, you can replace this code:
counter.setText(minutes + " : " + seconds);
with this code:
counter.setText(String.format("%d : %02d", minutes, seconds));
The String.format function is quite powerful. It defines a format with placeholders, into which the remaining arguments are inserted. In this case it means:
%d is just a decimal number.
%02d is a decimal number, but with at least 2 digits. Any number shorter than this will be padded with 0.
See this article for other popular programs that didn't get this right, there are even some programs by Apple.
When I saw your code first, I was a bit disappointed that the Minesweeper class uses the constant Gui.size. That constant has nothing to do with the GUI, it should rather be defined in the Minesweeper class, since it is not specific to the screen representation but rather to the abstract representation of the mine field.
It would also be nice if I could have a Minesweeper object with different sizes. To do this, you can edit the Minesweeper class in these steps:
At the top of the class, modify the code to be:
public class Minesweeper {
private final int width;
private final int height;
public Minesweeper(int width, int height) {
this.width = width;
this.height = height;
}
Replace Gui.size with widthOrHeight everywhere in Minesweeper.java.
Replace each instance of widthOrHeight with either width or height, whichever fits.
Finally make the width and the height of the mine field publicly available by adding these methods at the bottom of the Minesweeper class:
public int getWidth() { return width; }
public int getHeight() { return height; }
Now you can define mine fields of arbitrary sizes.
There's certainly more to say, but I'll leave that to the other reviewers. | {
"domain": "codereview.stackexchange",
"id": 38030,
"tags": "java, swing, minesweeper"
} |
How to validate that an audio algorithm result is independent of microphone device? | Question: I have a machine learning algorithm that takes speech sample audio recordings collected from Mechanical Turk. During processing it was shown that some audio from certain OS/microphone devices has been preprocessed by a noise suppression algorithm by default, which affects the results I am getting.
As part of my writeup I need to ensure that OS and microphone preprocessing is accounted for. Ideally I would like my algorithm to be independent of the recording source, however I am not even sure how to go about detecting the kinds of preprocessing that are being applied, or how to give lower and upper error bounds for how they affect my algorithm.
My question is how does other people deal with preprocessing on audio files that are outside your control? Are there some common preprocessing that I should know about (echo cancellation, noise suppression, etc.)?
Something I can do, for example, is to add noise suppression myself to some reference files and report +/- bounds for the maximum and minimum deviation from the reference-file results. However it's a bit unsatisfactory, as I do not know how different my preprocessing is from that of other devices.
Answer: There isn't a universal pre-processing chain, but here are some common ones that I'd look out for:
1) Noise Gating
If a signal is below a certain threshold, the mic is muted. To identify: You'll see a VERY quiet noise floor, followed by a sudden spike in level.
2) Automatic Gain Control (AGC)
Mic volume modulates based on the signal level. This will be difficult to identify unless you have multiple recordings from the same system, which will show modulation in the noise floor based on the signal level.
3) Compression
This one should be self-evident. You'll see a very steady signal level that makes the waveform look like a "block", i.e. not much changes in level throughout the entire recording.
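Each of these fingerprints is easy to reproduce on synthetic audio. For example, a minimal noise-gate simulation (a numpy sketch; the signal shape and threshold are made up):

```python
import numpy as np

# Quiet noise floor for the first half, a "speech" tone for the second half.
rng = np.random.default_rng(0)
n = 2000
x = 0.01 * rng.standard_normal(n)
x[n // 2 :] += 0.5 * np.sin(0.05 * np.arange(n // 2))

threshold = 0.06                                  # arbitrary gate threshold
gated = np.where(np.abs(x) < threshold, 0.0, x)   # mute everything below it

floor_rms = np.sqrt(np.mean(gated[: n // 2] ** 2))
speech_rms = np.sqrt(np.mean(gated[n // 2 :] ** 2))
print("gated floor RMS:", floor_rms, "| speech RMS:", speech_rms)
```

The gated floor is (almost) exactly silent, then the level jumps abruptly when the tone starts, which is the signature described in point 1.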
These are all relatively simple real-time processing examples. There may also be more sophisticated post-processing applied in some cases, such as noise suppression. | {
"domain": "dsp.stackexchange",
"id": 10102,
"tags": "normalization, preprocessing"
} |
What is the vacuum solution of the Dirac equation? | Question: What I am generally asking is which solution of the massive Dirac equation could be considered the vacuum solution.
Answer: I think you've gotten a bit mixed up: the vacuum state $|\Omega \rangle$ is no more a solution to the Dirac equation than any given initial state $|\psi \rangle$ is a solution to an equation of motion, because it is not even a function of time $t$. The Dirac equation answers the question: given some initial state (or field, in the interaction or Heisenberg pictures, which are more commonly used in QFT), how does it evolve forwards in time? It doesn't provide any extra information about the "contents" of a state.
A similar question that you could ask: If you moved to the Schrödinger picture, what would be the solution to the dirac equation with initial condition $|\psi_0 \rangle = |\Omega \rangle$? I have never seen this solved, and would be curious about the answer as well. | {
"domain": "physics.stackexchange",
"id": 92225,
"tags": "field-theory, dirac-equation"
} |
Calculating bit flip and phase error using local operations for GHZ state | Question: Suppose each qubit of a $\text{GHZ}_3$ state is distributed to $3$ different parties at different locations through a noisy quantum channel. So each qubit can possibly go through: a bit flip error, a phase error or a combination of these two.
Is there some operation that can be performed locally by each party to determine the errors in the qubit?
Basically I wanted to generalize the idea for the Bell state to the GHZ state as given here. Clearly in the case of the Bell state, when Alice sends the other half to Bob, there exist measurements that can be performed locally to determine bit flip and phase errors. That is (see here for details):
$$
\begin{aligned}
&\Pi_{\mathrm{bf}}=\frac{1}{2}\left(\mathrm{id} \otimes \mathrm{id}-\sigma_{z} \otimes \sigma_{z}\right) \\
&\Pi_{\mathrm{pe}}=\frac{1}{2}\left(\mathrm{id} \otimes \mathrm{id}-\sigma_{x} \otimes \sigma_{x}\right)
\end{aligned}
$$
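For the Bell-pair case these projectors can be verified directly (a numpy sketch, taking the shared state to be $|\Phi^+\rangle = (|00\rangle + |11\rangle)/\sqrt{2}$):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

phi = np.array([1., 0., 0., 1.]) / np.sqrt(2)    # |Phi+>

P_bf = 0.5 * (np.kron(I2, I2) - np.kron(Z, Z))   # bit-flip detector
P_pe = 0.5 * (np.kron(I2, I2) - np.kron(X, X))   # phase-error detector

bit_flipped = np.kron(X, I2) @ phi               # X error on one qubit
phase_flipped = np.kron(Z, I2) @ phi             # Z error on one qubit

ev = lambda P, psi: float(psi @ P @ psi)         # expectation value
print(ev(P_bf, phi), ev(P_bf, bit_flipped))      # 0.0 then 1.0
print(ev(P_pe, phi), ev(P_pe, phase_flipped))    # 0.0 then 1.0
```

The clean Bell state triggers neither projector; a bit flip triggers $\Pi_{\mathrm{bf}}$ with certainty and a phase error triggers $\Pi_{\mathrm{pe}}$, exactly as intended.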
I am interested in estimating the amount of error in the GHZ state using local operations, in order to apply error-correcting codes and to calculate the fidelity.
Answer: I think the place you're starting from is misleading. If Alice and Bob share a Bell pair, and one qubit could have had an error, then, yes, they would like to measure observables $X\otimes X$ and $Z\otimes Z$ to detect the error. In principle, you can measure the values of these two simultaneously because they commute.
However, if you're trying to measure $Z\otimes Z$ on two qubits that are spatially separated, you either need
to use some extra entanglement (which you don't have) to implement the $Z\otimes Z$ measurement directly, or
to measure both qubits in the $Z$ basis. This is a bad thing because it completely destroys your state. (To see this differently, you're measuring the observables $Z\otimes I$ and $I\otimes Z$. These do not commute with $X\otimes X$). Hence, you can detect one of the types of error, but your entanglement is completely gone.
The same thing is going to happen with a GHZ state. | {
"domain": "quantumcomputing.stackexchange",
"id": 3600,
"tags": "error-correction, noise"
} |
Why don't people use Hamilton's equations for a relativistic free particle? | Question: A relativistic free particle has the Hamiltonian in general:
$$ \mathcal{H} = \sqrt{{\bf p}^2c^2+m^2c^4}.$$
I read somewhere that it is possible to go further and say that the EoM are Hamilton's equations.
Is there something deeper to this? Like another formalism is ''better''.
(My guess is a more trivial one though. That is, it is not useful because the equations just get very cluttered and ugly)
Answer: I) Here we will assume that OP is talking about a relativistic point particle with zero spin in a $d$-dimensional Minkowski spacetime with metric $\eta_{\mu\nu}$ of sign convention $(−,+,\ldots,+)$. Also we put $c=1$ for simplicity.
Note that the relativistic point particle has world-line reparametrization invariance, which is a gauge symmetry/redundancy in the formulation. We are (to a large extent) free to parametrize the world-line of the point particle in any way we wish. Let us call the world-line parameter for $\tau$ (which does not have to be the proper time). This gauge freedom can be encoded in an einbein field $e=e(\tau)>0$. The resulting Hamiltonian Lagrangian is$^1$
$$ L_H~:=~ p_{\mu} \dot{x}^{\mu} - \underbrace{\frac{e}{2}(p^2+m^2)}_{\text{Hamiltonian}}, \tag{1} $$
cf. e.g. this Phys.SE post.
Here dot means differentiation wrt. $\tau$. The square of the momentum vector is
$$\begin{align} p^2~:=~& \eta^{\mu\nu} p_{\mu} p_{\nu}\cr ~=~&-(p^0)^2+{\bf p}^2\cr
~=~&-2p^+p^- + {\bf p}_{\perp}^2, \end{align}\tag{2}$$
where we have used light-cone coordinates in the last expression.
II) Static gauge $x^0=\tau$. If we integrate out $p^0$ and $e$, we get OP's square root model
$$\begin{align} \left. L_H\right|_{x^0=\tau}
\quad\stackrel{p^0}{\longrightarrow}&\quad
{\bf p}\cdot \dot{\bf x}- \underbrace{\left(\frac{1}{2e} + \frac{e}{2}({\bf p}^2+m^2)\right)}_{\text{Hamiltonian}}\cr\cr
\quad\stackrel{e}{\longrightarrow}&\quad
{\bf p}\cdot \dot{\bf x} - \underbrace{\sqrt{{\bf p}^2+m^2}}_{\text{Hamiltonian}} .\end{align}\tag{3} $$
For sufficiently short$^2$ times $\Delta \tau=\tau_f-\tau_i$, the path integral becomes$^3$
$$\begin{align}& \langle {\bf x}_f,\tau_f \mid {\bf x}_i,\tau_i\rangle\cr
~=~&i\hbar\Delta\tau\int_{\mathbb{R_+}} \!\frac{\mathrm{d}e}{2} \int_{\mathbb{R}^d} \!\frac{\mathrm{d}^dp}{(2\pi\hbar)^d} \exp\left[\frac{i}{\hbar}\left( p_{\mu} \Delta x^{\mu}
-\underbrace{\frac{e}{2}(p^2+m^2)}_{\text{Hamiltonian}}\Delta\tau\right)\right]\cr
~=~& \int_{\mathbb{R}^{d-1}} \!\frac{\mathrm{d}^{d-1}{\bf p}}{(2\pi\hbar)^{d-1}}
i\hbar\Delta\tau\int_{\mathbb{R_+}} \!\frac{\mathrm{d}e}{2}
~\underbrace{\frac{1}{\sqrt{2\pi\hbar ie\Delta\tau}}}_{\text{Gauss. } p^0\text{-int.}} \cr
&\exp\left[\frac{i}{\hbar}\left( {\bf p}\cdot \Delta {\bf x} -\underbrace{\left( \frac{1}{2e} + \frac{e}{2}({\bf p}^2+m^2)\right)}_{\text{Hamiltonian}}\Delta\tau\right) \right]\cr
~\stackrel{(6)}{=}~& \int_{\mathbb{R}^{d-1}} \!\frac{\mathrm{d}^{d-1}{\bf p}}{(2\pi\hbar)^{d-1}} \frac{\hbar}{2\sqrt{{\bf p}^2+m^2}} \exp\left[\frac{i}{\hbar}\left( {\bf p}\cdot \Delta {\bf x} - \Delta \tau \underbrace{\sqrt{{\bf p}^2+m^2}}_{\text{Hamiltonian}}\right)\right]\cr
~=~&i\hbar\Delta\tau\int_{\mathbb{R_+}} \!\frac{\mathrm{d}e}{2}
~\underbrace{\frac{1}{(2\pi\hbar ie\Delta\tau)^{d/2}}}_{\text{Gauss. } p\text{-int.}}
\exp\left[\frac{i}{2\hbar}\left( \frac{(\Delta x)^2}{e\Delta\tau} - m^2e\Delta\tau\right) \right]\cr
~\stackrel{(6)}{=}~&\frac{1}{(2\pi)^{d/2}}\Big(\frac{m/\hbar}{ \sqrt{(\Delta x)^2}}\Big)^{\frac{d}{2}-1}K_{\frac{d}{2}-1}\Big(\frac{m}{\hbar}\sqrt{(\Delta x)^2}\Big)
,\end{align} \tag{4} $$
which also happens to be the standard scalar propagator $\langle\Omega|T[\phi (x_f)\phi (x_i)]|\Omega\rangle $ in QFT/2nd quantization, cf. e.g. Refs. 1-3. From a 2nd quantized perspective, the $e$-integration in eq. (4) is a Schwinger parametrization of the Fourier transformed propagator $$\frac{i}{\hbar}\langle\Omega|T[\widetilde{\phi} (p_f)\widetilde{\phi} (p_i)]|\Omega\rangle~=~\frac{\hbar^2}{p_f^2+m^2-i\epsilon}(2\pi\hbar)^d\delta^d(p_f\!+\!p_i). \tag{5} $$
As is well-known, eq. (4) is Lorentz covariant and falls off exponentially outside the light-cone. In eq. (4) we have used the integrals
$$\begin{align}
\int_{\mathbb{R}_+} \!\frac{\mathrm{d}e}{e^{1+\nu}}\exp\left[-ae-\frac{b}{e}\right] ~=~&2\left(\frac{a}{b}\right)^{\nu/2} K_{\nu}\left(2\sqrt{ab}\right),\cr
\int_{\mathbb{R}_+} \!\frac{\mathrm{d}e}{e^{1-\nu}}\exp\left[-ae-\frac{b}{e}\right] ~=~&2\left(\frac{b}{a}\right)^{\nu/2} K_{\nu}\left(2\sqrt{ab}\right),\cr
\int_{\mathbb{R}_+} \!\frac{\mathrm{d}e}{\sqrt{e}}\exp\left[-ae-\frac{b}{e}\right] ~=~&\sqrt{\frac{\pi}{a}} \exp\left[-2\sqrt{ab}\right],\cr {\rm Re}(a), {\rm Re}(b)~>~&0.\end{align}\tag{6} $$
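These Bessel-$K$ integrals can be spot-checked numerically (a scipy sketch for the first line of eq. (6), with arbitrary positive $a$, $b$ and $\nu = 1/2$):

```python
import numpy as np
from scipy import integrate, special

a, b, nu = 1.3, 0.7, 0.5

# LHS: direct quadrature of the integrand e**(-(1+nu)) * exp(-a*e - b/e)
lhs, _ = integrate.quad(lambda e: e ** (-1.0 - nu) * np.exp(-a * e - b / e),
                        0.0, np.inf)
# RHS: 2 (a/b)^{nu/2} K_nu(2 sqrt(a b))
rhs = 2.0 * (a / b) ** (nu / 2) * special.kv(nu, 2.0 * np.sqrt(a * b))
print(lhs, rhs)
```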
III) Light-cone gauge $x^+=\tau$. If we integrate out $p^-$ and $e$, we get
$$\begin{align} \left. L_H\right|_{x^+=\tau}
\quad\stackrel{p^-,~e}{\longrightarrow}\quad & -p^+\cdot \dot{x}^-
+{\bf p}_{\perp}\cdot \dot{\bf x}_{\perp}\cr & - \underbrace{\frac{{\bf p}_{\perp}^2+m^2}{2p^+}}_{\text{Hamiltonian}} .\end{align}\tag{7} $$
IV) We stress that the Euler-Lagrange (EL) equations for either of the Hamiltonian Lagrangians (1), (3), and (7) lead to Hamilton's equations. The point is now that physical quantities should not depend on the choice of gauge-fixing. We are free to use the most convenient gauge choice. Each formulation (1), (3), and (7) are valid, and have their pros and cons. The static gauge choice (3) is disfavored because of the square root.
References:
M.E. Peskin & D.V. Schroeder, An Intro to QFT; eq. (2.50).
M.D. Schwartz, QFT and the Standard Model; eq. (6.25).
O. Corradini & C. Schubert, Spinning Particles in QM & QFT, arXiv:1512.08694; subsection 1.5.1, eqs. (1.160-162)
T. Padmanabhan, QFT: The Why, What and How, 2016; subsections 1.3.1 + 1.4.4.
--
$^1$ Strictly speaking, there are also Faddeev-Popov ghost terms and gauge-fixing terms, which we have ignored for simplicity. These action terms are consistently generated in the BFV formulation, cf. e.g. my Phys.SE post here. The normalization factor in eq. (4) can be derived via Gaussian integration in the BFV formulation over the 2 bosonic variables $x^0$, $B$; and the 4 fermionic variables $\bar{C}$, $P$, $C$, $\bar{P}$.
$^2$ Here we just consider a single time slice for simplicity. The full path integral is the continuum limit of multiple time slice discretizations with insertion of corresponding completeness relations. It turns out that the result (4) for the free theory does not depend on the number of time slice discretizations.
$^3$ Here we use the Feynman $i\epsilon$-prescription ${\rm Re}(i\Delta\tau)>0$. The Gaussian integration over $p^0_E=i p^0_M$ becomes damped after a Wick-rotation $\tau_E=i\tau_M$, $x^0_E=ix^0_M$ to Euclidean signature. | {
"domain": "physics.stackexchange",
"id": 30239,
"tags": "special-relativity, hamiltonian-formalism, field-theory, classical-field-theory, point-particles"
} |
What does BNG stand for | Question: When I look at the available datasets on https://www.openml.org I often see a BNG dataset with no further information about it.
Can someone explain what BNG means in this context?
I am especially interested in this dataset: https://www.openml.org/d/1389
Has anyone more information about where this data set comes from?
Answer: The Bayesian Network Generated (BNG) datasets are a set of artificially generated datasets openly available on OpenML. These datasets were generated to fill the need for a large heterogeneous set of large datasets. This paper describes the BNG generator best:
Algorithm Selection on Data Streams.
Small quote from the paper about the BNG data generator:
The generator takes a dataset as input, and outputs a data stream containing
a similar concept, with a predefined number of instances. The input dataset is
preprocessed with the following operations: all missing values are first replaced by the majority value of that attribute, and numeric attributes are discretized using Weka’s binning algorithm.
A personal note: For general Machine Learning studies, I would refrain from using BNG (or any other kind of artificially generated) datasets, as the concept is generally simpler than that of the original dataset. Instead, it is advisable to use a pre-defined benchmark suite, such as the OpenML-100. | {
"domain": "datascience.stackexchange",
"id": 2509,
"tags": "dataset"
} |
Electric Potential, Work Done by Electric Field & External Force | Question: I did lot of searching but couldn't find any textbook nor any webpage which could clarify my doubt. Maybe my doubt is insanely stupid and I was dumb for not realizing it at the first place.
The negative of the work done by the electrostatic field in bringing a charge from infinity to a point is called electric potential.
Let us assume that there is a positive charge at the origin. Let the work done by the external force to bring a positive charge from infinity to a point P close to the origin be W. Hence the work done by the field to bring that positive charge from infinity to the point P will be -W.
If I did work W to bring the charge from infinity or if the field brought the charge from infinity, either way the change in potential energy will be same.
In case of work done by the field, we say that the work done is stored in the form of electrostatic potential energy.
In case of work done by the external force, we say that the work done was positive and the energy was taken by some external source.
In the above case, I did work W in bringing it & the field did work -W on the charge trying to push it away so I did positive work of W & the field did negative work of W. We can say that the work done by the field was stored as potential energy.
Where did the work done by the external force (me) go?
I can summarize the whole doubt in the following line:
I did work W to bring an charge towards another unlike charge, and therefore the field also did work -W, the net work done on the charge is 0. But there is still a change in potential energy. Why did the potential energy change? Isn't conservation of energy being violated here?
Please clarify my doubt (I do understand that there is a horrible conceptual error in one of my arguments but I do not know which one it is). If I am not wrong this has nothing to do with electrostatics rather has to deal with field theory/inverse square law. I believe I will be encountering the same problem again when I would be studying another force, maybe gravitational, which obeys inverse square law.
Thank You
Answer: You can describe the electric force in terms of potential energy because it is a conservative force. In doing so you actually replace the concept of work done by this force with the concept of potential energy, so you can no longer use both descriptions simultaneously. If you describe the electric force as doing work, then you did positive work and the electric force did negative work, so there is no net gain of kinetic energy in the object. It is a mistake to say that in this description the particle also gains potential energy, because you would then be counting the work done by the electric field twice (both as work done and as potential energy gained). The descriptions are equivalent, but it is either one or the other. If you choose the potential-energy description then you no longer deal with the work of the electric force, as it is implicitly contained in the concept of potential energy. | {
"domain": "physics.stackexchange",
"id": 71531,
"tags": "electrostatics, work, potential-energy"
} |
Calculating the accurate emf of an electric generator | Question: Let's consider the following simple model of an electric generator:
Many textbook say the following:
The magnetic flux through the coil is $\Phi=BA\sin \theta$, where $\theta=\omega t+\phi$ is the angle the plane on which the coil lies makes with the upward vertical. $A$ denotes the area bounded by the coil, and $B$ the magnetic field strength. The emf induced, according to Farady's Law, is therefore $\mathcal E=\frac{d\Phi}{dt}=\omega BA \cos (\omega t+\phi)$.
This is, or course, under the assumption that the magnetic field is uniform and constant. However, the current in the coil produces some magnetic field, according to Ampere's Law, so the magnetic field should not be constant in general.
Question: Is the emf $\mathcal E$ still a sine wave if the change in magnetic field caused by current in the coil is taken into account? Of course, the answer depends on the type of load the generator is connected to, but now let's assume the load is a resistor. How will the waveform be different form $\mathcal E=\omega BA \cos (\omega t+\phi)$? Can I find an analytical expression for $\mathcal E$ in this case?
Answer: The effect that you are worried about is self-induction. Given circuit (including the coil and the rest of wires) can be assigned quantity called self-inductance, denoted usually as $L$. The meaning of self-inductance is that magnetic flux through the circuit due to its own current is
$$
\Phi_{self} = LI.
$$
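(A side note on the resistive-load case the question asks about: combining the induced emf with Ohm's law $\mathcal{E}=IR$ gives the circuit equation $L\,dI/dt + RI = \omega BA\cos(\omega t+\phi)$, whose steady state is still a pure sinusoid, only attenuated and phase-shifted. A quick numerical Python sketch — all parameter values are assumed for illustration only:)

```python
import numpy as np

# Assumed illustrative parameters: a 50 Hz generator feeding a resistor
R, L = 10.0, 0.05              # load resistance [ohm], self-inductance [H]
E0, w = 5.0, 2 * np.pi * 50.0  # peak emf (= omega*B*A) [V], angular freq [rad/s]

# Integrate L dI/dt + R I = E0 cos(w t) with a small explicit Euler step
dt = 1e-6
t = np.arange(0.0, 0.2, dt)
I = np.zeros_like(t)
for k in range(len(t) - 1):
    I[k + 1] = I[k] + dt * (E0 * np.cos(w * t[k]) - R * I[k]) / L

# Steady state of this linear circuit: I = (E0/|Z|) cos(w t - phase)
Z = np.hypot(R, w * L)
phase = np.arctan2(w * L, R)
I_exact = E0 / Z * np.cos(w * t - phase)
print(np.max(np.abs(I[-2000:] - I_exact[-2000:])))  # tiny once transients decay
```

So for a resistor load the waveform stays sinusoidal; self-inductance only shifts its phase and reduces its amplitude.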
To get total induced emf, one has to calculate total flux and negative of rate of change of this total flux:
$$
emf = -\frac{d\left(\Phi_{external}+\Phi_{self}\right)}{dt},
$$
$$
emf = - \frac{d}{dt}\left(BA\sin (\omega t +\phi) + LI\right),
$$
$$
emf = -\omega BA\cos (\omega t +\phi) - L\frac{dI}{dt}.
$$ | {
"domain": "physics.stackexchange",
"id": 60402,
"tags": "electromagnetism, electricity"
} |
p vs V graph - dependence question | Question: Context:
I'm reading Fermi's Thermodynamics. On page 6, he states that the work done transforming a system from state $A$ to state $B$ (with the accompanying graph below) is given by $$W=\int_{V_A}^{V_B}p\text{ } dV.$$
Question: In the graph, is $p$ dependent on $V$? If so, how and why? I can't make sense of it intuitively. Is the book already assuming the Ideal Gas Law?
Answer: Answer by @FGSUZ is correct. I just want to add a few minor points.
Pressure $p$ depends on $V$, but not only on $V$. You may choose the other thermodynamic variable to be, for example, temperature $T$ (any choice different from $V$ will do), and then the dependence is written $p(V,T)$. In a thermodynamic process going from state 1 to state 2, $V$ and $T$ are continuously varied. This process forms a three-dimensional curve in the $p$-$V$-$T$ diagram. What is presented in Fermi's graph is the projection of this three-dimensional curve on the $p$-$V$ plane. To calculate work this is all we need. Note that no assumption was made regarding the thermodynamic process except that it is quasistatic, which enables us to represent the process by a continuous curve; in particular, ideal gas behavior was not assumed. | {
"domain": "physics.stackexchange",
"id": 44193,
"tags": "thermodynamics, work, ideal-gas"
} |
Prospects for detection of gravitons? | Question: With the announcement of the detection of gravitational waves, questions about the implications proliferate. Some relate to the possible existence of gravitons. The analogous relationship between gravitons/gravitational waves and photons/electromagnetic waves is frequently mentioned.
The detection of individual photons required experiments of very low intensity light, yet their existence was inferred (prior to their actual detection) by Planck and Einstein (among others) using the properties of experimental black body radiation and the photo-electric effect.
If the prospects for detection of gravitons requires similar study of very low intensity gravitational waves, then those prospects are very dim indeed. My question: are there similar indirect "experimental" methods for inferring the existence of gravitons?
Answer: The short answer is no.
As far as I know the first person to address this issue was Freeman Dyson - at least his is the name you see associated with the question. Googling finds only this article from 2004 that is behind a paywall, though I'm sure I encountered Dyson's ideas some time before 2004.
Anyhow there is a thorough discussion of the problem in Can Gravitons Be Detected? by Tony Rothman, Stephen Boughn. They confirm that the answer is no in practice though they suggest that in principle gravitons could be detected.
The problem is that gravitons interact extraordinarily weakly with matter, and there simply isn't any physically realistic equipment with the sensitivity to detect a single graviton. Incidentally the same problem means it's extremely unlikely we'll ever be able to observe a graviton being produced in a collider. | {
"domain": "physics.stackexchange",
"id": 28636,
"tags": "gravity, quantum-gravity, gravitational-waves, carrier-particles"
} |
Derivation of Lagrangian? | Question: I know that the Lagrangian $L$ is defined to be $T-V$, i.e. the difference between kinetic energy and potential energy. Also the Action $S$ is defined to be $\int L\,dt$, and from this we can derive Newton's 2nd law of motion.
If we get Newton's second law out, does it mean that the formulation is correct? Couldn't it be just a coincidence?
Where do we derive these expressions for the Action and for the Lagrangian from?
Answer: This site derives the principle of least action from Newton's laws. http://www.damtp.cam.ac.uk/user/tong/dynamics/two.pdf | {
"domain": "physics.stackexchange",
"id": 12929,
"tags": "newtonian-mechanics, classical-mechanics, lagrangian-formalism, variational-principle"
} |
Landau mechanics - Normal modes of oscillation | Question: In Landau's Mechanics book there's a section in which he explains small oscillations in systems with $s \geq 1$ degrees of freedom.
He writes the kinetic and potential energies as
$$
T = \sum_{i, k} \frac{1}{2}a_{ik}(q_0)\dot{q}_i\dot{q}_k \hspace{1cm} U = \sum_{i, k} \frac{1}{2}k_{ik}x_ix_k
$$
where $q_0$ is a stable equilibrium point, so that the matrix $K = (k_{ik})$ is positive definite, and $x = q - q_0$. Also, he puts $a_{ik}(q_0) = m_{ik}$, so that
$$
T = \sum_{i, k} \frac{1}{2}m_{ik}\dot{q}_i\dot{q}_k
$$
Using Lagrange's equations while looking for solutions of the form $x_k = A_k e^{i\omega t}, \; A_k, \omega \in \mathbb{C}$, he obtains
$$
\sum_k (-\omega ^2m_{ik} + k_{ik})A_k = 0
$$
which can be rewritten in matrix form as $(-\omega ^2M + K)A = 0$. Both $M$ and $K$ are positive definite and hence invertible, so the last equation is equivalent to $(M^{-1}K - \omega^2 I)A = 0$. So, what Landau is looking for is the eigenvalues and eigenvectors of $M^{-1}K$.
Then he says that, provided the eigenvalues are all different, the components $A_k$ of $A$ are proportional to the minors of the determinant of $(M^{-1}K - \omega^2I)$, with $\omega^2$ eigenvalue.
Why is this? Cramer's rule is useless because the matrix is not invertible.
My reasoning is as follows: put $C = \frac{M^{-1}K}{\omega^2}$. Then, we have $CA = A$.
If we consider the determinant of the matrix C whose $i$th column is replaced with $A$, we get
$$
D(C^1, \dots, C^{i-1}, A, C^{i+1}, \dots, C^s) = D(C^1, \dots, C^{i-1}, \sum_j A_jC^j, C^{i+1}, \dots, C^s) = A_i D(C)
$$
and so
$$
A_i = \frac{D(C^1, \dots, C^{i-1}, A, C^{i+1}, \dots, C^s)}{D(C)}= \frac{1}{D(C)}\sum_k M_{ik}A_k
$$
where the last equality follows form Laplace's expansion of the determinant in the numerator and $M_{ik}$ are coefficients that are proportional to the minors of $C$.
How do I conclude that the coefficients $A_k$ are proportional to the minors?
EDIT: I'm looking for a proof of this fact.
Answer: The components of the null vector are actually any column of the transpose cofactor matrix, so, then, the fabulous, wonderful adjugate matrix.
For a given matrix N, you are seeking the null vector, so $\det (N)=0$. Now the transpose of the cofactor matrix is
$$
\operatorname{Adj}(N)=C^T,
$$
where C, the cofactor matrix of N, has the properly sign-permuted minors in the corresponding entries. The property of the adjugate is that
$$
N ~\operatorname{Adj}(N)= 1\!\!1 ~ \det (N).
$$
But we assumed $\det (N)=0$, so the right-hand side of the above vanishes.
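As a numerical aside, the statement is easy to verify directly. A short Python/NumPy sketch (the random positive-definite $M$ and $K$ below are pure illustration) picks one eigenvalue $\omega^2$, forms the singular matrix $N=-\omega^2 M+K$, and checks that a column of its adjugate is annihilated by $N$:

```python
import numpy as np

def adjugate(N):
    # transpose of the cofactor matrix: signed minors of N
    n = N.shape[0]
    C = np.zeros_like(N)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(N, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
M = A @ A.T + 3 * np.eye(3)   # positive-definite "mass" matrix (illustrative)
B = rng.standard_normal((3, 3))
K = B @ B.T + 3 * np.eye(3)   # positive-definite "stiffness" matrix (illustrative)

w2 = np.linalg.eigvals(np.linalg.solve(M, K)).real.min()  # one eigenvalue omega^2
N = K - w2 * M                 # (-omega^2 M + K): singular by construction

col = adjugate(N)[:, 0]        # any column of Adj(N); N Adj(N) = det(N) 1 = 0
print(np.allclose(N @ col, 0, atol=1e-7), np.linalg.norm(col) > 0)
```

With distinct eigenvalues $N$ has rank $s-1$, so the adjugate has rank 1 and every column gives the same amplitude vector up to normalization.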
Any column of the adjugate matrix is a good null vector of N, which is why both the answers to the question linked by @Phoenix87 and Landau do not bother to specify which row they pick to compute the cofactors for. In this case, the rank of the adjugate is just 1! | {
"domain": "physics.stackexchange",
"id": 66843,
"tags": "lagrangian-formalism, linear-algebra, coupled-oscillators, normal-modes"
} |
When temperature is decreased, why do reactions occur at all? | Question: I admit that my knowledge of collision theory may be lacking, but, as I understand it, when particles collide, a reaction will not occur without overcoming the activation energy.
That being said, as the temperature of the environment in which the collision takes place is decreased, I believe it is logical that the kinetic energy of these particles will also decrease. Hence, to me, it would make sense if none of these particles had the energy required to react.
So, my question is: Why do reactions still occur when the environment's temperature around a collision decreases? That is, shouldn't there be a point (such as the temperature in a freezer, perhaps?) in which the activation energy cannot be overcome at all?
Answer: The activation energy varies a lot for various reactions. As long as T > 0 K, there is still disordered kinetic energy in the molecules. But the reason gasoline doesn't spontaneously combust is that the temperature isn't high enough to overcome the activation energy - a spark is needed. So you are onto something, just remember that the activation energy varies with each reaction. | {
"domain": "chemistry.stackexchange",
"id": 769,
"tags": "energy, kinetic-theory-of-gases"
} |
Doubt regarding microscopic averaging of current density | Question: We know that when averaging the velocity vector field of a current distribution over a particular volume, we get velocity fields that are slow varying over macroscopically small / infinitesimal distances.
This is helpful as this is primarily the reason why we can write $$\vec j = \vec v \rho$$
that's because we assume all the charges that enter this small area element $dA$ (which is many thousands of molecules in diameter) enter it with the same velocity $\vec v(r)$, and the analysis is carried out in a simple fashion: all the charges entering this area come from a volume $\vec v \cdot d\vec A \, dt$, and hence the charge per unit time through this macroscopically infinitesimal area is $$dI = \rho \vec v \cdot d\vec A$$ which is the current, macroscopically infinitesimally.
Suppose I have a current distribution, and I want to know the microscopic current density at a given point through an area $dA$ which is about a handful of molecules in size. Suppose the $i^{th}$ type of particle(s) move(s) with a velocity $\vec v_{i}$, the last particle of the $i^{th}$ type that will enter this area $d\vec A$ is a distance $\vec v dt$ away, and hence if $\rho_{i}$ is the density of the distribution of the $i^{th}$ type of particles, then the amount of charge entering this area solely due to the $ i^{th} $ particle will be $$dQ_{i}=\rho_{i} d\vec A \cdot \vec v_{i} dt $$
Now plugging in charge density of the $i^{th}$ particle, $$\rho_{i}(r')= q_{i} \delta{(r'-r_{i}'(t))}$$
$$I_{i}=\frac{dQ_{i}}{dt}= q_{i} \delta{(r'-r_{i}'(t))} d\vec A \cdot \vec v_{i} $$ and the total current is just the sum of contributions from all types of particles:
$$I=\frac{dQ}{dt}= \sum{q_{i} \delta{(r'-r_{i}'(t))} \vec v_{i} \cdot d\vec A } $$
Giving us $$\vec j = \sum{ q_{i} \delta{(r'-r_{i}'(t))} \vec v_{i} }$$
(This derivation of the $j_{micro} $ expression is my own, and I am highly positive it's correct (mainly because I know the formula is), if not, correct me)
Now my question is, how to go from
$$ \vec j_{micro}= \sum{ q_{i} \delta{(r'-r_{i}'(t))} \vec v_{i} }$$
to
$$\vec j_{macro}(r)=\rho(r) \vec v(r)$$
We know that Maxwell's laws, especially $$\nabla \times b = \mu_{0} \vec j + \mu_{0} \epsilon_{0} \frac{d\vec e}{dt} $$ can be averaged to get the corresponding macroscopic laws, and the averaging procedure being commutative, commutes through these differential operators to give: $$ \nabla \times <b> = \mu_{0} <\vec j> + \mu_{0} \epsilon_{0} \frac{d<\vec e>}{dt} $$ so I would like to know how one can jump from the microscopic current density to macroscopic current density.
(Note: when I say 1000s of molecules, it's solely for bookkeeping purpose, it could be higher depending on the situation and the kind of averaging we want.
Also, it is not necessary to answer the question based on Lorenz averaging method, any kind is fine as long as it reproduces the correct results, I'm willing to accept it)
Answer: microscopically,
$$\vec j_{micro}=\sum_{all-charges}{q_{i}}\vec v_{i} \delta(r-r_{i}'(t))$$
consider an appropriate volume to average over, $\Delta V$. I denote the position of the volume element in space with the vector $\vec R$. It is clear that averaging is done over a sphere with centre $\vec R$.
when we do the averaging:
$$\left< \vec j_{micro} \right>=\vec j_{macro} (\vec R)=\frac{\iiint_{\Delta V}\sum_{all-charges}q_{i} \vec v_{i}\delta(r-r_{i}'(t)) d^3r}{\Delta V}$$
now, if the charge's position is within the volume $\Delta V$, the term pertaining to it survives the integral, while the term pertaining to charges that lie outside the volume perishes.
With this in mind, the integral then integrates to:
$$ \left< \vec j_{micro} \right>=\vec j_{macro}(\vec R)=\frac{\sum_{\Delta V} q_{i} \vec v_{i}}{\Delta V} $$
Leaving this here for a moment, we turn our attention to $\rho_{micro}(r)=\sum_{all-charges} q_{i}\delta(r-r_{i}'(t)) $. Averaging this quantity over the same volume $\Delta V$ gives us:
$$\left< \rho_{micro} \right>=\rho_{macro}(\vec R)=\frac{\iiint_{\Delta V}\sum_{all-charges} q_{i}\delta(r-r_{i}'(t)) d^3r}{\Delta V}$$
which gives us:
$$\rho_{macro}(\vec R)=\frac{\sum_{\Delta V} q_{i}}{\Delta V}$$
Now, we define the velocity vector field that we use in our macroscopic description of things as:
$$\vec v_{macro} (\vec R)=\frac{\left< \vec j_{micro} \right>}{\left< \rho_{micro} \right>}=\frac{\sum_{\Delta V} q_{i} \vec v_{i}}{\sum_{\Delta V} q_{i}}$$
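As a numerical illustration of these definitions, the Python sketch below (every number — carrier charge, box size, drift and spread of the velocities — is an assumed toy value, with the random spread kept unrealistically small so a small sample converges) averages discrete point charges over a sphere $\Delta V$ centred at $\vec R$. The common drift survives the average while the random part washes out, and $\vec j_{macro}=\rho_{macro}\vec v_{macro}$ holds by construction:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
q = np.full(n, 1.6e-19)                       # identical carriers (assumption)
r = rng.uniform(-1e-3, 1e-3, size=(n, 3))     # positions in a 2 mm box [m]
v = rng.normal(0.0, 1e2, size=(n, 3))         # random ("thermal") part [m/s]
v[:, 2] += 1e3                                # common drift along z [m/s]

R0, a = np.zeros(3), 2e-4                     # averaging sphere: centre, radius
inside = np.linalg.norm(r - R0, axis=1) < a   # delta functions lying inside dV
dV = 4.0 / 3.0 * np.pi * a**3

rho_macro = q[inside].sum() / dV                           # <rho_micro>
j_macro = (q[inside, None] * v[inside]).sum(axis=0) / dV   # <j_micro>
v_macro = j_macro / rho_macro                 # defined so j = rho * v holds

print(v_macro)  # close to (0, 0, 1e3): the drift survives the average
```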
This allows us to average Maxwell's equations, in particular the Ampere-Maxwell equation:
$$\nabla \times \vec b = \mu_{0} \vec j_{micro} + \mu_{0} \epsilon_{0} \frac{d\vec e}{dt}$$
$$\nabla \times \left< \vec b \right> = \mu_{0} \left< \vec j_{micro} \right> + \mu_{0} \epsilon_{0} \frac{d \left< \vec e \right>}{dt}$$
this gives
$$\nabla \times \vec B = \mu_{0} \rho_{macro}\vec v_{macro} + \mu_{0} \epsilon_{0} \frac{d\vec E}{dt}$$ and we can safely call $\rho_{macro} \vec v_{macro}$ the macroscopic current density that we have been using all along. This fits with the theory, as the $\rho_{macro}$ appearing here is exactly the same one that appears in Gauss' law, keeping things consistent. | {
"domain": "physics.stackexchange",
"id": 96231,
"tags": "electromagnetism, electric-current, maxwell-equations"
} |
Algorithm for constructing BST from post-order traversal | Question: Given the post-order traversal of a binary search tree with $k$ nodes, find an algorithm that constructs the BST.
My Algortihm
Let $n$ represent the next element to be inserted.
Let $P(y)$ represent the parent of node $y$.
We will read the traversal in reverse. The last element of the traversal is the root. Let $l = root$. $l$ will represent the element
last inserted in the BST (except for the 3rd case below- where it will
change to the parent).
Loop the following till there's no element left to be inserted
if $l<n$ then $n$ is the right child of $l$. For the next insertion,
$l$ changes to it's right child and $n$ becomes the next element(in
reverse order of traversal ).
else, if $l>n$ and $P(l)<n$ then $n$ is the left child of $l$. For the next
insertion, $l$ changes to it's left child and $n$ becomes the next
element(in reverse order of traversal).
else, if $l>n$ and $P(l)>n$ then $l$ becomes $P(l)$.($n$ hasn't been inserted - we loop with $l$ changed)
[Let $P(root)=- \infty$, so that the $2^{nd}$ case applies]
Complexity Analysis : Every element may contribute at most 3 comparisons, 1 each for - left child, right child, and for finally leaving, i.e. when its subtree has been constructed. Even if I missed a comparison or two, it should be a constant number of comparisons per element, and the number of operations for node construction will also be constant per element. Hence, this gives $O(k)$ time complexity.
Actual Question
If the algorithm is correct, I need the correctness proof for it. Yes, I thought I had the proof but then brain got fried and I am stuck and unable to reason succinctly.
If the algorithm is incorrect, then why? And what is time complexity of the most efficient algorithm for the same question?
Also, is the $O(k)$ complexity correctly calculated - irrespective of the correctness of the algorithm?
Answer: You are on the right track. But the algorithm is incomplete. You missed the case of inserting elements into the left subtree and backtracking out of it.
Here is the modified algorithm: (Changes marked in bold)
Let $n$ represent the next element to be inserted.
Let $P(y)$ represent the parent of node $y$.
Let $g=G(y)$ represent the first node g on the path $y\to root$ such that $P(g)<g$.
We will read the traversal in reverse. The last element of the traversal is the root. Let $l = root$. $l$ will represent the element last inserted in the BST (except for the 3rd case below- where it will change to the parent). Let $g=root$ tracking $G(l)$, initialize empty stack $stkG$ storing previous $g$'s on current path.
Loop the following till there's no element left to be inserted
if $l<n$ then $n$ is the right child of $l$. For the next insertion,
$l$ changes to it's right child and $n$ becomes the next element(in
reverse order of traversal ). Push $g$ on $stkG$, and let $g=l$.
else, if $l>n$ and $\textbf{P(g)<n}$ then $n$ is the left child of $l$. For the next
insertion, $l$ changes to it's left child and $n$ becomes the next
element(in reverse order of traversal).
else, if $l>n$ and $\textbf{P(g)>n}$ then $l$ becomes $\textbf{P(g)}$ and pop $g$ from $stkG$.($n$ hasn't been inserted - we loop with $l$ changed)
[Let $P(root)=- \infty$, so that the $2^{nd}$ case applies]
For correctness, you can prove the following loop invariant:
Root is the first inserted element. For each insertion except the root, the parent element has been inserted before.
Each insertion $n$ correctly maintains the order between $n$ and $l$. ($n<l$ if $n$ is left child of $l$, $n>l$ otherwise)
After backtracking step 3, the next insertion always occurs on the left branch.
1 and 3 ensures the post-order traversal, while 2 ensures the BST.
For complexity:
insertion cost: each element is inserted exactly once.
traversal and comparison: the algorithm actually performs a post-order traversal in reverse order, with $O(1)$ comparison on each step.
$g$,$stkG$ maintain cost: each node, which is right child of parent, is pushed and popped from $stkG$ at most once.
Thus the time complexity is $O(k)$.
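Both versions can be sanity-checked against a runnable implementation. Here is a compact Python sketch (dict-based nodes and distinct keys are my own simplifying assumptions) of the same reverse-postorder idea: consume the traversal from the end, build the right subtree before the left, and use a lower bound to detect where a subtree ends — $O(k)$ overall, since each element is visited once:

```python
def bst_from_postorder(post):
    """Rebuild a BST (distinct keys) from its postorder traversal in O(k)."""
    idx = len(post) - 1

    def build(bound):
        nonlocal idx
        # stop when the traversal is exhausted or the next key belongs
        # to an ancestor's left subtree (it falls below the lower bound)
        if idx < 0 or post[idx] < bound:
            return None
        node = {"key": post[idx], "left": None, "right": None}
        idx -= 1
        node["right"] = build(node["key"])  # keys > node go to the right
        node["left"] = build(bound)         # then keys in (bound, node)
        return node

    return build(float("-inf"))

def postorder(node, out):
    if node:
        postorder(node["left"], out)
        postorder(node["right"], out)
        out.append(node["key"])
    return out

post = [1, 4, 3, 7, 9, 8, 5]        # postorder of a BST rooted at 5
tree = bst_from_postorder(post)
print(postorder(tree, []) == post)  # True: the reconstruction round-trips
```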
Alternative algorithm:[1]
Using a recursive procedure:
procedure BST_from_postorder(post, len, target)
/*
input: post[0..len-1] -- (partial) postorder traversal of BST
len -- length of post to be processed.
target -- a cutoff value to stop processing
Output: tree_ptr -- pointer to root of tree/sub tree constructed from post[len_rest..len-1]
len_rest -- remaining length that has not been processed.
*/
1. if len <= 0 or post[len-1] <= target, then return null.
2. root <- new Node created from from post[len-1].
3. (root->right, new_len) <- BST_from_postorder(post, len-1, post[len-1])
4. (root->left, rest_len) <- BST_from_postorder(post, newlen, target)
5. return (pointer to root, rest_len)
/* BST_from_postorder(post, length of post, -infinity)
will return the BST constructed from the given postorder traversal. */ | {
"domain": "cs.stackexchange",
"id": 5168,
"tags": "algorithms, algorithm-analysis, runtime-analysis, correctness-proof"
} |
Can we picture metallic bonding as an equilibrium between electrons and cations? | Question: Can we picture metallic bonding as an equilibrium between electrons and cations?
Suppose:
$$\ce{Al^3+ + 3e- <=> Al}$$
Answer: In metals, electrons are non-localized, forming a "sea" of electrons, rather than having them localized, as in the $\ce{Na+Cl-}$ lattice of crystalline salt. See Metallic bonding for a more complete description.
It is, of course, a matter of degree, as covalent, ionic and metallic bonding can "blend" from one to the other. A bond can be considered partially ionic and covalent, for example; see these helpful graphics | {
"domain": "chemistry.stackexchange",
"id": 3078,
"tags": "bond, ions, metal"
} |
Artificial gravity in a spinning space station | Question: I have a question about artificial gravity in a spinning space station.
From astro.cornell.edu:
In space, it is possible to create "artificial gravity" by spinning your spacecraft or space station. When the station spins, centrifugal force acts to pull the inhabitants to the outside.
If a person in the space station is moving in a circle along with the space station, the centripetal force is towards the centre. So how will the person be "pulled to the outside"?
Answer: The force of gravity we feel is really not gravity, but it is the bulk resistance from material we stand on. In elementary physics courses we call this the normal force. In more advanced physics such as general relativity we see the material of the Earth as deviating our path from a geodesic. With gravity there is no force to feel, except in the case of extreme tidal forces. So if you are on a rotating spacecraft the floor keeps pushing up on you just as the ground or floor does here on Earth. That is in fact what we feel. In the case of the rotating space station the material strength of the station maintains the centripetal force that keeps you on a circular path. | {
"domain": "physics.stackexchange",
"id": 39771,
"tags": "newtonian-mechanics, reference-frames, centripetal-force, centrifugal-force"
} |
Ruby - Summing groups of odd and even integers in an array until no groups remain | Question: The problem my code solves is listed below. I know my code can be improved; it takes a while to run when bigger arrays are entered as input.
"Problem - Given an array of integers, sum consecutive even numbers and consecutive odd numbers. Repeat the process while it can be done and return the length of the final array."
def sum_groups(arr)
z = arr.size
x = []
i = 0
until i == z
arr = arr.chunk{|x| x.even?}.map{|x, y| y}.map{|x| x.inject(&:+)}
x << arr
i += 1
end
x[-1].size
end
An example input -
For arr = [2, 1, 2, 2, 6, 5, 0, 2, 0, 5, 5, 7, 7, 4, 3, 3, 9]
The result should be 6.
[2, 1, 10, 5, 30, 15] - Value of numbers in final array.
The length of final array is 6
Any review and help would be appreciated to make this code more efficient.
Answer: The whole thing about sums is misdirection; you don't need to sum anything, since all you need to output is the final length of the array. So what you want to know is how many consecutive terms there are of a given parity, not what their sum is.
The trick is that only an odd number of odd terms will produce an odd sum; anything else will produce an even sum. So you can skip calculating the actual sum, because by knowing the parity and number of terms, you'll know if that sum's going to be odd or even.
Secondly, you can make it recursive instead of loop-based.
Here's my take:
def count_chunks_recursively(array)
# get our consecutive chunks
chunks = array.chunk(&:odd?).to_a
# if there are as many chunks as there are elements, we're done
return array.size if chunks.size == array.size
# otherwise, map the chunks to 0 or 1, depending on whether they
# sum up to something even or something odd
mapped = chunks.map { |odd, terms| odd && terms.size.odd? ? 1 : 0 }
# ... and repeat the process recursively
count_chunks_recursively(mapped)
end
As for your current code:
You very, very rarely have to use plain loops in Ruby; there's almost always something in Enumerable or Array that you can use instead.
x[-1].size would be more conventionally written as x.last.size
You use a method reference when producing the sum – inject(&:+) – but you don't use the same technique for the other parts of the same line. I.e. this
arr.chunk{|x| x.even?}.map{|x, y| y}.map{|x| x.inject(&:+)}
could just be this:
arr.chunk(&:even?).map(&:last).map { |x| x.inject(&:+) }
or in Ruby 2.4+ which has a built-in sum method:
arr.chunk(&:even?).map(&:last).map(&:sum)
Sidenote: inject (specifically) actually doesn't need the method reference, but can also take a plain symbol: inject(:+). But for consistency's sake, I'd stick to using the &. | {
"domain": "codereview.stackexchange",
"id": 24930,
"tags": "ruby"
} |
Appreciation of the 5 state process model | Question: I'm starting out in an Operating Systems module. I have a few understanding questions to think about, which will not be gone through in class.
A process state model is an abstraction (or model) which is used to explain what can happen to a process. Give a state of the process, it tells you what is the next allowed states and what causes the process to be in that state, and so on. A model is only at a certain level of abstraction. In this question, we will be focusing on the 5-state process model from the lecture.
(a) Is it possible for a process to never go to the Terminated state? If so, give 3 possible scenarios when this can occur. If not, explain your reasoning.
(b) Suppose a process is in terminated state. Discuss whether you should be able to check at any time if a process is in terminated state at all times. Are there any implications?
5 state process model
I have attempted to answer all these questions, but because of their theoretical nature, I think there might be a lot more dimensions to these.
a)
I think it is not possible for a process to never go to termination, given enough time, because there are multiple ways a kill() or exit() command can be called, depending on their policy and mechanism in the operating system?
b)
I think it is important because a process in termination state still holds some data and requires the OS to do some cleanup. A process might not be running for other reasons other than termination, it could be in a blocked state without any IO activity or just be inside the ready state.
Can someone check my understanding on these questions? Thanks.
Answer: a) A process does not necessarily have to terminate, it only reaches the terminated state if it exits voluntarily. If a process has no exit statement (e.g. a server), it would never reach the terminated state unless killed by the OS. Other ways for a process to never reach termination include not being scheduled when ready because some other process with higher priority is always scheduled before it (starvation), or being stuck in the blocked state waiting for some resource but never getting it, for example, trying to obtain a lock (deadlock/livelock).
b) If we cannot check whether the process has reached the terminated state or not, we can never free up its resources (stack, data, process table entry). It is important for the OS to be able to tell what state a process is in for the scheduler to work properly. If a child process terminates and reaches the 'Z' state, the operating system needs to have this information when it tries to pick a process to schedule. This prevents processes that no longer need CPU time from being scheduled. If the OS did not have this state information available at all times, the implications would be wasting memory space and CPU time.
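To make the model concrete, the legal moves of the 5-state diagram can be written down as a small lookup table (a sketch of my own; the state names and the admit/dispatch/timeout/wait/exit transitions follow the usual textbook convention):

```python
# allowed transitions in the 5-state process model
TRANSITIONS = {
    'new':        {'ready'},                            # admit
    'ready':      {'running'},                          # dispatch
    'running':    {'ready', 'blocked', 'terminated'},   # timeout / wait / exit
    'blocked':    {'ready'},                            # awaited event occurs
    'terminated': set(),                                # no way out
}

def can_move(src, dst):
    return dst in TRANSITIONS[src]

# a blocked process must pass through ready again; it cannot run directly
print(can_move('blocked', 'running'))   # False
# a process cycling ready <-> running forever never has to reach terminated
print(can_move('running', 'ready'))     # True
```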
I suggest reading up on the wiki for the
exit,
wait and
fork syscalls for better understanding :) | {
"domain": "cs.stackexchange",
"id": 13320,
"tags": "terminology, operating-systems, process-scheduling"
} |
Meaning of quarter hole color fill in drawing | Question: In this drawing 4 of the holes have the top left quarter filled in black.
What is the meaning of that? Is it some special type of hole?
The drawing is from a Waveguide and in the specs it says
Flange type FDP32 (Cover)
So I'm thinking it might be some way of mounting the cover.
Is there a standardized meaning for the symbol?
Answer: They are just identifiers for different types of holes. If the same symbol was used for every hole then every hole would need the size specified beside it if there was more than one type.
With identifiers they just have to list the legend: | {
"domain": "engineering.stackexchange",
"id": 4071,
"tags": "mechanical-engineering, technical-drawing"
} |
how to solve the problem of cannot determine linker language for target? | Question:
Hi :)
When I build the program, I get an error like 'cannot determine linker language for link target'.
Below is the content of my CMakeLists.txt:
find_package(catkin REQUIRED COMPONENTS
rospy
std_msgs
OpenCV
)
catkin_python_setup()
include_directories(
${catkin_INCLUDE_DIRS}
${OpenCV_INCLUDE_DIRS}
)
add_executable(opencv_test2 script/opencv_test2.py)
target_link_libraries(opencv_test2
${catkin_LIBRARIES}
${OpenCV_LIBRARIES}
)
install(PROGRAMS
script/opencv_test2.py
DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
)
Please, how can I solve this problem?
Where exactly is the problem?
Edit:
i revised my cmakelist.txt
but now I get the error 'cannot specify link libraries for target "scripts/opencv_test1.py" which is not built by this project'.
Note that opencv_test1.py itself executes fine.
What is the problem?
I must solve this problem. Please help me.
cmake_minimum_required(VERSION 2.8.3)
project(opencv_test1)
find_package(catkin REQUIRED COMPONENTS
rospy
std_msgs
)
find_package(OpenCV REQUIRED)
catkin_package(
LIBRARIES opencv_test1
CATKIN_DEPENDS opencv2 rospy std_msgs
DEPENDS system_lib
)
include_directories(
${catkin_INCLUDE_DIRS}
${OpenCV_INCLUDE_DIRS}
)
target_link_libraries(scripts/opencv_test1.py
${catkin_LIBRARIES}
${OpenCV_LIBRARIES}
)
Originally posted by anthuny shin on ROS Answers with karma: 1 on 2015-05-09
Post score: 0
Original comments
Comment by gvdhoorn on 2015-05-10:
Don't use answers to update your question, this is not a forum. Update your original question in the future.
I've moved the contents of your answer to your question.
Answer:
Is this your entire CMakeLists.txt? Please provide a complete copy.
If this is the complete contents, you seem to be missing two important statements. A typical CMakeLists.txt starts with:
cmake_minimum_required(VERSION 2.8.3)
project(your_package)
See also wiki/catkin/CMakeLists.txt - Overall Structure and Ordering and catkin 0.6.14 documentation » How to do common tasks » Package format 2 (recommended) » Catkin configuration overview - CMakeLists.txt
Also:
add_executable(opencv_test2 script/opencv_test2.py)
You can (and should) use a CMakeLists.txt for ROS Python nodes, but you don't use add_executable(..) then. See catkin 0.6.14 documentation » How to do common tasks » Package format 2 (recommended) » Installing Python scripts and modules for more info on using catkin with Python.
Note that you don't need to install your script / node in order to be able to rosrun or roslaunch it. You only need a minimal CMakeLists.txt to build your package, catkin will do the rest.
Edit:
i revised my cmakelist.txt
but now I get the error 'cannot specify link libraries for target "scripts/opencv_test1.py" which is not built by this project'.
Have you read the documentation I linked? You cannot link Python nodes to anything using target_link_libraries(..). That is for C/C++/other compiled languages only. There is nothing to link with Python.
Also: include_directories(..) is useless with Python nodes. The PYTHONPATH is setup by sourcing the correct setup.(bash|sh|zsh) after building your workspace.
A minimal CMakeLists.txt for a Python only package is probably something like:
cmake_minimum_required(VERSION 2.8.3)
project(your_project_name)
find_package(catkin REQUIRED)
# depending on whether you have any libraries to install/export
#catkin_python_setup()
# add relevant arguments here (CATKIN_DEPENDS etc)
catkin_package()
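If the script should also be installed, catkin provides catkin_install_python for that (a sketch; the path assumes the scripts/ layout from the question — adjust it to wherever your node actually lives):

```cmake
# install the Python node so it can be rosrun after `catkin_make install`
catkin_install_python(PROGRAMS scripts/opencv_test1.py
  DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION})
```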
Originally posted by gvdhoorn with karma: 86574 on 2015-05-09
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by dharmendra on 2020-05-11:
cmake_minimum_required(VERSION 2.8.3)
project(kr120_final)
find_package(catkin REQUIRED)
catkin_package()
install(DIRECTORY launch DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}
PATTERN "setup_assistant.launch" EXCLUDE)
install(DIRECTORY config DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION})
add_executable(move_group_interface src/move_group_interface.cpp)
target_link_libraries(src/move_group_interface.cpp ${catkin_LIBRARIES})
This is what i have in CMakeList.txt file when i build catkin environment by "catkin_make" command, it does give the same error,
i have written node in c++ language,and followed the moveit official documentation. | {
"domain": "robotics.stackexchange",
"id": 21643,
"tags": "ros"
} |
RViz notification on "params modified" | Question:
Hello all,
I want to create a panel in RViz (part of a plugin) and this panel would contain widgets allowing me to create/modify ROS parameters.
Now, as is, my RViz panel will not be updated if the parameter gets modified outside of RViz - the parameter value displayed by RViz will be wrong. Hence my question: is there a ROS mechanism allowing to get notified on parameters value change? ...or another mechanism allowing to sync my panel?
Should I simply setup a 5Hz timer to update my panel? Or is there a better approach?
Thanks,
Antoine.
Originally posted by arennuit on ROS Answers with karma: 955 on 2015-06-23
Post score: 0
Answer:
For parameters that are expected to change, dynamic_reconfigure is generally a better solution. If that isn't an option, I believe you will have to poll for changes (like with a timer as you mentioned).
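If polling is the route taken, the poll-on-change logic itself can be kept framework-agnostic (a sketch; in a real panel, get_value would wrap rospy.get_param or getParamCached, and tick would be driven by a ~5 Hz timer — the names here are illustrative):

```python
def make_poller(get_value, on_change):
    """Call tick() periodically; on_change fires only when the value differs."""
    last = [object()]  # unique sentinel so the first observed value always fires
    def tick():
        value = get_value()
        if value != last[0]:
            last[0] = value
            on_change(value)
    return tick

# simulated parameter server: the panel callback sees each change exactly once
source = {'value': 1}
seen = []
tick = make_poller(lambda: source['value'], seen.append)
tick(); tick()            # second tick sees no change
source['value'] = 2
tick()
print(seen)               # [1, 2]
```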
Originally posted by Dan Lazewatsky with karma: 9115 on 2015-06-23
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by jarvisschultz on 2015-06-23:
Dan's answer is good, but if you must use params, note that if you are using C++ you can reduce the overhead associated with constantly polling the parameter server by using ros::NodeHandle::getParamCached() | {
"domain": "robotics.stackexchange",
"id": 21992,
"tags": "ros, rviz, parameter, parameters"
} |
Efficiently map different codes to rotation-different codes | Question: Let $\mathbb{F}^n_2$ denote the set of $n$-bit 0-1 strings. How to construct an efficiently computable function $f:\mathbb{F}^n_2\to \mathbb{F}^m_2 (m>n)$ satisfying that $\forall u\neq v$,$f(u)\neq f(v)$ and $f(u)$ is not a circular rotation of $f(v)$?
$m$ is expected to be as small as possible.
Thank you very much for your kindness.
Answer: Here's a more efficient way of doing it. Let's map all the strings of length $n$ into strings of length $n+O(\sqrt{n} \log n)$ with no consecutive string of $0$'s of length more than $\sqrt{n}$. We then add a string $1 0^{a}1$ at the end, where $a \geq \sqrt{n} + 1$. Our mapping isn't always going to give us the same length string, so $a$ can vary.
How do we do this encoding? We need a way of encoding long string of 0's. Let's do that by
taking a string of $0$'s of length at least $t$ and mapping it to a string of zeros of length $t$ followed by a number telling us how many zeros were in it.
$$
\begin{array}{lcl}
0^{\sqrt{n}} & \rightarrow & 0^{\sqrt{n}}10\ldots000 \\
0^{\sqrt{n}+1} & \rightarrow & 0^{\sqrt{n}}10\ldots001 \\
0^{\sqrt{n}+2} & \rightarrow & 0^{\sqrt{n}}10\ldots010 \\
0^{\sqrt{n} +3} & \rightarrow & 0^{\sqrt{n}}10\ldots011 \\
0^{\sqrt{n} +4} & \rightarrow & 0^{\sqrt{n}}10\ldots100 \\
\vdots &\rightarrow & \vdots
\end{array}
$$
Because we will never have a string of more than $n$ 0's, we can use the first $\lceil\log n \rceil$ bits after the string $0^{\sqrt{n}} 1$ to encode the length of the string of 0's. Thus, the most we can increase the length of our code by is a factor of $1 + \frac{\log n+2}{\sqrt{n}}$. After this expansion of the code, we need to add at least $\sqrt{n} + 1$ consecutive 0's. This lets us take $m = n + \sqrt{n}(\log n + 3)$.
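Here is a small sketch of this style of encoding (the constants are my own choices, not the answer's exact ones: runs of at least $t \approx \sqrt{n}$ zeros are recoded as $0^t 1 \langle\text{length}\rangle 1$, and the appended terminator run is made strictly longer than any run the body can contain, so it is the unique longest zero run even cyclically):

```python
import math

def encode(bits):
    """Recode zero runs so the appended terminator is the unique longest run."""
    n = len(bits)
    t = max(math.isqrt(n), 1)        # run-length threshold, ~ sqrt(n)
    w = n.bit_length()               # bits needed to store any run length <= n
    out, i = [], 0
    while i < n:
        if bits[i] == '1':
            out.append('1'); i += 1
            continue
        j = i
        while j < n and bits[j] == '0':
            j += 1
        run = j - i
        # short runs are copied verbatim; long runs become 0^t 1 <length> 1
        out.append('0' * run if run < t
                   else '0' * t + '1' + format(run, '0%db' % w) + '1')
        i = j
    # terminator: a zero run longer than anything the body can contain
    out.append('1' + '0' * (t + w + 1) + '1')
    return ''.join(out)

def rotations(s):
    return {s[k:] + s[:k] for k in range(len(s))}
```

An exhaustive check for $n = 6$ confirms that distinct inputs get codewords that are neither equal nor rotations of one another (rotation only matters for equal-length codewords).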
I think you can probably get $m \approx n + \log n\,$ by using the ideas behind arithmetic codes, but I don't have time to work the details out right now. If somebody else wants to work them out and post an answer, please feel free to do so. | {
"domain": "cs.stackexchange",
"id": 1525,
"tags": "coding-theory"
} |
Predict Customer Next Purchase with Sequence | Question: Suppose I buy products: [1,2,3,4]
Another customer X bought: [2,3]
Most probably customer X next purchase will be: 4
Sequence is very important in my problem
I tried association analysis using R, but it doesn't take the sequence into consideration
Please advise what algorithms I need to solve this?
Do I need to First do Clustering to find similar customers?
Answer: For sequence problems, I generally used Recurrent Neural Network. It has a property to learn its values, which is based upon the previous state and current input.
Since in your case sequence is important, you can use an RNN. You can also try an LSTM (a type of RNN) cell.
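For comparison, a minimal sequence-aware baseline worth measuring any RNN against is a first-order transition count (a sketch I'm adding for illustration, not part of the original answer); it already captures the example in the question:

```python
from collections import Counter, defaultdict

def fit_transitions(histories):
    """First-order Markov baseline: count item -> next-item transitions."""
    counts = defaultdict(Counter)
    for h in histories:
        for cur, nxt in zip(h, h[1:]):
            counts[cur][nxt] += 1
    return counts

def predict_next(counts, last_item):
    nxt = counts.get(last_item)
    return nxt.most_common(1)[0][0] if nxt else None

# my purchases were [1, 2, 3, 4]; customer X bought [2, 3] -> predict 4
model = fit_transitions([[1, 2, 3, 4]])
print(predict_next(model, 3))   # 4
```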
And an additional note: for product suggestion (where sequence is not
important) you can also use the Apriori algorithm. This algorithm tries to
build associations between the products of a single order. | {
"domain": "datascience.stackexchange",
"id": 5573,
"tags": "machine-learning, data-mining, predictive-modeling, recommender-system"
} |
Probabilities of duplicate mail detection by comparing notes among servers | Question: I have the following problem:
We want to implement a filtering strategy in e-mail servers to reduce the number of spam messages. Each server will have a buffer, and before sending an e-mail, it checks whether there is a duplicate of the same message in its own buffer and contacts k distinct neighboring servers at random to check whether the duplicate is in another buffer. In case any duplicate message is detected, it will be deleted as spam, otherwise it will be sent after all negative replies are received.
Let us assume that there are N mail servers, and that a spammer sends M copies of each spam mail. We assume that all copies are sent simultaneously and that each mail is routed to a mail server randomly.
Given M, N and k I need to find out the probabilities that no spam message is deleted (i.e. no server detects spam), all spam messages are deleted (all servers detect spam) and spam messages are deleted from at at least one server.
So far, I have used combinations without repetition to find out the cases that need to be taken into account for an M and N. Now I need to find out the probability that one server receives at least two copies of a message, but I am at a complete loss. Could you please provide some insight into the problem?
Answer: If a given server receives $m \leq M$ copies, it does not receive $M-m$ copies. Also, there are many ways to pick $m$ messages out of $M$; you have to consider all of them. Furthermore, important assumptions are that
"randomly" means "uniformly at random", that is the probability that any given copy goes to any given server is $\frac{1}{N}$, and that
each copy is routed independently of its siblings, that is we have independent random events for the individual copies.
This is all you need to piece together the probability $\operatorname{Pr}(S_i = m)$ that server $i$ receives $m$ messages.
If you put the above together, you get
$\qquad \displaystyle \operatorname{Pr}(S_i = m) = \binom{M}{m} \left(\frac{1}{N}\right)^m \left(1 - \frac{1}{N}\right)^{M-m}$
for $S_i$ being the number of copies the $i$-th server receives, $1\leq i \leq N$. You should recognize this kind of probability weight.
If you have doubts that this is correct, run some simulations (not a proof!).
Now it is easy to compute $\operatorname{Pr}(S_i \geq 2) = 1 - \operatorname{Pr}(S_i = 1) - \operatorname{Pr}(S_i = 0)$.
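As a quick (non-proof) check of the formula, the closed form for $\operatorname{Pr}(S_i \geq 2)$ can be compared against a Monte Carlo estimate:

```python
import random
from math import comb

def p_at_least_two(M, N):
    # closed form: 1 - Pr(S_i = 0) - Pr(S_i = 1), from the binomial weight above
    p0 = (1 - 1 / N) ** M
    p1 = comb(M, 1) * (1 / N) * (1 - 1 / N) ** (M - 1)
    return 1 - p0 - p1

def simulate(M, N, trials=200_000, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # route each of the M copies to a uniform random server; watch server 0
        copies = sum(rng.randrange(N) == 0 for _ in range(M))
        hits += copies >= 2
    return hits / trials

print(round(p_at_least_two(10, 4), 3), round(simulate(10, 4), 3))
```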
When using the derived weight in other calcuations, keep in mind that the $S_i$ are not independent because $\sum_{i=1}^N S_i = M$. The underlying probability distribution is the well-studied multinomial distribution. | {
"domain": "cs.stackexchange",
"id": 279,
"tags": "combinatorics, probability-theory"
} |
Could GA's determine fitness by "Fighting" against each other? | Question: I am developing AI in the form of NEAT, and it has passed certain tasks like the XOR problem outlined in the NEAT Research Paper. In the XOR Problem, the fitness of a network was determined by an existing function (XOR in this case). It also passed another tests. One I developed was to determine the sine at a certain point X in radians. It also worked, but yet again, its fitness was determined by an existing function (sin (x)).
I've recently been working on training it to play Tic Tac Toe. I decided that to determine its fitness, it would play against a "dumb" AI, placing O's in random locations on the grid, and gaining fitness based on whether or not it placed X's in a valid location (losing fitness if it placed an X on top of another X or an O), and gaining a lot of fitness if it won against the "dumb" AI. This would work, but when a network got really lucky and the "dumb" AI placed O's in impractical locations, the network would win and gain a lot of fitness, making it very difficult for another network to beat that fitness. Therefore, the learning process did not work and I was not able to generate a Tic Tac Toe network that actually worked well.
I do not want the GA to learn based off an "intelligent" tic tac toe AI because the whole point of me training this GA is so that I do not have to make the AI in the first place. I want it to be able to learn rules on its own without me having to hard code an AI to be very good at it.
So, I got to thinking, and I thought it would be interesting if the fitness of a network could be determined based off how well it played against OTHER NETWORKS in its generation. This does seem similar to how humans learn to play games, as I learned to play chess by playing against other people hundreds of times, learning from my mistakes, and my friends also increased in their ability to play chess as well. If GA's were to do that, that would mean I don't have to program AI to play the game (in fact, I wouldn't have to program a "dumb" AI as well, I would only have to hard code the rules of the game, obviously).
My questions are:
Has there been any research or results from GA's determining their fitness based off competing against each other? I did some searching but I have no idea what to look for in the first place (searching 'NEAT fight against each other' did not work well :-( )
Does this method of training a GA seem practical? It seems practical to me, but are there any potential drawbacks to this? Are GA's meant to only calculate predetermined functions that exist, or do they have the potential to learn and do some decision making?
If I were to do this, how would fitness be determined? Say, for the tic tac toe example, should fitness be determined based on whether or not a network places its X's or O's in viable locations, and add fitness if it wins and subtracts fitness if it loses? What about tying the game?
Should networks of the same species compete against each other? If they did, then it would seem impractical to have species in the first place, as networks in the same species competing against each other would not allow a successful species to rise to the top, as it would be fighting against each other.
Kind of out of topic, but with my original idea for the tic tac toe GA, would there be a better way to determine fitness? Would creating an intelligent AI be the best way to train a GA?
Thanks for your time, as this is somewhat lengthy, and for your feedback!
Answer: i'm the main developer of Neataptic, a Javascript neuro-evolution library.
Very effective! Realise that this is how real-life evolution happened as well: we kept on improving against other species, which forced them to improve as well.
Very practical, especially if you don't want to set up any 'rules' like you say, it makes the genomes find out what the rules are themselves.
Basically, you let each genome in the population play X games against other genomes; I advise you to let each genome play against every other genome in the population. An example of scoring would be giving the genome 1 point for winning, and 0.25 or 0.5 for a tie. Each game should always have a result!
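A sketch of such a round-robin scoring pass (the function names and the play interface are illustrative, not NEAT- or Neataptic-specific):

```python
import itertools

def round_robin_fitness(genomes, play, win=1.0, tie=0.5):
    """play(a, b) -> 1 if a wins, -1 if b wins, 0 for a tie."""
    scores = [0.0] * len(genomes)
    for i, j in itertools.combinations(range(len(genomes)), 2):
        result = play(genomes[i], genomes[j])
        if result > 0:
            scores[i] += win
        elif result < 0:
            scores[j] += win
        else:            # every game still has a result: a tie pays both sides
            scores[i] += tie
            scores[j] += tie
    return scores

# dummy 'game' where the larger genome always wins
print(round_robin_fitness([1, 2, 3], lambda a, b: (a > b) - (a < b)))
# [0.0, 1.0, 2.0]
```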
I'm not sure about this one, as I haven't implemented speciation.
I want to give you some examples that I have worked on:
Agar.io AI (neuro-evolved neural agents) - basically, I let neural networks evolve to get the highest score they can in agar.io, by competing against each other! It worked better than I expected.
Currently i'm working on new project, a kind of 'cops and robbers' style game. | {
"domain": "ai.stackexchange",
"id": 934,
"tags": "neural-networks, machine-learning, genetic-algorithms"
} |
What physical features determine if a planet is a major, minor or dwarf planet? | Question: Like many, when I was growing up, we always were taught, hence always learned that there were 9 planets. However, recently, decisions were made and all of a sudden there were 8 major planets and a series of dwarf and minor planets.
What physical features delineate a body being referred to as a major planet as opposed to dwarf planet or minor planet?
Answer: Planets
For a body to be classified as a planet it must have a few physical characteristics:
Mass
It must have enough mass to have a strong enough gravity to overcome electrostatic forces to bring it to a state of hydrostatic equilibrium.
Hydrostatic equilibrium is important because early in a planets life it is nearly entirely fluid, crust and all
Internal Differentiation
The life cycle of a planet essentially leads to denser heavier metals being at the centre of a planet, surrounded by a mantle which must at some point have been fluid (it can still be solid and called a planet as long as it used to be fluid.)
Atmosphere
This is usually driven by its mass but a planet should have an atmosphere. This means it should be massive enough to have a strong enough gravity to hold some gasses to its surface.
More massive planets are capable of keeping lighter gasses, such as hydrogen, bound to them too. I.e Jupiter.
Magnetosphere
A magnetosphere suggests that the body is still geologically active. This means they have flows of elements that conduct electricity in their interiors.
As specified by the IAU:
A celestial body that (a) is in orbit around the Sun, (b) has sufficient mass for its self-gravity to overcome rigid body forces so that it assumes a hydrostatic equilibrium (nearly round) shape, and (c) has cleared the neighbourhood around its orbit.
Minor Planets
Minor planets are things such as asteroids; they are usually small-mass, rocky bodies. This is different from small mass bodies that might be predominantly ice or water.
Dwarf Planets
Size and Mass
The IAU have not specified an upper or lower limit for mass to be considered a Dwarf Planet, therefore it is predominantly determined by one other feature called:
Orbital Dominance
Orbital dominance is achieved when a body has cleared its orbit of all other bodies.
For example planets are able to remove small bodies out of their area through impact, capture, or gravitational disturbance.
Any body that is incapable of doing so is therefore classified as a dwarf planet. So if Jupiter was incapable of clearing its neighbourhood of bodies then it too would be a Dwarf planet. | {
"domain": "astronomy.stackexchange",
"id": 18,
"tags": "planet, classification, dwarf-planets"
} |
Qt and non-Qt nodes in the same package? | Question:
I have used qt_create to make a Qt ROS package. However, when I try to add another basic ROS node (one that doesn't use Qt), I get the following error, stating that I have duplicate main functions:
make[3]: Entering directory `/home/phillip/ROS_workspace/ros-swarm-tools/interpreter/build'
Linking CXX executable ../bin/interface
CMakeFiles/interface.dir/src/interpreter.o: In function `main':
/home/phillip/ROS_workspace/ros-swarm-tools/interpreter/src/interpreter.cpp:158: multiple definition of `main'
CMakeFiles/interface.dir/src/main.o:/home/phillip/ROS_workspace/ros-swarm-tools/interpreter/src/main.cpp:20: first defined here
collect2: ld returned 1 exit status
It seems as though cmake is trying to link the new ROS node (interpreter) with the old one (interface) in the ../bin/ folder. I am not familiar enough with cmake to know how to modify CMakeLists.txt to prevent it from doing this. Am I right in thinking that this is the problem? If so, how would I go about fixing it, or is there an easy tutorial somewhere for combining multiple nodes in a ROS Qt project?
Originally posted by pmwalk on ROS Answers with karma: 65 on 2012-02-27
Post score: 2
Answer:
The qt_create script by default sets up a package for development of a single binary (keeps it simple).
The easiest way to get around it is to simply create two different packages. Or if you want to share some code between the two, one library package and two binary packages that depend on the library package.
I don't know how complicated your source structure setup, but the cmake you probably want to look at is the line in CMakeLists.txt:
file(GLOB_RECURSE QT_SOURCES RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} FOLLOW_SYMLINKS src/*.cpp)
That one assigns all the .cpp files under the src directory to QT_SOURCES and uses them to build the qt program. It's collecting the sources for both your mains together. Instead of globbing for sources, just comment that and assign sources manually. e.g.
set(QT_SOURCES src/main.cpp src/qnode.cpp src/main_window.cpp)
set(OTHER_SOURCES src/other_main.cpp)
rosbuild_add_executable(qfoo ${QT_SOURCES} ${QT_RESOURCES_CPP} ${QT_FORMS_HPP} ${QT_MOC_HPP})
rosbuild_add_executable(foo ${OTHER_SOURCES}) # non qt node
If you don't mind really learning the cmake though - check the qt_tutorials package for a complicated example of combining a build of a library and several nodes in the one package.
Originally posted by Daniel Stonier with karma: 3170 on 2012-02-27
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by pmwalk on 2012-02-28:
Perfect! That's exactly what I needed. I am planning on learning cmake for sure, but I wanted to try and fix this as soon as possible. | {
"domain": "robotics.stackexchange",
"id": 8415,
"tags": "qt"
} |
What's wrong with this LEAN proof? | Question: I'm learning to use the LEAN theorem prover and I got stuck in a proof of a simple fact in first-order logic:
$$ p(x) \rightarrow \forall x p(x) $$
My code is the following:
variables (A : Type) (p q : A → Prop)
example (x : A) : p x → (∀ x, p x) :=
assume H : p x,
take x : A,
show p x, from H
but it keeps saying that H has type p x_1, instead of what I expected, which is type p x.
Answer: Your example is not true, that's why you cannot prove it. If we assume your example is true (which we do using the sorry tactic), then we can prove false. The proof goes as follows. We first pick x to be 0 and p to be the property that a number n is equal to 0. So p x is p 0, i.e. 0 == 0, which is obviously true. Your example now provides us with the proof that p holds for all numbers. We pick 1, which is a contradiction because 0 <> 1.
variables (A : Type) (p : A → Prop)
lemma lma (x : A) : p x → (∀ x', p x') :=
assume H : p x,
take x' : A,
sorry
lemma inconsistent : false :=
assert h : 0 == 0 → (∀ x, x == 0), from (lma num (λ n, n==0) 0),
assert e : 1 == 0, begin
refine (h _ 1),
reflexivity
end,
show false, by cases e
The formula that you posted is also false in first order logic, even ∀ x, p x → (∀ x, p x) is not a true statement. Note that your example is different from the true statement (∀ x, p x) → (∀ x, p x).
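For contrast, the true variant goes through immediately in the same Lean 2-style syntax used above — the quantified hypothesis can be instantiated at whatever x we are given (a sketch with its own binders, so it stands alone):

```lean
example (A : Type) (p : A → Prop) : (∀ x, p x) → (∀ x, p x) :=
assume H : ∀ x, p x,
take x : A,
show p x, from H x
```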
The error message that you see stems from the fact that you are trying to use the hypothesis p x where the hypothesis p x' is expected (you are confusing the xes) | {
"domain": "cstheory.stackexchange",
"id": 3394,
"tags": "lo.logic, type-theory, coq"
} |
Preserving integral through downsampling | Question: I've got a high-resolution image segmentation, which I'm downsampling (and non-linearly deforming) to fit on an ROI image. I typically blur the high-res image before downsampling with a Gaussian whose FWHM is related to the downsampling factor. The result is high-res and low-res images with the same mean.
I'd like now for my high-res and low-res images to have the same sum. If I scale up the low-res image by the downsampling factor, will local sums be preserved across the image? Is there a better way to downsample and preserving local sums?
Answer: Increase the gain of your filter by the downsampling factor.
You do this by multiplying the filter taps by the downsampling factor. This is equivalent to doing exactly what you have done before and then multiplying the results by the downsampling factor. | {
"domain": "dsp.stackexchange",
"id": 1134,
"tags": "image-processing, downsampling"
} |
How can I optimise this bitmask generation algorithm? | Question: I am emulating the left shifting of a 128-bit integer using two 64-bit integers. For this I must calculate the bits that need to be moved into the higher portion. I have the following algorithm for generating a bitmask that masks the last until bits.
Where until is an integer in the range 0..64:
~(power(2,64-until)-1)
For convenience this is it in Python:
def mask(until): return bin((~(2**(64-until)-1))&0xFFFFFFFFFFFFFFFF)
Is there a faster way to do the same thing by avoiding the costly use of power()? Or is this the most efficient way?
Answer: You can compute a power of 2 using the left-shift operation, written << in C and related languages. For example, $2^n$ is the same as 1 << n.
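In Python the two forms can be compared directly (a sketch for the 64-bit case from the question):

```python
def mask_shift(until):
    # top `until` bits set, using a shift instead of power()
    return ~((1 << (64 - until)) - 1) & 0xFFFFFFFFFFFFFFFF

def mask_pow(until):
    return ~(2 ** (64 - until) - 1) & 0xFFFFFFFFFFFFFFFF

print(hex(mask_shift(3)))                                     # 0xe000000000000000
print(all(mask_shift(u) == mask_pow(u) for u in range(65)))   # True
```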
An alternative is to precompute the mask for all possible input values, and store them in the array. You'll have to try it out to see whether this is faster than the other suggestion. | {
"domain": "cs.stackexchange",
"id": 15784,
"tags": "optimization"
} |
Need advice regarding setup of a new robot platform | Question:
Greetings!
I am soon starting up my new hobby project: a SLAM-robot with no clear goal yet (part from navigation). All the parts have arrived but I need some advice regarding what stack to start with, as I do not want to assemble all software components from scratch. I will start of by presenting the list of hardware.
CPU-board: Hardkernel ODROID-U3
Controller board: Arduino plug-in board for ODROID-U3
Motors: Two pretty fat stepper motors
Stepper motor drivers: Two L298N based stepper motor driver boards
Sensor: Microsoft Kinect
The robot will be assembled on a frame built of sheet aluminium together with a 360º turning nosewheel and airplane wheels for the stepper motors. The Kinect sensor will be mounted on the top of the aluframe.
Now there are several issues that I need to address and I hope that the ROS community can help me with some of them or at leaste provide som useful suggestions.
The ODROID officially supports releases of ubuntu 13.10 and up. As I understand the latest Ubuntu officially supported by ROS is 13.04. Obviously there is a gap here and I have the options of installing a non-supported OS on the ODROID, or compiling ROS from source - I am not very impressed by any of those two solutions. Another option is to run Android but I have a bad feeling about that since it seems very experimental.
I do not want to build my application from scratch as this is a waste of time imho. What I need to figure out is which pre-assembled stack to use in order to get as close as possible to my particular setup. I'd be happy if the only thing I'd actually need to implement in code is the stepper motor interface on the arduino.
What is the recommended workflow for developing the robot application? SSH in and have the build environment located on the ODROID? Cross compile on a PC?
Finally, I am interested in some cool applications for the robot. I.e. vacuum cleaner, spy etc.
Let's get this discussion started!
/Simon
Originally posted by aerkenemesis on ROS Answers with karma: 21 on 2014-03-07
Post score: 2
Original comments
Comment by demmeln on 2014-03-07:
Please don't open duplicates.
Answer:
I have had no trouble at all running the ARM debs on top of the LUbuntu 13.05 U2 image; particularly since the Odroid U2 and U3 are nearly identical CPUs and boards. I know a few people that are even running the ROS navigation stack using these tools.
For basic navigation, the ROS navigation stack is a good starting point and is available as pre-built debs for ARM. You won't have to write the complex algorithms, but you will still have to understand them at a basic level so that you can tune them to run well on your particular hardware. Expect this tuning to take at least a week of full-time work, once your base drivers are up and running well. The RobotSetup guide is a good place to start.
Personally, I like to SSH to my robot and do everything on-board. A number of other ARM users like to set up a cross-compilation toolchain, edit and build off-board, and then copy the final binaries to their boards. I recommend that you try both approaches and see which approach works better for you.
Originally posted by ahendrix with karma: 47576 on 2014-03-07
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by nckswt on 2014-07-09:
I'm working with a similar setup as aerkenemesis - how do you configure your sources.list for XUbuntu 13.10? Or would I have to install a new version of Ubuntu? | {
"domain": "robotics.stackexchange",
"id": 17201,
"tags": "ros, turtlebot, ros-hydro, stack, odroid"
} |
Kobuki unpredictability | Question:
I've been trying to run python scripts on Kobuki Turtlebot in order to make it perform specific patterns on the floor. I've noticed that while executing the same script on the same spot, it provides radically different performances. This is an example of my script:
#!/usr/bin/env python
# for robot A
import rospy
from kobuki_testsuite import TravelForward
from std_msgs.msg import Empty
from geometry_msgs.msg import Twist
from std_msgs.msg import Empty
import roslib; roslib.load_manifest('kobuki_testsuite')
import sys
import copy
import os
import subprocess
from tf.transformations import euler_from_quaternion
from math import degrees
from sensor_msgs.msg import Imu
from geometry_msgs.msg import Quaternion
global stage
stage = 1
global length
length = 1 # length in metres
global width
width = 1
#des = 0.5 # desired angle in radians - it's negative if angle is larger than 3.14
def resetter():
    try:
        pub = rospy.Publisher('/mobile_base/commands/reset_odometry', Empty, queue_size=10)
        rospy.init_node('resetter', anonymous=True)
        rate = rospy.Rate(10) # 10hz
        for i in range(0,10):
            pub.publish(Empty())
            rate.sleep()
    except rospy.ROSInterruptException:
        pass

def ImuCallback(data, angle):
    global stage
    quat = data.orientation
    q = [quat.x, quat.y, quat.z, quat.w]
    roll, pitch, yaw = euler_from_quaternion(q)
    sys.stdout.write("\r\033[1mGyro Angle\033[0m: [" + "{0:+.4f}".format(yaw) + " rad] ["\
        + "{0: >+7.2f}".format(degrees(yaw)) + " deg]"\
        + " \033[1mRate\033[0m: [" + "{0:+.4f}".format(data.angular_velocity.z) + " rad/s] ["\
        + "{0: >+7.2f}".format(degrees(data.angular_velocity.z)) + " deg/s] ")
    if angle > 0:
        if yaw > angle or yaw < 0:
            stage += 1
    else:
        if yaw > angle and yaw < 0:
            stage += 1

def publish(pub, v, sec, step):
    cnt = 0
    m = sec * 2
    while cnt < m and not rospy.is_shutdown() and stage == step:
        pub.publish(v)
        cnt = cnt + 1
        rospy.sleep(0.5)

def rotation_anticlockwise(step, angle, r):
    rospy.Subscriber("/mobile_base/sensors/imu_data", Imu, ImuCallback, angle)
    cmdvel_topic = '/cmd_vel_mux/input/teleop'
    pub = rospy.Publisher(cmdvel_topic, Twist)
    vel = Twist()
    print "rotation"
    print stage
    while not rospy.is_shutdown() and stage == step:
        vel.angular.z = 0.5
        vel.linear.x = 0.5 * r
        rospy.loginfo("Anticlockwise")
        publish(pub, vel, 18, step)
        rospy.loginfo("Done")
        rospy.sleep(2)
    # if stage == 0:
    #     vel.angular.z = 0
    #     rospy.sleep(2)

if __name__ == '__main__':
    global stage
    resetter()
    rotation_anticlockwise(1, 1.57, 0)
    resetter()
    print stage
    stage = 2
    rotation_anticlockwise(2, -0.02, 0.2 * length)
What I meant for it to do is to rotate to 90 degrees and then make a full circle with a specified radius.
Most of the times I run the script I get it to rotate a bit and then stand still or rotate and then make just a part of a circle.
What troubles me is I don't understand whether the problem is my script or some mechanics inside the robot. Could it be that its bumper sensors are too sensitive and make it stop unexpectedly? If so, is there a way around?
Originally posted by Elizaveta.Elagina on ROS Answers with karma: 28 on 2015-02-23
Post score: 0
Original comments
Comment by dornhege on 2015-02-23:
Can you try moving the rospy.init_node, so it is called exactly once. About the bumpers: Do they actually actuate, so that the robot stops?
Comment by Elizaveta.Elagina on 2015-02-23:
Thank you for the advice about rospy.init_node, but the problem seems to be something else. Are the bumpers only actuated when they're pressed? Apparently, they have nothing to do with my problem.
Comment by dornhege on 2015-02-23:
Usually yes. The standard behavior might even be to slightly drive backwards, depending on what is running. Just try pressing them manually and see. I have not seen bumpers misfire.
Answer:
The problem was that the function ImuCallback doesn't accept new parameters as ordinary functions. And if it's called more than once, it sometimes works with old parameters instead of newly received. To solve this I give the angle as a global variable not as a parameter for a callback function.
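A minimal sketch (mine, not the poster's actual code) of the workaround described above: the target angle lives in a module-level variable that the callback reads on every invocation, so retargeting takes effect immediately instead of being frozen at whatever value was bound when the subscriber was created.

```python
# Hypothetical stand-in for the ROS callback, stripped of rospy so it
# can run anywhere. target_angle is a global the callback reads live.
target_angle = 1.57
stage = 1

def imu_callback(yaw):
    global stage
    # compares against the CURRENT target_angle, not a captured copy
    if yaw >= target_angle:
        stage += 1

imu_callback(1.0)        # below target: stage unchanged
imu_callback(1.6)        # above target: stage advances to 2
target_angle = -0.02     # retargeting takes effect immediately
imu_callback(0.5)        # above new target: stage advances to 3
print(stage)             # 3
```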
Originally posted by Elizaveta.Elagina with karma: 28 on 2015-03-05
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by karmel on 2018-01-24:
hi ,can you tell me what the pattern was in your code , and if you can give me the edited code ,please .
thanks | {
"domain": "robotics.stackexchange",
"id": 20955,
"tags": "python, turtlebot"
} |
Which animal dug this 5 cm diameter hole deep into mud and built a large mound around it? | Question: I noticed the large structure below today at the edge of a very small pond in northern Taiwan lowlands. It is June and it has rained a bit the past few days, and this 15 cm mud dome with a 5 cm diameter hole going straight down looked fresh and wet and glistening.
It is hard to tell for sure but it looks like the burrow went deep enough to reach the water line of the pond next to it, but I could not tell because it's dark inside and I did not have a flashlight.
The mound is large, perhaps 15 cm tall and wide, which suggests that a lot of material has been removed and the tunnel could be quite deep.
The pond is only about 3 x 5 meters in diameter.
Question: Snake? Crab? Mole? What further information can I collect about it that would help to identify what produced this?
Answer: A crustacean (land crayfish). There are a lot to choose from.
http://web.nchu.edu.tw/~htshih/crab_fw/fw-crabe.htm | {
"domain": "biology.stackexchange",
"id": 10665,
"tags": "species-identification, limnology, habitat"
} |
At what distance from the Sun can planetary moons exist? | Question: Mercury and Venus are theorized to have no moons because they are so close to the sun.
Is there a theoretical distance in which moons tend to exist based on simulations?
Answer: There are several factors determining the inner limit to moons. Perhaps the simplest is that it needs to stay inside the Hill sphere, the region around the planet where the planet's gravity dominates over the sun's. If the planet's orbit has semi-major axis $a$ and eccentricity $e$ the farthest the moon can orbit is $$r_H \approx a(1-e)\sqrt[3]{\frac{m}{3M}}$$ where $m$ is the planet mass and $M$ the sun mass.
The closest a satellite can orbit a planet is the Roche limit, $$r_R = r_m\sqrt[3]{\frac{2m}{m_m}}$$ where $r_m$ and $m_m$ is the radius and mass of the moon. Equating $r_H=r_R$ and assuming $e=0$ to get the minimal $a$ where a moon is possible gives
$$ a_{min}= r_m\sqrt[3]{\frac{6M}{m_m}}.$$
For a Moon with $r_m=1737.4$ km and $M/m_m=27090711$ (e.g. our moon), this is 0.006 AU (948,179 km), 1.36 solar radii! This is still just barely outside the Roche limit for an Earth-sized planet relative to the sun.
(See (Donnison 2010) for a more careful estimation for the full three-body problem applied to moons. (Domingos, Winter & Yokoyama 2006) found the rough limits $a_{crit}\approx 0.4895(1-1.0305e_{planet}-0.2738e_{sat})r_H$ for prograde satellites and about twice this limit for retograde satellites.)
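A quick numeric check of the $a_{min}$ formula above (my own sketch; the lunar radius and Sun/Moon mass ratio are the values used in the answer). Note that the planet mass $m$ cancels when equating $r_H = r_R$, which is why it does not appear in the script.

```python
# a_min = r_m * (6*M/m_m)^(1/3), from setting Hill radius equal to Roche limit
AU_KM = 149_597_870.7        # kilometres per astronomical unit
r_m = 1737.4                 # lunar radius in km
M_over_mm = 27_090_711       # sun mass / moon mass

a_min_km = r_m * (6 * M_over_mm) ** (1 / 3)
print(a_min_km)              # ~9.48e5 km, matching the answer's 948,179 km
print(a_min_km / AU_KM)      # ~0.006 AU
```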
However, while this shows that in principle you can have moons extremely close to stars, in practice they are not going to occur.
The most obvious problem is that very close planets will become tidally locked to the sun, and that will make the moon spiral inward since it will dissipate orbital energy through tidal deformation of the planet. The effect becomes bigger for heavier satellites. (Barnes & O'Brien 2002) calculate the following allowed region in a 4.6 Gy old system around a 1 Jupiter mass planet:
The curve scales as $m_m \propto a^{13/2} m^{8/3}r^{-5}$; for an Earth-like primary the corresponding masses have to be 2.7% of the Jupiter case (although the different tidal properties of the planets makes this estimate somewhat dodgy).
There are other destabilizing factors for small bodies close to the sun such as the Yarkovsky effect. So while a tiny moon can reside close to the sun, it will likely not remain there long.
The opposite question, if there is an outer limit to planets holding satellites, presumably can be answered negatively. Clearly there are less and less disturbances the further you go outward, and the only issue is if a planet can accrete or capture a satellite. Given the common presence of satellites around trans-Neptunian objects this seems fairly common. | {
"domain": "astronomy.stackexchange",
"id": 5598,
"tags": "natural-satellites, mathematics"
} |
Entropy Maximization using undetermined multipliers | Question: This is from Problems in Thermodynamics and Statistical Physics by P.T. Landsberg
A system can be in any one of $N$ states. Use the method of undetermined multipliers to show that the entropy $S = -k \sum_i p_i \ln p_i$ is maximized when $p_i = 1/N$, giving $$S = k \ln N\,.$$
the solution is as follows:
Using an undetermined multiplier $\alpha$, write $$f = -k \sum_i(p_i \ln p_i - \alpha p_i)$$
$$\frac{\partial f}{\partial p_j} = -k (\ln p_j + 1 -\alpha ) = 0$$
$\ln p_j = \alpha - 1$ for all $j$, therefore all $p_i$ are equal. The maximum entropy is
$$S_{max} = -k \sum_i(\frac{1}{N}\ln \frac{1}{N}) = k \ln N$$
I don't understand the last part on how to go from the sum to the final expression, given $\sum_i p_i = 1\;.$
Answer: It's a very simple mathematical identity, which arises because the sum is over $N$ things that are all the same. So
$$
-k \sum_{i=1}^N \frac{1}{N}\ln \frac{1}{N}
$$
is the same as
$$
-k N \left( \frac{1}{N}\ln \frac{1}{N} \right)
$$
and I think you should be able to do the rest from there. | {
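A small numeric sanity check (my own illustration, taking $k = 1$): the uniform distribution attains $S = \ln N$, and any other normalized distribution has lower entropy.

```python
import math
import random

def entropy(p):
    # Shannon/Gibbs entropy with k = 1; skip zero-probability terms
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

N = 8
uniform = [1 / N] * N
print(entropy(uniform), math.log(N))   # both ~2.0794

random.seed(0)
w = [random.random() for _ in range(N)]
p = [wi / sum(w) for wi in w]          # a random normalized distribution
print(entropy(p) <= entropy(uniform))  # True: uniform maximizes entropy
```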
"domain": "physics.stackexchange",
"id": 9831,
"tags": "statistical-mechanics"
} |
The Blacklist - follow-up | Question:
Follow-up from this question using @Toby Speight's answer:
The primary concern is jq improvement/optimization, but please detail any others.
#!/bin/sh
set -eu
sources=$(mktemp)
trap 'rm "$sources"' EXIT
curl -s -o "$sources" https://raw.githubusercontent.com/T145/packages/master/net/adblock/files/adblock.sources
for key in $(jq -r 'keys[]' "$sources")
do
    case $key in
        gaming | oisd_basic )
            # Ignore these lists
            ;;
        * )
            url=$(jq -r ".$key.url" "$sources")
            rule=$(jq -r ".$key.rule" "$sources")
            curl -s "$url" |
                case $url in
                    *.tar.gz) tar -xOzf - ;;
                    *) cat ;;
                esac |
                gawk --sandbox -- "$rule"
    esac
done |
    sed -e 's/\r//g' -e 's/^/0.0.0.0 /' | sort -u > the_blacklist.txt
# use sort over gawk to merge sort multiple temp files instead of using up limited memory
Answer: Don't use a for loop to iterate over program output. See http://mywiki.wooledge.org/BashFAQ/001
Try this -- extract the keys, urls, rules all at once:
jq -r 'keys[] as $k | [$k, .[$k].url, .[$k].rule] | @tsv' "$sources" |
while IFS=$'\t' read key url rule; do
    case $key in
    ...
If, for some reason, your shell does not understand $'\t', use
while IFS="$(printf '\t')" read ... | {
"domain": "codereview.stackexchange",
"id": 41437,
"tags": "linux, shell, sh, jq"
} |
Regular Expression matching customer number strings | Question: Using VB.NET, I have created an AddIn for Autodesk Inventor and the customer has a bunch of drawing number strings which follow this sort of scheme:
P01867-13-TP09-001-4950-1775-1175-895-1125-835
P01867-13-TP09-002-4950-1775-1045-895-1035
P01867-13-TP02-019-L-1137-275-852-102
P01867-13-TP02-019-L-1137-275-852-102
P01867-13-TP02-019-R-1137-275-852-102
P01867-13-TP02-021-L-1137-1055-1372
P01867-13-TP02-021-L-1137-535-1027
P01867-13-TP02-021-L-1137-795-1184
P01867-13-TP02-021-R-1137-1055-1372
P01867-13-TP02-021-R-1137-535-1027
P01867-13-TP02-021-R-1137-795-1184
P01867-13-TP02-025-L-1137-1315-1581
P01867-13-TP02-025-R-1137-1315-1581
P01867-13-TP03-005
P01867-13-TP02-019-L-1137-275
P01867-13-TP02-019-R-1137-275
P01867-13-TP02-019-R-1137
P01867-13-TP02-019-L-1137
In order to account for these groups of three digits within the variations I have created the following regex:
(\w*\d*-\d*-\w*\d*-\d*-\w-)(\d*)-(\d*)-(\d*)-(\d*)-(\d*)|(\w*\d*-\d*-\w*\d*-\d*-\w-)(\d*)-(\d*)-(\d*)-(\d*)|(\w*\d*-\d*-\w*\d*-\d*-\w-)(\d*)-(\d*)-(\d*)|(\w*\d*-\d*-\w*\d*-\d*-\w-)(\d*)-(\d*)|(\w*\d*-\d*-\w*\d*-\d*-\w-)(\d*)|(\w*\d*-\d*-\w*\d*-\d*-)(\d*)-(\d*)-(\d*)-(\d*)-(\d*)|(\w*\d*-\d*-\w*\d*-\d*-)(\d*)-(\d*)-(\d*)-(\d*)|(\w*\d*-\d*-\w*\d*-\d*-)(\d*)-(\d*)-(\d*)|(\w*\d*-\d*-\w*\d*-\d*-)(\d*)-(\d*)|(\w*\d*-\d*-\w*\d*-\d*-)(\d*)|(\w*\d*-\d*-\w*\d*-\d*)|.*(\w*\d*-\d*-)(\d*)-(\d*)-(\d*)-(\d*)-(\d*)|.*(\w*\d*-\d*-)(\d*)-(\d*)-(\d*)-(\d*)|.*(\w*\d*-\d*-)(\d*)-(\d*)-(\d*)|.*(\w*\d*-\d*-)(\d*)-(\d*)|.*(\w*\d*-\d*-)(\d*)|.*(\w*\d*-\d*-\w-)(\d*)-(\d*)-(\d*)-(\d*)-(\d*)|.*(\w*\d*-\d*-\w-)(\d*)-(\d*)-(\d*)-(\d*)|.*(\w*\d*-\d*-\w-)(\d*)-(\d*)-(\d*)|.*(\w*\d*-\d*-\w-)(\d*)-(\d*)|.*(\w*\d*-\d*-\w-)(\d*)
I now have to add the capability of looking for a sixth group of digits so figured I would ask here if there is a method within regex (which I may have overlooked) that will allow me to improve upon/simplify the above monster.
Answer: ^P\d{5}-13-TP\d{2}-\d{3}(-(L|R|\d{4})(-\d{3,4})*)?$
Your current regex is WAY too forgiving. First of all, every one of your examples starts with a P followed by some numbers, but you accept ANY COMBINATION of letters at the beginning. I'm assuming that ALEX01867-13-TP02-019-L-1137 isn't a valid key, so you should take steps to reject it by using open-ended greedy quantifiers (*, +) as little as possible. Using \d{3,4} matches a digit between 3 and 4 times, so that will let you limit the sort of input you accept.
The same goes for the 5th group - according to your examples, it's either L, R, or 4 digits. In Regex, that looks like this: (L|R|\d{4})
Next, you are using alternation (option1|option2) to capture the different "forms" your string comes in as, but you are repeating a bunch of stuff (for example, the \w*\d* at the beginning). You can limit the scope of the alternation by surrounding it in brackets (()). You can see this in action with the (L|R|\d{4}) example - that whole bracket group becomes a single token that matches somewhere in a string (or doesn't).
Sometimes the string ends after the 4th group (Before the L/R group), and sometimes it doesn't. Instead of using alternation to solve this, which makes the regex VERY long, you can just surround the entire regex AFTER that point in brackets with a question mark (an-(example)?). This makes the entire second part optional.
Finally, your problem asks if there is a simple method to improve the regex. By ending it in (-\d{3,4})* you can match ANY length of additions to the end, assuming the all come in the form -015 or -1992 or whatever. If you knew that there was always a max of 15 numbers added to the end, you could change that star (*) to a max quantifier ({,15}). If sometimes the number only has two digits, change the {3,4} to {2,4}, etc.
See it in action here | {
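To make the improvement concrete, here is a quick test of the proposed pattern (my own sketch, using Python's re module rather than VB.NET) against a few of the question's sample numbers, plus the ALEX... string that should be rejected:

```python
import re

# the answer's proposed pattern, verbatim
pattern = re.compile(r'^P\d{5}-13-TP\d{2}-\d{3}(-(L|R|\d{4})(-\d{3,4})*)?$')

samples = [
    'P01867-13-TP09-001-4950-1775-1175-895-1125-835',
    'P01867-13-TP02-019-L-1137-275-852-102',
    'P01867-13-TP03-005',
    'P01867-13-TP02-019-R-1137',
]
print(all(pattern.match(s) for s in samples))          # True
print(pattern.match('ALEX01867-13-TP02-019-L-1137'))   # None (rejected)
```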
"domain": "codereview.stackexchange",
"id": 16240,
"tags": "regex, vb.net"
} |
Negative Energy in Inflation Theory (Low/Zero Energy Universe) | Question: I've been reading Max Tegmark's book: Our Mathematical Universe. It's very interesting, but I wanted to know more about one particular thing. The book simplifies things and I know inflation theories to be varied and complex, but I will briefly describe what Max was saying and hopefully someone can pick up one what I'm talking about.
Max describes the inflation period as containing a non-diluting, inflating substance, where the energy used to inflate it causes its mass to increase through extreme negative pressure. And gravity repels this substance, accelerating its growth, because the negative pressure causes negative gravity.
So where did this energy come from to create all this new mass? Well he says that the gravitational force provided this energy, and that to balance the energy it created negative energy in the gravitational field. Every time the gravitational field accelerates something it gains negative energy apparently.
So what does this mean? It clearly means that for all or nearly all energy in the universe there must be an equal amount of negative energy in the gravitational field (or anywhere else that can have negative energy). But what is this energy doing? Surely this means the universe could cancel out all energy and return to nothing or nearly nothing?
So can someone please explain what this negative energy actually is.
Thank you.
Answer: In Newtonian mechanics, a particle might gain kinetic energy while a corresponding gravitational potential energy decreases, thus you get that kind of conservation of energy. The total energy is the same before and after any event. However, the amount of energy depends on who's looking.
In Special Relativity a transfer of energy has to happen at an event (a specific time and specific location), so you have to have changes in kinetic energy be compensated by a loss of energy in another body or field. An example is the electromagnetic field which has an energy density, momentum density, and stress at every point in spacetime. Energy can transfer from the electromagnetic field to the particle and thus you get conservation of energy. It can be expressed by saying the energy in some region of space at some time is equal to the energy at an earlier time plus or minus the net flux of energy in or out of that region of space during that time interval. But energy conservation is just one part of a unified energy-momentum conservation, and that conservation can be expressed in a frame independent manner.
In General Relativity you generalize the kind of conservation of energy as is found in Special Relativity, but the tensor T, called the stress-energy tensor, that keeps track of the energy density, momentum density and stress at every event in spacetime is actually the same tensor from Special Relativity, and so it has no terms that correspond to gravitational potential energy. Breaking the stress-energy tensor into just an energy part is frame dependent and General Relativity is formulated in a frame independent manner. Some people try to make an energy psuedo-tensor, but that is a different tensor. And it is the stress-energy tensor T (not the psuedo-tensor) that is the source of the gravitational curvature, just as charges and currents are the source of electromagnetic waves.
So simply put, don't expect General Relativity to have something like "total energy of the universe", because that's just something that isn't naturally there. There is a stress-energy tensor, which if you pick a frame gives you an energy density at an event, but there is usually no natural frame, so no natural energy density.
But when talking about the stress-energy tensor in different epochs, there might be a sense where is has certain properties and at other times has other properties. One property a stress-energy tensor can have is whether it satisfies various so-called energy conditions. And a common consequence of many energy conditions is that the energy density (for every frame) is non-negative. So the question about whether a particular stress-energy tensor has a negative energy density is a legitimate question in General Relativity. | {
"domain": "physics.stackexchange",
"id": 15770,
"tags": "gravity, energy, cosmology, cosmological-inflation"
} |
what happen to the speed of gas particles when the pressure increased? | Question: There is gas particles inside a piston, if the piston was pushed inward, the volume will decrease and the pressure will increase, so my question is
What will happen to the speed of the gas particles?
Answer: Kinetic theory of gases connects the macroscopic properties $(eg, P,V,T)$ to the microscopic properties $(eg, v_{rms})$ of a system. The assumptions of KTG apply to my answer.
Let me first state the symbols that I am going to use and the parameter they represent :
$P$ - Pressure of gas , $V$- Volume of gas , $T$ - Temperature of gas , $n$ - number of moles of gas , $M$ - Mass of gas in V volume , $\rho$ - density of gas , and $M'$ - Molecular mass of gas.
According to KTG,
$P=\frac{\rho v_{rms}^{2}}{3}$.
$v_{rms}^{2}=\frac{3PV}{M}$ ,
Using ideal gas equation, $PV=nRT$
$v_{rms}^{2}=\frac{3RT}{M'}$ , because $M=nM'$.
As you can see,
$v_{rms}\propto \sqrt{T}$.
The change in speed of the gas particles actually depends on the type of process the system undergoes.
We can have two cases :
1) When $\Delta T =0$:
If $\Delta {T}=0$ (isothermal process, $P \propto \frac{1}{V}$), then there will be no change in the speed of the gas particles.
In this case, when the gas is compressed to half the volume, the pressure is doubled. Therefore, in such a case, the speed of the gas particles remains the same.
2) When $\Delta T \neq 0$ :
If $\Delta T \neq 0 $ (Say, adiabatic process), then the speed of the particles will increase with increase in temperature and vice versa. Adiabatic expansion cools the system and adiabatic compression heats up the system.
Similarly, we can make comparisons , For :
Isobaric processes - $\Delta P=0, V\propto T \propto v_{rms}^{2}$ and
isochoric processes - $\Delta V =0,P\propto T \propto v_{rms}^{2}$.
Conclusion : A general comment cannot be made. The process that the system is undergoing has to be mentioned. | {
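A quick numeric illustration of the conclusion (my own sketch; nitrogen's molar mass is used just as an example): $v_{rms}=\sqrt{3RT/M'}$ depends only on temperature, so an isothermal pressure change leaves the particle speeds untouched, while doubling $T$ scales $v_{rms}$ by $\sqrt{2}$.

```python
import math

R = 8.314      # gas constant, J/(mol K)
M_N2 = 0.028   # molar mass of N2, kg/mol

def v_rms(T):
    # root-mean-square speed from kinetic theory: sqrt(3RT/M')
    return math.sqrt(3 * R * T / M_N2)

print(v_rms(300))               # ~517 m/s at room temperature
# isothermal compression changes P and V but not T, so v_rms is unchanged
print(v_rms(600) / v_rms(300))  # ~1.414, i.e. sqrt(2)
```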
"domain": "physics.stackexchange",
"id": 40276,
"tags": "pressure, gas"
} |
New message format for compressed pointcloud2? | Question:
I'm trying to subscribe to the pointcloud message (~10 MByte per frame) from a 3D camera, but I find the DDS transmission consumes a lot of bandwidth, and the latency is unacceptable under ROS2.
Considering that pointclouds are more and more important for robot usage and actually have big potential to be compressed, I'd like to check if it's possible to add a new message format to sensor_msgs, something like CompressedPointCloud2 for data compressed by pcl. That way the transmission efficiency would improve a lot.
Is there any other better idea?
Originally posted by cathyshen on ROS Answers with karma: 33 on 2018-07-24
Post score: 2
Original comments
Comment by Dirk Thomas on 2018-07-24:
Please include more information in your question. For example what programming language are you experiencing this with (C++ or Python), what version of ROS 2 you are using, etc.
Comment by dejanpan on 2018-07-24:
@cathyshen also, can you in addition to what Dirk asked, provide e.g. a bag with the PointCloud2 data so we can benchmark?
Comment by cathyshen on 2018-07-25:
@Dirk Thomas, @dejanpan, I'm using C++ and the ROS2 version is Bouncy. I can provide a PCD file with 1 frame 640x480 pointcloud2 data inside, but the file size is too big to share here which is about 12M. A manipulate PCD file should also work to reproduce this issue.
Comment by peci1 on 2018-11-21:
Related github issue: https://github.com/ros-perception/perception_pcl/issues/152
Answer:
There is a Discourse thread regarding this issue:
https://discourse.ros.org/t/image-and-pointcloud2-performance-on-ros2/5391
In short: The performance on large messages like PointCloud2 seems to be much worse in ROS2 than in ROS1. Since transferring large point clouds is a common use case, this is unacceptable, and I expect this to be fixed at some point. This is the core issue behind your problem, so it should be addressed first.
That being said, it sounds like a great idea to offer point cloud compression! However, before a message is added to the core ROS message packages like sensor_msgs, there should be at least one (but better several) implementations that use the proposed message for a longer time. So the first step would be for you to create that message in your own code and provide an implementation. If this is successful and there is wide-spread interest, one can begin the process of adding the message to sensor_msgs. If you want to go down this road, let me suggest Google Draco for point cloud compression:
https://github.com/google/draco
I haven't used it myself yet, but it looks extremely intriguing.
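As a rough illustration of why point clouds have "big potential to be compressed" (my own sketch, not from the thread, using generic zlib rather than a dedicated codec like Draco): a synthetic grid-like cloud packed as raw x/y/z floats shrinks substantially under lossless compression, because neighbouring points share most of their bytes.

```python
import struct
import zlib

# 32,000 synthetic points on a regular grid, packed as little-endian floats
# the way a PointCloud2 data buffer typically stores them (x, y, z per point)
points = [(x * 0.01, y * 0.01, 1.0) for x in range(200) for y in range(160)]
raw = b''.join(struct.pack('<fff', *p) for p in points)   # 384,000 bytes

compressed = zlib.compress(raw, 6)
print(len(raw), len(compressed), len(compressed) / len(raw))
```

Real sensor clouds are noisier than this grid, so the ratio will be worse in practice, but the structural redundancy the sketch exploits is still there.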
Originally posted by Martin Günther with karma: 11816 on 2018-07-24
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by cathyshen on 2018-07-25:
That's really informative! Glad to know there is patch to fix the big-size data transmission issue in fastrtps.
It's fair to have a reference implementation for compressed pointcloud msgs. I'll look at both pcl Octree and draco solutions.
Comment by Martin Günther on 2018-07-26:
Hmm... but if you use an octree to downsample a point cloud, the result will be a regular point cloud with fewer points, not a lossless compressed pointcloud. So we wouldn't need a new message type. Am I missing something?
Comment by cathyshen on 2018-07-26:
you probably are right, let me do further investigation on kinds of solutions, and back to you.
Comment by paplhjak on 2019-08-12:
I have implemented a point_cloud_transport package for transport of PointCloud2 messages, which uses plugin interface ala image_transport. In the README, you can also find links to other repositories, which contain but are not limited to:
plugin for compression using Google Draco
template for implementing your custom plugins
tutorial on how to use the package
Any help in further development of the project is welcome and appreciated.
Hope that this helps. | {
"domain": "robotics.stackexchange",
"id": 31337,
"tags": "ros2, pointcloud"
} |
Does irregular reflection form images? | Question: In one of my test papers at school, we had a true-false question which said "Irregular reflection can form an image." I marked it true, however I was not given any marks for the question.
I asked my teacher about it, he told me that the "lights scatter and do not form images." I was not satisfied with this explanation so I Googled for it; I found only a Wikipedia page about diffused reflection useful.
According to my knowledge, an image is an intersection of reflected or refracted light rays. So in the image about irregular reflection, I find intersection of reflected light rays.
Answer: Simply because two light rays intersect at a point it does not mean that an image is formed.
You need millions (not necessarily, but a lot) of light rays to intersect at a point to form an image.
The reason is that the intensity of light emerging from a two-ray intersection is too low for any human eye to detect. For an image formed by a concave mirror, however, the intensity is clearly much greater, allowing the eye to detect the image.
"domain": "physics.stackexchange",
"id": 43395,
"tags": "optics, visible-light, reflection"
} |
FFT-less $O(n\log n)$ algorithm for pairwise sums | Question: Suppose we are given $n$ distinct integers $a_1, a_2, \dots, a_n$, such that $0 \le a_i \le kn$ for some constant $k \gt 0$, and for all $i$.
We are interested in finding the counts of all the possible pairwise sums $S_{ij} = a_i + a_j$. ($i = j$ is allowed).
One algorithm is to construct the polynomial $P(x) = \sum_{j=1}^{n} x^{a_j}$ of degree $\le kn$, and compute its square using the Fourier transform method and read off the powers with their coefficients in the resulting polynomial. This is an $O(n \log n)$ time algorithm.
I have two questions:
Is there an $O(n \log n)$ algorithm which does not use FFT?
Are better algorithms known (i.e $o(n \log n)$)? (FFT allowed).
Answer: It would seem that this problem is equivalent to integer/polynomial squaring:
1. It is known that polynomial multiplication is equivalent to integer multiplication.
2. Obviously, you already reduced the problem to polynomial/integer squaring; therefore this problem is at most as hard as squaring.
Now I will reduce integer squaring to this problem:
Suppose you had an algorithm:
$$
F(\mathbf{\vec a})\rightarrow P^2(x),\\
\text{where } P(x)=\sum_{a_i \in \mathbf{\vec a}} x^{a_i}
$$
This algorithm is essentially the algorithm you request in your question. Thus, if I had a magic algorithm that can do this, I can make a function, ${\rm S{\small QUARE}}\left(y\right)$ that will square the integer $y$ (oh yes, I do love mathjax :P):
$$
\begin{array}{rlr}\hline
&\mathbf{\text{Algorithm 1}} \text{ Squaring}&\hspace{2em}&\\
\hline
{\tiny 1.:}&\mathbf {\text{procedure }} {\rm S{\small QUARE}}\left(y\right):\\
{\tiny 2.:}&\hspace{2em}\mathbf {\vec a} \leftarrow ()
&&\triangleright~\mathbf {\vec a}\text{ starts as empty polynomial sequence}\\
{\tiny 3.:}&\hspace{2em}i \leftarrow 0\\
{\tiny 4.:}&\hspace{2em}\mathbf{while}~y\ne0~\mathbf{do}
&&\triangleright~\text{break }y\text{ down into a polynomial of base }2\\
{\tiny 5.:}&\hspace{4em}\mathbf{if}~y~\&~1~\mathbf{then}
&&\triangleright~\text{if lsb of }y\text{ is set}\\
{\tiny 6.:}&\hspace{6em}\mathbf{\vec a} \leftarrow \mathbf{\vec a}i
&&\triangleright~\text{append }i\text{ to }\mathbf{\vec a}~\text{(appending }x^i\text{)}\\
{\tiny 7.:}&\hspace{4em}\mathbf{end~if}\\
{\tiny 8.:}&\hspace{4em}i \leftarrow i + 1\\
{\tiny 9.:}&\hspace{4em}y \leftarrow y \gg 1
&&\triangleright~\text{shift }y\text{ right by one}\\
{\tiny 10.:}&\hspace{2em}\mathbf{end~while}\\
{\tiny 11.:}&\hspace{2em}P^2(x) \leftarrow F\left(\mathbf{\vec a}\right)
&&\triangleright~\text{obtain the squared polynomial via } F\left(\mathbf{\vec a}\right)\\
{\tiny 12.:}&\hspace{2em}\mathbf{return}~P^2(2)
&&\triangleright~\text{simply sum up the polynomial}\\
{\tiny 13.:}&\mathbf {\text{end procedure}}\\
\hline
&\end{array}
$$
Python (test with codepad):
#https://cs.stackexchange.com/q/11418/2755
def F(a):
    n = len(a)
    for i in range(n):
        assert a[i] >= 0
    # (r) => coefficient
    # coefficient \cdot x^{r}
    S = {}
    for ai in a:
        for aj in a:
            r = ai + aj
            if r not in S:
                S[r] = 0
            S[r] += 1
    return list(S.items())

def SQUARE(x):
    x = int(x)
    a = []
    i = 0
    while x != 0:
        if x & 1 == 1:
            a += [i]
        x >>= 1
        i += 1
    print 'a:', a
    P2 = F(a)
    print 'P^2:', P2
    s = 0
    for e, c in P2:
        s += (1 << e)*c
    return s
3. Thus, squaring is at most as hard as this problem.
4. Therefore, integer squaring is equivalent to this problem (they are each at most as hard as each other, due to (2,3,1)).
Now it is unknown if integer/polynomial multiplication admits bounds better than $\mathcal O(n\log n)$; in fact the best multiplication algorithms currently all use FFT and have run-times like $\mathcal O(n \log n \log \log n)$ (Schönhage-Strassen algorithm) and $\mathcal O\left(n \log n\,2^{\mathcal O(\log^* n)}\right)$ (Fürer's algorithm). Arnold Schönhage and Volker Strassen conjectured a lower bound of $\Omega\left(n \log n\right)$, and so far this seems to be holding.
This doesn't mean your use of FFT is quicker; $\mathcal O\left(n\log n\right)$ for FFT is the number of operations (I think), not the bit complexity; hence it ignores some factors of smaller multiplications; when used recursively, it would become closer to the FFT multiplication algorithms listed above (see Where is the mistake in this apparently-O(n lg n) multiplication algorithm?).
5. Now, your problem is not exactly multiplication, it is squaring. So is squaring easier? Well, it is an open problem (no for now): squaring is not known to have a faster algorithm than multiplication. If you could find a better algorithm for your problem than using multiplication; then this would likely be a breakthrough.
So as of now, the answer to both your questions is: no, as of now, all the ~$\mathcal O(n\log n)$ multiplication algorithms use FFT; and as of now squaring is as hard as multiplication. And no, unless a faster algorithm for squaring is found, or multiplication breaks the $\mathcal O(n\log n)$ barrier, your problem cannot be solved faster than $\mathcal O(n \log n)$; in fact, it cannot currently be solved in $\mathcal O(n\log n)$ either, as the best multiplication algorithm only approaches that complexity. | {
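For completeness, here is a compact sketch (mine, not part of the original answer) of the FFT approach from the question: build the coefficient vector of $P(x)=\sum_j x^{a_j}$, square it via FFT-based convolution, and read off the coefficients, which count the ordered pairs $(i,j)$ with $a_i+a_j=s$.

```python
import numpy as np

def pairwise_sum_counts(a):
    # coefficient vector of P(x): one 1 at each exponent a_j
    deg = max(a)
    coeffs = np.zeros(deg + 1)
    coeffs[np.array(a)] = 1.0
    # P(x)^2 has degree 2*deg, so pad to 2*deg + 1 to avoid wraparound
    size = 2 * deg + 1
    fft = np.fft.rfft(coeffs, n=size)
    sq = np.fft.irfft(fft * fft, n=size)   # coefficients of P(x)^2
    return {s: int(round(c)) for s, c in enumerate(sq) if round(c) > 0}

print(pairwise_sum_counts([0, 1, 3]))
# {0: 1, 1: 2, 2: 1, 3: 2, 4: 2, 6: 1}
```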
"domain": "cs.stackexchange",
"id": 1953,
"tags": "algorithms, time-complexity, fourier-transform"
} |
Is this phase right? | Question: Hello, at physics lectures we wrote the phase of a sine wave like this:
$$\phi = kx - \omega t$$
Is this right? As I recall the phase of a wave should be written like this:
$$\phi = \omega t - kx$$
And if a wave changes direction $(k \rightarrow -k)$ like this:
$$\phi = \omega t + kx$$
Can someone explain to me whether the first usage is even possible and, if so, when.
Answer: It holds that $\sin(x) = - \sin(-x)$ and therefore $\sin(\omega t - kx) = - \sin(kx - \omega t)$. | {
"domain": "physics.stackexchange",
"id": 6227,
"tags": "waves, conventions"
} |
Landau's derivation of polarization | Question: In "Electrodynamics of Continuous Media" Landau writes the following:
The total charge in the volume of the dielectric is zero; even if it is placed in an electric field we have $\int\bar{\rho}dV=0$. This integral equation, which must be valid for a body of any shape, means that the average density can be written as the divergence of a certain vector, which is usually denoted by $-\mathbf{P}$: $$\bar{\rho}=-\nabla \cdot \mathbf{P}$$
while outside the body $\mathbf{P}=0$. For, on integrating over the volume bounded by a surface which encloses the body but nowhere enters it, we find $$\int\bar{\rho}dV=-\int\nabla \cdot \mathbf{P}dV=-\oint \mathbf{P} \cdot d\mathbf{f}=0$$
However I don't quite understand how the vanishing of $\int\bar{\rho}dV=0$ for every volume guarantees that $\bar{\rho}$ can be written as the divergence of some vector field. In fact, it seems like a convenient ad hoc assumption used to employ the divergence theorem. I wasn't able to find any mathematical theorem that states that "if $\int_V f dV=0$ for every $V$ then $f=\nabla \cdot \mathbf{F}$ where $\mathbf{F}$ is some vector field".
He later uses a similar argument to derive the magnetization vector. He states that because the surface integral $\int \mathbf{j} \cdot d\mathbf{f} = 0$ for all cross-sectional areas in a dielectric, $\mathbf{j}$ can be written as the curl (rotor) of some vector field $\mathbf{M}$.
So is this a physical argument or a mathematical trick? And if it's the latter, how can it be justified?
Answer: You are correct, this is one of the weaker parts of L&L where they know the result and pretend it is obvious, or that it is the only possibility, and provide some verbal introduction without really showing the genesis of the concept.
Mathematically, any scalar function $\bar{\rho}$ of position can always be expressed as the divergence of some vector field, so the L&L narrative that the integral of $\bar{\rho}$ being zero is somehow an important property for this field $\mathbf P$ to exist is incorrect. We know this because
we already know from Maxwell's equations that for arbitrary charge density $\rho$ there is a field $\mathbf E$ that obeys
$$
\rho = \epsilon_0 \nabla \cdot \mathbf E;
$$
for any function $\bar{\rho}$, we can define the Coulomb field
$$
\mathbf E_C(\mathbf x) = \int K\bar{\rho}(\mathbf x') \frac{\mathbf x- \mathbf x'}{|\mathbf x- \mathbf x'|^3}d^3\mathbf x'
$$
which obeys
$$
\bar{\rho} = \epsilon_0 \nabla \cdot \mathbf E_C.
$$
So if we wanted just the relation $\bar{\rho} = - \nabla\cdot \mathbf P$, and did not care for physical meaning of polarization (such as it has to be zero outside the body), we could define
$$
\mathbf P = -\epsilon_0 \mathbf E_C.
$$
This is just mathematics, not physics.
A much more instructive way to introduce polarization and magnetization in EM theory is the standard way: they are average electric and magnetic moment per unit volume.
Using this definition and some pictures and vector analysis, one can show that for neutral dielectric body and volume/surface entirely inside this body $V'/S'$, we have
$$
\oint_{S'} \mathbf P \cdot d\mathbf S' = - \int_{V'}\bar{\rho} ~dV'
$$
and we can derive the relation between charge and divergence of $\mathbf P$: using the Gauss-Ostrogradskii theorem, we can transform the above equation into
$$
\int_{V'} \nabla \cdot \mathbf P dV' = -\int_{V'}\bar{\rho} ~dV'
$$
and since this is valid for arbitrarily small volume element of the body, due to definition of divergence, we have also
$$\nabla \cdot \mathbf P = -\bar{\rho}.
$$ | {
"domain": "physics.stackexchange",
"id": 95095,
"tags": "electrostatics, electric-fields, polarization, vector-fields, dielectric"
} |
Is it possible to determine when an accelerometer is in a vibrating state compared to a non-vibrating state? | Question: If so, I would like to know how raw 3-axis accelerometer data could be analyzed and manipulated in real time to register periods of vibration. The device being used has a max sample rate of 62 Hz (I understand this is quite low), and aims to recognize when it is in a state of vibration as opposed to freely moving when mounted on a user's person.
I have gathered some test data and have plotted resultant acceleration magnitude, although it is looking difficult to recognize contrast between vibration and free movement. I have also looked into plotting the change in angle between each two contiguous readings, thinking along the lines that the angle should be greater than 90 degrees (in the case that the vibration isn't resonating with my hardware's frequency) due to uniform oscillation - would I be along the right lines with this?
Just to clarify, I do not need to measure the vibrations, just acknowledge them.
Thanks!
Answer: In principle it is possible to do what you are asking for by taking the Fourier transform of the signal coming from the accelerometer. During normal motion, the energy will be distributed among different bins of the FFT with most energy in the lowest (DC) frequency bins. However, if you have a "vibration" component present, you will see a significant amount of energy in the higher frequency bins.
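As a rough illustration of this idea (a sketch with synthetic data: the 62 Hz sample rate comes from the question, but the test signals and the 5 Hz cutoff are my own stand-ins):

```python
import numpy as np

fs = 62.0  # sample rate (Hz), from the question
t = np.arange(0, 2.0, 1 / fs)

# Synthetic stand-ins: slow "free movement" at 0.5 Hz, and the same
# movement with a 20 Hz vibration component superimposed
free_move = np.sin(2 * np.pi * 0.5 * t)
vibrating = free_move + 0.3 * np.sin(2 * np.pi * 20.0 * t)

def high_freq_energy_fraction(x, fs, cutoff_hz=5.0):
    # Fraction of (non-DC) spectral energy above cutoff_hz
    spec = np.abs(np.fft.rfft(x - x.mean())) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return spec[freqs > cutoff_hz].sum() / spec.sum()

# The vibration shows up as a much larger high-frequency energy fraction
assert high_freq_energy_fraction(vibrating, fs) > 10 * high_freq_energy_fraction(free_move, fs)
```

In a real-time setting one would run this over a short sliding window and threshold the energy fraction; the cutoff and threshold would need tuning against recorded data.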
Note that it is possible to see frequencies above 31 Hz (Nyquist) if the response of the sensor itself is sufficiently fast, and there is no filter in front of the sampling / digitization. Such higher frequencies will appear "aliased" to a lower frequency bin - this makes it hard to determine their actual frequency, but that's not what you were asking for. | {
"domain": "physics.stackexchange",
"id": 25785,
"tags": "vectors, measurements, oscillators, vibrations, sensor"
} |
Calculate volume of a crater | Question: Is there an equation I can use to determine the volume of a crater in my 'drop the ball on sand' experiment? I looked at the equations for the volume of a hemisphere and a cone, but they do not seem to fit the shape of the crater. The crater looks like a cone but has a spherical cap instead of a sharp point, which gives it a bowl shape. I have measurements of the depth and diameter of the crater, but that's about it. I am hoping to find the relationship between the energy of the impactor and the volume of the crater.
Answer: If you draw the following diagram:
you can see that the volume of the crater is the volume of the "truncated inverted cone" plus the volume of the bit of sphere. Since the volume of a cone is $V=\frac13 A h$ where $A$ is the area of the base and $h$ is the height, the volume of the truncated cone is given by
$$V_{cone} = \frac{\pi}{12}(d^2(h+h')-d'^2(h'))$$
The volume of the salmon-colored bit of sphere can be found by integrating. Luckily, Wolfram already did the hard work and we can start with their result:
$$V_{cap} = \frac{\pi}{3}c^2(3R-c)$$
Actually they used $h$ for the height of the cap, but we already use that for a different quantity - so I will define the height of the cap as $c$.
In my drawing, we can deduce the height of the cap from the angles $\alpha$ and the fact that $\beta = \pi - \alpha$ so $\cos\beta = -\cos\alpha$:
$$c + r\cos\beta = r\\
c = r(1+\cos\alpha)$$
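If it helps to put numbers in, the two pieces combine into a single estimate like this (a sketch; parameter names follow the answer's notation and diagram):

```python
import math

def crater_volume(d, d_prime, h, h_prime, R, c):
    """Bowl-shaped crater = truncated cone + spherical cap.

    d, d_prime : top and bottom diameters of the truncated cone
    h, h_prime : heights as in the diagram (h' is the cut-off tip)
    R, c       : sphere radius and cap height
    """
    v_cone = math.pi / 12 * (d ** 2 * (h + h_prime) - d_prime ** 2 * h_prime)
    v_cap = math.pi / 3 * c ** 2 * (3 * R - c)
    return v_cone + v_cap
```

As a quick sanity check, setting $d'=h'=0$ and $c=R$ reduces this to a full cone plus a hemisphere.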
We can also solve for $d'$ in terms of the other quantities. I will leave it up to you to take it from here. | {
"domain": "physics.stackexchange",
"id": 23703,
"tags": "planets, volume"
} |
Remove the object's sibling on click | Question: In my program below I am getting the parent node game1 and then removing one of the two child nodes, depending on which one is clicked. I currently have two event listeners, with two separate functions for removing one child vs. the other. It doesn't feel very DRY to me, and I think I could benefit from having someone skilled in JS take a look at this and show me how this could be more easily accomplished.
var firstWinner = document.getElementById('game1');
function removefirst() {
firstWinner.removeChild(firstWinner.childNodes[0]);
}
function removeSecond() {
firstWinner.removeChild(firstWinner.childNodes[1]);
}
var gameOneNodes = document.getElementById('game1').childNodes;
gameOneNodes[1].addEventListener('click', removefirst, false)
gameOneNodes[0].addEventListener('click', removeSecond, false)
Answer: You should define one click handler that removes all siblings of the clicked element. That is, remove all children of the parent, except the target of the event itself.
document.getElementById('game1').addEventListener('click', function(event) {
// Remove all siblings of the clicked element
for (var c = this.childNodes.length - 1; c >= 0; c--) {
if (this.childNodes[c] != event.target) {
this.removeChild(this.childNodes[c]);
}
}
});
<div id="game1">
<button>Alpha</button>
<button>Bravo</button>
<button>Charlie</button>
<button>Delta</button>
</div>
You could also iterate this way, but there's an awkward c--:
for (var c = 0; c < this.childNodes.length; c++) {
if (this.childNodes[c] != event.target) {
this.removeChild(this.childNodes[c--]);
}
} | {
"domain": "codereview.stackexchange",
"id": 19235,
"tags": "javascript, dom"
} |
Shortcut key for right panel | Question:
Is there a shortcut key to bring up the right panel in the Gazebo GUI?
I've built gazebo from source on Windows and seem to be missing the window expansion icon that controls the right panel visibility.
C:\fakepath\gazebo_missing_rightmenu.png
Originally posted by azaclauson on Gazebo Answers with karma: 35 on 2016-06-13
Post score: 0
Answer:
Have you tried dragging the middle of the dark grey edge of the screen? On Ubuntu three dots show up, but some things look weird on other OS's.
Originally posted by chapulina with karma: 7504 on 2016-06-13
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by azaclauson on 2016-06-14:
Yep that worked. | {
"domain": "robotics.stackexchange",
"id": 3934,
"tags": "gazebo-8"
} |
Addition of 2 integers | Question: I've implemented an algorithm found in a book. It adds 2 positive integers of equal size. How can it be improved? It is assumed that inputted data is always positive integers with an equal number of digits.
EDIT
As there are lots of comments regarding the question, I would like to rephrase it. This program tries (yes it tries, could be wrong) to simulate how humans do manual additions of 2 positive integers. This is similar to the technique students learn in (I hope..) primary schools. It is based on the book "Invitation to Computer Science". One obvious improvement to this program is adding validations. What else can be done to improve it?
class Program
{
static void Main(string[] args)
{
Add2Numbers(97,47);
Console.ReadLine();
}
static void Add2Numbers(int a, int b)
{
int carry = 0,sum=0;
List<int> lstA = new List<int>();
List<int> lstB = new List<int>();
List<int> lstSum = new List<int>();
foreach (var item in GetDigit(a))
{
lstA.Add(item);
}
foreach (var item in GetDigit(b))
{
lstB.Add(item);
}
for (int j = 0; j < lstB.Count;j++ )
{
sum = lstA[j] + lstB[j] + carry;
if (sum > 9)
{
sum = sum - 10;
carry = 1;
}
else
carry = 0;
lstSum.Add(sum);
}
if (carry > 0)
lstSum.Add(carry);
lstSum.Reverse();
foreach (var item in lstSum)
{
Console.Write(item);
}
}
static IEnumerable<int> GetDigit(int number)
{
while (number > 0)
{
yield return number % 10;
number = number / 10;
}
}
}
Answer: I generally prefer foreach to for, but in the case where you are using the index variable to iterate through two lists at once, for is a pretty good option. I still try to use .Zip() instead, as the other answer demonstrated, but sometimes that can get hairy. Here are some smaller improvements that don't involve changes to the overall structure.
It's always a good idea to use descriptive names
I would rename LstA to DigitsOfA (same for B, and Sum)
I would rename GetDigit to GetDigits
It's usually1 a good idea to delay declaration of a variable until it's used
A related concept is reducing variable scope; not using a global variable if a local variable will do. This doesn't have any technical impact, but it does have a readability impact: If I see you using a variable but have to scroll up to see how that variable was declared, the flow of my reading has been interrupted. We could call this "spatial scope".
Here, this would mean declaring sum and carry just before the for loop, instead of at the top of the function.
You could even declare sum inside the loop. It does seem inefficient to instantiate a new variable every iteration of the loop, but (1) the impact will probably be low, even negligible, (2) you can generally trust the compiler to do something smart (3) if making my program a tiny bit easier to read also makes it a tiny bit slower, that's a trade I'll gladly make every time.
I usually prefer var over explicit types when declaring a variable
This is the subject of a fair amount of debate, but here's my reasoning. It's easier for me to read
var prices = new Dictionary<Fruit, decimal>();
var lemon = new Lemon();
var price = 17.50m;
prices.Add(lemon, price);
than
Dictionary<Fruit, decimal> prices = new Dictionary<Fruit, decimal>();
Lemon lemon = new Lemon();
decimal price = 17.50m;
prices.Add(lemon, price);
In the former, the information I immediately want to know (what are the variables' names? how are they used?) all flows nicely down the left edge.
It's nearly always easier (to write and to read) to use LINQ's .ToList() than to call List.Add() in a loop.
This means you can initialize your lists quite easily as var digitsOfA = GetDigits(a).ToList();
Implicit braces do make your code shorter and prettier, but they are also the source of a very common bug. You write
if (awake)
GetCoffee();
then I come along and update your code to
if (awake)
GetCoffee();
GetBreakfast();
All of a sudden I'm eating breakfast while I'm still asleep. And if you're thinking I'd have to be a real bonehead to make that mistake, you're right! But it's a mistake I've made before, it's a mistake that will be made again... And it's a mistake that would be impossible if the original code were
if (awake)
{
GetCoffee();
}
So consider very carefully how likely it is that others might modify your code in the future, or that you might modify it before you have your coffee... I recommend the braces every time.
You can dodge the whole brace issue, however, by modular arithmetic instead of if:
carry = sum / 10;
sum = sum % 10;
I quite like the GetDigits function as it stands! That's an excellent use of yield return, in my opinion.
1This advice does not apply to Javascript, thanks to variable hoisting | {
"domain": "codereview.stackexchange",
"id": 32078,
"tags": "c#, algorithm, reinventing-the-wheel"
} |
Chain slipping off a sphere - Mechanics problem | Question: A uniform flexible chain of length l rests on a fixed smooth sphere of radius R such that one end A of the chain is at top of the sphere while the other end B is hanging freely. The chain is held stationary by a horizontal thread PA as shown in the figure. Calculate the acceleration of the chain when the thread is burnt.
I have solved this and reached the correct answer, but I'm not sure whether the method I've used is 100% correct or not. Here it is:
Now I have a doubt:
Is it correct to integrate change in tension (dT) for each element this way, considering that the direction of dT is changing? Does it need to be further split into components? I was taught that integration should only be done for vectors that are unidirectional or be made unidirectional through components. Please help me with this.
Also, let me know if there is any other mistake I've made in this solution.
Answer: What you're doing is perfectly fine! It is legitimate to integrate a force "in the chain direction", even when the chain changes direction. That's an example of a generalized coordinate. Instead of Cartesian coordinates, you're using "chain coordinates", which measure the overall displacement of the chain.
Generalized coordinates of this kind are properly justified in Lagrangian mechanics, but they're very useful even if you just know Newtonian mechanics. I give a few more examples of this kind of reasoning in this handout. | {
"domain": "physics.stackexchange",
"id": 59914,
"tags": "homework-and-exercises, newtonian-mechanics, forces, classical-mechanics"
} |
Multimaster-FKIE can make node private? | Question:
I am using the Multimaster-FKIE package to connect two ROS cores. I would need a certain node to be private to each ROS core. Is that possible? I need to run two Gazebo simulations with a package gazebo-ros-control which provides controllers for the robots, but it is only possible to run one of the node and so to control only one of the robots. I need to control both and couple them. One sends commands to the other. How can I achieve that?
Originally posted by kump on ROS Answers with karma: 308 on 2019-01-31
Post score: 0
Answer:
Yes, there is a parameter ~ignore_nodes in master_sync node that does just that
so to make the /gazebo and /gazebo_gui nodes private, for example, this command is used when launching the master_sync node:
rosrun master_sync_fkie master_sync _ignore_nodes:=['/gazebo','/gazebo_gui']
Originally posted by kump with karma: 308 on 2019-02-05
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 32385,
"tags": "ros-kinetic"
} |
Obtaining an expression for spontaneous magnetization in 1D Ising model with $H=0$ from the beginning | Question: The usual trick to find the spontaneous magnetization for the 1D Ising model is to calculate the partition function $Z$ with the Hamiltonian
$$\mathscr{H}=-J\sum\limits_{i}S_iS_{i+1}-H\sum\limits_{i}S_i.\tag{1}$$
Then find the Helmholtz Free energy $F=-kT\ln Z$ and then use the formula
$$M(H=0,T)=-\Big(\frac{\partial F}{\partial H}\Big)_{H=0}.$$
But rather than putting $H=0$ at the end of the calculation to obtain the spontaneous magnetization $M(0,T)$, is it possible to start with $H=0$ in the Ising Hamiltonian (1) from the beginning and then use the definition $M(0,T)=\sum\limits_{i}S_i$ to calculate $M(0,T)$?
Answer: You can just compute the expected value of $M_N=\sum_{i=1}^N \sigma_i$ with respect to the Gibbs measure:
$$
\langle M_N\rangle_{N,T} = \sum_{i=1}^N \langle\sigma_i\rangle_{N,T} ,
$$
where $\langle\cdot\rangle_{N,T}$ represents expectation with respect to the Gibbs measure for a system of $N$ spins at temperature $T$ (and $H=0$).
Of course, if one uses free or periodic boundary condition, then the expectation is $0$, at all temperatures, by symmetry. In order to get something nontrivial, let us assume $+$ boundary condition (that is, assume that the first and last spins interact each with a boundary spin with fixed value $1$).
The expectation of $\sigma_i$ can be computed easily either through the transfer matrix, or using a high-temperature expansion. The latter is shorter, since only two graphs contribute to both the numerators and the denominator: setting $x=\tanh(J/kT)$,
$$
\langle\sigma_i\rangle_{N,T} = \frac{x^i+x^{N+1-i}}{1+x^{N+1}}.
$$
Therefore,
$$
\langle M_N\rangle_{N,T} = 2 \frac{x^{N+1}-x}{x^{N+2}-x^{N+1}+x-1}.
$$
In particular,
$$
\lim_{T\to 0} \frac1N\langle M_N\rangle_{N,T} = 1.
$$
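The closed form above is easy to check against a brute-force sum over all $2^N$ configurations with $+$ boundary condition (a sketch; I fold $J/kT$ into a single parameter `beta_J`):

```python
import itertools
import math

def magnetization_exact(N, beta_J):
    # Brute-force <M> with + boundary condition: the bond sum is
    # s_b*s_1 + sum_i s_i*s_{i+1} + s_N*s_b with boundary spins s_b = +1
    Z = M = 0.0
    for s in itertools.product((-1, 1), repeat=N):
        bond_sum = s[0] + s[-1] + sum(s[i] * s[i + 1] for i in range(N - 1))
        w = math.exp(beta_J * bond_sum)
        Z += w
        M += w * sum(s)
    return M / Z

def magnetization_formula(N, beta_J):
    x = math.tanh(beta_J)
    return 2 * (x ** (N + 1) - x) / (x ** (N + 2) - x ** (N + 1) + x - 1)

assert abs(magnetization_exact(6, 0.8) - magnetization_formula(6, 0.8)) < 1e-9
```

For $N=1$ the formula collapses to $\tanh(2J/kT)$, the single spin coupled to two fixed $+1$ neighbours, as expected.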
Note that, above, the $+$ boundary condition does introduce a (weak) symmetry breaking. This is not necessary. One can derive spontaneous symmetry breaking (of course, this is more relevant in higher dimensions) without introducing any symmetry breaking, neither external field, nor boundary conditions. I explained this in this answer. | {
"domain": "physics.stackexchange",
"id": 55269,
"tags": "statistical-mechanics, phase-transition, symmetry-breaking, ising-model, partition-function"
} |
Quantum PAC learning | Question: Background
Functions in $AC^0$ are PAC learnable in quasipolynomial time with a classical algorithm that requires $O(2^{\log(n)^{O(d)}})$ randomly chosen queries to learn a circuit of depth $d$ [1]. If there is no $2^{n^{o(1)}}$ factoring algorithm then this is optimal [2]. Of course, on a quantum computer we know how to factor, so this lower bound does not help. Further, the optimal classical algorithm uses the Fourier spectrum of the function, thus screaming "quantumize me!"
[1] N. Linial, Y. Mansour, and N. Nisan. [1993] "Constant depth circuits, Fourier transform, and learnability", Journal of the ACM 40(3):607-620.
[2] M. Kharitonov. [1993] "Cryptographic hardness of distribution-specific learning", Proceedings of ACM STOC'93, pp. 372-381.
In fact, 6 years ago, Scott Aaronson put the learnability of $AC^0$ as one of his Ten Semi-Grand Challenges for Quantum Computing Theory.
Question
My question is three-fold:
1) Are there examples of natural function families that quantum computers can learn faster than classical computers given cryptographic assumptions?
2) Has there been any progress on the learnability of $AC^0$ in particular? (or the slightly more ambitious $TC^0$)
3) In regards to the learnability of $TC^0$, Aaronson comments: "then quantum computers would have an enormous advantage over classical computers in learning close-to-optimal weights for neural networks." Can somebody provide a reference on how weight updating for neural nets and $TC^0$ circuits relate? (apart from the fact that threshold gates look like sigmoid neurons) (This question was asked and answered already)
Answer: I'll take a shot at your first question:
Are there examples of natural function families that quantum computers can learn faster than classical computers given cryptographic assumptions?
Well, it depends on the exact model and the resource being minimized. One option is to compare the sample complexity (for distribution-independent PAC learning) of the standard classical model with a quantum model that is given quantum samples (i.e., instead of being given a random input and the corresponding function value, the algorithm is provided with a quantum superposition over inputs and their functions values). In this setting, quantum PAC learning and classical PAC learning are basically equivalent. The classical upper bound on sample complexity and the quantum lower bound on sample complexity are almost the same, as shown by the following sequence of papers:
[1] R. Servedio and S. Gortler, “Equivalences and separations between quantum and classical learnability,” SIAM Journal on Computing, vol. 02138, pp. 1–26, 2004.
[2] A. Atici and R. Servedio, “Improved bounds on quantum learning algorithms,” Quantum Information Processing, pp. 1–18, 2005.
[3] C. Zhang, “An improved lower bound on query complexity for quantum PAC learning,” Information Processing Letters, vol. 111, no. 1, pp. 40–45, Dec. 2010.
Moving on to time complexity, using the same quantum PAC model, Bshouty and Jackson showed that DNFs can be quantum PAC learnt in polynomial time over the uniform distribution [4], further improved in [5]. The best known classical algorithm for this runs in $O(n^{\log n})$ time. Atici and Servedio [6] also show improved results for learning and testing juntas.
[4] N. Bshouty and J. Jackson, “Learning DNF over the uniform distribution using a quantum example oracle,” SIAM Journal on Computing, vol. 28, no. 3, pp. 1136–1153, 1998.
[5] J. Jackson, C. Tamon, and T. Yamakami, “Quantum DNF learnability revisited,” Computing and Combinatorics, pp. 595–604, 2002.
[6] A. Atıcı and R. Servedio, “Quantum Algorithms for Learning and Testing Juntas,” Quantum Information Processing, vol. 6, no. 5, pp. 323–348, Sep. 2007.
On the other hand, if you're interested in the standard classical PAC model only, using quantum computing as a post-processing tool (i.e., no quantum samples), then Servedio and Gortler [1] observed that there is a concept class due to Kearns and Valiant that cannot be classically PAC learnt assuming the hardness of factoring Blum integers, but can be quantumly PAC learnt using Shor's algorithm.
The situation for Angluin's model of exact learning through membership queries is somewhat similar. Quantum queries can only give a polynomial speedup in terms of query complexity. However, there is an exponential speedup in time complexity assuming the existence of one-way functions [1].
I have no idea about the second question. I'd be happy to hear more about that too. | {
"domain": "cstheory.stackexchange",
"id": 1829,
"tags": "quantum-computing, lg.learning, machine-learning, open-problem, ne.neural-evol"
} |
Hamiltonian capable of quantum computation | Question: Suppose we have a 1D spin chain evolving in time according to some Hamiltonian $H_t(p_0, p_1, p_2 \ldots)$, where $p_i$ are classical parameters ``set by the lab equipment". Divide time into discrete intervals of some length $\Delta t$: we are allowed to change the $p_i$ every step. In addition, assume the ability to choose whether or not to perform a projective measurement in the spin-z basis at each site at each timestep.
What is an example of a nearest-neighbour spin-chain Hamiltonian that can implement quantum computation in this way?
Answer: Many Hamiltonians will be good. In particular, if you have control over the individual couplings, any interaction, together with local terms in two directions, will suffice.
As an example, the Ising model with two fields,
$$
\sum_i p_i s_z^i s_z^{i+1} + \sum_i p_i' s_z^i + \sum_i p_i'' s_x^i
$$
will do: The local terms allow to implement arbitrary rotations about $X$ and $Z$, and thus any rotation, and the Ising term allows to realize a gate $\mathrm{diag}(1,i,i,1)$ (up to a phase), which is locally equivalent to a CNOT. Together, this yields a universal gate set. | {
"domain": "physics.stackexchange",
"id": 61941,
"tags": "quantum-information, quantum-computer, many-body"
} |
Chemical potential of fermions | Question: Hey guys I am trying to determine the chemical potential for electrons in metals.
I have that:
For the valance band, $\epsilon\lt\epsilon_\mathrm v$, $\rho(\epsilon)=g_\mathrm v$, while for the conduction band, $\epsilon\gt\epsilon_\mathrm c$, $\rho(\epsilon)=g_\mathrm c$, where $g_\mathrm v$ and $g_\mathrm c$ are constants. The Fermi energy $F=\frac12(\epsilon_\mathrm v+\epsilon_\mathrm c)$ is located in the middle of the gap. The number of electrons is the same for all $T$.
I was able to calculate the average number of electrons in the conduction band
$$\langle N_\mathrm c\rangle=2g_\mathrm ck_\mathrm BT\mathrm e^{-(\epsilon_\mathrm c-\mu)/(k_\mathrm BT)}$$
and the average number of holes
$$\langle N_\mathrm h\rangle=2g_\mathrm vk_\mathrm BT\mathrm e^{(\epsilon_\mathrm v-\mu)/(k_\mathrm BT)}$$
I need to show that
$$\mu=\epsilon_\mathrm F-(k_\mathrm BT/2)\ln\left(\frac{g_\mathrm c}{g_\mathrm v}\right)$$
I also know that $\mu=\epsilon_\mathrm F$ at $T=0$ and that $\mu=\frac{\mathrm dG}{\mathrm dN}$.
Answer: Since the number of electrons is the same for all T
$$ N_h = N_c $$
It follows that
$$2g_vk_BTe^{(\epsilon_v-\mu)/k_BT} = 2g_ck_BTe^{-(\epsilon_c-\mu)/k_BT} $$
Next, just solve for $\mu$ and use the identity
$\epsilon_F = \frac{1}{2}(\epsilon_c+\epsilon_v)$ to reach the desired expression. | {
"domain": "physics.stackexchange",
"id": 66458,
"tags": "statistical-mechanics, electrons, fermions, chemical-potential, fermi-energy"
} |
Ultrasound and IR sensors vs Kinect for Robot navigation | Question:
Hi all,
I have a mobile robot and I would like it to navigate around the building. I already have a 2D map of the building. I have rotational encoders to get the odometry information and IMU/UWB for localization. I only have ultrasound, IR sensors and a Kinect which I can use for navigation. I want to know which is better for navigation (ultrasound and IR sensors, or the Kinect) given that I am aiming for pretty good accuracy and it should not be very computationally expensive. In my opinion, the Kinect will do a better job, but my concern with the Kinect is that it might be computationally very expensive compared to proximity sensors, given that I have to run it on the NVIDIA Jetson TK1 board (https://developer.nvidia.com/jetson-tk1). But then again, if I go with proximity sensors, I have to use a bunch of them and I don't know how effective and efficient that will be. Also, I am a little worried about the dead band in the case of the Kinect, which is around 50 cm, way more than the dead band of ultrasound sensors (~ 10 - 15 cm).
Any guidance regarding this will be appreciated.
Update 1: Can Kinect sensor be used for mobile robot navigation when there is a glass wall? I think it can not be used but I am not sure.
Thanks in advance.
Naman
Originally posted by Naman on ROS Answers with karma: 1464 on 2015-03-25
Post score: 3
Original comments
Comment by 2ROS0 on 2015-03-25:
Roomba! :)
Answer:
The move_base package can take a PointCloud as sensor_topic, but computationally it's going to cost you more than ultrasonic and/or IR. Ultrasonic sensors are cheap and fast, but they are more of a "last minute reaction" kind of sensor. Having the Kinect gives better accuracy in the general navigation.
As a side note, you're not doing any slam since you already have a map.
It all depend on you. Which is more important : Accuracy or speed or price or possibility to have your robot "upgraded" software-wise ? That's going to be what will help you make your choice :).
From my experience, the 3D camera didn't slowed down the robot "computation" that much, if you're using something other than a raspberry. But it costs a lot more money than IR/ultrasonic :P
Originally posted by Maya with karma: 1172 on 2015-03-29
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Naman on 2015-03-30:
Thanks a lot. One more thing: Kinect can't be used when there is a glass wall, right as it will just pass through it?
Comment by Maya on 2015-03-30:
That's actually a good question :P. I'll say yes but it might depend, I'm not sure.
Comment by 2ROS0 on 2015-03-30:
@Naman that is correct | {
"domain": "robotics.stackexchange",
"id": 21242,
"tags": "mobile-robot, ros, navigation, kinect, robot-localization"
} |
Product of Pauli-matrix exponentials | Question: Given Pauli matrices $\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ and $\sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$, can one write $e^{\alpha \sigma_z} e^{\beta \sigma_x}$ in terms of $e^{\beta \sigma_x} e^{\alpha \sigma_z}$ ($\alpha$, $\beta$ are some complex numbers). For example, is it possible to find some $M$ such that
\begin{equation}
e^{\alpha \sigma_z} e^{\beta \sigma_x} = e^{\beta \sigma_x} e^{\alpha \sigma_z}M?
\end{equation}
Answer: Use the standard formula,
$$
e^{\alpha \sigma_z}= I\cosh \alpha + \sigma_z \sinh\alpha , \\
e^{\beta \sigma_x}= I\cosh \beta + \sigma_x \sinh\beta ,
$$
to compute the quadruple product suggested in the comments,
$$
M=e^{-\alpha \sigma_z}e^{-\beta \sigma_x}e^{\alpha \sigma_z}e^{\beta \sigma_x}\\ =
e^{-ia\sigma_z}e^{-ib\sigma_x}e^{ia \sigma_z} e^{ib\sigma_x}\\
= \Biggl (I\cos a\cos b+i\left (-\sigma_y\sin a \sin b -\sigma_z \sin a \cos b-\sigma_x\sin b \cos a \right ) \Biggr)\\ \times \Biggl ( I\cos a\cos b +i\left (-\sigma_y \sin a \sin b + \sigma_z \sin a \cos b + \sigma_x \sin b \cos a \right ) \Biggr)\\
= I(1-2\sin^2a ~\sin^2 b)\\ +i\Bigl (\sigma_x 2\sin^2 a\sin b\cos b-\sigma_y 2\sin a \cos a\sin b\cos b -\sigma_z 2\sin^2b\sin a \cos a \Bigr ).
$$
Can you take it from here? You might be amused taking $a=\pi$, and seeing it geometrically, or else $b=\pi$ .
Note the all-important unitarity constraint/check: the sum of the squares of the coefficients of I and the σ s, respectively amounts to 1, as it should!
From the coefficient of the identity matrix, you find its arccos, θ, and you just confirmed that the coefficient of the $i\hat n\cdot \vec \sigma$, for normalized unit $\hat n$, messier to write down, must be $\sin\theta$! So,
$$
M=e^{i\theta \hat n\cdot \vec \sigma}.
$$
For imaginary hyperbolic angles, so real trigonometric angles, this amounts to a quadruple group element product for SU(2), dubbed the group (not algebra) commutator. | {
"domain": "physics.stackexchange",
"id": 93208,
"tags": "homework-and-exercises, operators, commutator, mathematics"
} |
Confusion about floating potential and Langmuir probes in plasmas | Question: I am working to understand Langmuir probes, and the concept of floating potential keeps popping up, about which I have some confusion. I am reading a paper called Understanding Langmuir probe current-voltage characteristics, and at the end of the first paragraph it states:
"The early users of probes naively assumed that the potential of the plasma at the location of the probe known as the plasma potential or space potential and designated as $V_P$ could be determined by measuring the potential on the probe relative to one of the electrodes. However, this procedure determined the floating potential $V_f$ of the probe which is generally not the same as the plasma potential. By definition, a probe that is electrically floating, collects no net current from the plasma, and thus its potential rises and falls to whatever potential is necessary to maintain zero net current"
I understand that the definition of floating potential is the potential at which ion and electron currents cancel each other out, but why does the probe have to stay at zero net current, as the last sentence in the quote above implies? Why does it have to be floating? To obtain an I-V curve with a Langmuir probe, you must go above and below the floating potential, so it is clearly possible to access potentials other than the floating potential. What is so special about the floating potential?
In the wikipedia article for Langmuir probes, under the section "Floating Potential", it makes the remark that the floating potential is the experimentally accessible quantity. Why is this the case? And if it is true, how can you even make a full I-V curve at potentials other than the floating potential?
Answer: In a plasma-probe system such as this, a net current implies the accumulation of charge somewhere. Such an accumulation would result in an electric field. Electric fields do work to get rid of themselves, thus they would inhibit or enhance currents to get rid of the charge imbalance. This is part of the reason why probes in plasmas float relative to the plasma, i.e., they accumulate some net charge and generate an associated electric field. This is sometimes referred to as the plasma sheath around an instrument. If the instrument is in space and exposed to sunlight, it will have an additional current caused by photoelectrons being ejected from any exposed conducting surface.
In regards to a Langmuir probe, you, the user, typically force the current through the probe. This is called biasing the probe. That is, you bias its current positive or negative, which will cause it to float correspondingly. You can drive the probe to saturation and effectively crush the natural floating potential effect that would otherwise happen when in a plasma (e.g., we sometimes push the instruments to the rails to collapse all surrounding photoelectrons to reduce the normally occurring sheath).
On Parker Solar Probe, they control the biasing currents to the electric field antenna. The spacecraft is moving upwards of 200 km/s relative to the Sun in parts of its orbit and it's plowing through regions of space with a lot of micron-sized dust (e.g., wrote a more detailed answer about this stuff here https://space.stackexchange.com/a/17646/12508). As such, it gets bombarded by hypersonic dust, which ablates and ejects material from the spacecraft. The WISPR instrument can sometimes see this debris leaving the heat shield, which is really neat (and kind of scary for the team). During every inbound orbit, the FIELDS team changes the biasing current settings to account for the new plasma they expect to pass through. On one pass they went a little too far and pushed the instrument to the rails for a little bit. Very interestingly, they saw some of that debris from the heat shield suddenly start to orbit the electric field antenna (i.e., they were acting like a long, current-carrying wire and the debris like huge, charged particles). It makes for some really neat movies. | {
"domain": "physics.stackexchange",
"id": 89889,
"tags": "experimental-physics, measurements, plasma-physics"
} |
.NET 4.5 licensing subsystem using RSA-4096 strong name key, SHA256 signed XML, and assembly signature enforcing | Question: Abstract
For the past week I have been looking at taking advantage of the .NET 4.5 improvements to code signing and XML signing to produce a licensing subsystem I can use to license my own products. I now have the thing working pretty well, and I am looking for input as to how it could be improved, or what security issues I might have missed. Please feel free to propose enhancements and point out flaws if you see them.
What I wanted was to be able to take a solution with assemblies signed with a strong name key, and use that same key pair to sign licenses on my server and validate them in the client application. Historically, the strong name assembly signature in .NET was always pretty weak. Until .NET 4.5 it only supported RSA key lengths of 1024 bits and used the now obsolete SHA1 hashing algorithm to produce signatures. The assembly signature is also not enforced by default, and only used to identify assemblies in the GAC. .NET 4.5 has now added support for any key length and for SHA256 signatures. The same support was also added to the SignedXml class. I wanted to use the strong name key pair to A) avoid having to distribute a separate public key and B) avoid having to dish out for a suitable CA certificate. I had an idea to use reflection to extract the public key from the signed assembly and use it to first validate the assembly integrity by enforcing signature validation, and then validate the license when an assertion was made.
What I am looking for here is constructive criticism, possible vulnerabilities in the approach (other than the private key becoming known), or how I could make this better/more secure.
The Code
First, I created a strong name key file with a 4096-bit RSA key:
Public Shared Function SaveKeyPairToSnk(rsa As RSACryptoServiceProvider, filename As String) As Boolean
Try
Using fs As New FileStream(filename, FileMode.Create, FileAccess.Write)
Dim bytes = rsa.ExportCspBlob(True)
fs.Write(bytes, 0, bytes.Length)
End Using
Return True
Catch ex As Exception
Return False
End Try
End Function
And signed some test assemblies with it in a VS solution.
Then, I needed a way to enforce the assembly signatures before asserting the license, to ensure they were not tampered with. I designed a base class that P/Invokes to StrongNameSignatureVerificationEx (mscoree.dll) for this, and throws if the validation fails. My protected assemblies would inherit from this class and call the base constructor on activation, which would validate the calling assembly signature and throw if validation failed, or if the calling assembly had a different public key than the base assembly, to marry them together. If the base class constructor throws, the protected derived class cannot be used.
Public Class LicensedClassBase
Shared Sub New()
VerifyAssemblySignature(Assembly.GetCallingAssembly())
End Sub
Public Sub New()
VerifyAssemblySignature(Assembly.GetCallingAssembly())
End Sub
Private Shared Sub VerifyAssemblySignature(assmbly As Assembly)
Dim wasVerified As Boolean
If Not (NativeMethods.StrongNameSignatureVerificationEx(assmbly.Location, True, wasVerified) _
AndAlso wasVerified _
AndAlso assmbly.GetName().GetPublicKey().SequenceEqual(Assembly.GetExecutingAssembly().GetName.GetPublicKey())) Then
Throw New LicensingException("Signature verification failed: Assembly signature did not match.")
End If
End Sub
End Class
Then I started working on the licenses. This is a sample of a signed license file. This is actually one of my debugging licenses:
<License xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<Id>243</Id>
<CustId>4365</CustId>
<CustName>Joe Blow</CustName>
<IssueDate>2016-05-07T15:49:46.1482476-04:00</IssueDate>
<ExpiryDate>2017-05-07T15:49:46.1482476-04:00</ExpiryDate>
<ProductId>1</ProductId>
<ProductName>Abacus</ProductName>
<ProductVersion>1</ProductVersion>
<ProductEdition>DEV</ProductEdition>
<ProductCount>1</ProductCount>
<HardwareIds>
<string>l/tYpAUEn9yhRQg9bijp/g==</string>
</HardwareIds>
<Features>
<string>2EF5D742-F06F-42E0-9199-06D94B31B97E</string>
<string>F4A23FDF-39CC-422E-A2AC-D279A27B64FF</string>
</Features>
<Signature xmlns="http://www.w3.org/2000/09/xmldsig#">
<SignedInfo>
<CanonicalizationMethod Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315" />
<SignatureMethod Algorithm="http://www.w3.org/2001/04/xmldsig-more#rsa-sha256" />
<Reference URI="">
<Transforms>
<Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature" />
<Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#" />
</Transforms>
<DigestMethod Algorithm="http://www.w3.org/2001/04/xmlenc#sha256" />
<DigestValue>gNyvSh639wV7wHa4UYGPG524pjQ8JZBgaHhEiAm541k=</DigestValue>
</Reference>
</SignedInfo>
<SignatureValue>ntQaT+PMZIS6eke81Vu0uRy8JJDhDfPic5e9Er34tDm00oprQ4qAFVJ1reuXSt+GIf/8XZAV0vR9RLqbB6R5K26lfQc5FCUotLYYjAYexFxwFzJqFV2hrYjhNxYHnXZRs37wY9iVbZlrG7fmEvqg7uN5cb1/K5a3VTFPoZvcUYkswfbzgxmdMdFDdOJCLLLA5oQEI3E60G32FABTJi11Sn9vCSnyePEJdi8yhJCUU9897bD7t2vkoyfbl7Ud5UyEPXUuKDBuX1uIUlU1WatlvH4qghaeV/LfQk8RSP7wHrtrB6T281ko+1+CdebnjTg5FTjo8vwknBXgDK8CRSQVm6DxNf0zeE+IGOhGXFRMCfFOsS9/jnKLT0wMIIqxPMKBX5cXDTX/4udHw6hLEc9H9X/vQLCyTl76ew8gdpgtZZKt8T/Tms8GUrAcIqZYIsUO399LS17lPtOJ2rXlzhDZSjRdVzHnQmGOWxDMtRF9Jb6b13Gr9JuXtPOmrJTl9kCsr+Dv81/h1aCa6xuwIkJtKS2n233+E6zsuSXj/eQJH56lsOJq9ijyXPtRV8LPXkY1Dta5vBwV2EeBA2LAzVOqU6SmM0B99XMCV90PcRLw71OnpdmMs/iUBQNyzn3Awk68hcJy5H3StZD5kl41RObYHQLvVU8/U6bFuwUiY1MAizM=</SignatureValue>
</Signature>
</License>
This was signed using this function:
Public Shared Function SignXml(xmlDoc As XmlDocument, rsaKey As RSACryptoServiceProvider) As XmlDocument
Try
CryptoConfig.AddAlgorithm(GetType(RSAPKCS1SHA256SignatureDescription), "http://www.w3.org/2001/04/xmldsig-more#rsa-sha256")
Dim signedXml As New SignedXml(xmlDoc)
signedXml.SigningKey = rsaKey
signedXml.SignedInfo.SignatureMethod = "http://www.w3.org/2001/04/xmldsig-more#rsa-sha256"
Dim reference As New Reference()
reference.Uri = ""
reference.AddTransform(New XmlDsigEnvelopedSignatureTransform())
reference.AddTransform(New XmlDsigExcC14NTransform())
reference.DigestMethod = "http://www.w3.org/2001/04/xmlenc#sha256"
signedXml.AddReference(reference)
signedXml.ComputeSignature()
Dim xmlDigitalSignature As XmlElement = signedXml.GetXml()
xmlDoc.DocumentElement.AppendChild(xmlDoc.ImportNode(xmlDigitalSignature, True))
If xmlDoc.FirstChild.GetType() = GetType(XmlDeclaration) Then xmlDoc.RemoveChild(xmlDoc.FirstChild)
Return xmlDoc
Catch ex As Exception
xmlDoc = Nothing
End Try
Return xmlDoc
End Function
And is validated using this one:
Public Shared Sub VerifySignedXml(xmlDoc As XmlDocument, rsaKey As RSACryptoServiceProvider)
Dim signedXml As New SignedXml(xmlDoc)
Dim nodeList As XmlNodeList = xmlDoc.GetElementsByTagName("Signature")
If nodeList.Count > 0 Then
signedXml.LoadXml(CType(nodeList(0), XmlElement))
Else
Throw New LicensingException("Signed XML verification failed: No Signature was found in the document.")
End If
If Not signedXml.CheckSignature(rsaKey) Then
Throw New LicensingException("Signed XML verification failed: Document signature did not match.")
End If
End Sub
Both are called from the base class constructor, to which I added this code to assert the license:
Private Shared Sub AssertLicense(assmbly As Assembly)
VerifyAssemblySignature(assmbly)
If assmbly IsNot Assembly.GetExecutingAssembly() Then
Dim _config = New Configuration.ConfigManager()
Dim serializer As New XmlSerializer(GetType(License))
Dim featureId = ""
Dim attrib = assmbly.GetCustomAttributes(True).OfType(Of FeatureIdAttribute)().FirstOrDefault
If attrib IsNot Nothing Then
featureId = attrib.FeatureId
End If
Utils.VerifySignedXml(_config.License, Utils.GetAssemblyPublicKey(assmbly))
Using reader As XmlReader = New XmlNodeReader(_config.License)
If serializer.CanDeserialize(reader) Then
Dim lic As License = serializer.Deserialize(reader)
Dim now = Utils.GetCurrentDateTime()
If lic Is Nothing Then
Throw New LicensingException("Your license is corrupted.")
End If
If lic.IssueDate > now Then
Throw New LicensingException("Your license has not been activated yet.")
End If
If lic.ExpiryDate < now Then
Throw New LicensingException("Your license is expired.")
End If
If Not lic.HardwareIds.Contains(Utils.GetHardwareId()) Then
Throw New LicensingException("Your license is not valid for this hardware platform.")
End If
If Not My.Application.Info.ProductName.StartsWith(lic.ProductName, True, CultureInfo.InvariantCulture) Then
Throw New LicensingException("Your license is not valid for this product.")
End If
If Not My.Application.Info.Version.ToString().StartsWith(lic.ProductVersion, True, CultureInfo.InvariantCulture) Then
Throw New LicensingException("Your license is not valid for this version of the product.")
End If
If Not (attrib IsNot Nothing _
AndAlso lic.Features.FirstOrDefault(Function(f) f.ToUpperInvariant() = featureId.ToUpperInvariant) IsNot Nothing) Then
Throw New LicensingException("Your current license does not include access to the feature invoked.")
End If
End If
End Using
End If
End Sub
And this is how the hardware ID is generated:
Public Shared Function GetHardwareId() As String
Try
Dim rawId = ""
Using mbs As New ManagementObjectSearcher("Select * From Win32_processor")
rawId += mbs.Get().Cast(Of ManagementObject)().First()("ProcessorID").ToString
End Using
Using dsk As New ManagementObject("win32_logicaldisk.deviceid=""c:""")
dsk.Get()
rawId += dsk("VolumeSerialNumber").ToString()
End Using
Using mos As New ManagementObjectSearcher("Select * From Win32_ComputerSystemProduct")
rawId += DirectCast(mos.Get().Cast(Of ManagementObject)().First()("UUID"), String)
End Using
Using md5 As New MD5CryptoServiceProvider
Return Convert.ToBase64String(md5.ComputeHash(Encoding.UTF8.GetBytes(rawId)))
End Using
Catch ex As Exception
Return Nothing
End Try
End Function
For the features support, I created a custom assembly attribute with a GUID as the feature ID, and I stamp the feature's assembly with it.
Answer: I think your solution is a good compromise between effort and security.
However, verifying the signature of the assembly from inside the assembly itself is not a big improvement in security. If anybody is able to tamper with your assembly, she can just remove the check. One further improvement could be to move the verification to a native assembly. To ensure that the method of the native assembly is called, you could encrypt one or more of the application's base assemblies and load (and decrypt) them with the method of the native assembly that verifies the entry assembly.
Another improvement could be the way you handle the hardware ID.
You generate a single hardware ID from 3 device IDs (CPU, HD and motherboard). If one of the device IDs changes, your hardware ID no longer matches. A more tolerant solution is to check that at least 2 of the 3 device IDs match. That ensures that your license is still valid if the customer changes their processor, for instance. | {
"domain": "codereview.stackexchange",
"id": 20024,
"tags": ".net, security, vb.net, cryptography"
} |
Which design to maximize friction for a device standing on concrete? | Question: I have a device 4cm X 5cm X 15cm that stands on asphalt/concrete/road. So, the device is taller than its base which makes contact with the asphalt.
I want to maximize the friction so that the device stands as well as possible without any attachment.
I'm considering a thin rubber add-on attached at the bottom of the device and I'm wondering what is the best design for such rubber to maximize the friction with asphalt.
I selected rubber because it has the highest friction coefficient with asphalt - there is a reason why tires are made of rubber...
But how should I design the bottom side of this rubber part that will be in contact with the asphalt? Here is an idea inspired by a bathtub mat. What is the best pattern possible? And what should be the pattern size (small or big relative to the 5cm X 4cm area)?
Answer: I would use three (triangulated) very sharp tip toes, stainless steel conical points, so that they dig into the asphalt a little bit with a little pressure from the installer :-)
I don't know if they have them anymore, but track and field shoes used to have threaded conical spikes, with the shoe soles having threaded inserts to receive the spikes. You could use something similar. Methinks this will grip the ground better than a textured rubber pad.
"domain": "engineering.stackexchange",
"id": 2648,
"tags": "mechanical-engineering, friction, concrete, industrial-engineering, product-engineering"
} |
How to describe a Keras model in a scientific report | Question: How would you describe a machine learning model in a scientific report? It should be detailed, but I have just listed the hyperparameters... Are there other important properties I should mention?
Answer: Some other details you could mention are:
total number of model parameters (e.g. 1.2M or 0.15M) & depth of the network (e.g. 38-layered network)
family/style of the network architecture (e.g. encoder-decoder arch., LSTM)
specifics of connections between network layers (e.g. residual-, dense-, skip-connections)
specifics of individual components of the network structure (e.g. dilated-convs. (CNNs), attention (LSTMs))
description/reasoning of why you chose a particular structure/sequence of connections in your deep learning model
specifics of training/validation/testing procedures (e.g. augmented training data, cross-validation, test-time-augmentation (TTA), frozen network weights)
other specific details/caveats that allow the results of your deep learning model be easily reproduced from the scientific report
For more info on the best kinds of details to be included in the report, refer to "Methodology"/ "Training"/ "Implementation"/ "Proposed Architecture" sections of the deep learning research papers in your relevant area. | {
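To make the first bullet concrete, here is a small sketch (the 784 → 128 → 10 network shape is made up purely for illustration) of how a dense network's total parameter count is tallied, layer by layer, as weights plus biases:

```python
# Hypothetical example: parameter count of a dense 784 -> 128 -> 10 network.
layer_sizes = [784, 128, 10]
params = sum(n_in * n_out + n_out           # weight matrix + bias vector
             for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))
print(f"total parameters: {params:,} (~{params / 1e6:.2f}M)")
# → total parameters: 101,770 (~0.10M)
```

Reporting the rounded figure (e.g. "~0.1M parameters") is usually enough for the reader to judge model capacity.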
"domain": "ai.stackexchange",
"id": 1616,
"tags": "machine-learning, keras, academia"
} |
Comparing puzzle solvers in Java | Question: I have this program that solves \$(n^2 - 1)\$-puzzles for general \$n\$. I have three solvers:
BidirectionalBFSPathFinder
AStarPathFinder
DialAStarPathFinder
AStarPathFinder relies on java.util.PriorityQueue, and DialAStarPathFinder uses the so-called Dial's heap, which is a very natural choice in this setting: all priorities are non-negative integers and the set of all possible priorities is small (should be \$\{ 0, 1, 2, \dots, k \}\$, where \$k \approx 100\$ for \$n = 4\$).
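For orientation before the full Java implementation, here is a minimal sketch (in Python, not the code under review; decreasePriority and the element → node map are omitted for brevity) of the bucket-array idea behind Dial's heap:

```python
# Sketch of Dial's heap: one bucket per integer priority in [0, max_priority],
# plus a pointer to the smallest non-empty bucket.
class DialHeapSketch:
    def __init__(self, max_priority):
        self.buckets = [[] for _ in range(max_priority + 1)]
        self.min_priority = max_priority + 1  # "empty" sentinel
        self.size = 0

    def add(self, element, priority):
        self.buckets[priority].append(element)
        self.min_priority = min(self.min_priority, priority)
        self.size += 1

    def extract_minimum(self):
        if self.size == 0:
            raise IndexError("extracting from an empty heap")
        # advance past empty buckets; the sweep is one-way, so its total
        # cost over a whole run is O(max_priority)
        while not self.buckets[self.min_priority]:
            self.min_priority += 1
        self.size -= 1
        return self.buckets[self.min_priority].pop()
```

With a small bounded priority range (k ≈ 100 here), extract-minimum is effectively constant time, which is exactly why this structure suits the A* setting described above.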
DialHeap.java:
package net.coderodde.puzzle;
import java.util.HashMap;
import java.util.Map;
import java.util.NoSuchElementException;
/**
* This class implements Dial's heap.
*
* @author Rodion "rodde" Efremov
* @version 1.6 (Nov 16, 2015)
* @param <E> the type of the actual elements being stored.
*/
public class DialHeap<E> {
private static final int INITIAL_CAPACITY = 64;
private static final class DialHeapNode<E> {
E element;
int priority;
DialHeapNode<E> prev;
DialHeapNode<E> next;
DialHeapNode(E element, int priority) {
this.element = element;
this.priority = priority;
}
}
private final Map<E, DialHeapNode<E>> map = new HashMap<>();
private DialHeapNode<E>[] table = new DialHeapNode[INITIAL_CAPACITY];
private int size;
private int minimumPriority = Integer.MAX_VALUE;
public void add(E element, int priority) {
checkPriority(priority);
if (map.containsKey(element)) {
return;
}
ensureCapacity(priority);
DialHeapNode<E> newnode = new DialHeapNode<>(element, priority);
newnode.next = table[priority];
if (table[priority] != null) {
table[priority].prev = newnode;
}
if (minimumPriority > priority) {
minimumPriority = priority;
}
table[priority] = newnode;
map.put(element, newnode);
++size;
}
public void decreasePriority(E element, int priority) {
checkPriority(priority);
// Get the actual heap node storing 'element'.
DialHeapNode<E> targetHeapNode = map.get(element);
if (targetHeapNode == null) {
// 'element' not in this heap.
return;
}
// Read the current priority of the 'element'.
int currentPriority = targetHeapNode.priority;
if (priority >= currentPriority) {
// No improvement possible.
return;
}
unlink(targetHeapNode);
targetHeapNode.prev = null;
targetHeapNode.next = table[priority];
targetHeapNode.priority = priority;
if (table[priority] != null) {
table[priority].prev = targetHeapNode;
}
if (minimumPriority > priority) {
minimumPriority = priority;
}
table[priority] = targetHeapNode;
}
public E extractMinimum() {
if (size == 0) {
throw new NoSuchElementException("Extracting from an empty heap.");
}
DialHeapNode<E> targetNode = table[minimumPriority];
table[minimumPriority] = targetNode.next;
if (table[minimumPriority] != null) {
table[minimumPriority].prev = null;
} else {
if (size == 1) {
// Extracting the very last element. Reset to maximum value.
minimumPriority = Integer.MAX_VALUE;
} else {
minimumPriority++;
while (minimumPriority < table.length
&& table[minimumPriority] == null) {
++minimumPriority;
}
}
}
--size;
E element = targetNode.element;
map.remove(element);
return element;
}
public int size() {
return size;
}
private void ensureCapacity(int capacity) {
if (table.length <= capacity) {
int newCapacity = Integer.highestOneBit(capacity) << 1;
DialHeapNode<E>[] newTable = new DialHeapNode[newCapacity];
System.arraycopy(table, 0, newTable, 0, table.length);
System.out.println(table.length + " -> " + newCapacity);
table = newTable;
}
}
private void checkPriority(int priority) {
if (priority < 0) {
throw new IllegalArgumentException(
"Heap does not handle negative priorities. Received: " +
priority);
}
}
private void unlink(DialHeapNode<E> node) {
int priority = node.priority;
if (node.next != null) {
node.next.prev = node.prev;
}
if (node.prev != null) {
node.prev.next = node.next;
} else {
table[priority] = node.next;
}
}
}
PuzzleNode.java:
package net.coderodde.puzzle;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Random;
/**
* This class implements a puzzle node for {@code n^2 - 1} - puzzle.
*
* @author Rodion "rodde" Efremov
* @version 1.6 (Nov 16, 2015)
*/
public class PuzzleNode {
private final byte[][] matrix;
private byte emptyTileX;
private byte emptyTileY;
private int hashCode;
public PuzzleNode(int n) {
this.matrix = new byte[n][n];
byte entry = 1;
for (int y = 0; y < n; ++y) {
for (int x = 0; x < n; ++x) {
matrix[y][x] = entry++;
}
}
matrix[n - 1][n - 1] = 0;
hashCode = Arrays.deepHashCode(matrix);
emptyTileX = (byte)(n - 1);
emptyTileY = (byte)(n - 1);
}
private PuzzleNode(PuzzleNode node) {
int n = node.matrix.length;
this.matrix = new byte[n][n];
for (int y = 0; y < n; ++y) {
for (int x = 0; x < n; ++x) {
this.matrix[y][x] = node.matrix[y][x];
}
}
this.hashCode = Arrays.deepHashCode(this.matrix);
this.emptyTileX = node.emptyTileX;
this.emptyTileY = node.emptyTileY;
}
@Override
public boolean equals(Object o) {
if (o == null) {
return false;
}
if (!o.getClass().equals(this.getClass())) {
return false;
}
PuzzleNode other = (PuzzleNode) o;
if (this.hashCode != other.hashCode) {
return false;
}
return Arrays.deepEquals(this.matrix, other.matrix);
}
@Override
public int hashCode() {
return hashCode;
}
public PuzzleNode up() {
if (emptyTileY == 0) {
return null;
}
PuzzleNode ret = new PuzzleNode(this);
ret.matrix[emptyTileY][emptyTileX] = this.matrix[emptyTileY - 1]
[emptyTileX];
ret.matrix[--ret.emptyTileY][emptyTileX] = 0;
ret.hashCode = Arrays.deepHashCode(ret.matrix);
return ret;
}
public PuzzleNode right() {
if (emptyTileX == this.matrix.length - 1) {
return null;
}
PuzzleNode ret = new PuzzleNode(this);
ret.matrix[emptyTileY][emptyTileX] = this.matrix[emptyTileY]
[emptyTileX + 1];
ret.matrix[emptyTileY][++ret.emptyTileX] = 0;
ret.hashCode = Arrays.deepHashCode(ret.matrix);
return ret;
}
public PuzzleNode down() {
if (emptyTileY == matrix.length - 1) {
return null;
}
PuzzleNode ret = new PuzzleNode(this);
ret.matrix[emptyTileY][emptyTileX] = this.matrix[emptyTileY + 1]
[emptyTileX];
ret.matrix[++ret.emptyTileY][emptyTileX] = 0;
ret.hashCode = Arrays.deepHashCode(ret.matrix);
return ret;
}
public PuzzleNode left() {
if (emptyTileX == 0) {
return null;
}
PuzzleNode ret = new PuzzleNode(this);
ret.matrix[emptyTileY][emptyTileX] = this.matrix[emptyTileY]
[emptyTileX - 1];
ret.matrix[emptyTileY][--ret.emptyTileX] = 0;
ret.hashCode = Arrays.deepHashCode(ret.matrix);
return ret;
}
public List<PuzzleNode> children() {
List<PuzzleNode> childrenList = new ArrayList<>(4);
insert(childrenList, up());
insert(childrenList, right());
insert(childrenList, down());
insert(childrenList, left());
return childrenList;
}
public List<PuzzleNode> parents() {
List<PuzzleNode> parentList = new ArrayList<>(4);
insert(parentList, up());
insert(parentList, right());
insert(parentList, down());
insert(parentList, left());
return parentList;
}
public int getDegree() {
return matrix.length;
}
public byte get(int x, int y) {
return matrix[y][x];
}
public PuzzleNode randomSwap(Random rnd) {
final PuzzleNode newNode = new PuzzleNode(this);
int degree = this.matrix.length;
int sourceX = rnd.nextInt(degree);
int sourceY = rnd.nextInt(degree);
for (;;) {
if (matrix[sourceY][sourceX] == 0) {
sourceX = rnd.nextInt(degree);
sourceY = rnd.nextInt(degree);
} else {
break;
}
}
for (;;) {
int targetX = sourceX;
int targetY = sourceY;
switch (rnd.nextInt(4)) {
case 0:
--targetX;
break;
case 1:
++targetX;
break;
case 2:
--targetY;
break;
case 3:
++targetY;
break;
}
if (targetX < 0 || targetY < 0) {
continue;
}
if (targetX >= degree || targetY >= degree) {
continue;
}
if (matrix[targetY][targetX] == 0) {
continue;
}
byte tmp = newNode.matrix[sourceY][sourceX];
newNode.matrix[sourceY][sourceX] = newNode.matrix[targetY][targetX];
newNode.matrix[targetY][targetX] = tmp;
newNode.hashCode = Arrays.deepHashCode(newNode.matrix);
return newNode;
}
}
@Override
public String toString() {
StringBuilder sb = new StringBuilder();
sb.append("[")
.append(emptyTileX)
.append(", ")
.append(emptyTileY)
.append("]\n");
int n = this.matrix.length;
for (int y = 0; y < n; ++y) {
for (int x = 0; x < n; ++x) {
sb.append(String.format("%2d", matrix[y][x])).append(' ');
}
if (y < n - 1) {
sb.append('\n');
}
}
return sb.toString();
}
private static void insert(List<PuzzleNode> list, PuzzleNode node) {
if (node != null) {
list.add(node);
}
}
}
PathFinder.java:
package net.coderodde.puzzle;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
/**
* This interface defines the API for path finding algorithms and a couple of
* methods for constructing shortest paths.
*
* @author Rodion "rodde" Efremov
* @version 1.6 (Nov 16, 2015)
*/
public interface PathFinder {
public List<PuzzleNode> search(PuzzleNode source, PuzzleNode target);
default List<PuzzleNode> tracebackPath(PuzzleNode target,
Map<PuzzleNode,
PuzzleNode> parentMap) {
List<PuzzleNode> path = new ArrayList<>();
PuzzleNode current = target;
while (current != null) {
path.add(current);
current = parentMap.get(current);
}
Collections.<PuzzleNode>reverse(path);
return path;
}
default List<PuzzleNode>
tracebackPath(PuzzleNode touchNode,
Map<PuzzleNode, PuzzleNode> PARENTSA,
Map<PuzzleNode, PuzzleNode> PARENTSB) {
List<PuzzleNode> path = tracebackPath(touchNode, PARENTSA);
PuzzleNode current = PARENTSB.get(touchNode);
while (current != null) {
path.add(current);
current = PARENTSB.get(current);
}
return path;
}
}
BidirectionalBFSPathFinder.java:
package net.coderodde.puzzle;
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;
/**
* This class implements bidirectional breadth-first search.
*
* @author Rodion "rodde" Efremov
* @version 1.6 (Nov 16, 2015)
*/
public class BidirectionalBFSPathFinder implements PathFinder {
@Override
public List<PuzzleNode> search(PuzzleNode source, PuzzleNode target) {
Objects.requireNonNull(source, "The source node is null.");
Objects.requireNonNull(target, "The target node is null.");
if (source.equals(target)) {
// Bidirectional algorithms do not handle correctly the case where
// the source and target nodes are the same.
return returnTarget(target);
}
Deque<PuzzleNode> QUEUE_A = new ArrayDeque<>();
Deque<PuzzleNode> QUEUE_B = new ArrayDeque<>();
Map<PuzzleNode, PuzzleNode> PARENTS_A = new HashMap<>();
Map<PuzzleNode, PuzzleNode> PARENTS_B = new HashMap<>();
Map<PuzzleNode, Integer> DISTANCE_A = new HashMap<>();
Map<PuzzleNode, Integer> DISTANCE_B = new HashMap<>();
QUEUE_A.addLast(source);
QUEUE_B.addLast(target);
PARENTS_A.put(source, null);
PARENTS_B.put(target, null);
DISTANCE_A.put(source, 0);
DISTANCE_B.put(target, 0);
int bestCost = Integer.MAX_VALUE;
PuzzleNode touchNode = null;
while (!QUEUE_A.isEmpty() && !QUEUE_B.isEmpty()) {
if (touchNode != null) {
if (bestCost < DISTANCE_A.get(QUEUE_A.getFirst()) +
DISTANCE_B.get(QUEUE_B.getFirst())) {
return tracebackPath(touchNode, PARENTS_A, PARENTS_B);
}
}
if (QUEUE_A.size() < QUEUE_B.size()) {
PuzzleNode current = QUEUE_A.removeFirst();
if (DISTANCE_B.containsKey(current)) {
int cost = DISTANCE_A.get(current) +
DISTANCE_B.get(current);
if (bestCost > cost) {
bestCost = cost;
touchNode = current;
}
}
for (PuzzleNode child : current.children()) {
if (!DISTANCE_A.containsKey(child)) {
DISTANCE_A.put(child, DISTANCE_A.get(current) + 1);
PARENTS_A.put(child, current);
QUEUE_A.addLast(child);
}
}
} else {
PuzzleNode current = QUEUE_B.removeFirst();
if (DISTANCE_A.containsKey(current)) {
int cost = DISTANCE_A.get(current) +
DISTANCE_B.get(current);
if (bestCost > cost) {
bestCost = cost;
touchNode = current;
}
}
for (PuzzleNode parent : current.parents()) {
if (!DISTANCE_B.containsKey(parent)) {
DISTANCE_B.put(parent, DISTANCE_B.get(current) + 1);
PARENTS_B.put(parent, current);
QUEUE_B.addLast(parent);
}
}
}
}
return Collections.<PuzzleNode>emptyList();
}
private List<PuzzleNode> returnTarget(PuzzleNode target) {
List<PuzzleNode> path = new ArrayList<>(1);
path.add(target);
return path;
}
}
AStarPathFinder.java:
package net.coderodde.puzzle;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.PriorityQueue;
import java.util.Queue;
import java.util.Set;
/**
* This class implements A* pathfinding algorithm.
*
* @author Rodion "rodde" Efremov
* @version 1.6 (Nov 16, 2015)
*/
public class AStarPathFinder implements PathFinder {
private int[] targetXArray;
private int[] targetYArray;
private void processTarget(PuzzleNode target) {
int n = target.getDegree();
this.targetXArray = new int[n * n];
this.targetYArray = new int[n * n];
for (int y = 0; y < n; ++y) {
for (int x = 0; x < n; ++x) {
byte entry = target.get(x, y);
targetXArray[entry] = x;
targetYArray[entry] = y;
}
}
}
@Override
public List<PuzzleNode> search(PuzzleNode source, PuzzleNode target) {
Objects.requireNonNull(source, "The source node is null.");
Objects.requireNonNull(target, "The target node is null.");
processTarget(target);
Queue<NodeHeapEntry> OPEN = new PriorityQueue<>();
Set<PuzzleNode> CLOSED = new HashSet<>();
Map<PuzzleNode, PuzzleNode> PARENTS = new HashMap<>();
Map<PuzzleNode, Integer> DISTANCE = new HashMap<>();
OPEN.add(new NodeHeapEntry(source, 0));
DISTANCE.put(source, 0);
PARENTS.put(source, null);
while (!OPEN.isEmpty()) {
PuzzleNode current = OPEN.remove().node;
if (current.equals(target)) {
return tracebackPath(target, PARENTS);
}
if (CLOSED.contains(current)) {
continue;
}
CLOSED.add(current);
for (PuzzleNode child : current.children()) {
if (!CLOSED.contains(child)) {
int g = DISTANCE.get(current) + 1;
if (!DISTANCE.containsKey(child)
|| DISTANCE.get(child) > g) {
PARENTS.put(child, current);
DISTANCE.put(child, g);
OPEN.add(new NodeHeapEntry(child, g + h(child)));
}
}
}
}
return Collections.<PuzzleNode>emptyList();
}
private int h(PuzzleNode node) {
int n = node.getDegree();
int distance = 0;
for (int y = 0; y < n; ++y) {
for (int x = 0; x < n; ++x) {
byte entry = node.get(x, y);
if (entry != 0) {
distance += Math.abs(x - targetXArray[entry]) +
Math.abs(y - targetYArray[entry]);
}
}
}
return distance;
}
private static final class NodeHeapEntry
implements Comparable<NodeHeapEntry> {
PuzzleNode node;
int priority;
NodeHeapEntry(PuzzleNode node, int priority) {
this.node = node;
this.priority = priority;
}
@Override
public int compareTo(NodeHeapEntry o) {
return Integer.compare(priority, o.priority);
}
}
}
DialAStarPathFinder.java:
package net.coderodde.puzzle;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Set;
/**
* This class implements A* pathfinding algorithm using Dial's heap.
*
* @author Rodion "rodde" Efremov
* @version 1.6 (Nov 16, 2015)
*/
public class DialAStarPathFinder implements PathFinder {
private int[] targetXArray;
private int[] targetYArray;
private void processTarget(PuzzleNode target) {
int n = target.getDegree();
this.targetXArray = new int[n * n];
this.targetYArray = new int[n * n];
for (int y = 0; y < n; ++y) {
for (int x = 0; x < n; ++x) {
byte entry = target.get(x, y);
targetXArray[entry] = x;
targetYArray[entry] = y;
}
}
}
@Override
public List<PuzzleNode> search(PuzzleNode source, PuzzleNode target) {
Objects.requireNonNull(source, "The source node is null.");
Objects.requireNonNull(target, "The target node is null.");
processTarget(target);
DialHeap<PuzzleNode> OPEN = new DialHeap<>();
Set<PuzzleNode> CLOSED = new HashSet<>();
Map<PuzzleNode, PuzzleNode> PARENTS = new HashMap<>();
Map<PuzzleNode, Integer> DISTANCE = new HashMap<>();
OPEN.add(source, h(source));
DISTANCE.put(source, 0);
PARENTS.put(source, null);
while (OPEN.size() > 0) {
PuzzleNode current = OPEN.extractMinimum();
if (current.equals(target)) {
return tracebackPath(target, PARENTS);
}
if (CLOSED.contains(current)) {
continue;
}
CLOSED.add(current);
for (PuzzleNode child : current.children()) {
if (!CLOSED.contains(child)) {
int g = DISTANCE.get(current) + 1;
if (!DISTANCE.containsKey(child)) {
PARENTS.put(child, current);
DISTANCE.put(child, g);
OPEN.add(child, g + h(child));
} else if (DISTANCE.get(child) > g) {
PARENTS.put(child, current);
DISTANCE.put(child, g);
OPEN.decreasePriority(child, g + h(child));
}
}
}
}
return Collections.<PuzzleNode>emptyList();
}
private int h(PuzzleNode node) {
int n = node.getDegree();
int distance = 0;
for (int y = 0; y < n; ++y) {
for (int x = 0; x < n; ++x) {
byte entry = node.get(x, y);
if (entry != 0) {
distance += Math.abs(x - targetXArray[entry]) +
Math.abs(y - targetYArray[entry]);
}
}
}
return distance;
}
}
PerformanceDemo.java:
import java.util.List;
import java.util.Random;
import net.coderodde.puzzle.AStarPathFinder;
import net.coderodde.puzzle.BidirectionalBFSPathFinder;
import net.coderodde.puzzle.DialAStarPathFinder;
import net.coderodde.puzzle.PathFinder;
import net.coderodde.puzzle.PuzzleNode;
public class PerformanceDemo {
public static void main(String[] args) {
int SWAPS = 16;
PuzzleNode target = new PuzzleNode(4);
PuzzleNode source = target;
long seed = System.nanoTime();
Random random = new Random(seed);
for (int i = 0; i < SWAPS; ++i) {
source = source.randomSwap(random);
}
System.out.println("Seed: " + seed);
profile(new BidirectionalBFSPathFinder(), source, target);
profile(new AStarPathFinder(), source, target);
profile(new DialAStarPathFinder(), source, target);
}
private static void profile(PathFinder finder,
PuzzleNode source,
PuzzleNode target) {
long startTime = System.nanoTime();
List<PuzzleNode> path = finder.search(source, target);
long endTime = System.nanoTime();
System.out.printf("%s in %.2f milliseconds. Path length: %d\n",
finder.getClass().getSimpleName(),
(endTime - startTime) / 1e6,
path.size());
}
}
DialHeapTest.java:
package net.coderodde.puzzle;
import java.util.NoSuchElementException;
import org.junit.Test;
import static org.junit.Assert.*;
import org.junit.Before;
public class DialHeapTest {
private DialHeap<Integer> heap;
@Before
public void before() {
heap = new DialHeap<>();
}
@Test
public void test() {
for (int i = 9, size = 0; i >= 0; --i, ++size) {
assertEquals(size, heap.size());
heap.add(i, i);
assertEquals(size + 1, heap.size());
}
int i = 0;
while (heap.size() > 0) {
assertEquals(Integer.valueOf(i++), heap.extractMinimum());
}
try {
heap.extractMinimum();
fail("Heap should have thrown NoSuchElementException.");
} catch (NoSuchElementException ex) {
}
// 9 -> 14
// 8 -> 13
// ...
// 0 -> 5
for (i = 9; i >= 0; --i) {
heap.add(i, i + 5);
}
for (i = 5; i < 10; ++i) {
heap.decreasePriority(i, i - 5);
}
for (i = 0; i < 5; ++i) {
assertEquals(Integer.valueOf(i + 5), heap.extractMinimum());
}
for (i = 0; i < 5; ++i) {
assertEquals(Integer.valueOf(i), heap.extractMinimum());
}
// Test that the heap expands its internal array whenever exceeding its
// size.
for (i = 0; i < 1000; ++i) {
heap.add(i, i);
}
heap.add(10_000, 32_000);
while (heap.size() > 0) {
heap.extractMinimum();
}
heap.add(1, 1);
heap.add(0, 0);
heap.decreasePriority(0, 10);
assertEquals(Integer.valueOf(0), heap.extractMinimum());
assertEquals(Integer.valueOf(1), heap.extractMinimum());
assertEquals(0, heap.size());
heap.add(1, 1);
heap.add(0, 2);
heap.add(0, 0);
assertEquals(Integer.valueOf(1), heap.extractMinimum());
assertEquals(Integer.valueOf(0), heap.extractMinimum());
}
}
The best figure I got so far:
Seed: 665685156966189
BidirectionalBFSPathFinder in 6929.62 milliseconds. Path length: 33
AStarPathFinder in 458.56 milliseconds. Path length: 33
DialAStarPathFinder in 104.86 milliseconds. Path length: 33
Any critique is much appreciated.
Answer: General
This is a perfect example of how multiple implementations of an interface make a design very robust, so I have only a little to address.
Normalization
Break your implementations of the search(.., ..) methods into smaller pieces. While doing so, try to preserve or improve locality, avoid rampant parameter declarations, and avoid passing working references around. The search() method of BidirectionalBFSPathFinder in particular is very long. Extracting methods and introducing inner worker classes should help.
Multiple return statements, break, continue
If you experience difficulties extracting methods or classes, multiple return statements within a method may be the problem. Try to have only one return statement per method, at the end. This makes the code much easier to refactor and extend.
"break" and "continue" cause the same problems. Avoid them in general and search other structures that preserve a well-defined control flow. Often "break" is used after checking a condition. This condition should be where it supposed to be: The loop header/footer. If you have multiple break statements the breaking conditions are spread all over the place. But they should all be at ONE place.
default methods in interfaces
Do not use default methods in interfaces as a substitute for introducing an abstract class. Because they have public scope, these methods are accessible to any client and confuse them about the intended usage. Instead, introduce an abstract class (AbstractPathFinder) that provides the general functionality for every concrete implementation of PathFinder, and extend it. Then you can drop the public scope in favor of protected, so only subclasses can access these utility methods.
Your interface will then look as clean as heaven:
public interface PathFinder {
public List<PuzzleNode> search(PuzzleNode source, PuzzleNode target);
} | {
"domain": "codereview.stackexchange",
"id": 18471,
"tags": "java, algorithm, pathfinding, sliding-tile-puzzle, priority-queue"
} |
Chloroplasts in an animal cell | Question: What would happen if we inject a chloroplast organelle into an animal cell?
Will the animal cell destroy it? Or is it possible that the chloroplast will somehow survive, and even replicate? Could there be photosynthesis in such a cell, or will some of the necessary mechanisms be missing?
Answer: To answer your bigger question:
Yes, most of this is possible under some conditions: animals and animal cells can acquire chloroplasts and use them.
For example, see Elysia chlorotica, whose cells actively take up chloroplasts, use them, and keep them alive (though not replicating). Some algal genes are also contained in the Elysia chlorotica genome, which may be considered partial replication.
There are also salamanders that harbour replicating algae (since embryogenesis), even algae (with chloroplasts) within animal cells, though here the algae might better be understood as symbionts or "cell types", and the animal cells don't have the chloroplasts by themselves. | {
"domain": "biology.stackexchange",
"id": 8046,
"tags": "photosynthesis, chloroplasts"
} |
In a finite horizon reinforcement learning problem, are the $Q$ and value functions dependent on time? | Question: Typically the definition I see for the $Q$ and value functions is
$$
Q^\pi(s_t, a_t) = \mathbb{E}_\tau\left[\sum_{t'=t}^T\gamma^{t'-t}r(s_{t'}, a_{t'})\ |\ s_t, a_t\right] \\
V^\pi(s_t) = \mathbb{E}_\tau\left[\sum_{t'=t}^T\gamma^{t'-t}r(s_{t'}, a_{t'})\ |\ s_t\right]
$$
where the expectation is taken over trajectories. If $T = \infty$ (that is, in an infinite time horizon), $Q^\pi(s_t, a_t)$ and $V^\pi(s_t)$ do not depend on time. However, for finite time horizons, it seems like they are time dependent: even if $s_2 = s_3$ are the same state, we can have $V^\pi(s_2)\neq V^\pi(s_3)$.
However, I often see people talking about the value and Q-functions as if they aren't dependent on time, irrespective of the value of $T$. For example, when I was learning about policy evaluation, where we fit a function approximator to the value function, the training data we use is $\{(s_t^i, \hat{V}^\pi(s_t^i))\}$ (sampled from a bunch of roll-outs) - this makes it seem like the timestep $t$ is irrelevant: if $s_2^i = s_3^j$, we're expecting the neural network to map both $s_2^i$ and $s_3^j$ to two different values: $\hat{V}^\pi(s_2^i)$ and $\hat{V}^\pi(s_3^j)$. Obviously a function approximator could just fit to the "average" value, but something still seems awry in its formulation.
As other evidence, I often see people write $V^\pi(s)$ for the value function, even in a finite horizon setting, making no mention of the time parameter. In implementations of Q-learning I've seen, the $Q$ function is represented as a 2d-array, not a 3d-array (meaning it's only dependent on the state and the action, not on a timestep).
Can someone help me wrap my head around my misunderstanding? How is it that we can effectively ignore the time parameter? Is is that for large enough $T$, the problem is effectively infinite horizon (since $\gamma^{t'-t}$ becomes so small)?
Answer: An important caveat that you might be missing is the distinction between the "finite horizon" and "episodic" settings for your RL problems. In the episodic setting there is a set of terminal states that end the episode. In that context there is also the time $T$ of termination, but $T$ is a random variable that varies from episode to episode. Because of that, it is also sometimes called the "indefinite-horizon" setting.
The continuing and episodic settings can be unified by adding an "absorbing state" that transitions only to itself and returns zero reward. So all the results for the infinite-horizon setting are valid for the episodic setting (even when $\gamma=1$).
For more details you can check the Sutton and Barto book, sections 3.3 and 3.4, plus there is a discussion in the "remarks" section after the chapter 3.
As for finite-horizon problems, your reservations are exactly correct. $Q(s,a)$ values at $t = T-1$ would be exactly equal to expected rewards. At $t = T-2$ you'll have to perform a 1-step lookahead, and so on, effectively "unrolling" the Bellman equation for each time slice. | {
"domain": "datascience.stackexchange",
"id": 11488,
"tags": "reinforcement-learning, q-learning"
} |
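The per-time-step "unrolling" of the Bellman equation described in the answer above can be sketched as backward induction on a toy tabular MDP (the transition probabilities and rewards below are invented purely for illustration, with $\gamma = 1$):

```python
import numpy as np

# Toy finite-horizon MDP: 2 states, 2 actions, horizon T = 3.
# P[s, a, s'] is a transition probability and R[s, a] an expected
# immediate reward; all numbers are made up for illustration.
T, nS, nA = 3, 2, 2
P = np.zeros((nS, nA, nS))
P[0, 0] = [0.9, 0.1]; P[0, 1] = [0.2, 0.8]
P[1, 0] = [0.5, 0.5]; P[1, 1] = [0.0, 1.0]
R = np.array([[0.0, 1.0],
              [2.0, 0.5]])

# Backward induction: one Q-table per time step, so Q depends on t.
Q = np.zeros((T, nS, nA))
V = np.zeros((T + 1, nS))        # V[T] = 0: nothing after the horizon
for t in range(T - 1, -1, -1):
    Q[t] = R + P @ V[t + 1]      # 1-step lookahead into slice t + 1
    V[t] = Q[t].max(axis=1)

# At t = T-1 the Q-values equal the expected immediate rewards, and the
# same state gets different values at different time steps, which is why
# a single time-independent Q(s, a) table is only justified in the
# infinite-horizon (or episodic) setting.
assert np.allclose(Q[T - 1], R)
assert not np.allclose(V[0], V[1])
```

In the infinite-horizon discounted case the same update is instead iterated on a single table until it converges to a fixed point, which is why the time index can be dropped there.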
What types of string properties are verifiable in polynomial time? | Question: When given the string and the property in question as a potential certificate, is there any classification theorem that says something along the lines of: all properties (of strings) that have this property (as a sub-property) are verifiable in polynomial time?
Are there any collections of types of patterns in strings that are verifiable in poly time?
A trivial property is that a collection of strings with these properties belongs to a language in NP (belonging to NP being the sub-property).
I'm looking for something more concrete.
I'm looking for the common thread between string properties like these that makes these properties verifiable in poly time for any string.
That is, is there a way to pick properties of strings out of a hat in such a way that the properties you pick are guaranteed to be verifiable in poly time for any string?
Maybe there is a way to do this with implicit complexity--where the only properties you can build (in some restricted language) are the ones that are verifiable in poly time?
Answer: Verifying a property of strings over an alphabet $\Sigma$ is precisely the same problem as checking whether a string is part of a language, called the Entscheidungsproblem or decision problem.
Language: $\Sigma^* \to \{0,1\}$
What you are interested in are 'properties of strings' or in other words 'classes of languages'.
The class you are probably looking for is 'P', which contains all languages for which the decision problem can be solved in polynomial time on a deterministic Turing machine. Interestingly, this class is the same as the class of languages for which the decision problem can be solved by uniform families of polynomial-size circuits.
All C programs whose loops are constantly bounded decide languages in P, for example (they can easily be turned into a polynomial circuit).
From there you can extend the language to include other loops that terminate in polynomial time. You have to be careful with nested loops. There are special Hoare-type logics for this purpose. | {
"domain": "cs.stackexchange",
"id": 15948,
"tags": "polynomial-time"
} |
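As a concrete illustration of the certificate-style verification asked about in the question (the example is mine, not from the answer): the property "the string encodes a composite number" has a polynomial-time verifier once a nontrivial factor is supplied as the certificate, because arithmetic on k-digit numbers takes only poly(k) time.

```python
def verify_composite(n_str: str, certificate: int) -> bool:
    """Accept iff `certificate` is a nontrivial factor of the number
    encoded by `n_str`. Parsing, comparison, and the modulo operation
    are all polynomial in the length of the input string."""
    n = int(n_str)
    return 1 < certificate < n and n % certificate == 0

# The certificate 7 proves that "91" has the property (91 = 7 * 13):
assert verify_composite("91", 7)
# A wrong certificate is rejected; rejection proves nothing about 91:
assert not verify_composite("91", 5)
# For a prime such as 97 no certificate exists, so all are rejected:
assert all(not verify_composite("97", c) for c in range(2, 97))
```

Properties admitting such verifiers are exactly the languages in NP; the properties decidable outright in polynomial time, with no certificate at all, form the class P discussed in the answer.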
Density problem | Question:
Spacecraft are commonly clad with aluminum to provide shielding from radiation. Adequate shielding requires that the cladding provide 20. g of aluminum per square centimeter. How thick must the aluminum cladding be to provide adequate shielding?
So I know that the generally accepted value for the density of aluminium is 2.7 g per cm^3. The answer states it is (20 g/cm2) / (2.7 g/cm3) to get a thickness of 7.4 cm. But this makes no sense to me. Why would you divide an area by density to get height???
Answer: Assume we have an aluminium plate that is $1 cm$ thick. An area of that plate that is $1cm^2$ will have a volume of $1cm^3$ and therefore a mass of $2.7g$.
To get the correct shielding we want that $1cm^2$ to have a mass of $20g$. To do that we need to make the plate $20/2.7 = 7.4 cm$ thick.
This is exactly the same as you did when you divided $\frac{20g}{cm^2}$ by $\frac{2.7g}{cm^3}$. The $g$ and $cm^2$ cancel out, and you end up with $cm$ for the thickness. | {
"domain": "physics.stackexchange",
"id": 89416,
"tags": "homework-and-exercises, radiation, density"
} |
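The unit bookkeeping in the answer can be double-checked with a few lines of arithmetic (the numbers are the ones from the problem statement):

```python
required_areal_density = 20.0   # g of aluminum per cm^2 of hull surface
aluminum_density = 2.7          # g per cm^3 (bulk density of aluminum)

# (g/cm^2) / (g/cm^3) = cm: dividing the two leaves a length, i.e. the
# plate thickness, not "an area divided by a density".
thickness_cm = required_areal_density / aluminum_density

# Sanity check: a 1 cm^2 column through a plate this thick has volume
# thickness * 1 cm^2 and therefore carries the required 20 g of mass.
mass_per_cm2 = thickness_cm * 1.0 * aluminum_density
assert abs(mass_per_cm2 - required_areal_density) < 1e-9

print(f"thickness = {thickness_cm:.1f} cm")   # prints: thickness = 7.4 cm
```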
Other than the South Pole where is the windless place on Earth? | Question: For this other question, "Would this chambered cylinder be possible", where is the calmest, most nearly windless place on Earth for most of the year, from the troposphere to the stratosphere, preferably near the equator?
Answer: Not just the south pole, but 'Ridge A' and many other parts of the high Antarctic Plateau, at or about 4000 metres altitude, are generally recognized as being the least windy. Otherwise, there are many parts of the high-pressure belts at about +/- 30 degrees which have little wind for most of the year. These tend to be very dry deserts where occasional winds carry momentum from other regions. On a local scale there are some deep valleys in tropical rain forests. Once you get below the canopy turbulence level they seldom receive winds of any significance - just the lightest breeze from impeded convection. However, records are hard to find because anemometers in such locations are not really representative of anything.
There is an Instagram post which claims that the Fern Tree bus stop, in Hobart, Tasmania, is the 'calmest place on Earth'.
But my experience of Hobart is that icy winds in winter can be far from calm.
These things are relative. Compared to the 2100 km/hour winds of Neptune, everywhere on our planet is as close to windless as makes no difference. | {
"domain": "earthscience.stackexchange",
"id": 951,
"tags": "atmosphere, wind, geography, troposphere, stratosphere"
} |
What are the limitations of RNNs? | Question: For a school project, I'm planning to compare Spiking Neural Networks (SNNs) and Deep Learning recurrent neural networks, such as Long Short Term Memory (LSTMs) networks in learning a time-series. I would like to show some case where SNNs surpass LSTMs. Consequently, what are the limitations of LSTMs? Are they robust to noise? Do they require a lot of training data?
Answer: I finally finished the project. Given really short signals and a really small training set, SNNs (I used Echo State Machines and a neural form of SVM) vastly outperformed Deep Learning recurrent neural networks. However, this may be mostly because I'm really bad at training Deep Learning networks.
Specifically, SNNs performed better at classification of various signals I created. Given the following signals:
The various approaches had the following accuracy, where RC = Echo State Machine, FC-SVM = Frequency Component SVM and vRNN = Vanilla Deep Learning Recurrent Neural Network:
SNNs were also more robust to noise:
For more information, including how I desperately tried to improve the Deep Learning classification approach performance, check out my repository and the report I wrote which is where all the figures came from.
Update:
After spending some time away from this project, I think one of the reasons that RNNs did so badly here is that they're bad at dealing with really long signals. Had I chunked the signals, with some sort of smoothing as preprocessing, they probably would have performed better. | {
"domain": "cs.stackexchange",
"id": 7575,
"tags": "machine-learning, neural-networks"
} |
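The chunking-with-smoothing preprocessing suggested in the update above could look roughly like this (the window, stride, and kernel lengths are arbitrary illustrative choices, not values from the project):

```python
import numpy as np

def chunk_signal(signal, window=128, stride=64, smooth=8):
    """Moving-average-smooth a long 1-D signal, then split it into
    overlapping windows, so a recurrent network trains on many short
    sequences instead of one very long one."""
    signal = np.asarray(signal, dtype=float)
    kernel = np.ones(smooth) / smooth
    smoothed = np.convolve(signal, kernel, mode="same")  # keeps length
    starts = range(0, len(smoothed) - window + 1, stride)
    return np.stack([smoothed[s:s + window] for s in starts])

# A 1000-sample noisy sine becomes a batch of short windows:
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 1000)
x = np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=1000)
print(chunk_signal(x).shape)   # prints: (14, 128)
```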
What's the deal with black holes and "no information from inside the event horizon can leave"? | Question: I don't know if this would fall under the other questions, but I have always been confused by this... Information has left the black hole; its mass, rotation, location, and a few other properties. My problem has been that a true singularity should collapse the space around it and become separate from the universe, since being able to observe the effects of a singularity breaks the fact that it is a singularity.
If I see a black hole I can already tell its mass and be affected by the gravity of an object from which I can receive no other information. This would break things too!
Let's just say I am an all powerful being who can do almost impossible things, but I still have to follow the laws of the universe. I decide to create a gravity observation station in orbit which uses lasers to see very small changes in the black hole. It also has a teleporter of sorts.
I then enter the black hole and construct another station, this one however uses small black holes and impossibly powerful thrusters and materials to do certain things. Its purpose is to manipulate and sustain a small black hole "orbiting" the inside of the bigger black hole (being sustained, barely, by the station) it then changes the direction this smaller black hole orbits in a kind of binary way.
I then step into the "teleporter", am converted to binary instructions, and am encoded into this gravitational and rotational information. (Let's just say the station has no lifespan.) This station, over a huge amount of time, pieces together the code and reconstructs me in the "teleporter" on board.
Have I just circumvented the event horizon using the disclosed information available to me? Why would or wouldn't this work? From what I see, being able to describe a black hole violates the properties of the black hole.
Answer: You are right in that, if you could send a message of any kind from inside a black hole, then there is no magic required to rebuild you from the information about you. If you could send even a single bit of information, then the black hole wouldn't be so black. You want to do this using the rotation of the black hole.
This doesn't work, for various reasons.
Information has left the black hole; its mass, rotation, location, and a few other properties
The mass etc. is a property of the black hole, not information travelling from inside the black hole. The reason is that you can't manipulate these properties in the way you describe, so they can't be used to send a message.
I then enter the black hole and construct another station
Here is your first problem. From the frame of reference outside the black hole, there is infinite time dilation at the event horizon. Hence you never quite reach the event horizon, nor enter the black hole.
From the frame of reference of the infalling person, the event horizon is crossed. So let's somehow pretend that you are now inside the event horizon.
Its purpose is to manipulate and sustain a small black hole "orbiting" the inside of the bigger black hole.
You can't orbit, you can only fall. The required orbital speed is higher than the speed of light, so you will reach the singularity as surely as you will reach next Tuesday. Your time has now been infinitely dilated, so events that happen in finite time for you take an infinite amount of time in the frame of the outside observer, which makes sending a message using variations in rotation impossible.
Finally, from inside the black hole you can't change the total mass or rotation. The angular momentum of a closed system is conserved, so if you use your thrusters to change the direction of the small black hole you are carrying, the total angular momentum of the black hole remains constant. You can't change the total angular momentum with thrusters if you include both the angular momentum of the ship and the angular momentum of the gas ejected by the thrusters.
Similarly you can't change the total mass or location of the black hole from inside the black hole. And you can't do anything in a finite time from the perspective of an outside observer.
Thus no information can be encoded in the observed properties of the black hole. | {
"domain": "astronomy.stackexchange",
"id": 5697,
"tags": "black-hole, event-horizon"
} |