| anchor | positive | source |
|---|---|---|
How did the "serendipitous rediscovery" of Sulfasalazine as an antirheumatic agent after 30 years happen? | Question: This excellent answer describes the history of the ~50-year-old drug Sulfasalazine; it's worthwhile to take a moment and read through the answer now.
Roughly speaking the drug is an antibiotic molecule linked to a molecule of aspirin, developed at a time when rheumatoid arthritis inflammation was assumed to be caused by an infection. Ironically the drug is now widely used as a treatment for RA, but its mechanism of action does not seem to be related to the way that it was originally designed to work.
Sulfasalazine is on the World Health Organization's Model List of Essential Medicines. See section 17.3, Anti-inflammatory medicines.
Within one of the block quotes in that answer from a publication by the Department of Medicine at Princeton can be found:
[...] Early therapeutic results were encouraging, but the drug was discarded as an antirheumatic agent for 30 years, until its serendipitous rediscovery (emphasis added).
Pubmed shows this to be Pinals R.S., J Rheumatol Suppl. 1988 Sep;16:1-4., PMID: 2903922, but I can't find access to the Journal of Rheumatology Supplements that far back in time.
Can somebody track this down and relate the "serendipitous rediscovery" story here? Or perhaps find a link to another author's version of the story I can read? What was the serendipitous biological evidence that led to rediscovery of Sulfasalazine as a disease-modifying anti-rheumatic drug? (DMARD, see also arthritis.org)
Answer: Your question led me to:
Pinals, RS (1988) Sulfasalazine in the Rheumatic Disease. Seminars in Arthritis and Rheumatism 17:246-259. I reproduce the introduction to this paper below:
Most physicians are familiar with sulfasalazine (SSZ) as an agent commonly used to treat inflammatory bowel disease for more than 40 years. In 1978, McConkey et al reported preliminary studies suggesting that this sulfonamide might be effective in rheumatoid arthritis (RA).1 The use of SSZ arose from earlier open and controlled trials with a sulfone, dapsone, which had demonstrable efficacy in RA, but the benefits were accompanied by relatively frequent, often unacceptable adverse effects.2 Dapsone was effective in dermatitis herpetiformis, presumably because of immunosuppressive activity. Another therapeutic agent for this skin disorder was sulfapyridine (SP), a drug with a poor record of gastric tolerance. Therefore, McConkey et al elected to try SSZ, a compound that released sulfapyridine in the bowel after passage through the stomach. Ironically, McConkey was initially unaware that SSZ had been synthesized specifically to treat RA in the late 1930s. In 1980, McConkey et al reported their open experience with SSZ in 74 patients treated for up to 1 year.3 The improvement in erythrocyte sedimentation rate (ESR), C-reactive protein (CRP), and clinical score (a global evaluation) was similar to that achieved with the disease-modifying antirheumatic drugs (DMARDs), and appeared somewhat sooner, often within 2 months. Moreover, although adverse effects were frequent, they were seldom of sufficient severity to discontinue treatment. This encouraging report led to several controlled trials in the United Kingdom and elsewhere, all confirming the efficacy of this interesting agent. The first study in the United States has been reported4 and, although SSZ has not been approved by the Food and Drug Administration for use in RA, many rheumatologists have used it as an alternative DMARD because of its availability and long record of safety and efficacy in inflammatory bowel disease.
What I have gleaned from this, and from a couple of papers cited there, is that B. McConkey, working at a hospital in Birmingham in the UK, became interested in developing methods of clinical assessment of rheumatoid arthritis (RA) in the 1960s, and this led eventually to the discovery that the drug dapsone is effective as a treatment, but with an unacceptable level of adverse side effects. Then, to quote McConkey:
We had studied dapsone partly because of its effect in leprosy and dermatitis herpetiformis; its mode of action in those diseases may be through its immunosuppressant properties. Another drug used in dermatitis herpetiformis is sulphapyridine; it did not attract us as a contender but it is a constituent of salicyl-azo-sulphapyridine (sulphasalazine), a compound originally formulated for RA and latterly found to have immunosuppressant properties.
McConkey et al (1980) Sulphasalazine in rheumatoid arthritis. British Medical Journal 1: 442-444
(This is ref 3 in the Pinals introduction.)
So clearly there was a strand of chemical logic in the rediscovery of the drug; the only serendipitous aspect was that McConkey did not at first know that sulfasalazine was originally designed as a treatment for RA. | {
"domain": "biology.stackexchange",
"id": 7635,
"tags": "pharmacology, inflammation"
} |
How do we know what the kinetic energies are in the principle of least action | Question: I've been trying to wrap my head around the principle of least action, and have come across a conceptual snag.
The classic example that is often given is of a mass's vertical position as a function of time, in a gravitational field. The parabola is the correct path, but other imaginary paths are shown on the same graph as a means of illustration.
The next idea is that action, S, is defined as the definite time integral, between two time points, of the kinetic energy, KE, minus the potential energy, PE, and that the path that is actually taken is the path that minimizes S.
Next, calculus of variations is used to find the path that minimizes S.
I am going to study calculus of variations so I can better follow the relevant ideas, but suppose you wanted to approximate a solution to this problem the hard way, and generate millions of arbitrary paths, and solve for S in each one.
My question is this:
How would you know what KE and PE of the system are, at each point in time?
Is the answer as simple as taking the time derivative of the path to calculate velocity (which will allow a calculation of KE), and obtaining PE using mgh?
But that doesn't seem reasonable, since conservation of energy can easily be violated here (for example, if one imaginary path had a bunch of minima at different heights, then the total energy would be different at each minimum, since KE is 0 at these points but PE is different).
More generally, given that these imaginary paths could not occur under the actual laws of physics, how do we know which laws apply when calculating the action at each point?
Answer:
Is the answer as simple as taking the time derivative of the path to calculate velocity (which will allow a calculation of KE), and obtaining PE using mgh?
Yes, it is.
But that doesn't seem reasonable, since conservation of energy can easily be violated here.
It is true that a path drawn down at random easily violates energy conservation, but this is not a problem. The correct path is the one that minimizes S among all the imaginable paths, even those paths in which energy is not conserved. But it happens that these "non-conservative" paths never minimize S.
To be more precise, a very important theorem makes the following statement: if the potential energy is not explicitly a function of time (that is, PE depends only on the coordinates of the system and not on time; it still evolves over time, but only because the coordinates evolve), then the path that minimizes S respects energy conservation.
More generally, given that these imaginary paths could not occur under the actual laws of physics, how do we know which laws apply when calculating the action at each point?
The amazing thing about the principle of least action is that we do not need to assume anything about the laws that apply! As you will learn if you continue to study this topic, once the PE has been properly defined (for example, in our case it is mgh), the correct laws follow from the principle of least action, just as energy conservation does. That is, you can throw in literally any imaginable path (as long as it is continuous and differentiable), because you can be sure that the ones that minimize S respect the correct laws. The laws themselves follow from the fact that the action has to be minimal.
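As a sketch of the brute-force approach the question describes (assumptions: m = 1 kg, g = 9.8 m/s², endpoints y(0) = y(T) = 0, plain trapezoidal integration), one can evaluate S for the true parabolic path and for arbitrary perturbed paths, and confirm that the parabola gives the smallest action even though the perturbed paths violate energy conservation:

```python
import numpy as np

m, g, T = 1.0, 9.8, 1.0            # assumed units: kg, m/s^2, s
t = np.linspace(0.0, T, 2001)

def action(y):
    """S = integral of (KE - PE) dt along a trial path y(t)."""
    v = np.gradient(y, t)          # velocity comes from the path itself
    lagrangian = 0.5 * m * v**2 - m * g * y
    dt = t[1] - t[0]
    # trapezoidal rule on a uniform grid
    return np.sum(0.5 * (lagrangian[1:] + lagrangian[:-1])) * dt

# True path for y(0) = y(T) = 0 with PE = m*g*y (so y'' = -g):
parabola = 0.5 * g * t * (T - t)
S0 = action(parabola)

# Any perturbation vanishing at the endpoints raises the action,
# including "paths" that blatantly violate energy conservation.
for amp in (0.1, -0.2, 0.5):
    for mode in (1, 2, 3):
        trial = parabola + amp * np.sin(mode * np.pi * t / T)
        assert action(trial) > S0
```

For this Lagrangian the analytic value of the minimal action is S = -g²T³/24, which the numerical result reproduces closely; the point is that no physics beyond the definitions of KE and PE went into evaluating S for each trial path.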
The principle of least action is a foundation of the whole corpus of Mechanics (and not only Mechanics), alternative but equivalent to Newton's Laws. But we cannot show you this at length here. You will see it by yourself if you continue to study the topic. | {
"domain": "physics.stackexchange",
"id": 62319,
"tags": "classical-mechanics, lagrangian-formalism"
} |
What do blue cone cells add to visual function? | Question: First of all, I saw this other question in the SE sites with a good answer, but I didn't find an explanation about the blue cones specifically.
So most human beings have 3 types of cones (cells specialized in color sensing : blue, green and red).
While it may seem obvious that the red- and green-sensing cones are an evolutionary advantage (given our natural environment and how we ate for millennia), for instance to locate fruits and differentiate ripeness, I can't find a clear advantage in having the blue cones. Blue doesn't seem to relate to anything like food, predators, or sexuality. Am I missing an obvious utility?
Let's try to imagine what happens if we remove blue cones :
The sky would look gray. I don't see a survival issue there.
We wouldn't be able to tell the difference between yellow and white (white = green + red in this case). Is that a big deal?
I have the reflex to think "if a character is widespread, there must be a good reason". It seems I should rather say "there must be an explanation".
So I'm trying to think differently, and here are other possible approaches to address this question :
I read that dogs have only blue & yellow cones. Maybe our ancestors had only two cone types, including blue, then a third type appeared and blue stayed just because it wasn't a problem to keep it ?
What we call visible spectrum is in fact the only part of the spectrum that can go through air and water (in the eye) without being absorbed (link). Blue cones would be there just because "hey there's something to see" ?
For a long time we had only fires to look at at night. No TV or screen with Q&A sites to consult. Is it possible that some of our ancestors stared for too long at a hot fire, losing green and red cones? Those with blue cones were not totally blind and managed to reproduce! (This one is... almost a joke.)
Sadly these approaches don't really convince me.
ADDITION : here is a visual example to see what happens if we remove blue information from an image (original image taken here). The image without any blue (middle) is confusing because it makes you think we could see yellow (while yellow wouldn't be different from white). That's why I added the image on the right, containing only information from red and green channels (the blue channel is replaced by min(Red,Green)). I think it illustrates well that blue isn't important, at least for vegetables and fruits !
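A minimal sketch of the two channel manipulations described above (assumption: images are H×W×3 RGB uint8 arrays, e.g. loaded with `imageio.imread`):

```python
import numpy as np

def drop_blue(img):
    """Middle image: zero out the blue channel entirely."""
    out = img.copy()
    out[..., 2] = 0
    return out

def blue_as_min_rg(img):
    """Right image: replace blue with min(R, G), so pure yellow
    (R + G) becomes indistinguishable from white."""
    out = img.copy()
    out[..., 2] = np.minimum(out[..., 0], out[..., 1])
    return out

# A single yellow pixel renders as white under the second transform:
yellow = np.array([[[255, 255, 0]]], dtype=np.uint8)
assert (blue_as_min_rg(yellow)[0, 0] == [255, 255, 255]).all()
```

The second transform is what makes the right-hand image honest: it removes the ability to distinguish yellow from white, which is exactly what losing the S cones would do.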
Answer: Short answer
Primate color vision started off with photosensitive cells detecting medium-to-long wavelengths (M/L cones, detecting greens and reds) and short wavelengths (S cones, most sensitive to blue). By neurophysiologically weighting their relative contributions, the full spectrum (red, orange, yellow, green, blue, and indigo) can be detected by dichromats. Addition of a slightly different M/L cone, generating separate M and L cones, increases the resolution of color vision in the medium and long wavelengths, which is thought to enhance color discrimination useful for foraging fruits. Indeed, blue in itself is not needed for this task. However, missing the blue cones would eliminate a large proportion of the visual spectrum.
Background
Evolutionarily speaking, primate vision started off with dichromatic vision. These early dichromats had retinas with one type of cone sensitive to medium-to-long wavelengths (M/L cones) and one type sensitive to short wavelengths (S cones) (Jacobs, 1996). This organization determines the dynamic range of color vision, i.e., the ability to see colors from short wavelengths (greens and blues) to long wavelengths (reds and yellows). Weighting their relative contributions via neural networks in the retina allows the intermediate wavelengths to be resolved.
Later in evolution, some diurnal species had a gene duplication on their X chromosome, generating a M/L duplicate gene (Jacobs, 2009). Small mutations in these genes eventually paved the way for two closely related M/L opsins, namely one a little more sensitive to longer wavelengths (L opsin) and one to slightly shorter wavelengths (M opsin). The dynamic range of color vision still depends heavily on the S opsin in the blue cones, as the M and L opsins have absorbance spectra much closer to each other (Fig. 1). The benefit of two opsins with nearly identical absorbance spectra is a higher color discrimination ability (higher color resolution) in the medium to long wavelengths. High discriminating power in the red-yellow-green part of the spectrum is thought to be beneficial for diurnal fruit foraging primate species (Jacobs, 1996). Note that cones need a lot of photons to operate; nocturnal primates are typically dichromats and they will not benefit anyway from another M/L cone.
For fruit foraging alone, the L and M cones do a pretty good job, as your example shows. However, there is more to the eye than food alone. Eliminating the blue cones would substantially narrow down the dynamic range of color vision. Further, evolution started off with S and L/M cones and added another M/L cone to that existing system. Evolution didn't start off with a blind ape and the goal to generate the perfect fruit forager; no it started off with a dichromat and over time improved its ability to seek food during daylight.
Fig. 1. Absorbance spectra of the different opsins. source: Kevin MD
References
- Jacobs, PNAS (1996); 93: 577-81
- Jacobs, Philos Trans R Soc Lond B Biol Sci (2009); 364(1531): 2957–67 | {
"domain": "biology.stackexchange",
"id": 6524,
"tags": "evolution, neurophysiology, vision, human-evolution"
} |
Non-existent package gazebo in ROS hydro | Question:
Hi All,
I am using rosmake to build a package in ROS Hydro, but it gives me the errors below:
[rosbuild] Building package labrob_hummingbird_controller
Failed to invoke /opt/ros/hydro/bin/rospack deps-manifests labrob_hummingbird_controller
[rospack] Error: package/stack 'labrob_hummingbird_controller' depends on non-existent package 'gazebo' and rosdep claims that it is not a system dependency. Check the ROS_PACKAGE_PATH or try calling 'rosdep update'
CMake Error at /opt/ros/hydro/share/ros/core/rosbuild/public.cmake:129 (message):
Failed to invoke rospack to get compile flags for package
'labrob_hummingbird_controller'. Look above for errors from rospack
itself. Aborting. Please fix the broken dependency!
It seems that there is no gazebo dependency any more in ROS hydro. Does anyone know which dependency I should use to replace gazebo? The manifest.xml of the stack 'labrob_hummingbird_controller' can be found below:
<package>
<description brief="HummingBirdController">
HummingBirdController
</description>
<author>Lorenzo Rosa</author>
<license>BSD</license>
<review status="unreviewed" notes=""/>
<url>http://ros.org/wiki/HummingBirdController</url>
<depend package="gazebo"/>
<depend package="geometry_msgs"/>
<depend package="gazebo_plugins"/>
<depend package="ar_pose"/>
<!--depend package="ipc_bridge"/-->
<export>
<gazebo plugin_path="${prefix}/lib" />
</export>
</package>
Originally posted by Sendoh on ROS Answers with karma: 85 on 2014-02-20
Post score: 0
Answer:
ROS hydro + Gazebo have a new integration package now
I hope this helps you http://gazebosim.org/wiki/Tutorials/1.9/Overview_of_new_ROS_integration
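As a sketch of what this means for the manifest (assumption: the replacement package names are those provided by the new gazebo_ros_pkgs integration — `gazebo_ros` and `gazebo_plugins`; check the linked tutorial for your exact setup), the dependency would change along these lines:

```xml
<!-- old dependency, no longer provided in Hydro -->
<!-- <depend package="gazebo"/> -->
<!-- new ROS-integrated Gazebo packages from gazebo_ros_pkgs -->
<depend package="gazebo_ros"/>
<depend package="gazebo_plugins"/>
```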
Originally posted by waldezjr with karma: 16 on 2014-03-29
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 17040,
"tags": "ros, gazebo, ros-hydro, rosmake, roshydro"
} |
Is it possible to generate conditions such that the temperature difference created by current is high enough to sustain it? | Question: We know that current can generate temperature difference and vice-versa.
But is it possible to generate such conditions that the temperature difference created by current is high enough to sustain it?
Answer: Although it is a little unclear what you mean here, I will respond with some thoughts.
If what you ask were true (that the temperature difference could create a self-sustaining current flow), then you'd have a perpetual motion machine. Any time this possibility pops up in an analysis, it's an indication that there's an error in your reasoning, and you have to go back and identify it, rather than asserting your right to a Nobel prize, as many others do in similar circumstances.
"domain": "physics.stackexchange",
"id": 51796,
"tags": "electricity, temperature, electric-current, perpetual-motion, thermoelectricity"
} |
1D Acoustical Relations beyond nearest neighbor couplings | Question: Consider some 1D Lattice of atoms with nth neighbor coupling of strength k_{n}. I'm looking for the dispersion relation for acoustical phonons under these conditions.
I start with the Lagrangian,
$$L = K- V$$
$$L = \sum^{\infty}_{n} \frac{1}{2}m \dot{x}_{n}^{2} - \sum^{\infty}_{p=1} \frac{1}{2}k_{p} \{(x_n-x_{n+p})^2 + (x_n - x_{n-p})^2\}$$
Mass is the same for each atom. The Lagrange equation should be
$$m \ddot{x}_{n}=\sum_{p=1}^{\infty} k_p(x_{n-p}+x_{n+p}-2x_n)$$
Now, if I use a travelling wave solution as an ansatz, I should get my dispersion relation as some infinite series. Is this correct? If so, help me out because I can't make it work. Thanks!
Answer: A dispersion relation tells you the conditions under which a certain solution holds, which means that in order to get a dispersion relation you need to assume a solution of some general form. This is a system of harmonic oscillators coupled over a long range, so it is natural to assume a plane wave solution. Using your notation, try
$x_n= e^{i(kna - \omega t)}$
where $\omega$ is the frequency of phonon oscillation. You want to specify which $\omega$ satisfy the equations of motion. It should only take a couple of steps to arrive at the dispersion relation from this.
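For reference, here is a sketch of those couple of steps (assuming $a$ is the lattice spacing, as in the ansatz). Substituting $x_n = e^{i(kna - \omega t)}$ into the equation of motion and cancelling the common factor $e^{i(kna - \omega t)}$ gives

$$-m\omega^2 = \sum_{p=1}^{\infty} k_p\left(e^{-ipka} + e^{ipka} - 2\right) = -\sum_{p=1}^{\infty} 2k_p\left(1 - \cos pka\right),$$

so the dispersion relation is indeed an infinite series,

$$\omega^2(k) = \frac{2}{m}\sum_{p=1}^{\infty} k_p\left(1 - \cos pka\right) = \frac{4}{m}\sum_{p=1}^{\infty} k_p \sin^2\!\frac{pka}{2},$$

which reduces to the familiar nearest-neighbor result $\omega = 2\sqrt{k_1/m}\,|\sin(ka/2)|$ when only $k_1 \neq 0$.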
I hope this helps. | {
"domain": "physics.stackexchange",
"id": 5043,
"tags": "homework-and-exercises, condensed-matter, solid-state-physics"
} |
Grignard reaction with cycloheptatriene | Question: I'm having trouble with the following set of reactions and their products.
Product A is achieved by exploiting the unique aromaticity of cycloheptatriene cations. When A is treated with the Grignard reagent $\ce{MeMgI}$, a mixture of B and C is obtained. Treating A with catalytic amount of $\ce{HCl}$ in $\ce{CCl_4}$ furnishes isomer A’, which is again converted into a mixture of B and C upon treatment with Lewis-acidic Grignard reagents.
For A I got the following structure, since I could not figure out how the aromaticity of the first structure is affecting the reaction.
This cannot be correct, since the Grignard reaction would produce only one product and not a mixture of two.
Having done some research online on the resonance structures of cycloheptatriene, I thought of the following structure for A. But this would not produce two different products either, if I am not mistaken.
Answer: The product A, shown below, is formed by first reducing the ketone to a secondary alcohol with $\ce{LiAlH_4}$ and then by adding a methyl group to form an ether using the reagent $\ce{CH_3I}$.
$\hskip2in$
Next, the Grignard reaction with product A will form two products. The Grignard reagent $\ce{MeMgI}$ can attack A in a 1,2- or a 1,4-fashion, due to the unique aromaticity of cycloheptatriene, giving a mixture of products B and C, the structures of which are shown below.
$\hskip1in$ | {
"domain": "chemistry.stackexchange",
"id": 15548,
"tags": "organic-chemistry, aromatic-compounds, grignard-reagent"
} |
Berry phase in 1D materials | Question: The Berry phase $\phi_B$ is the phase that an eigenstate acquires after its momentum vector goes around a circle at constant energy around the Dirac point.
It is defined as $\phi_B = -i \oint \langle\psi|\partial_{\theta}|\psi\rangle \, d\theta$ and is well known to be non-trivial in the 2D material graphene, where the normalized eigenstate is $\psi = \frac{1}{\sqrt{2}}\left(1, e^{i \theta} \right)^T$ and so $\phi_B = \pi$.
What is the physical meaning of Berry phase in 1D material? How to go around a circle in 1D?
Answer: The Berry phase in one dimension is usually called the Zak phase. Viewing the parameter space as a 1-D Brillouin zone, for a two-band Hamiltonian:
$$ H = h_x \sigma_x + h_y \sigma_y + h_z \sigma_z,$$
the Zak phase is half the solid angle of the winding path of the unit vector
$$ \hat{n} = (h_x, h_y, h_z)/ \sqrt{h_x^2+h_y^2+h_z^2}$$
on the Bloch sphere.
When the Hamiltonian has various symmetries, restrictions appear on the winding path; for example, when the Hamiltonian has chiral symmetry, the winding path becomes a great circle and the result can only assume the values $0$ or $\pi$.
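As a numerical sketch of this quantization (assumptions: the standard SSH model with $h_x = v + w\cos k$, $h_y = w\sin k$, $h_z = 0$ as the chiral-symmetric example, and the discretized Wilson-loop formula for the Berry/Zak phase):

```python
import numpy as np

def zak_phase(v, w, nk=400):
    """Discretized Zak phase of the lower band of the SSH model."""
    ks = np.linspace(0, 2 * np.pi, nk, endpoint=False)
    states = []
    for k in ks:
        hx, hy = v + w * np.cos(k), w * np.sin(k)
        H = np.array([[0, hx - 1j * hy],
                      [hx + 1j * hy, 0]])
        _, vecs = np.linalg.eigh(H)      # ascending eigenvalues
        states.append(vecs[:, 0])        # lower-band eigenvector
    # Gauge-invariant Wilson loop: product of neighboring overlaps
    # around the closed Brillouin zone.
    W = 1.0 + 0j
    for i in range(nk):
        W *= np.vdot(states[i], states[(i + 1) % nk])
    return -np.angle(W)

# Chiral symmetry quantizes the result to 0 or pi:
assert abs(abs(zak_phase(0.5, 1.0)) - np.pi) < 1e-2   # (hx, hy) winds around 0
assert abs(zak_phase(1.5, 1.0)) < 1e-2                # no winding
```

The Wilson-loop product is gauge invariant (the arbitrary phases from `eigh` cancel in the telescoping product), which is why no gauge fixing is needed.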
Several applications of the Zak phase have also been found.
The King-Smith-Vanderbilt formula relates the Zak phase to the polarization.
The value of the Zak phase is related to the existence of edge states.
The values of the Zak phase are related to the $\mathbb{Z}_2$ invariants of the bands. | {
"domain": "physics.stackexchange",
"id": 29012,
"tags": "topological-insulators, graphene, topological-phase, berry-pancharatnam-phase"
} |
why does FeCl₃ have such a specific smell? | Question: When I use $\ce{FeCl3}$ solution for etching circuits it gives off an acidic smell.
I was wondering what is the cause of the smell.
I don't think it's caused by the $\ce{Cl2}$ from $\ce{2FeCl3->2FeCl2 + Cl2}$ because chlorine has a different smell.
The product used is the one in this picture.
Answer: The only clue I have been able to find is a one-off sentence in the Wikipedia article for $\ce{FeCl3}$, for which no reference is given:
Iron(III) chloride undergoes hydrolysis to give an acidic solution.
If so, then the reactions are probably those along the conversion of $\ce{FeCl3}$ to $\ce{Fe2O3}$, producing hydrochloric acid $(\ce{HCl})$. The hydrogen chloride gas that escapes from the solution has a pungent, acrid, acidic smell.
Possible reactions include:
$$\ce{FeCl3 + H2O <=> FeOCl + 2HCl}$$
$$\ce{2FeCl3 + H2O <=> Cl2FeOFeCl2 + 2HCl}$$
$$\ce{2FeOCl + 2H2O <=> Fe2O3(s) + 2HCl}$$
Because these reactions are in equilibrium, solutions of $\ce{FeCl3}$ can be stabilized by addition of $\ce{HCl}$. Addition of $\ce{HCl}$ also promotes the formation of the tetrachloroferrate ion $\ce{FeCl4-}$, which is more resistant to hydrolysis.
$$\ce{FeCl3 + HCl -> FeCl4- + H+}$$
Ferric oxide (iron(III) oxide, $\ce{Fe2O3}$) is insoluble. Does your solution develop a reddish precipitate as it ages? That would be evidence in favor of hydrolysis as the source of $\ce{HCl}$.
"domain": "chemistry.stackexchange",
"id": 660,
"tags": "inorganic-chemistry"
} |
Actionlib connection_monitor.cpp fails when trying to compile ROS Medlodic from source | Question:
Here is the failure output:
[ 97%] Building CXX object CMakeFiles/actionlib.dir/src/connection_monitor.cpp.o
/home/chrisl8/ros_catkin_ws/src/actionlib/src/connection_monitor.cpp: In member function ‘bool actionlib::ConnectionMonitor::waitForActionServerToStart(const ros::Duration&, const ros::NodeHandle&)’:
/home/chrisl8/ros_catkin_ws/src/actionlib/src/connection_monitor.cpp:278:66: error: no matching function for call to ‘boost::date_time::subsecond_duration<boost::posix_time::time_duration, 1000>::subsecond_duration(double)’
boost::posix_time::milliseconds(time_left.toSec() * 1000.0f);
^
In file included from /usr/include/boost/date_time/posix_time/posix_time_config.hpp:16,
from /usr/include/boost/date_time/posix_time/posix_time_system.hpp:13,
from /usr/include/boost/date_time/posix_time/ptime.hpp:12,
from /usr/include/boost/date_time/posix_time/posix_time_types.hpp:12,
from /usr/include/boost/thread/thread_time.hpp:11,
from /usr/include/boost/thread/detail/platform_time.hpp:11,
from /usr/include/boost/thread/pthread/condition_variable.hpp:9,
from /usr/include/boost/thread/condition_variable.hpp:16,
from /usr/include/boost/thread/condition.hpp:13,
from /home/chrisl8/ros_catkin_ws/src/actionlib/include/actionlib/client/connection_monitor.h:43,
from /home/chrisl8/ros_catkin_ws/src/actionlib/src/connection_monitor.cpp:36:
/usr/include/boost/date_time/time_duration.hpp:285:14: note: candidate: ‘template<class T> boost::date_time::subsecond_duration<base_duration, frac_of_second>::subsecond_duration(const T&, typename boost::enable_if<boost::is_integral<Functor>, void>::type*)’
explicit subsecond_duration(T const& ss,
^~~~~~~~~~~~~~~~~~
/usr/include/boost/date_time/time_duration.hpp:285:14: note: template argument deduction/substitution failed:
/usr/include/boost/date_time/time_duration.hpp: In substitution of ‘template<class T> boost::date_time::subsecond_duration<boost::posix_time::time_duration, 1000>::subsecond_duration(const T&, typename boost::enable_if<boost::is_integral<T> >::type*) [with T = double]’:
/home/chrisl8/ros_catkin_ws/src/actionlib/src/connection_monitor.cpp:278:66: required from here
/usr/include/boost/date_time/time_duration.hpp:285:14: error: no type named ‘type’ in ‘struct boost::enable_if<boost::is_integral<double>, void>’
In file included from /usr/include/boost/date_time/posix_time/posix_time_config.hpp:16,
from /usr/include/boost/date_time/posix_time/posix_time_system.hpp:13,
from /usr/include/boost/date_time/posix_time/ptime.hpp:12,
from /usr/include/boost/date_time/posix_time/posix_time_types.hpp:12,
from /usr/include/boost/thread/thread_time.hpp:11,
from /usr/include/boost/thread/detail/platform_time.hpp:11,
from /usr/include/boost/thread/pthread/condition_variable.hpp:9,
from /usr/include/boost/thread/condition_variable.hpp:16,
from /usr/include/boost/thread/condition.hpp:13,
from /home/chrisl8/ros_catkin_ws/src/actionlib/include/actionlib/client/connection_monitor.h:43,
from /home/chrisl8/ros_catkin_ws/src/actionlib/src/connection_monitor.cpp:36:
/usr/include/boost/date_time/time_duration.hpp:270:30: note: candidate: ‘boost::date_time::subsecond_duration<boost::posix_time::time_duration, 1000>::subsecond_duration(const boost::date_time::subsecond_duration<boost::posix_time::time_duration, 1000>&)’
class BOOST_SYMBOL_VISIBLE subsecond_duration : public base_duration
^~~~~~~~~~~~~~~~~~
/usr/include/boost/date_time/time_duration.hpp:270:30: note: no known conversion for argument 1 from ‘double’ to ‘const boost::date_time::subsecond_duration<boost::posix_time::time_duration, 1000>&’
/usr/include/boost/date_time/time_duration.hpp:270:30: note: candidate: ‘boost::date_time::subsecond_duration<boost::posix_time::time_duration, 1000>::subsecond_duration(boost::date_time::subsecond_duration<boost::posix_time::time_duration, 1000>&&)’
/usr/include/boost/date_time/time_duration.hpp:270:30: note: no known conversion for argument 1 from ‘double’ to ‘boost::date_time::subsecond_duration<boost::posix_time::time_duration, 1000>&&’
make[2]: *** [CMakeFiles/actionlib.dir/build.make:63: CMakeFiles/actionlib.dir/src/connection_monitor.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:1034: CMakeFiles/actionlib.dir/all] Error 2
make: *** [Makefile:141: all] Error 2
Any suggestions for how to overcome this?
Originally posted by ChrisL8 on ROS Answers with karma: 241 on 2018-11-11
Post score: 0
Original comments
Comment by Sietse on 2018-11-23:
Same here, probably because of an upgrade to libboost 1.67. At least here on debian testing, on which it worked fine with libboost 1.62
Answer:
Hi,
I think someone found the solution here:
https://stackoverflow.com/a/53382269/10875592
Cheers
Originally posted by Franek with karma: 26 on 2019-08-08
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 32032,
"tags": "ros, ros-melodic, compile, actionlib"
} |
Lazily forward properties | Question: I want to have a object where I can reference another object's properties dynamically while they are still considered properties. If I try to do a simple setattr I'll only get the current state when I read it and it won't act like I'm getting the property.
I can't setattr with property(<target_obj>.__class__.<property>.fget), because I can't overwrite their instances of self with the specific target object I want (as far as I know). Is there a simpler way to do this?
I can successfully attach the property, but then I have to make the target object's class singleton or borg (otherwise other instances will attach their objects to my class). In my specific use case I'm fine to accept making the class receiving the property static.
def build_lazy_linker_property(link_obj, property_name, mark_docs=True):
"""
Detach a property from its object to be used as another class's property.
Note: build_lazy_linker_property is only singleton/borg safe
Example:
- attach property 'current' from class <cache_name> to as 'current_<cache_name>':
setattr(self.__class__, 'current_' + cache_name, build_lazy_linker_property(cache, 'current'))
:param link_obj: Object to link from
:param property_name: Property name to connect from the object to the parent class
:param mark_docs: Boolean: whether to write that this is linked in the docs
:return: property which forwards to the other property
"""
# We couldn't use this in the f methods or it won't lazily evaluate
# prop = getattr(link_obj, property_name)
def fget(self):
return getattr(link_obj, property_name)
def fset(self, value):
property_ = getattr(link_obj.__class__, property_name)
property_.__set__(link_obj, value)
# setattr(property_, 'fset', value)
def fdel(self):
property_ = getattr(link_obj.__class__, property_name)
property_.__delete__(link_obj)
cls_prop = getattr(link_obj.__class__, property_name)
# Find descriptors and add as property inputs
property_inputs = {}
desc_map = {cls_prop.fget: ('fget', fget), cls_prop.fset: ('fset', fset), cls_prop.fdel: ('fdel', fdel)}
for test, fnc in desc_map.items():
# If the property has this descriptor link it
if test:
# add descriptor as kwarg
property_inputs[fnc[0]] = fnc[1]
# Handle docs
doc = ""
# Put that this is a linked property at the top of the docstring
if mark_docs:
doc += "Property linked to '{}.{}'.\n".format(link_obj, property_name)
property_inputs['doc'] = doc
# Put the rest of the docs if they exist
if hasattr(cls_prop, 'doc'):
doc += cls_prop['doc']
property_inputs['doc'] = doc
return property(**property_inputs)
def attach_lazy_link(target_cls, prop_obj, property_name):
"""Attaches property with the same name from prop_obj to target_cls"""
setattr(target_cls, property_name, build_lazy_linker_property(prop_obj, property_name))
My specific use case is difficult to explain so let's look at a test to show how this works as-is first.
def test_build_lazy_linker_property():
class Prop(object):
def __init__(self, prop):
self._prop = prop
@property
def prop(self):
return self._prop
@prop.setter
def prop(self, value):
self._prop = value
class Target(object):
def __init__(self, prop_val):
self.p = Prop(prop_val)
def elevate_prop(self):
attach_lazy_link(self.__class__, self.p, 'prop')
def test_that_props_elevate(self):
self.elevate_prop()
assert self.prop == self.p.prop # noqa this will be unreferenceable until elevate_prop is called
self.prop = 10
assert self.prop == 10
Target(1).test_that_props_elevate()
# Another instance will have the same value set in test_that_props_elevate despite creating a different Prop obj
assert Target(2).prop == 10
Here you can see that after attach_lazy_link, the prop property from the instance made in Target's __init__ is attached to the Target class. Now any Target instance can use that property from the Prop instance.
In my use case I have caches of connection types for various machines. We have different libraries for different products and hosts. Then based on what is being added (either through factory methods or Mixins) we need to have access to a lot of properties which handle the connections. I specifically want the current/active connection from each cache connected to a property called current_$MACHINE_TYPE.
for cache_name in cache_list:
# make simple properties
cache = getattr(self, cache_name + '_cache')
# This is the relevant part!
setattr(self.__class__, 'current_' + cache_name, build_lazy_linker_property(cache, 'current'))
Answer: I'm not a fan of forcing singletons. To avoid these I can see two ways you can go.
Stick with using a property builder.
To work with this, you'd need to make an isolated Type class for each instance of a Type.
So type(Type()) is not Type; however, isinstance(Type(), Type) should still hold.
I think the simplest way to do this would be via a metaclass that implicitly does this on subclass instantiation.
Pros:
Can work on any property.
Cons:
You have to normalize the property.
You need to hack the environment around the wrapped property. (If you mess this up, it may use another class as self)
Kinda hacky on the whole.
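A minimal sketch of that per-instance-class idea (the names PerInstanceMeta and Base are illustrative, not from the original code):

```python
class PerInstanceMeta(type):
    """Give every instantiation its own throwaway subclass, so that
    attaching a property to type(instance) affects only that instance."""
    def __call__(cls, *args, **kwargs):
        one_off = type(cls.__name__, (cls,), {})  # fresh single-use subclass
        return type.__call__(one_off, *args, **kwargs)

class Base(metaclass=PerInstanceMeta):
    pass

a, b = Base(), Base()
type(a).x = property(lambda self: 42)  # lands on a's throwaway class only
print(a.x)                   # 42
print(isinstance(a, Base))   # True
print(type(a) is Base)       # False
print(hasattr(b, 'x'))       # False -- b's class is a different subclass
```

Note the trade-off listed above: the property's functions still need self to end up referring to the right object, which is the hacky part.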
Use the standard ways to customize attribute access.
If you want your classes to be singletons, which I'd recommend you don't, then you can use a Singleton metaclass to add this cleanly.
This requires a hard to read base class. But otherwise allows you to bind an object to this object and extract items from it.
Pros:
Well known interface.
Requires one class.
Cons:
Doesn't work with more than one object.
This looked like your use-case from your unit-test, but I'm not sure that's what you want.
As to your code.
I'm not a fan of your comments, they don't help too much.
I am not a fan of property_inputs, desc_map, etc.
You don't allow this to get properties that don't use the property wrapper.
I don't think the docs code works as intended; the property never has a doc attribute. To get the doc, read __doc__.
Instead I'd use:
from functools import partial, wraps
def build_property(obj, name, mark_docs=True):
prop = getattr(type(obj), name, None)
if isinstance(prop, property):
fget = prop.fget,
fset = prop.fset,
fdel = prop.fdel,
doc = prop.__doc__
else:
fget = getattr, name
fset = setattr, name
fdel = delattr, name
doc = None
def make_call(fn, *org_args):
if fn is None:
return
f = partial(fn, obj, *org_args)
@wraps(fn)
def inner(_, *args):
return f(*args)
return inner
prop = property(*(make_call(*fn) for fn in (fget, fset, fdel)), doc)
if mark_docs:
text = (
"Property linked to '{}.{}'.".format(obj, name),
prop.__doc__
)
prop.__doc__ = '\n'.join(t for t in text if t)
return prop
You can then use the following to get the first way you could do this. This passes your unit test, except Target(2).prop != Target(1).prop.
class Classington(type):
def __call__(cls, *args, **kwargs):
cls = type(cls.__name__, (cls,), {})
return type.__call__(cls, *args, **kwargs)
class PropertyHolder(metaclass=Classington):
def add_property(self, obj, name, mark_docs=True):
setattr(type(self), name, build_property(obj, name, mark_docs))
class Target(PropertyHolder):
def __init__(self, prop):
self.p = Prop(prop)
self.add_property(self.p, 'prop')
However, the second method uses _object rather than p to hold the object. The below achieves the same as your code; however, if you remove the Singleton metaclass from Target, then it, just like the above code, works so that Target(2).prop != Target(1).prop.
class TransparentProxy:
def __init__(self, obj, props=None):
object.__setattr__(self, '_object', obj)
object.__setattr__(self, '_props', set(props or dir(obj)))
def __getattribute__(self, name):
_props = object.__getattribute__(self, '_props')
if name in _props:
_object = object.__getattribute__(self, '_object')
return getattr(_object, name)
else:
return object.__getattribute__(self, name)
def __setattr__(self, name, value):
_props = object.__getattribute__(self, '_props')
if name in _props:
_object = object.__getattribute__(self, '_object')
return setattr(_object, name, value)
else:
return object.__setattr__(self, name, value)
def __delattr__(self, name):
_props = object.__getattribute__(self, '_props')
if name in _props:
_object = object.__getattribute__(self, '_object')
return delattr(_object, name)
else:
return object.__delattr__(self, name)
class Singleton(type):
_instances = {}
def __call__(cls, *args, **kwargs):
if cls not in cls._instances:
cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs)
else:
cls._instances[cls].__init__(*args, **kwargs)
return cls._instances[cls]
class Target(TransparentProxy, metaclass=Singleton):
def __init__(self, prop):
super().__init__(Prop(prop), ['prop']) | {
"domain": "codereview.stackexchange",
"id": 26663,
"tags": "python, properties"
} |
Where do beta particles go after being emitted from the nucleus? | Question: What high-school taught me:
In beta radiation, beta particles are lone electrons that are emitted from the nucleus at high speeds after a neutron decays into a proton and an electron.
Beta radiation is dangerous when humans are exposed to it because it can create burns on the skin.
And from one of my previous questions on Physics SE I learnt that the electron can resist the nuclear attraction of the nucleus because it has orders of magnitude higher kinetic energy than the pull of the nuclear attraction.
So where does this electron go after it leaves the atom if it resists nuclear attraction from atoms it passes? Does it hit another nucleus out of pure chance, and if so, what happens to the nucleus it hits? That would explain its interaction with human skin, but it wouldn't explain the burns, because it would theoretically just affect one atom in the skin layer; it wouldn't be a large enough impact to cause burns (unless lots and lots of these atoms experienced beta decay). If the kinetic energy had anything to do with it, the electron would cause the affected atom to rebound inwards in the direction of the electron's trajectory due to conservation of momentum, unless it just absorbed the electron and the kinetic energy converted into some other form of energy (maybe thermal, because of the burnt skin?).
Answer: Since a beta particle is a bare unbound electron, it is highly chemically reactive after it sheds enough of its kinetic energy to interact with atoms and molecules instead of just bouncing violently off them as it zooms through the air.
It is those energetic collisions which convey high energy to the atoms and molecules, breaking them up or ionizing them into chemically reactive states which then react with other molecules or atoms in the neighborhood. Those collisions also can induce the target atom to throw off an energetic photon (x-ray or sometimes gamma ray) which then itself proceeds to wreak further havoc.
(Complicated protein molecules are particularly susceptible to this sort of damage, which is why beta radiation is dangerous for living things. Since the outermost skin cells covering your body are not technically alive, they can withstand the damage- but beta emission inside your body is deadly.)
Anyway... the beta particles ionize those atoms and molecules, turning them into extremely reactive free radicals which then undergo chemical reactions with other atoms and molecules in the surrounding air. At the end of the process, a number of new molecules have been created and one of them along the way winds up with the extra electron- either that, or the extra e- gets sorbed onto the surface of some (insulating) solid in the neighborhood and resides there as a very slight excess of negative charge. | {
"domain": "physics.stackexchange",
"id": 92850,
"tags": "electrons, nuclear-physics, material-science, radiation, biophysics"
} |
HTML email template | Question: Based on several different sources, I have compiled the following as my basic HTML email template. Please let me know if I have missed anything important. I am not sure if I am using \n and \r\n correctly.
$semi_rand = uniqid();
$mime_boundary = "==MULTIPART_BOUNDARY_$semi_rand";
$mime_boundary_header = chr(34) . $mime_boundary . chr(34);
$boundary = "nextPart";
$headers = "From: ".$from."\n";
$headers .= "To: ". $to ."\n";
$headers .= "CC: ". $CC ." \r\n";
$headers .= "Reply-To: ".$from."\r\n";
$headers .= "Return-Path: <". $data['from'] .">\r\n";
$headers .= "MIME-Version: 1.0\r\n";
$headers .= "Content-Type: multipart/alternative;\n boundary=" . $mime_boundary_header ;
$headers .= "\n--$boundary\n"; // beginning \n added to separate previous content
$headers .= "Content-type: text/plain; charset=iso-8859-1\r\n";
$headers .= "\n--$boundary\n";
$headers .= "Content-type: text/html; charset=iso-8859-1\r\n";
$headers .= "Content-Transfer-Encoding:base64\r\n";
$body = "
--$mime_boundary
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
". strip_tags($message) ."
--$mime_boundary
Content-Type: text/html; charset=us-ascii
Content-Transfer-Encoding:base64
". chunk_split(base64_encode(
'<html><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />'.
$style.'</head><body>'.$message.'</body></html>' )) ."
--$mime_boundary--";
mail(null,$subject,$body,$headers,"-f".$email);
Questions:
Will switching base64_encode() to quoted_printable_encode() work or will I also need to convert the string to an 8-bit string somehow?
Should I just remove these extra headers?
Could/should I use \r\n\ at every line break, including the ones in the multiline string?
Answer: Some notes:
base64
It's not usual to base64_encode HTML. While the way you've done it will work, you run the risk of being marked as spam as spam filters see attempts at obfuscating emails as shady. Consider Quoted-Printable instead.
$boundary?
This section doesn't make sense:
$headers .= "\n--$boundary\n"; // beginning \n added to separate previous content
$headers .= "Content-type: text/plain; charset=iso-8859-1\r\n";
$headers .= "\n--$boundary\n";
$headers .= "Content-type: text/html; charset=iso-8859-1\r\n";
$headers .= "Content-Transfer-Encoding:base64\r\n";
You are correctly defining and using $mime_boundary_header, these extra headers are unnecessary and will break the email.
New Lines
You've ended 'To' and 'From' with "\n", these should be "\r\n". You're also using a multiline string for $body. This is ok, but make sure your text editor is using CRLF ("\r\n") for new lines.
To
You still need to pass $to into mail() as the email 'To' header and the recipient can technically be different:
mail($to,$subject,$body,$headers,"-f".$email);
Extra Notes
(This is stuff that doesn't really matter)
The way you generate $semi_rand is acceptable, but you might want to consider using uniqid() instead
If your return-path and reply-to values are the same as your from, you don't need to specify them
Edits in Light of New Questions
Base 64 to Quoted Printable
Will switching base64_encode() to quoted_printable_encode() work or will I also need to convert the string to an 8-bit string somehow?
The PHP function quoted_printable_encode() will do that for you - quoted printable is designed to produce 7bit output
Make sure you also update the header to Content-Transfer-Encoding:quoted-printable as well
Removing Unnecessary headers
Should I just remove these extra headers?
Yes, they aren't required for anything
New Lines
Could/Should I use \r\n\ at every line break, including the ones in the multiline string?
You don't have to if you're sure your text editor will use CRLFs, however I prefer to explicitly use "\r\n" for my emails, as you have done for the $headers variable, as it tends to be clearer and reduces the risk of the line endings being changed if you / someone else ever resaves the file with a different editor, etc | {
"domain": "codereview.stackexchange",
"id": 6895,
"tags": "php, html, email"
} |
Simultaneous measurements of Pauli observables and number of copies required | Question: Does simultaneous measurement imply that we can only use $1$ copy of quantum state to measure any set of commuting observables? For example, suppose we have a Bell state $(\lvert 00 \rangle + \lvert 11 \rangle) / \sqrt{2} $, and want to measure this state to $X \otimes X$ and then $Z \otimes Z$ observables. Then, measuring $X \otimes X$ wouldn't change the state so that we can measure $X \otimes X$ and then $Z \otimes Z$ so that we are essentially re-using the state and thus the total number of copy we used is just $1$. Is this a correct understanding of a simultaneous measurement, and does this hold in general when the operators commute?
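The example can be checked numerically with a short NumPy sketch (Kronecker products build X⊗X and Z⊗Z):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
XX = np.kron(X, X)  # X tensor X
ZZ = np.kron(Z, Z)  # Z tensor Z

# Bell state (|00> + |11>) / sqrt(2)
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)

print(np.allclose(XX @ ZZ, ZZ @ XX))  # True: the observables commute
print(np.allclose(XX @ bell, bell))   # True: +1 eigenstate of X⊗X
print(np.allclose(ZZ @ bell, bell))   # True: +1 eigenstate of Z⊗Z
```

Because the Bell state is a simultaneous eigenstate, measuring X⊗X leaves it unchanged, and the same single copy can then be measured in Z⊗Z.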
Answer: Yes, that's actually precisely the reason why simultaneous measurement is so useful, and any mutually commuting set of observables is simultaneously measurable. | {
"domain": "quantumcomputing.stackexchange",
"id": 4355,
"tags": "quantum-state, measurement"
} |
Ploting eigenvectors | Question: I've generated two clouds of 3d points from multivariate_normal
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 -- registers the 3d projection

data = np.random.multivariate_normal([2,2,2],[[1,0,0],[0,5,0],[0,0,10]],
size=500)
data = np.vstack((data, np.random.multivariate_normal([-2,-2,-2], [[1,0,0],[0,5,0],[0,0,10]], size=500)))
data = data - data.mean(axis=0)
And try to do PCA like this
covmat = np.cov(data.T)
v, W = np.linalg.eig(covmat)
And draw:
def get_vec(eig_v, eig_vec):
t = np.linspace(0, eig_v)
return np.array([np.array(v * eig_vec) for v in t])
def ang(v1, v2):
return np.rad2deg(np.arccos(np.dot(v1,v2)/np.linalg.norm(v1)/np.linalg.norm(v2)))
l1 = get_vec(v[0], W[:,0])
l2 = get_vec(v[1], W[:,1])
l3 = get_vec(v[2], W[:,2])
x = data[:,0]
y = data[:,1]
z = data[:,2]
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot(l1[:,0],l1[:,1],l1[:,2], c='r')
ax.plot(l2[:,0],l2[:,1],l2[:,2], c='b')
ax.plot(l3[:,0],l3[:,1],l3[:,2], c='y')
ax.scatter(x,y,z,c='g')
plt.show()
This is what I get:
It's clearly visible that axes are not orthogonal. I've checked it and they seem to be orthogonal with regard to numbers:
print(ang(W[:,0], W[:,1]))
print(ang(W[:,0], W[:,2]))
print(ang(W[:,1], W[:,2]))
90.00000000000003
89.99999999999999
90.0
Could it be that such a tiny error makes that much visual difference?
Answer: The PCA projections do not look orthogonal because your figure axes are not equal.
Set all axes equal with something like this:
ax.axis('equal')
or
ax.set_xlim(-5, 5)
ax.set_ylim(-5, 5)
ax.set_zlim(-5, 5)
ax.set_aspect('equal') | {
"domain": "datascience.stackexchange",
"id": 4399,
"tags": "pca, matplotlib"
} |
RGBD SLAM GUI not displaying any video feed | Question:
Hi, I am using ROS Diamondback on Ubuntu 10.04 running on VMware Player. I have also connected a Kinect camera via USB. I think I have installed all required drivers and build all the necessary files without failures.
However, when I execute this command:
roslaunch rgbdslam kinect+rgbdslam.launch
The RGBDSLAM GUI opens up, but it doesn't display any video. Two windows in the bottom say "Waiting for monochrome image..." and "Waiting for depth image..."
When I press space and stop, I see that 0 bytes of data has been recorded. The same thing happens when I press enter to get a single frame.
I am sort of stuck at a dead end, and really have no clue where things are going wrong. Any help would be appreciated.
Originally posted by periphery on ROS Answers with karma: 21 on 2012-06-04
Post score: 0
Answer:
It seems there are problems with vmware, see this question and its answers
Originally posted by Felix Endres with karma: 6468 on 2012-06-06
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 9658,
"tags": "slam, navigation, kinect"
} |
Representing $su(2)$ Lie algebra on a torus | Question: I've recently taken up the study of QFT (as a post retirement hobby), based on texts by David Tong and Anthony Zee.
My question is based on the Lie Algebra of the $SU(2)$ group, and how this may be represented on a manifold such as the 3 (real) dimensional torus.
In particular, I would like to know how I should understand in what way the spin operator commutation operation
$[S_x, S_y] = S_z$ is to be understood when performed on a torus.
I'm not necessarily looking for a direct answer, as a series of images may be too time consuming to produce. I have read Penrose's "Road to Reality" which I thought might give me the best geometrical picture, but no direct answer exists there.
As a self study person, I apologize if I am mistaken in my intuition as to what the torus actually represents.
In reply to the comment below:
I mean "representing" in a general manner, rather than something formal like matrix representation
I mean is there a duality type setup? Can we use the shape of the torus to find out more about the properties of spin half particles?
Another way of asking the question: is there any physical intuition for choosing a torus, and if so how does the shape of the torus reflect spin half particles or is it just a formal mathematical mapping?
My thoughts are that the torus is just another way of looking at the problem, and that there should be equivalent operations on it, but I might be reading too much into it,
Edit to include comments, as this question is based on an incorrect assumption of mine, but someone might find it useful:
Just to make sure: you want to represent the Lie algebra as a manifold, not the group? Because SU(2) the group is the manifold S3, but probably that's not what you are after.
Also, every Lie group has a subgroup which is a torus and it's generated by the exponential of the Cartan subalgebra. In the case of SU(2), which is of rank 1, the torus is just a circle.
Answer: Consider the realization, not representation, of the su(2) Lie algebra, in the spherical basis,
$$
[S_0,S_{\pm}]=\pm S_{\pm} \qquad [S_+,S_-]=2S_0 ~~ ,
$$
in terms of the two angles $\theta$ and $\phi$ going around the 2-torus in the respective "directions",
$$
S_+= \phi \partial_\theta, \qquad S_-= \theta \partial_\phi, \qquad S_0= \tfrac{1}{2}\left(\phi \partial_\phi - \theta \partial_\theta\right) ~.
$$
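The check can also be automated with SymPy (a sketch; it uses the conventional normalization $S_0 = \tfrac{1}{2}(\phi\,\partial_\phi - \theta\,\partial_\theta)$, with which all three relations close):

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')
f = sp.Function('f')(theta, phi)

# The differential-operator realization on the two torus angles
Sp = lambda g: phi * sp.diff(g, theta)
Sm = lambda g: theta * sp.diff(g, phi)
S0 = lambda g: (phi * sp.diff(g, phi) - theta * sp.diff(g, theta)) / 2

# Commutator of two operators acting on a test function
comm = lambda A, B, g: A(B(g)) - B(A(g))

print(sp.simplify(comm(S0, Sp, f) - Sp(f)))      # 0, i.e. [S0, S+] = +S+
print(sp.simplify(comm(S0, Sm, f) + Sm(f)))      # 0, i.e. [S0, S-] = -S-
print(sp.simplify(comm(Sp, Sm, f) - 2 * S0(f)))  # 0, i.e. [S+, S-] = 2 S0
```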
You may easily check they satisfy the Lie algebra. | {
"domain": "physics.stackexchange",
"id": 64504,
"tags": "angular-momentum, group-theory, representation-theory, lie-algebra"
} |
Where does the energy of a lightning strike go? | Question: Lightning contains a lot of energy, so where does this energy go after lightning has hit the ground?
Does it travel all the way to the core? What happens after that?
Answer: The majority of the energy is dissipated in the travel through the air from the cloud to the ground. The energy goes into heating the air and generating the shockwave that we hear as thunder. I can't give you a single definitive reference for this, but Googling "energy dissipation lightning" will find lots of relevant articles.
You can understand why this is because the power dissipated by a current $I$ travelling through a resistance $R$ is given by $W = I^2R$. In a lightning strike the current is constant, because the charge flowing in one end has to flow out the other end, so the power dissipated is proportional to the resistance. The resistance of air is a lot higher than the resistance of the ground/tree/person or whatever the lightning hits, so the majority of the energy dissipation is in the air.
The electrons flowing from the cloud through the lightning bolt end up in the ground, but with an energy only slightly greater than ambient. They will presumably flow into the surrounding area until the potential difference around the point of strike falls to effectively zero. This is likely to be within a few metres, so they wouldn't get anywhere near the Earth's core. | {
"domain": "physics.stackexchange",
"id": 6834,
"tags": "energy, geophysics, lightning"
} |
Conservation of momentum of ideal gas in pipe | Question: I was reading this question Pressure drop in a pipe due to cooling and have a follow up question. Please note that the question is about an ideal frictionless pipe.
In the question, it is stated
$$ \rho v = C_1$$
$C_1$ being a constant due to conservation of mass, which makes sense to me, that for every cross section there must be the same mass per second, but then
$$ p+\rho v^2=C_2$$
is stated with the reason being "conservation of momentum". I can't find this equation anywhere else and the reasoning behind it also doesn't make sense to me. If we combine it with https://en.wikipedia.org/wiki/Bernoulli%27s_principle
$$v^2/2 + gz +p/\rho = C_3$$
set $g$ to 0, we can get
$$C_2 - \rho v^2 = \rho C_3 - \rho v^2/2 $$
and
$$C_2 = \rho C_3 + \rho v^2/2 $$
and using the first equation (conservation of mass) we can get
$$\rho C_2 = \rho^2 C_3 + C_1^2/2 $$
which implies that $\rho$ is constant, which seems completely wrong to me.
Momentum is also discussed in the comments and everyone seems fine with using temperature/pressure when talking about conservation of momentum. What doesn't make sense to me is how do temperature/pressure contribute when discussing conservation of momentum. Conservation of momentum is about the total momentum, and raising/lowering temperature/pressure, don't affect total momentum. If you have a gas in a stationary box the total momentum is zero, and heating or cooling it won't change that.
Where does this equation and reasoning come from?
Answer: As your system is cooling, its energy is not conserved and Bernoulli does not apply. However from 1-d Euler
$$
\rho \left(\frac{\partial v}{\partial t}+ v \frac{\partial v}{\partial x}\right)= -\frac{\partial P}{\partial x}
$$
and 1-d mass conservation
$$
\frac {\partial \rho} {\partial t}+ \frac{\partial \rho v}{\partial x}=0
$$
we find the momentum conservation law
$$
\frac{\partial \rho v}{\partial t}+ \frac{\partial}{\partial x}\left(\rho v^2+P\right)=0,
$$
so in steady flow we have $\rho v^2+P=const$. This is true for strictly 1-d flow even if heat energy is being added or lost. It appears at first to be in conflict with Bernoulli (which holds if the flow is isentropic), but in Bernoulli the speed changes because the pipe changes area, so the Bernoulli flow is not strictly 1-d.
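The cancellation is mechanical and can be verified symbolically; a SymPy sketch (substituting $v_t$ from Euler and $\rho_t$ from mass conservation into the momentum expression):

```python
import sympy as sp

x, t = sp.symbols('x t')
rho = sp.Function('rho')(x, t)
v = sp.Function('v')(x, t)
P = sp.Function('P')(x, t)

# 1-d Euler gives v_t, 1-d mass conservation gives rho_t
v_t = -sp.diff(P, x) / rho - v * sp.diff(v, x)
rho_t = -sp.diff(rho * v, x)

# d/dt (rho v) + d/dx (rho v^2 + P) should vanish identically
momentum = sp.diff(rho * v, t) + sp.diff(rho * v**2 + P, x)
momentum = momentum.subs({sp.Derivative(v, t): v_t,
                          sp.Derivative(rho, t): rho_t})
print(sp.simplify(momentum))  # 0
```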
In a pipe with a slowly varying area the momentum law becomes
$$
A\frac{\partial \rho v}{\partial t}+ \frac{\partial\rho v^2 A}{\partial x}= -A \frac{\partial P }{\partial x}.
$$ | {
"domain": "physics.stackexchange",
"id": 85314,
"tags": "momentum, conservation-laws, ideal-gas, bernoulli-equation"
} |
Search for topics with specific message | Question:
Hi,
My question is the following:
Suppose I have a list of topics and each topic publishes a message of type messageType.
Is there a command that allows me to output all the messageTypes that are currently being published?
I know "rostopic info -topicname" will give me information as well as the message a specific topic outputs. Is there a wildcard like rostopic info * that lets me see all the messages that are currently being published?
Thank you in advance,
Panos.
Originally posted by panos on ROS Answers with karma: 43 on 2015-05-12
Post score: 1
Answer:
I don't know of a built-in command that does exactly what you want, but you could always combine rostopic outputs using standard command line tools (piping, xargs, etc.).
For example, to get the detailed information about every topic, you could run:
rostopic list |xargs -n 1 rostopic info
Or you could combine with grep:
rostopic list |grep robot |xargs -n 1 rostopic info
Originally posted by jarvisschultz with karma: 9031 on 2015-05-12
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 21670,
"tags": "ros, publisher"
} |
How to efficiently calculate points of intersection of a straight line and a contour? | Question: Ex.
I have an image, let's say 100x100 pixels, with some shape on it, and a set of straight lines passing through an origin located at some point of the image, for example at point (50, 50). The number of lines is determined by an angle step. If step = 15 degrees, then we have 180 / 15 lines. I want to determine the points of intersection in an efficient way, but all that I have come up with so far is printing a line on a matrix and checking for non-zero elements that overlap non-zero elements of the shape (and checking for diagonal connectivity).
Answer: Calculate the straight lines angles with respect to the x-axis
Calculate the angles of each and every point on the contour
Search for points on the contour with same angles as the lines with respect to the origin.
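A NumPy sketch of this angle-matching search (the function name and tolerance are illustrative; angles are folded into [0, 180) so each line through the origin counts once):

```python
import numpy as np

def line_contour_crossings(contour, origin, step_deg=15, tol_deg=0.5):
    """Return contour points whose polar angle about `origin` matches one of
    the line angles 0, step_deg, 2*step_deg, ... (lines, not rays)."""
    pts = np.asarray(contour, dtype=float) - np.asarray(origin, dtype=float)
    angles = np.degrees(np.arctan2(pts[:, 1], pts[:, 0])) % 180.0
    line_angles = np.arange(0.0, 180.0, step_deg)
    diff = np.abs(angles[:, None] - line_angles[None, :])
    diff = np.minimum(diff, 180.0 - diff)  # wrap-around angular distance
    return np.asarray(contour)[(diff < tol_deg).any(axis=1)]

# Toy example: a circle contour sampled every degree around origin (50, 50)
ang = np.radians(np.arange(360))
circle = np.c_[50 + 10 * np.cos(ang), 50 + 10 * np.sin(ang)]
print(len(line_contour_crossings(circle, (50, 50))))  # 24: two hits per line
```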
These are the points of line-contour crossing. | {
"domain": "dsp.stackexchange",
"id": 3190,
"tags": "image-processing, math, shape-analysis"
} |
Setting output file name for the ROS video recorder | Question:
I am using the video recorder to save a video stream to a file using ros, but I am having trouble setting the filename parameter.
This is the command I am using
rosrun image_view video_recorder filename:="test" image:=/camera/color/image_raw
These are the messages in the terminal
[ INFO] [1548243269.519044136]: Waiting for topic /camera/color/image_raw...
[ INFO] [1548243269.722238288]: Starting to record MJPG video at [640 x 480]@15fps. Press Ctrl+C to stop recording.
[ INFO] [1548243271.470604124]: Recording frame 52
Video saved as output.avi
The video is saved successfully, but it names the file "output" instead of "test" like I specified.
How do I set the filename parameter?
Originally posted by Drkstr on ROS Answers with karma: 25 on 2019-01-23
Post score: 0
Answer:
try
rosrun image_view video_recorder _filename:="/home/(usr name)/test.avi" image:=/camera/color/image_raw
Originally posted by Hamid Didari with karma: 1769 on 2019-01-23
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 32324,
"tags": "ros-melodic"
} |
Find first or last argument lexicographically or temporally | Question: I often find I need to find the earliest or latest file matching a given pattern, and sometimes to choose the lowest or highest string from several possibilities.
The following four functions provide that functionality, and are robust enough to work correctly with all possible inputs. They do require the GNU implementations of sort and head that work with NUL-separated input.
#!/bin/sh
set -eu
first_sorted() {
${1+true} return 1
printf '%s\0' "$@" | sort -z | head -z -n1 | tr '\0' '\n'
}
last_sorted() {
${1+true} return 1
printf '%s\0' "$@" | sort -rz | head -z -n1 | tr '\0' '\n'
}
first_mtime() {
test -e "${1-}" || return 1
first=$1; shift
for i
do
test -e "$i" || return 1
test "$first" -ot "$i" || first=$i
done
printf '%s\n' "$first"
}
last_mtime() {
test -e "${1-}" || return 1
last=$1; shift
for i
do
test -e "$i" || return 1
test "$last" -nt "$i" || last=$i
done
printf '%s\n' "$last"
}
Unit tests:
# Unit tests of string ops
LC_COLLATE=C
export LC_COLLATE
nl='
'
first_sorted && exit 1
test "twenty${nl}seven" = "$(first_sorted "twenty${nl}seven")"
test "twenty${nl}five" = "$(first_sorted "twenty${nl}seven" "twenty${nl}five")"
test "" = "$(first_sorted '' "twenty${nl}five")"
last_sorted && exit 1
test "twenty${nl}seven" = "$(last_sorted "twenty${nl}seven")"
test "twenty${nl}seven" = "$(last_sorted "twenty${nl}seven" "twenty${nl}five")"
test "twenty${nl}five" = "$(last_sorted '' "twenty${nl}five")"
# Unit tests of file ops
d=$(mktemp -d)
trap 'rm -r "$d"' EXIT
cd "$d"
touch -t 12310900 "Hogmanay${nl}morning"
touch -t 12311200 "Hogmanay${nl}midday"
touch -t 12311500 "Hogmanay${nl}afternoon"
touch -t 12312200 "Hogmanay${nl}evening"
first_mtime && return 1
test "Hogmanay${nl}morning" = "$(first_mtime H*)"
last_mtime && return 1
test "Hogmanay${nl}evening" = "$(last_mtime H*)"
Answer: Avoiding duplication
The pairs of functions have duplicated logic in them.
I would extract the comparator to avoid it.
Avoiding clever code
I find this a bit too clever:
${1+true} return 1
This alternative will have the same effect and I think looks more familiar and thereby easier to read:
test $# -gt 0 || return 1
Similarly, I find this one a bit clever too, because it combines two things, validating that there is a parameter and that it's a file:
test -e "${1-}" || return 1
I would spell out the two conditions, to separate the two intents:
test $# -gt 0 || return 1
test -e "$1" || return 1
A note on time complexity
I would be remiss to not point out that sorting (a log-linear operation) to find the first item feels a waste, when the task can be done with a linear operation.
If these functions will be used with at most hundreds of files, and not in a loop, then of course it doesn't matter much.
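To illustrate, a stripped-down, string-only linear-scan version (a sketch assuming a shell like bash whose test/[ builtin supports the \< string comparison, which POSIX does not guarantee):

```shell
# Linear-scan first_sorted: one pass, no sort/head pipeline
first_sorted() {
    test $# -gt 0 || return 1
    first=$1; shift
    for item
    do
        [ "$first" \< "$item" ] || first=$item
    done
    printf '%s\n' "$first"
}

first_sorted banana apple cherry   # prints: apple
```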
Using descriptive name instead of i
I only use i in counting loops.
In the posted code path would be a natural name.
Possible bug in the tests
If the posted test code runs in a function, then it's ok.
If not, then the return statement here is invalid:
first_mtime && return 1
I suspect you meant to write exit 1 instead of return 1 consistently everywhere in the test code.
Missing tests
The mtime functions fail when a parameter is not a valid file, but this case is not covered.
Simple remedy:
first_mtime 'nonexistent' && exit 1
last_mtime 'nonexistent' && exit 1
Alternative implementation
The main benefits of this alternative:
Reduced duplicated logic: one common function does the main work
Pure shell implementation (no sort, head, tr), linear logic
Note that the functions starting with underline _ are meant as private.
That's why I didn't add the usual recommended parameter validation.
They assume they are only called correctly as intended.
_first_by_comparator() {
validator=$1; shift
comparator=$1; shift
test $# -gt 0 || return 1
first=$1; shift
"$validator" "$first" || return 1
for item
do
"$validator" "$item" || return 1
test "$first" "$comparator" "$item" || first=$item
done
echo "$first"
}
first_sorted() {
_first_by_comparator true '<' "$@"
}
last_sorted() {
_first_by_comparator true '>' "$@"
}
_file_exists() {
test -e "$1"
}
first_mtime() {
_first_by_comparator _file_exists '-ot' "$@"
}
last_mtime() {
_first_by_comparator _file_exists '-nt' "$@"
} | {
"domain": "codereview.stackexchange",
"id": 44254,
"tags": "sorting, shell, posix, sh"
} |
Neural network returns similar output | Question: I was following Daniel Shiffman's tutorials on how to write your own neural network from scratch. I specifically looked into his videos and the code he provided in here. I rewrote his code in Python, however, 3 out of 4 of my outputs are the same. The neural network has two input nodes, one hidden layer with two nodes and one output node. Can anyone help me to find my mistake? Here is my full code.
import random
import numpy as np
nn = NeuralNetwork(2,2,1)
inputs = np.array([[0, 0], [1, 0], [0, 1], [1, 1]])
targets = np.array([[0], [1], [1], [0]])
zipped = zip(inputs, targets)
list_zipped = list(zipped)
for _ in range(9000):
x, y = random.choice(list_zipped)
nn.train(x, y)
output = [nn.feedforward(i) for i in inputs]
for i in output:
print("Output ", i)
#Output [ 0.1229546] when it should be around 0
#Output [ 0.6519492] ~1
#Output [ 0.65180228] ~1
#Output [ 0.66269853] ~0
EDIT_1: I tried debugging my code by choosing all weights and bias' values to 0.5. I did this in both my code and Daniel's. This obviously ended up showing me all outputs with the same value.
After that I increased my weights and bias' values variety from [0 , 1) to [-1, 1). By running this a few times, I would sometimes get the correct output:
[ 0.93749991] # should be ~1
[ 0.93314793] # ~1
[ 0.07001175] # ~0
[ 0.06576194] # ~0
If I ran nn.train() 100 000 times, I get the correct output 2/3 times.
Is this the issue of gradient descent, where it converges to the local minima?
Answer: Local minima.
You have the exact same issue of this question. If you randomize your initial weights, you'll see sometimes you get the correct results, and others you won't. It's because when the weights are initialized with a certain range of values, they will converge to a local minima which you cannot escape with a low learning rate.
A simple solution is to increase the size of your hidden layer, which will make the network more robust to such issues.
When you have only 2 dimensions, a local minimum exists. When you have more dimensions, this minimum gets harder and harder to reach, as its likelihood decreases. Intuitively, you have many more dimensions through which you can improve than if you only had 2.
The problem still exists; even with 1000 neurons you could find a specific set of weights that is a local minimum. It just becomes so much less likely.
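To make this concrete, here is a minimal from-scratch XOR network in NumPy (a sketch, not the OP's linked NeuralNetwork class — train_xor and success_rate are names introduced here for illustration). It uses the [-1, 1) initialization from the question's EDIT_1 and lets you compare how often training succeeds for different hidden-layer sizes:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_xor(hidden, seed, epochs=20000, lr=0.5):
    """Train a 2-hidden-1 sigmoid network on XOR with batch gradient descent."""
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    # initial weights drawn from [-1, 1), as in the question's EDIT_1
    W1 = rng.uniform(-1, 1, (2, hidden)); b1 = rng.uniform(-1, 1, hidden)
    W2 = rng.uniform(-1, 1, (hidden, 1)); b2 = rng.uniform(-1, 1, 1)
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)                 # forward pass
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)      # output-layer delta
        d_h = (d_out @ W2.T) * h * (1 - h)       # hidden-layer delta
        W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * (X.T @ d_h);  b1 -= lr * d_h.sum(axis=0)
    return out.ravel()

def success_rate(hidden, seeds=20):
    """Fraction of random initializations whose outputs land on the right side of 0.5."""
    target = np.array([0, 1, 1, 0])
    hits = sum(np.all((train_xor(hidden, s) > 0.5) == target) for s in range(seeds))
    return hits / seeds

print(np.round(train_xor(hidden=8, seed=0), 3))  # usually close to [0, 1, 1, 0]
```

With hidden=2, a noticeable fraction of seeds stalls with outputs clustered together, as in the question; with hidden=8 the failure rate drops sharply, which is exactly the answer's point about extra dimensions.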
"domain": "ai.stackexchange",
"id": 502,
"tags": "neural-networks, python"
} |
Reading, writing, and copying files | Question: Originally, I created a program that echoes files or strings to output. This is a modification of that same program and goes a step further. It does work (in both Linux and Windows), but I can NOT guarantee that it is bug free or that it is even fully compatible with both OS's.
This program writes, reads, and copies files. When writing to a file, it appends the line
buffered input to the file. This program works best with valid text-based files. I can NOT guarantee any behavior, since the program does not attempt to check the file's actual type; merely the extension name.
The main differences are:
It solicits the user for input instead of operating from the CLI
It operates only on (or with) files
It requires the use of "keywords" to operate
I'm looking for helpful and useful critiques, mainly in programming style, organization, and if it's easily understood (or if there was difficulty in understanding) source code. If any critiques are made, please add in how I can improve upon my "misdeeds".
(Source files are linked via Pastebin)
nanproto.h
/* *********************************
set the preprocessor directives
************************************/
/*
SET THE SWITCHES
----------------
To avoid conflicts while editing,
and recompiling, I used a
"switch mechanism" to turn them
ON and OFF.
*/
#ifndef ON
# define ON 1
#endif
#ifndef OFF
# define OFF 0
#endif
/*
Set the ERROR macro
*/
#ifndef ERROR
# define ERROR -1
#endif
/*
maximum string length
---------------------
11 //tiny
21 //small
41 //medium
81 //large
101 //extra
---------------------
all sizes are offset by one
to include the null character
*/
#ifndef SLEN
# define SLEN 81
#endif
/*
Set the boolean values for the
variables true and false...
*/
#ifndef BOOL
# define BOOL ON
# if BOOL
# define true 1
# define false 0
# endif
#endif
/*
Define the menu options.
-----------------------------------
These options help delegate the
menu's I/O by using the find_key()
known_file_extension(), and menu()
functions defined in the nanite.c,
nanstring.c, and nanfile.c files.
-----------------------------------
More keywords can be added by simply
changing the KEYLEN macro value.
The strings can be found in the
nanstring.c file.
*/
#ifndef MENU_OPTIONS
# define MENU_OPTIONS ON
# if MENU_OPTIONS == ON
# define KEYLEN 6
enum select {
copy, help, line,
quit, read, write
};
const char * keywords[KEYLEN];
const char * keyletters[KEYLEN];
# endif
#endif
/*
maximum buffer size
-------------------
const long buffer_size = 512; //tiny buffer
const long buffer_size = 1024; //small buffer
const long buffer_size = 2048; //medium buffer
const long buffer_size = 4096; //large buffer
const long buffer_size = 8192; //extra large buffer
*/
#ifndef BUFSIZE
# define BUFSIZE 1024
#endif
/*
Define the file options.
------------------------
similar to menu options, these settings
are used to decide whether the given
file extension type is a "valid" one.
considering there are more efficient
methods and this one is a trite and tried
method, it's used for educational purposes.
-----------------------------------
More extensions can be added by
simply changing the EXTLEN macro
value. The strings can be found
in the nanstring.c file.
Keep in mind that the order of the
values and string elements must
be the same.
*/
#ifndef FILE_OPTIONS
# define FILE_OPTIONS ON
# if FILE_OPTIONS == ON
# define EXTLEN 9
enum file { txt, asc, c, h, csv, html, log, xhtml, xml };
const char * extension[EXTLEN];
# endif
#endif
/*
Prototypes were left optional
but make a good reference and
allows the "black-box" concept
to stay in play.
Once a function has been tested,
and works, it can be added to the
prototype list.
*/
#ifndef PROTOTYPES
# define PROTOTYPES ON
# if PROTOTYPES == ON
/* *********************************
MENU BASED FUNCTIONS
--------------------
Prototypes for the nanstring.c file
--------------------
these functions operate on strings
************************************/
void eatline(void);
void remove_newline(char *);
void pause_buffer(void);
void string_to_lower(char *);
int find_key(char *);
int menu(char *);
void display_help(void);
void prompt(void);
/* *********************************
FILE BASED FUNCTIONS
--------------------
Prototypes for the nanfile.c file
--------------------
these functions operate on files
************************************/
int known_file_extension(const char *);
int read_file(const char *);
int read_line(const char *, long long);
int write_file(const char *);
int copy_file(const char *, const char *);
# endif
#endif
nanite.c
/*
*************************************************************
Written by: JargonJunkie
*************************************************************
This program reads, writes, and copies files.
*************************************************************
This program was inspired by Problem 13-07 found in the
C Primer Plus book; Chp 13 Programming Exercises.
*************************************************************
*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ctype.h>
#include "./nanproto.h"
int main(void)
{
long long number;
char input[SLEN], source[SLEN], target[SLEN];
enum select option;
prompt();
while (quit != (option = menu(input)))
{
if (ERROR == option)
{
fprintf(stderr, "Invalid Command: %s\n", input);
fputs("Starting over...\n\n", stderr);
continue;
}
switch (option)
{
case copy:
printf("(name source file)?> ");
if (NULL == fgets(source, SLEN, stdin)) {
fprintf(stderr, "Could not read from standard input.\n\n");
break;
}
remove_newline(source);
printf("(name target file)?> ");
if (NULL == fgets(target, SLEN, stdin)) {
fprintf(stderr, "Could not read from standard input.\n\n");
break;
}
remove_newline(target);
putchar('\n');
if (ERROR == copy_file(source, target))
fprintf(stderr, "[Error!]: Failed to copy %s to the file %s\n", source, target);
break;
case help:
display_help();
break;
case line:
printf("(name target file)?> ");
if (NULL == fgets(source, SLEN, stdin)) {
fprintf(stderr, "Could not read from standard input.\n\n");
break;
}
remove_newline(source);
printf("(enter line number)?> ");
while (true != scanf("%lld", &number) || (number < 1))
{
putchar('\n');
puts("Oops! Only Positive Integers Please...");
printf("(enter line number)?> ");
eatline();
}
eatline();
putchar('\n');
if (ERROR == read_line(source, number))
fprintf(stderr, "[Error!]: Failed to open the file %s\n\n", source);
break;
case read:
printf("(name target file)?> ");
if (NULL == fgets(source, SLEN, stdin)) {
fprintf(stderr, "Could not read from standard input.\n\n");
break;
}
remove_newline(source);
putchar('\n');
if (ERROR == read_file(source))
fprintf(stderr, "[Error!]: Failed to read the file %s\n\n", source);
break;
case write:
printf("(name target file)?> ");
if (NULL == fgets(source, SLEN, stdin)) {
fprintf(stderr, "Could not read from standard input.\n\n");
break;
}
remove_newline(source);
putchar('\n');
if (ERROR == write_file(source))
fprintf(stderr, "[Error!]: Failed to write to the file %s\n", source);
break;
default:
fputs("Oops! Something went horribly wrong!\n", stderr);
fprintf(stderr, "[Error] in main() -> while menu() -> switch (option)");
exit(EXIT_FAILURE);
}
}
putchar('\n');
if (option == quit)
{
puts("Exit Success!");
return 0;
}
else
{
puts("Exit Failure!");
return 1;
}
}
nanstring.c
/* *********************************
NANITE STRINGs
---------------------------
This file must be linked with the
nanite.c and nanfile.c files.
---------------------------
This file defines most of the executable
code predefined by the "nanproto.h" file.
These functions are used by the "nanfunct.c"
and "nanite.c" source files.
************************************/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ctype.h>
#include "./nanproto.h"
/*
Initialize KEYWORDS, KEYLETTERS, and EXTENSION
----------------------------------------------
These keywords are used to compare against
input supplied by the user. If the keyword
is found to be valid, the function returns
a value accordingly; else the function
returns some type of error based value.
*/
const char * keywords[KEYLEN] = {
"copy", "help", "line",
"quit", "read", "write"
};
const char * keyletters[KEYLEN] = {
"c", "h", "l",
"q", "r", "w"
};
const char * extension[EXTLEN] = {
".txt", ".asc", ".c", ".h",
".csv", ".html", ".log",
".xhtml", ".xml"
};
/* *********************************
MENU BASED FUNCTIONS
--------------------
these functions help create the
operability of the menu() interface
used in the nanite source file.
these functions operate on strings
************************************/
/*
The eatline() Function
---------------------------------------
Dispose of input up to the newline
character...
*/
void eatline(void)
{
while (getchar() != '\n')
continue;
}
/*
The remove_newline() Function
---------------------------------------
Removes the last occurrence of a newline
character.
If no newline is found, nothing is done.
If a newline is found, it is replaced by
the null character.
*/
void remove_newline(char * string)
{
char * newline;
size_t position, cpy_length, str_length = strlen(string);
if ( NULL != (newline = strrchr(string, '\n')) )
{
cpy_length = strlen(newline);
position = str_length - cpy_length;
string[position] = '\0';
}
}
/*
The pause_buffer Function
---------------------------------------
Causes the output buffer to stop until
the user enters a newline.
*/
void pause_buffer(void)
{
int ch;
putchar('\n');
puts("[enter] to continue...");
ch = getchar();
if (!isspace(ch))
while (getchar() != '\n') continue;
}
/*
The string_to_lower() function
---------------------------------------
Transforms the string to lower case
*/
void string_to_lower(char * string)
{
for (int index = 0; string[index]; index++)
if (isalpha(string[index]))
string[index] = tolower(string[index]);
}
/*
The find_key() Function
---------------------------------------
Compare the given keyword against the
const keywords. IF the keyword is valid,
return the elements location. ELSE return
the ERROR macro.
*/
int find_key(char * source)
{
int length, letters = 3;
enum select option;
_Bool key_is_found = false;
string_to_lower(source);
length = strlen(source);
//compare the key letters
if (length <= letters)
{
for (option = copy; option <= write; option++)
{
if (0 == strcmp(source, keyletters[option]))
{
key_is_found = true;
break;
}
}
}
//compare the key words
if (length > letters)
{
for (option = copy; option <= write; option++)
{
if (0 == strcmp(source, keywords[option]))
{
key_is_found = true;
break;
}
}
}
//return the keywords position
if (key_is_found) return option;
//else keyword was not found
else return ERROR;
}
/*
The menu() Function
---------------------------------------
Takes a string from input and returns
a value to main() while loop. When
a valid quit value is given, the loop
is broken.
*/
int menu(char * string)
{
int option;
printf("(command)?> ");
if (NULL == fgets(string, SLEN, stdin))
{
fprintf(stderr, "[menu()] -> Failed to successfully store string!");
exit(EXIT_FAILURE);
}
else
remove_newline(string);
option = find_key(string);
return option;
}
/*
The display_help() Function
---------------------------------------
Prints the help menu to the standard display
*/
void display_help(void)
{
printf("This program writes files, reads files, and copies files.\n"
"When writing to a file, it appends the line buffered input.\n\n"
"This program works best with valid text based files.\n"
"Since the program does not check to see if the file is\n"
"actually a text based file, it may attempt to print\n"
"garbage to the display.\n\n");
printf("%-23s%s\n\n", "h or help", "Prints this help menu to the display.");
printf("%-23s%s\n", "w or write", "Takes input from the display and writes to [target] file.");
printf("%-23s%s\n", " ", "When writing a file, input is taken from the display");
printf("%-23s%s\n", " ", "and written to the [target] file. Output is not echoed.");
printf("%-23s%s\n\n", " ", "To exit a write session, provide the EOF character.");
printf("%-23s%s\n", "r or read", "Prints the [target] file to the display.");
printf("%-23s%s\n", "l or line", "Prints the given line from the [source] file to the display.");
printf("%-23s%s\n", "c or copy", "Copies the [source] files contents to the [target] file.");
printf("%-23s%s\n\n", "q to quit", "Exit this application.");
puts("All files can be given a [target], or [source], name.");
puts("EOF for Windows is Ctrl+Z");
puts("EOF for Unix is Ctrl+D\n");
}
/*
The prompt() Function
---------------------------------------
Introductory prompt for when the program is
initially run... is used mainly to clean up
the code in main() since this is only ever
used once through-out the program.
*/
void prompt(void)
{
printf("*************************************************************\n"
"The Read Line Program\n"
"*************************************************************\n"
"The (command)?> display takes only one argument. All subsequent\n"
"arguments follow suit. You may use a letter, or a word, to be\n"
"given as a command. All other options are void. For Example:\n"
"*************************************************************\n"
"h[enter] or help[enter] for the Help Display.\n"
"*************************************************************\n");
}
nanfile.c
/* *********************************
NANITE FILEs
---------------------------
This file must be linked with the
nanite.c and nanstring.c files.
---------------------------
This file defines most of the executable
code predefined by the "nanproto.h" file.
These functions are used by the "nanfunct.c"
and "nanite.c" source files.
************************************/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ctype.h>
#include "./nanproto.h"
/* *********************************
FILE BASED FUNCTIONS
--------------------
these functions operate on files
************************************/
/*
The known_file_extension() Function
---------------------------------------
IF a known file extension is found,
return true on success. ELSE return ERROR
on failure...
*/
int known_file_extension(const char * filename)
{
enum file type;
_Bool key_is_found = false;
//find the last occurring period
char * file_extension_type = strrchr(filename, '.');
string_to_lower(file_extension_type);
if (NULL == file_extension_type)
return ERROR;
//find the file type
for (type = txt; type <= xml; type++)
{
if (0 == strcmp(file_extension_type, extension[type]))
{
key_is_found = true;
break;
}
}
//return the keywords position
if (key_is_found) return true;
//else keyword was not found
else return ERROR;
}
/*
The read_file() Function
---------------------------------------
Prints the given file to the display.
An ERROR is returned IF anything
goes awry. ELSE "true" is returned.
The BUFSIZE macro can be adjusted from
within the "nanproto.h" file if a
smaller/larger buffer is desired.
*/
int read_file(const char * filename)
{
char ch;
int value;
long long chars, line = 1;
long long max_lines = 0;
FILE * file;
if (ERROR == known_file_extension(filename))
{
fprintf(stderr, "[Invalid Extension Type]: %s\n", filename);
fprintf(stderr, "You're only allowed to use text files.\n\n");
return ERROR;
}
if ( NULL == (file = fopen(filename, "r")))
{
fclose(file);
return ERROR;
}
while (EOF != (ch = fgetc(file)))
if (ch == '\n') max_lines++;
rewind(file);
printf("[%ld]: ", line);
for (line = 2, chars = 0; line <= max_lines; chars++)
{
if (EOF == (ch = fgetc(file))) break;
if (ch != '\n') fputc(ch, stdout);
if (ch == '\n')
{
putchar('\n');
if (chars >= BUFSIZE)
{
pause_buffer();
chars = 0;
}
printf("[%ld]: ", line++);
}
}
puts("\n");
fclose(file);
return true;
}
/*
The read_line() Function
---------------------------------------
This function reads one line at a time
by using the filename and line arguments
provided. IF the line is found, it is
printed to standard output.
The function returns a TRUE value if all
went well and an ERROR value if anything
went wrong.
*/
int read_line(const char * filename, long long line)
{
char ch;
int value;
long long current_line, max_lines = 0;
FILE * file;
if (ERROR == known_file_extension(filename))
{
putchar('\n');
fprintf(stderr, "[Invalid Extension Type]: %s\n", filename);
fprintf(stderr, "You're only allowed to use text files.\n\n");
return ERROR;
}
if (NULL == (file = fopen(filename, "r")))
{
fclose(file);
return ERROR;
}
while (EOF != (ch = fgetc(file)))
if (ch == '\n') max_lines++;
rewind(file);
printf("[%ld]: ", line);
for (current_line = 1; current_line <= max_lines;)
{
if (EOF == (ch = fgetc(file))) break;
if (ch == '\n') current_line++;
if (current_line == line && ch != '\n')
fputc(ch, stdout);
}
puts("\n");
fclose(file);
return true;
}
/*
The write_file() Function
---------------------------------------
Writes line buffered input to the target
file. This function uses the "a" mode,
not the "w" mode. Returns an ERROR
macro if anything went wrong and returns
true if everything went alright.
---------------------------------------
Even though this function is extremely
limiting, its still fun to play with.
---------------------------------------
*/
int write_file(const char * filename)
{
char ch;
int value;
long long chars, line = 1;
FILE * file;
if (ERROR == known_file_extension(filename))
{
fprintf(stderr, "[Invalid Extension Type]: %s\n", filename);
fprintf(stderr, "You're only allowed to use text files.\n\n");
return ERROR;
}
if (NULL != (file = fopen(filename, "r")))
{
fprintf(stdout, "[Warning!]: The file \"%s\" already exists!\n\n", filename);
fclose(file);
}
else
{
fprintf(stdout, "[Attempting]: to create the file \"%s\"\n\n", filename);
fclose(file);
}
if ( NULL == (file = fopen(filename, "a")))
{
fclose(file);
return ERROR;
}
printf("Created the file %s...\n\n", filename);
printf("[EOF on Newline to Close the File]\n\n");
printf("[%ld]: ", line);
for (line = 2, chars = 0; EOF != (ch = fgetc(stdin)); chars++)
{
if (ch != '\n')
fputc(ch, file);
if (ch == '\n')
{
fputc('\n', file);
printf("[%ld]: ", line++);
}
}
putchar('\n');
printf("[%lld] characters were written to the file \"%s\"\n", chars, filename);
printf("[%lld] lines were written to the file \"%s\"\n\n", line, filename);
fclose(file);
return true;
}
/*
The copy_file() Function
---------------------------------------
Copies the source files contents to the
target file. This function uses the "w"
mode, not the "a" mode. Returns an ERROR
macro if anything went wrong and returns
true if everything went alright.
*/
int copy_file(const char * source, const char * target)
{
char ch;
int value;
long long chars, line = 1;
FILE * src, * tar;
if (ERROR == known_file_extension(source))
{
fprintf(stderr, "[Invalid Extension Type]: %s\n", source);
fprintf(stderr, "You're only allowed to use text files.\n\n");
return ERROR;
}
if (ERROR == known_file_extension(target))
{
fprintf(stderr, "[Invalid Extension Type]: %s\n", target);
fprintf(stderr, "You're only allowed to use text files.\n\n");
return ERROR;
}
if (NULL == (src = fopen(source, "r")))
{
fprintf(stdout, "[Error!]: Failed to read the file \"%s\"\n", source);
fclose(src);
return ERROR;
}
if (NULL == (tar = fopen(target, "w")))
{
fprintf(stdout, "[Error!]: Failed to create the file \"%s\"\n", target);
fclose(tar);
return ERROR;
}
printf("Created the file %s...\n", target);
for (line = 1, chars = 0; EOF != (ch = fgetc(src)); chars++)
{
fputc(ch, tar);
if (ch == '\n') line++;
}
putchar('\n');
printf("[%lld] characters were copied to the file \"%s\"\n", chars, target);
printf("[%lld] lines were copied to the file \"%s\"\n\n", line, target);
fclose(src);
fclose(tar);
return true;
}
Answer: Warnings
Pay attention to compiler warnings! Just because they're called "warnings" doesn't mean you can ignore them. They're errors. Fix them.
The most abundant warning is when you're printing line numbers in nanfile.c.
printf("[%ld]: ", line);
The line variable is a long long, so the format string should use %lld.
In the same file, you have a number of int value variables that aren't used for anything. Get rid of them.
In nanstring.c, you assign the result of strlen to an int. This should be size_t.
Now for two pairs of errors. In nanstring.c, my compiler (clang) complains that option might not be initialized when it is returned. Meanwhile, in nanite.c, it complains that you're comparing option against ERROR, which can never be true because -1 is not a possible enum value. You're returning an int from menu, which is poor practice. Instead, add an error case to the select enum.
enum select {
copy, help, line,
quit, read, write,
error = -1
};
Then, you can change the return types of menu and find_key to enum select. I also don't like this at all:
for (option = copy; option <= write; option++)
{
if (0 == strcmp(source, keyletters[option]))
{
key_is_found = true;
break;
}
}
First of all, looping over a return value as a counter is confusing. Second of all, the flag variable key_is_found is unnecessary. Change the name of option to result, initialize it with error and change the loops to this:
for (enum select option = copy; option <= write; option++)
{
if (0 == strcmp(source, keyletters[option]))
{
result = option;
break;
}
}
Then you can remove the flag boolean and the conditional at the end of the function, reducing it to a simple return result;, and the warning is gone! Killing a few birds with one stone.
The API and nanproto.h
This is pretty nice. I like how clearly structured and well-documented everything is. I just have a few minor notes.
#ifndef BOOL
# define BOOL ON
# if BOOL
# define true 1
# define false 0
# endif
#endif
C99 includes the system header file stdbool.h, which has a typedef that maps _Bool to bool and includes true and false constants. You should probably just use that, considering you're using _Bool, anyway.
#ifndef BUFSIZE
# define BUFSIZE 1024
#endif
I find this a little strange. It would make more sense to limit by lines, not by buffer size. This feels like an implementation detail the user should never see, but in fact it affects the program's behavior noticeably.
Also, you note this:
Prototypes were left optional but make a good reference and allows the "black-box" concept to stay in play.
Why are these optional? These should always be on.
nanstring.c
This code is very well written. Building nice CLI tools in C can be difficult, but this pulls it off with aplomb. Again, some minor notes.
From pause_buffer:
if (!isspace(ch))
while (getchar() != '\n') continue;
This isn't terrible, but I'd recommend adding braces to, at the very least, the if statement, just for readability's sake. I'd give the same advice for the function just below it, string_to_lower:
for (int index = 0; string[index]; index++)
if (isalpha(string[index]))
string[index] = tolower(string[index]);
nanfile.c
Still quite solid code. The documentation comments continue to be excellent, and the code style is consistent, clean, and readable.
The biggest issue I found was in known_file_extension. Specifically, right here:
//find the last occurring period
char * file_extension_type = strrchr(filename, '.');
What if there's no period in the filename? Segfault! Make sure you handle that case, too.
Another concern here is repetition. The read_file, read_line, write_file, and copy_file functions are all conspicuously similar. Extract the shared functionality into helper functions! Then the others should become quite trivial.
Here are the main components I identify as needing to be shared:
validate_file_extension — Checks if a file extension is valid and prints a message on failure.
print_numbered_line — Prints a line with a preceding line number.
check_file_existence — Prints out messages if a file is being created or overwritten.
At the same time, I recognize that since you're using the I/O API fairly efficiently, it's hard to modularize the differences, especially in a language like C in which abstraction doesn't come easily or cheaply. It would be nice to make copy_file a composition between read_file and write_file, but I understand that wouldn't be as clean.
In both read_file and read_line, consider padding the line numbers so that transitions from single-digit to double-digit to triple-digit line numbers don't push the whole output to the right by a single character.
Now, here's another important point. This code is wrong:
if ( NULL == (file = fopen(filename, "r")))
{
fclose(file);
return ERROR;
}
If file is NULL, don't call fclose on it! I don't know about on other OSes, but this actually segfaults on my OS X machine. That means if I make a small mistake typing a filename, the whole program is brought down. That's a problem.
The General Application and nanite.c
Again, well-written as usual. I have but a few minor complaints.
First of all, why are all these words capitalized? If you're going to go for the informal tone with "Oops!", make it look like a sentence:
puts("Oops! Only Positive Integers Please...");
Also, I found that when I used the write command, it wrote the file fine, but sending EOF caused menu to fail via this path:
fprintf(stderr, "[menu()] -> Failed to successfully store string!");
I'm not sure exactly why that happens, but you might want to look into that.
Overall, this code is quite high-quality. Writing good C code is a feat in and of itself, and this is definitely a great codebase. Is the application itself worth much? Perhaps not, but I'm guessing that wasn't your point to begin with. | {
"domain": "codereview.stackexchange",
"id": 11887,
"tags": "beginner, c, file, io"
} |
Permutation of n-size array with possible repeated elements. E.g [1, 2, 1] | Question: What would a recursive algorithm look like that gets the permutations of any list of n elements, which may or may not contain repeated elements?
For the following 3-element list [1, 1, 2] I would expect the following result:
[1, 1, 2]
[1, 2, 1]
[2, 1, 1]
So far I have the following result:
[1, 1, 2] <- duplicate
[1, 2, 1] <- duplicate
[1, 1, 2]
[1, 2, 1]
[2, 1, 1] <- duplicate
[2, 1, 1]
with algorithm below:
FUNCTION permute(array, nestingLevel) :
FOR index = nestingLevel TO array size -1
SWAP array[index] WITH array[nestingLevel]
CALL permute (array, nestingLevel + 1)
SWAP array[nestingLevel] WITH array[index]
END FOR
IF recursionNestingLevel EQUAL TO array size - 1
PRINT array
END IF
END FUNCTION
DEFINE array[] := 1, 1, 2
CALL permute (array, 0)
Answer: I will change your function a bit, because there is too much going on with swaps, and there is a variable recursionNestingLevel which is never declared or needed.
FUNCTION permute(array, nestingLevel) :
IF nestingLevel EQUAL TO array size
PRINT array
RETURN
END IF
CALL permute (array, nestingLevel + 1)
SET index TO nestingLevel + 1
WHILE index LESS THAN array size
SWAP array[index] WITH array[nestingLevel]
CALL permute (array, nestingLevel + 1)
INCREMENT index BY 1
ENDWHILE
END FUNCTION
DEFINE array[] := 1, 1, 2
CALL permute (array, 0)
Now there is a simple idea to prevent recursing over the same elements - it eliminates the first unnecessary swap and all the redundant calls that occur when array[index] is equal to array[nestingLevel]:
FUNCTION permute(array, nestingLevel) :
IF nestingLevel EQUAL TO array size
PRINT array
RETURN
END IF
CALL permute (array, nestingLevel + 1)
SET index TO nestingLevel + 1
WHILE index LESS THAN array size
+ IF array[index] EQUAL array[nestingLevel] CONTINUE
+ array = CLONE array
SWAP array[index] WITH array[nestingLevel]
CALL permute (array, nestingLevel + 1)
INCREMENT index BY 1
ENDWHILE
END FUNCTION
DEFINE array[] := 1, 1, 2
CALL permute (array, 0)
CLONE here prevents passing the array by reference, which would propagate changes to every recursive call.
When you prevent swapping equal elements and calling permute, it effectively blocks calls with the same parameters, so there are no duplicates.
BTW I have tested this code in JavaScript; for [1, 1, 2] it yields [ 1, 1, 2 ], [ 1, 2, 1 ], [ 2, 1, 1 ]
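The final pseudocode translates directly to Python (the WHILE/CONTINUE is rendered as a for loop so the index always advances, and results are collected in a list instead of printed):

```python
def permute(array, level=0, out=None):
    """Direct translation of the answer's final pseudocode (dedup version)."""
    if out is None:
        out = []
    if level == len(array):
        out.append(array[:])
        return out
    # first branch: keep array[level] where it is
    permute(array, level + 1, out)
    for index in range(level + 1, len(array)):
        if array[index] == array[level]:
            continue                      # skip swaps of equal elements
        clone = array[:]                  # CLONE: don't mutate the caller's list
        clone[level], clone[index] = clone[index], clone[level]
        permute(clone, level + 1, out)
    return out

print(permute([1, 1, 2]))  # [[1, 1, 2], [1, 2, 1], [2, 1, 1]]
```

For a list with all-distinct elements such as [1, 2, 3], the same function yields all 6 permutations with no duplicates.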
"domain": "cs.stackexchange",
"id": 13630,
"tags": "algorithms, data-structures, combinatorics, permutations"
} |
What's the difference between the concatenation and union of symbols within a language | Question: Perhaps I'm confusing myself, but I'm having a bit of trouble figuring out exactly how this language works. I'm given the following regular expression
(a + b)* (abba* + (ab)*ba)
Can someone clarify what union is compared to concatenation? Looking at
(a + b)*
Is it { $\epsilon$, a, b, ab, ba, aa, aab, bbb, ... }? I just want to make sure, for union, that I can have any combination of a and b in whichever order.
Can I also have an example of what a word from this language may be? From what I gathered
aaababbaabbaababba
abbaba
bba
ba
would all be valid words in the given language, correct?
Answer: Simply put,
the Kleene star of a concatenation gives
$$(ab)^* = \{\epsilon, ab, abab, ababab, ...\} $$
while the Kleene star of a union gives
$$(a+b)^* =\{\epsilon,a,b,aa,ab,ba,bb,\ldots\}$$
so you got it correctly, and indeed all the words you wrote belong to the language.
Recall that for any two sets $L,K$ we have
$LK = \{xy \mid x\in L, y\in K\}$,
$L+K = L \cup K$,
$L^* = \{\epsilon\} \cup L \cup L^2 \cdots$.
and recall that the regular expression "a" corresponds to the set $\{a\}$ and operations between regular expressions correspond to the above operations on sets. | {
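As a quick sanity check, the language can be probed with Python's re module; the union '+' of the original expression becomes '|', and fullmatch anchors the pattern to the whole word:

```python
import re

# (a + b)* (abba* + (ab)*ba) with '+' (union) written as '|';
# star binds tightest, so abba* means abb followed by zero or more a's.
lang = re.compile(r"(a|b)*(abba*|(ab)*ba)")

words = ["aaababbaabbaababba", "abbaba", "bba", "ba"]
print([bool(lang.fullmatch(w)) for w in words])  # [True, True, True, True]
```

All four candidate words from the question match, while words like "b" or "ab" are rejected because neither suffix branch (abba* or (ab)*ba) can match them.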
"domain": "cs.stackexchange",
"id": 6469,
"tags": "formal-languages, regular-languages, notation"
} |
How does apparent depth depend on viewing angle? | Question: In derivations of apparent depth and articles on refraction the approximation that the viewing angle is small is made in order to derive that
$$D_a = \frac{n_v}{n_o} D_o,$$
where $D_a$ is the apparent depth, $D_o$ is the true depth of the object, $n_o$ is the index of refraction the object is in, and $n_v$ is the index of refraction the viewer is in. That the small angle approximation is mentioned so much suggests that the full answer depends on viewing angle. What is the full formula for the apparent position of the object as a function of viewing angle?
Answer: One way to derive the location of the image produced is to rely on geometric optics and Snell's law. To do this, we impose a coordinate system where the object is at $(0,-D_o)$ and the viewer is somewhere in quadrant $\mathrm{I}$ viewing the object using a light ray that intersects the interface with angle $\theta_o$ to the surface normal in the object's medium, and $\theta_v$ to the normal in the viewer's medium. The path the light ray follows is then given by
\begin{align}
y&=\left\{\begin{array}{ll}
\cot\theta_o x - D_o & x\le D_o\tan\theta_o \\
\cot\theta_v (x-D_o\tan\theta_o) & x > D_o\tan\theta_o.
\end{array}\right.
\end{align}
Snell's law dictates that $n_o\sin\theta_o=n_v\sin\theta_v$. Applying that to the light ray's path, and using some trig identities, gives:
\begin{align}
y&=\left\{\begin{array}{ll}
\frac{\sqrt{1-\left(\frac{n_v}{n_o}\sin\theta_v\right)^2}}{\frac{n_v}{n_o}\sin\theta_v} x - D_o & x\le D_o \frac{\frac{n_v}{n_o}\sin\theta_v}{\sqrt{1-\left(\frac{n_v}{n_o}\sin\theta_v\right)^2}} \\
\cot\theta_v \left(x - D_o \frac{\frac{n_v}{n_o}\sin\theta_v}{\sqrt{1-\left(\frac{n_v}{n_o}\sin\theta_v\right)^2}}\right) & x > D_o\frac{\frac{n_v}{n_o}\sin\theta_v}{\sqrt{1-\left(\frac{n_v}{n_o}\sin\theta_v\right)^2}}.
\end{array}\right.
\end{align}
The graph below contains the ray path diagram for an object at $(0,-1)$ under water ($n_o=1.33$, $n_v=1$). The green rays are sampled at equally spaced viewing angles with $5^\circ$ increments ($5^\circ$ through $85^\circ$, inclusive).
In geometric optics, the coordinates of the image are given by the point where the lines containing the observed light rays intersect. This is graphically illustrated for the rays in the above figure below. Notice that there is not a single point where all of the rays intersect, so the position of the image will, indeed, depend on which rays the observer samples.
Assuming that the angles of the observed rays are $\theta_v$ and $\theta_v'$, the observed $x$-position is
\begin{align}
x_a &= D_o\,\frac{\cot\theta_v'\,\frac{\frac{n_v}{n_o}\sin\theta_v'}{\sqrt{1-\left(\frac{n_v}{n_o}\sin\theta_v'\right)^2}}-\cot\theta_v\,\frac{\frac{n_v}{n_o}\sin\theta_v}{\sqrt{1-\left(\frac{n_v}{n_o}\sin\theta_v\right)^2}}}{\cot\theta_v'-\cot\theta_v}.
\end{align}
As is clear from the second graph above, if the observer samples too many rays (too wide a sampling angle), then no single image is formed. For a sharp image, the two viewing angles must therefore be nearly equal, allowing us to work to zeroth order in $\theta_v'-\theta_v$ (i.e. take the limit as $\theta_v'-\theta_v \rightarrow 0$) to get that
$$ x_a = D_o \frac{\frac{n_v}{n_o}\left(1-\left[\frac{n_v}{n_o}\right]^2\right)\sin^3\theta_v}{\left(1-\left(\frac{n_v}{n_o}\sin\theta_v\right)^2\right)^{3/2}}. $$
In other words, the object appears to be a distance of $x_a$ closer to the viewer, horizontally, than it really is when $n_o>n_v$. As expected, when $n_v=n_o$ or $\theta_v=0$ then $x_a=0$. Interestingly, the offset is $\mathcal{O}(\theta_v^3)$ for $\theta_v\ll 1$.
Back-substituting $x_a$ to get the apparent depth gives
$$D_a = D_o \frac{\frac{n_v}{n_o}\cos^3\theta_v}{\left(1-\left(\frac{n_v}{n_o}\sin\theta_v\right)^2\right)^{3/2}}.$$
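These closed-form results for $x_a$ and $D_a$ can be sanity-checked by intersecting two nearly parallel refracted rays numerically (a minimal Python sketch, assuming an object one unit deep in water viewed from air; $D_o$ denotes the true depth):

```python
import numpy as np

n_o, n_v = 1.33, 1.0      # object in water, viewer in air
D_o = 1.0                 # true depth of the object
k = n_v / n_o

def ray(theta_v):
    """Slope and surface x-intercept of the refracted ray in the viewer's medium."""
    s = k * np.sin(theta_v) / np.sqrt(1 - (k * np.sin(theta_v))**2)  # tan(theta_o)
    return 1.0 / np.tan(theta_v), D_o * s   # cot(theta_v), D_o * tan(theta_o)

theta = np.deg2rad(40.0)
eps = 1e-6                # two nearly equal viewing angles
m1, a1 = ray(theta)
m2, a2 = ray(theta + eps)

# Intersection of y = m1*(x - a1) and y = m2*(x - a2): the image location.
x_img = (m2 * a2 - m1 * a1) / (m2 - m1)
y_img = m1 * (x_img - a1)

# Closed-form expressions for comparison.
denom = (1 - (k * np.sin(theta))**2)**1.5
x_closed = D_o * k * (1 - k**2) * np.sin(theta)**3 / denom
D_closed = D_o * k * np.cos(theta)**3 / denom
```

The numerically intersected image position agrees with the closed-form $x_a$ and $D_a$, and the apparent depth comes out shallower than the true depth, as expected for viewing into water.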
As expected, if $n_o=n_v$ then $D_a=D_o$, and when $\theta_v=0$, $D_a=\frac{n_v}{n_o}D_o$. Adding a parametric plot of $(x_a,-D_a)$ in red onto the second graph above shows the line following the bottom of the blue curve of intersections, as expected. Thus, for any given observation ray the image will be formed very near the point where the blue line is tangent to the red one. | {
"domain": "physics.stackexchange",
"id": 53368,
"tags": "optics, refraction, geometric-optics"
} |
Filtering DbContext data dynamically by user input in WPF applications | Question: I need to filter database data based on filters available to the end user in the form of search term text box, select boxes etc.
I have put together this code and need feedback if this is a good way to do it or if there are any better solutions.
using Multi.Model;
using System;
using System.Linq;
using System.Windows.Controls;
namespace Multi.Pages
{
/// <summary>
/// Interaction logic for Page1.xaml
/// </summary>
public partial class Page1 : Page
{
public Page1()
{
InitializeComponent();
textBoxName.TextChanged += TextBoxName_TextChanged;
PopulateDataGrid();
}
private void PopulateDataGrid()
{
using (var db = new optisysEntities())
{
var items = db.clients.AsQueryable();
items = FilterClients(db, items);
dataGrid.ItemsSource = items.ToList();
}
}
private System.Linq.IQueryable<Multi.Model.clients> FilterClients(optisysEntities db, System.Linq.IQueryable<Multi.Model.clients> clients)
{
if (!String.IsNullOrWhiteSpace(textBoxName.Text)) clients = clients.Where(c => c.name.Contains(textBoxName.Text)
|| c.phone.Contains(textBoxName.Text));
// if (!String.IsNullOrWhiteSpace(search.Email)) clients = clients.Where(u => u.Email.Contains(search.Email));
// if (search.UsertypeId.HasValue) clients = clients.Where(u => u.UsertypeId == search.UsertypeId.Value);
return clients;
}
private void TextBoxName_TextChanged(object sender, TextChangedEventArgs e)
{
PopulateDataGrid();
}
}
}
Answer: Reducing your code to what I think is the core of your question:
private System.Linq.IQueryable<Multi.Model.clients> FilterClients(optisysEntities db, System.Linq.IQueryable<Multi.Model.clients> clients)
{
if (!String.IsNullOrWhiteSpace(textBoxName.Text))
clients = clients.Where(c => c.name.Contains(textBoxName.Text) || c.phone.Contains(textBoxName.Text));
if (!String.IsNullOrWhiteSpace(search.Email))
clients = clients.Where(u => u.Email.Contains(search.Email));
if (search.UsertypeId.HasValue)
clients = clients.Where(u => u.UsertypeId == search.UsertypeId.Value);
return clients;
}
Yes, this is one of the better ways to do dynamic filtering.
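The same conditional-chaining idea translates to other stacks. As a rough illustration (Python here, with a hypothetical Client record standing in for the original Multi.Model entity), each optional input conditionally adds a lazy predicate, and nothing is evaluated until the end, much like IQueryable's deferred execution:

```python
from dataclasses import dataclass

@dataclass
class Client:
    # Hypothetical record, not the original EF entity.
    name: str
    phone: str
    email: str

def filter_clients(clients, name_or_phone=None, email=None):
    """Conditionally chain filters; generator expressions keep evaluation lazy."""
    results = iter(clients)
    if name_or_phone:
        results = (c for c in results
                   if name_or_phone in c.name or name_or_phone in c.phone)
    if email:
        results = (c for c in results if email in c.email)
    return list(results)
```

Each `if` narrows the query only when the corresponding input is non-empty, exactly as the C# version narrows the IQueryable before the single `ToList()` call.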
I would suggest abstracting the data retrieval into a separate layer. You don't want your form logic handling your underlying ORM directly.
This is exacerbated by the fact that you've put the FilterClients() method by itself. Your UI form therefore contains a method whose responsibility has nothing to do with the UI.
However, I get the feeling that this application is either tiny, or has only just been developed. So I can understand that this abstraction is something for a later stage. I would suggest doing it immediately to make it less painful in the future, but that's your choice. | {
"domain": "codereview.stackexchange",
"id": 30236,
"tags": "c#, database, entity-framework, wpf, postgresql"
} |
Drilling PCB 3d printer bed | Question: I need a heated surface for a project and was thinking of using one of these PCB heater beds. But I'd need to drill a few holes in it - can I do this without stopping it working? Thanks
Answer: If it is constructed like this one then at best you'll have dead stripes where you've drilled through conductors; at worst you'll short some adjacent stripes together and you'll (probably) get hot spots.
PCB layout software is cheap; you may want to see if you can get the part made as specified in the Wiki I link to (or use a service that uses gold flash rather than tin plate -- gold flash is much thinner, so it won't disturb the resistivity of the copper as much).
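For a rough sense of the numbers involved, here is a Python sketch with assumed, illustrative values (not taken from any particular bed) estimating a serpentine heater trace's resistance and power, and the hot-spot effect of locally narrowing a stripe at a drilled hole:

```python
# Every number below is an assumption for the sketch, not a datasheet value.
rho_cu = 1.68e-8      # ohm*m, resistivity of copper
t = 35e-6             # m, 1 oz/ft^2 copper thickness
w = 2e-3              # m, stripe width
n_stripes = 40        # serpentine of 40 stripes in series
stripe_len = 0.2      # m, length of one stripe
L = n_stripes * stripe_len

R = rho_cu * L / (w * t)   # total trace resistance (ohms)
P = 12.0**2 / R            # dissipated power at 12 V (watts)

# A hole that removes half the stripe width locally doubles the resistance
# of that segment; with the same current flowing, the local dissipation per
# unit length doubles as well -- a hot spot.
local_factor = w / (w / 2)
```

With these assumed dimensions the trace comes out near 2 ohms and a few tens of watts at 12 V, which is why even a modest local narrowing around a hole noticeably concentrates heat.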
If you can, design a board with holes. Run an annular ring of copper around each hole, with stripes going into and out of the rings. Current will conduct around the holes, then go back into stripes where it'll do its job of heating up the board. Without a lot of painful calculation the heating won't be perfectly uniform, but it should be pretty good -- or you can do a lot of painful calculation, and get it spot on. | {
"domain": "engineering.stackexchange",
"id": 2983,
"tags": "heating-systems, 3d-printing"
} |
how to find breakpoints of a signal based on the frequency of the signal | Question: There are 4 cosine signals, at 10, 20, 30, and 100 Hz.
The main signal is the sum of those 4, and I have to create a low-pass filter on the main signal to remove the frequencies > 50 Hz.
% design of low pass filter
f = [0 0.6 0.6 1];
m = [1 1 0 0 ];
b = fir2(30, f, m);
[h, w] = freqz(b,1,128);
hold on;
plot(f,m,'b')
plot(w/pi, abs(h), 'r')
xlabel('Normalized Frequency (\times\pi rad/sample)')
ylabel('Magnitude')
legend('Ideal', 'fir2 designed')
legend boxoff
title('Comparison of Frequency Response Magnitudes');
hold off;
That's the example code that our professor gave us. In Matlab, running help fir2 shows a similar example:
% Example 1:
% Design a 30th-order lowpass filter and overplot the desired
% frequency response with the actual frequency response.
f = [0 0.6 0.6 1]; % Frequency breakpoints
m = [1 1 0 0]; % Magnitude breakpoints
b = fir2(30,f,m); % Frequency sampling-based FIR filter design
[h,w] = freqz(b,1,128); % Frequency response of filter
plot(f,m,w/pi,abs(h))
legend('Ideal','fir2 Designed')
title('Comparison of Frequency Response Magnitudes')
What I don't understand is how to find the frequency breakpoints and the magnitude breakpoints.
Also, what is the relation of the filter order to low-pass, high-pass, and band-pass filters?
Answer: The samples in the 'f' vector in the Matlab Example 1 code are arbitrary, ...just an example of a lowpass filter. The purpose of that code is to merely show you how the "designed" (actual) freq response is similar to, but not equal to, the "ideal" rectangular freq response.
I don't like that Matlab Example 1. It's confusing to some beginners to have two different magnitude ('m') sample values associated with a single 'f' value of 0.6. A better example would define 'f' as f = [0 0.6 0.7 1].
In any case, Example 1's f = [0 0.6 0.6 1] and m = [1 1 0 0] vectors specify that the lowpass filter's "ideal" passband gain is one at $0*f_s/2$ Hz and one at $0.6*f_s/2$ Hz. (The '*' symbol means multiply.) So the "ideal" lowpass passband extends from $0*f_s/2$ to $0.6*f_s/2$ Hz. The code also specifies that the filter's "ideal" stopband gain is zero at $0.6*f_s/2$ Hz and zero at $1*f_s/2$ Hz. So the "ideal" stopband extends from $0.6*f_s/2$ to $1*f_s/2$ Hz.
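For the original task (attenuating everything above 50 Hz), the breakpoints are just the cutoff normalized to the Nyquist frequency: with sampling rate fs, the 50 Hz cutoff sits at 50/(fs/2). The breakpoint idea can be checked in Python using SciPy's firwin2 in place of Matlab's fir2 (the 0.6-to-0.7 transition band below follows the better example suggested in the answer and is only illustrative):

```python
import numpy as np
from scipy.signal import firwin2, freqz

# Frequency breakpoints, normalized so that 1.0 = fs/2 (the Nyquist frequency).
# For a 50 Hz cutoff you would place these edges near 50/(fs/2).
freq = [0.0, 0.6, 0.7, 1.0]   # passband edge at 0.6, stopband edge at 0.7
gain = [1.0, 1.0, 0.0, 0.0]   # desired magnitude at each breakpoint
b = firwin2(31, freq, gain)   # 31 taps = 30th-order FIR, like fir2(30, f, m)

w, h = freqz(b, 1, 512)       # actual frequency response of the design
mag = np.abs(h)
```

The designed response follows the ideal one except in the transition band; a higher filter order (more taps) allows a narrower achievable transition, which is the main relation between filter order and low-pass, high-pass, and band-pass designs.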
Now if you wanted the filter's "ideal" lowpass passband to extend from $0*f_s/2$ to $0.5*f_s/2$ Hz (zero Hz to $f_s/4$ Hz), you would define vector 'f' as f = [0 0.5 0.5 1]. | {
"domain": "dsp.stackexchange",
"id": 3466,
"tags": "matlab, filter-design, frequency"
} |
Libfreenect won't install properly on 12.04 Hydro | Question:
I am trying to use libfreenect with a kinect. The output when I run
roslaunch freenect_launch freenect-xyz.launch
is
[ INFO] [1413756004.829132537]: Number devices connected: 1
[ INFO] [1413756004.829250239]: 1. device on bus 000:00 is a Xbox NUI Camera (2ae) from Microsoft (45e) with serial id '0000000000000000'
[ INFO] [1413756004.831311636]: Searching for device with index = 1
[ INFO] [1413756004.843299915]: No matching device found.... waiting for devices. Reason: [ERROR] Unable to open specified kinect
The next command I ran was
rosrun libfreenect glview.
This returned
[rosrun] Couldn't find executable named glview below /opt/ros/hydro/share/libfreenect
So I tried
alex@alex-OptiPlex-GX620:~$ rospack find libfreenect
/opt/ros/hydro/share/libfreenect
alex@alex-OptiPlex-GX620:~$ cd /
alex@alex-OptiPlex-GX620:/$ cd /opt/ros/hydro/share/libfreenect
alex@alex-OptiPlex-GX620:/opt/ros/hydro/share/libfreenect$ ls
package.xml
From my understanding this means the proper packages that should come with libfreenect have not been installed. I have tried re-installing with no success. I have run
sudo apt-get update
sudo apt-get dist-upgrade
I have made sure to run
sudo modprobe -r gspca_kinect
echo 'blacklist gspca_kinect' | sudo tee -a /etc/modprobe.d/blacklist.conf
The kinect is being connected by USB 2.0. The result of lsusb is
Bus 001 Device 002: ID 045e:02c2 Microsoft Corp.
Bus 001 Device 005: ID 0bda:8176 Realtek Semiconductor Corp. RTL8188CUS 802.11n WLAN
Bus 002 Device 002: ID 046d:c018 Logitech, Inc. Optical Wheel Mouse
Bus 003 Device 002: ID 413c:2105 Dell Computer Corp. Model L100 Keyboard
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 001 Device 007: ID 045e:02ad Microsoft Corp. Xbox NUI Audio
Bus 001 Device 008: ID 045e:02ae Microsoft Corp. Xbox NUI Camera
Originally posted by kinect_guy on ROS Answers with karma: 38 on 2014-10-19
Post score: 0
Answer:
The issue is the model of the Kinect itself. Version 1473 is currently experiencing issues with the freenect library. Further discussion here: https://github.com/ros-drivers/freenect_stack/issues/12
Originally posted by kinect_guy with karma: 38 on 2014-10-21
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 19777,
"tags": "kinect"
} |
How to prove that the coefficient vector in the Pauli basis keeps its length under a unitary operation? | Question: Formal problem statement. Consider a real vector $\vec r$ in the basis $\sigma_i\otimes \sigma_j$ with $i,j=0,1,2,3$, where $i$ and $j$ are not both $0$, $\sigma_0=I$, and the other indices stand for the Pauli matrices. There are $15$ such basis elements. How does one prove that under the action of a unitary operation (a $4\times 4$ matrix), i.e. $U\sum_{ij}r_{ij}\sigma_i\otimes \sigma_j U^\dagger$, the length (Euclidean norm) of $\vec r$ remains unchanged (in the prescribed basis)? We can only consider SU(4) elements.
Here are some of my thoughts. I refer to the one-qubit case for inspiration, i.e., the basis is just the $3$ Pauli matrices and the vector has $3$ dimensions. A brute-force proof is to check that the squares of the coefficients of $U\sigma_1U^\dagger$ sum to 1, and likewise for $U\sigma_2U^\dagger$ and $U\sigma_3U^\dagger$. Then for any term $\sum_{i=1,2,3}r_i\sigma_i$, after the action of the unitary, i.e. $U\sum_{i=1,2,3}r_i\sigma_i U^\dagger$, it is easy to see that the length of $\vec r$ remains unchanged. But this method becomes too complicated in the two-qubit case, since we would need to check $15$ matrix equations in total. So for the two-qubit case, is there an easier method? And does this property still hold in the $d$-qubit case?
Answer: One of the key things that does not change as the result of a unitary transformation applied to a matrix is the eigenvalues of the matrix. Consequently, any function of the matrix that only depends on the eigenvalues and not the eigenvectors also does not change. If we can see that the length of $\vec{r}$ is one of these, we're done.
I like your thought about using the one-qubit case for inspiration. That's what helped me. In the 1-qubit case, $M=\sum_{i\in\{1,2,3\}}r_i\sigma_i$ has eigenvalues $\pm\sqrt{r_1^2+r_2^2+r_3^2}$. A way to prove this is to consider $\text{Tr}(M^2)=\sum_i\lambda_i^2$ ($\lambda_i$ are the eigenvalues), and realising that by omitting the $I$ term, you have $\sum_i\lambda_i=0$.
If you do the same thing in the two-qubit case, you'll get
$$
\text{Tr}(M^2)=\sum_i\lambda_i^2
$$
which, being only a function of eigenvalues, is invariant under the action of $U$.
So, now evaluate
$$
\text{Tr}(M^2)=4\sum_{i,j}r_{ij}^2=4\|\vec{r}\|^2.
$$
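This invariance is straightforward to verify numerically. The sketch below (assuming NumPy) builds the 15 basis matrices, applies a random unitary drawn via the QR decomposition of a complex Gaussian matrix, and re-extracts the coefficients with the trace inner product $r'_{ij}=\operatorname{Tr}(M'\,\sigma_i\otimes\sigma_j)/4$:

```python
import numpy as np

# Identity plus the three Pauli matrices.
s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [s0, s1, s2, s3]

# The 15 basis matrices sigma_i (x) sigma_j, excluding i = j = 0.
basis = [np.kron(paulis[i], paulis[j])
         for i in range(4) for j in range(4) if (i, j) != (0, 0)]

rng = np.random.default_rng(0)
r = rng.normal(size=15)                      # random real coefficient vector
M = sum(c * B for c, B in zip(r, basis))     # M is traceless and Hermitian

# Random 4x4 unitary from the QR decomposition of a complex Gaussian matrix.
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
Mp = Q @ M @ Q.conj().T

# Coefficients in the rotated frame: r'_ij = Tr(M' sigma_i(x)sigma_j) / 4.
rp = np.array([np.trace(Mp @ B).real / 4 for B in basis])
```

Since $M$ is traceless, so is $UMU^\dagger$, so the rotated matrix still expands exactly in the 15-element basis, and the Euclidean norm of the coefficient vector is unchanged.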
(This works because the trace of the product picks out only the $I$ terms, which arise only when a tensor product of Paulis multiplies itself, and each such term has trace 4.) | {
"domain": "quantumcomputing.stackexchange",
"id": 3522,
"tags": "quantum-state, bloch-sphere"
} |
catkin_make not working in indigo | Question:
I have just upgraded to Ubuntu 14.04 and did sudo apt-get install ros-indigo-desktop-full. I used to use hydro on 13.04 so was familiar with catkin. However, when I tried to catkin_make my packages, I was told catkin_make: command not found. I have checked that ros-indigo-catkin is installed.
Are we suppose to use some of the hydro packages to "bridge the gap" for now? I'm asking this also because ROS-industrial is not yet available in indigo.
Thanks!
Originally posted by jess on ROS Answers with karma: 33 on 2014-07-23
Post score: 0
Answer:
Are you sure you've sourced the ros Indigo setup file in your shell? Try source /opt/ros/indigo/setup.bash
On my Ubuntu 14.04 machine with Indigo, catkin_make is part of the ros-indigo-catkin package, and is located in /opt/ros/indigo/bin/catkin_make
Originally posted by ahendrix with karma: 47576 on 2014-07-24
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by jess on 2014-07-24:
Thanks. For some reason my .bashrc was not being run when I log in like it used to. I've just made sure it does now. Btw, do you know if moveit and ros-industrial hydro can be run with indigo?
Comment by ahendrix on 2014-07-24:
I haven't used either on indigo. The build status page indicates that moveit is released: http://www.ros.org/debbuild/indigo.html?q=moveit and ros_industrial is not. Indigo is pretty similar to Hydro, so you may be able to compile the hydro version of ros_industrial from source on Indigo. | {
"domain": "robotics.stackexchange",
"id": 18747,
"tags": "ros-indigo"
} |
Shortcuts and imports for large RPG basic code | Question: I decided to work on putting together an Arena-style (very basic) text-based RPG as a way to help myself learn Python. Unfortunately, after about 1,000 lines of pieced-together code, I realize that I've been doing myself a *dis*service by reinforcing poor practices, and by going line-by-line, my inefficiency is multiplying exponentially.
I don't honestly know where to start. I'm sure there are libraries, shortcuts, and greatly more efficient methods than what I'm using. I'm okay with starting over (there are errors not worth finding, anyway), but I would hugely appreciate anyone knowledgeable looking over my code, giving me a few of the bigger pointers for how to clean up my current methods, and where to look for new ones. A quick skim, I'm sure, would suffice. I also want to keep it mostly text and basic, so I'm not looking for something like PyGame, just straightforward stuff. (As a bonus, I plan on using my final code to create a "leaderboard" of randomly generated characters that fight each other, so pointers in that direction help.) I had to cut out many of the very long lists for post length, but it gives you an idea.
import random
import math
class character:
def __init__(self):
# There are hundreds of these... cut down for post length
self.STR = 75
self.DEX = 75
self.CON = 75
self.AGI = 75
self.INT = 75
self.WIS = 75
self.statPoints = 75
self.AP = 0
self.XP = 0
self.LVL = 1
self.name = ''
self.xpToLVL = 500
self.fightsWon = 0
self.XPSinceLVL = 0
self.dmgmitPercent = 0
self.mindmg = 0
self.maxdmg = 0
self.mindmgmit = 0
self.maxdmgmit = 0
self.minrobeSP = 0
self.maxrobeSP = 0
self.WarBuff = 0
self.poisonCounter = 0
self.dmg = 0
self.armorType = 0
self.shieldEquipped = 0
self.visitedPTrainer = 0
self.visitedCTrainer = 0
self.visitedArena = 0
def fightRound(hero, enemy, skillPick):
heavyB = 0
defS = 0
flank = 0
poison = 0
heal = 0
shock = 0
fireB = 0
counterA = 0
resil = 0
shockstun = 0
monkstun = 0
rejuv = 0
if hero.curClass == 1 and skillPick == 1:
heavyB = 1
elif hero.curClass == 1 and skillPick == 2:
defS = 1
elif hero.curClass == 1 and skillPick == 3:
hero.WarBuff += 1
# ETC, cut for post length
EheavyB = 0
EdefS = 0
Eflank = 0
Epoison = 0
Eheal = 0
Eshock = 0
EfireB = 0
EcounterA = 0
Eresil = 0
Eshockstun = 0
Emonkstun = 0
Erejuv = 0
EskillPick = 0
if enemy.curClass == 1:
if hero.HP < ((enemy.CON * 5 * enemy.LVL) / 10):
EskillPick = 1
elif enemy.HP > (enemy.CON * 5 * enemy.LVL) / 2 and hero.HP > (hero.CON * 5 * hero.LVL) / 2:
if random.randint(1, 100) < 66:
EskillPick = 2
elif random.randint(1, 100) < 51:
EskillPick = 2
else:
EskillPick = 1
if enemy.curClass == 2:
if hero.HP > (hero.CON * 5 * hero.LVL) / 2 and enemy.HP > (enemy.HP * 5 * enemy.LVL) / 2:
if random.randint(1, 100) < 80:
EskillPick = 1
elif random.randint(1, 100) < 40:
EskillPick = 2
else:
EskillPick = 1
#ETC, cut down for post length
if enemy.curClass == 1 and EskillPick == 1:
EheavyB = 1
print 'Your enemy lunges forward...'
elif enemy.curClass == 1 and EskillPick == 2:
EdefS = 1
print 'Your enemy pulls back...'
elif enemy.curClass == 1 and EskillPick == 3:
enemy.WarBuff += 1
elif enemy.curClass == 2 and EskillPick == 1:
Eflank = 1
print 'Your enemy tries to flank you...'
elif enemy.curClass == 2 and EskillPick == 2:
Epoison = 1
print 'Your enemy coats his daggers...'
# ETC, cut for post length
hero.STR = float(hero.STR)
hero.DEX = float(hero.DEX)
hero.CON = float(hero.CON)
hero.INT = float(hero.INT)
hero.WIS = float(hero.WIS)
#hero.DEX = float(hero.DEX)
heroMissed = 0
enemyMissed = 0
enemy.STR = float(enemy.STR)
enemy.DEX = float(enemy.DEX)
enemy.CON = float(enemy.CON)
enemy.INT = float(enemy.INT)
enemy.WIS = float(enemy.WIS)
hero.dodge = float(hero.dodge)
hero.block = float(hero.block)
enemy.dodge = float(enemy.dodge)
enemy.block = float(enemy.block)
hero.crit = float(hero.crit)
enemy.crit = float(enemy.crit)
if hero.weaponType == 0:
hero.swing = hero.AGI * 3
elif hero.weaponType == 1:
hero.swing = hero.AGI * 2
elif hero.weaponType > 1:
hero.swing = hero.AGI
heroAttacks = int(math.floor(hero.swing / 100))
heroExtraAttack = hero.swing - (heroAttacks * 100)
if random.randint(1, 100) <= heroExtraAttack:
heroAttacks += 1
if heroAttacks < 1:
heroMissed = 1
if enemy.weaponType == 0:
enemy.swing = enemy.AGI * 3
elif enemy.weaponType == 1:
enemy.swing = enemy.AGI * 2
elif enemy.weaponType > 1:
enemy.swing = enemy.AGI
enemyAttacks = int(math.floor(enemy.swing / 100))
enemyExtraAttack = enemy.swing - (enemyAttacks * 100)
if random.randint(1, 100) <= enemyExtraAttack:
enemyAttacks += 1
if enemyAttacks < 1:
enemyMissed = 1
# print '*DEBUG*: \# of hero attacks:',heroAttacks
# print '*DEBUG*: \# of enemy attacks:',enemyAttacks
# Just until I get items in...
if hero.weaponType == 0:
hero.mindmg = hero.LVL * 10
hero.maxdmg = hero.LVL * 10
elif hero.weaponType == 1:
hero.mindmg = hero.LVL * 5
hero.maxdmg = hero.LVL * 25
#ETC, cut...
hero.dodge = hero.AGI / 20 + hero.MonP * 0.4
if hero.armorType == 2:
hero.dodge = hero.dodge / 4
elif hero.armorType == 1:
hero.dodge = hero.dodge / 2
hero.dodge = float(hero.dodge)
hero.crit = hero.AGI / 20 + hero.RogP * 0.4
hero.crit = float(hero.crit)
hero.dmgmitPercent = float(hero.STR / 20 / 100)
if hero.armorType == 2:
hero.mindmgmit = hero.LVL * 10
hero.maxdmgmit = hero.LVL * 15
elif hero.armorType == 1:
hero.mindmgmit = hero.LVL * 3
hero.maxdmgmit = hero.LVL * 6
elif hero.armorType == 3:
hero.minrobeSP = hero.LVL * 1
hero.maxrobeSP = hero.LVL * 10
enemy.dodge = enemy.AGI / 20 + enemy.MonP * 0.4
if enemy.armorType == 2:
enemy.dodge = enemy.dodge / 4
elif enemy.armorType == 1:
enemy.dodge = enemy.dodge / 2
enemy.dodge = float(enemy.dodge)
enemy.crit = enemy.AGI / 20 + enemy.RogP * 0.4
enemy.crit = float(enemy.crit)
enemy.dmgmitPercent = float(enemy.STR / 20 / 100)
# haven't done this yet
if hero.shieldEquipped == 1:
hero.block = hero.STR / 20
else:
hero.block = 0
if hero.weaponType == 4:
hero.SP = random.randint(hero.mindmg, hero.maxdmg)
elif hero.weaponType != 4:
hero.SP = hero.LVL * 10
if hero.armorType == 3:
hero.SP += random.randint(hero.minrobeSP, hero.maxrobeSP)
hero.SP *= hero.INT / 100
if enemy.weaponType == 4:
enemy.SP = random.randint(enemy.mindmg, enemy.maxdmg)
elif enemy.weaponType != 4:
enemy.SP = enemy.LVL * 10
if enemy.armorType == 3:
enemy.SP += random.randint(enemy.minrobeSP, enemy.maxrobeSP)
enemy.SP *= enemy.INT / 100
if enemy.curClass == 3 or enemy.curClass == 4:
enemyMissed = 0
enemyAttacks = 1
if heroMissed == 1:
print 'You missed!'
while heroAttacks > 0:
herodmg = float(random.randint(hero.mindmg, hero.maxdmg))
enemymit = float(random.randint(enemy.mindmgmit, enemy.maxdmgmit))
heroagility = float(hero.AGI)
if hero.weaponType == 0:
herodmg *= hero.CON / 100
enemymit = enemymit / (heroagility / 33)
elif hero.weaponType == 1:
herodmg *= hero.DEX / 100
enemymit = enemymit / (heroagility / 50)
elif hero.weaponType == 2:
herodmg *= hero.STR / 100
enemymit = enemymit / (heroagility / 100)
elif hero.weaponType == 3:
herodmg *= hero.STR / 67
enemymit = enemymit / (heroagility / 100)
elif hero.weaponType == 4:
herodmg *= hero.INT / 100
enemymit = 0
if enemymit < 0:
enemymit = 0
enemydodgeroll = enemy.dodge * 100
enemyblockroll = enemy.block * 100
herocritroll = hero.crit * 100
enemydodged = 0
enemyblocked = 0
herocritted = 0
if shock == 1:
herocritroll = 0
enemydodgeroll = 0
enemyblockroll = 0
enemymit = 0
herodmg = hero.SP
shockroll = (500 + hero.WizA1 * 5)
if random.randint(1, 10000) <= shockroll:
enemy.paralyzed = 1
shockstun = 1
# ETC, cut...
if defS == 1:
herodmg = float(herodmg)
herodmg *= (0.75 - (enemy.WarA2 * .0025))
herodmg = int(herodmg)
herodmg = int(herodmg)
if rejuv == 1:
herodmg = 0
heroAttacks = 0
print 'You close your eyes and inhale deeply.'
elif heal == 1:
herodmg = 0
heroAttacks = 0
heroheal = float(hero.SP * (1.1 + (hero.HeaA1 * 0.009)))
heroheal = int(math.ceil(heroheal))
print 'You heal', heroheal, 'damage!'
hero.HP += heroheal
elif hero.paralyzed == 1:
print 'You struggle but can\'t move!'
herodmg = 0
hero.paralyzed = 0
# ETC, cut down for post length
elif herocritted == 1:
print 'You crit for', herodmg, 'damage!'
else:
print 'You strike your enemy for', herodmg, 'damage.'
enemy.HP -= herodmg
heroAttacks -= 1
if enemyMissed == 1:
print 'Your enemy missed!'
while enemyAttacks > 0:
# print 'average enemy dmg is',float((enemy.mindmg+enemy.maxdmg)/2)
enemydmg = float(random.randint(enemy.mindmg, enemy.maxdmg))
heromit = float(random.randint(hero.mindmgmit, hero.maxdmgmit))
enemyagility = float(enemy.AGI)
if enemy.weaponType == 0:
enemydmg *= enemy.CON / 100
heromit = heromit / (enemyagility / 33)
elif enemy.weaponType == 1:
enemydmg *= enemy.DEX / 100
heromit = heromit / (enemyagility / 50)
elif enemy.weaponType == 2:
enemydmg *= enemy.STR / 100
heromit = heromit / (enemyagility / 100)
elif enemy.weaponType == 3:
enemydmg *= enemy.STR / 67
heromit = heromit / (enemyagility / 100)
elif enemy.weaponType == 4:
enemydmg *= enemy.INT / 100
heromit = 0
if heromit < 0:
heromit = 0
# print 'Your mitigation is',heromit
herododgeroll = hero.dodge * 100
heroblockroll = hero.block * 100
enemycritroll = enemy.crit * 100
enemycritted = 0
herododged = 0
heroblocked = 0
if Eshock == 1:
herododgeroll = 0
heroblockroll = 0
enemycritroll = 0
heromit = 0
enemydmg = enemy.SP
Eshockroll = (500 + enemy.WizA1 * 5)
if random.randint(1, 10000) <= Eshockroll:
hero.paralyzed = 1
Eshockstun = 1
if EfireB == 1:
enemycritroll = 0
heromit = 0
enemydmg = enemy.SP
fireBburn = int(
float(math.ceil(enemydmg * (0.1 + enemy.WizA2 * 0.004))))
if fireBburn > hero.fireBburn:
hero.fireBburn = fireBburn
hero.fireBCount = 5
if Eresil == 1:
herododgeroll = 0
hero.dmgmitPercent += (
hero.dodge / 100) * (0.5 + .005 * hero.MonA2)
if random.randint(1, 10000) <= enemycritroll:
enemydmg *= enemy.DEX / 50
enemycritted = 1
if flank == 1:
heromit *= (0.9 - (hero.RogA1 * 0.009))
herododgeroll *= (0.9 - (hero.RogA1 * 0.009))
heroblockroll *= (0.9 - (hero.RogA1 * 0.009))
if Eflank == 1:
heromit *= (0.9 - (enemy.RogA1 * 0.009))
herododgeroll *= (0.9 - (enemy.RogA1 * 0.009))
heroblockroll *= (0.9 - (enemy.RogA1 * 0.009))
# ETC
if enemy.stunned == 1:
if enemydmg - enemy.stunvalue >= 1:
enemydmg = enemydmg - enemy.stunvalue
enemy.stunvalue = 0
enemy.stunned = 0
print 'He stammers a bit, then',
elif enemydmg - enemy.stunvalue < 1:
enemy.stunvalue = enemy.stunvalue - enemydmg
enemydmg = 0
enemydmg = int(enemydmg)
if Erejuv == 1:
enemydmg = 0
enemyAttacks = 0
Erejuv = 0
elif Eheal == 1:
enemydmg = 0
enemyAttacks = 0
enemyheal = float(enemy.SP * (1.1 + (enemy.HeaA1 * 0.009)))
enemyheal = int(math.ceil(enemyheal))
print 'Your enemy heals', enemyheal, 'damage!'
enemy.HP += enemyheal
elif enemy.paralyzed == 1:
print 'Your enemy only twitches.'
enemydmg = 0
enemy.paralyzed = 0
# ETC...
elif enemycritted == 1:
print 'Your enemy crits you for', enemydmg, 'damage!'
else:
print 'Your enemy strikes you for', enemydmg, 'damage.'
hero.HP -= enemydmg
enemyAttacks -= 1
hero = character()
def fight(hero, enemy):
createEnemy(enemy)
hero.HP = hero.CON * 5 * hero.LVL
enemy.HP = enemy.CON * 5 * enemy.LVL
hero.WarBuff = 0
hero.RogBuff = 0
hero.HeaBuff = 0
hero.WizBuff = 0
hero.MonBuff = 0
hero.rejuvCounter = 0
hero.poisonCounter = 0
hero.stunned = 0
hero.fireBburn = 0
hero.fireBCount = 0
hero.stunvalue = 0
hero.paralyzed = 0
while hero.HP > 0 and enemy.HP > 0:
errorcheckFight = 1
while errorcheckFight == 1:
print('What would you like to do?'),
if hero.curClass == 1:
skillPick = input(
' [0] Attack | [1] Heavy Blow | [2] Defensive Strike | [3] Buff: Training')
elif hero.curClass == 2:
skillPick = input(
' [0] Attack | [1] Flank | [2] Poison | [3] Buff: Feinting')
# ETC...
if skillPick > 3:
print
print 'Please select a proper response, from inside the [brackets].'
print
else:
errorcheckFight = 0
print
fightRound(hero, enemy, skillPick)
if enemy.poisonCounter > 0:
print 'Your enemy was Poisoned for', enemy.poisonCounter * hero.RogA2 * 5, 'damage.'
enemy.HP -= enemy.poisonCounter * hero.RogA2 * 5
if hero.rejuvCounter > 0:
herorejuv = int(
float(math.ceil((hero.SP) * (.3 + (hero.HeaA2 * .003)))))
print 'You rejuvenated', herorejuv, 'damage.'
hero.HP += herorejuv
hero.rejuvCounter -= 1
if enemy.fireBCount > 0:
print 'Your enemy burns for', enemy.fireBburn, 'damage.'
enemy.HP -= enemy.fireBburn
enemy.fireBCount -= 1
if enemy.fireBCount < 1:
enemy.fireBburn = 0
if hero.poisonCounter > 0:
print 'You were Poisoned for', hero.poisonCounter * enemy.RogA2 * 5, 'damage.'
hero.HP -= hero.poisonCounter * enemy.RogA2 * 5
if enemy.rejuvCounter > 0:
enemyrejuv = int(
float(math.ceil((enemy.SP) * (.3 + (enemy.HeaA2 * .003)))))
print 'Your enemy rejuvenated', enemyrejuv, 'damage.'
enemy.HP += enemyrejuv
enemy.rejuvCounter -= 1
if hero.fireBCount > 0:
print 'You burn for', hero.fireBburn, 'damage.'
hero.HP -= hero.fireBburn
hero.fireBCount -= 1
if hero.fireBCount < 1:
hero.fireBburn = 0
print
print 'HERO HP:', hero.HP, ' |------| ENEMY HP:', enemy.HP
print
if enemy.HP <= 0:
hero.fightsWon += 1
print 'You Won!'
print 'You gain 100 XP and', int(float(hero.WIS / 75) * 100), 'AP.'
hero.XP += 100
hero.XPSinceLVL += 100
hero.AP += int(float(hero.WIS / 75) * 100)
hero.APSinceLVL += int(float(hero.WIS / 75) * 100)
elif hero.HP <= 0:
hero.fightsLost += 1
print 'You Lost...'
hero.xpToLVL = (hero.LVL * 100 + 500) - hero.XPSinceLVL
if hero.xpToLVL <= 0:
hero.LVL += 1
hero.XPSinceLVL = 0
hero.statPoints += 5
print
print 'You have gained a level! You are now level,', str(hero.LVL) + '!'
classList = ['', 'Warrior', 'Rogue', 'Healer', 'Wizard', 'Monk']
classLevels = [0, hero.WarLVL, hero.RogLVL,
hero.HeaLVL, hero.WizLVL, hero.MonLVL]
if (classLevels[hero.curClass] * 100 + 500) - (hero.APSinceLVL) <= 0:
hero.APSinceLVL = 0
classLevels[hero.curClass] += 1
print 'You have gained a class level! You are now a level', classLevels[hero.curClass], classList[hero.curClass] + '!'
print
print hero.xpToLVL, 'experience left for level', (hero.LVL + 1), 'and', int((classLevels[hero.curClass] * 100 + 500) - (hero.APSinceLVL)), 'left for', classList[hero.curClass], 'level', str(classLevels[hero.curClass]) + '.'
# print '*DEBUG*: Fights Won:',hero.fightsWon,' Fights
# Lost:',hero.fightsLost,' Level:',hero.LVL
main()
def createEnemy(enemy):
#enemy = character()
    enemy.LVL = hero.LVL
    enemyClass = random.randint(1, 5)
    enemy.curClass = enemyClass
    enemy.statPoints += (enemy.LVL - 1) * 3
    if enemy.curClass == 1:
        enemy.STR += int(enemy.statPoints * 0.50)
        enemy.CON += int(enemy.statPoints * 0.30)
        enemy.AGI += int(enemy.statPoints * 0.20)
        enemy.WarP += int(enemy.LVL * 0.70)
        enemy.WarA1 += int(enemy.LVL * 0.60)
        enemy.WarA2 += int(enemy.LVL * 0.60)
        enemy.mindmgmit = enemy.LVL * 5
        enemy.maxdmgmit = enemy.LVL * 10
        enemy.mindmg = enemy.LVL * 10
        enemy.maxdmg = enemy.LVL * 50
        wartype = random.randint(1, 2)
        if wartype == 1:
            enemy.weaponType = 2
            enemy.shieldEquipped = 1
            enemy.mindmgmit += enemy.LVL * 2
            enemy.maxdmgmit += enemy.LVL * 5
        else:
            enemy.weaponType = 3
            enemy.shieldEquipped = 0
    elif enemy.curClass == 2:
        enemy.DEX += int(enemy.statPoints * 0.50)
        enemy.AGI += int(enemy.statPoints * 0.30)
        enemy.CON += int(enemy.statPoints * 0.20)
        enemy.RogP += int(enemy.LVL * 0.60)
        enemy.RogA1 += int(enemy.LVL * 0.50)
        enemy.RogA2 += int(enemy.LVL * 0.50)
        enemy.mindmgmit = enemy.LVL * 3
        enemy.maxdmgmit = enemy.LVL * 6
        enemy.mindmg = enemy.LVL * 5
        enemy.maxdmg = enemy.LVL * 25
        enemy.weaponType = 1
        enemy.armorType = 1
    # ETC, cut for post length

def classPick(hero):
    classList = ['Warrior', 'Rogue', 'Healer', 'Wizard', 'Monk']
    classLevels = [hero.WarLVL, hero.RogLVL,
                   hero.HeaLVL, hero.WizLVL, hero.MonLVL]
    print ' -------'
    print '---------------------------| ARENA |----------------------------'
    print ' -------'
    print
    charname = raw_input('Please enter your hero\'s name:')
    hero.name = charname
    print
    print 'Please pick a starting class:'
    classAnswered = 0
    while classAnswered == 0:
        classNumcounter = 0
        print
        while classNumcounter < 5:
            print '[' + str(classNumcounter + 1) + ']:', classList[classNumcounter]
            classNumcounter += 1
        print '[H]elp for more information.'
        classAnswer = raw_input('Please enter [1-5] or [H]')
        if classAnswer == 'H':
            # HAVE FUN HERE -- DON'T FORGET!!
            print
            print '----------------------------------------------------------------'
            print 'THE WARRIOR:'
            print 'The warrior is a heavily armored fighter using a sword,'
            print 'Versed in offensive and defensive tactics.'
            print 'This class is straightforward and harty, suited for beginners.'
            # ETC cut for post length
            print '----------------------------------------------------------------'
            print
        else:
            classAnswer = int(classAnswer)
            if classAnswer == 1:
                hero.WarLVL += 1
                hero.WarA1 += 1
                hero.WarA2 += 1
                hero.WarB += 1
                hero.WarP += 1
                hero.WarE += 1
                hero.weaponType = 3
                hero.armorType = 2
                hero.curClass = 1
            elif classAnswer == 2:
                hero.RogLVL += 1
                hero.RogA1 += 1
                hero.RogA2 += 1
                hero.RogB += 1
                hero.RogP += 1
                hero.RogE += 1
                hero.weaponType = 1
                hero.armorType = 1
                hero.curClass = 2
            # ETC, cut for post length
            classAnswered = 1
    classAnswer -= 1
    classList = ['Warrior', 'Rogue', 'Healer', 'Wizard', 'Monk']
    classLevels = [hero.WarLVL, hero.RogLVL,
                   hero.HeaLVL, hero.WizLVL, hero.MonLVL]
    print
    print 'Welcome to the Arena,', hero.name + ', the level', classLevels[classAnswer], classList[classAnswer] + '!'
    print
    print 'Please visit your [P]ersonal trainer before stepping into'
    print 'The Arena itself.'

def personalTrainer(hero):
    statCounter = 0
    while hero.visitedPTrainer == 0:
        print
        print 'Ah,', hero.name + ', come in, come in!'
        #ETC, cut for post length
        print
        hero.visitedPTrainer = 1
    statsList = ['STR', 'DEX', 'CON', 'AGI', 'INT', 'WIS']
    statsValues = [hero.STR, hero.DEX, hero.CON, hero.AGI, hero.INT, hero.WIS]
    print '------------------------'
    print 'Your current Stats:'
    while statCounter < 6:
        print '[' + str(statCounter + 1) + ']', statsList[statCounter] + ':', int(statsValues[statCounter])
        statCounter += 1
    print 'You have', hero.statPoints, 'points left.'
    print '------------------------'
    statPick = raw_input(
        'Train [1]-[6]; [V]iew detailed stats; [L]eave; [H]elp')
    if str.upper(statPick) == 'L':
        main()
    elif str.upper(statPick) == 'V':
        print '(Definitely not yet implemented...)'
        personalTrainer(hero)
    elif str.upper(statPick) == 'H':
        # more tedium
        print
        print '----------------------------------------------------------------'
        print 'STRENGTH:'
        print 'Strength determines your damage with swords and staves: (STR)%'
        print 'And your raw damage mitigation. ((STR-100)/10)%: 200STR = -10%'
        print 'It is especially important for Warriors and Healers.'
        print
        print 'DEXTERITY:'
        # ETC, cut for post length
        print '----------------------------------------------------------------'
        print
        personalTrainer(hero)
    else:
        if hero.statPoints <= 0:
            print 'You need more experience to train further.'
            personalTrainer(hero)
        statPick = int(statPick)
        if statPick == 1:
            STRadd = input(
                'How many points would you like to add to Strength? [0]-[' + str(hero.statPoints) + ']')
            if STRadd > hero.statPoints:
                STRadd = 0
                print 'You haven\'t gained enough experience to train so much!'
            hero.STR += STRadd
            hero.statPoints -= STRadd
            personalTrainer(hero)
        elif statPick == 2:
            DEXadd = input(
                'How many points would you like to add to Dexterity? [0]-[' + str(hero.statPoints) + ']')
            if DEXadd > hero.statPoints:
                DEXadd = 0
                print 'You haven\'t gained enough experience to train so much!'
            hero.DEX += DEXadd
            hero.statPoints -= DEXadd
            personalTrainer(hero)
        # Etc - cut for post length
        else:
            print 'Please pick something from inside the [brackets].'
            personalTrainer(hero)

def main():
    if hero.WarLVL == 0 and hero.RogLVL == 0 and hero.HeaLVL == 0 and hero.WizLVL == 0 and hero.MonLVL == 0:
        classPick(hero)
    print
    print 'What would you like to do?:'
    locationPick = raw_input(
        '[F]ight in the Arena; [C]lass Trainer; [P]ersonal Trainer; [H]elp')
    if str.upper(locationPick) == 'F':
        enemy = character()
        fight(hero, enemy)
    # ETC...

main()
Answer: The primary problem I see is that your functions try to do everything from start to finish. For example, fightRound knows all about the different kinds of weapons and armor, the chances of critical hits, and what effect these all have on the damage. It would benefit greatly from being separated into multiple pieces that each handle an individual aspect of this. Consider the conductor of an orchestra: he doesn't tell the violinist how to pull the bow or the trumpet player when to breathe; the conductor just decides when the notes must be played.
You can make fightRound more like a conductor through something called refactoring. It's a process of finding similar code, turning it into reusable blocks, and using that new block without changing the meaning of your code. By giving that reusable block a name, it can become the master of its primary purpose (playing a trumpet note), and let the caller focus on conducting.
In your code you can find many good candidates for refactoring by looking for if/elif trees. Since I'm not seeing all of your code, I may guess the wrong pattern to simplify. But on the whole this approach should help reduce the amount of code, and increase the focus within each block. But watch out for any cases where my advice is wrong because I didn't have the full picture, and see if you can figure out how to tweak it into something that helps.
Refactoring into functions
Let's start with a simple refactoring. Here's some code from early on in fightRound:
if hero.weaponType == 0:
    hero.swing = hero.AGI * 3
elif hero.weaponType == 1:
    hero.swing = hero.AGI * 2
elif hero.weaponType > 1:
    hero.swing = hero.AGI

[...]

if enemy.weaponType == 0:
    enemy.swing = enemy.AGI * 3
elif enemy.weaponType == 1:
    enemy.swing = enemy.AGI * 2
elif enemy.weaponType > 1:
    enemy.swing = enemy.AGI
These two blocks are the same, except the first one references hero everywhere and the second references enemy. As a first step, I would like to see this become something like:
hero.swing = getWeaponSwing(hero)
enemy.swing = getWeaponSwing(enemy)
where getWeaponSwing(char) is a simple implementation of that common code.
Similarly the interleaved code that calculates attacks and misses is also repeated. This could become its own function, or perhaps merged with the suggested getWeaponSwing and given a more inclusive name.
Refactoring into data
A little further down, just past the comment # Just until I get items in..., there is an abbreviated if/elif tree that sets mindmg and maxdmg per the type of weapon. Let's consider what refactoring can do here. If the overall structure looks the same for all the omitted weaponType values, consider a data-driven refactoring:
# up above, probably globally:
WeaponDamageCoef = [
    (10, 10),  # level 0
    (5, 25),   # level 1
    ... ]

# back in fightRound
coef = WeaponDamageCoef[hero.weaponType]
hero.mindmg = coef[0] * hero.LVL
hero.maxdmg = coef[1] * hero.LVL

# or, a more advanced way
hero.mindmg, hero.maxdmg = [coef * hero.LVL for coef in WeaponDamageCoef[hero.weaponType]]
Refactoring into classes
It's possible that the above scenario was simplified. Maybe it's not always a coefficient you can look up by weapon type, or it's not always multiplied by the level. If you need further customization, you can make multiple weapon classes that offer the same interface. Then store an instance of the weapon on your hero and enemy, and let the weapon figure out its thing. Here's a roughed out example of that:
class Weapon(object):
    def getDamageRange(self, char):
        return 10 * char.LVL, 10 * char.LVL

class MagicWeapon(object):  # or perhaps inherit from Weapon, or some shared base
    def getDamageRange(self, char):
        return 5 * char.LVL + 3 * char.INT, 7 * char.LVL + 8 * char.INT

...
hero.weapon = MagicWeapon()
...
hero.mindmg, hero.maxdmg = hero.weapon.getDamageRange(hero)
Next steps
Look for as many instances of repeating code as you can find, and try to refactor them into helpers. After you do this, the code should become smaller and easier to manage. You may find that after the first level of refactoring, other similarities start to become apparent and you can do further higher level refactorings.
There are a lot of other opportunities to improve this code. Refactoring won't fix them all. My hope is that once you reduce the quantity of code through good factoring, it will be easier to address the opportunities that remain. | {
"domain": "codereview.stackexchange",
"id": 6049,
"tags": "python, classes, library, python-2.x"
} |
Mistake in using Dirac notation when applying $X$ gate to vector | Question: The X gate is given by $\big(\begin{smallmatrix}
0 & 1 \\
1 & 0
\end{smallmatrix}\big)$ in the computational basis. In the Hadamard basis, the gate is $X_H = \big(\begin{smallmatrix}
1 & 0\\
0 & -1
\end{smallmatrix}\big) = |+ \rangle \langle +| - |-\rangle \langle-|$. When I apply the gate to the Hadamard basis vectors, the vectors should flip, and they do when I use matrix notation but not when I'm using Dirac notation. I know I'm making a mistake somewhere.
$X_H |+\rangle = (|+ \rangle \langle +| - |-\rangle \langle-|)|+\rangle = |+ \rangle \langle +|+\rangle - |-\rangle \langle-|+\rangle = |+\rangle(1) - |-\rangle(0) = |+\rangle$ and
$X_H |-\rangle = (|+ \rangle \langle +| - |-\rangle \langle-|)|-\rangle = |+ \rangle \langle +|-\rangle - |-\rangle \langle-|-\rangle = |+\rangle (0) -|-\rangle(1) = -|-\rangle$
Meanwhile, in matrix notation,
$X_H|+\rangle = \big(\begin{smallmatrix}
1 & 0\\
0 & -1
\end{smallmatrix}\big) \frac{1}{\sqrt{2}}\big( \begin{smallmatrix}
1 \\
1
\end{smallmatrix}\big) = \frac{1}{\sqrt{2}}\big( \begin{smallmatrix}
1 \\
-1
\end{smallmatrix}\big) = |-\rangle
$
$X_H|-\rangle = \big(\begin{smallmatrix}
1 & 0\\
0 & -1
\end{smallmatrix}\big) \frac{1}{\sqrt{2}}\big( \begin{smallmatrix}
1 \\
-1
\end{smallmatrix}\big) = \frac{1}{\sqrt{2}}\big( \begin{smallmatrix}
1 \\
1
\end{smallmatrix}\big) = |+\rangle
$
Answer: The basis states should not flip, as these two basis states are the eigenstates of the $X$ gate. The $X$ gate flips the computational basis states, the $Z = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$ gate flips the Hadamard basis states.
Expressing everything in the computational basis
In the computational basis, we have $X = \begin{bmatrix}0 & 1 \\ 1 & 0\end{bmatrix}$. Thus the states (expressed in the computational basis) $|+\rangle = \begin{bmatrix}1 \\ 1\end{bmatrix}$ and $|-\rangle = \begin{bmatrix}1 \\ -1\end{bmatrix}$ are the $+1$ and $-1$ eigenstates respectively, as you can readily check.
Expressing everything in the Hadamard basis
If you express everything in the Hadamard basis, the $X$ gate becomes $X_{H} = \begin{bmatrix}1 & 0 \\ 0 & -1\end{bmatrix}$.
But, the $|+\rangle$ and $|-\rangle$ states should now also be expressed in this basis. That is, $|+\rangle_{H} = \begin{bmatrix}1 \\ 0\end{bmatrix}$ and $|-\rangle_{H} = \begin{bmatrix}0 \\ 1\end{bmatrix}$. It's now obvious that these states are indeed the $+1$ and $-1$ eigenstates of $X$, expressed in whatever basis.
To summarize your notation
So your Dirac notation is correct; in your matrix notation, you expressed the $X$ operator in the Hadamard basis but the states in the computational basis.
But wait, then what are these states if not the $|+\rangle$ and $|-\rangle$ states?
So what are the states $\begin{bmatrix}1 \\ 1\end{bmatrix}_{H}$ and $\begin{bmatrix}1 \\ -1\end{bmatrix}_{H}$, i.e. these states in the Hadamard basis? As you showed with your matrix notation, they are those states that are flipped under operation of the $X$ gate - they are the computational basis states/eigenstates of the $Z$ operator!
Of course, you can write this out mathematically as well:
$$
\begin{bmatrix}1 \\ 1\end{bmatrix}_{H} = \begin{bmatrix}1 \\ 0\end{bmatrix}_{H} + \begin{bmatrix}0 \\ 1\end{bmatrix}_{H} = \begin{bmatrix}1 \\ 1\end{bmatrix} + \begin{bmatrix}1 \\ -1\end{bmatrix} = \begin{bmatrix}1 \\ 0\end{bmatrix}
$$
As you can see, I've been very sloppy with the normalization factor - the above equation is a factor of $2$ off. | {
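A quick numerical check of the two pictures (a plain-Python sketch, not part of the original answer): conjugating $X$ by the Hadamard change of basis gives the diagonal matrix $\operatorname{diag}(1,-1)$, confirming that $|+\rangle$ and $|-\rangle$ are the $\pm 1$ eigenstates of $X$.

```python
import math

s = 1 / math.sqrt(2)

def matmul(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

X = [[0, 1], [1, 0]]
H = [[s, s], [s, -s]]  # Hadamard matrix; it is its own inverse

# X expressed in the Hadamard basis: H X H = diag(1, -1)
X_H = matmul(H, matmul(X, H))
```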
"domain": "quantumcomputing.stackexchange",
"id": 2254,
"tags": "quantum-gate, textbook-and-exercises, matrix-representation, hadamard, linear-algebra"
} |
ROS Answers SE migration: Rviz crashes | Question:
I changed the transport hint in the Image tab in rviz from 'raw' to 'tf' and it crashed. Now I am facing this error while opening it. Re-installation and recompilation did not work. How do I get past it? Thanks
[ERROR] [1338181335.401547975]: Caught exception while loading: Unable to load plugin for transport 'tf', error string:
According to the loaded plugin descriptions the class image_transport/tf_sub with base class type image_transport::SubscriberPlugin does not exist. Declared types are image_transport/compressed_sub image_transport/raw_sub image_transport/theora_sub
Originally posted by Reza Ch on ROS Answers with karma: 22 on 2012-05-27
Post score: 0
Answer:
rviz saves its config in the '.rviz' subfolder of your home folder. The easiest option is to just delete the entire '.rviz' folder (it will be re-created on the next rviz startup). You could also look into the config files and try to edit them to fix the error.
Originally posted by Stefan Kohlbrecher with karma: 24361 on 2012-05-28
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 9562,
"tags": "rviz"
} |
Teager-Kaiser Operator vs. Hilbert Transform | Question: Since a couple of months I started working on the extraction (estimation) of signal frequency and amplitude components by means of two different time-frequency approaches, namely the Hilbert transform and the Teager-Kaiser energy operator.
I tested both methods on standard signals, such as chirps, sines, and cosines. This preliminary analysis seemed to show that Teager-Kaiser has better resolution in both frequency and amplitude estimation.
Afterwards I applied the two methods to an acceleration signal derived from a tool simulating the dynamic behavior of a given system. Surprisingly enough, the Hilbert transform provided more reliable results in terms of frequency estimation. The Teager-Kaiser operator shows some high-frequency content in its estimates, which is hard to explain.
I developed the two techniques in Simulink, as visible here:
The results on a synthetic acceleration signal are here:
What did I miss in the design of the Teager-Kaiser operator? Is it possible to avoid those "false" frequency estimates (those with the arrows)?
Answer: This paper may be of interest:
David Vakman, "On the Analytic Signal, the Teager-Kaiser Energy Algorithm, and Other Methods for Defining Amplitude and Frequency." IEEE Trans. Signal Processing. (1996)
Summarising from the paper:
$$\Psi(u) = a^2w^2 = [u'(t)]^2 - u(t)u''(t)$$
$$\Psi(u') = a^2w^4 = [u''(t)]^2 - u'(t)u'''(t)$$
where $u(t)$ is the signal, and $a$ and $w$ are the amplitude and frequency estimates to solve for.
$$a(t) = \frac{\Psi(u)}{\sqrt{\Psi(u')}}$$
$$w(t) = \sqrt{\frac{\Psi(u')}{\Psi(u)}}$$
But this means that if at some point in the signal $u'(t)=1$ and $u''(t)=u'''(t)=0$, for example, then $\Psi(u) = 1$ and $\Psi(u') = 0$ and thus $a(t) = \infty$ and $w(t) = 0$.
The paper thus explains this causes spikes in amplitude and zero frequencies with the Teager-Kaiser algorithm.
For your signal, the reverse must be true. If $\Psi(u) = 0$ and $\Psi(u') = 1$ then you would get spikes in frequency when the amplitude is zero. This would occur when $[u'(t)]^2 = u(t)u''(t)$.
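For reference, the discrete form of the operator, $\Psi[x_n]=x_n^2-x_{n-1}x_{n+1}$, evaluates to $A^2\sin^2(\Omega)$ at every sample of a pure sinusoid of amplitude $A$ and normalized frequency $\Omega$. A quick sketch (an illustrative check, not the Simulink model from the question):

```python
import math

def teager_kaiser(x, n):
    # Discrete Teager-Kaiser energy operator at sample n
    return x[n] ** 2 - x[n - 1] * x[n + 1]

A, omega = 2.0, 0.3  # amplitude and normalized frequency (rad/sample)
x = [A * math.cos(omega * n) for n in range(100)]

psi = teager_kaiser(x, 50)
# For a pure sinusoid this equals A^2 * sin^2(omega) at every sample
expected = A ** 2 * math.sin(omega) ** 2
```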
The Hilbert transform uses the entire signal to compute. It is implemented by doubling the positive frequencies and setting the negative frequencies to zero. Therefore the energy of the signal does not change (Parseval's theorem). Thus, intuitively, there can be no infinite spikes, as this would require infinite energy. | {
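That construction can be sketched with a plain DFT (an illustrative implementation, not the one used in the question's Simulink model): zero the negative-frequency bins, double the positive ones, transform back, and the magnitude of the result is the instantaneous amplitude.

```python
import cmath
import math

def analytic_signal(x):
    # Analytic signal via the DFT: double positive frequencies, zero negatives
    N = len(x)
    X = [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
         for k in range(N)]
    for k in range(N):
        if 0 < k < N // 2:
            X[k] *= 2          # positive frequencies
        elif k > N // 2:
            X[k] = 0           # negative frequencies
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

# A cosine with an integer number of cycles in the window has envelope 1
x = [math.cos(2 * math.pi * 5 * n / 64) for n in range(64)]
env = [abs(z) for z in analytic_signal(x)]
```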
"domain": "dsp.stackexchange",
"id": 6183,
"tags": "signal-analysis, time-frequency, analytic-signal"
} |
How to calculate magnitude of a star in a triple star system? | Question: I want to find a star's magnitude in a triple star. I already knew the the magnitude of the triple star and the other two star, but I don't quit know how to solve it. Is there a way to find it?
Answer: Basically you need to convert between luminosities (which you can add) and magnitudes using
$$M-M_\odot=-2.5\log_{10}(L/L_\odot)$$
Let's call the total luminosity $L_0$ and magnitude $M_0$ and the individual luminosities and magnitudes $L_1$, $L_2$ and $L_3$ and $M_1$, $M_2$ and $M_3$.
Then, you have the total luminosity of the system, directly
$$L_0/L_\odot=10^{-0.4(M_0-M_\odot)}$$
and as the sum of the components
$$L_0/L_\odot=(L_1+L_2+L_3)/L_\odot=10^{-0.4(M_1-M_\odot)}+10^{-0.4(M_2-M_\odot)}+10^{-0.4(M_3-M_\odot)}$$
Solving these equations for $M_3$ gives
$$M_3-M_\odot=-2.5\log_{10}\left(10^{-0.4(M_0-M_\odot)}-10^{-0.4(M_1-M_\odot)}-10^{-0.4(M_2-M_\odot)}\right)$$
I'm assuming you have absolute magnitudes, but you can rewrite the formulae in terms of apparent magnitudes using
$$M=m+5(1-\log_{10}d)$$
but I think the result then also depends on the distance. | {
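Numerically, the recipe is: convert the total and the two known magnitudes to luminosities, subtract, and convert back. A sketch with made-up absolute magnitudes (the values below are hypothetical, only to illustrate the formula):

```python
import math

M_SUN = 4.83  # absolute magnitude of the Sun (visual band)

def lum(M):
    # Luminosity in solar units from absolute magnitude
    return 10 ** (-0.4 * (M - M_SUN))

def mag(L):
    # Absolute magnitude from luminosity in solar units
    return M_SUN - 2.5 * math.log10(L)

# Hypothetical example: total magnitude M0 and two known components M1, M2
M0, M1, M2 = 3.0, 4.0, 5.0
L3 = lum(M0) - lum(M1) - lum(M2)
M3 = mag(L3)  # magnitude of the third star
```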
"domain": "astronomy.stackexchange",
"id": 861,
"tags": "star, apparent-magnitude"
} |
Finding the units of a variable in the wave function | Question: I have the following figure which shows the wave function of an electron. The wave function is not realistic due to the discontinuities in slope, but consider its to approximate a possible smooth wave function.
I am wondering what the units of $c$ are in this question.
My attempt one:
Since:
The physical interpretation of the wavefunction is that $|\psi(\vec r)|^2dV$ gives the probability of finding the electron in a region of volume $dV$ around the position $\vec r$. Probability is a dimensionless quantity. Hence $|\psi(\vec r)|^2$ must have dimension of inverse volume and $\psi$ has dimension $L^{-3/2}$.
So in this case (one dimension, where $|\psi(x)|^2\,dx$ must be dimensionless) $\psi$ has units $L^{-1/2}$; this means $c$ must have the same units.
However, when I use the normalization condition to find the value of $c$, I get an equation which, when solved, gives me $c=\sqrt{\frac{2}{5}}$.
However, using the second last equation: $3c^2-\frac{c^2}{2}=\frac{5c^2}{2}=1$.
Since 1 is unitless, the LHS should also be unitless; however, since $c$ has the units $L^{-1/2}$, this would give the LHS the units $1/L$.
Answer: You forgot to include the units in your limits of integration,
$$1=\int_{-\infty}^{+\infty}dx\,\left|\psi(x)\right|^{2}=
\int_{-2\,{\rm nm}}^{-1\,{\rm nm}}dx\,\left|\psi(x)\right|^{2}+
\int_{-1\,{\rm nm}}^{1\,{\rm nm}}dx\,\left|\psi(x)\right|^{2}+
\int_{1\,{\rm nm}}^{2\,{\rm nm}}dx\,\left|\psi(x)\right|^{2},$$
which gives, with your wave function,
$$1=\frac{5}{2}c^{2}\,({\rm nm}).$$
Solving this for $c$ then naturally gives a $c$ with units of $({\rm nm})^{-1/2}$. | {
"domain": "physics.stackexchange",
"id": 80781,
"tags": "quantum-mechanics, wavefunction"
} |
If you measure one "share" of an entangled pair, will the resulting pair be a product state? | Question: If you do a partial measurement on one "share" of en entangled pair, will the resulting pair no longer be entangled, i.e will be a product state?
Answer: It depends on what you mean precisely with "measuring", but generally speaking, no.
If you are talking about a projective measurement, and you are asking about the state of the rest of the system conditionally to the measurement result, then sure the residual state is separated from the measured one.
More precisely, this interpretation of the question amounts to asking for the state obtained after performing a partial projection. If the initial state is a bipartite $|\psi\rangle\equiv\psi_{ij}|ij\rangle$, then measuring the first system in the computational basis you get the state
$$|\psi_i\rangle=\frac{1}{\|(|i\rangle\!\langle i|\otimes I)|\psi\rangle\|}(|i\rangle\!\langle i|\otimes I)|\psi\rangle$$
for some $i$. This obviously contains no correlation between first and second system.
As another example, if you have the three-qubit state $|0\rangle(|00\rangle+|11\rangle)+|1\rangle(|00\rangle-|11\rangle)$, then when the first qubit is found to be $|0\rangle$, the state of the rest of the system is described by $|00\rangle+|11\rangle$.
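The conditional-state computation can be sketched numerically (a toy example for a Bell pair, not from the original answer; the dictionary of amplitudes is just illustrative bookkeeping):

```python
import math

s = 1 / math.sqrt(2)
# Bell state (|00> + |11>)/sqrt(2), amplitudes keyed by (qubit0, qubit1)
psi = {(0, 0): s, (0, 1): 0.0, (1, 0): 0.0, (1, 1): s}

# Measure qubit 0 and condition on outcome 0:
# keep the matching amplitudes and renormalize
post = {k: v for k, v in psi.items() if k[0] == 0}
norm = math.sqrt(sum(v * v for v in post.values()))
post = {k: v / norm for k, v in post.items()}
# post is |0>|0>: a product state with no remaining correlations
```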
On the other hand, if you mean to measure the first system and neglect the measurement outcome (which is equivalent to just ignoring the first system, regardless of whether you measure it), then the residual state is to be described via the partial trace. This residual state can be pretty much anything. For example, if the initial state is $|0\rangle\!\langle0|\otimes \frac{I}{2}$, then partial tracing the first system gives the maximally mixed state $\frac{I}{2}$. If the initial state is $|0\rangle\otimes(|00\rangle+|11\rangle)$, then partial tracing the first system gives the maximally entangled residual state. | {
"domain": "physics.stackexchange",
"id": 87020,
"tags": "quantum-mechanics, homework-and-exercises, quantum-information, quantum-entanglement, quantum-measurements"
} |
Roast my C# birthday code | Question: So this is a pretty simple code, I think it was?
It asks for the user's birth day and month, and gives it back with the day a discount reminder email will be sent (the day before their birthday)
Now I tried to handle as much as I could: possible wrong inputs, and the case where the user's birthday is the first of a month.
Even though I'm still pretty new to coding, I want you to criticize my code as much as you can; I would like to improve as much as I can.
using System;
using System.Text.RegularExpressions;
using System.Linq;

namespace Exercice14
{
    class Program
    {
        static void Main(string[] _)
        {
            // declaring some variables
            int birthDay;
            int reminderDay;
            string suffix = "th";
            string reminderSuffix = "th";
            string birthDayT;
            string birthMonth;
            string reminderMonth;
            string[] months = {"January", "February", "March", "April", "May", "June", "July", "August", "September",
                               "October", "November", "December" };
            bool exceptionFirst = false;
            // prompts for birth month and capitalize first letter
            Console.WriteLine("Hello User!");
            Console.Write("Please enter your birth month in letters: ");
            birthMonth = Console.ReadLine();
            birthMonth = char.ToUpper(birthMonth[0]) + birthMonth.Substring(1);
            // check if birth month contains only letters
            while (Regex.IsMatch(birthMonth, @"^[a-zA-Z]+$") == false)
            {
                Console.WriteLine("Birth month should only contain letters!");
                Console.Write("Please enter your birth month in letters: ");
                birthMonth = Console.ReadLine();
                birthMonth = char.ToUpper(birthMonth[0]) + birthMonth.Substring(1);
            }
            // check if month is right
            while (months.Contains(birthMonth) == false)
            {
                Console.WriteLine("Invalid month?! Please enter a valid english month");
                Console.Write("Please enter your birth month: ");
                birthMonth = Console.ReadLine();
                birthMonth = char.ToUpper(birthMonth[0]) + birthMonth.Substring(1);
            }
            // prompts for birth day
            Console.Write("Please enter your birth day in numbers: ");
            birthDayT = Console.ReadLine();
            // check for valid day
            while (int.TryParse(birthDayT, out int _) == false)
            {
                Console.WriteLine("Invalid argument! Please enter day in numerals");
                Console.Write("Please enter your birth day in numbers: ");
                birthDayT = Console.ReadLine();
            }
            // check for valid day number
            while (int.Parse(birthDayT) < 1 || int.Parse(birthDayT) > 31)
            {
                Console.WriteLine("Invalid date! Please enter a day between 1 and 31");
                Console.Write("Please enter birth day in numbers: ");
                birthDayT = Console.ReadLine();
            }
            // assign birth day to variable once tested
            birthDay = int.Parse(birthDayT);
            // set reminder day and month
            reminderDay = birthDay - 1;
            reminderMonth = birthMonth;
            // check which suffix to use for days AND calculate reminder day and month if exception
            if (birthDay == 1) //exception
            {
                exceptionFirst = true;
                suffix = "st";
                reminderMonth = months[Array.IndexOf(months, birthMonth) - 1];
            }
            if (birthDay == 2)
            {
                suffix = "nd";
                reminderSuffix = "st";
                reminderDay = 1;
            }
            if (birthDay == 3)
            {
                suffix = "th";
                reminderSuffix = "nd";
            }
            if (birthDay > 3)
            {
                suffix = "th";
                reminderSuffix = "th";
            }
            // print values
            Console.WriteLine();
            Console.WriteLine("Yer birthday is on the " + birthDay + suffix + " of " + birthMonth);
            if (exceptionFirst == true)
            {
                Console.WriteLine("A reminder email for your birthday discount " +
                                  "\nwill be sent on the last day of " + reminderMonth);
            }
            else
            {
                Console.WriteLine("A reminder email for your birthday discount " +
                                  "\nwill be sent on the " + reminderDay + reminderSuffix + " of " + reminderMonth);
            }
        }
    }
}
Answer: Notes :
Months array
Regex
multiple loops and redundant validations.
Direct parsing without validations.
As Roland mentioned, you don't need to redefine what already exists, nor handle the conversion of dates manually. You need to focus on using what .NET already has; if you don't know, google before you start coding. This way, you will avoid making major changes.
You take two inputs from the user, so you only need two validation processes, while in your code you're doubling that, which is unnecessary if you implement it correctly.
Let's start with the month validation. The user can input a short month name, a full name, or even a number. As you're dealing with strings, you need to account for the possible inputs; even if you try to restrict the input, there is still a chance of an invalid input, which is the unknown case. So you will focus on covering the known cases, which you already mostly did.
The repeated issue that you are unaware of is that you're assigning and processing then validating the user input, so you need to reverse that. First validate, then process based on that validation.
Here is an example of your process of validation :
// prompts for birth month and capitalize first letter
Console.WriteLine("Hello User!");
Console.Write("Please enter your birth month in letters: ");
birthMonth = Console.ReadLine();
birthMonth = char.ToUpper(birthMonth[0]) + birthMonth.Substring(1);
// check if birth month contains only letters
while (Regex.IsMatch(birthMonth, @"^[a-zA-Z]+$") == false)
{
    Console.WriteLine("Birth month should only contain letters!");
    Console.Write("Please enter your birth month in letters: ");
    birthMonth = Console.ReadLine();
    birthMonth = char.ToUpper(birthMonth[0]) + birthMonth.Substring(1);
}
First, you ask the user for input;
then you directly take the first char, assuming it's a valid string;
then you validate it with a regex;
if invalid, you repeat steps 1 to 3 until it's a valid letter.
What happens if birthMonth is empty or null? It'll throw an IndexOutOfRangeException because of birthMonth[0], and if birthMonth is null then it'll also throw a null reference exception. These are basic validations which need to happen before processing.
You've applied the same process to the rest. You need to validate the string first using string.IsNullOrEmpty or string.IsNullOrWhiteSpace, or if you prefer to do it manually you can do this:
if(birthMonth != null && birthMonth.Length > 0)
for the month part, you don't need the array, you need to use DateTime instead. You can use something like this :
// Handle the month conversion.
// Acceptable inputs: short month name, full month name, month number.
private static bool TryGetMonth(string month, out DateTime date)
{
    date = new DateTime();
    if (string.IsNullOrEmpty(month))
    {
        return false;
    }
    // default datetime format
    var format = "dd MMMM yyyy HH:mm:ss tt";
    // if the user enters a month number, adjust the format
    if (int.TryParse(month, out int monthInt))
    {
        format = "dd M yyyy HH:mm:ss tt";
    }
    else if (month.Length <= 3 && !month.Equals("May", StringComparison.OrdinalIgnoreCase))
    {
        format = "dd MMM yyyy HH:mm:ss tt";
    }
    return DateTime.TryParseExact($"01 {month} 2020 00:00:00 AM", format, CultureInfo.InvariantCulture, DateTimeStyles.None, out date);
}
The DateTime.TryParseExact will handle the conversion, and will return a valid date if the input meets the parsing requirements. Then, from the DateTime, you have access to its values like the month name, number, etc.
Also, when parsing integers, use int.TryParse to check the validity of the integer first, then extract the parsed integer. This avoids throwing undesired exceptions.
Here is an untested revision of your code using the TryGetMonth method above along with using DateTime to demonstrate my points:
// prompts for birth month and capitalize first letter
Console.WriteLine("Hello User!");
Console.Write("Please enter your birth month: ");
DateTime monthDate;
while (TryGetMonth(Console.ReadLine(), out monthDate) == false)
{
    Console.WriteLine("Invalid month");
    Console.Write("Please enter your birth month name (short or full name) or number: ");
}
// prompts for birth day
Console.Write("Please enter your birth day in numbers: ");
int birthDay;
while (int.TryParse(Console.ReadLine(), out birthDay) == false || birthDay < 1 || birthDay > 31)
{
    Console.WriteLine("Invalid argument! Please enter a day between 1-31 in numerals");
    Console.Write("Please enter your birth day in numbers: ");
}
DateTime birthDate = new DateTime(DateTime.Now.Year, monthDate.Month, birthDay);
// AddDays(-1) already rolls back to the last day of the previous month when the birthday is on the 1st
DateTime reminderDate = birthDate.AddDays(-1);
string suffix = "th";
string reminderSuffix = "th";
string msg;
switch (birthDate.Day)
{
    case 1:
        suffix = "st";
        break;
    case 2:
        suffix = "nd";
        reminderSuffix = "st";
        break;
    case 3:
        suffix = "rd";
        reminderSuffix = "nd";
        break;
    case 4:
        suffix = "th";
        reminderSuffix = "rd";
        break;
    default:
        suffix = "th";
        reminderSuffix = "th";
        break;
}
if (birthDate.Day == 1)
{
    msg = $"A reminder email for your birthday discount \nwill be sent on the last day of {reminderDate.ToString("MMMM")}";
}
else
{
    msg = $"A reminder email for your birthday discount \nwill be sent on the {reminderDate.Day}{reminderSuffix} of {reminderDate.ToString("MMMM")}";
}
// print values
Console.WriteLine();
Console.WriteLine($"Your birthday is on the {birthDate.Day}{suffix} of {birthDate.ToString("MMMM")}");
Console.WriteLine(msg);
// reminderDate.ToString("MMMM") returns the month name:
// "MMMM" for the full name and "MMM" for the short name (e.g. June and Jun)
"domain": "codereview.stackexchange",
"id": 38326,
"tags": "c#, beginner, strings, array"
} |
Is there radioactivity at absolute zero? | Question: Theoretically, will a radioactive material still be radioactive at absolute zero? What would happen at the lowest realistic temperatures we have ever achieved?
Will radioactivity stop at absolute zero, since it is a nuclear phenomenon and nuclear motion slows down as we approach absolute zero (and theoretically stopping entirely at absolute zero)?
Answer: Theoretically, a radioactive material will still be radioactive at absolute zero, and its rate of decay will be $100.00\%$ of that at room temperature. Practically, at the lowest achievable temperatures we observe the same thing: radioactivity is still there, not affected the slightest bit.
Nuclear motion does not slow down as we approach absolute zero, because there is no such thing as nuclear motion in the first place. In a way, all nuclear motion has stopped already at room temperature. Each nucleus just sits there in the ground state and does not know what happens in the chemical world above. From its point of view, the room temperature is the same as absolute zero. To reach its first excited state, it would need energies a great deal greater than that.
Say, you heat your radioactive sample until it melts. Then you heat it a few more thousand degrees, until all materials, including tungsten, melt and then evaporate. Then you heat it some more, until even the strongest chemical bonds are broken and there are no more molecules, just atoms. Then you heat it about ten times more, until atoms lose much of their valence electrons and you have a highly ionized plasma. Then you heat it about a hundred times more, until all atoms lose all their electrons and you have something like a stellar plasma. Then you heat it some more, just in case. Then, and not before, your nuclear processes will show the first feeble indication of thermal dependence of any sort.
Short of that, you could just as well have asked if the radioactivity in a sample stops when you paint it blue. | {
"domain": "chemistry.stackexchange",
"id": 17440,
"tags": "atoms, radioactivity"
} |
Why don't we consider quantum effects when thinking about reactions and their mechanisms? | Question: When we're reasoning about chemical reactions and their mechanisms (in organic chemistry in particular), the way we model the behaviour of molecules is almost in a “common sense” way. In terms of tracing their trajectories, steric effects, positive and negative charges behaving how point charges would. The treatment to me feels very different to how we think about electrons, for example.
Is this because reactions occur in the bulk, and even though we are thinking about what is happening in terms of the behaviour of individual molecules, that molecule actually represents some sort of average behaviour of the bulk, and so quantum effects/weirdness are being averaged out and we can think of them acting in a more “common sense” way?
Of course, I understand that the actual bonding and our understanding of chemical structures come from QM-based models.
Answer: The answer, in short, is the Born–Oppenheimer approximation (I'm a physics grad student, by the way), together with the fact that electrons are much lighter, or "more quantum", than atomic nuclei.
In more detail:
Nuclei are much heavier than electrons, and at the temperatures and pressures typical in chemistry, it turns out one can use the Born–Oppenheimer approximation to treat the electrons quantum mechanically (using orbitals, for instance) while treating the nuclei classically, and still get good enough results (for a reaction barrier, for instance). Under the Born–Oppenheimer approximation, the energy of a system for a specific nuclear configuration is the electronic energy and electron–nuclei interaction energy calculated quantum mechanically (assuming classical nuclear positions), plus the nuclear energy calculated classically (again assuming classical nuclear positions).
There are a few cases where the Born–Oppenheimer approximation fails in chemistry. For instance, proton transfer (movement of hydrogen) often needs a quantum mechanical treatment (such as a tunneling rate) if one wants good results (for a reaction barrier or reaction rate, for instance), since hydrogen is the lightest nucleus. Chemistry in astronomy, and in laboratories at extreme pressures and temperatures, might also require quantum mechanical treatment of the nuclei (for instance, the superfluidity of helium-3 requires it). The zero-point energy of nuclear vibrations also requires a quantum mechanical treatment of the nuclei. | {
"domain": "chemistry.stackexchange",
"id": 17141,
"tags": "quantum-chemistry"
} |
Algorithm to decide whether two vertices are ancestors/descendants of each other | Question: Is there an algorithm that performs the following:
Input: A directed graph and two vertices within that graph
Output: Whether one of the two vertices is the ancestor of the other
For example, in this graph:
A is an ancestor of D. B is neither the ancestor nor the descendant of C.
The best I can think of doing is performing a DFS/BFS from each of the two input vertices and seeing if either search includes the other vertex. This takes O(|V| + |E|) time. Is there a well-known, faster algorithm?
Answer: DFS only requires $O(|V|+|E|)$ time, not $O(|V| \cdot |E|)$ time. (Same for BFS.) Therefore, DFS is an efficient solution for this problem. This is about as efficient as you can hope for, as it takes that much time just to read the entire input -- you can't hope for anything faster.
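That linear-time check can be sketched as follows (a minimal illustration, not from the original post; the adjacency-dict representation and the function names are my own):

```python
from collections import deque

def reaches(graph, src, dst):
    """BFS from src; True iff dst is reachable, i.e. src is an ancestor of dst."""
    seen = {src}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

def related(graph, u, v):
    """True if one of u, v is an ancestor of the other."""
    return reaches(graph, u, v) or reaches(graph, v, u)

# Example matching the question: A -> B -> D, A -> C -> D
g = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D']}
print(related(g, 'A', 'D'))  # True
print(related(g, 'B', 'C'))  # False
```

Each call visits every vertex and edge at most once, so the whole test is $O(|V|+|E|)$.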
The problem becomes more interesting if we're provided the graph $G$ and allowed to do some precomputation, and then want to answer many queries efficiently. This is considerably more challenging, but in some cases, it is possible to do better than running a DFS from scratch every time you receive a new query.
For instance, if the graph is a tree, there is an efficient solution: do a DFS once, and store the pre and post numbers at every node; then each ancestor query can be answered by testing for interval containment.
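For the tree case, the pre/post-number trick might look like this (a hedged sketch; the numbering scheme is the standard one, but the helper names and example tree are mine):

```python
def number_tree(tree, root):
    """Assign (pre, post) DFS numbers to every node of a rooted tree."""
    clock = 0
    pre, post = {}, {}
    def dfs(node):
        nonlocal clock
        pre[node] = clock; clock += 1
        for child in tree.get(node, ()):
            dfs(child)
        post[node] = clock; clock += 1
    dfs(root)
    return pre, post

def is_ancestor(pre, post, u, v):
    """u is an ancestor of v iff u's DFS interval contains v's."""
    return pre[u] <= pre[v] and post[v] <= post[u]

tree = {'A': ['B', 'C'], 'B': ['D']}
pre, post = number_tree(tree, 'A')
print(is_ancestor(pre, post, 'A', 'D'))  # True
print(is_ancestor(pre, post, 'C', 'D'))  # False
```

After the one-time $O(|V|)$ numbering pass, every ancestor query is answered in $O(1)$.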
In a more general graph, this is known as a reachability query, and there are algorithms in the literature for this problem. See, e.g., https://cs.stackexchange.com/a/41432/755 and https://cstheory.stackexchange.com/q/25298/5038 and https://cstheory.stackexchange.com/q/21503/5038. Without loss of generality, you can assume the graph is a dag (if not, first compute all strongly connected components in linear time, and label each vertex with its scc; then you only need to consider the dag of scc's). | {
"domain": "cs.stackexchange",
"id": 5454,
"tags": "graphs"
} |
where is the direction for the orientation of component? | Question:
I made a simple robot and read the orientation of the robot's head, but I do not know which direction corresponds to orientation = 0. How can I check which direction orientation = 0 points? Is the orientation based on the axes, or something else? If it is based on the axes, how are the axes laid out (or how can I see the robot's orientation on the axes)? Take a human as a simple example: if you know your orientation = 0, which direction is that -- forward, left, right, or some other angle?
In my understanding, drawing a line at orientation = 0 would be a good way to settle the question, but I really have no idea how to draw a line in Gazebo. Looking forward to the answers.
I am in a hurry to know the answer for my experiment. Thanks!
Originally posted by langong on Gazebo Answers with karma: 13 on 2018-02-20
Post score: 0
Original comments
Comment by langong on 2018-02-20:
What I mean is: which direction is orientation = 0? Is it in the direction of the +x axis, -x axis, +y axis, or -y axis?
Comment by langong on 2018-02-20:
Does anybody have an answer for this question? In my understanding (I have run a small experiment to check this, but I think it is not enough to settle the question), orientation = 0 is the direction of the +x axis, but I am not sure if that is right. Can you give me some ideas?
Comment by chapulina on 2018-02-20:
Please don't post an answer with more questions, either update the existing question or post a new one, if the question is different.
Answer:
I suggest you right-click your model and choose View->Transparent and then View->Link Frame. You'll then see how your component is oriented in the world.
For example, the box below has a zero orientation with respect to the world in all 3 axes (X/Y/Z):
Good luck with your experiment.
Originally posted by chapulina with karma: 7504 on 2018-02-20
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by langong on 2018-02-21:
Thanks! I have checked it like you did, but I am still not sure where the zero orientation is. The orientation is a rotation around the z axis, but where is the start (zero orientation)? In my understanding, the start (zero orientation) is from the positive x axis, but I cannot be sure that is right. Do you have any stronger evidence? The answer above is helpful for understanding my question -- could you take a look at it? Thanks again!
Comment by chapulina on 2018-02-21:
The orientation is in 3 dimensions. A zero orientation means that all of the object's axes (X, Y and Z) are aligned with the world's X, Y and Z axes.
Comment by langong on 2018-02-22:
Yes, you are right; I am sorry I did not ask a specific question. For my question, zero orientation means zero yaw. Where does yaw start (zero yaw)? In my understanding, zero yaw starts from the positive x axis -- is this right, or ...?
Comment by chapulina on 2018-02-22:
Got it. Sure, yaw is the angle from the world's +X to your model's +X, counterclockwise if looked from above.
Comment by langong on 2018-02-24:
perfect, Thanks! how can I check this? do we have some tutorial or document to see? or anyway else?
Comment by chapulina on 2018-02-24:
Gazebo uses right-handed coordinates: https://en.wikipedia.org/wiki/Right-hand_rule#Coordinates
Comment by langong on 2018-02-25:
thanks! and can you help to answer this question? http://answers.gazebosim.org/question/18360/how-can-i-reset-the-robot-to-the-same-situations-in-the-beginning-of-each-running/
Comment by chapulina on 2018-02-25:
It looks like I've answered it a few weeks ago ;) Leave a comment there if you need more help
Comment by dagiopia on 2019-02-28:
@chapulina is there a way to obtain a vector indicating the relative coordinate orientation of the model with respect to the world in the gazebo C++ api or does one have to do a transformation using the pitch, roll and yaw params from WorldPose in order to get that vector?
Comment by chapulina on 2019-02-28:
I'm not sure I understand the question, sounds like you're interested in the model's XYZ position in the world? I think it's best you start a new question for this.
Comment by dagiopia on 2019-03-01:
ok thanks! here's my question http://answers.gazebosim.org/question/22051/link-local-coordinate-frame/ | {
"domain": "robotics.stackexchange",
"id": 4241,
"tags": "gazebo"
} |
materials that can render Bluetooth useless | Question: I'm currently preparing some experiments about using Bluetooth in real life. To make the data more varied, I need some materials that really interfere with Bluetooth functionality. I've tried simple tests with concrete walls (~7 in) and a wooden door (~3.5 cm), and they don't seem to cause any significant change.
After some research, I've found that Bluetooth uses radio waves (https://en.wikipedia.org/wiki/Bluetooth), and after many search results I only found this as the nearest possible explanation (How can I create hindrances to radio waves?)
but it only discussed conductive materials
Is there any material (other than conductive ones; preferably common types that we usually find around) that can interrupt Bluetooth connectivity?
Answer: Such a material (apart from conductive ones) typically doesn't exist. The reason: when a material interrupts Bluetooth, it's either reflecting or absorbing the radio wave. Reflective materials are also absorptive materials, so this comes down to finding a material that absorbs radio waves.
For metals and other conductive materials, electrons are very mobile and act as the absorbers. For non-conductive materials, absorption tends to be a quantum phenomenon: the energy of a single photon must match the transition energy from a quantum ground state to an excited state. These energies tend to be much higher than the energy of a radio-frequency photon, meaning you won't find any material that absorbs radio waves that isn't also conductive.
Your best bet is either a metal or a thick, partially conductive ceramic to block the Bluetooth signal. Concrete is one example, but you'd need a very thick piece. You'd also have to ensure that the concrete is large enough that the wave won't diffract around it. | {
"domain": "physics.stackexchange",
"id": 32920,
"tags": "material-science, radio-frequency"
} |
Removing trailing whitespace from input lines | Question: I'm going through the K&R C book (coming from Python) and am on exercise 1-18, which states:
Write a program to remove trailing blanks and tabs from each line of input, and to delete entirely blank lines.
I wasn't able to find another solution online that looks quite like mine so I am questioning whether it is truly correct, even though it appears to work on anything I throw at it. I let an ending \n be whitespace, removed it and added a general newline to each printed output, but I'm not sure if that is cheating.
If there are any bad practices (or bugs in the code) for standard C please let me know.
#include <stdio.h>
#define MAXLINE 1000
int getli(char line[], int maxline);
main() {
int len;
int i;
char line[MAXLINE];
while ((len = getli(line, MAXLINE)) > 0) {
i = len-1;
/* replace any trailing whitespace character with null character */
while (line[i] == '\n' || line[i] == '\t' || line[i] == ' ') {
line[i] = '\0';
i--;
}
if (line[0] == '\0') {
printf("Blank line\n");
}
else printf("%s\n", line);
}
return 0;
}
/* function to capture input line as char array */
int getli(char s[], int lim) {
int i, c;
for (i=0; i<lim-1 && (c=getchar()) != EOF && c != '\n'; i++) {
s[i] = c;
}
if (c == '\n') {
s[i] = c;
i++;
}
s[i] = '\0';
return i;
}
Answer: What you've got is very "old-fashioned-looking" C code, which is only natural for someone working through K&R. :) For example, declaring main() without a return type (that is, with the implicit int return type) is frowned upon these days.
Declaring all your variables — uninitialized! — at the top of your function is also frowned upon, in every language, because uninitialized variables in general are bad. Because uninitialized variables mean they have to be initialized later, which means assignment, which means mutation, which means more stuff for me to keep track of in my head. It's better to delay defining the variable at all until you know what's going to be put in it.
For example:
int len;
int i;
char line[MAXLINE];
while ((len = getli(line, MAXLINE)) > 0) {
i = len-1;
could definitely be written as
int len;
char line[MAXLINE];
while ((len = getli(line, MAXLINE)) > 0) {
int i = len-1;
And then personally I'd avoid side effects inside the condition of a while loop. It's very K&R, but it's frequently not very easy to read!
while (true) {
char line[MAXLINE];
int len = getli(line, MAXLINE);
if (len <= 0) break;
int i = len-1;
while (line[i] == '\n' || line[i] == '\t' || line[i] == ' ') {
line[i] = '\0';
i--;
}
Now we can flatten that whole business with i into a simple for loop:
while (true) {
char line[MAXLINE];
int len = getli(line, MAXLINE);
if (len <= 0) break;
for (int i = len-1; (line[i] == '\n' || line[i] == '\t' || line[i] == ' '); --i) {
line[i] = '\0';
}
At this point I wonder whether it is in your function's contract that it zeroes out every char in the buffer after the last nonspace character, or if you could avoid writing a lot of those zeroes. One way to do that would be to re-expand the loop:
int i = len-1;
while (line[i] == '\n' || line[i] == '\t' || line[i] == ' ') {
--i;
}
line[i+1] = '\0';
Another way would be to factor that code out into a helper function... or just use one of the many standard library functions for manipulating char buffers! Unfortunately, the function you're looking for in this case is spelled strrspn and is actually not standard, even though strspn is.
*strrspn(line, "\n\t ") = '\0';
(I also notice that Oracle's documentation for strrspn contradicts itself. I assume that on failure it returns a pointer to the end of the string, as shown in their example code, not a pointer to the beginning of the string, as written in their prose description.)
Anyway, writing strrspn from scratch would be a good K&R-style exercise for you!
(len = getli(line, MAXLINE)) looks like a stutter. I don't know why you wouldn't write out the whole word getline... unless you're avoiding the library function, in which case something like read_one_line or get_a_line would be reasonable names to choose.
Don't write MAXLINE on that line, either. Don't give yourself a chance to misspell it or cut-and-paste-error your way into a bug. If you're calling getli with the buffer line, then the second argument should always be the size of the buffer — that is, sizeof line. Sure, that happens to be MAXLINE, just like MAXLINE happens to be 1000; but you should avoid repeating the word MAXLINE for the same reason you should avoid repeating the word 1000. If you call getli(line, sizeof line), you can see right at the callsite — without inspecting any other code — that the call is correct. It is correct by design.
Your getli function has very convoluted control flow. You have a side-effect in the condition again; and then you test for '\n' both inside and outside the loop.
Here's an exercise: Rewrite the code using no loops except while (true). Then see how much clearer you can make the code while preserving that property.
Here's the first pass:
int getli(char s[], int lim) {
int i = 0;
int c;
while (true) {
if (i >= lim - 1) break;
c = getchar();
if (c == EOF) break;
if (c == '\n') break;
s[i] = c;
++i;
}
if (c == '\n') {
s[i] = c;
++i;
}
s[i] = '\0';
return i;
}
Do you spot an opportunity to combine codepaths now? Keep going; you'll find that the code ends up shorter than it started out! | {
"domain": "codereview.stackexchange",
"id": 33661,
"tags": "algorithm, c"
} |
HCN reactor design | Question: For my diploma project, I need to design a hydrogen cyanide (HCN) producing reactor.
I found that for the BMA process, the overall reaction is
$$\ce{CH4 + NH3 -> HCN + 3 H2 \quad{} \Delta{}H_r = \pu{251 kJ / mol} }$$
There is also a side reaction of ammonia decomposition.
The decomposition of ammonia is the rate-limiting step, so I should use it to find the reactor volume. Besides the Langmuir–Hinshelwood model, I also found a power-law rate expression for calculating the parameters manually. The expression is:
$$r = 2.27 \times 10^{23} \exp\left(-\frac{21000}{RT}\right) p_{\ce{NH3}}$$
Previously, I have used plug-flow design (PFR) equation which integrates dx/-r, where r is expressed in terms of x - concentration.
However, I do not know how to work with this equation or how to relate it to conversion, and I am not sure how to get the design parameter from it. My question is: how should this kind of power-law rate expression, with pressure instead of concentration, be used in design equations?
Any contribution is highly appreciated. Thanks in advance
Answer: I will lead you to the general equation. We can relate the partial pressure of a species with conversion by using a simple stoichiometric table.
The reaction is
$$ \ce{NH3(g) + CH4(g) -> HCN(g) + 3H2(g)} $$
$$ \ce{A(g) + B(g) -> C(g) + 3D(g)} \tag{R} $$
where we defined for convenience new letters for all the species. We concentrate on $\ce{A}$.
1. Mole balance
For a PFR the volume needed to achieve a conversion $X_\ce{A}$ is
\begin{equation}
V = \int_0^{X_\mathrm{A}} \frac{F_\mathrm{A0} \; \mathrm{d}X_\ce{A}}
{-r_\ce{A}(X_\ce{A})} \tag{1}
\end{equation}
where $F_\mathrm{A0}$ is the inlet molar flow rate of species $\ce{A}$ in $\pu{mol s^-1}$
2. Rate law
We have a power law in the form of
$$ -r_\ce{A} = kp_\ce{A} \tag{2} $$
3. Stoichiometry
We set up a stoichiometric table for reaction $\ce{R}$ in terms of the molar flows. For a general species $j$ in a reaction where $\ce{A}$ is the limiting reagent, the molar flow rate of species $j$ is
$$ F_{j} = F_\mathrm{A0} (\Theta_j + \nu_j X_\ce{A}) \tag{3} $$
where $\Theta_j = F_{j0}/F_\mathrm{A0}$ is the molar relation of species $j$ with respect to $\ce{A}$ at the entrance of the reactor, and $\nu_j$ is the stoichiometric coefficient of species $j$.
For the reaction to take place, we need the presence of $\ce{A}$ and $\ce{B}$. Considering that there are no products at the reactor inlet, application of Eq. (3) to reaction $\ce{R}$ gives
\begin{align}
F_\ce{A} &= F_\mathrm{A0}(1 - X_\ce{A}) \tag{4} \\
F_\ce{B} &= F_\mathrm{A0}(\Theta_\ce{B} - X_\ce{A}) \tag{5} \\
F_\ce{C} &= F_\mathrm{A0} X_\ce{A} \tag{6} \\
F_\ce{D} &= 3F_\mathrm{A0} X_\ce{A} \tag{7} \\
\end{align}
where the total molar flow rate is
\begin{align}
F_\ce{T} &= \sum_j F_j \tag{8} \\
F_\ce{T} &= F_\mathrm{A0}(1 - X_\ce{A}) + F_\mathrm{A0}(\Theta_\ce{B} - X_\ce{A}) +
F_\mathrm{A0} X_\ce{A} + 3F_\mathrm{A0} X_\ce{A} \\
F_\ce{T} &= F_\mathrm{A0}(1 - X_\ce{A} + \Theta_\ce{B} - X_\ce{A} +
X_\ce{A} + 3X_\ce{A}) \\
F_\ce{T} &= F_\mathrm{A0}(1 + \Theta_\ce{B} + 2X_\ce{A}) \tag{9} \\
\end{align}
Remembering that the partial pressure of species $j$ is given by
$$ p_j = y_j p = \frac{F_j}{F_\ce{T}}p \tag{10}$$
we combine Eqs. (4), (9), and (10) so that
\begin{align}
\require{cancel}
p_\ce{A} &= \frac{\cancel{F_\mathrm{A0}}(1 - X_\ce{A})}
{\cancel{F_\mathrm{A0}}(1 + \Theta_\ce{B} + 2X_\ce{A})}p \\
p_\ce{A} &= \left(\frac{1 - X_\ce{A}}{1 + \Theta_\ce{B} + 2X_\ce{A}}\right)p \tag{11}
\end{align}
4. Design equation
Combining Eqs. (1), (2), and (11) gives the final result
\begin{equation}
\boxed{V = \int_0^{X_\mathrm{A}} \frac{F_\mathrm{A0} \; \mathrm{d}X_\ce{A}}
{k\left(\dfrac{1 - X_\ce{A}}{1 + \Theta_\ce{B} + 2X_\ce{A}}\right)p}} \tag{12}
\end{equation}
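To turn Eq. (12) into an actual volume, the integral can be evaluated numerically. Below is a rough sketch using composite Simpson's rule; the values of $F_\mathrm{A0}$, $k$, $p$, and $\Theta_\ce{B}$ are placeholders of my own, not data for the BMA process:

```python
def integrand(X, F_A0=10.0, k=0.05, p=1.0, theta_B=1.0):
    """F_A0 / (-r_A), with p_A from Eq. (11); parameter values are illustrative."""
    p_A = (1 - X) / (1 + theta_B + 2 * X) * p
    return F_A0 / (k * p_A)

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

X_target = 0.8
V = simpson(integrand, 0.0, X_target)
print(f"Reactor volume for X_A = {X_target}: {V:.0f} (volume units)")
```

With real kinetic data you would substitute the Arrhenius expression for $k$ and the actual feed conditions; the structure of the calculation stays the same.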
References
An excellent explanation for stoichiometry for liquid-phase and gas-phase reactive systems can be read at Chapter 3 of:
Elements of Chemical Reaction Engineering, H. S. Fogler, 5th ed., Prentice Hall (2016). | {
"domain": "chemistry.stackexchange",
"id": 18002,
"tags": "reaction-mechanism, kinetics, chemical-engineering, rate-equation"
} |
Integer programming: enforce the constraint that a subgraph contains at most $k$ connected components? | Question: I'm considering integer programming on a variation of the Steiner Forest Problem:
Given a graph $G=(V,E)$, a cost function: $c:E \rightarrow R^{+}$, a terminal set $T \subseteq V$, and a positive integer $k$, find a subgraph containing at most $k$ connected components that includes all the terminals with minimum total edge cost.
The key point of constructing the integer program is enforcing the $k$ connected components constraint. The only way I came up with is using the generalized subtour elimination constraints over edge variables for each connected component. Are there any other ways of enforcing the $k$ connected components constraint? Any clue or suggestion would be appreciated.
Answer: Based on Komus's constraint, we add another constraint which ensures a Steiner tree on $G^{'}=(V, E^{'})$, where $E'=\{(i,j): i,j \in V\}$:
$$\sum_{e \in cut(U,V)}x_e \ge 1, \forall u,v \in T, \forall \text{ }u-v\text{ }cut \text{ }(U,V)$$where $cut(U,V)$ denotes the cut set of $(U,V)$.
Together with Komus's constraint, our model is obtained as follows:
$$\min \sum_{e \in E}c_e x_e$$
subject to
$$\sum_{e \in cut(U,V)}x_e \ge 1, \forall u,v \in T, \forall \text{ }u-v\text{ }cut \text{ }(U,V)$$
$$\sum_{e \notin E} x_e \le k-1$$
Based on Cao's comment, one more step is needed to prune the solution of our model: if $\sum_{e \notin E}x_e < k-1$, then we delete the largest $\Delta$ edges in the solution, where $\Delta=k-1-\sum_{e \notin E}x_e$. Then we get an optimal solution. | {
"domain": "cstheory.stackexchange",
"id": 4524,
"tags": "integer-programming"
} |
Cannot find gazebo_ros plugins after update from 1.9.4 to 1.9.5 | Question:
I just updated gazebo-prerelease through the Ubuntu (12.04) update manager from 1.9.4 to 1.9.5, and now Gazebo cannot find the gazebo_ros plugins, which were working well until now.
Error [Plugin.hh:127] Failed to load plugin libgazebo_ros_gpu_laser.so: libsdf.so.1: cannot open shared object file: No such file or directory
Error [Plugin.hh:127] Failed to load plugin libgazebo_ros_diff_drive.so: libsdf.so.1: cannot open shared object file: No such file or directory
These were the only plugins I was using. I did a search for the plugins and found them in the ROS simulator_gazebo stack for some reason. Adding them to GAZEBO_PLUGIN_PATH doesn't solve the issue either.
ROS: Groovy
Originally posted by Shehzi on Gazebo Answers with karma: 21 on 2013-07-26
Post score: 0
Answer:
Try using the main gazebo debian (which is currently 1.9.0). This should work with gazebo_ros.
Originally posted by nkoenig with karma: 7676 on 2013-07-26
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Shehzi on 2013-07-26:
Works! with main debian. Had to install the gazebo_ros_pkgs from source once more to make it run. Thank you
Comment by evilBiber on 2013-07-29:
Hi,
I guess it all runs with the source install again if you update and rebuild gazebo_ros_pkgs too.
libsdf.so has been replaced with libsdformat.so in the newest releases. | {
"domain": "robotics.stackexchange",
"id": 3399,
"tags": "ros, gazebo-1.9"
} |
Complexity of linear programming | Question: I have a basic question, if I can model a problem $(P)$ by a linear program, can we say that $(P)$ is polynomial?
Linear programs can be solved using the simplex method, and it has been proved that simplex runs in exponential time on some instances, so why do some references state that linear programming is polynomial?
Answer: There exist polynomial time algorithms for solving linear programs. These include the ellipsoid algorithm and interior-point methods. See Wikipedia. | {
"domain": "cs.stackexchange",
"id": 16024,
"tags": "complexity-theory, linear-programming"
} |
Deterministic Randomness Extractors | Question: I have read in several papers it is well known that deterministically extracting even one bit from a weak source is impossible. Could someone explain why?
Answer: Intuitively, the situation is you'd like some deterministic extractor $E: \{0,1\}^n \rightarrow \{0,1\}$ that can take in $n$ bits sampled from a weak source and output one bit with probability close to $1/2$, say it outputs 0 with probability $1/2 \pm \epsilon$ and 1 with probability $1/2 \mp \epsilon$.
Here's a weak argument that at the very least, such extractors $E$ can't exist if we don't put any restrictions on the input distribution other than it has 'enough' min-entropy. Suppose $E$ is such a potential extractor. By flipping the output if necessary, we may assume without loss of generality that $|E^{-1}(0)|\ge|E^{-1}(1)|$; that is, $E^{-1}(0)$ is a set of $n$-bit strings of size at least $2^{n}/2$. Thus a random variable that samples uniformly from $E^{-1}(0)$ will have min-entropy at least $n - 1$, but the extractor will never give you any 'random' output other than 0.
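This counting argument is easy to demonstrate concretely (a toy illustration of my own with $n = 4$, not from the original answer):

```python
import itertools
import random

n = 4
random.seed(0)
# an arbitrary candidate "extractor" from n-bit strings to one bit
E = {x: random.randrange(2) for x in itertools.product((0, 1), repeat=n)}

# adversarial weak source: uniform over the larger preimage of E
majority_bit = max((0, 1), key=lambda b: sum(v == b for v in E.values()))
support = [x for x, v in E.items() if v == majority_bit]

# the source has min-entropy >= n - 1 ...
assert len(support) >= 2 ** (n - 1)
# ... yet E is constant on it, so its output bit is not random at all
assert all(E[x] == majority_bit for x in support)
print(f"{len(support)} strings in support; E always outputs {majority_bit}")
```

Whatever function you pick for `E`, the same construction produces a high-min-entropy source on which the output is constant.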
Of course, if we tighten the restrictions on the input distribution (say, we assume all $n$ bits are IID) then we do have deterministic extractors that work. But as problem 6.6 in Salil Vadhan's survey of pseudorandomness shows, even weakening the IID assumption a little bit will cause deterministic extractors to fail, by a slight generalization of the same argument as I made above. | {
"domain": "cstheory.stackexchange",
"id": 3156,
"tags": "randomness"
} |
According to Maxwell-Boltzmann distribution, what is probability distribution function proportional to? | Question: If, according to the Maxwell-Boltzmann distribution
$${f(v)}\propto\exp\left(-\frac{\varepsilon}{kT}\right),\tag{1}$$
which is the equation from which the whole final equation is derived, then why is the final equation
$$f(v) = 4\pi v^2\left(\frac{m}{2{\pi}kT}\right)^{3/2}\exp\left(-\frac{mv^2}{2kT}\right)?\tag{2}$$
Clearly, here
$${f(v)}\propto v^2\exp\left(-\frac{mv^2}{2kT}\right).\tag{3}$$
Where have I made a mistake in understanding this?
Answer: The $v$ in your first expression (that $f(v) \propto \exp(-\varepsilon/kT)$) most likely refers to the true velocity which is a vector
$$\vec{v} = (v_x, v_y, v_z),$$
whereas the $v$ in the Maxwell–Boltzmann distribution refers to the magnitude of the velocity
$$v = |\vec{v}| = \sqrt{v_x^2 + v_y^2 + v_z^2}.$$
The latter is more useful to a chemist because we don't really care which direction the particle is moving in, only its speed. However, note that there are many possible combinations of $(v_x, v_y, v_z)$ which yield the same magnitude $v$; so the formula needs to be adjusted for this. The factor ends up being $4\pi v^2$, which is the surface area of a sphere with radius $v$ (you can think of the surface of this sphere as representing all possible velocities with magnitude $v$).
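As a quick numerical check (my addition, not part of the original answer), the $4\pi v^2$ factor is exactly what makes the speed distribution integrate to 1; the nitrogen mass and temperature below are just illustrative values:

```python
import math

def mb_speed_pdf(v, m, kT):
    """Maxwell-Boltzmann speed distribution f(v), Eq. (2)."""
    a = m / (2 * math.pi * kT)
    return 4 * math.pi * v**2 * a**1.5 * math.exp(-m * v**2 / (2 * kT))

m = 4.65e-26                # kg, mass of one N2 molecule (illustrative)
kT = 1.380649e-23 * 300     # J, k_B * 300 K

# crude Riemann sum over speeds; 0-5000 m/s covers essentially all molecules
dv = 1.0
total = sum(mb_speed_pdf(v * dv, m, kT) * dv for v in range(5000))
print(round(total, 3))  # close to 1.0
```

Dropping the $4\pi v^2$ factor (i.e. integrating the bare Gaussian over $v$ only) would not give 1, which is one way to see why the adjustment is needed.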
My suggestion would be to look for a derivation in a good physical chemistry textbook, which should explain this more thoroughly. | {
"domain": "chemistry.stackexchange",
"id": 16578,
"tags": "physical-chemistry, kinetic-theory-of-gases, statistical-mechanics"
} |
Is boric acid aqueous solution corrosive to stainless steel? | Question: I'm thinking of trying an aqueous boric acid solution in a nebulizer for a sinus infection, in a similar strength as for eyewash (anywhere from 1 tsp. in 4 oz. to 1 tsp. in 32 oz.). My nebulizer has a stainless steel screen that the vapor comes through, and I need to know whether the aqueous boric acid solution will have any corrosive effect on the stainless steel. Thank you.
Answer: Boric acid is one of the weakest acids commonly used. Its $\mathrm{p}K_\mathrm{a}$ is $9.24$. Its solution will have a negligible corrosive effect on the stainless steel. | {
"domain": "chemistry.stackexchange",
"id": 17951,
"tags": "acid-base"
} |
IPython - Clipboard Extension | Question: I often copy things from my terminal to other places (like Discord), and to make my workflow even easier I decided to use the IPython API to make an extension that has two magic functions pickle and clip.
clip can copy the contents of a line (or cell). It can copy both, the input line or the output line.
pickle takes in a variable as an argument and pickles its contents and copies it to your clipboard, it can also unpickle your clipboard's content and load it into a variable or print it.
I've heard that unpickling unknown data can be dangerous but I'm not sure if there is anything I can do about that, other than assume that the user trusts the data he or she is unpickling. (If there are other alternatives please let me know).
Are there any improvements that I could apply to my code? Like making the docstrings/error messages more understandable or patching a bug that I have not spotted, or rewriting something specific.
I'm kind of concerned about the user trying to unpickle a large object, such as a pandas data frame (I was helping someone with a pandas question and told him to pickle the data frame and send it, I didn't feel any noticeable delay as I unpickled the file, but the data frame was small anyways).
I also don't know how I could create tests for magic functions in case I add any extra features or patches in the future.
Any recommendations and constructive feedback are welcome. Thank you for taking the time to read this.
import sys
from argparse import ArgumentTypeError
from ast import literal_eval
from keyword import iskeyword
from pickle import dumps as p_dumps
from pickle import loads as p_loads
import IPython.core.magic_arguments as magic_args
from IPython.core.magic import line_magic, Magics, magics_class
from pyperclip import copy as pycopy
from pyperclip import paste as pypaste
def valid_identifier(s: str):
if not s.isidentifier() or iskeyword(s):
raise ArgumentTypeError(f'{s} is not a valid identifier.')
return s
def valid_line_num(s: str):
valid_conditions = (
s.isdigit(),
s in '_ __ ___ _i _ii _iii'.split(),
s.startswith('_') and s[1:].isdigit(),
s.startswith('_i') and s[1:].isdigit()
)
if not any(valid_conditions):
raise ArgumentTypeError(f'{s} is not a valid line number or a valid ipython cache variable (eg. `_` or `_i3`)')
return s
@magics_class
class IPythonClipboard(Magics):
@line_magic
@magic_args.magic_arguments()
@magic_args.argument('line_number',
                     default='_',
                     type=valid_line_num,
                     nargs='?',
                     help='The line number to copy the contents from'
                     )
def clip(self, line: str = ''):
    """Copies an input or output line to the clipboard.
    `_i7` copies the input from line 7
    `_7` copies the output from line 7
    `7` copies the output from line 7"""
    args = magic_args.parse_argstring(self.clip, line)
    line_num: str = args.line_number
    if line_num.isdigit():
        line_num = f'_{line_num}'
    ip = self.shell
    content: str = str(ip.user_ns.get(line_num, ''))
    pycopy(content)

@line_magic
@magic_args.magic_arguments()
@magic_args.argument('--output', '-o',
                     type=valid_identifier,
                     nargs=1,
                     help='The variable to store the output to.')
@magic_args.argument('var',
                     type=valid_identifier,
                     nargs='?',
                     help='The variable to pickle.')
def pickle(self, line: str = ''):
    """
    Pickles a variable and copies it to the clipboard or un-pickles clipboard contents and prints or stores it.
    `%pickle` unpickle clipboard and print
    `%pickle v` pickle variable `v` and store in clipboard
    `%pickle _` pickle last line's output and store in clipboard
    `%pickle -o my_var` unpickle clipboard contents and store in `my_var`"""
    ip = self.shell
    args = magic_args.parse_argstring(self.pickle, line)
    if bool(args.output) and bool(args.var):
        msg = (
            'Incorrect usage, you can either pickle a variable, or unpickle, but not both at the same time.' '\n'
            '\n' f'`%pickle {args.var}` to pickle the contents of `{args.var}` and send them to your clipboard'
            '\n' f'`%pickle -o {args.output[0]}` to unpickle clipboard contents and send them to `{args.output[0]}`'
            '\n' f'`%pickle` to unpickle your clipboard contents and print'
        )
        ip.write_err(msg)
        return None
    if not line or args.output:  # user wants to unpickle from clipboard
        content: str = pypaste()
        possible_errors = (not content.startswith('b') and content[1] != content[-1],  # must be like b'...'
                           not content  # clipboard is empty
                           )
        if any(possible_errors):  # clipboard doesn't have a valid pickle string
            sys.stderr.write(r"Your clipboard doesn't have a bytes-like string (ie. b'\x80\x03N.')")
            return None
        if args.output:  # user wants to unpickle into a variable
            ip.user_ns[args.output[0]] = p_loads(literal_eval(content))
        else:  # user wants to unpickle and print
            sys.stdout.write(str(p_loads(literal_eval(content))))
    else:  # user wants to pickle a var
        pycopy(str(p_dumps(ip.user_ns.get(args.var))))

def load_ipython_extension(ipython):
    ipython.register_magics(IPythonClipboard)
Answer: Valid line numbers
This is a mix of too-clever, not-very-efficient and not-informative-enough:
valid_conditions = (
    s.isdigit(),
    s in '_ __ ___ _i _ii _iii'.split(),
    s.startswith('_') and s[1:].isdigit(),
    s.startswith('_i') and s[1:].isdigit()
)
if not any(valid_conditions):
    raise ArgumentTypeError(f'{s} is not a valid line number or a valid ipython cache variable (eg. `_` or `_i3`)')
return s
It really needs to be exploded out to the various error conditions. Also, the fourth condition is likely incorrect because it will never be true; you probably meant [2:]. An example:
if s in {'_', '__', '___', '_i', '_ii', '_iii'} or s.isdigit():
    return s
match = re.match(r'_i?(.*)$', s)
if match is None:
    raise ArgumentTypeError(f'{s} is not a valid line number or a valid ipython cache variable (eg. `_` or `_i3`)')
if match[1].isdigit():
    return s
raise ArgumentTypeError(f'{s} has a valid prefix but {match[1]} is not a valid integer')
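As a quick, hypothetical sanity check (not part of the original review), the rewritten validator can be exercised against a handful of inputs; the function name `valid_line_num` and the shortened error messages here are assumptions:

```python
import re
from argparse import ArgumentTypeError

def valid_line_num(s: str) -> str:
    # Accept the special IPython cache names and bare line numbers outright.
    if s in {'_', '__', '___', '_i', '_ii', '_iii'} or s.isdigit():
        return s
    match = re.match(r'_i?(.*)$', s)
    if match is None:
        raise ArgumentTypeError(f'{s} is not a valid line number or ipython cache variable')
    if match[1].isdigit():
        return s
    raise ArgumentTypeError(f'{s} has a valid prefix but {match[1]} is not a valid integer')

# `_`, `_i3`, `_42` and plain digits pass; `foo` and `_ix` are rejected.
for ok in ('_', '_i3', '_42', '7'):
    assert valid_line_num(ok) == ok
for bad in ('foo', '_ix'):
    try:
        valid_line_num(bad)
    except ArgumentTypeError:
        pass
    else:
        raise AssertionError(bad)
```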
Similarly, this:
possible_errors = (not content.startswith('b') and content[1] != content[-1],  # must be like b'...'
                   not content  # clipboard is empty
                   )
if any(possible_errors):
should actually care that a single or double quote is used, and have separated error messages for mismatched quotes vs. missing 'b'. Don't handwave at your users - tell them exactly what went wrong.
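A sketch of what that more explicit validation could look like, with a separate message for each failure mode (hypothetical code, not from the original review; the helper name and message wording are invented):

```python
def check_pickle_string(content: str) -> str:
    """Return a message describing what's wrong with a clipboard pickle
    string like b'\\x80\\x03N.', or '' if it looks valid."""
    if not content:
        return 'clipboard is empty'
    if not content.startswith('b'):
        return "missing leading 'b' (expected a bytes literal)"
    if len(content) < 3 or content[1] not in ('"', "'"):
        return "expected a quote character after 'b'"
    if content[-1] != content[1]:
        return f'mismatched quotes: opens with {content[1]} but ends with {content[-1]}'
    return ''

assert check_pickle_string("b'\\x80\\x03N.'") == ''
assert check_pickle_string('') == 'clipboard is empty'
assert check_pickle_string("'abc'").startswith('missing leading')
assert check_pickle_string("b'abc\"").startswith('mismatched')
```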
Separated newlines
This:
msg = (
    'Incorrect...time.' '\n'
    '\n' f'...
is odd. Why not just include the newlines in the same string?
msg = (
    'Incorrect...time.\n\n'
f'... | {
"domain": "codereview.stackexchange",
"id": 38649,
"tags": "python, python-3.x, api, configuration"
} |
Lorentz transformations: new actual notation for a $4$-vector | Question: For the Lorentz transformations I use this notation
\begin{equation*}
\left\{\begin{aligned}
x&=\gamma (x'+\beta ct')\\
y&=y'\\
z&=z'\\
ct&=\gamma (ct'+\beta x')\\
\end{aligned}\right.
\end{equation*}
with this matrix
$$L^*=\begin{pmatrix}\gamma & 0 & 0 & \beta\gamma\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
\beta \gamma & 0 & 0 & \gamma\end{pmatrix}$$
Introducing the imaginary unit $i=\sqrt{-1}$, the Lorentz transformations allow you to switch from one orthogonal Cartesian coordinate system to another orthogonal one. Hence I actually use $L$, which is an orthogonal matrix.
$$L=L(\beta)=\begin{pmatrix}\gamma & 0 & 0 & -i\beta\gamma\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
i\beta \gamma & 0 & 0 & \gamma
\end{pmatrix}$$
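That $L$ is orthogonal ($LL^\intercal = I$, with a plain transpose, not a conjugate transpose) can be checked numerically; a small Python sketch using complex arithmetic (not part of the original question):

```python
import math

beta = 0.6
gamma = 1 / math.sqrt(1 - beta**2)

# L(beta) in the (x, y, z, ict) convention from the question.
L = [[gamma, 0, 0, -1j * beta * gamma],
     [0, 1, 0, 0],
     [0, 0, 1, 0],
     [1j * beta * gamma, 0, 0, gamma]]

def times_own_transpose(M):
    # (M M^T)_{jk} = sum_m M[j][m] * M[k][m]  (plain transpose, no conjugation)
    n = len(M)
    return [[sum(M[j][m] * M[k][m] for m in range(n)) for k in range(n)]
            for j in range(n)]

P = times_own_transpose(L)
for j in range(4):
    for k in range(4):
        expected = 1.0 if j == k else 0.0
        assert abs(P[j][k] - expected) < 1e-12
```

Note that with the conjugate transpose instead, $L$ would not come out unitary; this is one quirk of the $ict$ convention.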
The notation I usually use defines the four-vector $\boldsymbol{\mathcal{X}}=(x,y,z,ict)$, or, even better, as the column vector:
$$\boldsymbol{\mathcal{X}}^\intercal=\begin{pmatrix}
x \\
y \\
z \\
ict
\end{pmatrix}$$
Why most physicists now use $(ct,x,y,z)$ instead of $(x,y,z,ict)$ (or $(ict, x,y,z)$) and let the electromagnetic field tensor have real components?
Answer: I had the same question before. I think there's a paragraph in Kip Thorne's Modern Classical Physics that specifically points out that the imaginary number cannot capture the full spatial/geometrical aspects of GR (I can't recall the details), so I guess people slowly switched to real numbers and the differential metric instead of the imaginary number (you don't actually need to write it in metric form if you use the imaginary number). It's useful, though, to notice that the imaginary number was a very “cheap”/neat solution, used in many old books, as @bolbteppa has mentioned. | {
"domain": "physics.stackexchange",
"id": 59894,
"tags": "special-relativity, metric-tensor, lorentz-symmetry, complex-numbers, wick-rotation"
} |
What specific properties of laser light make it dangerous? | Question: What makes a laser more dangerous than a high powered single color LED for example?
Is it the fact that it's coherent light, all the photons have the same wavelength, that it can be focused to a small spot, or some combination of the above?
Would I need laser safety glasses when working with high powered LEDs?
Answer: A main reason that laser light is dangerous is that it can easily be focused down to a tiny spot.
If you try to use a series of lenses to focus light from a standard lightbulb, you will find that you are limited in how concentrated you can make it. There is a property of a light source called etendue, which is related to the size of the source and the angular spread over which it emits light, and this is conserved by optical elements such as lenses. For relatively large sources that emit in all directions, this limits how well you can "collect" the light.
Lasers often have very low etendue. The fact that their light is emitted into a single mode makes it easy for it to originate from a tiny point or a uniform beam. Furthermore, the fact that it is monochromatic - that is, it has a single wavelength - helps to focus it, because lenses generally have chromatic aberrations. As such, you can focus a laser to a very intense spot, such that a laser of one watt, much less power than a lightbulb, can generate light intense enough to start fires. Your eyes also themselves focus incoming light onto your retina, so they can potentially focus laser light into a more intense point than other sources. That said, this gives lasers the potential to be dangerous, rather than guaranteeing that they are. If you don't focus the laser light down, it is not necessarily more dangerous than any other source.
Incidentally, sunlight also has low etendue- by the time it reaches us it is barely diverging at all. That is why you can also focus down sunlight with a magnifying glass and start fires. So lasers are not unique in this regard, but they are more dangerous than most other sources of light.
There are other dangers associated with lasers as well. For example, some lasers are in an invisible range, but such lasers can damage your eyes even though you can't tell they are shining on them.
The dangers I've described are generally reduced with visible range LEDs, but any type of light can be dangerous under the right conditions. You should look for specific guidance for the particulars of your situation. | {
"domain": "physics.stackexchange",
"id": 94012,
"tags": "visible-light, laser, vision, light-emitting-diodes"
} |
Calculate 8 hours business day in a weekdays only | Question: I am fairly new to ASP.NET MVC and C#. I am working on a holiday request app where an employee can request holidays. The app works fine; the problem is that I am trying to count 8 hours per day when an employee requests a holiday. I have been scratching my head for 2 days and have seen every related topic on Stack Overflow, but I still can't get it to work. If anyone has any suggestions, they will be much appreciated.
Here is a snippet of the calculation of the working days excluding weekends. Thank you in advance.
public static class DateHelpers
{
    public static int DaysBetweenExcludingWeekends(DateTime startDate, DateTime endDate)
    {
        var days = 0;
        while (startDate <= endDate)
        {
            if (startDate.DayOfWeek != DayOfWeek.Saturday && startDate.DayOfWeek != DayOfWeek.Sunday)
            {
                days++;
            }
            startDate = startDate.AddDays(1);
        }
        return days;
    }
}
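The weekday count above can be converted to the requested hours by multiplying by a fixed day length; a language-agnostic sketch of that idea, with weekend validation, written in Python for brevity (hypothetical, not part of the question or answer):

```python
from datetime import date, timedelta

HOURS_PER_DAY = 8  # assumed fixed-length work day

def business_hours(start: date, end: date) -> int:
    """Count weekday dates in [start, end] and convert to hours."""
    if start.weekday() >= 5 or end.weekday() >= 5:
        raise ValueError('holiday requests must start and end on a weekday')
    days = 0
    d = start
    while d <= end:
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            days += 1
        d += timedelta(days=1)
    return days * HOURS_PER_DAY

# Mon 2021-03-01 .. Fri 2021-03-05 -> 5 weekdays -> 40 hours
assert business_hours(date(2021, 3, 1), date(2021, 3, 5)) == 40
```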
Answer: You can throw an error if someone is trying to set a work day to a day that should be a day off. Then you just need to handle this error. | {
"domain": "codereview.stackexchange",
"id": 36713,
"tags": "asp.net-mvc"
} |
error occurred when hokuyo node is launched | Question:
I downloaded the hokuyo node and built it. When I tried to use the "hokuyo_test.launch" file provided in the package to launch the hokuyo node, an exception was thrown:
[ WARN] [1356005675.931956581]: The use_rep_117 parameter has not been specified and has been automatically set to true. This parameter will be removed in Hydromedusa. Please see: http://ros.org/wiki/rep_117/migration
[ERROR] [1356005680.166530146]: Exception thrown while opening Hokuyo.timeout reached (in hokuyo::laser::laserReadline) You may find further details at http://www.ros.org/wiki/hokuyo_node/Troubleshooting
I read the troubleshooting page, but there wasn't any description about "timeout".
Originally posted by Kent on ROS Answers with karma: 140 on 2012-12-20
Post score: 0
Original comments
Comment by Kent on 2012-12-20:
I m actually using a serial usb converter cable, and i set the param “port” as "/dev/ttyUSB0", but i still get the timeout error
Comment by dornhege on 2012-12-20:
Are you running a URG via the serial connection? Why not just use USB directly?
Answer:
Looks like your port isn't set up properly.
I suggest plugging your LiDAR into a USB port and running the following command to grant rw permissions:
sudo chmod a+rw /dev/ttyACM0
Originally posted by Ernest with karma: 341 on 2013-05-02
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 12170,
"tags": "ros, hokuyo-node"
} |
What are the properties of constraint forces? | Question: I've just started studying mechanics. I need to find the properties of the constraint forces. I've gone through many books and also searched the internet but did not find anything useful.
Answer: Constraint forces are those forces responsible for constraining the system to some geometric or kinematic conditions. For example, the force due to the string acting on the bob of a pendulum, the contact force done by a wire on a bead, the normal force the ground does on a block, the force by a light rod connecting two particles in a rigid dumbbell, etc. They are usually originated from interconnections (such as the light rod in the example above) but can also be related to external interactions (such as the contact force done by the wire or the normal force done by the ground).
In general, constraint forces are difficult to deal with in the realm of Newtonian Mechanics because they normally are not specified and depend implicitly on the form of the impressed or specified forces. For example, the tension acting on the bob of a pendulum enters Newton's second law as a variable depending on the bob's weight, the specified force. If we did not know the latter we would not know the former. Moreover, in some cases there are too many constraint forces, which requires too many equations and makes it harder or even impracticable to solve the equations of motion.
One of the goals of analytical mechanics is to get rid of these constraint forces since they do not play any major role on the dynamics of the problem and keep only the specified forces. This can be done for a large class of systems, namely those which satisfy the Virtual Work Principle, i.e. the constraint forces are such that their total work on the system vanishes for any infinitesimal displacement respecting the constraints. The idea is that instead of writing down dynamical or equilibrium equations containing constraint forces we effectively replace them by kinematic equations and use them to eliminate some of the unknown variables. For example instead of writing two equations for the motion of the pendulum's bob in a fixed plane and taking into account both the tension along the string and the weight it is possible to neglect the constraint force by assuming that the bob moves along a fixed circle. | {
"domain": "physics.stackexchange",
"id": 42699,
"tags": "classical-mechanics, constrained-dynamics"
} |
EMF of source depends on the charge and the path then what do we mean when we say EMF of a source is $\epsilon$? | Question: EMF of an EMF source (a battery for example) is defined as the work done by the non-conservative force(s) on a charged particle as it passes through the terminals of the source, divided by the charge of the particle. That is, $\epsilon=\frac{dW}{dq}.$ This definition is very similar to that of potential difference. But the major difference, among others, is that the force in this case is of a non-conservative nature, while the potential function is defined for conservative forces only.
The potential difference between two points is independent of the charge of the particle, i.e., the same potential difference is obtained whether we calculate it using a $10C$ or a $-1.9C$ charge, and it is also independent of the path the charge takes. But this is not the case for EMF. The EMF depends on the path and on the charge of the particle, as it is defined for a non-conservative force. Hence the EMF of a source calculated by moving a $10C$ particle along one path will be different if the particle takes another path, and different still if we move a $-1.9C$ charged particle along the same path. So what do we mean when we say that a battery has EMF $\epsilon$? How can we be sure that the work done by the battery - by the non-conservative forces due to the battery - on a charged particle $Q$, as it moves through the source, will be $Q\epsilon$? Won't it be different if it took other paths?
Answer: "The emf depends on the path and the charge on the particle."
The emf is indeed path-dependent, but not in general charge-dependent. The work, W, done on a charge, $q$, taken along a specific path is proportional to $q$ and so the emf, defined as $W/q$, is independent of $q$. Examples include (1) a wire of length $l$ moving at speed $v$ at right angles to itself and cutting magnetic flux, for which $\mathscr E =\frac {q(\mathbf v \times \mathbf B).\mathbf l} q =Blv$ and (2) a battery. For the battery the terminal voltage does depend somewhat on the current, that is, the charge passing per second through the battery, but we attribute this to the battery's internal resistance rather than to changes in its emf.
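A tiny numeric illustration of the first example, the motional emf $\mathscr E = Blv$, and of why $W/q$ is independent of $q$ (hypothetical numbers, not from the original answer):

```python
# Motional emf of a wire cutting field lines: E = B * l * v
# (illustrative numbers only; they are not from the original answer)
B = 0.5   # tesla
l = 0.2   # metres, length of the wire
v = 3.0   # metres per second, speed perpendicular to B

emf = B * l * v
assert abs(emf - 0.3) < 1e-12   # volts

# Doubling the charge carried scales W and q by the same factor,
# so W/q (the emf) is unchanged -- the point made above.
for q in (1.0, 2.0, 10.0):
    W = q * emf               # work done on charge q
    assert abs(W / q - emf) < 1e-12
```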
What about path dependency? In the case of the battery, a charge $q$ taking any path from terminal to terminal through the battery, that is through the interfaces between electrodes and electrolyte will result in the same value for $W/q$. [For example, for each pair of electrons that enter and leave a simple cell, one zinc atom will be ionised and two hydrogen ions will accept electrons from copper, so a particular net amount of energy will be released.] But if we take $q$ from terminal to terminal by a route that doesn't go through the battery, no non-conservative chemical work will be done on $q$ ! Path dependency is, then, starkly binary, in the case of the battery! | {
"domain": "physics.stackexchange",
"id": 85198,
"tags": "electromagnetism, electricity, electric-current, voltage, batteries"
} |
What does the "filters" argument in "getBM" do? | Question: In the biomaRt package, there is a function getBM which, among other things, is useful for mapping between different gene representations. Currently, I'm using it as follows to map Ensembl transcript IDs to gene names and Ensembl gene IDs.
ensemble2gene <- getBM(attributes=c("ensembl_transcript_id",
                                    "external_gene_name",
                                    "ensembl_gene_id"),
                       values = as.list(transcripts),
                       mart = mart)
How is the following code different:
ensemble2gene <- getBM(attributes=c("ensembl_transcript_id",
                                    "external_gene_name",
                                    "ensembl_gene_id"),
                       filters = "ensembl_transcript_id",
                       values = as.list(transcripts),
                       mart = mart)
The description of the filters argument here is unclear to me. But just based on the results, I get fewer results when I run the second one.
Answer: Hi An Ignorant Wanderer,
Filters allow you to narrow down the dataset to query a specific subset of features within your dataset. A description and tutorial of the filtering steps within BioMart can be found here:
https://www.ensembl.org/info/data/biomart/how_to_use_biomart.html
If you remove filters, you will query the full dataset (all human genes, in the example above). However, this is not recommended for BioMart as large, genome-wide queries will overload the BioMart servers. For queries of this size, we suggest using the Ensembl REST API or FTP site. | {
"domain": "bioinformatics.stackexchange",
"id": 2296,
"tags": "ensembl, biomart"
} |
The ever increasing pull of a black hole | Question: If something is caught in the pull of a black hole and keeps accelerating, it can't keep accelerating without limit, or else it would accelerate beyond c. So is there a limit on how fast acceleration can increase for an object being pulled in by a black hole? (forgive me if this is a horrible question or too broad or off-topic)
Answer: There is no limit in acceleration, but since acceleration is the rate of change of velocity per time, and the time dilatation of the falling particle relative to an outer observer goes to infinity at the event horizon (where the escape velocity would be the speed of light), the falling object never makes it through that border from the viewpoint of an outer observer, because everything freezes at the horizon. From the perspective of the falling particle everything is normal since it is just falling freely and doesn't feel the time dilatation it suffers. For a simple relativistic explanation see Link, for further details on the quantum perspective I recommend Link 2 and Link 3 | {
"domain": "physics.stackexchange",
"id": 19816,
"tags": "black-holes, astronomy, acceleration"
} |
Will the Magdalena Ridge Optical Interferometer be able to image extended objects like the surface of the Moon? | Question: Inspired by several questions:
When will a moon landing site be visible via telescope?
Could the E.H.T. produce an image of the human artifacts on the moon?
Picture of equipment left on the Moon?
Were the Apollo lunar activities observed from Earth?
If one wanted to resolve 1 meter or smaller detail on the surface of the Moon from the surface of the Earth (about 2.6E-09 or 0.5 mas) at say 1 micron wavelength one would need a baseline of order 400 meters.
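The quoted numbers are easy to reproduce with the Rayleigh criterion $\theta \approx 1.22\,\lambda/D$; a quick back-of-the-envelope check in Python (not part of the original question):

```python
import math

wavelength = 1e-6          # 1 micron, in metres
moon_distance = 3.844e8    # mean Earth-Moon distance, in metres
target_size = 1.0          # 1 m detail on the lunar surface

theta = target_size / moon_distance            # required resolution, radians
theta_mas = math.degrees(theta) * 3600 * 1000  # in milliarcseconds

baseline = 1.22 * wavelength / theta           # Rayleigh criterion for aperture/baseline

assert abs(theta - 2.6e-9) / 2.6e-9 < 0.01     # ~2.6e-9 rad, as quoted
assert 0.5 < theta_mas < 0.55                  # ~0.5 mas, as quoted
assert 400 < baseline < 500                    # "of order 400 meters"
```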
The longest current optical baselines are only 40 to 80 meters and the longest one currently under construction is the Magdalena Ridge Optical Interferometer which
will have ten 1.4 m (55 in) telescopes located on three 340 m (1,120 ft) arms. Each arm will have nine stations where the telescopes can be positioned, and one telescope can be positioned at the center.
This is sufficient to have of the order of 1 meter resolution at the Moon's distance, but being optimized for star-like sources it's not clear if it will be able to image extended objects like the surface of the Moon.
Question: Will the Magdalena Ridge Optical Interferometer be able to image extended objects like the surface of the Moon, or is it designed only to separate a few star-like objects, e.g. binary stars or star + planetary systems?
note: If information on this specific observatory isn't available, it would be certainly informative to extrapolate from existing imaging work from long baseline optical interferometers. The Moon presents a big challenge since its surface brightness extends over quite a large solid angle, so pinholes at the focus of each telescope in the array would generate a lot of diffracted/scattered light, whereas imaging star-like objects against a dark field would be less susceptible.
Magdalena Ridge Observatory Interferometer computer graphic overlay of the BCF building and the ten telescopes Source
Answer: According to this site: "The 10 telescopes will be optically linked together in order to make images of astronomical objects with unprecedented detail. The interferometer will have a resolution 100 times greater than the Hubble Space Telescope and will be able to make accurate images of complex astronomical objects many times faster than other existing interferometric arrays."
And: The Interferometer will take delivery of the second telescope enclosure in February 2020 and the second telescope in August 2020. They expect to fully incorporate the second telescope by the end of 2020, which will allow the instrument to produce “fringes,” using the proprietary fringe-tracker called ICoNN.
There is also this headline:
How America's Spooks Seek to Spy on Distant Satellites
The intelligence community has plans for a telescope network that can see not just a blob in orbit but details such as a satellite’s solar panels.
from here.
That's one reason why the U.S. Air Force, which wants to monitor its own orbital assets and presumably those of others, is funding MROI. "They want to know: Did the boom break or did some part of the photovoltaic panels collapse?" says Michelle Creech-Eakman, an astronomer at the New Mexico Institute of Mining and Technology in Socorro and project scientist on MROI. But if the facility succeeds, its biggest impact could be on the field of astronomy, by drawing new attention to the promise of optical interferometry, a powerful but challenging strategy for extracting exquisitely sharp images from relatively small, cheap telescopes.
Radio astronomers have had it easier. The long radio wavelengths mean data from separated dishes can be recorded, digitized, time-stamped by an atomic clock, and combined later for analysis. But optical interferometry is far trickier: The short wavelengths of visible light, running at terahertz frequencies, cannot yet be digitized by any electrical system. So the light must be merged in real time, with nanometer precision.
From here.
Also Reference 3.
http://www.mro.nmt.edu/about-mro/interferometer-mroi/ | {
"domain": "astronomy.stackexchange",
"id": 4303,
"tags": "observational-astronomy, photography, angular-resolution, interferometry"
} |
Free spin (Curie) Paramagnetism | Question: I'm working through a derivation for Curie paramagnetism and hope someone could help clarify a couple of steps. The way that makes sense to me (although now that I have seen the Wikipedia derivation below, I realise this way is pretty long) is to not take any high-temperature approximations until near the end of the derivation, where I have:
$M=g_J\mu_B[(J+1/2)coth[g_J\mu_B\beta(J+1/2)]-\frac{1}{2}coth(g_J\mu_B\beta/2)]$
now to get to the Curie susceptibility it seems that when taking the high-T limit of the above expression the leading $\frac{1}{x}$ term of the coth expansion is ignored and the second $\frac{1}{3}x$ term is considered (this pops out the correct answer $\chi_{curie}=\frac{n(g_J\mu_B)^2}{3}\frac{J(J+1)}{k_BT}$). I can't find or think of any sensible reason for this, apart from the fact that when we take the limit of zero B the divergent term is just a constant which has no temperature dependence (which is what we're interested in), so we happily ignore it.
The wikipedia method below almost makes sense aside from the last equality where I can't follow how they've simplified the sums (I can see that you could drop the first parts of the sums as they have no H dependence so won't matter when it comes to finding $\chi_{curie}$ but this still doesn't seem to work)
$\bar{m}=\frac{\sum\limits_{M_{J}=-J}^{J}{M_{J}g_{J}\mu _{B}e^{{M_{J}g_{J}\mu _{B}H}/{k_{B}T}\;}}}{\sum\limits_{M_{J}=-J}^{J}{e^{{M_{J}g_{J}\mu _{B}H}/{k_{B}T}\;}}}\simeq g_{J}\mu _{B}\frac{\sum\limits_{M_{J}=-J}^{J}{M_{J}\left( 1+{M_{J}g_{J}\mu _{B}H}/{k_{B}T}\; \right)}}{\sum\limits_{M_{J}=-J}^{J}{\left( 1+{M_{J}g_{J}\mu _{B}H}/{k_{B}T}\; \right)}}=\frac{g_{J}^{2}\mu _{B}^{2}H}{k_{B}T}\frac{\sum\limits_{-J}^{J}{M_{J}^{2}}}{\sum\limits_{M_{J}=-J}^{J}{\left( 1 \right)}}$
(Copied from Wikipedia paramagnetism article)
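The last equality in the quoted derivation rests on two standard identities, $\sum_{M_J=-J}^{J} M_J = 0$ and $\sum_{M_J=-J}^{J} M_J^2 = \tfrac{1}{3}J(J+1)(2J+1)$, with $2J+1$ terms in the denominator sum. A quick numeric check of these for integer $J$ (not part of the original question or answer):

```python
def sum_m(J):
    # Linear sum over M_J = -J .. J; vanishes by symmetry.
    return sum(m for m in range(-J, J + 1))

def sum_m2(J):
    # Quadratic sum over M_J = -J .. J.
    return sum(m * m for m in range(-J, J + 1))

for J in range(1, 20):
    assert sum_m(J) == 0
    assert sum_m2(J) == J * (J + 1) * (2 * J + 1) // 3
    assert sum(1 for _ in range(-J, J + 1)) == 2 * J + 1  # number of terms
```

Dividing the quadratic sum by the $2J+1$ terms gives the $\tfrac{1}{3}J(J+1)$ factor that appears in $\chi_{curie}$.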
Answer: Check your algebra.
Hints: 1) If you write the expansion of M(T) for large T (small $\beta$), you will find that the $1/\beta$ term in the first term cancels the one in the second term, so you have to go to the next order.
2) The sum $\sum_{i=-N}^{N} i = 0$, which you can tell by making the variable replacement $j = -i$. | {
"domain": "physics.stackexchange",
"id": 7797,
"tags": "electromagnetism, condensed-matter"
} |
Impact of IMU Position on Localization | Question:
This is the first time I'm using an IMU (Bosch BNO055) on a mobile base. The IMU provides absolute orientation, angular velocity, and angular acceleration. I'm planning to use the robot_localization package to fuse this data with wheel encoder data. Even though I have done a lot of searching and reading, I'm still not clear on the extent to which the position of the IMU on my mobile base can be accounted for. I know that the measured angular velocity and acceleration will differ from the actual values of the base if the IMU is not aligned with the movement axis of the base. Is there a simple way to calculate the actual values if, for example, I were to place the IMU in the right front corner of my mobile base as opposed to the center?
Originally posted by west2788 on ROS Answers with karma: 3 on 2020-12-13
Post score: 0
Answer:
You can mount it anywhere you want, as long as you accurately describe its pose relative to the frame you want to track (base_link_frame in robot_localization). That pose description will go into your TF tree and can be provided either by running static transforms nodes or, better, via a URDF interpreted by robot_state_publisher.
Now whether or not you can accurately measure the pose in relation to your base_link (or whichever frame you want to use for tracking) is a different story. Ideally this should be calibrated (read: computed from measurements by the sensors itself and ground truth), rather than measured geometrically or derived from a model, since the latter will always have errors.
Originally posted by chfritz with karma: 553 on 2020-12-13
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 35868,
"tags": "imu, navigation, ros-melodic, robot-localization"
} |
Can two spectral lines from different elements be equal? | Question: I'm not asking about how emission and absorption lines are created, nor am I asking why different elements have unique spectral "fingerprints". Those two questions have already been answered multiple times.
My question is simply whether there is any fundamental reason why we couldn't find a pair of elements that have a single spectral line in common.
As a second question: if there is no reason why this couldn't be the case, have we actually found an example of this in nature, or is it just theoretically allowed?
Answer: It's theoretically allowed, but extremely unlikely. Spectral lines are very, very narrow: they're normally separated from each other by hundreds of terahertz (few to tens of eV) but their natural widths are rarely bigger than a gigahertz, so there's some five orders of magnitude between the two scales. For two lines to meaningfully coincide, they'd have to match up to the fifth significant figure, which is extremely unlikely.
That said, if you don't care all that much about precision, odds are that you'll be able to find an example - but then you need to specify what precision you find acceptable and how far two lines need to be for you to take them as separate. | {
"domain": "physics.stackexchange",
"id": 43034,
"tags": "atoms, spectroscopy"
} |
Modular exponentiation in C++ | Question: I wrote the following code for a contest and worked fine for me. Could I make it a bit faster?
#include<iostream>
using namespace std;
int power(int,int,int);
int main(int argc,char** argv){
    int base,exponent,mod;
    cin>>base>>exponent>>mod;
    cout<<power(base,exponent,mod)<<endl;
    return 0;
}
int power(int base,int exponent,int mod){
    if(mod==1)return 0;
    int ans=1;
    for(int i=0;i<exponent;i++){
        ans=(ans*base)%mod;
    }
    return ans;
}
Answer: The usual way is a little more complex, but a lot faster when the exponent is large. It looks something like this:
template <class T>
T mul_mod(T a, T b, T m) {
    if (m == 0) return a * b;
    T r = T();
    while (a > 0) {
        if (a & 1)
            if ((r += b) > m) r %= m;
        a >>= 1;
        if ((b <<= 1) > m) b %= m;
    }
    return r;
}

template <class T>
T pow_mod(T a, T n, T m) {
    T r = 1;
    while (n > 0) {
        if (n & 1)
            r = mul_mod(r, a, m);
        a = mul_mod(a, a, m);
        n >>= 1;
    }
    return r;
}
pow_mod right-shifts n (the exponent) each iteration through the loop, so the number of iterations is proportional to the number of bits in the number (whereas yours in the question is proportional to the exponent itself). In other words, yours is linear in the exponent's magnitude, and this is roughly logarithmic in the exponent's magnitude.
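The same square-and-multiply idea is easy to test against Python's built-in three-argument `pow` (a sketch in Python for brevity, not part of the original review):

```python
def pow_mod(a, n, m):
    """Right-to-left binary exponentiation: O(log n) multiplications."""
    r = 1
    a %= m
    while n > 0:
        if n & 1:                 # current bit of the exponent is set
            r = (r * a) % m
        a = (a * a) % m           # repeated squaring
        n >>= 1
    return r

for base, exp, mod in [(2, 10, 1000), (7, 0, 13), (123456, 98765, 97)]:
    assert pow_mod(base, exp, mod) == pow(base, exp, mod)
```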
Actual code review
Since this is CodeReview, not (for example) Stack Overflow, let's also take a look at reviewing your code.
Variable Definitions
You've defined multiple variables in a single definition:
int base,exponent,mod;
Many people find it more readable to define one variable per definition:
int base;
int exponent;
int mod;
... or at least use a separate line for each variable:
int base,
    exponent,
    mod;
Naming
A function's name should reflect what it really does. Using power for modular exponentiation borders on misleading. I'd rather the name included modular or at least mod.
formatting
At least IMO, a little white space can help readability quite a bit. For one example, instead of:
int power(int base,int exponent,int mod)
...I'd rather see a space after each comma:
int power(int base, int exponent, int mod)
In addition, where there's flow control, the controlled statements should be indented, so this:
if(mod==1)return 0;
would come out like:
if (mod==1)
    return 0;
return from main
When execution "flows" off the end of main, there's an implicit return 0;, so you can eliminate that line from your code (though some prefer to keep it).
Use of endl
I'd avoid using std::endl (ever). Most of the time you really just want to write out a new-line, in which case \n is entirely adequate. On the (relatively rare) occasion that you really also want to flush the stream as endl also does, you should do so explicitly.
using namespace std;
Pulling in the entirety of namespace std; like this is generally frowned upon. It's all right for other (more sensibly designed) namespaces, but std defines a huge amount of "stuff", most of which you really don't want directly visible. | {
"domain": "codereview.stackexchange",
"id": 24353,
"tags": "c++, performance, algorithm"
} |
Loops in PyTorch Implementation | Question: I'm trying to implement a regularization term for the loss function of a neural network.
from torch import nn
import torch
import numpy as np
reg_sig = torch.randn([32, 9, 5])
reg_adj = torch.randn([32, 9, 9, 4])
Maug = reg_adj.shape[0]
n_node = 9
n_bond_features = 4
n_atom_features = 5
SM_f = nn.Softmax(dim=2)
SM_W = nn.Softmax(dim=3)
p_f = SM_f(reg_sig)
p_W = SM_W(reg_adj)
Sig = nn.Sigmoid()
q = 1 - p_f[:, :, 4]
A = 1 - p_W[:, :, :, 0]
A_0 = torch.eye(n_node)
A_0 = A_0.reshape((1, n_node, n_node))
A_i = A
B = A_0.repeat(reg_sig.size(0), 1, 1)
for i in range(1, n_node):
    A_i = Sig(100 * (torch.bmm(A_i, A) - 0.5))
    B += A_i
C = Sig(100 * (B - 0.5))
reg_g_ij = torch.randn([reg_sig.size(0), n_node, n_node])
for i in range(n_node):
    for j in range(n_node):
        reg_g_ij[:, i, j] = q[:, i] * q[:, j] * (1 - C[:, i, j]) + (1 - q[:, i] * q[:, j]) * C[:, i, j]
I believe that my implementation is computationally not efficient and would like to have some suggestions on which parts I can change. Specifically, I would like to get rid of the loops and do them using matrix operations if possible. Any suggestions or working examples or links to useful torch functions would be appreciated
Answer: I don't have many improvements to offer -- just one major one. Like you suspected, your implementation is not efficient. This is because using a double for loop to set a Torch/NumPy array is not the preferred way to do sum reductions. What is preferred is the use of torch.einsum. It takes an indices equation and reduces the Tensors into a final representation.
First to note is that your equation for reg_g_ij is not the most simplified form.
In your code, we start with:
q_i * q_j * (1 - C_ij) + (1 - q_i * q_j) * C_ij
But it can be reduced to:
q_i * q_j * (1 - 2 * C_ij) + C_ij
You can prove it yourself with a few lines of algebra.
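A quick numeric spot-check of that simplification (hypothetical, not in the original answer):

```python
import random

random.seed(0)
for _ in range(1000):
    q_i, q_j, C_ij = (random.random() for _ in range(3))
    original = q_i * q_j * (1 - C_ij) + (1 - q_i * q_j) * C_ij
    simplified = q_i * q_j * (1 - 2 * C_ij) + C_ij
    # q*q*(1-C) + C - q*q*C == q*q*(1-2C) + C
    assert abs(original - simplified) < 1e-12
```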
The last small thing is call .unsqueeze(0) when you're expanding the dimensions of an array. In this case we used this method to expand an array's size from (9, 9) to (1, 9, 9).
A_0 = torch.eye(n_node).unsqueeze(0)
A_i = A
B = A_0.repeat(reg_sig.size(0), 1, 1)
for i in range(1, n_node):
A_i = Sig(100 * (torch.bmm(A_i, A) - 0.5))
B += A_i
C = Sig(100 * (B - 0.5))
reg_g_ij = torch.einsum('ij,ik,ijk->ijk', q, q, 1 - 2 * C) + C
When profiling this approach, we see a pretty big reduction in time:
In [257]: %timeit new(reg_sig, reg_adj)
1000 loops, best of 5: 745 µs per loop
In [258]: %timeit orig(reg_sig, reg_adj)
The slowest run took 4.85 times longer than the fastest. This could mean that an intermediate result is being cached.
100 loops, best of 5: 5.44 ms per loop | {
"domain": "codereview.stackexchange",
"id": 37024,
"tags": "python, pytorch"
} |
Can iproniazid be prepared by reacting isoniazid with 2-chloropropane? | Question: I'm asking as I fear that the 2-chloropropane might react with the pyridine ring to form k-isopropylpyridine-4-carbohydrazide where k = 2, 3, 5. Or that no reaction will take place.
Figure 1: Isoniazid
Figure 2: Iproniazid
Figure 3: 2-chloropropane
Answer: Alkylation of the pyridine ring under normal conditions is very unlikely, so that's not a major concern. The nitrogen is electron withdrawing, hence electrophilic aromatic substitution reactions are much more difficult to achieve on pyridine than benzene, particularly absent catalysis. Additionally, the α-amino ketone substituent already on the ring is further deactivating (since it contains a π bond conjugated to the ring and a highly electronegative atom, i.e., oxygen).
One issue is that pyridine is liable to form quaternary ammonium salts when reacted with alkyl halides. See for example the substances paraquat, diquat, and the following paper on pyridine chemistry. While the quaternization of H-bearing ammonium cations is easily reversible (given that they can be deprotonated), the same can't be said for the nitrogen of the pyridine ring. Generally, the only methods I'm aware of for dealkylation of quaternary ammonium salts involve hydride donors and strong reducing agents, such as lithium aluminum hydride. Certainly, the ketone group would need to be protected in that situation.
There is also the issue of avoiding polyalkylation, as well as selecting which of the two amines in your substituent reacts, in view of the fact that the basicity and nucleophilicity of amines increase with the number of alkyl substituents. Sterics, however, might preclude that from becoming a problem in this instance. | {
"domain": "chemistry.stackexchange",
"id": 491,
"tags": "organic-chemistry, synthesis"
} |
Deriving the slow-roll parameter $\eta$ | Question: In inflationary theory, many papers start off by making the slow-roll approximation, on which many things depend. This approximation is usually presented by requiring that two 'slow-roll parameters' are small:
$$\epsilon_V\equiv\frac{1}{16\pi G}\left(\frac{V'}{V}\right)^2 \ll 1$$
$$|\eta_V|\equiv \frac{1}{8\pi G}\left(\frac{V''}{V}\right)\ll 1$$
We then have
$$H^2=\frac{8\pi G}{3}V$$
$$3H\dot{\phi}=-V'$$
Now, the first of these two conditions is reasonably easily derived:
\begin{align*}
\frac{\ddot{a}}{a}&\gg 0\\
\dot{H}+H^2 &\gg 0\\
-\frac{\dot{H}}{H^2}&\ll 1\\
\frac{1}{16\pi G}\left(\frac{V'}{V}\right)^2=\epsilon_V&\ll 1 \hspace{1cm}\text{used slow roll approx.}
\end{align*}
However, I'm not sure how to find the second one (note that I am not asking for an intuitive explanation; I understand what the second parameter represents, I just want to know how to derive it). Could someone tell me how to derive it (or what the original premise, analogous to $\ddot{a}/a\gg 0$ for $\epsilon_V$, is)?
Answer: For inflation the potential energy of the field dominates the kinetic energy
$\tfrac{1}{2}\dot{\phi}^2 \ll V(\phi)$
This limit is referred to as slow roll, and under such conditions the universe expands quasi-exponentially
$a(t) \propto \exp \left( \int H \, dt \right) = e^{-N} $
where we define the number of e-folds $N$ as:
$dN = -H dt$
so that $N$ is large in the far past and decreases as we go forward in time and as the scale factor $a$ increases.
With this we have:
$\epsilon = -\frac{\dot{H}}{H^{2}} = \frac{1}{H}\frac{dH}{dN}$
Accelerated expansion will only be sustained for a sufficiently long period of time if the second time derivative of $\phi$ is small enough:
$|\ddot{\phi}| \ll |3H\dot{\phi}|, |V'(\phi)|$
So that the equation of motion for the scalar field is approximately:
$3H\dot{\phi} + V'(\phi) \simeq 0$
This condition can be expressed in terms of a second dimensionless parameter, defined as:
$\eta \cong -\frac{\ddot{\phi}}{H\dot{\phi}} \cong \epsilon + \frac{1}{2\epsilon}\frac{d\epsilon}{dN}$
then
$\eta \simeq \frac{1}{8\pi G} \left( \frac{V''(\phi)}{V(\phi)} \right)$
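The step behind this last expression can be sketched explicitly (keeping only leading slow-roll terms). Differentiating the approximate equation of motion $3H\dot{\phi} \simeq -V'(\phi)$ with respect to time gives

$3\dot{H}\dot{\phi} + 3H\ddot{\phi} \simeq -V''(\phi)\dot{\phi}$

Dividing through by $3H^{2}\dot{\phi}$ and rearranging,

$-\frac{\ddot{\phi}}{H\dot{\phi}} \simeq \frac{V''(\phi)}{3H^{2}} + \frac{\dot{H}}{H^{2}} = \frac{V''(\phi)}{3H^{2}} - \epsilon$

and with $H^{2} = \frac{8\pi G}{3}V$ the first term becomes $\frac{1}{8\pi G}\frac{V''}{V}$. So the definition of $\eta$ reproduces the potential form up to an $O(\epsilon)$ correction, which is itself small in slow roll.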
In the slow-roll regime
$\epsilon, |\eta|\ll 1$, where the last condition ensures that the change of $\epsilon$ per e-fold is small. Notice that $\eta$ need not be small for inflation to take place. Inflation takes place when $\epsilon <1$, regardless of the value of $\eta$ | {
"domain": "physics.stackexchange",
"id": 13311,
"tags": "cosmology, cosmological-inflation"
} |
Does not using more filters in deeper CNN creates more images? | Question: For example, we have applied 32 filters to a single image. Then it created 32 different images (stack of convolutional values).
And in the second layer, if we apply 64 filters, are all these filters going to be applied to all those 32 images? If so, will it create 64*32 outputs, or am I understanding this wrong?
I have become confused because the Keras documentation says that using 64 filters will create 64 outputs. If anybody can briefly explain how the second or deeper layers of a CNN work, it would be helpful for me.
Answer: No, your understanding is not correct.
Each of the 64 filters of the second layer will be applied to each of the 32 channels from the output of the first layer, resulting in 64 channels in the output of the second layer.
When the input of a convolutional layer has multiple channels, the convolution filter itself has the same number of channels. In your example, if we are using $3\times3$ filters, each filter in the second layer will be a tensor of dimensions $3\times3\times32$. Therefore, the filter "covers" the full depth of the input. Then, you simply perform the element-wise multiplication of the filter with the overlapping region in the input and add all the resulting elements together. Applying just 1 filter, we obtain a result with 1 channel.
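A tiny NumPy sketch of that mechanism (shapes chosen to match the question; this is an illustrative naive convolution, not how frameworks actually implement it):

```python
import numpy as np

in_ch, out_ch, k = 32, 64, 3                   # 32 input channels, 64 filters of 3x3
x = np.random.rand(in_ch, 8, 8)                # output of the first layer
filters = np.random.rand(out_ch, in_ch, k, k)  # each filter spans ALL 32 channels

out_size = x.shape[1] - k + 1                  # valid convolution, no padding
out = np.zeros((out_ch, out_size, out_size))
for f in range(out_ch):
    for i in range(out_size):
        for j in range(out_size):
            # one filter times one 32x3x3 patch, summed to a single number
            out[f, i, j] = np.sum(x[:, i:i + k, j:j + k] * filters[f])

print(out.shape)  # (64, 6, 6) -- 64 channels, not 64*32
```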
This way, the number of channels of the output of a convolutional layer is the same as the number of filters in the convolution. | {
"domain": "datascience.stackexchange",
"id": 9433,
"tags": "deep-learning, cnn, convolutional-neural-network"
} |
Forming ester from amide using protonated alcohol as catalyst | Question: In my level of understanding, if I want to make an amide from an ester, I should add:
$\ce{H3O+}$ for hydrolysis to make carboxylic acid
$\ce{SOCl2}$ to form acid chloride
$\ce{ROH}$, pyridine to form the ester as product
I think this is a good way since it is difficult to form a more reactive product compared to the reactant, but in one of the question, I really want to try to make the 3-step process 1-step:
My attempt is to use a protonated $\ce{ROH}$ as acid catalyst, this time, to form a methyl ester. I used methanol, and somehow I can get to the end.
[for the 4th intermediate, I searched online and found that $\mathrm{p}K_\mathrm{a}$ of $\ce{(CH3)2NH = 10.73}$, $\mathrm{p}K_\mathrm{a}$ of $\ce{CH3OH = 15.56}$, this means that $\ce{(CH3)2NH+}$ is a better leaving group (lower $\mathrm{p}K_\mathrm{a}$ --> lower basicity --> more stable in the reaction mixture)]
Can someone please kindly tell me what is wrong with my mechanism? I know it should be wrong, since the 3-step process is the most suitable way and yield the most.
Answer: There are several points that I'd throw out:
As a technicality, you probably can't buy protonated alcohols; you'd need to use the alcohol plus an acid like HCl in a non-aqueous solvent.
Now, once the amine drops off, it will mop up your catalytic acid, so you had probably better use at least stoichiometric acid, and possibly quite a bit more.
Amides are more stable than esters; you know that already. So, the reaction thermodynamics are not in your favour to begin with. Thus, in order to get a decent yield, you probably also need to use a big excess of alcohol (Le Chatelier's principle: more alcohol → more ester).
You'd probably also need to heat it quite a bit to get the reaction going. (The same is true for hydrolysis with aqueous acid.)
All in all, you could probably get it to work, but it might not necessarily be fun, and if your reactants had any other sensitive functional groups on them, they would pretty much be destroyed by heating in strong acid.
In general, this is a tough reaction, and I suspect that if you could find a general way to do this cleanly then it would be quite a substantial achievement. Recently, Hie et al. have developed a Ni-catalysed procedure (Nature 2015, 524 (7563), 79–83), but it only works on aromatic amides, so isn't applicable to your example substrate. (Essentially, both the group attached to the carbonyl carbon, as well as one of the groups on nitrogen, have to be aromatic. Despite that obvious limitation in scope, it's still in Nature, which maybe suggests something about the difficulty of the transformation.) | {
"domain": "chemistry.stackexchange",
"id": 14146,
"tags": "organic-chemistry, carbonyl-compounds"
} |
Grounding insulative material | Question: When grounded, can an insulative material keep its charge for any measurable length of time? Or, I suppose, if it was a perfect insulator, would it discharge at all?
An example might be a charged rubber tire placed on the ground.
Answer: If it is a perfect insulator, then no charge can flow, and it will never discharge. Nothing is a 'perfect' insulator, so it's always a question of timescale - determined, primarily, by its resistivity. This is generally what happens during the build-up of 'static electricity'---for example, from rubbing together glass and fur. Both of those materials are good insulators, and can hold a noticeable charge for an appreciable amount of time.
"domain": "physics.stackexchange",
"id": 6417,
"tags": "electrostatics, insulators"
} |
Protocol partition number and deterministic communication complexity | Question: Besides (deterministic) communication complexity $cc(R)$ of a relation $R$, another basic measure for the amount of communication needed is the protocol partition number $pp(R)$. The relation between these two measures is known up to a constant factor. The monograph by Kushilevitz and Nisan (1997) gives
$$cc(R)/3 \le \log_2(pp(R)) \le cc(R).$$
Regarding the second inequality, it is easy to give (an infinite family of) relations $R$ with $\log_2(pp(R)) = cc(R)$.
Regarding the first inequality, Doerr (1999) showed that we can replace the factor $c=3$ in the first bound by $c=2.223$. By how much can the first bound be improved, if at all?
Additional motivation from descriptional complexity: Improving the constant $2.223$ will result in an improved lower bound on the minimum size of regular expressions equivalent to a given DFA describing some finite language, see Gruber and Johannsen (2008).
Although not directly related to this question, Kushilevitz, Linial and Ostrovsky (1999) gave relations $R$ with $cc(R)/(2-o(1)) \ge \log_2(rp(R))$, where $rp(R)$ is the rectangle partition number.
EDIT: Notice that the above question is equivalent to the following question in Boolean circuit complexity: What is the optimum constant $c$ such that every boolean DeMorgan formula of leafsize L can be transformed into an equivalent formula of depth at most $c \log_2L$?
References:
Kushilevitz, Eyal; Nisan, Noam: Communication Complexity. Cambridge University Press, 1997.
Kushilevitz, Eyal; Linial, Nathan; Ostrovsky, Rafail: The Linear-Array Conjecture in Communication Complexity is False, Combinatorica 19(2):241-254, 1999.
Doerr, Benjamin: Communication Complexity and the Protocol Partition Number, Technical Report 99-28, Berichtsreihe des Mathematischen Seminars der Universität Kiel, 1999.
Gruber, Hermann; Johannsen, Jan: Optimal Lower Bounds on Regular Expression Size using Communication Complexity. In: Foundations of Software Science and Computation Structures 2008 (FoSSaCS 2008), LNCS 4962, 273-286. Springer.
Answer: OK, so let me try to prove that two is enough, that is, $cc(R)\le 2\log_2(pp(R))$. Sorry, but sometimes I write "leaves" when I mean the number of leaves divided by pp(R); whenever the number is smaller than 1, I obviously mean the latter. Also, I usually write < instead of $\le$ to enhance non-TeX readability.
Suppose for contradiction that there is an R for which this is not true, and let us take the R with the smallest possible pp(R) that violates the inequality. We basically have to show that, using two bits, we can halve the number of leaves in all four outcomes of the protocol tree; then we are done by induction.
Denote the possible set of inputs of Alice by X and of Bob by Y.
Take the center of the protocol tree that achieves pp(R) leaves, i.e., the node whose deletion makes the tree fall into three parts, each having at most 1/2 of the pp(R) leaves, and denote the corresponding inputs by X0 and Y0.
Without loss of generality we can suppose that Alice speaks at the center and she tells whether her input belongs to XL or XR, whose disjoint union is X0.
Denote the ratio of the leaves to pp(R) in XL $\times$ Y0 by L, in XR $\times$ Y0 by R and in the rest by D.
Now we divide the rest into three more parts, similarly to Doerr, denoting the leaves whose rectangle intersect Y0 $\times$ X by A, whose rectangle intersect X0 $\times$ Y by B and the rest by C.
Notice that A+B+C=D.
Now we know that L+R>1/2, L,R<1/2 and without loss of generality we can suppose that L is at most R. We also know D=A+B+C<1/2. It follows that 2L+A+B<1, from which we know that either L+A<1/2 or L+B<1/2, these will be our two cases.
Case L+A<1/2: First Bob tells whether his input belongs to Y0 or not. If not, we have at most D<1/2 leaves left. If it does, then Alice tells whether her input belongs to XR or not. If not, we have at most L+A<1/2 leaves left. If it does, then we have R<1/2 leaves left.
Case L+B<1/2: First Alice tells whether her input belongs to XR or not. If it does, then Bob tells whether his belongs to Y0 or not, depending on this we have R or B leaves remaining. If the input of Alice is not in XR, then Alice tells whether her input is in XL or not. If it is, then we have L+B<1/2 leaves remaining. If not, we have at most D<1/2 leaves remaining.
In all cases we are done. Let me know what you think. | {
"domain": "cstheory.stackexchange",
"id": 1569,
"tags": "fl.formal-languages, lower-bounds, open-problem, regular-expressions, communication-complexity"
} |
Parenthesis checker | Question: I am working on a parenthesis checker program in Java that reads in a text stream from standard input and uses a stack to determine whether or not its parentheses are properly balanced. For example, this should print true for [()]{}{[()()]()} and false for [(]). I have made a stack class of my own:
public class Stack {
private char items[];
private int top;
Stack(int n){
items = new char[n];
top = -1;
}
void push(char c){
if(top == items.length-1){
System.out.println("Stack full.");
return;
}
top++;
items[top] = c;
}
char pop(){
if(isEmpty()){
System.out.println("Stack empty");
return (char)0;
}
char p;
p = items[top];
top--;
return p;
}
boolean isEmpty(){
if(top == -1)
return true;
else
return false;
}
}
This is my code for the parenthesis class:
public class Parenthesis {
public static void main(String args[]) throws IOException{
int size;
String str;
Boolean isValid;
BufferedReader br =
new BufferedReader(new InputStreamReader(System.in));
System.out.println("Enter Expression:");
try{
str = br.readLine();
}catch(IOException i)
{
System.out.println("Invalid Input");
str ="";
}
if(checkValid(str))
System.out.println("Valid::All Parentheses are balanced.");
else
System.out.println("Invalid::Parentheses are not balanced.");
}
public static Boolean checkValid(String str){
char sym,prev;
Stack s = new Stack(str.length());
for(int i=0; i<str.length();i++){
sym = str.charAt(i);
if(sym == '(' || sym=='{' || sym=='['){
s.push(sym);
}
if(sym == ')' || sym=='}' || sym==']'){
if(s.isEmpty()){
return false;
}
else{
prev = s.pop();
if(!isPairMatch(prev,sym))
return false;
}
}
}
if(!s.isEmpty())
return false;
return true;
}
public static boolean isPairMatch(char character1, char character2){
if(character1 == '(' && character2 == ')')
return true;
else if(character1 == '{' && character2 == '}')
return true;
else if(character1 == '[' && character2 == ']')
return true;
else
return false;
}
}
Is my way of handling the IOException right?
I don't think there will ever be a stack overflow, as the size of my stack is the string length. And the checkValid method only performs an s.pop() operation if the stack is not empty. So should I remove the overflow and underflow checks within the stack class, as they are redundant and never occur?
I'm also curious to know if this is a good way of doing it. Any suggestions are welcome.
Answer: Stack class
The naming of the constructor's input parameter could be changed to either size or capacity.
The if..else statement inside the isEmpty() method can be replaced by simply returning (top == -1) like Govind Singh Nagarkoti stated in his answer.
If the stack is popped while empty, an exception should be thrown.
If an item is pushed onto the stack while the stack is full, an exception should be thrown or the stack should be made bigger.
The pop() method could be made shorter at the cost of readability like
char pop(){
if(isEmpty()){
throw new java.util.EmptyStackException();
}
return items[top--];
}
Why can this be done? Because the flow is: first get the item at top, then decrement top.
checkValid() method
public static Boolean checkValid(String str){
char sym,prev;
Stack s = new Stack(str.length());
for(int i=0; i<str.length();i++){
sym = str.charAt(i);
if(sym == '(' || sym=='{' || sym=='['){
s.push(sym);
}
if(sym == ')' || sym=='}' || sym==']'){
if(s.isEmpty()){
return false;
}
else{
prev = s.pop();
if(!isPairMatch(prev,sym))
return false;
}
}
}
if(!s.isEmpty())
return false;
return true;
}
Refering to
if(s.isEmpty()){
return false;
}
else{
prev = s.pop();
if(!isPairMatch(prev,sym))
return false;
}
Here the else isn't needed, as if the stack is empty it will return. So this can be refactored to just
if(s.isEmpty()){
return false;
}
prev = s.pop();
if(!isPairMatch(prev,sym)){
return false;
}
At the end of the method you have
if(!s.isEmpty())
return false;
return true;
}
which can be refactored to
return s.isEmpty();
You are calling the str.length() method more than once. A better way is to call it once and store the result in a local variable.
The opening and closing parentheses should be stored inside final char arrays. Checking whether a char is inside one of these arrays can be extracted to its own method.
After doing this your refactored method looks like
private final static char[] openingParenthesis = new char[]{'{', '[', '('};
private final static char[] closingParenthesis = new char[]{'}', ']', ')'};
public static Boolean checkValid(String str) {
char sym, prev;
int length=str.length();
Stack s = new Stack(length);
for (int i = 0; i < length; i++) {
sym = str.charAt(i);
if (isOpeningParenthesis(sym)) {
s.push(sym);
} else if (isClosingParenthesis(sym)) {
if (s.isEmpty()) {
return false;
}
prev = s.pop();
if (!isPairMatch(prev, sym)) {
return false;
}
}
}
return s.isEmpty();
}
private static boolean isClosingParenthesis(char character) {
return isContainedInArray(character, closingParenthesis);
}
private static boolean isOpeningParenthesis(char character) {
return isContainedInArray(character, openingParenthesis);
}
private static boolean isContainedInArray(char character, char[] characters) {
for (char c : characters) {
if (character == c) {
return true;
}
}
return false;
}
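If plain Strings are acceptable, the two membership helpers can collapse into one-liners via String.indexOf; since matching characters sit at the same index in both strings, the same idea also yields the pair check. (A sketch of my own, not part of the original review; the class name is made up.)

```java
public class Parens {
    private static final String OPENING = "([{";
    private static final String CLOSING = ")]}";

    static boolean isOpeningParenthesis(char c) {
        return OPENING.indexOf(c) >= 0;
    }

    static boolean isClosingParenthesis(char c) {
        return CLOSING.indexOf(c) >= 0;
    }

    // matching characters share the same index in OPENING and CLOSING
    static boolean isPairMatch(char open, char close) {
        int i = OPENING.indexOf(open);
        return i >= 0 && i == CLOSING.indexOf(close);
    }

    public static void main(String[] args) {
        System.out.println(isPairMatch('(', ')')); // true
        System.out.println(isPairMatch('[', '}')); // false
    }
}
```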
You can also remove the throws IOException from the main() method, as you catch it. You should also extract the reading from System.in to a separate method, as your main() is doing a little too much.
private static String readInput(String statement) {
BufferedReader br =
new BufferedReader(new InputStreamReader(System.in));
System.out.println(statement);
try{
return br.readLine();
}catch(IOException i)
{
return "";
}
}
after removing the unused size and isValid variables, your main() method would look like
public static void main(String args[]) {
String input = readInput("Enter Expression:");
if(checkValid(input)){
System.out.println("Valid::All Parentheses are balanced.");
} else {
System.out.println("Invalid::Parentheses are not balanced.");
}
}
Also, you should consider renaming the checkValid() method to something more meaningful, like containsBalancedParentheses(). | {
"domain": "codereview.stackexchange",
"id": 10434,
"tags": "java, design-patterns, stack"
} |
ROS detection Arduino | Question:
Is it possible to get an Arduino to do its tasks until ROS is connected, and once ROS is connected, change its tasks? I am unable to find any way to detect a ROS connection. I have seen similar questions where people are trying to figure out if ROS is connected. The common method seems to be using "ros/master" and "rosgraph", neither of which is usable on an Arduino.
Is there a way to get Arduino to detect ROS without the above-mentioned methods?
Thanks
Originally posted by bir on ROS Answers with karma: 13 on 2022-03-23
Post score: 1
Answer:
Can I connect an Arduino to a laptop and run it?
(I have ROS running on my laptop.)
In that case, we believe that the rosserial package can be used to achieve this.
After starting ROS, publish a message on a topic indicating that ROS has connected to the Arduino; the Arduino subscribes to the topic and switches a flag in the callback.
If you switch tasks according to that flag, haven't you done what you wanted to do?
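That pattern might look roughly like this on the Arduino side (an untested sketch of mine; it assumes the rosserial_arduino API from the tutorials, and the topic name "ros_connected" is made up):

```
#include <ros.h>
#include <std_msgs/Empty.h>

ros::NodeHandle nh;
bool ros_active = false;   // flag switched in the callback

void onConnectMsg(const std_msgs::Empty&) { ros_active = true; }
ros::Subscriber<std_msgs::Empty> sub("ros_connected", &onConnectMsg);

void setup() {
  nh.initNode();
  nh.subscribe(sub);
}

void loop() {
  nh.spinOnce();
  if (ros_active) {
    // tasks to run once ROS has announced itself
  } else {
    // standalone tasks
  }
}
```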
ref: https://wiki.ros.org/rosserial/Tutorials
Originally posted by miura with karma: 1908 on 2022-03-25
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 37523,
"tags": "ros"
} |
What is the role of ammonium chloride in the workup of a Grignard reaction? | Question: In the following Grignard reaction, why is aqueous ammonium chloride used to get to the products? I don't see how it participated in or modified the reaction from what is normally expected.
Answer: Ammonium chloride ($\ce{NH4Cl}$) is the work-up reagent that quenches the magnesium alkoxide product of the Grignard addition. It is the reagent of choice as it is a proton source without being acidic; acidic conditions could result in protonation of the tertiary alcohol product and elimination to the alkene. It also ensures that all inorganic salts of Mg will extract into the aqueous phase. | {
"domain": "chemistry.stackexchange",
"id": 11249,
"tags": "organic-chemistry, experimental-chemistry, grignard-reagent"
} |
The origin of the value of speed of light in vacuum | Question: Meaning, why is it the exact number that it is? Why not $2\times10^8$ m/s instead of $3$? Does it have something to do with the mass, size or behavior of a photon?
To be clear, I'm not asking how we determined the speed of light. I know there isn't a clear answer, I'm really looking for the prevailing theories.
Answer: Tom, would you have asked the question "why is the speed of light 1 ls/s" if we happened to measure distance in lightseconds and time in seconds?
The true answer to your question is: the speed of light is 1 if you measure distance and duration in compatible units, and it is whatever your system of units defines it to be if you adopt units that are more cumbersome. Another way of explaining is that speed - loosely speaking - corresponds to an angle in spacetime. And angles are dimensionless.
I know, this is not seen as a satisfactory answer. But that is because you ask the wrong question. The right question is "why is everything around us so slow? Why are the speeds we typically encounter for material objects around 10^-8 level?" | {
"domain": "physics.stackexchange",
"id": 88016,
"tags": "special-relativity, speed-of-light, physical-constants"
} |
Extracting the IP addresses of Docker containers using JSON API | Question: The following piece of code is imported in another file to get the status of the docker containers. I am printing the required information in a proper tabular form. I am using next() to find the only key available in the dictionary. This particular key changes, hence I used next() to find it. But next() raises an exception when it reaches the end. Currently I am handling that using pass.
My question is: "Is there a better way to handle the StopIteration exception raised by next()?"
import requests
from prettytable import PrettyTable
def container_status(status):
url = None
if status == "all": url = "http://127.0.0.1:6000/containers/json?all=1"
elif status == "running" : url = "http://127.0.0.1:6000/containers/json?all"
else: raise ValueError("status should be either 'all' or 'running'")
return requests.get(url)
def active_containers(status):
response = container_status(status)
table = PrettyTable(["Container Name", "Container ID", "Status", "IP ADDR"])
for i in response.json():
try:
table.add_row([i["Names"][0].encode('utf-8').replace('/', ''),
i['Id'].encode('utf-8')[:12],
i["State"],
i["NetworkSettings"]["Networks"][next(iter(i["NetworkSettings"]["Networks"]))]["IPAddress"]])
except StopIteration:
pass
print(table)
What are the other possibilities where I can improve the code?
Answer: Dictionaries, as any iterable in Python can be unpacked. It means that, if you know the length of said iterable, you can use as many variables to hold each item of the iterable independently.
For instance:
>>> l = [1, 2, 3]
>>> a, b, c = l
>>> a
1
>>> b
2
>>> c
3
>>> d = {"one": 1, "two": 2}
>>> a, b = d
>>> a
"two"
>>> b
"one"
(order may vary for the dictionary)
If the number of elements in the iterable and the number of variables mismatch, you get a ValueError:
>>> d = {"one": 1}
>>> a, b = d
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: not enough values to unpack (expected 2, got 1)
So, if you know that a dictionary has a single key, you can
network_settings, = i["NetworkSettings"]["Networks"]
and get it that way. If it appears that the dictionary may not contain that key, you have no other choice but to use a try: ... except ValueError: pass.
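For instance, with a payload shaped like Docker's (the network name here is a made-up example):

```python
# A single-network dict mimicking i["NetworkSettings"]["Networks"]
networks = {"bridge": {"IPAddress": "172.17.0.2"}}

name, = networks                    # unpacks the lone key, else ValueError
print(name)                         # bridge
print(networks[name]["IPAddress"])  # 172.17.0.2
```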
However, you only care about the values of this dictionary. So I wouldn't use this directly on the dictionary but on its .values() (.itervalues() in Python 2). So the loop would look like:
for container in response.json():
network_settings, = container['NetworkSettings']['Networks'].values()
table.add_row([
container['Names'][0].encode('utf-8').replace('/', ''),
container['Id'].encode('utf-8')[:12],
container['State'],
network_settings['IPAddress']])
or
for container in response.json():
try:
network_settings, = container['NetworkSettings']['Networks'].values()
except ValueError:
continue
table.add_row([
container['Names'][0].encode('utf-8').replace('/', ''),
container['Id'].encode('utf-8')[:12],
container['State'],
network_settings['IPAddress']])
In case of possible missing keys.
Now, for the rest of the code. I wouldn't use any of the prettytable stuff within active_containers. It may be interesting, for reusability, to extract only the relevant information in this function, and let the caller format it.
I would also change the meaning of the container_status function a bit. Looking at the URLs, it seems to me that the status parameter is only there to filter out inactive containers. So it should be more obvious using a binary choice:
def container_status(include_stopped=False):
url = 'http://127.0.0.1:6000/containers/json?all'
if include_stopped:
url += '=1'
return requests.get(url)
You get to choose the default based on your most common needs.
Full code would look like:
import requests
from prettytable import PrettyTable
def container_status(include_stopped=False):
url = 'http://127.0.0.1:6000/containers/json?all'
if include_stopped:
url += '=1'
return requests.get(url)
def active_containers():
for container in container_status().json():
network_settings, = container['NetworkSettings']['Networks'].values()
yield (
container['Names'][0].encode('utf-8').replace('/', ''),
container['Id'].encode('utf-8')[:12],
container['State'],
network_settings['IPAddress'])
def main():
table = PrettyTable(['Container Name', 'Container ID', 'Status', 'IP ADDR'])
for container_infos in active_containers():
table.add_row(container_infos)
print(table)
if __name__ == '__main__':
main()
(Just assuming that active_containers here means you only want running ones, main and active_containers may need an include_stopped parameter otherwise.) | {
"domain": "codereview.stackexchange",
"id": 25693,
"tags": "python, parsing, json, iteration"
} |
Is it physically realistic to have an electric field and polarisation density but no displacement field? | Question: Given a Lagrangian density that describes a classical dielectric in interaction with the EM field, I found the Euler-Lagrange equations, and in the case of the electric field, worked through to find that $\vec{P} = -\epsilon_0 \vec{E}$. For some reason alarm bells are going off in my head but I'm not sure why. This implies from the usual equation $\vec{D} = \epsilon_0 \vec{E} + \vec{P}$ that the displacement field is zero. I suppose this just means that inside this dielectric the polarisation density exactly compensates $\epsilon_0 \vec{E}$, but I'm feeling unsure. Could somebody reassure (/correct!) me?
Answer: The polarisation field is given by ${\bf P} = \chi_e \epsilon_0 {\bf E}$ in linear dielectric materials, where $\chi_e=\epsilon_r - 1$.
i.e., the polarisation is defined to be in the same direction as the electric field.
Therefore the equation ${\bf D} = \epsilon_0 \epsilon_r {\bf E}$ is equivalent to ${\bf D} = \epsilon_0 (1+\chi_e) {\bf E} = \epsilon_0 {\bf E} + {\bf P}$.
So unless $\chi_e$, the dielectric susceptibility is negative, the electric field and polarisation field act in the same direction, so can't cancel. Such materials do exist (but I am no expert here), but I suspect you have just made a mistake in deriving the relationship between polarisation field and electric field. | {
"domain": "physics.stackexchange",
"id": 18071,
"tags": "electromagnetism, electromagnetic-radiation, classical-electrodynamics"
} |
Calculating the pH upon titrating barium hydroxide and hydrochloric acid | Question: Question:
Calculate the pH produced from mixing $\pu{25.0 mL}$ of $\pu{0.420 M} $ $\ce{Ba(OH)_2}$ with $\pu{125 mL}$ of $\pu{0.120 M}$ $\ce{ HCl}$.
Attempt:
I'm learning about acids and bases right now (and not very used to it yet). First I wrote the chemical equation $$\ce{2HCl + Ba(OH)_2 \rightarrow 2H_2O + BaCl_2}$$
Then I calculated the number of moles of each reactant. $\ce{Ba(OH)2}$ has
$$ \pu{0.025 L} \times \pu{0.420 M} = 0.0105 $$ moles
and for $\ce{HCl}$ $$\pu{0.125 L} \times \pu{0.120 M} =0.0150$$ So $\ce{Ba(OH)2}$ is the limiting reactant. However, I'm not sure how to proceed, so any help would really be appreciated!
Answer: You are on the right track, just one thing: $\ce{Ba(OH)2}$ is in excess and $\ce{HCl}$ is the limiting reactant, which means all of the $\ce{HCl}$ is used up. That alone tells us the pH is above 7, as $\ce{Ba(OH)2}$ is alkaline and there is no acid left.
To find the exact pH of the resulting solution, first we need to find how much of $\ce{Ba(OH)2}$ is left. Since all of the $\ce{HCl}$ is used, we can find the number of moles of $\ce{Ba(OH)2}$ needed to react with it. As you've stated, the number of moles of $\ce{HCl}$ is 0.0150. From the equation we can see that 2 moles of $\ce{HCl}$ reacts with 1 mole of $\ce{Ba(OH)2}$. So 0.0150 moles of $\ce{HCl}$ will react with $\frac{0.0150}{2}$ = 0.0075 moles of $\ce{Ba(OH)2}$. But there are 0.0105 moles of $\ce{Ba(OH)2}$ present. So we find the difference to find how many moles there are in the solution when the reaction is finished.
This number is 0.0105 - 0.0075 = 0.003 moles. Now, one thing about acids and bases, is that their ions dissociate in solution. The more ions dissociate, the stronger the acid or base is. Here, there is no acid remaining, but only 0.003 moles of said base present. $\ce{Ba(OH)2}$ dissociates as follows:
$$\ce{Ba(OH)2 <=> 2OH- + Ba^2+}$$
This means that one mole of $\ce{Ba(OH)2}$ dissociates to form 2 moles of $\ce{OH-}$ ions. So 0.003 moles of $\ce{Ba(OH)2}$ dissociates to form 0.003 * 2 = 0.006 moles of $\ce{OH-}$ ions. The resulting solution has a volume of 125 + 25 = 150 mL. The concentration of $\ce{OH-}$ ions is found by $\frac{0.006}{0.150}$ = 0.04 M.
To find the pH, it's easier to first find the pOH. That can be found using the formula: $$\mathrm{pOH} = -\log[\ce{OH-}]$$
In your case, the value of $[\ce{OH-}]$ is 0.04 and the pOH is $$\mathrm{pOH} = -\log(0.04)$$
or pOH = 1.397940. FINALLY, the pH can be found by subtracting the pOH from 14 because $$\mathrm{pOH} + \mathrm{pH} = 14$$
In this case, pH = 14 - 1.397940 = 12.60206.
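The arithmetic above can be double-checked with a few lines of Python (a quick sketch using the numbers from this problem):

```python
import math

n_base = 0.0250 * 0.420            # mol of Ba(OH)2 in 25.0 mL of 0.420 M
n_acid = 0.125 * 0.120             # mol of HCl in 125 mL of 0.120 M
n_base_left = n_base - n_acid / 2  # 2 mol HCl neutralise 1 mol Ba(OH)2
conc_oh = 2 * n_base_left / (0.0250 + 0.125)  # each Ba(OH)2 releases 2 OH-; total volume 150 mL
poh = -math.log10(conc_oh)
ph = 14 - poh
print(round(ph, 5))                # 12.60206
```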
This is my first ever answer on this site, hope this helps. If I've made any mistakes, sorry and feel free to help out, ever-active community of Chemistry Stack Exchange | {
"domain": "chemistry.stackexchange",
"id": 10097,
"tags": "acid-base, ph, concentration, mole"
} |
Could time be changing without us knowing? | Question: I've been wondering about relativity for a while now.
We seem to measure time by using the speed of light.
Light can be slowed down or stopped
We assume light doesn't change in a vacuum - yet, how can we know it isn't changing relative to everything right now? If we can affect light, it seems we should expect other known/unknown forces to have the same power.
Basically, is our whole view of time measurement based on light which could theoretically be affected by unknown forces right now, in the past, or in the future?
So my question is, how can we know for sure time is certain? Might it have been relative in the past / present / future?
Answer: Don't read too much into "Light can be slowed down or stopped". Be careful of such statements - they are either journalistic "sensationalism" or the valid effort of science writers trying to explain the properties of exotic optical materials in everyday words.
Firstly, in general relativity, measured lightspeed can vary (see, for example, Wikipedia's Rindler Co-ordinates article) in a frame with proper acceleration, so there the variation is to do with the observer's whim (he or she decides to go cruising in a rocket and blasts off relative to an inertial frame. Or, more mundanely, they decide to sit on the surface of a planet and insist on doing GR calculations in the Rindler metric even though their accelerometer screams at them that there is a more "natural" frame to do their analysis in). OK, I'm being slightly flippant - the Rindler metric has a valid use simplifying some analysis but the seemingly nonconstant lightspeed (nonconstant co-efficient $g_{00} = g^2 x^2$ in the fundamental form) it introduces is a property of the deliberately chosen co-ordinates, not of the physics - in the same way that the superficial "bendiness" of spherical or cylindrical polar co-ordinates labelling Euclidean $\mathbb{R}^3$ belongs wholly to those special co-ordinate systems, and not to the underlying flat Euclidean $\mathbb{R}^3$ space.
Secondly, there is a great deal of interest in "slowing light down" for the purposes of optical storage and such like. All optical media made of matter (i.e. as opposed to the vacuum) "slow light down". Call me pedantic, but this really isn't "light" in the fundamental sense. I like to think of such things as quantum superpositions of free photons and excited matter states: an optical medium absorbs photons and then re-emits another "elastically" (i.e. in the same momentum, energy and angular momentum state) a short time later (this happens through interaction with the material's electrons), thus seemingly, through this delay, slowing the light down. "Slow light" research is simply finding really exotic versions of this mechanism so that pulses can be held in storage for either exotic switching purposes in a telecoms network (optically re-ordering a data packet) or keeping them idle if needed for data processing or quantum computing.
But these valid ideas of "slow light" don't affect $c$ as we know it. As for whether this $c$, as the fundamental speed of a massless particle for all observers, has been different in the past or whether it might be so in the future: this is the meaningful part of your question, whose answer I shall defer to a cosmologist. To my untrained eye, if General Relativity is right, the Universe and its history is supposedly a manifold solving the Einstein Field Equations (there is ONE manifold with 3+1 dimensions, not an evolving 3-manifold with a history), and my understanding of the physical content of GR is that it roughly says that tangent spaces to this manifold constructed in a certain way from geodesics are inertial frames wherein special relativity holds, so, by definition, wherever and whenever general relativity holds, $c$ has to be constant. So we'll need a real general relativity-ist to confirm this understanding and cosmologist to answer where, when and how GR is seriously taken to falter. | {
"domain": "physics.stackexchange",
"id": 8800,
"tags": "special-relativity, speed-of-light, time"
} |
Bainite at room temperature? | Question: I read that if, after a tempering from T>723°C to T*=[250°C,550°C], I do an isothermal transformation of an eutectoid steel, I'll get bainitic microstructure.
Does this microstructure exist only if temperature remains at T=T* also after microstructure was formed?
In other words, if I do a cooling until room temperature (after bainite was formed), will it exist at this temperature?
Thank you very much.
Answer: The microstructure is retained upon cooling. After bainite forms, you can even quench it and it will remain bainite. The only way to get martensite is to cool the steel so fast that bainite has no time to form. The only way to get bainite is to cool fast enough that pearlite has no time to form, but not so fast that you get martensite. After the microstructures have formed, they remain, unless you reheat the steel. | {
"domain": "chemistry.stackexchange",
"id": 7458,
"tags": "metal, metallurgy"
} |
Plotting coverage of annotation over collection of region | Question: I'm trying to plot "meta" coverage of annotation: i.e. features (eg. gene class) over certain regions. It is similar to read coverage plots over gene body, except my input is two bed files (both in BED6 format) - (A) one containing the regions for which the meta plot is needed, and another being the list of genes or any feature whose coverage over the regions in (A) is needed.
Is there any package or tool which can create such plots (my domain is limited to python but I can try to work with R) ?
Something akin to this but for whole gene body[1]:
[1]- Kelley, D, Rinn, J (2012). Transposable elements reveal a stem cell-specific class of long noncoding RNAs. Genome Biol., 13, 11:R107.
Edit: The BED file containing list of regions looks like this:
chr3 39218734 40053659 region1 0.92426187419769 +
chr4 140163762 140453127 region2 0.896103896103896 -
chr7 40549151 41205036 region3 0.986072423398329 +
chr8 81291743 81963246 region4 0.94184168012924 -
chr9 12284032 12539789 region5 0.95539033457249 -
And the bed file containing features to be plotted looks like this:
chr3 39218100 40053200 LINE 1 +
chr4 140163962 140453027 LINE 1 -
chr7 40549002 41204999 SINE 1 +
chr8 81291143 81963846 LTR 1 -
chr9 12284332 12539720 LTR 1 -
Answer: If one assumes that repeat elements of a given type (e.g., LINEs) don't overlap each other, then the following will work:
1. Split your BED file by repeat element, such that you have a LINE.bed, SINE.bed, etc.
2. Convert those to bedGraph (e.g., awk 'BEGIN{OFS="\t"}{print $1,$2,$3,"1.0"}' LINE.bed > LINE.bedGraph).
3. Use UCSC tools to convert those bedGraph files to BigWig.
4. Install deepTools and run computeMatrix reference-point -b 2500 -a 20 -S LINE.bigWig SINE.bigWig LTR.bigWig -R Regions_Of_Interest.BED -o foo.mat.gz
5. Make a profile plot with plotProfile (plotProfile -m foo.mat.gz -o foo.png --perGroup)
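If you'd rather stay in Python, the underlying computation can be sketched directly. This is a toy illustration of the "meta coverage" idea built from the example BED lines in the question, not a substitute for deepTools, and it assumes the feature intervals of one class don't overlap each other:

```python
def binned_coverage(region, features, nbins=10):
    """Fraction of each of `nbins` equal bins of `region` covered by `features`."""
    chrom, start, end = region
    length = end - start
    profile = []
    for b in range(nbins):
        bin_start = start + b * length // nbins
        bin_end = start + (b + 1) * length // nbins
        covered = sum(max(0, min(bin_end, fe) - max(bin_start, fs))
                      for fc, fs, fe in features if fc == chrom)
        profile.append(covered / (bin_end - bin_start))
    return profile

# toy data taken from the example BED lines above
regions = [("chr3", 39218734, 40053659), ("chr7", 40549151, 41205036)]
lines = [("chr3", 39218100, 40053200), ("chr7", 40549002, 41204999)]

profiles = [binned_coverage(r, lines) for r in regions]
# the "meta" profile is the per-bin average across all regions
meta = [sum(col) / len(col) for col in zip(*profiles)]
```

Plotting meta against bin index (with matplotlib, for instance) gives the averaged profile over the region body; real data would also need overlapping feature intervals merged first.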
If you do have overlapping regions you'll need to first make a disjoint set of intervals (you can use bedops for this), find the coverage of them (e.g., bedtools coverage) and then continue on with that. | {
"domain": "bioinformatics.stackexchange",
"id": 860,
"tags": "python, software-recommendation, visualization, coverage"
} |
Pumping Lemma for regular languages proof doubt - Sipser Book | Question: I was reading the proof of pumping lemma from Sipser's book. I couldn't understand certain things mentioned there.
In the second paragraph he has written, "because $r_l$ occurs among the first $p+1$ places, we have $l \le p+1$". Here, does $l$ denote the number of states visited?
Also he wrote "We know that $j \neq l$, so $|y| > 0$; and $l \le p+1$; so $|xy| \le p$"
What I didn't understand is
$j \neq l$
$j \neq l$, so $|y| > 0$
$l \le p+1$; so $|xy| \le p$
Answer: You missed what $j$ and $l$ are. Read the paragraph again. They are the two indices of the same state in the list of visited states. For example, if some automaton goes from $s_1$ to $s_3$ to $s_5$ to $s_3$ to $s_7$, then you could pick $j = 2$ and $l = 4$, because both the $2$nd and $4$th elements in the sequence are $s_3$.
They are always (and, assuming $n > p$, can always be) chosen different.
$y$ was defined as the substring which takes $r_j$ to $r_l$. $j \neq l$ and you need at least one symbol to change state, hence $|y| > 0$.
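To make this bookkeeping concrete, here is a small Python sketch (my own toy example, not from Sipser) that runs a 3-state DFA for $\{a^n : n \text{ divisible by } 3\}$ on an accepted string, picks $j$ and $l$ from the first repeated state, and checks that the resulting $y$ really pumps:

```python
def run(dfa, s):
    """Return the list of visited states r_1..r_{n+1} and whether s is accepted."""
    state = dfa["start"]
    visited = [state]
    for ch in s:
        state = dfa["delta"][(state, ch)]
        visited.append(state)
    return visited, state in dfa["accept"]

def split_xyz(dfa, s):
    """Split an accepted string into x, y, z using the first repeated state."""
    visited, accepted = run(dfa, s)
    assert accepted
    first_seen = {}
    for idx, st in enumerate(visited):   # idx is 0-based, so this is r_{idx+1} in Sipser's notation
        if st in first_seen:
            j, l = first_seen[st], idx   # y is the part read between the two visits
            return s[:j], s[j:l], s[l:]
        first_seen[st] = idx

# 3-state DFA accepting a^n with n divisible by 3, so p = 3
dfa = {"start": 0, "accept": {0},
       "delta": {(0, "a"): 1, (1, "a"): 2, (2, "a"): 0}}
x, y, z = split_xyz(dfa, "aaaaaa")       # x = "", y = "aaa", z = "aaa"
assert len(y) > 0 and len(x + y) <= 3    # |y| > 0 and |xy| <= p
for k in range(5):                        # x y^k z stays in the language
    assert run(dfa, x + y * k + z)[1]
```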
The substring $xy$ takes $r_1$ to $r_l$. A string of length $m$ visits $m+1$ states, so if we know that $xy$ visited $l \leq p+1$ states then $|xy| = l-1 \leq p$. | {
"domain": "cs.stackexchange",
"id": 1164,
"tags": "regular-languages, automata, pumping-lemma"
} |
Does anyone know if there is a term to describe the following process? | Question: I'm actually currently studying physics but this came up in my textbook (taken from Giancoli 7th edition section 16-10):
The random (thermal) velocities of molecules in a cell affect cloning. When a bacterial cell divides, the two new bacteria have nearly identical DNA. Even if the DNA were perfectly identical, the two new bacteria would not end up behaving in exactly the same way. Long protein, DNA, and RNA molecules get bumped into different shapes, and even the expression of genes can thus be different. Loosely held parts of large molecules such as a methyl group can also be knocked off by a strong collision with another molecule. Hence, cloned organisms are not identical, even if their DNA were identical. Indeed, there can not really be genetic determinism.
I'm aware of different biological processes that can affect gene expression but this is random kinetic motion! Would you call this one of the epigenetic mechanisms that can affect gene expression? If so it would underlie ALL epigenetic mechanisms because all molecules have random kinetic motion.
Answer: The two terms of main interest to you
Cellular noise
Cellular noise is random variability in quantities arising in cellular biology. For example, cells which are genetically identical, even within the same tissue, are often observed to have different expression levels of proteins, different sizes and structures. These apparently random differences can have important biological and medical consequences
Cellular noise was originally, and is still often, examined in the context of gene expression levels – either the concentration or copy number of the products of genes within and between cells. As gene expression levels are responsible for many fundamental properties in cellular biology, including cells' physical appearance, behaviour in response to stimuli, and ability to process information and control internal processes, the presence of noise in gene expression has profound implications for many processes in cellular biology.
There is also the term developmental noise.
Developmental noise is a concept within developmental biology in which the phenotype varies between individuals even though both the genotypes and the environmental factors are the same for all of them. Contributing factors include stochastic gene expression and other sources of cellular noise.
Developmental noise often sounds like a synonym of cellular noise, but it is meant to include more of the processes that cause phenotypic variation than cellular noise does. For example, it is common to consider "micro-environmental variation" as being part of developmental noise and not of cellular noise.
The way you phrased your question
In your question you describe only sources of cellular noise. However, you end up saying "Hence, cloned organisms are not identical, even if their DNA were identical." I just want to highlight that there are reasons other than cellular noise for which two clones differ. These include micro-environmental variance, physiological noise, epigenetic variance and macro-environmental variance (often just called environmental variance). Note that I had never encountered the term "physiological noise" before, I just made it up, but I wanted to highlight that noise also happens in between-cell processes, not only in within-cell processes.
Intro to quantitative genetics
For a short introduction to quantitative genetics and the different sources of phenotypic variance (genetic variance, environmental variance, developmental noise, etc..) and how the concept of heritability fits into this discussion, please have a look at the post Why is a heritability coefficient not an index of how “genetic” something is? | {
"domain": "biology.stackexchange",
"id": 8482,
"tags": "genetics, dna"
} |
jQuery Shorten Switch Statement | Question: I have a switch statement that checks whether or not one string matches another that was stored in an array. However, there are multiple cases in which the same result is returned for multiple matches. I originally had a large amount of if/else checks, but tried to shorten that down with the switch statement. I was wondering, is there any way to shorten down this switch statement?
function checkLoc(xstreets) {
var address;
if(xstreets[2] == undefined) {
address = xstreets[0]+" & "+xstreets[1]+" Chicago IL";
}
else {
switch(xstreets[2]) {
case "aon":
address = "Aon Center Chicago IL";
break;
case "trump":
address = "401 North Wabash Avenue, Chicago, IL";
break;
case "pritzker":
address = "Pritzker Park Chicago IL";
break;
case "hyde":
case "hydepark":
case "uofc":
case "uchicago":
address = "5801 South Ellis Avenue, Chicago, IL";
break;
case "reg":
address = "1100 E 57th St, Chicago, IL 60637";
break;
case "willis":
case "wttw":
address = "233 South Wacker Drive, Chicago, IL";
break;
case "600":
address = "600 West Chicago Avenue, Chicago, IL";
break;
default:
address = xstreets[0]+" & "+xstreets[1]+" Chicago IL";
break;
}
}
return address;
}
Answer: You can convert it to two lookup objects. The first will normalize multiple "aliases" of the same key into one, and the second will hold your address values.
var aliases = {
"hyde" : "hydepark",
"uofc" : "hydepark",
"uchicago" : "hydepark",
"wttw" : "willis"
};
var addresses = {
"hydepark" : "5801 South Ellis Avenue, Chicago, IL",
...
};
var alias = aliases[xstreets[2]];
if(alias == undefined) {
alias = xstreets[2]
}
var address = addresses[alias];
if(address == undefined) {
return xstreets[0]+" & "+xstreets[1]+" Chicago IL";
}
return address; | {
"domain": "codereview.stackexchange",
"id": 1890,
"tags": "javascript"
} |
Applying Gauss's law on a disc | Question: Shouldn't the electric flux through a circular disc due to a point charge kept at some finite distance from it be zero as all field lines which enter it exit it also and hence the net would be zero?
Answer: Your disk isn't a closed surface, so Gauss's law doesn't apply here. The idea of field lines entering and exiting the surface relies on the idea of your field line entering on one part of the surface and exiting on the other part of the surface. At a single point on a surface you don't say the field line is entering and exiting the surface. There will be a non-zero flux through the disk since at all points on the disk the field will have components in a single direction through that disk. The only way to make the flux $0$ through the disk would be to place the point charge in the plane of the disk.
This goes to show that the "entering and exiting" of field lines is a nice qualitative picture, but when you want to actually calculate flux you need to go back to the mathematical definition:
$$\Phi=\int \mathbf E\cdot\text d\mathbf a$$
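As a concrete check, suppose (an added assumption, since the question doesn't specify the geometry) that the point charge $q$ sits on the axis of the disk at a distance $d$ from its centre. For a disk of radius $R$ the integral then has the closed form $\Phi = \frac{q}{2\epsilon_0}\left(1 - \frac{d}{\sqrt{d^2+R^2}}\right)$, and a crude numerical integration over annular rings reproduces it:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def flux_through_disc(q, d, R, nsteps=100000):
    """Integrate E . dA over a disc of radius R, with charge q on the axis at distance d."""
    k = 1.0 / (4.0 * math.pi * EPS0)
    total = 0.0
    dr = R / nsteps
    for i in range(nsteps):
        r = (i + 0.5) * dr                             # midpoint radius of the ring
        e_normal = k * q * d / (r * r + d * d) ** 1.5  # axial (normal) field component
        total += e_normal * 2.0 * math.pi * r * dr     # ring area element
    return total

q, d, R = 1e-9, 0.1, 0.2
numeric = flux_through_disc(q, d, R)
closed_form = q / (2 * EPS0) * (1 - d / math.hypot(d, R))
```

The flux is strictly positive for any finite $d$, in line with the argument above.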
For your point charge example with your disk as your surface, this is obviously not $0$ since, as I mentioned earlier, all values of the integrand on the disk will have the same sign (positive or negative depending on the orientation of your surface and the charge in question). | {
"domain": "physics.stackexchange",
"id": 59776,
"tags": "electrostatics, electric-fields, charge, gauss-law"
} |
Bakery algorithm: what is the choosing[] boolean array for? | Question: I'm studying the bakery algorithm.
do {
choosing[i] = true;
number[i] = max(number[0], number[1], ..., number[n-1]) + 1;
choosing[i] = false;
for (j = 0; j < n; j++) {
while (choosing[j]); // wait until thread j has finished choosing its number
while ((number[j] != 0) && ((number[j], j) < (number[i], i)));
}
critical section
number[i] = 0;
remainder section
} while (1);
Please help me understand what exactly this choosing array is doing here. I checked this for all the conditions that I could generate to understand its use, but it doesn't seem to be making any difference. Please help me with a case.
Answer: choosing[i] is true while number[i] is being updated to be larger than all the other values in the number array — the new ticket value that the thread is taking. In the body of the for loop, the code first waits for choosing[j] to be false, which indicates that thread number j has chosen its ticket for this round. If thread j goes on executing while thread i hasn't entered the critical section yet, then number[i] won't change, so:
If thread j is still in the same round and hasn't finished the critical section yet, then number[j] is the result of a max computation which may or may not have taken the current value of number[i] into account, depending on the interleaving of the execution of threads i and j.
If thread j is in the remainder section then number[j] is 0.
If thread j has begun a new round then its number[j] is the result of a max computation that took the current value of number[i] into account, thus number[j] > number[i].
In the first case, threads i and j are running their for loop concurrently; only one of them will win the (number[j],j)<(number[i],i) comparison and proceed into the critical section, and the other will wait for the winner's number value to become acceptable again. In the second case, thread j is not competing for the critical section. In the third case, thread j has a higher number and will wait in its own for loop while thread i hasn't reset its number[i] value.
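Putting the pieces together, here is a minimal executable sketch of the algorithm in Python (my own translation, not from the original source; it relies on CPython's GIL to make the individual list reads and writes atomic, and calls sleep(0) in the spin loops simply to yield the interpreter):

```python
import threading
import time

class BakeryLock:
    def __init__(self, n_threads):
        self.n = n_threads
        self.choosing = [False] * n_threads
        self.number = [0] * n_threads

    def acquire(self, i):
        self.choosing[i] = True
        self.number[i] = max(self.number) + 1   # take a ticket
        self.choosing[i] = False
        for j in range(self.n):
            while self.choosing[j]:             # wait while thread j picks its ticket
                time.sleep(0)
            while self.number[j] != 0 and (self.number[j], j) < (self.number[i], i):
                time.sleep(0)                   # thread j holds a smaller (ticket, id) pair

    def release(self, i):
        self.number[i] = 0

counter = 0
lock = BakeryLock(2)

def worker(i, iterations=300):
    global counter
    for _ in range(iterations):
        lock.acquire(i)
        counter += 1                            # critical section
        lock.release(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(2)]
for t in threads: t.start()
for t in threads: t.join()
```

With the lock in place, the two workers finish with counter == 600; mutual exclusion in the critical section is what guarantees that no increment is lost.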
The choosing array works around the lack of atomicity in the computation of number[] values. If it was removed, then you could have two threads computing the same value concurrently and each finding that it had the smaller value at the point of testing. This could happen with as few as two threads (I use the notation k.tmpX to designate thread-local storage of thread k):
initially: number[1] = number[2] = 0
thread 1: //choosing[1] = true
thread 1: 1.tmp1 = number[1]
thread 1: 1.tmp2 = number[2]
thread 2: //choosing[2] = true
thread 2: 2.tmp1 = number[1]
thread 2: 2.tmp2 = number[2]
thread 2: number[2] = max(2.tmp1, 2.tmp2) + 1 = 1
thread 2: //choosing[2] = false
thread 2: //while (choosing[1]) {}
thread 2: while (number[1] ≠ 0 && (number[1], 1) < (number[2], 2)) {}
thread 2: critical section ...
thread 1: number[1] = max(1.tmp1, 1.tmp2) + 1 = 1
thread 1: //choosing[1] = false
thread 1: //while (choosing[2]) {}
thread 1: while (number[2] ≠ 0 && (number[2], 2) < (number[1], 1)) {}
thread 1: critical section ...
(at this point both threads are in the critical section simultaneously)
The two threads computed the same ticket number. The bakery algorithm uses the thread number to determine the priority in this case. However, thread 2 decided to enter the critical section while thread 1 was still preparing, whereas thread 1 saw the new ticket number of thread 2 when deciding whether to enter the critical section but had computed its own ticket number based on thread 2's old ticket number. So thread 1 incorrectly entered the critical section.
The choosing array solves this problem by preventing thread 2 from entering the critical section. Since thread 1 is computing its ticket number at the time thread 2 is preparing to enter the critical section, thread 2 will block on while (choosing[1]) {}, after which it will see the updated value of number[1] and allow thread 1 to enter the critical section first, while thread 2 waits on while (number[1]≠0 && (number[1],1)<(number[2],2)) {}. | {
"domain": "cs.stackexchange",
"id": 3572,
"tags": "algorithms, concurrency, synchronization"
} |
Understand clearly the figure: Illustration of a Convolutional Neural Network (CNN) architecture for sentence classification | Question: I am studying the blog Understanding Convolutional Neural Networks for NLP. It is a very good blog.
One thing I can't understand clearly about this blog is the figure Illustration of a Convolutional Neural Network (CNN) architecture for sentence classification, shown below:
I want to ask:
1) I know the region sizes (2, 3, 4) are like 2-gram, 3-gram, 4-gram words, but what's the meaning of the number of filters? Here there are 2 filters for each region size. Why, in the author's code about sentence classification, is the number of filters defined to be 128? Could you give examples to explain the meaning of the number of filters? For example, using the sentence 'I like this movie very much' would be great.
2) I understand the height of region size (4) is 4, but in the figure the heights of regions (2, 3) are 5 and 6 respectively, and I don't know why; I think the heights of the regions should be 2 and 3.
Answer:
Answering this in terms of NLP examples is quite hard; remember, "All models are wrong, some models are useful." First think of this in an image classification context: you want to use a large number of filters to collect a large number of features out of the image. One could detect edges, another could detect densely coloured areas, one might turn a region to b&w. Extend a similar logic to text: by using a lot of filters, in this case 128, you are trying to capture a lot of features. For an example like "I like movies very much", a certain filter might detect that like is a positive word and not a similarity comparison, and a certain filter of size 2 might detect very much and recognise that it is an expression of degree. You can go on like that; it will be hard to come up with 128 features, but the idea is to get enough features. If you think the number is unreasonable and might lead to overfitting, you can reduce the number and compare your results.
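To make the numbers concrete, here is a small pure-Python sketch (my own toy example) of what one filter of each region size does to a 7-word sentence: each filter slides over the word embeddings and produces one feature map, so 128 filters simply means 128 such maps per region size, and 1-max pooling then keeps a single number from each map:

```python
import random

random.seed(0)

def feature_map(embeddings, region_size, filt):
    """Slide one filter (region_size x dim weights) over the sentence."""
    n = len(embeddings)
    fmap = []
    for start in range(n - region_size + 1):
        window = embeddings[start:start + region_size]
        score = sum(w * e
                    for wrow, erow in zip(filt, window)
                    for w, e in zip(wrow, erow))
        fmap.append(score)
    return fmap

words = "I like this movie very much !".split()   # n = 7 words
dim = 5                                           # toy embedding dimension
emb = [[random.uniform(-1, 1) for _ in range(dim)] for _ in words]

heights = {}
for h in (2, 3, 4):                               # region sizes, i.e. n-gram widths
    filt = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(h)]
    fmap = feature_map(emb, h, filt)
    pooled = max(fmap)                            # 1-max pooling: one number per filter
    heights[h] = len(fmap)

# feature-map height is n - h + 1, so region sizes 2, 3, 4 give maps of height 6, 5, 4
```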
No, 1-max pooling means that you take the maximum value of the output vector after applying a filter to the input. So it has nothing to do with the longest word; rather, you choose the element of the output that expresses the extracted feature most strongly. | {
"domain": "datascience.stackexchange",
"id": 2120,
"tags": "python, deep-learning, nlp, sentiment-analysis, cnn"
} |
Numbers of length N and value less than K | Question: I am solving an interview question from here.
Problem : Given a set of digits (A) in sorted order, find how many numbers of length B are possible whose value is less than number C.
Constraints: 1 ≤ B ≤ 9, 0 ≤ C ≤ 1e9 & 0 ≤ A[i] ≤ 9
Input: A = [ 0 1 5], B= 1 , C = 2 ; Output: 2 (0 and 1 are possible)
Input: A = [0 1 2 5], B = 2, C = 21 ; Output: 5 (10, 11, 12, 15, 20 are possible)
This is my approach
from itertools import product
from itertools import ifilter
def solve(A, B, C):
if A == [] or B > len(str(C)):
return 0
elif B < len(str(C)):
#constraint is B
if B == 1:
new_list = A
return len(new_list)
else:
new_list = list((product((''.join(str(i)for i in A)),repeat = B)))
b = [''.join(num) for num in new_list]
c = list(ifilter(lambda x: x[0]!='0' , b))
return len(c)
elif B == len(str(C)):
#constraint is C
if B == 1:
new_list = [i for i in A if i< C]
return len(new_list)
else:
new_list = list((product((''.join(str(i)for i in A)),repeat = B)))
b = [''.join(num) for num in new_list]
c = list(ifilter(lambda x: x[0]!='0' and int(x) < C , b))
return len(c)
Test cases:
assert solve([2],5,51345) == 1
assert solve([],1,1) == 0
assert solve([ 2, 3, 5, 6, 7, 9 ],5,42950) == 2592
assert solve([0],1,5) == 1
assert solve([0,1,2,5],1,123) == 4
assert solve([0,1,5],1,2) == 2
assert solve([ 3 ],5, 26110) == 0
assert solve([0,1,2,5],2,21) == 5
How can I optimize this code in terms of memory usage?
Answer: Optimise memory usage
You could optimise memory usage by not converting your iterators into lists and by avoiding non-required steps (like join).
Changing a few others details (formatting, adding tests, etc), you'd get something like:
from itertools import product
from itertools import ifilter
def solve(A, B, C):
c_len = len(str(C))
if A == [] or B > c_len:
return 0
elif B < c_len:
# Constraint is B
if B == 1:
return len(A)
else:
candidates = product((str(i) for i in A), repeat = B)
return sum(x[0] != '0' for x in candidates)
else:
assert B == c_len
# Constraint is C
if B == 1:
return sum(i < C for i in A)
else:
candidates = product((str(i) for i in A), repeat = B)
return sum(x[0] != '0' and int(''.join(x)) < C for x in candidates)
assert solve([2],5,51345) == 1
assert solve([],1,1) == 0
assert solve([2, 3, 5, 6, 7, 9],4,42950) == 1296
assert solve([2, 3, 5, 6, 7, 9],5,42950) == 2592
assert solve([0],1,5) == 1
assert solve([0,1,2,5],1,123) == 4
assert solve([0,1,5],1,2) == 2
assert solve([3],5, 26110) == 0
assert solve([0,1,2,5],2,21) == 5
Another algorithm
I'm pretty sure the whole thing can be optimised further by not generating the various numbers to count them but just using mathematical tricks to get the solution with no counting.
The easiest case to handle is B < c_len:
elif B < c_len:
# All combinations of B elements are valid
return len(set(A)) ** B
Actually, as mentioned by Maarten Fabré, this does not handle 0s perfectly. The code below is updated to handle it better.
The last case is trickier. We can try to use recursion to solve smaller versions of the problem. I didn't manage to make this work properly...
from itertools import product, ifilter, dropwhile, product, takewhile
import timeit
def solve_naive(A, B, C):
A = set(str(A))
mini = 10 ** (B-1)
maxi = min(10 * mini, C)
cand = [str(i) for i in (['0'] if B == 1 else []) + range(mini, maxi)]
valid = [i for i in cand if all(c in A for c in i)]
return len(valid)
def solve_op(A, B, C):
# print(A, B, C)
c_len = len(str(C))
if A == [] or B > c_len:
return 0
elif B < c_len:
# Constraint is B
if B == 1:
return len(A)
else:
candidates = product((str(i) for i in A), repeat = B)
return sum(x[0] != '0' for x in candidates)
else:
assert B == c_len
# Constraint is C
if B == 1:
return sum(i < C for i in A)
else:
candidates = product((str(i) for i in A), repeat = B)
return sum(x[0] != '0' and int(''.join(x)) < C for x in candidates)
def solve_maarten(A, B, C):
if A == [] or B > len(str(C)):
return 0
c_tuple = tuple(map(int, str(C)))
combinations = product(A, repeat=B)
if B != 1:
combinations = dropwhile(lambda x: x[0] == 0, combinations)
if B == len(c_tuple):
combinations = takewhile(lambda x: x < c_tuple, combinations)
combinations = list(combinations)
return sum(1 for _ in combinations)
def solve(A, B, C):
c_str = str(C)
c_len = len(c_str)
if A == [] or B > c_len:
return 0
if B < c_len:
a_len = len(set(A))
if B == 1:
return a_len
non_0_len = a_len - (0 in A)
return non_0_len * (a_len ** (B-1))
assert B == c_len # Constraint is C
head, tail = int(c_str[0]), c_str[1:]
nb_first_dig_cand = sum(i < head for i in A)
if not tail or not nb_first_dig_cand:
return nb_first_dig_cand
if head in A: # TODO: This case is not handled properly...
# It should involve ret and solve(A, B-1, int(tail)) or something like that
return solve_maarten(A, B, C)
solve_c = solve(A, B-1, C)
ret = nb_first_dig_cand * solve_c
return ret
tests = [
([2], 4, 51345, 1),
([2], 5, 51345, 1),
([], 1, 1, 0),
([2, 3, 5, 6, 7, 9], 4, 42950, 1296),
([2, 3, 5, 6, 7, 9], 5, 42950, 2592),
([0], 1, 5, 1),
([0, 1, 2, 5], 1, 123, 4),
([0, 1, 5], 1, 2, 2),
([3], 5, 26110, 0),
([0, 1, 2, 5], 1, 21, 4),
([0, 1, 2, 5], 2, 21, 5),
([0, 1, 2, 5], 2, 201, 12),
([0, 1, 2, 5], 3, 2010, 48),
([0, 1, 2, 5], 4, 20108, 192),
([0, 1, 2, 5], 5, 201089, 768),
([0, 1, 2, 3, 4, 5, 7, 8], 5, 201089, 28672),
([0, 1, 2, 3, 4, 5, 7, 8], 6, 201089, 33344),
([0, 1, 2, 3, 4, 5, 7, 8, 9], 6, 200000, 59049),
([0, 1, 2, 3, 4, 5, 7, 8, 9], 6, 999999, 472391),
([1, 2, 3, 4, 5, 7, 8, 9], 6, 200000, 32768),
([1, 2, 3, 4, 5, 7, 8, 9], 6, 999999, 262143),
]
funcs = [solve, solve_op, solve_maarten, solve_naive]
for func in funcs:
start = timeit.default_timer()
for (A, B, C, exp) in tests:
ret = func(A, B, C)
if ret != exp:
print "%s(%s, %d, %d): ret=%d, exp:%d" % (func.__name__, str(A), B, C, ret, exp)
end = timeit.default_timer()
print("Time for %s: %f" % (func.__name__, end - start))
def solve2(A, B, C):
c_str = str(C)
c_len = len(c_str)
if A == [] or B > c_len:
return 0
if B < c_len:
a_len = len(set(A))
if B == 1:
return a_len
non_0_len = a_len - (0 in A)
return non_0_len * (a_len ** (B-1))
assert B == c_len # Constraint is C
head, last_dig = divmod(C, 10)
nb_last_dig_cand = sum(i < last_dig for i in A)
if head == 0:
return nb_last_dig_cand
ret = solve_naive(A, B-1, head - 1) * len(A)
ret_dummy = solve_naive(A, B, C)
print(ret - ret_dummy, A, B, C)
return ret_dummy | {
"domain": "codereview.stackexchange",
"id": 30717,
"tags": "python, programming-challenge"
} |
roscd: command not found | Question:
Hi
I faced this error: "roscd: command not found". The same happens with rosnode, but not with roscore.
I have already set up my .bashrc with "source /opt/ros/jade/setup.bash"
Thanks
Originally posted by Reza1984 on ROS Answers with karma: 70 on 2015-12-09
Post score: 0
Answer:
Did you check these?;
http://answers.ros.org/question/195052/command-not-found-roscd-rosnode-etc/
http://answers.ros.org/question/66913/roscd-command-not-found-in-groovy-1204/
Also you can check your environment variables.
Originally posted by Akif with karma: 3561 on 2015-12-10
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Reza1984 on 2015-12-10:
Yes I have checked all the possible solutions in the net but none of them worked.
Comment by Akif on 2015-12-10:
Can you check printenv | grep ROS output?
Comment by Reza1984 on 2016-03-03:
Sorry for late reply. I have installed ROS again and it works now.
Comment by Akif on 2016-03-03:
Glad you solved it. | {
"domain": "robotics.stackexchange",
"id": 23196,
"tags": "ros, roscd"
} |
How does Back Propagation in a Neural Net Work? | Question: I understand that, in a Neural Net, Back Propagation is used to update the model's weights and biases to lower loss, but how does this process actually work?
Answer: This answer is based on the following series by StatQuest, which I thoroughly recommend.
Overview
Back Propagation works by calculating the partial derivatives of the loss function, with respect to each weight and bias in the network, and using those derivatives to alter the value of the corresponding weight or bias.
Considering how the weights and biases are passed through the network, through linear combinations, and Activation Functions, calculating the partial derivatives may seem daunting. However, each partial derivative can be calculated, with reasonable ease, using the Chain Rule from differentiation.
Simple Example
For example, let's use a simple neural network like the one below, which uses ReLU activation functions and the Sum of Squared Residuals as the loss function.
At the output node, the Sum of Squared Residuals has the following form:
$$\text{Loss} = \sum_{i=0}^{N}(\text{expected}_i - \text{predicted}_i)^2$$
where $i$ represents the $i$th data row to be predicted, $\text{expected}_i$ is the expected value of data row $i$, and $\text{predicted}_i$ is the predicted value of data row $i$. The impact of each parameter (weight or bias) on this loss function can be calculated by working out its contribution to the loss function. The expected values don't change, so the parameters only come into the function through the predicted values.
We'll use $W_5$ as a simple example, first, let's calculate the partial derivative of the loss function with respect to $W_5$ for a single input value:
\begin{align}
\frac{\partial (\text{Loss})}{\partial W_5} &= \frac{\partial ((\text{expected}_0 - \text{predicted}_0)^2)}{\partial W_5} \\
\end{align}
by the chain rule, we have the following:
\begin{align}
\frac{\partial ((\text{expected}_0 - \text{predicted}_0)^2)}{\partial W_5} = \frac{\partial ((\text{expected}_0 - \text{predicted}_0)^2)}{\partial \text{predicted}_0} \cdot \frac{\partial (\text{predicted}_0)}{\partial W_5}
\end{align}
where $\text{expected}_0$ is the expected value of our single input row, $\text{predicted}_0$ is the predicted value of our single input row, and $W_5$ is our parameter. The first half of the right-hand side simplifies to the following:
\begin{align}
\frac{\partial ((\text{expected}_0 - \text{predicted}_0)^2)}{\partial \text{predicted}_0} = 2 \cdot (\text{expected}_0 - \text{predicted}_0) \cdot (-1)
\end{align}
To get the second half, we first notice that $\text{predicted}_0$ has the following form:
\begin{align}
\text{predicted}_0 = W_5 \cdot y_{1, 0} + W_6 \cdot y_{2, 0} + b_3
\end{align}
where $W_5$ and $W_6$ are the weights from the graph, $y_{1, 0}$ and $y_{2, 0}$ are the outputs of hidden layer nodes $1$ and $2$ respectively for data row $0$, and $b_3$ is the bias applied at the output node. Therefore the second half of the right-hand side has the following simplified form:
\begin{align}
\frac{\partial (\text{predicted}_0)}{\partial W_5} = y_{1, 0}
\end{align}
so:
\begin{align}
\frac{\partial ((\text{expected}_0 - \text{predicted}_0)^2)}{\partial W_5} = -2 \cdot (\text{expected}_0 - \text{predicted}_0) \cdot y_{1, 0}
\end{align}
Multiplying this derivative by a learning rate, for example, 0.001, and subtracting the result from $W_5$ updates the Neural Net and improves its predictive ability.
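As a rough numeric sketch of this update (all values below are made up for illustration; the step is taken against the gradient):

```python
# Illustrative (made-up) values for a single data row
expected = 1.0    # target output
predicted = 0.4   # current network output
y_1 = 0.7         # output of hidden node 1
W5 = 0.5
lr = 0.001        # learning rate

# dLoss/dW5 = -2 * (expected - predicted) * y_1, as derived above
grad_W5 = -2 * (expected - predicted) * y_1   # -0.84

# Stepping against the gradient lowers the loss
W5 = W5 - lr * grad_W5                        # 0.50084
```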
Deeper Parameters
To extend this process to weights further back in the network, you have to perform repeated applications of the chain rule. For example, for the partial derivative of $\text{Loss}$ with respect to $W_1$ you get the following:
\begin{align}
\frac{\partial (\text{Loss})}{\partial W_1} &= \frac{\partial (\text{Loss})}{\partial \text{predicted}_0} \cdot \frac{\partial \text{predicted}_0}{\partial W_1} \\[10pt]
&= \frac{\partial (\text{Loss})}{\partial \text{predicted}_0} \cdot \frac{\partial \text{predicted}_0}{\partial y_{1, 0}} \cdot \frac{\partial y_{1, 0}}{\partial W_1} \\[10pt]
&= \frac{\partial (\text{Loss})}{\partial \text{predicted}_0} \cdot \frac{\partial \text{predicted}_0}{\partial y_{1, 0}} \cdot \frac{\partial y_{1, 0}}{\partial s_{1, 0}} \cdot \frac{\partial s_{1, 0}}{\partial W_1}
\end{align}
where $s_{1, 0}$ is the weighted sum at hidden node $1$ for data row $0$, formed by multiplying $x_1$ and $x_2$ by their weights $W_1$ and $W_2$ and adding a bias $b_1$, which gives the following form:
\begin{align}
s_{1, 0} = W_1 \cdot x_1 + W_2 \cdot x_2 + b_1
\end{align}
The form of the $\frac{\partial y_{1, 0}}{\partial s_{1, 0}}$ term is defined by the activation function; in our case this is ReLU, which gives a gradient of 1 if $s_{1, 0} \gt 0$ and 0 if $s_{1, 0} \leq 0$. If we assume that $s_{1, 0} \gt 0$ we get the following:
\begin{align}
\frac{\partial (\text{Loss})}{\partial \text{predicted}_0} \cdot \frac{\partial \text{predicted}_0}{\partial y_{1, 0}} \cdot \frac{\partial y_{1, 0}}{\partial s_{1, 0}} \cdot \frac{\partial s_{1, 0}}{\partial W_1} = -2 \cdot (\text{expected}_0 - \text{predicted}_0) \cdot (W_5) \cdot (1) \cdot (x_1)
\end{align}
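Continuing with made-up numbers, the whole chain for $W_1$ can be evaluated directly (all values hypothetical):

```python
# Made-up inputs and parameters for one data row
x1, x2 = 0.5, 0.2
W1, W2, b1 = 0.1, -0.3, 0.05
W5 = 0.8
expected, predicted = 1.0, 0.4

s1 = W1 * x1 + W2 * x2 + b1          # weighted sum at hidden node 1 -> 0.04
relu_grad = 1.0 if s1 > 0 else 0.0   # dy_1/ds_1 for ReLU

# Chain rule, term by term, matching the derivation above
grad_W1 = -2 * (expected - predicted) * W5 * relu_grad * x1   # -0.48
```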
Wrapping Up
The method above is used to find the gradient for every parameter; then all the parameters are updated to move the Neural Net towards a solution with a lower loss. This process is repeated until either you reach an acceptable loss level or further iterations make negligible improvements to the loss function. | {
"domain": "datascience.stackexchange",
"id": 11381,
"tags": "machine-learning, neural-network, backpropagation"
} |
Hess Cycle, determining enthalpy change of formation | Question: I am asked to find the enthalpy change of formation of the following:
$$\ce{N2 + 1/2O2 -> N2O}$$
I am given the following enthalpies of reaction:
\begin{align}
\ce{C + N2O &-> CO + N2} &\Delta H_f =-193 \tag{1} \\
\ce{C + 1/2O2 &-> CO} &\Delta H_f=-111 \tag{2}
\end{align}
How do I calculate enthalpy change of formation of the nitrous oxide?
I thought it would just be adding the two enthalpies together because I substitute the second equation into the first one, but this is wrong.
Answer: I took the first equation away from the second equation:
\begin{array}{lllll}
&\ce{C + 1/2O2 &-> CO} &\Delta H_f &= -111 \\
- &\ce{C + N2O &-> CO + N2} &\Delta H_f &= -193 \\
\hline
= &\ce{1/2O2 - N2O &-> -N2} &\Delta H_f &= +82 \\
= &\ce{N2 + 1/2O2 &-> N2O} &\Delta H_f &= +82
\end{array} | {
"domain": "chemistry.stackexchange",
"id": 1308,
"tags": "physical-chemistry, thermodynamics, enthalpy"
} |
Difference between multi-tape Turing machine and single tape machine | Question: A beginner's question about "fine-grained" computational power.
Let $M_k$ be a $k$-tape Turing machine, and let $M$ be a single-tape Turing machine. We know that $M_k$ and $M$ both have the same "computable power". In addition, one can simulate $M_k$ on $M$ in a way that every computation which takes $O(t(n))$ time on $M_k$ will take $O(t(n) \log t(n))$ time on $M$.
Here is my question:
Is there a language $L$ such that $L$ can be decided in $O(n)$ time in $k$-tape Turing machine (for fixed $k$, say 2), but can't be decided in $O(n)$ time in a single tape machine? (every single tape machine which decides $L$ needs $\Omega(n \log n)$ time).
In addition, are there any examples of two computational models (classical, not the quantum model) with the same computable power, but with fine-grained differences in their running time? (I guess that major changes in running time would contradict the extended Church-Turing thesis, which is less likely).
Answer: The language of all palindromes needs quadratic time on a single-tape Turing machine. See for example lecture notes of Eric Ruppert. The proof uses crossing sequences. In contrast, on a two-tape Turing machine, the language can be decided in linear time. | {
"domain": "cs.stackexchange",
"id": 14275,
"tags": "complexity-theory, computability, time-complexity, computation-models"
} |
Surviving a fall into a balloon-filled pit | Question: My brothers and i have been debating this question for years. If an average person fell at terminal velocity into the center of a pit that is 1 mile deep x 1 mile in diameter, and the pit is filled with typical latex-based air-filled round party balloons (not helium-filled), would they survive the impact? Each balloon is independent of the other balloons.
Thanks
Edit: this only concerns the impact, not the subsequent events that occur after the person has stopped.
Edit: there are several ways to die from the impact - too abrupt of a stop (do the balloons give enough?), not enough of a stop (would you fall through the balloons? Do they pack densely enough?), static electricity (?), heat (?), etc.
Answer: The situation you're describing is a high-energy impact with a granular material.
While certainly some energy is going to be expended by popping balloons, I think friction amongst balloons and between yourself and the balloons is going to be the dominant effect.
Hot Licks's answer is basically describing a force chain. Modeling a force chain in a mile cube of latex balloons is probably going to take a largish computer⸮
I find akhmeteli's answer unconvincing. In particular, I don't think it's reasonable to model a mass of balloons as having the same viscosity as unconstrained air.
(As an aside, if you are trying to measure the "viscosity" of a mass of balloons, make sure to model them as a shear thinning fluid.)
Granular material is complicated. I see two paths toward a satisfying answer, and I recommend them both.
Start whatever further schooling you need to become a research physicist. If you tell them you want to study granular materials I think you'll find funding; there are lots of industrial and civil-engineering applications.
Do an empirical test. The quantity of balloons you'll need for a small demo is staggering, but not unprecedented. Hitting the balloon-pit with a sandbag thrown from a helicopter is probably harder than it sounds; if you think you have enough balloons you could just jump and steer a bit on the way down. | {
"domain": "physics.stackexchange",
"id": 54658,
"tags": "gravity, velocity, collision"
} |
Shouldn't work be the same in all coordinates? | Question: We know that the work done by a force $\mathbf{F}$, along a path $\mathbf{x}$, is given by:
\begin{equation} W = \mathbf{F}^T \cdot \mathbf{x}
\end{equation}
However, suppose that i apply some change of basis, given by a matrix $A$. So, $\mathbf{F}$ will become $A\mathbf{F}$ and $\mathbf{x}$ will become $A\mathbf{x}$. And so
$$W = (A\mathbf{F})^T \cdot A\mathbf{x} = \mathbf{F}^TA^TA\mathbf{x}$$
Which may not be equal to $\mathbf{F} \cdot \mathbf{x}$. What am I missing? If the path and the force are both the same, shouldn't the work be the same in both cases, no matter what basis I am using for $\Bbb{R}^3$? I am supposing that $A$ is not necessarily orthonormal.
Answer: Let us write the dot product like this:
$$\vec F^T \vec x$$
where the $T$ means transpose. If we now apply the change of coordinates we get:
$$\vec F'^T \vec x'=(A\vec F)^T A\vec x$$
$$=\vec F^T A^T A \vec x$$
Now a change of coordinates matrix must be orthogonal (in this case) so
$$A^TA=I$$
Hence we get:
$$\vec F'^T \vec x'=\vec F^TI \vec x=\vec F^T \vec x$$
which is coordinate independent.
EDIT
Sorry, I missed the statement about orthogonal matrices in the question. The point is that we actually only expect the work done to remain the same under orthogonal transformations. Orthogonal transformations correspond to rotations (and reflections), under which, on physical grounds, we do not expect the work to change. If the matrix is not orthogonal then we are doing things like stretches; these change units, and with the same scalar product we do not expect to get the same answer.
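As a small numerical illustration of this point (arbitrary example vectors, comparing a rotation with a stretch):

```python
import numpy as np

F = np.array([1.0, 2.0, 0.0])   # arbitrary force
x = np.array([3.0, -1.0, 0.0])  # arbitrary displacement

theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])  # rotation: R.T @ R = I
S = np.diag([2.0, 1.0, 1.0])                          # stretch: S.T @ S != I

W     = F @ x               # 1.0
W_rot = (R @ F) @ (R @ x)   # unchanged under the rotation
W_str = (S @ F) @ (S @ x)   # 10.0, changed by the stretch
```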
As a side note as ACuriousMind stated in the answers to one of my questions a proper calculation could be done but this would involve a change in the scalar product. | {
"domain": "physics.stackexchange",
"id": 42523,
"tags": "newtonian-mechanics, work, inertial-frames, coordinate-systems, galilean-relativity"
} |
Why does Phase-locking value show spurious effects at the beginning & end of a signal? | Question: I am trying to understand the behaviour of the phase-locking-value, as e.g. defined here, by some simple examples.
Basically, I am creating two signals with the same frequency & amplitude, where the phase-offset between the two is random. Now, as expected, at most time points the PLV gives me a low value indicating that there is not a consistent pattern of phase onset. However, at the beginning & end of the signal, there are some strange effects. I suppose that these might be somehow introduced by the Hilbert transform, or is this a problem with my implementation?
Here is how I defined the function:
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt
def plv(x,y,trial_axis=0):
hilbert_x = signal.hilbert(x)
hilbert_y = signal.hilbert(y)
normed_x = hilbert_x/np.absolute(hilbert_x)
normed_y = hilbert_y/np.absolute(hilbert_y)
p = np.absolute(np.mean(normed_x*normed_y.conj(),axis=trial_axis))
return p
Here we have a plot of the behaviour:
t = np.linspace(0,10,1000)
N = 50
s_1 = np.array([np.sin(10*2*np.pi*t) for i in range(N)])
s_2 = np.array([np.sin(10*(2*np.pi*t)+np.random.uniform(low=0,high=2*np.pi) ) for i in range(N)])
p = plv(s_1,s_2)
plt.plot(p)
Answer: What was previously posted as an answer should have been an extended comment -- sorry for that. There simply was not enough space for what I thought I'd write about how the Octave code seemed to behave. Now that the code is cleared, what should have been the answer in the first place (given the rather obvious signs), is now the answer: spectral leakage.
The Hilbert transform is calculated by zeroing half of the FFT of the signal and then recomputing the analytic signal through the IFFT. Brute-force zeroing will always lead to some leakage due to the limited length of the data. This can be easily tested with your code. s1 is a simple sin(), so its Hilbert transform will be a -cos(). Simply plot -cos() - hx.imag and you'll see the leakage. The same goes for the second signal.
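For instance, a small sketch using the question's own signal (assuming SciPy is available) shows the error between the analytic signal's imaginary part and the ideal -cos() concentrating at the ends of the array:

```python
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0, 10, 1000)
s = np.sin(10 * 2 * np.pi * t)        # same sine as s_1 in the question
ideal = -np.cos(10 * 2 * np.pi * t)   # exact Hilbert transform of sin
err = np.abs(ideal - hilbert(s).imag) # leakage error of the FFT-based method

edge_err = np.r_[err[:50], err[-50:]].mean()  # average error near the ends
mid_err = err[100:-100].mean()                # average error in the middle
# edge_err is noticeably larger than mid_err
```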
This is the changed code in Python:
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt
f = 10
nt = 1000
t = np.linspace(0, 100/f, nt)
N = 50
hor = np.ones(nt)
ver = np.ones(N)
r = np.array( [ np.random.uniform(low=0, high=2*np.pi) for i in range(N) ] )
s1 = np.array( [ np.sin( 2*np.pi*f*t ) for i in range(N) ] )
s2 = np.array( np.sin( 2*np.pi*( f*ver[:,None]*t + hor*r[:,None] ) ) )
c1 = -np.array( [ np.cos(2*np.pi*f*t) for i in range(N) ] )
c2 = -np.array( np.cos( 2*np.pi*( f*ver[:,None]*t + hor*r[:,None] ) ) )
hx = signal.hilbert(s1)
hy = signal.hilbert(s2)
nx = hx/np.absolute(hx)
ny = hy/np.absolute(hy)
p = np.absolute( np.mean( nx*ny.conj(), axis=0 ) )
plt.plot(p)
plt.show()
and this is the test (c2 will show non-overlapping plots due to the randomness):
plt.plot(c1.transpose() - hx.imag.transpose())
plt.show()
The Octave code, for checking:
f = 10
t = linspace(0, 100/f, 1000);
N = 50;
c = ones(N, 1);
ft = f*t.*c;
s1 = sin(2*pi*ft);
c1 = -cos(2*pi*ft);
r = rand(N,1).*c;
s2 = sin(2*pi*(ft + r));
c2 = -cos(2*pi*(ft + r));
hx = hilbert(s1');
hy = hilbert(s2');
nx = hx./abs(hx);
ny = hy./abs(hy);
p = abs(mean((nx.*conj(ny))'));
grid on
subplot(2, 1, 1)
plot(p)
subplot(2, 1, 2)
plot(c1' - imag(hx)) | {
"domain": "dsp.stackexchange",
"id": 9969,
"tags": "signal-analysis, phase, time-domain"
} |