anchor stringlengths 0 150 | positive stringlengths 0 96k | source dict |
|---|---|---|
Triple points for other substances | Question: Can substances other than H2O have a triple point, where the three usual phases of matter (solid/liquid/gas) can exist?
Answer: Yes, see this wikipedia entry:
http://en.wikipedia.org/wiki/Triple_point
which gives the example:
the triple point of mercury occurs at a temperature of −38.8344 °C and a pressure of 0.2 mPa.
and furthermore provides a table of triple points of various common substances:
http://en.wikipedia.org/wiki/Triple_point#Table_of_triple_points | {
"domain": "physics.stackexchange",
"id": 11093,
"tags": "water, states-of-matter"
} |
problem with arm_navigation stack | Question:
Hi everyone!!
I have a problem with arm_navigation stack.
I need to get a trajectory with a target time; in other words, I want a path that is executed in that time.
The MotionPlanRequest message, which is inside move_arm_action, has two parameters to define the path duration. But the resulting trajectory time is smaller than the target time.
Any ideas??
Thanks!!
Originally posted by Ivan Rojas Jofre on ROS Answers with karma: 70 on 2012-02-27
Post score: 0
Answer:
None of the planners we release really plan for dynamics, so while they are part of the message those values are not used. If the problem is that the resulting trajectories are too fast, that should be pretty easy to fix - just increase the times on the trajectory points before you send the path to the controller. At least the PR2 controller should be able to act correctly based on that. If the paths are too slow there's not that much that can be done about that - you can potentially increase the acceleration limits in the joint_limits.yaml file and move the arms somewhat faster. Finally, the right way to do this is have the planner reason more about timing, but that's not something our planners currently do.
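The "increase the times on the trajectory points" fix can be sketched in plain Python. This is illustrative only, not the actual `trajectory_msgs` API: trajectory points are modeled here as `(time_from_start, joint_positions)` tuples, and each time is scaled uniformly so the total duration matches the target.

```python
# Sketch (not the real ROS message types): retime a planned trajectory so its
# total duration matches a target time, by uniformly scaling each point's
# time_from_start. With real arm_navigation output you would apply the same
# scaling to the JointTrajectory points before sending them to the controller.

def retime_trajectory(points, target_duration):
    """points: list of (time_from_start, joint_positions) tuples."""
    current_duration = points[-1][0]
    if current_duration >= target_duration:
        return points  # already slow enough; nothing to stretch
    scale = target_duration / current_duration
    return [(t * scale, positions) for t, positions in points]

planned = [(0.0, [0.0, 0.0]), (0.5, [0.3, 0.1]), (1.0, [0.6, 0.2])]
slowed = retime_trajectory(planned, target_duration=4.0)
print(slowed[-1][0])  # total duration is now 4.0 seconds
```

Only the timestamps change; the joint positions along the path are untouched, so the geometry of the planned motion is preserved.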
Originally posted by egiljones with karma: 2031 on 2012-02-28
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 8406,
"tags": "ros, move-arm, arm-navigation, ros-electric"
} |
Are gravitational waves part of dark energy? | Question: Do the current models of the universe consider the energy release of gravitational waves?
E.g. iirc, the last black hole merger detected had a gravitational wave energy equal to ~3 solar masses (hope I'm not mistaken in this fact).
Same for other mergers, and supernovae: the energy related to their gravitational waves should sum up to something.
Answer: Gravitational waves cannot be part of dark energy, because -- just like electromagnetic waves -- their energy density dilutes as the universe expands: both due to the simple expansion of space and due to the cosmological redshifting that the expansion produces.
Dark energy, on the other hand, has an energy density that remains constant as the universe expands (truly constant if it's the cosmological constant, very close to constant if it's something stranger like quintessence), so it can't be made up of GWs.
GWs probably don't contribute much to the energy density we attribute to dark matter, either, since dark matter is diluted by the cosmological expansion but not by redshift. So if the energy density attributed to DM ten billion years ago were due to GWs, then it would be significantly lower than the DM energy density we measure today. (Also, current cosmological observations indicate significant dark matter was present very early in the universe, before the star formation that was necessary to create binary black holes in the first place.)
The GW energy from a binary-black-hole merger is enormous, yes; but those are very rare events. The current estimate from LIGO observations is about 50 or 100 such mergers per year per cubic gigaparsec, which translates into a rate of less than one per million years per galaxy. This paper by the LIGO and Virgo Collaborations has estimates of the current GW energy density in units of the current energy density of the universe (their Figure 1):
The green curve shows the estimated energy density from binary-black-hole mergers as a function of the frequency of the GWs; the red curve shows the contribution from binary-neutron-star mergers. The combined peak is always less than $10^{-8}$ of the critical energy density, so even summing up over frequencies, we have an energy density much smaller than that due to ordinary matter ($\Omega \sim 5$%), so it's probably safe to ignore its effects. | {
"domain": "astronomy.stackexchange",
"id": 3594,
"tags": "black-hole, supermassive-black-hole, gravitational-waves, dark-energy"
} |
Difference between CRC and Hamming Code | Question: I am a bit confused about the difference between a Cyclic Redundancy Check and a Hamming code. Both attach a check value based on some arithmetic operation on the bits of the message being transmitted. For Hamming, it can be either odd or even parity bits added at the end of a message, and for CRC, it's the remainder of a polynomial division of the message contents.
However, CRC and Hamming are referred to as fundamentally different ideas. Can someone please elaborate on why this is?
Also, why is the CRC compared with the FCS (frame check sequence) to see whether the received message contains an error? Why not just use the FCS from the beginning? (I might be totally flawed in my understanding by asking this question; please correct me.)
Answer:
CRC is conceived as the remainder of a polynomial division. It is efficient at detecting errors: a corruption is detected when the calculated remainder does not match. Depending on the CRC size, it can detect bursts of errors (10 bits zeroed, for example), which is great for checking communications.
The "FCS" term is sometimes used for a transformed version of the CRC (Ethernet, for example): the purpose is to apply the CRC algorithm to both the data and its FCS so as to cancel the remainder and obtain a constant (just like even parity ensures an even number of "1" bits, including the parity bit).
Hamming codes are both detection and correction codes. By adding the Hamming code bits, you ensure some distance (measured as the number of differing bits) between valid codes. For example, with a distance of 3 bits, you can correct any 1-bit error OR detect any 2-bit error.
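A minimal toy sketch in Python (bitwise polynomial division over GF(2), not a production CRC) illustrates two of the points above: appending the CRC to the message makes the receiver's recomputed remainder zero (the FCS idea), and a 1-bit CRC with generator x+1 is exactly an even-parity bit.

```python
# Toy bitwise CRC as polynomial division over GF(2).

def crc_remainder(bits, poly):
    """bits: list of 0/1 message bits; poly: generator as an int, MSB first."""
    deg = poly.bit_length() - 1          # degree of the generator polynomial
    reg = list(bits) + [0] * deg         # append 'deg' zero bits
    for i in range(len(bits)):
        if reg[i]:                       # XOR the generator in when the leading bit is 1
            for j in range(deg + 1):
                reg[i + j] ^= (poly >> (deg - j)) & 1
    return reg[-deg:]                    # the remainder = check bits

msg = [1, 0, 1, 1, 0, 1]
crc = crc_remainder(msg, 0b1011)         # CRC-3 with generator x^3 + x + 1
# Receiver-side check: message + CRC divides cleanly (all-zero remainder).
assert crc_remainder(msg + crc, 0b1011) == [0, 0, 0]
# Generator x + 1 gives a single check bit equal to even parity.
assert crc_remainder(msg, 0b11) == [sum(msg) % 2]
```

Note that this toy CRC only detects errors; to get Hamming-style single-bit *correction* you additionally need the parity bits positioned so that the failing checks pinpoint the flipped bit.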
Reduced to a single check bit, Hamming codes and CRC (with the x+1 polynomial) are identical to parity. | {
"domain": "cs.stackexchange",
"id": 21390,
"tags": "communication-protocols, error-correcting-codes, crc"
} |
How to modify the turtlebot robot model in rviz? | Question:
Hi,
I have remodeled my turtlebot with additional hardware to increase the height of the Kinect mount. I would like to model the same in rviz. I want to know which URDF file I should edit in order to do this. Any help here is much appreciated.
Thanks,
Karthik
Originally posted by karthik on ROS Answers with karma: 2831 on 2012-02-28
Post score: 0
Answer:
The URDF file for the turtlebot is in the ros package turtlebot_description. It is loaded on the parameter server in turtlebot_bringup/minimal.launch. Please note that the package turtlebot_description will be read only. The best way to change it is to copy and rename it (make sure that you get the includes right in turtlebot.xacro).
Originally posted by Lorenz with karma: 22731 on 2012-02-28
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Yantian_Zha on 2014-04-19:
Hi Lorenz, I cannot find turtlebot.xacro in Groovy. I'm using turtlebot2. I can only find turtlebot.urdf.xacro, and I'm confused about how to visualize it in rviz. Please help me, thanks! | {
"domain": "robotics.stackexchange",
"id": 8425,
"tags": "ros, rviz, urdf, robot-model"
} |
ImportError: No module named rospy | Question:
When I roslaunch my node, the error I receive is
ImportError: No module named rospy
But when I open up the python environment and run import rospy, it imports successfully and I can access rospy.__file__, which returns
/opt/ros/indigo/lib/python2.7/dist-packages/rospy/__init__.pyc
which is on my PYTHONPATH:
declare -x PYTHONPATH="/usr/lib/python2.7/dist-packages:/home/<user>/indigo_ws/devel/lib/python2.7/dist-packages:/opt/ros/indigo/lib/python2.7/dist-packages"
I do have a custom installation of Python that seems to have messed up some catkin_pkg installation, which is why it's had to be appended to the path. When I remove it to run roslaunch as a debug, I just get the same error.
Running Ubuntu 14.04 on a Raspberry Pi, if that helps.
Originally posted by djsw on ROS Answers with karma: 153 on 2016-01-06
Post score: 0
Original comments
Comment by Humpelstilzchen on 2016-01-08:
Did you check PYTHONPATH in os.environ? What version of the python interpreter is started by roslaunch?
Answer:
I worked out what the problem was.
I wanted access to the Raspberry Pi GPIO ports, so I had used launch-prefix="sudo" in my launch file. What this meant was that the file was launched in the root workspace which wasn't set up to use ROS. There is apparently a workaround that I wasn't able to get to work.
So my problem was solved by removing that from the launch file.
In case anybody finds this on google, I can use the GPIO ports in a ROS node by running pigpio. Some googling will produce an example script but this is either out-of-date or wrong - use the examples on the site.
Originally posted by djsw with karma: 153 on 2016-01-08
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Falko on 2016-03-16:
The pigpio library really made my day! It's working beautifully and I can even control the raspberry gpio pins from a remote machine. (Ubuntu on both machines, ROS running either on both of them or the remote machine only, no "sudo node" involved.) | {
"domain": "robotics.stackexchange",
"id": 23363,
"tags": "rospy, raspberrypi, rasbperrypi"
} |
Windowing before viewing a filter frequency response? | Question: It is well known that it is a good idea to apply windowing to a signal, before obtaining its DFT and viewing its frequency response.
But is it a good idea to apply windowing to the impulse response of a filter, before viewing its frequency response?
Does the answer to this question depend on whether the filter is FIR or IIR?
I could perhaps see why windowing could be better than just truncating an IIR filter. But I can't decide whether the same applies to an FIR filter.
However, even in the case of an IIR filter, we usually don't do this. Instead we usually just plot the response as a function of the IIR filter's transfer function numerator & denominator coefficients (e.g. using freqz in Python or Matlab).
So is it ever a good idea to window the impulse response of a filter?
Answer: It is not a good idea to window before an FFT, unless you want to decrease interference, rectangular window artifacts, and/or sidelobes. In the case of looking at the frequency response of an impulse response, you want to see the sidelobes (they are part of the frequency response) and there are no interfering signals.
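This is easy to see numerically for an FIR filter: a zero-padded FFT of the taps already gives exact samples of the true frequency response, so applying a window first only distorts the very thing you are trying to look at. A small NumPy sketch (the 8-tap moving average is just an assumed example filter):

```python
import numpy as np

# For an FIR filter, the zero-padded FFT of the taps IS the frequency
# response (samples of the DTFT). Windowing the taps first changes the
# measured response, e.g. it smears the sidelobes.

h = np.ones(8) / 8.0                      # 8-tap moving-average FIR filter
N = 256

H_true = np.fft.fft(h, N)                 # exact samples of the true response
H_win = np.fft.fft(h * np.hamming(8), N)  # "windowed" view of the same filter

# The windowed view is measurably different from the actual response.
distortion = np.max(np.abs(H_win - H_true))
print(distortion)   # clearly nonzero: the window changed what we measured
```

At DC, for instance, the true moving-average gain is exactly 1, while the Hamming-windowed version reports something noticeably smaller.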
There might be rectangular window artifacts if your window is too short and truncates the response while it is still above the noise floor, but the solution to that is to use a longer FFT, not to use a different window. | {
"domain": "dsp.stackexchange",
"id": 4145,
"tags": "filter-design, frequency-response, window-functions"
} |
Single field vs single clock inflation? | Question: My question is very simple and short, but I couldn't find any explicit answer in papers and/or lecture notes.
What is the difference between single field and single clock inflation? Or, maybe, what is single clock inflation?
I know that single-field inflation is inflation driven by only one field, i.e. both the rapid expansion and the generation of perturbations originate from this field.
On the other hand, I heard at some point that single clock might mean that the expansion is due to the inflaton (which can play the role of a clock), but that there might be an additional field (for instance a curvaton) to generate the perturbations. But then are there any restrictions on what a single-clock theory can be?
I'm not sure of what I am saying here. Any clarifications?
Answer: Single clock inflation simply means that a single field's value has a one to one relationship with the scale factor $a$. For this to happen you need the field to be undergoing slow roll otherwise the expansion will depend on the field value and its time derivative. | {
"domain": "physics.stackexchange",
"id": 99148,
"tags": "cosmology, cosmological-inflation"
} |
How does the exchange of energy between an archer's arrow and their arm work? | Question: An archer cannot throw an arrow as far as they can shoot it, nor would its penetrating power be as great.
The obvious explanation is that they have traded strength for speed. A slow draw stores energy in the bow that is then released quickly.
The problem is that, when an arrow strikes its target it has a lot of energy and, as far as I know, penetrates further into the target than it would if used as a stabbing weapon in the hand of the archer.
How is this possible? How can the use of a bow increase both range and penetrating power? It seems paradoxical.
Answer: Due to the physiology of the human arm, it's very difficult to throw an object much faster than 100mph. No matter what object is being thrown, it can't move faster than the hand that's holding it, so around 100mph is the upper speed limit of any thrown object.
A bow gets around this by using the elastic potential of the bow to store energy over an arbitrary period of time, rather than by trying to accelerate to maximum speed over the space of your arm's rotation. You can slowly and steadily pull back on the bow, storing more energy than you could have imparted with a throw. This allows arrows launched from a bow to reach 150mph or more. Since kinetic energy is proportional to the square of velocity, increasing speed by 50% more than doubles the amount of energy carried. | {
"domain": "physics.stackexchange",
"id": 75327,
"tags": "forces, momentum"
} |
robot formation in ROS | Question:
Hi, I'm going to work with ground mobile robot formation in ROS, using turtlebot to do some experiments in simulator like stage. However, there's a tough problem.
The velocity command in ROS is made up of a linear and an angular velocity, while a vast number of formation algorithms output the velocity as a 2D vector. Is there any method to make this translation? What if the algorithm generates a target point for each robot? How do I generate a ROS velocity, namely a Twist, to make diff-drive robots like the turtlebot arrive at such points? I don't think using the navigation stack is a good idea.
I'd appreciate it if anyone can give me some advice or point me to a related package in ROS.
Originally posted by SilverBullet on ROS Answers with karma: 74 on 2016-03-22
Post score: 0
Answer:
If you think that the navstack is not a good idea because you need to update your goals at a high frequency, I suggest you look at this thread which discusses how to follow a moving target - the "target points" of each robot in your case.
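If you end up rolling your own controller, one common approach (a sketch, not a specific ROS package) is to project the formation algorithm's 2D world-frame velocity vector onto the robot's heading for `linear.x`, and steer toward the vector's direction with a proportional gain for `angular.z`:

```python
import math

# Sketch: convert a 2-D world-frame velocity vector (from a formation
# algorithm) into a differential-drive, Twist-style command. The gain
# k_ang is an assumed tuning parameter.

def vector_to_twist(vx, vy, theta, k_ang=2.0):
    """vx, vy: desired world-frame velocity; theta: robot heading (rad)."""
    linear = vx * math.cos(theta) + vy * math.sin(theta)  # project onto heading
    heading_err = math.atan2(vy, vx) - theta
    heading_err = math.atan2(math.sin(heading_err),
                             math.cos(heading_err))       # wrap to [-pi, pi]
    angular = k_ang * heading_err
    return linear, angular

# Robot already pointing along the desired velocity: pure forward motion.
lin, ang = vector_to_twist(0.5, 0.0, theta=0.0)
print(lin, ang)  # 0.5, 0.0
```

For the target-point variant, the same function works if you first form the vector as a gain times the position error between the target point and the robot's current pose.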
Originally posted by al-dev with karma: 883 on 2016-03-23
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 24214,
"tags": "navigation, turtlebot, move-base"
} |
Various different box shadows | Question: The following code will be used for educational purposes. It is to demonstrate various ways that the box-shadow property can be used. Any feedback on areas where I have not followed best practices would be great.
Working example
HTML:
<!DOCTYPE HTML>
<html>
<head>
<title>Box Shadow</title>
<link rel="stylesheet" type="text/css" href="boxshadow.css">
</head>
<body>
<!-- Introduction to the text shadow property -->
<div class="container one">
<div class="shadow1 box left"></div>
<div class="shadow2 box right"></div>
</div>
<div class="container two">
<div class="shadow3 box left"></div>
<div class="shadow4 box right"></div>
</div>
<div class="container three">
<div class="shadow5 box left"></div>
<div class="shadow6 box right"></div>
</div>
</body>
</html>
CSS:
body{
background: #ccc;
margin: 0;
}
.container{
padding: 20px 50px;
overflow: hidden;
}
.box{
background-color: #fff;
width: 500px;
height: 200px;
}
.left{
float: left;
}
.right{
float: right;
}
/* Basic Shadow */
.shadow1{
box-shadow: 5px 5px 0 rgba(150,150,150,0.5);
}
/* Inset Shadow and blur radius Variety */
.shadow2{
box-shadow: inset 0 0 10px black;
}
/* Inset Shadow and blur radius */
.shadow3{
background: #F2F2F2;
box-shadow: inset 3px 3px 3px #BFBFBF, inset -3px -3px 3px #8C8C8C;
border-radius: 10px;
}
/* Spread radius Shadow */
.shadow4{
box-shadow: 0 20px 10px -10px rgba(150,150,150,0.8);
}
/* Curved Shadows */
.shadow5{
position: relative;
overflow:hidden;
}
.shadow5:before{
content: "";
position: absolute;
z-index: 1;
width: 96%;
top: -15px;
height: 15px;
left: 2%;
border-radius: 100px / 5px;
box-shadow: 0 0 39px rgba(0,0,0,0.6);
}
.shadow5:after{
content: "";
position: absolute;
z-index: 1;
width: 96%;
bottom: -15px;
height: 15px;
left: 2%;
border-radius: 100px / 5px;
box-shadow: 0 0 39px rgba(0,0,0,0.6);
}
.shadow6{
position: relative;
overflow:hidden;
}
.shadow6:before{
content: "";
position:absolute;
z-index: 1;
width:10px;
top: 5%;
height: 90%;
left: -10px;
border-radius: 5px / 100px;
box-shadow:0 0 13px rgba(0,0,0,0.6);
}
.shadow6:after{
content: "";
position:absolute;
z-index: 1;
width:15px;
top: 5%;
height: 96%;
right: -15px;
border-radius: 5px / 100px;
box-shadow:0 0 13px rgba(0,0,0,0.6);
}
Answer: They all share the .box class.
I think you should get rid of the container divs and of the .box class, applying that styling to all divs, because once the containers are gone every div is a box.
You could also use one of the semantic HTML5 tags for the content section instead of a container-classed div.
Check this out: I have removed some tags and some CSS classes to make the code a little cleaner. The output is slightly different, but I think it is a little better. If you need a width on the content you can add that as well.
When the browser window is resized, the boxes now hug the left and right sides of the content; if you want to cap it at two per row, just add a width to the content tag.
UPDATE
I changed some more things so that you can add more examples with ease, and so that it is more viewable on mobile devices.
This is the new version that I came up with.
I removed the .left and .right classes and moved the float: left into the styling for the div tags; this way the boxes wrap when the viewport is resized.
I also changed the size of the boxes.
I removed the container classes, as they weren't needed.
I moved the styling from the .box class onto the divs themselves.
I am apparently wrong about sizing the content tag that I created; maybe it's not proper HTML5, or I was doing something wrong.
Here is the code that I have after the changes that I made
<!DOCTYPE HTML>
<html>
<head>
<title>Box Shadow</title>
<link rel="stylesheet" type="text/css" href="boxshadow.css">
</head>
<body>
<!-- Introduction to the text shadow property -->
<content>
<div class="shadow1"></div>
<div class="shadow2"></div>
<div class="shadow3"></div>
<div class="shadow4"></div>
<div class="shadow5"></div>
<div class="shadow6"></div>
</content>
</body>
</html>
CSS
body{
background: #ccc;
margin: 0;
}
content{
width:1000px;
}
div {
background-color: #fff;
width: 500px;
height: 200px;
padding:15px;
margin:15px;
}
.left{
float: left;
}
.right{
float: right;
}
/* Basic Shadow */
.shadow1{
box-shadow: 5px 5px 0 rgba(150,150,150,0.5);
}
/* Inset Shadow and blur radius Variety */
.shadow2{
box-shadow: inset 0 0 10px black;
}
/* Inset Shadow and blur radius */
.shadow3{
background: #F2F2F2;
box-shadow: inset 3px 3px 3px #BFBFBF, inset -3px -3px 3px #8C8C8C;
border-radius: 10px;
}
/* Spread radius Shadow */
.shadow4{
box-shadow: 0 20px 10px -10px rgba(150,150,150,0.8);
}
/* Curved Shadows */
.shadow5{
position: relative;
overflow:hidden;
}
.shadow5:before{
content: "";
position: absolute;
z-index: 1;
width: 96%;
top: -15px;
height: 15px;
left: 2%;
border-radius: 100px / 5px;
box-shadow: 0 0 39px rgba(0,0,0,0.6);
}
.shadow5:after{
content: "";
position: absolute;
z-index: 1;
width: 96%;
bottom: -15px;
height: 15px;
left: 2%;
border-radius: 100px / 5px;
box-shadow: 0 0 39px rgba(0,0,0,0.6);
}
.shadow6{
position: relative;
overflow:hidden;
}
.shadow6:before{
content: "";
position:absolute;
z-index: 1;
width:10px;
top: 5%;
height: 90%;
left: -10px;
border-radius: 5px / 100px;
box-shadow:0 0 13px rgba(0,0,0,0.6);
}
.shadow6:after{
content: "";
position:absolute;
z-index: 1;
width:15px;
top: 5%;
height: 96%;
right: -15px;
border-radius: 5px / 100px;
box-shadow:0 0 13px rgba(0,0,0,0.6);
}
One other thing I noticed while pasting this new code: you assign a z-index of 1 to some of your shadow styles and not to others. I don't think this is necessary for what you are doing here; z-index defaults to auto (effectively 0) if I remember right, and that should be fine for this page, so you don't even need to add it. | {
"domain": "codereview.stackexchange",
"id": 6194,
"tags": "html, css"
} |
Can't understand an MSE loss function in a paper | Question: I'm reading a paper published in NeurIPS 2021.
There's a part in it that is confusing:
This loss term is the mean squared error of the normalized feature
vectors and can be written as what follows:
Where $\left\|\cdot\right\|_2$ is $\ell_2$ normalization and $\langle \cdot,\cdot \rangle$ is the dot product operation.
As far as I know MSE loss function looks like :
$L=\frac{1}{2}(y - \hat{y})^{2}$
How does the above equation qualify as an MSE loss function?
Answer: Recall what mean square error is actually measuring... the Euclidean distance between some regressed function, $\hat y$ and the true signal/function $y$ evaluated at every input $x$. The above is a more formalized vector definition, but is still very much the same.
Starting from this idea that the Euclidean distance is coming into play:
$$\begin{aligned} d(f_{1}(x),f_{2}(x))^{2} &= \langle f_{1}(x) - f_{2}(x),\, f_{1}(x) - f_{2}(x) \rangle \\ &= \langle f_{1}(x),f_{1}(x) \rangle + \langle f_{2}(x),f_{2}(x) \rangle - 2 \langle f_{1}(x),f_{2}(x) \rangle \\ &= 2 \,(1 - \langle f_{1}(x),f_{2}(x) \rangle) = 2 - 2 \langle f_{1}(x),f_{2}(x) \rangle. \end{aligned}$$
The denominator is just to make each vector (and by extension, their dot product) of unit length.
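A quick numerical check of this identity (random vectors and NumPy only; nothing here is specific to the paper):

```python
import numpy as np

# Verify: for L2-normalized vectors, the squared Euclidean distance
# equals 2 - 2 * (their dot product).

rng = np.random.default_rng(0)
f1 = rng.standard_normal(16)
f2 = rng.standard_normal(16)

u1 = f1 / np.linalg.norm(f1)   # the denominators make each vector unit length
u2 = f2 / np.linalg.norm(f2)

mse_style = np.sum((u1 - u2) ** 2)      # squared distance of normalized vectors
identity = 2.0 - 2.0 * np.dot(u1, u2)   # the dot-product form

print(np.isclose(mse_style, identity))  # True
```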
Hope this helps! | {
"domain": "datascience.stackexchange",
"id": 10570,
"tags": "regression, loss-function, mse"
} |
Why is boiling water the second time more quiet than boiling it the first time? | Question: First of all: This is a different question than Why is boiling water loud, then quiet?, although the answers might be similar.
When I wake up, I boil some water for a cup of tea. It happens quite often that I do something else for 10-15 minutes and forget the water, so I boil it again. I noticed that boiling the water the first time is much louder than boiling it the second time.
Why is this the case? Did the air dissolved in the water escape, and is that why it became quieter? How long would it take until the water contains air again? Is there any simple way I could get air back into the water to check whether it's only the air?
Answer: Water could have small pockets of air dissolved inside it, and given enough time they will rise to the surface on their own.
Now when you boil water, what happens is that the vapour pressure at the water surface is equal to the atmospheric pressure, and liquid water gets turned into water vapour. This process does help the trapped pockets of air reach the surface more quickly, since the water molecules are moving around a lot during boiling. So after one boil, there is less water in your kettle, but there is also a lot less air dissolved in the water.
As the question you linked to pointed out, some of the sound comes from the dissolved air bubbles hitting the surface, and with less air that is less sound.
Also cited are the vapour bubbles produced at the bottom of the kettle (where the heating element is). A vapour bubble is, as the name suggests, a bubble of water vapour. The pressure of the water vapour is higher than the pressure of the water around it, so it rises to the surface. This produces a sound (quoting the paper linked in that question) of about $35$–$60$ kHz.
But the second time you boil the water, it doesn't start at room temperature; it starts quite a bit higher. I therefore suspect that fewer vapour bubbles are produced near the bottom (since the vapour pressure of the surrounding water is higher than at room temperature, because it's hotter) and thus there is less sound overall.
"domain": "physics.stackexchange",
"id": 94448,
"tags": "water, everyday-life"
} |
Value of current in the given circuit | Question:
Since point B is connected to ground, shouldn't current exist only in the branch ABG, with zero current in the branch BCDE, as current chooses the least resistive path to move to lower potential?
The solution for this circuit shows the current $I=\frac{50}{5+7+10+3}$, which means current exists in the branch BCDE.
I thought it should have been $I=\frac{50}{5+7}$; what am I missing?
Solution is at https://www.toppr.com/ask/question/in-the-circuit-shown-the-point-b-is-earthed-the-potential-at-the-point-a/
Answer:
Since point B is connected to ground, shouldn't current exist only in the branch ABG, with zero current in the branch BCDE,
No.
$$I=\frac{V}{R}$$
So if node C isn't also at ground then there is non-zero potential difference across the 10-ohm resistor and current will flow through it.
Similarly, if node D isn't at the same potential as node C then there is a potential difference across the 3-ohm resistor and current will flow through it as well.
as current chooses the least resistive path to move to lower potential.
Current flows through all resistors that have a potential difference across them.
The principle "current flows through the path of least resistance" is only really applicable when there is a very large difference in the resistance of two parallel paths (say 1000 ohms in one path and 1 ohm in the other, in which case there is 1000 times as much current in the 1-ohm path as in the 1000-ohm path, and it's often (but not always) reasonable to ignore the current in the high-resistance path).
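Putting numbers to it, with the resistor values quoted in the question: the loop is a single series path, so the same current flows through every resistor, and each node potential follows from the successive IR drops.

```python
# Series loop from the question: one current through every resistor.

resistors = [5, 7, 10, 3]            # ohms, in series around the loop
emf = 50.0                           # volts

current = emf / sum(resistors)       # I = V / R_total
drops = [current * r for r in resistors]

print(current)                       # 2.0 A through *every* branch
print(drops)                         # [10.0, 14.0, 20.0, 6.0] volts
```

The grounded node B only fixes the zero of potential; it does not divert the current, which is why every resistor sees a nonzero voltage drop.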
You should also not assume that ground is always the lowest potential node in a circuit. | {
"domain": "physics.stackexchange",
"id": 74266,
"tags": "homework-and-exercises, electricity, electric-circuits, electric-current"
} |
Are live and neutral wires the same as the wires which are connected to the terminals of a battery in a circuit? | Question: I want to know the basic difference between live & neutral wires and positive & negative wires(connected to a battery), or if they are the same but the names are different.
Answer: I assume "live and neutral" refers to the alternating current (AC) home installation?
That is very different from "positive and negative" on a battery, which is direct current (DC): the two poles of the battery have different potentials, with a (more or less) constant potential difference (voltage) of, for example, 1.2 V; one pole is the "positive" one, the other the "negative".
In the AC system in your home, the two connectors of a wall plug (ignoring a possible ground connector) do not have a constant potential difference (voltage); the voltage alternates very quickly (around 50 times per second) between positive and negative (with a maximum of around 110 or 220 volts, depending on where you live), in a kind of sine wave. However, one of the connectors (called "neutral") will usually have a very, very similar potential to "the ground", i.e., more or less everything else around you.
Remark: there is usually still a tiny potential difference to ground: if you connect neutral and the ground connector of the wall plug, then in a decent installation the RCD "protection switch" should immediately switch off (as it detects a small current).
PS: Please do not fool around with the electrical installation if you do not know very well what you are doing (in particular, do not short-circuit ground and neutral to test the RCD); you can get yourself killed.
Also, "live" wire can also mean something entirely different (such as a wire that is connected to a power source, as opposed to a disconnected one); you would have to provide context. | {
"domain": "physics.stackexchange",
"id": 38988,
"tags": "electricity, electric-circuits"
} |
Separating the potential energy of a system of particles. | Question: Assuming all forces derive form a conservative source and that all forces observe the strong form of the third law, how do we arrive at the following equation?
\begin{equation}
V=\sum _i V_i+\frac 12 \sum _i \sum _{j,j\neq i}V_{ij}
\end{equation}
Okay so here are my thoughts.
Firstly we divide the work up into internal and external components:
\begin{equation}
\sum _i\int ^{r_2}_{r_1}\vec{F_i}^{(e)}\cdot d\vec s _i+\sum _{i,j}\int ^{r_2}_{r_1}\vec {F_{ij}}\cdot d\vec s _i
\end{equation}
The factor of a half comes in since we are summing over both $i$ and $j$, and (I think) we can assume that $V_{ij}=V_{ji}$ (why?).
let,
\begin{equation}
\vec F_i ^{(e)}=-\nabla_iV_i(\vec r_1,...,\vec r_n)\\
\vec F_{ij}=-\nabla_{ij}V_{ij}(|\vec r_i -\vec r_j|)
\end{equation}
The $|\vec r_i -\vec r_j|$ is just the magnitude of the particles separation.
Now the work is given by:
\begin{equation}
W=\sum _i\int ^{r_2}_{r_1}-\frac{\partial }{\partial \vec r_i}V_i\cdot d\vec r_i+\frac 12 \sum _{i,j}\int^{r_2}_{r_1}-\frac{\partial }{\partial \vec r_{ij}}V_{ij}\cdot d\vec r_{ij}
\end{equation}
\begin{equation}
W=-\sum _i\int^{V_2}_{V_1}dV_i-\frac 12 \sum _{i,j}\int ^{V_2}_{V_1}dV_{ij}
\end{equation}
So my question is how we got to the end step? This is all out of Goldstein in the 1st chapter. Unfortunately I can't follow the derivation at all ...
Answer:
Firstly we divide the work up into internal and external components:
\begin{equation}
\sum _i\int ^{r_2}_{r_1}\vec{F_i}^{(e)}\cdot d\vec s _i+\sum _{i,j}\int ^{r_2}_{r_1}\vec {F_{ij}}\cdot d\vec s _i
\end{equation}
The factor of a half comes in since we are summing over both $i$ and $j$, and (I think) we can assume that $V_{ij}=V_{ji}$ (why?).
They don't have to be equal. They just have to be equal to within an arbitrary constant. Adding an arbitrary constant to a potential doesn't make a bit of difference (at least in classical mechanics), so one might as well call that constant zero.
To see why they have to be equal (to within a constant), assume $V_{ij} = V_{ji} + \Delta V_{ij}$, where $\Delta V_{ij}$ is some non-constant function. Then $\vec F_{ij} = -\vec\nabla_{ij} V_{ij}$ $= -\vec\nabla_{ij} V_{ji} - \vec\nabla_{ij} \Delta V_{ij}$. Note that $\vec\nabla_{ij} V_{ji} = -\vec\nabla_{ji}V_{ji} = \vec F_{ji}$ Thus $\vec F_{ij} = - \vec F_{ji} - \vec\nabla_{ij} \Delta V_{ij}$. The only way this can satisfy Newton's third law is if $\vec\nabla_{ij} \Delta V_{ij} = 0$. This contradicts the assumption that $\Delta V_{ij}$ is some non-constant function.
let,
\begin{equation}
\vec F_i ^{(e)}=-\nabla_iV_i(\vec r_1,...,\vec r_n)\\
\vec F_{ij}=-\nabla_{ij}V_{ij}(|\vec r_i -\vec r_j|)
\end{equation}
The $|\vec r_i -\vec r_j|$ is just the magnitude of the particles separation.
You have a misunderstanding here with respect to the external forces. The force $\vec F_i^{(e)}$ and hence the potential $V_i$ depend only on $\vec r_i$. There is no dependence on $\vec r_j$ where $j \ne i$. That first line should be $\vec F_i^{(e)} = -\nabla_i V_i(\vec r_i)$.
Now the work is given by:
\begin{equation}
W=\sum _i\int ^{r_2}_{r_1}-\frac{\partial }{\partial \vec r_i}V_i\cdot d\vec r_i+\frac 12 \sum _{i,j}\int^{r_2}_{r_1}-\frac{\partial }{\partial \vec r_{ij}}V_{ij}\cdot d\vec r_{ij}
\end{equation}
\begin{equation}
W=-\sum _i\int^{V_2}_{V_1}dV_i-\frac 12 \sum _{i,j}\int ^{V_2}_{V_1}dV_{ij}
\end{equation}
So my question is how we got to the end step? This is all out of Goldstein in the 1st chapter. Unfortunately I can't follow the derivation at all ...
Goldstein uses $\int_1^2$, not $\int_{r_1}^{r_2}$ or $\int_{V_1}^{V_2}$. This may be part of your problem. Your own notation may be confusing you.
The second line follows from the first as a consequence of the fact that $dV = \vec{\nabla} V \cdot d\vec r$ for some potential $V$. | {
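To make that last step concrete, here is the same computation written out (a sketch in the notation used above, where $V$ denotes the total potential energy):

```latex
W = -\sum_i \int_1^2 dV_i - \frac{1}{2}\sum_{i,j}\int_1^2 dV_{ij}
  = -\left[\sum_i V_i + \frac{1}{2}\sum_{i,j} V_{ij}\right]_1^2
  = V_1 - V_2
```

Because each integrand is an exact differential, every line integral collapses to a difference of endpoint values, independent of the path taken between configurations 1 and 2.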
"domain": "physics.stackexchange",
"id": 17169,
"tags": "newtonian-mechanics, classical-mechanics, work, potential-energy"
} |
C++ Socket Part-4 | Question: In my ongoing attempts to become a better blog writer I have written some more code that needs reviewing.
Full Source: https://github.com/Loki-Astari/Examples/tree/master/Version4
First Article: http://lokiastari.com/posts/SocketProtocols/
This code replaces the hand written socket code with a libcurl wrapper.
#include <curl/curl.h>
#include <sstream>
#include <iostream>
#include <cstdlib>
namespace ThorsAnvil
{
namespace Socket
{
template<std::size_t I = 0, typename... Args>
int print(std::ostream& s, Args... args)
{
using Expander = int[];
return Expander{ 0, ((s << std::forward<Args>(args)), 0)...}[0];
}
template<typename... Args>
std::string buildErrorMessage(Args const&... args)
{
std::stringstream msg;
print(msg, args...);
return msg.str();
}
class CurlGlobal
{
public:
CurlGlobal()
{
if (curl_global_init(CURL_GLOBAL_ALL) != 0)
{
throw std::runtime_error(buildErrorMessage("CurlGlobal::", __func__, ": curl_global_init: fail"));
}
}
~CurlGlobal()
{
curl_global_cleanup();
}
};
extern "C" size_t curlConnectorGetData(char *ptr, size_t size, size_t nmemb, void *userdata);
enum RequestType {Get, Head, Put, Post, Delete};
class CurlConnector
{
CURL* curl;
std::string host;
int port;
std::string response;
friend size_t curlConnectorGetData(char *ptr, size_t size, size_t nmemb, void *userdata);
std::size_t getData(char *ptr, size_t size)
{
response.append(ptr, size);
return size;
}
public:
CurlConnector(std::string const& host, int port)
: curl(curl_easy_init( ))
, host(host)
, port(port)
{
if (curl == NULL)
{
throw std::runtime_error(buildErrorMessage("CurlConnector::", __func__, ": curl_easy_init: fail"));
}
}
~CurlConnector()
{
curl_easy_cleanup(curl);
}
virtual RequestType getRequestType() const {return Post;}
void sendMessage(std::string const& urlPath, std::string const& message)
{
std::stringstream url;
response.clear();
url << "http://" << host;
if (port != 80)
{
url << ":" << port;
}
url << urlPath;
CURLcode res;
auto sListDeleter = [](struct curl_slist* headers){curl_slist_free_all(headers);};
std::unique_ptr<struct curl_slist, decltype(sListDeleter)> headers(nullptr, sListDeleter);
headers = std::unique_ptr<struct curl_slist, decltype(sListDeleter)>(curl_slist_append(headers.get(), "Content-Type: text/text"), sListDeleter);
if ((res = curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers.get())) != CURLE_OK)
{
throw std::runtime_error(buildErrorMessage("CurlConnector::", __func__, ": curl_easy_setopt CURLOPT_HTTPHEADER:", curl_easy_strerror(res)));
}
if ((res = curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, "*/*")) != CURLE_OK)
{
throw std::runtime_error(buildErrorMessage("CurlConnector::", __func__, ": curl_easy_setopt CURLOPT_ACCEPT_ENCODING:", curl_easy_strerror(res)));
}
if ((res = curl_easy_setopt(curl, CURLOPT_USERAGENT, "ThorsExperimental-Client/0.1")) != CURLE_OK)
{
throw std::runtime_error(buildErrorMessage("CurlConnector::", __func__, ": curl_easy_setopt CURLOPT_USERAGENT:", curl_easy_strerror(res)));
}
if ((res = curl_easy_setopt(curl, CURLOPT_URL, url.str().c_str())) != CURLE_OK)
{
throw std::runtime_error(buildErrorMessage("CurlConnector::", __func__, ": curl_easy_setopt CURLOPT_URL:", curl_easy_strerror(res)));
}
if ((res = curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, message.size())) != CURLE_OK)
{
throw std::runtime_error(buildErrorMessage("CurlConnector::", __func__, ": curl_easy_setopt CURLOPT_POSTFIELDSIZE:", curl_easy_strerror(res)));
}
if ((res = curl_easy_setopt(curl, CURLOPT_COPYPOSTFIELDS, message.data())) != CURLE_OK)
{
throw std::runtime_error(buildErrorMessage("CurlConnector::", __func__, ": curl_easy_setopt CURLOPT_COPYPOSTFIELDS:", curl_easy_strerror(res)));
}
if ((res = curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, curlConnectorGetData)) != CURLE_OK)
{
throw std::runtime_error(buildErrorMessage("CurlConnector::", __func__, ": curl_easy_setopt CURLOPT_WRITEFUNCTION:", curl_easy_strerror(res)));
}
if ((res = curl_easy_setopt(curl, CURLOPT_WRITEDATA, this)) != CURLE_OK)
{
throw std::runtime_error(buildErrorMessage("CurlConnector::", __func__, ": curl_easy_setopt CURLOPT_WRITEDATA:", curl_easy_strerror(res)));
}
switch(getRequestType())
{
case Get: res = CURLE_OK; /* The default is GET. So do nothing.*/ break;
case Head: res = curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "HEAD"); break;
case Put: res = curl_easy_setopt(curl, CURLOPT_PUT, 1); break;
case Post: res = curl_easy_setopt(curl, CURLOPT_POST, 1); break;
case Delete: res = curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "DELETE"); break;
default:
throw std::domain_error(buildErrorMessage("CurlConnector::", __func__, ": invalid method: ", static_cast<int>(getRequestType())));
}
if (res != CURLE_OK)
{
throw std::runtime_error(buildErrorMessage("CurlConnector::", __func__, ": curl_easy_setopt CURL_METHOD:", curl_easy_strerror(res)));
}
if ((res = curl_easy_perform(curl)) != CURLE_OK)
{
throw std::runtime_error(buildErrorMessage("CurlConnector::", __func__, ": curl_easy_perform:", curl_easy_strerror(res)));
}
}
void recvMessage(std::string& message)
{
message = std::move(response);
}
};
size_t curlConnectorGetData(char *ptr, size_t size, size_t nmemb, void *userdata)
{
CurlConnector* self = reinterpret_cast<CurlConnector*>(userdata);
return self->getData(ptr, size * nmemb);
}
}
}
int main(int argc, char* argv[])
{
namespace Sock = ThorsAnvil::Socket;
if (argc != 3)
{
std::cerr << "Usage: client <host> <Message>\n";
std::exit(1);
}
Sock::CurlGlobal curlInit;
Sock::CurlConnector connect(argv[1], 8080);
connect.sendMessage("/message", argv[2]);
std::string message;
connect.recvMessage(message);
std::cout << message << "\n";
}
Answer: I find it a bit odd to use variadic templates and various tricks (Expander) just for concatenating a few strings in a beginner's tutorial. I would much rather see a function that checks for errors which would help eliminate all those throw std::runtime_error(...) from the main part of the class.
In your code, you use unique_ptr<> inside a method, but you choose to store a raw pointer as a class field (CURL* curl). Why? This will lead to bugs if your class is moved or copied as you'll end up accessing a disposed resource or try to clean it multiple times.
You declare curlConnectorGetData as extern "C" but only pass a pointer to it. The pointer doesn't care about name mangling. In the end you mention it 3 times (once as extern "C", once as friend and once for the actual implementation which only delegates to a member function). All this can be simplified by using a lambda that doesn't capture (for which the C++ standard provides a conversion to function pointer).
You use the term sendMessage which I don't find quite appropriate for an HTTP request. The semantics of using HTTP isn't the same as using a simple socket for exchanging various messages. Users will quickly find the need to adjust various other parameters (like the request type which is currently hard-coded to Post (why not Get?)) and see the response as a more complex structure than just a string.
One more suggestion, I don't know if you're approaching this: try implementing input/output streams that read from/write to the sockets. | {
"domain": "codereview.stackexchange",
"id": 20328,
"tags": "c++, http, socket"
} |
Why can using more reads lead to a lower quality assembly? | Question: I am experimenting with adding additional reads to the input files I'm giving SOAPdenovo2, and there comes a point where a good contig I've been watching actually stops showing up. Does anyone have a quick answer to why that might happen?
Answer: The more reads you add the more errors you add into the assembly. This is because additional duplicate reads don't add nodes/edges to the de Bruijn graph, but those with errors do. By preselecting those reads aligning to your desired contigs, you may be further exacerbating this effect and further hindering SOAPdenovo2's ability to do k-mer correction (after all, the observed k-mer frequency of errors would be higher). This then results in additional incorrect paths through the graph that the assembler has to traverse. | {
"domain": "bioinformatics.stackexchange",
"id": 457,
"tags": "assembly"
} |
Inhibitory effect of GABA through GABA(A) receptors | Question: Over at Wikipedia, the following is written in the article about GABA(A) receptors:
"Upon activation, the GABA(A) receptor selectively conducts Cl− through its pore. Cl- will flow out of the cell if the internal voltage is less than resting potential and Cl- will flow in if it is more than resting potential. This causes an inhibitory effect on neurotransmission by diminishing the chance of a successful action potential occurring." (https://en.wikipedia.org/wiki/GABAA_receptor, Accessed: 22 May 2018)
Based on my understanding, if an action potential is to be inhibited, the internal voltage of the neuron needs to be decreased further away from the voltage at which the neuron's voltage-gated sodium channels open up.
With this in mind, I take issue with the statement Cl- will flow out of the cell if the internal voltage is less than resting potential.... If the internal voltage is less than the resting potential, and the Cl- flows out from the cell, then the internal voltage will increase, hence coming closer to the action potential.
This would increase the probability of reaching an action potential, not decrease it, wouldn't it?
My Question: Does Wikipedia have it wrong, or is there something I haven't understood?
Answer: Great question! This apparent contradiction has puzzled many neuroscience students before you.
Short Answer:
This is often called "shunting inhibition," in particular when excitatory and inhibitory conductances are out on dendrites.
Longer Answer:
The part that is wrong is this (emphasis mine):
the internal voltage will increase, hence coming closer to the action potential. This would increase the probability of reaching an action potential, not decrease it, wouldn't it?
The idea that "hyperpolarization is inhibitory, depolarization is excitatory" is only partly true. It is very important to also consider what the spike threshold is, and to think in terms of reversal potential for a given ion channel (or more generically we can just call it a 'conductance'), and then you can revise the statement to say this:
Conductances with reversal potential greater than spike threshold are excitatory, conductances with reversal potential less than spike threshold are inhibitory.
Most often, chloride-based channels fit into the second statement: their reversal potential is less than the spike threshold.
How Shunting Inhibition Works:
Any time you open a channel, you will shift the membrane potential towards the reversal potential for that channel. The amount of current that flows through a channel depends on the 'driving force': the difference in voltage from the reversal potential.
Let's consider a fairly typical textbook cell, with a spike threshold at -50mV, a resting potential of -65mV, and a chloride reversal at -60mV.
If the cell is at rest, and you open chloride channels (such as with GABA via GABA-A receptors), the resulting flow of current will tend to push the membrane potential towards -60mV, so the cell 'depolarizes'. However, no matter how big of a chloride conductance you open, you will never pass -60mV, so you will never reach spike threshold.
If, in that same cell, you instead opened AMPA channels, with a reversal of around 0mV, you also get depolarization, but in this case as you open more AMPA channels you can potentially depolarize the cell all the way to 0mV. Of course, unless you have blocked sodium channels you will get an action potential before you reach that point, but that's the key: you will cross threshold, therefore we call it excitatory.
Now let's consider a third case where we open both AMPA channels and GABA-A channels. As long as the membrane potential is <-60mV, both channels contribute to depolarization. However, as soon as the membrane potential is >-60mV, chloride ions will start flowing into the cell. We call this "shunting" inhibition because if you look at the sum current flow in the cell, it will look small, because you have chloride ions coming in at the same time as sodium ions, resulting in little change of the membrane potential despite lots of ions moving.
The result is that GABA-A channels, even if they can depolarize a cell that is at rest, will act to prevent the cell from depolarizing far enough to reach threshold. It's important to consider the dynamics of the membrane potential, rather than thinking of the membrane potential as something that is simply added or subtracted to instantaneously.
Important Caveats:
Ion concentrations matter! If chloride concentrations are not 'typical' or if the spike threshold is more negative than in a 'typical' textbook cell, then chloride conductances can indeed be excitatory! In fact, excitatory GABAergic transmission is important at certain stages of development. If cells have too much chloride in them, this can also cause GABAergic transmission to become excitatory (or at least limit the efficacy of the inhibition), and this can lead to epilepsy (see Cohen et al. 2002).
There can also be a time window where GABAergic transmission is excitatory even in a typical cell: this is the period where the GABA-A channels have closed, but the membrane remains slightly depolarized and has not returned to rest. Excitation that arrives during this time will sum with the residual depolarization, and there are no open GABA-A channels to 'shunt' the current.
Alger, B. E., & Nicoll, R. A. (1979). GABA-mediated biphasic inhibitory responses in hippocampus. Nature, 281(5729), 315.
Cohen, I., Navarro, V., Clemenceau, S., Baulac, M., & Miles, R. (2002). On the origin of interictal activity in human temporal lobe epilepsy in vitro. Science, 298(5597), 1418-1421.
Purves, D., Augustine, G. J., Fitzpatrick, D., Hall, W. C., LaMantia, A. S., McNamara, J. O., & White, L. E. (2014). Neuroscience, 2008. De Boeck, Sinauer, Sunderland, Mass. | {
"domain": "biology.stackexchange",
"id": 8700,
"tags": "neuroscience, neurotransmitter"
} |
Reshaping black holes to "push" the event horizon | Question: I watched this simulation made by LIGO (Laser Interferometer Gravitational-Wave Observatory) where 2 black holes merge into one.
https://www.youtube.com/watch?v=I_88S8DWbcU
I was therefore thinking that theoretically nothing can escape a black hole, given that the features of the space and the black hole remain the same, due to the strong gravity.
However, I was thinking if 2 black holes stay together close enough, the "event horizon" would reshape as well.
We all know the gravity force is F=G(Mm/r^2).
Given a point next to the event horizon but "inside" the black hole, the gravity there will be so strong that the escape velocity will be higher than c (the speed of light).
If we place another massive object (like another black hole) in the proximity of this point, but far enough away, the gravity there will be counteracted and therefore this point will be able to escape its black hole, reshaping the event horizon and making it smaller. (We will have created a "concavity" in the event horizon.)
My question is:
Given a hypothetical scenario where we create thousands of micro black holes orbiting the main black hole, in a gravitational equilibrium, would we be able to reshape and "push" the event horizon strongly enough that objects inside could use the chance and escape?
Could we even pull the matter out of the black hole and make it less dense over time, even transforming it into a star and destroying the singularity?
Answer: The only places around one black hole where gravity would be strong enough to pull an object out from inside the horizon of another black hole are all inside the horizon of the first hole.
Remember that at minimum an object just inside the horizon of a black hole is traveling toward the center at the speed of light (as its speed would be measured from outside the hole, if we could see it) and getting faster. For a second black hole to catch up to the object to pull it out, it would itself have to be moving faster than the object - faster than light - which is obviously impossible but anyway would simply merge the two black holes, without rescuing anything. | {
"domain": "astronomy.stackexchange",
"id": 2541,
"tags": "black-hole, event-horizon"
} |
Whats the advantage of variable speed furnace blowers? | Question: I'm in the market for a new heating and air conditioning unit for my home. Most offer variable speed blowers and the more expensive models are equipped with BLDC motors, which seem overkill. I am having a difficult time seeing the advantages to variable speed blowers other than perhaps to reduce noise.
I figure the goal is to heat up the interior as fast as possible by heating the coils as hot as possible and forcing as much air as possible over them. Of course, you don't want to turn your exhausts into hot air guns. Am I missing something here?
Answer: Variable blowers are typically combined with variable speed air conditioning/heat pump compressors. The compressor is the major energy draw. Energy usage is proportional to the cube of the pump speed, but cooling is proportional to the speed (mass of refrigerant moved). A non-variable system must be sized for max load, and runs that way for a shorter time. So we reduce the refrigerant flow when we don't need full heating or cooling, to save money by running slower for longer. There are also comfort improvements in the summer via better dehumidification from running the system longer. We save a little on the blower fan speed too, but it's mostly the compressor. The pump/fan laws that govern this are known as the pump affinity laws.
The goal is to heat/cool the interior to the setpoint and keep it there. | {
"domain": "engineering.stackexchange",
"id": 5281,
"tags": "thermodynamics, heat-transfer, heating-systems"
} |
Spectral lines on a detector | Question: How can it be possible for a single electron to go through 2 slits at the same time and create 2 spectral lines on a detector. What is wrong with that theory, but at the same time produce results as if it is true?
Answer: This problem is an example of the wave-particle duality (which is a simplistic model of a complex quantum interaction). Everything can be considered as a wave or a particle, although the higher the energy (that is, the larger the particle's mass) the smaller the wavelength. So, a low energy photon (like a radio photon) has a very long wavelength, while an electron (which has relatively larger energy) has a very short wavelength. Still, you can consider an electron as both a wave and a particle.
When you design an experiment to test one of the aspects of an electron, then the electron will respond with that aspect. So, in your question, you aim a very low-particle rate electron stream (or light stream for that matter) at a two-slit plate. Since your experiment is looking for the wave aspect of the electron, that is what it will see. Therefore, the two-slit experiment will show that an electron behaves just like a wave.
You can change the experiment a bit. Instead of two slits, aim that electron flow at a piece of metal and you will see the electron pop off other electrons (or better, use high-energy photon - X-Rays for example - and you see the light pops electrons off of the metal). Since you are testing the particle property of electrons or light, that is what you will see.
Basically, your experiment interacts with what you are observing in a way that modifies the results of what you see. This is an example of the observer affecting the observed. I must also point out that for very small wavelength items (such as electrons) it is easier to see the particle aspects of the object than its wave characteristics. On the other hand, long wavelength objects, such as radio-frequency photons, are much easier to see as waves than as particles. Still, it is possible to see radio waves as particles and electrons (or even protons) as waves.
"domain": "physics.stackexchange",
"id": 22810,
"tags": "quantum-mechanics, quantum-interpretations, double-slit-experiment"
} |
STM32F3 timers & computing | Question: I have an STM32F3 discovery board. I want to go to the next step and I want to try to use timers in a few configurations.
How can I calculate the variables (such as prescaler, period)? I looked in all the datasheets and manuals and didn't find anything that describes these values for the different modes - input capture mode, OP, PWM, etc.
I think that the prescaler is for dividing the frequency by a factor of 1-65536.
So if I have fcpu=72MHz and want to generate a signal of frequency=40kHz, am I supposed to do: 72MHz/40kHz=1800?
Should I then subtract 1 from this value before writing it to the prescaler register?
Answer: http://www.st.com/web/en/resource/technical/document/datasheet/DM00058181.pdf
Let's use this datasheet as an example.
Page 18 holds what is known as the clock tree. It shows the frequency at which every component ticks. So let's try following it.
You have your crystal oscillator working at some frequency (or some internal clock). You multiply it by the PLL multiplication factor. A PLL is used to make a higher frequency signal from a lower one. You then divide by the AHB prescaler. You then divide by the APB1 or APB2 prescaler (depending on which timer you're using) and then multiply by 2 if the APB prescaler is something other than 1. What we have now is the clock the timer module "sees". Now it divides that clock by the timer prescaler and increments (or decrements) the counter at that frequency. Once it reaches a set point it will throw the interrupt.
AHB and APB registers are usually set in the project settings of your programming environment. For the timer prescaler register, refer to the datasheet of your MCU.
"domain": "robotics.stackexchange",
"id": 675,
"tags": "microcontroller"
} |
Employee name auto-complete on card scan | Question: I'm currently running an employee name substitution on card scan.
Overall the code takes the entry through the gun scan, scans through the hidden reference sheet to find a match and then inputs the actual complete name.
I had to implement that feature in my code so that names are always written the same way (for future data treatment purposes).
I'd appreciate any tip, hint or other proposition to optimise the code.
One of my goals would be for it to auto-fill with any entry (right now it mostly works with operator numbers) or to suggest entries in a scrolling menu based on what's being typed in real time (if that's not too much).
Here's my codes :
Sub Workbook_sheetChange(ByVal Sh As Object, ByVal Target As Range)
Application.EnableEvents = False
Application.ScreenUpdating = False
If Not Intersect(Target, Sh.Range("C5:C8")) Is Nothing Then 'If change in "no.employé" cells
Call Employe ' Call name standardization macro
End If
Application.ScreenUpdating = True
Application.EnableEvents = True
End Sub
Sub Employe()
' Standardization of the operator name macro
' The code automatically enters the employee name from their number entry
' Therefore employees can scan their pass and get their full name automatically
Dim ash As Worksheet
Set ash = ActiveSheet ' Identify active sheet
Dim positionInitCaseNom1enY As Long: positionInitCaseNom1enY = 5 'First "Name" cell row position
Dim positionCaseNomEnX As Long: positionCaseNomEnX = 3 '"Name" cells column position
Dim k As Long: k = positionInitCaseNom1enY ' Counter increment position
Dim no As Variant ' Value of name cell (changes every loop)
Dim i As Integer ' Row integer
For i = 0 To 3
k = k + i ' Up a row (first 0)
no = ash.Cells(k, positionCaseNomEnX).Value ' Gives "no" the current "Name" cell value
If no <> "" Then ' Verify if there's something in "Name" cell
Dim nos As String
nos = CStr(no) ' no to string (now nos)
Dim valeurnorm As Range
Set valeurnorm = Sheets("Liste Employé").Range("A2:A200").Find(nos)
' Looking for a match in name reference sheet
If valeurnorm Is Nothing Then ' If no match found
If MsgBox("Entrer un numéro d'employé valide" & vbNewLine & "Le numéro d'employé entrée est le suivant: " & nos, 5 + vbCritical, "Erreur") = vbRetry Then
' MsgBox telling the employee they entered an invalid number and to retry
ash.Cells(k, positionCaseNomEnX).Value = "" ' Clear old number
ash.Cells(k, positionCaseNomEnX).Select ' Reselect cell
End If
Else ' If name is in the reference sheet
ash.Cells(k, positionCaseNomEnX).Value = valeurnorm ' Change the value of the entry for the complete name
End If
End If
Next
End Sub
Sorry for all of the typos, I need to translate everything and my written English is kinda s***.
Answer: positionInitCaseNom1enY and positionCaseNomEnX should be declared as Const or in an Enum because they are default values that never change.
Const positionInitCaseNom1enY As Long = 5
Const positionCaseNomEnX As Long = 3
Due to implicit conversion, no does not need to be cast to a String. The compiler automatically does it for you.
Dim nos As String
nos = CStr(no) ' no to string (now nos)
Use dynamic ranges whenever possible.
Set valeurnorm = Sheets("Liste Employé").Range("A2:A200").Find(nos)
Assuming the name list is the only thing in column A, you should dynamically size your range to fit the data like this:
With Sheets("Liste Employé")
Set valeurnorm = .Range("A2", .Cells(.Rows.Count, "A").End(xlUp)).Find(nos)
End With
Using Dynamic Named Ranges helps to give your code identity. And that is what we want to do as developers. If you have to come back a year from now, you may not know what StandardValues represent but you will immediately understand what EmployeeNames are.
Set employeeNames = Sheets("Liste Employé").Range("EmployeeNames").Find(nos)
Alerts are necessary but having to click a MsgBox() can get irritating. I would prefer that:
If Not .Find("Another Solution") Is Nothing Then Call Use(.Find("Another Solution"))
Solution 1: Use Conditional Formatting
How To Highlight Cells If Not In Another Column In Excel?
Solution 2: Use an ActiveX Combobox
Setup
• Insert a hidden ComboBox on each Worksheet that needs the name validation
• Give it a meaningful name
• Set its ListFillRange to a Dynamic Named Range of employee names
When a cell in the validation range is selected use the Workbook_SheetSelectionChange()
• Move the ComboBox over the Activecell
• Resize the ComboBox to fit the ActiveCell
• Set the ComboBox.LinkedCell = ActiveCell
• Set focus to the ComboBox
When the user selects a cell outside the validation range
• Hide the ComboBox
• Set the ComboBox.LinkedCell = Nothing
Private Sub Workbook_SheetSelectionChange(ByVal Sh As Object, ByVal Target As Range)
AdjustEmployeeNameComboBox Sh, Target
End Sub
Private Sub AdjustEmployeeNameComboBox(ByVal Sh As Object, ByVal Target As Range)
Dim hasEmployeeRange As Boolean
On Error Resume Next
Dim EmployeeNameComboBox As OLEObject
Set EmployeeNameComboBox = Sh.OLEObjects("EmployeeNameComboBox")
hasEmployeeRange = Err.Number = 0
On Error GoTo 0
If Not hasEmployeeRange Then Exit Sub
Dim isEmployeeRangeSelected As Boolean
isEmployeeRangeSelected = Not Intersect(Target, Sh.Range("C5:C8")) Is Nothing
With EmployeeNameComboBox
.LinkedCell = ""
.Visible = isEmployeeRangeSelected
End With
If Target.Cells.Count > 1 Then Exit Sub
If isEmployeeRangeSelected Then
With EmployeeNameComboBox
.LinkedCell = Target.Address
.Top = Target.Top
.Left = Target.Left
.Height = Target.Height
.Width = Target.Width
.Activate
End With
End If
End Sub
Ugh..kinda nasty but it works.
Solution 3: Almost the Same as 2
Use the same setup as Solution2 but use a Cell Style to indicate that it is an Employee Name Validation Range. The advantage of this is that you could have the validation anywhere on the Worksheet without having to update your code.
For this solution you will need to change
isEmployeeRangeSelected = Not Intersect(Target, Sh.Range("C5:C8")) Is Nothing
to this
isEmployeeRangeSelected = Target.Style = "Cell Style Name goes Here" | {
"domain": "codereview.stackexchange",
"id": 39048,
"tags": "vba, excel, autocomplete"
} |
ID microbe found in moss sample (USA) | Question: I found this microbe after putting a wet moss sample under my microscope. It latched on to a piece of moss and scanned the area around it for prey, and when it didn't find any, it slowly swam away. It's fairly large for a microbe, and it's rectangular with two big green spots. I'm not sure if the spots have any significance (I thought for a second it might be mitosis, but I'm not sure).
Some pictures at 250x and 400x:
Any information would be greatly appreciated!
Answer: I am not 100% sure on this, but it looks very like a diatom. These are a range of very diverse unicellular algae found world-wide in soil and water environments (salt and fresh). There are a huge number of species (also genera, families etc.).
The reasons I think that this is a diatom are that it has what appears to be a thick transparent wall (photos 2 and 3) and has a "cut in half" shape - where the word diatom comes from (dia divided, tomos I cut). Diatoms are usually between 20 and 200 micrometres long, so 250-400x observation fits fairly well.
I think what you have here is a pennate diatom (see images under "classification"). Some pennate diatoms are motile.
There's an image of one that resembles yours here (see second image) | {
"domain": "biology.stackexchange",
"id": 11503,
"tags": "species-identification, microbiology, microscopy"
} |
Controllers are unable to fight gravity in Gazebo simulation | Question:
I am currently trying to simulate a robot in Gazebo using the ROS framework. I used this (www.gazebosim.org/wiki/Tutorials/1.9/ROS_Control_with_Gazebo) tutorial to get an existing robot model running with the controllers from the ros_control package. Everything works fine; the controllers for each joint are up and running. I can use the corresponding ROS topics to send commands to each controller.
There is one slight problem: the controllers are unable to even fight the robot's own weight. No matter what combination of PID gains I set for the controllers, the robot is always pulled down by its own weight.
I am using a UR5 robot for simulation, the URDF file can be found here (www.github.com/ros-industrial/universal_robot/tree/groovy-devel/ur_description).
Here are two screenshots of the simulation, one with gravity enabled:
www.s14.directupload.net/images/131214/u6f4585l.jpg
and one with gravity disabled:
www.s1.directupload.net/images/131214/oya39ubp.jpg
Can anyone help me here?
Originally posted by miniME05 on ROS Answers with karma: 33 on 2013-12-16
Post score: 1
Answer:
It appears all the effort limits in your URDF are set to 10.0, which seems suspect, are you sure that the joint effort limits are correct? Gazebo will end up limiting the max effort output to that effort limit, so it is quite important.
Originally posted by fergs with karma: 13902 on 2013-12-16
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by miniME05 on 2013-12-16:
Thanks for the hint. I will try to edit these values (later or tomorrow) and see if I can get the robot moving properly. Note: I didn't create that URDF file, so I assumed it should work. But 10 Nm on each joint doesn't sound like that much :) Edit: yeah, the value of the effort limit was the problem
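For reference, the effort limit the answer refers to lives on each joint's `<limit>` element in the URDF. A sketch of such a joint (the numbers here are illustrative, not the real UR5 values):

```xml
<joint name="shoulder_pan_joint" type="revolute">
  <parent link="base_link"/>
  <child link="shoulder_link"/>
  <limit lower="-3.14" upper="3.14" effort="150.0" velocity="3.15"/>
</joint>
```

Gazebo clamps the controller's commanded torque to `effort`, so a value like 10 Nm on every joint leaves the controllers too weak to hold the arm up against gravity no matter how the PID gains are tuned.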
"domain": "robotics.stackexchange",
"id": 16470,
"tags": "ros, gazebo, simulation, ros-control, ros-controllers"
} |
Modifying a string by passing a pointer to a void function | Question: Please explain the relationship between pointers and arrays. In this tutorial, they change int c by changing *r in the function. How is that possible?
Also, please review my code for reversing a string. The string variable belongs to the main() function, and I would like to pass a pointer to let the strreverse() function write to the string. Does my code work?
#include<stdio.h>
#include<stdlib.h>
#include<string.h>
void strreverse(char *string)
{
int i, len;
char c;
len=strlen(string);
char string2[len+1];
for(i=0; i<len; i++)
{
c=string[i];
string2[len-i]=c;
}
string2[len+1]='\0';
string=string2;
//printf("%s\n", string); <-- this would work,but should be in main
}
int main(int argc, char *argv[])
{
char str[256];
printf("Type a String to reverse it.[max. 255 chars]\n");
fgets(str, 255, stdin);
strreverse(&str[0]);
printf("%s", str);
return EXIT_SUCCESS;
}
Answer: No, I've compiled your code with C90 and it doesn't reverse the given string.
int main(int argc, char *argv[]) If you're not compiling from the command line with arguments to be used by the program, then int main() suffices.
Passing an argument by pointer is used when you want the value of the variable changed. Say I have a variable int var and a function change(.), and I want change to alter the value of var. If I declare change as void change(int n) and I call change(var), the function receives only a copy of var; its address is different from the original var, so any change made to the copy will not affect the original. If instead I declare change as void change(int* n), then change only accepts int pointers, and I have to pass the address &var into change as change(&var). Now working through this address is exactly like working on the original var.
To understand pointer-array relationship read wikipedia.
int array[5]; // Declares 5 contiguous integers
int *ptr = array; // Arrays can be used as pointers
ptr[0] = 1; // Pointers can be indexed with array syntax
*(array + 1) = 2; // Arrays can be dereferenced with pointer syntax
*(1 + array) = 3; // Pointer addition is commutative
2[array] = 4; // Subscript operator is commutative
Array names are not really pointers but can be used as pointers. Instead of strreverse(&str[0]), you can do strreverse(str). Same result.
You've passed argument as pointer but your code still fails, why?
One thing to know about fgets is that unless there is an EOF, which you can only get from an input file or Ctrl+Z if you run from the command line, fgets exits when the length argument is reached or it encounters a newline. In summary, fgets reads the newline as a character when you press the enter key, increasing your desired length by 1. So if I had entered "my string" + Enter, your variable str becomes "my string\n".
So you got the length of the string into len. The array string is zero-based, so string[len] returns the char after the desired last one. The last char is string[len - 1].
You should have done this;
char string2[len];
for(i=0; i<len; i++){
c=string[i]; //variable c is unimportant
string2[len-i-1] = string[i]; //observe the -1, when i = 0, string2[len - 1] = string[0]
}
string2[len] ='\0';
Now that you're done reversing, you need to understand the implication of your next move.
string = string2;
string is a pointer, but that doesn't make it any less of a variable, it's a pointer variable, it also has an address. And if I declare a pointer variable to its address, that pointer will also have an address. Going back to what I said earlier, when you call change(&var), a copy of the address of var is passed into the function, so when you change the value of this pointer, it no longer holds the address of var. You may think about dereferencing, like this
*string = *string2;
but this will only alter the first value of the array since *string is same as string[0]. The solution is to copy element by element.
for(i=0; i<=len; i++)
string[i] = string2[i];
Now your string is reversed.
Read this wikipedia article to understand how an array can be reversed faster. | {
"domain": "codereview.stackexchange",
"id": 4492,
"tags": "c, strings, beginner, pointers"
} |
Calculating speed in four dimensions | Question: If you are moving at $c$ in 3D space and $c$ in time axis too, What would be your total speed?
Edit: Since question has been voted to be closed, I shall make an Edit.
In 4D world all objects move with speed $c$. This implies there should be a relation to compare speeds in two realms space and time.
Answer: The spacetime interval is a relativistic invariant, and is proportional to the traveler's proper time. So in a sense you are traveling at one second per second, per your own wrist-watch. Every other measurement would be the speed of some other inertial reference system, measured with your clock.
Let $s^2 = x^2 + y^2 +z^2- (ct)^2$, where $x$, $y$, $z$ are coordinates in some inertial reference frame, and $t$ is the clock from that same frame. Then $s$ is the spacetime interval for the object that began at location $(0,0,0,0)$, and is now located at $(x,y,z,t)$. Perhaps you set off a firecracker at the beginning, and again at the end: two spacetime events. While different inertial observers will observe different coordinates and times, based on their relative velocity, they will all agree on $s^2$ for the spacetime interval between the two events.
You, the person setting off the firecrackers, can also measure $s^2$. Perhaps you are standing still -- then $s^2 = (ct)^2$ according to your wristwatch. Now calculate your speed: $ds/dt = c$, and in units where $c=1$, it is just one tick per tock. | {
"domain": "physics.stackexchange",
"id": 31061,
"tags": "special-relativity, spacetime, time, metric-tensor, speed"
} |
LHC Big Bang Temperatures | Question: It's been claimed that the LHC's 14 TeV energy produces temperatures comparable to that which occurred very soon after the Big Bang. The well-known $E=1.5kT$ formula from classical statistical mechanics predicts LHC produced temperatures in the $2.44\times 10^{17}$ K range. This temperature is much higher than I expected.
Is there an agreed upon method of calculating temperature equivalents of 14 TeV p-p interactions, and if so, what is the result?
Answer: Note that the energy is per degree of freedom, so you don't use all the 14 TeV (nor are they actually running at that energy yet).
So, how many degrees of freedom? Good question.
Each proton has three valence quarks, but these are generally agreed to make up a small portion of the mass (and to carry a small portion of the momentum) of the proton. The rest of the mass (or momentum) is carried by particles (quarks and gluons mostly) from the so-called "sea"; these pop into and out-of existence owing to the uncertainty principle.
Worse, when the collision happens there is a great deal of energy available to put the virtual particles of the sea on to (or nearly on to) the mass shell, converting them into real particles, each equipped with their own swarm of ghostly hangers on. Then many of these decay in a very short time.
It is the average energy of this multitude of degrees of freedom which you are trying to measure/calculate, and it is non-trivial. | {
"domain": "physics.stackexchange",
"id": 607,
"tags": "large-hadron-collider, big-bang"
} |
Maxwell stress tensor for electric field | Question: If we need to calculate the time-averaged Maxwell stress tensor for an arbitrary field like
$$
\vec{E}=E_{0x}e^{ikz-iwt}\hat{i} + E_{0y}e^{ikz-iwt}\hat{j}+E_{0z}e^{ikz-iwt}\hat{k}
$$
I know we should omit the $e^{-iwt}$ term and multiply by $\frac{1}{2}$ to get the time average, but since the Maxwell stress tensor should be real, what would happen to the $e^{ikz}$ terms? Because they do not get canceled in the stress tensor's formulas like:
$$
T_{xy} = \epsilon_{0}(E_xE_y)+\frac{1}{\mu_0}(B_xB_y)
$$
and other terms. Should the multiplication be conjugated like $E_xE^*_y$?
Answer: The Electric and Magnetic fields in Classical Electromagnetism are not complex fields, they are real.
However, when the fields vary sinusoidally, we need to keep track of the amplitude and the phase, and a complex number is an easy way to do this. Thus, we use complex numbers to denote electromagnetic waves because they greatly simplify most calculations, with the understanding that we are actually interested in the real part, which we extract at the end. However, this only works as long as the operations that we perform on the fields are linear, since these keep the imaginary and real parts separate during calculations.
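For products of sinusoidal fields, the standard workaround is the cycle-average identity $\langle \mathrm{Re}(Ae^{-i\omega t})\,\mathrm{Re}(Be^{-i\omega t})\rangle = \tfrac{1}{2}\mathrm{Re}(AB^{*})$, which is where the conjugation in $E_xE^*_y$ comes from (the $e^{ikz}$ factors then cancel between a field and a conjugate). A quick numeric check of the identity, with made-up amplitudes:

```python
import cmath
import math

# Arbitrary complex amplitudes standing in for E_x and E_y (they may
# include e^{ikz} phase factors; the identity holds regardless).
A = 2.0 + 1.0j
B = -0.5 + 3.0j
w = 2.0 * math.pi          # angular frequency, so the period is T = 1
N = 100_000                # midpoint-rule samples over one period

def field(amp, t):
    """Physical (real) field at time t for complex amplitude amp."""
    return (amp * cmath.exp(-1j * w * t)).real

avg = sum(field(A, (n + 0.5) / N) * field(B, (n + 0.5) / N)
          for n in range(N)) / N
exact = 0.5 * (A * B.conjugate()).real

print(avg, exact)          # both 1.0 to numerical precision
```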
However, if you perform a non-linear operation (such as squaring the field, for example) the real and imaginary parts would mix, and so using the complex representation breaks down when we try to calculate the Poynting vector using the "usual" definition, or more generally, the Electromagnetic Stress-Energy Tensor. In these cases, you should in fact either be using the real part of the complex field you mentioned above, or a different definition for the quantities. | {
"domain": "physics.stackexchange",
"id": 69143,
"tags": "electromagnetism, energy, stress-energy-momentum-tensor"
} |
Why don't we burn ourselves when we speak? | Question: In this post we learn that sound transfers a lot more energy than heat across the medium, so why don't we burn ourselves when we speak by generating sound? Does sound reflect more easily than heat?
Answer: It is similar to the reason we don't burn ourselves by compressing a spring. The compressed spring stores potential energy. If you stop pushing, the spring will push you away and you get it (almost) all back.
If the spring rubbed a wall as it compressed and expanded, there would be friction. Some energy would be lost as heat. You would not get it all back.
Sound is a pressure wave. Air compresses like a spring. | {
"domain": "physics.stackexchange",
"id": 76394,
"tags": "thermodynamics, acoustics, everyday-life, estimation"
} |
What CS theories are absolutely paramount for someone new to TCS to understand? | Question: First - I'm happy to be a part of this community. Electronics and software engineering are both my passion and my profession, yet I feel as if I'm missing a solid basis in theoretical computer science. This field absolutely fascinates me, and I was pleased to see a Stack Exchange community built around it. So thanks to everyone who contributes.
I'm interested in building well-rounded foundation TCS, but don't know where to start. Of course Turing, etc is very popular, but I'd like some reference from researchers in the field on what theories are critical to have a greater, well rounded understanding of computers, while actually solving real problems. Please include any examples/advice where TCS has helped you solve a specific problem, if applicable.
My questions..
What areas are critical for me to study in mathematics?
What areas are exciting and "hot" in theoretical CS right now?
What areas are incredibly useful to consider for hardware innovation?
What areas are useful when considering software innovation?
If it helps at all I work in IoT as a software/cloud architect, primarily writing ruby/C++, while occasionally working with EE on hardware design. I'm just starting to get into PCB design in my free time.
Thanks again in advance, looking forward to learning!
Answer: (Disclaimer: this answer has a focus on programming languages theory, which is only one of the many disciplines under the TCS umbrella.
Apologies for the length.)
A small digression
You are asking for topics in TCS which
are "hot" in theoretical CS right now
are useful for innovative applications in software/hardware in the next future
From the way you pose the question, I am led to think that you believe that the two requirements above largely overlap, and/or that only bleeding edge research in TCS can have some impact in future technologies. If that is the case, I would instead argue that the overlap is instead much smaller.
First, CS theory is usually very far ahead most applications.
This is not unlike theory in other science disciplines. In the past, some great theoretical developments took much time to influence the industry.
A few examples:
Turing's vision of computers influenced automatic computers
Church's $\lambda$-calculus influenced programming languages
Higher order functional programming influenced Google's map/reduce
System F influenced generic types of Java (and C#, Scala, etc.)
System F$\omega$ influenced Scala higher kinds
Dependent types influenced Haskell GADTs
Church-style encodings of algebraic data types influenced the "visitor pattern" in OOP
Category theory influenced functional programming
Constructive/intuitionistic logic influenced type systems
These influences did not happen overnight. If history repeats, some new
technology tomorrow is more likely to draw from old-ish theory (relatively speaking) than bleeding edge discoveries.
Second, while some theoretical research is directed by some clear applications, sometimes research is driven by curiosity: when we discover a new approach and the first results appear to lead to an elegant theory, which reshapes our way of thinking about some topic, then we follow that direction. Often, even if we can't find a practical application right now, we still want to know how far we can go!
Third, some of the most powerful theory involves negative results. These, on the one hand do not tell us how to solve problems, but on the other hand tell us to avoid wasting resources trying to achieve the impossible.
For instance, computability theory tells us that we cannot precisely count the number of pages in a PostScript document (!), so we need to either move to better document formats or live with some approximate solution. Dually, we learn that if we design a new media format, we should try to avoid the same issues.
Another example: the so-called "free theorem", or parametricity property, can be used to prove the impossibility of writing a function with a certain polymorphic (or generic) type, in some languages.
Another: the Curry-Howard isomorphism can be used for a similar goal, proving the impossibility of a hypothetical program. (Shameless plug: I once used that to answer a Scala question on StackOverflow)
Some suggested "paramount" topics
(The following lists are not, by any means, complete. I only wrote the first topics that came to my mind.)
I would recommend the following topics as the bare minimum.
[Logic] Classical propositional logic. First order logic. Naive set theory.
[Computability] Decidable and semidecidable problems. Rice theorem. Many-one reduction.
[Computational complexity] P vs NP. Completeness / hardness. Polynomial reduction.
[Programming languages] $\lambda$ calculus: untyped & simply-typed. Church encodings (for untyped). Fixed points vs recursion.
These are more advanced. I would not recommend to study them all, albeit maybe one should know what these topics are about, at least. Having even a superficial understanding of these can help in keeping the mind open.
[Logic] Intuitionistic logic. ZFC set theory. Linear logic.
[Geometry] Basic topology.
[Domain theory] Posets. $\omega$CPOs. Complete lattices. Scott topology. Fixed point theorems (Tarski, Kleene). Induction ($f(x)\sqsubseteq x \implies \mu f \sqsubseteq x$). Coinduction.
[Computability] Rice-shapiro.
[Type theory] Polymorphic types (System F). Dependent types and recursive/inductive types (Calculus of Constructions, Coq, Agda, etc.). Curry-Howard. Homotopy type theory (hot topic).
[Category theory] Products/coproducts. Functors. Natural transformations.
$F$-algebras/coalgebras.
[Programming languages] Imperative, functional, and logic programming. Formal semantics: operational (small and big step), denotational, axiomatic (Hoare logic). Continuations.
[Model checking] Temporal logics (modal $\mu$ calculus, CTL, LTL, etc.). SAT-solvers. SMT-solvers.
[Concurrency] Petri nets. Process algebras.
These are even more advanced or hotter topics.
[Programming languages] Verification. Abstract interpretation. Gradual typing.
[Category theory] Cartesian closed categories. More "abstract nonsense" in general.
So. Many. Others.
And finally,
the topics mentioned in the digression above :-)
Most, if not all, these topics have had a deep influence on the design of modern programming languages. Some of these can also affect the way we think rather deeply. Here's a few examples from my experience.
When stuck in writing a correct while loop, one might stop and try to think about the loop invariants, and solve the issue. Of course, this only happens if one has seen Hoare logic before.
Remembering certain programming techniques can be hard. Sometimes, theory can help to quickly recall / reconstruct such techniques. For instance, above I mentioned the visitor pattern. Its definition can be recovered remembering that recursive types are interpreted as initial $F$-algebras, so by initiality, we immediately obtain an associated type
$$
\mu F \to \forall \alpha.\ (F\alpha \to \alpha) \to \alpha
$$
So, in OOP, we would take the object (of recursive type $\mu F$), the visitor (of type $F \alpha \to \alpha$), and must return the result of the visitor ($\alpha$). Translating this into OOP, say Java, is routine.
This might look like gibberish before studying these topics, but I believe one can see that, in this way, a one-line theoretical description can summarize a complex technique like the visitor pattern.
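As a concrete (if loose) illustration in Python rather than Java: the recursive type $\mu F$ below is a binary tree, a visitor bundles one function per constructor (an $F$-algebra $F\alpha \to \alpha$), and accept is the fold of type $\mu F \to (F\alpha \to \alpha) \to \alpha$. The class and function names are my own, not from any particular library:

```python
class Leaf:
    """Constructor 1 of the recursive tree type (mu F)."""
    def __init__(self, value):
        self.value = value

    def accept(self, on_leaf, on_node):
        return on_leaf(self.value)

class Node:
    """Constructor 2: an internal node with two subtrees."""
    def __init__(self, left, right):
        self.left, self.right = left, right

    def accept(self, on_leaf, on_node):
        # Fold the subtrees first, then combine: initiality at work.
        return on_node(self.left.accept(on_leaf, on_node),
                       self.right.accept(on_leaf, on_node))

tree = Node(Leaf(1), Node(Leaf(2), Leaf(3)))
total = tree.accept(lambda v: v, lambda l, r: l + r)         # the "sum" algebra
depth = tree.accept(lambda v: 1, lambda l, r: 1 + max(l, r)) # the "depth" algebra
print(total, depth)   # 6 3
```

Swapping the algebra (the pair of lambdas) changes the computation without touching the tree classes, which is exactly the selling point of the visitor pattern.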
Concluding, I hope I managed to show that the topics above have had an impact on software, and probably will have more in the future. I also hope this does not feel too overwhelming, especially since it's only about the PL part of TCS! :-) | {
"domain": "cstheory.stackexchange",
"id": 4052,
"tags": "big-picture, big-list"
} |
Swinging of a bob | Question:
Suppose a variable force is acting on the bob and making it swing as shown in the figure. Why will kinetic energy be conserved here, even though a variable force is acting on the bob?
Answer: This is a straight-forward application of the mechanical work-energy theorem: the net work done by all forces acting on a body as it moves from position $i$ to position $f$ is equal to the change in the kinetic energy of the body, $\Delta K = K_f-K_i$ or
$$\sum_{\mathrm{all~forces}}W = \Delta K.$$
It doesn't matter what the nature and origins of the forces are. All that matters is whether they do work on the object (mechanically) or not. They may be conservative or non-conservative, time-dependent or constant.
Kinetic energy is not conserved, and the calculation of kinetic energy is frame-dependent, so this principle/theorem must be applied wisely.
The work energy theorem arises out of the conservation of energy when dealing with mechanical processes (and may carefully be extended to other processes, but we won't deal with that here): $$E_f=E_i+W_{\mathrm{non-conservative}}$$
where $E=K+U$, and $U$ is the system potential energy associated with some conservative force. As part of accounting for energy changes in a system, we either count the change in a potential energy or the work done by that conservative force, but not both. We also define changes in potential energy as $$\Delta U_n = -W_n,$$ where the $n$ subscript identifies a particular conservative force.
Replacing the $E_f$ and $E_i$ by $K+U$ we get the following:
$$K_f + \sum_n U_{nf}= K_i + \sum_n U_{ni} +W_{\mathrm{non-conservative}}$$
$$K_f - K_i = \sum_n\left(-U_{nf}+U_{ni} \right)+W_{\mathrm{non-conservative}}$$
$$K_f - K_i = \sum_n(W_n)+W_{\mathrm{non-conservative}}$$
$$\Delta K = W_{\mathrm{all}}$$
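As a numerical sanity check of $\Delta K = W_{\mathrm{all}}$ with a deliberately time-varying force (all numbers below are arbitrary choices): for $F(t) = F_0\sin t$ acting on mass $m$ starting at $v_0$, Newton's second law gives $v(t) = v_0 + (F_0/m)(1-\cos t)$, and the work $\int F\,v\,dt$ computed by quadrature matches the kinetic-energy change.

```python
import math

m, F0, v0, T = 2.0, 3.0, 1.0, 5.0   # arbitrary mass, force amplitude, etc.

def v(t):
    # Exact velocity from m dv/dt = F0 sin(t)
    return v0 + (F0 / m) * (1.0 - math.cos(t))

def power(t):
    # Rate of doing work, P = F(t) * v(t)
    return F0 * math.sin(t) * v(t)

# Trapezoid-rule work integral W = int_0^T P dt
N = 200_000
h = T / N
W = (power(0.0) + power(T)) / 2.0 * h + sum(power(i * h) for i in range(1, N)) * h

dK = 0.5 * m * (v(T) ** 2 - v0 ** 2)
print(W, dK)   # agree to high precision
```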
If the kinetic energies at two locations along a path are the same, then $\Delta K=0$ and the net work done by all external forces acting on the object while traveling between those two points will also be zero (not including thermodynamic effects). The variety of changes to velocities and speeds between those two points are irrelevant. | {
"domain": "physics.stackexchange",
"id": 46011,
"tags": "newtonian-mechanics, energy-conservation, work"
} |
Master Locksmith | Question: Description
Master Locksmith has just finished the work of his life: a combination
lock so big and complex that no one will ever open it without knowing
the right combination. He's done testing it, so now all he has to do
is put the lock back into a neutral state, as to leave no trace of the
correct combination.
The lock consists of some number of independently rotating discs. Each
disc has k numbers from 0 to k-1 written around its circumference.
Consecutive numbers are adjacent to each other, i.e. 1 is between
0 and 2, and because the discs are circular, 0 is between k-1
and 1. Discs can rotate freely in either direction. On the front of
the lock there's a marked bar going across all discs to indicate the
currently entered combination.
Master Locksmith wants to reset the lock by rotating discs until all
show the same number. However, he's already itching to start work on
an even greater and more complicated lock, so he's not willing to
waste any time: he needs to choose such a number that it will take as
little time as possible to reach it. He can only rotate one disc at a
time, and it takes him 1 second to rotate it by one position. Given
k and the initialState of the lock, find the number he should
choose. If there are multiple possible answers, return the smallest
one.
Example
For k = 10 and initialState = [2, 7, 1]
the output should be masterLocksmith(k, initialState) = 1
It takes 1 second for the first disc to reach 1 (2 → 1). It takes 4
seconds for the second disc to reach 1 (7 → 8 → 9 → 0 → 1). The
third disc is already at 1. The whole process can be completed in
5 seconds. Reaching any other number would take longer, so this is
the optimal solution.
Constraints
Guaranteed constraints:
3 ≤ k ≤ 10 ** 14
2 ≤ initialState.length ≤ 10**4
0 ≤ initialState[i] < k for all valid i
Code
from operator import itemgetter
from collections import defaultdict
def masterLocksmith(k, initial_state):
occurence = defaultdict(int)
for i in initial_state:
occurence[i] += 1
num_set = set(initial_state)
num_set.add(0)
shortest = {j: 0 for j in num_set}
for i in occurence:
for j in num_set:
shortest[j] += min(abs(i-j), k - abs(i-j)) * occurence[i]
return min([[k, shortest[k]] for k in shortest], key=itemgetter(1,0))[0]
The TLEs were really hard; I've tried some optimization, but the best I could do was still \$ O(n * k) \$, which was not enough to complete the challenge. I am less concerned about readability and more about how this can be sped up.
Answer:
def masterLocksmith(k, initial_state):
...
for i in occurence:
for j in num_set:
shortest[j] += min(abs(i-j), k - abs(i-j)) * occurence[i]
return min([[k, shortest[k]] for k in shortest], key=itemgetter(1,0))[0]
The aliasing of k is not very helpful for readability; that last line is hard enough to decipher without confusion about the names. Another answer observed that using a running minimum would reduce memory usage and therefore possibly performance; in my opinion it would also be more readable.
from collections import defaultdict
...
occurence = defaultdict(int)
for i in initial_state:
occurence[i] += 1
There's a specialised class for this:
from collections import Counter
...
occurence = Counter(initial_state)
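Putting the Counter and running-minimum suggestions together, a cleaned-up version might look like the sketch below (still quadratic in the number of distinct values, not the $O(n \lg n)$ sweep; iterating candidates in sorted order makes the smallest-on-tie rule fall out naturally):

```python
from collections import Counter

def master_locksmith(k, initial_state):
    counts = Counter(initial_state)
    # Candidates are the dial values plus 0 as the tie-breaking fallback,
    # the same candidate set the original code used.
    best_target, best_cost = None, None
    for target in sorted(set(counts) | {0}):
        cost = sum(min(abs(v - target), k - abs(v - target)) * n
                   for v, n in counts.items())
        if best_cost is None or cost < best_cost:
            best_target, best_cost = target, cost
    return best_target

print(master_locksmith(10, [2, 7, 1]))   # 1
```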
the best I could solve this in was still \$O(n * k)\$
I concur with Quuxplusone that it's actually \$O(n^2)\$. Looking at the small example case by hand, I think there's a reasonably straightforward way to do it in \$O(n \lg n)\$ time. This is just a sketch: I haven't implemented it.
Basically the task is to add a series of functions which look something like
^
/ \
/ \
\ /
\ /
v
Note that there are two points of inflexion: at the initial value, and opposite the initial value. (Correctly handling odd vs even values of k will be a minor complication). Also note that a sum of piecewise linear functions is piecewise linear.
So you can first calculate in \$O(n)\$ time the score if the target number is 0 and the gradient of the sum at \$0\$. Then sort a list of inflexion points in \$O(n \lg n)\$ time and iterate through them, calculating the score by extrapolation from the previous one and the new gradient. Finally extrapolate to k-1. | {
"domain": "codereview.stackexchange",
"id": 31449,
"tags": "python, python-3.x, programming-challenge, time-limit-exceeded, combinatorics"
} |
What's actually inside these EEG values? | Question: I have opened the file in Matlab and am using MNE Python too. This is a processed EEG file in Matlab data format instead of raw EEG (Screenshot 1).
Screenshot 1: struct of the 801_1_PD_REST
Now the problem is that the values inside EEG in the Matlab data file are very large (see screenshot 2). What are their units? I have read on the internet that microvolts are generally used for EEG analysis, but these values are far too large to be microvolts, so what are they exactly? Even in the organized data file, the values don't look like voltages (see screenshot 3)
Screenshot 2: Inside the 801_1_PD_REST
Screenshot 3: Organized data with eyes closed
I have also opened some raw EEG files with MNE in Python (https://neuraldatascience.io/7-eeg/mne_data.html).
When accessing the data, the unit of every channel's voltage is already specified (see screenshot 4), and the data inside also looks like EEG voltages (see screenshot 5). But in the processed data, what are those values? I don't understand; they can't be voltages, right? Also, since the unit in screenshot 4 is microvolts, does a value of -7.68e-6 mean -7.68 microvolts or -7.68e-6 microvolts?
Screenshot 4: Every channel's unit is specified as microvolts and in the data actual voltages in raw eeg from IOWA out of sample dataset
Printing out the first 10 data points of the 63rd channel (array indexing from 0)
Dataset Description:
Methods: We included a total of 41 PD patients and 41 demographically-matched controls from New Mexico and Iowa. Data for all participants from New Mexico (27 PD patients and 27 controls) were used to evaluate in-sample LEAPD performance, with extensive cross-validation. Participants from Iowa (14 PD patients and 14 controls) were used for out-of-sample tests. Our method utilized data from six EEG leads which were as little as 2 min long.
Paper Link: Linear predictive coding distinguishes spectral EEG features of Parkinson's disease
Dataset Link: Available Data
I am working on detecting PD from EEG signals as my thesis work.
Answer: Accessing the data requires a login that I don't have, so here is a wild guess.
It looks like the Matlab data is indeed in $\mu V$ but with a very large DC bias (which is fairly common for EEG signals). I suggest looking at the data after subtracting the mean for starters.
It's also fairly normal to leave the bias in the signal as the offset may drift over time and having access to the raw data allows applying more sophisticated bias removal algorithms.
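A minimal sketch of the mean-subtraction suggested above, on a synthetic channel (plain Python; the numbers are made up to mimic a large offset in microvolts):

```python
def demean(channel):
    """Subtract the channel mean: a crude DC-bias removal."""
    mu = sum(channel) / len(channel)
    return [x - mu for x in channel]

raw = [4000.2, 4001.7, 3999.1, 4000.9, 3998.4]   # big offset, small signal
clean = demean(raw)
print(clean)   # values now hover around 0
```

Real pipelines typically use a high-pass filter rather than a single global mean, since the offset drifts over time.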
EDIT:
I'm not sure what exactly your problem is. The data reads fine in Matlab and looks reasonable. As expected it's "noisy" with a large bias and other potential contaminants so some cleanup is likely required, but the data file itself seems perfectly fine and easy to interpret. | {
"domain": "dsp.stackexchange",
"id": 12137,
"tags": "eeg"
} |
Hawking Radiation and Curvature | Question: From a simple point of view in GR/QM we take a virtual pair creation and presumable they reunite shortly in flat space-time probably representable by a space-time warpage that generates geodesic closure (?); ie. quantum mechanics. It is represented that the geodesics around a black hope are such that even the small separation leads to different space-time paths.
My problem with this model is that any space-time curvature should also allow this to happen; even around us on earth. Albeit with low likelihood or intensity. If this were true then all of space-time would be "relaxing" to a flat metric. Perhaps electrons and such are too big but neutrinos have a far smaller threshold due to smaller mass.
Is this reasoning reasonable?
Answer: Lets take it one step at a time.
All bodies radiate with black body radiation, which means that in a vacuum they would cool down to the ambient temperature of the Cosmic Microwave Background radiation, which is a black body radiation characterized by a temperature of 2.7260±0.0013 K.
Now a black hole is a construct specific to general relativity, and classically it should not radiate at all, because everything within the event horizon cannot leave the black hole classically.
In 1975 Hawking published a shocking result: if one takes quantum theory into account, it seems that black holes are not quite black! Instead, they should glow slightly with "Hawking radiation", consisting of photons, neutrinos, and to a lesser extent all sorts of massive particles. This has never been observed, since the only black holes we have evidence for are those with lots of hot gas falling into them, whose radiation would completely swamp this tiny effect. Indeed, if the mass of a black hole is M solar masses, Hawking predicted it should glow like a blackbody of temperature
6 × 10^-8/M kelvins.
This is a very small number, and the energy of the individual photons is so low that no particle of the standard model could materialize out of this energy unless it is a micro-black hole.
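The quoted figure can be checked directly from Hawking's formula $T = \hbar c^3/(8\pi G M k_B)$; for one solar mass this sketch gives about $6.2\times10^{-8}$ K (constants typed in by hand, so treat the last digits loosely):

```python
import math

hbar = 1.054571817e-34    # J s
c = 2.99792458e8          # m / s
G = 6.67430e-11           # m^3 / (kg s^2)
kB = 1.380649e-23         # J / K
M_sun = 1.989e30          # kg

T = hbar * c**3 / (8.0 * math.pi * G * M_sun * kB)
print(T)   # ~6.2e-8 kelvin for a one-solar-mass black hole
```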
The argument for the radiation comes from an effective field theory; as gravity has not been consistently quantized, it is a heuristic argument to talk about virtual pairs, one falling into the black hole and the other leaving, the energy supplied by the gravitational field of the black hole.
Here is a heuristic image of the Feynman diagrams, heuristic because it misses the gravitational vertex that will supply the energy for the reality of the escaping particle.
An analogous image could be devised for escaping photons, from a two-photon diagram.
My problem with this model is that any space-time curvature should also allow this to happen; even around us on earth.
To speak of Hawking radiation we have to go quantum. Feynman diagrams need vertices with precise rules of calculating the probabilities. In the sense you are asking it, any loss of the invariant mass of an object in space changes the curvature. Since there is no event horizon for small objects the radiation that will involve exchanges of gravitons will join the black body radiation from the electromagnetic interactions and it will not be separable. Due to the very small value of the gravitational couplings with respect to electromagnetic ones the effect will not be measurable, in my opinion. At least not at the level of knowledge and detectors we have now.
One could think of measuring the black body spectrum of the moon, for example, and Jupiter, and comparing the discrepancy from a purely electromagnetic black body, but there will be so many assumptions entering the model, and the difference so small, that, imo, no definitive answer can be given. | {
"domain": "physics.stackexchange",
"id": 22422,
"tags": "curvature, hawking-radiation"
} |
About Feynman's treatment of entropy | Question: Feynman said in this chapter that if a system absorbs (rejects) an amount of heat $d Q$ at a temperature $T$, then we say the entropy of the system increased (decreased) by an amount $dS=d Q /T$. And he previously showed that for a reversible engine operating between two temperatures, the entropy lost from the hotter reservoir is equal to the entropy gained by the cold reservoir
$$Q_{h}/T_{h}=S=Q_{c}/T_{c}.$$
So the entropy here is some quantity related to changes of the system: heat flows out of our system at a given temperature, we lose some entropy. He then said that when a system goes from state $a$ with temperature $T_{a}$ and volume $V_{a}$ to state $b$ with temperature $T_b$ and volume $V_b$, we can write the change of entropy as $$S_b-S_a=\int_{a}^{b} \frac{dQ}{T}.$$
What does $S_{a}$, or $S_b$, represent in this case? In the first case entropy was something flowing in and out and was connected to the amounts of heat leaving or entering the system. Now, entropy is connected only to the volume and temperature of the system, regardless of heat flows. Aren't these two interpretations at odds with each other?
A heat $Q_1$ at temperature $T_1$ is “equivalent” to $Q_2$ at $T_2$ if
$Q_1 / T_1 = Q_2 / T_2$, in the sense that as one is absorbed the other is delivered. This
suggests that if we call $Q/T$ something, we can say: in a reversible process as
much $Q/T$ is absorbed as is liberated; there is no gain or loss of $Q/T$. This $Q/T$
is called entropy.
We can move around on a $pV$ diagram all over the place, and go from one condition to another. In other words, we could say the gas is in a certain condition $a$,
and then it goes over to some other condition, $b$, and we will require that this
transition, made from $a$ to $b$, be reversible. Now suppose that all along the path
from $a$ to $b$ we have little reservoirs at different temperatures, so that the heat $dQ$ removed from the substance at each little step is delivered to each reservoir at the temperature corresponding to that point on the path. Then let us connect all these reservoirs, by reversible heat engines, to a single reservoir at the unit temperature. When we are finished carrying the substance from $a$ to $b$, we shall bring all the reservoirs back to their original condition. Any heat $dQ$ that has been absorbed from the substance at temperature $T$ has now been converted by
a reversible machine, and a certain amount of entropy $dS$ has been delivered at
the unit temperature as follows: $$dS=d Q /T.$$
Let us compute the total amount of entropy which has been delivered. The
entropy difference, or the entropy needed to go from $a$ to $b$ by this particular
reversible transformation, is the total entropy, the total of the entropy taken out
of the little reservoirs, and delivered at the unit temperature:
$$S_a-S_b=\int_{a}^{b} \frac{dQ}{T}.$$
Answer:
Feynman said in this chapter that if a system absorbs (rejects) an amount of heat $d Q$ at a temperature $T$, then we say the entropy of the system increased (decreased) by an amount $dS=d Q /T$.
If you re-read your link, you will see that Feynman did not say that. What you left out of your description is that Feynman explicitly specified that his equation applied to a reversible path. [The difference between $Q$ and $Q_{rev}$ is not a minor typo. Rather, it is crucial to understanding the issues you raise in your question.]
I.e., here is what he is saying:
$$dS=d Q_{\textbf{rev}} /T$$
[You only mention the reversibility constraint as applying to his equations for the efficiency of heat engines.]
Likewise, I'm afraid this is also incorrect, because it again leaves out the fact that Feynman added the key constraint of reversibility:
He then said that when a system goes from state $a$ with temperature $T_{a}$ and volume $V_{a}$ to state $b$ with temperature $T_b$ and volume $V_b$, we can write the change of entropy as $$S_a-S_b=\int_{a}^{b} \frac{dQ}{T}.$$
Instead, Feynman's message is that we can calculate the entropy change associated with a process if we can connect the initial and final states by some reversible path (of which there are an infinite number), and integrate over that path, i.e.:
$$S_a-S_b=\int_{a}^{b} \frac{dQ_\textbf{rev}}{T}.$$
Entropy is a property of the system. It is a state function, and the change in the entropy of a system depends only on the difference between the initial and final states. The change in entropy is thus independent of the path through which that change is made; it doesn't matter whether the change is made reversibly or irreversibly—for a given initial and final state, the change in entropy will always be the same.
Heat flow, by contrast, is not a state function—i.e., it is not a property of the system. Rather (like work), it is a property of the path that connects changes in the system. Thus, if we use two different paths to connect an initial and final state, the heat and work flows along these paths can be different, even though the initial states for both paths are the same, and the final states also are the same.
Having said that, if we constrain a path in a very special way—namely, if we constrain the path to be reversible—we can calculate the change in entropy by integrating $\frac{dQ_\textbf{rev}}{T}$ along that path. | {
"domain": "physics.stackexchange",
"id": 64278,
"tags": "thermodynamics, entropy"
} |
Fast symmetric key cryptography class | Question: I am trying to build a fast cryptography algorithm. The algorithm works fine, but I am worried if there are any potential flaws that might make the algorithm vulnerable to any kind of attack. Here is my code.
Encryptor.cpp
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
using namespace std;
class Encryptor
{
private:
unsigned char *key; //256 byte
unsigned char key16[16]; //16 byte
public:
Encryptor(unsigned char *k)
{
key = k;
for (int i = 0; i < 256; i += 16)
{
for (int j = 0; j < 16; j++)
{
if (i < 16)
{
key16[j] = key[j];
}
else
{
key16[j] = key16[j] ^ key[i + j];
}
}
}
srand(time(0));
}
string encrypt(string txt)
{
int totalRounds = (txt.size() / 256);
if (txt.size() % 256)
totalRounds++;
string cipher(totalRounds * 16 + txt.size(), 0);
for (int i = 0; i < totalRounds; i++)
{
unsigned char randKey[16];
int txtIndex = i * 256;
int cipherIndex = i * (16 + 256);
int txtSize = (i == (totalRounds - 1)) ? txt.size() % 256 : 256;
for (int j = 0; j < 16; j++)
{
randKey[j] = random(1, 254);
cipher[cipherIndex] = key16[j] ^ randKey[j];
cipherIndex++;
}
for (int j = 0; j < txtSize; j++)
{
cipher[cipherIndex] = key[j] ^ randKey[j % 16] ^ txt[txtIndex];
cipherIndex++;
txtIndex++;
}
}
return cipher;
}
string decrypt(string cipher)
{
int totalRounds = (cipher.size() / (256 + 16));
if (cipher.size() % (256 + 16))
totalRounds++;
string txt(cipher.size() - totalRounds * 16, 0);
for (int i = 0; i < totalRounds; i++)
{
unsigned char randKey[16];
int txtIndex = i * 256;
int cipherIndex = i * (16 + 256);
int txtSize = (i == (totalRounds - 1)) ? (cipher.size() % (256 + 16)) - 16 : 256;
for (int j = 0; j < 16; j++)
{
randKey[j] = cipher[cipherIndex] ^ key16[j];
cipherIndex++;
}
for (int j = 0; j < txtSize; j++)
{
txt[txtIndex] = cipher[cipherIndex] ^ key[j] ^ randKey[j % 16];
cipherIndex++;
txtIndex++;
}
}
return txt;
}
int random(int lower, int upper)
{
return (rand() % (upper - lower + 1)) + lower;
}
};
main.cpp
#include <iostream>
#include "Encryptor.cpp"
using namespace std;
int main()
{
unsigned char key[256] = {239, 222, 80, 163, 48, 26, 182, 101, 123, 51, 145, 28, 106, 157, 105, 1, 51, 129, 222, 124, 80, 254, 118, 220, 208, 75, 225, 127, 180, 192, 125, 149, 22, 140, 218, 162, 89, 45, 237, 250, 71, 85, 245, 75, 59, 122, 146, 95, 68, 130, 33, 62, 124, 11, 203, 252, 72, 141, 140, 12, 241, 218, 89, 147, 58, 124, 209, 177, 71, 254, 201, 3, 166, 10, 179, 89, 194, 72, 150, 32, 97, 197, 119, 50, 185, 11, 202, 164, 175, 115, 239, 113, 146, 7, 84, 62, 49, 124, 25, 108, 111, 107, 250, 168, 75, 137, 87, 219, 115, 242, 237, 23, 79, 53, 95, 45, 180, 59, 243, 138, 37, 219, 174, 13, 188, 19, 62, 104, 176, 154, 183, 242, 177, 19, 215, 42, 197, 88, 149, 246, 40, 54, 184, 31, 187, 9, 115, 152, 128, 165, 116, 105, 179, 242, 145, 195, 250, 153, 139, 247, 96, 51, 225, 237, 86, 97, 97, 196, 146, 67, 73, 88, 30, 135, 192, 29, 64, 189, 123, 95, 152, 22, 31, 5, 71, 38, 136, 6, 68, 247, 93, 206, 200, 229, 243, 140, 11, 137, 60, 197, 22, 92, 118, 44, 3, 47, 121, 249, 88, 27, 101, 242, 222, 36, 112, 45, 188, 46, 170, 201, 244, 90, 115, 224, 88, 157, 109, 136, 228, 134, 186, 124, 154, 3, 78, 49, 225, 57, 249, 172, 103, 44, 74, 84, 158, 48, 139, 185, 207, 9, 58, 143, 211, 177, 62, 32};
Encryptor e(key);
for (int i = 0; i < 100; i++)
{
string c = e.encrypt("my secret");
cout << "cipher: " << c << endl;
cout << "After decryption: " << e.decrypt(c) << endl;
}
return 0;
}
Algorithm:
the user provides a 256-byte key in which no byte value may be 0 or 255
a 16-byte internal key is generated from the user-provided key
encryption:
plain text is processed in 256-byte blocks (same as the key length) except the last one which depends on the length of the plain text.
a 16-byte random key is generated for every block, where the value of each byte is between 1 and 254.
for each plain text block, an additional 16 bytes are added at the beginning of the cipher text block, increasing the cipher text block size to 256+16 bytes
the first 16 bytes of each cipher text block contains the XOR result of the block random key and the internal key
the 17th byte of the cipher text = key[first byte] xor random key[first byte] xor plain text block[first byte]
the 18th byte of the cipher text = key[second byte] xor random key[second byte] xor plain text block[second byte]
....
when the random key reaches its last byte, as it is shorter than the block size, it repeats from the beginning.
The decryption process is reverse of the encryption process.
Answer: Here are a number of things you could do to improve the code.
Separate interface from implementation
The interface goes into a header file and the implementation (that is, everything that actually emits bytes including all functions and data) should be in a separate .cpp file. The reason is that you might have multiple source files including the .h file but only one instance of the corresponding .cpp file. In other words, split your existing Encryptor.cpp file into a .h file and a .cpp file.
Use C++-style includes
Instead of including stdio.h you should instead use #include <cstdio>. The difference is in namespaces as you can read about in this question.
Don't abuse using namespace std
Putting using namespace std at the top of every program is a bad habit that you'd do well to avoid. Know when to use it and when not to (as when writing include headers).
Make sure you have all required #includes
The code uses std::string but doesn't #include <string>. Also, carefully consider which #includes are part of the interface (and belong in the .h file) and which are part of the implementation per the earlier advice.
Don't use unnecessary #includes
The code has #include <stdio.h> but nothing from that include file is actually in the code. For that reason, that #include should be eliminated.
Don't use std::endl if you don't really need it
The difference between std::endl and '\n' is that '\n' just emits a newline character, while std::endl actually flushes the stream. This can be time-consuming in a program with a lot of I/O and is rarely actually needed. It's best to only use std::endl when you have some good reason to flush the stream and it's not very often needed for simple programs such as this one. Avoiding the habit of using std::endl when '\n' will do will pay dividends in the future as you write more complex programs with more I/O and where performance needs to be maximized.
Use a better random number generator
You are currently using
(rand() % (upper - lower + 1)) + lower;
There are a number of problems with this approach. This will generate lower numbers more often than higher ones -- it's not a uniform distribution. Another problem is that the low order bits of the random number generator are not particularly random, so neither is the result. On my machine, there's a slight but measurable bias toward 0 with that. See this answer for details.
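The modulo bias is easy to demonstrate. A small Python sketch (shrinking RAND_MAX to a made-up tiny value so the skew is visible by inspection; real implementations have RAND_MAX of at least 32767):

```python
import collections

# Model rand() with a small RAND_MAX so the modulo bias is easy to see.
RAND_MAX = 7                     # eight equally likely raw values: 0..7
n = 3                            # map them into 0..2 with `raw % n`
counts = collections.Counter(raw % n for raw in range(RAND_MAX + 1))
print(dict(counts))              # {0: 3, 1: 3, 2: 2}: 2 is under-represented
```

Since 8 is not divisible by 3, the low outputs inevitably occur more often than the high ones; the same effect (proportionally smaller) happens for any real RAND_MAX.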
Study cryptography
If you're interested in this sort of thing, it would be good for you to study cryptography. One book you might like is Modern Cryptanalysis by Christopher Swenson. It's quite understandable and may help you answer your own questions. Here's a brief analysis of your scheme:
First, some nomenclature. I'm going to be referring to blocks (16-byte chunks) hereafter. Let \$m\$ be a single block message we're encrypting, and the 256-byte key is \$k[0] \cdots k[15]\$. Let's also say the random key is \$r\$ and your key16 is \$b\$. Using standard notation, \$\oplus\$ is the block-sized exclusive or operator. Your scheme does this:
$$ b = k[0] \oplus k[1] \oplus \cdots \oplus k[14] \oplus k[15] $$
The generated message has two parts which I'll call \$p\$ and \$q\$:
$$ p = b \oplus r $$
$$ q = k[0] \oplus r \oplus m $$
If we combine those into a new quantity \$m'\$, we get this:
$$ m' = p \oplus q = b \oplus r \oplus k[0] \oplus r \oplus m $$
$$ m' = p \oplus q = b \oplus k[0] \oplus m $$
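The cancellation can be checked numerically. A Python sketch (the 16-byte values below are made-up stand-ins for one block of the scheme's quantities):

```python
import secrets

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Made-up stand-ins for one 16-byte block (illustrative only):
k0 = secrets.token_bytes(16)   # first 16 bytes of the 256-byte key, k[0]
b  = secrets.token_bytes(16)   # key16, the XOR of all sixteen key blocks
m  = b"attack at dawn!!"       # one 16-byte message block

recovered = []
for _ in range(2):             # encrypt the same block under two random keys
    r = secrets.token_bytes(16)
    p = xor(b, r)              # first ciphertext part: key16 ^ randKey
    q = xor(xor(k0, r), m)     # second part: key ^ randKey ^ plaintext
    recovered.append(xor(p, q))  # an attacker computes p ^ q; r cancels
# Both entries equal b ^ k[0] ^ m: the random key contributed nothing.
```

Running this with fresh random keys always yields the same value of \$p \oplus q\$ for a fixed key and message.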
So we can easily already see that the random key has no useful effect. Further, we can say that \$b' = b \oplus k[0]\$, so from the definition of \$b\$ we get:
$$ b' = k[1] \oplus k[2] \oplus \cdots \oplus k[14] \oplus k[15] $$
And so \$m' = b' \oplus m\$ and we can now see that the first block of the key also doesn't need to be derived. All that we need to get is \$b'\$ and we can decode any message encrypted with that key. If we know or can guess that the encrypted text is ASCII, for example, it's not at all hard to guess at the top few bits of each of the bytes of \$b'\$ and in essence, your scheme is no better (and no different) than choosing a randomly generated single key and exclusive-or-ing it with the message. That's a very, very weak scheme that even amateur cryptographers like me would have little difficulty breaking. | {
"domain": "codereview.stackexchange",
"id": 36449,
"tags": "c++, cryptography"
} |
Lattice $SU(2)$ Higgs model in unitary gauge | Question: I'm currently reading the book Quantum Fields on a Lattice by I. Montvay and G. Münster, and in section 6.1 they describe lattice actions for various Higgs models. I got confused at the point where they describe the unitary-gauge action for the $SU(2)$-Higgs theory.
In (6.16) they say that the Higgs field can be represented as $φ_x = ρ_xα_x$, $ρ_x \ge 0$, $α_x\in SU(2)$, with $φ_x^+φ_x=ρ_x^2\,1$. Here $ρ_x$ is the length of the Higgs field at a given point, and $α_x$ carries the angular components of the Higgs field.
And in (6.24) they define the unitary gauge $α_x' = 1$, $φ_x' = \begin{pmatrix} ρ_x & 0 \\ 0 & ρ_x \\ \end{pmatrix}$. The Higgs field becomes a diagonal matrix with entries equal to the Higgs length (modulus), thus acquiring a preferred direction and breaking the gauge symmetry.
But what is the meaning of this $ρ_x$? It is almost always non-zero; it can equal zero only if the Higgs field $φ_x$ itself vanishes at that point. And you can measure it without going to unitary gauge, because it is a gauge-invariant quantity. So what should one use to measure the Higgs VEV or the Higgs propagator, for example?
Answer: Your answer is here, up to language and normalizations. That is, your authors adopt the 2×2 hermitian matrix representation of the complex Higgs doublet $\phi$, by arraying it and its conjugate in entries of a transposed two vector, (Longhitano's thesis):
$$
\varphi \equiv (\tilde{\phi}, \phi)=(i\tau_2 \phi^*, \phi). \tag{6.1,6.2}
$$
I have suppressed throughout the entirely superfluous subscripts x.
This matrix may be written as
$$
\varphi = \rho \alpha , \leadsto \\
\varphi^\dagger \varphi = \rho^2 \alpha^\dagger \alpha = \rho^2 \mathbb{I}, \tag{6.15, 6.16}
$$
since the $\alpha$'s are unitary and unimodular. The latter matrix is L- and hypercharge gauge invariant, since it is only variant under the global R (custodial) symmetry!
The ρ is the unshifted radial variable of the Higgs field, the only one uninvolved in the Goldstone phenomenon, basically the v.e.v. plus the "debris" Higgs particle excitation. Tastefully, the authors call it the σ, mapping it to the legendary O(4)/O(3) σ-model of Gell-Mann and Levy... $\langle \rho^2\rangle= 1$ so $\langle \varphi^\dagger \varphi\rangle= \mathbb{I}$.
In unitary gauge, $ \alpha =\mathbb{I}$, so
$\langle \varphi\rangle= \mathbb{I}$. | {
"domain": "physics.stackexchange",
"id": 97297,
"tags": "quantum-field-theory, gauge-theory, symmetry-breaking, lattice-model, lattice-gauge-theory"
} |
Is steel air (argon) tight? | Question: I need to build an air tight system that can hold gas (Argon Gas). I see that most pipes use steel. Is steel tight enough to hold that gas? Argon is very dense, so I think it should have no problem.
Answer: Steel is routinely used for high pressure storage of industrial argon. For practical purposes it is certainly 'gas tight' although there may be a certain amount of atomic level diffusion into the material in the long term but this is more about contamination than actual leakage.
As mentioned in the comments, the quality of mechanical and welded joints is also critical for pressurised gas seals. There are numerous standards which deal with this in a variety of circumstances. In the UK, BSP threaded fittings are commonly used for most gases, including argon, as well as a variety of compression and quick-release fittings for low-pressure applications.
For prototype applications coded TIG welding will generally be preferred for fabrication of the vessel itself and attachment points for fixtures and fittings in terms of both gas tightness and structural integrity. Mass produced gas containers usually use coded automated processes for pressure vessel fabrication in conjunction with in-line quality control testing. | {
"domain": "engineering.stackexchange",
"id": 946,
"tags": "gas, pipelines"
} |
Texas Hold em Poker Hand recognition algorithm and implementation | Question: I am designing an in-depth poker game. I am first focusing on recognizing the strength of a hand given the set of cards.
Is the following algorithm suitable for the stated purpose?
Am I using correct OOP design principles and implementation in my code?
Algorithm:
Go from a top down approach checking for the following in order:
Is: the hand a royal flush
If not: is the hand a straight flush
If not: is the hand a four of a kind
If not: is the hand a full house
Eventually if none are met, it will just return the high card.
Checking for each individual instance:
Royal flush: is it a flush and is it a straight and are all the cards picture cards
Is a straight flush: is it a straight and is it a flush
Is it a 4 of a kind: is the same card repeated 4 times
Is it a full house: is there a 3 of a kind and a 2 of a kind
Is it a flush: are there 5 cards with the same suit
Is it a straight: are there 5 cards in a row with a common difference of 1
Is it a 3 of a kind: are there 3 repeated cards
Is it a two pair: are there two pairs
Is it a pair: are there 2 repeated cards
package main;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.stream.Stream;
/**
*
* @author Tamir
*/
public class Hand {
private Card[] hand = new Card[2];
public enum HandRank {
ROYAL_FLUSH,
STRAIGHT_FLUSH,
FOUR_OF_A_KIND,
FULL_HOUSE,
FLUSH,
STRAIGHT,
THREE_OF_A_KIND,
TWO_PAIR,
PAIR,
HIGH_CARD;
}
public Hand() {
}
public Hand(Card[] hand) {
this.hand = hand;
}
public Card[] getHand() {
return hand;
}
public void setHand(Card[] hand) {
this.hand = hand;
}
public void printHand() {
for (Card c : hand) {
System.out.println(c);
}
}
public HandRank determineHandRank(Card[] flop) {
if (isARoyalFlush(flop)) {
return HandRank.ROYAL_FLUSH;
} else if (isAStraightFlush(flop)) {
return HandRank.STRAIGHT_FLUSH;
} else if (isAFourOfAKind(flop)) {
return HandRank.FOUR_OF_A_KIND;
} else if (isAFullHouse(flop)) {
return HandRank.FULL_HOUSE;
} else if (isAFlush(flop)) {
return HandRank.FLUSH;
} else if (isAStraight(flop)) {
return HandRank.STRAIGHT;
} else if (isThreeOfAKind(flop)) {
return HandRank.THREE_OF_A_KIND;
} else if (isTwoPair(flop)) {
return HandRank.TWO_PAIR;
} else if (isPair(flop)) {
return HandRank.PAIR;
} else {
return HandRank.HIGH_CARD;
}
}
public boolean isARoyalFlush(Card[] flop) {
if (isAStraight(flop) && isAFlush(flop)) {
Card[] allCards = Stream.concat(Arrays.stream(flop), Arrays.stream(hand))
.toArray(Card[]::new);
boolean aceExists = false, kingExists = false, queenExists = false, jackExists = false, tenExists = false;
for (Card c : allCards) {
switch (c.getRank().getRank()) {
case "ACE":
aceExists = true;
break;
case "KING":
kingExists = true;
break;
case "QUEEN":
queenExists = true;
break;
case "JACK":
jackExists = true;
break;
case "TEN":
tenExists = true;
break;
}
}
return (aceExists && kingExists && queenExists && jackExists && tenExists);
} else {
return false;
}
}
public boolean isAStraight(Card[] flop) {
Card[] allCards = Stream.concat(Arrays.stream(flop), Arrays.stream(hand))
.toArray(Card[]::new);
Arrays.sort(allCards, byRank);
int noOfCardsInARow = 0;
int pos = 0;
boolean isAStraight = false;
while (pos < allCards.length - 1 && !isAStraight) {
if (allCards[pos + 1].getRank().getValue() - allCards[pos].getRank().getValue() == 1) {
noOfCardsInARow++;
if (noOfCardsInARow == 4) {
isAStraight = true;
} else {
pos++;
}
} else {
noOfCardsInARow = 0;
pos++;
}
}
return isAStraight;
}
public boolean isAFlush(Card[] flop) {
Card[] allCards = Stream.concat(Arrays.stream(flop), Arrays.stream(hand))
.toArray(Card[]::new);
int noOfClubs = 0;
int noOfSpades = 0;
int noOfHearts = 0;
int noOfDiamonds = 0;
for (Card c : allCards) {
switch (c.getSuit()) {
case "HEART":
noOfHearts++;
break;
case "SPADES":
noOfSpades++;
break;
case "CLUBS":
noOfClubs++;
break;
case "DIAMONDS":
noOfDiamonds++;
break;
}
}
return (noOfClubs == 5 || noOfSpades == 5 || noOfHearts == 5 || noOfDiamonds == 5);
}
private boolean isThreeOfAKind(Card[] flop) {
Card[] allCards = Stream.concat(Arrays.stream(flop), Arrays.stream(hand))
.toArray(Card[]::new);
int cardRepeats = 1;
boolean isThreeOfAKind = false;
int i = 0;
int k = i + 1;
while (i < allCards.length && !isThreeOfAKind) {
cardRepeats = 1;
while (k < allCards.length && !isThreeOfAKind) {
if (allCards[i].getRank().getValue() == allCards[k].getRank().getValue()) {
cardRepeats++;
if (cardRepeats == 3) {
isThreeOfAKind = true;
}
}
k++;
}
i++;
}
return isThreeOfAKind;
}
private boolean isTwoPair(Card[] flop) {
Card[] allCards = Stream.concat(Arrays.stream(flop), Arrays.stream(hand))
.toArray(Card[]::new);
int cardRepeats = 1;
int noOfCardRepeats = 0;
boolean isTwoPair = false;
int i = 0;
int k = i + 1;
while (i < allCards.length && !isTwoPair) {
cardRepeats = 1;
while (k < allCards.length && !isTwoPair) {
if (allCards[i].getRank().getValue() == allCards[k].getRank().getValue()) {
cardRepeats++;
if (cardRepeats == 2) {
cardRepeats = 1;
noOfCardRepeats++;
if (noOfCardRepeats == 2) {
isTwoPair = true;
}
}
}
k++;
}
i++;
}
return isTwoPair;
}
private boolean isPair(Card[] flop) {
Card[] allCards = Stream.concat(Arrays.stream(flop), Arrays.stream(hand))
.toArray(Card[]::new);
int cardRepeats = 1;
boolean isPair = false;
int i = 0;
int k = i + 1;
while (i < allCards.length && !isPair) {
cardRepeats = 1;
while (k < allCards.length && !isPair) {
if (allCards[i].getRank().getValue() == allCards[k].getRank().getValue()) {
cardRepeats++;
if (cardRepeats == 2) {
isPair = true;
}
}
k++;
}
i++;
}
return isPair;
}
public Comparator<Card> byRank = (Card left, Card right) -> {
if (left.getRank().getValue() < right.getRank().getValue()) {
return -1;
} else {
return 1;
}
};
private boolean isAFullHouse(Card[] flop) {
Card[] allCards = Stream.concat(Arrays.stream(flop), Arrays.stream(hand))
.toArray(Card[]::new);
Arrays.sort(allCards, byRank);
int noOfRepeats = 1;
boolean isThreeOfAKind = false;
boolean isTwoOfAKind = false;
for (int i = 0; i < allCards.length - 1; i++) {
if (allCards[i].getRank().getValue() == allCards[i + 1].getRank().getValue()) {
noOfRepeats++;
if (noOfRepeats == 3) {
isThreeOfAKind = true;
noOfRepeats = 1;
} else if (noOfRepeats == 2) {
isTwoOfAKind = true;
noOfRepeats = 1;
}
} else {
noOfRepeats = 1;
}
}
return (isTwoOfAKind && isThreeOfAKind);
}
public boolean isAFourOfAKind(Card[] flop) {
Card[] allCards = Stream.concat(Arrays.stream(flop), Arrays.stream(hand))
.toArray(Card[]::new);
int cardRepeats = 1;
boolean isFourOfAKind = false;
int i = 0;
int k = i + 1;
while (i < allCards.length && !isFourOfAKind) {
cardRepeats = 1;
while (k < allCards.length && !isFourOfAKind) {
if (allCards[i].getRank().getValue() == allCards[k].getRank().getValue()) {
cardRepeats++;
if (cardRepeats == 4) {
isFourOfAKind = true;
}
}
k++;
}
i++;
}
return isFourOfAKind;
}
private boolean isAStraightFlush(Card[] flop) {
if (isAFlush(flop) && isAStraight(flop)) {
return true;
} else {
return false;
}
}
public Card getHighCard(Card[] flop) {
Card[] allCards = Stream.concat(Arrays.stream(flop), Arrays.stream(hand))
.toArray(Card[]::new);
Arrays.sort(allCards, byRank);
return allCards[0];
}
public Card getHandHighCard() {
Arrays.sort(hand, byRank);
return hand[0];
}
}
Card object
/*
* To change this license header, choose License Headers in Project Properties.
* To change this template file, choose Tools | Templates
* and open the template in the editor.
*/
package main;
/**
*
* @author Tamir
*/
public class Card {
private String suit;
private Rank rank;
//isDealt: checks if the card is in play and out of the deck
private boolean isDealt;
public Card() {
}
public Card(String suit, Rank rank, boolean isDealt) {
this.suit = suit;
this.rank = rank;
this.isDealt = isDealt;
}
public String getSuit() {
return suit;
}
public void setSuit(String suit) {
this.suit = suit;
}
public Rank getRank() {
return rank;
}
public void setRank(Rank rank) {
this.rank = rank;
}
public boolean isIsDealt() {
return isDealt;
}
public void setIsDealt(boolean isDealt) {
this.isDealt = isDealt;
}
@Override
public String toString() {
return suit + " " + rank;
}
}
Rank object
package main;
/**
*
* @author Tamir
*/
public class Rank {
private int value;
private String rank;
public Rank() {
}
public Rank(int value, String rank) {
this.value = value;
this.rank = rank;
}
public int getValue() {
return value;
}
public void setValue(int value) {
this.value = value;
}
public String getRank() {
return rank;
}
public void setRank(String rank) {
this.rank = rank;
}
@Override
public String toString() {
String stg = "";
stg += this.rank + "(" + this.value + ")";
return stg;
}
}
Answer: Good job! Just some points:
Bugs
Royal Flush Checking - 10 S, J S, Q S, K S, 2 H as flop and 8 S, A S in the hand will return false if passed through isARoyalFlush(). What??? You seem to only be checking for Straights and Flushes with the flop.
Flush checking - If there is 6 Spades, or 6 Hearts or 6/7 whatever, then the isAFlush() method returns false. Change:
return (noOfClubs == 5 || noOfSpades == 5 || noOfHearts == 5 || noOfDiamonds == 5);
To:
return (noOfClubs >= 5 || noOfSpades >= 5 || noOfHearts >= 5 || noOfDiamonds >= 5);
Straight Flush Checking - Two things gone wrong here.
a) You're only checking for a Straight Flush in the flop. What if the Hand contributes to a Straight Flush?
b) Don't feel bad about this one too much; this is a common mistake that even I made when writing a Hand Evaluator (I figured it out and decided it was way too hard; so I gave up). 3 S, 4 S, 5 S, 6 S, 10 S as flop and 8 H, 7 H in the hand will return true (after you edit it so that it checks both the hand and the flop), even though you see that there is no Straight Flush.
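To make the failure concrete, here is a small Python sketch of that exact example (Python for brevity; the hypothetical helper `has_run` is the per-suit check the Java code would also need):

```python
# Flop 3S 4S 5S 6S 10S and hand 8H 7H: five spades are present (a flush)
# and 4-5-6-7-8 is present (a straight), but no five consecutive cards
# share one suit, so there is no straight flush.
cards = [(3, 'S'), (4, 'S'), (5, 'S'), (6, 'S'), (10, 'S'), (7, 'H'), (8, 'H')]

def has_run(ranks):
    """True if the sorted, deduplicated ranks contain 5 consecutive values."""
    rs = sorted(set(ranks))
    return any(rs[i:i + 5] == list(range(rs[i], rs[i] + 5))
               for i in range(len(rs) - 4))

is_flush = any(sum(1 for _, s in cards if s == suit) >= 5
               for suit in {s for _, s in cards})
is_straight = has_run([r for r, _ in cards])
naive = is_flush and is_straight          # what the reviewed code computes

# Correct check: look for a 5-card run *within a single suit*.
real = any(has_run([r for r, s in cards if s == suit])
           for suit in {s for _, s in cards})

print(naive, real)  # True False
```

The naive "flush AND straight" test reports a straight flush while the per-suit check correctly rejects it.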
Naming
public boolean isARoyalFlush(Card[] flop)
Why the "A" in the middle? I would remove it completely, as it both reduces readability and doesn't add anything to the meaning.
Same with all the other ones.
In isARoyalFlush(Card[] flop)
switch (c.getRank().getRank())
What??? .getRank().getRank() confuses me. Might want to do some naming changes. I would simply remove getRank() in Rank and use toString() instead:
@Override
public String toString() {
return rank;
}
And maybe even change the naming for rank.
Also, checking for a Straight beforehand is not necessary, as your code checks whether it is a flush and later checks whether it contains 10, J, Q, K, and A anyway. isStraight() will take about the same time to run as isRoyalFlush(), making the possible performance gain from a false evaluation not worth it.
Others
In isAStraightFlush(Card[] flop)
private boolean isAStraightFlush(Card[] flop) {
if (isAFlush(flop) && isAStraight(flop)) {
return true;
} else {
return false;
}
}
That could easily be:
private boolean isAStraightFlush(Card[] flop) {
return isAFlush(flop) && isAStraight(flop);
}
Though, as mentioned in the Bugs section, it doesn't really work.
In isThreeOfAKind(Card[] flop)
Hmm... Here you don't do isAThreeOfAKind(Card[] flop)...
Also, here:
while (i < allCards.length && !isThreeOfAKind) {
cardRepeats = 1;
while (k < allCards.length && !isThreeOfAKind) {
if (allCards[i].getRank().getValue() == allCards[k].getRank().getValue()) {
cardRepeats++;
if (cardRepeats == 3) {
isThreeOfAKind = true;
}
}
k++;
}
i++;
}
return isThreeOfAKind;
You can just as easily return in the inner if statement immediately, without going through the checks in the loop:
while (i < allCards.length) {
cardRepeats = 1;
while (k < allCards.length) {
if (allCards[i].getRank().getValue() == allCards[k].getRank().getValue()) {
cardRepeats++;
if (cardRepeats == 3) {
return true;
}
}
k++;
}
i++;
}
return false;
In isTwoPair(Card[] flop)
See all advice in isThreeOfAKind(Card[] flop).
In isPair(Card[] flop)
Again, see all advice in isThreeOfAKind(Card[] flop).
In isAFourOfAKind(Card[] flop)
Not the A again...
See second piece of advice in isThreeOfAKind(Card[] flop).
Good Job!
Hand Evaluators are very hard to implement. It's very courageous of you to tackle such a challenging task (as you can see, there are quite a few bugs). Good luck improving your code, and I hope to see a follow-up after the bugs are fixed! | {
"domain": "codereview.stackexchange",
"id": 34669,
"tags": "java, algorithm, playing-cards"
} |
Activity Selection and Matroid Theory | Question: Many people in different articles suggest that if an optimization problem has a greedy solution, the underlying structure must have the matroid property.
I was trying to understand this. So far, I have been able to prove it for:
Maximum sum of m integers among n integers.
Minimum spanning tree.
However, the classical greedy Activity Selection algorithm seems to lack both the independence-exchange and the base-exchange property.
Let,
E = {1-3, 2-4, 3-5, 4-6, 5-7}
Now, take two independent sets,
I = {2-4, 4-6} and J = {1-3, 3-5, 5-7}
There is no activity in J which can extend I, which violates the independence exchange property of a matroid, if I understood it correctly. Thus it is not a matroid and shouldn't admit a greedy algorithm. But this problem has a greedy solution.
Where am I wrong?
Answer: Suppose that $\mathcal{F}$ is a nonempty collection of subsets of a finite set $U$ satisfying the following axiom:
If $A \in \mathcal{F}$ and $B \subseteq A$ then $B \in \mathcal{F}$.
Consider the following algorithm, which gets as input a weight function $w\colon U \to \mathbb{R}_+$ (here $\mathbb{R}_+$ consists of all positive reals):
Set $S \gets \emptyset$.
While there exists an element $x \notin S$ such that $S \cup \{x\} \in \mathcal{F}$:
Let $x$ be an element of maximum weight among $\{ x : x \notin S \text{ and } S \cup \{x\} \in \mathcal{F} \}$.
Set $S \gets S \cup \{x\}$.
Return $S$.
We say that the algorithm is valid for $w$ if it returns a set $S$ maximizing $\sum_{x \in S} w(x)$ among all $S \in \mathcal{F}$.
Theorem. The algorithm is valid for all $w\colon U \to \mathbb{R}^+$ iff $\mathcal{F}$ is a matroid.
You can find the proof of this theorem in lecture notes and in textbooks.
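As a sketch, the algorithm above can be written directly against an independence oracle (Python; the uniform-matroid example at the end is purely illustrative):

```python
def greedy(universe, is_independent, w):
    """Generic matroid greedy: is_independent(S) tests membership in F."""
    S = set()
    while True:
        candidates = [x for x in universe
                      if x not in S and is_independent(S | {x})]
        if not candidates:
            return S
        S.add(max(candidates, key=w))   # heaviest extendable element

# Example: the uniform matroid U(2, 4), whose independent sets are
# exactly the subsets of size <= 2.
universe = {1, 2, 3, 4}
weights = {1: 5.0, 2: 1.0, 3: 7.0, 4: 3.0}
best = greedy(universe, lambda S: len(S) <= 2, weights.get)
print(best)  # {1, 3}: the two heaviest elements
```

For a matroid this returns a maximum-weight independent set; for a general downward-closed set system it can fail, which is the content of the theorem.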
The algorithm given above is a specific greedy algorithm. This specific greedy algorithm is optimal if and only if the set system is a matroid. However, the (informal) notion of greedy algorithms encompasses more than just this specific algorithm. The theorem above doesn't apply to these algorithms.
In the two good examples you consider, the corresponding greedy algorithms (or at least one of their variants) are instances of the algorithm above. The same isn't true for the algorithm for your bad example.
For a formalized notion of "greedy-like" algorithms, consult (Incremental) priority algorithms by Borodin, Nielsen, and Rackoff. You can see that it is much more general than the simple algorithm stated above.
Another relevant notion is that of greedoid, for which a theorem similar to the one stated above does hold (see Wikipedia for details). | {
"domain": "cs.stackexchange",
"id": 10964,
"tags": "greedy-algorithms, matroids"
} |
How many optical photons are in a room? | Question: I got interested in this question and came up with ~1 trillion particles. I would like to ask anyone interested to review my approach to the problem and to help with the side questions I got confused by.
Here is my take. My room is a few meters long, with a ~15 W light source. Assuming its efficiency is nearly 100% and the average photon wavelength is 500 nm (close to green, but rounded up for simplicity), it is trivial to find the number of photons emitted per second.
The next part of the problem is to determine the lifetime of a photon, and this is where I get confused. My reasoning is based on a red laser pointer experiment: one can see a light spot on the wall the pointer is aimed at, but the light does not reflect well enough to be seen on the opposite wall. Hence most of the photons get absorbed at the first wall, while some portion gets reflected (otherwise nothing could be seen by eye) and absorbed at the next obstacle. From this I conclude that a photon travels the room length once or twice; taking this as a 10 m path, I derive the photon's lifetime.
That is all I did to come up with the ~1×10^12 estimate.
Does anyone know whether this is anywhere close to reality?
I started with 15 W of light energy, which seems like quite a lot to me. If it all gets absorbed by the surfaces of my interior, I'd expect to notice some heating up, but that does not actually happen.
Taking the power of the light source as a starting point does not seem very trustworthy to me, since it requires knowing a few technical details about the source. Is there anything better to start the estimation from?
If I want to take the wider picture and count all photons (not only visible ones) in the room, is it correct to assume there are only the following sources to add up:
FM radio, TV
GPS, WiFi, cellular
thermal emission and sunlight
Answer: Welcome to the physics stack exchange!
Your estimate looks reasonable to me for an order of magnitude estimate including only visible light. Addressing your concerns:
Yes. I would expect your largest sources of error to be the crudeness of your estimate for the mean photon lifetime (light colored reflective rooms could result in much higher values etc.) and your simplification of the light spectrum to a single wavelength.
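The questioner's arithmetic is easy to reproduce; the sketch below uses the assumptions stated in the question (15 W fully converted to 500 nm light, a 10 m mean path before absorption) with rounded constants:

```python
h = 6.626e-34        # Planck constant, J*s
c = 3.0e8            # speed of light, m/s
power = 15.0         # lamp power, W, assumed fully converted to visible light
wavelength = 500e-9  # representative photon wavelength, m
mean_path = 10.0     # assumed distance a photon travels before absorption, m

photons_per_second = power * wavelength / (h * c)  # each photon carries h*c/wavelength joules
lifetime = mean_path / c                           # mean photon lifetime, s
n_photons = photons_per_second * lifetime          # steady-state photon count
```

This gives roughly 1.3 × 10^12 photons, in line with the ~10^12 estimate above.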
15W is not very much heating power at room temperature spread over a whole room. Your body is emitting more than 100W of heating power right now as you read this!
You could instead start with the brightness of illumination in the room, but that would require a light meter, and would begin to feel a lot like simply measuring the quantity you are trying to estimate. I like your way better.
No, there are a lot of sources of photons, and your list is far from complete. One prosaic source of large numbers of photons is the 60 Hz EM interference caused by the wiring in the walls of the room. Not much energy, but you are counting photons, not energy. Also, numerous virtual photons are exchanged every time two molecules collide. There is a huge quantity of electromagnetic interaction going on around us, and any electromagnetic energy exchange relies on photons in some way. My examples do not complete your list, but I hope they give a flavor for how monumental a task counting low energy photons could be.
Keep asking questions, and enjoy the journey! | {
"domain": "physics.stackexchange",
"id": 91502,
"tags": "thermodynamics, optics, photons, everyday-life, estimation"
} |
A rough implementation of `std::is_constructible` | Question: As a challenge/fun activity/task, I have implemented my own version of std::is_constructible.
Here is the source code:
#include <cbrainx/cbrainx.hpp>
#include <iostream>
template <typename T, typename ...V>
auto aux(long) -> cbx::false_type;
template <typename T, typename ...V>
static auto aux(int) -> cbx::type_switch_t<
cbx::branch<cbx::false_type,
cbx::detail::void_t<decltype(::new(cbx::declval<void *>()) T{cbx::declval<V>()...})>>,
cbx::otherwise<cbx::true_type>>;
template <typename T, typename ...V>
auto is_constructible() -> decltype(aux<T, V...>(0));
struct X {
X() {
std::cout << "constructed..." << std::endl;
}
private:
X(int) {
}
public:
X(const X &) = delete;
X(X &&) {}
~X() {
std::cout << "destroyed..." << std::endl;
}
static void xd() {
X{1};
std::cout << decltype(is_constructible<X, int>())::value << std::endl;
}
};
auto main() -> int {
std::cout << std::boolalpha;
std::cout << decltype(is_constructible<X, X>())::value << std::endl;
std::cout << decltype(is_constructible<X, X &>())::value << std::endl;
std::cout << decltype(is_constructible<X, X &&>())::value << std::endl;
return 0;
}
IMPORTANT:
cbx::false_type is equivalent to std::false_type. It has a value of type bool. Likewise cbx::true_type.
cbx::type_switch_t is equivalent to a switch clause, but for types, and evaluated statically at compile time.
cbx::branch is like the if branch. The first argument is the condition and must have a value of type bool. The second argument is the type itself.
cbx::otherwise is like the else branch.
Things I know
It does not work for private constructors. (I am okay with that!)
It's a rough implementation.
Things I want to know
Is this implementation correct?
Am I missing anything?
How close is it to the actual std::is_constructible?
How can I improve?
C++ Standard: 20
Answer: It would help to cleanly separate your is_constructible from the "test harness" code that follows it — either with a //----- comment, or by putting your thing in a namespace, or something.
Your thing is a function template, whereas the standard std::is_constructible is a type-trait (that is, a class template).
Without seeing the definitions of branch, otherwise, etc., it's hard to judge whether your code is even correct, let alone performant.
You name both your template parameter T and your parameter pack V... with singular names. It would be helpful to name the pack with a plural name, e.g. Vs... or Args....
Your unit tests take the form of std::cout << foo. Prefer to use static_assert(foo), so that the compiler will tell you whether your tests passed or failed — you shouldn't have to eyeball the terminal output to tell whether you implemented is_constructible right!
You should become familiar with the traditional SFINAE idioms. For a simple type-trait like is_constructible, where all you want to know is whether a particular expression is well-formed, the idiom is
template<class T, class = void>
struct is_fooable : std::false_type {};
template<class T>
struct is_fooable<T, decltype((
your-expression-here
), void())> : std::true_type {};
Or, since you tagged this question c++20, you could just use a requires-expression:
template<class T>
struct is_fooable : std::bool_constant<requires {
your-expression-here;
}> {};
In the specific case of is_constructible, the expression you care about is new T(args...). So you might write
template<class, class T, class... Args>
struct is_constructible_impl : std::false_type {};
template<class T, class... Args>
struct is_constructible_impl<decltype((
new T(std::declval<Args>()...)
), void()), T, Args...> : std::true_type {};
template<class T, class... Args>
struct is_constructible : is_constructible_impl<void, T, Args...> {};
Or:
template<class T, class... Args>
struct is_constructible : std::bool_constant<requires (Args&&... args) {
new T(static_cast<Args&&>(args)...);
}> {};
(Godbolt.)
By writing unit tests and comparing them to the standard std::is_constructible, we soon find a problem: std::is_constructible<int&, int&>::value is true, but our new T(args...) formulation yields false, because you can't new a reference type. Solving this problem is left as an exercise for the reader, because I'm too lazy to look it up right now. :) | {
"domain": "codereview.stackexchange",
"id": 42818,
"tags": "c++, template-meta-programming, c++20"
} |
How many palindromic sequences in human genome? | Question: I'm writing an article on palindromes (the words) and I wanted to mention the existence of palindromic gene sequences. Roughly how many palindromes exist in the human genome? I understand the number will vary from person to person. All I want to know is the order of magnitude. Is it 10^2, 10^3, 10^6?
Answer: The human genome contains approximately 1.25*10^7 palindromes longer than 6 bp (that is 8 bp or longer). Some interesting facts are also known about their distribution.
We found that 24 palindrome-abundant intervals are mostly located on G-bands, which condense early, replicate late, and are relatively A+T rich. In general, palindromes are overrepresented in introns but underrepresented in exons. Upstream region has enriched palindrome distribution, where palindromes can serve as transcription factor binding sites. We created a Human DNA Palindrome Database (HPALDB) which is accessible at http://vhp.ntu.edu.sg/hpaldb . It contains 12,556,994 entries covering all palindromes in the human genome longer than 6 bp.
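For context: in genomics a "palindrome" means a sequence that equals its own reverse complement, not a literal mirrored string as with word palindromes. A minimal check, using the EcoRI restriction site GAATTC as an illustrative test sequence:

```python
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def is_dna_palindrome(seq):
    """True if seq equals its reverse complement, the genomics sense of 'palindrome'."""
    return seq == seq.translate(COMPLEMENT)[::-1]
```

For example, `is_dna_palindrome("GAATTC")` is `True`, while `is_dna_palindrome("GAATTG")` is `False`.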
And no, the number of shorter ones (shorter than 8 bp) isn't mentioned in the paper because they are considered unimportant, biologically. | {
"domain": "biology.stackexchange",
"id": 9631,
"tags": "bioinformatics, dna-sequencing"
} |
Why is $v=\sqrt{\frac{GM}{r}}$ not a valid equation for escape velocity? | Question: Firstly, I know that the equation for the escape velocity is $$v_{\text{escape}}=\sqrt{\frac{2\,GM}{r}}\tag{1}$$ and understand its derivation.
The following is a simple derivation: for a test body of mass $m$ in a circular orbit around a massive body (assumed to be spherical) of mass $M$, with separation $r$ between the two bodies' centres, equating the centripetal force to the gravitational force yields
$$\frac{mv^2}{r}=\frac{GMm}{r^2}\tag{2}$$
which, on simplification, gives
$$v=\sqrt{\frac{GM}{r}}\tag{3}$$
What I would like to know is why eqn $(3)$ is not a valid escape velocity equation?
Or, put another way: mathematically, the derivation in $(2)$ seems sound, yet it is off by a factor of $\sqrt{2}$. What is 'missing' from the derivation $(2)$?
EDIT:
As I mentioned in the comment below, just to be clear, I understand that equation $(3)$ will give the velocity required for a bound circular orbit. But to escape it should follow that the test mass has to move at any speed that is infinitesimally larger than $\sqrt{\frac{GM}{r}}$ such that $$v_{\text{escape from orbit}}\gt\sqrt{\frac{\,GM}{r}}$$
So in other words eqn $(3)$ gives the smallest possible speed for a bound circular orbit. I referred to this as the 'escape speed'; since speeds larger than this will lead to a non-circular orbit, and larger still will lead to an escape from the elliptical orbit.
So my final question is; do the formulas $(1)$ and $(3)$ actually give the highest possible speed not to escape orbit rather than the 'escape speed' itself?
Thank you to all those that contributed these answers.
Answer: This answer addresses the edit made in the question.
Your question boils down to (correct me if wrong): if (1) $v_{escape} =\sqrt{2Gm/r}$ is escape velocity and (3) $v_{circular\;orbit} =\sqrt{Gm/r}$ is orbital velocity, then what would for example $v=\sqrt{1.5Gm/r}$ be? Or in other words, what happens with a speed higher than $v_{circular\;orbit}$ but lower than $v_{escape}$?
The answer is: an elliptical orbit.
(1) is derived for the orbital limit (it is assumed that the object will reach infinitely far away) and (3) is derived for a circular orbit (you used the circular centripetal force expression). A speed in between will distort the circular orbit as if escaping but then still coming back at some point completing the now non-circular, elliptical orbit.
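Numerically, the fixed $\sqrt{2}$ ratio between the two characteristic speeds is easy to check; the values below are illustrative Earth-surface parameters, not taken from the question:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24   # Earth's mass, kg
r = 6.371e6    # Earth's radius, m

v_circ = math.sqrt(G * M / r)     # Eq. (3): circular-orbit speed, about 7.9 km/s
v_esc = math.sqrt(2 * G * M / r)  # Eq. (1): escape speed, about 11.2 km/s
ratio = v_esc / v_circ            # exactly sqrt(2), independent of M and r
```

Any speed strictly between `v_circ` and `v_esc` produces one of the bound elliptical orbits discussed below.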
The full range of possible speeds is:
$v=0$: No orbit (vertical fall).
$0<v<v_{circular\;orbit}$: "Vertical" ellipse
$v=v_{circular\;orbit}$: Circle
$v_{circular\;orbit}<v<v_{escape}$: "Horizontal" ellipse
$v=v_{escape}$: Orbital limit
$v_{escape}<v$: No orbit | {
"domain": "physics.stackexchange",
"id": 46389,
"tags": "homework-and-exercises, newtonian-mechanics, newtonian-gravity, orbital-motion, escape-velocity"
} |
$UP^{\ O}\neq P^{\ O}$ for some oracle $O$ | Question: The definition of the class $UP$ is here. It is of course easy to see that $P\subseteq UP$.
I am having trouble proving that there is an oracle $O$ and a language $L$ such that $L\in UP^{\ O}$ but $L\notin P^{\ O}$, i.e. $UP^{\ O}\neq P^{\ O}$.
I am struggling to find an explicit $O$, and struggling even more to prove the claim without finding one. I don't have access to the paper linked from the Complexity Zoo entry above; can you please explain why this is true, or provide me with access to the paper?
Answer: Here is the construction from Rackoff's paper Relativized questions involving probabilistic automata. We construct the oracle $O$ using diagonalization, in steps. At step $t$ we will construct the initial segment $O_t$ of $O$ which consists of $O \cap \bigcup_{i=0}^{n_t} \Sigma^i$ for some $n_t$, where $n_0 < n_1 < \cdots$ is an increasing sequence. We will maintain the following invariant: $O_t$ contains at most one string of each length. This will guarantee that the following language is in $UP^O$:
$$
L = \{ x : \exists y \in O \cap \Sigma^{|x|} \}.
$$
The construction will use an enumeration $M_i$ of all polynomial time Turing machines, with the guarantee that the running time of $M_i$ is at most $in^i$. It is a standard fact that such an enumeration exists.
Let $O_{t-1}$ be the initial segment constructed at step $t-1$ (or $\emptyset$ in the first step). We now show how to construct $O_t$. Choose the smallest $a_t > n_{t-1}$ such that $ta_t^t < 2^{a_t}$. Run $M_t$ on the string $0^{a_t}$, answering NO to all queries outside of $O_{t-1}$. Let $b_t$ be the length of the largest string queried, and let $n_t = \max(a_t,b_t)$. If $M_t$ accepts, set $O_t = O_{t-1}$, guaranteeing that $0^{a_t} \notin L$. Otherwise, set $O_t = O_{t-1} \cup \{x\}$, where $x$ is some string of length $a_t$ which $M_t$ didn't query (such a string must exist since $ta_t^t < 2^{a_t}$), guaranteeing that $0^{a_t} \in L$. You can check that $O_t$ satisfies the invariant stated above.
By construction, $L \in UP^O$ (the UP machine guesses $y$ of the same length as the input and verifies that $y \in O$), while $L \notin P^O$ since $M_t$ answers incorrectly on the string $0^{a_t}$, by construction.
Beigel, in the paper On the relativized power of additional accepting paths, shows that $P^O \neq UP^O$ for a random oracle, crediting this result to Rudich's PhD thesis. In fact he proves even stronger separations, showing that allowing more accepting paths increases the (relativized) power of polytime machines. | {
"domain": "cs.stackexchange",
"id": 9444,
"tags": "complexity-theory, polynomial-time, oracle-machines"
} |
Conceptual help with a modified Atwood machine | Question: From my understanding, in this Atwood machine, one mass is on a horizontal surface and the other is hung over a pulley and left to fall freely. (A figure of the setup appeared in the original post.)
If only the hanging mass affects the acceleration of the entire system, why does the tension in m1 equal (m1*a)?
Trying to work through this myself: I may have misinterpreted that (m1*a) was mass times gravity, which isn't the case. However, that's still confusing. The acceleration is equal throughout the string, so the acceleration is equal to the force of gravity on m2. That makes sense to me algebraically and conceptually, but why does the TENSION equal (m1 * a)?
Answer: Cut the string to the right of M1. You need to replace it with the force that the string exerted to the right on M1. That force is the tension in the string. Since there are no other forces acting on M1 horizontally (assumes frictionless table), the acceleration of M1 must be $\frac{T}{M_1}$.
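A quick numeric check with hypothetical masses (frictionless table assumed, as in the answer):

```python
g = 9.81           # m/s^2
m1, m2 = 2.0, 1.0  # kg: m1 on the table, m2 hanging

a = m2 * g / (m1 + m2)  # both masses accelerate together at this rate
T = m1 * a              # tension is the only horizontal force on m1
```

Note that `T` comes out smaller than `m2 * g`; the consistency check on the hanging mass is that `m2 * g - T` equals `m2 * a`.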
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 53186,
"tags": "newtonian-mechanics, forces, free-body-diagram, string"
} |
How do I determine temperature and pressure rise whenever mols of gases are added in a isolated room? | Question: Let's consider a fully isolated room, no gas or heat flowing from it.
We consider gases ($\ce{O2,N2,CO2}$) to follow the ideal gas law, and no chemical reaction occurs between them.
Pressure $p$, temperature $T$, volume $V$ and the amount of gas $n$ are known, and only $V$ is held constant.
What happens when $n$ suddenly changes?
You can imagine it as a compressed-gas container leaking $\mathrm{d}n$ moles of gas.
According to my intuition, both $p$ and $T$ would increase.
I can use the ideal gas law if I consider either pressure or temperature to be constant in order to obtain the other one, but I have no clue about what would happen in reality to both temperature and pressure.
The purpose of this question is to build "kind of" a simulation.
Answer: The question can be answered by solving the material and energy balance for this case.
The tank is the control volume CV, of constant volume $V_\mathrm{CV}$, and its variables will be referred to with the index $\text{CV}$. A constant molar flow rate in $\pu{mol s^-1}$ is exiting the CV.
The outlet won't have any index.
The process is analyzed as follows:
1. Conservation of mass
The amount of substance in the CV can only decrease, due to the leak:
\begin{align}
\color{blue}{\frac{\mathrm{d}n_\mathrm{CV}}{\mathrm{d}t}} = -\dot{n} \tag{1} \\
\end{align}
we have a decrease, hence the minus sign. The conservation of amount of substance here is valid, because we don't have a chemical reaction, so mass and amount are conserved quantities.
2. First law
In its full form, where the index $i$ denotes inlets and $o$ denotes outlets, we have
\begin{align}
\frac{\mathrm{d}\left[n_\mathrm{CV}\left(u_\mathrm{CV} +
\dfrac{v_\mathrm{CV}^2}{2} + gz_\mathrm{CV}\right)\right]}{\mathrm{d}t}
=& \dot{Q} + \dot{W} \\ &+
\sum_{i} \dot{n}_\mathrm{i}\left(h_i + \frac{v_i^2}{2} + gz_i\right)
\\ &-
\sum_{o} \dot{n}_\mathrm{o}\left(h_o + \frac{v_o^2}{2} + gz_o\right) \tag{2}
\end{align}
where our assumptions are:
We disregard the kinetic and gravitational potential energy vs the enthalpy.
There is no transfer of work.
There is no transfer of heat.
Thus, Eq. (2) with only one outlet of enthalpy yields
\begin{align}
\frac{\mathrm{d}(n_\mathrm{CV}u_\mathrm{CV})}{\mathrm{d}t} &=
\color{blue}{-\dot{n}}h
\qquad\qquad \text{[Use Eq. (1)]} \\
n_\mathrm{CV}\frac{\mathrm{d}u_\mathrm{CV}}{\mathrm{d}t} +
u_\mathrm{CV}\frac{\mathrm{d}n_\mathrm{CV}}{\mathrm{d}t} &=
h\frac{\mathrm{d}n_\mathrm{CV}}{\mathrm{d}t} \\
n_\mathrm{CV}\mathrm{d}u_\mathrm{CV} &=
(h - u_\mathrm{CV})\mathrm{d}n_\mathrm{CV} \\
\frac{\mathrm{d}u_\mathrm{CV}}{h - u_\mathrm{CV}} &=
\frac{\mathrm{d}n_\mathrm{CV}}{n_\mathrm{CV}} \tag{3} \\
\end{align}
Eq. (3) is as far as we can go. We will make two more assumptions:
The fluid inside the CV has the same variables as the fluid leaving the CV. Thus, $P_\mathrm{CV} = P$ and $T_\mathrm{CV} = T$, so that $h_\mathrm{CV} = h$.
The fluid behaves as an ideal gas. Thus, the differential of internal energy can be written as
$$ du_\mathrm{CV} = C_\mathrm{V}^\mathrm{ig} \, \mathrm{d}T_\mathrm{CV}
\tag{4} $$
Combining both points we can simplify the nasty denominator in Eq. (3)
\begin{align}
\require{cancel}
h - u_\mathrm{CV} &= h_\mathrm{CV} - u_\mathrm{CV} \\
h - u_\mathrm{CV} &=
(\cancel{u_\mathrm{CV}} + P_\mathrm{CV}v_\mathrm{CV}) - \cancel{u_\mathrm{CV}} \\
h - u_\mathrm{CV} &= P_\mathrm{CV}v_\mathrm{CV} \\
h - u_\mathrm{CV} &= RT_\mathrm{CV} \tag{5} \\
\end{align}
Combining Eqs. (3), (4), and (5) and integrating
\begin{align}
\frac{C_\mathrm{V}^\mathrm{ig}\mathrm{d}T_\mathrm{CV}}
{RT_\mathrm{CV}} =
\frac{\mathrm{d}n_\mathrm{CV}}{n_\mathrm{CV}} \\
\frac{1}{R}\int_{T_\mathrm{CV1}}^{T_\mathrm{CV2}}
\frac{C_\mathrm{V}^\mathrm{ig}\mathrm{d}T_\mathrm{CV}}{T_\mathrm{CV}} &=
\int_{n_\mathrm{CV1}}^{n_\mathrm{CV2}}
\frac{\mathrm{d}n_\mathrm{CV}}{n_\mathrm{CV}} \tag{6} \\
\end{align}
Strictly, even for an ideal gas, $C_\mathrm{V}^\mathrm{ig}$ is a function of temperature. For simplicity, we will analyze the case where this quantity is constant. The integration is not that hard if we model the heat capacity as, say, a polynomial in temperature, but we would get a result that cannot be solved explicitly for the final temperature $T_\mathrm{CV2}$.
Continuing with Eq. (6)
\begin{align}
\frac{C_\mathrm{V}^\mathrm{ig}}{R}\int_{T_\mathrm{CV1}}^{T_\mathrm{CV2}}
\frac{\mathrm{d}T_\mathrm{CV}}{T_\mathrm{CV}} &=
\int_{n_\mathrm{CV1}}^{n_\mathrm{CV2}}
\frac{\mathrm{d}n_\mathrm{CV}}{n_\mathrm{CV}} \\
\frac{C_\mathrm{V}^\mathrm{ig}}{R}
\ln\left(\frac{T_\mathrm{CV2}}{T_\mathrm{CV1}}\right) &=
\ln\left(\frac{n_\mathrm{CV2}}{n_\mathrm{CV1}}\right) \rightarrow
\boxed{\frac{T_\mathrm{CV2}}{T_\mathrm{CV1}} =
\left(\frac{n_\mathrm{CV2}}{n_\mathrm{CV1}}\right)
^{R/C_\mathrm{V}^\mathrm{ig}}} \tag{7}
\end{align}
Now consider the ideal gas law in two situations
\begin{align}
\require{cancel}
\frac{P_\mathrm{CV2}\cancel{V_\mathrm{CV}}}
{n_\mathrm{CV2}\cancel{R}T_\mathrm{CV2}} &=
\frac{P_\mathrm{CV1}\cancel{V_\mathrm{CV}}}
{n_\mathrm{CV1}\cancel{R}T_\mathrm{CV1}} \\
\frac{P_\mathrm{CV2}}{n_\mathrm{CV2}T_\mathrm{CV2}} &=
\frac{P_\mathrm{CV1}}{n_\mathrm{CV1}T_\mathrm{CV1}} \\
\frac{P_\mathrm{CV2}}{P_\mathrm{CV1}} &=
\frac{n_\mathrm{CV2}T_\mathrm{CV2}}{n_\mathrm{CV1}T_\mathrm{CV1}}
\qquad \text{[Use Eq. (7)]} \\
\frac{P_\mathrm{CV2}}{P_\mathrm{CV1}} &=
\frac{n_\mathrm{CV2}}{n_\mathrm{CV1}}
\left(\frac{n_\mathrm{CV2}}{n_\mathrm{CV1}}\right)
^{R/C_\mathrm{V}^\mathrm{ig}} \\
\frac{P_\mathrm{CV2}}{P_\mathrm{CV1}} &=
\left(\frac{n_\mathrm{CV2}}{n_\mathrm{CV1}}\right)
^{(R/C_\mathrm{V}^\mathrm{ig}) + 1} \\
\frac{P_\mathrm{CV2}}{P_\mathrm{CV1}} &=
\left(\frac{n_\mathrm{CV2}}{n_\mathrm{CV1}}\right)
^{(R + C_\mathrm{V}^\mathrm{ig})/C_\mathrm{V}^\mathrm{ig}} \rightarrow
\boxed{\frac{P_\mathrm{CV2}}{P_\mathrm{CV1}} =
\left(\frac{n_\mathrm{CV2}}{n_\mathrm{CV1}}\right)
^{C_\mathrm{p}^\mathrm{ig}/C_\mathrm{V}^\mathrm{ig}}} \tag{8}
\end{align}
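A quick numerical check of Eqs. (7) and (8), using a monatomic ideal gas and an illustrative depletion of half the contents:

```python
R = 8.314      # gas constant, J mol^-1 K^-1
Cv = 1.5 * R   # monatomic ideal gas heat capacity at constant volume
Cp = Cv + R

n_ratio = 0.5  # n_CV2 / n_CV1: half the gas has left the control volume

T_ratio = n_ratio ** (R / Cv)   # Eq. (7)
P_ratio = n_ratio ** (Cp / Cv)  # Eq. (8)
```

Here `T_ratio` is about 0.63 and `P_ratio` about 0.31, showing the pressure dropping faster than the temperature for the same depletion.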
3. Remarks
Eqs. (7) and (8) are the final result. Some closing observations:
According to Eq. (7), as the tank is depleted, the temperature decreases as a power function.
According to Eq. (8), as the tank is depleted, the pressure decreases as a power function.
The pressure decreases, for the same level of depletion of the CV, more than the temperature does.
The total depletion of the tank, i.e. $n_\mathrm{CV2} \approx 0$, should not be evaluated with these equations. As temperature and pressure go down, a phase transition may occur in the fluid, so the equations are no longer valid because we don't have an ideal gas anymore. | {
"domain": "chemistry.stackexchange",
"id": 17617,
"tags": "thermodynamics, equilibrium, gas-laws, ideal-gas"
} |
Calculating the time light needs to reach a fast-moving object | Question: I'm wondering if someone could explain this to me in more detail:
Assume a spaceship flies on the x-axis away from earth with $v=0.8c$. Lets call its frame $S'$. The frame of earth is called $S$.
When the spaceship is located at $x_s=6.66\cdot 10^9$ km away from earth (measured in $S$), we send him a light signal.
How long does it need, until it reaches him?
The solution says: Because the signal moves with c, we get:
$\Delta t_{signal}\cdot c = x_s + \Delta t_{signal}\beta c$ (1)
so
$\Delta t_{signal}=\frac{x_s}{(1-\beta)c}$
But how exactly did they come up with (1)?
Edit: I do see that it actually makes sense, but still. A more detailed way to get there would be nice. Even if it's a basic problem.
Answer: There's nothing specific to relativity in the answer. It's just kinematics in the $S$ frame. So imagine a similar situation at lower speed: you have two runners, the slower one gets a head start. At $t = 0$, the slower one is at $x = x_s$, the faster one at $x = 0$. The faster one's $x$ position as a function of time: $v_{faster} t$. The slower one's: $x_s + vt$. You want to know at what time they're at the same position, so set them equal and solve for $t$. | {
"domain": "physics.stackexchange",
"id": 37717,
"tags": "homework-and-exercises, special-relativity"
} |
Forming states of the form $\sqrt{p}\vert 0\rangle+\sqrt{1-p}\vert 1\rangle$ | Question: I'm curious about how to form arbitrary-sized uniform superpositions, i.e.,
$$\frac{1}{\sqrt{N}}\sum_{x=0}^{N-1}\vert x\rangle$$
for $N$ that is not a power of 2.
If this is possible, then one can use the inverse of such a circuit to produce $\sqrt{p}\vert 0\rangle+\sqrt{1-p}\vert 1\rangle$ (up to some precision). Kitaev offers a method for the reverse procedure, but as far as I can tell there is no known method to do one without the other.
Clearly such a circuit is possible, and there are lots of general results on how to asymptotically make any unitary I want, but it seems like a massive headache to distill those results into this one simple, specific problem.
Is there a known, efficient, Clifford+T circuit that can either produce arbitrary uniform superpositions or states like $\sqrt{p}\vert 0\rangle+\sqrt{1-p}\vert 1\rangle$?
Answer: As long as you want to set an arbitrary state for a single qubit, as in your example, the solution is straightforward and makes use of the standard 2x2 $R_y$ gate. Specifically, if you set
$\theta = 2\arctan(\sqrt{1-p}/\sqrt{p})$
and apply $R_y(\theta)$ in the form
$$
\begin{pmatrix}
\cos(\theta/2) & -\sin(\theta/2)\\
\sin(\theta/2) & \cos(\theta/2)
\end{pmatrix}
$$
to an input qubit in the state $\vert 0\rangle$, you get what you're looking for.
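As a sanity check of the angle choice: applying $R_y(\theta)$ to $\vert 0\rangle$ yields amplitudes $(\cos(\theta/2), \sin(\theta/2))$, which can be verified to match $(\sqrt{p}, \sqrt{1-p})$:

```python
import math

p = 0.3  # any 0 < p < 1 works here
theta = 2 * math.atan2(math.sqrt(1 - p), math.sqrt(p))

amp0 = math.cos(theta / 2)  # amplitude of |0>, should equal sqrt(p)
amp1 = math.sin(theta / 2)  # amplitude of |1>, should equal sqrt(1 - p)
```

The two amplitudes are automatically normalized, since cos² + sin² = 1.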
Of course things get more complicated in case you want to set an arbitrary state over a qubit register of size > 1. In that case there are algorithms like Ventura's one (Initializing the Amplitude Distribution of a Quantum State) which can do it with polynomial complexity. | {
"domain": "quantumcomputing.stackexchange",
"id": 2608,
"tags": "quantum-state, circuit-construction, superposition, state-preparation"
} |
Anyone use Zorin OS 9? | Question:
Just tried to install ROS on my computer.
I am not totally new to Linux, but not a expert.
I tried installing and it did not work.
Any step by step instructions would be appreciated.
Thanks...
Originally posted by tman67 on ROS Answers with karma: 1 on 2015-01-02
Post score: 0
Answer:
Caveat: I have no idea whether the following will work, but it is worth a shot.
According to distrowatch.com, Zorin OS 9 is based on Ubuntu. Ubuntu 14.04 (Trusty) to be precise, according to this post on their blog.
ROS Indigo can be installed on Ubuntu Trusty, but you'll have to help the ROS OS detector, since it doesn't know about Zorin OS. You could try to see whether setting ROS_OS_OVERRIDE is enough. Add the following to the end of your .bashrc file (which is in your home directory):
export ROS_OS_OVERRIDE=ubuntu:14.04
Then start a new terminal, and see if things start working.
PS: in the future, please add more information to your questions. We don't know which version of ROS you're trying to install, what kind of machine you have (32 or 64 bit), or even what "did not work" means. Simply stating that something didn't work is not sufficient. We don't have crystal balls...
Originally posted by gvdhoorn with karma: 86574 on 2015-01-03
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 20470,
"tags": "ros"
} |
Validity of Kirchhoff's current law | Question: Is Kirchhoff's current law valid for rapidly varying currents, such as one with a frequency on the order of 10 kHz?
Answer: It's all about time constants. Kirchhoff's Laws assume that the steady state has been achieved.
How do you know if you can make that assumptions in the presence of of time varying driving signals? By comparing the time scale of the external variation with the time-scale of transients in the circuit.
There are two cases to check first:
Capacitance- and inductance-driven time-scale
For a basic RC series circuit (that is, one with a resistance $R$ and a capacitance $C$ in series) the time constant is $\tau = RC$ (you should check for yourself that this has units of time, BTW), and if
$\tau \, f \ll 1 $
where $f$ is the frequency of the driving signal, then you can reasonably use Kirchhoff's Laws for this case.
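For example, with hypothetical bench-circuit values at the 10 kHz in question:

```python
R = 1e3    # resistance, ohms
C = 1e-9   # capacitance, farads
f = 10e3   # driving frequency, Hz

tau = R * C        # RC time constant: 1e-6 s
product = tau * f  # 1e-2, which is << 1, so Kirchhoff's laws apply here
```

Raising either R or C by a couple of orders of magnitude would push `tau * f` toward 1, and the quasi-static assumption would start to break down.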
Physical extent time-scale
Generally, if the circuit is much smaller than $c/f$ then you can ignore the transmission delay around the circuit, but if this is not true you have to worry about propagation delays. People who work on very high frequency circuits have lots of rules of thumb that contradict the ones used with ordinary, slowly varying circuits like the one you build on a bench in a first-year lab course. | {
"domain": "physics.stackexchange",
"id": 29273,
"tags": "electric-circuits, electric-current"
} |
Joining url path components intelligently | Question: I'm a little frustrated with the state of url parsing in python, although I sympathize with the challenges. Today I just needed a tool to join path parts and normalize slashes without accidentally losing other parts of the URL, so I wrote this:
from urlparse import urlsplit, urlunsplit
def url_path_join(*parts):
"""Join and normalize url path parts with a slash."""
schemes, netlocs, paths, queries, fragments = zip(*(urlsplit(part) for part in parts))
# Use the first value for everything but path. Join the path on '/'
scheme = next((x for x in schemes if x), '')
netloc = next((x for x in netlocs if x), '')
path = '/'.join(x.strip('/') for x in paths if x)
query = next((x for x in queries if x), '')
fragment = next((x for x in fragments if x), '')
return urlunsplit((scheme, netloc, path, query, fragment))
As you can see, it's not very DRY, but it does do what I need, which is this:
>>> url_path_join('https://example.org/fizz', 'buzz')
'https://example.org/fizz/buzz'
Another example:
>>> parts=['https://', 'http://www.example.org', '?fuzz=buzz']
>>> '/'.join([x.strip('/') for x in parts]) # Not sufficient
'https:/http://www.example.org/?fuzz=buzz'
>>> url_path_join(*parts)
'https://www.example.org?fuzz=buzz'
Can you recommend an approach that is readable without being even more repetitive and verbose?
Answer: I'd suggest the following improvements (in descending order of importance):
Extract your redundant generator expression to a function so it only occurs once. To preserve flexibility, introduce default as an optional parameter
This makes the comment redundant because first is a self-documenting name (you could call it first_or_default if you want to be more explicit), so you can remove that
Rephrase your docstring to make it more readable: normalize and with a slash don't make sense together
PEP 8 suggests not aligning variable assignments, as does Clean Code by Robert C. Martin. However, it's more important to be consistent within your project.
def url_path_join(*parts):
"""Normalize url parts and join them with a slash."""
schemes, netlocs, paths, queries, fragments = zip(*(urlsplit(part) for part in parts))
scheme = first(schemes)
netloc = first(netlocs)
path = '/'.join(x.strip('/') for x in paths if x)
query = first(queries)
fragment = first(fragments)
return urlunsplit((scheme, netloc, path, query, fragment))
def first(sequence, default=''):
return next((x for x in sequence if x), default)
If you're looking for something a bit more radical in nature, why not let first handle several sequences at once? (Note that unfortunately, you cannot combine default parameters with sequence-unpacking in Python 2.7, which has been fixed in Python 3.)
def url_path_join(*parts):
"""Normalize url parts and join them with a slash."""
schemes, netlocs, paths, queries, fragments = zip(*(urlsplit(part) for part in parts))
scheme, netloc, query, fragment = first_of_each(schemes, netlocs, queries, fragments)
path = '/'.join(x.strip('/') for x in paths if x)
return urlunsplit((scheme, netloc, path, query, fragment))
def first_of_each(*sequences):
return (next((x for x in sequence if x), '') for sequence in sequences) | {
"domain": "codereview.stackexchange",
"id": 3493,
"tags": "python, parsing, url"
} |
Correlation between resonance and coordinate covalent bond | Question: So I was listening to an online chemistry lecture the other day, and the theme was "bond".
In the 'coordinate covalent bond' part, the teacher said "Examples of the coordinate covalent bond can be $\ce{SO_2}$, $\ce{HNO_3}$, etc." and I could figure out that both molecules have resonance structures: between $\ce{N}$ and $\ce{O}$ atoms in $\ce{HNO_3}$, between $\ce{O}$ and $\ce{S}$ atoms in $\ce{SO_2}$. Also I am aware of the resonance structures in the $\ce{C_6H_6}$(benzene) molecule, so I looked up the Wikipedia page and could find this sentence:
"..., while the VB description involves a superposition of resonance structures."
So I became curious about this: Is there any close correlation between resonance and coordinate covalent bond? In other words, is it safe to view a molecule as having coordinate covalent bond when it has resonance structure in it?
Answer: Coordinate covalent bonds and resonance are vastly different things. You may have coordinate compounds without resonance, and you may have resonance in compounds without any coordinate covalent bonds. There's no connection.
Coordinate Covalent Bonds (a.k.a Dative Bonds):
These bonds are just like any other covalent bond. They involve sharing of electrons in between the two atoms that make up the bond. The only difference between a regular covalent bond and a coordinate one is that both the electrons belong to one atom in case of coordinate covalent bonds. Head over to Wikipedia's page on Coordinate Covalent Bonds to learn more, if you're interested.
Resonance:
This is, as I mentioned, vastly different from dative bonds. This is simply a delocalization of $\pi$-electrons along a larger stretch of the molecule as opposed to being confined to being in-between two atoms. Resonance occurs when two $\pi$-systems are in conjugation with each other.
For example, consider butadiene:
There are two $\pi$-systems that are in conjugation with each other. As a result, the two systems kind of overlap and become one bigger system.
You might wonder, why do these molecules undergo resonance anyway? Well, it's all based on a very simple fact: like charges repel. Electrons in the $\pi$ molecular orbitals are no exception either. Rather than being confined between two atoms, they prefer to spread over the molecule when given the chance. If you put 100 misanthropes into a large hall, would they all form two clumps or spread evenly throughout the hall?
Anyway, now you can see, there isn't any relationship between dative bonds and resonance. There are compounds which have no dative bonds, but possess resonance, eg. benzene, butadiene, pyridine, et cetera. There are compounds that have dative bonds, and yet no resonance, eg. ammonia-boron adduct, tetraaquacopper(II), et cetera. | {
"domain": "chemistry.stackexchange",
"id": 8727,
"tags": "bond"
} |
The Mathematical Prediction of Antimatter | Question: According to my understanding:
The Dirac equation has negative- and positive-energy solutions. The "Dirac sea" theory says that these solutions can be interpreted as a "sea" of negative-energy particles and a "sea" of positive-energy particles with the same mass; together they form what we perceive as a vacuum.
What I am confused about, is when there is a "hole" in the negative-energy particle sea, why does the hole have opposite charge to the particle?
Answer: This is probably easiest to answer in the solid state case, rather than the particle physics case. In particle physics this is a prediction of antimatter, but in a crystalline solid, electrons are forced by Pauli exclusion to occupy higher and higher energy levels, until they more or less occupy a ball in momentum-space whose surface corresponds to an energy that we call the Fermi energy. In a semiconductor this energy occurs in a “band gap” which does not have any states for electrons to inhabit, between a “valence band” and a “conduction band.”
So, if you put an electric field on the electron sea of a semiconductor, it does what any electric field does: pushes the electrons in the opposite direction.
If those electrons are free electrons in a conduction band, everything makes sense and is straightforward. If they are holes in a valence band, then you have to think backwards a little bit. The electron that fills in the hole is moving opposite the field, so the hole must move along the field. Thus, it acts like it has the opposite charge $+1e$ to the electron’s $-1e$.
"domain": "physics.stackexchange",
"id": 63068,
"tags": "antimatter, dirac-equation"
} |
How do I calculate the amplitude of an $s$-channel Feynman diagram? | Question: What is the starting point for calculating the amplitude in an $s$-channel diagram like the one below, where two particles collide and create a propagator followed by the final product?
I know that in a $t$-channel diagram I should start from the side opposite to the direction of propagation, but in this case the particles are propagating to the same point, meaning:
What I mean is:
Should I start by adding the maths expressions from the top right to bottom left, as in $Z_L(p_3)$, followed by the vertex and then by the mathematical expression for $Z_L(p_4)$ followed by the propagator?
Answer: As the diagram is rather general, i.e. it is unknown whether the particles $Z_L(p_i)$ are distinguishable or indistinguishable, particles or anti-particles, bosons or fermions, one can only make very general statements. Anyway, it is possible to start translating the Feynman diagram wherever you want, as composing the mathematical expression is essentially multiplication, and multiplication is commutative.
The expression for the scattering amplitude would be in the most general form ($g_i$ with $i=1,2$ are the coupling constants which are not necessarily the same). I assume a scalar coupling:
$$i{\cal M} = J(1,2)\frac{-i g_1 g_2}{s -m^2_{H_0}}J(3,4)$$
where $J(1,2)$ is the "current" (caveat: possibly this "current" is not conserved, here this does not matter) of the particles $Z_L(p_1)$ and $Z_L(p_2)$, and $J(3,4)$ is the "current" of $Z_L(p_3)$ and $Z_L(p_4)$. If the particles are not distinguishable, another diagram has to be added where the outgoing particles are swapped with respect to the ingoing particles.
BONUS:
As this diagram is supposed to be s-channel process, $Z_L(p_1)$ and $Z_L(p_2)$ would annihilate, and $Z_L(p_3)$ and $Z_L(p_4)$ would be created. So in case of fermions, the "currents" would be something like:
$$J(1,2) = \overline{v}(-p_2)u(p_1) \quad \text{and} \quad J(3,4) = \overline{u}(p_4)v(-p_3)$$
but I don't guarantee that this expression is 100% correct, it is just for giving you an idea.
EDIT:
Actually, the products of the bispinors, for instance $\overline{u}\cdot v$, are not commutative, so one has to define a rule for the order in which they should be written (nevertheless, $J(1,2)$ and $J(3,4)$ can still be commuted). However, it would only be valid for an $s$-channel process; for the other channels other rules would have to be applied.
ANOTHER EDIT:
The signs of the momenta depend on the assumptions that $p_1$ corresponds to an in-going particle and $p_2$ corresponds to an out-going particle. Finally, $p_3$ is considered as an in-going particle and $p_4$ corresponds to an out-going particle. If the direction of a particle is swapped, the sign has to be swapped correspondingly.
"domain": "physics.stackexchange",
"id": 65698,
"tags": "homework-and-exercises, feynman-diagrams, higgs, quantum-chromodynamics"
} |
Can a safe state in Banker's Algorithm cause deadlock eventually? | Question: I know that unsafe state does not always mean deadlock. That is a situation that banker's algorithm does not detect deadlock accurately. I am just wondering if there is any example of a safe state that deadlocks?
Answer: No deadlock as long as you keep running the Banker's algorithm.
By definition, a state is considered safe if it is possible for all processes to finish executing, which means there is no deadlock.
In order to avoid triviality, the question should be asking whether a safe state might change into deadlock. Assume the system is in a safe state (or the system is safe in short) initially. If you are able to run the Banker's algorithm, then
each time a request for resources is raised, the system will
either reject the request so the state will not change, meaning the system is still safe of course.
or it will grant the request after it have been verified that after the requesting process has obtained those resources the system will still be safe, meaning the system is still safe.
each time some resources is returned, the system will obviously still be safe.
So, you can see that almost by definition and design, a safe state never goes into deadlock under the Banker's algorithm.
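To make the argument concrete, here is a minimal sketch of the safety check at the heart of the Banker's algorithm (written in Python for illustration; the data layout — one row per process, one column per resource type — is an assumption, not part of the original answer). A state is safe exactly when this check can drive every process to completion:

```python
def is_safe(available, allocation, need):
    """Banker's safety check: True iff some ordering lets every process finish.

    available: free instances per resource type.
    allocation/need: one row per process, one column per resource type.
    """
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            # Process i can run if its remaining need fits in what is free.
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # It runs to completion and returns everything it held.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)
```

A request is granted only if the state that would result still passes this check, which is exactly why a safe state never drifts into deadlock.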
A more interesting question might be whether the system guarantees to make progress. The answer is yes, too. For any process, it is assumed that the maximum number of instances of each resource type it may request does not exceed the total number of resources in the system. It is also assumed that when a process gets all its requested resources it must return them in a finite amount of time. Suppose at some point of time, no process is able to run yet because of its lack of resources. Since the system is safe, resources will be allocated upon requests continuously until one of the processes is able to get all the resources it needs, meaning that process will start to run and progress will be made. After some finite amount of time, that process, having made some progress, must return all its requested resources to the system. Now the system is sort of back to square one, but with progress made.
For a more rigorous analysis, you can check Selected Writings on Computing: A Personal Perspective, Springer-Verlag, 1982, by the algorithm's designer Edsger W. Dijkstra.
"domain": "cs.stackexchange",
"id": 12241,
"tags": "operating-systems, deadlocks"
} |
Why do gold deposits form only in certain areas of the earth? | Question:
In the map above you can find that most elements are spread evenly throughout Earth's crust and that they are available all around the Earth. However, gold can only be found in certain areas of the planet such as Australia and Canada. Is there a specific reason why gold can only be mined at these locations, or is it just a coincidence?
Answer: I'll take the form of the question given by another person here and attempt to provide a different answer.
So what you are asking is: "How did gold become so concentrated in certain parts of the world?"
So yes, gold is all around but the concentration is too low to make extraction of it worthwhile. You need some process to take small amounts of gold from a large volume and turn it into large amounts of gold in a small volume where it is convenient to build a mining facility and get it from the Earth.
One of the most common process to concentrate gold is through the action of hydrothermal fluids. This is basically heated water flowing through the Earth's crust. Heated water with certain properties such as acidity (pH) or dissolved anions (think chlorine-rich seawater is more corrosive than your tap water) can dissolve solid gold and put it into solution. Just like regular water can dissolve table-salt or sugar and put it into solution.
So you have this hydrothermal fluid flowing through huge masses of rock (mountain-like, but underground) for a very long time, and when it goes up to the surface, it is channelled into thin conduits of fluid flow. During its ascent the conditions change (be it temperature or pH, etc.) and the water is not capable of carrying gold with it anymore. This results in the deposition of gold in that specific region.
I made an example, that I hope will help you understand this in a clearer way:
So in here you have rain water entering the Earth over a large area and getting hotter as it descends. It becomes possible for it to dissolve the gold from the large volume of rock. But also, because the water (now steam or a supercritical fluid) is hot, it starts to ascend, usually through a narrow zone. When it cools down again, gold forms as a solid. For example, gold associated with quartz veins commonly forms through this process.
Now, what happens if these gold-bearing quartz veins are exposed on the surface? They may erode by rain and snow and get concentrated in river beds. So you can either mine the original quartz vein or the nearby river bed. | {
"domain": "earthscience.stackexchange",
"id": 1758,
"tags": "petrology, mineralogy, minerals, economic-geology"
} |
"Magnetic mnemonics" | Question: Over and over I'm getting into the same trouble, so I'd like to ask for some help.
I need to solve some basic electrodynamics problem, involving magnetic fields, moving charges or currents.
But I forgot all these rules about "where the magnetic field should go if the current flows up". I vaguely remember that it is about hands or fingers or books or guns, and the magnetic field should go somewhere along something, while the current should flow along something else... But it doesn't help, because I don't remember the details.
I do find some very nice rule and use it. And I think "that one I will never forget".
...time passes...
Go to step 1.
The problem is that you have to remember one choice from a dichotomy: "this way or that way". And all the mnemonics I know have the same problem -- I have to remember "left hand or right hand", "this finger or that finger", "inside the book or outside of the book".
Maybe someone knows some mnemonics, that do not have such problem?
Answer: Dear Kostya, if the electric field is a vector with an arrow, then the magnetic field is fundamentally not a vector: instead, it is an antisymmetric tensor with two indices, determining an "oriented two-plane".
The latter carries the same information (3 different components) as a vector, and there is a convention given by the right-hand rule to switch from one to the other. A derived and related rule also determines the magnetic field of a solenoid and other things.
Clearly, the convention to switch from the antisymmetric tensors to vectors could have been the other way around, too. So one has to remember something to know the convention; one can derive many things but not conventions. I agree that remembering the right hand operations is simple, especially because the word "right" also means "correct" and because the right-wing political opinions are the right ones while the left-wing political opinions are those that are left over. | {
"domain": "physics.stackexchange",
"id": 231,
"tags": "electromagnetism, mnemonic"
} |
Is levitation possible through this apparatus? | Question: If we had two rigid sheets which were exactly identical and we had the magnets on it as shown in the figure, then would the top sheet float above the bottom, which is fixed on a rigid surface.
The black dots are identical magnets and $N$ and $S$ represent the poles of magnet facing the opposite magnet.
I can see that the torque on the top sheet about an axis in its plane, through its center and bisecting either of the sides, is zero. But the net force on it is $mg$ downward, hence it must free fall. But the free-falling top sheet does not sound appealing to me, as when it approaches the bottom it will have two pairs of magnets which want to get nearer but two others which want to get farther apart. When it reaches the bottom it cannot rotate as there would be no torque, and obviously it won't be in equilibrium. Thus I want to know how the top sheet would behave, and also whether it should float above the bottom sheet.
Note that I have assumed everything to be perfect.
Answer: What you describe is called a "metastable" state. These are states which are mathematically stationary, but any small perturbation from that perfect state causes things to collapse back towards a "stable" state.1
Practically speaking, such systems are never considered stable, because the implementation is never perfect. However, they are very popular in control systems. If you have a control system which is actively observing the state of your top sheet and applying torques to keep it in that metastable state, you can maintain levitation with a great deal of efficiency. This is what is done with things like hoverboards, where one is basically balancing on top of a wheel (which is a metastable state).
One key thing to remember is that magnetic forces get stronger as the magnets get closer. So I would expect the natural failure mode of this would be for a perturbation to tip it so that one of the N-S pairs gets closer than the other. This will make it stronger, leading it to tip more and more until collapse. However, if everything is mathematically perfect, this system will indeed stay in place.
1. Okay, I'll admit my mathematical preference on the terms. In physics, "metastable" typically means something which is stable, but not at a local minimum. In the mathematical sense, specifically in studying dynamic systems, metastable is used to refer to systems which can remain in a set of states indefinitely, but perturbations rapidly evolve away from those states (usually towards a stable state). | {
"domain": "physics.stackexchange",
"id": 69447,
"tags": "magnetic-fields, levitation"
} |
Efficient algorithm for 'unsumming' a set of sums | Question: Given a multiset of natural numbers X, consider the set of all possible sums:
$$\textrm{sums}(X)= \left\{ \sum_{i \in A} i \,|\, A \subseteq X \right\}$$
For example, $\textrm{sums}(\left\{1,5\right\}) = \left\{0, 1, 5, 6\right\}$ while $\textrm{sums}(\left\{1,1\right\}) = \left\{0, 1, 2\right\}$.
What is the most efficient algorithm for calculating the inverse operation (measured in terms of the size of the input set of sums)? Specifically is it possible to efficiently calculate any of the following:
Whether a given set is a valid set of sums. (For example, $\left\{0,1,2\right\}$ is valid but $\left\{0,1,3\right\}$ is not.)
A multiset that sums to the given set.
The smallest multiset that sums to the given set. (For example, $\left\{1,2\right\}$ and $\left\{1,1,1\right\}$ both sum to $\left\{0,1,2,3\right\}$ but the former is smaller.)
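For concreteness, the forward operation $\textrm{sums}(X)$ can be sketched by brute force (exponential in $|X|$, for illustration only; the empty subset contributes 0):

```python
from itertools import combinations

def sums(xs):
    """All subset sums of the multiset xs, as a set."""
    result = set()
    for r in range(len(xs) + 1):           # subset sizes 0..len(xs)
        for combo in combinations(xs, r):  # multiset: positions, not values
            result.add(sum(combo))
    return result

print(sorted(sums([1, 5])))  # -> [0, 1, 5, 6]
print(sorted(sums([1, 1])))  # -> [0, 1, 2]
```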
Answer: Solution
Solution has two parts. First we discover minimal set, then we prove that it can represent the power sum set. The solution is adjusted for programming implementation.
Minimal set algorithm
Find maximal element $a_{m}$ from the sum (multi)set. $P$, the potential minimal (multi)set is initially empty.
Unless there is only one group, represent $a_{m}$ in all possible ways as a pair of sums that add up to $a_{m}$, $S_{ij}=\{(a_{i},a_{j})|a_{i}+a_{j}=a_{m}\}$
Check that all elements from the set of sums are included.
Find maximal element $a_{s}$ from all $S_{ij}$ (meaning together) with the following property: for each $S_{ij}$, $a_{s}$ is either in $S_{ij}$, or we can find $a_{p}$ from the set of sums so that $a_{p}+a_{s}$ is in $S_{ij}$.
If it is the case that $S_{ij}$ does not contain $a_{s}$, just the sum $a_{s}+a_{p}$, remove $a_{p}+a_{s}$ from $S_{ij}$ (or just set a mark to ignore it) and insert $a_{p}$ and $a_{s}$ in $S_{ij}$ instead.
If an element is present in every $S_{ij}$ remove it from all $S_{ij}$ once (or just set a mark to ignore it and not to touch it any longer) and add it to the list of elements of potential minimal set $P$.
Repeat until all $S_{ij}$ are empty
If some of $S_{ij}$ remains non-empty and we cannot continue, try again with the maximum value from all $S_{ij}$.
Recreate the recursive steps without removals and continue with power set coverage algorithm over $P$. (Before this, you can make a safe-check that $P$ includes all elements that cannot be represented as a sum of two elements so they must be in underlying set for sure. For example, the minimal element must be in $P$.)
(10.) Observe that a minimal set solution, which is the goal of the algorithm, cannot contain more than one repetition of the same number.
Example:
$$\{2,3,5,7,8,10,12,13,15\}$$
Represent 15 in all possible ways as a sum of two numbers from the set of sums.
$$(13,2),(12,3),(10,5),(8,7)$$
Try to find maximal number that is in all groups or that can be represented as a sum. Obviously we can start searching for it from 8, there is no point going above it.
13 from the first group is 13=8+5 so 13 is fine, but 12 from the second group is not fine since there is no 4 to make 12=8+4 in the set of sums. Next we try with 7. But immediately 13 cannot be covered, there is no 6.
Next we try 5. 13=5+8, 12=5+7, 10=5+5, and for the last either 8=5+3 or 7=5+2 but not both. The groups are now:
$$((5,8),2),((5,7),3),((5,5),5),((5,3),7)$$
5 is repeating in all groups so we extract it $P=\{5\}$. We extract 5 only once from each group.
$$(8,2),(7,3),(5,5),(3,7)$$
Obviously there is no point going higher than 5 so we try 5 again. 8=5+3, 7=5+2, so all is fine
$$((5,3),2),((5,2),3),(5,5),(3,(5,2))$$
Extract one 5 again from all groups since it is repeating. (This is not common but our case is deliberately created to display what to do in case we have repetitions.) $P=\{5,5\}$
$$(3,2),(2,3),(5),(3,2)$$
Now we try with 3 and have 5=3+2. Add it to the group.
$$(3,2),(2,3),(3,2),(3,2)$$
Now extract 3 and 2 since they are repeating everywhere and we are fine $P=\{5,5,3,2\}$ and the groups are empty.
$$(),(),(),()$$
Now, we need to recreate recursive steps without removals, this simply means doing the above without really removing the elements from $S_{ij}$ just placing them in $P$ and marking not to alter it any longer.
$$(13,2),(12,3),(10,5),(8,7)$$
$$((5,8),2),((5,7),3),((5,5),5),((5,3),7)$$
$$((5,(5,3)),2),((5,(5,2)),3),((5,(3,2)),5),((5,3),(5,2))$$
Power set coverage
The purpose of this part is to check if the found minimal set is able to cover the power sum set. It is possible that a found solution can cover all given sums, but that they are not power set sums. (Technically, you could simply create a power sum set from the found minimal set and check that each sum, as the power set dictates, is in the initial sum set. All of this is just merged with what we already have, so nothing is wasted. You can do this part while rewinding the recursion.)
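That direct check can be sketched as follows (an exponential-time reference check for illustration, not the merged version this answer describes; note that the worked example's sum set omits the empty-subset sum 0, so it is added back here):

```python
from itertools import combinations

def covers_power_sums(candidate, sum_set):
    """True iff the subset sums of candidate are exactly sum_set."""
    realized = {sum(combo)
                for r in range(len(candidate) + 1)
                for combo in combinations(candidate, r)}
    return realized == set(sum_set)

# The worked example, with the empty-subset sum 0 included:
print(covers_power_sums([2, 3, 5, 5], {0, 2, 3, 5, 7, 8, 10, 12, 13, 15}))  # -> True
```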
Encode all elements from the minimal set using successive powers of 2. The order is not important. Encode the same element with a new value as many times as it is repeating. Start from C=1, every next element has C=2C.
$$(2=[1],3=[2],5=[4],5=[8])$$
Replace the elements in the restored recursion list,
$$((5,(5,3)),2),((5,(5,2)),3),((5,(3,2)),5),((5,3),(5,2))$$
with the encoding: 2 with 1, 3 with 2, 5 with 4, and another 5 with 8. Observe that each element has different encoding even though they are repeated.
$$((4,(8,2)),1),((4,(8,1)),2),((4,(2,1)),8),((8,2),(4,1))$$
Collect all intermediate sums, at the moment we have (1,2,4,8)
$$((4,(10)),1),((4,(9)),2),((4,(3)),8),((10),(5))$$
Intermediate sums $(1,2,3,4,5,8,9,10)$
$$((14),1),((13),2),((7),8),(15)$$
Intermediate sums $(1,2,3,4,5,8,9,10,13,14,15)$
$$\{(15),(15),(15),(15)\}$$
Check that the result is $2^m-1$, where $m$ is the number of elements in the solution, in the example $m=4$
Collect missing numbers from $1$ to $2^m-1$ in the intermediate sum list
$(6,7,11,12)$
Justify their absence in the following manner: represent each number in binary form
$(6=0110_2)$
$(7=0111_2)$
$(11=1011_2)$
$(12=1010_2)$
$6$ represents the sum of 3+5 since $0110_2$ is covering second and third element from $(2=[1],3=[2],5=[4],5=[8])$. The sum of these elements, 8, is listed in the initial sum list $\{2,3,5,7,8,10,12,13,15\}$, so all is fine.
$7$ represents the sum of 2+3+5 since $0111_2$ is covering first three elements from $(2=[1],3=[2],5=[4],5=[8])$. The sum of these elements, 10, is listed in the initial sum list so all is fine.
$11$ is 2+3+5, and 10 is in the list.
$12$ is 3+5, and 8 is in the list.
If any binary representation corresponds to the sum that cannot be found, report that there is no solution.
So all is fine and $(2,3,5,5)$ is the solution. It is the minimal solution as well.
Discussion
It was necessary to provide the algorithm that is going to check if the sums cover the power set completion, which is what is hidden in the binary expansion. For example if we exclude 8 and 7 from the initial example, the first part would still provide the solution, only the second part would report missing combinations of sums.
First part of discovering the possible minimal set is $mn\log(m)$ which comes to $m\log^2(m)$: we are looking around $m$ elements $n$ times, having one $\log(m)$ binary search.
The last part is done in recursion return and it does not require any special effort, we are searching over less than $m$ elements, we need binary form which is $\log{m}$ and we have one addition and search if the sum is in the list, so together it is again about $m\log^2(m)$.
If we assume that the number of elements in the power sum set corresponds to the number of partitions of the largest element in the underlying set then the complexity is around $m\log^3(m)$. Any of the two justifies the initial sorting in order to find the largest element.
Parts of the algorithm assume that we can find the pair of sums in linear time and this requires sorting.
Incorrect start
First part of the algorithm may fail if we have started it on the wrong foot. For example, $2,3,4,5,6,7,8,9,10,11,12,13,15$ has the basic solution $2,3,4,6$, which you get if you start the algorithm from 6. However, we can start our algorithm from 7, since there is nothing in step 4 that would say not to, and lock ourselves in: the algorithm cannot end properly. The reason is that 7 is 7=4+3 and 4 and 3 are in the solution. So a locked algorithm does not always mean that there is no solution; just try again with a lower initial value. In that case, some ideas about the possible values are hidden within the remaining $S_{ij}$. That is why we suggested starting from there in case of failure.
Another example: if you miss and start the algorithm from 5, you would get $5,4,3,3$, but this one does not include 2.
Notice that this algorithm is not going to give a derived solutions like $2,2,3,4,4$, which we got simply by turning 6 into 4 and 2 in the solution $2,3,4,6$. There are special rules that cover these versions.
The purpose of this algorithm is to provide a solution once we have started it all correctly.
Improvements
Step 4. is the one that could be upgraded in this manner: instead of maximal we could try out every element in descending order that satisfies the given condition. We create a separate branch for each. If some branch does not give a solution, cancel it.
For example for $2,3,4,5,6,7,8,9,10,11,12,13,15$ we could try in the first round $7,6,5,4$ in separate ways since all of them are passing the first test. (There is no reason to use 2 or 3 since we know they have to be in the underlying set.) and simply continue that way all around until we collect all versions that can reach the end. This would create a full-coverage solution which would discover more than one underlying set.
Another thing, since we know that we cannot have more than one repetition if the case is minimal, we can incorporate this in our algorithm.
Overall, the condition in step 4. that a number must repeat in every group or have ability to create a sum is strong enough to get us out of direct exponential waters, which would be an algorithm of simply trying out every combination and creating the power set over each until we find a match. | {
"domain": "cs.stackexchange",
"id": 5916,
"tags": "algorithms, optimization, combinatorics, integers"
} |
Send message and wait for receive while using async/await and promises the proper way | Question: I have this working code, but the sendAndReceive function looks ugly/smelly to me. Eslint complains about using await inside a loop, and about using true as the conditional in a while loop. Would there be a more elegant way to achieve this? I mean, "blocking" until a response appears in the inbox before returning a response. This is necessary because I need to get each response before sending the next message, otherwise the device firmware complains.
const SerialPort = require('serialport');
const ReadLine = require('@serialport/parser-readline');
function sleep(ms) {
return new Promise(resolve => setTimeout(resolve, ms));
}
const inbox = [];
// IS THIS FUNCTION OK?
const sendAndReceive = async (message, port) => {
port.write(message);
while (true) {
await sleep(1);
if (inbox.length > 0) {
const response = inbox.pop();
return response;
}
}
};
SerialPort.list()
.then(async portInfos => {
portInfos.filter(pinfo => pinfo.manufacturer === 'FTDI')
.forEach(async portInfo => {
const port = new SerialPort(portInfo.path).setEncoding('ascii');
const parser = port.pipe(new ReadLine({
delimiter: '\r\n',
encoding: 'ascii',
}));
parser.on('data', data => {
inbox.push(data);
});
port.open();
const serialMessage = api.createReadMessage(SERIAL);
const batteryMessage = api.createReadMessage(BATTERY);
for await (const m of [serialMessage, batteryMessage]) {
const response = await sendAndReceive(m, port);
console.log(m.trim());
console.log(response);
console.log();
}
});
});
Answer: The while (true) { await sleep(1); ... is potentially very computationally expensive. Yes, it'd be good to refactor it out if possible.
Response bug
Your current implementation looks like it has a bug, or what could very easily become a bug. The inbox array is global, but each portInfo array item will push separate, undistinguished items to the inbox array. That is, each forEach iteration will create a port and open it immediately, and then whichever iteration whose sendAndReceive happens to run its timeout first after the data callback has pushed an item to the array will get the response. Here:
const response = await sendAndReceive(m, port);
Unless you have only a single portInfo, the response received is likely to correspond to a different iteration's m and port.
The easy way to fix this would be to declare a separate inbox array inside each iteration - but the whole approach there deserves some reworking, as you noticed.
Inside the innermost loop that calls sendAndReceive, you might construct Promises for each message being iterated over, and push their resolve function to an array. When the parser responds, if the array has a resolver function, shift it out and call it with the data:
portInfos.filter(pinfo => pinfo.manufacturer === 'FTDI')
.forEach(portInfo => {
const port = new SerialPort(portInfo.path).setEncoding('ascii');
const parser = port.pipe(new ReadLine({
delimiter: '\r\n',
encoding: 'ascii',
}));
const resolves = [];
parser.on('data', data => {
if (resolves.length) {
resolves.shift()(data);
}
});
port.open();
const serialMessage = api.createReadMessage(SERIAL);
const batteryMessage = api.createReadMessage(BATTERY);
for (const message of [serialMessage, batteryMessage]) {
new Promise((resolve) => {
port.write(message);
resolves.push(resolve);
})
.then((response) => {
console.log(message.trim());
console.log(response);
});
}
});
The promises are currently dangling, which is usually weird, but in this case, the promises will never reject. If parser might not be able to produce data for a given message, and it exposes an API for that (an error event, maybe?), it would be good to listen for that, so errors can be properly handled instead of ignored.
Note that the above approach calls port.write twice at once in a given iteration (once for each message), instead of waiting for the prior message Promise to resolve first - this will speed up your program a bit, since you're now waiting in parallel, not in serial. If you wanted to do something when both messages are received, you'd use Promise.all and pass into it an array of the Promises instead of using for..of. (If you have to wait in serial, you can await the resolution of each Promise inside the loop)
Other things I noticed:
for..await is only for async iterators
While you're technically allowed to do
for await (const m of [serialMessage, batteryMessage]) {
it doesn't make sense to use for await, since the array there is just a plain array - it's not something that implements the async iterator interface. It'd be more appropriate to use just for (const m ...
Variable names
m isn't so great as a variable name. message is significantly more informative.
Only use async when you need to return a Promise
You do:
SerialPort.list()
.then(async portInfos => {
portInfos.filter(pinfo => pinfo.manufacturer === 'FTDI')
.forEach(async portInfo => {
The .then's callback does not use await directly inside of it, nor does it need to return a Promise, so there's no need for the async keyword - it's just potentially confusing noise. | {
"domain": "codereview.stackexchange",
"id": 39545,
"tags": "node.js, async-await, promise"
} |
Can we directly measure vectors' quantities? | Question: Can we perform some kind of experiment that will give us, for example, the $p_x$, $p_y$ and $p_z$ of a particle in a single measurement?
I'm aware that they commute so one measurement will not disturb the others, but I want to know if it is possible to obtain all three components with one single measurement.
The real question I'm trying to get my head around is:
Can we directly measure anything that is NOT a scalar after all?
Answer: Yes, this is possible depending on your definition of a single measurement. I have a contrived example to illustrate.
An object is initially in the center of a spherical shell that has multiple sensors on its inner surface. The object then moves in an unknown direction $\hat n$ with uniform motion. From the single measurement of the single sensor that the object hit, we can deduce $\hat n$.
I suppose there are a few subtleties here. The thing here actually being measured is the electrical signal from the sensor, which you could model as a scalar. But the information one can deduce from that measurement is a vector. Furthermore, it's really the single signal from the hit detector combined with all of the other non-signals that helps us deduce the information. So one could argue this isn't really a single measurement. | {
"domain": "physics.stackexchange",
"id": 14904,
"tags": "measurements, vectors"
} |
Is a three level pumping scheme possible in a LASER only when exactly three energy states are present in it? | Question: I'm brushing up my concepts on LASER's and I was just curious about this. Does a three level pumping scheme necessarily imply that exactly three energy states have to be present? Or can more or or less energy states can be present as well? I know that a four level pumping scheme is more efficient but this is just a question that popped up in my mind. I'm sorry if I got my concepts wrong in some place and would really appreciate it if someone could explain the rationale behind this to clear my confusion. Sorry for taking up your valuable time!
Answer: You will need at least three states for lasing to occur (with reasonable probability) but that does not mean that other states cannot be present. If the pump frequencies are not tuned to reach these other states then they will not be occupied with any appreciable probability and hence will not influence the lasing process. | {
"domain": "physics.stackexchange",
"id": 67836,
"tags": "laser"
} |
I need to find the mechanism of a reaction | Question: As in the title, I have a reaction to witch I need to give the mechanism.
Answer:
The image shows the complete mechanism | {
"domain": "chemistry.stackexchange",
"id": 15536,
"tags": "reaction-mechanism"
} |
Is a non-accelerating object far from a gravity source moving at the speed of light through time? | Question: I'm trying to understand the Minkowski spacetime better.
If an object is not undergoing acceleration, and is far from any large mass, does it travel maximally "fast" through time? Can we calculate a 'speed' of some sort? Put another way, is there some 'Minkowski unit' that is conserved as one accelerates?
Does the question as currently formulated even make sense?
EDIT: Just adding a few clarifications.
1. The object can be considered 'at rest' (or at least moving at a very slow non-relativistic constant speed) compared to (a set of) very distant galaxies.
2. Imagine the spacetime hyperplane intersecting Earth's current spacetime. I know it's outside our lightcone and all, but imagine you had big-bang-triggered clocks every here and there. Would there be a zone of spacetime where the clocks had reached a maximum time? Would that be (significantly: minutes, days, years) different from Earth's estimation of time since the Big Bang?
Answer: The speed that we use in relativity is the four-velocity. Just as ordinary velocity is a 3D vector that describes the rate of change of your position in space with time, the four-velocity is a 4D vector that describes the rate of change of your position in spacetime with proper time. This four-velocity is an invariant and all observers in all inertial frames will agree on its value.
Proper time is a concept that doesn't exist in non-relativistic physics. You'll be familiar with the fact that clocks on fast moving spaceships run slow due to time dilation, and this makes defining time a bit tricky. Your proper time is the time shown on a clock you are carrying i.e. the time shown by a clock that is stationary relative to you. In special relativity the proper time is an invariant and all observers in all frames will agree on its value. This makes it a very useful concept, and if you study relativity you'll run into proper time all over the place.
Anyhow, in 3D space where your position is given by a vector $(x, y, z)$, the velocity is given by:
$$ \vec{v} = \left(\frac{dx}{dt}, \frac{dy}{dt}, \frac{dz}{dt}\right) $$
In 4D spacetime we choose an inertial frame using coordinates $x$, $y$, $z$ and $t$, and the position some object is then given by the four-vector $(t, x, y, z)$. The four-velocity is then given by:
$$ \vec{U} = \left(\frac{cdt}{d\tau}, \frac{dx}{d\tau}, \frac{dy}{d\tau}, \frac{dz}{d\tau}\right) $$
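As a numerical aside (not part of the original answer), one can check that the Minkowski norm of the four-velocity defined above is always $c$, regardless of the 3-velocity. A sketch in natural units:

```python
import math

c = 1.0  # natural units, c = 1

def four_velocity(v):
    """Four-velocity of a particle moving along x at coordinate speed v."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return (gamma * c, gamma * v, 0.0, 0.0)

for v in (0.0, 0.5, 0.99):
    U = four_velocity(v)
    # Minkowski norm with signature (+, -, -, -)
    norm = math.sqrt(U[0] ** 2 - U[1] ** 2 - U[2] ** 2 - U[3] ** 2)
    print(v, round(norm, 12))  # always 1.0, i.e. c
```

Algebraically this is just $\gamma^2 c^2 - \gamma^2 v^2 = \gamma^2(c^2 - v^2) = c^2$: faster motion through space trades off against slower motion through time, keeping the norm fixed at $c$.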
There are a few interesting things to note about the four-velocity. Suppose you are measuring your four-velocity in your own inertial frame. In this frame you are stationary so $dx = dy = dz = 0$, and the time $t$ is the same as the proper time $\tau$ (because that's how proper time is defined). So your four-velocity is:
$$ \vec{U} = \left(\frac{cd\tau}{d\tau}, 0, 0, 0\right) = \left(c, 0, 0, 0\right) $$
So even when you're standing still your four-velocity is $c$. That's because even if you're not moving in space you are moving in time, and in fact you're moving through time at the speed of light. | {
"domain": "physics.stackexchange",
"id": 19003,
"tags": "special-relativity, spacetime, time"
} |
Method for populating ListViewSubItems | Question: This is my current method for populating the sub-items of ListView controls. I like this method for two reasons...
1) I don't have to keep up with the display order of the columns.
2) If I enable column re-ordering, I don't have to change anything with the code.
So, the question is... Is this a good approach? Can it be improved?
Note: I'm not too happy about declaring Result As Object. It seems like there should be a better way to handle that, but it's the only way I could get it to work.
Private Function RetrieveItem(Of T)(ByVal displayIndex As Integer) As T
Dim Result As Object = If(GetType(T) Is GetType(ListViewItem), New ListViewItem, New ListViewItem.ListViewSubItem)
Select Case displayIndex
Case ColumnHeader1.DisplayIndex
Result.Text = "First Item"
Result.Tag = "First"
Case (ColumnHeader2.DisplayIndex)
Result.Text = "Second Column"
Result.Tag = "Second"
Case ColumnHeader3.DisplayIndex
Result.Text = "Third Column"
Result.Tag = "Third"
Case ColumnHeader4.DisplayIndex
Result.Text = "Fourth Column"
Result.Tag = "Fourth"
End Select
Return Result
End Function
Example Usage...
Dim item As ListViewItem = RetrieveItem(Of ListViewItem)(0)
For i As Integer = 1 To ListView1.Columns.Count - 1
item.SubItems.Add(RetrieveItem(Of ListViewItem.ListViewSubItem)(i))
Next
ListView1.Items.Add(item)
Here is the example I came up with using @MarkHurd's suggestion of using a Widening CType Operator...
Private Class LVI
Public Name As String
Public Text As String
Public Tag As Object
Public Sub New(ByVal name As String, ByVal text As String, ByVal tag As Object)
Me.Name = name
Me.Text = text
Me.Tag = tag
End Sub
Public Shared Widening Operator CType(ByVal item As LVI) As ListViewItem
Dim Result As New ListViewItem(item.Text)
Result.Name = item.Name
Result.Tag = item.Tag
Return Result
End Operator
Public Shared Widening Operator CType(ByVal item As LVI) As ListViewItem.ListViewSubItem
Dim Result As New ListViewItem.ListViewSubItem
Result.Text = item.Text
Result.Name = item.Name
Result.Tag = item.Tag
Return Result
End Operator
End Class
Private Function RetrieveItem(ByVal index As Integer) As LVI
Select Case index
Case ColumnHeader1.DisplayIndex : Return New LVI("1", "First Column", "one")
Case ColumnHeader2.DisplayIndex : Return New LVI("2", "Second Column", "two")
Case ColumnHeader3.DisplayIndex : Return New LVI("3", "Third Column", "three")
Case ColumnHeader4.DisplayIndex : Return New LVI("4", "Fourth Column", "four")
Case Else : Return Nothing
End Select
End Function
Example usage...
Dim item As ListViewItem = RetrieveItem(0)
For i As Integer = 1 To ListView1.Columns.Count - 1
item.SubItems.Add(RetrieveItem(i))
Next
ListView1.Items.Add(item)
I like both of these approaches, but I feel like the first is shorter and easier to implement, so I lean towards the first option.
Answer: If you want to avoid the late bound .Text and .Tag you could just create your own private type, say LVI, containing these two properties and implicit Widening CType operators for ListViewItem and ListViewItem.ListViewSubItem. Then the Result As New LVI can be converted on return using Return CType(CTypeDynamic(Result, GetType(T)), T) in the latest VB.NET.
Without CTypeDynamic I don't have a working solution yet. | {
"domain": "codereview.stackexchange",
"id": 2124,
"tags": "vb.net"
} |
Predicted and true values distributions comparison | Question: Is this alarming when a distribution of predicted values differs from a distribution of true values? I use xgbregressor and get the following plots
Usage of boxcox doesn't improve the case.
My data is spatio-temporal. I am making a cash-flow forecast for a city, and time is treated as 12 features corresponding to months that I feed to xgb.
The figure shows data for one year.
Answer: I don't know the method you're using but I suspect that what you observe here is a common problem with supervised learning: models tend to favour predictions close to the mean, that is avoid extreme predictions because these are usually more risky (higher loss if it's a mistake). As a consequence the std deviation of the predictions is often significantly smaller than the s.d. of the ground truth.
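This shrinkage is easy to reproduce even with plain least squares; the synthetic data below is invented for illustration and has nothing to do with the asker's dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=(n, 1))
y = 2.0 * x[:, 0] + rng.normal(size=n)  # signal plus irreducible noise

# ordinary least squares fit: predictions can only capture the signal part
X = np.hstack([x, np.ones((n, 1))])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef

# the predictions are systematically less spread out than the targets
print(round(float(np.std(y)), 2), round(float(np.std(pred)), 2))
```

Because the model cannot (and should not) predict the noise component, the standard deviation of its predictions is necessarily smaller than that of the ground truth, which is exactly the narrowing visible in the plots.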
Afaik there's no perfect solution. Typically you could try to encourage risky predictions a bit more in the loss function, if that's an option with your method. But in most applications it's safer to learn to live with this issue. | {
"domain": "datascience.stackexchange",
"id": 6079,
"tags": "time-series, xgboost, distribution"
} |
What is the difference between search and learning? | Question: I came across an article, The Bitter Truth, via the Two Minute Papers YouTube Channel. Rich Sutton says...
One thing that should be learned from the bitter lesson is the great power of general purpose methods, of methods that continue to scale with increased computation even as the available computation becomes very great. The two methods that seem to scale arbitrarily in this way are search and learning.
What is the difference between search and learning here? My understanding is that learning is a form of search -- where we iteratively search for some representation of data that minimizes a loss function in the context of deep learning.
Answer: In the context of AI:
Search refers to Simon & Newell's General Problem Solver, and it's many (many) descendant algorithms. These algorithms take the form:
a. Represent a current state of some part of the world as a vertex in a graph.
b. Represent, connected to the current state by edges, all states of the world that could be reached from the current state by changing the world with a single action, and represent all subsequent states in the same manner.
c. Algorithmically find a sequence of actions that leads from a current state to some more desired goal state, by walking around on this graph.
An example of an application that uses search is Google Maps. Another is Google Flights.
Learning refers to any algorithm that refines a belief about the world through the exposure to experiences or to examples of others' experiences. Learning algorithms do not have a clear parent, as they were developed separately in many different subfields or disciplines. A reasonable taxonomy is the 5 tribes model. Some learning algorithms actually use search within themselves to figure out how to change their beliefs in response to new experiences!
An example of a learning algorithm used today is Q-learning, which is part of the more general family of reinforcement learning algorithms. Q-learning works like this:
a. The learning program (usually called the agent) is given a representation of the current state of the world, and a list of actions that it could choose to perform.
b. If the agent has not seen this state of the world before, it assigns a random number to the reward it expects to get for performing each action. It stores this number as $Q(s,a)$, its guess at the quality of performing action $a$ in-state $s$.
c. The agent looks at $Q(s,a)$ for each action it could perform. It picks the best action with some probability $\epsilon$ and otherwise acts randomly.
d. The action of the agent causes the world to change and may result in the agent receiving a reward from the environment. The agent makes a note of whether it got a reward (and how much the reward was), and what the new state of the world is like. It then adjusts its belief about the quality of performing the action it performed in the state it used to be in, so that its belief about the quality of that action is closer to the reality of the reward it got, and the quality of where it ended up.
e. The agent repeats steps b-d forever. Over time, its beliefs about the quality of different state/action pairs will converge to match reality more and more closely.
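The loop in steps b-d can be condensed into a small tabular Q-learning sketch. The corridor environment, the constants, and the 500-episode budget below are all invented for illustration, not taken from the answer:

```python
import random

random.seed(0)

# Toy corridor: states 0..3, actions 0 = left, 1 = right.
# Reaching state 3 yields reward 1 and ends the episode.
N_STATES, GOAL = 4, 3
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

# step b: unseen state/action pairs start with a default quality guess
Q = {(s, a): 0.0 for s in range(N_STATES) for a in (0, 1)}

for _ in range(500):                      # episodes
    s, done = 0, False
    while not done:
        # step c: epsilon-greedy choice between exploring and exploiting
        if random.random() < EPSILON:
            a = random.choice((0, 1))
        else:
            a = max((0, 1), key=lambda act: Q[(s, act)])
        nxt, r, done = step(s, a)
        # step d: nudge Q(s,a) toward reward + discounted best future value
        target = r + (0.0 if done else GAMMA * max(Q[(nxt, act)] for act in (0, 1)))
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = nxt

# greedy policy after learning: move right in every non-goal state
policy = [max((0, 1), key=lambda act: Q[(s, act)]) for s in range(GOAL)]
print(policy)  # [1, 1, 1]
```

Over the episodes the reward at the goal propagates backwards through the table, so the agent's beliefs about each state/action pair converge toward reality, exactly as step e describes.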
An example of an application that uses learning is AI.SE's recommendations, which are made by a program that likely analyzes the relationships between different combinations of words in pairs of posts, and the likelihood that someone will click on them. Every time someone clicks on them, it learns something about whether listing a post as related is a good idea or not. Facebook's feed is another everyday example. | {
"domain": "ai.stackexchange",
"id": 1154,
"tags": "machine-learning, deep-learning, comparison, philosophy, search"
} |
Are nucleophilic aromatic substitutions stepwise or concerted? | Question: Recently, there has been a lot of discussion about nucleophilic aromatic substitution (SNAr) reactions apparently being concerted instead of stepwise (e.g. Krämer, K. Textbook aromatic substitution mechanism overthrown. Chemistry World, 23 July 2018):
This is in contrast to the usual mechanism taught at an undergraduate level, where the anionic Meisenheimer complex is considered to be an intermediate:
What is the current consensus on the mechanism, and what evidence is there for it?
Answer:
TL;DR Most SNAr reactions proceed via concerted mechanisms without a discrete anionic intermediate. This is typically the case when there is a good leaving group (e.g. chloride or bromide) and if the ring is not extremely electron-poor.
Over the years, there has been increasing evidence of concerted nucleophilic substitutions at sp2-hybridised carbons. One notable example was published by Neumann et al. in 2016, who studied the mechanism by which PhenoFluor (2) converted phenols to aryl fluorides:[1]
For the conversion of [1,1′-biphenyl]-4-ol to 4-fluoro-1,1′-biphenyl (i.e. the above reaction with a phenyl substituent at the para position), a 16O/18O kinetic isotope effect of 1.08 ± 0.02 was observed. This implies that Ar–O bond breaking takes place in the rate-determining step, which is inconsistent with the traditional stepwise mechanism. The concerted mechanism shown above was further supported by DFT studies which revealed a transition state 4 with partial C–O and C–F bonds.
Building on this, Kwan et al. went one step further to show that many SNAr reactions were indeed concerted.[2] In particular, for three examples, the authors used 13C/12C kinetic isotope effects to evaluate the mechanism. These KIEs were determined by integrating the 13C satellites in 19F NMR spectra (an adaptation of Singleton's method[3] which improves sensitivity).
The presence of a (primary) 13C/12C KIE suggests that the C–X bond is being broken in the transition state. However, the KIE itself does not a priori differentiate between stepwise and concerted mechanisms, since in both pathways the respective rate-determining steps involve weakening of bonds in the transition state. In order to identify the mechanism, the authors compared the experimental KIE to a calculated "maximum" KIE. Compounds with stronger bonds (such as C–F in 6) have more vibrational zero-point energy to lose in the TS, and hence have larger maximum KIEs (1.070 for reaction 1).
For a concerted mechanism which "alters both bonds simultaneously", the ratio $\mathrm{{KIE}_{exp}/{KIE}_{max}}$ is expected to be large; and for a stepwise mechanism which "alters one bond at a time", $\mathrm{{KIE}_{exp}/{KIE}_{max}}$ is small. [The paper does not go into detail about exactly why this is so; I suggest reading pp 225 onwards of an article by Westaway[4] which explains this in the context of SN2 reactions.]
This allows the three reactions above to be classified into three different categories:
a stepwise reaction (6 to 7) with a discrete anionic intermediate. This intermediate is heavily stabilised by the electron-withdrawing nitro groups, and also because fluoride ion is a poor leaving group;
a concerted reaction (8 to 9) with the Meisenheimer complex as the transition state. Here, the presence of a good leaving group (chloride) means that the Meisenheimer complex is not sufficiently stable to be an intermediate;
and a borderline case (10 to 11) where there is formally no intermediate but a shallow region in the potential energy surface near the TS.
For the concerted pathway, DFT calculations revealed that the transition state was essentially a Meisenheimer complex: the ring bears a significant amount of negative charge, is no longer aromatic (as indicated by NICS values), and NBO analysis shows that the electron density in the TS is mostly described by the Lewis structure of the Meisenheimer complex. Consequently, even though many "textbook" SNAr reactions are in fact concerted and not stepwise, the same factors that stabilise the previously purported intermediate (electron-withdrawing groups at 2- and 4-positions, and heteroatoms in the ring) also stabilise the transition state and facilitate the reaction.
Finally, a range of SNAr reactions on different carbocyclic and heterocyclic rings were then surveyed by DFT. It turns out that, at least within the scope of the reactions investigated:
All SNAr reactions on nitrogen-containing heterocycles (2-halopyridines, 2-halopyrazines, and 2-halopyrimidines) proceed in a concerted fashion, regardless of leaving group or nucleophile.
Likewise, the theoretical SNAr reactions on unsubstituted benzene are concerted.
For electron-deficient benzene rings (with 4-nitro, 2,4-dinitro, 2,4,6-trinitro, or 4-acetyl groups), SNAr reactions with fluoride as the leaving group are stepwise, but chlorides and bromides mostly react in a concerted manner.
References
Neumann, C. N.; Hooker, J. M.; Ritter, T. Concerted nucleophilic aromatic substitution with 19F− and 18F−. Nature 2016, 534 (7607), 369–373. DOI: 10.1038/nature17667.
Kwan, E. E.; Zeng, Y.; Besser, H. A.; Jacobsen, E. N. Concerted nucleophilic aromatic substitutions. Nat. Chem. 2018, 10 (9), 917–923. DOI: 10.1038/s41557-018-0079-7.
Singleton, D. A.; Thomas, A. A. High-Precision Simultaneous Determination of Multiple Small Kinetic Isotope Effects at Natural Abundance. J. Am. Chem. Soc. 1995, 117 (36), 9357–9358. DOI: 10.1021/ja00141a030.
Westaway, K. C. Using Kinetic Isotope Effects to Determine the Structure of the Transition States Of SN2 Reactions. Adv. Phys. Org. Chem. 2006, 41, 217–273. DOI: 10.1016/S0065-3160(06)41004-2. | {
"domain": "chemistry.stackexchange",
"id": 11065,
"tags": "organic-chemistry, reaction-mechanism, aromatic-compounds, heterocyclic-compounds"
} |
hector_imu_attitude_to_tf vs hector_localization | Question:
Hi, I'm using an IMU and a laser scanner and would like to create a planar map, but estimate the pose with 6DoF, for example for handheld mapping.
As far as I can see, it is not possible to estimate the height (z-axis value) with the available sensors, hence 5DoF remain.
My question now is: According to the hector_mapping tutorial (here), it is possible to calculate the roll and pitch angles and broadcast the resulting transform between base_stabilized and base_link with hector_imu_attitude_to_tf.
On the other hand, the package hector_pose_estimation within hector_localization apparently takes the pose estimate from hector_mapping on /poseupdate, fuses it with the IMU data and publishes the 6DoF pose.
So, are there any advantages/differences when taking hector_pose_estimation instead of hector_imu_attitude_to_tf for example for performing 2D handheld SLAM, or is it just the same?
Is hector_pose_estimation maybe capable of doing something else that I miss (looking at the source code it seems to be pretty complex)?
Thanks,
Philipp
Originally posted by pschroeppel on ROS Answers with karma: 1 on 2017-03-09
Post score: 0
Answer:
hi
can you do this?
I do this: I fuse the IMU and Z into the localization, but when I fuse /poseupdate from hector_mapping, the localization starts jumping around...
Originally posted by Morid_92 with karma: 16 on 2017-03-14
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Morid_92 on 2017-03-15:
you don't need hector_imu_attitude_to_tf, because you can use the /imu_raw topic in hector_pose_estimation and fuse the IMU data into tf
and
you can fuse the height by publishing the z value on the /height topic in hector_pose_estimation
"domain": "robotics.stackexchange",
"id": 27265,
"tags": "slam, imu, navigation, hector-localization, hector-mapping"
} |
measurement probability from density operator? | Question: I've been through this before but I can't fully get my head round this upon review.
So the density operator $\hat{\rho}=\sum_j p_j|\psi_j\rangle\!\langle \psi_{j}|$ for pure states $|\psi_{j}\rangle$ at probabilities $p_j$. Suppose we wanted to measure a non-degenerate operator, $\hat{A}=\sum_{l}\lambda_{l}|a_{l}\rangle\langle a_{l}|$ for eigenvalues $\lambda_{l}$ associated to eigenstates $|a_{l}\rangle$. The probability of measuring $\lambda_{l}$ is as follows:
$$P(\text{measured value}=\lambda_{l})=\sum_{j}p_{j}|\langle\psi_{j}|a_{l}\rangle|^{2}=\sum_{j}p_{j}\langle\psi_{j}|a_{l}\rangle\langle a_{l}|\psi_{j}\rangle$$
We simplify this expression to
$$\operatorname{Tr}(\hat{\rho}|a_{l}\rangle\langle a_{l}|)$$
However when I multiply out this second expression I get:
$$\operatorname{Tr}\left(\sum_{j}p_{j}\langle\psi_{j}|a_{l}\rangle|\psi_{j}\rangle\langle a_{l}|\right)$$
$$=\sum_{j}p_{j}\langle\psi_{j}|a_{l}\rangle\langle\psi_{j}|a_{l}\rangle$$
Which will not be the same as the first expression unless $\langle\psi_{j}|a_{l}\rangle$ is real. This number is not necessarily real, for example if $|\psi_{j}\rangle=|0\rangle+i|1\rangle$ and $|a_{l}\rangle=|1\rangle$. I have a feeling that I've made some stupid error somewhere, can anyone see where? Also sorry for the bra/ket formatting, I couldn't get the latex package to work.
Answer: You're doing the trace wrong :)
Let's look at it explicitly.
\begin{align}
{\rm Tr}\Big(\sum_j p_j\langle\psi_j\vert a_l\rangle\vert\psi_j\rangle\langle a_l\vert\Big) &= \sum_i\bigg\langle a_i\bigg\vert \Big(\sum_j p_j \langle \psi_j\vert a_l\rangle\vert \psi_j\rangle\langle a_l\vert\Big)\bigg\vert a_i\bigg\rangle \\
&=\sum_j p_j\langle\psi_j \vert a_l\rangle\sum_i\langle a_i\vert \psi_j\rangle\langle a_l\vert a_i\rangle \\
&= \sum_j p_j\langle\psi_j\vert a_l\rangle \sum_i \langle a_i\vert \psi_j\rangle\delta_{li} \\
&= \sum_j p_j\langle\psi_j\vert a_l\rangle\langle a_l\vert \psi_j\rangle \\
&= \sum_j p_j\vert \langle \psi_j\vert a_l\rangle\vert^2
\end{align}
which is the expected result. We used the fact that the operator that you were trying to measure had a non-degenerate spectrum. More generally, you'd use the projection operators onto the distinct eigensubspaces of an operator, however, you can perform the same calculation because these projection operators would also be complete. | {
"domain": "quantumcomputing.stackexchange",
"id": 4034,
"tags": "textbook-and-exercises, density-matrix, linear-algebra"
} |
Python script to identify regex matches in all subdirs and write dict of matches to MongoDB | Question: I've been attempting to teach myself Python over the past couple of months and have been setting myself practical challenges to learn different aspects of the language.
I have a deep structure of subdirectories containing .rtf files. The objectives of the following code are as follows:
Examine every file in every subdirectory to identify "flagged words" defined in expression.
For each instance a flagged word is identified in a file, capture the file name and the line within that file in which the flagged word appears.
Construct a dictionary, in which flagged words are keys and filenames and lines are stored as values.
Insert each key/value pair in the dictionary into a MongoDB collection
The code is as follows:
from pymongo import MongoClient
import os
import re
import json
# connect to mongodb
client = MongoClient('localhost', 27017)
db = client['test-database']
restrictions = db['restrictions']
# create empty dictionary
d = {}
# setup regular expressions
expression = "sexual offences|reporting restriction|reporting restrictions|anonymous|anonymously|secret"
pattern = re.compile(expression)
# read the source files
for dname, dirs, files in os.walk("path/to/top/level/directory"):
for fname in files:
fpath = os.path.join(dname, fname)
with open(fpath, 'r') as f:
source = f.readlines()
for i, line in enumerate(source):
for match in re.finditer(pattern, line):
# set the matched flagword as the key and the filename and line as the value
d.setdefault(match.group(0), [])
file_instance = fname, line
print file_instance
d[match.group(0)].append(file_instance)
# Update the restrictions database
for key, value in d.iteritems():
xref_id = db.restrictions.insert_one({'flagged_word': key, 'instance': value})
An example document in MongoDB generated by this code:
{
"_id" : ObjectId("5925e0d94fb263cb1417bb73"),
"flagged_word" : "restriction",
"instance" : [
[
"Family3.txt ",
"ii)\tAn application for a reporting restriction order (\"reporting restriction order\") to restrict or prohibit the publication of:\n"
]
]
}
The code works as expected, but I have a nagging feeling that the same objectives could be achieved with a far less clunky approach in the code (particularly where all of the for loops are concerned).
In the spirit of wishing to learn how to write clearer, more Pythonic code, I wonder if anyone could offer some advice on how this code could be improved. Many thanks in advance for taking the time to read my code.
Answer:
f.readlines() is redundant. Just iterate over the file: for i, line in enumerate(f):, or, as you are not using i at all: for line in f:
d.setdefault becomes redundant if you use d = collections.defaultdict(list) | {
"domain": "codereview.stackexchange",
"id": 25817,
"tags": "python, regex, mongodb"
} |
What is the name of this height-adjustment structure? | Question: I'd want to build a scale which can be "locked" in order to prevent weight from being pressed in it.
The scale and the weight are permanently fixed on a vehicle and
I want to make sure it doesn't break under excessive force during travel.
What I have in mind is something like the schema below. Does it have a name?
Or are there other, more common ways to achieve this? I am looking at something that can be done quickly (a screw would take too long, and I'd need 4 of them).
If it matters, I need about a 5mm displacement, and a supported weight of 200kg
Answer: Maybe I'm not understanding what you have in mind but your drawing looks really complicated with sliding rollers in captive grooves and such. I also imagine the lateral movement when weighing something is undesired. I would just try to shove something underneath.
Simple methods are vertical guide rails that can be pinned. Or a swinging beam with a radiused top or eccentric cam that can be flipped up and locked/pinned into place.
But in my opinion, your thought about screws is the best. By far the most secure and with the greatest leverage. Use one in the form of a screw jack and use a multi-start thread to make it faster. That will solve the speed issue. You'll need to find someone with a lathe to single-point thread you some, which means instead of screws you might as well get them to just make two big screw jacks with hand crank wheels, instead of four smaller, regular screws.
Or if you can't find someone with a lathe, then perhaps find multi-start/fast travel threaded rod or lead screws. Typically used for linear motion.
If your plate was being guided on vertical tight rails then you only need one screw jack if it is big and strong enough since the rails would eliminate tilting. | {
"domain": "engineering.stackexchange",
"id": 4517,
"tags": "mechanical-engineering"
} |
What is an example of an electron acting as a particle? | Question: I'm aware that, like all quantum objects (I think), an electron can act as both a wave and a particle. Electron diffraction is a good example of how an electron can act as a wave, but I'm struggling to come up with an example of how it acts as a particle.
Any help would be much appreciated!
Answer: In an old-school TV picture tube, electrons were shot into one end of it, accelerated, steered into specific directions, and then collided with a thin phosphor coating on the inside of the picture end of the tube. Each collision created a burst of light, building up a visible picture for the TV watchers.
This process is well-modeled by envisioning the electrons as particles.
The original SLAC particle accelerator can be thought of as a machine for adding huge amounts of energy to electrons by shooting them down an evacuated tube two miles long, in which the electrons were separated into individual bunches and then made to surf on the crests of an intense microwave beam traveling down the same tube.
This process is also well-modeled by envisioning the electrons as particles. | {
"domain": "physics.stackexchange",
"id": 77548,
"tags": "particle-physics, experimental-physics, electrons, wave-particle-duality"
} |
Removal of jQuery from script to auto-apply coupons from a link | Question: We are using a small extension that auto-applies coupons from a link.
The extension had pop-up JS based on jQuery:
<script>
jQuery.noConflict();
jQuery(function() {
var appendthis = ("<div class='modal-overlay js-modal-close'></div>");
jQuery(document).ready(function(e) {
//e.preventDefault();
jQuery("body").append(appendthis);
jQuery(".modal-overlay").fadeTo(500, 0.7);
//$(".js-modalbox").fadeIn(500);
var modalBox = 'popup1';
jQuery('#' + modalBox).fadeIn(jQuery(this).data());
});
jQuery(".js-modal-close, .modal-overlay").click(function() {
jQuery(".modal-box, .modal-overlay").fadeOut(500, function() {
jQuery(".modal-overlay").remove();
});
});
jQuery(window).resize(function() {
jQuery(".modal-box").css({
top: (jQuery(window).height() - jQuery(".modal-box").outerHeight()) / 2,
left: (jQuery(window).width() - jQuery(".modal-box").outerWidth()) / 2
});
});
jQuery(window).resize();
});
</script>
This was on our highest-traffic page, so in order to increase page speed we were removing unnecessary libraries, and jQuery was one of them. We therefore needed to turn this code into pure JS.
The dev commented:
As far as I can see, this script performs a very simple task in a very
complicated way.
First: there is no need to use the fadeIn/fadeOut methods in this case.
These methods freeze the browser at load time. We need to use CSS
transitions and just animate opacity.
Second: there is no need to change top and left params during resize.
We can use CSS and make the browser do it for us.
He did small changes to CSS of the extension, and final pure JS equivalent is:
<script>
window.onload = function() {
document.getElementById("popup-wrapper").className += " modal-overlay_visible";
}
function closeMethod() {
document.getElementById("popup-wrapper").className += " modal-overlay_hidden";
}
function DOMready() {
var closeElements = document.getElementsByClassName("js-modal-close");
Array.from(closeElements).forEach(function(element) {
element.addEventListener('click', closeMethod);
});
}
document.addEventListener("DOMContentLoaded", DOMready);
</script>
My response to the comments asking why both load and DOMContentLoaded are used:
Yes, there is a reason to use DOMContentLoaded.
Until the DOM is built, we cannot add events to the elements; they
simply do not exist.
For example, if I ran
document.getElementsByClassName("modal-overlay__close") in the global
scope, this instruction would return []
Also, we cannot show the popup until all the popup styles and the popup
picture are loaded.
That's why I use the window load event.
Answer: As it stands, in most ways the code is now much better than the original. Your senior dev is absolutely correct in wanting to use CSS for the modal size. Despite this, there are a few suggestions I would make.
Instead of using window.onload use window.addEventListener("load", function(){});. If window.onload is used, then the function activating the modal could be overwritten by accident in the future.
There is no point in including Method in closeMethod. I'm sure that Method was added when the dev realized that just naming the function close hid the window.close function. closeModal would be a more descriptive name that also avoids this problem.
Is there any reason that both load and DOMContentLoaded events are listened for? These events should fire incredibly close to each other, and the code can likely be combined into a single initialization method to reduce complexity.
The ID of the popup-wrapper is overdefined. More than one method has it hardcoded in. If it were to be changed, it would be fairly easy to forget to change one or more occurrences. To fix this, it might be best to wrap everything in an immediately invoked function expression (IIFE) and pass the ID in. Using an IIFE also makes it possible to trivially avoid any troubles with global scope.
Instead of appending to className, use the classList API as this also makes removing the modal-overlay_visible class trivial. It also might make it possible to remove the modal-overlay_hidden class.
The DOMready function is exposed to the global scope. This isn't a huge issue on small pages, but should still be avoided to increase maintainability.
Lastly, consistency is a good thing. A style guide can make it easier to read code by ensuring everyone sticks to using double quotes (or single quotes) and uses the same CSS class patterns (I expected js-modal-close to be modal-overlay_close)
I could provide several different ways of writing this module. Which way is best heavily depends on your situation and is a judgement I can't make. | {
"domain": "codereview.stackexchange",
"id": 26462,
"tags": "javascript, jquery, e-commerce"
} |
How does "be altruist to those who are similar to you" evolve? | Question: There are many cases in which people commit altruism. One is kinship: "I am willing to die for 2 of my children or 8 nieces," says the evolutionary psychologist. Another is reciprocal altruism, which is just selfish cooperation rather than true altruism.
In the 1930s J.B.S. Haldane had full grasp of the basic quantities and
considerations that play a role in kin selection. He famously said
that, "I would lay down my life for two brothers or eight cousins".[8]
Kin altruism is the term for altruistic behaviour whose evolution is
supposed to have been driven by kin selection.
Is it possible that humans may be altruists not toward their 2 children or 8 nieces, but toward, say, 1000 people who are similar to them?
If so, how does "Be altruist to 1000 people similar to you" evolve?
Note: If the answer is negative, then we will have a hard time understanding voting. Why sacrifice your time? However, if the answer is positive, that humans do love other humans "a little bit", then voting, or even suicide bombing, makes sense. Those humans sacrifice a little of their time to improve the reproductive success of those who are similar to them.
The answer DOES NOT have to be about humans. If you can show me why strong/weak chimps tend to help each other, or whether alpha males like one another, that'll do. Of course, naturally we would expect that leaders need followers and followers need leaders. But sometimes there are behaviours that defy even this complementary nature.
Similarity does not have to be genetic. For example, fellow programmers and fellow engineers tend to gang up together. Or does it have to be genetic?
Answer: The above answers are good, but have unfortunately confused some of the concepts in the theory; I will do my best to explain.
What we are assessing is how a behaviour evolves; links with cooperation and altruism are applications of this. The process of selection in evolution removes the worst individuals from the gene pool, so those with comparatively good genes survive to reproduce. With physical traits these mechanisms are well understood; with behaviour there are still multiple theories which have not been settled, but our understanding of them is reasonably complete.
Inclusive fitness is the combination of direct and indirect fitness, direct being your personal fitness, indirect being fitness gained from others. In general this is measured in terms of reproductive success (RS) (the number of offspring you successfully raise to reproductive maturity). Out of this understanding came Hamilton's Rule:
r · b > c
Where r is the relatedness, b is the benefit and c is the cost. In this scenario the cost is a reduction in RS and the benefit is a gain in RS; benefits are multiplied by the relatedness to the offspring. In normal diploid organisms (which get DNA from both parents) you are related to your own offspring by 0.5 (half from the mother, half from the father), but only related to your brother's or sister's offspring by 0.25. Thus, if you have two children you have a direct fitness of 1, while if your brother has two children you gain an indirect fitness of only 0.5. So under normal circumstances you would prefer to have your own children rather than raise your brother's.
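The relatedness arithmetic above plugs straight into Hamilton's Rule; here is a minimal sketch (the function name and the example numbers are mine, purely for illustration):

```python
def altruism_favored(r, b, c):
    """Hamilton's Rule: altruism can be selected for when r * b > c.

    r: relatedness to the beneficiary
    b: benefit to the beneficiary (in units of reproductive success)
    c: cost to the altruist (same units)
    """
    return r * b > c

# Helping a full sibling (r = 0.5) at a cost of 1 offspring pays off
# only if the sibling gains more than 2 offspring from the help:
print(altruism_favored(0.5, 3, 1))  # True  (0.5 * 3 = 1.5 > 1)
print(altruism_favored(0.5, 2, 1))  # False (0.5 * 2 = 1.0, not > 1)

# Haldane's "two brothers or eight cousins" is exactly the break-even
# point: 2 * 0.5 == 8 * 0.125 == 1.0.
```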
Altruism is suffering a cost for someone else's benefit. @Winawer mentions the greenbeard theory, which rests on the assumption that if you can tell someone has the same genes as you, then you know you are related to them in some way, and this relatedness will on average be greater than to any other individual. Altruism is a behaviour, defined above, and not a specific action like giving up your own reproduction; thus it is a part of cooperation and not separate from it.
It is true that there are different forms of altruism, however these are all just perspectives on the same theory. ALL behaviour is selfish; reciprocal altruism is just an example of this. Reciprocal altruism is "If you scratch my back, I'll scratch yours" (you need help now, I may need help in the future); it is altruistic because you suffer a cost for another's benefit, but it is selfish because it is still in the individual's interest to help. This is where the prisoner's dilemma comes in: it is basically asking the question "how do I pick the most optimal situation in this scenario?"
For simplicity, it is generally ill-advised to study behaviours in humans because of the implicit bias we have in our judgments about ourselves, from political and religious associations that cloud judgment to the fact that we always strive to see ourselves differently. Also, there are all sorts of ethical difficulties in actually testing and manipulating theories on us, so we normally study animals to get around this. However, your question specifically talks about people, so I will try to answer it based on biological theory alone.
Based on the foundations above, why would someone behave altruistically towards someone only 'similar' to them? The biggest difficulty is knowing whether we understand all the costs and benefits involved; most of the time we only know some of them, and it is impossible to know all of them because you can't know what you don't know. Humans have effectively 'bootstrapped' behaviours which are 'for the good of society', building on previously developed behaviours. However, we are still fundamentally asking a question about Hamilton's Rule. Being seen as a part of society induces altruistic behaviour towards you (because you are seen as worth something); this is good for you, and the costs associated with being altruistic are usually low. Thus you would never rationally die for 1000 unrelated individuals unless you had already had all the children you were going to have and raised them to be independent. But you would vote: by voting you suffer a tiny cost, your time, for a huge benefit. Benefits might include:
Being seen as caring for others (altruism given to you)
Possible benefits to yourself (less tax, more support, better roads, clean air.. etc)
Possible benefits to others related to you
General good morale in your community, leading to a better life.
There could be thousands of other benefits that I can't, off the top of my head, think of. The cost is negligible and the benefit is intangible but probably higher than the cost. Relatedness isn't important (so long as it is greater than 0) if the cost is sufficiently low or the benefit is sufficiently high. I hope this has helped clarify your thinking and answer the question.
For further information:
Wikipedia is great for getting an overview of terms and definitions, but remain critical of what you could be reading.
A decent book on animal behaviour in general is: Alcock J. (2009) Animal Behaviour: An evolutionary approach. 9Th edition. Sinaur Associates.
Academic papers; if you have access to them (i.e. are an academic)
Provides the foundations
Hamilton W.D. (1963) Evolution of altruistic behaviour. The American Naturalist 97 354
Hamilton W.D. (1964) The Genetical Evolution of Social Behaviour I . Journal of Theoretical Biology 7 1
Hamilton W.D. (1964) The Genetical Evolution of Social Behaviour II. Journal of Theoretical Biology 7 17
The current controversy mentioned in another answer is from this paper; it is widely disliked but does have some support. It is slightly off topic from the question here, but eusociality is fundamental to our understanding of evolutionary behaviour:
Nowak M.A., Tarnita C.E. & Wilson E.O. (2010) The Evolution of Eusociality. Nature 466 1057 | {
"domain": "biology.stackexchange",
"id": 340,
"tags": "evolution, sociality"
} |
Flagging system: Check for changing values in a data frame | Question: I've written the following code to check if a value in a data frame changes. I'm looking at the last 5 values. If there was no change at all I want my code to return 1, if a single one (or multiple) of the last 5 are different to the value that is being checked return 0. Finally I want the returned values in a new column in my data frame.
Here's my code so far. It works, but I think there is a nicer (and cleaner) way to do it.
mydata <- data.frame("id" = 1:100, "ta" = c(sample(x = c(-5:20), size = 94, replace = T), rep(1,6))) # include a repetition to check if code works
nums <- mydata$id # create a dummy for iteration
qc_dummy <- vector(mode = "list", length = length(nums)) # create a dummy vector for the values computed in the for loop
for(i in 1:length(nums)) {
qc_dummy[[i]] <- ifelse(mydata[nums[i], 2] - mydata[nums[i-1], 2] == 0,
ifelse(mydata[nums[i], 2] - mydata[nums[i-2], 2] == 0,
ifelse(mydata[nums[i], 2] - mydata[nums[i-3], 2] == 0,
ifelse(mydata[nums[i], 2] - mydata[nums[i-4], 2] == 0,
ifelse(mydata[nums[i], 2] - mydata[nums[i-5], 2] == 0, 1, 0) ,0), 0) ,0) ,0)
}
mydata$qc1 <- as.vector(c(0,unlist(qc_dummy))) # first value of list is skipped by unlist (logi(0)) -> add 0
Answer: I reduced the example data for easier viewing.
# new example data:
mydata <- data.frame(ta = 1:13)
mydata[2:3, 1] <- 1L
mydata[6:12, 1] <- 2L
n <- 3 # how many equal values we need
require(data.table)
setDT(mydata) # convert to data.table
mydata
mydata[, mathcPrev := fifelse((ta - shift(ta, 1)) == 0L, T, F, F)]
mydata[, g := cumsum(!mathcPrev)] # grouping value, if value has changed
mydata[, count := cumsum(mathcPrev), by = g]
mydata[, qc2 := fifelse(count >= n, 1L, 0L)]
mydata
# ta mathcPrev g count qc2
# 1: 1 FALSE 1 0 0
# 2: 1 TRUE 1 1 0
# 3: 1 TRUE 1 2 0
# 4: 4 FALSE 2 0 0
# 5: 5 FALSE 3 0 0
# 6: 2 FALSE 4 0 0
# 7: 2 TRUE 4 1 0
# 8: 2 TRUE 4 2 0
# 9: 2 TRUE 4 3 1
# 10: 2 TRUE 4 4 1
# 11: 2 TRUE 4 5 1
# 12: 2 TRUE 4 6 1
# 13: 13 FALSE 5 0 0
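The same grouping-and-counting idea can be sketched outside data.table; here is a hypothetical Python version (the function name flag_runs is mine), where a position is flagged 1 once it has at least n consecutive equal predecessors, mirroring the count and qc2 columns above:

```python
from itertools import groupby

def flag_runs(values, n):
    """Flag each position with 1 if the preceding n values all equal it
    (i.e. it sits at position >= n within a run of equal values), else 0."""
    flags = []
    for _, group in groupby(values):          # consecutive equal values
        run_length = len(list(group))
        # the first n positions of a run lack n equal predecessors
        flags.extend(0 if i < n else 1 for i in range(run_length))
    return flags

ta = [1, 1, 1, 4, 5, 2, 2, 2, 2, 2, 2, 2, 13]
print(flag_runs(ta, 3))
# [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0]
```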
So, the idea is to create the indicator mathcPrev, which shows whether the value matches the previous one, and then count how many equal values we have in a row. | {
"domain": "codereview.stackexchange",
"id": 37084,
"tags": "r, iteration"
} |
libuvc hydro catkin install fails | Question:
I am trying to get a camera driver (any camera driver) working on a RADXA Rock board running Ubuntu raring. Cheese works, but I can't get any ROS driver to build or run.
roslaunch uvc_camera uvc_camera.launch device:=/dev/video0
rosrun uvc_camera uvc_camera_node
terminate called after throwing an instance of 'std::runtime_error'
what(): couldn't query control
This is the catkin_make on the git clone of libuvc_camera.
-- +++ processing catkin package: 'libuvc_camera'
-- ==> add_subdirectory(libuvc_ros/libuvc_camera)
CMake Error at libuvc_ros/libuvc_camera/CMakeLists.txt:9 (find_package):
By not providing "Findlibuvc.cmake" in CMAKE_MODULE_PATH this project has
asked CMake to find a package configuration file provided by "libuvc", but
CMake did not find one.
Could not find a package configuration file provided by "libuvc" with any
of the following names:
libuvcConfig.cmake
libuvc-config.cmake
Add the installation prefix of "libuvc" to CMAKE_PREFIX_PATH or set
"libuvc_DIR" to a directory containing one of the above files. If "libuvc"
provides a separate development package or SDK, be sure it has been
installed.
-- Configuring incomplete, errors occurred!
Invoking "cmake" failed
this is the src directory:
rock@radxa:~/catkin_ws/src$ ls
CMakeLists.txt driver_common image_common
common_msgs dynamic_reconfigure libuvc_ros
Thanks. This should be easier than playing 'driver roulette.'
Originally posted by DrBot on ROS Answers with karma: 147 on 2014-03-01
Post score: 0
Answer:
The libuvc_camera package is available as part of the Hydro builds for ARM. You should be able to install it with:
sudo apt-get install ros-hydro-libuvc-camera
For reference, the complete list of packages available for Hydro on ARM is available on the status page
Originally posted by ahendrix with karma: 47576 on 2014-03-01
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 17132,
"tags": "ros, uvc-camera"
} |
Markdown Markup Editor: MK2 | Question: Following on from this question I have added some more functionality to Markdown Markup, and made it more WPF idiomatic.
It now supports saving data from any of the four boxes, and loading Markdown or CSS files.
Everything seems to work, so as always, any tips/pointers/critique is/are welcome.
So, first, the new MainWindow.xaml.cs:
/// <summary>
/// Interaction logic for MainWindow.xaml
/// </summary>
public partial class MainWindow : Window
{
public MainWindow()
{
InitializeComponent();
DataContext = new MainWindowViewModel();
}
private MainWindowViewModel ViewModel => DataContext as MainWindowViewModel;
private void renderPreviewBrowser_Navigating(object sender, NavigatingCancelEventArgs e)
{
// This prevents links in the page from navigating, this also means we cannot call WebBrowser.Navigate for any browsers with this event.
if (e.Uri != null)
{
e.Cancel = true;
}
}
}
Nice and succinct.
The new MainWindow.xaml:
<Window x:Class="Markdown_Markup.MainWindow"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
xmlns:local="clr-namespace:Markdown_Markup"
mc:Ignorable="d"
Title="MainWindow" Height="539" Width="749"
d:DataContext="{d:DesignInstance local:MainWindowViewModel}">
<Grid>
<Grid.RowDefinitions>
<RowDefinition Height="*"/>
<RowDefinition Height="*"/>
<RowDefinition Height="*"/>
</Grid.RowDefinitions>
<Grid.ColumnDefinitions>
<ColumnDefinition Width="*"/>
<ColumnDefinition Width="*"/>
<ColumnDefinition Width="*"/>
</Grid.ColumnDefinitions>
<DockPanel Grid.ColumnSpan="3">
<Menu DockPanel.Dock="Top">
<MenuItem Header="_File">
<MenuItem Header="_Open">
<MenuItem Header="_Markdown" Command="{Binding OpenMarkdownCommand}"/>
<MenuItem Header="_CSS" Command="{Binding OpenCssCommand}"/>
</MenuItem>
<MenuItem Header="_Save">
<MenuItem Header="_Markdown" Command="{Binding SaveMarkdownCommand}" />
<MenuItem Header="_CSS" Command="{Binding SaveCssCommand}"/>
<MenuItem Header="_Generated HTML" Command="{Binding SaveGeneratedHtmlCommand}"/>
<MenuItem Header="_Rendered HTML" Command="{Binding SaveRenderedHtmlCommand}"/>
</MenuItem>
</MenuItem>
</Menu>
<StackPanel></StackPanel>
</DockPanel>
<StatusBar Height="24" VerticalAlignment="Bottom" Grid.ColumnSpan="3" Grid.Row="2"/>
<TextBox Margin="5,45,5,29" TextWrapping="Wrap" Grid.RowSpan="3" Text="{Binding MarkdownContent, UpdateSourceTrigger=PropertyChanged}" AcceptsReturn="True" AcceptsTab="True"/>
<TextBox Margin="5,28,5,5" TextWrapping="Wrap" Grid.Column="1" Grid.Row="1" IsReadOnly="True" Text="{Binding HtmlContent}"/>
<TextBox Margin="5,26,5,29" TextWrapping="Wrap" Grid.Column="1" Grid.Row="2" IsReadOnly="True" Text="{Binding HtmlRenderContent}"/>
<TextBox Margin="5,45,5,0" TextWrapping="Wrap" Grid.Column="1" Text="{Binding CssContent, UpdateSourceTrigger=PropertyChanged}" AcceptsReturn="True" AcceptsTab="True"/>
<WebBrowser local:BrowserBehavior.Html="{Binding HtmlRenderContent}" Grid.Column="2" Margin="5,45,5,29" Grid.RowSpan="3" Navigating="renderPreviewBrowser_Navigating" />
<Label Content="Markdown Content:" HorizontalAlignment="Left" Margin="5,19,0,0" VerticalAlignment="Top"/>
<Label Content="Additional CSS:" Grid.Column="1" HorizontalAlignment="Left" Margin="5,19,0,0" VerticalAlignment="Top"/>
<Label Content="Markdown HTML:" Grid.Column="1" HorizontalAlignment="Left" Margin="5,2,0,0" Grid.Row="1" VerticalAlignment="Top"/>
<Label Content="Render HTML:" Grid.Column="1" HorizontalAlignment="Left" Margin="5,0,0,0" Grid.Row="2" VerticalAlignment="Top"/>
<Label Content="HTML Preview:" Grid.Column="2" HorizontalAlignment="Left" Margin="5,19,0,0" VerticalAlignment="Top"/>
</Grid>
</Window>
This is a bit larger than last time.
The BrowserBehavior.cs:
/// <summary>
/// Represents a behavior to control WebBrowser binding to an HTML string.
/// </summary>
/// <remarks>
/// Adapted from: http://stackoverflow.com/a/4204350/4564272
/// </remarks>
public class BrowserBehavior
{
public static readonly DependencyProperty HtmlProperty = DependencyProperty.RegisterAttached(
"Html",
typeof(string),
typeof(BrowserBehavior),
new FrameworkPropertyMetadata(OnHtmlChanged));
[AttachedPropertyBrowsableForType(typeof(WebBrowser))]
public static string GetHtml(WebBrowser d) => (string)d.GetValue(HtmlProperty);
public static void SetHtml(WebBrowser d, string value)
{
d.SetValue(HtmlProperty, value);
}
static void OnHtmlChanged(DependencyObject dependencyObject, DependencyPropertyChangedEventArgs e)
{
var webBrowser = dependencyObject as WebBrowser;
webBrowser?.NavigateToString(e.NewValue as string ?? " ");
}
}
It's slightly modified from the Stack Overflow question mentioned in the XML comment.
A DelegateCommand (given to me by Mat's Mug, to which I made a couple of changes):
public class DelegateCommand : ICommand
{
private readonly Predicate<object> _canExecute;
private readonly Action<object> _execute;
public DelegateCommand(Action<object> execute, Predicate<object> canExecute = null)
{
_canExecute = canExecute;
_execute = execute;
}
public bool CanExecute(object parameter) => _canExecute == null || _canExecute.Invoke(parameter);
public void Execute(object parameter)
{
_execute.Invoke(parameter);
}
public event EventHandler CanExecuteChanged
{
add { CommandManager.RequerySuggested += value; }
remove { CommandManager.RequerySuggested -= value; }
}
}
Lastly, and this is the more fun of the files, the MainWindowViewModel.cs:
public class MainWindowViewModel : INotifyPropertyChanged
{
private Markdown _markdown;
private string _markdownContent;
private string _cssContent;
private string _htmlContent;
private string _htmlRenderContent;
public MainWindowViewModel()
{
_markdown = new Markdown();
SaveMarkdownCommand = new DelegateCommand(SaveMarkdown, CanSaveMarkdown);
SaveCssCommand = new DelegateCommand(SaveCss, CanSaveCss);
SaveGeneratedHtmlCommand = new DelegateCommand(SaveGeneratedHtml, CanSaveGeneratedHtml);
SaveRenderedHtmlCommand = new DelegateCommand(SaveRenderedHtml, CanSaveRenderedHtml);
OpenMarkdownCommand = new DelegateCommand(OpenMarkdown, CanOpenMarkdown);
OpenCssCommand = new DelegateCommand(OpenCss, CanOpenCss);
}
public ICommand SaveMarkdownCommand { get; }
public ICommand SaveCssCommand { get; }
public ICommand SaveGeneratedHtmlCommand { get; }
public ICommand SaveRenderedHtmlCommand { get; }
public ICommand OpenMarkdownCommand { get; }
public ICommand OpenCssCommand { get; }
public string MarkdownContent
{
get { return _markdownContent; }
set
{
_markdownContent = value;
OnPropertyChanged(new PropertyChangedEventArgs(nameof(MarkdownContent)));
UpdateHtml();
}
}
public string CssContent
{
get { return _cssContent; }
set
{
_cssContent = value;
OnPropertyChanged(new PropertyChangedEventArgs(nameof(CssContent)));
UpdateHtml();
}
}
public void UpdateHtml()
{
var html = _markdown.Transform(MarkdownContent);
HtmlContent = html;
html = $"<html>\r\n\t<head>\r\n\t\t<style>\r\n\t\t\t{CssContent}\r\n\t\t</style>\r\n\t</head>\r\n\t<body>\r\n\t\t{html}\r\n\t</body>\r\n</html>";
HtmlRenderContent = html;
}
public string HtmlContent
{
get { return _htmlContent; }
set
{
_htmlContent = value;
OnPropertyChanged(new PropertyChangedEventArgs(nameof(HtmlContent)));
}
}
public string HtmlRenderContent
{
get { return _htmlRenderContent; }
set
{
_htmlRenderContent = value;
OnPropertyChanged(new PropertyChangedEventArgs(nameof(HtmlRenderContent)));
}
}
public bool CanSaveMarkdown(object parameter) => !string.IsNullOrWhiteSpace(MarkdownContent);
public void SaveMarkdown(object parameter)
{
var dialog = new SaveFileDialog();
dialog.AddExtension = true;
dialog.Filter = "Markdown Files|*.md|All Files|*.*";
var result = dialog.ShowDialog();
if (result.Value)
{
using (var sw = new StreamWriter(dialog.FileName))
{
sw.WriteLine(MarkdownContent);
}
}
}
public bool CanSaveCss(object parameter) => !string.IsNullOrWhiteSpace(CssContent);
public void SaveCss(object parameter)
{
var dialog = new SaveFileDialog();
dialog.AddExtension = true;
dialog.Filter = "CSS Files|*.css|All Files|*.*";
var result = dialog.ShowDialog();
if (result.Value)
{
using (var sw = new StreamWriter(dialog.FileName))
{
sw.WriteLine(CssContent);
}
}
}
public bool CanSaveGeneratedHtml(object parameter) => !string.IsNullOrWhiteSpace(HtmlContent);
public void SaveGeneratedHtml(object parameter)
{
var dialog = new SaveFileDialog();
dialog.AddExtension = true;
dialog.Filter = "HTML Files|*.html|All Files|*.*";
var result = dialog.ShowDialog();
if (result.Value)
{
using (var sw = new StreamWriter(dialog.FileName))
{
sw.WriteLine(HtmlContent);
}
}
}
public bool CanSaveRenderedHtml(object parameter) => !string.IsNullOrWhiteSpace(HtmlRenderContent);
public void SaveRenderedHtml(object parameter)
{
var dialog = new SaveFileDialog();
dialog.AddExtension = true;
dialog.Filter = "HTML Files|*.html|All Files|*.*";
var result = dialog.ShowDialog();
if (result.Value)
{
using (var sw = new StreamWriter(dialog.FileName))
{
sw.WriteLine(HtmlRenderContent);
}
}
}
public bool CanOpenMarkdown(object parameter) => true;
public void OpenMarkdown(object parameter)
{
var dialog = new OpenFileDialog();
dialog.AddExtension = true;
dialog.Filter = "Markdown Files|*.md|All Files|*.*";
var result = dialog.ShowDialog();
if (result.Value)
{
using (var sr = new StreamReader(dialog.FileName))
{
MarkdownContent = sr.ReadToEnd();
}
}
}
public bool CanOpenCss(object parameter) => true;
public void OpenCss(object parameter)
{
var dialog = new OpenFileDialog();
dialog.AddExtension = true;
dialog.Filter = "CSS Files|*.css|All Files|*.*";
var result = dialog.ShowDialog();
if (result.Value)
{
using (var sr = new StreamReader(dialog.FileName))
{
CssContent = sr.ReadToEnd();
}
}
}
public void OnPropertyChanged(PropertyChangedEventArgs e)
{
var handler = PropertyChanged;
handler?.Invoke(this, e);
}
public event PropertyChangedEventHandler PropertyChanged;
}
All comments and critique are welcome.
It's also on GitHub now.
Answer: OnPropertyChanged()
This method, which raises the PropertyChanged event, only needs to be called when the value has actually changed, which isn't yet verified by the setters of your properties. To fix this and prevent unneeded work, a simple if condition is needed, like so:
public string MarkdownContent
{
get { return _markdownContent; }
set
{
if (_markdownContent == value) { return; }
_markdownContent = value;
OnPropertyChanged(new PropertyChangedEventArgs(nameof(MarkdownContent)));
UpdateHtml();
}
}
The implementation of OnPropertyChanged() can be improved by using the ?. null-conditional operator, which is clearly described in New Features in C# 6:
We expect that a very common use of this pattern will be for triggering of events:
PropertyChanged?.Invoke(this, args);
This is an easy and thread-safe way to check for null before you trigger an event. The reason it’s thread-safe is that the feature evaluates the left-hand side only once, and keeps it in a temporary variable.
OpenMarkDown() and OpenCss()
You have duplicated code here and an unused method parameter. By introducing a string GetLoadFilename(string filter) (not sure about the method name) this can be prevented like so
private string GetLoadFilename(string filter)
{
var dialog = new OpenFileDialog();
dialog.AddExtension = true;
dialog.Filter = filter;
var result = dialog.ShowDialog();
return result.Value ? dialog.FileName : string.Empty;
}
and now, for instance, OpenCss() would look like this if we also extract the actual reading of the file into a string ReadFile(string) method:
public void OpenCss(object parameter)
{
var fileName = GetLoadFilename("CSS Files|*.css|All Files|*.*");
if (fileName.Length == 0) { return; }
CssContent = ReadFile(fileName);
}
private string ReadFile(string fileName)
{
using (var sr = new StreamReader(fileName))
{
return sr.ReadToEnd();
}
}
Almost the same refactoring should be applied to the SaveMarkdown(), SaveCss(), SaveGeneratedHtml() and SaveRenderedHtml() by introducing the methods string GetSaveFilename(string) and void SaveFile(string). | {
"domain": "codereview.stackexchange",
"id": 18706,
"tags": "c#, .net, wpf, xaml, markdown"
} |
Random Number Generator Followup: Choosing the Generator Algorithm and the Distribution | Question: This question is a follow-up from my previous code review question. This question regards the ability to choose a predefined random number generator algorithm and also choose a generator distribution.
For context, here is the full code in its current state:
#include <algorithm>
#include <boost/program_options.hpp>
#include <boost/algorithm/string/predicate.hpp>
#include <cmath>
#include <iomanip>
#include <iostream>
#include <limits>
#include <random>
#include <vector>
enum returnID {
success = 0,
known_err = 1,
other_err = 2,
zero_err = 3,
conflict_err = 4,
overd_err = 5,
underd_err = 6,
exclude_err = 7,
round_prec = 8,
vect_nan = 9,
gen_err = 10,
success_help = -1
};
bool filter(const long double rand, const int precision, const std::vector<std::string> & fx, bool(*predicate)(const std::string&, const std::string&)) {
std::ostringstream oss;
oss << std::fixed << std::setprecision(precision) << rand;
const auto str_rand = oss.str();
return std::none_of(fx.begin(), fx.end(), [&](auto const & s) { return predicate(str_rand, s); });
}
struct program_args {
long long number;
long double lbound, ubound;
bool ceil, floor, round, trunc; // mutually exclusive
int precision;
std::vector<long double> excluded;
bool norepeat, stat_min, stat_max, stat_median,
stat_avg, bad_random, list, quiet, numbers_force;
std::vector<std::string> prefix, suffix, contains;
std::string delim = "\n", generator = "mt19937";
};
returnID parse_args(program_args & args, int argc, char const * const * argv) {
static auto const ld_prec = std::numeric_limits<long double>::max_digits10;
namespace po = boost::program_options;
po::options_description desc("Options");
desc.add_options()
("help,h", "produce this help message")
("number,n", po::value<long long>(&args.number)->default_value(1),
"count of numbers to be generated")
("lbound,l", po::value<long double>(&args.lbound)->default_value(0.0),
"minimum number (ldouble) to be generated")
("ubound,u", po::value<long double>(&args.ubound)->default_value(1.0),
"maximum number (ldouble) to be generated")
("ceil,c", po::bool_switch(&args.ceil)->default_value(false),
"apply ceiling function to numbers")
("floor,f", po::bool_switch(&args.floor)->default_value(false),
"apply floor function to numbers")
("round,r", po::bool_switch(&args.round)->default_value(false),
"apply round function to numbers")
("trunc,t", po::bool_switch(&args.trunc)->default_value(false),
"apply truncation to numbers")
("precision,p", po::value<int>(&args.precision)->default_value(ld_prec),
"output precision (not internal precision, cannot be > ldouble precision)")
("exclude,x", po::value<std::vector<long double> >(&args.excluded)->multitoken(),
"exclude numbers from being printed, best with --ceil, --floor, --round, or --trunc")
("norepeat", po::bool_switch(&args.norepeat)->default_value(false),
"exclude repeated numbers from being printed, best with --ceil, --floor, --round, or --trunc")
("stat-min", po::bool_switch(&args.stat_min)->default_value(false),
"print the lowest value generated")
("stat-max", po::bool_switch(&args.stat_max)->default_value(false),
"print the highest value generated")
("stat-median", po::bool_switch(&args.stat_median)->default_value(false),
"print the median of the values generated")
("stat-avg", po::bool_switch(&args.stat_avg)->default_value(false),
"print the average of the values generated")
("prefix", po::value<std::vector<std::string> >(&args.prefix)->multitoken(),
"only print when the number begins with string(s)")
("suffix", po::value<std::vector<std::string> >(&args.suffix)->multitoken(),
"only print when the number ends with string(s)")
("contains", po::value<std::vector<std::string> >(&args.contains)->multitoken(),
"only print when the number contains string(s)")
("list", po::bool_switch(&args.list)->default_value(false),
"print numbers in a list with positional numbers prefixed")
("delim", po::value<std::string>(&args.delim),
"change the delimiter")
("quiet,q", po::bool_switch(&args.quiet)->default_value(false),
"disable number output, useful when paired with stats")
("numbers-force", po::bool_switch(&args.numbers_force)->default_value(false),
"force the count of numbers output to be equal to the number specified")
("generator,g", po::value<std::string>(&args.generator),
"change algorithm for the random number generator:\n - minstd_rand0\n - minstd_rand"
"\n - mt19937 (default)\n - mt19937_64\n - ranlux24_base\n - ranlux48_base"
"\n - ranlux24\n - ranlux48\n - knuth_b\n - default_random_engine"
"\n - badrandom (std::rand)");
po::variables_map vm;
po::store(po::parse_command_line(argc, argv, desc), vm);
po::notify(vm);
if(vm.count("help")) {
std::cout << desc << '\n';
return returnID::success_help;
}
if(args.number <= 0) {
std::cerr << "error: the argument for option '--number' is invalid (n must be >= 1)\n";
return returnID::zero_err;
}
if(args.ceil + args.floor + args.round + args.trunc > 1) {
std::cerr << "error: --ceil, --floor, --round, and --trunc are mutually exclusive\n";
return returnID::conflict_err;
}
if(args.ceil || args.floor || args.round || args.trunc) {
args.precision = 0;
}
if(args.precision > ld_prec) {
std::cerr << "error: --precision cannot be greater than the precision for <long double> ("
<< ld_prec << ")\n";
return returnID::overd_err;
}
if(args.precision <= -1) {
std::cerr << "error: --precision cannot be less than zero\n";
return returnID::underd_err;
}
if(vm.count("exclude") && vm["exclude"].empty()) {
std::cerr << "error: --exclude was specified without arguments (arguments are separated by spaces)\n";
return returnID::exclude_err;
}
std::vector<std::vector<std::string> > filters = {{args.prefix, args.suffix, args.contains}};
for(auto i : filters) {
for(auto j : i) {
if(j.find_first_not_of("0123456789.") != std::string::npos || std::count(std::begin(j), std::end(j), '.') > 1) {
std::cerr << "error: --prefix, --suffix, and --contains can only be numbers\n";
return returnID::vect_nan;
}
}
}
const std::vector<std::string> options {{"minstd_rand0", "minstd_rand", "mt19937",
"mt19937_64", "ranlux24_base", "ranlux48_base", "ranlux24",
"ranlux48", "knuth_b", "default_random_engine", "badrandom"}};
if(std::find(options.begin(), options.end(), args.generator) == options.end()) {
std::cerr << "error: --generator must be: minstd_rand0, minstd_rand, "
"mt19937, mt19937_64, ranlux24_base, ranlux48_base, "
"ranlux24, ranlux48, knuth_b, default_random_engine, badrandom\n";
return returnID::gen_err;
}
return returnID::success;
}
std::function<long double()> random_generator(const program_args & args) {
if(args.generator == "minstd_rand0") {
std::minstd_rand0 generator{(std::random_device())()};
std::uniform_real_distribution<long double> dis{args.lbound, args.ubound};
return [dis, generator]() mutable -> auto { return dis(generator); };
} else if(args.generator == "minstd_rand") {
std::minstd_rand generator{(std::random_device())()};
std::uniform_real_distribution<long double> dis{args.lbound, args.ubound};
return [dis, generator]() mutable -> auto { return dis(generator); };
} else if(args.generator == "mt19937") {
std::mt19937 generator{(std::random_device())()};
std::uniform_real_distribution<long double> dis{args.lbound, args.ubound};
return [dis, generator]() mutable -> auto { return dis(generator); };
} else if(args.generator == "mt19937_64") {
std::mt19937_64 generator{(std::random_device())()};
std::uniform_real_distribution<long double> dis{args.lbound, args.ubound};
return [dis, generator]() mutable -> auto { return dis(generator); };
} else if(args.generator == "ranlux24_base") {
std::ranlux24_base generator{(std::random_device())()};
std::uniform_real_distribution<long double> dis{args.lbound, args.ubound};
return [dis, generator]() mutable -> auto { return dis(generator); };
} else if(args.generator == "ranlux48_base") {
std::ranlux48_base generator{(std::random_device())()};
std::uniform_real_distribution<long double> dis{args.lbound, args.ubound};
return [dis, generator]() mutable -> auto { return dis(generator); };
} else if(args.generator == "ranlux24") {
std::ranlux24 generator{(std::random_device())()};
std::uniform_real_distribution<long double> dis{args.lbound, args.ubound};
return [dis, generator]() mutable -> auto { return dis(generator); };
} else if(args.generator == "ranlux48") {
std::ranlux48 generator{(std::random_device())()};
std::uniform_real_distribution<long double> dis{args.lbound, args.ubound};
return [dis, generator]() mutable -> auto { return dis(generator); };
} else if(args.generator == "knuth_b") {
std::knuth_b generator{(std::random_device())()};
std::uniform_real_distribution<long double> dis{args.lbound, args.ubound};
return [dis, generator]() mutable -> auto { return dis(generator); };
} else if(args.generator == "default_random_engine") {
std::default_random_engine generator{(std::random_device())()};
std::uniform_real_distribution<long double> dis{args.lbound, args.ubound};
return [dis, generator]() mutable -> auto { return dis(generator); };
} else if(args.generator == "badrandom") {
std::srand(std::time(nullptr));
const auto min = args.lbound;
const auto scale = (args.ubound - args.lbound) / RAND_MAX;
return [min, scale]{ return min + (std::rand() * scale);};
}
}
int main(int ac, char* av[]) {
try {
program_args args;
switch(auto result = parse_args(args, ac, av)) {
case returnID::success: break;
case returnID::success_help: return 0;
default: return result;
}
std::vector<long double> generated;
std::cout.precision(args.precision);
const auto random = random_generator(args);
long long list_cnt = 0;
for(long long i = 1; i <= args.number;) {
if(!args.numbers_force) ++i;
if(args.list) ++list_cnt;
long double rand = random();
if(args.ceil) rand = std::ceil(rand);
else if(args.floor) rand = std::floor(rand);
else if(args.round) rand = std::round(rand);
else if(args.trunc) rand = std::trunc(rand);
if(!args.excluded.empty() && std::find(args.excluded.begin(), args.excluded.end(), rand) != args.excluded.end())
continue;
else if(args.norepeat && std::find(generated.begin(), generated.end(), rand) != generated.end())
continue;
else if(!args.prefix.empty() && filter(rand, args.precision, args.prefix, boost::starts_with))
continue;
else if(!args.suffix.empty() && filter(rand, args.precision, args.suffix, boost::ends_with))
continue;
else if(!args.contains.empty() && filter(rand, args.precision, args.contains, boost::contains))
continue;
generated.push_back(rand);
if(!args.quiet) {
if(args.list && args.numbers_force) std::cout << i << ".\t";
if(args.list) std::cout << list_cnt << ".\t";
std::cout << std::fixed << rand << args.delim;
if(args.numbers_force) ++i;
}
}
if(args.delim != "\n" && !args.quiet) std::cout << '\n';
if((args.stat_min || args.stat_max || args.stat_median || args.stat_avg) && !args.quiet)
std::cout << '\n';
if(args.stat_min || args.stat_max) {
auto minmax = std::minmax_element(generated.begin(), generated.end());
if(args.stat_min) std::cout << "min: " << *minmax.first << '\n';
if(args.stat_max) std::cout << "max: " << *minmax.second << '\n';
}
if(args.stat_median) {
auto midpoint = generated.begin() + generated.size() / 2;
std::nth_element(generated.begin(), midpoint, generated.end());
auto median = *midpoint;
if(generated.size() % 2 == 0)
median = (median + *std::max_element(generated.begin(), midpoint)) / 2;
std::cout << "median: " << median << '\n';
}
if(args.stat_avg) {
long double sum = std::accumulate(generated.begin(), generated.end(), 0.0);
std::cout << "avg: " << sum / generated.size() << '\n';
}
return returnID::success;
} catch(std::exception & e) {
std::cerr << "error: " << e.what() << '\n';
return returnID::known_err;
} catch(...) {
std::cerr << "error: exception of unknown type!\n";
return returnID::other_err;
}
}
Here is the portion of the code that this question is about:
std::function<long double()> random_generator(const program_args & args) {
if(args.generator == "minstd_rand0") {
std::minstd_rand0 generator{(std::random_device())()};
std::uniform_real_distribution<long double> dis{args.lbound, args.ubound};
return [dis, generator]() mutable -> auto { return dis(generator); };
} else if(args.generator == "minstd_rand") {
std::minstd_rand generator{(std::random_device())()};
std::uniform_real_distribution<long double> dis{args.lbound, args.ubound};
return [dis, generator]() mutable -> auto { return dis(generator); };
} else if(args.generator == "mt19937") {
std::mt19937 generator{(std::random_device())()};
std::uniform_real_distribution<long double> dis{args.lbound, args.ubound};
return [dis, generator]() mutable -> auto { return dis(generator); };
} else if(args.generator == "mt19937_64") {
std::mt19937_64 generator{(std::random_device())()};
std::uniform_real_distribution<long double> dis{args.lbound, args.ubound};
return [dis, generator]() mutable -> auto { return dis(generator); };
} else if(args.generator == "ranlux24_base") {
std::ranlux24_base generator{(std::random_device())()};
std::uniform_real_distribution<long double> dis{args.lbound, args.ubound};
return [dis, generator]() mutable -> auto { return dis(generator); };
} else if(args.generator == "ranlux48_base") {
std::ranlux48_base generator{(std::random_device())()};
std::uniform_real_distribution<long double> dis{args.lbound, args.ubound};
return [dis, generator]() mutable -> auto { return dis(generator); };
} else if(args.generator == "ranlux24") {
std::ranlux24 generator{(std::random_device())()};
std::uniform_real_distribution<long double> dis{args.lbound, args.ubound};
return [dis, generator]() mutable -> auto { return dis(generator); };
} else if(args.generator == "ranlux48") {
std::ranlux48 generator{(std::random_device())()};
std::uniform_real_distribution<long double> dis{args.lbound, args.ubound};
return [dis, generator]() mutable -> auto { return dis(generator); };
} else if(args.generator == "knuth_b") {
std::knuth_b generator{(std::random_device())()};
std::uniform_real_distribution<long double> dis{args.lbound, args.ubound};
return [dis, generator]() mutable -> auto { return dis(generator); };
} else if(args.generator == "default_random_engine") {
std::default_random_engine generator{(std::random_device())()};
std::uniform_real_distribution<long double> dis{args.lbound, args.ubound};
return [dis, generator]() mutable -> auto { return dis(generator); };
} else if(args.generator == "badrandom") {
std::srand(std::time(nullptr));
const auto min = args.lbound;
const auto scale = (args.ubound - args.lbound) / RAND_MAX;
return [min, scale]{ return min + (std::rand() * scale);};
}
}
Which is then called using:
const auto random = random_generator(args);
long double rand = random();
This std::function was mainly created by Toby Speight through his response to my previous question.
While Toby's original code was very good, when I added a lot more predefined generators it seemed repetitive.
My question is this: can this std::function be simplified, and can it also work with the ability to choose a distribution as well?
If I were to simply create a separate if-else for each possibility of generator-and-distribution it would get out of hand quickly.
Answer: Below is the solution that I came to (posted a day late):
I decided to take Incomputable's idea of templates and run with it, in my own way:
template<typename GEN> std::function<long double()> r_gen(const program_args & args) {
GEN generator{std::random_device{}()};
std::uniform_real_distribution<long double> dis{args.lbound, args.ubound};
return [dis, generator]() mutable -> auto { return dis(generator); };
}
long double random(const program_args & args) {
if(args.generator == "minstd_rand0") return r_gen<std::minstd_rand0>(args)();
else if(args.generator == "minstd_rand") return r_gen<std::minstd_rand>(args)();
else if(args.generator == "mt19937_64") return r_gen<std::mt19937_64>(args)();
else if(args.generator == "ranlux24_base") return r_gen<std::ranlux24_base>(args)();
else if(args.generator == "ranlux48_base") return r_gen<std::ranlux48_base>(args)();
else if(args.generator == "ranlux24") return r_gen<std::ranlux24>(args)();
else if(args.generator == "ranlux48") return r_gen<std::ranlux48>(args)();
else if(args.generator == "knuth_b") return r_gen<std::knuth_b>(args)();
else if(args.generator == "default_random_engine") return r_gen<std::default_random_engine>(args)();
else if(args.generator == "badrandom")
return args.lbound + (std::rand() / (RAND_MAX / (args.ubound - args.lbound)));
else return r_gen<std::mt19937>(args)();
}
Which can then be called by:
long double rand = random(args);
Notice that the case for args.generator == "badrandom" does not include std::srand(). If it did, std::srand() would be re-seeded on every call to random(), and std::rand() would then return the same number over and over (that's why it's called badrandom, after all). Instead, I use:
if(args.generator == "badrandom") std::srand(std::time(nullptr));
before the main for loop.
You may also notice that my template (r_gen<>()) does not include a typename for the distribution, even though it is possible to do so via:
template<typename GEN, typename DIST> // etc.
When going through the distributions, I noticed different distributions work differently (not really surprising). For example, std::uniform_int_distribution returns int, std::bernoulli_distribution returns bool, etc. Some of them have varying arguments, some just using a single number for the bound, or only using arguments to define a seed, etc.
For these reasons, I decided not to include a choice of distribution, at least not yet. A solution would be to pass lbound and ubound as optional arguments... we'll see.
While I will accept this as the answer, because it is what I ended up using, I want to say that Incomputable's answer is what inspired the code, and some of their suggestions I have also implemented (such as std::random_device{}()). I love the idea of using tests, and it's something I've had on my list to look into.
Here's the final code. Let me know if I've done something horribly wrong. | {
"domain": "codereview.stackexchange",
"id": 27192,
"tags": "c++, random, boost, generator"
} |
Quantum Field Theory: Number Operator $\hat{N} = a^\dagger a$ and bra-ket notation | Question: My textbook, Quantum Field Theory and the Standard Model by Schwartz, says the following:
The easiest way to study a quantum harmonic oscillator is with creation and annihilation operators, $a^\dagger$ and $a$. These satisfy
$$[a, a^\dagger] = 1.$$
There is also the number operator $\hat{N} = a^\dagger a$, which counts modes:
$$\hat{N} \mid n \rangle = n \mid n \rangle.$$
I’ve only just started learning bra-ket notation, but as I understand it, $\hat{N} \mid n \rangle$ is just applying the operator $\hat{N}$ to the state $\mid n \rangle$? But how does this result in $\hat{N} \mid n \rangle = n \mid n \rangle$?
I would appreciate it if people could please take the time to clarify this.
Answer: $|n\rangle$ is an eigenstate of the number operator $\hat{N}$ with the eigenvalue $n$. Being an eigenstate, applying the operator to the state does not change the state, so the result will be proportional to $|n\rangle$. The proportionality constant is exactly the eigenvalue $n$, hence $\hat{N}|n\rangle = n|n\rangle$. | {
"domain": "physics.stackexchange",
"id": 60291,
"tags": "quantum-mechanics, hilbert-space, operators, harmonic-oscillator, notation"
} |
How do I adjust objects on a conveyor belt into the proper orientation? | Question: This is part two of my larger robot, it follows up what happens with the small rocks here: What kind of sensor do i need for knowing that something is placed at a position?
Now I am taking the rocks down a tube for placement. In this case they need to be oriented so they will always stand up before they enter the tube. Obviously a rectangular rock won't fit if it comes in sideways. The dimensions here are pretty small: the rocks are about 15 mm x 10 mm. The tube I use is actually a plastic drinking straw, and the material I use for the rest of the robot is Lego, powered by stepper motors which drive the conveyor belts that move the rocks. The control is Arduino.
(Sorry for the lousy illustration; if you know a good paint program for Mac like the one used to draw the picture in my other post, please tell me :-))
The rocks will always enter one at a time and have as much time as they need to be adjusted so they fit, enter the tube, and fall down. The question is how to ensure all rocks are turned the right way when they get to the straw. I'm not sure if using Lego when building the robot is off topic here, but a solution involving Lego is preferable. And it has to be controlled by an Arduino.
General tips on how to split a complex task into subtasks robots can do are also welcome. Is there any theory behind the most common subtasks a job requires when designing multiple robots to do it?
Answer: One way to do this is complicated, involving computer vision and a robot arm or other manipulator that can directly affect the orientation of each rock.
The low-tech way to do it would be to use a separate conveyor that gave you one rock at a time, and use walls to funnel it into a gate that matches the internal dimensions of the straw. You would then just detect when the rock begins to push on the funnel walls (instead of travelling through) and knock it backwards to try again. After a few tries, it should end up in the proper orientation (or you can give up and reject it at that point). An even simpler way would be to skip the detection and just oscillate the funnel continuously, which would be safe if you're assured that each rock is guaranteed to have a working orientation.
It's similar to this patent which suggests a single wall, with the other side being open for improperly-aligned objects to fall off the belt. | {
"domain": "robotics.stackexchange",
"id": 157,
"tags": "arduino, motor, microcontroller, motion"
} |
Instantiating shapes using the Factory Design Pattern in Java | Question: Trying to learn the factory design pattern, came up with some code based on a Shape Factory (found this example here). Is this the right way to implement the factory pattern?
interface Shape{
void draw();
}
class Rectangle implements Shape{
@Override
public void draw() {
System.out.println(" Drawing A Rectangle!");
}
}
class Circle implements Shape{
@Override
public void draw() {
System.out.println(" Drawing A cIRCLE!");
}
}
class Triangle implements Shape{
@Override
public void draw() {
System.out.println(" Drawing A Triangle!");
}
}
class ShapeFactory{
public static Shape getShape(String shapeType)
{
Shape shape = null;
switch(shapeType.toUpperCase()){
case "CIRCLE":
shape = new Circle();
break;
case "RECTANGLE":
shape = new Rectangle();
break;
case "TRIANGLE":
shape = new Triangle();
break;
}
return shape;
}
}
public class FactoryExample {
public static void main(String[] args) {
Shape s = ShapeFactory.getShape("rectangle");
s.draw();
s = ShapeFactory.getShape("triangle");
s.draw();
}
}
Answer: You could rewrite getShape as follows:
public static Shape getShape(String shapeType) {
switch (shapeType.trim().toUpperCase()) {
case "CIRCLE":
return new Circle();
case "RECTANGLE":
return new Rectangle();
case "TRIANGLE":
return new Triangle();
default:
return null;
}
} | {
"domain": "codereview.stackexchange",
"id": 18875,
"tags": "java, design-patterns, factory-method"
} |
Can "state" be considered a 5th dimension? | Question: I searched for an answer to this question on Google but just found articles that mention either string theory or a 5th dimension in passing (such as Maxwell equations as they relate to Riemann curvature tensor.)
I stumped myself while driving to school today thinking about this...
We can explain an object's position in the universe by describing its spatial and time locations, such as at 2nd and 3rd street on the fifth floor at 10:00 am.
But is that enough to fully describe an object when you take into account Schrödinger's cat? For example, what if Schrödinger's cat is the object that is at that location and time? It seems like you would have to have a 5th state dimension to fully explain its position, as in: at 2nd and 3rd street on the fifth floor at 10:00 am with a probability of 0.5.
It seems like if we had state as an additional dimension it would help explain things like quantum entanglement as the particles could be moving away from each other in space-time but standing still in state.
Am I merely relating things that have no business being related, or is there a connection between state and dimension?
Answer: It goes the other way, actually. In the Lagrangian and Hamiltonian approaches to classical dynamics (on which the quantum theory is based), you learn about "generalized coordinates" or "degrees of freedom." Most often this is used to reduce the complexity of a problem. The canonical example is a clock pendulum, constrained to move in a plane. The motion in $x$ and $y$ is rather complicated, but it becomes simpler once you realize that the only degree of freedom is the angle $\theta$ that the pendulum makes with the vertical direction.
A more sophisticated example is the description of the motion of many coupled oscillators in terms of their "normal modes." There is a very close connection between the normal modes of a classical oscillator and the energy eigenstates of a quantum system.
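As a tiny concrete illustration of the normal-mode idea (my own sketch, not from the original answer, with made-up units $m = k = 1$): two equal masses between walls, coupled by three identical springs, obey $\ddot{x} = -Kx$, and the normal-mode coordinates are just the eigenvectors of the stiffness matrix $K$:

```python
import numpy as np

# Stiffness matrix for wall--m--m--wall with three identical unit springs:
K = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])

freq2, modes = np.linalg.eigh(K)   # eigenvalues are the squared mode frequencies
print(np.sqrt(freq2))              # omega = 1 (masses in phase) and sqrt(3) (out of phase)
print(modes)                       # columns: the two normal-mode coordinates
```

In each eigenvector the complicated coupled motion separates into an independent oscillation, which is exactly the "generalized coordinate" reduction described above.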
The various internal state variables that describe a quantum system are simply degrees of freedom in the Hamiltonian which don't happen to correspond to spatial position. In quantum theory there is nothing special about the degrees of freedom corresponding to position in space apart from our macroscopic biases. This is where you get some of the counterintuitive, non-local phenomena: QM treats position in space as no more or less special than any other degree of freedom. | {
"domain": "physics.stackexchange",
"id": 13268,
"tags": "quantum-mechanics, spacetime-dimensions"
} |
cannot find hello_qiskit module | Question: I'm currently following the "Introduction To Quantum Programming" course of qiskit.
At the beginning of the "Visualizing entanglement" chapter:
https://learn.qiskit.org/course/introduction/visualizing-entanglement
We have to run this line:
from hello_qiskit import run_puzzle
This raises this error:
ModuleNotFoundError: No module named 'hello_qiskit'
I cannot find the hello_qiskit module in my Anaconda environment.
And if I try to install it with
pip install hello_qiskit
I have this error:
ERROR: Could not find a version that satisfies the requirement
hello_qiskit (from versions: none) ERROR: No
matching distribution found for hello_qiskit
How can I install this module, or is there another way to use the run_puzzle function?
Answer: hello_qiskit is just a Python file that lives in the Qiskit Textbook Jupyter Notebook system; it is a shortcut/helper for interacting with qiskit_textbook.games.hello_quantum. In Visualizing Entanglement, if you add
import hello_qiskit
help(hello_qiskit)
to the first cell, you will find a file /home/jovyan/hello_qiskit.py that contains the run_puzzle method, defined as follows:
from qiskit_textbook.games import hello_quantum
def run_puzzle(j):
puzzle = hello_quantum.run_game(exercises[j]['initialize'],
exercises[j]['success_condition'],
exercises[j]['allowed_gates'],
exercises[j]['vi'],
qubit_names=exercises[j]['qubit_names'],
mode=exercises[j]['mode']
)
return puzzle
This method references a list exercises of dictionaries defining arguments that will be passed to the run_game method. When you pass an int to the run_puzzle method, you are simply indexing the exercises list defined in hello_qiskit.py. For example, run_puzzle(0) accesses exercises[0], which is defined as
{
'initialize': [],
'success_condition': {},
'allowed_gates': {'0': {'x': 3}, '1': {}, 'both': {}},
'vi': [[1], True, False],
'mode': 'line',
'qubit_names': {'0':'q[0]', '1':'q[1]'}
}
To see how the run_game method is used, see Hello Qiskit Game. | {
"domain": "quantumcomputing.stackexchange",
"id": 3579,
"tags": "qiskit, programming"
} |
Idiomatic way to filter a `Vec` of version identifiers to only include latest version for each minor release? | Question: I have a Vec<String> of all available versions of a particular piece of software (Godot), named VERSIONS in code below, where each version can be either "major.minor" or "major.minor.patch". These are manually scraped from tuxfamily.
I also have a Vec<&str> of versions that are considered "supported", named STABLES in code below, where each version is in form of "major.minor". These are fetched from a file on GitHub.
The purpose of this snippet is to filter VERSIONS to only include the latest version for each minor release. For example, if I have VERSIONS = [2.1, 3.5, 3.5.1, 3.6] and STABLES = [3.5, 3.6], I want to disregard 2.1 (as it is not in STABLES) and 3.5 (as the latest 3.5 release is 3.5.1), ending up with VERSIONS = [3.5.1, 3.6].
I tried hard to figure out a clean and ideally fast way to do the above, but ended up with a solution that is explicit about everything it does, possibly sacrificing readability (opinionated) and speed (I did not measure it, but I think it could be faster).
// Dependencies: `versions` (for the `Versioning` struct), `ahash` (for `AHashMap`).
// A mapping of "major.minor" to a struct representing "major.minor[.patch]"
let mut latest_versions: AHashMap<&str, Versioning> = AHashMap::new();
// For each Godot version fetched from tuxfamily
for godot_version in VERSIONS {
for stable_godot_version in &STABLES {
// We check if it is one of the versions marked as stable in our
// custom data set
if godot_version.starts_with(stable_godot_version) {
// If yes, we check if we already have a version set as
// "latest" in our hashmap.
let latest: Option<&Versioning> = latest_versions.get(stable_godot_version);
match latest {
Some(latest) => {
// If yes, we compare our currently-iterated-over version
// with the one in the hashmap, and if it is greater, we
// set it as the latest
let godot_version = Versioning::new(godot_version.as_str()).unwrap();
if &godot_version > latest {
latest_versions.insert(stable_godot_version, godot_version);
}
},
// If not, we simply set it as the latest.
None => { latest_versions.insert(stable_godot_version, Versioning::new(godot_version.as_str()).unwrap()); },
}
}
}
}
VERSIONS = latest_versions.values().into_iter().map(|v| v.to_string()).collect();
Notably, version comparison is done by overloaded operators on versions::Versioning. I still am not very comfortable with this code in my codebase, so I wonder if there's a way to improve/refactor it?
Answer: I would start by parsing all of the version strings into some sort of Version struct. I might define it something like:
struct Version {
major: u32,
minor: u32,
patch: Option<u32>
}
You already do use versions::Versioning which you could parse into, but looking at that crate it looks to be trying to support any kind of version format and consequently lacks functionality which would be more useful for a constrained version.
for godot_version in VERSIONS {
for stable_godot_version in &STABLES {
// We check if it is one of the versions marked as stable in our
// custom data set
if godot_version.starts_with(stable_godot_version) {
This is going to be somewhat inefficient because you loop over all the STABLES looking for a match. It would be more efficient if you created a set of stable versions and then checked if the corresponding major.minor for a particular godot version was in there.
What I'd do is sort all of the versions. (Actually, if you are scraping it, maybe they already come sorted). Then I'd group_by from itertools to group versions with the same major.minor version. Then you could filter by stable versions and then map to return the last item of the group. | {
"domain": "codereview.stackexchange",
"id": 45254,
"tags": "rust"
} |
Coefficient of friction | Question: I know that the coefficient of friction = f/N, and I also know that it depends on the surface area of contact and the material.
But my doubt is: from this formula, if N is halved then the coefficient of friction should increase, but that does not happen; it remains unchanged. Why?
If it's because f might adjust itself accordingly to keep it constant, then what about in the case of kinetic friction?
Only static friction is self adjusting, right!
Answer: First of all,
friction is the force that arises between surfaces when there is relative motion between them. And when the system stays in equilibrium even after a force is applied in some direction, we say static friction is acting, and this is the only self-adjusting friction.
now, as you have written, coefficient of static friction = f/N (where f is the applied force and N is the normal reaction). This equation is ONLY valid if the system is in equilibrium and the force acts in the direction in which there would be relative motion.
I also know that it depends on the surface area of contact and the material.
this is the biggest fallacy: the coefficient of friction never depends on the area of contact (think about this point, and comment if it is confusing).
if N is halved then coefficient of friction should increase but that does not happen, it remains unchanged, why?
In the picture I have shown, N can be halved in only one way: by applying a force in the upward direction. Doing this reduces N, but then, to keep the system in equilibrium in the horizontal direction, the force required is also reduced (because friction $= \mu N$, and N is reduced), so the ratio f/N, i.e. $\mu$, stays the same.
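A quick numeric sketch of the point above (my own illustration, with made-up numbers): halving N also halves the maximum static friction force f, so the ratio mu = f/N never changes:

```python
mu = 0.4                                  # coefficient of static friction (a material property)
for N in (100.0, 50.0):                   # normal force in newtons; the second case is N halved
    f_max = mu * N                        # maximum friction force the surface can supply
    assert abs(f_max / N - mu) < 1e-12    # the ratio f/N is the same in both cases
    print(N, f_max)                       # the friction force scales down together with N
```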
Only static friction is self adjusting, right!
yup :-) | {
"domain": "physics.stackexchange",
"id": 29551,
"tags": "friction"
} |
Creation/annihilation operators relation in equation 2.46 of Peskin and Schroeder | Question: We can get the following relations from the creation/annihilation operators:
$$
H^n a_p = a_p (H - E_p)^n,
$$
and
$$
H^n a_p^{\dagger} = a_p^{\dagger} (H + E_p)^n.
$$
How do we get that
$$
e^{iHt} a_p e^{-iHt} = a_p e^{-iE_p t}
$$
and
$$
e^{iHt} a_p^{\dagger} e^{-iHt} = a_p^{\dagger} e^{iE_p t}?
$$
Actually, why can we say that
$$
\phi(x) = \phi(\vec{x},t) = e^{iHt}\phi(\vec{x})e^{-iHt}
$$
Answer: Just expand the exponential $e^{iHt}$.
$$\begin{aligned}e^{iHt} a_p e^{-iHt}&=\sum_{n=0}^{\infty}\frac{(iHt)^n}{n!}a_p\ e^{-iHt},\\
&=\sum_{n=0}^{\infty}a_p\frac{(iHt-itE_p)^n}{n!}e^{-iHt},\\
&=a_p e^{it(H-E_p)}e^{-iHt},\\
&=a_p e^{-itE_p}.\end{aligned}$$
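This algebra is easy to check numerically for a single harmonic-oscillator mode. The NumPy sketch below is my own illustration (not from the text): with $\omega = 1$ I take $H = a^\dagger a$, whose eigenvalues are $n$ (the constant $\tfrac12$ would only contribute scalar phases that cancel between $e^{iHt}$ and $e^{-iHt}$), and $E = 1$:

```python
import numpy as np

dim, t, E = 8, 0.7, 1.0                         # truncation size, time, mode energy
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)    # annihilation operator: a|n> = sqrt(n)|n-1>
n = np.arange(dim)                              # H = a† a is the number operator, diagonal with eigenvalues n

U = np.diag(np.exp(1j * E * t * n))             # e^{iHt}; H is diagonal, so the exponential is elementwise
lhs = U @ a @ U.conj().T                        # e^{iHt} a e^{-iHt}
rhs = a * np.exp(-1j * E * t)                   # a e^{-iEt}
assert np.allclose(lhs, rhs)
```

The identity holds exactly even in the truncated space, because the commutation relation $[H, a] = -a$ survives the truncation.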
Similarly you can compute the same thing for the creation operator. | {
"domain": "physics.stackexchange",
"id": 80667,
"tags": "quantum-mechanics, homework-and-exercises, operators, hamiltonian, commutator"
} |
How can I achieve the following reaction where the iodide gets replaced by an aldehyde? | Question:
The question is what other chemicals need to be added so that the reaction is possible.
Although the solution shouldn't be that hard, I couldn't find the answer to this one.
Answer: Note that the reaction adds a carbon to the structure.
This can be done in two steps:
React the iodide with NaCN or KCN in DMSO to form the nitrile
Reduce the nitrile to the aldehyde with diisobutylaluminium hydride (DIBAL-H) | {
"domain": "chemistry.stackexchange",
"id": 17075,
"tags": "organic-chemistry"
} |
Prove that the Bures metric satisfies a contractive property and has unitary invariance | Question: In this paper, the authors assert that the Bures metric satisfies a contractive property and has unitary invariance. These terms aren't defined or proved in the paper, nor is any reference given for a definition or proof. Can anyone provide a concrete definition of these terms (and a proof that the Bures metric has these properties) or a place where I can find such details?
Answer: Contractivity refers to the fact that, under the action of any CPTP map $\mathcal{E}$, a given metric satisfies $M(\rho,\sigma) \geq M(\mathcal{E}(\rho), \mathcal{E}(\sigma))$. Unitary invariance means that the above is an equality when $\mathcal{E}$ is a unitary channel (and is actually a consequence of the standard contractivity).
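Both properties are easy to check numerically. The sketch below is my own illustration (the channel and states are chosen arbitrarily): it uses the Uhlmann fidelity $F(\rho,\sigma)=\big(\mathrm{tr}\sqrt{\sqrt{\rho}\,\sigma\sqrt{\rho}}\big)^2$ and the Bures distance $D_B=\sqrt{2(1-\sqrt{F})}$, and verifies contractivity under a qubit dephasing channel and exact invariance under a random unitary:

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(42)

def rand_rho(d=2):                       # random full-rank density matrix
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

def fidelity(rho, sigma):                # Uhlmann fidelity (squared-trace convention)
    s = sqrtm(rho)
    return np.trace(sqrtm(s @ sigma @ s)).real ** 2

def bures(rho, sigma):                   # Bures distance derived from the fidelity
    return np.sqrt(max(0.0, 2.0 * (1.0 - np.sqrt(fidelity(rho, sigma)))))

rho, sigma = rand_rho(), rand_rho()
Z = np.diag([1.0, -1.0])
dephase = lambda r: 0.6 * r + 0.4 * Z @ r @ Z    # a CPTP (dephasing) map
Q, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))  # random unitary

# contractivity: a CPTP map can only bring states closer in Bures distance
assert bures(dephase(rho), dephase(sigma)) <= bures(rho, sigma) + 1e-9
# unitary invariance: a unitary channel preserves the distance exactly
assert np.isclose(bures(Q @ rho @ Q.conj().T, Q @ sigma @ Q.conj().T),
                  bures(rho, sigma))
```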
The fact that the Bures metric satisfies the above follows from the fact that the fidelity does (with $\leq$ rather than $\geq$), which is proved e.g. in Nielsen and Chuang, Theorem 9.6. | {
"domain": "quantumcomputing.stackexchange",
"id": 3762,
"tags": "error-correction, trace-distance, quantum-metrology, metrics, fubini-study-metric"
} |
Searchable list of Kaggle challenges | Question: Is there a way to search Kaggle for a list of current and prior challenges, or an external site that does that?
When I search on Kaggle it only brings up solution notebooks and datasets; there doesn't seem to be a filter on the challenge page.
Answer: Have you checked the competitions page?
There you will see all the active and concluded contests:
https://www.kaggle.com/competitions | {
"domain": "datascience.stackexchange",
"id": 8759,
"tags": "kaggle"
} |
Generalisation of the statement that a monoid recognizes language iff syntactic monoid divides monoid | Question: Let $A$ be a finite alphabet. For a given language $L \subseteq A^{\ast}$ the syntactic monoid $M(L)$ is a well-known notion in formal language theory. Furthermore, a monoid $M$ recognizes a language $L$ iff there exists a morphism $\varphi : A^{\ast} \to M$ such that $L = \varphi^{-1}(\varphi(L)))$.
Then we have the nice result:
A monoid $M$ recognizes $L \subseteq A^{\ast}$ iff $M(L)$ is a homomorphic image of a submonoid of $M$ (written as $M(L) \prec M$).
The above is usually stated in the context of regular languages, where the monoids involved are all finite.
Now suppose we substitute $A^{\ast}$ with an arbitrary monoid $N$, and we say that a subset $L \subseteq N$ is recognized by $M$ if there exists a morphism $\varphi : N \to M$ such that $L = \varphi^{-1}(\varphi(L))$. Then we still have that if $M$ recognizes $L$, then $M(L) \prec M$ (see S. Eilenberg, Automata, Machines and Languages, Volume B), but does the converse hold?
In the proof for $A^{\ast}$ the converse is proven by exploiting the property that if $N = \varphi(M)$ for some morphism $\varphi : M \to N$ and $\psi : A^{\ast} \to N$ is also a morphism, then we can find $\rho : A^{\ast} \to M$ such that $\varphi(\rho(u)) = \psi(u)$ holds, simply by choosing some $\rho(x) \in \varphi^{-1}(\psi(x))$ for each $x \in A$ and extending this to a morphism from $A^{\ast}$ to $M$. But this does not work for arbitrary monoids $N$, so I expect the above converse to be false then. And if it is false, for what kinds of monoids besides $A^{\ast}$ is it still true, and have those monoids received any attention in the research literature?
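As a toy illustration of recognition in the free-monoid case (a made-up example, not from the question): the language $L$ of words over $\{a,b\}$ with an even number of $a$'s has syntactic monoid $\mathbb{Z}/2$, and the larger monoid $\mathbb{Z}/4$, of which $\mathbb{Z}/2$ is a homomorphic image, also recognizes $L$:

```python
from itertools import product

# Toy example: L = {w in {a,b}* : w has an even number of a's}.
# Its syntactic monoid is Z/2, but the larger monoid Z/4 (which has Z/2 as a
# homomorphic image via x -> x mod 2) also recognizes L: take
# psi(w) = (#a's in w) mod 4, so that L = psi^{-1}({0, 2}).
def in_L(w):
    return w.count("a") % 2 == 0

def psi(w):
    return w.count("a") % 4

words = ["".join(p) for n in range(8) for p in product("ab", repeat=n)]
accepting = {psi(w) for w in words if in_L(w)}          # the image psi(L)
consistent = all(in_L(w) == (psi(w) in accepting) for w in words)
print(accepting, consistent)   # {0, 2} True
```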
Answer: Yes, these monoids have received attention in the research literature and actually lead to difficult questions.
Definition. A monoid $N$ is called projective if the following property holds: if $f:N \to R$ is a monoid morphism and $h:T \to R$ is a surjective morphism, then there exists a morphism $g:N \to T$ such that $f = h \circ g$.
You can find a long discussion on projective monoids in [1], right after Definition 4.1.33. It is shown in particular that every projective finite semigroup is a band (a semigroup in which every element is idempotent). But the converse is not true and it is actually an open problem to decide whether a finite semigroup is projective.
[1] J. Rhodes and B. Steinberg, The $q$-theory of finite semigroups. Springer Monographs in Mathematics. Springer, New York, 2009. xxii+666 pp. ISBN: 978-0-387-09780-0 | {
"domain": "cstheory.stackexchange",
"id": 4409,
"tags": "fl.formal-languages, automata-theory, regular-language, algebra, monoid"
} |
Two-point correlation function in functional integral | Question: On Peskin and Schroeder's QFT book, page 284, the book derived two-point correlation functions in terms of function integrals.
$$\left\langle\Omega\left|T \phi_H\left(x_1\right) \phi_H\left(x_2\right)\right| \Omega\right\rangle=$$ $$\lim _{T \rightarrow \infty(1-i \epsilon)} \frac{\int \mathcal{D} \phi \phi\left(x_1\right) \phi\left(x_2\right) \exp \left[i \int_{-T}^T d^4 x \mathcal{L}\right]}{\int \mathcal{D} \phi \exp \left[i \int_{-T}^T d^4 x \mathcal{L}\right]} ,\tag{9.18} $$
Then, over the following several pages, the book calculates the numerator and denominator separately using discrete Fourier series, and then takes the continuum limit.
Finally, the book obtains the result of evaluating eq. (9.18) in eq. (9.27):
$$ \left\langle 0\left|T \phi\left(x_1\right) \phi\left(x_2\right)\right| 0\right\rangle=$$
$$\int \frac{d^4 k}{(2 \pi)^4} \frac{i e^{-i k \cdot\left(x_1-x_2\right)}}{k^2-m^2+i \epsilon}=D_F\left(x_1-x_2\right). \tag{9.27}$$
I am troubled by the L.H.S. of (9.27): why did $\left\langle\Omega\left|T \phi_H\left(x_1\right) \phi_H\left(x_2\right)\right| \Omega\right\rangle$ become $\left\langle 0\left|T \phi\left(x_1\right) \phi\left(x_2\right)\right| 0\right\rangle$?
Also, the $\phi$ in eq. (9.27) may be in the interaction picture, which in general differs from the Heisenberg-picture field $\phi_H$.
A parallel analysis is also in Peskin and Schroeder's QFT book, on page 87, in eq.(4.31)
$$ \langle\Omega|T\{\phi(x) \phi(y)\}| \Omega\rangle=\lim _{T \rightarrow \infty(1-i \epsilon)} \frac{\left\langle 0\left|T\left\{\phi_I(x) \phi_I(y) \exp \left[-i \int_{-T}^T d t H_I(t)\right]\right\}\right| 0\right\rangle}{\left\langle 0\left|T\left\{\exp \left[-i \int_{-T}^T d t H_I(t)\right]\right\}\right| 0\right\rangle} .\tag{4.31}$$
Also, why do we only consider the numerator of (4.31) in the later analysis?
Answer: Perhaps it would be more symmetric if we write the denominator $\langle\Omega|\Omega\rangle=1$ on the left-hand sides explicitly. The denominators on the right-hand sides are important, and cannot in principle be omitted in later analysis.
If we start with eq. (4.31), here the denominator on the right-hand side is crucial, since when we apply the theorem of Gell-Mann and Low a non-trivial factor is cancelled. Here $|\Omega\rangle$ and $|0\rangle$ denote the vacuum in the interaction and the free theory, respectively.
Similarly, in eq. (9.18) the denominator is a convenient way to absorb an over-all normalization factor in the path integral measure.
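A finite-dimensional toy version (illustrative, with an arbitrary $4\times 4$ lattice action, not from the book) of why the normalization cancels: for a Gaussian weight $e^{-\frac{1}{2}\phi^{T}A\phi}$ the ratio of "path integrals" gives $\langle\phi_i\phi_j\rangle = (A^{-1})_{ij}$, independent of any overall measure factor:

```python
import numpy as np

# Finite-dimensional analogue of eq. (9.18): for a Euclidean Gaussian "action"
# S = (1/2) phi^T A phi on 4 lattice sites, the ratio of integrals
#   <phi_i phi_j> = (integral of phi_i phi_j e^{-S}) / (integral of e^{-S})
# equals (A^{-1})_{ij}, so any overall factor in the measure cancels.
A = np.array([[2., -1, 0, 0],
              [-1, 2, -1, 0],
              [0, -1, 2, -1],
              [0, 0, -1, 2.]])

# Monte Carlo estimate of the ratio: sample phi ~ N(0, A^{-1}) and average.
rng = np.random.default_rng(0)
samples = rng.multivariate_normal(np.zeros(4), np.linalg.inv(A), size=200_000)
mc = samples.T @ samples / len(samples)
print(np.allclose(mc, np.linalg.inv(A), atol=0.05))   # True
```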
Eq. (9.27) refers to a free theory, so the Heisenberg and interaction pictures coincide. | {
"domain": "physics.stackexchange",
"id": 91065,
"tags": "quantum-field-theory, path-integral, vacuum, interactions, correlation-functions"
} |
Finding downward force of lever bar | Question: I am working on a DIY project: I am trying to replicate a bar compressor which for some reason is only sold in Europe, so I thought it would be fun to make my own without having to waste so much material. I simply want to make sure the force going down with the plate roughly replaces a person jumping in the trashcan, i.e. somewhere above 80 lbs. Since I am no engineer, I was wondering how one would go about finding the force applied going down to the trash if about 50 lbs is pulling down the lever. I understand that materials are key to this, but as an amateur I just wanted to know how to get started. Sorry for posting here, but I thought this page would suit the question the most. Thank you.
Answer: The input force is applied 18.5+20.5 ft from the fulcrum and the load is 18.5 ft from the fulcrum.
This means the force applied to the load is $\frac{(18.5+20.5)}{18.5} = 2.1$ times the input force, so an input force of 50 lbs gives an output force of about 105 lbs.
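The lever arithmetic can be written out as a tiny script (the distances and input force are the ones given in the answer):

```python
# Lever arithmetic from the answer above (distances in ft, forces in lbs).
effort_arm = 18.5 + 20.5       # distance from fulcrum to where you pull down
load_arm = 18.5                # distance from fulcrum to the plate
input_force = 50.0

mechanical_advantage = effort_arm / load_arm
output_force = input_force * mechanical_advantage
print(round(mechanical_advantage, 2), round(output_force, 1))  # 2.11 105.4
```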
However, this assumes that all forces are applied perpendicular to the line between the point of application and the fulcrum. If this is not the case, then you need to decompose the forces into tangential and normal components. | {
"domain": "engineering.stackexchange",
"id": 1330,
"tags": "mechanical-engineering, civil-engineering, applied-mechanics, statics, dynamics"
} |
Can anybody please tell how to use web camera in ROS? | Question:
I want to use webcamera to test images in ROS. I am programming using OPENCV and want to see the live video stream from my laptop in a window on ROS. When I try to launch a file whose contents are given below, I get an error that "ERROR: cannot launch node of type [uvc_cam/uvc_cam_node]: can't locate node [uvc_cam_node] in package [uvc_cam]
No processes to monitor :".
I launch the file using the command : roslaunch ishan_vision uvc_cam.launch device:=/dev/video0
uvc_cam.launch file :
<node name="uvc_cam_node" pkg="uvc_cam" type="uvc_cam_node" output="screen">
<remap from="camera/image_raw" to="camera/rgb/image_color" />
<param name="device" value="$(arg device)" />
<param name="width" value="320" />
<param name="height" value="240" />
<param name="frame_rate" value="20" />
<param name="exposure" value="0" />
<param name="gain" value="100" />
</node>
Can anybody please help?
Thanks
Originally posted by ish45 on ROS Answers with karma: 151 on 2014-09-20
Post score: 1
Answer:
It looks like roslaunch can't find the node, which shouldn't happen if you've:
installed uvc_camera: sudo apt-get install ros-<distro>-uvc-camera
sourced your ROS installation: source /opt/ros/<distro>/setup.bash
Originally posted by paulbovbel with karma: 4518 on 2014-09-20
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by ish45 on 2014-09-21:
Thanks a lot. Now, its working. I did not install the uvc camera thats why it was giving the error. Thanks.
Comment by arzo on 2015-03-20:
sudo apt-get install ros-`rosversion -d`-uvc-camera is shorter version
Comment by aaditya_saraiya on 2017-08-24:
Dear Sir,
I have installed uvc-camera and have even sourced my ros installation.
However, roslaunch is still unable to find the launch file.
Using ROS Kinetic | {
"domain": "robotics.stackexchange",
"id": 19467,
"tags": "ros, uvc-cam"
} |
Gradient descent parameter estimation Package for R | Question: I am looking for a package that does gradient descent parameter estimation in R, maybe with some bootstrapping to get confidence intervals. I wonder if people call it something different here as I get almost nothing on my searches, and the one article I found was from someone who rolled their own.
It is not that hard to implement, but I would prefer to use something standard.
Answer: Ok, after a lot of looking I found the "optim" routine, which is in "stats", one of the packages that is always loaded. It has quite a few methods, including conjugate gradients, BFGS, and a few others, and it worked well on the first few examples I tried. Strangely, it doesn't seem to get a lot of attention. I guess optimization people tend to use Matlab.
I knew there had to be something. | {
"domain": "datascience.stackexchange",
"id": 278,
"tags": "r, gradient-descent"
} |
Where can I find information about monthly seasonal change in a given area? | Question: I have searched Google for almost an hour with no result. I'm looking for something like, "Indonesia, January to August is rain season, September to December is hot season".
I can't use the old information in books about seasonal change, since the seasons now seem to have shifted, and I can't find any updated information anywhere.
Answer: You can try climatemps.com for easy access and plots. There they have a map you can click on or a list by countries.
More data and analysis tools are available at the KNMI Climate Explorer, a more reliable and scientifically sound source.
This is one of the outputs for Jakarta from jakarta.climatemps.com | {
"domain": "earthscience.stackexchange",
"id": 1653,
"tags": "meteorology, climate, seasons"
} |
What is the significance of momentum? | Question: I just want to get an idea what momentum is. I know the mathematical meaning that momentum is $mv$ where $v$ is velocity. But I don't know its significance. Like I know that Acceleration is how velocity is changing with respect to time, I wanted to get feel of what momentum really is?
Answer: Momentum is what makes it "tough" to stop moving things.
If you stand with an apple in each hand, then you feel their weight. If you throw one of them upwards and catch it again, it feels "heavier" in that moment. What you feel in addition to the weight is the momentum.
Stopping a motion can be an easy or tough task depending on the momentum, which encompasses both the speed $v$ to decelerate from as well as the mass $m$ that resists this deceleration.
At the same time there happens to exist a conservation law regarding momentum: the total momentum before any event equals the total momentum after it; momentum is always conserved, $\sum p_{\text{before}}=\sum p_{\text{after}}$. So apart from the physical significance of momentum, it also happens to be a very useful tool.
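As a concrete toy illustration of the conservation law (with made-up masses and velocities), consider a one-dimensional perfectly inelastic collision:

```python
# Toy 1D perfectly inelastic collision (made-up masses and velocities):
# the two bodies stick together, and total momentum is unchanged.
m1, v1 = 2.0, 3.0      # kg, m/s
m2, v2 = 1.0, -1.5

p_before = m1 * v1 + m2 * v2
v_after = p_before / (m1 + m2)     # common velocity after sticking together
p_after = (m1 + m2) * v_after
print(p_before, p_after)           # 4.5 4.5
```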
That's why we see it and learn about it and use it all the time. | {
"domain": "physics.stackexchange",
"id": 43211,
"tags": "newtonian-mechanics, momentum"
} |