anchor | positive | source |
|---|---|---|
Why is the inside of the Earth so hot? | Question: I have heard that the Earth is made up of four layers, being the crust, the mantle, the outer core and the inner core. I have also heard that the Earth's temperature increases as you move from the crust to the inner core, with the inner core having a temperature of 4700 degrees.
Why is it that the inside of the Earth is so hot compared to the outside of the Earth?
Answer: This is a very good question. There are a few main heat sources: heat (and work) left over from the formation of the Earth, work (potential energy) released by dense iron sinking to the centre of the Earth and forming a core, and radioactive decay. Since the thermal diffusivity of the materials that make up the Earth is low, heat transport is very slow and thus the planet retains a significant amount of its heat.
Another reason why the Earth is so hot is that mantle convection is a very inefficient way to transport that heat, so the Earth loses energy very slowly.
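To see just how slow conductive heat transport is, here is an order-of-magnitude sketch of the diffusion timescale $t \sim R^2/\kappa$ (the diffusivity value is an assumed typical one for rock, not taken from this answer):

```python
# Order-of-magnitude sketch of the conductive cooling timescale t ~ R^2 / kappa.
# kappa is an assumed typical thermal diffusivity for rock, ~1e-6 m^2/s.
R = 6.371e6              # Earth's radius, m
kappa = 1.0e-6           # thermal diffusivity, m^2/s
seconds_per_year = 3.156e7

t_diffusion = R**2 / kappa                 # ~4e19 s
t_years = t_diffusion / seconds_per_year   # ~1e12 years, far more than Earth's age
```

Even as a crude estimate, this shows why conduction alone would take vastly longer than the age of the Earth to shed its internal heat.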
We have an idea of how hot the interior of the Earth might be from laboratory experiments and also from volcanic eruptions: how much energy (i.e. what temperature) does it take to melt gabbro (mantle rock)?
Interestingly enough, the Earth is cooling, and the inner core / outer core boundary is a freezing front (the liquid outer core turning into the solid inner core), though a very hot freezing front. | {
"domain": "earthscience.stackexchange",
"id": 60,
"tags": "geophysics, temperature, geothermal-heat"
} |
How was it determined that the electron observed in the cathode ray experiments was the same particle that gave an atom its balancing negative charge? | Question: How was it determined that the electron observed in the cathode ray experiments was the same particle that gave an atom its balancing negative charge?
Couldn't there have been an entirely different negatively charged particle? What justified that assumption?
Answer: The idea that the atom contains small indivisible quantities of electric charge was developed in the 19th century by various contributors (e.g. Faraday, Stoney, Laming, Weber, and Helmholtz).
When Thomson (Nobel Prize in physics 1906) discovered the electron in 1897 and estimated its mass and charge, he used the free electrons of cathode rays.
The important experiment that showed that atoms contain electrons was the discovery of the Zeeman effect in 1896, i.e. the splitting of spectral lines in the presence of a magnetic field. Lorentz explained the phenomenon with his electron theory. In 1902, Lorentz and Zeeman jointly received the Nobel Prize in physics. | {
"domain": "chemistry.stackexchange",
"id": 3517,
"tags": "physical-chemistry, atoms, electrons, history-of-chemistry"
} |
Gauss Law Difference between Griffiths and Jackson | Question: As per Jackson, Gauss's Law is defined as:
$$\nabla \cdot \vec E=4 \pi \rho/\epsilon$$
Now as per Griffiths, the same equation is defined as:
$$\nabla \cdot \vec E= \rho/\epsilon$$
So the $\pi$ part is missing in Griffiths, which is written in SI units. Is this difference due to Jackson being written in CGS whereas Griffiths is in SI?
Answer:
So the $\pi$ part is missing in Griffiths, which is written in SI units. Is this difference due to Jackson being written in CGS whereas Griffiths is in SI?
Yes.
Further comment: there are also some newer editions of Jackson's book in SI. Also, to be more precise, the original Jackson book uses CGS-Gaussian units. | {
"domain": "physics.stackexchange",
"id": 90162,
"tags": "electromagnetism"
} |
Force and Torque | Question: Think of a uniform sphere. Sometimes when a force is applied to the sphere it not only moves but also spins, so there is a torque.
But is it possible to calculate which part of the applied force is responsible for movement and which part works as torque to give it a spin?
Answer: I will be explaining with respect to the below free body diagram.
You can see that I have split the applied force into its components as per the axis shown.
For translational motion
For this you have to consider the sphere as an object with its mass concentrated at the center of mass O (the center).
You can then apply $a=\frac{F}{m}$, where $a$ is the translational acceleration.
For rotational motion
You probably know that $\tau=I\alpha$.
Here $\tau$ is provided by the component $F\sin\theta$ acting at a distance $r$ from the center of mass, and $\alpha$ is the rotational (angular) acceleration.
Thus $\tau=(F\sin\theta)(r)$
Initially there might be slipping, but eventually, when pure rolling starts, $a$ will equal $r\alpha$, i.e. $a=r\alpha$.
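A quick numerical sketch of this decomposition (all numbers are made up for illustration; the moment of inertia $I=\tfrac{2}{5}mr^2$ of a uniform solid sphere is standard):

```python
import math

# Made-up values: force F applied at angle theta, at distance r from the
# center of a uniform solid sphere of mass m and radius r.
m = 2.0                      # kg
r = 0.1                      # m
F = 5.0                      # N
theta = math.radians(30)

# Translational part: the full force accelerates the center of mass.
a = F / m                    # 2.5 m/s^2

# Rotational part: only F*sin(theta) at lever arm r produces torque.
I = 2.0 / 5.0 * m * r**2     # moment of inertia of a uniform solid sphere
tau = F * math.sin(theta) * r
alpha = tau / I              # angular acceleration
```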
The solution highlights which part of the applied force is responsible for movement and which part works as torque to give it the spin. I hope that this clarifies your doubt. Any suggestion or query, use the comment section :) | {
"domain": "physics.stackexchange",
"id": 29776,
"tags": "newtonian-mechanics, forces, rotational-dynamics, torque"
} |
Why is the horizontal force on sled $2\cdot F$ rather than just $1 \cdot F$? | Question: I was wondering why the net horizontal force on the sled was $2 \cdot F$ rather than just $1 \cdot F$. Is that the case because of the reactionary force of tension on the pulley? I think my understanding of tension is off.
Answer: Yes. The tension in the string is $F$ and this acts twice on the sled/pulley because the string is wrapped around the pulley. So the net force on the sled/pulley is $2F$ to the right.
If you are drawing a free-body diagram for the pulley alone, remember that we are told that the pulley is massless. So the net force on the pulley must be zero, even though it is accelerating to the right and so is not in equilibrium. The string exerts a force $2F$ to the right on the pulley, so the sled must also exert a force $2F$ to the left on the pulley. And so the equal and opposite force that the pulley exerts on the sled is $2F$ to the right.
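These force assignments can be checked numerically against Newton's law for the centre of mass of the whole system (arbitrary made-up values, just a sketch):

```python
F, M, m = 10.0, 4.0, 1.0   # made-up: applied force, sled/pulley mass, small mass

a1 = 2 * F / M        # sled/pulley: the tension F acts twice, net force 2F
a2 = F / m            # the mass m feels force F (to the left)
a0 = F / (M + m)      # centre of mass: net external force on the system is F

# Newton's law for the centre of mass: (M + m) a0 = M a1 - m a2
assert abs((M + m) * a0 - (M * a1 - m * a2)) < 1e-12
```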
Another way to get the same result is to consider the motions of the mass $m$, the sled/pulley and the centre of mass of the sled/pulley/mass considered as a single system. Suppose the centre of mass of the whole system accelerates to the right with acceleration $a_0$; the sled accelerates to the right with acceleration $a_1$; and the mass accelerates to left with acceleration $a_2$. Then we have
$(M+m)a_0 = Ma_1 - ma_2$
But we know that $ma_2=F$ because there is a force $F$ acting to the left on the mass. We also know that $(M+m)a_0=F$ because the net force on the whole sled/pulley/mass system is $F$ to the right (when we consider the whole system, the tension in the string is an internal force so it cancels out and can be ignored). So we have
$F = Ma_1 - F \\ \Rightarrow Ma_1 = 2F$
which confirms that the net force acting on the sled/pulley is $2F$ to the right. | {
"domain": "physics.stackexchange",
"id": 71493,
"tags": "homework-and-exercises, newtonian-mechanics, kinematics"
} |
What is the isomer distribution in monosubstituted fluorobullvalene? | Question: Bullvalene (tricyclo[3.3.2.02,8]deca-3,6,9-triene) is a fluxional molecule able to interconvert any two carbon atoms through a series of degenerate Cope rearrangements (for more information, see the Wikipedia article on bullvalene). At slightly above room temperature bullvalene produces a single line in its 1H-NMR spectrum. Considering fluorobullvalene, there are 4 possible isomers:
1-fluorotricyclo[3.3.2.02,8]deca-3,6,9-triene, i.e. the fluorine could be attached to any of the 3 equivalent cyclopropyl carbons
3-fluorotricyclo[3.3.2.02,8]deca-3,6,9-triene, i.e. the fluorine could be attached to any of the 3 equivalent olefinic carbons alpha to the cyclopropane ring
4-fluorotricyclo[3.3.2.02,8]deca-3,6,9-triene, i.e. the fluorine could be attached to any of the 3 equivalent olefinic carbons beta to the cyclopropane ring
5-fluorotricyclo[3.3.2.02,8]deca-3,6,9-triene, i.e. the fluorine could be attached to the unique tris-allylic carbon
Again, at room temperature or thereabouts, only a single line is observed in the 1H, 13C and 19F-NMR spectra. I remember once reading that about 80% of the fluorine is in the tris-allylic position (4), most of the remainder is attached to the two olefinic carbons (2 and 3) and a very small amount is attached at the cyclopropyl position (1). This was often cited as an example to support the argument that fluorine prefers to form bonds to carbon when the involved carbon orbital has high p-content (low s-content).
Does anyone know what the isomer distribution is? Those 4 numbers and a reference would be very helpful.
Answer: Extensive NMR studies of substituted bullvalenes were done in the 1960's and 70's, by Oth et al. Much of their original work was published in German. Thankfully, some of these topics were revisited (in English!) during the 1990's by Luz et al., and these included some rather nice low temperature 19F and 13C NMR studies in both solution and solid state (and make it much quicker and easier for me to read).
Let us assign the following isomers 1–4, as is done in the question:
To answer your question:
Oth et al.[1] studied fluorobullvalene at −25 °C, and found no evidence by 19F of isomer 1. The equilibrium populations of isomers 4:3:2 was 78%:7%:15%. However, they could not distinguish between isomers 3 and 2, so it could be 78:15:7. In any case, predominantly isomer 4. They do state, however, that isomer 1 must exist ["Das C-Isomer muss also durchlaufen werden."], as the Cope rearrangement mechanism for isomer 4 to go to isomer 3 must go through isomer 1.
Luz et al.[2] studied the solution state Cope rearrangement of a number of monosubstituted bullvalenes.
At −30 °C, they identify the ratio of isomers 4:3:2 to be 87%:3.4%:9.6%. Again no evidence of isomer 1 by 19F at this temperature. However, at −55 °C, they observe a peak in the 19F spectrum that they ascribe to isomer 1, with an intensity of 0.2% relative to isomer 4. This peak broadens beyond detection at −40 °C. The ratios of isomers at −55 °C, then, was found to be 1.0:0.018:0.065:0.002 for isomers 4:3:2:1.
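For convenience, those −55 °C ratios convert to percentage populations by simple normalisation:

```python
# Relative populations at -55 degC from Luz et al., normalised to percentages.
ratios = {"4": 1.0, "2": 0.065, "3": 0.018, "1": 0.002}
total = sum(ratios.values())  # 1.085
percent = {k: 100 * v / total for k, v in ratios.items()}
# roughly: isomer 4 ~92.2%, isomer 2 ~6.0%, isomer 3 ~1.7%, isomer 1 ~0.2%
```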
And just out of interest:
Of course, the other monosubstituted bullvalenes all have different preferred isomer distributions in solution. In the solid state, all monosubstituted bullvalenes studied by Luz crystallize as well-ordered single isomer crystals. Fluorobullvalene[3] crystallizes exclusively as isomer 4. Cyanobullvalene, bullvalenecarboxylic acid[4], and (ethylthio)bullvalene[5] crystallize exclusively as isomer 3. Bromo- and iodobullvalene[6] crystallize entirely as isomer 2.
Useful references for you:
Oth, J. F. M.; Merényi, R.; Röttele, H.; Schröder, G. Fluor-, chlor- und jodbullvalen. Tetrahedron Lett. 1968, 9 (36), 3941–3946. DOI: 10.1016/S0040-4039(00)72372-0.
Poupko, R.; Zimmermann, H.; Müller, K.; Luz, Z. Dynamic NMR Investigation of the Cope Rearrangement in Solutions of Monosubstituted Bullvalenes. J. Am. Chem. Soc. 1996, 118 (34), 7995–8005. DOI: 10.1021/ja954004t.
Müller, K.; Zimmermann, H.; Krieger, C.; Poupko, R.; Luz, Z. Reaction Pathways in Solid-State Processes. 1. Carbon-13 NMR and X-ray Crystallography of Fluorobullvalene. J. Am. Chem. Soc. 1996, 118 (34), 8006–8014. DOI: 10.1021/ja954005l.
Müller, K.; Zimmermann, H.; Krieger, C.; Poupko, R.; Luz, Z. Reaction Pathways in Solid-State Processes. 2. Carbon-13 NMR and X-ray Crystallography of Cyanobullvalene and Bullvalenecarboxylic acid. J. Am. Chem. Soc. 1996, 118 (34), 8015–8023. DOI: 10.1021/ja954006d.
Luger, P.; Roth, K. X-Ray, n.m.r., and theoretical studies of the structures of (ethylthio)bullvalene. J. Chem. Soc., Perkin Trans. 2 (1972-1999) 1989, 649–655. DOI: 10.1039/P29890000649.
Luz, Z.; Olivier, L.; Poupko, R.; Müller, K.; Krieger, C.; Zimmermann, H. Bond Shift Rearrangement of Chloro-, Bromo-, and Iodobullvalene in the Solid State and in Solution. A Carbon-13 and Proton NMR Study. J. Am. Chem. Soc. 1998, 120 (22), 5526–5538. DOI: 10.1021/ja9728029. | {
"domain": "chemistry.stackexchange",
"id": 1724,
"tags": "organic-chemistry, nmr-spectroscopy, reference-request"
} |
Could not find resource '[]' in 'hardware_interface::EffortJointInterface' | Question:
Hello,
The answers at questions 1, 2, etc. do not seem to solve this. I am having trouble setting up basic controllers in Gazebo. Can you please help me out with this?
When I launch the launch file, I get an error saying Could not find resource 'gripper_joint' in 'hardware_interface::EffortJointInterface'.
I confirmed that my controller_manager and spawner are in the same namespace:
$ rosservice list | grep controller_manager
/braccio/controller_manager/list_controller_types
/braccio/controller_manager/list_controllers
...
$ rosservice list | grep spawner
/braccio/controller_spawner/get_loggers
/braccio/controller_spawner/set_logger_level
My launch file:
<launch>
<!-- Convert an xacro and put on parameter server -->
<param name="robot_description" command="$(find xacro)/xacro --inorder $(find braccio_arduino_ros_rviz)/urdf/braccio_arm.xacro" />
<!-- these are the arguments you can pass this launch file, for example paused:=true -->
<arg name="paused" default="false"/>
<arg name="use_sim_time" default="true"/>
<arg name="gui" default="true"/>
<arg name="headless" default="false"/>
<arg name="debug" default="false"/>
<!-- We resume the logic in empty_world.launch -->
<include file="$(find gazebo_ros)/launch/empty_world.launch">
<arg name="world_name" default="$(find braccio_gazebo)/worlds/pick_place_multi.world"/>
<arg name="debug" value="$(arg debug)" />
<arg name="gui" value="$(arg gui)" />
<arg name="paused" value="$(arg paused)"/>
<arg name="use_sim_time" value="$(arg use_sim_time)"/>
<arg name="headless" value="$(arg headless)"/>
</include>
<!-- Spawn a robot into Gazebo -->
<node name="spawn_urdf" pkg="gazebo_ros" type="spawn_model" args="-param robot_description -urdf -x 1.3 -y -0.3 -z 0.8 -model braccio" />
<!-- Load joint controller configurations from YAML file to parameter server -->
<rosparam file="$(find braccio_gazebo)/config/braccio_gazebo_joint_position.yaml" command="load"/>
<!-- load the controllers -->
<node name="controller_spawner" pkg="controller_manager" type="spawner" respawn="false"
output="screen" ns="/braccio" args="base_joint_pos_cntrl
shoulder_joint_pos_cntrl
elbow_joint_pos_cntrl
wrist_pitch_joint_pos_cntrl
wrist_roll_joint_pos_cntrl
sub_gripper_joint_pos_cntrl
gripper_joint_pos_cntrl "/>
<node name="robot_state_publisher" pkg="robot_state_publisher" type="robot_state_publisher"/>
</launch>
My YAML file:
braccio:
# Publish all joint states -----------------------------------
joint_state_controller:
type: joint_state_controller/JointStateController
publish_rate: 50
# Position Controllers ---------------------------------------
base_joint_pos_cntrl:
type: effort_controllers/JointPositionController
joint: base_joint
pid: {p: 100.0, i: 10.0, d: 1.0}
shoulder_joint_pos_cntrl:
type: effort_controllers/JointPositionController
joint: shoulder_joint
pid: {p: 100.0, i: 10.0, d: 1.0}
elbow_joint_pos_cntrl:
type: effort_controllers/JointPositionController
joint: elbow_joint
pid: {p: 100.0, i: 10.0, d: 1.0}
wrist_pitch_joint_pos_cntrl:
type: effort_controllers/JointPositionController
joint: wrist_pitch_joint
pid: {p: 100.0, i: 10.0, d: 1.0}
wrist_roll_joint_pos_cntrl:
type: effort_controllers/JointPositionController
joint: wrist_roll_joint
pid: {p: 100.0, i: 10.0, d: 1.0}
gripper_joint_pos_cntrl:
type: effort_controllers/JointPositionController
joint: gripper_joint
pid: {p: 100.0, i: 10.0, d: 1.0}
sub_gripper_joint_pos_cntrl:
type: effort_controllers/JointPositionController
joint: sub_gripper_joint
pid: {p: 100.0, i: 10.0, d: 1.0}
XACRO file for my robot using the Braccio Arduino arm:
<?xml version="1.0" ?>
<robot xmlns:xacro="http://www.ros.org/wiki/xacro"
xmlns:sensor="http://playerstage.sourceforge.net/gazebo/xmlschema/#sensor"
xmlns:controller="http://playerstage.sourceforge.net/gazebo/xmlschema/#controller"
xmlns:interface="http://playerstage.sourceforge.net/gazebo/xmlschema/#interface"
name="braccio">
<xacro:property name="damping_value" value="203.35"/>
<xacro:property name="friction_value" value="20.135"/>
<xacro:property name="kinect_box_length" value="0.3556" />
<xacro:property name="kinect_box_width" value="0.1778" />
<xacro:property name="kinect_box_height" value="0.0762" />
<xacro:property name="kinect_box_mass" value="1.274595" />
<xacro:macro name="inertial_matrix_cuboid" params="mass box_length box_width">
<inertial>
<mass value="${mass}" />
<inertia ixx="${mass/12*(box_length*box_length)}"
ixy = "0" ixz = "0"
iyy="${mass/12*(box_width*box_width)}" iyz = "0"
izz="${mass/12*(box_length*box_length + box_width*box_width)}" />
</inertial>
</xacro:macro>
<xacro:macro name="transmission_block" params="joint_name idx">
<transmission name="tran_${idx}">
<type>transmission_interface/SimpleTransmission</type>
<joint name="${joint_name}">
<hardwareInterface>hardware_interface/PositionJointInterface</hardwareInterface>
</joint>
<actuator name="motor__${idx}">
<hardwareInterface>hardware_interface/PositionJointInterface</hardwareInterface>
<mechanicalReduction>1</mechanicalReduction>
</actuator>
</transmission>
</xacro:macro>
<link name="world"/>
<joint name="world_joint" type="fixed">
<parent link="world"/>
<child link="base_link"/>
</joint>
<link name="base_link">
<visual>
<geometry>
<cylinder length="0.01" radius=".053" />
</geometry>
<material name="black"/>
<origin rpy="0 0 0" xyz="0 0 0"/>
</visual>
<inertial>
<mass value="2"/>
<inertia ixx="0.015" ixy="0" ixz="0" iyy="0.015" iyz="0" izz="0.015"/>
</inertial>
</link>
<link name="camera_link">
<xacro:inertial_matrix_cuboid mass="${kinect_box_mass}" box_length="${kinect_box_length}" box_width="${kinect_box_width}"/>
</link>
<joint name="camera_joint" type="fixed">
<origin xyz="0 0 1.2" rpy="0 0 0"/>
<parent link="base_link"/>
<child link="camera_link"/>
</joint>
<link name="braccio_base_link">
<visual>
<geometry>
<mesh filename="package://braccio_arduino_ros_rviz/stl/braccio_base.stl" scale="0.001 0.001 0.001"/>
</geometry>
<material name="orange"/>
<origin rpy="0 0 3.1416" xyz="0 0.004 0" />
</visual>
<inertial>
<mass value="2"/>
<inertia ixx="0.015" ixy="0" ixz="0" iyy="0.015" iyz="0" izz="0.015"/>
</inertial>
<collision>
<origin rpy="0 0 3.1416" xyz="0 0.004 0"/>
<geometry>
<mesh filename="package://braccio_arduino_ros_rviz/stl/braccio_base.stl" scale="0.001 0.001 0.001"/>
</geometry>
</collision>
</link>
<link name="shoulder_link">
<visual>
<geometry>
<mesh filename="package://braccio_arduino_ros_rviz/stl/braccio_shoulder.stl" scale="0.001 0.001 0.001"/>
</geometry>
<material name="orange"/>
<origin rpy="0 0 0" xyz="-0.0045 0.0055 -0.026"/>
</visual>
<inertial>
<mass value="0.1"/>
<inertia ixx="0.000348958333333" ixy="0" ixz="0" iyy="0.000348958333333" iyz="0" izz="3.125e-05"/>
</inertial>
<collision>
<origin rpy="0 0 0" xyz="-0.0045 0.0055 -0.026"/>
<geometry>
<mesh filename="package://braccio_arduino_ros_rviz/stl/braccio_base.stl" scale="0.001 0.001 0.001"/>
</geometry>
</collision>
</link>
<link name="elbow_link">
<visual>
<geometry>
<mesh filename="package://braccio_arduino_ros_rviz/stl/braccio_elbow.stl" scale="0.001 0.001 0.001"/>
</geometry>
<material name="orange"/>
<origin rpy="0 0 0" xyz="-0.0045 0.005 -0.025"/>
</visual>
<inertial>
<mass value="0.1"/>
<inertia ixx="0.000348958333333" ixy="0" ixz="0" iyy="0.000348958333333" iyz="0" izz="3.125e-05"/>
</inertial>
<collision>
<origin rpy="0 0 0" xyz="-0.0045 0.005 -0.025"/>
<geometry>
<mesh filename="package://braccio_arduino_ros_rviz/stl/braccio_elbow.stl" scale="0.001 0.001 0.001"/>
</geometry>
</collision>
</link>
<link name="wrist_pitch_link">
<visual>
<geometry>
<mesh filename="package://braccio_arduino_ros_rviz/stl/braccio_wrist_pitch.stl" scale="0.001 0.001 0.001"/>
</geometry>
<material name="orange"/>
<origin rpy="0 0 0" xyz="0.003 -0.0004 -0.024"/>
</visual>
<inertial>
<mass value="0.1"/>
<inertia ixx="0.000348958333333" ixy="0" ixz="0" iyy="0.000348958333333" iyz="0" izz="3.125e-05"/>
</inertial>
<collision>
<origin rpy="0 0 0" xyz="0.003 -0.0004 -0.024"/>
<geometry>
<mesh filename="package://braccio_arduino_ros_rviz/stl/braccio_wrist_pitch.stl" scale="0.001 0.001 0.001"/>
</geometry>
</collision>
</link>
<link name="wrist_roll_link">
<visual>
<geometry>
<mesh filename="package://braccio_arduino_ros_rviz/stl/braccio_wrist_roll.stl" scale="0.001 0.001 0.001"/>
</geometry>
<material name="white"/>
<origin rpy="0 0 0" xyz="0.006 0 0.0"/>
</visual>
<inertial>
<mass value="0.1"/>
<inertia ixx="0.000348958333333" ixy="0" ixz="0" iyy="0.000348958333333" iyz="0" izz="3.125e-05"/>
</inertial>
<collision>
<origin rpy="0 0 0" xyz="0.006 0 0.0"/>
<geometry>
<mesh filename="package://braccio_arduino_ros_rviz/stl/braccio_wrist_roll.stl" scale="0.001 0.001 0.001"/>
</geometry>
</collision>
</link>
<link name="left_gripper_link">
<visual>
<geometry>
<mesh filename="package://braccio_arduino_ros_rviz/stl/braccio_left_gripper.stl" scale="0.001 0.001 0.001"/>
</geometry>
<material name="white"/>
<origin rpy="0 1.5708 0" xyz="0 -0.012 0"/>
</visual>
<inertial>
<mass value="0.1"/>
<inertia ixx="0.000348958333333" ixy="0" ixz="0" iyy="0.000348958333333" iyz="0" izz="3.125e-05"/>
</inertial>
<collision>
<origin rpy="0 1.5708 0" xyz="0 -0.012 0"/>
<geometry>
<mesh filename="package://braccio_arduino_ros_rviz/stl/braccio_left_gripper.stl" scale="0.001 0.001 0.001"/>
</geometry>
</collision>
</link>
<link name="right_gripper_link">
<visual>
<geometry>
<mesh filename="package://braccio_arduino_ros_rviz/stl/braccio_right_gripper.stl" scale="0.001 0.001 0.001"/>
</geometry>
<material name="white"/>
<origin rpy="0 1.5708 0" xyz="0 -0.012 0.010"/>
</visual>
<inertial>
<mass value="0.1"/>
<inertia ixx="0.000348958333333" ixy="0" ixz="0" iyy="0.000348958333333" iyz="0" izz="3.125e-05"/>
</inertial>
<collision>
<origin rpy="0 1.5708 0" xyz="0 -0.012 0.010"/>
<geometry>
<mesh filename="package://braccio_arduino_ros_rviz/stl/braccio_right_gripper.stl" scale="0.001 0.001 0.001"/>
</geometry>
</collision>
</link>
<joint name="base_joint" type="revolute">
<axis xyz="0 0 1"/>
<limit effort="1000.0" lower="0.0" upper="3.1416" velocity="1.0"/>
<origin rpy="0 0 0" xyz="0 0 0"/>
<parent link="base_link"/>
<child link="braccio_base_link"/>
<dynamics damping="${damping_value}" friction="${friction_value}"/>
</joint>
<joint name="shoulder_joint" type="revolute">
<axis xyz="1 0 0"/>
<limit effort="1000.0" lower="0.2618" upper="2.8798" velocity="1.0"/>
<origin rpy="-1.5708 0 0" xyz="0 -.002 0.072"/>
<parent link="braccio_base_link"/>
<child link="shoulder_link"/>
<dynamics damping="${damping_value}" friction="${friction_value}"/>
</joint>
<joint name="elbow_joint" type="revolute">
<axis xyz="1 0 0"/>
<limit effort="1000.0" lower="0" upper="3.1416" velocity="1.0"/>
<origin rpy="-1.5708 0 0" xyz="0 0 0.125"/>
<parent link="shoulder_link"/>
<child link="elbow_link"/>
<dynamics damping="${damping_value}" friction="${friction_value}"/>
</joint>
<joint name="wrist_pitch_joint" type="revolute">
<axis xyz="1 0 0"/>
<limit effort="1000.0" lower="0" upper="3.1416" velocity="1.0"/>
<origin rpy="-1.5708 0 0" xyz="0 0 0.125"/>
<parent link="elbow_link"/>
<child link="wrist_pitch_link"/>
<dynamics damping="${damping_value}" friction="${friction_value}"/>
</joint>
<joint name="wrist_roll_joint" type="revolute">
<axis xyz="0 0 -1"/>
<limit effort="1000.0" lower="0.0" upper="3.1416" velocity="1.0"/>
<origin rpy="0 0 1.5708" xyz="0 0.0 0.06"/>
<parent link="wrist_pitch_link"/>
<child link="wrist_roll_link"/>
<dynamics damping="${damping_value}" friction="${friction_value}"/>
</joint>
<joint name="gripper_joint" type="revolute">
<axis xyz="0 -1 0"/>
<limit effort="1000.0" lower="0.1750" upper="1.2741" velocity="1.0"/>
<origin rpy="0 -0.2967 0" xyz="0.010 0 0.03"/>
<parent link="wrist_roll_link"/>
<child link="right_gripper_link"/>
<dynamics damping="${damping_value}" friction="${friction_value}"/>
</joint>
<joint name="sub_gripper_joint" type="revolute">
<axis xyz="0 1 0"/>
<mimic joint="gripper_joint"/>
<limit effort="1000.0" lower="1.2741" upper="2.3732" velocity="1.0"/>
<origin rpy="0 3.4383 0" xyz="-0.010 0 0.03"/>
<parent link="wrist_roll_link"/>
<child link="left_gripper_link"/>
<dynamics damping="${damping_value}" friction="${friction_value}"/>
</joint>
<material name="orange">
<color rgba="0.57 0.17 0.0 1"/>
</material>
<material name="white">
<color rgba="0.8 0.8 0.8 1.0"/>
</material>
<material name="black">
<color rgba="0 0 0 0.50"/>
</material>
<xacro:transmission_block joint_name="base_joint" idx="1"/>
<xacro:transmission_block joint_name="shoulder_joint" idx="2"/>
<xacro:transmission_block joint_name="elbow_joint" idx="3"/>
<xacro:transmission_block joint_name="wrist_pitch_joint" idx="4"/>
<xacro:transmission_block joint_name="wrist_roll_joint" idx="5"/>
<xacro:transmission_block joint_name="gripper_joint" idx="6"/>
<xacro:transmission_block joint_name="sub_gripper_joint" idx="7"/>
<gazebo>
<plugin name="gazebo_ros_control" filename="libgazebo_ros_control.so">
<robotNamespace>/braccio</robotNamespace>
</plugin>
</gazebo>
</robot>
Originally posted by dpakshimpo on ROS Answers with karma: 161 on 2018-03-20
Post score: 2
Answer:
I managed to solve it. It turned out to be a stupid copy-paste mistake. In my XACRO file, I had the following interface:
<xacro:macro name="transmission_block" params="joint_name idx">
<transmission name="tran_${idx}">
<type>transmission_interface/SimpleTransmission</type>
<joint name="${joint_name}">
<hardwareInterface>hardware_interface/PositionJointInterface</hardwareInterface>
</joint>
<actuator name="motor__${idx}">
<hardwareInterface>hardware_interface/PositionJointInterface</hardwareInterface>
<mechanicalReduction>1</mechanicalReduction>
</actuator>
</transmission>
</xacro:macro>
In reality it should be an EffortJointInterface instead of a PositionJointInterface, since the controllers in the YAML file are all effort_controllers:
<xacro:macro name="transmission_block" params="joint_name idx">
<transmission name="tran_${idx}">
<type>transmission_interface/SimpleTransmission</type>
<joint name="${joint_name}">
<hardwareInterface>hardware_interface/EffortJointInterface</hardwareInterface>
</joint>
<actuator name="motor__${idx}">
<hardwareInterface>hardware_interface/EffortJointInterface</hardwareInterface>
<mechanicalReduction>1</mechanicalReduction>
</actuator>
</transmission>
</xacro:macro>
Originally posted by dpakshimpo with karma: 161 on 2018-03-21
This answer was ACCEPTED on the original site
Post score: 6 | {
"domain": "robotics.stackexchange",
"id": 30385,
"tags": "ros, gazebo, ros-control, ros-kinetic"
} |
Does there exist a polytime algorithm for this partitioning problem? | Question: I would like to know whether there exists a polytime probabilistic algorithm for the problem described below. It is relevant for the construction of a cross-validation partitioning in statistics fulfilling certain constraints.
Or is it maybe NP-complete? I don't see any direct connections to any NP-complete problem I know of.
Input: $(N,K, (\phi_1, \ldots, \phi_l))$
Informal description:
Let $\{1 \ldots N\}$ be partitioned according to the functions $\phi_1, \ldots, \phi_l$. Find a random partition $\Phi : \{1\ldots N\} \rightarrow \{1\ldots K\}$ s.t. for all $i$, elements with the same value under $\phi_i$ get at least 2 different values under $\Phi$. Furthermore, the new partitioning should be balanced.
If no solution exists, halt with error.
Formal description:
Let $N, K \in \mathbb N$, and $\phi_i : \{1 \ldots N\} \rightarrow \{1 \ldots m_i\}$ be given for $i \in \{1 \ldots l\}$.
Find a random $\Phi : \{1\ldots N\} \rightarrow \{1\ldots K\}$ s.t.
$$
\forall i\in \{1 \ldots l\} :\forall v \in \{1\ldots m_i\} : |\Phi( \phi_i^{-1}(v) )| \geq 2
$$
$$
\forall v \in \{1\ldots K\} :\left\lfloor \frac{N}{K} \right\rfloor \leq |\Phi^{-1}(v)| \leq \left\lceil \frac{N}{K} \right\rceil
$$
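For concreteness, here is a small checker for the two constraints a candidate $\Phi$ must satisfy (a sketch; indices are 0-based, and each $\phi_i$ is passed as a list of values):

```python
import math
from collections import Counter

def is_valid_partition(Phi, phis, K):
    """Check the two constraints on a candidate partition Phi of {0..N-1}.

    Phi  : list of length N with values in {0..K-1}
    phis : list of lists, each of length N (the given partitions phi_i)
    """
    N = len(Phi)
    # Constraint 1: every phi_i-class must receive at least 2 distinct
    # Phi-values (so a singleton phi_i-class makes the instance infeasible).
    for phi in phis:
        image = {}
        for x in range(N):
            image.setdefault(phi[x], set()).add(Phi[x])
        if any(len(vals) < 2 for vals in image.values()):
            return False
    # Constraint 2: balance, i.e. floor(N/K) <= |Phi^{-1}(v)| <= ceil(N/K).
    sizes = Counter(Phi)
    lo, hi = N // K, math.ceil(N / K)
    return all(lo <= sizes.get(v, 0) <= hi for v in range(K))
```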
Answer: (This is about the problem in which $|\phi(\Phi^{-1}(i))|\geq 2$ instead of $|\Phi(\phi^{-1}(i))|\geq 2$; I read the question too fast. On the bright side, Dave fixes it in a comment to this message.)
What about saying it computes a proper edge coloring of a regular graph? This problem is NP-complete and amounts, given a graph as input, to finding a partition of its edges into matchings.
Vizing's theorem says that either $\Delta$ (the maximum degree of your graph) or $\Delta + 1$ colors suffice, though deciding which of the two is NP-hard.
In your case, I think setting $N$ to be the number of edges and $K$ to $\Delta$ would do the trick. You then want to split your $N$ edges into $K=\Delta$ classes, and $N$ is a multiple of $\Delta$ (for example in 3-regular graphs, for which the problem is still NP-hard).
In order to ensure that the answer is a proper edge coloring, one can let $\phi_v$ (for each vertex $v$) be the function equal to $0$ when edge $e\in [N]$ is adjacent to $v$, and $1$ otherwise. If each color class has at least two different images under each $\phi_v$, it means that each color class contains an edge incident to each vertex. As the graph is $\Delta$-regular, it also means that all the edges around a vertex have different colors: there are only $\Delta$ edges around each vertex, and if one class had two of them, another class would have none.
If it's true it would mean that finding one answer to your question is hard, and sampling solutions too :-)
Nathann | {
"domain": "cstheory.stackexchange",
"id": 852,
"tags": "cc.complexity-theory, np-hardness, application-of-theory, randomized-algorithms"
} |
How to connect remote roscore with python in runtime | Question:
The following is the implementation using roscpp:
std::map<std::string, std::string> remappings;
remappings["__master"] = master_url;
remappings["__hostname"] = host_url;
ros::init(remappings, "node name");
parement "master_url" and "host_url" come from UI
So how to implement using python?
Originally posted by ugluo on ROS Answers with karma: 3 on 2019-08-13
Post score: 0
Original comments
Comment by ct2034 on 2019-08-14:
Can you please clarify: What are you trying to achieve? What have you tried? What do you expect to happen and what happens instead? And please make sure your code is correctly formatted.
Comment by ugluo on 2019-08-14:
I want to connect remote roscore with python, The above code implements the function of connecting to a remote roscore using roscpp.
Comment by ugluo on 2019-08-14:
roscpp:
std::map<std::string, std::string=""> remappings;
remappings["__master"] = master_url;
remappings["__hostname"] = host_url;
ros::init(remappings, "node name");
Comment by ugluo on 2019-08-14:
the rospy project come from catkin_create_qt_pkg
catkin_create_qt_pkg test_qt
the code in 'src/qnode.cpp' lline 57-59
Answer:
I am not familiar with the remapping within the code. The normal way is: you have to set the ROS_MASTER_URI and ROS_HOSTNAME environment variables in the terminal according to https://wiki.ros.org/ROS/NetworkSetup. Then run your Python code with rosrun and it will connect to the remote roscore.
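A sketch of setting the same variables from inside the Python process itself (assumed illustrative values; note that `os.environ` must be updated before the node is initialised, so the rospy lines are left commented out here):

```python
import os

# Illustrative values; in the real app master_url and host_ip come from the UI.
master_url = "http://192.168.0.91:11311"
host_ip = "192.168.0.91"

# Update os.environ (not os.putenv, which does not refresh os.environ) so
# that the ROS client library sees the values when the node is initialised.
os.environ["ROS_MASTER_URI"] = master_url
os.environ["ROS_HOSTNAME"] = host_ip

# import rospy
# rospy.init_node("my_node")  # would now connect to the remote master
```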
Originally posted by ct2034 with karma: 862 on 2019-08-14
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by ugluo on 2019-08-14:
Thanks
I know this solution, but I want to implement it in code like roscpp; the parameters come from the UI.
I have tried to use os.system("shell command") but it failed, like this:
os.system('export ROS_MASTER_URI=http://192.168.0.91:11311 ; echo $ROS_MASTER_URI')
os.system('echo $ROS_MASTER_URI')
Comment by ct2034 on 2019-08-14:
I see. You could try https://docs.python.org/2.7/library/os.html#os.putenv
Comment by ugluo on 2019-08-15:
Thanks, but os.putenv() can't set the environment of my already-running Python, unlike the "launch a terminal, set the environment and run the command" approach.
my rusty code in link text
Comment by ct2034 on 2019-08-15:
please try os.putenv("ROS_MASTER_URI", "http://192.168.0.91:11311") (and the same for ROS_HOSTNAME) then you can check with print(os.environ). This seems to be also how init_node gets the params (https://docs.ros.org/melodic/api/rospy/html/rospy.client-pysrc.html#init_node - line 317)
Comment by ugluo on 2019-08-15:
Thank you sincerely, the problem is already solved.
I tried to use os.putenv:
os.putenv("ROS_MASTER_URI", "http://192.168.0.91:11311")
print(os.environ)
self.node = rospy.init_node('leap_one_gazebo_control', anonymous=True)
print(os.environ)
then output:
http://localhost:11311
http://localhost:11311
So I printed type(os.environ) and force-modified the dictionary's data:
os.environ['ROS_MASTER_URI'] = 'http://192.168.0.91:11311'
os.environ['ROS_MASTER_IP'] = '192.168.0.91'
then the output:
http://192.168.0.91:11311
http://192.168.0.91:11311
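The working pattern from this thread, reduced to a minimal standalone sketch (the master URL is the example address used above; rospy.init_node would then pick these values up from os.environ):

```python
import os

# os.putenv alone does not update os.environ for the current process,
# and rospy reads os.environ at init_node time, so assign directly:
os.environ['ROS_MASTER_URI'] = 'http://192.168.0.91:11311'
os.environ['ROS_HOSTNAME'] = '192.168.0.91'

print(os.environ['ROS_MASTER_URI'])  # http://192.168.0.91:11311
```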
Comment by ct2034 on 2019-08-15:
ok, cool. Can you please accept my answer then (click the little check mark) | {
"domain": "robotics.stackexchange",
"id": 33617,
"tags": "rospy, ros-kinetic"
} |
How to preprocess heavy MRI images? | Question: I have a large MRI dataset for an image segmentation task that cannot directly fit in memory in Colab; you can access the data with the link I put at the end. They are brain MRI images:
484 training images, each has a shape of (240, 240, 155, 4), these 4
numbers are the height, width, number of layers and sequences
respectively.
484 labels, each has a shape of (240, 240, 155)
How are you going to preprocess those images before training? Below are the steps that I tried but it didn't work:
Load and read the image. (I used nibabel)
Convert the images' type from float64 to float32, labels' type to uint8.
Remove the very first and last layers because they don't contain useful information.
Stack/Add each of them into an array with a for loop.
What else do you think I can do to deal with this problem?
Datalink: https://drive.google.com/drive/folders/1HqEgzS8BV2c7xYNrZdEAnrHk7osJJ--2 (task 1 - Brain Tumour)
Please tell me if you need more information.
Answer: As you cannot read the whole dataset in a single time, you should read and preprocess the images batch-wise while training the model. You can write your preprocessing pipeline in a data loader and iterate through this data loader while training the model. During each iteration, your data loader will fetch a single batch of data. And you can write your custom pipeline in the data loader to get this single batch. Treat this as a generator and iterate through this generator in the training loop, and you will get batches on the run time (i.e. a single batch is read in the memory at a time).
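A minimal, framework-agnostic sketch of such a batch-wise generator (load_fn and preprocess_fn are placeholders standing in for the nibabel loading and the per-image steps listed in the question):

```python
def batch_generator(paths, batch_size, load_fn, preprocess_fn):
    """Yield one preprocessed batch at a time; only batch_size images
    are ever held in memory simultaneously."""
    for start in range(0, len(paths), batch_size):
        chunk = paths[start:start + batch_size]
        yield [preprocess_fn(load_fn(p)) for p in chunk]

# Toy demonstration with stand-in loader/preprocessor:
paths = [f"img_{i}.nii" for i in range(5)]
gen = batch_generator(paths, batch_size=2,
                      load_fn=lambda p: p,        # would be nibabel.load
                      preprocess_fn=lambda x: x)  # would cast/crop/stack
batches = list(gen)
print([len(b) for b in batches])  # [2, 2, 1]
```

In a real training loop you would iterate this generator once per epoch, so each batch is read and preprocessed on the fly.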
The following links give a good example of creating a custom data loader in Pytorch -
https://pytorch.org/tutorials/beginner/basics/data_tutorial.html
https://stanford.edu/~shervine/blog/pytorch-how-to-generate-data-parallel
You have similar functionality in TensorFlow using input data pipelines -
https://www.tensorflow.org/guide/data | {
"domain": "datascience.stackexchange",
"id": 10303,
"tags": "python, image-preprocessing, image-segmentation"
} |
Lepton flavor violating process at loop level $\mu \rightarrow e \gamma$ via Higgs, Divergent integral | Question: I am stuck with calculating the process $\mu \rightarrow e \gamma$ as in this diagram:
I wrote down the matrix element like this:
$$
\mathcal{M}=\bar{u}(p-q)\left[\int \frac{d^4k}{(2\pi)^4}A\frac{(k+m_\tau)(k'+m_\tau)}{(k'^2-m^2_\tau)(k^2-m^2_\tau)[(p-k)^2-m^2_\tau]}(e\gamma^\sigma\epsilon_\sigma)\frac{m_\tau}{v}\right]u(p)
$$
I'm new to calculating loop diagrams so I have trouble with the k integral. I know I have to integrate about the momentum from $-\infty$ to $\infty$.
$$
I=\int \frac{d^4k}{(2\pi)^4}\frac{(k+m_\tau)(k'+m_\tau)}{(k'^2-m^2_\tau)(k^2-m^2_\tau)[(p-k)^2-m^2_\tau]}
$$
With the denominator I have to go through all the standard stuff. Using Feynman parameters $x,y,z$, since I have three propagators. Then shifting the integration variable
$$\ell=k-xq-zp$$
In the end I get
$$
2\int_0^1dx dy dz \frac{\delta(x+y+z-1)}{[\ell^2-\Delta+i\epsilon]^3}
$$
Now for $I$ there are 3 types of integrals the ones with no $k$ in the numerator with $k$ in the numerator and with $k^2$ in the numerator.
The one with no $k$ is working fine so far. I can do a Wick rotation and then integrate in 4-dimensional spherical coordinates in Euclidean space. With a substitution I can simply calculate the integral here...
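For reference, the standard Wick-rotated result for the convergent scalar case (as tabulated in the appendices of standard QFT texts, e.g. Peskin & Schroeder; quoted here as a cross-check, not derived) is:

```latex
\int \frac{d^4\ell}{(2\pi)^4}\,\frac{1}{\left[\ell^2-\Delta+i\epsilon\right]^3}
= \frac{-i}{32\pi^2}\,\frac{1}{\Delta}
```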
And when substituting $k$ by
$$
k=\ell+xq+zp
$$
I can drop all the terms linear in $\ell$ in the numerator since they cancel out due to symmetric integration.
Am I correct so far?
The main question is: for the integral of the type:
$$
\int_{-\infty}^\infty d^4\ell \frac{\ell^2}{[\ell^2-\Delta+i\epsilon]^3}
$$
If I do the Wick rotation here and use spherical coordinates I get an extra $\ell^3$. This integral becomes divergent for large $\ell$. As far as I know there should be no infinities here, as compared to the vacuum polarization in QED. Am I missing something? Or how can I calculate this integral?
Answer: Ok, I found out that the integral might be divergent, but this divergence will cancel out later in the calculation, so it's not important to calculate. Actually, one should not just start computing this diagram straight away but first find out the form of the amplitude.
A good reference for this is Cheng and Li "Gauge theory of elementary particle physics". In section 13.3 they calculate a similar diagram but with neutrino oscillation. The further calculation might be different but the derivation of the form of the amplitude is the same. Thus one finds out that all terms proportional to $\ell^2$ in the numerator cancel.
If one wants to know more about it he can also take a look here:
Lorentz decomposition of electromagnetic current Cheng&Li $\mu\rightarrow e\gamma$ p.421
where I explained the steps. I just need help to reproduce them. But this question should be answered by this... | {
"domain": "physics.stackexchange",
"id": 57469,
"tags": "quantum-field-theory, feynman-diagrams, higgs, beyond-the-standard-model"
} |
What is the total internal energy of a liter of water? | Question: How does one calculate the total internal energy of a liter of water? For an ideal gas, the case is simple. $E = c*n*T$, where c is the molar heat capacity, $n$ number of moles, and $T$ the temperature, and because it is an ideal gas, $c$ is only dependent on whether the gas is monatomic, diatomic, etc.
However for a liter of water, $c$ is a function of temperature, and not to mention there is a phase change to ice, so simply integrating over $c$ is also an issue.
How then would I calculate the total internal energy of a liter of water given its temperature? Is there some constant I can multiply by temperature and moles that already has the above effects baked in?
Answer: Such a constant factor as you ask for - applying at all temperatures - could not exist, because specific heat capacity varies with temperature (as you have said), and there are discontinuities at phase transitions.
Nevertheless you could still make a calculation quite easily, integrating numerically over values of SHC for ice then water from $1K$ up to your target temperature, adding the latent heat of fusion at the melting point.
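A toy numerical sketch of that procedure (the temperature grid and SHC values below are illustrative placeholders, not real data; substitute tabulated values for ice and water):

```python
def internal_energy(temps, shc, mass, latent=0.0):
    """Trapezoidal integral of m*c(T) dT plus any latent heat crossed.
    temps in K, shc in J/(kg K), mass in kg, latent in J/kg."""
    e = latent * mass
    for i in range(1, len(temps)):
        e += mass * 0.5 * (shc[i] + shc[i - 1]) * (temps[i] - temps[i - 1])
    return e

# Toy grid for ice from 1 K to the melting point, plus latent heat of fusion:
temps = [1.0, 100.0, 200.0, 273.0]
shc = [10.0, 800.0, 1600.0, 2100.0]   # placeholder SHC values, J/(kg K)
print(internal_energy(temps, shc, mass=1.0, latent=334e3))  # energy in J
```

With real tables you would simply use a denser grid and append the liquid-water segment above 273 K.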
Tables of specific heat capacity for water and ice over a range of temperatures are available on the internet. For example Engineering Toolbox has values for ice down to $-100^{\circ}C$ and for water up to $360^{\circ}C$. Page 16 of Monograph 21 from the former National Bureau of Standards (now NIST) has a table of values for ice at temperatures from $1K$ up to $300K$. | {
"domain": "physics.stackexchange",
"id": 73826,
"tags": "thermodynamics, energy, water, estimation"
} |
Optimize "Fill magic square" in python with backtracking | Question: I have written a python program with backtracking that fills a n * n magic square. It solves for n = 4 in 4.5 seconds but gets stuck when I run it for n = 5 on my machine. How can I optimize the algorithm to make it run faster?
from time import perf_counter
def fillMatrix(n):
nSquared = n * n
lineSum = n * (nSquared + 1) / 2
candidates = set(range(1, nSquared + 1))
matrix = [[None for _ in range(n)] for _ in range(n)]
def isValid(row, col):
# row
rowSum = 0
isFull = True
for item in matrix[row]:
if not item:
isFull = False
continue
rowSum += item
if rowSum > lineSum:
return False
if isFull and rowSum != lineSum:
return False
# column
colSum = 0
isFull = True
for i in range(n):
item = matrix[i][col]
if not item:
isFull = False
continue
colSum += item
if colSum > lineSum:
return False
if isFull and colSum != lineSum:
return False
# diagonal
if row != col and row + col != n - 1:
return True
diagSum = 0
isFull = True
for i in range(n):
item = matrix[i][i]
if not item:
isFull = False
continue
diagSum += item
if diagSum > lineSum:
return False
if isFull and diagSum != lineSum:
return False
diagSum = 0
isFull = True
for i in range(n):
item = matrix[n - i - 1][i]
if not item:
isFull = False
continue
diagSum += item
if diagSum > lineSum:
return False
if isFull and diagSum != lineSum:
return False
return True
def solve(row, col):
if matrix[row][col] == None:
for candidate in candidates.copy():
matrix[row][col] = candidate
if not isValid(row, col):
matrix[row][col] = None
continue
candidates.remove(candidate)
if row == n - 1 and col == n - 1:
return True
currentSolution = False
if col == n - 1:
currentSolution = solve(row + 1, 0)
else:
currentSolution = solve(row, col + 1)
if currentSolution:
return True
candidates.add(candidate)
matrix[row][col] = None
return False
return True
t1 = perf_counter()
print(solve(0, 0))
print(matrix)
t2 = perf_counter()
print(f'{t2 - t1}s')
fillMatrix(5)
Answer: Optimization is pointless
Random magic square hunting peters out very quickly. If n is the order of
a magic square (the number of cells in a row), the domain to be searched for
valid magic squares grows at the rate of (n*n)!. If we conceptualize a magic
square as a one-dimensional list of length n*n, a naive brute-force search
would iterate over all permutations of that list. For n=3 that is doable,
because there are only 362880 lists to check. But for n=4, that domain
becomes about 21 trillion. Since there are known
to be 880 4x4 magic squares, that means random hunting in that space will emit
a hit once per 23.8 billion checks. Python cannot do that.
Even smart searches don't survive long in that environment. Your implementation takes some advantage of the known constraints of
the problem to short-circuit the searching and speed things up. But there are
pressing limits here as well. Consider an algorithm that leverages the
constraints even more effectively than your code. Instead of filling in the grid one cell
at a time and then checking for violated constraints, one could pre-compute all
possible ways of creating a valid row/column/diagonal, given the size of
the magic square (n), the implied magic constant (the needed sum for every
row/column/diagonal), and its universe of eligible numbers (1..n*n inclusive).
One could also pre-compute a lookup index mapping any partially-completed
row/column/diagonal to the other numbers that would complete it in a valid way.
In addition, the algorithm could proceed in a way that maximizes the
interactions among the constraints: first fill the diagonals, then row 0, then
column 0, row 1, column 1, etc. The benefit of that kind of crossing approach
is that each prior step imposes additional constraints on subsequent steps, thus
reducing the number of viable sums that have to be searched.
Such an algorithm would be considerably faster for various reasons: (1) it
would fill in valid rows/columns/diagonals in a single shot (rather than one
cell at a time); (2) no checking for validity would be required because the
algorithm would be premised on only filling in valid rows/columns/diagonals;
(3) most important, the scope of the search space would be much smaller since
we would only be considering combinations of numbers that achieve valid sums
rather than naively filling in the next cell and then checking for violations. Unfortunately, none of that is enough.
# Let's explore magic squares of size 3 through 7 to see their
# magic constant along with the number of valid ways that a
# row/column/diagonal can add up to that constant.
from itertools import combinations
for n in range(3, 8):
n_stop = n * n + 1
magic_constant = int(n * n_stop / 2)
universe = tuple(range(1, n_stop))
valid_rows = tuple(
c
for c in combinations(universe, n)
if sum(c) == magic_constant
)
print(n, magic_constant, len(valid_rows))
# Output.
3 15 8
4 34 86
5 65 1394
6 111 32134
7 175 957332
By the time we get to n=7, even our smart algorithm will be utterly swamped.
For magic squares of size 7 there are nearly 1M ways to get the needed sum of
175. And even that number is a big underestimate of the size of the domain.
Because that code snippet uses combinations() it is normalizing the valid
ways to make the needed sums. When filling in any particular
row/column/diagonal, we would actually need to check every permutation of the
current combination being examined. Furthermore, the magnitudes printed above
represent only the size of the domain at the top level (where we fill in the
first diagonal). That magnitude would need to be multiplied by the
constraint-surviving permutations that occur at deeper levels as we fill in
the grid according to our crossing plan. As a result, I'm pretty confident that our
envisioned algorithm could handle n=5, but I am quite pessimistic about
n=6, and n=7 seems impossible.
There are a variety of square-generating algorithms. A different approach
is to grab one of the known algorithms to generate specific magic squares -- for
example, up and to the right.
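For odd orders, that "up and to the right" construction (the de la Loubère / Siamese method) can be sketched in a few lines; it emits a valid magic square directly, with no search at all:

```python
def siamese(n):
    """'Up and to the right' construction for odd n (de la Loubere)."""
    assert n % 2 == 1
    square = [[0] * n for _ in range(n)]
    r, c = 0, n // 2                       # start in the middle of the top row
    for k in range(1, n * n + 1):
        square[r][c] = k
        nr, nc = (r - 1) % n, (c + 1) % n  # move up and to the right, wrapping
        if square[nr][nc]:                 # occupied: step down instead
            nr, nc = (r + 1) % n, c
        r, c = nr, nc
    return square

sq = siamese(5)
print([sum(row) for row in sq])  # every row sums to the magic constant 65
```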
And if that seems too boring, one could also take a square generated in such
a manner and transform it in a variety of ways (also see). | {
"domain": "codereview.stackexchange",
"id": 42740,
"tags": "python, algorithm, backtracking"
} |
What is the difference between total energy and the Lagrangian energy function? | Question: I am primarily looking for the difference in definitions to see how they differ. Given a Lagrangian $L(q_{j}, \dot{q}_{j}, t)$ of a system of finitely many particles, we may define (using Einstein summation convention) the Lagrangian energy function
$$ h(q, \dot{q}, t) = \dot{q}_{j}\frac{\partial L(q, \dot{q}, t)}{\partial \dot{q}_{j}} - L(q, \dot{q}, t), $$
which is conserved whenever $\frac{\partial L}{\partial t} = 0$. Now there are cases where this doesn't coincide with the "normal notion of energy." The problem is, if we're not talking about the Lagrangian energy function, what do we mean by the "normal notion of energy" to begin with?
I have seen the following PSE pages:
When does Hamiltonian equals to energy of the system?
When is the Hamiltonian of a system not equal to its total energy?
When Hamiltonian and the total energy are the same
Difference between the energy and the Hamiltonian in a specific example
Hamiltonian is conserved, but is not the total mechanical energy
Three really good examples are Ján Lalinský's post, Dan's post, and Siyuan Ren's post. However, none of these posts provide a clear definition of energy.
I thought energy was the Noether charge associated with time-translations, but under that definition, energy would have to be the Lagrangian energy function, and the answers insist this is not the case. So then what would be the definition of energy in these contexts?
Some comments (in response to the exact same question) said we can say energy is $T + V$ instead of being the Noether charge, but then we'd have to define $T$, $V$, and then we'd be back to needing to define energy, which we haven't done.
Answer:
I am primarily looking for the difference in definitions to see how they differ. (Emphasis in original).
The kinetic energy $T$ is defined as:
$$
T = \sum_{i=1}^N \frac{1}{2}m_i \vec v_i^2
$$
$$
= \sum_{i=1}^N \frac{1}{2}m_i\left(\dot x_i^2 + \dot y_i^2 +\dot z_i^2\right)\;,\tag{1}
$$
where $x$, $y$, and $z$ are a fixed set of rectangular/cartesian coordinate axes (as you might expect), not generalized coordinates.
For a conservative force field $\vec F(\vec x)$, the single-particle potential energy $U$ is usually defined (up to a constant) via:
$$
\vec F =-\vec \nabla U\;.
$$
By analogy, I define $V = V(\vec q_1, \vec q_2,\ldots)$ via the force on particle i:
$$
\vec F^{(i)} = -\vec \nabla_i V\;.
$$
For example, if the system of particles is non-interacting, other than via a single-particle potential $U$, then we can write $V = \sum_i U(\vec x_i)$. (Note that even $V=V(q_1,q_2,\ldots)$ is not exactly the most general form of potential, but this is explained further in an addendum.)
The "total mechanical energy" $E_{TM}$ is defined as:
$$
E_{TM} = T + V\;,
$$
but be careful, because this thing I'm calling "$E_{TM}$" might also be called the "mechanical energy" or the "total energy" or the "energy." I'm hanging a couple subscripts off of my $E$ symbol to indicate exactly what I mean, but no one else will ever do that.
The "Lagrangian energy function," as you have called it, is defined properly as you have defined it in terms of generalized coordinates as:
$$
h = \sum_{i=1}^{3N}\dot q_i\frac{\partial L}{\partial \dot q_i} - L\;,
$$
but be careful, because someone might also call this the "mechanical energy" or the "total energy" or the "energy."
And, of course, L is defined as:
$$
L = T - V\;.
$$
Now, you have the proper definitions, you should be able to figure out for yourself when the "Lagrangian energy function" is equal to the "total mechanical energy."
For some further assistance, see my answer here, and see below, and perhaps consult a graduate textbook on Classical Mechanics, for example, the textbook by Whittaker (some of which is rewritten below).
Addendum:
You might be interested to know that the Lagrangian equations of motion in the form:
$$
\frac{d}{dt}\frac{\partial L}{\partial \dot q_i} - \frac{\partial L}{\partial q_i} = 0\;,
$$
is not the most general form.
Supposing that the force can not be written as the gradient of a potential, we can still write a form of the Lagrange equations of motion as:
$$
\frac{d}{dt}\frac{\partial T}{\partial \dot q_i} - \frac{\partial T}{\partial q_i} = Q_i\;,
$$
where the $Q_i$ is a generalized force, which is defined via the work done as the generalized coordinate $q_i$ changes by an infinitesimal amount: $dW_i = Q_i(q_1,q_2,\ldots) dq_i$ (no sum on i implied).
In the case where we can define a potential energy function
$$
V(q_1, q_2, \ldots)\;,
$$
then we can write:
$$
\frac{d}{dt}\frac{\partial T}{\partial \dot q_i} - \frac{\partial T}{\partial q_i} = -\frac{\partial V}{\partial q_i}\;,
$$
and then we can recover the other form of the equations of motion.
Addendum 2
So, when are the "total mechanical energy" ($E_{TM}$) and the "Lagrangian energy function" ($h$) equal?
To answer this we should consider a slight generalization of the potential energy $V$. But at first, a let's consider a potential that only depends on the $q_i$.
Previously, I indicated that if we can find a potential $V$ such that:
$$
-\frac{\partial V}{\partial q_i} = Q_i\;,
$$
then we recover the usual Lagrangian equations of motion:
$$
\frac{d}{dt}\frac{\partial L}{\partial \dot q_i} - \frac{\partial L}{\partial q_i} = 0\;.
$$
From the above equation of motion we can show that $h=E_{TM}$ whenever:
$$
\dot q_i\frac{\partial T}{\partial \dot q_i} = 2T\;. \tag{A}
$$
Eq. (A) is the condition that the "total mechanical energy" ($E_{TM}$) and the "Lagrangian energy function" ($h$) are equal, given that we can write $Q_i = -\frac{\partial V}{\partial q_i}$.
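Condition (A) is just Euler's theorem for a kinetic energy that is homogeneous of degree two in the velocities. A quick numerical check of the identity for planar polar coordinates, $T = \frac{1}{2}m(\dot r^2 + r^2\dot\theta^2)$ (the mass, radius, and velocity values below are arbitrary illustrative numbers):

```python
# Numerically verify  sum_i qdot_i * dT/dqdot_i = 2T  for
# T = (1/2) m (rdot^2 + r^2 thetadot^2), using central differences.
def T(rdot, thetadot, m=2.0, r=1.5):
    return 0.5 * m * (rdot**2 + r**2 * thetadot**2)

def partial(f, args, i, h=1e-6):
    up = list(args); up[i] += h
    dn = list(args); dn[i] -= h
    return (f(*up) - f(*dn)) / (2 * h)

qdot = (0.7, -1.3)  # arbitrary velocities (rdot, thetadot)
lhs = sum(qdot[i] * partial(T, qdot, i) for i in range(2))
print(abs(lhs - 2 * T(*qdot)))  # ~0: condition (A) holds here
```

The identity would fail for a $T$ with terms linear in the velocities, which is exactly the situation produced by time-dependent coordinate transformations discussed below.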
So, what is the generalization of this result to velocity dependent potentials?
Suppose that we want to be a little more general and write
$$
V=V(q_1,q_2,\ldots,\dot q_1, \dot q_2,\ldots)
$$
where now we define V via:
$$
Q_i = -\frac{\partial V}{\partial q_i} + \frac{d}{dt}\frac{\partial V}{\partial \dot q_i}\;.
$$
In this case, the "total mechanical energy" ($E_{TM}$) and the "Lagrangian energy function" ($h$) are equal whenever:
$$
\dot q_i\frac{\partial T}{\partial \dot q_i}-\dot q_i\frac{\partial V}{\partial \dot q_i} = 2T\;. \tag{B}
$$
Condition (B) reduces to condition (A) when the potential is velocity independent.
Addendum 3
OP asks a question about coordinate transformations in the comments. To answer this question, it is easiest to return to the usual case where $Q_i=-\frac{\partial V}{\partial q_i}$ and where Condition (A) then holds.
The condition that $h=E_{TM}$ is then:
$$
\dot q_i\frac{\partial T}{\partial \dot q_i} = 2T\;.
$$
In terms of coordinate transformations, first consider the familiar case when the coordinate transformation don't depend on velocity or time explicitly:
$$
x_i = x_i(q_1,q_2,\ldots)\;.
$$
$$
y_i = y_i(q_1,q_2,\ldots)\;.
$$
$$
z_i = z_i(q_1,q_2,\ldots)\;.
$$
In this case, it is straightforward to show using Eq. (1) above and using
$$
\frac{\partial \dot x_i}{\partial \dot q_j} = \frac{\partial x_i}{\partial q_j}
$$
that
$$
\dot q_i\frac{\partial T}{\partial \dot q_i} = 2T\;.
$$
And so, in this case, $h=E_{TM}$.
But, in general, when the $x_i$, $y_i$, and $z_i$ depend on the $\dot q_j$ as well, we instead find that:
$$
h = E_{TM} + \sum_j m^{(j)}\dot x_j\left[\dot x_j - \sum_i\dot q_i\frac{\partial \dot x_j}{\partial \dot q_i}\right]
+ \sum_j m^{(j)}\dot y_j\left[\dot y_j - \sum_i\dot q_i\frac{\partial \dot y_j}{\partial \dot q_i}\right]
+ \sum_j m^{(j)}\dot z_j\left[\dot z_j - \sum_i\dot q_i\frac{\partial \dot z_j}{\partial \dot q_i}\right]\;.\tag{2}
$$
The quantities in the square brackets in Eq. (2) above are zero whenever the coordinate transformations do not explicitly depend on the velocities. | {
"domain": "physics.stackexchange",
"id": 95892,
"tags": "classical-mechanics, energy, lagrangian-formalism"
} |
Why doesn't the flow of electrons occur in a broken circuit? | Question: Take a battery and connect a small LED bulb across it with the help of two wires. The bulb will glow, but if I cut a small piece of wire from any part of the connecting wires, the circuit will not work, which implies that there is no current and hence no flow of electrons. But if you consider the part of the wire which is exposed to air on one end (due to the cut) and to negative potential on the other, then the electrons should have travelled from the negative end to the almost-zero-potential end (air gap), but this does not occur. Why?
Answer:
But if you consider the part of the wire which is exposed to air on one end (due to the cut) and to negative potential on the other, then the electrons should have travelled from the negative end to the almost-zero-potential end (air gap), but this does not occur.
They will travel from the negative potential source to the cut end. But it will only take a few nanoseconds or so for enough electrons to build up at the cut end so that they repel any further electrons from moving there. Because this motion is so brief, we normally just ignore it when talking about how the gap behaves in a low-speed circuit.
The electrons can't travel across the air gap because it takes a substantial energy (called the work function) for an electron to exit the metal material and into the air in the gap. If you were to make the gap small enough and the voltage across the cut ends large enough, you could create an arc that carries current across the gap. This is how the spark plugs in an internal combustion engine work. | {
"domain": "physics.stackexchange",
"id": 56003,
"tags": "electric-circuits, electric-current, electrical-resistance, conductors, batteries"
} |
Mean shift image processing algorithm for color segmentation | Question: I'm implementing a version of the mean shift image processing algorithm for color segmentation in Python/NumPy.
I've written a pure NumPy version of the actual mean shifting per pixel (which I imagine is where the majority of the time is spent). It slices an array of RGB values to work on out of the parent image, then creates lower-bound and upper-bound RGB reference arrays, generates a boolean masking array selecting the pixels to use for averaging, and then averages.
Any further optimizations? I suspect vectorizing the x/y for loops might give a speed-up, but for the life of me I haven't figured out how. (Somehow generating an array of each pixel grid to work on and then generalizing the mean shift to take array input?) gL is the grid length. gS is gL squared, the number of pixels in the grid.
for itr in xrange(itrs):
if itr != 0:
img = imgNew
for x in xrange(gL,height-gL):
for y in xrange(gL,width-gL):
cGrid = img[x-gSmp:(x+gSmp+1),y-gSmp:(y+gSmp+1)]
cLow,cUp = np.empty((gL,gL,3)),np.empty((gL,gL,3))
cLow[:] = [img[x,y][0]-tol,img[x,y][1]-tol,img[x,y][2]-tol]
cUp[:] = [img[x,y][0]+tol,img[x,y][1]+tol,img[x,y][2]+tol]
cBool = np.any(((cLow < cGrid) & (cUp > cGrid)),axis=2)
imgNew[x,y] = np.sum(cGrid[cBool],axis=0)/cBool.sum()
Answer: The following code is a first shot and it is still not vectorized. The major points: the creation of cLow and cUp is hoisted out of the loops (don't create arrays in loops; always 'preallocate' memory); the tolerance levels can be computed in a single operation (under the assumption that broadcasting is possible at this point); and finally I removed the conditional case for copying imgNew to img. (I also suspect you do want to copy the last iteration back into img. If so, you have to remove the copy line before the loop and move the copy at the beginning of the loop to its end.)
diff_height_gL = height - gL
diff_width_gL = width - gL
sum_gSmp_one = gSmp + 1
cLow, cUp = np.empty((gL, gL, 3)), np.empty((gL, gL, 3))
imgNew = img.copy()
for itr in xrange(itrs):
img[:] = imgNew
for x in xrange(gL, diff_height_gL):
for y in xrange(gL, diff_width_gL):
cGrid = img[x-gSmp:(x + sum_gSmp_one), y-gSmp:(y + sum_gSmp_one)]
cLow[:] = img[x, y, :] - tol
cUp[:] = img[x, y, :] + tol
cBool = np.any(((cLow < cGrid) & (cUp > cGrid)), axis=2)
imgNew[x, y] = np.sum(cGrid[cBool], axis=0) / cBool.sum()
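As a further note on the broadcasting assumption: the preallocated cLow/cUp arrays can be dropped entirely, since NumPy broadcasts a length-3 pixel against the (gL, gL, 3) grid. A standalone toy sketch of the per-pixel masked average (the image contents, grid size, and tolerance here are arbitrary illustrative values):

```python
import numpy as np

img = np.arange(5 * 5 * 3, dtype=float).reshape(5, 5, 3)  # toy 5x5 RGB image
x, y, gSmp, tol = 2, 2, 1, 20.0
grid = img[x - gSmp:x + gSmp + 1, y - gSmp:y + gSmp + 1]
low = img[x, y] - tol   # shape (3,), broadcasts against the (3, 3, 3) grid
up = img[x, y] + tol
mask = np.any((low < grid) & (grid < up), axis=2)
new_pixel = grid[mask].sum(axis=0) / mask.sum()
print(new_pixel)        # mean of the in-tolerance neighbours
```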
This problem seems to be perfectly shaped for multiprocessing. This could be an alternative/extension to vectorization. If I have time I will try the vectorization...
Kind regards | {
"domain": "codereview.stackexchange",
"id": 2738,
"tags": "python, optimization, image, numpy"
} |
Convert an integer to 4 bytes without bitshift operators | Question: This code is applicable to either GLSL or C due to virtually identical syntax. Before GLSL 1.3, bitshift operators were not present and I am aiming for backwards compatibility to GLSL 1.2.
float lut[3] = {256,65536,16777216};
vec4 getBytes(float n)
{
vec4 bytes;
bytes.x = floor(n / lut[2]);
bytes.y = floor((n- bytes.x*(lut[2]) )/ lut[1]);
bytes.z = floor((n - bytes.y*(lut[1]) - bytes.x*lut[2] )/ lut[0]) ;
bytes.w = n - bytes.z*lut[0] - bytes.y*lut[1] - bytes.x*lut[2];
return bytes;
}
float getid(vec4 bytes)
{
return bytes.w + bytes.z*lut[0]+ bytes.y*lut[1] + bytes.x*lut[2];
}
This code will be used to pack a value greater than 8 bits into an 8-bit texture with 4 channels.
I also ported the code to Lua to run a unit test.
lut = {256,65536,16777216}
function getBytes(n)
bytes = {}
bytes.x = math.floor(n / lut[3]);
bytes.y = math.floor((n- bytes.x*(lut[3]) )/ lut[2]);
bytes.z = math.floor((n - bytes.y*(lut[2]) - bytes.x*lut[3] )/ lut[1]) ;
bytes.w = n - bytes.z*lut[1] - bytes.y*lut[2] - bytes.x*lut[3];
return bytes
end
function getid(bytes)
return bytes.w + bytes.z*lut[1]+ bytes.y*lut[2] + bytes.x*lut[3]
end
for i=1,math.pow(2,24),1 do
if not (getid(getBytes(i)) == i) then
print("Fail " .. i)
end
end
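The same round trip can also be cross-checked in Python with divmod, which mirrors the floor-divide decomposition without bitshifts (this is an extra sanity check I wrote alongside the Lua test, not part of the shader itself):

```python
LUT = (256, 65536, 16777216)

def get_bytes(n):
    # Successive divmod is equivalent to the floor-divide chain in the shader.
    x, rem = divmod(n, LUT[2])
    y, rem = divmod(rem, LUT[1])
    z, w = divmod(rem, LUT[0])
    return (x, y, z, w)

def get_id(b):
    return b[3] + b[2] * LUT[0] + b[1] * LUT[1] + b[0] * LUT[2]

for i in (1, 255, 256, 65535, 123456789, 2 ** 24):
    assert get_id(get_bytes(i)) == i
```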
Answer: There's not a ton that can be improved here, but:
lut should be made const
Due to implicit promotion, lut can be stored as integers instead of floats. Your expressions should evaluate to the same thing.
Consider representing your lut constants in hexadecimal (0x) format - or as 1 << x notation.
Your lut doesn't strictly need to be an array; you're not iterating over it or indexing it dynamically. As such, you may be better off simply making individually-named constants such as XL, YL, ZL. | {
"domain": "codereview.stackexchange",
"id": 35704,
"tags": "c, lua, glsl"
} |
What is tensile stress/force? Where should it be applied? | Question: I am practicing questions from the topic elasticity.
There was a question from the book I.E. Irodov, Q.no)1.291.
The question is as follows:
What internal pressure(in the absence of an external pressure) can be sustained by a glass spherical flask, the wall thickness $\Delta$r = 1.0mm and the radius of flask equals r = 25mm?
When I referred the solution to this question, it read as follows:
Force due to pressure is $F = p\pi r^2$
For equilibrium, tensile force(T) = F,
And then the solution continued. Now comes my real doubt: what is tensile stress/force, why are they equating it with the force due to pressure, and where should it be used? (Ex. pseudo-force is used when the observer is in a non-inertial frame.)
Answer: Take a small element on the surface of the sphere ${\rm d}A$ and perform static analysis by balancing the forces.
"Tensile" force is the force exerted by the sphere on the surroundings as a result of deformation. A better term would be radial force, or better still one should not talk about forces at all but stresses.
So if the radial stress is $\sigma_r$ then the "Tensile" force is ${\rm d} F_r = \sigma_r\, {\rm d}A$ as developed right under the skin.
Over the skin of the sphere the pressure force is ${\rm d}F_p = P\, {\rm d}A$
the force balance ${\rm d}F_r = {\rm d}F_p$ yields the boundary condition for the stress field
$$ \sigma_r = P $$ | {
"domain": "physics.stackexchange",
"id": 79167,
"tags": "homework-and-exercises, elasticity, stress-strain"
} |
Activity Selection to maximize the number of activity executions | Question: I solved this problem from a challenge. The problem is as below:
Problem Statement:
Given N activities with their start and finish times. Select the maximum number of activities that can be performed by a single person, assuming that a person can only work on a single activity at a time.
Note : The start time and end time of two activities may coincide.
Input:
The first line contains T denoting the number of testcases. Then follows the description of the testcases. The first line is N, the number of activities; the second line contains N numbers which are the starting times of the activities. The third line contains the N finishing times of the activities.
Output:
For each test case, output a single number denoting maximum activities which can be performed in new line.
Constraints:
1<=T<=50
1<=N<=1000
1<=A[i]<=100
Example:
Input:
2
6
1 3 2 5 8 5
2 4 6 7 9 9
4
1 3 2 5
2 4 3 6
Output:
4
4
C++ code for the problem:-
#include<iostream>
void swap(int a[],int i,int j)
{
int temp=a[i];
a[i]=a[j];
a[j]=temp;
}
int activitySelection(int start[], int end[], int n){
long i,j,count=1,pointer;
for(i=0;i<n;i++) //sorting both arrays in the order of the end array
{
for(j=i+1;j<n;j++)
if(end[j]<end[i])
{
swap(end,i,j);
swap(start,i,j);
}
}
pointer=end[0]; //pointer set to end time of the first activity
for(i=1;i<n;i++)
{
if(start[i]>=pointer) //if
{
count++;
pointer=end[i];
}
}
return count;
}
int main()
{
int t;
std::cin>>t; /*number of testcases*/
while(t--)
{
int n;
std::cin >> n;
int start[n], end[n];
for(int i=0;i<n;i++)
std::cin>>start[i];
for(int i=0;i<n;i++)
std::cin>>end[i];
std:: cout << activitySelection(start, end, n) << std::endl;
}
}
The code works as I intended. But there are a few areas that I feel could be improved in the code. The above code sorts both arrays in the order of the end[] array. Then it checks whether the starting time of the current activity is greater than or equal to the end time of the previous activity; if yes, count increments and the pointer is set to the end time of the current activity for comparing the next activity. Two things I want to know:
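For reference, the same greedy idea reads very compactly in Python when the start/finish times are zipped into pairs and sorted by finish time (this is just a sketch of the algorithm being reviewed, not the submission itself):

```python
def max_activities(start, finish):
    count, last_end = 0, float("-inf")
    # Greedy: always take the activity that finishes earliest and fits.
    for s, f in sorted(zip(start, finish), key=lambda p: p[1]):
        if s >= last_end:        # activity starts after the previous one ends
            count += 1
            last_end = f
    return count

print(max_activities([1, 3, 2, 5, 8, 5], [2, 4, 6, 7, 9, 9]))  # 4
print(max_activities([1, 3, 2, 5], [2, 4, 3, 6]))              # 4
```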
1. Is there a better approach than sorting arrays and comparing?
2. What is a better way to sort the two arrays on the basis of the second array?
Please suggest changes to improve. Thanks for reading ;-)
Answer: Note: your code doesn't currently compile for at least two reasons: you are not telling the compiler cout and endl live in the std namespace, and you can't declare your arrays start and end because n is not known at compile-time (instead, you'd have to use dynamic memory allocations).
In any case, your code smells like your background might be in C (but then you'd probably also know about dynamic memory... well, anyway). Indeed, with C++, you typically define variables as close to their site of usage. Moreover, you could take advantage of the algorithms and data structures provided by the language.
Is there a better approach than sorting arrays and comparing?
Essentially no: you must sort the arrays (with no additional knowledge, you can't beat the log-linear bound). This dominates your runtime, but it is scalable and practical enough that you don't have to worry about it.
What is a better way to sort the two arrays on the basis of the second array?
Because start and end times are logically and tightly coupled (i.e., something is wrong if we suddenly lose the correspondence), it's a good idea to tie them together. For that reason, we can use std::pair. To store such objects, we can use a dynamic array, i.e., std::vector. For sorting that according to the second element, we can use std::sort.
To summarize, we could proceed along the following lines (I'm taking the liberty to omit reading input to avoid clutter):
#include <iostream>
#include <algorithm>
#include <vector>
typedef std::vector<std::pair<int, int> > PairList;
int activitySelection(const PairList& p)
{
PairList s(p);
std::sort(s.begin(), s.end(), [](const auto& x, const auto& y) {
return x.second < y.second;
});
int end = s[0].second;
int count = 1;
for (int i = 1; i < s.size(); ++i)
{
if (s[i].first >= end)
{
++count;
end = s[i].second;
}
}
return count;
}
int main()
{
PairList v{ {1,2},{3,4},{2,6},{5,7},{8,9},{5,9} };
std::cout << activitySelection(v) << "\n";
} | {
"domain": "codereview.stackexchange",
"id": 35812,
"tags": "c++, algorithm, array"
} |
Eager Loading Deeply into a Model with a Collection Property Whose Type is Inherited | Question: Visual Studio generated this great route for me, where I can load an entity:
// GET: api/Jobs/5
[ResponseType(typeof(Job))]
public async Task<IHttpActionResult> GetJob(int id)
{
Job job = await db.Jobs.FindAsync(id);
if (job == null)
{
return NotFound();
}
return Ok(job);
}
Here's the Job model this is based on:
public class Job
{
public Job()
{
this.Regions = new List<Region>();
this.Files = new List<JobFile>();
}
public int ID { get; set; }
public string Name { get; set; }
public List<Region> Regions { get; set; }
public JobTypes JobType { get; set; }
public int UserIDCreatedBy { get; set; }
public int? UserIDAssignedTo { get; set; }
public List<JobFile> Files { get; set; }
public bool IsLocked { get; set; } // Lock for modification access
}
Here's the JobFile class, which Jobs have a list of:
public class JobFile
{
public int ID { get; set; }
public string Name { get; set; }
public string Url { get; set; }
public int Job_ID { get; set; }
}
and Pdf, a subclass of JobFile:
public class Pdf : JobFile
{
public Pdf()
{
this.PdfPages = new List<PdfPage>();
}
public int Index { get; set; }
public List<PdfPage> PdfPages { get; set; }
}
Now, when I hit that route, I'd like to eagerly load all the Pdfs for a Job, including their pages. I modified the route to look like this, and it works.
// GET: api/Jobs/5
[ResponseType(typeof(Job))]
public async Task<IHttpActionResult> GetJob(int id)
{
Job job = await db.Jobs.FindAsync(id);
// Lookup the PDFs for this job and include their PdfPages
List<JobFile> jobPdfs = db.Pdfs.Include(pdf => pdf.PdfPages).Where(pdf => pdf.Job_ID == id).ToList<JobFile>();
// Attach the job files to the job
job.Files = jobPdfs;
if (job == null)
{
return NotFound();
}
return Ok(job);
}
Is this the best way to eagerly load all these models? Could this somehow be collapsed into one statement? It seems right now it hits the database twice. Could I build off of the original
Job job = await db.Jobs.FindAsync(id);
to load the Pdfs and their PdfPages all in one query?
This question provided some helpful insight, but I'm not sure how I can capitalize on its conclusions. I think I need the ToList<JobFile>() (which according to the question does a trip to the database) because I actually need that data. So unless I can squash it into one more complicated Linq statement, perhaps it's unavoidable to make two trips.
Answer: The problem here is that you'd actually want to include subtypes. If Job had a collection of Pdfs, you could have done
Job job = await db.Jobs
.Include(j => j.Pdfs.Select(pdf => pdf.PdfPages))
.SingleOrDefaultAsync(j => j.Id == id);
But Pdf is a subtype, and EF doesn't support a syntax like
Job job = await db.Jobs
.Include(j => j.Files.OfType<Pdf>().Select(pdf => pdf.PdfPages))
.SingleOrDefaultAsync(j => j.Id == id);
So what you do is the only way to get the Job with Pdfs and PdfPages.
There are some improvements to be made though:
You can just load the child objects into the context without assigning them to job.Files yourself. EF will knit the entities together by relationship fixup.
You can first check if the Job is found and then load the Pdfs.
Turning it into this:
Job job = await db.Jobs.FindAsync(id);
if (job == null)
{
return NotFound();
}
else
{
// Load the PDFs for this job and include their PdfPages
await db.Pdfs.Include(pdf => pdf.PdfPages).Where(pdf => pdf.Job_ID == id)
.LoadAsync();
}
return Ok(job); | {
"domain": "codereview.stackexchange",
"id": 18456,
"tags": "c#, linq, entity-framework, linq-to-sql"
} |
How do I get similarity with autoencoders | Question: I have build an autoencoder to extract from a very high dimensional (200 dimensions) space a smaller but significant representation (16 dimensions).
Now that I have these "encoded" vectors, I would like to compute some kind of similarity score, or clustering.
I am not sure which notion of distance to apply at this point. Any ideas how I can get similarity/clusters considering that I have used autoencoders?
Answer: You can calculate the cosine similarity between two encoded vectors you would like to compare. The cosine similarity between two vectors is defined as follows:
$$
\text{sim}(\mathbf{a}, \mathbf{b}) = \cos\theta = \frac{\mathbf{a}\cdot\mathbf{b}}{\lVert\mathbf{a}\rVert\,\lVert\mathbf{b}\rVert}
$$
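A minimal sketch of this computation (my own illustration), assuming the encoded vectors are plain lists of floats such as the 16-dimensional codes produced by the encoder:

```python
import math

def cosine_similarity(a, b):
    # sim(a, b) = (a . b) / (||a|| * ||b||)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Two hypothetical encoded vectors; same direction -> similarity 1.0
v1 = [1.0, 2.0, 3.0]
v2 = [2.0, 4.0, 6.0]
print(cosine_similarity(v1, v2))
```

The resulting similarity lies in [-1, 1] and ignores vector magnitude, which is often what you want when comparing learned embeddings.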
"domain": "datascience.stackexchange",
"id": 5582,
"tags": "deep-learning, similarity, autoencoder"
} |
Schrödinger equation for the evolution operator transform under unitary transformation? | Question: The Schrödinger equation for states is known as $\frac{d}{dt} \psi(t)= - iH(t)\psi(t) $. The solution can be expressed via the time evolution operator $U(t)$ so that $\psi(t)=U(t)\psi(0)$ where now also $U(t)$ satisfies the Schrödinger equation \begin{align} \frac{d}{dt} U(t) = -i H(t) U(t)\end{align}
Applying a time-dependent unitary basis transformation $T(t)$, the Hamiltonian can be shown to transform as \begin{align}
\breve{H} = THT^\dagger + i \dot{T}T^\dagger
\end{align}
yielding a transformed Schrödinger equation \begin{align} \frac{d}{dt} \breve{\psi} = -i\breve{H}(t)\breve{\psi}(t) \end{align} where $\breve{\psi}(t)=T\psi$
Question: Does the time evolution operator solving the transformed Schrödinger equation also satisfy the transformed Schrödinger equation? I.e. $\frac{d}{dt}\breve{U}=-i\breve{H}\breve{U}$?
Background: I am actually interested in introducing a time-dependent basis transformation of the operator basis that $U$ may be written in. Naively, such a basis transform would transform $U$ as $U\mapsto \breve{U} = T U T^\dagger$. However the equation of motion for this $\breve{U}$ takes the form
\begin{align*}
\frac{d}{dt} \breve{U} =& \dot{T}UT^{\dagger}+T\dot{U}T^{\dagger}+TU\dot{T^{\dagger}}\\
= & \dot{T}T^{\dagger}TUT^{\dagger}-iTHUT^{\dagger}+TUT^{\dagger}T\dot{T^{\dagger}}\\
= & \dot{T}T^{\dagger}\breve{U}-iTHUT^{\dagger}+\breve{U}T\dot{T^{\dagger}}\\
= & \dot{T}T^{\dagger}\breve{U}-iTHT^{\dagger}TUT^{\dagger}+\breve{U}T\dot{T^{\dagger}}\\
= & \dot{T}T^{\dagger}\breve{U}-iTHT^{\dagger}\breve{U}+\breve{U}T\dot{T^{\dagger}}
\end{align*}
and upon replacing $THT^{\dagger}=\breve{H}-i\dot{T}T^{\dagger}$ we find \begin{align*}
\frac{d}{dt}\breve{U}= & \dot{T}T^{\dagger}\breve{U}-i\left(\breve{H}-i\dot{T}T^{\dagger}\right)\breve{U}+\breve{U}T\dot{T^{\dagger}}\\
= & \dot{T}T^{\dagger}\breve{U}-i\breve{H}\breve{U}-\dot{T}T^{\dagger}\breve{U}+\breve{U}T\dot{T^{\dagger}}\\
= & -i\breve{H}\breve{U}+\breve{U}T\dot{T^{\dagger}}
\end{align*}
The final equation of motion for $\breve{U}$ almost looks like the Schrödinger equation with transformed Hamiltonian $\breve{H}$ if it were not for the addition term featuring $T \dot{T^\dagger}$ operated from the right on $\breve{U}$. How does one handle this equation? I am used to factoring out $U$ or $\psi$ in any equation of motion. This does not seem to be possible here.
I am wondering: is there any literature/ mathematical theories treating such a time-dependent transformation?
Answer: The transformation for the evolution operator you wrote is wrong. The correct one is,
\begin{equation}
\breve{U}(t_2,t_1)=T(t_2)U(t_2,t_1)T^\dagger(t_1)
\end{equation}
note the explicit two-time dependence in case of the time-dependent Hamiltonian,
\begin{equation}
i\frac{d}{dt_2}U(t_2,t_1)=H(t_2)U(t_2,t_1),\quad i\frac{d}{dt_1}U(t_2,t_1)=-U(t_2,t_1)H(t_1)
\end{equation}
So in essence, you first transform the initial wavefunction to the old basis at the moment $t_1$, then evolve it to $t_2$ and then transform it to the new basis with a different operator $T(t_2)$. When you differentiate $\breve{U}(t_2,t_1)$ by $t_2$, $T^\dagger(t_1)$ doesn't depend on $t_2$ and therefore does not contribute extra term. | {
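A quick consistency check (my own addition, not part of the original answer): differentiating this two-time transformation with respect to $t_2$, inserting $T^\dagger(t_2)T(t_2)=1$, and using $\breve{H} = THT^\dagger + i\dot{T}T^\dagger$ from the question recovers the transformed Schrödinger equation with no leftover term:

```latex
\begin{aligned}
\frac{d}{dt_2}\breve{U}(t_2,t_1)
  &= \dot{T}(t_2)\,U(t_2,t_1)\,T^\dagger(t_1) - i\,T(t_2)H(t_2)\,U(t_2,t_1)\,T^\dagger(t_1) \\
  &= \dot{T}T^\dagger\,\breve{U}(t_2,t_1) - i\,THT^\dagger\,\breve{U}(t_2,t_1) \\
  &= -i\left(THT^\dagger + i\dot{T}T^\dagger\right)\breve{U}(t_2,t_1)
   = -i\,\breve{H}(t_2)\,\breve{U}(t_2,t_1)
\end{aligned}
```

Since $T^\dagger(t_1)$ carries no $t_2$ dependence, the extra $\breve{U}T\dot{T^\dagger}$ term from the question never appears.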
"domain": "physics.stackexchange",
"id": 81363,
"tags": "schroedinger-equation, differentiation, linear-algebra, time-evolution"
} |
Per Base Sequence Content in fastqc | Question: I have a question regarding "Per Base Sequence Content" plot for "fastqc":
In the fastqc documentation, it is written: "In a random library you would expect that there would be little to no difference between the different bases of a sequence run, so the lines in this plot should run parallel with each other."
But I don't understand why different bases in a read should follow the same pattern of allele frequency ("A/T/C/G"). I mean, they are different positions in the genome, and it is normal that each position has a different allele.
I would appreciate it if someone could help me, please.
Answer: The positions you see on the x-axis of this plot are positions in your reads, not in the reference genome. Since each read comes from a random position in the genome, the frequency of A/T/C/G at each base position should reflect the base composition of the entire genome. Therefore, the percentage of each base should be approximately constant across the read.
Hope that helps! | {
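To make this concrete, here is a small sketch (my own illustration, not FastQC's actual code) of how per-read-position base fractions could be tallied; with reads drawn from random genome positions, each column's percentages approach the genome-wide composition:

```python
from collections import Counter

def per_base_content(reads):
    """For each position along the reads, compute the fraction of A/C/G/T."""
    length = len(reads[0])
    result = []
    for pos in range(length):
        counts = Counter(read[pos] for read in reads)
        total = sum(counts.values())
        result.append({base: counts[base] / total for base in "ACGT"})
    return result

# Toy "reads"; in a real FASTQ these would come from random genome positions
reads = ["ACGT", "AGGT", "TCGA", "ACGT"]
content = per_base_content(reads)
print(content[0])  # base fractions at read position 0
```

If the lines in the FastQC plot diverge strongly from a flat profile, that usually signals adapter contamination or biased priming rather than genuine genomic structure.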
"domain": "bioinformatics.stackexchange",
"id": 1325,
"tags": "ngs, genome, fastqc"
} |
How can I calculate time difference of things moving at different speeds? | Question: Satellites orbiting earth are faster than it, and hence from our perspective they age slower, or, for us time moves faster.
Is there a way to calculate this difference?
Even though Earth is fast, can I set the speed of Earth to 0, since the satellite also moves at the same speed plus its own?
If any of you by any chance knows the factor by which time moves more slowly for the satellites, I'd appreciate that too, but I care more about the formula.
Answer: Speed is relative, that means it has meaning only in reference to something. So in this case the best reference to choose is the Earth itself and the speed of Earth relative to Earth is always zero.
You cannot measure the speed of some object without a reference point. Speed is the distance traveled in a given time, but the distance measured depends on the reference point. For example, when you sit in a train, the distance you travel relative to the train is zero; but if the train moves, relative to Earth your distance traveled will not be zero.
Then the question naturally arises whether there is some nice reference point that is the most convenient to use. The answer is, there is - the inertial one. That is a system in which space and time are homogeneous and isotropic - meaning the same everywhere and in every direction.
But there are infinitely many such systems, all of them moving with constant velocity with respect to each other. So the second natural question arises: which inertial system is the best? The answer is that in general there is none. That is called the relativity principle, in its first form formulated by Galileo (the Galilean relativity principle) and later generalized by Einstein.
The relativity principle states that you cannot distinguish two inertial reference frames by physics alone. That means all that can happen in one system can also happen in another. For example, playing ping-pong is governed by some physics, but the game will feel the same whether you are playing on Earth or on a train moving with constant velocity with respect to Earth. If the velocity of the train weren't constant, then you would feel something different, as you do when a train is slowing down or turning.
$$
\Delta t'=\gamma\Delta t
$$
where the Lorentz factor is given by the relative speed ($v$) of the two frames:
$$
\gamma=1/\sqrt{1-v^2/c^2}
$$
where c is the speed of light.
But there is one further piece of physics involved. Because the satellite is in orbit, there is additional time dilatation caused by the curvature of spacetime (gravitation):
$$
\Delta t'=\Delta t \sqrt{1-\frac{3r_s}{2r_2}}/\sqrt{1-\frac{3r_s}{2r_1}}
$$
where $r_1$ and $r_2$ are the radial distances of the observer and the satellite from the centre of Earth, and $r_s=2GM/c^2\approx 9\,\mathrm{mm}$ is the Schwarzschild radius, given by the gravitational constant ($G$), the mass of Earth ($M$) and the speed of light. This radius tells you how small Earth would need to be to become a black hole.
So the final formula is:
$$
\Delta t'=\frac{\sqrt{1-\frac{3r_s}{2r_2}}}{\sqrt{1-v^2/c^2}\sqrt{1-\frac{3r_s}{2r_1}}}\Delta t
$$
But the gravitational time dilatation is not as simple as the dilatation due to relative motion. In relative motion, the time dilatation is always proportional, with proportionality factor $\gamma$, but gravitational dilatation is in general much more complicated.
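As a numerical illustration of the final formula above (my own sketch; the orbital radius and speed below are rough, GPS-like illustrative values, not exact data):

```python
import math

c = 299_792_458.0   # speed of light, m/s
r_s = 8.87e-3       # Schwarzschild radius of Earth, ~9 mm
r1 = 6.371e6        # observer: radius of Earth's surface, m
r2 = 2.66e7         # satellite: GPS-like orbital radius, m (illustrative)
v = 3.87e3          # satellite speed relative to Earth, m/s (illustrative)

# dt'/dt according to the combined formula in the answer
ratio = math.sqrt(1.0 - 1.5 * r_s / r2) / (
    math.sqrt(1.0 - v**2 / c**2) * math.sqrt(1.0 - 1.5 * r_s / r1)
)
offset_per_day = (ratio - 1.0) * 86400  # accumulated drift in seconds per day
print(ratio, offset_per_day)
```

The ratio differs from 1 by less than a part per billion, yet the accumulated drift per day is large enough that GPS satellite clocks must be corrected for it.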
"domain": "physics.stackexchange",
"id": 58376,
"tags": "general-relativity, spacetime"
} |
Is jumping the result of normal force or action-reaction? | Question: Just a clarification question based on an example I read about.
Can normal force do work on an object?
The answer is yes, with an example being a person jumping. The normal force causes work to be done.
However, I'm wondering, is that actually normal force, or is that the reaction force from applying force to the ground? There is a difference between the reaction force and normal force. I'm not sure if technically that example is correct. If it isn't, can someone provide a different one where normal force does do work?
Answer:
I'm wondering, is that actually normal force, or is that the reaction force from applying force to the ground? There is a difference between the reaction force and normal force.
No, there is not a difference you can determine between "the reaction force" and this normal force because one is describing a relationship (reaction) and the other a reality (normal force).
It seems to me there is a fundamental misunderstanding of Newton's 3rd Law, aka, action-reaction forces. The normal force on the persons feet is caused by the interaction of the structural boundaries (the intermolecular bonds, etc) of the feet with the structural boundaries of the ground. Likewise, the normal force on the ground from the feet is caused by exactly the same interaction. You cannot say that one occurs in "reaction" to the other.
What Newton's 3rd Law says is that forces do not occur singularly. They are interactions, and as such, if you observed the result of a force or you conceptually determine there is one force on an object, there must, by symmetry, be another force due to the same interaction. It's not a cause and effect relationship (" which force is the reaction to the action?"), it's a there-must-be-another-force-somewhere relationship. Newton's 3rd law really is a statement about conservation of momentum. This is from Newton (translated from the Latin by Drake, I believe):
If a body impinges upon another, and by its force changes the motion of the other, that body also (because of the equality of the mutual pressure) will undergo an equal change, in its own motion, toward the contrary part. The changes made by these actions are equal, not in the velocities but in the motions of the bodies; that is to say, if the bodies are not hindered by any other impediments. For, as the motions are equally changed, the changes of the velocities made toward contrary parts are reciprocally proportional to the bodies.
In this writing, Newton's motion is our momentum and his body is our concept of mass.
Nowhere does Newton say that an action causes a reaction. He says that forces come in pairs:
If you press a stone with your finger, the finger is also pressed by the stone.
The forces come from a mutual interaction; reaction is an unfortunate word.
Forces have root causes and those causes are not other forces. They are interactions: mass with mass, charge with charge, quarks with quarks. | {
"domain": "physics.stackexchange",
"id": 20202,
"tags": "forces"
} |
How to compute the complexity of $T(n) = T(n-2)+T(n-3)+2T(n/3)$? | Question: $T(n) = T(n-2)+T(n-3)+2T(n/3)$ and $T(n)=1$ for $n<4$.
I tried to compute the complexity of $T(n) = T(n-2)+T(n-3)+2T(n/3)$ using the recursion tree, but it's not clear enough for me to make a guess and demonstrate it by induction. Also, it should be computed with both upper and lower bounds.
Answer: Summary: $T(n) = \Theta(\alpha^n)$, where $\alpha=\sqrt[3]{\frac{9+\sqrt{69}}{18}}+\sqrt[3]{\frac{9-\sqrt{69}}{18}}\approx 1.324718$.
Find $\alpha$
Let $\alpha$ be the unique positive root of $x^3=x+1$.
We can solve the equation manually by the standard method, letting $x=w+\frac 1{3w}$ and solving for $w$. If we trust the online calculators, or use a software package such as MATLAB, Maple, Wolfram Alpha or Python's numpy.roots, we can find all the exact or approximate roots. In fact, since all we need is a positive root, we could also cheat a bit by just verifying directly that the following quantity is a root.
$$\alpha=\sqrt[3]{\frac{9+\sqrt{69}}{18}}+\sqrt[3]{\frac{9-\sqrt{69}}{18}}\approx 1.324718$$
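For a quick numerical check of this closed form (a sketch using only the standard library; both cube-root arguments are positive here, so real cube roots via `**` are safe):

```python
import math

# Cardano's formula for the positive root of x^3 = x + 1
alpha = ((9 + math.sqrt(69)) / 18) ** (1 / 3) + ((9 - math.sqrt(69)) / 18) ** (1 / 3)
print(alpha)             # ~1.324718 (the "plastic number")
print(alpha**3 - alpha)  # ~1.0, confirming alpha^3 = alpha + 1
```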
Reference function $S$
Let $S(n)= S(n-2)+S(n-3)$ and $S(n)=1$ for $n<4$.
Claim: $\frac{\alpha^n}{3}<S(n)<\alpha^n$ for all $n\ge1$.
Proof by mathematical induction on $n$.
The base case when $1\le n\le3$ is easy since $1<\alpha$ and $\alpha^3<3$.
Since $\alpha^3=\alpha+1$, the induction step is easy. Assume the claim is true when $n\le k-1$ for some $k\ge3$. Then
$$S(k)=S(k-2)+S(k-3)>\frac{\alpha^{k-2}}{3}+\frac{\alpha^{k-3}}3=\frac{\alpha^k}3$$
$$S(k)=S(k-2)+S(k-3)<\alpha^{k-2}+\alpha^{k-3}=\alpha^k$$
Last Trick
Claim One: $S(n)\le T(n)$ for all $n\ge1$.
Proof: It is easy to prove by mathematical induction that $T(n)>0$ for all $n\ge1$. $S(n)\le T(n)$ follows easily by mathematical induction.
Lemma: there exists a constant $c>0$ such that for all $n\ge c$, $\alpha^{\frac{2n}3} > 3n^2$.
Proof. This is obvious since any growing exponential function grows faster than any polynomial, a well-known fact that is (almost) proved in this answer. (We can take c=72, for example.)
Claim Two: There exists a constant $d>0$ such that $T(n)<d(1-\frac1n)S(n)$ for all $n\ge2$. (That factor $1-\frac1n$ is the "last trick"; at $n=1$ the right-hand side would vanish, so we start from $n=2$, which is enough for the asymptotics.)
Proof. Let $c>4$ be a constant as in the lemma. Let $d=1+2\max_{1\le n<c}(T(n))>1$.
Let us prove $T(n)<d(1-\frac1n)S(n)$ by mathematical induction on $n$.
The base case is $2\le n<c$: since $1-\frac1n\ge\frac12$, we have $T(n) < d/2 \le d(1-\frac1n) \le d(1-\frac1n)S(n)$.
Suppose the inequality is true for all $n\le k-1$, where $k\ge c$. Then
$$ \begin{aligned}
T(k)=& T(k-2) + T(k-3) + 2T(k/3)\\
\lt& d(1-\frac1{k-2})S(k-2) + d(1-\frac1{k-3})S(k-3) + 2d(1-\frac1{\frac k3})\alpha^{\frac k3} \\
\lt&d(1-\frac1{k-2})(S(k-2)+S(k-3)) + 2d\alpha^{\frac k3}\\
=&d(1-\frac1{k-2})S(k) + 2d\alpha^{\frac k3}\\
=&d(1-\frac1{k})S(k) - \frac{2d}{(k-2)k}S(k)+ 2d\alpha^{\frac k3} \\
\lt&d(1-\frac1{k})S(k) - \frac{2d}{(k-2)k}\frac13\alpha^k+ 2d\alpha^{\frac k3} \\
=&d(1-\frac1{k})S(k) - \frac{2d\alpha^{\frac k3}}{3(k-2)k}(\alpha^{\frac {2k}3} - 3(k-2)k)\\
\end{aligned}$$
Since $\alpha^{\frac {2k}3} - 3(k-2)k \gt \alpha^{\frac {2k}3} - 3k^2\gt0$, we obtain $T(k) \lt d(1-\frac1{k})S(k)$.
Claim Three: $T(n)=\Theta(\alpha^n)$
Proof: This follows immediately from the two above claims since $\frac{\alpha^n}{3}<S(n)<\alpha^n$ and $d(1-\frac1n)\le d$. | {
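A bottom-up numerical check of this result (my own sketch; $n/3$ is taken as integer division): the normalized growth rate $\log T(n)/n$ should approach $\log\alpha \approx 0.28118$.

```python
import math

def T_values(N):
    # T(n) = 1 for n < 4; T(n) = T(n-2) + T(n-3) + 2*T(n//3) otherwise
    T = [1] * (N + 1)  # T[0..3] = 1 (T[0] is unused padding)
    for n in range(4, N + 1):
        T[n] = T[n - 2] + T[n - 3] + 2 * T[n // 3]
    return T

T = T_values(200)
alpha = 1.3247179572447  # positive root of x^3 = x + 1
print(T[6])                    # -> 7, small sanity value
print(math.log(T[200]) / 200)  # close to log(alpha) ~ 0.28118
```

The term $2T(n/3)$ is exponentially smaller than $T(n-2)+T(n-3)$, which is why the growth rate is governed by the characteristic equation $x^3 = x + 1$ alone.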
"domain": "cs.stackexchange",
"id": 12666,
"tags": "asymptotics, recurrence-relation"
} |
Is the turtlebot bumper sensor being used by the navigation stack? | Question:
Hello,
My turtlebots don't respond to the bumper sensor when navigating using AMCL. That is, there appears to be no default behavior related to the bumper. Of course, I am planning to fix this in my own logic, but I wanted to be sure that I am not missing some basic functionality or configuration step.
That is, is there some out of the box bumper functionality included, or is it assumed that we will build that into our own behaviors (such as to back up, re-orient, etc.)? If we should build it ourselves, is it simply a matter of subscribing to the sensor_state messages to get the bumper state? Will I need to cancel my AMCL goal before backing up?
After looking around, all I could find is this enhancement request:
https://kforge.ros.org/turtlebot/trac/ticket/35
Thank you in advance.
Originally posted by ceverett on ROS Answers with karma: 332 on 2012-06-17
Post score: 2
Answer:
The bumper sensor is used to stop the turtlebot at the lowest level: relevant code
Once the navigation stack realizes it isn't moving even after sending velocity commands, it may (if you've enabled it) go into recovery behaviour. The default behaviour (which the default turtlebot launch files use) is here
Originally posted by weiin with karma: 2268 on 2012-06-17
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 9818,
"tags": "navigation, turtlebot"
} |
ROS2 string repr of parameter within launch file | Question:
Is it possible to get the string representation of a parameter? LaunchConfiguration('arg_name') returns a LaunchConfiguration object, while I need a string to perform several actions.
Originally posted by definitive on ROS Answers with karma: 57 on 2021-07-08
Post score: 0
Answer:
It's automatically evaluated in the launch phase, so generally, you don't have to care about it.
If you'd like to print the value for debugging, you can use OpaqueFunction.
Related: https://answers.ros.org/question/322636/ros2-access-current-launchconfiguration/
I put a minimal example here.
# false
$ ros2 launch sample.launch.py use_sim_time:=false
use_sim_time: false
$ ros2 param get /publisher use_sim_time
Boolean value is: False
# true
$ ros2 launch sample.launch.py use_sim_time:=true
use_sim_time: true
$ ros2 param get /publisher use_sim_time
Boolean value is: True
The launch file is:
from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument, OpaqueFunction
from launch.substitutions import LaunchConfiguration
from launch_ros.actions import Node, SetParameter
def launch_setup(context, *args, **kwargs):
use_sim_time = LaunchConfiguration("use_sim_time")
print(f"use_sim_time: {use_sim_time.perform(context)}")
set_use_sim_time = SetParameter(name="use_sim_time", value=use_sim_time)
node = Node(
package="examples_rclcpp_minimal_publisher",
executable="publisher_lambda",
name="publisher",
)
return [
set_use_sim_time,
node,
]
def generate_launch_description():
return LaunchDescription(
[
DeclareLaunchArgument("use_sim_time", default_value="false"),
OpaqueFunction(function=launch_setup),
]
)
Originally posted by Kenji Miyake with karma: 307 on 2021-07-08
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by M@t on 2021-08-23:
Thanks for providing the example @Kenji! It works perfectly. | {
"domain": "robotics.stackexchange",
"id": 36668,
"tags": "python, ros2, roslaunch"
} |
Monad to sample without replacement | Question: I created a monad in Haskell that lets you sample without replacement from user-defined urns, and then at the end gives you a list of all possible outcomes. It looks like it's similar to the list monad (except that one only ever samples with replacement), and to the ST monad. Here's the interface I want to present:
data Draw s a
data Urn s a
instance MonadPlus (Draw s)
newUrn :: [a] -> Draw s (Urn s a)
drawFrom :: Urn s a -> Draw s a
drawList :: [a] -> Draw s a -- so that you can still sample with replacement, like in the list monad
runDraw :: (forall s. Draw s a) -> [a]
Here's an example of how I want to use it:
runDraw $ do
l <- newUrn [1,2,3,3]
x <- drawFrom l
y <- drawFrom l
return (x, y)
-- produces [(1,2),(1,3),(1,3),(2,1),(2,3),(2,3),(3,1),(3,2),(3,3),(3,1),(3,2),(3,3)]
And here's what I came up with to implement that:
{-# LANGUAGE MagicHash, RankNTypes, RoleAnnotations #-}
module Draw (Draw, Urn, newUrn, drawFrom, drawList, runDraw) where
import Control.Applicative (Alternative(..))
import Control.Monad (MonadPlus, ap, liftM)
import Data.List (genericSplitAt)
import GHC.Exts (Any, unsafeCoerce#)
import Numeric.Natural (Natural)
newtype Draw s a = Draw { unDraw :: (Natural, [Any]) -> [(a, (Natural, [Any]))] }
type role Draw nominal representational
newtype Urn s a = Urn Natural
type role Urn nominal representational
instance Functor (Draw s) where
fmap = liftM
instance Applicative (Draw s) where
pure = drawList . pure
(<*>) = ap
instance Alternative (Draw s) where
empty = drawList empty
Draw m1 <|> Draw m2 = Draw $ \s -> m1 s <|> m2 s
instance Monad (Draw s) where
Draw m >>= f = Draw $ \s -> m s >>= uncurry (unDraw . f)
instance MonadPlus (Draw s)
drawList :: [a] -> Draw s a
drawList xs = Draw $ \s -> flip (,) s <$> xs
runDraw :: (forall s. Draw s a) -> [a]
runDraw (Draw f) = map fst (f (0, []))
newUrn :: [a] -> Draw s (Urn s a)
newUrn xs = Draw $ \(n, us) -> pure (Urn n, (n + 1, us ++ [toAny xs]))
drawFrom :: Urn s a -> Draw s a
drawFrom (Urn i) = Draw go where
go :: (Natural, [Any]) -> [(a, (Natural, [Any]))]
go (n, us) = map (\(x, remainingContents) -> (x, (n, before ++ toAny remainingContents : after))) (removeEach (fromAny urnContents)) where
(before, urnContents:after) = genericSplitAt i us
fromAny :: Any -> [a]
fromAny = unsafeCoerce#
toAny :: [a] -> Any
toAny = unsafeCoerce#
removeEach :: [a] -> [(a, [a])]
removeEach [] = []
removeEach (x:xs) = (x, xs):map (fmap (x:)) (removeEach xs)
This seems to work, at least with the example I posted above.
Here's my concerns:
I'm doing a lot of unsafeCoerce#, which is obviously not very safe
(before, urnContents:after) = genericSplitAt i us is an incomplete pattern match, which may be able to fail at runtime
I'm building the list of urns with xs ++ [x], which is quadratically slow
I'm not confident that this satisfies all of the typeclass laws, in particular the monad law of associativity. I now realize that my type is isomorphic to StateT (Natural, [Any]) [], with equivalent instances, so I'm no longer concerned about this.
I'm not sure if the way I'm handling the urns is correct, or if it's somehow possible to use an urn where it doesn't belong and thus break type safety
Answer: This is an interesting problem, and you've written an interesting solution.
With respect to your inefficient urn "store" -- the mapping of immutable urn references (Urn Natural) to mutable urn contents -- it might be worth considering that because of the nature of your monad, most monadic computations involving urns will scale exponentially in the number of urns anyway, so big-O performance of urn list building and lookups is essentially irrelevant. (You can start thinking about it when people want to use your monad for 100000-urn problems; or you could probably port everything over to a Data.Map Int or Data.IntMap in a few minutes.)
The bigger problem, as you've noted, is that because this all has to run in a specific monotyped monad, unless you want to pre-declare the set of urns and their element types as used in a particular computation, you need an ugly, unsafe generic type like [Any] to represent your set of urns.
One method of dealing with this would be to represent the mutable contents of an urn by a set of always-integer indices while packaging the actual elements as part of the immutable Urn reference. That is, the Urn references you pass around can be represented as:
data Urn s a = Urn { tag :: Key
, labels :: Int -> a }
type role Urn nominal representational
with monotyped mutable state:
data UrnState = UrnState { nextTag :: Key
, urns :: IntMap [Int] }
So urns urnState ! tag1 is the set of integer indices still in play for that urn, and the actual elements are available by looking up those indices in the labels urnRef map.
SPOILERS
A complete code example, which seems to work on your test case is:
{-# LANGUAGE DeriveFunctor, RoleAnnotations, RankNTypes #-}
import Data.List
import Control.Monad
import qualified Data.IntMap as IntMap
import Data.IntMap (Key, IntMap, (!))
data Urn s a = Urn { tag :: Key
, labels :: Int -> a }
type role Urn nominal representational
data UrnState = UrnState { nextTag :: Key
, urns :: IntMap [Int] }
newtype Draw s a = Draw { unDraw :: UrnState -> [(a, UrnState)] } deriving (Functor)
type role Draw nominal representational
instance Applicative (Draw s) where
pure x = Draw (\s -> [(x,s)])
(<*>) = ap
instance Monad (Draw s) where
Draw d >>= f = Draw $ \s -> do -- list monad
(a', s') <- d s
unDraw (f a') s'
evalDraw :: (forall s. Draw s a) -> [a]
evalDraw (Draw d) = map fst $ d $ UrnState 0 IntMap.empty
newUrn :: [a] -> Draw s (Urn s a)
newUrn xs = Draw $ \(UrnState nxttag urs) ->
let -- list of labels keyed by indexes [0..n-1]
lbls = IntMap.fromAscList (zip [0..] xs)
-- new urn has tag "nxttag" and the immutable labelling function
u = Urn nxttag (lbls !)
-- add urn to state
urs' = IntMap.insert nxttag (IntMap.keys lbls) urs
in [(u, UrnState (nxttag+1) urs')]
draws :: [a] -> [(a,[a])]
draws xs = zipWith3 go (inits xs) xs (tail (tails xs))
where go l a r = (a, l++r)
drawFrom :: Urn s a -> Draw s a
drawFrom (Urn tg lbls) = Draw $ \(UrnState nxttag urs) ->
case urs ! tg of
[] -> fail "empty urn"
xs -> do -- list monad
(a, xs') <- draws xs
return $ (lbls a, UrnState nxttag $ IntMap.insert tg xs' urs)
main :: IO ()
main = print $ evalDraw $ do
l <- newUrn [1,2,3,3]
x <- drawFrom l
y <- drawFrom l
return (x, y) | {
"domain": "codereview.stackexchange",
"id": 37007,
"tags": "haskell, monads"
} |
2D Basic 'evade the enemy' type game | Question: I'm trying to learn how to become a good programmer. I think this means making my code more readable for others. I would really appreciate any feedback whatsoever. I'm not quite sure what specifically I'm most lacking in so that's about as precise as I can get.
Currently I've split my program up into 5 parts:
'game.py': main program which runs the game
'entitie.py': describes how entities function (both enemy and player entities).
'screen.py': describes how the screen works, takes care of blitting text and drawing all the entities
'levels.py': describes how enemies spawn in each of the levels, as well as what happens when the next level is reached
'game_mechanics.py': really just a file I plan to throw all my functions in that don't really fit anywhere else
I've heard about 'inheritance' and I suppose it might be better to split enemy and player entities up and use the overlap from a more basic entity, but I've never done that and it feels to me the code would become less readable.
game.py:
"""
Written by Nathan van 't Hof
9 January 2018
Main file used to control the game.
"""
from entitie import Entity
import screen
import pygame
import sys
import os
import time
import levels
def init():
player = Entity('player', x=300, y=500)
entities = []
return player, entities
def update(player, entities):
player.update_movement()
for entity in entities:
entity.update_movement()
return player, entities
# initialize values
pygame.mixer.pre_init(44100, 16, 2, 4096)
pygame.init()
player, entities = init()
playing = True
# play music
pygame.mixer.music.load(os.path.join(os.getcwd(), "background.mp3"))
pygame.mixer.music.set_volume(0.5)
pygame.mixer.music.play(-1)
# set display
display = screen.display()
# set basic values
level = 0
list_levels = [levels.level_0, levels.level_1, levels.level_2, levels.level_3, levels.level_4, levels.level_5]
previous_addition = time.time()
dt = 1
while playing:
# update values
player, entities = update(player, entities)
player.update_score()
level, entities, difficulty = levels.check_level_up(player.score, level, entities)
# check if player wants to leave
for event in pygame.event.get():
if event.type == pygame.QUIT or event.type == pygame.KEYDOWN:
playing = False
# check if player died
restart = player.check_die(entities, display)
if restart > 0:
level = 0
entities = []
# allow player to move
player.interact(display.WIDTH, display.HEIGHT)
# add enemies
if time.time() - previous_addition > dt:
entities, dt = list_levels[level](entities, display, player.x, player.y, player.score, difficulty)
previous_addition = time.time()
# draw all the levels
entities = display.draw(entities, player)
# exit properly
pygame.quit()
sys.exit()
entitie.py
"""
Written by Nathan van 't Hof
9 January 2018
This is an entity class which can both be used for the player
as well as for enemies.
Player is a square whose velocity is dependent on an acceleration determined by the distance of the mouse from the
player object
Enemies are constant-velocity squares
If the player collides with an enemy of a different color, one life is lost
"""
import game_mechanics
import time
import pygame
import win32api
from random import choice
import ctypes
# to ensure the mouse lines up with the pixels
user32 = ctypes.windll.user32
user32.SetProcessDPIAware()
possible_colors = [(250,0,0), (0,250,0), (0,0,250)]
class Entity:
def __init__(self, type_entity, x=0, y=0, vx=0, vy=0, width=20):
if type_entity == 'player':
self.lives = 3
self._input = True
self.color = (0, 0, 250)
self._width = 10
self.high_score = 0
if type_entity == 'enemy':
self.lives = 1
self._input = False
self.color = choice(possible_colors)
self._width = width
self.score = 0
self.x = x
self._vx = vx
self._ax = 0
self.y = y
self._vy = vy
self._ay = 0
self._last_updated = time.time()
self._time_initiated = time.time()
self._time_last_died = time.time()
self._state_left_mouse = 0
def update_movement(self):
dt = time.time() - self._last_updated
self._last_updated = time.time()
self.x += self._vx * dt + 0.5 * self._ax * dt * dt
self._vx += dt * self._ax
self.y += self._vy * dt + 0.5 * self._ay * dt * dt
self._vy += dt * self._ay
# check if it is the player
if self._input:
# check if left mouse has been pressed, if so, it changes color
new_state_left_mouse = win32api.GetKeyState(0x01)
# Button state changed
if new_state_left_mouse != self._state_left_mouse:
self._state_left_mouse = new_state_left_mouse
if new_state_left_mouse < 0:
self.color = (self.color[1], self.color[2], self.color[0])
def update_score(self):
self.score = int(time.time() - self._time_initiated)
self.high_score = max(self.high_score, self.score)
def check_die(self, entities, display):
restart = 0
# cannot immediately die
if time.time() - self._time_last_died > 3:
# check if any of the enemies cross the player
for entity in entities:
if entity.color != self.color:
if game_mechanics.collision_detect(self.x, entity.x, self.y, entity.y, self._width, entity._width, self._width, entity._width):
restart += self.die()
break
# check if player is out of bounds
if self.x > display.WIDTH or self.x < 0 or self.y > display.HEIGHT or self.y < 0 and restart == 0:
restart = self.die()
return restart
def die(self):
self._time_last_died = time.time()
self.lives -= 1
self.x = 500
self.y = 500
if self.lives == 0:
self.restart()
return 1
return 0
def restart(self):
self.score = 0
self.lives = 3
self.x = 200
self._vx = 0
self._ax = 0
self.y = 200
self._vy = 0
self._ay = 0
self._last_updated = time.time()
self._time_initiated = time.time()
def interact(self, screen_x, screen_y):
# only if the entity is the player (not really necessary, just an extra precaution)
if self._input:
# determine accelerations based on distance of mouse to player object
x_mouse, y_mouse = win32api.GetCursorPos()
dx = ((x_mouse - self.x)/screen_x)
dy = ((y_mouse - self.y)/screen_y)
self._ax = min(300000, 15000 * dx)
self._ay = min(300000, 15000 * dy)
        # brake hard when you decelerate
if self._ax * self._vx < 0 and self._ay * self._vy < 0:
self._vx = self._vx * 0.8
self._vy = self._vy * 0.8
def draw(self, display):
pygame.draw.rect(display, self.color, (self.x, self.y, self._width, self._width))
screen.py
"""
Written by Nathan van 't Hof
9 January 2018
The screen object all the objects are drawn on.
"""
import pygame
from win32api import GetSystemMetrics
class display:
def __init__(self):
self.BLACK = (0, 0, 0)
self.WIDTH = GetSystemMetrics(0)
self.HEIGHT = GetSystemMetrics(1)
# full screen
self.windowSurface = pygame.display.set_mode((self.WIDTH, self.HEIGHT), pygame.FULLSCREEN)
self.windowSurface.fill(self.BLACK)
self.font = pygame.font.Font(None, 32)
def draw(self, entities, player):
self.windowSurface.fill(self.BLACK)
new_entities = []
for entity in entities:
entity.draw(self.windowSurface)
# if entity is no longer in screen it is not added to the entity list, meaning it is no longer kept track of
if not (entity.x > self.WIDTH or entity.x < -entity._width or
entity.y > self.HEIGHT or entity.y < -entity._width):
new_entities.append(entity)
player.draw(self.windowSurface)
label = self.font.render('score : ' + str(player.score), 1, (250,250,250))
self.windowSurface.blit(label, (20, 20))
label = self.font.render('lives : ' + str(player.lives), 1, (250,250,250))
self.windowSurface.blit(label, (20, 40))
label = self.font.render('high-score : ' + str(player.high_score), 1, (250, 250, 250))
self.windowSurface.blit(label, (20, 60))
pygame.display.flip()
return new_entities
levels.py
"""
Written by Nathan van 't Hof
9 January 2018
Used to control the behaviour of new enemies per level.
"""
from entitie import Entity
from random import randint
def level_up(level, entities):
level += 1
if level == 1:
for entity in entities:
entity._vx = 50
elif level == 3:
for entity in entities:
entity._vx = -80
return level, entities
def check_level_up(score, level, entities):
"""
Keeps track of what level the player is currently in,
once the last level is reached the difficulty is ramped up and it starts over.
"""
max_level = 5
difficulty = 1 + (score / 80) * 0.2
correct_level = (score % 80) / 15
if level == 0 and score > 7:
level, entities = level_up(level, entities)
elif correct_level > level and level != max_level:
level, entities = level_up(level, entities)
elif correct_level < level:
level = correct_level
return level, entities, difficulty
def level_0(entities, display, x_play, y_play, score, difficulty):
"""
Adds entities with 0 speed at random locations in the field
"""
entity = Entity('enemy',
x=randint(0,int(display.WIDTH)),
y=randint(0,int(display.HEIGHT)),
width=randint(10,40) * difficulty
)
entities.append(entity)
dt = 0.1 / difficulty
return entities, dt
def level_1(entities, display, x_play, y_play, score, difficulty):
"""
Adds entities on the left of the field, moving towards the right.
"""
entity = Entity('enemy',
x = 0,
y = randint(0,int(display.HEIGHT)),
vx = 80,
width = randint(10,100)* difficulty
)
entities.append(entity)
dt = 0.7/ difficulty
return entities, dt
def level_2(entities, display, x_play, y_play, score, difficulty):
"""
Adds entities on the left of the field, moving towards the right.
"""
for i in range(2):
entity = Entity('enemy',
x = 0,
y = randint(0,int(display.HEIGHT)),
vx = randint(70, 120),
width = randint(10,60)* difficulty
)
entities.append(entity)
dt = 0.7/ difficulty
return entities, dt
def level_3(entities, display, x_play, y_play, score, difficulty):
"""
Adds entities on the right of the field, moving towards the left.
"""
for i in range(3):
entity = Entity('enemy',
x = display.WIDTH,
y = randint(0,int(display.HEIGHT)),
vx = randint(-180, -90),
vy = randint(-30, 30),
width = randint(10,70)* difficulty
)
entities.append(entity)
dt = 0.4/ difficulty
return entities, dt
def level_4(entities, display, x_play, y_play, score, difficulty):
"""
Adds entities around the field, moving towards the player.
"""
positions = [[randint(0, display.WIDTH), 0],
[randint(0, display.WIDTH), display.HEIGHT],
[0, randint(0, display.HEIGHT)],
[display.WIDTH, randint(0, display.HEIGHT)]]
for position in positions:
entity = Entity('enemy',
x = position[0],
y = position[1],
vx = (x_play - position[0]) / 8,
vy = (y_play - position[1]) / 8,
width = randint(10,60)* difficulty
)
entities.append(entity)
dt = 70./ score / difficulty
return entities, dt
def level_5(entities, display, x_play, y_play, score, difficulty):
"""
Adds entities around the field, moving towards the player.
Also adds entities going from left to right
"""
positions = [[randint(0, display.WIDTH), 0],
[randint(0, display.WIDTH), display.HEIGHT],
[0, randint(0, display.HEIGHT)],
[display.WIDTH, randint(0, display.HEIGHT)]]
for position in positions:
entity = Entity('enemy',
x = position[0],
y = position[1],
vx = (x_play - position[0]) / 8,
vy = (y_play - position[1]) / 8,
width = randint(10,60)* difficulty
)
entities.append(entity)
for i in range(2):
entity = Entity('enemy',
x = 0,
y = randint(0,int(display.HEIGHT)),
vx = randint(70, 120),
width = randint(10,60)* difficulty
)
entities.append(entity)
dt = 50./score / difficulty
return entities, dt
game_mechanics.py
"""
Written by Nathan van 't Hof
9 January 2018
This is used for extra mechanics that don't fit properly anywhere else.
Currently only used to detect if two squares overlap.
"""
def collision_detect(x1, x2, y1, y2, w1, w2, h1, h2):
"""
Check whether two rectangles (both parallel to the x-y axes) overlap
:param x1: x value specified corner rectangle 1
:param x2: x value specified corner rectangle 2
:param y1: y value specified corner rectangle 1
:param y2: y value specified corner rectangle 2
:param w1: width rectangle 1
:param w2: width rectangle 2
:param h1: height rectangle 1
:param h2: height rectangle 2
:return: True if rectangles do overlap, else False
"""
if x1 > x2 and x1 < x2 + w2 or x1 + w1 > x2 and x1 + w1 < x2 + w2:
if y1 > y2 and y1 < y2 + h2 or y1 + h1 > y2 and y1 + h1 < y2 + h2:
return True
return False
Answer: Let's start with a few small things:
Your collision detection routine is not needed, because pygame provides a collision detector for both rectangles and sprites. Just call whatever flavor of obj.collideXXX is most appropriate.
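As a sketch of the standard axis-aligned bounding-box test (roughly what a rectangle collision check computes, though not pygame's actual implementation), note that it also handles the full-containment case that the hand-rolled collision_detect in game_mechanics.py misses:

```python
def rects_overlap(x1, y1, w1, h1, x2, y2, w2, h2):
    """Standard AABB test: two axis-aligned rectangles overlap unless one
    lies entirely to the left of, right of, above, or below the other."""
    return (x1 < x2 + w2 and x2 < x1 + w1 and
            y1 < y2 + h2 and y2 < y1 + h1)
```

Unlike the version in game_mechanics.py, this returns True when one rectangle fully contains the other.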
You import windows-specific modules a few times. First, be aware that pygame has mouse support, so you don't need to call Windows for mouse information. Next, please make an effort to hide your windows-specific modules and code behind an interface. Ideally, make it conditional on the program actually running on windows. You can use the platform module for this.
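A minimal sketch of that conditional selection, assuming win32api remains the Windows backend; the non-Windows branch here is a stub standing in for pygame's portable mouse support:

```python
import platform

def make_cursor_reader():
    """Return a function that reports the cursor position,
    chosen once at startup based on the operating system."""
    if platform.system() == "Windows":
        import win32api  # only importable on Windows
        return win32api.GetCursorPos
    # Portable fallback; a real game would return pygame.mouse.get_pos here.
    return lambda: (0, 0)

get_cursor_pos = make_cursor_reader()
```

The rest of the code then calls get_cursor_pos() without caring which backend was picked.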
Now, with those out of the way, here are some suggestions in no particular order:
Your level_0, level_1, etc. functions in levels.py should be objects. You are returning tuples because you have more than one piece of data depending on the level- that's a good indicator for an object.
class Level:
pass
class L1(Level):
def create_enemies(self, display):
# ...
return enemies
def timeout(self, difficulty):
return 0.1 / difficulty
I recommend that you create a Player subclass of Entity. There is too much player-specific code in the Entity class that doesn't apply to Enemies. You might want to also create an Enemy subclass, but that may not be necessary. Try it with just Player first and see what you get.
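A sketch of that split, reusing attribute names from the original Entity; only enough is shown to illustrate how the type_entity branches disappear once each subclass sets its own defaults:

```python
import time
from random import choice

POSSIBLE_COLORS = [(250, 0, 0), (0, 250, 0), (0, 0, 250)]

class Entity:
    """Shared state and movement; no player/enemy branching."""
    def __init__(self, x=0, y=0, vx=0, vy=0, width=20):
        self.x, self.y = x, y
        self._vx, self._vy = vx, vy
        self._width = width
        self._last_updated = time.time()

    def update_movement(self, dt):
        self.x += self._vx * dt
        self.y += self._vy * dt


class Enemy(Entity):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.lives = 1
        self.color = choice(POSSIBLE_COLORS)


class Player(Entity):
    def __init__(self, x=0, y=0):
        super().__init__(x=x, y=y, width=10)
        self.lives = 3
        self.color = (0, 0, 250)
        self.score = 0
        self.high_score = 0
```

Player-only behaviour (input handling, score, dying) then lives on Player, and Entity stays small.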
Your check_level_up is taking a lot of inputs, and returning a lot of outputs. You should probably make that a player method. You can make the level a player attribute, too, if it seems appropriate. Alternatively, just do something like:
if player.level_up():
player.level += 1
cur_level = Levels[player.level]
Your display.draw function should not be filtering your enemies that are still alive. The display object shouldn't know anything about that. Use collision detection with the main screen to determine what to draw, and let the enemies kill themselves when they move offscreen. (That is, let the Entity class handle it, since you're talking about Entity-specific data.)
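One way to sketch that separation (names here are illustrative, not prescriptive): each entity decides whether it is still alive, and the main loop filters the list, so the display object only draws:

```python
class Entity:
    def __init__(self, x, y, width, bounds):
        self.x, self.y, self.width = x, y, width
        self.bounds = bounds   # (screen_width, screen_height)
        self.alive = True

    def update(self):
        # The entity retires itself once it is fully offscreen,
        # instead of display.draw filtering the list for it.
        w, h = self.bounds
        if (self.x > w or self.x < -self.width or
                self.y > h or self.y < -self.width):
            self.alive = False

# In the main loop, after updating every entity:
# entities = [e for e in entities if e.alive]
```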
"domain": "codereview.stackexchange",
"id": 28991,
"tags": "python, game, pygame"
} |
Writing little package for rubber ducks in tikz | Question: I'm currently working on a little LaTeX package to bring rubber ducks into TikZ (a continuation of https://tex.stackexchange.com/a/347458/36296). As this is my first larger project using TikZ, I'd like to hear your opinion about the way it is currently implemented.
I'm looking forward to your suggestions!
A few questions I have in particular:
Do you find it acceptable to load xcolor with the option svgnames? This could lead to option clashes if the package is used in projects that use other colour systems. On the other hand, it's so easy to use the pre-made colours, much less work than mixing or defining all the colours myself.
-> confirmed that this is a bad idea, see edit below
Do you think the commands need to be more flexible, i.e. optional arguments to adjust specific colours/shapes of the subcomponents? For example, I could introduce an optional argument to change the head colour independently from the body colour, but this might make using the package more complicated and, more importantly, make the code much harder to read.
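Regarding the first question (the svgnames option clash): a common pattern, sketched below, is for the package to load xcolor without options and let a document that wants the svgnames colours opt in from its own preamble:

```latex
% In the user's preamble, before the package that pulls in xcolor:
\PassOptionsToPackage{svgnames}{xcolor}
\usepackage{tikzducks}
```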
A cut down version of the package:
\documentclass{article}
\RequirePackage[svgnames]{xcolor}
\RequirePackage{tikz}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% combine ducks
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%1: skin colour
\newcommand{\duck}[1]{%
\colorlet{duck}{#1}
\colorlet{eye}{Cornsilk}
\colorlet{pupil}{black}
\colorlet{bill}{orange}
\duckbody{duck}
\duckhead{duck}
\duckbill{bill}
\duckeyes{pupil}{eye}
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% body parts
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%1: skin colour
\newcommand{\duckbody}[1]{%
\path[fill=#1] (0.5128,1.1446) .. controls (0.2669,1.1021) and (-0.1252,0.6574) .. (0.2894,0.2611) .. controls (0.7040,-0.1351) and (2.8627,0.1303) .. (1.8177,1.4188) .. controls (0.9375,0.9457) and (1.2396,1.3785) .. (0.5128,1.1446) -- cycle;
}
%1: skin colour
\newcommand{\duckhead}[1]{%
\path[fill=#1] (0.90,1.50) ellipse (0.50 and 0.625);
}
%1: bill colour
\newcommand{\duckbill}[1]{%
\path[fill=#1] (0.4056,1.4721) .. controls (0.6429,1.5298) and (0.5408,1.3034) .. (0.9095,1.37) .. controls (0.0825,0.85) and (0.2685,1.3690) .. (0.4058,1.4721) -- cycle;
}
%1: pupil colour
%2: eye colour
\newcommand{\duckeyes}[2]{%
% right eye
\path[fill=#2, rotate=-20] (0.23,1.7675) ellipse (0.0893 and 0.125);
\path[fill=#1, rotate=-20] (0.26,1.7575) ellipse (0.0357 and 0.0714);
% left eye
\path[fill=#2, rotate=-20] (-0.06,1.74) ellipse (0.0786 and 0.1143);
\path[fill=#1, rotate=-20] (-0.03,1.73) ellipse (0.0286 and 0.0643);
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Accessories
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%1: frame colour
\newcommand{\addglasses}[1]{
\path[draw=#1,line width=1] (0.93,1.62) -- (1.30,1.50);
\draw[line width=1,color=#1] (0.73,1.67) arc (65:92:0.20);
\path[draw=#1,line width=1,rotate=-20] (0.23,1.7675) circle (0.125);
\path[draw=#1,line width=1,rotate=-20] (-0.06,1.74) circle (0.1143);
}
\begin{document}
\begin{tikzpicture}
\duck{yellow}
\addglasses{brown}
\end{tikzpicture}
\end{document}
Full code is available from https://github.com/samcarter8/tikzducks
P.S. I know that \RequirePackage is not meant to be used in documents, but as this part is copied from the .sty file, I decided to leave it as it is.
EDIT
new version of the code
without the svgnames option of xcolor
replaced \newcommand by \newcommand*
hopefully added the necessary %, probably added way too many ...
\documentclass{article}
\RequirePackage{xcolor}
\RequirePackage{tikz}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% combine ducks
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%1: skin colour
\newcommand*{\duck}[1]{%
\colorlet{duck}{#1}%
\colorlet{eye}{white!85!yellow}%
\colorlet{pupil}{black}%
\colorlet{bill}{orange}%
\duckbody{duck}%
\duckhead{duck}%
\duckbill{bill}%
\duckeyes{pupil}{eye}%
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% body parts
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%1: skin colour
\newcommand*{\duckbody}[1]{%
\path[fill=#1] (0.5128,1.1446) .. controls (0.2669,1.1021) and (-0.1252,0.6574) .. (0.2894,0.2611) .. controls (0.7040,-0.1351) and (2.8627,0.1303) .. (1.8177,1.4188) .. controls (0.9375,0.9457) and (1.2396,1.3785) .. (0.5128,1.1446) -- cycle;%
}
%1: skin colour
\newcommand*{\duckhead}[1]{%
\path[fill=#1] (0.90,1.50) ellipse (0.50 and 0.625);%
}
%1: bill colour
\newcommand*{\duckbill}[1]{%
\path[fill=#1] (0.4056,1.4721) .. controls (0.6429,1.5298) and (0.5408,1.3034) .. (0.9095,1.37) .. controls (0.0825,0.85) and (0.2685,1.3690) .. (0.4058,1.4721) -- cycle;%
}
%1: pupil colour
%2: eye colour
\newcommand*{\duckeyes}[2]{%
% right eye
\path[fill=#2, rotate=-20] (0.23,1.7675) ellipse (0.0893 and 0.125);%
\path[fill=#1, rotate=-20] (0.26,1.7575) ellipse (0.0357 and 0.0714);%
% left eye
\path[fill=#2, rotate=-20] (-0.06,1.74) ellipse (0.0786 and 0.1143);%
\path[fill=#1, rotate=-20] (-0.03,1.73) ellipse (0.0286 and 0.0643);%
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Accessories
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%1: frame colour
\newcommand*{\addglasses}[1]{%
\path[draw=#1,line width=1] (0.93,1.62) -- (1.30,1.50);%
\draw[line width=1,color=#1] (0.73,1.67) arc (65:92:0.20);%
\path[draw=#1,line width=1,rotate=-20] (0.23,1.7675) circle (0.125);%
\path[draw=#1,line width=1,rotate=-20] (-0.06,1.74) circle (0.1143);%
}
\begin{document}
\begin{tikzpicture}
\duck{yellow}%
\addglasses{brown}%
\end{tikzpicture}
\end{document}
Answer: Not much to say about the last version; on the other hand, you could exploit PGF keys.
\documentclass{article}
\RequirePackage{xcolor}
\RequirePackage{tikz}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% combine ducks
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\makeatletter
%1: skin colour
\newcommand*{\duck}[1][]{\tikzset{/duck/.cd,#1}\duck@draw}
\tikzset{
/duck/.cd,
body/.code=\def\duck@body{#1},
head/.code=\def\duck@head{#1},
eye/.code=\def\duck@eye{#1},
pupil/.code=\def\duck@pupil{#1},
bill/.code=\def\duck@bill{#1},
glasses/.code=\duck@glassestrue\def\duck@glasses{#1},
% set defaults
body=yellow,
eye=white!85!yellow,
pupil=black,
bill=orange,
glasses/.default=black,
}
\newif\ifduck@glasses
\def\duck@draw{
% body
\path[fill=\duck@body]
(0.5128,1.1446) .. controls (0.2669,1.1021) and (-0.1252,0.6574) ..
(0.2894,0.2611) .. controls (0.7040,-0.1351) and (2.8627,0.1303) ..
(1.8177,1.4188) .. controls (0.9375,0.9457) and (1.2396,1.3785) ..
(0.5128,1.1446) -- cycle;
% head
\ifdefined\duck@head\else\let\duck@head=\duck@body\fi
\path[fill=\duck@head] (0.90,1.50) ellipse (0.50 and 0.625);
% bill
\path[fill=\duck@bill]
(0.4056,1.4721) .. controls (0.6429,1.5298) and (0.5408,1.3034) ..
(0.9095,1.37) .. controls (0.0825,0.85) and (0.2685,1.3690) ..
(0.4058,1.4721) -- cycle;
% right eye
\path[fill=\duck@eye, rotate=-20] (0.23,1.7675) ellipse (0.0893 and 0.125);
\path[fill=\duck@pupil, rotate=-20] (0.26,1.7575) ellipse (0.0357 and 0.0714);
% left eye
\path[fill=\duck@eye, rotate=-20] (-0.06,1.74) ellipse (0.0786 and 0.1143);
\path[fill=\duck@pupil, rotate=-20] (-0.03,1.73) ellipse (0.0286 and 0.0643);
% glasses
\ifduck@glasses
\path[draw=\duck@glasses,line width=1] (0.93,1.62) -- (1.30,1.50);
\draw[line width=1,color=\duck@glasses] (0.73,1.67) arc (65:92:0.20);
\path[draw=\duck@glasses,line width=1,rotate=-20] (0.23,1.7675) circle (0.125);
\path[draw=\duck@glasses,line width=1,rotate=-20] (-0.06,1.74) circle (0.1143);
\fi
}
\makeatother
\begin{document}
\begin{tikzpicture}
\duck
\end{tikzpicture}
\qquad
\begin{tikzpicture}
\duck[body=yellow,head=pink,glasses=brown]
\end{tikzpicture}
\qquad
\begin{tikzpicture}
\duck[body=red,glasses]
\end{tikzpicture}
\begin{tikzpicture}
\duck[glasses]
\end{tikzpicture}
\end{document} | {
"domain": "codereview.stackexchange",
"id": 26795,
"tags": "graphics, tex"
} |
publisher code in arduino is not the same as the one in ubuntu | Question:
Here are the links to the official tutorials:
publisher example code with rosserial_arduino
publisher example code in ubuntu
The two publisher examples are different from each other, and I am curious why.
Is the Arduino publisher using the C libraries of ROS instead of the C++ ones, in order to save storage on the Arduino board?
Originally posted by shawnysh on ROS Answers with karma: 339 on 2017-01-06
Post score: 0
Answer:
In short, the rosserial libraries are completely different from the roscpp libraries that run on Linux.
The API is optimized to save space and avoid copying messages, because the Arduino has so little memory. Even small messages can take upwards of 100 bytes, and on a processor such as the ATmega168 or 328 that only has 1k or 2k of memory, that is a significant percentage of the overall system memory.
The API is also optimized to avoid memory allocations (calls to malloc or new), because memory allocation on embedded platforms such as the Arduino is generally a bad idea (memory leaks are more likely to crash your program, overwrite your stack, or there simply isn't an allocator at all!)
rosserial_arduino is optimized to work over a serial cable, so the protocol is slightly different, and the API setup is designed around using a serial port instead of supplying a node name.
Originally posted by ahendrix with karma: 47576 on 2017-01-07
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by shawnysh on 2017-01-08:
I'm sorry for replying late, very detailed answer! Thanks a lot.
I have heard from someone who claimed that the rosserial API has not been completely documented; is that right? Where can I find it?
Comment by shawnysh on 2017-01-08:
and the API setup is designed around using a serial port instead of supplying a node name.
Did you mean that the rqt_graph for the node in rosserial_arduino is shown as /serial_node?
"domain": "robotics.stackexchange",
"id": 26660,
"tags": "ros, arduino, rosserial, publisher"
} |
Why do plastic sheets contract when heated? | Question: Why do thin plastic sheets contract when heated, contrary to the behavior of most other materials?
What is going on at the molecular level?
Answer: When plastic sheets are produced, they are rapidly cooled to keep the polymer chains oriented in a way that makes the sheets nice and flat. This is a relatively high-strain orientation since it is associated with the energy level of the molecules at the casting temperature.
Once the plastic is heated above its glass transition temperature, the polymer chains are no longer locked in that high strain orientation. They relax to a low energy orientation- curled and bending in a way that shrinks the bulk material.
As for the precise mechanism, I'm not exactly sure. I would guess that the shrunken conformation is entropically favorable because there are more arbitrary bends. This would decrease the Gibbs free energy, making it a more stable shape.
Alternatively, it could be that hydrogen bonding between chain elements makes the folded shape more enthalpically favorable. | {
"domain": "chemistry.stackexchange",
"id": 12966,
"tags": "polymers, materials, intermolecular-forces, plastics"
} |
Particle class for physics simulation | Question: This is a very lengthy class called Particle. It relies on two other headers: one is a simple struct containing an x and a y (Vector2), and the other retrieves a material's properties (materialProperties.h). The class is nowhere near done, and I am struggling to get my collision detection function to work. This means that the bounciness property currently has no real purpose, and odd things will happen in void checkParticleMovement(std::vector< std::vector<Particle> >& particleArray); for example, when there are two particles, one behind the other, moving at the same speed, the previous particle won't move on the grid.
The code has very infrequent comments, so lots of it may not make sense. It also uses a 2D vector grid, because it is much faster than a normal vector. If the code appears to be unclear without the other files, then I will put it on GitHub or something (but it uses SFML for drawing).
//Standard C++:
#include <iostream>
#include <string>
#include <vector>
#include <math.h>
//My Headers:
#include "vector2.h"
#include "materialDatabase.h"
class Particle
{
private:
//Coords:
vector2 coords;
//Velocities:
vector2 velocity;
//Material:
std::string material;
//Expanding:
bool fillToolExpands = false;
//Mass:
double mass = 0;
//Bounciness:
double bounciness = 0.5;
public:
//All values:
void setAllValues(vector2, vector2, std::string, double, double);
void setEmpty();
//Copy Particle:
void copyParticle(Particle&);
//Coords:
void setCoords(vector2);
vector2 getCoords();
vector2 getPreciseCoords();
//Velocities:
void giveVelocity(vector2);
void setVelocity(vector2);
vector2 getVelocity();
//Material:
void setMaterial(std::string);
void setMaterialProperties(std::string);
std::string getMaterial();
//Expanding:
void setFillParticle(bool);
bool isFillParticle();
//Mass:
void setMass(double);
double getMass();
//Bounciness:
void setBounciness(double);
double getBounciness();
//Gravitational Velocity:
void calculateGravitationalVelocity(Particle&);
//Update:
void update();
};
//Set values:
void Particle::setAllValues(vector2 startCoords, vector2 startVelocity, std::string startMaterial, double startMass, double startBounciness)
{
coords = startCoords;
velocity = startVelocity;
material = startMaterial;
mass = startMass;
bounciness = startBounciness;
}
void Particle::setEmpty()
{
coords = vector2(floor(coords.x), floor(coords.y));
velocity = vector2(0, 0);
material = "empty";
mass = 0;
bounciness = 0;
}
//Copy Particle:
void Particle::copyParticle(Particle& particleToCopyTo)
{
particleToCopyTo.setAllValues(coords, velocity, material, mass, bounciness);
}
//Coords:
void Particle::setCoords(vector2 newCoordinates)
{
coords = newCoordinates;
}
vector2 Particle::getCoords()
{
vector2 flooredCoords(floor(coords.x), floor(coords.y));
return flooredCoords;
}
vector2 Particle::getPreciseCoords()
{
return coords;
}
//Velocities:
void Particle::giveVelocity(vector2 addedVelocity)
{
velocity.x = velocity.x + addedVelocity.x;
velocity.y = velocity.y + addedVelocity.y;
}
void Particle::setVelocity(vector2 newVelocity)
{
velocity = newVelocity;
}
vector2 Particle::getVelocity()
{
return velocity;
}
//Material:
void Particle::setMaterial(std::string newMaterial)
{
material = newMaterial;
}
void Particle::setMaterialProperties(std::string newMaterial)
{
material = newMaterial;
mass = getMaterialMass(newMaterial);
bounciness = getMaterialBounciness(newMaterial);
}
std::string Particle::getMaterial()
{
return material;
}
//Expanding:
void Particle::setFillParticle(bool isFill)
{
fillToolExpands = isFill;
}
bool Particle::isFillParticle()
{
return fillToolExpands;
}
//Mass:
void Particle::setMass(double newMass)
{
mass = newMass;
}
double Particle::getMass()
{
return mass;
}
//Bounciness:
void Particle::setBounciness(double newBounciness)
{
bounciness = newBounciness;
}
double Particle::getBounciness()
{
return bounciness;
}
//Gravitational Velocity:
void Particle::calculateGravitationalVelocity(Particle& distantParticle)
{
//Physics constants:
const double G = 0.00000000006673; //Gravitational Constant (or Big G)
//Get coords of particle:
vector2 coords1 = coords;
//Get coords of particle with gravity:
vector2 coords2 = distantParticle.getCoords();
//Get the difference vector:
vector2 rV(coords2.x - coords1.x, coords2.y - coords1.y);
//Distances:
double r = pow(rV.x, 2) + pow(rV.y, 2);
double r2 = sqrt(r);
if (r != 0)
{
//Normalize the difference vector
vector2 u(rV.x / r, rV.y / r);
//Acceleration of gravity
double a = G * distantParticle.getMass() / r2;
//Set the velocity:
velocity.x = velocity.x + (a * u.x / 1000);
velocity.y = velocity.y + (a * u.y / 1000);
}
}
//Update:
void Particle::update()
{
coords.x = coords.x + velocity.x;
coords.y = coords.y + velocity.y;
}
//Miscellaneous Functions:
void checkParticleMovement(std::vector< std::vector<Particle> >& particleArray)
{
int vectorWidth = particleArray[0].size();
int vectorHeight = particleArray.size();
std::vector< std::vector<bool> > updated(vectorHeight, std::vector<bool> (vectorWidth, 0));
//Make incrementer:
int incrementX = 0;
int incrementY = 0;
while (incrementY != vectorHeight)
{
//Check if it needs to be moved:
if ((particleArray[incrementY][incrementX].getMaterial() != "empty") && (updated[incrementY][incrementX] == false))
{
int coordX = particleArray[incrementY][incrementX].getCoords().x;
int coordY = particleArray[incrementY][incrementX].getCoords().y;
//Moving a particle in the grid:
if ((coordX != incrementX) || (coordY != incrementY))
{
if (particleArray[coordY][coordX].getMaterial() == "empty")
{
//Copy Particle:
particleArray[incrementY][incrementX].copyParticle(particleArray[coordY][coordX]);
//particleArray[coordY][coordX].setCoords(vector2(coordX, coordY));
//Delete previous particle:
particleArray[incrementY][incrementX].setEmpty();
particleArray[incrementY][incrementX].setCoords(vector2(incrementX, incrementY));
}
}
//Make sure the particle can't be updated multiple times:
updated[coordY][coordX] = true;
}
++incrementX;
if (incrementX == vectorWidth)
{
incrementX = 0;
++incrementY;
}
}
}
//Collision Detection:
void handleCollisionDetection(std::vector< std::vector<Particle> >& particleArray)
{
int vectorWidth = particleArray[0].size();
int vectorHeight = particleArray.size();
double highestVelocity = 0;
std::vector< std::vector<vector2> > velocities(vectorHeight, std::vector<vector2>(vectorWidth));
std::vector< std::vector<vector2> > coords(vectorHeight, std::vector<vector2>(vectorWidth));
std::vector< std::vector<std::string> > materials(vectorHeight, std::vector<std::string>(vectorWidth));
//FIND THE HIGHEST VELOCITY (TO DIVIDE WITH):
int incrementX = 0;
int incrementY = 0;
while (incrementY != vectorHeight)
{
velocities[incrementY][incrementX] = particleArray[incrementY][incrementX].getVelocity();
if (velocities[incrementY][incrementX].x > highestVelocity) {highestVelocity = ceil(velocities[incrementY][incrementX].x);}
if (velocities[incrementY][incrementX].y > highestVelocity) {highestVelocity = ceil(velocities[incrementY][incrementX].y);}
coords[incrementY][incrementX] = particleArray[incrementY][incrementX].getPreciseCoords();
materials[incrementY][incrementX] = particleArray[incrementY][incrementX].getMaterial();
++incrementX;
if (incrementX == vectorWidth)
{
incrementX = 0;
++incrementY;
}
}
//Remove minus number
highestVelocity = fabs(highestVelocity);
incrementX = 0;
incrementY = 0;
while (incrementY != vectorHeight)
{
if (materials[incrementY][incrementX] != "empty")
{
vector2 dividedVelocityStart = velocities[incrementY][incrementX];
if (velocities[incrementY][incrementX].x != 0)
{
dividedVelocityStart.x = dividedVelocityStart.x / highestVelocity;
if (std::isnan(dividedVelocityStart.x) == true) {dividedVelocityStart.x = 0;}
}
else {dividedVelocityStart.x = 0;}
if (velocities[incrementY][incrementX].y != 0)
{
dividedVelocityStart.y = dividedVelocityStart.y / highestVelocity;
if (std::isnan(dividedVelocityStart.y) == true) {dividedVelocityStart.y = 0;}
}
else {dividedVelocityStart.y = 0;}
vector2 dividedVelocityIncrement = dividedVelocityStart;
while (dividedVelocityIncrement <= velocities[incrementY][incrementX])
{
int incrementXLowLimit = incrementX - (highestVelocity * 2);
if (incrementXLowLimit < 0) {incrementXLowLimit = 0;}
int incrementXHighLimit = incrementX + (highestVelocity * 2);
if (incrementXHighLimit >= vectorWidth) {incrementXHighLimit = vectorWidth;}
int incrementX2 = incrementXLowLimit;
int incrementY2 = incrementY - (highestVelocity * 2);
if (incrementY2 < 0) {incrementY2 = 0;}
int incrementYHighLimit = incrementY + (highestVelocity * 2);
if (incrementYHighLimit >= vectorHeight) {incrementYHighLimit = vectorHeight;}
while (incrementY2 != incrementYHighLimit)
{
if ((materials[incrementY2][incrementX2] != "empty") && (incrementX != incrementX2) && (incrementY != incrementY2))
{
vector2 dividedVelocityStart2 = velocities[incrementY2][incrementX2];
vector2 dividedVelocityIncrement2 = dividedVelocityStart2;
dividedVelocityIncrement2.x = dividedVelocityIncrement2.x / highestVelocity;
if (std::isnan(dividedVelocityIncrement2.x) == true) {dividedVelocityIncrement2.x = 0;}
dividedVelocityIncrement2.y = dividedVelocityIncrement2.y / highestVelocity;
if (std::isnan(dividedVelocityIncrement2.y) == true) {dividedVelocityIncrement2.y = 0;}
while (dividedVelocityIncrement2 <= velocities[incrementY2][incrementX2])
{
if ((floor(coords[incrementY][incrementX].x + dividedVelocityIncrement.x) ==
floor(coords[incrementY2][incrementX2].x + dividedVelocityIncrement2.x))
&& (floor(coords[incrementY][incrementX].y + dividedVelocityIncrement.y) ==
floor(coords[incrementY2][incrementX2].y + dividedVelocityIncrement2.y)))
{
std::cout << "COLLISION!" << std::endl;
}
if (dividedVelocityIncrement2.x >= 0) {dividedVelocityIncrement2.x = dividedVelocityIncrement2.x + dividedVelocityStart2.x;}
else {dividedVelocityIncrement2.x = dividedVelocityIncrement2.x - dividedVelocityStart2.x;}
if (dividedVelocityIncrement2.y >= 0) {dividedVelocityIncrement2.y = dividedVelocityIncrement2.y + dividedVelocityStart2.y;}
else {dividedVelocityIncrement2.y = dividedVelocityIncrement2.y - dividedVelocityStart2.y;}
//For minus values:
if (dividedVelocityIncrement2 <= 0)
{
if (dividedVelocityIncrement2 >= velocities[incrementY2][incrementX2]) {break;}
}
}
}
++incrementX2;
if (incrementX2 == incrementXHighLimit)
{
incrementX2 = incrementXLowLimit;
++incrementY2;
}
}
if (dividedVelocityIncrement.x >= 0) {dividedVelocityIncrement.x = dividedVelocityIncrement.x + dividedVelocityStart.x;}
else {dividedVelocityIncrement.x = dividedVelocityIncrement.x - dividedVelocityStart.x;}
if (dividedVelocityIncrement.y >= 0) {dividedVelocityIncrement.y = dividedVelocityIncrement.y + dividedVelocityStart.y;}
else {dividedVelocityIncrement.y = dividedVelocityIncrement.y - dividedVelocityStart.y;}
//For minus values:
if (dividedVelocityIncrement <= 0)
{
if (dividedVelocityIncrement >= velocities[incrementY][incrementX]) {break;}
}
}
}
++incrementX;
if (incrementX == vectorWidth)
{
incrementX = 0;
++incrementY;
}
}
}
Answer: There is quite a bit of code here, so this is by no means a complete review. A few main points that caught my attention:
Your class could use some parameterized constructors. That would reduce the need for all the set* methods. The work done by setAllValues(), for instance, should clearly be done by a constructor.
copyParticle() is also redundant with operator =, but some might prefer the more C-ish syntax of copying via a function.
I would advise keeping function parameter names in the function/method prototypes, as this adds to the documentation of the code.
You did not follow Const Correctness for the get*() methods. They are not altering any member state, so should be marked as const.
Use <cmath> for C++; <math.h> is actually the C-language header file.
fillToolExpands is a strange name. The methods that manipulate it are setFillParticle() and isFillParticle(), so shouldn't it be called something like fillParticle or fillEnabled?
Another issue with fillToolExpands: Avoid declaring booleans in the middle of classes. A bool is usually 1 byte in size, so they will break data alignment and force the compiler to pad the bool to the size of a word, making your class larger in terms of memory. Placing bools always at the end of classes/structs will make the need for compiler-generated padding less frequent and won't require padding between fields.
Column alignment of similar lines is something that helps me digest code. This is certainly arguable, but I would change a block like this:
coords = startCoords;
velocity = startVelocity;
material = startMaterial;
mass = startMass;
bounciness = startBounciness;
To this:
coords     = startCoords;
velocity   = startVelocity;
material   = startMaterial;
mass       = startMass;
bounciness = startBounciness;
Don't use pow() to calculate the square of a number. That will call a function which can be an expensive one. Instead, just multiply the number by itself.
double r = pow(rV.x, 2) + pow(rV.y, 2);
double r2 = sqrt(r);
Simpler and faster:
double length = sqrt((rV.x * rV.x) + (rV.y * rV.y));
In the mathematical update of the particles, done by calculateGravitationalVelocity(), you use a few single letter variable names. Try to provide better and more descriptive names instead. E.g. r is actually the squared length of the vector and r2 its length; a is the acceleration of gravity. Use descriptive names and the comments can even be removed.
floor(), ceil(), fabs() and all functions declared by <cmath> are all members of namespace std. Technically, compilers are not required to expose such functions in the global namespace, so for good portability, make sure to always prefix them with the std:: namespace resolution. | {
"domain": "codereview.stackexchange",
"id": 12341,
"tags": "c++, simulation, coordinate-system, physics"
} |
Magnetic field due to a single moving charge | Question: The Biot-Savart law can only be used in the case of magnetostatics (constant current), so how do we calculate the magnetic field of a single charge moving at constant velocity at a distance r? I tried by calculating the displacement current,
but I was not sure whether the Biot-Savart law can be applied to displacement currents.
Please don't use relativity if possible because I have no experience with relativity yet.
Answer:
A point charge $\:q\:$ is moving uniformly on a straight line with velocity $\:\boldsymbol{\upsilon}\:$ as is the Figure. The electromagnetic field at a point $\:\mathrm{P}\:$ with position vector $\:\mathbf{x}\:$ at time $\:t\:$ is
\begin{align}
\mathbf{E}_{_{\mathbf{LW}}}\left(\mathbf{x},t\right) & \boldsymbol{=}\dfrac{q}{4\pi \epsilon_{\bf 0}}\dfrac{\left(1\!\boldsymbol{-}\!\beta^{\bf 2}\right)}{\left(1\!\boldsymbol{-}\!\beta^{\bf 2}\sin^{\bf 2}\!\phi\right)^{\boldsymbol{3/2}}}\dfrac{\mathbf{{r}}}{\:\:\Vert\mathbf{r}\Vert^{\bf 3}},\quad \beta\boldsymbol{=}\dfrac{\upsilon}{c}
\tag{01a}\\
\mathbf{B}_{_{\mathbf{LW}}}\left(\mathbf{x},t\right) & \boldsymbol{=}\dfrac{1}{c^{ \bf 2}}\left(\boldsymbol{\upsilon}\boldsymbol{\times}\mathbf{E}\right)\vphantom{\dfrac{a}{\dfrac{}{}b}}\boldsymbol{=}\dfrac{\mu_{0}q}{4\pi }\dfrac{\left(1\!\boldsymbol{-}\!\beta^{\bf 2}\right)}{\left(1\!\boldsymbol{-}\!\beta^{\bf 2}\sin^{\bf 2}\!\phi\right)^{\boldsymbol{3/2}}}\dfrac{\boldsymbol{\upsilon}\boldsymbol{\times}\mathbf{{r}}}{\:\:\Vert\mathbf{r}\Vert^{\bf 3}}
\tag{01b}
\end{align}
Equations (01) are relativistic. They come from the Lienard-Wiechert potentials.
Biot-Savart Law
After a quick calculation with Biot-Savart Law (using the Dirac $\:\delta\:$ function) I found the solution
\begin{equation}
\mathbf{B}_{_{\mathbf{BS}}}\left(\mathbf{x},t\right) \boldsymbol{=}\dfrac{\mu_{0}q}{4\pi }\dfrac{\boldsymbol{\upsilon}\boldsymbol{\times}\mathbf{{r}}}{\:\:\Vert\mathbf{r}\Vert^{\bf 3}}
\tag{02}
\end{equation}
which compared with that from the Lienard-Wiechert potentials, see above equation (01b)
\begin{equation}
\mathbf{B}_{_{\mathbf{LW}}}\left(\mathbf{x},t\right)\boldsymbol{=}\dfrac{\mu_{0}q}{4\pi }\dfrac{\left(1\!\boldsymbol{-}\!\beta^{\bf 2}\right)}{\left(1\!\boldsymbol{-}\!\beta^{\bf 2}\sin^{\bf 2}\!\phi\right)^{\boldsymbol{3/2}}}\dfrac{\boldsymbol{\upsilon}\boldsymbol{\times}\mathbf{{r}}}{\:\:\Vert\mathbf{r}\Vert^{\bf 3}}
\tag{03}
\end{equation}
it looks like an approximation for charges whose velocities are small compared to that of light $\:c$
\begin{equation}
\mathbf{B}_{_{\mathbf{BS}}}\left(\mathbf{x},t\right)\boldsymbol{=}
\lim_{\beta \boldsymbol{\rightarrow} 0}\mathbf{B}_{_{\mathbf{LW}}}\left(\mathbf{x},t\right)\boldsymbol{=}
\lim_{\beta\boldsymbol{\rightarrow} 0}\left[\dfrac{\mu_{0}q}{4\pi }\dfrac{\left(1\!\boldsymbol{-}\!\beta^{\bf 2}\right)}{\left(1\!\boldsymbol{-}\!\beta^{\bf 2}\sin^{\bf 2}\!\phi\right)^{\boldsymbol{3/2}}}\dfrac{\boldsymbol{\upsilon}\boldsymbol{\times}\mathbf{{r}}}{\:\:\Vert\mathbf{r}\Vert^{\bf 3}}\right]\boldsymbol{=}\dfrac{\mu_{0}q}{4\pi}\dfrac{\boldsymbol{\upsilon}\boldsymbol{\times}\mathbf{{r}}}{\:\:\Vert\mathbf{r}\Vert^{3}}
\tag{04}
\end{equation}
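This low-velocity limit is easy to check numerically. Below is a small Python sketch (my addition, not part of the derivation) evaluating the angular prefactor of (01b), i.e. the ratio $\mathbf{B}_{_{\mathbf{LW}}}/\mathbf{B}_{_{\mathbf{BS}}}$:

```python
import math

def lw_over_bs(beta, phi):
    """Ratio B_LW / B_BS = (1 - beta^2) / (1 - beta^2 sin^2(phi))^(3/2)."""
    return (1.0 - beta**2) / (1.0 - beta**2 * math.sin(phi)**2) ** 1.5

# As beta -> 0 the ratio tends to 1 at every angle, so Biot-Savart is
# recovered; for relativistic speeds the field is enhanced sideways
# (phi = pi/2) and suppressed along the direction of motion (phi = 0).
for beta in (0.5, 0.1, 0.01):
    print(beta, lw_over_bs(beta, math.pi / 2), lw_over_bs(beta, 0.0))
```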
EDIT
Answer to OP's comment :
how did you get equation 02 when v << c. – DHYEY Jun 29 '18 at 11:49
From Jackson's : Biot and Savart Law
\begin{equation}
\mathrm d\mathbf{B}=\dfrac{\mu_{0}}{4\pi}I\dfrac{\left(\mathrm d\boldsymbol{\ell}\boldsymbol{\times}\mathbf{r'}\right)}{\:\:\Vert\mathbf{r'}\Vert^{3}}
\tag{BS-01}
\end{equation}
\begin{equation}
I=q\upsilon\delta\left(x'-r\cos\phi\right), \qquad \mathrm d\boldsymbol{\ell}=\mathbf{i}\mathrm dx', \qquad \mathbf{r'}=x'\mathbf{i}\boldsymbol{+}\alpha\mathbf{j}\boldsymbol{+}0\mathbf{k}
\tag{BS-02}
\end{equation}
\begin{equation}
\mathrm d\mathbf{B}=\dfrac{\mu_{0}q}{4\pi}q\upsilon\delta\left(x'\!\boldsymbol{-}\!r\cos\phi\right)\dfrac{\left(\mathbf{i}\boldsymbol{\times}\mathbf{r'}\right)}{\:\:\Vert\mathbf{r'}\Vert^{3}}\mathrm dx'=\dfrac{\mu_{0}q}{4\pi}q\upsilon\delta\left(x'\!\boldsymbol{-}\!r\cos\phi\right)\dfrac{\left(\alpha\mathbf{k}\right)}{\:\:\left(x'^2\!\boldsymbol{+}\!\alpha^2 \right)^{3/2}}\mathrm dx'
\tag{BS-03}
\end{equation}
\begin{equation}
\mathbf{B}=\dfrac{\mu_{0}}{4\pi}q\upsilon\alpha\mathbf{k}\int\limits_{\boldsymbol{-}\boldsymbol{\infty}}^{\boldsymbol{+}\boldsymbol{\infty}}\dfrac{\delta\left(x'\!\boldsymbol{-}\!r\cos\phi\right)}{\:\:\left(x'^2\!\boldsymbol{+}\!\alpha^2 \right)^{3/2}}\mathrm dx'=\dfrac{\mu_{0}q}{4\pi}\dfrac{\upsilon\alpha\mathbf{k}}{\:\:\left(r^2\cos^2\phi\!\boldsymbol{+}\!\alpha^2 \right)^{3/2}}= \dfrac{\mu_{0}q}{4\pi}\dfrac{\left(\upsilon\mathbf{i}\right)\boldsymbol{\times}\left(\alpha\mathbf{j}\right)}{\:\:\left(r^2\cos^2\phi\!\boldsymbol{+}\!\alpha^2 \right)^{3/2}}
\tag{BS-04}
\end{equation}
\begin{equation}
\mathbf{B} =\dfrac{\mu_{0}q}{4\pi }\dfrac{\boldsymbol{\upsilon}\boldsymbol{\times}\mathbf{{r}}}{\:\:\Vert\mathbf{r}\Vert^{3}}
\tag{BS-05}
\end{equation} | {
"domain": "physics.stackexchange",
"id": 81899,
"tags": "electromagnetism, magnetic-fields"
} |
Photon encoding and computational limit of the universe | Question: The amount of information or entropy that can be contained in a region of space is given by the Bekenstein Bound:
$I \leq \frac{2\pi RE}{\hbar c \ln 2}$
However, recent publications have shown information encoding in multiple dimensions, up to 10 dimensions on a single photon (Kues et al.)
What is the relationship between photon encoding in multiple dimensions and the Bekenstein Bound? And more broadly, the computational limitations of our universe?
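For concreteness, the right-hand side of the bound is easy to evaluate. A sketch (using $E = mc^2$ and the reduced Planck constant; the 1 kg / 1 m values are just for scale, not tied to any real system):

```python
import math

C = 299_792_458.0          # speed of light, m/s
HBAR = 1.054_571_817e-34   # reduced Planck constant, J*s

def bekenstein_bits(mass_kg, radius_m):
    """Upper bound on bits in a sphere of radius R enclosing energy E = m*c^2."""
    energy = mass_kg * C**2
    return 2 * math.pi * radius_m * energy / (HBAR * C * math.log(2))

# A 1 kg, 1 m system caps out at roughly 2.6e43 bits; no encoding scheme,
# however many dimensions per photon, can exceed this.
print(bekenstein_bits(1.0, 1.0))
```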
Answer: I think this link will be useful, particularly when he begins discussing the 5th argument. The answer as to why quantum computing doesn't violate the Bekenstein Bound is that if we have $n$ two dimensional entangled quantum systems, qubits, (the argument works similarly with 10 dimensional systems, except we work with a base of 10 instead of 2) there is a theorem, the Holevo Theorem which gives the amount of reliable classical bits that can be recovered from the system, namely $n$ classical bits. The system requires $2^n$ complex variables to describe, so given an $n$ sufficiently large, we could certainly make it look like we are violating the Bekenstein Bound, e.g. by having a system which requires $2^{10000}$ continuous complex variables to completely specify, but we couldn't "cash the system out". That is we couldn't recover even a minuscule fraction of that information from the system. Thus when we consider the number of classical bits stored in a quantum system, it still only scales linearly with the number of quantum bits, and thus we can't violate the Bekenstein Bound. | {
"domain": "physics.stackexchange",
"id": 55854,
"tags": "quantum-information, entropy"
} |
How is the state preparation Unitary in initialize selected? | Question:
Normally, in order to prepare the Bell state $\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle)$, we can simply make a circuit with a Hadamard gate on $|0\rangle$ followed by a CNOT gate on $|1\rangle$. However, initializing the Bell state and then using Decompose() will give a different circuit. We can also check the unitaries of the 2 different scenarios and realize that they are different. Thus, how does Qiskit select the state preparation unitary when the initialize function is used?
Answer: As stated in this tutorial, qiskit relies on a method from this paper for implementing the initialize function.
Note that this algorithm is generic: it does not assume anything on the state one wants to prepare. However, it is not known how good this algorithm is in terms of CNOT gates for the initializing circuit. The example you took actually proves that the algorithm may yield sub-optimal circuits according to this metric.
The algorithm works as follows: first, we create a circuit that yields the state $|0\rangle^{\otimes n}$ from the $n$-qubit state we want, and then we reverse this circuit. In order to do the first step, the idea is the following:
Untangle the last qubit from the rest of the state
Apply the desired rotations to put it in a $|0\rangle$ state
Repeat on the remaining $(n-1)$-qubit state
In fact, looking at the article, you can see that the operation used for such a task is a controlled $R_Y$ and a controlled $R_Z$:
On this figure, the last wire represents the qubit to be untangled (this is reversed when compared to qiskit's little-endian convention). They then proceed to upper-bound the number of CNOT gates required to implement these controlled gates to prove that their algorithm yields near-optimal circuits in terms of CNOT gates for initializing arbitrary states.
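The "rotate it to $|0\rangle$" step can be sketched for one qubit without Qiskit. For amplitudes $(a, b)$ with real, nonnegative entries (an assumption to keep this short; the general case also needs an $R_z$ to cancel relative phases), applying $R_y(-\theta)$ with $\theta = 2\,\mathrm{atan2}(b, a)$ sends the state to $|0\rangle$:

```python
import math

def ry(theta):
    """Matrix of the single-qubit rotation R_y(theta)."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c, -s], [s, c]]

def apply(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

def rotate_to_zero(a, b):
    """Return the undoing angle and the rotated state (numerically ~ |0>)."""
    theta = 2 * math.atan2(b, a)
    return theta, apply(ry(-theta), [a, b])
```

For $|+\rangle = (1/\sqrt2, 1/\sqrt2)$ this picks $\theta = \pi/2$; reversing all such rotations at the end is exactly the "reverse this circuit" step described above.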
This explains the difference between both implementations: in the first one, you know using previous knowledge that applying an Hadamard gate on the first qubit and then a CNOT gate on the second one yields the desired state. The initialize function does not have this previous knowledge and entirely relies on the aforementioned algorithm, which yields the rightmost circuit.
Note that the fact that the associated unitaries are not the same doesn't matter, since we assume the input state to be $|00\rangle$. Thus, the only thing that matters in this case is that their first columns match, which is the case. | {
"domain": "quantumcomputing.stackexchange",
"id": 2944,
"tags": "qiskit, initialization"
} |
What is the average turnaround time? | Question: For the following jobs:
Using a FCFS algorithm, the average wait time would be:
(6-6)+(7-2)+(11-5)+(17-5)+(14-1) -> 0+5+6+10+13 -> 34/5 = 7 (6.8)
What would the average turnaround time be?
Answer: You need to determine at what time each job is completed. With a first-come-first-served scheduler, this is simple to calculate: each job starts as soon as the processor becomes free, and takes exactly its burst time to complete. You've already calculated the start and end times to calculate the wait times, so use that to obtain the turnaround time.
For example, A arrives at time 0. The processor is free, so it starts at time 0 and ends at time 6. Then the processor runs B, which had to wait for 5 units, and finishes at time 8, for a turnaround time of 7.
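The original jobs table is an image and isn't reproduced here, but the bookkeeping is easy to script. A sketch with made-up arrival/burst values (chosen so that job B reproduces the example above: it waits 5 and has turnaround 7):

```python
def fcfs(jobs):
    """jobs: (name, arrival, burst) tuples, sorted by arrival time.

    Returns (name, wait, turnaround) per job, where
    wait = start - arrival and turnaround = completion - arrival.
    """
    clock, rows = 0, []
    for name, arrival, burst in jobs:
        start = max(clock, arrival)   # CPU may sit idle until the job arrives
        clock = start + burst         # completion time
        rows.append((name, start - arrival, clock - arrival))
    return rows

jobs = [("A", 0, 6), ("B", 1, 2), ("C", 3, 4)]   # hypothetical values
rows = fcfs(jobs)
avg_turnaround = sum(t for _, _, t in rows) / len(rows)
```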
The answer from the book seems to be totaling the completion times, without regard for the arrival time. This is not something I recognize as “turnaround time”. | {
"domain": "cs.stackexchange",
"id": 17703,
"tags": "algorithms, operating-systems, process-scheduling, scheduling"
} |
SOLIDWORKS - 3D Sketch of Trombone | Question: My task is to create a 3D Sketch of a trombone like the one below
My professor recommended defining the points 1-4 and putting a spline through them. I did that, but my resulting design looks weird. It curves inward between points 1 and 2 before curving out towards the rest of the points. I'm not sure what I'm doing wrong, and I don't know if I have enough given constraints to curve it outward manually myself by editing the spline
Answer: See the .gif below!
EDIT: You may need to enable visibility of these handles if they're not on by default - it depends on your default profile/settings. | {
"domain": "engineering.stackexchange",
"id": 3050,
"tags": "mechanical-engineering, design, cad, solidworks"
} |
Are we technically able to make, in hardware, arbitrarily large neural networks with current technology? | Question: If neurons and synapses can be implemented using transistors, what prevents us from creating arbitrarily large neural networks using the same methods with which GPUs are made?
In essence, we have seen how extraordinarily well virtual neural networks implemented on sequential processors work (even GPUs are sequential machines, but with huge numbers of cores).
One can imagine that using GPU design principles - which is basically to have thousands of programmable processing units that work in parallel - we could make much simpler "neuron processing units" and put millions or billions of those NPUs in a single big chip. They would have their own memory (for storing weights) and be connected to a few hundred other neurons by sharing a bus. They could have a frequency of for example 20 Hz, which would allow them to share a data bus with many other neurons.
Obviously, there are some electrical engineering challenges here, but it seems to me that all big tech companies should be exploring this route by now.
Many AI researchers say that superintelligence is coming around the year 2045. I believe that their reasoning is based on Moore's law and the number of neurons we are able to implement in software running on the fastest computers we have.
But the fact is, we today are making silicon chips with billions of transistors on them. SPARC M7 has 10 billion transistors.
If implementing a (non-programmable) neuron and a few hundred synapses for it requires for example 100 000 transistors, then we can make a neural network in hardware that emulates 100 000 neurons.
If we design such a chip so that we can simply make it physically bigger if we want more neurons, then it seems to me that arbitrarily large neural networks are simply a budget question.
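In code, the budget arithmetic above looks like this (the 100 000-transistors-per-neuron figure is my guess from the previous paragraph, not a measured cost):

```python
def neurons_on_chip(transistors, transistors_per_neuron=100_000):
    """Fixed-function neurons that fit on a die, by raw transistor count."""
    return transistors // transistors_per_neuron

# A SPARC M7-class die with ~10 billion transistors:
print(neurons_on_chip(10_000_000_000))
```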
Are we technically able to make, in hardware, arbitrarily large neural networks with current technology?
Remember: I am NOT asking if such a network will in fact be very intelligent. I am merely asking if we can factually make arbitrarily large, highly interconnected neural networks, if we decide to pay Intel to do this?
The implication is that on the day some scientist is able to create general intelligence in software, we can use our hardware capabilities to grow this general intelligence to human levels and beyond.
Answer: The approach you describe is called neuromorphic computing and it's quite a busy field.
IBM's TrueNorth even has spiking neurons.
The main problem with these projects is that nobody quite knows what to do with them yet.
These projects don't try to create chips that are optimised to run a neural network. That would certainly be possible, but the expensive part is the training not the running of neural networks. And for the training you need huge matrix multiplications, something GPUs are very good at already. (Google's TPU would be a chip optimised to run NNs.)
To do research on algorithms that might be implemented in the brain (we hardly know anything about that) you need flexibility, something these chips don't have. Also, the engineering challenge likely lies in providing a lot of synapses, just compare the average number of synapses per neuron of TrueNorth, 256, and the brain, 10,000.
So, you could create a chip designed after some neural architecture and it would be faster, more efficient, etc …, but to do that you'll need to know which architecture works first. We know that deep learning works, so google uses custom made hardware to run their applications and I could certainly imagine custom made deep learning hardware coming to a smartphone near you in the future. To create a neuromorphic chip for strong AI you'd need to develop strong AI first. | {
"domain": "ai.stackexchange",
"id": 150,
"tags": "neural-networks, recurrent-neural-networks, hardware, implementation"
} |
How does taking the modulus of the P-function of an optical state affect the trace norm? | Question: I have two density matrices in the coherent-state basis:
$$\rho_i = \int_{\mathbb{C}}P_i(\alpha) \left|\alpha\right\rangle\!\left\langle\alpha\right| \textrm{d}^2\alpha$$
where $i=1,2$. I want to find the trace distance between these, so I look at the operator:
$$\sigma \equiv \rho_1-\rho_2 = \int_{\mathbb{C}}f(\alpha) \left|\alpha\right\rangle\!\left\langle\alpha\right| \textrm{d}^2\alpha$$
where $\!\!\quad f(\alpha) \equiv P_1(\alpha)-P_2(\alpha)$, and I want to find
$$\lVert \sigma \rVert_1 = \textrm{Tr}\left(\left|\sigma\right|\right)$$
where $\left|\sigma\right| = \sqrt{\sigma^\dagger\sigma}$.
Since this is difficult to directly calculate, I hypothesise that $\left|\sigma\right|$ may be equal to $\bar{\sigma}$ (or at least less than it, in the sense of the trace norm), where I have defined
$$\bar{\sigma} = \int_{\mathbb{C}}\left|f(\alpha)\right| \left|\alpha\right\rangle\!\left\langle\alpha\right| \textrm{d}^2\alpha$$
although I don't know how to go about showing this (or finding a counter example if it is not the case).
Answer: You can instantly get a counterexample by choosing two rank-1 states $\rho_1=|0\rangle\langle0|$ and $\rho_2=|\alpha\rangle\langle\alpha|$. Then, $\mathrm{tr}\,\bar\sigma=2$, while $\|\rho_1-\rho_2\|=2\sqrt{1-|\langle0|\alpha\rangle|^2}$. (The latter follows from the fact that $\rho_1-\rho_2$ is supported in the 2-dimensional subspace spanned by $|0\rangle$ and $|\alpha\rangle$, and is fully determined by their overlap.)
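This counterexample is easy to verify numerically. In an orthonormal basis of the relevant 2-dimensional subspace, write the second pure state as $c|0\rangle + \sqrt{1-c^2}\,|1\rangle$ with real overlap $c$ (for a coherent state, $c = \langle 0|\alpha\rangle = e^{-|\alpha|^2/2}$). The difference of the two projectors is a traceless $2\times2$ matrix with eigenvalues $\pm\sqrt{1-c^2}$, so its trace norm is $2\sqrt{1-c^2}$, strictly below $\mathrm{tr}\,\bar\sigma = 2$. A sketch:

```python
import math

def projector_difference_trace_norm(c):
    """||P0 - Ppsi||_1 for pure states with real overlap c, via 2x2 eigenvalues."""
    s = math.sqrt(1.0 - c * c)
    # A = |0><0| - |psi><psi| with |psi> = (c, s): a real symmetric matrix
    a = [[1.0 - c * c, -c * s],
         [-c * s, -s * s]]
    tr = a[0][0] + a[1][1]
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    disc = math.sqrt(tr * tr - 4.0 * det)
    lam1, lam2 = (tr + disc) / 2.0, (tr - disc) / 2.0
    return abs(lam1) + abs(lam2)
```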
On the other hand, as AccidentalFourierTransform pointed out, the inequality
$$
\mathrm{tr}\,\sigma\le \mathrm{tr}\,\bar\sigma
$$
follows right away from the triangle inequality. | {
"domain": "physics.stackexchange",
"id": 42483,
"tags": "quantum-mechanics, quantum-information, quantum-optics, coherent-states"
} |
Comparing std::vector<bool> to std::vector<char> | Question: A recent comment to an answer of mine here on Code Review brought up an interesting point. The comment was that one should use std::vector<char> over std::vector<bool> in most cases because the standard requires std::vector<bool> to actually pack bits. I replied that for small vector sizes, the speed wouldn't matter much and for large ones, cache locality would give the advantage to bool vectors.
However somewhere in between "small" and "large" is a lot of numbers! I wanted to do a test to characterize any std::vector<bool> advantage. The method I used is fairly simple. I wrote a program that takes 3 arguments:
minsize = the smallest vector size to test
maxsize = the largest vector size to test
steps = the number of steps between those two
Since I wanted to test a large range quickly, and guessing how the vectors would scale, I chose to step through the range logarithmically. Using the output from the test, and normalizing the speed advantage of std::vector<bool> over std::vector<char> by subtracting the two times and dividing by the size of the vector yielded this chart on my machine (an older 64-bit Linux machine, using g++ version 5.3.1 for x86_64).
It seems that for a range of sizes in the 2000 to 75000 range, the advantage is with std::vector<char> but for all other ranges, the two were either identical (to the resolution of my timer) or the std::vector<bool> had the advantage.
I'm interested in comments on the code or the test method.
booltest.cpp
#include <iostream>
#include <vector>
#include <cstdlib>
#include <cmath>
#include "stopwatch.h"
template <typename F>
struct testfunc {
F *fn;
const char *name;
};
#define TEST(x) { x, #x }
template <typename T>
void vectest(unsigned n)
{
std::vector<T> arr(n, false);
unsigned remaining = n;
static constexpr unsigned incr = 13;
for (unsigned j = 0; j < incr; ++j) {
for (unsigned i = j; i < n && remaining; i += incr) {
if (!arr[i]) {
arr[i] = true;
--remaining;
}
}
}
}
#define SHOW(x) std::cerr << #x << " = " << x << "\n"
int main(int argc, char *argv[])
{
const testfunc<decltype(vectest<bool>)> test[]{
TEST(vectest<char>),
TEST(vectest<bool>),
};
if (argc < 4) {
std::cerr << "Usage: booltest minsize maxsize steps\n";
return 0;
}
unsigned min = std::stod(argv[1]);
unsigned max = std::stod(argv[2]);
unsigned steps = std::stod(argv[3]);
double logmin = std::log10(min);
double logmax = std::log10(max);
double step = (logmax - logmin)/steps;
SHOW(min);
SHOW(max);
SHOW(steps);
// print header data
std::cout << "\"n\"";
for (const auto t : test) {
std::cout << ", \"" << t.name << "\"";
}
std::cout << "\n";
for (unsigned i = 0; i < steps; ++i) {
unsigned val = std::pow(10, logmin + i * step);
std::cout << val;
for (const auto t : test) {
std::cout << ", " << timeit<>(t.fn, val);
}
std::cout << std::endl;
}
}
Answer: I'm going to intersperse bits of code with my comments on them:
template <typename F>
struct testfunc {
F *fn;
const char *name;
};
testfunc isn't an entirely self-explanatory name (especially given that it's a template, so F could be pretty much anything. A comment about the basic intent would be quite helpful.
#define TEST(x) { x, #x }
template <typename T>
I'd rather see a blank link after the end of TEST to make it clear that the subsequent template isn't particularly closely related. Other than that, the same comment as previously applies: the name TEST doesn't tell us much, so a comment (or more explanatory name, or both) might be helpful here.
static constexpr unsigned incr = 13;
Two points here. First, I'm not entirely excited about the name: the value is used as an increment in one place (which seems to fit well with the name) but as the upper limit on a loop in another place (which doesn't fit quite so well).
Second, I'm left wondering exactly what (if any) significance the value 13 has. Is it arbitrary, or does it matter that it happens to be odd, or perhaps it matters that it's prime. Or maybe it's really a carriage return in disguise (oh, but only some crusty old assembly language programmer would notice that; no wonder it never occurred to you:-) !)
std::cout << std::endl;
It seems likely that this is one of the (rare) times that somebody is using std:endl because they really want what it does. Personally, I'd rather do that a bit more explicitly with std::cout << "\n" << std::flush; though. Otherwise, some busybody might conclude you're part of the crowd that routinely uses endl when they really just want a carriage return, and change it to the latter.
As far as the test method goes, it only tests one very specific pattern. It fits well with the Sieve of Eratosthenes, but doesn't tell you much about (for example) repeated manipulations of the same parts of the vector. If that fits closely with your intended usage pattern, that's fine--but there are clearly a lot of other possible uses about which this is likely to tell us little or nothing. | {
"domain": "codereview.stackexchange",
"id": 18187,
"tags": "c++, c++14, vectors, benchmarking"
} |
How do these triplets code for these proteins? | Question:
I am slightly confused by the diagram above.
The first codon of the unaltered DNA is AAG. During transcription, isn't this converted to UUC (mRNA)? So doesn't UUC code for Phe and not Lys?
Likewise, how does TAG code for a stop codon? Isn't it transcribed to AUC, which codes for Ile?
Also, is that the coding strand or template strand? If there is a mutation in the coding strand, is there any impact on the protein?
Answer: In the first image, AAG is the true codon for Lysine. So when the ribosome hits "AAG" in the mRNA it recruits a Lysine-tRNA.
What can be confusing is the use of the term "coding strand" when talking about the DNA. The coding strand is illustrated in the second image you posted, and it is the coding strand of DNA that the first image is depicting. What "coding strand" means here is, this is the strand that looks like the mRNA will look. Depicting the DNA coding strand like this--i.e. as a series of codons--can be slightly misleading since we know codons really only mean anything when in the form of mRNA. Nevertheless, it is a common way of thinking about DNA.
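These relationships are mechanical enough to script. A sketch (my addition; only the codons discussed in the question are included in the lookup table):

```python
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}
CODON_TABLE = {"AAG": "Lys", "UUC": "Phe", "AUC": "Ile", "UAG": "STOP"}  # partial

def template_strand(coding):
    """Reverse complement of the coding strand: what RNA polymerase reads."""
    return "".join(COMPLEMENT[base] for base in reversed(coding))

def transcribe(coding):
    """The mRNA matches the coding strand, with U in place of T."""
    return coding.replace("T", "U")

# coding AAG -> template CTT -> mRNA AAG -> Lysine, as described above
```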
Back to the first image. Because it is showing the "coding strand" of DNA, the "template strand" for 5'-AAG-3' then must be 5'-CTT-3'. Hence, when this strand is transcribed by RNA polymerase, the 5'-CTT-3'(DNA) becomes 5'-AAG-3'(mRNA). | {
"domain": "biology.stackexchange",
"id": 9346,
"tags": "genetics"
} |
Is it possible for two strong lenses to cancel each other's effects? | Question: Is it possible for a lens system of 2 strong lenses to cancel each other's effects, so it looks almost the same as common glass, while at the same time making an object in between them appear big and distant?
Could it be done with flat fresnel lenses?
EDIT: Ok, it's ok to have more lenses; what wouldn't serve me is for them to be too far apart or for the system to be too big
Answer: I do not believe this is possible. For example, researchers at Rochester used this to effectively cloak a small space when viewed from the right angles, since the lenses do bend the light passing between them thus occupying a cone shape instead of the entire cylinder. They use four lenses to obtain this. The way I think they do this is by placing two pairs of lenses at each other's focal points, which should not magnify, but would invert the image; therefore you require two sets of these (four lenses in total) to invert it twice and get back to normal.
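The inversion argument can be checked with ray-transfer (ABCD) matrices. A sketch, assuming two identical thin lenses of focal length f separated by 2f (each at the other's focal point): one such pair is afocal with magnification -1, and two pairs in series restore an upright, unmagnified image up to a longitudinal offset:

```python
def matmul(a, b):
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def thin_lens(f):
    return [[1.0, 0.0], [-1.0 / f, 1.0]]

def free_space(d):
    return [[1.0, d], [0.0, 1.0]]

f = 1.0
# Rightmost matrix acts first on the ray (height, angle).
pair = matmul(thin_lens(f), matmul(free_space(2 * f), thin_lens(f)))
two_pairs = matmul(pair, pair)
```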
The researchers used four lenses, probably to increase the volume which would be cloaked:
However, you can get away with just using three lenses if you only want to cancel the effects. For example, you could use the following configuration:
where the three vertical black lines represent (thin) lenses, with focal points $F_1$, $F_2$ and $F_3$ from left to right, with in this case $F_1=2F_2=F_3$, such that all three lenses can have the same size.
I do not think that this can be simplified any further, or you would have to allow to use a concave and convex lens and glue them together, such that you obtain an effective optic with no effect. | {
"domain": "physics.stackexchange",
"id": 25525,
"tags": "visible-light, lenses, vision"
} |
How to know if a RG fixed point is attractive or repulsive? | Question: Assuming that I have the beta functions and fixed point solutions of a particular model how can I determine whether the fixed points are attractive or repulsive in each direction?
Answer: The idea is essentially to expand the RG-equations around the fixpoint so you can actually solve them. Assume we are given a collection of couplings
$g_i$ with fixed points $g_i^*$ in combination with a set of equations (the beta functions) $\lambda \frac{dg_i}{d\lambda} = \beta_i(g_j)$
Now define $\delta g_i = g_i^* - g_i$ and expand $\beta_i$ to linear order such that
$\lambda \frac{d\delta g_i}{d\lambda} = B_{ij} \delta g_j + \mathcal{O}(\delta g^2)$
where $B_{ij}$ is the matrix of Taylor coefficients.
You can now look at the eigenvectors and eigenvalues of this matrix $v_i, \gamma_i$. Plugging one of these vectors into the set of equations we find
$\lambda\frac{d v_i}{d\lambda} = \gamma_i v_i $
which you can trivially solve or just realize that the direction of flow along the eigendirections of the fixpoint corresponds to the sign of the eigenvalue such that any negative eigenvalues represent a flow away from the fp, with positive corresponding to flow towards the fp. (This is just seen by solving the differential equation)
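As a concrete illustration (the beta functions below are an invented toy model, not from the question), the linearize-then-diagonalize procedure can be carried out numerically:

```python
import numpy as np

# Toy beta functions (invented for illustration):
#   beta_1 = -g1 + g1^2 + g2^2,  beta_2 = -2 g2 + g1 g2
def beta(g):
    g1, g2 = g
    return np.array([-g1 + g1**2 + g2**2, -2.0*g2 + g1*g2])

def stability_matrix(g_star, eps=1e-6):
    """Central-difference Jacobian B_ij = d(beta_i)/d(g_j) at a fixed point."""
    n = len(g_star)
    B = np.zeros((n, n))
    for j in range(n):
        dg = np.zeros(n)
        dg[j] = eps
        B[:, j] = (beta(g_star + dg) - beta(g_star - dg)) / (2.0*eps)
    return B

g_star = np.array([0.0, 0.0])             # the trivial fixed point of the toy model
eigvals = np.linalg.eigvals(stability_matrix(g_star))
print(np.sort(eigvals.real))              # [-2., -1.] for this toy model
```

The signs of these eigenvalues then classify each eigendirection as attractive or repulsive, exactly as described above.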
The only tricky thing here is the existence of these eigenvectors and eigenvalues, as there may not be the same number of eigenvectors as directions in space. I'm not sure how to deal with this generally. | {
"domain": "physics.stackexchange",
"id": 95464,
"tags": "quantum-field-theory, renormalization"
} |
Sum total distance of electrons on a spherical surface | Question: What is the sum total distance between every possible pair of point charges when there are n point charges on a spherical surface?
All point charges can only and are located on the infinitesimal spherical surface.
Basically, we're going to have a bunch of points staying as far apart as possible on the surface of a sphere. Is there a general equation for this?
And whats the sum total distance between every pair of points?
(Both the distances by travelling the shortest distance on the spherical surface and the shortest distance travelling through the interior volume of the sphere. I'm confident that the relationship between those two values require a relatively simple equation.
Is there a similar mathematical model/problem which can be used to solve this problem?
Answer: To begin with, I'll have to assume you mean surplus electron charges on a sphere. Obviously, the integral of the inverse of distances between charges would be comparatively easy to relate to physical quantities since that's basically potential. So let's talk about this sphere, radius $R$, with a charge $Q$ on it, made up of $Q/e$ electrons. The self capacitance of a conducting sphere is $4 \pi \epsilon_0 R = R/k$. Then we can establish that the total potential energy is $\frac{1}{2}C V^2 = \frac{1}{2} \frac{Q^2}{C}$.
$$E=\frac{Q^2}{2 C} = k e^2 \frac{N^2 }{2 R} = k e^2 \sum_{i=1}^N \sum_{j=1}^{i-1} \frac{1}{|r_i-r_j|} $$
For the sum of distances (not the inverse), you can get a formula like this.
$$\sum_{i=1}^N \sum_{j=1}^{i-1} |r_i-r_j| = \frac{2}{3} N^2 R$$
This falls out from the calculus of the summation, just like the capacitance value. I didn't do that here; instead I just wrote code that discovered the relationship to my satisfaction.
program sphere
  implicit none
  double precision, dimension(3) :: r1, r2
  integer :: i, j, N
  double precision :: thesum, thesum2, ind
  double precision, parameter :: pi = 3.14159265358979d0
  double precision :: d, rad
  N = 5000
  rad = 2.d0
  ind = 0.d0
  thesum = 0.d0
  thesum2 = 0.d0
  do i = 1, N
    r1 = random_points(rad)
    do j = 1, i-1
      ind = ind + 1.d0
      r2 = random_points(rad)
      d = sqrt(sum((r1-r2)**2))
      thesum = thesum + 1.d0/d    ! accumulate sum of 1/distance
      thesum2 = thesum2 + d       ! accumulate sum of distances
    end do
  end do
  write(*,*) ' N= ', N, ' number= ', ind
  write(*,*) ' pot/N^2 ', thesum/N**2
  write(*,*) ' len/N^2 ', thesum2/N**2
contains
  function random_points(r)
    ! uniform random point on the surface of a sphere of radius r
    implicit none
    double precision, dimension(3) :: random_points
    double precision, intent(in) :: r
    double precision :: theta, mu
    double precision, dimension(2) :: rand
    call random_number(rand)
    theta = 2.d0*pi*rand(1)
    mu = 2.d0*rand(2) - 1.d0
    random_points(1) = cos(theta)*sqrt(1.d0 - mu**2)*r
    random_points(2) = sin(theta)*sqrt(1.d0 - mu**2)*r
    random_points(3) = mu*r
  end function random_points
end program sphere
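For readers without a Fortran compiler, here is a sketch of the same Monte Carlo check in Python (numpy assumed); the pair-distance sum divided by $N^2$ should come out near $2R/3$:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_points_on_sphere(n, radius):
    """Uniform points on a sphere via the same (theta, mu) trick as above."""
    theta = 2.0 * np.pi * rng.random(n)
    mu = 2.0 * rng.random(n) - 1.0        # uniform cosine of the polar angle
    s = np.sqrt(1.0 - mu**2)
    return radius * np.column_stack((s*np.cos(theta), s*np.sin(theta), mu))

N, R = 1200, 2.0
p = random_points_on_sphere(N, R)
d = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)   # all pairwise distances
iu = np.triu_indices(N, k=1)                                  # each pair counted once
print(d[iu].sum() / N**2, 2.0*R/3.0)                          # should agree to MC accuracy
```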
I don't know if this value has any physical utility. To begin with, we can put it in terms of more familiar physical values.
$$\frac{2 N^2 R}{3} = \frac{2 Q^2 R}{3 e^2}$$
The problem with doing a distance integral is that I can't think of any physical quantity for which it would matter. Field and potential go as $1/r^2$ and $1/r$, and if you integrate $1/r$ again, you get $\ln(r)$. I suppose some forces do grow with distance, and maybe some are proportional to distance.
I'm confident that the relationship between those two values require a relatively simple equation.
As long as the geometry is sufficiently simple, this will be true for many similar questions. That's in the domain of math.
EDIT:
I think that the revised problem is to constrain:
$$i=1 .. N$$
$$ |\vec{r}_i| < R$$
Then show that (1) being true implies (2) below:
the sum of $1/r$ of all the charges, integrated over the entire volume, is constant
the summed distance between all points is at a maximum
I think this could be what is being asked. It's a little lofty, but I'm sure it's entirely doable. My suspicion is that it would apply for any arbitrary region.
THE CALCULUS:
I'll present the way to get these numbers by integrating, partly because I think it would be helpful for an incoming freshman to see. The problem my above code solves is to integrate the values of $1/r$ and $r$ over all pairs of points on a sphere. When I do the actual integral I'll multiply it by 1/2 because the integral would otherwise double-count the pairs. Integrating is basically a way of using math to describe a problem with infinitely many points.
Let's start out. The surface area of the sphere is:
$$SA = 4 \pi R^2$$
The charge density is the number divided by the surface area. This is the number of charges per unit area on the surface of the sphere.
$$\sigma = \frac{N}{SA}$$
You can integrate over any variable you'd like; I'll choose the angle between the x-axis and the vector. I'm going to denote this with a prime to indicate that it is the "second" point in the pair, and the "first" point in the pair will simply be fixed.
$$ \vec{r}' = <x',y',z'> = < R \cos{\theta}, 0, R \sin{\theta} >$$
$$ \vec{r} = < R, 0, 0 >$$
Define the distance between them. This is a scalar.
$$ d = | \vec{r} - \vec{r}' | $$
I'm going to do a surface integral to cover all the $\vec{r}'$ and then do another surface integral (times 1/2) over all the $\vec{r}$. That's the gist of it, but there are lots of symmetries involved. These symmetries reduce the dimensionality of the $\vec{r}'$ integral by 1 and the $\vec{r}$ integral by 2. That means the latter isn't even an integral. The propositions behind these are that
You can rotate the $\vec{r}'$ point around the x-axis and it doesn't change the distance
You can move the $\vec{r}$ point all around the sphere and it doesn't change the distance
Now I'm at a point where I can write the integral. First for the total electrostatic energy. Note that $e \sigma$ is the charge density, since I used $\sigma$ as the number density. The first expression of charge density times surface area is the multiplier that I use in lieu of an outer integral. The $2 \pi y' R \, d\theta$ factor is needed to correctly use the x-axis symmetry: it is the area of a thin washer centered about the x-axis, perimeter $2\pi y'$ times arc width $R\,d\theta$.
$$ E = (e \sigma SA) \frac{1}{2} \int_0^{\pi} 2 \pi y' R \, \frac{k e \sigma}{d} d \theta = k e^2 \frac{ N^2}{2R} $$
This is the result I wanted. The same manner is used almost identically to reproduce the number for the sum of distances between points. I'll leave out the charges because there's no clear physical interpretation.
$$ sum = ( \sigma SA) \frac{1}{2} \int_0^{\pi} 2 \pi y' R \, \sigma d \, d \theta = \frac{2 N^2 R}{3} $$
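If you'd like a computer algebra check of this (a sketch, sympy assumed), the washer-area measure $2\pi y' R\,d\theta$ with chord distance $d = 2R\sin(\theta/2)$ reproduces the result:

```python
import sympy as sp

theta, R, N = sp.symbols('theta R N', positive=True)
sigma = N / (4*sp.pi*R**2)       # number density on the sphere
d = 2*R*sp.sin(theta/2)          # chord distance between the fixed and primed points
yp = R*sp.sin(theta)             # washer radius y'

# sum of pair distances: (sigma*SA) * 1/2 * Integral of 2*pi*y'*R * sigma*d dtheta
total = (sigma*4*sp.pi*R**2) * sp.Rational(1, 2) \
        * sp.integrate(2*sp.pi*yp*R*sigma*d, (theta, 0, sp.pi))
print(sp.simplify(total))        # simplifies to 2*N**2*R/3
```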
And that's the integral. Using a computational algebra package is helpful, but everything should be sufficiently defined here. | {
"domain": "physics.stackexchange",
"id": 4508,
"tags": "electricity, mathematics, space, air, charge"
} |
What would happen if the aether did exist and there was no such thing as relativity? | Question: I'm curious as to the purpose of relativity and why the universe would function this way as opposed to a universe with an aether. So what would be different if we had an aether?
Answer: Interesting question. Brings to mind some thoughts.
After Maxwell published his theory of electromagnetism, many people thought that the EM waves had to be traveling through some medium, so they postulated the ether. Maxwell's theory has waves propagating at a characteristic speed, but it was unclear what that speed was relative to. Was it the speed relative to the source, to the observation point, or to the ether?
To sort that out, Michelson and Morley used a very sensitive interferometer to see if the speed of light changed if a light beam traveled perpendicular to earth's orbit vs. parallel to the earth's orbit. No difference in the speed of light was observed.
So that's a jumping off point. If there was an ether, the speed of light would change depending on which direction the light beam was moving.
Length contraction and time dilation logically follow from the universality of the speed of light, so those effects would be absent. Consequently there would be significantly fewer muons reaching the surface of the earth from interactions in the upper atmosphere than we actually observe. Other observations would also be off from their known values.
Magnetic fields would behave differently. These fields can be represented as an electrostatic field arising from a charge distribution from different times in the past.
Flat earth theorists have to explain many observations presently explained by a round earth theory. For example, they claim gravity is caused by the disc of the earth accelerating upward. This would lead to a gravitational field more uniform than the one we actually have and independent of earth's variable density.
In that same sense, an ether would have many likely-testable implications that wouldn't hold up.
Suppose there was a robust aether theory, avoiding ad hoc corrections, that agreed with experiment and made all the predictions of special relativity. It still wouldn't work to supersede special relativity: by virtue of introducing a new phenomenon with no additional explanatory power, it could be ruled out as moot. | {
"domain": "physics.stackexchange",
"id": 97700,
"tags": "general-relativity, special-relativity, speed-of-light, aether"
} |
How to extract column from a matrix matching the another file (sample file) | Question: I have two files master file.txt and sample.txt
master.txt
Name GTEX.1117F.3226.SM.5N9CT GTEX.111FC.3126.SM.5GZZ2 GTEX.111FC.3326.SM.5GZYV GTEX.1128S.2726.SM.5H12C GTEX.1128S.2826.SM.5N9DI GTEX.117XS.3026.SM.5N9CA
ENSG00000223972 1 0 0 1 0 0
ENSG00000227232 298 168 197 106 221 184
ENSG00000278267 0 1 1 0 0 0
ENSG00000243485 0 2 0 1 0 1
ENSG00000237613 0 0 0 1 0 1
ENSG00000268020 1 0 0 1 0 1
ENSG00000240361 1 3 0 4 2 1
ENSG00000186092 3 1 1 1 1 2
Sample file.txt
GTEX.1117F
GTEX.111FC
GTEX.111XS
Desired output
Name GTEX.1117F.3226.SM.5N9CT GTEX.111FC.3126.SM.5GZZ2 GTEX.111FC.3326.SM.5GZYV GTEX.117XS.3026.SM.5N9CA
ENSG00000223972 1 0 0 0
ENSG00000227232 298 168 197 184
ENSG00000278267 0 1 1 0
ENSG00000243485 0 2 0 1
ENSG00000237613 0 0 0 1
ENSG00000268020 1 0 0 1
ENSG00000240361 1 3 0 1
ENSG00000186092 3 1 1 2
ENSG00000238009 2 2 1 1
ENSG00000233750 0 3 3 0
ENSG00000268903 103 44 76 24
I tried using the R code from this link
Print specific columns in a matrix on the basis of sample id's in the header
However, I am not able to figure out how to extract columns matching only the first part of the ID (e.g. for GTEX.111FC I want both GTEX.111FC.3126.SM.5GZZ2 and GTEX.111FC.3326.SM.5GZYV)
Thank you
Answer: Assuming your "master" files looks like this:
df <- data.frame(Name = LETTERS[1:5],
Col1 = sample(1:10,5),
Col2 = sample(1:10,5),
Col11 = sample(1:10,5),
Col3 = sample(1:10,5))
Name Col1 Col2 Col11 Col3
1 A 5 4 3 10
2 B 1 6 5 6
3 C 3 1 9 4
4 D 9 9 8 9
5 E 8 10 1 1
For a sequence of patterns you want to extract, you can use base R functions such as grep:
match_pat = c("Col1","Col2")
m <- unlist(sapply(match_pat,function(x) grep(x,colnames(df))))
Col11 Col12 Col2
2 4 3
And then, extract your columns of interest by doing:
> df[,m]
Col1 Col11 Col2
1 5 3 4
2 1 5 6
3 3 9 1
4 9 8 9
5 8 1 10
In your example, it will be something like:
match_pat <- c("GTEX.1117F","GTEX.111FC","GTEX.111XS")
m <- unlist(sapply(match_pat,function(x) grep(x,colnames(df))))
master[,m]
| {
"domain": "bioinformatics.stackexchange",
"id": 1744,
"tags": "r, data-preprocessing, gtex"
} |
Do electrons in an atom gain kinetic energy when a photon hits it? | Question: I know that once a photon hits an electron it moves from the ground state to an excited state, then comes back down to the ground state by releasing that energy as a photon. But does the electron (and the atom) also increase in kinetic energy, or does the energy get released too fast for it to have a chance to increase the electron's (and the atom's) kinetic energy?
Answer: Electrons, photons and nuclei are described with quantum mechanical equations. A photon can interact with a free electron, scattering elastically or inelastically; in Compton scattering, part of the energy of the photon turns into kinetic energy of the electron.
An electron bound to a nucleus forms an atom, and usually occupies the ground energy level. If a photon whose energy matches the spacing of the atom's energy levels hits the atom, the system goes to a higher excited energy level. The electron is located at a higher energy level, and the momentum of the photon is transferred to the atom. In the semiclassical Bohr model one can say that the electron has higher energy, though the rigorous quantum mechanical solution is about probable values of the energy if measured.
When the atom relaxes to the lower level, emitting a photon, momentum has to be conserved and the whole atom has to take part in the exercise. The incoming and outgoing photons will not have the same direction, since the decay is probabilistic. | {
"domain": "physics.stackexchange",
"id": 44678,
"tags": "photons, atoms"
} |
Effect of Zn dust reduction of phenolic -OH group on other groups | Question: We use distillation with zinc dust to remove -OH group from phenol.
$$\ce{Ph-OH + Zn \rightarrow Ph-H + ZnO }$$
What I want to know is that whether this reaction has any effect on other groups attached to the benzene ring, like $\ce{-CH_3,-NH_2,-NO_2,-CN,-CONH2}$ etc. or not?
Answer: I did some research for this question many years ago. Nisarg's answer prompted me to post an answer.
Nisarg's answer showed how nitrobenzene can be reduced to form different compounds, but my answer focuses on some different compounds. Doing some literature surveys gave me results from very old books. All the reactions involve only zinc dust. I tried quoting, but the majority of the terms/compound names are obsolete, so I have written them as points:
formanilide and its derivatives are reduced, giving benzonitrile and aniline (and their derivatives) as major products, plus other side products such as N-methyldiphenylamine. Yield of nitrile: 10-20% by weight of initial formanilide.
N-alkyl-o-toluidine and N-alkyl-p-toluidine yield the corresponding nitrile: yield 15-20%
N-methyl-α- and N-methyl-β-naphthylamine yield α- and β-naphthonitrile
xylidine and its isomers yield xylonitrile (prone to hydrolysis to yield xylic acid, the isomeric carboxylic acid of xylene): yield 12%
Phenyl isothiocyanate and thiocarbanilide form benzonitrile and some aniline.
References:
K.Gasiorowski and V.Merz, Journal of the Chemical Society, 1884 | {
"domain": "chemistry.stackexchange",
"id": 15929,
"tags": "organic-chemistry, aromatic-compounds, synthesis"
} |
How to compute the maximum possible coherence of a two-particle Bell state? | Question: I am reading through some notes and am stuck on a bit of math that shows the max possible coherence. Our wave function is $|\psi\rangle =\frac{|01\rangle+|10\rangle}{\sqrt{2}}$ and doing $|\psi\rangle \langle\psi|$ gives us
$\frac{|01\rangle \langle01| + |01\rangle \langle10| + |10\rangle \langle01| + |10\rangle \langle10|}{2}$
Multiplying these and adding them together gives me
$\frac{1}{2} (\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix})$
which is
$\frac{1}{2}\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$
but the given solution is
$\frac{1}{2}\begin{pmatrix} 0&0&0&0\\0&1&1&0\\0&1&1&0\\0&0&0&0 \end{pmatrix}$
Can I get some help pointing out where my math is wrong here?
Answer: Welcome to QCSE. It doesn't look like you took the outer product properly. For example, you might have gotten lost in thinking that $|01\rangle$ is a two-dimensional ket, but it's actually four-dimensional.
In more detail, $|01\rangle$ corresponds to 1 written in binary, and has a 1 in the second row of the vector, while $|10\rangle$ corresponds to binary 2, and has a 1 in the third row - the first row would correspond to $|00\rangle$ and the fourth row is $|11\rangle$.
That is, we have:
$$|01\rangle=\begin{pmatrix} 0 \\ 1 \\ 0 \\ 0\end{pmatrix}\:\:|10\rangle=\begin{pmatrix} 0 \\ 0 \\ 1 \\ 0\end{pmatrix}$$
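A quick numerical check (my own sketch, using numpy) confirms that these four-dimensional kets reproduce the given solution:

```python
import numpy as np

ket01 = np.array([0.0, 1.0, 0.0, 0.0])   # |01>, binary 1 -> second row
ket10 = np.array([0.0, 0.0, 1.0, 0.0])   # |10>, binary 2 -> third row
psi = (ket01 + ket10) / np.sqrt(2.0)     # normalized Bell state

rho = np.outer(psi, psi)                 # |psi><psi|
expected = 0.5 * np.array([[0, 0, 0, 0],
                           [0, 1, 1, 0],
                           [0, 1, 1, 0],
                           [0, 0, 0, 0]], dtype=float)
print(np.allclose(rho, expected))        # True
```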
But it looks like your matrices are written as if you are taking the outer product of:
$$|01\rangle=\begin{pmatrix} 1 \\ 0 \end{pmatrix}\:\: |10\rangle=\begin{pmatrix} 0 \\ 1\end{pmatrix}$$
This is improper. | {
"domain": "quantumcomputing.stackexchange",
"id": 4248,
"tags": "entanglement, textbook-and-exercises, density-matrix"
} |
Is there a machine learning model that is able to take reviews as input and output a new and unique blog article from them? | Question: I am looking for a machine learning model ideally with inference speeds of no longer than a few minutes that is able to take in n human written reviews and output a blog article from them.
The model would need to be pre-trained or if it does need training using the reviews then the training would not take more than a few minutes on a modern single GPU machine.
Can someone point me to such open-source projects?
Answer: There is a solution that could work well. It requires minimal effort but has to be tested.
If you take several reviews, group all the first paragraphs together, then the second ones, etc., and apply an efficient summarization model, you should get the essence of all the reviews.
The model would recognize the most frequent patterns in an organized way and do the job.
It could work with sentences instead of paragraphs.
If the articles have very different sizes, you can use summarization for each of them to 10 sentences, and then apply the process described above.
Note: You can't do this with full reviews next to each other, because the model would not distinguish the beginning from the end of each of them.
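The grouping step itself is easy to script. Here is a sketch in plain Python (the review text is placeholder data; each grouped chunk would then be fed to a summarization model such as the one linked below):

```python
from itertools import zip_longest

# Each review is a list of paragraphs (placeholder data).
reviews = [
    ["r1 intro", "r1 body", "r1 verdict"],
    ["r2 intro", "r2 body"],
    ["r3 intro", "r3 body", "r3 verdict"],
]

# Group the first paragraphs of all reviews together, then the second ones, etc.
groups = [
    " ".join(p for p in chunk if p is not None)
    for chunk in zip_longest(*reviews)
]
print(groups)
# ['r1 intro r2 intro r3 intro', 'r1 body r2 body r3 body', 'r1 verdict r3 verdict']
```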
https://huggingface.co/facebook/bart-large-cnn
Other models:
https://huggingface.co/models?pipeline_tag=summarization&sort=downloads | {
"domain": "datascience.stackexchange",
"id": 11124,
"tags": "text-generation"
} |
Proving a step in this field-theoretic derivation of the Bogoliubov de Gennes (BdG) equations | Question: In derivation of the BdG mean field Hamiltonian as follows, I have a confusion here in the second step:
$H_{MF-eff} = \int d^{3}r\psi_{\uparrow}^{\dagger}(\mathbf{r})H_{E}(\mathbf{r})\psi_{\uparrow}(\mathbf{r})+\int d^{3}r\psi_{\downarrow}^{\dagger}(\mathbf{r})H_{E}(\mathbf{r})\psi_{\downarrow}(\mathbf{r})
+\int d^{3}r\triangle^{\star}(\mathbf{r})\psi_{\downarrow}(\mathbf{r})\psi_{\uparrow}(\mathbf{r})+\int d^{3}r\psi_{\uparrow}^{\dagger}(\mathbf{r})\psi_{\downarrow}^{\dagger}(\mathbf{r})\triangle(\mathbf{r})-\int d^{3}r\frac{|\triangle(\mathbf{r})|^{2}}{U}$
$ = \int d^{3}r\psi_{\uparrow}^{\dagger}(\mathbf{r})H_{E}(\mathbf{r})\psi_{\uparrow}(\mathbf{r})-\int d^{3}r\psi_{\downarrow}(\mathbf{r})H_{E}^{\star}(\mathbf{r})\psi_{\downarrow}^{\dagger}(\mathbf{r})
+\int d^{3}r\triangle^{\star}(\mathbf{r})\psi_{\downarrow}(\mathbf{r})\psi_{\uparrow}(\mathbf{r})+\int d^{3}r\psi_{\uparrow}^{\dagger}(\mathbf{r})\psi_{\downarrow}^{\dagger}(\mathbf{r})\triangle(\mathbf{r})-\int d^{3}r\frac{|\triangle(\mathbf{r})|^{2}}{U}$
$= \int d^{3}r\left(\begin{array}{cc}
\psi_{\uparrow}^{\dagger}(\mathbf{r}) & \psi_{\downarrow}(\mathbf{r})\end{array}\right)\left(\begin{array}{cc}
H_{E}(\mathbf{r}) & \triangle(\mathbf{r})\\
\triangle^{\star}(\mathbf{r}) & -H_{E}^{\star}(\mathbf{r})
\end{array}\right)\left(\begin{array}{c}
\psi_{\uparrow}(\mathbf{r})\\
\psi_{\downarrow}^{\dagger}(\mathbf{r})
\end{array}\right)+const.
$
with
$H_{E}(\mathbf{r})=\frac{-\hbar^{2}}{2m}\nabla^{2}$
In the second step, we have taken
$\int d^{3}r\psi_{\downarrow}^{\dagger}(\mathbf{r})\nabla^{2}\psi_{\downarrow}(\mathbf{r}) = -\int d^{3}r\psi_{\downarrow}(\mathbf{r})\nabla^{2}\psi_{\downarrow}^{\dagger}(\mathbf{r}) \quad (1)$.
I can prove (by integration by parts and putting the surface terms to 0) that $\int d^{3}r\psi_{\downarrow}^{\dagger}(\mathbf{r})\nabla^{2}\psi_{\downarrow}(\mathbf{r}) = \int d^{3}r \nabla^{2}\psi_{\downarrow}^{\dagger}(\mathbf{r})\psi_{\downarrow}(\mathbf{r})$
but how is it justified to now take
$\int d^{3}r \nabla^{2}\psi_{\downarrow}^{\dagger}(\mathbf{r})\psi_{\downarrow}(\mathbf{r}) = - \int d^{3}r\psi_{\downarrow}(\mathbf{r})\nabla^{2}\psi_{\downarrow}^{\dagger}(\mathbf{r})$
in order to prove (1) ?
Answer: Write the differential operator $\nabla^2$ in terms of derivatives as $\nabla^2=\partial^2_x+\partial^2_y+\partial^2_z$. Write each derivative as a limit (i.e.: $\partial_x f(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$), so that $\psi_\downarrow^{\dagger}\nabla^2\psi_\downarrow$ becomes a limit of finite differences of products like $\psi_\downarrow^{\dagger}(\mathbf{r})\psi_\downarrow(\mathbf{r}')$ at nearby points. Rearrange the fields by putting $\psi_\downarrow(\mathbf{r}')$ on the left, of course keeping track of fermionic exchanges with a minus sign: each swap uses $\psi_\downarrow^{\dagger}(\mathbf{r})\psi_\downarrow(\mathbf{r}') = -\psi_\downarrow(\mathbf{r}')\psi_\downarrow^{\dagger}(\mathbf{r}) + \delta(\mathbf{r}-\mathbf{r}')$, and the delta-function pieces are c-numbers that only shift the Hamiltonian by a constant (part of the $const.$ in the final expression). Reinstate the limit as a derivative and then as a differential operator, which produces the extra minus sign in (1). | {
"domain": "physics.stackexchange",
"id": 13376,
"tags": "quantum-mechanics, quantum-field-theory, condensed-matter"
} |
Why do colored solutions become more transparent as volume is decreased? | Question: For instance, I was drinking Gatorade, and as I drank more and more, the drink became lighter in color. Why?
I don't understand. I would think that the molarity stays constant and the color too.
I feel like it's a really basic question, but as finals approach, I seem to have forgotten everything I know.
Answer: That's the Beer - Lambert law in action. The color you see depends on absorbance, which is proportional to molarity and the thickness of your specimen.
Come to think of it, how could it be otherwise? Would you expect 1cm thick layer of liquid to look the same as 10cm? What about 1mm? Or 0.1mm? | {
"domain": "chemistry.stackexchange",
"id": 5528,
"tags": "concentration, color, spectrophotometry"
} |
What is "data science"? | Question: In recent years, the term "data" seems to have become a term widely used without specific definition. Everyone seems to use the phrase. Even people as technology-impaired as my grandparents use the term and seem to understand words like "data breach." But I don't understand what makes "data science" a new discipline. Data has been the foundation of science for centuries. Without data, there would be no Mendel, no Schrödinger, etc. You can't have science without interpreting and analyzing data.
But clearly it means something. Everyone is talking about it. So what exactly do people mean by data when they use terms like "big data" and why has this become a discipline in itself? Also, if it is an emerging discipline, where can I find more serious/in-depth information so I can better educate myself?
Thanks!
Answer: I get asked this question all the time, so earlier this year I wrote an article (What is Data Science?) based on a presentation I've given a few times. Here's the gist...
First, a few definitions of data science offered by others:
Josh Wills from Cloudera says a data scientist is someone "who is better at statistics than any software engineer and better at software engineering than any statistician."
A frequently-heard joke is that a "Data Scientist" is a Data Analyst who lives in California.
According to Big Data Borat, Data Science is statistics on a Mac.
In Drew Conway's famous Data Science Venn Diagram, it's the intersection of Hacking Skills, Math & Statistics Knowledge, and Substantive Expertise.
Here's another good definition I found on the ITProPortal blog:
"A data scientist is someone who understands the domains of programming, machine learning, data mining, statistics, and hacking"
Here's how we define Data Science at Altamira (my current employer):
The bottom four rows are the table stakes -- the cost of admission just to play the game. These are foundational skills that all aspiring data scientists must obtain. Every data scientist must be a competent programmer. He or she must also have a solid grasp of math, statistics, and analytic methodology. Data science and "big data" go hand-in-hand, so all data scientists need to be familiar with frameworks for distributed computing. Finally, data scientists must have a basic understanding of the domains in which they operate, as well as excellent communications skills and the ability to tell a good story with data.
With these basics covered, the next step is to develop deep expertise in one or more of the vertical areas. "Data Science" is really an umbrella term for a collection of interrelated techniques and approaches taken from a variety of disciplines, including mathematics, statistics, computer science, and software engineering. The goal of these diverse methods is to extract actionable intelligence from data of all kinds, enabling clients to make better data-driven decisions. No one person can ever possibly master all aspects of data science; doing so would require multiple lifetimes of training and experience. The best data scientists are therefore "T-shaped" individuals -- that is, they possess a breadth of knowledge across all areas of data science, along with deep expertise in at least one. Accordingly, the best data science teams bring together a set of individuals with complementary skillsets spanning the entire spectrum. | {
"domain": "datascience.stackexchange",
"id": 162,
"tags": "bigdata, definitions"
} |
Electric potential vs induced emf | Question: Suppose we have a changing magnetic field in the $z$ direction and a conductive ring of radius r inside the magnetic field lying in the $xy$ plane. Then we have an induced emf: $$ ε= -{{dΦ} \over {dt} }$$
This emf causes a current along the conductor-ring: $ I = \frac{ε}{R} $ , R its resistance. (1)
However we know that all points inside a conductor have the same electric potential. (2)
How can both (1) and (2) be true?
Some thoughts:This is a pure Faraday field and the induced electric field is of course not conservative. So how can there even be a potential? If $E$ is not conservative then we can't write $E=- \nabla V$ , right? However we do have some sort of potential from which we get E: the emf. What's the difference between the induced emf and the electric potential?
Also: what would happen if I cut the ring at one point? Would I have potential difference between the 2 ends?
Answer: You're right. When you have a conservative $E$ field, you can define an electric potential $V$. And when you don't, you can't define such a potential (just as you can't define a potential energy $U$ for a nonconservative force $F$ - nonconservative forces still do work - but there is no associated potential energy function for such forces).
General Remarks
Electric potential is measured in volts and defined by
$$V(x) = -\int_{\mathcal{O}}^x \vec{E}\cdot d\vec{l} \tag{1}$$
A voltage is defined by (in the textbooks) as a difference in potential
$$\Delta V = V_b - V_a = -\int_a^b \vec{E}\cdot d\vec{l} \tag{2}$$
However this is too restrictive. In general, anything that takes the form
$$\int \frac{\vec{F}}{q}\cdot d\vec{l} \tag{3}$$
can be called a voltage and is measured in volts. So yes, $\text{(2)}$ above is a voltage, and even $\text{(1)}$ is a voltage if you like (as $V(x)$ is secretly $V(x) - V(\mathcal{O}) = V(x) - 0$ and therefore a difference in potential). But note that $\vec{F}$ can be anything. It doesn't have to be a conservative electric force. Again, anything that takes the form of $\text{(3)}$ is measured in volts and can be called a voltage. EMF takes the form $\text{(3)}$ [I explain this further down]. So too does potential difference. This is one reason why EMF and potential difference get mixed up: they take similar forms, and hence both are measured in volts and can be called voltage (or induced voltage, or whatever). And actually, this is great. If you are doing anything in the lab or talking to engineers, it rarely matters whether voltage means potential difference or EMF. It does matter conceptually, but all we really care about is energy. Voltage (equation $\text{(3)}$) is energy per unit charge. EMF and potential difference are likewise energies per unit charge. Energy is energy, whether it comes as EMF or as potential difference. Voltage is just a general term for it. If you are ever in a situation where you don't know whether to say "the EMF is 5 volts" or "the potential difference is 5 volts", just say "5 volts" or "the voltage is 5 volts" and you are safe.
What is EMF
In order to get current to flow around a circuit, you need some force pushing charges around the wire. Let's call this the driving force $\vec{F}$. EMF $\mathcal{E}$ is defined as
$$ \mathcal{E} = \oint \frac{\vec{F}}{q}\cdot d\vec{l} \tag{4}$$
where the integration is taken around the loop. There are two main forces that drive current around a circuit: a "source" force from say a battery and a conservative electric field which pushes charges around the wire. $\vec{F} = \vec{F}_s + q\vec{E}$. Therefore, $\text{(4)}$ can be written
$$\mathcal{E} = \int_a^b \frac{\vec{F}_s}{q}\cdot d\vec{l} $$
as $\vec{F}_s$ is usually confined to a section of the loop and $E$ is conservative so it integrates to 0 (started where we left off - once around the loop). $\vec{F}_s$ can be anything. It can be a chemical force, some temperature gradient thing, pressure on a crystal, a magnetic force, a nonconservative $E$ field, etc. So consider a battery. A conservative electric field goes from the positive terminal, around the loop, to the negative terminal, as well as from the positive terminal to the negative terminal inside the battery. Using the last equation, assuming the battery is ideal so that the chemical force is equal and opposite to the electric force,
$$ \mathcal{E} = \int_a^b \frac{\vec{F}_{\text{chemical}}}{q}\cdot d\vec{l} = -\int_a^b \vec{E}\cdot d\vec{l} = V$$
The EMF of the battery is equal to the potential difference across its terminals. But this does not mean that EMF is potential difference. It just happens to be so in this case. Most simple circuits turn out to be this way but realize again that EMF and potential difference are totally different. In the first place, you can't have a potential difference without an EMF generating that separation of charge in the battery. EMF generates a potential difference that happens to match the numerical value of the EMF (which you can think of as energy conservation). Then if you have a resistor connected to this battery, current $I = V/R$. I could also say $I = \mathcal{E}/R$ as they are numerically equivalent. But it's more appropriate in my opinion to use $I = V/R$ as the energy drop is coming as electric potential energy in a conservative $E$ field (which exists throughout the wire doing the pushing). Here we begin to see, as in the next section, that All circuits require an EMF to function.
Let's Look at your example
There is no such thing as an electric potential in your example. There's an $E$ field, but it's not conservative. Therefore, don't say potential. There is an EMF however. You can say there's a voltage or an induced voltage if you like (from the above discussion). But there's definitely not a potential difference/potential present. For this specific example, the EMF is given by
$$ \mathcal{E} = -\frac{d\phi}{dt} = \oint \vec{E} \cdot d\vec{l}$$
where the driving force is that nonconservative $E$ field. The current as you say is $I = \mathcal{E}/R$. Here again we see a true instance of the following: all circuits require an EMF. The idea that all points in a conductor are at the same potential is equivalent to saying that there is no $E$ field in a conductor. Note that this idea of there being no $E$ field in a conductor only holds for electrostatics + no time varying external magnetic fields (having a time varying $B$ prevents electrostatics anyways so saying "+ no time varying $B$" was redundant). Having an $E$ field in a conductor is completely fine. Turn on an $E$ field in a conductor. There is definitely an $E$ field in the conductor until electrostatics is reached. This is why conducting wires in simple circuits can have $E$ fields in them. This $E$ field is essential for driving current around even though it's through a conductor. It's just that the conductor can never reach electrostatics when it's part of a circuit. It desperately tries, but the battery prevents the wire from coming to a static situation. And with time-varying $B$ fields, nothing wrong with having an $E$ field in a conductor. When you stop varying $B$, you'll stop changing the $E$ field and things will reach statics. While you are changing the $B$ field, $E$ is changing with time. The conductor is trying to reach statics, but can never do so. So there will always be an $E$ present and hence the conductor won't be an equipotential (albeit, in simple circuits, you can take the wires to be equipotentials because $E$ is so so tiny).
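To make this concrete (the numbers below are invented, not from the question): for a ring of radius $r$ in a uniform field changing at rate $dB/dt$, the EMF is $\mathcal{E} = -\pi r^2 \, dB/dt$ and the current is $I = |\mathcal{E}|/R$.

```python
import math

r = 0.05        # ring radius in m (assumed)
R = 2.0         # ring resistance in ohm (assumed)
dB_dt = 0.30    # rate of change of the uniform field, T/s (assumed)

emf = -math.pi * r**2 * dB_dt   # EMF = -dPhi/dt with Phi = B * pi * r^2
current = abs(emf) / R          # I = |EMF| / R
print(f"EMF = {emf*1e3:.3f} mV, I = {current*1e3:.3f} mA")
```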
Too Much Theory, What to know about EMF
From equation $\text{(4)}$, because of the closed line integral, EMF does not care about conservative forces while electric potential crucially depends on a conservative E field. EMF and potential are both instances of equation $\text{(3)}$, and therefore both tell you about energy. Potential is energy in a conservative E field. EMF is energy added to your circuit through "nonconservative" driving forces. In order for circuits to work, you need to pump energy into them so that charges will flow back down to low energy, making a circuit. EMF tells you how much energy driving forces give to a unit charge in one trip around the loop. Conservative forces don't give any net energy to a charge after one complete loop (started where you stopped). "Nonconservative" forces will give you some nonzero value to equation $\text{(4)}$. EMF tells you how much energy was added by driving forces and hence how much energy must be dropped by dissipative/"friction" forces in one trip around the loop. An EMF of 5 volts means 5 volts must be dropped by every unit of charge. If you have a battery providing 2 volts of EMF and a changing magnetic field providing 6 volts of EMF, 8 volts must be dropped
[by the way, if you ever see an example circuit out there with both a battery and a changing flux enclosed by the loop, more than likely their derived equations have wrong explanations. Right equation. Wrong explanation (which is basically just as bad as not knowing what you are doing). What they do is say $-d\phi/dt = \oint \vec{E}\cdot d\vec{l}$. This is Faraday's Law, true in any case. But what does that $\vec{E}$ actually mean? It's the net $E$ field on your loop. In the case of a battery and a changing flux, that $E$ has both a conservative and a nonconservative component. The conservative component integrates to zero, leaving only the nonconservative $E$ providing the $-d\phi/dt$. Therefore, if you have this simple battery + changing flux + resistor circuit, $-d\phi/dt = I_{\phi}R$ where $I_{\phi}R$ is the integral of nonconservative $E$ around the loop. Now we can add a constant to each side of the equation. Since the battery EMF satisfies $V_0 = I_0R$, we can say $V_0 - d\phi/dt = (I_{\phi} + I_0)R = IR$. Or if you want, you can write out $-d\phi/dt = I_{\phi}R + \oint \vec{E}_{\text{conserv}} \cdot d\vec{l} = I_{\phi}R - V_0 + I_0R$].
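As a quick numeric check of the superposition in that bracketed note (the loop resistance R = 4 Ω is an assumed value, not from the post):

```python
V0 = 2.0        # battery EMF, volts (from the example above)
emf_flux = 6.0  # EMF from the changing flux, -dphi/dt, volts
R = 4.0         # loop resistance, ohms (assumed for illustration)

I0 = V0 / R            # current the battery alone would drive
I_phi = emf_flux / R   # current the changing flux alone would drive
I = I0 + I_phi         # superpose: the governing equations are linear

# The total EMF of 8 V is exactly what the resistor drops:
assert abs((V0 + emf_flux) - I * R) < 1e-12
print(I)  # 2.0
```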
"domain": "physics.stackexchange",
"id": 60745,
"tags": "electromagnetism, classical-electrodynamics"
} |
Creating a thread for file transfer | Question: I am creating an application in Java that runs at scheduled intervals and transfers files from one server to another.
For SFTP I'm using JSch, and my server and file details come from a database.
My code is working fine, but its performance is not good because I'm using too many loops.
Is there any way to increase performance of my code?
public class FileTransferThread implements Runnable {
public static final Logger log = Logger.getLogger(FileTransferThread.class.getName());
private Session hibernateSession_source;
private Session hibernateSession_destination;
private List<nr_rec_backup_rule> ruleObjList = new ArrayList<>();
private Map<String, List<String>> filesMap = new HashMap<>();
private int i = 1;
@Override
public void run() {
try {
hibernateSession_destination = HibernateUtilReports.INSTANCE.getSession();
// Getting Active rules from (nr_rec_backup_rule)
Criteria ruleCriteria = hibernateSession_destination.createCriteria(nr_rec_backup_rule.class);
ruleCriteria.add(Restrictions.eq("status", "active"));
List list = ruleCriteria.list();
for (Object object : list) {
nr_rec_backup_rule ruleObj = (nr_rec_backup_rule) object;
ruleObjList.add(ruleObj);
}
System.out.println("List of Rule Objs : " + ruleObjList);
getTargetServerAuthentication();
} catch (Exception e) {
log.error("SQL ERROR ======== ", e);
} finally {
hibernateSession_destination.flush();
hibernateSession_destination.close();
hibernateSession_source.flush();
hibernateSession_source.close();
}
}
private void getTargetServerAuthentication() throws Exception {
if (ruleObjList.size() > 0) {
JSch jsch = new JSch();
hibernateSession_source = HibernateUtilSpice.INSTANCE.getSession();
for (nr_rec_backup_rule ruleObj : ruleObjList) {
//getting authentication details for backupserver from table "contaque_servers"
String backupHost = ruleObj.getBackupserver();
Criteria crit = hibernateSession_source.createCriteria(contaque_servers.class);
crit.add(Restrictions.eq("server_ip", backupHost));
ProjectionList pList = Projections.projectionList();
pList.add(Projections.property("machineUser"));
pList.add(Projections.property("machinePassword"));
pList.add(Projections.property("machinePort"));
crit.setProjection(pList);
Object uniqueResult = crit.uniqueResult();
if (uniqueResult != null) {
Object[] serverDetails = (Object[]) uniqueResult;
String backupUser = (String) serverDetails[0];
String backupPassword = (String) serverDetails[1];
int backupPort = (int) serverDetails[2];
//creating connection to backup server
com.jcraft.jsch.Session sessionTarget = null;
ChannelSftp channelTarget = null;
try {
sessionTarget = jsch.getSession(backupUser, backupHost, backupPort);
sessionTarget.setPassword(backupPassword);
sessionTarget.setConfig("StrictHostKeyChecking", "no");
sessionTarget.connect();
channelTarget = (ChannelSftp) sessionTarget.openChannel("sftp");
channelTarget.connect();
System.out.println("Target Channel Connected");
//Getting fileName from table "contaque_recording_log" using campName and Dispositions
String[] split = ruleObj.getDispositions().split(", ");
Criteria criteria = hibernateSession_source.createCriteria(contaque_recording_log.class);
criteria.add(Restrictions.eq("campName", ruleObj.getCampname()));
criteria.add(Restrictions.in("disposition", Arrays.asList(split)));
criteria.setProjection(Projections.property("fileName"));
List list = criteria.list();
for (Iterator it = list.iterator(); it.hasNext();) {
String completeFileAddress = (String) (it.next());
if (completeFileAddress != null) {
int index = completeFileAddress.indexOf("/");
String serverIP = completeFileAddress.substring(0, index);
String filePath = completeFileAddress.substring(index, completeFileAddress.length()) + ".WAV";
if (filesMap.containsKey(serverIP)) {
List<String> sourceList = filesMap.get(serverIP);
sourceList.add(filePath);
} else {
List<String> sourceList = new ArrayList<String>();
sourceList.add(filePath);
filesMap.put(serverIP, sourceList);
}
}
}
//getting authentication details for source-server from table "contaque_servers"
if (filesMap.size() > 0) {
for (Map.Entry<String, List<String>> entry : filesMap.entrySet()) {
String sourceHost = entry.getKey();
List<String> fileList = entry.getValue();
Criteria srcCriteria = hibernateSession_source.createCriteria(contaque_servers.class);
srcCriteria.add(Restrictions.eq("server_ip", sourceHost));
ProjectionList pList1 = Projections.projectionList();
pList1.add(Projections.property("machineUser"));
pList1.add(Projections.property("machinePassword"));
pList1.add(Projections.property("machinePort"));
srcCriteria.setProjection(pList1);
Object uniqueResult1 = srcCriteria.uniqueResult();
if (uniqueResult1 != null) {
Object[] srcServer = (Object[]) uniqueResult1;
String srcUser = (String) srcServer[0];
String srcPassword = (String) srcServer[1];
int srcPort = (int) srcServer[2];
//creating connection to source server
com.jcraft.jsch.Session sessionSRC = jsch.getSession(srcUser, sourceHost, srcPort);
sessionSRC.setPassword(srcPassword);
sessionSRC.setConfig("StrictHostKeyChecking", "no");
sessionSRC.connect();
ChannelSftp channelSRC = (ChannelSftp) sessionSRC.openChannel("sftp");
channelSRC.connect();
System.out.println("Source Channel Connected");
try {
fileTransfer(channelSRC, channelTarget, ruleObj, fileList);
} finally {
channelSRC.exit();
channelSRC.disconnect();
sessionSRC.disconnect();
}
} else {
log.error("IN ELSE ======== Source server doesn't exist in table 'contaque_servers'");
}
}
}
} catch (JSchException e) {
log.error("Error Occurred ======== Connection not established", e);
} finally {
if (channelTarget != null && sessionTarget != null) {
log.error("exiting channel and session");
channelTarget.exit();
channelTarget.disconnect();
sessionTarget.disconnect();
} else {
log.error("Error Occurred ======== Connection not established");
}
}
}
}
}
}
private void fileTransfer(ChannelSftp channelSRC, ChannelSftp channelTarget, nr_rec_backup_rule ruleObj, List<String> fileList) {
for (String filePath : fileList) {
System.out.println("i === " + i++);
int fileNameStartIndex = filePath.lastIndexOf("/") + 1;
String fileName = filePath.substring(fileNameStartIndex);
System.out.println("File Name : " + fileName);
System.out.println("File Path: " + filePath);
System.out.println("Backup Path : " + ruleObj.getBackupdir() + fileName);
try {
InputStream get = channelSRC.get(filePath);
channelTarget.put(get, ruleObj.getBackupdir() + fileName);
} catch (SftpException e) {
log.error("Error Occurred ======== File or directory doesn't exist === " + filePath);
}
}
}
}
Answer: At face value it appears that there can be only one place where the major bottleneck is: the actual file transfer. Your code does the following:
builds up a bunch of source files to copy
creates a 'target' destination for the file copy
goes through each source
for each source, it 'downloads' the files one at a time
as it downloads each file, it uploads it to the target.
While this whole task may be running in a separate thread, it is by no means multi-threaded.
The probable bottleneck here is the amount of CPU time required to decrypt the data from the source, and re-encrypt it to the destination.
It is likely, also, that close behind the CPU bottleneck (perhaps even in front of it) is the network transfer speeds you can get in a single socket connection.
I would suggest four things to do, and possibly a combination of them:
try to set up a system where you can sftp directly from the source to the destination without needing to process the file in between. You have ssh access to them both, so it should not be that hard to create a script on the source, and run that script with some parameters so that it copies the file to the destination.
Use the Blowfish encryption algorithm for the transfer. It is rumoured that Blowfish is faster than the other algorithms, and, by the sounds of it, it should be fine for your use case.
Wrap the InputStream you get from jsch in a BufferedInputStream
spread the load of the decrypt/encrypt on multiple threads.
The most effective option will be 1, but the most fun to write will be 4....
Something like:
create a method that takes the details required to copy a single file....
public Boolean copyFile(Session source, Session target, String sourcefile, String targetfile) throws IOException {
// connect to the source
.....
// connect to the target
.....
// get a BufferedInputStream on the source
.....
// copy the stream to the target
.....
return Boolean.TRUE; // success.
}
instead of populating a filesMap Map, do something like:
ExecutorService threadpool = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
List<Future<Boolean>> transfers = new ArrayList<>();
....
final Session source = ......;
final Session target = ......;
final String sourcefile = ....;
final String targetfile = ....;
transfers.add(threadpool.submit(new Callable<Boolean>() {
public Boolean call() throws IOException {
return copyFile(source, target, sourcefile, targetfile);
}
}));
....
// all copy actions are submitted now... so we wait for the threadpool.
threadpool.shutdown(); // orderly shutdown: stop accepting new tasks; already-submitted tasks still run.
for (Future<Boolean> fut : transfers) {
try {
fut.get();
} catch (Exception ioe) {
LOGGER.warn("Unable to transfer file: " + ioe.getMessage(), ioe);
}
}
// all copies have been attempted, in parallel. | {
"domain": "codereview.stackexchange",
"id": 6000,
"tags": "java, optimization, performance, multithreading"
} |
Construct a monochromatic graph flipping nodes of the edges | Question: Given a letter (A or B) representing the color of each of the N nodes of a graph, and a set of M edges, is it possible to make the whole graph have the letter A by flipping the nodes connected by an edge?
Example:
Given the following letter graph:
A - B
It's impossible to make all the letters A by flipping the nodes connected by some edge.
With this graph it is possible (just flip the second edge):
A - B - B
To solve this problem I've tried brute force. For every edge the program makes two paths (flip the ith edge or don't flip). I've tried to memoize some states that were visited in the search to make it faster.
Description of the input:
There will be several test cases. Each test case begins with two integers: N (1 ≤ N ≤ 1000) and M (1 ≤ M ≤ 4000). The next line contains N letters, indicating the starting letter of each node. Then follows M lines, each with two integers, a and b (1 ≤ a,b ≤ N and a != b), describing that there is an edge from a to b. Input ends with EOF.
Description of the output:
For each test case you should output Y if it is possible to meet the requirements or N otherwise.
Above example input:
2 1
A B
1 2
3 2
A B B
1 2
2 3
Above example output:
N
Y
#include <cstdio>
#include <vector>
#include <utility>
#include <unordered_map>
#include <algorithm>
#define A 0
#define B 1
using edge = std::pair<int, int>;
std::unordered_map<unsigned int , int > visited_state;
std::vector<edge> edge_list;
std::vector<char> node_letters;
bool circuit;
bool isAllA(){
for (int i = 0; i < node_letters.size(); ++i)
if(node_letters[i] == B)
return false;
return true;
}
//flip the color of the nodes in the eth edge
void flipColors( int e ){
int u = edge_list[e].first;
int w = edge_list[e].second;
node_letters[u] = !node_letters[u];
node_letters[w] = !node_letters[w];
}
unsigned int getHashState(){
unsigned int hash1 = 63689 , hash2 = 378551;
unsigned int hash_state = 0;
for (int i = 0; i < node_letters.size(); ++i){
hash_state = hash_state * hash1 + node_letters[i];
hash1 *= hash2;
}
return hash_state;
}
void brute_search( int i ){
//if its possible ends the recursion
if ( circuit )
return;
unsigned int hash = getHashState();
if ( visited_state[ hash ] ) {
//this state was visited on higher depth (dont need to search again)
if ( visited_state[ hash ] >= i + 1){
//puts("test repeated problems");
return;
}
}
visited_state[ hash ] = i + 1;
if(i == edge_list.size() ){
circuit = isAllA();
return;
}
//search without change the nodes on the ith edge
brute_search(i + 1);
//test with the changes
flipColors(i);
brute_search(i + 1);
flipColors(i);
}
int main ( void ){
int n,m;
while( scanf("%d %d ",&n,&m) != EOF ) {
//init and reset global variables
circuit = false;
node_letters.clear();
edge_list.clear();
visited_state.clear();
//read the letter of each node
for ( int i = 0 ; i < n ; ++i ) {
node_letters.push_back(getchar() - 'A');
getchar();//remove space
//printf("%d\n",c[i]);
}
//read the edges
for (int i = 0; i < m; ++i) {
int u,w;
scanf("%d %d",&u,&w);
edge_list.push_back(edge(u - 1,w - 1));
//printf("%d %d\n",v[i].first,v[i].second);
}
brute_search(0);
putchar(circuit ? 'Y' : 'N');
putchar('\n');
}
}
Answer: This is not a review, but a comment too long to be a comment.
Brute force is almost always wrong. In tasks like this it is always wrong.
Let the graph have a nodes colored A, and b nodes colored B.
First notice that flipping an edge doesn't change the parity of either a or b. From this, it follows immediately that if both a and b are odd, the graph cannot be made monochromatic.
It is a bit harder to prove the opposite, that is, if at least one of a and b is even, a connected graph can be made monochromatic. Prove that given two vertices of a given color there is a sequence of A-B flips so that two vertices of a given color become adjacent. Then use induction.
Summing up, the solution is to identify the connected components of the graph, and for each of them count the color's parities. | {
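Here is a linear-time sketch of that approach in Python (my own illustration, not the OP's C++). The per-component test used, an even number of B nodes in each component, is the parity condition specialised to the "make everything A" goal: flipping an edge changes a component's B-count by -2, 0, or +2, so its parity is invariant.

```python
from collections import deque

def can_make_all_a(n, letters, edges):
    """Return True iff every connected component has an even number of B nodes."""
    adj = [[] for _ in range(n)]
    for a, b in edges:          # edges use 0-based node ids here
        adj[a].append(b)
        adj[b].append(a)
    seen = [False] * n
    for start in range(n):
        if seen[start]:
            continue
        # BFS over one connected component, counting its B-coloured nodes.
        b_count = 0
        queue = deque([start])
        seen[start] = True
        while queue:
            u = queue.popleft()
            b_count += letters[u] == 'B'
            for v in adj[u]:
                if not seen[v]:
                    seen[v] = True
                    queue.append(v)
        # Parity of the B-count is invariant under flips: all-A needs it even.
        if b_count % 2:
            return False
    return True

# The two sample cases from the question:
print("Y" if can_make_all_a(2, "AB", [(0, 1)]) else "N")           # N
print("Y" if can_make_all_a(3, "ABB", [(0, 1), (1, 2)]) else "N")  # Y
```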
"domain": "codereview.stackexchange",
"id": 16814,
"tags": "c++, performance"
} |
Finding the multiple of 3 and 5 | Question: Write a program that prints the numbers from 1 to 100.
But for multiples of three print “Fizz” instead of the number
and for the multiples of five print “Buzz”. For numbers which are
multiples of both three and five print “FizzBuzz”.
Is this the best way of doing this?
def fizz_buzz(num):
if num%3==0 and num%5==0:
return 'FizzBuzz'
elif num % 3 == 0:
return 'Fizz'
elif num % 5==0:
return 'Buzz'
else:
return num
for n in range(1, 101):
print(fizz_buzz(n))
Answer: General feedback
I will try to go over some points that can be useful to note. There are several things I like about your code. Firstly, it is very readable. Secondly, I like that you split your logic: finding the string and printing it are separate concerns. This is good. With this being said, there are always things which could and should be improved.
Semantics
You should use the if __name__ == "__main__": guard in your code.
def fizz_buzz(num):
if num%3==0 and num%5==0:
return 'FizzBuzz'
elif num % 3 == 0:
return 'Fizz'
elif num % 5==0:
return 'Buzz'
else:
return num
if __name__ == "__main__":
for n in range(1, 101):
print(fizz_buzz(n))
Which makes your code reusable for later. Eg you can call functions from your file in other programs.
Your else clause at the end of the code is useless. You could have written
elif num % 5==0:
return 'Buzz'
return num
Alternatives
One problem with your code is that you have multiple exit points. Now this is not something to sweat too hard over, and it is not a goal to always try to have a single exit. However, it can be easier to debug code with fewer exit points. This is of course much more relevant in longer and more complex code, though it is a good thing to always have in mind. One way to do this is to define a new variable string
def fizz_buzz(num):
string = ''
if num%3==0 and num%5==0:
string = 'FizzBuzz'
elif num % 3 == 0:
string = 'Fizz'
elif num % 5==0:
string = 'Buzz'
if string:
return string
return num
The code now only has two exit points; however, it can still be improved. One key point is that if a number is divisible by 3 and 5, it is divisible by 15. So we can gradually build the string, as shown below
def fizz_buzz(num):
string = ''
if num % 3 == 0:
string += 'Fizz'
if num % 5==0:
string += 'Buzz'
if string:
return string
return num
As a last point, the return statement could be written using a ternary conditional operator
return string if string else num
Which combines the two exit points into a single one. To summarize
def fizz_buzz(num):
string = ''
if num % 3==0: string +='Fizz'
if num % 5==0: string +='Buzz'
return string if string else num
if __name__ == "__main__":
for n in range(1, 101):
print(fizz_buzz(n))
Closing comments
Python has a style guide, PEP 8, which explains in excruciating detail how to structure your code. I wholeheartedly recommend skimming through it and following it.
The problem FizzBuzz is very, very simple. It can be solved in a number of ways using just a simple line. Syb0rg, showed one way to write this code
for i in range(1,101): print("Fizz"*(i%3==0) + "Buzz"*(i%5==0) or i)
You can even shorten this into
i=0;exec"print i%3/2*'Fizz'+i%5/4*'Buzz'or-~i;i+=1;"*100
Using some cryptic pythonic voodoo (note that this one-liner uses Python 2 syntax and will not run under Python 3). However, as I said in the introduction, I like your code because it is easy to understand. Almost always it is better to have clear, readable code than cryptic code which is a few lines shorter. This of course disregards any speed improvements and such
"domain": "codereview.stackexchange",
"id": 20482,
"tags": "python, fizzbuzz"
} |
Do supernovae push neighboring stars outward? | Question: I know that a supernova can mess up the heliosphere of nearby stars, but I'm wondering if it could physically push neighboring stars off their trajectories.
It's fun to imagine all the stars surrounding a supernova being propelled outward and tumbling out of the galactic arm!
I would expect that a really close star, such as a partner in a binary pair, would get really messed up. I'm thinking more about the neighbors a few light-years away.
I realize that a supernova involves both the initial EM burst and the mass ejection which arrives later. I'm open to the effects of any of these things.
Answer: Consider a star of mass $M$ and radius $R$ at a distance $r$ from the supernova. For a back-of-the-envelope estimate, consider how much momentum would be transferred to the star by the supernova. From that, we can estimate the star's change in velocity and decide whether or not it would be significant.
First, for extra fun, here's a review of how a typical core-collapse supernova works [1]:
Nuclear matter is highly incompressible. Hence once the central part of the core reaches nuclear density there is powerful resistance to further compression. That resistance is the primary source of the shock waves that turn a stellar collapse into a spectacular explosion. ... When the center of the core reaches nuclear density, it is brought to rest with a jolt. This gives rise to sound waves that propagate back through the medium of the core, rather like the vibrations in the handle of a hammer when it strikes an anvil. .. The compressibility of nuclear matter is low but not zero, and so momentum carries the collapse beyond the point of equilibrium, compressing the central core to a density even higher than that of an atomic nucleus. ... Most computer simulations suggest the highest density attained is some 50 percent greater than the equilibrium density of a nucleus. ...the sphere of nuclear matter bounces back, like a rubber ball that has been compressed.
That "bounce" is allegedly what creates the explosion. According to [2],
Core collapse liberates $\sim 3\times 10^{53}$ erg ... of gravitational binding energy of the neutron star, 99% of which is radiated in neutrinos over tens of seconds. The supernova mechanism must revive the stalled shock and convert $\sim 1$% of the available energy into the energy of the explosion, which must happen within less than $\sim 0.5$-$1$ s of core bounce in order to produce a typical core-collapse supernova explosion...
According to [3], one "erg" is $10^{-7}$ Joules. To give the idea the best possible chance of working, suppose that all of the $E=10^{53}\text{ ergs }= 10^{46}\text{ Joules}$ of energy goes into the kinetic energy of the expanding shell. The momentum $p$ is maximized by assuming that the expanding shell is massless (because $p=\sqrt{(E/c)^2-(mc)^2}$), and while we're at it let's suppose that the collision of the shell with the star is perfectly elastic in order to maximize the effect on the motion of the star. Now suppose that the radius of the star is $R=7\times 10^8$ meters (like the sun) and has mass $M=2\times 10^{30}$ kg (like the sun), and suppose that its distance from the supernova is $r=3\times 10^{16}$ meters (about 3 light-years). If the total energy in the outgoing supernova shell is $E$, then fraction intercepted by the star is the area of the star's disk ($\pi R^2$) divided by the area of the outgoing spherical shell ($4\pi r^2$). So the intercepted energy $E'$ is
$$
E'=\frac{\pi R^2}{4\pi r^2}E\approx 10^{-16}E.
$$
Using $E=10^{46}$ Joules gives
$$
E'\approx 10^{30}\text{ Joules}.
$$
That's a lot of energy, but is it enough? Using $c\approx 3\times 10^8$ m/s for the speed of light, the corresponding momentum is $p=E'/c\approx 3\times 10^{21}$ kg$\cdot$m/s. Optimistically assuming an elastic collision that completely reverses the direction of that part of the shell's momentum (optimistically ignoring conservation of energy), the change in the star's momentum will be twice that much. Since the star has a mass of $M=2\times 10^{30}$ kg, its change in velocity (using a non-relativistic approximation, which is plenty good enough in this case) is $2p/M\approx 3\times 10^{-9}$ meters per second, which is about $10$ centimeters per year. That's probably not enough to eject the star from the galaxy. Sorry.
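For anyone who wants to redo the arithmetic, here is the same estimate without the intermediate rounding; it lands at roughly $4.5\times 10^{-9}$ m/s, the same order of magnitude as the rounded figures above:

```python
import math

# Same inputs as the estimate above (SI units).
E = 1e46      # energy assumed to go entirely into the shell, J
R = 7e8       # stellar radius, m (sun-like)
M = 2e30      # stellar mass, kg (sun-like)
r = 3e16      # distance to the supernova, m (~3 light-years)
c = 3e8       # speed of light, m/s

frac = (math.pi * R**2) / (4 * math.pi * r**2)  # fraction of the shell intercepted
E_int = frac * E                                # intercepted energy, J
p = E_int / c                                   # momentum of a massless shell, kg*m/s
dv = 2 * p / M                                  # optimistic elastic bounce, m/s
print(f"fraction ~ {frac:.1e}, E' ~ {E_int:.1e} J, delta-v ~ {dv:.1e} m/s")
```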
References:
[1] Page 43 in Bethe and Brown (1985), "How a Supernova Explodes," Scientific American 252: 40-48, http://www.cenbg.in2p3.fr/heberge/EcoleJoliotCurie/coursannee/transparents/SN%20-%20Bethe%20e%20Brown.pdf
[2] Ott $et al$ (2011), "New Aspects and Boundary Conditions of Core-Collapse Supernova Theory," http://arxiv.org/abs/1111.6282
[3] Table 9 on page 128 in The International System of Units (SI), 8th edition, International Bureau of Weights and Measures (BIPM), http://www.bipm.org/utils/common/pdf/si_brochure_8_en.pdf | {
"domain": "physics.stackexchange",
"id": 55255,
"tags": "astrophysics, stars, supernova"
} |
If we know information existed when life first began on earth, then can’t we surmise that information existed prior to earth life? | Question: If true, then wouldn’t information have been created when the universe was created?
In other words, if information existed from the start of the universe, then it’s possible that information can not be created or destroyed, it’s just converted from disordered information to ordered information, using energy(?).
Answer: It's difficult to tell what you mean by information here. In physics, information is often used to refer to the number of possible states of a system, similar to Shannon entropy.
Under most definitions of information, it has always been a fundamental part of the universe. In a more philosophical sense, some physicists would say that the universe is made of, or equivalent to, information. | {
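To make the Shannon-entropy notion of information concrete, here is a small illustration (my own addition; the formula $H=-\sum_i p_i\log_2 p_i$ is the standard definition):

```python
import math

def shannon_entropy(probs):
    """Entropy in bits of a discrete distribution: H = -sum p * log2(p)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# One fair coin carries 1 bit; four equally likely states carry 2 bits.
print(shannon_entropy([0.5, 0.5]))   # 1.0
print(shannon_entropy([0.25] * 4))   # 2.0
```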
"domain": "physics.stackexchange",
"id": 88150,
"tags": "conservation-laws, information, biology"
} |
Maximum Useful Work from a Process | Question: Negative of the $\Delta G$ for a process is the maximum useful work that can be obtained from it (at constant pressure and temperature). I understood this in this way: $\Delta H$ is the heat absorbed by the system (since the process is at constant temperature and pressure), so equivalently $-\Delta H$ energy is obtained from the system after doing expansion work. Since $\Delta S$ is the entropy created in the process, at the very least $-\Delta S$ entropy must be created in the surroundings - that is, at least $-T\Delta S$ energy must be lost as heat. This comes from the $-\Delta H$, and thus leaves $-\Delta H + T\Delta S$ to do useful work. No more work can be done than this. $-\Delta H + T\Delta S$ is $-\Delta G$, so $-\Delta G$ is the maximum possible useful work. First of all, I wanted to know if this is correct, and if this is actually why $-\Delta G$ is the maximum possible useful work.
Now if $|\Delta H|$ is more than $|T\Delta S|$, with both being negative, then one can think of the $W_{max}$ or $-\Delta G$ as a part of the $|\Delta H|$; since $|\Delta G|< |\Delta H|$. Some part of $|\Delta H|$ goes as $|T\Delta S|$ to increase the surroundings' entropy, and the other part in doing useful work. But if both are positive, with $|T\Delta S|$ being more than $|\Delta H|$, again $\Delta G$ is negative, allowing useful work to be extracted. But now it seems as though $|T\Delta S|$ heat will be extracted from the surroundings, $|\Delta H|$ used up in the process, while the rest can be converted to work - in other words, useful work is a part of $|T\Delta S|$ - with the other part used by the process as $|\Delta H|$. Is this correct? Is $|T\Delta S|$ extracted from the surroundings actually?
Answer: "Useful work" does not include expansion work. For a proof of why $w_{\text{by,add}} \leq -\Delta G$, you may want to first refer to my answer to this question: Why does the Gibbs free energy only correspond to non-expansion work? then come back here.
In that proof, I used equalities by assuming that the process is reversible. If we drop this assumption, then by the Second Law,
$$\begin{align}
\mathrm{d}S &\geq \frac{\mathrm{d}q}{T} \\
\mathrm{d}q &\leq T\mathrm{d}S \\
\mathrm{d}U &\leq T\mathrm{d}S - p\mathrm{d}V + \mathrm{d}w_{\text{add}} \\
\mathrm{d}G &\leq \mathrm{d}w_{\text{add}} \\
\Delta G &\leq w_{\text{add}}
\end{align}
$$
Equality holds if the process is reversible, or more generally, if both the initial and final states are equilibrium states. Now this seems like an obvious contradiction of the equation in the very first paragraph. The key lies in the definition of the work - whether it is done on the system, or by the system. In my proof, I have been referring to the work done on the system; however, here we are interested in the work done by the system, since that is the work that can be "extracted". The work done by the system must be equal to the negative of the work done on the system: $w_{\text{by,add}} = -w_{\text{add}}$. This leads to the final result $w_{\text{by,add}} \leq -\Delta G$.
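As a quick numeric illustration of the maximum useful work $-\Delta G$ (the values below are assumed for the example, not taken from the post):

```python
# A process at constant T and p with assumed example values:
T = 298.0        # K
dH = -100e3      # J   (enthalpy change, assumed)
dS = 50.0        # J/K (entropy change, assumed)

dG = dH - T * dS            # Gibbs energy change at constant T and p
w_by_add_max = -dG          # maximum non-expansion work the system can deliver
print(f"dG = {dG:.0f} J, max useful work = {w_by_add_max:.0f} J")
```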
Levine's Physical Chemistry 6th ed. writes:
In many cases (for example, a battery, a living organism), the P-V expansion work is not useful work, but $w_{\text{by,add}}$ is the useful work output. The quantity $-\Delta G$ equals the maximum possible nonexpansion work output $w_{\text{by,add}}$ done by a system in a constant-$T$-and-$p$ process. Hence the term "free energy". (Of course, for a system with P-V work only, $\mathrm{d}w_{\text{by,add}} = 0$ and $\mathrm{d}G = 0$ for a reversible, isothermal, isobaric process.) Examples of nonexpansion work in biological systems are the work of contracting muscles and of transmitting nerve impulses.
If you want a direct answer as to why your explanation isn't right, it's because you can't equate $\Delta H$ with the work done. Work done is given by $w = -\int\! p\,\mathrm{d}V$. You can only equate $\Delta H$ with the heat transferred at constant $p$. Apart from the First Law ($\Delta U = q + w$), heat and work are generally unrelated. | {
"domain": "chemistry.stackexchange",
"id": 3823,
"tags": "thermodynamics"
} |
HttpClient retry handler on response 429 | Question: When the remote server returns a 429 (Too Many Requests) response with the Retry-After header, the HttpClient can handle such cases with a handler:
public class RetryHandler : DelegatingHandler
{
protected override async Task<HttpResponseMessage> SendAsync(
HttpRequestMessage request,
CancellationToken cancellationToken)
{
var response = await base.SendAsync(request, cancellationToken);
if (response.StatusCode == HttpStatusCode.TooManyRequests
&& response.Headers.RetryAfter is not null)
{
var delta = TimeSpan.Zero;
if (response.Headers.RetryAfter.Date.HasValue)
{
delta = response.Headers.RetryAfter.Date.Value - DateTimeOffset.Now;
}
if (delta <= TimeSpan.Zero && response.Headers.RetryAfter.Delta.HasValue)
{
delta = response.Headers.RetryAfter.Delta.Value;
}
if (delta > TimeSpan.Zero)
{
await Task.Delay(delta, cancellationToken);
response = await base.SendAsync(request, cancellationToken);
}
}
return response;
}
}
And usage example:
HttpClient client = HttpClientFactory.Create(new RetryHandler(), new Handler2(), new Handler3());
Do you see any improvements / problems?
Answer: Let me suggest here an alternative solution, namely Polly's retry policy. This library is the Microsoft-suggested way to decorate any HttpClient (normal, named, typed, or named and typed clients).
In order to use it, you need the following libraries:
Polly: this helps you to define policies
Microsoft.Extensions.Http.Polly: this helps you to integrate policies into the HttpClient's pipeline
Please be aware that there is a Polly.Extensions.Http package as well but that is deprecated
The policy
In order to define a retry policy against HttpClient you need to do the followings:
Define a policy which returns with an HttpResponseMessage
Define that policy as asynchronous
So, the defined policy must be an IAsyncPolicy<HttpResponseMessage>:
IAsyncPolicy<HttpResponseMessage> retryAfterPolicy = Policy
.HandleResult<HttpResponseMessage>(r => r.StatusCode == HttpStatusCode.TooManyRequests && r.Headers.RetryAfter != null)
.WaitAndRetryAsync(
1,
(_, result, _) => result.Result.Headers.RetryAfter.Delta.Value,
(_, __, ___, ____) => Task.CompletedTask);
OR
Func<HttpResponseMessage, bool> shouldRetry = r => r.StatusCode == HttpStatusCode.TooManyRequests && r.Headers.RetryAfter != null;
var retryAfterPolicy = Policy
.HandleResult(shouldRetry)
.WaitAndRetryAsync(
1,
(_, result, _) => result.Result.Headers.RetryAfter.Delta.Value,
(_, __, ___, ____) => Task.CompletedTask);
The parameters of the WaitAndRetryAsync
retryCount: How many retries should be issued if the condition is met (the predicate provided to the HandleResult)
sleepDurationProvider: How much time should be spent with sleeping/waiting between retry attempts
onRetryAsync: This delegate is designed mainly for logging purposes. It is called before the policy goes to sleep between two retry attempts
The integration
The Microsoft.Extensions.Http.Polly defines several extension methods:
AddPolicyHandler: It allows you to decorate an HttpClient with an IAsyncPolicy<HttpResponseMessage> policy
AddPolicyHandlerFromRegistry: Polly allows us to add our policies to a registry. This method allows you to decorate an HttpClient with an IAsyncPolicy<HttpResponseMessage> policy from a pre-populated registry
AddTransientHttpErrorPolicy: It allows you to decorate an HttpClient with an IAsyncPolicy<HttpResponseMessage> policy. It is pre-configured to trigger for HttpRequestException and for status code: 408 or 5xx
Under the hood all of them register a PolicyHttpMessageHandler into the HttpClient pipeline which is indeed a DelegatingHandler.
Let me show you how to use it for a named client
builder.Services
.AddHttpClient("SampleApi", client =>
{
client.BaseAddress = new Uri("http://.../");
})
.AddPolicyHandler(retryAfterPolicy);
Usage
public class SampleController : ControllerBase
{
private readonly IHttpClientFactory _httpClientFactory;
public SampleController(IHttpClientFactory httpClientFactory)
{
_httpClientFactory = httpClientFactory;
}
[HttpGet("{id}")]
public async Task<IActionResult> Get(int id)
{
var httpClient = _httpClientFactory.CreateClient("SampleApi");
//...
}
}
Next steps
If you have received a 429 that means the downstream system is overloaded / under pressure / flooded with requests. So, it might make sense to let it perform self-healing and then retry any pending requests.
With the above setup each and every concurrent request is sent to the downstream system to receive the same "I'm busy" response. It would be nice if we could avoid these unnecessary roundtrips to get the same message.
The good news is that we can do this by using a Circuit Breaker. This can be used to short-circuit any outgoing requests while the downstream is trying to heal itself. The CB has a shared state which can be accessed by the concurrent requests. So, rather than flooding the downstream with new requests we can prevent that on the client-side by short-circuiting them.
We can combine the retry policy with circuit breaker to define a protocol which will respect the RetryAfter header and applies that to all outgoing requests.
I would like to mention one other policy which might be useful here and that is the timeout policy. It allows you to define an upper limit on time to get a valuable response.
Either you can define that time constraint on a per retry attempt bases
retry > local_timeout
Or you can define that time constraint as an overarching constraint which covers all retry attempts
global_timeout > retry
Or you can combine both
global_timeout (60 seconds) > retry (three times) > local_timeout (2 seconds)
And of course the Circuit Breaker could be added to this policy chain as well. I have posted a lot of answers on SO about this topic, just to name a few: 1, 2, 3, 4, etc. | {
"domain": "codereview.stackexchange",
"id": 44143,
"tags": "c#, http, .net-core, httpclient"
} |
How do Maxwell's equations uniquely determine ${\bf E}$ and ${\bf B}$ despite no. of equations exceeding no. of unknowns? | Question: Maxwell's equations in free space are given by $${\bf\nabla}\cdot\textbf{E}=0,~~{\bf\nabla}\cdot\textbf{B}=0$$
and
$${\bf\nabla}\times\textbf{E}=-\frac{\partial\textbf{B}}{\partial t},~~{\bf\nabla}\times\textbf{B}=c^{-2}\frac{\partial\textbf{E}}{\partial t}.$$
The first two equations are two scalar equations whereas the second two equations are vector equations each of which gives three independent equations (componentwise)! Therefore, there are $2+6=8$ equations while only $6$ unknowns: $(E_x,E_y,E_z)$ and $(B_x,B_y,B_z)$.
Question When we have a larger number of unknowns than the number of equations, we don't, in general, expect to obtain a unique solution. However, given the appropriate boundary conditions, Maxwell's equations work triumphantly and give unique solutions to electric and magnetic fields, I must be overlooking something. What is the resolution to this apparent paradox?
Answer: Provided that the first two equations hold true at the initial condition, they are redundant for the time evolution, because
$$\nabla \cdot \frac{\partial \mathbf{E}}{\partial t} = \frac{1}{c^2} \nabla \cdot \nabla \times \mathbf{B} = 0$$
and hence $\nabla \cdot \mathbf{E}$ is constant, with a similar argument for $\nabla \cdot \mathbf{B}$. So we actually only have $6$ equations determining the time evolution, which is just the right amount. | {
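The redundancy argument rests on the vector identity $\nabla \cdot (\nabla \times \mathbf{B}) = 0$. As a sanity check, this can be verified numerically; the sketch below (plain Python, with an arbitrary smooth field standing in for $\mathbf{B}$ — it is not a solution of Maxwell's equations) uses nested central differences, which commute exactly, so the result vanishes up to round-off:

```python
import math

def D(f, p, i, h=1e-3):
    # central-difference partial derivative of scalar f w.r.t. coordinate i at point p
    q, r = list(p), list(p)
    q[i] += h
    r[i] -= h
    return (f(q) - f(r)) / (2 * h)

# an arbitrary smooth vector field standing in for B (NOT a Maxwell solution)
B = [lambda p: math.sin(p[1] * p[2]),
     lambda p: p[0] * p[2] ** 2,
     lambda p: math.exp(p[0]) * p[1]]

def curl_component(k, p):
    # (curl B)_k = dB_j/dx_i - dB_i/dx_j for cyclic indices (i, j, k)
    i, j = (k + 1) % 3, (k + 2) % 3
    return D(B[j], p, i) - D(B[i], p, j)

def div_curl(p):
    # divergence of the curl: identically zero, so this is round-off only
    return sum(D(lambda q, k=k: curl_component(k, q), p, k) for k in range(3))

print(div_curl([0.3, -0.7, 1.1]))  # ~0
```

Because the same $\nabla \cdot$ argument applies with $\mathbf{B}$ and $\mathbf{E}$ swapped, both divergence equations are preserved by the time evolution once they hold initially.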
"domain": "physics.stackexchange",
"id": 66477,
"tags": "electromagnetism, maxwell-equations, differential-equations, degrees-of-freedom"
} |
Differences between a field, its field strength, and the force an object experiences within this field | Question: My question is what are the conceptual and intuitive differences between these things. For example, the magnetic field B = F/(|q|v). In this case, B IS the field, and when a charged particle is travelling within this field, it experiences some magnetic force F. In this case, what is the magnetic field strength? Does it even matter, if from B we can calculate the the force the particle experiences anyway? More generally, what is the conceptual difference between a field, its field strength, and the force an object experiences when in this field? Thanks in advance, and I apologise if this question has already been answered, although I couldn't find anything answering this question precisely!
Answer: A field is usually a tensor field of arbitrary order, so a function from space(-time) to the corresponding tensor space. For example, in your case, $\vec{B}$ is a vector field, a function from time and space to three dimensional vectors. The field strength is in my experience usually used for the norm of a vector field. And usually, the field gives the force vector that an object of unit mass/charge/... experiences, so to obtain the force of an actual object in the field, you usually need to multiply the field vector with the objects "scaling value" - the mass/charge/...
Was this the kind of answer you were looking for? | {
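To make the distinction concrete, here is a small sketch (plain Python; the uniform 2 T field and all the numbers are arbitrary illustrations): the field is a function from position to a vector, the field strength is the norm of that vector, and the force on an object is obtained by scaling with the object's charge, here via the magnetic part of the Lorentz force $\mathbf{F} = q\,\mathbf{v} \times \mathbf{B}$:

```python
def B(p):
    # the field: a function from position to a vector (here a uniform 2 T field along z)
    return (0.0, 0.0, 2.0)

def norm(v):
    # the "field strength" at a point is the norm of the field vector there
    return sum(c * c for c in v) ** 0.5

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def magnetic_force(q, v, p):
    # the force scales the field by the object's "scaling value" (its charge): F = q v x B
    return tuple(q * c for c in cross(v, B(p)))

print(norm(B((0, 0, 0))))                                 # field strength: 2.0
print(magnetic_force(1.5, (3.0, 0.0, 0.0), (0, 0, 0)))    # (0.0, -9.0, 0.0)
```

Note that the force magnitude, 9, agrees with the question's relation $B = F/(|q|v)$: $1.5 \times 3 \times 2 = 9$.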
"domain": "physics.stackexchange",
"id": 32092,
"tags": "magnetic-fields, field-theory"
} |
Is there an efficient algorithm to check for duplicator-invariant equivalence on symmetric interaction combinators? | Question: Consider the 3 symmetric interaction combinator nets below:
Despite being different nets, they are equal, in the sense that, if we view white nodes as lambdas and applications, and black nodes as duplicators, then the corresponding sharing graph reads back to the same λ-term, which is the Church-encoded numeral 4 (λf. λx. (f (f (f (f x))))). As such, we could propose an equivalence relation on nets, based on whether they read back to the same λ-term. Moreover, there is an efficient algorithm to check for equivalence: just convert the net to a λ-term, and compare.
Now, consider the following net instead:
In this case, the notion of equivalence I outlined doesn't apply, because these nets aren't valid λ-terms. I'm looking for an equivalence on interaction nets that implies lambda calculus read-back equivalence, but that also identifies the 3 nets above; or, in other words, one that isn't dependent on the λ-calculus.
A solution would be to use Damiano Mazza's axiom-equivalence, but, while this notion equates all nets above, it doesn't necessarily imply same λ-term read-back. We could, though, adjust it to be duplicator-invariant: two nets µ and ν are considered constructor-axiom-equivalent (µ ≃ ν) if they develop the same observable axioms in any context consisting of only constructor nodes. If my line of thought is correct, this weaker equivalence would, indeed, imply λ-calculus read-back equivalence. The problem is: how do we check for equivalence efficiently? The naive algorithm I thought of is exponential, so I fear I am missing something. Thus, my question is:
Is there an efficient algorithm to check for duplicator-invariant equivalence on symmetric interaction combinators?
Answer: This does not answer your main question but concerns the following point:
I'm looking for an equivalence on interaction nets that implies lambda calculus read-back equivalence, but that also identifies the 3 nets above
The three nets in your example are $\eta$-equivalent, a notion introduced originally in Lafont's paper on the interaction combinators and in a later paper by Fernández and Mackie. This equivalence in the (symmetric) interaction combinators plays a similar role as $\eta$-equivalence in the $\lambda$-calculus, from which it gets its name. For example, a result similar to Böhm's theorem holds: for any two non-$\eta$-equivalent reduced nets (no active pair, no vicious circle) with the same number of free ports, there exists a context sending one to a wire and the other to two $\varepsilon$ cells. This implies, in particular, that no non-trivial congruence on nets may equate two reduced non-$\eta$-equivalent nets.
In the interaction combinators, $\eta$-equivalence is the contextual closure of the following equations:
In the symmetric interaction combinators, $\eta$-equivalence is defined by the above rules where $\gamma$ is replaced everywhere by $\zeta$, and the top-left rule for the $\zeta$ combinator is identical to that of the $\delta$ combinator.
I don't know whether $\eta$-equivalence includes readback-equivalence, but it might be worth checking. | {
"domain": "cstheory.stackexchange",
"id": 5695,
"tags": "type-theory, interaction-nets, equivalence, interaction-combinators"
} |
About molecules and their shape | Question: When a molecule is being used in some biochemical process, it has a certain 3-D structure. Is this maintained throughout its usage, or can one specific molecule change its 3-D shape while it is being 'used' in the process? Can a specific molecule, across all its interactions in a cell, take on, say, 4 or 5 distinct 3-D shapes?
Answer: All molecules have a 3D shape. This is referred to as their conformation. Most molecules have certain degrees of flexibility. For example, single bonds usually allow free rotation, while double bonds don't. Cyclic structures have less freedom, but can still twist into several shapes. This can be important because some molecules bend into a certain conformation and then fit into protein binding sites they normally couldn't fit into. And it's not just small molecules: proteins, RNA, and DNA fold into specific shapes, and often must be unfolded, twisted, or bent in order to carry out their function. Protein dynamics is an active field of study. A great example is ATP synthase, which has a large rotating portion and acts a lot like a motor. | {
"domain": "biology.stackexchange",
"id": 2538,
"tags": "biochemistry, structural-biology"
} |
Problems with big open complexity gaps | Question: This question is about problems for which there is a big open complexity gap between known lower bound and upper bound, but not because of open problems on complexity classes themselves.
To be more precise, let's say a problem has gap classes $A,B$ (with $A\subseteq B$, not uniquely defined) if $A$ is a maximal class for which we can prove it is $A$-hard, and $B$ is a minimal known upper bound, i.e. we have an algorithm in $B$ solving the problem. This means if we end up finding out that the problem is $C$-complete with $A\subseteq C\subseteq B$, it will not impact complexity theory in general, as opposed to finding a $P$ algorithm for a $NP$-complete problem.
I am not interested in problems with $A\subseteq P$ and $B=NP$, because it is already the object of this question.
I am looking for examples of problems with gap classes that are as far as possible. To limit the scope and precise the question, I am especially interested in problems with $A\subseteq P$ and $B\supseteq EXPTIME$, meaning both membership in $P$ and $EXPTIME$-completeness are coherent with current knowledge, without making known classes collapse (say classes from this list).
Answer: The Knot Equivalence Problem.
Given two knots drawn in the plane, are they topologically the same? This problem is known to be decidable, and there do not seem to be any computational complexity obstructions to its being in P. The best upper bound currently known on its time complexity seems to be a tower of $2$s of height $c^n$, where $c = 10^{10^{6}}$, and $n$ is the number of crossings in the knot diagrams. This comes from a bound by Coward and Lackenby on the number of Reidemeister moves needed to take one knot to an equivalent one. See Lackenby's more recent paper for some more recent related results and for
the explicit form of the bound I give above (page 16). | {
"domain": "cstheory.stackexchange",
"id": 3457,
"tags": "cc.complexity-theory, complexity-classes, big-list"
} |
Does, and if so, why does the frequency of light and wavelength of light affect the photoelectric current? | Question: It makes sense that intensity of light affects the photoelectric current, but what about the frequency and wavelength, given that intensity remains constant?
The formula for intensity would be I = nhf/(tA) (where n is the number of photons and hf the energy of one photon, t the time, and A the area).
Now, let's say that we double the frequency, we get 2f, but since I is constant, n/t must halve, hence the current must be smaller. But apparently it increases or has no effect according to my research in the internet, what went wrong in my calculation? If we reduce the wavelength, the current decreases, right? Many thanks in advance!
Answer: We need to be really careful in our definitions - as @Stealth849 points out, it is very easy to confuse what is photon flux, and intensity.
Flux is the number of photons which strike a unit area within a given time interval.
Intensity is the amount of energy per unit area within a given time, and the two are related by individual photon energy $hf$. We can say (where $\Phi$ is the flux, and $I$ the intensity):
$$\Phi = \frac{n}{A\Delta t} \quad \rightarrow \quad I = \Phi hf=\frac{nhf}{A\Delta t}$$
The flux describing the number of incident photons dictates the resulting photocurrent so long as the photon energy is high enough to free an electron from the surface.
Now with the intensity, you're making the connection that $f\rightarrow2f$, and thus, $\Phi\rightarrow\frac{\Phi}{2}$. You are making the assumption that when you increase the frequency of photons, you will only be sending half as many photons in order to preserve your desired constant intensity.
Given that your photocurrent is $I_P = e\,n/\Delta t$ (the electron charge $e$ times the number of electrons freed per unit time), we then notice that the photocurrent would decrease if you decrease the number of photons (flux).
But given the scenario where you do not require the intensity $I$ to be constant, and you increased the frequency of the incident photons while retaining the original flux, you would not notice a change in the photocurrent so long as the incident photons are above the cutoff frequency. | {
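The flux/intensity bookkeeping above can be made explicit in a few lines (plain Python; the numbers are arbitrary and assume the photon energy is above the cutoff): at fixed intensity, doubling the frequency halves the photon flux, and hence the photocurrent, while at fixed flux the current is unchanged:

```python
h = 6.626e-34  # Planck constant, J s

def photon_flux(intensity, f):
    # Phi = I / (h f): photons per unit area per unit time
    return intensity / (h * f)

I, f = 1.0, 1.0e15  # W/m^2 and Hz, arbitrary values above a typical cutoff

# fixed intensity: doubling f halves the flux (and so the photocurrent)
print(photon_flux(I, 2 * f) / photon_flux(I, f))       # 0.5

# fixed flux: doubling f requires doubling I; the photocurrent is unchanged
print(photon_flux(2 * I, 2 * f) / photon_flux(I, f))   # 1.0
```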
"domain": "physics.stackexchange",
"id": 87495,
"tags": "electric-current, frequency, wavelength, photoelectric-effect, intensity"
} |
What is the difference between "deadlock prevention" and "deadlock avoidance" | Question: I have studied both of these in many places, like this, this and this, but it is still not clear what the actual difference is between the two.
The Wikipedia links (first and second links mentioned above) say about deadlock prevention that:
The hold and wait or resource holding conditions may be removed by requiring processes to request all the resources they will need before starting up (or before embarking upon a particular set of operations)...
and the deadlock avoidance section of the same page says:
Deadlock can be avoided if certain information about processes are available to the operating system before allocation of resources, such as which resources a process will consume in its lifetime...
so almost in both of these situations we require the processes to provide information about resources in advance.
So can anyone kindly explain, in easy-to-understand words, what the actual difference is between the two?
Answer: It seems that deadlock prevention and deadlock avoidance are two names for the same concept. Indeed, the Wikipedia section on deadlock avoidance has been marked as redundant. While the distinction might be taken from the literature, some people at least are arguing that this distinction is superfluous. See the paper The classification of deadlock prevention and avoidance is erroneous by Neumann Levine, which is mentioned in the Talk part of the Wikipedia article. | {
"domain": "cs.stackexchange",
"id": 6343,
"tags": "operating-systems, deadlocks"
} |
In Computer Vision, what is the difference between a transformer and attention? | Question: Having been studying computer vision for a while, I still cannot understand: what is the difference between a transformer and attention?
Answer: The original transformer is a feedforward neural network (FFNN)-based architecture that makes use of an attention mechanism. So, this is the difference: an attention mechanism (in particular, a self-attention operation) is used by the transformer, which is not just this attention mechanism, but it's an encoder-decoder architecture, which makes use of other techniques too: for example, positional encoding and layer normalization. In other words, the transformer is the model, while the attention is a technique used by the model.
The paper that introduced the transformer Attention Is All You Need (2017, NIPS) contains a diagram of the transformer and the attention block (i.e. the part of the transformer that does this attention operation).
Here's the diagram of the transformer.
Here's the picture of the attention mechanism (as you can see from the diagram above, the transformer used the multi-head attention on the right).
One thing to keep in mind is that the idea of attention is not novel to the transformer, given that similar ideas had already been used in previous works and models, for example, here, although the specific attention mechanisms are different.
Of course, you should read the mentioned paper for more details about the transformer, the attention mechanism and the diagrams above. | {
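As an illustration of what the attention operation itself does (separate from the rest of the transformer), here is a minimal sketch of scaled dot-product attention, $\mathrm{softmax}(QK^T/\sqrt{d_k})\,V$, on plain Python lists — no batching, masking, learned projections, or multiple heads:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    # scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V
    d_k = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k) for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# one query that matches the first key more strongly than the second
Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
result = attention(Q, K, V)
print(result)  # weighted mostly toward the first value row, roughly [[6.7, 3.3]]
```

The output is a convex combination of the value rows, weighted by how well the query matches each key — that mixing operation is "attention"; the transformer is the full architecture built around it.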
"domain": "ai.stackexchange",
"id": 2936,
"tags": "computer-vision, comparison, transformer, attention"
} |
Failed to fetch ros-cturtle-web-interface for lucid: "404 not found" | Question:
I'm setting up a build server and trying to match as best I can the packages on the three latest versions: cturtle, diamondback, electric.
In trying to install ros-cturtle-pr2, apt attempts to get ros-cturtle-web-interface:
The following extra packages will be installed:
ros-cturtle-web-interface
And I'm getting a 404 on apt-get install attempt:
Failed to fetch http://packages.ros.org/ros/ubuntu/pool/main/r/ros-cturtle-web-interface/ros-cturtle-web-interface_0.4.4-s1302754950~lucid_i386.deb 404 Not Found
It looks like the apt package list is listing a package it doesn't have.
I'm in the process of submitting a bug report, but wanted to ask here first before I submit. Anyone familiar with this issue?
Originally posted by Asomerville on ROS Answers with karma: 2743 on 2011-08-16
Post score: 1
Answer:
The cturtle Debian packages have been updated. You need to run "sudo apt-get update" to get the latest package index. After that, installing should just work fine. You can check on the server http://packages.ros.org/ros/ubuntu/pool/main/r/ros-cturtle-web-interface/ what the current build version of the package is.
Originally posted by Wim with karma: 2915 on 2011-08-16
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 6435,
"tags": "ros, ubuntu, pr2, cturtle, package"
} |
Electrolysis in aqueous solution – which equations to use to predict product at each electrode? | Question: When trying to work out what will be produced at each electrode during electrolysis, I'm confused as to which equations to use for hydrogen and oxygen.
For instance, an electrolytic cell containing aqueous nickel(II) bromide, with a nickel cathode and a platinum anode.
At the cathode:
$$\ce{Ni^2+ + 2e- -> Ni}$$
I have read that this does not occur because $\ce{Ni^2+}$ is a worse oxidising agent than $\ce{H+}$, which makes sense because $\ce{H+}$ is lower down on the left side of the electrochemical series meaning its reduction is more feasible.
Water is therefore reduced at the cathode:
$$\ce{2H+ + 2e- -> H2}\quad E^\circ=0.0\ \mathrm V$$
or:
$$\ce{2H2O + 2e- -> H2 + 2OH-}\quad E^\circ=-0.83\ \mathrm V$$
The second equation, however, has water higher up in the electrochemical series. This would suggest nickel should actually be reduced in preference to water.
How do I know which of the two "water-reduction" equations to use?
This is even less transparent for the anode, where bromide should be oxidised to bromine, but there are also two equations showing production of oxygen: one below and one above bromide in the series:
$$\begin{alignat}{2}
\ce{O2 + 4H+ + 4e- &-> 2H2O}\quad &E^\circ&=+1.23\ \mathrm V\\[6pt]
\ce{O2 + 2H2O + 4e- &-> 4OH-}\quad &E^\circ&=+0.40\ \mathrm V
\end{alignat}$$
I don't understand which of these equations should be used and how you're supposed to know which to use.
Answer: The given potentials $E = 0.00{\text{ V}}$ and $E = -0.83{\text{ V}}$ for the reduction of water apply to different $\text{pH}$.
The thermodynamic relation of the potential $E$ to the composition of the solution is generally known as the Nernst equation:
$$E = E^\circ + \frac{{0.059{\text{ V}}}}{z}\log \frac{{{{\prod\nolimits_i {\left[ {{\text{ox}}} \right]} }^{{n_i}}}}}{{{{\prod\nolimits_j {\left[ {{\text{red}}} \right]} }^{{n_j}}}}}$$
For the given reaction
$$\ce{2 H+ + 2e- <=> H2}$$
the Nernst equation reads
$$E = 0.00{\text{ V}} + \frac{{0.059{\text{ V}}}}{2}\log \frac{{{{\left[ {{{\text{H}}^ + }} \right]}^2}}}{{\left[{{{\text{H}}_2}}\right]}}$$
Since
$${\text{pH}} = - \log \left[ {{{\text{H}}^ + }} \right]$$
the potential $E$ depends on $\text{pH}$:
$$E = 0.00{\text{ V}} - 0.059{\text{ V}} \times {\text{pH}} + \frac{{0.059{\text{ V}}}}{2}\log \frac{1}{{\left[ {{{\text{H}}_2}} \right]}}$$
Therefore, the potential is $E = 0.00{\text{ V}}$ at $\text{pH} = 0$ and $E = -0.83{\text{ V}}$ at $\text{pH} = 14$ respectively. | {
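A quick numerical check of this pH dependence (plain Python, using the room-temperature factor 0.059 V from the answer and assuming unit $\ce{H2}$ activity):

```python
import math

def E_hydrogen(pH, a_H2=1.0):
    # E = 0.00 V - 0.059 V * pH + (0.059/2) V * log10(1 / [H2])
    return 0.00 - 0.059 * pH + (0.059 / 2) * math.log10(1.0 / a_H2)

print(E_hydrogen(0))    # 0.0 V at pH 0
print(E_hydrogen(14))   # -0.826 V at pH 14, usually rounded to -0.83 V
```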
"domain": "chemistry.stackexchange",
"id": 2909,
"tags": "electrolysis"
} |
How to create a node that listens to particular topic and publishes tf | Question:
How can I create a node that listens to that particular topic and publishes the required tf? I have successfully published data on a topic. Now I want to exhibit it in TF. Do I need a TF broadcaster, something like this:
tf::TransformBroadcaster br;
tf::Transform transform;
transform.setOrigin( tf::Vector3(0,0,0));
transform.setRotation( tf::Quaternion(x,y,z,w) );
br.sendTransform(tf::StampedTransform(transform, ros::Time::now(), "optical", "base_link"));
Even if I have something like this, how do I listen and display it? For example, do I need a tf listener to listen to some topic which is publishing data? If that's the case, what would the code look like if my topic is /mytopic?
Originally posted by micheal on ROS Answers with karma: 121 on 2013-08-08
Post score: 0
Answer:
Here is a piece of code that publishes an odometry transform over tf.
void pub_odom::odom_callback(const nav_msgs::Odometry& odom) {
btVector3 Position;
btQuaternion Orientation;
tf::TransformBroadcaster odom_broadcaster_;
tf::StampedTransform odom_transform;
Position.setValue(odom.pose.pose.position.x,odom.pose.pose.position.y,odom.pose.pose.position.z);
Orientation.setW(odom.pose.pose.orientation.w);
Orientation.setX(odom.pose.pose.orientation.x);
Orientation.setY(odom.pose.pose.orientation.y);
Orientation.setZ(odom.pose.pose.orientation.z);
odom_transform.setOrigin(map_center);
odom_transform.setRotation(tf::Quaternion(tf::Vector3(0,0,1),-yaw_init));
odom_transform.stamp_ = odom.header.stamp;
odom_transform.child_frame_id_= "/odom";
odom_transform.frame_id_= "/center_map";
odom_broadcaster_.sendTransform(odom_transform);}
Hope it helps you
Originally posted by Mario Garzon with karma: 802 on 2013-08-08
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by BennyRe on 2013-08-08:
You also could replace btVector3 and btQuaternion by tf::Vector3 and tf::Quaternion.
Comment by micheal on 2013-08-10:
@Mario Garzon Can you please send me a link to your whole package so that I can visualise your package with tf? Thank you very much. Right now I cannot run your program, so I cannot understand it better.
"domain": "robotics.stackexchange",
"id": 15205,
"tags": "ros, rviz, transform"
} |
What would be the tension developed in a conducting ring of radius $R$ which has been given a charge $Q$, distributed uniformly over the ring? | Question: As the question says, I thought about using Coulomb's law, but for any small charge $dq$, every other part of the ring will be repelling it. So how do I get the total tension force? (The difficulty is that the distance is different for the different small charges.)
Answer: Actually this looks like a homework-type (JEE-style) question, and since many students will want this answer, I think I should answer it. When you distribute a charge $Q$ uniformly, each length $dl$ of the wire experiences a repulsive force from the rest of the ring, so you can do this question by simply applying the work-energy theorem.
Suppose the tension in the ring is $T$, and we increase the radius of the ring by $dr$, so that the circumference increases by $2\pi\,dr$; the work done against the tension is then $dW = T\,2\pi\,dr$.
And finally, because this work has to be equal to the change in the self-energy, we equate $dE$ and $dW$ to get:
$$ T 2\pi dr = \frac{dE}{dr} dr $$
So:
$$ T = \frac{1}{2\pi} \frac{dE}{dr} $$
So if you know $dE/dr$ this gives you the tension in the ring. | {
"domain": "physics.stackexchange",
"id": 61689,
"tags": "homework-and-exercises, electrostatics, conductors, coulombs-law"
} |
Rotational Friction | Question: This is the question-
Consider a cylinder of mass $M$ resting on a rough horizontal rug that is pulled out from under it with acceleration $a$ perpendicular to the axis of the cylinder. What is $F$ (friction) at a point P? It is assumed that the cylinder does not slip.
Options-
A) $F = M g$
B) $F = M a$
C) $F = \frac{M}{2} a $
D) $F = \frac{M}{3} a $
My attempt-
Since rolling is nothing but rotation about the point of contact (P in this case), P should be at relative rest, therefore $F=Ma$. This should have been a short and crisp question; please help me out. The given answer is $F=\frac{M}{3} a$. If you can, please also point me to articles for such rolling-friction questions; I get stumped whenever they ask for the friction in rolling. Thanks in advance.
Edit-
This idea also came to my mind-
Moment of inertia for a solid cylinder= I = $\frac{MR^2}{2}$
Let the frictional force be F, then,
$F*R=(torque)=I*\alpha$
$F*R=\frac{(M*R^2)*(a)}{2R}$
(as the cylinder is in pure rolling)
therefore- $F=\frac{Ma}{2}$
which is again incorrect any pointers to where I went wrong here again would be appreciated.. thanks
Answer: The job of the friction is to enforce the no slip constraint. So let us find the relative acceleration of the two parts and find the force $F$ which makes it zero.
The cylinder EOM are (positive is to the left) $ F = m \dot{v} $ and $r F = I \dot{\omega}$
and the tangential cylinder acceleration at P is $\dot{v}_P = \dot{v} + r \dot{\omega}$
To make the tangential acceleration equal to the rug you have
$$ a = \dot{v}_P = \frac{F}{m} + \frac{r^2 F}{I} \\ F = \left(\frac{1}{m} + \frac{r^2}{I}\right)^{-1} a $$
where the inverse of the term in parentheses is the effective mass of a rolling cylinder as seen at its surface. For a solid cylinder the mass moment of inertia is $I=\frac{m}{2} r^2$ and thus
$$ F = \left(\frac{1}{m} + \frac{2}{m}\right)^{-1} a = \frac{m}{3} a$$ | {
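A quick numerical check of the result (plain Python; the mass, radius, and acceleration values are arbitrary):

```python
def friction_force(m, r, I, a):
    # F = (1/m + r^2/I)^(-1) * a: the friction needed to keep the contact
    # point accelerating with the rug (no-slip constraint)
    return a / (1.0 / m + r ** 2 / I)

m, r, a = 2.0, 0.5, 3.0

I_solid = 0.5 * m * r ** 2                 # solid cylinder, I = m r^2 / 2
print(friction_force(m, r, I_solid, a))    # m*a/3 = 2.0

I_shell = m * r ** 2                       # thin cylindrical shell, for comparison
print(friction_force(m, r, I_shell, a))    # m*a/2 = 3.0
```

The shell case also shows why the asker's torque-only attempt gave $Ma/2$: that answer drops the translational term $1/m$ in the effective-mass sum.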
"domain": "physics.stackexchange",
"id": 13971,
"tags": "homework-and-exercises, classical-mechanics"
} |
Java implementation of spell-checking algorithm | Question: This little program was written for an assignment in a data structures and algorithms class. I'll just note the basic requirements:
Given an inputted string, the program should check to see if it exists
in a dictionary of correctly spelled words. If not, it should return a
list of words that are obtainable by:
adding any character to the beginning or end of the inputted string
removing any single character from the inputted string
swapping any two adjacent characters in the string
The primary data structure is embodied in the Dictionary class, which is my implementation of a separately-chaining hash list, and the important algorithms are found in the charAppended(), charMissing() and charsSwapped() methods.
Everything works as expected - I'm just looking for tips about anything that can be done more cleanly, efficiently or better aligned with best practices.
SpellCheck.java
import java.util.ArrayList;
import java.util.Scanner;
public class SpellCheck {
private Dictionary dict;
final static String filePath = "d:/desktop/words.txt";
final static char[] alphabet = "abcdefghijklmnopqrstuvwxyz".toCharArray();
SpellCheck() {
dict = new Dictionary();
dict.build(filePath);
}
void run() {
Scanner scan = new Scanner(System.in);
boolean done = false;
String input;
while (true) {
System.out.print("\n-------Enter a word: ");
input = scan.nextLine();
if (input.equals("")) {
break;
}
if (dict.contains(input)) {
System.out.println("\n" + input + " is spelled correctly");
} else {
System.out.print("is not spelled correctly, ");
System.out.println(printSuggestions(input));
}
}
}
String printSuggestions(String input) {
StringBuilder sb = new StringBuilder();
ArrayList<String> print = makeSuggestions(input);
if (print.size() == 0) {
return "and I have no idea what word you could mean.\n";
}
sb.append("perhaps you meant:\n");
for (String s : print) {
sb.append("\n -" + s);
}
return sb.toString();
}
private ArrayList<String> makeSuggestions(String input) {
ArrayList<String> toReturn = new ArrayList<>();
toReturn.addAll(charAppended(input));
toReturn.addAll(charMissing(input));
toReturn.addAll(charsSwapped(input));
return toReturn;
}
private ArrayList<String> charAppended(String input) {
ArrayList<String> toReturn = new ArrayList<>();
for (char c : alphabet) {
String atFront = c + input;
String atBack = input + c;
if (dict.contains(atFront)) {
toReturn.add(atFront);
}
if (dict.contains(atBack)) {
toReturn.add(atBack);
}
}
return toReturn;
}
private ArrayList<String> charMissing(String input) {
ArrayList<String> toReturn = new ArrayList<>();
int len = input.length() - 1;
//try removing char from the front
if (dict.contains(input.substring(1))) {
toReturn.add(input.substring(1));
}
for (int i = 1; i < len; i++) {
//try removing each char between (not including) the first and last
String working = input.substring(0, i);
working = working.concat(input.substring((i + 1), input.length()));
if (dict.contains(working)) {
toReturn.add(working);
}
}
if (dict.contains(input.substring(0, len))) {
toReturn.add(input.substring(0, len));
}
return toReturn;
}
private ArrayList<String> charsSwapped(String input) {
ArrayList<String> toReturn = new ArrayList<>();
for (int i = 0; i < input.length() - 1; i++) {
String working = input.substring(0, i);// System.out.println(" 0:" + working);
working = working + input.charAt(i + 1); //System.out.println(" 1:" + working);
working = working + input.charAt(i); //System.out.println(" 2:" + working);
working = working.concat(input.substring((i + 2)));//System.out.println(" FIN:" + working);
if (dict.contains(working)) {
toReturn.add(working);
}
}
return toReturn;
}
public static void main(String[] args) {
SpellCheck sc = new SpellCheck();
sc.run();
}
}
Dictionary.java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
public class Dictionary {
private int M = 1319; //prime number
final private Bucket[] array;
public Dictionary() {
this.M = M;
array = new Bucket[M];
for (int i = 0; i < M; i++) {
array[i] = new Bucket();
}
}
private int hash(String key) {
return (key.hashCode() & 0x7fffffff) % M;
}
//call hash() to decide which bucket to put it in, do it.
public void add(String key) {
array[hash(key)].put(key);
}
//call hash() to find what bucket it's in, get it from that bucket.
public boolean contains(String input) {
input = input.toLowerCase();
return array[hash(input)].get(input);
}
public void build(String filePath) {
try {
BufferedReader reader = new BufferedReader(new FileReader(filePath));
String line;
while ((line = reader.readLine()) != null) {
add(line);
}
} catch (IOException ioe) {
ioe.printStackTrace();
}
}
//this method is used in my unit tests
public String[] getRandomEntries(int num){
String[] toRet = new String[num];
for (int i = 0; i < num; i++){
//pick a random bucket, go out a random number
Node n = array[(int)Math.random()*M].first;
int rand = (int)Math.random()*(int)Math.sqrt(num);
for(int j = 0; j<rand && n.next!= null; j++) n = n.next;
toRet[i]=n.word;
}
return toRet;
}
class Bucket {
private Node first;
public boolean get(String in) { //return key true if key exists
Node next = first;
while (next != null) {
if (next.word.equals(in)) {
return true;
}
next = next.next;
}
return false;
}
public void put(String key) {
for (Node curr = first; curr != null; curr = curr.next) {
if (key.equals(curr.word)) {
return; //search hit: return
}
}
first = new Node(key, first); //search miss: add new node
}
class Node {
String word;
Node next;
public Node(String key, Node next) {
this.word = key;
this.next = next;
}
}
}
}
Answer: General
Many of the variable names lack meaning and are inconsistent. I cover some of them specifically below, but for example print is not a good choice for a list of suggestions--even though you intend to print them. Instead, suggestions clearly identifies what the list holds.
Learn to use JavaDoc for documenting public classes and methods. Not only is it nicer to read than one-liners, developing this habit early will demonstrate your goal to become a professional engineer (if that's the case).
/**
* Adds the key to this dictionary if not already present.
*
* @param key hash determines the bucket to receive it
*/
public void add(String key) { ... }
/**
* Determines if the key is present in this dictionary.
*
* @param key hash determines the bucket to search
* @return true if key is present
*/
public boolean contains(String input) { ... }
SpellCheck
You make a good effort at breaking up the methods and separating concerns, but I would take it a step further. The methods that build alternate spellings should not be responsible for checking the dictionary. Instead, combine all misspellings into a single list and search the dictionary in one place. This also allows you to remove duplicates and avoid wasted lookups.
I would also completely separate all output, possibly to a new UI class to allow reuse. You did pretty well here, but printSuggestions should receive the list of suggestions instead of calling makeSuggestions itself.
private void printStatusAndSuggestions(String input) {
System.out.println();
System.out.print(input);
if (dict.contains(input)) {
System.out.println(" is spelled correctly.");
} else {
System.out.print(" is not spelled correctly,");
printSuggestions(suggest(input));
}
}
private void printSuggestions(Set<String> suggestions) {
if (suggestions.isEmpty()) {
System.out.println(" and I have no idea what word you could mean.");
} else {
... print them ...
}
}
private Set<String> suggest(String input) {
Set<String> suggestions = new HashSet<>();
Set<String> alternates = makeAlternates(input);
for (String alternate : alternates) {
if (dict.contains(alternate)) {
suggestions.add(alternate);
}
}
return suggestions;
}
If Dictionary implemented the Set interface, you could use the built-in intersection method to do the lookups for you.
private Set<String> suggest(String input) {
Set<String> alternates = makeAlternates(input);
alternates.retainAll(dict); // retainAll mutates the receiver and returns boolean
return alternates;
}
input.equals("") is better expressed as input.isEmpty(). You may want to trim the input to remove leading/trailing spaces: input = scan.nextLine().trim();
This applies similarly to ArrayList: print.isEmpty() instead of print.size() == 0.
printSuggestions creates a StringBuilder needlessly when there are no suggestions.
You use two different methods of concatenating strings when building suggestions: an implicit StringBuilder and String.concat. Both are fine in different circumstances (though I confess that I've never used the latter), but make sure you combine all the concatenations in one statement. Breaking them across statements uses a new builder for each statement. charsSwapped is especially egregious, requiring three for each suggestion instead of one.
String working = input.substring(0, i)
+ input.charAt(i + 1)
+ input.charAt(i)
+ input.substring(i + 2);
Dictionary
Turn M into a constant. As it stands, you're initializing the field to 1319 and then reassigning it to itself in the constructor. Perhaps BUCKET_COUNT is a better name.
Alternatively, add a constructor that takes the value as a parameter. While we're at it, how about buckets in place of the very generic name array? It's rarely a good idea to name a primitive variable based solely on its type.
public static final int DEFAULT_BUCKET_COUNT = 1319;
private final int bucketCount;
private final Bucket[] buckets;
public Dictionary() {
this(DEFAULT_BUCKET_COUNT);
}
public Dictionary(int bucketCount) {
this.bucketCount = bucketCount;
... create empty buckets ...
}
Bucket/Node
Both of these can be static since they don't access the outer classes' members.
You don't really need Bucket and may consider rewriting the code to remove it to see the difference. It's not a huge improvement but may simplify the code.
As with Dictionary, add and contains make more sense than put and get.
Be consistent with naming across methods. In Bucket, get takes in while put takes key, but in and key represent the same things and as such should use the same name: key. Also, stick with curr to avoid confusion with Node.next.
@Czippers nailed this one: get and put should use the same looping construct since they're both walking the list in order and possibly stopping at some point. | {
"domain": "codereview.stackexchange",
"id": 24270,
"tags": "java, algorithm, homework, hash-map"
} |
Finding Specific Order Statistics with O(n log(n/k)) Complexity | Question: I'm working on a problem where I need to find specific order statistics in a set of numbers. Given n numbers, I want to determine the k-th smallest, the 2k-th smallest, and so on, up to the ⌊n/k⌋-th smallest element. I'm looking for a method to accomplish this with a time complexity of O(n log(n/k)).
My Attempt:
So far, I've considered using modified sorting algorithms and heap structures, but I'm not sure if these approaches can achieve the desired time complexity. I suspect there might be a more efficient way to find these specific elements without fully sorting the entire array.
Question:
Can anyone suggest an algorithm or method to find these elements with the required time complexity?
Answer: Find the middle of the required order statistics, i.e. the $(\lceil \lfloor n/k \rfloor / 2 \rceil \cdot k)$-th smallest element (roughly the median of the array), in $O(n)$ time. There are standard algorithms (e.g. median of medians) for finding the $p$-th smallest element of an array in $O(n)$ time.
Partition the array into two almost equal halves about this element.
Run the algorithm recursively on the two halves. Once the algorithm has found all $\lfloor n/k \rfloor$ required elements, return.
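A hedged Python sketch of this recursion, substituting randomized quickselect for a worst-case linear-time selection routine and assuming distinct elements for simplicity (function names are made up for illustration):

```python
import random

def quickselect(a, k):
    """Return the k-th smallest (1-indexed) element of a; O(len(a)) expected time."""
    pivot = random.choice(a)
    lo = [x for x in a if x < pivot]
    eq = [x for x in a if x == pivot]
    if k <= len(lo):
        return quickselect(lo, k)
    if k <= len(lo) + len(eq):
        return pivot
    return quickselect([x for x in a if x > pivot], k - len(lo) - len(eq))

def kth_multiples(a, k):
    """Return {m: m-th smallest element} for every m in {k, 2k, ..., floor(n/k)*k}."""
    out = {}

    def solve(sub, offset):
        # sub holds the elements of global ranks offset+1 .. offset+len(sub)
        first = (offset // k + 1) * k                  # smallest required rank in range
        last = ((offset + len(sub)) // k) * k          # largest required rank in range
        if first > last:
            return                                     # no required rank falls here
        mid = first + ((last - first) // (2 * k)) * k  # middle required rank
        val = quickselect(sub, mid - offset)
        out[mid] = val
        solve([x for x in sub if x < val], offset)     # ranks below mid
        solve([x for x in sub if x > val], mid)        # ranks above mid

    solve(list(a), 0)
    return out
```

Each recursion level touches each element at most once, and only $O(\log(n/k))$ levels are needed because the number of remaining required ranks halves at every split.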
Note that the height of the recursion tree is $O(\log(n/k))$, and at each level the algorithm does $O(n)$ comparison operations.
Thus, the overall running time is $O(n \log (n/k))$. | {
"domain": "cs.stackexchange",
"id": 21727,
"tags": "algorithms, sorting"
} |
Software platforms with integration of multi vendor mass spectrometer device data? | Question: I know about ACD/Spectrus Processor which is a software platform to integrate analytical chemistry data from multiple mass spectrometer device vendors (e.g. Agilent Technologies, Bruker, Waters) to allow data processing. Are there comparable software platforms out there which enable the integration of MS/chromatography data of different vendors for data processing?
BTW: I know that questions of this kind usually fit better on Software Recommendations. However, because it's specific to the chemical domain, I think it's more suitable to ask here.
Answer: Some of the MS vendors made their software work with other vendors' data as well. E.g. ThermoFisher's Chromeleon and Waters' Empower seem to have Data Converters for this. If you already have software bought from your vendors, it may make sense to research their cross-vendor capabilities.
There are also pure software companies that build vendor-agnostic tools:
As you mentioned - ACD/Spectrus which has a wide range of vendors supported.
Virscidian's Analytical Studio.
Elsci's Peaksel (disclaimer: I'm one of the developers).
The last two have good support for handling libraries (24, 96, 384, 1536 or more samples per experiment).
There are also open-source packages and libraries:
OpenChrom, if you don't like something you can contribute or just change it to your own needs.
MZmine2. There's also a repo with MZmine3, but I don't know what's the status there. | {
"domain": "chemistry.stackexchange",
"id": 13752,
"tags": "chromatography, mass-spectrometry"
} |
Summing categories of financial records per month in a query | Question: My program is working properly but I'm unconfortable with code repetition.
class Movimentacao(models.Model):
data = models.DateField() # complete datetime field from 2019->2022
movimentacao = models.CharField(max_length=200) # text field
valor_da_operacao = models.DecimalField(max_digits=19, decimal_places=2) # decimal field
# get all objects filtered by `movimentacao` field
proventos = Movimentacao.objects.filter(movimentacao__in=('Dividendo', 'Juros Sobre Capital Próprio', 'Rendimento', 'Reembolso')).order_by('data', 'id')
currentDate = proventos[0].data # get date from the first record
lastRecord = proventos.last().id # get ID from the last record
assets = []
stockSum = 0
fiiSum = 0
for m in proventos:
if(m.data.month != currentDate.month or lastRecord == m.id):
dataFormatada = f'{currentDate.strftime("%y")}-{currentDate.strftime("%b")}'
if(lastRecord == m.id):
# first if-else block (necessary but I'd like to 'remove')
if(m.movimentacao == 'Dividendo'):
stockSum += m.valor_da_operacao
if(m.movimentacao == 'Rendimento'):
fiiSum += m.valor_da_operacao
assets.append([dataFormatada, 'FIIs', fiiSum])
assets.append([dataFormatada, 'Stocks', stockSum])
stockSum = 0
fiiSum = 0
currentDate = m.data
# second if-else block (this one I can't remove)
if(m.movimentacao == 'Dividendo'):
stockSum += m.valor_da_operacao
if(m.movimentacao == 'Rendimento'):
fiiSum += m.valor_da_operacao
As you can see, the code below is being repeteaded:
if(m.movimentacao == 'Dividendo'):
stockSum += m.valor_da_operacao
if(m.movimentacao == 'Rendimento'):
fiiSum += m.valor_da_operacao
The goal of this code is to build the assets list formatted accordingly below:
[
["19-Nov", "FII", 411.97], ["19-Nov", "Stocks", 0],
["19-Dec", "FII", 368.21], ["19-Dec", "Stocks", 1542.08],
["20-Jan", "FII", 0], ["20-Jan", "Stocks", 401.06],
]
In this way, I'd like to see if it's possible to "remove" the first if-else statement, keeping the code working properly (this would be my first choice).
The issue is: if I remove the first if-else statement, the value from the last object is not added to the stockSum or fiiSum variables - that was the reason I needed to insert the first if-else statement. The last object never reaches the last if-else statement to be checked for whether m.movimentacao is a 'Dividendo' or a 'Rendimento'.
If not, what is the best way to avoid code repetition in this case (function use)?
Answer: Mixing Portuguese and English makes this code messy. Please pick one or the other. I recommend coding consistently in English, because Python, its library, and its code ecosystem are in English, so Portuguese will always feel out of place.
Why do you call .filter(movimentacao__in=('Dividendo', 'Juros Sobre Capital Próprio', 'Rendimento', 'Reembolso'), when only "Dividendo" and "Rendimento" matter for building assets? Also, wouldn't you want to do another two assets.append(…) statements at the end of this code?
This date formatting code is clumsy:
dataFormatada = f'{currentDate.strftime("%y")}-{currentDate.strftime("%b")}'
… and it could be better expressed as
dataFormatada = currentDate.strftime("%y-%b")
But really, none of this loop should exist: all of the filtering and summation should be done by the database instead! Transferring so much data from the database to Python for analysis defeats the purpose of the database. It would be more efficient, scalable, and readable to do this in SQL rather than Python.
Here's a query (and a fiddle) for postgresql that gets you the same results:
SELECT DISTINCT
date_trunc('month', data) AS mes,
CASE
WHEN movimentacao = 'Dividendo' THEN 'Stocks'
ELSE 'FIIs'
END AS categoria,
SUM(valor_da_operacao) OVER monthly AS total
FROM Movimentacao
WHERE movimentacao IN ('Dividendo', 'Rendimento')
WINDOW monthly AS (PARTITION BY date_trunc('month', data), movimentacao)
ORDER BY mes, categoria;
You can use a similar query for sqlite, except that the datetime functions are different:
SELECT DISTINCT
date(data, 'start of month') AS mes,
CASE
WHEN movimentacao = 'Dividendo' THEN 'Stocks'
ELSE 'FIIs'
END AS categoria,
SUM(valor_da_operacao) OVER monthly AS total
FROM Movimentacao
WHERE movimentacao IN ('Dividendo', 'Rendimento')
WINDOW monthly AS (PARTITION BY date(data, 'start of month'), movimentacao)
ORDER BY mes, categoria;
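The sqlite variant can be sanity-checked directly with Python's built-in sqlite3 module and an in-memory database (hedged sketch; the sample rows below are invented to mimic the question's expected output, and window functions require SQLite 3.25 or newer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Movimentacao (data TEXT, movimentacao TEXT, valor_da_operacao REAL)"
)
conn.executemany(
    "INSERT INTO Movimentacao VALUES (?, ?, ?)",
    [
        ("2019-11-05", "Rendimento", 411.97),
        ("2019-12-02", "Rendimento", 368.21),
        ("2019-12-10", "Dividendo", 1542.08),
        ("2020-01-15", "Dividendo", 401.06),
        ("2020-01-20", "Reembolso", 99.99),  # filtered out by the WHERE clause
    ],
)
rows = conn.execute("""
    SELECT DISTINCT
        date(data, 'start of month') AS mes,
        CASE WHEN movimentacao = 'Dividendo' THEN 'Stocks' ELSE 'FIIs' END AS categoria,
        SUM(valor_da_operacao) OVER monthly AS total
    FROM Movimentacao
    WHERE movimentacao IN ('Dividendo', 'Rendimento')
    WINDOW monthly AS (PARTITION BY date(data, 'start of month'), movimentacao)
    ORDER BY mes, categoria
""").fetchall()
print(rows)
```

The output is one (month, category, total) tuple per month/category pair present in the data.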
Well, almost the same results. With your Python code, if there is any month in which there is a Dividendo but no Rendimento, or vice versa, both "FIIs" and "Stocks" get appended to assets anyway — one of them having a 0 value. With some creativity and perhaps some Common Table Expressions, the SQL query can be tweaked to produce such 0 values as well:
WITH MovimentacaoRelevante AS (
SELECT date(data, 'start of month') AS mes,
movimentacao,
valor_da_operacao
FROM Movimentacao
WHERE movimentacao IN ('Dividendo', 'Rendimento')
), MovimentacaoRelevanteComZeros AS (
SELECT * FROM MovimentacaoRelevante
UNION ALL
SELECT DISTINCT mes, 'Dividendo', 0 AS total FROM MovimentacaoRelevante
UNION ALL
SELECT DISTINCT mes, 'Rendimento', 0 AS total FROM MovimentacaoRelevante
) SELECT DISTINCT
mes,
CASE
WHEN movimentacao = 'Dividendo' THEN 'Stocks'
ELSE 'FIIs'
END AS categoria,
SUM(valor_da_operacao) OVER monthly AS total
FROM MovimentacaoRelevanteComZeros
WINDOW monthly AS (PARTITION BY mes, movimentacao)
ORDER BY mes, categoria; | {
"domain": "codereview.stackexchange",
"id": 43711,
"tags": "python, database, django"
} |
Loss function in GAN | Question: Since the aim of a Discriminator is to output 1 for real data and 0 for fake data, hence, the aim is to increase the likelihood of true data vs. fake one. In addition, since maximizing the likelihood is equivalent to minimizing the log-likelihood, why are we updating the discriminator by ascending its stochastic gradient as mentioned in Algorithm 1 in https://arxiv.org/pdf/1406.2661.pdf. Shouldn't we update the discriminator by descending its stochastic gradient?
Any help is much appreciated!
Answer: In algorithm 1 of the original GAN article (https://arxiv.org/pdf/1406.2661.pdf), the discriminator is said to be updated by "ascending its stochastic gradient". This is referring to equation 1:
$$
\min_G \max_D V(D, G)=
\mathbb{E}_{x\sim p_{data}(x)}[\log D(x)]
+ \mathbb{E}_{z\sim p_z(z)}[\log(1 - D(G(z)))]
$$
When we want to minimize something, we do gradient descent. When we want to maximize something, we do gradient ascent. In this context, we want to maximize $V(D, G)$ with respect to the discriminator $D$, that is, the $\max_D V(D, G)$ part from equation 1.
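To see concretely that "ascending" and the more familiar "descending" formulations are the same update, here is a minimal Python sketch with a made-up toy objective (not the GAN objective itself): maximizing f by gradient ascent is identical to minimizing -f by gradient descent. This is also how most frameworks implement the discriminator update in practice.

```python
# Toy objective f(x) = -(x - 3)^2, which has its maximum at x = 3.
def f_grad(x):            # gradient of f
    return -2.0 * (x - 3.0)

def neg_f_grad(x):        # gradient of -f
    return 2.0 * (x - 3.0)

lr = 0.1
x_ascent = x_descent = 0.0
for _ in range(200):
    x_ascent = x_ascent + lr * f_grad(x_ascent)         # gradient ascent on f
    x_descent = x_descent - lr * neg_f_grad(x_descent)  # gradient descent on -f
print(x_ascent, x_descent)  # both converge toward the maximizer x = 3
```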
I recommend you have a look at NIPS 2016 GAN tutorial video and text. They are very enlightening. | {
"domain": "datascience.stackexchange",
"id": 1926,
"tags": "unsupervised-learning, gan"
} |
When does a hard drive have the most entropy? | Question: Entropy is a measure of disorder, the higher the entropy the greater the disorder.
Uniform systems have less entropy than random systems.
Data is expressed in binary in computers, as ones and zeros. Eight bits make a byte and data is stored in bytes; if a stream of data is less than a byte, it is padded by adding zeros before it. An unsigned byte can express numbers between 00000000 and 11111111, which is 0 to 255 in decimal and 00 to FF in hexadecimal.
A formatted "empty" disk has uniform bytes; uniform bytes carry no information. By writing data to it, its randomness is increased and thus its entropy is increased.
Now, I am wondering: 1, do freshly formatted disks, with all bytes the same (i.e. 00), have an entropy of 0?
2, Let's say the disk with all bytes 00 is empty, and when all bytes of it are used the disk is full. When does the disk have the maximum entropy? When it is half full? Or when equal numbers of all 256 possible byte values are randomly distributed?
By entropy I mean this: https://en.wikipedia.org/wiki/Entropy_(order_and_disorder)
By hard drive I specifically mean Hard Disk Device, which is the spinning electro-magnetic disk.
As far as I know, binary data is stored in such a device by changing the magnetic microstate of the platter, changing magnetic polarity of a small area by magnetization from the read/write head. This process requires energy. Though information itself is hard to be associated with energy, storing, retrieving and processing information all requires energy.
So the individual bytes of information do have energy, magnetic energy, which is a form of electro-magnetic energy, the energy is equal to energy difference of the hard disk caused by the process to store information. And they do have mass, no matter how small it is, by E=m*c^2.
And there are four elemental forces: electromagnetic, gravitation, strong nuclear and weak nuclear. Every force in the macrocosm that is not gravitation is EM, and thermodynamic energy is the average kinetic energy of the random motion of the constituent particles, particles are waves, and changes of EM cause changes of thermodynamic energy, so thermodynamic energy is a form of EM.
Thus the binary data is related to thermodynamic energy, QED.
Answer: "Entropy is a measure of disorder" is not a useful statement, because when you follow that path what usually happens is that you define entropy by disorder and disorder by entropy.
A more useful way to understand it is to note that entropy is defined as such only because there are far more possibilities we consider disordered than ordered. Let's clear this up with an example.
Imagine a child's nursery; there are lots of toys and clothes in the room. Note that we can arrange them in nearly infinite ways. Imagine a computer simulation randomly rearranging the stuff in the room. By random I truly mean random: the rugs can be on the walls, the crib can be upside down, the toys all over the place, etc. We can place the toys in countless ways in the room, but most of these arrangements we consider disordered. Only a few are considered organized: the few where the rug is on the floor, the toys are in their proper chest, etc.
This is why the entropy of the universe always increases: the arrangements of atoms that construct something we consider ordered, say a human, are so few compared to the ones we consider random lumps of atoms that the probability of the atoms spontaneously moving to an ordered state is practically zero.
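To connect this to question 2 above, a quick count over 8-bit strings makes the idea concrete: every individual string is equally likely, but the classes of strings grouped by how many 1s they contain differ enormously in size, and the half-ones class is the largest in this counting sense. A minimal Python sketch:

```python
from math import comb

# Number of 8-bit strings having a given count of 1 bits.
counts = {ones: comb(8, ones) for ones in range(9)}
print(counts)  # {0: 1, 1: 8, 2: 28, 3: 56, 4: 70, 5: 56, 6: 28, 7: 8, 8: 1}
```

The all-zeros class contains exactly one string, while the four-ones class contains 70, even though any single string from either class has the same 1-in-256 probability.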
But note that if we were to consider a specific arrangement an ordered one, the law would not break. For an 8-bit string there are 256 possible arrangements, but 00000000 isn't any different from the others: 00000000 and 10101010 both have a 1 in 256 chance of occurring. So there isn't any specific order of bits that you can consider more ordered or disordered, unless you single out specific arrangements that you consider to be "ordered". | {
"domain": "physics.stackexchange",
"id": 75165,
"tags": "electromagnetism, thermodynamics, entropy"
} |
Why is q used for specific humidity? | Question: $q$ is the symbol used for specific humidity in many textbooks and papers (at least in meteorology and climatology, I'm not familiar with it in other disciplines). Where does this symbol come from, and what does it stand for?
Answer: I haven't been able to find any particular references which hold themselves out as the origin of $q$ as the symbol of choice for specific humidity, but the origin of the term "specific humidity" itself appears to have been in an 1884 article by Dr. W. von Bezold, translated into English and published in the Smithsonian Miscellaneous Collections (Vol. 51) in 1910, and is defined therein as a quantity:
the quantity of vapor contained in the unit mass of moist air which can be conveniently called the "specific humidity"
(p.327)
At that time, $q$ was not used, but instead von Bezold chose $y$ as the symbolic representation of specific humidity in his subsequent equations:
$y$ the specific humidity or the quantity of vapor in a unit mass of moist air expressed in the fractional parts of this unit.
(p.328)
The first appearance of $q$ as a symbol for specific humidity that I've found is in Chapter 5 of Physics of the Earth, Vol. 1 (p. 151), published in 1931. Here, the author uses $q$ to mean specific humidity as though it is common knowledge, using it in equations before even defining it for the reader:
This makes $q$ a readily determined and quite conservative characteristic of any unsaturated air mass.
Followed a few sentences later by:
The relative humidity $f$ together with the specific humidity $q$ and absolute humidity $\rho_{w}$ are the quantities commonly used as measures of the moisture content of the atmosphere.
(p.137)
This 1931 reference is likely not the earliest use of $q$ to mean specific humidity, but is the earliest I was able to uncover.
Thus, it would seem that the use of $q$ as a commonly accepted symbolic representation of specific humidity originated sometime between 1884 and 1931. It seems quite plausible that, given the use of the word "quantity" in defining the term originally (and in later references that I'm unable to add links for), the choice of $q$ as the symbol for specific humidity finds its source in this definition.
Additional reference: Temperature inversions in relation to frosts, McAdie, Alexander, Cambridge, 1915. (p.5) (Full text at Hathi Trust) | {
"domain": "earthscience.stackexchange",
"id": 218,
"tags": "meteorology, terminology, climatology, humidity"
} |
Is VQE a class of algorithms or a specific algorithm? | Question: Is VQE a class of algorithms or a specific algorithm? For example, is QAOA a VQE or is VQE an algorithm distinct from QAOA that solves the same class of problems?
If VQE is a specific algorithm, what are its defining features that distinguish it from other algorithms such as QAOA?
Answer: I view QAOA as an algorithm for solving (approximately) a special class of problems, namely combinatorial problems and VQE as a possible subroutine to QAOA (but not necessarily as in the case of MaxCut). Let me explain
The VQE - Variational Quantum Eigensolver - solves the problem of approximating the smallest eigenvalue of some Hermitian operator $H$ which we usually just call Hamiltonian. As a byproduct, we also obtain a classical description of the approximate ground state. It does so by classically varying over efficiently preparable ansatz states $|\psi(\theta)\rangle$ and a quantum subroutine determines the expectation value
$$\mu=\langle \psi (\theta)|H|\psi(\theta)\rangle$$ by a sampling procedure.
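As a toy illustration of this loop, here is a classical Python simulation for a single qubit (hedged sketch: the Hamiltonian is assumed to be Pauli-Z, whose smallest eigenvalue is -1, and a crude grid search stands in for the classical optimizer; a real VQE would estimate the expectation value by sampling on hardware):

```python
import math

# Ansatz: |psi(theta)> = cos(theta/2)|0> + sin(theta/2)|1>,
# so <psi(theta)|Z|psi(theta)> = cos^2(theta/2) - sin^2(theta/2) = cos(theta).
def expectation(theta):
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return c * c - s * s

# Stand-in for the classical outer optimizer: vary theta over a grid.
grid = [i * 2 * math.pi / 1000 for i in range(1000)]
best_theta = min(grid, key=expectation)
print(expectation(best_theta))  # close to -1, the smallest eigenvalue of Z
```

The optimizer converges on theta near pi, where the ansatz is (up to phase) the ground state |1>.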
In QAOA (Quantum Approximate Optimization Algorithm), your cost function (or Hamiltonian if you will) is given by $H=\sum_i C_i(z)$ where the $C_i(z)$ are operators diagonal in the computational basis. Importantly, the eigenbasis of $H$ is thus the computational basis and one of the computational eigenstates encodes the solution to the problem! This, is not the case in VQE!
So how does QAOA proceed? On a high level, without going into too many details, it proceeds very similarly to VQE:
Optimize over variational parameters in some ansatz state. The state is called $|\gamma, \beta \rangle$ in QAOA and it ought to minimize/maximize the expectation value
$$\langle \gamma, \beta|H|\gamma, \beta\rangle$$
In this step, VQE can be used as a subroutine as this is precisely the task VQE can achieve (finding good parameters $\beta, \gamma$) but it might not be necessary. In the original QAOA paper, the authors argued that for particular instances of MaxCut (i.e. some particular classes of graphs), an efficient classical optimization method exists; that is, they could optimize over the ansatz state without ever preparing it (no quantum device involved)!
Here, we necessarily go quantum (here you need a quantum device): Prepare the optimized ansatz state $|\psi_{opt} \rangle$ over and over again and measure it in the computational basis until you statistically converged enough to be able to pick the right computational basis state encoding the solution with high probability. (Note that because of the previous optimization routine, the state $|\psi_{opt} \rangle$ should have a large overlap with the eigenstate to the smallest eigenvalue which I stress once again is one of the basis vectors of the computational basis)
How is QAOA approximate you might ask now: Well, depending on how much computational resources you are willing to invest into finding good parameters, your $|\psi_{opt} \rangle$ might vary in quality. A bad quality state might not be close enough fidelity-wise to the eigenstate one is looking for. So the algorithm is approximate in the sense, that it tries to find a trade-off in the optimization procedure between optimization rounds and fidelity of the optimized state.
Note, that QAOA is just one possible application of VQE and there are many more, first and foremost quantum chemistry problems! | {
"domain": "quantumcomputing.stackexchange",
"id": 1020,
"tags": "quantum-algorithms, qaoa, vqe"
} |
Is $A=\{ w \in \{a,b,c\}^* \mid \#_a(w)+ 2\#_b(w) = 3\#_c(w)\}$ a CFG? | Question: I wonder whether the following language is a context free language:
$$A = \{w \in \{a,b,c\}^* \mid \#_a(w) + 2\#_b(w) = 3\#_c(w)\}$$
where $\#_x(w)$ is the number of occurrences of $x$ in $w$.
I can't find any word that would be useful to refute by the pumping lemma, on the other hand I haven't been able to find a context free grammar generating it. It looks like it has to remember more than one PDA can handle.
What do you say?
Answer: Yes, $A$ is a CFL. Use the context-free language (with the notation $|x|_0$ meaning the number of 0's in $x$):
$B=\{x\in \{0,1\}^*:|x|_0=3\cdot|x|_1\}$
and the morphism $f:\{a,b,c\}^*\rightarrow\{0,1\}^*:$ $ f(a)=0, f(b)=00, f(c)=1$
so that
$A=f^{-1}(B)$.
Since the CFL's are closed under inverse morphism, $A$ is a CFL.
The proof that $B$ is a CFL is a bit tricky. Let's do it for the simpler case $C=\{x:|x|_0=|x|_1\}$, which can fairly easily be generalized to work for $B$ (or use another morphism).
Design a PDA which keeps track of $|x|_0-|x|_1$ at all points of the input, using the stack to keep track of the difference, having read $x$. The PDA accepts only when that difference is zero, that is, the stack is empty. The idea is to push a $0$ on seeing a $0$ on the input string, and pop a $0$ when seeing a $1$ in the input. The problem is that the difference might go negative at times, and you can't pop an empty stack. In that case, however, the PDA goes into another mode (a different set of states) where it pushes a $1$ on seeing an input of $1$ and pops a $1$ on seeing an input of $0$. So the second mode handles the negative case. It alternates between the two modes, accepting only if the stack is totally empty, that is, in-between the modes.
Or, you can do it with the grammar: $S\rightarrow 0S1|1S0|SS|\epsilon$. That clearly generates only strings of C, but proving that it generates all of $C$ is a bit involved, but here's a sketch, by induction on string length.
An isosymbolic string $x$ (that is $|x|_0 = |x|_1$) could be null, in which case the $\epsilon$ production applies giving the basis of the induction. Otherwise $x$ either starts and ends in different symbols (one end $0$ and the other end $1$), or the same symbol (both $0$ or both $1$). In the not-equal case, the last production to apply is either the $0S1$ or the $1S0$ production, allowing us to strip off the first and last symbols, getting a shorter isosymbolic string, so the induction applies.
In the equal case, as with the PDA, we keep track of $|y|_0-|y|_1$ as we move from left to right through prefixes $y$ of $x$. If the first and last symbols of $x$ are the same, it's fairly easy to see that that expression must be zero somewhere in the interior of the string. For example, if $x$ starts and ends with $0$, then after the first symbol the difference is $+1$ and before the last symbol it is $-1$, so it must have crossed zero in the interior. Split the string in two at that spot (reversing the $SS$ production) and the two halves, both shorter than $x$, are both isosymbolic. So the induction works there too, and we are done. | {
"domain": "cs.stackexchange",
"id": 180,
"tags": "formal-languages, context-free"
} |
Movies App with Vue 3 and TypeScript | Question: I have made a Movies App with Vue 3, TypeScript and The Movie Database (TMDB) API. For aesthetics, I rely on Bootstrap 5.
In src\App.vue I have:
<template>
<TopBar />
<router-view />
<AppFooter />
</template>
<script lang="ts">
import { defineComponent } from 'vue';
// Import components
import TopBar from './components/TopBar.vue';
import AppFooter from './components/AppFooter.vue';
export default defineComponent({
// Register components
components: {
TopBar,
AppFooter
}
});
</script>
<style lang="scss">
.app-logo {
max-height: 25px;
width: auto;
}
// Layout
#app {
min-height: 100vh;
height: auto;
display: flex;
flex-direction: column;
}
</style>
The Navbar (src\components\TopBar.vue):
<template>
<nav class="navbar sticky-top navbar-expand-md shadow-sm">
<div class="container-fluid">
<router-link class="navbar-brand" to="/">
<img src="../assets/logo.png" class="app-logo" alt="App Logo">
</router-link>
<button class="navbar-toggler" type="button" data-bs-toggle="collapse" data-bs-target="#mainNavigation" aria-controls="mainNavigation" aria-expanded="false" aria-label="Toggle navigation">
<span class="navbar-toggler-icon"></span>
</button>
<div class="collapse navbar-collapse" id="mainNavigation">
<ul class="navbar-nav pe-md-1 navbar-expand-md">
<li class="nav-item">
<router-link class="nav-link" :class="$route.name == 'home' ? 'active':''" to="/">Now Playing</router-link>
</li>
<li class="nav-item">
<router-link class="nav-link" :class="$route.name == 'top_rated' ? 'active':''" to="/top-rated">Top Rated</router-link>
</li>
</ul>
<form ref="searchForm" class="search_form w-100 mx-auto mt-2 mt-md-0">
<div class="input-group">
<input v-on:keyup="debounceMovieSearch" v-model="searchTerm" class="form-control search-box" type="text" placeholder="Search movies...">
<div class="input-group-append">
<button class="btn" type="button">
<font-awesome-icon :icon="['fas', 'search']" />
</button>
</div>
</div>
<div v-if="isSearch" @click="isSearch = false" class="search-results shadow-sm">
<div v-if="this.movies.length">
<router-link v-for="movie in movies.slice(0, 10)" :key="movie.id" :to="`/movie/${movie.id}`">
<SearchItem :movie="movie" />
</router-link>
</div>
<div v-else>
<p class="m-0 p-2 text-center">No movies found for this search</p>
</div>
</div>
</form>
</div>
</div>
</nav>
</template>
<script lang="ts">
import { defineComponent, ref } from 'vue';
import axios from 'axios';
import env from '../env';
import SearchItem from './SearchItem.vue';
export default defineComponent({
name: 'TopBar',
components: {SearchItem},
data: () => ({
searchForm: ref(null),
isSearch: false,
searchTerm: '',
timeOutInterval: 1000,
movies: []
}),
mounted() {
this.windowEvents();
},
methods: {
windowEvents() {
// Check for click outside the search form
window.addEventListener('click', (event) => {
if (!(this.$refs.searchForm as HTMLFormElement).contains(event.target as Node|null)) {
this.isSearch = false;
}
});
},
debounceMovieSearch() {
setTimeout(this.doMovieSearch, this.timeOutInterval)
},
doMovieSearch() {
if (this.searchTerm.length > 2) {
this.isSearch = true;
axios.get(`${env.api_url}/search/movie?api_key=${env.api_key}&query=${this.searchTerm}`).then(response => {
this.movies = response.data.results;
})
.catch(err => console.log(err));
}
},
}
});
</script>
In the views directory I have the HomeView.vue that displays a list of movies, via the MoviesList.vue component.
In HomeView.vue:
<template>
<div class="container">
<h1 class="page-title">{{ pageTitle }}</h1>
<MoviesList listType="now_playing" />
</div>
</template>
<script lang="ts">
import { defineComponent } from 'vue';
import MoviesList from '../components/MoviesList.vue';
export default defineComponent({
name: 'HomeView',
components: {
MoviesList
},
data: () => ({
pageTitle: "Now Playing"
})
});
</script>
In MoviesList.vue:
<template>
<div class="row list">
<div
v-for="movie in movies"
:key="movie.id"
class="col-xs-12 col-sm-6 col-lg-4 col-xl-3"
>
<MovieCard :movie="movie" :genres="genres" :showRating="true" />
</div>
</div>
</template>
<script lang="ts">
import { defineComponent } from "vue";
import axios from "axios";
import env from "../env";
import MovieCard from "./MovieCard.vue";
export default defineComponent({
name: "MoviesList",
components: { MovieCard },
props: {
listType: {
type: String,
required: true,
},
},
data: () => ({
searchTerm: "",
movies: [],
genres: [],
}),
mounted() {
this.listMovies();
this.getGenres();
},
methods: {
listMovies() {
axios
.get(
`${env.api_url}/movie/${this.$props.listType}?api_key=${env.api_key}`
)
.then((response) => {
this.movies = response.data.results;
})
.catch((err) => console.log(err));
},
getGenres() {
axios
.get(`${env.api_url}/genre/movie/list?api_key=${env.api_key}`)
.then((response) => {
this.genres = response.data.genres;
})
.catch((err) => console.log(err));
},
},
});
</script>
<style scoped lang="scss">
[class*="col-"] {
display: flex;
flex-direction: column;
margin-bottom: 30px;
}
</style>
The above component is reusable. I use it in the Top Rated Movies view, by providing a different value (from the one used on the Homepage view) for the listType prop:
<template>
<div class="container">
<h1 class="page-title">{{ pageTitle }}</h1>
<MoviesList listType="top_rated" />
</div>
</template>
<script lang="ts">
import { defineComponent } from "vue";
import MoviesList from "../components/MoviesList.vue";
export default defineComponent({
name: "TopRatedMoviesView",
components: {
MoviesList,
},
data: () => ({
pageTitle: "Top Rated",
})
});
</script>
In src\components\MovieCard.vue I have:
<template>
<div class="movie card" @click="showDetails(movie.id)">
<div class="thumbnail">
<img
:src="movieCardImage"
:alt="movie.title"
class="img-fluid"
/>
</div>
<div class="card-content">
<h2 class="card-title">{{ movie.title }}</h2>
<p class="card-desc">{{ movie.overview }}</p>
<span v-if="showRating" :title="`Score: ${movie.vote_average}`" class="score">{{ movie.vote_average}}</span>
</div>
<div class="card-footer">
<p class="m-0 release">
Release date: {{ dateTime(movie.release_date) }}
</p>
<p v-if="movieGenres" class="m-0 pt-1">
<span class="genre" v-for="genre in movieGenres" :key="genre.id">
{{ genre.name }}
</span>
</p>
</div>
</div>
</template>
<script lang="ts">
import { defineComponent } from "vue";
import moment from "moment";
export default defineComponent({
name: "MovieCard",
props: {
movie: {
type: Object,
required: true,
},
genres: {
type: Object,
required: false,
},
showRating: {
type: Boolean,
default: false,
}
},
data: () => ({
genericCardImage: require("../assets/generic-card-image.png"),
}),
methods: {
dateTime(value: any) {
return moment(value).format("MMMM DD, YYYY");
},
showDetails(movie_id: any) {
let movie_url = `/movie/${movie_id}`;
window.open(movie_url, "_self");
},
},
computed: {
movieCardImage() {
return !this.movie?.backdrop_path
? this.genericCardImage
: `https://image.tmdb.org/t/p/w500/${this.movie?.backdrop_path}`;
},
movieGenres() {
if (this.genres) {
let genres = this.genres;
return this.movie?.genre_ids
.filter((genre_id: number) => genre_id in genres)
.map((genre_id: number) => genres[genre_id])
.filter((genre: { name: string }) => genre.name.length > 0);
} else {
return [];
}
},
},
});
</script>
The app is significantly bigger: there is a Movie details view and an Actor details view, but instead of pasting it all here, I have put together a Stackblitz.
Questions
Is there any code redundancy (and ways to reduce it)?
Do you see any bad practices?
Any suggestions for code optimization, modularization and reusability?
Answer: Here is my feedback.
You can create a custom directive for the outside click (e.g. a v-click-outside directive), instead of wiring a window listener in the component.
In the MovieCard component you can replace template logic with computed properties, for example the date formatting call dateTime(movie.release_date).
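For that computed-property suggestion, a minimal sketch. The formatter below is a hand-rolled stand-in for the moment(value).format("MMMM DD, YYYY") call, and the names (formatReleaseDate, formattedReleaseDate) are illustrative, not part of the original code:

```javascript
// Hypothetical stand-in for moment(value).format("MMMM DD, YYYY"),
// so the date logic lives in one small, testable function.
const MONTHS = ["January", "February", "March", "April", "May", "June",
  "July", "August", "September", "October", "November", "December"];

function formatReleaseDate(isoDate) {
  const [year, month, day] = isoDate.split("-").map(Number);
  return `${MONTHS[month - 1]} ${String(day).padStart(2, "0")}, ${year}`;
}

// In MovieCard, expose it as a computed property instead of calling
// dateTime(movie.release_date) inside the template:
const computedSketch = {
  computed: {
    formattedReleaseDate() {
      return formatReleaseDate(this.movie.release_date);
    },
  },
};
```

The template then reads {{ formattedReleaseDate }}, and the formatting logic is cached and testable in isolation.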
Regarding TypeScript, don't overuse any: movie_id has a known type (it may be a string or a number).
In actorDetails you used useRoute; use* functions are composables intended for the Composition API. In the Options API it is better to use this.$route directly.
For reusability, you can use a mixin that handles all the API requests. You can figure out how it can be implemented.
It may feel a little awkward, but you could use a single BaseCard component, parameterized by name, for both actors and movies.
You can register Axios as a global property, like $api or $axios.
Lastly, I think you should move from Vue CLI to Vite, and use the Composition API instead of the Options API; it gives a lot of flexibility and reusability. Use Tailwind (optional). | {
"domain": "codereview.stackexchange",
"id": 44732,
"tags": "javascript, typescript, vue.js"
} |
Why do water mains break in the winter? | Question: It may just be my perception, but it seems like water main breaks (at least in Pittsburgh PA) are more common in the winter during the cold weather. It may just that they are more news worthy in the winter (water+cold=ice > news).
Are water mains more likely to break in the winter? If so what can be done to limit or prevent the occurrence?
Answer: Cold makes things shrink. Despite being buried — presumably below the frost line — both the ground outside the pipe and the water inside the pipe are much colder in winter than in summer. This causes the pipe segments to shrink, putting tension on the joints, and increasing the chances that a weak one will fail. By contrast, warm weather puts the joints under compression, helping to seal any leaks.
Also, the ground does shift during freeze/thaw cycles (a.k.a. "frost heaves"), and this can cause additional stresses on pipes, even below the frost line — especially if an old leak has washed out some of the soil supporting the pipe.
There's really nothing that can be done that I know of to mitigate this for existing pipes. Any preventative measures must be put in place when the pipe is first installed. For example, putting a layer of crushed stone around a pipe helps to decouple it both mechanically and thermally from the surrounding soil. | {
"domain": "engineering.stackexchange",
"id": 691,
"tags": "pipelines, infrastructure"
} |
What keeps a gas giant from falling in on itself? | Question: There is not enough gravity at the center to start nuclear fusion, but it seems that there would be plenty enough to collapse the planet.
Answer: Pulsar's answer is indeed correct, but let me expand a bit more.
What happens when a gas giant shrinks?
A uniform sphere has a gravitational self-energy of $-\frac{3GM^2}{5R}$. If we decrease its radius, this potential energy decreases as well, and the difference is turned into thermal energy. Although gas giants and stars are not uniform spheres, their gravitational binding energy is still proportional to $\frac{GM^2}{R}$; thus if the radius decreases, energy is released, which raises the temperature in turn.
What happens when the temperature increases?
Assuming the gas in those planets obeys the ideal gas law $$PV=nRT$$ (where $R$ is not the radius but the molar gas constant $R=8.314\,\text{J K}^{-1}\,\text{mol}^{-1}$), it's obvious that when $T$ increases and $V$ decreases (due to the shrinking in the previous section) $P$ must increase. Note that most real gases behave qualitatively like an ideal gas, so this is not a crazy assumption.
So what is the big picture?
The planet shrinks a little bit, the potential difference turns into thermal energy and its temperature rises. The rise in temperature will cause the pressure to rise and prevent the planet from shrinking further (holding the planet in hydrostatic equilibrium). However, the planet also loses energy due to EM radiation as well, so it will continuously shrink and radiate. The process is called Kelvin–Helmholtz mechanism.
For instance, Jupiter is shrinking the tiny bit of $2\,\text{cm}$ each year. Although you might think this is really nothing, the amount of heat produced is similar to the total solar radiation it receives.
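As a rough sanity check of those Kelvin–Helmholtz numbers, here is a back-of-the-envelope sketch using the uniform-sphere energy $-\frac{3GM^2}{5R}$ from above. This is a crude upper-bound model (the real interior is centrally condensed, and part of the released energy heats the interior rather than radiating away), so only the order of magnitude is meaningful:

```python
# Back-of-the-envelope Kelvin-Helmholtz estimate for Jupiter.
# Uniform-sphere model: E = -3*G*M^2/(5*R), so |dE/dR| = 3*G*M^2/(5*R^2).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 1.898e27         # Jupiter's mass, kg
R = 6.99e7           # Jupiter's mean radius, m
SECONDS_PER_YEAR = 3.156e7
dR_dt = 0.02 / SECONDS_PER_YEAR    # ~2 cm of shrinkage per year, in m/s

dE_dR = 3 * G * M**2 / (5 * R**2)  # joules released per metre of shrinkage
power = dE_dR * dR_dt              # watts

print(f"{power:.1e} W")            # ~1.9e19 W in this crude model
```

Detailed interior models (centrally condensed, with part of the released energy going into internal heating) bring this down to Jupiter's measured internal heat output of a few times $10^{17}\,$W, which is indeed comparable to the sunlight it absorbs.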
Addendum (Nov. 2020)
As Rob Jeffries has correctly pointed out, what ultimately keeps a gas giant from collapsing indefinitely is the electron degeneracy pressure. Eventually because of high pressure the hydrogen and other elements in the deep interior of the gas giant will undergo a phase transition to a metallic phase and will not compress any further. | {
"domain": "physics.stackexchange",
"id": 13806,
"tags": "thermodynamics, newtonian-gravity, pressure, astrophysics, planets"
} |
If non-array property exists convert it to one and push new value | Question: Is there a simpler/shorter/better way to do this?
let obj={a:1,b:[1,2]}
function add(key,value){
if(!obj[key].push){
obj[key]=[obj[key]]
}
obj[key].push(value)
}
Answer: Basically after reading over the code, I interpret that the conditional (i.e. !obj[key].push) checks if the value at the given key is not an array. A more robust way to do that is to use (the negated value of) Array.isArray() instead. That may not be any shorter, but perhaps a better way to determine if the property at key is an array.
var obj={a:1,b:[1,2]}
function add(key,value){
if(!Array.isArray(obj[key])){
obj[key]=[obj[key]]
}
obj[key].push(value)
}
add('a',3);
console.log(obj);
It would be difficult to prevent the re-assignment of the property .push (see example below). .push could be assigned to something other than a function, like an integer, string or object, or a function that does something other than push the supplied argument on the array.
var obj={a:1,b:[1,2]}
function add(key,value){
if(!obj[key].push){
obj[key]=[obj[key]]
}
obj[key].push(value)
}
obj.b.push = undefined;
add('b',3);
console.log(obj);
Edit
insertusernamehere made a good point in a comment: Perhaps it would be wise to guard against the case where the obj[key] is undefined. The current code would add that to the array, which is likely not preferable.
There are multiple ways to achieve this, including calling obj.hasOwnProperty(), checking the array returned by Object.keys(obj) does include key, etc.
var obj={a:1,b:[1,2]}
function add(key,value){
    if (!obj.hasOwnProperty(key)) {
obj[key]=[];
}
if(!Array.isArray(obj[key])){
obj[key]=[obj[key]]
}
obj[key].push(value)
}
add('a',3);
add('c',4);
console.log(obj); | {
"domain": "codereview.stackexchange",
"id": 27945,
"tags": "javascript, array, ecmascript-6, properties"
} |
Accounting for pressure energy in Euler turbine/pump equation | Question: For all the analysis to find work done by a compressor or work done on a turbine, the book I'm reading (Fundamentals of Turbomachinery by Venkanna B.K) uses the Euler turbine and pump equation, $$W=\dot{m}(V_{w1}U_1\pm V_{w2}U_2) $$ where $V_w$ is the whirl velocity of fluid at inlet and exit, and $U$ is the mean rotational speed of the rotor blades and inlet and exit.
It is based on the conservation of angular momentum of the fluid by drawing velocity triangles.
While this might give the value of work done due to momentum of the fluid, what about the work done by the pressure energy in the fluid or work done to increase the pressure energy of the fluid? Especially in cases like Francis turbine and axial compressors where change in pressure energy plays a big role, how can we consider only the momentum of the fluid in our analysis? I'm guessing work needs to be done to increase the pressure energy as well/work is done by pressure energy in turbines like Francis turbine.
Maybe because of the complicated aerofoil shapes of the blades it's hard to take an analytical approach, but shouldn't we at least account for a factor of change in pressure energy?
Answer: If you examine a derivation of the Euler work equation (http://web.mit.edu/16.unified/www/SPRING/propulsion/notes/node91.html), you will see that the change in angular momentum of the fluid is proportional to the change in enthalpy of the fluid. Remember the definition of enthalpy is the flow work and the internal energy. Therefore, the enthalpy contains the change in pressure via the flow work term. This is why pressure does not directly appear in the Euler work equation, but is physically accounted for. I recommend stepping through a derivation of the equation yourself to gain a better understanding. | {
"domain": "engineering.stackexchange",
"id": 3407,
"tags": "fluid-mechanics, pumps, turbines, turbomachinery"
} |
Infant tissues, organs, body parts or reflexes in an adult organism | Question: What is the phenomenon, when a normal useful tissue, organ, body part or an inborn reflex or instinct existed in the infant organism and normally should disappear or at least completely lose its function in the adult organism but it won't? Atavism and vestigiality don't seem the right terms. Is there a specific word that refers to the phenomenon?
Answer: "Remnant" is the closest term I can think of; "residual" is also used. You can find both used to describe the foramen ovale for example, which typically closes at birth (and has served its purpose at that point), but remains patent in a substantial proportion of adults, sometimes causing symptoms.
However, these terms are of course not specific for the circumstance you describe; I don't know of any that is. | {
"domain": "biology.stackexchange",
"id": 11218,
"tags": "anatomy"
} |
Refactor to remove goto | Question: Is it possible to implement this function without goto, while keeping the code correct and readable?
using iter = std::string::const_iterator;
// Skip multiline comment in a source string.
// Initial iterator is at the first character after the "/*"
iter skipMultilineComment( iter i, iter fileEnd )
{
while( true )
{
const char c1 = *i;
if( c1 != '*' )
{
i++;
if( i != fileEnd )
continue;
break;
}
// Found '*' character
FoundAsterisk:
i++;
if( i == fileEnd )
break;
const char c2 = *i;
if( c2 == '*' )
{
/* Found another '*' character; can be more: ****/
goto FoundAsterisk;
}
if( c2 != '/' )
{
i++;
if( i != fileEnd )
continue;
break;
}
// Finally, found the closing token, "*/"
i++;
return i;
}
logError( "fatal error C1071: unexpected end of file found in comment" );
throw E_INVALIDARG;
}
P.S. That’s not a test assignment or something, it’s production code. The input is GLSL, but the comments there are the same as in C++.
Answer: Yes, of course. (It's always possible to eliminate unstructured constructs by use of structured programming.) I recommend Kernighan & Plauger's The Elements of Programming Style for developing your own sense of structured programming.
Come on this journey with me!
Step 1: That confusing if at the top of the loop.
while (true) {
const char c1 = *i;
if( c1 != '*' )
{
i++;
if( i != fileEnd )
continue;
break;
}
c1 is never used again, so we can eliminate it. Then, breaking the loop means going to the end of the loop and setting a "don't do this loop again" flag; so let's make that flag; we'll call it done. Then continueing the loop means going to the end without setting that flag. So at this point we have
template<class It>
It skipMultilineComment(It it, It fileEnd)
{
bool done = false;
while (!done) {
if (*it != '*') {
++it;
done = (it == fileEnd);
} else {
// Found '*' character
FoundAsterisk:
++it;
if (it == fileEnd) break;
if (*it == '*') {
/* Found another '*' character; can be more: ****/
goto FoundAsterisk;
} else if (*it != '/') {
++it;
if (it != fileEnd) continue;
break;
}
// Finally, found the closing token, "*/"
++it;
return it;
}
}
logError( "fatal error C1071: unexpected end of file found in comment" );
throw E_INVALIDARG;
}
Notice that I've cleaned up some whitespace style and renamed i (traditionally a name for an integer loop control variable) to it (traditionally a name for an iterator). I've also made the function a template so that I can get rid of that global-scope typedef at the top.
Okay, let's pull out the next break... or, no, let's skip down to that similar continue/break tangle at the bottom of the loop, and sort that out. Same transformation as before. I'll just show the loop, because nothing outside it has changed.
while (!done) {
if (*it != '*') {
++it;
done = (it == fileEnd);
} else {
// Found '*' character
FoundAsterisk:
++it;
if (it == fileEnd) break;
if (*it == '*') {
/* Found another '*' character; can be more: ****/
goto FoundAsterisk;
} else if (*it == '/') {
// Finally found the closing token, "*/"
++it;
return it;
} else {
++it;
done = (it == fileEnd);
}
}
}
Notice that I am also habitually untangling your if/else blocks. You don't want to have if (a == x) ... else if (a != y) ... else if (a == z) ... because that's just plain confusing. if/else chains should read like switch statements: one handler per interesting value. So here we have one handler for *it == '*', and one handler for *it == '/', and then one catch-all "else" handler.
Let's follow that guideline and refactor the outer if (*it != '*') as well.
Notice that once we do that, we don't need the comment // Found '*' character anymore, because it's obvious from the code itself. Getting to remove pointless comments is one of the most satisfying parts of the refactoring process!
while (!done) {
if (*it == '*') {
FoundAsterisk:
++it;
if (it == fileEnd) {
done = true;
} else if (*it == '*') {
goto FoundAsterisk;
} else if (*it == '/') {
// Finally found the closing token, "*/"
++it;
return it;
} else {
++it;
done = (it == fileEnd);
}
} else {
++it;
done = (it == fileEnd);
}
}
Okay, let's tackle that goto. The fundamental algorithm here is, "While we're looking at a * character, increment it. But if we reach the end of the string, then stop." Normally we'd spell that as
while (it != fileEnd && *it == '*') { ++it; }
Let's see if we can shoehorn that line of code into our function in a natural way.
while (!done) {
if (*it == '*') {
while (it != fileEnd && *it == '*') {
++it;
}
if (it == fileEnd) {
done = true;
} else if (*it == '/') {
// Finally found the closing token, "*/"
++it;
return it;
} else {
++it;
done = (it == fileEnd);
}
} else {
++it;
done = (it == fileEnd);
}
}
Notice that every path to the bottom of the outer loop now ends with done = (it == fileEnd) (except in one case where we already know it == fileEnd and so we just set done = true). So basically we just keep going until it == fileEnd. That's our loop condition.
while (it != fileEnd) {
if (*it == '*') {
while (it != fileEnd && *it == '*') {
++it;
}
if (it == fileEnd) {
} else if (*it == '/') {
// Finally found the closing token, "*/"
++it;
return it;
} else {
++it;
}
} else {
++it;
}
}
Okay, but, now we've got a loop within a loop. You know what? Maybe what we really want here is a simple state machine: Either we've just seen a *, or we haven't. If we have, and the next character is /, then we're done. Otherwise, keep going. That would code up like this:
template<class It>
It skipMultilineComment(It it, It fileEnd)
{
for (bool seenStar = false; it != fileEnd; ++it) {
if (*it == '/' && seenStar) {
return it + 1;
}
seenStar = (*it == '*');
}
logError( "fatal error C1071: unexpected end of file found in comment" );
throw E_INVALIDARG;
}
Yes. I like that better. But wait, we can do better than that! All we seem to be doing is looking for the string "*/" inside a longer string. That's string search, and there are library functions for that!
#include <string.h>
const char *skipMultilineComment(const char *it, const char *fileEnd)
{
if (const char *p = memmem(it, fileEnd - it, "*/", 2)) {
return p + 2;
}
logError( "fatal error C1071: unexpected end of file found in comment" );
throw E_INVALIDARG;
}
or in C++17,
std::string_view skipMultilineComment(std::string_view text)
{
size_t pos = text.find("*/");
if (pos != text.npos) {
return text.substr(pos + 2);
}
logError( "fatal error C1071: unexpected end of file found in comment" );
throw E_INVALIDARG;
} | {
"domain": "codereview.stackexchange",
"id": 38065,
"tags": "c++, strings"
} |
JSON schema validator | Question: I wrote a simple JSON schema validator.
The full code is over here on gist.github.com
However, the code without comment is...
validate = function(schema, instance) {
var i;
var errors = 0;
var getType = function(attr) {
return Object.prototype.toString.call(attr);
}
var addError = function(msg, attrs) {
console.error(msg, attrs);
errors += 1;
}
if(getType(schema) !== getType(instance)) {
addError("Type Mismatch", [schema, instance]);
return errors;
}
for(i in schema) {
if(schema.hasOwnProperty(i)) {
if(instance[i] == undefined) {
addError("Property Not found", i);
}
//Special Handling for arrays
else if( getType(schema[i]) === getType([]) ) {
var zeroSchema = schema[i][0];
var zeroInstance = instance[i][0];
if(zeroInstance === undefined) {
continue;
}
for(var j=0;j<instance[i].length;j++) {
errors += validate(zeroSchema, instance[i][j]);
}
}
//Special Handling for nested objects
else if( getType(schema[i]) === getType({}) ) {
errors += validate(schema[i], instance[i]);
}
}
}
return errors;
}
Unlike the official one, the code doesn't need the schema object to explicitly specify the type of object properties.
I'm fairly new to javascript. How do I improve the code?
Answer: I'd definitely change:
if(instance[i] == undefined) {
addError("Property Not found", i);
}
to something more robust, e.g.
if (!instance.hasOwnProperty(i)) {
addError("Property Not found", i);
}
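To see why the loose == undefined check is fragile, note that it also misfires on properties that exist but hold null, since null == undefined is true under loose equality. A hypothetical instance:

```javascript
// A property that exists, but whose value happens to be null.
const instance = { name: null };

// The original check wrongly classifies the property as missing:
console.log(instance["name"] == undefined);    // true
// hasOwnProperty sees that the key really is present:
console.log(instance.hasOwnProperty("name"));  // true
```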
Stylistically I'd change a few things as well:
Add a space between function names and (): function() to function ()
Add a space between if, for etc and (: for(i in schema) to for (i in schema)
Add in semi-colons after defining your functions:
var getType = function(attr) {
return Object.prototype.toString.call(attr);
}; // <-- here.
Also, because JavaScript doesn't have block scope I like to define all my variables at the top of the function. e.g. you have:
validate = function (schema, instance) {
// omitted code.
for(var j=0; j<instance[i].length; j++) {
errors += validate(zeroSchema, instance[i][j]);
}
// more code.
}
Whereas I would prefer j defined at the top of the function (as well as the extra spaces I've added).
Having said all of that, I wonder whether your solution is flexible enough - how would you specify an optional property? A property with a minimum and/or maximum value? | {
"domain": "codereview.stackexchange",
"id": 3415,
"tags": "javascript, json"
} |
Sort multidimensional array based on the difference in the value | Question:
Sort multidimensional array based on the difference in the value; if the
value is the same, sort on the first column.
Constrains:
No of rows can be any but fixed no of column ie 2.
Example:
int arr[][] = new int[5][2]
[0] [1]
[0] 0 6
[1] 0 7
[2] 4 5
[3] 2 3
[4] 0 1
Final Output:
[0] [1]
[0] 0 1
[1] 2 3
[2] 4 5
[3] 0 6
[4] 0 7
Explanation:
difference between: arr[0][1] - arr[0][0] -> 6-0 -> 6
difference between: arr[1][1] - arr[1][0] -> 7-0 -> 7
difference between: arr[2][1] - arr[2][0] -> 5-4 -> 1 — same length ie 1
difference between: arr[3][1] - arr[3][0] -> 3-2 -> 1 — same length ie 1
difference between: arr[4][1] - arr[4][0] -> 1-0 -> 1 — same length ie 1
I want to sort on the difference, in cases where difference is same I
want to sort those with same difference on column [0]
So in this case, the below 3 have same difference:
difference between: arr[2][1] - arr[2][0] -> 5-4 -> 1 — same difference ie 1
difference between: arr[3][1] - arr[3][0] -> 3-2 -> 1 — same difference ie 1
difference between: arr[4][1] - arr[4][0] -> 1-0 -> 1 — same difference ie 1
Need to sort the above 3 based on there column[0] value:
arr[2][1] - arr[2][0] -> 5-4 -> 1 — value here is 4 ie arr[2][0]
arr[3][1] - arr[3][0] -> 3-2 -> 1 — value here is 2 ie arr[3][0]
arr[4][1] - arr[4][0] -> 1-0 -> 1 — value here is 1 ie arr[4][0]
So the the one with the least value in column[0] should be first, in
final output:
arr[4][1] - arr[4][0] -> 1-0 -> 1 ——> 1st
arr[3][1] - arr[3][0] -> 3-2 -> 1 ——> 2nd
arr[2][1] - arr[2][0] -> 5-4 -> 1 ——> 3rd
arr[0][1] - arr[0][0] -> 6-0 -> 6 ——> 4th
arr[1][1] - arr[1][0] -> 7-0 -> 7 ——> 5th
I would like to know the time complexity of my code. In short, what is the complexity of sorting a 2D array? 1d array --> O(n.logn) 2d --> ?
private static int solve(int pathLength, int[][] floristIntervals) {
// TODO Auto-generated method stub
System.out.println(Arrays.deepToString(floristIntervals));
Arrays.sort(floristIntervals, new Comparator<int[]>(){
@Override
public int compare(int[] o1, int[] o2) {
// TODO Auto-generated method stub
/*System.out.println(o1[0]);
System.out.println(o1[1]);
System.out.println(o2[0]);
System.out.println(o2[1]);*/
if(o2[1]-o2[0] == o1[1]-o1[0]){
if(o2[0] > o1[0]){
return -1;
}
return 1;
}
if (o2[1]-o2[0] > o1[1]-o1[0])
return 1;
else
return -1;
}
});
System.out.println(Arrays.deepToString(floristIntervals));
Answer: Comparator → Total Ordering
The implementor must ensure that sgn(compare(x, y)) == -sgn(compare(y, x)) for all x and y.
compare({0, 1}, {0, 1}) == 1
In its current form, the used comparator cannot return 0, and returns 1 for equals elements.
Does not solve problem?
The example input gives this as output:
[[0, 7], [0, 6], [0, 1], [2, 3], [4, 5]]
@Override
public int compare(int[] o1, int[] o2) {
if(o2[1]-o2[0] == o1[1]-o1[0]){
if(o2[0] > o1[0]){
return -1;
}
return 1;
}
if (o2[1]-o2[0] > o1[1]-o1[0])
return 1; // <-- should be -1
else
return -1; // <-- should be +1
}
Review
pathLength isn't used. What is it for?
The method solve is declared to return int, but doesn't.
Consider adding braces to all code blocks—including if and else. This will prevent errors from adding lines later.
o1 and o2 could be replaced with lhs (left-hand side) and rhs (right-hand side), but that's a bit nitpicking.
In order to implement returning 0 for equal elements, try using Integer.compare:
@Override
public int compare(int[] lhs, int[] rhs) {
int rv = Integer.compare(lhs[1] - lhs[0], rhs[1] - rhs[0]);
return rv != 0 ? rv : Integer.compare(lhs[0], rhs[0]);
}
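A quick demonstration of this comparator on the example input from the question (a self-contained sketch; the class and method names are mine):

```java
import java.util.Arrays;
import java.util.Comparator;

public class GapSortDemo {
    // Sorts rows by (second - first), breaking ties on the first column.
    static void sortByGap(int[][] arr) {
        Arrays.sort(arr, new Comparator<int[]>() {
            @Override
            public int compare(int[] lhs, int[] rhs) {
                int rv = Integer.compare(lhs[1] - lhs[0], rhs[1] - rhs[0]);
                return rv != 0 ? rv : Integer.compare(lhs[0], rhs[0]);
            }
        });
    }

    public static void main(String[] args) {
        int[][] arr = {{0, 6}, {0, 7}, {4, 5}, {2, 3}, {0, 1}};
        sortByGap(arr);
        System.out.println(Arrays.deepToString(arr));
        // [[0, 1], [2, 3], [4, 5], [0, 6], [0, 7]]
    }
}
```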
Time Complexity
Unchanged: \$O(n\ log\ n)\$ by virtue of the comparison function being \$O(1)\$. | {
"domain": "codereview.stackexchange",
"id": 28882,
"tags": "java, array, sorting, complexity"
} |
In Atwood machine, does the tension of the rope do work? | Question: It seems in all Atwood machine exercises I can find, no one ever takes into account the tension of the rope when solving with conservation of energy. Why is this? Shouldn't the tension be a non-conservative force contributing to the net work?
Answer: Depends on what particular subsystem you're looking at. The tension does work on each of the blocks, but since it's an internal force, it does no work on the Atwood machine as a whole. | {
"domain": "physics.stackexchange",
"id": 48151,
"tags": "newtonian-mechanics, work, string"
} |
Can a quantum error-correcting code really correct any linear combination of correctable errors? | Question: It appears to me that in the survey by Gottesman (around Thm 2) as well as the book by Nielsen and Chuang (Thm 10.2) it is suggested that if a QEC code corrects errors $A$ and $B$ then it also corrects any linear combination of errors (in particular by Gottesman); the sources can be found here:
Gottesman: https://arxiv.org/abs/0904.2557
Nielsen, Chuang: http://mmrc.amss.cas.cn/tlb/201702/W020170224608149940643.pdf
A simple QEC code like Shor's 9-qubit code can correct arbitrary single-qubit errors because it can correct the Pauli errors if they occur on the same qubit, but clearly it cannot correct more than one error if they occur in the wrong places (e.g. two bitflip errors in the same block). But such an error would be a linear combination of a bitflip error X_1 hitting the first and a bitflip error X_2 hitting the second qubit in the code. What am I missing here?
Answer: Let's make it even simpler by using the $3$-qubit bit-flip code. That code corrects the errors $E_{1} = XII$,$E_{2} = IXI$ and $E_{3} = IIX$.
The 'theorem' states that this code can then also correct any error which is a linear combination $E_{l}$ of these errors:
\begin{equation}
E_{l} = \alpha I + \beta E_{1} + \gamma E_{2} + \delta E_{3}
\end{equation}
Note, however, that the error you describe (a bit flip on the first and the second qubit, let's call it $K$) is described by the operator $K = XXI$, which is not a linear combination of $E_{1}, E_{2} \& E_{3}$. In other words:
\begin{equation}
XXI \not = XII + IXI.
\end{equation}
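The displayed inequality is easy to verify mechanically on the underlying 8×8 matrices. A quick sketch with a hand-rolled Kronecker product, claiming nothing beyond checking $XXI \neq XII + IXI$ entrywise:

```python
I2 = [[1, 0], [0, 1]]  # single-qubit identity
X = [[0, 1], [1, 0]]   # single-qubit bit flip (Pauli X)

def kron(A, B):
    """Kronecker product of two square matrices given as nested lists."""
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

def kron3(A, B, C):
    return kron(kron(A, B), C)

XXI = kron3(X, X, I2)
XII = kron3(X, I2, I2)
IXI = kron3(I2, X, I2)
sum_matrix = [[XII[i][j] + IXI[i][j] for j in range(8)] for i in range(8)]

print(XXI != sum_matrix)  # True: XXI is not the sum XII + IXI
```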
If you view the collection of all operators/errors as a space, then $E_{1} = XII$, $E_{2} = IXI$ and $E_{3} = IIX$ form a basis for a subspace of that entire space; the theorem is that every element of that subspace is then also correctable (i.e. you only need to make sure that you can correct the elements of a basis, and the rest of the space comes for free). Any operator outside that space will be non-correctable.
$K$ is what we call a correlated error: the flips on the first and second qubits are correlated. These errors (also called higher-weight errors) are normally non-correctable by QECC's, and therefore they need to be circumvented at all cost (through fault-tolerance and the likes). | {
"domain": "quantumcomputing.stackexchange",
"id": 1520,
"tags": "error-correction"
} |
Which frequencies of IR and UV light best penetrate the atmosphere with least interference? | Question: We are exploring the concept of setting up transceivers on lighter than air airships and balloons about 100,000 feet up in the stratosphere. They will be used for transmitting and receiving information similar to what satellites now do. We don't want to use radio frequencies since they are regulated by the FCC.
The other option is to use lasers for the physical layer. Which frequencies of UV (ultraviolet) and IR (infrared) would be the best to use for this purpose? The ideal frequency would encounter the least distortion in the stratosphere and troposphere.
Answer: There is what's known as an IR window in the Earth's atmosphere. This occurs at wavelengths of 8-14 microns. There are other smaller regions as well. Best to do an internet search on atmospheric IR windows to find the other, smaller regions. The 8-14 micron window transmits up to about 85%. The following chart, taken from Wikipedia's Infrared Window page, shows transmittance vs. wavelength. | {
"domain": "physics.stackexchange",
"id": 40594,
"tags": "optics, experimental-physics, laser, atmospheric-science"
} |
Implementing a HSP for Graph Isomorphism in the Quantum Circuit Model | Question: The HSP (Hidden Subgroup Problem) links many NP-intermediate problems, such as factoring, graph isomorphism, and shortest vector.
The brief problem statement is presented like so:
Given some group, G, and a set, X, along with some function $f: G \mapsto X$, $f$ is said to hide a subgroup, $H$, if $f(a) = f(b) \iff aH = bH$. The task is to find the subgroup, $H$ (as a generating set), given $f$ as an oracle.
Factorization (period finding) can fit into this paradigm with the group $\mathbb{Z}_N$, where $N$ is some given constant.
On the other hand, Graph isomorphism has the group $S_N$; the symmetric group on $N$ elements. This group has $N!$ elements in it.
The oracle in graph isomorphism is $U_f | x \in S_N \rangle = | x(G) \in S_N \rangle$ where $G$ is the disjoint union of two graphs (the graphs we want to check the isomorphism of).
An oracle takes $O(\mathrm{poly}(N))$ qubits as input, meaning it can hold $2^{\mathrm{poly}(N)}$ distinct states for $x$ (I am using $\mathrm{poly}(N)$ for some polynomial function). But $x$ can be anything in $S_N$, meaning it has $N!$ possible states.
We cannot represent an input to this oracle as a bitstring. So how is it done?
Answer: What do you mean that we cannot "represent an input to this oracle as a bitstring"?
For example we could have the basis states in our Hilbert space be the adjacency matrices over $N$ vertices, with $G$ being one of these states, while $\pi_i\in S_N$ being permutations of these vertices.
I claim it's easy to prepare the state:
$$\frac{1}{\sqrt {N!}}\sum_{i=1}^{N!}|i\rangle,\tag 1$$
for example where each $i$ is written as a number in the factoradic number system.
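As an illustration of that factoradic encoding (my own sketch, not part of the original answer), each index $i \in \{0, \dots, N!-1\}$ decodes to a unique permutation of $N$ elements via its Lehmer code:

```python
def index_to_permutation(i, n):
    """Decode index i (0 <= i < n!) written in the factoradic
    number system into a permutation of range(n)."""
    digits = []
    for radix in range(1, n + 1):  # least-significant digit first
        i, d = divmod(i, radix)
        digits.append(d)
    digits.reverse()               # most-significant digit first
    pool = list(range(n))
    # Each factoradic digit picks the next element from the pool
    return [pool.pop(d) for d in digits]

# For n = 3 the indices 0..5 enumerate all 3! permutations
perms = [index_to_permutation(i, 3) for i in range(6)]
```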
With $\pi_i\in S_N$, $G$ being an adjacency matrix on $N$ vertices of the test graph, and $\pi_i(G)$ being another adjacency matrix having the $i$th permutation applied to $G$, I then also claim that it's easy to prepare:
$$\frac{1}{\sqrt {N!}}\sum_{i=1}^{N!}|i\rangle|\pi_i(G)\rangle,\tag 2$$
by applying the $i$th permutation to the vertices on the test graph $G$. If $G$ is given as an adjacency matrix on $N$ vertices in, say, some canonical form, and $\pi_i$ is a permutation, then we can easily come up with classical code, and hence with a quantum circuit, to permute the $N$ vertices in the adjacency matrix to find another adjacency matrix.
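A classical sketch of that permutation step (illustrative only; the function name `permute_graph` is my own), relabelling the vertices of an adjacency matrix with a permutation matrix:

```python
import numpy as np

def permute_graph(A, pi):
    """Relabel vertex v of adjacency matrix A to pi[v], so that
    the result B satisfies B[pi[u], pi[v]] = A[u, v]."""
    n = len(pi)
    P = np.zeros((n, n), dtype=int)
    P[pi, np.arange(n)] = 1        # permutation matrix
    return P @ A @ P.T

# Path graph 0-1-2, relabelled by pi: 0 -> 2, 1 -> 0, 2 -> 1
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
B = permute_graph(A, [2, 0, 1])
```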
The problem, though, is that we cannot then easily disentangle the first register from the second register to prepare:
$$\frac{1}{\sqrt {N!}}\sum_{i=1}^{N!}|\pi_i(G)\rangle,\tag 3$$
because we need to uncompute the garbage that's picked up along the way, while computing $\pi_i(G)$. For, if we had such a state, we could solve graph isomorphism in quantum polynomial time.
Note I'm using "easy" as synonymous with "polynomially", and not necessarily as meaning easy in the plain and ordinary interpretation of effortless or uncomplicated; it might indeed actually be challenging to engineer the actual snippet of code to do the permutations, and to then convert the code into a quantum circuit. | {
"domain": "quantumcomputing.stackexchange",
"id": 4809,
"tags": "shors-algorithm, hidden-subgroup-problem, factorization, graph-isomorphism"
} |
What substance has the lowest solubility product? | Question: What substance has the lowest $K_\mathrm{sp}$ and what is its value? The lowest I could find is $2.6\cdot 10^{-124}$ for cobalt(III) sulfide $\ce{Co2S3}$.
Answer: The $K_\mathrm{sp}(\ce{Co2S3})$ value of magnitude $10^{-124}$ appears in a paper by Goates et al. [1].
The refined "thermodynamic" value of $\pu{2.6E-124}$ that you've listed, still used by numerous textbooks to this day, was proposed by Waggoner [2].
However, thirty years later (late 1980s) there was another study by Licht [3], which showed significant deviation from the previous studies due to two factors:
A new value of the free energy of sulfide ion formation for aqueous solutions has been used:
$$Δ_\mathrm{f}G^\circ(\ce{S^2-(aq)}) = \pu{(111 ± 2) kJ mol-1}$$
Previous value [1] was
$$Δ_\mathrm{f}G^\circ(\ce{S^2-(aq)}) = \pu{20.64 kcal mol-1} \approx \pu{83.68 kJ mol-1}$$
Free sulfide activity $a(\ce{S^2-})$ used for $K_\mathrm{sp}$ determination
$$
\begin{align}
\ce{M_xS_y &<=> x M^{$2y/x$+} + y S^2-} &\qquad K_\mathrm{sp} &= a(\ce{M^{$2y/x$+}})^x \cdot a(\ce{S^2-})^y \\
\ce{HS- &<=> H+ + S^2-} &\qquad K_2 &= \frac{a(\ce{H+})\cdot a(\ce{S^2-})}{a(\ce{HS-})}
\end{align}
$$
was also misinterpreted in early studies, which considered acidification of hydroxide $\ce{OH-}$ instead, erroneously substituting $K_\mathrm{w}$ for $K_2.$
Table I from [3] lists more recent $\mathrm{p}K_\mathrm{sp}$ values, among them the one for $\ce{Co2S3},$ which has increased significantly $(\mathrm{p}K_\mathrm{sp}(\ce{Co2S3}) = 49.9,$ $K_\mathrm{sp}(\ce{Co2S3}) \approx \pu{1.26E-50}).$
Five least soluble sulfides from that table are:
$$
\begin{array}{lrc}
\hline
\ce{M_xS_y} & \mathrm{p}K_\mathrm{sp} & K_\mathrm{sp} \\
\hline
\ce{Ir2S3} & 196.3 & \pu{5.0E-197} \\
\ce{Bi2S3} & 115.1 & \pu{7.9E-116} \\
\ce{Mo2S3} & 107.8 & \pu{1.6E-108} \\
\ce{Ni3S4} & 104.5 & \pu{3.2E-105} \\
\ce{In2S3} & 96.3 & \pu{5.0E-97} \\
\hline
\end{array}
$$
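Since the table lists $\mathrm{p}K_\mathrm{sp}$ values, the $K_\mathrm{sp}$ column follows directly from $K_\mathrm{sp} = 10^{-\mathrm{p}K_\mathrm{sp}}$, which is easy to check (values taken from the table above):

```python
# pKsp values for the five least soluble sulfides from Table I of [3]
pksp = {"Ir2S3": 196.3, "Bi2S3": 115.1, "Mo2S3": 107.8,
        "Ni3S4": 104.5, "In2S3": 96.3}

ksp = {sulfide: 10 ** (-pk) for sulfide, pk in pksp.items()}

for sulfide, k in ksp.items():
    # e.g. Ir2S3: Ksp = 5.0e-197
    print(f"{sulfide}: Ksp = {k:.1e}")
```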
So it seems the new champion (at least among sulfides) is iridium(III) sulfide $\ce{Ir2S3}$ with $K_\mathrm{sp} = \pu{5.0E-197}$.
References
Goates, J. R.; Gordon, M. B.; Faux, N. D. Calculated Values for the Solubility Product Constants of the Metallic Sulfides. Journal of the American Chemical Society 1952, 74 (3), 835–836. DOI: 10.1021/ja01123a510.
Waggoner, W. H. Textbook Errors: Guest Column. The Solubility Product Constants of the Metallic Sulfides. Journal of Chemical Education 1958, 35 (7), 339. DOI: 10.1021/ed035p339.
Licht, S. Aqueous Solubilities, Solubility Products and Standard Oxidation-Reduction Potentials of the Metal Sulfides. Journal of The Electrochemical Society 1988, 135 (12), 2971. DOI: 10.1149/1.2095471. | {
"domain": "chemistry.stackexchange",
"id": 11618,
"tags": "inorganic-chemistry, equilibrium, solubility, reference-request"
} |
Feasability of glass gears? | Question: I recently saw a video where a glassblower was making something, and one of the first steps was to push the wad of molten glass down into a shaper with a number of vertical spikes, such that the result vaguely resembled a gear.
I am aware that gears can be made of plastics and wood (e.g. https://woodgears.ca/gear_cutting/index.html) as well as metals, but would glass (perhaps a stronger variant such as tempered or borosilicate glass) be suitable to make gears out of? Are there any historical examples of this?
Obviously with modern materials and techniques steel is probably the best choice in an industrial setting, but I am curious as to its suitability as an intermediate step between cheap but quickly worn down wooden gears for rapid prototyping and metal gears for serious use. I would imagine it could be a lot easier and cheaper to melt glass in a home workshop and pour it into a mold or shape it appropriately as compared to doing the same thing with metals (keeping in mind equipment costs, e.g. having to build a foundry capable of melting steel).
Answer: The tooth-root stresses in a gear are tensile, and the tooth roots have sharp corners. This means that if the glass gears were carrying any sort of load, the teeth would shear off right away.
Furthermore, the teeth faces in a meshing gear set are in sliding contact, and if any grit gets into the space between the glass gear teeth, the faces will rapidly get scored and then the teeth will shatter into a million pieces.
Finally, gear teeth have to withstand large shock loads when the gear train is slammed into engagement and starts up and/or reverses during operation. Brittle materials like glass exhibit very low toughness which means they break promptly under a shock load. | {
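To put a rough number on the tooth-root tensile-stress argument, here is a back-of-the-envelope estimate using the classic Lewis bending-stress formula, $\sigma = W_t/(b\,m\,Y)$. All load and geometry values below are made-up illustrative assumptions, and real glass design strengths vary widely (annealed soda-lime glass is often quoted around 30-70 MPa tensile, far less once surface flaws and safety factors are considered):

```python
# Lewis bending stress at the tooth root: sigma = W_t / (b * m * Y)
W_t = 500.0  # tangential tooth load, N (assumed)
b = 0.020    # face width, m (assumed)
m = 0.002    # module, m (assumed)
Y = 0.30     # Lewis form factor, dimensionless (typical order)

sigma_mpa = W_t / (b * m * Y) / 1e6
# Even this modest load lands in the tensile-strength range of
# annealed glass, with no margin left for the stress concentration
# at the sharp tooth-root corner or for shock loads.
print(f"tooth-root bending stress ~ {sigma_mpa:.0f} MPa")
```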
"domain": "engineering.stackexchange",
"id": 3476,
"tags": "gears, glass"
} |