anchor stringlengths 0 150 | positive stringlengths 0 96k | source dict |
|---|---|---|
HTML Form validation with PHP | Question: I'm kinda new to PHP and I'm trying to make a script to validate an HTML form only with PHP (I know there are options to do so with JS, but I guess it's better to have a "cover" with a server-side script).
Project folder structure:
project/
classes/ #contains classes to validate and process the HTML form
DB.php #contains the connection to the MySQL server
Rules.php #contains all rules to validate the form
Validate.php #contains methods to validate each field
Register.php #is the registration class
ini.php #contains the spl_autoload_register() function
index.php #contains the HTML form
HTML form code, which is located in the project folder:
<?php include_once 'lib/csrf-magic/csrf-magic.php'; ?>
<form method="POST" action="classes/Register.php">
<fieldset>
<legend><strong>register:</strong></legend>
* <input type="text" name="fname" placeholder="First Name"/><br/>
<div style="float:left;">
<?php
// I want to put here all the validation errors for the fname field, if any
?>
</div>
<input type="submit" name="submit" value="Register"/>
</fieldset>
</form>
classes/ini.php
error_reporting(E_ALL);
ini_set('display_errors','On');
spl_autoload_register(function($class){
require_once $class.'.php';
});
classes/DB.php
class DB
{
private $driver = 'mysql';
private $host = '127.0.0.1';
private $user = 'root';
private $pass = '';
private $name = 'project';
public $db;
public function __construct()
{
try
{
$this->db = new PDO("$this->driver:host=$this->host;dbname=$this->name;charset=utf8", $this->user, $this->pass);
//Error reporting. Throw exceptions
$this->db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
//Use native prepared statements
$this->db->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);
//Don't use persistent connections across multiple server sessions
$this->db->setAttribute(PDO::ATTR_PERSISTENT, false);
//Set default fetch mode
$this->db->setAttribute(PDO::ATTR_DEFAULT_FETCH_MODE, PDO::FETCH_OBJ);
//Return the connection object
return $this->db;
} catch (PDOException $e)
{
#echo 'Sorry. We have some problems. Try again later!';
echo $e->getMessage();
}
}
}
classes/Rules.php
class Rules
{
/**
* used for displaying in the page the user input text
*/
public static function escape($str)
{
return htmlspecialchars(trim($str),ENT_QUOTES,'UTF-8');
}
/**
* used to allow only numbers, letters (lc+uc), a space, an underscore, and a dash for a username. this will be stored into db
*/
public static function filter($str)
{
return preg_replace('#[^0-9a-z _-]#i', "", trim($str));
}
/**
* used to determine if the input has a certain minimum length
*/
public static function minlen($str, $value)
{
return mb_strlen(trim($str)) < $value ? true : false;
}
/**
* used to determine if the input has a certain maximum length
*/
public static function maxlen($str, $value)
{
return mb_strlen(trim($str)) > $value ? true : false;
}
/**
* used to determine if the password has a certain minimum length
*/
public static function passLen($str)
{
return mb_strlen(trim($str)) < 6 ? true : false;
}
/**
* used to determine if two passwords are equal
*/
public static function equals($pass1, $pass2)
{
return trim($pass2) === trim($pass1) ? true : false;
}
/**
* used to determine if a textbox contains a valid email
*/
public static function email($email)
{
return filter_var(trim($email),FILTER_VALIDATE_EMAIL) ? true : false;
}
/**
* used to determine if a user has written some data into the textbox
*/
public static function required($str)
{
return trim($str) === '' ? true : false;
}
}
classes/Validation.php
require 'ini.php';
class Validation{
public static function validateName($name){
$errors['name'] = [];
$name = Rules::filter($name);
if (Rules::required($name)) {
$errors['name'][] = 'The Name field is mandatory<br/>';
}
if (Rules::minlen($name, 3)) {
$errors['name'][] = 'The Name field is too short<br/>';
}
if (Rules::maxlen($name, 50)) { // this field is 50 chars long into the db table
$errors['name'][] = 'The Name field is too long<br/>';
}
if (isset($errors['name'])) {
return $errors['name'];
} else {
return true;
}
}
}
classes/Register.php
require_once 'ini.php';
class Register
{
private $db;
public function __construct(DB $db)
{
$this->db = $db->db;
}
public function reg()
{
$fname = isset($_POST['fname']) ? $_POST['fname'] : ''; // I'll use HTMLPurifier for this variable
if (isset($_POST['submit'])) {
if (Validation::validateName($fname)) {
$stmt = $this->db->prepare('INSERT INTO users SET fname=:fn, lname=:ln');
$stmt->bindValue(':fn', $fname, PDO::PARAM_STR);
$stmt->execute();
} else {
foreach (Validation::validateName($fname) as $e) {
return $e;
}
}
}
}
}
$d = new DB();
$r = new Register($d);
echo $r->reg();
Now, I want to put all the validation errors that occur when the user submits the form without properly filling in all the required fields into the <div> located under the <input type="text" name="fname"> field. The errors are in the foreach loop in the Register class. Another major aspect is security: how should I improve my code?
Answer: Use boolean expressions directly
Rules.php is full of ternaries that return true or false,
like this one:
public static function minlen($str, $value)
{
return mb_strlen(trim($str)) < $value ? true : false;
}
You can return boolean expressions directly:
return mb_strlen(trim($str)) < $value;
Don't repeat yourself
All the rules trim the input.
Instead of calling trim in every method,
it would be simpler to trim the input once before applying the rule methods.
Don't return two types of values
The validateName method may return two kinds of values:
an array of errors, or true (boolean).
if (isset($errors['name'])) {
return $errors['name'];
} else {
return true;
}
This is poor design.
Return one type of value.
If there are no errors, return an empty array.
That way the return type will be consistently an array,
which is a good thing.
The code snippet above will become simpler too:
return $errors['name']; | {
"domain": "codereview.stackexchange",
"id": 14507,
"tags": "php, html"
} |
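The three recommendations combine into one pattern: a rule returns a boolean expression directly, input is trimmed once up front, and the validator always returns a list of errors where an empty list means valid. A language-agnostic sketch of that pattern in Python (names and messages are illustrative, not the asker's API):

```python
def validate_name(raw_name, min_len=3, max_len=50):
    """Return a list of error messages; an empty list means the name is valid."""
    name = raw_name.strip()          # trim once, up front
    errors = []
    if name == '':
        errors.append('The Name field is mandatory')
    if len(name) < min_len:          # boolean expression used directly, no ternary
        errors.append('The Name field is too short')
    if len(name) > max_len:
        errors.append('The Name field is too long')
    return errors                    # one consistent return type
```

The caller then only ever checks `if errors:` instead of branching on the return type.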
Computing parity function on n variables with O(n) gates | Question: Sipser example 9.29
He says: "one way to do so [i.e., compute the parity function with O(n) gates] is to build a binary tree that computes the XOR function, where the XOR function is the same as parity on 2 variables, and then implement each XOR gate with two NOTs, two ANDs, and one OR. ... Let A be the language of strings that contain an odd number of 1's. Then A has circuit complexity O(n)."
I'm not seeing the steps that lead to saying that A has circuit complexity of O(n). Why can we say that if we implement a binary tree of XORs we can compute the parity with O(n) gates?
Answer: Because the parity is 1 if and only if the XOR of all the bits is 1. In other words, computing the parity is equivalent to computing the XOR of the bits. Sipser describes how to compute the XOR of the $n$ bits with $O(n)$ gates, which means that this same circuit computes the parity of those $n$ bits with $O(n)$ gates. | {
"domain": "cs.stackexchange",
"id": 2009,
"tags": "complexity-theory, circuits"
} |
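The counting argument can be sketched numerically. A tiny Python model (assuming the 5-gate XOR decomposition quoted in the question: two NOTs, two ANDs, one OR) confirms that the gate count grows linearly in n:

```python
from functools import reduce

# Each 2-input XOR = 2 NOTs + 2 ANDs + 1 OR = 5 gates (per Sipser's decomposition)
GATES_PER_XOR = 5

def parity(bits):
    """Parity of a bit string, computed as the XOR of all bits."""
    return reduce(lambda a, b: a ^ b, bits, 0)

def xor_tree_gate_count(n):
    """A binary tree over n inputs contains exactly n - 1 two-input XOR nodes."""
    return GATES_PER_XOR * (n - 1)

# 5(n - 1) gates total, i.e. O(n)
```

The key fact is that a binary tree with n leaves has n − 1 internal nodes, regardless of its shape.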
Can I traverse/enumerate PDB one at a time using a Python script without downloading all of them to my local disk? | Question: What would be the total size of all PDB files in RCSB?
Can I traverse/enumerate them one at a time using a Python script without downloading all of them to my local disk?
If YES, what would be the Python script like?
Answer: The page https://www.wwpdb.org/ftp/pdb-ftp-sites gives the addresses of interest. PDB has an FTP site downloadable via rsync and the latter has a dry run mode.
rsync --port=33444 -rlpt -v -z --dry-run --stats rsync.rcsb.org::ftp_data/structures/divided/pdb/
You get a bunch of folders names and the following:
Number of files: 182479
Number of files transferred: 0
Total file size: 38382243431 bytes
Total transferred file size: 0 bytes
Literal data: 0 bytes
Matched data: 0 bytes
File list size: 3777395
File list generation time: 52.909 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 16
Total bytes received: 3777651
sent 16 bytes received 3777651 bytes 66861.36 bytes/sec
total size is 38382243431 speedup is 10160.30
So there are 182,479 *.ent.gz files in various folders. The total is 38,382,243,431 bytes, which is 38 GB, so not too much. In Python it is easy to iterate across gzip files with the gzip module, so uncompressing them to disk is generally something to avoid, as it would bloat the size greatly. The problem arises when dealing with the density maps, which do take up a lot of space...
Download without saving
The PDBe is easier to use for downloading things than the RCSB PDB, and its APIs are much more comprehensive.
Given a list of PDB codes one could
pdbcode: str = ''  # fill in a 4-character PDB code here
import requests
r = requests.get(f'https://www.ebi.ac.uk/pdbe/entry-files/download/pdb{pdbcode.lower()}.ent', stream=True)
assert r.status_code == 200, f'something ({r.status_code}) is wrong: {r.text}'
pdb_data = r.text | {
"domain": "bioinformatics.stackexchange",
"id": 2380,
"tags": "python, pdb, biopython"
} |
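As the answer notes, Python's gzip module lets you iterate over the downloaded *.ent.gz files without ever decompressing them to disk. A minimal sketch (the folder layout and file names are assumptions, mirroring the rsync mirror's divided/pdb structure):

```python
import gzip
from pathlib import Path

def iter_pdb_lines(folder):
    """Yield (filename, line) pairs from every gzipped PDB entry under folder."""
    for path in sorted(Path(folder).glob('**/*.ent.gz')):
        # 'rt' mode decompresses to text on the fly; nothing is written to disk
        with gzip.open(path, 'rt') as fh:
            for line in fh:
                yield path.name, line.rstrip('\n')
```

This keeps memory use constant per file, which matters when walking all 182,479 entries.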
Can a very small portion of an ellipse be a parabola? | Question: We can show that when a particle is projected from a certain height (from the surface of the Earth) with a speed less than the orbital speed and in a direction tangential to the surface of the Earth at that point, it follows an elliptical path with the center of the Earth as a focus. The process of deriving this result does not involve putting any lower bound on the speed of projection. Certainly, if the speed of projection is small enough then the trajectory will actually intersect the Earth, but we can nevertheless consider the section of the trajectory until it hits the Earth to be elliptical. However, when we explicitly consider only those situations where the trajectory of the projectile does intersect the Earth, via approximations [1], we can show that if the height is sufficiently small (compared to the radius of the Earth) then the trajectory will actually be a parabola. In neither of the two calculations have we assumed the Earth to be a point or a flat surface. So is it true that a very small portion of an ellipse is a parabola, even though the two are entirely different conics by definition?
[1]: By "approximations", I meant treating an approximated gravitational field and solving for the trajectory anew, rather than incorporating the approximation into the already evaluated original trajectory. Of course, it gives the same answer (as it should), but, unaware of how to Taylor expand a curve when I posted the question, I saw in this result a paradox as to how two different conics could be the same in some regime.
Answer: A small portion of any smooth curve looks the same as a small piece of a parabola in the limit. Choose a coordinate system so that the tangential direction in the middle of the segment is along the $x$ axis and choose a translation for the middle of the segment to sit at $(0,0,0)$, the origin of the coordinates. Then $y,z$ on the curve (ellipse etc.) may be viewed as functions of $x$ and these functions $y(x),z(x)$ may be Taylor-expanded. The first nontrivial term is
$$ y(x) = a_2^y x^2 +O(x^3)$$
because the absolute and linear terms were made to vanish by the choice of the coordinates. But neglecting the $x^3$ and other pieces, this is just an equation for a parabola.
(A similar comment would apply to $z(x)$ and one could actually rotate the coordinates in the $yz$ plane so that $a_2^z$ equals zero.)
So a "supershort piece" of a curve is always a straight line. With a better approximation, a "very short" piece is a parabola, and one may refine the acceptably accurate formulae by increasingly good approximations, by adding one power after another.
But near the perigeum (the closest approach to the source of the gravitational field), we may actually describe a limiting procedure for which the "whole" ellipse – and not just an infinitesimal piece – becomes a parabola. As the maximum speed of the satellite (the speed at the perigeum etc.) approaches the escape speed, while the place of the perigeum is kept at the same point, the elliptical orbit approaches a parabolic trajectory – the whole one. Note that in this limiting procedure, the furthest point from the center of gravity goes to infinite distance and the eccentricity diverges, too.
So the parabola is a limit of a class of ellipses. In the same way, a parabola is a limiting case of hyperbolae, too. In fact, in the space of conics, a parabola is always the "very rare, measure-zero" crossing point from an ellipse to a hyperbola. This shouldn't be shocking. Just consider an equation in the 2D plane
$$ y-x^2-c=ay^2 $$
which is a function of $a$. For a positive $a$, you get an ellipse; for a negative $a$, you get a hyperbola. The intermediate, special case is $a=0$ which is a parabola. One may also parameterize these curves as conic sections. The type of the curve we get will depend on the angle by which the plane is tilted relatively to the cone; the parabola is again the transition from ellipses at low angle to hyperbolae at a high angle. | {
"domain": "physics.stackexchange",
"id": 11723,
"tags": "newtonian-gravity, orbital-motion, geometry, approximations, celestial-mechanics"
} |
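The Taylor argument above is easy to check numerically. In the answer's coordinates, the lower arc of an ellipse with semi-axes $a$ and $b$, translated to pass through the origin with horizontal tangent, is $y(x) = b - b\sqrt{1 - x^2/a^2}$, whose quadratic Taylor term is $\frac{b}{2a^2}x^2$. A quick sketch (the specific semi-axes in the check are arbitrary):

```python
import math

def ellipse_y(x, a, b):
    # Lower arc of x^2/a^2 + (y - b)^2/b^2 = 1, shifted to pass through (0, 0)
    return b - b * math.sqrt(1 - (x / a) ** 2)

def parabola_y(x, a, b):
    # Quadratic Taylor term of the ellipse at the origin: y ≈ (b / (2 a^2)) x^2
    return b / (2 * a ** 2) * x ** 2
```

Near $x = 0$ the two curves agree to high relative precision; farther out, the neglected $O(x^3)$ and higher terms make them diverge, exactly as the answer describes.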
When calculating first moments of mass in cylindrical coordinates, can we use $r,\theta$ instead of $x,y$? | Question: Usually when we want to find the center of mass of a 3D body, we need to find the
first moments of mass about $x,y,z$, call them $M_x,M_y,M_z$. Defined by
$$
M_x = \iiint x\rho \ dV,
$$
where $\rho$ is the density function.
However in cylindrical coordinates ($z=z,\ x=r\cos{\theta},\ y=r\sin{\theta}$), we still calculate $M_x,M_y,M_z$. What I want to know is why don't we find $M_z, M_\theta, M_r$? Does this not lead to the following coordinates of the center of mass:
$$
(\bar{z},\bar{\theta}, \bar{r}) = \left(\frac{M_z}{M}, \frac{M_\theta}{M},
\frac{M_r}{M} \right),
$$
where $M$ is the mass of the body.
Answer: Unfortunately, your proposed $z,\theta,r$ method doesn't work.
Consider just a cylinder, centered at the origin, and let's look at $M_r=\int\!\!\int\!\!\int r \rho\ dV$.
Even without doing the calculation, we can see that $M_r$ will be something greater than zero, since $r \ge 0$.
This will give your $\bar r = {M_r/ M} \ge 0$.
But one can clearly see by symmetry that the center of mass is at $z=0$, $r=0$, and $\theta$ is anything/undefined.
So you see, trying to do this as you suggest -- calculating the mass-weighted value of $r$ -- isn't giving the center of mass. | {
"domain": "physics.stackexchange",
"id": 64606,
"tags": "newtonian-mechanics, coordinate-systems, moment"
} |
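The symmetry argument can also be verified in closed form: for a uniform cylinder of radius $R$ and height $H$ centered on the axis, $M_x = 0$ while $M_r = 2\pi\rho H R^3/3 > 0$, so $\bar r = 2R/3$ rather than $0$. A numerical midpoint-rule sketch (the grid resolution is an arbitrary choice):

```python
import math

def cylinder_moments(R=1.0, H=2.0, rho=1.0, n=200):
    """Midpoint sums of M = ∫ρ dV, M_x = ∫x ρ dV, M_r = ∫r ρ dV over a uniform cylinder."""
    dr, dth = R / n, 2 * math.pi / n
    M = Mx = Mr = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        for j in range(n):
            th = (j + 0.5) * dth
            dV = r * dr * dth * H          # cylindrical volume element, z already integrated
            M += rho * dV
            Mx += rho * r * math.cos(th) * dV
            Mr += rho * r * dV             # the integrand r is never negative
    return M, Mx, Mr
```

The cosine terms cancel over a full period, but the $r \ge 0$ integrand cannot, which is exactly why the proposed $\bar r = M_r/M$ fails as a center-of-mass coordinate.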
Electric and Water Flow Analogy | Question: I’ve been studying electricity lately and it has been quite hard to understand the terminology and to picture it in my mind. After struggling to understand what each property of electric currents meant, I found an analogy that relates electric current to water flow. I am not sure if this analogy is 100% accurate, but it sure has helped me understand the complex (at least for a young, beginning scientist like me) topic of electricity. Thus far I have managed to relate in the analogy that:
| Electricity | Water |
|---|---|
| Current (Amps) (A) | Flow rate (Volume/Time) |
| Resistance (Ohms) (Ω) | Pipe size (Inches) |
| Voltage (Volts) (V) | Water pressure (psi) |
But I am still missing properties like charge. What properties am I missing? How could they be related to water flow? If they can’t be related, can you please give a short definition and explanation to better grasp the concept? Please include units on both sides of the analogy. Also, please keep the answer short and simple for us inexpert scientists (including me) who struggle to grasp the concept.
Thanks a bunch for your expert help
It is kind of discussed in these articles:
Flow of water and flow of electrons, how this analogy works?
Power in hydraulic analogy
But I couldn’t find (or understand) the answer provided in them.
Answer: Your analogy is legitimate for understanding the basics of circuits. You might find picturing a compressible fluid like air slightly more helpful than a fluid like water. In such an analogy, above average air density is like excess positive charge and below average air density is like excess negative charge. The charge is the fluid.
Coulombs = gallons
Battery = pump. But it is a pump that uses a constant amount of "effort", producing more or less flow depending on the resistance it encounters.
Diode = valve, only allows flow in one direction.
Capacitor = A rubber barrier inside a pipe. It stretches some as it stops the flow. The greater the pressure difference the more it stretches. Capacitance is like the inverse of the spring constant of this membrane.
I don't think there are good analogies with fluids for phenomena based on the magnetic fields of circuits - induction, transformers, motors, etc. Inductance is a bit like the inertia of the fluid, but I'm not sure it is helpful to push on that analogy. | {
"domain": "physics.stackexchange",
"id": 13348,
"tags": "electricity"
} |
How to find magnetization of permanent magnet using its residual magnetization? | Question: I have a magnet with N52 designation whose remanence/residual magnetization is $B_r = 1.4\ \mathrm{T}$. The magnet has dimensions $50\times 20\times 5\text{ mm}$. My question is: how do I find the magnet's magnetization $\vec{M}$ from these parameters?
My approach so far has been finding magnetic dipole moment using formula from Wikipedia:
$$\vec{m} = \frac{1}{\mu}\vec{B_r}V$$
where $V$ is the volume of the magnet and $\vec{B_r}$ is the residual flux density or remanence. I struggle to convert this magnetic dipole moment to magnetization $\vec{M}$. Is this the correct approach?
Answer: As I recall, the magnetization is the dipole moment / unit volume. Divide your formula by V. | {
"domain": "physics.stackexchange",
"id": 80941,
"tags": "electromagnetism, magnetic-fields, magnetic-moment"
} |
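Putting the answer's advice together: dividing the question's formula by $V$ gives $M = B_r/\mu$, and taking $\mu \approx \mu_0$ (a common approximation for a fully magnetized hard magnet, assumed here) yields a concrete number for the N52 magnet above. A quick sketch:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T·m/A

def magnetization(B_r):
    """Magnetization M = B_r / mu0, in A/m (assumes mu ≈ mu0)."""
    return B_r / MU0

def dipole_moment(B_r, volume_m3):
    """Dipole moment m = M * V, in A·m^2."""
    return magnetization(B_r) * volume_m3

V = 0.050 * 0.020 * 0.005   # 50 x 20 x 5 mm magnet volume = 5e-6 m^3
```

For $B_r = 1.4\ \mathrm{T}$ this gives $M \approx 1.11\times 10^6\ \mathrm{A/m}$ and $m \approx 5.6\ \mathrm{A \cdot m^2}$.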
Is heat transfer in space through radiation 100% efficient? | Question: Flat 1x1 meter plate. Heat source indicated by red arrow. The rest is in space. No sun exposure at all.
Some of the photons will exit through the gaps between plates. No problem. My question is about the photons that do make it from plate to plate.
One photon leaves the first plate and is received by the second. Is this transfer of heat 100% efficient? If not, why? If the energy does not go into heat, where does it go?
A variant of this question would be a hypothetical sphere heat source inside a much larger hollow sphere in space. Is heat transfer through radiation from the small sphere in the center to the larger outer sphere 100% efficient?
Let's define the term: "100% efficient" is taken to mean that one unit of energy carried away by a photon from the source delivers one unit of energy to the destination.
Does the angle of incidence matter? If so, how?
Answer: Yes, the transfer is 100% efficient. We can use $E=h \nu$ to calculate the energy of a photon with frequency $\nu$. This is the amount of energy the emitting plate loses and the absorbing plate gains. It does not depend on the angle of incidence, as long as the photon is completely absorbed.
The conservation of energy is a general concept. We use it all the time in physics. | {
"domain": "physics.stackexchange",
"id": 68997,
"tags": "thermodynamics, space"
} |
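The per-photon bookkeeping in the answer is just $E = h\nu$; a minimal sketch (the frequency in the check is an arbitrary example, roughly green light):

```python
H_PLANCK = 6.62607015e-34  # Planck constant in J·s (exact, 2019 SI definition)

def photon_energy(nu_hz):
    """Energy in joules carried from emitter to absorber by one photon of frequency nu."""
    return H_PLANCK * nu_hz
```

Conservation of energy then says this same amount leaves the emitting plate and arrives at the absorbing one, regardless of the incidence angle.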
Why isn't Panama considered an intercontinental country? | Question: A well-known intercontinental country (a country that exists in two or more continents) is Egypt.
It has been divided into two parts by the man-made Suez Canal. The Sinai peninsula, which is on the east side of the Canal, is contiguous with Asia, and the rest of the country is contiguous with Africa.
Also, in the case of Panama there is a man-made canal (the Panama Canal) which divides the country into two parts. The eastern part is contiguous with South America and the western part is contiguous with North America.
So why isn't Panama considered an intercontinental country like Egypt is?
Answer: Basically the reason is we say it isn't intercontinental.
Panama differs from Egypt in two ways relevant to this question. First, even without the Suez Canal a clearly defined narrow neck exists between the "Asian" and "African" parts of Egypt, making the proposed intercontinental boundary a plausible natural one. With Panama the neck is less prominent; southern North America tapers much more gradually from Mexico to the canal.
Second, eastern Panama has a much more definitive geographical feature that could serve as a boundary. The Darien Gap, with its jungle and mountains, is one of the most rugged regions on Earth. It therefore makes a good candidate for a boundary that effectively separates the continents of North and South America, like the Ural Mountains in Russia. No part of Panama is on the eastern side of this region; instead the Darien Gap spills over into northwestern Colombia and therefore all the way into "South America". | {
"domain": "earthscience.stackexchange",
"id": 2697,
"tags": "geography, continent"
} |
Setting parameters in a launch file does not appear to be working | Question:
Hello,
Some script which was working before now does not work anymore. The issue is that the param instruction in the launch file does not seem to have any effect in the executable file:
<param name="camera_topic" value="/camera/image_rect" type="str"/>
In my C++ executable, I looked for this "camera_topic" relative parameter as:
ros::param::get("camera_topic",camera_topic);
but I cannot get them easily. Setting the same parameter in the program before getting it works, so I conclude it must be something about the launch file. The funny thing is that it worked before (for 2+ months).
Originally posted by cduguet on ROS Answers with karma: 187 on 2012-09-04
Post score: 1
Original comments
Comment by dornhege on 2012-09-04:
I'm sure it still works in principle. You can check if and where it is set by rosparam list. If that doesn't help you need to come up with a complete minimal example.
Comment by cduguet on 2012-09-05:
I have tried it, thanks! It is there indeed. But the node executable somehow still does not read it. Could it be a time delay issue?
Comment by dornhege on 2012-09-05:
Not if set by roslaunch. Test by launching the node manually, when the param is set - I suspect it won't work either.
Answer:
If you don't use a forward slash, "ros::param::get" gets a parameter from the node's namespace, but not its private namespace. If you put that parameter tag inside of a <node> tag, it will be in this private namespace.
For example, the following roslaunch xml:
<param name="camera_topic_root" value="/camera/image_rect"/>
<group ns="group_ns">
<param name="camera_topic_ns" value="/camera/image_rect"/>
<node name="node_name" pkg="foo" type="bar" >
<param name="camera_topic_private" value="/camera/image_rect"/>
</node>
</group>
Would set the parameters:
/camera_topic_root
/group_ns/camera_topic_ns
/group_ns/node_name/camera_topic_private
To get these parameters with the ros::param::get API, you could do each of the following:
ros::param::get("/camera_topic_root",camera_topic); // /camera_topic_root
ros::param::get("camera_topic_ns",camera_topic); // /group_ns/camera_topic_ns
ros::param::get("~camera_topic_private",camera_topic); // /group_ns/node_name/camera_topic_private
This is documented slightly less succinctly here.
Originally posted by jbohren with karma: 5809 on 2012-09-05
This answer was ACCEPTED on the original site
Post score: 9
Original comments
Comment by cduguet on 2012-09-11:
Thank you! I missed the "~" in the private parameter! :). Basic question, I hope this answer will help everyone making the same little mistake. | {
"domain": "robotics.stackexchange",
"id": 10891,
"tags": "ros, roslaunch, parameter, parameters"
} |
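The resolution rules the answer describes can be modelled roughly. This sketch is a simplified model for illustration, not the actual ros::param implementation; it resolves a parameter name against a node running as /group_ns/node_name:

```python
def resolve_param(name, node_ns='/group_ns', node_name='node_name'):
    """Rough model of ROS parameter name resolution.

    '/x'  -> global:   /x
    '~x'  -> private:  <node_ns>/<node_name>/x
    'x'   -> relative: <node_ns>/x
    """
    if name.startswith('/'):
        return name
    if name.startswith('~'):
        return f'{node_ns}/{node_name}/{name[1:]}'
    return f'{node_ns}/{name}'
```

This reproduces the three lookups in the answer, and makes the asker's mistake visible: without the leading `~`, a name set inside the `<node>` tag resolves to the wrong key.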
Evaporation and boiling | Question: I recently read that evaporation takes place on the surface of the liquid and boiling takes place throughout the liquid. The part about evaporation seemed legit as it is observed in nature that water dries off gradually surface by surface. But the part about boiling seems a bit counterintuitive as boiling too seems to occur on the surface as water, when it boils, evaporates from the surface. How can this be justified?
Answer: Boiling consists of the formation of vapour bubbles within the liquid. If, at some point within the liquid, the temperature is high enough to produce a local change of phase (vaporisation), it can produce a bubble. If the pressure inside the bubble is kept high enough to prevent its collapse, the bubble will reach the surface of the liquid (because it has a lower density and therefore experiences an upward buoyancy force). That is when the vapour appears at the surface, making it look as if evaporation just happened there.
The difference between evaporation and boiling, which are two forms of vaporisation, is that evaporation can happen at the surface at lower temperatures, while boiling requires the high temperature needed to achieve a pressure high enough to form bubbles inside the liquid.
To conclude, I'd say that boiling and evaporation are different by definition, even though they are driven by the same phenomenon: a phase transition from liquid to gas.
"domain": "physics.stackexchange",
"id": 54403,
"tags": "thermodynamics, phase-transition, evaporation"
} |
Checking for duplicate materials | Question: This is a follow up question of Importing different type of files into Lists. Where the original lists are acquired.
In this script I am processing the obtained lists and comparing them with each other to determine whether or not a used material has duplicate functions. I do this using their most commonly known traits, which are: shader name, color property, and texture property.
I have the feeling my script is far from optimal; it currently takes roughly 120 seconds to compare 1200 materials, making the rough iteration time per material 0.10 seconds. 200 of them are unique.
In the future this script will be expanded to process more details besides the color/name/texture as well, so advice on how to keep the process more 'open for change' is appreciated as well.
public void InitShaderSorting()
{
//wipe data so it won't duplicate if button is pressed again
unique = new List<Material>();
broken = new List<Material>();
duplicate = new List<Material>();
dupof = new List<Material>();
for (int i = 1; i < TotalMaterial; i++)
{
//Filter broken shaders out first so they can never be a unique one
if (allMat[i].shader.name == "Hidden/InternalErrorShader")
broken.Add(allMat[i]);
//If no unique material has been assigned to compare with, assign one first
else if (unique.Count == 0)
unique.Add(allMat[i]);
//If it aint broken, and there is a unique one. Go and compare the shader properties
else
CompareShader(allMat[i], allMat[i].shader.name, allMat[i].HasProperty("_Color"), allMat[i].HasProperty("_MainTex"));
}
}
/// <param name="sn">Shader Name</param>
/// <param name="hc">Has Color Property</param>
/// <param name="ht">Has Texture Property</param>
void CompareShader(Material mat, string sn, bool hc, bool ht)
{
bool finished = false;
for (int i = 0; i < unique.Count; i++)
{
//Check if shader names are identical, if not continue as it can't be a duplicate
if (sn != unique[i].shader.name)
{
if (i == unique.Count - 1)
{
finished = true;
break;
}
else continue;
}
//Check if this material even uses a color
if (hc)
{
//If the color is a match and there is no texture to compare, mark as duplicate
if (unique[i].color == mat.color && !ht)
{
duplicate.Add(mat);
dupof.Add(unique[i]);
break;
}
else
{
if (i == unique.Count - 1)
{
finished = true;
break;
}
else continue;
}
}
if (ht)
{
//If the texture is a match, mark as duplicate
if (unique[i].mainTexture == mat.mainTexture)
{
duplicate.Add(mat);
dupof.Add(unique[i]);
break;
}
}
if (i == unique.Count - 1) finished = true;
}
//If we reached the end without a match, then this material is unique.
if (finished) unique.Add(mat);
}
Answer: Correctness
First of all, I'm afraid your code doesn't find all duplicates: if the compared materials use neither a texture nor a color, they won't be marked as duplicates. In your code, if (hc) checks whether the material has a color. If not, it is skipped. If the other material has no color either, the pair could still be a duplicate.
Performance
The performance of your code is $O(n^2)$: for each additional item in allMat, the unique collection is iterated one more time. As this collection grows, this takes longer and longer. This can certainly be improved:
What you actually want to do is "group" your Material objects by equality. The groups with size == 1 will be your unique materials; the groups with size > 1 will contain your duplicates.
We can do this very easily using LINQ. All we need to do is implement a custom comparer that knows how to compare two Material instances:
public class MaterialComparer : IEqualityComparer<Material>
{
public bool Equals(Material x, Material y)
{
if (ReferenceEquals(x, y)) return true;
if (x.shader.name != y.shader.name) return false;
var xHasColor = x.HasProperty("_Color");
var yHasColor = y.HasProperty("_Color");
if (xHasColor != yHasColor) return false;
if (xHasColor && x.color != y.color) return false;
var xHasTexture = x.HasProperty("_MainTex");
var yHasTexture = y.HasProperty("_MainTex");
if (xHasTexture != yHasTexture) return false;
if (xHasTexture && x.mainTexture != y.mainTexture) return false;
return true;
}
public int GetHashCode(Material mat)
{
unchecked
{
var hasColor = mat.HasProperty("_Color");
var hasTexture = mat.HasProperty("_MainTex");
var hashCode = mat.shader.name.GetHashCode();
hashCode = (hashCode * 397) ^ (hasTexture ? mat.mainTexture.GetHashCode() : 0);
hashCode = (hashCode * 397) ^ (hasColor ? mat.color.GetHashCode() : 0);
return hashCode;
}
}
}
The important thing here is to get GetHashCode right: this method MUST return the same integer for any two objects that compare equal.
We can use it like that:
var groups = allMat.GroupBy(m => m, new MaterialComparer())
.GroupBy(g => g.Count() == 1)
.ToArray();
var unique = (groups.SingleOrDefault(g => g.Key) ?? Enumerable.Empty<IEnumerable<Material>>())
.SelectMany(m => m).ToArray();
var duplicates = (groups.SingleOrDefault(g => !g.Key) ?? Enumerable.Empty<IEnumerable<Material>>())
.SelectMany(m => m).ToArray();
I put this in a Unit Test to measure the performance for 20,000 random materials:
[TestMethod]
public void TestMaterials()
{
var random = new Random();
IEnumerable<Material> allMat = from i in Enumerable.Range(0, 20000)
let shader = random.Next(100)
let color = random.Next(50)
let texture = random.Next(50)
select new Material(shader.ToString(),
color == 0 ? null : color.ToString(),
texture == 0 ? null : texture.ToString());
var comparator = new YourImplementation(allMat.ToArray());
var stopwatch = Stopwatch.StartNew();
comparator.InitShaderSorting();
Console.WriteLine(stopwatch.Elapsed);
stopwatch.Restart();
var groups = allMat.GroupBy(m => m, new MaterialComparer())
.GroupBy(g => g.Count() == 1)
.ToArray();
var unique = (groups.SingleOrDefault(g => g.Key) ?? Enumerable.Empty<IEnumerable<Material>>())
.SelectMany(m => m).ToArray();
var duplicates = (groups.SingleOrDefault(g => !g.Key) ?? Enumerable.Empty<IEnumerable<Material>>())
.SelectMany(m => m).ToArray();
Console.WriteLine(stopwatch.Elapsed);
}
Result:
00:00:14.0328998 // your custom implementation
00:00:00.0712459 // LINQ | {
"domain": "codereview.stackexchange",
"id": 11416,
"tags": "c#, performance, unity3d"
} |
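The same $O(n^2) \to O(n)$ idea works in any language with hash-based dictionaries: derive a key from the compared properties and group by it in a single pass. A rough Python sketch (the dictionary keys play the role of GetHashCode/Equals; the material dicts are illustrative, not Unity's API):

```python
from collections import defaultdict

def group_materials(materials):
    """Group materials by (shader, color, texture); groups with len > 1 are duplicates."""
    groups = defaultdict(list)
    for mat in materials:
        key = (mat.get('shader'), mat.get('color'), mat.get('texture'))
        groups[key].append(mat)            # one dict lookup per material: O(n) total
    unique = [g[0] for g in groups.values() if len(g) == 1]
    duplicates = [m for g in groups.values() if len(g) > 1 for m in g]
    return unique, duplicates
```

Each material is touched once, instead of being compared against every previously seen unique material.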
How to solve that gazebo freeze ubuntu? | Question:
I have run Gazebo several times to add objects, and sometimes my Ubuntu just freezes so that my mouse and keyboard no longer work.
My system is ubuntu 64 bits with ROS fuerte. I use optirun to run roslaunch.
I use Geforce 630M GT 2GB.
Here is my launch:
<launch>
<!-- start gazebo with an empty plane -->
<param name="/use_sim_time" value="true" />
<node name="gazebo" pkg="gazebo" type="gazebo" args="/home/sam/code/ros/temp/t1.world" respawn="false" >
</node>
<!-- start gui -->
<node name="gazebo_gui" pkg="gazebo" type="gui" respawn="false" output="screen"/>
</launch>
Here is my t1.world:
sam@sam:~/code/ros/temp$ cat t1.world
<?xml version="1.0"?>
<gazebo version="1.0">
<world name="default">
<scene>
<ambient rgba="0.5 0.5 0.5 1"/>
<background rgba="0.5 0.5 0.5 1"/>
<shadows enabled="false"/>
</scene>
<physics type="ode">
<gravity xyz="0 0 -9.8"/>
<ode>
<solver type="quick" dt="0.001" iters="10" sor="1.3"/>
<constraints cfm="0.0" erp="0.2" contact_max_correcting_vel="100.0" contact_surface_layer="0.001"/>
</ode>
</physics>
<!-- Ground Plane -->
<model name="plane1_model" static="true">
<link name="body">
<collision name="geom_1">
<geometry>
<plane normal="0 0 1"/>
</geometry>
<surface>
<friction>
<ode mu="10.0" mu2="10.0" fdir1="0 0 0" slip1="0" slip2="0"/>
</friction>
<bounce restitution_coefficient="0" threshold="1000000.0"/>
<contact>
<ode soft_cfm="0" soft_erp="0.2" kp="1e10" kd="1" max_vel="100.0" min_depth="0.0001"/>
</contact>
</surface>
</collision>
<visual name="visual_1" cast_shadows="false">
<geometry>
<plane normal="0 0 1"/>
</geometry>
<material script="Gazebo/Grey"/>
</visual>
</link>
</model>
<!-- Box -->
<model name='box_model' static='0'>
<link name='body' gravity='1' self_collide='0' kinematic='0'>
<origin pose='0.000000 0.000000 0.500000 0.000000 -0.000000 0.000000'/>
<inertial mass='1.000000' density='1.000000'>
<inertia ixx='1.000000' ixy='0.000000' ixz='0.000000' iyy='1.000000' iyz='0.000000' izz='1.000000'/>
</inertial>
<collision name='geom' laser_retro='0.000000'>
<geometry>
<box size='1.000000 1.000000 1.000000'/>
</geometry>
<origin pose='0.000000 0.000000 0.000000 0.000000 -0.000000 0.000000'/>
<surface>
<bounce restitution_coefficient='0.000000' threshold='100000.000000'/>
<friction>
<ode mu='-1.000000' mu2='-1.000000' fdir1='0.000000 0.000000 0.000000' slip1='0.000000' slip2='0.000000'/>
</friction>
<contact>
<ode soft_cfm='0.000000' soft_erp='0.200000' kp='1000000000000.000000' kd='1.000000' max_vel='100.000000' min_depth='0.001000'/>
</contact>
</surface>
</collision>
<visual name='visual1' cast_shadows='1' laser_retro='0.000000' transparency='0.000000'>
<geometry>
<box size='1.000000 1.000000 1.000000'/>
</geometry>
<material script='Gazebo/WoodPallet'/>
</visual>
</link>
<origin pose='0.000000 0.000000 60.000000 0.000000 -0.000000 0.000000'/>
</model>
<light type="directional" name="my_light" cast_shadows="false">
<origin pose="0 0 30 0 0 0"/>
<diffuse rgba=".9 .9 .9 1"/>
<specular rgba=".1 .1 .1 1"/>
<attenuation range="20"/>
<direction xyz="0 0 -1"/>
</light>
<plugin name="joint_trajectory_plugin" filename="libgazebo_ros_joint_trajectory.so"/>
</world>
</gazebo>
These Ubuntu freezes pause my jobs and sometimes cost me work I had done before.
How can I solve this serious problem?
Thank you~
Originally posted by sam on ROS Answers with karma: 2570 on 2012-11-22
Post score: 0
Answer:
The problem is not really related to ROS. Either your graphics card driver or another driver in the Linux kernel is probably not stable, or your hardware is defective (maybe your RAM).
Originally posted by Lorenz with karma: 22731 on 2012-11-22
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by sam on 2012-11-22:
I use laptop. How to know that and check it? | {
"domain": "robotics.stackexchange",
"id": 11852,
"tags": "ros"
} |
$S^+$ acting on a spin chain raises the entropy by at most $\ln(2)$ | Question: Consider the operator $S^+ = \sum_{i=1}^L S^+_i$ acting on a spin-chain of spin-1/2 particles.
Denote the half-chain Von Neumann entanglement entropy of a state $|\psi\rangle$ by $\mathbb{S}[|\psi\rangle]$. (For simplicity in notation in the following, take $\mathbb{S}[0] = 0$.)
Consider the following example. Define the state $|\Omega\rangle$ as having all spins down: $|\Omega\rangle = \otimes_{i=1}^L|\downarrow\rangle$. Then $\mathbb{S}[|\Omega\rangle] =0$ because $|\Omega\rangle $ is a product state, while $\mathbb{S}[S^+|\Omega\rangle] = \ln(2)$, as can be seen by a quick Schmidt decomposition by hand. Notice in particular that
$$\mathbb{S}[S^+|\Omega\rangle] - \mathbb{S}[|\Omega\rangle] = \ln(2)$$
After playing around a little with numerics, I have the following conjecture:
$$\max_{|\psi\rangle \in \mathscr{H}} \left( \mathbb{S}[S^+|\psi\rangle] - \mathbb{S}[|\psi\rangle] \right)= \ln(2)$$
That is, $S^+$ can only increase the entropy of a state by $\ln(2)$ and no more. Similarly, I conjecture that
$$\max_{|\psi\rangle \in \mathscr{H}} \left( \mathbb{S}[(S^+)^n|\psi\rangle] - \mathbb{S}[|\psi\rangle] \right)=\mathbb{S}[(S^+)^n|\Omega\rangle] - \mathbb{S}[|\Omega\rangle]$$
Are these conjectures correct? How can I prove these conjectures?
A small piece of supporting evidence (not near a proof) for the first conjecture is that it is easy to check that all product states $|p\rangle$ in the $S^z$-basis obey $\mathbb{S}[S^+|p\rangle] \leq \ln(2)$, as the resulting state's Schmidt decomposition has at most two states. Another small piece of supporting evidence, when I feed in random states for $|\psi\rangle$, the entanglement entropy decreases relative to the entropy of the random state. However, this is just supporting evidence that is far from a statement about all possible states in the Hilbert space.
Answer: My conjectures in my question were incorrect. Since $S^+$ can destroy certain states, it's possible to begin with a low entanglement entropy state that gains a high entropy after being acted upon by $S^+$.
For example, consider (for some large $L$) a state $$|\psi\rangle = \sqrt{.999999999} \otimes_{i=1}^L|\uparrow\rangle_i + \sqrt{.000000001} (\text{scrambled, normalized superposition of a massive number of states}).$$ The entropy of this initial state arising from the second term is suppressed by the tiny coefficient. However, the action of $S^+$ will destroy the first term, removing the suppression of the second term after normalizing. The second term after being acted upon by $S^+$ will still have some large amount of entanglement, perhaps smaller than the second term had before, but still a much larger amount than the full initial state.
Thus, the entanglement of the final state will be much larger than the initial state, easily exceeding $\ln(2)$.
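As a quick numerical sanity check of the $\ln(2)$ base case quoted above, here is a small pure-Python sketch for $L=2$ (my own illustration, not part of the original answer):

```python
import math

# State S+|down,down> = |up,down> + |down,up>, normalized.
# Basis labels (a, c) for (first spin, second spin), with 1 = up:
amp = {(0, 1): 1 / math.sqrt(2), (1, 0): 1 / math.sqrt(2)}

# Reduced density matrix of the first spin:
# rho[a][b] = sum_c psi(a, c) * conj(psi(b, c))  (amplitudes are real here)
rho = [[0.0, 0.0], [0.0, 0.0]]
for (a, c), v in amp.items():
    for (b, c2), w in amp.items():
        if c == c2:
            rho[a][b] += v * w

# Eigenvalues of a 2x2 symmetric matrix via the quadratic formula.
tr = rho[0][0] + rho[1][1]
det = rho[0][0] * rho[1][1] - rho[0][1] * rho[1][0]
disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
eigs = [(tr + disc) / 2, (tr - disc) / 2]

entropy = -sum(p * math.log(p) for p in eigs if p > 1e-12)
print(entropy, math.log(2))  # both ~0.6931
```

The reduced density matrix comes out as diag(1/2, 1/2), so the half-chain entropy is exactly $\ln 2$, matching the Schmidt decomposition done by hand in the question.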
I will see if I can formulate a similar question in the context of unitary operators, which cannot annihilate states. | {
"domain": "physics.stackexchange",
"id": 78775,
"tags": "operators, entropy, quantum-entanglement, spin-chains"
} |
When is separating the total wavefunction into a space part and a spin part possible? | Question: The total wavefunction of an electron $\psi(\vec{r},s)$ can always be written as $$\psi(\vec{r},s)=\phi(\vec{r})\zeta_{s,m_s}$$ where $\phi(\vec{r})$ is the space part and $\zeta_{s,m_s}$ is the spin part of the total wavefunction $\psi(\vec{r},s)$. In my notation, $s=1/2, m_s=\pm 1/2$.
Question 1 Is the above statement true? I am asking about any wavefunction here. Not only about energy eigenfunctions.
Now imagine a system of two electrons. Even without any knowledge about the Hamiltonian of the system, the overall wavefunction $\psi(\vec{r}_1,\vec{r}_2;s_1,s_2)$ is antisymmetric. I think (I have this impression) under this general conditions, it is not possible to decompose $\psi(\vec{r}_1,\vec{r}_2;s_1,s_2)$ into a product of a space part and spin part. However, if the Hamiltonian is spin-independent, only then can we do such a decomposition into space part and spin part.
Question 2 Can someone properly argue that how this is so? Please mention about any wavefunction of the system and about energy eigenfunctions.
Answer: Your claim
[any arbitrary] wavefunction of an electron $\psi(\vec{r},s)$ can always be written as $$\psi(\vec{r},s)=\phi(\vec{r})\zeta_{s,m_s} \tag 1$$ where $\phi(\vec{r})$ is the space part and $\zeta_{s,m_s}$ is the spin part of the total wavefunction $\psi(\vec{r},s)$
is false. It is perfectly possible to produce wavefunctions which cannot be written in that separable form - for a simple example, just take two orthogonal spatial wavefunctions, $\phi_1$ and $\phi_2$, and two orthogonal spin states, $\zeta_1$ and $\zeta_2$, and define
$$
\psi = \frac{1}{\sqrt{2}}\bigg[\phi_1\zeta_1+\phi_2\zeta_2 \bigg].
$$
Moreover, to be clear: the hamiltonian of a system has absolutely no effect on the allowed wavefunctions for that system. The only thing that depends on the hamiltonian is the energy eigenstates.
The result you want is the following:
If the hamiltonian is separable into spatial and spin components as $$ H = H_\mathrm{space}\otimes \mathbb I+ \mathbb I \otimes H_\mathrm{spin},$$ with $H_\mathrm{space}\otimes \mathbb I$ commuting with all spin operators and $\mathbb I \otimes H_\mathrm{spin}$ commuting with all space operators, then there exists an eigenbasis for $H$ of the separable form $(1)$.
To build that eigenbasis, simply diagonalize $H_\mathrm{space}$ and $H_\mathrm{spin}$ independently, and form tensor products of their eigenstates. (Note also that the quantifiers here are crucial, particularly the "If" in the hypotheses and the "there exists" in the results.) | {
"domain": "physics.stackexchange",
"id": 56990,
"tags": "quantum-mechanics, wavefunction, quantum-spin, pauli-exclusion-principle, identical-particles"
} |
learnable segmentation or learnable edge detection | Question: Is there some learnable segmentation or learnable edge detection algorithm exist?
For example, I mark N photos by hand, and then the program "learns" how to do it the best way and can apply it to similar images.
Are there any papers on this subject, or source code?
Update:
I found some information on supervised edge detection / supervised segmentation:
http://www.loni.ucla.edu/~ztu/Download.htm
but I am still looking for code.
Answer: There is benchmark code, plus training and test datasets with ground-truth segmentations, for image segmentation on the Berkeley website; the code is in the same place. | {
"domain": "dsp.stackexchange",
"id": 1446,
"tags": "edge-detection, image-segmentation"
} |
Reduce only build-related ROS packages - reduce docker image size | Question:
I am building a custom ROS docker image to deploy it on a ubuntu xenial embedded board. The Dockerfile is shown below.
When I build the container, the size is about 1.7 GB (which is not much bigger than a ros:kinetic-ros-core image anyway).
BUT I need it to be smaller!!!
My idea is that it might be possible to remove/uninstall those ROS packages and other software which are only necessary for building my packages, but not needed after running "catkin build".
Does anyone know how this is done? When I try "apt remove ..." with e.g. the "ros-kinetic-roslint" package, even runtime-relevant stuff gets removed. Help, please!
FROM ubuntu
LABEL version="0.0.4"
ENV ROS_DISTRO kinetic
ENV CATKIN_WS=/root/catkin_ws
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 421C365BD9FF1F717815A3895523BAEEB01FA116
RUN echo "deb http://packages.ros.org/ros/ubuntu xenial main" > /etc/apt/sources.list.d/ros-latest.list
RUN apt-get update && \
apt-get -y --quiet --no-install-recommends install --allow-unauthenticated \
build-essential \
gcc \
g++ \
cmake \
make \
libusb-1.0-0-dev \
python-wstool \
python-catkin-tools \
ros-kinetic-cpp-common \
ros-kinetic-actionlib \
ros-kinetic-actionlib-msgs \
ros-kinetic-bond \
ros-kinetic-bondcpp \
ros-kinetic-catkin \
ros-kinetic-class-loader \
ros-kinetic-cmake-modules \
ros-kinetic-gencpp \
ros-kinetic-geneus \
ros-kinetic-genlisp \
ros-kinetic-genmsg \
ros-kinetic-gennodejs \
ros-kinetic-genpy \
ros-kinetic-geometry-msgs \
ros-kinetic-image-transport \
ros-kinetic-message-filters \
ros-kinetic-message-generation \
ros-kinetic-message-runtime \
ros-kinetic-nav-msgs \
ros-kinetic-nodelet \
ros-kinetic-pluginlib \
ros-kinetic-rosbag \
ros-kinetic-rosbag-storage \
ros-kinetic-rosbuild \
ros-kinetic-rosclean \
ros-kinetic-rosconsole \
ros-kinetic-roscpp \
ros-kinetic-roscpp-serialization \
ros-kinetic-roscpp-traits \
ros-kinetic-rosgraph \
ros-kinetic-rosgraph-msgs \
ros-kinetic-roslaunch \
ros-kinetic-roslib \
ros-kinetic-roslint \
ros-kinetic-roslz4 \
ros-kinetic-rosmaster \
ros-kinetic-rosout \
ros-kinetic-rospack \
ros-kinetic-rosparam \
ros-kinetic-rospy \
ros-kinetic-rostest \
ros-kinetic-rostime \
ros-kinetic-rostopic \
ros-kinetic-rosunit \
ros-kinetic-sensor-msgs \
ros-kinetic-smclib \
ros-kinetic-std-msgs \
ros-kinetic-std-srvs \
ros-kinetic-tf2 \
ros-kinetic-tf2-msgs \
ros-kinetic-tf2-py \
ros-kinetic-tf2-ros \
ros-kinetic-topic-tools \
ros-kinetic-xmlrpcpp \
ros-kinetic-mav-comm \
ros-kinetic-mav-msgs \
ros-kinetic-image-common \
ros-kinetic-visualization-msgs \
libopencv-dev && \
rm -rf /var/lib/apt/lists/*
# copy my local src directory to the docker image
ADD src /root/catkin_ws/src
WORKDIR $CATKIN_WS
RUN catkin config --install --extend=/opt/ros/kinetic -DCMAKE_BUILD_TYPE=Release && \
catkin build --summarize
RUN rm -rf /root/catkin_ws/src
RUN rm -rf /root/catkin_ws/build
RUN rm -rf /root/catkin_ws/logs
RUN rm -rf /root/catkin_ws/devel
RUN strip --strip-all $(find /root/catkin_ws/install -name "*.so"); exit 0
RUN echo 'source "$CATKIN_WS/install/setup.bash"' >> ~/.bashrc
CMD ["bash"]
Originally posted by basti.hunter on ROS Answers with karma: 25 on 2018-03-06
Post score: 2
Answer:
You could consider using an onbuild to create a minimalistic runtime environment by cherry picking the built package files from the build-dockerfile image layers. Onbuild's intended use is geared towards exporting a minimal compiled application from the build image to provide a slimmed down runtime image:
Docker Reference | Engine Builder: onbuild
Stackoverflow | Dockerfile ONBUILD instruction
Also, because the image size on disk is the sum of the sizes of the layers it's constructed from, removing files in a separate RUN-command/image-layer does not reduce the total size of the resulting image. You would then need to flatten the image to achieve this (as when exporting and then importing the image). However, onbuild now offers a more elegant solution, and is how most of the Docker community achieves this runtime image reduction.
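For reference, Docker's multi-stage builds (available since Docker 17.05, and arguably simpler than ONBUILD for this purpose) implement exactly this cherry-picking in a single Dockerfile. The sketch below is my own adaptation to the workspace layout in the question, not the original answer's code; package names and paths are assumed:

```dockerfile
# --- build stage: compilers and -dev packages live only here ---
FROM ros:kinetic-ros-core AS builder
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential cmake python-catkin-tools && \
    rm -rf /var/lib/apt/lists/*
COPY src /root/catkin_ws/src
WORKDIR /root/catkin_ws
RUN catkin config --install --extend=/opt/ros/kinetic \
        -DCMAKE_BUILD_TYPE=Release && \
    catkin build --summarize

# --- runtime stage: only the install space is copied over ---
FROM ros:kinetic-ros-core
COPY --from=builder /root/catkin_ws/install /root/catkin_ws/install
RUN echo 'source /root/catkin_ws/install/setup.bash' >> /root/.bashrc
CMD ["bash"]
```

Only the final stage's layers end up in the shipped image, so the build toolchain never needs to be uninstalled at all.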
Stackoverflow | Why are Docker container images so large?
Originally posted by ruffsl with karma: 1094 on 2018-03-29
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 30223,
"tags": "ros, docker, ros-kinetic"
} |
Find lowest leaf nodes in a binary tree | Question: So this is probably a popular problem, and I decided to take a crack at it today using C#.
I came up with the following solution:
private BinaryTree<int>.BinaryTreeNode FindLowestNodes(BinaryTree<int>.BinaryTreeNode node, int depth, ref int biggestDepth) {
this.WriteResult("Entering recursion for node.....{0}", node.Value.ToString());
BinaryTree<int>.BinaryTreeNode lowestNode = null;
if (node.Left != null) {
var n = FindLowestNodes(node.Left, depth + 1, ref biggestDepth);
if (n != null) lowestNode = n;
}
if (node.Right != null) {
var n = FindLowestNodes(node.Right, depth + 1, ref biggestDepth);
if (n != null)
lowestNode = n;
}
if (node.Left == null && node.Right == null) {
if (depth > biggestDepth) {
lowestNode = node;
biggestDepth = depth;
}
}
return lowestNode;
}
This is a recursive function using a depth parameter and a biggestDepth parameter that helps me determine whether the node is indeed the lowest, since the tree can have leaves at multiple depths. What I don't like about passing an int by reference is boxing...
Could this solution be written more efficiently?
Also, would it be easier to parallelize using a functional language like Clojure, or is that not really a factor as long as variables are not passed by reference?
Answer: Passing an int with the ref keyword does not involve any boxing at all. Do not confuse this with "passing an int by reference" (as in, a function with return type Object returning an int,) which is quite a different thing, and it does involve boxing.
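As an aside (my own sketch, not part of the original answer), the depth/biggestDepth bookkeeping can be avoided entirely with an iterative level-order traversal: the last non-empty level holds all of the deepest leaves. In Python for brevity:

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def lowest_leaves(root):
    """Values of all nodes on the deepest level (they are always leaves)."""
    if root is None:
        return []
    level, last = [root], []
    while level:
        last = [n.value for n in level]
        # Collect the next level's children, skipping missing ones.
        level = [c for n in level for c in (n.left, n.right) if c]
    return last

#        1
#       / \
#      2   3
#     /
#    4
tree = Node(1, Node(2, Node(4)), Node(3))
print(lowest_leaves(tree))  # prints [4]
```

Unlike the recursive version, this returns every deepest leaf rather than just one, and needs no ref-style out parameter.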
Your code seems quite efficient to me, I have nothing to comment on. | {
"domain": "codereview.stackexchange",
"id": 1097,
"tags": "c#"
} |
Efficient Protocol for Entanglement Purification | Question: By using entanglement purification, we can produce a high-fidelity entangled state from several pieces of low-fidelity entangled states. From my study, there is a protocol proposed by Bennett et al.
Further improvements were made by the Deutsch protocol. But if we closely observe these two schemes, we see that they are not particularly great for producing high-fidelity entangled states. Besides, they are based on trial and error, so in my opinion there should be much more sophisticated schemes. I wonder if someone can help by suggesting some existing efficient schemes regarding this matter. I am a newbie in this field, so please keep this in mind. Thank you.
Answer: Imagine you have $n$ noisy copies of a maximally entangled state, where Alice has one half of each and Bob has the other half. Pick your favourite error correcting code on $n$ qubits that encodes one logical qubit (meaning the one that has the highest error correcting threshold for your particular noise model). This has logical states $|0\rangle_L$ and $|1\rangle_L$.
Alice prepares an $n+1$ qubit state
$$
\frac{1}{\sqrt{2}}(|0\rangle|0\rangle_L+|1\rangle|1\rangle_L).
$$
She takes the $n$ qubits of the logical state, and teleports each of them through one of the noisy Bell pairs. So, a noisy version of the logical qubits arrives with Bob.
Bob performs error correction and decoding on the qubits he holds. With some probability (defined by your error correcting code and the noise model), correction succeeds, in which case you would have a pure maximally entangled state shared between Alice and Bob. In practice, due to the non-zero failure probability, they'll share a mixture of the 4 possible Bell states, but if you've picked your code correctly, the purity will be much higher.
So, the problem is "simply" transformed into one of finding the best possible error correcting code for your actual noise model. You probably want to assume a noise model that acts on each noisy Bell pair independently. For example, if you took depolarising noise, there are well-known results about how well error correcting codes can possibly work. For example, it is reported here that
$$
1=2p\log_2(3)+h(2p)
$$
is the behaviour in the asymptotic limit ($n\rightarrow\infty$) for a case where the per-qubit error probability is $p$ and $h(p)$ is the binary entropy function. This should give you a benchmark for how well any finite sized case you choose might be performing. For example, the Toric Code gets pretty tight to this bound (as I tested numerically here, although I imagine there are plenty of other sources!) | {
"domain": "quantumcomputing.stackexchange",
"id": 4696,
"tags": "entanglement, communication"
} |
Choice of color for graphs with filled area | Question: I am writing an article and the central results are summarized in a graph so I want it to be very beautiful and physically intuitive. It has 3 regions in which a certain quantity is conserved, partially conserved or not conserved so I want them to be respectively green, yellow/orange and red. The problem is that I can't find a good combination of colors... Maybe someone has done something of similar before and can give some suggestion.. I post here my figure:
Maybe this is not the right place to ask this question, but I cannot find a more appropriate Stack Exchange site to post it...
Answer: I think the problem here is that green, red and yellow/orange do not intuitively suggest different levels of conservation. In addition, yellow/orange and red have very poor contrast with each other. Here's what I suggest:
In the fully conserved region --> take blue
In the partially conserved region --> take a much lighter shade of the same kind of blue
In the non-conserved region --> take white
The key to making pretty presentations and related material is minimalism and clarity. | {
"domain": "physics.stackexchange",
"id": 9596,
"tags": "data-analysis"
} |
Using Threading.Event to signal main thread | Question: I was reading on this page about using a thread, catching an exception, and notifying the calling thread.
The answer suggested Queue, but in the comments it states:
Queue is not the best vehicle to communicate an error back, unless you
want to have a full queue of them. A much better construct is
threading.Event()
From the Python documentation:
This is one of the simplest mechanisms for communication between
threads: one thread signals an event and other threads wait for it.
I'm new to multi threading and I wanted to give it a try, so I came up with this trivial example. I'm unsure if I'm on the right track.
import threading
def dividebyzero(event,first,second):
try:
result = first / second
except ZeroDivisionError:
event.set()
print("Can't divide by 0")
else:
print(result)
e = threading.Event()
t1 = threading.Thread(name='DivideByZero',target=dividebyzero, args=(e, 10, 0),)
t1.start()
t1.join()
if e.is_set():
print("Error occurred in division thread")
Question:
Using threading.Event as shown in my example, is it the correct way to signal to the main thread that an error has occurred?
I'm not using event.wait() because it seems unnecessary; I call join() on the t1 thread, so when I call is_set() hopefully I'll get an accurate result. Is it mandatory to use event.wait() when using threading.Event()?
I understand that if I needed to raise more exceptions, using an event probably isn't ideal: how would you know which exception was raised? In that situation, would a Queue be better?
If I were to use event.wait() there is a chance the main thread could hang forever, because if the exception doesn't occur the event is never set.
I don't have an error or problems with the code, it works, I just want it reviewed to make sure I used event correctly.
Answer:
Yes and no. Your approach works, and it is easy to understand. If, however, you want to keep track of what type of exception was raised, it could be improved. What if you have some super complex function that can raise 10 different exceptions, and you need access to the exception traceback if something goes wrong? You'd need 10 threading.Event instances. In that case, you should use a queue.Queue instead, which is thread-safe (untested snippet):
import queue
import threading
def dividebyzero(first, second, exc_queue):
try:
result = first / second
except ZeroDivisionError as exc:
exc_queue.put(exc)
else:
print(result)
excq = queue.Queue()
t = threading.Thread(target=dividebyzero, args=(10, 0, excq))
t.start()
t.join()
if excq.qsize() != 0:
print(excq.get())
No, it is not mandatory to use threading.Event.wait(). Your observation is right: calling event.wait() might make the thread hang indefinitely.
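On the hanging concern raised in the question: event.wait() also accepts a timeout and returns the flag's state, so the main thread can bound how long it blocks. A small sketch (my own variation on the question's code, not from the original answer):

```python
import threading

def dividebyzero(event, first, second):
    try:
        first / second
    except ZeroDivisionError:
        event.set()

e = threading.Event()
t = threading.Thread(target=dividebyzero, args=(e, 10, 0))
t.start()

# wait(timeout=...) returns True if the event was set, False on
# timeout, so the main thread can never hang forever here.
error_happened = e.wait(timeout=1.0)
t.join()
print(error_happened)  # prints True: the worker set the event
```

If the worker never raises, wait() simply returns False after one second instead of blocking indefinitely.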
goto 1; | {
"domain": "codereview.stackexchange",
"id": 30454,
"tags": "python, python-3.x, multithreading"
} |
Angular ng-show | Question: Is there any easier/shorter/better way to do the marked part?
plnkr
<body ng-app="">
Apple <input type="checkbox" ng-model="apple" aria-label="Toggle ngHide"><br/>
Banana <input type="checkbox" ng-model="banana" aria-label="Toggle ngHide"><br/>
<div>
<!-- This part-->
<div ng-show="apple || banana">
You want to buy:
<span>
<span ng-show="apple">Apple</span>
<span ng-show="banana">Banana</span>
</span>
</div>
<!-- This part-->
</div>
</body>
Answer: A better and more performance-oriented solution is to use ng-if instead of ng-show. Although it doesn't matter with so few items on the scope, with large amounts of data it's better to use ng-if.
<body ng-app="">
Apple <input type="checkbox" ng-model="apple" aria-label="Toggle ngHide"><br/>
Banana <input type="checkbox" ng-model="banana" aria-label="Toggle ngHide"><br/>
<div>
<div ng-if="apple || banana">
You want to buy:
<span>
<span ng-if="apple">Apple</span>
<span ng-if="banana">Banana</span>
</span>
</div>
</div>
</body> | {
"domain": "codereview.stackexchange",
"id": 15166,
"tags": "html, angular.js"
} |
Does graph G with all vertices of degree 3 have a cut vertex? | Question: I'm asked to draw a simple connected graph, if possible, in which every vertex has degree 3 and which has a cut vertex. I tried drawing a cycle graph, in which all the degrees are 2, and it seems there is no cut vertex there. I know, so far, that, by the handshaking theorem, the number of vertices has to be even and greater than or equal to 4. So, I kept drawing such graphs but couldn't find one with a cut vertex. I have a feeling that there must be at least one vertex of degree one but I don't know how to formally prove this, if it's true. I'd appreciate it if someone could help with that.
Answer: Not necessarily true: for example, the complete graph on 4 vertices has no cut vertex. But there does exist a graph G with all vertices of degree 3 and a cut vertex. See the picture; the red vertex is the cut vertex. | {
"domain": "cs.stackexchange",
"id": 11729,
"tags": "graphs, discrete-mathematics, max-cut"
} |
What happens to an electron when it radiates a photon? | Question: I recently came across this Feynman diagram:
For a more simplistic diagram, I suppose even this would be adequate:
As you can see in these diagrams, they radiate these virtual photons. The virtual photons have no charge or mass, and there is no apparent visible change in the energy of the electron after it radiates this photon. I'm aware that the probability level of the incident occurring drops by 1% for every extra vertex in the diagram, so low as the probability can get, it can never touch 0. What stops the electron from radiating multiple photons that can turn into other particles? What change occurs in the electron itself after the emission of the photon?
In the case of quarks and gluons, as seen in the diagram below, the quarks are losing the color charge of the gluon. Therefore, some change must occur in the electron after it radiates the photon.
Answer: My understanding is that this is due to momentum conservation.
A gamma ray cannot create an e-/e+ pair without interacting with a nucleus in order to conserve momentum. This is why the virtual gamma ray here can only create a virtual (off-shell) e-/e+ pair.
The change that occurs in the electron after the photon emission is just a change in energy/momentum.
Additionally, your comment about an electron creating multiple gammas - the only vertex allowed is two fermions with a photon, so emitting multiple gammas must be done in the fashion shown in the first diagram (in multiple steps) | {
"domain": "physics.stackexchange",
"id": 74760,
"tags": "quantum-mechanics, electrons, quantum-electrodynamics, feynman-diagrams, quantum-chromodynamics"
} |
Scheduling Algorithm : First Come First Serve | Question: I am practicing operator overloading and writing copy constructors. I have written this program implementing a scheduling algorithm. Please suggest some improvements.
#include <iostream>
#include <vector>
class Scheduling
{
int process;
std::vector<int> burst_time;
std::vector<int> waiting_time;
public:
Scheduling() : process(0) {}
Scheduling(int n) : process(n)
{
for (int i = 0; i < n; i++)
{
std::cout << "Enter Burst time for process " << i+1 << " : ";
int val;
std::cin >> val;
burst_time.push_back(val);
}
}
//copy constructor
Scheduling(const Scheduling &other)
{
process = other.process;
int size = other.burst_time.size();
for (int i = 0; i < size; i++)
{
burst_time.push_back(other.burst_time[i]);
}
}
//copy assignment
Scheduling &operator=(const Scheduling &other)
{
process = other.process;
int size = other.burst_time.size();
for (int i = 0; i < size; i++)
{
burst_time.push_back(other.burst_time[i]);
}
return *this;
}
std::vector<int> cal_waiting_time(std::vector<int>& burst_time)
{
waiting_time.push_back(0);
for (int i = 0; i < burst_time.size() - 1; i++)
{
waiting_time.push_back(waiting_time[i] + burst_time[i]);
}
return waiting_time;
}
void print_table()
{
waiting_time = cal_waiting_time(burst_time);
std::cout << "Process\t Burst Time\t Waiting Time\n";
for (int i = 0; i < burst_time.size(); i++)
{
std::cout << i+1 << "\t\t " << burst_time[i] << "\t\t " << waiting_time[i] << "\n";
}
}
};
int main()
{
int num;
std::cout << "Enter number of process\n";
std::cin >> num;
Scheduling batch1(num);
batch1.print_table();
Scheduling batch2(batch1);
batch2.print_table();
}
Answer: Let's dive into the review.
int process;
Does process serve any purpose? The only time it's used is when constructing the burst_time vector. And it's the same as burst_time.size().
Scheduling() : process(0) {}
Scheduling(int n) : process(n)
{
for (int i = 0; i < n; i++)
{
std::cout << "Enter Burst time for process " << i+1 << " : ";
int val;
std::cin >> val;
burst_time.push_back(val);
}
}
I don't know if you realize it, but both of these constructors do the same thing. You could remove the default constructor, and make the other one Scheduling(int n = 0), and avoid repeating yourself.
Whenever you write a constructor that takes a single argument, you almost always want to declare it explicit. This is for safety and efficiency, to avoid accidentally constructing an expensive Scheduling object.
Another issue here is that you don't do any error checking on your input. For toy code that's not a problem, but in real code, you'll want to check that each input doesn't fail. One easy (but not very efficient) way to do that is to read input into a string, and then convert it to an int with std::stoi(). If the conversion fails, an exception will be thrown, but at least your program won't fly into a tailspin.
So all of the above code might become something like:
explicit Scheduling(int n = 0)
{
auto buffer = std::string{};
for (int i = 0; i < n; i++)
{
std::cout << "Enter Burst time for process " << i+1 << " : ";
std::getline(std::cin, buffer);
burst_time.push_back(std::stoi(buffer));
}
}
I would also recommend not doing the input in the constructor. This is inflexible. Instead what you should do is have a constructor that takes a vector<int> as a parameter, and put I/O stuff elsewhere. That way you could read in your times either from std::cin (as you are), or from a data file, or a GUI element, or literally anything.
Next is the copy constructor. Now, the default copy constructor will mostly work for this class, so really the best thing you could do would be to not write a copy constructor at all - just let the compiler generate it.
That is... except for the fact that the default copy constructor would also copy waiting_time, which you don't seem to want.
So if you want to copy burst_time (and maybe process) but not waiting_time... then yes, you need a custom copy constructor.
However, there is no need to manually copy every item in a vector one-by-one. vector knows how to copy itself. So your copy constructor can just be:
Scheduling(const Scheduling& other) :
burst_time{other.burst_time}
{
// nothing needed here
}
Whenever you manually define the copy constructor or assignment operator, you prevent automatic generation of the move operations... which is why you normally really don't want to do it. You'll probably want them back. You'll see why shortly.
The best way to do the special operations in a class is usually to define a swap function. If you want to make swapping part of the public interface - which you almost always do - you can make it a friend function. And you'll almost always want your swap function to be noexcept.
friend void swap(Scheduling& a, Scheduling& b) noexcept
{
using std::swap;
swap(a.burst_time, b.burst_time);
// you can swap waiting_time too, if you want
}
Once you have a swap function, the move constructor is easy:
Scheduling(Scheduling&& other) noexcept
{
using std::swap;
swap(*this, other);
}
Move assignment is also easy:
Scheduling& operator=(Scheduling&& other) noexcept
{
using std::swap;
swap(*this, other);
return *this;
}
But the real reason we want swap and the move operations is next.
Scheduling &operator=(const Scheduling &other)
{
process = other.process;
int size = other.burst_time.size();
for (int i = 0; i < size; i++)
{
burst_time.push_back(other.burst_time[i]);
}
return *this;
}
Unfortunately, your copy assignment is buggy. The first problem is that it's not really copying the burst_time vector... it's appending to it.
In other words, if you have a Scheduling object s1 with burst_times "1, 2, 3", and s2 with burst_times "4, 5, 6", when you do s1 = s2;, you're not going to end up with both having the same value. s2 will remain unchanged, but s1 will now be "1, 2, 3, 4, 5, 6". That's probably not what you want.
To solve this, you probably want to clear the burst_time vector before doing the push_back()s.
The other problem is deeper. Each time you try to push_back, that might force the vector to reallocate... and that might throw an exception. If that happens midway through, you'll end up with a broken Scheduling object - it will only be half-copied.
There is a standard technique used to solve this problem called copy-and-swap or copy-and-move. In fact, it would solve all your problems with this assignment operator. Here's how it looks:
Scheduling& operator=(const Scheduling& other)
{
// First we copy other...
auto temp = other;
// If that fails in any way, no problem. temp is in an indeterminate
// state... but who cares? It will be destroyed, and this object
// hasn't been touched yet.
// Next we move temp into *this (or we swap(temp, *this) - same thing)
// Because we made the move operations noexcept, this cannot fail.
*this = std::move(temp);
// And we're done!
return *this;
}
This technique prevents all problems that assignment can possibly have: it handles exceptions, it handles self-assignment, it handles everything. There is an efficiency cost, but it's often unavoidable, unfortunately.
So your copy assignment operator should almost always look like this:
Type& operator=(Type const& other)
{
auto temp = other;
*this = std::move(temp);
// OR:
// using std::swap;
// swap(*this, temp);
return *this;
}
And that means you should almost always write the move operations and swap, and make them noexcept.
std::vector<int> cal_waiting_time(std::vector<int>& burst_time)
{
waiting_time.push_back(0);
for (int i = 0; i < burst_time.size() - 1; i++)
{
waiting_time.push_back(waiting_time[i] + burst_time[i]);
}
return waiting_time;
}
This function might have a bug in it. If you call it twice, the waiting_time vector will be doubled in size. Are you sure you don't want to clear it first?
The other, bigger issue with this function is that it doesn't really seem to make sense as a member function as written. Here burst_time is a function parameter, which shadows burst_time in the class. Is that intentional? Even weirder, you use the class waiting_time member in the function... then promptly overwrite it with the return in print_table().
It seems to me that this function should not be in the class. It should be a free function outside. Or perhaps a static member function. Maybe even private. It depends on the interface you want.
So assuming it's not going to be a member function anymore, there are a few more things to fix.
The first is that you probably want to take the burst_time argument by const reference.
The second is that you don't handle the case where burst_time is empty. If it is, subtracting 1 from the size (which is 0, and unsigned, so it wraps around to a huge value) means your loop will attempt to read through gigabytes of random memory - a spectacular crash. (Though it will probably segfault and die fairly quickly, in reality.)
The third is another minor bug because you use int as the loop counter variable, but burst_time.size() returns an unsigned type. Comparing signed and unsigned types is dangerous. Your loop should be for (decltype(burst_time.size()) i = 0; i < burst_time.size(); ++i).
The final problem is a matter of efficiency. push_back() is cool and all, but each time you use it, you might be triggering a reallocation... which is expensive and slow. (In practice, you'll probably only get a reallocation every 2^n elements.) When you know how many elements are going to be in the vector, you can use reserve() to do all the allocation up front. This can be an enormous speed-up.
Putting that all together gives:
// This function is NOT inside the class. It is a free function.
std::vector<int> cal_waiting_time(std::vector<int> const& burst_time)
{
if (burst_time.empty())
return {}; // or maybe "return {0};"? that's up to you
auto waiting_time = std::vector<int>{};
waiting_time.reserve(burst_time.size());
waiting_time.push_back(0);
for (decltype(burst_time.size()) i = 0; i < burst_time.size() - 1; i++)
{
waiting_time.push_back(waiting_time[i] + burst_time[i]);
}
return waiting_time;
}
Once you've done this, it raises the question of why you have waiting_time as a class data member. It doesn't seem to serve any purpose. And if you don't have waiting_time as a member, then there's no justification whatsoever for a custom copy constructor... or any of the other stuff we were forced to write because of that.
In other words, it seems like all you want for the class is this:
class Scheduling
{
std::vector<int> burst_time;
public:
Scheduling() = default;
explicit Scheduling(std::vector<int> burst_time_values) :
burst_time{std::move(burst_time_values)}
{
// Nothing needed here.
}
void print_table()
{
auto waiting_time = cal_waiting_time(burst_time);
std::cout << "Process\t Burst Time\t Waiting Time\n";
for (decltype(burst_time.size()) i = 0; i < burst_time.size(); i++)
{
std::cout << i+1 << "\t\t " << burst_time[i] << "\t\t " << waiting_time[i] << "\n";
}
}
// Automatic copy constructor, copy assignment, move constructor
// move assignment, and destructor are all perfect.
};
Then you'd do your I/O separately into a vector<int>, maybe in main(), like this:
std::cout << "Enter number of process\n";
int num;
std::cin >> num;
// Should do error checking!
auto burst_times = std::vector<int>{};
burst_times.reserve(num);
for (int i = 0; i < num; i++)
{
std::cout << "Enter Burst time for process " << i+1 << " : ";
int val;
std::cin >> val;
// Again, error checking!
burst_times.push_back(val);
}
auto batch1 = Scheduling{std::move(burst_times)};
batch1.print_table();
auto batch2 = batch1;
batch2.print_table();
Anyway, the rest of the code is cool (though you should probably do some error checking when you read num in main()).
Summary
Make any constructors that take only one argument explicit.
Avoid unnecessary member functions and data members.
If you implement either the copy constructor or copy assignment operator (or both), you almost always want to implement the move constructor, move operator, and swap functions. The move constructor, move operator, and swap should all be noexcept.
When implementing the copy assignment operator, use the copy-and-swap or copy-and-move technique. (This requires noexcept move and/or swap operations. But you want those anyway.)
Beware of shadowing variables with other variables with the same name.
Watch out for signed/unsigned comparisons - use auto and decltype() to avoid problems.
When you can know the size of a vector in advance, use reserve().
You should check input operations for errors, because they happen often. | {
"domain": "codereview.stackexchange",
"id": 30874,
"tags": "c++, c++11"
} |
Snake game in C | Question: Here is a very basic Snake game in C, which I just want to make better. The game is working perfectly but it is very annoying because when playing it, it is always blinking. I hope that somebody could try it in their compiler to see how annoying it is. How can I improve this?
Here is a screen shot of the game:
Of course it works, but I just need some advice about the design.
#include <stdio.h>
#include <time.h>
#include <stdlib.h>
#include <conio.h>
#include<time.h>
#include<ctype.h>
#include <time.h>
#include <windows.h>
#include <process.h>
#include <unistd.h>
#define UP 72
#define DOWN 80
#define LEFT 75
#define RIGHT 77
int length;
int bend_no;
int len;
char key;
void record();
void load();
int life;
void Delay(long double);
void Move();
void Food();
int Score();
void Print();
void gotoxy(int x, int y);
void GotoXY(int x,int y);
void Bend();
void Boarder();
void Down();
void Left();
void Up();
void Right();
void ExitGame();
int Scoreonly();
struct coordinate{
int x;
int y;
int direction;
};
typedef struct coordinate coordinate;
coordinate head, bend[500],food,body[30];
int main()
{
char key;
Print();
system("cls");
load();
length=5;
head.x=25;
head.y=20;
head.direction=RIGHT;
Boarder();
Food(); //to generate food coordinates initially
life=3; //number of extra lives
bend[0]=head;
Move(); //initialing initial bend coordinate
return 0;
}
void Move()
{
int a,i;
do{
Food();
fflush(stdin);
len=0;
for(i=0;i<30;i++)
{
body[i].x=0;
body[i].y=0;
if(i==length)
break;
}
Delay(length);
Boarder();
if(head.direction==RIGHT)
Right();
else if(head.direction==LEFT)
Left();
else if(head.direction==DOWN)
Down();
else if(head.direction==UP)
Up();
ExitGame();
}while(!kbhit());
a=getch();
if(a==27)
{
system("cls");
exit(0);
}
key=getch();
if((key==RIGHT&&head.direction!=LEFT&&head.direction!=RIGHT)||(key==LEFT&&head.direction!=RIGHT&&head.direction!=LEFT)||(key==UP&&head.direction!=DOWN&&head.direction!=UP)||(key==DOWN&&head.direction!=UP&&head.direction!=DOWN))
{
bend_no++;
bend[bend_no]=head;
head.direction=key;
if(key==UP)
head.y--;
if(key==DOWN)
head.y++;
if(key==RIGHT)
head.x++;
if(key==LEFT)
head.x--;
Move();
}
else if(key==27)
{
system("cls");
exit(0);
}
else
{
printf("\a");
Move();
}
}
void gotoxy(int x, int y)
{
COORD coord;
coord.X = x;
coord.Y = y;
SetConsoleCursorPosition(GetStdHandle(STD_OUTPUT_HANDLE), coord);
}
void GotoXY(int x, int y)
{
HANDLE a;
COORD b;
fflush(stdout);
b.X = x;
b.Y = y;
a = GetStdHandle(STD_OUTPUT_HANDLE);
SetConsoleCursorPosition(a,b);
}
void sleep(unsigned int mseconds)
{
clock_t goal = mseconds + clock();
while (goal > clock());
}
void load(){
int row,col,r,c,q;
gotoxy(36,14);
printf("loading...");
gotoxy(30,15);
for(r=1;r<=20;r++){
sleep(200);//to display the character slowly
printf("%c",177);}
getch();
}
void Down()
{
int i;
for(i=0;i<=(head.y-bend[bend_no].y)&&len<length;i++)
{
GotoXY(head.x,head.y-i);
{
if(len==0)
printf("v");
else
printf("*");
}
body[len].x=head.x;
body[len].y=head.y-i;
len++;
}
Bend();
if(!kbhit())
head.y++;
}
void Delay(long double k)
{
Score();
long double i;
for(i=0;i<=(10000000);i++);
}
void ExitGame()
{
int i,check=0;
for(i=4;i<length;i++) //starts with 4 because it needs minimum 4 element to touch its own body
{
if(body[0].x==body[i].x&&body[0].y==body[i].y)
{
check++; //check's value increases as the coordinates of head is equal to any other body coordinate
}
if(i==length||check!=0)
break;
}
if(head.x<=10||head.x>=70||head.y<=10||head.y>=30||check!=0)
{
life--;
if(life>=0)
{
head.x=25;
head.y=20;
bend_no=0;
head.direction=RIGHT;
Move();
}
else
{
system("cls");
printf("All lives completed\nBetter Luck Next Time!!!\nPress any key to quit the game\n");
record();
exit(0);
}
}
}
void Food()
{
if(head.x==food.x&&head.y==food.y)
{
length++;
time_t a;
a=time(0);
srand(a);
food.x=rand()%70;
if(food.x<=10)
food.x+=11;
food.y=rand()%30;
if(food.y<=10)
food.y+=11;
}
else if(food.x==0)/*to create food for the first time coz global variable are initialized with 0*/
{
food.x=rand()%70;
if(food.x<=10)
food.x+=11;
food.y=rand()%30;
if(food.y<=10)
food.y+=11;
}
}
void Left()
{
int i;
for(i=0;i<=(bend[bend_no].x-head.x)&&len<length;i++)
{
GotoXY((head.x+i),head.y);
{
if(len==0)
printf("<");
else
printf("*");
}
body[len].x=head.x+i;
body[len].y=head.y;
len++;
}
Bend();
if(!kbhit())
head.x--;
}
void Right()
{
int i;
for(i=0;i<=(head.x-bend[bend_no].x)&&len<length;i++)
{
//GotoXY((head.x-i),head.y);
body[len].x=head.x-i;
body[len].y=head.y;
GotoXY(body[len].x,body[len].y);
{
if(len==0)
printf(">");
else
printf("*");
}
/*body[len].x=head.x-i;
body[len].y=head.y;*/
len++;
}
Bend();
if(!kbhit())
head.x++;
}
void Bend()
{
int i,j,diff;
for(i=bend_no;i>=0&&len<length;i--)
{
if(bend[i].x==bend[i-1].x)
{
diff=bend[i].y-bend[i-1].y;
if(diff<0)
for(j=1;j<=(-diff);j++)
{
body[len].x=bend[i].x;
body[len].y=bend[i].y+j;
GotoXY(body[len].x,body[len].y);
printf("*");
len++;
if(len==length)
break;
}
else if(diff>0)
for(j=1;j<=diff;j++)
{
/*GotoXY(bend[i].x,(bend[i].y-j));
printf("*");*/
body[len].x=bend[i].x;
body[len].y=bend[i].y-j;
GotoXY(body[len].x,body[len].y);
printf("*");
len++;
if(len==length)
break;
}
}
else if(bend[i].y==bend[i-1].y)
{
diff=bend[i].x-bend[i-1].x;
if(diff<0)
for(j=1;j<=(-diff)&&len<length;j++)
{
/*GotoXY((bend[i].x+j),bend[i].y);
printf("*");*/
body[len].x=bend[i].x+j;
body[len].y=bend[i].y;
GotoXY(body[len].x,body[len].y);
printf("*");
len++;
if(len==length)
break;
}
else if(diff>0)
for(j=1;j<=diff&&len<length;j++)
{
/*GotoXY((bend[i].x-j),bend[i].y);
printf("*");*/
body[len].x=bend[i].x-j;
body[len].y=bend[i].y;
GotoXY(body[len].x,body[len].y);
printf("*");
len++;
if(len==length)
break;
}
}
}
}
void Boarder()
{
system("cls");
int i;
GotoXY(food.x,food.y); /*displaying food*/
printf("F");
for(i=10;i<71;i++)
{
GotoXY(i,10);
printf("!");
GotoXY(i,30);
printf("!");
}
for(i=10;i<31;i++)
{
GotoXY(10,i);
printf("!");
GotoXY(70,i);
printf("!");
}
}
void Print()
{
//GotoXY(10,12);
printf("\tWelcome to the mini Snake game.(press any key to continue)\n");
getch();
system("cls");
printf("\tGame instructions:\n");
printf("\n-> Use arrow keys to move the snake.\n\n-> You will be provided foods at the several coordinates of the screen which you have to eat. Everytime you eat a food the length of the snake will be increased by 1 element and thus the score.\n\n-> Here you are provided with three lives. Your life will decrease as you hit the wall or snake's body.\n\n-> YOu can pause the game in its middle by pressing any key. To continue the paused game press any other key once again\n\n-> If you want to exit press esc. \n");
printf("\n\nPress any key to play game...");
if(getch()==27)
exit(0);
}
void record(){
char plname[20],nplname[20],cha,c;
int i,j,px;
FILE *info;
info=fopen("record.txt","a+");
getch();
system("cls");
printf("Enter your name\n");
scanf("%[^\n]",plname);
//************************
for(j=0;plname[j]!='\0';j++){ //to convert the first letter after space to capital
nplname[0]=toupper(plname[0]);
if(plname[j-1]==' '){
nplname[j]=toupper(plname[j]);
nplname[j-1]=plname[j-1];}
else nplname[j]=plname[j];
}
nplname[j]='\0';
//*****************************
//sdfprintf(info,"\t\t\tPlayers List\n");
fprintf(info,"Player Name :%s\n",nplname);
//for date and time
time_t mytime;
mytime = time(NULL);
fprintf(info,"Played Date:%s",ctime(&mytime));
//**************************
fprintf(info,"Score:%d\n",px=Scoreonly());//call score to display score
//fprintf(info,"\nLevel:%d\n",10);//call level to display level
for(i=0;i<=50;i++)
fprintf(info,"%c",'_');
fprintf(info,"\n");
fclose(info);
printf("wanna see past records press 'y'\n");
cha=getch();
system("cls");
if(cha=='y'){
info=fopen("record.txt","r");
do{
putchar(c=getc(info));
}while(c!=EOF);}
fclose(info);
}
int Score()
{
int score;
GotoXY(20,8);
score=length-5;
printf("SCORE : %d",(length-5));
score=length-5;
GotoXY(50,8);
printf("Life : %d",life);
return score;
}
int Scoreonly()
{
int score=Score();
system("cls");
return score;
}
void Up()
{
int i;
for(i=0;i<=(bend[bend_no].y-head.y)&&len<length;i++)
{
GotoXY(head.x,head.y+i);
{
if(len==0)
printf("^");
else
printf("*");
}
body[len].x=head.x;
body[len].y=head.y+i;
len++;
}
Bend();
if(!kbhit())
head.y--;
}
Answer: Things that could be improved:
Portability:
Every time you add an #include to the top of your C file, you potentially create a dependency. For example: #include <windows.h> creates a dependency that the program can only be compiled on a Windows system.
#include <stdio.h>
#include <time.h>
#include <stdlib.h>
#include <conio.h>
#include <time.h>
#include <ctype.h>
#include <time.h>
#include <windows.h>
#include <process.h>
#include <unistd.h>
You should always try to create a program so that it is as portable as possible, and can be played on a variety of systems. Right now, your game can only be played on a few select systems.
Conventions/Standards:
You don't follow consistent C naming conventions for function names.
void Delay(long double);
void Move();
void Food();
int Score();
void Print();
void gotoxy(int x, int y);
void GotoXY(int x,int y);
void Bend();
void Boarder();
void Down();
void Left();
void Up();
void Right();
void ExitGame();
int Scoreonly();
Either use camelCase or snake_case consistently for function names.
You should have clearly distinct function names, not names that differ only in capitalization.
void gotoxy(int x, int y);
void GotoXY(int x,int y);
Be more expressive with your function naming.
You don't typedef a struct in the standard way, nor do you use proper naming conventions for typedef'd structs.
struct coordinate{
int x;
int y;
int direction;
};
typedef struct coordinate coordinate;
You can combine these two together for the proper definition of a typedef struct. Also, you should always capitalize the first letter of the typedef struct name.
typedef struct
{
int x;
int y;
int direction;
} Coordinate;
Don't use a for loop in the place of sleep().
for(i=0;i<=(10000000);i++);
There are many problems with busy waiting instead of using sleep(). See this question for more information.
If a function doesn't take any parameters, you should declare it as taking void.
int main(void)
Define i inside your for loop (C99):
for(int i = 4; i < length; i++)
Styling:
You have way too much space in some areas of your program.
int main()
{
char key;
Print();
system("cls");
load();
length=5;
head.x=25;
head.y=20;
head.direction=RIGHT;
Boarder();
Food(); //to generate food coordinates initially
life=3; //number of extra lives
bend[0]=head;
Move(); //initialing initial bend coordinate
return 0;
}
I'm all for using whitespace, but there are limits to everything. Cut back on it a bit; right now the amount of whitespace you are using makes this program unreadable.
Syntax:
You have some #defines that are related to each other.
#define UP 72
#define DOWN 80
#define LEFT 75
#define RIGHT 77
These are all related to each other because they are all directions. Therefore, we can group them together in an enum.
typedef enum
{
UP = 72,
DOWN = 80,
LEFT = 75,
RIGHT = 77
} Direction;
Use puts() instead of printf() when you are not formatting strings. | {
"domain": "codereview.stackexchange",
"id": 6318,
"tags": "performance, c, game, snake-game"
} |
Can it be shorter?: DNA Sequence Analyzer from CS50 written in Python | Question: This is my first time requesting a code review. This is code I wrote in Python to create a DNA sequence analyzer. This is for the Harvard CS50 course.
I was really hoping to get some constructive criticism on this code I finished for the DNA problem in Problem Set 6. This code passed all tests and is fully functional as far as I am aware.
I was mainly wondering if there were any more succinct ways to rewrite parts of it. I spent several hours writing this and my brain is pretty exhausted, so near the end I just went for whatever worked.
Any input is appreciated! (Also, sorry if this is an excessive amount of comments, but they help me keep everything clear in my head.)
import re
import csv
from sys import argv
# Checks for correct number of command-line arguments
if not len(argv) == 3:
print("Incorrect number of command-line arguments.")
exit(1)
# Declares a dictionary containing STR counts
str_counts = {
"AGATC": "0",
"TTTTTTCT": "0",
"AATG": "0",
"TCTAG": "0",
"GATA": "0",
"TATC": "0",
"GAAA": "0",
"TCTG": "0"
}
# Opens the csv and txt files to read (and closes them later)
with open(argv[1], "r", newline="") as csv_file, open(argv[2], "r") as sequence:
# Reads the database into a dict and the sequence into a string
db = csv.DictReader(csv_file)
sq = sequence.read()
# Stores the fieldname of the dictionary and stores the keys in STRS, skipping first line
keys = db.fieldnames
key_len = len(keys)
# Skips first row of headers
STRs = keys[1:key_len]
# Bool to signal if match was found
matched = False
# Checks for STR matches and length of those matches, then stores those values in str_counts
for STR in STRs:
# Executes following code only if there are 1 or more matches
if re.search(rf"(?:{STR}+)", sq):
# Creates a list of matches
matches = re.findall(rf"(?:{STR})+", sq)
# Finds the longest match
longest = max(matches, key = len)
# Stores that value in corresponding dict key
str_counts[STR] = len(longest) / len(STR)
# Compares str_counts values to database to find match
# Compares the match count values to the database values
for row in db:
# Declares a counter to later check if all STR values are matched
# Resets counter to zero every iteration
match_count = 0
# Declares a temporary dictionary with only int match values to compare STR counts to
compare = {}
# Stores the names field for use later
match_name = row['name']
# Deletes the names field so we only have ints in our compare dict
del row['name']
# Converts values to integers for easy comparison
for key, value in row.items():
compare[key] = int(value)
# Increments match count
for STR in STRs:
if compare[STR] == str_counts[STR]:
match_count += 1
# If all fields are matched, print match name, switch bool to true
if match_count == len(STRs):
matched = True
print(match_name)
if not matched:
print("No Match")
Answer: I would group the code in functions and call them from a central main. With how you have it now, simply loading the file into an interactive console will cause all the code to run, which, if the code is long running or causes side-effects, may be less than ideal. Say I wrote a port-scanner, and it takes 2-3 minutes to run. I may not want it to start simply because I loaded the file into a REPL in Pycharm. Not only would that waste a few minutes each time, port-scanning can get you in trouble if done on a network that you don't have permission to scan.
Code wrapped in functions is simply easier to control the execution of, because it will only run when the function is called.
Grouping code into functions also allows you to easily test each chunk in isolation from the rest of the code. It's difficult to test code when you're dependent on the first half of the code to supply test data to the latter-half. It's much easier however to simply load the script into a REPL, and throw test data at the function (or use proper unit-tests). Then you can test one piece of code in isolation without touching the rest of the code.
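As a minimal sketch of that structure (the function names and the run-counting helper here are my own illustration, not taken from the reviewed script):

```python
import sys


def longest_run(sequence, str_unit):
    """Length, in repeats, of the longest consecutive run of str_unit."""
    best = 0
    for start in range(len(sequence)):
        run = 0
        while sequence.startswith(str_unit, start + run * len(str_unit)):
            run += 1
        best = max(best, run)
    return best


def main(argv):
    if len(argv) != 3:
        print("Incorrect number of command-line arguments.")
        return 1
    # ... load the CSV and the sequence, compute longest runs per STR,
    # compare against each database row, and print the matching name ...
    return 0


if __name__ == "__main__":
    sys.exit(main(sys.argv))
```

Now longest_run() can be exercised from a REPL or a unit test without touching sys.argv or the filesystem.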
I personally believe that STRs and STR should be lower-case. According to PEP8, Python's style and convention guide, regular variables should be lower case, with words separated by underscores. Variables that are considered to be constants however should be upper-case, separated by underscores.
Local variables that remain unchanged have never clicked as "constants" in my head, so it depends on what you decide.
Regardless though, STRs should either be all upper-case, or all lower-case; in which case the STR counterpart will need to be renamed so it doesn't clash with the str built-in. A name like str_ could be used, or you could use a more descriptive name.
compare = {}
. . .
for key, value in row.items():
compare[key] = int(value)
This can make use of a dictionary comprehension. Whenever you have the pattern of creating a set/list/dictionary before a loop, then populating the structure within the loop, you should consider a comprehension:
compare = {key: int(value) for key, value in row.items()}
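A quick self-contained check of that pattern, with a made-up row (as a side note, dict.pop() reads and removes the key in one step, replacing the separate match_name = row['name'] / del row['name'] pair):

```python
row = {"name": "alice", "AGATC": "4", "AATG": "1"}
match_name = row.pop("name")  # pop() both reads and removes the name field
compare = {key: int(value) for key, value in row.items()}
print(match_name, compare)
```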
Similarly, to ensure that a condition holds for an entire collection (or two in this case), you can combine all with a generator expression (the part passed to all):
if all(compare[STR] == str_counts[STR] for STR in STRs):
matched = True
print(match_name)
"If all corresponding values in the two dictionaries match, set the matched flag". It reads quite nicely.
That gets rid of the need for match_count, and the second last loop.
You're looping more than you need to. You appear to care only about whether at least one match is found, yet you keep looping even after one is.
I'd break once one is found. If you combine that with for's else clause, you can prevent unnecessary looping and still detect failure:
for row in db:
. . .
if all(compare[STR] == str_counts[STR] for STR in STRs):
print(match_name)
break
else:
print("No Match") | {
"domain": "codereview.stackexchange",
"id": 39096,
"tags": "python, python-3.x, regex, formatting, hash-map"
} |
Getting started with motion planning | Question: I have recently finished the coursera Motion Planning course and I am looking for a project to do, using ROS, OpenRave, Gazebo and similar tools. My project would be in the area of motion planning for a mobile robot. The objective is to take a turtlebot from point A to point B in an environment with dynamic obstacles.
I am looking for suggestions regarding the software architecture and how to connect the above mentioned tools together.
Answer: If you're interested in doing generic motion planning the best place to start is with the MoveIt Project
If you specifically want to do ground navigation, the ROS navigation stack has been a standard starting point. However if you're getting started now I'd recommend getting involved with nav2 which is an iteration on the design which is being developed for ROS 2. | {
"domain": "robotics.stackexchange",
"id": 2420,
"tags": "ros, motion-planning, gazebo"
} |
How to visually tell the difference between a planetary nebula and a supernova remnant? | Question: If we see a nebula by looking through a powerful telescope, how can we tell whether we are looking at the remnant of a supernova or at a planetary nebula?
Thanks
Answer: A Supernova Remnant contains a Black Hole or a Neutron Star while a Planetary Nebula contains a White Dwarf.
Also, Supernovae Remnants are likely to have great velocities, so Doppler is another plausible choice. - Not through telescope though. | {
"domain": "astronomy.stackexchange",
"id": 2602,
"tags": "telescope, observational-astronomy, supernova, nebula"
} |
Getting point cloud from turtlebot? | Question:
Hello, I am using the turtlebot simulator in Gazebo and I want to get its point cloud data and send it in real time to a mapping program (gmapping, maybe?). Where can I find the point cloud data?
Originally posted by Morefoam on ROS Answers with karma: 27 on 2016-07-07
Post score: 0
Answer:
Run rostopic list and see whether a /scan topic shows up. If it does, that's the topic on which the point cloud data is published.
Originally posted by janindu with karma: 849 on 2016-07-07
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 25184,
"tags": "ros, gazebo, navigation, turtlebot, gmapping"
} |
usb_cam on ubuntu 11.10 | Question:
I want to use usb_cam on ubuntu 11.10 and ROS electric.
When I use rosmake usb_cam --rosdep-install, it says:
Failed to find rosdep libswscale for package usb_cam on OS:ubuntu version:oneiric
However, when I check, libswscale, libswscale-dev and libswscale2 are all installed on my system.
Any ideas on how to solve this error?
Originally posted by Parisa on ROS Answers with karma: 23 on 2012-02-23
Post score: 0
Answer:
You can use the gscam package instead of usb_cam.
Originally posted by abrsefid with karma: 41 on 2012-03-22
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 8363,
"tags": "ros, usb-cam, camera-drivers, camera"
} |
Natural frequency mass-spring system on inclined plane | Question:
I have to find $\omega$ for this system using the forces. I have a disc with radius $R$ and mass $M$.
By using $F=ma$, I get $mg\sin\theta - kx = m\ddot{x}$
then
$$-g\sin\theta + \frac{kx}{m} + \ddot{x} = 0$$
$\omega = \sqrt{\frac{k}{m}}$
But the correct answer is $\omega = \sqrt{\frac{2k}{3m}}$
I don't see where my errors are.
Answer: There's rolling going on here.
Assuming no slipping, the only torque about the disc's center comes from friction, so $I\alpha = fR$, where $f$ is the friction force. Since $a=\alpha R$, you then have $$C mR^2 \dfrac{a}{R} = fR\implies Cma=f$$
where $I=CmR^2$.
Then by Newton's second Law, $$ma=mg\sin\theta-kx-f\implies ma=mg\sin\theta-kx-Cma$$
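Completing the derivation from here, assuming the rolling body is a uniform disc so that $C = \tfrac{1}{2}$ (since $I = \tfrac{1}{2}MR^2$):

```latex
% Substitute f = Cma with C = 1/2 into Newton's second law:
ma = mg\sin\theta - kx - \tfrac{1}{2}ma
\quad\Longrightarrow\quad
\tfrac{3}{2}m\ddot{x} + kx = mg\sin\theta
% Shift to the equilibrium position x_0 = mg\sin\theta / k
% (substitute u = x - x_0) to remove the constant term:
\tfrac{3}{2}m\ddot{u} + ku = 0
\quad\Longrightarrow\quad
\omega = \sqrt{\frac{2k}{3m}}
```

The constant gravity term only shifts the equilibrium point; it does not affect the frequency, which is why the missing factor relative to $\sqrt{k/m}$ comes entirely from the rolling inertia.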
Then use the chain rule $a=\dfrac {dv}{dt}=\dfrac{dv}{dx}\dfrac{dx}{dt}=\dfrac{dv}{dx}v$, get the equation of motion for $x(t)$ and identify your $\omega$. | {
"domain": "physics.stackexchange",
"id": 76642,
"tags": "homework-and-exercises, newtonian-mechanics, frequency, spring, oscillators"
} |
Multi-language DFA minimisation | Question: I'm interested in a slight generalisation of DFA. As usual we have state-set $Q$, finite alphabet $\Sigma$, a $\Sigma^*$-action defined on $Q$ by $\delta : Q\times\Sigma\rightarrow Q$, and initial state $q_0$; but instead of the usual terminal set, we take a family $(T_i)_{i\in 1..n}$ of subsets of $Q$. A Multi-language DFA $M$ is then the tuple
$(Q, \Sigma, \delta, q_0, (T_i))$
and $L \subseteq \Sigma^*$ is recognised by $M$ iff $L = \{s\in\Sigma^*|q_0s\in T_i\}$ for some $i\in 1..n$ . Define $(L_i(M))_{i\in 1..n}$ to be the family of languages recognised by M, if you like.
Okay, now for my question: given a family of regular languages $(L_i)_{i\in 1..n}$ , I want to find the minimal Multi-language DFA $M$ as described above such that $L_i = L_i(M)$ for all $i\in 1..n$ , that is, such that $|Q|$ is minimised over all such machines. My question is, are there any known efficient ways of doing this, perhaps analogous to standard DFA minimisation theory? Conversely, is there any evidence that this problem might be hard?
Answer: Short answer. Given a finite family of regular languages $\mathcal{L} = (L_i)_{1 \leqslant i \leqslant n}$, there is a unique minimal deterministic complete multi-automaton recognizing this family.
Details. The case $n = 1$ corresponds to the standard construction and the general case is not much different in spirit. Given a language $L$ and a word $u$, let $u^{-1}L = \{ v \in A^* \mid uv \in L \}$. Define an equivalence relation $\sim$ on $A^*$ by setting
$$
u \sim v \iff \text{for each }L \in \mathcal{L},\ u^{-1}L = v^{-1}L
$$
Since the $L_i$ are regular, this congruence has finite index. Further, it is easy to see that each $L_i$ is saturated by $\sim$ and that for each $a \in A$, $u \sim v$ implies $ua \sim va$. Let us denote by $1$ the empty word and by $[u]$ the $\sim$-class of a word $u$. Let $\mathcal{A}_\mathcal{L} = (Q, [1], \cdot, (F_i)_{1 \leqslant i \leqslant n})$ be the deterministic multi-automaton defined as follows:
$Q = \{ [u] \mid u \in A^*\}$,
$[u] \cdot a = [ua]$,
$F_i = \{ [u] \mid u \in L_i\}$.
By construction, $[1] \cdot u \in F_i$ if and only if $u \in L_i$ and hence $\mathcal{A}_\mathcal{L}$ accepts the family $\mathcal{L}$. It remains to prove that $\mathcal{A}_\mathcal{L}$ is minimal. It is actually minimal in a strong algebraic sense (which implies that it has the minimal number of states). Let $\mathcal{A} = (Q, q_-, \cdot, (F_i)_{1 \leqslant i \leqslant n})$ and $\mathcal{A}' = (Q', q'_-, \cdot, (F'_i)_{1 \leqslant i \leqslant n})$ be two multi-automata. A morphism $f: \mathcal{A} \to \mathcal{A}'$ is a surjective map from $Q$ onto $Q'$ such that
$f(q_-) = q'_-$,
for $1 \leqslant i \leqslant n$, $f^{-1}(F'_i) = F_i$,
for all $u \in A^*$ and $q \in Q$, $f(q \cdot u) = f(q) \cdot u$.
Then for any accessible deterministic multi-automaton $\mathcal{A}$ accepting $\mathcal{L}$, there is a morphism from $\mathcal{A}$ onto $\mathcal{A}_\mathcal{L}$. To prove this, one first verifies that if $q_- \cdot u_1 = q_- \cdot u_2 = q$, then $u_1 \sim u_2$. Now $f$ is defined by $f(q) = [u]$ where $u$ is any word such that $q_- \cdot u = q$. Then one can show that $f$ satisfies the three required properties.
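To make the construction concrete, here is a small Python sketch (my own illustration, not part of the original answer) that minimizes a complete deterministic multi-automaton by Moore-style partition refinement, rather than Hopcroft's faster algorithm: the initial partition groups states by their acceptance vector across all the $T_i$, and blocks are then split until every block maps into a single block under each letter.

```python
def minimize_multi_dfa(states, alphabet, delta, accepting_families):
    """Blocks of the coarsest congruence refining every T_i.

    states: iterable of states; alphabet: iterable of letters;
    delta: dict mapping (state, letter) -> state (complete DFA);
    accepting_families: list of sets of states, one set per T_i.
    """
    # Initial partition: group states by their acceptance vector across all T_i.
    groups = {}
    for q in states:
        vec = tuple(q in F for F in accepting_families)
        groups.setdefault(vec, set()).add(q)
    partition = list(groups.values())

    # Refine: split any block whose states move into different blocks.
    changed = True
    while changed:
        changed = False
        block_of = {q: i for i, block in enumerate(partition) for q in block}
        refined = []
        for block in partition:
            by_signature = {}
            for q in block:
                sig = tuple(block_of[delta[(q, a)]] for a in alphabet)
                by_signature.setdefault(sig, set()).add(q)
            if len(by_signature) > 1:
                changed = True
            refined.extend(by_signature.values())
        partition = refined
    return partition
```

Each resulting block is one state of the minimal multi-automaton; note how adding a second accepting set can force previously merged states apart.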
The end is a bit sketchy, let me know if you need more details. | {
"domain": "cstheory.stackexchange",
"id": 2228,
"tags": "automata-theory, dfa"
} |
Would anything be different on Earth if the rotation of the Moon wasn't synchronous? | Question: The Moon is in synchronous rotation, meaning the same side of the Moon is always pointing towards the Earth. My intuition is that it wouldn't make any difference (other than the fact that the whole surface of the Moon would be visible). But maybe there is something I didn't think of.
What would be different on Earth if the rotation of the Moon wasn't synchronous?
Answer: It would be noticeable from an astronomy perspective in that, over time, we would be able to see the entire surface of the Moon.
However, it would have a negligible effect on our tides. The Moon is not a gravitationally symmetric body (see mascons), but our Earth tidal deviations are mostly due to the Moon's eccentric orbit and the positive or negative interference with tides due to the Sun.
On the Moon, significant tidal deformation would be taking place due to the Earth's gravity. Some of this energy would be released as heat, so we might also get to see volcanic activity on the Moon.
Over a longer time period, as the Moon became tidally locked, we would expect to see an evolution of its orbit, either inward or outward, depending on its spin direction.
"domain": "astronomy.stackexchange",
"id": 6091,
"tags": "the-moon"
} |
When converting DFA/NFA to regular expression, where the DFA/NFA accepted empty string, is it okay to not have empty string as kleene star? | Question: Let's say that we have a DFA, where the initial state was also the accept state. Meaning the DFA accepts the "empty string". Now, let's say that we convert the DFA to regular expression $R$, using Arden's Method ( my prefered method )
Now, the resultant expression looks something like $(R)^*$, which is obviously capable of representing empty string, implying that the language we are describing also accepts empty string. But is it legal to represent empty string as a "kleene star" or do i have to explicitly mention "empty string" + $R$
My understanding is that, it's okay to represent it as Kleene star
Answer: If the start state is also a final state, then the empty word must be in the interpretation of the regular expression you obtain.
That does not necessarily mean that the regular expression $E$ you obtain is of the form $E = F^*$; for example, the interpretation of $(a+b)^*aab(a+b)^*\mid (ba)^*$ contains the empty word (via the $(ba)^*$ branch).
However, if that's not the case, then that means you made a mistake somewhere, no matter the method.
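Whether a regular expression's language contains the empty word can be checked mechanically. A sketch with a hypothetical AST encoding (the standard "nullable" recursion; the encoding is mine, not from the answer), applied to the example above:

```python
# Hypothetical regex AST: ('sym', c), ('empty',), ('eps',), ('cat', l, r),
# ('alt', l, r), ('star', e).  nullable(e) tells whether eps is in L(e).

def nullable(e):
    tag = e[0]
    if tag == 'eps' or tag == 'star':   # star always contains the empty word
        return True
    if tag in ('sym', 'empty'):
        return False
    if tag == 'alt':                    # union: either side suffices
        return nullable(e[1]) or nullable(e[2])
    if tag == 'cat':                    # concatenation: both sides needed
        return nullable(e[1]) and nullable(e[2])

# (a+b)*aab(a+b)* | (ba)*  -- eps comes from the (ba)* branch
sym = lambda c: ('sym', c)
ab_star = ('star', ('alt', sym('a'), sym('b')))
left = ('cat', ab_star,
        ('cat', sym('a'), ('cat', sym('a'), ('cat', sym('b'), ab_star))))
right = ('star', ('cat', sym('b'), sym('a')))
print(nullable(('alt', left, right)))  # True
```

Note that `nullable(('star', ('empty',)))` is also `True`, matching the remark below that $\emptyset^*$ contains only the empty word.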
I am not so sure what you mean by 'is it legal to represent empty string as a "kleene star"': if the interpretation of $F$ is not the empty language, then $F^*$ does not represent only the empty word. However, $\emptyset^*$ contains only the empty word. | {
"domain": "cs.stackexchange",
"id": 20827,
"tags": "formal-languages, regular-expressions"
} |
How can you tell if a protein-coding gene is nuclear or mitochondrial? | Question: New to genes, and have to read literature to find candidate genes for a particular study. I cannot for the life of me understand if all genes are placed into either the "nuclear" or "mitochondrial" category...are there more categories? Some are easy, like cytochrome subunit genes are always written as "mitochondrial" and then rhodopsin is written as nuclear...but other genes like ATPase etc. don't have those descriptions. When I search online further, it still does not clarify, so I am wondering now if there are just more categories to this?
Answer: I have limited my answer to refer to humans, but the advice generalises to other eukaryotic cells.
In human cells, almost all of the genes that code for proteins are located in the nuclear genome, which is located in the nucleus. The mitochondria have their own genetic material; however, the mitochondrial genes encode only 13 proteins. The answer to this question also notes the existence of extrachromosomal circular DNA. There are also a number of proteins (1192) that are verified to exist but whose genomic location is currently unconfirmed.
In your comments you mention ATP1 and note that it is called a membrane protein. The gene is located on chromosome 19 in the nucleus. The protein that is encoded by the gene can be found in a variety of places, including the nucleus and also the mitochondrion. The gene for Cytochrome c oxidase subunit 1 (CO1/COX1) is encoded in the mitochondrial genome and it localises to the inner membrane of the mitochondrion.
The links in this answer all point to Uniprot. This is what I use to get an idea of the basic functions of proteins. The database includes information about genes, but its primary purpose is to describe proteins and their functions. The information is roughly correct, though not perfect. It covers many species and links to a number of tools, including the gene ontology, which attempts to characterise proteins using a controlled vocabulary.
"domain": "biology.stackexchange",
"id": 6305,
"tags": "genetics"
} |
Deciding CL-IS on graph efficiently | Question: Given an arbitrary graph $G$, could there be a polynomial time algorithm to tell if it has a larger size clique $(\omega(G))$ or larger independence number$(\alpha(G))$?
Answer: Take an arbitrary instance of the independent set problem: given an $n$-vertex graph $H=(V,E)$ and an integer $k\ge1$, does $H$ contain an independent set on $k$ vertices?
Construct a new graph $G$ that is the disjoint union of graph $H$, a clique $C$ on $n+k$ vertices and an independent set $I$ on $n$ vertices.
Note that $\omega(G)=n+k$.
If $H$ contains an independent set on $k$ vertices, then $\alpha(G)\ge n+k+1$.
You may use the $k$-element independent set in $H$, the $n$ vertices in $I$, and one vertex from $C$.
If $H$ does not contain an independent set on $k$ vertices, then $\alpha(G)\le n+k$.
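A tiny executable sketch of the gadget (mine, for illustration; the brute-force routines are only for checking the claim on toy instances):

```python
from itertools import combinations

def gadget(H_edges, n, k):
    """Disjoint union of H (vertices 0..n-1), a clique on n+k vertices,
    and an independent set on n vertices, with tagged vertex names."""
    V = [('H', v) for v in range(n)]
    V += [('C', i) for i in range(n + k)]              # clique block
    V += [('I', i) for i in range(n)]                  # independent block
    E = {frozenset((('H', u), ('H', v))) for u, v in H_edges}
    E |= {frozenset((('C', i), ('C', j)))
          for i in range(n + k) for j in range(i + 1, n + k)}
    return V, E

def alpha(V, E):  # brute-force independence number (tiny graphs only)
    for size in range(len(V), 0, -1):
        for S in combinations(V, size):
            if all(frozenset(p) not in E for p in combinations(S, 2)):
                return size

def omega(V, E):  # brute-force clique number (tiny graphs only)
    for size in range(len(V), 0, -1):
        for S in combinations(V, size):
            if all(frozenset(p) in E for p in combinations(S, 2)):
                return size

triangle = [(0, 1), (1, 2), (0, 2)]
V, E = gadget(triangle, n=3, k=2)   # a triangle has no 2-independent set
print(omega(V, E), alpha(V, E))     # 5 5  -> alpha <= n+k, answer "no"
V, E = gadget(triangle, n=3, k=1)   # but it has a 1-independent set
print(omega(V, E), alpha(V, E))     # 4 5  -> alpha >= n+k+1, answer "yes"
```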
Hence by comparing $\omega(G)$ and $\alpha(G)$, you can answer arbitrary instances of the NP-hard independent set problem. | {
"domain": "cstheory.stackexchange",
"id": 3128,
"tags": "cc.complexity-theory, ds.algorithms"
} |
Is thermodynamic work always defined, even for irreversible processes? | Question: Suppose we have a thermodynamic system, and we are capable of doing pressure-volume $(PV)$ work on the system. For a infinitesimal reversible process (where the only type of work done is pressure-volume work), the incremental amount of work done on the system is $\delta W = -P dV$. For an irreversible process of the same type, done slowly enough so that variables like pressure and volume are still easily measurable, the work is now $\delta W = -P_{ext} dV \leq -P dV$, where $P_{ext}$ is the external pressure exerted by the system's environment. Let me know if I'm mistaken about any of this.
In cases like these, and in many other situations, the amount of work done can be easily calculated, as long as pressure and volume measurements have been made on the system during each process. However, I can imagine much more complicated scenarios where it's less obvious how work would be calculated. For example, suppose we have a violent expansion of a gas and a corresponding shrinking of the gas's environment, which is so rapid that the external pressure is no longer uniform across the gas-environment boundary, and where the densities of gas and environment vary so much that volume is hard to measure or even define. (Ok, in this process, maybe we could argue that it occurs so rapidly that no heat exchange occurs, and so the work is just $W= \Delta E$, the change in energy of the system. But suppose I could come up with a better example, where heat flow is possible but quantities like $P$ and $V$ are still ill-defined in this way.)
In such a situation, is thermodynamic work still defined? Even if we can't calculate it directly with a formula like $\delta W = -P_{ext} dV$, I'm wondering if there's a way to get at it indirectly. Or are there certain processes which just don't have a definite value of work, from an operational point of view?
Answer: The amount of work done on the system is always the integral of $-P_{ext} dV$ (where $P_{ext}$ represents the force per unit area applied to the gas at the piston face), irrespective of whether the process is reversible or irreversible. But, if the process is reversible, then the gas pressure and temperature are uniform spatially within the cylinder, and thus $P_{ext}=P$. Under these circumstances, one can use the ideal gas law (or other equation of state) to calculate the work.
If the process is irreversible (involving, say, a very rapid deformation), the pressure and temperature within the cylinder are not typically uniform spatially, so the equation of state cannot be applied globally. In addition, there are viscous stresses present within the gas that contribute to the force per unit area at the piston face. This too prevents using an equation of state to determine $P_{ext}$ and the work. So, using only thermodynamics, unless you can manually control $P_{ext}$ from the outside, you cannot determine the work.
However, it is still possible to get the work if you are able to apply the laws of fluid mechanics and a differential version of the first law locally within the cylinder. This involves solving a complicated set of partial differential equations to determine the temperature, pressure, stresses, and deformations as functions of time and position. Usually, such calculations would be accomplished using Computational Fluid Dynamics (CFD). The deformations inside the cylinder could be turbulent, and this would require CFD capabilities to approximate turbulent flow and heat transfer. So, for irreversible processes, predicting the behavior in advance can be much more complicated (but possible). | {
"domain": "physics.stackexchange",
"id": 38338,
"tags": "thermodynamics, work, reversibility"
} |
Bandwidth use of ros topic | Question:
Hi,
I'm going to run my ros system on multiple computers, connected only by a wifi link of (possibly) bad quality. However I want to use data from multiple usb web cams.
I'm wondering how ros distributes the various message on the network and if it makes a difference where roscore is located.
Example Scenario:
Machine 1:
Node 1:
outgoing topics:
camera1/image_raw
camera1/image_compressed
Machine 2:
Node 2:
outgoing topics:
camera2/image_raw
camera2/image_compressed
Machine 3:
Node 3.1:
incoming topics:
camera1/image_raw
camera2/image_raw
outgoing topics:
control/steering
control/debug
Node 3.2:
incoming topics:
camera1/image_raw
control/steering
Which messages are actually sent?
Do the camera*/image_compressed topics take up bandwidth, even though they are never subscribed to?
Is camera1/image_raw sent twice over the network?
Is camera2/image_raw also sent to machine 1?
Does it matter whether roscore is started on machine 1 or machine 3?
Originally posted by NsN on ROS Answers with karma: 95 on 2012-03-27
Post score: 0
Answer:
Each message is sent pairwise over a TCP connection, directly from the publisher to each subscriber.
Only the messages that are actually subscribed to.
No.
Yes. I believe there are ways to avoid that.
No.
Mostly not.
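Regarding the third question (camera1/image_raw being sent to two subscribers on machine 3): one way to avoid the stream crossing the wifi link twice is a topic_tools relay running on the subscribing machine, so the raw image is transmitted once and republished locally. A hedged sketch of a launch file for machine 3 (topic and node names here are illustrative):

```xml
<!-- Runs on machine 3: the relay subscribes to /camera1/image_raw once and
     republishes it locally; nodes 3.1 and 3.2 then subscribe to the local
     copy instead of the remote topic. -->
<launch>
  <node pkg="topic_tools" type="relay" name="cam1_relay"
        args="/camera1/image_raw /camera1/image_raw_local" />
</launch>
```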
Originally posted by joq with karma: 25443 on 2012-03-27
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Stephan on 2012-03-27:
Re 3): You can use http://www.ros.org/wiki/topic_tools/relay on machine 3 to have a local node that republishes the incoming images, so that other local nodes can subscribe to that relay instead of the topic published by machine 1. | {
"domain": "robotics.stackexchange",
"id": 8759,
"tags": "ros, network, topic, bandwidth, topics"
} |
What is the physical meaning of the large $N$ expansion? | Question: I know about the $1/N$ expansion for some time. Apart from the fact that as Witten suggests, it can be the correct expansion parameter of QCD Baryons in the $1/N$ Expansion (in a parallel that he draws between QED and QCD coupling constants), I was thinking of the meaning of the topological expansion in terms of the Riemann surfaces.
I don't know what is the meaning of the embedding of the 't Hooft's double-line diagrams on Riemann surfaces and what the Riemann surface really represent.
Moreover, it seems by taking such a limit, one is dealing with "so many fields".
What is the physical advantage of such a limit? It reminds me, though perhaps quite irrelevantly of a thermodynamical limit where one deals with a huge number of degrees of freedom, although each component field, entails an infinite number of harmonic oscillators, this might be a second infinite limit.
Or are the physical intentions behind such limit, relating the two-dimensional QFT which is well-studied, to four-dimensional QFT, and that's the meaning of the embedding on Riemann surfaces?
If so, why a certain limit of something related to the internal symmetry group of the interaction, namely the number of color degrees of freedom, causes this dimensional reduction of spacetime?
Does this suggest that in the large $N$ limit, two-dimensional topological perturbation theory is equivalent to fixed topology 4d non-topological perturbation theory?
EDIT: There's also a noticeable mathematical fact: the identities used in the large $N$ limit resemble those of $U(N)$.
But $U(N)$ is not simply connected; in fact, $\pi_{1}(SU(N))=0$ while $\pi_{1}(U(N))=\mathbb{Z}$, which means there are $U(1)=U(N)/SU(N)$ instanton solutions in the two-dimensional case, but the $SU(N)$ gauge theory has $SU(N)/[SU(N-2)\times U(1)]$ instanton solutions in $\mathbb{R}^4$ for any $N \ge 2$.
Isn't this another hint for the large $N$ limit as a relation between 2d and 4d QFT regarding the topological solutions?
Having in mind the Massive Thirring model equivalence with the Sine-Gordon in two dimensions where topological solutions of one theory are related to non-topological solutions of the other theory, does this mean that the 2D topological perturbation theory relates with the perturbative non-topological expansion in 4D?
I was curious if one can find a link between the quantum corrections to the topological solutions in 4D and those of 2D. The problem is that each Riemann surface demands a different topological solution.
In other words, isn't also the 4D quantum corrections to the topological solutions related to the first term of the topological expansion in 2D (the topologically trivial term, i.e. non-topological) in the large $N$ limit?
Answer:
About the thermodynamic limit:
The following paper: Large N as a thermodynamic limit shows that the Large $N$ limit is correctly interpreted as the thermodynamic limit and not the classical limit that Witten (THE 1 / N EXPANSION IN ATOMIC AND PARTICLE PHYSICS) and Coleman (1/N) advocated. | {
"domain": "physics.stackexchange",
"id": 90531,
"tags": "quantum-field-theory, quantum-chromodynamics, yang-mills, large-n"
} |
How to optimize service calls | Question:
I'm trying to optimize my code so I was measuring how long different parts of ROS process take. Specifically
rospy.wait_for_service(rospy.get_param('~service_name'))
always seems to take little over 2 ms
service = rospy.ServiceProxy(rospy.get_param('~service_name'), MySrv)
always seems to take little under 1 ms
resp = service(MySrvRequest())
always seems to take little over 1 ms
This all causes the one service call to result in about 4 ms of delay and I want to get my program to 1 ms precision. What are the variables that affect how long those service calls take? Can I adjust a setting somewhere to make them faster?
Originally posted by kump on ROS Answers with karma: 308 on 2019-07-04
Post score: 0
Original comments
Comment by PeteBlackerThe3rd on 2019-07-04:
Service calls use the networking functions to communicate with different nodes, so their speed is affected by many factors. If the service server node was on a different computer, these numbers could be much higher! If you really need speed and consistency that much then you'll have to try and merge functionality into a single node (binary).
Answer:
A few things to note:
you're trying to achieve a 1kHz loop rate with a Python script: not impossible, but certainly not the first combination of runtime environment and performance requirements I'd choose. Have you considered the amount of jitter this is going to experience?
using parameters is always good (improves reusability of your code), but in this case for your service variable to be initialised you're incurring the overhead of communicating with the parameter server by using get_param(..) there. That's a full XML-RPC session setup, communication (ie: transmit and receive), (de)serialisation and teardown just to read a single parameter.
you don't appear to be using a persistent connection. That causes rospy to always do a full lookup and rebinding of the ServiceProxy upon invoking it. That is a lot of overhead as the network stack and multiple other nodes are involved (ie: service server and the master).
And from this:
This all causes the one service call to result in about 4 ms of delay and I want to get my program to 1 ms precision.
I get two impressions:
you're executing this piece of code in an inner / performance critical loop
you're using services for something they are not really suited for
If my first impression is correct: don't do that. If you must, only invoke the already initialised (ie: bound) service call in your loop. The rest should be done in the initialisation phase of your script.
For the second: I would personally not use services for something like this, but that is of course your choice and would depend on many things we do not know.
Finally: in #q328017 you mention the word "real-time". You're most likely already aware, but a standard Python interpreter is not a deterministic runtime environment. rospy is incapable of deterministic execution, and the TCP/IP based communication system used is also non-deterministic. If "real-time" was meant to say "fast enough", then it's probably possible, but it won't be deterministic.
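To make the restructuring concrete, below is a minimal, self-contained Python sketch of the initialise-once pattern (rospy is not imported here; `StubServiceProxy` is a stand-in I made up for `rospy.ServiceProxy(name, Srv, persistent=True)`, and the counter models the master lookup and TCP handshake that a non-persistent proxy would repeat on every call):

```python
# Sketch: pay the parameter lookup and proxy binding cost ONCE, outside
# the hot loop.  StubServiceProxy is a hypothetical stand-in for
# rospy.ServiceProxy(name, Srv, persistent=True).

class StubServiceProxy:
    def __init__(self, name):
        self.name = name
        self.lookups = 0          # counts expensive master lookups

    def _bind(self):
        self.lookups += 1         # XML-RPC lookup + TCP connect in real rospy

    def __call__(self, req):
        if self.lookups == 0:     # persistent: bind once, then reuse socket
            self._bind()
        return ('ok', req)

# --- init phase (once): read the parameter, build the proxy ---
service_name = '/my_service'      # rospy.get_param('~service_name') in ROS
service = StubServiceProxy(service_name)

# --- hot loop: only the already-bound call remains ---
for i in range(1000):
    resp = service(i)

print(service.lookups)  # 1 lookup for 1000 calls
```

In real code the shape is the same: call `get_param` and construct the `ServiceProxy` in the node's initialisation phase, and keep only `service(request)` inside the loop.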
Originally posted by gvdhoorn with karma: 86574 on 2019-07-09
This answer was ACCEPTED on the original site
Post score: 6
Original comments
Comment by Simon Schmeisser on 2022-12-16:
For completeness: http://wiki.ros.org/rospy/Overview/Services#persistent_connections shows how to enable persistent connections in rospy while your impressive answer links to roscpp | {
"domain": "robotics.stackexchange",
"id": 33335,
"tags": "ros, optimization, ros-kinetic, rosservices"
} |
Why can't a permanent magnet's magnetic field be converted into other types of energy? | Question: Can a solid magnet excite any type of atom's electrons to release photons?
Why can't magnets be arranged to be its own generator? is there an equation for reason solid magnets can't self perpetuate?
Answer:
Why can't a permanent magnet's magnetic field be converted into other types of energy?
A permanent magnet's magnetic field can be changed to other forms of energy. If you heat the magnet enough it will lose its magnetic field and the energy stored in the magnetic field will turn into some more heat.
Can a solid magnet excite any type of atom's electrons to release photons?
A magnet can excite electrons, but it has to be in motion relative to the atoms. After all that is how an electricity generator works.
Why can't magnets be arranged to be its own generator?
Magnets are used to generate electricity.
is there an equation for this?
The complete set of equations about magnets and electricity are called Maxwell's equations. | {
"domain": "physics.stackexchange",
"id": 34611,
"tags": "electrostatics, electricity, magnetic-fields, electric-circuits, unit-conversion"
} |
Can CFGs generate all languages? Are they (PDAs) finite or infinite state automata? | Question: I was looking for the limitations of a CFG. I think there is some limitation given there are only finitely many states of a PDA (or non-terminals in a CFG).
I suspect that languages like $\text{L} = \{10,10100,101001000, \dots\}$ cannot be generated by a CFG. I cannot see that intuitively (heuristics will help me). I have seen PDAs described as finite-state automata in some places, and as infinite-state automata in others.
Can someone tell me the limitations of a CFG / PDA (if any) and whether or not $\text{L}$ can be generated by a PDA/CFG? Additionally, are PDAs infinite state automata?
Answer: Since each context-free language can be described by a grammar, there are only countably many context-free languages (over a fixed alphabet). Therefore, "most" languages are not context-free. Examples of particular languages which are not context-free abound. Your language $\{ 1010^210^3\cdots 10^n : n \geq 1 \}$ is one such example. It can be proved to be non-context-free using the pumping lemma, for example.
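For experimenting with such pumping arguments, the words of the language in question are easy to generate (a throwaway sketch; the function name is mine):

```python
# The n-th word of L = { 1 0 1 0^2 1 0^3 ... 1 0^n : n >= 1 } above:
# concatenate the blocks 10, 100, 1000, ... up to 1 0^n.
def word(n):
    return "".join("1" + "0" * k for k in range(1, n + 1))

print(word(1), word(2), word(3))  # 10 10100 101001000
```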
Push-down automata have finitely many states, but an infinite stack. Although the stack is infinite, it can only be accessed in very specific ways, and this severely limits the power of the automaton. If we replace the stack with a queue, or add a second stack, then the automaton becomes much more powerful – equivalent to a Turing machine. But even such automata can only accept countably many languages. In particular, they cannot accept uncomputable languages such as that of the halting problem, as shown by Turing. | {
"domain": "cs.stackexchange",
"id": 17137,
"tags": "automata, finite-automata, pushdown-automata"
} |
Non-coordinate basis in GR | Question: I am reading Nakahara's Geometry, topology and physics and I am a bit confused about the non-coordinate basis (section 7.8). Given a coordinate basis at a point on the manifold $\frac{\partial}{\partial x^\mu}$ we can pick a linear combination of this in order to obtain a new basis $\hat{e}_\alpha=e_\alpha^\mu\frac{\partial}{\partial x^\mu}$ such that the new basis is orthonormal with respect to the metric defined on the manifold i.e. $g(\hat{e}_\alpha,\hat{e}_\beta)=e_\alpha^\mu e_\beta^\nu g_{\mu\nu}=\delta_{\alpha \beta}$ (or $=\eta_{\alpha \beta}$). I am a bit confused about the way we perform this linear transformation of the basis. Is it done globally? This would mean that at each point we turned the metric into the a diagonal metric, which would imply that the manifold is flat, which shouldn't be possible as the change in coordinates shouldn't affect the geometry of the system (i.e. the curvature should stay the same). So does this mean that we perform the transformation such that the metric becomes the flat metric just at one point, while at the others will also change, but without becoming flat (and thus preserving the geometry of the manifold)? Then Nakahara introduces local frame rotations, which are rotations of these new basis at each point, which further confuses me to why would you do that, once you already obtained the flat metric at a given point. So what is the point of these new transformations as long as we perform the first kind in the first place? Sorry for the long post I am just confused.
Answer: This transformation is done locally, i.e. so that $g_{\alpha\beta} = \eta_{\alpha\beta}$ in a neighbourhood rather than a single point. I believe that due to topological effects, we cannot in general do it globally. However, even though the new basis is orthonormal, we have to remember that it is (in general) non-holonomic, i.e that there is no set of functions $y^\alpha$ satisfying
$$
e^\mu_\alpha = \frac{\partial x^\mu}{\partial y^\alpha}.
$$
This means that the curvature does not (in general) vanish. Indeed, it holds that
$$T_{\alpha\beta\ldots\gamma} = e^\mu_\alpha e^\nu_\beta e^\sigma_\gamma T_{\mu\nu\ldots\sigma},$$
for all tensors $T_{\mu\nu\ldots\sigma}$, as is obvious by the linearity of $e^\mu_\alpha$, and thus in particular for $R_{\mu\nu\sigma\tau}$. You may be confused if you have learned that the connection coefficients are given by
$$
\Gamma_{\mu\nu\sigma} = \frac{1}{2}\left(g_{\mu\nu,\sigma} + g_{\mu\sigma,\nu} -
g_{\nu\sigma,\mu}\right),
$$
but this is only valid in a holonomic frame. More generally
$$
\Gamma_{\alpha\beta\gamma} = \frac{1}{2}\left(g_{\alpha\beta|\gamma} + g_{\alpha\gamma|\beta} - g_{\beta\gamma|\alpha} + C_{\gamma\alpha\beta} + C_{\beta\alpha\gamma} - C_{\alpha\beta\gamma}\right),
$$
where $f_{|\alpha} \equiv e_\alpha(f)$, $[e_\alpha,e_\beta] = C^\gamma{}_{\alpha\beta}e_{\gamma}$, and $C_{\gamma\alpha\beta} \equiv g_{\gamma\delta}C^\delta{}_{\alpha\beta}$. In particular, in an orthonormal frame:
$$
\Gamma_{\alpha\beta\gamma} = \frac{1}{2}\left(C_{\gamma\alpha\beta} + C_{\beta\alpha\gamma} - C_{\alpha\beta\gamma}\right).
$$
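As a concrete illustration (my example, not from the original answer): on the unit $2$-sphere with metric $ds^2 = d\theta^2 + \sin^2\theta\, d\phi^2$, the frame
$$
e_{\hat 1} = \partial_\theta, \qquad e_{\hat 2} = \frac{1}{\sin\theta}\,\partial_\phi
$$
is orthonormal, $g(e_{\hat\alpha}, e_{\hat\beta}) = \delta_{\hat\alpha\hat\beta}$, but non-holonomic:
$$
[e_{\hat 1}, e_{\hat 2}] = \partial_\theta\!\left(\frac{1}{\sin\theta}\right)\partial_\phi = -\cot\theta\, e_{\hat 2},
$$
so $C^{\hat 2}{}_{\hat 1\hat 2} = -\cot\theta \neq 0$. The metric components in this frame are flat, yet the non-vanishing $C$'s feed into the connection coefficients above and the sphere's curvature survives.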
Unless I misunderstand you, the rotations you refer to are local Lorentz transformations, and we are interested in them because each orthonormal frame corresponds to the instantaneous rest frame of an observer (with velocity field equal to $e_0$), and thus these rotations transform between different rest frames. | {
"domain": "physics.stackexchange",
"id": 44842,
"tags": "general-relativity, differential-geometry, metric-tensor"
} |
Projectile motion of a basketball shot | Question: I'm working on a project which calculates the some statics of a basketball shot. I haven't done physics since high school so I wanted to see if I was on the right track or if I'm completely wrong. Note: this is not a problem for school or nothing like that.
Currently the information I've got to work with is as follows:
Height of the hoop
Distance from the hoop
Height of when the ball was released
Time in the air (can be calculated from when it left the players can till it goes in the hoop)
What I don't have (and trying to find):
Angle of release
Initial velocity
I was following pretty much whats in this video: https://www.youtube.com/watch?v=fNfkYWqB9w8
but since the basketball hoop is a higher elevation that means I have to find that, correct? Could i just use: $y−y_0=(v_yt)−(\frac{1}{2}gt^2)$ where $y =$ height of basketball hoop and $y_0 =$ height of where ball was released? (then solve for $v_y$)
If so I could just use the remaining formulas of $V_x = \Delta x / \Delta t$ and $a^2 + b^2 = c^2$ to find the angle like in the video.
I guess all I'm really asking is to make sure I'm doing this correctly.
Answer: $\def\th{\theta}
\def\ra{\rightarrow}$Suppose the ball is thrown from $(0,h)$ to $(d,H)$ under the influence of gravity in time $t$ and that the initial velocity is
${\bf v}_0 = (v_0\cos\th,v_0\sin\th)$.
(In what follows we assume $d>0$ so $-\pi/2<\th<\pi/2$.)
We have
\begin{align*}
d &= v_0 t\cos\th \\
H &= h + v_0 t\sin\th-\frac1 2 g t^2.
\end{align*}
This is a two-by-two nonlinear system of equations for $(v_0,\th)$.
We solve this system with a standard method.
The system is equivalent to
\begin{align*}
v_0 t\cos\th &= d \tag{1}\\
v_0 t\sin\th &= H-h+\frac1 2 g t^2.\tag{2}
\end{align*}
We square each side of (1) and (2), add, and use Pythagoras' theorem with the result
$$v_0^2 t^2 = d^2 + \left(
H-h+\frac1 2 g t^2
\right)^2.$$
Thus,
$$v_0 = \frac{\sqrt{d^2 + \left(
H-h+\frac1 2 g t^2
\right)^2}}{t}.$$
If instead we take the ratio of (2) to (1) and solve for $\th$ we find
$$\th = \arctan\frac{H-h+\frac1 2 g t^2}{d}.$$
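A quick numerical sanity check of these two formulas (illustrative numbers of my own choosing; $g = 9.81\ \mathrm{m/s^2}$): compute $(v_0,\theta)$ and plug them back into the projectile equations to recover $(d, H)$.

```python
import math

G = 9.81  # m/s^2

def launch_params(d, H, h, t, g=G):
    """Initial speed v0 and release angle theta for a throw from (0, h)
    reaching (d, H) after time t, from the formulas derived above."""
    rise = H - h + 0.5 * g * t * t
    v0 = math.sqrt(d * d + rise * rise) / t
    theta = math.atan2(rise, d)
    return v0, theta

# Example: release at 2.0 m, hoop at 3.05 m, 4.6 m away, 1.1 s of flight.
v0, theta = launch_params(d=4.6, H=3.05, h=2.0, t=1.1)

# Check: forward-simulate with these values and recover (d, H).
x = v0 * math.cos(theta) * 1.1
y = 2.0 + v0 * math.sin(theta) * 1.1 - 0.5 * G * 1.1 ** 2
print(round(x, 6), round(y, 6))  # 4.6 3.05
```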
One can check that these results are dimensionally correct and that they "act right" by taking various limits.
(A) For example, suppose $H=h$ and $t\ra 0$.
We find $(v_0,\th)\ra(d/t,0)$.
If the ball lands at the height from which it was thrown, for short time periods the ball's motion is uniform and horizontal.
(B) If $g\ra 0$ we find
$(v_0,\th)\ra(\sqrt{d^2+(H-h)^2}/t,\arctan((H-h)/d))$.
That is, if there is no gravitational force, the ball's motion is uniform from $(0,h)$ to $(d,H)$.
(C) If $d\ra 0$ (and $H-h+\frac 1 2 g t^2>0$) we find $(v_0,\th)\ra((H-h+\frac 1 2 g t^2)/t,\pi/2)$.
That is, the motion is in the vertical direction and $H=h+v_0 t-\frac 1 2 g t^2$.
This is just one-dimensional kinematics of constant accelerated motion. | {
"domain": "physics.stackexchange",
"id": 69230,
"tags": "kinematics, projectile"
} |
How to calculate a TQFT Gaussian path integral from Seiberg's "fun with free field theory"? | Question: In his talk "Fun with Free Field Theory", Seiberg discusses a topological quantum field theory in $d+1$ dimensions with the action
$$ S = \frac{n}{2\pi} \int \phi\, \mathrm{d} a \tag{1}$$
where $\phi$ is a periodic scalar ($\phi \sim \phi + 2\pi$), $a$ is a $d$-form gauge field quantised such that $\int_M a \in 2\pi\mathbb{Z}$ for any $d$-cycle $M$, and $n$ is an integer. He writes down the correlation function
$$ \left\langle \mathrm{e}^{\mathrm{i}\phi(p)} \mathrm{e}^{\mathrm{i}\oint_M a}\right\rangle = \mathrm{e}^{\frac{2\pi\mathrm{i}}{n} \ell}\tag{2} $$
where $p$ is a point, $M$ is a closed $d$-dimensional hypersurface, and $\ell$ is the linking number of $p$ and $M$. He says that since the theory is Gaussian (that is, free), it is straightforward to compute the partition function and get the above result by performing a Gaussian integral.
I don't understand how to do this. My main concern is that the path integral
$$Z = \int \mathscr{D}\phi \mathscr{D}a\,
\mathrm{e}^{-S}\tag{3}$$
doesn't look Gaussian to me. To me, the Gaussian integral is
$$ \int\mathrm{d}^n\Phi\, \mathrm{e}^{-\frac{1}{2} \Phi^T M \Phi} = \frac{1}{\sqrt{\det(M/2\pi)}} \tag{4}$$
where $M$ is symmetric and positive definite,
but if I try to define (in 1+1 dimensions for concreteness) $\Phi = (\phi, a_0, a_1)$, I get
$$ \begin{align} S &= \frac{n}{2\pi} \int \mathrm{d}^2 x\, \mathrm{d}^2 y\, \frac{1}{2} \phi(\partial_0 a_1 - \partial_1 a_0) \\
&= \int \mathrm{d}^2 x\, \mathrm{d}^2 y\, \frac{1}{2} \Phi^T \underbrace{\frac{n}{4\pi} \delta^{(2)}(x - y)
\begin{pmatrix} 0 & -\partial_1 & \partial_0 \\
\partial_1 & 0 & 0 \\
-\partial_0 & 0 & 0\end{pmatrix}}_{M(x,y)} \Phi
\end{align}\tag{5} $$
and it seems like this operator $M$ surely has determinant $0$. Therefore, the path integral doesn't make sense, and in particular, I can't compute correlators by the standard method of introducing source terms and completing the square, because this would require inverting $M$.
I can think of three problems with what I have said:
My $M$ doesn't look symmetric because I performed partial integrations $\phi \partial_\mu a_1 \to -a_1 \partial_\mu \phi$ (but it is Hermitian?)
I haven't performed any gauge fixing or regularisation of the path integral, and
since $\phi$ is periodic and $a$ is quantised, the ordinary way of doing Gaussian integrals may not work.
Is the path integral really Gaussian? How would you go about computing it? Would taking the above "problems" into account solve the issue?
Any help is greatly appreciated!
Related: How does this Gaussian integral over the auxiliary field in 2D topological gauge theory work?
Answer: Here is one way to derive OP's correlator (2):
One can think of the Wilson-lines/vertex operators in eq. (2) as part of an extended action
$$\begin{align}\tilde{S}~=~& S + \phi (p) + \int_M\! a\cr
~=~&\frac{n}{2\pi} \int\! \phi~ \mathrm{d} a+\int \! \phi (x)~\delta^{d+1}(x,p)~(\star 1)(x) + \int a \wedge \mathrm{d}1_N(x) ,\end{align}\tag{A}$$
where $\star 1$ is the volume-form on $\mathbb{R}^{d+1}$ and $1_N$ is the indicator/characteristic function. Here we have for simplicity assumed that the cycle $M=\partial N$ is a boundary and we have made some implicit choices of orientation. The action is called Gaussian/free because each term contains (at most) two fields.
The EOM for $\phi$ is$^1$
$$ \frac{n}{2\pi} \mathrm{d} a(x)+\delta^{d+1}(x,p)~(\star 1)(x)~\approx~0,\tag{B}$$
while the EOM for $a$ is
$$ \mathrm{d}(\frac{n}{2\pi}\phi - 1_N)~\approx~0 \qquad\Rightarrow\qquad \frac{n}{2\pi}\phi - 1_N~\approx {\rm const} .\tag{C}$$
The classical on-shell action becomes (after neglecting a boundary term)
$$\tilde{S}_{\rm cl}~=~\frac{2\pi}{n}\int\! 1_N(x) ~\delta^{d+1}(x,p)~(\star 1)(x)~=~\frac{2\pi}{n}\ell.\tag{D}$$
A similar calculation for the original action $S$ yields
$$S_{\rm cl}~=~0.\tag{E}$$
OP's correlator (2) can be calculated via 2 Gaussian integrals$^2$
$$\begin{align} \left\langle \mathrm{e}^{\mathrm{i}\phi(p)} \mathrm{e}^{\mathrm{i}\oint_M a}\right\rangle
~=~\frac{\tilde{Z}}{Z}
~=~\frac{\int\!\mathscr{D}\phi \mathscr{D}a~
\mathrm{e}^{\mathrm{i}\tilde{S}}}{\int\!\mathscr{D}\phi \mathscr{D}a~
\mathrm{e}^{\mathrm{i}S}} ~=~\frac{\mathrm{e}^{\mathrm{i}\tilde{S}_{\rm cl}}}{\mathrm{e}^{\mathrm{i}S_{\rm cl}}}
~\stackrel{(D)+(E)}{=}~ \mathrm{e}^{\frac{2\pi\mathrm{i}}{n} \ell}.\end{align}\tag{F}$$
(One should analytically continue the Gaussian integrals to make them convergent.) Completing the square yields the classical on-shell actions. Note that the 2 Gaussian determinants cancel.
--
$^1$ Here the $\approx$ sign means equal modulo the EOMs.
$^2$ Here we ignore for simplicity gauge-fixing. Gauge-fixing would lead to extra terms in the 2 actions, which cancel in the correlator (F). | {
"domain": "physics.stackexchange",
"id": 78726,
"tags": "quantum-field-theory, path-integral, correlation-functions, partition-function, topological-field-theory"
} |
Does the prefix of an inverted unit also get inverted? | Question:
What is the energy of radiation that has a frequency of $\pu{2.51 \times 10^11 ms-1}$?
(a) $\pu{1.66 \times 10^-19 J}$ (supposedly correct)
(b) $\pu{1.66 \times 10^-22 J}$
(c) $\pu{7.92 \times 10^-37 J}$
(d) $\pu{1.66 \times 10^-25 J}$
My argument was that: since $\pu{Hz} = \pu{s-1}$, then $\pu{ms-1} = \pu{mHz}$. So, I divided $2.51 \times 10^{11}$ by $1000$ and solved the question normally (using $E = h\nu$). But my professor said that since $\pu{m}$ is a prefix, it should follow whatever is in front of it, so $\pu{ms-1}$ will become $1/\pu{ms}$, then you multiply it by 1000 to change it to Hz. Is that the correct way to do it?
Answer: You should read $\pu{ms-1}$ as $(\pu{ms})^{-1}$ and not as $\mathrm{m}(\pu{s-1})$. Your teacher is correct.
[...] since $\pu{Hz} = \pu{s-1}$ then $\pu{ms-1} = \pu{mHz}$
This is incorrect.
If something happens once per hour, it happens less often than when it happens once per minute. This is because an hour is longer than a minute. In a similar manner, if something happens once per second, it happens less often than when it happens once per millisecond. This is because a second is longer than a millisecond.
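A quick numeric check (my sketch, using a rounded Planck constant) shows that the $(\pu{ms})^{-1}$ reading reproduces the "correct" option (a), while the $\pu{mHz}$ reading would instead give option (d):

```python
h = 6.626e-34          # Planck's constant, J s (rounded)
nu_printed = 2.51e11   # the number as printed, in "ms-1"

# (ms)^-1 reading: one event per millisecond = kHz, so multiply by 1000
E_teacher = h * (nu_printed * 1e3)   # ~1.66e-19 J -> option (a)
# m(s^-1) reading: millihertz, so divide by 1000 instead
E_mhz = h * (nu_printed * 1e-3)      # ~1.66e-25 J -> option (d)
print(E_teacher, E_mhz)
```

The fact that one of the wrong answer choices matches the mHz reading suggests the question was written to catch exactly this mistake.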
So $\pu{ms-1} = \pu{kHz}$, i.e. the frequency of something happening every millisecond is larger than the frequency of something happening every second. | {
"domain": "chemistry.stackexchange",
"id": 11780,
"tags": "units"
} |
Sudoku-Solver with GUI in Java | Question: I've written a Sudoku solver in Java, which also has a GUI, so you can just enter the Sudoku, press "OK", and it will solve the Sudoku using backtracking.
Here's the code:
import javax.swing.*;
import java.awt.*;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import static javax.swing.WindowConstants.EXIT_ON_CLOSE;
public class SudokuSolver {
public static void main(String[] args) {
int[][] board = {
{ 0, 0, 0, 0, 0, 0, 0, 0, 0 },
{ 0, 0, 0, 0, 0, 0, 0, 0, 0 },
{ 0, 0, 0, 0, 0, 0, 0, 0, 0 },
{ 0, 0, 0, 0, 0, 0, 0, 0, 0 },
{ 0, 0, 0, 0, 0, 0, 0, 0, 0 },
{ 0, 0, 0, 0, 0, 0, 0, 0, 0 },
{ 0, 0, 0, 0, 0, 0, 0, 0, 0 },
{ 0, 0, 0, 0, 0, 0, 0, 0, 0 },
{ 0, 0, 0, 0, 0, 0, 0, 0, 0 },
};
board = getSudoku(board);
boolean solve = solver(board);
if(solve) {
display(board);
}
else {
JOptionPane.showMessageDialog(null,"Not solvable.");
}
}
//Backtracking-Algorithm
public static boolean solver(int[][] board) {
for (int i = 0; i < 9; i++) {
for (int j = 0; j < 9; j++) {
if (board[i][j] == 0) {
for (int n = 1; n < 10; n++) {
if (checkRow(board, i, n) && checkColumn(board, j, n) && checkBox(board, i, j, n)) {
board[i][j] = n;
if (!solver(board)) {
board[i][j] = 0;
}
else {
return true;
}
}
}
return false;
}
}
}
return true;
}
public static boolean checkRow(int[][] board, int row, int n) {
for (int i = 0; i < 9; i++) {
if (board[row][i] == n) {
return false;
}
}
return true;
}
public static boolean checkColumn(int[][] board, int column, int n) {
for (int i = 0; i < 9; i++) {
if (board[i][column] == n) {
return false;
}
}
return true;
}
public static boolean checkBox(int[][] board, int row, int column, int n) {
row = row - row % 3;
column = column - column % 3;
for (int i = row; i < row + 3; i++) {
for (int j = column; j < column + 3; j++) {
if (board[i][j] == n) {
return false;
}
}
}
return true;
}
public static int[][] getSudoku(int[][] board) {
JFrame frame = new JFrame();
frame.setSize(800, 700);
frame.setDefaultCloseOperation(EXIT_ON_CLOSE);
JPanel panel = new JPanel();
JPanel subpanel1 = new JPanel();
subpanel1.setPreferredSize(new Dimension(500,500));
subpanel1.setLayout( new java.awt.GridLayout( 9, 9, 20, 20 ) );
JTextArea[][] text = new JTextArea[9][9];
for(int i = 0; i < 9; i++) {
for(int j = 0; j < 9; j++) {
text[i][j] = new JTextArea();
text[i][j].setText("0");
text[i][j].setEditable(true);
Font font = new Font("Verdana", Font.BOLD, 40);
text[i][j].setFont(font);
subpanel1.add(text[i][j]);
}
}
JPanel subpanel2 = new JPanel();
JButton button = new JButton("OK");
button.addActionListener(new ActionListener() {
@Override
public void actionPerformed(ActionEvent actionEvent) {
for(int i = 0; i < 9; i++) {
for(int j = 0; j < 9; j++) {
String s = text[i][j].getText();
board[i][j] = Integer.valueOf(s);
helper(1);
}
}
}
});
subpanel2.add(button);
panel.add(subpanel1, BorderLayout.WEST);
panel.add(subpanel2, BorderLayout.EAST);
frame.add(panel);
frame.setVisible(true);
while(helper(0)) {
}
frame.dispose();
return board;
}
public static void display(int[][] board) {
JFrame frame = new JFrame();
frame.setSize(700,700);
JPanel panel = new JPanel();
panel.setLayout(new GridLayout (9,9, 3 ,3));
JTextArea[][] text = new JTextArea[9][9];
for(int i = 0; i < 9; i++) {
for(int j = 0; j < 9; j++) {
text[i][j] = new JTextArea();
text[i][j].setText("" + board[i][j]);
text[i][j].setEditable(false);
Font font = new Font("Verdana", Font.BOLD, 40);
text[i][j].setFont(font);
panel.add(text[i][j]);
}
}
frame.add(panel);
frame.setVisible(true);
}
private static boolean test = true;
public static boolean helper(int x) {
if(x == 1) {
test = false;
}
System.out.print("");
return test;
}
}
Do you have any suggestions on how to improve the code?
Answer: Solver
Your back-tracking algorithm to find the solution to the puzzle is fine, although it is fairly inefficient.
On each recursive call, the algorithm must search for the position of the next unknown, which means starting at [0][0] and searching over the same locations over and over on each call. You could improve it by creating an ArrayList<> of unknown positions, and directly indexing to the unknown corresponding to the current search depth. Or you could pass the current board position (i, j) as a starting point for the next level of the search.
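The position-passing idea can be shown compactly; here is a minimal Python sketch of the same backtracking scheme (my code, not the reviewer's or the asker's) where the recursion resumes its scan at the current cell index instead of restarting at the top-left corner:

```python
def solve(board, pos=0):
    """Backtracking Sudoku solver; `pos` resumes the scan for the
    next empty cell instead of rescanning from [0][0] each call."""
    if pos == 81:
        return True
    i, j = divmod(pos, 9)
    if board[i][j] != 0:          # pre-filled cell: skip it
        return solve(board, pos + 1)
    for n in range(1, 10):
        if valid(board, i, j, n):
            board[i][j] = n
            if solve(board, pos + 1):
                return True
            board[i][j] = 0       # undo and try the next candidate
    return False

def valid(board, i, j, n):
    """True if n can be placed at (i, j): not in row, column, or 3x3 box."""
    if any(board[i][c] == n for c in range(9)):
        return False
    if any(board[r][j] == n for r in range(9)):
        return False
    r0, c0 = i - i % 3, j - j % 3
    return all(board[r][c] != n
               for r in range(r0, r0 + 3) for c in range(c0, c0 + 3))
```

The same restructuring carries over directly to the Java version by threading the scan index through the recursive calls.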
You could use a Set<>, such as a BitSet, to store the unused numbers in each row, column, and box. When processing each cell, you could "and" those sets together to create a much smaller set of candidate values to try.
But these optimizations are only necessary if your solver algorithm isn't fast enough. I haven't tried it.
An organizational improvement might be to move the Sudoku Solver code into its own class, so it could be used in other projects. For instance, you could make a JavaFX, or an SWT version of your program, and reuse the solver code ... if it was a stand-alone class.
GUI
Your GUI is where a lot of work is absolutely needed. This code is just plain awful.
Minor
Starting with the easiest to fix items:
the getSudoku() method creates a JFrame and sets EXIT_ON_CLOSE, but the display() method creates a JFrame without EXIT_ON_CLOSE. If the user closes the second frame, the program will not immediately terminate.
JTextArea is a multiline text edit window. You are creating 81 of these in a 9x9 grid. Surely you wanted to use the much lighter weight JTextField ... or even the JLabel when displaying the solution.
You create 81 identical Font objects, one for each JTextArea. You should create just one, and set the font of each JTextArea (or JTextField/JLabel) to this common Font object. Simply move the statement out of the double loop.
public static int[][] getSudoku(int[][] board) is this method allocating a new board and returning it, or is it just modifying the board it was given? Why have both an input parameter board and a return value, if the board that it is given is the board that is returned?
But the most SERIOUS problem is you are creating and manipulating Swing GUI objects from threads other than the Event Dispatching Thread (EDT). Swing is NOT thread safe. It is a convenience, and a thorn, that Swing allows you to build the GUI on the main thread. Swing goes to great lengths to allow it ... once. After the realization of any Swing GUI object, or after any Timer is started, all interaction must be performed on the EDT, or unexplainable, hard-to-debug behaviour -- up to and including application crashes -- are possible. So up to this line, which realizes the GUI components:
frame.setVisible(true);
you are safe. However, it is followed by:
while(helper(0)) {
}
frame.dispose();
which is a recipe for disaster. It is bad enough that this is an empty spin loop on the main application thread, but frame.dispose() violates the rule about touching live Swing objects from threads other than the EDT. Then, the code returns to the main() function where display() is called, and more Swing GUI items are created off the EDT.
Working on the EDT
First, you should divorce yourself from the main thread, and create your GUI on the EDT:
public class SudokuSolver {
public static void main(String[] args) {
SwingUtilities.invokeLater(new Runnable() {
@Override
public void run() {
createGUI();
}
});
}
private static void createGUI() {
/* Create JFrame, JPanel, etc here */
frame.setVisible(true);
}
...
}
Or, if you are comfortable with lambdas and method references:
public class SudokuSolver {
public static void main(String[] args) {
SwingUtilities.invokeLater(SudokuSolver::createGUI);
}
private static void createGUI() {
/* Create JFrame, JPanel, etc here */
frame.setVisible(true);
}
...
}
The invokeLater() method takes a runnable, switches to the Event Dispatching Thread, and runs the runnable. So this creates all GUI objects on the EDT. The final step is that the frame is made visible. And then execution ends. Nothing else happens. The main thread has already reached the end of main() and has terminated; nothing else happens there, either. The application has become purely an event-driven GUI application. It is now waiting for the user to interact with the GUI elements.
Working off the EDT
Once the user has entered their grid, they press the "OK" button, and the actionPerformed method is called.
Note that this method calls helper(1) a total of 81 times. At any point after this method has been called once, and before it has been called the last time, the board[][] would contain an incomplete starting point for the solution, but the main thread could start attempting to solve the grid, since the test flag would have been cleared! Just one more danger of multithreaded processing.
Instead, after both loops in the actionPerformed method, a SwingWorker should be created and given a copy of the board. This worker could solve the board in its background thread, and then in its done() method, which is run once again on the EDT, the solved board could be displayed in the GUI. | {
"domain": "codereview.stackexchange",
"id": 36976,
"tags": "java, swing, gui, sudoku"
} |
How does hybridisation affect an otherwise chiral centre? | Question: In basic theory, a carbon atom with four nonidentical substituents attached makes a chiral centre. Thus any molecule is chiral as long as it has a chiral centre (except meso compounds). I thought identifying a chiral centre was straightforward until I saw the following video.
Is carbon (with its ability to form 4 covalent bonds) the only atom qualified to be a chiral centre (stereogenic centre)? The Wikipedia link states that other atoms like nitrogen or sulfur could also be chiral centres.
The above video shows that other atoms too can be chiral as well... However a comment in the same video, replies that a hybridised Nitrogen atom is not qualified to be a chiral centre. Original comment : That Nitrogen is sp2 hybridized, the lone pair of electron are in a unhybridized p-orbital; it would not be considered a chiral center.
With reference to this question 1-, it has number of comments, but not quite the answer I am after. On the other hand this question 2- has more emphasis on the lone pair for chirality - an atom with lone pairs can't be a chiral centre.
So given these, why can't a hybridised atom be a chiral centre?
Answer: I really wouldn't put too much stock into YouTube comments. Often, it's just the blind leading the blind. The comments on this video have all sorts of incorrect statements, it's really not where you want to be learning from...
Firstly, chiral centres are not restricted to only carbon. Source: IUPAC Gold Book. With that out of the way, we can enumerate the three main possibilities for nitrogen.
$\mathrm{sp^3}$ nitrogen with a lone pair is usually not considered to be a chiral centre, as the rate of inversion is very rapid (google "nitrogen inversion").
There are exceptions, such as if the amine geometry is constrained by a bicyclic system (something similar to quinuclidine), but that's a discussion for another day.
$\mathrm{sp^3}$ nitrogen without a lone pair e.g. in ammonium cations, can be chiral if there are four different groups attached. So these two molecules are enantiomers of each other.
In fact, it is this context in which Wikipedia says that nitrogen can be a chiral centre.
Racemization by Walden inversion may be restricted (such as ammonium cations), or slow, which allows the existence of chirality.
The implication is that nitrogen inversion is fast in the case where there is a lone pair. However, it is very unclear, so I don't blame you.
$\mathrm{sp^2}$ nitrogen, e.g. amide nitrogen, is planar so it cannot be chiral. So, this video you watched is incorrect in stating that the amide nitrogen is a chiral centre.
And, that also means the comment you quoted is correct. But, I would caution that it is not a direct cause and effect relationship between "$\mathrm{sp^2}$ hybridisation" and "not a chiral centre". It is more like: because it is $\mathrm{sp^2}$, therefore the geometry around it is planar; and because it is planar, it cannot be chiral.
As was mentioned in the comments, phosphorus and sulfur behave differently. This has been covered multiple times here; here are some links. 1 2 3 4 | {
"domain": "chemistry.stackexchange",
"id": 8557,
"tags": "stereochemistry, hybridization, chirality"
} |
Momentum and energy conservation principle doubt | Question:
Let's say we release a bob of mass $m$ attached to the ceiling through a wire of length $l_0$. Now at the bottommost point another identical bob of mass $m$ gets gently attached to it. Now we are required to find the angle up to which the system rises.
First I tried to solve the question using Energy Conservation problem from the starting point to the last point (maximum $\theta$) (taking the bottommost point as the point of potential reference)
$$U_{initial} = mg(l_0)$$
$$K_{initial}=0$$
$$U_{final}= 2mgl_0(1-\cos\theta)$$
$$K_{final}=0$$
Using $$U_{initial}+K_{initial}=U_{final}+K_{final}$$
$$mg(l_0)=2mgl_0(1-\cos\theta)$$
$$\Rightarrow \theta= 60^\circ$$
Then I tried to solve the problem using the conservation of momentum
At the bottommost point velocity of the single mass is $\sqrt{2gl_0}$
Using conservation of momentum
$$m \cdot \sqrt{2gl_0}=2m \cdot v$$ (where $v$ is the velocity of the combined mass from the lowermost point )
$$v= {\sqrt{2gl_0} \over 2}$$
Now again using the energy conservation from the lowermost point to the last point
$$U_{initial} = 0$$
$$K_{initial}={1 \over 2}(2m)v^2$$
$$U_{final}= 2mgl_0(1-\cos\theta)$$
$$K_{final}=0$$
Using $$U_{initial}+K_{initial}=U_{final}+K_{final}$$
$${1 \over 2}(2m)v^2=2mgl_0(1-\cos\theta)$$
$${m(2gl_0) \over 4}= 2mgl_0(1-\cos\theta)$$
$$\Rightarrow \theta = \cos^{-1}(3/4)$$
Why such contradiction exists between the answers? Can't the energy conservation law be applied here.
Answer: For the bobs to join together the collision has to be inelastic and so kinetic energy is not conserved during the joining of the two bobs.
Once you have found the speed after the joining of the two masses by momentum conservation, you will see that there is a decrease in the kinetic energy.
Update which I have found great difficulty writing.
Whether or not the joining is gentle, because there are no external horizontal forces during the collision, momentum conservation in the horizontal direction must apply.
So the kinetic energy of the one bob before the joining is $\frac 1 2 \; m \; 2gl_o$ and the kinetic energy of the two bob after the joining is $\frac 1 2 \; 2m \;\dfrac{2gl_o}{4}$.
The translational kinetic energy of the centre of mass of the two bobs has decreased by a half.
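A quick numeric check of both claims (my sketch; $m$ and $l_0$ set to 1, $g = 9.81$):

```python
import math

m, g, l0 = 1.0, 9.81, 1.0

v1 = math.sqrt(2 * g * l0)   # speed of bob A at the bottom of the swing
v2 = v1 / 2                  # speed of the joined pair, from momentum conservation

ke_before = 0.5 * m * v1**2        # = m*g*l0
ke_after = 0.5 * (2 * m) * v2**2   # = m*g*l0/2
print(ke_after / ke_before)        # 0.5: half the translational KE is gone

# rise angle from energy conservation applied after the inelastic joining
theta = math.acos(1 - ke_after / (2 * m * g * l0))
print(math.degrees(theta))         # about 41.4 degrees, i.e. arccos(3/4)
```

This reproduces the asker's second result, confirming that the momentum-conservation route is the correct one.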
If the joining is done by a pin attached to one bob going into the other bob then one can say that work needs to be done to drive the pin into the bob and the result is that some bonds between atoms are permanently broken and the bobs get hotter.
Suppose instead that the two bobs, $A$ and $B$ were so clean that when they collided together they were joined together by cohesive forces.
So the moving bob $A$ hits the stationary bob $B$, the bonds between the two bobs deform and the centre of mass of the two bobs moves off at half the speed of the initial speed of bob $A$ as determined by momentum conservation.
There are two extremes:
1 As a result of the collision the bonds between the two bobs are permanently deformed and the work needed to do this comes from the kinetic energy that bob $A$ had initially.
The collision between the bobs is inelastic.
2 The collision between the two bobs is elastic which means that the compressed bonds act like compressed springs and are storing elastic potential energy and also exert forces on the two masses.
The two masses separate, gaining kinetic energy and continue to do so until the bonds start to get stretched.
The two bobs then slow down losing kinetic energy and the bonds because they are being stretched gaining elastic potential energy.
Eventually the two bobs stop and the bonds now pull the two bobs together.
So in summary.
You have the centre of mass of the two bobs moving in an arc of a circle and the two bobs oscillating about their centre of mass with an energy associated with that oscillatory motion which is half the kinetic energy that bob $A$ had initially.
All this is rather abstract but the point I am trying to make is that the “missing” kinetic energy is not lost but it is no longer part of the translational kinetic energy of the two bob system.
In the real world the collision might well send (shock) waves within the bobs.
As time went on those waves would die down and as a result the bobs would have a higher temperature.
So by whatever mechanism you chose to join the two bobs together half of the initial kinetic energy that bob $A$ had would end up as heat, sound and work done permanently deforming the bobs. | {
"domain": "physics.stackexchange",
"id": 33874,
"tags": "homework-and-exercises, newtonian-mechanics, momentum, conservation-laws, collision"
} |
Collision of block and a spring system kept at rest | Question: A body of mass $m$ moving with velocity $v$ collides with a system of 2 blocks, each of mass $m$, connected by a spring of spring constant $k$. All three lie on the same plane (1D collision). There is no friction anywhere. Does the spring compress? If yes, then why? To compress or elongate a spring, equal and opposite forces should be applied to opposite ends, so how could the spring compress? Please explain.
Answer: It seems you have tacitly set up a collinear collision; if not, let me know and I will modify the answer.
The spring will compress in a variety of modes.
-If the collision is elastic, the moving block will stop entirely after it hits the stationary block and give all its momentum (as in Newton's cradle) to the stationary block, and then that stationary block starts to move forward with speed v; this will compress the spring and start to push the last block back. This will start a free vibration in the system of the spring and two blocks, which will gradually slide away while vibrating.
-If the collision is not elastic there will be some loss of kinetic energy of the moving block, and it may recoil back while pushing the second block back and compressing the spring; or, depending on the stiffness and length of the spring, it may become one with the two blocks and move with them, at least for the compression part of the vibration.
"domain": "physics.stackexchange",
"id": 57473,
"tags": "newtonian-mechanics, conservation-laws, collision, spring"
} |
Which anaerobic environments would exist without aerobic life? | Question: All of the anaerobic environments I can think of are that way because a layer of aerobic life above them separates them from oxygen. If the aerobic life were removed, the anaerobic compartment would equilibriate with the partial pressures of the atmosphere, however slowly, and it would remain at equilibrium until competition between aerobic cells once again pushed the oxygen boundary to the top.
After the great oxygenation event, there was oxygen in the atmosphere but no layers of aerobic life to stop it from diffusing to the bottom of the ocean or deep underground. If there are anaerobic environments that exist independently of aerobic life, our anaerobic ancestors could have survived there until aerobic respiration or tolerance evolved. Otherwise aerobic tolerance would have had to have been present before the great oxygenation event for anything to survive at all.
Which is closer to the truth? (An environment that takes extremely long to equilibriate with the atmosphere, long enough for evolution to be fast in comparison, would count as isolated for the purposes of this question since it would serve as a reservoir of life from which aerobic respiration could emerge.)
Answer: In the question body it says:
After the great oxygenation event, there was oxygen in the atmosphere but no layers of aerobic life to stop it from diffusing to the bottom of the ocean or deep underground.
This statement isn't justified, given that the Great Oxidation "Event" developed over one to two billion years. It took half a billion years for the atmospheric oxygen concentration to go from nil to around 3%. This is a large amount of time for evolution to select organisms that can tolerate the presence of free oxygen at various levels. There would be a wide range of environments with differing levels of oxygen, depending on many factors including distance from the oxygen emitters (cyanobacteria) and the nature of the intervening materials. The atmospheric oxygen concentration remained at fairly stable levels of 2% to 4% for nearly a billion more years, giving more time for evolution of oxygen-tolerant and -consuming organisms to evolve. Thus, to the extent that anaerobic organisms depend on aerobic organisms to shield them from oxygen diffusion, there would have apparently been plenty of evolutionary time for that to occur. | {
"domain": "biology.stackexchange",
"id": 11875,
"tags": "evolution, ecology, anaerobic-respiration"
} |
Independent Set Problem Variant, Induction | Question: So the question gives us some mysterious algorithm, that given a graph G and an integer k, it outputs true/false to whether there is an independent set of size k in G.
So we have to design an algorithm that can call this mysterious algorithm a polynomial number of times, to return an independent set of size k from G if it exists, or output impossible. The hint is induction.
I'm having trouble figuring an algorithm to solve this. I know however, that the proof for this algorithm involves removing or adding a vertex in the inductive step.
Answer: Hint: Suppose you have a graph $G = (V,E)$ which is yes, and then there is a vertex $v \in V$ such that $G - v$ (the graph where you have removed $v$ from the graph) is no. What can you say about $v$ and $G$?
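Before the worked example, here is how the hint turns into the full algorithm (a hedged Python sketch; the "mysterious algorithm" is played by a brute-force stand-in oracle, and all names are mine): try deleting each vertex in turn, keeping the deletion whenever the oracle still answers yes; the $k$ vertices that survive form an independent set.

```python
from itertools import combinations

def oracle(vertices, edges, k):
    """Brute-force stand-in for the mysterious yes/no algorithm."""
    return any(all((u, v) not in edges and (v, u) not in edges
                   for u, v in combinations(s, 2))
               for s in combinations(vertices, k))

def find_independent_set(vertices, edges, k):
    """Self-reduction: O(|V|) oracle calls to extract a witness."""
    if not oracle(vertices, edges, k):
        return None  # "impossible"
    vs = list(vertices)
    for v in list(vertices):
        rest = [u for u in vs if u != v]
        e = {(a, b) for (a, b) in edges if a != v and b != v}
        if oracle(rest, e, k):  # still yes without v: v is not needed
            vs, edges = rest, e
    return vs  # exactly k vertices remain, forming an independent set
```

Each vertex triggers one oracle call, so the wrapper uses only $|V| + 1$ calls in total, which is polynomial as required.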
Let us take an example, namely the following graph:
$G= (\{a,b,c,d\}, E=\{ab, bc, cd, ad, ac\})$
a —— d
| \ |
| \ |
b —— c
and with $k=2$. This is currently a yes instance.
First we ask: If we delete $a$ from $G$, will the resulting instance still be a yes instance?
d
|
|
b —— c
The answer is yes, this graph is still a yes instance.
Then we delete $b$ and ask the same question:
d
|
|
c
The answer is now no, this instance is no longer a yes instance. Hence we cannot delete $b$. Instead we try to delete $c$:
d
b
The answer is still yes, and the remaining graph has exactly $k$ vertices, so we conclude that $\{b, d\}$ is an independent set in $G$ of size $k$. | {
"domain": "cs.stackexchange",
"id": 16126,
"tags": "algorithms, graphs"
} |
ROSJava installation problem | Question:
Hi, I want to install rosjava on my laptop, and I am following the instructions at
http://wiki.ros.org/rosjava/Tutorials/indigo/Source%20Installation
And in step 3.3, When I input:
wstool merge https://raw.githubusercontent.com/me/rosinstalls/master/my_custom_msg_repos.rosinstall
I got the error:
ERROR in config: Unable to download URL [https://raw.githubusercontent.com/me/rosinstalls/master/my_custom_msg_repos.rosinstall]: HTTP Error 404: Not Found
It seems the address does not exist any more. Does anyone have an idea what this problem is? How can I solve it?
Originally posted by vcdanda on ROS Answers with karma: 38 on 2017-03-16
Post score: 0
Answer:
It seems this source installation doesn't work, so I am trying the deb installation instead. Let's see what happens...
Originally posted by vcdanda with karma: 38 on 2017-03-16
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 27334,
"tags": "ros"
} |
Computer analogy to non-locality in quantum mechanics | Question: It is not uncommon to say that the non-locality of quantum mechanics is equivalent to the following computer analogy: if you are trying to model an entangled two spin system, then even if the spins are simulated to move apart from each other, you need to do the calculations in the same computer, that is, you cannot take one computer for each spin and separate them to do the calculations, the two computers have to remain together. I heard Leonard Susskind saying this in one of his classes, and there is also the quantum randi challenge https://arxiv.org/abs/1207.5294, which is used to debunk local theories of quantum mechanics.
My question is: can this analogy be formally proved from quantum mechanics, is it so obvious that it does not need any proof, or is it not even correct?
Answer: I think that it can be proved, and the proof is Bell's theorem (or a similar result). I suspect that Susskind had Bell's theorem in mind.
In Bell's original paper, the outcomes of the two measurements were determined by arbitrary functions $A(\vec a, λ)$ and $B(\vec b, λ)$, where $λ$ was arbitrary hidden state information and $\vec a$ and $\vec b$ were the measurement axes, and noncommunication was captured in the fact that each function had only one axis as an argument. In an attempt to make this more friendly, you could say that $A$ and $B$ are calculated by computers, $λ$ is perhaps the software installed on them, and the code that is told $\vec a$ can't communicate with the code that is told $\vec b$, which is most easily phrased as there being two (non-networked) computers doing the calculations. Those assumptions lead to a result that contradicts quantum mechanics. Therefore, to correctly simulate quantum mechanics, you need networked computers, or more simply a single computer. I imagine that Susskind's example came from a line of thought similar to that.
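To make the argument concrete (my illustration, not from the original answer), one can enumerate every deterministic strategy the two non-communicating classical computers could run, where each outputs a fixed ±1 for each of its two possible settings, and check that the CHSH combination never exceeds 2, whereas entangled spins reach 2√2:

```python
from itertools import product
from math import sqrt

best = 0
# A0, A1: outputs of computer A for its two settings; likewise B0, B1.
# Crucially, A's outputs cannot depend on B's setting, and vice versa.
for A0, A1, B0, B1 in product([-1, 1], repeat=4):
    S = A0 * B0 + A0 * B1 + A1 * B0 - A1 * B1  # CHSH combination
    best = max(best, S)

print(best)          # 2: the classical (local) bound
print(2 * sqrt(2))   # ~2.83: achievable with entangled spins
```

Randomizing over hidden state λ only mixes these deterministic strategies, so it cannot push the average above the deterministic maximum of 2; this is why no pair of separated classical computers can reproduce the quantum predictions.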
This argument breaks down if the computers are quantum and their initial memory states are entangled. You can use quantum computers if the initial states aren't entangled, but it's probably easier to require classical computers. Classical computers can simulate quantum computers anyway (as long as they aren't entangled with other quantum computers). | {
"domain": "physics.stackexchange",
"id": 82537,
"tags": "quantum-mechanics, quantum-entanglement, non-locality, computer"
} |
Why isn't the aniline hydrolysed in Sanger's protein sequencing method? | Question: Recently I was reading about identifying N-terminal amino acid residues using Sanger's reagent (1-fluoro-2,4-dinitrobenzene). The following image showing the reaction is taken from Wikimedia Commons:
The free N-terminus amino group performs a nucleophilic aromatic substitution to get the first product; after that we hydrolyse the adduct. Why doesn't the Ar–N bond in the final product also undergo hydrolysis to give 2,4-dinitroaniline, though? There are many examples (e.g. isocyanides, cyanides) where heating with strong acid causes cleavage of the C–N bond. Is it because resonance with the aromatic ring causes the nitrogen lone pair to be not basic?
Answer: You raise the examples of isonitriles and nitriles undergoing C–N bond cleavage via hydrolysis. That is true, but it is worth thinking about how this happens. Both isonitriles and nitriles are converted to amides first when reacted with aqueous acid. Now, amides will certainly be hydrolysed at the last stage of the peptide sequencing: you can see that in the image itself, as the amide bonds holding the polypeptide together are broken (so that the labelled residue can be detected).
However, amides behave quite differently from amines. The C–N bond you're asking about ($\ce{R-NH-Ar}$, Ar = 2,4-dinitrophenyl) is an amine, not an amide. The comparison to isonitriles / nitriles is therefore not relevant. Amines simply don't undergo typical $\mathrm{S_N2}$ reactions (quaternary ammonium salts do, but that's a different matter entirely), so the C–N bond doesn't get broken via hydrolysis. It is true that there is resonance with the electron-withdrawing dinitrophenyl group, which reduces the availability of the nitrogen lone pair, but this is actually irrelevant: even if that aryl group were just a methyl group, the conclusion would not be affected.
Note that this has nothing to do with the C–N bond strength, either. The C–N bond in an amide has partial double bond character, and is stronger than the C–N bond in an amine (which is a plain single bond). The question is not which is stronger, it is which has an available pathway for cleavage that doesn't involve too high an activation energy. Amides can be cleaved via a nucleophilic acyl substitution mechanism: although this is not easy to accomplish (you need strong acid and lots of heat), it is still possible. Amines, on the other hand, could conceivably be cleaved via a $\mathrm{S_N2}$ or $\mathrm{S_N1}$ mechanism. Neither is very favourable in this case (amines are very poor leaving groups, and the carbocation above is not stable at all, with that electron-withdrawing carbonyl group next to it). | {
"domain": "chemistry.stackexchange",
"id": 16242,
"tags": "organic-chemistry, biochemistry, amines"
} |
Why correlation length diverges at critical point? | Question: I want to ask about the behavior near critical point.
Let me take an example of ferromagnet.
At $T < T_c$, all spins are aligned in the same direction, so it is in the ordered state, scale invariant, and its correlation length is effectively infinite. At $T > T_c$, all spins are oriented randomly, so it is in the disordered state.
However, in my understanding, we say the system is scale invariant and its correlation length diverges only at critical point.
What is wrong in my understanding? Furthermore, could you explain an intuitive reason why, at the critical point, the correlation length should diverge?
Answer: It is not the correlation length of the system that you should look at, but the correlation of the fluctuations. If $T \gg T_c$ the spins are randomly oriented and the length scale of fluctuations is very small. As you get closer to $T_c$, the fluctuations become more correlated, and the length scale increases toward infinity. Similarly for the ferromagnet at temperatures much less than $T_c$: all spins are aligned. The fluctuations at $0 < T \ll T_c$ have short correlation lengths. As you heat the system, it is still mostly ordered, but the number of spins pointing in the opposite direction increases, and so does the correlation length of these fluctuations.
"domain": "physics.stackexchange",
"id": 99583,
"tags": "statistical-mechanics, phase-transition, ising-model, critical-phenomena, scale-invariance"
} |
Pull-up sequence accumulator counter | Question: I wanted to create a function describing a sport game called "Leader". The idea is that you do as many push-ups as you can, increasing each repetition by 1, and once you reach your maximum, each subsequent repetition is decreased by 1 until you eventually reach 0 push-ups.
I managed to do this using dictionaries, but I think this could be done in a much easier way.
from typing import List, Tuple
def leader_step(max_pushups, step): # maximum pushups a person can do and a step of increment
i = 0 # count of the repetitions
pushups: List[Tuple[int, int]] = [(0, 0)] # number of pushups at the beginning (at each repetition, it total)
while pushups[i][0] <= max_pushups + abs(step): # +abs(step) in case step > 1
if pushups[i][0] >= max_pushups: # decrease push-ups as they reach max
step = -step
i += 1
now = step + pushups[i - 1][0]
sum = now + pushups[i - 1][1] # counting the sum of all push-ups by adding previous sum and current pushups
pushups.insert(i, (now, sum))
if pushups[i][0] < 1: # game stops when you reach 0 push-up
break
return pushups[1:-1]
Function should return 2 sequences:
showing the number of push-ups at each repetition
showing total sum of push-ups made at each repetition
Answer: You can indeed simplify this quite a bit using a generator and the itertools module.
I would separate out the generating of the pushups to be done from the total pushups. For this you can use two range objects and the yield from (Python 3.3+) keyword combination:
def pushups(n):
yield from range(1, n)
yield from range(n, 0, -1)
The accumulation can be done using itertools.accumulate and itertools.tee to duplicate the generator:
from itertools import accumulate, tee
def leader_step(n):
gen1, gen2 = tee(pushups(n))
return list(gen1), list(accumulate(gen2))
if __name__ == "__main__":
print(leader_step(5))
# ([1, 2, 3, 4, 5, 4, 3, 2, 1], [1, 3, 6, 10, 15, 19, 22, 24, 25])
As noted in the comments by @Peilonrayz, it is not actually necessary to split the generator (as long as it fits into memory, which is very likely, given that presumably a human will try to do this training):
def leader_step(n):
training = list(pushups(n))
return training, list(accumulate(training)) | {
"domain": "codereview.stackexchange",
"id": 35021,
"tags": "python, python-3.x, hash-map"
} |
Leetcode valid sudoku | Question: Link here
I'll include a solution in Python and C++ and you can review one. I'm mostly interested in a review of the C++ code, which I recently started learning; those who don't know C++ can review the Python code. Both solutions share similar logic, so a review of one will largely apply to the other.
Problem statement
Determine if a 9 x 9 Sudoku board is valid. Only the filled cells need to be validated according to the following rules:
Each row must contain the digits 1-9 without repetition. Each column
must contain the digits 1-9 without repetition. Each of the nine 3 x
3 sub-boxes of the grid must contain the digits 1-9 without
repetition.
Note:
A Sudoku board (partially filled) could be valid but is not necessarily solvable.
Only the filled cells need to be validated according to the mentioned rules.
Example 1:
Input: board =
[["5","3",".",".","7",".",".",".","."]
,["6",".",".","1","9","5",".",".","."]
,[".","9","8",".",".",".",".","6","."]
,["8",".",".",".","6",".",".",".","3"]
,["4",".",".","8",".","3",".",".","1"]
,["7",".",".",".","2",".",".",".","6"]
,[".","6",".",".",".",".","2","8","."]
,[".",".",".","4","1","9",".",".","5"]
,[".",".",".",".","8",".",".","7","9"]]
Output: true
Example 2:
Input: board =
[["8","3",".",".","7",".",".",".","."]
,["6",".",".","1","9","5",".",".","."]
,[".","9","8",".",".",".",".","6","."]
,["8",".",".",".","6",".",".",".","3"]
,["4",".",".","8",".","3",".",".","1"]
,["7",".",".",".","2",".",".",".","6"]
,[".","6",".",".",".",".","2","8","."]
,[".",".",".","4","1","9",".",".","5"]
,[".",".",".",".","8",".",".","7","9"]]
Output: false
Explanation: Same as Example 1, except with the 5 in the top left corner being modified to 8. Since there are two 8's in the top left 3x3 sub-box, it is invalid.
valid_sudoku.py
def is_valid(board, empty_value='.', b_size=3):
seen = set()
size = b_size * b_size
for row in range(size):
for col in range(size):
if (value := board[row][col]) == empty_value:
continue
r = f'0{row}{value}'
c = f'1{col}{value}'
b = f'2{row // b_size}{col // b_size}{value}'
if r in seen or c in seen or b in seen:
return False
seen.update({r, c, b})
return True
if __name__ == '__main__':
g = [
["5", "3", ".", ".", "7", "5", ".", ".", "."],
["6", ".", ".", "1", "9", "5", ".", ".", "."],
[".", "9", "8", ".", ".", ".", ".", "6", "."],
["8", ".", ".", ".", "6", ".", ".", ".", "3"],
["4", ".", ".", "8", ".", "3", ".", ".", "1"],
["7", ".", ".", ".", "2", ".", ".", ".", "6"],
[".", "6", ".", ".", ".", ".", "2", "8", "."],
[".", ".", ".", "4", "1", "9", ".", ".", "5"],
[".", ".", ".", ".", "8", ".", ".", "7", "9"],
]
print(is_valid(g))
Stats:
Runtime: 92 ms, faster than 81.70% of Python3 online submissions for Valid Sudoku.
Memory Usage: 14.1 MB, less than 73.95% of Python3 online submissions for Valid Sudoku.
Here's an alternative solution using numpy, it's shorter and more readable but slower:
import numpy as np
def is_valid(board, size=3, empty_value='.'):
board = np.array(board)
blocks = board.reshape(4 * [size]).transpose(0, 2, 1, 3).reshape(2 * [size * size])
for grid in [board, board.T, blocks]:
for line in grid:
non_empty = line[line != empty_value]
if not len(non_empty) == len(set(non_empty)):
return False
return True
Stats:
Runtime: 172 ms, faster than 5.19% of Python3 online submissions for Valid Sudoku.
Memory Usage: 30.2 MB, less than 11.10% of Python3 online submissions for Valid Sudoku.
valid_sudoku.h
#ifndef LEETCODE_VALID_SUDOKU_H
#define LEETCODE_VALID_SUDOKU_H
#include <string_view>
#include <unordered_set>
bool sudoku_check_update(const size_t &row, const size_t &col, const char &value,
const int &block_size,
std::unordered_set<std::string_view> &seen);
bool sudoku_check(const std::vector<std::vector<char>> &board,
const char &empty_value = '.');
void test1();
#endif //LEETCODE_VALID_SUDOKU_H
valid_sudoku.cpp
#include <iostream>
#include <vector>
#include <string_view>
#include <cmath>
#include <unordered_set>
bool sudoku_check_update(const size_t &row, const size_t &col, const char &value,
const int &block_size,
std::unordered_set<std::string_view> &seen) {
std::string_view r, c, b;
r = "0-" + std::to_string(row) + value;
c = "1-" + std::to_string(col) + value;
b = "2-" + std::to_string(row / block_size) + std::to_string(col / block_size) +
value;
for (const auto &seen_id: {r, c, b}) {
if (seen.find(seen_id) != seen.end())
return false;
seen.insert(seen_id);
}
return true;
}
bool sudoku_check(const std::vector<std::vector<char>> &board,
const char &empty_value = '.') {
std::unordered_set<std::string_view> seen;
const auto row_size = board.size();
const int block_size = std::sqrt(row_size);
for (size_t row = 0; row < row_size; ++row) {
for (size_t col = 0; col < row_size; ++col) {
auto value = board[row][col];
if (value == empty_value)
continue;
if (!sudoku_check_update(row, col, value, block_size, seen))
return false;
}
}
return true;
}
void test1() {
std::vector<std::vector<char>> v = {
{'5', '3', '.', '.', '7', '.', '.', '.', '.'},
{'6', '.', '.', '1', '9', '5', '.', '.', '.'},
{'.', '9', '8', '.', '.', '.', '.', '6', '.'},
{'8', '.', '.', '.', '6', '.', '.', '.', '3'},
{'4', '.', '.', '8', '.', '3', '.', '.', '1'},
{'7', '.', '.', '.', '2', '.', '.', '.', '6'},
{'.', '6', '.', '.', '.', '.', '2', '8', '.'},
{'.', '.', '.', '4', '1', '9', '.', '.', '5'},
{'.', '.', '.', '.', '8', '.', '.', '7', '9'}
};
std::cout << sudoku_check(v);
}
Stats:
Runtime: 48 ms, faster than 17.98% of C++ online submissions for Valid Sudoku.
Memory Usage: 20.4 MB, less than 22.55% of C++ online submissions for Valid Sudoku.
Answer: Here are some suggestions for how you might be able to improve your code.
C++ version
Use all of the required #includes
The type std::vector<std::vector<char>> is used in the definition of sudoku_check() in the header file, but #include <vector> is missing from the list of includes there.
Minimize the interface
The .h file is a declaration of the interface to your software. The .cpp is the implementation of that interface. It is good design practice to minimize the interface to just that which is needed by outside programs. For that reason, I would remove the sudoku_check_update() and test1() functions and just use this:
#ifndef LEETCODE_VALID_SUDOKU_H
#define LEETCODE_VALID_SUDOKU_H
#include <vector>
bool sudoku_check(const std::vector<std::vector<char>> &board,
const char &empty_value = '.');
#endif //LEETCODE_VALID_SUDOKU_H
The implementation should include the interface header
As the title of this section states, the implementation should include the interface header. This assures that the interface and implementation match, and eliminates errors. If we do that in this case, we see that the default value for empty_value is declared twice. It should only be declared once, in the header file.
Make local functions static
With the smaller interface as advocated above, the sudoku_check_update function becomes an implementation detail used only within the .cpp file. For that reason, it should be made static so the compiler knows that it's safe to inline the function.
The keyword static when used with a function declaration specifies that the linkage is internal. In other words it means that nothing outside of that file can access the function. This is useful for the compiler to know because, for instance, if a static function is used only once and/or is small, the compiler has the option of putting the code inline. That is, instead of the usual assembly language call...ret instructions to jump to a subroutine and return from it, the compiler can simply put the code for the function directly at that location, saving the computational cost of those instructions and helping to assure cache predictions are correct (because normally cache takes advantage of locality of reference.)
Also read about storage class specifiers to better understand what static does in other contexts and more generally declaration specifiers for explanations of constexpr and more.
Fix the bug!
The code currently uses string_view inappropriately. A std::string_view is essentially a pointer to a string that exists. But your strings are composed and deleted dynamically, so this is an invalid use of std::string_view. If you replace all instances of string_view with string, the program works.
Memory problems like this and concurrency errors are among the most difficult problems for programmers to detect and correct. As you gain more experience you will find that your ability to spot and avoid these problems comes more reflexively. There are many approaches to finding such errors. See Leak detection simple class for some of them.
Write better test functions
The bug mentioned above was easily discovered by calling the function several times with varying inputs. Perhaps you already had a more extensive array of test functions, but if not, I highly recommend creating and applying them.
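For instance, a minimal standalone test of the Python version might look like the sketch below. It repeats the question's function so the snippet runs on its own, and uses LeetCode's two example boards with rows given as strings, which works because the function only ever indexes board[row][col]:

```python
def is_valid(board, empty_value='.', b_size=3):
    # the question's original Python solution, repeated verbatim for a standalone test
    seen = set()
    size = b_size * b_size
    for row in range(size):
        for col in range(size):
            if (value := board[row][col]) == empty_value:
                continue
            r = f'0{row}{value}'
            c = f'1{col}{value}'
            b = f'2{row // b_size}{col // b_size}{value}'
            if r in seen or c in seen or b in seen:
                return False
            seen.update({r, c, b})
    return True

# LeetCode's two example boards; rows as strings, since only indexing is used
example1 = ["53..7....", "6..195...", ".98....6.", "8...6...3", "4..8.3..1",
            "7...2...6", ".6....28.", "...419..5", "....8..79"]
example2 = ["83..7...."] + example1[1:]  # duplicate 8 in the top-left sub-box

assert is_valid(example1) is True
assert is_valid(example2) is False
print("all tests passed")
```

A similar pair of boards, fed through sudoku_check several times, is exactly what exposed the string_view bug below.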
Use efficient data structures
If the goal of this code is to be efficient in terms of both run time and memory, there are a lot of improvements that could be made. First, the data structure std::unordered_set<std::string_view> is not optimal. Whenever we are working on a performance optimization, it's useful to measure. So I wrote a very simple test program based on my stopwatch template. It's here:
#include "valid_sudoku.h"
#include "stopwatch.h"
#include <iostream>
#include <vector>
#include <string>
int main(int argc, char* argv[]) {
std::vector<std::vector<char>> v = {
{'5', '3', '.', '.', '7', '.', '.', '.', '.'},
{'6', '.', '.', '1', '9', '5', '.', '.', '.'},
{'.', '9', '8', '.', '.', '.', '.', '6', '.'},
{'8', '.', '.', '.', '6', '.', '.', '.', '3'},
{'4', '.', '.', '8', '.', '3', '.', '.', '1'},
{'7', '.', '.', '.', '2', '.', '.', '.', '6'},
{'.', '6', '.', '.', '.', '.', '2', '8', '.'},
{'.', '.', '.', '4', '1', '9', '.', '.', '5'},
{'.', '.', '.', '.', '8', '.', '.', '7', '9'}
};
if (argc != 2) {
std::cout << "Usage: " << argv[0] << " num_trials\n";
return 1;
}
auto iterations = std::stoul(argv[1]);
Stopwatch<> timer{};
bool valid{true};
for (auto i{iterations}; i; --i) {
valid &= sudoku_check(v);
}
auto elapsed{timer.stop()};
if (!valid) {
std::cout << "The program failed!\n";
return 2;
}
std::cout << iterations << " trials took " << elapsed << " microseconds\n"
" for an average of " << elapsed/iterations << " microseconds/trial\n";
}
When I run this on my machine with 1,000,000 trials, (with the bug noted above fixed as described) here's the result I get:
1000000 trials took 1.44351e+07 microseconds
for an average of 14.4351 microseconds/trial
Now let's think of a more efficient data structure. Instead of an unordered_set, we might use a set of fixed arrays. There are nine rows, nine columns and nine subsquares. Each of those either contains a number or it doesn't. To me, that suggests we could use an object like this:
using SeenType = std::array<std::array<std::array<bool, 9>, 9>, 3>;
This contains the 3 types (rows, columns, subsquares) and, within each, 9 collections of 9 boolean flags; one flag for each number. Let's rewrite the function to use this:
static bool sudoku_check_update(std::size_t row, std::size_t col,
char value, SeenType &seen) {
static constexpr std::size_t block_size{3};
static_assert(block_size * block_size == row_size, "block_size must be the square root of row_size");
const std::size_t block = col / block_size + block_size * (row / block_size);
std::size_t dim{0};
value -= '1'; // adjust from digits '1'-'9' to indices 0-8.
for (const auto &seen_id: {row, col, block}) {
if (seen[dim][seen_id][value])
return false;
seen[dim][seen_id][value] = true;
++dim;
}
return true;
}
Now re-run the program with a million trials as before:
1000000 trials took 562153 microseconds
for an average of 0.562153 microseconds/trial
So that one change made things 25x faster. We could also use the fact that the dimensions are known to use a std::array<std::array<char, 9>, 9> instead of the vectors and use constexpr for those dimensions. Making that change as well, we get this:
1000000 trials took 160808 microseconds
for an average of 0.160808 microseconds/trial
So now it is 90x faster.
Prefer {} style initializations
You may notice that the code I write tends to use the {}-style of initialization. There are several reasons for this, including the fact that when you see it, it is always an initialization and can't be mistaken for a function call. See ES.23 for more details.
Pass values rather than references for short data types
Rather than passing const size_t &col or const char &value, it's generally better to pass those by value. This is often advantageous because the reference (implemented as a pointer) is likely to be larger than the thing it points to, and because it allows the elimination of an indirection and memory lookup.
Move calculations from runtime to compile time where practical
It probably doesn't take up a lot of time, but this line is not as fast as it could be:
const int block_size = std::sqrt(row_size);
What this does is to convert row_size to a double, invokes the floating-point sqrt function and the converts the double back to an int. By contrast, we could just write this:
constexpr std::size_t block_size{3};
Now it takes no time at all at runtime because the value is known at compile time. It also eliminates having to pass the value and, as above, its definition can be put in the only place it's actually needed, which is within the sudoku_check_update function.
Generally, we prefer to move things from runtime to compile time for three reasons:
programs are generally run more times than they are compiled, so we optimize for the more common occurrence
the sooner we detect bugs, the cheaper and easier they are to fix
it tends to make software smaller and, internally, simpler which improves loading speed, cache performance and simpler software tends to improve quality
Python version
Avoid continue by restructuring the loop
There's nothing intrinsically wrong with your use of the walrus operator, but there seems to be little reason not to invert the sense of the comparison and simply process the update rather than using continue. It does not affect performance, but it aids the human reader of the code in understanding program flow. I tend to put "bailout" clauses early in a function to quickly reject invalid conditions, but avoid continue in loops; ultimately it's a question of readability and style in either C++ or Python.
Use more efficient data structures
What was true in C++ also works in Python. We can use the same ideas and speed up the code by a factor of 6:
def is_valid(board, empty_value='.', b_size=3):
size = b_size * b_size
seen = [[(size * [False]) for _ in range(size)] for _ in range(3)]
for row in range(size):
for col in range(size):
if (value := board[row][col]) != empty_value:
block = col // b_size + b_size * (row // b_size)
dim = 0
value = int(value) - 1
for seen_id in [row, col, block]:
if seen[dim][seen_id][value]:
return False
seen[dim][seen_id][value] = True
dim += 1
return True | {
"domain": "codereview.stackexchange",
"id": 39992,
"tags": "python, c++, programming-challenge"
} |
Leetcode: String to Integer (atoi) | Question: Solved this:
Implement atoi to convert a string to an integer.
Requirements for atoi:
The function first discards as many whitespace characters as necessary
until the first non-whitespace character is found. Then, starting from
this character, takes an optional initial plus or minus sign followed
by as many numerical digits as possible, and interprets them as a
numerical value.
The string can contain additional characters after those that form the
integral number, which are ignored and have no effect on the behavior
of this function.
If the first sequence of non-whitespace characters in str is not a
valid integral number, or if no such sequence exists because either
str is empty or it contains only whitespace characters, no conversion
is performed.
If no valid conversion could be performed, a zero value is returned.
If the correct value is out of the range of representable values,
INT_MAX (2147483647) or INT_MIN (-2147483648) is returned.
Solution:
Check if the string is null or empty; if either is true, return 0. Then keep iterating through the string until we hit the first non-space character.
From this position, we check if the first character is '+' or '-'. If so, we move the iterator by one step and note the sign in boolean isNegative.
From the current position, as long as the character is a digit, we add it to the solution, checking for overflow each time we do so. We keep iterating through the string until we hit a non-digit character and return the solution:
class Solution {
boolean isNegative = false;
int i = 0; // string iterator position
public int myAtoi(String str) {
int solution = 0;
if (str == null || str.isEmpty()) {
return 0;
}
while (i < str.length() && str.charAt(i) == ' ') {
i++; // discard leading whitespace
}
checkSign(str);
for (; i < str.length(); i++) {
if (str.charAt(i) >= '0' && str.charAt(i) <= '9') {
int prev = solution; // keep solution from last iter to check for overflow
solution = solution*10; // move number left one position
solution = solution+(str.charAt(i)-'0'); // increase solution by curr integer
if (isOverflow(prev, solution)) {
if (isNegative) {
return Integer.MIN_VALUE;
}
return Integer.MAX_VALUE;
}
} else {
return signSolution(solution); // we've reached a non-integer character before end of string
}
}
return signSolution(solution); // last character of string is an integer
}
boolean isOverflow(int prev, int curr) {
// prev = value at last iteration
// curr = value after current iteration
if (curr/10 == prev) {
return false;
}
return true;
}
void checkSign(String str) {
if (str.charAt(i) == '+') {
i++;
} else if (str.charAt(i) == '-') {
isNegative = true;
i++;
}
}
int signSolution(int solution) {
if (isNegative) {
return solution*-1;
}
return solution;
}
}
Answer: Normally when parsing a String into an int, people expect it to be a static method
Having to create a class to do the computation is a bit bulky, and then you need to worry about synchronization (what happens if this class is shared across multiple threads?)
boolean isOverflow(int prev, int curr) {
// prev = value at last iteration
// curr = value after current iteration
if (curr/10 == prev) {
return false;
}
return true;
}
This method is already a great candidate to become static, it only relies on its two inputs to figure out if an overflow happened
boolean isNegative = false;
int i = 0; // string iterator position
void checkSign(String str) //uses i and isNegative
int signSolution(int solution) //uses isNegative
These are slightly more tricky: int signSolution(int solution) can just take another parameter, becoming static int signSolution(int solution, boolean isNegative)
void checkSign(String str) would need to be included as part of the myAtoI method or be rewritten a bit
If we treat + as whitespace, then it would only need to worry about handling -
static boolean checkSign(String str, int i)
{
return str.charAt(i) == '-';
}
And calls to checkSign(str); would instead be if (checkSign(str, i)) i++;
Be consistent. You're mixing for and while loops of similar structure
while (i < str.length() && str.charAt(i) == ' ')
i++;
[...]
for (; i < str.length(); i++) {
[...]
}
They can either both be for loops, or both be while loops, but mixing is unexpected
for(; i < str.length() && str.charAt(i) == ' '; i++);
[...]
for(; i < str.length(); i++)
{
[...]
}
vs
while(i < str.length() && str.charAt(i) == ' ')
i++;
[...]
while(i < str.length())
{
[...]
i++;
}
Checking "whitespace" currently only checks spaces
Instead of using str.charAt(i) == ' ', you can offload complicated functionality to another method (and be easier to change later)
static boolean isWhiteSpace(char c)
{
return c == ' ';
}
Then if you'd like to include all whitespace (and + and mentioned above)
static boolean isWhiteSpace(char c)
{
return Character.isWhitespace(c) || (c == '+');
}
Instances of if(bool) return true; else return false; are the same as just return bool;
if (curr/10 == prev) {
return false;
}
return true;
Can be simplified to just
return curr / 10 != prev;
OptionalA:
Another way to check for overflow is to store the output as a long, and check if it overflows
public static int myAtoI(String str)
{
long solution = 0;
[...]
}
private static boolean isOverflow(long num)
{
    return num != (int) num;
}
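Putting these pieces together (static-style helpers folded in, plus an explicit range check in the spirit of the long-based check above), the whole parse can be sketched in Python for reference. Python ints don't overflow, so the INT_MIN/INT_MAX clamp has to be spelled out; all names here are illustrative, not part of the original solution:

```python
INT_MAX, INT_MIN = 2**31 - 1, -2**31

def my_atoi(s):
    i, n = 0, len(s)
    while i < n and s[i] == ' ':          # discard leading whitespace
        i += 1
    sign = 1
    if i < n and s[i] in '+-':            # optional sign
        sign = -1 if s[i] == '-' else 1
        i += 1
    result = 0
    while i < n and s[i].isdigit():
        result = result * 10 + (ord(s[i]) - ord('0'))
        if sign * result > INT_MAX:       # clamp on overflow
            return INT_MAX
        if sign * result < INT_MIN:
            return INT_MIN
        i += 1
    return sign * result

print(my_atoi("   -42"), my_atoi("4193 with words"), my_atoi("2147483648"))
# -42 4193 2147483647
```

Checking sign * result after every accumulated digit is the same idea as the curr/10 != prev test: detect the moment the running value leaves the representable range.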
OptionalB:
Instead of adding the values and then flipping the result, you could subtract values if it's a negative number | {
"domain": "codereview.stackexchange",
"id": 28608,
"tags": "java, programming-challenge"
} |
Detecting a certain amount of violet in an image | Question: I need to be able to detect if an image has the colour Violet in it. (Follow-up)
The problem that I'm facing is that there are quite a number of shades of violet that it seems almost impossible it add every single one.
private void ProcessImage(PictureBox picture)
{
using (System.IO.MemoryStream ms = new System.IO.MemoryStream())
{
List<Color> ColoursToDetect = new List<Color>()
{
Color.FromArgb(202,156,254),
Color.FromArgb(143,125,151),
Color.FromArgb(100,76,136),
Color.FromArgb(232,175,254)
};
Boolean colour_Found = false;
Bitmap SelectedImage = new Bitmap(picture.Image);
Color selected_Pixel;
for (int x = 0; x != SelectedImage.Width; x++)
{
for (int y = 0; y != SelectedImage.Height; y++)
{
selected_Pixel = SelectedImage.GetPixel(x, y);
foreach (Color c in ColoursToDetect)
{
if (c == selected_Pixel)
{
colour_Found = true;
MessageBox.Show("Found");
colour_Found = true;
}
}
if (colour_Found)
{
break;
}
}
}
}
}
The above code is working (so to speak). However, I'm worried that one day an image will have a certain violet that I have not added, which in hindsight would make the code useless. I did look at the .NET library AForge http://www.aforgenet.com/projects/iplab/; however, I couldn't find any information about detecting a range of a certain colour.
Answer: Try using HSV/HSL to represent color instead of RGB.
You can then define violet as some hue range, e.g. 250° to 310° and similarly for saturation and lightness/value.
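To sketch the idea in Python (colorsys stands in for whatever RGB-to-HSV conversion your platform offers; the 250°-310° hue window and the 0.2 saturation/value floors are assumptions you would tune to your images, not canonical values):

```python
import colorsys

def is_violet(r, g, b, hue_range=(250, 310)):
    # colorsys expects components in [0, 1]; hue comes back as a fraction of a full turn
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    hue_deg = h * 360
    # require some saturation and brightness so near-greys don't count as violet
    return hue_range[0] <= hue_deg <= hue_range[1] and s > 0.2 and v > 0.2

print(is_violet(202, 156, 254))  # True: one of the sample violets above
print(is_violet(255, 0, 0))      # False: pure red
```

Note that one of the question's duller samples, (143, 125, 151), has saturation of roughly 0.17 and falls below the 0.2 floor here, so the thresholds really do need tuning per use case.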
Always use a using block when you have temporary disposable resources like a Bitmap
Variables in C# are camelCase:
Boolean colourFound = false;
Bitmap selectedImage = new Bitmap(picture.Image);
Color selectedPixel;
Try to limit the scope of variables, move the declaration of selectedPixel to where you currently assign it. | {
"domain": "codereview.stackexchange",
"id": 30902,
"tags": "c#, image"
} |
How to calculate the period of non-circular orbits? | Question: How to calculate the period of non-circular orbits?
By conservation of mechanical energy:
$$
E = -\frac{GMm}{r} + \frac{1}{2}\mu \left ( \dot{r}^2 + r^2 \dot{\theta}^2 \right )
$$
By the conservation of angular momentum:
$$
L = \mu r^2 \dot{\theta} \; \; \Rightarrow \; \; \dot{\theta} = \frac{L}{\mu r^2}
$$
Hence:
$$
E = -\frac{GMm}{r} + \frac{L^2}{2\mu r^2} + \frac{\mu}{2}\left( \frac{dr}{dt} \right)^2
$$
Rearranging a little bit:
$$
\int_{r_{max}}^{r_{min}} \frac{\pm \mu r}{\sqrt{2\mu E r^2 + 2GMm\mu r - L^2}}\; dr = \int_0^{T/2}dt
$$
$$
T = \int_{r_{max}}^{r_{min}} \frac{\pm 2 \mu r}{\sqrt{2\mu E r^2 + 2GMm\mu r - L^2}}\; dr
$$
The limits of integration of the radial integral, go from $r_{max}$ to $r_{min}$. This would take $T/2$ seconds. That's why the limits of integration for $t$, range from $0$ to $T/2$.
My question is: I have chosen the limits of $r$ arbitrarily. But why can't I choose $r_{min}$ first and then $r_{max}$?
When I tried to plug this integral into a software program, it failed. I saw that it was because of my choice of $r_{min}$: it would not lie in the domain of:
$$
\frac{\mu r}{\sqrt{2\mu E r^2 + 2GMm\mu r - L^2}}
$$
So, the integral would be undefined.
Why does this happen? It should work, no?
Am I missing something? Is this integral even right?
Answer: I won't go through all the algebra of the mentioned equations; I will just provide some basic information.
First of all, $r_{min}$ and $r_{max}$ should be determined. As $r$ is a radial variable, it can only take positive values. However, as the motion of the particle is bound, the total energy $E$ takes a negative value.
The turning points are defined by a change of sign of the derivative $\frac{dr}{dt}$, i.e. at the turning points $\frac{dr}{dt}=0$. This yields a quadratic equation whose solutions provide us with $r_{min}$ and $r_{max}$.
The quadratic equation has the following form:
$$E = -\frac{A}{r} +\frac{B}{r^2}$$
where we abbreviated $A= GMm$ and $B=0.5L^2/\mu$. Furthermore we will use $F=-E=|E|$ with $F>0$. Then we get:
$$ r^2 -\frac{A}{F}r + \frac{B}{F} =0 $$
whose solutions are (we assume here that $(GMm)^2 > 2L^2F/\mu \equiv 2L^2|E|/\mu$):
$$r_{min} = \frac{A}{2F} -\sqrt{(\frac{A}{2F})^2 - \frac{B}{F}}>0$$
and
$$r_{max} = \frac{A}{2F} +\sqrt{(\frac{A}{2F})^2 - \frac{B}{F}}>r_{min}>0$$
Both values should be inserted as integration borders. And replace in the integral $+E$ by $-F$ remembering that $F>0$. With this hint you should be able to carry out the integration.
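As a numerical sanity check of the whole procedure, one can pick a concrete bound orbit, evaluate the period integral between these turning points, and compare against Kepler's third law. The sketch below uses illustrative units $GM = m = \mu = 1$ (test-mass limit) and eccentricity $e = 0.5$, and substitutes $r = c + d\sin\theta$ with $c = (r_{max}+r_{min})/2$ and $d = (r_{max}-r_{min})/2$, which cancels the integrable $1/\sqrt{\cdot}$ singularities at the turning points:

```python
import math

# illustrative units: GM = 1, orbiting mass m = reduced mass mu = 1 (test-mass limit)
GM, m, mu, a, e = 1.0, 1.0, 1.0, 1.0, 0.5
E = -GM * m / (2 * a)                  # energy of a bound Kepler orbit
L2 = GM * m * mu * a * (1 - e**2)      # squared angular momentum (semi-latus rectum relation)

r_min, r_max = a * (1 - e), a * (1 + e)
c, d = (r_max + r_min) / 2, (r_max - r_min) / 2

def integrand(r):
    # the radial integrand from the question: 2*mu*r / sqrt(2*mu*E*r^2 + 2*GM*m*mu*r - L^2)
    return 2 * mu * r / math.sqrt(2*mu*E*r*r + 2*GM*m*mu*r - L2)

# midpoint rule in theta; the transformed integrand is smooth, so this converges fast
N, h, T = 20000, math.pi / 20000, 0.0
for k in range(N):
    theta = -math.pi / 2 + (k + 0.5) * h
    r = c + d * math.sin(theta)
    T += integrand(r) * d * math.cos(theta) * h

kepler = 2 * math.pi * math.sqrt(a**3 / GM)   # Kepler's third law in the test-mass limit
print(round(T, 4), round(kepler, 4))          # 6.2832 6.2832
```

Both numbers agree with $T = 2\pi\sqrt{a^3/GM}$, confirming that the integral taken from $r_{min}$ to $r_{max}$ (times two) gives the full period.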
By the way: the integration limits should be set as $\int_{r_{min}}^{r_{max}}$. The time span on the rhs is positive, and so is the integral on the lhs; of course, the result is the same regardless of whether the particle moves from $r_{min}$ to $r_{max}$ or from $r_{max}$ to $r_{min}$. | {
"domain": "physics.stackexchange",
"id": 96896,
"tags": "newtonian-mechanics, classical-mechanics, gravity, newtonian-gravity"
} |
What happens when you dump antimatter onto a naked singularity? | Question: Inspired by this question, what would happen if you dumped antimatter onto a naked singularity? The answers for the previous question suggest that once the antimatter crossed the event horizon, no annihilation reaction would occur and its mass-energy would be added to that of the black hole.
However, it's possible for rotating black holes to lack an event horizon, creating a naked singularity. What would happen if such a naked singularity were to be exposed to a considerable amount of anti-matter? Would the anti-matter be assimilated into the singlularity, pass by it unaffected, or would there be some sort of reaction?
Answer: The reason why naked curvature singularities are a no good very bad just the worst object for physicists is the fact that there is no theory of their dynamical behavior, and no real predictions can be given for their appearances or interactions with their environments. If you do take a curvature singularity and predict its dynamical behaviour such as in astrophysical accretion, you are either implicitly or explicitly providing additional postulates independent of Einstein equations and other known physics in general.
I have been poking around the field of exact solutions to Einstein equations enough to tell you with relative certainty that there is really no unique, covariant prescription for the behavior of all known naked singularities. The reason for that is that there are simply too many types of them - spacelike, timelike, null, directional, vacuum, nonvacuum, with various degrees of divergence and dimensionality. So the postulates are typically made on an ad hoc, case-by-case basis.
So, addressing your question - you can really postulate anything to happen. You can postulate that the naked singularity will be annihilated in a burst of photons once colliding with the antimatter. However, you can also postulate the collision will produce a herd of beautiful unicorns that go and radiate love particles throughout the Universe. | {
"domain": "physics.stackexchange",
"id": 58000,
"tags": "black-holes, event-horizon, antimatter, singularities"
} |
Returns two elements from a list whose sum is a target variable | Question: def search(a,b):
for d in b:
if a==d:
m=True
break
else:
m=False
return m
x=[1,4,5,7,9,6,2]
target=int(raw_input("Enter the number:"))
for i in x:
if i<target:
pair=int(target)-int(i)
in2=search(pair,x)
if in2==True:
print "the first number= %d the second number %d"%(i,pair)
break
Answer: Your search function is terribly unclear. search is a vague name, and a, b, d and m don't communicate anything. Looking at the usage I can see you're testing if a number is in the list, but Python can already do that for you much more readably.
if number in list
The in keyword tests if a value exists in a list, returning either True or False. So you can forget about search and just directly test for pair's existence in the main loop:
x = [1,4,5,7,9,6,2]
target = int(raw_input("Enter the number:"))
for i in x:
if i < target:
pair = int(target) - int(i)
if pair in x:
print "the first number= %d the second number %d"%(i,pair)
break
Back to the naming, x is not clear, at least using numbers indicates what it contains. x implies a single value which makes using for i in x extra confusing. pair also sounds like it's multiple items, when really it's the difference you want.
You also don't need to call int on i when the list is populated with integers already.
Lastly, you're using the old form of string formatting. You should instead use str.format, like so:
print("the first number= {} the second number {}".format(i,pair))
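For instance, with illustrative stand-ins for i and pair:

```python
# hypothetical values standing in for i and pair from the loop above
i, pair = 3, 4
old_style = "the first number= %d the second number %d" % (i, pair)
new_style = "the first number= {} the second number {}".format(i, pair)
print(old_style == new_style)  # True: identical text either way
# but format() does not care what types it is given:
print("the first number= {} the second number {}".format("three", 4.0))
```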
It works very similarly, except that it's type agnostic. It will just do its best to get a string representation of the two values and insert them. There are lots of other uses for this approach, so it's good to get used to it. | {
"domain": "codereview.stackexchange",
"id": 17072,
"tags": "python, performance, array"
} |
center of mass of rotating rod system | Question: I have a system with a rod connected to a motor, and I need to determine the center of mass to calculate the moment caused by gravity.
To illustrate:
I am uncertain about how to do this. Is it a poor assumption to consider only the length along the rod, such that the distance from the rod end (where the motor is attached) to the center of mass is:
$L_{cm} = \frac{L_{1,y} \cdot m_1 + L_{2,y} \cdot m_2 + L_{3,y} \cdot m_3}{m_1+m_2+m_3}$
Where $L_{i,y}$ is the distance from the end to the $i$'th mass along the rod (the $y$ direction).
Can I do this, or do I need to use the direct (straight-line) distances? If so, how can I determine the gravitational moment when the center of mass is not located on the rod itself (using direct distances, the center of mass might lie beside the rod)?
Answer: The moment experienced by the drives will be the sum of the three moments. Yes, you have to calculate to the centre of each mass, not the distance along the rod.
Let $\alpha$ be the angle of each radius from the horizontal. The total moment can then be calculated by,
$$ M = g\left(L_1 m_1 \cos\alpha_1 + L_2 m_2 \cos\alpha_2 + L_3 m_3 \cos\alpha_3\right)$$
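As a sanity check, that sum is easy to evaluate numerically; a small sketch (the masses, arm lengths, and angles below are made-up illustration values, not from the question):

```python
import math

def gravity_moment(masses, lengths, angles_deg, g=9.81):
    """Total gravitational moment: sum of m_i * g * L_i * cos(alpha_i)."""
    return sum(m * g * L * math.cos(math.radians(a))
               for m, L, a in zip(masses, lengths, angles_deg))

# Example: three point masses along the rod (illustration values)
M = gravity_moment(masses=[0.5, 1.0, 0.2],
                   lengths=[0.1, 0.3, 0.5],
                   angles_deg=[30, 30, 30])
```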
"domain": "engineering.stackexchange",
"id": 4950,
"tags": "forces, centre-of-gravity"
} |
Using user defined function in groupby | Question: I am trying to use the groupby functionality in order to do the following given this example dataframe:
dates = ['2020-03-01','2020-03-01','2020-03-01','2020-03-01','2020-03-01',
'2020-03-10','2020-03-10','2020-03-10','2020-03-10','2020-03-10']
values = [1,2,3,4,5,10,20,30,40,50]
d = {'date': dates, 'values': values}
df = pd.DataFrame(data=d)
I want to take the largest n values grouped by date and take the sum of these values. This is how I understand I should do this: I should use groupby date, then define my own function that takes the grouped dataframes and spits out the value I need:
def myfunc(df):
a = df.nlargest(3, 'values')['values'].sum()
return a
data_agg = df.groupby('date').agg({'relevant_sentiment':myfunc})
However, I am getting various errors, like the fact that the value keep is not set, or that it's not clearly set when I do specify it in myfunc.
I would hope to get a dataframe with the two dates 03-01 and 03-10 with respectively the values 12 and 120.
Any help/insights/remarks will be appreciated.
Answer: You could do it simple and it should work like this:
def myfunc(df):
return df.nlargest(3, 'values')[['values']].sum()
and then:
data_agg = df.groupby('date', as_index=False).apply(myfunc)
You decide if "data_agg" is the proper name then.
Good luck! | {
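Another option is to select the column first and call nlargest directly on the resulting Series; a sketch using the question's own example data:

```python
import pandas as pd

dates = ['2020-03-01'] * 5 + ['2020-03-10'] * 5
values = [1, 2, 3, 4, 5, 10, 20, 30, 40, 50]
df = pd.DataFrame({'date': dates, 'values': values})

# Sum of the 3 largest values per date
data_agg = df.groupby('date')['values'].apply(lambda s: s.nlargest(3).sum())
# 2020-03-01 -> 12, 2020-03-10 -> 120
```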
"domain": "datascience.stackexchange",
"id": 9220,
"tags": "pandas, groupby"
} |
Expressing a wave on a string using Fourier series | Question: Where does time function of wave on string go when expressed in the Fourier series?
A standing wave on a string of length $L,$ fixed at its ends $x=0$ and $x=L$, is: $\quad y(x, t)=A \sin (k x) \cos \left(\omega t+\phi_{0}\right) \quad$
Where: $k=\frac{n \pi}{L}$
A periodic function $f(x)$ with period $P$ is represented by the Fourier series:
$f(x)=\frac{1}{2} a_{0}+\sum_{n=1}^{\infty} a_{n} \cos \left(n \frac{2 \pi}{p} x\right)+\sum_{n=1}^{\infty} b_{n} \sin \left(n \frac{2 \pi}{p} x\right)$
Where:
$$
a_{0}=\frac{2}{p} \int_{-P / 2}^{P / 2} f(x) d x \quad a_{n}=\frac{2}{p} \int_{-P / 2}^{P / 2} f(x) \cos \left(\frac{2 \pi}{p} n x\right) d x \quad b_{n}=\frac{2}{p} \int_{-P / 2}^{P / 2} f(x) \sin \left(\frac{2 \pi}{P} n x\right) d x
$$
For question where a guitar is played and the string is put into motion by plucking it. If we want to write $y(x)$ as a sum of the basis function, $y_{n}(x)$ we write:
$$
y(x, 0)=\sum_{n=1}^{\infty} a_{n} \sin \left(k_{n} x\right) \quad \rightarrow \quad y(x, t)=\sum_{n=1}^{\infty} a_{n} \sin \left(k_{n} x\right) \cos \left(\omega_{n} t\right)
$$
(since the wave function is usually odd, the $a_n$ coefficients are eliminated)
Now consider the case where the wave is neither an odd nor an even function, so that $a_{0}, a_{n}, b_{n}$ all take nonzero values.
And the periodic function is given by $f(x)=\frac{1}{2} a_{0}+\sum_{n=1}^{\infty} a_{n} \cos \left(n \frac{2 \pi}{p} x\right)+\sum_{n=1}^{\infty} b_{n} \sin \left(n \frac{2 \pi}{p} x\right)$.
Where do we add the $\cos \left(\omega_{n} t\right)$ part?
Does the equation look like this: $y(x, t)=\frac{1}{2} a_{0} \cos \left(\omega_{n} t\right)+\sum_{n=1}^{\infty} a_{n} \cos \left(n \frac{2 \pi}{p} x\right) \cos \left(\omega_{n} t\right)+\sum_{n=1}^{\infty} b_{n} \sin \left(n \frac{2 \pi}{p} x\right) \cos \left(\omega_{n} t\right)$
Answer: Let's start from the equations of motion for a guitar string (with damping). Let $A(x,t)$ be the amplitude of the wave at a point $x$ along the string at time $t$. Then
\begin{align}
\partial_t^2 A + b\partial_t A - \partial_x^2 A = S(x,t)\,,
\end{align}
where $b$ is the damping coefficient and $S$ is the source term (representing the pluck). Let's assume that the string is length $L$ and the string is fixed with $A(0,t) = A(L,t) = 0$. The "normal modes" of the string are the eigenfunctions of the operator
\begin{align}
D = \partial_t^2 + b\partial_t - \partial_x^2\,.
\end{align}
It is easy to see that the eigenfunctions that satisfy the boundary conditions are of the form
\begin{align}
f_n(\omega,x,t) = \sin\left(\frac{\pi n}{ L }x\right) e^{{\rm i}\omega t}\,.
\end{align}
Thus, we can decompose
\begin{align}
A(x,t) = \sum_{n = -\infty}^\infty \int_{-\infty}^\infty\frac{{\rm d}\omega}{2\pi} A_n(\omega) f_n(\omega,x,t)\,.
\end{align}
We can now solve for $A_n(\omega)$,
\begin{align}
A_n(\omega) = \frac{1}{\lambda_n(\omega)}\int_0^L{\rm d}x\int_{-\infty}^\infty{\rm d}t\, S(x,t)f_n^*(\omega,x,t)\,,
\end{align}
where $\lambda_n(\omega)$ are the eigenvalues
\begin{align}
D f_n(\omega, x,t) = \lambda_n(\omega)f_n(\omega,x,t)\,.
\end{align} | {
"domain": "physics.stackexchange",
"id": 71575,
"tags": "waves, fourier-transform, string"
} |
Does Dirac's theorem on Hamiltonian cycles only apply to undirected graphs? | Question: Does Dirac's theorem on Hamiltonian cycles only apply for undirected graphs? If the theorem applied to directed graphs, the graph with the following adjacency list should have a Hamiltonian cycle, but I can't find one.
$0$: $1$ $2$ $3$
$1$: $2$ $3$
$2$: $1$ $3$
$3$: $1$ $2$
Answer: While Dirac's theorem wasn't intended or proved for directed graphs, it was extended to directed graphs by Ghouila-Houri in 1960 ("Une condition suffisante d'existence d'un circuit hamiltonien").
The requirement, however, is that each vertex have both out-degree and in-degree at least $n/2$, which is not the case in your graph.
One of Ghouila-Houri's original conditions, as Saeed mentioned, is that the graph be strongly connected. In fact, the degree conditions are strong enough that only graphs of diameter $\leq 2$ need to be considered.
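A quick way to check the degree condition on a small digraph is a sketch like this (the adjacency dict mirrors the adjacency list from the question; the function name is my own):

```python
def meets_degree_condition(adj):
    """Ghouila-Houri degree condition: every vertex has
    out-degree >= n/2 and in-degree >= n/2."""
    n = len(adj)
    in_deg = {v: 0 for v in adj}
    for nbrs in adj.values():
        for u in nbrs:
            in_deg[u] += 1
    return all(len(adj[v]) >= n / 2 and in_deg[v] >= n / 2 for v in adj)

adj = {0: {1, 2, 3}, 1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
# Vertex 0 has in-degree 0 here, so the condition fails for this graph
```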
"domain": "cstheory.stackexchange",
"id": 2668,
"tags": "graph-theory"
} |
Print receipt from cash register in Python | Question: Problem
Output a receipt of purchased items. Inputs:
products_text_path: Products are found in a text file in <barcode>:<name>:<price> line format.
purchased_items_set: The purchased items are given as a list of integer barcodes. The same item may be found several times.
discount_rules_text_path: There are discount rules in a text file in <barcode>:<min_quantity>:<discount> format, which mean that buying at least min_quantity of the barcode item gives you a discount (given as a fraction of 1, e.g. 0.2 for 20%)
Goal
Readable code that handles a very large amount of input
Example
Products:
- 123:banana:0.5
- 789:apple:0.4
Discounts:
- 123:2:0.2 # Meaning buying 2 bananas will add a 20% discount
Purchases:
- [123,123] # Bought 2 bananas
Output:
- [('banana', 2, 0.8)]
Potential issues in the code below
I'm holding all the information in memory.
I might be using too many classes to represent the different pieces of information?
Code
Code is best understood when read starting from the start_process function
class Product:
def __init__(self, barcode: int, name: str, price: float):
self.bardcode = barcode
self.name = name
self.price = price
@classmethod
def parse_product(cls, product_line: str):
name, barcode_str, price_str = product_line.split(':')
price = float(price_str)
barcode = int(barcode_str)
return cls(barcode, name, price)
class Products:
def __init__(self):
self.products = {} # Type: Dict<int, Product>
def add_product(self, product: Product):
self.products[product.bardcode] = product
def get_product(self, product_barcode):
return self.products[product_barcode]
@classmethod
def parse_products(products_cls, products_path: str):
products = products_cls()
with open(products_path) as products_lines:
for product_line in products_lines:
product = Product.parse_product(product_line)
products.add_product(product)
return products
class ProductPurchase:
def __init__(self, barcode: int, quantity = 1, discount = 0.0):
self.barcode = barcode
self.quantity = quantity
self.discount = discount
class DiscountRule:
def __init__(self, product_barcode: int, min_quantity: int, discount_amount: float):
self.product_barcode = product_barcode
self.min_quantity = min_quantity
self.discount_amount = discount_amount
@classmethod
def parse(cls, discount_rule: str):
barcode_str, min_quantity_str, discount_amount_str = discount_rule.split(':')
return cls(int(barcode_str), int(min_quantity_str), float(discount_amount_str))
class DiscountRules:
def __init__(self):
self.dicounts = {} # Type: Dict<int, DiscountRule>
def add_discount(self, discount: DiscountRule):
self.dicounts[discount.product_barcode] = discount
def apply_discount(self, purchase: ProductPurchase):
if purchase.barcode not in self.dicounts:
return
discount = self.dicounts[purchase.barcode]
if purchase.quantity < discount.min_quantity:
return
purchase.discount = discount.discount_amount
@classmethod
def parse_discount_rules(cls, rules_path: str):
rules = cls()
with open(rules_path) as rules_lines:
for rule_line in rules_lines:
rule = DiscountRule.parse(rule_line)
rules.add_discount(rule)
return rules
class Purchases:
def __init__(self):
self.purchases = {} # Type: Dict<int, ProductPurchase>
def add_purchase(self, purchased_barcode: int):
if purchased_barcode in self.purchases:
self.purchases[purchased_barcode].quantity += 1
else:
self.purchases[purchased_barcode] = ProductPurchase(purchased_barcode)
def get_purchases(self):
for purchase in self.purchases.values():
yield purchase
class Checkout:
def __init__(self, purchases: Purchases, products: Products, discount_rules: DiscountRules):
self.purchases = purchases
self.products = products
self.discount_rules = discount_rules
def check_out(self):
result = []
for purchase in self.purchases.get_purchases():
self.discount_rules.apply_discount(purchase)
product = self.products.get_product(purchase.barcode)
price = _calculate_price(product.price, purchase.quantity, purchase.discount)
result.append((product.name, purchase.quantity, price))
return result
def _calculate_price(price, quantity, discount):
return str(round(price * quantity * (1.0 - discount), 2))
def start_process(products_paths: str, discounts_path: str, scanner_input):
purchases = Purchases()
for purchased_barcode in scanner_input:
purchases.add_purchase(purchased_barcode)
products = Products.parse_products(products_paths)
discount_rules = DiscountRules.parse_discount_rules(discounts_path)
checkout = Checkout(purchases, products, discount_rules)
return checkout.check_out()
Answer:
Use dataclasses so you don't have to write boilerplate for your data containers.
You don't really need Products, DiscountRules, and Purchases when you could just have dict[int, Product], dict[int, DiscountRule], and dict[int, ProductPurchase] respectively. The existence of these classes makes the code harder to read.
Checkout doesn't need to be a class. You can just declare the method check_out. If you find yourself writing a class that only has two methods, one of which is __init__, you probably don't need a class.
Don't store currency as floats. Use a currency's smallest unit so you can store it as an integer. The main idea is to avoid rounding errors that come up in floating point arithmetic. For example, if Product's price is the price in US currency, it should be an integer representing the price in cents, not dollars.
You can use collections.Counter to count the scanned barcodes, which gives you something like a dict[int, int] (barcode -> quantity). Then you can use this to create a collection of ProductPurchases later.
I see type hints being used, but not consistently. Type hints should be added for method return types as well.
This is personal preference, but for the methods that parse a string into a data container type like Product, instead of the names parse_product, parse, etc. I would name it from_string so the usage is more self-documenting, e.g. Product.from_string(s).
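The Counter suggestion can be sketched in isolation like this (the scanned barcodes are illustration values):

```python
from collections import Counter

scanned = [123, 123, 789, 123]   # barcodes from the scanner
quantities = Counter(scanned)    # maps barcode -> quantity
# quantities[123] == 3, quantities[789] == 1
```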
Refactored version
from __future__ import annotations
from collections import Counter
from dataclasses import dataclass
@dataclass
class Product:
barcode: int
name: str
price: int # in the currency's smallest unit, e.g. cents for US currency
@staticmethod
def from_string(product: str) -> Product:
barcode, name, price = product.split(":")
return Product(int(barcode), name, int(price))
@dataclass
class DiscountRule:
barcode: int
min_quantity: int
discount_amount: int # percentage, e.g. 20 => 20% discount
@staticmethod
def from_string(discount_rule: str) -> DiscountRule:
barcode, min_quantity, discount_amount = discount_rule.split(":")
return DiscountRule(
int(barcode), int(min_quantity), int(discount_amount)
)
@dataclass
class ProductPurchase:
product: Product
quantity: int
discount: int # percentage, e.g. 20 => 20% discount
@property
def cost(self) -> float:
return self.product.price * self.quantity * (100 - self.discount) / 100
def __str__(self) -> str:
discount_text = (
f" (with {self.discount}% discount)" if self.discount > 0 else ""
)
return f"{self.quantity}x {self.product.name}{discount_text} {self.cost / 100:.2f}"
def calculate_product_purchase(
products: dict[int, Product],
discounts: dict[int, DiscountRule],
barcode: int,
quantity: int,
) -> ProductPurchase:
product = products[barcode]
discount_pct = (
discount.discount_amount
if (discount := discounts.get(barcode, None))
and quantity >= discount.min_quantity
else 0
)
return ProductPurchase(product, quantity, discount_pct)
def print_receipt(
products_path: str, discounts_path: str, purchases: list[int]
) -> None:
with open(products_path) as p_file, open(discounts_path) as d_file:
products = {
p.barcode: p for line in p_file if (p := Product.from_string(line))
}
discounts = {
d.barcode: d
for line in d_file
if (d := DiscountRule.from_string(line))
}
for barcode, quantity in Counter(purchases).items():
product_purchase = calculate_product_purchase(
products, discounts, barcode, quantity
)
print(product_purchase)
if __name__ == "__main__":
print_receipt(
"products.txt", "discounts.txt", [123, 123, 123, 123, 789, 789]
)
products.txt
123:banana:50
789:apple:40
discounts.txt
123:2:20 | {
"domain": "codereview.stackexchange",
"id": 40998,
"tags": "python, performance, python-3.x"
} |
Fully linear time regular expression matching | Question: Is there an $O(n+m)$ algorithm to check whether a size $n$ regular expression matches a size $m$ string, assuming a fixed size alphabet if that matters?
The standard NFA algorithm is $O(nm)$ worst case. Groz et al. achieve linear time for a variety of regular expression classes, but not all. Are there any better results?
Groz, B., Maneth, S., & Staworko, S. (2012, May). Deterministic regular expressions in linear time.
Answer: Groz et al. explicitly state that the best known algorithm for general regular expressions (as of 2012) is $O(nm(\log\log n)/(\log n)^{3/2}+n+m)$, due to Bille and Thorup 2009, doi:10.1007/978-3-642-02927-1_16 (preprint).
For a fixed size alphabet, Sebastian Maneth pointed out to me that $O(n+m)$ is possible for deterministic regular expressions by constructing the Glushkov DFA: each position in the regular expression is one state, and the transitions are determined by the bounded set of symbols that may appear before moving to a position.
However, without fixing the alphabet size, it still holds that "finding a time $O(m+n)$ algorithm remains an open problem" even in the deterministic case. | {
"domain": "cstheory.stackexchange",
"id": 3370,
"tags": "regular-expressions, parsing"
} |
AMCL Small particles don't allow Planning on NAV2 | Question:
Hi I was able to solve "partially" the related issue AMCL Fails to localize.
I've been working on improving localization for my custom Ackermann robot in an environment scaled 7-12 times the size of the standard turtlebot world. However, I've encountered a couple of issues.
1. Localization Anomalies: Despite the environment scale, the AMCL particles appear much smaller and less dispersed than when applied to the turtlebot3. I've adjusted several parameters in my nav2_params.yaml file, but the issue persists. How can I increase the size and spread of these particles?
Here's a video: AMCL Particles too Small illustrating the issue, and a screenshot of my RQT Graph for reference.
2. Trajectory Planning: Despite nodes and topics from the planner server and recover_server appearing correctly, the trajectory is not being calculated or drawn in RVIZ. The topics /goal_pose and /plan do not receive any data from the trajectory planner. Could the small AMCL particles be causing this issue?
I've verified that my RVIZ configurations (Costmaps, TF, footprint, maps) and Gazebo plugins (publishing odom TF, Lidar, camera) seem to be functioning correctly, as illustrated in the following image:
I appreciate any assistance with this issue.
Originally posted by Vini71 on ROS Answers with karma: 266 on 2023-05-18
Post score: 0
Answer:
I've been experiencing a rather vexing issue with the localization and planning for my Ackermann type vehicle using Nav2. Initially, the problem seemed to be related to the particle size; however, upon changing the image resolution, I managed to get the size of particles similar to the turtlebot tutorial. Now, I'm beginning to suspect that the planning issue might be connected to the costmap and Lidar, as suggested by the repeated warnings I've received:
[rviz2-4] [INFO] [1684557117.532457854] [rviz]: Message Filter dropping message: frame 'odom' at time 18.718 for reason 'the timestamp on the message is earlier than all the data in the transform cache'
[controller_server-8] [WARN] [1684557117.542791989] [local_costmap.local_costmap]: Sensor origin at (4.15, -0.01 2.18) is out of map bounds (34.12, 29.97, 2.95) to (-14.95, -14.95, 0.00). The costmap cannot raytrace for it
[controller_server-8] [WARN] [1684557296.342507914] [local_costmap.local_costmap]: Sensor origin at (4.19, 0.00 2.18) is out of map bounds (34.17, 29.98, 2.95) to (-14.95, -14.95, 0.00). The costmap cannot raytrace for it.
[controller_server-8] [WARN] [1684557296.542695295] [local_costmap.local_costmap]: Sensor origin at (4.19, 0.00 2.18) is out of map bounds (34.17, 29.98, 2.95) to (-14.95, -14.95, 0.00). The costmap cannot raytrace for it.
[controller_server-8] [WARN] [1684557296.742759887] [local_costmap.local_costmap]: Sensor origin at (4.19, 0.00 2.18) is out of map bounds (34.17, 29.98, 2.95) to (-14.95, -14.95, 0.00). The costmap cannot raytrace for it.
[controller_server-8] [WARN] [1684557296.942633179] [local_costmap.local_costmap]: Sensor origin at (4.19, 0.00 2.18) is out of map bounds (34.17, 29.98, 2.95) to (-14.95, -14.95, 0.00). The costmap cannot raytrace for it.
[controller_server-8] [WARN] [1684557297.142774081] [local_costmap.local_costmap]: Sensor origin at (4.19, 0.00 2.18) is out of map bounds (34.17, 29.98, 2.95) to (-14.95, -14.95, 0.00). The costmap cannot raytrace for it.
[controller_server-8] [WARN] [1684557297.342435994] [local_costmap.local_costmap]: Sensor origin at (4.19, 0.00 2.18) is out of map bounds (34.17, 29.98, 2.95) to (-14.95, -14.95, 0.00). The costmap cannot raytrace for it.
[controller_server-8] [WARN] [1684557297.542871713] [local_costmap.local_costmap]: Sensor origin at (4.19, 0.00 2.18) is out of map bounds (34.17, 29.98, 2.95) to (-14.95, -14.95, 0.00). The costmap cannot raytrace for it.
^C[WARNING] [launch]: user interrupted with ctrl-c (SIGINT)
I believe that there might be a misconfiguration issue with costmap parameters (size, height, width), as the Lidar is mapping correctly and localization appears functional. I am yet to find a solution for this.
Update:
I've made some progress and managed to resolve the costmap issue. The solution involved using a static transform publisher from base_link to Lidar. Interestingly, even though my joint state publisher is functioning correctly, this redundant step was required, which deviates from the turtlebot tutorial.
However, it's important to note that the placement of my Lidar is not directly attached to the base_link, but to other children from base_link. The costmap is highly sensitive to offset in the Z-axis. For instance, running "ros2 static publisher 0 0 0.3 0 0 0 0 base_link lidar_link" would raise a costmap error. Yet, the costmap doesn't seem to raise issues even when x and y offsets can reach up to 15 meters.
After running the node with the launch file, I can use the "Set Goal Pose" icon on Rviz and see the desired goal spot coordinates on Terminal. But the planner doesn't draw on Rviz or calculate anything, it just freezes.
As a provisional fix, I've been manually setting the pose and goal coordinates using the ros2 launch action. This method helps to draw the planner in Rviz, but the controller can't move my truck (presumably set for lighter robots). It only steers the wheels without forward movement.
To overcome this, I provide a starting linear velocity via teleop, and the truck then follows the drawn path. Unfortunately, both the localization (AMCL) and Costmap do not travel along with the vehicle, leading to the truck losing its location very quickly.
Adjusting an Ackermann Robot to use Nav2 has been a struggle, and despite following all the documentation and paid course steps, the performance remains unsatisfactory.
I hope this information can assist other researchers or students facing similar issues, and any further help or suggestions for improvement would be greatly appreciated.
Originally posted by Vini71 with karma: 266 on 2023-05-19
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 38392,
"tags": "ros, ekf-localization"
} |
How to figure out growth/culture conditions for a specific strain? | Question: Specifically, I'd like to find out how to grow ME-3 L.Fermentum (DSM 14241) at home without any fancy equipment.
What food does it need? Sugar? Lactose? Something else? At what temperature does it grow? How fast?
How can I generally find answers to these questions? ME-3 is not the only strain I'd like to grow.
I know ME-3 grows in milk, but that doesn't work for me. Can I grow it in vegan milk? In just water with sugar?
I'm even more interested in finding ways to find out the answer to these questions. Where would you look to answer this? I tried to google it but it's hard. Is there not some cool data base that has growth conditions for every known strain?
Answer: If you do not have any experience culturing microbes do not attempt this.
You are very, very likely to contaminate your culture with things that could make you very sick or even kill you. Pathogenic microorganisms are part of our natural microflora and are a significant cause of hospitalizations throughout the world as a result of food poisoning. For this to work you would need some skills in both identification of microorganisms (to make sure you have the right one) and in sterile technique. Growing specific microorganisms is NOTHING like making yogurt or beer with high inoculums of the culture organism that suppress the growth of contaminants, or even like starting a sourdough starter, where you have a relatively low inoculum but conditions that select for the right bacteria/yeasts.
This would mean that you need, at a minimum, a microscope (with 40-100x objectives; for identification purposes), slides, a differential bacterial stain (e.g. Gram stain), sterilization equipment (for preparing media; this could be a pressure cooker), media and an incubator with the right conditions.
This organism needs some carbon, nitrogen and a number of elements to grow happily. I don't think you will be able to do this at home easily; it would require quite a bit of experimentation and you are likely to grow something nasty as a contaminant before you get a decent culture of your target organism.
The American Type Culture Collection recommends Medium 416 for growth of Lactobacillus group organisms. This medium is comprised of lactobacillus MRS medium from BD and contains:
Proteose Peptone No. 3 10g/L, Beef extract 10g/L, Yeast extract 5g/L, Dextrose 20g/L, Polysorbate 80 1g/L, Ammonium Citrate 2g/L, Sodium Acetate 5g/L, Magnesium Sulfate 0.1g/L, Manganese Sulfate 0.05g/L, Dipotassium Phosphate 2g/L.
Now this might look like a scary list of things, and fairly unobtainable, but beef extract is essentially beef broth, and yeast extract is hydrolyzed yeast. The Polysorbate is an emulsifier used to help solubilize the ingredients and as a carbon source. I don't know if it is obtainable outside of a laboratory setting. The other chemicals may be hard to find too, though citrate = citric acid (fruit acids, you can buy this in a supermarket), acetate = vinegar/acetic acid. Manganese and Magnesium might be obtainable as health supplement pills, but you would need to work out how much is in a pill and do some calculations to convert. As a bonus, supplement pills often contain calcium phosphate as a bulker/tableting agent. For the peptone you might be able to substitute milk powder or "protein shake" powder. You would need to know a bit of chemistry to for the citrate -> citric acid, acetate -> acetic acid, and etc. conversions.
The last problem is that the ATCC recommends that the organism be grown in 5% CO2 at 37 degrees Celsius (98.6 F). This requires a sealed incubator box, with a means to get the CO2 in there. You might be able to achieve this by chemical means, reacting some chalk (calcium carbonate) with acid, but it would take some experimentation to do so effectively. It should grow at lower temperatures too, but it will grow slowly, if at all. I don't know if the CO2 is a requirement or just something that helps with the growth. This is not an obligate anaerobe as suggested in the comments, but can grow anaerobically too (facultative anaerobe). | {
"domain": "biology.stackexchange",
"id": 12196,
"tags": "microbiology, bacteriology"
} |
Taking Fourier Transform to momentum space | Question: I have a wave function
$$\psi(x) = \frac{1}{\sqrt{\sigma\sqrt{\pi}}} \exp \left (\frac{-x^2}{2 \sigma^2} \right ) \exp \left (\frac{ipx}{\hbar} \right )$$
And I have to convert this to $Q(p)$, in momentum space, by taking the Fourier transform. So I use this to perform the transform:
$$Q(p) = \frac{1}{\sqrt{2\pi\hbar}}\int_{-\infty}^{\infty} \psi(x) e^{-ipx/\hbar} dx$$
However, when I do, the imaginary parts in the exponential cancel, and I'm left with a constant which would not yield the original wave function upon taking the inverse Fourier transform. How do I proceed in the right way?
Answer: The variable "$p$" in the definition of the wavefunction is not the same as the variable $p$ in the definition of the Fourier Transform. It's best if you call the first $p$ some other constant, say, $p_0$. This also makes sense physically, since it actually represents the expectation value of the momentum of the Gaussian wavepacket. (You should be able to show that $\langle \hat{p} \rangle_\psi = p_0$.)
If you do this, the integral reduces to $$Q(p) \propto \int_{-\infty}^\infty \exp\left( -\frac{x^2}{2\sigma^2}\right) \exp\left( -\frac{i (p-p_0) x}{\hbar}\right) \text{d}x.$$
When you integrate this (by completing the square, etc.) you will get a function of $p$ which is the momentum space wavefunction. | {
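Concretely, completing the square with the standard Gaussian integral $\int_{-\infty}^\infty e^{-ax^2+bx}\,\mathrm{d}x = \sqrt{\pi/a}\,e^{b^2/4a}$ gives (a sketch of the algebra, restoring the normalization factors from the problem statement):

```latex
Q(p) = \frac{1}{\sqrt{2\pi\hbar}}\frac{1}{\sqrt{\sigma\sqrt{\pi}}}
       \int_{-\infty}^{\infty} \exp\left(-\frac{x^2}{2\sigma^2}\right)
       \exp\left(-\frac{i(p-p_0)x}{\hbar}\right)\mathrm{d}x
     = \sqrt{\frac{\sigma}{\hbar\sqrt{\pi}}}\,
       \exp\left(-\frac{\sigma^2(p-p_0)^2}{2\hbar^2}\right)
```

which is again a normalized Gaussian, now centered at $p_0$ in momentum space.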
"domain": "physics.stackexchange",
"id": 75899,
"tags": "quantum-mechanics, homework-and-exercises, momentum, wavefunction, fourier-transform"
} |
Predict output sequence one at a time with feedback | Question: I would like to solve the following classification problem: Given an input sequence and zero or more initial terms of the true output, predict the next term in the output sequence.
For example, my training set is a large corpus of question and answer pairs. Now I would like to predict the response to "How are you?" one word at a time.
Some reasonable responses might be "I'm fine.", "OK.", "Very well, thanks." etc. So the predicted distribution over possible first words would include the first words in those examples, among others.
But now if I see the first word is "Very", I would like my prediction for the second word to reflect that "OK" is now less likely, because "Very OK" is not a common response.
And I would like to repeat this process of predicting the next word given the input sentence and every word so far in the output sentence.
One approach might be to train a stateful RNN on examples like How are you<END> I am fine<END>. But if I understand correctly this would also learn to predict words in the "How are you" part, which I think might detract from the goal.
Maybe separate layers for the input and partial output that are then merged into a decoder layer?
Answer: After some more research, I believe this can be solved with a straightforward application of encoder-decoder networks.
In this tutorial, we can simply replace sampled_token_index and sampled_char with actual_token_index and actual_char, computed from the actual observed output rather than sampled from the model. And of course in our case it's actual_word.
To summarize we divide our training set into input/output pairs, where output examples begin with a <START> token and end with <STOP>, and train a sequence to sequence model on these pairs, as described in the tutorial.
Then at inference time we feed the <START> token to the model to predict the next word. Then after we receive the actual next word, we feed the actual (observed) output so far into the model, and so on.
The other answers had some interesting ideas, but unfortunately they didn't really address how to deal with variable length inputs and outputs. I believe seq2seq models based on RNNs are the best way to address this. | {
"domain": "datascience.stackexchange",
"id": 3792,
"tags": "classification, rnn, lstm, sequence-to-sequence"
} |
How to check rapidly if an element is present in a large set of data | Question: I am trying to harvest scientific publications data from different online sources like Core, PMC, arXiv etc. From these sources I keep the metadata of the articles (title, authors, abstract etc.) and the fulltext (only from the sources that provide it).
However, I don't want to harvest the same article's data from different sources. That is, I want to create a mechanism that tells whether an article I am about to harvest is already present in the dataset of articles I have harvested.
The first thing I tried was to check whether the article (which I want to harvest) has a DOI, and to search the collection of metadata that I already harvested for that DOI. If it is found there, then the article was already harvested. This approach, though, is very time-consuming, given that I would have to do a serial search through a collection of ~10 million articles' metadata (in XML format), and the time would increase much more for articles that don't have a DOI, for which I would have to compare other metadata (like title, authors and date of publication).
def core_pmc_sim(core_article):
if core_article.doi is not None: #if the core article has a doi
for xml_file in listdir('path_of_the_metadata_files'): #parse all PMC xml metadata files
for event, elem in ET.iterparse('path_of_the_metadata_files'+xml_file): #iterate through every tag in the xml
if (elem.tag == 'hasDOI'):
print(xml_file, elem.text, core_article.doi)
if elem.text == core_article.doi: # if PMC doi is equal to the core doi then the articles are the same
return True
elem.clear()
return False
What is the most rapid and memory-efficient way to achieve this?
(Would a bloom filter be a good approach for this problem?)
Answer: If you knew that every article had a DOI, you could just store a hashtable of the DOIs of the papers that you've already harvested. Of course, in practice, many papers don't have a DOI or don't list their DOI.
If you knew that the paper would be listed with the correct title and authors, you could use a hash of the titles and authors and store that in a hashtable. Of course, in practice it is common for titles to be mis-spelled, or for there to be variations in how they are listed (e.g., changes in capitalization; do you list the full first name of each author or just their first initial; and so on and so on).
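As a hedged sketch of that hashtable idea (plain Python, with made-up normalization rules and an assumed article-dict shape): canonicalize the title and authors, hash the result, and keep the hashes or DOIs in an in-memory set:

```python
import hashlib

def canonical_key(title, authors):
    """Canonical dedup key: lowercase the title and strip anything
    non-alphanumeric, keep only lowercased author last names.
    (These normalization rules are illustrative, not exhaustive.)"""
    t = "".join(ch for ch in title.lower() if ch.isalnum())
    a = ",".join(sorted(name.split()[-1].lower() for name in authors))
    return hashlib.sha1((t + "|" + a).encode("utf-8")).hexdigest()

seen = set()  # DOIs and canonical keys of everything harvested so far

def already_harvested(article):
    """article is assumed to be a dict with 'doi', 'title', 'authors' keys."""
    key = article.get("doi") or canonical_key(article["title"], article["authors"])
    if key in seen:
        return True
    seen.add(key)
    return False
```

Membership tests in a set are O(1) on average, versus the linear scan over ~10 million XML files in the question; a Bloom filter would save memory at the cost of occasional false positives (articles wrongly treated as duplicates).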
If you knew the list of all variations, you could try to canonicalize the title and authors (e.g., lowercase the title, use only the last names of the authors, and so on). Of course, in practice we probably can't know all the variations.
If you figured that all variations would be at small edit distance, you could use fuzzy matching and some kind of database that allows approximate match lookups. (e.g., Efficient map data structure supporting approximate lookup, How fast can we identifiy almost-duplicates in a list of strings?, How to compare/cluster millions of strings?) However, I suspect that in practice edit distance might not be enough to find all matches. Also, there is a risk that you might end up conflating two different papers (for instance, I have seen a series of papers with titles like "Catching the Bugblatter, Part I" and "Catching the Bugblatter, Part II" with the same authors).
Hopefully all of this is conveying that in practice I suspect the problem is messy and there is no single clean answer. Perhaps a pragmatic solution is to use some of the above techniques to find obvious matches, with the idea that this will take care of most cases of matches. For the remainder, just accept that you might store the article multiple times, but hopefully this won't be too common. Ultimately, storage space is extremely cheap, so storing the same article a few times probably isn't so bad. | {
"domain": "cs.stackexchange",
"id": 14981,
"tags": "python, big-data"
} |
Geometric Langlands as a partially defined topological field theory | Question: I have heard from several physicists that the Kapustin-Witten topological twist of $N=4$ 4-dimensional Yang-Mills theory ("the Geometric Langlands twist") is not expected to give
rise to fully defined topological field theory in the sense that, for example, its partition function on a 4-manifold (without boundary) is not expected to exist (but, for example, its
category of boundary conditions attached to a Riemann surface, does indeed exist). Is this really true? If yes, what is the physical argument for that (can you somehow see it from the path integral)? What makes it different from
the Vafa-Witten twist, which leads to Donaldson theory and its partition function is, as far as I understand, well defined on most 4-manifolds?
Answer: From the path integral point of view, one can argue why the KW theory partition function won't be well defined as follows.
At the B-model point the KW theory dimensionally reduces to the B model for the derived stack $Loc_G(\Sigma')$ of $G$-local systems on $\Sigma'$. The B-model for any target $X$ is expected to be given by the volume of a natural volume form on the derived mapping space from the de Rham stack of the source curve $\Sigma$ to $X$.
Putting this together, we see that the KW partition function on a complex surface $S$ is supposed to be the "volume" of the derived stack $Loc_G(S)$ (with respect to a volume form which comes from integrating out the massive modes).
Now we see the problem: the derived stack $Loc_G(S)$ has tangent complex at a $G$-local system $P$ given by de Rham cohomology of $S$ with coefficients in the adjoint local system of Lie algebras, with a shift of one. This is in cohomological degrees $-1,0,1,2,3$.
In other words: fields of the theory include things like $H^3(S, \mathfrak{g}_P)$ in cohomological degree $2$. Because it's in cohomological degree $2$, we can think of it as being an even field -- and then it's some non-compact direction, so that we wouldn't expect any kind of integral to converge.
(By the way, I discuss this interpretation of the KW theory in my paper http://www.math.northwestern.edu/~costello/sullivan.pdf) | {
"domain": "physics.stackexchange",
"id": 3333,
"tags": "quantum-field-theory, research-level, topological-field-theory, topological-order"
} |
Is it possible to create a pair of polarized, polarization-entangled photons? | Question: Is there a light source which emits (mostly) polarization-entangled pairs of photons that have a known polarization angle, e.g. a certain angle in relation to the orientation of the source?
Applying filters to pairs of photons with unknown polarization won't do it, because it would break entanglement.
Answer: I don't know about sources that emit polarization-entangled photon pairs, but polarization-entangled pairs can be obtained from a polarized source in many ways.
One example is Spontaneous Parametric Down Conversion (SPDC). Quoting from Wikipedia:
In a commonly used SPDC apparatus design, a strong laser beam, termed the "pump" beam, is directed at a BBO (beta-barium borate) crystal. Most of the photons continue straight through the crystal. However, occasionally, some of the photons undergo spontaneous down-conversion with Type II polarization correlation, and the resultant correlated photon pairs have trajectories that are constrained to be within two cones, whose axes are symmetrically arranged relative to the pump beam. Also, due to the conservation of energy, the two photons are always symmetrically located within the cones, relative to the pump beam. Importantly, the trajectories of the photon pairs may exist simultaneously in the two lines where the cones intersect. This results in entanglement of the photon pairs whose polarization are perpendicular.
If $\mid V \rangle$ denotes a vertically polarized photon and $\mid H \rangle$ a horizontally polarized photon, then at the intersection of the two cones it will be possible to find photons in the state
$$\mid \psi \rangle = \frac 1 {\sqrt 2} (\mid H \rangle \mid V \rangle + \mid V \rangle \mid H \rangle)$$
(See here for more details)
Notice anyway that, as Mark Mitchison pointed out, in an entangled pair neither photon has a definite polarization. To understand why, consider the state
$$\mid \phi \rangle = \frac 1 {\sqrt 2} (\mid H \rangle \mid V \rangle + \mid V \rangle \mid V \rangle)$$
This state is not entangled, as it can be written as
$$\mid \phi \rangle = \frac 1 {\sqrt 2} (\mid H \rangle+ \mid V \rangle ) \mid V \rangle$$
So we know that in this state one photon has vertical polarization, while the other is polarized at $45°$. You can easily see that the same trick cannot be applied to the entangled $\mid \psi \rangle$ state: an entangled state is in fact, by definition, non separable.
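For readers who like to check this numerically, separability of a two-qubit pure state can be tested via its Schmidt rank: reshape the amplitude vector into a $2\times 2$ matrix and count its nonzero singular values (a small NumPy sketch, with basis order $|HH\rangle, |HV\rangle, |VH\rangle, |VV\rangle$ assumed):

```python
import numpy as np

def schmidt_rank(state):
    """Schmidt rank of a two-qubit pure state: reshape the 4 amplitudes
    into a 2x2 matrix; the number of nonzero singular values is 1
    exactly when the state is a product (separable) state."""
    m = np.asarray(state, dtype=complex).reshape(2, 2)
    s = np.linalg.svd(m, compute_uv=False)
    return int(np.sum(s > 1e-12))

# Basis order assumed: |HH>, |HV>, |VH>, |VV>
psi = np.array([0, 1, 1, 0]) / np.sqrt(2)   # (|HV> + |VH>)/sqrt(2)
phi = np.array([0, 1, 0, 1]) / np.sqrt(2)   # (|HV> + |VV>)/sqrt(2)

print(schmidt_rank(psi))   # 2 -> entangled
print(schmidt_rank(phi))   # 1 -> separable, as shown above
```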
So even if you know the polarization state of the two photons before the entanglement, the creation of the entanglement itself will destroy any information about the individual polarization of the photons. | {
"domain": "physics.stackexchange",
"id": 31574,
"tags": "quantum-mechanics, photons, quantum-entanglement, polarization"
} |
Implementing QFT for Shor's Algorithm? | Question: I'm studying Shor's algorithm.
This diagram shows a calculation of $4^x\mod21$.
I don't understand how this expresses $4^x \mod21$.
Could you explain this? For example, by showing another calculation such as $11^x\mod15$.
And what does this result mean?
Answer: It seems that you are trying to make sense of compiled circuits from this paper. All Sections, Tables and Figures noted below are in reference to this paper. The short answers are in bold in case you are not looking for explanation.
WHY?
The circuit in your question is a "compiled" quantum circuit, which uses known information about the solution to a specified problem to create a simplified implementation of Shor's algorithm. The motivation for doing this is discussed in Section III(A).
Using the notation convention $f_{a,N}(x)=a^x \, (\text{mod} \, N)$, the authors create a truth table (Table V) that implements the modular exponentiation of $f_{4,21}(x)$:
$$\vert x \rangle \vert 0 \rangle \rightarrow \vert x \rangle \vert \; 4^x \, (\text{mod} \, 21) \rangle.$$
The input value, $x$, corresponds to the left side of both Table V and Table VI, which in turn corresponds to the value over the three input qubits ($q1_0$ through $q1_2$ in your circuit).
The value of the function $f_{4,21}(x)$ corresponds to the right side of Table V, which in turn corresponds to the five output qubits of the circuit in Figure 5.
The two output qubits in your circuit ($q1_3$ and $q1_4$) correspond to $\text{log}_4 (f_{4,21}(x))$, where $x$ is the input value over $q1_0$, $q1_1$, and $q1_2$.
These values are tabulated in the right side of Table VI. This is a second level of compilation, again made possible by a priori knowledge of the solution.
What Does this Result Mean?
In the context of this compiled circuit we are not interested in measuring the output qubits because the circuit was specifically constructed so that the output qubits will match Table VI. It's an easy exercise to modify your circuit and verify that they do.
Instead, the state of the three input qubits after the QFT is the interesting measurement. These values are interesting because despite the layers of synthetic compilation, they match the values predicted by theory.
This is explained in detail in Section V. The end result is Table XI, which tabulates the probability of measuring state $\vert k \rangle$ (over your $q1_0$, $q1_1$, and $q1_2$) in the columns relative to a postulated period of $f_{4,21}(x)$ in the rows. Note that the authors do not use binary in the column headers, so, e.g. $\vert 5 \rangle \equiv \vert 101 \rangle$.
From Table III we know a priori that the period of $f_{4,21}(x)$ is 3. So as expected
Your histogram corresponds to row 3 of Table XI.
If you increase your shots the numbers should align even more closely.
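As a quick sanity check (plain Python, independent of the paper), one can verify the period of $f_{4,21}$ and the $\log_4$ compression of the output register:

```python
import math

N, a = 21, 4
values = [pow(a, x, N) for x in range(8)]        # 3-qubit input register, x = 0..7
print(values)                                    # [1, 4, 16, 1, 4, 16, 1, 4]

# The period is the smallest r > 0 with a^r = 1 (mod N)
r = next(k for k in range(1, N) if pow(a, k, N) == 1)
assert r == 3

# Every function value is a power of 4, so two qubits holding
# log_4 of the value suffice for the output register:
logs = [round(math.log(v, 4)) for v in values]
print(logs)                                      # [0, 1, 2, 0, 1, 2, 0, 1]
```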
Can I Show a Compiled Circuit for $11^x \, (\text{mod} \, 15)$?
The cited paper has examples of composite circuits constructed for several other problems. These are much more detailed and reliable than I could hope to provide here, so I would refer you there instead. | {
"domain": "quantumcomputing.stackexchange",
"id": 1271,
"tags": "circuit-construction, shors-algorithm"
} |
The dependence of learning generalization bounds on the dimension of the instance space | Question: Here is a popular generalization bound:
If $X$ is the input space and
$Y=\{0, 1\}$ is the output/label space, and
there is a joint distribution $D$ defined on this space.
We sample $m$ points from this joint distribution as observation/training data:
$S = ((x_1, y_1)\ldots(x_m,y_m)) \in (X\times Y)^m$.
Suppose $H$ is a class we (user) are choosing from, and $h \in H$.
Define the risk (true expected error) as
$$L_D(h) := \Pr_{(x,y)\sim D} [h(x)\neq y]$$
the empirical risk as
$$L_S(h) := \frac{1}{m}\sum_{i=1}^m l(h,z_i)$$
where $z_i = (x_i, y_i)$.
Then, with probability of at least $1 − \delta$ we have
$$\forall h\in H, L_D(h) \leq L_S(h) +
c\sqrt{\frac{VCdim(H)+ \log(2/\delta)}{2m}} $$
Question:
Any intuition why the dimension of instance space $X$
and its size $|X|$ do not come into play in this inequality?
Intuitively the bigger the dimension is,
the harder the estimation problem should be.
(E.g. $X = \mathbb{R}^d$,
$|\mathbb{R}|=\infty$ should be harder than $|\{x\}|=1$.)
Answer: I think the VC-dimension term pretty much takes care of it. You can think of three cases:
The hypothesis space $H$ is far more "complex" than the input space $X$
$H$ and $X$ are matched up well to each other
$H$ is much "simpler" than $X$
We are usually only motivated by the second and third cases, so the VC-dimension term in your bound captures what we care about.
For example in $X=\mathbb{R}^d$, you probably want to choose a hypothesis class that depends on $d$, for instance halfspaces in $\mathbb{R}^d$, and so $d$ shows up in the VC-dimension term (in the case of halfspaces, this is $d+1$).
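To see how the dimension enters only through the VC-dimension term, one can plug numbers into the bound directly (a sketch that takes the unspecified constant $c$ to be $1$):

```python
import math

def vc_bound_gap(vcdim, m, delta, c=1.0):
    """Complexity term of the bound, c * sqrt((VCdim(H) + log(2/delta)) / (2m)),
    taking the unspecified constant c to be 1 by default."""
    return c * math.sqrt((vcdim + math.log(2.0 / delta)) / (2.0 * m))

# Halfspaces in R^d have VC-dimension d + 1, so d enters the bound
# only through that term: the gap grows (slowly) with d ...
print(vc_bound_gap(vcdim=3 + 1, m=1000, delta=0.05))
print(vc_bound_gap(vcdim=100 + 1, m=1000, delta=0.05))
# ... and shrinks like 1/sqrt(m) as the sample size grows.
print(vc_bound_gap(vcdim=100 + 1, m=100000, delta=0.05))
```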
There are also some examples of classes with finite VC-dimension that are still interesting. Here even if $X$ is "large" or complex, we can get a small bound. | {
"domain": "cstheory.stackexchange",
"id": 3622,
"tags": "machine-learning, lg.learning"
} |
Filter model of passive attenuation of headphones | Question: Is it possible to model the passive attenuation of headphones as a Matlab/Python filter?
For example, I would like to find a digital filter that simulates the passive attenuation of headphones. In particular, I'm interested in the Sennheiser HDA 300.
They give a table with data of frequencies and attenuation in dB.
And they say that the measurement was taken according to the ISO 4869-1:1994 norm.
But I'm unable to find the norm.
I have little experience with acoustics, so I have a few questions.
How should these values in dB be understood? I understand it is attenuation of sound, but can I obtain similar characteristics in Matlab/Python as the characteristics of a filter?
Is it possible to recalculate them into a linear scale for filter development?
But what reference value should be adopted for this recalculation?
Many thanks in advance.
Answer:
And they say that the measurement was taken according to the ISO 4869-1:1994 norm. But I'm unable to find the norm.
The measurement references IEC 60318, which is available at their webstore.
How should these values in dB be understood?
The first set of numbers is the sensitivity in Pa/V (Pascal per Volt). That's the ratio of the sound pressure measured at the coupler(s) to the input voltage of the headset. The second set of numbers is the Reference equivalent threshold sound pressure (relative to $20\mu Pa$). This one is more complicated to explain and you should probably stay away from that.
I understand it is attenuation of sound, but can I obtain similar characteristics in Matlab/Python as the characteristics of a filter?
It's not attenuation. It's the conversion factor between voltage and sound pressure. It can be modelled as a filter in either Matlab or Python, but without a solid mathematical understanding of the fundamentals of DSP and some hands-on experience with designing filters for audio, this will be a very steep learning curve.
Is it possible to recalculate them into a linear scale for filter development?
Absolutely. The linear gain of the transfer function, $G$ is simply
$$G = 10^{L/20}\frac{Pa}{V} $$
where $L$ is the level in dB.
But what reference value should be adopted for this recalculation?
1 Pa/mV.
Here is a rough outline of probably the easiest way to do it.
Generate FFT frequency grid
Convert dB to linear gains and populate the grid at the given frequencies
Interpolate for the missing frequencies
Extrapolate "intelligently" to DC and Nyquist.
Add a minimum phase.
Do an inverse FFT to get an impulse response
Visually inspect: if it's ringy or has echoes, adjust your extrapolation in step 4 and try again
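The outline above (minus the visual-inspection loop) can be sketched with NumPy. This is a hedged sketch under stated assumptions: the hold-the-edges extrapolation, the FFT size, and the cepstrum-based minimum phase are illustrative choices, not the only way to do it, and the example spec is made up rather than the HDA 300 data:

```python
import numpy as np

def magnitude_to_min_phase_fir(freqs_hz, gains_db, fs, n_fft=1024):
    """Steps 1-6 of the outline: put linear gains on an FFT grid,
    attach a minimum phase via the real cepstrum, and inverse-FFT to
    an impulse response. Extrapolation to DC/Nyquist simply holds the
    edge values here; step 4 may need something smarter in practice."""
    grid = np.fft.rfftfreq(n_fft, d=1.0 / fs)                     # step 1
    gains = 10.0 ** (np.interp(grid, freqs_hz, gains_db) / 20.0)  # steps 2-4
    full = np.concatenate([gains, gains[-2:0:-1]])                # full symmetric spectrum
    cep = np.fft.ifft(np.log(np.maximum(full, 1e-12))).real       # real cepstrum
    w = np.zeros(n_fft)                                           # step 5: fold the cepstrum
    w[0] = 1.0
    w[n_fft // 2] = 1.0
    w[1:n_fft // 2] = 2.0
    h_min = np.exp(np.fft.fft(cep * w))                           # minimum-phase spectrum
    return np.fft.ifft(h_min).real                                # step 6

# Made-up example spec: 0 dB at DC falling to -20 dB at 20 kHz
h = magnitude_to_min_phase_fir([0, 1000, 4000, 20000], [0.0, -3.0, -10.0, -20.0], fs=44100)
```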
Once the impulse response looks good you have an FIR filter that you can apply to any signal using overlap-add or similar methods. | {
"domain": "dsp.stackexchange",
"id": 10464,
"tags": "filter-design, digital-filters, acoustics"
} |
$φ^3$ theory: three-point, one-loop diagram ($D=4$) | Question: I am attempting to calculate the one-loop diagram for the $φ^3$ three-point function, which is equal to (dropping a factor of $-i$): $$g^3 \int_E \frac{1}{(2 \pi )^4} \frac{1}{(k+q)^2+m^2} \frac{1}{(k-p)^2+m^2} \frac{1}{k^2+m^2} \,d^4k.$$
After completing the square for the integrated momentum and using Feynman parameterisation, we eventually get: $$\frac{g^3}{16 \pi ^2} \int _0^1\int _0^1\int _0^1\frac{1}{x y q^2+\left(1-z+z^2\right) m^2}\delta (1-x-y-z)dxdydz.$$
I'm having some trouble computing this final integral.
Answer: First of all, how did you get the denominator?
Are there some kinematic conditions on $p^2$ and $(p+q)^2$, which might be related to $q^2$ and $m^2$?
Are you sure about the coefficient in front of $m^2$?
Nevertheless, I would show a way to proceed with your integral:
\begin{equation}
I \equiv \int_0^1 dx \int_0^1 dy \int_0^1 dz
\frac{\delta(1-x-y-z)}{x y q^2 + (1 - z + z^2) m^2},
\end{equation}
though I will not go into the detail that will be too cumbersome.
After integrating over $z$ by using the delta function, and subsequently changing the variable $x \to 1 - x$, you get
\begin{equation}
I = \int_0^1 dx \int_0^x dy \frac{1}{A x^2 + B y^2 + C xy + D x + E y + F} ,
\end{equation}
where
\begin{equation}
A = B = F = m^2, \qquad
C = - (q^2+2m^2), \qquad
D = - m^2, \qquad E = q^2 + m^2 .
\end{equation}
This type of integral, which appears in one-loop calculations, was thoroughly studied in the famous 1979 paper by 't Hooft and Veltman.
The denominator is quadratic both in $x$ and $y$, which is rather tough to tackle, so the first step is applying a variable transformation in such a way that the denominator becomes linear in one of the variables. Then performing the integral with respect to the linear variable (with an adequate splitting of the integration domain and variable transformations) gives logarithms, whose arguments are now quadratic in the remaining variable. A quadratic argument can be factorised as a product of two linear terms, so the logarithms with quadratic arguments are split into logarithms with linear arguments. At the end of the day, integrating such logarithms (multiplied with some reciprocals) leads to dilogarithms (known as Spence's functions); indeed many terms with dilogarithms and logarithms in general. | {
"domain": "physics.stackexchange",
"id": 76650,
"tags": "homework-and-exercises, quantum-field-theory, feynman-diagrams, integration"
} |
Python 3 in ROS indigo | Question:
I have a script in Python 3; it reads data from a serial port, does some processing and returns some integer and float values. This script manipulates bytes in a way that would be time-consuming for me to adapt to Python 2. Is there a way to run a Python 3 script in ROS Indigo? I don't use any ROS package in Python, only external libraries like pyserial.
I tried to install python3-pkg, but the package removes many (if not all) of my ROS packages and does not install any substitutes.
Thanks,
Iuro Nascimento
Originally posted by gagarin on ROS Answers with karma: 41 on 2015-05-18
Post score: 4
Answer:
I don't know why the apt installation of rospkg removes existing packages, but the same happened to me. What works, actually, is to create a virtual environment with a python3 interpreter (virtualenv -p /usr/bin/python3, for example) and inside it run pip install rospkg.
(Additionally, you'd have to run pip install catkin_pkg in order to run python3 scripts. I can confirm this approach works for basic ROS functionality - establishing nodes, publishing and subscribing to topics etc. However, there are packages that don't work with python3. In order to combine both - scripts that have to use python3 and those that have to use python2 - I specify the interpreter to use in the shebang, e.g. #!/home/bot/virtualEnvs/v3/bin)
Originally posted by gavran with karma: 526 on 2016-04-19
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by joq on 2016-04-20:
The problem is that many ROS Python packages install different versions under the same file name, so the python3-foo-bar package has to "Conflict" with the corresponding python-foo-bar.
Comment by ravijoshi on 2020-01-12:
I followed these steps but couldn't make it work. I posted my question here. Could you please write down these steps from the beginning? Do I just need to run these two commands virtualenv -p /usr/bin/python3 myenv and under myenv run pip install rospkg? | {
"domain": "robotics.stackexchange",
"id": 21717,
"tags": "ros, python3, ros-indigo"
} |
Definition of $PV$ work | Question: A balloon contains gas with pressure $P_1$, volume $V_1$. It is connected to a container with vacuum with volume $V_2$, and the gas is released into the vacuum. What is the work done by the gas and the heat change of the system?
By conventional wisdom, the gas does not need to do any work against the vacuum to enter the container, so the work done is obviously $0$. But looking at it as $W=\int_{V_i}^{V_f}PdV$(in which the $W$ here is defined as the work done by the system on the surroundings), does that mean that the $P$ here in the integral is defined as the pressure exerted by the surroundings on the system? That doesn't seem to fit with wikipedia's definition, which says "$P$ denotes the pressure inside the system, that it exerts on the moving wall that transmits force to the surroundings"
To sum it up, my question is, does the $P$ in $W=\int_{V_i}^{V_f}PdV$ represent pressure of system on surroundings, or surroundings on system?
Answer: According to Newton's 3rd law, the magnitude of the force per unit area exerted by the surroundings on the system is equal to the magnitude of the force per unit area exerted by the system on the surroundings. In a reversible process, the force per unit area exerted by the system on the surroundings is equal to the pressure calculated from the ideal gas law (or other equation of state). But, for an irreversible process, it is not, because the ideal gas law applies only to equilibrium states of the system. In an irreversible process, the force per unit area exerted by the system on the surroundings is a combination of the ideal gas pressure plus a contribution from viscous stresses. However, this is still equal to the external pressure of the surroundings.
In the present example, no work is done on the vacuum end of the gas. But work is done by the air outside the balloon in pushing air into the chamber. If we neglect the pressure difference across the balloon membrane, this work is equal to the outside air pressure times the change in volume of the balloon (assuming that the amount of air that enters the balloon is not sufficient to deplete the volume of the balloon when the pressure in the chamber matches the outside air pressure). | {
"domain": "physics.stackexchange",
"id": 73871,
"tags": "thermodynamics, pressure, work, gas, volume"
} |
R: Error when using Aggregate function to compile monthly means into yearly means | Question: Disclaimer: I'm extremely new to R and have been getting by with using google as my professor.
I have a somewhat large collection of monthly values over a period of several years from several different locations. I am attempting to use the aggregate function to calculate the yearly means for each location so that yearly rates of change can be calculated. However, when I run the code
read_csv_filename <- function(filename){
ret <- read.csv(filename)
ret$Source <- filename #EDIT
ret
}
import.list <- ldply(filenames, read_csv_filename)
by1 <- import.list$Source
by2 <- import.list$Result
by3 <- import.list$Year
Yearly_Mean <- aggregate(import.list, by==list(by1, by2, by3), FUN= "mean")
I get an error like this
> Yearly_Mean <- aggregate(import.list, by==list(by1, by2, by3), FUN= "mean")
Error in by == list(by1, by2, by3) :
comparison (1) is possible only for atomic and list types
I've spent quite a bit of time looking here and elsewhere for similar issues, but haven't found a case that helped me out at all. Any advice on how to fix this (or a completely new, easier method) would be appreciated.
Thanks!
Answer: You can use the group_by() and summarize() functions from the dplyr package to achieve the above easily:
library(plyr)    # provides ldply()
library(dplyr)   # load dplyr after plyr to avoid function masking issues
import.list <- ldply(filenames, read_csv_filename)
# No need to create b1, b2 and b3
#by1 <- import.list$Source
#by2 <- import.list$Result
#by3 <- import.list$Year
# Group the data by 'Year' (you want the yearly mean) and calculate the mean of 'Result'.
import.list %>%
group_by(Year) %>%
summarize(Mean = mean(Result, na.rm=TRUE))->Final_Output
The above will group the data by Year and calculate the mean of the Result column. | {
"domain": "datascience.stackexchange",
"id": 6685,
"tags": "r, time-series"
} |
how to write c++ codes in ros, for analysing kinect depth datas? | Question:
I'm trying to write 3D computer vision code in ROS in C++, but it looks like C++ in ROS is a little different. What are the necessary lines to add to C++ code for this, or what is the basic format of ROS C++ for the Kinect?
Originally posted by dinesh on ROS Answers with karma: 932 on 2016-05-12
Post score: 0
Answer:
It's conceptually the same as in Python, just syntactically different. Include your message headers and parse the data using a [c++ subscriber](http://wiki.ros.org/ROS/Tutorials/WritingPublisherSubscriber%28c%2B%2B%29).
Originally posted by curranw with karma: 211 on 2016-05-12
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by jarvisschultz on 2016-05-12:
The answer from @curranw is correct... a PointCloud2 message is just like any other message. Subscribe to the data and work with it. However I will point out that it seems a huge majority of people working with C++ and point clouds tend to use PCL and pcl_ros | {
"domain": "robotics.stackexchange",
"id": 24636,
"tags": "ros, 3d-object-recognition"
} |
Can I use the clock from a bag in ROS2? | Question:
I don't see an option for ros2 bag play --clock <bag>, is there an equivalent feature in ROS2 yet?
Originally posted by dawonn_haval on ROS Answers with karma: 103 on 2020-03-16
Post score: 0
Answer:
rosbag2 is not yet clock-aware. See the related GitHub ticket here: https://github.com/ros2/rosbag2/issues/99
You can always record and playback the /clock topic, but rosbag2 doesn't support features like jumping to timepoints or changing the rate of playback.
Originally posted by jacobperron with karma: 1870 on 2020-03-17
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 34597,
"tags": "ros, ros2, bagfile, clock"
} |
What is the connection between geometry of physical space and Hilbert space? | Question: In Quantum Mechanics (QM), the dynamical variables are the (quantized) coordinates $x_j$ and their canonical conjugates $p_j = -i\partial_j$ with the commutation relation $[x_j,p_k]=i\delta_{jk}$, acting as operators on the quantum state space.
What exactly happens to that state space when we change the underlying geometry or topology of "physical" space - the (spacetime) manifold that serves as the background for the quantum system? How does this change in geometry/topology reflect on the Hilbert space?
In Quantum Field Theory (QFT), the dynamical objects are the (quantized) fields $\phi (x^\mu)$ and the coordinates $x_i$ are demoted to mere labels. What happens in this case? How does a change in geometry/topology alter the resulting Fock space?
I am new to this area, so what I need would be a basic explanation (for QM and QFT) how to make the connection between the two concepts geometry/topology of physical space and resulting properties of the quantum state space - if such a wish makes sense at all.
Answer: OP comments as an example of what the question is about:
Let us consider the case of an electron confined to a curved surface. Does the geometry of the background have any consequences for the state space?
A simple answer can be given for ordinary QM: A (scalar, i.e. spin-0) particle moving in one-dimension has state space $L^2(\mathbb{R})$, a particle in three dimensions has state space $L^2(\mathbb{R}^3)$. The state space of a particle moving on a submanifold $\mathcal{M} \subset \mathbb{R}^3$, e.g. a particle moving on any smooth surface, is then by analogy simply $L^2(\mathcal{M})$ i.e. the functions whose square-integral over $\mathcal{M}$ exists.
Note that Fourier (i.e. relating the position and momentum representations) transforms on manifolds that are not $\mathbb{R}^n$ are somewhat complicated, cf. this math.SE post. | {
"domain": "physics.stackexchange",
"id": 19736,
"tags": "quantum-mechanics, quantum-field-theory, geometry, topology"
} |
Why is there a yellow solution formed upon reaction of silicon tetrachloride with water? | Question: I've read somewhere that reacting the two will give a white solid and a yellow solution. I'm assuming the white solid is the silicon dioxide, but what is the chemical origins of the yellow colour of the solution?
Answer: The yellow color is indeed due to an impurity. According to Wikipedia:
Silicon tetrachloride is prepared by the chlorination of various silicon compounds such as ferrosilicon, silicon carbide, or mixtures of silicon dioxide and carbon. The ferrosilicon route is most common.[1]
Thus the most likely impurities are chlorine and iron, the latter converted to ferric chloride by the chlorination process. Either one may produce a yellow color in water solution.
Cited Reference
Simmler, W. "Silicon Compounds, Inorganic". Ullmann's Encyclopedia of Industrial Chemistry. Weinheim: Wiley-VCH. doi:10.1002/14356007.a24_001 | {
"domain": "chemistry.stackexchange",
"id": 17784,
"tags": "inorganic-chemistry, analytical-chemistry"
} |
Set combination data structure (And storage complexity) | Question: I have already posted this question on Stackoverflow, but I'm starting to think that this is the right place.
I have a problem where I am required to associate unique combinations from a set (unique subsets) to a given value. e.g.: Let S={a, b, c, d}, the required data structure should perform the following:
Key -> value
{a,b} -> value1
{a,c} -> value2
{c,d} -> value3
Property 1: The length of the set in the key is fixed (In this
example it's fixed to 2).
Property 2: The data structure does not
hold all possible subsets of S.
Question 1: What is the storage complexity of a simple Map holding these values? O(N!)? (given that |S| = N and it's not fixed)
Question 2: Is there any efficient data structure that could store such elements? (The most important efficiency would be required in storage complexity)
Answer: Question 1: The storage complexity of a map for holding these values is $\Theta(m)$, where $m$ is the number of entries in the map, assuming we don't count the space to store the sets themselves, and assuming we store them in a hash table (or similar data structure).
This doesn't include the space to store the sets in the keys, as those presumably already exist. If you want to include that space as well in your estimate of the space complexity, then we need at most $\Theta(m)$ space for the map, plus $\Theta(mk)$ space to hold the sets (assuming each set contains $k$ elements), for a total of $\Theta(mk)$ space. I am assuming that each element of $S$ can be stored in a single word ($\Theta(1)$ space), e.g., that $n \le 2^{64}$. If $n$ is huge, so that an element of $S$ cannot be stored in a single word, a better estimate of the total space usage is $\Theta(mk \lg n)$, since it takes $\Theta(\lg n)$ bits to store a single element of $S$, each subset of $k$ elements takes $\Theta(k \lg n)$ bits, and there are $m$ entries in the map, each of which requires $\Theta(k \lg n)$ bits to store it.
The number of unique subsets of size $k$, out of a universe of $n$ items, is ${n \choose k}$. Therefore, if you know that the keys are subsets of size $k$, you know that $m \le {n \choose k}$. But you mentioned in the question that the number of entries in the map is less than all possible subsets of size $k$, so this isn't helpful.
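For concreteness, the map from the question can be stored in a hash table directly; in Python, frozenset keys make subsets hashable without any extra machinery:

```python
# Subsets as keys: frozenset is hashable and order-insensitive,
# so it works directly as a dict (hash table) key.
table = {
    frozenset({"a", "b"}): "value1",
    frozenset({"a", "c"}): "value2",
    frozenset({"c", "d"}): "value3",
}

print(table[frozenset({"b", "a"})])   # element order is irrelevant -> value1

# Storage is Theta(mk): m stored entries times k elements per key,
# independent of the C(n, k) possible subsets of S.
```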
Question 2: Yes, a hash table would be an efficient data structure to store these elements. There's a straightforward way to define a hash function on subsets of $S$, so you can use that to build a hash table. A hash table achieves the space complexities listed above. | {
"domain": "cs.stackexchange",
"id": 1740,
"tags": "data-structures, space-complexity, sets"
} |
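The hash-table approach in the answer above can be sketched with Python's built-in `dict`, using `frozenset` keys (hashable and order-insensitive, so `{a, b}` and `{b, a}` are the same key); the element names and values mirror the example in the question:

```python
# Map fixed-size subsets of S to values using a dict keyed by frozensets.
# frozenset is hashable and order-insensitive, so {a, b} == {b, a} as a key.
subset_map = {
    frozenset({"a", "b"}): "value1",
    frozenset({"a", "c"}): "value2",
    frozenset({"c", "d"}): "value3",
}

# Lookup hashes the k-element key in O(k), then does O(1) expected probes.
print(subset_map[frozenset({"b", "a"})])  # element order doesn't matter

# Total storage is Theta(m * k): m entries, each holding a k-element set.
m = len(subset_map)
```

This stores only the m subsets that actually occur, matching the Theta(mk) bound from the answer.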
If earth's magnetic field disappeared, would cosmic radiation lead to increased radioactivity? | Question: This is about the effect of cosmic radiation on earth. Is it the type of radiation that could make things radioactive?
So if earth's magnetic field weakened considerably (such as could happen if it was continually flipping rapidly), or disappeared altogether, would it cause an increase in radioactivity? Could it cause an increase in, for example, Carbon-14?
Answer: Yes, it would cause many things to become radioactive, carbon-14 being one example. The cosmic radiation itself might reach the ground, so we would have much more to worry about than just carbon-14. One of the first large-scale phenomena you would see is a global aurora caused by the solar wind. Over time, the solar wind would disrupt the ozone layer and destroy yet another shield protecting us from cosmic and solar radiation. Ultimately, Earth's atmosphere would be stripped away by the solar wind and Earth would become barren like Mars.
Update:
I just realized I should emphasize that my answer is based on complete elimination of the magnetosphere. | {
"domain": "physics.stackexchange",
"id": 19492,
"tags": "magnetic-fields, earth, radioactivity, cosmic-rays"
} |
Finding the random uncertainty of a set of values | Question: Ok, for the switch-on voltage of a red LED I have the readings as follows, all in volts:
$$
1.45, 1.46, 1.46, 1.44, 1.45
$$
The mean of these readings, in volts, is $1.45$ (I rounded to $2$ decimal places because my scale-reading uncertainty was $\pm 0.01\,\mathrm{V}$, and my teacher told me to round since, to state the scale-reading uncertainty for the mean, the mean must have the same number of decimal places as the scale-reading uncertainty). Now, my random uncertainty for these values is $\pm 0.004\,\mathrm{V}$, which is not to $2$ decimal places (it's to $1$ significant figure). So I was wondering whether I would have to quote my random uncertainty to $3$ significant figures ($\pm 0.00400\,\mathrm{V}$?) to express the random uncertainty in absolute form (Mean Value $\pm$ Random Uncertainty). And if I did so, would it be correct in terms of physics?
Answer: I believe in this case you should round your uncertainty to $0.01\,\mathrm{V}$. In addition to the convention that it is better to give your results "the benefit of the doubt": if your digital measurement tool (your voltmeter) only reads to two decimal places, the smallest measurable value ($0.01\,\mathrm{V}$) is the uncertainty inherent in the measurement tool. | {
"domain": "physics.stackexchange",
"id": 29650,
"tags": "measurements, measurement-problem"
} |
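The quoted $\pm 0.004\,\mathrm{V}$ is consistent with the common teaching-lab rule random uncertainty $= (\max - \min)/n$; a quick check in Python (this rule is inferred from the numbers in the question, not stated there):

```python
# Check the quoted mean and random uncertainty for the LED readings.
readings = [1.45, 1.46, 1.46, 1.44, 1.45]

mean = sum(readings) / len(readings)  # 1.452 V before rounding
random_unc = (max(readings) - min(readings)) / len(readings)  # 0.004 V

print(f"mean = {mean:.3f} V, random uncertainty = {random_unc:.3f} V")
# Quoting to the scale-reading precision: (1.45 +/- 0.01) V
```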
How to decode a Barker code | Question: Let's suppose that I send the following sequence of bits:
$x = [1, 0]$
I use a 7 length Barker code ($b = [−1,−1,−1,1,1,−1,1]$) in a BPSK modulation scheme, resulting in the following sequence:
$x_B =\\
[−1,−1,−1,\:\:1,\:\:1,−1,\:\:1\\
\:\:\:1,\:\:\:1,\:\:1,-1,-1,\:\:\:1,-1]$
We send it through an AWGN channel, and after demodulation there are some errors in the received sequence:
$y_B =\\
[\:\:1,−1,\:\:1,\:\:1,\:\:1,−1,\:\:1\\
\:\:\:1,-1,\:\:1,-1,-1,\:\:\:1,-1]$
How would I decode this sequence?
Answer: Under the white Gaussian noise assumption, the optimal decoder minimizes the Euclidean distance over each window of $7$ symbols. That is, take your first 7 symbols
$$s_1 = [\:\:1,−1,\:\:1,\:\:1,\:\:1,−1,\:\:1]$$
and compute two metrics
$$c_0 = \Vert s_1 - b_0 \Vert_2^2 = 2^2 + 2^2 + 2^2 + 2^2 + 2^2 = 20$$
$$c_1 = \Vert s_1 - b_1 \Vert_2^2 = 2^2 + 2^2 = 8$$
where $b_1 = [−1,−1,−1,1,1,−1,1]$ is the Barker code corresponding to $1$ and $b_0 = [1,1,1,-1,-1,1,-1]$ is the Barker code corresponding to $0$. So if $c_0 < c_1$ then the first received bit is $0$, else $1$.
Next, grab the other $7$ symbols
$$s_2 = [1,-1,1,-1,-1,1,-1]$$
and do the same thing.
PS: You could also have used the Hamming distance as the criterion, choosing the code word with the minimum number of symbol errors. | {
"domain": "dsp.stackexchange",
"id": 7707,
"tags": "digital-communications"
} |
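The windowed minimum-distance decoding described in the answer above can be sketched in Python (the function name and layout are illustrative):

```python
# Minimum-Euclidean-distance decoding of Barker-coded BPSK.
b1 = [-1, -1, -1, 1, 1, -1, 1]  # Barker-7 code word for bit 1
b0 = [-s for s in b1]           # inverted code word for bit 0

def decode(received, n=7):
    """Split the received symbols into length-n windows and pick, for each,
    the code word with the smaller squared Euclidean distance."""
    bits = []
    for i in range(0, len(received), n):
        s = received[i:i + n]
        c0 = sum((x - y) ** 2 for x, y in zip(s, b0))
        c1 = sum((x - y) ** 2 for x, y in zip(s, b1))
        bits.append(0 if c0 < c1 else 1)
    return bits

y = [ 1, -1,  1,  1,  1, -1,  1,
      1, -1,  1, -1, -1,  1, -1]
print(decode(y))  # -> [1, 0]: the transmitted bits, despite the symbol errors
```

For the first window this computes $c_0 = 20$ and $c_1 = 8$, so the spreading gain of the code absorbs the two flipped symbols.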
Does the real Clifford group contain all real diagonal gates? all permutation gates? | Question: The real Pauli group is the subgroup of $ O_{2^n}(\mathbb{R}) $ generated by products and tensor products of $ X $ and $ Z $ (this deviates from the usual Pauli group in that only real Paulis are allowed, in other words there is no global phase of $ i $).
The real Clifford group is defined to be the normalizer in $ O_{2^n}(\mathbb{R}) $ of the real Pauli group.
The real Clifford group is finite of size
$$
2^{n^2+n+2}(2^n-1)\prod_{j=1}^{n-1}(4^j-1)
$$
see
https://arxiv.org/abs/math/0001038
According to the same reference the real Clifford group contains all diagonal gates of the form
$$
\mathrm{diag}\left((-1)^{q(v)+a}\right)
$$
where $ q $ is a binary quadratic form and $ a $ is $ 0 $ or $ 1 $.
The real Clifford group also contains all permutation matrices that correspond to the action of an element of $ AGL(n,2) $ on the $ 2^n $ basis vectors.
These two types of gates are mentioned in the context of a "convenient generating set" for the real Clifford group. Are all real diagonal gates and all permutation matrices in the real Clifford group? (I assume not, because Toffoli is a permutation matrix that is not in the Clifford group.) Or are the diagonal gates arising from quadratic forms and the permutation matrices corresponding to $AGL(n,2)$ the only diagonal and permutation gates, respectively, in the real Clifford group?
Answer: I assume by "real diagonal" you mean diagonal with $\pm1$ entries since, at least in the quantum arena, we're talking about unitaries.
I'm not familiar with the real Clifford group, so I'm going to assume that its evolutions can be efficiently simulated on a classical computer, by a version of the Gottesman-Knill theorem (real Paulis are mapped to real Paulis, so we just have to keep track of them).
The Toffoli gate is a permutation matrix that is not in the real Clifford group (which we know because Toffoli+Hadamard is universal, Hadamard is in the real Clifford group, and the real Clifford group is not universal).
By extension, if I take Toffoli and pre- and post-multiply the target qubit by a Hadamard gate, I get controlled-controlled-$Z$, which is real diagonal but not in the real Clifford group. | {
"domain": "quantumcomputing.stackexchange",
"id": 4358,
"tags": "mathematics, clifford-group"
} |
What is the formula for calculating the tension of the rope section? | Question: There is a circle of rope that rotates at a uniform angular velocity $ω$. What is the formula for calculating the tension of the rope section? Without gravity, the density of the rope is $ρ$, the radius of the rope circle is $R$, and the section radius of the rope is $r$.
Answer: Consider an infinitesimally small section of the rope subtending an angle $d\theta$. The following diagram illustrates this:
The tension is of the same magnitude throughout the rope, and it acts perpendicular to the vector from the center of the string to the point of action.
From this diagram, you can tell that only the x-components to the left matter, since the y-components of the tensions cancel out. The x-components of the two tension vectors must be equal to the force required for centripetal acceleration.
$$2T \sin(d\theta/2) \approx T\,d\theta = (dm)\,\omega^2 R$$
The small bit of mass can be found as follows (the element's length along the loop is the arc $R\,d\theta$):
$$dm = \rho\, dV = \rho A\, dx = \rho \pi r^2 (R\, d\theta) = \rho \pi r^2 R\, d\theta$$
Then, cancelling out the $d\theta$ on both sides of the equation:
$$\boxed{T = \pi \rho r^2 \omega^2 R^2}$$
Note: This solution assumes $r \ll R$. | {
"domain": "physics.stackexchange",
"id": 64171,
"tags": "newtonian-mechanics, forces, string"
} |
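As a numeric sanity check of the force balance, here is a short Python sketch with illustrative values; it assumes $r \ll R$ and that the mass element's length along the loop is the arc $R\,d\theta$:

```python
import math

# Check the balance 2*T*sin(dtheta/2) = dm * omega^2 * R for a small element,
# with dm = rho * pi * r^2 * (R * dtheta) and T = pi*rho*r^2*omega^2*R^2.
rho, r, R, omega = 1200.0, 0.005, 2.0, 3.0  # illustrative SI values

T = math.pi * rho * r**2 * omega**2 * R**2  # candidate tension

dtheta = 1e-6
dm = rho * math.pi * r**2 * (R * dtheta)
lhs = 2 * T * math.sin(dtheta / 2)  # net inward pull of the two tension vectors
rhs = dm * omega**2 * R             # required centripetal force

print(lhs / rhs)  # -> approximately 1.0
```

The ratio approaches 1 as dtheta shrinks, confirming that the tension equals the linear density $\rho\pi r^2$ times $\omega^2 R^2$.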
Two slit experiment: Where does the energy go? | Question: In Physics class we were doing the two slit experiment with a helium-neon red laser. We used this to work out the wavelength of the laser light to a high degree of accuracy. On the piece of paper the light shined on there were patterns of interference, both constructive and destructive. My question is, when the part of the paper appeared dark, where did the energy in the light go?
Answer: It goes to the brighter strips. In those regions there is constructive interference, which is actually brighter than it would be with just one source. | {
"domain": "physics.stackexchange",
"id": 384,
"tags": "quantum-mechanics, double-slit-experiment"
} |
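The energy bookkeeping behind the answer above can be checked numerically: two coherent unit-intensity sources give interference intensity $4\cos^2(\delta/2)$, and its average over all phase differences is $2$, exactly the sum of the two individual intensities (a standard result, stated here as an assumption since the answer does not spell it out):

```python
import math

# Average the two-source interference intensity 4*cos^2(delta/2) over a full
# range of phase differences and compare with the incoherent sum 1 + 1 = 2.
N = 100_000
avg = sum(4 * math.cos(math.pi * k / N) ** 2 for k in range(N)) / N
print(avg)  # -> 2.0 (up to rounding): energy is redistributed, not lost
```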
Normalising Generators of a Lie Algebra | Question: Ok, so I'm asking this in physics because I'm currently working through part of Srednicki's text on QFT, even though it's really a maths question.
In Srednicki's chapter on non-Abelian gauge theory, he introduces the generators of a Lie group. At the moment we're only analysing $SU(N)$, which is defined by $M M^\dagger = 1$ and $\det(M) = 1$ for all $M \in SU(N)$
And the corresponding conditions on the generators of the group are $T = T^\dagger$ and ${\rm Tr}(T) = 0$ for all $T \in \mathfrak{su}(N)$
Then what I don't understand is that Srednicki tells me that we should normalise our generators so that $${\rm Tr}(T^i T^j) = \frac{1}{2}\delta^{ij}.$$
So presumably this arises because our set of $N^2-1$ generators is a basis for the tangent space of $SU(N)$ at the identity, and we choose it to be orthogonal and then need a condition to normalise the lengths of all of the basis vectors?
Why did the condition Srednicki gave do that? And where did we input that the vectors are orthogonal?
Answer: The Lie algebra $\mathfrak{su}(N)$, viewed as a vector space of matrices, can be equipped with the following standard inner product:
\begin{align}
\langle X,Y\rangle = \mathrm{tr}(X^\dagger Y),
\end{align}
where $X^\dagger Y$ is the matrix product of $X^\dagger$ and $Y$, and $\mathrm{tr}$ is the trace. Since $X^\dagger = X$ for all $X\in\mathfrak{su}(N)$, the right hand side reduces to $\mathrm{tr}(XY)$. Thus, the condition Srednicki writes expresses orthogonality with respect to this standard inner product, and Srednicki chooses to normalize the generators to have norm-squared $1/2$. | {
"domain": "physics.stackexchange",
"id": 21684,
"tags": "group-theory, lie-algebra, representation-theory, conventions"
} |
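For $SU(2)$ the normalization discussed above is easy to verify numerically: taking the generators $T^i = \sigma_i/2$ built from the Pauli matrices (a standard choice, assumed here rather than quoted from Srednicki) gives $\mathrm{Tr}(T^i T^j) = \tfrac{1}{2}\delta^{ij}$:

```python
import numpy as np

# su(2) generators T^i = sigma_i / 2; their trace inner products form
# the Gram matrix Tr(T^i T^j), which should be (1/2) * identity.
sigma = [
    np.array([[0, 1], [1, 0]], dtype=complex),     # sigma_x
    np.array([[0, -1j], [1j, 0]], dtype=complex),  # sigma_y
    np.array([[1, 0], [0, -1]], dtype=complex),    # sigma_z
]
T = [s / 2 for s in sigma]

gram = np.array([[np.trace(a @ b) for b in T] for a in T])
print(np.round(gram.real, 10))  # -> 0.5 * identity matrix
```

The off-diagonal zeros are the orthogonality and the diagonal $1/2$ entries are the chosen normalization, all with respect to the inner product $\langle X, Y\rangle = \mathrm{tr}(X^\dagger Y)$.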
Speed of light: measured quantity or theoretical derivation? | Question: In a comment to this answer it was stated
Yes, the speed of light is not a measured value. In natural units, it equals one (e.g. one light second per second)... It is not a measured value, but a postulated whole number with no fractions. You cannot measure the speed of light, it is theoretically given as the unity that for every day convenience is multiplied by this whole number.
In my understanding the value of the speed of light was found through observation, and it isn't possible to derive it theoretically.
How can one support or refute the one or the other point of view?
Answer: The speed of light was found by observation, but then we discovered a remarkable property about the universe, which is that light is very unlike sound. If you move at a speed $v$ through the air then sounds move away from you at speed $v_\text{sound} - v$ in the direction that you're going and at speed $v_\text{sound} + v$ in the direction opposite. It turns out that nobody can experience the same thing with light -- no matter what speed you go, and no matter relative to who, light rays in vacuum always move at speed $c$.
Combined with the fact that around 1900 we developed a new way to measure distances based on the interference of light, our measurements of the velocity of light became dominated by uncertainty on what exactly "one meter" is, so we redefined the meter in terms of the distance light goes in a certain amount of time. Now the speed of light in vacuum is a mathematical constant for our unit system. But of course we chose the definition so that we did not need to change our meter-sticks. | {
"domain": "physics.stackexchange",
"id": 43004,
"tags": "speed-of-light"
} |
Orthodox Easter calculator | Question: I have code for calculating the date for orthodox Easter and it makes me cry when I look at the if-else part. Please give your suggestions on how to make it shine (if possible).
I guess I can format day/month to DateTime format but I'm afraid it'll bring just more mess.
It works just fine, though.
Source
puts "- Hello, I'm EasterEvaluator. I can tell you when Easter will be (or was)."
puts "- Please enter the year which you're interested in:"
year = gets.chomp.to_i
a = year % 19
b = year % 4
c = year % 7
d = (19*a+15) % 30
e = (2*b + 4*c + 6*d + 6) % 7
f = d + e
if f<=9
easter = 22+f
easter = easter + 13 if year > 1918
if easter > 31
easter = easter - 31
month = "April"
else
month = "March"
end
else
easter = f-9
easter = easter + 13 if year > 1918
if easter > 30
easter = easter-30
month = "May"
else
month = "April"
end
end
puts "- In #{year}, Easter is going to be on #{easter} of #{month}."
Answer: The key is to realize that f is the number of days from March 22 to Orthodox Easter (in the Julian calendar), so f + 22 is Easter as a March day number:
day = f + 22                     # Easter as a March day number (Julian)
day += 13 if year > 1918         # Julian-to-Gregorian correction
month = "March"
if day > 61 then month = "May"; day -= 61
elsif day > 31 then month = "April"; day -= 31 end
You might also post to http://codegolf.stackexchange.com | {
"domain": "codereview.stackexchange",
"id": 10341,
"tags": "ruby, datetime"
} |
Is MAX CUT approximation resistant? | Question: CSP optimization problem is approximation resistant if it is $NP$-hard to beat the approximation factor of a random assignment. For instance, MAX 3-LIN is approximation resistant since a random assignment satisfies $1/2$ fraction of the linear equations but achieving approximation factor $1/2+ \epsilon$ is $NP$-hard.
MAX CUT is a fundamental $NP$-complete problem. It can be formulated as the CSP problem of solving linear equations modulo 2 ($x_i + x_j = 1 \bmod 2$). A random assignment achieves a $1/2$-approximation factor (of the total number of edges $|E|$). Haglin and Venkatesan proved that achieving an approximation factor $1/2+ \epsilon$ is $NP$-hard (i.e. finding a cut better than $|E|/2$). However, Håstad showed that MAX CUT is not approximable within a factor $16/17+ \epsilon$ of the optimal cut unless $P=NP$. Goemans and Williamson gave an SDP-based polynomial-time algorithm with a 0.878 approximation factor (within the optimal cut), which is optimal assuming the Unique Games Conjecture. It seems to me that expressing the approximation factor relative to the total number of constraints ($|E|$) is more natural and consistent with the convention used for the MAX 3-LIN problem.
Why is the approximation factor for MAX CUT given relative to the size of optimal cut instead of the number of constraints (# of edges)? Am I right in concluding that MAX CUT is approximation resistant when the approximation factor is relative to the total number of constraints ($|E|$)?
Answer: If you measure approximation as a ratio between the number of constraints satisfied by your algorithm divided by the total number of constraints, then trivially all constraint satisfaction problems are unconditionally approximation-resistant.
By definition, a problem is approximation resistant if the (worst-case) approximation ratio of a random solution is (up to an additive $o(1)$ term) best possible among the (worst-case) approximation ratios achieved by all polynomial time algorithms.
With your definition of approximation ratio, you get approximation resistance of Max Cut (and all constraint satisfaction problems) just because you can always construct instances for which the ratio given (on average) by a random assignment is more or less (up to a $o(1)$ term) the same as the ratio given by an optimal assignment.
For example, in the Max Cut problem, a clique on $n$ vertices is a graph that has ${n \choose 2} = n\cdot (n-1)/2$ edges, and in which an optimal cut cuts $n^2/4$ edges. This means that every algorithm, and in particular every polynomial time algorithm, has a worst-case approximation ratio (according to your definition) that is at most $1/2 + O(1/n)$ on $n$-vertex graphs. The random assignment has ratio $1/2$ on all instances, and so no algorithm can do better than the random algorithm, and the problem is approximation-resistant. | {
"domain": "cstheory.stackexchange",
"id": 503,
"tags": "cc.complexity-theory, approximation-hardness, max-cut"
} |
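The clique example in the answer above can be verified by brute force (an illustrative sketch; the search is exponential in $n$, so it is only for tiny graphs):

```python
from itertools import combinations

# Brute-force Max Cut on the n-vertex clique: the best cut has n^2/4 edges
# (for even n), while the total edge count is n*(n-1)/2, so the ratio of
# cut edges to total constraints approaches 1/2 as n grows.
n = 6
edges = list(combinations(range(n), 2))

best = 0
for mask in range(2 ** n):
    side = [(mask >> v) & 1 for v in range(n)]
    best = max(best, sum(side[u] != side[v] for u, v in edges))

print(best, len(edges), best / len(edges))  # -> 9 15 0.6
```

For $n = 6$ the optimum cuts $9 = 6^2/4$ of the $15$ edges, a ratio of $0.6 = 1/2 + O(1/n)$, matching the answer's argument.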
Is black tea a pH indicator? | Question: Today, I made a cup of lemon tea in a different way than usual. Instead of pouring hot water in to a mug containing lemon juice and black tea (my normal routine), I let the tea steep for a little while before squeezing the lemon. Before adding the lemon, I noticed that the tea was much darker than I am used to. After squeezing the lemon, the tea changed color to a lighter brown and became more translucent.
I could see only a half inch down the side of the cup before squeezing the lemon, but afterwards I could see three inches down, all the way to the bottom. I did not notice any precipitated solids on the bottom of the cup. What happened?
Answer: The colour of black tea is given by a complex mixture of theaflavins and thearubigins.
These compounds are formed by enzymatic oxidation and condensation of polyphenols of green tea, typically of gallocatechin and epigallocatechin gallate (EGCG).
Basic dissociated forms of these compounds have strong absorption of visible light due to delocalization of phenolate electrons. This absorption decreases a lot when the phenolates are protonated in an acidic environment.
So yes, in some sense, black tea infusion is a $\mathrm{pH}$ indicator, similar to litmus or juice from red beet roots. Many natural colourful compounds have $\mathrm{pH}$-dependent colour, as their molecules undergo acid-base reactions.
But synthetic indicators are better due to their stability and their sharper, better-defined colour transitions. | {
"domain": "chemistry.stackexchange",
"id": 12696,
"tags": "ph, food-chemistry"
} |