How would one represent/model the effect of air resistance on horizontal motion?
Question: Evidently, there are formulas that allow us to represent the effect of air resistance on the velocity of an object in free fall, but how would one calculate the effect of air resistance on horizontal motion? For instance, how much does air resistance hinder a human's speed? Answer: Nothing about how to model air resistance depends on the direction of motion. The same representations used for free fall apply in other situations: chiefly, a drag force that is $\sim\vec v$ in the laminar regime or $\sim|\vec v|^2 \hat v$ in the turbulent one.
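As a rough numerical sketch of the turbulent-regime formula, here is an illustrative calculation of the $\sim|\vec v|^2$ drag on a runner. The density, drag coefficient, frontal area, and speed are assumed round numbers for illustration, not values taken from the answer.

```python
# Hedged sketch: quadratic (turbulent) drag on a runner.
# All parameter values are illustrative assumptions.
rho = 1.225  # air density at sea level, kg/m^3
Cd = 1.0     # assumed drag coefficient for an upright human
A = 0.5      # assumed frontal area, m^2
v = 10.0     # speed, m/s (roughly a sprinter)

# F ~ |v|^2 in the turbulent regime: F = (1/2) * rho * Cd * A * v^2
F_drag = 0.5 * rho * Cd * A * v**2
print(f"drag force: {F_drag:.1f} N")
```

A few tens of newtons on a sprinter, which is why air resistance is a measurable but not dominant hindrance at human running speeds.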
{ "domain": "physics.stackexchange", "id": 61700, "tags": "velocity, drag" }
What is the difference between a qubit and a quantum state?
Question: In general, a qubit is mathematically represented as a quantum state of the form $\lvert \psi\rangle = \alpha \lvert 0\rangle + \beta \lvert 1\rangle$, using the basis $\{ \lvert 0\rangle, \lvert 1\rangle \}$. It seems to me that a qubit is just a term used in quantum computing and information to denote a quantum state (i.e. a vector) of a system. Is there any fundamental difference between a qubit and a quantum state? What more is there to a qubit than the quantum state it represents? Answer: There are a few things to distinguish here, which are often conflated by experts because we're using these terms quickly and informally to convey intuitions rather than in the way that would be most transparent to novices. A "qubit" can refer to a small system, which has a quantum mechanical state. The states of a quantum mechanical system form a vector space. Most of these states can be distinguished from each other only imperfectly, in that there is a chance of mistaking one state for the other, no matter how cleverly you try to distinguish them. One may then ask the question, of a set of states, whether they are all perfectly distinguishable from one another. A "qubit" is an example of a quantum mechanical system, for which the largest number of perfectly distinguishable states is two. (There are many different sets of perfectly distinguishable states; but each such set contains only two elements.) 
These may be the polarisation of a photon ($\lvert \mathrm H \rangle$ versus $\lvert \mathrm V \rangle$, or $\lvert \circlearrowleft \rangle$ versus $\lvert \circlearrowright \rangle$); or the spin of an electron ($\lvert \uparrow \rangle$ versus $\lvert \downarrow \rangle$, or $\lvert \rightarrow \rangle$ versus $\lvert \leftarrow \rangle$); or two energy levels $\lvert E_1 \rangle$ and $\lvert E_2 \rangle$ of an electron in an ion, which may occupy many different energy levels but which is being controlled in such a way that the electron stays within the subspace defined by these energy levels when it isn't being acted on. Common to these systems is that one can describe their states in terms of two states, which we might label as $\lvert 0 \rangle$ and $\lvert 1 \rangle$, and consider the other states of the system (which are vectors in the vector space spanned by $\lvert 0 \rangle$ and $\lvert 1 \rangle$) using linear combinations taking the form $\alpha \lvert 0 \rangle + \beta \lvert 1 \rangle$, where $\lvert \alpha \rvert^2 + \lvert \beta \rvert^2 = 1$. A "qubit" can also refer to the quantum mechanical state of a physical system of the sort we've described above. That is, we may call some state of the form $\alpha \lvert 0 \rangle + \beta \lvert 1 \rangle$ "a qubit". In this case we are not considering what physical system is storing that state; we are interested only in the form of the state. "A qubit" can also refer to an amount of information which is equivalent to a state such as $\alpha \lvert 0 \rangle + \beta \lvert 1 \rangle$. 
For instance, if we know two states $\lvert \psi_0 \rangle$ and $\lvert \psi_1 \rangle$ of some complicated quantum system, and we have some physical system whose state $\lvert \Psi \rangle$ is in some superposition $\alpha \lvert \psi_0 \rangle + \beta \lvert \psi_1 \rangle$, then it doesn't matter how complicated the system is or whether either of the states $\lvert \psi_j \rangle$ have any entanglement: the amount of information expressed by the possible values of $\lvert \Psi \rangle$ is one qubit, because with a clever enough noiseless procedure, you could reversibly encode that complicated quantum state into the state of a (physical system) qubit. Similarly, you can have a very large quantum system which encodes $n$ qubits of information, if you could reversibly encode the state of that complicated system as the state of $n$ qubits. This may seem confusing, but it's no different from what we do all the time with classical computation. If in a C-like language I write int x = 5; you probably understand that x is an integer (an integer variable that is), which stores an integer 5 (an integer value). If I then write x = 7; I don't mean that x is an integer which is equal to both 5 and 7, but rather that x is a container of sorts and that what we are doing is changing what it contains. And so forth — these ways in which we use the term 'qubit' are just the same as how we use the term 'bit', only it so happens that we use the term for quantum states instead of for values, and for small physical systems rather than variables or registers. (Or rather: the quantum states are the values in quantum computation, and the small physical systems are the variables / registers.)
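The distinguishability notion used throughout the answer can be made concrete with a small sketch (my own illustration, not from the answer): for pure states, the overlap $|\langle\phi|\psi\rangle|^2$ vanishes exactly when the two states are perfectly distinguishable.

```python
import numpy as np

# My own illustration: two pure states are perfectly distinguishable
# exactly when their inner product vanishes.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
ket_plus = (ket0 + ket1) / np.sqrt(2)  # (|0> + |1>)/sqrt(2)

def overlap(a, b):
    """|<a|b>|^2: zero means the states can be told apart perfectly."""
    return abs(np.vdot(a, b)) ** 2

print(overlap(ket0, ket1))      # 0.0 -> perfectly distinguishable
print(overlap(ket0, ket_plus))  # ~0.5 -> confusable
```

The basis $\{\lvert 0\rangle, \lvert 1\rangle\}$ is one maximal set of perfectly distinguishable states; $\lvert + \rangle$ belongs to a different such set and cannot be reliably told apart from $\lvert 0 \rangle$.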
{ "domain": "quantumcomputing.stackexchange", "id": 4583, "tags": "experimental-realization, terminology-and-notation, quantum-state" }
Do these unit tests cover my method under test?
Question: I've written an extension method to truncate strings: /// <summary> /// Returns the string, or a substring of the string with a length of <paramref name="maxLength"/> if the string is longer than maxLength. /// </summary> /// <param name="maxLength">The maximum length of the string to return.</param> /// <exception cref="ArgumentException">If <paramref name="maxLength"/> is smaller than zero, an ArgumentException is raised.</exception> /// <![CDATA[ Documentation: /// If the string is null or an empty string, the input is returned. /// If the string is shorter than or equal to maxLength, the input is returned. /// If the string is longer than maxLength, the first N (where N=maxLength) characters of the string are returned. /// ]]> public static String Truncate(this String input, int maxLength) { if (maxLength < 0) { throw new ArgumentException("Must be >= 0", "maxLength"); } if (String.IsNullOrEmpty(input) || input.Length <= maxLength) { return input; } return input.Substring(0, maxLength); } I have written these test cases: [TestFixture] public class TruncateTests { String longString = "ABC"; String shortString = "A"; String nullOrEmptyString = null; String output = ""; [Test] public void LessThanOrEqual() { // If input is shorter than or equal to maxLength, the input is returned. output = longString.Truncate(longString.Length + 5); Assert.AreEqual(longString, output); output = longString.Truncate(longString.Length); Assert.AreEqual(longString, output); } [Test] public void GreaterThan() { // If input is longer than maxLength, the first N (where N=maxLength) characters of input are returned. output = longString.Truncate(1); Assert.AreEqual(shortString, output); } [Test] public void NullOrEmpty() { // If input is null or an empty string, input is returned. 
output = nullOrEmptyString.Truncate(42); Assert.AreEqual(output, nullOrEmptyString); nullOrEmptyString = ""; output = nullOrEmptyString.Truncate(42); Assert.AreEqual(output, nullOrEmptyString); } [ExpectedException(typeof(ArgumentException))] [Test] public void MaxLengthException() { // If maxLength is smaller than zero, an ArgumentException is raised. "".Truncate(-1); } //http://www.fileformat.info/info/unicode/char/1f3bc/index.htm string multiByteString = "\x1F3BC"; [Test] public void MultiByte() { Assert.IsTrue(multiByteString.Length == 2); output = multiByteString.Truncate(1); Assert.IsTrue(output.Length == 1); } } Now how can I confirm that: The method Truncate() does what it is supposed to do? I have written test code to test all promises made? This test code follows practices and guidelines valid for unit testing? I'm especially curious about the last one. I've written a few tests in my time, but I'm never sure whether I'm testing enough and not too much at the same time. Can anyone shed a general light on this and maybe point me towards invaluable resources about unit testing? Answer: About the function comment: you're over-complicating the summary. As I understand it, the function could be summarized as Get the first <paramref="maxLength"> characters. The special conditions mentioned below should be simplified as well. For maxLength I'd check for min/max integer as well as for -1/0/1 (does the function behave correctly at the limits?). Check input for null, and additionally for these combinations: input.length < maxLength, input.length = maxLength, input.length > maxLength. You're not testing your code when testing for MB-functionality. Test one thing at a time. If NullOrEmpty fails, how do you know which of those assertions failed? Does it fail if input is empty or if it is null? Remove the comments and make the test names clearer instead. Tests are simple and easy to understand (at least they should be :)). 
Save the time spent writing comments and improve the tests instead. For most of the functions you're just repeating the behavior asserted; this is already written down by the assertion itself, so the comment just duplicates the code. In my opinion, test names should say what they do. NullOrEmpty doesn't state anything about what actually happens. NullInputReturnsNull and EmptyInputReturnsEmptyString do, however. Furthermore, speaking test names can be used as documentation (i.e. testdox). When are you done? Never. Eventually you (or someone else) will find bugs, misbehavior, ... (e.g. during integration). It's important to keep the tests up to date at those times as well. Update On 2. I'm testing for MAX_INT because your promise is that this function accepts integer values in [0, MAX_INT]. Imagine at some point there is a maxLength + 1 statement for some reason. As a rule of thumb: always test the border values of parameters. Further reading: equivalence partitioning and boundary value analysis. On 3. That'd be four tests. You should split those. On 4. The point is you are testing the C# library, not your code. You don't have any code dedicated to MB handling. What you are actually doing is testing the Substring method and Length property for MB handling.
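The boundary checks the answer recommends can be sketched as follows. This is a Python rendering of the same contract (not the poster's C#), with the hypothetical helper truncate standing in for the extension method; each assertion covers one equivalence class or boundary value.

```python
import sys

# Hypothetical Python stand-in for the C# Truncate extension method.
def truncate(s, max_length):
    if max_length < 0:
        raise ValueError("max_length must be >= 0")
    if not s or len(s) <= max_length:
        return s
    return s[:max_length]

# Boundary values of max_length: -1 / 0 / 1 and the upper limit.
try:
    truncate("ABC", -1)
    raise AssertionError("expected ValueError")
except ValueError:
    pass
assert truncate("ABC", 0) == ""
assert truncate("ABC", 1) == "A"
assert truncate("ABC", sys.maxsize) == "ABC"

# len(input) relative to max_length: shorter / equal / longer.
assert truncate("AB", 3) == "AB"
assert truncate("ABC", 3) == "ABC"
assert truncate("ABCD", 3) == "ABC"

# Null and empty handled as separate cases, one assertion each.
assert truncate(None, 42) is None
assert truncate("", 42) == ""
```

In a real test suite each group above would be its own named test (e.g. NullInputReturnsNull), so a failure immediately identifies the broken case.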
{ "domain": "codereview.stackexchange", "id": 2820, "tags": "c#, unit-testing" }
A small "TODO" app built using a modular pattern
Question: I am just beginning to learn JavaScript and although I feel I am starting to grasp the language, I don't feel I know how to structure an app properly. I decided to start small and build a "TODO"-app using a modular pattern. I am aware that I should use more validations, there are unused functions, and there may be a missing semicolon here and there. My main focus on this thing was to see if I could structure an app in a good way. I would love some feedback on my setup. Is this a proper way to set up such a small application? Am I missing something completely? Am I using the modular pattern correctly? Generally, have I gotten the gist of it? (function todoApp() { var storage = (function () { var _db, todos, addTodo, removeTodo, updateTodo, getAllTodos, save; if (!window.localStorage.todoApp) { window.localStorage.todoApp = JSON.stringify({todos: []}); } _db = JSON.parse(window.localStorage.todoApp); addTodo = function addTodo(todo) { var id = Date.now().toString(); _db.todos.push({id: id, todo: todo}); save(); }; removeTodo = function removeTodo(id) { _db.todos.every(function (c, i) { if (c.hasOwnProperty("id") && c.id == id) { _db.todos.splice(i, 1); save(); return false; } }); }; updateTodo = function updateTodo(id, todo) { _db.todos.every(function (c, i) { if (c.hasOwnProperty("id") && c.id == id) { _db.todos[i].todo = todo; save(); return false; } }); }; getAllTodos = function getAllTodos() { return _db.todos; }; save = function save() { window.localStorage.todoApp = JSON.stringify(_db); }; return { addTodo: addTodo, removeTodo: removeTodo, updateTodo: updateTodo, getAllTodos: getAllTodos }; })(); var view = (function (storage) { var $todos = $("[rel='jstodo-container'] > [rel='jstodo-todos']"), todos = storage.getAllTodos(), render; render = function render() { $todos.html(""); todos.forEach(function (c) { $todos.append("<li rel='" + c.id + "'>" + c.todo + "</li>"); }); }; return { render: render }; })(storage); view.render(); 
$("[rel='jstodo-container'] > [rel='jstodo-submit']").on("click", function () { storage.addTodo($("[rel='jstodo-container'] > [rel='jstodo-input']").val()); view.render(); }); })(); <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width"> <script src="https://code.jquery.com/jquery-3.1.1.min.js" integrity="sha256-hVVnYaiADRTO2PzUGmuLJr8BLUSjGIZsDYGmIJLv2b8=" crossorigin="anonymous"></script> <title>JS Bin</title> </head> <body> <div rel="jstodo-container"> <h1>Todos: </h1> <input type="text" rel="jstodo-input"/> <button rel="jstodo-submit">Legg til!</button> <ul rel="jstodo-todos"></ul> </div> </body> </html> Here is a jsbin of the project (in case the Stack Snippet has permission issues). Answer: There's some stuff that's confusing in this code and its structure. A few things of note: First, when defining a class you do not need the beginning ( and ending )();. That syntax creates a nameless function that is run once and then, for all intents and purposes, removed from memory afterwards. You want to create a class that can be referenced and used later on. Simply: function todoApp(){ //define class here } This creates an object constructor that you then call to create and store the object: var myTodoApp = new todoApp(); Your jQuery call then goes after that, outside of your constructor, not inside it: $("[rel='jstodo-container'] > [rel='jstodo-submit']").on("click", function () { myTodoApp.storage.addTodo($("[rel='jstodo-container'] > [rel='jstodo-input']").val()); myTodoApp.view.render(); }); When giving a class (your todoApp function) a custom class as a variable, that 'sub-class' should be defined separately and then referenced inside of your class. Inside of your class, anything you want to be accessible outside of it should start with this. 
var myTodoApp = new todoApp(); $("[rel='jstodo-container'] > [rel='jstodo-submit']").on("click", function () { myTodoApp.storage.addTodo($("[rel='jstodo-container'] > [rel='jstodo-input']").val()); myTodoApp.view.render(); }); function todoApp(){ this.storage = new storageClass(); this.example = 'this is a string accessible outside of the class'; //run any initialization code here } function storageClass(){ var _db, todos, addTodo, removeTodo, updateTodo, getAllTodos, save; this.addTodo = function addTodo(todo) { //do stuff this.save(); //note: need 'this.' for class to access itself }; } This is more along the structure you are looking for. A few things I would suggest looking into are the concept of this and maybe some Google results on JavaScript object-oriented programming. Also, having a good strong IDE (integrated development environment) helps a lot. I recommend Komodo personally, but to each their own; there might be a better one out there for you. A good strong one can give code completion and show you syntax errors to help you get the structure down. One other thing of note that helps quite a bit: function declarations in JavaScript are hoisted, i.e. processed by the interpreter before any code is actually run (except for dynamically added scripts, but that is a whole different beast). What this means is that before any code runs, the interpreter has already processed the class definition. In other words, you can call your classes before defining them: function todoApp(){ this.storage = new storageClass(); } function storageClass(){ this.addTodo = function(todo){ } } var todo = new todoApp(); todo.storage.addTodo('fooBar'); Will do the same as: var todo = new todoApp(); todo.storage.addTodo('fooBar'); function storageClass(){ this.addTodo = function(todo){ } } function todoApp(){ this.storage = new storageClass(); } Outside of class definitions, order matters, of course. You must create the object instance of your class before using it. 
Note that the latter is the standard. Classes should be defined below the code to be run globally; this makes for better readability. EDIT: I also just noticed your JSON-style objects. You do not need to define them in that manner. JavaScript automatically determines variable types, which is why you use var instead of char and int and such. For this same reason, object literals are defined simply by their format. It's a harder concept for some to grasp at first and I would definitely read more into it, but basically it is an associative array, or array of pairs (keys and values), in this format: {key: 'value', key: 'value'} It is defined inside of curly brackets { }. The key is a name to reference the value and does not need quotes. The value needs quotes unless you want to use a variable; numbers also do not need quotes for the values. Values can be objects or arrays as well. The pairs are delimited by a comma ,. As long as you follow these rules JavaScript does the rest. var home = '1234 Fake St.'; var example = {date: '12/01/2016', time: '9:42am', location: home, peopleAttending: 10};
{ "domain": "codereview.stackexchange", "id": 23207, "tags": "javascript, beginner, to-do-list" }
Before and During the sunrise
Question: Last year we went to a remote place near a forest, to a site with dark and clear skies. I captured two photos of the eastern horizon before and during sunrise. I have questions regarding the effects you can see here. In the first image the sky appears dark just above the horizon, with a bright band above it. It looks much like the photos in the Wikipedia article Earth's shadow, which says this happens because the earth casts its own shadow on the atmosphere, and the bright band looks like the Belt of Venus. BUT those are phenomena observed in the direction opposite to the setting or rising sun. In the second image you can clearly see the sun; while it is rising, the dark region seems to form a valley-like structure sloping from both sides towards the sun. What makes this happen? Answer: The first picture shows the point in time where ancient cultures started counting the twelve (seasonally variable) hours of the day, i.e. when a line appeared between dark blue and pink. At night the sun is obscured by the bulk of the earth, and the observable sky is in its shadow. As the earth turns towards the sun, a point comes where the higher part of the atmosphere is no longer in the shadow and is being illuminated, which becomes more obvious if there are clouds high up, or airplane trails. The diagram below, showing the displacement of the Sun's image at sunrise and sunset, should clear up questions: S' is what is observed, but the real sun is at S, and the picture records the image refracted through the atmosphere. The colors in the atmosphere depend on the scattering of light and on the type of dust and humidity present. The sun emits all visible frequencies of light, which add up as white in our color perception. Atmospheric nitrogen and oxygen scatter violet light most easily, followed by blue light, green light, etc. 
So as white light (red, orange, yellow, green, blue, indigo, violet, in rising frequency) from the sun passes through our atmosphere, the high frequencies (BIV) are scattered by atmospheric particles while the lower frequencies (ROY) are most likely to pass through the atmosphere without a significant alteration in their direction. This scattering of the higher frequencies illuminates the sky with light on the BIV end of the visible spectrum, which we see as mostly blue. This also holds for the shadow still seen in the second picture: what you have caught is the refracted image of the sun coming through the atmosphere, while the sun itself is still below the horizon. Refraction happens at the scale of the sun's image; the rest of the horizon receives fewer refracted rays the further it is from the sun's image, so the earth's shadow remains dark. The sun's color is in the lower part of the spectrum, which we observe as mainly orange, because the higher frequencies are scattered more by the atmosphere. The inverted cone of shadow can be explained by the fall-off of the intensity of light with respect to the projected distance from the center of the sun. Less light is there to be refracted, scattered, and to illuminate, as a function of the area of the sun seen over the horizon: the higher parts of the image contribute more to the intensity than the lower part.
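The "violet is scattered most easily" claim follows from the standard Rayleigh scattering law, in which scattered intensity goes as $1/\lambda^4$. A one-line sketch, with illustrative round-number wavelengths:

```python
# Rayleigh scattering: intensity ~ 1/lambda^4.
# Wavelengths below are illustrative round numbers.
lam_violet = 400e-9  # m
lam_red = 700e-9     # m

# relative scattering strength of violet vs. red light
ratio = (lam_red / lam_violet) ** 4
print(f"violet light is scattered ~{ratio:.1f}x more strongly than red")
```

Roughly an order of magnitude, which is why the sky is blue-violet while transmitted sunlight near the horizon looks orange-red.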
{ "domain": "physics.stackexchange", "id": 29019, "tags": "earth" }
Momentum conservation in spontaneous pair creation and annihilation
Question: I know that in free space, photon cannot decay into an electron and a positron since momentum is frame dependent for massive particles while invariant for a photon. Given this, how is spontaneous pair creation and annihilation possible? Can someone shed some light on it? Or is it that it is actually impossible, and that I had a wrong comprehension? Answer: Pair creation You are right: In free space a photon cannot decay into an electron-positron pair (because it would violate energy/momentum conservation). However, near an atomic nucleus a photon can decay into an electron-positron pair. In this process the atomic nucleus receives some recoil. See also Wikipedia: Pair production. The process can be visualized by a Feynman diagram like below. image from Wikipedia: Pair production Pair annihilation You are also right, that an electron-positron pair in free space cannot decay into a photon (again because it would violate energy/momentum conservation). However, an electron-positron pair can decay into two photons. The two gamma ray photons depart in roughly opposite directions. See also Wikipedia: Electron–positron annihilation. The process can be visualized by the Feynman diagram below. image from Feynman diagram for annihilation
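The energy-momentum argument can be sketched numerically (my own illustration, not from the answer): the invariant mass squared $s = E^2 - |\vec p|^2 c^2$ of a single photon is zero in every frame, while an electron-positron pair has $s \ge (2 m_e c^2)^2 > 0$, so no frame can reconcile the two without a third body (the nucleus) absorbing recoil.

```python
# Sketch of why a lone photon cannot become an e+ e- pair.
# Units: MeV with c = 1.
m_e = 0.511  # electron mass in MeV

def invariant_mass_sq(E, px, py, pz):
    """s = E^2 - |p|^2 for a four-momentum (E, px, py, pz)."""
    return E**2 - (px**2 + py**2 + pz**2)

# a 5 MeV photon moving along z: E = |p|, so s = 0
s_photon = invariant_mass_sq(5.0, 0.0, 0.0, 5.0)

# the lightest possible e+ e- pair: both at rest in their CM frame
s_pair_min = (2 * m_e) ** 2

print(s_photon, s_pair_min)  # s is frame-invariant, so these can never match
```

The same check run in reverse shows why a pair must annihilate into two photons rather than one.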
{ "domain": "physics.stackexchange", "id": 62079, "tags": "particle-physics, quantum-electrodynamics" }
Time in general relativity
Question: A physical quantity is introduced by its operational definition. In general relativity we use a differential manifold to describe the 4-dimensional space-time and, to identify a point in it, we use a reference frame. This frame consists of an origin and four coordinates: time and the three spatial coordinates. My question is: if we want to practically identify an event in space-time, how do we measure its space-time coordinates, especially time? With a clock in the neighborhood of the spatial location of the event? With the help of some particular "game" of light signals? Answer: A physical quantity is introduced by its operational definition. Yes. Excellent. And physical quantities would include facts like whether an observer receives one signal between receiving two other signals, or sees one mark between two other marks (clocks and rulers are designed on these principles). In general relativity we use a differential manifold to describe the 4-dimensional space-time [...] Correct. [...] and, to identify a point in it, we use a reference frame. No. The manifold already has points, which in relativity are called events. You can label them any way you want, just like you could give observers names like Alice or Bob. This can help you to communicate, but saying the label by itself to someone who doesn't know how you assigned the labels is not physical. This frame consists of an origin and four coordinates: time and the three spatial coordinates. That sounds like a global frame in special relativity. If we want to practically identify an event in space-time, how do we measure its space-time coordinates, especially time? Coordinates are not physical by themselves. If you look at two points you can't tell their x coordinates; maybe one of them is the origin, maybe the other is; there are many possible coordinate systems. It's not an operational definition. 
You could ask about the so-called interval between two events along a path, and if the events were infinitesimally close you could talk about the interval between them without having to specify a path. But you can't look at the physics and, from the physics, figure out something with arbitrariness in it, like what the coordinates are in some coordinate system. This already happened in regular Newtonian mechanics: the origin could be anywhere, the x axis could point in any direction. With a clock in the neighborhood of the spatial location of the event? With the help of some particular "game" of light signals? That can help you to find the interval between two events. It still can't tell you whether one or the other event is the origin in some arbitrary coordinate system, since there are coordinate systems where one is the origin, others where the other is, yet more where neither of them is, and even coordinate patches where no event is the origin. Remember in Newtonian mechanics, when the origin could be anywhere, and the x axis could point in any direction? In special relativity, the origin can be any event and the time axis can point in any timelike direction (any direction that is tangent to the world line of a massive observer, or equivalently any direction with a timelike sign for its squared interval). This carries over to general relativity: you don't have global frames (that's what the "general" in general relativity means), but it is still the case that the tangent to an observer (no matter how quickly it moves at a sublight speed, or which direction the velocity goes) can be a time axis for a local coordinate patch. So a given event could be the origin of a local coordinate patch, or not. So its coordinates could be zero ... or not. And it could be the origin in some coordinate patches and not the origin in others. And coordinate differences are not physically meaningful. 
In fact you write the metric in a coordinate system because you can use it to compute physical things from the coordinate system. So the physics is "Coordinate system + Metric in that coordinate system = Physically meaningful results." What kind of physically meaningful results can you get? Given a coordinate path (a parameterized collection of coordinates) you can use the metric to find out if the path is a possible path of an observer, by seeing if the tangent to the curve is timelike (metrics allow you to compute intervals). If it is, then you could break it into regular pieces, each of which has the same arc length, where arc length is measured not by coordinate differences but by the interval you get from the metric. Now you can relate this to clocks. If a clock travels that path, it will tick an equal amount on each of those pieces. So now you can predict how many clock ticks happen between two events that the path passes through. That's something physical. And that prediction comes out the same no matter what coordinate system you used. If you use a different coordinate system, all the coordinates are different, and the metric possibly has different values, but you compute the interval between events in the same way. So you will agree whether the tangents are timelike, and you will agree about which events break a given path from event A to event B into 2, 3, ... or 100 pieces of equal arc length, when arc length is measured by the interval. So you agree on the physics, regardless of your coordinate system. That's great. But it means there isn't an experiment you can do to find out "the" coordinates of an event, because they aren't unique or physical. What you have to do is make a model. You make a mathematical thing, assert that certain parts of it correspond to things in reality, match them up, and then look at other parts of the model to extract your predictions. 
The coordinates you use (if any) in your model are irrelevant. Sure you might use ones that feel very intuitive to make your life easier, but that doesn't make them physical, just convenient for you. And I said "if any" because even the use of coordinates at all is not required. All you really want to know is whether an event happened between two particular clock ticks or a ruler has two particular marks on either side of an object. Those are the kind of things you have in the lab. If you actually put the ruler and clocks into your model then that information is already there and you don't need to add a coordinate system. What general relativity does is restrict which models you can make, it restricts you to only making models that satisfy Einstein's Field Equation. Once you have one of those, you don't need to pick a coordinates system if the model already has the information you need.
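The clock-tick prediction described above can be sketched numerically. This is my own illustration, assuming flat (Minkowski) space with c = 1 and a hypothetical helper proper_time; the point is that the physical number comes from summing intervals given by the metric, not from the coordinate values themselves.

```python
import math

# Sum the interval sqrt(dt^2 - dx^2) over small steps of a (t, x) path,
# using the flat Minkowski metric with c = 1 (illustrative assumption).
def proper_time(path):
    tau = 0.0
    for (t0, x0), (t1, x1) in zip(path, path[1:]):
        dt, dx = t1 - t0, x1 - x0
        ds2 = dt**2 - dx**2
        assert ds2 > 0, "path must be timelike"
        tau += math.sqrt(ds2)
    return tau

# a clock moving at v = 0.6 from coordinate time t = 0 to t = 10
path = [(t / 10, 0.6 * t / 10) for t in range(101)]
tau = proper_time(path)
print(tau)  # ~8.0: the clock ticks 8 units, whatever coordinates we chose
```

Relabeling the events with boosted or shifted coordinates changes every entry in the path and the components of the metric, but the same computation returns the same proper time, which is exactly the answer's point about what is physical.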
{ "domain": "physics.stackexchange", "id": 23885, "tags": "spacetime, time, relativity" }
Sperm formation - Frequent Ejaculations
Question: I have visited various authenticated websites and other materials and learnt that a complete sperm development takes approximately 64 days. My doubts : If sperm truly takes 64 days to develop, how can a man ejaculate multiple times (e.g., 4-5) in a day and still have sperm come out each time? Why don't all the sperm come out at once? And each time he ejaculates, the sperm are complete (i.e., the "head" and "tail" portions are always included), right? Does frequent ejaculation, then, reduce the sperm count of each subsequent ejaculation? If so, do the sperms remain depleted for extended periods (i.e., 64 days), or are they replaced more regularly? At what rate are they replaced? Answer: The arithmetic of human sperm A young, healthy man produces about 1000 sperms every second, which comes to about 90 million per day [1]. These sperms are stored in the epididymis and ductus deferens until ejaculation. With several days of storage, the number can easily become big enough to allow for multiple ejaculations on the same day. The maturation period of 64 days does not matter here, because sperms form and mature asynchronously [1]. While one spermatogonium is on day 1 of its life, another is on day 2, another on day 3 and so on. Only those sperms that are mature are released from the testes (NOT directly in semen, but into the epididymis for storage). Does semen normally contain sperm with abnormal morphology? Yes. Even men in whom 96% of the ejaculated sperms have abnormal morphology can successfully conceive [2]. What happens on ejaculating repeatedly? This has already been assessed in several studies [3–5]. Overall, the results have been that with repeated ejaculation in quick succession, Semen volume decreases. Sperm count decreases (pointing to sperm depletion) but not to zero. The percentage of sperms with abnormal morphology remains unchanged. Hope that answers your questions. References Mesiano S, Jones EE. The male reproductive system. 
In: Boron WF, Boulpaep EF, editors. Medical physiology. 3rd ed. Philadelphia: Elsevier; c2017. p 1092–1107. Cooper TG, Noonan E, von Eckardstein S, et al. World Health Organization reference values for human semen characteristics. Human Reproduction Update. 2010 May–Jun;16(3):231–45. doi: 10.1093/humupd/dmp048 Oldereid NB, Gordeladze JO, Kirkhus B, Purvis K. Human sperm characteristics during frequent ejaculation. J Reprod Fertil. 1984 May;71(1):135–140. doi: 10.1530/jrf.0.0710135 Zvĕrina J, Pondĕlícková J. Changes in seminal parameters of ejaculates after repeated ejaculation. Andrologia. 1988 Jan–Feb;20(1):52–4. doi: 10.1111/j.1439-0272.1988.tb02363.x Mayorga-Torres JM, Agarwal A, Roychoudhury S, et al. Can a short term of repeated ejaculations affect seminal parameters? J Reprod Infertil. 2016 Jul-Sep;17(3):177-83. http://www.jri.ir/article/674
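The production figure quoted in the answer is easy to sanity-check with back-of-the-envelope arithmetic:

```python
# Check the quoted rate: ~1000 sperm per second over one day.
sperm_per_second = 1000
seconds_per_day = 24 * 60 * 60

per_day = sperm_per_second * seconds_per_day
print(per_day)  # 86400000, consistent with the quoted ~90 million per day
```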
{ "domain": "biology.stackexchange", "id": 10626, "tags": "reproduction, sexual-reproduction" }
need help with camera simulation
Question: hi all. i using ubuntu 12.04 with ros-fuerte i trying to connect a camera sensor to a car urdf model that i created. i follow the example on http://ros.org/wiki/urdf/Tutorials/AddingSensorsToPR2. this is the relevant part of my urdf (xacro) model : <joint name="camera_joint" type="fixed"> <origin xyz="${chassis_length/2} 0 ${chassis_hight/2}" rpy="0 0 0" /> <parent link="base_link" /> <child link="camera_link"/> </joint> <link name="camera_link"> <inertial> <mass value="0.01" /> <origin xyz="0 0 0" /> <inertia ixx="0.001" ixy="0.0" ixz="0.0" iyy="0.001" iyz="0.0" izz="0.001" /> </inertial> <visual> <origin xyz="0 0 0" rpy="0 0 0"/> <geometry> <box size="0.001 0.001 0.001" /> </geometry> </visual> <collision> <origin xyz="0 0 0" rpy="0 0 0"/> <geometry> <box size="0.001 0.001 0.001" /> </geometry> </collision> </link> <gazebo reference="dany_car_camera"> <sensor:camera name="dany_car_camera"> <imageSize>640 480</imageSize> <imageFormat>R8G8B8</imageFormat> <hfov>90</hfov> <nearClip>0.01</nearClip> <farClip>100</farClip> <updateRate>20.0</updateRate> <controller:gazebo_ros_camera name="camera_controller" plugin="libgazebo_ros_camera.so"> <alwaysOn>true</alwaysOn> <updateRate>20.0</updateRate> <imageTopicName>dany_car_camera/image</imageTopicName> <frameName>camera_link</frameName> <interface:camera name="dany_car_camera_iface"/> </controller:gazebo_ros_camera> </sensor:camera> <turnGravityOff>false</turnGravityOff> <material>Gazebo/Red</material> </gazebo> when i roslaunch empty_world.launch and my launch file i get no errors about the camera. the simulation loads and runs correctly. but when i try to view the camera window by commanding $ rosrun image_view image_view image:=dany_car_camera/image i get an empty window. pleas help me found my mistake. tanks. Originally posted by dmeltz on ROS Answers with karma: 192 on 2012-08-11 Post score: 1 Original comments Comment by SL Remy on 2012-08-12: What is the result of a rostopic list btw? 
Comment by joq on 2012-08-13: Also, rostopic hz is useful to check whether anything is being published. Answer: I found my mistake : the reference field must be equal to the link name on which the sensor (in this case camera) is positioned. here the correct code : <!-- CAMERA : joint , link , sensor --> <joint name="yuval_camera_joint" type="fixed"> <origin rpy="0 0 0" xyz="0.5 0 0.1"/> <parent link="base_link"/> <child link="camera_link"/> </joint> <link name="camera_link"> <inertial> <mass value="0.01" /> <origin xyz="0 0 0" /> <inertia ixx="0.001" ixy="0.0" ixz="0.0" iyy="0.001" iyz="0.0" izz="0.001" /> </inertial> <visual> <origin xyz="0 0 0" rpy="0 0 0"/> <geometry> <box size="0.1 0.1 0.1" /> </geometry> <material name="red"/> </visual> <collision> <origin xyz="0 0 0" rpy="0 0 0"/> <geometry> <box size="0.1 0.1 0.1" /> </geometry> </collision> </link> <gazebo reference="camera_link"> <sensor:camera name="dany_car_camera_sensor"> <imageFormat>R8G8B8</imageFormat> <imageSize>704 480</imageSize> <hfov>45</hfov> <nearClip>0.1</nearClip> <farClip>100</farClip> <updateRate>20.0</updateRate> <controller:gazebo_ros_prosilica name="dany_car_CAMERA_controller" plugin="libgazebo_ros_prosilica.so"> <cameraName>dany_car_camera_sensor</cameraName> <alwaysOn>true</alwaysOn> <updateRate>20.0</updateRate> <imageTopicName>/dany_car_sensors/camera/image_raw</imageTopicName> <cameraInfoTopicName>/dany_car_sensors/camera/camera_info</cameraInfoTopicName> <pollServiceName>/dany_car_sensors/camera/request_image</pollServiceName> <frameName>camera_link</frameName> <CxPrime>1224.5</CxPrime> <Cx>1224.5</Cx> <Cy>1025.5</Cy> <focalLength>849.803363</focalLength> <!-- focal_length = width_ / (2.0 * tan( HFOV/2.0 )) --> <hackBaseline>0.0</hackBaseline> <distortionK1>0.0</distortionK1> <distortionK2>0.0</distortionK2> <distortionK3>0.0</distortionK3> <distortionT1>0.0</distortionT1> <distortionT2>0.0</distortionT2> <interface:camera name="dany_car_camera_iface" /> 
</controller:gazebo_ros_prosilica> </sensor:camera> <turnGravityOff>false</turnGravityOff> <material>Gazebo/Red</material> </gazebo> Originally posted by dmeltz with karma: 192 on 2012-12-12 This answer was ACCEPTED on the original site Post score: 0
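The `focalLength` entry in the corrected plugin block can be checked against the formula given in its own comment, `focal_length = width_ / (2.0 * tan( HFOV/2.0 ))`; a quick sketch using the `<imageSize>` width (704) and `<hfov>` (45 degrees) from the snippet:

```python
import math

# Check the focalLength entry against the formula in the URDF comment.
width = 704                    # <imageSize> width
hfov_deg = 45                  # <hfov>

focal_length = width / (2.0 * math.tan(math.radians(hfov_deg) / 2.0))
print(focal_length)            # ~849.8032, agreeing with <focalLength>849.803363 to ~3 decimals
```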
{ "domain": "robotics.stackexchange", "id": 10570, "tags": "ros, gazebo-plugin, camera" }
Query for combining results of same query running across multiple databases
Question: I wrote a query to run the same query across multiple databases and combine the results. While it seems plenty quick I was wondering if there is a better way to do this. create table #serverlist( ID smallint IDENTITY(1,1), dbName varchar(50) ) create table #browsercounts( ID smallint IDENTITY(1,1), --Email varchar(50), Browser varchar(50), Counts int) insert into #serverlist select name from sys.databases where name like '%Test2Portal%' and name not like '%_Test%' Declare @counter int, @rows int set @counter = 1 set @rows = (select COUNT(dbName) from #serverlist) while (@counter <= (@rows)) Begin Declare @SQL varchar(1000) Declare @database varchar(50) = (select dbName from #serverlist where ID = @counter) Select @SQL = 'select Browser, COUNT(Browser) as Counts from ' + @database+ '.dbo.Session where Browser is not null group by Browser' insert into #browsercounts Exec (@SQL) set @counter += 1 End Select * From #browsercounts drop table #serverlist drop table #browsercounts Answer: You probably don't need any temp tables unless you need access to these temp tables across different stored procedures. Here is how I would write it: DECLARE @sql VARCHAR(MAX) SELECT @sql = COALESCE(@sql + ' UNION ALL ', '') + 'SELECT Browser, COUNT(Browser) AS Counts FROM [' + name + '].dbo.Session WHERE Browser IS NOT NULL GROUP BY Browser' FROM sys.databases WHERE name LIKE '%Test2Portal%' AND name NOT LIKE '%_Test%' EXEC (@sql) -- the parentheses make this execute the string rather than look up a procedure by that name Hope this is helpful.
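The `COALESCE` trick in the answer is just string aggregation: emit one subquery per database name and join them with `UNION ALL`. A language-neutral sketch of the same idea in Python (the database names here are made up for illustration):

```python
# Build one UNION ALL query from a list of database names (illustrative names).
db_names = ["Test2Portal_A", "Test2Portal_B", "Test2Portal_C"]

subquery = ("SELECT Browser, COUNT(Browser) AS Counts "
            "FROM [{0}].dbo.Session "
            "WHERE Browser IS NOT NULL GROUP BY Browser")

sql = " UNION ALL ".join(subquery.format(name) for name in db_names)
print(sql.count("UNION ALL"))   # 2 -- one separator fewer than the number of databases
```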
{ "domain": "codereview.stackexchange", "id": 1032, "tags": "sql, sql-server, t-sql" }
Is the kinetic energy $T=\frac{p^2}{2 m}$ always valid?
Question: As the title states, Is $T=p^2 / 2 m$, where $T$ is the kinetic energy, $p$ is the norm of the 4-momentum, and $m$ is mass, always valid? My main intuition is that $T$ may not be relativistic; although, I assume it is relativistic because $p^2$ could be written as $p_{\mu } p^{\mu }$ which is $g_{\mu \nu } p^{\mu } p^{\nu }$. This seems to be relativistic and seems to work in the expression for $T$. Additionally, will the expression for mass need to be made relativistic? Answer: No. In special relativity energy is given by $$E = \sqrt{(pc)^2 + (mc^2)^2},$$ so the kinetic energy is given by: $$T = \sqrt{(pc)^2 + (mc^2)^2} - mc^2.$$ When $pc \ll mc^2$ you can do a Taylor expansion to get: $$T \approx \frac{p^2}{2m} - \frac{p^4}{8 m^3 c^2} + \ldots .$$ When you do it in the high momentum limit $pc \gg mc^2$ you get: $$T \approx -mc^2 + pc + \frac{m^2c^3}{2 p} + \ldots. $$
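The low-momentum expansion in the answer is easy to verify numerically; a quick sketch in units where $c = m = 1$ (an arbitrary illustrative choice):

```python
import math

# Compare the exact relativistic kinetic energy with its low-momentum expansion.
c, m = 1.0, 1.0
p = 0.01                                           # pc << mc^2, deeply non-relativistic

T_exact = math.sqrt((p * c)**2 + (m * c**2)**2) - m * c**2
T_newton = p**2 / (2 * m)                          # leading (Newtonian) term
T_corrected = T_newton - p**4 / (8 * m**3 * c**2)  # first relativistic correction

# Adding the p^4 term shrinks the error by several orders of magnitude:
print(abs(T_exact - T_newton))                     # ~1.2e-09
print(abs(T_exact - T_corrected))                  # far smaller still
```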
{ "domain": "physics.stackexchange", "id": 36353, "tags": "special-relativity, energy, momentum" }
If two bodies with the same kinetic energy can lose or gain their energies, why isn’t the same true for bodies with the same temperature?
Question: Suppose we have two bodies A and B lying on a horizontal table, moving towards each other. $m_A= m$, $\vec v_A= -\vec v$; $m_B=2m$, $\vec v_B= \frac{\vec v}{\sqrt{2}}$. From the information given, we see that the two bodies have the same initial kinetic energy. If we find the kinetic energy of the two after an elastic collision, we notice that body A with mass $m$ gains some energy. Similarly, suppose we have two large bodies at the same temperature. If we bring them in contact the molecules will collide elastically with one another. So why doesn't their temperature change if the energies of the individual molecules do change? Or more specifically, why isn't the same idea of energy exchange true for bodies at the same temperature? Deriving the fact that energy will transfer (can be ignored): Let's find the velocity of the center of mass: $$\vec v_{cm}=\frac{m_A \vec v_A+m_B\vec v_B}{m_A+m_B}$$ $$\vec v_{cm} = \frac{2m\left(\frac{\vec v}{\sqrt{2}}\right)-m\vec v}{3m}\Rightarrow v\left(\frac{\sqrt{2}-1}{3}\right)$$ Using the center-of-mass reflection, the velocity of A after the collision is given by: $$\vec v'_A= 2\vec v_{cm} - \vec v_A$$ So, $$\vec v'_A= 2v\left(\frac{\sqrt{2}-1}{3}\right) + v \Rightarrow v\left(\frac{2\sqrt{2}+1}{3}\right)$$ Similarly, $$\vec v'_B = 2v\left(\frac{\sqrt{2}-1}{3}\right)-\frac{v}{\sqrt{2}}\Rightarrow v\left(\frac{1-2\sqrt{2}}{3\sqrt{2}}\right)$$ So from the above derivation we see that the energy of each of the bodies changes after the head-on elastic collision. Answer: In the center of mass system of two bodies, if the two bodies have a mass equal to each other and scatter elastically, the kinetic energy of each is the same, as observed in the comment. In the statistical treatment of an ideal gas that you refer to, the masses used to derive the temperature are all the same. So in both cases, neither the kinetic energy in your example nor the temperature of a mixture of two ideal gases at the same temperature shows any change.
Here is a discussion about mixing two ideal gases at the same temperature, which is taken to be the same after mixing because of energy conservation. If your two bodies have different masses, they will have different kinetic energies after the scatter, but by conservation of energy the total energy of the two-body system is the same before and after. The temperature is connected to the average kinetic energy, and the average over your two bodies should not change.
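The worked example in the question and the conservation statement in the answer can both be checked numerically; a small sketch with $m = v = 1$ (arbitrary illustrative units):

```python
import math

# 1-D elastic collision via the centre-of-mass reflection trick.
m, v = 1.0, 1.0
mA, vA = m, -v
mB, vB = 2.0 * m, v / math.sqrt(2)

def ke(mass, vel):
    return 0.5 * mass * vel**2

v_cm = (mA * vA + mB * vB) / (mA + mB)
vA_after = 2 * v_cm - vA          # matches v(2*sqrt(2)+1)/3 from the derivation
vB_after = 2 * v_cm - vB

print(ke(mA, vA), ke(mB, vB))                 # both ~0.5: equal before the collision
print(ke(mA, vA_after), ke(mB, vB_after))     # unequal after: the lighter body gains
total_before = ke(mA, vA) + ke(mB, vB)
total_after = ke(mA, vA_after) + ke(mB, vB_after)
print(total_before, total_after)              # total kinetic energy is conserved
```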
{ "domain": "physics.stackexchange", "id": 79435, "tags": "energy, temperature, collision" }
Return outputs with zero counts
Question: I am running a simple 3 qubit circuit which produces the following results: {'000': 5, '001': 3, '010': 10, '100': 5, '101': 7, '110': 7, '111': 4} There are no counts of 011. Is there a simple way within qiskit to also return the result '011': 0? So the final results would be {'000': 5, '001': 3, '010': 10,'011': 0, '100': 5, '101': 7, '110': 7, '111': 4} Currently when I run this circuit I get an error as I am trying to call res['011'], where res is the list of results. I cannot just increase the number of shots unfortunately I am constrained to this number of qubits. Answer: You can use .get() and return zero as the default value. res.get('011', 0)
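If you want the padded dictionary for every bitstring rather than a per-key lookup, the same `.get()` default generalises; a small sketch using the counts from the question:

```python
# Pad a counts dictionary with zeros for all 2**n bitstrings.
res = {'000': 5, '001': 3, '010': 10, '100': 5, '101': 7, '110': 7, '111': 4}
n_qubits = 3

full = {format(i, '0{}b'.format(n_qubits)): res.get(format(i, '0{}b'.format(n_qubits)), 0)
        for i in range(2 ** n_qubits)}
print(full['011'])          # 0
print(sum(full.values()))   # 41, same total shot count as before
```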
{ "domain": "quantumcomputing.stackexchange", "id": 1984, "tags": "qiskit, programming, experimental-realization" }
Degrees of freedom for diatomic molecules
Question: I have a doubt about degrees of freedom (dof). As I have learned, the dof are nothing but the parameters necessary to specify the location and configuration of a system. If that's so, then why are there only two extra dof for diatomic molecules to account for rotation? When freely moving in space, the molecule can rotate about any axis which passes through the line joining the two atoms. So why do we count only two dof to uniquely specify its configuration? It should be greater than two. Answer: By analogy, think about the translational degrees of freedom. Even though there are only three axes of translation, the molecule can travel in any direction, because you can write its velocity vector as a linear combination of the unit vectors $ \vec u = a_x \hat u_x + a_y \hat u_y + a_z \hat u_z $. There is freedom in choosing what exactly these unit vectors are, but three are needed to express any velocity without degeneracy. Likewise, you can write any angular velocity vector as the sum of the angular velocities about two axes, so there are only two degrees of freedom. (You only need two, rather than the three you'd expect for a 3-dimensional rotation, because the molecule is symmetric)
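The two-axis counting can be made concrete by writing down the rotational energy of a rigid diatomic; in this sketch the masses, bond length, and angular velocity are all illustrative numbers:

```python
# Rotational kinetic energy of a diatomic aligned with the z-axis.
m, d = 1.0, 1.0            # atom mass and bond length (arbitrary units)
r = d / 2.0                # each atom's distance from the centre of mass

I_perp = 2 * m * r**2      # moment of inertia about any axis perpendicular to the bond
I_axis = 0.0               # about the bond axis itself: point atoms lie on the axis

wx, wy, wz = 0.3, -0.5, 7.0   # an arbitrary angular velocity vector
E_rot = 0.5 * (I_perp * wx**2 + I_perp * wy**2 + I_axis * wz**2)
print(E_rot)               # only wx and wy contribute: two quadratic terms, two dof
```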
{ "domain": "physics.stackexchange", "id": 52607, "tags": "thermodynamics, vibrations, molecules, degrees-of-freedom" }
teleop turtlebot ps3 joystick
Question: Hi! I am trying to control my turtlebot 2 with a ps3 controller under Ubuntu 12.04 and groovy. I tried to follow the ROS turtlebot teleop tutorial. The first command didn't work: roscd ps3joy/bin as there is no ps3joy/bin directory in my installation. Then, I tried another tutorial "PairingJoystickAndBluetoothDongle" where the rosrun ps3joy ps3joy worked, and I could pair my controller with the computer. I checked the function of the joy with the following: $ sudo jstest /dev/input/js1 It was working, but with js0 it didn't, it must be some other unknown device. Now I am trying to control the turtlebot with: roslaunch turtlebot_teleop ps3_teleop.launch Unfortunately, it generates some error written in red on the console saying the process has died: turtlebot@turtlebot-laptop:~$ roslaunch turtlebot_teleop ps3_teleop.launch ... logging to /home/turtlebot/.ros/log/2f70c202-8fb4-11e2-9551-00216a48015a/roslaunch-turtlebot-laptop-10508.log Checking log directory for disk usage. This may take awhile. Press Ctrl-C to interrupt Done checking log file disk usage. Usage is <1GB.
started roslaunch server [deleted] SUMMARY ======== PARAMETERS * /rosdistro * /rosversion * /teleop_velocity_smoother/accel_lim_v * /teleop_velocity_smoother/accel_lim_w * /teleop_velocity_smoother/decel_factor * /teleop_velocity_smoother/frequency * /teleop_velocity_smoother/speed_lim_v * /teleop_velocity_smoother/speed_lim_w * /turtlebot_teleop_joystick/axis_angular * /turtlebot_teleop_joystick/axis_deadman * /turtlebot_teleop_joystick/axis_linear * /turtlebot_teleop_joystick/scale_angular * /turtlebot_teleop_joystick/scale_linear NODES / joystick (joy/joy_node) teleop_velocity_smoother (nodelet/nodelet) turtlebot_teleop_joystick (turtlebot_teleop/turtlebot_teleop_joy) ROS_MASTER_URI=[deleted] core service [/rosout] found process[teleop_velocity_smoother-1]: started with pid [10549] process[turtlebot_teleop_joystick-2]: started with pid [10605] process[joystick-3]: started with pid [10629] [turtlebot_teleop_joystick-2] process has died [pid 10605, exit code -11, cmd /opt/ros/groovy/stacks/turtlebot_apps/turtlebot_teleop/bin/turtlebot_teleop_joy turtlebot_teleop_joystick/cmd_vel:=cmd_vel_mux/input/teleop_raw __name:=turtlebot_teleop_joystick __log:=/home/turtlebot/.ros/log/2f70c202-8fb4-11e2-9551-00216a48015a/turtlebot_teleop_joystick-2.log]. log file: /home/turtlebot/.ros/log/2f70c202-8fb4-11e2-9551-00216a48015a/turtlebot_teleop_joystick-2*.log I am wondering, whether it is because of the joystick is on /dev/input/js1 instead of js0. Can it be? If yes, how could I swap js0 and js1? Originally posted by ZoltanS on ROS Answers with karma: 248 on 2013-03-15 Post score: 1 Original comments Comment by Jie Sky on 2014-11-25: Hi,nice to meet you!I have zhe same trouble.Do you know how to use a normal wire joystick to control a turtlebot based kobuki?I have some trouble with it . 
Comment by Jie Sky on 2014-11-25: I can see the correct output in the echo "rostopic echo /joy".I refer to this website:http://wiki.ros.org/turtlebot_teleop/Tutorials/hydro/Joys Answer: Ok, now I found the problem and the solution that works. The problem is caused by the accelerometer of the HP laptop that is mapped as /dev/input/js0. By disabling the accelerometer the issue is gone: lsmod|grep accel --> this returns two lines in my case sudo modprobe -r hp_accel sudo modprobe -r lis3lv02d this is probably not the best solution, since it needs to be repeated after reboot. blacklisting the modules could be better, but I haven't tested it yet. The following steps are straightforward: install bluetooth usb key on the turtlebot laptop sudo bash in order to stop Ubuntu automatically popping up the Bluetooth pairing dialog {edit /etc/bluetooth/main.conf - add line "DisablePlugins = input"} "#rosrun ps3joy ps3joy.py" roslaunch turtlebot_teleop ps3_teleop.launch Previously I tried to change the joystick to /dev/input/js1 by using rosparam set ... command. It worked for the rostopic echo /joy, but the roslaunch turtlebot_teleop ps3_teleop.launch always produced error, it is somewhere hardcoded to use the js0 device. Originally posted by ZoltanS with karma: 248 on 2013-03-18 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by felix k on 2013-03-19: You should have put a line to that launch file. You can see in the joy_node source that the dev param is observed.
{ "domain": "robotics.stackexchange", "id": 13383, "tags": "ros, turtlebot, joystick, ps3joy, ros-groovy" }
Does Ideal Gas Theory make distinctions between chemical species?
Question: As stated above, in the theory of ideal gases, do we care about chemical species or can all be treated as the same, i.e. $\text{N}_2 = \text{O}_2$? My initial thought is no, since for the same system conditions, $PV=nRT$ outputs the same value regardless of chemical species. Answer: No, we don't care about chemical species, because an ideal gas is composed of point particles which do not interact except when they collide. Basically, we are completely disregarding the structure of the atoms/molecules and their interaction potential, setting it to $0$ (a pretty heavy approximation!), so of course we won't be able to distinguish between different chemical species. In fact, as you correctly remarked, the ideal gas equation doesn't care at all about chemical species. The difference starts to manifest itself when we consider more realistic models, such as the van der Waals gas. In the vdW approximation the molecules are approximated as hard spheres, not as point particles, so they occupy a certain amount of volume, called the excluded volume. Furthermore, we introduce an average short-ranged attractive potential between the particles, which results in an overall decrease in pressure, because it tries to keep the particles which are near the walls of the container close to each other. The resulting equation of state is $$ \left(p + \frac {n^2 a} {V^2}\right) (V-nb) = nRT $$ where $a$ and $b$ are fitting parameters which are different for different species (check it out).
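To see the species-dependence appear, compare the ideal and van der Waals pressures for the same $n$, $V$, $T$; the $a$, $b$ values below are approximate literature constants for N₂ and O₂ (treat them as illustrative inputs, not authoritative data):

```python
# Ideal-gas pressure is species-blind; the van der Waals pressure is not.
R = 0.083145                      # gas constant in L·bar/(mol·K)
n, V, T = 1.0, 1.0, 300.0         # 1 mol in 1 L at 300 K

def p_ideal(n, V, T):
    return n * R * T / V

def p_vdw(n, V, T, a, b):         # a in L^2·bar/mol^2, b in L/mol
    return n * R * T / (V - n * b) - a * n**2 / V**2

print(p_ideal(n, V, T))                     # identical for every gas
print(p_vdw(n, V, T, a=1.370, b=0.0387))    # N2 (approximate constants)
print(p_vdw(n, V, T, a=1.382, b=0.0319))    # O2 (approximate constants)
```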
{ "domain": "physics.stackexchange", "id": 34083, "tags": "ideal-gas, gas" }
Kaye Exercise 6.3.1, Deutsch algorithm modification
Question: This exercise is worded as follows: In the Deutsch algorithm, when we consider $U_f$ as a single-qubit operator $\hat{U_{f(x)}}$, $\frac{|0\rangle - |1\rangle}{\sqrt{2}}$ is an eigenstate of $\hat{U_{f(x)}}$, whose associated eigenvalue gives us the answer to the Deutsch problem. Suppose we were not able to prepare this eigenstate directly. Show that if we instead input $|0\rangle$ to the target qubit, and otherwise run the same algorithm, we get an algorithm that gives the correct answer with probability $\frac{3}{4}$. Furthermore, show that with probability $\frac{1}{2}$ we know for certainty that the algorithm has produced the correct answer. After a similar analysis to the original one, I was able to come up with the fact that if $f$ is constant, we will always measure $0$ while if it is balanced, we will measure $1$ with probability $\frac{1}{2}$. I find the wording of this problem a bit confusing. The only way I can get a probability of $3/4$ is by assuming that $f$ is equally likely to be constant or balanced and write $$P(\text{correct output}) = P(\text{measure }0|f \text{ constant})P(f \text{ constant}) +P(\text{measure } 1|f \text{ balanced})P(f \text{ balanced})$$ but nowhere in the problem is that assumption present. But if that is not the case then the second part of the question makes no sense. Any input here is appreciated. PS: In the problem a hint is given which says to write $|0\rangle$ in the basis of eigenvectors of $U_f$ which I did not really use but I'm adding it here for completeness. Answer: To answer the second part of the question, no additional assumptions are needed. In fact, it is the only part of the question that makes sense without additional assumptions.
As suggested in the hint, we can rewrite $|0\rangle$ in the basis given by eigenstates of $\hat{U}_f$: $$|0\rangle = \frac{1}{\sqrt{2}}|+\rangle + \frac{1}{\sqrt{2}}|-\rangle.$$ Then we get: \begin{align*} U_f |+\rangle|0\rangle &= U_f |+\rangle \left(\frac{1}{\sqrt{2}}|+\rangle + \frac{1}{\sqrt{2}}|-\rangle \right) \\ \tag{1} &= \frac{1}{\sqrt{2}}U_f |+\rangle|+\rangle + \frac{1}{\sqrt{2}}U_f |+\rangle|-\rangle. \end{align*} Now, let's examine each term in Eq (1) separately. First, consider the term $U_f |+\rangle|-\rangle$. This is the usual Deutsch algorithm which always produces the correct output. Indeed, applying $H$ to $U_f |+\rangle|-\rangle$ and then measuring the first register will yield $|0\rangle$ or $|1\rangle$ whenever $f$ is constant or balanced, respectively. Therefore, the term $U_f |+\rangle|-\rangle$ will always produce the correct output. From the Eq (1), it follows that the probability of ending up with the term that is guaranteed to produce the correct output is $\left |\frac{1}{\sqrt{2}}\right |^2 = \frac{1}{2}$. This answers the second part of the question. Next, let's look at the term $U_f |+\rangle|+\rangle$. This term will always yield the same result regardless of what $f$ is. Note that: \begin{align*} U_f |+\rangle|+\rangle &= \frac{1}{\sqrt{2}}\left( U_f|0\rangle |+\rangle + U_f|1\rangle |+\rangle \right)\\ &= \frac{1}{\sqrt{2}}\left(|0\rangle |+\rangle + |1\rangle |+\rangle \right)\\ &= |+\rangle |+\rangle. \end{align*} Applying $H$ and measuring the first register will always result in $|0\rangle$. So, for $f$ balanced, this term will always produce incorrect output. By looking at Eq (1), we can see that the probability of ending up with the "partially correct" algorithm $U_f |+\rangle|+\rangle$ is also $\left |\frac{1}{\sqrt{2}}\right |^2 = \frac{1}{2}$. We can interpret Eq (1) as a uniform superposition of the correct Deutsch algorithm $U_f|+\rangle|-\rangle$ and partially correct Deutsch algorithm $U_f|+\rangle|+\rangle$. 
When performing the measurement, we will end up with an output from one of the two algorithms with probability $\frac{1}{2}$. As for the first part of the question, it seems that additional assumptions could indeed help.
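The outcome probabilities discussed above can be brute-forced with a tiny state-vector simulation; everything here (the basis ordering, the four one-bit oracle functions) is a standard construction assumed for illustration, not code from the book:

```python
import math
from itertools import product

# Deutsch's circuit with the target prepared in |0> instead of (|0>-|1>)/sqrt(2).
# State vector over |x, y>: index 2*x + y, with x the query qubit, y the target.
H = 1 / math.sqrt(2)

def h_on_query(state):
    """Hadamard on the query qubit x."""
    out = [0.0] * 4
    for x, y in product((0, 1), repeat=2):
        amp = state[2 * x + y]
        out[0 + y] += H * amp               # |0, y> component
        out[2 + y] += H * amp * (-1) ** x   # |1, y> component
    return out

def oracle(state, f):
    """U_f: |x, y> -> |x, y XOR f(x)>."""
    out = [0.0] * 4
    for x, y in product((0, 1), repeat=2):
        out[2 * x + (y ^ f(x))] += state[2 * x + y]
    return out

def prob_measure_one(f):
    state = [1.0, 0.0, 0.0, 0.0]            # |0>|0>
    state = h_on_query(oracle(h_on_query(state), f))
    return state[2] ** 2 + state[3] ** 2    # P(query qubit reads 1)

for name, f in [('const 0', lambda x: 0), ('const 1', lambda x: 1),
                ('balanced id', lambda x: x), ('balanced not', lambda x: 1 - x)]:
    print(name, round(prob_measure_one(f), 6))
# Constant f gives P(1) = 0 and balanced f gives P(1) = 1/2: measuring 1 certifies
# "balanced", and with a uniform prior the guess is right 1*(1/2) + (1/2)*(1/2) = 3/4.
```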
{ "domain": "quantumcomputing.stackexchange", "id": 4042, "tags": "quantum-algorithms, deutsch-jozsa-algorithm" }
Polchinski OPE of spacetime translation current
Question: I am trying to derive $$ j^\mu(z):e^{ik\cdot X(0,0)}: \;\sim \frac{k^\mu}{2z}:e^{ik\cdot X(0,0)}: \tag{2.3.14a} $$ from Polchinski's String Theory vol. 1, equation (2.3.14a), using $j^{\mu}=\frac{i}{\alpha'}\partial X^\mu$. My attempt: $$ j^\mu(z):e^{ik\cdot X(0,0)}: \; =\frac{i}{\alpha'}\partial X^\mu(z):\sum_{n=0}^\infty\frac{i^n}{n!}\left(k\cdot X(0,0)\right)^n: $$ Now the first term of the summation can be ignored since it's non-singular, so we get $$ = \frac{i}{\alpha'}\partial X^\mu :i k^\nu X_\nu(0,0)\left(1+\frac{i}{2}(k\cdot X(0,0))+\frac{i^2}{3!}(k\cdot X(0,0))^2+\ldots\right): $$ I know that the contraction will give a $1/z$ term but my problem is with re-expressing the exponential. I don't quite get how the series is again the same exponential. Answer: We have \begin{align} j(z) : e^{ i k X } : &= \frac{i}{\alpha'} \partial X(z) \sum_{n=0}^\infty \frac{ : ( i k X )^n : }{ n! } \\ &= \frac{i}{\alpha'} \sum_{n=0}^\infty \frac{ ( i k)^n }{ n! } \partial X(z) : X^n : \\ &\sim \frac{i}{\alpha'} \sum_{n=1}^\infty \frac{ ( i k)^n }{ n! } n : X^{n-1} : \partial_z X(z) X(0,0) \\ &\sim \frac{i}{\alpha'} \sum_{n=1}^\infty \frac{ ( i k)^n }{ n! } n : X^{n-1} : \partial_z \left[ - \frac{\alpha'}{2} \log |z|^2 \right] \\ &\sim \frac{k}{2z} \sum_{n=1}^\infty \frac{ : ( i k X )^{n-1} : }{ (n-1) ! } \\ &\sim \frac{k}{2z} : e^{ i k X } : \end{align}
{ "domain": "physics.stackexchange", "id": 82525, "tags": "homework-and-exercises, operators, string-theory, conformal-field-theory, wick-theorem" }
Why is the vulnerable time in pure aloha twice the frame time?
Question: The time required to send a frame is called the frame time. The vulnerable time is the time during which no transmission should be done to avoid any collision. My question is what kind of problem would be created if the vulnerable time were equal to the frame time? Answer: This is because in pure ALOHA, even if a bit of a frame collides with a bit of another frame, both frames get discarded. Also, in pure ALOHA, a station doesn't listen to the medium before transmitting. So, it has no way of knowing that another frame was already underway. If another frame was indeed underway already, then the newly transmitted frame becomes vulnerable. That is why it is called "vulnerable time". It equals twice the frame time because it counts the time in which the transmission of another frame could start so as to make the current frame vulnerable. This time interval includes: the frame time, because if transmission of another frame were started during the frame time of the current frame, a collision would occur. a time interval (equal to the frame time) before the frame time, because if transmission of another frame were started in this time interval, a collision would still occur. My question is what kind of problem will be created if vulnerable time was equal to frame time? If the vulnerable time were equal to the frame time, any frame (say A) transmitted prior to the considered frame (say B) could still be in transmission within the frame time of the considered frame (B), resulting in a collision. Suppose a sender X starts transmitting at time instant '0' and finishes at instant 't'. Then any other sender Y should start at or after instant 't', continuing till instant '2t' (because if Y starts its transmission even a tiny amount of time before 't', there will be a collision). Here, the vulnerable time for the frame transmitted by Y is the sum of its own frame time (i.e. 't' to '2t') and another time duration (i.e. '0' to 't'), effectively being '0' to '2t'.
This is the time interval during which no sender should send on the channel to ensure that the frame sent by Y doesn't collide.
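The width-$2t$ window can also be seen in a quick Monte-Carlo sketch; the uniform-random traffic model and all the numbers below are illustrative assumptions:

```python
import math
import random

# Frames of length t start at random times; a frame collides exactly when some
# other frame starts within (start - t, start + t), a window of width 2t.
random.seed(0)
t, span, n_frames = 1.0, 10_000.0, 5000
starts = sorted(random.uniform(0, span) for _ in range(n_frames))

def collided(i):
    # With sorted start times, only the nearest neighbours can overlap.
    left = i > 0 and starts[i] - starts[i - 1] < t
    right = i < n_frames - 1 and starts[i + 1] - starts[i] < t
    return left or right

success = sum(not collided(i) for i in range(n_frames)) / n_frames
G = n_frames * t / span                 # offered load: frames per frame time
print(success, math.exp(-2 * G))        # observed vs the classic pure-ALOHA e^(-2G)
```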
{ "domain": "cs.stackexchange", "id": 18581, "tags": "computer-networks" }
Log writing PowerShell function
Question: I am still learning PowerShell and I would like your opinion on my log writing function. It creates a log entry with timestamp and message passed thru a parameter Message or thru pipeline, and saves the log entry to log file, to report log file, and writes the same entry to console. In Configuration.cfg file paths to report log and permanent log file are contained, and option to turn on or off whether a report log and permanent log should be written. If Configuration.cfg file is absent it loads the default values. Depending on the OperationResult parameter, log entry can be written with or without a timestamp. Format of the timestamp is "yyyy.MM.dd. HH:mm:ss:fff", and this function adds " - " after timestamp and before the main message. function Write-Log { param ( [Parameter(Position = 0, ValueFromPipelineByPropertyName)] [ValidateSet('Success', 'Fail', 'Partial', 'Info', 'None')] [String] $OperationResult = 'None', [Parameter(Position = 1, Mandatory, ValueFromPipeline, ValueFromPipelineByPropertyName)] [String] $Message ) begin { if (Test-Path -Path '.\Configuration.cfg') { $Configuration = Get-Content '.\Configuration.cfg' | ConvertFrom-StringData $LogFile = $Configuration.LogFile $ReportFile = $Configuration.ReportFile $WriteLog = $Configuration.WriteLog -eq 'true' $SendReport = $Configuration.SendReport -eq 'true' } else { $LogFile = '.\Log.log' $ReportFile = '.\Report.log' $WriteLog = $true $SendReport = $true } if (-not (Test-Path -Path $LogFile)) { New-Item -Path $LogFile -ItemType File } if (-not (Test-Path -Path $ReportFile)) { New-Item -Path $ReportFile -ItemType File } } process { $Timestamp = Get-Date -Format 'yyyy.MM.dd. 
HH:mm:ss:fff' $LogEntry = $Timestamp + " - " + $Message switch ($OperationResult) { 'Success' { $ForegroundColor = 'Green' break } 'Fail' { $ForegroundColor = 'Red' break } 'Partial' { $ForegroundColor = 'Yellow' break } 'Info' { $ForegroundColor = 'Cyan' break } 'None' { $ForegroundColor = 'White' $LogEntry = $Message } } Write-Host $LogEntry -ForegroundColor $ForegroundColor -BackgroundColor Black if ($WriteLog) { Add-content -Path $LogFile -Value $LogEntry } if ($SendReport) { Add-content -Path $ReportFile -Value $LogEntry } } } Answer: I don't see anything wrong or debatable in this so far. Just a couple nitpicks: I would use parameter splatting to make the color part more compact. For example, from my own log function: switch ($Severity) { Debug { $colors = @{ForegroundColor="DarkGray" } } Info { $colors = @{ } } Warning { $colors = @{ForegroundColor="Black" ; BackgroundColor="Yellow" } } Error { $colors = @{ForegroundColor="White" ; BackgroundColor="DarkRed" } } Critical { $colors = @{ForegroundColor="DarkYellow" ; BackgroundColor="DarkRed" } } } Write-Host @colors "$($Severity): $Message" Maybe you could go even further with a hash of hashs and do Write-Host @colors["error"] or something like that. To create a file only if the file doesn't already exist, you can call New-Item directly, it will throw an exception if the file already exists which you can just ignore with a try-catch or with -ErrorAction SilentlyContinue, saving a couple lines. Why not make all the configuration fetched from the config file optional parameters, with the values in the file as default values ? That would help if you ever have to use the function on the fly, in a shell or as a debug tool. Very personal opinion, but I would personally put the default value of $OperationResult to Info, as this is the most common case for me and I'd appreciate just whipping out a simple Write-Log "mymessage" without extra typing. 
Last one, this one depends on your environment, but Write-Log is also a function in the official VMware vSphere module, which is quite widely used. I wouldn't change any code because of that, but remember to document it if you ever have to distribute your code. Overall I think your cmdlet is pretty readable and produces a log that is easy enough to parse, so it ticks all the important boxes for me.
{ "domain": "codereview.stackexchange", "id": 39726, "tags": "powershell" }
robot_localization debug file
Question: Hello, I'm using robot_localization to do EKF. And I want to output the debug file, but when I set the parameter in .yaml like this debug: true debug_out_file: ~/debug/file.txt and run the launch. But I don't see any file in ~/debug Why is that? Originally posted by Jacky on ROS Answers with karma: 11 on 2018-04-19 Post score: 0 Answer: Have you tried using an absolute path? I don’t think the method of file opening that i’m using resolves the ~. Originally posted by Tom Moore with karma: 13689 on 2018-04-19 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Jacky on 2018-04-19: ok, it shows up. Thanks!
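For context on why the absolute path works: `~` is expanded by the shell, not by the file-opening APIs most nodes use, so a literal `~/debug/file.txt` points at a directory actually named `~`. A quick illustration of the expansion step the shell normally performs:

```python
import os.path

# The shell (or an explicit expanduser call) turns "~" into the home directory;
# plain open()/fstream calls do not.
p = "~/debug/file.txt"
print(os.path.expanduser(p))         # e.g. /home/turtlebot/debug/file.txt
print(os.path.expanduser("/tmp/f"))  # absolute paths pass through unchanged
```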
{ "domain": "robotics.stackexchange", "id": 30692, "tags": "ros, navigation, ros-kinetic, robot-localization" }
Interval multiplication - faster version
Question: For the below given problem from this assignment: Q4. In passing, Ben also cryptically comments, "By testing the signs of the endpoints of the intervals, it is possible to break mul_interval into nine cases, only one of which requires more than two multiplications." Write a fast multiplication function using Ben's suggestion: def mul_interval_fast(x, y): """Return the interval that contains the product of any value in x and any value in y, using as few multiplications as possible. >>> str_interval(mul_interval_fast(interval(-1, 2), interval(4, 8))) '-8 to 16' >>> str_interval(mul_interval_fast(interval(-2, -1), interval(4, 8))) '-16 to -4' >>> str_interval(mul_interval_fast(interval(-1, 3), interval(-4, 8))) '-12 to 24' >>> str_interval(mul_interval_fast(interval(-1, 2), interval(-8, 4))) '-16 to 8' """ "*** YOUR CODE HERE ***" Below is the observation (with an example): (1, 3) (5, 7) --> [min(5, 7, 15, 21), max(5, 7, 15, 21)] --> (5, 21) --> (lb1 * lb2, hb1 * hb2) (-3, -1) (-7, -5) --> [min(21, 15, 7, 5), max(21, 15, 7, 5)] --> (5, 21) --> (hb1 * hb2, lb1 * lb2) (1, 3) (-7, 5) --> [min(-7, 5, -21, 15), max(-7, 5, -21, 15)] --> (-21, 15) --> (hb1 * lb2, hb1 * hb2) (1, 3)(-5, 7) --> [min(-5, 7, -15, 21), max(-5, 7, -15, 21)] --> (-15, 21) |-> (-1, 3) (-7, 5) --> [min(7, -5, -21, 15), max(7, -5, -21, 15)] --> (-21, 15) --> (hb1 * lb2, hb1 * hb2) (-1, 3)(-5, 7) --> [min(5, -7, -15, 21), max(5, -7, -15, 21)] --> (-15, 21) |-> (-3, 1) (-5, 7) --> [min(15, -21, -5, 7), max(15, -21, -5, 7)] --> (-21, 15) --> (lb1 * hb2, lb1 * lb2) (-3, 1)(-7, 5) --> [min(21, -15, -7, 5), max(21, -15, -7, 5)] --> (-15, 21) |-> (-3, -1) (-5, 7) --> [min(15, -21, 5, -7), max(15, -21, 5, -7)] --> (-21, 15) --> (lb1 * hb2, lb1 * lb2) (-3, -1)(-7, 5) --> [min(21, -15, 7, -5), max(21, -15, 7, -5)] --> (-15, 21) |-> (1, 3)(-7, -5) --> [min(-7, -5, -21, -15), max(-7, -5, -21, -15)] --> (-21, -5) --> (hb1 * lb2, lb1 * hb2) (-3, -1)(5, 7) --> [min(-15, -21, -5, -7), max(-15, -21, -5, -7)] 
--> (-21, -5) --> (lb1 * hb2, hb1 * lb2) (-1, 3)(5, 7) --> [min(-5, -7, 15, 21), max(-5, -7, 15, 21)] --> (-7, 21) --> (lb1 * hb2, hb1 * hb2) (-3, 1)(5, 7) --> [min(-15, -21, 5, 7), max(-15, -21, 5, 7)] --> (-21, 7) |-> (-3, 1)(-7, -5) --> [min(21, 15, -7, -5), max(21, 15, -7, -5)] --> (-7, 21) --> (hb1 * lb2, lb1 * lb2) (-1, 3)(-7, -5) --> [min(7, 5, -21, -15), max(7, 5, -21, -15)] --> (-21, 7) |-> This is my solution: def interval(a, b): """Construct an interval from a to b. """ return (a, b) def lower_bound(x): """Return the lower bound of interval x. """ return x[0] def upper_bound(x): """Return the upper bound of interval x. """ return x[1] def div_interval(x, y): """Return the interval that contains the quotient of any value in x devided by any value in y. Division is implemented as the multiplication of x by the reciprocal of y. >>> str_interval(div_interval(interval(-1, 2), interval(4, 8))) '-0.25 to 0.5' """ assert ((upper_bound(y) - upper_bound(x)) == 0), "what it means to divide by an interval that spans zero" reciprocal_y = interval(1/upper_bound(y), 1/lower_bound(y)) return mul_interval(x, reciprocal_y) def sub_interval(x, y): """Return the interval that contains the difference between any value in x and any value in y. >>> str_interval(sub_interval(interval(-1, 2), interval(4, 8))) '-9 to -2' """ return interval(lower_bound(x) - upper_bound(y), upper_bound(x) - lower_bound(y)) def str_interval(x): """Return a string representation of interval x. >>> str_interval(interval(-1, 2)) '-1 to 2' """ return '{0} to {1}'.format(lower_bound(x), uppper_bound(x)) def add_interval(x, y): """Return an interval that contains the sum of any value in interval x and any value in interval y. 
>>> str_interval(add_interval(interval(-1, 2), interval(4, 8))) '3 to 10' """ lower = lower_bound(x) + lower_bound(y) upper = upper_bound(y) + uppper_bound(y) return interval(lower, upper) def mul_interval(x, y): """Return the interval that contains the product of any value in x and any value in y. >>> str_interval(mul_interval(interval(-1, 2), interval(4, 8))) '-8 to 16' """ p1 = lower_bound(x) * lower_bound(y) p2 = lower_bound(x) * upper_bound(y) p3 = upper_bound(x) * lower_bound(y) p4 = upper_bound(x) * upper_bound(y) return interval(min(p1, p2, p3, p4), max(p1, p2, p3, p4)) def mul_interval_fast(x, y): if lower_bound(x) > 0 and upper_bound(x) > 0 and lower_bound(y) > 0 and upper_bound(y) > 0: a = lower_bound(x)*lower_bound(y) b = upper_bound(x)*upper_bound(y) return interval(a, b) elif lower_bound(x) < 0 and upper_bound(x) < 0 and lower_bound(y) < 0 and upper_bound(y) < 0: a = upper_bound(x)*upper_bound(y) b = lower_bound(x)*lower_bound(y) return interval(a, b) elif lower_bound(x) > 0 and upper_bound(x) > 0 and lower_bound(y) < 0 and upper_bound(y) > 0: a = upper_bound(x)*lower_bound(y) b = upper_bound(x)*upper_bound(y) return interval(a, b) elif lower_bound(x) < 0 and upper_bound(x) > 0 and lower_bound(y) < 0 and upper_bound(y) > 0: if abs(lower_bound(x)) < abs(upper_bound(x)): a = upper_bound(x)*lower_bound(y) b = upper_bound(x)*upper_bound(y) return interval(a, b) else: a = lower_bound(x)*upper_bound(y) b = lower_bound(x)*lower_bound(y) return interval(a, b) elif lower_bound(x) < 0 and upper_bound(x) < 0 and lower_bound(y) < 0 and upper_bound(y) > 0: a = upper_bound(x)*lower_bound(y) b = upper_bound(x)*upper_bound(y) return interval(a, b) elif lower_bound(x) > 0 and upper_bound(x) > 0 and lower_bound(y) < 0 and upper_bound(y) < 0: a = upper_bound(x)*lower_bound(y) b = lower_bound(x)*upper_bound(y) return interval(a, b) elif lower_bound(x) < 0 and upper_bound(x) < 0 and lower_bound(y) > 0 and upper_bound(y) > 0: a = lower_bound(x)*upper_bound(y) b = 
upper_bound(x)*lower_bound(y) return interval(a, b) elif lower_bound(x) < 0 and upper_bound(x) > 0 and lower_bound(y) > 0 and upper_bound(y) > 0: a = lower_bound(x)*upper_bound(y) b = upper_bound(x)*upper_bound(y) return interval(a, b) elif lower_bound(x) < 0 and upper_bound(x) > 0 and lower_bound(y) < 0 and upper_bound(y) < 0: a = upper_bound(x)*lower_bound(y) b = lower_bound(x)*lower_bound(y) return interval(a, b) My question: Is my understanding of the above problem correct, based on the observation above? Can this solution be improved? Answer: 1. Bugs str_interval and add_interval call uppper_bound (where upper_bound was meant). When uppper_bound is corrected, the implementation of add_interval looks like this: upper = upper_bound(y) + upper_bound(y) which is obviously a mistake for: upper = upper_bound(x) + upper_bound(y) The assertion in div_interval is nonsensical: assert ((upper_bound(y) - upper_bound(x)) == 0) This should be: if lower_bound(y) <= 0 <= upper_bound(y): raise ZeroDivisionError('division by {}'.format(y)) A chained comparison like A <= B <= C is shorthand for A <= B and B <= C, so this is short for lower_bound(y) <= 0 and 0 <= upper_bound(y). The reason why the condition needs to take this form is that when you apply an operation to two intervals, the result is the interval containing all possible results of applying the operation to elements of the two intervals. So if \$ A \$ and \$ B \$ are intervals, then \$ A ÷ B \$ is the interval containing \$ a ÷ b \$ for every \$ a \$ in \$ A \$ and every \$ b \$ in \$ B \$. For example, take \$ A = [10, 20] \$ and \$ B = [1, 2] \$. Then \$ A ÷ B \$ has to contain \$ 10 ÷ 2 = 5 \$ and \$ 20 ÷ 1 = 20 \$, but also \$ 17 ÷ 1.5 \$ and \$ 11.425 ÷ 1.976 \$ and so on. However, it's easy to check that all these divisions have results in \$ [5, 20] \$ so that's the answer.
But if \$ B = [-1, 2] \$, then there is no interval \$ A ÷ B \$, because it would have to contain, among other results, \$ 10 ÷ 0 \$, but this doesn't exist. That's why the condition needs to take the form I gave. Note also (as pointed out by 200_success in comments) this needs to be an exception, not an assertion. That's because assertions should generally be used for programming mistakes (conditions that mustn't happen if the program is working properly) while exceptions should generally be used for runtime errors (conditions that might happen if a program operates on bad data). The reason for the distinction is that assertions can be turned off at runtime using Python's -O command-line option, and you wouldn't want this to make your program stop detecting bad data. mul_interval_fast fails for intervals where the lower or upper bound is zero. For example: mul_interval_fast(interval(0, 2), interval(4, 8)) returns None. It was clear that there might be a problem with unhandled cases because the code goes if ... elif ... elif ... with no else: on the end. mul_interval_fast fails in the case where the lower and upper bounds of x are on opposite sides of zero, but have equal magnitude: >>> str_interval(mul_interval_fast(interval(-2, 2), interval(-2, 3))) '-6 to 4' Here the answer should have been -6 to 6. This all suggests to me that you haven't tested your code. If you had run str_interval or add_interval at all you would have discovered bug 1. If you had run the doctests then you would have discovered bugs 2 and 3. It would not have been difficult to find bugs 4 and 5 with some random testing. 2. Review The main difficulty with this problem is organizing the nine cases. You need to make it clear to the reader that the cases are distinct, and that you've handled all the cases. To improve the organization it helps to keep the code short, so that the structure is easy to read and inspect. 
I would: Make an interval into an object with properties, so that I can write x.min and x.max instead of lower_bound(x), upper_bound(x). This makes the code shorter and easier to read. One easy way to do this would be to use collections.namedtuple: from collections import namedtuple interval = namedtuple('Interval', 'min, max') Avoid unnecessary tests. Having established that lower_bound(x) > 0, it follows that upper_bound(x) > 0 too, and the latter test can be omitted. Instead of the long-winded construction of the new interval: a = lower_bound(x)*lower_bound(y) b = upper_bound(x)*upper_bound(y) return interval(a, b) write: return interval(x.min*y.min, x.max*y.max) Make each case work for as many values as possible. For example, the first case is now: if x.min > 0 and y.min > 0: return interval(x.min*y.min, x.max*y.max) but it's clear that this will work for x.min == 0 or y.min == 0 too. So revise the conditions: if x.min >= 0 and y.min >= 0: return interval(x.min*y.min, x.max*y.max) Organize the tests into a tree, so that it's clear that each condition is tested once, and it's clear that all cases are handled. Here's one way to do this: def mul_interval_tree(x, y): """Return the interval of values that might result from multiplication of a value in the interval x with a value in the interval y. 
>>> mul_interval_tree(interval(-1, 2), interval(-8, 4)) Interval(min=-16, max=8) >>> mul_interval_tree(interval(-2, 2), interval(-2, 3)) Interval(min=-6, max=6) """ if x.min >= 0: if y.min >= 0: return interval(x.min*y.min, x.max*y.max) else: # y.min < 0 if y.max >= 0: return interval(x.max*y.min, x.max*y.max) else: # y.max < 0 return interval(x.max*y.min, x.min*y.max) else: # x.min < 0 if x.max >= 0: if y.min >= 0: return interval(x.min*y.max, x.max*y.max) else: # y.min < 0 if y.max >= 0: if abs(x.max) > abs(x.min): return interval(x.max*y.min, x.max*y.max) elif abs(x.max) < abs(x.min): return interval(x.min*y.max, x.min*y.min) else: # abs(x.max) == abs(x.min) return interval(x.min*y.max, x.max*y.max) else: # y.max < 0 return interval(x.max*y.min, x.min*y.min) else: # x.max < 0 if y.min >= 0: return interval(x.min*y.max, x.max*y.min) else: # y.min < 0 if y.max >= 0: return interval(x.max*y.min, x.max*y.max) else: # y.max < 0 return interval(x.max*y.max, x.min*y.min) The point of organizing the code this way is to make it easy to check. Every if is paired with an else:, so it's easy to work backwards from a return statement to check all the conditions that lead up to that result. Each condition is a >= 0 condition so the else clauses are all < 0. The bounds are always tested in the same order: x.min first, then x.max if necessary, then y.min, then y.max if necessary. The code could be simplified by folding each else: if ... into an elif ..., but that would make it harder to check, because the tree structure would be harder to follow. An alternative approach is to make the code table-driven: def mul_exceptional(x, y): """Multiply intervals x and y in the exceptional case where the bounds of both intervals are on opposite sides of zero.
""" assert x.min < 0 <= x.max and y.min < 0 <= y.max if abs(x.max) > abs(x.min): return interval(x.max*y.min, x.max*y.max) elif abs(x.max) < abs(x.min): return interval(x.min*y.max, x.min*y.min) else: # abs(x.max) == abs(x.min) return interval(x.min*y.max, x.max*y.max) # Dictionary mapping the 4-tuple (x.min >= 0, x.max >= 0, y.min >= 0, # y.max > = 0) to a function that handles that case, or to None if the # case cannot arise. MUL_INTERVAL_CASES = { (0, 0, 0, 0): lambda x, y: interval(x.max*y.max, x.min*y.min), (0, 0, 0, 1): lambda x, y: interval(x.max*y.min, x.max*y.max), (0, 0, 1, 0): None, (0, 0, 1, 1): lambda x, y: interval(x.min*y.max, x.max*y.min), (0, 1, 0, 0): lambda x, y: interval(x.max*y.min, x.min*y.min), (0, 1, 0, 1): mul_exceptional, (0, 1, 1, 0): None, (0, 1, 1, 1): lambda x, y: interval(x.min*y.max, x.max*y.max), (1, 0, 0, 0): None, (1, 0, 0, 1): None, (1, 0, 1, 0): None, (1, 0, 1, 1): None, (1, 1, 0, 0): lambda x, y: interval(x.max*y.min, x.min*y.max), (1, 1, 0, 1): lambda x, y: interval(x.max*y.min, x.max*y.max), (1, 1, 1, 0): None, (1, 1, 1, 1): lambda x, y: interval(x.min*y.min, x.max*y.max), } def mul_interval_table(x, y): """Return the interval of values that might result from multiplication of a value in the interval x with a value in the interval y. >>> mul_interval_table(interval(-1, 2), interval(-8, 4)) Interval(min=-16, max=8) >>> mul_interval_table(interval(-2, 2), interval(-2, 3)) Interval(min=-6, max=6) """ case = (x.min >= 0, x.max >= 0, y.min >= 0, y.max >= 0) return MUL_INTERVAL_CASES[case](x, y) The design of the table makes it easy to check that all cases are handled.
{ "domain": "codereview.stackexchange", "id": 13277, "tags": "python, python-3.x, sicp, interval" }
Wikipedia Viewer
Question: This is one of the projects on freecodecamp. I would like a review on my code. Thanks in advance. Javascript: var answers; function formatSearchString() { var searchString = document.getElementById("searchBar").value; var words = searchString.split(" "); searchString = words.join("_"); return searchString; } function getQueryData() { var stringToSearch = formatSearchString(); var wikiUrl = "http://en.wikipedia.org/w/api.php?action=opensearch&search=" + stringToSearch + "&format=json&callback=wikiCallbackFunction"; $.ajax(wikiUrl, { dataType: "jsonp", success: function(wikiResponse) { //alert(wikiResponse); //alert(wikiResponse.length); Always 4. //alert(wikiResponse[0]); Search String //alert(wikiResponse[1]); Titles //alert(wikiResponse[2]); Explanations //alert(wikiResponse[3]); Links answers = wikiResponse; } }); setTimeout(formatResults, 1500); } function getRandomArticle() { var wikiUrl = "http://en.wikipedia.org/wiki/Special:Random"; window.location = wikiUrl; } function formatResults() { $("#results").empty(); var i = 0; for (; i < answers[1].length; i++) { var newDiv = document.createElement("div"); newDiv.className = "searchResults"; var titleLink = document.createElement("a"); titleLink.setAttribute("target", "_blank"); titleLink.setAttribute("href", answers[3][i]); titleLink.innerHTML = answers[1][i]; var desc = document.createTextNode(answers[2][i]); var newLine = document.createElement("br"); newDiv.appendChild(titleLink); var newLine = document.createElement("br"); newDiv.appendChild(newLine); var newLine = document.createElement("br"); newDiv.appendChild(newLine); newDiv.appendChild(desc); var newLine = document.createElement("br"); newDiv.appendChild(newLine); var newLine = document.createElement("br"); newDiv.appendChild(newLine); document.getElementById("results").appendChild(newDiv); } } document.getElementById("buttonForSearch").addEventListener("click", getQueryData); document.getElementById("searchBar").addEventListener("keypress", 
function(e) { if (e.which == 13) { e.preventDefault(); getQueryData(); } return false; }); document.getElementById("randomArticle").addEventListener("click", getRandomArticle); CSS: body { background: radial-gradient(circle, black, white); height: 700px; background-size: cover; text-align: center; } #searchBar { padding-left: 10px; background: black; color: white; border-radius: 7px; border: 2px solid black; height: 25px; } a { font-family: 'Tangerine', serif; color: grey; font-size: 45px; } #results { height: 700px; margin-left: 550px; } .searchResults { padding-left: 10px; padding-top: 10px; background: black; color: white; height: auto; width: 650px; text-align: left; margin-top: 10px; font-size: 20px; } #buttonForSearch { height: 30px; width: 90px; margin-left: 50px; border: 2px solid black; size: 14px; border-radius: 7px; color: white; background: black; } #randomArticle { height: 30px; width: 150px; margin-left: 50px; border: 2px solid black; size: 14px; border-radius: 7px; color: white; background: black; } h1 { margin-top: 50px; font-family: 'Tangerine', serif; font-size: 48px; text-shadow: 4px 4px 4px #aaa; } #searchFields { margin: 45px; text-align: center; } HTML: <link rel="stylesheet" type="text/css" href="https://fonts.googleapis.com/css?family=Tangerine"> <body> <script src="http://code.jquery.com/jquery-latest.min.js"> </script> <script src='http://okfnlabs.org/wikipediajs/wikipedia.js'></script> <h1>Wikipedia Search</h1> <div id="searchFields"> <input type="text" placeholder="Type To Search" id="searchBar"> <input type="button" value="Search" id="buttonForSearch"><br /><br /><br/> <input type="button" value="Random Article" id="randomArticle"> </div> <div id="results"></div> </body> Full Code Here: http://codepen.io/jpninanjohn/pen/GZrzoG Answer: Here, it looks like you are trying to go around the async behaviour of AJAX: $.ajax(wikiUrl, { dataType: "jsonp", success: function(wikiResponse) { //alert(wikiResponse); //alert(wikiResponse.length); 
Always 4. //alert(wikiResponse[0]); Search String //alert(wikiResponse[1]); Titles //alert(wikiResponse[2]); Explanations //alert(wikiResponse[3]); Links answers = wikiResponse; } }); setTimeout(formatResults, 1500); Whilst it does work, it could be better, because you're specifically saying 1500 milliseconds -- what if the request takes slightly longer? Instead, use callbacks, like this: $.ajax(wikiUrl, { dataType: "jsonp", success: formatResults }); This will call the formatResults function when the AJAX request is successful, passing the wikiResponse as a parameter. You'd also need to add answers as a parameter to the function itself: function formatResults(answers) { ... } and then you can get rid of your var answers at the top! :)
{ "domain": "codereview.stackexchange", "id": 20348, "tags": "javascript, jquery, html, css, html5" }
Checking user input for boxing sim engine
Question: I am creating a simple Boxing sim engine and have got things working fairly well. A few months ago I was instructed to avoid copy and pasting code and to try and "conserve my logic". Anyways, I feel I have done a fairly good job overall but there is one area (posted below) that I feel definitely has room for improvement: print ("1)Joe Smith\n2)Mike Jones\n3)Steve Roberts\n") boxer1 = input("Choose a fighter: ") boxer2 = input ("Choose his opponent: ") if boxer1 == '1': B1 = JS elif boxer1 == '2': B1 = MJ elif boxer1 == '3': B1 = SR if boxer2 == '1': B2 = JS elif boxer2 == '2': B2 = MJ elif boxer2 == '3': B2 = SR MJ, JS, and SR are all variables for objects in my Boxer class. My concern is that I will have to continue adding four lines of code for each boxer I add to the program. While I don't mind typing each line out, I realize there may be a much more efficient way to approach this that I'm not seeing. I realize this isn't a major issue but, as I mentioned, this program is mainly for practice and therefore I want to make sure I'm programming as efficiently as possible. Answer: Assuming that each Boxer object has a .name attribute… BOXERS = [None, JS, MJ, SR] for i in range(1, len(BOXERS)): print("%d) %s" % (i, BOXERS[i].name)) boxer1 = BOXERS[int(input("Choose a fighter: "))] boxer2 = BOXERS[int(input("Choose his opponent: "))] You want to generate the menu using the names already in the boxer objects, and retrieve the result with an array lookup. Note that with this solution, you'll get an IndexError if the user picks an invalid entry. You should decide how such errors should be treated, since your original code also lacked error handling.
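The table-driven lookup from the answer can be hardened with a validation loop, since a bad entry would otherwise raise an IndexError. A sketch of that idea (the Boxer class here is a stand-in for the one in the question, and main is defined but deliberately not called):

```python
class Boxer:
    """Stand-in for the question's Boxer class; only the name matters here."""
    def __init__(self, name):
        self.name = name

# index 0 is a placeholder so that menu numbers line up with list indices
BOXERS = [None, Boxer("Joe Smith"), Boxer("Mike Jones"), Boxer("Steve Roberts")]

def choose(prompt, boxers, read=input):
    # keep asking until the user enters a valid menu number
    while True:
        reply = read(prompt)
        if reply.isdigit() and 1 <= int(reply) < len(boxers):
            return boxers[int(reply)]
        print("Please enter a number between 1 and %d." % (len(boxers) - 1))

def main():
    for i, boxer in enumerate(BOXERS[1:], start=1):
        print("%d) %s" % (i, boxer.name))
    fighter = choose("Choose a fighter: ", BOXERS)
    opponent = choose("Choose his opponent: ", BOXERS)
    print("%s vs. %s" % (fighter.name, opponent.name))
```

Passing the input function in as a parameter (read=input) also makes choose easy to test without a real user at the keyboard.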
{ "domain": "codereview.stackexchange", "id": 4788, "tags": "python, python-3.x" }
Electric potential inside a polarised conductor
Question: Say a conductor with an initial electric potential of zero is subject to an arbitrary charge. I understand that because of this outside charge, there would be charge distribution inside the conductor, so as to make the electric field in it zero. What happens to the initial electric potential inside the conductor? Would it be greater than zero since now one side of the conductor is positively charged and the other negatively? Answer: Let's be a little more precise about what we mean by a zero potential. We'll take the potential of earth to be zero, and before we bring up the charge we'll connect our conductor to earth to make its potential zero as well. Then we disconnect the conductor from earth. Now we bring up the external charge, and as you say it will polarise the conductor. The question is whether the potential of the conductor has been changed, and the simple way to test this is to connect it to earth again and see if any charge flows between earth and the conductor. If no charge flows the potential of the conductor must be unchanged, and if charge flows the potential must have changed. And if we tried this we would find that charge does flow between earth and the conductor as soon as we connect them. If we bring up a positive charge and connect the conductor to earth we'll find electrons flow from earth onto the conductor to give it a net negative charge. Likewise if we bring up a negative charge we'll find electrons flow off the conductor to earth giving the conductor a net positive charge. Either way bringing the external charge close to the conductor does change its potential relative to earth. Actually calculating the change in the potential would be hard, and it would depend on the size and shape of the conductor. However our thought experiment makes it clear that the potential does change.
{ "domain": "physics.stackexchange", "id": 89546, "tags": "electromagnetism, electric-fields, charge, potential, conductors" }
moveit_core headers not found
Question: After trying to include moveit in a find_package(), I receive this error. CMake Error at /home/marco/catkin_ws/devel/share/moveit_core/cmake/moveit_coreConfig.cmake:98 (message): Project 'moveit_core' specifies '/home/marco/catkin_ws/src/moveit/src/moveit_core/background_processing/include' as an include dir, which is not found. It does neither exist as an absolute directory nor in '/home/marco/catkin_ws/src/moveit/src/moveit_core//home/marco/catkin_ws/src/moveit/src/moveit_core/background_processing/include'. Ask the maintainer 'Sachin Chitta robot.moveit@gmail.com, Ioan Sucan isucan@gmail.com, Acorn Pooley acorn.pooley@sri.com' to fix it. Suggestions? Originally posted by DevonW on ROS Answers with karma: 644 on 2014-08-29 Post score: 0 Answer: Did you have moveit in your workspace at some point in the past, and have you removed it since then? The cmake config files for packages can often be left behind in the build directory of your workspace even after you remove the package from your src directory. I've found that the easiest fix for this is to remove my build and devel directories and rebuild my workspace from scratch. Originally posted by ahendrix with karma: 47576 on 2014-08-29 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by DevonW on 2014-08-29: I did attempt to remove the /build /devel folders and I still received that error. Indeed it was in src before. What fixed it was installing the source code of moveit again and compiling with it. However, that should be unnecessary, so I'll try removing all the source and /devel /build and try again.
{ "domain": "robotics.stackexchange", "id": 19233, "tags": "ros" }
Dialogue writing style & Yuri Gurevich's imaginary student: Quisani
Question: Recently, while I was looking for more sources of crossing sequence in computational complexity theory, I came across the article "Is Randomness 'Native' to Computer Scientist?" written by M. Ferbus-Zanda & S. Grigorieff. The writing style immediately caught my attention, since it's quite unfamiliar to me. Indeed, the whole article is written as a discussion between Quisani (Q), a fictitious student who asks questions, and the authors (A). After a quick search, it seems that Yuri Gurevich was the first to introduce Quisani (as his virtual student). And interestingly, lots of Gurevich's coworkers (I guess?) adopt this style. Here are some instances: Semantics-to-syntax analyses of algorithms, Yuri Gurevich Logic on words, Jean-Eric Pin Why are Modal Logics so Robustly Decidable?, Erich Grädel I found this dialogue style very easy to follow, pleasant to read and sometimes even funny. In my opinion, this is a nice alternative to the usual format. But obviously it doesn't fit every author. Here are some extracts: Introductions (extract from Algorithms vs. Machines, Andreas Blass & Yuri Gurevich) 1 Prelude Quisani: I have a question about the ASM thesis, the claim that every algorithm can be expressed, on its natural level of abstraction, by an abstract state machine (ASM). Do you still believe this thesis? Authors: Yes. In fact, there has been some recent progress [3], extending the proof from the sequential algorithms covered in [5] to parallel algorithms. To be precise, we deal with parallel algorithms that operate in sequential time and have bounded sequentiality within each step. [...] (extract from A New Zero-One Law and Strong Extension Axioms, Andreas Blass & Yuri Gurevich) 1 Shelah's Zero-One Law Quisani: What are you doing, guys? Author: We are proving a zero-one law which is due to Shelah. Q: Didn't Shelah prove the law? A: Oh yes, he proved it all right, and even wrote it down [14]. Q: So what is the problem? Can't you read his proof?
A: Reading Shelah's proofs may be research in its own right. His great mathematical talent is not matched by his talent of exposition. Q: I suspect that you don't limit yourself to reproving Shelah's theorem. A: We have proved some related results [1]. Q: Tell me about this zero-one law which is exciting enough to divert your attention from abstract state machines. [...] Development (extract from Is Randomness 'Native' to Computer Scientist?, M. Ferbus-Zanda & S. Grigorieff) 6 Random Finite Strings and Their Applications 6.1 Random Versus How Much Random Q: Let's go back to the question: "what is a random string?" A: This is the interesting question, but this will not be the one we shall answer. We shall modestly consider the question: "To what extent is $x$ random?" We know that $K(x) \le |x| + O(1)$. It is tempting to declare a string $x$ random if $K(x) \ge |x| - O(1)$. But what does it really mean? The $O(1)$ hides a constant. Let's explicit it. Definition 15. [$c$-incompressible string] [...] Q: Are there many $c$-incompressible strings? A: Kolmogorov noticed that they are quite numerous. Theorem 16. [...] Ending (extract from Is Randomness 'Native' to Computer Scientist?, M. Ferbus-Zanda & S. Grigorieff) Q: Wow! It's getting late. A: Hope you are not exhausted. Q: I really enjoyed talking with you on such a topic. (extract from Logic on words, Jean-Eric Pin) Q: I am a bit tired, and I need to assimilate all what you said. What would you suggest me to read? A: There are several survey papers you could read [24, 26, 32, 29, 38, 48]. Then you can compulse the references given in these papers to go further on. Q: Before I go, why did you get interested into logic? A: [...] Q: I see what you mean. It’s a good conclusion for our conversation. Thank you. My questions are the following: Are there any authors (possibly in other fields) using a similar style or other alternatives? Any anecdotes about Quisani's birth?
I would also want to know how the TCS community receives such a writing style. Comments are welcomed. EDIT: A similar question is "Casual tours around proofs" which discusses "casual tour" style vs. "Definition-Theorem-Proof (DTP) format". Answer: As far as I know, Yuri's imaginary student Quizani (no typo, but the name was later changed to Quisani) was first introduced by Yuri in 1988. It was for his first Logic in Computer Science column of the Bulletin of EATCS. This column, entitled On Kolmogorov Machines And Related Issues (Bull. EATCS, No. 35, June 1988, 71–82.) was later published by World Scientific Series in Computer Science, Current Trends in Theoretical Computer Science, pp. 225-234 (1993). Here are the first sentences of this dialog: It is often easier to explain oneself in a dialog. To this end, allow me to introduce my imaginary student Quizani. Quizani: I think you should introduce yourself too. Don’t assume everyone knows you. Author: All right. I grew up in the Soviet Union and started my career in the Ural University as an algebraist and self-taught logician. (...) Later on, several authors, including myself, borrowed Quisani from Yuri to write their own columns. I still have Yuri's review of the first draft of my 1994 column: Yuri had criticized the fact that Quisani was a little too cooperative...
{ "domain": "cstheory.stackexchange", "id": 4916, "tags": "soft-question, writing" }
Minimal number of changes to make string into concatenation of $k$ palindromes
Question: The following question is taken from leetcode: 1278. Palindrome Partitioning III You are given a string $s$ containing lowercase letters and an integer $k$. You need to: First, change some characters of $s$ to other lowercase English letters. Then, divide $s$ into $k$ non-empty disjoint substrings such that each substring is a palindrome. Return the minimal number of characters that you need to change to divide the string. So I did find the cost array to make a string into a palindrome. Something like below: [('bcccdefg', 4), ('ccde', 2), ('ccdef', 2), ('ef', 1)] # (substring, cost to make it a palindrome) I am wondering whether this can be done using knapsack recurrence relations, where profit will be cost and weight will be length of the substring; the total backpack weight will be the length of the main string. Though I am not sure what to do with $k$. Do I need 3 states, or are 2 states enough? Is my thinking correct? If not, what is the right way to write a recurrence relation for this? Answer: Let $s=s_1 s_2 \dots s_n$. Suppose that you already know, for each substring $s[i:j]=s_is_{i+1}\dots s_j$ of $s$, the minimum number $D(i,j)$ of character substitutions needed to make $s[i:j]$ a palindrome. These values can be found in $O(n^2)$ time using dynamic programming since $D(i,j) = 0$ if $i>j$ and $D(i,j) = D(i+1,j-1) + \mathbb{1}_{s_i \neq s_j}$ if $i \le j$. Your problem exhibits the optimal substructure property. Suppose that you already know the beginning position $i$ of the last substring $s[i:n]$ of $s$ in an optimal partition: clearly you will need to make $D(i,n)$ changes in order to transform $s[i:n]$ into a palindrome. You are then left with the prefix $s[1:i-1]$ of $s$. This prefix still needs to be partitioned into $k-1$ nonempty substrings that are palindromes. Fortunately this is exactly an instance of the problem you were trying to solve in the first place.
You can exploit this property as follows: Define $OPT[j,h]$ as the minimum number of character substitutions needed in order to partition $s[1:j]$ into $h$ substrings, each of which is non-empty and a palindrome. If no such partition can exist, regardless of the number of character substitutions, let $OPT[j,h]=+\infty$. By definition $OPT[0,0]=0$ (since $s[1:0]$ is the empty string) and, for $h=1,\dots,k$, $OPT[0,h] = +\infty$. Moreover, for $j=1,\dots,n$ and $h=1, \dots, k$: $$ OPT[j,h] = \min_{i=1,\dots,j} \left\{ OPT[i-1,h-1] + D(i, j) \right\}. $$ This gives you a dynamic programming algorithm with time complexity $O(k n^2)$. The optimal solution to your original problem is $OPT[n,k]$.
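Both recurrences translate almost line for line into Python. A sketch (0-indexed strings, with the innermost loop ranging over where the last piece starts):

```python
def min_changes_to_k_palindromes(s, k):
    """Minimum character changes so s can be split into k palindromic pieces."""
    n = len(s)
    # D[i][j]: changes to make s[i..j] (inclusive, 0-indexed) a palindrome
    D = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            D[i][j] = D[i + 1][j - 1] + (s[i] != s[j])
    INF = float('inf')
    # OPT[j][h]: min changes to split the prefix s[:j] into h palindromes
    OPT = [[INF] * (k + 1) for _ in range(n + 1)]
    OPT[0][0] = 0
    for j in range(1, n + 1):
        for h in range(1, k + 1):
            # the last piece is s[i-1:j]; it must leave h-1 pieces before it
            for i in range(h, j + 1):
                OPT[j][h] = min(OPT[j][h], OPT[i - 1][h - 1] + D[i - 1][j - 1])
    return OPT[n][k]
```

As a sanity check against the question's cost array, a single-piece partition of 'ccde' costs 2, and for the LeetCode example "abc" with k = 2 the answer is 1.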
{ "domain": "cs.stackexchange", "id": 15913, "tags": "dynamic-programming, recurrence-relation" }
Computing the complement of a set
Question: Suppose I have a set $A$ of elements in $\{1, \ldots, n\}$, given as an unordered list. I would like to compute the complement of $A$, i.e., I would like to produce an unordered list of entries in $\{1, \ldots, n\}$ but not in $A$. One way to do this is to sort the entries of $A$ and then go through them and list all the entries I do not see. This takes $O(n \log n)$ in the unit cost RAM model. My question is whether there exists a linear time $O(n)$ algorithm in the same model. Answer: Since you have both lists, even if not both at the same time, your space cost is at least that of the longest, thus at least $n/2$. This corresponds to an $O(n)$ space complexity. So, unless you exclude it for some reason, it makes sense to use a bit array $M[1:n]$ to represent the set $A$ by its characteristic function, i.e. $M[i]=0$ iff $i\notin A$, and $1$ otherwise. The space complexity is unchanged, and the array $M$ is actually much smaller than one of the two lists (initial or final). Initializing the array $M$ from the initial list is linear. Then you can take the logical complement of the array $M$ in linear time. Then you scan the array to get all the elements of the complement of your initial list, again in linear time $O(n)$. Of course, by now you have noticed your mistake: sorting is in time $O(n \log n)$ (or better according to comment) when $n$ is the number of elements to be sorted. But it is trivially linear if $n$ is the size of the finite ordered set they come from. Using an array, as I just did, is one way of sorting in linear time with respect to the size of the reference set. Your own answer was good because of that. Only your complexity assessment was wrong.
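A sketch of the characteristic-array idea in Python (the bit array is modelled by a plain list of booleans):

```python
def complement(a, n):
    """Return the elements of {1, ..., n} not in the unordered list a, in O(n)."""
    member = [False] * (n + 1)   # characteristic function of A; index 0 unused
    for x in a:
        member[x] = True
    # one linear scan emits the complement; flipping bits and scanning
    # are fused into the single "if not member[i]" test
    return [i for i in range(1, n + 1) if not member[i]]
```

Scanning the array in index order means the complement even comes out sorted, which is exactly the sense in which sorting is trivially linear in the size of the reference set.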
{ "domain": "cs.stackexchange", "id": 3069, "tags": "algorithms, algorithm-analysis, time-complexity, sorting, finite-sets" }
Is this charge density function, from a problem in Griffiths' book, a physically valid density?
Question: This is from the book on electrodynamics by Griffiths: A sphere of radius $R$, centered at the origin, carries charge density $$\rho(r,\theta)= k(R/r^2)(R-2r)\sin(\theta)$$ where $k$ is a constant, and $r$ and $\theta$ are the usual spherical coordinates. When the radius is $r=0$, what is the value of $\theta$? We cannot define $\theta$ there, since to define $\theta$ we need some finite (however small) displacement from the origin. Also, at $r=0$, the denominator is zero, and then $\rho$ will be infinite. So is this a valid (i.e. physically possible) density function? Answer: The important thing about a density is that its integral over any volume, which represents the total charge (or whatever it is a density of) inside that volume, is finite. At $r=0$, $\theta$ is not defined since the polar coordinate chart for $\mathbb{R}^n$ covers everything but the origin. However, since the origin as a point is a set of zero Lebesgue measure, the value of any function at that single point does not contribute to any volume integral, so it is allowed to have a density that is not defined there. Since the function $\rho(r,\theta)$ is integrated against the volume element¹ $$\mathrm{d}V = r^2\sin(\phi)\mathrm{d}r\mathrm{d}\phi\mathrm{d}\theta$$ its integral over any volume will indeed be finite (as the worrisome $\frac{1}{r^2}$ term is cancelled), so the density is physically admissible. ¹Hat tip to garyp, who pointed out the lack of the $r^2$ in a prior attempt to answer this question.
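As a numerical sanity check, one can integrate $\rho$ over the whole sphere with the correct volume element (taking $\theta$ as the polar angle and $k = R = 1$; a plain midpoint Riemann sum, no special libraries). The $1/r^2$ is cancelled by the $r^2$ in $\mathrm{d}V$, and for this particular $\rho$ the total charge in fact comes out to zero, since $\int_0^R (R-2r)\,\mathrm{d}r = 0$:

```python
import math

k, R = 1.0, 1.0
nr = nt = 2000

dr, dt = R / nr, math.pi / nt
# radial part: k*(R/r**2)*(R-2r)*r**2 = k*R*(R-2r), perfectly finite at r=0
radial = sum(k * R * (R - 2 * (i + 0.5) * dr) * dr for i in range(nr))
# angular part: sin(theta) from rho times sin(theta) from dV
angular = sum(math.sin((j + 0.5) * dt) ** 2 * dt for j in range(nt))
total_charge = 2 * math.pi * radial * angular   # the azimuthal integral gives 2*pi
```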
{ "domain": "physics.stackexchange", "id": 25603, "tags": "electrostatics, charge, density, singularities, regularization" }
Checking if a binary tree is balanced
Question:

class BinaryTreeNode {
    private BinaryTreeNode left;
    private BinaryTreeNode right;
    private int data;

    BinaryTreeNode(int d) {
        data = d;
    }

    public void insertLeft(BinaryTreeNode n) {
        this.left = n;
    }

    public void insertRight(BinaryTreeNode n) {
        this.right = n;
    }

    public int height() {
        // height = Max(Hl, Hr) + 1
        int leftHeight = this.left != null ? left.height() : -1;
        int rightHeight = this.right != null ? right.height() : -1;
        return Math.max(leftHeight, rightHeight) + 1;
    }

    public String toString() {
        String leftStr = this.left == null ? "" : this.left.toString();
        String rightStr = this.right == null ? "" : this.right.toString();
        return leftStr + " : " + data + " : " + rightStr;
    }

    public static void main(String[] s) {
        BinaryTreeNode n = new BinaryTreeNode(0);   //       0    l0
                                                    //      / \
        BinaryTreeNode l1 = new BinaryTreeNode(1);  //     1   4  l1
                                                    //    /
        BinaryTreeNode l2 = new BinaryTreeNode(2);  //   2        l2
                                                    //  /
        BinaryTreeNode l3 = new BinaryTreeNode(3);  // 3          l3
        l2.insertLeft(l3);
        l1.insertLeft(l2);
        n.insertLeft(l1);
        n.insertRight(new BinaryTreeNode(4));
        System.out.println(n.height());
        System.out.println(isBalanced(n));
    }

    public static boolean isBalanced(BinaryTreeNode n) {
        // if height = (|hl - hr|) <= 1
        int leftHeight = n.left != null ? n.left.height() : -1;
        int rightHeight = n.right != null ? n.right.height() : -1;
        return leftHeight - rightHeight <= 1;
    }
}

Answer: Your question is unclear, though I can tell that your code is broken. Your code here:

public static boolean isBalanced(BinaryTreeNode n) {
    // if height = (|hl - hr|) <= 1
    int leftHeight = n.left != null ? n.left.height() : -1;
    int rightHeight = n.right != null ? n.right.height() : -1;
    return leftHeight - rightHeight <= 1;
}

should return true if the tree is balanced, but it takes a node in, computes the height of the left and right sides, and compares the difference. This method is broken: just because the heights of the two sides of the tree (left and right) are similar does not mean that each side is itself balanced.
Consider a tree like:

//       0
//      / \
//     1   4
//    /   /
//   2   5
//  /   /
// 3   6

When you test the root, you will determine that both sides have the same height, yet, the tree is far from balanced.
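A fix (sketched here in Python rather than Java, names mine) is to check balance recursively, returning both a balance flag and the height in one pass so that every subtree is verified, and to compare heights with an absolute difference:

```python
class Node:
    def __init__(self, data, left=None, right=None):
        self.data, self.left, self.right = data, left, right

def check(node):
    # returns (is_balanced, height); an empty tree has height -1
    if node is None:
        return True, -1
    left_ok, lh = check(node.left)
    right_ok, rh = check(node.right)
    return left_ok and right_ok and abs(lh - rh) <= 1, max(lh, rh) + 1

def is_balanced(root):
    return check(root)[0]
```

On the counterexample tree above, both sides of the root have height 2, so a root-only height comparison passes; but the node labelled 1 has subtree heights 1 and -1, and the recursive check correctly reports the tree as unbalanced.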
{ "domain": "codereview.stackexchange", "id": 13317, "tags": "java, algorithm, tree" }
Can pulsating DC current be transformed?
Question: Since pulsating DC current is changing, why doesn't it induce a changing magnetic flux in the transformer core? Is it able to induce a transformed current in the secondary coil? Answer: As long as the DC component does not saturate the core of the transformer, the (lower frequency) components of the waveform should be induced in the secondary. Consider, for example, the output transformer of a single-ended class A triode audio amplifier. In this case, the primary current is 'pulsating' DC, i.e., the primary current varies with time but never goes through zero (never alternates), while the secondary current through the speaker has no DC component.
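One way to see this is to split the pulsating current into its average (DC) value plus a zero-mean ripple (AC); only the changing part induces a changing flux. A small self-contained sketch (the sample waveform and names are mine):

```python
import math

def dc_and_ac(samples):
    # DC component = the mean; AC ripple = what remains after removing the mean
    dc = sum(samples) / len(samples)
    return dc, [s - dc for s in samples]

# 'pulsating' DC: a sine riding on an offset, so the current never crosses zero
n = 1000
wave = [2.0 + math.sin(2 * math.pi * i / n) for i in range(n)]
dc, ac = dc_and_ac(wave)
```

The secondary sees (a scaled copy of) the ac part; the constant dc part produces only a static flux and hence no secondary voltage.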
{ "domain": "physics.stackexchange", "id": 89681, "tags": "electricity, magnetic-fields, electric-circuits, electric-fields, induction" }
Is uniform RNC contained in polylog space?
Question: Log-space-uniform NC is contained in deterministic polylog space (sometimes written PolyL). Is log-space-uniform RNC also in this class? The standard randomized version of PolyL should be in PolyL, but I don't see that (uniform) RNC is in randomized-PolyL. The difficulty I see is that in RNC, the circuit can "look at the random bits" as much as it wants; i.e., the random inputs can have arbitrary fanout. But in the randomized version of PolyL, it's not like you get a tape of random bits you get to look at as much as you want; rather, you are only allowed to flip a coin at each time step. Thanks! Answer: Perhaps most people think that $\mathsf{RNC}\subseteq \mathsf{DSPACE(polylog)}$ (or even that $\mathsf{RNC}=\mathsf{NC}$), but I'm skeptical about this (see the second part of my answer, below). If $\mathsf{RNC}$ is indeed contained in $\mathsf{DSPACE(polylog)}$, then it is also contained in $\mathsf{NTIME(2^{polylog})}$ (more specifically, it is in $\mathsf{DTIME(2^{polylog})}$ by exhaustive search). Valentine Kabanets explained to me the following (folklore) argument from his paper with Russell Impagliazzo that explains why $\mathsf{RNC} \subseteq \mathsf{NTIME(2^{polylog})}$ is unlikely. Theorem: If $\mathsf{RNC}\subseteq \mathsf{NTIME(2^{polylog})}$, then either $\mathsf{NEXP}$ is not computable by Boolean circuits of size $o(2^n/n)$ (i.e. sub-maxsize by Shannon; irrelevant but see Lupanov for tightness), or Permanent is not computable by (division-free) arithmetic formulas over ${\mathbb Z}$ of quasipolynomial size. Proof: assume $\mathsf{RNC}\subseteq \mathsf{NTIME(2^{polylog})}$. If Permanent has a quasipolynomial size formula, then we can guess and verify such a formula for Permanent using a quasipolynomial time polynomial identity tester by assumption. This places Permanent in $\mathsf{NTIME(2^{polylog})}$. By Toda's theorem, $\Sigma_2$ is then also in $\mathsf{NTIME(2^{polylog})}$. 
By padding, the linear-exponential time version of $\Sigma_5$ is also in $\mathsf{NEXP}$. Hence the linear-exponential version of $\Sigma_5$ has a circuit of size $o(2^n/n)$ (i.e. submax). But, by a simple diagonalization argument, one can show that the linear-exponential version of $\Sigma_5$ requires max circuit size, which is a contradiction (by the way, this is a variant of a mid-level question for a graduate-level complexity course; okay, maybe proving that $\mathsf{EXPSPACE}$ requires max-size circuits is a simpler one). QED. Now the unpopular direction. We already know that randomness read many times can do something non-obvious. An interesting example can be found in "Making Nondeterminism Unambiguous" by Reinhardt and Allender (they state it in terms of non-uniformity but in principle it is about using read-many-times randomness). Another interesting example (less directly related) is "Randomness Buys Depth for Approximate Counting" by Emanuele Viola. I guess all I'm saying is that I wouldn't be surprised if the derandomization of $\mathsf{RNC}$ is not what most people would expect it to be. (There are also a couple of other papers, like Noam Nisan's wonderful paper on read-once vs. read-many randomness, that show how to buy two-sided error with one-sided error.) By the way, understanding how to construct PRGs fooling space-bounded models of computation with multiple accesses to their input (e.g. linear-length BPs, i.e., branching programs) is also very related to this question. -- Periklis
{ "domain": "cstheory.stackexchange", "id": 2196, "tags": "complexity-classes, randomized-algorithms" }
Design of genetic algorithm which would allow TDD
Question: I'm implementing a genetic algorithm in Java and I want to learn TDD with this project. Currently I have this code:

package geneticAlgoritm;

import geneticAlgoritm.randomNumbers.IRandomGenerator;
import geneticAlgoritm.randomNumbers.RandomGenerator;
import lombok.Getter;
import lombok.NonNull;

import java.util.List;

public class Population {
    @NonNull
    private IRandomGenerator generator = new RandomGenerator();
    @Getter
    @NonNull
    private List<Chromosome> population;
    @NonNull
    final IFitnessCalculator calculator;

    public Population(int n, IFitnessCalculator calculator) {
        this.calculator = calculator;
        this.population = generatePopulation(n);
    }

    int nextGeneration() {
    }

    private List<Chromosome> generatePopulation(int n) {
        return null;
    }

    private Chromosome mutate(Chromosome chromosome, double probability) {
        return null;
    }

    private Chromosome crossover(Chromosome first, Chromosome second, double probability) {
        return null;
    }

    private List<Chromosome> pickBestOfPopulation(int n) {
        return null;
    }
}

Basically, I have a Chromosome class, which is a wrapper for a byte[] array and the fitness of that array. nextGeneration generates a new population with better traits. How can I change the design to allow testing the crossover, mutate, etc. functions? nextGeneration is really inconvenient to test for all functionalities, especially with some randomness in each (I'm using a custom interface to remove it). Answer: You have already written the API of your implementation. Being strict: that's not how TDD works.

1. Write one failing test
2. Make the test pass
3. Refactor

Your test case should not only test your implementation, but also document the behaviour of your implementation - without having written the implementation. And if you're at the first point, your test case describes the expected behaviour of your implementation. And it guides the design of your implementation. And what's very important, too: know when to stop.
If you have code which is not covered by your test cases - and should be covered by a test case - at the end of an iteration, you did it wrong. And in my opinion: A code review does help, but not too much. Because with TDD, an application grows iteration by iteration. And the thoughts which have to be thought during those steps have to be learned and practiced.
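On making the randomized operators testable (the answer's broader point about letting the first failing test drive the design): a common move, sketched here in Python with hypothetical names, is to keep the core of crossover/mutate deterministic and inject the randomness, so a test written first can pin down the expected behaviour exactly:

```python
import random

def crossover(first, second, point):
    # deterministic core: the cut point is an explicit argument
    return first[:point] + second[point:]

def crossover_random(first, second, rng):
    # randomness injected via rng; tests pass a seeded random.Random
    return crossover(first, second, rng.randrange(1, len(first)))
```

A failing test such as `assert crossover([0,0,0,0], [1,1,1,1], 2) == [0,0,1,1]` can be written before any implementation exists, which is exactly the "write one failing test" step.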
{ "domain": "codereview.stackexchange", "id": 28907, "tags": "java, beginner, unit-testing, genetic-algorithm" }
Proving proof system properties within the proof system itself?
Question: While reading about Frege proof systems in [1], I came across the completeness theorem and its proof, which involves a few lemmas introduced first. Here are the first two of those lemmas: $$\begin{equation} \vdash \phi \supset \phi \tag{1} \end{equation}$$ $$\begin{equation} \Gamma, \phi \vdash \psi \space \text{if and only if} \space \Gamma \vdash \phi \supset \psi \tag{2} \end{equation}$$ The first one is easy to prove using the axioms and inference rule of Frege proof system. The proof of the second one uses induction. This confused me a bit and prompted the following questions: Are proof systems "self-contained", in the sense that their properties can be proven within themselves? Or are properties such as soundness and completeness "external" to the system? The rationale being part of the motivation for proof systems to formalize proofs and thus serve as a foundation of sorts for other math. Is there some "strange loop" going on when using something like induction (which is defined on the natural numbers) to prove such properties? Also, $\Gamma$ is a set, which is not really part of this proof system vocabulary. For example, if you use a sound but incomplete proof system $P$ to prove the soundness and completeness of another proof system $Q$, does that imply that $Q$ can't "express" $P$ somehow, since that would be paradoxical? Sorry if my question is ill-posed. I'm a beginner to this topic, so I would appreciate any advice on formulating the question correctly as well. [1] Buss, Samuel R. (ed.), Handbook of proof theory, Studies in Logic and the Foundations of Mathematics. 137. Amsterdam: Elsevier. 811 p. (1998). ZBL0898.03001. 
[2] https://en.wikipedia.org/wiki/Semantic_theory_of_truth Answer: The first problem is what it even means that a propositional proof system can prove its own properties: there is a serious discrepancy between the languages, because the propositional proof system can only express propositional formulas, whereas properties of the proof system are first-order statements in a language that can reason about finite strings, i.e., basically, in the language of arithmetic. Common solutions are: If the property in question can be formulated as a universal statement $\forall x\,\phi(x)$ where $\phi(x)$ is polynomial-time computable, it can be encoded by a sequence of propositional tautologies $\{[\![\phi]\!]_n(p_0,\dots,p_{n-1}):n\in\mathbb N\}$, where $[\![\phi]\!]_n$ expresses the truth of $\phi(x)$ for $x$ of length $n$. You can then ask whether these tautologies have polynomial-size, or even polynomial-time constructible, $P$-proofs. Many common propositional proof systems $P$ have a “corresponding” first-order arithmetical theory $T$. The correspondence is a somewhat loose notion, but generally it means that on the one hand, universal statements provable in $T$ translate (as in 1) to sequences of tautologies that have polynomial-time constructible $P$-proofs, and on the other hand, $T$ can prove the soundness of $P$. In a sense, this says that the arithmetical theory $T$ is a “uniform version” of the propositional proof system $P$. When you have such a correspondence, you may explicate the informal statement “$P$ proves this and this property” as formally meaning that $T$ proves the property. Now, whether a proof system “proves” (in sense 1 or 2) a given property of itself of course depends on the property and on the proof system; there is no general answer: There is usually no difficulty with formalizing basic efficient syntactic manipulation.
For example, (the bounded arithmetic corresponding to) the Frege system can prove that Frege satisfies the deduction theorem (your (2)). Whether a proof system can prove its own soundness is an important property that’s discussed a lot in the literature. The propositional translations of the soundness of $P$ are called the reflection principles for $P$. Proof systems that have a “corresponding” theory of arithmetic have polynomial-time constructible proofs of their own reflection principles more or less by definition of the “correspondence”. For example, this holds for Frege, and for most typical sufficiently strong proof systems. Moreover, under mild assumptions, if a proof system $P$ has polynomial-time proofs of the reflection principle for a proof system $Q$, then $P$ polynomially simulates $Q$. Under the most reasonable interpretation, (likely) no proof system can prove its own completeness. First, this is not a universal statement, but a $\forall\exists$ statement (with an unbounded, or at best exponentially bounded, existential quantifier), hence directly translating it in the sense 1 is not possible: “for every formula $A$, there exists a $P$-proof of $A$, or an unsatisfying assignment to $A$”. One can ask about its provability in the corresponding theory of arithmetic (i.e., in sense 2), but Parikh’s theorem (that holds for any such theories) implies that this completeness statement is not provable unless one can place a polynomial bound on the existential quantifiers, i.e., unless the proof system $P$ is polynomially bounded, which implies NP = coNP. One may cheat by encoding the completeness statement only for logarithmically short formulas: that is, by considering propositional tautologies $A_n$ suitably expressing “every formula of length $\le\log n$ has a $P$-proof of size $\le n^c$” (or the corresponding arithmetical sentence). In systems like Frege, this should be provable with no difficulty. You may find further information e.g. in [1] Stephen A. 
Cook and Phuong Nguyen: Logical foundations of proof complexity. Perspectives in Logic, Cambridge University Press, New York, 2010. [2] Jan Krajíček: Proof complexity. Encyclopedia of Mathematics and Its Applications, vol. 170, Cambridge University Press, 2019.
{ "domain": "cstheory.stackexchange", "id": 5271, "tags": "proofs, proof-theory" }
Centripetal force pendulum
Question: So I was trying to figure out the free-body diagram for a pendulum, and I stumbled upon a video that explains it pretty well (see screenshot). What I don't understand is how we can have a centripetal acceleration. Shouldn't the tension of the rope be equal to the radial component (or in this case, $x$ component) of the gravitational force? I assumed we have $$ T=-G_{g_x}. $$ However, that seems odd too, because surely we should have a centripetal force if our motion is circular... So could someone explain what's going on? Answer: Just because the motion of the bob is along a circle doesn't mean that its acceleration has to be tangent to it - which is, I think, the root of your confusion. The bob has both a tangential acceleration that changes the magnitude of its velocity vector (because it is speeding up or slowing down according to whether it is currently going down the circle or up the circle) and also a radial (centripetal) acceleration that changes the direction of the velocity vector. So, no, the tension should not equal the radial component of gravity, because the bob does have a net acceleration in this direction.
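The radial balance the answer describes can be written out explicitly: along the string, $T - mg\cos\theta = mv^2/L$, so the tension exceeds the radial component of gravity whenever the bob is moving. A small sketch (the function name is mine; $\theta$ is measured from the vertical):

```python
import math

def tension(m, g, L, theta, v):
    # radial (centripetal) equation along the string:
    #   T - m*g*cos(theta) = m*v**2 / L
    return m * (g * math.cos(theta) + v ** 2 / L)
```

At the turning points $v = 0$ and $T = mg\cos\theta$ exactly; at the bottom of the swing the extra $mv^2/L$ is what makes $T$ larger than $mg$.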
{ "domain": "physics.stackexchange", "id": 41531, "tags": "homework-and-exercises, newtonian-mechanics, forces, free-body-diagram, centripetal-force" }
Techniques for Reversing the Order of Quantifiers
Question: It is well-known that in general, the order of universal and existential quantifiers cannot be reversed. In other words, for a general logical formula $\phi(\cdot,\cdot)$, $(\forall x)(\exists y) \phi(x,y) \quad \not\Leftrightarrow \quad (\exists y)(\forall x) \phi(x,y)$ On the other hand, we know the right-hand side is more restrictive than the left-hand side; that is, $(\exists y)(\forall x) \phi(x,y) \Rightarrow (\forall x)(\exists y) \phi(x,y)$. This question focuses on techniques to derive $(\forall x)(\exists y) \phi(x,y) \Rightarrow (\exists y)(\forall x) \phi(x,y)$, whenever it holds for $\phi(\cdot,\cdot)$. Diagonalization is one such technique. I first see this use of diagonalization in the paper Relativizations of the $\mathcal{P} \overset{?}{=} \mathcal{NP}$ Question (see also the short note by Katz). In that paper, the authors first prove that: For any deterministic, polynomial-time oracle machine M, there exists a language B such that $L_B \ne L(M^B)$. They then reverse the order of the quantifiers (using diagonalization), to prove that: There exists a language B such that for all deterministic, poly-time M we have $L_B \ne L(M^B)$. This technique is used in other papers, such as [CGH] and [AH]. I found another technique in the proof of Theorem 6.3 of [IR]. It uses a combination of measure theory and pigeon-hole principle to reverse the order of quantifiers. I want to know what other techniques are used in computer science, to reverse the order of universal and existential quantifiers? Answer: Reversal of quantifiers is an important property that is often behind well known theorems. For example, in analysis the difference between $\forall \epsilon > 0 . \forall x . \exists \delta > 0$ and $\forall \epsilon > 0 . \exists \delta > 0 . \forall x$ is the difference between pointwise and uniform continuity. A well known theorem says that every pointwise continuous map is uniformly continuous, provided the domain is nice, i.e., compact. 
In fact, compactness is at the heart of quantifier reversal. Consider two datatypes $X$ and $Y$ of which $X$ is overt and $Y$ is compact (see below for explanation of these terms), and let $\phi(x,y)$ be a semidecidable relation between $X$ and $Y$. The statement $\forall y : Y . \exists x : X . \phi(x,y)$ can be read as follows: every point $y$ in $Y$ is covered by some $U_x = \lbrace z : Y \mid \phi(x,z) \rbrace$. Since the sets $U_x$ are "computably open" (semidecidable) and $Y$ is compact there exists a finite subcover. We have proved that $$\forall y : Y . \exists x : X . \phi(x,y)$$ implies $$\exists x_1, \ldots, x_n : X . \forall y : Y . \phi(x_1,y) \lor \cdots \lor \phi(x_n, y).$$ Often we can reduce the existence of the finite list $x_1, \ldots, x_n$ to a single $x$. For example, if $X$ is linearly ordered and $\phi$ is monotone in $x$ with respect to the order then we can take $x$ to be the largest one of $x_1, \ldots, x_n$. To see how this principle is applied in a familiar case, let us look at the statement that $f : [0,1] \to \mathbb{R}$ is a continuous function. We keep $\epsilon > 0$ as a free variable in order not to get confused about an outer universal quantifier: $$\forall x \in [0,1] . \exists \delta > 0 . \forall y \in [x - \delta, x + \delta] . |f(y) - f(x)| < \epsilon.$$ Because $[x - \delta, x + \delta]$ is compact and comparison of reals is semidecidable, the statement $\phi(x, \delta) \equiv \forall y \in [x - \delta, x + \delta] . |f(y) - f(x)| < \epsilon$ is semidecidable. The positive reals are overt and $[0,1]$ is compact, so we can apply the principle: $$\exists \delta_1, \delta_2, \ldots, \delta_n > 0 . \forall x \in [0,1] . \phi(\delta_1, x) \lor \cdots \phi(\delta_n, x).$$ Since $\phi(\delta, x)$ is antimonotone in $\delta$ the smallest one of $\delta_1, \ldots, \delta_n$ does the job already, so we just need one $\delta$: $$\exists \delta > 0 . \forall x \in [0,1] . \forall y \in [x - \delta, x + \delta] . 
|f(y) - f(x)| < \epsilon.$$ What we have got is uniform continuity of $f$. Vaguely speaking, a datatype is compact if it has a computable universal quantifier and overt if it has a computable existential quantifier. The (non-negative) integers $\mathbb{N}$ are overt because in order to semidecide whether $\exists n \in \mathbb{N} . \phi(n)$, with $\phi(n)$ semidecidable, we perform the paralel search by dovetailing. The Cantor space $2^\mathbb{N}$ is compact and overt, as explained by Paul Taylor's Abstract Stone Duality and Martin Escardo's "Synthetic Topology of Datatypes and Classical Spaces" (also see the related notion of searchable spaces). Let us apply the principle to the example you mentioned. We view a language as a map from (finite) words over a fixed alphabet to boolean values. Since finite words are in computable bijective correspondence with integers we may view a language as a map from integers to boolean values. That is, the datatype of all languages is, up to computable isomorphism, precisely the Cantor space nat -> bool, or in mathematical notation $2^\mathbb{N}$, which is compact. A polynomial-time Turing machine is described by its program, which is a finite string, thus the space of all (representations of) Turing machines can be taken to be nat or $\mathbb{N}$, which is overt. Given a Turing machine $M$ and a language $c$, the statement $\mathsf{rejects}(M,c)$ which says "language $c$ is rejected by $M$" is semidecidable because it is in fact decidable: just run $M$ with input $c$ and see what it does. The conditions for our principle are satisfied! The statement "every oracle machine $M$ has a language $b$ such that $b$ is not accepted by $M^b$" is written symbolically as $$\forall M : \mathbb{N} . \exists b : 2^\mathbb{N} . \mathsf{rejects}(M^b,b).$$ After inversion of quantifiers we get $$\exists b_1, \ldots, b_n : 2^\mathbb{N} . \forall M : \mathbb{N} . 
\mathsf{rejects}(M^{b_1}, b_1) \lor \cdots \lor \mathsf{rejects}(M^{b_n},b_n).$$ Ok, so we are down to finitely many languages. Can we combine them into a single one? I will leave that as an exercise (for myself and you!). You might also be interested in the slightly more general question of how to transform $\forall x . \exists y . \phi(x,y)$ to an equivalent statement of the form $\exists u . \forall v . \psi(u,v)$, or vice versa. There are several ways of doing this, for example: Skolem normal form, Herbrand normal form, Gödel's functional interpretation.
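As a toy finite-domain illustration of the asymmetry (nothing here captures compactness; it only shows that $\exists\forall \Rightarrow \forall\exists$ while the converse can fail):

```python
def forall_exists(X, Y, phi):
    # checks: forall x in X, exists y in Y with phi(x, y)
    return all(any(phi(x, y) for y in Y) for x in X)

def exists_forall(X, Y, phi):
    # checks: exists y in Y such that forall x in X, phi(x, y)
    return any(all(phi(x, y) for x in X) for y in Y)
```

With $\phi(x,y) := (x \le y)$ on $\{0,\dots,4\}$ both hold ($y=4$ is a uniform witness), while $\phi(x,y) := (x = y)$ satisfies only the $\forall\exists$ form.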
{ "domain": "cstheory.stackexchange", "id": 606, "tags": "lo.logic, big-picture, proof-techniques" }
Is photoelectric effect example of inelastic collision
Question: I was reading about the photoelectric effect, which was completely explained by Einstein. A question that seems a bit difficult, at least to me, is whether the photoelectric effect is an inelastic or an elastic collision. Everywhere it is written that the photoelectric effect is an example of an inelastic collision. But as far as I know, in an inelastic collision some energy becomes internal energy of the system; there may be some energy loss as heat or something. But here in the photoelectric effect the photon is being absorbed by the electron, so where is the energy loss? No energy is being lost here: the energy $h\nu$ is absorbed by the electron and its kinetic energy increases, so no energy loss occurs. So in my opinion it is not an inelastic collision. So what is the reality: is it an inelastic or an elastic collision? Answer: When a photon interacts with an atom, three things can happen:

1. elastic scattering, when the photon keeps its energy and changes angle
2. inelastic scattering, when the photon gives part of its energy to the atomic system and changes angle
3. absorption, when the photon gives all its energy to the atom, and the valence electron will move to a higher energy level; the electron will then eventually move back to a lower energy level and emit a photon

At low energies, Rayleigh scattering (1., elastic) happens. In your case, the photoelectric effect is when 3. happens. That is at higher energies. At even higher energies, 2. happens: Compton scattering.
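Energy conservation in the absorption case is just Einstein's relation $E_{k,\max} = h\nu - W$: the photon's whole energy goes to the electron, with part of it paying the binding energy (the work function) rather than being lost as heat. A sketch in eV units (the sodium work-function value below is an illustrative, approximate number):

```python
H_EV = 4.135667696e-15  # Planck constant in eV*s

def max_kinetic_energy(freq_hz, work_function_ev):
    # Einstein's photoelectric relation: KE_max = h*nu - W
    return H_EV * freq_hz - work_function_ev

# e.g. a ~6e14 Hz photon on sodium (W ~ 2.28 eV, approximate)
ke = max_kinetic_energy(6e14, 2.28)
```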
{ "domain": "physics.stackexchange", "id": 90032, "tags": "quantum-mechanics, photoelectric-effect" }
How do you calculate the efficiency of a steam turbine?
Question: Yesterday I saw a comment on an article that claimed a turbine with a steam temperature of 500 °F and a room temperature of 70 °F couldn't possibly have an efficiency higher than 45%. He used the Carnot efficiency equation as far as I could tell, but wouldn't a turbine use an equation completely different than the Carnot cycle equation? Answer: "but wouldn't a turbine use an equation completely different than the Carnot cycle equation" Yes, it would be different. And lower. The Carnot cycle has the maximum possible efficiency. Bigger than any other thermodynamic cycle. Calculate the Carnot efficiency $\eta_C$ for any engine, and the real efficiency $\eta$ is lower (maybe much, much lower): $$\eta_C>\eta$$ So nothing is wrong in his argument, it is just a very rough value. He just found $\eta_C$ and states that this is the maximum limit and that the real value must be lower than this - without knowing if the real value is close or much smaller. This is just a rough back-of-the-envelope value that you can always quickly find for any engine.
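The commenter's 45% can be reproduced with the Carnot bound $\eta_C = 1 - T_c/T_h$, provided the Fahrenheit temperatures are first converted to an absolute scale (Rankine: $T_R = T_F + 459.67$). A quick sketch:

```python
def carnot_efficiency(t_hot_f, t_cold_f):
    # the Carnot bound needs absolute temperatures; Fahrenheit -> Rankine
    t_hot = t_hot_f + 459.67
    t_cold = t_cold_f + 459.67
    return 1 - t_cold / t_hot

eta_max = carnot_efficiency(500, 70)   # upper bound only; real turbines do worse
```

This gives about 0.448, i.e. the quoted 45% upper limit; any real cycle's $\eta$ sits below it.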
{ "domain": "physics.stackexchange", "id": 32843, "tags": "carnot-cycle" }
Bash script for reviewing .pdf articles
Question: I wrote a bash script for reviewing scientific articles. It adds a blank page for notes and creates a new landscape file. I'm sure it can be made better code-wise. Any suggestions?

#!/bin/bash
if [ $# -ne 1 ]
then
    echo "Usage example: ./bashscript src.pdf"
    exit $E_BADARGS
else
    NUM=$(pdftk $1 dump_data | grep 'NumberOfPages' | awk '{split($0,a,": "); print a[2]}')
    COMMSTR=''
    for i in $(seq 1 $NUM); do
        COMMSTR="$COMMSTR A$i B1 "
    done
    $(echo "" | ps2pdf -sPAPERSIZE=a4 - pageblanche.pdf)
    $(pdftk A=$1 B=pageblanche.pdf cat $COMMSTR output 'mod_'$1)
    (pdfnup 'mod_'$1 --nup 2x1 --landscape --outfile 'print_'$1)
    $(rm pageblanche.pdf && rm 'mod_'$1)
fi
#for f in *.pdf; do ./bashscript.sh $f; done 2> /dev/null

Answer: First of all, "bashscript" is a very poor name: it doesn't tell what the script does. "prepare-articles-for-review.sh" would be better, or just "prepare-articles.sh". It is customary to use the .sh extension for Bash scripts. If your typical use case is this:

for f in *.pdf; do ./bashscript.sh $f; done 2> /dev/null

then it would be better to make the script handle multiple file parameters.

exit $E_BADARGS

I'm not familiar with such a variable. I don't think it's a standard, and it's not mentioned in my man bash. If the variable is undefined (and I think it is), then this will have the same effect as exit, which is equivalent to exit $?, where $? is the exit code of the last statement, in this case the echo on the previous line, which is most probably 0. I think you intended to exit with failure, with a non-zero exit code.

NUM=$(pdftk $1 dump_data | grep 'NumberOfPages' | awk '{split($0,a,": "); print a[2]}')

Many problems here:

You should double-quote "$1", to protect from spaces in the filename
The quoting in grep 'NumberOfPages' is redundant
Are you sure there won't be more than one line matching "NumberOfPages"?
Just to be safe, I would add exit in the Awk command. The Awk command can be written more simply as:

awk -v FS=": " '{print $2}'

COMMSTR=''

You can simplify to:

COMMSTR=

for i in $(seq 1 $NUM);

seq is not recommended because it's not portable. You can achieve the same using native Bash functionality:

for ((i=1; i<=$NUM; i++));

$(echo "" | ps2pdf -sPAPERSIZE=a4 - pageblanche.pdf)

The $(...) wrapping is really pointless, and awful. No need for echo "", simply echo is exactly the same.

$(pdftk A=$1 B=pageblanche.pdf cat $COMMSTR output 'mod_'$1)

Does this work at all? Unfortunately I don't have pdftk so cannot try. Maybe it's fine, but it looks suspicious. Btw, you don't need the quotes in 'mod_'$1. And again, the $(...) wrapping is pointless.

(pdfnup 'mod_'$1 --nup 2x1 --landscape --outfile 'print_'$1)

As earlier, you don't need those quotes, and the (...) wrapping is pointless. The same goes for the rest of the code.

Suggested implementation

I cannot fully test this because I don't have pdftk, but it should work:

#!/bin/bash
if [ $# = 0 ]
then
    echo "Usage: $0 file1.pdf file2.pdf ..."
    exit 1
fi

for file; do
    NUM=$(pdftk "$file" dump_data | awk -v FS=": " '/NumberOfPages/ { print $2; exit }')
    COMMSTR=
    for ((i=1; i<=$NUM; i++)); do COMMSTR="$COMMSTR A$i B1 "; done
    blank=blank.pdf
    ps2pdf -sPAPERSIZE=a4 - $blank < /dev/null
    pdftk A="$file" B=$blank cat $COMMSTR output mod_"$file"
    pdfnup mod_"$file" --nup 2x1 --landscape --outfile print_"$file"
    rm $blank && rm mod_"$file"
done
{ "domain": "codereview.stackexchange", "id": 10378, "tags": "bash, pdf" }
History of octanol-water partition coefficient
Question: We are aware that one of the basic pieces of information about a material is its logP value. The logP is worked out from the partitioning between the octanol and water phases. Does anyone know who was the first to introduce this octanol-water partition coefficient? I had a look through the literature but was not able to find the author of that rule. Answer: The most oft-cited publication is Partition Coefficients and Their Uses. A. Leo, C. Hansch, and D. Elkins. Chemical Reviews 71(6), 525-616 (1971). This manuscript was noted as a "Citation Classic" (see text here) in which A. Leo writes (in part): “In recognition of the effort of designing a method of calculating log P (o:w) from structure, the early version of which was reported in this article, the Chesapeake section of Sigma Xi presented me with its Excellence in Science Award for 1980.” Nota bene: In the last article I've linked, mention is made that the phenomenon was first investigated by Berthelot in 1872 and "put on a firm basis by Nernst in 1891." That said, the 1971 publication of Leo et al. is the first modern/comprehensive listing of octanol-water partition coefficients and the place where the symbology (and definition) were written.
{ "domain": "chemistry.stackexchange", "id": 4178, "tags": "physical-chemistry, solutions, history-of-chemistry" }
Energy of Ground State of Quantum Well
Question: I struggle with the following concept. Consider the finite square well potential in the figure below. Consider the case where the electron energy is below the potential ($V(x) = V_0$) outside the well, and above the potential ($V(x) = 0$) inside the well, as marked by the dashed line. Our professor said that the following explanation is false: "The energy of the ground state in the depicted potential is lower than the ground state energy for an infinite square well with length $a$." I just want to be sure I understand why that is so, because he gave no further explanation. The reason is simply that an infinite well is, by definition, "greater" than a finite well, and thus also has more energy. Is that correct? Answer: I guess we can try using the semi-classical approximation. Notice that in one 'period' the particle moves from $x=0$ to $x=a$ and back. This gives a phase shift $\gamma = -\pi$ since there are two smooth turning points in such an orbit. Use the Bohr-Sommerfeld equation ($n \in \mathbb{N} = \{0,1,2,...\}$): $$\frac{1}{\hbar}\oint p_x dx = 2\pi n - \gamma = 2\pi(n+1/2).$$ And notice that $p_x = \pm \sqrt{2mE}$ in the well since the particle is free for $x \in (0,a)$. Take the $+$ solution and take the orientation of the integral such that $\mathbf{p}\parallel \hat{\mathbf{x}}$. Noticing that $p_x$ is constant in the well, this gives $$\frac{1}{\hbar} (2p_x a) = \frac{2\sqrt{2mE}}{\hbar}a =2\pi (n+1/2).$$ Solving for $E = E_n$ we see that $$E_n = \frac{h^2 (n+1/2)^2}{8ma^2}.$$ Notice that these are indeed smaller than for the infinite square well of the same size $a$ with $n \in \mathbb{N}$: $E_{n,\infty} = \frac{h^2 (n+1)^2}{8ma^2}$.
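As a quick numerical cross-check of the comparison at the end of the answer, here is a sketch (in units where $h = m = a = 1$; not part of the original answer) showing each semiclassical level sitting below the corresponding infinite-well level:

```python
import math

# Work in units where m = a = 1 and h (Planck's constant) = 1.
m, a, h = 1.0, 1.0, 1.0

def E_semiclassical(n):
    # Bohr-Sommerfeld result with two smooth turning points: E_n ~ (n + 1/2)^2
    return h**2 * (n + 0.5)**2 / (8 * m * a**2)

def E_infinite_well(n):
    # Exact infinite square well, indexed from n = 0: E_n ~ (n + 1)^2
    return h**2 * (n + 1)**2 / (8 * m * a**2)

levels = [(E_semiclassical(n), E_infinite_well(n)) for n in range(5)]
for e_semi, e_inf in levels:
    print(f"{e_semi:.5f} < {e_inf:.5f}")
```

Every semiclassical level is lower, consistent with the intuition that the wavefunction of a finite well leaks into the barriers and so has a longer effective wavelength.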
{ "domain": "physics.stackexchange", "id": 64016, "tags": "quantum-mechanics, energy, potential, schroedinger-equation, ground-state" }
Limits of superdense coding
Question: Holevo's theorem says that no more than n bits can be stored (and retrieved) in n qubits. Indeed, allowing error can't improve this either -- the probability of retrieving the correct information is no better than that which could be transmitted in the same number of bits and guessing at the rest. Superdense coding is one way around this bound: if the receiver shares n maximally entangled qubits with the sender, the sender can manipulate them such that when she gives the receiver her n qubits the receiver can obtain 2n bits of information. Perhaps this is not surprising, though, since he has to measure 2n qubits to get the data. Is this the limit of quantum information capacity? That is, say sender and receiver share a large number N of entangled qubits and (after judicious manipulation and selection) the sender gives n of them to the receiver. Can more than 2n bits be transmitted in this way? It would seem that the answer is "no", but I'd like a double-check. I'm very much a beginner, just working through Michael Nielsen's tutorials and Scott Aaronson's book. This question is similar to another question here but my question is different and not answered there. Answer: No, only $2n$ bits can be transferred. The maximum capacity of superdense coding is actually known explicitly, and it is given by $\log_2(d) - S(A|B) = \log_2(d) - S(AB) + S(B)$. Here $d$ is the dimension of the system, in case of two level systems $d = 2$. This means that conditional entropy tells you by how much the standard classical capacity of $\log_2(d)$ is either attenuated or increased. It can be increased only because of the peculiar situation that the quantum conditional entropy can be negative. We know that $\log_2(d) - S(A|B) = S(\rho^{AB} || 1/d \otimes \rho^B)$, where $S(\rho || \sigma)$ is the relative entropy. This quantity satisfies $0 \leq S(\rho^{AB} || 1/d \otimes \rho^B) \leq 2 \log_2(d)$ in case that both $A$ and $B$ systems have equal dimension. 
If they do not, then replace $d = \min(d_A, d_B)$. So for $n$ entangled qubits you will have $d = 2^n$ and therefore you can transmit at most $2 \log_2(2^n) = 2n$ bits of information. If Alice possesses $N$ entangled qubits (I will imagine this means $N/2$ pairs) initially and sends $n$ of them to Bob, then $d = \min(d_A, d_B) = \min(2^{N-n}, 2^n) = 2^n$. This implies that the capacity is $2n$ bits. So having $N$ entangled qubits by itself does not offer any advantage over classical coding unless their counterparts are already with the receiver. For further reading see for instance https://arxiv.org/abs/quant-ph/0407037, or Nielsen & Chuang. If you want a more general picture you can also read about this in the introductory chapters of my thesis https://arxiv.org/abs/1303.4690.
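As a sanity check of the capacity formula $\log_2(d) - S(AB) + S(B)$, here is a small numerical sketch (assuming NumPy; not from the original answer). For a maximally entangled qubit pair, $S(AB) = 0$ and $S(B) = 1$, so one transmitted qubit yields exactly 2 bits:

```python
import numpy as np

def von_neumann_entropy(rho):
    """Entropy in bits: S(rho) = -Tr(rho log2 rho), ignoring zero eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

# Bell state |phi+> = (|00> + |11>)/sqrt(2), basis ordered as (A, B)
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho_AB = np.outer(phi, phi)

# Partial trace over A to get rho_B
rho_B = rho_AB.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

d = 2  # one qubit is sent
capacity = np.log2(d) - von_neumann_entropy(rho_AB) + von_neumann_entropy(rho_B)
print(capacity)  # 2.0 bits, as superdense coding promises
```

The same functions applied to a product state give $S(AB) = S(B) = 0$, and the capacity drops back to the classical 1 bit per qubit.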
{ "domain": "physics.stackexchange", "id": 8215, "tags": "quantum-mechanics, quantum-information, quantum-entanglement" }
Typical set in Shannon's source coding theorem
Question: I was following the textbook by David MacKay: Information Theory, Inference, and Learning Algorithms. I have a question on the asymptotic equipartition principle: For an ensemble of $N$ i.i.d. random variables $X^N=(X_1,X_2,...,X_N),$ with $N$ sufficiently large, the outcome $x=(x_1,x_2,...,x_N)$ is almost certain to belong to a subset of $A_x^N$ having only $2^{NH(x)}$ members, with each member having probability "close to" $2^{-NH(x)}$. The textbook also says that the typical set doesn't necessarily contain the most probable elements. On the other hand, the "smallest-sufficient set" $S_{\delta}$ is defined to be the smallest subset of $A_x$ satisfying $P(x\in S_{\delta})\ge 1-\delta$, for $0\leq{\delta}\leq1$. In other words, $S_{\delta}$ is constructed by taking the most probable element in $A_x$, then the second most probable, ... until the total probability is $\ge1-{\delta}$. My question is: as $N$ increases, does $S_{\delta}$ approach the typical set, such that these two sets end up being equivalent to each other? If the size of the typical set is identical to $|S_{\delta}|$, then why do we even bother with $S_{\delta}$? Why can't we just take the typical set as our compression scheme instead? Answer: The elements in the typical set have typical probability, close to $2^{-NH(x)}$. An element with untypically large probability, say the one with maximal probability, may not satisfy this constraint. Same goes for the rest of $S_\delta$. The source coding theorem does take the typical set as an encoding scheme.
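The concentration behind the AEP is easy to see numerically. The following sketch (my own illustration, not from the book) draws $N$ i.i.d. Bernoulli($p$) symbols and checks that $-\frac{1}{N}\log_2 P(x)$ lands near $H(p)$, while the single most probable sequence (all zeros) is far from typical:

```python
import random, math

random.seed(0)
p = 0.1                                              # P(X = 1)
H = -p * math.log2(p) - (1 - p) * math.log2(1 - p)   # binary entropy, ~0.469 bits

N = 10_000
x = [1 if random.random() < p else 0 for _ in range(N)]
k = sum(x)                                           # number of ones

# Empirical per-symbol information of the drawn sequence: -log2 P(x) / N
info_rate = -(k * math.log2(p) + (N - k) * math.log2(1 - p)) / N
print(abs(info_rate - H))        # small: the drawn sequence is typical

# The single most probable sequence (all zeros) is NOT typical:
info_rate_max_prob = -math.log2(1 - p)   # ~0.152 bits/symbol, far below H
```

This is exactly the answer's point: the maximal-probability element violates the "probability close to $2^{-NH}$" condition, yet a randomly drawn sequence almost surely satisfies it.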
{ "domain": "cs.stackexchange", "id": 3372, "tags": "sets, coding-theory, encoding-scheme" }
Fourier series coefficient of signal when Time period is twice the fundamental period
Question: My try: First of all I tried observing the symmetry but I didn't find any. So I tried to calculate the Fourier series coefficients of the signal like this. First I differentiated the signal $x(t)$ so that $y(t)=\frac{dx(t)}{dt}$. Now suppose $d_k$ is the Fourier series coefficient of the differentiated signal and $a_k$ is the Fourier series coefficient of the original signal $x(t)$. Now $d_k=(jk \omega_0 )a_k$, so $a_k=\frac{d_k}{jk \omega_0}$. Now calculating $d_k$: \begin{align} d_k &=\frac{1}{3} \int_{-1}^2 \big(\delta(t+1)+\delta(t)-2\delta(t-1)\big) e^{\frac{-jk2\pi t}{3}}dt\\ &=\frac{1}{3}\big(1+e^{\frac{jk2\pi}{3}}-2e^{\frac{-jk2\pi}{3}}\big) \end{align} Now $$a_k=\frac{\frac{1}{3}\big(1+e^{\frac{jk2\pi}{3}}-2e^{\frac{-jk2\pi}{3}}\big)}{jk \omega_0}$$ $$a_k=\frac{\big(1+e^{\frac{jk2\pi}{3}}-2e^{\frac{-jk2\pi}{3}} \big)}{j2\pi k}$$ Now checking: at $k=1$, $a_k\ne 0$, and at $k=2$, $a_k\ne 0$, so for one odd and one even value of $k$ it's not zero, so no option matches. What is the mistake? EDIT: As stated by @Matt L in the comment, I should check for period $T=6$ also, so I did it like this. I differentiated the signal from -3 to +3, and the Fourier series coefficients I got are \begin{align} d_k &=\frac{1}{6} \int_{-3}^3 \big(\delta(t+3)-2\delta(t+2)+\delta(t+1)+\delta(t)-2\delta(t-1)+\delta(t-2)\big)e^{\frac{-jk2\pi t}{6}}dt\\ &=\frac{1}{6}\big(e^{jk\pi}-2e^{\frac{jk2\pi}{3}}+e^{\frac{jk\pi}{3}}+1-2e^{\frac{-jk\pi}{3}}+e^{\frac{-jk2\pi}{3}}\big) \end{align} Now at $k=1$, $d_k=0$; at $k=2$, $d_k\ne 0$; at $k=3$, $d_k=0$; but at $k=6$ also $d_k=0$. Now what's the mistake? Answer: As Matt stated, by your calculation you have excluded options a and b, but you should also check for options c and d, from which you would see that the answer is option d.
You could practically check the result for the coefficients $d_k$ from the interpretation that CTFS coefficients of a periodic waveform $\tilde{x}(t)$ can be obtained from an inverse-DTFT (or an inverse-DFT, more practically), by treating $\tilde{x}(t)$ as a DTFT waveform. Note that the computation is exact if the CTFS coefficients $d_k$ are of finite length in advance, but otherwise approximate, which would only get better by taking enough samples to represent the waveform $\tilde{x}(t)$. Therefore, the following computes (approximately here) $3N$ CTFS coefficients of $x(t)$:
> N = 128;
> x = [ones(1,N) , -1*ones(1,N) , zeros(1,N)];
> d = ifft(x);
On the other hand, the following code would do the same by treating $\tilde{x}(t)$ as periodic in $2 T_0$ rather than $T_0$:
> N = 128;
> x = [ones(1,N) , -1*ones(1,N) , zeros(1,N)];
> d = ifft([x x]);
Based on your edit, the following provides you the answer. Given that the CTFS coefficients of the derivative signal are: \begin{align} d_n &=\frac{1}{6} \int_{-3}^3 [\delta(t+3)-2\delta(t+2)+\delta(t+1)+\delta(t)-2\delta(t-1)+\delta(t-2)]e^{\frac{-jn2\pi t}{6}}dt\\ &=\frac{1}{6}[ e^{j\frac{2\pi}{6}3n} - 2 e^{j\frac{2\pi}{6}2n} + e^{j\frac{2\pi}{6}1n} + 1 -2e^{-j\frac{2\pi}{6}1n} + e^{-j\frac{2\pi}{6}2n}] \\ &=\frac{1}{6} [ e^{j\pi n} - 2 e^{j\frac{2\pi}{3}n} + e^{j\frac{\pi}{3}n} + 1 -2e^{-j\frac{\pi}{3}n} + e^{-j\frac{2\pi}{3}n}] \\ \end{align} The above three lines were straightforward. Now we shall group those terms to yield something that can be simplified: $$d_n =\frac{1}{6} [ (1 + e^{j\pi n}) - 2 ( e^{j\frac{2\pi}{3}n} + e^{-j\frac{\pi}{3}n} ) + (e^{j\frac{\pi}{3}n} + e^{-j\frac{2\pi}{3}n})] $$ Now add $2\pi n$ to the negative angles so that they become: $$e^{-j\frac{\pi}{3}n} = e^{j( 2\pi n -\frac{\pi}{3}n) } = e^{j\frac{5\pi}{3}n } = e^{j\pi n }e^{j\frac{2\pi}{3}n } $$ and $$e^{-j\frac{2\pi}{3}n} = e^{j( 2\pi n -\frac{2\pi}{3}n) } = e^{j\frac{4\pi}{3}n } = e^{j\pi n}e^{j\frac{\pi}{3}n } $$ respectively.
Plugging these into the $d_n$ line yields: \begin{align} d_n &=\frac{1}{6} [ (1 + e^{j\pi n}) - 2 ( e^{j\frac{2\pi}{3}n} + e^{j\pi n }e^{j\frac{2\pi}{3}n } ) + (e^{j\frac{\pi}{3}n} + e^{j\pi n}e^{j\frac{\pi}{3}n})]\\ &=\frac{1}{6} [ (1 + e^{j\pi n}) - 2 (1 + e^{j\pi n}) e^{j\frac{2\pi}{3}n} + (1 + e^{j\pi n}) e^{j\frac{\pi}{3}n}]\\ &=\frac{1}{6} [1 + e^{j\pi n}]\cdot[1 - 2 e^{j\frac{2\pi}{3}n} + e^{j\frac{\pi}{3}n}]\\ &=\frac{1}{6} [e_n]\cdot[f_n]\\ \end{align} Now it can be shown that the product term $e_n$ will be zero for all odd indices $n$, hence $d_n$ will also be zero for all odd $n$. Note that the other product term $f_n$ will be zero for $n = 6m$, so that $d_n$ will also be zero for $n = 6, 12, 18, ...$ However, for your example it suffices to show that $e_n$ will be zero for odd $n$. FURTHERMORE, below is the theoretical proof that it's not a coincidence that all the odd-indexed terms in the CTFS are zero, independent of the signal itself, when the period $T_y$ of the periodic signal $x(t)$ is assumed to be twice its fundamental period $T_x$. Assume $x(t)$ is a periodic signal with fundamental period $T_x$ and let $y(t)$ be the same signal $x(t)$ but interpreted to have a period of $T_y = 2 T_x$.
Let $a_k$ and $b_k$ denote the CTFS coefficients of $x(t)$ and $y(t)$ respectively; then we have: \begin{align} b_k &= \frac{1}{T_y} \int_{0}^{T_y} y(t) e^{-j\frac{2\pi}{T_y} k t} dt = \frac{0.5}{T_x} \int_{0}^{2T_x} x(t) e^{-j\frac{2\pi}{T_x} (k/2) t} dt\\ &= 0.5 \left( \frac{1}{T_x} \int_{0}^{T_x} x(t) e^{-j\frac{2\pi}{T_x} (k/2) t} dt + \frac{1}{T_x} \int_{T_x}^{2T_x} x(t) e^{-j\frac{2\pi}{T_x} (k/2) t} dt \right)\\ \end{align} Make the substitution $t' = t - T_x$ in the second integral and recognize that $x(t'+T_x) = x(t')$ as $x(t)$ is periodic with $T_x$, yielding: $$b_k = 0.5 \left( \frac{1}{T_x} \int_{0}^{T_x} x(t) e^{-j\frac{2\pi}{T_x} (k/2) t} dt + e^{-j\frac{2\pi}{T_x} (k/2) T_x} \frac{1}{T_x} \int_{0}^{T_x} x(t') e^{-j\frac{2\pi}{T_x} (k/2) t'} dt' \right) $$ Now recognize the integrals as $a_{k/2}$, i.e., the CTFS of the signal $x(t)$ evaluated at $k/2$, and simplify the sum (noting that $e^{-j\pi k} = e^{j\pi k}$ for integer $k$): \begin{align} b_k &= 0.5 \left( a_{k/2}+ e^{j\pi k} a_{k/2} \right)\\ &= 0.5 \left( 1+ e^{j\pi k} \right) a_{k/2} \\ &= \begin{cases} a_{k/2}, & \text{for } k \text{ even, } k=2m,\ m=0,1,2,...\\ 0, & \text{for } k \text{ odd, } k=2m+1,\ m=0,1,2,... \end{cases} \\ \end{align}
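The final result, that all odd-indexed coefficients vanish when a $T_x$-periodic signal is treated as $2T_x$-periodic, holds for any waveform and can be checked numerically. A Python sketch (assuming NumPy; a translation of the MATLAB idea in the answer, not part of the original post):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 128
x = rng.standard_normal(N)               # one period of an arbitrary real signal

# Treat the same samples as one period of length 2N
b = np.fft.fft(np.tile(x, 2)) / (2 * N)  # DFT-based CTFS approximation, b_k

odd_mag = np.abs(b[1::2]).max()          # all odd coefficients
even = b[0::2]                           # even coefficients b_{2m}
a = np.fft.fft(x) / N                    # coefficients a_m for the true period

print(odd_mag)                           # ~0: odd coefficients vanish
print(np.abs(even - a).max())            # ~0: b_{2m} = a_m, as derived
```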
{ "domain": "dsp.stackexchange", "id": 5770, "tags": "homework, fourier-series" }
How does a single-track Turing machine simulate a multi-track Turing machine?
Question: It's easy to see how a multi-track Turing machine can simulate a single-track Turing machine; it does so by ignoring all but the first track. But how does it work the other way? I need a specification of a transition function that does the job. If there are $k$ tracks, then we can think of symbols as being vectors and arrange them one after another on the tape; but again, what's the transition function like in the equivalent single-track machine? Answer: If $\Sigma = (x_1,...,x_n)$ is the alphabet of the $m$-track $TM$, just use an expanded alphabet $\Sigma' = \Sigma \times ... \times \Sigma$ for the single-track $TM'$ ($|\Sigma'| = n^m$). Every vector $\bar{x}_i$ of $m$ symbols from $\Sigma$ can be mapped to a unique alphabet symbol $u_i$ in $\Sigma'$: $\bar{x}_i = (x_{i_1},x_{i_2},...,x_{i_m}) \rightarrow u_i \in \Sigma'$ Hence every transition of $TM$ $(q_h,(x_{i_1},x_{i_2},...,x_{i_m}))\rightarrow (q_k,(x_{j_1},x_{j_2},...,x_{j_m}),dir)$ can be mapped to an equivalent transition in $TM'$ where the "read vector" $\bar{x_i}$ and "write vector" $\bar{x_j}$ are replaced with the corresponding alphabet symbols in $\Sigma'$: $(q_h,u_i)\rightarrow (q_k,u_j,dir)$
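The construction can be sketched in code. In the snippet below the helper names (composite, translate) are invented for illustration; the point is only that the mapping $\bar{x}_i \rightarrow u_i$ is a bijection and transitions translate one-for-one:

```python
from itertools import product

sigma = ('0', '1', '_')          # base alphabet of the m-track machine
m = 2                            # number of tracks

# Sigma' = Sigma x ... x Sigma: each m-tuple of track symbols
# becomes one composite symbol u_i of the single-track machine.
composite = {bar_x: f"u{idx}" for idx, bar_x in enumerate(product(sigma, repeat=m))}

def translate(transition):
    """Map an m-track transition to the equivalent single-track one."""
    (q_h, read_vec), (q_k, write_vec, direction) = transition
    return (q_h, composite[read_vec]), (q_k, composite[write_vec], direction)

# Example m-track transition: in state q0 reading ('1','_'),
# write ('0','1'), move right.
t = (('q0', ('1', '_')), ('q1', ('0', '1'), 'R'))
print(translate(t))
```

Since the dictionary has exactly $n^m$ entries, the expanded alphabet has the size $|\Sigma'| = n^m$ claimed in the answer.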
{ "domain": "cs.stackexchange", "id": 469, "tags": "computability, turing-machines, simulation" }
How does Dichlorofluoromethane get into the stratosphere?
Question: I was wondering why CFCs (FCKWs in German) can destroy the ozone layer. It seems like these gases have a high molecular weight in comparison to $\ce{N2}$ or $\ce{O2}$. How can these gases come up to the outer layer of our atmosphere? In particular, how long (on average) would it take to get to such an altitude (through the diffusion of such a particle)? I mean, they are basically several times heavier than all the other components of our atmosphere. Hence, it shouldn't be very energetically stable for them to float in the outer layers of the atmosphere on top of other, lighter particles. Answer: This is one of the more frequently asked questions about ozone depletion. I will try to give a short summary. First of all, it is important to know that chlorofluorocarbons (CFCs) are extremely stable at room temperature and pressure. It is therefore only natural that these gases become enriched in the lower atmosphere, so they have plenty of time to get to higher altitudes via diffusion. The earth's atmosphere is in constant motion; you can experience that on a macroscopic level via different winds. The sun heats up the earth's surface, causing all sorts of liquids to be evaporated. While water evaporates, these small molecules may form little conglomerates, taking other molecules up to greater heights. This will happen to dust, so it will also happen to CFCs. There will be local areas (or volumes, layers) of the air at different heights of the atmosphere that have different densities, concentrations and compositions. As a result they will also have different air pressures. You can experience this on a plane at higher altitudes. Assuming that all reactions tend towards equilibrium, there will be forces acting between these areas. As a consequence there will also be upward forces acting on the CFC-containing gas mixture. Slowly these molecules will move upwards to the stratosphere.
As you can see, this process is heavily dependent on many different factors and may well require years. I think it is actually very hard to find any accurate numbers for this process. My research on the subject turned up nothing about that. In most of the cases it was just stated that these gases are there. Some further reading: Why does free chlorine in the stratosphere lose its ozone-depleting potential after about 100,000 reactions? A collection of nature articles. Some are free access. Ozonewatch @ NASA http://www.ozzyozone.org/ (for children, in English, French, Spanish)
{ "domain": "chemistry.stackexchange", "id": 1498, "tags": "thermodynamics, atmospheric-chemistry" }
How to prove that average of random forces is zero?
Question: The average of a random force is zero: $$ \langle F(t) \rangle = 0 $$ How does one prove this? Answer: There is not only one "randomness". There are infinitely many possibilities for a random force. You need to specify the probability function, $p(t)$. Without it, the term random is just incomplete... If you mean random in the sense that all directions are equally probable, then we're talking about a constant probability density. For any symmetric distribution, you can show that the mean is 0, as there will be as many steps leftwards as rightwards. You basically have to compute the definition of the mean: $$\langle F\rangle= \frac{1}{T}\int_0^T F(t) p(t) dt $$
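A Monte-Carlo sketch of this point (assuming a symmetric, here Gaussian, distribution for the force; my illustration, not part of the original answer): the time average of a symmetric random force tends to zero, with fluctuations shrinking like $1/\sqrt{T}$:

```python
import random

random.seed(42)

def mean_force(T):
    """Time-average T samples of a force drawn from a symmetric (Gaussian) distribution."""
    return sum(random.gauss(0.0, 1.0) for _ in range(T)) / T

avg = mean_force(200_000)
print(avg)   # close to 0; the standard deviation of the mean is 1/sqrt(T)
```

For an asymmetric $p$, the same average would instead converge to the nonzero mean of the distribution, which is exactly why the symmetry assumption matters.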
{ "domain": "physics.stackexchange", "id": 80034, "tags": "statistical-mechanics, stochastic-processes" }
Mass of empty AdS$_5$
Question: Five-dimensional empty AdS$_5$ space has mass $$ E = \frac{3 \pi \ell^2}{32 G}. $$ Is the above equation correct? Let's do some dimensional analysis to confirm. In natural units, in 5 dimensions $[G] = -3$, where $[...]$ is the mass dimension. Also $[\ell]=-1$. Therefore $\left[ \frac{\ell^2}{G} \right] = 1$. So the dimensions seem to work out fine. Here's my second question: The limit $\ell \to \infty$ of $AdS_5$ space is flat space. Isn't it weird that the mass diverges in this limit? I would have presumed that the mass should vanish in this limit, since flat space has vanishing mass. Are we using two different definitions of mass? EDIT: Due to some requests in the comments, I will include the derivation of the formula above. I use the boundary stress tensor given by Brown-York (derived from the Einstein action along with the Gibbons-Hawking boundary term): $$ t_{ij} = \frac{1}{8\pi G} \left[ K_{ij} - \gamma_{ij} K + \frac{2}{\sqrt{-\gamma}} \frac{\delta S_{ct}}{\delta \gamma^{ij}} \right] $$ Here $K_{ij} = \nabla_{(i} n_{j)}$ is the extrinsic curvature. $n^\mu$ is the unit normal vector to the boundary. $\gamma$ is the boundary metric and $K = \gamma^{ij} K_{ij} $. $S_{ct}$ is the counterterm action included to make all the B-Y charges finite, which are defined as $$ Q_\xi = \int_{\cal B} d^d x \sqrt{\sigma} u^i \xi^j t_{ij} $$ Here $\sigma_{ab}$ is the metric on a spatial hypersurface and $u^i$ is a time-like unit vector normal to the hypersurface. $\xi^j$ is a Killing vector of the boundary metric.
Now, for $AdS_5$, the counterterm action is given by $$ S_{ct} = -\frac{3}{\ell} \int d^4 x \sqrt{-\gamma} \left( 1 + \frac{\ell^2}{12} R(\gamma) \right) $$ The B-Y tensor is then $$ t_{ij} = \frac{1}{8\pi G} \left[ K_{ij} - \gamma_{ij} K - \frac{3}{\ell} \gamma_{ij} + \frac{\ell}{2} \left( R_{ij} - \frac{1}{2} \gamma_{ij} R \right) \right] $$ We can now work in the Fefferman-Graham coordinates for $AdS_5$ space where the metric is $$ ds^2 = \frac{\ell^2 d\rho^2}{4\rho^2} - \frac{(1+\rho)^2}{4\rho} dt^2 + \frac{\ell^2 ( 1 - \rho)^2}{4\rho} d\Omega_3^2 $$ Thus $$ t_{ij} = - \frac{ \rho }{4\ell \pi G} \left( \gamma_{ij}^{(0)} + \gamma_{ij}^{(2)} \right) + {\cal O}(\rho^2) $$ where $$ \gamma_{ij}^{(0)} dx^i dx^j = -\frac{1}{4} dt^2 + \frac{\ell^2}{4} d \Omega_3^2 $$ $$ \gamma_{ij}^{(2)} dx^i dx^j = - \frac{1}{2} dt^2 - \frac{\ell^2}{2} d \Omega_3^2 $$ We also have $$ u = \frac{2 \sqrt{\rho}}{1+\rho} \partial_t,~~ \xi = \partial_t,~~\sqrt{\sigma} = \frac{\ell^3 (1 - \rho)^3 }{8 \rho^{3/2} } \sin^2\theta \sin \phi $$ Plugging all this in, we find that the B-Y charge corresponding to the Killing vector $\partial_t$ is $$ Q_t = \frac{3 \pi \ell^2 }{32 G} $$ This is where I got the formula from. I interpret this as the mass of $AdS_5$ space. Disclaimer - I have intentionally left out several computations to reduce the length of the problem. I have not referred to any paper and all computations have been done by me. Answer: As has been shown in this paper, pointed out by Matthew in the comments, the expression found is indeed correct and can be understood from a holographic point of view. I now reproduce the argument from the relevant section (number 5) of the paper: It seems unusual from the gravitational point of view to have a mass for a solution that is a natural vacuum, but we will show that this is precisely correct from the perspective of the AdS/CFT correspondence.
We use the conversion formula to gauge variables: $$\frac{\ell^3}{G}=\frac{2N^2}{\pi}$$ Then the mass of global AdS$_5$ is $$M=\frac{3N^2}{16\ell} $$ The Yang-Mills dual of AdS$_5$ is defined on the global AdS$_5$ boundary with topology $S^3\times R$. A quantum field theory on such a manifold can have a non-vanishing vacuum energy - the Casimir effect. In the free field limit, the Casimir energy on $S^3\times R$ is: $$E_\text{cas}=\frac{1}{960r} (4n_0+17n_{1/2}+88n_1)$$ where $n_0$ is the number of real scalars, $n_{1/2}$ the number of Weyl fermions and $n_1$ the number of gauge bosons, and $r$ is the radius of $S^3$. For $SU(N)$, $\mathcal{N}=4$ super Yang-Mills $n_0=6(N^2-1)$, $n_{1/2}=4(N^2-1)$ and $n_1=N^2-1$ giving: $$E_\text{cas}=\frac{3(N^2-1)}{16r} $$ To compare with [the expression for the mass], remember that $M$ is measured with respect to coordinate time while the Casimir energy is defined with respect to proper boundary time. Converting to coordinate time by multiplying by $\sqrt{-g_{tt}}=\frac{r}{\ell}$ gives the Casimir “mass”: $$M_\text{cas}=\frac{3(N^2-1)}{16\ell} $$ In the large $N$ limit we recover the earlier expression for the mass of AdS$_5$. $$M=\frac{3\pi\ell^2}{32G}$$ This is a CW answer based on comments by other users, supplemented with the relevant results from a paper on the topic. I have written this answer to get it out of the 'unanswered' tab.
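The free-field counting in the quoted argument can be verified with exact arithmetic. A sketch using Python's fractions (my check, not from the paper): it confirms that $4n_0 + 17n_{1/2} + 88n_1 = 180(N^2-1)$, hence $E_\text{cas} = 3(N^2-1)/(16r)$, and that $M_\text{cas}/M \to 1$ at large $N$:

```python
from fractions import Fraction

def casimir_coeff(N):
    """Coefficient of 1/r in E_cas for SU(N), N=4 super Yang-Mills on S^3 x R."""
    n0 = 6 * (N**2 - 1)        # real scalars
    n_half = 4 * (N**2 - 1)    # Weyl fermions
    n1 = N**2 - 1              # gauge bosons
    return Fraction(4 * n0 + 17 * n_half + 88 * n1, 960)

N = 5
coeff = casimir_coeff(N)
print(coeff == Fraction(3 * (N**2 - 1), 16))   # True: E_cas = 3(N^2-1)/(16 r)

# Large-N comparison with the gravity answer: using l^3/G = 2N^2/pi,
# M = 3 N^2/(16 l) while M_cas = 3(N^2-1)/(16 l), so the ratio -> 1.
ratio = Fraction(3 * (N**2 - 1), 16) / Fraction(3 * N**2, 16)
print(float(ratio))
```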
{ "domain": "physics.stackexchange", "id": 15918, "tags": "general-relativity, mass, dimensional-analysis, anti-de-sitter-spacetime" }
how much entropy is there in the shape of a rock?
Question: If I dig up a given rock, make a 3d model of it, and compare that model with those of a large number of other rocks from the same area, what fraction of other rocks would have the same 3d model? Let's say that the measurement is performed with an accuracy relative to the size of the rock, so a rock that fits within a 100cm sphere might be measured to within the nearest centimeter. It would be fine to only look at rocks that have been through similar geological processes, rather than all rocks in an area. I'm broadly interested in an answer for all land areas on Earth, but if there's data for some small region, or theories about rock shape that allow estimating the average entropy, that would be a useful answer. Answer: This would actually be an interesting research project. The ability to do a 3-D scan and model of a rock's shape in a manner that is fast enough to get a statistically meaningful number of rocks measured is relatively new. Certainly you can easily name things that probably control the shape of rocks (lithology, mode of erosion, mode of transport, history of these things) but saying something about this in a quantitative way has not been done AFAIK. I think this is the sort of question that might result in surprising insights. Or it might lead nowhere. Certainly the question 'Does arid region erosion result in different shaped rocks than glacial erosion?' answered in a quantitative way sounds worthwhile to me.
{ "domain": "earthscience.stackexchange", "id": 1326, "tags": "geology, mathematics" }
Why can the FFT always be mirrored in the middle of the x axis?
Question: Plotting the FFT of a signal with Matlab, e.g. by the code: y_fft = abs(fft(y)); f = fs*(0:length(y)-1)/length(y); plot(f,y_fft); where y is the discrete signal and 'fs' is the sample rate, creates a plot where the curve is mirrored about the middle of the x-axis. Why can the FFT always be mirrored in the middle of the x axis? Answer: The DFT of a real signal is conjugate symmetric. For example, if your DFT result at, say, 2 Hz was $1+j5$, then your DFT result at -2 Hz would be $1-j5$. This is conjugate symmetry. Of course, when you take the absolute magnitude, the result in both cases is the same, which is why you see this mirror image. Going deeper, the reason why a real signal can be decomposed into symmetric complex conjugate parts is that a real signal oscillates on the real axis, and this can be thought of as the resultant of two phasors rotating around the center of the complex plane in opposite directions. Hope that helped.
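The symmetry is easy to verify directly. A NumPy sketch (an illustration with an arbitrary real test signal, not from the original post) confirms $Y[k] = Y^*[N-k]$ for real input, which is exactly why $|Y|$ is mirrored:

```python
import numpy as np

fs = 1000                          # sample rate in Hz (arbitrary choice)
t = np.arange(0, 1, 1 / fs)
y = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)  # real signal

Y = np.fft.fft(y)
N = len(Y)

# Conjugate symmetry for real input: Y[k] == conj(Y[N-k]) for k = 1..N-1
sym_err = np.abs(Y[1:] - np.conj(Y[1:][::-1])).max()
print(sym_err)     # ~0, so |Y| is mirrored about the middle of the frequency axis
```

Since bin $N-k$ is the same as bin $-k$ modulo $N$, this is the $1+j5$ versus $1-j5$ example from the answer, checked at every frequency at once.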
{ "domain": "dsp.stackexchange", "id": 836, "tags": "fft, matlab" }
How does one actually measure the position or momentum of a quantum object?
Question: How is the position or momentum of a quantum particle measured experimentally in the laboratory? Suppose we want to know the position or momentum of a quantum particle which is kept in a box, i.e. an infinite square well; then how do we perform the experiment? In some books it is said that we have to use photons to illuminate the quantum particle and then visualise its position, but simultaneously it is written that this is just a thought experiment to make students understand the concept. Answer: I want to clear up that the original post has suffered a number of edits, and not by the OP, and I am starting my answer from the original post, which had explicit reference to the Heisenberg uncertainty principle. I.e. it stated: Heisenberg uncertainty says that the position and momentum or the energy and time of a quantum particle can't be measured simultaneously with great accuracy. Which is the reason for my discussing the HUP below. The Heisenberg uncertainty principle, HUP, was a precursor of the final formulated theory that describes the behavior of quantum entities, elementary particles and their composites: the standard model, SM. What does it say? It describes a limiting volume for the two variables under consideration, so that greater accuracy in a simultaneous measurement of p will give a larger uncertainty in the value of x, and vice versa. If one substitutes laboratory numbers, for example microns for space, the momentum will have to be larger than a very small number, as $\hbar$ is a very small number. It is a relationship due to the commutator relations of the quantum mechanical operators of the corresponding variables, which are axiomatic in the quantum mechanical theory that is used in the SM. The SM has been validated innumerable times. In this convoluted sense the good agreement of the model with innumerable data is enough to validate the HUP.
The HUP relationship can also be derived from the basic theory of quantum mechanics, so all measurements in particle physics can be considered to validate the HUP. This recent measurement may interest you; the researchers have performed measurements on photons (particles of light) and showed that the act of measuring can introduce less uncertainty than is required by Heisenberg's principle. The total uncertainty of what can be known about the photon's properties, however, remains above Heisenberg's limit. Edit after comment by OP: I just want to know the experiment by which, say, I will be able to measure the position of a quantum particle in a 1d box, or its momentum. The real world is three dimensional in space, and your thought experiment cannot materialize in the lab. The positions and momenta of elementary particles are measured with complicated detector systems, and the measurement errors are such that the HUP constraint is always fulfilled. It needs special setups to go to accuracies which challenge the dimensional constraints of the HUP, as in the link above: Steinberg's group does not measure position and momentum, but rather two different inter-related properties of a photon: its polarization states. In this case, the polarization along one plane is intrinsically tied to the polarization along the other, and by Heisenberg's principle, there is a limit to the certainty with which both states can be known. They use another set of conjugate variables in order to check the HUP.
{ "domain": "physics.stackexchange", "id": 43906, "tags": "quantum-mechanics, experimental-physics, operators, measurement-problem, observables" }
Voltage reading between hand and 3.3V source is only 0.1mV. Why?
Question: I held one multimeter probe to a 3.3V source; the other probe I held in my hand. The voltage measured between these 2 points was around 0.1mV. How is this possible? I expected 3.3V, since I thought my hand was a ground. What I find at least as strange is that when I held one probe in my left hand and one in my right, the voltage was larger than the former measurement (around 2.3mV!! But it fluctuates much more, though). Edit (after reading Olin Lathrop's answer and reading about voltage drop): Is the following correct? The multimeter can't pick up the real voltage drop because the total resistance is too high, but for some reason related to how the multimeter is built it picks up a stable 0.1mV. Also, the circuit extends from my hand to the floor I'm standing on, to the table's legs, up to the ground terminal of the device delivering the voltage. The real voltage drop between the two probed points is very small because the resistance of the multimeter dissipates so little energy that it hardly drops the voltage. Suppose the multimeter was a perfect measuring device: given that the larger the resistor, the more energy used by that resistor and the bigger the voltage drop across it, and that my body and the rest of the circuit have a very high resistance, the voltage drop between my hand and the ground terminal is actually very large. Answer: Draw a complete schematic of what all was connected and you should be able to see for yourself. The negative terminal of the power supply and your body were not connected. For example, they weren't both connected to ground via a sufficiently low resistance path. Just because a power supply uses wall power and has a ground lead on the plug doesn't mean any part of the output is connected to that ground. Most ordinary power supplies like that, and all "wall wart" type supplies you will be able to find, are deliberately isolated from the input power feed for safety reasons.
There are two reasons you measure some voltage when holding both meter probes in opposite hands. First, the moisture of your skin and the probe metal will cause a small battery effect. It is very difficult to make the two battery effects of each hand balance out so that the meter reads zero. 2.3 mV is really a very small voltage. Second, your body is most likely picking up a significant amount of power line hum, which becomes a common mode signal to the meter. The meter's electronics aren't perfect, so some of this common mode signal can show up as a differential signal, or get rectified a bit to show as a DC offset. Lick the finger holding the meter probe of one hand only, and you should see a larger voltage between the two hands.
{ "domain": "physics.stackexchange", "id": 9269, "tags": "electricity, electric-circuits" }
Why do we use retarded Green's functions in response theory?
Question: When computing the response of a system to an external perturbation, we usually use the retarded Green's function to describe the response. On the other hand, from scattering theory in Quantum Mechanics, the perturbation series is given in terms of time-ordered Green's functions. If I used time-ordered perturbation theory to calculate the response of my system, I would probably get a very different answer. How do I know which one I need to use? For example, if I'm shining a laser on my material, is that scattering or is that more like equilibrium? Answer: The response of a system is usually described by a retarded Green's function, which reflects causality (the response follows the perturbation, rather than precedes it). Indeed, the approximate evolution of an operator under the Hamiltonian $H=H_0 + V(t)$ is described by: $$ \langle A_H(t)\rangle=\langle S^\dagger(t)A_I(t)S(t)\rangle\approx \langle A_I(t)\rangle - \frac{i}{\hbar}\int_{-\infty}^tdt_1\langle\left[A_I(t), V_I(t_1)\right]\rangle =\\ \langle A_I(t)\rangle - \frac{i}{\hbar}\int_{-\infty}^{+\infty}dt_1\langle\left[A_I(t), V_I(t_1)\right]\rangle\theta(t-t_1),\\ S(t)=T\exp\left[-\frac{i}{\hbar}\int_{-\infty}^tdt_1V_I(t_1) \right], $$ where $A_H(t)$ and $A_I(t)$ are the operators in the Heisenberg and the interaction representations. The retarded Green's function is manifest in the second line. The problem is that calculation of the retarded function is usually rather cumbersome, whereas for time-ordered Green's functions one may obtain nice closed expressions (e.g., the very intuitive ones represented by the Feynman diagrams). One therefore usually calculates the time-ordered Green's function and then passes to the retarded Green's function using the Lehmann representation. This logic is taken one step further in the Keldysh formalism, where one calculates the contour-ordered Green's function, which has time-ordered components alongside the retarded and advanced ones. 
A detailed discussion of calculating the retarded response function from a time-ordered one can be found in discussions of the Kubo formula or of dielectric response in most many-body texts, such as Fetter & Walecka or Mahan (AGD has it all too, but in rather cryptic form).
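For reference, the zero-temperature relation between the two functions that the answer alludes to can be written down explicitly. This is my paraphrase of the standard textbook result for a response function $\chi$; conventions and signs vary between texts, so treat it as a sketch rather than a quotable formula:

```latex
% Lehmann-representation relation at T = 0 (paraphrased; see the
% many-body texts cited above for the precise conventions): the real
% parts of the retarded and time-ordered functions coincide, while
% the imaginary parts differ by a sign at negative frequencies.
\operatorname{Re}\chi^{R}(\mathbf{k},\omega)
    = \operatorname{Re}\chi^{T}(\mathbf{k},\omega),
\qquad
\operatorname{Im}\chi^{R}(\mathbf{k},\omega)
    = \operatorname{sgn}(\omega)\,\operatorname{Im}\chi^{T}(\mathbf{k},\omega).
```

So one computes $\chi^{T}$ diagrammatically and then reads off $\chi^{R}$ from relations of this kind.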
{ "domain": "physics.stackexchange", "id": 80370, "tags": "quantum-mechanics, quantum-field-theory, condensed-matter, many-body, greens-functions" }
Is there any way to create an electric field around two similarly charged plates?
Question: Can an electric field exist around two similarly charged plates? For instance, if we put small paper pellets around them, can they attract the paper pellets without generating repulsion? Answer: In reference to what you are asking, the two similarly charged plates would repel each other. Now, if the two plates are kept in an external electric field, then two cases can exist: If the outside electric field opposes the electric field between the two plates and is larger than it, then the paper bits would be attracted towards the plate. If the outside electric field is smaller than the electric field between the two plates, then the plates would not attract the paper.
{ "domain": "physics.stackexchange", "id": 30092, "tags": "electrostatics, capacitance" }
Why is light that was emitted right at the big bang (same time/place) reaching us at different times?
Question: I understand the big bang to have (at least mathematically) started our universe at a virtually precise moment and within a virtually infinitesimal volume. Yet some of the light from that moment/volume reached us in the distant past, some is reaching us now, and more will reach us in the distant future (expanding particle horizon). That seems strange to me. I am thinking that maybe the big bang really is sort of the limiting case of the event horizon approaching $0$ distance from us. That is, if time began at the big bang ($t=0$ at the moment of the big bang), then at any $t>0$, there is a finite event horizon (albeit very tiny for $t \approx 0$) and, if object A is closer to it than object B, then object A's light will reach us later than object B (and object A will reach the particle horizon later than object B). But, at $t=0$, the actual big bang, I think all this breaks down. However, perhaps we can talk about how and when we receive light emitted as $ t \to 0$, and interpret light "emitted at the big bang" in that way? Or maybe there is some kind of quantum uncertainty regarding the moment of the big bang, so that the event horizon was never (with no uncertainty) exactly $0$ distance from us? Then, at $t=0$, we have a non-trivially uncertain event horizon. In any case, talking about "light emitted at (or during, if you prefer) the big bang" sounds problematic. Answer: Here's a spacetime diagram to illustrate what's going on:

The future -->               E
                            /E\
                           / E \            E = Earth
The present -->           /  E  \
                         / /E\   \          / \ = light
                        / / E \   \
The past -->           / /  E  \   \        ~~~ = glowing plasma
                      / / /E\   \   \
                     / / /   \   \   \
                    / / /     \   \   \
CMBR emission time --> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Light travels diagonally at the speed of one ASCII character per ASCII line. At every time in Earth's history, there has been somewhere in the universe from which CMBR light traveling at the speed of light since it was emitted was just reaching Earth at that time. 
The universe was completely filled with glowing plasma. If you trace the path of a photon, or any object, back in time from Earth's current location, or any location, you will eventually hit the plasma, because it was everywhere.
{ "domain": "physics.stackexchange", "id": 61456, "tags": "cosmology, space-expansion, observable-universe" }
Concerning the fate of the Milky Way Galaxy
Question: The Andromeda Galaxy is the largest galaxy in the galaxy cluster[1] called the Local Group[2], and our Milky Way Galaxy as well as some satellite/dwarf galaxies are orbiting around each other within the Local Group. My question: isn't the Milky Way Galaxy supposed to orbit around the Andromeda Galaxy[3], so how come the predicted merger[4]? Who's slowing down, or does gravity work differently as things are scaled further up? Answer: The Milky Way does not orbit the Andromeda galaxy; they both move under the influence of all the members of the Local Group. Even if one were orbiting the other, the orbit need not be near-circular but could be a very eccentric (elongated) ellipse. The projected merger is because the tangential component of Andromeda's velocity with respect to the Milky Way is small compared to its radial component. That is, the Andromeda galaxy appears to be moving almost directly towards the Milky Way (which is what it says in the Wikipedia page you link to, but it is not too difficult to find primary sources using Google; here is an arXiv paper reporting a proper motion study of Andromeda and reporting such).
{ "domain": "astronomy.stackexchange", "id": 869, "tags": "gravity, milky-way, galaxy-cluster" }
Faraday Flashlight LED lights in both directions
Question: I have made a Faraday's flashlight with a 25mm ht x 25mm dia cylindrical neodymium magnet inside a 1 inch dia hollow cardboard pipe. The pipe is covered with 1000 turns of AWG 29 gauge wire. The ends of the wire are connected to an LED. The 2 ends of the pipe are sealed by tape. When I shake the pipe, the LED lights up due to Faraday's law. Since the direction of the current will reverse when the magnet moves from the right end to the left end vs the left end to the right end, shouldn't the LED light up only when the magnet moves in 1 direction? Why does the LED light up in both directions? Thanks Answer: The source of the voltage/current is a magnet moving through a coil of wire. As the magnet moves towards the coil the magnetic flux linked with the coil increases and a voltage is induced. The rate of change of magnetic flux linkage will increase as the magnet gets closer and closer to the coil, but eventually the rate of change of flux linkage will start to decrease and become zero when the magnet is approximately at the centre of the coil. At this point there will be no induced voltage. As the magnet passes the centre the induced voltage will reverse direction and eventually become zero when the magnet is a long way away from the coil. So the LED will be on for part of the time the magnet is passing through the coil. Reversing the direction of the magnet through the coil will result in a similar sequence of events, and again the LED will be on for part of the time the magnet is passing through the coil. The key to the operation of your flashlight is that the induced voltage is of the correct polarity to switch the LED on for part of the time the magnet is passing through the coil. A more sophisticated arrangement has a capacitor and a diode as part of the circuitry. A diode between the coil and a capacitor will enable the capacitor to be charged with the correct polarity so that when an LED is connected across the capacitor it emits light.
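The sign reversal described in the answer can be seen in a toy model (a sketch under my own assumptions: the flux linkage is modeled as a Gaussian of the magnet's position, and the EMF follows from Faraday's law, emf = -dPhi/dt):

```python
import math

def flux(x, width=1.0):
    # Toy flux linkage: peaks when the magnet (at position x) is
    # centred in the coil (x = 0) and falls off on both sides.
    return math.exp(-(x / width) ** 2)

def emf(x, v, h=1e-6):
    # Faraday's law: emf = -dPhi/dt = -(dPhi/dx) * v,
    # with the spatial derivative taken numerically.
    return -(flux(x + h) - flux(x - h)) / (2 * h) * v

# Approaching (x = -1) vs. receding (x = +1) at the same velocity:
# the EMF has opposite signs, so the polarity reverses mid-pass.
e_approach = emf(-1.0, 1.0)
e_recede = emf(+1.0, 1.0)
```

Whichever direction the magnet travels, one half of the pass produces the LED's forward polarity, which is why the LED flashes for both directions of shaking.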
{ "domain": "physics.stackexchange", "id": 37077, "tags": "electromagnetism, electromagnetic-induction" }
Turing Machine construction
Question: How should I go about building a Turing machine for the following language: $$L = \{ a^ib^j \in \{a,b\}^* \mid i \le j \le 2i \} $$ I know how to construct a Turing machine for $\{ a^nb^nc^n \mid n \in \mathbb N \}$ but don't really know where to start for the one above. Answer: How would you construct a TM for $\{a^i b^j \mid i \le j\}$? Well, the easiest solution is to keep removing letters on both sides until you run out of $a$s or $b$s. At this point you can look at what you are left with and decide whether $i \le j$ was initially true. Now, what if, instead of removing $a$s, you were moving them somewhere to keep for later use? This way you’d be able to test for $i \le j$ and still have all your $a$s available. By doing something similar with all those saved $a$s and the remaining $b$s you will be able to check the second condition, $j \le 2i$.
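While building and debugging the machine, it helps to have a direct decision procedure for $L$ to test against. This is not a Turing machine, just the membership condition, so treat it as a test oracle (a sketch):

```python
def in_L(s):
    # L = { a^i b^j : i <= j <= 2i }.  First check the shape a^i b^j,
    # then the counting condition the TM has to verify.
    i = 0
    while i < len(s) and s[i] == "a":
        i += 1
    if any(ch != "b" for ch in s[i:]):
        return False  # an 'a' after a 'b': wrong shape
    j = len(s) - i
    return i <= j <= 2 * i
```

The hinted strategy corresponds to crossing off one $b$ per $a$ in a first pass (checking $i \le j$) and at most one more $b$ per saved $a$ in a second pass (checking $j \le 2i$).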
{ "domain": "cs.stackexchange", "id": 4622, "tags": "formal-languages, turing-machines" }
Count total set bits
Question: I am solving Count total set bits: Find the sum of all bits from numbers 1 to N.

Input:
The first line of input contains an integer T denoting the number of test cases. The first line of each test case is N.

Output:
Print the sum of all bits.

Constraints:
1 ≤ T ≤ 100
1 ≤ N ≤ 1000

Example:
Input:
2
4
17
Output:
5
35

Explanation: An easy way to look at it is to consider the number, n = 4:
0 0 0 = 0
0 0 1 = 1
0 1 0 = 1
0 1 1 = 2
1 0 0 = 1
Therefore, the total number of bits is 5.

My approach:

/*package whatever //do not write package name here */
import java.io.InputStreamReader;
import java.io.BufferedReader;
import java.io.IOException;

class GFG {

    private static int noOfBits(int N) {
        int sum = 0;
        for (int i = 1; i <= N; i++) {
            if ((i & i-1) == 0) {
                sum += 1;
            } else {
                sum += numBits(i);
            }
        }
        return sum;
    }

    private static int numBits(int num) {
        int sum = 0;
        int rem;
        while (num != 0) {
            rem = num % 2;
            num /= 2;
            sum += rem;
        }
        return sum;
    }

    public static void main(String[] args) throws IOException {
        //code
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        String line = br.readLine();
        int T = Integer.parseInt(line);
        String line2;
        int N;
        for (int i = 0; i < T; i++) {
            line2 = br.readLine();
            N = Integer.parseInt(line2);
            System.out.println(noOfBits(N));
        }
    }
}

I have the following questions with regards to the above code:
How can I further improve my approach?
Is there a better way to solve this question?
Are there any grave code violations that I have committed?
Can space and time complexity be further improved?

Answer: Your way of indenting and placing braces is consistent, which is good. I prefer the K&R style, which is also recommended in the (historical) Java Code Conventions from Sun Microsystems, or the Google Style Guides:
Open brace “{” appears at the end of the same line as the declaration statement
Closing brace “}” starts a line by itself indented to match its corresponding opening statement, ... 
You have separated the I/O from the actual computation, which is again good, as it keeps the main method short and allows you to add unit tests more easily. Reading the input can be simplified slightly by using Scanner. Short (nondescriptive) variable names T and N are usually not recommended. In this case it might be acceptable, since those names correspond directly to the identifiers used in the programming challenge description. However, it is not immediately apparent what the method names noOfBits and numBits stand for, and what distinguishes them. A better choice could be totalSetBits (corresponding to the challenge description) and countBits, plus short explaining comments. I am not sure if the special treatment of powers of 2 in if ((i & i-1) == 0) is worth the additional code, as it applies only to few numbers in the range 1...N. In any case, it should be part of the numBits() method. The separate variable int rem in numBits() is not needed. Summarizing the suggestions so far, the code would look like this:

import java.io.IOException;
import java.util.Scanner;

class GFG {

    // Total count of all 1 bits in the binary representation of the
    // numbers 1 ... N.
    private static int totalSetBits(int N) {
        int sum = 0;
        for (int i = 1; i <= N; i++) {
            sum += countBits(i);
        }
        return sum;
    }

    // Number of 1 bits in the binary representation of n.
    private static int countBits(int n) {
        if ((n & n-1) == 0) {
            return 1; // n is a power of 2.
        }
        int count = 0;
        while (n != 0) {
            count += n % 2;
            n /= 2;
        }
        return count;
    }

    public static void main(String[] args) throws IOException {
        Scanner scanner = new Scanner(System.in);
        int T = scanner.nextInt();
        for (int i = 1; i <= T; i++) {
            int N = scanner.nextInt();
            int bits = totalSetBits(N);
            System.out.println(bits);
        }
    }
}

How can we make this faster? One approach would be to make countBits() faster, and you'll find various methods to count the number of set bits in an integer at Bit Twiddling Hacks. 
But perhaps we can compute totalSetBits(N) better than by adding countBits(n) for all numbers n from 1 to N? This is the real challenge here, and I don't want to deprive you of the satisfaction of figuring out the solution yourself, so here are some hints only. First have a look at some special values:

0 1                  --> totalSetBits(1) = 1
00 01 10 11          --> totalSetBits(3) = 2 * 2 = 4
000 001 ... 110 111  --> totalSetBits(7) = 3 * 4 = 12

Can you spot the general pattern? Then try to compute totalSetBits(N) for arbitrary N by using those “special values.” This should lead to a \$ O(\log N) \$ solution instead of the current \$ O( N) \$ solution. And of course – when not in an interview – you can “cheat:” Compute totalSetBits(N) for the first (e.g.) 40 values of N, and look up the resulting sequence in the On-Line Encyclopedia of Integer Sequences®. For many programming challenges, this leads to useful information and insights into the problem.
{ "domain": "codereview.stackexchange", "id": 30920, "tags": "java, beginner, interview-questions, complexity" }
Devising algorithm for an A.I. to solve the Wumpus game
Question: Background: I am currently coding an AI in C++ to solve a game called Hunt the Wumpus. You can try the game out here. My implementation of the game is based on a Graph template which I use to model the caves and the AI's "memory". Rules: The start position for the player, the Wumpus and the hazards (pits or bats) is randomized among the 20 caves. You can move from cave to cave freely, and each cave will warn you if it has dangerous neighbors. You can shoot one of your 5 arrows into a cave adjacent to yours. The arrow will fly randomly for up to three caves. If it comes in contact with you or the Wumpus, the arrow will kill it. When shooting an arrow, if it's a misfire, then the Wumpus wakes up and each turn it will move randomly. You die when one of the following happens: falling into a pit; walking into the Wumpus' cave (it devours you); losing all of your arrows; an arrow which you shot randomly flies back into your cave and kills you. AI Rules: The AI marks each cave it visits as safe. If it enters a cave which has a threatening neighbor, it marks the cave with TN (Threat Neighbor) and it marks each neighbor as a possible threat (PT). The AI has a "memory" which is a copy of the graph, but it is incomplete. The AI fills it in as it travels through the graph. Based on the information stored in said memory, the AI decides to move or shoot. Problem: I'm having trouble with the algorithm that I'm trying to implement "to hunt the Wumpus". My algorithm searches for the non-dangerous caves, each turn analysing the current memory and deducing which caves are optimal to move to. I think my algorithm cannot solve this problem, since it might get stuck. This is because I instructed it to back off when there is a TN flag, and the algorithm will not be able to deduce if there are safe caves remaining within reach. I read this paper [1], which explains an approach to solving the problem but does not describe the algorithm in detail. 
The section I'm trying to base my algorithm on is section 4.2 and following of [1]. The algorithm: Suppose $G=(\lbrace v_0,...,v_{19}\rbrace, E)$ is the graph we are working on and let $N_i = \lbrace u\in G: \lbrace u,v_i\rbrace\in E\rbrace$ represent the neighborhood of vertex $v_i$. Suppose $M$ is the graph which represents the A.I.'s memory. $M$ is constructed step by step; for each turn that passes, each vertex $m_i\in M$ stores the following data: the vertex number in $G$, indications (room is safe/dangerous, neighborhood has pit/bat/wumpus), and the number of times the A.I. has been here. For the first cave $v_i$, $m_0$ stores the information for this cave. Even if it is dangerous we have to move to a random neighbor in $N_i$. We store the room number and the indications it gives in memory for future reference and mark this room as safe. When we enter the next room $v_j$, if we didn't die we mark it as safe. We receive the indications from the neighbors, store them in $m_1$ and we decide what to do next. If there were no threats coming from the neighbors, we move once again to a random neighbor in $N_j$ except the previous one. If there was a threat flag, then we only store the information of the room and backtrack to the last threatless room we were in. If we find the Wumpus-Is-Neighbor (W) flag in vertex $v_k$ then we mark the neighbors $N_k$ as possible hosts for the wumpus. Because there are no 4-cycles in the graph, once we hit the W flag again in, say, vertex $v_l$, we will know for certain where the Wumpus is: when we find the W flag again we will search inside $N_k\cap N_l$ for the only vertex inside the intersection. Each turn when we move or shoot, the A.I. reads all of $M$ and adds flags, then we deduce information. Once the A.I. evaluates this, the A.I. decides to move or shoot. It will only shoot once we have found two instances of the W flag in two non-adjacent rooms. As an example: Suppose that the A.I. 
walks into $v_i$, $m_i$ corresponding to $v_i$ has a TN flag, and $v_j\in N_i$. Then $m_j$ corresponding to $v_j$ will have a PT flag. Now suppose the A.I. arrives at $v_k$, there is no TN flag in $m_k$, and $v_j\in N_k$. This means that the PT flag in $m_j$ is not true, and therefore $v_j$ is a safe room. A more detailed example: Step 1: The A.I. starts the game at $v_i$. It stores information in $m_0$: the $i$ index of $v_i$ and the flags from this room (either safe or non-safe). Then it moves randomly, as it cannot backtrack. Step 2: The A.I. walks into $u\in N_i$, once again storing the information in $m_1$. Suppose $u$ has a TN flag; then all $v\in N(u)$ have a PT flag except for $v_i$. The A.I. decides to backtrack to the previous cave (if it wasn't dangerous). Else, the A.I. has to choose to move randomly between $N(u)$. For the following steps the A.I. covers all of the area it can reach and it deduces that there are safe rooms which it hasn't gone to. Here I face one of the problems: I feel the A.I. isn't taking enough risks and therefore it might reach a stalemate where there are no more safe rooms to explore and it won't be able to deduce that more such rooms exist. Step $i$: Essentially the A.I. arrives at this room, records the information in a new vertex in $M$ and decides whether to explore this room's neighborhood or backtrack. Another problem I have is that I am not sure whether the A.I. will reach the Wumpus' room. Also, when the A.I. is at a room with bats, I'm not sure how to handle the $M$ graph, since what I'm doing is adding $m_i$ vertices one by one. I think that it shouldn't generate a problem, but I'm not sure. [1] Hunt the Wumpus: an Empirical Approach by Graeme Cole (2005)
That prevents you from getting stuck if you've explored all safe rooms. (There are other possible strategies, but this is a simple one that's probably reasonable and easy to program.) Rather than backtracking, I would suggest the following strategy: if the current room $r$ is not safe, pick a random safe room $r'$ that has an unexplored edge out of it, move from $r$ to $r'$, and then explore the unexplored edge out of $r'$. Since you've built up the graph in memory, you can easily find a path from $r$ to $r'$ (e.g., via BFS). In this way, you're not restricted to rooms $r'$ and paths that take the form of "backtracking", but you can go to anywhere known and reachable. In this way, if there is any safe place you can explore, the algorithm will explore that before trying anything unsafe. In this way, you won't get stuck. Bats should be straightforward to handle. Suppose the bat takes you to a room $r$ that you haven't previously been in. Then you add a new vertex for $r$ to the graph, that's disconnected from all other vertices. Those simple tweaks should take care of all of the problems/concerns you listed in the question.
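A sketch of the suggested strategy in Python (the naming is my own — `memory` as a dict of per-room records is an assumption, not the asker's C++ graph template): BFS through rooms already known to be safe, returning the first step toward the nearest safe room that still has an unexplored edge.

```python
from collections import deque

def next_step(memory, current):
    # memory: room -> {"safe": bool, "neighbors": set of known rooms,
    #                  "explored": set of neighbors already visited}.
    # Returns the neighbor to move to next, the current room itself if
    # it still has unexplored edges, or None when no safe frontier is
    # reachable (then a random risk must be taken, as in the answer).
    queue = deque([(current, None)])
    seen = {current}
    while queue:
        room, first = queue.popleft()
        info = memory[room]
        if info["safe"] and info["neighbors"] - info["explored"]:
            return room if first is None else first
        for nbr in info["neighbors"]:
            if nbr in seen or nbr not in memory or not memory[nbr]["safe"]:
                continue
            seen.add(nbr)
            queue.append((nbr, nbr if first is None else first))
    return None
```

Because the search only passes through safe rooms, the AI never risks a dangerous cave while any known-safe frontier remains reachable.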
{ "domain": "cs.stackexchange", "id": 9328, "tags": "algorithms, graphs, optimization" }
Graph 3-colorability is self-reducible
Question: I am interested in self-reducibility of the Graph 3-Colorability problem. Definition of the Graph 3-Colorability problem: Given an undirected graph $G$, does there exist a way to color the nodes red, green, and blue so that no adjacent nodes have the same color? Definition of self-reducibility: A language $L$ is self-reducible if an oracle Turing machine $T$ exists such that $L=L(T^L)$ and for any input $x$ of length $n$, $T^L(x)$ queries the oracle for words of length at most $n-1$. I would like to show in a strict and formal way that Graph 3-colorability is self-reducible. The proof of self-reducibility of SAT can be used as an example (self-reducibility of SAT). In my opinion, the general idea of a proof of self-reducibility of Graph 3-colorability differs from the proof of SAT self-reducibility in a few aspects. SAT has two choices for every literal (true or false) and Graph 3-colorability has three choices (namely, red, green, blue). Choices of SAT literals are independent of each other, while choices of colors in Graph 3-colorability are strictly dependent (any adjacent nodes must have different colors); this property could potentially help to make fewer iterations among all colors. The general idea of the proof: Let's denote by $c_{v_i}$ the color of the vertex $v_i$, which can take one of the following values (red, green, blue). Define graph $G'$ from a given graph $G$ by coloring an arbitrary vertex $v_0$: assign $c_{v_0}$ to 'red' and put the graph $G'$ with colored vertex $v_0$ on the input of the oracle. If the oracle answers 1, which means that the modified graph is still 3-colorable, save the current assignment and start a new iteration with a different vertex $v_1$ chosen arbitrarily; color vertex $v_1$ according to the colors of the adjacent vertices. If the oracle answers 0, which means the previous assignment has broken 3-colorability, pick a different color from the set of three colors, but still according to the colors of adjacent vertices. 
The previous proof is not mathematically robust; the question is how to improve it and make it more formal and mathematically strict. It looks like I need to more carefully distinguish the cases when the new vertex doesn't have any edges to already colored vertices and when the new vertex is adjacent to already colored vertices. In addition, I would like to prove that Graph 3-colorability is downward self-reducible. Definition of a downward self-reducible language: The language $A$ is said to be downward self-reducible if it is possible to determine in polynomial time if $x \in A$ using the results of shorter queries. The idea seems to be simple and intuitive: start by coloring an arbitrary vertex, and on each iteration add one more colored vertex and check via the oracle if the graph is still 3-colorable; if not, reverse the previous coloring and check another color. But how do I write the proof in a strict way and, more importantly, how do I find an appropriate encoding of a graph? In short, I would like to show that Graph 3-colorability is self-reducible and downward self-reducible in a strict and formal way. I would appreciate your thoughts. Update: downward self-reducibility. Downward self-reducibility applies to a decision problem, and its oracle answers the same decision problem with shorter input; at the end of the process of downward self-reduction we should have the right color assignments. Every 3-colorable graph $G$ with more than three vertices has two vertices $x,y$ with the same color. Indeed, there are only three colors and more than three vertices, so some number of non-adjacent vertices must have the same color. 
If we merge $x$ and $y$ with the same color, as a result we still have a 3-colorable graph, simply because, if the graph is 3-colorable, then there exists a correct assignment of all vertices adjacent to $x$ and $y$ consistent with $x,y$ sharing a color; so by merging $x,y$ we don't need to change the color of any vertex, we only add more edges between already correctly colored vertices (I know it's not the best explanation; I would appreciate it if someone could explain it better). On every iteration we take two non-adjacent vertices $x,y$ of graph $G$, merge $x$ and $y$, and get graph $G'$, which is our shorter input to the oracle. The oracle answers whether it's 3-colorable or not. Now the problem is that before setting $G'$ on the input of the oracle I should color the merged vertex and test the colorability of $G'$; if it's not 3-colorable, change the color. But how do I implement this correctly? I need the right encoding for it. Self-reducibility: First, we should check if a given graph $G$ is 3-colorable at all, so set it on the input of the oracle, and the oracle will answer whether it's 3-colorable; if yes, then start the process. Any two non-adjacent vertices can have the same color in a 3-colorable graph. The process of self-reducibility should run in iterations; I think we can start from a small subgraph $G'$ of the given graph $G$ and on every iteration add one more vertex from $G$ to $G'$. In parallel, we should maintain the assignment of already colored vertices. Unfortunately, I still don't get the idea completely. I would appreciate help and hints. Answer: As Vor mentions in his comment, your reduction doesn't work, since 3-colorability doesn't accept partial assignments of colors. The problem goes even deeper, since setting the color of a single vertex doesn't make any progress in determining whether the graph is 3-colorable: indeed, the graph is 3-colorable iff there is a 3-coloring in which vertex $v$ is assigned color $c$, for any $v,c$ you choose. 
Here is a hint on how to solve your exercise, second part. In any 3-coloring of a graph $G$ on more than three vertices, there are two vertices $x,y$ getting the same color (why?). If we merge $x$ and $y$, the resulting graph is still 3-colorable (why?). Try to use this idea to construct a downward self-reducing algorithm for 3-colorability. Edit: And here is a hint on how to solve the exercise, first part. Consider any two unconnected vertices $x,y$. If there is a coloring in which they get the same color then $G_{xy}$ is 3-colorable (why?), and a coloring of $G$ can be extracted from a coloring of $G_{xy}$ (how?). When will this process stop?
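The merging idea in the hint can be sketched in code. This is my own illustration, not the exercise's expected answer: a brute-force 3-coloring test stands in for the oracle, and the dict-of-sets graph representation is an assumption of the sketch.

```python
import itertools

def is_3_colorable(G):
    # Brute-force stand-in for the decision oracle.
    # G: dict mapping each vertex to the set of its neighbors.
    vs = list(G)
    for assign in itertools.product(range(3), repeat=len(vs)):
        c = dict(zip(vs, assign))
        if all(c[u] != c[v] for u in G for v in G[u]):
            return True
    return False

def merged(G, x, y):
    # The graph G_xy: identify non-adjacent x and y (keep label x).
    H = {v: {n for n in nbrs if n != y} for v, nbrs in G.items() if v != y}
    for n in G[y]:
        if n != x:
            H[x].add(n)
            H[n].add(x)
    return H

def color_classes(G, oracle):
    # Downward self-reduction: merge two non-adjacent vertices whenever
    # the oracle confirms the merged (smaller) graph stays 3-colorable.
    # A 3-colorable graph on more than 3 vertices always admits such a
    # merge (pigeonhole), so the process stops at a complete graph on
    # <= 3 vertices whose super-vertices are the color classes.
    if not oracle(G):
        return None
    G = {v: set(ns) for v, ns in G.items()}
    groups = {v: {v} for v in G}
    merged_any = True
    while merged_any:
        merged_any = False
        for x, y in itertools.combinations(list(G), 2):
            if y not in G[x] and oracle(merged(G, x, y)):
                G = merged(G, x, y)
                groups[x] |= groups.pop(y)
                merged_any = True
                break
    return list(groups.values())
```

Each merge strictly shrinks the graph, so every oracle query is on a shorter input, which is exactly the downward self-reducibility requirement.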
{ "domain": "cs.stackexchange", "id": 15040, "tags": "complexity-theory, reductions" }
Acceleration due to gravity of mass on a double spring
Question: Three balls of mass 10 kg, 20 kg and 10 kg are hanging by a massless string and are connected by massless springs as shown in the figure below. Initially the system is in equilibrium and all the objects are at rest. If the string at the top snaps suddenly, what is the acceleration of the topmost ball at that instant of time? [In the following, g denotes the acceleration due to gravity] (a) $0\quad$ (b) $2g\quad$ (c) $g\quad$ (d) $4g\quad$ (e) $3g$ My conceptual understanding is rather not good, although I can give a couple of attempts. Attempt: Answer: $g$, since the only external force acting is gravity. But I think the spring attached to two masses might exert force. Answer: $4g$, taking the bottom mass, $10g = -k(x_2-x_1)$ the mass in the middle, $20g = -kx_1+k(x_2-x_1)$ adding both gives me $30g = -kx_1$ at the topmost ball, $$T-10g-30g = -10g\\ T=0\quad g=4g$$ [$T$ tension in the string, $x_1$ displacement of the spring between the topmost mass and the mass in the middle, $x_2$ displacement of the spring between the mass in the middle and the bottom-most] Please clarify the concept that I might be mixing up and give the solution if my attempts are not correct. Answer: Take it step by step. The clincher is that the system is at equilibrium in the beginning. Hence all the forces on all the balls are balanced. Looking at the lowermost ball, the force exerted by the spring must be equal to the weight, i.e. 10g. Taking the second ball, the forces acting downwards are its weight and the force due to the lower spring, which is 10g as obtained from the previous paragraph (the tension developed depends on the lower ball only). So the net force acting downwards on this ball is 20g+10g=30g. This is balanced by the force due to the upper spring, which must be 30g as well. Now the net force acting downwards on the uppermost ball is 10g + the force due to the spring (30g) = 40g. This is balanced by the string and hence the body is in equilibrium. 
As a result, on cutting the string, the net force is 40g downwards and hence the acceleration at that instant is 4g.
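The force bookkeeping above can be checked with a few lines of arithmetic (a sketch; the numeric value of g is irrelevant since every force scales with it):

```python
g = 9.81  # any positive value works; all results are proportional to g
m_top, m_mid, m_bot = 10.0, 20.0, 10.0

# Equilibrium: each spring carries the weight of everything below it.
F_lower = m_bot * g              # spring between middle and bottom: 10g
F_upper = (m_mid + m_bot) * g    # spring between top and middle:    30g

# The instant the string snaps, the spring extensions (hence their
# forces) are unchanged; only the string tension vanishes.
a_top = (m_top * g + F_upper) / m_top            # weight + spring pull
a_mid = (m_mid * g + F_lower - F_upper) / m_mid  # still balanced
a_bot = (m_bot * g - F_lower) / m_bot            # still balanced
```

Only the top ball accelerates at that instant, at 4g; the other two balls are momentarily still in equilibrium because their spring forces have not yet changed.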
{ "domain": "physics.stackexchange", "id": 38539, "tags": "homework-and-exercises, newtonian-mechanics, acceleration, spring, string" }
Does a magnetized sphere contain energy?
Question: I believe this question is similar to "does a permanent magnet contain energy", which I understand to be no, but I just want to be sure. Say we have uniformly magnetized sphere with magnetization $M_0\hat{z}$ and radius $R$. I understand the resulting fields to be: $$\vec{H}_{in} = -\frac{M_0}3\hat{z} $$ $$\vec{B}_{in} = \frac{2\mu_0M_0}3\hat{z} $$ For r < R, and $$\vec{H}_{out} = \frac{M_0}3\frac{R^3}{r^3}[2\cos(\theta)\hat{r}+\sin(\theta)\hat{\theta}] $$ $$\vec{B}_{out} = \mu_0 \vec{H}_{out} $$ for r > R. So to obtain total energy, I need to solve the volume integral: $$U_M=\frac{1}2 \int{\vec{H} \cdot \vec{B}dV} $$ --EDIT: Here are my intermediate steps if anyone sees any obvious errors. $$U_{in} = \frac{1}2(\frac{4}3 \pi R^3)(-\frac{2 \mu_0 M_0^2}9)= -\frac{4}{27} \pi R^3 \mu_0 M_0^2$$ $$U_{out} = \frac{1}2 \int_0^{2\pi} { \int_0^{\pi} { \int_{R}^{\infty}{ \mu_0 (\frac{M_0}3)^2 (\frac{R}{r})^6 [3\cos^2(\theta)+1] r^2 \sin{\theta} dr d\theta d\phi} } }$$ $$ U_{out} = \frac{\pi \mu_0 M_0^2 R^6}{9} \int_0^{\pi} { \int_{R}^{\infty}{ \frac{1}{r^4} [3\cos^2(\theta)+1] \sin{\theta} dr d\theta} }$$ --END EDIT This solves as: $$U_{in} = -\frac{4}{27} \pi R^3 \mu_0 M_0^2$$ $$U_{out} = \frac{4}{27} \pi R^3 \mu_0 M_0^2$$ Meaning the total energy of the permanently magnetized system is zero. Is this just a specific case demonstrating that a permanent magnet has no energy? My confusion comes from trying to solve a problem with a magnetic material in a uniform magnetic field. Instead of solving the problem the traditional way, I wanted to find the energy stored in the uniform field, the energy stored in the magnetized sphere, and the "energy coupling" between these two bodies. Similar to a mutual inductance, if you will. But my argument quickly falls apart. Another way of phrasing my question: I'm confused why one can't define a mutual inductance term with a permanent magnet, a source of magnetic flux. Answer: Ah, I found the problem. 
The analog, a polarized sphere, is described here. I used the equation: $$ U_M = \frac{1}2 \int{ \vec{H} \cdot \vec{B} dV } $$ which only describes the energy produced by free currents. But here the energy is stored entirely in bound currents. So I must use the equation: $$ U_M = \frac{1}{2\mu_0} \int{ | \vec{B}|^2 dV } $$ which accounts for all currents, and does result in a positive net energy. EDIT: Stratton is actually careful to define the energy of a magnetic material in unambiguous terms (Electromagnetic Theory, sections 2.15-2.18). The energy of a magnetic body in a magnetostatic field is given by: $$ U_M = \frac{1}2 \int{ \vec{M} \cdot \vec{B} dV } $$ By this, the energy of just the magnetic material of the sphere is given by: $$U_M = \frac{3}2 V_p \mu_0 K H_0^2 $$ where $V_p$ is the volume of the particle, $H_0$ is the applied field, and $K$ is the Clausius-Mossotti factor, defined as: $$K=\frac{\mu_r-1}{\mu_r+2}$$
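The claimed cancellation of the two contributions can be verified numerically. This is my own sketch, not part of the original post: units are normalised to $\mu_0 = M_0 = R = 1$, and the radial cutoff and grid sizes are arbitrary choices.

```python
import math

# Midpoint-rule integral of (1/2) H.B over the exterior r > 1 (mu0 = M0 = R = 1).
def exterior_energy(density, r_max=60.0, nr=2000, nt=200):
    dr = (r_max - 1.0) / nr
    dt = math.pi / nt
    total = 0.0
    for i in range(nr):
        r = 1.0 + (i + 0.5) * dr
        for j in range(nt):
            th = (j + 0.5) * dt
            total += density(r, th) * r * r * math.sin(th) * dr * dt
    return 2.0 * math.pi * total  # the phi integral is trivial

def half_H_dot_B(r, th):
    # Outside the sphere B = mu0 H, so (1/2) H.B reduces to (1/2) |H|^2 here.
    H2 = (1.0 / 9.0) * r ** -6 * (3.0 * math.cos(th) ** 2 + 1.0)
    return 0.5 * H2

U_out = exterior_energy(half_H_dot_B)
U_in = 0.5 * (-1.0 / 3.0) * (2.0 / 3.0) * (4.0 * math.pi / 3.0)  # (1/2) H.B times volume

print(U_in, U_out, U_in + U_out)  # the two contributions cancel
```

The numeric result reproduces $U_{out} = \frac{4}{27}\pi$ (in these units) to within the truncation error of the cutoff.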
{ "domain": "physics.stackexchange", "id": 38286, "tags": "electromagnetism, magnetic-fields, inductance" }
Would Bessel beam laser propulsion create true constant acceleration?
Question: I've heard that Bessel beams don't diffract. If Bessel beams continuously bombarded a laser sail, would it keep the ship constantly accelerating from our frame of reference? Answer: No, the apparent acceleration would continually diminish as the object being propelled approached the speed of light.
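A small numerical sketch of the answer's point (my own toy model, with units $c = m = 1$ and an arbitrary constant thrust $F$): for a force parallel to the motion, the lab-frame acceleration is $dv/dt = F/\gamma^3$, so the coordinate acceleration collapses toward zero as $v \to c$ even though the beam never stops pushing.

```python
import math

# Euler-integrate dv/dt = F / gamma^3 for a constantly driven sail (c = m = 1).
F, dt, steps = 1.0, 1e-3, 20000
v = 0.0
accels = []
for _ in range(steps):
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    a = F / gamma ** 3   # lab-frame acceleration shrinks as v -> c
    accels.append(a)
    v += a * dt

print(v, accels[0], accels[-1])  # v stays below 1; the acceleration collapses
```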
{ "domain": "physics.stackexchange", "id": 82557, "tags": "rocket-science" }
Liver - Regeneration in Cirrhosis
Question: The liver is the most resilient of the human organs (on par with or next to skin). A very interesting experiment on liver regeneration is here. Even if two-thirds of the liver is removed, the remaining liver will regenerate to recover the full volume and function. Cirrhosis of the liver is a condition in which excessive fibrosis shatters the architecture of the liver. The blood vessels and the parenchymal cells of the liver get constricted into narrow spaces, causing portal hypertension and a fall in the Liver Function Test parameters (the albumin level falls, clotting factors fall leading to spontaneous bleeding, the bilirubin level rises causing jaundice, etc.). It can be either congenital or acquired. Cirrhosis is considered an end-stage liver disease. In the end stages of cirrhosis the only treatment option available is liver transplantation. Question: Why is partial hepatectomy (removal of part of the liver) not a treatment option for end-stage liver disease (cirrhosis of the acquired variety)? Answer: The answer lies in the question. Liver cirrhosis constrains hepatocytes into small fibrous spaces, limiting regeneration, hence the nodular pattern. However, fibrous degeneration occurs in specific patterns. It can be located mostly around portal regions (centrolobular pattern) or diffusely through the whole lobule. So basically, it is all good that hepatocytes can regenerate, but that is not the limiting factor. It is the structural impairment of the liver lobules and drainage system that will impair regeneration. Moreover, most cirrhotic patients could not tolerate the halving of their residual hepatic function, not to mention the extensive surgery required. It would kill them swiftly.
{ "domain": "biology.stackexchange", "id": 3868, "tags": "physiology, pathology, liver" }
Proofs using the regular pumping lemma
Question: I have two questions. I consider the following language $$L_1= \{ w\in \{0,1\}^* \mid \not \exists u\in \{0,1\}^* \colon w= uu^R\}.$$ In other words, $w$ is not an even-length palindrome. I proved that this language is NOT regular by proving that its complement is not regular. My question is how to prove it using the pumping lemma, without going through the complement. Let $$L_2=\{w\in\{0,1\}^* \mid \text{$w$ has the same number of 101 substrings and 010 substrings}\}. $$ I proved that this language is not regular by using equivalence classes. How can I prove it using the pumping lemma? Thanks a lot for the edit :) Answer: Not all non-regular languages fail the test of the pumping lemma. Wikipedia has an annoyingly complex example of a non-regular language which can be pumped. So even if a language is non-regular, we may not be able to prove this fact using the pumping lemma. But it turns out we can use the pumping lemma to prove your first language is not regular. I'm not sure about the second. Claim: $L_1$ is non-regular. Proof: By the pumping lemma. Let $p$ be the pumping length. (I'm going to use the alphabet $\{a,b\}$ rather than $\{0,1\}$.) If $p = 1$, then take the string $abbaa$, which is in $L_1$, and pump it to $aabbaa$, which is not in $L_1$, so $L_1$ would not be regular. If $p > 1$, then take the string $a^pbba^{N}$. (We'll figure out what we want $N$ to be later.) Then consider any division of the string into $xyz$ where $x=a^{p-k}$, $y=a^k$, and $z=bba^{N}$. Now let's pump this string $i$ times. (We'll figure out what we want $i$ to be later.) We get the string $xy^iz$, which gives $a^{p-k}a^{ik}bba^{N} = a^{p-k+ik}bba^{N}$. Now let's back up. First, we picked $N$. Then, some choice of $k$ was made. Then, we picked $i$. We want to figure out what $N$ to pick so that, for any choice of $k \in [1,p]$, we can choose an $i$ that makes this string a palindrome by making the number of $a$s on the left equal the number on the right.
(It will always have even length.) So we want to always get that $p-k+ik=N$. If we play around with the math, we find that we should pick $N=p+p!$ and pick $i=p!/k+1$. So to recap, we picked $N=p+p!$ and chose the string $a^pbba^N$. Then some choice of $k$ was made so that the string was made up of $a^{p-k}ybba^N$ where $y=a^k$. Then we picked $i=p!/k + 1$. We pumped the string to get $a^{p-k}y^{i}bba^N = a^{p-k}a^{ik}bba^N = a^{p-k+ik}bba^N$. But we know that $p-k+ik = p-k+(\frac{p!}{k}+1)k = p-k+p!+k=p+p!$. And $N=p+p!$. So the number of $a$s on both ends is the same, so the string is an even-length palindrome, so it's not in $L_1$, so $L_1$ is not regular. $\square$
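The construction can also be checked mechanically for small $p$. This is my own empirical check, not part of the original proof: pump $s = a^p b b a^N$ with $N = p + p!$, $y = a^k$, and $i = p!/k + 1$, and confirm the result is an even-length palindrome, i.e. NOT in $L_1$.

```python
import math

# Build the pumped string x y^i z for the choices made in the proof above.
def pumped(p, k):
    N = p + math.factorial(p)
    i = math.factorial(p) // k + 1   # k <= p, so k divides p!
    # x = a^(p-k), y^i = a^(i*k), z = b b a^N
    return "a" * (p - k) + "a" * (i * k) + "bb" + "a" * N

for p in range(2, 7):
    for k in range(1, p + 1):
        w = pumped(p, k)
        assert len(w) % 2 == 0 and w == w[::-1]  # even-length palindrome

print("pumped strings are even-length palindromes for p = 2..6")
```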
{ "domain": "cs.stackexchange", "id": 772, "tags": "formal-languages, regular-languages, pumping-lemma" }
Given 4 vertices representing a quadrilateral, divide it into N parts
Question: I am attempting to, given two dimensions (width and height) which represent a quadrilateral, partition that quad in to N parts where each part is as proportionally similar to the other as possible. For example, imagine a sheet of paper. It consists of 4 points A, B, C, D. Now consider that the sheet of paper has the dimensions 800 x 800 and the points: A: {0, 0} B: {0, 800} C: {800, 800} D: {800, 0} Plotting that will give you 4 points, or 3 lines with a line plot. Add an additional point E: {0, 0} to close the cell. To my surprise, I've managed to do this programatically, for N number of cells. How can I improve this code to make it more readable, pythonic, and as performant as possible? from math import floor, ceil import matplotlib.pyplot as plt class QuadPartitioner: @staticmethod def get_factors(number): ''' Takes a number and returns a list of factors :param number: The number for which to find the factors :return: a list of factors for the given number ''' facts = [] for i in range(1, number + 1): if number % i == 0: facts.append(i) return facts @staticmethod def get_partitions(N, quad_width, quad_height): ''' Given a width and height, partition the area into N parts :param N: The number of partitions to generate :param quad_width: The width of the quadrilateral :param quad_height: The height of the quadrilateral :return: a list of a list of cells where each cell is defined as a list of 5 verticies ''' # We reverse only because my brain feels more comfortable looking at a grid in this way factors = list(reversed(QuadPartitioner.get_factors(N))) # We need to find the middle of the factors so that we get cells # with as close to equal width and heights as possible factor_count = len(factors) # If even number of factors, we can partition both horizontally and vertically. 
# If not even, we can only partition on the X axis if factor_count % 2 == 0: split = int(factor_count/2) factors = factors[split-1:split+1] else: factors = [] split = ceil(factor_count/2) factors.append(split) factors.append(split) # The width and height of an individual cell cell_width = quad_width / factors[0] cell_height = quad_height / factors[1] number_of_cells_in_a_row = factors[0] rows = factors[1] row_of_cells = [] # We build just a single row of cells # then for each additional row, we just duplicate this row and offset the cells for n in range(0, number_of_cells_in_a_row): cell_points = [] for i in range(0, 5): cell_y = 0 cell_x = n * cell_width if i == 2 or i == 3: cell_x = n * cell_width + cell_width if i == 1 or i == 2: cell_y = cell_height cell_points.append((cell_x, cell_y)) row_of_cells.append(cell_points) rows_of_cells = [row_of_cells] # With that 1 row of cells constructed, we can simply duplicate it and offset it # by the height of a cell multiplied by the row number for index in range(1, rows): new_row_of_cells = [[ (point[0],point[1]+cell_height*index) for point in square] for square in row_of_cells] rows_of_cells.append(new_row_of_cells) return rows_of_cells if __name__ == "__main__": QP = QuadPartitioner() partitions = QP.get_partitions(56, 1980,1080) for row_of_cells in partitions: for cell in row_of_cells: x, y = zip(*cell) plt.plot(x, y, marker='o') plt.show() Output: Answer: First, there is no need for this to be a class at all. Your class has only two static methods, so they might as well be stand-alone functions. In this regard Python is different from e.g. Java, where everything is supposed to be a class. Your get_factors function can be sped-up significantly by recognizing that if k is a factor of n, then so is l = n / k. This also means you can stop looking for factors once you reach \$\sqrt{n}\$, because if it is a square number, this will be the largest factor not yet checked (and otherwise it is an upper bound). 
I also used a set instead of a list here so adding a factor multiple times does not matter (only relevant for square numbers, again). from math import sqrt def get_factors(n): ''' Takes a number and returns a list of factors :param number: The number for which to find the factors :return: a list of factors for the given number ''' factors = set() for i in range(1, int(sqrt(n)) + 1): if n % i == 0: factors.add(i) factors.add(n // i) return list(sorted(factors)) # sorted might be unnecessary As said, this is significantly faster than your implementation, although this only starts to be relevant for about \$n > 10\$. (Note the log scale on both axis.) As for your main function: First figure out how many rows and columns you will have. For this I would choose the factor that is closest to \$\sqrt{n}\$: k = min(factors, key=lambda x: abs(sqrt(n) - x)) rows, cols = sorted([k, n //k]) # have more columns than rows Then you can use numpy.arange to get the x- and y-coordinates of the grid: x = np.arange(0, quad_width + 1, quad_width / cols) y = np.arange(0, quad_height + 1, quad_height / rows) From this you now just have to construct the cells: def get_cells(x, y): for x1, x2 in zip(x, x[1:]): for y1, y2 in zip(y, y[1:]): yield [x1, x2, x2, x1, x1], [y1, y1, y2, y2, y1] Putting all of this together: import numpy as np def get_partitions(n, width, height): factors = get_factors(n) k = min(factors, key=lambda x: abs(sqrt(n) - x)) rows, cols = sorted([k, n //k]) # have more columns than rows x = np.arange(0, width + 1, width / cols) y = np.arange(0, height + 1, height / rows) yield from get_cells(x, y) if __name__ == "__main__": for cell in get_partitions(56, 1980, 1080): plt.plot(*cell, marker='o') plt.show()
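As a quick cross-check of the $\sqrt{n}$-bounded factoring idea (my own test harness, not from the original answer; `math.isqrt` sidesteps floating-point edge cases at perfect squares):

```python
from math import isqrt

# Stop scanning at isqrt(n); each divisor i below the root pairs with n // i.
def get_factors_fast(n):
    factors = set()
    for i in range(1, isqrt(n) + 1):
        if n % i == 0:
            factors.add(i)
            factors.add(n // i)
    return sorted(factors)

# The obvious O(n) scan, used as the reference implementation.
def get_factors_naive(n):
    return [i for i in range(1, n + 1) if n % i == 0]

for n in range(1, 500):
    assert get_factors_fast(n) == get_factors_naive(n)

print(get_factors_fast(56))  # [1, 2, 4, 7, 8, 14, 28, 56]
```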
{ "domain": "codereview.stackexchange", "id": 32944, "tags": "python, python-3.x, factors" }
Must the Lagrangian always be known for the Euler-Lagrange equations to be of any use?
Question: When studying classical mechanics using the Euler-Lagrange equations for the first time, my initial impression was that the Lagrangian was something that needed to be determined through integration of the Euler-Lagrange equations. Of course, now I know it's something that's a given for a mechanical system, and we integrate it via the Euler-Lagrange equations to get the evolution of the coordinates of the mechanical system as a function of time. But are there alternative uses for the Euler-Lagrange equations where the Lagrangian isn't known beforehand? To all the teachers out there, for god's sake, explain right at the start that the Lagrangian is a function known beforehand for a system. Answer: I'm not completely certain what OP is asking. The Euler-Lagrange equations are by definition derived from an action principle. If the action doesn't exist, it does not make sense to talk about Euler-Lagrange$^1$ equations. Instead what OP might want to ask is the following interesting question: If one is given a set of equations of motion for some physical system, could they (secretly) be Euler-Lagrange equations for some action? In other words, does there exist an action principle for the system? This is already discussed in several posts on Phys.SE, for instance here. -- $^1$ There exists a notion of Lagrange's equation (without Euler!), $$\tag{1} \frac{d}{dt} \left(\frac{\partial T}{\partial \dot{q}^j} \right)-\frac{\partial T}{\partial q^j}= Q_j,$$ which doesn't require an action principle. (The generalized force $Q_j$ might not have a generalized potential $U$.) See e.g. Goldstein, Classical Mechanics, Chap. 1. Equation (1) is not called an Euler-Lagrange equation unless an action principle exists. Equation (1) may be derived from D'Alembert's principle, which in turn may be derived from Newton's laws under additional assumptions, such as, e.g., no sliding friction. See also this Phys.SE post.
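For a concrete sense of "the Lagrangian is given beforehand", here is a small numerical illustration (my own values $m = 2$, $k = 8$): for the known Lagrangian $L = \frac{1}{2} m \dot{x}^2 - \frac{1}{2} k x^2$, the Euler-Lagrange equation $\frac{d}{dt}\frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x} = m\ddot{x} + kx$ should vanish along the known solution $x(t) = \cos(\omega t)$ with $\omega = \sqrt{k/m}$.

```python
import math

m, k = 2.0, 8.0
w = math.sqrt(k / m)
x = lambda t: math.cos(w * t)   # known trajectory of the harmonic oscillator

h = 1e-4
residuals = []
for t in [0.1 * j for j in range(1, 50)]:
    # central-difference second derivative of the trajectory
    xddot = (x(t + h) - 2 * x(t) + x(t - h)) / h ** 2
    residuals.append(m * xddot + k * x(t))  # Euler-Lagrange residual

print(max(abs(r) for r in residuals))  # essentially zero along the trajectory
```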
{ "domain": "physics.stackexchange", "id": 6871, "tags": "classical-mechanics, lagrangian-formalism" }
Extracting queries from controller to model
Question: I had a complex query in the controller like:

    def outgoing_tasks
      @tasks = current_user.
        assigned_tasks.
        uncompleted.
        includes(executor: :profile).
        order("deadline DESC").
        paginate(page: params[:page], per_page: Task.pagination_per_page)
    end

and since I have some other collections in that controller I refactored to this:

    def outgoing_tasks
      @tasks = current_user.assigned_tasks.assigned_default(page: params[:page])
    end

with these new methods in the model:

    def self.ordered
      order("deadline DESC")
    end

    def self.paginated(page: 1)
      paginate(page: page, per_page: self.pagination_per_page)
    end

    def self.assigned_default(page: 1)
      uncompleted.includes(executor: :profile).ordered.paginated(page: page)
    end

Should I do something differently or is this okay?

Answer: This is a good place to use scopes. http://edgeguides.rubyonrails.org/active_record_querying.html#scopes

Scopes are just syntactic sugar for class methods that return relations. In this way they are always chainable.

    scope :ordered, -> { order(deadline: 'DESC') }
    scope :paginated, ->(page: 1) { paginate(page: page, per_page: pagination_per_page) }
    scope :assigned_default, ->(page: 1) { uncompleted.includes(executor: :profile).ordered.paginated(page: page) }
{ "domain": "codereview.stackexchange", "id": 20523, "tags": "ruby, ruby-on-rails, controller, active-record" }
Photons as propagators of an electro-magnetic field
Question: What does it mean when somebody, let's say a random person on the crosswalk waiting for the sign to go green, says that a "photon is a propagator of the Electromagnetic field"? I don't know if it's a dumb question, I just stumbled upon that sentence and wondered what it really meant. Answer: Most physicists wouldn’t say that. Instead, they’d say that photons are quanta of the electromagnetic field. There’s only one EM field, throughout the universe, and all photons are quanta of this one field. You can crudely think of quanta as particle-like excitations of the field. A “propagator” in physics is a mathematical function of two points that gives the probability amplitude for a particle to go from A to B. So there is a “photon propagator” function, but saying that the photons themselves are propagators is nonstandard terminology. Sometimes photons are said to “mediate” or “carry” the electromagnetic interaction between charged particles. In classical electrodynamics, this is what the electromagnetic field does. In quantum electrodynamics, this is what the field’s quanta (“real” and “virtual” photons) do. (Or at least they do in Feynman’s perturbative approach to QED.)
{ "domain": "physics.stackexchange", "id": 61138, "tags": "electromagnetism, photons, propagator" }
Packaging with autotools for Ubuntu
Question: I've developed a C application for Ubuntu and created the installer. The file structure is ├── autom4te.cache ├── build-aux ├── contrib ├── debian │ └── source ├── doc ├── gnulib │ ├── lib │ └── m4 ├── man ├── po ├── src └── tests configure.ac dnl Process this file with autoconf to produce a configure script. dnl dnl This file is free software; as a special exception the author gives dnl unlimited permission to copy and/or distribute it, with or without dnl modifications, as long as this notice is preserved. dnl dnl This program is distributed in the hope that it will be useful, but dnl WITHOUT ANY WARRANTY, to the extent permitted by law; without even the dnl implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. AC_INIT([GNU Hello], [2.7], [bug-hello@gnu.org]) dnl Must come before AM_INIT_AUTOMAKE. AC_CONFIG_AUX_DIR([build-aux]) AM_INIT_AUTOMAKE([readme-alpha dist-xz]) # Minimum Autoconf version required. AC_PREREQ(2.60) # Where to generate output; srcdir location. AC_CONFIG_HEADERS([config.h:config.in])dnl Keep filename to 8.3 for MS-DOS. AC_CONFIG_SRCDIR([src/main.c]) dnl Checks for programs. # We need a C compiler. AC_PROG_CC # Since we use gnulib: gl_EARLY must be called as soon as possible after # the C compiler is checked. The others could be later, but we just # keep everything together. gl_EARLY gl_INIT # GNU help2man creates man pages from --help output; in many cases, this # is sufficient, and obviates the need to maintain man pages separately. # However, this means invoking executables, which we generally cannot do # when cross-compiling, so we test to avoid that (the variable # "cross_compiling" is set by AC_PROG_CC). if test $cross_compiling = no; then AM_MISSING_PROG(HELP2MAN, help2man) else HELP2MAN=: fi # i18n support from GNU gettext. 
AM_GNU_GETTEXT_VERSION([0.18.1]) AM_GNU_GETTEXT([external]) AC_CONFIG_FILES([Makefile contrib/Makefile doc/Makefile gnulib/lib/Makefile man/Makefile po/Makefile.in src/Makefile tests/Makefile]) AC_OUTPUT Top-level Makefile.am # Process this file with automake to produce Makefile.in (in this, # and all subdirectories). # Makefile for the top-level directory of GNU hello. # # Copyright 1997, 1998, 2005, 2006, 2007, 2008, 2009, 2010, 2011 # Free Software Foundation, Inc. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 3, or (at your option) # any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see <http://www.gnu.org/licenses/>. # Find gnulib headers. ACLOCAL_AMFLAGS = -I gnulib/m4 # Additional files to distribute. EXTRA_DIST = ChangeLog.O gnulib/m4/gnulib-cache.m4 # Subdirectories to descend into. SUBDIRS = contrib gnulib/lib po src doc man # `make diff' produces a diff against the previous version, given both # .tar.gz's in the current directory. This should only be done when an # official release is made (and only if you care to provide diffs). 
# hello_pre = 2.6 diff: diffcheck @(echo "To apply these patches, cd to the main directory of the package"; \ echo "and then use \`patch -p1 <hello-XXX.diff'."; \ echo "Before building the program, run \`autogen.sh'."; ) > \ $(PACKAGE)-$(hello_pre)-$(VERSION).diff -diff -rc2P --exclude=configure --exclude=config.h.in --exclude=*.info \ --exclude=*.gmo --exclude=aclocal.m4 \ $(PACKAGE)-$(hello_pre) $(PACKAGE)-$(VERSION) >> \ $(PACKAGE)-$(hello_pre)-$(VERSION).diff gzip --force --best $(PACKAGE)-$(hello_pre)-$(VERSION).diff diffcheck: for d in $(PACKAGE)-$(hello_pre) $(PACKAGE)-$(VERSION) ; do \ if test ! -d $$d ; then \ if test -r $$d.tar.gz ; then \ tar -zxf $$d.tar.gz ; \ else \ echo subdir $$d does not exist. ; \ exit 1 ; \ fi ; \ fi ; \ done # Verify that all source files using _() are listed in po/POTFILES.in. # The idea is to run this before making pretests, as well as official # releases, so that translators will be sure to have all the messages. # (From coreutils.) po-check: if test -f po/POTFILES.in; then \ grep -E -v '^(#|$$)' po/POTFILES.in \ | grep -v '^src/false\.c$$' | sort > $@-1; \ files=; \ for file in $$($(CVS_LIST_EXCEPT)) `find * -name '*.[ch]'`; do \ case $$file in \ djgpp/* | man/*) continue;; \ esac; \ case $$file in \ *.[ch]) \ base=`expr " $$file" : ' \(.*\)\..'`; \ { test -f $$base.l || test -f $$base.y; } && continue;; \ esac; \ files="$$files $$file"; \ done; \ grep -E -l '\b(N?_|gettext *)\([^)"]*("|$$)' $$files \ | sort -u > $@-2; \ diff -u $@-1 $@-2 || exit 1; \ rm -f $@-1 $@-2; \ fi # Example of updating the online web pages for the documentation # with the gendocs.sh script; see # http://www.gnu.org/prep/maintain/html_node/Invoking-gendocs_002esh.html # gnulib = $(HOME)/gnu/src/gnulib gendocs = $(gnulib)/build-aux/gendocs.sh gendocs_templates = $(gnulib)/doc gendocs_envvars = GENDOCS_TEMPLATE_DIR=$(gendocs_templates) # manual = hello manual_title = "Hello, GNU World" email = bug-hello@gnu.org gendocs_args = --email $(email) $(manual) 
$(manual_title) # www_target = $(HOME)/gnu/www/hello/manual # doctemp = doc/wwwtemp wwwdoc: rm -rf $(doctemp) && mkdir $(doctemp) cd $(doctemp) \ && ln -s ../*.texi . \ && env $(gendocs_envvars) $(gendocs) $(gendocs_args) cp -arf $(doctemp)/manual/. $(www_target) ls -ltu $(www_target)/html_node | tail # cvs rm -f obsolete files # followed by cvs add of new files and cvs commit. src level Makefile.am # Makefile.am for hello/src. # # Copyright 1996, 1997, 2001, 2005, 2006, 2007, 2008 Free Software # Foundation, Inc. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 3, or (at your option) # any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see <http://www.gnu.org/licenses/>. bin_PROGRAMS = opsh opsh_SOURCES = edit.c eval.c exec.c expr.c funcs.c histrap.c jobs.c lalloc.c lex.c main.c misc.c shf.c syn.c tree.c var.c opsh_LDADD = @LIBINTL@ ../gnulib/lib/libgnu.a -lbsd -L/lib localedir = $(datadir)/locale AM_CPPFLAGS = -I$(top_srcdir)/gnulib/lib -I$(top_builddir)/gnulib/lib DEFS = -DLOCALEDIR=\"$(localedir)\" @DEFS@ Answer: Looks good. Ship it. Your first two files appear to be all or mostly boilerplate, with maybe the set of subdirs being project specific. It seems likely the link editor would already look in /lib, so -L/lib may be redundant. And -lbsd seems like a blast from the past. I imagine that dh_make / debuild work cleanly with your package.
{ "domain": "codereview.stackexchange", "id": 26909, "tags": "makefile" }
Perl 6: Grammar issues (simple defines)
Question: My goal is to parse defines in Objective-C code:

    #define wowFont [UIFont fontWithName:(DEV)?@"one":@"two" size:(DEV? 10: 12)]
    #define wowColor [UIColor colorWithRed:(DEV?240/255.0f:255.0f) green:((DEV)? 230.0f / 255.0f) blue: 10.0f]

I try to write a grammar for it:

    grammar Defines::Grammar {
        token TOP { \s*<define_expression>\n }
        token define_expression {
            ^^ #define $<keyword>=\w+ \s+ $<value> = <general_expression>
        }
        token general_expression {
            # here, determine parenthes
            <simple_expression>|[(]<general_expression>[)]|<value_expression>
        }
        token simple_expression {
            # here, determine ?: behaviour
            \s*<general_expression>\s*[?]\s*<general_expression>\s*[:]\s*<general_expression>\s*
        }
        token value_expression {
            # stop here, need to determine ?: operator here
            <numeric_expression>|<string_expression>|<function_invocation_expression>|<bareword_expression>
        }
        token function_invocation_expression {
            [\[] (<class>|<bareword_expression>) \s+ <function_body>\s* [\]]
        }
        token function_body {
            <bareword_expression>|<bareword_expression> \s* [:] <general_expression>
        }
        token bareword_expression {
            [\w_]+
        }
        token string_expression {
            \@["][^"]["]
        }
        token numeric_expression {
            # we could +-*/ numeric expressions, also we can put lead sigil [-].
            # put parentheses and it could be a <number>
            <numeric_expression> [*/+-] <numeric_expression> | [-+]<numberic_expression> | [(] <numeric_expression> [)] | <number>
        }
        token number {
            # number expression here, ok
            # not needed legal sign here, numeric_expression already have it
            \d+ [\.] \d+
        }
    }

Could anybody help to improve it?

P.S.: As part of an answer (I hope somebody will find these documents helpful) I place here a link to a Perl 6 book (I found the Grammars section good enough for a beginning): Perl 6 Book

Answer: It doesn't look like this code is currently working.

AFAIK Objective-C uses the normal C preprocessor. The preprocessor works on a token level, and does not know about stuff like method calls. It is entirely irrelevant that [ starts a method call or an array subscript; for the preprocessor it's just some token. So as a rough sketch:

    token TOP {
        ^^ \s* '#define' \s* $<name>=<bareword> \s* $<value>=(<token>+) $$
    }

Notice that my # is inside a string and is thus not interpreted as a comment!

There are however some special cases to consider:

Arguments to the macros:

    #define MIN(a, b) (((a) < (b)) ? a : b)

Here, the a and b tokens have special meaning to the preprocessor.

Line continuations:

    #define MIN(a, b) \
        (((a) < (b)) ? a : b)

Token concatenation with ##:

    // "MANGLE(foo, bar)" → foobar
    #define MANGLE(prefix, name) prefix##name

The next problem is that \s is any Unicode whitespace, including newlines. It would be recommendable to restrict that to horizontal whitespace. It would make sense to override the ws rule:

    token ws { <!ww> \h* }

With support for line continuations:

    token ws { <!ww> [ \h | \\\n ]* }

We can now use sigspace (significant whitespace) to simplify our tokens as rules, e.g.:

    rule TOP {
        ^^ '#define' $<name>=<bareword> $<value>=(<token>+) $$
    }

What is a token? Identifier, number, character, symbol, or string. You attempt to match a string with \@["][^"]["], but this fails for various reasons:

In Perl 6, the […] denotes a non-capturing group, but not a character class. These now look like <[…]>. Especially, a negated character class looks like <-[…]>.

If that were a Perl 5 regex, you'd only be allowing a single letter inside the string, because the [^"] charclass is not being repeated.

If we also want to handle escapes inside the string, a string token might be matched by

    token string { \@\" [ <-[\"\\]> | \\. ]* \" }

Similar errors also occur in other tokens.
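For comparison, the escaped-string token at the end maps almost one-to-one onto a Python regular expression (my own sketch; real Objective-C string literals have more escape forms than this):

```python
import re

# Raku:   \@ \" [ <-[ " \\ ]> | \\ . ]* \"
# Python: @"(?:[^"\\]|\\.)*"  -- either a non-quote/non-backslash, or an escape pair
objc_string = re.compile(r'@"(?:[^"\\]|\\.)*"')

assert objc_string.fullmatch('@"one"')
assert objc_string.fullmatch(r'@"say \"hi\""')    # escaped quotes are allowed
assert objc_string.fullmatch('@""')               # empty literal
assert objc_string.fullmatch('@"no closing') is None

print([m.group() for m in objc_string.finditer('(DEV)?@"one":@"two"')])  # ['@"one"', '@"two"']
```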
{ "domain": "codereview.stackexchange", "id": 7169, "tags": "parsing, perl, grammar" }
What are the polarizations of an entangled photon pair?
Question: My understanding of polarization of light is that a photon can be horizontally polarized, or vertically polarized, or some angle in between (eg 45 degrees from horizontal) — leaving aside circularly polarized for now. This means basically that the electromagnetic wave oscillates in just this direction. If you put a polarizing filter parallel to the direction of polarization, there is a 100% chance the photon goes through. If it’s perpendicular there is a 0% chance it goes through. If it’s somewhere in between there’s some other probability, eg for a 45 degree angle it’s 50% chance to go through. My understanding with entangled photons is that it’s a pair of photons whose polarization is related to each other somehow. My question is - what is that “somehow”? Is the entangled pair polarized at 90 degrees to each other (such that if you sum their polarizations you get 0)? Answer: The answer is that "it depends on the setup". For example, in the entangled photon pair resulting from SPDC, where energy and momentum are conserved, the entangled photons have a "Type II polarization correlation" -- which means "their polarizations are perpendicular". (source).
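A toy amplitude calculation of what a perpendicular ("Type II") correlation implies for coincidence counts. This is my own sketch, with the singlet-like state $(\lvert HV\rangle - \lvert VH\rangle)/\sqrt{2}$ as an illustrative assumption: the probability that both photons pass polarizers at angles $\alpha$ and $\beta$ works out to $\frac{1}{2}\sin^2(\alpha - \beta)$, zero for parallel polarizers and maximal for perpendicular ones.

```python
import math

def coincidence(alpha, beta):
    # |alpha> = cos(a)|H> + sin(a)|V>, similarly for beta;
    # amplitude <alpha beta | psi> for psi = (|HV> - |VH>) / sqrt(2)
    amp = (math.cos(alpha) * math.sin(beta)
           - math.sin(alpha) * math.cos(beta)) / math.sqrt(2)
    return amp * amp

assert abs(coincidence(0.3, 0.3)) < 1e-12                    # parallel: never both
assert abs(coincidence(0.0, math.pi / 2) - 0.5) < 1e-12      # perpendicular: maximal
for a, b in [(0.1, 0.7), (1.0, 2.2)]:
    assert abs(coincidence(a, b) - 0.5 * math.sin(a - b) ** 2) < 1e-12

print("coincidence probability follows (1/2) sin^2(alpha - beta)")
```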
{ "domain": "physics.stackexchange", "id": 91739, "tags": "quantum-mechanics, photons, quantum-entanglement, polarization" }
sensors lastMeasurementTime
Question: I've configured a robot with 2 laser scanners and an IMU; all the sensors have a 10 Hz update rate. Now, if I look at the sensor data I notice the messages have different time stamps: e.g., using gztopic echo, the IMU time stamps are 0.1, 0.2 and so on, the first laser produces data at 0.101, 0.201, 0.301 and so on, and the second laser at 0.102, 0.202, 0.302. Notice that my simulation step is 1 ms, so the differences between sensors are exactly 0.001. Is that normal? Why does this happen? I was expecting all the sensors to be acquired at exactly the same time since they have the same update rate. Originally posted by SImone on Gazebo Answers with karma: 33 on 2013-03-07 Post score: 0 Answer: Sensor and physics run in different threads. It's possible for physics to take a step between an IMU and Laser update. Originally posted by nkoenig with karma: 7676 on 2013-03-07 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 3096, "tags": "gazebo" }
URDF model of ultrasonic sensor
Question: Hi guys, I am brand new to URDF/xacro and I am trying to figure out the correct way to model an ultrasonic sensor in URDF format. From what I know, the model would have to discretize the data, so I'm thinking I want to create an arc composed of n rays in order to model this. However, I'm not sure how to implement this, or even if this is the best way to go about it. If you have any ideas, please let me know. Thanks, Max Originally posted by mgoldman on ROS Answers with karma: 1 on 2012-12-06 Post score: 0 Answer: Hi. Perhaps this tutorial can help you? I think your main issue will be to find the right plugin for the sensor's controller. In addition, you can try to look for a robot URDF model that contains a similar sensor. Originally posted by ChengXiang with karma: 201 on 2012-12-08 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 12008, "tags": "ros, urdf, beginner" }
Unclear proof step in Feder and Greene's 1988 paper showing NP-Hardness of approximating k-center problem within a factor of 1.82
Question: I was reading the paper "Optimal Algorithms for Approximate Clustering", Feder and Greene [1988] (https://dl.acm.org/doi/10.1145/62212.62255). Specifically, I was trying to look at the $1.82$ approximation hardness proof for central L2 clustering (this is the standard k-center problem in Euclidean $\mathcal{R}^2$ space with the normal metric - we need to cluster a pointset into $k$ clusters such that the maximum distance from the centroid of a cluster to a point in the cluster is minimized). The proof is considerably shorter and simpler than Stuart Mentzer's construction independently showing the same result (https://www.academia.edu/23251714/Approximability_of_Metric_Clustering_Problems). The relevant parts of the paper are Section 2, Theorem 2.1 and Figure 1. At the start of Section 2, it is mentioned that An instance of vertex cover for planar graphs of degree at most $3$ can be embedded in the plane, by replacing all edges with odd length paths, so that edges become segments of length $1$ [see Fig. 1]. The midpoints of these edges then form an instance of central clustering which has a $k$-clustering with cluster size $1$ if and only if the embedded graph has a vertex cover with $k$ nodes. We can use this construction to show that approximate clustering is NP-Complete. I don't quite understand why the stated claim in the given form is true. There are two graphs which we can discuss here. First is the planar graph with max vertex degree $ \leq 3$, let's call this graph $G$ (this is an arbitrary instance of the vertex cover problem in planar graphs with max vertex degree $\leq 3$, which has been shown to be NP-Complete by Garey and Johnson in 1977 as mentioned in references). Second, we take the graph $G$, obtain its planar embedding, and ensure that the edge length is odd (by stretching edges, I presume). Then, we split up each edge into segments of length 1 and convert the edges into paths (by adding nodes at lengths of $1$).
Thus, we have transformed $G$ into a new graph $G^{'}$ (which also has an embedding on the plane). Now, the claim in the paragraph can be interpreted in two ways: Claim Interpretation 1: The midpoints of these edges then form an instance of central clustering which has a $k$-clustering with cluster size $1$ if and only if $G$ has a vertex cover with $k$ nodes. Claim Interpretation 2: The midpoints of these edges then form an instance of central clustering which has a $k$-clustering with cluster size $1$ if and only if $G^{'}$ has a vertex cover with $k$ nodes. As far as I understand, Claim 1 is incorrect while Claim 2 is correct. However, I do not understand how just Claim 2 being correct is enough to show NP-Hardness of approximation. It seems to me that the paper assumes that Claim 1 is correct (in Theorem 2.1). Can someone point out what I might be missing? Answer: It just so happened that we used a similar reduction as part of our proof for NP-hardness of regret-minimizing set, so I'm pretty familiar with this. We have the following claim: Let $V$ and $V'$ be the vertex sets of $G$ and $G'$, then $G'$ has a vertex cover of size $k$ if and only if $G$ has a vertex cover of size $k-\frac{1}{2}(|V'|-|V|)$. This is because: Given a vertex cover $C\subseteq V$ for $G$, one can construct a vertex cover for $G'$ of size $|C|+\frac{1}{2}(|V'|-|V|)$: On each edge in $G$, starting from an endpoint in $C$, alternately select the padding vertices. Consider a vertex cover $C'\subseteq V'$ for $G'$. On each edge $e=(u,v)$ in $G$, assume there are $p_e$ (which is an even number) padding vertices in $G'$. At least $\frac{1}{2}p_e$ of them have to be in $C'$, and if $u\notin C'$ and $v\notin C'$ then at least $\frac{1}{2}p_e+1$ of them have to be in $C'$. Therefore $$|C'|\geq |V\cap C'|+\frac{1}{2}\sum p_e+\#\{e=(u,v)\mid u,v\notin C'\}.$$ But you can take the set $V\cap C'$, and for each edge $e=(u,v)$ in $G$ such that $u,v\notin C'$, just add $u$ to this set.
Then it forms a vertex cover for $G$, which has size at most $|C'|-\frac{1}{2}\sum p_e=|C'|-\frac{1}{2}(|V'|-|V|)$. Combining this claim with your Claim 2 shows the NP-hardness of clustering, and I think hardness of approximation follows from the rest of the proof in that paper.
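Not from the paper, but the counting claim above is easy to sanity-check by brute force on a tiny instance — here a triangle (planar, max degree $\leq 3$) with every edge replaced by a path of length $3$, i.e. two padding vertices per edge. The helper is a naive exponential search, which is fine at this size:

```python
from itertools import combinations

def min_vertex_cover_size(vertices, edges):
    # brute force: smallest subset of vertices touching every edge
    for k in range(len(vertices) + 1):
        for cand in combinations(vertices, k):
            s = set(cand)
            if all(u in s or v in s for (u, v) in edges):
                return k

# G: a triangle
V = [0, 1, 2]
E = [(0, 1), (1, 2), (2, 0)]

# G': subdivide each edge into a path of odd length 3,
# adding 2 padding vertices per edge
Vp, Ep, nxt = list(V), [], len(V)
for (u, v) in E:
    a, b = nxt, nxt + 1
    nxt += 2
    Vp += [a, b]
    Ep += [(u, a), (a, b), (b, v)]

k = min_vertex_cover_size(V, E)     # 2 for a triangle
kp = min_vertex_cover_size(Vp, Ep)
# the claim: |min VC(G')| = |min VC(G)| + (|V'| - |V|) / 2
assert kp == k + (len(Vp) - len(V)) // 2
print(k, kp)  # 2 5
```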
{ "domain": "cstheory.stackexchange", "id": 5598, "tags": "cc.complexity-theory, reductions, computational-geometry" }
Is using unsupervised learning to setup supervised classification reasonable?
Question: I've got a biological dataset describing genes. The overall idea is that there are thousands of these genes to sort through, so if ML can rank them I can then know which should be going into the lab for functional research first. Currently, I make labels for supervised classification of these genes based on their known biology (so for example some genes interact with drugs related to a disease so I label them as 'most likely to cause the disease' and this goes down until I have a final 4th label of 'unlikely to cause the disease'). The way I make these labels seems impossible to not be biased, since I'm making all the decisions, so I'm wondering if I can compare my decisions with seeing how an unsupervised model would group the data (e.g. I've got 4 labels but if the model finds 5 groups then it shows how far off I am potentially?). Would it even also be possible to use unsupervised learning to create the labels by itself or would this too be unreliable as you can't know why it's grouping certain genes together? Or would doing this step alone actually make the supervised step redundant anyway? Answer: Is using unsupervised learning to setup supervised classification reasonable? Absolutely. This is a common strategy in ML. As you said yourself, using information coming from the data itself has the benefit of being less biased. Would it even also be possible to use unsupervised learning to create the labels? Technically yes. Though, some clustering techniques require you to specify the number of clusters, which isn't helpful. As you said, if you can cluster data points in a satisfactory manner, you don't need supervised learning anymore. Also, indeed, if your scenario requires you to have an understanding of what differentiates the clusters, you may not be lucky depending on which clusters come out. They are not always interpretable. What I would suggest would be to turn your classification problem into a regression problem. 
1.0 could be most likely to cause the disease, 0.0, least likely. This way, you don't have to worry about how many labels you need in the first place.
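To make the comparison idea concrete: one simple way to measure how well hand-made labels agree with an unsupervised grouping — regardless of how many clusters come out or how they are numbered — is the Rand index over pairs of points. A pure-Python sketch with made-up labels (the gene labels here are invented for illustration):

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Fraction of point pairs on which two labelings agree:
    both put the pair in the same group, or both split it."""
    pairs = list(combinations(range(len(labels_a)), 2))
    agree = sum(
        (labels_a[i] == labels_a[j]) == (labels_b[i] == labels_b[j])
        for i, j in pairs
    )
    return agree / len(pairs)

# hypothetical example: 8 genes, 4 hand-made classes vs. cluster ids from
# some unsupervised run; cluster numbering is arbitrary, which is why we
# compare pairs rather than raw label values
hand_labels = [0, 0, 1, 1, 2, 2, 3, 3]
cluster_ids = [5, 5, 9, 9, 7, 7, 7, 1]
print(rand_index(hand_labels, cluster_ids))  # ≈ 0.89 for this toy data
```

A score near 1.0 means the clustering largely reproduces your labels; a low score flags where your labeling and the data-driven grouping disagree and may be worth a closer look.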
{ "domain": "datascience.stackexchange", "id": 7552, "tags": "machine-learning, unsupervised-learning, supervised-learning" }
Confusion regarding plot of linear convolution vs fast convolution via FFT
Question: [Example 5.23 of Digital Signal Processing Using MATLAB, Third Edition, by Vinay K. Ingle and John G. Proakis] I am not getting the same plot/results as the book, although I am using almost the same code. The issue with the book's code is that I get an error for the NI variable: MATLAB says NI is undefined (I have red-underlined NI in the snapshot), so I used NL in my code with NL=N*L, but I am not getting the same output as the book. My Code clc;clear all;close all; conv_time = zeros(1,150); fft_time = zeros(1,150); % for L = 1:150 tc = 0; tf=0; N = 2*L-1; nu = ceil(log10(N*L)/log10(2)); N = 2^nu; for I=1:100 h = randn(1,L); x = rand(1,L); t0 = clock; y1 = conv(h,x); t1=etime(clock,t0); tc = tc+t1; t0 = clock; y2 = ifft(fft(h,N).*fft(x,N)); t2=etime(clock,t0); tf = tf+t2; end % conv_time(L)=tc/100; fft_time(L)=tf/100; end % n = 1:150; subplot(1,1,1); plot(n(25:150),conv_time(25:150),n(25:150),fft_time(25:150)) My Output Answer: The variable NI in the book is just a typo, it should be N instead (not N*L as in your code). Apart from that, remember that the book was written about $25$ years ago, and the code was run on a $33$ MHz $486$ PC. So in order to see some effects on today's computers, you should crank up the value of L. I've modified the code a bit (see below). Now the figure illustrates the expected result. L = 200:500:10000; K = length(L); conv_time = zeros(1,K); fft_time = zeros(1,K); Nav = 10; for k = 1:K tc = 0; tf=0; Lk = L(k); N = 2*Lk-1; nu = ceil(log10(N)/log10(2)); N = 2^nu; for I=1:Nav h = randn(1,Lk); x = rand(1,Lk); t0 = clock; y1 = conv(h,x); t1=etime(clock,t0); tc = tc+t1; t0 = clock; y2 = ifft(fft(h,N).*fft(x,N)); t2=etime(clock,t0); tf = tf+t2; end conv_time(k)=tc/Nav; fft_time(k)=tf/Nav; end plot(L,conv_time,L,fft_time)
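For readers without MATLAB, the correctness of fast convolution — zero-pad both signals to $N = 2^\nu \geq 2L-1$, multiply the transforms, inverse-transform — can be illustrated language-agnostically. Below is a pure-Python sketch (a naive recursive radix-2 FFT, written for clarity rather than speed; all names are mine):

```python
import cmath

def fft(x):
    # recursive radix-2 Cooley-Tukey; len(x) must be a power of two
    n = len(x)
    if n == 1:
        return x[:]
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0] * n
    for k in range(n // 2):
        w = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + w
        out[k + n // 2] = even[k] - w
    return out

def ifft(X):
    # inverse via the conjugation trick
    n = len(X)
    y = fft([z.conjugate() for z in X])
    return [z.conjugate() / n for z in y]

def conv_direct(h, x):
    # textbook O(L^2) linear convolution
    y = [0.0] * (len(h) + len(x) - 1)
    for i, hv in enumerate(h):
        for j, xv in enumerate(x):
            y[i + j] += hv * xv
    return y

def conv_fft(h, x):
    # zero-pad to the next power of two >= len(h)+len(x)-1,
    # exactly like N = 2^nu in the MATLAB code above
    n = len(h) + len(x) - 1
    N = 1
    while N < n:
        N *= 2
    H = fft(h + [0.0] * (N - len(h)))
    X = fft(x + [0.0] * (N - len(x)))
    Y = [a * b for a, b in zip(H, X)]
    return [y.real for y in ifft(Y)][:n]

print(conv_fft([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # ≈ [4, 13, 28, 27, 18]
```

Because $N \geq 2L-1$, the circular convolution computed by the FFT product equals the linear convolution, which is the whole point of the padding step.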
{ "domain": "dsp.stackexchange", "id": 9008, "tags": "matlab, fft, convolution, fast-convolution" }
Min/max height of B-tree
Question: I have a question asking for the minimum and maximum height $h$ of a B-Tree with 1000 elements under the following conditions: each block can store 1 to 4 records, the number of children of internal nodes is between 3 and 5, and the root has between 3 and 5 children. The solution is given as $4\leq h \leq 7$. How is this reached? Answer: In the worst case you will store 1 record per node, so you will need 1000 nodes. In the best case you will store 4 records per node, so you only need 1000/4 = 250 nodes. In the worst case you will have 3 children per node, so your tree will only grow by a factor of 3 per level. So we can say that $3^h \geq 1000$, where $h$ is the height. So $h=\lceil\log_3 1000\rceil=7$. In the best case you will have 5 children per node, so your tree will have height $h=\lceil\log_5 250\rceil=4$.
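The same computation, as a small sketch:

```python
import math

records = 1000

# max height: 1 record per node -> 1000 nodes, minimum fan-out 3
nodes_worst = records            # 1000
h_max = math.ceil(math.log(nodes_worst, 3))

# min height: 4 records per node -> 250 nodes, maximum fan-out 5
nodes_best = records // 4        # 250
h_min = math.ceil(math.log(nodes_best, 5))

print(h_min, h_max)  # 4 7
```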
{ "domain": "cs.stackexchange", "id": 3445, "tags": "data-structures, trees" }
Python caching generator
Question: Is this the correct way to cache a generator in Python, without iterating over it right away (for example using list() over it)? @property def results(self): def caching_iterator(it, r): for t in it: yield t r.append(t) if self._results: # _results is [] initially return self._results return caching_iterator(self.run(), self._results) This is a property of a class, as you can see. Basically, the idea is that the first time the property is accessed, a generator is returned. So the results are actually computed as the need arises. All the other subsequent times, the results are already inside a list, so they are returned straight away. Are there simpler ways to achieve the same result? I like mine quite a bit, but I have one argument against it. The second argument of caching_iterator is a mutable type, a list. Since that object gets modified inside the function, this is potentially dangerous. However, I don't see how that could happen. The first time _results is empty and caching_iterator does its job. But all the other times, the if test passes and _results is directly returned. Answer: Memoizing a generator is a good idea to save having to rerun it. There are a few ways to do it - one is to handle it the way you're handling it, by returning the cached result if it exists, otherwise returning and caching the new result. The first way is to use the tee() function from itertools, as follows: from itertools import tee def run(): for i in range(1000): yield i sequence, memoized_sequence = tee(run()) This returns a 32 byte object: <itertools.tee object at 0x23a564> <itertools.tee object at 0x23a58c> which you can then iterate over as you normally would: for x in memoized_sequence: print x Another way to do it, if the generator consistently returns small sets of data, is to just use list().
This is especially true if, as aforementioned, the generator returns small sets of data and that data is not dependent on an action (i.e., the data is static) and that data is not kept in memory for a long time (such as in a child that is killed (and its memory freed) once it's served its purpose). The way to do so is: def run(): for i in range(1000): yield i memoized_sequence = list(run()) However, this set of 1000 integers will have a size of 4.2KB in memory. So it's not ideal, nor memory efficient, but it's an option. Now, the way that you did it, as follows: _results = [] def run(): for i in range(1000): yield i def results(): def caching_iterator(it, r): for t in it: yield t r.append(t) if _results: # _results is [] initially return _results return caching_iterator(run(), _results) Would result in a 36 byte object: <generator object caching_iterator at 0x23f97c> I would suggest using the system provided tee function over the custom memoization technique, so your code would look like from itertools import tee def results(): _, memoized_sequence = tee(self.run()) return memoized_sequence Doing so shaves 4 bytes from the returned iterator object, as well as all the variable assignments and the added function, and compacts your code. EDIT Seems that tee() does not memoize as I thought - it just returns an iterator. However, list() does so, though statically: import random from itertools import tee def run(): for i in range(5): x = random.randint(0, 100) * i print "calculating" yield x _, memoized = tee(run()) for x in memoized: print x l = list(run()) print l print l Results in: calculating 0 calculating 92 calculating 94 calculating 165 calculating 100 calculating calculating calculating calculating calculating [0, 43, 106, 222, 8] [0, 43, 106, 222, 8]
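For completeness, one way to get real lazy memoization — which, per the edit above, tee() does not provide — is a small wrapper that replays cached items before pulling fresh ones. This is a sketch with names of my own choosing; it works for repeated sequential passes, though interleaved iteration over the same instance would need extra care:

```python
class CachedIterable:
    """Wrap an iterable; the source is consumed lazily and at most once."""

    def __init__(self, iterable):
        self._source = iter(iterable)
        self._cache = []

    def __iter__(self):
        # replay what we already have, then pull new items and remember them
        for item in self._cache:
            yield item
        for item in self._source:
            self._cache.append(item)
            yield item

squares = CachedIterable(n * n for n in range(5))
print(list(squares))  # first pass pulls from the generator
print(list(squares))  # second pass replays the cache
```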
{ "domain": "codereview.stackexchange", "id": 9078, "tags": "python, cache, iterator, memoization" }
What animal has the longest juvenile period?
Question: I just heard the following complaint from a comedienne. Humans are the only animal that is completely useless for the first twenty five years of life. Obviously this is just a joke but it is true that primates and especially humans do have profoundly long juvenile periods compared with most animals. It's got me wondering whether humans really do spend the most time in sexual immaturity. Unfortunately I have found this question hard to search because Google thinks I'm asking about the longest gestation period (which seems to be a much more popular question). To be clear, I am not asking about the longest gestation period. Answer: As you indicate in your question, the average age of sexual maturity is probably the best way to approach this, since immaturity is usually how juveniles are defined. Age of puberty is also different in boys and girls (the same goes for many animals), and has also decreased in the 21st century. However, as an historical average for humans, 15 years is probably fair, even though there is a lot of variance. That is high compared to most animals, but there are some that have similar or higher ages of maturation, e.g.: Cicadas in the genus Magicicada, which can have a 17 year life cycle Elephants reach sexual maturity at about an age of 8-15 years, but usually don't start to breed until they are at least 18-20 years (see e.g. elephantsforever.co.za and Association of Zoos and Aquariums). Males mature and start reproducing later than females, and in practice it is mostly older bulls that reproduce. The Nile crocodile, which reaches sexual maturity at an age of 12-16 years (largely dependent on body size though). Some species of tortoise reach maturity at ages 13-16 years (e.g. Gopherus sp, see Germano, 1994), which, again, is largely due to body size and growth rate. Captive bred individuals can mature more quickly. As you can see, there is a difference between sexual maturity and reproductive age.
To actually be able to reproduce, especially in some male mammals, you often need experience and size, which means that reproduction is delayed in practice. Whether this period from sexual maturity (with respect to producing mature eggs/sperm) to reproductive age should be defined at least partially as "adolescence" will affect how humans compare to other animals. In my examples, I have focused on sexual maturity (which usually has less variation between individuals). For other examples, you might want to look at insects other than cicadas that have delayed larval stages. This goes for many wood-living beetles, where development times can be very variable and under poor conditions can take up to 20-30 years. This is usually not the norm though. When searching for other examples, use the terms "age of maturation", "reproductive age", "sexual maturity", etc., to focus on sexual maturation and not gestation.
{ "domain": "biology.stackexchange", "id": 11204, "tags": "zoology, reproduction, development, sexual-reproduction, life-history" }
A few questions about cars
Question: When I start the car, why do I hear the engine working when I'm not pressing the gas pedal? How exactly do the torque and power of the gears work? Torque = Radius * Force Power = Torque * Speed Do the different gears have different torque? They should, because they have different radii, right? I'm a bit confused on how torque and power relate to the gears and what happens when we switch gears. How do automatic transmission cars detect what torque to put into the wheels, and how do they apply it? How do electric cars detect what torque to put into the wheels, and how do they apply it? Answer: The noise you hear is the engine running at idle speed, to provide the minimum energy for lubrication, charging the battery, the air conditioning, the water pump, etc. In newer cars, idling has been eliminated for the duration of intermittent stops to save fuel and pollute less; the computer will restart the engine the moment you release the brakes. The function of the gear box, manual or automatic, is to extend the range of the engine's torque, because a gasoline engine's normal RPM range is not spread widely enough to cover all the torque demands of the car. The gears' torque has to do with their arrangement in the gear box and, as you said, with the ratio of their RPM to that of the engine crankshaft. A gear turning at half the speed provides twice the torque. Automatic transmissions used to be controlled mechanically by the engine RPM, carburetor vacuum, and rear axle RPM, which relates to the speed of the car. But modern transmissions, past the 90s, are controlled by a computer with many sensors, such as the air and engine temperatures, road condition, humidity, altitude, etc. Electric cars, Tesla namely, don't have a transmission. They have all the torque range needed because they have a wide range of RPM, up to 20k RPM. They have a computer which converts the direct current of the battery to alternating 3-phase current with variable frequency (read: variable RPM) via an inverter.
Completely controlled by the master computer, which in many cases even overrides the driver to correct his or her errors.
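The gear/torque trade-off can be sketched numerically — illustrative numbers only, for an ideal gearbox that ignores friction losses:

```python
# illustrative numbers, not from any real car
engine_torque = 150.0    # N*m at the crankshaft
engine_speed = 3000.0    # RPM

def through_gear(torque, speed, ratio):
    """A gear ratio trades speed for torque; power is (ideally) conserved."""
    return torque * ratio, speed / ratio

for ratio in (3.5, 2.0, 1.0, 0.8):  # 1st gear ... overdrive
    t, s = through_gear(engine_torque, engine_speed, ratio)
    # power (proportional to torque * speed) is the same in every gear
    assert abs(t * s - engine_torque * engine_speed) < 1e-6
    print(f"ratio {ratio}: torque {t:.0f} N*m at {s:.0f} RPM")
```

This is why a low gear (large ratio) gives the wheels lots of torque at low speed, while a high gear gives less torque at high speed, with the engine delivering roughly the same power throughout.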
{ "domain": "engineering.stackexchange", "id": 2913, "tags": "mechanical-engineering, gears, torque, car" }
Partial Measurement of a Bell State: How to project a Bell state onto its new "eigenbasis"?
Question: In Alain Aspect's talk "Bell’s Theorem: The Naive View of an Experimentalist", Aspect explains the strong correlations in an EPR experiment where both polarisers are in the same orientation. I have difficulties with the actual mathematical formulation of the projection onto the partially measured "eigenspace". This derivation can be found in section 2.3 of the aforementioned paper. Prerequisites Consider two photons, entangled in the following Bell state: $$ \left|\psi\right> = \frac{1}{\sqrt{2}}\left( \left|x\ x\right> + \left|y\ y\right> \right) $$ Here, $\left|x\ x\right>$ denotes the Kronecker product $\left|x\right>_1\otimes \left|x\right>_2$. Each photon $i$ passes a polariser that is rotated by the angle $\theta_i$. After passing the polariser, the photons are detected in either of the two states $\left|\pm\right>_i$: $$ \left|\pm\right>_i = \cos\left(\theta\right)\left|x\right>_i +\sin\left(\theta\right)\left|y\right>_i $$ Partial measurement of the Bell state Let's consider only photon 1 and measure it. Let's say the result is $\left|+\right>_1$. This outcome has probability $\frac{1}{2}$. Alain Aspect now continues that the new state $\left|\psi'\right>$ after the measurement is obtained by projection of the initial state vector $\left|\psi\right>$ onto the eigenspace associated to the result $+$. However, this step is missing a detailed derivation and I failed to derive the proper projection onto the new basis. My approach is as follows: Let $\hat{P}$ be the projection operator with \begin{align} \hat{P} &= \left|+\right>_1 \left<+\right|_1 \otimes \mathbb{1}_2 \\ &=\left(\cos^2\theta_1 \left|x\right>_1\left<x\right|_1+ \cos\theta_1\sin\theta_1 \left|x\right>_1\left<y\right|_1+ \sin\theta_1\cos\theta_1 \left|y\right>_1\left<x\right|_1+ \sin^2\theta_1 \left|y\right>_1 \left<y\right|_1\right) \otimes \mathbb{1}_2. 
\end{align} or in matrix notation where $\left|x\right> = (1\ 0)^T, \left|y\right> = (0\ 1)^T$ \begin{align} \hat{P} &= \left( \begin{matrix} \cos^2\theta_1 & \cos\theta_1\sin\theta_1 \\ \cos\theta_1\sin\theta_1 & \sin^2\theta_1 \end{matrix} \right) \otimes \mathbb{1}\\ &= \left( \begin{matrix} \cos^2\theta_1 & 0 & \cos\theta_1\sin\theta_1 & 0 \\ 0 & \cos^2\theta_1 & 0 & \cos\theta_1\sin\theta_1 \\ \cos\theta_1\sin\theta_1 & 0 & \sin^2\theta_1 & 0 \\ 0 & \cos\theta_1\sin\theta_1 & 0 & \sin^2\theta_1 \end{matrix} \right). \end{align} In matrix notation, the Bell state is given by $$ \left|\psi\right> = \frac{1}{\sqrt{2}}\left[ \left( \begin{matrix} 1 \\ 0 \\ 0 \\ 0 \end{matrix} \right) + \left( \begin{matrix} 0 \\ 0 \\ 0 \\ 1 \end{matrix} \right) \right]. $$ Now the new state $\left|\psi'\right>$ is given by \begin{align} \left|\psi'\right> &= \hat{P}\left|\psi\right> \\ &= \frac{1}{\sqrt{2}} \left( \cos^2\theta_1\left|x\ x\right> + \cos\theta_1\sin\theta_1\left|x\ y\right> + \cos\theta_1\sin\theta_1\left|y\ x\right> + \sin^2\theta_1\left|y\ y\right> \right) \\ &\equiv \frac{1}{\sqrt{2}} \left|+\ +\right> \equiv \frac{1}{\sqrt{2}} \left|+\right> \otimes \left|+\right> \end{align} This result appears to be close to that of Alain Aspect. However, my approach brings up the factor $\frac{1}{\sqrt{2}}$ in the final state. This would also mean that the final state is not normalised ($\left<\psi'|\psi'\right> = \frac{1}{2}$). This so happens to be exactly the probability of measuring $\left|+\ +\right>$ in the first place: $\left|\left<+\ +|\psi\right>\right|^2 = \frac{1}{2}$. My questions are Is my formulation of the projection operator $\hat{P}$ correct? Why is the final state not normalised? Answer: When you project a state onto some subspace, you will always get something with a different norm unless that state is already in the subspace, so something like a factor of $1/\sqrt{2}$ is hardly surprising.
If the projection actually corresponds to something physical like a measurement, then the post-measurement state is just the normalized version of the projection (as per the usual projection measurement postulate of QM!), which you either effect by hand or state (as part of the postulate) that the post measurement state is $$ |\psi_{\textrm{post-measurement}}\rangle=\frac{\hat{P}|\psi\rangle}{\sqrt{\langle\psi|\hat{P}|\psi\rangle}}\,. $$ Following through with this calculation will result in exactly the state $\lvert+\,+\rangle$. Otherwise, it appears to me that OP's math is correct!
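This project-then-normalize recipe is easy to check numerically. A small pure-Python sketch, assuming both polarisers at the same arbitrary angle ($\theta_1=\theta_2=0.3$ rad) as in the setup above:

```python
import math

theta = 0.3  # polariser angle in radians; both set the same, as in the EPR setup
c, s = math.cos(theta), math.sin(theta)

def kron(a, b):
    # Kronecker product of two 2-component vectors
    return [x * y for x in a for y in b]

def mat_vec(m, v):
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in m]

# |psi> = (|xx> + |yy>)/sqrt(2) in the basis {|xx>, |xy>, |yx>, |yy>}
psi = [1 / math.sqrt(2), 0.0, 0.0, 1 / math.sqrt(2)]

# P = |+><+| (x) 1, built element-wise from 2x2 blocks:
# P[i][j] = (|+><+|)[i//2][j//2] * Identity[i%2][j%2]
plus_proj = [[c * c, c * s], [s * c, s * s]]
P = [
    [plus_proj[i // 2][j // 2] * (1.0 if i % 2 == j % 2 else 0.0)
     for j in range(4)]
    for i in range(4)
]

psi_p = mat_vec(P, psi)
norm_sq = sum(x * x for x in psi_p)
print(norm_sq)  # ≈ 0.5: the probability of the + outcome

# normalising, per the measurement postulate, gives exactly |+ +>
normalised = [x / math.sqrt(norm_sq) for x in psi_p]
plus_plus = kron([c, s], [c, s])
assert all(abs(a - b) < 1e-12 for a, b in zip(normalised, plus_plus))
```

The unnormalised projection carries exactly the $1/\sqrt{2}$ factor from the question, and dividing it out reproduces $\lvert+\,+\rangle$.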
{ "domain": "physics.stackexchange", "id": 98235, "tags": "quantum-mechanics, operators, bells-inequality" }
Is a 4 dimensional spherical universe possible with flat curvature?
Question: I'm trying to understand this snippet from Wikipedia, in particular the section I've emphasized: The curvature of the universe places constraints on the topology. If the spatial geometry is spherical, i.e., possess positive curvature, the topology is compact. For a flat (zero curvature) or a hyperbolic (negative curvature) spatial geometry, the topology can be either compact or infinite.[14] Many textbooks erroneously state that a flat universe implies an infinite universe; however, the correct statement is that a flat universe that is also simply connected implies an infinite universe.[14] For example, Euclidean space is flat, simply connected, and infinite, but the torus is flat, multiply connected, finite, and compact. https://en.wikipedia.org/wiki/Shape_of_the_universe#Curvature So if the universe has flat curvature, it can be either infinite or bounded with a 4 dimensional shape (compact). But why can't it be simply connected, like a 4 dimensional sphere? That would seem to be the most obvious shape to me for a finite universe. Answer: A flat space means you can draw parallel straight lines that neither converge nor diverge. On a two-sphere, this idea shows up as the lines of constant longitude converging and the lines of constant latitude not being straight: following a line of constant latitude (except the equator) requires that you are always turning off of the geodesic (great circle) tangent to your current motion.
{ "domain": "physics.stackexchange", "id": 78495, "tags": "cosmology, spacetime, universe, curvature" }
is it possible to use nao_ctrl and choregraphe simultaneously?
Question: If it is not possible out of the box, can you suggest some code changes / a high-level approach? My desired task is to teleoperate the limbs (with nao_ctrl and openni_nao), while having Choregraphe do face recognition and attention / gaze servoing with the head. Thanks for any tips! Originally posted by Nick Armstrong-Crews on ROS Answers with karma: 481 on 2011-08-24 Post score: 0 Answer: There's no reason why it shouldn't work, as long as you don't try to do conflicting things (but I don't know in detail how openni_nao is communicating with ROS / NaoQI). In fact, you can even connect nao_ctrl to the simulated Nao running in Choregraphe (use the correct port and IP 127.0.0.1 for NaoQI). Of course, if you start changing Nao's leg joint angles in Choregraphe while he's walking with nao_ctrl you will produce a fall in the worst case - in the best case NaoQI won't even let you control it or will stop the walk. Same for the head: if you do gaze servoing then just don't try to change the head angles at the same time with teleoperation. Originally posted by AHornung with karma: 5904 on 2011-08-24 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Nick Armstrong-Crews on 2011-08-30: PS - if I get the error above, I can get back in business by: closing (and disconnecting from robot) Choregraphe and nao_ctrl, then running nao_ctrl, then running Choregraphe and connecting it to robot Comment by Nick Armstrong-Crews on 2011-08-29: Confirmed to work! A bit touchy, though... I sometimes get the error: "Address already in use" Could not create Proxy to "ALMotion". Desc: 'Port not free. Another broker is using this port.' Overall, seems to work better if I have nao_ctrl running first, then open Choregraphe. THANKS!
{ "domain": "robotics.stackexchange", "id": 6504, "tags": "nao" }
CharacterEscaper - further optimizable? Bugs?
Question: I have been working on a character escaper for a production server that receives quite a few requests per second (sometimes in the range of 400). We forward some "query" requests to Apache Lucene/Solr, and I need to escape characters. Incorporating feedback from Programmers Stack Exchange as well, the code below is the result. Can anyone see any bugs or additional ways to optimize? String query = "http://This+*is||a&&test(whatever!!!!!!)"; char[] queryCharArray = new char[query.length()*2]; char c; int length = query.length(); int currentIndex = 0; for (int i = 0; i < length; i++) { c = query.charAt(i); if(mustBeEscaped[c]){ if('&'==c || '|'==c){ if(i+1 < length && query.charAt(i+1) == c){ queryCharArray[currentIndex++] = '\\'; queryCharArray[currentIndex++] = c; queryCharArray[currentIndex++] = c; i++; } } else{ queryCharArray[currentIndex++] = '\\'; queryCharArray[currentIndex++] = c; } } else{ queryCharArray[currentIndex++] = c; } } query = new String(queryCharArray,0,currentIndex); System.out.println("TEST="+query); private static final boolean[] mustBeEscaped = new boolean[65536]; static{ mustBeEscaped[':']= // for(char c: "\\?+-!(){}[]^\"~*&|".toCharArray()){ mustBeEscaped[c]=true; } } Answer: First I have to be honest and say that this code puts me into WTF-Mode very early on, but that need not be a bad thing. What took me by surprise is the lack of comments; there are some edge-cases in there where I cannot make out why they're the way they are. For the latter (code-changing) suggestions, I only performed minor profiling by simply running the code in a loop with 1 million iterations and seeing if the time differed by much. Rule of thumb: Never roll your own security! Your indentation/brace-style is a mess. Java normally uses a modified K&R-Style, with the opening brace on the same line. function test() { if (condition) { // Stuff } else { // Stuff } } You should fix that first. Some of your names can be improved. E.g.
queryCharArray might be better named escapedQuery. You might want to extract all hard coded chars at least into static final variables for readability. Your loop-beginning can be changed to: char[] queryCharArray = new char[query.length() * 2]; int currentIndex = 0; for (int i = 0; i < query.length(); i++) { char c = query.charAt(i); This improves readability by a lot. Your loop can be optimized to this: char c = query.charAt(i); if (mustBeEscaped[c]) { queryCharArray[currentIndex++] = '\\'; if ('&' == c || '|' == c) { if (i + 1 < query.length() && query.charAt(i + 1) == c) { queryCharArray[currentIndex++] = c; i++; } } } queryCharArray[currentIndex++] = c; This removes unnecessary else-blocks. if ('&' == c || '|' == c) { if (i + 1 < query.length() && query.charAt(i + 1) == c) { You should leave a comment here explaining why these characters are special. Extracting them into another array might be a good idea. if (skipNextOccurence[c]) { Putting all this together (and renaming the constants to fit the Java naming conventions), I ended up with this in my tests: private static final boolean[] MUST_BE_ESCAPED = new boolean[65536]; private static final boolean[] SKIP_NEXT_OCCURENCE = new boolean[65536]; private static final char ESCAPE_CHARACTER = '\\'; static { for (char c : "\\?+-!(){}[]^\"~*&|".toCharArray()) { MUST_BE_ESCAPED[c] = true; } for (char c : "&|".toCharArray()) { SKIP_NEXT_OCCURENCE[c] = true; } } private static String escape(String query) { char[] escapedQuery = new char[query.length() * 2]; int currentIndex = 0; for (int idx = 0; idx < query.length(); idx++) { char c = query.charAt(idx); if (MUST_BE_ESCAPED[c]) { escapedQuery[currentIndex++] = ESCAPE_CHARACTER; if (SKIP_NEXT_OCCURENCE[c]) { // Check if the next char is the same, and if yes add it to // the escapedQuery and make sure that it is skipped. 
if (idx + 1 < query.length() && query.charAt(idx + 1) == c) { escapedQuery[currentIndex++] = c; idx++; } } } escapedQuery[currentIndex++] = c; } return new String(escapedQuery, 0, currentIndex); } To elaborate on the alternatives: I tried a StringBuilder, a Set, and pretty much everything else I could come up with that looked more readable...damn that thing is fast! Even using a StringBuilder slowed it down considerably. So as long as you're aware that you are, a little bit at least, abusing memory with that 65k array (which will end up as 65k * 8 bytes on most platforms) it's fine with me. Also don't forget that you allocate twice the amount of whatever is passed into this function. So if somebody feels funny and passes a 1Mb query into that, it will at least consume 3Mb (1+1*2) on its way. That is, if the JVM is not smarter than I think and does not allocate all that at once. Just use more comments and JavaDoc.
{ "domain": "codereview.stackexchange", "id": 4580, "tags": "java" }
The Big Bang and universal elemental abundance. Predictions seem bad and unconvincing?
Question: Allow me to preface this question by stating that I am not a Big Bang denier. I am reading the book A Universe from Nothing by Lawrence M. Krauss. In his book, he presents the following card as one piece of evidence for the Big Bang: Apparently, the boxes outline the predicted abundance of each element in the universe, whilst the shaded areas represent measured abundances. This may just be because I'm a (student) mathematician, but this seems like extremely poor and unconvincing evidence on the part of physicists. The only prediction that is convincing is that of deuterium; the other predictions seem to be horribly inaccurate. As I stated, this is obviously only one of many pieces of evidence for the Big Bang. However, without considering these other pieces of evidence, it seems that physicists regard this specific piece of evidence as being in itself extremely convincing. As such, I wonder what it is that I'm misunderstanding? Am I interpreting this incorrectly? I realise that absolute proofs exist only in mathematics, but these predictions (excluding deuterium) seem surprisingly bad. EDIT: Sean Lake commented on what seems to be a graph with much more accurate measurements: http://www.astro.ucla.edu/~wright/BBNS.html. Answer: First, you have it the wrong way around. The boxes show the measurements/estimates of what the initial primordial abundances are. These are measured in various, indirect, ways and are afflicted by both measurement uncertainties and known systematic uncertainties. The height of the boxes represents these uncertainties. This is an old graph. The sizes of the deuterium and lithium uncertainties have been reduced. The very convincing evidence that you speak of is that the shaded regions represent the raw predictions of the "vanilla", homogeneous big bang model as a function of the current density of baryons (which is tens of orders of magnitude less than it was at the epoch at which these elements were made).
To me, it is extraordinary that these measured primordial abundances of D, He and Li, which differ by many orders of magnitude and for which we have no a priori expectations, are simultaneously predicted to a good accuracy with a single value of the baryon density (represented by the vertical shaded strip), and that this baryon density (about 4% of the critical density) is itself close to the measured value from the cosmic microwave background, which is entirely independent of the primordial abundances. Lithium is currently problematic, both to predict and measure, because there are nucleosynthetic uncertainties and it is both created and destroyed in various ways other than primordial nucleosynthesis. The horizontal shaded region in this case represents the lithium abundance that is measured in very old, metal-poor stars (the so-called "Spite plateau"). There are various classes of explanation for this "lithium problem", the most likely of which (in my view) is that these stars have depleted lithium from the higher primordial value. See Discrepancy problem in lithium? A slightly more modern (post-Planck results) version of this "Schramm plot" is shown below (from Coc et al. 2014). The green shaded boxes represent observational estimates of the primordial abundances (though note what I said above about Li), while the red dotted lines mark the range of uncertainty in the predictions (due to things like the lifetime of the neutron and nuclear reaction cross-sections). The vertical baryon density measurement from the CMB is now very precise. Note the spectacularly good agreement between the predicted and measured D abundance with the measured baryon to photon ratio from the CMB. There are no free parameters tuned to get this agreement.
{ "domain": "physics.stackexchange", "id": 36942, "tags": "particle-physics, cosmology, popular-science, elementary-particles" }
What does "Assertion `cloud.points.size () == cloud.width * cloud.height' failed" mean?
Question: Hello, after storing each point's coordinates of a point cloud in an array, then converting them to a ROS message and publishing them, I get the following error message while running: array: /opt/ros/fuerte/include/pcl-1.5/pcl/ros/conversions.h:248: void pcl::toROSMsg(const pcl::PointCloud<PointT>&, sensor_msgs::PointCloud2&) [with PointT = pcl::PointXYZ, sensor_msgs::PointCloud2 = sensor_msgs::PointCloud2_<std::allocator<void> >]: Assertion `cloud.points.size () == cloud.width * cloud.height' failed. Aborted (core dumped) What does the last part about size, width and height mean? Here's the part that might be the source of the error. The array's declared as 'field' with x,y,z as its elements. msg->header.frame_id = "some_tf_frame"; msg->height = 1; msg->width = pc.points.size(); //Storing the vector values in msg->points for(i = msg->points.begin();i != msg->points.end();++i){ j=distance(i,msg->points.begin()); msg->points.push_back (pcl::PointXYZ(field[j].x,field[j].y,field[j].z)); } sensor_msgs::PointCloud2 output; pcl::toROSMsg(*msg,output); pub.publish(output); Originally posted by tordalk on ROS Answers with karma: 1 on 2012-10-04 Post score: 0 Answer: Keep in mind that setting msg->width and msg->height has no effect on msg->points. The problem is that you are trying to iterate over msg->points when it is still empty. You can verify that the inside of the for loop is never executed and that msg->points.size() will be 0, while msg->width * msg->height will be equal to pc.points.size(), hence the assertion failure. You should iterate over the correct variable, probably pc.points. I might be wrong, but I hope this helps. Originally posted by Martin Peris with karma: 5625 on 2012-10-04 This answer was ACCEPTED on the original site Post score: 3
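To see the failure mode in isolation, here is a small Python toy model of the same bookkeeping (the `Cloud` class and `to_ros_msg` function are made-up stand-ins for illustration, not the real PCL/ROS API):

```python
# Toy stand-ins for the PCL types; `Cloud` and `to_ros_msg` are made-up
# names for illustration, not the real PCL/ROS API.
class Cloud:
    def __init__(self):
        self.points = []   # starts empty, like msg->points
        self.width = 0
        self.height = 1

def to_ros_msg(cloud):
    # mirrors the assertion inside pcl::toROSMsg
    if len(cloud.points) != cloud.width * cloud.height:
        raise AssertionError("cloud.points.size () == cloud.width * cloud.height")
    return list(cloud.points)

pc_points = [(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)]   # the source cloud's points

msg = Cloud()
msg.width = len(pc_points)   # setting width does NOT populate msg.points
for p in msg.points:         # iterating the EMPTY msg.points: body never runs
    msg.points.append(p)
# here len(msg.points) == 0 while width * height == 2, so toROSMsg would abort

for p in pc_points:          # iterating the SOURCE container fixes it
    msg.points.append(p)
```

The point is simply that `width`/`height` and `points` are independent pieces of state; the assertion just checks that they agree.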
{ "domain": "robotics.stackexchange", "id": 11240, "tags": "pcl" }
Pole/Zero existence at infinity
Question: How can poles and zeros exist at infinity? Can anybody explain using a system function? Answer: consider a general rational transfer function of order $N$, first with an equal number of zeros and poles: $$ \begin{align} H(z) & = A \prod_{n=1}^N \frac{z - q_n}{z - p_n} \\ & = A \frac{\prod_{n=1}^N (z - q_n)}{\prod_{n=1}^N (z - p_n)} \\ & = A \frac{\prod_{n=1}^N (q_n - z) }{\prod_{n=1}^N (p_n - z)} \\ & = B \frac{\prod_{n=1}^N \left(1 - \frac{z}{q_n}\right) }{\prod_{n=1}^N \left(1 - \frac{z}{p_n}\right)} \\ \end{align} $$ where $ B = A \prod_{n=1}^N \frac{q_n}{p_n}$ . now suppose that the number of zeros is actually less than the number of poles. we could express the transfer function as $$ H(z) = C \frac{\prod_{n=1}^M \left(1 - \frac{z}{q_n}\right) }{\prod_{n=1}^N \left(1 - \frac{z} {p_n}\right)} $$ where $M<N$. or we can express it as $$ H(z) = B \frac{\prod_{n=1}^N \left(1 - \frac{z}{q_n}\right) }{\prod_{n=1}^N \left(1 - \frac{z} {p_n}\right)} $$ where $(N-M)$ of the zeros have values of $\infty$, which makes $\frac{z}{q_n}$ disappear (for those zeros), leaving only $1$ as a factor in the transfer function. at the moment, i am not sure what to do with the $A$, $B$, or $C$ factors which might have an $\infty$ in them. i'll worry about that later.
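As a numeric sanity check of the $M<N$ case (a plain-Python sketch; the particular zero/pole values are arbitrary): with one finite zero and two poles, $|H(z)|$ falls off like $1/|z|$ for large $|z|$, i.e. each factor of 10 in $|z|$ costs a factor of about 10 in $|H|$, which is exactly what one zero "at infinity" would contribute.

```python
def H(z, zeros, poles, gain=1.0):
    """Evaluate a rational transfer function from its zeros and poles."""
    num, den = gain, 1.0
    for q in zeros:
        num *= z - q
    for p in poles:
        den *= z - p
    return num / den

zeros = [0.5]         # M = 1 finite zero
poles = [0.9, -0.8]   # N = 2 poles, so N - M = 1 zero "at infinity"

# |H| sampled at |z| = 1e2, 1e3, 1e4: successive ratios come out near 10,
# i.e. one net pole excess, or equivalently one zero at infinity
vals = [abs(H(10.0 ** k, zeros, poles)) for k in (2, 3, 4)]
ratios = [vals[i] / vals[i + 1] for i in range(2)]
```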
{ "domain": "dsp.stackexchange", "id": 2157, "tags": "discrete-signals, z-transform" }
Wait, is this... LINQ?
Question: Context I'm working on a little project that consists of a series of Microsoft Excel add-ins (.xlam). The code being submitted for review here is located in the Reflection project: Feel free to comment on the project architecture, but I'm mostly interested in the Reflection.LinqEnumerable class. Linq? Ok not exactly linq, but very much inspired by System.Linq.Enumerable, and only made possible with the Reflection.Delegate class. I'm working on a Grouping class that will enable adding a GroupBy method in there... but for now these are the members of the LinqEnumerable class: The Object Explorer displays a mini-documentation for the selected method because I've added hidden VB_Description attributes for every public method. Here's the whole class, with the attributes: VERSION 1.0 CLASS BEGIN MultiUse = -1 'True END Attribute VB_Name = "LinqEnumerable" Attribute VB_GlobalNameSpace = False Attribute VB_Creatable = False Attribute VB_PredeclaredId = True Attribute VB_Exposed = True Private encapsulated As New Collection Option Explicit Private Function EquateReferenceTypes(value As Variant, other As Variant) As Boolean Dim equatable As IEquatable If TypeOf value Is IEquatable Then Set equatable = value EquateReferenceTypes = equatable.Equals(other) Else EquateReferenceTypes = (ObjPtr(value) = ObjPtr(other)) End If End Function Private Function EquateValueTypes(value As Variant, other As Variant) As Boolean EquateValueTypes = (value = other) End Function Friend Sub Add(ParamArray values()) Dim valuesArray() As Variant valuesArray = values AddArray valuesArray End Sub Friend Sub Concat(ByVal values As LinqEnumerable) AddArray values.ToArray End Sub Friend Sub AddArray(values() As Variant) Dim value As Variant, i As Long For i = LBound(values) To UBound(values) encapsulated.Add values(i) Next End Sub Public Property Get Item(ByVal index As Long) As Variant Attribute Item.VB_Description = "Gets or sets the element at the specified index." 
Attribute Item.VB_UserMemId = 0 If IsObject(encapsulated(index)) Then Set Item = encapsulated(index) Else Item = encapsulated(index) End If End Property Public Property Get NewEnum() As IUnknown Attribute NewEnum.VB_Description = "Gets an enumerator that iterates through the sequence." Attribute NewEnum.VB_UserMemId = -4 Attribute NewEnum.VB_MemberFlags = "40" Set NewEnum = encapsulated.[_NewEnum] End Property Public Property Get Count() As Long Attribute Count.VB_Description = "Gets the number of elements in the sequence." Count = encapsulated.Count End Property Public Function Contains(ByVal value As Variant) As Boolean Attribute Contains.VB_Description = "Determines whether an element is in the sequence." Contains = (IndexOf(value) <> -1) End Function Public Function Distinct() As LinqEnumerable Attribute Distinct.VB_Description = "Returns distinct elements from the sequence." Dim result As New LinqEnumerable Dim value As Variant For Each value In encapsulated If Not result.Contains(value) Then result.Add value Next Set Distinct = result End Function Public Function Except(ByVal values As LinqEnumerable) As LinqEnumerable Attribute Except.VB_Description = "Produces the set difference with specified sequence." Dim result As New LinqEnumerable Dim value As Variant For Each value In encapsulated If Not values.Contains(value) Then result.Add value Next Set Except = result End Function Public Function First() As Variant Attribute First.VB_Description = "Returns the first element in the sequence." If Count = 0 Then Exit Function If IsObject(Item(1)) Then Set First = Item(1) Else First = Item(1) End If End Function Public Function FromArray(ByRef values() As Variant) As LinqEnumerable Attribute FromArray.VB_Description = "Creates a new instance by copying elements of an array." 
Dim result As New LinqEnumerable result.AddArray values Set FromArray = result End Function Public Function FromCollection(ByVal values As VBA.Collection) As LinqEnumerable Attribute FromCollection.VB_Description = "Creates a new instance by copying elements of a VBA.Collection instance." Dim result As New LinqEnumerable Dim value As Variant For Each value In values result.Add value Next Set FromCollection = result End Function Public Function FromEnumerable(ByVal value As System.Enumerable) As LinqEnumerable Attribute FromEnumerable.VB_Description = "Creates a new instance by copying elements of a System.Enumerable instance." Dim result As LinqEnumerable Set result = LinqEnumerable.FromArray(value.ToArray) Set FromEnumerable = result End Function Public Function FromList(ByVal values As System.List) As LinqEnumerable Attribute FromList.VB_Description = "Creates a new instance by copying elements of a System.List instance." Dim result As New LinqEnumerable Dim value As Variant For Each value In values result.Add value Next Set FromList = result End Function Public Function GetRange(ByVal index As Long, ByVal valuesCount As Long) As LinqEnumerable Attribute GetRange.VB_Description = "Creates a copy of a range of elements." Dim result As LinqEnumerable If index > Count Then Err.Raise 9 Dim lastIndex As Long lastIndex = IIf(index + valuesCount > Count, Count, index + valuesCount) Set result = New LinqEnumerable Dim i As Long For i = index To lastIndex result.Add Item(i) Next Set GetRange = result End Function Public Function IndexOf(value As Variant) As Long Attribute IndexOf.VB_Description = "Searches for the specified object and returns the 1-based index of the first occurrence within the sequence." 
Dim found As Boolean Dim isRef As Boolean If Count = 0 Then IndexOf = -1: Exit Function Dim i As Long For i = 1 To Count If IsObject(Item(i)) Then found = EquateReferenceTypes(value, Item(i)) Else found = EquateValueTypes(value, Item(i)) End If If found Then IndexOf = i: Exit Function Next IndexOf = -1 End Function Public Function Last() As Variant Attribute Last.VB_Description = "Returns the last element of the sequence." If Count = 0 Then Exit Function If IsObject(Item(Count)) Then Set Last = Item(Count) Else Last = Item(Count) End If End Function Public Function LastIndexOf(value As Variant) As Long Attribute LastIndexOf.VB_Description = "Searches for the specified object and returns the 1-based index of the last occurrence within the sequence." Dim found As Boolean Dim isRef As Boolean LastIndexOf = -1 If Count = 0 Then Exit Function Dim i As Long For i = 1 To Count If IsObject(Item(i)) Then found = EquateReferenceTypes(value, Item(i)) Else found = EquateValueTypes(value, Item(i)) End If If found Then LastIndexOf = i Next End Function Public Function ToArray() As Variant() Attribute ToArray.VB_Description = "Copies the entire sequence into an array." Dim result() As Variant ReDim result(1 To Count) Dim i As Long If Count = 0 Then Exit Function For i = 1 To Count If IsObject(Item(i)) Then Set result(i) = Item(i) Else result(i) = Item(i) End If Next ToArray = result End Function Public Function ToDictionary(ByVal keySelector As Delegate, Optional ByVal valueSelector As Delegate = Nothing) As Scripting.Dictionary Attribute ToDictionary.VB_Description = "Creates a System.Dictionary according to specified key selector and element selector functions." 
Dim result As New Scripting.Dictionary Dim value As Variant For Each value In encapsulated If valueSelector Is Nothing Then result.Add keySelector.Execute(value), value Else result.Add keySelector.Execute(value), valueSelector.Execute(value) End If Next Set ToDictionary = result End Function Public Function ToCollection() As VBA.Collection Attribute ToCollection.VB_Description = "Copies the entire sequence into a new VBA.Collection." Dim result As New VBA.Collection Dim value As Variant For Each value In encapsulated result.Add value Next Set ToCollection = result End Function Public Function ToList() As System.List Attribute ToList.VB_Description = "Copies the entire sequence into a new System.List." Dim result As System.List Set result = List.Create result.AddArray Me.ToArray Set ToList = result End Function Public Function OfTypeName(ByVal value As String) As LinqEnumerable Attribute OfTypeName.VB_Description = "Filters elements based on a specified type." Dim result As LinqEnumerable Dim element As Variant For Each element In encapsulated If TypeName(element) = value Then result.Add element Next Set OfTypeName = result End Function Public Function SelectValues(ByVal selector As Delegate) As LinqEnumerable Attribute SelectValues.VB_Description = "Projects each element of the sequence." Dim result As New LinqEnumerable Dim element As Variant For Each element In encapsulated result.Add selector.Execute(element) Next Set SelectValues = result End Function Public Function SelectMany(ByVal selector As Delegate) As LinqEnumerable Attribute SelectMany.VB_Description = "Projects each element into a sequence of elements, and flattens the resulting sequences into one sequence." 
Dim result As New LinqEnumerable Dim element As Variant For Each element In encapsulated 'verbose, but works with anything that supports a For Each loop Dim subList As Variant Set subList = selector.Execute(element) Dim subElement As Variant For Each subElement In subList result.Add subElement Next Next Set SelectMany = result End Function Public Function Aggregate(ByVal accumulator As Delegate) As Variant Attribute Aggregate.VB_Description = "Applies an accumulator function over a sequence." Dim result As Variant Dim isFirst As Boolean Dim value As Variant For Each value In encapsulated If isFirst Then result = value isFirst = False Else result = accumulator.Execute(result, value) End If Next Aggregate = result End Function Public Function Where(ByVal predicate As Delegate) As LinqEnumerable Attribute Where.VB_Description = "Filters the sequence based on a predicate." Dim result As New LinqEnumerable Dim element As Variant For Each element In encapsulated If predicate.Execute(element) Then result.Add element Next Set Where = result End Function Public Function FirstWhere(ByVal predicate As Delegate) As Variant Attribute FirstWhere.VB_Description = "Returns the first element of the sequence that satisfies a specified condition." Dim element As Variant For Each element In encapsulated If predicate.Execute(element) Then If IsObject(element) Then Set FirstWhere = element Else FirstWhere = element End If Exit Function End If Next End Function Public Function LastWhere(ByVal predicate As Delegate) As Variant Attribute LastWhere.VB_Description = "Returns the last element of the sequence that satisfies a specified condition.." 
Dim result As Variant Dim element As Variant For Each element In encapsulated If predicate.Execute(element) Then If IsObject(element) Then Set result = element Else result = element End If End If Next If IsObject(result) Then Set LastWhere = result Else LastWhere = result End If End Function Public Function CountIf(ByVal predicate As Delegate) As Long Attribute CountIf.VB_Description = "Returns a number that represents how many elements in the specified sequence satisfy a condition." Dim result As Long Dim element As Variant For Each element In encapsulated If predicate.Execute(element) Then result = result + 1 Next CountIf = result End Function Public Function AllItems(ByVal predicate As Delegate) As Boolean Attribute AllItems.VB_Description = "Determines whether all elements of the sequence satisfy a condition." Dim element As Variant For Each element In encapsulated If Not predicate.Execute(element) Then Exit Function End If Next AllItems = True End Function Public Function AnyItem(ByVal predicate As Delegate) As Boolean Attribute AnyItem.VB_Description = "Determines whether any element of the sequence satisfy a condition." Dim element As Variant For Each element In encapsulated If predicate.Execute(element) Then AnyItem = True Exit Function End If Next End Function Note that due to language constraints I had to make some compromises: The overload of First taking a predicate parameter was renamed to FirstWhere; same with the Last overload, renamed to LastWhere - that's because VBA doesn't support overloading, obviously. Select was renamed to SelectValues, because "Select" is a reserved keyword. OfType was renamed to the here-more-accurate OfTypeName, since the function is really comparing type names; type comparison is possible in VBA, but not with value types - it's simpler to just take a type name and verify that instead. So, is this LINQ - Language-INtegrated Query for VBA? Not sure... 
but this is definitely a number of steps away from the plain old vanilla Collection class. Example: Dim accumulator As Delegate Set accumulator = Delegate.Create("(work,value) => value & "" "" & work") Debug.Print LinqEnumerable.FromList(List.Create("the", "quick", "brown", "fox")) _ .Aggregate(accumulator) Produces this output: fox brown quick the Answer: Decomposition: There are redundancies in translating from Array and Collection. Consider these three snippets: Dim value As Variant, i As Long 'value is unused? For i = LBound(values) To UBound(values) encapsulated.Add values(i) Next Dim value As Variant For Each value In values result.Add value Next Set result = LinqEnumerable.FromArray(value.ToArray) They all do the same thing. Why translate from LinqEnumerable to Array just to go back to LinqEnumerable? Why have a separate method for adding an Array or Enumerable when the same procedure works for both? Private Sub Extend(ByVal sequence As Variant) Dim element As Variant For Each element in sequence encapsulated.Add element Next element End Sub Friend Sub Add(ParamArray values() As Variant) Extend values End Sub Friend Sub Concat(ByVal values As LinqEnumerable) Extend values End Sub Friend Sub AddArray(values() As Variant) Extend values End Sub ' Optional New methods Friend Sub AddCollection(ByVal values As VBA.Collection) Extend values End Sub Friend Sub AddList(ByVal values As System.List) Extend values End Sub All of those methods did the same thing, but expected different inputs. Duck-typing is one of the few high-level features that VBA does right. It's a shame to not take advantage of it. Public Function FromCollection(ByVal values As VBA.Collection) As LinqEnumerable Attribute FromCollection.VB_Description = "Creates a new instance by copying elements of a VBA.Collection instance." 
Dim result As New LinqEnumerable result.AddCollection values Set FromCollection = result End Function Public Function FromEnumerable(ByVal values As System.Enumerable) As LinqEnumerable Attribute FromEnumerable.VB_Description = "Creates a new instance by copying elements of a System.Enumerable instance." Dim result As New LinqEnumerable result.Concat values Set FromEnumerable = result End Function Public Function FromList(ByVal values As System.List) As LinqEnumerable Attribute FromList.VB_Description = "Creates a new instance by copying elements of a System.List instance." Dim result As New LinqEnumerable result.AddList values Set FromList = result End Function Public Function FromArray(ByVal values() As Variant) As LinqEnumerable Attribute FromArray.VB_Description = "Creates a new instance by copying elements of an array." Dim result As New LinqEnumerable result.AddArray values Set FromArray = result End Function You can keep them if you want to enforce type safety, but I wouldn't. You need to add two new methods for every other container you want to support. Honestly, I would just dump all but Extend and Add and make Extend Friend, then create just these two factory methods. Friend Sub Extend(ByVal sequence As Variant) Dim element As Variant For Each element in sequence encapsulated.Add element Next element End Sub Friend Sub Add(ParamArray values() As Variant) Extend values End Sub Public Function Create(ParamArray values() As Variant) As LinqEnumerable Set Create = CreateFrom(values) End Function Public Function CreateFrom(ByVal values As Variant) As LinqEnumerable Dim result As New LinqEnumerable result.Extend values Set CreateFrom = result End Function
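For comparison only: the duck-typed Extend idea above is exactly the pattern dynamic languages use for containers. A Python sketch of the same design (the names mirror the VBA code above and are not from any real library), where one loop serves every iterable source:

```python
# A Python sketch of the duck-typed Extend pattern; class and method names
# are my own, mirroring the VBA design above rather than any real library.
class LinqEnumerable:
    def __init__(self):
        self._items = []

    def extend(self, sequence):
        # one loop serves lists, tuples, generators, other LinqEnumerables...
        for element in sequence:
            self._items.append(element)

    def add(self, *values):        # the ParamArray-style entry point
        self.extend(values)

    def __iter__(self):            # makes instances themselves iterable
        return iter(self._items)

    @classmethod
    def create_from(cls, values):  # the single factory replacing From*
        result = cls()
        result.extend(values)
        return result

seq = LinqEnumerable.create_from(["the", "quick"])
seq.add("brown", "fox")                              # ParamArray-style
seq.extend(LinqEnumerable.create_from(["jumps"]))    # any iterable works
# list(seq) -> ['the', 'quick', 'brown', 'fox', 'jumps']
```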
{ "domain": "codereview.stackexchange", "id": 10084, "tags": "linq, vba, delegates" }
Large table mousehover effects
Question: HTML <table> <tr data-parent="0" data-id="1"> <td></td> </tr> <tr data-parent="1" data-id="2"> <td></td> </tr> <tr data-parent="1" data-id="3"> <td></td> </tr> <tr data-parent="3" data-id="4"> <td></td> </tr> </table> Where data-parent points to the data-id of another row. JQUERY/JS $("tr[data-parent='" + id + "']").css("background-color", "#1ba1e2").each(function () { $("tr[data-parent='" + $(this).attr("data-id") + "']").css("background-color", "#75F7FF").each(function () { $("tr[data-parent='" + $(this).attr("data-id") + "']").css("background-color", "#98E8E3").each(function () { $("tr[data-parent='" + $(this).attr("data-id") + "']").css("background-color", "#B6DCE8") });});}); (I know that I could have written a recursive function as well, that's not the point right now) When someone mouse-overs a TR, I want that row to be one color, all the rows that have the hovered row as parent, another color, the grandchildren another color, etc. (for several levels deep). Now, for 100 rows, this isn't a problem, but my data contains around 30 000 rows, and that's where it starts to go wrong, several-seconds-of-freezing-wrong. How can I optimise this code? jsFiddle Answer: The problem with the current approach, as I'm sure you already know, is the very high number of DOM elements that the JavaScript engine is navigating through and assessing before the end of the process (bearing the 30,000 row scenario in mind). And the jQuery selector function has a fair bit of work to do, checking every row element to see if it has the data attribute value or not, before it can even begin the highlighting. A taxing process. DOM manipulation is notoriously hard on performance. My answer would be that the mapping between row and parent, which is what essentially needs to be calculated for each row when hovered, needs to be taken out of the DOM entirely so this calculation doesn't require any work with the DOM at all. 
See my solution below, and this JSFiddle: https://jsfiddle.net/dawdg4sj/4 Ultimately, you'll see that on each row hover, the 'calculateHighlighting' function is called which performs the calculation "off-line" so to speak, away from the DOM (data attributes withdrawn) and instead uses the local 'map' object (row IDs on the left, parent row IDs on the right), and returned is a straight-forward array of objects each describing an element by ID and a class to add for highlighting purposes. Bottom line, you need to be able to generate the row id/parent mapping outside the DOM and inside JavaScript instead. As long as you have this mapping information inside the DOM and wish to process 30,000 rows, it'll be a sluggish affair. (Note: My solution uses function calls from the Lodash library for convenience, i.e. '_.keys(_.pick(...'. They are optional, keep or swap.) (Finally, and separately, I would express my surprise at such a large number of rows. Working with so many elements in a single view of an app via JavaScript is always going to lead to slow processing to one extent or another. But obviously so many rows could not be visible within the viewport all at once, so I would further suggest that part of the solution is to limit the highlighting to those within view and just outside only, and update on scroll, or even paginate the rows. It depends on the context of the project of course, but those are my supplementary thoughts.) 
// --- // New-look HTML // --- <table> <tr id="row-1"> <td>test</td> </tr> <tr id="row-2"> <td>tes</td> </tr> <tr id="row-3"> <td>tes</td> </tr> <tr id="row-4"> <td>test</td> </tr> <tr id="row-5"> <td>tes</td> </tr> <tr id="row-6"> <td>tes</td> </tr> <tr id="row-7"> <td>test</td> </tr> </table> // --- // New-look JS // --- var map = { // row id -> parent row id 1: 0, 2: 1, 3: 1, 4: 3, 5: 3, 6: 1, 7: 0 }; var highlightColours = 4; // calculateHighlighting // Description: Establish a straight-forward array of highlighting rules (classes to add to elements) // e.g. [ {element: '#row-1', class: 'row-colour-1'}, {element: '#row-2', class: 'row-colour-4'} ... ] function calculateHighlighting(rootElementID) { var result = []; var elementIDPrefix = '#row-'; var elementColourClassPrefix = 'row-colour-'; var parentRows = []; var id = rootElementID.split('-')[1]; // (e.g. '#row-1' -> 1) parentRows.push(id); // Start with the root (hovered) element var colourIndex = 1; // Keep track of highlight colours used while(parentRows.length > 0 || colourIndex <= highlightColours) { // While there are parent rows to highlight (or we run out of highlight colours) var childRows; for(var i = 0; i < parentRows.length; i++) { // Fetch related rows based on id/parent mapping (using lodash function, _.keys, for convenience) childRows = _.keys(_.pick(map, function(parentID) { return parentID == parentRows[i]; })); if(childRows.length > 0) { // Highlight found parent rows for(var j = 0; j < childRows.length; j++) { result.push({ element: elementIDPrefix + childRows[j], class: elementColourClassPrefix + colourIndex }); } } } parentRows = childRows; // Just highlighted parent rows, become base for next parent search colourIndex++; // Next highlight colour } return result; } $('tr').on('mouseover', function () { var rowID = $(this).attr('id'); var rowsToHighlight = calculateHighlighting(rowID); for(var i = 0; i < rowsToHighlight.length; i++) { 
$(rowsToHighlight[i].element).addClass(rowsToHighlight[i].class); } }); $('tr').on('mouseleave', function () { $('tr').attr('class', ''); }); // --- // New-look CSS // --- tr:hover { background-color: red; } .row-colour-1 { background-color: #1ba1e2; } .row-colour-2 { background-color: #75F7FF; } .row-colour-3 { background-color: #98E8E3; } .row-colour-4 { background-color: #B6DCE8; }
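Stripped of jQuery and Lodash, the core of calculateHighlighting is just a breadth-first walk over the off-DOM parent map, one highlight colour per generation. The same logic can be sketched in Python (the function name is mine, not the page code):

```python
# Python sketch of the algorithm behind calculateHighlighting; names are
# mine, and no DOM is involved -- just the row-id -> parent-id map.
def highlight_plan(parent_map, root_id, colours=4):
    """Return [(row_id, colour_index), ...]: children of the hovered row
    get colour 1, grandchildren colour 2, and so on, one colour per
    generation, stopping when the colours or the descendants run out."""
    plan = []
    frontier = [root_id]
    for colour in range(1, colours + 1):
        # every row whose parent sits in the current frontier
        children = [rid for rid, pid in parent_map.items() if pid in frontier]
        if not children:
            break
        for rid in children:
            plan.append((rid, colour))
        frontier = children
    return plan

# the same mapping used in the answer: row id -> parent row id
parent_map = {1: 0, 2: 1, 3: 1, 4: 3, 5: 3, 6: 1, 7: 0}
# hovering row 1: rows 2, 3, 6 get colour 1; rows 4, 5 get colour 2
```

Because the plan is computed purely from the map, the only DOM work left is applying the returned classes, which is exactly the separation the answer argues for.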
{ "domain": "codereview.stackexchange", "id": 13099, "tags": "javascript, performance, jquery, html" }
Electric field near a conductor which is near an insulating sheet?
Question: A thin sheet of area A has uniform charge Q > 0, and is placed close to a conductor of thickness d. The conductor has a net charge of zero. In terms of the given quantities and fundamental constants, find the charge density on both the near and far surfaces of the conductor, and the electric field strength in each of the four regions. Indicate the direction of the electric field in each region using vectors. Frankly, I am quite unsure of where to even begin with this problem. I am not asking anyone to do my work for me, but it would be extremely helpful if anyone could explain the underlying physics of how a conductor and a charged insulating sheet interact. I presume that the sheet would induce some negative charge on the side of the conductor closest to the sheet, which would also mean that there would be a positive charge on the other side of the conductor. However, I do not know what the magnitude of these charges would be. After that, the electric field calculation should be easy for each region. I believe I would then use superposition to find the field in each region, but if I am wrong on that, please tell me. Answer: I presume that the sheet would induce some negative charge on the side of the conductor closest to the sheet, which would also mean that there would be a positive charge on the other side of the conductor. However, I do not know what the magnitude of these charges would be. As a hint, what do you know about the electric field inside a good conductor in steady-state conditions? What would the charge on each surface of the conductor need to be to establish this condition? If it helps, think about how you use Gauss's law to find the field due to a sheet of charge, and how you can apply a similar Gaussian surface in this problem.
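Without spoiling the exercise, the hinted strategy can be checked numerically: superpose the standard infinite-sheet field $E=\sigma/(2\epsilon_0)$ from the sheet and from the two (unknown) induced surface layers, and let the zero-field-inside-the-conductor condition fix the induced density. A rough Python sketch (the function names, sample numbers, and the two-point linear solve are my own scaffolding, not part of the problem statement):

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def sheet_field(sigma, sheet_x, x):
    """x-component of the field of an infinite sheet at sheet_x (Gauss's law)."""
    sign = 1.0 if x > sheet_x else -1.0
    return sign * sigma / (2.0 * EPS0)

def total_field(x, sigma, a, d, sigma_near):
    """Superpose the charged sheet (at x=0) and the induced layers on the
    conductor's faces at x=a and x=a+d; the far face carries -sigma_near
    because the conductor is neutral overall."""
    return (sheet_field(sigma, 0.0, x)
            + sheet_field(sigma_near, a, x)
            + sheet_field(-sigma_near, a + d, x))

sigma, a, d = 1e-6, 0.01, 0.005   # arbitrary numbers for the check
mid = a + d / 2                   # a point inside the conductor

# E_inside is linear in sigma_near, so two evaluations solve E_inside = 0
f0 = total_field(mid, sigma, a, d, 0.0)
f1 = total_field(mid, sigma, a, d, 1.0)
sigma_near = -f0 / (f1 - f0)      # induced density on the near face
```

Running this, the zero-field condition picks out a negative near-face density of half the sheet's density in magnitude, and re-evaluating total_field outside the conductor reproduces the bare-sheet value; that is the kind of consistency check superposition makes easy.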
{ "domain": "physics.stackexchange", "id": 71804, "tags": "homework-and-exercises, electric-fields, charge" }
Measuring uncertainty in measurements
Question: The following question is taken from Physics by Halliday, Resnick and Krane, 5th ed. Vol. 1. A student is calculating the thickness of a single sheet of paper. She measures the thickness of a stack of $80$ sheets with Vernier calipers, and finds the thickness to be $l = 1.27~\mathrm{cm}$. To calculate the thickness of a single sheet she divides by $80$. Which of the following answers has the correct number of significant digits? (A) $0.15875~\mathrm{mm}$ (B) $0.159~\mathrm{mm}$ (C) $0.16~\mathrm{mm}$ (D) $0.2~\mathrm{mm}$ Now, my question is a general question regarding this type of measurement, where we measure some quantity and divide because we do not have precise enough instruments. For example, we can "measure" the time period of one oscillation of a pendulum by measuring the time taken for ten oscillations. Now, my question is regarding the value of and the error in the thickness of a sheet of paper or the time required for one oscillation. If $L$ is the thickness of one sheet, then assuming that every sheet is equally thick, $L = l/80 = 0.015875~\mathrm{cm}$. And $\Delta L = \Delta l / 80$. Assuming $\Delta l = 0.01~\mathrm{cm}$, we get $\Delta L = 0.000125~\mathrm{cm}$. So, the actual thickness is $L = 0.0159 \pm 0.0001~\mathrm{cm}$. Am I right? This would correspond to option (B). But when performing an experiment to "measure" the time period of one oscillation of a pendulum by measuring the time taken for ten oscillations, my professor told me that the time period of one oscillation cannot be reported such that the uncertainty in the value is smaller than the least count of the instrument used to measure the time taken for ten oscillations. For example, if the least count of a stopwatch is $1~\mathrm{s}$, and the time taken for ten oscillations is $8~\mathrm{s}$, then what should be the time required for one oscillation? Should it be $0.8 \pm 0.1~\mathrm{s}$? 
But if this is true, then since the least count of the stopwatch is greater than the value itself, this value would convey no information, according to my professor. Answer: I think you must have misunderstood your professor. If the time for ten oscillations is $8 \pm 1$ seconds then the time for one oscillation is: $$ \tau = \frac{8 \pm 1}{10} = \frac{8}{10} \pm \frac{1}{10} $$ So it is $\tau = 0.8 \pm 0.1$ seconds. If your professor says that: since the least count of the stopwatch is 1 s, the value of the time period of one oscillation cannot be more certain than 1 s, then they are wrong.
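Both examples in this question and answer use the same propagation rule: dividing a measurement by an exact integer divides its uncertainty too. A quick check with the question's numbers (a Python sketch; the helper name is mine):

```python
def divide_measurement(value, uncertainty, n):
    """Propagate uncertainty through division by an exact count n:
    (x +/- dx) / n  ->  x/n +/- dx/n."""
    return value / n, uncertainty / n

# paper stack: l = 1.27 cm, caliper least count 0.01 cm, 80 sheets
L, dL = divide_measurement(1.27, 0.01, 80)
# -> L = 0.0158750 cm, dL = 0.000125 cm, reported as 0.0159 +/- 0.0001 cm (option B)

# pendulum: 10 oscillations in 8 s, stopwatch least count 1 s
tau, dtau = divide_measurement(8.0, 1.0, 10)
# -> tau = 0.8 s, dtau = 0.1 s, i.e. 0.8 +/- 0.1 s as in the answer
```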
{ "domain": "physics.stackexchange", "id": 52776, "tags": "experimental-physics, error-analysis" }
Demonstration of Noether's Theorem
Question: So, as many, many people before me, I'm trying to get some insight on Noether's Theorem and its demonstration. As I'm in the process of self-teaching here, there are several things I'm "missing" or "not understanding". I made my way through Tong's notes, books like Peskin & Schroeder, Mandl & Shaw, Greiner (Field Quantization), etc., and several questions in PSE like this one and this one. Nevertheless, the more I read, the more questions I have. I will try to compile everything I've got so far, and I will write my questions in bold. I'm sorry if this post turns out to be a little too long, but I'm trying to be as accurate as possible without diving into the realm of group theory; that is, I'm trying to demonstrate the theorem at a pre-Ph.D. level. So, without further ado, I'll simply start. Suppose we have an infinitesimal transformation $x^{\mu}\rightarrow\tilde{x}^{\mu}=x^{\mu}+\delta x^{\mu}$, which transforms our integration region $\Omega\rightarrow\tilde{\Omega}$. We have that under such a transformation, the fields may have a total variation $\Delta\phi^a$ such that: \begin{equation}\tag{1} \Delta\phi^a = \tilde{\phi}^a(\tilde{x})-\phi^a(x)\,. \end{equation} 1) Where is $\Delta\phi^a$ evaluated at? $x$ or $\tilde{x}$? I will assume, without understanding why, that $\Delta\phi^a = \Delta\phi^a(x)$. This variation must not be confused with the local variation showing only how the fields change: \begin{equation}\tag{2} \delta\phi^a(x) = \tilde{\phi}^a(x)-\phi^a(x)\,. \end{equation} Nevertheless, they are related, because at least at first order (and assuming $\delta x^{\mu}$ and $\delta \phi^a(x)$ of the same order): \begin{equation}\tag{3} \Delta\phi^a(x) \simeq \delta\phi^a(x) + \delta x^{\mu}\frac{\partial\phi^a}{\partial x^{\mu}}(x)\,. 
\end{equation} This can be easily proven by using the definition of the derivative to first order: \begin{equation} \frac{df}{dx}(x) = \lim_{h\to0}\frac{f(x+h)-f(x)}{h}\simeq\frac{f(x+h)-f(x)}{h}\rightarrow f(x+h) \simeq f(x)+h\frac{df}{dx}(x)\,, \end{equation} i.e. $\tilde{\phi}^a(\tilde{x}) = \tilde{\phi}^a(x+\delta x) = \tilde{\phi}^a(x) + \delta x^{\mu}\frac{\partial\tilde{\phi}^a}{\partial x^{\mu}}(x)$, and using equation (2) where needed. Later it will also come in handy to have an expression for $\Delta(\partial_{\mu}\phi^a(x))$. My first assumption here is that: \begin{equation}\tag{4} \Delta(\partial_{\mu}\phi^a) = \tilde{\partial}_{\mu}\tilde{\phi}^a(\tilde{x})-\partial_{\mu}\phi^a(x) \end{equation} 2) Same question as above, where is $\Delta(\partial_{\mu}\phi^a)$ evaluated at? 3) Is expression (4) actually correct? I will calculate this by starting with $\partial_{\mu}(\Delta\phi^a)$ : \begin{align} \partial_{\mu}(\Delta\phi^a) &= \partial_{\mu}\tilde{\phi}^a(\tilde{x}) - \partial_{\mu}\phi^a(x)\\ &= \tilde{\partial}_{\mu}\tilde{\phi}^a(\tilde{x})- \partial_{\mu}\phi^a(x) + \partial_{\mu}\tilde{\phi}^a(\tilde{x}) - \tilde{\partial}_{\mu}\tilde{\phi}^a(\tilde{x})\\ &= \Delta(\partial_{\mu}\phi^a) + \frac{\partial\tilde{x}^{\sigma}}{\partial x^{\mu}}\frac{\partial\tilde{\phi}^a}{\partial\tilde{x}^{\sigma}}(\tilde{x})-\tilde{\partial}_{\mu}\tilde{\phi}^a(\tilde{x})\,. \end{align} To make things shorter, the last two terms can be worked out by using $\tilde{x}^{\mu}=x^{\mu}+\delta x^{\mu}$, expanding $\tilde{\phi}^a(\tilde{x}) = \tilde{\phi}^a(x+\delta x)$ at first order, replacing, and keeping again all terms at first order, arriving at: \begin{equation}\tag{5} \Delta(\partial_{\mu}\phi^a) = \partial_{\mu}(\Delta\phi^a) - \partial_{\mu}(\delta x^{\sigma})\partial_{\sigma}\phi^a(x)\,. \end{equation} After the detour, we come back to Noether. 
I decided to proceed by using the invariance of the action: \begin{equation} S[\phi^a(x),\partial_{\mu}\phi^a(x)]=\int_{\Omega}d^4x\;\mathcal{L}(\phi^a(x),\partial_{\mu}\phi^a(x))\,. \end{equation} To shorten the following expressions, I'll choose to abuse the notation: $S[\phi^a(x),\partial_{\mu}\phi^a(x)]=S[x]$ and $\mathcal{L}(\phi^a(x),\partial_{\mu}\phi^a(x))=\mathcal{L}(x)$. We need our transformation to produce a variation of the action of at most a boundary term, which is the integral of the divergence of a smooth field $W^{\mu}$, satisfying $\left.W^{\mu}\right|_{\partial\Omega}=0$, which is our condition for a well-behaved variational principle. In summary, we need that: \begin{align} \delta S &= \tilde{S}[\tilde{x}]-S[x]\\ &= \int_{\tilde{\Omega}}d^4\tilde{x}\;\tilde{\mathcal{L}}(\tilde{x}) - \int_{\Omega}d^4x\;\mathcal{L}(x)\tag{6}\\ &= \int_{\Omega}d^4x\;\partial_{\mu}W^{\mu}(x)\,. \end{align} 4) I assume that integrating $\partial_{\mu}W^{\mu}$ in $\Omega$ or $\tilde{\Omega}$ is exactly the same; it depends on which set of coordinates we would like to have at the end of the demonstration. Am I right? 5) In expression (6), should the first Lagrangian have a tilde or not? If it doesn't have the tilde, and as stated by Goldstein, the underlying assumption would be that the functional form of the Lagrangian does not change with the transformation. This means, as an example, that if the Lagrangian was $\mathcal{L}(x) = 2\phi_1(x) + \phi_2(x)$, then after the transformation $\tilde{\mathcal{L}}(\tilde{x}) = 2\tilde{\phi}_1(\tilde{x}) + \tilde{\phi}_2(\tilde{x})=\mathcal{L}(\tilde{x})$. As far as I understand, this is not the general case, and there is a very good example/explanation in this post at PSE. I will therefore assume that the first term in my expression (6) must have a tilde. The next step is where most of my headaches start. To find out what $\delta S$ is, I would like to turn the first term in (6) into an integral over $\Omega$. 
The story with the Jacobian for $d^4\tilde{x}$ is not a problem for me. My problems come with the Lagrangian. My first assumption would be to try to go to first order in its variations, as follows: \begin{align} \tilde{\mathcal{L}}(\tilde{x}) &= \tilde{\mathcal{L}}(\tilde{\phi}^a(\tilde{x}),\tilde{\partial}_{\mu}\tilde{\phi}^a(\tilde{x}))\\ &= \tilde{\mathcal{L}}(\phi^a(x) + \Delta \phi^a(x),\partial_{\mu}\phi^a(x)+\Delta(\partial_{\mu}\phi^a(x)))\\ &\overset{?}{\simeq} \tilde{\mathcal{L}}(x) + \Delta\phi^a(x)\frac{\partial\tilde{\mathcal{L}}}{\partial\phi^a}(x) + \Delta(\partial_{\mu}\phi^a(x))\frac{\partial\tilde{\mathcal{L}}}{\partial(\partial_{\mu}\phi^a)}(x)\,. \end{align} 6) Is the last expression correct? If so, how do I get rid of the tildes? I tried to assume (I don't know why this should be true for the Lagrangian) that there is a local transformation for the Lagrangian as well, satisfying $\tilde{\mathcal{L}}(x) = \mathcal{L}(x) + \delta\mathcal{L}(x)$, replaced it in the last expression, and then I used expressions (3) and (5) and the Jacobian in $d^4\tilde{x}$, but I never seem to get anywhere close to the expression I should be getting, which is expression (2.48) in Greiner's book (note that I used a different notation here): \begin{equation} \delta S = \int_{\Omega} d^4x\left(\delta\mathcal{L}(x) + \frac{\partial}{\partial x^{\mu}}\left(\mathcal{L}(x)\delta x^{\mu}\right)\right)\,. \end{equation} The next steps after this expression is reached are not so complicated, given that Greiner expands the local transformation of the Lagrangian at first order (very much like when finding the Euler-Lagrange equations), plays with the variations a little bit, and finally arrives at the conserved current. Once again, I'm sorry for the length of the post; I have invested weeks on this already, and I do not seem to be able to advance somehow. Any input would be very much appreciated! 
Answer: These are the same issues I struggled with, and the resolution is obviously just to have clear definitions, not hidden behind (what I find to be) the mysterious tilde/primed notation. I'll restrict attention just to the case of a scalar field. I'll first talk about the different variations of a scalar field, and then talk about variations of the Lagrangian, then the action. 1. Changes to Scalar fields. Fix a smooth oriented $n$-dimensional manifold $M$ (we interpret this as spacetime with $n=4$). For simplicity, let us just stick to real scalar fields, meaning smooth maps $\phi:M\to\Bbb{R}$. Before we get to the actual details, some preliminaries: Consider a smooth deformation $\Phi$ of $\phi$. This means we consider a smooth map $\Phi:I\times M\to \Bbb{R}$, where $I\subset \Bbb{R}$ is some open interval containing the origin, such that $\Phi(0,\cdot)=\phi(\cdot)$. It is of course tradition to write the values $\Phi(\epsilon, p)$ as $\Phi_{\epsilon}(p)$ for $(\epsilon,p)\in I\times M$. Consider a smooth vector field $X$ on $M$, and let $\Psi$ denote its flow (this is 'basic' ODE theory). The name "flow" really is apt here, because if you imagine a little marble being dropped in a river, it will follow a trajectory naturally as described by the flow of the water. More mathematically, the interpretation of the flow map $\Psi$ is that given a point $p\in M$ and sufficiently small $|\epsilon|$, the quantity $\Psi_{\epsilon}(p)$ can be thought of as "where the point $p$ ends up $\epsilon$ units of time later, if it is left under the influence of the vector field $X$". It is standard terminology to call the vector field $X$ the "infinitesimal generator of $\Psi$". Here the adjective "infinitesimal" refers to the fact that this is at the level of tangent spaces. In a local coordinate chart $(U, (x^1,\dots, x^n))$, we can write the vector field as $X=X^{\mu}\frac{\partial}{\partial x^{\mu}}$. These $X^{\mu}$ are what you've written as $\delta x^{\mu}$. 
Now that we have the concept of a deformation and a flow, we can start talking precisely about the various ways things change. The first way to change a scalar field is to consider a deformation of it. At an "infinitesimal level" this gives rise to a variation $\delta \phi$. By definition $\delta\phi:M\to\Bbb{R}$ is a smooth map defined for each $p\in M$ as $\delta\phi(p):=\frac{d}{d\epsilon}\bigg|_{\epsilon=0}\Phi_{\epsilon}(p)$. The second way to investigate a change in a scalar field arises due to the effect of the vector field $X$ itself, which moves points in the manifold around. More precisely, this is the idea of the Lie derivative $L_X\phi$. By definition, this is a smooth map $M\to\Bbb{R}$, defined for each $p\in M$ as $(L_X\phi)(p):=\frac{d}{d\epsilon}\bigg|_{\epsilon=0}(\Psi_{\epsilon}^*\phi)(p)=\frac{d}{d\epsilon}\bigg|_{\epsilon=0}\phi(\Psi_{\epsilon}(p))$. In a local coordinate chart we can write $L_X\phi=X^{\mu}\frac{\partial \phi}{\partial x^{\mu}}$ (traditional physics texts write this term as $\delta x^{\mu}\partial_{\mu}\phi$). The third way is to combine both effects. We define $\Delta \phi:M\to\Bbb{R}$ as $(\Delta \phi)(p):=\frac{d}{d\epsilon}\bigg|_{\epsilon=0}(\Psi_{\epsilon}^*\Phi_{\epsilon})(p)=\frac{d}{d\epsilon}\bigg|_{\epsilon=0}\Phi_{\epsilon}(\Psi_{\epsilon}(p))$. Using the chain rule, you can show $\Delta \phi=\delta \phi+L_X\phi$ (as a hint for how to efficiently prove this, fix a point $p\in M$ and consider the function $h(s,t)=(\Psi_s^*\Phi_t)(p)=\Phi_t(\Psi_s(p))$. This is a smooth function of two real variables, defined on a small open set around the origin. The goal is to calculate $\frac{d}{d\epsilon}\bigg|_{\epsilon=0}h(\epsilon,\epsilon)$, which by the chain rule is just $\frac{\partial h}{\partial s}(0,0)+\frac{\partial h}{\partial t}(0,0)=\frac{d}{ds}\bigg|_{s=0}h(s,0)+\frac{d}{dt}\bigg|_{t=0}h(0,t)$, and recall $\Psi_0=\text{id}_M$ and $\Phi_0=\phi$). Hopefully this answers your question of where things are evaluated. 
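The chain-rule decomposition $\Delta\phi=\delta\phi+L_X\phi$ can be sanity-checked symbolically in one dimension. The field, deformation, and vector field below are my own illustrative choices for the check, not anything from the question:

```python
import sympy as sp

eps, x = sp.symbols('epsilon x')

# Illustrative choices (assumptions for the check, not from the post):
phi = sp.sin(x)            # background scalar field phi(x)
Phi = phi + eps * x**2     # a deformation Phi_eps with Phi_0 = phi
flow = x * sp.exp(eps)     # flow Psi_eps of the vector field X = x d/dx

# Total variation: differentiate Phi_eps(Psi_eps(x)) at eps = 0
total = sp.diff(Phi.subs(x, flow), eps).subs(eps, 0)

# Separate pieces: deformation part and Lie-derivative part
delta_phi = sp.diff(Phi, eps).subs(eps, 0)   # delta phi = x**2
lie = x * sp.diff(phi, x)                    # L_X phi = X dphi/dx = x*cos(x)

assert sp.simplify(total - (delta_phi + lie)) == 0
```

The assertion confirms that differentiating the combined change at $\epsilon=0$ gives exactly the sum of the two separate effects.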
We have $\Delta \phi=\delta\phi+L_X\phi$; this is an equality of smooth functions on $M$. You evaluate everything at the same point $p$. This equation is just telling you that the total change in the scalar field $\phi$ arising from the deformation and the vector field's flow moving points around is simply the sum of the two separate effects (by the chain rule). You next ask about things like $\Delta (\partial_{\mu}\phi)$ and so on, but if you just stick to the $\Psi_{\epsilon}$ and its pullback, and to the deformation $\Phi_{\epsilon}$, this makes things much clearer (for me anyway), and you'll never even have to write things like $\Delta(\partial_{\mu}\phi)$. 2. Changes to Lagrangian. One technical detail is that rather than considering the Lagrangian as a real-valued function, it is much more convenient to consider it as being $n$-form valued, i.e. to include $dx^1\wedge \cdots \wedge dx^n$ as part of its definition. So, let $\Lambda$ denote this object; it eats scalar fields and their derivatives and outputs $n$-forms. So, in a local coordinate chart $(U,(x^1,\dots, x^n))$ and points $p\in U$, we can write \begin{align} \Lambda_{\phi}(p)&:=\Lambda(\phi(p),\partial_{\mu}\phi(p)):=\mathcal{L}(\phi(p),\partial_{\mu}\phi(p))\,(dx^1\wedge \cdots \wedge dx^n)(p) \end{align} So, $\Lambda_{\phi}$ is a perfectly good object to integrate on the oriented manifold $M$ (remember that $n$-forms should be integrated on oriented $n$-manifolds). Again, there are three ways to change the Lagrangian: Simply plug in the deformed field, i.e. consider $\Lambda_{\Phi_{\epsilon}}$. Plug in the original field $\phi$ but consider pullback by the flow map $\Psi_{\epsilon}$, i.e. $\Psi_{\epsilon}^*(\Lambda_{\phi})$. Do both of the above steps by considering $\Psi_{\epsilon}^*(\Lambda_{\Phi_{\epsilon}})$. Again, for each sufficiently small $\epsilon$, the three procedures above give us a new $n$-form on $M$. We can differentiate at $\epsilon=0$ to obtain three variations of the Lagrangian. 
The first we can denote $\delta(\Lambda_{\phi})$ (variation due to plugging in the deformed field), the second is by definition the Lie derivative $L_X(\Lambda_{\phi})$ (the variation due to the vector field's flow moving points around), and the final one we can denote $\Delta(\Lambda_{\phi})$, and it equals the sum $\Delta(\Lambda_{\phi})=\delta(\Lambda_{\phi})+L_X(\Lambda_{\phi})$ (the reason for the sum is the same chain-rule reasoning as above). Connecting back to more classical notation, writing $\tilde{\mathcal{L}}(\tilde{x})$ means you're considering the third type of change $\Psi_{\epsilon}^*(\Lambda_{\Phi_{\epsilon}})$. Why? The tilde in $\tilde{\mathcal{L}}$ denotes that we're plugging the deformed fields into the Lagrangian (so something like $\tilde{\mathcal{L}}(p)$ is shorthand for $\mathcal{L}(\Phi_{\epsilon}(p),\partial\Phi_{\epsilon}(p))$) and the final $(\tilde{x})$ indicates that we should do a pullback $\Psi_{\epsilon}^*$ to capture the effect of the vector field moving things around. The expression $\tilde{\mathcal{L}}(x)$ is taken to mean you plug in just the deformed fields, i.e. what I have denoted as $\Lambda_{\Phi_{\epsilon}}$ (the $(x)$ rather than $(\tilde{x})$ tells us we don't do any pullback via the vector field's flow). 3. Changes to Action I'm sure you can guess what I'm about to say here. For each small enough $\epsilon$, and "nice" open set $\Omega\subset M$ (say for example having compact closure with smooth boundary), we can consider three types of integrals: \begin{align} \int_{\Omega}\Lambda_{\Phi_{\epsilon}}\quad\text{or}\quad\int_{\Omega}\Psi_{\epsilon}^*(\Lambda_{\phi})\quad \text{or}\quad \int_{\Omega}\Psi_{\epsilon}^*(\Lambda_{\Phi_{\epsilon}}). \end{align} In words, we take the three types of varied Lagrangians (either by virtue of the scalar field being deformed, or by the vector field moving points around, or both), and integrate the resulting $n$-form over a given set $\Omega$. 
I hope you realize now that there's no need to go down to the level of coordinates and look at the images of the set $x[\Omega]$ or $x'[\Omega]$ or whatever. The general calculation of course takes both effects into account, so we focus on the third integral. The resulting total variation in the action is \begin{align} (\Delta S)(\phi; \Omega)&:=\frac{d}{d\epsilon}\bigg|_{\epsilon=0}\int_{\Omega}\Psi_{\epsilon}^*(\Lambda_{\Phi_{\epsilon}})\\ &=\int_{\Omega}\frac{d}{d\epsilon}\bigg|_{\epsilon=0}\Psi_{\epsilon}^*(\Lambda_{\Phi_{\epsilon}})\\ &=\int_{\Omega}\Delta (\Lambda_{\phi})\\ &=\int_{\Omega}\delta(\Lambda_{\phi})+L_X(\Lambda_{\phi}) \end{align} These are exactly the two terms which one arrives at. At this stage calculating $\delta(\Lambda_{\phi})$ is the usual Euler-Lagrange type calculation. The term $L_X(\Lambda_{\phi})$ can be massaged slightly if you invoke Cartan's magic formula that the Lie derivative on differential forms is given by $L_X=d\iota_X+\iota_Xd$, where $\iota_X$ denotes the interior product and $d$ is the exterior derivative. Now, since $\Lambda_{\phi}$ is an $n$-form on an $n$-manifold, its exterior derivative vanishes, so we just get $L_X(\Lambda_{\phi})=d(\iota_X \Lambda_{\phi})\equiv d(X\,\,\lrcorner \,\,\Lambda_{\phi})$, where $\lrcorner$ is just another symbol for the interior product. A straightforward coordinate calculation then shows that this term is exactly $\frac{\partial (\mathcal{L}(x)X^{\mu})}{\partial x^{\mu}}\, dx^1\wedge \cdots \wedge dx^n$; maybe you'll find this answer of mine helpful in carrying out the coordinate calculations with exterior derivatives and interior products. 4. Computations in Local Coordinates. First, I'll write out $\delta(\Lambda_{\phi})$ in local coordinates $(U,(x^1,\dots, x^n))$. 
Fix a point $p\in U$, then we get: \begin{align} \frac{d}{d\epsilon}\bigg|_{\epsilon=0}\Lambda_{\Phi_{\epsilon}}(p)&=\frac{d}{d\epsilon}\bigg|_{\epsilon=0}\mathcal{L}\left(\Phi_{\epsilon}(p), \partial_{\mu}\Phi_{\epsilon}(p)\right)\, (dx^1\wedge \cdots \wedge dx^n)(p)\\ &=\left[\frac{\partial \mathcal{L}}{\partial \phi}\bigg|_{(\phi(p),\partial\phi(p))}\cdot \frac{d}{d\epsilon}\bigg|_{\epsilon=0}\Phi_{\epsilon}(p) + \frac{\partial \mathcal{L}}{\partial(\partial_{\mu}\phi)}\bigg|_{(\phi(p),\partial\phi(p))}\cdot \frac{d}{d\epsilon}\bigg|_{\epsilon=0}\partial_{\mu}\Phi_{\epsilon}(p)\right](dx^1\wedge\cdots \wedge dx^n)(p)\\ &=\left[\frac{\partial \mathcal{L}}{\partial \phi}\bigg|_{(\phi(p),\partial\phi(p))}\cdot\delta\phi(p)+\frac{\partial \mathcal{L}}{\partial(\partial_{\mu}\phi)}\bigg|_{(\phi(p),\partial\phi(p))}\cdot \partial_{\mu}(\delta\phi)(p)\right]\,(dx^1\wedge\cdots \wedge dx^n)(p). \end{align} The first line is just the chain rule, and the second line used the definition of $\delta\phi$ in the first term, and in the second term I used the fact that for smooth enough functions, derivatives commute, which is why I could swap the $\partial_{\mu}$ with $\frac{d}{d\epsilon}\bigg|_{\epsilon=0}$. Now, we can do the second term $L_X(\Lambda_{\phi})$. Here, I won't assume any familiarity with Cartan's formula or anything. We'll just work as much as possible from the definitions. One thing you should note, however, is that pullback of differential $n$-forms is different from pullback of functions, in the sense that we don't just do composition. We also have to deal with the $dx^1\wedge \cdots\wedge dx^n$ term; but this term transforms with the Jacobian determinant term, as you've probably seen in other courses. Let us write \begin{align} \Lambda_{\phi}=\mathcal{L}_{\phi}\,dx^1\wedge \cdots \wedge dx^n \end{align} so $\mathcal{L}_{\phi}:U\to\Bbb{R}$ is defined by plugging in the field: $\mathcal{L}_{\phi}(p)=\mathcal{L}(\phi(p),\partial_{\mu}\phi(p))$. 
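The chain-rule expansion of $\delta(\Lambda_{\phi})$ above can be verified on a toy one-dimensional example; the Lagrangian $\mathcal{L}=\frac{1}{2}(\phi')^2-\frac{1}{2}\phi^2$ and the variation $\eta$ below are my own choices for the check:

```python
import sympy as sp

eps, x = sp.symbols('epsilon x')
phi = sp.Function('phi')(x)
eta = sp.Function('eta')(x)        # the variation delta phi (my choice)

# Toy Lagrangian density: L = (phi')^2/2 - phi^2/2
def L(f):
    return sp.diff(f, x)**2 / 2 - f**2 / 2

# Differentiate L(phi + eps*eta) at eps = 0 ...
delta_L = sp.diff(L(phi + eps * eta), eps).subs(eps, 0)

# ... and compare with (dL/dphi)*(delta phi) + (dL/dphi')*(delta phi)'
expected = -phi * eta + sp.diff(phi, x) * sp.diff(eta, x)

assert sp.simplify(delta_L - expected) == 0
```

The check confirms the usual Euler-Lagrange-type expansion: the variation hits the field slot and the derivative slot, and nothing else.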
Now, pullback behaves as follows: \begin{align} \Psi_{\epsilon}^*(\Lambda_{\phi})&=\Psi_{\epsilon}^*(\mathcal{L}_{\phi}\,dx^1\wedge \cdots \wedge dx^n)\\ &=(\Psi_{\epsilon}^*\mathcal{L}_{\phi})\cdot \Psi_{\epsilon}^*(dx^1\wedge \cdots \wedge dx^n)\\ &=(\mathcal{L}_{\phi}\circ \Psi_{\epsilon}) \cdot \det\left(\frac{\partial (x^i\circ\Psi_{\epsilon})}{\partial x^j}\right)\, dx^1\wedge \cdots \wedge dx^n \end{align} So, if we now want to differentiate with respect to $\epsilon$ at $\epsilon=0$, we apply the product rule (we have a product of functions of $\epsilon$). The first term's derivative is, as I discussed in section 1, the definition of the Lie derivative $L_X(\mathcal{L}_{\phi})$, which simplifies to $X^{\mu}\frac{\partial \mathcal{L}_{\phi}}{\partial x^{\mu}}$. At $\epsilon=0$, the second term is the determinant of the identity matrix, so $1$. Next, the derivative of the second term is \begin{align} \frac{d}{d\epsilon}\bigg|_{\epsilon=0}\det\left(\frac{\partial (x^i\circ\Psi_{\epsilon})}{\partial x^j}\right) &=\text{trace}\left(\frac{d}{d\epsilon}\bigg|_{\epsilon=0}\frac{\partial (x^i\circ \Psi_{\epsilon})}{\partial x^j}\right)=\text{trace}\left(\frac{\partial X^i}{\partial x^j}\right)=\frac{\partial X^{\mu}}{\partial x^{\mu}} \end{align} where in the middle, I again used commutativity of derivatives for smooth functions to swap $\frac{d}{d\epsilon}$ with $\frac{\partial}{\partial x^{j}}$. So, putting all of this together, we get \begin{align} L_X(\Lambda_{\phi})=\frac{d}{d\epsilon}\bigg|_{\epsilon=0}\Psi_{\epsilon}^*(\Lambda_{\phi})&=\left[X^{\mu}\frac{\partial \mathcal{L}_{\phi}}{\partial x^{\mu}}\cdot 1+ \mathcal{L}_{\phi}\cdot \frac{\partial X^{\mu}}{\partial x^{\mu}}\right]\,dx^1\wedge \cdots \wedge dx^n\\ &=\frac{\partial (\mathcal{L}_{\phi} X^{\mu})}{\partial x^{\mu}}\,dx^1\wedge \cdots \wedge dx^n. \end{align}
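This final divergence identity can also be checked in one dimension, where the pullback of $f(x)\,dx$ by the flow is $f(\Psi_\epsilon(x))\,\Psi_\epsilon'(x)\,dx$. The vector field below is my own choice for the check:

```python
import sympy as sp

eps, x = sp.symbols('epsilon x')
f = sp.Function('f')(x)      # Lagrangian density, left as a generic function

X = x                        # vector field X = x d/dx (my choice)
flow = x * sp.exp(eps)       # its flow Psi_eps

# Pullback of the 1-form f(x) dx: f(Psi_eps(x)) * dPsi_eps/dx
pullback = f.subs(x, flow) * sp.diff(flow, x)

# Lie derivative = d/d(eps) of the pullback at eps = 0
lie = sp.diff(pullback, eps).subs(eps, 0)

# Divergence form d/dx (f * X) from the coordinate computation
divergence = sp.diff(f * X, x)

assert sp.simplify(lie - divergence) == 0
```

Both sides come out to $f(x)+x f'(x)$, i.e. the Lie derivative of the top form is indeed the divergence term.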
{ "domain": "physics.stackexchange", "id": 86052, "tags": "lagrangian-formalism, symmetry, action, noethers-theorem, variational-calculus" }
How to calculate bicarbonate and carbonate from total alkalinity
Question: It's been a long time since I did my chemistry classes and I'm currently trying to analyze groundwater samples for hydrogeology purposes. I would like to evaluate carbonate and bicarbonate concentrations from groundwater samples, but I only have values of total alkalinity as $\ce{CaCO3}$, $\mathrm{pH}$, and temperature. Is it possible? Some of the $\mathrm{pH}$ values are above 8.3. I know that: If $\mathrm{pH}$ is above 8.3: $[\mathrm{alk}_{tot}]=[\ce{HCO3-}]+2[\ce{CO3^2-}]+[\ce{OH-}]-[\ce{H+}]$ If $\mathrm{pH}$ is under 8.3: $[\mathrm{alk}_{tot}]=[\ce{HCO3-}]+[\ce{OH-}]-[\ce{H+}]$ With the $\mathrm{pH}$, I can calculate $[\ce{OH-}]$ and $[\ce{H+}]$. But how can I calculate $[\ce{HCO3-}]$ and $[\ce{CO3^2-}]$? Thank you! Answer: If I understood your question correctly, you have solutions where you know there is a given amount of calcium carbonate dissolved, and would like to know the distribution of this carbonate between all the species present. This proportion is commonly referred to as the alpha ($\alpha$) for a given species, which varies from 0 to 1 (0% - 100%). From your question, I can make some assumptions: As a groundwater sample, any solids dissolved are very diluted, so we don't need to worry about ionic strength; pH is not fixed; Temperature is not fixed, but I will assume it's close to room temperature; As other components are not mentioned, I will assume all carbonate comes from calcium carbonate. Carbonic acid, $\ce{H2CO3}$, has two ionizable hydrogens, so it may assume three forms: the free acid itself, the bicarbonate ion $\ce{HCO3-}$ (first-stage ionized form), and the carbonate ion $\ce{CO3^2-}$ (second-stage ionized form). The respective proportions in comparison with the total concentration of calcium carbonate dissolved are $\alpha0$, $\alpha1$ and $\alpha2$. They must sum to 1 (100%), as in chemical reactions matter is neither created nor destroyed, only changing between forms. 
When the calcium carbonate dissolves, an equilibrium is established between its three forms, expressed by the respective equilibrium equations: First stage: $$\ce{H2O + H2CO3 <=> H3O+ + HCO3-}$$ $$K1 = \frac{\ce{[H3O+][HCO3-]}}{\ce{[H2CO3]}} \approx 4.47*10^-7 $$ Second stage: $$\ce{H2O + HCO3- <=> H3O+ + CO3^2-}$$ $$K2 = \frac{\ce{[H3O+][CO3^2-]}}{\ce{[HCO3-]}} \approx 4.69*10^-11 $$ You can also write an equation for the overall reaction, by summing the stages (and multiplying the respective equilibrium constants): $$\ce{2H2O + H2CO3 <=> 2H3O+ + CO3^2-}$$ $$K1K2 = \frac{\ce{[H3O+]^2[CO3^2-]}}{\ce{[H2CO3]}}$$ Analysing our system, to give it a full treatment: if we know the solution pH, we can calculate $\ce{[H3O+]}$. So we are left with three unknown variables, $\ce{[H2CO3]}$, $\ce{[HCO3-]}$ and $\ce{[CO3^2-]}$. But so far we have only two independent mathematical equations, for K1 and K2 (the overall equation doesn't count as independent, as it's only the merging together of the other two). To solve it, we need at least one more independent equation, to match the number of unknowns. What we need is the equation for the material balance of the system. As we assumed all carbonate came from calcium carbonate, we can write: $$Cs = \ce{[CaCO3]} = \ce{[H2CO3] + [HCO3-] + [CO3^2-]}$$ Where Cs here stands for the known concentration of the salt, calcium carbonate. Now we can start replacing values taken from the equilibrium expressions into the material balance, isolating each unknown. For the bicarbonate, for example: $$Cs = \ce{[H2CO3] + [HCO3-] + [CO3^2-]}$$ $$Cs = \ce{\frac{[HCO3-][H3O+]}{K1} + [HCO3-] + \frac{K2[HCO3-]}{[H3O+]}}$$ $$Cs = \ce{\frac{[HCO3-][H3O+]^2 + K1[HCO3-][H3O+] + K1K2[HCO3-]}{K1[H3O+]}}$$ $$\frac{\ce{[HCO3-]}}{Cs} = \ce{\frac{K1[H3O+]}{[H3O+]^2 + K1[H3O+] + K1K2}} = \alpha1$$ So we got the expression for $\alpha1$, which has a curious structure: a fraction, where the denominator is a polynomial of degree 2, and the numerator its middle term. 
The same procedure can be repeated to find the expressions for the alphas of the other dissolved species. For the sake of brevity, I won't do it, but the final result will be: $$\alpha0 = \frac{\ce{[H2CO3]}}{Cs} = \ce{\frac{[H3O+]^2}{[H3O+]^2 + K1[H3O+] + K1K2}}$$ $$\alpha2 = \frac{\ce{[CO3^2-]}}{Cs} = \ce{\frac{K1K2}{[H3O+]^2 + K1[H3O+] + K1K2}}$$ Note that an interesting pattern emerges. The expressions for the remaining two species have the same structure, just changing the term that goes in the numerator. With the expressions for all species, it's helpful to use a spreadsheet to automate the calculations for an entire range of pH values, to grasp in a visual way what happens with carbonates as pH changes. I did just that; look at the results (here the spreadsheet, for whoever wants to download and play with it): We see that at lower pH the predominant form for carbonate is the free carbonic acid. A bit over pH 6 the bicarbonate ion takes over, and reigns up to a pH a bit over 10, from where the fully ionized carbonate ion takes over. For a given pH, the concentration of each species can be computed by multiplying the respective $\alpha$ by the concentration of total calcium carbonate originally present. The plot that looks like an "XX" also allows us to see an interesting property of carbonates. Nowhere in the plot will you find a pH value where all three species are present in significant amounts. In the lower pH region you can find both bicarbonate and carbonic acid. But carbonate only shows up when carbonic acid goes away. It's like the uncomfortable situation where you have two close friends who both hate each other. In a given moment I can see you in a room talking with either friend, but I will never see you three in the same room, or both friends of yours. The dividing line is close to the pH 8.3 you mentioned in your question. However, that sad situation has an upside. It makes the problem easier to calculate. The full treatment I gave to this problem was indeed overkill. 
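The three $\alpha$ expressions are easy to turn into a small script (the same calculation the spreadsheet does); the pH values used in the sanity checks are my own picks:

```python
# Fraction of each carbonate species as a function of pH,
# using the K1 and K2 values quoted above.
K1 = 4.47e-7
K2 = 4.69e-11

def alphas(pH):
    """Return (alpha0, alpha1, alpha2) for H2CO3, HCO3-, CO3^2-."""
    h = 10.0 ** (-pH)
    denom = h**2 + K1 * h + K1 * K2
    return h**2 / denom, K1 * h / denom, K1 * K2 / denom

# Sanity checks: the fractions always sum to 1, and each species
# dominates in its expected pH region.
for pH in (4.0, 7.5, 12.0):
    assert abs(sum(alphas(pH)) - 1.0) < 1e-9

assert alphas(4.0)[0] > 0.99    # free acid at low pH
assert alphas(7.5)[1] > 0.9     # bicarbonate near neutral
assert alphas(12.0)[2] > 0.97   # carbonate at high pH
```

Multiplying each fraction by the total dissolved carbonate gives the individual concentrations, exactly as described above.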
If I have three species, but only two show up together at any given time, I can "forget" I'm dealing with a diprotic acid. I need only the dividing line I've found, around pH 8.3. If I'm above it, the free carbonic acid concentration is zero, and I have to deal only with the pair bicarbonate/carbonate, pretending the bicarbonate anion is just a monoprotic acid. From the equilibrium, we have: $$\ce{[H3O+]} = \frac{\ce{K2[HCO3-]}}{\ce{[CO3^2-]}}$$ Or in logarithmic form: $$pH = pK2 + log(\frac{\ce{[CO3^2-]}}{[HCO3-]})$$ This is the old Henderson–Hasselbalch equation you have surely heard of before. As we know the pH and K2, we can calculate the ratio between carbonate and bicarbonate. On the other side, if I'm below my dividing line near 8.3, the carbonate ion concentration is zero, and now I have to deal only with the pair carbonic acid/bicarbonate, pretending carbonic acid is just another monoprotic acid. From the equilibrium, we have: $$\ce{[H3O+]} = \frac{\ce{K1[H2CO3]}}{\ce{[HCO3-]}}$$ Or in logarithmic form: $$pH = pK1 + log(\frac{\ce{[HCO3-]}}{[H2CO3]})$$ As we know the pH and K1, we can calculate the ratio between carbonic acid and bicarbonate. If you want to study such calculations in depth, I recommend this book: Butler, James N. Ionic Equilibrium: Solubility and pH Calculations. John Wiley & Sons, 1998.
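Both simplified Henderson–Hasselbalch ratios can be computed directly; a minimal sketch, with pK values derived from the constants above and example pH values of my own choosing:

```python
import math

K1 = 4.47e-7
K2 = 4.69e-11
pK1 = -math.log10(K1)   # about 6.35
pK2 = -math.log10(K2)   # about 10.33

def carbonate_over_bicarbonate(pH):
    """[CO3^2-]/[HCO3-] from the second-stage equilibrium (pH above the line)."""
    return 10.0 ** (pH - pK2)

def acid_over_bicarbonate(pH):
    """[H2CO3]/[HCO3-] from the first-stage equilibrium (pH below the line)."""
    return 10.0 ** (pK1 - pH)

# e.g. at pH 9 carbonate is still a small fraction of bicarbonate,
# while at pH 7 some free carbonic acid remains alongside bicarbonate.
assert carbonate_over_bicarbonate(9.0) < 0.1
assert 0.1 < acid_over_bicarbonate(7.0) < 0.3
```

Combined with the measured total alkalinity, these ratios let you back out the individual concentrations in the appropriate pH regime.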
{ "domain": "chemistry.stackexchange", "id": 13695, "tags": "water, ph, buffer" }
HELP unable to launch roscore
Question: Hi, I am having trouble launching roscore. I am running Ubuntu 14.04 in a VirtualBox VM. This is after rebooting the machine, so I have no other ROS processes running. I am able to ping sn-VirtualBox2 just fine. I have previously had no problems running ROS. Any insights would be appreciated. sn:/opt/ros/indigo$ roscore ... logging to /home/sn/.ros/log/4f95944c-d6b2-11e6-8e27-0800271175be/roslaunch-sn-VirtualBox2-2858.log Checking log directory for disk usage. This may take awhile. Press Ctrl-C to interrupt Done checking log file disk usage. Usage is <1GB. started roslaunch server http://sn-VirtualBox2:39912/ ros_comm version 1.11.20 SUMMARY ======== PARAMETERS * /rosdistro: indigo * /rosversion: 1.11.20 NODES roscore cannot run as another roscore/master is already running. Please kill other roscore/master processes before relaunching. The ROS_MASTER_URI is http://sn-VirtualBox2:11311/ The traceback for the exception was written to the log file sn:/opt/ros/indigo$ Here is the log file: [roslaunch][INFO] 2017-01-09 12:50:18,186: Checking log directory for disk usage. This may take awhile. Press Ctrl-C to interrupt [roslaunch][INFO] 2017-01-09 12:50:18,190: Done checking log file disk usage. Usage is <1GB. 
[roslaunch][INFO] 2017-01-09 12:50:18,190: roslaunch starting with args ['roscore', '--core'] [roslaunch][INFO] 2017-01-09 12:50:18,190: roslaunch env is {'MANDATORY_PATH': '/usr/share/gconf/ /ubuntu.mandatory.path', 'ROS_DISTRO': 'indigo', 'ROS_LOG_FILENAME': '/home/sn/.ros/log/3a8f14ce-d6ad-11e6-872a-0800271175be/roslaunch-sn-VirtualBox2-2777.log', 'XDG_GREETER_DATA_DIR': '/var/lib/lightdm-data/sn', 'GNOME_DESKTOP_SESSION_ID': 'this-is-deprecated', 'GTK_IM_MODULE': 'ibus', 'LESSOPEN': '| /usr/bin/lesspipe %s', 'PKG_CONFIG_PATH': '/opt/ros/indigo/lib/pkgconfig', 'ROSLISP_PACKAGE_DIRECTORIES': '', 'CPATH': '/opt/ros/indigo/include', 'LOGNAME': 'sn', 'WINDOWID': '29360139', 'PATH': '/opt/ros/indigo/bin:/opt/ghc/7.8.4/bin:/opt/cabal/1.22/bin:/home/sn/Software/eclipse/cpp-neon/eclipse:/home/sn/Software/jre1.8.0_101/bin:/home/sn/.opam/4.02.1/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games', 'XDG_VTNR': '7', 'GNOME_KEYRING_CONTROL': '/run/user/1000/keyring-ApksQp', 'CMAKE_PREFIX_PATH': '/opt/ros/indigo', 'LD_LIBRARY_PATH': '/opt/ros/indigo/lib', 'LANG': 'en_US.UTF-8', 'TERM': 'xterm', 'SHELL': '/bin/bash', 'XDG_SESSION_PATH': '/org/freedesktop/DisplayManager/Session0', 'XAUTHORITY': '/home/sn/.Xauthority', 'LANGUAGE': 'en_US', 'SESSION_MANAGER': 'local/sn-VirtualBox2:@/tmp/.ICE-unix/1699,unix/sn-VirtualBox2:/tmp/.ICE-unix/1699', 'SHLVL': '1', 'HACMS': '/home/sn/Projects/HACMS', 'QT_QPA_PLATFORMTHEME': 'appmenu-qt5', 'JOB': 'dbus', 'TEXTDOMAINDIR': '/usr/share/locale/', 'TEXTDOMAIN': 'im-config', 'QT4_IM_MODULE': 'xim', 'CLUTTER_IM_MODULE': 'xim', 'SESSION': 'ubuntu', 'MANPATH': ':/home/sn/.opam/4.02.1/man', 'SESSIONTYPE': 'gnome-session', 'XMODIFIERS': '@im=ibus', 'ROS_ETC_DIR': '/opt/ros/indigo/etc/ros', 'GPG_AGENT_INFO': '/run/user/1000/keyring-ApksQp/gpg:0:1', 'HOME': '/home/sn', 'SELINUX_INIT': 'YES', 'COMPIZ_BIN_PATH': '/usr/bin/', 'CAML_LD_LIBRARY_PATH': '/home/sn/.opam/4.02.1/lib/stublibs', 'XDG_RUNTIME_DIR': 
'/run/user/1000', 'INSTANCE': '', 'PYTHONPATH': '/opt/ros/indigo/lib/python2.7/dist-packages', 'COMPIZ_CONFIG_PROFILE': 'ubuntu', 'SSH_AUTH_SOCK': '/run/user/1000/keyring-ApksQp/ssh', 'VTE_VERSION': '3409', 'ROS_ROOT': '/opt/ros/indigo/share/ros', 'GDMSESSION': 'ubuntu', 'IM_CONFIG_PHASE': '1', 'OCAML_TOPLEVEL_PATH': '/home/sn/.opam/4.02.1/lib/toplevel', 'LESSCLOSE': '/usr/bin/lesspipe %s %s', 'GNOME_KEYRING_PID': '', 'XDG_SEAT_PATH': '/org/freedesktop/DisplayManager/Seat0', 'ROS_PACKAGE_PATH': '/opt/ros/indigo/share:/opt/ros/indigo/stacks', 'XDG_CURRENT_DESKTOP': 'Unity', 'XDG_SESSION_ID': 'c1', 'DBUS_SESSION_BUS_ADDRESS': 'unix:abstract=/tmp/dbus-owPEPrnTQZ', '_': '/opt/ros/indigo/bin/roscore', 'DEFAULTS_PATH': '/usr/share/gconf/ubuntu.default.path', 'PERL5LIB': '/home/sn/.opam/4.02.1/lib/perl5:', 'DESKTOP_SESSION': 'ubuntu', 'UPSTART_SESSION': 'unix:abstract=/com/ubuntu/upstart-session/1000/1479', 'XDG_CONFIG_DIRS': '/etc/xdg/xdg-ubuntu:/usr/share/upstart/xdg:/etc/xdg', 'GTK_MODULES': 'overlay-scrollbar:unity-gtk-module', 'XDG_SEAT': 'seat0', 'OLDPWD': '/home/sn', 'GDM_LANG': 'en_US', 'XDG_DATA_DIRS': '/usr/share/ubuntu:/usr/share/gnome:/usr/local/share/:/usr/share/', 'PWD': '/opt/ros/indigo', 'QT_IM_MODULE': 'ibus', 'ROS_MASTER_URI': 'http://sn-VirtualBox2:11311/', 'COLORTERM': 'gnome-terminal', 'DISPLAY': ':0', 'XDG_MENU_PREFIX': 'gnome-', 'LS_COLORS': 
'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36:', 'USER': 'sn'} [roslaunch][INFO] 2017-01-09 12:50:18,191: starting in server mode [roslaunch.parent][INFO] 2017-01-09 12:50:18,191: starting roslaunch parent run [roslaunch][INFO] 2017-01-09 12:50:18,191: loading roscore config file /opt/ros/indigo/etc/ros/roscore.xml [roslaunch][INFO] 2017-01-09 12:50:18,312: Added core node of type [rosout/rosout] in namespace [/] [roslaunch.pmon][INFO] 2017-01-09 12:50:18,312: start_process_monitor: creating ProcessMonitor [roslaunch.pmon][INFO] 2017-01-09 12:50:18,312: created process monitor <ProcessMonitor(ProcessMonitor-1, initial daemon)> [roslaunch.pmon][INFO] 2017-01-09 12:50:18,313: start_process_monitor: ProcessMonitor started 
[roslaunch.parent][INFO] 2017-01-09 12:50:18,313: starting parent XML-RPC server [roslaunch.server][INFO] 2017-01-09 12:50:18,313: starting roslaunch XML-RPC server [roslaunch.server][INFO] 2017-01-09 12:50:18,313: waiting for roslaunch XML-RPC server to initialize [xmlrpc][INFO] 2017-01-09 12:50:18,313: XML-RPC server binding to 0.0.0.0:0 [xmlrpc][INFO] 2017-01-09 12:50:18,313: Started XML-RPC server [http://sn-VirtualBox2:36221/] [xmlrpc][INFO] 2017-01-09 12:50:18,314: xml rpc node: starting XML-RPC server [roslaunch][INFO] 2017-01-09 12:50:18,328: started roslaunch server http://sn-VirtualBox2:36221/ [roslaunch.parent][INFO] 2017-01-09 12:50:18,328: ... parent XML-RPC server started [roslaunch][INFO] 2017-01-09 12:50:18,329: master.is_running[http://sn-VirtualBox2:11311/] [roslaunch][ERROR] 2017-01-09 12:50:18,350: roscore cannot run as another roscore/master is already running. Please kill other roscore/master processes before relaunching. The ROS_MASTER_URI is http://sn-VirtualBox2:11311/ [roslaunch][ERROR] 2017-01-09 12:50:18,351: The traceback for the exception was written to the log file [roslaunch][ERROR] 2017-01-09 12:50:18,364: Traceback (most recent call last): File "/opt/ros/indigo/lib/python2.7/dist-packages/roslaunch/__init__.py", line 307, in main p.start() File "/opt/ros/indigo/lib/python2.7/dist-packages/roslaunch/parent.py", line 279, in start self.runner.launch() File "/opt/ros/indigo/lib/python2.7/dist-packages/roslaunch/launch.py", line 654, in launch self._setup() File "/opt/ros/indigo/lib/python2.7/dist-packages/roslaunch/launch.py", line 630, in _setup self._launch_master() File "/opt/ros/indigo/lib/python2.7/dist-packages/roslaunch/launch.py", line 391, in _launch_master raise RLException("roscore cannot run as another roscore/master is already running. \nPlease kill other roscore/master processes before relaunching.\nThe ROS_MASTER_URI is %s"%(m.uri)) RLException: roscore cannot run as another roscore/master is already running. 
Please kill other roscore/master processes before relaunching. The ROS_MASTER_URI is http://sn-VirtualBox2:11311/
[rospy.core][INFO] 2017-01-09 12:50:18,364: signal_shutdown [atexit]
Originally posted by sunking on ROS Answers with karma: 31 on 2017-01-09
Post score: 0
Answer: It seems like something else is listening on port 11311. You may want to double-check that another roscore isn't running (search running processes for "ros": ps aux | grep -i ros), and you can also look at which processes have open ports with netstat -tlnp
Originally posted by ahendrix with karma: 47576 on 2017-01-09
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by sunking on 2017-01-09: OK thanks, I was just doing that - I found that I had apache2 running, and when I killed that it was fine.
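The diagnosis from the accepted answer can also be scripted. The following is a small sketch (not from the original thread) that checks whether anything already accepts connections on roscore's default master port, 11311:

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if something already accepts connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        # connect_ex returns 0 on a successful connect, an errno otherwise
        return s.connect_ex((host, port)) == 0

# 11311 is the default port in ROS_MASTER_URI used by roscore
if port_in_use(11311):
    print("port 11311 is taken - find the culprit with: netstat -tlnp")
```

If the port is taken but no roscore is running, some unrelated service (apache2, in the original poster's case) has claimed it.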
{ "domain": "robotics.stackexchange", "id": 26679, "tags": "ros, roscore" }
Ubuntu ARM lacking /sys/devices/cape-bone-iio
Question: I'm trying to pull analog input from a BeagleBone Black using this tutorial. However, when I go to /sys/devices there is no cape-bone-iio. I have spoken with several other programmers, and one of them suggested that cape-bone-iio does not work with newer versions of Linux. However, downgrading could have a negative impact on the rest of the project. Is there any other solution?
Answer: I believe I encountered the exact same problem that you are encountering. The short/easy fix is to downgrade your kernel. Kernel versions newer than 3.8.x for Ubuntu ARM do not use device tree overlays and store these ADC values god knows where. You'll also notice that your tutorial is indeed using kernel 3.8.13, one that still involves device tree overlays and stores ADC values in cape-bone-iio. While it might not be exactly what you want, on my GitHub I have a step-by-step approach to getting ADC values (as part of using ROS) on the BBB. This includes how to downgrade your kernel and enable device tree overlays, and where the files for these values will be located. I would recommend taking a look at it for some guidance towards your specific solution. My Github
The steps for accomplishing this are directly on that page. I cannot post them here due to a lack of reputation and the number of links the guide entails. Another helpful hint would be to search for the name Robert C Nelson alongside BeagleBone Black; he has worked extensively with them and is involved in a multitude of blogs and tutorials that might help you find your solution. Hope all of this helps.
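Once a 3.8.x kernel with the overlay is in place, reading a channel reduces to reading a sysfs file. This is a minimal sketch, assuming the 3.8.x layout the answer describes (the glob pattern and the millivolt text encoding are assumptions based on that kernel's convention, not taken from the original post):

```python
import glob

# With the cape-bone-iio overlay loaded on a 3.8.x kernel, the ADC
# helper files appear under a path like this; the numeric suffixes
# vary between boards and boots, hence the globbing.
AIN_GLOB = "/sys/devices/ocp.*/helper.*/AIN{channel}"

def raw_to_volts(raw):
    """The helper files report the reading in millivolts (0-1800) as text."""
    return int(raw.strip()) / 1000.0

def read_ain(channel):
    """Return the voltage on one analog input channel, or None if the
    overlay is not loaded (e.g. on a newer kernel without cape-bone-iio)."""
    matches = glob.glob(AIN_GLOB.format(channel=channel))
    if not matches:
        return None
    with open(matches[0]) as f:
        return raw_to_volts(f.read())
```

A None return is exactly the symptom from the question: the sysfs entries simply do not exist on kernels later than 3.8.x.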
{ "domain": "robotics.stackexchange", "id": 2577, "tags": "beagle-bone, linux" }
How to Run GAMESS and Avogadro on Command Line?
Question: I always use GAMESS and Avogadro on my own laptop. Recently I installed them on our university supercomputer and started using them by logging in remotely. On the laptop everything was super easy, but unfortunately I cannot find any documentation for using them from the terminal command line (I mean sending a job and receiving the result, all from the terminal command line). It would be great if someone knows a good documentation or tutorial for this subject.
Answer: Contact the HPC helpdesk at your university or search for instructions compiled by them. We cannot know whether your local cluster uses PBS (TORQUE/MAUI), SLURM or anything else to reserve cores, nodes, memory and CPU time to run jobs.
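For illustration only, here is what a minimal batch script for a GAMESS job could look like, assuming the cluster happens to use SLURM, environment modules, and GAMESS's standard rungms wrapper; the module name, resource values, and file names are placeholders to replace with whatever your site's helpdesk prescribes:

```shell
#!/bin/bash
#SBATCH --job-name=gamess-water      # placeholder job name
#SBATCH --nodes=1
#SBATCH --ntasks=8                   # cores reserved for this job
#SBATCH --time=01:00:00              # walltime limit
#SBATCH --output=water.slurm.log     # scheduler log (not the GAMESS output)

# Load whatever module your site provides for GAMESS (name is a guess):
module load gamess

# rungms <input> <version> <ncpus>; GAMESS results land in water.log
rungms water.inp 00 "$SLURM_NTASKS" > water.log 2>&1
```

You would submit it with sbatch, watch it with squeue -u $USER, and inspect or scp the resulting water.log afterwards. A PBS/TORQUE cluster would use #PBS directives and qsub instead, which is why the answer points you to the local helpdesk first.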
{ "domain": "chemistry.stackexchange", "id": 2862, "tags": "quantum-chemistry, computational-chemistry" }
How to get topic message by time (similar to tf)?
Question: Hi, I want to be able to get a topic message by its time(stamp). tf provides similar functionality for transformations, but is there any implementation for other topics? For example, I want to get the message on topic '/foo' that arrived 2 seconds ago. Thanks, xaedes
Originally posted by xaedes on ROS Answers with karma: 13 on 2014-05-11
Post score: 0
Answer: A message filter cache should be able to do that: http://docs.ros.org/api/message_filters/html/c++/classmessage__filters_1_1Cache.html
Originally posted by dornhege with karma: 31395 on 2014-05-11
This answer was ACCEPTED on the original site
Post score: 1
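To make the answer concrete without requiring a ROS installation, here is a minimal plain-Python sketch of the idea behind such a cache: keep the last N (stamp, message) pairs sorted by stamp and answer "what was the newest message at or before time t?". The class name and API below are made up for illustration; in an actual ROS node you would use message_filters' Cache wrapped around a Subscriber instead.

```python
import bisect

class TimedCache:
    """Keep the last `size` (stamp, msg) pairs ordered by stamp and
    look up the newest message at or before a given time."""

    def __init__(self, size=100):
        self.size = size
        self.stamps = []  # sorted timestamps
        self.msgs = []    # messages, parallel to stamps

    def add(self, stamp, msg):
        # Insert in timestamp order (messages may arrive out of order),
        # then evict the oldest entry if the cache is over capacity.
        i = bisect.bisect(self.stamps, stamp)
        self.stamps.insert(i, stamp)
        self.msgs.insert(i, msg)
        if len(self.stamps) > self.size:
            self.stamps.pop(0)
            self.msgs.pop(0)

    def get_elem_before(self, stamp):
        """Newest cached message with stamp <= the given stamp, else None."""
        i = bisect.bisect_right(self.stamps, stamp)
        return self.msgs[i - 1] if i else None

# e.g. "the message on /foo from 2 seconds ago" becomes:
#   cache.get_elem_before(now - 2.0)
```

The real message_filters Cache works the same way conceptually, with the lookup keyed on the header stamps of the buffered messages.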
{ "domain": "robotics.stackexchange", "id": 17914, "tags": "ros, ros-hydro, topic" }