Cockroach-like insect identification (India)
Question: I just saw an insect in my room, and I have never seen anything like it ever before. I captured a photo, please see if you can identify it. It has got quite unique colours on its exodermis and it has 2 pairs of wings ( I was not able to get a picture of them) which are very similar in structure and position to those of a cockroach. In addition to that, it is able to fly, albeit for short distances. I live in Kolkata, India and the length of the insect is approx. 6-8 cm. Answer: It's a longhorn beetle (Cerambycidae). Most likely, it's a Red-spotted longhorn beetle (Batocera rufomaculata), also known as a mango tree borer, mango stem-borer and tropical fig borer. See this SE question pertaining to the same insect. From Biolib: Country check-list: China, India, Israel, Jordan, Laos, Lebanon, Madagascar, Malaysia, Mauritius, Myanmar, Pakistan, Puerto Rico, Sri Lanka, Syria, Thailand, Virgin Islands [Might Not be complete] Food: Ficus carica L. (Fig), Carica papaya L. (Papaya), Mangifera indica L. (Mango), Shorea robusta Gaertner f. (Sal Tree) It might also be B. parryi, which, similar to B. rufomaculata, is common in SE Asia (but which is absent from Western Asia or Africa where B. rufomaculata is also found). Since you live in India, both are possible. I will look for a key to determine how to differentiate the 2 species. Meanwhile, here is a list of related species in the Lamiinae subfamily for further reference. Also, a great resource (including keys) for wood boring beetles of the world!
{ "domain": "biology.stackexchange", "id": 5287, "tags": "zoology, species-identification, entomology, nomenclature" }
Making water spin a wheel
Question: Let's say that I have a wheel with 8 symmetrically arranged rectangular plates of area $A$, and the water in the river, with density $\rho$, moves at $v\: \mathrm{m/s}$. How do I calculate the angular velocity of the wheel? Answer: Assuming the wheel moves at the same speed as the water (i.e. neglecting any 'slip' of the water past the wheel), the angular velocity is given by: $$\omega = v / r$$ where $r$ is the radial distance from the centre of the wheel to the top of the water. In practice, there will be some water sliding past the wheel, depending upon the hydrodynamics of the plates and channel, so this is really an idealized first approximation.
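Under the no-slip assumption above, the calculation reduces to a one-liner; a minimal sketch (the numeric values for stream speed and wheel radius are invented for illustration):

```python
# Sketch of the ideal no-slip estimate from the answer: omega = v / r.
import math

def wheel_angular_velocity(v, r):
    """Angular velocity (rad/s) of a water wheel, assuming no slip:
    the plates move at the same speed as the water, so omega = v / r,
    where r is the distance from the axle to the water surface."""
    return v / r

v = 2.0   # stream speed in m/s (assumed value)
r = 1.5   # axle-to-water distance in m (assumed value)
omega = wheel_angular_velocity(v, r)
rpm = omega * 60 / (2 * math.pi)
print(f"omega = {omega:.3f} rad/s (~{rpm:.1f} rpm)")
```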
{ "domain": "physics.stackexchange", "id": 18513, "tags": "newtonian-mechanics, fluid-dynamics, rotational-kinematics" }
What is a Rotating Shadowband Radiometer, and how does it work?
Question: This answer explains that the device labeled "B" in the photo below is an "Ultraviolet Multifilter Rotating Shadowband Radiometer." What is a Rotating Shadowband Radiometer, and how does it work? Screenshot from https://vimeo.com/315754123 Answer: A rotating shadowband radiometer gives you the diffuse (indirect) radiance as well as the direct radiance. When the band is in the lower position, the sensor measures all the radiance from the sun, both direct and indirect. When the band is raised, it blocks the direct sunlight from reaching the sensor, giving an indirect-only measurement. The total radiance minus the indirect radiance gives you the direct radiance. In this case the meter is set up to capture the UV portion of the spectrum. "Multifilter" suggests that it can record radiance in several bands, so it would be tracking two or more wavelength ranges. The model described in this link supports six distinct frequencies, though none are in the UV band. The band rotates above the sensor, occluding the light from the sun; when the radiance is at its lowest during the rotation, you are typically capturing the point where the direct sunlight is being blocked by the band. When the band is not occluding the sun, you capture the highest radiance. The sampling rate would be set up in the datalogger or computer to record radiance values at specific time intervals. A short video of one in operation can be seen in this tweet.
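The subtraction described above is simple enough to sketch directly (the numeric values are invented, not instrument readings):

```python
# Sketch of the shadowband arithmetic: the instrument measures total
# (band lowered) and diffuse (band shading the sensor), and the direct
# component is recovered as total - diffuse.
def direct_component(total, diffuse):
    """Direct radiance from a pair of shadowband measurements."""
    return total - diffuse

total = 950.0    # band not occluding the sun (assumed value)
diffuse = 120.0  # band shading the sensor (assumed value)
print(direct_component(total, diffuse))  # 830.0
```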
{ "domain": "earthscience.stackexchange", "id": 1710, "tags": "measurements, sun, instrumentation" }
Shortest path in a dynamic tree with vertex updates
Question: There is a tree with $n$ nodes. All edges are of equal weight. The vertices of the tree can be of two types: 0 or 1. There are two types of queries: Set(X): change the given vertex X from type 0 to type 1; Dist(X): find the shortest path from X to a node of type 1 and return the length of this path. This will return zero if X is of type 1. The naive method would be to just maintain a tree and on every update change the type of the vertex; to answer queries of the second type, run a BFS starting at X and stop as soon as a vertex with type 1 is found. With this simple scheme, Set(X) would run in $O(1)$ time, but Dist(X) would take $O(n)$ time. However, in my application the number of queries and the number of nodes are both of the order of $10^5$, so the naive method is too slow. In particular, $O(n)$ time is too slow. Can somebody suggest a better algorithm for doing this? Answer: One can improve your naive method by augmenting the data structure. At each node in the tree, store an extra field that contains the distance to the closest descendant of type 1. Assume each node has a pointer to its parent. Let the depth of the tree be $d$. Now Set(X) takes $O(d)$ time: you traverse the parent pointers to visit each of X's ancestors and update their distance fields. Also Dist(X) takes $O(d)$ time: the distance field for X tells you the shortest path to a descendant of X, and by looking at the distance fields for the siblings of the nodes along the path from X to the root, you can find the shortest path to a non-descendant of X. Thus, both operations can be done in $O(d)$ time. If the tree is balanced, we'll have $d = O(\lg n)$, so both operations run in $O(\lg n)$ time -- a significant improvement over the naive method you mention. Of course, in the worst case the depth of the tree could be $\Theta(n)$, so this method won't help for such trees. But in practice many trees (e.g., random trees) tend to have depth $O(\lg n)$.
And if you need an algorithm that works for all trees, you might be able to take advantage of a heavy-light decomposition to achieve good worst-case running time bounds.
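A minimal sketch of the augmented-tree idea described above (names are my own, and instead of inspecting sibling fields explicitly, this variant takes a single minimum over ancestors, which is equivalent and gives the same O(depth) bounds):

```python
# Each node stores `mindist`, the distance to the closest type-1 node in
# its own subtree (infinity if none). Set and Dist both walk the parent
# pointers, so both run in O(depth).
import math

class Node:
    def __init__(self, parent=None):
        self.parent = parent
        self.mindist = math.inf  # distance to nearest type-1 node in subtree

def set_type1(x):
    """Set(X): mark x as type 1 and update ancestors' mindist fields."""
    x.mindist = 0
    k, a = 1, x.parent
    while a is not None and k < a.mindist:
        a.mindist = k          # x is now a's closest known type-1 descendant
        a, k = a.parent, k + 1

def dist(x):
    """Dist(X): min over ancestors a of (depth difference) + a.mindist.
    The true closest type-1 node is counted exactly at its LCA with x."""
    best, k, a = math.inf, 0, x
    while a is not None:
        best = min(best, k + a.mindist)
        a, k = a.parent, k + 1
    return best

# Small path a - b - c: mark c as type 1, then query a.
a = Node(); b = Node(a); c = Node(b)
set_type1(c)
print(dist(a), dist(c))  # 2 0
```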
{ "domain": "cs.stackexchange", "id": 5490, "tags": "algorithms, graphs, trees" }
Examples of Fat-Shattering Dimension
Question: What are some good examples of the analysis of a class's fat-shattering dimension? From (Alon et al.) I know that the fat-shattering dimension characterizes the learnability of real-valued function classes, but I couldn't find any proper examples of a function class together with a proof of a bound on its fat-shattering dimension. Answer: For $L$-Lipschitz functions on a metric space $(X,\rho)$ with $\epsilon$-packing number $M(\epsilon)$, the $\gamma$-shattering dimension is $M(2\gamma/L)$, as proved here: http://ieeexplore.ieee.org/document/6867374/
{ "domain": "cstheory.stackexchange", "id": 3983, "tags": "machine-learning, lg.learning, computing-over-reals" }
Searching a maze using DFS and BFS in Python 3
Question: I solved the problem posted in the Make School Trees and Maze article, which asks me to search a maze using DFS and BFS in Python. Here's the final output from PyGame. I would like to ask for a code review; I paste my code snippet below. Here's my DFS and BFS solution for solve_maze.py; please review the implementation. You can find my code for generating the maze here: https://github.com/Jeffchiucp/graph-maze-problem

import maze
import generate_maze
import sys
import random

BIT_SOLUTION = 0b0000010010010110

# Solve maze using Pre-Order DFS algorithm, terminate with solution
def solve_dfs(m):
    stack = []
    current_cell = 0
    visited_cells = 1
    while current_cell != m.total_cells - 1:
        print(current_cell)
        unvisited_neighbors = m.cell_neighbors(current_cell)
        if len(unvisited_neighbors) >= 1:
            # choose random neighbor to be new cell
            new_cell_index = random.randint(0, len(unvisited_neighbors) - 1)
            new_cell, compass_index = unvisited_neighbors[new_cell_index]
            # knock down wall between it and current cell using visited_cell
            m.visit_cell(current_cell, new_cell, compass_index)
            # push current cell to stack
            stack.append(current_cell)
            # set current cell to new cell
            current_cell = new_cell
            # add 1 to visited cells
            visited_cells += 1
        else:
            m.backtrack(current_cell)
            current_cell = stack.pop()
            print("run")
        m.refresh_maze_view()
    m.state = 'idle'

# Solve maze using BFS algorithm, terminate with solution
def solve_bfs(m):
    """
    create a queue
    set current cell to 0
    set in direction to 0b0000
    set visited cells to 0
    enqueue (current cell, in direction)
    while current cell not goal and queue not empty
        dequeue to current cell, in direction
        visit current cell with bfs_visit_cell
        add 1 to visited cells
        call refresh_maze_view to update visualization
        get unvisited neighbors of current cell using cell_neighbors, add to queue
    trace solution path and update cells with solution data using reconstruct_solution
    set state to 'idle'
    """
    queue = []
    cur_cell = 0
    in_direction = 0b0000
    visited_cells = 0
    queue.insert(0, (cur_cell, in_direction))
    while not cur_cell == len(m.maze_array) - 1 and len(queue) > 0:
        cur_cell, in_direction = queue.pop()
        m.bfs_visit_cell(cur_cell, in_direction)
        visited_cells += 1
        m.refresh_maze_view()
        neighbors = m.cell_neighbors(cur_cell)
        for neighbor in neighbors:
            queue.insert(0, neighbor)
    m.reconstruct_solution(cur_cell)
    m.state = "idle"

def print_solution_array(m):
    solution = m.solution_array()
    print('Solution ({} steps): {}'.format(len(solution), solution))

def main(solver='dfs'):
    current_maze = maze.Maze('create')
    generate_maze.create_dfs(current_maze)
    if solver == 'dfs':
        solve_dfs(current_maze)
    elif solver == 'bfs':
        solve_bfs(current_maze)
    while 1:
        maze.check_for_exit()
    return

if __name__ == '__main__':
    if len(sys.argv) > 1:
        main(sys.argv[1])
    else:
        main()

Answer: Kill the noise
It's great that the solution works, but it's full of elements that seem to serve no purpose, which makes it confusing and hard to read.
- visited_cells is modified but never used.
- BIT_SOLUTION is defined but never used.
- A comment like "# add 1 to visited cells" adds no value to code like "visited_cells += 1". Avoid writing such comments. The same goes for the "# Solve maze ..." comments.
- The doc comment """ ... """ in solve_bfs is inappropriate. Instead of useful documentation, it's pseudo-code of the implementation. It's unnecessary.
- The return statement is unnecessary at the end of a function.
- Why write 0b0000 instead of simply 0?

Confusion
I'm confused by the different terminating conditions in the two implementations. In one of them, reaching the goal is expressed as cur_cell == len(m.maze_array) - 1; in the other it's current_cell == m.total_cells - 1. It's best when there's one clear way to do something. I suggest changing the maze implementation so that the terminating condition can be expressed as m.is_goal(cell_id).

Encapsulation
The posted code is a client of the maze library (I guess your own).
It knows too much about how the maze is implemented. It's not good that it accesses implementation details such as m.maze_array and m.total_cells. It's not good that the client knows that the last cell in the implementation is the exit of the maze. It's not good that the client knows that the cells of the maze are stored in a list (maze_array). This also limits the kinds of mazes that can be modeled.
If you change the maze API so that clients can check whether they have reached the exit by calling m.is_goal(cell_id), then the maze library will be free to place the exit wherever it likes in its storage; it wouldn't need to be the last cell. The client also wouldn't know what data structure is used to store the cells of the maze, which again would give the maze library the freedom to use whatever it likes, and the freedom to change the underlying storage as needed, for example to something more efficient in a future release.

Style
You can replace len(v) > 0 and len(v) >= 1 with simply v, because a non-empty collection is truthy in Python. Instead of while 1:, it's more natural to write while True:.
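A hedged sketch of the is_goal suggestion; the internals shown here (a cell list with the exit stored last) are an assumption standing in for the real maze library, the point being that clients only ever call is_goal:

```python
# Encapsulating the goal check: the exit's location becomes a private
# implementation detail that the maze class is free to change later.
class Maze:
    def __init__(self, total_cells):
        self.maze_array = [0] * total_cells
        self._goal = total_cells - 1   # implementation detail, free to change

    def is_goal(self, cell_id):
        """True if cell_id is the exit; clients no longer touch maze_array."""
        return cell_id == self._goal

m = Maze(16)
print(m.is_goal(15), m.is_goal(0))  # True False
```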
{ "domain": "codereview.stackexchange", "id": 31147, "tags": "python, graph, breadth-first-search, depth-first-search" }
Cell targets of Glybera
Question: So we know that there is a first gene therapy drug in the market out there called Alipogene tiparvovec to address lipoprotein lipase deficiency (LPLD) at a genetic level. Does this genetic drug target all cell types (except germline) that expresses the LPL gene in the human body or simply the parenchymal cells that process fat? I have not been able to find information specific to Glybera. Answer: Glybera doesn't target all cells, and doesn't target fat processing cells. Glybera is adeno-associated virus 1, which normally infects skeletal muscles. Glybera is delivered by intramuscular injection, so the virus easily infects the muscle tissue. The infected muscle cells produce a variant of lipoprotein lipase that has better activity than normal LPL and secrete it into the bloodstream. Once in the blood LPL can interact with chylomicrons and make them smaller. Here is a paper about viral gene therapy for LPL deficiency, specifically about glybera, but I don't think it's open access.
{ "domain": "biology.stackexchange", "id": 2903, "tags": "molecular-genetics, gene-expression, human-genetics, biotechnology, lipids" }
Difference between J2000, FK5 and ICRS coordinate systems? Which one does the Yale Bright Star Catalog use?
Question: I have no background in astronomy. I have been wanting to write code to make star charts for a given location and time, which led to this question. I figured out that I would need to convert the coordinates of stars from the equatorial coordinate system to a horizontal coordinate system defined by a time and place of interest. I decided to use the Yale Bright Star Catalog for this purpose. From what I could gather, the Yale Bright Star Catalog does not explicitly mention the coordinate system it uses, though it frequently mentions the epoch J2000 and reports RA and Dec. So, it is my understanding that the coordinate system should be an equatorial one with J2000 as the equinox. As a novice, I could understand the horizontal, equatorial and ecliptic coordinate systems. However, when I started using an astronomy library to write the code, I realised that in practice there are equatorial coordinate systems such as FK5 and ICRS with minute differences, and that ICRS is the coordinate system adopted by the IAU. So, which coordinate system is the YBSC catalog in, exactly? I have noticed that mentioning the epoch, but not the exact coordinate system, is the somewhat confusing but standard way to report coordinates. Example: coordinates in M33's Wikipedia Page, or the CDS name resolver. So, in general, which equatorial coordinate system should be considered 'default' when just the epoch is mentioned, as in the examples above? Should it be FK5, ICRS, or something else? Answer: The question confuses two things: J2000 is the epoch/equinox that the catalogue is in. This defines the reference point in time for right ascension and declination. See e.g. here https://en.wikipedia.org/wiki/Right_ascension and here https://en.wikipedia.org/wiki/Epoch_(astronomy). Now, FK5 and ICRS are different reference systems, but the difference between the two is small (less than 80 microarcseconds).
Nowadays, we work mostly with the ICRS, which replaced the FK5 at the end of the last century. Transformations between the reference frames and epochs are given in the specialised literature (e.g. "The Explanatory Supplement to the Astronomical Almanac", or see the references here: https://www.iers.org/IERS/EN/Science/ICRS/ICRS.html). If you do not want to dive deeply into this, I suggest using implementations that are readily available, e.g. https://docs.astropy.org/en/stable/coordinates/index.html. The Yale Bright Star Catalogue (in the revised edition) gives its entries in J2000, and it is tied to the FK5 (the ICRS did not exist yet when it was released).
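For the questioner's actual goal (equatorial to horizontal conversion), the FK5-vs-ICRS distinction is far below naked-eye accuracy. A minimal sketch of the altitude computation, assuming the hour angle H = LST - RA has already been computed; this is the textbook spherical-astronomy formula, not code from any catalogue or library:

```python
# Altitude of a star from equatorial coordinates and observer latitude:
#   sin(alt) = sin(dec)*sin(lat) + cos(dec)*cos(lat)*cos(H)
import math

def altitude_deg(dec_deg, lat_deg, hour_angle_deg):
    """Altitude (degrees) of a star above the horizon."""
    dec, lat, h = map(math.radians, (dec_deg, lat_deg, hour_angle_deg))
    s = math.sin(dec) * math.sin(lat) + math.cos(dec) * math.cos(lat) * math.cos(h)
    s = max(-1.0, min(1.0, s))  # guard against rounding just past +/-1
    return math.degrees(math.asin(s))

# A star whose declination equals the observer's latitude culminates at
# the zenith (hour angle 0):
print(altitude_deg(40.0, 40.0, 0.0))  # ~90.0
```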
{ "domain": "astronomy.stackexchange", "id": 5893, "tags": "coordinate, fundamental-astronomy, star-catalogues" }
Gauge invariance and diffeomorphism invariance in Chern-Simons theory
Question: I have studied Chern-Simons (CS) theory somewhat, and I am puzzled by the question of how diffeomorphism and gauge invariance in CS theory are related, e.g. in $SU(2)$ CS theory. In particular, I would like to know about the relation between large gauge transformations and large diffeomorphisms. If you know any good sources, I would be really grateful. Thank you! Answer: A useful textbook for your purposes is "Gravitation and gauge symmetries" by M. Blagojevic (IOP, 2002). It has a chapter on Chern-Simons theory and its relation to 3-dimensional gravity. If that textbook is not accessible to you, I suggest that you look at Max Banados' talk http://arXiv.org/abs/hep-th/9901148 or Steve Carlip's review http://arXiv.org/abs/gr-qc/0503022 The first two sections of Ed Witten's paper http://arXiv.org/abs/arXiv:0706.3359 and references therein should also be useful. BTW, the SL(2)xSL(2) Chern-Simons theory is basically the "Palatini" formulation of 3D gravity in terms of Cartan variables (dreibein and dualized spin connection). It is a unique feature of 3 dimensions that you can linearly combine the vielbein with the dualized connection, since only in 3 dimensions is the dual of an antisymmetric tensor a vector. The gauge symmetries of this Chern-Simons theory correspond to diffeomorphisms and local Lorentz transformations (at least on-shell).
{ "domain": "physics.stackexchange", "id": 7115, "tags": "quantum-field-theory, gauge-theory, topological-field-theory, chern-simons-theory, diffeomorphism-invariance" }
Why does the force of air resistance depend on contact area but friction doesn't?
Question: Isn't air resistance very similar to friction? So why is air resistance an exception that depends on contact area, compared to other frictional forces? Answer: The link that @Charlie provided (physics.stackexchange.com/q/154443) already gives the details of the reasons for the independence of dry contact friction from surface area. The following will instead elaborate on the difference between the mechanisms of air resistance and dry friction. Both air resistance (a.k.a. air drag) and dry contact friction are dissipative forces. That is, they dissipate the macroscopic kinetic energy of the moving object(s) involved and convert it into other forms (heat, light, etc.). However, the mechanism by which the energy is dissipated differs, as does the dependency upon surface area, as you already know. In the case of air resistance, the moving object has to "push" or compress the air in front of it while moving it out of the way. All other things being equal, the larger the projection of the surface area of the object in the direction of motion, the more air has to be pushed away and therefore the greater the air resistance. The work the object needs to do to push the air results in a loss of its macroscopic kinetic energy. The main result is an increase in the local temperature of the air (an increase in its internal microscopic kinetic energy), sometimes, though technically erroneously, referred to as heat. In the case of dry contact kinetic friction, the relative motion between the surfaces raises the temperature of those surfaces (increases the internal microscopic kinetic energy of the materials). The elevated temperatures then result in heat transfer to within the materials and to the environment. Hope this helps.
{ "domain": "physics.stackexchange", "id": 65776, "tags": "friction, drag" }
Dark veil when getting up too fast
Question: I was asking myself this weird question. When you get up or stand up too fast, sometimes you see something like a dark veil and you aren't able to see anything distinctly for 2 or 3 seconds; then it comes back to normal. Do you know what causes this weird phenomenon (biologically speaking), or am I the only weird person feeling this sometimes? Answer: It's caused by a sudden shift in the pressure needed to circulate blood to your brain, which your body fails to respond to sufficiently quickly. This results in a sudden loss of blood pressure, termed orthostatic hypotension, which, in turn, results in a transitory reduction in the blood supply necessary for brain function. You experience a momentary loss of vision for the same reason that you would if someone strangled you.
{ "domain": "biology.stackexchange", "id": 2515, "tags": "neuroscience, eyes, vision" }
In searching for an exoplanet by observing transits, isn't it rather rare that an orbital plane would line up with Earth?
Question: Given that we are at a random orientation to any remote star system, it seems to me that there is only a narrow angle at which transits of exoplanets can be observed. Imagine a large mathematical sphere with the remote system at the center and the Earth on the surface. There would be a belt on that sphere where transits could be observed. Assuming an even probability of our position on that sphere, the chance of us being able to observe a transit would be the ratio of the area of that "transit-observing" belt to the total area of the sphere. A quick calculation tells me that the probability of being in position to see a transit, $P_t$, is almost equal to the radius of the star divided by the radius of the planet's orbit ($R_{orbit}=a$). $$ P_t \approx \frac {R_{star}}{R_{orbit}} $$ Assumptions: Distance to star >> $R_{star}$. The exoplanet is a point. In reality its size is non-zero, but this just slightly blurs the edge case. If true, the chance of far observers seeing Earth transits of the Sun: $\frac {0.696 \space million \space km}{149.6 \space million \space km} \approx \frac12\% $ The coincidence that we are in position to see the planets in the Trappist-1 system: $\frac {0.114 \cdot 0.696 \space million \space km}{0.011 \cdot 149.6 \space million \space km} \approx 5\% $ Answer: You are right, the probability of a priori being able to see transits around any star is low. But you made a mistake. The inner planet in the Trappist-1 system is 0.011 au from the star. Thus the transit probability is actually about $(0.114 \cdot 6.96\times 10^8)/(0.011 \cdot 1.5\times 10^{11}) = 0.048$. (I also think you got the radius of the Sun wrong.) Thus unlikely, but not astoundingly so. If you observed 20 such stars and they all had planetary systems like this, then you would expect to see one with transits. I will bet a lot of cash that Trappist-1 is not the only star that was monitored by the Trappist experiment. In fact I'd give you odds of 10-1 on that.
(Or if you prefer, evens that they observed at least 10).
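The geometric estimate from the thread, $P_t \approx R_{star}/a$, evaluated for the two cases discussed (stellar radius and orbital distance as quoted in the answer):

```python
# A priori probability that a randomly inclined circular orbit shows
# transits, approximated as stellar radius over orbital radius.
def transit_probability(r_star_m, a_m):
    """Geometric transit probability P ~ R_star / a (both in metres)."""
    return r_star_m / a_m

R_SUN = 6.96e8  # m
AU = 1.496e11   # m

earth = transit_probability(R_SUN, AU)                        # ~0.005
trappist1b = transit_probability(0.114 * R_SUN, 0.011 * AU)   # ~0.048
print(f"Earth: {earth:.4f}, TRAPPIST-1b: {trappist1b:.3f}")
```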
{ "domain": "physics.stackexchange", "id": 38221, "tags": "exoplanets, transit" }
Should I use module pattern for this notification controller?
Question: I have the following code; I use this pattern for the entire JavaScript code. I'm trying to re-factor my code again. Should I use the Module pattern, the Revealing Module Pattern to be specific? What happens is, somewhere in my code I say Dnianas.Notification.init(). First I bind the events, and I handle them one by one. Is there a better way to do this? Also, I notice that I don't use the var keyword, because the variables depend on each other.

Dnianas.Notification = {
    init: function () {
        this.bindEvents();
        this.cache();
    },
    bindEvents: function () {
        $(document).on('click', '#opennotifii', this.markAsRead);
    },
    cache: function () {
        $notification = $('#opennotifii');
    },
    countNotifications: function () {
        var $notifications = $('.boxnotificationsusers').children().find('#boxsendnotifi');
        ids = [];
        // Add unread notifications to the ids array.
        $notifications.each(function () {
            if ($(this).data('read') === 0) {
                ids.push($(this).data('id'));
            }
        });
        return ids;
    },
    markAsRead: function () {
        self = Dnianas.Notification;
        ids = self.countNotifications();
        if (ids.length > 0) {
            var request = $.ajax({
                url: '/notifications/read',
                data: {
                    notifications: ids,
                    _token: token,
                }
            });
            request.done(function (data) {
                self.renderNotificationCount(data);
            });
        }
    },
    renderNotificationCount: function (data) {
        if (data.seen) {
            $notification.find('.not_nu1').fadeOut(200);
        }
    }
};

Answer: Here's a modified version that uses the original singleton declaration. Description of changes:
- Remove all undeclared and thus implicitly global variables.
- Remove caching of the jQuery selector object (don't see any need for it).
- Remove several intermediate variables that are only used once.
- Add .bind() to the event handler so it can use this directly.
- Change from .done() to .then() to code more closely to the promise standard.
Modified code:

Dnianas.Notification = {
    init: function () {
        this.bindEvents();
    },
    bindEvents: function () {
        $(document).on('click', '#opennotifii', this.markAsRead.bind(this));
    },
    countNotifications: function () {
        var ids = [];
        // Add unread notifications to the ids array.
        $('.boxnotificationsusers').children().find('#boxsendnotifi').each(function () {
            if ($(this).data('read') === 0) {
                ids.push($(this).data('id'));
            }
        });
        return ids;
    },
    markAsRead: function () {
        var self = this;
        var ids = this.countNotifications();
        if (ids.length > 0) {
            $.ajax({
                url: '/notifications/read',
                data: {
                    notifications: ids,
                    _token: token,
                }
            }).then(function (data) {
                self.renderNotificationCount(data);
            });
        }
    },
    renderNotificationCount: function (data) {
        if (data.seen) {
            $('#opennotifii .not_nu1').fadeOut(200);
        }
    }
};

Also, this piece of code looks suspicious:

    $('.boxnotificationsusers').children().find('#boxsendnotifi').each(...)

because there should only ever be one '#boxsendnotifi' item in the entire document, so if that is the case, you could just do this:

    $('#boxsendnotifi').each(...)

or perhaps:

    $('.boxnotificationsusers #boxsendnotifi').each(...)

if you want to limit what you find to a particular scope. Or perhaps you should just be using a class name if there are potentially multiple matches:

    $('.boxnotificationsusers .boxsendnotifi').each(...)
{ "domain": "codereview.stackexchange", "id": 13572, "tags": "javascript, jquery, revealing-module-pattern" }
For outliers treatment: clipping, winsorizing or removing?
Question: I came across three different techniques for treating outliers: winsorization, clipping and removing.
Winsorizing: Consider the data set consisting of: {92, 19, 101, 58, 1053, 91, 26, 78, 10, 13, −40, 101, 86, 85, 15, 89, 89, 28, −5, 41} (N = 20, mean = 101.5). The data below the 5th percentile lies between −40 and −5, while the data above the 95th percentile lies between 101 and 1053. (Values shown in bold.) Then a 90% winsorization would result in the following: {92, 19, 101, 58, 101, 91, 26, 78, 10, 13, −5, 101, 86, 85, 15, 89, 89, 28, −5, 41} (N = 20, mean = 55.65)
Clipping: Given an interval, values outside the interval are clipped to the interval edges. For example, if an interval of [0, 1] is specified, values smaller than 0 become 0, and values larger than 1 become 1.
Removing: Just taking them out.
My questions are:
- In which cases should I use which one?
- If I always do winsorization (which seems the best in my opinion), what important information am I losing?
- Is this model-dependent (for decision trees, for linear models...), or can the same strategy be applied to all of them?
Answer: On the difference between winsorizing and clipping: The techniques are very similar. They deal with extreme values (that are not necessarily outliers). Imo you should generally avoid thinking that big values = outliers. Solutions to deal with big values include normalizing your variables by a size factor for more comparability. Which one you should use is linked to your data and their predictiveness. Sometimes data outside of a threshold will be predictive, sometimes not. Sometimes the underlying process will depend on rank, sometimes on global values. There is no rule for when to do one of those two techniques, and no rule on which one to choose. In the end, you should choose whichever gives you better results.
Yeah, winsorisation seems a bit more 'adaptive', as you don't have to look at each of your predictors to choose a threshold, but outside of that, there is no reason it performs better statistically. One of the main problems of these techniques is that they are univariate. If your predictors are correlated, modifying one value without the others can break your relationships. Yes, the answer is model dependent. Having extreme values may be detrimental to the convergence of methods based on gradient descent, as some data points may swing the optimization path. For methods relying on a partition of the space, like trees, having extreme values is less of a problem. Removing outliers is a completely different strategy. The difficulty is to identify outliers confidently... but that's another question.
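A side-by-side sketch of the two techniques compared above, using the winsorization example data from the question (pure NumPy; note that NumPy's interpolated percentiles give slightly different caps, and hence a slightly different mean, than the rank-based replacement used in the question's worked example):

```python
# Winsorize: cap at data-derived percentiles. Clip: cap at a fixed,
# user-chosen interval. Mechanically both are a clip; only the source
# of the thresholds differs.
import numpy as np

x = np.array([92, 19, 101, 58, 1053, 91, 26, 78, 10, 13,
              -40, 101, 86, 85, 15, 89, 89, 28, -5, 41], dtype=float)

# Winsorize: thresholds come from the data itself (5th/95th percentiles).
lo, hi = np.percentile(x, [5, 95])
winsorized = np.clip(x, lo, hi)

# Clip: thresholds are fixed in advance, independent of the data.
clipped = np.clip(x, 0, 100)

print(x.mean(), winsorized.mean(), clipped.mean())
```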
{ "domain": "datascience.stackexchange", "id": 6695, "tags": "machine-learning, statistics, data-cleaning, preprocessing, outlier" }
Why is there only one inertial frame in which $ct$ and $x$ are orthogonal?
Question: It has been a very long time since I took a physics class, so I want to refresh my memory. I think I learned that there is only one inertial frame in Minkowski spacetime (i.e., in special relativity) in which $ct$ and $x$ are orthogonal. (Inertial frames are assumed to be Lorentz-invariant, and we assume one space axis, $x$.) So, why is that? ($ct$ is the vertical axis, $x$ is the horizontal axis.) To avoid confusion: I think I found the question in my old textbook :) Edit: orthogonality in the Euclidean sense is assumed. Show that the $S'$ axes, $x'$ and $ct'$, are nonorthogonal in a spacetime diagram. Assume that the $S'$ frame moves at the speed $v$ relative to the $S$ frame ($S'$ is moving away from $S$ in the $+x$ direction) and that $t = t' = 0$ when $x = x' = 0$. (The $S$ axes, the $x$ axis and the $ct$ axis, are defined to be orthogonal in a spacetime diagram.) Answer: This is false. Each inertial frame has spatial and temporal axes that are orthogonal to each other (in the Lorentz inner product, of course). Of course, different frames have different axes: temporal axes differ (to reflect relatively-moving origins, which also happens in Galilean relativity) and spatial axes also differ (to reflect the relativity of simultaneity, which does not happen in Galilean relativity). You may be thinking of the fact that if we concentrate on some inertially-moving particle, then there is a unique spatial axis that is orthogonal to the particle's worldline. This axis is of course the spatial axis of the rest frame of the particle: the Lorentz frame that has its temporal axis along the particle's worldline.
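The textbook exercise quoted in the question can be settled with a two-line computation. A sketch (with $c=1$, boost speed $v$, and $\gamma = 1/\sqrt{1-v^2}$): the primed basis vectors written in $(ct, x)$ components are $e_{t'} = \gamma(1, v)$ and $e_{x'} = \gamma(v, 1)$, so

```latex
% Lorentz inner product with signature (-, +): orthogonal in every frame.
\langle e_{t'}, e_{x'} \rangle = \gamma^{2}\bigl(-(1)(v) + (v)(1)\bigr) = 0
% Euclidean dot product on the diagram: nonorthogonal whenever v \neq 0.
e_{t'} \cdot e_{x'} = \gamma^{2}\bigl((1)(v) + (v)(1)\bigr) = 2\gamma^{2} v \neq 0
```

which matches the answer: the axes of every boosted frame are Lorentz-orthogonal, and only the $S$ frame's axes are also orthogonal in the Euclidean sense used when drawing the diagram.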
{ "domain": "physics.stackexchange", "id": 3803, "tags": "special-relativity" }
How to measure the total size of a fastq file in base pairs?
Question: Or in kbp/Gbp. It feels like it should be conceptually very simple, but I can't seem to figure out the right combination of keywords to find it via my search engine. Help would be appreciated! I have BBMap, SRAtoolkit and MEGAHIT already installed, and I also use bash. I'd be very happy if this can be answered with software that I already have, but if not that's perfectly fine. Answer: I've been using this:

    cat file.fastq | paste - - - - | cut -f 2 | tr -d '\n' | wc -c

Explanation:
- paste - - - - : print four consecutive lines in one row (tab delimited), to merge the info for each read
- cut -f 2 : print only the second column (the sequence) after the paste
- tr -d '\n' : remove the newline characters so they are not counted
- wc -c : count the characters
(A tip for your googling: try searching for "counting number of bases in a fastq file".)
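An equivalent one-liner (an alternative I'm adding, not from the original answer): the sequence is always the 2nd line of each 4-line FASTQ record, so awk can sum the lengths of lines 2, 6, 10, ... directly. The demo file here is made up for illustration:

```shell
# Make a tiny two-read FASTQ to demonstrate (replace with your real file).
printf '@r1\nACGT\n+\nIIII\n@r2\nACG\n+\nIII\n' > demo.fastq

# Sum the lengths of every sequence line (line 2 of each 4-line record).
# For the demo file this prints 7 (4 + 3 bases).
awk 'NR % 4 == 2 { total += length($0) } END { print total }' demo.fastq
```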
{ "domain": "bioinformatics.stackexchange", "id": 2588, "tags": "fastq, bash" }
Neutralization between Ca(OH)2 + H2SO4 in a 30% hydrogen peroxide solution
Question: I have performed an experiment where I added an excess of $\ce{Ca(OH)2}$ base to a solution consisting of 5 mL of 30 % hydrogen peroxide (buffered at pH 5) and a very small amount of sulfuric acid (such that the pH of the original solution of 30 % $\ce{H2O2}$ and acid was 1.7). I am having trouble understanding what reactions may be occurring between these three substances to give a final pH of 10.5. I know that $\ce{Ca(OH)2}$ and $\ce{H2O2}$ form $\ce{CaO2}$ when reacted, but shouldn't the final pH be equal to that of $\ce{Ca(OH)2}$, considering it is in excess? What reactions could be occurring here and why are they not allowing the pH to reach that of $\ce{Ca(OH)2}$? Any help at all would be very much appreciated. Thank you in advance. Answer: In your question formulation, you have forgotten to take into account that $\ce{H2O2}$ is a weak acid. The title should rather be: "Neutralisation between calcium hydroxide and 30% hydrogen peroxide". Unless $\ce{Ca(OH)2}$ was in excess over $\ce{H2O2}$ - and it was said it was not - the $\mathrm{pH}$ would always be significantly lower than the pH of the hydroxide. $$\mathrm{pH}=\mathrm{p}K_\mathrm{a,\ce{H2O2}} + \log \frac{[\ce{HO2-}]}{[\ce{H2O2}]}$$ where $\mathrm{p}K_\mathrm{a,\ce{H2O2}}=11.75$ by Wikipedia, but see the links below. If we consider the reaction $$\ce{Ca(OH)2 + H2O2 -> H2O + Ca(OH)(HO2)}$$ we need to neutralize 50 % of the $\ce{H2O2}$ to reach $\mathrm{pH}=\mathrm{p}K_\mathrm{a,\ce{H2O2}}$. The hydroxide forms with $\ce{H2O2}$ a $\mathrm{pH}$ buffer solution of a weak acid and its salt. $$\begin{align} \ce{Ca(OH)2 &<=>> CaOH+ + OH-}\\ \ce{CaOH+ &<=>> Ca^2+ + OH- }\\ \ce{H2O2 &<<=> H+ + HO2-}\\ \ce{H+ + OH- &<=>> H2O}\\ \end{align}$$ $\ce{Ca(OH)2}$: $\mathrm{p}K_\mathrm{b1} =1.37$, $\mathrm{p}K_\mathrm{b2} =2.43$ (Wikipedia). Additionally, $\ce{HO2-}$ is partially eliminated by precipitation, therefore the ratio $\frac{[\ce{HO2-}]}{[\ce{H2O2}]}$ is kept low and so is the $\mathrm{pH}$.
$$\ce{CaOH+ + HO2- + 7 H2O <=>> CaO2 \cdot 8 H2O v}$$ Note also that hydrogen peroxide is weakly acidic even without the addition of sulphuric acid, and that its $\mathrm{p}K_\mathrm{a}$ depends on the $\ce{H2O2}$ concentration. H2O2 pH-and-Ionization-Constant The solubility constant of calcium peroxide octahydrate in relation to temperature; its influence on radiolysis in cement-based materials
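The buffer relation above is easy to evaluate numerically; here is a minimal sketch (Python) using the pKa = 11.75 quoted in the answer. The 1:18 ratio in the second call is my own illustrative choice, picked to show how a precipitation-suppressed ratio reproduces a pH near the observed 10.5:

```python
import math

def buffer_ph(pka, conjugate_base, weak_acid):
    """Henderson-Hasselbalch: pH = pKa + log10([A-]/[HA])."""
    return pka + math.log10(conjugate_base / weak_acid)

pKa_H2O2 = 11.75  # value quoted in the answer

# Neutralizing 50% of the H2O2 gives [HO2-]/[H2O2] = 1, so pH = pKa:
print(buffer_ph(pKa_H2O2, 1.0, 1.0))            # 11.75

# If precipitation of CaO2 . 8 H2O keeps the ratio low, the pH stays well
# below the pKa; a ratio near 1:18 (illustrative) gives pH ~ 10.5:
print(round(buffer_ph(pKa_H2O2, 1.0, 18.0), 2))  # 10.49
```

This is only the buffer arithmetic, not a full speciation model; it simply shows that a suppressed $[\ce{HO2-}]/[\ce{H2O2}]$ ratio is consistent with the measured pH.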
{ "domain": "chemistry.stackexchange", "id": 12526, "tags": "acid-base" }
Magnetic field due to a circular ring
Question: In the EMFT notes of MIT Course-ware, the derivation of the magnetic field due to a circular ring at its axis, using Biot-Savart's Law and the cylindrical coordinate system, is done as follows. I am unable to understand how they calculate the $a_r$ vector component. Mainly the line, 'the radial vector changes direction as a function of $\phi$, being oppositely directed at $-\phi$, so that the total magnetic field due to the whole [ring] in the radial direction is zero.' Why does the radial vector change direction? Isn't it always going to point outwards, thus in the positive direction? Can someone please explain this? Answer: There are actually two components of the magnetic field due to an element on the ring. 'Radial' itself means along the radius (radially outwards). For a given element this component is perpendicular to the axis of the ring, and its direction depends on the position of the element on the ring. As you add the vectors around the ring, the net radial field is zero. The other component is along the axis of the ring, and these axial contributions sum up to produce a net field along the axis of the ring.
{ "domain": "physics.stackexchange", "id": 66863, "tags": "electromagnetism, magnetic-fields, field-theory" }
Remove duplicates from a Pandas dataframe taking into account lowercase letters and accents
Question: I have the following DataFrame in pandas: code town district suburb 02 Benalmádena Málaga Arroyo de la Miel 03 Alicante Jacarilla Jacarilla, Correntias Bajas (Jacarilla) 04 Cabrera d'Anoia Barcelona Cabrera D'Anoia 07 Lanjarón Granada Lanjaron 08 Santa Cruz de Tenerife Santa Cruz de Tenerife Centro-Ifara 09 Córdoba Córdoba Cordoba For each row in the suburb column, if the value it contains is equal (in lower case and without accents) to district or town columns, it becomes NaN. This is the code I am using: df['suburb'] = np.where( ((df['suburb'].str.normalize('NFKD').str.encode('ascii', errors='ignore').str.decode('utf-8').str.lower() == df['town'].str.normalize('NFKD').str.encode('ascii', errors='ignore').str.decode('utf-8').str.lower()) | (df['suburb'].str.normalize('NFKD').str.encode('ascii', errors='ignore').str.decode('utf-8').str.lower() == df['district'].str.normalize('NFKD').str.encode('ascii', errors='ignore').str.decode('utf-8').str.lower()) ), np.nan, df['suburb']) df Example result: code town district suburb 02 Benalmádena Málaga Arroyo de la Miel 03 Alicante Jacarilla Jacarilla, Correntias Bajas (Jacarilla) 04 Cabrera d'Anoia Barcelona NaN 07 Lanjarón Granada NaN 08 Santa Cruz de Tenerife Santa Cruz de Tenerife Centro-Ifara 09 Córdoba Córdoba NaN I would like to reduce the amount of code, as I am sure it can be made shorter with the same performance. 
Answer: Looks like you could use a function: def accent_free(s: str): return unicodedata.normalize('NFKD', s).encode('ascii', errors='ignore').decode('utf-8').lower() Of course, you want this vectorized for numpy, so: accent_free = np.vectorize(accent_free) Now, you just need to use this function: df['suburb'] = np.where( ((accent_free(df['suburb']) == accent_free(df['town'])) | (accent_free(df['suburb']) == accent_free(df['district'])) ), np.nan, df['suburb']) Complete working example: import unicodedata import numpy as np import pandas as pd def accent_free(s: str): return unicodedata.normalize('NFKD', s).encode('ascii', errors='ignore').decode('utf-8').lower() accent_free = np.vectorize(accent_free) data = [ ["02", "Benalmádena", "Málaga", "Arroyo de la Miel"], ["03", "Alicante", "Jacarilla", "Jacarilla, Correntias Bajas (Jacarilla)"], ["04", "Cabrera d'Anoia", "Barcelona", "Cabrera D'Anoia"], ["07", "Lanjarón", "Granada", "Lanjaron"], ["08", "Santa Cruz de Tenerife", "Santa Cruz de Tenerife", "Centro-Ifara"], ["09", "Córdoba", "Córdoba", "Cordoba"], ] df = pd.DataFrame(data, columns=["code", "town", "district", "suburb"]) df['suburb'] = np.where( ((accent_free(df['suburb']) == accent_free(df['town'])) | (accent_free(df['suburb']) == accent_free(df['district'])) ), np.nan, df['suburb']) print(df.to_string())
{ "domain": "codereview.stackexchange", "id": 43593, "tags": "python, strings, pandas, natural-language-processing" }
Electron leaving the atom
Question: From the photoelectric effect, we know that a photon can kick an electron outside the atom if it has the right amount of energy ($E_{\gamma} \geq W_0$). On the other hand, pair production tells us that a photon can annihilate to form an electron and an anti-electron (positron). I'm wondering, how do we know that a photon in the photoelectric effect will actually interact with the electron not annihilate to form a positron and an electron, then the positron will interact with the electron in the atom while the other electron will be the one moving around? Answer: For a photon to give rise to a real (not virtual) electron/positron pair it must possess an energy of slightly greater than one million electron volts. This is a very energetic photon indeed. In comparison, the photon causing photoejection of an electron from an atom needs an energy of order ~an electron volt. This is typical of the photons that make up visible light.
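The two energy scales quoted in the answer can be checked directly from physical constants; a quick order-of-magnitude sketch in Python (550 nm is my own choice of a representative visible wavelength):

```python
# Compare the pair-production threshold with a visible-light photon energy.
h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e8      # speed of light, m/s
m_e = 9.1093837e-31   # electron rest mass, kg
eV = 1.602176634e-19  # joules per electron volt

# Pair production must supply at least the rest energy of the e-/e+ pair:
threshold_MeV = 2 * m_e * c**2 / eV / 1e6
print(f"pair-production threshold: {threshold_MeV:.3f} MeV")  # ~1.022 MeV

# A visible photon (550 nm) carries only a couple of eV:
visible_eV = h * c / 550e-9 / eV
print(f"visible photon: {visible_eV:.2f} eV")                 # ~2.25 eV

# The threshold is roughly half a million times larger:
print(f"ratio: ~{threshold_MeV * 1e6 / visible_eV:.0f}x")
```

So a photoelectric-effect photon falls short of the pair-production threshold by about six orders of magnitude, which is the answer's point.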
{ "domain": "physics.stackexchange", "id": 73055, "tags": "quantum-mechanics, photoelectric-effect, pair-production" }
Extract HTML tags and its content from a string
Question: Goal I want to extract HTML tags and their content from a string. The content (input) is queried from the WordPress database. Sample data (input) I extracted this dummy data from my WordPress database: https://www.phpliveregex.com/p/tan I believe this should cover all the needed tags to parse. Expectation (output) Accepting an HTML-format string as input. The output should be able to return a string which could be any of these: The HTML element string itself Attributes string of the HTML element Text nodes, child nodes string of the HTML element My concern Which solution takes less execution time? Which solution saves more server memory? Security vulnerabilities of each solution. Solutions I've come up with 2 solutions by myself. They work fine, but I don't know which one is good for my case. Regex pattern $el = 'li'; // Ex $match = []; // Reserving for results /** * Regex - extract HTML tag and its content * Array map: * x[0] = everything * x[1] = open tag * x[2] = attributes * x[3] = content & end tag * x[4] = content only * * Note for content: including text node + children node */ $reg = '/(<'.$el.'(.*?)>)((\n*?.*?\n*?)<\/'.$el.'>|)/'; if (preg_match($reg, $html_str, $match)) { echo 'Moving onward!';} Result: see demo of regex DOM object $dom = new DomDocument(); $content = mb_convert_encoding( get_the_content(null, true), # WordPress func, it gives input str 'HTML-ENTITIES', 'UTF-8' ); $dom->loadHTML($content); $el = $dom->getElementsByTagName('li'); Result: returning a DOMNodeList and I have to do a few more tasks to print it to a string which can be used. Answer: Using regex to parse valid html will work (and you might call it beautiful) until it doesn't work... then you'll bend over backwards (over and over) each time you encounter an anomaly then try to write a patch for the pattern. 
Allow me to notify you of a simple, valid html occurrence that will break your sample pattern: Demo <link rel="stylesheet" href="/html_5/tags/html_link_tag_example.css"> It would match because your li needle is not immediately followed by a word boundary character (\b). Is this a simple thing to fix? Yes, but my point remains -- regex is an inappropriate tool for parsing valid html. I generally rely on DomDocument for most cases and when XPath makes life simpler, I use that to perform clean, readable queries on the document. This is one time when focussing on speed is a moot point -- speed is the least of your worries. What good is speed if the results are bogus? The goal should be to design a robust and reliable script using DOM-aware techniques.
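The DOM-over-regex point is language-independent; here is a small sketch in Python's standard-library html.parser (the class name and input string are my own illustration; the answer itself is about PHP's DomDocument/XPath). It includes the `<link>` tag that defeats the naive `li` pattern:

```python
from html.parser import HTMLParser

class TagCollector(HTMLParser):
    """Collect the attributes and text content of every <li> element."""
    def __init__(self):
        super().__init__()
        self.depth = 0      # nonzero while inside an <li>
        self.items = []

    def handle_starttag(self, tag, attrs):
        # A real tokenizer never confuses <link> with <li> -- exactly the
        # case that breaks the naive regex above.
        if tag == "li":
            self.depth += 1
            self.items.append({"attrs": dict(attrs), "text": ""})

    def handle_endtag(self, tag):
        if tag == "li" and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth:
            self.items[-1]["text"] += data

p = TagCollector()
p.feed('<ul><li class="a">one</li><li>two</li></ul>'
       '<link rel="stylesheet" href="x.css">')
print(p.items)  # two <li> items with their attrs and text; <link> ignored
```

The parser hands you tags, attributes, and text nodes as structured data, which is the robustness the answer is arguing for.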
{ "domain": "codereview.stackexchange", "id": 35662, "tags": "performance, php, comparative-review, regex, dom" }
Electromagnetic Field VS Photons
Question: I am currently studying electrodynamics with all the fields and the like. Now, as I understand it, in a more modern viewpoint there is a duality between electromagnetic fields and photons, with photons being the particles that are exchanged in the process of interaction. My question is, what is the current explanation of what an electromagnetic field is? For example, consider a point charge $q_1$. In order for another charge $q_2$ to 'feel' $q_1$, there is an electromagnetic field generated by $q_1$ that allows interaction. However, using the photon point of view, a charge should then constantly radiate photons in all possible directions to let other charges know that it's here and should be interacted with. This leads to a problem in energy conservation, as each photon carries an energy $h\nu$, and thus even if a charge is at rest it would radiate off its energy and subsequently be gone. How can this be resolved? What is the real connection between an electromagnetic field and photons? Answer: This is where virtual particles come into play. http://youtu.be/K6i-qE8AigE?t=3m23s Essentially you can think of these virtual particles as temporary photons, carriers that don't exactly play well with conservation of energy. The field is full of these non-conservative carriers for a very brief instant as a function of the mass of the carrier (called a gauge boson). As (rest-)massless particles, photons can extend out ad infinitum until they finally interact with another particle.
{ "domain": "physics.stackexchange", "id": 14785, "tags": "electromagnetism, photons, quantum-electrodynamics, virtual-particles, carrier-particles" }
If I reflect light from a projector using a mirror, then is the reflected image real or virtual?
Question: I am in a great confusion about this question. In our school, we have been taught that: A real image is an image which can be obtained on a screen. It is always inverted. A virtual image is an image which cannot be obtained on a screen. It is always erect. Example: Plane mirrors form a virtual image but a convex lens forms a real image. I have understood the simple examples but I am confused about this example of projectors. I have done the experiment myself and I found that: The image formed can be obtained on the wall. The image formed is erect. So is it a real or a virtual image? 1) If it is a real image, why can it be formed with a mirror? And why is it erect? 2) If it is a virtual image, why can I get it on a screen? I read many pages on the internet about real and virtual images, I can understand them and I think that it is a real image. However, I am still unsure about the answer. So please clear my doubts. Answer: The "rule" that real images are always inverted is not correct. That rule might work when you have only a single optical element (like a lens), but not necessarily when you have two or more. Take a look at this: That's a real, upright (aka erect) image labeled $I_B$. You can tell it's real because the rays at the final image actually converge at that physical location, unlike virtual images whose location of "convergence" does not actually have physical rays passing through it (only the "backtracking" rays one typically draws). As for your projector and mirror, you can draw a ray diagram carefully (if you know the internal workings of a projector) and apply the same test to see if the image is virtual or real. But do remember that projectors can be modified so that the image you see is inverted to accommodate mirrors and such. Anyway, to get something projected on to a screen, I do believe the image needs to be real, so I would say it is indeed a real image you're seeing. 
Finally, here's a simple example of two elements that again goes against the rules in your book: The situation above has a real, inverted image using a lens and mirror (a mirror!). Thus, the rules you've been given aren't general enough to deal with multiple-element situations.
{ "domain": "physics.stackexchange", "id": 15441, "tags": "optics, reflection, geometric-optics" }
about undefined symbol error
Question: After compiling a code with OMPL, I tried to move a turtlebot. However, next error arose. process[map_server-1]: started with pid [14847] process[amcl-2]: started with pid [14859] process[navigation_velocity_smoother-3]: started with pid [14964] process[kobuki_safety_controller-4]: started with pid [15018] process[move_base-5]: started with pid [15044] [ INFO] [1403471634.839981265]: Using plugin "static_layer" [ INFO] [1403471635.013088475]: Requesting the map... [ INFO] [1403471635.244269848]: Resizing costmap to 4000 X 4000 at 0.050000 m/pix [ INFO] [1403471635.784022404]: Received a 4000 X 4000 map at 0.050000 m/pix [ INFO] [1403471635.800435571]: Using plugin "obstacle_layer" [ INFO] [1403471635.830502313]: Subscribed to Topics: scan bump [ INFO] [1403471635.974988532]: Using plugin "inflation_layer" /home/turtlebot/catkin_ws/devel/lib/move_base/move_base: symbol lookup error: /home/turtlebot/catkin_ws/devel/lib//libompl_planner_rrt.so: undefined symbol: _ZN18base_local_planner12CostmapModelC1ERKN10costmap_2d9Costmap2DE [move_base-5] process has died [pid 15044, exit code 127, cmd /home/turtlebot/catkin_ws/devel/lib/move_base/move_base cmd_vel:=navigation_velocity_smoother/raw_cmd_vel __name:=move_base __log:=/home/turtlebot/.ros/log/6419a288-fa4f-11e3-9ded-dc85de8a0cd2/move_base-5.log]. log file: /home/turtlebot/.ros/log/6419a288-fa4f-11e3-9ded-dc85de8a0cd2/move_base-5*.log I execute a command rospack plugins --attrib=plugin nav-core The result is following. navfn /home/turtlebot/catkin_ws/src/navfn/bgp_plugin.xml ompl_planner_rrt /home/turtlebot/catkin_ws/src/ompl_planner_rrt/bgp_plugin.xml base_local_planner /home/turtlebot/catkin_ws/src/base_local_planner/blp_plugin.xml -- others -- I think that the result means that ompl_planner_rrt is available. But, the problem is undefined symbol: ZN18base_local_planner12CostmapModelC1ERKN10costmap_2d9Costmap2DE. I don't understand the meaning of the error. 
I guess that ZN18base_local_planner12CostmapModelC1ERKN10costmap_2d9Costmap2DE points out world_model = new base_local_planner::CostmapModel(*costmap);. However, as it is described on the tutorial (http://wiki.ros.org/navigation/Tutorials/Writing%20A%20Global%20Path%20Planner%20As%20Plugin%20in%20ROS), It probably doesn't have a problem. Could anyone teach me something clue not to arise an error? Thank you in advance! I followed ferg, editing CMakeLists.txt. target_link_libraries( ompl_planner_rrt ${catkin_LIBRARIES} ${OMPL_LIBRARIES} base_local_planner ) The error message has gone out. [ INFO] [1403566019.155955441]: Using plugin "static_layer" [ INFO] [1403566019.322259977]: Requesting the map... [ INFO] [1403566019.650859797]: Resizing costmap to 4000 X 4000 at 0.050000 m/pix [ INFO] [1403566020.184608226]: Received a 4000 X 4000 map at 0.050000 m/pix [ INFO] [1403566020.201439022]: Using plugin "obstacle_layer" [ INFO] [1403566020.228420634]: Subscribed to Topics: scan bump [ INFO] [1403566020.372557447]: Using plugin "inflation_layer" [ INFO] [1403566020.944125609]: Using plugin "obstacle_layer" [ INFO] [1403566021.114137708]: Subscribed to Topics: scan bump [ INFO] [1403566021.277011379]: Using plugin "inflation_layer" [ INFO] [1403566021.585192241]: Created local_planner base_local_planner/TrajectoryPlannerROS [ INFO] [1403566021.620077400]: Sim period is set to 0.20 [ WARN] [1403566021.884131333]: Map update loop missed its desired rate of 1.0000Hz... the loop actually took 1.3869 seconds [ INFO] [1403566022.889696351]: odom received! Originally posted by Ken_in_JAPAN on ROS Answers with karma: 894 on 2014-06-22 Post score: 0 Answer: What does your CMakeLists.txt look like for this plugin? Are you linking against the "base_local_planner" library? If not, you probably need to. 
You can run the undefined symbol through c++filt to see that: c++filt _ZN18base_local_planner12CostmapModelC1ERKN10costmap_2d9Costmap2DE outputs: base_local_planner::CostmapModel::CostmapModel(costmap_2d::Costmap2D const&) That class is defined in base_local_planner/src/costmap_model.cpp, and is part of the "base_local_planner" library exported by that package. Thus, you need to link against that library (since this is a plugin, you don't get an undefined symbol until runtime). Originally posted by fergs with karma: 13902 on 2014-06-23 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Ken_in_JAPAN on 2014-06-23: @fergs: Thank you for providing me with this important information! So what should I do? Should I edit CMakeLists.txt? My header file ompl_planner_rrt.h includes #include <base_local_planner/costmap_model.h>. Could you advise me on that? Comment by ROSCMBOT on 2014-11-27: I am still getting a similar error even after adding base_local_planner to target_link_libraries
{ "domain": "robotics.stackexchange", "id": 18345, "tags": "ros, turtlebot, ros-hydro, global-planner" }
pneumatic solenoid operation?
Question: Any idea on how this type of pneumatic solenoid works? I know that NC is normally closed, NO normally open, but not sure how air flows? Thanks! Answer: When no voltage is applied to the solenoid, air will flow between port 2 'Common' and port 3 'Normally Open,' but not to port 1 'Normally Closed.' When voltage is applied, air will flow between port 2 and port 1, but not to port 3. We could likely assume that the direction of air flow is intended to be from port 2 to ports 1 & 3, but it isn't a guarantee and it may not matter to the valve. Note that in pneumatic and hydraulic circuits, normally closed and normally open have the opposite definition as in electrical circuits. For a valve, closed means not passing air/oil, whereas for a relay closed means passing electrons. A more standard diagram for this valve would be #3 in the image below: Although it's worth noting that the 'T' designation implies a hydraulic valve, and numeric port designations are more common for pneumatic valves.
{ "domain": "engineering.stackexchange", "id": 1233, "tags": "airflow, compressed-air, pneumatic" }
IPTG and lac operator with e coli for foreign gene question
Question: We did an experiment where we have E. coli with a plasmid carrying a gene from another bacterium, and we put in IPTG for induction. Well, after looking up more about IPTG online I see it's related to the lac operator, which from what I've found just deals with lactose. Is there some other function it has? How can this be related to or affect the thing we put in and what we're doing? I'm missing the connection here. Answer: The lac operon contains genes which are important for the metabolization of lactose as an energy source - normally glucose is used for this purpose. Usually the operon is tightly regulated, and as long as there is another source of energy it is kept in an inhibited state. The presence of lactose removes the lac repressor from the lac operon and allows the expression of the genes, thus allowing the metabolization of lactose. The mechanism can be turned on and off depending on the presence of lactose. IPTG is a substance which mimics the presence of allolactose (a metabolite of lactose), and it can activate transcription from the lac operon. As IPTG (in contrast to allolactose) cannot be hydrolyzed by β-galactosidase, its concentration in the cell stays the same. Using the lac operon and IPTG enables you to switch on the expression of the gene on your plasmid and to start the overexpression.
{ "domain": "biology.stackexchange", "id": 3498, "tags": "proteins, gene-expression" }
SKAction on SKLabel with array of Strings
Question: I have an SKLabel attached to a parent SKNode and an array of strings: let configText:[String] = [ "Configuration", "Do stuff", "Do more stuff", "Nil", "It is the void"] The array is looped through with the following: parentNode_Label.run( SKAction.sequence([ SKAction.run{ if self.counter == self.configText.count - 1 { self.counter = 0 } }, SKAction.wait(forDuration: 3.0), SKAction.run { self.sprite_Label.text = self.configText[self.counter + 1] }, SKAction.fadeIn(withDuration: 0.5), SKAction.wait(forDuration: 3.0), SKAction.fadeOut(withDuration: 0.5), SKAction.run{ self.counter += 1 } ]).forever() ) extension SKAction { public func forever() -> SKAction { return SKAction.repeatForever( self ) } } It works but seems kind of clunky/hacky. Is there a simpler, more efficient yet readable way to do this? Answer: The first array element configText[0] is never used, so you can remove it from the array (and modify the index calculations accordingly). Incrementing the counter with wrap-around can be simplified using the remainder operator: self.counter = (self.counter + 1) % self.configText.count The three "run" actions can be combined into one. Inside the actions array you can refer to the SKAction members without specifying the type explicitly, e.g. .wait instead of SKAction.wait. Putting it all together: parentNode_Label.run( SKAction.sequence([ .wait(forDuration: 3.0), .run { self.sprite_Label.text = self.configText[self.counter] self.counter = (self.counter + 1) % self.configText.count }, .fadeIn(withDuration: 0.5), .wait(forDuration: 3.0), .fadeOut(withDuration: 0.5), ]).forever() ) Also have a look at the Swift naming conventions: Names of types and protocols are UpperCamelCase. Everything else is lowerCamelCase. For example: parentNode, textLabel, without underscores.
{ "domain": "codereview.stackexchange", "id": 28398, "tags": "swift, sprite-kit" }
Bathtub boat, is it possible to go forward?
Question: I know nothing would happen when it's just floating. But once I somehow pumped the water to flow out from the shower head, would it keep shooting out water automatically and accelerate the boat endlessly? By the Siphon effect or something? (I know Siphon effect is something different and not applicable here, but anyway) Answer: Unfortunately there is no such thing as free energy, friction and gravity would quickly stop the water flow when the pump stopped. Due to the law of conservation of energy, energy cannot be created or destroyed, only transformed or transferred.
{ "domain": "physics.stackexchange", "id": 60383, "tags": "newtonian-mechanics, fluid-dynamics, pressure" }
With respect to inertial observer standing at the starting point of why is the velocity of ejected mass of rocket $v + v_0$
Question: We derived the equation of motion of a rocket this way: all the velocities are taken with respect to an inertial observer standing where the rocket starts, and we take the upwards direction to be positive. The initial momentum of the rocket is mv. That's fine. Now at a certain time $\delta t$ later the momentum becomes $(m - \delta m) (v + \delta v) - \delta m (v + \delta v_0)$, where $\delta m$ = amount of decrease of mass of rocket or amount of gas emitted, and $\delta v$ = increase of velocity of rocket due to mass loss and conservation of momentum. Now my question is why we take the velocity of the gas to be $v + v_0$, as we are measuring with respect to an inertial observer standing at the starting point of the rocket. And if we are measuring with respect to the rocket, then the initial momentum should be zero, as the velocity of the rocket relative to itself is zero. So in my opinion the relative velocity of the gas with respect to the inertial observer should be $v_0 - v$; $v_0$ = velocity of the gas. Answer: The proper derivation says that we are in an inertial reference frame, and at some time $t$, we see the rocket traveling at speed $v(t)$ with total mass $m(t)$ ejecting mass backwards at a constant exhaust speed $s$ relative to the rocket. A momentum balance therefore says, $$ m(t) v(t) = m(t+dt) v(t+dt) + [m(t)-m(t + dt)](v(t) - s). $$ The term on the left is the total momentum of the rocket at time $t$, the first term on the right is the new momentum of the rocket at time $t+dt$, and the second term on the right is a chunk of fuel which has been ejected at speed $s$ backwards relative to the rocket, which is traveling forward at speed $v$. Expanding, we have $f(t+dt)=f(t) + f'(t)\, dt + \dots$. Thus, this becomes $$ m~ v = (m + m'~ dt)(v+v'~dt) - m'~ dt(v - s). $$ Discarding the $dt^2$ term on the right yields $$ 0= m'~v+ v'~m - m'~v +m'~s. $$ And so finally, $$ v'~m = -m'~s. $$ This integrates directly to $$ v(t) - v(0) = - s\ln\big(m(t)/m(0)\big). $$ You can get there somewhat more easily by just using a reference frame that happens to be moving forward at speed $v$, but that always seemed like cheating to me: it is important to understand that the two $m'~v$ terms cancel each other out.
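The closed-form result can be sanity-checked numerically; a minimal Python sketch (all numbers are my own illustrative choices) integrating $v'\,m = -m'\,s$ with a constant mass-flow rate:

```python
import math

# Forward-Euler integration of v'(t) m(t) = -m'(t) s, compared against the
# closed form v(t) - v(0) = -s ln(m(t)/m(0)).
s = 3000.0        # exhaust speed relative to the rocket, m/s
m0 = 1000.0       # initial mass, kg
mdot = -1.0       # constant mass-flow rate m', kg/s (negative: mass decreases)
t_end = 500.0     # burn half the mass away
n_steps = 500_000
dt = t_end / n_steps

v, m = 0.0, m0
for _ in range(n_steps):
    v += (-mdot * s / m) * dt   # v' = -m' s / m
    m += mdot * dt

exact = -s * math.log(m / m0)   # = s ln 2 here, about 2079.4 m/s
print(v, exact)
assert abs(v - exact) < 0.1
```

Since the mass halves, the speed gained is $s \ln 2$ regardless of how fast the fuel is burned, which is the frame-independence the derivation establishes.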
{ "domain": "physics.stackexchange", "id": 51143, "tags": "newtonian-mechanics, momentum, conservation-laws, rocket-science" }
Turtlebot Purchasing Options
Question: For those of us who already have a few Roombas and Kinects lying around, is there going to be an option to buy some of the hardware from Turtlebot independently from the whole kit? I am mainly interested in purchasing the IMU and the shelving "stack" that comes with the Turtlebot kit. Originally posted by mjcarroll on ROS Answers with karma: 6414 on 2011-05-20 Post score: 5 Answer: We decided to make the TurtleBot hardware open source. We're working on posting the docs and linking to suppliers for all the parts, including the shelving stack and gyro/power board. So you will be able to buy them separately. Check back soon at turtlebot.com. Originally posted by Brian Gerkey with karma: 2916 on 2011-05-22 This answer was ACCEPTED on the original site Post score: 5
{ "domain": "robotics.stackexchange", "id": 5616, "tags": "turtlebot" }
Mechanism of syndesmophyte growth in AS
Question: Ankylosing Spondylitis (AS) causes inflammation around joints and the growth of syndesmophytes that may eventually fuse vertebrae. I'm familiar with the genetics (HLA-B27, IL1A) related to the condition, but I can't find any information about the mechanism that causes the actual growths to occur. My current assumption is that AS causes the over-production or under-production of a particular compound or enzyme at the growth site, but I can't find any studies or papers that explain this. Is the mechanism known? Is it directly related to abnormal levels of a particular substance? Answer: As far as I remember the pathology course from medical school, chronic long-lasting inflammation often leads to proliferation of connective tissue and ultimately to fibrosis. The actual mechanism here is the lack of oxygen, which is used up by different immune system cells to produce peroxides and superoxides.
{ "domain": "biology.stackexchange", "id": 291, "tags": "human-biology, pathology, joints" }
Heater efficiency on maintaining temperature?
Question: I have a central heating system with a natural gas (atmospheric) heater warming the water going through the closed loop to radiators. I can set the temperature of the hot water in a fairly broad range, 30-57C. As I have added a thermostat which turns the heater on/off depending on the surrounding air temperature (set to 21C), I was wondering if there was a difference in efficiency (in terms of gas usage), depending on whether I set the heater temperature higher or lower. Obviously, the lower I set the temperature, the slower the rooms will heat up, and the longer the heater will run. On the other hand, it will need to maintain a lower temperature in the pipes, and probably burn less fuel that way. Apart from keeping an eye on the gas meter for some time (and interpreting the readings with adjustment for outside temperature), is there some way to figure out what might be the best settings for using the least gas, or even if it matters? Answer: First order effects: The main driver for energy consumption is simply the temperature difference between inside and outside and the thermal resistance of the house to the outside. This determines the amount of energy that you consume, and the exact way the energy gets into the house doesn't really matter. Second order effects: lower water temperature makes for a slower system. Most thermostats have a little bit of hysteresis, so they let the temperature drop a couple of degrees below the target before turning on the heater pump. The same happens if you crank up the thermostat or switch from night temperature to day temperature. If the steady state temperatures are the same, the low water temperature setting will result in a slightly lower average temperature and so overall less energy consumption. It's also less likely to result in short term "overshoots". Losses of any kind are also a function of the temperature difference, so the high water temperature setting will generate more losses. 
This may not matter: as long as the energy stays in the house, it still contributes to heating. However if pipes are in outside walls and not well insulated some of the losses may go directly to the outside without heating the house first. You may also end up with heat in places where you don't want/need it. So overall the low-water system will be slightly more efficient but it's also less comfy since it's slower.
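The first-order argument can be made concrete with a lumped heat-loss model; a minimal Python sketch (all numbers are made up for illustration):

```python
# Steady-state heat demand of a house modeled with a single whole-house
# heat-loss coefficient UA: the power needed depends only on the
# inside/outside temperature difference, not on the radiator water
# temperature.
def heat_power_w(t_inside_c, t_outside_c, ua_w_per_k):
    return (t_inside_c - t_outside_c) * ua_w_per_k

UA = 200.0  # W/K, a made-up whole-house heat-loss coefficient

print(heat_power_w(21.0, 0.0, UA))   # 4200.0 W on a freezing day
print(heat_power_w(21.0, 10.0, UA))  # 2200.0 W on a milder day

# Whether 4200 W is delivered as short bursts of 57 C water or longer runs
# of 35 C water changes comfort and the second-order losses discussed
# above, not this first-order energy balance.
```

This is only the first-order picture; the hysteresis and pipe-loss effects from the answer sit on top of it.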
{ "domain": "physics.stackexchange", "id": 61759, "tags": "thermodynamics, efficient-energy-use" }
What are the "parts" of separable quantum states?
Question: Of course I could only speak of whole states and banish the word "parts". Mathematics enables some quantum states to be separated into partial states. When the whole state has finite dimensions, the partial states are of lower dimensionality. So "parts" are intrinsic to the maths, but what about the physics? I'm not confining the discussion to particles but to any decomposition of whole into parts. So how should "parts" be regarded physically? In attempting to arrive at a comprehensive notion of them, there seems to be a progressive evaporation of wholes into space time regions to pure sets of numerical parameters. As we deconstruct the whole what should we make of the sub-whole? Answer: I take it that by "separable" pure state, one means a quantum state that can be factorized as a tensor product of two or more lower-dimensional pure quantum states, as discussed here. The parts of separable quantum states are states of nonentangled quantum systems. What this means physically is that correlations between measurements on the subsystems are indistinguishable from any other classical correlation between classical random variables, and, in particular, heed the Bell and CHSH inequalities. The classical probability theory of correlated random variables describes the joint probability densities for measurements on the two subsystems. You can think of these states as classical mixtures of block-diagonal stripes in the full quantum state space. There is a generalization of this notion in that if a quantum state is a classical mixture of pure quantum states, the correlations between subsystems remain classical (i.e. heed the Bell and CHSH inequalities) if and only if the state is a classical mixture of separable states only; in turn, this means that the density matrix $\rho$ can be expressed as a sum of the form $\sum_{k}p_{k}\hat{\rho}_{k}^{A}\otimes\hat{\rho}_{k}^{B}$. For a primer on entanglement, with a detailed worked example, please see my answer here.
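The mixture-of-products form $\sum_{k}p_{k}\hat{\rho}_{k}^{A}\otimes\hat{\rho}_{k}^{B}$ is easy to build explicitly; a minimal numpy sketch (the specific states and weights are my own illustrative choices):

```python
import numpy as np

# Build a separable two-qubit density matrix as a classical mixture of
# product states, rho = sum_k p_k rho_A^k (x) rho_B^k, and verify it.
def pure(ket):
    ket = np.asarray(ket, dtype=complex).reshape(-1, 1)
    ket = ket / np.linalg.norm(ket)
    return ket @ ket.conj().T

zero, one, plus = pure([1, 0]), pure([0, 1]), pure([1, 1])

# A classically correlated but unentangled mixture:
rho = (0.5 * np.kron(zero, zero)
       + 0.3 * np.kron(one, one)
       + 0.2 * np.kron(plus, plus))

assert np.isclose(np.trace(rho), 1.0)             # unit trace
assert np.linalg.eigvalsh(rho).min() > -1e-12     # positive semidefinite

# Partial transpose on subsystem B stays positive -> no entanglement
# (Peres-Horodecki; for two qubits this criterion is also sufficient):
rho_pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
assert np.linalg.eigvalsh(rho_pt).min() > -1e-12

# Contrast: a Bell state fails the same test, so it has no such parts.
bell = pure([1, 0, 0, 1])
bell_pt = bell.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
print(np.linalg.eigvalsh(bell_pt).min())  # -0.5: entangled
```

The first state decomposes into classical "parts"; the Bell state is the kind of whole the question is probing, with no such decomposition.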
{ "domain": "physics.stackexchange", "id": 47158, "tags": "quantum-mechanics, quantum-entanglement" }
UDOO board- SD Card Partition
Question: Hi there, I have a UDOO Quad board loaded with Ubuntu 12.04. Before loading Ubuntu I did not partition the SD card (8GB). Now if I want to install Hydro ROS, will I be able to partition it without erasing Ubuntu from the SD card? Or can I put in ROS without partitioning? Originally posted by AMehra on ROS Answers with karma: 1 on 2014-03-21 Post score: 0 Original comments Comment by domikilo on 2014-03-21: Actually, ROS is just software, not really an OS; if you install it, it will be stored in the Ubuntu file system, so you don't need to divide the partition. Comment by AMehra on 2014-03-22: Hi domikilo, thanks! That's all I wanted to confirm.
{ "domain": "robotics.stackexchange", "id": 17379, "tags": "ros-hydro" }
Why does job.running in QISKit output False, even if the job is still running?
Question: I submitted a job in the 0.5.0 version of QISKit using job = execute(qc, 'ibmqx5', shots=shots) This just submits the job, and does not wait for a result. I then immediately tested whether the job was still running using print(job.running) This gave the result False. However, when I requested the result using job.result() This still took a while to get the result, suggesting that the job actually was still running after all. What is going on here? Answer: There are three stages that the job goes through, as you'll see if you also print the status using print(job.status). The first is an initialization stage. This returns False for job.running, because it hasn't started running yet. Then your job actually will run, and so will give True for job.running. Finally it will have finished running, and job.running goes back to False. So don't use job.running to test whether a result is ready.
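To make the three-stage lifecycle concrete, here is a toy sketch (an invented `ToyJob` class for illustration, not the real QISKit API) showing why `running` alone cannot distinguish "not yet started" from "already finished":

```python
import threading
import time

class ToyJob:
    """Toy model of a job lifecycle: INITIALIZING -> RUNNING -> DONE."""
    def __init__(self):
        self.status = "INITIALIZING"

    def _run(self):
        self.status = "RUNNING"
        time.sleep(0.05)          # pretend to do some work
        self.status = "DONE"

    def submit(self):
        threading.Thread(target=self._run).start()

    @property
    def running(self):
        return self.status == "RUNNING"

job = ToyJob()
print(job.running)                # False -- still initializing, not finished!
job.submit()
while job.status != "DONE":       # poll the status, not `running`
    time.sleep(0.01)
print(job.running)                # False again -- but now because it is done
```

Both prints show False, yet the job was genuinely busy in between; checking the status string avoids the ambiguity.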
{ "domain": "quantumcomputing.stackexchange", "id": 156, "tags": "programming, qiskit" }
Basic MDAS Java Swing Calculator
Question: I have recently started learning Java and decided to make a basic MDAS calculator in Swing. I am not completely new to programming but I may be making some common mistakes, or not writing the most efficient code. I wanted to make a calculator that can take multiple numbers and operations before finding the answer using MDAS, instead of just returning the answer after every operation and using it for the next. e.g. 2 * 3 + 4 - 5 / 5 = 9 instead of 1 My code consists of a single class. There isn't much code so I didn't know if there was a good reason to split it into multiple classes, however I have never written something like this so please feel free to correct me. Repo with example gif and runnable jar package calculator; import java.awt.BorderLayout; import java.awt.Dimension; import java.awt.Font; import java.awt.GridLayout; import java.awt.event.ActionEvent; import java.util.ArrayList; import javax.swing.AbstractAction; import javax.swing.BorderFactory; import javax.swing.Box; import javax.swing.JButton; import javax.swing.JFrame; import javax.swing.JLabel; import javax.swing.JPanel; public class GUI extends JFrame { private static final long serialVersionUID = 1L; private String title = "Basic MDAS Calculator"; private int currentNumber; private JLabel displayLabel = new JLabel(String.valueOf(currentNumber), JLabel.RIGHT); private JPanel panel = new JPanel(); private boolean isClear = true; final String[] ops = new String[] {"+", "-", "x", "/"}; private ArrayList<Integer> numHistory = new ArrayList<Integer>(); private ArrayList<String> opHistory = new ArrayList<String>(); public GUI() { setPanel(); setFrame(); } private void setFrame() { this.setTitle(title); this.add(panel, BorderLayout.CENTER); this.setBounds(10,10,300,700); this.setResizable(false); this.setVisible(true); this.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); } private void setPanel() { panel.setBorder(BorderFactory.createEmptyBorder(10, 10, 10, 10)); panel.setLayout(new 
GridLayout(0, 1)); displayLabel.setFont(new Font("Verdana", Font.PLAIN, 42)); panel.add(displayLabel); panel.add(Box.createRigidArea(new Dimension(0, 0))); createButtons(); } private void createButtons() { // 0-9 for (int i = 0; i < 10; i++) { final int num = i; JButton button = new JButton( new AbstractAction(String.valueOf(i)) { private static final long serialVersionUID = 1L; @Override public void actionPerformed(ActionEvent e) { // If somebody presses "=" and then types a number, start a new equation instead // of adding that number to the end like usual if (!isClear) { currentNumber = 0; isClear = true; } if (currentNumber == 0) { currentNumber = num; } else { currentNumber = currentNumber * 10 + num; } displayLabel.setText(String.valueOf(currentNumber)); } }); panel.add(button); } // +, -, x, / for (String op : ops) { JButton button = new JButton( new AbstractAction(op) { private static final long serialVersionUID = 1L; @Override public void actionPerformed(ActionEvent e) { numHistory.add(currentNumber); currentNumber = 0; opHistory.add(op); displayLabel.setText(op); } }); panel.add(button); } // = JButton button = new JButton( new AbstractAction("=") { private static final long serialVersionUID = 1L; private int i; @Override public void actionPerformed(ActionEvent e) { // Display result numHistory.add(currentNumber); while (opHistory.size() > 0) { if (opHistory.contains("x")) { i = opHistory.indexOf("x"); numHistory.set(i, numHistory.get(i) * numHistory.get(i+1)); } else if (opHistory.contains("/")) { i = opHistory.indexOf("/"); numHistory.set(i, numHistory.get(i) / numHistory.get(i+1)); } else if (opHistory.contains("+")) { i = opHistory.indexOf("+"); numHistory.set(i, numHistory.get(i) + numHistory.get(i+1)); } else if (opHistory.contains("-")) { i = opHistory.indexOf("-"); numHistory.set(i, numHistory.get(i) - numHistory.get(i+1)); } opHistory.remove(i); numHistory.remove(i+1); } displayLabel.setText(String.valueOf(numHistory.get(0))); currentNumber = 
numHistory.get(0); numHistory.clear(); if (isClear) { isClear = false; } } }); panel.add(button); } public static void main(String[] args) { new GUI(); } } I would appreciate any tips. Answer: package calculator; Package names should associate the software with the author, like com.github.razemoon.basicmdasjavacalculator. public class GUI extends JFrame { For Java naming conventions, you'd normally use UpperCamelCase, and use lowercase even for acronyms, like "Gui", "HtmlWidgetToolkit" or "HtmlCssParser". private static final long serialVersionUID = 1L; You only need this field if it is highly likely that the class will be serialized... in this case, most likely not. final String[] ops = new String[] {"+", "-", "x", "/"}; Why is this package-private? Also, final arrays are not as final as you'd think, the individual values can still be changed. You most likely want an Enum... actually, you want an interface, but in this example, an Enum would most likely do fine enough. private ArrayList<Integer> numHistory = new ArrayList<Integer>(); private ArrayList<String> opHistory = new ArrayList<String>(); Always try to use the lowest common interface for declarations, in this case List. private void setFrame() { this.setTitle(title); this.add(panel, BorderLayout.CENTER); this.setBounds(10,10,300,700); this.setResizable(false); this.setVisible(true); this.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); } Why are you using this here but nowhere else? this.setResizable(false); Why? Your frame is perfectly resizable as far as I can see. By setting it not-resizable you only make sure that your application becomes unusable under different LaFs and font-sizes. for (int i = 0; i < 10; i++) { I'm a very persistent advocate of the rule that you're only allowed to use single-letter variable names if you're dealing with dimensions.
for (int number = 0; number <= 9; number++) { // Or for (int digit = 0; digit <= 9; digit++) { final int num = i; Don't shorten variable names just because you can, the decreased amount of typing is not worth the decreased readability. Regarding the creation of buttons, I like to create helper methods and classes which make the code easier to read, in this case I'd go for lambdas, like this: private JButton createButton(String text, Runnable action) { return new JButton(new AbstractAction(text) { @Override public void actionPerformed(ActionEvent e) { action.run(); } }); } // In createButtons: panel.add(createButton(Integer.toString(number), () -> { // Code for the number button goes here. })); Another alternative would be to create a private class NumberAction which accepts a number in its constructor and performs the associated action. That would also allow you to get rid of the final-redeclaration. private int i; That's a very bad variable name. public GUI() { setPanel(); setFrame(); } private void setFrame() { this.setTitle(title); this.add(panel, BorderLayout.CENTER); this.setBounds(10,10,300,700); this.setResizable(false); this.setVisible(true); this.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); } // ... public static void main(String[] args) { new GUI(); } It would be better to split the responsibilities here. The frame itself is only responsible for getting its own layout going, while the main method should be responsible for getting the frame displayed. public GUI() { setPanel(); setFrame(); } private void setFrame() { this.setTitle(title); this.add(panel, BorderLayout.CENTER); this.setBounds(10,10,300,700); this.setResizable(false); } // ... public static void main(String[] args) { GUI gui = new GUI(); gui.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); gui.setVisible(true); } Your logic does not seem to contain any sort of error handling; I believe pressing an operator button twice in succession should yield an error.
Maybe a better approach would be to print the whole expression to the screen as entered, and then apply the Shunting Yard Algorithm to process that expression. Your logic doesn't do decimals, neither does it gracefully handle overflows. By changing your logic to use BigDecimals you could handle both easily. Note that you must create BigDecimals with an appropriate MathContext to have proper accuracy and behavior. If you want to read an already existing implementation, I can recommend exp4j for a math-expression library using floats, EvalEx for one using BigDecimal, and my own project jMathPaper for a calculator which sports different GUIs (regarding abstraction).
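To make the shunting-yard suggestion concrete, here is a compact sketch of the two-step evaluation (written in Python rather than Java purely for brevity; the `x` operator and integer division mirror the calculator above):

```python
import operator

# Precedence table for the calculator's four MDAS operators.
PREC = {"+": 1, "-": 1, "x": 2, "/": 2}
OPS = {"+": operator.add, "-": operator.sub,
       "x": operator.mul, "/": operator.floordiv}

def to_postfix(tokens):
    """Shunting yard: infix token list -> postfix (RPN) list."""
    out, stack = [], []
    for t in tokens:
        if t in PREC:
            while stack and PREC[stack[-1]] >= PREC[t]:
                out.append(stack.pop())
            stack.append(t)
        else:
            out.append(int(t))
    return out + stack[::-1]

def eval_postfix(rpn):
    """Evaluate RPN with a value stack."""
    stack = []
    for t in rpn:
        if isinstance(t, int):
            stack.append(t)
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(OPS[t](a, b))
    return stack[0]

# The example from the question: 2 x 3 + 4 - 5 / 5 = 9 (not 1)
print(eval_postfix(to_postfix("2 x 3 + 4 - 5 / 5".split())))  # 9
```

This replaces the repeated `opHistory.contains(...)` scans with a single left-to-right pass per step, and extending it to decimals (BigDecimal in Java) only changes the token type and operator table.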
{ "domain": "codereview.stackexchange", "id": 39776, "tags": "java, calculator, swing" }
Existing motion planner for double Ackermann kinematics?
Question: Hi, I have a robot with double Ackermann kinematics/steering; this means that, unlike in a conventional car, you can also steer the rear wheels and make use of crab-like steering. Is there any open source path planner for this kind of robot? Can, for example, OMPL be used for this? Cheers, Oier Originally posted by Oier on ROS Answers with karma: 145 on 2014-04-01 Post score: 0 Original comments Comment by ahendrix on 2014-04-01: Are you looking for a planner for the ROS navigation stack, or just a generic motion planning library? Comment by Oier on 2014-04-07: @ahendrix I am looking just for a generic motion planning library, because the ROS navigation stack and MoveIt have too much overhead/features for my application's requirements. But the robot is built upon ROS and I use rviz for simulation. What motion planners can be used for double Ackermann? Comment by abraham on 2019-02-27: can you please post the link to your urdf file? I am planning to do SLAM with double Ackermann, but I can't find any sample to start with. Answer: I'd also recommend using the sbpl_lattice_planner. We selectively use it on our double Ackermann steering USAR robots in case our exploration planner generates a plan that cannot be followed by the vehicle (because it would need to turn in place). That approach works pretty well both in simulation and with the real robot. Originally posted by Stefan Kohlbrecher with karma: 24361 on 2014-04-08 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by Oier on 2014-04-08: thanks @Stefan Kohlbrecher! What made you choose sbpl_lattice_planner over a sampling based method like RRT with motion primitives? Comment by Stefan Kohlbrecher on 2014-04-09: SBPL was the only motion primitive based approach that was proven and already available as a ROS package, so we tried it first, with success.
Comment by schultza on 2016-03-22: @Stefan Kohlbrecher, can you give a little example of how you created the mprim file with Matlab? I don't get the introduction located at their homepage
{ "domain": "robotics.stackexchange", "id": 17491, "tags": "ros, moveit, path, planner, ompl" }
Hector pose estimation works, but only orientation and altitude are estimated
Question: Hello, I am trying to use the hector_pose_estimation package for a real quadcopter. I have published information from the IMU and barometer using sensor_msgs/Imu, hector_uav_msgs/Altimeter and pressure height messages using the geometry_msgs/PointStamped message (just like the hector_quadrotor package does in the simulator). When I visualise the pose estimate using rviz I see changes in orientation and height, but it doesn't move sideways; it's stuck in the center and only height and orientation change. Am I missing something, or does the hector_pose_estimation package not calculate movement on the XY axes? Using this package with the simulator works fine, but with my real data from sensors the pose is stuck in the center. Originally posted by rock-ass on ROS Answers with karma: 55 on 2014-04-17 Post score: 1 Original comments Comment by ahendrix on 2014-04-17: Does your IMU data include accelerometer and gyroscope data, or only gyro data? Comment by rock-ass on 2014-04-17: IMU does include all data from accelerometers and gyroscopes. Double checked the data standard in sensor_data, also compared with data provided by the gazebo simulator. Answer: Finally figured out: the pose estimation package requires GPS sensor_msgs/NavSatFix data. Now I can get the position (x, y) estimated. Also I noticed some strange behavior - the algorithm estimated that the quadrotor was falling (estimated altitude was negative), because the IMU sensor Z axis was -9.8. I had to invert the IMU's Z axis value to get the algorithm to stabilize the quadcopter's altitude. Hope this helps if someone encounters a similar problem. Originally posted by rock-ass with karma: 55 on 2014-04-30 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by alfa_80 on 2014-05-01: If I'm not mistaken the IMU Z-axis is +ve if going downward. Comment by rock-ass on 2014-05-01: As far as I understand positive Z would be going down faster than gravity pulls you. But this time +10Z means hover
{ "domain": "robotics.stackexchange", "id": 17690, "tags": "imu, hector-quadrotor" }
openni_audio for Xtion Pro Live
Question: Hi all, I am trying to use the two microphones of the Xtion Pro Live sensor for my research. I found a package openni_audio here, https://kforge.ros.org/openni/openni_audio After removing line #9 () in the manifest.xml file (I cannot find any package named "openni"), I can make the package. However, I don't know how to run/launch the node because there is no launch file. Can anybody share your experience with this node? To Mr. Micheal Ferguson, thanks for your code. But could you share more? Originally posted by Quang Nguyen on ROS Answers with karma: 1 on 2012-10-09 Post score: 0 Answer: This is highly experimental code that just barely works -- really more a proof of concept than anything else. ROS doesn't have much in the way of audio support and so you won't find much that works with the output of this node (there is an example script in Python which might happen to play audio using the 'ao' library). That said, this is designed to be a nodelet that you would add to your normal openni launch file. It would look something like: <node pkg="nodelet" type="nodelet" name="driver" args="load openni_audio/AudioNodelet $(arg manager) $(arg bond)" respawn="$(arg respawn)"> <param name="channel" value="0" /> </node> if you were to add it directly to the openni launch files. The important parameter is 'channel' which specifies which microphone to connect to. If you want to connect to both sides, you would spawn two nodelets, each serving a different channel. Originally posted by fergs with karma: 13902 on 2012-10-15 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 11299, "tags": "ros" }
Question about the norm of the four-velocity being equal to $c$
Question: On the way to the Einstein equation we derived the four-velocity: $$u^\mu=\gamma(c,v^k)$$ with $v^k$ being the 3-velocity, which can be very low ($ |v|<<c$). However, the square of the four-velocity is $$\eta_{\mu\nu}u^\mu u^\nu=\gamma^2(c^2-v^1v^1-v^2v^2-v^3v^3).$$ [We use the diag($+---$) Minkowski metric.] $$\eta_{\mu\nu}u^\mu u^\nu=\gamma^2(c^2-v^2)$$ $v=|v|$ ($ |v|<<c$). $$\eta_{\mu\nu}u^\mu u^\nu=\frac{1}{1-\frac{v^2}{c^2}}(c^2-v^2)=c^2$$ Since it is a tensor equation it is true for all moving systems, for example also for the driver of a car which moves at 10 km per hour. How can it be grasped that this driver has a 4-speed of $c$? Answer: One easy way of grasping it is to say that an observer at rest moves forward along the time direction at lightspeed, while the driver tilts their velocity vector so it has a spatial component. It turns out that four-accelerations are spacelike and orthogonal to the four-velocity, allowing objects to stay on-shell and the velocity to remain $c$. ...except that saying a velocity through time has a certain value is pretty incorrect! Velocities as we know them are distance traversed per unit of time, so speaking of a velocity through time does violence to how we normally use the word. One way out that is philosophically and mathematically fine is to just say that 4-velocities are not at all like our everyday velocities and should be treated as a very different kind of object that just happens to have an ordinary velocity as a part of itself (plus the time component, which is something else). In the spacetime picture worldlines are parametrized by the proper time, and the 4-velocity is the change in position per unit of proper time. That makes the constancy of 4-velocity easier to grasp: it is like how for a curve in 3D space parametrized by arc length the tangent vector is constant in length but changes direction.
So the driver is just driving along a straight line in spacetime that is slightly tilted compared to an observer at relative rest. The 4-velocity points along this line, and has 3D components corresponding to the spatial velocity and a time component linked to the time dilation.
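A quick numerical check of this (an editorial sketch, not part of the original answer) for the 10 km/h driver. At everyday speeds the relativistic correction is so far below double precision that the ratio comes out as exactly 1.0:

```python
# Verify eta_{mu nu} u^mu u^nu = c^2 for a car at 10 km/h, (+---) signature.
c = 299_792_458.0                     # speed of light, m/s
v = 10 / 3.6                          # 10 km/h in m/s
gamma = 1.0 / (1.0 - (v / c) ** 2) ** 0.5

u = (gamma * c, gamma * v, 0.0, 0.0)  # four-velocity u^mu = gamma*(c, v, 0, 0)
norm_sq = u[0]**2 - u[1]**2 - u[2]**2 - u[3]**2

print(norm_sq ** 0.5 / c)             # 1.0 -- the "4-speed" is c for any v < c
```

Here `gamma` itself rounds to 1.0 in floating point, since $(v/c)^2 \approx 10^{-16}$; the norm is still $c$ because the tiny spatial component is subtracted back out.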
{ "domain": "physics.stackexchange", "id": 98371, "tags": "special-relativity, spacetime, speed-of-light, acoustics, velocity" }
Infinite plane gravity: what is "mass density per unit area"?
Question: Recently I learned that the gravity of an infinite plane is independent of the distance from that plane. In fact it is $$g = 2\pi G \sigma$$ where $\sigma$ is "the mass density of the plane per unit area". I am struggling to understand what this actually means. I do understand mass density (per volume), but "per area"? Would this not always be zero? Looking for example at a $2\,mm$ thick sheet of copper, where copper has a mass density of $\rho_{\text{Cu}} = 8.92\,g/cm^3$. What then is the $\sigma$ on the surface of the sheet? Is it just (at least approximately) the stacked density on each surface point, i.e. $\sigma = w\cdot\rho$ where $w=0.2\,cm$ is the thickness of the plate? What if the plate is not negligibly thick but, say, $w=1\,km$? Edit: removed reference to finite plane, some comments may no longer apply. Answer: Let's start with 1D. If you were to buy climbing rope, you might want to ask the vendor: "how much does a meter of rope weigh?". This question is the same as asking what the mass per unit length of the rope is. Yes, you might care what the actual density (mass per unit volume) of the rope is, but since the main variable will be the length, you abstract the girth away. mass per unit length = cross-sectional area * density Same for the 2D case. Say you are buying material to build a sail. You want to know the mass per unit area because it is a more relevant piece of information than the density. So you see, we are not talking about exactly 1- or 2-dimensional objects (which would indeed have zero mass) but about situations in which only one or two dimensions of the object in question matter. In the case of an infinite plane, the way we derive that formula is to look at how much mass is inside an (imagined) cylinder straddling the plane, where the bases of the cylinder (the two disks) are parallel to the plane.
That mass will always be proportional to the area of the base, and completely independent of the height of the cylinder (can you see why?). So we don't care about the density or the thickness of the plane separately: only their product, the mass per unit area $\sigma = w\cdot\rho$, matters.
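Plugging the question's copper-sheet numbers into $\sigma = w\cdot\rho$ and $g = 2\pi G\sigma$ (a quick editorial sketch to make the units concrete):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
rho_cu = 8920.0      # copper density, kg/m^3  (= 8.92 g/cm^3)
w = 0.002            # sheet thickness: 2 mm, in m

sigma = w * rho_cu               # surface mass density, kg/m^2
g = 2 * math.pi * G * sigma      # field of the (idealized) infinite sheet

print(sigma)   # 17.84 kg/m^2
print(g)       # ~7.5e-9 m/s^2 -- tiny, as expected for a thin sheet
```

For the $w = 1\,km$ plate the same $\sigma = w\rho$ still applies, but the "distance-independent" formula then only holds at distances where the plate can be idealized as a plane of that surface density.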
{ "domain": "physics.stackexchange", "id": 50403, "tags": "homework-and-exercises, newtonian-mechanics, newtonian-gravity, units, dimensional-analysis" }
Closure of CFL under right-quotient with regular languages
Question: Let $A/B$ = $\{ w \mid wx \in A$ for some $x \in B \}$. Show that if A is context free and B is regular, then $A/B$ is context free. My interpretation of this is that we need to show that if a string $wx$ is accepted by a CFG, and we know that $x$ is accepted by a regular language (and therefore is also accepted by a context-free language), then $w$ must also be accepted by a CFG. My initial thought on how to solve this would be a proof by contradiction in which we assume that $A$ is context free, $B$ is regular, and then assume that $A/B$ is not context-free. Since $A$ is context free, we can construct an equivalent PDA that accepts $A$. From here, my thought was to take an arbitrary $wx$ that is accepted by $A$, such that $x \in B$. We can then construct another PDA based on the first that only accepts $wx$. We could then break the PDA into two pieces: one that accepts $w$ and one that accepts $x$ (with the two pieces concatenated together). Since there then would exist a PDA that accepts just $w$, and $w$ is arbitrary insofar as $wx$ was arbitrary, $A/B$ must therefore be context-free after all (contradiction). Will this approach work? (Is this a good general approach?) If so, how would I go about breaking the PDA that accepts $wx$ into chunks formally? Answer: Imagine you have a pushdown automaton (PDA) $X$ for $A$ and a DFA $Y$ for $B$. You want to build a PDA $Z$ for $A/B$. You can do as follows: the states $Q_Z$ of $Z$ are $Q_X\times Q_Y$. There are two phases: Phase 1: You just read the input word and advance in $X$, states of $Y$ are ignored. This corresponds to the word $w$ of your language. Phase 2: When you reach the end of $w$, you are allowed to perform $\epsilon$-transitions in order to reach an accepting state of $X\times Y$. So you guess a word $x$ that will continue to advance in $X$ and start to advance in $Y$, and that has to reach acceptance in both automata simultaneously.
If there is an accepting run of this automaton, then the input word $w$ is in $A/B$ since you guessed the witness $x$ (and vice-versa, if $x$ exists, then it can be guessed by your automaton $Z$). The reason why $B$ has to be regular instead of CFL is that you cannot manage two stacks at the same time, if you want to stay CFL.
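The construction above is about automata, but the definition itself can be sanity-checked by brute force on small languages (an editorial sketch; here $A = \{a^n b^n\}$ is context-free, $B = b^*$ is regular, and $A/B$ comes out as $\{a^n b^k : k \le n\}$):

```python
from itertools import product

def in_A(s):
    """A = { a^n b^n : n >= 0 }, a classic context-free language."""
    n = len(s) // 2
    return s == "a" * n + "b" * n

def in_B(s):
    """B = b*, a regular language."""
    return set(s) <= {"b"}

def strings(alphabet, max_len):
    for length in range(max_len + 1):
        for tup in product(alphabet, repeat=length):
            yield "".join(tup)

# A/B = { w : wx in A for some x in B }, checked up to bounded lengths.
quotient = {w for w in strings("ab", 4)
            if any(in_A(w + x) for x in strings("b", 4))}

print(sorted(quotient, key=lambda s: (len(s), s)))
# ['', 'a', 'aa', 'ab', 'aaa', 'aab', 'aaaa', 'aaab', 'aabb']
```

Every member is an $a^n b^k$ prefix that some tail of b's (the guessed witness $x$ from Phase 2) can complete to a word of $A$; strings like "b" or "abb" are correctly excluded.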
{ "domain": "cs.stackexchange", "id": 14693, "tags": "formal-languages, context-free, closure-properties" }
What's the difference between rosrun and register_ros_package?
Question: I followed knowrob tutorials KnowRob basics and Computables. The way to use rosrun is: rosrun rosprolog rosprolog mod_vis ?- register_ros_package(ias_semantic_map). The way to use register_ros_package: rosrun rosprolog rosprolog ias_semantic_map I know the second way can use something like this rdf_triple(knowrob:'on-Physical', Top, Bottom). But what's the major difference idea between them? Thank you~ I run: sam@sam:~/code/ros/test$ rosrun rosprolog rosprolog ias_semantic_map % library(swi_hooks) compiled into pce_swi_hooks 0.00 sec, 3,616 bytes % library(error) compiled into error 0.00 sec, 17,688 bytes % library(lists) compiled into lists 0.00 sec, 41,424 bytes % library(shlib) compiled into shlib 0.00 sec, 62,200 bytes % library(option) compiled into swi_option 0.01 sec, 15,080 bytes % library(process) compiled into process 0.01 sec, 93,400 bytes % /opt/ros/electric/stacks/knowrob/rosprolog/prolog/init.pl compiled 0.01 sec, 100,848 bytes % library(jpl) compiled into jpl 0.01 sec, 285,496 bytes % library(sgml) compiled into sgml 0.00 sec, 38,464 bytes % library(quintus) compiled into quintus 0.00 sec, 21,384 bytes % rewrite compiled into rewrite 0.00 sec, 34,768 bytes % library(uri) compiled into uri 0.01 sec, 10,880 bytes % library(record) compiled into record 0.00 sec, 31,072 bytes % rdf_parser compiled into rdf_parser 0.01 sec, 161,832 bytes % library(gensym) compiled into gensym 0.00 sec, 4,432 bytes % rdf_triple compiled into rdf_triple 0.00 sec, 37,192 bytes % library(rdf) compiled into rdf 0.01 sec, 271,360 bytes % library(debug) compiled into prolog_debug 0.00 sec, 21,320 bytes % library(assoc) compiled into assoc 0.01 sec, 22,640 bytes % library(sgml_write) compiled into sgml_write 0.01 sec, 105,272 bytes % library(nb_set) compiled into nb_set 0.00 sec, 5,968 bytes % library(utf8) compiled into utf8 0.00 sec, 14,112 bytes % library(url) compiled into url 0.00 sec, 113,584 bytes % rdf_cache compiled into rdf_cache 0.01 sec, 15,904 bytes % 
library(semweb/rdf_db) compiled into rdf_db 0.03 sec, 681,176 bytes % comp_similarity compiled into comp_similarity 0.03 sec, 708,664 bytes % /opt/ros/electric/stacks/knowrob/ias_prolog_addons/prolog/init.pl compiled 0.03 sec, 709,544 bytes % library(broadcast) compiled into broadcast 0.00 sec, 7,344 bytes % library(semweb/rdf_edit) compiled into rdf_edit 0.01 sec, 86,744 bytes % library(semweb/rdfs) compiled into rdfs 0.00 sec, 25,992 bytes % library(semweb/owl) compiled into t20_owl 0.01 sec, 68,496 bytes % library(socket) compiled into socket 0.00 sec, 10,232 bytes % library(base64) compiled into base64 0.00 sec, 17,400 bytes % library(http/http_open.pl) compiled into http_open 0.00 sec, 77,136 bytes % library(thea/owl_parser) compiled into owl_parser 0.01 sec, 156,080 bytes % library(odbc) compiled into odbc 0.00 sec, 37,344 bytes % library(semweb/rdfs_computable) compiled into rdfs_computable 0.01 sec, 87,536 bytes % /opt/ros/electric/stacks/knowrob/semweb/prolog/init.pl compiled 0.10 sec, 1,475,208 bytes % library(tf_prolog) compiled into tf_prolog 0.00 sec, 19,296 bytes % /opt/ros/electric/stacks/knowrob/tf_prolog/prolog/init.pl compiled 0.00 sec, 20,424 bytes % library(knowrob_owl) compiled into knowrob_owl 0.00 sec, 13,512 bytes % library(knowrob_perception) compiled into knowrob_perception 0.01 sec, 22,128 bytes % library(knowrob_objects) compiled into knowrob_objects 0.01 sec, 77,472 bytes % library(knowrob_coordinates) compiled into knowrob_coordinates 0.01 sec, 18,048 bytes % /opt/ros/electric/stacks/knowrob/knowrob_objects/prolog/init.pl compiled 0.02 sec, 128,384 bytes % library(owl_export) compiled into owl_export 0.00 sec, 23,096 bytes % /opt/ros/electric/stacks/knowrob/knowrob_common/prolog/init.pl compiled 0.03 sec, 163,632 bytes % library(knowrob_actions) compiled into knowrob_actions 0.00 sec, 12,488 bytes % /opt/ros/electric/stacks/knowrob/knowrob_actions/prolog/init.pl compiled 0.03 sec, 187,504 bytes % library(util) compiled into util 0.00 
sec, 27,328 bytes % library(classifiers) compiled into classifiers 0.00 sec, 47,864 bytes % library(jython) compiled into jython 0.01 sec, 24,256 bytes % Parsed "owl.owl" in 0.00 sec; 169 triples % Parsed "rdf-schema.xml" in 0.00 sec; 87 triples % Parsed "knowrob.owl" in 0.08 sec; 3,721 triples % /opt/ros/electric/stacks/knowrob/ias_knowledge_base/prolog/init.pl compiled 0.77 sec, 3,506,928 bytes % Parsed "comp_temporal.owl" in 0.01 sec; 164 triples % library(comp_temporal) compiled into comp_temporal 0.60 sec, 344,904 bytes % /opt/ros/electric/stacks/knowrob/comp_temporal/prolog/init.pl compiled 0.76 sec, 495,768 bytes % Parsed "comp_spatial.owl" in 0.02 sec; 459 triples % library(comp_spatial) compiled into comp_spatial 0.23 sec, 371,056 bytes % /opt/ros/electric/stacks/knowrob/comp_spatial/prolog/init.pl compiled 1.94 sec, 4,540,456 bytes % library(semweb/actionmodel) compiled into actionmodel 0.00 sec, 106,752 bytes % Parsed "ccrl2_semantic_map.owl" in 0.04 sec; 2,226 triples % ccrl2_semantic_map compiled 0.61 sec, 1,038,040 bytes % semantic_map_utils compiled into ias_semantic_map 0.00 sec, 15,136 bytes % /opt/ros/electric/stacks/knowrob/ias_semantic_map/prolog/init.pl compiled 2.55 sec, 5,599,600 bytes ?- visualisation_canvas(C). ERROR: toplevel: Undefined procedure: visualisation_canvas/1 (DWIM could not correct goal) ?- Originally posted by sam on ROS Answers with karma: 2570 on 2012-08-06 Post score: 0 Answer: Both methods add the project directory to Prolog's library search path and load the prolog/init.pl file inside the respective package. rosrun is a shell command that you call from e.g. your Bash shell. It starts Prolog and initially loads the init.pl of the package you give as argument, which then recursively loads all init.pl of all dependencies. Once you have started a Prolog shell, you can use register_ros_package (which is a Prolog predicate, no shell/bash command) to load additional packages. 
In the examples you gave, mod_vis and ias_semantic_map are independent of each other (neither depends on the other). You therefore can't load both with a single command but have to load one first and then the other. Originally posted by moritz with karma: 2673 on 2012-08-06 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by sam on 2012-08-06: If both ways can load ias_semantic_map, why couldn't the way which uses rosrun run visualisation_canvas? I have edited the original post. Thank you~ Comment by moritz on 2012-08-06: You either need to use $ rosrun rosprolog rosprolog mod_vis ?- register_ros_package(ias_semantic_map). or $ rosrun rosprolog rosprolog ias_semantic_map ?- register_ros_package(mod_vis). Comment by moritz on 2012-08-06: You can check with rospack, neither package depends on the other one, so neither will be initialized if you launch KnowRob with the other one as argument. You need to load the package you haven't used as an argument to rosprolog manually using register_ros_package.
{ "domain": "robotics.stackexchange", "id": 10491, "tags": "knowrob, rosrun" }
An utterly pointless jQuery program
Question: This is the first jQuery program I've written, and it doesn't do much, but I'd appreciate the feedback. Just click on the different colored buttons to do things. Codepen var $greenElement = $( '<div style="width:750px;height:25px; \ background-color:green;" class="green-bar"> \ <p>Hello</p> \ </div>' ); var $redElement = $( '<div style="width:750px;height:25px; \ background-color:red;" class="red-bar"> \ <p>Goodbye</p> \ </div>' ); /* Main JQuery loop */ $(document).ready(function() { /* Add a red item and fade to the right value*/ $('.add-item-red').mouseenter(function() { $(this).fadeTo('fast', 0.4); $(this).click(function() { $('.output-start').after($redElement); }); }); /* Check if the mouse is no longer on red button */ $('.add-item-red').mouseout(function() { $(this).fadeTo('fast', 1); }); /* Add a green item and fade to the right amount */ $('.add-item-green').mouseenter(function() { $(this).fadeTo('fast', 0.4); $(this).click(function() { $('.output-start').after($greenElement); }); }); /* Check if mouse is no longer on green button */ $('.add-item-green').mouseout(function() { $(this).fadeTo('fast', 1); }); /* Reset all the items */ $('.reset-items').mouseenter(function() { $(this).fadeTo('fast', 0.4); $('.reset-items').click(function() { $('.green-bar').remove(); $('.red-bar').remove(); }); }); /* Check if mouse is no longer on white button */ $('.reset-items').mouseout(function() { $(this).fadeTo('fast', 1); }); }); body { background-color: black; } .button-bar div{ display: table; display: table-cell; } .add-item-red { width: 50px; height: 50px; background-color: red; border-color: black; border-style: solid; border-width: 3.5px; } .add-item-green { width: 50px; height: 50px; background-color: green; border-color: black; border-style: solid; border-width: 3.5px; } .reset-items { width: 50px; height: 50px; background-color: white; border-color: black; border-style: solid; border-width: 3.5px; } .output-start { width: 50px; height: 15px; } 
<html> <head> <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> </head> <body> <div class="button-bar"> <div class="add-item-red"> </div> <div class="add-item-green"> </div> <div class="reset-items"> </div> </div> <div class="output-start"> </div> </body> </html> Answer: Ok, so there's two different things here: Hover effects (fade in/out) Actions Now, #1 is the same for all three elements, and thus shouldn't need to be repeated in code. But #2 is expressly different for each button, which means it probably shouldn't mixed with the hover effects. Aside: Speaking of duplication, your CSS is full of it too. You should look into that. You should also avoid naming things for how they look. E.g. add-item-red is fine until that item is no longer red; usually it'll have some purpose other than being red. Its color is a presentation detail. Name stuff for what it does, not how it looks. Anyway, these days (read: modern browsers), I'd probably just use CSS transitions to handle the hover effect; no javascript necessary. For instance: div { background: green; padding: 0.3em; display: inline-block; transition-duration: 0.5s; } div:hover { opacity: 0.4; } <div>Hover</div> But since this is about jQuery, here's how I'd handle it with a common class name for all the buttons (which, incidentally, I'd use <button> tags for), and jQuery's appropriately named .hover() event handler. 
$(".fade").hover(function () { $(this).fadeTo('fast', 0.4); }, function () { $(this).fadeTo('fast', 1.0); }); body { background: black; } button { border: none; padding: 1em; } #helloButton { background: green; } #goodbyeButton { background: red; } #clearButton { background: white; } <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <button type="button" id="helloButton" class="fade">One</button> <button type="button" id="goodbyeButton" class="fade">Two</button> <button type="button" id="clearButton" class="fade">Three</button> Now, you're using mouseenter combined with mouseout. But that's not really "symmetrical". The opposite of mouseenter is mouseleave (and the opposite of mouseout is mouseover). The enter/leave events are originally Internet Explorer-only events which jQuery has replicated, and they behave differently than the standard over/out events with regard to nested elements. jQuery's docs have a discussion of this and a demo to illustrate. It happens to work in your case, but it's a bit like saying that the opposite of multiplying is subtracting; it just happens to work out for certain numbers, but that doesn't make it right. Next, there's the click actions. You're adding those inside the mouseenter event. This is a big no-no. It means that every time you mouse over a button, a new click event handler gets added! You just don't notice it because you're inserting and re-inserting the same element. But if you mouse over the red button 3 times before clicking it, the effect is that you're clicking it 3 times because now you've got 3 separate but identical event handlers attached. Yet another reason to keep the actions separate from the hover effects. You can either attach the event handlers to an element by its ID (as below), or you can use some other means of linking an element to an action. 
$(".fade").hover(function () { $(this).fadeTo('fast', 0.4); }, function () { $(this).fadeTo('fast', 1.0); }); var hello = $("<p></p>").text("Hello").css({background: "green"}); var goodbye = $("<p></p>").text("Goodbye").css({background: "red"}); $("#helloButton").on("click", function () { $("#output").prepend(hello); }); $("#goodbyeButton").on("click", function () { $("#output").prepend(goodbye); }); $("#clearButton").on("click", function () { $("#output").empty(); }); body { background: black; } button { border: none; padding: 1em; } #helloButton { background: green; } #goodbyeButton { background: red; } #clearButton { background: white; } <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <button type="button" id="helloButton" class="fade">One</button> <button type="button" id="goodbyeButton" class="fade">Two</button> <button type="button" id="clearButton" class="fade">Three</button> <div id="output"></div> You'll notice a few things here: I'm prepending the elements to the output div, rather than inserting them next to it. This makes more sense to me since now, all the output is, well, in the output. Not beside it. And it means I can just call empty to clean up. I'm constructing the elements using jQuery rather than with strings of HTML. I generally don't like a ton of hardcoded HTML in my JS. I also don't need IDs for them, because, again, I just need to call empty. I might go a step further and add the elements in the HTML itself, and simply hide them until needed, which would be even simpler in a lot of ways, and keep the JS free of hardcoded elements of any kind. 
In the end, I'd probably replicate your code with something like this: $("#helloButton").on("click", function () { $("#hello").show(); }); $("#goodbyeButton").on("click", function () { $("#goodbye").show(); }); $("#clearButton").on("click", function () { $("#output").children().hide(); }); body { background: black; } button { border: none; padding: 1em; display: inline-block; } .fade { transition-duration: 0.5s; } .fade:hover { opacity: 0.4; } #helloButton { background: green; } #goodbyeButton { background: red; } #clearButton { background: white; } #hello { background: green; } #goodbye { background: red; } #output p { display: none; } <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <button type="button" id="helloButton" class="fade">One</button> <button type="button" id="goodbyeButton" class="fade">Two</button> <button type="button" id="clearButton" class="fade">Three</button> <div id="output"> <p id="hello">Hello</p> <p id="goodbye">Goodbye</p> </div> It doesn't make the paragraphs switch places, but I don't think that's the most salient feature either, to be honest.
{ "domain": "codereview.stackexchange", "id": 12607, "tags": "javascript, jquery, html, css" }
General resolution in first order logic
Question: Assuming you have a formula in first order logic like $$(\forall_x p(x) \land \forall_x q(x)) \rightarrow \forall_x(p(x) \land q(x))$$ (which seems valid?) Converting the formula to CNF: $$(\neg p(x) \vee \neg q(y) \vee p(z)) \land (\neg p(x) \vee \neg q(y) \vee q(z)) $$ I don't see how I can apply general resolution to get the empty clause in this case which puzzles me since I believe the formula is valid. Answer: As you correctly point out, the original formula is valid (in every model either there is some element for which p or q doesn't hold, or p and q hold for all elements). To prove that your formula is valid, you cannot use resolution directly. Recall that with resolution, you can derive the empty clause from a clause set iff the clause set is unsatisfiable. Your formula is valid iff its negation is unsatisfiable. So to prove your formula is valid, you have to derive the empty clause from the clause set corresponding to the negation of your formula. Let $\varphi$ be your original formula. $$\lnot \varphi \equiv \forall_x \forall_y \exists_z (px \land qy \land (\lnot pz \lor \lnot qz))$$ Skolemization replaces $z$ by a function symbol $f(x,y)$: $$\lnot \varphi \equiv_{SAT} \forall_x \forall_y (px \land qy \land (\lnot pfxy \lor \lnot qfxy))$$ The corresponding clause set: $$\{\{px\},\{qy\},\{\lnot pfxy, \lnot qfxy\}\}$$ From there it should be straight-forward to derive the empty clause: Apply renaming $\nu_1 = \{x/x_1\}$ to $\{px\}$, yielding $\{px_1\}$, then resolve $\{p x_1 \}$ with $\{ \lnot pfxy, \lnot qfxy\}$ by $\sigma_1 = \{ x_1 / fxy\}$, yielding $\{\lnot q fxy\}$ Apply renaming $\nu_2 = \{y / y_1\}$ to $\{qy\}$, yielding $\{qy_1\}$, then resolve $\{qy_1\}$ with $\{ \lnot qfxy \}$ by $\sigma_2 = \{y_1/fxy\}$, yielding the empty clause $\Box$ In conclusion, we have shown by resolution that $\lnot \varphi$ is unsatisfiable. Therefore $\varphi$ is valid.
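The two resolution steps above can be checked mechanically. Below is a minimal sketch (my own term representation and helper names, not from the answer) with a tiny unifier and a resolve step; the variables and the Skolem function mirror the answer's $x_1$, $y_1$ and $f$:

```python
# Sketch: first-order resolution for the clause set {p x1}, {q y1},
# {~p f(x,y), ~q f(x,y)}. Strings are variables, tuples ('f', ...) are
# function applications; a literal is (sign, predicate, term).

def substitute(term, s):
    """Apply substitution s (a dict var -> term) to a term."""
    if isinstance(term, str):                      # variable
        return substitute(s[term], s) if term in s else term
    return (term[0],) + tuple(substitute(t, s) for t in term[1:])

def unify(a, b, s=None):
    """Most general unifier of two terms, or None (no occurs check,
    which is fine for this example)."""
    s = dict(s or {})
    a, b = substitute(a, s), substitute(b, s)
    if a == b:
        return s
    if isinstance(a, str):
        s[a] = b
        return s
    if isinstance(b, str):
        s[b] = a
        return s
    if a[0] != b[0] or len(a) != len(b):
        return None
    for x, y in zip(a[1:], b[1:]):
        s = unify(x, y, s)
        if s is None:
            return None
    return s

def resolve(c1, c2, i, j):
    """Resolve literal i of clause c1 against literal j of clause c2."""
    (s1, p1, t1), (s2, p2, t2) = c1[i], c2[j]
    assert s1 != s2 and p1 == p2
    sub = unify(t1, t2)
    assert sub is not None
    rest = [l for k, l in enumerate(c1) if k != i] + \
           [l for k, l in enumerate(c2) if k != j]
    return [(s, p, substitute(t, sub)) for s, p, t in rest]

# Clauses with variables already renamed apart (the answer's nu_1, nu_2)
c1 = [('+', 'p', 'x1')]
c2 = [('+', 'q', 'y1')]
c3 = [('-', 'p', ('f', 'x', 'y')), ('-', 'q', ('f', 'x', 'y'))]

step1 = resolve(c1, c3, 0, 0)     # {~q f(x,y)}  via sigma_1 = {x1 / f(x,y)}
step2 = resolve(c2, step1, 0, 0)  # empty clause via sigma_2 = {y1 / f(x,y)}
print(step1, step2)
```

Deriving the empty list in `step2` corresponds to deriving $\Box$, i.e. the clause set is unsatisfiable and the original formula is valid.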
{ "domain": "cs.stackexchange", "id": 15007, "tags": "logic, artificial-intelligence, first-order-logic" }
Secure database connection in PHP
Question: I have this code I use to connect to the database and get a thing from it securely. How can I make it more secure?

class db {
    // The database connection
    protected static $connection;

    /**
     * Connect to the database
     *
     * @return bool false on failure / mysqli MySQLi object instance on success
     */
    protected function connect() {
        // Try and connect to the database
        if (!isset(self::$connection)) {
            // Load configuration as defined in config.php
            require_once('app/config/config.php');
            self::$connection = new mysqli(DB_HOST, DB_USERNAME, DB_PASSWORD, DB_DBNAME);
        }

        // If connection was not successful, handle the error
        if (self::$connection === false) {
            // Handle error - notify administrator, log to a file, show an error screen, etc.
            return false;
        }
        return self::$connection;
    }

    /*
       Query the database
       @param $query The query string
       @return mixed The result of the mysqli::query() function
    */
    private function query($query) {
        // Connect to the database
        $connection = $this->connect();
        // Query the database
        $result = $connection->query($query);
        return $result;
    }

    /*
       Fetch rows from the database (SELECT query)
       @param $query The query string
       @return bool False on failure / array Database rows on success
    */
    public function select($query) {
        $result = $this->query($query);
        if ($result === false) {
            return false;
        }
        $rows = array();
        while ($row = $result->fetch_assoc()) {
            $rows[] = $row;
        }
        return $rows;
    }
}

When I want information from the DB, for example:

$db = new db();
$result = $db->select("SELECT * FROM users WHERE name = John");

Answer: You're asking the wrong question. The right question is "What should I use instead of query?" The answer is that if you want to reinvent the wheel, you should use prepared statements, and if you don't then you should use an ORM which uses prepared statements under the hood. Prepared statements allow you to pass in parameters and ensure that the escaping is done correctly. E.g. you would use "SELECT * FROM foo WHERE bar = ?"
passing parameter "baz" rather than "SELECT * FROM foo WHERE bar = " . mysqli_real_escape_string("baz").
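For readers less familiar with PHP, the same parameter-binding idea can be sketched in Python's sqlite3 module (the table and data here are purely illustrative, not part of the original question):

```python
# Sketch of prepared-statement-style parameter binding with sqlite3:
# the driver binds the value safely, so a malicious input stays a value
# instead of becoming part of the SQL text.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("John",))

# An injection attempt is treated as an ordinary string value.
evil = "x' OR '1'='1"
print(conn.execute("SELECT * FROM users WHERE name = ?", (evil,)).fetchall())
# []  -- matches nothing

rows = conn.execute("SELECT * FROM users WHERE name = ?", ("John",)).fetchall()
print(rows)
# [('John',)]
```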
{ "domain": "codereview.stackexchange", "id": 23507, "tags": "php, mysqli, securestring" }
Algebraic trouble in gauge invariance of Schrodinger equation
Question: I've been trying to prove a component of a proof the gauge invariance of the schrodinger equation. Specifically the part in the first answer here where this is stated: $$\big(\frac{\nabla}{i}-q(\vec{A} +\nabla \Lambda)\big)e^{iq\Lambda}\psi = e^{iq\Lambda}\big(\frac{\nabla}{i}-q\vec{A}\big)\psi$$ By the product rule: $$\big(\frac{\nabla}{i}-q(\vec{A} +\nabla \Lambda)\big)e^{iq\Lambda}\psi = (e^{iq\Lambda} \frac{\nabla}{i} \psi + \psi \frac{\nabla}{i} e^{iq\Lambda} -e^{iq\Lambda}q\vec{A}\psi+ \nabla\Lambda e^{iq\Lambda}\psi) $$ where $$\nabla\Lambda e^{iq\Lambda}\psi = \Lambda e^{iq\Lambda}\nabla\psi + \Lambda\psi\nabla e^{iq\Lambda} + e^{iq\Lambda}\psi\nabla\Lambda $$ and with Lambda as a function of the spatial coordinates: $$\nabla e^{iq\Lambda} = iq e^{iq\Lambda}\nabla \Lambda$$ Then $$\nabla\Lambda e^{iq\Lambda}\psi = \Lambda e^{iq\Lambda}\nabla\psi + iq\Lambda\psi e^{iq\Lambda}\nabla \Lambda+ e^{iq\Lambda}\psi\nabla\Lambda $$ and $$(e^{iq\Lambda} \frac{\nabla}{i} \psi + \psi \frac{\nabla}{i} e^{iq\Lambda} -e^{iq\Lambda}q\vec{A}\psi+ \nabla\Lambda e^{iq\Lambda}\psi) $$ $$=e^{iq\Lambda} \frac{\nabla}{i} \psi + q\psi e^{iq\Lambda}\nabla \Lambda -e^{iq\Lambda}q\vec{A}\psi+ \Lambda e^{iq\Lambda}\nabla\psi + iq\Lambda\psi e^{iq\Lambda}\nabla \Lambda+ e^{iq\Lambda}\psi\nabla\Lambda $$ $$=\big(e^{iq\Lambda} ( \frac{\nabla}{i} -q\vec{A})\psi\big) + (q\psi e^{iq\Lambda}\nabla \Lambda + \Lambda e^{iq\Lambda}\nabla\psi + iq\Lambda\psi e^{iq\Lambda}\nabla \Lambda+ e^{iq\Lambda}\psi\nabla\Lambda )$$ Somehow the right set of parentheses goes to zero, and I'm wondering what I'm missing. This is another stack exchange question where the same pattern is used: Gauge Invariance of Schrodinger Equation Answer: Your last term $\nabla\Lambda e^{iq\Lambda}\psi)$ in the second line is simply $e^{iq\Lambda}\psi\cdot(\nabla\Lambda)$ and is not the third line where you are acting by nabla on all functions. 
Maybe you misunderstood the first line where $\nabla\Lambda$ is just a gradient of $\Lambda$.
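For completeness, here is the full computation with that correction applied, i.e. reading $\nabla\Lambda$ as the gradient $(\nabla\Lambda)$ rather than an operator acting on everything to its right. This working is my own addition, but it follows directly from the chain rule:

```latex
\begin{aligned}
\Big(\tfrac{\nabla}{i}-q(\vec A+\nabla\Lambda)\Big)e^{iq\Lambda}\psi
&= e^{iq\Lambda}\tfrac{\nabla}{i}\psi
   + \tfrac{1}{i}\big(iq\,\nabla\Lambda\big)e^{iq\Lambda}\psi
   - q\vec A\,e^{iq\Lambda}\psi
   - q(\nabla\Lambda)e^{iq\Lambda}\psi\\
&= e^{iq\Lambda}\tfrac{\nabla}{i}\psi
   + q(\nabla\Lambda)e^{iq\Lambda}\psi
   - q\vec A\,e^{iq\Lambda}\psi
   - q(\nabla\Lambda)e^{iq\Lambda}\psi\\
&= e^{iq\Lambda}\Big(\tfrac{\nabla}{i}-q\vec A\Big)\psi ,
\end{aligned}
```

so the two $q(\nabla\Lambda)$ terms cancel and there is no leftover parenthesis to explain away.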
{ "domain": "physics.stackexchange", "id": 54128, "tags": "quantum-mechanics, homework-and-exercises, schroedinger-equation, gauge-invariance" }
How are RDM patterns for reactions in the KEGG database constructed?
Question: With respect to http://www.genome.jp/dbget-bin/www_bget?rp:RP00167, can anyone tell me how the RDM patterns were obtained? Information on what RDM is can be found here: http://www.genome.jp/kegg/reaction/ Note: This may be a silly question to chemists, but not to me. I am a computer science student, my research field is bioinformatics, and so I have to study both biology and a bit of chemistry. Please don't close the question or be harsh on me; I have very little knowledge of chemistry and googling doesn't help much. I asked this question in the Biology forum; they closed it and asked me to ask it here. Answer: To my impression, there is no RDM value, but an RDM pattern! This is about mapping atoms in a reactant and a product by assigning R (reaction center), D (difference atoms) and M (matched atoms) in both compounds. Pairs for R, D, and M are separated by colons. The pattern is thus given as R(reactant)-R(product) : D(reactant)-D(product) : M(reactant)-M(product), where an asterisk marks a void centre. In order to figure out how these patterns are retrieved, you definitely want to have a look at Modular Architecture of Metabolic Pathways Revealed by Conserved Sequences of Reactions, a freely available article by Minoru Kanehisa (KEGG founder), published in J. Chem. Inf. Model., 2013, 53, 613-622 (DOI). Don't forget to read the Supporting Information, particularly the sections on KEGG atom types and RDM chemical transformation notation.
{ "domain": "chemistry.stackexchange", "id": 3475, "tags": "organic-chemistry, biochemistry, cheminformatics" }
Generators in group theory
Question: When we consider the rotation matrix about the $z$ axis and take an infinitesimal value of the parameter (the rotation angle), we get the corresponding generator of the rotation. It has the form shown in the equation. $$J_3=\begin{pmatrix} 0 &-i & 0 \\ i & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$ Now we say that by taking the exponential, i.e. $e^{iJ_{3}\alpha}$, we obtain the rotation matrix corresponding to rotation angle $\alpha$. But my question is: how can we take the exponential of $J_3$? Because it is a singular matrix (the 3rd column and 3rd row elements are zero), this matrix can't be diagonalizable, in the sense that if we use the expression $P D P^{-1}$, then to determine $P^{-1}$ we need the matrix $J_3$ to be diagonalizable. Answer: For completeness (although the other answers are probably simpler) the matrix $J_3$ can be written as: $$ J_3 = \begin{pmatrix}1/\sqrt{2} & 1/\sqrt{2}& 0 \\i/\sqrt{2} & -i/\sqrt{2}& 0\\ 0 & 0 & 1\end{pmatrix} \begin{pmatrix}1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix}1/\sqrt{2} & -i/\sqrt{2}& 0 \\1/\sqrt{2} & i/\sqrt{2}& 0\\ 0 & 0 & 1\end{pmatrix}$$ and its matrix exponential can be computed by exponentiating the diagonal matrix to give: $$ \exp(i\alpha J_3) = \begin{pmatrix}1/\sqrt{2} & 1/\sqrt{2}& 0 \\i/\sqrt{2} & -i/\sqrt{2}& 0\\ 0 & 0 & 1\end{pmatrix} \begin{pmatrix}e^{i\alpha} & 0 & 0 \\ 0 & e^{-i\alpha} & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix}1/\sqrt{2} & -i/\sqrt{2}& 0 \\1/\sqrt{2} & i/\sqrt{2}& 0\\ 0 & 0 & 1\end{pmatrix}$$ $$\exp(i\alpha J_3) = \begin{pmatrix}\cos \alpha & \sin\alpha & 0 \\ -\sin \alpha & \cos \alpha & 0 \\ 0 & 0 & 1 \end{pmatrix}$$ Note that invertibility has nothing to do with diagonalisability. In particular, matrices with determinant zero can usually be diagonalised.
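As a numerical cross-check of the diagonalization approach (my own sketch, not part of the original answer), NumPy's Hermitian eigendecomposition of $J_3$ reproduces the rotation matrix about $z$:

```python
# Verify numerically that exp(i*alpha*J3) is the z-rotation matrix,
# using J3 = P D P^+ as in the answer.
import numpy as np

J3 = np.array([[0, -1j, 0],
               [1j,  0, 0],
               [0,   0, 0]])

alpha = 0.7
# J3 is Hermitian, so eigh gives an orthonormal eigenbasis (eigenvalues -1, 0, 1)
w, P = np.linalg.eigh(J3)
U = P @ np.diag(np.exp(1j * alpha * w)) @ P.conj().T

R = np.array([[ np.cos(alpha), np.sin(alpha), 0],
              [-np.sin(alpha), np.cos(alpha), 0],
              [0, 0, 1]])
print(np.allclose(U, R))   # True
```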
{ "domain": "physics.stackexchange", "id": 72689, "tags": "group-theory, rotation, matrix-elements" }
Quantum comparator with one fixed input
Question: I currently need to implement efficiently a quantum comparator, with one of the values that is known at generation-time. In other words, I am searching for a quantum routine compare(n, rhs) that will produce a circuit taking as input a quantum register $\vert a \rangle$ of size n that should be interpreted as a unsigned integer. a single qubit initialised to $\vert 0 \rangle$ that will store the result of $a < rhs$ at the end of the routine. at most one ancilla qubit, and preferably no ancilla qubit at all. There are a lot of quantum comparators in the wild (for example [1], [2] or [3]) but all of them are designed to compare values stored in two quantum registers, not one value from a quantum register and the other known at compile time. I tried to modify [1] in order to be able to remove the need for the quantum register $\vert b \rangle$, but I ended up proving it was not possible (see my question). Even if I have no rigorous proof, I think a similar argument applies to [2]. My "intuition" comes from the fact that in [2], both $\vert a \rangle$ and $\vert b \rangle$ are used to store intermediate results, which is impossible if we remove one of them. On the other hand, [3] is relatively easy to adapt. The only operation involving $\vert b \rangle$ is a controlled-Phase, so we can easily "remove" the control and just apply a Phase gate when the corresponding bit of $b$ (known at generation-time) is $1$. Draper's adder ([3]) is nice on multiple points: No ancilla qubit needed. Only QFT and phase gates, which should be easy to implement on hardware. A depth in $O(n)$. But an ideal implementation would also: have a number of gates that grows in $O(n)$. Draper's adder is $O(n^2)$ because of the QFT. have more room for optimisation with respect to the number of gates / depth (for example a very low cost when the constant is $\vert 00000\rangle$ or has a lot of $0$. be based on a logic/arithmetic approach like [1] or [2]. 
One of the problems with Draper's adder is that it requires very precise rotation angles, and it is hard to compute the error introduced if one of the rotations is only approximated and not exact. My question: do you know of any paper that introduces an algorithm that may interest me, based on the lists above? Answer: You can do this with two ancillae and $O(n \lg n)$ operations by using the comparator from https://arxiv.org/abs/1706.07884. The constant comparator uses two constant adders: The constant adders are a recursive construction from https://arxiv.org/abs/1611.07995 which uses an incrementer and a carry signal propagator: The incrementer uses a normal adder with same-sized input and target registers: The same-sized input/target adder can be done with no ancillae: And the carry propagation uses linear dirty ancillae: If you're feeling really ambitious, you can remove at least one of the ancillae by using phasing operations in this way:
{ "domain": "quantumcomputing.stackexchange", "id": 929, "tags": "quantum-algorithms" }
How can I deal with $\log_2$ of a matrix with zero entries?
Question: Good day. I want to perform a measurement on a qubit in Mathematica using the von Neumann entropy formula, given below: $$S(\rho)=-\mathrm{Tr}(\rho \log_2\rho)$$ with $$ \rho=\begin{pmatrix}0.5&0\\0&0.5\end{pmatrix} $$ My problem is, when I take $$ \log_2\begin{pmatrix}0.5&0\\0&0.5\end{pmatrix} $$ I get the output $$\begin{pmatrix}-1&\infty\\\infty&-1\end{pmatrix}$$ How can I deal with this value in my measurement, since it cannot be calculated? Answer: The action of a function $f:\mathbb C\to\mathbb C$ on a diagonal matrix is, by definition, $$ f\begin{pmatrix}a_1& 0 & \cdots & 0 \\ 0 & a_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_n\end{pmatrix} = \begin{pmatrix}f(a_1)& 0 & \cdots & 0 \\ 0 & f(a_2) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & f(a_n)\end{pmatrix}. $$ (For more details see e.g. Wikipedia and similar resources.) You therefore need to take the logarithm only on the diagonal. It's also important to note that $\mathrm{Tr}(\rho)\times\mathrm{Tr}(\log_2(\rho))$ will yield the wrong answer here - you just can't break up a trace into two different traces; that's a fundamental misunderstanding of how the trace works. The logarithm $\log_2(\rho)$ is a matrix, which you need to matrix-multiply with $\rho$ and then take the trace. If any of this isn't obvious, then you are probably rather out of your depth and you should really spend some time revising the basics.
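The answer's advice of applying the function to the eigenvalues rather than elementwise can be sketched numerically (in Python/NumPy for concreteness; the function name is mine, and I use the standard convention $0\log 0 := 0$):

```python
# Von Neumann entropy via the eigenvalue form S = -sum_k lam_k log2 lam_k.
# Zero eigenvalues are dropped, implementing the 0*log2(0) -> 0 convention,
# so no infinities ever appear.
import numpy as np

def von_neumann_entropy(rho):
    lam = np.linalg.eigvalsh(rho)   # rho is Hermitian
    lam = lam[lam > 1e-12]          # drop (numerically) zero eigenvalues
    return float(-np.sum(lam * np.log2(lam)))

rho = np.array([[0.5, 0.0],
                [0.0, 0.5]])
print(von_neumann_entropy(rho))         # 1.0 (maximally mixed qubit)

pure = np.array([[1.0, 0.0],
                 [0.0, 0.0]])
print(abs(von_neumann_entropy(pure)))   # 0.0 (pure state)
```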
{ "domain": "physics.stackexchange", "id": 32723, "tags": "quantum-mechanics, quantum-information, linear-algebra, density-operator" }
High dimensional wave equation
Question: In 3 dimensions, the wave equation $$\Box\psi=\delta(t)\delta(\vec{x})$$ has the retarded and advanced solutions $$\psi=A_R \frac{\delta(t-x)}{4\pi x} + A_A \frac{\delta(t+x)}{4\pi x}.$$ How does this generalize to higher dimensions? Answer: OK, by the OP's invitation, the two papers addressing his question are Greens Functions for the Wave Equation, A H Barnett 2006 and D V Gal'tsov 2002. The idea is to express the N-dimensional Laplacian in polar coordinates, $\frac{1}{r^{N-1}} \frac{\partial}{\partial r} \left(r^{N-1} \frac{\partial }{\partial r} \right)$.
{ "domain": "physics.stackexchange", "id": 65289, "tags": "waves, electromagnetic-radiation, field-theory, classical-field-theory" }
C++ std::optional implementation
Question: Took a shot at implementing a subset of std::optional functionality. A lot of core features are there but some things like converting constructors, etc are missing as I just wanted to focus on the basic ideas. The implementation: #include <compare> #include <typeinfo> #include <utility> struct BadOptionalAccess : std::exception { const char* what() const noexcept override { return "Tried to access an Optional()'s value, but no value exists!"; } }; template<typename T> class Optional { public: Optional() noexcept { std::memset(m_data, 0, sizeof(T)); } Optional(const T& value) : m_has_value{true} { new (m_data) T(value); } template<typename... Args> Optional(std::in_place_t, Args&&... args) { emplace(std::forward<Args>(args)...); } Optional(const Optional& other) : m_has_value{other.m_has_value} { if (other.m_has_value) { new(m_data) T(other.value()); } } Optional(Optional&& other) noexcept : m_has_value{other.m_has_value} { if (other.has_value()) { new(m_data) T(std::move(other.value())); other.m_has_value = false; } } Optional& operator=(const Optional& other) { Optional temp {other}; temp.swap(*this); return *this; } Optional& operator=(Optional&& other) noexcept { Optional temp {std::move(other)}; temp.swap(*this); return *this; } auto operator<=>(const Optional& other) const noexcept { if (!has_value() && !other.has_value()) { return std::strong_ordering::equal; } else if (!has_value() && other.has_value()) { return std::strong_ordering::less; } else if (has_value() && !other.has_value()) { return std::strong_ordering::greater; } else { return value() <=> other.value(); } } bool operator==(const Optional& other) const noexcept { return (*this <=> other) == std::strong_ordering::equal; } bool operator!=(const Optional& other) const noexcept { return !(*this == other); } explicit operator bool() const noexcept { return has_value(); } template<typename... Args> void emplace(Args&&... 
args) { reset(); new (m_data) T(std::forward<Args>(args)...); m_has_value = true; } template<typename U> T value_or(U&& default_value) { return has_value() ? std::move(**this) : static_cast<T>(std::forward<U>(default_value)); } void reset() { if (m_has_value) { T* val = reinterpret_cast<T*>(m_data); val->~T(); m_has_value = false; } } void swap(Optional& other) noexcept { // Do I need to do this? My first instinct was to do // an ordinary swap of the m_data's, but it seemed wrong // to do in case the types weren't trivially copyable/movable. // Maybe I should special case in those situations? if (other.has_value() && has_value()) { std::swap(*reinterpret_cast<T*>(m_data), *reinterpret_cast<T*>(other.m_data)); } else if (other.has_value()) { new (m_data) T(std::move(other.value())); } else if (has_value()) { new(other.m_data) T(std::move(value())); } std::swap(m_has_value, other.m_has_value); } bool has_value() const noexcept { return m_has_value; } const T& value() const { return *ptr(); } const T& operator*() const& { return value(); } const T* operator->() const { return ptr(); } T& value() { return *ptr(); } T& operator*() & { return value(); } T&& operator*() && noexcept { return std::move(value()); } const T&& operator*() const&& noexcept { // not 100% sure how to use/test this overload. Why would an rvalue be const? return std::move(**this); } T* operator->() { return ptr(); } ~Optional() { reset(); } private: alignas(T) unsigned char m_data[sizeof(T)]; bool m_has_value = false; const T* ptr() const { if (!m_has_value) { throw BadOptionalAccess(); } return (reinterpret_cast<const T*>(m_data)); } T* ptr() { if (!m_has_value) { throw BadOptionalAccess(); } return (reinterpret_cast<T*>(m_data)); } }; template<typename T, typename... Args> Optional<T> makeOptional(Args&&... 
args) { return Optional<T>(std::in_place, std::forward<Args>(args)...); } Some tests: #define BOOST_TEST_MODULE optionaltest #ifdef BOOST_TEST_DYN_LINK #include <boost/test/unit_test.hpp> #else #include <boost/test/included/unit_test.hpp> #endif // BOOST_TEST_DYN_LINK #include <boost/test/data/monomorphic.hpp> #include <boost/test/data/test_case.hpp> #include "Optional.h" #include <string> BOOST_AUTO_TEST_CASE(default_constructor_test) { Optional<int> o; } BOOST_AUTO_TEST_CASE(basic_constructors_test) { Optional<int> o1; Optional<int> o2 = 1; Optional<int> o3 = o2; // calls std::string( size_type count, CharT ch ) constructor Optional<std::string> o4(std::in_place, 3, 'A'); Optional<std::string> o5 = makeOptional<std::string>(3, 'A'); BOOST_CHECK(o4==o5); // Move-constructed from std::string using deduction guide to pick the type Optional o6(std::string{"deduction very long type"}); } BOOST_AUTO_TEST_CASE(access_operator_test) { using namespace std::string_literals; Optional<int> opt1 = 1; BOOST_CHECK_EQUAL(*opt1, 1); *opt1 = 2; BOOST_CHECK_EQUAL(*opt1, 2); Optional<std::string> opt2 = "abc"s; BOOST_CHECK_EQUAL(*opt2, "abc"s); BOOST_CHECK_EQUAL(opt2->size(), 3); Optional<std::string> taken = std::move(opt2); BOOST_CHECK_EQUAL(*taken, "abc"s); BOOST_CHECK_EQUAL(taken->size(), 3); BOOST_CHECK(!opt2.has_value()); } BOOST_AUTO_TEST_CASE(value_check_test) { Optional<int> opt; BOOST_CHECK(!opt.has_value()); opt = 43; BOOST_CHECK(opt.has_value()); if (opt) { BOOST_CHECK(true); } else { BOOST_CHECK(false); } opt.reset(); BOOST_CHECK(!opt.has_value()); } BOOST_AUTO_TEST_CASE(get_value_test) { Optional<int> opt = {}; BOOST_CHECK_THROW(opt.value(), BadOptionalAccess); BOOST_CHECK_THROW(opt.value() = 42, BadOptionalAccess); opt = 43; BOOST_CHECK_EQUAL(*opt, 43); BOOST_CHECK_EQUAL(opt.value(), 43); opt.value() = 44; BOOST_CHECK_EQUAL(*opt, 44); BOOST_CHECK_EQUAL(opt.value(), 44); } BOOST_AUTO_TEST_CASE(ValueOrTest) { // Test with existing value Optional<int> opt1(5); 
BOOST_CHECK_EQUAL(opt1.value_or(10), 5); // Test without an existing value Optional<int> opt2; // Empty optional BOOST_CHECK_EQUAL(opt2.value_or(10), 10); // Test with a different type Optional<double> opt3; // Empty optional BOOST_CHECK_CLOSE(opt3.value_or(10), 10.0, 0.0001); // BOOST_CHECK_CLOSE for floating point comparison // Test with rvalue when Optional has a value Optional<std::string> opt4("Hello"); BOOST_CHECK_EQUAL(opt4.value_or(std::string("World")), "Hello"); // Test with rvalue when Optional does not have a value Optional<std::string> opt5; // Empty optional BOOST_CHECK_EQUAL(opt5.value_or(std::string("World")), "World"); } BOOST_AUTO_TEST_CASE(swap_test) { Optional<std::string> opt1("Lorem ipsum dolor sit amet, consectetur tincidunt."); Optional<std::string> opt2("Some other lorem ipsum"); opt1.swap(opt2); BOOST_CHECK_EQUAL(*opt2, "Lorem ipsum dolor sit amet, consectetur tincidunt."); BOOST_CHECK_EQUAL(*opt1, "Some other lorem ipsum"); opt1.reset(); opt1.swap(opt2); BOOST_CHECK_EQUAL(*opt1, "Lorem ipsum dolor sit amet, consectetur tincidunt."); BOOST_CHECK(!opt2.has_value()); } BOOST_AUTO_TEST_CASE(comparison_test) { Optional<int> o1, o2; BOOST_CHECK(o1 == o2); o2.emplace(10); BOOST_CHECK(o1 < o2); BOOST_CHECK(o2 > o1); BOOST_CHECK(o2 != o1); o1.emplace(10); BOOST_CHECK(o1 == o2); BOOST_CHECK(o1 <= o2); BOOST_CHECK(o1 >= o2); } ``` Answer: Missing #includes You are missing #include <cstring> for std::memset(), and #include <new> for the placement new operator. The converting constructors are a core feature You have a non-converting constructor, but that does not even exist in std::optional. I would say that this is a core feature. Without it, you will get apparently inconsistent and surprising behavior. Consider: Optional<int> foo = 3.1415; // works Optional<float> bar = 42; // works Optional<std::string> baz = "Hello, world!"; // fails to compile The first two statements will happily cause conversions to happen. 
Fixing this so the third statement also compiles is trivial: Optional(const auto& value) : m_has_value{true} { new (m_data) T(value); } I am very surprised by the choice of "basic features"; you have implemented operator<=>(), but I have never seen any code using the relational operators on std::optionals. Unnecessary zeroing of memory in the default constructor The default constructor does not need to zero m_data[], it's just going to cost you performance. The move constructor should not unset other.m_has_value In the move constructor, you set other.m_has_value to false, but you don't call the destructor of the other's value. And since the destructor of other will then not destruct its m_has_value, you will have created an object which will never be destructed propery. You must leave other in a state such that everything will eventually be correctly destructed. However, as Igor G mentioned, std::optional's specification says that after a move construction, other.has_value() should not return a different value than before the move construction. So the best thing to do is to just not set other.m_has_value = false at all. Incorrect behavior of swap() Related to the above: you have a similar problem in swap(), in case you swap an optional which has a value with one which hasn't. Here you have to explicitly call reset() on the side which will no longer have a value after the swap. About swapping // Do I need to do this? My first instinct was to do // an ordinary swap of the m_data's, but it seemed wrong // to do in case the types weren't trivially copyable/movable. // Maybe I should special case in those situations? You did the right thing in the code. You can't just swap the m_datas, that would just swap bytes without checking if that is legal, and would also bypass any specializations of std::swap for T. 
And in case T is trivially copyable/movable, then std::swap<T> would do exactly the same as std::swap<decltype<m_data>>, so there is no need to make this a special case.
{ "domain": "codereview.stackexchange", "id": 45091, "tags": "c++, reinventing-the-wheel, c++20, optional" }
How the Sun orbits
Question: So the Sun is a population I star, right? Then why does its orbit form a rosette shape, characteristic of Pop II stars? Source: w.astro.berkeley.edu/~echiang/classmech/gd2_chapter3.pdf, page 166. This is from Binney & Tremaine, Galactic Dynamics, p. 166. Answer: Every star's orbit forms a rosette. It's just a consequence of the orbits not closing, which is purely a consequence of the particular form of the galactic gravitational potential. The Sun's orbit is pretty nearly circular; Pop II stars have more elliptical orbits, so the rosette shape is more obvious.
{ "domain": "physics.stackexchange", "id": 48600, "tags": "astrophysics, orbital-motion" }
Calorie Calculator
Question: I have developed a calorie-tracker webpage and would appreciate some feedback on it. I'm particularly interested in optimizing performance, improving code readability, and ensuring best practices are followed. <!DOCTYPE html> <html lang='eng'> <head> <title>FITIFY | Fitness Tracker</title> <!--CSS Styling--> <style> h1{ color: gold; font-family: 'Open Sans', sans-serif; font-weight: bolder; } h2, h3{ color: white; margin: 6%; text-align: center; font-family: 'Instrument Sans', sans-serif; font-size: 40px; font-weight: bold; } h5{ background: linear-gradient(to right, red, orange, yellow, green, blue, indigo, violet); -webkit-background-clip: text; -webkit-text-fill-color: transparent; background-clip: text; } a{ text-decoration: none; color: white; margin: 3%; } .nav-brand{ margin: 1%; color: white; } .btn{ background-color: #19376D; } .btn:hover{ background-color: green; font-weight: bold; } .nav-links{ width: 40%; margin-right: 0%; } .dropdown{ display: inline-block; } .main-box{ background: linear-gradient(to bottom right,#3d97ce 0%,#12debb 100%); width: 100%; height: 90vh; border-bottom: solid black 3px; margin-top: 6%; position: relative; } .main-text-box{ float: left; width: 50%; height: 100%; } #main-para{ color: white; text-align: left; margin: 2% 2% 2% 10%; font-size: 22px; visibility: hidden; font-weight: 400; } .main-image-box{ float: right; width: 50%; height: 100%; } #main-img{ width: 80%; height: 80%; margin: 5%; border:white solid 3px; } #to-features{ background-color: black; color: white; border: solid 2.5px; border-image-slice: 1; border-image-source: linear-gradient(to right, red, orange, yellow, green, blue, indigo, violet); width: 10%; height: 10%; position: absolute; margin: 0% 40% 0% 20%; bottom: 10%; font-family: 'Instrument Sans', sans-serif; font-size: 20px; font-weight: bold; } .feature-box{ background: linear-gradient(to top right,#3d97ce 0%,#12debb 100%); background-size: cover; height: auto; } .card{ color: black; 
background-color: white; width: 21%; height: 250px; margin: 14% 2% 10% 2%; } .row{ margin-right: 0%; } .cards-btn{ background-color: black; color: white; border: solid 3px; border-image-slice: 1; border-image-source: linear-gradient(to right, red, orange, yellow, green, blue, indigo, violet); padding: 2%; margin: 50px 10% 2% 20%; font-family: 'Instrument Sans', sans-serif; font-size: 13px; font-weight: bold; } .cards-btn:hover{ color: green; } @media screen and (max-width: 1200px) { h1{ font-size: 4rem; } h2{ font-size: 5rem; } a{ margin: 3%; font-size: 2.5vw; } .nav-links{ margin: 0%; width: 70%; } .btn{ font-size: 2.5vw; margin-right: 1%; } .main-box{ margin-top: 8%; height: 100vh; } .main-text-box{ width: 100%; } #main-para{ font-size: 6vw; } .main-image-box{ display: none; } #to-features{ width: 30%; height: 5%; padding: 1%; margin: 0% 20% 15% 35%; font-size: 2rem; font-weight: bolder; } .card{ width: 35%; height: 30%; margin: 15% 10% 5% 35%; font-size: 1.4rem; } .cards-btn{ margin: 5% 5% 2% 30%; } #goal-btn{ margin-left: 25%; } } @media screen and (min-width: 1800px) and (max-width: 2600px) { /* 4k */ h1{ font-size: 6rem; } h2{ font-size: 5rem; font-weight: bolder; } a{ margin: 5%; font-size: 3rem; } .nav-links{ margin: 0%; width: 55%; } .btn{ font-size: 3rem; margin-right: 10%; } .main-box{ margin-top: 6%; height: 100vh; } #main-para{ font-size: 60px; } #to-features{ width: 20%; height: 9%; padding: 1%; font-size: 40px; margin: 10% 0% 0% 15%; } .card{ width: 20%; height: 250px; margin: 15% 3% 10% 2%; } #goal-btn{ margin-left: 10%; } } </style> </head> <body> <header> <nav class="navbar bg-dark fixed-top" data-bs-theme="dark"> <div class="nav-brand"> <h1>FITIFY</h1> </div> <div class="nav-links"> <a href="./index.html">HOME</a> <div class="dropdown"> <button class="btn btn-secondary dropdown-toggle" type="button" data-bs-toggle="dropdown" aria-expanded="false"> FEATURES </button> <ul class="dropdown-menu gap-1 p-2 rounded-3 mx-0 shadow w-220px" 
data-bs-theme="light"> <li><a class="dropdown-item rounded-2" href="./bmi.html">BMI CALCULATOR</a></li> <li><a class="dropdown-item rounded-2" href="./calories.html">TRACK CALORIES</a></li> <li><a class="dropdown-item rounded-2" href="./goal.html">KNOW YOUR GOAL</a></li> <li><a class="dropdown-item rounded-2" href="./basic_redirect.html">STORE YOUR DETAILS</a></li> </ul> </div> <a href="./about.html">ABOUT US</a> <a href="./contacts.html">CONTACT US</a> </div> </nav> </header> <main> <div class="main-box"> <div class="main-text-box"> <h2>Transform yourself today.</h2> <p id="main-para">"Welcome to Fitify, your ultimate fitness companion! We are dedicated to helping you on your fitness journey by providing powerful tools to track your progress. With our user-friendly interface achieving your health and fitness goals has never been easier. Join our community today and take the first step towards a healthier, fitter you!"</p> <button id="to-features">GET FIT</button> </div> <div class="main-image-box"> <img src="./nutri.webp" id="main-img" alt="Display Image"> </div> </div> <div class="feature-box"> <div class="row"> <div class="card"> <i class="fa-sharp fa-solid fa-dumbbell"></i> <div class="card-body"> <h5 class="card-title">CALCULATE YOUR BMI</h5> <p class="card-text">Calculate your Body Mass Index (BMI) to know your health status right away.</p><br> <a href="./bmi.html" class="cards-btn">CALCULATE</a> </div> </div> <div class="card"> <i class="fa-solid fa-utensils"></i> <div class="card-body"> <h5 class="card-title">TRACK YOUR CALORIES</h5> <p class="card-text">Track your calories for the day and know how much you consumed today.</p> <a href="./calories.html" class="cards-btn">TRACK NOW</a> </div> </div> <div class="card"> <i class="fa-solid fa-person-dress"></i> <div class="card-body"> <h5 class="card-title">YOUR IDEAL WEIGHT</h5> <p class="card-text">Calculate your ideal weight by entering your measurements.</p><br> <a href="./goal.html" class="cards-btn" 
id="goal-btn">KNOW YOUR GOAL</a> </div> </div> <div class="card"> <i class="fa-solid fa-info"></i> <div class="card-body"> <h5 class="card-title">STORE YOUR BASIC DETAILS</h5> <p class="card-text">Keep a log of your basic information in our secure database.</p> <a href="./basic_redirect.html" class="cards-btn">STORE DETAILS</a> </div> </div> </div> </div> </main> <script> const button = document.getElementById('to-features'); window.onload = function() { var para = document.getElementById("main-para"); var para_text = para.innerHTML; var speed = 15; var i = 0; para.innerHTML = ""; function typeWriter_para() { if (i< para_text.length) { para.innerHTML += para_text.charAt(i); i++; setTimeout(typeWriter_para, speed); } } setTimeout(typeWriter_para, 100); // Delay before starting the paragraph animation para.style.visibility = "visible"; // Make the paragraph visible }; const feature_box = document.querySelector('.feature-box'); button.addEventListener('click', function(){ feature_box.scrollIntoView(); }); </script> </body> </html> LIVE SITE URL: Click Here Please feel free to provide any feedback, suggestions, or improvements you think would enhance the code quality and maintainability of my project. Thank you in advance for your time and valuable insights! Answer: 1 General Issues 1.1 Markup Validation As the comments state, eng is not a valid value for the lang attribute. The W3C Validation Service will mark it with a red flag. 1.2 Indents Your indents are inconsistent. Sometimes you use 2 spaces for indents and in other parts you used 4 spaces. You should decide on a single style (tabs, or 2 or 4 spaces) and not mix them. 1.3 Logical Gaps You used no logical gaps, which hurts readability. Your code reads like a book with no paragraphs: all text in one block and without any punctuation. Use logical gaps to spread elements apart and give a visual difference between elements that do not directly belong to each other.
This allows developers to read more easily and recognize specific modules or element groups. As an example, you could have split the single cards from each other. 1.4 Linebreaks You used no linebreaks, which also makes the text hard to read. Split text within a paragraph onto new lines so it fits even a smaller screen. Right now any developer has to scroll horizontally to read the text. 1.5 Comments You only used one comment, above the style tag, to show that CSS follows. The comment there is redundant. Anyone who reads the code knows that a style tag contains CSS. You could use a comment as a headline if you had more than one logical CSS block to label, but it is unnecessary in your case. 2 Head Element 2.1 Character Set While it is still valid not to declare a character set, I would recommend you always use <meta charset="utf-8"> to ensure that the same character set is used as the one you programmed in. 2.2 Styling You chose head-styles for your CSS. This does the job overall but hurts both performance and readability. 2.2.1 Performance Since you have a single file, and HTML as well as CSS and JS are single-threaded, everything is read and run synchronously. That means the style is read first, before the browser can even start to render the document. The rendering of the document is therefore delayed. A performant solution would be to push the CSS into its own file and load it as an external file. That way, the CSS can even be cached, which will improve loading times on subsequent requests. 2.2.2 Readability As for readability, a dedicated external file will always win. It is easier to read the CSS in a separate file where I explicitly expect it to be CSS, and I do not have to check where the CSS starts and where it ends. It also makes the HTML code shorter.
2.2.3 Best Practice The best practice when developing with an RWD (Responsive Web Design) approach is to start mobile first and then add styles for larger and larger screens that overwrite the previous stylings. You used 2 media queries: one for 1200px or below and one for between 1800px and 2600px. You have no queries for between 1200px and 1800px and none to address screens wider than 2600px. That is harder to read and confusing, as any developer will search for the missing queries and/or will have to scroll through the entire CSS to see which style would apply at a certain width. 2.3 JavaScript 2.3.1 Performance As with the CSS, you should move the JS to an external file, which allows it to run in a second thread independent from the HTML. You should then declare it within the head element so it is read right at the start, reducing the time needed to execute the script. window.onload() This issue only matters if you keep the script at the end of the body. That event trigger is redundant or useless. A script at the end of the body will only execute when the entire document is already read and parsed. There is no need to wait for the onload event anymore. 2.3.2 Readability As said before, you are missing logical gaps, which makes your code harder to read. You are also missing sufficient commenting to explain what you are trying to do. The next issue is the following code fragment: var i = 0; para.innerHTML = ""; function typeWriter_para() { if (i< para_text.length) { i++; setTimeout(typeWriter_para, speed); } } In the if condition you reference a variable called i. As a quick side note, in that configuration the name i is not readable. On the other hand, you use a function to recreate the behavior of a for loop. 2.3.3 innerHTML You should not use innerHTML anymore. It has 2 issues: for one it is slow, as it needs the DOM to be prepared, and on the other hand it poses a security risk (XSS injection).
Since you only try to get text, use textContent instead. 2.3.4 Conventions Conventions are a good way to increase readability, to work with other developers, and to ease maintenance. 2.3.4.1 Constants and variables You should differentiate constants and variables from each other by writing constant NAMES in capitals. Declare constants at the very top of the document, or, if you use an onload event, at the top within the onload event. Your const feature_box is declared somewhere towards the middle. Use a constant when you expect something not to change within your script. Notably, for the variable var para = document.getElementById("main-para"); I would expect that the element does not change, and as such a constant would have been more appropriate. 2.3.4.2 Naming You should use names that are self-explanatory. Names exist to explain themselves to developers and as such must be readable. At no point should they be chosen for efficiency to save a few bytes. The word para is an actual word (a prefix of Greek origin meaning "beside"). It is not the best choice to use such a short word for a paragraph. 2.3.5 Best Practice 2.3.5.1 let vs var You have two choices to declare a variable: either let or var. In modern JS I would recommend always using let unless you understand the difference and explicitly need the behaviour that var would give you. 2.3.5.2 onload In modern JS you do not use event triggers such as onload or onclick (all events starting with on). The issue here is that you can only have one such handler and will always overwrite the previous one. The modern practice is to append events by using addEventListener. As such you should change: window.onload = function() { // your function } to this modern approach: window.addEventListener('DOMContentLoaded', function() { // your function }) 3 Body 3.1 Navbar <div class="nav-brand"> <h1>FITIFY</h1> </div> That is not an element I would expect within a navbar, as it has no navigational purpose.
More appropriate would be placement within the header element, outside of the navbar. Your container <div class="nav-links"> then becomes obsolete, as nav itself can take over the container role. It is not incorrect to use ul as a container for a list of links, but IMHO menu, which behaves the same, would be more appropriate. The reason why it is not incorrect in technical terms is conflicting documentation between Mozilla (whose MDN documentation is widely treated as official; Mozilla is a WHATWG founding member) and the WHATWG specifications. Those conflicts arise because HTML5 and CSS3 are not officially specified in technical terms by the W3C but are proposed in the form of recommendations by the WHATWG. 3.2 Main You contain your content within the main element by pushing it into logical containers using div. A section would be more appropriate as a semantic container than a div.
{ "domain": "codereview.stackexchange", "id": 44848, "tags": "javascript, html, css" }
Intersection of two NPDAs
Question: Is there a way to take the intersection of two NPDAs? I can't seem to find anything that can make that happen, but it seems like the type of thing that should be relatively trivial. Answer: The intersection of two context-free languages can be non-context-free. The classical example is $$ \{ a^n b^n c^m : n,m \geq 0 \} \cap \{ a^m b^n c^n : n,m \geq 0 \} = \{ a^n b^n c^n : n \geq 0 \}. $$ So in general you cannot simulate the intersection of two NPDAs with an NPDA.
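To make the counterexample concrete, here is a small Python sketch (an illustration, not part of the original answer) checking membership in the two languages and in their intersection:

```python
import re

def split_abc(w):
    """Return (#a, #b, #c) if w has the shape a*b*c*, else None."""
    m = re.fullmatch(r"(a*)(b*)(c*)", w)
    return None if m is None else tuple(len(g) for g in m.groups())

def in_L1(w):  # L1 = { a^n b^n c^m : n, m >= 0 } (context-free)
    p = split_abc(w)
    return p is not None and p[0] == p[1]

def in_L2(w):  # L2 = { a^m b^n c^n : n, m >= 0 } (context-free)
    p = split_abc(w)
    return p is not None and p[1] == p[2]

def in_intersection(w):  # L1 ∩ L2 = { a^n b^n c^n : n >= 0 }, not context-free
    return in_L1(w) and in_L2(w)

print(in_intersection("aabbcc"))   # True: a^2 b^2 c^2
print(in_intersection("aabbccc"))  # False: in L1 but not in L2
```

Membership in each language individually is easy to decide (each is context-free), but the intersection requires matching all three counts at once, which no single NPDA can do.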
{ "domain": "cs.stackexchange", "id": 2591, "tags": "formal-languages, automata, closure-properties, pushdown-automata" }
Average Power Spectral Density of PAM signals
Question: I am reading through the PAM transmission scheme and about the power spectral density of the signals. Given that the average power spectral density of PAM signals is: $$ \Phi_{ss}(f)=\Phi_{aa}\left(e^{j2\pi fT}\right)\frac{\lvert G(f)\rvert^2}{T} $$ (we consider that the amplitude coefficients $a[k]$ of the PAM signal form a cyclostationary discrete stochastic process and hence compute the average PSD). There are three remarks made after this: It becomes obvious that the minimum bandwidth of the pulse $g(t)\overset{\mathcal F}\rightarrow G(f)$ has to be at least $1/T$. The PSD $\Phi_{aa}\left(e^{j2\pi fT}\right)$ of the discrete time sequence $a[k]$ is periodic with period $1/T$. If $G(f)$ has smaller bandwidth than $1/T$, a full period of the data spectrum is not contained in the PSD of the transmit signal. I am not sure if I have fully understood the remarks. Here is my understanding of the above: Since we have taken an average PSD, the PSD repeats over the time interval $T$ and the bandwidth of the pulse is $1/T$. (I am not sure how that became the minimum bandwidth.) The PSD of the discrete sequence $a[k]$ is periodic since it is a cyclostationary process and varies over a time period $T$. If $G(f)$ had less bandwidth than $1/T$, that is, if the pulse had a higher frequency than $1/T$, the data spectra would have smeared into each other, leading to ISI. Please let me know if the above understanding is correct/valid. Please add/correct if anything is missing. Answer: Question 1: If we have a PAM sequence like this: $a[k]=[0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0]$, then the PAM signal (waveform) must hold frequency components which cover these fast transitions from 0 to 1 to 0 to 1. The oscillation speed (frequency) of the transitions is basically given by the time $T$ between the PAM symbols in the waveform. If $G(f)$ has lower bandwidth than $1/T$, these fast transitions are not possible.
All the frequency components which could oscillate at that speed are filtered out by $G(f)$. Here is a plot of the above waveform, its PAM sequence $a[k]$ and the single pulses (when $a[k]=1$) which, summed up, create the waveform. And another plot with the above waveform and its highest frequency component. If the dashed cosine oscillated any slower, it could not represent the waveform anymore. Question 2: The minimum bandwidth of $G(f)$ is given by $1/T$, but it can be higher and almost always is (see the root raised cosine filter and its roll-off factor, which lets you control the bandwidth of $G(f)$). BUT, the PSD of a PAM sequence is only defined within $1/T$, and due to the Nyquist–Shannon sampling theorem it is repeated thereafter (aliasing). Why is this important? Look at the following plot: it shows the PSD of a PAM sequence (limited to $1/T$), the PSD of a PAM signal and the filter $|G(f)|^2$. $G(f)$ filters the PAM sequence and the result is the PAM signal, but what happens outside $1/T$? $G(f)$ filters the aliased part of the PAM sequence. Note: I am not sure if the y-axis of the PSD is correct, but we are only interested in the bandwidth. Question 3: You are correct! But there are some tricks (which e.g. are exploited by the (root) raised cosine filter); see my first plot and the single pulses. They obviously overlap in time, but at each symbol $a[k]$ only one pulse is responsible for all the amplitude of the waveform. All the other pulses related to other symbols are zero. See the Nyquist ISI criterion.
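The periodicity in remark 2 can also be seen numerically. Treating the symbol sequence $a[k]$ as a zero-stuffed sample stream with $L$ samples per symbol period $T$, its spectrum repeats every $1/T$, i.e. every $N$ FFT bins for $N$ symbols; any $G(f)$ narrower than $1/T$ therefore cannot pass a full period of this data spectrum (remark 3). A small NumPy sketch (an illustration, not part of the original answer):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                               # number of PAM symbols
L = 4                                # samples per symbol period T
a = rng.integers(0, 2, N) * 2 - 1    # random +/-1 amplitude coefficients a[k]

s = np.zeros(N * L)
s[::L] = a                           # zero-stuffed stream: a[k] placed every T

S = np.fft.fft(s)                    # spectrum of the symbol stream, before pulse shaping
# The data spectrum is periodic with period 1/T, i.e. it repeats every N bins:
print(np.allclose(S[:N], S[N:2 * N]))  # True
```

Multiplying `S` by a pulse spectrum narrower than $1/T$ would then keep less than one full period of this data spectrum.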
{ "domain": "dsp.stackexchange", "id": 5440, "tags": "discrete-signals, digital-communications, power-spectral-density, random-process, stochastic" }
bcftools filtering all files in a directory
Question: Probably a silly oversight on my part, but I'm trying to filter all the vcfs in a directory with bcftools using a simple loop. My basic command is working fine: bcftools filter -i 'QUAL > 1000' -o filter/file1out.vcf file1in.vcf but when I try to loop it, it echoes through each filename, but generates no output files or error messages. for f in *.vcf; do echo "filtering $f"; bcftools filter -i 'QUAL > 1000' -o filter/$f_out.vcf $f; done Am I not handling the output direction properly? Any help much appreciated! Answer: I assume your shell is looking for the variable $f_out, but can't find it because you did not define it. It probably generated a hidden file with the name .vcf in your output directory. Instead, use the following: -o filter/${f}_out.vcf. This will "protect/limit" the bash variable to just $f rather than $f_out. When encountering issues like this, it's worth putting echo in your loop right before your command, in this case bcftools (adding quotes wherever necessary). That would show you the command as it is going to be executed and would show you, in this case, that the output name is incorrect.
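Putting the answer's fix into the loop gives the sketch below; the `${f%.vcf}` expansion, which additionally strips the `.vcf` extension before appending the suffix, is an optional refinement beyond the answer's `${f}_out.vcf`:

```shell
mkdir -p filter
for f in *.vcf; do
    out="filter/${f%.vcf}_out.vcf"   # braces limit the variable name to just f
    echo "filtering $f -> $out"      # echo first to verify the command
    bcftools filter -i 'QUAL > 1000' -o "$out" "$f"
done
```

Quoting `"$out"` and `"$f"` also keeps the loop safe for filenames containing spaces.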
{ "domain": "bioinformatics.stackexchange", "id": 801, "tags": "vcf, bcftools" }
Deformation gradient, strain tensor from cylindrical coordinates
Question: I do not have any engineering background but I am studying some mathematics to try and understand and solve this problem. I have a mesh (M1) in cylindrical coordinates and this deforms into a mesh (M2) (again in cylindrical coordinates). I need to compute strain in radial, longitudinal and circumferential directions. Basically I have done the following computations on each point of the mesh. The deformation gradient F is computed: $$F = x·X^T·(X·X^T)^{-1}$$ where: $x$: the deformed cylindrical coordinates (3x1 matrix) $X$: the undeformed cylindrical reference coordinates (3x1 matrix) $X^T$: transpose of $X$ $X^{-T}$ : inverse of transpose of $X$ The Lagrangian finite strain tensor $E$ is then computed: $$E = \dfrac{1}{2}(F^T·F-I)$$ where: $F^T$ : transpose of $F$ $I$ : identity matrix (3x3) I then take the diagonal of $E$ and this gives me the principal strains in the orthogonal cylindrical coordinates. Questions: Am I doing it right? Do I need to do some further computations on the diagonal of $E$ to get the principal strains at each point of this mesh that is deforming? Answer: I'm not sure what the interpretation of your deformation gradient is. To my knowledge the deformation gradient gives the local deformation, i.e. the deformation around a certain point. To obtain this deformation, local information (i.e. information in one point only) is not enough, but you also require some information on the surroundings. The formula I know is $$ F_{ij}=\frac{\partial x_i}{\partial X_j} $$ with $F_{ij}$ being the deformation gradient, and $x_i$ and $X_j$ being local coordinates in the deformed and undeformed configuration, respectively. Thus to calculate the continuous $F_{ij}$, you need the differentials of the local coordinates $\partial x_i$ and $\partial X_j$. I'm no expert on discrete meshes, but one possibility is to calculate $F_{ij}$ for each element in your mesh by using finite differences $\Delta x_i$ and $\Delta X_j$.
If unsure, you should probably ask again more precisely. Your finite strain tensor calculation seems correct to me. Note that when the deformation gradient is calculated for an element, the finite strain tensor is also for that element. Again I'm no expert on the interpretation of the results. The Wikipedia article on finite strain theory seems to have some answers for you. When calculating in curvilinear coordinate systems, things usually become a bit more complicated than in Cartesian coordinates. However, since cylindrical coordinates are locally Cartesian, your calculation is fine. For more complex curvilinear coordinate systems you would need to evaluate your equations using co- and contravariant bases.
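The finite-difference suggestion can be sketched numerically. The deformation map below (a uniform stretch along the first axis) is a made-up example for illustration, not derived from the question's mesh:

```python
import numpy as np

# Hypothetical deformation: a uniform stretch of 1.2 along the first axis.
def deform(X):
    return np.array([1.2 * X[0], X[1], X[2]])

def deformation_gradient(phi, X, h=1e-6):
    """F_ij = dx_i / dX_j, approximated by central finite differences."""
    F = np.zeros((3, 3))
    for j in range(3):
        dX = np.zeros(3)
        dX[j] = h
        F[:, j] = (phi(X + dX) - phi(X - dX)) / (2 * h)
    return F

X = np.array([1.0, 2.0, 3.0])
F = deformation_gradient(deform, X)
E = 0.5 * (F.T @ F - np.eye(3))   # Lagrangian finite strain tensor
# For this stretch E_11 = (1.2**2 - 1) / 2 = 0.22 and all other entries vanish.
print(np.round(E, 6))
```

On a mesh, the finite differences would come from corresponding node positions in M1 and M2 rather than from an analytic map.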
{ "domain": "engineering.stackexchange", "id": 863, "tags": "structural-engineering, structural-analysis, finite-element-method, kinematics, deformation" }
nxt_rviz_plugin with Rosjava
Question: Hi! I'm using the nxt_lejos_proxy package and rviz. I would like to use the nxt_rviz_plugin, which displays data from a nxt_msgs::Range message as a cone. I've set the topic to /ultrasonic_sensor, but when I run the simulation I get: Status: Warning, Topic: No messages received. Using rostopic echo /ultrasonic_sensor I can see that the ranges detected by the ultrasonic sensor are published on the /ultrasonic_sensor topic. But looking in the class ROSProxy.java I've seen that the message published on that topic is a sensor_msgs/Range and not a nxt_msgs/Range. Could this be the cause of the problem? And how can I solve it? Originally posted by camilla on ROS Answers with karma: 255 on 2012-08-08 Post score: 0 Answer: I solved it by publishing a message of type nxt_msgs/Range on a custom-defined topic called /custom_ultrasonic_sensor and passing this topic to nxt_rviz_plugin. Originally posted by camilla with karma: 255 on 2012-08-08 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 10533, "tags": "ros" }
What are the positive and negative effects of insulin on cognitive function?
Question: A UCLA study seems to imply that insulin interferes with cognitive function. The DHA-deprived rats also developed signs of resistance to insulin, a hormone that controls blood sugar and regulates synaptic function in the brain. A closer look at the rats' brain tissue suggested that insulin had lost much of its power to influence the brain cells. "Because insulin can penetrate the blood–brain barrier, the hormone may signal neurons to trigger reactions that disrupt learning and cause memory loss," Gomez-Pinilla said. He suspects that fructose is the culprit behind the DHA-deficient rats' brain dysfunction. Eating too much fructose could block insulin's ability to regulate how cells use and store sugar for the energy required for processing thoughts and emotions. "Insulin is important in the body for controlling blood sugar, but it may play a different role in the brain, where insulin appears to disturb memory and learning," he said. "Our study shows that a high-fructose diet harms the brain as well as the body. This is something new." At the same time, increased levels of IGF-1 seems to be implicated in higher level of intelligence. IGF-1 is not the same thing as insulin, but it does seem to have many similar effects. Answer: Diabetes I've not read about any conclusive evidence for a link between insulin and differential cognitive function, but I have read studies that link type-2 diabetes and impaired cognition (1). I will point out now that this is cross-sectional, so the study only reports associations (i.e. diabetes may not necessarily cause the cognitive impairment). The study I have mentioned does not conclude that this is caused by raised insulin, but rather the effects on the vascular system (specifically microvascular). They also find a significant interaction between diabetes and smoking status, in the context of cognitive impairment. Again, this seems likely to be the vascular system, rather than insulin levels. 
A recent review paper also refers to the links between diabetes and executive function (2), but again makes no reference to insulin as the cause, pointing instead to microvascular changes, hypertension, and other associated traits (again, the causes are not known; these are speculative based on the evidence). Diabetes may not be the best model for studying this though, as it can be characterized by either high insulin (tolerance) or low insulin (impairment), so the association may not be found this way. Insulin In a separate review (this may be the best one for you if you only read one of the papers I've referenced) the author posits that whilst insulin may have a neuroprotective role, it also increases amyloid-beta metabolism and tau phosphorylation, possibly contributing to Alzheimer's-like pathologies (3). Studies have found that insulin directly improves cognitive performance after infusion (4), but the long-term effects are less certain: constantly raised serum insulin levels are unlikely to improve overall health! However, the link between IGF-1 and improved cognition is less disputed, so you may well be right in thinking that the 'overall' effect of insulin on cognition may be protective, but this may just be a marker of good health overall, which is certainly protective!
{ "domain": "biology.stackexchange", "id": 361, "tags": "neuroscience" }
How would a free particle with known spin evolve?
Question: I searched a lot for the Hamiltonian of a Pauli spinor with no potential energy but had no luck, so I tried deriving one on my own. I took an overkill shortcut and used Pauli's equation: $$i\hbar \frac{\partial \psi}{\partial t}=\left[\frac{1}{2m}[\vec{\sigma}\cdot (\hat{p}-q\vec{A})]^2+q\phi\right]\psi$$ And just set the magnetic and electric potentials to zero: $$i\hbar \frac{\partial \psi}{\partial t}=\frac{1}{2m}[\vec{\sigma}\cdot \hat{p}]^2\,\psi$$ But other than this, I don't know how to continue. I don't know what the dot product is, let alone its square. All in all, is my approach correct? If so, how do I continue? In case everything I said was wrong, what's the reason? And finally, what is the solution to the differential equation? Answer: The operator $\hat{\sigma} \cdot \hat{\vec{p}}$ corresponds to the operator $$ \sigma_x p_x + \sigma_y p_y + \sigma_z p_z, $$ where the $\sigma_i$ stand for the Pauli matrices and the $p_i$ are the momentum operators (equal to $-i \hbar \frac{\partial}{\partial x_i}$ in position space). So if you wrote this thing out as a position-space operator, you would get $$ \hat{\sigma} \cdot \hat{\vec{p}} = -i \hbar \begin{bmatrix} \partial_z & \partial_x - i \partial_y \\ \partial_x + i \partial_y & - \partial_z \end{bmatrix} $$ where $\partial_x \equiv \partial/\partial x$, etc. This operator then acts on a two-component spinor, each of whose components is a function of $\vec{r}$. In principle, this operator can then be squared. However, this just works out to be $$ \left(\hat{\sigma} \cdot \hat{\vec{p}} \right)^2 = -\hbar^2 \begin{bmatrix} \nabla^2 & 0 \\ 0 & \nabla^2 \end{bmatrix} $$ which just means that in the absence of a magnetic field, each of the components of the spinor satisfies the conventional Schrödinger equation, and can be solved using the typical techniques that you're already familiar with.
(The simplification of the squared operator, by the way, can be viewed as a special case of the immensely useful Pauli vector identity, which is $$ (\vec{a} \cdot \hat{\sigma} )(\vec{b} \cdot \hat{\sigma}) = (\vec{a}\cdot\vec{b}) \mathbb{I} + i (\vec{a} \times \vec{b})\cdot \vec{\hat{\sigma}} $$ for any operators $\vec{a}$, $\vec{b}$ that commute with the Pauli matrices.)
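Both the identity and the special case $(\hat{\sigma}\cdot\hat{\vec{p}})^2 = |\vec p|^2\,\mathbb{I}$ are easy to check numerically for ordinary (commuting) vectors; here is a small NumPy sketch, added for illustration:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def dot_sigma(v):
    """Return v · sigma as a 2x2 matrix."""
    return v[0] * sx + v[1] * sy + v[2] * sz

rng = np.random.default_rng(42)
a, b = rng.normal(size=3), rng.normal(size=3)

# Pauli vector identity: (a·sigma)(b·sigma) = (a·b) I + i (a×b)·sigma
lhs = dot_sigma(a) @ dot_sigma(b)
rhs = np.dot(a, b) * np.eye(2) + 1j * dot_sigma(np.cross(a, b))
print(np.allclose(lhs, rhs))  # True

# Special case a = b = p: (sigma·p)^2 = |p|^2 I, the diagonal operator above
p = rng.normal(size=3)
print(np.allclose(dot_sigma(p) @ dot_sigma(p), np.dot(p, p) * np.eye(2)))  # True
```

For the momentum operator the cross-product term vanishes for the same reason: the components of $\hat{\vec p}$ commute with each other.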
{ "domain": "physics.stackexchange", "id": 89207, "tags": "quantum-mechanics, wavefunction, quantum-spin, hamiltonian, spinors" }
Control a robotic gripper
Question: I have created a robotic gripper. However, I need help with the control circuit: There are two buttons, the upper and lower one (connected to the timer): the upper one is a two-state switch, either up or down (it will be replaced with an active-low pin in the PCB design). The lower one is a push button that must be pushed and released back to its initial position to give a pulse to the monostable timer, making the servo rotate just long enough to close the gripper (the duration is set by the RC circuit). My problem: I want to replace the down button with some component that gives a quick pulse when the upper button changes state. For example, the button was 0 and went to 1 -> Component -> a small pulse to drive the timer (not a pulse that lasts until the state changes again): Where, Green: button state; Red: trigger pulse needed. Answer: This can be easily done with a 555 timer, or even a flip-flop and a counter. Are you looking for design concepts or a detailed design? EDIT Check out this site for a simple edge-triggered one-shot that uses two NOR gates and an RC network (scroll down to the NOR Gate Monostable section). I can't draw the circuit on here using my phone, so I apologize for just providing a link.
{ "domain": "robotics.stackexchange", "id": 1413, "tags": "control, robotic-arm" }
How is the reflection probability calculated?
Question: I was reading Feynman's book on QED and I stumbled upon the probability of reflection of photons by a piece of glass. Is there a way to calculate the probability of reflection? Answer: Yes. This is contained in Maxwell's equations. The probabilities are obtained from the Fresnel coefficients, and depend on the angle of incidence, as you can read here. There is also this nice set of slides that gives quite a bit of detail on how these are calculated from Maxwell's equations.
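As a concrete sanity check (not part of the original answer): at normal incidence the Fresnel coefficients reduce to a reflectance $R = \left(\frac{n_1-n_2}{n_1+n_2}\right)^2$, which for air-to-glass ($n_1 = 1$, $n_2 \approx 1.5$) reproduces the roughly 4% reflection probability Feynman discusses:

```python
def reflectance_normal(n1, n2):
    """Fresnel power reflectance at normal incidence between media n1 and n2."""
    return ((n1 - n2) / (n1 + n2)) ** 2

R = reflectance_normal(1.0, 1.5)  # air -> glass
print(R)  # ≈ 0.04, i.e. about 4% of photons are reflected
```

The same formula gives the same value in the reverse direction (glass to air), since it is symmetric in $n_1$ and $n_2$.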
{ "domain": "physics.stackexchange", "id": 38596, "tags": "quantum-mechanics, optics, photons, quantum-electrodynamics, reflection" }
Forming digits on a grid by toggling 5-pixel diamonds
Question: Context: https://www.reddit.com/r/ProgrammerHumor/comments/6ggzvz/exceptionally_late_to_the_party_here_is_my_phone/ Some time ago, /r/ProgrammerHumor was flooding with phone number inputs. In the linked thread, you can find one based on the "Lights off" game. If you click a cell, it will flip, and all neighbouring cells will also flip. In this particular implementation, this propagates through the walls. We can represent a number as a 5x4 matrix of booleans. So if you start from a matrix that's all zeros, clicking the top left cell should leave you with the following: 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 The representation of all numbers is in the file numbers.h below. The question is whether all numbers can be reached, starting from "all lights off" (all values are zero). The author noted that he had a Python implementation that runs in 5 minutes. This seemed ridiculously long to me and I was curious to compare the timing of his Python implementation to a C implementation, so I wrote it. My C isn't very good, though, so comments are very welcome! 
numbers.h: char numbers[][20] = { // zero { 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1 }, // one { 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1 }, // two { 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1 }, // three { 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1 }, // four { 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1 }, // five { 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1 }, // six { 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, }, // seven { 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0 }, // eight { 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1 }, // nine { 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1 } }; phone-number-bruteforce.c: #include "stdio.h" #include "string.h" #include "numbers.h" void print_num(int in) { for (int i = 0; i < 5; ++i) { for (int j = 0; j < 4; ++j) { int nbit = 4*i+j; printf("%d", (in & (1 << nbit)) >> nbit); } printf("\n"); } } void print_char(char *in) { for (int i = 0; i < 5; ++i) { for (int j = 0; j < 4; ++j) { int n = 4*i+j; printf("%d", in[n]); } printf("\n"); } } int shift(int index, int shift) { // Note: NOT the same as (index + shift) / 4, due to rounding int row = (5 + index / 4 + shift / 4) % 5; int col = (4 + index + shift) % 4; return row * 4 + col; } char* click(char *in, int click) { in[click] ^= 1; in[shift(click, 1)] ^= 1; in[shift(click, -1)] ^= 1; in[shift(click, 4)] ^= 1; in[shift(click, -4)] ^= 1; return in; } int compare_numbers(char *in) { for (int i = 0; i < 10; ++i) { if (!memcmp(numbers[i], in, 20)) { return i; } } return -1; } int main() { for (int i = 0; i < (1 << 20); ++i) { int index = i; char new[20] = {0}; for (int j = 0; j < 20; ++j) { if (index & 1) { click(new, j); } index /= 2; } int found = compare_numbers(new); if (found >= 0) { printf("Found %d!\n", found); print_num(i); } } } I tried to save each number in one int (since I only need 20 bits), but I couldn't 
figure out the index calculations. This is the working version using an array of chars. Answer: Clarity The question is whether all numbers can be reached, starting from "all lights off" (all values are zero). But what this actually checks is if each configuration is reachable clicking any location only once. And it incidentally verifies that each configuration is only reached by one sequence of clicks (since it only says found once for each digit). That's a subset of the original problem. This could use more explanation, as it's not immediately clear what it's trying to do. int index = i; char new[20] = {0}; for (int j = 0; j < 20; ++j) { if (index & 1) { click(new, j); } index /= 2; } In particular, either if (index % 2 == 1) { click(new, j); } index /= 2; or if (index & 1) { click(new, j); } index >>= 1; would be clearer about what it was doing in the center part. And int j = 0; char configuration[20] = {0}; for (int click_pattern = i; click_pattern > 0; click_pattern /= 2) { if (click_pattern % 2 == 1) { click(configuration, j++); } } is even clearer. This also stops iterating if we run out of clicks to perform. The original would iterate twenty times even for 0, which meant no clicks. On most processors, I suspect that doing division and modulus is faster than doing a bitwise and with a right shift. And the compiler is more likely to optimize division/modulus by 2 into bitwise operations than the other way around. The presumption being that people using bitwise operations know enough to profile and optimize. If you rewrite this as int main() { for (int click_pattern = (1 << 20) - 1; click_pattern > 0; --click_pattern) { char configuration[20] = {0}; process(configuration, click_pattern); int found = compare_to_numbers(configuration); if (found >= 0) { printf("Found %d!\n", found); print_pattern(click_pattern); } } } Now it's much clearer that you are displaying what digit was found and the click pattern to reach there from the starting configuration. 
Since we don't care about order, counting down makes it easier to avoid repeated calculations. The compiler would probably optimize that out anyway, but this way it definitely only occurs once. Now we can clearly see that we print which digit configuration was found and the click_pattern to reach it. Also that we iterate over all 1,048,576 click patterns where each position is flipped at most once. Alternative #include <stdio.h> #include <string.h> #include <stdint.h> /* F999F in binary is 11111 10001 10001 10001 11111 */ static int32_t digits[] = { 0xF999F, 0x13111, 0xF1F8F, 0xF1F1F, 0x99F11, 0xF8F1F, 0xF8F9F, 0xF1248, 0xF9F9F, 0xF9F11 }; const int DIGIT_COUNT = sizeof digits / sizeof digits[0]; /* 1001B is 0001 0000 0000 0001 1011 */ static int32_t click_masks[] = { 0x1001B, 0x20027, 0x4004E, 0x8008D, 0x001B1, 0x00272, 0x004E4, 0x008D8, 0x01B10, 0x02720, 0x04E40, 0x08D80, 0x1B100, 0x27200, 0x4E400, 0x8D800, 0xB1001, 0x72002, 0xE4004, 0xD8008 }; const int POSITION_COUNT = sizeof click_masks / sizeof click_masks[0]; #define CONFIGURATION_COUNT (1 << 20) static int8_t searched_configurations[CONFIGURATION_COUNT] = {1}; int is_digit(int32_t in) { for (int i = 0; i < DIGIT_COUNT; ++i) { if (digits[i] == in) { return i; } } return -1; } void search(int32_t current, int click_pattern) { for (int i = 0; i < POSITION_COUNT; i++) { int32_t next = current ^ click_masks[i]; if (!searched_configurations[next]) { searched_configurations[next] = 1; search(next, click_pattern ^ (1 << i)); int found = is_digit(next); if (found >= 0) { printf("Found %d!\n", found); printf("Clicked %05x.\n", click_pattern ^ (1 << i)); } } } } int main() { search(0, 0); } Here's an alternative solution. Instead of iterating, this searches recursively for numbers from the start point. If a particular configuration has already been searched, it doesn't try to search it again (no loops). If it hasn't, it continues in a depth first search. 
This finds all configurations reachable from the empty start point. As it turns out, all the positions are reachable from the empty configuration (can't see it from this code, but if you count the searched_configurations, all of them are truthy). I'm not sure that this is possible with Lights Out under normal (no wraparound) rules without double clicks. As with the original code, this displays immediately when it finds something. That's bad form, but fixing that in C is more work than in other languages with better (or at least more complex) native data structures. This sets constants rather than scattering magic numbers throughout the code. This limits the global variables to just this compilation unit with the static keyword. In a more object-oriented language, we could do without them. We could also pass around a struct to avoid that, but I don't find that any cleaner. The logic has been moved out of main and into the recursive search function. The main function could be moved out of this file and everything would work. I did not name the functions uniquely with a prefix, as I'm not calling them from other code. That's something else that could be done to make the code more reusable. It would also make the code uglier. I put everything in the same file because I was running on ideone.com. I have no C compiler at the moment. The site reported the runtime as either .05 or .06 seconds. Either of which are much better than five minutes. I switched the includes from quotation marks ("string.h") to angle brackets (<string.h>) because that is the standard I expect for compiler libraries. To me, quotation marks are for user-defined libraries. The compiler also follows a different search path which may be more efficient, but the primary reason is readability. The real key to this solution is that it stores the on/off information as bits in an integer. It includes stdint.h so that int32_t can be used to ensure that the integer is wide enough. 
To click, first note that there are only twenty places to click. So we can enumerate the possible actions in click_masks. As your original program does, we just exclusive-or the bits of the current configuration with the appropriate mask. That gives us the result, which we pass to the next call to search.
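The hand-written `click_masks` table above can be cross-checked mechanically. Here is a small sketch (Python for brevity; the generator function and its name are mine, not part of the answer's C code) that derives each mask from the wraparound-neighbor rule on the 5x4 grid, using the same bit layout as `print_num` (bit `4*row + col`):

```python
def click_mask(pos, rows=5, cols=4):
    """Bitmask of the cells flipped by clicking `pos` on a wraparound grid.

    Bit numbering matches the answer's layout: bit (row * cols + col).
    """
    r, c = divmod(pos, cols)
    cells = [
        (r, c),                 # the clicked cell itself
        ((r - 1) % rows, c),    # neighbor above (wraps top to bottom)
        ((r + 1) % rows, c),    # neighbor below
        (r, (c - 1) % cols),    # neighbor left (wraps left to right)
        (r, (c + 1) % cols),    # neighbor right
    ]
    mask = 0
    for rr, cc in cells:
        mask |= 1 << (rr * cols + cc)
    return mask

masks = [click_mask(p) for p in range(20)]
```

The generated values start 0x1001B, 0x20027, 0x4004E, ..., matching the table in the answer.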
{ "domain": "codereview.stackexchange", "id": 26102, "tags": "c" }
Why does thermodynamic integration work?
Question: Brief introduction: Thermodynamic integration is a neat computational method used mainly for computing free energy differences between target and reference states of classical many-body systems, such as gases and liquids. The key idea is the following: free energy is a thermal quantity (i.e. not expressible as averages of phase space coordinates) and thus not measurable as such in any experimental or numerical way. But free energy derivatives can of course be measured, e.g., in the canonical ensemble the derivative of the Helmholtz free energy with respect to the volume gives us the pressure, which is measurable both experimentally and numerically. Being able to compute such derivatives, one then uses thermodynamic integration methods to compute free energy differences along reversible paths (in the plane of any tuple of natural variables) that connect a reference state of the system (i.e., one for which the free energy is actually known) to a target desired state (whose free energy we want to compare with the reference state). All this sounds so far rather natural, but the trick in thermodynamic integration and maybe its strength lies in the fact that as long as we want to compute things numerically and thus not bound by experimental limitations, one is not limited to physical paths, instead any parameter $\lambda$ in the free energy can be used (as a dynamic variable) to perform the thermodynamic integration, as long as the function (potential energy or free energy) admits a derivative with respect to the chosen variable. 
One generally expresses this method as follows: we parametrize the potential energy of the system w.r.t. any parameter $\lambda$, be it a physical one or not. Then, having two states in mind, state (1) being the reference (obtained when $\lambda = 0$) and state (2) the target state (obtained when $\lambda = 1$) whose free energy we are interested in, we write: $$ U(\lambda) = (1-\lambda)U_1 + \lambda U_2 \tag{1} $$ Then, for example, if we take the parametrized Helmholtz free energy $F(\lambda),$ the free energy difference can be shown to be: $$ F_{\lambda=1} - F_{\lambda=0} = \int_{\lambda=0}^{\lambda=1} d\lambda \left\langle \frac{\partial U(\lambda)}{\partial \lambda} \right\rangle_{\lambda} \tag{2} $$ where $\langle \rangle_{\lambda}$ is an ensemble average over the system with the potential energy function $U(\lambda).$ The claim often made in the literature is that such thermodynamic integrations are valid using any function $U(\lambda)$, as long as it is differentiable and satisfies the boundary conditions (for the reference and target states). Question: Purely from a conceptual point of view, I have no idea what is going on here. How is it possible that we can just parametrize the free energy/potential energy by non-physical parameters and still manage to correctly gauge the free energy difference between two physical states of a system? Intuitively, I would have expected that if the thermodynamic integration method is performed using non-physical parameters, then one would get nonsense, e.g. predicting the wrong equilibrium configurations. But somehow all this is possible and commonly used in computational physics. I'm just trying to understand why this method can work so flexibly.
Answer: Some mathematical statements then an intuition statement: Mathematical We often use parameterizations like this to mathematically represent a continuous motion from one point to the next, for instance when discussing convex spaces, in which we say a space is convex if all points between any two given points are also in the set (in the language of the wiki page, convex if the vector $\lambda u_i+(1-\lambda) u_j$ is also in the space, for any vectors $u_i$ and $u_j$ in the space; satisfied by a sphere, but not by a donut). Similarly, the parameterization here is simply a way to represent traveling through a space, in this case a space of possible states. The "nonphysical" parameter $\lambda$ isn't introducing new physics any more than my above analogy means the space of a sphere is in any way physically changed by our mathematical wandering through the space of a sphere. The reason this works is that you're wandering through a continuously changing energy landscape, which brings me to... Intuition For state variables like energy, as long as you wander from one state to another, keeping track of your progress as you go (more on that below), you can find your new energy from your old, just like with the traveling analogy already discussed. The nuance is that if you, say, travel from one city to another city directly North, you might take a path that veers a bit east before coming back west to arrive at its destination. You might say, doesn't that increase my total distance traveled? Yes, but any travel east cancels any travel west. Similarly, going through this space of states with varying energies, any increases along the path cancel any decreases along the path, and you can arrive at the total energy difference between your initial and final states. Now you can see why we need this path to be differentiable with respect to $\lambda$: we can't allow discontinuous jumps. 
So in conclusion, we don't really care about the nature of the path, since any funny deviations from point A to point B will cancel. $\lambda$ is just the mathematical parameter that allows us to continuously follow our path and ensure we end where we want to go.
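For completeness, the identity in Eq. (2) of the question is itself a one-line consequence of the canonical partition function, which also makes the "any differentiable $U(\lambda)$" claim transparent: $F(\lambda)$ is a well-defined function of $\lambda$ whether or not $\lambda$ is a physical knob. Sketching the standard derivation:

```latex
Z(\lambda) = \int d\mathbf{r}^N \, e^{-\beta U(\lambda)}, \qquad
F(\lambda) = -\frac{1}{\beta}\ln Z(\lambda)
\;\;\Rightarrow\;\;
\frac{\partial F}{\partial \lambda}
 = -\frac{1}{\beta Z(\lambda)}\frac{\partial Z}{\partial \lambda}
 = \frac{\displaystyle\int d\mathbf{r}^N \, \frac{\partial U(\lambda)}{\partial \lambda}\, e^{-\beta U(\lambda)}}
        {\displaystyle\int d\mathbf{r}^N \, e^{-\beta U(\lambda)}}
 = \left\langle \frac{\partial U(\lambda)}{\partial \lambda} \right\rangle_{\lambda}
```

Integrating over $\lambda$ from 0 to 1 recovers Eq. (2). Nothing in the derivation requires $\lambda$ to correspond to anything physical; it only requires $Z(\lambda)$ to be differentiable and the endpoints to be the physical reference and target systems.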
{ "domain": "physics.stackexchange", "id": 39792, "tags": "thermodynamics, classical-mechanics, statistical-mechanics, computational-physics" }
Example 4.2 of Griffiths introduction to electrodynamics
Question: So I was doing the chapter on electric fields in matter, and in Example 4.2 the author asks to find the electric field due to a uniformly polarized sphere of radius R. I think there is no free charge, hence D = 0. Plugging into D = P + eE, I get E = -(P/e), where e = permittivity of free space. But by the conventional method of evaluating the bound charges, the answer is E = -(P/3e). Why is it so? Can anybody explain? Answer: This is so because you are using the wrong $E$. In the equation $D = \varepsilon_0 E + P$, the electric field $E$ is the total electric field, i.e. the applied field plus the electric field that the polarization $P$ produces, which is in itself due to the applied field in the first place. This process just continues. Whereas in Example 4.2 you are calculating the field "inside" due to the polarization $P$ only.
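As a complementary check on where the $D = 0$ step fails (a standard textbook point, using the known interior field $\mathbf{E}_{\text{in}} = -\mathbf{P}/3\varepsilon_0$ quoted above):

```latex
\mathbf{D} = \varepsilon_0\,\mathbf{E}_{\text{in}} + \mathbf{P}
           = -\tfrac{1}{3}\mathbf{P} + \mathbf{P}
           = \tfrac{2}{3}\mathbf{P} \neq \mathbf{0}
```

So $\mathbf{D}$ does not vanish even though there is no free charge. The absence of free charge only gives $\nabla\cdot\mathbf{D} = 0$; since $\nabla\times\mathbf{D} = \nabla\times\mathbf{P}$ is nonzero at the sphere's surface, $\mathbf{D}$ is not determined by its divergence alone, and one cannot conclude $\mathbf{D} = 0$ from $\rho_{\text{free}} = 0$.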
{ "domain": "physics.stackexchange", "id": 50156, "tags": "classical-electrodynamics" }
Probability of error for detection problem
Question: Let $X \in \mathbb{R}^N$ and $Z \sim \mathcal{N}(0, \sigma^2 I)$ be random vectors. $Y = X + Z$ $X$ can be either $a_0 \in \mathbb{R}^N$ or $a_1\in \mathbb{R}^N$ with equal probability. So the decision rule is $$||y - a_0||^2 \overset{X = a_1}{\underset{X = a_0}{\gtrless}} ||y - a_1||^2$$ What is $P(\text{error} | X = a_0)$? My attempt at a solution: \begin{align*} P(\text{error} | X = a_0) &= P(||Y - a_0||^2 > ||Y - a_1||^2 | X = a_0)\\ &= P(||Y - a_0||^2 > ||Y - a_1||^2) \, \, \text{ where $Y \sim \mathcal{N}(a_0, \sigma^2 I)$ }\\ &= \frac{1}{(2 \pi)^{N/2}} \frac{1}{\sigma^N} \int_{D} \exp \left( -\frac{1}{2 \sigma^2} ||y - a_0||^2 \right) dy \end{align*} where $D \subseteq \mathbb{R}^N$ is the region containing all points closer to $a_1$ than $a_0$. Of course, this integral does not have an analytic formula, but can this be written in terms of single dimensional CDFs, exploiting the fact that $Y$'s components are independent random variables. Answer: \begin{align*}||Y - a_0||^2 &\overset{X = a_1}{\underset{X = a_0}{\gtrless}} ||Y - a_1||^2\\ \end{align*} can be written as: \begin{align*}(a_1 - a_0)^T Y &\overset{X = a_1}{\underset{X = a_0}{\gtrless}} \frac{||a_1||^2 - ||a_0||^2}{2}\\ \end{align*} Note that $g(Y) = \left(a_1 - a_0 \right)^T Y$ is a sufficient statistic and a scalar quantity. You can prove that $\left ( g(Y) | X = a_0 \right) \sim \mathcal N \left( (a_1 - a_0)^T a_0, \sigma^2 ||a_1 - a_0||^2 \right)$ Hence, \begin{align*}P(\text{error} | X = a_0) &= P\left( g(Y) > \frac{||a_1||^2 - ||a_0||^2}{2} \Bigg | X = a_0\right) \end{align*} can be easily evaluated using one-dimensional CDFs.
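Carrying the answer's expression one step further: the threshold minus the conditional mean works out to $\|a_1-a_0\|^2/2$, and the conditional standard deviation of $g(Y)$ is $\sigma\|a_1-a_0\|$, so the error probability collapses to the classic $Q(\|a_1-a_0\|/2\sigma)$. A small numerical sketch (function names are mine), using only the standard library:

```python
import math

def q_function(x):
    # Q(x) = P(N(0, 1) > x), written via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2))

def prob_error(a0, a1, sigma):
    # Euclidean distance between the two hypothesis vectors
    d = math.sqrt(sum((u - v) ** 2 for u, v in zip(a0, a1)))
    # (threshold - mean) / std = (d^2 / 2) / (sigma * d) = d / (2 * sigma)
    return q_function(d / (2 * sigma))
```

For example, $a_0 = (0,0)$, $a_1 = (2,0)$, $\sigma = 1$ gives $Q(1) \approx 0.159$.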
{ "domain": "dsp.stackexchange", "id": 8379, "tags": "signal-detection, random, probability" }
Find the common elements of the lists in a list of lists
Question: Considering the problem: Make a function to find the common elements of the lists in a list of lists. Example: common ( [ [1,2,3,3,5,11], [2,2,2,3,3,5], [2,2,3,3,4,5,6,7,18,19], [3,5,10,15] ] ) = [3, 5]. I came up with this solution: import Data.List(group, nub, sort) -- | -- This function returns the elements common to all lists in a list of lists. -- commonXss :: Ord a => [[a]] -> [a] commonXss xss = concat [x | (x, y) <- countElemAppearsXss xss, y == length xss] -- | -- This function takes a list of lists and returns a list of tuples. Each tuple -- (x, y) is made of x == [element] and y == number of lists in which that -- element appeared. -- countElemAppearsXss :: Ord a => [[a]] -> [([a], Int)] countElemAppearsXss xss = freq $ sort $ concat $ map nub xss -- | -- This function returns a list of tuples. Each tuple (x, y) is made of -- x == [element] and y == its frequency. -- freq :: Ord a => [a] -> [([a], Int)] freq xs = map (\x -> ([head x], length x)) . group . sort $ xs Is there a simpler, more straightforward way that I'm missing? Perhaps in a library? How about the style? Answer: This problem is just a matter of folding using the intersect function of Data.List, then deduplicating the result using nub.
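In Haskell that is essentially `common = nub . foldr1 intersect`. As a runnable illustration of the same two-step shape — fold with pairwise intersection, then deduplicate — here is a transliteration to Python (the names are mine; it mimics rather than exactly reproduces the Data.List semantics):

```python
from functools import reduce

def common(lists):
    # fold: keep only elements of the accumulator that also occur in the next list
    def intersect(xs, ys):
        return [x for x in xs if x in ys]
    survivors = reduce(intersect, lists)
    # nub: drop duplicates while keeping first occurrences
    seen, result = set(), []
    for x in survivors:
        if x not in seen:
            seen.add(x)
            result.append(x)
    return result
```

On the question's example, common([[1,2,3,3,5,11], [2,2,2,3,3,5], [2,2,3,3,4,5,6,7,18,19], [3,5,10,15]]) yields [3, 5].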
{ "domain": "codereview.stackexchange", "id": 13219, "tags": "programming-challenge, haskell" }
Space and spin symmetry in light baryons
Question: In Particle Physics by Martin and Shaw, Ch. 6, they show how the baryon supermultiplets are built from the assumption that the combined space and spin wave functions are symmetric. So, e.g., $uud$ has spins up, up, down, so that under exchange of the $u$ quarks the wavefunction doesn't change sign, as the spins are the same; but if a $u$ and a $d$ are exchanged, the spin part changes, so I assume the space wavefunction must introduce another minus sign to compensate the spin asymmetry and make the whole wavefunction symmetric. However, if, say, the $uds$ baryon has spins up, up, down, then exchanging the $u$ and $d$ leaves the spin wavefunction unchanged, but the exchange is not symmetric due to the flavour part. So the $uud$ case requires an antisymmetric flavour wavefunction, but $uds$ needs to be symmetric under $u$ and $d$ exchange yet antisymmetric under exchange of the $s$ with the $u$ or $d$. So do you need to construct a spatial wavefunction for each spin state to make the whole thing symmetric? I feel like I'm missing something; it seems weird to have the flavour exchange be asymmetric only sometimes. If someone could tell me where my thinking is going wrong, or maybe justify the spatial wavefunction change, I'd really appreciate it. Answer: Chapter 6 of Martin & Shaw does seem to say that the $aa$ quark pair in an $aab$ baryon must be in a spin-1 state, e.g. that the spin-up proton wave function is $u{\uparrow}\;u{\uparrow}\;d{\downarrow}$.
You are right to be confused by this, since the actual proton flavour-spin wave function is $$ \frac{1}{\sqrt{18}}(2\;u{\uparrow}~u{\uparrow}~d{\downarrow} - u{\uparrow}~u{\downarrow}~d{\uparrow} -u{\downarrow}~u{\uparrow}~d{\uparrow} \\ \quad+ 2\;u{\uparrow} ~ d{\downarrow} ~ u{\uparrow} - u{\downarrow} ~ d{\uparrow} ~ u{\uparrow}-u{\uparrow} ~ d{\uparrow} ~ u{\downarrow} \\ \quad\;\;+ 2\;d{\downarrow} ~ u{\uparrow} ~ u{\uparrow} - d{\uparrow} ~u{\downarrow}~u{\uparrow} - d{\uparrow} ~ u{\uparrow} ~ u{\downarrow} ). $$ See, for example, the answers to Proton spin/flavor wavefunction or page 222 of this handout by Prof. Mark Thomson.
{ "domain": "physics.stackexchange", "id": 47967, "tags": "particle-physics, wavefunction, standard-model, quarks, baryons" }
Spring Security Web Configurer Adapter
Question: guys. The code below is spring security web adapter. I do not like configure(HttpSecurity) method that generates security confirmation policy. Any ideas to do it more readable and clear? /** * Spring security configuration * * @author Eugene Ustimenko * @date Nov 5, 2014 */ @Configuration @EnableWebSecurity public class SecurityConfiguration extends WebSecurityConfigurerAdapter { @Autowired @Qualifier ("loginService") private ILoginService loginService; @Override protected void configure (AuthenticationManagerBuilder auth) throws Exception { auth.userDetailsService(loginService).passwordEncoder(passwordEncoder()); } @Override protected void configure (HttpSecurity http) throws Exception { final RequestMatcher csrfRequestMatcher = new RequestMatcher() { private RegexRequestMatcher requestMatcher = new RegexRequestMatcher("/login/*", null); @Override public boolean matches (HttpServletRequest request) { return requestMatcher.matches(request); } }; http.csrf() .requireCsrfProtectionMatcher(csrfRequestMatcher) .and().authorizeRequests().antMatchers("/admin/**").access("hasRole('ADMIN')") .and() .formLogin().loginPage("/login").failureUrl("/login?error") .usernameParameter("username").passwordParameter("password") .and() .logout().logoutSuccessUrl("/") .and() .csrf() .and() .exceptionHandling().accessDeniedPage("/403"); } @Bean public PasswordEncoder passwordEncoder () { final PasswordEncoder encoder = new BCryptPasswordEncoder(); return encoder; } @Bean (name = "auth") @Override public AuthenticationManager authenticationManagerBean () throws Exception { return super.authenticationManagerBean(); } } Answer: You don't need to build this as one line. You could try splitting it out into multiple lines and add some comments to describe what you are doing. I think the below example does what you want. I have removed the anonymous RequestMatcher class. RegexRequestMatcher implements the RequestMatcher interface. 
I have removed the second csrf() method call as it isn't needed. @Override protected void configure(final HttpSecurity http) throws Exception { // Enable csrf for login form http.csrf().requireCsrfProtectionMatcher(new RegexRequestMatcher("/login/*", null)); // Configure login page http.formLogin().loginPage("/login").failureUrl("/login?error").usernameParameter("username").passwordParameter("password"); // Configure logout redirect http.logout().logoutSuccessUrl("/"); // Ensure admin pages have correct role http.authorizeRequests().antMatchers("/admin/**").hasRole("ADMIN"); // Configure access denied exception redirect http.exceptionHandling().accessDeniedPage("/403"); }
{ "domain": "codereview.stackexchange", "id": 18468, "tags": "java, spring" }
Tools for 3D shape analysis
Question: I have a 3D shape in a 3D binary image. Therefore, I have a list of all of the x,y,z points. If I am to analyze this shape for various identification, such as "sphericity", volume, surface area, etc., what are some of the choices do I have here? Answer: To approximate the volume ($V_{counted}$) you need to count all the voxels. To roughly approximate the surface area ($A_{approx}$) count all the voxels that have an "empty" voxel as a neighbor. (Have a look at Marching cubes or Marching tetrahedrons for more inspiration and a more detailed discussion on how to determine a surface element). In order to determine the sphericity I would calculate the minimal bounding box$^1$, take the largest axis and regard it as the diameter: $V_{calc} = \frac{4}{3}\pi r^3$ Since your features ($f_{vol}$, $f_{area}$, $f_{sphere}$, $\dots$) should be in the interval $[0, 1]$ (or 0% - 100%) you simply need to divide the smaller value by the larger value: $f_{vol}(V_{counted}, V_{ref}) = \frac{\min(V_{counted}, V_{ref})}{\max(V_{counted}, V_{ref})}$ $f_{area}(A_{approx}, A_{ref})$ and $f_{sphere}(V_{counted}, V_{calc})$ accordingly. $f_{compactness} \tilde{=} f_{sphere}$, since a sphere is the most compact volume. I hope you can understand, what I am trying to say, and hopefully the answer you are looking for is somewhere in here. $^1$ if your data is not aligned, you may have to align it first (PCA).
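The voxel-counting part of the answer is mechanical enough to sketch directly. A minimal illustration (Python; the names are mine) of $V_{counted}$ and the rough surface count — a filled voxel counts toward the surface if any of its six face-neighbors is empty, with out-of-bounds treated as empty:

```python
def volume_and_surface(grid):
    """grid: 3-D nested list of 0/1 values; returns (voxel count, surface-voxel count)."""
    nx, ny, nz = len(grid), len(grid[0]), len(grid[0][0])

    def filled(x, y, z):
        return 0 <= x < nx and 0 <= y < ny and 0 <= z < nz and grid[x][y][z] == 1

    volume = surface = 0
    for x in range(nx):
        for y in range(ny):
            for z in range(nz):
                if grid[x][y][z] != 1:
                    continue
                volume += 1
                # six face-neighbors; any empty one makes this a surface voxel
                neighbors = [(x - 1, y, z), (x + 1, y, z),
                             (x, y - 1, z), (x, y + 1, z),
                             (x, y, z - 1), (x, y, z + 1)]
                if any(not filled(*n) for n in neighbors):
                    surface += 1
    return volume, surface
```

For a solid 3x3x3 block this returns (27, 26): every voxel except the center touches the outside.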
{ "domain": "dsp.stackexchange", "id": 168, "tags": "shape-analysis, 3d" }
Recursive spinlock for C using ASM
Question: My history of C/ASM locks here in codereview: Simple spinlock for C using ASM Simple spinlock for C using ASM, revision #1 (basis for this approach. If this one fails, then so will the one below.) Once you have gotten a taste of assembly, there is no turning back. I don't care if I am not ready/worthy, I'm going to step things up a bit and be a little bit more adventurous. So I went ahead and made a lock that I hope is able to handle recursions. I will without a doubt fail very painfully and the mistake is also going to be hard to spot. No, I am not pessimistic. Just like gravity works to put things down, so do my reasoning skills to me. It is up to you to give me tough love and spot my faulty reasoning and throw bananas at me! The code in question. // "gate_" is the namespace marker for higher level languages that might import this code... as if... #define gate_Gate volatile int #define gate_Pass volatile int extern inline void gate_Enter (gate_Gate *gate, gate_Pass *pass) { asm volatile ( "jmp gate_Enter_check\n" // Skip the line. "gate_Enter_wait:\n" // Wait in line. "pause\n" // Wait a long time. "gate_Enter_check:\n" // Now we are at the gate. "cmp %[lock], %[checkin]\n" // See if our pass is any good. "jge gate_Enter_skip\n" // Skip if pass >= lock. "mov %[lock], %%eax\n" // eax = 1. "lock xchg %%eax, %[gate]\n" // Attempt to validate our pass and connect it to the gate. "test %%eax, %%eax\n" // Hope for the best. "jnz gate_Enter_wait\n" // There is no hope, go back in line. "gate_Enter_skip:\n" // We are VIP! "add %[lock], %[checkin]\n" // Checkin pass like pro! : [gate] "=m" (*gate), [checkin] "=m" (*pass) : [lock] "r" (1) : "eax" ); } extern inline void gate_Leave (gate_Gate *gate, gate_Pass *pass) { asm volatile ( "cmp %[pass], %[isLast]\n" // Have I checked in only once? "jg gate_Leave_skip\n" // Skip next step if I have checked in more than once. "mov %[unlock], %[gate]\n" // Close the gate because I am the last one to leave.
"gate_Leave_skip:\n" // Comment! "add %[checkout], %[pass]\n" // Checkout. This may or may not be the last time. : [gate] "=m" (*gate), [pass] "=m" (*pass) : [unlock] "r" (0), [checkout] "r" (-1), [isLast] "r" (1) ); } // Example use gate_Gate gate = 0; gate_Pass exampleUsePass = 0; void exampleUse(int count) { printf("pass=%d\n", exampleUsePass); gate_Enter(&gate, &exampleUsePass); if (count == 3) { // exampleUsePass = 0; // force deadlock. } if (count < 5) { printf("Count = %d\n", count); exampleUse(++count); } gate_Leave(&gate, &exampleUsePass); printf("pass=%d\n", exampleUsePass); } exampleUse(0); The question The intent of this code is a lock that allows recursion. Will this lock make exampleUse both threadsafe and reentrant if we allow our self to assume the pass, exampleUsePass is only used for that function? If not? what is wrong? If yes, it is still wrong; isn't it?... Give me a good rant about what to consider, what is missing or about better approach. If you want to suggest libraries, please restrict your self to C. I am not experienced enough to bind C++ stuff to other languages, so I often can't use the good stuff over there :'( Answer: I don't believe your current code is thread safe. Looking at your gate_Enter function: The first thing you do is skip the wait loop, this is fine: "jmp gate_Enter_check\n" "gate_Enter_wait:\n" "pause\n" Then you check your reentry count. If it's > 0 then you assume you are the the one holding the lock and skip the actual lock part. "gate_Enter_check:\n" "cmp %[lock], %[checkin]\n" "jge gate_Enter_skip\n" "mov %[lock], %%eax\n" "lock xchg %%eax, %[gate]\n" "test %%eax, %%eax\n" // Hope for the best. "jnz gate_Enter_wait\n" // There is no hope, go back in line. And increment the nesting level. "gate_Enter_skip:\n" // We are VIP! "add %[lock], %[checkin]\n" // Checkin pass like pro! The issue is you don't actually know that the thread calling gate_Enter is the same one that originally entered the gate. 
If another thread calls exampleUse function it's using the same pass. You could get around this by making use of thread local storage for your pass/checkin variable. Or you could create exampleUsePass on the stack, then pass it in to recursive calls instead of using global vars. You would have a similar issue with your gate_Leave method, where you unlock the gate, then decrement the nesting count. It would be better to decrement the value, whilst you are within the locked section, something like this: "add %[checkout], %[pass]\n" "jnz gate_Leave_skip\n" // If not zero, this is a nested call "mov %[unlock], %[gate]\n" // Close the gate because I am the last one to leave. "gate_Leave_skip:\n" As an aside, I'm not a huge fan of the way you've essentially renamed the pass parameter to your gate_Enter function to checkin. [checkin] "=m" (*pass) A slightly more robust version of your code, which will abort if there is an unexpected exit from gate when you don't own the gate is shown below (along with a simple test harness). #include <stdio.h> #include <unistd.h> #include <unistd.h> #include <string.h> #include <pthread.h> #define gate_Gate volatile int #define gate_Pass volatile int /*extern*/ inline void gate_Enter (gate_Gate *gate, gate_Pass *pass) { asm volatile ( "jmp gate_Enter_check\n" // Skip the line. "gate_Enter_wait:\n" // Wait in line. "pause\n" // Wait a long time. "gate_Enter_check:\n" // Now we are at the gate. "cmp %[lock], %[pass]\n" // See if our pass is any good. "jge gate_Enter_skip\n" // Skip if pass >= lock. "mov %[lock], %%eax\n" // eax = 1. "lock xchg %%eax, %[gate]\n" // Attempt to validate our pass and connect it to the gate. "test %%eax, %%eax\n" // Hope for the best. "jnz gate_Enter_wait\n" // There is no hope, go back in line. "gate_Enter_skip:\n" // We are VIP! "add %[lock], %[pass]\n" // Checkin pass like pro! 
: [gate] "=m" (*gate), [pass] "=m" (*pass) : [lock] "r" (1) : "eax" ); } /*extern*/ inline void gate_Leave (gate_Gate *gate, gate_Pass *pass) { asm volatile ( "add %[checkout], %[pass]\n" "js error_abort\n" // Abort, underflow on *pass "jnz gate_Leave_skip\n" // Skip next step if I have checked in more then once. "mov %[unlock], %%eax\n" "lock xchg %%eax, %[gate]\n" "test %%eax, %%eax\n" "jnz gate_Leave_skip\n" // If zero, abort because we unlocked an unlocked gate! "error_abort:\n" "call abort\n" "gate_Leave_skip:\n" : [gate] "=m" (*gate), [pass] "=m" (*pass) : [unlock] "r" (0), [checkout] "r" (-1), [isLast] "r" (1) : "eax" ); } // Example use gate_Gate gate = 0; __thread gate_Pass exampleUsePass = 0; // notice this uses thread // local storage void exampleUse(int count) { printf("pass=%d\n", exampleUsePass); gate_Enter(&gate, &exampleUsePass); if (count == 3) { // exampleUsePass = 0; // force deadlock. } if (count < 5) { printf("Count = %d\n", count); exampleUse(++count); } // uncomment this line to simulate Leaving twice when we owned the gate //gate_Leave(&gate, &exampleUsePass); gate_Leave(&gate, &exampleUsePass); printf("pass=%d\n", exampleUsePass); } void *doSomeThing(void *arg) { // Uncomment lines in this method to force alternate thread to // try to leave gate, when it doesn't own it. // if(arg == NULL) exampleUse(0); // gate_Leave(&gate, &exampleUsePass); return 0; } pthread_t tid[2]; int main(void) { int i = 0; int err; while(i < 2) { err = pthread_create(&(tid[i]), NULL, &doSomeThing, i?&err:0); if (err != 0) printf("\ncan't create thread :[%s]", strerror(err)); else printf("\n Thread created successfully\n"); i++; } sleep(5); return 0; }
{ "domain": "codereview.stackexchange", "id": 20453, "tags": "c, reinventing-the-wheel, assembly, locking" }
MongoDB Filter between two dates
Question: Simple code for getting documents that have a creationDate between two values in mongodb. If the user provides only one of the values the code should still work and get the documents that have either a creationDate less than or bigger than the given value. I'm mainly looking for more readability and simplicity. interface mongoDateFilter { $gte?: Date; $lte?: Date; } export const getReportsForContent = async ( contentId: ObjectId, beginDate: Date | undefined, endDate: Date | undefined, ): Promise<Report[]> => { const reportsCollection = await getCollection('reports'); const creationDateMongoFilter: mongoDateFilter = {}; if (beginDate) { creationDateMongoFilter['$gte'] = beginDate; } if (endDate) { creationDateMongoFilter['$lte'] = endDate; } let reportsForContent: Report[] = []; if (beginDate || endDate) { reportsForContent = await reportsCollection.find({ contentId, creationDate: creationDateMongoFilter }).toArray(); } else { reportsForContent = await reportsCollection.find({ contentId }).toArray(); } return reportsForContent; }; Answer: Prefer dot notation over bracket notation when syntax permits it - it's a bit easier to read and write. ESLint rule: dot-notation. Construct objects all in one go rather than mutating them afterwards, if you can - it's easier to write (especially in TypeScript, since you don't have to denote the type ahead of time) and can be easier to understand at a glance when unnecessary mutation is avoided. Don't assign expressions that won't be used - with let reportsForContent: Report[] = []; regardless of the situation, reportsForContent will be reassigned to something else afterwards, so you can leave off the = [] part. Or, even better: Return the value retrieved instead of reassigning a variable and then returning the variable.
This: if (beginDate || endDate) { reportsForContent = await reportsCollection.find({ contentId, creationDate: creationDateMongoFilter }).toArray(); } else { reportsForContent = await reportsCollection.find({ contentId }).toArray(); } can be if (beginDate || endDate) { return reportsCollection.find({ contentId, creationDate: creationDateMongoFilter }).toArray(); } else { return reportsCollection.find({ contentId }).toArray(); } Or, even better, handle the case where no date is set at the very beginning, and only construct the creationDateMongoFilter later, if it's needed. In all: export const getReportsForContent = async ( contentId: ObjectId, beginDate: Date | undefined, endDate: Date | undefined, ): Promise<Report[]> => { const reportsCollection = await getCollection('reports'); if (!beginDate && !endDate) { return reportsCollection.find({ contentId }).toArray(); } const creationDateMongoFilter = { ...(beginDate && { $gte: beginDate }), ...(endDate && { $lte: endDate }), }; return reportsCollection.find({ contentId, creationDate: creationDateMongoFilter }).toArray(); }; No need for the mongoDateFilter interface anymore.
{ "domain": "codereview.stackexchange", "id": 40180, "tags": "javascript, node.js, typescript, mongodb" }
Computational Chemistry (Ab Initio), what should I study?
Question: I'm interested in ab initio computational chemistry. I'm trying to figure out what I should study. I know that I should study physical chemistry, mathematics, and programming. However, I am not sure what I should be focusing on. It seems to be a fairly new field and books on the topic are rather specific. Can anyone working in computational chemistry research provide me with a list of (math, computational chemistry, physical chemistry) textbooks they studied or any additional resources to get me started? Answer: Computational chemistry is not a new field, but yes, it can be difficult to find the right approach to it, because it is so diversified. And that means to some degree what you need depends strongly on what you want to study. Most of the time, you're basically solving the Schrodinger equation in all kinds of different ways. It would be better if you could specify what level of knowledge you have at this time. Assuming that you've just got into college, here's a list of things you should study: Basic theory Math. You need linear algebra and mathematical analysis. Any good old textbook will do. What you will use the most every day is linear algebra, so you had better be very, very good at it, along with good expertise in vectors, matrices and in some cases tensors. There may also be some functional analysis involved, but for most people it really just boils down to linear algebra. Mathematical analysis is basically just differentiation, integration, and all kinds of tricks related to it. Quantum mechanics. You get some of it from P-Chem II, but if you take it from the physics department it will be more helpful. I recommend the Cohen-Tannoudji textbook. Make sure you start from the beginning and go through every chapter to the very end. Quantum chemistry. With that background you can now proceed to understand the conventional methods of quantum chemistry. For this I would recommend Modern Quantum Chemistry by Szabo and Ostlund.
Courses to take: In addition to what was already mentioned above, many universities will also have quantum chemistry and computational chemistry courses. I also recommend you take group theory and calculus of variations. Programming skills And that prepares you for the theoretical part. Whether you need comprehensive programming skill depends on what you want to do. Some may make do with existing programs; some may need to write their own. But even if you use an existing program, it requires no less an understanding of the quantum chemical theories you're using. For programming skills, you will certainly benefit from taking a numerical methods course, and from mastery of one programming language. Any one will do, it doesn't matter; for what we do, it's straightforward to switch to a different language whenever you want. Take statistics and machine learning if you have time. Computational chemists are not usually very big on the algorithms themselves. Most of the time, you just take something right out of a numerical analysis textbook or program package and adapt it a little bit. It is much more about the underlying physics, i.e., what form of wave function you choose, what approximation you make, etc. The most important thing about programming, I feel, is hands-on experience. If you can understand quantum mechanics, programming is not going to be hard. Chemistry As for other branches of chemistry, you don't have to be the expert, but you definitely need a good understanding because those are the problems you're trying to solve. You need to understand very well the experimental procedure you're trying to study.
{ "domain": "chemistry.stackexchange", "id": 3155, "tags": "computational-chemistry" }
How did the huge dinosaurs cope with gravity and loads on bones, etc.?
Question: It's very costly to be a huge animal. Your mass grows as the cube when you scale up, but you still only have two/four legs to support the same weight. This increases the pressure that your body needs to cope with. (This is easy to see if you compare an ant with an elephant: the elephant's legs are much thicker and sturdier in comparison to its body.) Looking at a T-rex for example, specimens have been found that are believed to weigh more than 9 tonnes, compared to an elephant's 10 tonnes. T-rex, of course, has only two legs. The heaviest dinosaur is believed to have weighed 80 tonnes. That is the weight of about 20 cars on each of its feet. How could they support such massive weights? Answer: Assuming that gravity was essentially the same (other answers to this question notwithstanding), very large dinosaurs were dealing with the same forces that they would today. There are two clades of dinosaurs in which gigantism evolved, Sauropoda (quadrupedal sauropods) and Theropoda (including T. rex). Each "solved" the problem of large size in different (but also somewhat similar) ways. The main reason why large size was not a problem was that, if posture changed to align the forces between the animal and the ground, the bones are compressed. Bone is very strong in compression. Theropoda Theropods essentially operate as a see-saw, with a large muscular tail balancing a large head. As such, they did not likely use much active muscular force to balance. The analogy is a human standing. Just standing, you don't need much muscle force to balance. Hutchinson and Garcia (2002) showed that, because of a lack of plausibly large leg musculature, T. rex could not run. For a range of postures, they estimated how much muscle would be required to balance the animal and found that running behavior was unlikely. Hutchinson, J.R. and M. Garcia. 2002. Tyrannosaurus was not a fast runner. Nature 415:1018-1021.
Sauropoda Sauropods show many of the same adaptations as elephants, the largest extant land mammals. Their limbs were held upright (erect), which requires less energy for balance. Some sauropods had air-filled bones, which would also lighten the skeleton. Wilson and Carrano (1999) document the evolution of posture through sauropod evolution from a biomechanical perspective. Wilson, J.A. and M.T. Carrano. 1999. Titanosaurs and the origin of “wide-gauge” trackways: a biomechanical and systematic perspective on sauropod locomotion. Paleobiology 25:252–267.
{ "domain": "biology.stackexchange", "id": 160, "tags": "evolution, physiology, palaeontology" }
Lissajous vs Rosette: which one takes place in which condition?
Question: TL;DR: Are a rosette (hypotrochoid) and a Lissajous figure basically the same thing? If they are different, when does a pendulum make a rosette, and under which other conditions does it produce a Lissajous figure? A simple pendulum (where the bob is a sand tank or an ink can), when oscillated in 2 planes simultaneously (say the east-west plane and the north-south plane), leaves a pattern on the ground. To my surprise, in some cases this figure is a rosette (like the hypotrochoid flowers drawn using a spirograph) or like a precession orbit, whereas in some cases it is a Lissajous figure. Example: Rosette: this YouTube link of ink cans; and Lissajous figures: this sand pendulum on YouTube. What factors cause one situation to print a rosette and another situation to produce a Lissajous? Are these two curves the same? Is it possible that in some special conditions (ratio etc.) one changes into the other? Answer: They are closely related. A rosette combines two motions. One motion is a simple circle. The other adds a radial oscillation to it. This comes from a sand pendulum suspended from a point. If you swing it in a circle, it draws a circle. If you bump it so it also swings in and out, it draws a rosette. A spirograph does something very similar. It adds a circular motion on a circular motion. This answer to How does the Coriolis effect explain Ekman transport? happened to come up today. For a very simple elliptical rosette, it shows why circle-on-circle is equivalent to radial-line-on-circle. A Lissajous pattern combines two motions. One is a linear oscillation in the x direction. The other is a linear oscillation in the y direction. This requires the two-string support shown in the video. If you swing the sand pendulum in the x direction, it draws a line back and forth. If you bump it in the y direction, it draws a Lissajous pattern. Note that in both cases, the video shows the pendulum slowing down and drawing smaller and smaller patterns.
To draw a true rosette or Lissajous, you would need to keep nudging the pendulum to overcome friction. The San Diego Museum of Natural History has a Foucault pendulum. See this and this. The sphere is about a foot in diameter. It is suspended from a point a few stories up. A simplified explanation is that it swings back and forth in a line, and the earth turns under it. That is how it would work at the north pole. It is a little more complicated at San Diego's latitude. The result is that a complete rosette comes every 44 hours or so instead of every 24 hours. The disk in the floor has an electromagnet in it. It nudges the pendulum to keep it moving.
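The two constructions described in the answer can be written down directly; a small numpy sketch (the frequency and amplitude choices are arbitrary, just for illustration):

```python
import numpy as np

# Sketch of the two curves discussed above. A Lissajous figure is two
# independent linear oscillations in x and y; a rosette is a circular
# motion with a radial oscillation added on top.
t = np.linspace(0.0, 2.0 * np.pi, 2001)

# Lissajous: x and y oscillate independently, here with a 3:4 frequency ratio.
liss_x = np.sin(3 * t)
liss_y = np.sin(4 * t + np.pi / 2)

# Rosette: the radius wobbles five times per revolution around a circle.
r = 1.0 + 0.4 * np.sin(5 * t)
rose_x = r * np.cos(t)
rose_y = r * np.sin(t)

# The rosette's distance from the centre oscillates between 0.6 and 1.4,
# while the Lissajous figure stays inside the unit square.
print(rose_x.shape, liss_x.shape)
```

Plotting `(liss_x, liss_y)` and `(rose_x, rose_y)` reproduces the two families of patterns: changing the 3:4 ratio changes the Lissajous figure, and changing the radial wobble count changes the number of rosette "petals".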
{ "domain": "physics.stackexchange", "id": 94157, "tags": "waves, orbital-motion, harmonic-oscillator, geometry" }
What is the shortest run needed for a "dropped skateboard" rider to reach 160 mph?
Question: I just saw a great clip on the fastest skateboarder to date. He wants to reach 160 mph from a dropped board, but says his speed is limited by the roads currently available. What is the shortest run required for a "dropped skateboard" rider to reach 160 mph? Is it even possible? Answer: As posted above the general value for terminal velocity is around 120mph. But this is for skydivers in a stable arms spreadeagle pose. Speed skiers on steep hills with streamlined clothing reach 156mph (251km/h). Presumably if you had low enough rolling friction, a long and smooth enough road (and sufficient levels of stupid) you could do this on wheels If you can ignore mechanical friction then the speed limit only depends on aerodynamics, ie only on your shape (assuming air at standard temperature and pressure) so with enough time you could achieve this on a hill of any slope. A steeper slope merely gives you more power to reduce the effect of rolling friction in the wheels.
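A back-of-envelope bound to go with this: ignoring drag and rolling friction entirely, energy conservation alone sets the minimum vertical drop needed for 160 mph, and hence the run length for a given grade (the slope angles below are my own illustrative choices):

```python
import numpy as np

# Lower bound on the run: with all friction and drag ignored,
# energy conservation gives the minimum vertical drop to reach 160 mph.
v = 160 * 0.44704            # 160 mph in m/s
g = 9.81                     # m/s^2
drop = v ** 2 / (2 * g)      # minimum vertical drop in metres

for angle_deg in (5, 10, 20):
    run = drop / np.sin(np.radians(angle_deg))
    print(f"{angle_deg:2d} deg grade: {run:7.0f} m of road for a {drop:.0f} m drop")
```

So even before drag enters, a 160 mph run needs roughly a 260 m vertical drop; a real run must be longer still, since drag grows with the square of the speed.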
{ "domain": "physics.stackexchange", "id": 4199, "tags": "gravity, velocity" }
Confused about shifting convolutions
Question: We know that for two signals $x[n]$ and $h[n]$ such that : $$ y_{1}[n]=(x*h)[n]=\sum_{k=-\infty}^{\infty}x[k]h[n-k]=\sum_{k=-\infty}^{\infty}h[k]x[n-k] $$ We can deduce that :$$y_{2}[n]=x[n+2]*h[n]=y_{1}[n+2]$$ and $$y_{3}[n]=x[n]*h[n+2]=y_{1}[n+2]$$ but can we know what : $$ y_{4}[n]=x[n+2]*h[n+2] $$ gives in terms of $y_{1}[n]$? I am unaware if I might be asking a really stupid question or am I not seeing it? Is it $y_{1}[n+4]$? Answer: Whenever you have doubts regarding the properties of the convolution operator, you should resort to its definition. Let $x_s[n] = x[n+2]$ and $h_s[n] = h[n+2]$. Then: $$y_4[n] = x_s[n] * h_s[n] = \sum_{k=-\infty}^\infty x_s[k]h_s[n-k]$$ If we replace with the definitions of our discrete functions: $$y_4[n] = \sum_{k=-\infty}^\infty x[k+2]h[n-k+2]$$ See what happens if we make the change of variables $m = k+2$: $$y_4[n] = \sum_{m=-\infty}^\infty x[m]h[n-(m-2)+2]=\sum_{m=-\infty}^\infty x[m]h[n+4-m]$$ Does that final expression sound familiar? You were right indeed in your OP, as that is exactly what you thought: $$y_4[n] = \sum_{m=-\infty}^\infty x[m]h[n+4-m] = y_1[n+4]$$
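The identity can also be confirmed numerically. In the sketch below (my own construction), each finite-support signal is stored together with its starting index; convolution adds the start indices, so shifting both inputs left by 2 shifts the output left by 4, i.e. $y_4[n] = y_1[n+4]$:

```python
import numpy as np

# Numerical check of the identity derived above:
# if y1 = x * h, then x[n+2] * h[n+2] = y1[n+4].
# Signals are stored as (samples, start_index) pairs so shifts are explicit.

rng = np.random.default_rng(0)
x = rng.standard_normal(5)   # x[n] supported on n = 0..4
h = rng.standard_normal(4)   # h[n] supported on n = 0..3

def conv(sig_a, sig_b):
    (a, sa), (b, sb) = sig_a, sig_b
    return np.convolve(a, b), sa + sb   # support start indices add

y1, s1 = conv((x, 0), (h, 0))
# x[n+2] is the same samples with the start index shifted left by 2; same for h.
y4, s4 = conv((x, -2), (h, -2))

print(np.allclose(y1, y4))   # identical samples...
print(s1 - s4)               # ...starting 4 indices earlier: y4[n] = y1[n+4]
```

The samples themselves never change; only the bookkeeping of where the output support begins moves, which is exactly the statement $y_4[n] = y_1[n+4]$.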
{ "domain": "dsp.stackexchange", "id": 9956, "tags": "discrete-signals, convolution" }
Appending a newline character to a string
Question: I'm trying to find the simplest solution to append a newline character to a string. Although the following code works, I would like to know if it's possible to make the code simpler. (The argument should be const char* and not std::string) My Code : static void sysGui(const char *s) { char buf[1000]; std::strcpy(buf, s); std::size_t size = std::strlen(s); buf[size] = '\n'; buf[size + 1] = '\0'; sys_gui(buf); } Answer: If you're really certain of the necessary conditions for this to actually work, a somewhat cleaner way to do the job would be to use sprintf: static void sysGui(const char *s) { char buf[1000]; sprintf(buf, "%s\n", s); sys_gui(buf); } You could use a std::ostringstream instead, but sprintf seems more in keeping with the fact that the rest of the code is essentially C anyway (regardless of how it's tagged).
{ "domain": "codereview.stackexchange", "id": 31606, "tags": "c++" }
Why does LTL equivalence (with the next operator) imply trace equivalence in finite transition machines?
Question: I am following Katoen's YouTube course about Model Checking. I am trying to understand how finite transition machines are different from infinite transition machines in terms of equivalences. That lead me to the following question regarding equivalences, but the focus there was on infinite transition machines. Comments in the thread claimed that LTL equivalence = trace equivalence in finite transition machines, when LTL includes the next operator but I do not see how. The direction from trace equivalence to LTL equivalence is immediate, due to the fact that only traces satisfy LTL formulas. The other direction is problematic for me: If we have two transition machines that are LTL equivalent, I understand that they are equivalent with respect to finite traces, because I can use the next operator, but what happens with infinite traces? If I try and assume the claim is false in hopes of reaching a contradiction, I get that an infinite number of prefixes from each infinite trace from either transition system would have to be in both trace sets, but where's the problem? Can't I express infinitely many different prefixes using a finite transition machine? Answer: The important thing to notice is that in finite transition systems, two systems $K_1,K_2$ over an alphabet $\Sigma$ are not trace equivalent (for infinite traces) if and only if there exists a finite trace $w\in \Sigma^*$ that is in one of them, but not the other. Indeed, if $K_1$ and $K_2$ are not trace equivalent, denoted $L(K_1)\neq L(K_2)$, then w.l.o.g. there exists an infinite trace $x\in L(K_1)\setminus L(K_2)$, but we can say something stronger: since the transition systems are finite, we can actually assume $x$ is of the form $x=uv^\omega$ for finite words $u,v\in \Sigma^*$. This "lasso" structure of $x$ tells us that there is a finite trace $uv^m\in L(K_1)\setminus L(K_2)$, for some $m\in \mathbb{N}$, thus getting rid of the "infinitely many prefixes" problem. 
Now, we can define an LTL formula, using "next" operators, that states "$uv^m$ is not a prefix of the trace". This formula holds in $K_2$, but not in $K_1$.
{ "domain": "cstheory.stackexchange", "id": 5258, "tags": "linear-temporal-logic" }
How is the mass of air in the intake manifold calculated in an electronically fuel injected vehicle?
Question: I am considering a two wheeler petrol engine. How is the mass of air entering the intake manifold calculated? Given that I have: Manifold Absolute Pressure sensor Intake Air Temperature sensor Engine Temperature sensor Throttle Position sensor Engine rpm sensor Please help me understand how to estimate the amount of air entering the engine for combustion. Answer: An IC engine is (for these purposes) a pump. It moves a certain volume of gas through on each cycle (one revolution for a 2-stroke, two for a 4-stroke). That amount is reported as the displacement of the engine (110 cc in your case). Then you need to determine the mass of the air in that volume. You can arrive at that using the ideal gas law and the molecular weight of the air. $$ PV=nRT $$ Values in order: pressure ($Pa$), volume (displacement, in your case in $m^{3}$), number of moles, the ideal gas constant (8.314 $J\cdot K^{-1} \cdot mol^{-1}$) and temperature (of the air, $K$). Solve for $n$ and multiply by the molecular weight (~0.0288 $kg \cdot mol^{-1}$ for air). That'll be the mass of air. This method ignores the volume taken up by the fuel itself, pressure loss between the intake manifold and cylinder, etc. It's a pretty good estimate, but won't be accurate enough to make the engine run acceptably well. You'll need to apply an empirical coefficient to tune it to the right value to control your fuel injection. That coefficient will not be constant over all RPM and other operating conditions. The good news is that the coefficient will be close to unity and won't change (significantly) over the lifetime of the engine. That tuning process can be involved. Ideally, you'd have an air-fuel ($\lambda$) sensor on the exhaust to measure how far from stoich the mix is. Tuning with an exhaust gas temp sensor is also possible (never had to do that myself), and may be cheaper.
The last option would be to tune the engine on a dynamometer (measures power), although tuning to produce maximum power will usually produce a slightly fuel rich mixture. The $\lambda$ sensors are the best way to go and can be integrated into your controller to make the system more robust, but you could borrow one temporarily to tune and run the engine without it. Notes: I've specified everything in base SI units. Other units will work too, but make sure that you get the value of $R$ right and use an absolute temperature (Kelvin or Rankine, not Celsius or Fahrenheit) Engine temp and TP are not strictly necessary for the calculation, but TP is often used to tweak fuel delivery during changes in speed/TP.
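The ideal-gas estimate above can be sketched directly. The MAP and temperature readings below are made-up illustrative values, not numbers from the question:

```python
# Sketch of the speed-density estimate described above.
# The sensor readings here are assumed example values.
R = 8.314            # J/(mol*K), ideal gas constant
M_air = 0.0288       # kg/mol, approximate molar mass of air
disp = 110e-6        # m^3, 110 cc displacement
P = 95e3             # Pa, assumed MAP reading at part throttle
T = 310.0            # K, assumed intake air temperature

n_mol = P * disp / (R * T)       # moles of air drawn in per intake cycle
m_air = n_mol * M_air            # kg of air per intake cycle
m_fuel = m_air / 14.7            # stoichiometric petrol mix is ~14.7:1 by mass
print(f"air: {m_air * 1e3:.3f} g/cycle, fuel: {m_fuel * 1e3:.4f} g/cycle")
```

The empirical (volumetric-efficiency) coefficient mentioned above would then multiply `m_air`, and the rpm signal converts this per-cycle mass into a flow rate (one intake cycle per two revolutions on a four-stroke).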
{ "domain": "engineering.stackexchange", "id": 664, "tags": "mechanical-engineering, electrical-engineering" }
Why does the twirl of a quantum channel give a depolarizing channel?
Question: I would like to understand in detail why the twirl of a quantum channel gives depolarizing channel, which is the starting point of randomized benchmarking. To be self-contained, let me set up the notation. Let $\hat{U}$ denote a superoperator that acts on the density matrix as $\hat{U}(\rho)\equiv U\rho U^\dagger$ where $U$ (without the hat) is the corresponding unitary. Let $\hat{\Lambda}$ be a quantum channel such that $\hat{\Lambda}(\rho)=\sum_kA_k\rho A_k^\dagger$ where $A_k$ is the Kraus operator. We use $\circ$ to denote the composition of superoperators: $\hat{U}_1\circ\hat{U}_2(\rho)=U_1U_2\rho U_2^\dagger U_1^\dagger$. The twirl of a quantum channel is defined as $\hat{\Lambda}_t\equiv\int dU\hat{U}\circ\hat{\Lambda}\circ\hat{U}^\dagger$ which is equal to a depolarizing channel in the sense that \begin{equation} \hat{\Lambda}_t(X) = (1-p_d)X + \frac{p_d}{D}\text{tr}(X)I \end{equation} for any operator $X$. I would like to understand the derivation of this fact. What I know is that the twirl commutes with arbitrary unitary superoperator $\hat{U}\circ\hat{\Lambda}_t = \hat{\Lambda}_t\circ\hat{U}$ and it hints that I should use some sort of Schur's lemma in the natural representation of $\hat{\Lambda}$, but I am not sure how to proceed... Resources that I have found: Nielsen's paper, but I don't understand his argument below Eq. (10). The original RB paper, I don't understand their Eq. (46), so I guess I am missing some group theory here. Meier's thesis, essentially following 2 but with a slight different representation, which I could not follow as well. Any help to fill in the gap is really appreciated! Answer: Nielsen's paper cited in the question simplifies the arguments originally laid out in two papers by Horodecki family. 
This answer sketches the original arguments and is meant to complement the nice explanation based on representation theory written by @Markus Heinrich by requiring less background knowledge and hopefully providing some additional insight into the relationship between depolarizing channels and twirling. It also demonstrates the use of state-channel duality. High level summary The argument uses state-channel duality to translate twirling of channels to $U\otimes U^*$ twirling of states. By unitary invariance of the Haar measure, twirling is idempotent, so the Choi matrix of a twirled channel is invariant under $U\otimes U^*$ twirling of states. However, it turns out that the only states invariant under $U\otimes U^*$ twirling of states are the so-called noisy singlets which under state-channel duality correspond to depolarizing channels. Noisy singlet Consider two systems with the Hilbert spaces of the same finite dimension $N$. Let $|\psi\rangle:=\frac{1}{\sqrt{N}}\sum_{i=1}^N|i\rangle|i\rangle$. It is easy to check that for any linear operator $A$ $$ (A\otimes I)|\psi\rangle = (I\otimes A^T)|\psi\rangle.\tag1 $$ Now, for $p\in[0,1]$, we define the noisy singlet $\rho_p$ to be the bipartite state $$ \rho_p:=p|\psi\rangle\langle\psi|+(1-p)\frac{I\otimes I}{N^2}.\tag2 $$ Twirling Twirling of states sends a bipartite state $\rho$ to $$ \rho_t := \int dU (U\otimes U^*)\rho(U^\dagger\otimes U^T)\tag3 $$ where $U^*$ denotes the complex conjugate of $U$. 
Using $(1)$, we can show that the Choi matrix $J(\hat\Lambda_t)$ of a twirled channel $\hat\Lambda_t$ is the result of twirling of states applied to the Choi matrix $J(\hat\Lambda)$ of the original channel $\hat\Lambda$ $$ \begin{align} J(\hat\Lambda_t)&=\hat\Lambda_t\otimes\hat{I}(N|\psi\rangle\langle\psi|)\\ &=\left(\int dU\hat{U}\circ\hat{\Lambda}\circ\hat{U}^\dagger\right)\otimes\hat{I}(N|\psi\rangle\langle\psi|)\\ &=\left(\int dU(\hat{U}\otimes\hat{I})\circ(\hat{\Lambda}\otimes\hat{I})\circ(\hat{U}^\dagger\otimes\hat{I})\right)(N|\psi\rangle\langle\psi|)\\ &=\int dU(\hat{U}\otimes\hat{I})\circ(\hat{\Lambda}\otimes\hat{I})\left((U^\dagger\otimes I)N|\psi\rangle\langle\psi|(U\otimes I)\right)\\ &=\int dU(U\otimes I)\left[\hat{\Lambda}\otimes\hat{I}\left((U^\dagger\otimes I)N|\psi\rangle\langle\psi|(U\otimes I)\right)\right](U^\dagger\otimes I)\\ &=\int dU(U\otimes I)\left[\hat{\Lambda}\otimes\hat{I}\left((I\otimes U^*)N|\psi\rangle\langle\psi|(I\otimes U^T)\right)\right](U^\dagger\otimes I)\\ &=\int dU(U\otimes U^*)\left[\hat{\Lambda}\otimes\hat{I}\left(N|\psi\rangle\langle\psi|\right)\right](U^\dagger\otimes U^T)\\ &=\int dU(U\otimes U^*)J(\hat\Lambda)(U^\dagger\otimes U^T)\\ &=J(\hat\Lambda)_t. \end{align}\tag4 $$ Another fact we can easily prove using $(1)$ is that every noisy singlet $(2)$ is invariant under twirling of states $\rho_{p,t}=\rho_p$. In fact, it turns out that noisy singlets are the only states with this property. See section $V$ in this paper for a proof of this fact. Depolarizing channel Depolarizing channel is a CPTP map defined by $$ \hat\Delta_p(\rho) = p\rho + (1-p)\frac{I}{N}\mathrm{tr}\rho.\tag5 $$ A short calculation shows that the Choi matrix of $\hat\Delta_p$ is $$ J(\hat\Delta_p)=(\hat\Delta_p\otimes\hat I)(N|\psi\rangle\langle\psi|)=N\rho_p\tag6 $$ where $\rho_p$ is a noisy singlet. 
Putting it all together Finally, unitary invariance of the Haar measure implies that twirling a channel twice yields the same result as twirling it once $$ (\hat\Lambda_t)_t=\hat\Lambda_t.\tag7 $$ Therefore, by $(4)$ $$ J(\hat\Lambda_t)=J((\hat\Lambda_t)_t)=J(\hat\Lambda_t)_t\tag8 $$ i.e. the Choi matrix of $\hat\Lambda_t$ is invariant under twirling of states. But noisy singlets are the only states with this property. Therefore, $J(\hat\Lambda_t)$ is (a scalar multiple of) a noisy singlet $$ J(\hat\Lambda_t)=N\rho_p\tag9 $$ for some $p\in[0,1]$. However, $N\rho_p=J(\hat\Delta_p)$, so by injectivity of $J$, we have $$ \hat\Lambda_t=\hat\Delta_p\tag{10} $$ which was to be proven.
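As a concrete numerical check (my own sketch, not part of the papers cited above): the 24-element single-qubit Clifford group is a unitary 2-design, so averaging over it reproduces the Haar twirl exactly. Twirling an amplitude-damping channel over it should therefore act as $X \mapsto cX$ on every Pauli with one common contraction factor, while fixing the identity, which is exactly the depolarizing form:

```python
import numpy as np

# Twirl an amplitude-damping channel over the 24 single-qubit Cliffords
# (a unitary 2-design, so this equals the Haar twirl) and verify the
# result is a depolarizing channel.
I2 = np.eye(2, dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]], dtype=complex)

def canon(U):
    """Canonical form modulo global phase, for deduplication."""
    flat = U.flatten()
    k = int(np.argmax(np.abs(flat) > 1e-9))   # first non-negligible entry
    U = U * (abs(flat[k]) / flat[k])          # make that entry real positive
    return np.round(U, 9) + (0.0 + 0.0j)      # normalize -0.0 so keys match

# Breadth-first closure of <H, S> modulo phase: the 24 Cliffords.
group = {canon(I2).tobytes(): I2}
frontier = [I2]
while frontier:
    new = []
    for U in frontier:
        for G in (H, S):
            V = G @ U
            key = canon(V).tobytes()
            if key not in group:
                group[key] = V
                new.append(V)
    frontier = new
cliffords = list(group.values())

# Test channel: amplitude damping (definitely not depolarizing on its own).
g = 0.3
A0 = np.array([[1, 0], [0, np.sqrt(1 - g)]], dtype=complex)
A1 = np.array([[0, np.sqrt(g)], [0, 0]], dtype=complex)
channel = lambda rho: A0 @ rho @ A0.conj().T + A1 @ rho @ A1.conj().T

def twirled(rho):
    return sum(U @ channel(U.conj().T @ rho @ U) @ U.conj().T
               for U in cliffords) / len(cliffords)

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
cs = [np.trace(s @ twirled(s)).real / 2 for s in (sx, sy, sz)]
print(np.round(cs, 6))                 # one common contraction factor
print(np.allclose(twirled(I2), I2))    # identity is preserved
```

For amplitude damping the contraction factor works out to $(2\sqrt{1-g} + (1-g))/3$, the average of the three Pauli eigenvalues of the original channel, consistent with the twirl isolating the depolarizing part.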
{ "domain": "quantumcomputing.stackexchange", "id": 4719, "tags": "quantum-operation, randomised-benchmarking, depolarizing-channel" }
How to determine the temperature inside a pipe knowing the temperature outside?
Question: I would like to know if there is a way to determine by calculation the inner wall temperature of a pipe which has a steady flow of water through it, knowing the temperature measured on the surface of the pipe (outside). As the measurement takes place at a very small point, one can consider that there will not be any variation of the temperature along the length of the pipe at this point. Moreover, if we consider the pipe to be perfectly cylindrical, there will be no variation of the temperature with rotation around the pipe. Therefore, in steady state, the heat equation boils down to: $$\frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial T}{\partial r}\right)=0 \tag 1$$ which gives a solution of the form $$T(r)=C_1 \ln (r) + C_2 \tag 2$$ Now I am only having a few difficulties applying the right boundary conditions. If I say, at the boundaries, $T(r_0) = T_0$ (wall temperature of the pipe inside the pipe) and $T(r_1) = T_1$ (wall temperature of the pipe outside the pipe), then it gives the solution of $T(r)$ as a function of $T_0, T_1, r_0$ and $r_1$. But here the point is that $T_0$ is the temperature I want to determine... I would like to know the difference of temperature so that I can determine $Q$ the power lost as, $$Q = \frac{T_1 - T_0}{2 \pi \lambda L}$$ and I have absolutely no possibility to measure the temperature inside directly. I think there is probably something wrong in my way of approaching the problem, but I haven't found anything conclusive on the web for doing such a thing. I am stuck there and I need some help. Answer: The equation you've been trying to derive is $$\dot{Q}=2\pi k\frac{(T_0-T_1)}{\ln{(r_1/r_0)}}$$ where $\dot{Q}$ is the rate of heat loss per unit length of pipe and $k$ is the thermal conductivity of the pipe. Note that there are two unknowns ($\dot{Q}$ and the inner-wall temperature $T_0$) but only one equation.
To provide closure on this, as @Gert has indicated, you need to characterize the rate of heat loss from the pipe to the surrounding air in the room: $$\dot{Q}=2\pi r_1 h(T_1-T_{surr})$$ where $h$ is the convective heat transfer coefficient on the outside of the pipe, evaluated at the outer surface ($r_1$, $T_1$). You can then get both $\dot{Q}$ and the unknown wall temperature by combining these equations (using an estimate of $h$).
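Combining the conduction and convection relations is then a two-line calculation, following the question's labels ($r_0$, $T_0$ inner wall; $r_1$, $T_1$ outer wall). All numbers below are illustrative assumptions of mine, not data from the question:

```python
import numpy as np

# Numeric sketch: outer-surface temperature is measured, inner wall unknown.
k = 50.0               # W/(m*K), assumed thermal conductivity of the pipe wall
h = 10.0               # W/(m^2*K), assumed convective coefficient to room air
r0, r1 = 0.010, 0.012  # m, inner and outer radii
T1 = 60.0              # degC, measured outer-surface temperature
T_surr = 20.0          # degC, room air temperature

# Convection at the outer surface fixes the heat loss per unit length:
Q = 2 * np.pi * r1 * h * (T1 - T_surr)              # W per metre of pipe
# Conduction through the wall then yields the inner-wall temperature:
T_in = T1 + Q * np.log(r1 / r0) / (2 * np.pi * k)
print(f"Q = {Q:.2f} W/m, inner wall at {T_in:.3f} degC")
```

With a conductive metal wall the inner surface comes out only a few hundredths of a degree hotter than the measured outer surface; the wall gradient only becomes significant for thick or poorly conducting pipes.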
{ "domain": "physics.stackexchange", "id": 57097, "tags": "thermodynamics, heat-conduction" }
Theoretical definition and practical measurement of differential cross section
Question: In Sakurai's book, the definition of differential cross section is: $$d\sigma/d\Omega= \text{transition rate} / \text{probability flux} $$ However this definition doesn't contain any information about the thickness of the material or the density of target particles. How does one compare the experiment with the theory? I checked Wikipedia but didn't find anything useful. Answer: This for me was the best place to start: Rutherford Scattering. Differential cross sections are a way of expressing the results of a scattering experiment. You have some initial state of the particles to be scattered, you have the target that they will be scattered off of, and you have some final state of your test particles in which they end up because of the scattering. That final state can be designated in many different ways. It can be a final energy state, a final momentum state. It could be some position. For example, in the Stern-Gerlach experiment, spin-up particles end up in one container, spin-down in another, after passing through a magnetic field. Consider the following experiment: Suppose you're blind, but when you hear a sound, you know how far away it is from you, and you know how far away it is from whatever you can touch in front of you. Suppose someone has glued billiard balls of varying, non-standard sizes down on a pool table and you want to know where they are. You're at one of the narrower ends. Along this side in front of you are a series of buttons, evenly spaced, each with a marker in Braille with the letters A, B, C, and so on. Each is also some distance away from the zero point. Push one of these buttons and it shoots out a small ball bearing parallel to the adjacent walls. As these ball bearings are shot out, you hear a click if they hit one of the object balls. The ball bearings are deflected from their initial path. They are scattered elsewhere on the billiard table along a path deviating from their initial direction.
When they hit the wall of the Billiard table, they make another sound telling you where and how long after the previous click they hit. You know the initial starting points of the ball bearings. You know where they are initially deflected, and you know where they end up. With all that information, combined with the conservation of energy and momentum, you can construct a map of where the object balls are, as well as their size and shape. However, just the initial point of emission of the ball bearings and their final location is going to give you a lot of that information. A cross section, in particular a differential cross section, represents information about initial and final states of a scattering experiment in such a way that you have some understanding of various features of the targets the ball bearings were scattered from, their size and shape, as well as location. In the Rutherford Experiment, they fired highly energetic helium nuclei, alpha particles, at a very thin gold foil. Sufficiently thin, the particles would scatter only once from the gold atoms in the foil. The alpha particles were shot along parallel lines and it was expected that the gold atoms would have roughly the same shape and size throughout the foil. They would also be evenly distributed. This implies some uniformity of final states of the alpha particle regardless of where or when an interaction occurs. If there is a pattern behind the transition from initial to final states, it's due specifically to the interaction between the alpha particle and the gold nucleus and not to incidentals as to where the nuclei are located in the foil or specific geometric features of the target nuclei. In other words, you know the initial magnitude and direction of your alpha particles' velocities. You also have detectors that can tell you the final magnitude and direction of their velocities.
The final state is a function of the initial state and that function represents certain properties of what caused the transition. That's a "cross section". It has units of area while conveying information about the relevant interaction. For example, if the cross section stays the same at high or low energies of the alpha beam, then the interaction is mechanical, like two billiard balls bouncing off of each other. All that matters is the point of impact, the centers of mass and the normals at the boundary point. If there's an action-at-a-distance force like gravity or electromagnetism, then initial energy will gradually overcome the non-mechanical forces involved. You'll find the cross section shrinks at higher energies, isolating the nuclei to a specific location. The cross section varies with energy. It could also vary with the relative position of the initial trajectory of the incoming beam. If you have the differential cross section instead of the cross section itself, you remove incidentals as to how the interaction works at different entities and have some picture of what features are common to every interaction and not just those specific cases. But a differential cross section is just another cross section. It's a representation of final states as they relate to initial states of the particle to be scattered. There are further geometric implications to scattering cross sections. Check out Mean Free Paths and Cross-sections as they apply to scattering theory: Scattering Theory.
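To tie this back to the asker's point about thickness and density: in a thin-target experiment the expected count rate is $\text{rate} = \Phi \cdot (nt) \cdot (d\sigma/d\Omega) \cdot \Delta\Omega$, where $\Phi$ is the incident rate and $nt$ the areal density of scatterers; $d\sigma/d\Omega$ itself is target-independent, which is why Sakurai's definition never mentions the foil. A sketch with made-up beam, foil and detector numbers for alphas on gold:

```python
import numpy as np

# rate = flux * (n*t) * (dsigma/dOmega) * dOmega for alphas on a thin gold
# foil. Beam, foil and detector numbers below are illustrative assumptions.
e = 1.602e-19            # C, elementary charge
k = 8.988e9              # N*m^2/C^2, Coulomb constant
Z1, Z2 = 2, 79           # alpha particle on gold
E = 5.0e6 * e            # 5 MeV kinetic energy, in joules

def rutherford(theta):
    """Rutherford differential cross section, m^2 per steradian."""
    return (k * Z1 * Z2 * e**2 / (4 * E))**2 / np.sin(theta / 2)**4

# The target enters only through its areal density of nuclei, n*t:
rho, molar = 19300.0, 0.197      # gold density kg/m^3, molar mass kg/mol
n = rho / molar * 6.022e23       # nuclei per m^3
t = 1e-6                         # 1 micron foil
flux = 1e8                       # incident alphas per second into the foil
dOmega = 1e-3                    # detector solid angle, sr

rate = flux * (n * t) * rutherford(np.radians(60.0)) * dOmega
print(rate)     # expected detector counts per second at 60 degrees
```

Comparing measured rates against this prediction at several angles is exactly how the theoretical $d\sigma/d\Omega$ is confronted with experiment: the foil's thickness and density appear only in the common factor $nt$.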
{ "domain": "physics.stackexchange", "id": 53197, "tags": "particle-physics, differentiation, approximations, scattering-cross-section" }
Adding collaborator to project during reservation
Question: If I add a collaborator to an IBM Quantum Education project during a system reservation, do they immediately get priority access to the reserved system? Or do I need to add them before the reservation starts? Answer: Yes, you can add at any time. The reservation system cares about the project you send the jobs from, not the individual users.
{ "domain": "quantumcomputing.stackexchange", "id": 2910, "tags": "ibm-q-experience" }
Print Consecutive numbers by comparing two parameters
Question: input 3,5 output should be 3,4,5 input 5,3 output should be 5,4,3 And the code public static void test(int a, int b) { if(a>b) { for (int i = a; i >= b; i--) { System.out.print(i + "\t"); } }else if(a<b) { for (int i = a; i <= b; i++) { System.out.print(i + "\t"); } } } It works but looks a little messy. Is it possible to do without if else thing? Only one loop. Answer: The function name is weird. What is it testing? A more appropriate name might be printInclusiveRange(…, …). Since the bounds are inclusive, I would expect that for input 3,3, the output should be 3. Yet you output nothing. I would also expect the output not to end with a Tab character. Here's one solution that corrects those two problems, and also combines the loops: public static void printInclusiveRange(int a, int b) { System.out.print(a); if (a != b) { int i = a; int step = (a < b) ? +1 : -1; do { i += step; System.out.print("\t" + i); } while (i != b); } System.out.println(); // Perhaps you want a newline here? }
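For comparison, the direction-free single-loop idea can be sketched as follows (in Python here for brevity; the same sign-of-difference trick carries over directly to Java):

```python
def inclusive_range_str(a, b):
    """Single pass, no if/else on direction: the step is the sign of
    (b - a), falling back to +1 when a == b so range() still yields
    the single value."""
    step = 1 if b >= a else -1
    return "\t".join(str(i) for i in range(a, b + step, step))

# inclusive_range_str(3, 5) gives "3\t4\t5", and (5, 3) gives "5\t4\t3"
```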
{ "domain": "codereview.stackexchange", "id": 13546, "tags": "java" }
Has a 3D chart of nuclides ever been published or proposed $(N, Z, S)$? What information might it show?
Question: Phys.org's 'Strange' glimpse into neutron stars and symmetry violation leads to the new Nature Physics Letter Measurement of the mass difference and the binding energy of the hypertriton and antihypertriton, and that led me to Wikipedia's Hypertriton which says: Normal nuclei are formed only of protons and neutrons. To study them, scientists arrange the various nuclides into a two-dimensional table of nuclides. On one axis is the number of neutrons N, and on the other is the number of protons Z. Because the antihyperon introduces a third component (strangeness), the table becomes three-dimensional. However the article only shows the more familiar two-dimensional Z vs N Chart of Nuclides, something like that shown below, which leads me to ask: Question: What would a 3D chart of nuclides actually look like (neutron and proton number and strangeness; $N$, $Z$, $S$)? Has one been made? If so, what information is entered for each entry? Example of a more conventional 2D $Z$ vs $N$ chart: click for larger, Source Answer: (This anecdote kind of straddles the line between a comment and an answer.) I saw the beginning of such a table in a conference presentation a decade ago. The format was the same as the usual (Z,N) chart of nuclides, but the data were measured lifetimes for hypernuclei where one baryon was a $\Lambda$. The heaviest nuclei on this chart had mass number $A\lesssim5$ --- it was just the low-mass corner of the table of isotopes. I suppose you could construct such a table for nuclei where one baryon is a $\Sigma$ with some charge quantum number. Nuclei with more than one strange baryon are unlikely to be experimentally accessible. So the presentation wouldn't really be three-dimensional; it'd be a series of similar-looking two-dimensional diagrams. Nuclides with one $\Lambda$ hyperon, nuclides with one $\Sigma^+$ hyperon, etc. 
I only vaguely remember this conference presentation, but I think the presenter was describing work done in Jefferson Lab's Hall B. That might be enough information to point a motivated sleuth towards an actual publication.
{ "domain": "physics.stackexchange", "id": 65571, "tags": "particle-physics, nuclear-physics, quarks, isotopes" }
Is the emitted spectrum that of a blackbody when the blackbody is in thermal equilibrium with the ambient or with its interior, or either?
Question: I have come across the following paragraph in Wikipedia: A perfectly insulated enclosure which is in thermal equilibrium internally contains blackbody radiation, and will emit it through a hole made in its wall, provided the hole is small enough to have a negligible effect upon the equilibrium. So my question is: if I have a blackbody in my room, but this blackbody isn't in equilibrium with my room (it is, however, in equilibrium with its own internal structure; let's just imagine I have a perfect glowing blackbody in my room), does the emitted radiation resemble that of a blackbody? Answer: Yes, radiation of a blackbody that is not in equilibrium with its environment is still blackbody radiation, with properties of blackbody radiation. Blackbody need not be in equilibrium with its environment, and its emission need not be equilibrium radiation! In fact usually radiation of a compact finite-sized blackbody can't be, as equilibrium radiation has no macroscopic energy flux in any direction, but blackbody emission produces energy flux from the body away. The concept of blackbody was not invented as something that exists only when in thermodynamic equilibrium. The premise is that it is the body that absorbs every incoming radiation, and reflects none of it; but at the same time, it also produces its own radiation (emission), and the emitted radiation is characterized by the blackbody temperature and geometry. The cavity with perfectly reflecting walls with equilibrium radiation inside is not a blackbody; instead the hole that is made in that wall through which radiation escapes is a realization of a blackbody surface (absorbs all incoming radiation, and emits its own blackbody radiation).
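The point that the emitted spectrum is fixed by the blackbody's own temperature, not by the surroundings, can be illustrated with Planck's law. A small Python sketch using standard CODATA constants:

```python
import math

def planck_spectral_radiance(wavelength_m, temperature_k):
    """Planck's law: spectral radiance of a blackbody (W * sr^-1 * m^-3).
    Depends only on the body's temperature -- nothing about the
    surroundings (e.g. the room) enters the formula."""
    h = 6.62607015e-34   # J*s
    c = 2.99792458e8     # m/s
    kb = 1.380649e-23    # J/K
    x = h * c / (wavelength_m * kb * temperature_k)
    return (2 * h * c**2 / wavelength_m**5) / math.expm1(x)

# The spectrum peaks near Wien's displacement value
# lambda_max ~ 2.898e-3 / T, again set purely by the body's temperature.
peak_5000k = 2.898e-3 / 5000.0  # about 580 nm for a 5000 K blackbody
```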
{ "domain": "physics.stackexchange", "id": 93837, "tags": "thermal-radiation" }
Hydraulic pump choice - which produces most force
Question: I have a choice of two hydraulic power units with specs as follows: A. 2.2kW motor, 8lpm flow rate, 120 Bar pressure rating B. 1.1kW motor, 2lpm flow rate, 200 Bar pressure rating Using F=PA I assume pump B is capable of producing more force in the same system (albeit a lot slower!) However, hearing the phrase "resistance to flow produces the pressure not the pump" many times gives me a nagging feeling. Am I correct or is there something I'm missing? Thanks Answer: For a given cylinder, B has higher pressure and will produce a larger force. You could select a larger diameter cylinder for A such that it would produce the same or larger force. If a cylinder is moving with zero resistance, the pressure in the cylinder will be very low (just enough to overcome its own friction). You are likely looking for maximum force (pump max pressure x cylinder head area) which occurs when the cylinder is blocked or fully extended (zero flow). Operating at flow rates and pressures in between full-flow/zero-pressure and full-pressure/zero-flow will require a pump curve, plumbing information, and more involved calculations.
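To make the F = PA comparison concrete, here is a small Python sketch; the 50 mm bore is a hypothetical value chosen for illustration, not from the question:

```python
import math

# Hypothetical cylinder bore chosen for illustration -- not from the question.
bore_m = 0.050  # 50 mm bore
area = math.pi * (bore_m / 2) ** 2  # piston head area, m^2

def cylinder_performance(pressure_bar, flow_lpm):
    """Max force (N) and extension speed (m/s) for the cylinder above,
    ignoring friction and assuming the pump can hold its rated pressure
    at zero flow (F = P*A) and its rated flow at low pressure (v = Q/A)."""
    pressure_pa = pressure_bar * 1e5
    flow_m3s = flow_lpm / 1000.0 / 60.0
    return pressure_pa * area, flow_m3s / area

force_a, speed_a = cylinder_performance(120, 8)  # pump A: ~23.6 kN
force_b, speed_b = cylinder_performance(200, 2)  # pump B: ~39.3 kN, but 4x slower
```

For the same cylinder, B wins on force by the pressure ratio (200/120) while A wins on speed by the flow ratio (8/2), matching the intuition in the question.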
{ "domain": "engineering.stackexchange", "id": 1366, "tags": "fluid-mechanics, hydraulics" }
Quantum propagator as a transition amplitude
Question: Consider a system whose state is initially $\left|\psi(t_i)\right\rangle$. At a later time $t_f$, its state will be $$\left|\psi(t)\right\rangle=\mathcal{U}(t_i,t_f)\left|\psi(t_i)\right\rangle$$ where $\mathcal{U}(t_i,t_f)=\exp(-i\hat{H}(t_f-t_i)/\hbar)$ is the time evolution operator (in the case of a time-independent Hamiltonian). Now consider the matrix elements of $\mathcal{U}(t_i,t_f)$ in the position eigenbasis $\{\left|x\right\rangle\}$. We define the propagator as $$\left\langle x_f\middle|\mathcal{U}(t_i,t_f)\middle|x_i\right\rangle$$ How can I show that the propagator as defined above can also be written as the transition amplitude $$\left\langle x_f,t_f\middle|x_i,t_i\right\rangle$$ ? Update After thinking about it for a bit, I came up with this: since $$\left|x_i,t_i\right\rangle=\exp(-i\hat{H}t_i/\hbar)\left|x_i\right\rangle$$ and $$\left|x_f,t_f\right\rangle=\exp(-i\hat{H}t_f/\hbar)\left|x_f\right\rangle\quad\Rightarrow\quad\left\langle x_f,t_f\right|=\left\langle x_f\right|\exp(i\hat{H}t_f/\hbar)$$ then it follows that $$\left\langle x_f,t_f\middle|x_i,t_i\right\rangle=\left\langle x_f\middle| \exp(-i\hat{H}(t_i-t_f)/\hbar) \middle| x_i\right\rangle$$ however this produces the wrong sign in the exponent, and I also believe I might be mixing the Heisenberg and Schrodinger pictures. Answer: Unless you have an alternative definition of the transition amplitude given in class I think this question is just a matter of notation / definition. When you write $\langle x_f, t_f | x_i, t_i\rangle$ you mean precisely the matrix element of the evolution operator (the name for this matrix element is the kernel, or propagator) in the position space representation. Note in particular that since you're working in the Schrödinger picture the basis states $\{ |x\rangle \} $ are time independent. 
Edit: Given your comment, if the states $|x_{i}, t_{i}\rangle$ are supposed to be taken in the Heisenberg picture then $ |x_{i}, t_{i}\rangle = e^{\frac{i}{\hbar} \hat{H} t_{i}} |x_{i}\rangle$ where $|x_{i}\rangle$ is the Schrödinger picture state at time zero. Likewise for $|x_{f}, t_{f}\rangle$. Conjugating and taking the inner product we get $$\langle x_{f}, t_{f} | x_{i}, t_{i}\rangle = \langle x_{f} | e^{-\frac{i}{\hbar} \hat{H}t_{f}} e^{\frac{i}{\hbar} \hat{H}t_{i}} | x_{i}\rangle = \langle x_{f} | e^{-\frac{i}{\hbar} \hat{H}(t_{f} - t_{i})} | x_{i}\rangle = \langle x_{f} | \hat{U}(t_{f}, t_{i}) | x_{i} \rangle$$ as required.
{ "domain": "physics.stackexchange", "id": 64280, "tags": "quantum-mechanics, hilbert-space, time-evolution, propagator" }
Why is the melting point of a solution lower than one of both of its components?
Question: The melting point of ice (H2O) is at 273K, and that of salt (NaCl) is at 1074K. However, if one dissolves salt in water, the melting point of the solution will be at ~250K. Why is it so low if the contained ice must melt at 273K and salt at 1074K? Answer: One can explain this phenomenon with entropy change, considering the transition from the liquid state to the solid state. As we cool a pure solvent towards its melting point, the entropy keeps decreasing. Since entropy is a measure of 'randomness', adding a solute to a solvent increases the entropy. So, to again attain the entropy reached earlier, we need to take more heat away from the solution than we did from the pure solvent. This further lowers the temperature at which solidification occurs.
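The qualitative entropy argument has a standard quantitative counterpart for dilute solutions, the cryoscopic relation ΔTf = i·Kf·m. A minimal Python sketch; note that the ~250 K figure in the question corresponds to a near-saturated brine, well outside this linear dilute regime:

```python
def freezing_point_depression(molality, kf=1.86, i=2):
    """Dilute-solution estimate Delta_Tf = i * Kf * m (in kelvin).
    Kf = 1.86 K*kg/mol is the cryoscopic constant of water;
    i = 2 is the van 't Hoff factor for fully dissociated NaCl.
    Only valid for dilute solutions -- the ~250 K eutectic of
    saturated brine lies far outside this linear regime."""
    return i * kf * molality

# 1 mol NaCl per kg of water lowers the freezing point by ~3.7 K
t_melt = 273.15 - freezing_point_depression(1.0)
```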
{ "domain": "physics.stackexchange", "id": 37548, "tags": "solid-state-physics, crystals, molecules, molecular-dynamics, liquid-state" }
Is there a PDA for every Type 3 Grammar?
Question: we learned that for every type 2 grammar G exists a PDA A with L(A) = L(G). But does for every type 3 grammar G exist a PDA A_G with L(A_G) = L(G)? I think it does, because type 2 grammar is a subset of type 3 grammar. Am I wrong? Answer: You need to check the definitions: type-3 (regular) is a (proper) subset of type-2 (context-free). Therewith, every type-3 language has a PDA that accepts it. Essentially, you take a finite automaton for your type-3 language, add the stack but never use it. Up to minor definitory differences (e.g. w.r.t. acceptance), this gives you a PDA for the language. This is also one way to prove that all regular languages are context-free!
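The construction in the answer (take a finite automaton, add a stack, never use it) can be sketched in Python; the tuple encoding of the DFA below is an assumption made for illustration, not standard notation:

```python
def dfa_to_pda(dfa):
    """Wrap a DFA as a PDA whose transitions never touch the stack.

    dfa is (states, start, accepting, delta), where delta[(state, symbol)]
    gives the next state."""
    states, start, accepting, delta = dfa

    def pda_delta(state, symbol, stack_top):
        # pop the stack top and push it straight back: the stack is ignored
        return delta[(state, symbol)], stack_top

    return states, start, accepting, pda_delta

def pda_accepts(pda, word, bottom="Z"):
    """Run the (deterministic, acceptance-by-final-state) PDA on a word."""
    states, start, accepting, pda_delta = pda
    state, stack = start, [bottom]
    for ch in word:
        state, push = pda_delta(state, ch, stack.pop())
        stack.extend(push)
    return state in accepting

# DFA for the regular language (ab)* over {a, b}; qx is a sink state
delta = {
    ("q0", "a"): "q1", ("q0", "b"): "qx",
    ("q1", "a"): "qx", ("q1", "b"): "q0",
    ("qx", "a"): "qx", ("qx", "b"): "qx",
}
pda = dfa_to_pda(({"q0", "q1", "qx"}, "q0", {"q0"}, delta))
```

The wrapped machine accepts exactly the same words as the DFA, which is the point of the proof: every regular language is context-free.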
{ "domain": "cs.stackexchange", "id": 3031, "tags": "formal-languages, regular-languages, pushdown-automata" }
Propagator of massive spin-1 particle
Question: I am currently working on an exercise where I need to derive the propagator, in momentum space, of a massive spin-1 particle. The image included is the solution to it, and I am having trouble understanding parts of it. Namely, how do they go from eq. (80) to eq. (81)? As I see it, eq. (80) is summed over $\mu$ and $\nu$ and is therefore a scalar equation. How can we extract information about the terms in the sum? My second problem is that eq. (86) looks wrong, as the left part should be a scalar again, and the right part still has the uncontracted indices. Technically this is no problem, but something seems fishy to me. It appears I don't really understand how they go from the tensorial equations to scalar ones, the rest of the solution is ok for me. Answer: Take the Fourier transform of both sides of equation $80$ with respect to $x$ and $y$, the right hand side will become $\delta^{(4)}(k_1-k_2)$, the left hand side will pick up a $\delta^{(4)}(k_1-k_2)\,\delta^{(4)}(k_1-q)$. After evaluating the $q$ integral on the left hand side you can replace $k_1$ with $q$ and drop the delta functions on both sides. There is, technically, a mistake in the very first equation that gets corrected in the transition to $81$. Equation $78$ should read something like $$\left[g^{\mu\nu}\left(\Box + M^2\right) - \partial^\mu\partial^\nu\right]G^F_{\nu\lambda} = i\delta^{4}(x-y)\,\delta^\mu_{\hphantom{\mu}\lambda}.$$ There is a way to argue the opening up of the summation that takes place in the initial work, I'm sure, I just can't think of how it's done right now.
{ "domain": "physics.stackexchange", "id": 46028, "tags": "homework-and-exercises, quantum-field-theory, greens-functions, propagator" }
Disprove unrealistic speed-up of total Turing machines
Question: Let $T_1$ be a total Turing machine deciding language $L_1$, and let $I_1$ and $I_2$ be two separate inputs to $T_1$. Further, let $I_{c}$ be $I_2$ concatenated to $I_1$ with some separation symbol in between, and let $S_{T}(I)$ be the number of steps total TM $T$ needs to run until it accepts/rejects input $I$. I am wondering about the following two statements: For every $T_1$ there exists another total Turing machine $T_2$ such that for all valid inputs $I_1 \neq I_2$ for $T_1$, $T_2$ accepts $I_c$ if $T_1$ accepts $I_1$ or if $T_1$ accepts $I_2$. For every $T_1$, there exists a $T_2$ with the above property such that for all valid inputs $I_1 \neq I_2$, it holds that $S_{T_2}(I_c) < S_{T_1}(I_1) + S_{T_1}(I_2)$ To me, it seems as if the second statement would imply an impossible speed-up and should have an obvious counterexample, but I haven't been able to come up with one. Answer: Suppose that the input alphabet is $\{0,1\}$, and consider the language $L_1 = 0^*$. We can easily construct a Turing machine $T_1$ such that $S_{T_1}(I) = |I|+1$. On the other hand, $S_{T_2}(I_c) \geq |I_c|+1$. Since $|I_c|=|I_1|+1+|I_2|$, we get $$ S_{T_2}(I_c) \geq |I_1|+1+|I_2|+1 = S_{T_1}(I_1) + S_{T_1}(I_2). $$
{ "domain": "cs.stackexchange", "id": 16839, "tags": "turing-machines, time-complexity" }
What impact does the modulo operator have in a for-loop?
Question: Here's an example of what I mean: def complexity(n): count = 0 for x in range(1, n+1): for y in range(1, (n*n)+1): if y % n == 0: for z in range(1, n+1): count += 1 return count The first for loop is of course θ(n), the second one θ(n^2). But what impact does the modulo operator have? And what would then be the complexity of the whole thing? Answer: It feels like you're trying to think in terms of pre-defined recipes, rather than just figuring it out by reasoning about the situation. The if statement triggers every time $y$ is a multiple of $n$. So just work out how many different values in $\{1, \dots, n^2+1\}$ are multiples of $n$, and that's how many times the innermost for loop runs. Of course, your code is just an example but, in real life, with something as simple and predictable as the modulo operator, you wouldn't use this approach of "generate every possible value of $y$ and see which ones pass the test". It's much more efficient to just write a for loop that directly assigns the values $n, 2n, \dots, n^2$ to $y$.
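The answer's suggestion, generating the multiples of $n$ directly rather than testing every $y$, can be written out. This variant (a sketch, not the original code) yields the same count, $n^3$; since the wasted modulo scan in the original also costs $\Theta(n^2)$ per outer iteration, the whole thing runs in $\Theta(n^3)$:

```python
def complexity_direct(n):
    """Same count as the original complexity(n), but stepping y directly
    over the n multiples of n in 1..n^2 instead of testing every y."""
    count = 0
    for x in range(1, n + 1):
        for y in range(n, n * n + 1, n):  # y = n, 2n, ..., n^2
            for z in range(1, n + 1):
                count += 1
    return count

# The if fires n times per x, each firing a Theta(n) inner loop,
# so count = n * n * n = n^3.
```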
{ "domain": "cs.stackexchange", "id": 14194, "tags": "time-complexity, asymptotics" }
Why do Volcanoes give out so much Sulphur Dioxide and Carbon Dioxide?
Question: From my understanding, the mantle is in a highly reduced state, so I can't understand why a volcano would give off a highly oxidised gas such as sulphur dioxide. Carbon dioxide, too, is an oxidised gas which is emitted by volcanoes; however, I justify its emission because carbon dioxide is continually being fed into the mantle by the subduction of carbonate sediments, is this correct? If this is true, why is there still carbon dioxide emitted at mid-ocean ridges? Or is this all wrong, and the gases are being emitted because they are still being degassed from the solid mantle in the process of heterogeneous separation which separated the atmosphere from the mantle? If this is the case, why have all other gases been separated but not the two I have mentioned? Answer: the mantle is in a highly reduced state This is not entirely correct. The core is in a highly reduced state, but the mantle is not necessarily reduced, and is quite oxidised in some places. The mantle is heterogeneous. however I justify its emission because carbon dioxide is continually being fed into the mantle by the subduction of carbonate sediments, is this correct? The carbon dioxide in the mantle does not necessarily derive from subduction of carbonates. Or is this all wrong, and the gases are being emitted because they are still being degassed from the solid mantle in the process of heterogeneous separation which separated the atmosphere from the mantle. If this is the case, why have all other gases been separated but not the two I have mentioned? Not entirely accurate. Although it is true that some of the gases in the atmosphere derived from the differentiation of the core-mantle-crust system, a popular theory suggests that many of the gases came after the differentiation through other planetary bodies such as comets and meteorites. You are also implying that carbon dioxide and sulphur dioxide are the only two gases that are emitted from volcanoes. No. 
Water vapour is also emitted (commonly much more than the other two) and also other gases. It's just that less people care about the water and it gets less publicity, because, well, it's just water. There are other gases that are emitted as well: nitrogen, the noble gases. But they are much less common than the other two. They are not the only gases. The authors of Fluxes and sources of volatiles discharged from Kudryavy, a subduction zone volcano, Kurile Islands write: Several potential sources for CO2 can be identified: (1) hotspot-type mantle, (2) MORB-type mantle, (3) subducted oceanic crust (MORB), (4) marine carbonate, (5) organic C from crustal and subducted sediments They also give a table which states that in island arc volcanoes, 12% of carbon is sourced from the mantle, 67% is sourced from carbonates and 21% is organic carbon. Note that this only gives the amount degassed from volcanoes in subduction zone arcs. This does not mean that all carbonate-hosted carbon is released from the rocks to be degassed. I read somewhere that only 50% is degassed, and the rest is lost to the deep mantle. I would also expect that MORB magmas will have significantly less carbonate-sourced carbon. Also, The deep carbon cycle and melting in Earth's interior may be of interest.
{ "domain": "earthscience.stackexchange", "id": 186, "tags": "geochemistry, volcanology" }
Car presenter tests
Question: I caught a whiff of a code-smell emanating from one of my tests, in a scenario akin to the following: [TestFixture] public void CarPresenterTests{ [Test] public void Throws_If_Cars_Wheels_Collection_Is_Null(){ IEnumerable<Wheels> wheels = null; var car = new Car(wheels); Assert.That( ()=>new CarPresenter(car), Throws.InstanceOf<ArgumentException>() .With.Message.EqualTo("Can't create if cars wheels is null")); } } public class CarPresenter{ public CarPresenter(Car car) { if(car.Wheels == null) throw new ArgumentException("Can't create if cars wheels is null"); _car = car; foreach(var wheel in _car.Wheels) { wheel.Rolling += WheelRollingHandler; } } } I was struggling to describe what the problem is except that it seems wrong that a CarPresenter should attempt to dictate to a Car whether or not its Wheels are initialised correctly. I wondered what pointers people here might give me? Answer: Is a car really a car without wheels? If not, the check should be done at Car construction time and CarPresenter should not have to check for null, it should assume that a good working car is passed to it. (Correction, CarPresenter should check at construction time that Car is not null.) Assuming of course that Car is a class that you control as well therefore you can change it. And while we are talking about smells... You do not have much encapsulation. Wheels on Car should probably be private with public accessor methods if needed. Answer to comment: So would you just handle the event Wheels.Rolling event and accept that if Wheels is null an exception will be thrown No, if you check at construction time and make sure you have working wheels, then you do not need to worry about it later. It is a lot better to bluntly (runtime exception) point out as early as possible if someone made a mistake (passing null for wheels) than waiting for these wheels to blow up who knows where in the code.
{ "domain": "codereview.stackexchange", "id": 433, "tags": "c#, unit-testing" }
How did the Zinder-Lederberg experiment on Transduction work?
Question: In the paper that introduced TRANSDUCTION (J Bacteriol. 1952 Nov;64(5):679-699), Lederberg and his student Zinder reported that S. typhimurium "LT-22 is lysogenic for a virus active on LT-2. This virus is capable of inducing lysogeny in LT-2." I read this line as: LT-22 carries a prophage that, upon induction, can infect LT-2. They then used subsets of these strains: LA-2 (also known as SW-414), which required methionine and histidine; and LA-22, a collection of strains with different metabolic requirements. Zinder and Lederberg commented that Prototrophs appeared in the platings of LA-22 but not of LA-2. Sterile filtrates of LA-2 broth cultures did not elicit prototrophs from LA-22. However, filtrates from mixed cultures of LA-2 and LA-22 elicited about one prototroph per million LA-22 cells. Thus LA-2 produced a filtrable agent (FA), under stimulation from LA-22, that could elicit prototrophs from LA-22. Filtrates of LA-22 cultures, containing substantial amounts of phage (PLT-22) active on LA-2, also stimulated FA production from LA-2. Thus, LA-2 produced a phage that infected LA-22 to produce a prototroph. But if the phage was present in LA-22 (derived from LT-22), how could it infect LA-2 (derived from LT-2)? Is there a detailed explanation of this experiment? Thank you Answer: There certainly are detailed explanations of the experiments. For this answer I'm using a 2016 retrospective in J. Bacteriology, the same journal that published the original paper. However those descriptions will tend to gloss over the points that concern you as described in your comment: The fact that the phage moved from LA-22 to LA-2, took some genes from it and then re-infected LA-22 (which, being lysogenic for the phage, should have been immune to it) Remember, these experiments took place in a very early time for biochemistry. The Hershey-Chase paper that established nucleic acid, not protein, as the carrier of genes was also published in 1952! 
The "how" of transduction wasn't worked out until later. The Filterable Agent responsible for transduction between LT-2 and LT-22 is now known as phage P22. Its prophage tends to be integrated into the host genome. When it switches from lysogenic to lytic phase, it replicates its genome while still integrated into the host. The headful mechanism fills the first capsid with part of the prophage genome and continues through the adjacent bacterial chromosome for up to seven or more successive capsid headfuls. So, a significant fraction (about 2%) of the virus particles produced by lytic P22 contain host bacterial DNA and little to no phage DNA. Why superinfection exclusion (SIE) fails in the LT-22 is an interesting question, but it turns out to not be surprising. P22 has four separate SIE systems, and one, sieA works by binding to and blocking the exit of P22's "ejection proteins", so it can stop transduction by preventing entry of any kind of DNA. However, it looks like this system doesn't perfectly prevent DNA injection. From a 1971 paper on P22 SIE: The frequency of transduction of wild-type lysogens is reduced by a factor of 250, indicating that transducing particles are excluded by P22 prophage So, if the concentration of P22 particles with bacterial DNA is high enough, a few will get through. Incidentally, I found that LT-2 was first isolated in Sweden, and LT-22 in Chile, both in the 1940s.
{ "domain": "biology.stackexchange", "id": 12480, "tags": "genetics, virology, recombination, bacteriophage, transduction" }
Invoking only the last registered event handler
Question: We are building a game and we have a dialog system. Dialogs may open and stack on top of one another. Whenever a dialog opens, it registers itself to handle the BackButtonPressed event: AppHelper.OnBackPressed += HandleBackPressed; When the dialog closes, it unhooks from the event: AppHelper.OnBackPressed -= HandleBackPressed; The problem is, we'd like the event to raise the event handler of the active dialog only (the active dialog is the last one registered to the event). A proposed solution was to: Manually create the add and remove methods for the event Keeping registered handlers in a list. When raising the event, call the last handler in the list Here's an example in code, is this the proper way of doing this? private static List<BackPressed> backPressed = new List<BackPressed>(); public static event BackPressed OnBackPressed { add { backPressed.Add(value); } remove { if(backPressed.Contains(value)) { backPressed.Remove(value); } } } Raising the event: backPressed[backPressed.Count - 1].Invoke(); Answer: Yes, this is a reasonable way to implement the event. Another would be to use the normal event implementation (without explicit add and remove) and get the last subscriber using GetInvocationList(). But I think the fundamental problem here is that what you want doesn't really behave like an event, so it probably shouldn't be an event. What I would probably do is to make the abstraction something like “a collection of open Dialogs”, not “a collection of delegates that are going to be invoked when the back button is pressed”. That way, you can easily access the top-most open Dialog and invoke its HandleBackPressed() method. But you could also use it for other purposes. So this would make your code more flexible, without adding much complexity.
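The suggested "collection of open Dialogs" abstraction can be mocked up quickly. This is a Python sketch of the idea (class and method names like DialogStack are invented for illustration, not from the question's codebase):

```python
class Dialog:
    """Minimal stand-in for a dialog; it just counts back presses."""
    def __init__(self, name):
        self.name = name
        self.back_presses = 0

    def handle_back_pressed(self):
        self.back_presses += 1

class DialogStack:
    """Model 'open dialogs' directly instead of a list of delegates:
    only the top-most (last opened) dialog reacts to the back button."""
    def __init__(self):
        self._open = []

    def open(self, dialog):
        self._open.append(dialog)

    def close(self, dialog):
        self._open.remove(dialog)

    def handle_back_pressed(self):
        if self._open:
            self._open[-1].handle_back_pressed()

stack = DialogStack()
a, b = Dialog("a"), Dialog("b")
stack.open(a)
stack.open(b)
stack.handle_back_pressed()  # only b, the top-most dialog, reacts
stack.close(b)
stack.handle_back_pressed()  # now a is on top and reacts
```

Because the stack owns the open dialogs rather than a delegate list, it can also answer other questions (which dialog is active, how many are open) without touching event plumbing.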
{ "domain": "codereview.stackexchange", "id": 11097, "tags": "c#, .net, event-handling, delegates" }