How do fungi react to being grown in a tissue culture?
Question: I know how plant cells and animal cells react to being cultured and grown this way, but what about mushrooms? Would they respond like plants, and result in embryos growing off the mass? Or does the mass just grow indefinitely, as in animals? Has anyone done this? How do they react when grown this way? Answer: Generally, fungi are cultured on agar with a food source such as malt extract. One way to do this is to clone a mushroom from the fruitbody or from colonized substrate (e.g. rotting wood). Another approach is to germinate spores. The fungus will grow as vegetative mycelium until it runs out of food. At that time, some species will grow tiny fruitbodies or sporulate before entering a more dormant phase or dying out. Unless refrigerated, standard agar plates dehydrate. Also, fungi may take on different textures: wispy, bumpy, root-like (rhizomorphic), leathery, powdery, and may darken or change color as they age. Some cultures start to adapt their metabolism to the media after a large number of transfers (based on observation, and discussed in Stamets' "The Mushroom Cultivator"); at this point, cultures may also start to grow more slowly. The mycelium can be transferred from one plate to another or to another medium (substrate) such as a log, 'fortified sawdust' (sawdust with ~30% w/w wheat bran), or pasteurized compost. With the appropriate nutrition and environmental stimuli (CO2 levels, temperature, moisture, light, etc.), the mycelium will fruit, creating mushrooms - generally after fully colonizing the substrate. Here is a figure of 35 agar plates with 16 different fungal cultures that I isolated from the wild (soil, wood, mushrooms, as described in Allison et al., 2009; LeBauer, 2010) and have transferred from a mature culture (about two months old) to a new plate; the transfers are about two weeks old in this picture.
Parent cultures are in the first and third rows, with children below (they are genetic clones; instead of counting generations, I count the number of transfers). You can also find cultures of fungi (yeast) in unfiltered beer. I have used many techniques to isolate and culture fungi (Isikhuemhen and LeBauer, 2004; LeBauer, 2010), but I would recommend the books "The Mushroom Cultivator" and "Growing Gourmet and Medicinal Mushrooms" by Paul Stamets for more information on the sterile culture techniques used in mushroom cultivation.
{ "domain": "biology.stackexchange", "id": 250, "tags": "mycology" }
No interactive markers displayed whilst following moveit! tutorial ROS Kinetic
Question: I am new to using rviz and moveit and am currently working through the following tutorials on ROS Kinetic: http://docs.ros.org/kinetic/api/moveit_tutorials/html/doc/getting_started/getting_started.html http://docs.ros.org/kinetic/api/moveit_tutorials/html/doc/quickstart_in_rviz/quickstart_in_rviz_tutorial.html I completed the first link; moving on to the second link (quickstart in rviz) and completing steps 1 and 2, I find that the interactive markers are not displaying. I am unsure where the errors lie, considering I have followed the instructions exactly. When examining the terminal which launches the moveit tutorial, I see that I have a warning. I'm not sure how relevant it will be, but I have included it anyway: [ WARN] [1550244405.694068338]: Unable to update multi-DOF joint 'virtual_joint': TF has no common time between '/world' and 'panda_link0': Searching around further, I have found that this issue may have popped up before: https://answers.ros.org/question/278820/moveit-interactive-marker-missing-on-new-kinetic-version/ However, the discussion in that link is beyond me, and it does not appear to contain instructions to fix the problem. I am unsure where to go from here and would appreciate any suggested fixes. Please let me know if more information is required from logs etc. Originally posted by Burarara on ROS Answers with karma: 66 on 2019-02-15 Post score: 3 Original comments Comment by benb35 on 2019-02-20: Having the same issue Comment by kevin29 on 2019-02-22: I have the same problem, please let me know if someone finds a solution! I also get an info message that "Stereo is NOT SUPPORTED" because "OpenGl version: 3 (GLSL 1.3)". This might be because I have a machine with Intel on-board graphics... Do you also get this? Might this be a problem? Answer: Although I did not find a permanent solution to the downloaded demo not displaying interactive markers, I did find an alternative way to work around it.
Simply following the instructions of the next step in the tutorials, the setup assistant tutorial (http://docs.ros.org/kinetic/api/moveit_tutorials/html/doc/setup_assistant/setup_assistant_tutorial.html), will ask you to delete the panda demo folder and replace it with a new panda package that you create yourself. After this I found that interactive markers were displaying on the new panda, so I could go back and explore the previous tutorial. I don't have an ideal fix for this, nor do I have a concrete answer as to why the demo panda does not display them (I suspect it is linked to the end effector of the panda not being properly configured), but hopefully this will allow anyone working with moveit for the first time to continue with the tutorials. Originally posted by Burarara with karma: 66 on 2019-02-22 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Conan on 2019-03-03: Having the same problem, but thank you for the answer since the assistant can repair it. Comment by rebecatourinho on 2019-04-04: This way worked for me too.
{ "domain": "robotics.stackexchange", "id": 32474, "tags": "rviz, moveit, ros-kinetic, ubuntu, interactive-markers" }
How can the temperature in a column of gas be uniform, if kinetic energy increases when gas molecules go down?
Question: I cannot wrap my head around a supposed temperature gradient versus total energy gradient paradox for the thermodynamic equilibrium of (open space) gas in a gravitational field. For simplicity, consider an ideal monoatomic gas like ideal argon. There is supposed to be zero temperature gradient in equilibrium. But how to deal with the constant total energy and altitude-dependent kinetic energy of molecules between collisions with nonzero vertical velocity projection? $$\frac{\mathrm{d}(E_\mathrm{k} + E_\mathrm{p})}{\mathrm{d}h} = \frac{\mathrm{d}(\frac 12 mv^2 + mgh)}{\mathrm{d}h} = 0$$ With reversible exchange of $E_\mathrm{k}$ and $E_\mathrm{p}$, and for the statistical means, it should be like $$\frac{\mathrm{d}(\frac 32k_\mathrm{B}T + mgh)}{\mathrm{d}h}=0$$ In the case of zero temperature gradient, how come descending molecules do not convert their potential energy to kinetic energy and inject thermal energy into lower layers (and vice versa)? How come it does not cause a temperature gradient until the total mean molecular energy gradient is zero? I feel I am missing something, and that the density gradient somehow compensates the effect of the molecular energy gradient at zero temperature gradient, but I do not see how. A detailed kinetic theory analysis is very probably above my abilities. I have discussed it in chats on both the CH and PH SE sites at: CH SE: density-gradient-vs-entropy-of-mixing chat discussion-between-poutnik-and-theorist and PH SE: what-is-the-reason-of-dt-dh-0-in-the-gas-column chat discussion-between-poutnik-and-giorgiop I have also searched site:stackexchange.com for related Q/A about gas equilibrium and gravitational field, but I have not found a topic addressing it, unless I have missed it. PH SE: in-a-gravitational-field-will-the-temperature-of-an-ideal-gas-will-be-lower-at considers the Earth's atmosphere, which is not at equilibrium (I have a meteorological background from my days as an enlisted airfield meteorologist, so I am aware of the dry-adiabatic gradient 0.0098 K/m).
Answer: In equilibrium, there's no temperature gradient, no kinetic energy gradient, and no heat transfer. But like most results in kinetic theory, it's unintuitive unless you follow what each particle does in detail. First, let's explain why there's no kinetic energy gradient. Think about the particles that start low and end up high. Since it costs energy to go up, doesn't that mean that the particles that end up high should be moving slower? No, because particles that were originally moving slowly don't have enough energy to get up high in the first place. The only particles that get high are those that got an unusually high kinetic energy through a lucky collision. As they go up, they lose that extra kinetic energy to potential energy, arriving at the top with the typical amount of kinetic energy. (Of course, in reality there's some distribution of kinetic energies, but this logic holds for each part of the distribution. Suppose you had some mix of particles with kinetic energy $0$, $1$, $2$, $3$, ... at the bottom. The particles with kinetic energy $0$ don't make it up. The particles with kinetic energy $1$ arrive with kinetic energy $0$. If you work through it quantitatively, you end up with exactly the same distribution.) Second, let's explain why there's no heat flow. The point is that the density at each level stays the same in equilibrium. The particles falling from the "high" level to the "low" level pick up a lot of kinetic energy, so they arrive at the "low" level with more kinetic energy than most of the particles already there. But at the same time, particles are leaving the "low" level to go up to the "high" level, and as we just argued, the only particles that can do this are the most energetic ones. So in equilibrium, you predominantly have particles with unusually high total energy going in each direction, but since the flow of particles balances, there is no net heat flow from up to down.
By the way, as you suspected, the existence of a density gradient is essential to maintain equilibrium. That's because all of the particles at the high level can fall to the low level, but only the highest energy particles at the low level can go up to the high level. For the rates to balance, there need to be more particles at the low level, which is precisely what happens in equilibrium.
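The answer's argument can be checked numerically. Below is a minimal Monte Carlo sketch (the values of kT and mgh, and the use of the flux-weighted exponential distribution for the vertical kinetic energy of molecules crossing a plane, are modeling assumptions, not from the question): molecules crossing the bottom plane with vertical kinetic energy above mgh reach height h, and because the exponential distribution is memoryless, their mean kinetic energy on arrival is unchanged — only their number is reduced by the Boltzmann factor.

```python
import random

random.seed(0)
kT = 1.0      # temperature at ground level, in energy units (assumption)
gh = 0.7      # potential energy cost m*g*h of climbing to height h (assumption)
N = 200_000

# For a Maxwellian gas, the vertical kinetic energy of upward-moving molecules
# crossing a plane is exponentially distributed with mean kT (flux weighting).
arrivals = []
for _ in range(N):
    ke = random.expovariate(1.0 / kT)   # kinetic energy at the bottom
    if ke > gh:                          # only fast molecules reach height h
        arrivals.append(ke - gh)         # they arrive with the surplus

frac = len(arrivals) / N
mean_ke = sum(arrivals) / len(arrivals)
print(f"fraction reaching h: {frac:.3f}")    # ~ exp(-gh/kT): the density gradient
print(f"mean KE at h:        {mean_ke:.3f}") # ~ kT: same temperature as below
```

The fraction reaching height h reproduces the barometric density gradient, while the mean kinetic energy there matches the ground value — exactly the compensation the question suspects.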
{ "domain": "physics.stackexchange", "id": 95973, "tags": "thermodynamics, statistical-mechanics, kinetic-theory" }
A daemon for sending poems to clients based on KISS
Question: My code is about sending a random poem from /etc/poem.conf to clients using TCP sockets. In this implementation my daemon has a restart mechanism using the SIGHUP signal and a DEBUG mechanism enabled by defining a DEBUG macro during compilation. My goals: Simpler code Less code Cleaner code

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <signal.h>
#include <syslog.h>
#include <setjmp.h>
#include <time.h>
#include <errno.h>
#include <err.h>
#include <sys/socket.h>
#include <arpa/inet.h>

#ifndef DEBUG
#define perror(msg) syslog(LOG_ERR, "%s: %s", msg, strerror(errno))
#define err(status, msg) perror(msg), _exit(status)
#endif

sigjmp_buf jmp;

void sighub (__attribute__ ((unused)) int signo)
{
    siglongjmp (jmp, 1);
}

int main()
{
    int sfd, cfd;
    char poem[BUFSIZ];
    struct sockaddr_in sa;
    FILE *fpoem = NULL;

#ifndef DEBUG
    if (daemon (0, 0))
        err (1, "daemon");
#endif
    if (signal (SIGHUP, sighub))
        err (1, "signal");

    if ((sfd = socket (AF_INET, SOCK_STREAM, IPPROTO_TCP)) == -1)
        err (1, "socket");

    sa.sin_family = AF_INET;
    sa.sin_addr.s_addr = htonl (INADDR_ANY);
    sa.sin_port = htons (1073);

    if (bind (sfd, (struct sockaddr *)&sa, sizeof (sa)))
        err (1, "bind");
    if (listen (sfd, 10))
        err (1, "listen");

    if (sigsetjmp (jmp, 1))
        fclose (fpoem), close (cfd);

    if (! (fpoem = fopen ("/etc/poem.conf", "r")))
        err (1, "/etc/poem.conf");

    for (;;) {
        if ((cfd = accept (sfd, NULL, NULL)) == -1)
            perror ("accept");
        srand (time (NULL) + rand());
        fseek (fpoem, 0, SEEK_END);
        fseek (fpoem, rand() % ftell (fpoem), SEEK_SET);
        while (ftell (fpoem) > 0 && fgetc (fpoem) != '\n')
            fseek (fpoem, -2, SEEK_CUR);
        fgets (poem, BUFSIZ, fpoem);
        write (cfd, poem, strlen (poem));
        close (cfd);
    }
}

Answer: Make use of your operating system's facilities A lot of the complexity in your code comes from wanting to call daemon() yourself, and then also wanting a way to run your code in the foreground.
I strongly recommend that you just design your code as a foreground-running process, and then use whatever facilities your operating system provides for running it in the background if so desired. Consider that most init systems allow you to start and manage a process running in the background, and will do a better job at this than you can from within your program. With systemd, you can create a .service file that will start your program automatically at boot, redirect all output to logfiles, restart your program either manually or automatically when it crashes, synchronize with other services so it starts at the right time, and so on. You can go further than that and even offload the TCP socket handling to the init system. Again, systemd has features for that, but even with the venerable SysV init system you have inetd. Of course, you can go even further; if you consider coreutils to be part of the operating system, you can just replace your program with the command shuf -n1 /etc/poem.conf. This removes all code, so it perfectly fulfills your three goals. Consider binding to an IPv6 socket On many operating systems it is possible to create a single IPv6 socket that listens on both IPv4 and IPv6. On some this happens automatically; on others you might have to clear the IPV6_V6ONLY socket option:

int option = 0;
setsockopt(sfd, IPPROTO_IPV6, IPV6_V6ONLY, &option, sizeof option);

Unnecessary calls to fseek() The first call to fseek() is only necessary once, to get the length of the file, so it should be moved out of the for-loop. Also consider not scanning backwards for a newline, but scanning forwards instead; this avoids the third call to fseek(). This also brings me to: Efficiently getting to the start of a new line If you scan forward for a newline, then you don't need to read character by character in a while-loop; you can just call fgets() to discard one partial line, and then a second call will get you a complete line.
Of course, you might have an issue when you first seek into the middle of the last line in the file. In that case, consider wrapping to the start of the file and returning the first line:

fseek(fpoem, 0, SEEK_END);
long size = ftell(fpoem);

for (;;) {
    ...
    fseek(fpoem, rand() % size, SEEK_SET); // seek to random position
    fgets(poem, BUFSIZ, fpoem);            // discard partial line
    fgets(poem, BUFSIZ, fpoem);            // read full line
    if (feof(fpoem)) {
        fseek(fpoem, 0, SEEK_SET);         // seek to start
        fgets(poem, BUFSIZ, fpoem);        // read first line
    }
    ...
}

Of course, you could also just read the poem into an array of strings at the beginning of your program; this would avoid the overhead later on. Missing error handling You are checking for errors for most things, except when reading a random line and sending it to the peer. Consider that all calls to fseek(), fgetc(), fgets() and write() can fail, either because of a permanent error or because of something like EINTR. Possibility of partial reads and writes What if your poem contains a line equal to or longer than BUFSIZ? In that case, fgets() will still succeed, but read only the first BUFSIZ - 1 characters. Check if the last character in the string is a newline to verify that a whole line was read. A call to write() might write less than you told it to. This is something you should handle by checking the return value, and in case of a partial write, retrying to send the remaining part, and so on in a loop until everything has been sent or a permanent failure happens. Alternatively, consider using fdopen() to get a FILE * handle for the socket, so you can just call fputs() and not worry about these details. Beware of corner cases What if /etc/poem.conf exists but is empty? What if the last line doesn't contain a newline? What if an I/O error happens in the while-loop? What if SIGHUP is sent right after sigsetjmp() but before you even opened fpoem (and cfd is still uninitialized)?
Make sure you think about corner cases, and address all of them.
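The seek-discard-wrap technique above generalizes beyond C. Here is a minimal Python sketch of the same idea (the file name and its contents are hypothetical, purely for illustration): seek to a random byte, throw away the probably-partial line, read the next complete one, and wrap to the start if we landed inside the last line.

```python
import os
import random
import tempfile

random.seed(1)

# Hypothetical poem file: one poem per line (a stand-in for /etc/poem.conf).
lines = ["roses are red\n", "so long and thanks\n", "shall I compare thee\n"]
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "w") as f:
    f.writelines(lines)

def random_line(path):
    size = os.path.getsize(path)        # computed once, like ftell() above
    with open(path, "r") as f:
        f.seek(random.randrange(size))  # jump to a random byte
        f.readline()                    # discard the (probably partial) line
        line = f.readline()             # read the next complete line
        if not line:                    # we landed inside the last line:
            f.seek(0)                   # wrap around to the start
            line = f.readline()         # and return the first line
        return line

picked = random_line(path)
print(picked, end="")
os.remove(path)
```

Note this shares the original's mild bias: lines following long lines are picked more often, since more random byte offsets land in front of them. Loading all lines once at startup avoids both the bias and the per-request I/O.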
{ "domain": "codereview.stackexchange", "id": 44075, "tags": "c, socket, signal-handling" }
When does the sensitivity of a tangent galvanometer approach a maximum value?
Question: The image shows the approach of a few saying that the sensitivity is maximum if the deflection approaches zero. The white portion is a snap of a reference book which says the sensitivity is maximum if the deflection approaches $45^\circ$. Which is right? Kindly point out the mistake. Answer: The discrepancy arises because two different definitions are being used. Your book is defining sensitivity as change in deflection per unit fractional change in current. You are defining it as change in deflection per unit change in current. I've never come across the first of these. That's not to say that it's outright wrong, but the reader would always have to be told that 'sensitivity' was being used in this sense. I think that most of us would use the second definition and say that the sensitivity is greatest for small currents, when $\theta \approx 0.$
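For reference, both results follow directly from the tangent-galvanometer relation $I = k\tan\theta$ (writing $k$ for the reduction factor). With sensitivity defined as deflection per unit change in current:

```latex
\frac{\mathrm{d}\theta}{\mathrm{d}I} = \frac{\cos^2\theta}{k},
\quad \text{greatest as } \theta \to 0.
```

With sensitivity defined as deflection per unit fractional change in current:

```latex
\frac{\mathrm{d}\theta}{\mathrm{d}I/I}
= I\,\frac{\mathrm{d}\theta}{\mathrm{d}I}
= k\tan\theta \cdot \frac{\cos^2\theta}{k}
= \sin\theta\cos\theta
= \tfrac{1}{2}\sin 2\theta,
\quad \text{greatest at } \theta = 45^\circ.
```

So the two answers are both internally consistent; they simply maximize different quantities.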
{ "domain": "physics.stackexchange", "id": 70347, "tags": "instrument" }
Representation of nonabelian Wilson line in terms of fermionic fields
Question: Context: The coupling action of a particle of charge $q$ to a $U(1)$ gauge field is given by \begin{equation} S = q \int d \tau A_\mu \left( X \right) \frac{dX^\mu(\tau)}{d \tau} = -i \ln W_q, \tag{1} \end{equation} where \begin{equation} W^{\text{abelian}}_q = \exp{ \left(iq \int A_\mu dX^\mu\right) } \end{equation} is the Wilson line, the integral being over the trajectory. For a particle charged under a nonabelian gauge field, it seems reasonable to try to find the coupling by considering the nonabelian version of the Wilson line: \begin{equation} W^{\text{nonabelian}} = \text{Tr} \, \mathcal{P} \exp{ \left(i \int A_\mu dX^\mu\right) } = \text{Tr} \, \prod_{\tau=\tau_i}^{\tau_f} \left( 1 + i \,d \tau A_\mu \left( X \right) \frac{dX^\mu(\tau)}{d \tau}\right). \end{equation} My reason for considering this is that such a coupling should appear between the endpoints of open strings to the nonabelian gauge field one finds in the massless spectrum in the presence of D-branes, but nothing in this question depends on the details of string theory. The question: How should one deal with the path-ordering $\mathcal{P}$ in $W^{\text{nonabelian}}$ in order to rewrite this as a simple integral like (1)? The paper Particles with non abelian charges suggests the form \begin{equation} S_{\text{NA}} = \int d\tau \left( \bar{c}^\alpha \frac{dc_\alpha}{d \tau} -i A^a_\mu (X) \frac{d X^\mu}{d \tau} \bar{c}^\alpha \left( T^a \right)_\alpha^{\phantom{a} \beta} c_\beta \right), \end{equation} where $c_\alpha$ and $\bar{c}^\alpha$ are fermionic fields transforming respectively in the fundamental and anti-fundamental representations of $SU(N)$, under which the particle is charged, and satisfying a Dirac algebra with respect to these indices. The $\left( T^a \right)_\alpha^{\phantom{a} \beta}$ are $SU(N)$ generators in the chosen representation.
Integrating out these fermions in the generating functional should give \begin{equation} \int \mathcal{D} c \mathcal{D}\bar{c} e^{i S_{\text{NA}}} \sim \det \left( \delta^\beta_\alpha \frac{d}{d \tau} -i A^a_\mu (X) \frac{d X^\mu}{d \tau} \left( T^a \right)_\alpha^{\phantom{a} \beta} \right) = e^{iS_{\text{coupling}}}, \end{equation} but I was not able to check this, due to the complicated nature of the operator inside the determinant. Answer: The evaluation of determinants like this one can be found in various references, including, for example, https://arxiv.org/abs/1411.6540 https://inspirehep.net/literature/1324409 https://inspirehep.net/literature/1475719 The idea is to work in the Cartan subalgebra of the gauge group where everything is nice and diagonal, and then express the result in terms of group invariants. Ultimately this reproduces the non-Abelian Wilson line you started with (in a chosen representation), so the real advantage of this first quantised representation would be to find an alternative, more convenient way of evaluating the one-dimensional path integral, perhaps in perturbation theory.
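To make the path-ordering in the question explicit: expanding $\mathcal{P}\exp$ order by order gives nested integrals with later parameter values ordered to the left,

```latex
\mathcal{P} \exp\left( i \int_{\tau_i}^{\tau_f} d\tau\, A(\tau) \right)
= 1
+ i \int_{\tau_i}^{\tau_f} d\tau_1\, A(\tau_1)
- \int_{\tau_i}^{\tau_f} d\tau_1 \int_{\tau_i}^{\tau_1} d\tau_2\, A(\tau_1)\, A(\tau_2)
+ \dots,
\qquad
A(\tau) \equiv A_\mu\!\left(X(\tau)\right) \frac{dX^\mu}{d\tau}.
```

The point of the fermionic representation is that the free propagator of $c_\alpha$ from the first-order kinetic term $\bar{c}\,\dot{c}$ is proportional to the step function $\theta(\tau_1 - \tau_2)$, so Wick contractions of the $\bar{c}\,T^a c$ vertices automatically generate exactly these ordered nested integrals.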
{ "domain": "physics.stackexchange", "id": 98429, "tags": "quantum-field-theory, string-theory, gauge-theory, wilson-loop" }
How does a wheelie work?
Question: So I've been trying to create a mathematical model for an electric motorcycle and began to wonder about the maximum possible torque that could be supplied to the rear driven wheel without having the bike begin to lift up on one wheel. I found ways to calculate this value online; however, the basic concept as to how the bike actually lifts escapes me. My problem started when I began to think about what axis the bike will rotate about when it is doing the wheelie. My first intuition was that the frame and front wheel, together, rotate about the rear axle. But when I drew a free body diagram of the frame/front wheel system just at liftoff (see below), I noted that the only force acting about the rear axle, point O, is the weight. This means that an increase in applied torque and, subsequently, the applied force Fa, should not affect the rotation about the rear axle. I know that the applied force on the back wheel is indeed correlated with the propensity of a bike to wheelie, so I considered that the axis of rotation I was choosing was wrong. If we take the free body diagram above and sum the moments about the center of mass, we find that an increased applied force would in fact cause the solid body to rotate. The problem with this understanding is that during rotation, the center of mass of the drawn system would actually rise relative to the surface the bike is moving across. If the bike were truly rotating about its center of mass, then the back wheel would begin to dip below the surface of the road, like you might see in a glitchy video game. So I suspect that the bike is in fact rotating about the back axle, but I don't understand why, please help!
edit: I added the external torque from the back wheel to the frame, which would allow the bike to rotate about the back axle edit 2: I suppose that the torque acting on the frame via the engine should not affect the rotation, as the movement of a motorcycle can be perfectly replicated by applying a force at the back axle. A third possible axis of rotation might be the lowest point of the back wheel. Answer: If we take the free body diagram above and sum the moments about the center of mass, we would find that an increased applied force would in fact cause the solid body to rotate. Perhaps. Or additional forces can appear. If I push up on my car's bumper, a rotational force is being applied. But the normal force on the wheel farther from me increases, so the total torque still sums to zero. The problem with this understanding is that during rotation, the center of mass of the drawn system would actually rise relative to the surface the bike is moving across. If the bike were truly rotating about its center of mass, then the back wheel would begin to dip below the surface of the road like you might see in a glitchy video game. Another way to interpret this is that if you apply a small torque and imagine it around the center of mass, you're pushing the rear wheel into the ground. As you do so, the normal force increases. This increased normal force counters the torque you are applying. But the maximum this can be is the weight of the bike. So if you increase past this maximum, the bike will rotate. I had originally tried to analyze this from the rear axle, but because the bike will accelerate, this makes fictitious forces appear in the axle's frame that have to be dealt with. We can mostly ignore this by analyzing around the center of mass instead. I'll ignore friction for now, and just assume that we have sufficient friction to avoid wheel slip. Then the forces we need to consider are the weight, the normal forces, and the frictional force from the road.
As the wheel accelerates faster it provides a torque to the bike. The bike responds by changing the balance of the normal forces. At the limit, only the rear wheel is providing a normal force, and that will equal the weight of the bike. Since gravity acts through the center of mass, the only torques that appear are from the normal force and the frictional force. When the torque from friction exceeds the torque from the normal force, the bike will tip. $$\tau_{friction} > \tau_{Normal}$$ $$F_f \times y > F_N \times x$$ $$F_f > \frac{mgx}{y}$$ To tip the bike, the wheel has to push with a force greater than the weight of the vehicle times a factor that depends on the location of the center of mass. And since we have the forward force, we can solve for the forward acceleration and know that as it begins to tip, the bike will be accelerating at $\frac{x}{y}g$. And then the bit that I think began your question: ...the only force acting about the rear axle, point O, is the weight. This means that an increase in applied torque and, subsequently, the applied force Fa, should not affect the rotation about the rear axle. In your initial diagram, you were neglecting one additional force, and that is the fictitious force due to the acceleration of the frame. This force is equal to $ma$ and is applied at the center of mass in the direction opposite the acceleration of the bike. As the acceleration is due to this force, it means it does affect the rotation, even though it wasn't obvious when you started summing torques.
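The tipping condition above is easy to evaluate numerically. Here is a short sketch (the mass and center-of-mass geometry are made-up numbers for illustration, not from the question):

```python
# Wheelie threshold from the torque balance F_f > m*g*x/y,
# for an assumed (hypothetical) motorcycle geometry.
g = 9.81   # m/s^2
m = 250.0  # kg, bike + rider (assumption)
x = 0.7    # m, horizontal distance from rear contact patch to center of mass (assumption)
y = 0.6    # m, height of the center of mass above the road (assumption)

F_tip = m * g * x / y  # friction force at the rear tire needed to lift the front
a_tip = (x / y) * g    # corresponding forward acceleration, independent of mass

print(f"tipping force:        {F_tip:.0f} N")
print(f"tipping acceleration: {a_tip:.2f} m/s^2")
```

Note that the threshold acceleration depends only on the geometry ratio x/y, which is why lowering the center of mass (larger x/y is harder to reach) makes wheelies easier to provoke on bikes with a high, rearward center of mass.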
{ "domain": "physics.stackexchange", "id": 95660, "tags": "newtonian-mechanics, rotational-dynamics, torque, inertia, moment" }
How is this classical "paradox" resolved in electromagnetism?
Question: A magnet and a coil move relative to each other. In the frame of reference of the magnet, there is a magnetic field and consequently a force acting on the charges in the coil according to the Lorentz force $F=qv\times B$, but there is no net electric field. In the frame of reference of the coil, there is a magnetic field and also an electric field, induced by the magnet, $E'$, that moves the charges in the coil, producing a current. But, in the first case no work is done on the charges, since the force is perpendicular to the velocity. In the second case, the force $qE'$ does work on the charges. How is this "paradox" resolved in classical electromagnetism? Answer: When the magnet is moving, the electric field of the magnet is doing the work, pushing the current carriers around the wire. When the magnet is still and the wire is moving, the magnetic field produces a force on the current carriers, but this force does no work; it is the constraint force that keeps the electrons in the wire that is doing the work. The paradox is resolved by noting that the wire is moving, so the constraint is not time-independent. The constraint force is perpendicular to the surface of the wire, pushing on the charge carriers in the direction of motion (because the whole thing is moving). This force is doing the work on the charge carriers in this frame (although it is somewhat strange to think of a constraint force doing work). The push of the current carriers against the wire's constraint force gives the braking force on the wire, which slows it down so as to conserve energy, as the resistance gives off heat.
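The same resolution can be phrased via the field transformation between the two frames. To lowest order in $v/c$, the fields seen in the coil's rest frame are

```latex
\mathbf{E}' \approx \mathbf{E} + \mathbf{v} \times \mathbf{B},
\qquad
\mathbf{B}' \approx \mathbf{B},
```

so with $\mathbf{E} = 0$ in the magnet's frame, the induced field $\mathbf{E}' = \mathbf{v} \times \mathbf{B}$ in the coil's frame exerts exactly the force that appears as the magnetic part of the Lorentz force $q\mathbf{v} \times \mathbf{B}$ in the magnet's frame. Which field "does the work" is frame-dependent, but the current and the dissipated energy are the same in both descriptions.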
{ "domain": "physics.stackexchange", "id": 2215, "tags": "electromagnetism, special-relativity, relative-motion, maxwell-equations" }
robot_localization and gmapping - how should the transform be done?
Question: We are trying to use robot_localization and gmapping. What we have done so far: we use gmapping to publish the transform between map -> odom. Then we tried to fuse gps, imu and odom using the ekf from robot_localization. We use navsat_transform and one ekf node to fuse all the data. This node does the odom -> base_link transform. This setup does not work very well. Here: https://roscon.ros.org/2015/presentations/robot_localization.pdf (page 4) there is a picture: localization and navigation http://slideplayer.com/slide/8827261/26/images/4/robot_localization+and+the+ROS+Navigation+Stack.jpg This indicates that I should fuse the gmapping and navsat_transform results in one localization node. Should this be done? How should it be done? Is there an example of it? Originally posted by Ago on ROS Answers with karma: 23 on 2017-08-22 Post score: 1 Original comments Comment by juanlu on 2017-08-23: Can you tell us how you are configuring gmapping and robot_localization? Specifically the parameters base_frame and odom_frame in gmapping and the parameter base_link_frame in robot_localization. Also, have you tried other localization packages, like robot_pose_ekf for example? Comment by Humpelstilzchen on 2017-08-23: As explained in the presentation video: The upper is an either-or relation. So you fuse either gmapping or amcl or navsat_transform. This is because all these nodes provide an absolute position. Comment by Ago on 2017-08-23: Haven't tried robot_pose_ekf. base frame is base_footprint, odom frame is odom. In robot_localization, base_link_frame is base_footprint. About Humpelstilzchen's comment, how could I use laser and gps together? Comment by Deep on 2017-10-14: I am having the same issue. Any help please. Comment by Robbe_C on 2018-04-02: Hey, Ago and Deep, did you get your setup to work? Answer: That's a pretty complex configuration! You're not fusing the gmapping output into the EKF in some way, are you?
This is also probably not going to play well with loop closure, unless you have a GPS with high accuracy. In any case, it's almost impossible to determine what's not working in your setup without at least your launch files and sample input messages. Originally posted by Tom Moore with karma: 13689 on 2017-10-03 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Robbe_C on 2018-04-08: Hello Tom Moore How would you handle this problem if you had gmapping, imu and GPS? Is it possible to use gmapping with a GPS, or do you have to choose between them? Comment by Tom Moore on 2018-04-09: The problem is that your map frame will constantly be shifting any time gmapping closes a loop, so the navsat_transform_node transform to your world frame will be constantly invalidated. Once you have a map, then you can use GPS data, I suppose, but only after you localize in the map. Comment by Robbe_C on 2018-04-09: Ok, thanks for the reply! If I understand it correctly, it's not a good idea to use gps in combination with gmapping. Is it a better idea to use a combination of gmapping and imu if the laser scanner data is available, and use gps in combination with imu if it's not available? Comment by Tom Moore on 2018-04-12: The answer to this is far too long to put in a comment, I'm afraid. Please ask a separate question. Thanks. Comment by Robbe_C on 2018-04-12: Ok, I have posted my question: https://answers.ros.org/question/288429/how-to-set-up-robot_localization-with-gmapping-imu-and-gps/
{ "domain": "robotics.stackexchange", "id": 28681, "tags": "ros, navigation, gmapping, navsat-transform, robot-localization" }
How exactly does matrix factorization help with collaborative filtering
Question: We start with a matrix of user ratings for different movies, with some elements unknown, i.e. the rating of a yet-to-be-seen movie by a user. We need to fill in this gap. So how can you decompose or factorize a matrix where all elements are not known in advance? After you somehow decompose it, do you just multiply the matrices back to recreate the original one, but now with the missing items populated? How do you know which factorization method (non-negative, singular value, eigen) to choose, without going into too much math? Answer: Answers to your question You factorize the matrix in order to approximate the original one as closely as possible. This is generally done by starting with randomized values, and updating based on the error (between the product of the factors and the original matrix). In other words, for a given matrix A, you are trying to find matrices C & D such that Error(A - (C x D)) is lowest. The algorithm is designed to find an approximation, which might result in the original missing entries being replaced by new values (recommendations or ratings). Do you just multiply? Yes. That is the essence of the calculation. For every user and product, multiplication gives you a rating or score. Sorting by score and picking the index, you get a recommendation for each user. It also allows you to store much smaller matrices than the original one. The choice of factorization will also be dictated by the application. If your application has only positive ratings, then it is better to use non-negative matrix factorization. You may start using matrix factorization methods without knowing the implementation, as long as you are aware of the overall idea (and the pitfalls in using it). Further Comments It is a little perplexing that you start with a matrix with many missing entries (unseen items), approximate the matrix via factorization and expect to get non-missing entries (which help in doing predictions).
If the task is to approximate the original entries, then the recommendations won't be good, since you can have missing/zero entries in the approximation and still get the lowest error on the approximation task. The idea: the regularization imposed in the algorithm (point 1 above) ensures that noise in the original data is filtered out and only patterns are detected. But this idea has its fair share of critics. The introduction section of this paper by Steffen Rendle gives a readable account of what is happening in simple matrix factorization methods and what can go wrong. The paper also re-formulates the task with better optimization criteria. You can also read this post by Simon Funk, which explains the mechanics in readable language (and code) for matrix factorization applied to recommendations.
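The randomized-start, update-from-error loop described in the answer can be sketched with plain NumPy. This is only an illustrative sketch of Funk-style SGD matrix factorization, not any particular library's API; the rank, learning rate, regularization strength, and ratings below are made-up values.

```python
import numpy as np

def factorize(R, observed, k=2, lr=0.01, reg=0.01, epochs=5000, seed=0):
    """Find C (users x k) and D (k x items) so that C @ D approximates R
    on the observed entries only, by stochastic gradient descent."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    C = rng.normal(scale=0.1, size=(n_users, k))
    D = rng.normal(scale=0.1, size=(k, n_items))
    users, items = np.nonzero(observed)
    for _ in range(epochs):
        for u, i in zip(users, items):
            err = R[u, i] - C[u] @ D[:, i]
            Cu = C[u].copy()  # use the pre-update value in both gradient steps
            C[u] += lr * (err * D[:, i] - reg * C[u])
            D[:, i] += lr * (err * Cu - reg * D[:, i])
    return C @ D  # dense product: the formerly missing entries are now predictions

# 0 marks "not yet rated"; made-up ratings on a 1-5 scale.
R = np.array([[5.0, 3.0, 0.0],
              [4.0, 0.0, 1.0],
              [0.0, 1.0, 5.0]])
pred = factorize(R, observed=(R > 0))
```

After training, `pred` reproduces the observed ratings closely while the zero cells now hold predicted scores, which is exactly the "multiply the factors back" step the question asks about.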
{ "domain": "datascience.stackexchange", "id": 2500, "tags": "recommender-system, matrix-factorisation" }
Simple but important function to check and retry given action's result
Question: What I'm trying to achieve is Given a predicate function and an action which returns the object to check, check predicate, retry and fail after numTries .. or success . Purpose of this function is : During constant data collection from network resources ( web, apis, .. ) return of network operations not successful every time ( because of bad servers, network problems , etc.. ). If you need that data you should retry, and if you do this kind of ops. a lot, you should have a function to handle this. I'm looking for are there better ways to do this. public static async Task<bool> CheckAndRetryAsync<T> ( T _object, //object to check state Func<T, bool> predicate, Func<Task<T>> action, // object creation func int sleep = 150, // sleep ms. between tries int numTries = 6, // .. bool rethrowExceptions = false, // should rethrow exc. during object creation ManualResetEvent mr = null ) { int tries = 0; bool success = false; while (true) { try { _object = await action(); success = predicate(_object); if (!success && tries < numTries) { tries++; await Task.Delay(sleep); } else { //goto exit; break; } } catch (Exception e) { if (rethrowExceptions) throw; if (tries < numTries){ tries++; await Task.Delay(sleep); } else{ goto exit; } } } exit: if(mr != null) mr.Set(); return success; } Tests for function : public class StaticUtilFuncTests { ManualResetEvent mr = new ManualResetEvent(false); [Fact] public async Task T_010_CheckAndRetryAsync() { int tries = 1; mr.Reset(); Stopwatch sw = new Stopwatch(); sw.Start(); TObj ttx = null; bool succsess = await CheckAndRetryAsync<TObj>(ttx, (o) => (o != null && o.Value != ""), async () => { if (tries <= 4) { ttx = new TObj(); tries++; } else { ttx = new TObj("sdf"); } mr.Set(); return ttx; }, numTries: 5, sleep: 400 ); mr.WaitOne(); long elapsed = sw.ElapsedMilliseconds; Assert.True(elapsed >= 1600l); // numtries * sleep Assert.True(succsess); Assert.True(tries == 5); } [Fact] public async Task T_011_CheckAndRetryAsync() { 
mr.Reset(); Stopwatch sw = new Stopwatch(); sw.Start(); TObj ttx = null; bool succsess = await CheckAndRetryAsync<TObj>( ttx, (o) => (o != null && o.Value != ""), async () => { ttx = new TObj(""); mr.Set(); return ttx; }, numTries: 3, sleep: 1100 ); mr.WaitOne(); long elapsed = sw.ElapsedMilliseconds; Assert.True(elapsed >= 3300l); // numtries * sleep Assert.False(succsess); } [Fact] public async Task T_012_CheckAndRetryAsync() { mr.Reset(); //Mock<TObj> to = new Mock<TObj>(); //to.Setup( // m => m.) Stopwatch sw = new Stopwatch(); sw.Start(); TObj ttx = null; bool succsess = await CheckAndRetryAsync<TObj>( ttx, (o) => (o != null && o.Value != ""), async () => { ttx = new TObj("",true); //mr.Set(); return ttx; }, numTries: 30, sleep: 50, mr: mr ); mr.WaitOne(); long elapsed = sw.ElapsedMilliseconds; Assert.True(elapsed >= 1500l); // numtries * sleep Assert.False(succsess); } } public class TObj { public string Value; public TObj(string value = "" , bool testing = false) { if(value == "" && testing) throw new Exception(); Value = value; } } } A real usage example : BittrexBtcTicker btcTicker = null; success |= await CheckAndRetryAsync<BittrexBtcTicker>( btcTicker, // object we need to get ( also save db ) (o) => (o != null && o.Value > 0), // predicate not null, and bigger than 0 async () => { mr.Reset(); // a noise but needed during async ops. btcTicker = await BittrexClient .GetTicker(saveDb: true, updateDb: updateDb, _context: context, mr: mr); // call to object creation.. mr.WaitOne(); // wait to op. complete return btcTicker; } ); Answer: Design I see you fixed the bug where the method would get stuck in an infinite loop when action throws an exception, but note how similar the normal and the exception paths are now. The method can be simplified by incrementing tries before invoking the action and by awaiting the delay after (outside) the try-catch. Try to keep code DRY (Don't Repeat Yourself). 
More simplifications include using a proper while condition: while (tries < numTries), to make the intent of the loop more clear, and the success check can be simplified to if (predicate(result)) break;. There's no need for goto here - break is sufficient (and easier to understand). The T _object parameter is useless. Currently, the only way to obtain a result is to use a closure. I assume you meant to make this a ref parameter? If so, I would rename it to TResult result. Alternately, you could return a (bool success, TResult result) value-tuple, which can be used like var (success, result) = await CheckAndRetryAsync(...); You may want to add ConfigureAwait(false) to your awaits, unless resuming in the same context is important. Are you sure you want to ignore exceptions by default? Including the last try? No logging even? What's the point of passing a ManualResetEvent into this method? Why not just do that in awaiting code: await CheckAndRetryAsync(...); mr.Set();? You may want to enforce that sleep is not negative. Tests The test method names do not describe what they're testing, and none of the Assert statements contain a descriptive error message. That's not good for future maintenance. There's a lot of code duplication in the tests. You may want to refactor it into a data-driven approach, or write a utility method for the repetitive parts. What's the purpose of that ManualResetEvent? Why is it shared between all tests? You may want to add a comment explaining why it's there. Throwing an exception from a constructor (TObj) instead of throwing it directly inside a test method is making the tests harder to understand. You may want to add a timeout to your tests, to ensure that infinite loops and other such problems will get caught. Instead of comparing elapsed against a magic number, I'd create local variables for the number of tries and timeout, so the minimum elapsed time can be calculated from them. 
In T_010, tries will always end up being 5, even if action gets called more than 5 times. There's no test that checks whether exceptions are rethrown. Style/readability All parameters are on a line of their own, except the two most important ones, so at first sight it looks like there's only one Func parameter. That's confusing! Personally I'd put action before predicate, to match execution order (you first need a result before you can check it, after all). Add some whitespace between methods and between blocks of code inside methods to improve readability. The l prefix looks a lot like a 1 - consider using L instead, or just leave it out.
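The structural suggestions above (count the attempt up front, a single delay path outside the try/catch, a proper while condition, and a (success, result) tuple return) are language-independent. Here is a minimal Python stand-in for the simplified shape; the names are illustrative only, not the original C# API.

```python
import time

def check_and_retry(action, predicate, sleep=0.15, num_tries=6,
                    rethrow_exceptions=False):
    """Retry `action` until `predicate(result)` holds or num_tries runs out.
    Returns the (success, result) tuple suggested in the review."""
    result = None
    tries = 0
    while tries < num_tries:          # the intent of the loop is visible up front
        tries += 1                    # count the attempt before acting
        try:
            result = action()
            if predicate(result):     # success check reduced to a single branch
                return True, result
        except Exception:
            if rethrow_exceptions:
                raise
        time.sleep(sleep)             # one delay path, outside the try/except
    return False, result

calls = {"n": 0}

def flaky():
    # Fails twice, then succeeds: a stand-in for an unreliable network call.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "payload"

ok, value = check_and_retry(flaky, lambda r: r == "payload", sleep=0.01)
```

Note that the normal and exception paths now share all their retry bookkeeping, and success returns immediately so the sleep only separates failed attempts.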
{ "domain": "codereview.stackexchange", "id": 32529, "tags": "c#, beginner, error-handling, async-await" }
Lexer for expression evaluator
Question: I started learning my first functional programming language (Haskell) yesterday and I have been working on a lexer for an expression evaluator. The lexer is now basically feature complete, but I'm not sure what I can do to improve the code. import Data.Char import Data.List data TokenType = Identifier | RealNumberLiteral | PlusSign | MinusSign | Asterisk | ForwardSlash | Caret | LeftParenthesis | RightParenthesis deriving (Show) data Token = Token TokenType String deriving (Show) read_token :: String -> Token read_token [] = error "Unexpectedly reached the end of the source code while reading a token." read_token source_code@(next_character:_) | isSpace next_character = read_token (dropWhile isSpace source_code) | isAlpha next_character = Token Identifier (takeWhile isAlpha source_code) | isDigit next_character = let token_lexeme = (takeWhile (\x -> isDigit x || x == '.') source_code) in let period_count = length (filter (=='.') token_lexeme) in Token RealNumberLiteral (if period_count <= 1 then token_lexeme else error "There can only be one period in a real number literal.") | next_character == '+' = Token PlusSign "+" | next_character == '-' = Token MinusSign "-" | next_character == '*' = Token Asterisk "*" | next_character == '/' = Token ForwardSlash "/" | next_character == '^' = Token Caret "^" | next_character == '(' = Token LeftParenthesis "(" | next_character == ')' = Token RightParenthesis ")" | otherwise = error ("Encountered an unexpected character (" ++ [next_character] ++ ") while reading a token.") append_read_tokens :: [Token] -> String -> [Token] append_read_tokens tokens source_code | null source_code = tokens | isSpace (head source_code) = append_read_tokens tokens (dropWhile isSpace source_code) | otherwise = let next_token@(Token next_token_type next_token_lexeme) = read_token source_code in append_read_tokens (tokens ++ [next_token]) (drop (length next_token_lexeme) source_code) tokenize :: String -> [Token] tokenize [] = [] tokenize 
source_code = append_read_tokens [] source_code Answer: There's no need for TokenType: data Token = Identifier String | RealNumberLiteral String | PlusSign ... There's no need for nested let statements; see http://learnyouahaskell.com/syntax-in-functions#let-it-be camelCase naming is preferred everywhere. Generally it is more efficient to build lists by prepending with cons (the : operator) and reversing at the end if needed; this applies to the tokens ++ [next_token] fragment. read_token could return a tuple of the token and the remaining string, so there is no need to drop (length next_token_lexeme) source_code afterwards. Note, I didn't inspect the code logic at all.
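The last suggestion — have read_token return the token together with the remaining input so the caller never recomputes lengths — can be illustrated with a short Python stand-in for the Haskell lexer. The structure, not the language, is the point here; the tuple shapes and names are made up for the sketch.

```python
def read_token(src):
    """Return (token, rest): the token read plus the unconsumed input."""
    src = src.lstrip()
    if not src:
        return None, ""
    c = src[0]
    if c.isalpha():
        i = 0
        while i < len(src) and src[i].isalpha():
            i += 1
        return ("Identifier", src[:i]), src[i:]
    if c.isdigit():
        i = 0
        while i < len(src) and (src[i].isdigit() or src[i] == "."):
            i += 1
        lexeme = src[:i]
        if lexeme.count(".") > 1:
            raise ValueError("only one period allowed in a real number literal")
        return ("RealNumberLiteral", lexeme), src[i:]
    symbols = {"+": "PlusSign", "-": "MinusSign", "*": "Asterisk",
               "/": "ForwardSlash", "^": "Caret",
               "(": "LeftParenthesis", ")": "RightParenthesis"}
    if c in symbols:
        return (symbols[c], c), src[1:]
    raise ValueError(f"unexpected character {c!r} while reading a token")

def tokenize(src):
    # In Haskell this would be the cons-then-reverse accumulation the answer
    # recommends; a Python list's append already gives the same effect.
    tokens = []
    tok, rest = read_token(src)
    while tok is not None:
        tokens.append(tok)
        tok, rest = read_token(rest)
    return tokens

toks = tokenize("x + 3.5 * (y - 2)")
```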
{ "domain": "codereview.stackexchange", "id": 4146, "tags": "parsing, haskell" }
Can CDCL Algorithm Derived Conflict Clauses Always Be Obtained Through Resolution from an Unsatisfiable CNF Formula?
Question: I have a question regarding the Conflict-Driven Clause Learning (CDCL) algorithm applied to an unsatisfiable CNF formula $F$. Specifically, can all the conflict clauses learned by the CDCL algorithm be derived from the original formula $F$ through resolution? I believe the answer should be affirmative. My reasoning is that if this were not the case, I would have trouble understanding why the general resolution size of $F$ can serve as a lower bound for the runtime of the CDCL algorithm. However, I'm seeking clarification or confirmation on this point. Could someone provide insights or references that confirm or refute this understanding? Thank you! Answer: It is indeed the case, and here is the reference for it if needed: Pipatsrisawat, K., & Darwiche, A. (2011). On the power of clause-learning SAT solvers as resolution engines. Artificial Intelligence, 175(2), 512-525.
{ "domain": "cstheory.stackexchange", "id": 5837, "tags": "sat, proof-complexity" }
Installation error in fedora 18
Question: Hi all, I am trying to install gazebo for the first time in Fedora 18. After some work I managed to configure (cmake) without any errors (some warnings, yes). When I run make I get some errors at the very beginning of Linking CXX executable gazebo. The errors are: /usr/local/include/boost/thread/detail/thread.hpp:180: undefined reference to `boost::thread::start_thread_noexcept()' /usr/local/include/boost/thread/detail/thread.hpp:751: undefined reference to `boost::thread::join_noexcept()' util/libgazebo_util.so.1.8.1: undefined reference to `boost::thread::do_try_join_until_noexcept(timespec const&, bool&)' collect2: error: ld returned 1 exit status make[2]: *** [gazebo/gazebo-1.8.1] Error 1 make[1]: *** [gazebo/CMakeFiles/gazebo.dir/all] Error 2 make: *** [all] Error 2. I will appreciate any help, thanks in advance. Hugo Originally posted by Hugo Siles on Gazebo Answers with karma: 1 on 2013-05-27 Post score: 0 Original comments Comment by scpeters on 2013-05-28: What operating system and boost version are you using? Comment by Hugo Siles on 2013-05-29: thank you very much for your answer, I am not quite sure about how to apply the patch, please give me some hints Answer: It looks like a linking error. Can you try applying the following patch to the source code and rebuilding? diff -r 684ea4ae5365 gazebo/util/CMakeLists.txt --- a/gazebo/util/CMakeLists.txt Tue May 28 17:36:39 2013 -0700 +++ b/gazebo/util/CMakeLists.txt Tue May 28 22:46:11 2013 -0700 @@ -40,6 +40,7 @@ gazebo_transport gazebo_math gazebo_msgs + ${Boost_LIBRARIES} ) gz_install_library(gazebo_util) Originally posted by scpeters with karma: 2861 on 2013-05-29 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Hugo Siles on 2013-05-29: the boost version I am using now is boost_1_49_0, I also tried version boost_1_53_0, but I got more errors at different stages of the build.
Comment by Hugo Siles on 2013-05-29: thank you very much for your answer, I am not quite sure about how to apply the patch, please give me some hints. Comment by Hugo Siles on 2013-05-31: I made a patch file Comment by Hugo Siles on 2013-05-31: Ok here I am, I made a file with the above lines, put the file in a gazebo directory and ran the patch file. I still have the errors during the make build. Comment by iche033 on 2013-06-01: the error seems to suggest that you are compiling against boost 1.53 headers but the linker is probably linking against 1.49, where these function definitions are not present. Do you have multiple boost versions installed? Comment by Hugo Siles on 2013-06-01: hummm, yes, first I installed 1.53 but it gave errors in the cmake configuration, afterwards I installed 1.49 and I managed to configure with no errors, but during the make build I got the above errors. I suppose that I have to uninstall one of the boost versions, how do I do that?
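The comments ask how to apply the patch. Below is a self-contained demonstration of the mechanics with a throwaway stand-in file (the paths and file names are made up for the demo); for the real fix you would save the diff from the answer to a file and run the same `patch -p1` from the top of the gazebo source tree, with `-p1` stripping the leading a/ and b/ path components.

```shell
mkdir -p demo/gazebo/util && cd demo
printf 'target_link_libraries(\ngazebo_msgs\n)\n' > gazebo/util/CMakeLists.txt
cat > fix.patch <<'EOF'
--- a/gazebo/util/CMakeLists.txt
+++ b/gazebo/util/CMakeLists.txt
@@ -1,3 +1,4 @@
 target_link_libraries(
 gazebo_msgs
+${Boost_LIBRARIES}
 )
EOF
patch -p1 < fix.patch
grep Boost_LIBRARIES gazebo/util/CMakeLists.txt
```

Since the diff header suggests a Mercurial checkout, `hg import --no-commit fix.patch` from the repository root should achieve the same result.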
{ "domain": "robotics.stackexchange", "id": 3320, "tags": "gazebo" }
Problem with teleop
Question: Hi all, My teleop didn't work properly in my electric installation. I got the command velocity info that I passed to the husky A200. But the problem was, the husky didn't move a bit. So I tried to replace the below-mentioned three packages with another set of the same packages in opt/ros/electric/stacks: clearpath_common clearpath_husky clearpath_kinect I also rosmaked all of them. It all went error free. But when I tried to run: roslaunch husky_teleop teleop.launch It gave me this error: ERROR: cannot launch node of type [clearpath_teleop/teleop.py]: Cannot locate node of type [teleop.py] in package [clearpath_teleop] But the node is present when I checked the location. I also tried to replace the new ones with my old packages. But still the same error persisted. Any help would be appreciated. Thanks in advance Originally posted by Arjun PE on ROS Answers with karma: 18 on 2012-09-03 Post score: 0 Answer: Are you sure that clearpath_teleop is in your ROS_PACKAGE_PATH? Try with roscd clearpath_teleop. Also, I don't believe this will solve the problem you had according to your previous question. Did you try fixing what I assume to be a networking problem? Otherwise, this node won't work either. Originally posted by Lorenz with karma: 22731 on 2012-09-03 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 10867, "tags": "navigation, roslaunch, husky, joystick, teleop" }
Twitter Bootstrap CMS
Question: Ok, so I started to build my own CMS. You can see Aleksandar Golubovic's Blog, and I need your help. Please look at my code and tell me: is it safe? You can download the current version from Twitter Bootstrap CMS. If you want to install it, just read the instructions. So please tell me if I made any errors, and share your opinions and instructions... I really need help. EDIT: I have many functions like: function DeletePost() { if (isset($_GET['action'])) { if ($_GET['action']==='delete-post' && !isset($_GET['id'])) { $qry=mysql_query("SELECT * FROM posts ORDER BY id DESC"); while ($arr=mysql_fetch_array($qry)) { echo '<div class="span8 offset1"><p><h3>'.$arr['post_title'].'</h3> <a href="javascript:;" title="Delete This Post" class="delete" id="'.$arr['id'].'">Delete this post</a></p><p>'.$arr['post_content'].'</p></div>'; } } if ($_GET['action']==='delete-post' && isset($_GET['id'])) { $del=$_GET['id']; $qry=mysql_query("DELETE FROM posts WHERE id='$del'") or die ("ERROR!!!"); echo "This post has been successfull deleted."; } } } And this function is included in some page. So my question is: Is this method safe? Note: All functions are separate pages using .htaccess Answer: Well, looking at this as a code review of the code you posted, I can recommend a couple of things; Security The major flaw at the moment is the possibility of SQL injection. You put the $_GET values directly into the query without validation or escaping. PHP.net has a page on SQL injection: http://php.net/manual/en/security.database.sql-injection.php Deprecated code It is strongly recommended not to use the mysql_* functions. !!! These functions are deprecated, will receive no more support, and will be removed in an upcoming version of PHP. See http://php.net/manual/en/function.mysql-connect.php and look at the big red box. This is potentially a huge security hazard in the future as well. User Content / Access levels Currently you are not demonstrating any check on access level.
If I am to guess your actions, I could delete the entire content of your website without logging in! Database connection You seem to open the SQL connection somewhere else. Please verify for yourself that you also close the connection when the request ends, avoiding lingering MySQL connections. Misc Also, look into the SOLID principles and the PSR-2 code style; your current code does not match them. These are best practices and provide a good base strategy for your coding, also making it more likely that the open source community would accept/adopt your code. Good luck in the further development of your CMS!
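The injection fix the answer calls for is parameterized queries. The demo below uses Python's sqlite3 purely to illustrate the principle (in PHP, prepared statements via PDO or mysqli play the same role); the table layout loosely mirrors the posts table in the question.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, post_title TEXT)")
conn.executemany("INSERT INTO posts (id, post_title) VALUES (?, ?)",
                 [(1, "First post"), (2, "Second post")])

malicious = "1' OR '1'='1"   # a crafted "id" an attacker could put in the URL

# UNSAFE (the reviewed pattern): splicing the value into the SQL text changes
# the query's meaning -- this statement would delete *every* row.
unsafe_sql = f"DELETE FROM posts WHERE id='{malicious}'"

# SAFE: the value travels separately from the SQL text, so it can only ever
# be compared as data, never parsed as SQL.
conn.execute("DELETE FROM posts WHERE id = ?", (malicious,))
remaining = conn.execute("SELECT COUNT(*) FROM posts").fetchone()[0]  # still 2
```

With the parameterized form the malicious string matches no row, while executing `unsafe_sql` would empty the table.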
{ "domain": "codereview.stackexchange", "id": 3129, "tags": "php" }
Is the vernal equinox always in zenith somewhere on the equator?
Question: I'm trying to visualize the satellite orbital parameter "right ascension of the ascending node" (RAAN). Measuring the angle from the vernal equinox to the point where the satellite's orbit crosses the equator from south to north makes sense, as long as the vernal equinox is always on the equator. And I suppose it usually is. But the Earth's axis precesses, so how can the RAAN make any sense if the vernal equinox is no longer directly above the Earth's equator? Answer: The Vernal Equinox is defined by the point where the sun's path across the sky, the Ecliptic, crosses the Celestial Equator. It is always going to be on the Celestial Equator. Given the various ways that objects in orbit over Earth are perturbed, any set of Keplerian orbital parameters that describe the orbit of any object over the Earth (including the Moon!) will need updating long before the drift of the Vernal Equinox along the celestial equator becomes significant.
{ "domain": "astronomy.stackexchange", "id": 4413, "tags": "orbital-mechanics, orbital-elements, right-ascension, equinox" }
Why isn't Indigo soluble in water?
Question: Why isn't Indigo soluble in water? I mean, it has 2 H-N groups, so shouldn't it be able to hydrogen bond with water? Answer: True, Indigo can form hydrogen bonds, but with what? Is it water? Nope. It makes hydrogen bonds with itself. This makes the molecule incapable of bonding with water. Not only that, but the molecule itself is quite symmetric, with oppositely oriented polar bonds that cancel out each other's dipole moments. Notice the molecule's similarity to an ace of diamonds. This type of symmetry is called a center of symmetry, as it is about its center that we can observe the symmetry. So there we have it: it can't associate itself with water, and it's nonpolar. What do we expect? It doesn't dissolve in water.
{ "domain": "chemistry.stackexchange", "id": 8328, "tags": "solubility, hydrogen-bond" }
Dimensional reduction from DWT with threshold
Question: I have been trying to find out how the discrete wavelet transform (DWT) can be used to reduce the dimension of data. Then I saw this question, which seems related to my work: Feature extraction/reduction using DWT But after seeing the post, I have a question in my mind. Is it appropriate to say that setting to zero the DWT coefficient values that are lower than a threshold is dimensionality reduction? I mean, suppose we have the 5-dimensional vector <1,4,3,5,2>, which can be thought of as the result of a DWT. After setting some of the vector's values to zero with a threshold of 3, we get <0,4,3,5,0>. I think it seems inappropriate to say that this is dimensionality reduction, because I understand dimensionality reduction to be an actual reduction, such as <1,4,3,5,2> -> <4,3,5>. So I searched some papers to shed some light on my curiosity. Some papers say that they select DWT coefficients for dimensionality reduction. But I wonder how that can be possible. Because what if we want to reconstruct the original signal or data from the DWT coefficients? I guess, by selecting some values from the DWT coefficients (<1,4,3,5,2> -> <4,3,5>), they already lose the meaning of the original data. It should be <0,4,3,5,0>, not <4,3,5>. So I think they should propose some encoding-decoding technique to restore the selected DWT coefficients (<4,3,5> -> <0,4,3,5,0>), but they don't. Am I misunderstanding? Answer: I totally agree with your interrogations, and I appreciate the reference, which was not on my radar. Basically, dimension reduction supposes that, for sufficiently informative data $d$ in dimension $N$, there exists a close approximation that lives in a much lower dimension $M$, i.e. that can be parametrized in a different $M$-dimensional fashion. For instance, 2D points gathered around a circular shape don't really need 2 coordinates, as they could be approximated by a 1D variable (an angle, for instance). As one can see, the reduction does not have to be linear.
Basically, if the data in a huge space can be approximated by a regular-enough surface (e.g. a manifold) of small dimension, we have it! Non-linear reduction can be quite complicated, so, at least locally, people look at linear combinations of simple shapes $s_m$, of course with few non-zero combinations, compared to a canonical set of vectors $e_n$: $$d = \sum_{n=1}^N d_n\, e_n \sim \sum_{m=1}^M a_m\, s_m\,.$$ In practical PCA, one first computes eigenvectors (which depend on the data statistics, and heuristically on its covariance matrix), and sometimes performs dimension reduction by keeping the $M$ largest eigenvalues, which best explain the energy of the signal. How do we recover the data then? Using the $M$ "largest" vectors, whose knowledge was somewhat based on the whole original data (or at least its second-order statistics). With the practical DWT, the paradigm changed a little. You assume that the data is somehow piecewise regular, and some fixed set of wavelet vectors is used to project the data. It does not depend on the data anymore. Under some conditions, one expects that the coefficients with the highest magnitude approximate the data well. What is indeed misleading, as you mentioned, is that one should know to which wavelet vectors those coefficients are attached. The hidden notion of the best $M$-term estimation (keep the best $M$ wavelet coefficients for approximation) is a complicated issue, highly non-linear in a second sense of linearity. But it appears that in practice, keeping the highest coefficients can serve both as compression and as denoising, and for given data, the structure of the highest coefficients can be arranged in trees that keep track of coefficient locations without too much cost. Moreover, a second dimension reduction appears: the quantization of the data. By keeping only the most significant bits of the coefficients, the data keeps being nicely approximated by fewer coefficients and fewer bits at the same time.
This is about what you refer to as an "encoding-decoding technique". All of the above is subject to more precision (which approximation space and distance you choose), but wavelets are good approximants in many theoretical spaces. So yes, putting coefficients to zero is a little bit of cheating, but this turns out to be sound for several classes of piecewise regular data, and it works correctly in practice.
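The "keep the largest coefficients plus their locations" idea can be made concrete with a single-level Haar transform in plain NumPy (no wavelet library is needed for the illustration; the signal, noise level, and number of kept coefficients are all made-up values).

```python
import numpy as np

def haar(x):
    """One level of the orthonormal Haar DWT: approximation and detail halves."""
    x = np.asarray(x, dtype=float)
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def ihaar(approx, detail):
    """Exact inverse of one Haar level."""
    out = np.empty(2 * len(approx))
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

# Piecewise-constant signal plus a little noise (made-up data).
rng = np.random.default_rng(1)
signal = np.concatenate([np.full(16, 2.0), np.full(16, -1.0)])
signal += 0.05 * rng.standard_normal(32)

approx, detail = haar(signal)
coeffs = np.concatenate([approx, detail])   # 32 coefficients in total

# Best M-term approximation: zero everything but the M largest-magnitude
# coefficients -- and remember *where* they were, which is exactly the
# bookkeeping the question is about (<4,3,5> alone is not enough).
M = 16
kept = np.zeros_like(coeffs)
idx = np.argsort(np.abs(coeffs))[-M:]
kept[idx] = coeffs[idx]

reconstruction = ihaar(kept[:16], kept[16:])
rel_err = np.linalg.norm(signal - reconstruction) / np.linalg.norm(signal)
```

Here half the coefficients (the noise-scale details) are discarded, yet the reconstruction error stays small, which is the sense in which thresholding acts as both compression and denoising for piecewise regular data.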
{ "domain": "dsp.stackexchange", "id": 6582, "tags": "wavelet, approximation, dwt" }
Why linear forces can be added whereas torque cannot (example: internal combustion engines)?
Question: In an internal combustion engine, for example, we know only one piston is fired at a time, no matter the number of cylinders in the engine: 4, 6, 8, 12, etc. Why can't two pistons be fired at the same time? In fact, linear forces can be added: for example, when two or more people push a stuck car, or when engines are added to an airplane. Yet in a conveyor belt there cannot be 2 electric motors. Why can torque not be added, whereas linear forces can? Answer: Torques can be added. Imagine a large horizontal wheel that turns a shaft. Now if one man applies a force to turn the wheel, the net torque will be some value. If two or three men join in and apply forces in the same direction (clockwise/anti-clockwise), then the net torque will increase. This is commonly applied in machines such as human-powered capstans... This is fine if the machines have a low RPM, such as merry-go-rounds and capstan shafts. But in high-RPM machines, the complication of having multiple torques acting at the same time is that they need to be synced perfectly and act at the exact same instant in time. Small variations in timing could result in large vibrations on the shaft. In high-RPM machines such as aircraft turbines, the torque added by each blade is not "timed"; instead, the torque acts continuously as the blade revolves around the shaft. So multiple blades around the periphery of a shaft act in perfect unison to generate a large combined torque.
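The "torques can be added" point is just arithmetic, as a tiny worked example shows; the radius and forces are made-up illustration values (metres and newtons).

```python
# Each person pushing tangentially on the capstan wheel contributes
# tau = r * F, and the contributions sum exactly like linear forces do.
radius = 1.5
forces = [200.0, 180.0, 220.0]          # three people, same rotation direction
torques = [radius * f for f in forces]  # individual contributions in N*m
net_torque = sum(torques)               # three pushers give ~3x one pusher's torque
```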
{ "domain": "physics.stackexchange", "id": 39074, "tags": "newtonian-mechanics, classical-mechanics, rotational-dynamics" }
Why didn't the CMB clump into denser parts like the matter in galaxies with voids inbetween since the photons exert gravitational pull on each other?
Question: There are other questions on this site about the CMB, but none of them answer my question specifically. I am not asking about fluctuations in the CMB. How could inflation affect the CMB? Now as far as I understand, photons do have stress-energy, and do exert gravitational pull on each other, especially in the early universe, when the CMB was very dense. If you use General Relativity instead you'll find that photons make a contribution to the stress energy tensor, and therefore to the curvature of space. Does a photon exert a gravitational pull? How did space expansion uniformly overcome this gravitational pull that the CMB photons had on each other? Now just to clarify, imagine the early universe: very energetic photons, very dense CMB, so the photons' gravitational pull on each other must be significant. How can space expansion overcome this uniformly? Analogously, space was filled with matter particles, which had a gravitational pull on each other too. These matter particles clumped up into denser parts, which became galaxies, with voids in between. Why didn't this happen with photons the same way? Those energetic photons in the early universe had a significant gravitational pull on each other, like matter particles. But the CMB is uniform everywhere, with no denser parts. Another interesting thing is that there is even something called the geon. In theoretical general relativity, a geon is a nonsingular electromagnetic or gravitational wave which is held together in a confined region by the gravitational attraction of its own field energy. https://en.wikipedia.org/wiki/Geon_(physics) Question: Why didn't the CMB clump into denser parts like the matter in galaxies with voids in between, since the photons exert gravitational pull on each other? Answer: The (baryonic) matter clumps because it can interact dissipatively, i.e. it can lose kinetic energy and sink into potential wells without flying out again.
If a photon "falls" into a potential well it either emerges on the other side or it is absorbed. But the whole point about the CMB formation is that the radiation decouples from the baryonic matter and thus no longer follows the matter density.
{ "domain": "physics.stackexchange", "id": 78006, "tags": "general-relativity, cosmology, photons, spacetime" }
From NFA to DFA
Question: Let $A = (Q,Σ,\delta,q_0,F)$ be an NFA such that $L = L(A)$. We define a DFA $A'=(Q',Σ,\delta',q_0',F')$ as: $$ Q'=2^Q, \qquad q_0'=\{q_0\}, \qquad F'=\{S\in Q' \mid S\cap F \neq\emptyset\} $$ My question is: how do we know what elements $Q'$ has? What is the formal definition that yields these elements? It could contain all the elements of $2^Q$, yet in the DFA obtained from an NFA, the states are not always all the elements of $2^Q$. Answer: In the DFA you defined, the states are all the elements of $2^Q$. There are other DFAs that might have fewer states, but those are different DFAs. For instance, in some variants of this construction, we only include the states that are reachable from $q_0'$ (we remove all states that can't be reached). You might want to read, e.g., https://en.wikipedia.org/wiki/Powerset_construction and standard textbook explanations of the subset construction of a DFA from an NFA.
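The "reachable states only" variant mentioned in the answer is easy to sketch. The NFA below (accepting strings over {0,1} whose second-to-last symbol is 1) is a made-up example; for it, only 4 of the $2^3 = 8$ subsets turn out to be reachable.

```python
from collections import deque

def nfa_to_dfa(alphabet, delta, q0, accepting):
    """Subset construction keeping only the subsets reachable from {q0}.
    `delta` maps (state, symbol) -> set of successor states."""
    start = frozenset({q0})
    dfa_states = {start}
    dfa_delta = {}
    todo = deque([start])
    while todo:
        S = todo.popleft()
        for a in alphabet:
            # delta'(S, a) = union of delta(q, a) over q in S
            T = frozenset(r for q in S for r in delta.get((q, a), set()))
            dfa_delta[(S, a)] = T
            if T not in dfa_states:
                dfa_states.add(T)
                todo.append(T)
    dfa_accepting = {S for S in dfa_states if S & accepting}
    return dfa_states, dfa_delta, start, dfa_accepting

# Made-up NFA: p loops on everything, guesses the crucial 1, then reads one more symbol.
delta = {("p", "0"): {"p"}, ("p", "1"): {"p", "q"},
         ("q", "0"): {"r"}, ("q", "1"): {"r"}}
Q, d, start, F = nfa_to_dfa(["0", "1"], delta, "p", {"r"})
```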
{ "domain": "cs.stackexchange", "id": 7627, "tags": "automata, finite-automata" }
Representing non-contiguous memory regions as a standard vector with O(1) access
Question: What interesting alternatives are there for constructing an arbitrarily large static array from smaller scattered fixed-size non-contiguous blocks of memory? This is similar to the problem solved by hardware MMU pagetable mappings from virtual memory to physical memory, but in this particular case the scale is different, and it needs to be implemented in software, with virtual memory, and also in userland. To clarify a bit, I'm using Baker's treadmill as my GC, but it only supports fixed-size allocations (the current chunk size is 128 bytes, but I have some slight flexibility in re-defining it). The language I'm implementing requires arbitrarily sized vectors with something close to O(1) access/write. Currently I'm just building a pyramid of chunks, which gives something like log12(n) lookups, but the memory overhead sucks. Are there any interesting hashing solutions for mapping monotone index ranges to pointers? What about run-time generation of a minimal perfect hash function? It seems they require at minimum 1.44 bits per key. If we bump the allocation block size to 512, we can support vectors with up to 1454080 elements, since we only need to map block numbers and not vector indices. Edit: See the paper "Z-rays: divide arrays and conquer speed and flexibility" (http://dl.acm.org/citation.cfm?id=1806596.1806649&coll=DL&dl=GUIDE) for more information regarding optimization of this array format. These scattered blocks are sometimes called arraylets and are used by other GC algorithms: https://www.ibm.com/developerworks/websphere/techjournal/1108_sciampacone/1108_sciampacone.html Also related: the Staccato GC: http://researcher.watson.ibm.com/researcher/files/us-groved/rc24504.pdf Answer: A simple approach is to have a one-level lookup table that maps each block of the array to where it is stored.
In other words, for a logical array $A[0\dots n-1]$ containing $n$ bytes, we have a lookup table $T[0\dots \frac{n}{128}-1]$ containing $n/128$ pointers; the bytes $A[128k \dots 128k+127]$ are stored at the physical address $T[k]$. To look up $A[i]$, we compute $p = T[i \gg 7]$ (where $\gg$ indicates logical right shift), then read the byte at address $p + (i \bmod 128)$. This does require a single contiguous region large enough to store $n/128$ pointers (e.g., holding $n/32$ or $n/16$ bytes, depending on whether you're on a 32-bit or 64-bit system). Given that, array operations become fast: you basically have one extra table lookup and indirection. If you can't even have any contiguous region, not even a smaller one for storing the lookup table, but have to build everything out of 128-byte chunks that can be at arbitrarily-inconvenient locations with arbitrarily-bad fragmentation, then I don't think it's possible to do any better than your solution.

Rather than looking for a better data structure to handle this kind of arbitrarily-bad fragmentation, a cleaner solution is probably to design your memory management system to avoid creating worst-case memory fragmentation in the first place. One architecture that can help deal with this is a buddy allocator. The entire memory address space can be divided into 128-byte chunks, but we then impose a binary tree structure on it: the 128-byte chunk at address $256k$ is the sibling/buddy of the 128-byte chunk at address $256k+128$. Those two chunks, if they're both available, can be coalesced into a (fused) 256-byte chunk. Each 256-byte chunk has a buddy, and if they're both available, they too can be merged to obtain a 512-byte chunk. And so on. This allows you flexibility to deal with small 128-byte chunks when you need them, but also provides a way to build up larger chunks of contiguous memory and try to avoid arbitrarily-bad fragmentation.
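The one-level table is only a few lines of logic. Here is an illustrative Python model of it (the 128-byte chunk size and the `i >> 7` / `i mod 128` arithmetic mirror the example above; the class and names are mine, not from any particular GC):

```python
CHUNK = 128                      # bytes per block, as in the answer; a power of two
SHIFT = CHUNK.bit_length() - 1   # log2(128) = 7, so index >> 7 selects the block

class ChunkedArray:
    """Logical byte array A[0..n-1] stored as scattered fixed-size chunks."""

    def __init__(self, n):
        # 'table' plays the role of the contiguous pointer table T:
        # one entry per chunk, each pointing at an independent allocation.
        self.table = [bytearray(CHUNK) for _ in range((n + CHUNK - 1) // CHUNK)]

    def read(self, i):
        # T[i >> 7] gives the chunk; i & 127 is i mod 128, the offset inside it.
        return self.table[i >> SHIFT][i & (CHUNK - 1)]

    def write(self, i, byte):
        self.table[i >> SHIFT][i & (CHUNK - 1)] = byte

a = ChunkedArray(1000)
a.write(513, 42)   # index 513 = 4*128 + 1: chunk 4, offset 1
```

Every access is one table lookup plus one indirection, regardless of `n`, which is the O(1) behaviour the question asks for.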
{ "domain": "cs.stackexchange", "id": 7261, "tags": "data-structures, arrays, virtual-memory" }
Deadbeef : finding all words made of hexadecimal digits
Question: Hexadecimal 0xdead and 0xbeef are magic numbers because they are also English words. I decided to find as many such words as possible. How to do it? We need a large English text, let's say Ulysses by James Joyce, and a program which extracts all words consisting of hexadecimal digits. For simplicity, I decided to drop leet-language support. It dramatically shrinks the range but keeps real words only. The code below extracts magic numbers from the given text and prints them to stdout in lower case:

#include <ctype.h>
#include <stdio.h>

#define MAX_LEN 256

int process_file(FILE* file);

int main(int argc, char* argv[])
{
    if (argc == 1) {
        process_file(stdin);
    } else {
        size_t i = 0;
        char* filename;
        FILE* file;
        int err;
        while ((filename = argv[++i]) != NULL) {
            file = fopen(filename, "r");
            if (!file) {
                perror("fopen() failed");
                return 1;
            }
            err = process_file(file);
            fclose(file);
            if (err) {
                return 2;
            }
        }
    }
    return 0;
}

int process_file(FILE* file)
{
    char word[MAX_LEN];
    size_t p = 0;
    int c;
    while (1) {
        c = getc(file);
        if (isspace(c) || c == EOF) {
            /* end of word or end of emptiness */
            if (p > 0 && p < MAX_LEN && p % 2 == 0) {
                word[p] = 0;
                printf("%s\n", word);
            }
            if (c == EOF) {
                break;
            }
            p = 0;
            continue;
        }
        if (p > MAX_LEN - 1) {
            continue;
        }
        if ((c >= 'A' && c <= 'F') || (c >= 'a' && c <= 'f')) {
            /* abcdef ABCDEF */
            word[p++] = tolower(c);
        } else {
            /* skip this word */
            p = MAX_LEN;
        }
    }
    if (feof(file)) {
        return 0;
    }
    if (ferror(file)) {
        perror("i/o error occurred");
    }
    return 1;
}

The command

echo "Dead of being fed with beef for a decade" | ./deadbeef | sort | uniq

should give

beef
dead
decade

Answer: It is customary to put helper functions first and main() last, to avoid having to write forward declarations like int process_file(FILE* file);. process_file() is a very generic name. I suggest renaming it to print_hex_words(). The process_file() function returns an error code. Therefore, the responsibility for printing any error message for I/O errors should lie with main().
You assume that words are delimited by whitespace, and have neglected to deal with punctuation. Your algorithm is very tedious. Instead of using getc() to read a byte at a time, use fscanf() to read a whitespace-delimited word at a time. To skip to the end of a sequence consisting solely of A-F characters, use strspn(…, "ABCDEFabcdef").

#define xstr(s) str(s)
#define str(s) #s

int print_hex_words(FILE* file)
{
    char word_buf[MAX_LEN + 1];
    while (1 == fscanf(file, "%" xstr(MAX_LEN) "s", word_buf)) {
        char *word, *end, *trail_punct;

        /* Skip leading punctuation */
        for (word = word_buf; ispunct(*word); word++);
        end = word + strspn(word, "ABCDEFabcdef");
        /* Skip trailing punctuation */
        for (trail_punct = end; ispunct(*trail_punct); trail_punct++);

        if (word != end && *trail_punct == '\0') {
            /* NUL-terminate the word and convert it to lowercase */
            *end = '\0';
            for (end = word; (*end = tolower(*end)); end++);
            printf("%s\n", word);
        }
    }
    return ferror(file);
}

Instead of … | sort | uniq, you can use … | sort -u.
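For comparison, the same filter — words built solely from the letters a–f, with the question's even-length rule — is nearly a one-liner in a scripting language. This is just an illustrative sketch, not part of the review:

```python
import re

def hex_words(text):
    """Return lowercase words made only of the letters a-f, with even length.

    \b word boundaries take care of surrounding punctuation, and the
    len(w) % 2 == 0 filter reproduces the original p % 2 == 0 check.
    """
    return [w.lower() for w in re.findall(r'\b[A-Fa-f]+\b', text)
            if len(w) % 2 == 0]

print(hex_words("Dead of being fed with beef for a decade"))
# -> ['dead', 'beef', 'decade']   ('fed' and 'a' are dropped: odd length)
```

Deduplication and sorting can still be left to `sort -u`, as in the answer.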
{ "domain": "codereview.stackexchange", "id": 31942, "tags": "c, file" }
"Alien" distinguisher
Question: I am practicing some simple Java coding problems in an attempt to learn while doing them. I would like to know if my code is redundant, and whether there is an easier way to accomplish the same thing. Question: Which Alien? "A person who witnessed the appearance of the alien has come forward to describe the alien's appearance." The program will determine which alien has arrived. The three alien species that it could be are:

TroyMartian, has at least 3 antenna and at most 4 eyes
VladSaturnian, has at most 6 antenna and at least 2 eyes
GraemeMercurian, has at most 2 antenna and at most 3 eyes

Sample session (with output shown in text, user input in italics):

How many antennas?
2
How many eyes?
3
VladSaturnian
GraemeMercurian

If the description does not match any of the aliens, there is no output. My code:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class Alien {
    public static int antenna;
    public static int eye;

    public static void main(String args[]) {
        try (BufferedReader in = new BufferedReader(new InputStreamReader(System.in))) {
            System.out.println("How many antennas?");
            antenna = Integer.parseInt(in.readLine());
            System.out.println("How many eyes?");
            eye = Integer.parseInt(in.readLine());
            if (troy(antenna, eye)) {
                System.out.println("TroyMartian");
            }
            if (vlad(antenna, eye)) {
                System.out.println("VladSaturnian");
            }
            if (graeme(antenna, eye)) {
                System.out.println("GraemeMercurian");
            }
            return;
        } catch (IOException e) {
            System.err.println("Error");
        }
    }

    public static boolean troy(int antenna, int eye) {
        if ((antenna >= 3) && (eye <= 4)) {
            return true;
        } else {
            return false;
        }
    }

    public static boolean vlad(int antenna, int eye) {
        if ((antenna <= 6) && (eye >= 2)) {
            return true;
        } else {
            return false;
        }
    }

    public static boolean graeme(int antenna, int eye) {
        if ((antenna <= 2) && (eye <= 3)) {
            return true;
        } else {
            return false;
        }
    }
}

Answer: You're on the right track.
Good job using the try-with-resources block for the BufferedReader. You could use a Scanner instead for convenience. (You could even use Scanner.nextInt(), but it would work slightly differently: the newline in the input would be optional.) The variables antenna and eye can be local to main(), and therefore should not be static members of the class. In Java, a common naming convention for methods that return a boolean is isSomething() or hasSomething(). In this case, though, maybeSomething() seems more appropriate. It is rarely necessary to write return true; or return false; explicitly. Usually, you would be better off returning a boolean expression.

import java.util.Scanner;

public class Alien {
    public static void main(String args[]) {
        try (Scanner in = new Scanner(System.in)) {
            System.out.println("How many antennas?");
            int antenna = Integer.parseInt(in.nextLine());
            System.out.println("How many eyes?");
            int eye = Integer.parseInt(in.nextLine());
            if (maybeTroy(antenna, eye)) {
                System.out.println("TroyMartian");
            }
            if (maybeVlad(antenna, eye)) {
                System.out.println("VladSaturnian");
            }
            if (maybeGraeme(antenna, eye)) {
                System.out.println("GraemeMercurian");
            }
        }
    }

    public static boolean maybeTroy(int antenna, int eye) {
        return ((antenna >= 3) && (eye <= 4));
    }

    public static boolean maybeVlad(int antenna, int eye) {
        return ((antenna <= 6) && (eye >= 2));
    }

    public static boolean maybeGraeme(int antenna, int eye) {
        return ((antenna <= 2) && (eye <= 3));
    }
}
{ "domain": "codereview.stackexchange", "id": 11936, "tags": "java, beginner" }
Complex conjugate of the Dirac equation
Question: (Following the calculations done in 'Quantum Field Theory in a Nutshell' [Second Edition] by Zee, Page 101) The Dirac equation in the presence of an electromagnetic field is given by: $$ [i \gamma^{\mu} (\partial_{\mu} - i e A_{\mu}) - m]\psi = 0 $$ where $\gamma^{\mu}$ are the gamma matrices, $A_{\mu}$ is the gauge field, $e$ is charge, $m$ is mass and $\psi$ is a spinor. The complex conjugate of this equation is given by: $$ [-i \gamma^{\mu *} (\partial_{\mu} + i e A_{\mu}) - m]\psi^{*} = 0 $$ Zee defines: $$ -\gamma^{\mu *} = (C \gamma^{0})^{-1} \gamma^{\mu} (C \gamma^{0}) $$ where $C$ is the charge conjugation operator. Zee states that you can plug this into the complex conjugated Dirac equation to get: $$ [i \gamma^{\mu} (\partial_{\mu} + i e A_{\mu}) - m]\psi_{c} = 0 $$ where $\psi_{c} = C \gamma^{0} \psi^{*}$. When I try to do this I get the following: $$ [i (C \gamma^{0})^{-1} \gamma^{\mu} (C \gamma^{0}) (\partial_{\mu} + i e A_{\mu}) - m]\psi^{*} = 0 $$ $$ (C \gamma^{0}) \cdot [i (C \gamma^{0})^{-1} \gamma^{\mu} (C \gamma^{0}) (\partial_{\mu} + i e A_{\mu}) - m]\psi^{*} = (C \gamma^{0}) \cdot 0 $$ $$ [-i (C \gamma^{0})(C \gamma^{0})^{-1} \gamma^{\mu} (C \gamma^{0}) (\partial_{\mu} + i e A_{\mu}) - m(C \gamma^{0})]\psi^{*} = 0 $$ $$ [-i (C \gamma^{0})(C \gamma^{0})^{-1} \gamma^{\mu} (\partial_{\mu} - i e A_{\mu})(C \gamma^{0}) - m(C \gamma^{0})]\psi^{*} = 0 $$ $$ [-i (C \gamma^{0})(C \gamma^{0})^{-1} \gamma^{\mu} (\partial_{\mu} - i e A_{\mu}) - m]C \gamma^{0}\psi^{*} = 0 $$ $$ [i (C \gamma^{0})(C \gamma^{0})^{-1} \gamma^{\mu} (-\partial_{\mu} + i e A_{\mu}) - m]\psi_{c} = 0 $$ where I used the antilinear property $Ci = -iC$. Now I have run into two issues. I am unsure of how to treat $(C \gamma^{0})(C \gamma^{0})^{-1}$ and also the sign of the $\partial_{\mu}$ term does not match the sign in Zee's expression. Can anyone point me in the right direction? Answer: The charge conjugation matrix $C$ is not an antilinear map. 
It is an ordinary matrix with the property that $$ C\gamma^\mu C^{-1} =-(\gamma^\mu)^T. $$ (or maybe $C^{-1} \gamma^\mu C = -(\gamma^\mu)^T$. Conventions differ.).
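To spell out how the substitution goes once no antilinearity is assumed (a sketch in Zee's conventions): $C\gamma^0$ is a constant, complex — but linear — matrix, so it commutes with $i$, $\partial_\mu$ and $A_\mu$. Starting from the conjugated equation and inserting $-\gamma^{\mu *} = (C\gamma^0)^{-1}\gamma^\mu(C\gamma^0)$:

$$ 0 = [-i\gamma^{\mu *}(\partial_\mu + ieA_\mu) - m]\psi^{*} = [\,i(C\gamma^0)^{-1}\gamma^\mu(C\gamma^0)(\partial_\mu + ieA_\mu) - m\,]\psi^{*} . $$

Multiplying from the left by $C\gamma^0$ produces no sign flips (again, $C$ is just a matrix here), and the factor $C\gamma^0$ passes through $(\partial_\mu + ieA_\mu)$ untouched:

$$ [\,i\gamma^\mu(\partial_\mu + ieA_\mu) - m\,]\,C\gamma^0\psi^{*} = [\,i\gamma^\mu(\partial_\mu + ieA_\mu) - m\,]\,\psi_c = 0 , $$

which is exactly Zee's result. The stray minus sign in the question comes entirely from the assumed $Ci = -iC$.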
{ "domain": "physics.stackexchange", "id": 74797, "tags": "field-theory, dirac-equation, complex-numbers, dirac-matrices, charge-conjugation" }
creating metapackage with catkin-tools
Question: Is it possible to create a ROS metapackage with catkin-tools? With catkin_make it's possible with: catkin_create_pkg <MY_META_PACKAGE> --meta Originally posted by fjp on ROS Answers with karma: 200 on 2020-10-26 Post score: 0 Answer: No, catkin_create_pkg is a standalone executable not related to catkin_make or catkin-tools (other than the catkin prefix and that it creates a metapackage to be built with either of the two tools). Use catkin_create_pkg to create the metapackage and you can compile with either of the two build tools above... Originally posted by mgruhler with karma: 12390 on 2020-10-27 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 35679, "tags": "ros, catkin, metapackage" }
Shortest path in a directed weighted graph
Question: Suppose we have a directed, weighted graph, $G = (V, E, w)$, with non-negative weights. We define the weight of the shortest path differently from the original definition. The weight of a path with at most 5 edges (including 5) is defined as usual, meaning the sum of the edge weights. The weight of a path with at least 6 edges is the sum of the edge weights multiplied by 2. Propose and analyze an algorithm (as efficient as you can) that, given a graph $G$ and two vertices $u, v$, finds the shortest path from $u$ to $v$, with path weights as defined above. Describe in words why the algorithm works and analyze its runtime. My thought was to first run BFS on the graph and, for every edge that is in a path with 6 or more edges, just multiply the edge weight by 2, and then run Dijkstra's, but I think it won't work. So then I was tempted to do a brute-force algorithm: find all the paths from $u$ to $v$ and then find the shortest one, but that is not efficient. So I'm stuck on an answer to this one and would be glad for some help. Answer: You can take this approach: Step one: Duplicate $G$'s nodes 6 times to get a new directed graph with 6 levels $L_1,\dots,L_6$. For $1\le i\le 5$ and nodes $a,b$: there will be an edge from $a \in L_i$ to $b \in L_{i+1}$ if there was an edge from $a$ to $b$ in the original graph, with the same weight as in the original graph. From $L_6$ there is no way to move between the nodes, or to move to any other level. In this way we can run a shortest-path algorithm from node $u$ in $L_1$ to find the shortest path to $v$ (in any level) with respect to $w$, and each such path will be no longer than 5 edges.
Assume that we got a shortest path $s_1$ (if no path was found, $s_1 = \infty$). Step two: Create a new weight function $f(w)=2w$ for the original graph's weights and run Dijkstra's again on the original graph with the new weight function to get another shortest path $s_2$ (again, if no path was found, $s_2 = \infty$). Step three: Return $\min\{s_1,s_2\}$. The time complexity is that of Dijkstra's algorithm, since constructing the graph in step one takes linear time.
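Instead of materializing the 6-level graph, the same idea can be run directly as a Dijkstra over (node, edges-used) states, hops capped at 5, plus an ordinary Dijkstra with doubled weights. This is an illustrative Python sketch (adjacency-dict representation and names are mine, not from the question):

```python
import heapq

def dijkstra(adj, u, v, max_hops=None):
    """Shortest u->v distance. If max_hops is set, only paths with at most
    that many edges are considered; the search state is then (node, hops),
    which is exactly the levelled-graph construction from step one."""
    pq = [(0, u, 0)]                      # (distance, node, edges used)
    seen = set()
    while pq:
        d, x, h = heapq.heappop(pq)
        if x == v:
            return d
        if (x, h) in seen:
            continue
        seen.add((x, h))
        if max_hops is not None and h == max_hops:
            continue                      # level L_6: no outgoing edges
        for y, w in adj.get(x, []):
            nh = h + 1 if max_hops is not None else 0
            heapq.heappush(pq, (d + w, y, nh))
    return float('inf')

def modified_shortest_path(adj, u, v):
    s1 = dijkstra(adj, u, v, max_hops=5)  # step one: at most 5 edges, normal weight
    s2 = 2 * dijkstra(adj, u, v)          # step two: any path, weight doubled
    return min(s1, s2)                    # step three

adj = {'u': [('a', 1), ('v', 20)], 'a': [('b', 1)], 'b': [('c', 1)],
       'c': [('d', 1)], 'd': [('e', 1)], 'e': [('v', 1)]}
print(modified_shortest_path(adj, 'u', 'v'))  # 6-edge path costs 6, doubled: 12
```

Note that $s_2$ may double a short path's weight, but that path's undoubled cost is already covered by $s_1$, so the minimum is still correct — the same argument the answer relies on.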
{ "domain": "cs.stackexchange", "id": 20087, "tags": "graphs, dijkstras-algorithm" }
Summation notation for Kronecker delta
Question: I'm having some problems with notation for indices. I've found in Goldstein, 3rd edition, that the Kronecker delta satisfies the following property: $$\delta_{ij}\delta_{ik}=\delta_{jk}$$ But imagine that $i \neq j$ and $j=k$. In this case, $$\delta_{ij}\delta_{ik}=0$$ but, $$\delta_{jk}=1.$$ So how does this work? I've seen the following affirmation: $$\delta_{ii}=3$$ By the previous property, isn't this possible: $$\delta_{ii}\delta_{jj}=9$$ But when we apply $\delta_{ij}\delta_{ik}=\delta_{jk}$, we can have at most $\delta_{jk}=3$ for $j=k$. Answer: You're getting tripped up by summation notation. Whenever you have a repeated index, this means that that index is to be summed from 1 to 3: $$ \delta_{ij} \delta_{ik} \equiv \sum_{i=1}^3 \delta_{ij} \delta_{ik}. $$ You're right that there are two terms in this sum where $i \neq j$, and so the contribution to the sum from these terms is zero. But the remaining term has $i = j$, and so gives 1, and so the entire thing sums to 1 when $j = k$. Oh, and the Levi-Civita symbol is something else entirely.
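The bookkeeping is easy to check numerically. A short sketch with the summation written out explicitly (δ implemented as a plain function):

```python
def delta(i, j):
    """Kronecker delta: 1 if the indices match, else 0."""
    return 1 if i == j else 0

# Repeated index i is summed from 1 to 3: sum_i delta_ij delta_ik = delta_jk
for j in range(1, 4):
    for k in range(1, 4):
        lhs = sum(delta(i, j) * delta(i, k) for i in range(1, 4))
        assert lhs == delta(j, k)

# A repeated index on a single delta is summed too: delta_ii = 1 + 1 + 1 = 3
assert sum(delta(i, i) for i in range(1, 4)) == 3
```

The apparent paradox in the question disappears once the implicit sum over $i$ is restored: the product is never evaluated at a single fixed $i$.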
{ "domain": "physics.stackexchange", "id": 22582, "tags": "classical-mechanics, notation" }
Why do proteins targeted for chloroplasts require two signal peptides?
Question: Question #11 of the GRE Biology Practice Test says that proteins targeted for chloroplasts require two signal peptides: Targeting of a newly synthesized protein is most likely to require two different signal peptides for which of the following destinations?

(A) Plasma membrane
(B) Lysosome
(C) Cytosol
(D) Chloroplast (Correct Answer)
(E) Endoplasmic reticulum

I can't find a reference for this. I understand that a ribosome starts the synthesis of a protein in the cytosol and a signal peptide takes it to the endoplasmic reticulum (ER), which finishes protein synthesis in the ER lumen, and then the protein is secreted in a vesicle to its destination (if it doesn't become an ER membrane protein). What's the purpose of the second signal peptide to enter a chloroplast? Answer: Protein import to chloroplasts (and also mitochondria) can still have multiple destinations, because these structures themselves have sub-compartments:

- both have an inter-membrane space as well as an 'inner space' (the stroma in chloroplasts, the matrix in mitochondria)
- proteins can also be targeted towards either of the two membranes
- additionally, chloroplasts have the thylakoid stacks in the stroma, which are separated by another membrane layer

Due to this complex structure you need multiple signal peptides to properly target proteins to their destination. I don't remember (or can't find) all the details now, but basically you (can) have:

- a 'main' signal peptide that targets proteins directly towards the stroma (which will be removed there by the stromal processing peptidase (SPP))
- an additional signal peptide for import to the thylakoid stacks (which will be 'made available' by cleavage of the first peptide)
- there should be other signal peptides, or at least variants, that allow import to the intermembrane space and the inner membrane (via the TIC23 complex) [I couldn't find any sources for this right now]

Sources I could find: TIC/TOC complex in wikipedia; an in-depth scientific review
{ "domain": "biology.stackexchange", "id": 9029, "tags": "homework, chloroplasts" }
How does the polycrystalline structure of graphite produce the circular rings in electron diffraction experiment?
Question: When one performs the electron diffraction experiment to measure the lattice spacing of polycrystalline graphite, one gets the following pattern: My lab manual and other online sources give the reason that, since the sample is polycrystalline, the set of planes for a particular incident angle can be randomly oriented (the bonds between the layers of graphite are weak), so all possible orientations occur, and hence for a particular angle a circle is traced on the screen. The above seems to be just a vague picture and I am not able to imagine (visualize or draw on paper) how this would happen. Also, would the pattern simply be alternating patches for a monocrystalline crystal? Answer: You are right, a monocrystalline solid will produce a characteristic set of bright spots with a distinctive symmetry pattern that depends on the angle between the principal axes of the crystal arrangement and the incoming beam. Now imagine that that same crystal has been hit with a hammer and fractured into many tiny crystals. The beam is still coming in at the same angle relative to the glob of fractured crystallites, but because their orientation is random, the bright patches produced by each turn into rings with characteristic radii, with the ring centers on the central axis of the beam.
{ "domain": "physics.stackexchange", "id": 98886, "tags": "diffraction" }
Sea Level and its variability with regard to 'Altitude'
Question: I'm wondering how we are managing 'things' that relate to 'sea level'. Mean Sea Level is taken to be the halfway point between high- and low tide... but as the sea level rises, presumably this will change over time... Within the world of cartography, where elevations can often be given in 5m increments, has someone decided what '0m' is, and fixed it? Presumably cartography and systems like GPS have the same problem... I'm also curious with regard to barometric pressure... With the rising sea level one could reason that the absolute value (e.g: in bar) of the pressure at sea level will be falling (the atmosphere is now surrounding a 'larger' planet, and thus is spread more thinly). I don't expect this to be of significant concern to aviation (presumably they are working with large-ish tolerances), but pressure sensors in consumer applications (e.g: mobile phones) have a high enough resolution that they can detect a change in pressure over 1-2 metres or less. I wonder where else they are employed... I thought I'd follow up (24th Oct 2017) with a link to Tom Scott's video, where he mentions variations in gravity causing problems for "sea level" too: What is sea level, anyway? Answer: Sea level is defined by the "reference geoid ellipsoid"--the current version is WGS 84. This has sub-1 meter resolution, and its value is $6378137.0$ meters at the equator and $6356752.314$ meters at the poles. Note that the polar radius is calculated by the flattening of $1/298.257223563$ (as defined by the WGS 84 standard; see the linked page for information on how that is determined). NOTE that the equatorial radius (semi-major axis) and the flattening are defined; the polar radius (semi-minor axis) is calculated. Modern technologies such as GPS and similar can be used to measure sea level (and variations thereof) with accuracy down to centimeters (maybe more). Also, GPS uses the reference ellipsoid when telling your smartphone or handheld GPS thingy what altitude you're at. 
As for the barometric pressure, I intuitively say yes, it would decrease, but the amount would be negligible--way below any current means of measurement. I'm certain other environmental factors would have a bigger impact on average barometric pressure. Note that the chemical composition of air has changed over geological timescales, and this affects the density (and thus the mass and pressure). ADDED: This is an interesting page to read regarding the ellipsoid and what we know as "sea level". Side note: The Panamá Canal empties into the Pacific Ocean 20 cm higher than at the Atlantic Ocean (and this is not due to diurnal tides, etc.).
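The "defined versus calculated" point above is easy to verify: given the defined semi-major axis $a$ and flattening $f$, the semi-minor (polar) axis follows from $b = a(1 - f)$. A quick check with the WGS 84 constants:

```python
a = 6378137.0                # semi-major (equatorial) radius in metres -- defined
inv_f = 298.257223563        # reciprocal of the flattening -- defined
b = a * (1.0 - 1.0 / inv_f)  # semi-minor (polar) radius -- calculated
print(round(b, 3))           # -> 6356752.314
```

This reproduces the polar radius quoted in the answer to sub-millimetre precision.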
{ "domain": "physics.stackexchange", "id": 39460, "tags": "pressure" }
What's the $\ell$ in the Bicep2 paper mean?
Question: The BICEP experiment's recent announcement included the preprint of their paper, BICEP2 I: Detection of $B$-mode polarization at degree angular scales. BICEP2 Collaboration. To be submitted. BICEP-Keck preprint, arXiv:1403.3985. Gravitational lensing of the CMB’s light by large scale structure at relatively late times produces small deflections of the primordial pattern, converting a small portion of E-mode power into B-modes. The lensing B-mode spectrum is similar to a smoothed version of the E-mode spectrum but a factor 100 lower in power, and hence also rises toward sub-degree scales and peaks around $\ell$ = 1000. I think the $\ell$ is this: For example $\ell=10$ corresponds to roughly 10 degrees on the sky, $\ell=100$ corresponds to roughly 1 degree on the sky. (From CMB introduction, by Wayne Hu.) But how does that apply here? When BICEP looks for something with an $\ell$ around 80, does that mean a "multipole moment" which spans 80 degrees across the sky? Answer: It's the same $\ell$ that indexes the spherical harmonics $Y_{\ell m}$ (or $Y_\ell^m$ if you prefer). We can decompose functions defined on the sphere (like anything defined on the sky) into a countably infinite sum of appropriately weighted spherical harmonics. $\ell$ counts the number of nodes, while different values of $m$, $0 \leq \lvert m \rvert \leq \ell$, give different arrangements of those nodes. Higher values of $\ell$ correspond to components that have more nodes and fluctuations. The angular scale of variations corresponding to a given $\ell$ scale like $1/\ell$. For more information, you might want to look at an answer I wrote to Relation between multipole moment and angular scale of CMB. One thing cosmologists do is plot correlations between different quantities as a function of $\ell$. 
You can imagine decomposing two functions \begin{align} f(\theta, \phi) & = \sum_{\ell=0}^\infty \sum_{m=-\ell}^\ell a_{\ell m} Y_{\ell m}(\theta, \phi) \\ g(\theta, \phi) & = \sum_{\ell=0}^\infty \sum_{m=-\ell}^\ell b_{\ell m} Y_{\ell m}(\theta, \phi), \end{align} where the $a$'s and $b$'s are complex numbers. Then you might plot quantities like $$ Q_\ell = \sum_{m=-\ell}^\ell a_{\ell m}^* b_{\ell m} $$ over a run of $\ell$ for which you have good data, comparing theory to observation. $Q_{80}$, for example, will be built from information about ${\sim}2^\circ$ scales. BICEP doesn't look at the whole sky, by the way, so they can't even measure the low-$\ell$ components of anything. What they focus on is the high-$\ell$ stuff that might be harder to get with a space-based mission designed to scan the whole sky at lower resolution. The assumption is that the high-$\ell$ signal you get in one part of the sky is representative of the high-$\ell$ signal everywhere. (If this weren't the case we'd live in a very weird universe indeed.)
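The usual rule of thumb connecting the two pictures is $\theta \approx 180^\circ/\ell$ (the prefactor varies by order one between conventions). A quick sketch:

```python
def angular_scale_deg(ell):
    """Approximate angular scale (degrees) probed by multipole ell.
    Rule of thumb only: conventions differ by an O(1) prefactor."""
    return 180.0 / ell

for ell in (10, 80, 100, 1000):
    print(ell, angular_scale_deg(ell))
```

So $\ell = 80$ corresponds to roughly $2^\circ$ — consistent with the "degree angular scales" in the BICEP2 paper's title — and the lensing peak at $\ell \approx 1000$ to sub-degree scales.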
{ "domain": "physics.stackexchange", "id": 16667, "tags": "polarization, cosmological-inflation, cosmic-microwave-background, multipole-expansion" }
Definition of non-degenerate metric tensor
Question: We know that a metric has a property which is called non-degeneracy. I was searching for what that means and saw it associated with the fact that $\det(g_{\mu\nu})\neq 0$. How does one relate to the other? Answer: 1- A degenerate matrix is a matrix whose rank is smaller than its dimension. 2- A singular (non-invertible) matrix is one that has a vanishing determinant. Equivalence of the two: a matrix whose rank is smaller than its dimension will, when diagonalized, have at least one zero eigenvalue, and consequently a vanishing determinant.
{ "domain": "physics.stackexchange", "id": 84471, "tags": "differential-geometry, terminology, metric-tensor, definition" }
Do celestial bodies actually appear larger along the horizon?
Question: Whether it be the Moon (especially when full) or, tonight, Mars, which is closer than it has been in decades, it appears that these bodies are larger when close to the horizon than when overhead. Is this an optical illusion (i.e. the actual appearance is no larger), or is there some refraction or other effect that actually makes the appearance larger? Answer: No, it's an illusion. Probably an ancient one. A simple experiment you can do is to set a grid on a telescope and measure the angle subtended when the Moon is at the horizon and when it is overhead. You will see that the angle subtended by the Moon is the same, hence the sizes are the same. It's merely an illusion. For more details: https://en.m.wikipedia.org/wiki/Moon_illusion
{ "domain": "astronomy.stackexchange", "id": 3052, "tags": "the-moon, size, terrestrial-planets" }
waitForTransform Issues and errors
Question: I am trying to have my code use self.listener.waitForTransform(from_frame, to_frame, rtn, rospy.Duration(0.5)) and transformPoint to transform points around my robot from the map frame to the base_scan frame as the robot moves. In order to get this to work I have to use the same time stamp for each run, so that every point around the robot at any given time gets the same exact transform information to use for the transform calculation. So this is the pseudocode of how I tried to make this work:

def transform(self, coord, from_frame='map', to_frame='base_scan', rtn=None):
    if rtn is None:
        rtn = rospy.get_rostime()
    self.scope.header.frame_id = from_frame
    self.scope.header.stamp = rtn  # rospy.Time(0)
    self.scope.point.x = coord[0]
    self.scope.point.y = coord[1]
    self.scope.point.z = 0
    try:
        self.listener.waitForTransform(from_frame, to_frame, rtn, rospy.Duration(0.5))
        p = self.listener.transformPoint(to_frame, self.scope)
        return [p.point.x, p.point.y]
    except (tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException):
        raise

def use_transform():
    points = gather all points around robot at any given time
    rtn = rospy.get_rostime()
    for point in points:
        transform(point, rtn)  # from map to base_scan
    ........

When I run this code I get this below:

File "/opt/ros/melodic/lib/python2.7/dist-packages/tf/listener.py", line 74, in waitForTransform
    can_transform, error_msg = self._buffer.can_transform(strip_leading_slash(target_frame), strip_leading_slash(source_frame), time, timeout, return_debug_tuple=True)
File "/opt/ros/melodic/lib/python2.7/dist-packages/tf/listener.py", line 45, in strip_leading_slash
    return s[1:] if s.startswith("/") else s
AttributeError: 'Time' object has no attribute 'startswith'

Is it because I'm using rospy.get_rostime()? Should I be using something else? I've tried rospy.Time(0) and rospy.Time.now() and have the same issue. I have searched the listener.py script the error talks about, but I have no idea what I should change in the script to fix this error.
Originally posted by distro on ROS Answers with karma: 167 on 2022-04-04 Post score: 0 Answer: It's not related to how you constructed your time object. From the error it looks like you're passing a Time object where it's expecting a string frame_id. As it's trying to strip a potentially leading slash from the frame_id. You should make sure to check your arguments that you're inputting to the functions. There's presumably truncated backtrace information above the section that you've quoted which will show you your problem. Originally posted by tfoote with karma: 58457 on 2022-04-04 This answer was ACCEPTED on the original site Post score: 1
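The mismatch is visible in the question's own call: transform(point, rtn) passes rtn positionally, so it lands in the from_frame parameter — a Time object where a string frame name is expected. A stripped-down illustration of the bug and the keyword-argument fix (toy function, no ROS required):

```python
def transform(coord, from_frame='map', to_frame='base_scan', rtn=None):
    # waitForTransform expects frame names as strings; mimic that check here.
    assert isinstance(from_frame, str) and isinstance(to_frame, str)
    return (from_frame, to_frame, rtn)

stamp = 123.456  # stands in for the rospy.Time object

# transform([1.0, 2.0], stamp)           # BUG: stamp lands in from_frame
result = transform([1.0, 2.0], rtn=stamp)  # FIX: pass the stamp by keyword
print(result)  # -> ('map', 'base_scan', 123.456)
```

Passing the stamp by keyword (or in the correct positional slot) keeps the default frame names intact and removes the AttributeError.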
{ "domain": "robotics.stackexchange", "id": 37558, "tags": "ros-melodic, transform" }
Is there a relationship between the properties of different charges of a fundamental particle?
Question: To begin with, I'm a high school student and so my understanding of QFT is quite basic. Due to this, I'd prefer a simple answer (it would be great if it's yes/no) along with a very basic explanation. Essentially, I know that the three fundamental forces - the electric, strong and weak forces - are results of spontaneous symmetry breaking. At low energies, the symmetry breaks and the forces "split". My question is based on this. Now that the forces have "split", is there any direct relationship between them? For example, an electron has an electric charge of -1, a strong charge of 0, and a weak charge of -1/2. So is there a connection between the -1, the 0 and the -1/2? If one of the values were to change, would any of the other two values change? If yes, would it be both or would it just be one of them? So in essence, could there exist a fundamental particle that, for example, has an electric charge of -2, a color charge, and a weak charge of 1/2? I'm not sure if there is another restriction that doesn't allow the electric charge to go below -1, but ignoring these other restrictions, just based on the pure relationship between these charges, would changing one affect the other two, and if it does, is there only a certain number of combinations of these 3 charges? Answer: Let me give you the view from the experiment side: The model, called the standard model of particle physics, is a mathematical quantum field theory that fits the data up to now, and its predictions are continually fulfilled. In addition, the integral and differential functions of the model have to obey a specific group theory in order to fit the various measured charges in experiments, which leads to describing the fundamental particles and their composites that you are asking about. The local SU(3)×SU(2)×U(1) gauge symmetry is an internal symmetry that essentially defines the Standard Model. Roughly, the three factors of the gauge symmetry give rise to the three fundamental interactions.
Group theory is as strict as integration and differentiation: once decided upon, one cannot pick and choose the way the particles are represented; the group theory imposes the specific way charges are combined that allows real particles and composites of particles to exist. One cannot pick and choose among the members of the group. A good example is the prediction that a particle called $Ω^-$ should exist; it was found experimentally, confirming the quark model and the theoretical research that led to the development of the standard model. So it is not possible to arbitrarily attach charges to particles; it is the standard model itself, as it has developed, that does the assignment, in order to agree with observations. If future experiments bring observations of more combinations of the charges than exist in the symmetries of the standard model, the model should change in order to agree with nature and keep having predictive power.
{ "domain": "physics.stackexchange", "id": 80844, "tags": "particle-physics, charge, symmetry-breaking, color-charge" }
Can I use one-hot vectors for text classification?
Question: For an upcoming project I'm trying to write a text classifier for the IMDb sentiment analysis dataset. This needs to vectorize words using an embedding layer and then reduce the dimensions of the output with global average pooling. This is proving, however, to be very difficult at my low experience level, and I am struggling to wrap my head around the dimensionality involved, bearing in mind I must avoid libraries such as tensorflow that would make it a very basic exercise. I am hoping that I could make it easier by encoding each word in the reviews as a one-hot vector, and passing it through a few regular dense layers. Would this work and yield decent results? Answer: One-hot encoding is a good strategy for categorical variables that take few possible values. The problem with text data is that you easily end up with corpora with a really large vocabulary. If I remember correctly the IMDb dataset contains around 130,000 unique words, which means that you would have to create a network with an input matrix of size 130,000 x max_length, where max_length is the fixed maximum length allowed for each review. Apart from the huge size, this matrix would also be extremely sparse, and that's another big issue in using one-hot encoding with text. For these reasons, I really doubt you would achieve any good results with a simple one-hot encoding. Embeddings were actually designed precisely to overcome all these issues: they have a fixed, reasonable size; they take continuous values, which is desirable for deep neural networks; and they can be treated as "extra" trainable weights of a network. If you really want to avoid embeddings I would suggest you use (or implement; I don't think it will be too hard) a term frequency–inverse document frequency (tf-idf) vectoriser. It is closer to one-hot encoding in that it is based on building a huge, sparse term-document matrix, but at least the values are continuous and not dichotomous.
Nevertheless, I would not expect high performance with tf-idf either, simply because this type of encoding works best with shallow models like Naive Bayes rather than with deep models.
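A minimal sketch of the suggested tf-idf route, using only the Python standard library (the toy corpus and the particular smoothed idf formula are my illustrative assumptions, not part of the original answer):

```python
import math
from collections import Counter

def tfidf_vectorize(docs):
    """Build tf-idf vectors for a list of tokenized documents.

    Uses the smoothed variant idf = log(N / (1 + df)) + 1
    (an assumption; libraries differ in the exact formula).
    """
    vocab = sorted({w for doc in docs for w in doc})
    n_docs = len(docs)
    df = Counter(w for doc in docs for w in set(doc))  # document frequency
    idf = {w: math.log(n_docs / (1 + df[w])) + 1 for w in vocab}
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        vectors.append([tf[w] / total * idf[w] for w in vocab])
    return vocab, vectors

# Toy example: a word common to most reviews gets down-weighted
docs = [["good", "movie"], ["bad", "movie"], ["good", "plot"]]
vocab, vecs = tfidf_vectorize(docs)
```

Production vectorisers (e.g. scikit-learn's TfidfVectorizer) add normalisation and sparse storage, which matter at IMDb scale.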
{ "domain": "ai.stackexchange", "id": 2283, "tags": "neural-networks, machine-learning, text-classification" }
Chirp after T seconds
Question: Suppose we have this signal from this wiki page $a(t)=\sin(\varphi_o + f_0t + (f_1-f_0)t^2/T)$ The article says that $T$ is the period of the chirp modulation over which the frequency changes. So my question is: how do I create a signal with constant frequency $f_1$ after $T$? Answer: how to create signal with constant $f_1$ frequency after T? Find the instantaneous rate of change of the phase at the end of the $T$ interval and keep adding it to the phase that makes the $\sin()$ "go round". A plain simple oscillator at some frequency $f$ has its phase increasing at a constant rate. Here is an example (in GNU Octave but easily transferable to other platforms as well): Fs = 44100; % Sampling frequency in Hz T = 1; % Total duration of the signal in seconds f = 120; % Frequency of the oscillator t = 0:(1./Fs):(T - (1./Fs)); % Time vector p = 2.0 .* pi .* t; % Phase vector y = sin(f.*p); Now y contains our 120 Hz sinusoid sampled at 44.1 kHz. Notice here that the phase (p) is a "straight line" that grows at a constant rate of $\frac{2 \pi f}{Fs}$. When the oscillator is "chirping", the rate of change of the phase is variable. For example: Fs = 44100; % Sampling frequency in Hz T = 1; % Total duration of the signal in seconds f0 = 1; % Start chirp at f1 = 120; % End chirp at (better keep f1 > f0) t = 0:(1./Fs):(T - (1./Fs)); % Time vector p = 2.0 .* pi .* t; % Phase vector y = sin(f0.*p + 2.*pi.*(((f1-f0)/(2.*T)).*t.^2)); Now the phase grows quadratically (figure omitted), and the rate of change of the phase of the $\sin$ climbs linearly from $\frac{2\pi f_0}{Fs}$ towards $\frac{2\pi f_1}{Fs}$ (figure omitted). Now, from the first example, the phase "step" is f.*p(2)-f.*p(1) == 0.017097 and from the second example, after doing: s = diff(f0.*p + 2.*pi.*(((f1-f0)/(2.*T)).*t.^2)); % Get the first derivative of phase to find the **rate of change**. The "last" rate of change is s(end) == 0.017097. So, to have the oscillator stay at the chirp's concluding frequency, keep accumulating the phase at the constant rate you are after from the last known value of the chirp's phase (to avoid phase jumps).
For example: r = f0.*p + 2.*pi.*(((f1-f0)/(2.*T)).*t.^2); % The phase as above. z = r(end) + cumsum(ones(1,T.*Fs).*0.017097); % A running sum of duration T.*Fs (samples). Now, if you feed the combined result of r, z into the oscillator y with something like y = sin([r,z]); you get a chirp that settles into a steady sinusoid at $f_1$ (figure omitted), with a constant rate of change of phase after $T$ (figure omitted). Hope this helps.
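The same construction can be sketched in NumPy (the sampling rate and frequencies below are illustrative, not the answer's values): build the quadratic chirp phase up to $T$, then keep accumulating phase at the constant rate $2\pi f_1/F_s$ starting from the chirp's final phase, so there is no phase jump.

```python
import numpy as np

fs = 8000          # sampling rate in Hz (illustrative choice)
T = 1.0            # chirp duration in seconds
f0, f1 = 1.0, 120.0

t = np.arange(0, 2 * T, 1 / fs)   # two seconds total
phase = np.empty_like(t)
chirp = t < T
# Quadratic phase during the chirp: instantaneous f = f0 + (f1 - f0) * t / T
phase[chirp] = 2 * np.pi * (f0 * t[chirp] + (f1 - f0) * t[chirp] ** 2 / (2 * T))
# After T: keep adding phase at the constant rate 2*pi*f1, starting
# from the chirp's final phase so there is no discontinuity.
phase_T = 2 * np.pi * (f0 * T + (f1 - f0) * T / 2)
phase[~chirp] = phase_T + 2 * np.pi * f1 * (t[~chirp] - T)
y = np.sin(phase)
```

The per-sample phase step is $2\pi f_1/F_s$ from $t=T$ onward, matching the "last" rate of change of the chirp.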
{ "domain": "dsp.stackexchange", "id": 7448, "tags": "linear-chirp" }
Why does it take so long to get to the ISS?
Question: I don't understand why, when first launched, SpaceX's Dragon capsule had to orbit the Earth many times in order to match up with the ISS. Was this purely to match its speed, or to get closer (as in altitude) to the ISS? In the stages when it gets to about 200 m, it seemed like it was able to go directly up to the ISS; how come it couldn't do that the entire way? (Additionally, in sci-fi movies you see smaller shuttles able to go directly to space stations in orbit; is that type of travel not possible?) Answer: In space you don't just "go somewhere". You have to match orbits, while not wasting too much fuel. If you're in a low circular orbit, and you want to get to a high circular orbit, it takes two tangential burns, one to elongate your orbit into an ellipse, and another at the high point of the ellipse to make it circular again. This is called a Hohmann transfer. You may have to do this multiple times, depending on how much thrust you have. If your orbit is in a different plane from the orbit of the space station, you have to wait until you reach the plane of the other orbit, then do a lateral burn. You may have to do this several times to change your orbit's angle sufficiently, each time having to wait another half-orbit. EDIT: to give some perspective on this, if your orbit crosses the plane of the other orbit at an angle of 10 degrees, that means you are crossing that plane at about one mile per second. (Orbit velocity times sin(10 degrees).) If your rocket motor generates 1G of thrust, you need to run it around 2.5 minutes to get aligned with that plane. (5280/32/60) REVISED: If you're in the same orbit as your destination, but some distance behind it (say), the way you catch up is by getting into a lower orbit by a Hohmann transfer, with greater angular velocity, and then another such transfer to get back to the original orbit. This is called orbit phasing.
If you just accelerate toward the object, that would put you in an orbit that rises above the target, and then eventually falls further behind because it is a higher orbit.
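The two-burn arithmetic described above can be sketched numerically; this is the textbook Hohmann transfer between two circular orbits, with illustrative LEO-like radii rather than actual Dragon mission numbers:

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

def hohmann_delta_v(r1, r2, mu=MU_EARTH):
    """Total delta-v (m/s) for the two tangential burns of a Hohmann
    transfer from a circular orbit of radius r1 to radius r2."""
    a = (r1 + r2) / 2                          # semi-major axis of the transfer ellipse
    v1 = math.sqrt(mu / r1)                    # circular speed at r1
    v2 = math.sqrt(mu / r2)                    # circular speed at r2
    v_peri = math.sqrt(mu * (2 / r1 - 1 / a))  # ellipse speed at r1 (first burn target)
    v_apo = math.sqrt(mu * (2 / r2 - 1 / a))   # ellipse speed at r2 (before second burn)
    return (v_peri - v1) + (v2 - v_apo)

# e.g. from a 200 km parking orbit up to a 400 km (ISS-like) altitude
r_earth = 6.371e6
dv = hohmann_delta_v(r_earth + 200e3, r_earth + 400e3)
```

The altitude change itself is cheap (on the order of 100 m/s here); it is the phasing and plane alignment that take time, which is why rendezvous spans many orbits.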
{ "domain": "physics.stackexchange", "id": 3588, "tags": "newtonian-mechanics, space, newtonian-gravity, orbital-motion, rocket-science" }
How to get global pose of a specific model's link in a gazebo world?
Question: How do I get the exact pose of a specific link of a model in a Gazebo simulated world? To get the model's (i.e. base_link) global position I call rosservice call gazebo/get_model_state '{model_name: my_model_name}' but what I really need is the global pose of my lidar's link. Originally posted by Constantine on ROS Answers with karma: 3 on 2016-06-23 Post score: 0 Answer: You can get a list of Gazebo links and their states from the /gazebo/link_states topic. Here is a Python script that republishes poses in geometry_msgs/Pose format. You can find the names of links either in the Gazebo GUI or on the /gazebo/link_states topic itself. Originally posted by Boris with karma: 3060 on 2016-06-23 This answer was ACCEPTED on the original site Post score: 4
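The lookup on /gazebo/link_states amounts to matching parallel name and pose arrays. Here is a plain-Python sketch of that step (dictionaries stand in for the gazebo_msgs/LinkStates message fields so no ROS installation is needed; the model/link names and pose values are illustrative assumptions):

```python
def get_link_pose(link_states, link_name):
    """Return the pose matching link_name from a LinkStates-style
    message, which carries parallel `name` and `pose` lists.
    Gazebo names links as "<model_name>::<link_name>"."""
    try:
        idx = link_states["name"].index(link_name)
    except ValueError:
        raise KeyError(f"link {link_name!r} not found") from None
    return link_states["pose"][idx]

# Stand-in for one /gazebo/link_states message (illustrative values)
msg = {
    "name": ["my_model::base_link", "my_model::lidar_link"],
    "pose": [
        {"position": (0.0, 0.0, 0.1), "orientation": (0, 0, 0, 1)},
        {"position": (0.2, 0.0, 0.5), "orientation": (0, 0, 0, 1)},
    ],
}
lidar_pose = get_link_pose(msg, "my_model::lidar_link")
```

In a real node, the same indexing would run inside a rospy subscriber callback on /gazebo/link_states.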
{ "domain": "robotics.stackexchange", "id": 25039, "tags": "gazebo" }
Vector Potential that vanishes outside infinite solenoid
Question: Consider the magnetic field $\vec{B}$ generated by an infinite solenoid on the $z$-axis with radius $R$. Then $$\vec{B}(r)=\begin{cases} B_z \hat{z} & \text{ if }r<R, \\ 0 & \text{ elsewhere.} \end{cases}$$ I would like to find a continuous vector potential $A$ such that curl$(A)=B$. In the following I am considering cylindrical coordinates. I was able to find $$A(r)=\begin{cases} \frac{B_z}{2}\left(r-\frac{R^2}{r}\right) \hat{\varphi} & \text{ if }r<R, \\ 0 & \text{ elsewhere.} \end{cases}$$ This is continuous at $r=R$ and $$\text{curl}(A) =\frac{1}{r}\frac{\partial}{\partial r}(rA) = \frac{B_z}{2}\left(1-\frac{R^2}{r^2}\right)+ \frac{B_z}{2}\left(1+\frac{R^2}{r^2}\right)=B_z$$ as desired. My problem is that my professor stated that it wasn't possible to find a vector potential that vanishes everywhere outside the solenoid, yet mine does, so why is mine not valid? His argument was that if we consider $$U=\{\vec{x} \in \mathbb{R}^3 : x_3=0 \wedge x_1^2+x_2^2 < a^2 \}$$ for an arbitrary $a$, i.e. the disk of radius $a$ in the $xy$-plane, and take the line integral of $A$ along its boundary $\partial U$, then it couldn't be $0$, because according to Stokes' theorem it has to be equal to the magnetic flux through $U$ (which clearly isn't zero inside the solenoid). Answer: Your proposed vector potential diverges at $r = 0$. This may not seem like an insurmountable problem; after all, we see infinite potentials all the time for things like point charges & line currents, right? It turns out, in fact, that there's an infinite flux hiding in this problem. Consider a loop of radius $\epsilon$ about the origin in the $xy$-plane.
The magnetic flux through this surface is equal to the line integral of $\vec{A}$ around this curve, which is: $$ \Phi = \oint \vec{A} \cdot d\vec{l} = \int_0^{2 \pi} \frac{B_z}{2} \left( \epsilon - \frac{R^2}{\epsilon} \right) \hat{\phi} \cdot (\epsilon \,d\theta \hat{\phi}) = \pi B_z \left(\epsilon^2 - R^2 \right) $$ The first term is what we would expect from the infinite solenoid; but what's that second term doing there? It's independent of $\epsilon$, which means that the loop is "catching" this flux no matter how small the loop is. In other words, your proposed vector potential hides an infinitely strong, dense magnetic flux concentrated on the $z$-axis, of magnitude $-\pi B_z R^2$. You can think of this as the limit of two nested solenoids, with opposing currents and radii $R_i < R_o$, in the limit as $R_i \to 0$. In terms of delta-functions, the curl of this vector field would be $$ \vec{B} = - \pi B_z R^2 \delta(x) \delta(y) \hat{z} + \begin{cases} B_z \hat{z} & r < R \\ 0 &r > R \end{cases}. $$ The reason this doesn't show up when you take the curl using the formulas in the endpapers of Griffiths or Jackson, by the way, is that those formulas are only guaranteed to work for points where the coordinates are not singular. Roughly speaking, a singular point of a coordinate system is any point where one or more of the basis vectors are not well-defined. At $r = 0$ in a cylindrical coordinate system, the basis vectors $\hat{r}$ and $\hat{\phi}$ are not well-defined, and so the coordinates are singular there and the usual vector calculus formulas require caution to use.
{ "domain": "physics.stackexchange", "id": 87520, "tags": "electromagnetism, magnetic-fields, differential-geometry, vector-fields, singularities" }
Data saving/sharing standards for objects, actions and senses
Question: I am working on a language (computer language) for robots to communicate with each other. I am looking for a naming standard that is unique and usable for robotics: for example, when two robots are communicating with this language, they will use a word (standard) for an object like "door" that is understandable to both because they are using one unique naming standard. Searching the internet, I couldn't find anything helpful for naming the objects, senses and actions that robots may share with each other so that they understand what they mean. Syntax of my language SEND BY loc ON object = door This language is a query language like SQL: based on programming conditions, the programmer writes communication queries to retrieve some data from the destination robot or to request some actions from it. In the code above, loc and door are names that should be declared by a standard that both robots can understand. I'm asking if you can suggest any naming standard for saving and sharing names on robots, and whether there is a robotics communication standard to suggest, especially a scholarly one. Thanks. Answer: "Ontology" is the search term you are looking for. There are a few. You will have to review them to see if any meet your needs. A little background learning about ontologies will help you navigate the options. You may be tempted to only look at ontologies made by roboticists or with the word robot in the title, but each ontology is created with a set of assumptions or contexts in mind. You may miss a useful ontology if you only look at robotics.
{ "domain": "robotics.stackexchange", "id": 1724, "tags": "communication" }
Mersenne Prime Oracle
Question: Please give me any feedback regarding good coding standards and readability. fun main(arg: Array<String>) { //do while user guess != Mersenne Prime. do { //initializing variable var number: Int = 3 try { number = readLine()!!.toInt() if (isMersennePrime(number)) println("${number} is a Mersenne Prime. Congratulations.") else println("${number} is not a Mersenne Prime.") } catch (e: NumberFormatException) { println(" is not a valid number. Exiting.") } } while (!isMersennePrime(number)) } // is Number both prime and 2^n - 1? fun isMersennePrime(Number: Int): Boolean = (twonminus1(Number.toDouble()) && isPrime(Number)) // is x Prime? fun isPrime(x: Int): Boolean = (2..x-1).all{x % it != 0 && x != 2} // is x = 2^n - 1? fun twonminus1(x: Double): Boolean = Math.log(x + 1.0)/Math.log(2.0) % 1.0 == 0.0 Answer: Others already made some good points. Calling isMersennePrime twice is really bad because that can be very computationally intensive. I'm not sure why you are not allowing 2 to be a prime number. Where I'm from 2 is prime. When scanning for primes, you can stop at sqrt(x) instead of x - 1. If y * z = x, one of y or z is smaller than sqrt(x). @Roland gave you a trick on checking if an integer is a power of two, but I just wanted to add a reference to wikipedia with some explanation. Also var number: Int = 3 is quite odd. It's better to use immutable val. Instead: while (true) { print("Enter a number: ") val text = readLine() val number = try { text!!.toInt() } catch (e: NumberFormatException) { println("$text is not a valid number. Exiting.") break } if (isMersennePrime(number)) { println("${number} is a Mersenne Prime. Congratulations. Exiting.") break } else { println("${number} is not a Mersenne Prime. Try again.") } }
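To make the review's suggestions concrete in another language, here is a Python sketch (my illustration, not the reviewer's code) that avoids the floating-point log test entirely: x equals 2**n - 1 exactly when x & (x + 1) == 0, and trial division can stop at the integer square root.

```python
import math

def is_prime(x: int) -> bool:
    """Trial division up to sqrt(x), as the review suggests; 2 counts as prime."""
    if x < 2:
        return False
    return all(x % d != 0 for d in range(2, math.isqrt(x) + 1))

def is_mersenne_prime(x: int) -> bool:
    """x is a Mersenne prime iff x = 2**n - 1 for some n and x is prime.
    The bit trick x & (x + 1) == 0 holds exactly for numbers of the
    form 2**n - 1 (all-ones in binary), so no logarithms are needed."""
    return x > 0 and (x & (x + 1)) == 0 and is_prime(x)
```

Because the check is exact integer arithmetic, it also sidesteps the `% 1.0 == 0.0` floating-point comparison in the reviewed code, which can misbehave for large inputs.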
{ "domain": "codereview.stackexchange", "id": 26889, "tags": "primes, kotlin" }
Estimating whether the flow through a valve or nozzle cavitates
Question: My understanding is that cavitation occurs in the flow of a liquid when the static pressure drops below the vapor pressure, even intermittently. So even if the time-averaged static pressure (what you might measure) is above the vapor pressure, the pressure fluctuations from turbulence or other unsteadiness could be large enough to cause cavitation locally. So comparing the time-averaged static pressure against the vapor pressure isn't enough; you need to add some extra cushion to account for the pressure fluctuations. (This is my interpretation, not having read too deeply into this.) So, in various books, websites, and journal articles I have seen two different types of dimensionless numbers for estimating whether the flow through a valve or nozzle cavitates. They are generally called the cavitation index or cavitation number. They take one of two forms: $$\sigma = \frac{p_\text{in} - p_\text{vapor}}{p_\text{in} - p_\text{out}}$$ or $$\sigma = \frac{p_\text{in} - p_\text{vapor}}{\tfrac{1}{2} \rho V^2}$$ where $p_\text{in}$ is the inlet pressure, $p_\text{out}$ is the outlet pressure, $p_\text{vapor}$ is the vapor pressure, $\rho$ is the liquid density, and $V$ is some characteristic velocity of the flow (say, in the nozzle case, the velocity at the outlet). Some forms of this number are inversions of the numbers above, but these aren't that different. What is the difference between these parameters? Based on energy conservation you can relate the pressure drop to the flow rate, but typically there is an empirical coefficient added in to account for non-idealities. Is there something else I am missing? Is one form preferred over the other? Best I can tell whether to use one or the other depends on what sort of data you have (so, for flow over a turbine blade, the velocity form is preferred), but I've seen both even for nozzles. Where can I get accurate data to predict cavitation based on these numbers? 
I've tried using some data on atomizer nozzles from various journal articles but generally they use different forms of the cavitation number. Some of the data suggests the flow through the nozzle will cavitate at the pressures I want, but other data for similar nozzles suggests it won't. I'm not sure what the source of the inconsistency is. My understanding could be faulty, the cavitation number model could be too simplistic, the data could be inaccurate, etc. Answer: The difference between the two equations The cavitation number is the ratio of the static pressure difference to the dynamic pressure difference. So, if you want to use the first equation, you would need to take the pressure using a Pitot tube to measure the total pressure, whereas if you want to use the second equation you will need to measure the freestream velocity, but I would recommend measuring it upstream rather than downstream because of possible effects of acceleration and boundary layer growth. Also, your $V$ should be $V_{in}$ such that it corresponds to the same location where $p_{in}$ is measured, because this equation is derived from Bernoulli's equation which says the energy is conserved along a streamline. Is one form preferred over the other? In all my experience working in cavitation research for many years, we have almost always used the latter equation you mentioned (although I have mainly been working in hydrofoils and propulsion systems). The reason is that we could get more accurate non-intrusive velocity measures using Laser Doppler velocimetry (LDV) than by using an intrusive method. Where can I get accurate data to predict cavitation based on these numbers? It is difficult to use experimental data to predict the cavitation number because of the differences in things like turbulence intensity and air nuclei content, which are difficult to match in reality with controlled laboratory methods. 
Traditionally, in my circles, this is done by running some CFD analysis codes on your design. There are two different approaches here: (1) compute the average mean flow using a RANS or LES technique, and (2) use a bubble dynamics code which will model the air nuclei, but requires a flowfield (either from experimental measures or from the CFD model). If you use a typical RANS CFD model to compute the flow-field, it should give you the pressure coefficient, which has a very similar definition to the cavitation number: $$C_P = \frac{P-P_\infty}{\frac{1}{2}\rho V_\infty^2}$$ If you are doing some CFD calculation on your nozzle, you should find the location of minimum pressure, and that is the place where cavitation should occur. You can infer the cavitation number from this pressure coefficient as: $$\sigma = -C_P^{min}$$ where $C_P^{min}$ is the minimum pressure coefficient in your nozzle. I explain this in more detail in this paper. However, this will only give you an idea of the time-averaged cavitation inception number. Most people don't go to such detail in trying to get such an accurate prediction of cavitation inception, unless it is absolutely critical. If you want to get a more accurate number, you need to consider that cavitation inception requires three things to happen at the same time: (1) a local area of pressure which is below the vapor pressure of water, (2) an air nucleus which enters into that low pressure region, and (3) the nucleus must stay in the low-pressure region long enough that it rapidly grows, becomes unstable and hence collapses. The way people have been able to more accurately estimate this is by using a Lagrangian method that simulates sending air nuclei through an Eulerian CFD dataset. Some of the real experts in this field are the people at Dynaflow-inc.com. I might suggest taking a look at this paper: Chahine, G.L. "Nuclei Effects on Cavitation Inception and Noise", 25th Symposium on Naval Hydrodynamics, St.
John's, NL, Canada, Aug. 8-13, 2004 (PDF available online). However, if you don't want to go to all that trouble, I would recommend that you compute an estimate of the pressure fluctuations $p'$ based on the ambient turbulence intensity of your flow, and then subtract this value off of your mean pressure to get a better estimate of the cavitation number. You should be able to get this value out of the turbulence model if you're using a RANS technique. If you are looking at possible CFD techniques to use, unless you have a lot of money to spend, I might suggest looking into using OpenFOAM.
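As a numeric illustration of the two definitions in the question (water-like numbers chosen arbitrarily, my sketch): if the characteristic velocity is taken as the ideal Bernoulli velocity computed from the pressure drop, the two forms coincide exactly, and it is the empirical discharge coefficient of a real device that separates them in practice.

```python
def sigma_pressure(p_in, p_out, p_vapor):
    """Cavitation index based on the pressure drop across the device."""
    return (p_in - p_vapor) / (p_in - p_out)

def sigma_dynamic(p_in, p_vapor, rho, v):
    """Cavitation number based on the dynamic pressure 0.5*rho*v**2."""
    return (p_in - p_vapor) / (0.5 * rho * v ** 2)

# Illustrative water flow: 5 bar in, 1 bar out, ~20 C vapor pressure
p_in, p_out, p_vap = 5e5, 1e5, 2.34e3   # Pa
rho = 998.0                              # kg/m^3
v = (2 * (p_in - p_out) / rho) ** 0.5    # ideal (Bernoulli) outlet velocity
s1 = sigma_pressure(p_in, p_out, p_vap)
s2 = sigma_dynamic(p_in, p_vap, rho, v)
```

With a real discharge coefficient $C_d < 1$, the measured velocity is lower than the Bernoulli value, so the dynamic-pressure form gives a larger number than the pressure-drop form; that is one source of the inconsistency between datasets.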
{ "domain": "engineering.stackexchange", "id": 59, "tags": "mechanical-engineering, fluid-dynamics, multiphase-flow" }
Find transfer function from step response and root locus?
Question: I am given a step response of magnitude 3 and the root locus, and I have to find the transfer function of the system. The function I find gives me the step response (magnitude 3 again) of the last diagram. I'm a beginner at this, so I've probably done something stupid, but I have trouble finding answers regarding control engineering on the internet. This is what I tried doing: I found the poles and the zeros from the root locus. z=-5,+4 p=-6,-10,-3 I think my transfer function is given by this formula, but I'm not sure if we have an H(s) in the feedback and it is not stated: $$ T(s)= \frac{KG(s)}{1+KG(s)} $$ From the poles and the zeros my open-loop transfer function G(s) is: $$ G(s)= \frac{(s+5)(s-4)}{(s+10)(s+6)(s+3)}$$ Doing the calculations I find: $$ T(s)= \frac{Ks^2+Ks-20K}{s^3+(K+19)s^2+(108+K)s+180-20K}$$ From the step response (final value is 4) and the final value theorem I find $\frac{-20K}{180-20K}=-4/3\Rightarrow K=5.14$ I divided 4 by 3 because the first step response is of magnitude 3. With this K the step response is the one in the third diagram. It's close to the first one but it's not the one I'm looking for. What am I missing here? Answer: I think you mixed this up with proportional feedback. The transfer function is given by $$G(s)=K\frac{(s+5)(s-4)}{(s+10)(s+6)(s+3)},$$ in which $K$ is a parameter that needs to be determined. Because $G(s)$ is a stable plant (all poles are in the left half plane) we can determine the DC gain by the final value theorem. $$G(s=0)=K\frac{5\cdot (-4)}{10\cdot 6 \cdot 3}=-\frac{K}{9}$$ And $G(s=0)=y(s=0)/u(s=0)=\frac{-4}{3}\implies K=12.$ Hence, $$G(s)=12\frac{(s+5)(s-4)}{(s+10)(s+6)(s+3)}.$$ MATLAB testing: s = tf('s'); G = 12*(s+5)*(s-4)/((s+10)*(s+6)*(s+3)); step(3*G); % 3 to scale the unit step response
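The final-value arithmetic can also be checked without MATLAB; a plain-Python version (my addition, same numbers as the answer):

```python
def dc_gain(k):
    """G(0) for G(s) = k*(s+5)*(s-4)/((s+10)*(s+6)*(s+3))."""
    return k * (0 + 5) * (0 - 4) / ((0 + 10) * (0 + 6) * (0 + 3))

# Solve dc_gain(K) = -4/3 for K: since G(0) = -K/9, K = 12
K = (-4 / 3) / (5 * -4 / (10 * 6 * 3))
```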
{ "domain": "engineering.stackexchange", "id": 1489, "tags": "control-engineering, control-theory" }
How does normal force work?
Question: From what I read: normal force is the force that prevents objects from passing through each other, and it arises from the electrostatic repulsion between charges. The normal force will get as large as required to prevent objects from penetrating each other. My question is about the scenario of a person inside an elevator: the elevator has a mass of $1000\,kg$ and the person has a mass of $10\,kg$. In the first few seconds the variables are ($_e$ is for "elevator" and $_p$ is for "person"; I'm assuming that the acceleration due to gravity is $-10\,m/s^2$, where "-" means downward): $v_e = 0\,m/s$, $a_e = 0\,m/s^2$, $v_p = 0\,m/s$, $a_p = 0\,m/s^2$. And the forces are: the force of gravity on the elevator, $f_g(\text{elevator}) = m_e \cdot (-10\,m/s^2)$; the force of gravity on the person, $f_g(\text{person}) = m_p \cdot (-10\,m/s^2)$; and the force of the wire keeping the elevator in place (without considering the weight of the person, because that's one of my questions), $f_w = -f_g(\text{elevator})$. Now, there's a force of gravity applied on the person, which is $f_g = 10\,kg \cdot (-10\,m/s^2) = -100\,N$. So the person is supposed to accelerate downward, but can't go through the elevator because of the normal force, whose role I described at the start of the question. Here's what I think is happening: if the normal force were to be applied on the elevator by the person's feet, then it would be greater than if it were applied on the person's feet by the elevator (because the mass of the person would require less force for the elevator to stop it than the mass of the elevator would require for the person to get the elevator moving along, so that they don't penetrate each other). Therefore the normal force is applied on the person by the elevator (as small as it can be) so that they do not penetrate each other, $f_n = -f_g(\text{person})$. When there is a net force on the elevator which accelerates it upward, the normal force is applied on the person by the elevator to prevent them from penetrating each other, because that way it is less than if the normal
force were applied on the elevator by the person (because the mass of the person would require less force for the elevator to get the person moving with it than the mass of the elevator would require for the person to get the elevator to stop, so that they don't penetrate). And the normal force in that case is $f_n = m_p(a_g + a_e)$, applied on the person by the elevator. The main thing: is my interpretation of normal force correct, or does the normal force have to be applied on the "moving" object? I have often heard that when the elevator starts decelerating (accelerating in the downward direction), the elevator applies a normal force on the person which is as small as it can be to prevent him/her from penetrating the elevator, and because the elevator is decelerating, the force will be less than gravity (assuming that the person has the velocity of the elevator before it was decelerating). But if the elevator is slowing down (the same goes if the velocity was negative), that means that for some time the person wouldn't be in contact with the elevator (because the person's velocity has to be the same as the elevator's to not penetrate the elevator, and the elevator has to change its velocity first before the velocity of the person can change due to gravity's downward acceleration). So how can there be a normal force applied? Does normal force come in pairs, and if it does, in what way? If not, what is the equal and opposite force to the normal force? I tried to make my question as clear as possible. Answer: Yes, normal forces come in pairs - the elevator exerts a normal force on the person and the person exerts a normal force on the elevator. These two normal forces are equal in magnitude and opposite in direction - this is Newton's Third Law. The best and simplest approach to this type of problem is to consider each object separately, work out the forces on each object, and use Newton's Second Law $F=ma$ to relate the forces to the acceleration of the object.
Then you can see if you have enough information to determine the values of any unknown forces or accelerations. It might help if you draw a diagram for each object showing the forces acting just on that object - these are called "free body" diagrams. When the person and the elevator are stationary, we know there are two forces on the person: Gravity, which produces a force of $100$ Newtons downwards (by the way, $10$ kg is a very small person, but that is the figure you gave for their mass). The normal force from the floor of the lift - let's call this $N$ Newtons upwards. The person has an acceleration of $0$, so Newton's Second Law tells us that the net force on the person must be $0$. So $100-N=0$, and so we know that $N=100$ Newtons. Turning now to the elevator, there are three forces on the elevator: Gravity, which produces a force of $10000$ Newtons downwards. The normal force from the person, which is a force of $N$ Newtons downwards. We know that $N$ here has the same value as the normal force acting on the person, because Newton's Third Law tells us that if the lift exerts a force on the person then the person exerts an equal and opposite force on the lift. The tension in the wire, which we will call $T$ Newtons upwards. The elevator also has an acceleration of $0$, so we know that the net force on it must be $0$, so $T = 10000 + N$. But we know from our analysis of the person that $N=100$ Newtons. Therefore $T=10100$ Newtons. This makes intuitive sense, because the wire must support the weight of the elevator and the person. Exactly the same analysis is true if the elevator is moving at a constant velocity (because its acceleration and the person's acceleration are still zero).
However, if the elevator is accelerating upwards at an acceleration of $a$ metres per second squared, then the force equation for the person becomes: $N - 100 = 10a \\ \Rightarrow N=100+10a$ In other words, the normal force $N$ increases (this is why you feel heavier in an elevator that is accelerating upwards - what you feel is the increased normal force on your feet). And for the elevator we have $T - 10000 - N = 1000a \\ \Rightarrow T = 10000 + N + 1000a = 10100 + 1010a$ In other words the tension in the wire increases because it must now support the weights of the elevator and the person and provide enough additional force to accelerate them both upwards at an acceleration of $a$. Notice that it does not matter whether the velocity of the elevator is zero, upwards or downwards - it is only the acceleration that matters. Similarly, if the elevator is accelerating downwards, the normal forces and the tension in the wire will be reduced - but note that normal forces and tensions in wires cannot become negative. If we want to accelerate the elevator and the person downwards with an acceleration greater than $10$ m/s^2 then we would have to replace the wire with a stiff rod so that $T$ can act downwards, and we would have to give the person some means of gripping onto the floor so that $N$ can act downwards too.
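The free-body bookkeeping above reduces to two lines of algebra; here is a sketch with the answer's numbers (m_p = 10 kg, m_e = 1000 kg, g = 10 m/s^2):

```python
def elevator_forces(a, m_p=10.0, m_e=1000.0, g=10.0):
    """Normal force on the person and wire tension for an elevator
    accelerating upward at a m/s^2 (negative a = downward).
    From the answer: N - m_p*g = m_p*a, and T - m_e*g - N = m_e*a."""
    n = m_p * (g + a)        # normal force on the person
    t = m_e * (g + a) + n    # wire supports elevator + person and accelerates both
    return n, t

n0, t0 = elevator_forces(0.0)   # at rest, or moving at constant velocity
```

Note the formulas reproduce the answer's N = 100 + 10a and T = 10100 + 1010a, and only the acceleration (not the velocity) enters.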
{ "domain": "physics.stackexchange", "id": 70731, "tags": "newtonian-mechanics, forces, charge, acceleration" }
How do scientists decide which version of a polymorphism is the main one?
Question: This has in fact bugged me for years, but now I finally remembered to ask. I suppose that if one variant is more frequent, it can be labeled as the default, but what about variants that are equally common? In the example that I just saw, it seems that the frequency of both variants is the same, and yet somebody claims that G is the default and C is the polymorphism: Polymorphism with genotypes CC, CG and GG occurring in 30, 40 and 30% of the control group, respectively http://hell.org.pl/~kamyk/stackexchange/Zorena1.jpg Maybe it is possible to determine which one came first, but how? Is there ancient data about polymorphism frequencies? Answer: A DNA locus may have two (or more) variants (alleles), but there isn't one termed a main or default variant. In the example you cite, Myśliwska 2009, the only asymmetric distinction between the G and C alleles that I could see was in this passage: The polymorphic region −174G>C of IL-6 encoding gene is implicated in transcription of this cytokine. The G>C nucleotide substitution creates... This passage uses "substitution" (and "G>C" in −174G>C), which suggests that "G" is what is termed the ancestral variant, and "C" a later variant. The ancestral variant was the form that is believed to have been essentially the only form in some ancestral population; the later variant is a mutation (a substitution in this example) which became established in part of a later population. Not all polymorphisms have their variants characterized in this way. Those that do usually have DNA sequence evidence showing that species other than humans have a homologous gene, and those genes appear to have only the "ancestral" variant. For the locus in the example you cite, the human IL6 -174 locus, an early paper by Fishman et al.
(1998) says this: Considerable interethnic variation in the frequencies of these polymorphisms has been demonstrated (39), which is consistent with our data for the -174C allele, which is considerably rarer in Gujarati Indians and Afro-Caribbeans, compared with UK Caucasians. As all of the primates examined were GG homozygotes, it is likely that this allele is ancestral and that the C allele represents a relatively recent change in the IL-6 5' flanking sequence. This assessment seems to have become generally accepted. The rather even distribution of G and C variants in the table you show is likely due to the subjects being of European ancestry, as suggested by this from SNPedia: It [IL6 -174] tends to be quite polymorphic in Caucasians, but Asian and African populations are almost monomorphic (for the (G) allele).
{ "domain": "biology.stackexchange", "id": 3471, "tags": "human-genetics, nomenclature" }
Say you run qubit $A = (|0⟩ + |1⟩) / √2$ and qubit $B = |0⟩$ through a CNOT gate. What is the state of qubit $B$ afterwards?
Question: I am new to the weeds of quantum computing and this question is probably pretty elementary. Say you run qubit A = (|0⟩ + |1⟩) / √2 and qubit B = |0⟩ through a CNOT gate. What is the state of qubit B afterwards? Here is how I have tried to reason my way to an answer: The state of the system after the CNOT gate is (|00⟩ + |11⟩) / √2 The state of qubit A remains (|0⟩ + |1⟩) / √2 Here's where I feel like I'm doing something wrong. My intuition tells me if I have a global state ac|00⟩ + ad|01⟩ + bc|10⟩ + bd|11⟩, I should be able to deconstruct it into the states of the individual qubits to acquire A = a|0⟩ + b|1⟩ and B = c|0⟩ + d|1⟩ Using the above intuition and observations, I have a = 1/√2, b = 1/√2, ad = 0, bc = 0, ac = 1/√2, and bd = 1/√2. I'm looking for c and d. This system of equations is inconsistent. By the first four equations, it must be the case that c = d = 0. But by the last two, that can't be the case. Where did my thinking go wrong? Answer: Your first statement ("The state of the system after the CNOT gate is (|00⟩ + |11⟩) / √2") is correct. Your second ("The state of qubit A remains (|0⟩ + |1⟩) / √2") is not. Particle A does not have a definite state anymore. Neither does particle B. Previously, you could write the two-particle state as $\frac{|0\rangle+|1\rangle}{\sqrt 2}|0\rangle$, so the state factored and you could talk about "the state of A" and "the state of B". After the CNOT, this is no longer true. You can no longer talk about the "state of A", you can only talk about the state of the SYSTEM. This is because A and B are entangled. Punchline: There are some two-particle states that cannot be represented as $(a|0\rangle+b|1\rangle)(c|0\rangle+d|1\rangle)$. The state of your A and B particles after applying the CNOT is one of them.
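The inconsistency can also be seen numerically: apply the CNOT matrix and compute the Schmidt rank of the result (the rank of the 2×2 coefficient matrix, obtained via an SVD). Rank 1 means a product state exists; rank 2 means it does not. A small sketch, assuming NumPy is available:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

A = (ket0 + ket1) / np.sqrt(2)   # (|0> + |1>)/sqrt(2)
B = ket0                         # |0>
state = np.kron(A, B)            # product state A (x) B

# CNOT in the |00>, |01>, |10>, |11> basis (control = first qubit)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
out = CNOT @ state               # (|00> + |11>)/sqrt(2)

# Schmidt rank: number of nonzero singular values of the 2x2
# coefficient matrix c[i][j] of sum_ij c_ij |i>|j>.
svals = np.linalg.svd(out.reshape(2, 2), compute_uv=False)
schmidt_rank = int(np.sum(svals > 1e-12))  # 2 => entangled, no (c, d) exist
```

Running the same check on the input `state` gives rank 1, matching the fact that it factors as $A \otimes B$ before the gate.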
{ "domain": "physics.stackexchange", "id": 56687, "tags": "hilbert-space, quantum-information, quantum-entanglement, quantum-computer" }
Is there a term for a Venus visibility period
Question: Is there a term for the period of time from when Venus is first visible in the evening to when it switches to being the "morning star", or vice versa? For example, as depicted in the image below, from early Oct 2022 to late July 2023, Venus will be visible in the evening. I know the Mayans took a particular interest in the 8 different patterns produced (for where they were), but never found a word they used for the patterns, nor the time periods they represent. I'm not looking for a Mayan word specifically, just anything other than "the time when Venus is visible in the evening/morning this time around". Code to produce image above Answer: Looks like the most common word used for this is "apparition". I just saw this mentioned in Meeus' "Mathematical Astronomy Morsels IV", and searching for the term, it seems pretty common.
{ "domain": "astronomy.stackexchange", "id": 6745, "tags": "terminology, venus" }
Partition a multiset of numbers into two subsets, how to maximize the sum of their medians?
Question: Given a multiset $S$ of numbers, partition it into two subsets $S_1$ and $S_2$. How to maximize the sum of their medians? For example, the median of {1,2} is 1.5. I've found a greedy algorithm: $S_1$ contains the maximum of $S$ (if multiple, select one), and $S_2$ contains the rest of $S$. It's intuitively correct but I can't prove it. Answer: Your algorithm is correct. The following is its proof of correctness. Let $S_1$ and $S_2$ be an optimal partition of $S$. Let their medians be $m_1$ and $m_2$. Let the maximum element be $M$, and assume it belongs to $S_1$ (without loss of generality). Let $m$ be the median of $S_1 \cup S_2 \setminus \{M\}$. Then, we can show that the value $M + m$ is at least $m_1+m_2$. Proof: Merge the sets $S_1 \setminus \{M\}$ and $S_2$. Then, in $S_1 \cup S_2 \setminus \{M\}$, there are at least $\lfloor |S_1|/2 \rfloor + \lfloor |S_2|/2 \rfloor-1$ elements that are larger than $\min\{m_1,m_2\}$. Suppose there are at least $\lfloor |S_1|/2 \rfloor + \lfloor |S_2|/2 \rfloor$ elements larger than $\min\{m_1,m_2\}$; then we are done: since $\lfloor |S_1|/2 \rfloor + \lfloor |S_2|/2 \rfloor \geq \lfloor (|S_1|+|S_2|-1)/2 \rfloor$, the median $m$ of $S_1 \cup S_2 \setminus \{M\}$ has value at least $\min\{m_1,m_2\}$, and since $M \geq \max\{m_1,m_2\}$, we get $m+M \geq m_1 + m_2$. Hence proved. Therefore, let us assume that there are exactly $\lfloor |S_1|/2 \rfloor + \lfloor |S_2|/2 \rfloor-1$ elements that are larger than $\min\{m_1,m_2\}$. Let us consider some cases: Case 1: $|S_1|$ and $|S_2|$ are odd. In this case, there are always $\lfloor |S_1|/2 \rfloor + \lfloor |S_2|/2 \rfloor$ elements with value larger than $\min\{m_1,m_2\}$. So we are done. Case 2: $|S_1|$ and $|S_2|$ are even. Let $m_1 = \frac{(l_1+r_1)}{2}$ and $m_2 = \frac{(l_2+r_2)}{2}$ such that $l_1,r_1$ are the middle elements of $S_1$, and $l_2,r_2$ are the middle elements of $S_2$.
Note that in $S_1 \cup S_2 \setminus \{M\}$, there are at least $\lfloor |S_1|/2 \rfloor + \lfloor |S_2|/2 \rfloor$ elements larger than $\min\{l_1,l_2\}$. Therefore, observe that the median $m$ is at least $\max\{l_1,l_2\}$ or $\min\{r_1,r_2\}$. Since $M \geq \max\{r_1,r_2\}$, it is easy to see that $m+M$ is at least $\frac{(l_1+r_1)}{2} + \frac{(l_2+r_2)}{2} = m_1 + m_2$. Hence proved. Case 3: $|S_1|$ is odd and $|S_2|$ is even. Let $m_2 = \frac{(l_2+r_2)}{2}$ such that $l_2,r_2$ are middle elements of $S_2$. Note that the median of $S_1 \cup S_2 \setminus \{M\}$ is either at least $m_1$ or $\frac{(l_2+r_2)}{2}$ or $\frac{(l_2+m_1)}{2}$. Since $M \geq \max\{m_1,l_2,r_2\}$, it is easy to see that $m+M$ is at least $m_1+m_2$ for each of the possible values of $m$. Hence proved. Case 4: $|S_1|$ is even and $|S_2|$ is odd. It is similar to Case 3.
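For readers who want an empirical check of the greedy rule before (or instead of) working through the cases, it is easy to compare it against brute force over all partitions of small multisets. A sketch (`greedy_value` and `best_value` are invented helper names; the greedy puts only the maximum into $S_1$, as in the question):

```python
from itertools import combinations
from statistics import median
import random

def greedy_value(S):
    """Greedy: S1 = {max element}, S2 = the rest. Returns the sum of medians."""
    S = sorted(S)
    return S[-1] + median(S[:-1])

def best_value(S):
    """Brute force: best sum of medians over all partitions into two
    nonempty subsets."""
    n = len(S)
    best = float('-inf')
    for k in range(1, n):                       # size of S1
        for c in combinations(range(n), k):
            s1 = [S[i] for i in c]
            s2 = [S[i] for i in range(n) if i not in c]
            best = max(best, median(s1) + median(s2))
    return best

# Stress test on random small multisets: greedy should always match brute force.
random.seed(0)
for _ in range(50):
    S = [random.randint(0, 9) for _ in range(random.randint(2, 7))]
    assert greedy_value(S) == best_value(S)
```

For {1, 2, 3}, for instance, both report 3 + median({1, 2}) = 4.5, matching the proof's claim that no other partition does better.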
{ "domain": "cs.stackexchange", "id": 19557, "tags": "algorithms, greedy-algorithms, partitions" }
How does manure & other organic matter improve soil structure?
Question: How does manure & other organic matter improve soil structure? Based on my understanding, soil structure refers to the arrangement of the aggregates within the soil; a good soil structure prevents waterlogging, "nutrient-lockup" (and ultimately the death of the crops). However, I don't understand how manure can improve this structure to the benefit of the plants. In my textbook, it has been written, without any further explanation, that: Manure and other organic matter gives the soil a good structure and improves its water-holding properties. Conversely, artificial fertilizers do little to maintain a good soil structure because they contain no organic matter. Googling led me to the following paragraph, but again, it hardly explains how: Organic matter causes soil to clump and form soil aggregates (how though?), which improves soil structure. With better soil structure, permeability (infiltration of water through the soil) improves, in turn improving the soil's ability to take up and hold water. [I have a couple of theories as to how this might be, but I'd rather ask here and not mess up] Answer: Surface area Organic material in soil is finely divided and provides a large surface area to mass ratio. Clay particles have a similarly large surface area, but most clays stick to each other because of their micro-structure and chemistry, making soils less permeable and prone to over-compaction. Organic matter, regardless of its ultimate source, is the opposite. Its surface structure keeps the individual particles separated, and its chemistry actively encourages water to infiltrate between, and in fact through, its particles. Well-rotted organic matter therefore creates a highly reactive surface on which vital soil chemical reactions can take place and opens the structure of the soil column to greater water movement.
The surface area created by soil organic matter also hosts soil bacteria and fungi that help to bind soil particles together into larger aggregates, and facilitates semi-permanent chemical binding by creating zones of interaction between water with dissolved minerals and air in the soil.
{ "domain": "earthscience.stackexchange", "id": 2375, "tags": "soil, agriculture, human-influence, soil-science" }
C++ Snake Game, rewritten based on a C implementation
Question: I have a C# background and have mostly worked on web applications. Recently I wanted to learn C++. I read many online resources and inspected dozens of source files shared on Stack Overflow / Code Review or on C++ forums. During my research I came across this post: Snake game in C++, and based on an answer to it I decided to try to rewrite it in an object-oriented manner. I have many questions about the language itself, but I think working on a project is a good way to learn any language's dynamics. I'd like to present this source to you, hoping to have pointed out what I did wrong and what I did correctly. My thought process was to keep it as standard-compliant as possible. _kbhit() and _getch() are platform specific, but I didn't want to refactor them yet, as it was already hard work for me to refactor this much. To keep it standard-compliant I decided to use a base Renderer class and inherit a Win32ConsoleRenderer class from it to render on the Windows console. My thought was that if someone else wants to port this to Linux or Mac, they'd simply need to create an XXRenderer class based on Renderer and compile it without issues. I tried to avoid using raw pointers and used std::unique_ptr for my pointer needs; however, I'm not sure if I used it correctly or not. Below is the entire project's source. I'm really eager to learn C++ and would like to know what areas I should be looking to improve and how much of my thought process was actually correct in my approach. Initially I wanted to pass a "mapData" reference to my renderer, but then I couldn't manage to initialize a std::vector<std::vector<int>>& in the base Renderer, so I decided to use a pointer container instead. Also, I'm having a hard time getting a grasp of when to use references and when to use pointers (hence my struggle with Renderer.uptrMapData).
One last problem I solved but didn't understand what went wrong: my initial thought was that if I avoid raw pointers and use std containers, I wouldn't need to worry much about "delete"ing those resources. However, if I remove this line from GameEngine.cpp: if (p_renderer != nullptr) { p_renderer.release(); } my program crashes on quit (after Renderer's destructor is called, to be more specific) with a "can't delete incomplete object" exception being thrown in the memory library. My tests confirmed that the destructors are called in the proper order (i.e. Win32ConsoleRenderer's destructor first, and then Renderer's destructor), but I didn't quite understand what object would be left incomplete in that situation. My development environment is Visual Studio 2017 and I used its compiler. Snake.h #ifndef SNAKE_H #define SNAKE_H namespace SnakeGame { struct Snake { unsigned headX; unsigned headY; unsigned bodyLength; int currentDirection; void move(int dX, int dY) { headX += dX; headY += dY; } }; } #endif Shared.h #ifndef SHARED_H #define SHARED_H namespace SnakeGame { namespace Shared { constexpr unsigned MAP_WIDTH = 40; constexpr unsigned MAP_HEIGHT = 30; constexpr unsigned FPS = 10; enum MapTile { wall = -2, food, walkable, snakeBody }; enum Direction { up, right, down, left }; } } #endif Map.h #ifndef MAP_H #define MAP_H #include <vector> namespace SnakeGame { class Map { public: Map(unsigned mapWidth, unsigned mapHeight); void initalizeMapData(); void generateFood(); void clearFood(); //Methods int getMapValue(unsigned x, unsigned y); void setMapValue(unsigned x, unsigned y, int val); void clearSnakeTiles(); //Accessors const std::vector<std::vector<int>>& getMapData() const; private: unsigned m_width, m_height; unsigned m_lastFoodX, m_lastFoodY; std::vector<std::vector<int>> m_mapData; }; } #endif Map.cpp #include "Map.h" #include "Shared.h" SnakeGame::Map::Map(unsigned mapWidth, unsigned mapHeight) : m_width(mapWidth), m_height(mapHeight), m_lastFoodX(0), m_lastFoodY(0),
m_mapData(mapHeight, std::vector<int>(mapWidth, 0)) { } void SnakeGame::Map::initalizeMapData() { for (unsigned i = 0; i < m_width; i++) { //all columns of the first row is wall. m_mapData[0][i] = Shared::MapTile::wall; //all columns of the last row is wall. m_mapData[m_height - 1][i] = Shared::MapTile::wall; } for (unsigned i = 0; i < m_height; i++) { //first column of each row is wall. m_mapData[i][0] = Shared::MapTile::wall; //last column of each row is wall. m_mapData[i][m_width - 1] = Shared::MapTile::wall; } } void SnakeGame::Map::generateFood() { unsigned x, y; do { x = rand() % (m_width - 2) + 1; y = rand() % (m_height - 2) + 1; } while (m_mapData[y][x] != Shared::MapTile::walkable); m_mapData[y][x] = Shared::MapTile::food; m_lastFoodX = x; m_lastFoodY = y; } void SnakeGame::Map::clearFood() { m_mapData[m_lastFoodY][m_lastFoodX] = Shared::MapTile::walkable; } int SnakeGame::Map::getMapValue(unsigned x, unsigned y) { if (x >= m_width) x = m_width - 1; if (y >= m_height) y = m_height - 1; return m_mapData[y][x]; } void SnakeGame::Map::setMapValue(unsigned x, unsigned y, int val) { m_mapData[y][x] = val; } const std::vector<std::vector<int>>& SnakeGame::Map::getMapData() const { return m_mapData; } void SnakeGame::Map::clearSnakeTiles() { for (unsigned y = 0; y < m_height; y++) { for (unsigned x = 0; x < m_width; x++) { if (m_mapData[y][x] > Shared::MapTile::walkable) { m_mapData[y][x]--; } } } } GameEngine.h #ifndef GAMEENGINE_H #define GAMEENGINE_H #include "Map.h" #include "Snake.h" #include <chrono> #include "Win32ConsoleRenderer.h" namespace SnakeGame { class GameEngine { typedef std::chrono::milliseconds ms; typedef std::chrono::high_resolution_clock clock; public: explicit GameEngine(Core::Renderer* const pRenderer); ~GameEngine(); //Methods void run(); //Accessors unsigned getScore() const { return m_score; } private: bool m_running; int m_score; int m_msPerFrame; Map m_map; Snake m_snake; std::unique_ptr<Core::Renderer> p_renderer; //Methods void 
processInput(); void update(); void draw() const; }; } #endif GameEngine.cpp #include <conio.h> #include <thread> #include "GameEngine.h" #include "Shared.h" SnakeGame::GameEngine::GameEngine(Core::Renderer* const pRenderer) : m_running(false), m_score(0), m_msPerFrame(1000 / Shared::FPS), m_map(Shared::MAP_WIDTH, Shared::MAP_HEIGHT), m_snake{ Shared::MAP_WIDTH / 2, Shared::MAP_HEIGHT / 2, 3, 0 }, p_renderer(pRenderer) { if (p_renderer != nullptr) { p_renderer->setMapData(m_map.getMapData()); } } SnakeGame::GameEngine::~GameEngine() { if (p_renderer != nullptr) { p_renderer.release(); } } void SnakeGame::GameEngine::run() { //initialize map walls m_map.initalizeMapData(); //place snake object at it's initialized position (center of the map) m_map.setMapValue(m_snake.headX, m_snake.headY, Shared::MapTile::snakeBody); //generate first food on the map. m_map.generateFood(); //set game state to running m_running = true; //main game loop while (m_running) { auto start = clock::now(); //process user input processInput(); //update game objects & conditions update(); //draw (render) the scene. 
draw(); auto sleep = std::chrono::duration_cast<ms>(start + ms(m_msPerFrame) - clock::now()); std::this_thread::sleep_for(sleep); } } void SnakeGame::GameEngine::processInput() { //check if there is a keyboard interrupt if (_kbhit()) { auto key = static_cast<char>(_getch()); switch (key) { case 'w': if (m_snake.currentDirection != Shared::Direction::down) { m_snake.currentDirection = Shared::Direction::up; } break; case 's': if (m_snake.currentDirection != Shared::Direction::up) { m_snake.currentDirection = Shared::Direction::down; } break; case 'a': if (m_snake.currentDirection != Shared::Direction::right) { m_snake.currentDirection = Shared::Direction::left; } break; case 'd': if (m_snake.currentDirection != Shared::Direction::left) { m_snake.currentDirection = Shared::Direction::right; } break; default:; } } } void SnakeGame::GameEngine::update() { //update snake position switch (m_snake.currentDirection) { case Shared::Direction::up: m_snake.move(0, -1); break; case Shared::Direction::right: m_snake.move(1, 0); break; case Shared::Direction::left: m_snake.move(-1, 0); break; case Shared::Direction::down: m_snake.move(0, 1); break; default:; } int currentMapValue = m_map.getMapValue(m_snake.headX, m_snake.headY); //check if we hit a food if (currentMapValue == Shared::MapTile::food) { //increase snake body length m_snake.bodyLength++; //clear current food m_map.clearFood(); //generate new food. 
m_map.generateFood(); //update score m_score += 10; } else if (currentMapValue != Shared::MapTile::walkable) { m_running = false; } m_map.clearSnakeTiles(); m_map.setMapValue(m_snake.headX, m_snake.headY, m_snake.bodyLength); } void SnakeGame::GameEngine::draw() const { if (m_running) { p_renderer->render(Shared::MAP_WIDTH, Shared::MAP_HEIGHT); } else { p_renderer->clearScreen(); } } Renderer.h #ifndef RENDERER_H #define RENDERER_H #include <vector> #include <memory> namespace SnakeGame { namespace Core { class Renderer { public: Renderer(); virtual ~Renderer(); virtual void setMapData(const std::vector<std::vector<int>>& mapData) = 0; virtual void render(unsigned sceneWidth, unsigned sceneHeight) = 0; virtual void clearScreen() = 0; protected: std::unique_ptr<const std::vector<std::vector<int>>> uptrMapData; }; } } #endif Renderer.cpp #include "Renderer.h" SnakeGame::Core::Renderer::Renderer() : uptrMapData(nullptr) { } SnakeGame::Core::Renderer::~Renderer() { if(uptrMapData != nullptr) { uptrMapData.release(); } } Win32ConsoleRenderer.h #ifndef WIN32_CONSOLE_RENDERER_H #define WIN32_CONSOLE_RENDERER_H #define NOMINMAX #define WIN32_LEAN_AND_MEAN #include <vector> #include <Windows.h> #include "Shared.h" #include "Renderer.h" namespace SnakeGame { namespace Core { class Win32ConsoleRenderer : public Renderer { public: Win32ConsoleRenderer(); ~Win32ConsoleRenderer(); void setMapData(const std::vector<std::vector<int>>& mapData) override; void render(unsigned sceneWidth, unsigned sceneHeight) override; void clearScreen() override; private: bool m_buffered; std::vector<std::vector<int>> m_mapDataCache; static HANDLE m_outputHandle; static void setWindowSize(); static void setCursorPosition(unsigned x, unsigned y); static void hideCursor(); static char mapValueToChar(Shared::MapTile mapValue); void initializeBuffer(); }; } } #endif Win32ConsoleRenderer.cpp #include "Win32ConsoleRenderer.h" #include <iostream> HANDLE 
SnakeGame::Core::Win32ConsoleRenderer::m_outputHandle = nullptr; SnakeGame::Core::Win32ConsoleRenderer::Win32ConsoleRenderer() : m_buffered(false), m_mapDataCache({}) { m_outputHandle = GetStdHandle(STD_OUTPUT_HANDLE); Win32ConsoleRenderer::clearScreen(); setWindowSize(); hideCursor(); } SnakeGame::Core::Win32ConsoleRenderer::~Win32ConsoleRenderer() { if (this->uptrMapData != nullptr) { this->uptrMapData.release(); } } void SnakeGame::Core::Win32ConsoleRenderer::setMapData(const std::vector<std::vector<int>>& mapData) { uptrMapData.reset(&mapData); m_mapDataCache.resize(uptrMapData->size()); initializeBuffer(); } void SnakeGame::Core::Win32ConsoleRenderer::render(unsigned sceneWidth, unsigned sceneHeight) { for (unsigned y = 0; y < sceneHeight; y++) { for (unsigned x = 0; x < sceneWidth; x++) { auto currentMapValue = this->uptrMapData->at(y)[x]; auto cachedMapValue = m_mapDataCache[y][x]; if (currentMapValue == cachedMapValue) continue; this->m_mapDataCache[y][x] = currentMapValue; setCursorPosition(x, y); std::cout << mapValueToChar(static_cast<Shared::MapTile>(currentMapValue)); } } std::cout.flush(); } void SnakeGame::Core::Win32ConsoleRenderer::setWindowSize() { HWND windowHandle = GetConsoleWindow(); RECT r; GetWindowRect(windowHandle, &r); MoveWindow(windowHandle, r.left, r.top, Shared::MAP_WIDTH * 10, Shared::MAP_HEIGHT * 20, TRUE); } void SnakeGame::Core::Win32ConsoleRenderer::clearScreen() { CONSOLE_SCREEN_BUFFER_INFO bufferInfo; COORD topLeft{ 0,0 }; std::cout.flush(); if (!GetConsoleScreenBufferInfo(m_outputHandle, &bufferInfo)) { std::cout << "BUFFER ERROR" << std::endl; } DWORD length = bufferInfo.dwSize.X * bufferInfo.dwSize.Y; DWORD written; FillConsoleOutputCharacter(m_outputHandle, TEXT(' '), length, topLeft, &written); FillConsoleOutputAttribute(m_outputHandle, bufferInfo.wAttributes, length, topLeft, &written); SetConsoleCursorPosition(m_outputHandle, topLeft); } void SnakeGame::Core::Win32ConsoleRenderer::setCursorPosition(unsigned x, unsigned 
y) { COORD coord{ static_cast<short>(x), static_cast<short>(y) }; SetConsoleCursorPosition(m_outputHandle, coord); } void SnakeGame::Core::Win32ConsoleRenderer::hideCursor() { CONSOLE_CURSOR_INFO cursorInfo{ 100,FALSE }; SetConsoleCursorInfo(m_outputHandle, &cursorInfo); } char SnakeGame::Core::Win32ConsoleRenderer::mapValueToChar(Shared::MapTile mapValue) { switch (mapValue) { case Shared::MapTile::wall: return '='; case Shared::MapTile::food: return '@'; case Shared::MapTile::snakeBody: return 'o'; case Shared::MapTile::walkable: return ' '; default: return 'o'; } } void SnakeGame::Core::Win32ConsoleRenderer::initializeBuffer() { if (!m_buffered) { for (unsigned int y = 0; y < m_mapDataCache.size(); y++) { auto colSize = this->uptrMapData->at(y).size(); m_mapDataCache[y].resize(colSize); } m_buffered = true; } } main.cpp #include "GameEngine.h" #include <iostream> int main() { //Create renderer SnakeGame::Core::Win32ConsoleRenderer renderer; //Create game engine and pass renderer to it. SnakeGame::GameEngine snakeGame(&renderer); //run the game. snakeGame.run(); std::cout << "== Game over! ==" << std::endl; std::cout << "Your score: " << snakeGame.getScore() << std::endl; std::cin.ignore(); return 0; } Answer: One problem I see is with your use of unique_ptr for p_renderer in GameEngine. Fundamentally, you have a problem with ownership of the Renderer object. Assigning a pointer to a unique_ptr variable is a transfer of ownership of that pointer to the variable. Once that happens, the original pointer (or object) should not call delete or the destructor for it. The GameEngine constructor is assuming that it should take ownership of the Renderer pointer it is given, while the caller of the constructor (main) keeps ownership (since the pointer is to a local, stack based variable). The fix is to change the definition of p_renderer to be just a pointer (Core::Renderer *p_renderer;) and not do anything with it in the destructor. 
Leave it up to the caller to clean up (delete) the pointer.
{ "domain": "codereview.stackexchange", "id": 25998, "tags": "c++, beginner, snake-game" }
Is there a book on gazebo?
Question: Hi all, I am working on a TurtleBot with Gazebo. However, I am looking for more documentation on Gazebo with ROS using TurtleBots as the ground robots. So far, I have not gotten far in finding a good book on it. Please help. Originally posted by Vinh K on Gazebo Answers with karma: 17 on 2016-09-01 Post score: 1 Answer: As far as I know, no such book exists. There will be mentions of Gazebo in various ROS books, but no book solely on Gazebo. Originally posted by Peter Mitrano with karma: 768 on 2016-09-02 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Vinh K on 2016-09-02: I see. What is the best way to learn Gazebo so one can incorporate it with ROS? Comment by chapulina on 2016-09-02: I think the best way to learn about Gazebo is following the tutorials, they're constantly updated. http://gazebosim.org/tutorials Comment by shpower on 2016-09-04: In addition to tutorials there are two good books: 'ROS BY EXAMPLE' vol.2 (vol.1 is good for TurtleBot, but without Gazebo) and 'Mastering ROS for Robotics Programming'
{ "domain": "robotics.stackexchange", "id": 3981, "tags": "gazebo" }
Equation for heat dissipation from a flat surface (ignoring convection)
Question: As part of a school project, I would like to calculate whether the efficiency of the Peltier effect is greater, less, or equal at higher voltages. In other words, I'm testing if the temperature differential as a function of voltage is linear. For thoroughness, instead of testing the temperature of both sides of the plate directly, I would like to calculate the power output/drain of both sides of the plate. To do this, I need the equation for heat dissipation from a flat, uninsulated surface into air, given the temperature of the plate, the ambient temperature, and the conductivity of the plate material. I will set up a fan to blow air across the plate to ensure that the air around the plate remains at ambient temperature, so convection currents can (I think) be ignored. NOTE: If you have any suggestions regarding the experiment, I will be happy to hear them, but the question still primarily concerns the equation itself. Answer: There is no good theoretical model for the heat dissipation into air. You could work with really large heat sinks, but even that will be hard to model with any precision. What you are trying to do is called calorimetry and it is usually done with large thermal baths that are well insulated from the environment. For your purposes I would suggest a large water bath on one side of the Peltier element. If you fill it with water and ice and keep stirring, the temperature will stay constant. The thermal connection between the water and the Peltier should be done with a finned heat sink and the water needs to be agitated. If you use one of the pre-assembled Peltier elements, you don't have to work too hard on this. You probably already know that they look something like this: http://www.mpja.com/images/15312.jpg Putting one side of this assembly into a water bath is going to make a much more reliable heat transfer than air cooling.
You could also use a copper or thick-walled stainless steel pot with an even bottom, if you can borrow one from your Mom's kitchen. Bolting the Peltier to the bottom might be a bit harder than with an industrial heat sink that you can drill into or that already comes with the Peltier attached. You can probably come up with other ways to make a good thermal connection to the ice-water. That's part of the fun... there is always more than one solution. The other side of your element should be thermally connected to a well insulated thermal mass with very good thermal conductivity. Ideally you could use a copper or aluminum block. This thermal mass has to be well insulated with e.g. styrofoam. While you are doing your experiment, you will be "pumping" heat in and out of this second thermal mass while monitoring its temperature with a thermometer or thermo-element, if you have access to electronic measurement equipment. Curiously, people are even selling aluminum blocks for this kind of purpose... see e.g. http://www.capitolscientific.com/Benchmark-Scientific-BSW01-Digital-Dry-Bath-Solid-Aluminum-Heating-Block-for-Microscope-Slides-or. OK, that's a bit pricey for what you are trying to do... try to find a chunk of metal with an even surface for less money. Saving cost is also one of the things that a scientist has to learn. Use a thermal compound when connecting the Peltier to the heat sink and your thermal mass to ensure that the heat flow is as good as possible. The white aluminum oxide paste kind is cheap, just be careful not to get it on your clothes (wear a lab coat as a proper science special effect, anyway!). Now that you have one side of your Peltier on a constant temperature stabilized by the melting ice you can focus on the temperature change on the other side. This makes your life much easier.
You can still model the entire system dynamically with thermal resistances and multiple thermal masses, if you want to, but if you choose a large enough thermal mass for your measurement, so that the thermal mass of the Peltier doesn't matter much, then this won't be necessary. Moreover, your system automatically resembles a near-optimal application. In real life the device will probably never perform nearly as well as it will on the water bath, i.e. your measurement will already approximate the best-case scenario.
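Once the insulated block is in place, the measurement itself reduces to simple calorimetry: the heat pumped into (or out of) the block over an interval is $Q = m\,c\,\Delta T$, and the average thermal power is $Q/t$. A minimal sketch (the numerical values below are illustrative assumptions, not values from this answer):

```python
# Estimate the heat pumped by the Peltier from the temperature change of the
# insulated metal block: Q = m * c * dT, average power P = Q / t.
# All numbers here are illustrative assumptions.

m = 0.5          # kg, mass of the aluminum block
c = 897.0        # J/(kg*K), specific heat of aluminum
dT = 4.2         # K, measured temperature change of the block
t = 120.0        # s, duration of the measurement interval

Q = m * c * dT   # heat added to (or removed from) the block, in joules
P = Q / t        # average thermal power pumped by the Peltier, in watts
```

Repeating this at several drive voltages gives the pumped power as a function of voltage, which is exactly the quantity the question's linearity test needs.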
{ "domain": "physics.stackexchange", "id": 27310, "tags": "thermodynamics, thermoelectricity" }
Type inference with overloading
Question: I am working on a type system supporting overloading. I have a rough idea of how type inference is usually implemented in such a scenario, but I am wondering how - after type inference is completed - the correct implementation of an overloaded operator can be chosen. Or, in other words, how the inferred type can be passed back down the syntax tree to the operator. For a small example, consider the expression (x + y) + 1 where x :: N | S, y :: a, + :: (N -> N -> N) | (S -> S -> S), 1 :: N. :: stands for type of, and a | b stands for type a or type b. The way, I assume, type inference would now work, is to traverse the syntax tree, and for each node return a type constraint: (x + y) + 1 => ((N & (N[a=N] | S[a=S])), (N & N) -> N) | ((S & (N[a=N] | S[a=S])), (S & N) -> S) => N[a=N] 1 => N + => (N -> N -> N) | (S -> S -> S) x + y => ((N & (N | S)), (N & a) -> N) | ((S & (N | S)), (S & a) -> S) => N[a=N] | S[a=S] x => N | S y => a + => (N -> N -> N) | (S -> S -> S) a & b in this example stands for unifying the types a and b, [a=T, b=U] is a set of equality constraints for type variables. As expected the return type of the given expression is inferred as N[a=N], that is N where the type variable a is expected to be N. Therefore, of the two provided implementations for the + operator (N -> N -> N, S -> S -> S), N -> N -> N should be used. In the given example, the resulting type is inferred, but not the type of the overloaded operator. My question is if there is a common pattern that is used to inform the + node in the syntax tree of the used implementation. Answer: You could organize type inference as follows. Suppose your input syntax has type Input. Define output syntax Output to be like Input but with explicit type annotations on all variables. The type inference would have type infer : Input -> List (Output * Type) That is, given some input e, it returns a list of possible answers. 
Each answer is a pair (e', t) where e' is e with variables annotated by types, and t is the inferred type of e. You can view this as all happening in the nondeterminism monad. Whenever you need to infer the type of a variable x, you look up its possible types S | T | ... and branch on each one of them. This way you do not have to "pass back" any information to sub-expressions. Instead, each sub-expression already comes annotated, in all possible ways.
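The answer's scheme can be sketched concretely. The toy below models the question's example (types N and S, an overloaded +, and the literal 1 treated as a name of type N); the type variable a is modeled by letting y range over both concrete types, and all names are invented for illustration:

```python
# Nondeterministic type inference for a tiny expression language:
# an expression is either a variable name (str) or a tuple (op, lhs, rhs).
# infer() returns a list of (annotated_expr, type) pairs -- one entry per
# consistent way of resolving variables and overloads, so each answer
# already records which overload every operator uses.

def infer(expr, env, overloads):
    """Return all (annotated_expr, type) answers for expr."""
    if isinstance(expr, str):
        # Variable: branch on each of its possible types.
        return [((expr, t), t) for t in env[expr]]
    op, lhs, rhs = expr
    results = []
    for l_ann, l_t in infer(lhs, env, overloads):
        for r_ann, r_t in infer(rhs, env, overloads):
            # Branch on each overload whose argument types match.
            for arg1, arg2, ret in overloads[op]:
                if (l_t, r_t) == (arg1, arg2):
                    results.append(((op, l_ann, r_ann), ret))
    return results

env = {'x': ['N', 'S'], 'y': ['N', 'S'], '1': ['N']}
overloads = {'+': [('N', 'N', 'N'), ('S', 'S', 'S')]}
expr = ('+', ('+', 'x', 'y'), '1')

answers = infer(expr, env, overloads)
# Only one annotation survives: x::N, y::N, and both + resolved at N -> N -> N.
```

Because each surviving answer carries its fully annotated tree, reading off which implementation of + a node uses is just a matter of inspecting that node's argument annotations, with no second pass down the tree.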
{ "domain": "cs.stackexchange", "id": 15847, "tags": "programming-languages, type-theory, type-inference" }
How do biologists quantify "gene expression" in experiments?
Question: I've read papers which contain statements such as "control of gene expression is critical in biological processes". How exactly does one quantify "gene expression"? Isn't gene expression an umbrella term describing all of the mechanisms by which DNA is synthesized into an organism's phenotype? Answer: The primary products of protein-coding genes are mRNAs. When we talk about measuring gene expression, we want to assay the steady-state levels of a specific mRNA within a cell. This is usually accomplished by starting with a large number of cells and harvesting all of the mRNAs from all of the cells. One way to measure the expression level of just one gene's mRNA is to perform a Northern blot. Other sensitive methods include an S1 nuclease protection assay, an RNase protection assay, and a primer extension assay. Microarrays have also been used extensively to measure expression levels of thousands of genes at the same time in a single experiment. With the advent of RNA-Seq methodology it is possible to count the number of transcripts in an experiment (if you have a sequenced reference genome).
{ "domain": "biology.stackexchange", "id": 5459, "tags": "genetics, gene-expression, genomics, gene" }
What Is the Complexity Class of Deciding Whether a Problem Is in NP? Is It Decidable?
Question: Title says it all, but to clarify: Define a problem, called $IsInNP$, as follows: Given a Turing Machine $M$ that always halts, $IsInNP$ is the problem of deciding if the problem that $M$ recognizes is in $NP$. What is the complexity class of $IsInNP$? Is it even decidable? Is the answer the same for any other complexity class, like $NP$-hard? And are those questions even sensible to ask? By the way, I am aware that the class $NP$ is not enumerable, but since I do not quite understand enumerability and it seems that recursively enumerable problems can be decidable, I do not know if that means that deciding whether a problem is in $NP$, or any other complexity class, is decidable. Also, I am aware of Rice's Theorem, and I believe it can be interpreted as saying that deciding whether a problem is in $NP$ is undecidable, but I am not certain. Bonus question if the above questions are sensible: given a property $S$ that only $NP$ problems possess, does the above also mean that deciding whether a problem decided by a Turing Machine $M_2$ has property $S$ is in the same complexity class as $IsInNP$? Answer: Here are several interpretations of your question: $L_1$ consists of all descriptions of Turing machines which are deciders (always halt) and the language that they decide is in NP. This is undecidable, by reduction from the halting problem. Given a Turing machine $T$, construct a new Turing machine $M$ which erases its input and transfers control to $T$. Then $\langle M \rangle \in L_1$ iff $T$ halts on the empty input. Indeed, if $T$ halts then $L(M) = \Sigma^*$, while if $T$ doesn't halt, then $M$ is not a decider. $L_2$ consists of all descriptions of Turing machines which are either not deciders, or the language that they decide is in NP. This is undecidable, by reduction from the halting problem.
Given a Turing machine $T$, construct a new Turing machine $M$ which simulates $T$ on the empty tape, and if $T$ halts, solves some NEXP-complete problem on the original input. Then $\langle M \rangle \in L_2$ iff $T$ doesn't halt on the empty input. Indeed, if $T$ halts on the empty input then $M$ decides an NEXP-complete problem, which by the nondeterministic time hierarchy theorem doesn't belong to NP. In contrast, if $T$ doesn't halt on the empty input, then $M$ is not a decider. $L_3$ is the promise problem in which the input is a description of a Turing machine which is promised to be a decider, and the goal is to determine whether the language it decides is in NP. Alternatively, you can ask whether there exists any decidable language $L_4$ such that: the description of any machine deciding a language in NP belongs to $L_4$, and the description of any machine deciding a language not in NP doesn't belong to $L_4$. This is undecidable, in the sense that there is no Turing machine that halts and answers YES on all YES instances, and halts and answers NO on all NO instances. Given a Turing machine $T$, construct a new Turing machine $M$ with two inputs $n,x$. The Turing machine simulates $T$ for $n$ steps. If $T$ halts within these $n$ steps, then it solves some NEXP-complete problem on $x$. Otherwise, it simply returns YES. The new machine $M$ is a YES instance of $L_3$ iff $T$ doesn't halt on the empty input. Indeed, if $T$ halts on the empty input then there exists an NEXP-complete language $L$ and an integer $n$ such that $(n,x) \in L(M)$ iff $x \in L$, and so $L(M) \notin \mathsf{NP}$. In contrast, if $T$ doesn't halt on the empty input then $L(M)$ consists of all pairs $(n,x)$, and so $L(M) \in \mathsf{NP}$.
{ "domain": "cs.stackexchange", "id": 14585, "tags": "complexity-theory, computability" }
gazebo urdf tutorial robots not visible
Question: Hello, I am virtualizing in virtual box a copy of ubuntu 11.10 for a robotics class. I have ros electric installed. I have 3d acceleration enabled. My problem is whenever I load a model from the urdf tutorials into gazebo, such as the visual robot, it is not rendered. Either nothing is rendered or only the root link is rendered. Simple objects such as the tables or primitives are rendered. Originally posted by avatarofwill13 on ROS Answers with karma: 11 on 2012-04-27 Post score: 1 Original comments Comment by hsu on 2012-04-28: Please be more specific when asking a question, i.e. reference models you are having trouble with. In general, if you would like to dynamically simulate things in gazebo, you need to add physical properties (e.g. inertial elements), see http://ros.org/wiki/urdf/Tutorials/Adding%20Physical%20and% Comment by avatarofwill13 on 2012-04-28: One of the models was called visual, which was my reference. its the end result of the building a visual robot model in urdf from scratch. its in the urdf_tutorial package with the model name 05-visual.urdf. Do all links need physical properties to be visible or only the main link? Comment by hsu on 2012-04-28: all links. Answer: The models in the URDF tutorials are not meant for Gazebo, only RViz. Originally posted by David Lu with karma: 10932 on 2012-04-27 This answer was ACCEPTED on the original site Post score: 3
{ "domain": "robotics.stackexchange", "id": 9166, "tags": "gazebo, urdf" }
How to quantify phase distortion of a filter specific to a signal?
Question: For example, an EKG input signal is filtered with a known transfer function that does not have linear phase. How do you quantify the phase distortion of the output signal given you have the input? I know that the phase response of the transfer function gives the phase delay at all frequencies but I'm not sure how to make use of this. Answer: For a phase distortion metric I recommend using “group delay variation”. The definition of Group Delay is the negative derivative of phase with respect to frequency. The Group Delay is the delay in time that a “group” of signals over a band of frequencies would have. A frequency response that is linear in phase (constant group delay with no variation) is not considered a distortion since all the frequency components would have the same delay so no actual distortion of the signal results. When the phase is not linear versus frequency (as in your plots) different frequency components of the signal arrive at different times at the output of the system, which can result in considerable distortion. Group Delay variation can be quantified as peak variation, peak to peak, or rms as in any other distortion metric. As MattL points out in the comments below, a more comprehensive metric would be deviation from linear phase, where linear phase strictly means proportional to frequency with no phase offset term. If a phase offset exists, it would not appear in the result for the group delay computation yet indeed contribute to a distortion due to a varying delay versus frequency (the Hilbert Transformer is an excellent example of this: in order for all frequency components to have a 90° relationship with the input waveform, each component must have a different delay). For further details on this see Matt's answer here: Group Delay for Hilbert Transformer and Resulting Dispersion
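As a concrete illustration (a hypothetical sketch assuming numpy, not part of the original answer), group delay can be computed directly from its definition as the negative derivative of unwrapped phase. For a linear-phase FIR filter the result is constant, i.e. zero group-delay variation and hence no phase distortion:

```python
import numpy as np

h = np.array([0.25, 0.5, 0.25])        # a linear-phase FIR impulse response
w = np.linspace(0.01, 3.0, 500)        # frequencies in rad/sample (0 < w < pi)

# Frequency response H(e^{jw}) = sum_k h[k] * e^{-j*w*k}
H = np.exp(-1j * np.outer(w, np.arange(len(h)))) @ h
phase = np.unwrap(np.angle(H))

# Group delay: negative derivative of phase with respect to frequency
group_delay = -np.diff(phase) / np.diff(w)

# For this filter the delay is (N-1)/2 = 1 sample at every frequency,
# so the peak-to-peak group-delay variation is (numerically) zero.
variation = group_delay.max() - group_delay.min()
```

For a filter with nonlinear phase, the same `variation` quantity (or its rms counterpart) gives the distortion metric the answer describes.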
{ "domain": "dsp.stackexchange", "id": 8130, "tags": "phase, filtering, digital, non-linear, distortion" }
dynamic pseudo-code for simplified coin changing algorithm
Question: As a homework exercise our professor presented to us a simplified version of the coin-changing problem in which we do not need to minimize the number of coins used or track the number of possible combinations. Instead we need only to determine if a certain subset of coins, of the same or varying denominations, sums exactly to some amount M. My recursion equation of the problem is as follows: let k = 0 If ∑ (v1 + v2 + … + vn) = M return true If ∑ (v1 + v2 + … + vn) < M return false Else Increment k by 1 make recursive call on coin subset {v1, v2, … v(n-k)} The next question is to convert this into pseudo-code that represents a dynamic programming solution to this problem. I'm a bit stuck. If you were to make a table of every possible sum {v1 + v2}, {v1 + v2 + v3}, {v1 + v2 + v3 + v4}, etc., you could eventually find a solution, but wouldn't that be a much less efficient brute-force approach? I assume the solution would include some implementation of memoization or some way to store and retrieve sums already encountered, but given that the problem is not an optimization problem, I'm having a hard time envisioning a dynamic solution. Many of the dynamic programming examples that I see suggest iteratively subtracting elements, one item at a time, which is similar to the composition of my recursion equation, but I don't see a modification that would make this pseudocode dynamic in nature. If anyone could please enlighten me, I'd be very grateful. Answer: Given an integer $M>0$ and $n$ coins with integer values $v_1, v_2,\cdots, v_n$, where $M, n, v_i$ are positive, how can we determine if some of the coins can add up to $M$? This problem is a common version of the subset sum problem. There are many strategies to solve it. Here is pseudocode for a dynamic programming solution that is about as simple as possible. Let $s$ be an array of size $M+1$ with default value 0 and starting index 0. Let $s[0]=1$.
For $i$ from 1 to $n$, do: For $j$ from $M-v_i$ down to 0, do: If $s[j]=1$, let $s[j+v_i]=1$. (Iterating $j$ downward ensures that coin $i$ is used at most once.) If $s[M]=1$, return yes. Otherwise, return no. Here is an example to illustrate the algorithm. Let $M=7$. Coins are $1,3,3,5$. $s=[1,0,0,0,0,0,0,0]$. Here is $s$ at the end of each iteration. $s=[1,1,0,0,0,0,0,0]$ $s=[1,1,0,1,1,0,0,0]$ $s=[1,1,0,1,1,0,1,1]$ $s=[1,1,0,1,1,1,1,1]$ return yes since $s[7]=1$ We can optimize the algorithm above in various ways. We can stop the algorithm once we have found $s[M]=1$. We can change $s$ to a doubly linked list, whose elements are the realizable values in increasing order. We can split the given array of values into two arrays of equal size, or of sizes differing by 1. Run the algorithm on each of the two arrays. Check if we can select one number from each resulting array $s$ such that their sum is $M$, using the two-pointer technique. (This last idea is the classic meet-in-the-middle approach.) Exercise. Adapt the algorithm so that it also tells the least number of coins used if the answer is yes.
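A direct Python version of the table-filling idea (a sketch, with the inner index iterated downward so that each coin is used at most once):

```python
def subset_sum(values, M):
    """Return True iff some subset of `values` sums exactly to M."""
    s = [False] * (M + 1)
    s[0] = True                          # the empty subset realizes 0
    for v in values:
        for j in range(M - v, -1, -1):   # downward, so coin v is used at most once
            if s[j]:
                s[j + v] = True
        if s[M]:                         # early exit once M is realizable
            return True
    return s[M]

# The worked example from above: M = 7, coins 1, 3, 3, 5.
print(subset_sum([1, 3, 3, 5], 7))       # True, e.g. 1 + 3 + 3
```

The running time is O(nM), pseudo-polynomial in the value of M, which is what makes this "dynamic" rather than a brute-force enumeration of all subsets.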
{ "domain": "cs.stackexchange", "id": 13469, "tags": "dynamic-programming, coin-change, pseudocode" }
Executing remote command using expect from C++
Question: I wrote some code to execute remote commands over ssh, using expect, from c++. The code works but is a bit of a mess of c / c++ since there doesn't seem to be an idiomatic way to achieve much of what I wanted in pure c++. I appreciate that there are nominally some security concerns with the way I'm using passwords, but these aren't a concern for my use cases. remote_commmand.h #pragma once #include <stdexcept> struct RemoteException : public std::runtime_error { using std::runtime_error::runtime_error; }; void remote_command(const std::string &host, const std::string &user, const std::string &password, const std::string &cmd); remote_command.cpp #include "remote_command.h" #include <cstdio> #include <cstring> #include <unistd.h> #include <fstream> #include <vector> using std::string; namespace { const char* expect_template = "set timeout 60\n" "spawn ssh %s@%s %s\n" "expect {\n" " \"*yes/no*\"\n" " {\n" " send \"yes\\r\"\n" " exp_continue\n" " }\n" " \"*?assword:*\"\n" " {\n" " send -- \"%s\\r\"\n" " send -- \"\\r\"\n" " expect eof\n" " }\n" "}\n"; class TempFile { public: explicit TempFile(const char* contents){ file_name = std::tmpnam(nullptr); std::ofstream f(file_name.c_str(), std::ios::out); if(!f.write(contents, std::strlen(contents))){ throw std::runtime_error("ofstream"); } } ~TempFile(){ std::remove(file_name.c_str()); } TempFile(TempFile &other) = delete; TempFile& operator=(TempFile &other) = delete; std::string file_name; }; } void remote_command(const string &host, const string &user, const string &password, const string &cmd){ std::vector<char> formatted_cmd; const std::size_t formatted_size = std::strlen(expect_template) + host.size() + user.size() + password.size() + cmd.size(); formatted_cmd.resize(formatted_size); const std::size_t res = std::snprintf(&formatted_cmd[0], formatted_size, expect_template, user.c_str(), host.c_str(), cmd.c_str(), password.c_str()); if (res <= 0 || res >= formatted_size ){ throw RemoteException("infsufficient 
size for snprintf"); } TempFile tempfile(&formatted_cmd[0]); int r = execl("/usr/bin/expect", "-f", tempfile.file_name.c_str(), (char *)0); if (r == -1){ throw RemoteException(string("execl") + std::strerror(errno) ); } } example usage remote_command("my-host", "user", "pw", "/path/to/my/script.sh"); Answer: Consider using raw literals Your expect_template is a prime candidate to be written as a raw string literal: const char* expect_template = R"(set timeout 60 spawn ssh %s@%s %s expect { "*yes/no*" { send "yes\r" exp_continue } "*?assword:*" { send -- "%s\r" send -- "\r" expect eof } } )"; I may have missed something in the editing, but the basic idea is pretty simple: you don't use escapes at all, just insert exactly what you want the string to contain (including new-lines). Given an R immediately before the quote, the compiler treats the content as a raw literal. The opening delimiter is the quote, other optional "stuff", and an open paren. The closing delimiter is a close paren, the same optional stuff, and the close quote. In your case, the optional "stuff" isn't needed--you'd use it if you might need to include )" as part of your string. In that case, you might have something like R"<^>( at the beginning so the end of the string would only be signaled by )<^>" (the exact content doesn't matter a whole lot--you just have to be sure it's something that can't occur inside the string). Use of .c_str() C++98/03 required that you use s.c_str() when passing s to a stream's constructor or open member function. C++11 eliminated that requirement, so you can just pass the string directly. Passing strings as parameters Absent a reason to do otherwise, I'd at least consider passing std::string const & as the parameter to (for one example) TempFile::TempFile. 
This can simplify some of the content a little, and still lets you pass a C-style string when/if you want: explicit TempFile(std::string const &contents) { file_name = std::tmpnam(nullptr); std::ofstream f(file_name); if(!(f << contents)) { throw std::runtime_error("writing ofstream"); } } Use of snprintf You might want to consider using a stringstream instead of snprintf. This does ease at least a few parts of the job, although it's not exactly perfect either. std::ostringstream s; s << R"(set timeout 60 spawn ssh )" << user << "@" << host << " " << cmd << R"( expect { "*yes/no*" { send "yes\r" exp_continue } "*?assword:*" { send -- ")" << password << R"(\r" send -- "\r" expect eof } } )"; TempFile f(s.str());
{ "domain": "codereview.stackexchange", "id": 21542, "tags": "c++, c++11, ssh, tcl" }
Getting power from iRobot Create Command Module?
Question: Hello, I'm getting ready to follow the tutorial for powering a Kinect off an iRobot Create, and I was wondering: I have the Command Module for the Create and the User Manual shows that the DB9 connectors on the Command Module have pins for 5VDC and "Create Battery Voltage". This leads me to two questions: I need 5V power to a USB hub on the Create--does anyone know if I can get this off one of the DB9s? And if so, what would be the mA rating? Can I power the Kinect off the "Create Battery Voltage" pin on one of the DB9s? Or is that unregulated power? Thanks! patrick Originally posted by Pi Robot on ROS Answers with karma: 4046 on 2011-03-07 Post score: 2 Original comments Comment by Pi Robot on 2011-03-08: Just an update that using pins 12 and 25 plus a 12V regulator works like a charm with the Kinect. I'm also using pins 14 and 10 together with a 5V regulator to power a small USB hub since my onboard laptop only has a single USB port and I am also using a Hokuyo laser scanner and an IMU. Comment by Pi Robot on 2011-03-07: Thanks Murph--that's good to know! I like the idea of hooking up to the 25-pin connector directly so I'll give that a try. Comment by Murph on 2011-03-07: That tutorial is incorrect about using the Serial connectors. They will not supply enough amperage and your kinect will have many transient issues as it fails to power the video cameras properly. I'm not sure about the command module, but I used the DB25 connector that the command module plugs into directly without any issue. I used pin 12 for vin and pin 25 for ground. Someone (who knows more than me) did say that I should be careful about it drawing the battery too low and should add an 'enable' pin to the regulator, but I have not looked into that yet, so I'll pass the warning on to you. Comment by lifelonglearner on 2013-07-25: I am having problem while connecting the kinect power using DB25 connector of Irobot create base. 
The connector circuit is designed by clearpath robotics to provide 12V DC power to kinect by taking power from pin 10 and 14 of DB25 connector of irobot create base. Answer: The manual for the Create can be found here: http://www.irobot.com/filelibrary/create/Create%20Manual_Final.pdf The 5V switched output in the cargo bay is rated at only 100mA. I imagine that the command module is similar in rating. You'll note that the battery voltage is about 14-15V, far too high for the Kinect. However, adding a 12V regulator as shown in the tutorial you reference will solve that problem. Originally posted by fergs with karma: 13902 on 2011-03-07 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Pi Robot on 2011-03-07: Thanks Fergs. Yeah I found that manual and saw the 100mA rating. So I have everything hooked up now to a 12V regulator attached to pins 12 and 25 of a DB25 connector (thanks to Murph). Comment by lifelonglearner on 2013-07-25: I am having problem while connecting the kinect power using DB25 connector of Irobot create base. The connector circuit is designed by clearpath robotics to provide 12V DC power to kinect by taking power from pin 10 and 14 of DB25 connector of irobot create base. please tell solution.
{ "domain": "robotics.stackexchange", "id": 4981, "tags": "ros, kinect, create-robot" }
How do you embed a POVM matrix in a Unitary?
Question: In QuantumKatas Measurement Task 2.3 - Peres-Wooter's Game, we are given 3 states A,B and C. We construct a POVM of these states. But how do we convert that POVM into a Unitary that we can apply? Basically what I am asking is: how do we get from $M = \frac{1}{\sqrt{2}}\left(\begin{array}{rrr}1 & 1 & 1 \\ 1 & \omega & \omega^2 \end{array}\right)$ to $M' = \frac{1}{\sqrt{3}}\left(\begin{array}{cccc}1 & -1 & 1 & 0 \\ 1 & -\omega^2 & \omega & 0 \\ 1 & -\omega & \omega^2 & 0 \\ 0 & 0 & 0 & -i\sqrt3\end{array}\right)$ Answer: I'm not sure that I agree with what is presented as the solution (although the final answer seems OK). Let me explain what I would do. That task gives you 3 states $|A\rangle$, $|B\rangle$ and $|C\rangle$. You want a POVM that, for example, cannot give the answer "0" if the state was in $|A\rangle$, cannot give the answer "1" if the state was in $|B\rangle$ etc. So, the POVM elements are orthogonal to those states. So, let me write $|A^\perp\rangle$ where $\langle A|A^\perp\rangle=0$. So, we will be defining POVM elements $$ E_0=\alpha_0|A^\perp\rangle\langle A^\perp|,\quad E_1=\alpha_1|B^\perp\rangle\langle B^\perp|,\quad E_2=\alpha_2|C^\perp\rangle\langle C^\perp|. $$ It might help to also have $E_3=I-E_0-E_1-E_2$. All these operators must be non-negative, and we want the $\alpha_i$ to be as large as possible. There's actually a certain symmetry here. If you set $\alpha_0=\alpha_1=\alpha_2$ then $$ E_3=I-\alpha\frac32 I, $$ so $E_3$ is non-negative if $\alpha\leq\frac23$, so we set $\alpha=\frac23$. Now, how do we implement such a measurement? There need to be at least 3 measurement results, and since we're using qubits, the space needs to be $2^k\geq3$ dimensional, i.e. we'll pick $k=2$. This means we'll introduce one ancilla, which we'll be able to assume is in a known, fixed state. For simplicity, let that be $|0\rangle$. Now, remember that we want to find a unitary that's going to help us make the measurement.
Indeed, each measurement result will have to correspond to an orthogonal state, such as $|00\rangle$, $|01\rangle$ and $|10\rangle$, and the unitary will need to map us to these states. But unitaries map orthogonal states to orthogonal states and our states $|A^\perp\rangle|0\rangle$, $|B^\perp\rangle|0\rangle$ and $|C^\perp\rangle|0\rangle$ are not orthogonal to each other. What we need to do is find components such as $|\tilde A\rangle$ below: $$ |\psi_0\rangle=\sqrt{\alpha_0}|A^\perp\rangle|0\rangle+\sqrt{1-\alpha_0}|\tilde A\rangle|1\rangle $$ such that all three states are orthogonal. With this in mind, we can start to specify $U$: $$ U=|00\rangle\langle\psi_0|+|01\rangle\langle\psi_1|+|10\rangle\langle\psi_2|+|11\rangle\langle\psi_3|, $$ and so we already know some of the elements: $$ U=\frac{1}{\sqrt{3}}\left(\begin{array}{cccc} 1 & -1 & ? & ? \\ 1 & -\omega^2 & ? & ? \\ 1 & -\omega & ? & ? \\ 0 & 0 & ? & ? \end{array}\right) $$ You then just have to complete this matrix, however you like, subject to the orthogonality and normalisation conditions of the rows. I'd start by completing the top row with 1,0, at which point everything else falls into place: $$ U=\frac{1}{\sqrt{3}}\left(\begin{array}{cccc} 1 & -1 & 1 & 0 \\ 1 & -\omega^2 & \omega & 0 \\ 1 & -\omega & \omega^2 & 0 \\ 0 & 0 & 0 & \sqrt3 \end{array}\right) $$ You can put any phase you like on the bottom-right element, such as $-i$. Which one you want will basically be determined by whatever is easiest to implement with a circuit.
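A quick numerical sanity check (assuming numpy; not part of the original answer) confirms that the completed matrix really is unitary:

```python
import numpy as np

w = np.exp(2j * np.pi / 3)               # omega, a primitive cube root of unity
U = np.array([
    [1, -1,    1,    0],
    [1, -w**2, w,    0],
    [1, -w,    w**2, 0],
    [0,  0,    0,    np.sqrt(3)],
]) / np.sqrt(3)

# Rows are orthonormal because 1 + omega + omega^2 = 0, so U U^dagger = I.
print(np.allclose(U @ U.conj().T, np.eye(4)))   # True
```

The check works for any choice of phase on the bottom-right element, since multiplying a row by a unit-modulus scalar preserves orthonormality.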
{ "domain": "quantumcomputing.stackexchange", "id": 1728, "tags": "quantum-gate, textbook-and-exercises, unitarity, povm" }
Does the canonical partition function count microstates?
Question: The microcanonical partition function is the density of states. The canonical one, from a dimensional point of view, is still a number of states, but does it actually count microstates? I tried figuring it out from this common derivation: the heat bath B has a density of states deducible from the energy E of the system in contact with it, so the probability of finding the system at energy E is $P(E) \propto \Omega_B(E_{TOT} - E)$ In the thermodynamic limit the Taylor expansion around E=0 of the log of this has negligible second-order terms: $\log \Omega_B(E_{TOT} - E) \sim \log \Omega_B(E_{TOT}) - E/(KT)$ and going back: $\Omega_B(E_{TOT} - E) \sim \Omega_B(E_{TOT})e^{-E/KT}$ thus the result is $P(E) \propto e^{-E/KT}$ But, while the first equation I wrote has the clear meaning of counting microstates (bar a constant), I'm not sure the manipulation done to it still retains this meaning in the last equation I wrote. Thus I don't know the same about the canonical partition function. Thanks. Answer: What you have written is all correct. But we should note that $P(E)$ alone is not the partition function. The canonical partition function is $$ Z \propto \Omega_\mathrm{tot}(E_\mathrm{tot}),\qquad (1) $$ which counts the microstates of the universe, i.e., system and bath. But $P(E)$ only counts the microstates of the bath! Let us see how to use the definition to continue your argument. First, since the system and bath are weakly interacting, they can be treated as independent. Thus the number of joint microstates with the system energy being $E$ and bath energy being $E_\mathrm{tot} - E$ is given by $$ \Omega_\mathrm{tot}(E_\mathrm{tot}, E) \approx \Omega_\mathrm{sys}(E) \, \Omega_B(E_\mathrm{tot} - E) $$ Now we wish to count all microstates no matter the value of the system energy $E$.
So $$ \begin{aligned} Z &\propto \Omega_\mathrm{tot}(E_\mathrm{tot}) \\ &=\int_0^{E_\mathrm{tot}} \Omega_\mathrm{tot}(E_\mathrm{tot}, E) \, dE\\ &\approx \int_0^{E_\mathrm{tot}} \Omega_\mathrm{sys}(E) \, \Omega_B(E_\mathrm{tot} - E) \, dE. \end{aligned} $$ Now using your result, $$ \Omega_B(E_\mathrm{tot} - E) = \Omega_B(E_\mathrm{tot}) \, e^{-E/(KT)} \propto e^{-E/(KT)} , $$ we get $$ \begin{aligned} Z &\propto \Omega_\mathrm{tot}(E_\mathrm{tot}) \\ &\propto \int_0^{E_\mathrm{tot}} \Omega_\mathrm{sys}(E) \, e^{-E/(KT)} \, dE. \end{aligned} $$ This is indeed the usual definition of the canonical partition function.
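To make the final integral concrete (a toy numerical check assuming numpy, with a made-up density of states, not part of the original answer): take $\Omega_\mathrm{sys}(E) = E$ and $E_\mathrm{tot} \gg KT$; the integral then approaches the analytic value $(KT)^2$.

```python
import numpy as np

kT = 2.0
E = np.linspace(0.0, 50 * kT, 200_000)   # E_tot chosen much larger than kT
f = E * np.exp(-E / kT)                  # Omega_sys(E) times the Boltzmann factor

# Trapezoid-rule approximation of Z ∝ ∫ Omega_sys(E) e^{-E/kT} dE
Z = np.sum((f[1:] + f[:-1]) * np.diff(E)) / 2

print(abs(Z - kT**2) < 1e-3)             # True: Z ≈ (kT)^2 = 4
```

The point of the exercise: Z really is a (weighted) count over all system microstates, with each energy shell contributing its number of states times the Boltzmann factor.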
{ "domain": "physics.stackexchange", "id": 26131, "tags": "statistical-mechanics, partition-function" }
What is the purpose of environment variables in launch files and how can we use them correctly (optenv in husky)?
Question: Hello! I have trouble grasping the necessity, as well as the way of usage, of the three environment variables in the husky_gazebo/launch/spawn_husky.launch file as shown in the code below. What would be the difference of using simply false as default? <arg name="laser_enabled" default="$(optenv HUSKY_LMS1XX_ENABLED false)"/> <arg name="kinect_enabled" default="$(optenv HUSKY_UR5_ENABLED false)"/> <arg name="urdf_extras" default="$(optenv HUSKY_URDF_EXTRAS)"/> Originally posted by smarn on ROS Answers with karma: 54 on 2020-03-06 Post score: 0 Answer: The difference is that you are able to change the argument value with your environment variables, meaning you could avoid typing roslaunch husky_gazebo spawn_husky.launch laser_enabled:="VALUE" to set the argument; instead you would do export HUSKY_LMS1XX_ENABLED="VALUE". Now, there isn't a necessity of using them, they can simply be handy in some situations (essentially for reusability). In your example the arguments are related to the sensors of your robot; if you don't have a kinect then it is normal to have the argument kinect_enabled set to false. But if you do have one, it's not something that would change every day, so you wouldn't want to have to specify the argument every time you call the related launch files (i.e. always call roslaunch husky_gazebo spawn_husky.launch kinect_enabled:="true"). You could indeed modify the launch file to have this value false or true by default, but if someday you reuse this launch file with a different robot you would have to be aware of this modification. This launch file is from the husky_gazebo package, which is provided for users with different configurations; instead of requiring each user to change the launch file depending on their configuration, each user can use the same launch file by simply changing their environment variables.
The fact that the substitution args are optenv also suggests this intention: the user doesn't need to have those variables set to use the robot, because by default the husky doesn't have those sensors but you can still use it. You can find some packages (like for the turtlebot3_gazebo) using env instead of optenv for the robot model. Using env forces the user to have this variable set (roslaunch would fail if the variable is missing), which seems normal because there isn't a default turtlebot3 model. Originally posted by Delb with karma: 3907 on 2020-03-09 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by smarn on 2020-03-10: Thanks Delb you were really helpful, I think I get it! I will try some tests in order to grasp it completely Comment by Delb on 2020-03-10: Glad it helped you, if you feel that your question has been fully answered can you please mark it as correct (using the check mark icon) please ?
{ "domain": "robotics.stackexchange", "id": 34551, "tags": "ros-melodic, husky" }
Why does holding an electrical switch in between on and off states cause sparks?
Question: If air is a bad conductor, then why do sparks develop when an electrical switch is held in between on and off states? Why are sparks generated when cables carrying heavy electric current are brought too close? Is it because the electrons are jumping from the live cable to the other due to the presence of high voltage? Answer: The switch really has 2 positions: on and off. However, when you move the switch very slowly, it may leave the closed position slowly. When the switch is just barely open, the field may cause the air to break down and start conducting, to form a spark (as @anna v explained). To rephrase, the reason why sparks happen is because the switch may only be open a tiny amount, not enough to stop current from flowing through the air. If the gap then increases further, the spark may persist because the air is now acting like a conductor rather than an insulator. Switches are usually designed to prevent this from happening. They have built-in springs that act to open the contacts quickly and completely, thus preventing sparks. However, with many switches, moving the toggle very slowly may cause the contacts to separate a tiny bit, before they fly completely apart. Older designs are likely to suffer more from this. Switch design is easier for low-voltage switches, because high voltages are more likely to cause the air to break down and cause a spark. It is the voltage that causes electrons to jump across the gap and create the spark. For that reason, high voltage switches are also larger: they have to be large enough to keep the contacts far enough apart when the switch is open. Remember that high enough voltages can cause electrons to jump between clouds and the ground - that's called lightning.
{ "domain": "physics.stackexchange", "id": 30054, "tags": "electricity, electrostatics, electric-current" }
Creating queries on the fly and general manipulation for dataset of half a million data records
Question: What would one use for manipulating data of the kind below? a) Data is bio-markers of different globs GlobA 3 4 5 .... GlobB 2 1 1 .... GlobC 3 2 1 .... b) Manipulations are queries like: - show me the Globs where average of each Glob in file Efficiency is greater than 50% - or show me sorted Glob list where first and second sort criteria are xx and yy - Construct a chart of globs that differ in criteria x by integer 3 (show me globs whose average over 5 runs for Structure and Efficiency differ from their nearest neighbors by 3) Currently, this data is stored in a 100MB Excel file that is painfully slow to load on the speediest computer our lab can afford. Ideally, there would be some open source program that accepts csv files of this data and has the ability for the user to construct queries that can be stored in a library for easy pulling up; charting abilities would be great too. Here are 2 files (real data would be around 40 such files, each file containing 20K rows): Efficiency File: Glob,Run1,Run2,Run3,Run4,Run5 SigX,6.2,4.8,2.4,4.32,5.59 SigY,8.44,8.16,5.99,0.98,9.6 SigZ,0.00,0.00,0.00,0.01,0.20 Structure File: Glob,Run1,Run2,Run3,Run4,Run5 SigX,3.2,3.8,2.4,7.32,6.32 SigY,2.4,5.16,6.99,0.98,9.6 SigZ,1.02,0.00,2.23,0.01,0.20 Answer: You can do this in pandas since your data set is small. For "big" data that does not fit in memory you would want to use a database; PostgreSQL with the PostGIS extension would be ideal, since it handles the nearest neighbor part, which is the most challenging aspect. Here are some sample queries, in python. Show me the Globs where average of each Glob in file Efficiency is greater than 50% import pandas efficiency = pandas.read_csv('efficiency.csv', sep=',', index_col=0) structure = pandas.read_csv('structure.csv', sep=',', index_col=0) efficiency[efficiency.mean(axis=1) > 0.5] Sort the Structure list descending on Run3, then ascending on Run2.
structure.sort_values(by=["Run3", "Run2"], ascending=[False, True]) I'm not sure how you're defining the distance so I am unable to demonstrate the last part.
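For completeness, here is a self-contained version of the first query (assuming pandas; it loads the question's sample data through io.StringIO instead of files on disk, and uses an absolute threshold of 5.0 since the sample values are not fractions):

```python
import io
import pandas

efficiency = pandas.read_csv(io.StringIO(
    "Glob,Run1,Run2,Run3,Run4,Run5\n"
    "SigX,6.2,4.8,2.4,4.32,5.59\n"
    "SigY,8.44,8.16,5.99,0.98,9.6\n"
    "SigZ,0.00,0.00,0.00,0.01,0.20\n"
), index_col=0)

# Globs whose average over the five runs exceeds the threshold
high = efficiency[efficiency.mean(axis=1) > 5.0]
print(list(high.index))   # ['SigY']
```

The same boolean-indexing pattern handles the other per-row aggregate queries; only the aggregation function and threshold change.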
{ "domain": "datascience.stackexchange", "id": 741, "tags": "visualization, data, csv" }
Why is the direction of pressure always perpendicular to surface area for fluids?
Question: Why is the direction of pressure always perpendicular to the surface area of a body for fluids? We also assume that it is an ideal fluid here. So, pressure acts in all directions because a fluid has a tendency to flow. Now, the book says that for a block in water, pressure must only act perpendicular to the surface area of the block because there is no friction between adjacent layers of the fluid. Also, pressure must act in the perpendicular direction on the sides of the tub if a block is put inside a tub which has water contained in it. How does pressure being perpendicular not cause that? How can we prove that pressure always acts perpendicular in this way? Also, in real life this law must not be exactly valid, right, since it is only for making calculations easy? Is that true? Answer: This is actually something not easily answered because it is part of the definition of pressure. I will instead point you to other answers which hopefully make sense. The following is an excerpt from Lumen Physics. *The force exerted on the end of the tank is perpendicular to its inside surface. This direction is because the force is exerted by a static or stationary fluid. We have already seen that fluids cannot withstand shearing (sideways) forces; they cannot exert shearing forces, either. Fluid pressure has no direction, being a scalar quantity. The forces due to pressure have well-defined directions: they are always exerted perpendicular to any surface. (See the tire in Figure , for example.) Finally, note that pressure is exerted on all surfaces. Swimmers, as well as the tire, feel pressure on all sides. (See Figure 3.) * Basically, a key part of the above is that pressure is always perpendicular because if there were a component of force parallel to the surface, then the object would also exert a force on the fluid parallel to it as a consequence of Newton's third law, a shearing force that a static fluid cannot sustain. Additionally, there is a similar question on physics stackexchange.
{ "domain": "engineering.stackexchange", "id": 3778, "tags": "fluid-mechanics, fluid" }
What is the correct algorithm to see whether N points lie on the same side of a line?
Question: What I tried first was to find the equation of the line and then compare its y-intercepts with the y-intercepts of each point. I just need the proper approach to this algorithm. Answer: A quick theorem, in general: A hyperplane in $\mathbb{R}^d$ can be defined by a vector $\mathbf{v}$ and scalar $b$ as the set of points $\mathbf{x}$ such that $\mathbf{v}^T\mathbf{x} = b$. It splits the space $\mathbb{R}^d$ up into two half-spaces, one where $\mathbf{v}^T\mathbf{x} \geq b$ and one where $\mathbf{v}^T\mathbf{x} < b$. So in 2D, if your line is defined by $\mathbf{v}, b$, all you have to do is check whether either $\mathbf{v}^T\mathbf{x} \geq b$ or $\mathbf{v}^T\mathbf{x} < b$ holds for all your points $\mathbf{x}$. You are probably used to a line being defined as $y = ax + b$. Note that we have $\mathbf{x} = \begin{bmatrix}x\\y\end{bmatrix}$ (beware of the bold $\mathbf{x}$ as opposed to $x$). Thus we can rewrite to a matrix equation: $$y = ax + b$$ $$-ax + y = b$$ $$\begin{bmatrix}-a, 1\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix} = b$$ $$\mathbf{v}^T\mathbf{x} = b$$ and we recover the above form, where $\mathbf{v} = \begin{bmatrix}-a\\1\end{bmatrix}$. Thus all we have to do to check is if your points are a list of $(x, y)$ pairs is if either $-ax + y \geq b$ or $-ax + y < b$ for all of them.
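The final check translates directly into code; a minimal sketch for a line given as y = ax + b (function name is illustrative):

```python
def all_same_side(points, a, b):
    """True if every (x, y) satisfies -a*x + y >= b,
    or every one satisfies -a*x + y < b."""
    signs = [(-a * x + y) >= b for x, y in points]
    return all(signs) or not any(signs)

# Line y = x (a=1, b=0): both points lie strictly below it
print(all_same_side([(0, -1), (2, 1)], 1, 0))
```

Note this generalizes unchanged to any hyperplane v·x = b in higher dimensions: just replace -a*x + y with the dot product.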
{ "domain": "cs.stackexchange", "id": 17862, "tags": "algorithms, c" }
How to conceptualize the action potential?
Question: In my AP Biology class, we were taught that action potentials are not electrical impulses in the same way current travels through a wire. Rather, we were taught that action potentials are changing concentration gradients of sodium and potassium ions. However, when I looked into modeling action potentials I saw capacitances, inductances, and currents--properties associated with electrical circuits. My question: action potentials--are they electrical currents in the classical sense? If so, what is the charge that is flowing (is it sodium and potassium ions?), and how is the overall charge neutrality of the body maintained? Answer: First of all, electric current is defined as movement of charges, $I=\frac{dQ}{dt}$. In the electronics you see around, that is not a very useful definition, because electrons move much slower than signals, that is, the changing electric field. The speed of electric field propagation reaches the speed of light, whereas electrons move at 1%-30% of $c$. In biology, especially regarding the action potential, only the movement of charges is important, not the propagation of the electric field, probably because the conductance is much smaller than in copper wires. That is, the signal propagates not through interaction between charges, but via an intermediate agent/amplifier -- ion channels. Charge in the body is more or less constant, because action potentials are cyclic: after depolarization the membrane comes back to -70mV or so. There is no flow of charge out of the body.
{ "domain": "biology.stackexchange", "id": 3908, "tags": "biophysics, neurophysiology, action-potential" }
3D hologram point from lasers
Question: Firstly, I'm not a physics guy. I want to try to create a 3D hologram point with lasers. As I understand it, if I have at least 3 lasers of the same wavelength and their beams cross each other at a single point, then that point will become visible. Here is an example: But I've tried this method and it's not working for me. Is there any theoretical problem with this setup, or did I miss something in the experiment? Answer: Your assumption, that crossing lasers will make them visible, is unfortunately incorrect. I’m afraid the photo has led you astray. In order to see light, there needs to be something present to direct the light toward your eyes. In your photo, the laser beams are not simply in free, clear air. It appears that the beams are grazing along a table. Moreover, it looks like there is some kind of frosted glass cylinder that the beams are lighting up at the focus (see how the beams look far more diffuse after the cylinder, and the blue light even looks shadowed). In any case, there is plenty of scattering happening, redirecting some light to the camera. Do this experiment in vacuum, with no cylinder and no table, and you won’t see a thing. The light from the cylinder looks white because of the mixture of several colors across the spectrum, and it looks uniform on the cylinder because the camera is probably saturating.
{ "domain": "physics.stackexchange", "id": 91201, "tags": "optics, laser, hologram" }
Intersection of a recognizable language and a decidable language is decidable?
Question: I'm having trouble with proving that "Intersection of a recognizable language and a decidable language is decidable." I assume this is true although I have no idea how to prove it. Can somebody point me in the right direction? Answer: It's false. Let $L_1=\Sigma^*$ be a decidable language and $L_2=L_{HALT}$ be the (recognizable) language of all halting TM-string pairs. Then $L_1\cap L_2=L_2$, which is not decidable.
{ "domain": "cs.stackexchange", "id": 14095, "tags": "computability" }
How is gradient being calculated in Andrej Karpathy's pong code?
Question: I was going through the code by Andrej Karpathy on reinforcement learning using a policy gradient. I have some questions from the code. Where is the logarithm of the probability being calculated? Nowhere in the code do I see him calculating that. Please explain to me the use of the dlogps.append(y - aprob) line. I know this is calculating the loss, but how is this helping in a reinforcement learning environment, where we don't have the correct labels? How is policy_backward() working? How are the weights changing with respect to the loss function mentioned above? More specifically, what's dh here? Answer: logp seen in the code is actually logit p, which has this story behind it: Given a probability p, the corresponding odds are calculated as p / (1 – p). For example if p=0.75, the odds are 3 to 1: 0.75/0.25 = 3. The logit function is simply the logarithm of the odds: logit(x) = log(x / (1 – x)). The sigmoid next to logp works as follows: The inverse of the logit function is the sigmoid function. That is, if you have a probability p, sigmoid(logit(p)) = p. Source: [1] In reinforcement learning we know at the end of the game whether the actions taken were successful or not. Then, before the next round, we can adjust the gradients. From your link (commentary section): For example in Pong we could wait until the end of the game, then take the reward we get (either +1 if we won or -1 if we lost), and enter that scalar as the gradient for the action we have taken (DOWN in this case). In the example below, going DOWN ended up to us losing the game (-1 reward). So if we fill in -1 for log probability of DOWN and do backprop we will find a gradient that discourages the network to take the DOWN action for that input in the future (and rightly so, since taking that action led to us losing the game). In the very same commentary section (later) there is a pic and an explanation of what h is. Unfortunately, you have to check it yourself; the pic could not be attached here. 
Judging by the pic, h is the hidden-layer activations (the hidden state), and in the gradient case dh is the derivative with respect to h. Roughly speaking, backpropagation corrects the weights backwards through the network after the round is done. A more thorough explanation is in the mentioned comments section. Sources: [1] https://www.google.com/amp/s/nathanbrixius.wordpress.com/2016/06/04/functions-i-have-known-logit-and-sigmoid/amp/
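To make the (y - aprob) term concrete: for a sigmoid policy p = sigmoid(s) giving the probability of one action, the gradient of log P(action) with respect to the logit s is exactly (y - p). A generic REINFORCE-style sketch (not Karpathy's actual file; variable names are mine):

```python
# For a Bernoulli policy p = sigmoid(s):
#   d/ds log P(y=1) = 1 - p,  d/ds log P(y=0) = -p,  i.e. y - p in both cases.
def grad_logp_wrt_logit(y, p):
    return y - p

# After the episode, each step's gradient is scaled by the return:
p = 0.7          # probability the policy assigned to UP at some frame
y = 1            # we actually sampled UP
reward = -1.0    # ...and eventually lost the game
grad = reward * grad_logp_wrt_logit(y, p)
print(grad)      # negative: backprop will discourage UP for this input
```

This is why no explicit log appears in the code: the (y - aprob) expression is already the analytic gradient of the log-probability, and the end-of-game reward supplies the missing "label".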
{ "domain": "ai.stackexchange", "id": 1523, "tags": "deep-learning, reinforcement-learning, backpropagation, policy-gradients" }
Is it ok to break long html elements down into multiple lines?
Question: I often see very long code lines like this: <input type="email" name="email" autocomplete="email" id="email_address" value="<?= $block->escapeHtmlAttr($block->getFormData()->getEmail()) ?>" title="<?= $block->escapeHtmlAttr(__('Email')) ?>" class="input-text" data-mage-init='{"mage/trim-input":{}}' data-validate="{required:true, 'validate-email':true}"> Is it ok to break them down like: <input type="email" name="email" autocomplete="email" id="email_address" value="<?= $block->escapeHtmlAttr($block->getFormData()->getEmail()) ?>" title="<?= $block->escapeHtmlAttr(__('Email')) ?>" class="input-text" data-mage-init='{"mage/trim-input":{}}' data-validate="{required:true, 'validate-email':true}" > Because this makes it much clearer and easier to read in my opinion. I usually do this if there are more than about 80-100 chars. Validation with https://validator.w3.org/ Answer: I'd already answered this in the comments (which I shouldn't, we got answer fields for this), but anyway... Because this makes it much clearer and easier to read in my opinion. Absolutely. And that's a good reason to do it. Your 80-100 character limit makes sense as well. 80 is considered a standard 'limit' of sorts in many languages (Python is famous for it, but back in the day punch cards and teletypes were already limited to 80 columns). Google's HTML style guide states: Break long lines (optional). While there is no column limit recommendation for HTML, you may consider wrapping long lines if it significantly improves readability. When line-wrapping, each continuation line should be indented at least 4 additional spaces from the original line. You've already verified your validator doesn't have a problem with it either, so, go for it. In hindsight, if the validator would've had problems with it, it would've been time for a better validator. Better readability => better maintainability => less bugs and less development time.
{ "domain": "codereview.stackexchange", "id": 37795, "tags": "html" }
Why aren't things massless when they have balanced forces acting on them?
Question: Since $0$ divided by anything is $0$, why isn't everything with a net force of $0$ massless? Since $F = ma$ rearranges to $m = F/a$, when $F = 0$, $m$ should equal $0$, shouldn't it? This obviously can't be true, because it doesn't make any sense; but why? Answer: When you have $F=0$ all you can say is that $m a=0$. Now, in classical mechanics, the mass $m$ is a property of the object you are studying, and it is generally assumed that $m > 0$. Thus you can divide $ma=0$ on both sides by $m$ and obtain $a=0$, which gives you Newton's first law: an object with no net force applied to it, moves at constant velocity. See that you cannot divide both sides of $ma=0$ by $a$ because $a=0$ and division by $0$ is not defined. The reason we take $m>0$ in classical mechanics is as follows. Classical mechanics lets you calculate the movement of objects using your knowledge of the forces applied to it and the equation $F=ma$. You see that if you were to put $m=0$ in the equation, you need either $F=0$ (there can be no force whatsoever) or $a=\infty$ (things fly around at infinite velocity, whatever that means) or other combinations in which the theory would be completely useless. So classical mechanics $F=ma$ is a useful theory only when considering objects which do have a mass. Now, there are objects with no mass (like the photons: the particles of light), but for those you need other theories.
{ "domain": "physics.stackexchange", "id": 28006, "tags": "newtonian-mechanics, forces" }
When a volume decreases in a real gas, what is more likely: temperature decrease or pressure increase?
Question: The ideal gas law states that when the volume is lowered, either the temperature drops or the pressure rise. Under real-life situations, how does nature "decides" what to increase or decrease in value? Why would the temperature drop instead of the pressure increase and vice-versa? Suppose a submarine from OceanGate is diving in deep seas, and it implodes. What would happen to the air inside it when compressed by the water (thereby lowering the volume)? Would its temperature decrease while its pressure stays constant? Would its temperature stay constant while its pressure increases? Would its temperature decrease a little while its pressure increases a little? Out of these 3 options, what would happen? Or is my reasoning wrong and the temperature would increase for some reason? What is the temperature of the gas inside the submarine after imploding? I'm extremely confused because I expected the temperature and pressure to rise like when a piston decreases the volume of its chamber in an engine, but that is not what the ideal gas law tells me. Answer: The ideal gas law states that when the volume is lowered, either the temperature drops or the pressure rise. It does not say this. It says only that $PV=nRT$; for instance, the pressure and temperature could both rise as the volume drops. Under real-life situations, how does nature "decides" what to increase or decrease in value? Why would the temperature drop instead of the pressure increase and vice-versa? Nature seeks to increase the total entropy as rapidly as possible, subject to existing constraints. What this means is that we see the dominant intensive property gradient—or spatial difference—even out, through energy transfer in the form of shifting of the conjugate extensive property. The suppression of gradients in this way generates entropy. As an example, a pair of objects has the highest entropy when both are at the same temperature, as opposed to one being hot and the other cold. 
Thus, we predict that in the absence of any other effect, temperatures spontaneously equilibrate when things are placed in thermal contact. In the case of pressure vessel implosion, the dominant gradient can be considered the pressure difference, and we can expect volumes to shift to eliminate this difference. (Pressure is an intensive property, and volume is its conjugate extensive property.) In this case, water ingress compresses the air. This does work on the air, as the moving boundary gives each colliding air molecule a little momentum kick. The resulting internal energy increase of the air produces a temperature increase. If compression occurs fast enough that there's no time for heat to flow between the water or air (certainly the case in this example), we can model the process as adiabatic, giving us the relation $$\frac{T_\text{end}}{T_\text{start}}=\left(\frac{P_\text{end}}{P_\text{start}}\right)^{\frac{\gamma-1}{\gamma}},$$ where the heat capacity ratio $\gamma$ is 1.4 for air. This equation provides new information and answers your question of how Nature elects to change the temperature during the compression process. The equation supplements the constitutive gas law (e.g., $PV=nRT$), which can now be applied with the additional knowledge that the pressure and temperature are increasing as the volume decreases. Compression stops when the water and air pressures are equal. Now we have a new gradient, a thermal one: The water and heated air are at different temperatures. Conductive heat transfer at the boundary cools the air until it matches the water temperature. But we're not done here. A density gradient between water and air causes the latter to rise. Further, a chemical potential difference at the bubble interface humidifies the air and dissolves some of it into the water. Ignoring weather, ultimate equilibrium would correspond to a humidified atmosphere of air above aerated water, all at the same temperature. 
Again, all of these processes occur under the principle of maximizing the total entropy, through eliminating any existing gradients as fast as the kinetics of the system (e.g., involving the material properties) allow.
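Plugging illustrative numbers into the adiabatic relation (the depth and initial conditions below are assumptions, not from the question):

```python
# Adiabatic compression: T_end/T_start = (P_end/P_start)^((gamma-1)/gamma)
gamma = 1.4          # heat capacity ratio for air
T_start = 293.0      # K, assumed initial cabin temperature (~20 C)
P_start = 1.0        # atm
P_end = 380.0        # atm, roughly the water pressure near 3800 m depth (assumed)
T_end = T_start * (P_end / P_start) ** ((gamma - 1) / gamma)
print(round(T_end))  # on the order of 1600 K: a large transient temperature spike
```

So during the implosion the air's temperature rises sharply, before conduction into the surrounding water cools it back down as described above.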
{ "domain": "physics.stackexchange", "id": 96096, "tags": "thermodynamics, ideal-gas, gas" }
Turtlebot electric service does not start properly
Question: I am running electric on my turtlebot, and when the turtlebot laptop boots up, the turtlebot service starts, but the dashboard remains grey, with no input from the turtlebot. The only way I can get the turtlebot to do something is to stop the service and roslaunch something like minimal.launch or the teleoperation stack. Any ideas as to why the service might not be doing anything at the beginning? I have tried waiting up to 15 minutes for it to start up properly, with no results. Restarting the service does not work either, it just produces the same stale result on the dashboard. This is a followup to this post: http://answers.ros.org/question/3743/turtlebot-restarting-continuously which I thought fixed my problems, but I have not been able to duplicate the result of electric starting the turtlebot correctly after that initial success. The problem still exists, and this is my current launch file: <launch> <param name="turtlebot_node/gyro_scale_correction" value="1.0"/> <param name="turtlebot_node/odom_angular_scale_correction" value="1.0"/> <include file="$(find turtlebot_bringup)/minimal.launch"> <arg name="urdf_file" value="$(find xacro)/xacro.py '$(find turtlebot_arm_description)/urdf/arm.urdf.xacro'" /> </include> </launch> Originally posted by Aroarus on ROS Answers with karma: 122 on 2012-02-07 Post score: 2 Original comments Comment by mmwise on 2012-02-13: okay.. can you check that nothing is wrong with the launch file in /etc/ros/electic Comment by Aroarus on 2012-03-27: I have updated the post. Comment by McMurdo on 2012-03-28: I had the same problems with electric version. Diamondback is good. I reverted to diamondback and everything works fine. Comment by Aroarus on 2012-03-28: Unfortunately I need electric for the arm that I am using :( Comment by mmwise on 2012-03-28: which version of electric are you using? is your wireless interface set properly? Do you see any errors in the dashboard? Answer: Ahhh... I know what's wrong! 
When we did the latest release the urdf files changed around and the urdf for the arm didn't update to reflect the new files. So when you launch the arm it can't find the urdf file it's looking for.. I just fixed it in electric. You can edit the arm.urdf.xacro file to read: <include filename="$(find turtlebot_description)/urdf/turtlebot.urdf.xacro" /> I released the update in turtlebot_arm 0.1.1, it should be in debian soon. Originally posted by mmwise with karma: 8372 on 2012-04-02 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 8141, "tags": "ros, turtlebot, ros-electric, turtlebot-bringup, turtlebot-dashboard" }
Are loops connected in one node used in real life electricity and how does current behave in them?
Question: I am learning electricity and quite often see loops that are connected in one node. This is a simplified example: But I found many cases, with more elements. The thing that makes me question their usefulness is that (so far) they have always behaved as independent circuits. So, I wonder if circuits are ever designed this way in real life and if this concept (two loops connected at one node) has its own name or is used at all with any purpose. I also wonder what would happen if one of the loops does not have any voltage source in it, like this: Would current behave as shown in A or in B or a mix of both and why? Teacher says: "current is smart and always takes the laziest road" (in this case A), but I don't see how that can be a solid reasoning. Maybe with a clear why. Answer: I might not be the right person to answer the first part of your question, but I can surely answer the second one. In the second circuit (which I have redrawn and labelled), let's assume that the points $A$ and $B$ are two distinct points and not the same ones. Now let's assume that the resistance of wire $AB$ is $R$. So $R$ will be some function of the length $AB$. But whatever that function may be, it will always follow the given result, $$\lim_{{l(AB)} \to 0} R =0$$ where $l(AB)$ is the length of the segment $AB$ and $R$ is the resistance. So what I am saying is that as the length of $AB$ reduces to $0$, the resistance also goes to zero. So, when $A$ and $B$ get close, the resistance $R_2$ is short-circuited, and thus no current flows through it. Mathematical explanation: If you calculate the total resistance of this circuit by using the laws of combination of resistors in series and parallel, then you will obtain, $$ R_{equivalent} = R_1 + \frac{R R_2}{R+R_2}$$ Now if you decrease the value of $R$ such that it approaches $0$, then $R_{equivalent}$ approaches $R_1$. But $R_1$ is the resistance faced by the current when it travels like it does in image A (in your question). 
Thus the current must be travelling the way it does in image A. Only then would the resistance of the circuit be $R_1$.
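The limiting argument can be checked numerically; a small sketch (function name and the sample resistances are mine):

```python
def r_equivalent(r1, r2, r):
    # R1 in series with the parallel combination of the wire resistance R and R2
    return r1 + (r * r2) / (r + r2)

# As the wire resistance R shrinks toward zero, R2 is short-circuited and
# the total approaches R1 alone, matching picture A.
print(r_equivalent(10.0, 5.0, 1e-9))   # ~10.0
print(r_equivalent(10.0, 5.0, 5.0))    # 12.5: with a real wire, some current does take R2
```

The second line also shows why the teacher's "laziest road" slogan is only an idealization: with nonzero wire resistance the current splits between both paths.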
{ "domain": "physics.stackexchange", "id": 63639, "tags": "electricity, electric-circuits" }
Can fusors be used to turn Uranium 238 into Plutonium 239?
Question: Since a Farnsworth–Hirsch fusor is apparently a good fast neutron source that is simple enough to build at home, why can't it be used by rogue states or even terrorists to turn non fissile U-238, depleted Uranium that the US often shoots at its enemies in the form of shells, into fissile Plutonium-239 through neutron bombardment? I'm assuming that the neutron output from such a device would only be enough to create a microscopic amount of plutonium and therefore wouldn't be useful, but I was wondering what other physical constraints prevent this from being a proliferation threat. Answer: The Fusor emits $10^7$ neutrons/sec. One being developed hopefully will emit $10^{11}$ neutrons/sec. Compare that to $6 \times 10^{23}$ atoms/mole.
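A back-of-the-envelope sketch of why those numbers rule out proliferation, assuming (very generously) that every emitted neutron breeds one Pu-239 atom:

```python
avogadro = 6.022e23                  # atoms per mole
rate = 1e11                          # neutrons/s, the optimistic fusor figure above
seconds_per_mole = avogadro / rate
years = seconds_per_mole / 3.156e7   # seconds in a year
print(f"{years:.0f} years per mole of Pu-239")
```

Even at the hoped-for output this is on the order of 190,000 years per mole (~239 g), and a real capture efficiency would be far below 100%.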
{ "domain": "physics.stackexchange", "id": 72125, "tags": "nuclear-physics, fusion" }
Number of binary trees with given height
Question: I was wondering how many binary trees we have with height $h$ and $n$ nodes (another question is how many binary trees we have with height $\lfloor \lg n \rfloor$). Edit: I forgot to add the number of nodes. Answer: Take the height $h$ as the length of the longest root-to-leaf path. After fixing the root, we count the number in two cases: both left and right subtrees are of height $h$: number of trees $=A_h^2$; only one subtree has height $h$: number of trees $=2 \cdot A_h \cdot (A_0+A_1+...+A_{h-1})$ $$ A_{h+1} = A_h^2 + 2 \cdot A_h \cdot (A_0+A_1+...+A_{h-1}) $$
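A small sketch computing these counts. One assumption on my part: in the "only one subtree has height $h$" case the shorter subtree may also be empty, which the sum $A_0+\dots+A_{h-1}$ glosses over, so the code adds 1 for it:

```python
def trees_by_height(max_h):
    """trees_by_height(h)[k] = number of binary trees of height exactly k,
    counting a single node as height 0."""
    A = [1]                                   # A_0: the lone root
    for h in range(max_h):
        shorter = 1 + sum(A[:h])              # empty subtree + all heights < h
        A.append(A[h] ** 2 + 2 * A[h] * shorter)
    return A

print(trees_by_height(3))  # [1, 3, 21, 651]
```

The values 1, 3, 21, 651 are the familiar counts of binary trees of height 0, 1, 2, 3, which confirms the recurrence once the empty-subtree case is included.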
{ "domain": "cs.stackexchange", "id": 17727, "tags": "data-structures, counting" }
Branching fraction for Kaon decay
Question: I'm attempting to calculate the branching fraction of a particular Kaon decay, namely $K^{+}\rightarrow{\pi^{+}\pi^{0}}$. I know what the branching fraction equation is, namely: $$ BR=\frac{\Gamma_j}{\Gamma} $$ Where $\Gamma=1/\tau$. Now, I have been given $\Gamma_{j}$ as $1.2\times{10^{-8}}\,\mathrm{eV}$, and $\tau$ as $1.2\times{10^{-8}}\,\mathrm{s}$, rather this is stated as the mean lifetime of the $K^+$ species. Putting this all together I get a branching fraction of $1.44\times10^{-16}\,\mathrm{eV\,s}$. Surely this is way too small to be a viable branching fraction...? Usually it is quoted as a percentage so I was expecting something like 0.2...? Answer: Energies are equivalent to the (sometimes angular) frequencies of the photons which have those energies via $E = h f.$ The dimensionless value you are looking for is probably your current value divided by $\hbar,$ but it strongly depends on how the $\Gamma_j$ in units of $\text{eV}$ was being calculated. (You see $h$ when people are quoting optical spectra because they care about real frequencies; you see $\hbar$ when people are using units which set $\hbar = 1.$)
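Dividing the questioner's $1.44\times10^{-16}\,\mathrm{eV\,s}$ by $\hbar$ does indeed land near the expected 0.2:

```python
hbar = 6.582e-16   # eV*s, reduced Planck constant
gamma_j = 1.2e-8   # eV, partial width for K+ -> pi+ pi0
tau = 1.2e-8       # s, K+ mean lifetime
br = gamma_j * tau / hbar   # dimensionless: Gamma_j / (hbar / tau)
print(round(br, 3))         # about 0.22, a sensible branching fraction
```

In other words, the "too small" number was just the branching fraction still carrying a factor of $\hbar$ in eV·s.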
{ "domain": "physics.stackexchange", "id": 25741, "tags": "homework-and-exercises, particle-physics" }
(Optionally Concurrent) FIFO
Question: Based on Concurrent FIFO in C++11 and my review I implemented a queue and its concurrent pendant. Is there anything left to improve regarding clarity, usability, code-style, lock-times or general efficiency? #ifndef FIFO_H #define FIFO_H #include <array> #include <mutex> #include <condition_variable> #include <atomic> #include <type_traits> template<class T, std::size_t CAPACITY> class ST_FIFO { static_assert(CAPACITY, "Needs to have non-zero capacity"); T data[CAPACITY + 1]; std::size_t input_index = 0; std::size_t output_index = 0; inline static constexpr std::size_t wrap_index(std::size_t index) noexcept { return index > CAPACITY ? index - CAPACITY - 1 : index; } public: static constexpr std::size_t capacity() noexcept { return CAPACITY; } bool empty() const noexcept { return input_index == output_index; } std::size_t size() const noexcept { return input_index >= output_index ? input_index - output_index : input_index + CAPACITY + 1 - output_index; } template<class X> auto push(X&& x) noexcept(noexcept(pop(*data), *data = std::forward<X>(x))) -> decltype(*data = std::forward<X>(x), true) { if(size() == CAPACITY) pop(data[input_index]); data[input_index] = std::forward<X>(x); input_index = wrap_index(input_index + 1); return true; } template<class X> auto try_push(X&& x) noexcept(noexcept(*data = std::forward<X>(x))) -> decltype(*data = std::forward<X>(x), true) { if(size() == CAPACITY) return false; data[input_index] = std::forward<X>(x); input_index = wrap_index(input_index + 1); return true; } std::size_t multi_push(const T ts[], size_t count) noexcept(noexcept(push(*ts))) { for (size_t i = 0; i < count; ++i) push(ts[i]); return count; } std::size_t try_multi_push(const T ts[], size_t count) noexcept(noexcept(try_push(*ts))) { for (size_t i = 0; i < count; ++i) if(!try_push(ts[i])) return i; return count; } bool pop(T &t) noexcept(noexcept(t = std::move(t))) { if (empty()) return false; t = std::move(data[output_index]); output_index = 
wrap_index(output_index + 1); return true; } std::size_t multi_pop(T ts[], size_t count) noexcept(noexcept(pop(*ts))) { for (size_t i = 0; i < count; ++i) if(!pop(ts[i])) return i; return count; } bool peek(std::size_t ind, T &t) const noexcept(noexcept(t = t)) { if (ind >= size()) return false; t = data[wrap_index(output_index + ind)]; return true; } }; template<class T, std::size_t CAPACITY> class MT_FIFO : ST_FIFO<T, CAPACITY> { std::atomic_bool wait_flag = true; mutable std::mutex mutex; mutable std::condition_variable cv; using base = ST_FIFO<T, CAPACITY>; template<bool wait = false, class... X> inline std::unique_lock<std::mutex> lock(X... x) const { std::unique_lock<std::mutex> lock(mutex, x...); if(wait) cv.wait(lock, [this]{return !(base::empty() && wait_flag);}); return lock; } inline MT_FIFO(const MT_FIFO& other, std::unique_lock<std::mutex>&&) : base(other) , wait_flag(other.wait_flag.load()) {} template<bool all = false, class F> inline auto locked(F f) noexcept(noexcept(f())) -> decltype(f()) { auto result = (lock(), f()); if(result) all ? 
cv.notify_all() : cv.notify_one(); return result; } public: MT_FIFO() = default; MT_FIFO(const MT_FIFO& o) : MT_FIFO(o, o.lock()) {} MT_FIFO& operator=(const MT_FIFO& o) noexcept(noexcept(base::operator=(o))) { if(this == &o) return *this; auto a = lock(std::defer_lock); auto b = o.lock(std::defer_lock); std::lock(a, b); base::operator=(o); wait_flag = o.wait_flag; return *this; } using base::capacity; bool empty() const noexcept { return lock(), base::empty(); } std::size_t size() const noexcept { return lock(), base::size(); } template<class X> auto push(X&& x) noexcept(noexcept(base::push(std::forward<X>(x)))) -> decltype(base::push(std::forward<X>(x))) { return locked([&]{return base::push(std::forward<X>(x));}); } template<class X> auto try_push(X&& x) noexcept(noexcept(base::try_push(std::forward<X>(x)))) -> decltype(base::try_push(std::forward<X>(x))) { return locked([&]{return base::try_push(std::forward<X>(x));}); } std::size_t multi_push(const T ts[], size_t count) noexcept(noexcept(base::multi_push(ts, count))) { return locked<true>([&]{return base::multi_push(ts, count);}); } std::size_t try_multi_push(const T ts[], size_t count) noexcept(noexcept(base::try_multi_push(ts, count))) { return locked<true>([&]{return base::try_multi_push(ts, count);}); } bool pop(T &t) noexcept(noexcept(base::pop(t))) { return lock<true>(), base::pop(t); } std::size_t multi_pop(T ts[], size_t count) noexcept(noexcept(base::multi_pop(ts, count))) { return lock<true>(), base::multi_pop(ts, count); } bool peek(std::size_t ind, T &t) const noexcept(noexcept(base::peek(ind, t))) { return lock(), base::peek(ind, t); } void wait_on() noexcept { lock(), wait_flag = true; } void wait_off() noexcept { locked<true>([&]{return wait_flag = false;}); } }; #endif // FIFO_H Answer: Your popped elements will not be destructed. You move them but that isn't the same as being destructed and it doesn't guarantee that memory will be released. 
Furthermore your container requires T to be default constructible which prevents immutable objects from being used with the queue. See this question for how to resolve both: Implementation of fixed size queue using a ring (cyclic) buffer. Your push function is not exception safe. I dislike all capital names as typically all capital identifiers are used for macros. The name FIFO is poor in my opinion because the concept you're modelling is a queue or a pipe (these are similar but different). First in first out is just a description of how data enters and leaves the container. FIFO doesn't allude to the fact that it will overwrite on push if the container is full which is a bit surprising if you're not familiar with the container. Personally I do not like this behaviour as it violates the principle of least surprise. Also I'm not a fan of the ST_ prefix which I assume means single threaded... If you're adding that prefix to this class, you should add it to all classes which are not concurrent safe, which quickly becomes obnoxious. As for the MT_ prefix, that is acceptable but as it is an attribute of the queue, I'd rather see it as a suffix. I would prefer naming the class something like concurrent_fixed_pipe.
{ "domain": "codereview.stackexchange", "id": 25689, "tags": "c++, c++11, multithreading, template, queue" }
Uncertainty principle - momentum so precise that uncertainty of position is outside light-cone?
Question: Thought experiment: what happens if we measure momentum of a particle so precisely, that the uncertainty of its position becomes absurd? For example, what if the uncertainty of the position exceeds 1 light year? We know for a fact that the particle wasn't a light year away from the measuring device, or else how could the momentum have been measured? What if the uncertainty extended beyond the bounds of the universe? Isn't there some point at which we know for certain the particle was closer than what the uncertainty allows for? Answer: You assume that you can instantly measure the momentum to arbitrary precision, and this isn't the case. Let's consider a plane light wave to keep things simple, and suppose you want to measure the momentum so precisely that the position uncertainty becomes exceedingly large. How precisely do we have to measure the momentum? Well the uncertainty principle tells us (discarding numerical factors since this is all very approximate): $$ \Delta p \approx \frac{h}{\Delta x} $$ For a photon the momentum is $p = hf/c$, so this means we have to measure the frequency to a precision of: $$ \frac{h}{c}\Delta f \approx \frac{h}{\Delta x} $$ or: $$ \Delta f \approx \frac{c}{\Delta x} $$ Suppose we want our $\Delta x$ to be one light year, our expression becomes: $$ \Delta f \approx \frac{1}{1 \space \text{year}} $$ But to measure the frequency of a wave accurate to some precision $\Delta f$ takes a time of around $1/\Delta f$. This is because the frequency you measure is the wave frequency convolved with the Fourier transform of an envelope function, and in this case the width of the envelope function is the time you take to do the measurement. So the time $T$ we take to measure our momentum to the required accuracy is: $$ T \approx \frac{1}{\Delta f} \approx 1 \space \text{year} $$ The conclusion is that to measure the momentum precisely enough to make the position uncertainty 1 light year will take ... 1 year!
{ "domain": "physics.stackexchange", "id": 14386, "tags": "heisenberg-uncertainty-principle, locality" }
Work Done by Vibrating String - Without Small-Amplitude Assumption
Question: I'm trying to derive the equation for work done by a vibrating string, but I'm running into problems. The easiest way - the method used by the other question by this name - makes the approximation $\sin\theta\approx\tan\theta$, that is, the small angle approximation. I'm fairly sure this doesn't reflect some underlying physical concept that changes the expression for high-amplitude high-frequency waves - for starters, I do have another derivation, but it makes the assumption without justification that $\frac{dK}{dx}=\frac{dU}{dx}$, $K$ kinetic energy and $U$ potential energy. So can anyone explain an alternate derivation, or else justify that assumption? Answer: You might need to say some more about what you want to do with this equation, because you can descend into as much complexity as you like. Do you, for example, want to think about variable length strings, i.e. those where the tension lengthens the string and the tension itself is a function of position along the string? Do you want to think about a general, nontangential force? You could pull out Landau and Lifshitz "Theory of Elasticity" or Stephen Timoshenko "Strength of Materials Volume 2" or "Theory of Elasticity" and build something pretty complicated, but each new effect modeled is going to yield diminishing returns. Assuming a constant tension $T$ in the string of linear density $\mu$ along its length $z$ and assuming still predominantly transverse motion $y(z,\,t)$ in one plane, I get: $$T\, \cos\theta(z,t)\, \kappa(z,t) = \mu\, \partial_t^2 y\quad\quad\quad(1)$$ where $\theta$ is the string's angle made with the horizontal and $\kappa$ its curvature. Substituting for $\cos\theta$ and $\kappa$ yields: $$T\,\partial_z^2 y = \mu\,(1+(\partial_z y)^2)^2\, \partial_t^2 y\quad\quad\quad(2)$$ which will give you a nice nonlinearity to chew on. Next, you might consider a constant tension, constant length string with vibration but with motion in both transverse directions. 
So you're going to get two coupled nonlinear differential equations. Let our two transverse displacement components be $x(z,t)$ and $y(z,t)$, and let the local tangent to the string be defined by the unit vector components $X = \partial_z x/\sqrt{1 + (\partial_z x)^2+(\partial_z y)^2}$ and $Y = \partial_z y/\sqrt{1 + (\partial_z x)^2+(\partial_z y)^2}$ so that (here $s$ is the arclength): $$T\, \mathrm{d}_s X = T\,\partial_z\left(\frac{\partial_z x}{\sqrt{1+(\partial_z x)^2+(\partial_z y)^2}}\right) \mathrm{d}_s z = \mu\, \partial_t^2 x\quad\quad\quad(3)$$ $$T\, \mathrm{d}_s Y = T\,\partial_z\left(\frac{\partial_z y}{\sqrt{1+(\partial_z x)^2+(\partial_z y)^2}}\right) \mathrm{d}_s z = \mu\, \partial_t^2 y\quad\quad\quad(4)$$ whence (since $\mathrm{d}_z s = \sqrt{1+(\partial_z x)^2 + (\partial_z y)^2}$): $$\left(1+ (\partial_z y)^2\right)\,\partial_z^2 x- \partial_z x\,\partial_z y\,\partial_z^2 y = \frac{\mu}{T}\,\left(1+(\partial_z x)^2+(\partial_z y)^2\right)^2\, \partial_t^2 x\quad\quad\quad(5)$$ $$\left(1+(\partial_z x)^2\right)\,\partial_z^2 y- \partial_z x\,\partial_z y\,\partial_z^2 x = \frac{\mu}{T}\,\left(1+(\partial_z x)^2+(\partial_z y)^2\right)^2\, \partial_t^2 y\quad\quad\quad(6)$$ which reduce to Eq. (2) when there is vibration in one plane only. You'll get some really interesting effects from these coupled equations: whirling, coupling of energy from $x$ to $y$ and back again, and so forth. The next step would be to think of the axial motion of the string and the attendant variable tension along the string's length. This would only be apparent well into the nonlinear régime, and likely (5) and (6) should model most of the nonlinear effects you will need. Energies in the String If you are seeking to find out the work done by the end of the string, then you would need a model of what it's linked to and therefore a tension-to-displacement expression - likely itself a differential equation.
Now the tension $T$ is a function of time, so you're beginning to get seriously interesting! You might also be interested in looking at a tension varying with length at this point, with the local tension defined by $E\,A\,\epsilon(z,t) = k_T\, \epsilon(z,t)$, where $E$ is the string's Young's modulus, $A$ its cross-sectional area and $\epsilon(z,t)$ the strain. It makes more sense to use $k_T$ and measure experimentally: it's not going to be easy to work out $k_T$ from first principles from the material elastic constants for a braided or stranded string! The string's curvature begets the strain: $\mathrm{d} s = \sqrt{1+(\partial_z x)^2 + (\partial_z y)^2} \mathrm{d} z$ so that $$\epsilon(z, t) =\sqrt{1+(\partial_z x)^2 + (\partial_z y)^2} - 1 \approx \frac{1}{2}\left((\partial_z x)^2 + (\partial_z y)^2\right)\quad\quad\quad(7)$$ If you are looking for loss in the string, a good model of air drag force is $−\lambda\,\partial_t x$, $−\lambda\,\partial_t y$ (i.e. proportional to transverse velocity), which terms you'll need to include in the dynamical equation, then work out loss from the power dissipated by these terms. Internal material bending losses are complicated to model: often you can do this kind of thing by replacing material elastic constants with lossy elastic operators - so you would replace the Young's modulus $E$ for example by something of the form $E+E_t \partial_t$, for some loss constant $E_t$; equivalently, you would work with $k_T + k_1 \partial_t$ for the string's effective spring constant. But, at last, if you, as I now understand from your questions, are looking simply to find out the energy needed to set the vibration up (the energy stored in the string) in a lossless string, then you can work as follows.
The kinetic energy per unit length is obvious: it is simply: $$K(z,t) = \frac{1}{2}\,\mu\,\left((\partial_t x)^2 + (\partial_t y)^2\right)\quad\quad\quad(8)$$ Now, if we assume that the displacement is small, such that the at first high tension $T$ does not change much as the string vibrates, then the work done by $T$ in straining a length $\mathrm{d}z$ of string is $T\,\epsilon\,\mathrm{d}z$, so that the potential energy stored per unit length is, from Eq. (7): $$U(z,t) = T\,\epsilon\ = \left(\sqrt{1+(\partial_z x)^2 + (\partial_z y)^2} - 1\right)\,T\approx\frac{1}{2}\,T\, \left((\partial_z x)^2 + (\partial_z y)^2\right)\quad\quad\quad(9)$$ the approximation holding when $|\partial_z x|,\,|\partial_z y|\ll 1$. These are the general equations. To find the dispersion relationship for the uncoupled linear vibration equations $T\,\partial_z^2 y = \mu\, \partial_t^2 y$, $T\,\partial_z^2 x = \mu\, \partial_t^2 x$ we study solutions of the form $\exp(i\,(k\,z\pm\omega\,t))$ where $k$ is the wavenumber and $\omega$ the angular frequency; on substitution into the linear equations, we get $T\,k^2 = \mu\,\omega^2$ or: $$c = \left|\frac{\omega}{k}\right| = \sqrt{\frac{T}{\mu}}\quad\quad\quad(10)$$ so for such a wave, Eq. (8) and Eq. (9) (the latter in the small vibration $|\partial_z x|,\,|\partial_z y|\ll 1$ approximation) can be combined to show that $U(z,t) = K(z,t)$, as you state. Likewise, by using this relationship as well as Parseval's theorem for Fourier series for any superposition of frequencies such that the waveshape is periodic, you can prove that the total kinetic and potential energies integrated over a wavelength are equal. But this is for the linear régime only. More generally, you must use Eq. (5) and Eq. (6) together with Eq. (8) and Eq. (9) separately. Even with these equations, it would be altogether reasonable to assume the small vibration approximation with Eq. 
(9), because none of the above considers $z$-directed components of the force, which will become significant with angles that are big enough to make the small vibration approximation of Eq. (9) invalid. Therefore, your final set (approximating the RHS of (5) and (6) in the same way as (9)) might be: $$\begin{array}{rcl} \left(1+ (\partial_z y)^2\right)\,\partial_z^2 x- \partial_z x\,\partial_z y\,\partial_z^2 y &=& \frac{1}{c^2}\,\left(1+2\,(\partial_z x)^2+2\,(\partial_z y)^2\right)\, \partial_t^2 x\\ \left(1+ (\partial_z x)^2\right)\,\partial_z^2 y- \partial_z x\,\partial_z y\,\partial_z^2 x &=& \frac{1}{c^2}\,\left(1+2\,(\partial_z x)^2+2\,(\partial_z y)^2\right)\, \partial_t^2 y\\ K(z,t) &=& \frac{1}{2}\,\mu\,\left((\partial_t x)^2 + (\partial_t y)^2\right)\\ U(z,t) &=& \frac{1}{2}\,\mu\,c^2\, \left((\partial_z x)^2 + (\partial_z y)^2\right)\end{array}\quad\quad\quad(11)$$ with $c$ defined by Eq. (10). (Note the coefficient $\frac{1}{c^2} = \frac{\mu}{T}$ on the right-hand sides, consistent with Eqs. (5) and (6).)
{ "domain": "physics.stackexchange", "id": 9656, "tags": "homework-and-exercises, waves" }
Unable to contact my own server
Question: I'm in the midst of trying to set up turtlebot and I have encountered this problem. When I type "roslaunch turtlebot_bringup robot.launch" on my laptop, the following message appears Unable to contact my own server at [http://10.217.252.66:58698/]. This usually means that the network is not configured properly. I tried pinging myself and it works. Any ideas what might be wrong? Thanks Originally posted by ccm on ROS Answers with karma: 226 on 2011-06-12 Post score: 4 Original comments Comment by Haooo on 2016-03-05: How to you check your .bashrc form and how do you find your ROS_IP Comment by skr_robo on 2016-09-12: You can open .bashrc by typing gedit ~/.bashrc in the terminal. Answer: I solved it. I changed the following in my .bashrc from export ROS_MASTER_URI=http://10.217.252.66:11311 export ROS_HOSTNAME=10.217.252.66 to export ROS_MASTER_URI=http://laptop_name:11311 export ROS_HOSTNAME=laptop_name Originally posted by ccm with karma: 226 on 2011-06-12 This answer was ACCEPTED on the original site Post score: 9 Original comments Comment by Daniel Stonier on 2011-06-13: If you wanted to use numbers, you may have needed ROS_IP instead of ROS_HOSTNAME. Comment by rupendra on 2018-11-28: Hi , i am getting the same problem as you said, i am able to get the ping result, but i am unable to run rviz . in that answer laptop_name means user name ?? .
{ "domain": "robotics.stackexchange", "id": 5821, "tags": "ros, turtlebot, ros-master-uri" }
Moles of gas in a VLE mixture
Question: I have a system in vapor-liquid equilibrium which also has some gas inside (let's say that the VLE system is Water and the gas is Air). The system looks like this: There is some heat applied to it and therefore more water will vaporize and the Saturation Pressure of the vapor will increase. Since it is a closed system, the amount of gas stays the same, while the amount of vapor increases. I would like to calculate the pressure of this system and I'm not 100% sure that my approach is correct. I was thinking that the total pressure of the system is: $P_{tot} = P_{sat} + P_{gas}$ So the Saturation Pressure can be calculated using the Antoine equation or taken from tables. I'm fine with that. Here Question 1 arises: how can one check if the VLE condition still applies once heat is applied? Compare Antoine with Ideal Gas? Now the main question is... what to do with the gas? If I assume that it is ideal gas, it can be written as: $P_{gas} = \frac{n_{gas} R T}{V_{gas}}$ And here is where I get confused. Question 2: Is $V_{gas}$ the volume of the gas phase only or is it considered together with the vapor like in the figure ($V_{gas}=V_{vap}$)? And if it is considered together with the vapor, then obviously it should be a variable, i.e. $V_{gas} = m_{vap} / \rho_{vap}$. Another question arises with this: Question 3: If I don't know the amount of gas, how do I find it? (I know that one can find the amount at the initial state of the system or STP conditions $n_{gas} = \frac{(P_{init}-P_{sat})V}{RT}$, but there are two unknowns: $n_{gas}$ and $V$, unless the V is known). So basically the volume is the one that confuses me and I am thinking of using Raoult's law to get the fractions between the gas and the vapor, but I'm not sure this is the right approach. Can anyone advise me on this? Thank you!
Paul Answer: If you assume that the gas is insoluble in the liquid, then you could analyze it as follows (neglecting the effect of temperature on the density of the liquid): Let $n_g$ equal the number of moles of the "gas" in the container Let $n_v$ equal the total number of moles (vapor plus liquid) of the "vapor" species in the container Let V be the total volume of the container Let x be the fraction of the "vapor species" that is liquid Then the volume of liquid in the container is:$$V_L=n_vxv_L\tag{1}$$where $v_L$ is the molar volume of the liquid. The remaining volume of the container is occupied by gas and vapor. Since the vapor is saturated, the partial pressure of the "vapor" is equal to the equilibrium vapor pressure:$$p_v=P_{sat}(T)$$This must be consistent with the remaining volume and the ideal gas law:$$P_{sat}(T)(V-V_L)=n_v(1-x)RT\tag{2}$$ Eqns. 1 and 2 can be used to solve for x, the split between the number of moles in the liquid and the number of moles in the vapor. As far as the gas is concerned, its partial pressure can be determined from the ideal gas law (now that the volume of liquid is known):$$p_g(V-V_L)=n_gRT$$
{ "domain": "physics.stackexchange", "id": 34147, "tags": "thermodynamics, pressure, ideal-gas, equilibrium, evaporation" }
How do seismologists locate the epicenter and focus of an earthquake?
Question: I know the focus of an earthquake is where the earthquake originated from, but what I could never figure out is, how to scientists find out where exactly the focus (and epicenter) are located? Answer: Earthquake epicenters are located using triangulation, this is possible once seismograms of the earthquake - coming from at least three locations - have been analyzed properly. Here is a good explanation on a site for seismology students at Michigan Tech which takes its seismogram illustrations from Bolt's textbook on earthquakes (1978). Read this page and you will have a good explanation of how seismologists determine the location of an earthquake epicenter. How Do I Locate That Earthquake's Epicenter?
{ "domain": "earthscience.stackexchange", "id": 260, "tags": "geophysics, geology, earthquakes, seismology" }
Why are interacting field theories called nonlinear? Explanation for interacting EM field, in particular
Question: The classical equation of motion for the electromagnetic field interacting with a charged fermion field $\psi$ of charge $eq$ is given by $$\Box A^\mu(x)=j^\mu(x)$$ where $j^\mu(x)=eq\bar{\psi}(x)\gamma^\mu\psi(x)$. The equation of motion for $\psi$ reads $$(i\gamma^\mu\partial_\mu-m)\psi(x)=eq\gamma^\mu A_\mu\psi.$$ According to the nomenclature of differential equations, both the equations are linear, inhomogeneous partial differential equations. Then, why are interacting field theories called non-linear? Answer: I think the following is a clean and correct way to see why the equation for $\psi$ is nonlinear. The solution of the first equation is $$A^\mu(x)=\int d^4y ~G(x-y)j^\mu(y)=eq\int d^4y ~G(x-y)\bar{\psi}(y)\gamma_\mu\psi(y)$$ where $$G(x-y)=\int\frac{d^4p}{(2\pi)^4}\frac{e^{ip\cdot(x-y)}}{p^2+i\epsilon}.$$ Substituting it in the equation for $\psi$, we realize that $$(i\gamma^\mu\partial_\mu-m)\psi(x)=(eq)^2\gamma^\mu\left(\int d^4y ~G(x-y)\bar{\psi}(y)\gamma_\mu\psi(y)\right)\psi(x).$$ Now the source term is manifestly non-linear.
{ "domain": "physics.stackexchange", "id": 76127, "tags": "field-theory, differential-equations" }
Reducing database access time and connection count
Question: I have 2 connections. How can I reduce this to one connection? //AuthentificationController class: public string Register(string nickName, string email, string password) { try { if(!UserWorker.IsUserRegistered(nickName)) //connect { UserWorker.RegisterUser(nickName, email, password)) //connect return "Done"; } else { return "You are already registered"; } } catch(Exception ex) { //log return "Server error"; } } //UserWorker class: //... bool IsUserRegistered(string nickName) { using(var context = new XContext) { return context.Users.Contains(x => x.NickName == nickName); } } //... void RegisterUser(string nickName, string email, string password) { using(var context = new XContext) { User newUser = new User(nickName, email, password); context.Users.Add(newUser); context.SaveChanges(); } } Answer: I only see one connection. you should only have one method that registers the user. I mean that Register and RegisterUser are the same thing. This : public string Register(string nickName, string email, string password) { try { if(!UserWorker.IsUserRegistered(nickName)) //connect { UserWorker.RegisterUser(nickName, email, password)) //connect return "Done"; } else { return "You are already registered"; } } catch(Exception ex) { //log return "Server error"; } } //... bool IsUserRegistered(string nickName) { using(var context = new XContext) { return context.Users.Contains(x => x.NickName == nickName); } } //... 
void RegisterUser(string nickName, string email, string password) { using(var context = new XContext) { User newUser = new User(nickName, email, password); context.Users.Add(newUser); context.SaveChanges(); } } should be public string Register(string nickName, string email, string password) { try { if(!UserWorker.IsUserRegistered(nickName)) //connect { using(var context = new XContext) { User newUser = new User(nickName, email, password); context.Users.Add(newUser); context.SaveChanges(); } return "Done"; } else { return "You are already registered"; } } catch(Exception ex) { //log return "Server error"; } } //... bool IsUserRegistered(string nickName) { using(var context = new XContext) { return context.Users.Contains(x => x.NickName == nickName); } } Or better yet, you should get rid of the boolean as well and only use 1 context for the entire thing public string Register(string nickName, string email, string password) { try { using(var context = new XContext) { if (!context.Users.Contains(x => x.NickName == nickName)) { User newUser = new User(nickName, email, password); context.Users.Add(newUser); context.SaveChanges(); return "Done"; } else { return "You are already registered"; } } } catch(Exception ex) { //log return "Server error"; } }
{ "domain": "codereview.stackexchange", "id": 11920, "tags": "c#, database, entity-framework" }
Electric field due to a finite line charge
Question: I was wondering what would happen if we were to calculate electric field due to a finite line charge. Most books have this for an infinite line charge. In the given figure if I remove the portion of the line beyond the ends of the cylinder. I believe the answer would remain the same. Also if I imagine the line to be along the $x$-axis then would it be correct to say that electric field would always be perpendicular to the line and would never make any other angle (otherwise the lines of force would intersect)? Image source: Electric Field of Line Charge - Hyperphysics Answer: You can find the expression for the electric field of a finite line element at Hyperphysics which gives for the $z$-component of the field of a finite line charge that extends from $x=-a$ to $x=b$ $$E_z = \frac{k\lambda}{z}\left[\frac{b}{\sqrt{b^2+z^2}} + \frac{a}{\sqrt{a^2+z^2}}\right]$$ You can follow the approach in that link to determine the $x$-component (along the wire) as well. The field will not be perpendicular to the $x$-axis everywhere - at the ends of the line, they "flare out" since the field obviously has to go to zero far from the line segment.
{ "domain": "physics.stackexchange", "id": 21540, "tags": "homework-and-exercises, electrostatics, electric-fields" }
Resistivity in terms of temperature coefficient of resistance
Question: $$R = R_0 * (1 + \alpha (T - T_0))$$ Where $R_0$ and $T_0$ are the reference resistance and temperature. Is it then safe to say that: $$\rho = \rho _0 * (1 + \alpha (T - T_0))$$ (where $\rho _0$ is the reference resistivity)? Answer: Yes; under the relationship $R=\rho L/A$, the two expressions are equivalent for a given geometry. Resistivity generally depends on temperature in a complex way, but for small temperature differences, this dependence is approximately linear, as captured by the coefficient $\alpha$. All resistivity values for materials (should) include a reference temperature because of this temperature sensitivity. In this Wikipedia list of material resistivities, for example, the reference temperature is 20°C.
{ "domain": "physics.stackexchange", "id": 39408, "tags": "temperature, electrical-resistance" }
having issue with numbers in Polymorphic Lambda Calculus
Question: It is said that Church Numbers are encoded as follows c0 = λX. λs:X->X. λz:X. z; c1 = λX. λs:X->X. λz:X. s z; c2 = λX. λs:X->X. λz:X. s (s z); c3 = λX. λs:X->X. λz:X. s (s (s z)); The Church numbers are terms, right? But the terms are given by the following syntax. t ::= terms x //variable λx:T.t //abstraction t t //application λX<:T.t //type abstraction t [T] //type application It seems to me that Church numbers do not respect the syntax of terms, which is strange. I think it should be like c0 = λX<:some type here. λs:X->X. λz:X. z So, how to write Church numbers in a consistent way? If possible, provide a simple example to explain, e.g. how to write 2 * 2. source1 Thanks in advance! Answer: The syntax you have written is probably that of System F<:. However, the typed Church numerals you have written are introduced in the context of System F (a.k.a. λ2) at Fig. 11 of the source pdf. The terms of System F are defined as below in Types and programming languages (Benjamin C. Pierce, MIT press, 2002): t ::= terms: x variable λx:T.t abstraction t t application λX.t type abstraction t [T] type application The typed Church numerals at Fig. 11 are terms in this definition. Moreover, in System F<:, Church numerals are also well-typed and can be generalized further. The examples below are taken from Types and programming languages. Top is the maximum type. szero = λX<:Top. λS<:X. λZ<:X. λs:X→S. λz:Z. z; sone = λX<:Top. λS<:X. λZ<:X. λs:X→S. λz:Z. s z;
{ "domain": "cs.stackexchange", "id": 7361, "tags": "lambda-calculus" }
What is the possibility of a railgun assisted orbital launch?
Question: Basic facts: The world's deepest mine is 2.4 miles deep. Railguns can achieve a muzzle velocity of a projectile on the order of 7.5 km/s. The Earth's escape velocity is 11.2 km/s. It seems to me that a railgun style launch device built into a deep shaft such as an abandoned mine could reasonably launch a vehicle into space. I have not run the calculations and I wouldn't doubt that there might be issues with high G's that limit the potential for astronauts on such a vehicle, but even still it seems like it would be cheaper to build such a launch device and place a powerplant nearby to run it than it is to build and fuel single-use rockets. So, what is the possibility of a railgun assisted orbital launch? What am I missing here? Why hasn't this concept received more attention? Answer: Ok David asked me to bring the rain. Here we go. Indeed it is very feasible and very efficient to use an electromagnetic accelerator to launch something into orbit, but first a look at our alternatives: Space Elevator: we don't have the tech Rockets: You spend most of the energy carrying the fuel, and the machinery is complicated, dangerous, and it cannot be reused (no orbital launch vehicle has been 100% reusable. SpaceShipOne is suborbital, more on the distinction in a moment). Look at the SLS that NASA is developing, the specs aren't much better than the Saturn V and that was 50 years ago. The reason is that rocket fuel is the exact same - there is only so much energy you can squeeze out of these reactions. If there is a breakthrough in rocket fuel that is one thing but as there has been none and there is none on the horizon, rockets as an orbital launch vehicle are dead end techs which we have hit the pinnacle of.
Cannons: Acceleration by a pressure wave is limited to the speed of sound in the medium, so you cannot use any explosive as you will be limited by this (gunpowder is around $2\text{ km/s}$, this is why battleship cannons have not increased in range over the last 100 years). Using a different medium you can achieve up to 11 km/s velocity using hydrogen. This is the regime of 'light gas guns' and a company wants to use this to launch things into orbit. This requires high accelerations (something ridiculous like thousands of $\mathrm{m/s^2}$) which restricts you to very hardened electronics and material supply such as fuel and water. Maglev: Another company is planning on this (http://www.startram.com/) but if you look at their proposal it requires superconducting loops running something like 200 MA generating a magnetic field that will destroy all communications in several states, I find this unlikely to be constructed. Electromagnetic accelerator (railgun): This is going to be awesome! There is no requirement on high accelerations (a railgun can operate at lower accelerations) and no limit on upper speed. See the following papers: Low-Cost Launch System and Orbital Fuel Depot Launch to Space with Electromagnetic Rail Gun Some quick distinctions, there is suborbital and orbital launch. Suborbital can achieve quite large altitudes which are well into space, sounding rockets can go up to 400 miles and space starts at 60 miles. The difference is whether you have enough tangential velocity to achieve orbit. For $1\text{ kg}$ at $200\text{ km}$ from earth the energy to lift it to that height is $m g h \approx 2\text{ MJ}$, but the tangential velocity required to stay in orbit is $m v^2 / r = G m M / r^2$ yielding a $KE = 0.5 m v^2 = 0.5 G m M / r = 30\text{ MJ}$, so you need a lot more kinetic energy tangentially. To do anything useful you need to be orbital, so you don't want to aim your gun straight up; you want it at some gentle angle going up a mountain or something.
The papers I cited all have the railgun going up a mountain and about a mile long and launching water and cargo. That is because to achieve the $6\text{ km/s}+$ you need for orbital velocity you need to accelerate the object from a standstill over the length of your track. The shorter the track the higher the acceleration. You will need about 100 miles of track to drop the accelerations to within survival tolerances NASA has. Why would you want to do this? You just need to maintain the power systems and the rails, which are on the ground so you can have crews on it the whole time. The entire thing is reusable, and can be reused many times a day. You can also just have a standard size of object it launches and it opens a massive market of spacecraft producers, small companies that can't pay \$20 million for a launch can now afford the \$500,000 for a launch. The electric costs of a railgun launch drop to about \$3/kg, which means all the money from the launch goes to maintenance and capital costs and once the gun is paid down prices can drop dramatically. It is the only way that humanity has the tech for that can launch large quantities of objects and in the end it is all about mass launched. No one has considered having a long railgun that is miles long because it sounds crazy right off the bat, so most proposals are for small high-acceleration railguns as in the papers above. The issue is that this limits what they can launch and as soon as you do that no one is very much interested. Why is a long railgun crazy? In reality it isn't, the raw materials (aluminum rails, concrete tube, flywheels, and vacuum pumps) are all known and cheap. If they could make a railroad of iron 2,000 miles in the 1800s why can't we do 150 miles of aluminum in the 2000s?
The question is of money and willpower, someone needs to show that this will work and not just write papers about this but get out there and do it if we ever have a hope of getting off this rock as a species and not just as the 600 or so that have gone already. Also the large companies and space agencies now are not going to risk billions into a new project while there is technology which has been perfected and proven for the last 80 years that they could use. There are a lot of engineering challenges, some of which I and others have been working on in our spare time and have solved, some which are still open problems. I and several other scientists who are finishing/have recently finished their PhDs plan on pursuing this course ( jeff ross and josh at solcorporation.com , the website isn't up yet because I finished my PhD 5 days ago but it is coming). CONCLUSIONS Yes it is possible, the tech is here, it is economic and feasible to launch anything from cargo to people. It has not gotten a lot of attention because all the big boys use rockets already, and noone has proposed a railgun that can launch more than cargo. But it has caught the attention of some young scientists who are going to gun for this, so sit back and check the news in a few years.
{ "domain": "physics.stackexchange", "id": 4507, "tags": "electromagnetism, rocket-science, space, propulsion" }
On coloured Gaussian noise
Question: It is known that the PSD of additive white Gaussian noise (AWGN) is constant and equal to its variance. What about coloured Gaussian noise (CGN)? For example, given the following PSD of CGN $$S(f) = \frac 1f $$ Is the spectral density of such noise frequency-dependent? If so, how to get the PDF by some "inverse" autocorrelation function? Answer: Colored Gaussian noise is by definition a wide-sense-stationary (WSS) process; that is, it has constant mean (all the random variables constituting the process have the same mean) and its autocorrelation function $R_X(t_1, t_2) = E[X(t_1)X(t_2)]$ depends only on the difference $t_2-t_1$ of the arguments. It is conventional to use $\tau$ to denote the difference $t_2-t_1$, and abuse notation by writing $R_X(\tau)$ instead of the more prolix $R_X(0, \tau) = R_X(t,t+\tau)$ for the autocorrelation function. The power spectral density (PSD) of the process is then the Fourier transform of $R_X(\tau)$: $$S_X(f) = \int_{-\infty}^\infty R_X(\tau)e^{-j2\pi f\tau} \,\mathrm d\tau.$$ The PSD is an even nonnegative function of $f$. White noise is a zero-mean process for which $R_X(\tau) = K\delta(\tau)$ where $\delta(\cdot)$ is the Dirac delta or impulse and its PSD has constant value $K$ for $-\infty < f < \infty$. Colored noise is a zero-mean process whose PSD is not constant for all $f$. Colored Gaussian noise is a process in which all the random variables are zero-mean correlated (jointly) Gaussian random variables with random variables separated by time $\tau$ having covariance $R_X(\tau)$. Note that the variance of all the random variables is $\sigma^2 = R_X(0)$.
The PSD is connected to the PDF in that the PSD determines the variance of the random variables in question, via the following corollary of the inverse Fourier transform formula: $$\sigma^2 = R_X(0) = \int_{-\infty}^\infty S_X(f) \,\mathrm df.$$ Note that all the random variables constituting the process have the same (Gaussian) PDF (and so the same mean and the same variance); the variance does not become time-varying just because the noise is colored.
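To make the connection concrete, here is a quick numerical sanity check (my own NumPy sketch, not part of the answer above; it uses a first-order autoregressive filter to produce the colored noise): generate colored Gaussian noise by filtering white noise, estimate the PSD with an averaged periodogram, and confirm that integrating the PSD over frequency recovers the variance $R_X(0)$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generate colored (lowpass) Gaussian noise by passing white Gaussian
# noise through a first-order IIR filter: x[n] = a*x[n-1] + w[n].
a = 0.9
n = 200_000
w = rng.standard_normal(n)
x = np.empty(n)
x[0] = w[0]
for i in range(1, n):
    x[i] = a * x[i - 1] + w[i]

# Theoretical variance of the AR(1) process: R_X(0) = sigma_w^2 / (1 - a^2).
var_theory = 1.0 / (1.0 - a**2)

# Estimate the PSD with an averaged periodogram (rectangular window) and
# integrate it over frequency; by Parseval this recovers the mean power.
seg = 1024
segs = x[: n // seg * seg].reshape(-1, seg)
psd = np.mean(np.abs(np.fft.fft(segs, axis=1)) ** 2, axis=0) / seg
# Bin spacing in normalized frequency is 1/seg, so the integral of the
# PSD is the sum of the bins times 1/seg.
var_from_psd = np.sum(psd) / seg

print(var_theory, var_from_psd)
```

With the filter pole at $a = 0.9$ the theoretical variance is $1/(1-a^2) \approx 5.26$, and both the time-domain sample variance and the integrated PSD land within a few percent of it.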
{ "domain": "dsp.stackexchange", "id": 5737, "tags": "noise, gaussian" }
Reimplementation of Diep.io in C++ with SFML and Box2D
Question: Here's my attempt at reimplementing part of https://diep.io/, a 2D game where tanks battle with each other. The tanks are circular and they have cannons which fire bullets. The bullets can hit other tanks and they disappear after three seconds. Here's a random YouTube video if you want to see how the original game works: https://www.youtube.com/watch?v=9R6zsD5rdd8. I'm using Box2D for the physics and SFML for graphics and input. Currently, I have only implemented basic tanks and bullets, but I want to make sure the overall structure is good before continuing so that I don't spend a bunch of time refactoring later. I plan on implementing health and damage in the future but I'm not asking for help on those. arena.fwd.h: #ifndef CPPDIEP_ARENA_FWD_H #define CPPDIEP_ARENA_FWD_H /// @file /// Forward declaration for Arena used to avoid circular dependencies. namespace cppdiep { class Arena; } // namespace cppdiep #endif // CPPDIEP_ARENA_FWD_H arena.h: #ifndef CPPDIEP_ARENA_H #define CPPDIEP_ARENA_H #include "arena.fwd.h" #include <concepts> #include <cstdint> #include <memory> #include <utility> #include <vector> #include <Box2D/Common/b2Math.h> #include <Box2D/Dynamics/b2World.h> #include <SFML/Graphics/Color.hpp> #include <SFML/Graphics/RenderTarget.hpp> #include "bullet.h" #include "tank.h" #include "time.h" namespace cppdiep { /// The Arena class manages all of the objects in the game. class Arena { public: /// Construct an arena. /// @param size the side length of the arena. /// @param time_step the number of seconds that each time step represents. Arena(float size, float time_step); /// Draw the arena to an SFML render target. /// @param target the SFML render target to draw to. void draw(sf::RenderTarget &target) const; /// Advance the state of the arena by one time step. void step(); /// Spawn a non-tank object in the arena. 
/// @tparam ObjectType the type of the object to spawn /// @param args the arguments to be forwarded to the object's constructor /// @return A reference to the new object. // clang-format and doxygen doen't handle the requires expression properly. // clang-format off template <std::derived_from<Object> ObjectType, typename... Args> /// @cond requires(!std::derived_from<ObjectType, Tank>) /// @endcond ObjectType &spawnObject(Args &&...args) { ObjectType *object = new ObjectType(*this, std::forward<Args>(args)...); objects.emplace_back(object); return *object; } // clang-format on /// Spawn a tank in the arena. /// @tparam TankType the type of the tank to spawn /// @param args the arguments to be forwarded to the tank's constructor /// @return A reference to the new tank. template <std::derived_from<Tank> TankType, typename... Args> TankType &spawnObject(Args &&...args) { TankType *tank = new TankType(*this, std::forward<Args>(args)...); tanks.emplace_back(tank); return *tank; } /// Get the time step size. /// @return The time step size. float getTimeStep() const { return time_step; } /// Get the current time. /// @return The current time in steps. Time getTime() const { return time; } private: friend Object; /// Alias for the type of the smart pointers used to store the polymorphic /// objects. /// @tparam ObjectType the type of the object that the pointer points to. template <std::derived_from<Object> ObjectType> using ObjectPtr = std::unique_ptr<ObjectType, typename ObjectType::Deleter>; /// Alias for the type of the container used to store objects. /// @tparam ObjectType the type of the objects stored in the container. template <std::derived_from<Object> ObjectType> using ObjectContainer = std::vector<ObjectPtr<ObjectType>>; /// Get the arena's Box2D world. /// @return A reference to the arena's Box2D world. b2World &getB2World() { return b2_world; } /// @copydoc getB2World() const b2World &getB2World() const { return b2_world; } /// The Box2D world of the arena. 
The gravity vector is zero since the world /// is horizontal. This has to be destructed after all of the objects have /// been destructed since the destructors of the objects will access the world /// to destroy their bodies. b2World b2_world{b2Vec2(0.f, 0.f)}; /// Container of all of the objects in the arena except for tanks. ObjectContainer<Object> objects; /// Container of all of the tanks in the arena. Tank barrels can overlap with /// other objects, so they have to be kept separately and drawn in a /// consistent order after other objects. ObjectContainer<Tank> tanks; /// The number of seconds in each time step. const float time_step; /// The current time as the number of time steps since the arena was created. Time time = 0; }; } // namespace cppdiep #endif // CPPDIEP_ARENA_H arena.cpp: #include "arena.h" #include <array> #include <concepts> #include <memory> #include <utility> #include <vector> #include <Box2D/Collision/Shapes/b2ChainShape.h> #include <Box2D/Common/b2Math.h> #include <Box2D/Dynamics/b2Body.h> #include <Box2D/Dynamics/b2Fixture.h> #include <Box2D/Dynamics/b2World.h> #include <SFML/Graphics/RenderTarget.hpp> #include "box2d_categories.h" #include "bullet.h" #include "render_utils.h" #include "tank.h" #include "time.h" namespace cppdiep { Arena::Arena(float size, float time_step) : time_step(time_step) { b2BodyDef border_body_def; b2Body &border_body = *b2_world.CreateBody(&border_body_def); std::array border_vertices = { b2Vec2(size / 2.f, size / 2.f), b2Vec2(-size / 2.f, size / 2.f), b2Vec2(-size / 2.f, -size / 2.f), b2Vec2(size / 2.f, -size / 2.f)}; b2ChainShape border_chain; border_chain.CreateLoop(border_vertices.data(), border_vertices.size()); b2FixtureDef border_fixture_def; border_fixture_def.shape = &border_chain; border_fixture_def.friction = 0.25f; border_fixture_def.restitution = 0.25f; border_fixture_def.filter.categoryBits = box2d_categories::BORDER; border_fixture_def.filter.maskBits = box2d_categories::TANK; 
border_body.CreateFixture(&border_fixture_def); } void Arena::draw(sf::RenderTarget &target) const { target.clear(colors::BACKGROUND); // Bullets are drawn first so that they are underneath the tank barrels. for (const ObjectPtr<Object> &object : objects) { object->draw(target); } for (const ObjectPtr<Tank> &tank : tanks) { tank->draw(target); } } void Arena::step() { // Replace objects that need to be destroyed with objects moved from the end // of the vector. Iterating in reverse simplifies things since we don't have // to worry about skipping over objects when removing an object. auto new_end = objects.end(); for (auto it = objects.rbegin(); it != objects.rend(); ++it) { if ((*it)->step()) { *it = std::move(*--new_end); } } objects.erase(new_end, objects.end()); // The tanks have to be rendered in a consistent order because their barrels // may overlap with other tanks. std::erase_if(tanks, [](const ObjectPtr<Tank> &tank) { return tank->step(); }); b2_world.Step(time_step, 8, 3); ++time; } } // namespace cppdiep object.h: #ifndef CPPDIEP_OBJECT_H #define CPPDIEP_OBJECT_H #include <Box2D/Common/b2Math.h> #include <Box2D/Dynamics/b2Body.h> #include <Box2D/Dynamics/b2World.h> #include <SFML/Graphics/RenderTarget.hpp> #include "arena.fwd.h" namespace cppdiep { /// A game object, such as a tank, a bullet, or a polygon. class Object { public: Object(const Object &) = delete; /// Get the current position of the object. /// @return The current position of the object. b2Vec2 getPosition() const { return getB2Body().GetPosition(); } /// Get the current velocity of the object. /// @return The current velocity of the object. b2Vec2 getVelocity() const { return getB2Body().GetLinearVelocity(); } /// Draw the object to an SFML render target. /// @param target the SFML render target to draw to. virtual void draw(sf::RenderTarget &target) const = 0; protected: /// Construct an object. /// @param arena the arena that contains the object. 
The object will keep a /// reference to the arena so the /// @param b2_body_def the Box2D body definition that will be used to create /// the object's body. Object(Arena &arena, const b2BodyDef &b2_body_def); /// Destruct an object. virtual ~Object(); /// Advance the state of the object by one time step and return whether the /// object should be destroyed now. /// @return Whether the object should be destroyed now. virtual bool step() { // Health and damage haven't been implemented yet so this just returns // false. return false; } /// Get a reference to the arena that contains the object. /// @return A reference to the arena that contains the object. Arena &getArena() const { return arena; } /// Get a reference to the Box2D body of the object. /// @return A reference to the Box2D body of the object. b2Body &getB2Body() { return b2_body; } /// @copydoc getB2Body() const b2Body &getB2Body() const { return b2_body; } private: friend Arena; /// A deleter that the arena passes to the smart pointer. This is necessary /// since the destructor is not public. struct Deleter { void operator()(Object *object) const { delete object; } }; /// The arena that contains the object. Arena &arena; /// The Box2D body of the object. 
b2Body &b2_body; }; } // namespace cppdiep #endif // CPPDIEP_OBJECT_H object.cpp: #include "object.h" #include "arena.h" namespace cppdiep { Object::Object(Arena &arena, const b2BodyDef &b2_body_def) : arena(arena), b2_body(*arena.getB2World().CreateBody(&b2_body_def)) {} Object::~Object() { arena.getB2World().DestroyBody(&b2_body); } } // namespace cppdiep tank.h: #ifndef CPPDIEP_TANK_H #define CPPDIEP_TANK_H #include <Box2D/Collision/Shapes/b2Shape.h> #include <Box2D/Common/b2Math.h> #include <Box2D/Dynamics/b2Body.h> #include <Box2D/Dynamics/b2Fixture.h> #include <Box2D/Dynamics/b2World.h> #include <SFML/Graphics/Color.hpp> #include <SFML/Graphics/RenderTarget.hpp> #include "arena.fwd.h" #include "object.h" #include "render_utils.h" namespace cppdiep { /// A generic tank. class Tank : public Object { public: /// Get the radius of the tank body. /// @return the radius of the tank body. float getRadius() const { return getB2Body().GetFixtureList()->GetShape()->m_radius; } /// Get the current target position of the tank. /// @return the target position of the tank relative to the tank's position. virtual b2Vec2 getTarget() const = 0; /// Get the direction of the tank's target relative to the tank. /// @return The direction of the tank's target as an angle in radians. float getTargetAngle() const { b2Vec2 target = getTarget(); return std::atan2(target.y, target.x); } /// Get the color of the tank. /// @return The color of the tank. sf::Color getColor() const { return color; } void draw(sf::RenderTarget &target) const override; protected: /// Construct a Tank. /// @param arena the arena that contains the tank. /// @param position the initial position of the tank. /// @param radius the radius of the tank's body. /// @param color the color of the tank. Tank(Arena &arena, const b2Vec2 &position, float radius, const sf::Color &color); /// Helper function for drawing cannons. /// @param target the SFML render target to draw to. /// @param length the length of the barrel. 
The barrel starts from the center /// of the tank. /// @param width the width of the barrel. /// @param angle the angle that the cannon is pointing towards in radians. void drawCannon(sf::RenderTarget &target, float length, float width, float angle) const; /// Apply a force to move the tank. /// @param vec the direction and speed to move in. A magnitude of 1 represents /// full speed. void move(const b2Vec2 &vec) { getB2Body().ApplyForceToCenter(getMoveForce() * vec, true); } /// Fire the tank's cannon(s). virtual void fire() = 0; private: friend Arena; /// Draw the tank's cannon(s). /// @param target the SFML render target to draw to. virtual void drawCannons(sf::RenderTarget &target) const = 0; /// Get the magnitude of the force used to move the tank. /// @param the magnitude of the force used to move the tank. virtual float getMoveForce() const = 0; /// The color of the tank. sf::Color color; }; } // namespace cppdiep #endif // CPPDIEP_TANK_H tank.cpp: #include "tank.h" #include <Box2D/Collision/Shapes/b2CircleShape.h> #include <Box2D/Common/b2Math.h> #include <Box2D/Dynamics/b2Body.h> #include <Box2D/Dynamics/b2Fixture.h> #include <Box2D/Dynamics/b2World.h> #include <SFML/Graphics/CircleShape.hpp> #include <SFML/Graphics/Color.hpp> #include <SFML/Graphics/RectangleShape.hpp> #include <SFML/Graphics/RenderTarget.hpp> #include "arena.h" #include "box2d_categories.h" #include "render_utils.h" namespace cppdiep { namespace { b2BodyDef makeB2BodyDef(const b2Vec2 &position) { b2BodyDef body_def; body_def.type = b2_dynamicBody; body_def.position = position; body_def.fixedRotation = true; body_def.linearDamping = 1.f; return body_def; } } // namespace Tank::Tank(Arena &arena, const b2Vec2 &position, float radius, const sf::Color &color) : Object(arena, makeB2BodyDef(position)), color(color) { b2CircleShape body_shape; body_shape.m_radius = radius; b2FixtureDef fixture_def; fixture_def.shape = &body_shape; fixture_def.density = 1.f; fixture_def.friction = 0.3f; 
fixture_def.restitution = 0.25f; fixture_def.filter.categoryBits = box2d_categories::TANK; fixture_def.filter.maskBits = box2d_categories::TANK | box2d_categories::BORDER | box2d_categories::BULLET; getB2Body().CreateFixture(&fixture_def); } void Tank::draw(sf::RenderTarget &target) const { drawCannons(target); drawCircle(target, getPosition(), getRadius(), getColor()); } void Tank::drawCannon(sf::RenderTarget &target, float length, float width, float angle) const { sf::RectangleShape cannon_shape(sf::Vector2f(length, width)); cannon_shape.setOrigin(0.f, width / 2.f); cannon_shape.setPosition(convertVector(getPosition())); cannon_shape.setFillColor(colors::CANNON); cannon_shape.setOutlineThickness(OUTLINE_THICKNESS); cannon_shape.setOutlineColor(darken(colors::CANNON)); cannon_shape.setRotation(radiansToDegrees(angle)); target.draw(cannon_shape); } } // namespace cppdiep basic_tank.h: #ifndef CPPDIEP_BASIC_TANK_H #define CPPDIEP_BASIC_TANK_H #include <SFML/Graphics/RenderTarget.hpp> #include "tank.h" namespace cppdiep { /// A tank with a single cannon. class BasicTank : public Tank { protected: using Tank::Tank; void fire() override; private: void drawCannons(sf::RenderTarget &target) const override; float getMoveForce() const override { return 15.f * getRadius(); } }; } // namespace cppdiep #endif // CPPDIEP_BASIC_TANK_H basic_tank.cpp: #include "basic_tank.h" #include <cmath> #include <Box2D/Common/b2Math.h> #include <SFML/Graphics/RenderTarget.hpp> #include "arena.h" #include "bullet.h" namespace cppdiep { void BasicTank::drawCannons(sf::RenderTarget &target) const { float radius = getRadius(); drawCannon(target, 2 * radius, radius, getTargetAngle()); } void BasicTank::fire() { b2Vec2 target_vec = getTarget(); target_vec.Normalize(); float bullet_radius = getRadius() / 2.f; float impulse_magnitude = 10.f * getRadius(); // The bullet is spawned in the barrel just outside of the tank body to avoid // teleportation due to the bullet intersecting the tank body. 
This causes // some teleportation if the spawned bullet intersects another object. In the // future, collisions between a bullet and the tank that fired it will be // disabled and the bullet will be spawned inside the tank body. getArena().spawnObject<Bullet>( getPosition() + (getRadius() + bullet_radius) * target_vec, getVelocity(), impulse_magnitude * target_vec, bullet_radius, getColor()); // Simulate recoil by applying the same impulse in the opposite direction to // the tank. getB2Body().ApplyLinearImpulse(-impulse_magnitude * target_vec, getB2Body().GetWorldCenter(), true); } } // namespace cppdiep external_control_tank.h: #ifndef CPPDIEP_EXTERNAL_CONTROL_TANK_H #define CPPDIEP_EXTERNAL_CONTROL_TANK_H #include <concepts> #include <Box2D/Common/b2Math.h> #include "tank.h" namespace cppdiep { /// A tank controlled externally using the move() and fire() methods. /// @tparam BaseTank the type of the tank to be controlled externally. template <std::derived_from<Tank> BaseTank> class ExternalControlTank final : public BaseTank { public: using BaseTank::BaseTank; /// Apply a force to move the tank in the direction of the given vector. /// @param vec a vector indicating the direction and speed to move in. A /// magnitude of 1 indicates full speed. void move(const b2Vec2 &vec) { BaseTank::move(vec); } /// @copydoc Tank::getTarget() b2Vec2 getTarget() const override { return target; } /// Set the target point of the tank. This is the point that the tank will aim /// towards. /// @param target the target point relative to the position of the tank. void setTarget(const b2Vec2 &target) { this->target = target; } /// Fire the tank's cannon(s). void fire() { BaseTank::fire(); } private: /// The current target point of the tank. 
b2Vec2 target{0.f, 0.f}; }; } // namespace cppdiep #endif // CPPDIEP_EXTERNAL_CONTROL_TANK_H bullet.h: #ifndef CPPDIEP_BULLET_H #define CPPDIEP_BULLET_H #include <Box2D/Common/b2Math.h> #include <Box2D/Dynamics/b2Body.h> #include <Box2D/Dynamics/b2Fixture.h> #include <Box2D/Dynamics/b2World.h> #include <SFML/Graphics/RenderTarget.hpp> #include "arena.fwd.h" #include "object.h" #include "render_utils.h" #include "time.h" namespace cppdiep { /// A bullet fired from a cannon. Bullets disappear after three seconds. class Bullet : public Object { public: /// Get the radius of the bullet. /// @return The radius of the bullet. float getRadius() const { return getB2Body().GetFixtureList()->GetShape()->m_radius; } void draw(sf::RenderTarget &target) const override; protected: /// Construct a bullet. /// @param arena the arena that contains the bullet. /// @param position the initial position of the bullet. /// @param velocity the initial velocity of the bullet. /// @param impulse the impulse applied to the bullet on top of the initial /// velocity. /// @param radius the radius of the bullet. /// @param color the color of the bullet. Bullet(Arena &arena, const b2Vec2 &position, const b2Vec2 &velocity, const b2Vec2 &impulse, float radius, const sf::Color &color); bool step() override; private: friend Arena; /// The color of the bullet. sf::Color color; /// The time when the bullet should be destroyed. 
Time destroy_time; }; } // namespace cppdiep #endif // CPPDIEP_BULLET_H bullet.cpp: #include "bullet.h" #include <Box2D/Collision/Shapes/b2CircleShape.h> #include <Box2D/Dynamics/b2Body.h> #include <Box2D/Dynamics/b2World.h> #include <SFML/Graphics/RenderTarget.hpp> #include "arena.h" #include "box2d_categories.h" #include "render_utils.h" namespace cppdiep { namespace { b2BodyDef makeB2BodyDef(const b2Vec2 &position, const b2Vec2 &velocity) { b2BodyDef body_def; body_def.type = b2_dynamicBody; body_def.position = position; body_def.linearVelocity = velocity; body_def.linearDamping = 0.5f; body_def.angularDamping = 0.5f; body_def.bullet = true; return body_def; } } // namespace Bullet::Bullet(Arena &arena, const b2Vec2 &position, const b2Vec2 &velocity, const b2Vec2 &impulse, float radius, const sf::Color &color) : Object(arena, makeB2BodyDef(position, velocity)), color(color), destroy_time(arena.getTime() + 3.f / arena.getTimeStep()) { b2CircleShape body_shape; body_shape.m_radius = radius; b2FixtureDef fixture_def; fixture_def.shape = &body_shape; fixture_def.density = 1.f; fixture_def.friction = 0.3f; fixture_def.restitution = 0.25f; fixture_def.filter.categoryBits = box2d_categories::BULLET; fixture_def.filter.maskBits = box2d_categories::BULLET | box2d_categories::TANK; getB2Body().CreateFixture(&fixture_def); getB2Body().ApplyLinearImpulse(impulse, getB2Body().GetWorldCenter(), true); } void Bullet::draw(sf::RenderTarget &target) const { drawCircle(target, getPosition(), getRadius(), color); } bool Bullet::step() { if (Object::step()) { return true; } return getArena().getTime() >= destroy_time; } } // namespace cppdiep box2d_categories.h: #ifndef CPPDIEP_B2_CATEGORIES_H #define CPPDIEP_B2_CATEGORIES_H #include <Box2D/Common/b2Settings.h> namespace cppdiep { /// Box2D collision category bitmasks. namespace box2d_categories { /// Box2D collision category bitmask for the arena border. 
inline constexpr uint16 BORDER = 1u << 0; /// Box2D collision category bitmask for tanks. inline constexpr uint16 TANK = 1u << 1; /// Box2D collision category bitmask for bullets. inline constexpr uint16 BULLET = 1u << 2; } // namespace box2d_categories } // namespace cppdiep #endif // CPPDIEP_B2_CATEGORIES_H render_utils.h: #ifndef CPPDIEP_RENDER_UTILS_H #define CPPDIEP_RENDER_UTILS_H /// @file /// Constants and helper functions used for rendering. #include <numbers> #include <Box2D/Common/b2Math.h> #include <SFML/Graphics/CircleShape.hpp> #include <SFML/Graphics/Color.hpp> #include <SFML/Graphics/RenderTarget.hpp> #include <SFML/System/Vector2.hpp> namespace cppdiep { /// Colors used for rendering. namespace colors { /// Color for blue tanks and bullets. inline const sf::Color BLUE(0, 178, 225); /// Color for red tanks and bullets. inline const sf::Color RED(241, 78, 84); /// Color for tank cannons. inline const sf::Color CANNON(153, 153, 153); /// Background color of the arena. inline const sf::Color BACKGROUND(205, 205, 205); } // namespace colors /// The thickness of the outlines around the objects. This is negative to make /// the outline inside the edge of the object. inline constexpr float OUTLINE_THICKNESS = -0.125f; /// Darken a color to get the color of the outline. /// @param color the color to darken. /// @return The darkened color. inline sf::Color darken(const sf::Color &color) { return sf::Color(color.r * 0.75, color.g * 0.75, color.b * 0.75, color.a); } /// Draw a circle. /// @param target the SFML render target to draw to. /// @param position the position of the center of the circle. /// @param radius the radius of the circle. /// @param color the color of the circle. 
inline void drawCircle(sf::RenderTarget &target, const b2Vec2 &position, float radius, const sf::Color &color) { sf::CircleShape shape(radius); shape.setOrigin(radius, radius); shape.setPosition(position.x, position.y); shape.setFillColor(color); shape.setOutlineThickness(OUTLINE_THICKNESS); shape.setOutlineColor(darken(color)); target.draw(shape); } /// Convert radians to degrees. /// @param radians an angle in radians. /// @return The angle in degrees. inline float radiansToDegrees(float radians) { return radians * 180.f / std::numbers::pi_v<float>; } /// Convert a Box2D vector to an SFML vector. /// @param b2_vec a Box2D vector. /// @return The SFML vector. inline sf::Vector2f convertVector(const b2Vec2 &b2_vec) { return sf::Vector2f(b2_vec.x, b2_vec.y); } /// Convert an SFML vector to a Box2D vector. /// @param sf_vec an SFML vector. /// @return The Box2D vector. inline b2Vec2 convertVector(const sf::Vector2f &sf_vec) { return b2Vec2(sf_vec.x, sf_vec.y); } } // namespace cppdiep #endif // CPPDIEP_RENDER_UTILS_H time.h: #ifndef CPPDIEP_TIME_H #define CPPDIEP_TIME_H #include <cstdint> namespace cppdiep { /// The signed integer type that will be used to represent time in the arena as /// a number of steps. using Time = std::int64_t; } // namespace cppdiep #endif // CPPDIEP_TIME_H main.cpp: #include <Box2D/Common/b2Math.h> #include <SFML/Graphics/RenderWindow.hpp> #include <SFML/Graphics/View.hpp> #include <SFML/Window/ContextSettings.hpp> #include <SFML/Window/Event.hpp> #include <SFML/Window/Keyboard.hpp> #include <SFML/Window/Mouse.hpp> #include "arena.h" #include "basic_tank.h" #include "external_control_tank.h" #include "render_utils.h" int main() { // Set up the window. 
sf::ContextSettings settings; settings.antialiasingLevel = 4; sf::RenderWindow window(sf::VideoMode(800, 800), "CppDiep", sf::Style::Titlebar | sf::Style::Close, settings); constexpr int frame_rate = 60; window.setFramerateLimit(frame_rate); constexpr float arena_size = 20.f; // The Y size of the view is negative to flip things vertically since SFML // uses a downwards vertical axis while the arena uses an upwards vertical // axis. sf::View view(sf::Vector2f(0.f, 0.f), sf::Vector2f(arena_size, -arena_size)); window.setView(view); // Create the arena and spawn two tanks for testing. cppdiep::Arena arena(arena_size, 1.f / frame_rate); auto &tank = arena.spawnObject<cppdiep::ExternalControlTank<cppdiep::BasicTank>>( b2Vec2(0.f, 0.f), 1.f, cppdiep::colors::BLUE); arena.spawnObject<cppdiep::ExternalControlTank<cppdiep::BasicTank>>( b2Vec2(0.f, 5.f), 1.f, cppdiep::colors::RED); while (window.isOpen()) { // Make the tank cannon point towards the mouse. b2Vec2 mouse_position = cppdiep::convertVector( window.mapPixelToCoords(sf::Mouse::getPosition(window))); tank.setTarget(mouse_position - tank.getPosition()); // Process events. 
sf::Event event; while (window.pollEvent(event)) { if (event.type == sf::Event::Closed) { window.close(); } else if (event.type == sf::Event::MouseButtonPressed && event.mouseButton.button == sf::Mouse::Left) { tank.fire(); } } if (sf::Keyboard::isKeyPressed(sf::Keyboard::W)) { tank.move(b2Vec2(0.f, 1.f)); } if (sf::Keyboard::isKeyPressed(sf::Keyboard::A)) { tank.move(b2Vec2(-1.f, 0.f)); } if (sf::Keyboard::isKeyPressed(sf::Keyboard::S)) { tank.move(b2Vec2(0.f, -1.f)); } if (sf::Keyboard::isKeyPressed(sf::Keyboard::D)) { tank.move(b2Vec2(1.f, 0.f)); } arena.step(); arena.draw(window); window.display(); } } CMakeLists.txt: cmake_minimum_required(VERSION 3.18) project( CppDiep DESCRIPTION "Diep.io reimplemented in C++ with SFML and Box2D" LANGUAGES CXX) set(CMAKE_CXX_STANDARD 20) set(CMAKE_CXX_STANDARD_REQUIRED ON) if(MSVC) add_compile_options(/W4 /WX) else() add_compile_options(-Wall -Wextra -pedantic -Werror) endif() find_package( SFML 2.5 COMPONENTS graphics REQUIRED) find_package(Box2D REQUIRED) add_executable(cppdiep arena.cpp basic_tank.cpp bullet.cpp main.cpp object.cpp tank.cpp) target_link_libraries(cppdiep PRIVATE sfml-graphics Box2D) Github link: https://github.com/bkrl/cppdiep/tree/72222654a22320b2372a306894047e59ecace9a8 When you run the code, it should create a window with a blue tank and a red tank. You can move the blue tank with the WASD keys and aim with the mouse. Left-clicking will fire a bullet. Bullets collide with tanks but should pass through the border. Answer: Overall the code looks quite nice; a lot of attention to details of the C++ language, it's readable and concise. Upgrade Box2D It seems you are using an older version of Box2D. Since Box2D 2.4.0, the structure of the header files has changed significantly, and your code doesn't compile with the newer version. 
If you are "stuck" with an old version of a library and can't upgrade it (yet), then at least make sure the documentation and build system of your project correctly specify the desired version of that library. Template specialization vs. if constexpr I see you have two specializations of spawnObject(); one that spawns non-Tank objects, one that spawns Tanks. However, the code is mostly identical except for the container in which the object is put. Consider using if constexpr instead: template <std::derived_from<Object> ObjectType, typename... Args> ObjectType &spawnObject(Args &&...args) { ObjectType *object = new ObjectType(*this, std::forward<Args>(args)...); if constexpr (std::derived_from<ObjectType, Tank>) { tanks.emplace_back(object); } else { objects.emplace_back(object); } return *object; } Use more auto where appropriate In Arena::draw(), you can use auto inside the for-statements to avoid having to write out the types of object and tank. In Arena::step(), you can use auto in the parameter list of the lambda you pass to std::erase_if(). While auto might hide the type of a variable you declare, often you don't care about the actual type. Furthermore, type deduction can sometimes prevent errors; sometimes you can write the wrong type name but an implicit cast is possible, so the compiler won't complain. Use algorithms consistently It looks to me like you should be able to use std::erase_if() for objects in Arena::step(), just like you do for tanks. If you don't need the algorithm to preserve order, consider using std::ranges::partition() (see this post). Don't store a reference to the Arena in Object Your game has only one Arena, but you add a reference to it to every instance of Object. Apart from the constructor using it to initialize b2_body, most objects don't need that reference afterwards. Instead, consider passing a reference to arena only to those functions that need it, like BasicTank::fire() and Bullet::step().
You can even remove the use of getArena() from Bullet::step(), by having Bullet not store the "destroy time" of the bullet, but just the number of steps remaining, and then decrement that each time step() is called. Naming things spawnObject() could be renamed to spawn(); the Object part looks redundant. convertVector() is a bit vague. Also, what if you had three vector types to deal with in your code? I'd split the two overloads into a to_Vector2f() and a to_b2Vec2(). drawCannons() implies it draws multiple cannons, yet only one cannon is ever drawn per tank. Are you planning for multi-cannon tanks? If not, I would rename it to drawCannon(). Reduce the size of main() Most of your functions are very small, but main() stands out as being the longest. Consider refactoring it and splitting off some of the things it does into separate functions.
{ "domain": "codereview.stackexchange", "id": 43771, "tags": "c++, game, c++20, sfml, cmake" }
frequencies in sound: multiple possibilities?
Question: First, I am by no means a sound engineer (as you will guess later). I was just wondering something while looking at the waveform of a .wav: for a given shape of waveform over a duration of 2 sec, for example, how can we make sure that the frequencies the FFT gives are the only correct ones? What if very little parts of a sinusoid could be considered instead of a full continuous sinusoidal movement that just varies in amplitude? Or what if a sinusoid had a lot of very fast varying amplitudes? That would lead to an infinity of solutions, I guess. Answer: The Discrete Fourier Transform, or DFT (the FFT is an algorithm that computes the DFT), of a sequence of finite duration (which any practical transform would need to be) is identical to the transform of an infinitely long sequence formed by repeating the original sequence in time. Knowing this provides the following insights related to your question: First, anything that repeats (exact repetition) in time can only exist at discrete frequencies in the frequency domain. We see this with the Fourier Series Expansion, specifically as shown in the graphic below. The concept behind the Fourier Series Expansion is that any single-valued continuous function can be represented as a sum of sinusoidal components, and notably each component MUST have a frequency that is an integer multiple of 1/T, where T is the duration of the signal in time. Therefore IF those sinusoidal components were allowed to play out for all time (rather than being bound to the time interval [0,T]), the next cycle immediately after T would have to commence and proceed exactly as the waveform did at the start of the sequence (as each sinusoidal component would do the same). Thus it is often said that the Fourier Series Expansion decomposes any periodic function into a sum of sines and cosines (or, equivalently and I believe mathematically simpler, complex exponentials).
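The "repetition in time means discrete frequencies" point can be checked directly with a DFT. This is my own illustrative NumPy sketch, not part of the original answer: tile an arbitrary sequence of length T several times, and all of the spectral energy lands on integer multiples of 1/T.

```python
import numpy as np

rng = np.random.default_rng(1)

base = rng.standard_normal(16)   # an arbitrary length-T sequence
tiled = np.tile(base, 4)         # repeat it 4 times in time (length 64)

mag = np.abs(np.fft.fft(tiled))

# Energy may only appear at integer multiples of 1/T, i.e. at every
# 4th bin of the length-64 DFT; all other bins are zero (up to
# floating-point round-off).
on_grid = mag[::4]
off_grid = np.delete(mag, np.arange(0, 64, 4))
print(on_grid.max(), off_grid.max())
```

The off-grid bins come out at the level of floating-point noise, many orders of magnitude below the on-grid bins, exactly as the Fourier series argument predicts.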
The mathematical model of the time limited signal from [0,N-1] (for N samples of the DFT, which is equivalent to the analog bound of [0,T] in time), to also be a signal extending to infinity repeating with time provides for further intuition into the behavior and result of the DFT. Specifically we see that repeating a signal in time results in discrete uniform spaced impulses in the frequency domain, that can only exist at integer multiples of 1/T (including 0) where T is the length of the base waveform in time. The second characteristic of the DFT is that it is done on a waveform that is sampled in time. Without going into significant detail, sampling in time is associated with repetition in frequency (for those familiar with A/D and D/A conversion this will be readily apparent). So with the DFT we have both characteristics of repetition in time and sampling in time, which therefore means we will have "sampling" in frequency (discrete frequencies where only non-zero values can exist) and repetition in frequency. The repetition is an implied construct that actually helps significantly to provide an intuitive understanding of many signal processing constructs- especially when considering both analog and digital domains. To be clear, the DFT involves only a fixed duration sequence both in the time and frequency domains, but mathematically these sequences can be repeated for infinite duration with the same result. For example, to your question of what happens where there is a partial cycle? If we realize this equally represents a waveform repeating for all time, we see that in this case there would be an abrupt transition in the waveform. Such a waveform cannot be created or represented with a single sinusoidal tone. Going back to the Fourier Series Expansion, we know that it can be represented by multiple tones as long as they are spaced at integer multiples of the repetition rate. 
So where we thought one tone existed, the DFT will similarly show multiple tones, as required to create such an abrupt transition. According to the DFT these frequencies really exist, since what we are solving for in that process is the set of frequency components needed to create the time-limited time-domain waveform. The plot below shows the case above (lower plot) compared to what we would get if there were a complete integer number of cycles over the time duration used (upper plot). This is one explanation of "Spectral Leakage" in the DFT, given here as insight into the relationship between the chosen time duration of the DFT and the frequencies that result. If the waveform were changing with time (either in frequency, or in amplitude beyond the sinusoidal component itself, such as an envelope), this would require many frequency components to represent it. This is no different than a modulation view of the time-domain waveform: to transmit signals over the air we modulate the amplitude or frequency of a carrier (a frequency better suited to go over the air), which results in several frequencies being present around that carrier. If the waveform is not actually repeating with time (which is likely the case), then instead of discrete tones we will get a continuous band of frequencies around the carrier.
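This spectral-leakage behaviour is easy to reproduce numerically. Below is a minimal NumPy sketch (the sample rate, tone frequencies, and DFT length are arbitrary choices of mine, not from the question): a sine with an integer number of cycles over the window puts essentially all of its energy in a single bin, while a half-bin frequency offset (i.e. a partial cycle at the boundary of the implied periodic extension) spreads the energy across many bins.

```python
import numpy as np

fs = 1000          # sample rate (Hz)
N = 1000           # DFT length -> bin spacing fs/N = 1 Hz
t = np.arange(N) / fs

# Integer number of cycles over the window: energy lands in one bin
x_int = np.sin(2 * np.pi * 50.0 * t)
# Half-bin offset: the implied periodic extension has an abrupt jump,
# so energy "leaks" into neighbouring bins
x_frac = np.sin(2 * np.pi * 50.5 * t)

X_int = np.abs(np.fft.rfft(x_int))
X_frac = np.abs(np.fft.rfft(x_frac))

# Fraction of total energy captured by the single strongest bin
peak_int = X_int.max()**2 / np.sum(X_int**2)
peak_frac = X_frac.max()**2 / np.sum(X_frac**2)
print(f"integer cycles:     {peak_int:.3f} of energy in peak bin")
print(f"non-integer cycles: {peak_frac:.3f} of energy in peak bin")
```

For the half-bin case the strongest bin holds only about 40% of the energy; the rest is the "extra tones" the DFT needs to build the abrupt transition.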
{ "domain": "dsp.stackexchange", "id": 6764, "tags": "fft, amplitude, wave" }
How can I understand these two equations about the indirect measurement?
Question: I'm reading an article about environmental monitoring and information transfer. Suppose $S$ represents a quantum system and $E$ is the environment. Assume at time $t=0$ there are no correlations between $S$ and $E$: $\rho_{SE}(0)=\rho_{S}(0)\otimes\rho_{E}(0)$, and this composite density operator evolves under the action of $U(t) = e^{-iHt/\hbar}$, where $H$ is the total Hamiltonian. Let $P_\alpha$ be a projective operator on $E$. Then, the probability of obtaining outcome $\alpha$ in this measurement when $S$ is described by the density operator $\rho_s(t)$ is given as $$ \text{Prob}(\alpha|\rho_s(t))=\text{Tr}_E (P_\alpha\rho_E(t)) $$ and the density matrix of $S$ conditioned on the particular outcome $\alpha$ is $$ \rho_s^{\alpha}(t)= \frac{\text{Tr}_E\{(I\otimes P_\alpha)\rho_{SE}(t)(I\otimes P_\alpha)\}}{\text{Prob}(\alpha|\rho_s(t))} $$ I'm wondering where those two equations come from. Also, since the indirect measurement aims to yield information about $S$ without performing a projective (and thus destructive) direct measurement on $S$, why is there a $P_\alpha$ in the equation? Thanks!! Answer: The two equations are part of the measurement postulate of quantum mechanics which states that the probability of the outcome $m$ in a measurement described by operators $M_m$ on a state $\rho$ is $$ p(m) = \mathrm{tr}(M_m^\dagger M_m \rho)\tag1 $$ (c.f. $(2.159)$ in Nielsen & Chuang) and the post-measurement state is $$ \frac{M_m\rho M_m^\dagger}{\mathrm{tr}(M_m^\dagger M_m \rho)}\tag2 $$ (c.f. $(2.160)$ in Nielsen & Chuang). The first equation in the question follows from substitutions $$ m = \alpha \\ \rho = \rho_E(t) \\ M_m = P_\alpha $$ in $(1)$. The second follows from substitutions $$ \rho = \rho_{SE}(t) \\ M_m = I\otimes P_\alpha $$ followed by partial trace over the environment on the post-measurement state.
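As a concrete check of the two postulate equations, here is a minimal NumPy sketch (the choice of state, the CNOT standing in for $U(t)$, and all variable names are my own illustration, not from the article): a qubit $S$ is entangled with a one-qubit environment $E$, and we compute $\text{Prob}(0)$ and the conditioned state of $S$ for the projector $P_0 = |0\rangle\langle 0|$ acting only on $E$.

```python
import numpy as np

# S starts in |+>, E in |0>; a CNOT plays the role of U(t), correlating S with E
plus = np.array([1.0, 1.0]) / np.sqrt(2)
zero = np.array([1.0, 0.0])
psi = np.kron(plus, zero)                      # |psi> = |+>_S |0>_E
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
psi = CNOT @ psi                               # (|00> + |11>) / sqrt(2)
rho_SE = np.outer(psi, psi.conj())

# Projector P_0 = |0><0| on E, extended to the composite as I (x) P_0
P0 = np.outer(zero, zero)
M = np.kron(np.eye(2), P0)

# First equation: Prob(0) = Tr( (I (x) P_0) rho_SE )
prob0 = np.trace(M @ rho_SE).real
print("Prob(0) =", prob0)

# Second equation: rho_S^0 = Tr_E{ M rho_SE M } / Prob(0)
post = M @ rho_SE @ M / prob0
# Partial trace over E: index order after reshape is (s, e, s', e')
rho_S_0 = post.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print(rho_S_0)
```

For this entangled state, outcome $0$ on $E$ occurs with probability $1/2$ and pins $S$ to $|0\rangle\langle 0|$: the projector acts only on $E$, yet it yields information about $S$ through the correlations built up by $U(t)$.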
{ "domain": "quantumcomputing.stackexchange", "id": 2367, "tags": "measurement, decoherence, linear-algebra" }
Is the frequency/wavelength of light modified when multiple light sources are combined?
Question: Let's say I light a wall with two spotlights: one red and one green. Where they overlap, I'll see a yellow area on the wall. My question is whether this is caused by a modification of the frequency/wavelength or simply by my eye combining the two incoming lights. Light is "added", wavelength is modified: The eye combines two separate lights: Answer: My question is whether this is caused by a modification of the frequency/wavelength or simply by my eye combining the two incoming lights. Short answer: No. So your second picture is accurate. Frequency is not modified; the two different waves are added up by your eye to produce the light that you perceive. Most computer screens operate on an RGB colour model, i.e. they only have red, green and blue lights. So the "yellow" light you see coming from your screen contains zero photons of yellow frequency but exactly the right proportion of red, green and blue to trick your eye into firing neurons the same way it would if photons of the yellow spectrum were striking it. When "adding" light colours you start with a white wall that is un-illuminated (black), and then you add red, green and blue (the additive primary colours) lights to get any colour you want. However, when "subtracting" colours, if you start with a white sheet of paper illuminated with white light (white) and start adding paints (subtracting colour), you can make any colour by using the three primary colours red, YELLOW and blue (the subtractive primary colours), or more accurately, as referred to in the printing industry, CMYK: cyan, magenta, yellow and key (black); black is needed because adding enough coloured paint to make an image black is inefficient and expensive. If you are allowed to add and subtract colours you can mix (almost) any three different colours to generate the illusion of any desired colour, i.e. you get to choose any three colours as your primary colours. 
Exactly three colours are required since there are three different types of cones in your eye (three degrees of freedom). Also note that sRGB does not encompass all colours, so watching a movie in a cinema on reel film may display hues that cannot be rendered on a computer monitor. So far I have only dealt with the eye and completely ignored the interaction of the light with the surface. In general light is absorbed, reflected or transmitted through the medium. Reflected light can be coherent, as from a mirror, or diffuse, as from a painted wall. However, there are a large number of non-linear optical effects where photons do interact with each other and may combine. There are also interactions with the medium, such as those seen with a black light. Links are to the respective Wikipedia pages.
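The point that no yellow-frequency photons are created can be made explicit with a toy numerical sketch (the coarse wavelength grid and unit spectra below are illustrative values of mine, not measured data): where the two spotlight beams overlap, their spectra simply add, and the bin at the "yellow" wavelength stays at exactly zero. The yellow percept comes from the eye summing the red and green cone stimulation, not from any new frequency in the light.

```python
import numpy as np

# Toy spectra sampled at four wavelengths (nm): blue, green, yellow, red
wavelengths = np.array([450, 550, 575, 650])

red_spot   = np.array([0.0, 0.0, 0.0, 1.0])   # spotlight peaking at 650 nm
green_spot = np.array([0.0, 1.0, 0.0, 0.0])   # spotlight peaking at 550 nm

# Where the beams overlap, the spectra add -- no new frequency appears
overlap = red_spot + green_spot
yellow_idx = int(np.where(wavelengths == 575)[0][0])
print(overlap)               # two peaks, at 550 nm and 650 nm
print(overlap[yellow_idx])   # zero photons at the "yellow" wavelength
```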
{ "domain": "physics.stackexchange", "id": 53590, "tags": "optics, visible-light, wavelength" }
Could a star have a Saturn-like ring?
Question: Saturn's rings will never clump together, because they are within the Roche limit. Which makes me wonder if a star could have rings that are kept from clumping together due to tidal forces. Have any ring systems been observed within the Roche limit of other stars? The answers to this question consider rings in a more general sense. I would like to know specifically about rings around a star that exist due to the same mechanism as Saturn's rings. Answer: As noted in the other question, stars can form rings around them. We call them circumstellar discs. But can stars have "Saturn-like" rings with a formation mechanism similar to Saturn's (whose rings are thought to be pieces of comets, asteroids or shattered moons that broke up before they reached the planet, torn apart by Saturn's powerful gravity)? The answer is "speculative". However, I found two cases: LSPM J0207+3331, a white dwarf thought to have rings. The reason is unknown but scientists speculate that: [...] some white dwarfs — between one and four percent — show infrared emission indicating they’re surrounded by dusty disks or rings. Scientists think the dust may arise from distant asteroids and comets kicked closer to the star by gravitational interactions with displaced planets. As these small bodies approach the white dwarf, the star’s strong gravity tears them apart in a process called tidal disruption. The debris forms a ring of dust that will slowly spiral down onto the surface of the star. HR 4796A
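To get a feel for the numbers, here is a rough back-of-the-envelope sketch of the fluid Roche limit, $d \approx 2.44\,R\,(\rho_{\text{star}}/\rho_{\text{body}})^{1/3}$ (the radii and densities below are round textbook values of my own choosing): for the Sun, whose mean density is low, the limit sits only about two solar radii out, while for a dense, Earth-sized white dwarf it lies far outside the star, which fits the tidal-disruption picture for the dusty rings around white dwarfs such as LSPM J0207+3331.

```python
def roche_limit(R_primary, rho_primary, rho_body, c=2.44):
    """Fluid Roche limit in metres; c ~ 2.44 for a fluid satellite."""
    return c * R_primary * (rho_primary / rho_body) ** (1.0 / 3.0)

rho_rock = 3000.0   # kg/m^3, rocky asteroid

# Sun: R ~ 6.96e8 m, mean density ~ 1408 kg/m^3
R_sun = 6.96e8
d_sun = roche_limit(R_sun, 1408.0, rho_rock)
print(f"Sun:         {d_sun / R_sun:.1f} solar radii")

# White dwarf: roughly Earth-sized but ~1e9 kg/m^3
R_wd = 6.4e6
d_wd = roche_limit(R_wd, 1e9, rho_rock)
print(f"White dwarf: {d_wd / R_wd:.0f} stellar radii")
```

So around a Sun-like star a Saturn-like ring would have to sit implausibly close to the surface, whereas a compact white dwarf has a wide zone in which rocky bodies are tidally shredded.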
{ "domain": "astronomy.stackexchange", "id": 6679, "tags": "star, planetary-ring, roche-limit, hill-sphere" }