How to run a Gazebo docker container on a Mac
Question: Although Gazebo can be installed on a Mac directly, I found it's not very reliable in use. For example, it spins all the time when I even try to drag a simple object into the model editor. So I tried to set up Docker on my Mac, but there is no material about how to get gzclient running, although there is a way to get gzserver up and running from the Gazebo docker container.

Originally posted by JohnFred on ROS Answers with karma: 36 on 2019-02-19 Post score: 0

Original comments

Comment by gonzalocasas on 2019-02-19: Maybe an obvious question, but did you try the official Gazebo docker images? https://hub.docker.com/_/gazebo

Comment by JohnFred on 2019-02-19: Yes. The Gazebo docker container only provides a gzserver. It does not prescribe how to get the gzclient side running. Just looking for someone to share their experience, because I believe many people use a Mac for development.

Answer: Did quite some research, including running an XQuartz server on the Mac and then pointing the Gazebo docker container at XQuartz. None of it works because of the many legacy compatibility issues here and there, including XQuartz's support for newer OpenGL, etc. Also, in the long run, macOS is moving to Metal rather than OpenGL... I have also looked into other alternatives like Gzweb (the web version of the Gazebo client). Eventually, I settled on a typical PC + Ubuntu setup and everything is smooth.

Originally posted by JohnFred with karma: 36 on 2019-02-26 This answer was ACCEPTED on the original site Post score: 2

Original comments

Comment by ruffsl on 2019-02-26: The osrf/docker_images repo does host dockerfiles for gzweb you could try. I haven't tested them myself in a while though: https://github.com/osrf/docker_images/blob/086440f267921cf4ff39e90bb8e07c81e243dddc/gazebo/9/ubuntu/bionic/gzweb9/Dockerfile
{ "domain": "robotics.stackexchange", "id": 32499, "tags": "ros, gazebo, macbook, docker, ros-indigo" }
Relation of conformal symmetry and traceless energy momentum tensor
Question: In the usual string theory or conformal field theory textbooks, they state that a traceless energy-momentum tensor $T_{a}^{\phantom{a}a}=0$ (here the energy-momentum tensor is the usual one, which is symmetric and obeys the conservation law) implies a conformal theory (e.g., see page 3). I wonder how the two are related. I found the similar question "Why does Weyl invariance imply a traceless energy-momentum tensor?" and got some idea about Weyl invariance, and some more useful information from "Conformal transformation/ Weyl scaling are they two different things? Confused!", which says that conformal transformations and Weyl transformations are totally different things.

Answer: Note that under an infinitesimal change in the metric of the form $g \to g + \delta g$ the action changes by $$ \delta S = \int T^{ab} \delta g_{ab} $$ Now, under Weyl transformations we have $$ g_{ab} \to e^{2\omega} g_{ab} \qquad \implies \qquad \delta g_{ab} = 2 \omega g_{ab} $$ For Weyl transformations $\omega$ is completely arbitrary. If we consider a conformal transformation, then the metric also transforms as above, except that $\omega = \frac{1}{d} \nabla_a \xi^a$, where $\xi^a$ is a conformal Killing vector, i.e. $\omega$ takes a specific functional form. Either way, for both conformal and Weyl transformations $\delta g_{ab} =2\omega g_{ab}$. Thus, for either of these transformations, the variation of the action is $$ \delta S = 2 \int \omega T $$ Thus, if the trace of the energy-momentum tensor vanishes, $T = 0$, then $$ \delta S = 0 $$ and we have a symmetry of our theory! OK. So we have shown that if $T = 0$, then the theory is invariant under Weyl and conformal transformations. What about the converse statement? Can we infer from Weyl or conformal invariance that $T = 0$? The latter is a more subtle question. Weyl or conformal invariance implies $$ \int \omega T = 0 $$ Now, when talking about Weyl invariance, the above is true for arbitrary $\omega$.
In this case, we can most certainly conclude that $T = 0$ (for instance, take $\omega \propto \delta^4(x)$ or some smoothed-out version thereof, and we immediately reach this conclusion). When talking about conformal invariance, $\omega$ is not arbitrary and we cannot conclude that $T$ must vanish. For instance, in a flat background, $\omega$ takes the form $\lambda + a_\mu x^\mu$ where $\lambda$ and $a_\mu$ are arbitrary constants. Thus, all we can conclude is that we must have $$ \int T = 0 ~, \qquad \int x^\mu T = 0 $$ These two conditions no longer imply that $T = 0$. Thus, as per this argument, the converse statement is not necessarily true in conformal field theories. I'm not sure if there is any other argument that can be used to justify that $T$ must vanish in CFTs, but so far, all the CFTs we study have $T = 0$.
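For reference (this is my gloss; it is standard CFT material, not from the original answer), the specific form $\omega = \frac{1}{d} \nabla_a \xi^a$ quoted above comes from the conformal Killing equation, which in a flat $d$-dimensional background reads $$ \partial_a \xi_b + \partial_b \xi_a = \frac{2}{d}\,(\partial_c \xi^c)\,\eta_{ab}, \qquad \omega = \frac{1}{d}\,\partial_a \xi^a. $$ For $d > 2$ its solutions are translations and rotations ($\omega = 0$), dilatations ($\omega$ constant) and special conformal transformations ($\omega$ linear in $x$), which is why $\omega$ is at most of the form $\lambda + a_\mu x^\mu$ in the argument above.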
{ "domain": "physics.stackexchange", "id": 94518, "tags": "conformal-field-theory, stress-energy-momentum-tensor, scale-invariance" }
Simple automation executing platform in Python
Question: I'm building a platform like Rundeck/AWX, but for server reliability testing. People can log into a web interface, upload scripts, run them on servers and get statistics on them (failure / success). Each script is made up of three parts: probes to check if the server is fine, methods to do stuff to the server, and rollbacks to reverse what we did to the server. First we run the probes; if they pass we run the methods, wait for a time set by the user who created the method, then run the probes again to check if the server self-healed. If not, we run the rollbacks and the probes again, then send the data to the db. I have limited experience with programming as a job and am very unsure if what I'm doing is good, let alone efficient, so I would love to get some really harsh criticism. This is the micro-service that is in charge of running the scripts of the user's request; it gets a DNS and the fault name (a fault is the whole object of probes/methods/rollbacks).

```python
# injector.py
import requests
from time import sleep
import subprocess
import time
import script_manipulator as file_manipulator


class InjectionSlave():
    def __init__(self, db_api_url="http://chaos.db.openshift:5001"):
        self.db_api_url = db_api_url

    def initiate_fault(self, dns, fault):
        return self._orchestrate_injection(dns, fault)

    def _orchestrate_injection(self, dns, fault_name):
        try:
            # Gets fault full information from db
            fault_info = self._get_fault_info(fault_name)
        except Exception as E:
            return {"exit_code": "1", "status": "Injector failed gathering facts"}
        try:
            # Runs the probes, methods and rollbacks by order.
            logs_object = self._run_fault(dns, fault_info)
        except:
            return {"exit_code": "1", "status": "Injector failed injecting fault"}
        try:
            # Sends logs to db to be stored in the "logs" collection
            db_response = self._send_result(dns, logs_object, "logs")
            return db_response
        except Exception as E:
            return {"exit_code": "1", "status": "Injector failed sending logs to db"}

    def _get_fault_info(self, fault_name):
        # Get json object from db rest api
        db_fault_api_url = "{}/{}/{}".format(self.db_api_url, "fault", fault_name)
        fault_info = requests.get(db_fault_api_url).json()
        # Get the names of the parts of the fault
        probes = fault_info["probes"]
        methods = fault_info["methods"]
        rollbacks = fault_info["rollbacks"]
        name = fault_info["name"]
        fault_structure = {'probes': probes, 'methods': methods, 'rollbacks': rollbacks}
        # fault_section can be the probes/methods/rollbacks part of the fault
        for fault_section in fault_structure.keys():
            fault_section_parts = []
            # section_part refers to a specific part of the probes/methods/rollbacks
            for section_part in fault_structure[fault_section]:
                section_part_info = requests.get("{}/{}/{}".format(self.db_api_url, fault_section, section_part)).json()
                fault_section_parts.append(section_part_info)
            fault_structure[fault_section] = fault_section_parts
        fault_structure["name"] = name
        return fault_structure

    def _run_fault(self, dns, fault_info):
        try:
            # Gets fault parts from fault_info
            fault_name = fault_info['name']
            probes = fault_info['probes']
            methods = fault_info['methods']
            rollbacks = fault_info['rollbacks']
        except Exception as E:
            logs_object = {'name': "failed_fault", 'exit_code': '1', 'status': 'expirement failed because parameters in db were missing ', 'error': E}
            return logs_object
        try:
            method_logs = {}
            rollback_logs = {}
            probe_after_method_logs = {}
            # Run probes and get logs and final probes result
            probes_result, probe_logs = self._run_probes(probes, dns)
            # If probes all passed continue
            if probes_result is True:
                probe_logs['exit_code'] = "0"
                probe_logs['status'] = "Probes checked on victim server successfully"
                # Run methods and get logs and how much time to wait until checking self recovery
                methods_wait_time, method_logs = self._run_methods(methods, dns)
                # Wait the expected recovery wait time
                sleep(methods_wait_time)
                probes_result, probe_after_method_logs = self._run_probes(probes, dns)
                # Check if server self healed after injection
                if probes_result is True:
                    probe_after_method_logs['exit_code'] = "0"
                    probe_after_method_logs['status'] = "victim succsessfully self healed after injection"
                else:
                    probe_after_method_logs['exit_code'] = "1"
                    probe_after_method_logs['status'] = "victim failed self healing after injection"
                    # If server didn't self heal run rollbacks
                    for rollback in rollbacks:
                        part_name = rollback['name']
                        part_log = self._run_fault_part(rollback, dns)
                        rollback_logs[part_name] = part_log
                    sleep(methods_wait_time)
                    probes_result, probe_after_method_logs = self._run_probes(probes, dns)
                    # Check if server healed after rollbacks
                    if probes_result is True:
                        rollbacks['exit_code'] = "0"
                        rollbacks['status'] = "victim succsessfully healed after rollbacks"
                    else:
                        rollbacks['exit_code'] = "1"
                        rollbacks['status'] = "victim failed healing after rollbacks"
            else:
                probe_logs['exit_code'] = "1"
                probe_logs['status'] = "Probes check failed on victim server"
            logs_object = {'name': fault_name, 'exit_code': '0', 'status': 'expirement ran as expected', 'rollbacks': rollback_logs, 'probes': probe_logs, 'method_logs': method_logs, 'probe_after_method_logs': probe_after_method_logs}
            if logs_object["probe_after_method_logs"]["exit_code"] == "0":
                logs_object["successful"] = True
            else:
                logs_object["successful"] = False
        except Exception as E:
            logs_object = {'name': fault_name, 'exit_code': '1', 'status': 'expirement failed because of an unexpected reason', 'error': E}
        return logs_object

    def _inject_script(self, dns, script_path):
        # Run script
        proc = subprocess.Popen("python {} -dns {}".format(script_path, dns),
                                stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
        # Get output from proc, turn it from binary to ascii and then remove \n if there is one
        output = proc.communicate()[0].decode('ascii').rstrip()
        return output

    def _run_fault_part(self, fault_part, dns):
        script, script_name = file_manipulator._get_script(fault_part)
        script_file_path = file_manipulator._create_script_file(script, script_name)
        logs = self._inject_script(dns, script_file_path)
        file_manipulator._remove_script_file(script_file_path)
        return logs

    def _str2bool(self, output):
        return output.lower() in ("yes", "true", "t", "1")

    def _run_probes(self, probes, dns):
        probes_output = {}
        # Run each probe and get back a True/False boolean result
        for probe in probes:
            output = self._run_fault_part(probe, dns)
            result = self._str2bool(output)
            probes_output[probe['name']] = result
        probes_result = probes_output.values()
        # If one of the probes returned False the probes check failed
        if False in probes_result:
            return False, probes_output
        return True, probes_output

    def _get_method_wait_time(self, method):
        try:
            return method['method_wait_time']
        except Exception as E:
            return 0

    def _get_current_time(self):
        current_time = time.strftime('%Y%m%d%H%M%S')
        return current_time

    def _run_methods(self, methods, dns):
        method_logs = {}
        methods_wait_time = 0
        for method in methods:
            part_name = method['name']
            part_log = self._run_fault_part(method, dns)
            method_wait_time = self._get_method_wait_time(method)
            method_logs[part_name] = part_log
            methods_wait_time += method_wait_time
        return methods_wait_time, method_logs

    def _send_result(self, dns, logs_object, collection="logs"):
        # Get current time to timestamp the object
        current_time = self._get_current_time()
        # Creating object we will send to the db
        db_log_object = {}
        db_log_object['date'] = current_time
        db_log_object['name'] = "{}-{}".format(logs_object['name'], current_time)
        db_log_object['logs'] = logs_object
        db_log_object['successful'] = logs_object['successful']
        db_log_object['target'] = dns
        # Send POST request to db api in the logs collection
        db_api_logs_url = "{}/{}".format(self.db_api_url, collection)
        response = requests.post(db_api_logs_url, json=db_log_object)
        return response.content.decode('ascii')
```

```python
# script_manipulator.py
import os
import requests


def _get_script(fault_part):
    file_share_url = fault_part['path']
    script_name = fault_part['name']
    script = requests.get(file_share_url).content.decode('ascii')
    return script, script_name


def _create_script_file(script, script_name):
    injector_home_dir = "/root"
    script_file_path = '{}/{}'.format(injector_home_dir, script_name)
    with open(script_file_path, 'w') as script_file:
        script_file.write(script)
    return script_file_path


def _remove_script_file(script_file_path):
    os.remove(script_file_path)
```

Answer: This is a bit much to go through all at once. It would be better if you could separate out the general concept, illustrated by examples, as a single review, and then the specific implementation of components for other reviews. I'm afraid I can't give much feedback on the overall concept, but I will highlight some areas that stood out to me.

Configuration

You have hardcoded configuration scattered throughout your code. This not only makes it more difficult to update, but also makes it inflexible. There are a range of options, but it will depend on your specific preferences and needs.

```python
def __init__(self, db_api_url="http://chaos.db.openshift:5001"):

current_time = time.strftime('%Y%m%d%H%M%S')

def _str2bool(self, output):
    return output.lower() in ("yes", "true", "t", "1")
```

Path manipulation

Don't do it manually! Trying to use string manipulation to concatenate file paths is full of pitfalls. Instead, you should use the pathlib standard library, which removes all the headaches of worrying about getting the correct separator characters etc. You should also not hard-code configuration into your functions; at least provide a means of overriding it.
For example your _create_script_file function:

```python
def _create_script_file(script, script_name):
    injector_home_dir = "/root"
    script_file_path = '{}/{}'.format(injector_home_dir, script_name)
    with open(script_file_path, 'w') as script_file:
        script_file.write(script)
    return script_file_path
```

Could be rewritten:

```python
def _create_script_file(script, script_name, injector_home_dir="/root"):
    script_file_path = Path(injector_home_dir) / script_name
    with open(script_file_path, 'w') as script_file:
        script_file.write(script)
    return script_file_path
```

Even better, load your injector_home_dir from configuration, or load it as a Path object in an initializer or somewhere.

String literals

This may be more of a personal preference, but I think f-strings are far more readable than string formatting:

```python
db_fault_api_url = "{}/{}/{}".format(self.db_api_url, "fault", fault_name)
```

vs

```python
db_fault_api_url = f"{self.db_api_url}/fault/{fault_name}"
```

List/dictionary comprehension

In this section you appear to be essentially filtering a dictionary. This can be greatly simplified since you're reusing the keys:

```python
# Get the names of the parts of the fault
probes = fault_info["probes"]
methods = fault_info["methods"]
rollbacks = fault_info["rollbacks"]
name = fault_info["name"]
fault_structure = {'probes': probes, 'methods': methods, 'rollbacks': rollbacks}
```

```python
# Get the names of the parts of the fault
parts = ["probes", "methods", "rollbacks", "name"]
fault_structure = {key: value for key, value in fault_info.items() if key in parts}
```

The keys used in parts appear to be reused in various places, so they are a good candidate for storing in configuration.

Exception handling

I'm not keen on this section. There is a lot of repeated code; I would much prefer to return a value based on the exception. You also have what is essentially a bare exception, where you catch any type of exception.
```python
def _orchestrate_injection(self, dns, fault_name):
    try:
        # Gets fault full information from db
        fault_info = self._get_fault_info(fault_name)
    except Exception as E:
        return {"exit_code": "1", "status": "Injector failed gathering facts"}
    try:
        # Runs the probes, methods and rollbacks by order.
        logs_object = self._run_fault(dns, fault_info)
    except:
        return {"exit_code": "1", "status": "Injector failed injecting fault"}
    try:
        # Sends logs to db to be stored in the "logs" collection
        db_response = self._send_result(dns, logs_object, "logs")
        return db_response
    except Exception as E:
        return {"exit_code": "1", "status": "Injector failed sending logs to db"}
```

Use a single try/except block, store the response and then finally return at the end:

```python
def _orchestrate_injection(self, dns, fault_name):
    try:
        # Gets fault full information from db
        fault_info = self._get_fault_info(fault_name)
        # Runs the probes, methods and rollbacks by order.
        logs_object = self._run_fault(dns, fault_info)
        # Sends logs to db to be stored in the "logs" collection
        db_response = self._send_result(dns, logs_object, "logs")
    except SpecificExceptionType as E:
        # Examine exception and determine return message
        if E.args == condition:
            exception_message = ""
        else:
            exception_message = str(E)
        db_response = {"exit_code": "1", "status": exception_message}
    return db_response
```
Repetition and encapsulation

Consider where you're repeating code, or where large functions can be broken down into smaller, reusable parts. Your _run_fault method is large, with a lot of branching. An obvious repetition is where you update the exit code:

```python
# Check if server healed after rollbacks
if probes_result is True:
    rollbacks['exit_code'] = "0"
    rollbacks['status'] = "victim succsessfully healed after rollbacks"
else:
    rollbacks['exit_code'] = "1"
    rollbacks['status'] = "victim failed healing after rollbacks"
```

This makes for a nice little function:

```python
def update_exit_status(log, succeeded, status_message=""):
    if not status_message:
        if succeeded:
            status_message = "victim successfully healed after rollbacks"
        else:
            status_message = "victim failed healing after rollbacks"
    log["exit_code"] = "0" if succeeded else "1"
    log["status"] = status_message
    return log
```

You use a lot of dictionary manipulation throughout; it could be worthwhile to make a small class to contain this information. This would have the benefit of removing the need for so many magic strings where you retrieve information by keys; instead you could use the properties of your class. You could also then contain some of the data-handling logic within your class, instead of spreading it throughout the rest of your methods.
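As a minimal sketch of that last suggestion (the names FaultLog and to_db_payload are hypothetical, not part of the original code), a small dataclass could replace the raw log dictionaries and their magic-string keys:

```python
from dataclasses import dataclass, field, asdict

# Hypothetical container for one fault run's logs; replaces the ad-hoc dicts
# keyed by magic strings like "exit_code" and "status".
@dataclass
class FaultLog:
    name: str
    exit_code: str = "0"
    status: str = ""
    probes: dict = field(default_factory=dict)
    methods: dict = field(default_factory=dict)
    rollbacks: dict = field(default_factory=dict)

    @property
    def successful(self) -> bool:
        # Single source of truth for the success convention ("0" = success)
        return self.exit_code == "0"

    def to_db_payload(self) -> dict:
        # The one place that knows the shape the DB API expects
        payload = asdict(self)
        payload["successful"] = self.successful
        return payload


log = FaultLog(name="cpu-burn", status="expirement ran as expected")
print(log.successful)  # → True
```

With something like this, the branches in _run_fault would set attributes instead of dictionary keys, and _send_result would just call to_db_payload().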
{ "domain": "codereview.stackexchange", "id": 38594, "tags": "python, python-3.x, mongodb, automation" }
Current flowing through a closed conducting loop in a time-dependent magnetic field
Question: Consider a closed conducting loop (e.g., a circular wire) in a time-dependent magnetic field. Assume that the resistance per unit length of the wire is $r$ and that its total length is $L$. If the quantity $$ \frac{dB}{dt} $$ is known, is it possible to compute the current $I$ flowing in the circular loop? From Maxwell's equations, we know that $$ \nabla \times \vec{E} = -\frac{\partial \vec{B}}{\partial t} $$ Moreover, since a potential $V$ cannot be introduced, we cannot use the simple Ohm's law $$ \Delta V =I (rL) $$

Answer: Yes, using Faraday's Law of Induction. The electric field along the circular wire satisfies $$\int_{\partial C}\bf{E}\cdot d\bf {r}= -\int\int_{C} \frac{\partial \bf{B}}{\partial t} \cdot d\bf{A} , $$ where $\bf{E}$ is the electric field, $C$ is the disk enclosed by the loop of wire, and ${\partial C}$ is the circular wire. Then one can use Ohm's law to relate the current to the electric field via the conductivity $\sigma$: $$\bf{J} = \sigma \bf{E},$$ where $\bf{J}$ is the induced current density. The conductivity is the reciprocal of the resistivity: $$\sigma = \frac{1}{\rho}.$$
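To complete the computation the question asks for (my own final step, using the quantities defined in the question): for a circular loop of radius $a$, so that $L = 2\pi a$ and the total resistance is $R = rL$, and assuming a spatially uniform $\frac{dB}{dt}$ through the loop, Faraday's law gives $$ \mathcal{E} = -\frac{d\Phi}{dt} = -\pi a^2 \frac{dB}{dt}, \qquad I = \frac{|\mathcal{E}|}{rL} = \frac{\pi a^2}{2\pi a\, r}\left|\frac{dB}{dt}\right| = \frac{a}{2r}\left|\frac{dB}{dt}\right|. $$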
{ "domain": "physics.stackexchange", "id": 68937, "tags": "electromagnetism, electrical-resistance, maxwell-equations" }
jQuery substitute for multiple hovers
Question: I would like to find a more efficient way to use jQuery for my question-and-answer page. Here is the code which I want to change. When a .q_container is hovered, I want its corresponding answer div to slide down.

```javascript
$(document).ready(function() {
    $('#qc1').hover(
        function () { $('#a1').slideDown('fast'); },
        function () { $('#a1').slideUp('fast'); }
    );
    $("#qc2").hover(
        function () { $('#a2').slideDown('fast'); },
        function () { $('#a2').slideUp('fast'); }
    );
    $("#qc3").hover(
        function () { $('#a3').slideDown('fast'); },
        function () { $('#a3').slideUp('fast'); }
    );
});
```

All of my code:

```html
<html>
<head>
<style type="text/css">
* { padding: 0px; margin: 0px; }
body { background-color: black; }
#container {
    width: 1000px;
    min-height: 500px;
    background-color: #3b3b3b;
    padding: 10px;
    margin-left: auto;
    margin-right: auto;
}
.q_container { width: 300px; }
.question {
    border-radius: 5px 5px 0px 0px;
    width: 300px;
    height: 30px;
    background-color: red;
    padding: 4px;
}
.answer {
    border-radius: 0px 0px 5px 5px;
    width: 300px;
    height: 100px;
    display: none;
    background-color: blue;
    overflow: hidden;
    padding: 4px;
}
</style>
<script type="text/javascript" src="http://code.jquery.com/jquery-1.7.2.min.js"></script>
<script type="text/javascript">
$(document).ready(function() {
    $('#qc1').hover(
        function () { $('#a1').slideDown('fast'); },
        function () { $('#a1').slideUp('fast'); }
    );
    $("#qc2").hover(
        function () { $('#a2').slideDown('fast'); },
        function () { $('#a2').slideUp('fast'); }
    );
    $("#qc3").hover(
        function () { $('#a3').slideDown('fast'); },
        function () { $('#a3').slideUp('fast'); }
    );
});
</script>
</head>
<body>
<div id="container">
    <div class="q_container" id="qc1">
        <div class="question">What is a question?</div>
        <div class="answer" id="a1">It is a way to discover something.</div>
    </div>
    <div style="width:300px;height:10px;"></div>
    <div class="q_container" id="qc2">
        <div class="question">What is a question2?</div>
        <div class="answer" id="a2">It is a way to discover something2.</div>
    </div>
    <div style="width:300px;height:10px;"></div>
    <div class="q_container" id="qc3">
        <div class="question">What is a question3?</div>
        <div class="answer" id="a3">It is a way to discover something3.</div>
    </div>
</div>
</body>
</html>
```

Answer: Working Demo http://jsfiddle.net/jEjYp/2/ Feel free to play around with the code or demo. Hope it fits your cause. :)

Code:

```javascript
$(document).ready(function() {
    $('#qc1,#qc2,#qc3').hover(
        function () { $(this).find('.answer').slideDown('fast'); },
        function () { $(this).find('.answer').slideUp('fast'); }
    );
});
```
{ "domain": "codereview.stackexchange", "id": 2119, "tags": "javascript, jquery, performance, html, css" }
When does the total time derivative of the Hamiltonian equal its partial time derivative?
Question: When does the total time derivative of the Hamiltonian equal the partial time derivative of the Hamiltonian? In symbols, when does $\frac{dH}{dt} = \frac{\partial H}{\partial t}$ hold? In Thornton & Marion, there is an identity in one of the problems: for any function $g$ of the generalized coordinates and momenta, the following is true: $$ \frac{dg}{dt} = [g,H] + \frac{\partial g}{\partial t},$$ where $H$ is the Hamiltonian. It seems to me that if we let $g = H$, then, since the Hamiltonian clearly commutes with itself, $\frac{dH}{dt} = \frac{\partial H}{\partial t}$ is always true. Is this the correct way to look at it?

Answer: I claim that: The partial and total time derivatives of the Hamiltonian are equal whenever the Hamiltonian is evaluated on a solution to Hamilton's equations of motion. For conceptual simplicity, let's restrict the discussion to systems with a two-dimensional phase space $\mathcal P$ with generalized coordinates $(q,p)$. It's important to note what the total time derivative and partial time derivative mean in this context. In particular, recall that the Hamiltonian is a function that maps a pair consisting of a point $(q,p)$ in phase space and a point $t$ in time to a real number $H(q,p,t)$. When we say that we are taking the partial time derivative of $H$, we mean that we are taking a derivative with respect to its last argument (in my notation).
When we say that we are taking a total time derivative, we have in mind evaluating the phase space arguments of the Hamiltonian on a parameterized path $(q(t), p(t))$ in phase space, then taking the derivative with respect to $t$ of the resulting expression, like this: \begin{align} \frac{d}{dt}\Big(H(q(t), p(t), t)\Big) \end{align} If we use the chain rule, we find that this total time derivative can be related to the partial time derivative of $H$ as follows: \begin{align} \frac{d}{dt}\Big(H(q(t), p(t), t)\Big) = \frac{\partial H}{\partial q}(q(t), p(t), t) \dot q(t) + \frac{\partial H}{\partial p}(q(t), p(t), t) \dot p(t) + \frac{\partial H}{\partial t}(q(t), p(t), t) \end{align} I have deliberately not abbreviated notation here, to make explicit what exactly is going on so that there is no confusion. For example, the expression \begin{align} \frac{\partial H}{\partial q}(q(t), p(t), t) \end{align} means that we take the partial derivative of $H$ with respect to its first argument (which I labeled $q$), and then evaluate the resulting function on $(q(t), p(t), t)$. Now the question is, when are the total and partial time derivatives the same? Well, the relationship between them that we derived above shows that this happens if and only if the other terms in the equation vanish: \begin{align} \frac{\partial H}{\partial q}(q(t), p(t), t) \dot q(t) + \frac{\partial H}{\partial p}(q(t), p(t), t) \dot p(t)=0 \end{align} Notice, now, that this equation definitely does not hold for a general path $(q(t), p(t))$ in phase space. I'll leave it to you to find a simple counterexample. So, for what paths does this relationship hold? Well, notice that this relationship is satisfied provided the path satisfies Hamilton's equations: \begin{align} \dot q(t) &= \frac{\partial H}{\partial p}(q(t), p(t), t) \\ \dot p(t) &= -\frac{\partial H}{\partial q}(q(t), p(t), t) \end{align} In other words, we have demonstrated the claim I started with.
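A quick consistency check of this claim (my own example): for a harmonic oscillator with time-dependent frequency, $H(q,p,t) = \frac{p^2}{2m} + \frac{1}{2} m\,\omega(t)^2 q^2$, the on-shell cross terms cancel exactly, \begin{align} \frac{\partial H}{\partial q}\dot q + \frac{\partial H}{\partial p}\dot p = m\omega^2 q \cdot \frac{p}{m} + \frac{p}{m}\cdot\left(-m\omega^2 q\right) = 0, \end{align} so that along a solution $\frac{d}{dt}H(q(t),p(t),t) = \frac{\partial H}{\partial t} = m\,\omega(t)\,\dot\omega(t)\,q(t)^2$, which vanishes only when $\omega$ is constant.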
{ "domain": "physics.stackexchange", "id": 92088, "tags": "classical-mechanics, time, hamiltonian-formalism, differentiation, hamiltonian" }
Metabolic efficiency for fats and sugars
Question: I am making an exercise for physics students about the first law of thermodynamics, heat of combustion and heat of evaporation. My idea is to use a cyclist who runs on fats and sugars, with the proportion determined by the respiratory quotient. The amount of evaporated sweat would be determined from the inefficiency of using the fuel. Unfortunately I am unable to find efficiency data to form the exercise. So, what are typical efficiencies for running skeletal muscles on either fat or sugar molecules?

EDIT: Assuming the respiratory quotient measured at the time of cycling is 0.7, all energy is obtained from oxidizing fats: $$ C_{16}H_{32}O_2 + 23 O_2 \rightarrow 16 CO_2 + 16 H_2O + \Delta G$$ What part of the $\Delta G$ can do mechanical work $A$? Or simply, what is a typical value of the efficiency coefficient $$\mu = \frac{A}{\Delta G}$$

Answer: This is a very complicated question. You need to rephrase your question to be more specific. Efficiencies of energy utilization from glucose in the presence of O2 (aerobic respiration) are considered to be in the 40% range. Quite good when you consider that an automobile combustion engine is in the 20-25% range. There is some evidence that beta-hydroxybutyrate (B-OHB) actually provides more available energy for work than does glucose. See Paper Here. However, what is often overlooked is the energy needed to generate B-OHB from fat during ketosis, which results in a net loss in efficiency compared to glucose. One of my favorite blogs is Peter Attia's "The Eating Academy." He has a Post that goes into this subject somewhat. I know this isn't a complete answer to your question, but again, you need to be more specific: respiration with or without O2? Efficiencies in generating ATP from the source material, or only net efficiencies after ATP -> ADP?
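For the exercise itself, here is a rough numerical sketch of the first-law bookkeeping. All constants are approximate values I am supplying (heat of combustion of palmitic acid roughly 10^4 kJ/mol, an assumed muscular efficiency of 25%, latent heat of vaporization of water roughly 2.4 MJ/kg), not figures from the answer, and the model assumes all waste heat is shed by evaporating sweat:

```python
# Rough sketch: sweat evaporated per mole of palmitic acid oxidized,
# assuming all inefficiency is dissipated by evaporation.
DELTA_H_PALMITATE = 1.0e4   # kJ/mol, heat of combustion of C16H32O2 (approx.)
EFFICIENCY = 0.25           # fraction of chemical energy converted to work (assumed)
L_VAP_SWEAT = 2.4e3         # kJ/kg, latent heat of vaporization of water (approx.)

def sweat_per_mole(delta_h=DELTA_H_PALMITATE, mu=EFFICIENCY, l_vap=L_VAP_SWEAT):
    """Kilograms of sweat evaporated per mole of fat, by the first law."""
    work = mu * delta_h            # kJ of mechanical work
    waste_heat = delta_h - work    # kJ rejected as heat
    return waste_heat / l_vap      # kg of water evaporated

print(sweat_per_mole())  # ≈ 3.1 kg of sweat per mole of palmitate
```

Students can then vary mu between the fat and sugar cases once realistic efficiency values are chosen.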
{ "domain": "biology.stackexchange", "id": 4633, "tags": "metabolism, bioenergetics, fat-metabolism" }
How can I find the force of a solenoid with a moving plunger?
Question: It seems like it should be a simple equation, until I realized that the core isn't magnetized until it is induced, then there is a dipole moment, and then as it moves the core of the solenoid gradually changes from air to the core material. This should be a differential equation, I believe. I'm having a hard time finding any information on it.

Answer: The origin of the force is indeed complicated and linked to edge effects. But if the core plunges deep into the solenoid and extends well beyond it, the force can be found through an energy balance. The magnetic excitation is $H = nI$ inside the solenoid and 0 outside ($n$ is the number of turns per unit length). At the top of the magnetic core, the magnetic field is zero, and at the bottom: inside the magnetic core $B=\mu nI$ and outside the core $B=\mu_0 nI$. When the magnetic core of cross-section $s$ moves up by $dx$, over the volume $s\,dx$ we replace the field $\mu nI$ by $\mu_0 nI$. In this volume, the magnetic energy $\left(B^2/2\mu\right)s\,dx=s\,dx\,\mu{(nI)}^2/2$ is replaced by $s\,dx\,\mu_0{(nI)}^2/2$, and the variation of magnetic energy is: $dE_m=-s\,dx(\mu{-\mu_0)(nI)}^2/2$. When the current is imposed, we know that the magnetic force is: $F_x=+\frac{dE_m}{dx}=-s(\mu{-\mu_0)(nI)}^2/2$: an attractive force. Hope it can help. (And sorry for my poor English.)
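A quick numerical sketch of the magnitude of that force (the parameter values below are illustrative choices of mine, not from the answer):

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def plunger_force(n, current, area, mu_r):
    """Magnitude of the attractive force |F| = s*(mu - mu0)*(n*I)^2 / 2
    from the energy balance above, valid for a deeply inserted core."""
    mu = mu_r * MU0
    return area * (mu - MU0) * (n * current) ** 2 / 2

# Illustrative values: 5000 turns/m, 2 A, 1 cm^2 core, relative permeability 1000
f = plunger_force(n=5000, current=2.0, area=1e-4, mu_r=1000)
print(f"{f:.1f} N")  # → 6.3 N
```

Note how strongly the force scales with the ampere-turns $nI$ (quadratically), which is why small solenoid actuators use many turns.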
{ "domain": "physics.stackexchange", "id": 80881, "tags": "electromagnetism, magnetic-fields, electromagnetic-induction, inductance, magnetic-moment" }
Octomap + Rosjava?
Question: First of all, sorry if the question is foolish. I am going to replace Player (which I use with PlayerClient) with ROS (which I intend to use with Rosjava). In the current implementation I use a custom-defined map type, but I need to replace it with Octomap. I know that there is an Octomap plugin to use with ROS, but I need to know if it is possible to retrieve the map using Rosjava and obtain information about it. Thank you in advance.

Originally posted by AdrianGonzalez on ROS Answers with karma: 27 on 2014-03-03 Post score: 0

Answer: You can receive the octomap message with rosjava and interpret the information yourself. However, the octomap library itself that makes this usable is written in C++. AFAIK there are no Java bindings.

Originally posted by dornhege with karma: 31395 on 2014-03-03 This answer was ACCEPTED on the original site Post score: 2

Original comments

Comment by AdrianGonzalez on 2014-05-05: Thank you. I have implemented a basic Java binding for the main functionalities of Octomap, so I hope to integrate Octomap in my planner.

Comment by Daniel Stonier on 2014-05-13: If you are happy with the bindings, these would be nice to package in the core rosjava repos - perhaps rosjava_extras.
{ "domain": "robotics.stackexchange", "id": 17141, "tags": "octomap, rosjava" }
Best regression model to use for sales prediction
Question: I have the following variables along with sales data going back a few years:

date                       # simple date, can be split into year, month etc.
shipping_time (0-6 weeks)  # 0 weeks means in stock; more weeks means the product is out of stock but a shipment is on the way to the warehouse. Longer shipping times have a significant impact on sales.
sales                      # amount of products sold

I need to predict the sales (which vary seasonally) while taking into account the shipping time. What would be a simple regression model that would produce reasonable results? I tried linear regression with only date and sales, but this does not account for seasonality, so the prediction is rather weak.

Edit: As a measure of accuracy, I will withhold a random sample of data from the input and compare against the result. Extra points if it can be easily done in python/scipy. Data can look like this:

--------------------------------------------
| date       | delivery_time | sales       |
--------------------------------------------
| 2015-01-01 | 0             | 10          |
--------------------------------------------
| 2015-01-01 | 7             | 2           |
--------------------------------------------
| 2015-01-02 | 7             | 3           |
--------------------------------------------
...

Answer: This is a pretty classic ARIMA dataset. ARIMA is implemented in the StatsModels package for Python, the documentation for which is available here. An ARIMA model with seasonal adjustment may be the simplest reasonably successful forecast for a complex time series such as sales forecasting. It may (probably will) be the case that you need to combine the method with an additional model layer to detect additional fluctuation beyond the auto-regressive function of your sales trend. Unfortunately, simple linear regression models tend to fare quite poorly on time series data.
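Before reaching for ARIMA, one hedged baseline worth trying (my own sketch, not part of the answer; function names are mine and it assumes only numpy) is ordinary least squares with one dummy per month for the seasonality and delivery time as an extra regressor:

```python
import numpy as np

def fit_seasonal_ols(months, delivery_weeks, sales):
    """Least-squares fit of: sales ~ month dummies + delivery_time.
    months: 1-12, delivery_weeks: weeks until stock, sales: observed amounts."""
    months = np.asarray(months)
    X = np.zeros((len(months), 13))
    X[np.arange(len(months)), months - 1] = 1.0   # one dummy column per month
    X[:, 12] = delivery_weeks                     # linear delivery-time effect
    coef, *_ = np.linalg.lstsq(X, np.asarray(sales, dtype=float), rcond=None)
    return coef  # 12 monthly base levels + 1 delivery slope

def predict(coef, month, delivery_weeks):
    return coef[month - 1] + coef[12] * delivery_weeks

# Toy data shaped like the table above: each week of delay costs ~1 sale
months = [1, 1, 2, 2]
delivery = [0, 7, 0, 7]
sales = [10, 3, 12, 5]
coef = fit_seasonal_ols(months, delivery, sales)
print(round(predict(coef, 1, 0), 1))  # → 10.0
```

This won't capture trend or autocorrelation the way seasonal ARIMA does, but it gives an interpretable floor to beat on the withheld sample.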
{ "domain": "datascience.stackexchange", "id": 1888, "tags": "predictive-modeling, regression" }
Basic expressive Swift background thread execution
Question: I created a basic background thread class in swift to replace the ugliness of: dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) { () -> Void in // background thread code dispatch_async(dispatch_get_main_queue()) { () -> Void in // done, back to main thread } } with the expressiveness of: Background { () -> () in // background code }.afterInterval(3) { () -> () in // main thread, run after interval IF not done }.completion { () -> () in // done, main thread } afterInterval and completion are optional and interchangeable. How reliable is this approach? Can we ever return from init(task) before completion is added, using the dot syntax? I have tested to my abilities and it seems rather bulletproof, but am wondering about the thoughts here. class Background { private var done = false { didSet { if done { taskInterval = nil dispatch_async(dispatch_get_main_queue()) { () -> Void in completion?() } } } } private var taskInterval: (() -> ())? private var completion: (() -> ())? init(task: () -> ()) { dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) { () -> Void in task() self.done = true } } func afterInterval(interval: NSTimeInterval, closure: () -> ()) -> Background { taskInterval = closure let delayTime = dispatch_time(DISPATCH_TIME_NOW, Int64(interval * Double(NSEC_PER_SEC))) dispatch_after(delayTime, dispatch_get_main_queue()) { [weak self] in self?.taskInterval?() } return self } func completion(closure: () -> ()) -> Background { completion = closure return self } } "SwiftyThreads" on Github Answer: Yes, I see what you mean: there do appear to be dodgy race conditions. I do like where you are going with this, though – dot syntax and all. Still I found at least one failing test. If you paste your code in a playground and then add the following: /* your code here */ import XCPlayground XCPlaygroundPage.currentPage.needsIndefiniteExecution = true var i = 0 Background { i += 1 // don't call `print` here! 
}.afterInterval(0) { i += 10 }.completion { i += 100 } let t = dispatch_time(DISPATCH_TIME_NOW, Int64(NSEC_PER_SEC)) dispatch_after(t, dispatch_get_main_queue()) { print(i) } // prints: 101 (instead of 111) True, delay of 0 seems a little contrived, but it does prove that your worries are founded. One way around this is to trigger a dispatch of those closures on demand (and repeatedly): import Foundation class Background { let qos: qos_class_t let task: (() -> ())? private(set) var delay: NSTimeInterval? = nil private(set) var delayedTask: (() -> ())? = nil private(set) var completionHandler: (() -> ())? = nil init(_ qos: qos_class_t = QOS_CLASS_BACKGROUND, task: (() -> ())? = nil) { self.qos = qos self.task = task } func delay(interval: NSTimeInterval, task: () -> ()) -> Background { self.delay = interval self.delayedTask = task return self } func completion(task: () -> ()) -> Background { self.completionHandler = task return self } func async() { dispatch_async(dispatch_get_global_queue(qos, 0)) { // TODO: unnecessary if `task == nil` self.task?() if let delayedTask = self.delayedTask, let delay = self.delay { let t = dispatch_time(DISPATCH_TIME_NOW, Int64(delay * NSTimeInterval(NSEC_PER_SEC))) dispatch_after(t, dispatch_get_main_queue(), delayedTask) } if let completionHandler = self.completionHandler { dispatch_async(dispatch_get_main_queue(), completionHandler) } } } } Which can be used like so: import XCPlayground XCPlaygroundPage.currentPage.needsIndefiniteExecution = true var i = 0 let bg = Background { i += 1 }.delay(0) { i += 10 }.completion { i += 100 } // nothing dispatched yet! bg.async() // repeated invocations possible bg.async() Background().delay(1) { print(i) }.async() // prints: 222 (as it should) In addition, I would probably rename Background...
{ "domain": "codereview.stackexchange", "id": 16369, "tags": "multithreading, swift" }
Interpreting a normalized Power Spectral Density (PSD)
Question: I am using software to produce power spectral density (PSD) plots of time-series (voltage versus time). Unfortunately, the units of the produced plots are alien to me. I'm used to reading and interpreting PSD's in more common, "tangible" units like dBm/Hz or W/Hz, however these plots are described as: Returns a PSD in $dB$ units that is normalized and divided by frequency bin width (i.e. it is normalized to the time-integral squared amplitude of the time domain and then divided by frequency bin width). How is a PSD in units of dB to be interpreted, and what is the purpose of "normalizing to the time-integral squared amplitude of the time domain"? No further context is provided. Answer: You can interpret the PSD in units of dB the same way as you interpret it in more common units of dBm/Hz. The time integral squared amplitude of the voltage signal in the time domain can be thought of as the total "energy" in the signal (area under the curve). This energy is not physical and need not make sense. Now if you divide the signal with that quantity, you are normalizing the signal so that the total "energy" is now 1. When you fourier transform this signal to get the PSD, you preserve this total energy and therefore your PSD can have units of dB. The reference is not 1mW or 1uW but just 1. The additional benefit of dividing out by the frequency bin width is that now your calculated quantity is now free of the choice of bin width. All this is done to focus on the qualitative nature of the PSD and not the quantitative values themselves. Hope this helps
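To make the quoted normalization concrete, here is a hedged pure-Python sketch (a naive DFT, for illustration only, not the software in the question) of a periodogram normalized to the total time-domain energy and divided by the bin width, so the result is dimensionless and can be quoted in plain dB:

```python
import cmath
import math

def normalized_psd(x, fs):
    """Periodogram of x, normalized to unit total power, divided by bin width.

    By Parseval's theorem, sum(|X[k]|^2) is proportional to the time-integral
    squared amplitude of the time-domain signal, so dividing by it makes the
    total power 1; dividing by the bin width fs/N removes the bin-size dependence.
    """
    n = len(x)
    spec = [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) ** 2 for k in range(n)]
    total = sum(spec)
    bin_width = fs / n
    return [p / total / bin_width for p in spec]

def to_db(p, floor=1e-12):
    # dB relative to 1 (a dimensionless reference, unlike dBm's 1 mW)
    return 10 * math.log10(max(p, floor))
```

With this normalization, sum(psd) * bin_width equals 1 for any input, which is the sense in which the plot is "normalized".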
{ "domain": "physics.stackexchange", "id": 88872, "tags": "fourier-transform, power, signal-processing" }
Notation of Mixed Tensors: Risk of Confusing Index Positions?
Question: The convention for notating indices of a tensor is to write a contravariant index superscript and a covariant index subscript. If one has a pure contravariant or a pure covariant tensor of $2$nd order, then the association of the $i$th index with the $i$th dimension of the tensor is clear: $$F^{\alpha\beta},\quad F_{\alpha\beta}.$$ In this case, $\alpha$ gives the index of the $1$st dimension, $\beta$ the index of the $2$nd dimension. However, if it comes to a mixed tensor of $2$nd order, I frequently come across the notation $$F^\alpha_\beta,$$ where both indices are positioned right above each other, directly after the tensor symbol. In my understanding, this neglects the index position and with that the association of an index with its dimension. It is not clear if this notation is intended to mean $${F^\alpha}_\beta\quad\text{or}\quad{F_\beta}^\alpha.$$ Am I missing something? Even if $F$ was symmetric in the indices $\alpha$ and $\beta$, ${F^\alpha}_\beta\neq{F_\beta}^\alpha$ in general since they transform differently under a transformation $T$: $${\overline{F}^\alpha}_\beta=\left(T^{-1}\right)_{\alpha\mu}T_{\nu\beta}{F^\mu}_\nu\quad\Leftrightarrow\quad\overline{F}=T^{-1}FT\quad\quad\;\\ {\overline{F}_\beta}^\alpha=T_{\mu\beta}\left(T^{-1}\right)_{\alpha\nu}{F_\mu}^\nu\quad\Leftrightarrow\quad\overline{F}=T^\text{T}F\left(T^{-1}\right)^\text{T}$$ Even common literature uses this position-insensitive notation (Theoretical Physics 4 by Wolfgang Nolting, e.g.), and so do some of my professors in particle physics, where contravariant and covariant tensors of $2$nd order appear on a daily basis. Answer: You're not missing anything at all -- it's simply sloppy notation, and the people who do it just don't want to bother putting in the spacing correctly. However, you are missing something about the case of symmetric tensors. 
In this case, there is no ambiguity: an upper index transforms by contracting against the lower index of $\Lambda^{\mu'}_{\ \ \nu}$, while a lower index transforms by contracting against the upper index of $\Lambda^{\mu}_{\ \ \nu'}$. You might think that it makes a difference if you want to write the contraction as a matrix multiplication. But matrix multiplication is nothing more than a trick for remembering the general rules I just said, and a rather limited one at that. It might be true that the matrix multiplication representation differs between the two cases you gave, but that just means it's adding unnecessary complication. The transformation rule, in index notation, is the real definition, and unambiguous.
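The "index notation is the real definition" point can be checked numerically. Below is a hedged pure-Python sketch (2x2 matrices with invented numbers) showing that for a symmetric F the two matrix-multiplication recipes from the question produce each other's transpose, i.e. the same tensor with the index order swapped:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def inv2(A):
    # inverse of a 2x2 matrix
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# symmetric tensor components and an arbitrary (non-symmetric) transformation
F = [[1.0, 2.0], [2.0, 5.0]]
T = [[2.0, 1.0], [0.0, 1.0]]

Fbar_updown = matmul(matmul(inv2(T), F), T)                         # the F^a_b recipe
Fbar_downup = matmul(matmul(transpose(T), F), transpose(inv2(T)))   # the F_b^a recipe
```

For symmetric F, Fbar_downup is exactly the transpose of Fbar_updown; no information is lost by the sloppy vertical stacking, and the ambiguity only matters for non-symmetric tensors.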
{ "domain": "physics.stackexchange", "id": 66337, "tags": "special-relativity, spacetime, differential-geometry, tensor-calculus" }
Boost filesystem3 not found when building Gazebo from source
Question: I am building Gazebo 1.3.1 on Ubuntu 12.04 and have all the dependencies needed for the build in place. I am using Boost library version 1.52. The build process has no problems through configuring using cmake, but during linking the gazebo executable file, I run into the message `common/libgazebo_common.so.1.3.1: undefined reference to boost::filesystem3::detail::status(boost::filesystem3::path const&, boost::system::error_code*)'`, along with other errors in finding Boost filesystem3 calls. I have applied the patch from here but it still does not fix the filesystem3 compatibility issue. Has anyone compiled Gazebo using Boost library 1.52, and resolved the filesystem3 conflicts? Originally posted by yipenghuang0302 on Gazebo Answers with karma: 26 on 2013-01-14 Post score: 0 Answer: This problem was a result of having multiple installations of lib Boost. Originally posted by yipenghuang0302 with karma: 26 on 2013-01-17 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 2921, "tags": "gazebo" }
contact sensor: how to get the force information
Question: Hi all, I am working on adding a contact sensor to a link, and I want to get the torque of the joint of this link. I have a few questions: Regarding msgs::contacts contacts: I cannot find the member function 'contacts' of msgs. Where can I find the definition of msgs::contacts, what type of variable is contacts, and how can I use its member functions or events? When I use the function unsigned int c = parentSensor->GetCollisionContactCount(const string&test_collision3) an error appears: ContactPlugin.cc:56:57: error: expected primary-expression before ‘const’. How can I solve it? Other functions involving const produce this error too. I want to write the sensor data to a .txt file; how can I do that? Which file defines the relationship between the sensor plugin.so and the plugin.cc, CMakeLists.txt or the .world file? I ask because I want to add 3 contact sensors to 3 links, so the plugins must be distinguished. Thanks a lot, best, gdlu Originally posted by lugd1229 on Gazebo Answers with karma: 75 on 2013-02-01 Post score: 0 Original comments Comment by hsu on 2013-02-06: please post your code in your question. thanks. Answer: Here's another example of how the information from a force torque sensor is simulated in drcsim: first, get the lFootContactSensor, next, connect lContactUpdateConnection to sensor updates, and structure your OnLContactUpdate callback like so. Alternatively, there is a ticket to implement a ForceTorqueSensor class, which may simplify this process. Originally posted by hsu with karma: 1873 on 2013-02-03 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by lugd1229 on 2013-02-06: thank you, but how do I solve question 2? With the function unsigned int c = parentSensor->GetCollisionContactCount(const string&test_collision3) an error appears: ContactPlugin.cc:56:57: error: expected primary-expression before ‘const’. How can I solve it? Other functions involving const produce this error too. Comment by lugd1229 on 2013-02-06: and I set the update time to 1 second; then there are 58 groups (contacts size) * 14 groups (position size) of data. What is the difference between the contacts and the positions data? Comment by hsu on 2013-02-06: A message may contain contacts from multiple time steps; in this case, you have 58 groups, i.e. 58 simulation time steps have occurred before the last contact message. Within each contact group, the number of contact joints actually generated by collision detection and solved by the physics engine is reflected by its position_size (14 in your example) Comment by lugd1229 on 2013-02-11: if I want to obtain the realtime data and plot the link's torque over time, how can I deal with the data? Comment by nkoenig on 2013-02-13: It's easiest to handle realtime data through a plugin. You can write a sensor or model plugin that processes contact sensor data every iteration.
{ "domain": "robotics.stackexchange", "id": 2997, "tags": "gazebo" }
Do sulfate reducing bacteria ingest their sulfate as solid, or as dissolved in water?
Question: Do sulfate reducing bacteria ingest their sulfate as a solid, a liquid, or a gas? I understand that almost all forms of sulfate are solid, and sulfate can be dissolved in water (thus no longer solid, but liquid). So, do sulfate reducing bacteria ingest sulfate in its solid form, or do they require it to be dissolved in water? A similar question applies, in fact, to nitrifying bacteria, though if that's complex I might make it a separate question. Also, even though it's respiration (which tends to be associated either with the breathing process - in the case of physiological respiration - or with having breathed prior - in the case of cellular respiration), here it's not using a gas. So I'm curious if biologists ever use the word "breathing" for such a process. Wikipedia, for example, says https://en.wikipedia.org/wiki/Sulfate-reducing_bacteria "In a sense, these organisms "breathe" sulfate..." I wonder if biologists use the term 'breathe' in such a general sense (to include consuming a substance dissolved in water), or only in the specific sense of gas or air, or if they don't use the term 'breathe' at all. So, how strictly is the term 'breathe' defined in biology? Answer: Sulfates in water would not be liquid. Their melting points are far too high. When a sulfate dissolves into sulfate ions and some cation such as potassium, we say it is solvated, not liquid. Sulfates would also not be present as gases due to their ionic nature as well as high molecular weights. So that leaves solvated sulfate ions and solid sulfates that have not completely dissolved. To be ingested in bulk as a solid would require endocytosis, which is not performed by prokaryotes: http://faculty.ccbcmd.edu/courses/bio141/lecguide/unit1/proeu/proeu.html Further research indicates that sulfate, as ions, enters the cell through permeases: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.532.943&rep=rep1&type=pdf Edit: Here are some files to help with the idea of how ions and other molecules enter cells.
Again, bacteria do not engulf substances whole, in the process of endocytosis, such as eukaryotes are capable of. Here is an image of a nitrate ion within a transport protein. Only a few amino acids of the protein are shown here, the actual protein is much larger and spans across the entire membrane creating a channel for the ion to enter. http://www.rcsb.org/pdb/ngl/ngl.do?pdbid=4U4W&preset=ligandInteraction&sele=[NO3] Here's another channel, aquaporin, which allows transport of water across the membrane. http://pdb101.rcsb.org/motm/173 The top left image shows the channel in the protein leading to the interior. The third image down shows the interaction between the water and the amino acids (backbones only are shown) which facilitate the water's passage.
{ "domain": "biology.stackexchange", "id": 6599, "tags": "bacteriology, cellular-respiration, anaerobic-respiration" }
What recurrence describes the time complexity of this algorithm?
Question: The problem is as follows: The input is an array $A$ of $n$ natural numbers such that: (1) if the maximum occurs in $A[p]$ for an index $p$, then $$A[1] \leq \ldots \leq A[p-1] \leq A[p]$$ and $$A[p] \geq A[p+1] \geq \ldots \geq A[n];$$ (2) if $A[i]=A[j]=x$ then $A[k]=x$ for $i \leq k \leq j$. The goal is to find the maximum element of $A$. Restriction (2) imposes the additional difficulty that $A$ might contain plateaus, and hence a simple binary search does not work. I came up with the following algorithm, but I'm having a hard time analyzing its time complexity. // p is start index and q the ending index (inclusive) FindPeak(A, p, q) 1 if q - p = 0 then return p 2 m = (p + q) / 2 // assume that the division applies the floor 3 i = FindPeak(A, p, m) 4 j = FindPeak(A, m + 1, q) 5 if A[i] = A[j] then // A[i..j] is a plateau 6 i = FindPeak(A, p, i) 7 j = FindPeak(A, j, q) 8 if A[i] < A[j] then return FindPeak(A, j, q) 9 else return FindPeak(A, p, i) Let $n=q-p$ and $T(n)$ be the function describing the running time of FindPeak. I know that lines 3 and 4 are $T(n/2)$ each, but I'm not sure about lines 6 through 9. Could you help me construct the recurrence relation for $T(n)$? (And any eventual bug that might be lurking there.) PS 1. If the entire array is a plateau (all the elements are equal), then is it correct to say that the worst-case running time of FindPeak is $\Omega(n)$? PS 2. After reading a comment, I realized that I didn't understand the problem properly. The simplified algorithm is as follows: // p is start index and q the ending index (inclusive) FindPeak(A, p, q) 1 if q - p = 0 or A[p] = A[q] then return p 2 m = (p + q) / 2 // assume that the division applies the floor 3 i = FindPeak(A, p, m) 4 j = FindPeak(A, m + 1, q) 5 if A[i] < A[j] then 6 return FindPeak(A, j, q) 7 else 8 return FindPeak(A, p, i) I still have trouble analyzing the time complexity because of lines 5 through 8.
Nevertheless, due to lines 3 and 4, I think the running time in the worst-case is at least $\Omega(n)$, right? Is it possible to devise a $O(\log n)$ algorithm? Answer: Partial answer: The problem admits an $O(n^{\log_3 2}) \subseteq O(n^{0.631})$ algorithm. The algorithm $\mathcal A$ works as follows: Query $x = A[n/3]$ and $y = A[2n/3]$. If $x = y$, return $\max \{x, \mathcal A(A[1, n/3]), \mathcal A(A[2n/3, n])\}$ If $x < y$, return $\max \{\mathcal A(A[n/3, 2n/3]), \mathcal A(A[2n/3, n])\}$ If $x > y$, return $\max \{\mathcal A(A[1, n/3]), \mathcal A(A[n/3, 2n/3])\}$ ($A[p, q]$ denotes the sub-array of $A$ starting at index $p$ and ending at index $q$.) It should be easy to see that $\mathcal A$ works correctly. The running time $T(n)$ of $\mathcal A$ is given by $T(n) = O(1) + 2 \cdot T(n / 3)$, which solves to $T(n) = O(n^{\log_3 2})$.
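A possible Python rendering of the answer's three-way algorithm (my own sketch; the index handling and the three-element base case are assumptions, not part of the answer):

```python
def find_peak(A, lo=0, hi=None):
    """Return the maximum of A under properties (1) and (2) from the question.

    Queries the values one third and two thirds of the way into the range; the
    equal case recurses on two thirds-sized pieces, giving T(n) = 2*T(n/3) + O(1),
    i.e. O(n^(log_3 2)), matching the answer's analysis.
    """
    if hi is None:
        hi = len(A) - 1
    if hi - lo <= 2:                      # base case: at most three elements
        return max(A[lo:hi + 1])
    m1 = lo + (hi - lo) // 3
    m2 = lo + 2 * (hi - lo) // 3
    x, y = A[m1], A[m2]
    if x == y:
        # property (2): A is constant on [m1, m2]; the peak is that value
        # or lies strictly inside one of the two outer thirds
        return max(x, find_peak(A, lo, m1), find_peak(A, m2, hi))
    if x < y:
        return find_peak(A, m1, hi)       # the peak cannot be left of m1
    return find_peak(A, lo, m2)           # the peak cannot be right of m2
```

The two unequal cases are collapsed into a single recursive call on the union of the answer's two sub-ranges, which is equivalent and slightly shorter.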
{ "domain": "cs.stackexchange", "id": 21074, "tags": "algorithm-analysis, recurrence-relation, divide-and-conquer" }
What is the mechanism of oxidation of phenol to benzoquinone?
Question: I looked up the mechanism in a lot of organic chemistry books, including Clayden, Klein, Solomons, McMurry but I couldn't find the mechanism of oxidation of a phenol. I also looked it up in pharmacognosy books, but with no luck. All I could find in these books is the following scheme, with no mechanism: It seems like other oxidizing agents can be used, such as hydrogen peroxide and silver oxide. Unlike primary and secondary alcohols, phenol doesn't have an hydrogen atom attached to the carbon directly bound to oxygen. My guess is that the double bond of phenol will react with the oxidizing agent, but not sure of how this will work. Answer: The reaction is used industrially to make hydroquinone (source: wikipedia), but often using hydrogen peroxide as the oxidation agent in the presence of a catalyst. A study on catalytic hydroxylation of benzene comes to the following conclusion: experimental evidences have allowed first discarding Fenton-like mechanisms and later proposing the plausible competition between an electrophilic aromatic substitution pathway with the alternative rebound (hydrogen abstraction) route. The Fenton-like mechanism refers to addition of a free oxygen radical species to benzene, which they reject based on experiments with radical scavengers. Instead, they hypothesize formation of a radical oxygen:catalyst complex. This either reacts with benzene in an electrophilic substitution, or abstracts a hydrogen from benzene, which then rebounds and reacts with the oxygen still attached to the catalyst. Although the study I am citing is for hydroxylation of benzene instead of phenol, the paper also talks about substituted benzenes. [OP:] My guess is that the double bond of phenol will react with the oxidizing agent, but not sure of how this will work. You can't pin down a single double-bond in an aromatic system. What you can say is that the existing hydroxyl group will steer the reaction to make an ortho- or para- product. 
Once hydroquinone is formed, it oxidizes quickly to benzoquinone under the oxidizing conditions. [OP:] What is the mechanism of oxidation of phenol to benzoquinone? For the reactions conditions given by the OP, I did not manage to find a mechanism. However, the mechanisms I discussed probably are sufficiently general that they apply in this special case as well. That is just a guess, though.
{ "domain": "chemistry.stackexchange", "id": 12517, "tags": "organic-chemistry, reaction-mechanism, phenols" }
Which can support more weight, one thick rope or many thinner ropes?
Question: Given the same total amount of material used and the same length of an individual rope, will many thinner ropes support more weight (withstand more tension) than one thick rope? Answer: What withstands tension is molecular bonds, so what matters is just the surface area across which the load is carried. Multiple thin ropes with the same total area as one thick rope should withstand tension just as well as the thick rope. ...except that "a chain is not stronger than its weakest link". If one rope breaks if there are multiple ropes this is not a disaster, but it is when there is just one rope. If there are N fibers and the probability of one breaking as a function of the load on it is $P(F)$, then the probability that none will break is $(1-P(F))^N$. If $N$ is large this is $\approx 1$, as long as $P(F)$ does not increase fast for thinner fibers.
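The answer's last sentence can be illustrated with a toy model (the failure law used here is entirely made up, and load redistribution after a fiber snaps is ignored):

```python
def bundle_survival(p_single, load, n):
    """Probability that none of n identical fibers breaks.

    Each fiber carries load/n; p_single(f) is the (assumed, toy) probability
    that a single fiber fails when carrying load f. This evaluates the
    (1 - P(F))**N expression from the answer.
    """
    return (1.0 - p_single(load / n)) ** n

# toy failure law: failure probability grows linearly with load (pure invention)
p_lin = lambda f: min(1.0, f / 100.0)
```

Under this toy law, splitting the same material into more fibers raises the survival probability, in line with the answer's point that many thin ropes are more forgiving, provided the per-fiber failure probability does not grow for thinner fibers.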
{ "domain": "physics.stackexchange", "id": 65361, "tags": "forces, material-science, moduli" }
Command line multipart or single file downloader
Question: I am looking for a code review for this multipart or single file chunk downloader using threading and queues. downloader.py import argparse import logging import Queue import urllib2 import os import utils as _fdUtils import signal import sys import time import threading DESKTOP_PATH = os.path.expanduser("~/Desktop") appName = 'FileDownloader' logFile = os.path.join(DESKTOP_PATH, '%s.log' % appName) _log = _fdUtils.fdLogger(appName, logFile, logging.DEBUG, logging.DEBUG, console_level=logging.DEBUG) queue = Queue.Queue() out_queue = Queue.Queue() STOP_REQUEST = threading.Event() class SplitBufferThread(threading.Thread): """ Splits the buffer to ny number of threads thereby, concurrently downloading through ny number of threads. """ def __init__(self, url, numSplits, queue, out_queue): super(SplitBufferThread, self).__init__() self.__url = url self.__splits = numSplits self.queue = queue self.outQue = out_queue self.__fileName = url.split('/')[-1] self.__path = DESKTOP_PATH def run(self): print "Inside SplitBufferThread: %s\n URL: %s" % (self.getName(), self.__url) sizeInBytes = int(_fdUtils.getUrlSizeInBytes(self.__url)) byteRanges = _fdUtils.getRange(sizeInBytes, self.__splits) mode = 'wb' for _range in byteRanges: first = int(_range.split('-')[0]) self.outQue.put((self.__url, self.__path, first, self.queue, mode, _range)) mode = 'a' class DatamineThread(threading.Thread): """Threaded Url Grab""" def __init__(self, out_queue): threading.Thread.__init__(self) self.out_queue = out_queue def run(self): while True: #grabs host from queue chunk = self.out_queue.get() if self._grabAndWriteToDisk(*chunk): #signals to queue job is done self.out_queue.task_done() def _grabAndWriteToDisk(self, url, saveTo, first=None, queue=None, mode='wb', irange=None): fileName = url.split('/')[-1] filePath = os.path.join(saveTo, fileName) file_size = int(_fdUtils.getUrlSizeInBytes(url)) req = urllib2.Request(url) if irange: req.headers['Range'] = 'bytes=%s' % irange urlFh = 
urllib2.urlopen(req) file_size_dl = 0 if not first else first with open(filePath, mode) as fh: block_sz = 8192 while not STOP_REQUEST.isSet(): fileBuffer = urlFh.read(block_sz) if not fileBuffer: break file_size_dl += len(fileBuffer) fh.write(fileBuffer) status = r"%10d [%3.2f%%]" % (file_size_dl, file_size_dl * 100. / file_size) status = status + chr(8)*(len(status)+1) sys.stdout.write('%s\r' % status) time.sleep(.05) sys.stdout.flush() if file_size_dl == file_size: STOP_REQUEST.set() if queue: queue.task_done() _log.info("Download Completed %s%% for file %s, saved to %s", file_size_dl * 100. / file_size, fileName, saveTo) return True class ThreadedFetch(threading.Thread): """ docstring for ThreadedFetch """ def __init__(self, queue, out_queue): super(ThreadedFetch, self).__init__() self.queue = queue self.lock = threading.Lock() self.outQueue = out_queue def run(self): items = self.queue.get() url = items[0] saveTo = DESKTOP_PATH if not items[1] else items[1] split = items[-1] # grab split chiunks in separate thread. if split > 1: bufferThreads = [] print url splitBuffer = SplitBufferThread(url, split, self.queue, self.outQueue) splitBuffer.start() else: while not STOP_REQUEST.isSet(): self.setName("primary_%s" % url.split('/')[-1]) # if downlaod whole file in single chunk no need # to start a new thread, so directly download here. 
if self.outQueue.put((url, saveTo, 0, self.queue)): self.out_queue.task_done() def main(appName, flag='with'): args = _fdUtils.getParser() urls_saveTo = {} if flag == 'with': _fdUtils.Watcher() elif flag != 'without': _log.info('unrecognized flag: %s', flag) sys.exit() # spawn a pool of threads, and pass them queue instance # each url will be downloaded concurrently for i in xrange(len(args.urls)): t = ThreadedFetch(queue, out_queue) t.daemon = True t.start() split = 1 try: for url in args.urls: # TODO: put split as value of url as tuple with saveTo urls_saveTo[url] = args.saveTo # urls_saveTo = {urls[0]: args.saveTo, urls[1]: args.saveTo, urls[2]: args.saveTo} # populate queue with data for url, saveTo in urls_saveTo.iteritems(): queue.put((url, saveTo, split)) for i in range(split): dt = DatamineThread(out_queue) dt.setDaemon(True) dt.start() # wait on the queue until everything has been processed queue.join() out_queue.join() print '*** Done' except (KeyboardInterrupt, SystemExit): _log.critical('! Received keyboard interrupt, quitting threads.') utils.py import argparse import logging import os import requests import signal import sys def getUrlSizeInBytes(url): return requests.head(url, headers={'Accept-Encoding': 'identity'}).headers.get('content-length', None) def getRange(sizeInBytes, numsplits): """ Splits the range equally based on file size and number of splits. 
""" if numsplits <= 1: return ["0-%s" % sizeInBytes] lst = [] for i in range(numsplits): if i == 0: lst.append('%s-%s' % (i, int(round(1 + i * sizeInBytes/(numsplits*1.0) + sizeInBytes/(numsplits*1.0)-1, 0)))) else: lst.append('%s-%s' % (int(round(1 + i * sizeInBytes/(numsplits*1.0),0)), int(round(1 + i * sizeInBytes/(numsplits*1.0) + sizeInBytes/(numsplits*1.0)-1, 0)))) return lst def getParser(): parser = argparse.ArgumentParser(prog='FileDownloader', description='Utility to download files from internet') parser.add_argument('-v', '--verbose', default=logging.DEBUG, help='by default its on, pass None or False to not spit in shell') parser.add_argument('-st', '--saveTo', action=FullPaths, help='location where you want files to download to') parser.add_argument('-urls', nargs='*', help='urls of files you want to download.') return parser.parse_args() def sizeof(bytes): """ Takes the size of file or folder in bytes and returns size formatted in kb, MB, GB, TB or PB. Args: bytes(int): size of the file in bytes Return: (str): containing size with formatting. 
""" alternative = [ (1024 ** 5, ' PB'), (1024 ** 4, ' TB'), (1024 ** 3, ' GB'), (1024 ** 2, ' MB'), (1024 ** 1, ' KB'), (1024 ** 0, (' byte', ' bytes')), ] for factor, suffix in alternative: if bytes >= factor: break amount = int(bytes/factor) if isinstance(suffix, tuple): singular, multiple = suffix if amount == 1: suffix = singular else: suffix = multiple return "%s %s" % (str(amount), suffix) class FullPaths(argparse.Action): """ Expand user- and relative-paths """ def __call__(self, parser, namespace, values, option_string=None): setattr(namespace, self.dest, os.path.abspath(os.path.expanduser(values))) def fdLogger(appName, logFile, fileDebugLevel, file_level, console_level=None): logger = logging.getLogger(appName) # By default, logs all messages logger.setLevel(logging.DEBUG) if console_level != None: # StreamHandler logs to console ch = logging.StreamHandler() ch.setLevel(fileDebugLevel) chFormat = logging.Formatter('%(levelname)s - %(message)s') ch.setFormatter(chFormat) logger.addHandler(ch) fh = logging.FileHandler(logFile) fh.setLevel(file_level) fhFormat = logging.Formatter('%(asctime)s - (%(threadName)-10s) - %(levelname)s: %(message)s') fh.setFormatter(fhFormat) logger.addHandler(fh) return logger Answer: I'll try it: You are requesting for range queries in parallel, and write them all to the disk: Where are you enforcing the order of chunks? I think you may get an unordered file as a result. I think you loose time instead of saving it, by using thread for ThreadedFetch and SplitBufferThread as they do nothing more than variable setting and pushing in into queues. You win nothing trying to set variables in parallel (remember the GIL, the fact that setting variables will cost virtually no time, but creating a thread costs some time), what is worth running concurrently is what actually costs time, like waiting for the network (or doing huge computation in a context we're not blocked by the GIL). Your code is not PEP8 compliant. 
I don't think it's useful to download chunks of file in parallel, you're in both way limited by your bandwidth. I'm OK with the fact to download files in parallel from multiple servers, just in case a server is slow, but I'm not OK with using as many threads as URLs to download. If you want to download 500 files you'll fork 500 threads, spending a lot of time in context-switch, sharing 1/500 of your bandwidth for each file, all you'll get is timeouts. Better use something like 2 or 3 threads, remember: you're limited by your bandwidth. Depending on your libc, your program will NOT be thread-safe, at the name resolution level, because it uses getaddrinfo, that is known to be thread-safe on Linux but may NOT be (When it uses a AF_NETLINK socket to query on which interface the DNS query must be sent. (The bug has been fixed but you may not be up to date)). urllib2 will however use a lock on other OSes, known to have a not thread safe getaddrinfo: ./python2.6-2.6.8/Modules/socketmodule.c:180 /* On systems on which getaddrinfo() is believed to not be thread-safe, (this includes the getaddrinfo emulation) protect access with a lock. */ #if defined(WITH_THREAD) && (defined(__APPLE__) || \ (defined(__FreeBSD__) && __FreeBSD_version+0 < 503000) || \ defined(__OpenBSD__) || defined(__NetBSD__) || \ defined(__VMS) || !defined(HAVE_GETADDRINFO)) #define USE_GETADDRINFO_LOCK #endif You may not mix logger and print usage, stick to logger, use a debug level when needed.
{ "domain": "codereview.stackexchange", "id": 8492, "tags": "python, design-patterns, multithreading, unit-testing, queue" }
how to find parameters used in decision tree algorithm
Question: I use a machine learning algorithm, for example decision tree classifier similar to this: from sklearn import tree X = [[0, 0], [1, 1]] Y = [0, 1] clf = tree.DecisionTreeClassifier() clf = clf.fit(X, Y) clf.predict([[2., 2.]]) How to find out what parameters are used? Answer: Just type clf after defining it; in your case it gives: clf # result: DecisionTreeClassifier(ccp_alpha=0.0, class_weight=None, criterion='gini', max_depth=None, max_features=None, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, presort='deprecated', random_state=None, splitter='best') i.e. all arguments with their default values, since you did not specify anything in the definition clf = tree.DecisionTreeClassifier(). You can get the parameters of any algorithm in scikit-learn in a similar way. Tested with scikit-learn v0.22.2 UPDATE As Ben Reiniger correctly points out in the comment below, from scikit-learn v0.23 onwards, we need to set the display configuration first in order for this to work: sklearn.set_config(print_changed_only=False)
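Programmatic access is also possible: every scikit-learn estimator exposes get_params(), which returns the same information as a dict (shown here for the classifier from the question):

```python
from sklearn import tree

clf = tree.DecisionTreeClassifier()
params = clf.get_params()   # dict mapping parameter names to current values
print(params["criterion"])  # prints: gini (the default split criterion)
```

get_params() is handy when you need the values in code, e.g. for logging or for building a grid-search parameter space, rather than for display in a console.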
{ "domain": "datascience.stackexchange", "id": 8364, "tags": "machine-learning, classification, scikit-learn, decision-trees" }
Method that checks whether to disable or enable the buttons depending on whether data is null or not
Question: I have this bit of code that needs refactoring/simplification. The below grabs the planning data for view A and B. Then it joins the planning and matching image for view details. So then it determines if it should set the buttons to be disabled or not, based on whether the data is empty/null. I think the major refactoring might be needed for adding the elements to the dictionaries for each view. [OutputCache(NoStore = true, Duration = 0, VaryByParam = "*")] public ActionResult CheckButtons() { int planId = (int)UserSession.GetValue(StateNameEnum.ID, "PlanID"); planning resultSetA = db.planning.Where(t => t.PlanId == planId).Join(db.matching_image, p => p.MatchingImageId, i => i.Id, (p, i) => new { planning = p, i.View }).Where(j => j.View == "A").Select(j => j.planning).FirstOrDefault(); structure sA = db.structure.FirstOrDefault(s => s.PlanId == planId && s.Type == "A"); planning resultSetB = db.planning.Where(t => t.PlanId == planId).Join(db.matching_image, p => p.MatchingImageId, i => i.Id, (p, i) => new { planning = p, i.View }).Where(j => j.View == "B").Select(j => j.planning).FirstOrDefault(); structure sB = db.structure.FirstOrDefault(s => s.PlanId == planId && s.Type == "B"); Dictionary<string, string> buttonADictionary = new Dictionary<string, string>(); Dictionary<string, string> buttonBDictionary = new Dictionary<string, string>(); buttonADictionary.Add("C", resultSetA.CorrespondingId != null ? "disabled" : "enabled"); buttonADictionary.Add("R", resultSetA.ResponseId != null ? "disabled" : "enabled"); buttonADictionary.Add("S", (sA != null && sA.Point_x != null) ? "disabled" : "enabled"); buttonADictionary.Add("D", resultSetA.DistalId != null ? "disabled" : "enabled"); buttonADictionary.Add("P", resultSetA.ProximalId != null ? "disabled" : "enabled"); buttonBDictionary.Add("C", resultSetB.CorrespondingId != null ? "disabled" : "enabled"); buttonBDictionary.Add("R", resultSetB.ResponseId != null ? 
"disabled" : "enabled"); buttonBDictionary.Add("S", (sB != null && sB.Point_x != null) ? "disabled" : "enabled"); buttonBDictionary.Add("D", resultSetB.DistalId != null ? "disabled" : "enabled"); buttonBDictionary.Add("P", resultSetB.ProximalId != null ? "disabled" : "enabled"); bool aHasData = buttonADictionary.ContainsValue("disabled"); bool bHasData = buttonBDictionary.ContainsValue("disabled"); return Json(new { buttonADictionary, buttonBDictionary, aHasData, bHasData }, JsonRequestBehavior.AllowGet); } Answer: In looking at your method, you have a lot of repetition in building your two dictionaries before composing your final result. If you were to simply regroup your code to separate the building of your first and second dictionaries, and the data that supports the construction, you would produce the following method: [OutputCache(NoStore = true, Duration = 0, VaryByParam = "*")] public ActionResult CheckButtons() { int planId = (int)UserSession.GetValue(StateNameEnum.ID, "PlanID"); planning resultSetA = db.planning.Where(t => t.PlanId == planId).Join(db.matching_image, p => p.MatchingImageId, i => i.Id, (p, i) => new { planning = p, i.View }).Where(j => j.View == "A").Select(j => j.planning).FirstOrDefault(); structure sA = db.structure.FirstOrDefault(s => s.PlanId == planId && s.Type == "A"); Dictionary<string, string> buttonADictionary = new Dictionary<string, string>(); buttonADictionary.Add("C", resultSetA.CorrespondingId != null ? "disabled" : "enabled"); buttonADictionary.Add("R", resultSetA.ResponseId != null ? "disabled" : "enabled"); buttonADictionary.Add("S", (sA != null && sA.Point_x != null) ? "disabled" : "enabled"); buttonADictionary.Add("D", resultSetA.DistalId != null ? "disabled" : "enabled"); buttonADictionary.Add("P", resultSetA.ProximalId != null ? 
"disabled" : "enabled"); bool aHasData = buttonADictionary.ContainsValue("disabled"); planning resultSetB = db.planning.Where(t => t.PlanId == planId).Join(db.matching_image, p => p.MatchingImageId, i => i.Id, (p, i) => new { planning = p, i.View }).Where(j => j.View == "B").Select(j => j.planning).FirstOrDefault(); structure sB = db.structure.FirstOrDefault(s => s.PlanId == planId && s.Type == "B"); Dictionary<string, string> buttonBDictionary = new Dictionary<string, string>(); buttonBDictionary.Add("C", resultSetB.CorrespondingId != null ? "disabled" : "enabled"); buttonBDictionary.Add("R", resultSetB.ResponseId != null ? "disabled" : "enabled"); buttonBDictionary.Add("S", (sB != null && sB.Point_x != null) ? "disabled" : "enabled"); buttonBDictionary.Add("D", resultSetB.DistalId != null ? "disabled" : "enabled"); buttonBDictionary.Add("P", resultSetB.ProximalId != null ? "disabled" : "enabled"); bool bHasData = buttonBDictionary.ContainsValue("disabled"); return Json(new { buttonADictionary, buttonBDictionary, aHasData, bHasData }, JsonRequestBehavior.AllowGet); } It's evident at this point that your code at the top and at the bottom is effectively identical, only differing on a value passed into two queries and then on variable names. Inside this single method, you are violating the DRY principle: don't repeat yourself. So let's make the first extraction and create the method here: private Dictionary<string, string> GetButtonDictionary(int planId, string value) { planning resultSet = db.planning.Where(t => t.PlanId == planId).Join(db.matching_image, p => p.MatchingImageId, i => i.Id, (p, i) => new { planning = p, i.View }).Where(j => j.View == value).Select(j => j.planning).FirstOrDefault(); structure st = db.structure.FirstOrDefault(s => s.PlanId == planId && s.Type == value); Dictionary<string, string> buttonDictionary = new Dictionary<string, string>(); buttonDictionary.Add("C", resultSet.CorrespondingId != null ? 
"disabled" : "enabled"); buttonDictionary.Add("R", resultSet.ResponseId != null ? "disabled" : "enabled"); buttonDictionary.Add("S", (st != null && st.Point_x != null) ? "disabled" : "enabled"); buttonDictionary.Add("D", resultSet.DistalId != null ? "disabled" : "enabled"); buttonDictionary.Add("P", resultSet.ProximalId != null ? "disabled" : "enabled"); return buttonDictionary; } The method above takes two parameters: a planId that you have obtained, as well as a value. This second variable probably isn't a great name, but you have overloaded the meaning of your string in your original method (it's used to compare against a View property and a Type property) such that I couldn't immediately think of a better name. If you can think of one, just sub it in. With this extraction, it cleans up your original method nicely, because it can be cleaned up to the following: [OutputCache(NoStore = true, Duration = 0, VaryByParam = "*")] public ActionResult CheckButtons() { int planId = (int)UserSession.GetValue(StateNameEnum.ID, "PlanID"); Dictionary<string, string> buttonADictionary = GetButtonDictionary(planId, "A"); Dictionary<string, string> buttonBDictionary = GetButtonDictionary(planId, "B"); bool aHasData = buttonADictionary.ContainsValue("disabled"); bool bHasData = buttonBDictionary.ContainsValue("disabled"); return Json(new { buttonADictionary, buttonBDictionary, aHasData, bHasData }, JsonRequestBehavior.AllowGet); } Could we clean this up a little more? Possibly, but it's simplified enough for now so let's go back to the method we extracted. It has a couple of database queries that should probably individually be their own methods (SRP - Single Responsibility Principle), so let's go ahead and do that. 
private Dictionary<string, string> GetButtonDictionary(int planId, string value) { planning resultSet = GetPlanningResultSet(planId, value); structure st = GetStructure(planId, value); Dictionary<string, string> buttonDictionary = new Dictionary<string, string>(); buttonDictionary.Add("C", resultSet.CorrespondingId != null ? "disabled" : "enabled"); buttonDictionary.Add("R", resultSet.ResponseId != null ? "disabled" : "enabled"); buttonDictionary.Add("S", (st != null && st.Point_x != null) ? "disabled" : "enabled"); buttonDictionary.Add("D", resultSet.DistalId != null ? "disabled" : "enabled"); buttonDictionary.Add("P", resultSet.ProximalId != null ? "disabled" : "enabled"); return buttonDictionary; } private planning GetPlanningResultSet(int planId, string value) { planning resultSet = db.planning.Where(t => t.PlanId == planId).Join(db.matching_image, p => p.MatchingImageId, i => i.Id, (p, i) => new { planning = p, i.View }).Where(j => j.View == value).Select(j => j.planning).FirstOrDefault(); return resultSet; } private structure GetStructure(int planId, string value) { return db.structure.FirstOrDefault(s => s.PlanId == planId && s.Type == value); } After doing this, you have the obvious bit of repetition left where you get your enabled settings where you repeat your null checks. So let's extract a method to deal with that. private Dictionary<string, string> GetButtonDictionary(int planId, string value) { planning resultSet = GetPlanningResultSet(planId, value); structure st = GetStructure(planId, value); Dictionary<string, string> buttonDictionary = new Dictionary<string, string>(); buttonDictionary.Add("C", GetEnabledSetting(resultSet.CorrespondingId)); buttonDictionary.Add("R", GetEnabledSetting(resultSet.ResponseId)); buttonDictionary.Add("S", GetEnabledSetting(st != null ? 
st.Point_x : null)); buttonDictionary.Add("D", GetEnabledSetting(resultSet.DistalId)); buttonDictionary.Add("P", GetEnabledSetting(resultSet.ProximalId)); return buttonDictionary; } private string GetEnabledSetting(object value) { return value != null ? "disabled" : "enabled"; } After we have completed this extraction, the full code now looks like the following: [OutputCache(NoStore = true, Duration = 0, VaryByParam = "*")] public ActionResult CheckButtons() { int planId = (int)UserSession.GetValue(StateNameEnum.ID, "PlanID"); Dictionary<string, string> buttonADictionary = GetButtonDictionary(planId, "A"); Dictionary<string, string> buttonBDictionary = GetButtonDictionary(planId, "B"); bool aHasData = buttonADictionary.ContainsValue("disabled"); bool bHasData = buttonBDictionary.ContainsValue("disabled"); return Json(new { buttonADictionary, buttonBDictionary, aHasData, bHasData }, JsonRequestBehavior.AllowGet); } private Dictionary<string, string> GetButtonDictionary(int planId, string value) { planning resultSet = GetPlanningResultSet(planId, value); structure st = GetStructure(planId, value); Dictionary<string, string> buttonDictionary = new Dictionary<string, string>(); buttonDictionary.Add("C", GetEnabledSetting(resultSet.CorrespondingId)); buttonDictionary.Add("R", GetEnabledSetting(resultSet.ResponseId)); buttonDictionary.Add("S", GetEnabledSetting(st != null ? 
st.Point_x : null)); buttonDictionary.Add("D", GetEnabledSetting(resultSet.DistalId)); buttonDictionary.Add("P", GetEnabledSetting(resultSet.ProximalId)); return buttonDictionary; } private planning GetPlanningResultSet(int planId, string value) { planning resultSet = db.planning.Where(t => t.PlanId == planId).Join(db.matching_image, p => p.MatchingImageId, i => i.Id, (p, i) => new { planning = p, i.View }).Where(j => j.View == value).Select(j => j.planning).FirstOrDefault(); return resultSet; } private structure GetStructure(int planId, string value) { return db.structure.FirstOrDefault(s => s.PlanId == planId && s.Type == value); } private string GetEnabledSetting(object value) { return value != null ? "disabled" : "enabled"; } This is enough contribution for me. I'll leave it up to you to possibly create better variable names and get the code into a bit better compliance with C# standards. For example, you seem to have classes starting in lowercase (planning, structure) and those should start with capital letters. You seem to have DbSets that should be plural (and capitalized). Finally, this seems to be a lot of work to be in a controller, perhaps it could be be further extracted to a pure business layer. Those modifications would, of course, be up to you.
{ "domain": "codereview.stackexchange", "id": 13777, "tags": "c#, asp.net-mvc" }
Cmake error during building a rospackage
Question: Hello all, I am trying to learn how to use the ROS-PCL library with the Velodyne 64E lidar. I have stored my code in the same directory as that of the Velodyne_height_map project and I want to modify the existing CMakeList.txt file to incorporate my own code. My main goal is to find the distances of the obstacles by subscribing to the velodyne_obstacles topic. This is the error I am getting when I am building the project. > *** No rule to make target > '/usr/lib/x86_64-linux-gnu/libpthread.so/opt/ros/kinetic/lib/libpcl_ros_filters.so', > needed by > '/home/aditya/catkin_ws/devel/lib/velodyne_height_map/myheight'. > Stop. This is the addition I have done to the CMakeLists.txt file by following a few tutorials. add_executable(myheight src/my_height_sub.cpp) target_link_libraries(myheight ${catkin_LIBRARIES}${catkin_LIBRARIES} ${Boost_LIBRARIES} ${PCL_LIBRARIES}) This is the program I am trying to run. #include <ros/ros.h> #include <pcl_ros/point_cloud.h> #include <pcl/point_types.h> #include <boost/foreach.hpp> typedef pcl::PointCloud<pcl::PointXYZ> PointCloud; void callback(const sensor_msgs::PointCloud2ConstPtr&); int main(int argc, char** argv) { ros::NodeHandle nh; std::string topic = nh.resolveName("point_cloud"); uint32_t queue_size = 1; // to create a subscriber, you can do this (as above): ros::Subscriber sub = nh.subscribe<sensor_msgs::PointCloud2> ("velodyne_points", queue_size, callback); } I know I am missing some dependencies in the CMakeLists, but can't seem to figure out what they are. Can someone help me out please? Many thanks in advance! Originally posted by aditya369007 on ROS Answers with karma: 18 on 2018-04-27 Post score: 0 Original comments Comment by mgruhler on 2018-04-27: I'm no expert on this, but it seems that pcl_ros_filter (which is in your error message) cannot be used. Why this should be linked, however, I don't know. I don't see anything apparent relating to that. 
Comment by aditya369007 on 2018-04-27: That's what I am unable to understand; I have not used pcl_ros_filter, yet this is showing up. I have still added ${PCL_ROS_FILTER} to the link libraries, but I am still facing the same error. Comment by robotchicken on 2018-04-27: Can you paste the entire CMakeList.txt (without the comments)? Comment by aditya369007 on 2018-04-27: pastebin It is attached here Answer: /usr/lib/x86_64-linux-gnu/libpthread.so/opt/ros/kinetic/lib/libpcl_ros_filters.so This path seems super weird. Can you paste the result of "echo $LD_LIBRARY_PATH" on your terminal? Was this a CmakeList.txt file written by you? Can you try changing to: target_link_libraries( myheight ${catkin_LIBRARIES} ${Boost_LIBRARIES} ${PCL_LIBRARIES} ) Originally posted by robotchicken with karma: 51 on 2018-04-27 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by aditya369007 on 2018-04-27: In function main': /home/aditya/catkin_ws/src/velodyne_height_map/src/my_height_sub.cpp:16: undefined reference to callback(boost::shared_ptr<sensor_msgs::PointCloud2_std::allocator<void > const> const&)' I got this error now. I think it's a programming error. Will rectify it Comment by aditya369007 on 2018-04-27: And no, this was not written by me Comment by robotchicken on 2018-04-27: Yup, I think you need to define the function callback. Comment by robotchicken on 2018-04-27: Mark as resolved if it works out.
{ "domain": "robotics.stackexchange", "id": 30743, "tags": "ros-kinetic, velodyne" }
Show a diamond shape with numbers
Question: After seeing this post on StackOverflow, I thought I'd try it out. What I have now is a diamond shape with numbers like this: 1 212 32123 4321234 543212345 4321234 32123 212 1 My code looks like this: int i, j, k, n = 6, l = 3, o = 2; for (i = 1; i <= 5; i++) { for (k = n; k >= 1; k--) { Console.Write(" "); } n--; for (j = i; j >= 1; j--) { Console.Write(j); } o = 2; for (j = 1; j < i; j++) { Console.Write(o); o++; } Console.WriteLine(); } for (i = 4; i >= 1; i--) { for (k = l; k >= 1; k--) { Console.Write(" "); } l++; for (j = i; j >= 1; j--) { Console.Write(j); } o = 2; for (j = i; j > 1; j--) { Console.Write(o); o++; } Console.WriteLine(); } Console.Read(); I don't think this is a nice solution for this task. Is there any nicer and shorter way to solve this without using that many loops? Answer: Code issues that decrease readability and maintainability Plenty of loops. Non-meaningful variable names: n, l, o. We can declare the following helper methods: private static string CreateDigitsString(int length) { char[] buffer = new char[length]; for (int i = 0; i < buffer.Length; i++) { buffer[i] = (char)('1' + Math.Abs(buffer.Length / 2 - i)); } return new string(buffer); } private static string CreateLine(int length, int spaces) { return new string(' ', spaces) + CreateDigitsString(length - 2 * spaces) + new string(' ', spaces); } Then use it like: const int N = 5; const int TotalRows = N * 2 - 1; for (int row = 0; row < TotalRows; row++) { Console.WriteLine(CreateLine(TotalRows, Math.Abs(N - row - 1))); } Another approach In the 2-dimensional loop (i, j) we can calculate the distance between the current cell and the center of the diamond as follows: int rowDistance = Math.Abs(row - N + 1); int colDistance = Math.Abs(col - N + 1); Let's output the colDistance for cells that are within rowDistance + colDistance <= N - 1, and space - otherwise. 
const int N = 5; const int TotalSize = N * 2 - 1; for (int row = 0; row < TotalSize; row++) { for (int col = 0; col < TotalSize; col++) { int rowDistance = Math.Abs(row - N + 1); int colDistance = Math.Abs(col - N + 1); Console.Write(rowDistance + colDistance <= N - 1 ? (char)('1' + colDistance) : ' '); } Console.WriteLine(); }
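The row/column-distance formulation in this second approach translates almost verbatim to other languages; a Python sketch of the same idea, returning the diamond as a string so it is easy to check:

```python
def diamond(n=5):
    # Cell (row, col) prints digit col_distance + 1 when it lies inside
    # the diamond, i.e. when row_distance + col_distance <= n - 1.
    size = 2 * n - 1
    lines = []
    for row in range(size):
        row_d = abs(row - n + 1)
        chars = []
        for col in range(size):
            col_d = abs(col - n + 1)
            chars.append(str(col_d + 1) if row_d + col_d <= n - 1 else " ")
        lines.append("".join(chars).rstrip())
    return "\n".join(lines)

print(diamond(3))
```

For `n = 3` this prints the 5-wide diamond with `32123` as its middle row, matching the shape in the question at that size.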
{ "domain": "codereview.stackexchange", "id": 23376, "tags": "c#, ascii-art" }
Hydraulic press equilibrium equation inconsistency
Question: Suppose we have a hydraulic press with a smaller area $A_1$ and a bigger area $A_2$, with the smaller area being higher with a height difference of $\Delta h$. We first calculate the pressure at point A. By Pascal's law, the increase in pressure at point A comes from the pressure created by forces $F$ and the weight $G_m$ of the mass $m$. These pressures get added at each point in the liquid, so for point A we get $p_A = F_1/A_1+F_2/A_2$. With similar reasoning for point B we get $p_B = F_1/A_1+F_2/A_2+\rho g \Delta h$ where we have now taken into account the hydrostatic pressure from the height difference. However, if the system is now in equilibrium, at the area $A_2$ we must have equality with pressure from above and pressure from below, so that the forces acting on both sides are equal. This would give $F_1/A_1$ for the pressure from above and $p_A = F_1/A_1+F_2/A_2$ for the pressure from below. This would imply that $F_2=0$ which is a contradiction. Where am I going wrong? I'm stuck with this and there must be a fundamental misunderstanding somewhere in there. I would really appreciate any pointers. Answer: > These pressures get added at each point in the liquid, so for point A we get $p_A = F_1/A_1+F_2/A_2$ This is where you go wrong. In static equilibrium, the pressure is the same in all directions, and neglecting the weight of the fluid, at all points in the fluid. This means that the two pressures must be equal, but never that they add together. If you wanted to find the pressure by considering the total influence of both sides, you could do so, but the formula would be $P=(F_1+F_2)/(A_1+A_2)$, not $F_1/A_1+F_2/A_2$. It is as if you are trying to find the density of a composite object, and you have approached it by adding up the densities of each component.
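Stated as a formula, using the question's setup (smaller piston of area $A_1$ a height $\Delta h$ above the larger one, with any load such as the weight $G_m$ folded into the piston force), the equilibrium condition that replaces the erroneous sum is:

```latex
\frac{F_1}{A_1} + \rho g\,\Delta h \;=\; \frac{F_2}{A_2}
```

That is, the pressures at the same depth are equal; nowhere do the two piston pressures add.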
{ "domain": "physics.stackexchange", "id": 57226, "tags": "pressure" }
Trying to publish information from subscriber node
Question: I'm trying to make a system consisting of several sections; it must have at least one node doing two roles, both subscriber and publisher. Here are my codes. ##First section## #!/usr/bin/env python3 import rospy as R from std_msgs.msg import Float64 def sender_3(): x = 0 y = float(input("Give us the float num: ")) pub = R.Publisher("line3", Float64, queue_size=10) R.init_node("sender_3", anonymous=True) rate = R.Rate(10) while not R.is_shutdown() and x < y: x += 0.1 t = x print(x) pub.publish(t) rate.sleep() if __name__ == "__main__": try: sender_3() except R.ROSInterruptException: pass Second section #!/usr/bin/env python3 import rospy as R from std_msgs.msg import Float64, String def callback(data): global r_data r_data = data.data R.loginfo("sawadee krub i've received as info: %f", data.data) global t_o t_o = str(r_data) R.loginfo("Applied num to chatGPT: %s", t_o) def receiver_2(): R.init_node("receiver_2", anonymous=10) R.Subscriber("line3", Float64, callback, queue_size=True) pub = R.Publisher("translne", String, queue_size=10) rate = R.Rate(10) while not R.is_shutdown: print(t_o) pub.publish(t_o) rate.sleep() R.spin() #def transporter(): # R.init_node("transporter send you", anonymous=10) # pub = R.Publisher("translne", String, queue_size=10) # while not R.is_shutdown: # print(receiver_2) # pub.publish(receiver_2) # R.Rate(10) if __name__ == "__main__": receiver_2() Last section #!/usr/bin/env python3 import rospy as R from std_msgs.msg import String def callback(data): R.loginfo("I've received an information: %s", data.data) def sub_receiver(): R.init_node("sub_receiver", anonymous=True) R.Subscriber("translne", String , callback, queue_size=10) R.spin() if __name__ == "__main__": sub_receiver() My problem is in the transition between the second and last sections: the last section doesn't seem to receive any of the information that is translated into a String and published to it; nothing appears, not even an error. 
P.S. I commented out the third section from several previous tries. Edited: Mistakes have been fixed. Thank you in advance. Originally posted by Nongtuy on ROS Answers with karma: 3 on 2023-08-08 Post score: 0 Answer: Your code is not doing this correctly. Please look at section 2 of http://wiki.ros.org/rospy/Overview/Time for the proper way to construct a while/Rate loop in Python. while not R.is_shutdown: # <- wrong print(t_o) pub.publish(t_o) R.Rate(10) # <- wrong R.spin() # <- wrong Originally posted by Mike Scheutzow with karma: 4903 on 2023-08-08 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Nongtuy on 2023-08-08: Oh silly me, I always forget to add () in definitions. After I added that, it works properly now. Thank you. Best regards, Tuy
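The mistakes flagged above (missing parentheses on `is_shutdown`, and misuse of `Rate`/`spin`) all concern the standard publish-loop shape. That shape can be sketched without ROS using a hypothetical `Rate` stand-in, so the example runs anywhere:

```python
import time

class Rate:
    """Minimal stand-in for rospy.Rate (hypothetical, for illustration):
    sleep() waits out whatever remains of the current cycle."""
    def __init__(self, hz):
        self.period = 1.0 / hz
        self.last = time.monotonic()

    def sleep(self):
        remaining = self.period - (time.monotonic() - self.last)
        if remaining > 0:
            time.sleep(remaining)
        self.last = time.monotonic()

shutdown = False          # stands in for the node's shutdown flag

def is_shutdown():
    return shutdown

rate = Rate(100)          # construct the Rate once, before the loop
ticks = 0
while not is_shutdown():  # note the call: `while not is_shutdown` is always truthy
    ticks += 1
    if ticks >= 5:
        shutdown = True   # simulate a shutdown request
    rate.sleep()          # sleep once per iteration, inside the loop
```

With rospy, the same shape applies: `rospy.Rate` is built once, `rate.sleep()` is called each pass, and the `while not rospy.is_shutdown():` loop itself replaces any trailing `spin()`.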
{ "domain": "robotics.stackexchange", "id": 38489, "tags": "ros" }
What does the 'V' stand for in ECG electrode names?
Question: In electrocardiography, electrodes have the typical names of: RA (Right Arm) LA (Left Arm) RL (Right Leg) LL (Left Leg) V1, V2, V3, V4, V5 and V6 What does the V stand for? Is it perhaps from vector, ventricular, or maybe from something else entirely? Answer: Short answer V stands for voltage. Background The various leads in an ECG montage are shown below: ECG electrodes. Source: American Heart Association I am not an expert in ECGs, but as far as I can see, RA, LA and LL are return electrodes as they are situated far away from the heart. In other words, they act as reference electrodes. RL is the ground, basically to correct for baseline shifts. This leaves the V electrodes, which are the chest leads, or precordial leads, and are situated right above where the action is, i.e., the heart. The ECG is determined between the V electrodes and a distant reference, with a ground added. The V electrodes are the active electrodes, picking up the signal. The others are references and a ground only.
{ "domain": "biology.stackexchange", "id": 8376, "tags": "terminology, electrocardiography" }
Retrieve Data from TOC
Question: I aim to convert the TOC of The Python Language Reference — Python 3.6.3 documentation to a structured data with the following steps: Copy contents to a plr.md file: In [1]: with open('plr.md') as file: ...: content = file.read() In [2]: content Out[2]: '\n\n- \\1. Introduction\n - [1.1. Alternate Implementations] (https://docs.python.org/3.6/reference/introduction.html#alternate-implementations)\n - [1.2. Notation](https://docs.python.org/3.6/reference/introduction.html#notation)\n- \\2. Lexical analysis\n - [2.1. Line structure] (https://docs.python.org/3.6/reference/lexical_analysis.html#line-structure)\n - [2.2. Other tokens](https://docs.python.org/3.6/reference/lexical_analysis.html#other-tokens)\n Get chapters: In [47]: chapters = content.split('\n- \\') ...: #subtract the unqualified part ...: chapters = chapters[1:] In [50]: chapters[0] Out[50]: '1. Introduction\n - [1.1. Alternate Implementations](https://docs.python.org/3.6/reference/introduction.html#alternate-implementations) \n - [1.2. Notation](https://docs.python.org/3.6/reference/introduction.html#notation)' Separate chapter name and section name in each chapters: chapter_details = chapters[0].split('\n -') sections = chapter_details[1:] chapter = chapter_details[0] In [54]: chapter Out[54]: '1. Introduction' In [55]: sections Out[55]: [' [1.1. Alternate Implementations](https://docs.python.org/3.6/reference/introduction.html#alternate-implementations)', ' [1.2. Notation](https://docs.python.org/3.6/reference/introduction.html#notation)'] Convert section: def convert_section(s): start = s.index('[') + 1 end = s.index(']') return s[start:end] In [57]: print(convert_section(' [1.1. Alternate Implementations](https://docs.python.org/3.6/reference/i ...: ntroduction.html#alternate-implementations)')) 1.1. Alternate Implementations sections = map(convert_section, sections) sections = list(sections) Create a dict: key = chapter {key:sections} {'1. Introduction':['1.1. 
Alternate Implementations', '1.2. Notation']} Encapsulate code in a class and get the result: class TOC: def __init__(self, filename): self.filename = filename def read(self, filename): with open (filename) as file: content = file.read() return content def convert_section(self, s): start = s.index('[') + 1 end = s.index(']') return s[start:end] def get_chapters(self, filename): content = self.read(filename) chapters = content.split('\n- \\') #subtract the unqualified part chapters = chapters[1:] return chapters def create_chapter_dict(self, chapter): chapter_details = chapter.split('\n -') sections = chapter_details[1:] key = chapter_details[0] value = map(self.convert_section, sections) return {key: list(value)} def get_chapters_dict(self): chapters = self.get_chapters(self.filename) chapters_dict = {} for chapter in chapters: chapter_dict = self.create_chapter_dict(chapter) chapters_dict.update(chapter_dict) return chapters_dict Run and get the result: In [89]: TOC('plr.md').get_chapters_dict() Out[89]: {'1. Introduction': ['1.1. Alternate Implementations', '1.2. Notation'], '2. Lexical analysis': ['2.1. Line structure', '2.2. Other tokens', '2.3. Identifiers and keywords', '2.4. Literals', '2.5. Operators', '2.6. Delimiters'], '3. Data model': ['3.1. Objects, values and types', '3.2. The standard type hierarchy', '3.3. Special method names', '3.4. Coroutines'], This solution is a bit too much for a daily common operation. Is there a standard or easy method for such a task? Answer: I think you are overcomplicating the problem. I would probably go with a proper Markdown parser (like mistune), or parse the generated HTML instead. 
Here is how I would do it using BeautifulSoup parser: from pprint import pprint from bs4 import BeautifulSoup import requests response = requests.get("https://docs.python.org/3.6/reference/index.html") soup = BeautifulSoup(response.content, "html.parser") contents = soup.select_one("#the-python-language-reference ul") pprint({ li.a.get_text(): [li.a.get_text() for li in li("li")] for li in contents.find_all("li", recursive=False) }) Prints: {'1. Introduction': ['1.1. Alternate Implementations', '1.2. Notation'], '10. Full Grammar specification': [], '2. Lexical analysis': ['2.1. Line structure', '2.2. Other tokens', '2.3. Identifiers and keywords', '2.4. Literals', '2.5. Operators', '2.6. Delimiters'], '3. Data model': ['3.1. Objects, values and types', '3.2. The standard type hierarchy', '3.3. Special method names', '3.4. Coroutines'], '4. Execution model': ['4.1. Structure of a program', '4.2. Naming and binding', '4.3. Exceptions'], '5. The import system': ['5.1. importlib', '5.2. Packages', '5.3. Searching', '5.4. Loading', '5.5. The Path Based Finder', '5.6. Replacing the standard import system', '5.7. Special considerations for __main__', '5.8. Open issues', '5.9. References'], '6. Expressions': ['6.1. Arithmetic conversions', '6.2. Atoms', '6.3. Primaries', '6.4. Await expression', '6.5. The power operator', '6.6. Unary arithmetic and bitwise operations', '6.7. Binary arithmetic operations', '6.8. Shifting operations', '6.9. Binary bitwise operations', '6.10. Comparisons', '6.11. Boolean operations', '6.12. Conditional expressions', '6.13. Lambdas', '6.14. Expression lists', '6.15. Evaluation order', '6.16. Operator precedence'], '7. Simple statements': ['7.1. Expression statements', '7.2. Assignment statements', '7.3. The assert statement', '7.4. The pass statement', '7.5. The del statement', '7.6. The return statement', '7.7. The yield statement', '7.8. The raise statement', '7.9. The break statement', '7.10. The continue statement', '7.11. 
The import statement', '7.12. The global statement', '7.13. The nonlocal statement'], '8. Compound statements': ['8.1. The if statement', '8.2. The while statement', '8.3. The for statement', '8.4. The try statement', '8.5. The with statement', '8.6. Function definitions', '8.7. Class definitions', '8.8. Coroutines'], '9. Top-level components': ['9.1. Complete Python programs', '9.2. File input', '9.3. Interactive input', '9.4. Expression input']} Note that this would work for the limited depth of the nested lists. If it would be needed, you can generalize parsing the nested lists using something like this dictify() function.
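If the goal is to stay with the plain `plr.md` file rather than fetch the HTML, the class in the question can also collapse into one small function; a sketch assuming the same layout shown there (chapter lines starting with `- \` and indented `[title](url)` section lines):

```python
import re

# Matches a Markdown link and captures its title.
LINK = re.compile(r"\[([^\]]+)\]\([^)]+\)")

def parse_toc(text):
    """Parse the question's plr.md layout into {chapter: [section titles]}."""
    toc = {}
    chapter = None
    for raw in text.splitlines():
        line = raw.strip()
        if line.startswith("- \\"):      # chapter line, e.g. "- \1. Introduction"
            chapter = line[3:]
            toc[chapter] = []
        elif chapter is not None:
            m = LINK.search(line)        # section line with a [title](url) link
            if m:
                toc[chapter].append(m.group(1))
    return toc

sample = "- \\1. Introduction\n  - [1.1. Notation](https://example)"
print(parse_toc(sample))  # {'1. Introduction': ['1.1. Notation']}
```

This keeps the whole task to one pass over the lines, at the cost of being tied to the exact Markdown layout; the HTML-parsing route above is more robust to formatting changes.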
{ "domain": "codereview.stackexchange", "id": 28420, "tags": "python, file" }
How does Organ Transplant work well despite having Foreign DNA
Question: My question is a little bit related to this one: Why does organ transplant work although it seems the organ's motor neurons aren't connected to the recipient's CNS? Problem Suppose person A gets a kidney transplant (person B is the donor). A and B have different DNA, so how does the organ transplant work well for person A, how does person A's body "accept" the organ's DNA, and how does replication work across the foreign organ? Question Will person A's transplanted organ contain: Person A's DNA Person B's DNA DNA of both Person A and B (converting to person A's over time, with a certain half-life) or Forever contain both DNAs Thanks in advance, and kindly provide a reference for your answer. Answer: 4. Forever contain both DNAs Yes, the donor kidney still has (and will remain to have) the donor's DNA. That is irrelevant to the acceptance and continued functioning of the organ, though. The thing that could make an organ transplant fail is the recipient's immune system. If it detects the organ as "alien", it will attack it like any other foreign particle, and the body will reject the transplant. To avoid this, two things are being done: A donor is sought whose organs have similar protein markers to those of the recipient. (Those markers, and not the DNA, are what the immune system looks for when trying to tell friend and foe apart. DNA is inside the cell, the protein markers are on the outside.) Similar protein markers are usually found in people with similar DNA, which is why close relatives are tested for this kind of compatibility first. The recipient receives special drugs that partially suppress the immune system. If the recipient's body does not reject the organ, it can continue to function normally despite its different DNA. If, for example, there is a genetic defect in there that will lead to some kidney-related issues at a later time, it will be the donor's defect. 
If there has been a defect in the recipients kidneys that would have led to issues later in his life, that defect would be gone with his kidney. Other cells in his body would still carry the same defect, but those cells are not kidney cells, and thus that particular issue will not manifest itself. Replication of cells will follow the same pattern: Cells from the recipient will replicate the recipient's DNA, cells from the donor's organ will replicate the donor's DNA.
{ "domain": "biology.stackexchange", "id": 5066, "tags": "molecular-biology" }
Null geodesics and hypersurfaces
Question: I have 3 questions: Is a vector perpendicular to any tangent vector of any null geodesic also a null vector? How can we find a hypersurface to a null geodesic? Suppose we have a null geodesic in 4d; the transverse metric to it is 2d (you can look it up online, it is 2-dimensional). Intuitively, I don't understand how this is possible. Answer: 1. Nope. For example, in Minkowski spacetime, consider the vector $(v^\mu)=(1,0,0,1)$, which is a null vector. The vector $(u^\mu)=(0,1,0,0)$ is orthogonal to it, yet it is not a null vector. 2. I don't understand this question. 3. Consider a null geodesic with tangent vector $u^\mu$ ($u^\mu u_\mu = 0$). Let $\lambda$ be the parameter along the null geodesic. Let $\Sigma_p<T_pM$ be the orthogonal complement to $u^\mu$ at $p\in M$. Note that because $u^\mu$ is a null vector, it is orthogonal to itself, hence $u_p\in\Sigma_p$. Let us choose two additional vectors in $\Sigma_p$, $e^\mu_1$ and $e^\mu_2$. We can choose these vectors such that $e^\mu_Ae^\nu_Bg_{\mu\nu}=\delta_{AB}$ ($A,B=1,2$), and because $u^\mu$ is normal to $\Sigma_p$, we have $u^\mu e^\nu_A g_{\mu\nu}=0$. The line element at $p$ can be expressed as $$ ds^2(p)=g_{\mu\nu}(p)u^\mu u^\nu d\lambda^2+2g_{\mu\nu}(p)e^\mu_A u^\nu d\sigma^Ad\lambda+g_{\mu\nu}(p)e^\mu_A e^\nu_B d\sigma^A d\sigma^B, $$ where $\sigma^A$ are some coordinates for which at $p$, $e^\mu_A$ are the coordinate basis vectors. Writing in the relations between the basis vectors gives $$ ds^2(p)=g_{\mu\nu}(p)e^\mu_A e^\nu_B d\sigma^A d\sigma^B\equiv \delta_{AB}d\sigma^A d\sigma^B, $$ since the other contractions are all vanishing. We can then see that if you make an infinitesimal displacement $d\xi=(d\lambda,d\sigma^1,d\sigma^2)$ that is orthogonal to the null curve, the contribution from $d\lambda$ is zero, hence it doesn't matter. 
After all, $\lambda$ is a null parameter, along which there is vanishing arc length, and so the physical/geometric displacement corresponding to $d\xi$ depends only on $d\sigma^1$ and $d\sigma^2$, which is why the metric is effectively two dimensional.
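The counterexample in part 1 can be checked numerically. A small sketch, assuming the $(-,+,+,+)$ metric signature (a convention not stated in the answer):

```python
# Minkowski inner product with signature (-, +, +, +) -- an assumed convention.
def minkowski_dot(u, v):
    return -u[0] * v[0] + sum(a * b for a, b in zip(u[1:], v[1:]))

u = (1, 0, 0, 1)   # the null vector (v^mu) from the answer
w = (0, 1, 0, 0)   # orthogonal to u, but spacelike, not null

assert minkowski_dot(u, u) == 0   # u is null
assert minkowski_dot(u, w) == 0   # w is orthogonal to u
assert minkowski_dot(w, w) == 1   # yet w has positive norm: spacelike
```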
{ "domain": "physics.stackexchange", "id": 48875, "tags": "general-relativity, differential-geometry, metric-tensor, vectors, geodesics" }
SQL to find table containing all specified columns
Question: I have the below code to list all tables which have columns DataAreaId and CountryRegionId. This works, but requires me to change the code in two places if I amend the column list (i.e. both the name in ('DataAreaId','CountryRegionId') code to list the required column names, and also the having COUNT(1) = 2 to match the number of specified columns. select * from sys.tables t where object_id in ( select object_id from sys.columns c where name in ('DataAreaId','CountryRegionId') group by object_id having COUNT(1) = 2 ) order by Name I can tweak it to make things more dynamic (i.e. so I only have to define the list of columns; and not have to remember to amend count(1) = 2 to match the number of values as so: declare @cols table(name sysname) insert @cols values('DataAreaId'),('CountryRegionId') select * from sys.tables t where object_id in ( select object_id from sys.columns c where name in (select name from @cols) group by object_id having COUNT(1) = (select COUNT(distinct name) from @cols) ) order by Name But that has a bad smell to it / doesn't look elegant. Any thoughts on how this could be improved, or is this just one of those scenarios where elegance isn't possible? I'm thinking the seldom used ALL keyword may help somehow; though not sure how. Answer: In this case, I would consider using a CTE to contain the columns you are interested in. This saves having to declare the variables, and also saves the inserts, etc. The basic concept is the same as your second query though... 
with FindColumns as ( select 'DataAreaId' as Seek UNION select 'CountryRegionId' as Seek ), MyTables as ( select object_id as Tab, count(*) as ColCount from sys.columns inner join FindColumns on name = Seek group by object_id ) select * from sys.tables inner join MyTables on object_id = Tab where ColCount = (select count(*) from FindColumns) Note how this is really just a re-expression of your second query using Common Table Expressions I put that query together on the Stack Exchange data explorer as an example using a couple of different column names.... you can see it working there.
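The same "count the matching columns" idea is not specific to T-SQL. Here is a hypothetical sketch using Python's sqlite3 (SQLite has no sys.columns, so it reads PRAGMA table_info instead; the table names are illustrative):

```python
import sqlite3

def tables_with_all_columns(conn, wanted):
    """Return names of tables that contain every column in `wanted`."""
    wanted = set(wanted)
    result = []
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    for table in tables:
        # Row layout of PRAGMA table_info: (cid, name, type, ...)
        cols = {r[1] for r in conn.execute(f"PRAGMA table_info({table})")}
        # Same idea as HAVING COUNT(...) = number of wanted columns:
        if wanted <= cols:
            result.append(table)
    return sorted(result)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Sales (DataAreaId TEXT, CountryRegionId TEXT, Amount REAL)")
conn.execute("CREATE TABLE Log (DataAreaId TEXT, Message TEXT)")
```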
{ "domain": "codereview.stackexchange", "id": 15115, "tags": "sql, t-sql" }
Dielectric in a parallel plate capacitor
Question: Uniform charge: each atom has charge $q$. Magnitude of dipole moment is $q s$, where $s$ is the distance the nucleus is shifted. According to my notes, the charge on the surface of a dielectric in between the plates is $N q s S$, where $N$ is the number of dipoles and $S$ is the surface area of the plate. But surely this should be $N q s$, because the charge on the surface should be the same irrespective of the surface area (because we are using the number of charges on the surface $N$). Answer: There are two misconceptions present in your explanation of the problem. $N$ is not the number of dipoles, but their volumetric density. $Q$ is not the total charge, but the equivalent charge at the boundaries of the dielectric. The idea is that (a) a dielectric of area $A$ and height $L$ polarized homogeneously along its height and (b) two plane-parallel plates of area $A$, distanced by $L$ and with charges $N A p$ and $-N A p$, produce macroscopically the same electric field ($N$ is the volumetric density and $p = q s$ is the dipole moment of one dipole). This effect can be relatively simply understood if you imagine that you have charges of volumetric density $N$ homogeneously distributed all along the material. Initially positive and negative charges are in the same positions, all the material is electrically neutral and the polarization equals zero (picture left). Now you pull all positive charges up by $s/2$ and all negative charges down by $s/2$ (picture right), so you actually get total dipole moment $P' = N V p = N A L q s$. Figure: red = positive charge, blue = negative charge, violet = neutral. What is the effect of such movements? The bulk of the dielectric material remains neutral in terms of charge, but you do get excess charge $Q = N A s q$ at the top and excess charge $-Q = -N A s q$ at the bottom of the dielectric ($A s$ is the volume at the top or bottom where only one type of charge is present). 
The point of this simple derivation is that surface charge density $\sigma = \frac{Q}{A} = N s q$ equals polarization volumetric density $P = \frac{P'}{A L} = N q s$, i.e. $\sigma = P$. (Polarization density is by definition total dipole moment of the dielectric divided by its volume.)
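The bookkeeping above is easy to verify numerically. A small sketch with made-up values (all numbers are illustrative, in SI-like but arbitrary units):

```python
# Illustrative values (not from the question): dipole density N, charge q,
# nucleus displacement s, plate area A.
N, q, s, A = 1.0e28, 1.6e-19, 1.0e-11, 2.0e-3

Q = N * A * s * q          # excess charge in the top slab of volume A*s
sigma = Q / A              # surface charge density
P = N * q * s              # polarization (dipole moment per unit volume)

# The answer's point: sigma equals P, while the total surface charge
# scales with the area A (it is N*q*s*A, not N*q*s).
assert abs(sigma - P) < 1e-12 * P
assert abs(Q - N * q * s * A) < 1e-12 * Q
```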
{ "domain": "physics.stackexchange", "id": 3480, "tags": "electrostatics, capacitance, dipole, dielectric" }
Does this example contradict Earnshaw's theorem in one dimension?
Question: This is basically a continuation of the post here. Consider electrostatics in $1$-dimension (say, the $x$-axis). Now consider a positive charge $+q$ located at $x=0$, and two equal negative charges $-q$ are held fixed at $x=+a$ and $x=-a$. In this configuration, the total force on $+q$ at $x=0$ is zero i.e., the charge at $x=0$ is in equilibrium. Moreover, it is also a stable equilibrium i.e., if we slightly displace $q$ towards left or right, then it would oscillate about $x=0$. This means that it is possible to keep the charge $+q$ in stable equilibrium by electrostatic forces alone. But this again goes against Earnshaw's theorem. Again I must be missing something. Is it that when I say the charges at $x=\pm a$ are held fixed, I am using mechanical forces and thus move outside the purview of Earnshaw's theorem? Answer: Earnshaw's theorem indeed holds in any number of spatial dimensions. The problem here is your assumption about the electric field. In any number of dimensions, the electric field obeys Gauss's law. So in $d$ spatial dimensions, $$E(r) \propto \frac{1}{r^{d-1}}.$$ In particular, for $d = 1$, the electric field of a charge is constant. This makes sense, because the field lines don't have any direction to "spread out" in. In your setup, the electric field due to the negative charges is exactly zero everywhere between them, so the equilibrium is only neutrally stable, in accordance with Earnshaw's theorem.
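The neutral (rather than stable) equilibrium can be seen directly: in 1D each point charge produces a field of constant magnitude, so between the two negative charges their contributions cancel everywhere, not only at the origin. A sketch, with the field magnitude E0 in arbitrary units and sign conventions assumed for illustration:

```python
def field_1d(charge_sign, source_x, x, E0=1.0):
    """1D 'Gauss law' field: constant magnitude, pointing away from a
    positive source (toward a negative one)."""
    direction = 1.0 if x > source_x else -1.0
    return charge_sign * E0 * direction

def net_field(x, a=1.0):
    # Two fixed charges of sign -1 held at x = -a and x = +a.
    return field_1d(-1, -a, x) + field_1d(-1, +a, x)

# Between the charges the field vanishes everywhere, so a displaced
# test charge feels no restoring force: a neutral equilibrium.
for x in (-0.9, -0.5, 0.0, 0.3, 0.99):
    assert net_field(x) == 0.0
```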
{ "domain": "physics.stackexchange", "id": 57228, "tags": "electrostatics, potential, potential-energy, classical-electrodynamics, equilibrium" }
Detect strings that consist of some a's followed by some b's
Question: This program needs to detect strings in which the first few characters are 'a', then a few 'b's. Is it possible that there may be undefined behavior in this implementation? string str; while (cin >> str) { unsigned int i = 0; while (str[i] == 'a' && i < str.length()) { i++; } while (str[i] == 'b' && i < str.length()) { i++; } if (i == str.length()) { cout << "ok" << endl; } else { cout << "Not ok" << endl; } } Answer: So: // Note: str is a string. // containing "a" // So length = 1 while (str[i] == 'a' && i < str.length()) { i++; } // First Loop // str[0] == 'a' && 0 < 1 // true && true // true // So we enter the loop body and increment i to 1. // Second Loop // str[1] // access beyond the end of the string. // // thus undefined behavior. If we swap the test around while (i < str.length() && str[i] == 'a') { i++; } // First Loop // 0 < 1 && str[0] == 'a' // true && true // true // So we enter the loop body and increment i to 1. // Second Loop // 1 < 1 && // The left hand side of the test is false // // && is shortcut operator so the right hand // // side is not evaluated and the loop exited.
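The corrected ordering (bounds test first, relying on short-circuit evaluation) carries over to other languages. A sketch mirroring the fixed loops in Python:

```python
def is_a_then_b(s):
    """True if s is some a's followed by some b's (either run may be empty,
    matching the behavior of the C++ code, which accepts the empty string)."""
    i = 0
    # Bounds check first, so s[i] is never evaluated out of range.
    # (Python would raise IndexError rather than invoke undefined
    # behavior, but the ordering lesson is the same.)
    while i < len(s) and s[i] == 'a':
        i += 1
    while i < len(s) and s[i] == 'b':
        i += 1
    return i == len(s)

assert is_a_then_b("aaabb")
assert is_a_then_b("")
assert not is_a_then_b("aba")
assert not is_a_then_b("ba")
```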
{ "domain": "codereview.stackexchange", "id": 25226, "tags": "c++, strings" }
Person Info Manager
Question: I'm in the process of teaching myself JavaScript. I'm starting to branch off into OOP designs and did not want to continue until I knew I was doing it correctly. Is this a common or accepted way of formatting an object in JavaScript, along with its constructor and methods? I have seen many different ways of doing this (at least it seems like it). Any other irregularities that could be brought to my attention would also be nice. function Person(firstName, lastName, age){ this.firstName = firstName; this.lastName = lastName; this.age = age; } Person.prototype = { fullName:function(){ return this.firstName + " " + this.lastName; }, changeFirstName:function(name){ this.firstName = name; }, changeLastName:function(name){ this.lastName = name; }, changeAge:function(age){ this.age = age; }, displayInfo:function(){ document.write("Fullname: " + this.fullName() + "<br />"); document.write("Age: " + this.age + "<br />"); } } // Was just testing the functions. var person = new Person("first", "last", 20); document.write(person.fullName() + "<br />"); person.changeFirstName("FIRST"); document.write(person.fullName() + "<br />"); person.displayInfo(); Answer: There are a few techniques to write a class in JavaScript, and each has its strengths and weaknesses. The prototypical technique you use is fine, but there are a few things you should note: Overwriting the prototype as opposed to extending it Like Joseph said, when you assign Person.prototype = {, you are overwriting the original prototype, which can result in some lost properties. If you rely on the constructor property of a class, this will not be accessible once you've overwritten the prototype. To keep the constructor property, you could add it to your custom prototype: Person.prototype = { constructor: Person, fullName: function () { return this.firstName + " " + this.lastName; }, //... 
} For more info on the subject, these two Stack Overflow questions go into more detail: Overwriting Prototype Bad Practice, Using Prototype Best Practice. Public Mutable Data When you use this.firstName = firstName, you give a user of a person object full access to that data member; they can directly modify it with person.firstName = "newName". This is true for all the data members of your Person class, which makes your changeX functions obsolete. If you want the data in the Person class to be private, there's another technique for making classes that utilizes closures: function createPerson(firstName, lastName, age) { return { getFirstName: function () { return firstName; }, getLastName: function () { return lastName; }, getAge: function () { return age; }, }; } A downside to this technique is that you create separate functions for each instance of the class instead of each instance sharing the functions in the prototype. Cohesion The function displayInfo feels out of place in your Person class. As a general practice, it's clearer to have each class/function/module perform one specific task. Your person class appears to do two things: manage the data for a person AND display its data to an HTML document. One downside to less cohesive code is that it can create dependencies that probably shouldn't be there. For example, your Person class depends on the document object which wouldn't be available in a node.js setting. You might consider breaking out displayInfo into its own function: function displayPersonInfo(person) { document.write("Fullname: " + person.fullName() + "<br />"); document.write("Age: " + person.age + "<br />"); }
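The closure-based privacy technique is not specific to JavaScript. A sketch of the same pattern translated to Python (the names are illustrative, and a dict of accessors stands in for the returned object):

```python
def create_person(first_name, last_name, age):
    """Factory returning accessors that close over private state."""
    def full_name():
        return first_name + " " + last_name
    def get_age():
        return age
    # Only these functions can reach the captured variables; there is
    # no public attribute like person.first_name to mutate directly.
    return {"full_name": full_name, "get_age": get_age}

person = create_person("Ada", "Lovelace", 36)
assert person["full_name"]() == "Ada Lovelace"
assert person["get_age"]() == 36
```

As in the JavaScript version, the trade-off is that each instance carries its own function objects instead of sharing methods.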
{ "domain": "codereview.stackexchange", "id": 21280, "tags": "javascript, beginner, object-oriented" }
Matlab: Calculation range
Question: I have this Matlab/Octave code: % needed for Octave ---------------------- pkg load signal % ------------------------------------------------ % Square ----------------------------------------- figure N=50; % harmonics fs = 1000; r_start=0; r_end=10; r_step=1/fs; r = r_start:r_step:r_end; % range w_sqr = square(r)/2; % Square wave [-0.5:0.5] %w_sqr = square(r); % [-1:1] plot(r,w_sqr); axis([r_start r_end -1.2 1.2]); grid on; hold on; % Fourier ----------------------------------------- i=1; sum=0; for t=r for n=1:N sum = sum + (2*sin(n*t)+sin(pi*n-n*t)-sin(n*t+pi*n))/(2*pi*n); end F(i)=sum; i=i+1; sum = 0; end; F=F'; plot(r,F); axis([r_start r_end -1.2 1.2]); which results in this: By changing the value of the variable w_sqr to (square(r)+1)/2 and the calculation of F to F(i)=(1/2)+sum, I get the result in the range [0:1]. What changes need to be made in Stage 2 to get the result in the range [-1:1]? Answer: I guess F(i) = 2*sum should do it.
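The scaling can be checked numerically: the per-harmonic term in the loop reduces to the Fourier series of a square wave with amplitude 1/2, so doubling the sum gives the [-1:1] range. A quick Python check of F(t) = 2*sum, with N = 50 harmonics as in the script:

```python
import math

def fourier_square(t, N=50):
    """Partial Fourier sum from the question, scaled by 2 as proposed."""
    s = 0.0
    for n in range(1, N + 1):
        s += (2 * math.sin(n * t) + math.sin(math.pi * n - n * t)
              - math.sin(n * t + math.pi * n)) / (2 * math.pi * n)
    return 2 * s  # the fix: scale the [-0.5, 0.5] sum up to [-1, 1]

# Midway through each half-period the partial sum should sit near +/-1
# (the Leibniz-series tail bounds the error well under 0.05 here).
assert abs(fourier_square(math.pi / 2) - 1.0) < 0.05
assert abs(fourier_square(3 * math.pi / 2) + 1.0) < 0.05
```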
{ "domain": "dsp.stackexchange", "id": 9256, "tags": "matlab" }
How does fire heat air?
Question: I understand that fire heats its surroundings via conduction, convection and radiation. I've read that conduction is nearly irrelevant to this process as air is a poor heat conductor. In descriptions of convection, people often just say "fire heats the air and the air circulates heat to the environment". But, if air is a poor heat conductor and a poor absorber of radiation, how is the air heated in order to convect heat in the first place? Answer: Fire is a reaction that includes the air. To be precise, it includes a part of the air, oxygen. It releases the results of the reaction as a new part of the air. Those reaction results are hot and mix in with the rest of the surrounding air. Air is a poor conductor when there is no ability to mix. The "output" of fire (for our purposes assume it is carbon dioxide) mixes with the air, raising the average temperature.
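The mixing argument amounts to a simple energy balance: hot combustion products blend with cooler air and settle near a mass-weighted average temperature. A sketch, assuming (a simplification not in the answer) equal and constant specific heats and no further heat exchange:

```python
def mixed_temperature(m_hot, T_hot, m_air, T_air):
    """Mass-weighted mean temperature of two gas parcels with equal
    specific heat capacity (a simplifying assumption)."""
    return (m_hot * T_hot + m_air * T_air) / (m_hot + m_air)

# 1 kg of hot exhaust at 900 K mixing into 9 kg of ambient air at 300 K:
T = mixed_temperature(1.0, 900.0, 9.0, 300.0)
assert T == 360.0            # the average temperature of the air rises
assert 300.0 < T < 900.0
```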
{ "domain": "physics.stackexchange", "id": 94092, "tags": "thermodynamics, radiation, thermal-radiation, thermal-conductivity, radiative-transfer" }
rostopic echo to webpage
Question: How can I export the output from rostopic echo (for example the battery level) to a webpage? Originally posted by agrirobot-George on ROS Answers with karma: 1 on 2013-06-04 Post score: 0 Answer: You can look at rosbridge for this. Or write a node yourself that generates an html file with the level. Originally posted by davinci with karma: 2573 on 2013-06-04 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by agrirobot-George on 2013-06-05: Excellent! Thank you for this information. I just checked it out. Does it run with ros electric, too? Also, do you have any experience with JSON? What would the structure of such a file be, for example to get the data from rostopic echo battery?
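The "generate an html file yourself" suggestion is just string templating. A minimal ROS-free sketch of what such a node's subscriber callback could write (the field name and refresh interval are hypothetical; a real node would fill level_percent from the message):

```python
def battery_page(level_percent):
    """Render a tiny auto-refreshing HTML page for a battery level."""
    return (
        "<html><head><meta http-equiv='refresh' content='1'></head>"
        "<body><h1>Battery: {:.1f}%</h1></body></html>".format(level_percent)
    )

# In a real node this would run inside the topic callback, writing the
# string to a file served by any web server:
html = battery_page(87.5)
assert "87.5%" in html
```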
{ "domain": "robotics.stackexchange", "id": 14422, "tags": "ros, rostopic-echo" }
Two dimensional bicubic interpolation implementation in C
Question: This is a follow-up question for Two dimensional bicubic interpolation implementation in Matlab and Two dimensional gaussian image generator in C. Besides the Matlab version code, I am attempting to make a C version two dimensional bicubic interpolation function BicubicInterpolation here. The experimental implementation BicubicInterpolation function implementation: RGB* BicubicInterpolation(const RGB* const image, const int originSizeX, const int originSizeY, const int newSizeX, const int newSizeY) { RGB* output; output = malloc(sizeof *output * newSizeX * newSizeY); if (output == NULL) { printf(stderr, "Memory allocation error!"); return NULL; } float ratiox = (float)originSizeX / (float)newSizeX; float ratioy = (float)originSizeY / (float)newSizeY; for (size_t y = 0; y < newSizeY; y++) { for (size_t x = 0; x < newSizeX; x++) { for (size_t channel_index = 0; channel_index < 3; channel_index++) { float xMappingToOrigin = (float)x * ratiox; float yMappingToOrigin = (float)y * ratioy; float xMappingToOriginFloor = floor(xMappingToOrigin); float yMappingToOriginFloor = floor(yMappingToOrigin); float xMappingToOriginFrac = xMappingToOrigin - xMappingToOriginFloor; float yMappingToOriginFrac = yMappingToOrigin - yMappingToOriginFloor; unsigned char* ndata; ndata = malloc(sizeof *ndata * 4 * 4); if (ndata == NULL) { printf(stderr, "Memory allocation error!"); return NULL; } for (int ndatay = -1; ndatay < 2; ndatay++) { for (int ndatax = -1; ndatax < 2; ndatax++) { ndata[(ndatay + 1) * 4 + (ndatax + 1)] = image[ clip(yMappingToOriginFloor + ndatay, 0, originSizeY - 1) * originSizeX + clip(xMappingToOriginFloor + ndatax, 0, originSizeX - 1) ].channels[channel_index]; } } unsigned char result = BicubicPolate(ndata, xMappingToOriginFrac, yMappingToOriginFrac); output[ y * newSizeX + x ].channels[channel_index] = result; free(ndata); } } } return output; } The other used functions: unsigned char BicubicPolate(const unsigned char* const ndata, const float fracx, 
const float fracy) { float x1 = CubicPolate( ndata[0], ndata[1], ndata[2], ndata[3], fracx ); float x2 = CubicPolate( ndata[4], ndata[5], ndata[6], ndata[7], fracx ); float x3 = CubicPolate( ndata[8], ndata[9], ndata[10], ndata[11], fracx ); float x4 = CubicPolate( ndata[12], ndata[13], ndata[14], ndata[15], fracx ); float output = clip_float(CubicPolate( x1, x2, x3, x4, fracy ), 0.0, 255.0); return (unsigned char)output; } float CubicPolate(const float v0, const float v1, const float v2, const float v3, const float fracy) { float A = (v3-v2)-(v0-v1); float B = (v0-v1)-A; float C = v2-v0; float D = v1; return D + fracy * (C + fracy * (B + fracy * A)); } size_t clip(const size_t input, const size_t lowerbound, const size_t upperbound) { if (input < lowerbound) { return lowerbound; } if (input > upperbound) { return upperbound; } return input; } float clip_float(const float input, const float lowerbound, const float upperbound) { if (input < lowerbound) { return lowerbound; } if (input > upperbound) { return upperbound; } return input; } base.h /* Develop by Jimmy Hu */ #ifndef BASE_H #define BASE_H #include <math.h> #include <stdbool.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include <unistd.h> #define MAX_PATH 256 #define FILE_ROOT_PATH "./" #define True true #define False false typedef struct RGB { unsigned char channels[3]; } RGB; typedef struct HSV { long double channels[3]; // Range: 0 <= H < 360, 0 <= S <= 1, 0 <= V <= 255 }HSV; typedef struct BMPIMAGE { char FILENAME[MAX_PATH]; unsigned int XSIZE; unsigned int YSIZE; unsigned char FILLINGBYTE; unsigned char *IMAGE_DATA; } BMPIMAGE; typedef struct RGBIMAGE { unsigned int XSIZE; unsigned int YSIZE; RGB *IMAGE_DATA; } RGBIMAGE; typedef struct HSVIMAGE { unsigned int XSIZE; unsigned int YSIZE; HSV *IMAGE_DATA; } HSVIMAGE; #endif The full testing code /* Develop by Jimmy Hu */ #include "base.h" #include "imageio.h" RGB* BicubicInterpolation(const RGB* const image, const int originSizeX, const 
int originSizeY, const int newSizeX, const int newSizeY); unsigned char BicubicPolate(const unsigned char* ndata, const float fracx, const float fracy); float CubicPolate(const float v0, const float v1, const float v2, const float v3, const float fracy); size_t clip(const size_t input, const size_t lowerbound, const size_t upperbound); float clip_float(const float input, const float lowerbound, const float upperbound); int main(int argc, char** argv) { char *FilenameString; FilenameString = malloc( sizeof *FilenameString * MAX_PATH); printf("BMP image input file name:(ex:test): "); scanf("%s", FilenameString); BMPIMAGE BMPImage1 = bmp_file_read(FilenameString, false); RGBIMAGE RGBImage1; RGBImage1.XSIZE = BMPImage1.XSIZE; RGBImage1.YSIZE = BMPImage1.YSIZE; RGBImage1.IMAGE_DATA = raw_image_to_array(BMPImage1.XSIZE, BMPImage1.YSIZE, BMPImage1.IMAGE_DATA); RGBIMAGE RGBImage2; RGBImage2.XSIZE = 1024; RGBImage2.YSIZE = 1024; RGBImage2.IMAGE_DATA = BicubicInterpolation(RGBImage1.IMAGE_DATA, RGBImage1.XSIZE, RGBImage1.YSIZE, RGBImage2.XSIZE, RGBImage2.YSIZE); printf("file name for saving:(ex:test): "); scanf("%s", FilenameString); bmp_write(FilenameString, RGBImage2.XSIZE, RGBImage2.YSIZE, array_to_raw_image(RGBImage2.XSIZE, RGBImage2.YSIZE, RGBImage2.IMAGE_DATA)); free(FilenameString); free(RGBImage1.IMAGE_DATA); free(RGBImage2.IMAGE_DATA); return 0; } RGB* BicubicInterpolation(const RGB* const image, const int originSizeX, const int originSizeY, const int newSizeX, const int newSizeY) { RGB* output; output = malloc(sizeof *output * newSizeX * newSizeY); if (output == NULL) { printf(stderr, "Memory allocation error!"); return NULL; } float ratiox = (float)originSizeX / (float)newSizeX; float ratioy = (float)originSizeY / (float)newSizeY; for (size_t y = 0; y < newSizeY; y++) { for (size_t x = 0; x < newSizeX; x++) { for (size_t channel_index = 0; channel_index < 3; channel_index++) { float xMappingToOrigin = (float)x * ratiox; float yMappingToOrigin = (float)y * ratioy; 
float xMappingToOriginFloor = floor(xMappingToOrigin); float yMappingToOriginFloor = floor(yMappingToOrigin); float xMappingToOriginFrac = xMappingToOrigin - xMappingToOriginFloor; float yMappingToOriginFrac = yMappingToOrigin - yMappingToOriginFloor; unsigned char* ndata; ndata = malloc(sizeof *ndata * 4 * 4); if (ndata == NULL) { printf(stderr, "Memory allocation error!"); return NULL; } for (int ndatay = -1; ndatay < 2; ndatay++) { for (int ndatax = -1; ndatax < 2; ndatax++) { ndata[(ndatay + 1) * 4 + (ndatax + 1)] = image[ clip(yMappingToOriginFloor + ndatay, 0, originSizeY - 1) * originSizeX + clip(xMappingToOriginFloor + ndatax, 0, originSizeX - 1) ].channels[channel_index]; } } unsigned char result = BicubicPolate(ndata, xMappingToOriginFrac, yMappingToOriginFrac); output[ y * newSizeX + x ].channels[channel_index] = result; free(ndata); } } } return output; } unsigned char BicubicPolate(const unsigned char* const ndata, const float fracx, const float fracy) { float x1 = CubicPolate( ndata[0], ndata[1], ndata[2], ndata[3], fracx ); float x2 = CubicPolate( ndata[4], ndata[5], ndata[6], ndata[7], fracx ); float x3 = CubicPolate( ndata[8], ndata[9], ndata[10], ndata[11], fracx ); float x4 = CubicPolate( ndata[12], ndata[13], ndata[14], ndata[15], fracx ); float output = clip_float(CubicPolate( x1, x2, x3, x4, fracy ), 0.0, 255.0); return (unsigned char)output; } float CubicPolate(const float v0, const float v1, const float v2, const float v3, const float fracy) { float A = (v3-v2)-(v0-v1); float B = (v0-v1)-A; float C = v2-v0; float D = v1; return D + fracy * (C + fracy * (B + fracy * A)); } size_t clip(const size_t input, const size_t lowerbound, const size_t upperbound) { if (input < lowerbound) { return lowerbound; } if (input > upperbound) { return upperbound; } return input; } float clip_float(const float input, const float lowerbound, const float upperbound) { if (input < lowerbound) { return lowerbound; } if (input > upperbound) { return upperbound; } 
return input; } All suggestions are welcome. The summary information: Which question it is a follow-up to? Two dimensional bicubic interpolation implementation in Matlab and Two dimensional gaussian image generator in C. What changes have been made in the code since the last question? I am attempting to make a C version two dimensional bicubic interpolation function in this post. Why a new review is being asked for? If there is any possible improvement, please let me know. Answer: Avoid unnecessary allocation of temporary storage In the innermost loop, you do this: unsigned char* ndata; ndata = malloc(sizeof *ndata * 4 * 4); This is slow and completely unnecessary; you can just declare an array on the stack like so: unsigned char ndata[4 * 4]; Possible improvements to the algorithm It is likely that many of the intermediate values you are calculating in BicubicPolate() might be the same as those for neighbouring pixels. Also in CubicPolate(), none of the values of A to D depend on fracy, and some preprocessing of the image might allow you to avoid many of the operations. Also consider that the ratio between the source and destination can be larger than 1 or smaller than 1, and different algorithms might be better for each case, and ratios of the form n or 1 / n, where n is an integer, might especially be candidates for algorithmic improvements.
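The Horner-form cubic in CubicPolate can be sanity-checked on its own. A direct Python transcription, with the interpolation properties the segment should satisfy (it passes through v1 at frac = 0 and v2 at frac = 1):

```python
def cubic_polate(v0, v1, v2, v3, frac):
    """Direct transcription of the C CubicPolate (Horner form)."""
    A = (v3 - v2) - (v0 - v1)
    B = (v0 - v1) - A
    C = v2 - v0
    D = v1
    return D + frac * (C + frac * (B + frac * A))

# Endpoint properties of the spline segment:
assert cubic_polate(1, 2, 3, 4, 0.0) == 2
assert cubic_polate(1, 2, 3, 4, 1.0) == 3
# Collinear samples should interpolate linearly in between:
assert cubic_polate(2, 4, 6, 8, 0.5) == 5.0
```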
{ "domain": "codereview.stackexchange", "id": 45318, "tags": "algorithm, c, reinventing-the-wheel, image, numerical-methods" }
Correspondence between complexity classes and logic
Question: I took a class once on Computability and Logic. The material included a correlation between complexity / computability classes (R, RE, co-RE, P, NP, Logspace, ...) and Logics (Predicate calculus, first order logic, ...). The correlation included several results in one field that were obtained using techniques from the other field. It was conjectured that P != NP could be attacked as a problem in Logic (by projecting the problem from the domain of complexity classes to logics). Is there a good summary of these techniques and results? Answer: Neil Immerman produced a beautiful diagram that provides at-a-glance correspondences between complexity classes and logics interpreted by finite models. It's on the cover of his book, and also at the bottom of his web page here: http://www.cs.umass.edu/~immerman/
{ "domain": "cstheory.stackexchange", "id": 97, "tags": "cc.complexity-theory, lo.logic, computability" }
The relationship between time, relativity and entropy
Question: I came across a discussion about the nature of time and whether or not time is an illusion on a physics forum. I'm not so much interested in the philosophical issues regarding time, but the following, which is more related to physics, is something that confuses me: Is our world a three dimensional world that keeps changing or is it a four dimensional world that remains static? Relativity seems to say that the world is actually a four dimensional static object, because it is difficult to imagine how the world could be a three dimensional changing world in a spacetime where simultaneity is relative. On the other hand, statistical mechanics does seem to imply that the world is a three dimensional changing world because entropy increases. The laws of physics are invariant under time reflection, so if the world is a four dimensional static one, why should an arbitrary solution to the laws of physics be the kind of solution where entropy increases in one direction of time? I once read somewhere that it has something to do with the initial conditions of our universe. This clears things up a little but I'm still confused. Answer: There is an ongoing philosophical argument about this between the presentists (only the present world exists) and the four-dimensionalists (things elsewhere in time exist just as things do elsewhere in space). I personally think relativity theory and especially nontrivial wormhole spacetimes make presentism untenable, but one can make presentist arguments even in time travel situations (see Keller, S., & Nelson, M. (2001). Presentists should believe in time-travel. Australasian Journal of Philosophy, 79(3), 333-345. ) That entropy increases does not necessarily imply presentism. We could imagine a universe that had a preferred direction in space, for example that the CMB was hotter on one side than the other and that travel in that direction would make things warmer. 
That would not be particularly weird, it is just initial conditions making things anisotropic. Similarly we may argue that it is just initial conditions that give us a time direction where entropy increases that is also roughly orthogonal to spacetime slices where matter is roughly static. A universe where half had high entropy and the other half low entropy from the start would in a sense have a time direction defined by entropy that made an angle to the time direction defined by matter being roughly static. But the actual source of time's arrow is still an issue. It is just separate from whether there is just a present. The arrow issue is about the nature of time and entropy, the present issue is about the metaphysics about what stuff actually exists (a full 4-manifold, or an evolving 3-manifold).
{ "domain": "physics.stackexchange", "id": 58397, "tags": "cosmology, spacetime, entropy, time, relativity" }
Dual arm UR10 connection with universal robot stack
Question: Hey, I'm working with two UR10 arms. I can easily set up a connection with the universal robot stack from github using the driver.py with one robot arm. I'm using the ipa320 fork (github.com/ipa320/universal_robot) with groovy. I'm sure both arms are reachable over the network and I can set up a connection with each arm individually. My problem is when I want to establish the connections simultaneously: when I'm connecting to the second UR10, the connection attempt fails. I'm not really familiar with TCP and sockets, but in my opinion the connection fails because the UR10 box tries to establish the connection only through port 50001. I know that this python script tells the UR10 box to set up a connection. But I don't know how to tell the box over which port the connection should be established. In the driver.py there is a variable defined, REVERSE_PORT = 50001, but it is not used in the code. Does somebody have deeper insights into the connection process, or has anybody already accomplished setting up a connection with two UR10's and MoveIt? Any help is appreciated. Thanks, Matthias. Originally posted by Equanox on ROS Answers with karma: 36 on 2014-02-17 Post score: 0 Original comments Comment by gvdhoorn on 2014-02-17: Is this with a single or dual controller setup? Comment by Equanox on 2014-02-18: dual controller Answer: All right, I found the answer. There's a file named "prog" in the ur_driver package. You can change the port for the connection between arm and PC in line 116, with the second argument of the command socket_open(HOSTNAME, 50001). You also need to change the port in line 642 in the driver.py. Right now there is no solution added for doing this during runtime. Cheers, Matthias Originally posted by Equanox with karma: 36 on 2014-02-17 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by gvdhoorn on 2014-04-16: See also https://github.com/ros-industrial/universal_robot/issues/39 for a discussion about this. 
Comment by yyf on 2017-08-17: I met a strange problem. I want to set up a connection between a PC and two UR5 arms, but the "follow_joint_trajectory" topic cannot be created. The error displayed is "Action client not connected:left_arm_controller/follow_joint_trajectory". Comment by xioaheqiufeng on 2019-11-15: Hello, I have some questions. I have created two UR10 configuration files in MoveIt, but I can't control the two UR robots at the same time. Do you control the two robots through one robot controller or through separate controllers? If the latter, how do you tell the two robots to perform their tasks? Can you tell me? Thank you very much
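The fix above boils down to giving each arm its own reverse port. A minimal Python sketch of the idea (not the actual driver code; it binds ephemeral ports so it runs anywhere, whereas the real setup would hard-code 50001 and 50002 in "prog" and driver.py):

```python
import socket

def reverse_listener(port=0):
    # driver.py side: one listening socket per arm. Port 0 lets the OS
    # pick a free port for this sketch; the real edit would use 50001
    # for the first arm and 50002 for the second.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    return srv

left_srv = reverse_listener()   # would listen on 50001 for the left arm
right_srv = reverse_listener()  # would listen on 50002 for the right arm

# The 'prog' script on each controller calls socket_open(HOSTNAME, PORT);
# simulated here by plain client connections to the two distinct ports.
left_port = left_srv.getsockname()[1]
right_port = right_srv.getsockname()[1]
left_arm = socket.create_connection(("127.0.0.1", left_port))
right_arm = socket.create_connection(("127.0.0.1", right_port))

# Because the ports differ, both reverse connections can be accepted
# independently instead of colliding on a single port.
conn_l, _ = left_srv.accept()
conn_r, _ = right_srv.accept()
```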
{ "domain": "robotics.stackexchange", "id": 16988, "tags": "ros, ur10, universal-robots" }
Mars Lander pygame
Question: Here is my code about our final project in uni called 'Mars Lander'. The thing is that my code seems to be all over the place (more specifically in the 'main' function at the end of the code...) therefore I would like to get some help with putting things in place! It's my first time making a game using the 'pygame' library and this one was definitely worth the effort! import pygame import sys from random import uniform, randint # used for the random starting velocity of the lander, random clock time etc. from time import clock # to show the time elapsed import math # used to calculate the magnitude of the gravity force applied on the lander WIDTH = 1200 # width of the game window HEIGHT = 750 # height of the game window FPS = 20 # frames per second pause = False # variable which is used to determine whether the game is paused or not # Initialise pygame pygame.init() screen = pygame.display.set_mode((WIDTH, HEIGHT)) clock_game = pygame.time.Clock() pygame.font.init() # you have to call this at the start if you want to use this module. 
myfont = pygame.font.SysFont('Comic Sans MS', 15) alert_large = pygame.font.SysFont("Comic Sans MS", 18) large_text = pygame.font.SysFont("Comic Sans MS", 50) class Background(pygame.sprite.Sprite): # class for the background image def __init__(self, image_file, location): pygame.sprite.Sprite.__init__(self) # call Sprite initializer self.image = pygame.image.load(image_file) self.rect = self.image.get_rect() self.rect.left, self.rect.top = location # the location of the image should be inputted as a tuple (x,y) # where x is the left side position of the image whereas y is the top side position # obstacles landing pad meteors classes class Lander(pygame.sprite.Sprite): # class for the lander image def __init__(self, image_file, location): pygame.sprite.Sprite.__init__(self) self.image = pygame.image.load(image_file) self.rect = self.image.get_rect() self.rect.left, self.rect.top = location # the location of the image should be inputted as a tuple (x,y) # where x is the left side position of the image whereas y is the top side position self.rot_image = self.image self.angle = 0 self.veloc_y = uniform(0.0, 1.0) self.veloc_x = uniform(-1.0, 1.0) self.fuel = 500 self.altitude = 0 self.damage = 0 def free_fall(self): self.rect.y += self.veloc_y self.rect.x += self.veloc_x self.veloc_y += 0.1 def reset_stats(self): self.rect.top = 0 self.veloc_y = uniform(0.0, 1.0) self.veloc_x = uniform(-1.0, 1.0) self.fuel = 500 self.angle = 0 self.damage = 0 self.rot_image = pygame.transform.rotate(self.image, self.angle) def check_boundaries(self): if self.rect.top < 0: self.rect.top = 0 self.veloc_y = uniform(0.0, 1.0) if self.rect.right < 0: self.rect.left = WIDTH if self.rect.left > WIDTH: self.rect.right = 0 if self.rect.bottom > HEIGHT: self.reset_stats() self.rect.left = randint(0, 1123) return True else: return False def get_fuel(self): return self.fuel def burn_fuel(self): # decreases the fuel when 'space' key is pressed self.fuel -= 5 def start_engine(self): self.burn_fuel() 
self.veloc_x = self.veloc_x + 0.33 * math.sin(math.radians(-self.angle)) self.veloc_y = self.veloc_y - 0.33 * math.cos(math.radians(self.angle)) def rotate_left(self): self.angle += 1 % 360 self.rot_image = pygame.transform.rotate(self.image, self.angle) def rotate_right(self): self.angle -= 1 % 360 self.rot_image = pygame.transform.rotate(self.image, self.angle) def to_ground(self): self.altitude = 1000 - self.rect.top*1.436 return self.altitude def get_damage(self): return self.damage def check_landing(self, pad): check_velocity_y = [True if self.veloc_y < 5 else False] check_velocity_x = [True if -5 < self.veloc_x < 5 else False] check_angle = [True if -7 <= self.angle <= 7 else False] check_above_pad = [True if (self.rect.left > pad.rect.left and self.rect.right < pad.rect.right) else False] check_touch = [True if (self.rect.bottom == pad.rect.top) else False] if check_above_pad[0] and check_angle[0] and check_velocity_x[0] and check_velocity_y[0] and check_touch[0]: return True else: return False class EngineThrust(pygame.sprite.Sprite): # class for the thrust image def __init__(self, image_file, location): pygame.sprite.Sprite.__init__(self) # call Sprite initializer self.image = pygame.image.load(image_file) self.rot_image = self.image self.rect = self.image.get_rect() self.rect.left, self.rect.top = location # the location of the image should be inputted as a tuple (x,y) # where x is the left side position of the image whereas y is the top side position self.thrst_angle = lndr.angle def rotate_thrust(self): self.rot_image = pygame.transform.rotate(self.image, self.thrst_angle) class LandingPad(pygame.sprite.Sprite): # class for the landing pad image def __init__(self, image_file, location): pygame.sprite.Sprite.__init__(self) # call Sprite initializer self.image = pygame.image.load(image_file) self.rect = self.image.get_rect() self.rect.left, self.rect.top = location # the location of the image should be inputted as a tuple (x,y) # where x is the left side 
position of the image whereas y is the top side position class GameScore: # class for the score of the game def __init__(self): self.score = 0 def successful_land(self): self.score += 50 def get_score(self): # Returns game score return self.score class Lives: # class for the lives of the player def __init__(self, lives): self.lives = lives # Holds lives left def crashed(self): # Decrement lives by 1 self.lives -= 1 def get_lives(self): # Return lives number return self.lives def game_over(self): # Check if there are no lives left return self.lives == 0 class SysFailure: # class for the lander system errors def __init__(self): self.random_alert = 0 # Will carry alert time self.random_key = 0 # Will holds key value def get_alert(self): # Set a new alert time and return it self.random_alert = randint(int(clock()+5), int(clock() + 15)) return self.random_alert def get_key(self): # Randomize and return key value self.random_key = randint(1, 3) return self.random_key class Obstacle(pygame.sprite.Sprite): # class for the obstacle images def __init__(self, image_file, location): pygame.sprite.Sprite.__init__(self) # call Sprite initializer self.image = pygame.image.load(image_file) self.rect = self.image.get_rect() self.rect.left, self.rect.top = location # the location of the image should be inputted as a tuple (x,y) # where x is the left side position of the image whereas y is the top side position self.destroyed = False def get_status(self): # Return the status of the obstacle return self.destroyed def obstacle_collision(self, lander): # Increment lander damage by 10 % if the meteor collides with the lander if lander.rect.colliderect(self.rect): lander.damage += 10 return True else: return False class Meteor(pygame.sprite.Sprite): # Class for the meteor images def __init__(self, image_file, location): pygame.sprite.Sprite.__init__(self) # Call Sprite initializer self.image = pygame.image.load(image_file) self.rect = self.image.get_rect() self.rect.left, self.rect.bottom 
= location # The location of the image should be inputted as a tuple (x,y) # where x is the left side position of the image whereas y is the top side position self.speed_y = uniform(5, 10) self.speed_x = uniform(-2, 2) self.destroyed = False def storm_fall(self): # Set y and x-axis speed of the meteor self.rect.x += self.speed_x self.rect.y += self.speed_y def meteor_collision(self, lander): # Increment lander damage by 25 % if the meteor collides with the lander if lander.rect.colliderect(self.rect): lander.damage += 25 return True else: return False def get_status(self): # Return the status of the meteor return self.destroyed def reset_stats(self): # Set the bottom of the sprite to its initial value self.rect.bottom = 0 class Storm: # Class for the meteor storms def __init__(self): self.random_storm = 0 # Holds random storm time def storm_time(self): # Randomize and return storm time self.random_storm = randint(int(clock()+3), int(clock() + 12)) return self.random_storm def resume(): # Resume the game global pause pause = False def paused(): # Pause the game global game_status crash_msg = large_text.render('You Have Crashed!', False, (255, 0, 0)) screen.blit(crash_msg, (420, 300)) # Display crash message in the middle of the screen while pause: for event in pygame.event.get(): if event.type == pygame.QUIT: # Quit the game if the 'X' button is clicked sys.exit() if event.type == pygame.KEYDOWN: # Wait for a key to be pressed and if so resumes the game resume() pygame.display.update() clock_game.tick(FPS) obstacles = pygame.sprite.Group() # Create obstacle sprite group meteors = pygame.sprite.Group() # Create meteor sprite group bckgd = Background('mars_background_instr.png', [0, 0]) lndr = Lander('lander.png', [randint(0, 1123), 0]) pad_1 = LandingPad('pad.png', [randint(858, 1042), 732]) pad_2 = LandingPad('pad_tall.png', [randint(458, 700), 620]) pad_3 = LandingPad('pad.png', [randint(0, 300), 650]) """ Create 5 obstacles each being placed on a fixed location on 
the background image! """ obstacle_1 = Obstacle('pipe_ramp_NE.png', [90, 540]) obstacle_2 = Obstacle('building_dome.png', [420, 575]) obstacle_3 = Obstacle('satellite_SW.png', [1150, 435]) obstacle_4 = Obstacle('rocks_ore_SW.png', [1080, 620]) obstacle_5 = Obstacle('building_station_SW.png', [850, 640]) # Add to the sprite group 'obstacles' obstacles.add(obstacle_1, obstacle_2, obstacle_3, obstacle_4, obstacle_5) """ Create 10 meteors using the Meteor class which are placed at random x-axis locations starting with the bottom of the image rectangle lying at 0 on the y-axis! """ meteor1 = Meteor('spaceMeteors_1.png', [randint(300, 900), 0]) meteor2 = Meteor('spaceMeteors_2.png', [randint(300, 900), 0]) meteor3 = Meteor('spaceMeteors_3.png', [randint(300, 900), 0]) meteor4 = Meteor('spaceMeteors_4.png', [randint(300, 900), 0]) meteor5 = Meteor('spaceMeteors_1.png', [randint(300, 900), 0]) meteor6 = Meteor('spaceMeteors_2.png', [randint(300, 900), 0]) meteor7 = Meteor('spaceMeteors_1.png', [randint(300, 900), 0]) meteor8 = Meteor('spaceMeteors_4.png', [randint(300, 900), 0]) meteor9 = Meteor('spaceMeteors_1.png', [randint(300, 900), 0]) meteor10 = Meteor('spaceMeteors_3.png', [randint(300, 900), 0]) # Add to the sprite group assigned to the 'meteors' variable meteors.add(meteor1, meteor2, meteor3, meteor4, meteor5, meteor6, meteor7, meteor8, meteor9, meteor10) storm = Storm() # Storm variable lives_left = Lives(3) # Each time the player starts with 3 lives lander_score = GameScore() # Holds the score of the game alert_signal = SysFailure() # Holds the lander system failure causes game_status = True # Holds the status of the game def main(): # The main function which runs the game global game_status, pause # change name random_signal = alert_signal.get_alert() # Holds the randomized alert signal time random_key = alert_signal.get_key() # Carries the randomized key used to decide which control failure will occur # during the alert signal random_storm = storm.storm_time() 
# Random meteor storm time meteor_storm = False # Set to True whenever a storm should occur meteor_shower = False # Set to True whenever a storm should occur print(random_storm) meteor_number = randint(1, 10) # Determines the number of meteors the storm will contain print(meteor_number) while game_status: # main game loop clock_game.tick(FPS) screen.fill([255, 255, 255]) # Fill the empty spaces with white color screen.blit(bckgd.image, bckgd.rect) # Place the background image screen.blit(pad_1.image, pad_1.rect) # Put the first landing pad on the background screen.blit(pad_2.image, pad_2.rect) # Put the second landing pad on the background screen.blit(pad_3.image, pad_3.rect) # Put the last landing pad on the background for obstacle in obstacles: # draw every one of the obstacles # if a collision occurs the obstacle gets destroyed and it is no longer shown if not obstacle.get_status(): screen.blit(obstacle.image, obstacle.rect) if obstacle.obstacle_collision(lndr): obstacle.destroyed = True # Waits for an event for event in pygame.event.get(): # If the user clicks the 'X' button on the window it quits the program if event.type == pygame.QUIT: sys.exit() pressed_key = pygame.key.get_pressed() # Take pressed key value if not meteor_storm: # As soon as the clock passes the random storm time it causes meteor rain if clock() > random_storm: meteor_storm = True meteor_shower = True if meteor_shower: delay = 0 # Each meteor is drawn with 1 second delay count = 0 # Counts the meteors number for meteor in meteors: if count < meteor_number: delay += 1 if clock() > random_storm + delay: if not meteor.get_status(): # Draw every one of the meteors # if a collision occurs the meteor gets destroyed and it is no longer shown meteor.storm_fall() # Give x-axis and y-axis velocity to the meteors screen.blit(meteor.image, meteor.rect) if meteor.meteor_collision(lndr): meteor.destroyed = True count += 1 if pressed_key[pygame.K_ESCAPE]: # Stop game if the 'Esc' button is pressed 
game_status = False elif lives_left.game_over(): # Terminate program if the player has no lives left game_status = False elif lndr.get_fuel() <= 0: # Remove lander controls if it is out of fuel screen.blit(lndr.rot_image, lndr.rect) else: if not random_signal < clock() < random_signal+2: # While the clock is not in the 2 sec alert time if lndr.get_damage() < 100: # While the lander hasn't sustained 100% damage if pressed_key[pygame.K_SPACE]: # Show thrust image when 'space' is pressed thrst = EngineThrust('thrust.png', [lndr.rect.left+31, lndr.rect.bottom-10]) # Create thrust # sprite lndr.start_engine() # Call 'start engine' function (Lander) thrst.rotate_thrust() # Call 'rotate_engine' which rotates the thrust along with the lander screen.blit(thrst.rot_image, thrst.rect) pygame.display.update() if pressed_key[pygame.K_LEFT]: # Rotate lander anticlockwise when 'left' is pressed lndr.rotate_left() if pressed_key[pygame.K_RIGHT]: # Rotate lander clockwise when 'left' is pressed lndr.rotate_right() if lndr.check_landing(pad_1) or lndr.check_landing(pad_2) or lndr.check_landing(pad_3): # Call # 'check_landing' method on each pad sprite which checks whether the lander has landed on # the landing pad lander_score.successful_land() # Increment score with 50 pts for meteor in meteors: meteor.destroyed = False meteor.reset_stats() random_signal = alert_signal.get_alert() random_key = alert_signal.get_key() random_storm = storm.storm_time() meteor_number = randint(1, 10) print(random_storm) print(meteor_number) meteor_shower = False meteor_storm = False for obstacle in obstacles: obstacle.destroyed = False lndr.reset_stats() else: lndr.damage = 100 # Stop lander damage at 100 % else: alert_msg = alert_large.render('*ALERT*', False, (0, 0, 255)) screen.blit(alert_msg, (190, 80)) # Display alert message if random_key == 1: if pressed_key[pygame.K_SPACE]: # Show thrust image when 'space' is pressed thrst = EngineThrust('thrust.png', [lndr.rect.left + 31, lndr.rect.bottom - 
10]) lndr.start_engine() thrst.rotate_thrust() screen.blit(thrst.rot_image, thrst.rect) pygame.display.update() if pressed_key[pygame.K_LEFT]: # Rotate lander anticlockwise when 'left' is pressed lndr.rotate_left() elif random_key == 2: if pressed_key[pygame.K_LEFT]: # Rotate lander anticlockwise when 'left' is pressed lndr.rotate_left() if pressed_key[pygame.K_RIGHT]: # Rotate lander clockwise when 'right' is pressed lndr.rotate_right() else: if pressed_key[pygame.K_SPACE]: # Show thrust image when 'space' is pressed thrst = EngineThrust('thrust.png', [lndr.rect.left + 31, lndr.rect.bottom - 10]) lndr.start_engine() thrst.rotate_thrust() screen.blit(thrst.rot_image, thrst.rect) pygame.display.update() if pressed_key[pygame.K_RIGHT]: # Rotate lander clockwise when 'right' is pressed lndr.rotate_right() screen.blit(lndr.rot_image, lndr.rect) time_passed = myfont.render('{:.1f} s'.format(clock()), False, (255, 0, 0)) screen.blit(time_passed, (72, 10)) # Display clock in seconds velocity_y = myfont.render('{:.1f} m/s'.format(lndr.veloc_y), False, (255, 0, 0)) screen.blit(velocity_y, (280, 56)) # Display y-axis velocity (downward, meters per second) velocity_x = myfont.render('{:.1f} m/s'.format(lndr.veloc_x), False, (255, 0, 0)) screen.blit(velocity_x, (280, 33)) # Display x-axis velocity (sideways, meters per second) fuel_remaining = myfont.render('{:d} kg'.format(lndr.fuel), False, (255, 0, 0)) screen.blit(fuel_remaining, (72, 33)) # Display remaining fuel in kg altitude = myfont.render('{:.0f} m'.format(lndr.to_ground()), False, (255, 0, 0)) screen.blit(altitude, (280, 10)) # Display altitude in meters lander_damage = myfont.render('{} %'.format(lndr.get_damage()), False, (255, 0, 0)) screen.blit(lander_damage, (95, 56)) # Display damage suffered by the mars lander game_score = myfont.render('{:.0f} pts'.format(lander_score.get_score()), False, (255, 0, 0)) screen.blit(game_score, (77, 82)) # Display altitude in meters lndr.free_fall() # Call 'free_fall' method in 
class 'Lander' if lndr.check_boundaries(): # Call 'check_boundaries' method located in 'Lander' class for meteor in meteors: meteor.destroyed = False meteor.reset_stats() random_signal = alert_signal.get_alert() # Get a new random time for the alert random_key = alert_signal.get_key() # Get a new random key for the lander control failure random_storm = storm.storm_time() # Get a new random time for the storm meteor_number = randint(1, 10) # Get a new random number for the meteors lives_left.crashed() # Reduce lives with 1 print(random_storm) print(meteor_number) meteor_shower = False meteor_storm = False for obstacle in obstacles: # Reset all obstacles and make them visible obstacle.destroyed = False pause = True # Set 'pause' to True so the game pauses when 'paused' method is called paused() # Call 'paused' pygame.display.update() # Refresh (update) display pygame.quit() # quit game if game_status = False main() Answer: You try to extract behaviour into classes, which is a good idea, but you failed to recognise patterns that could be abstracted by a single class and created way too many classes for the same thing: a Sprite set at certain coordinates; heck, even your LandingPad and Background classes are exactly the same, which should have raised a red flag. The same goes for your Lives and GameScore classes, which could each be a simple integer, as they add nothing more. You should also avoid globals and code at top level; they are better placed in the __init__ of some class or other. Thus I'd create a MarsLander class to hold that and the main function. This would also help with refactoring. Lastly, you perform drawing, sprite updates and collision detection manually, but pygame allows you to automate this through groups. See for instance the spritecollide function or the Group.draw method. 
Proposed improvements follows: import math from time import clock from random import uniform, randint, choice import pygame def init(): pygame.init() pygame.font.init() class MarsLander: def __init__(self, fps=20, width=1200, height=750): self.screen = pygame.display.set_mode((width, height)) self.clock = pygame.time.Clock() self.FPS = fps self.regular_font = pygame.font.SysFont('Comic Sans MS', 15) self.alert_font = pygame.font.SysFont('Comic Sans MS', 18) self.large_font = pygame.font.SysFont('Comic Sans MS', 50) self.score = 0 self.lives = 3 self.obstacles = pygame.sprite.Group() self.meteors = pygame.sprite.Group() self.landing_pads = pygame.sprite.Group() self.background = Sprite('mars_background_instr.png', 0, 0) self.lander = Lander(width) self.height = height # Create sprites for landing pads and add them to the pads group # TODO have coordinates dependent on actual width and height Sprite('pad.png', 732, randint(858, 1042)).add(self.landing_pads) Sprite('pad_tall.png', 620, randint(458, 700)).add(self.landing_pads) Sprite('pad.png', 650, randint(0, 300)).add(self.landing_pads) self.reset_obstacles() self.create_new_storm() self.create_new_alert() @property def game_over(self): return self.lives < 1 def reset_obstacles(self): """Create obstacles at a fixed location and add the to the obstacles group""" # TODO have coordinates dependent on actual width and height self.obstacles.empty() Sprite('pipe_ramp_NE.png', 540, 90).add(self.obstacles) Sprite('building_dome.png', 575, 420).add(self.obstacles) Sprite('satellite_SW.png', 435, 1150).add(self.obstacles) Sprite('rocks_ore_SW.png', 620, 1080).add(self.obstacles) Sprite('building_station_SW.png', 640, 850).add(self.obstacles) def create_new_storm(self, number_of_images=4): """Create meteors and add the to the meteors group""" # TODO have coordinates dependent on actual width and height now = int(clock()) self.random_storm = randint(now + 3, now + 12) self.meteors.empty() for i in range(randint(1, 10)): 
image_name = 'spaceMeteors_{}.png'.format(randint(1, number_of_images)) Meteor(image_name, -2 * i * self.FPS, randint(300, 900)).add(self.meteors) def create_new_alert(self): self.random_alert = randint(int(clock() + 5), int(clock() + 15)) self.alert_key = choice((pygame.K_SPACE, pygame.K_LEFT, pygame.K_RIGHT)) def draw_text(self, message, position, color=(255, 0, 0)): text = self.regular_font.render(message, False, color) self.screen.blit(text, position) def run(self): meteor_storm = False # Set to True whenever a storm should occur while not self.game_over: self.clock.tick(self.FPS) # If the user clicks the 'X' button on the window it quits the program if any(event.type == pygame.QUIT for event in pygame.event.get()): return self.screen.fill([255, 255, 255]) # Fill the empty spaces with white color self.screen.blit(self.background.image, self.background.rect) # Place the background image self.landing_pads.draw(self.screen) self.obstacles.draw(self.screen) # Check for collisions with obstacles and remove hit ones obstacles_hit = pygame.sprite.spritecollide(self.lander, self.obstacles, True) self.lander.damage += 10 * len(obstacles_hit) pressed_key = pygame.key.get_pressed() # Take pressed key value if not meteor_storm and clock() > self.random_storm: # As soon as the clock passes the random storm time it causes meteor rain meteor_storm = True if meteor_storm: self.meteors.update() self.meteors.draw(self.screen) # Check for collisions with meteors and remove hit ones meteors_hit = pygame.sprite.spritecollide(self.lander, self.meteors, True) self.lander.damage += 25 * len(meteors_hit) if pressed_key[pygame.K_ESCAPE]: # Stop game if the 'Esc' button is pressed return if self.random_alert < clock() < self.random_alert + 2: alert_msg = self.large_font.render('*ALERT*', False, (0, 0, 255)) self.screen.blit(alert_msg, (190, 80)) thrust = self.lander.handle_inputs(pressed_key, self.alert_key) else: thrust = self.lander.handle_inputs(pressed_key) if thrust: 
self.screen.blit(thrust.rot_image, thrust.rect) self.screen.blit(self.lander.rot_image, self.lander.rect) self.draw_text('{:.1f} s'.format(clock()), (72, 10)) self.draw_text('{:.1f} m/s'.format(self.lander.veloc_y), (280, 56)) self.draw_text('{:.1f} m/s'.format(self.lander.veloc_x), (280, 33)) self.draw_text('{:d} kg'.format(self.lander.fuel), (72, 33)) self.draw_text('{:.0f} m'.format(self.lander.altitude), (280, 10)) self.draw_text('{} %'.format(self.lander.damage), (95, 56)) self.draw_text('{:.0f} pts'.format(self.score), (77, 82)) self.lander.free_fall() pygame.display.update() landing_pad_reached = pygame.sprite.spritecollideany(self.lander, self.landing_pads) if landing_pad_reached or self.lander.rect.bottom > self.height: self.create_new_alert() self.create_new_storm() self.reset_obstacles() meteor_storm = False if landing_pad_reached and self.lander.has_landing_position(): self.score += 50 else: self.lives -= 1 should_exit = self.show_crash() if should_exit: return self.lander.reset_stats() def show_crash(self): """Display crash message in the middle of the screen and wait for a key press""" crash_msg = self.large_font.render('You Have Crashed!', False, (255, 0, 0)) self.screen.blit(crash_msg, (420, 300)) while True: for event in pygame.event.get(): if event.type == pygame.QUIT: # Quit the game if the 'X' button is clicked return True if event.type == pygame.KEYDOWN: # Wait for a key to be pressed and if so resumes the game return False pygame.display.update() self.clock.tick(self.FPS) class Sprite(pygame.sprite.Sprite): def __init__(self, image_file, top, left): super().__init__() self.image = pygame.image.load(image_file) self.rect = self.image.get_rect() self.rect.top = top self.rect.left = left class EngineThrust(Sprite): # class for the thrust image def __init__(self, lander_rect, lander_angle): super().__init__('thrust.png', lander_rect.bottom - 10, lander_rect.left + 31) self.rot_image = pygame.transform.rotate(self.image, lander_angle) class 
Meteor(Sprite): def __init__(self, image_file, top, left): super().__init__(image_file, top, left) self.speed_y = uniform(5, 10) self.speed_x = uniform(-2, 2) def update(self): self.rect.x += self.speed_x self.rect.y += self.speed_y class Lander(Sprite): def __init__(self, width): super().__init__('lander.png', 0, 0) self.width = width self.reset_stats() def reset_stats(self): self.rect.top = 0 self.rect.left = randint(0, self.width - self.rect.width) self.veloc_y = uniform(0.0, 1.0) self.veloc_x = uniform(-1.0, 1.0) self.fuel = 500 self.angle = 0 self.damage = 0 self.rot_image = self.image def free_fall(self): self.rect.y += self.veloc_y self.rect.x += self.veloc_x self.veloc_y += 0.1 if self.rect.top < 0: self.rect.top = 0 self.veloc_y = uniform(0.0, 1.0) if self.rect.right < 0: self.rect.left = self.width if self.rect.left > self.width: self.rect.right = 0 def start_engine(self): self.fuel -= 5 self.veloc_x = self.veloc_x + 0.33 * math.sin(math.radians(-self.angle)) self.veloc_y = self.veloc_y - 0.33 * math.cos(math.radians(self.angle)) @property def altitude(self): return 1000 - self.rect.top * 1.436 @property def can_land(self): return self.fuel > 0 and self.damage < 100 def has_landing_position(self): return self.can_land and (self.veloc_y < 5) and (-5 < self.veloc_x < 5) and (-7 <= self.angle <= 7) def handle_inputs(self, pressed_key, alert_key=None): if not self.can_land: return thrust = None rotated = False if alert_key != pygame.K_SPACE and pressed_key[pygame.K_SPACE]: # Show thrust image when 'space' is pressed thrust = EngineThrust(self.rect, self.angle) self.start_engine() if alert_key != pygame.K_LEFT and pressed_key[pygame.K_LEFT]: # Rotate lander anticlockwise when 'left' is pressed self.angle += 1 rotated = True if alert_key != pygame.K_RIGHT and pressed_key[pygame.K_RIGHT]: # Rotate lander clockwise when 'right' is pressed self.angle -= 1 rotated = True if rotated: self.angle %= 360 self.rot_image = pygame.transform.rotate(self.image, self.angle) 
return thrust if __name__ == '__main__': init() game = MarsLander() game.run() pygame.quit() You may have seen that I changed some constants into parameters with default values; this will allow you to improve the game's customization if you need to, for instance by integrating argparse. Other changes may include restarting hazards every once in a while (spritecollideany might be of some help to detect when every meteor has run off the background)
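Since the constructor now takes fps, width and height as parameters, wiring in argparse takes only a few lines. A hypothetical sketch (the flag names and help text are my own invention):

```python
import argparse

def parse_args(argv=None):
    # CLI front-end for the parameterized MarsLander(fps, width, height)
    # constructor; defaults mirror the constructor's own defaults.
    parser = argparse.ArgumentParser(description="Mars Lander")
    parser.add_argument("--fps", type=int, default=20, help="frames per second")
    parser.add_argument("--width", type=int, default=1200, help="window width")
    parser.add_argument("--height", type=int, default=750, help="window height")
    return parser.parse_args(argv)

# Passing an explicit argv list here just for demonstration;
# in the game you would call parse_args() and use sys.argv.
args = parse_args(["--fps", "30", "--width", "800"])
# game = MarsLander(fps=args.fps, width=args.width, height=args.height)
```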
{ "domain": "codereview.stackexchange", "id": 30289, "tags": "python, game, pygame" }
Ant colony optimization algorithm
Question: If I have an equation like $$f_n = x + y + z + a + b$$ and each variable has a discrete set of possible values, like $a = 0, 1, 3$, $b = 2, 4, 5$, etc., I want to find the global minimum. I used ant colony optimization (ACO) to solve the equation, but I am stuck on the heuristic information and how to compute these parameters: in the traveling salesman problem I saw $\eta$ = one over the distance between two cities, but here there is no relation between $x$ and $y$. Second, I want to make sure that when computing the pheromone trail update $\Delta\tau$, it is equal to $$\frac{f_n(\text{best answer in iteration } k)}{f_n(\text{worst answer in iteration } k)}$$ as I saw other equations in papers and got confused. I built a MATLAB model but it doesn't give the optimum point on every run. Thank you Answer: There is no simple answer for how to choose parameters. This is a heuristic, so there are no guarantees. Here are a few strategies that are widely used: Random search: Try many different parameters. In each trial, you randomly choose all parameters, solve the optimization problem using those parameters, and then remember the solution. Take the best solution found. Grid search: Systematically search over the space of possible parameters. For instance, if there are two parameters $\alpha$ and $\beta$, you might let $\alpha$ range over the values $0.1$, $1$, $10$, and $100$, and let $\beta$ range over a similar set, and try all combinations; then take the best solution found. Try smaller problems: Construct a smaller problem (e.g., synthetically). Find the best parameter settings for it (by applying one of the prior two methods). Then, use those parameter settings on your larger problem, and hope they'll be good for your larger problem, too.
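For a small separable objective like the one in the question, exhaustive grid search over the discrete domains is trivial and gives a baseline to check the ACO result against. A sketch in Python (the domains for a and b come from the question; the domains for x, y, and z are made up for illustration):

```python
from itertools import product

# Discrete domains for each variable. Only a and b are given in the
# question; x, y, z are illustrative placeholders.
domains = {
    "x": [0, 1, 2],
    "y": [1, 3],
    "z": [0, 2],
    "a": [0, 1, 3],
    "b": [2, 4, 5],
}

def f(x, y, z, a, b):
    # The separable objective from the question.
    return x + y + z + a + b

# Enumerate every combination and keep the one with the smallest f.
best = min(product(*domains.values()), key=lambda v: f(*v))
best_value = f(*best)
```

Because the objective is a plain sum, the minimum is reached by taking each variable at its smallest value, which the search confirms.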
{ "domain": "cs.stackexchange", "id": 9407, "tags": "algorithms, graphs, optimization" }
problem of adding the range_sensor_layer to costmap_2D plugins
Question: I am new to the ROS system. I built a robot platform using ROS and I use move_base in my system. I have a laser scanner and some ultrasonic sensors, but I have a problem using the ultrasonic sensors: I don't know how to add the range_sensor_layer as a costmap plugin. I tried just adding the ultrasonic layer in costmap_common_params.yaml: ultrasonic_layer: enabled: true max_obstacle_height: 0.4 origin_z: 0.0 z_resolution: 0.2 z_voxels: 2 unknown_threshold: 15 mark_threshold: 0 combination_method: 1 track_unknown_space: true #true needed for disabling globalpath planning through unknown space obstacle_range: 1.0 raytrace_range: 3.0 publish_voxel_map: false observation_sources: ultrasonic ultrasonic: data_type: Range topic: ultrasonic marking: true clearing: true and in local_costmap_params.yaml: plugins: - {name: obstacle_layer, type: "costmap_2d::VoxelLayer"} - {name: ultrasonic, type: "range_sensor_layer::RangeSensorLayer"} - {name: inflation_layer, type: "costmap_2d::InflationLayer"} but when I run move_base I get an error saying "range_sensor_layer::RangeSensorLayer" is not supported by the plugin. I can get the range information using: $rostopic echo ultrasonic Did I miss something, or did I do something wrong? Can anybody tell me the correct way to use the range_sensor_layer as a plugin? Thanks a lot Originally posted by zink on ROS Answers with karma: 1 on 2016-08-10 Post score: 0 Original comments Comment by David Lu on 2016-08-21: Can you please post the actual error message that you got? Answer: My guess is your issue stems from how you defined your ultrasonic plugin. Try the following instead: ultrasonic: topics: ["/ultrasonic"] no_readings_timeout: 1.0 clear_on_max_reading: true Then also remove all of your "ultrasonic_layer:" definition to prevent confusion (unless you are using that plugin elsewhere). You only need to define a layer/plugin once. 
Also the RangeSensorLayer has no 'marking' or 'clearing' parameters (unlike ObstacleLayer) so you don't need those lines. Let us know if this fixes your problem. Originally posted by biglotusturtle with karma: 165 on 2017-09-20 This answer was ACCEPTED on the original site Post score: 0
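For reference, the two snippets from the answer can be combined into something like the following sketch (untested; the layer and topic names come from the question, and the plugin name in the plugins list must match the parameter namespace of the layer's configuration):

```yaml
# local_costmap_params.yaml
plugins:
  - {name: obstacle_layer,  type: "costmap_2d::VoxelLayer"}
  - {name: ultrasonic,      type: "range_sensor_layer::RangeSensorLayer"}
  - {name: inflation_layer, type: "costmap_2d::InflationLayer"}

# costmap_common_params.yaml -- parameters live under the same name
# used in the plugins list ("ultrasonic")
ultrasonic:
  topics: ["/ultrasonic"]     # one entry per sensor_msgs/Range topic
  no_readings_timeout: 1.0    # complain if no readings arrive for this long (s)
  clear_on_max_reading: true  # treat max-range readings as free space
```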
{ "domain": "robotics.stackexchange", "id": 25490, "tags": "navigation, move-base, costmap-2d" }
rosserial failed under Ubuntu 11.10 ROS electric
Question: I was trying rosserial with an Arduino-based IR range finder, on Ubuntu 11.10 with ROS Electric. The details are as below:

[INFO] [WallTime: 1320969440.580833] ROS Serial Python Node
[INFO] [WallTime: 1320969440.584986] Connected on /dev/ttyUSB0 at 57600 baud
[ERROR] [WallTime: 1320969442.700559] Creation of publisher failed: unpack requires a string argument of length 4
[ERROR] [WallTime: 1320969442.718333] Tried to publish before configured, topic id 125
[ERROR] [WallTime: 1320969442.777416] Tried to publish before configured, topic id 125
[ERROR] [WallTime: 1320969442.837355] Tried to publish before configured, topic id 125
[ERROR] [WallTime: 1320969442.896387] Tried to publish before configured, topic id 125

Does anybody know what the problem is? Thanks in advance. Originally posted by bona on ROS Answers with karma: 101 on 2011-11-10 Post score: 0 Original comments Comment by DrBot on 2012-12-10: I've tried updating to the latest Arduino 1.0.2 and the latest rosserial under Electric with the same error. Answer: This looks like you upgraded the computer to rosserial 0.3.0, but did not update your Arduino libraries and/or re-upload the code to the Arduino. We recently added the md5 checksum to each topic negotiation so that users are alerted about changes in messages (which cause difficult-to-debug errors). Originally posted by fergs with karma: 13902 on 2011-11-10 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 7260, "tags": "arduino, rosserial, ubuntu-oneiric, ubuntu" }
MoveIt multiple planning pipelines
Question: I recently found the pilz_industrial_motion_planner. It can plan typical industrial movements (PTP, LIN and CIRC). Following the MoveIt tutorials (the pilz_industrial_motion_planner tutorial) and the prbt_moveit_config, I've managed to work with ompl and pilz_industrial_motion_planner in RViz. As the tutorial explains, using the pilz_industrial_motion_planner with the Python/C++ MoveIt interface is as simple as setting the planner_id to "PTP", "LIN" or "CIRC". The main issue is when you try to work with ompl and pilz_industrial_motion_planner in the same script. Apparently you can only work with one planning pipeline at a time. When I try to work with the pilz_industrial_motion_planner having set ompl to default, I get the following warning: Cannot find planning configuration for group 'robot_1' using planner 'PTP'. Will use defaults instead. When I set pilz_industrial_motion_planner to default I get the same error the other way (I can only plan with "PTP", "LIN" or "CIRC"). Am I missing something, or is there a way or a workaround to work with both planning pipelines through the Python/C++ MoveIt interface? Originally posted by IgnacioUD on ROS Answers with karma: 47 on 2021-04-26 Post score: 1 Original comments Comment by xxiaoxiong on 2022-05-31: Hi, do you know how to set pilz as the default planner when there are multiple pipelines? In my case, the default planner is always ompl, even if I change the default planner to pilz in the move_group launch file. Answer: You need to be on Noetic or later, or build MoveIt from source, to use multiple planning pipelines in Melodic or earlier. You already did this (since it works in RViz), I just want to state it for the record. You need to use not only setPlannerId (LIN, PTP, CIRC) but also setPlanningPipelineId (pilz_industrial_motion_planner). If you find that a part of the tutorial is insufficient or an error message is not clear enough, please feel free to submit a pull request to improve it.
Originally posted by fvd with karma: 2180 on 2021-04-26 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by IgnacioUD on 2021-04-26: Hi fvd, as always thank you for your accurate and quick response. I will indeed submit a pull request; a simple example script can be self-explanatory for such things. Is there a way of changing the planning pipeline using the Python interface? Comment by fvd on 2021-04-26: Yes, soon. In the meantime you can use these changes.
{ "domain": "robotics.stackexchange", "id": 36367, "tags": "ros, moveit, ompl, move-group-interface" }
Who was the first to call the phase gates $P(\pi/2)$ and $P(\pi/4)$ the $S$ and $T$ gates, and were they motivated by generators of the modular group?
Question: Within the theory of quantum gates, a common pair of single-qubit phase gates are the $P(\pi/2)=S$ and $P(\pi/4)=T$ gates, with $$S= \begin{bmatrix} 1 & 0 \\ 0 & i \end{bmatrix},\:T = \begin{bmatrix} 1 & 0 \\ 0 & e^{i \frac{\pi}{4}} \end{bmatrix}.$$ We have, for example, $T^4=S^2=Z$. See this Wikipedia article. Within the theory of presentations of $\mathrm{PSL}(2, \mathbb Z)$ (the modular group), we have the two generators, $S$ and $T$, with: $$S : z\mapsto -\frac1z,\:T : z\mapsto z+1.$$ We have, for example, $S^2=(ST)^3=I$. See this Wikipedia article. But, in matrix form, these generators do not look like those above: $$S = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}, \: T = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}.$$ Are the $S$ and $T$ labels used in quantum computation just coincidentally the same as those used to describe generators of the modular group? Or is there some deeper relation that I'm not immediately seeing? What is the origin of $S$ and $T$ gates used in quantum computation? Answer: I believe Nielsen and Chuang were the first to use this particular notation. Previous work had referred to $S$ and $T$ as $\sigma_z^{1/2}$ and $\sigma_z^{1/4}$, respectively (Boykin et al. 1999). The use of $S$ may have been inspired by Deutsch's "S-matrix" (Deutsch 1989), though this was really a root-of-NOT gate. The use of $T$ may have been inspired by the transformation "T" matrix of a universal beam splitter (DiVincenzo 1989), which is equivalent to the modern $T$ matrix, up to a global phase, for certain parameter values.
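As a quick numerical sanity check of the identities quoted above ($T^4 = S^2 = Z$ for the gates; $S^2 = (ST)^3 = I$ in $\mathrm{PSL}(2,\mathbb Z)$, i.e. $-I$ as integer matrices), here is a short NumPy sketch:

```python
import numpy as np

# Single-qubit phase gates, as defined in the question
S = np.diag([1, 1j])
T = np.diag([1, np.exp(1j * np.pi / 4)])
Z = np.diag([1, -1])

# T^4 = S^2 = Z
assert np.allclose(np.linalg.matrix_power(T, 4), Z)
assert np.allclose(S @ S, Z)

# The modular-group generators, by contrast, satisfy S^2 = (ST)^3 = -I
# as matrices, which is the identity in PSL(2, Z)
S_mod = np.array([[0, -1], [1, 0]])
T_mod = np.array([[1, 1], [0, 1]])
assert np.allclose(S_mod @ S_mod, -np.eye(2))
assert np.allclose(np.linalg.matrix_power(S_mod @ T_mod, 3), -np.eye(2))

print("all identities hold")
```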
{ "domain": "quantumcomputing.stackexchange", "id": 3090, "tags": "quantum-gate, mathematics, terminology-and-notation" }
Which is better: Increase in collision frequency or increase collision energy
Question: What would have a greater effect on the rate of a reaction, increasing the frequency of collisions or increasing the collision energy of the particles? (assuming that using a catalyst is not an option) I got this from a past paper question: Explain in terms of collision frequency and collision energy how the rate would change if the temperature was increased, and which of these causes the greater effect. I know the first part of the question. I assume that the collision energy is more significant, which is why we use catalysts, but not sure how. Answer: Collision energy is what matters Both the frequency of collisions and the average energy of those collisions are increased at higher temperatures. But the frequency doesn't matter much for the rate of reaction because collisions are already very very common and are not the limiting factor in causing a reaction to happen. The distribution of energy in moving particles (and therefore the energy involved in collisions) follows a statistical distribution. Simplifying a bit (by ignoring the orientation of colliding molecules and a bunch of other probabilistic factors) the reason a reaction follows a collision is that the collision has enough energy to overcome some threshold energy barrier for a reaction to take place (simplifying again, collisions without enough energy just cause molecules to bounce off each other perhaps exchanging some kinetic or vibrational energy in the process). Only when the net energy involved is sufficient to, for example, break a bond in one of the molecules, does a reaction result. A lot of collisions are happening all the time and most don't have enough energy to cause a reaction to happen. In fact, if the population of collisions with enough energy is too low there will be no reaction at all no matter how many collisions there are and increasing the number will make no difference at all. 
As the temperature increases, more of the molecules are pushed into the "have sufficient energy" part of the distribution (which might be a very small proportion of all the molecules in the mixture). Then reactions start to happen. But the number of collisions is very large compared to the number leading to a reaction so it is the number of molecules with enough energy that dominates the result.
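The contrast can be made concrete with a back-of-the-envelope sketch: collision frequency scales roughly as $\sqrt{T}$, while the fraction of sufficiently energetic collisions follows a Boltzmann factor. The activation energy below (50 kJ/mol) is an arbitrary illustrative value:

```python
import math

R = 8.314          # gas constant, J/(mol K)
Ea = 50_000.0      # illustrative activation energy, J/mol
T1, T2 = 300.0, 310.0

# Fraction of collisions with energy >= Ea (simplified Boltzmann factor)
f1 = math.exp(-Ea / (R * T1))
f2 = math.exp(-Ea / (R * T2))

freq_ratio = math.sqrt(T2 / T1)   # collision frequency scales only as sqrt(T)
energy_ratio = f2 / f1            # energetic fraction grows exponentially

print(f"collision frequency: x{freq_ratio:.3f}")   # a ~2% increase
print(f"energetic fraction:  x{energy_ratio:.2f}") # nearly doubles
```

A 10 K rise barely changes how often molecules collide, but almost doubles the number of collisions energetic enough to react, which is exactly the answer's point.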
{ "domain": "chemistry.stackexchange", "id": 11916, "tags": "kinetics" }
Should the tip of the burette of an automatic titrator be immersed in the analyte?
Question: In all the tutorials I found online, the tip of the burette (containing the titrant - NaOH) was immersed into the analyte solution. My question is: shouldn't the tip be out of the solution? Isn't there a possibility for the analyte to crawl up the tip and react with the NaOH inside the burette that hasn't been dispensed yet?. Everyone does that... Even in the official tutorials of the company Answer: In classical titrimetry you would avoid dipping the burette into the sample. However, for autotitrators, the electronic dispenser tip (electronic burette) should be dipped in the sample. This is to avoid any error due to the a drop clinging to the tip. In manual titration one would wash it. Most autotitrators are like this design including Karl Fischer system. Your concern can be alleviated by the fact that you need pressure to cause a back flow of the sample into the tip of the dispensor.
{ "domain": "chemistry.stackexchange", "id": 11504, "tags": "acid-base, experimental-chemistry, analytical-chemistry, ph, titration" }
Converting a Dante Lab VCF file
Question: Is there an easy way to grab the rsids (hg38) of a VCF file? I know one of the tabs can contain RSIDs, but this file doesn't contain any. I have a software I created for mapping 23AndMe DTC testing to SNPedia, and I'm trying to integrate more DTC testing. (https://www.github.com/mentatpsi/osgenome) The user found this https://gist.github.com/KlausTrainer/26c2996cf32677e4c107bd8aaee67794 which is a script to use variant summary data to map to the RSIDs, but I'm hesitant to perform a SQL dump to find a table that contains chromosome, postion, and rsID columns. As a bonus question, 23AndMe reports all positive orientations. How would I go about converting strands to match this orientation? Does the VCF contain information that would allow this action autonomously? Answer: If your file doesn't contain the rsIDs, you can use a tool like Ensembl's VEP to annotate the file and get rsIDs as well. Of course, not all variants have an rsID so don't expect to find one for everything reported in your file. More importantly, if these are the raw, unfiltered data from commercial direct-to-consumer sequencing products like 23&me, you need to be aware that they are incredibly untrustworthy. This is why 23&me don't include them in their official report. They only show you the quality results that have been filtered. When working with the raw data, you are very likely to have completely bogus results and will need to have a full VCF file and filter on various quality criteria. I have looked at the raw 23&me data from 4 individuals, and all of them had extremely rare pathogenic variants reported that would basically have killed them long before they learned to walk. They were clearly false positives. I stress that I am not suggesting these companies are producing crap: none of these variants was reported in the official report of 23&me, they were filtered out. 
The bad calls are only present when you work on the raw, unfiltered data but if that is what you are working on, you need to be extremely careful.
{ "domain": "bioinformatics.stackexchange", "id": 2435, "tags": "vcf, gene, snp, genotyping, rsid" }
Work done by frictional force
Question: I used to assume that the work done by friction depends on the path it follows, but I am confused because this question (and its answer) suggests that it has nothing to do with the given angle theta. Am I wrong, or am I missing something? Answer: The work done on each slope is proportional to the friction force and the distance traveled: Friction force is $$F = \mu\, N = \mu\, m g \cos \theta$$ Distance traveled is $$ \ell= \sqrt{ (\Delta x)^2 + (\Delta y)^2} = \Delta x \sqrt{1+\tan^2 \theta} = \frac{\Delta x}{\cos \theta} $$ Combined, the work is $$W = F \ell = \mu\, m g \cos \theta \frac{\Delta x}{\cos \theta} = \mu \, m g \, \Delta x$$ So the slope cancels out for this case. This is a nice problem showing how in special circumstances friction is path independent.
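A quick numeric check of the cancellation (the values of $\mu$, $m$ and $\Delta x$ are arbitrary):

```python
import math

mu, m, g, dx = 0.3, 2.0, 9.81, 5.0   # arbitrary illustrative values

def friction_work(theta_deg):
    theta = math.radians(theta_deg)
    N = m * g * math.cos(theta)       # normal force on the incline
    F = mu * N                        # kinetic friction force
    length = dx / math.cos(theta)     # path length for horizontal run dx
    return F * length

works = [friction_work(t) for t in (10, 30, 45, 60)]
# All values equal mu * m * g * dx = 29.43, independent of theta
print(works)
```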
{ "domain": "physics.stackexchange", "id": 40555, "tags": "homework-and-exercises, newtonian-mechanics, friction, work" }
Calculating the number of unique BSTs generatable from n keys: why is my number so large?
Question: I want to find the number of distinct BSTs I can get with 3 unique keys (i.e. 1, 2, 3). Here's my solution: In case 1, each node has 3, 2, 1 possibilities, respectively, so 3*2*1 = 6 ways. In case 2, we have the same situation: the top node can be 1, 2 or 3 (three choices), the second node has two choices, and so forth, so I get 6 ways. Case 3 is the same as case 2, and I get 6 ways. In the end I have 6 + 6 + 6 = 18 different trees. (Edited!) Why does this answer from Stack Overflow, based on the so-called Catalan number, only give me 5 trees? Answer: You overcounted some of the trees and left out two other possibilities. In your Case 1, for example, there is only one possible BST of that form, namely the one with 2 in the root, 1 in the left subtree and 3 in the right subtree (the leftmost tree below). Remember that you have to maintain the binary search tree condition: a node with value $k$ must have each of the values in its left subtree less than or equal to $k$, and each of the values in its right subtree greater than or equal to $k$. As a consequence, a BST with node values $\{1, 2, \dotsc,n\}$ must have inorder traversal equal to $\langle1, 2, \dotsc,n\rangle$. When $n=3$, then, we'll have these five BSTs:
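The count of 5 comes from the Catalan recurrence: choose each key in turn as the root, then multiply the number of BSTs on the smaller keys (left subtree) by the number on the larger keys (right subtree). A short sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def num_bsts(n):
    """Number of distinct BSTs on n distinct keys (the Catalan numbers)."""
    if n <= 1:
        return 1
    # The root takes the (i+1)-th smallest key: i keys go left, n-1-i go right
    return sum(num_bsts(i) * num_bsts(n - 1 - i) for i in range(n))

print([num_bsts(n) for n in range(6)])  # [1, 1, 2, 5, 14, 42]
```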
{ "domain": "cs.stackexchange", "id": 3837, "tags": "trees, probability-theory, permutations" }
How to follow a nav_msgs/path?
Question: Hello, I am using this package to publish an astroid path. I would like my robot to be able to follow this published path, and I was wondering if anyone has a clue about how to approach this problem. If I wanted to redefine the path to something similar to a sine wave, how would I go about that? Thank you, Aaron Originally posted by aarontan on ROS Answers with karma: 135 on 2018-06-27 Post score: 2 Original comments Comment by jayess on 2018-06-27: Have you taken a look at the navigation stack? Comment by jayess on 2018-06-27: Please don't crosspost. It wastes the time and effort of everyone. Please see our support page. Comment by aarontan on 2018-06-27: Oh okay, sorry, I did not know about that, but I understand now. And yes, I have looked at the navigation stack; however, I am not sure how to integrate move_base with a published path message. I am wondering if anyone has experience with this and could provide me with some pointers. Thanks Comment by Ahmed_Desoky on 2020-07-20: I have the same problem. Did you solve it? Thanks Comment by gregbowers on 2023-06-22: Hello. To make your robot follow a published path: publish the desired path using the package you have; subscribe to the published path in your robot's control system; extract the necessary path information from the received message; implement a control algorithm that interprets the path information and generates appropriate commands for the robot; execute the control commands in your robot's hardware or simulation environment. Answer: Basically, you have three options. 1. Write your own "planner" that follows the path. This can actually be a simple controller without any collision avoidance and should be straightforward. 2. Write a GlobalPlannerPlugin for move_base that subscribes to the path you create (or creates it itself) and then passes it down to the local planner. There are quite some questions about this (e.g. here or here).
Then you have the full capabilities of move_base available later on. 3. Use move_base_flex. This exposes additional actions which should let you do what you want directly. See this question for details. I personally have done 1 (pretty easy) and 2 (works quite well). Even though I haven't tested it, I'd say 3 would be the way to go now (though move_base_flex is currently still not very stable, it is quite actively developed). EDIT About option 1: No, there are no tutorials (that I know of). Just create a subscriber to your path message, and try to follow the path using a simple controller. The most basic way would be to implement a simple P[I][D]-controller that tries to stay on the path with a fixed x-velocity (in my experience, PD is totally fine for such a task). If your robot is a differential drive, this is the easiest way. About option 2: The link to the tutorial is in one of the linked answers above. For reference. Originally posted by mgruhler with karma: 12390 on 2018-06-28 This answer was ACCEPTED on the original site Post score: 4 Original comments Comment by aarontan on 2018-06-28: Thank you for your reply, I am interested in the first two options, do you have any tutorials on how to achieve this? Comment by mgruhler on 2018-06-29: see edit above. Comment by hanks on 2021-03-19: @mgruhler Hey, I have created a package that computes a global path and I want to execute option 2 from your answer where the global plugin subscribes to this path being published and passes it down to the local planner but am unable to implement it. Here is the code of the plugin I've written to subscribe to the global path and use it but it doesn't work - https://drive.google.com/drive/folders/1kyLQqCFD5ex7VnVXjuRbn9Bn8XsO1fZX?usp=sharing Please let me know what should I do to solve this. Comment by mgruhler on 2021-03-19: no. Post a new question including your code, describe what you are trying to do, what you have tried so far and where you are stuck...
Comment by hanks on 2021-03-19: @mgruhler Here is my new question - https://answers.ros.org/question/374307/how-to-use-a-subscriber-in-a-global-planner-plugin/
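The PD-controller idea from option 1 can be prototyped outside ROS. Below is a minimal sketch (not ROS code: a pure-Python simulation of a differential-drive robot with a simple PD-style cross-track controller converging onto the straight path y = 0; the gains are made up):

```python
import math

# Differential-drive (unicycle) robot tracking the path y = 0
x, y, theta = 0.0, 1.0, 0.0     # start 1 m off the path
v, dt = 1.0, 0.01               # fixed forward speed, time step
kp, kd = 1.5, 2.5               # illustrative controller gains

for _ in range(2000):           # 20 s of simulated time
    # P on the cross-track error y, D via the heading (since y' ~ v*sin(theta))
    omega = -(kp * y + kd * theta)
    theta += omega * dt
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt

print(f"final cross-track error: {abs(y):.4f} m")  # converges close to 0
```

In a real node, the pose would come from odometry/TF, the path from a nav_msgs/Path subscriber, and (v, omega) would be published as a geometry_msgs/Twist; the cross-track error would be computed against the nearest path segment rather than a fixed line.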
{ "domain": "robotics.stackexchange", "id": 31104, "tags": "navigation, move-base, ros-kinetic" }
Could we detect dark matter by black holes gaining unexplained mass?
Question: Dark matter is said to interact only gravitationally, so it won't commonly form black holes by itself. But if a black hole is already there, and dark matter encounters the event horizon, it should go in and never come out. This means black holes would vacuum up dark matter and (very slowly) gain mass. Is this something we could hypothetically measure, as in "these black holes are heavier than they should be for the amount of cosmic gas and background radiation they eat"? (In fact, could this be why the oldest galactic black holes seem to be heavier than we expect?) Answer: The accretion rate is far too small to make much difference to Galactic black holes, but how could this be distinguished from the accretion of normal, baryonic matter in any case? In fact it is easier for black holes to accrete normal matter, since it is easier for such matter to lose its angular momentum, via friction in an accretion disk, and be able to drop into the black hole. The effective cross-section for the accretion of non-interacting dark matter is determined by an effective geometric size for the black hole, which will be just dependent on its mass and the speed with which it moves relative to the dark matter. This is the so-called "Hoyle-Lyttleton radius" given by $$R_{\rm HL} = \frac{2GM}{v^2}, $$ where $M$ is the black hole mass and $v$ is its speed with respect to the dark matter background. The accretion rate is then just $$\frac{dM}{dt} = \pi R_{\rm HL}^2 \rho v, $$ where $\rho$ is the density of the dark matter. For Galactic black holes we might assume $M=10M_\odot$, a speed with respect to the Galactic dark matter of 250 km/s (if it is in orbit around the Galaxy at a similar position to the Sun) and $\rho \simeq 0.01 M_\odot$/pc$^3$ at the Sun's position. Putting the numbers in, we find $R_{\rm HL}= 4.3\times 10^{10}$ m (about 0.28 au) and a mass accretion rate of $10^{-17} M_\odot$/year. 
Thus, even over the $10^{10}$ year life of the Galaxy, a stellar black hole increases its mass by a negligible amount due to the accretion of dark matter.
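Plugging the answer's numbers into the two formulas (a sketch; constants in SI units):

```python
import math

G     = 6.674e-11          # m^3 kg^-1 s^-2
M_sun = 1.989e30           # kg
pc    = 3.086e16           # m
year  = 3.156e7            # s

M   = 10 * M_sun           # black hole mass
v   = 250e3                # m/s, speed relative to the dark matter
rho = 0.01 * M_sun / pc**3 # local dark matter density, kg/m^3

R_HL = 2 * G * M / v**2               # Hoyle-Lyttleton radius
mdot = math.pi * R_HL**2 * rho * v    # accretion rate, kg/s

print(f"R_HL  = {R_HL:.2e} m")                        # ~4e10 m, as in the answer
print(f"dM/dt = {mdot * year / M_sun:.1e} M_sun/yr")  # ~1e-17 M_sun/yr
```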
{ "domain": "astronomy.stackexchange", "id": 6714, "tags": "black-hole, gravity, supermassive-black-hole, dark-matter" }
Efficient algorithm for a particular graph closure property
Question: In the context of an unusual compiler problem, I have a graph in which the vertices are variables, and the edges correspond to whether the instruction set has an instruction that copies the source vertex to the target vertex. The instruction set also has the unusual property that the instructions may destroy (overwrite with non-predictable values) the contents of some of the variables. Hence the edges are labeled with sets: subsets of the (fixed) set of all variables that are destroyed by the instruction. So for example, I might have instructions like the following:

a -> b, clobber set = {d}
b -> c, clobber set = {}
a -> c, clobber set = {d,f}

Note that there is a direct path from a -> c, but that there is also an indirect path from a -> b -> c, where the union of the clobber sets is {d}, i.e., a strict subset of the clobber set from the direct path, namely {d,f}. Hence the path a -> b -> c should be preferred as it has a lesser effect on the other variables. Consider the following variation on the above example:

a -> b, clobber set = {e}
b -> c, clobber set = {}
a -> c, clobber set = {d,f}

By similar reasoning, we have two paths from a -> c: the direct one, which clobbers {d,f}, and the indirect one a -> b -> c, which clobbers {e}. Given that {d,f} and {e} are incomparable subsets of the variable set, we would like to retain information about both of these paths. So the problem is as follows: given such a directed graph with edges labeled with sets, compute all "minimal" paths through the graph with respect to the clobber sets. For the first example, we should return a -> b: {d}, b -> c: {}, a -> c: {d}. For the second example, we should return a -> b: {e}, b -> c: {}, a -> c: {d,f}, a -> c: {e} (where this last fact was derived from the other edges). The problem has some flavor of a shortest-paths problem, but given the weaker partial ordering rather than the stricter total ordering on the edge labels, the existing algorithms are inapplicable.
References to literature would also be appreciated. Answer: I suggest you use a fixpoint algorithm, using a worklist implementation. I'll start by providing an intuitive, English-language description of an algorithm for this problem; then a more formal presentation. Intuitive algorithm. Basically, we'll apply the following closure rules to the graph as many times as possible, until they don't produce anything new: If there is an edge $u\to v$ with clobber set $S$ and an edge $v\to w$ with clobber set $T$, then add an edge $u\to w$ with clobber set $S\cup T$. If there is an edge $u \to v$ with clobber set $S$ and an edge $u \to v$ with clobber set $T$, where $S \subset T$, then remove the edge with clobber set $T$. Intuitively, we'd like to keep applying these in all possible ways until the graph doesn't get any bigger; then you're done. Note that I'm treating the graph as a multigraph, so there can be multiple edges between any pair of vertices. In practice, you need to implement this carefully (you don't want to add an edge that would be immediately removed by the other closure rule). So, here is an algorithm that applies the above idea, in a reasonable way: Initially, mark all of the edges of the graph. While there exists at least one marked edge: a. Pick any marked edge; suppose it is $u \to v$ with clobber set $S$. Remove the mark on this edge. b. For each edge $v\to w$ with clobber set $T$, call $\text{MaybeAddEdge}(u,w,S\cup T)$. Here the subroutine $\text{MaybeAddEdge}(u,v,S)$ is defined to do the following steps: For each edge $u \to v$ with clobber set $T$: a. If $T \subseteq S$, return immediately (without adding any edges). b. If $S \subset T$, remove the edge with clobber set $T$. Add an edge $u \to v$ with clobber set $S$, and mark it. At the end of execution, after we've finished applying the closure rules, the set of clobber sets on all of the edges from $u$ to $v$ gives the set of minimal clobber sets for all paths from $u$ to $v$. 
This is effectively a worklist/fixpoint algorithm, and it follows pretty directly from the problem statement. If you prefer a more formal treatment of this problem, see below. Notation. Let $V$ denote the set of variables. Define a partial order $\mathcal{L}$ as follows: Consider the pre-order $(2^{2^V},\le)$, where an element $\ell \in 2^{2^V}$ is a set of clobber sets, and we define $\ell \le \ell'$ iff for every clobber set $S \in \ell$, there exists a clobber set $S' \in \ell'$ such that $S' \subseteq S$. Now we can define the equivalence relation $\equiv$ by $\ell \equiv \ell'$ iff $\ell \le \ell'$ and $\ell' \le \ell$. Modding out by $\equiv$ yields a partial order $(\mathcal{L},\le) = (2^{2^V},\le)/\equiv$. In other words: every set $\ell \in 2^{2^V}$ of clobber sets is equivalent to one where there is no pair of sets $S,S' \in \ell$ such that $S \subset S'$ and $S\ne S'$ (if there is such a pair of sets, we can remove $S'$ from $\ell$ and get another equivalent set), so we let $\mathcal{L}$ denote the set of elements $\ell$ that are in this canonical form. As it happens, $\mathcal{L}$ is a join semi-lattice: the join operation is given by $\ell \vee \ell' \equiv \ell \cup \ell'$. Let $\bot = \emptyset$ (the empty set of clobber sets, representing "no known path") denote the bottom element in this lattice and $\top = \{\emptyset\}$ denote the top element. Of course, if $S$ is a clobber set (i.e., $S \subseteq V$), then $\{S\}$ is a lattice element of $\mathcal{L}$ (corresponding to the single clobber set $S$). Notice that a lattice element $\ell \in \mathcal{L}$ corresponds to a set of minimal clobber sets. Thus, the lattice will be helpful for recording this kind of information as the algorithm progresses. Define the binary operation $\circ$ by $\ell \circ \ell' \equiv \{S \cup S' : S \in \ell, S' \in \ell'\}$.
Conceptually, this is going to represent the effect of concatenating two paths (where $\ell$ is the set of minimal clobber sets for some path $u\to v$ and $\ell'$ is the set for some path $v \to w$). Approach. We are going to build a 2-dimensional table $T[\cdot,\cdot]$. At any point in the algorithm, for each pair of variables $u,v \in V$, $T[u,v]$ will hold an element of $\mathcal{L}$ that denotes an over-approximation of the desired set of minimal clobber sets for all paths from $u$ to $v$. If $u,v \in V$ are variables, at the end of the algorithm $T[u,v]$ will contain the set of all minimal clobber sets for all paths from $u$ to $v$. We'll find a least fixpoint for $T[\cdot,\cdot]$, subject to the following condition: For all vertices $u$ and all edges $v\to w$ (with clobber set $S$), $T[u,v] \circ \{S\} \le T[u,w]$. For all vertices $u$, $T[u,u] = \top$. To find the least fixpoint, we'll use a standard worklist approach. Algorithm. Let $W$ denote a set data structure that can hold pairs of vertices. Then the algorithm is as follows: Initialize $T[u,v] \gets \{S\}$ for each edge $u\to v$ with clobber set $S$ in the graph, $T[u,u] \gets \top$ for each vertex $u \in V$, and otherwise $T[u,v] \gets \bot$. Initialize $W$ to contain the set of pairs $(u,v)$ such that there is an edge $u\to v$ in the graph, together with the set of pairs $(u,u)$ such that $u$ is a vertex in the graph. While $W \ne \emptyset$, do: a. Pop an arbitrary vertex-pair $(u,v)$ from $W$. b. For all edges $v \to w$ (with clobber set $S$) out of $v$, do: Let $o \gets T[u,w]$. Set $T[u,w] \gets T[u,w] \vee (T[u,v] \circ \{S\})$. If $o \ne T[u,w]$ (i.e., the entry actually changed), insert $(u,w)$ into $W$. The final value of $T[\cdot,\cdot]$ will have the desired form (it is a minimal fixpoint subject to all of the conditions above), and thus will be a valid solution to your problem. Each $T[u,v]$ contains the set of minimal clobber sets, taken over all paths from $u$ to $v$.
Notice that the above algorithm does not specify the order in which you pull items out of the worklist. Good scheduling algorithms can improve the performance of this scheme. One standard heuristic is to use reverse post-order (derived from a DFS on the graph) to prioritize which elements to pull out of the worklist. Also note that the running time of this algorithm could be exponential, if your graph is unfortunate. This is unavoidable: there might be exponentially many clobber sets, so the output size might be exponential in the size of the input in the worst case.
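A hedged Python sketch of the worklist fixpoint, representing each table entry as an antichain of frozensets (the function and variable names are mine, not from any library):

```python
from collections import deque

def minimal_clobber_paths(edges):
    """edges: iterable of (u, v, clobber_set).
    Returns a dict (u, v) -> antichain (set of frozensets) of the
    minimal clobber sets over all paths from u to v."""
    out = {}                              # original adjacency: u -> [(v, S)]
    for u, v, s in edges:
        out.setdefault(u, []).append((v, frozenset(s)))

    table, work = {}, deque()

    def maybe_add(u, v, s):
        cur = table.setdefault((u, v), set())
        if any(t <= s for t in cur):      # dominated by an existing set
            return
        # drop sets that the new one strictly dominates, then add it
        table[(u, v)] = {t for t in cur if not s < t} | {s}
        work.append((u, v, s))            # "mark" the new derived edge

    for u, v, s in edges:
        maybe_add(u, v, frozenset(s))

    while work:                           # closure: extend marked edges
        u, v, s = work.popleft()
        if s not in table.get((u, v), ()):
            continue                      # superseded by a smaller set
        for w, t in out.get(v, []):
            maybe_add(u, w, s | t)
    return table

paths = minimal_clobber_paths([('a', 'b', {'d'}),
                               ('b', 'c', set()),
                               ('a', 'c', {'d', 'f'})])
print(paths[('a', 'c')])  # {frozenset({'d'})} -- the direct edge is superseded
```

On the question's second example, the entry for ('a', 'c') comes out as the two incomparable sets {e} and {d, f}, as desired.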
{ "domain": "cstheory.stackexchange", "id": 2233, "tags": "graph-algorithms" }
Alternatives to Logistic Regression
Question: I have age, gender, height, weight and some other similar parameters of 15000 subjects. I also have one column showing if they had a medical condition (present in about 20% subjects). I now want to analyze the factors which are most contributing to the medical condition. The usual test for this is Logistic regression analysis. However, that assumes that the relationships are linear. What other machine learning tests can I apply for this purpose with this kind of data? Thanks for your insight. Answer: If you choose your alternative to Tree based models, then you really have an upper edge here as compared to all other linear/logistic Regressions etc.. People generally useCo-relation, Co-variance and heat maps.. etc as a process which often generates tables of distances with even more numbers than the original data, but making Dendrograms in fact simplifies our understanding of the data. Distances between objects can be visualized in many simple and evocative ways, one of them is Hierarchial Clustering.. What is Hierarchical Clustering? Hierarchical clustering is where you build a cluster tree (a dendrogram) to represent data, where each group (or “node”) links to two or more successor groups. The groups are nested and organized as a tree, which ideally ends up as a meaningful classification scheme. So now, What is a Dendrogram? A dendrogram is a type of tree diagram showing hierarchical clustering — relationships between similar sets of data. They are frequently used in biology to show clustering between genes or samples, but they can represent any type of grouped data. The columns under the same split at the leaves are somewhat having relationships between them or have similar attributes.., that is what we try to Explore and deepen out understanding about in order to cut down redundant Features.... Sample Dendrogram looks like this one... 
So let's try to interpret the diagram now. Removing redundant features: one thing that makes it harder to interpret a variable is that there seem to be some variables with very similar meanings (redundant features). Let's try removing some of these related features to see if the model can be simplified without impacting the accuracy. (The names split_vals, n_trn, y_train and df_keep come from the fast.ai notebook linked below.)

def get_oob(df):
    m = RandomForestRegressor(n_estimators=30, min_samples_leaf=5,
                              max_features=0.6, n_jobs=-1, oob_score=True)
    x, _ = split_vals(df, n_trn)
    m.fit(x, y_train)
    return m.oob_score_

Here's our baseline:

get_oob(df_keep)
0.88999425494301454

Now we try removing each variable one at a time:

for c in ('saleYear', 'saleElapsed', 'fiModelDesc', 'fiBaseModel',
          'Grouser_Tracks', 'Coupler_System'):
    print(c, get_oob(df_keep.drop(c, axis=1)))

Output(s):

saleYear 0.889037446375
saleElapsed 0.886210803445
fiModelDesc 0.888540591321
fiBaseModel 0.88893958239
Grouser_Tracks 0.890385236272
Coupler_System 0.889601052658

It looks like we can try removing one from each group. Let's see what that does:

to_drop = ['saleYear', 'fiBaseModel', 'Grouser_Tracks']

The score looks good even after dropping some of the columns. References: Wiki Link; Notebook Link - fast.ai (Jeremy is just awesome); Blog Link. Edit (just to make the answer complete): random forests also offer partial dependence plots, which are another very helpful insight to explore further.
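Because the snippet above depends on data defined in that notebook, here is a self-contained sketch of the same dendrogram idea using SciPy on synthetic features (the column names and the deliberately redundant weight_kg/weight_lb pair are made up for illustration):

```python
import numpy as np
from scipy.cluster import hierarchy
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 500
age = rng.uniform(20, 80, n)
height = rng.normal(170, 10, n)
weight_kg = 0.5 * height + rng.normal(0, 8, n)
weight_lb = 2.20462 * weight_kg            # perfectly redundant feature
X = np.column_stack([age, height, weight_kg, weight_lb])
names = ["age", "height", "weight_kg", "weight_lb"]

corr, _ = spearmanr(X)                     # rank correlation matrix
dist = 1 - np.abs(corr)                    # similarity -> distance
Z = hierarchy.linkage(
    dist[np.triu_indices_from(dist, k=1)], # condensed distance vector
    method="average")
# hierarchy.dendrogram(Z, labels=names) would draw the tree;
# here we just read off which pair merges first (the most redundant one)
first_merge = sorted(int(i) for i in Z[0, :2])
print([names[i] for i in first_merge])     # the weight_kg/weight_lb pair
```

Features that merge at near-zero height in the dendrogram are candidates for dropping, exactly as in the OOB-score experiment above.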
{ "domain": "datascience.stackexchange", "id": 2902, "tags": "machine-learning, neural-network, deep-learning, classification, machine-learning-model" }
KDL iterative IK solver joint angles not normalized
Question: I'm using the arm_kinematics_tools package for solving the IK of a 6DOF arm. Everything is working fine, with the little caveat that the returned joint angles are not normalized. An example: position: [577.1623005464028, 565.4397480457859, 43040.16494263808, 441.2180765483309, -17007.80488691719, 939.0145541995307] If those are sent to the robot_state_publisher, the arm position is fine, implying that the angles are correct, just not normalized. I'm aware that this behavior is also related to the seed state, but I was wondering if this is normal/intended behavior of KDL and if others have experienced this too. Originally posted by Stefan Kohlbrecher on ROS Answers with karma: 24361 on 2013-02-07 Post score: 0 Answer: Hi, I remember seeing that too. In the package you'll find there are 3 types of the Newton-Raphson solver you can test. I found only one of them had this behaviour. The "best" of the three, with joint limits, did not. But, for a 6 DOF robot, I'd generate an IKFast plugin. ;-) Originally posted by dbworth with karma: 1103 on 2013-02-07 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Stefan Kohlbrecher on 2013-02-07: Ah ok, thanks :) Yes, I'd do that too (re IKFast), unfortunately there are some kinematics for which closed form IK generation fails. Comment by dbworth on 2013-02-07: DRC arm or snake robot? ;-) The basic N.R. solver is very bad. Using KDL to make a Jacobian-based solver is one, better option. Comment by Stefan Kohlbrecher on 2013-02-07: DRC arm :) See also http://openrave-users-list.185357.n3.nabble.com/Generating-IK-for-6DOF-humanoid-appendages-prototype-GFE-robot-fails-td4025593.html . Is there a Jacobian solver for KDL available? Comment by dbworth on 2013-02-07: There's functions in the KDL library to do with jacobians. I think the NASA R2 uses one, as does REEM: https://github.com/pal-robotics/reem_kinematics/blob/master/reem_kinematics_constraint_aware/src/ik_solver.cpp
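Whichever solver variant is used, wound-up joint values like those above can be wrapped back into a canonical range before being sent on. A minimal sketch in plain Python (this is generic angle arithmetic, not a KDL or ROS API):

```python
# Generic angle arithmetic (not a KDL API): wrap a wound-up joint value back
# into the canonical range (-pi, pi].
import math

def normalize_angle(theta):
    wrapped = math.fmod(theta, 2 * math.pi)  # math.fmod keeps the sign of theta
    if wrapped > math.pi:
        wrapped -= 2 * math.pi
    elif wrapped <= -math.pi:
        wrapped += 2 * math.pi
    return wrapped
```

For a continuous revolute joint this gives an equivalent configuration with every value in (-pi, pi]; for joints with limits you would clamp or re-seed instead of blindly wrapping.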
{ "domain": "robotics.stackexchange", "id": 12790, "tags": "ros, inverse-kinematics, kdl" }
R value of a hole?
Question: Suppose I have a room, 10x10x10m with insulated walls (R-value: 4) and a single double-pane window 1x1m (R-value: 0.35). Now the total thermal power required to keep the room at a desired temperature is given by: $$ P = \left( \frac{6\cdot 10\cdot 10 -1\cdot 1 }{4} + \frac{1\cdot 1}{0.35} \right) \Delta T $$ I am wondering what would happen if the window is left open? In that case, wouldn't the R value be zero and power go to infinity? Clearly this can't be right... I should be able to assume a finite R value for this case. But which and why? EDIT: I somehow suspect that in this case the hole is completely ignored in this formula but taken into account as a convection term? Answer: If this hole is the only opening in the room, there will be a buoyancy-driven exchange flow through it (if the interior is hotter than the exterior, in through the bottom half of the hole and out through the top half of the hole, and vice versa). This exchange flow is known as "mixing mode ventilation". Since the outgoing air is at a different temperature from the incoming air, this means a net energy flow. I'd expect the pressure differences driving the flow to be proportional to $\Delta T$, and therefore the flow rate (through the usual formula for an orifice flow) to be proportional to $\sqrt{\Delta T}$. The rate of energy transfer $P$ will be proportional to the product of flow rate and $\Delta T$, i.e. $P$ will be proportional to $(\Delta T)^{3/2}$ and your $R$ value proportional to $1/\sqrt{\Delta T}$, with a constant of proportionality that depends on the top-to-bottom height of the opening. I'd guess a good place to look for a detailed formula for that constant of proportionality would be volume A of the CIBSE guide (or ASHRAE's equivalent if you're of the American persuasion), but I don't have access to a copy right now to check.
If there's another opening somewhere at a different height, there's more likely to be a one-way flow through each hole ("displacement mode ventilation"), and the overall exchange flow (and therefore the $R$-value) will be a combined property of the pair of openings, so there's not a well-defined $R$ value for one of the openings in isolation. Nevertheless, I'd still expect the proportionality of the combined $R$ value to $1/\sqrt{\Delta T}$ to hold up, just with a different constant of proportionality that depends on the height difference between the two openings and the ratio between their areas. Again, volume A of the CIBSE guide is probably the place to look for a detailed formula.
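To see what a temperature-dependent R-value does to the original formula, here is a small, hedged sketch; the lumped constant C is invented for illustration, since a real coefficient would come from the CIBSE/ASHRAE orifice-flow formulas mentioned above:

```python
# Hedged sketch: under mixing-mode ventilation the hole's heat flow scales as
# P = C * A * dT**1.5, so the effective R-value falls off as 1/sqrt(dT).
# C is an invented lumped constant (air density, heat capacity, g, opening
# height, discharge coefficient); real values come from CIBSE/ASHRAE formulas.
import math

def hole_power(area, dT, C=1.0):
    return C * area * dT ** 1.5

def effective_R(dT, C=1.0):
    # defined through P = (area / R) * dT, hence R = 1 / (C * sqrt(dT))
    return 1.0 / (C * math.sqrt(dT))
```

Plugging effective_R(dT) into the room formula in place of the window's 0.35 keeps the total power finite; the price is that the total becomes nonlinear in the temperature difference.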
{ "domain": "engineering.stackexchange", "id": 5413, "tags": "thermodynamics, thermal-conduction, thermal-insulation" }
Is there any evolutionary advantage of selection of L-amino acid over D-amino acid?
Question: After listening to a scientific talk, I had this question: why, in the natural selection process, are L-amino acids selected over the D-form? However, we still produce D-amino acids; specifically, in the brain there is a higher concentration of D-amino acids. One possible reason is that they act as neurotransmitters. Answer: As you say yourself, biological molecules are usually available in both chiral forms, yet nature uses only one of the two possibilities. At some point in our molecular evolution (and at a very early one) L-amino acids were stochastically "chosen" over their D-equivalents (I think that the choices would have been equiprobable). There is no reason why D-amino acids shouldn't work here as well. There is some discussion on why L-amino acids are used; see the references for some ideas. It might be that the proteins made from them were slightly more stable or that they were more abundant. D-amino acids are in use today as a protective step - most proteases, for example, cannot hydrolyse the D-amino acids in the peptidoglycan wall of bacteria. Using these "unusual" amino acids gives some protection here. References: Mirror symmetry breaking in biochemical evolution Mirror symmetry breaking at the molecular level Origin of homochirality in biological systems
{ "domain": "biology.stackexchange", "id": 5400, "tags": "evolution, amino-acids, chirality" }
The rounded box that wanted to be an arc
Question: This is from Bjarne Stroustrup's C++ Programming: Principles and Practice, Chapter 13 Exercise 2: Draw a box with rounded corners. Define a class Box, consisting of four lines and four arcs. I've already defined class Arc and using existing facilities (class Lines) for drawing the lines. Here is the code that draws a rounded box of random size: roundedBox.h //#include "GUI.h" //#include "Simple_window.h" //#include <iostream> namespace Graph_lib{ // Class Arc bool validInput(int w, int h); class Arc: public Ellipse{ public: Arc(Point p, int w, int h, double s, double e) : Ellipse(Point(p.x, p.y), w, h), start(s), end(e) { if (!validInput(w,h)) error("Invalid input Arc"); } void draw_lines() const; // starting and ending angle of the arc: start from 3'oclock counterclockwise double start; double end; }; // Member function void Arc::draw_lines() const{ if(color().visibility()) fl_arc(point(0).x, point(0).y, major() , minor() , start, end); } //------------------------------------------------------------------------------------------------ // Class RoundedBox class Box: public Lines{ public: Box(Point p, int w, int h); Box(Point ul, Point dr); private: Point upperLeft; Point downRight; int width; int height; // 1/4th of width and height, respectively int roundWidth; int roundHeight; }; //-------------------------------------------------------------------------------------------------- // Function Implementation // Helper function bool validInput(int w, int h){ if (w < 0 || h < 0) return false; else return true; } bool validInput(Point ul, Point dr){ if(ul.x - dr.x < 0 || ul.y - dr.y < 0) return false; return true; } // Class Constructors Box::Box(Point p, int w, int h) : upperLeft(p), downRight(p.x + w, p.y + h), roundWidth(w / 4), roundHeight(h / 4) { if (!validInput(w,h)) error("Invalid input Box"); Lines::add(Point(p.x + roundWidth, p.y), Point(p.x + w - roundWidth, p.y)); Lines::add(Point(p.x + w, p.y + roundHeight), Point(p.x + w, p.y + h - 
roundHeight)); Lines::add(Point(p.x + w - roundWidth, p.y + h), Point(p.x + roundWidth, p.y + h)); Lines::add(Point(p.x, p.y + h - roundHeight), Point(p.x, p.y + roundHeight)); } Box::Box(Point ul,Point dr) : upperLeft(ul), downRight(dr), roundWidth((dr.x - ul.x) / 10.), roundHeight((dr.y - ul.y) / 10.) { if(!validInput(ul, dr)) error ("Invalid input Box"); Lines::add(Point(ul.x + roundWidth, ul.y), Point(dr.x - roundWidth, ul.y)); Lines::add(Point(dr.x, ul.y + roundHeight), Point(dr.x, dr.y - roundHeight)); Lines::add(Point(dr.x - roundWidth, dr.y), Point(ul.x + roundWidth, dr.y)); Lines::add(Point(ul.x, dr.y - roundHeight), Point(ul.x, ul.y + roundHeight)); } } // end of Graph_lib namespace roundedBox.cpp #include "GUI.h" #include "Simple_window.h" #include <iostream> #include "roundedBox.h" //----------------------------------------------------------------------------------------------------------------------- int main(){ // window parameters int winWidth = 800; int winHeight = 600; Point center((x_max() - winWidth) / 2., (y_max() - winHeight) / 2.); Simple_window* sw = new Simple_window(center, winWidth, winHeight, "Chapter 13 Exercise 2"); // rounded box parameters int width = 500; int height = 400; int xCoord = (sw->x_max() - width) / 2; int yCoord = (sw->y_max() - height) / 2; try{ // draw the 4 corner arcs clockwise, starting from the top left corner // each center of an arc is shifted away from the center of the rounded box by multiples of its width and height Graph_lib::Arc ulArc(Point(xCoord + width/2, yCoord + height/2), width/2, height/2, 90, 180); ulArc.draw_lines(); sw->attach(ulArc); Graph_lib::Arc urArc(Point(xCoord + width, yCoord + height/2), width/2, height/2, 0, 90); urArc.draw_lines(); sw->attach(urArc); Graph_lib::Arc drArc(Point(xCoord + width, yCoord + height), width/2, height/2, 270, 0); drArc.draw_lines(); sw->attach(drArc); Graph_lib::Arc dlArc(Point(xCoord + width/2, yCoord + height), width/2, height/2, 180, 270); dlArc.draw_lines();
sw->attach(dlArc); // create a box with rounded corners by creating four non-intersecting lines that will match the arcs // both arcs and roundedbox are parametrized as a function of the width and height Graph_lib::Box rb(Point(xCoord, yCoord), width, height); sw->attach(rb); sw->wait_for_button(); delete sw; }catch(exception& e){ cerr << e.what() << endl; getchar(); }catch(...){ cerr <<"Default exception!"<< endl; getchar(); } return 0; } Output: Suggestion: Could I alter the definition of class Box such that it includes both arcs and lines? Note: Until now: I've (unsuccessfully) tried including an override of the function void draw_lines(): as a result I get nothing on the screen. In addition, if class Arc is included as base class and its constructor initialized via the class Box constructor list, I get ambiguities in the void draw_lines() function. I've (unsuccessfully) tried passing a Simple_window w* object to the class Box and then using it to attach() Arc objects within the function void draw_lines(). Another idea is to define class Arcs that holds multiple Arc objects and to add them either in the body of the class Box constructor or in void draw_line(). Additional dependencies are here. FLTK can be found here.
Answer: I didn't find a way to use/combine the existing facilities, so, using the FLTK library directly, here is what I came up with: namespace Graph_lib{ class RoundedBox: public Shape{ public: RoundedBox(Point ul, Point dr); RoundedBox(Point ul, int w, int h); void draw_lines() const; private: Point upperLeft; int width; int height; // roundedness of the box // should be changed together with Arc center, major and minor axes int roundWidth; int roundHeight; }; // Class member implementations RoundedBox::RoundedBox(Point ul, Point dr) : upperLeft(ul), width(abs(ul.x - dr.x)), height(abs(ul.y - dr.y)), roundWidth(abs(ul.x - dr.x) / 4), roundHeight(abs(ul.y - dr.y) / 4) { add(ul); } RoundedBox::RoundedBox(Point ul, int w, int h) : upperLeft(ul), width(w), height(h), roundWidth(w / 4), roundHeight(h / 4) { add(ul); } void RoundedBox::draw_lines() const{ if(color().visibility()){ // Arcs // upper left arc fl_arc(point(0).x , point(0).y, width/2, height/2, 90, 180); // upper right arc fl_arc(point(0).x + width/2, point(0).y, width/2, height/2, 0, 90); // down right arc fl_arc(point(0).x + width/2, point(0).y + height/2, width/2, height/2, 270, 0); // down left arc fl_arc(point(0).x, point(0).y + height/2, width/2, height/2, 180, 270); // Lines // top horizontal fl_xyline(point(0).x + roundWidth, point(0).y, point(0).x + width - roundWidth); // right vertical fl_yxline(point(0).x + width, point(0).y + roundHeight, point(0).y + height - roundHeight); // bottom horizontal fl_xyline(point(0).x + roundWidth, point(0).y + height, point(0).x + width - roundWidth); // left vertical fl_yxline(point(0).x, point(0).y + roundHeight, point(0).y + height - roundHeight); } } } // end of namespace Graph_lib The execution looks like this: #include "GUI.h" #include "Simple_window.h" #include <iostream> #include "Chapter13Exercise2Version2.h" //-------------------------------------------------------------------------- int main(){ // window parameters int winWidth = 800; int winHeight = 600;
Point center((x_max() - winWidth) / 2., (y_max() - winHeight) / 2.); Simple_window* sw = new Simple_window(center, winWidth, winHeight, "Chapter 13 Exercise 2"); // rounded box parameters int width = 400; int height = 200; int xCoord = (sw->x_max() - width) / 2; int yCoord = (sw->y_max() - height) / 2; try{ Graph_lib::RoundedBox rb(Point(xCoord, yCoord), width, height); sw->attach(rb); sw->wait_for_button(); delete sw; }catch(exception& e){ cerr << e.what() << endl; getchar(); }catch(...){ cerr <<"Default exception!"<< endl; getchar(); } return 0; } And the result is:
{ "domain": "codereview.stackexchange", "id": 15770, "tags": "c++, object-oriented, inheritance, fltk" }
Web Service that gets data from multiple tables in a database using EF Core Database-First approach
Question: I have never created a web service before. I followed most of this Pluralsight tutorial to give me an idea of how to create one using ASP.NET MVC along with .NET Core 2.0 and Entity Framework Core. The goal of this web service is to provide users with data from a database. It doesn't really do anything other than filter data down to what was requested and then return that data. Here is an example request body: { "buildIds": [ "BuildId.1", "BuildId.2" ], "cRs": [ 100, 400 ] } The buildIds property is what is used to get "CRs". The cRs property is used to filter these CRs down to a specific set. The cRs property can be omitted if the user doesn't want to filter by anything. This question is somewhat two-fold: I would like to know if there are any other cases for which I should handle certain things coming in as requests and also what you think of my code overall. Controller: [Route("api/metabuildCRs")] public class MetabuildCRsController : Controller { private IQSARepository _repository; public MetabuildCRsController(IQSARepository repository) { _repository = repository; } [HttpPost] // POST is used here because you can't send a body with GET public IActionResult GetMetabuildCrs([FromBody] MetabuildCRsRequest model) { if (model == null || model.BuildIds == null) { return BadRequest(); } var metabuildCRs = new List<MetabuildCR>(); foreach (var productBuildId in model.BuildIds) { var imageBuildIds = _repository.GetImageBuildsInProductBuild(productBuildId); foreach (var imageBuildId in imageBuildIds) { var crNumbers = _repository.GetJobDetailsForSoftwareImageBuild(imageBuildId)? 
.Select(jd => jd.ChangeRequestNumber) .Distinct(); if (model.CRs != null && model.CRs.Count() > 0) { // filter down to only crs we care about crNumbers = crNumbers.Where(cr => model.CRs.Contains(cr)); } foreach (var crNumber in crNumbers) { var imageBuild = _repository.GetSoftwareImageBuild(imageBuildId); var bulletinInfo = _repository.GetBulletinInformationForCR(crNumber); var exception = _repository.GetCRException(crNumber, imageBuildId); var dependentCRs = _repository.GetCRsThatDependOnCR(crNumber); metabuildCRs.Add(new MetabuildCR { ChangeRequestNumber = crNumber, // Build Info SoftwareImageBuildId = imageBuildId, BuildDate = imageBuild.CrmbuildDate, // Exception Info RequestText = exception?.RequestText, RequestedBy = exception?.RequestedBy, RequestedOn = exception?.RequestedOn, ExpiresOn = exception?.ExpiresOn, JiraIssueKey = exception?.JiraIssueKey, ReasonCode = exception?.ReasonCode, ResponseBy = exception?.ResponseBy, ResponseText = exception?.ResponseText, ResponseOn = exception?.ResponseOn, ExemptionNotes = exception?.Notes, //Bulletin Info SecurityBulletinDcn = bulletinInfo?.SecurityBulletinDcn, DocumentType = bulletinInfo?.DocumentType, DocumentReleaseDate = bulletinInfo?.DocumentReleaseDate, DependentCRs = dependentCRs }); } } } return Ok(metabuildCRs); } } Request object: public class MetabuildCRsRequest { public IEnumerable<string> BuildIds { get; set; } public IEnumerable<int> CRs { get; set; } } Repository (service layer): public class QSARepository : IQSARepository { private QSAContext _context; public QSARepository(QSAContext context) { _context = context; } public IEnumerable<string> GetImageBuildsInProductBuild(string buildId) { return _context.SoftwareProductBuildCompositions.Where(x => x.SoftwareProductBuildId == buildId)?.Select(y => y.SoftwareImageBuildId); } public SoftwareImageBuild GetSoftwareImageBuild(string buildId) { return _context.SoftwareImageBuilds.FirstOrDefault(sib => sib.SoftwareImageBuildId == buildId); } public 
IEnumerable<VerifySourceJobDetail> GetJobDetailsForSoftwareImageBuild(string buildId) { var crNumbers = (from job in _context.VerifySourceJobs join details in _context.VerifySourceJobDetails on job.Id equals details.VerifySourceJobId where job.SoftwareImageBuildId == buildId select details).Distinct(); return crNumbers; } public CRException GetCRException(int crNumber, string softwareImage) { return _context.CRExceptions.FirstOrDefault(e => e.ChangeRequestNumber == crNumber && e.SoftwareImage == softwareImage); } public PrismCRDocument GetBulletinInformationForCR(int crNumber) { return _context.PrismCRDocuments.FirstOrDefault(b => b.ChangeRequestNumber == crNumber); } public IEnumerable<int> GetCRsThatDependOnCR(int crNumber) { return from r in _context.PrismCRRelationships where r.ChangeRequestNumber2 == crNumber && r.Relationship == "DependsOn" select r.ChangeRequestNumber1; } } Answer: Controllers should be kept as lean as possible. Consider adding another layer of abstraction specific the controller in order to separate concerns. public interface IMetabuildCRsService { List<MetabuildCR> GetMetabuildCrs(IEnumerable<string> BuildIds, IEnumerable<int> CRs = null); } Its implementation will encapsulate the core functionality currently being done in the controller. public class DefaultMetabuildCRsService : IMetabuildCRsService { private readonly IQSARepository repository; public DefaultMetabuildCRsService(IQSARepository repository) { this.repository = repository; } public List<MetabuildCR> GetMetabuildCrs(IEnumerable<string> BuildIds, IEnumerable<int> CRs = null){ var metabuildCRs = new List<MetabuildCR>(); foreach (var productBuildId in BuildIds) { var imageBuildIds = repository.GetImageBuildsInProductBuild(productBuildId); foreach (var imageBuildId in imageBuildIds) { var crNumbers = repository.GetJobDetailsForSoftwareImageBuild(imageBuildId)? 
.Select(jd => jd.ChangeRequestNumber) .Distinct(); if (CRs != null && CRs.Count() > 0) { // filter down to only crs we care about crNumbers = crNumbers.Where(cr => CRs.Contains(cr)); } var imageBuild = repository.GetSoftwareImageBuild(imageBuildId); foreach (var crNumber in crNumbers) { var bulletinInfo = repository.GetBulletinInformationForCR(crNumber); var exception = repository.GetCRException(crNumber, imageBuildId); var dependentCRs = repository.GetCRsThatDependOnCR(crNumber); metabuildCRs.Add(new MetabuildCR { ChangeRequestNumber = crNumber, // Build Info SoftwareImageBuildId = imageBuildId, BuildDate = imageBuild.CrmbuildDate, // Exception Info RequestText = exception?.RequestText, RequestedBy = exception?.RequestedBy, RequestedOn = exception?.RequestedOn, ExpiresOn = exception?.ExpiresOn, JiraIssueKey = exception?.JiraIssueKey, ReasonCode = exception?.ReasonCode, ResponseBy = exception?.ResponseBy, ResponseText = exception?.ResponseText, ResponseOn = exception?.ResponseOn, ExemptionNotes = exception?.Notes, //Bulletin Info SecurityBulletinDcn = bulletinInfo?.SecurityBulletinDcn, DocumentType = bulletinInfo?.DocumentType, DocumentReleaseDate = bulletinInfo?.DocumentReleaseDate, DependentCRs = dependentCRs }); } } } return metabuildCRs; } } This simplifies the controller to [Route("api/metabuildCRs")] public class MetabuildCRsController : Controller { private readonly IMetabuildCRsService service; public MetabuildCRsController(IMetabuildCRsService service) { this.service = service; } [HttpPost] public IActionResult GetMetabuildCrs([FromBody] MetabuildCRsRequest model) { if (model == null || model.BuildIds == null) { return BadRequest(); } List<MetabuildCR> metabuildCRs = service.GetMetabuildCrs(model.BuildIds, model.CRs); return Ok(metabuildCRs); } } If anything changes in the core functionality then there is no need to touch the controller as it is performing its Single Responsibility of handling requests. 
The service can be modified independently of the controller. It can also be reused elsewhere if needed. I am personally not a big fan of using underscore prefixes on variable names, so you will notice that I removed them all. As for your concern about additional functionality, it can be isolated in its own service abstraction and added to this controller or its own controller, depending on your choice. Splitting functionality into small, easy-to-maintain modules helps separate concerns within the application and allows the code to grow smoothly.
{ "domain": "codereview.stackexchange", "id": 30066, "tags": "c#, asp.net-mvc, web-services, asp.net-core, entity-framework-core" }
Question about exterior derivatives
Question: I know from Carroll that the integration in GR is basically a mapping from n-form to the real number. And it's given that $$d^nx=dx^0\wedge\ldots\wedge dx^{n-1}=\frac{1}{n!}\epsilon_{\mu_1\ldots\mu_n}dx^{\mu_1}\wedge\ldots\wedge dx^{\mu_n}$$ Now, I have an expression that is given in spherical coordinate system, where I have $$\int_\Sigma f(\theta,\phi) d\theta\wedge d\phi$$ when I want to integrate this (the epsilon part is already computed), do I just have $\int\int d\theta d\phi$ to integrate, or do I need to put the part from integration in spherical coordinate system $\int\int\sin\theta d\theta d\phi$? I haven't done much integration that involved forms before, so any help is appreciated :) EDIT: From the article I have: $$k_\xi[h,\bar{g}]=k^{[\nu\mu]}_\xi[h,\bar{g}](d^{n-2}x)_{\nu\mu}$$ $$(d^{n-p}x)_{\mu_1\ldots\mu_p}:=\frac{1}{p!(n-p)!}\epsilon_{\mu_1\ldots\mu_n}dx^{\mu_{p+1}}\wedge\ldots\wedge dx^{\mu_n}$$ $$k_\xi^{[\nu\mu]}[h,\bar{g}]=-\frac{\sqrt{-\bar{g}}}{16\pi}\ldots$$ where $\ldots$ is an expression. Does this mean, since I have an $n-2$ form and I'm in 4 dimensional space, I need to include this $\sin\theta$ after all? EDIT2: I need to add the $\sin\theta$. I got it. Thanks :D Answer: The integral you wrote down would simply be computed as follows: \begin{align} \int_\Sigma f\,d\theta\wedge d\phi = \int_0^{2\pi}d\phi\int_0^\pi d\theta f(\theta, \phi) \end{align} You just "erase the wedge."
The extra factor of $\sin\theta$ is included if you are integrating a 2-form $\omega$ that is proportional to the volume form; \begin{align} \omega = f\,\epsilon \end{align} Here $\epsilon$ is the standard volume form on the sphere; \begin{align} \epsilon = \sqrt{|\det(g_{ij})|}d\theta\wedge d\phi = \sin\theta\,d\theta\wedge d\phi, \qquad (g_{ij}) = \mathrm{diag}(1,\sin^2\theta) \end{align} So, for example, we would have \begin{align} \int_\Sigma \omega = \int_\Sigma f\epsilon = \int_{\Sigma}f\sin\theta \,d\theta\wedge d\phi = \int_0^{2\pi}d\phi\int_0^\pi d\theta \,\sin\theta f(\theta, \phi) \end{align}
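A quick numerical cross-check of the two recipes, using only the standard library: integrating against the volume form must reproduce the sphere's area $4\pi$ for $f=1$, whereas "erasing the wedge" without the $\sin\theta$ factor would give $2\pi^2$ instead.

```python
# Midpoint-rule check of the sphere integral with the sin(theta) factor from
# the standard volume form; for f = 1 the result must be the area 4*pi.
import math

def integrate_sphere(f, n=400):
    # integrand f(theta, phi) * sin(theta) over theta in [0, pi], phi in [0, 2*pi]
    dtheta = math.pi / n
    dphi = 2 * math.pi / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * dtheta
        for j in range(n):
            phi = (j + 0.5) * dphi
            total += f(theta, phi) * math.sin(theta) * dtheta * dphi
    return total
```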
{ "domain": "physics.stackexchange", "id": 10497, "tags": "differential-geometry, integration" }
How to tell which species has the highest ionization energy?
Question: I am preparing for my final exam, and I am very confused about ionization energy. An example question would be: Between the species $\ce{Ne, Na+, Mg^2+, Ar, K+}$, and $\ce{Ca^2+}$, which one has the highest ionization energy? I thought that ionization energy increased from left to right in a period and from down to up in a group in the periodic table, so I thought that $\ce{Ne}$ would be the one with the highest ionization energy. But the right answer is $\ce{Mg^2+}$. I assume that this has something to do with the electrons, but I don't know what. Answer: Like most chemical conundrums you need to simplify the problem using some knowledge of chemistry. First there are two electron configurations. $\ce{Ne, Na+}$ and $\ce{Mg^{2+}}$ all have the configuration ${1s^2 2s^22p^6}$. $\ce{Ar, K+}$ and $\ce{Ca^{2+}}$ all have the configuration ${1s^2 2s^22p^6 3s^2 3p^6}$. For $\ce{Ne, Na+}$ and $\ce{Mg^{2+}}$, $\ce{Mg}$ has the highest atomic number so of these three, it would be most difficult to ionize $\ce{Mg^{2+}}$. For $\ce{Ar, K+}$ and $\ce{Ca^{2+}}$, $\ce{Ca}$ has the highest atomic number so of these three, it would be most difficult to ionize $\ce{Ca^{2+}}$. Now it is just necessary to compare the ionization of $\ce{Mg^{2+}}$ to $\ce{Ca^{2+}}$. Since the $2p$ orbitals are closer to the nucleus than the $3p$ orbitals, it will be harder to ionize $\ce{Mg^{2+}}$ than $\ce{Ca^{2+}}$.
{ "domain": "chemistry.stackexchange", "id": 4661, "tags": "periodic-trends, ionization-energy" }
Strange ice found in my garden
Question: This morning I found a really strange ice formation in my garden. I can't figure out how it appeared, because there was nothing above. The night was particularly cold (Belgium). To give an idea, it is the size of a common mouse (5 cm in height, with a 2 cm base for the inverted pyramid). Answer: Congratulations, you found an inverted pyramid ice spike, sometimes called an ice vase! The Bally-Dorsey model of how it happens is that first the surface of the water freezes, sealing off the water below except for a small opening. If the freezing rate is high enough, the expansion of ice under the surface will increase the pressure (since the ice is less dense than the water and displaces more volume), and this forces water up through the opening, where it will freeze around the rim. As the process goes on, a spike emerges. If the initial opening or the crystal planes near it are aligned in the right way, the result is a pyramid rather than a cylinder/spike. The process is affected by impurities; the water has to be fairly clean. It also requires fairly low temperatures so freezing is fast enough (but not too fast).
{ "domain": "physics.stackexchange", "id": 47020, "tags": "everyday-life, water, phase-transition, crystals, ice" }
Units in gravitational $N$ body simulations
Question: I am trying to write a code in Python to simulate $N$ bodies interacting through gravity. In particular I am trying to see whether a system of particles with random initial positions and zero velocity will fall into a viral equilibrium. I understand that in $N$ body simulations, it is advisable to set $G=1$ and $M=1$. However, having done this what then are my units of time or length or energy? Answer: Most of the time, scientific computer code is written in such a way that variables have no "knowledge" of the units they are intended to represent. (Of course, you could be arbitrarily sophisticated in the way you write your program, e.g. by defining classes that keep track of dimensionality and used units, and then use these classes to define your variables. But that is the exception, in my experience.) Assume, for generality, that you have bodies of different masses. You may choose one of them, called, say, $M_0 \ne 0$, to be the scale you use to measure masses. So, the mass of object $1$ may now be measured in multiples of $M_0$, e.g. you could have $M_1 = 42 M_0$, and likewise $M_2 = 0.01 M_0$. It is then common to set $M_0=1$, both on paper as well as in the program, and consequently drop the term $M_0$ altogether. Similarly, you can set $G=1$. This is often done, but in what follows, keep the initial situation with $M_0$ still present in mind. So, $R$ being the distance between $M_1$ and $M_2$, you may write for the potential energy $V$, either on paper or in your code, $$ V = - \frac{42 \cdot 0.01}{R} $$ Now, $R$ may as well be measured as multiples of some characteristic length $R_0$ which you set to unity, so that all that remains for $V$ is a real number. Your question, if I understood it correctly, is now how one can make sense of a calculated result of, say, $V = - 0.42$ (as in the case where, by coincidence or design, $R=1$.) 
To do this, it may be helpful to review what has been done to reach this point: By convention, we agreed to measure masses as multiples of $M_0$, gravitational coupling strengths as multiples of $G$, and distances as multiples of $R_0$. However, let's not set $M_0=G=R_0=1$ this time, and consequently let's not drop those constants from our expressions. So, instead of $$ V = - G \frac{M_1 M_2}{R} $$ one can write without loss of generality (see e.g. the definitions of $M_1$ and $M_2$ above) $$ \left(v \cdot V_0\right) = - \left(g \cdot G\right) \frac{ \left(m_1 \cdot M_0\right) \left(m_2 \cdot M_0\right)}{\left(r \cdot R_0\right)} $$ where all of $g=1$, $m_1 = 42$, $m_2 = 0.01$, $v$, and $r$ are dimensionless real numbers, and are the quantities you probably encounter in your program. The potential energy is now measured as a multiple of some characteristic energy $V_0$, i.e. $V=v \cdot V_0$. So, what your computer program may calculate is actually the dimensionless quantity $v=-0.42$, which measures the multiples of $V_0$, so that the potential energy, with the correct dimensions, is $V=-0.42 V_0$. What remains is to understand what $v=-1$, i.e. $V=-V_0$, means in terms of, say, SI units. You would achieve $v=-1$ for example for $m_1=m_2=g=r=1$, so that $$ -V_0 = - G \frac{M_0^2}{R_0} $$ Depending on the values chosen for $M_0$ and $R_0$ (and $G$, if that is regarded a "choice"), $V_0$ is now fixed and known as well. If you wanted to produce a graph of the dependence of the potential energy on the spatial separation of two fixed masses, you would then probably label the x-axis either $r$ or $R/R_0$, and the y-axis either $v$ or $V / \left(G \frac{M_0^2}{R_0}\right)$. I hope this helps. Please feel free to ask for clarifications.
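As a concrete illustration, here is a hedged sketch of turning a dimensionless result like $v=-0.42$ back into SI units; the choice of $M_0$ as one solar mass and $R_0$ as one parsec is arbitrary and only for the example (constants rounded to four significant figures):

```python
# Sketch of recovering SI values from code units, assuming the (arbitrary)
# choices M0 = one solar mass and R0 = one parsec; any other scales work the
# same way.
G_SI = 6.674e-11        # m^3 kg^-1 s^-2
M0 = 1.989e30           # kg, solar mass
R0 = 3.086e16           # m, parsec

V0 = G_SI * M0 ** 2 / R0              # the energy unit, in joules
T0 = (R0 ** 3 / (G_SI * M0)) ** 0.5   # the time unit implied by G = M0 = R0 = 1

def to_SI_energy(v_code):
    # a dimensionless simulation result v corresponds to v * V0 joules
    return v_code * V0
```

The same pattern gives the time unit: one code-time in this example is $T_0=\sqrt{R_0^3/(G M_0)}$, on the order of $10^7$ years for these scales.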
{ "domain": "physics.stackexchange", "id": 20270, "tags": "homework-and-exercises, newtonian-gravity, simulations, units, dimensional-analysis" }
Why are hash map look-ups assumed to be $O(1)$ on average
Question: To look up a key in a hash map you have to calculate its hash find the entry in the resulting hash bucket Hash calculation takes at least $O(l)$ operations when the hashes are $l$-bit-numbers. When using an index (like a binary tree) for each bucket, finding an entry within a bucket that contains $k$ entries can be done in $O(\log k)$. With $n$ being the total number of entries in the hash map and $m$ being the number of buckets, $k$ averages to $n/m$. Due to $m=2^l$ we thus get $O(\log k) = O(\log n/m) = O(\log n - \log m) = O(\log n - l)$. Combining these two runtimes one gets a total look-up time of $O(l + \log n - l) = O(\log n)$, which conforms to the intuition that a lookup in a collection with $n$ entries is not possible below $O(\log n)$ operations. In short, it is generally assumed that $l$ and $k$ are both constant with regard to $n$. But if you fix $l$ then $k$ grows with $n$. Am I missing something here? Answer: Because we generally use the RAM model of computation with uniform cost model when computing the running time of operations on a hash table, and the RAM model with uniform cost states that the time to do a single operation on an entire machine word is $O(1)$. Also, we generally assume that the hash value fits within a single machine word. Thus, the running time of computing a hash value is not $O(l)$, but rather $O(1)$ [assuming both the value being hashed and the hash value fit within one word, or a constant number of words]. Moreover, when you choose the hash function and size of the hashtable appropriately, the expected value of $k$ is $O(1)$. In particular, the number of buckets is not fixed, but increases as the number of items in the hashtable grows. The number of buckets it is usually chosen to be some function of $n$; say $m = 4n$, or something like that. In any case, we usually choose $m$ so that $n/m = O(1)$. Therefore, the (expected) running time to find an item within the bucket is $O(1)$. 
Therefore, the total (expected) running time is $O(1)+O(1) = O(1)$. So why do we use the RAM model with uniform cost model? Because it's often a better match to reality than other alternatives.
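The sizing argument in this answer can be sketched in code. The following is a hypothetical chained hash table (not taken from either post) that doubles its bucket count whenever the load $n/m$ would exceed 1, which is what keeps the expected bucket size, and hence the average look-up, $O(1)$:

```python
# A minimal chained hash table: the bucket count m grows with n so that the
# expected bucket size n/m stays O(1). hash(key) is treated as a single
# machine-word operation, i.e. O(1) in the uniform-cost RAM model.
class HashTable:
    def __init__(self):
        self._buckets = [[] for _ in range(8)]
        self._size = 0

    def _bucket(self, key):
        return self._buckets[hash(key) % len(self._buckets)]

    def insert(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)
                return
        bucket.append((key, value))
        self._size += 1
        # Keep n/m <= 1 by doubling the bucket count when the table fills up.
        if self._size > len(self._buckets):
            self._resize(2 * len(self._buckets))

    def _resize(self, m):
        old = self._buckets
        self._buckets = [[] for _ in range(m)]
        for bucket in old:
            for k, v in bucket:
                self._buckets[hash(k) % m].append((k, v))

    def lookup(self, key):
        for k, v in self._bucket(key):
            if k == key:
                return v
        raise KeyError(key)
```

This mirrors the answer's choice of $m$ as a function of $n$; real implementations differ in growth factor and collision strategy.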
{ "domain": "cs.stackexchange", "id": 7113, "tags": "algorithm-analysis, data-structures, runtime-analysis, hash-tables" }
Finding Stagnation Points from the complex potential
Question: I am trying to find the stagnation point of a fluid flow from a complex potential. The complex potential is given by $$\Omega(z) = Uz + \cfrac{m}{2\pi}\ln z.$$ From this I found the streamfunction to be $\psi=Ur\sin\theta + \cfrac{m}{2\pi}\theta$ and the velocity potential to be $\phi=Ur\cos\theta + \cfrac{m}{2\pi}\ln r$. I think the stagnation points occur when $u=v=0$, where $u = \cfrac{\partial \phi}{\partial x}$ and $v = \cfrac{\partial \psi}{\partial y}$. If so, would I have to convert back into Cartesian coords? Any help appreciated! Answer: You are mostly correct (except that $v$ is actually $\frac{\partial \phi}{\partial y}$). However, it is easiest to deal with $\Omega(z)$ directly. Since the velocity components are $u=\cfrac{\partial \phi}{\partial x}=\cfrac{\partial \psi}{\partial y}$ and $v=\cfrac{\partial \phi}{\partial y}=-\cfrac{\partial \psi}{\partial x}$, a stagnation point with zero velocity needs both to vanish. You can translate this back to the complex derivative of $\Omega$ as $$\frac{d}{dz}\Omega=\frac{\partial \phi}{\partial x}+i\frac{\partial \psi}{\partial x}=\frac{\partial \psi}{\partial y}-i\frac{\partial \phi}{\partial y}=0.$$ This means that you can work directly in (complex) cartesian coordinates to find the stagnation point easily: $$ 0=\frac{d\Omega}{dz}=U+\frac m{2\pi}\frac{1}{z},\quad\text{so}\quad z=x+iy=-\frac{m}{2\pi U}+0i. $$ Easy!
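As a quick numerical sanity check (with made-up values for $U$ and $m$), one can evaluate $d\Omega/dz = U + m/(2\pi z)$ at the root of that equation and confirm it vanishes:

```python
import cmath

# Hypothetical values for the free-stream speed U and source strength m.
U, m = 2.0, 3.0

def dOmega_dz(z):
    # dΩ/dz for Ω(z) = U z + (m / 2π) ln z
    return U + m / (2 * cmath.pi * z)

# Solving U + m/(2π z) = 0 for z gives the stagnation point:
z_stag = -m / (2 * cmath.pi * U)
print(z_stag)  # a point on the negative real axis
```

The derivative evaluates to zero (up to floating-point round-off) at `z_stag`, and nowhere else on the real axis.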
{ "domain": "physics.stackexchange", "id": 17642, "tags": "homework-and-exercises, fluid-dynamics, complex-numbers" }
Which combinations of pre-, post- and in-order sequentialisation are unique?
Question: We know post-order, post L(x) => [x] post N(x,l,r) => (post l) ++ (post r) ++ [x] and pre-order pre L(x) => [x] pre N(x,l,r) => [x] ++ (pre l) ++ (pre r) and in-order traversal resp. sequentialisation. in L(x) => [x] in N(x,l,r) => (in l) ++ [x] ++ (in r) One can easily see that neither describes a given tree uniquely, even if we assume pairwise distinct keys/labels. Which combinations of the three can be used to that end and which can not? Positive answers should include an (efficient) algorithm to reconstruct the tree and a proof (idea) why it is correct. Negative answers should provide counter examples, i.e. different trees that have the same representation. Answer: First, I'll assume that all elements are distinct. No amount of sequentialisations is going to tell you the shape of a tree with elements [3,3,3,3,3]. It is possible to reconstruct some trees with duplicate elements, of course; I don't know what nice sufficient conditions exist. Continuing on the negative results, you can't fully rebuild a binary tree from its pre-order and post-order sequentializations alone. [1,2] preorder, [2,1] post-order has to have 1 at the root, but 2 can be either the left child or the right child. If you don't care about this ambiguity, you can reconstruct the tree with the following algorithm: Let $[x_1,\dots,x_n]$ be the pre-order traversal and $[y_n,\ldots,y_1]$ be the post-order traversal. We must have $x_1=y_1$, and this is the root of the tree. $x_2$ is the leftmost child of the root, and $y_2$ is the rightmost child. If $x_2 = y_2$, the root node is unary; recurse over $[x_2,\ldots,x_n]$ and $[y_n,\ldots,y_2]$ to build the single subtree. Otherwise, let $i$ and $j$ be the indices such that $x_2 = y_i$ and $y_2 = x_j$. $[x_2,\ldots,x_{j-1}]$ is the pre-order traversal of the left subtree, $[x_j,\ldots,x_n]$ that of the right subtree, and similarly for the post-order traversals. 
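The three definitions in the question translate directly into code. The sketch below (an illustrative encoding, using tuples `('L', x)` for leaves and `('N', x, l, r)` for inner nodes) also exhibits two distinct trees with distinct keys that share a pre-order sequence, showing that one sequentialisation alone is not enough:

```python
# Direct transcription of the three sequentialisations.
def post(t):
    return [t[1]] if t[0] == 'L' else post(t[2]) + post(t[3]) + [t[1]]

def pre(t):
    return [t[1]] if t[0] == 'L' else [t[1]] + pre(t[2]) + pre(t[3])

def inorder(t):
    return [t[1]] if t[0] == 'L' else inorder(t[2]) + [t[1]] + inorder(t[3])

# Two different trees with the same pre-order [1, 2, 3, 4, 5]:
t1 = ('N', 1, ('N', 2, ('L', 3), ('L', 4)), ('L', 5))
t2 = ('N', 1, ('L', 2), ('N', 3, ('L', 4), ('L', 5)))
# pre(t1) == pre(t2) == [1, 2, 3, 4, 5], yet their in-order
# sequences differ, so in-order can tell them apart.
```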
The left subtree has $j-2=n-i+1$ elements, and the right subtree has $i-2=n-j+1$ elements. Recurse once for each subtree. By the way, this method generalizes to trees with arbitrary branching. With arbitrary branching, find out the extent of the left subtree and cut off its $j-2$ elements from both lists, then repeat to cut off the second subtree from the left, and so on. As stated, the running time is $O(n^2)$ with $\Theta(n^2)$ worst case (in the case with two children, we search each list linearly). You can turn that into $O(n\,\mathrm{lg}(n))$ if you preprocess the lists to build an $n\,\mathrm{lg}(n)$ finite map structure from element values to positions in the input lists. Also use an array or finite map to go from indices to values; stick to global indices, so that recursive calls will receive the whole maps and take a range as argument to know what to act on. With the pre-order traversal $[x_1,\ldots,x_n]$ and the in-order traversal $[z_1,\ldots,z_n]$, you can rebuild the tree as follows: The root is the head of the pre-order traversal $x_1$. Let $k$ be the index such that $z_k = x_1$. Then $[z_1,\ldots,z_{k-1}]$ is the in-order traversal of the left child and $[z_{k+1},\ldots,z_n]$ is the in-order traversal of the right child. Going by the number of elements, $[x_2,\ldots,x_k]$ is the pre-order traversal of the left child and $[x_{k+1},\ldots,x_n]$ that of the right child. Recurse to build the left and right subtrees. Again, this algorithm is $O(n^2)$ as stated, and can be performed in $O(n\,\mathrm{lg}(n))$ if the list is preprocessed into a finite map from values to positions. Post-order plus in-order is of course symmetric.
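A sketch of the pre-order + in-order reconstruction just described, here for binary trees with possibly-empty children rather than the answer's leaf-labelled trees, and assuming distinct elements. As stated it is $O(n^2)$ because of the linear `index` search; an index map would bring it down as noted above:

```python
# Rebuild a binary tree (value, left, right) from its pre-order and
# in-order traversals; elements are assumed to be pairwise distinct.
def rebuild(preorder, inorder):
    if not preorder:
        return None
    root = preorder[0]
    k = inorder.index(root)                      # split point, O(n) search
    left = rebuild(preorder[1:1 + k], inorder[:k])
    right = rebuild(preorder[1 + k:], inorder[k + 1:])
    return (root, left, right)

# Traversals, to verify the reconstruction round-trips.
def pre_of(t):
    return [] if t is None else [t[0]] + pre_of(t[1]) + pre_of(t[2])

def in_of(t):
    return [] if t is None else in_of(t[1]) + [t[0]] + in_of(t[2])
```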
{ "domain": "cs.stackexchange", "id": 47, "tags": "algorithms, binary-trees" }
Understanding velocity gradients in fluids
Question: So I'm having trouble understanding velocity gradients conceptually. I have little physics training past physics 101 (I'm a biologist), but I'm currently working in an endothelium research lab with a lot of fluid physics. I came up with an example in my kitchen and I tried to work it out based on youtube videos of fluid mechanics I was watching. If you have a tall glass and you fill it with water, then spin it around its long axis (so the bottom doesn't move but it spins like a disc), it seems like the water inside doesn't move as quickly as you spin the glass. I'm guessing based on "no-slip" the water touching the glass is moving at the speed of the glass it touches... the water in the center of the column probably moves the least. If you stop spinning the glass the water continues to move (inertia) but it slowly stops (not sure why)... So my question is this: where is the water moving fastest at the moment you stop spinning the cup/column? Originally the velocity in the center was the lowest and the velocity on the glass was highest, but then wouldn't the glass be the source of friction and stop it? So is the velocity initially fastest on the periphery, but then the layer with the fastest velocity moves away toward the center of the cup/column? Answer: Just to add my hypothesis: "So my question is this: where is the water moving fastest at the moment you stop spinning the cup/column... originally the velocity in the center was the lowest and the velocity on the glass was highest, but then wouldn't the glass be the source of friction and stop it?" "However if you were able to stop the cup instantly, theoretically the highest velocity will be located infinitesimally close to the wall." 
The only thing not explained is that the total net force acting on the 'spinning' water is not only the action/reaction of water with the glass (kinematical friction) but also the action of spinning water (with a relatively lower angular velocity) on the spinning cross-sectional segment of the water with the highest velocity, et cetera -- which is actually a natural quantity called the velocity gradient. However, even though the velocity gradient may be challenging to measure, recalling molecular interactions within a Newtonian fluid may give an excellent visual when discussing fluid viscosity.
{ "domain": "physics.stackexchange", "id": 40973, "tags": "fluid-dynamics" }
Why isn't converting from an NFA to a DFA working?
Question: I am just beginning to learn computation theory. I wrote up a non-deterministic finite automaton that accepts strings that contain the substring "abba": I tried to convert it to a DFA by putting together sets of states in the NFA to be states of the DFA: However, I just realized that my DFA doesn't accept strings such as "abbaa" that do not end in "abba." That means that my methodology was wrong. Why? I thought it would make sense to combine states of the NFA to make states of the DFA. Answer: Your methodology for creating the DFA from the NFA is fine. The problem is that you started with the wrong NFA (you'll notice that your NFA doesn't accept abbaa either). Try this one instead:
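The original diagrams are not shown here, but the combination the answer describes can be sketched in code: below is one possible corrected NFA for "contains the substring abba" (state 0 loops on both letters and nondeterministically starts a match; state 4 is accepting and loops on both letters), together with the subset construction the asker was applying:

```python
# Transition relation of a (hypothetical) NFA for "contains abba".
NFA = {
    (0, 'a'): {0, 1}, (0, 'b'): {0},
    (1, 'b'): {2},
    (2, 'b'): {3},
    (3, 'a'): {4},
    (4, 'a'): {4}, (4, 'b'): {4},
}

def to_dfa(start=(0,), accept=4, alphabet='ab'):
    # Subset construction: each DFA state is a frozenset of NFA states.
    start = frozenset(start)
    states, delta, todo = {start}, {}, [start]
    while todo:
        S = todo.pop()
        for c in alphabet:
            T = frozenset(q for s in S for q in NFA.get((s, c), ()))
            delta[(S, c)] = T
            if T not in states:
                states.add(T)
                todo.append(T)
    accepting = {S for S in states if accept in S}
    return start, delta, accepting

def accepts(word):
    start, delta, accepting = to_dfa()
    S = start
    for c in word:
        S = delta[(S, c)]
    return S in accepting
```

With this NFA, the resulting DFA keeps accepting after the substring has been seen, so strings like "abbaa" are accepted as expected.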
{ "domain": "cs.stackexchange", "id": 476, "tags": "automata, finite-automata" }
Ward Identity and Proca Fields
Question: I'm following the book Quantum Field Theory and the Standard Model by Schwartz and I came to the rigorous non-perturbative proof of the Ward identity with path integrals via the Schwinger-Dyson equations in subsections 14.8.1-3. Since it is clear to me that the proof of the Ward-Takahashi identity is the "quantum version" of the Noether trick, I don't understand the passage from the Ward-Takahashi identity to the "standard" Ward identity. The latter can be thought of as a direct consequence of the photons being massless/without longitudinal polarization, but the proof followed by Schwartz does not seem to exclude the case of a massive vector boson. However this one breaks gauge invariance with its mass (or again equivalently it admits a longitudinal d.o.f.) so intuitively it would have no meaning that $p_\mu M^\mu=0$ for a generic amplitude $M^\mu$. Where am I wrong? Answer: I'm just transferring some of what I wrote in the comments to an answer -- I may add more to this later. There is no Ward identity for a massive spin-1 field; the massive and massless cases work differently. For a massive photon, there exists a rest frame, so $p_\mu$ is timelike (on shell), so the fact that on shell $p_\mu A^\mu=0$ for a massive photon means that a timelike component is removed from external states. For internal lines, the numerator of the propagator is $\eta_{\mu\nu} + p_\mu p_\nu / m^2$; if you contract this with $p^\mu$ you get $p_\nu - (m^2/m^2) p_\nu=0$; this property follows from the second class constraint and is responsible for removing the time-like mode. For a massless photon, there is no rest frame, and so on shell $p_\mu$ is null. Furthermore we need to remove two components from $A_\mu$ since there are only two polarizations. The Ward identity guarantees that the unphysical polarizations decouple from the other dofs, and so are never excited (so long as they are not present in external states). 
Another way to think of all this is in terms of first class and second class constraints (google "Dirac-Bergmann quantization"). A first class constraint (gauge symmetry) removes two degrees of freedom, while a second class constraint (just a normal constraint) removes one. Massless electromagnetism has a first class constraint, Proca theory (without the Stuckelberg trick) has a second class constraint. The story is different if you use the Stuckelberg trick in the massive case; then you introduce a new field, so naively you have 5 degrees of freedom (4 components of the vector field plus a scalar field). You also get a new gauge symmetry, with an associated first class constraint. The first class constraint removes two degrees of freedom, and 5-2=3, which is the correct number of degrees of freedom for a massive spin-1 particle.
{ "domain": "physics.stackexchange", "id": 80037, "tags": "quantum-field-theory, gauge-theory, quantum-electrodynamics, gauge-invariance, ward-identity" }
How much energy needed to liquify H?
Question: I want to know how many pounds (or the correct measure) an air compressor would need to liquefy hydrogen. As its boiling point is somewhat close to -250°C, I want to know how to calculate the pressure needed to liquefy it, and how many "men" pushing a 20-meter lever compressing a series of one-directional-flux valves would be required to achieve such mechanical power. Or how many "gasoline regular sedan car engines" would be required for it. Note: I'm not referring to H2 because I'm referring to hydrogen itself; I say that because my question has been edited to say "$H_2$" in the title, which I believe is something like deuterium, and that is not what I mean. Answer: You'll probably be wanting to use the equation $$pV = nRT$$ where $p$ is the pressure, $V$ is the volume, $n$ is the number of moles, $R$ is the molar gas constant $(8.31)$ and $T$ is the temperature in Kelvin. You can easily rearrange this equation to get $$p = \frac{nRT}{V}$$ I'm a bit confused as to why you've brought in fusion; however, hydrogen has a boiling point of around 20.25 Kelvin. If you were to use $1$ mole of hydrogen and pressurise say a volume of 1 $m^3$ then the pressure would need to be approximately $168$ Pa or about $0.00166$ atm (quite small)!
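The answer's estimate is a one-liner with the ideal gas law. A quick sketch reproducing the numbers (with the caveat, implicit in the answer, that real hydrogen near its boiling point is far from ideal, and liquefaction needs cooling rather than just pressure):

```python
# Ideal gas law: p = nRT / V
R = 8.31    # molar gas constant, J/(mol K), as used in the answer
n = 1.0     # moles of hydrogen
T = 20.25   # boiling point of H2, in kelvin
V = 1.0     # volume, m^3

p = n * R * T / V
print(p)             # ≈ 168 Pa
print(p / 101325)    # ≈ 0.00166 atm
```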
{ "domain": "physics.stackexchange", "id": 42582, "tags": "homework-and-exercises, thermodynamics, temperature, gas, cryogenics" }
How is controlled constant addition implemented for binary polynomials?
Question: So, currently I am going through the paper Concrete Quantum Cryptanalysis of Binary Elliptic Curves. The section on point addition mentions that for adding two points $P_1$ and $P_2$, they assume that $P_2$ is a fixed (non-quantum) point. Moreover, they assume a generic case where $P_1 \ne P_2 \ne O$, $P_1 \ne -P_2$. On studying another paper, I found that these are mostly valid assumptions to make. My issue is with the const_ADD function they mention. I can see that the function basically adds a quantum and a fixed (non-quantum) polynomial, but there are no implementation details for it. In addition, while performing the ctrl_const_ADD on line $(2)$, they multiply a qubit $q$ with a fixed polynomial $y_2$. There are no hints given as to how that can be implemented either. If anyone has an idea of how to implement these in practice, please do guide me. Thanks! Answer: This kind of thing would typically fall under the "we know we can do it, but we won't go into the actual implementation" practice. Adding a known, constant polynomial to another one is something that can be described classically. Once that's done, we convert this procedure into a quantum circuit and voilà. Spelling out the implementation requires describing the way our data is encoded, the algorithm we use, etc... Fortunately here, if I'm not mistaken, const_ADD is quite a simple operation. A binary polynomial is represented as a bitstring, and an addition between two of these polynomials is simply an XOR between their respective bitstrings. Thus, implementing const_ADD(x, x_2) is simply done by applying $X$ gates on qubit number $i$ if the corresponding bit in $x_2$'s bitstring is set. The multiplication would be a bit more involved, since there are (IIRC) more efficient algorithms to do so than the naive method. However, you could still do just like for the addition: translate this to an operation on the bitstrings and apply the corresponding gates.
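The gate layout the answer describes can be sketched classically, since these circuits only permute basis states. In the hypothetical model below, a binary polynomial is an integer bitstring, const_ADD emits one $X$ gate per set bit of the constant, and the controlled version turns each $X$ into a CNOT from the control qubit $q$:

```python
def const_add_gates(c, n_bits):
    """Gate list for x ^= c: one X gate per set bit of the constant c."""
    return [('X', i) for i in range(n_bits) if (c >> i) & 1]

def ctrl_const_add_gates(c, n_bits, ctrl):
    """Controlled version: each X becomes a CNOT from the control qubit."""
    return [('CX', ctrl, i) for i in range(n_bits) if (c >> i) & 1]

def run(gates, x, q=1):
    """Apply the gates to a classical basis state x; q is the control value."""
    for g in gates:
        if g[0] == 'X':
            x ^= 1 << g[1]
        elif g[0] == 'CX' and q == 1:
            x ^= 1 << g[2]
    return x
```

With the control set, the register ends up holding $x \oplus c$ (i.e. $q \cdot y_2$ added over GF(2)); with the control clear, it is untouched, which is exactly the "multiply a qubit $q$ with a fixed polynomial" behaviour.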
{ "domain": "quantumcomputing.stackexchange", "id": 5399, "tags": "programming, quantum-algorithms, cryptography" }
Why doesn't the Weinberg-Witten theorem forbid collinear photons?
Question: The Weinberg-Witten theorem tells us that any theory that has an effective graviton, i.e. a massless helicity-2 particle as a state in the free-particle Fock space, cannot have a gauge-invariant and Lorentz-covariant stress-energy tensor that gives the graviton nonzero energy. This is intended as a no-go theorem ruling out composite gravitons, because if the theory can be expressed using only particles of spin $\le 1$ then it presumably will have such a tensor. A composite graviton would be a bound state of lower-spin particles such as gauge bosons, with the sum of their spins in the direction of propagation equal to 2. My question is: why does the state need to be bound? Why are we only interested in states that can be called "particles"? QED, for instance, includes massless states of helicity 2: states with 2 photons that just happen to have the same direction and spin. They are not single particles, but they are part of the Hilbert space, and matrix elements exist for them. The argument of Weinberg-Witten would seem to apply. Yet QED does have a covariant stress-energy tensor, and collinear two-photon states do have nonzero energy. Why isn't this forbidden? I think I have a partial answer: the proof of the WW theorem derives a contradiction by writing down the tensor (at the origin) as an operator on the gauge-fixed Fock space, taking its matrix elements between single-graviton states of different momenta, and taking a limit as the momenta approach coincidence while we shift the Lorentz frame to make the momenta equal and opposite. Thus it's not enough for the graviton to have nonzero expected energy; it must have nonzero matrix elements even between states of unequal momenta. Since we are talking about an operator at a point, this seems like a reasonable assumption. 
Yet it seems to me that this is where my "collinear photon" case falls out: We can (I think) write the electromagnetic stress energy tensor as a sum of terms of the form $a^\dagger_k a_{k'}$, meaning that we only get a nonzero matrix element when at most one photon has different momenta between the two states. Since we want states with different directions for the momenta of the collinear particle pairs, we get zero and cannot derive a contradiction. Is this correct? Or am I perhaps confusing myself by thinking about pure momentum states rather than normalizable wavepackets? This was done in the original proof, but perhaps it introduces problems with more than one particle? Of course, the really important question is: what changes when the state is bound; i.e. an actual composite graviton? Is it possible that we have a new loophole for the WW theorem, where we can have composite gravitons as long as we somehow force the stress-energy tensor to be diagonal in the momentum? Answer: Okay, I think I am satisfied that the "partial answer" I included in the question is the correct answer. The proof of the WW theorem involves matrix elements of the form $$\langle p|T^{\mu\nu}(x)|p'\rangle, $$ where $|p\rangle$ is a momentum eigenstate of the spin-2 particle and $p,p'$ are two nearly equal null momenta. The proof relies on this matrix element being nonzero, while it does equal zero for my case of two collinear photons. The operator $T^{\mu\nu}$ cannot change the momenta of two different photons, because it is only quadratic in the photon field $A^\mu$. Therefore the proof does not apply, and such states can of course exist. The reason I was dissatisfied with the answer, and called it only "partial", was that I didn't understand why we should be so confident that the above matrix element does not vanish when $|p\rangle$ is a single-particle state such as an "emergent graviton" that is a bound state of other fields. 
If it can vanish when the underlying particles are not bound, then how do we know that creating a bound state will change things? Now I think I get it: the condition $\langle p|T^{\mu\nu}(x)|p'\rangle\ne0$ is basically an intuitive physical requirement. It means that if (hypothetically) there was an interaction of the form $T^{\mu\nu}h_{\mu\nu}$ (and of course there is such an interaction, namely gravity, but we do not require this), then a background field $h_{\mu\nu}$ with a small gradient could cause slight changes in the momentum of our particle, making it move along a curved trajectory. This can be taken as a reasonable definition of what is meant by the particle being "charged under $T^{\mu\nu}$", and this is what is required for the WW theorem to apply. This behavior is also pretty close to what we mean by a state being "bound", namely that the different components maintain a shared trajectory when "pushed around" by mild forces. But it does not describe particles that just happen to be collinear; in that case the force will tend to spread the trajectories slightly apart.
{ "domain": "physics.stackexchange", "id": 88479, "tags": "photons, quantum-gravity, matrix-elements" }
Velocity of interaction between particles in classical mechanics
Question: We always hear in classical mechanics that the interaction between particles happens instantaneously, and I think this assumption is obvious just by seeing Newton's third law. But I was wondering, is it possible to show that the velocity of this interaction is infinitely fast, just assuming the force is dependent only on the position of the particles (my professor said it is, but I can't see how to do it). I couldn't find about it anywhere so, if anyone could help, I'd appreciate. Thank you. Answer: When people say the force acting on particle 1 at some definite time depends only on positions of the other particles at the same time, this actually means there is no propagation of the interaction. The interaction is just everywhere and reflects current state of particles in the whole world. The concept of speed of interaction is superfluous. "Infinite speed" of interaction is just a figure of speech that refers to this kind of theory.
{ "domain": "physics.stackexchange", "id": 32954, "tags": "classical-mechanics, velocity" }
General way to add persistence to a class in Python
Question: The idea here is to write a function that gives you back a persistent version of a class that you supply. So if you run

PersistentList = make_persistent(list, "PersistentList", ['append', 'extend', 'insert', 'pop', 'remove', 'reverse', 'sort'])

You will get back a class that behaves like a list but automatically persists itself after each operation. So given this test code:

import time

def get_hhmmss():
    return time.strftime('%H:%M:%S', time.localtime())

def test_list():
    PersistentList = make_persistent(list, "PersistentList", get_mutators(list, tuple))
    filepath = os.path.expanduser("~/Desktop/test_list")
    with contextlib.closing(PersistentList(filepath)) as pl:
        pl.append(get_hhmmss())
        pl += ['foo']
        pl += ['bar']
        print(pl)

def test_set():
    PersistentSet = make_persistent(set, "PersistentSet", get_mutators(set, frozenset))
    filepath = os.path.expanduser("~/Desktop/test_set")
    with contextlib.closing(PersistentSet(filepath)) as ps:
        ps.add(get_hhmmss())
        ps.add('spam')
        print(sorted(ps))

You can get this output:

>>> test_set()
['21:38:27', 'spam']
>>> test_set()
['21:38:27', '21:38:31', 'spam']
>>> test_set()
['21:38:27', '21:38:31', '21:38:34', 'spam']
>>> test_list()
['21:38:39', 'foo', 'bar']
>>> test_list()
['21:38:39', 'foo', 'bar', '21:38:43', 'foo', 'bar']
>>> test_list()
['21:38:39', 'foo', 'bar', '21:38:43', 'foo', 'bar', '21:38:47', 'foo', 'bar']

Here's the code that accomplishes it. I ask in part because I'm tempted to try to share this but assume that if it were a reasonable way to do things it would be out there by now.

import pickle, errno, os, functools, contextlib

def touch_new(filepath):
    "Will fail if filepath already exists, or if relevant directories don't already exist"
    os.close(os.open(filepath, os.O_WRONLY | os.O_CREAT | os.O_EXCL))

def get_mutators(mutable_class, frozen_class):
    """Convenience function for identifying mutators. Methods such as __iadd__ won't
    wrap properly so this function throws them out; you'll need to call `_save` or
    `close` to sync after such operations"""
    def qualifying_methodname(methodname):
        return not (methodname.startswith("__") and methodname.endswith("__"))
    return sorted(mn for mn in set(dir(mutable_class)) - set(dir(frozen_class)) if qualifying_methodname(mn))

# Inspiration: http://stackoverflow.com/a/9449852/2829764
def make_persistent(original_class, new_classname, mutator_methodnames):
    class NewClass(original_class):
        def __init__(self, filepath):
            self._filepath = filepath
            self._closed = False
            try:
                with open(filepath, "rb") as ifile:
                    loaded = pickle.load(ifile)
                if type(loaded) != original_class:  # Don't even allow subclasses
                    raise TypeError("{} exists but does not contain a {}".format(filepath, original_class))
                original_class.__init__(self, loaded)
            except IOError as ioe:
                if ioe.errno != errno.ENOENT:
                    raise
                touch_new(filepath)
                original_class.__init__(self)
            for attr in mutator_methodnames:
                setattr(self, attr, self._autosave(getattr(self, attr)))

        def _autosave(self, func):
            @functools.wraps(func)
            def _func(*args, **kwargs):
                if self._closed:
                    raise ValueError("Invalid operation on closed " + self.__class__)
                ret = func(*args, **kwargs)
                self._save()
                return ret
            return _func

        def _save(self):
            pickle.dump(original_class(self), open(self._filepath, "wb"))

        def close(self):
            if not self._closed:
                self._save()
                self._closed = True

    NewClass.__name__ = new_classname
    return NewClass

Answer: No time to do a proper review, but some quick notes:

- There are no docstrings! How am I supposed to use your code?
- Doesn't follow the Python style guide (PEP8): in particular, line lengths are too long, so we have to scroll horizontally to read it here.
- You should use super to call superclass methods, otherwise your code won't play nicely with multiple inheritance.
- This relies on original_class.__init__ taking one optional argument which is copied to self. This seems rather inflexible: it works for some built-in constructors like list but it is not very general. It would be better to leave __init__ alone and use a factory function to do the construction.
- It seems clumsy to have to specify the names of all the methods that you need. Python has the tools (dir, __getattribute__, etc.) to avoid this.
- Copying attributes from one object to another works for methods but not for properties.
- It would be better to actually use the context manager protocol (__enter__ and __exit__) and not contextlib.closing, which requires the object to have a close method which might conflict with original_class.close.
- Consider using three-argument type instead of making a class with the wrong name and then rewriting its __name__ attribute.
- At the moment you write the object to disk after every change, which is costly. Better to postpone this until the object is finished with. Consider saving the object via the __del__ method.
- Test cases should be organized into unit tests so they are runnable via the unittest module.
- Consider catching FileNotFoundError instead of catching IOError and then re-raising if the error code is not ENOENT.
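To illustrate a few of the review's suggestions combined (the context manager protocol, FileNotFoundError, `super()`, and saving once at the end rather than after every mutation), here is a minimal sketch; the class name and file layout are illustrative, not the original poster's API:

```python
import pickle

class PersistentList(list):
    """A list that loads itself from a pickle file and saves itself on exit."""

    def __init__(self, filepath):
        self._filepath = filepath
        try:
            with open(filepath, "rb") as f:
                super().__init__(pickle.load(f))
        except FileNotFoundError:
            super().__init__()

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        # Persist once, on exit, instead of after every mutation.
        # Dump a plain list so the pickle does not depend on this class.
        with open(self._filepath, "wb") as f:
            pickle.dump(list(self), f)
```

Usage mirrors the question's tests but without contextlib.closing:

```python
with PersistentList(path) as pl:
    pl.append('spam')
```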
{ "domain": "codereview.stackexchange", "id": 8511, "tags": "python, meta-programming" }
Cannot find my own packages
Question: Dear ROS users, I don't have any experience with ROS but I want to learn to use it. I just installed it in Ubuntu 12.04 (fuerte), followed some steps in the tutorials, but I got stuck. In "Creating a ROS Package (rosbuild)" it says to create a new package: $ roscreate-pkg beginner_tutorials std_msgs rospy roscpp and check if ROS can find it. $ rospack find beginner_tutorials It can't. And I don't understand this: "If this fails, it means ROS can't find your new package, which may be an issue with your ROS_PACKAGE_PATH. Please consult the installation instructions for setup from SVN or from binaries, depending how you installed ROS. If you've created or added a package that's outside of the existing package paths, you will need to amend your ROS_PACKAGE_PATH environment variable to include that new location. Try re-sourcing your setup.sh in your fuerte_workspace." First I didn't have a ROS_WORKSPACE. So I did some research and used "export" to set it. Now root@to-vb:/opt/ros/fuerte_workspace# is my workspace. And it's where I created my new package (not in $ cd ~/fuerte_workspace/sandbox as it's said). Second, if it's an issue with ROS_PACKAGE_PATH, of course I created a package that's outside of the existing package paths, and $ cd ~/fuerte_workspace/sandbox is out of it! Then how do I amend it? My Ubuntu is in a virtual machine. I deleted it and created a new one to install ROS again and it is still the same. So, I concluded that these tutorial steps are not coherent with my system. Could anyone help me, please? Nat0ne Originally posted by Nat0ne on ROS Answers with karma: 1 on 2012-11-16 Post score: 0 Answer: First off, I would suggest setting your workspace somewhere in ~/, and not in /opt. It is preferable to work in a directory where you only need standard user permissions. Here is an example of how to set your workspace to ~/ros, and also make sure the workspace is part of the package path. I have this appended at the end of my ~/.bashrc file. 
ROS_WORKSPACE=$HOME/ros
export ROS_WORKSPACE
ROS_PACKAGE_PATH=$ROS_WORKSPACE:$ROS_PACKAGE_PATH
export ROS_PACKAGE_PATH

If you now create your packages in ~/ros, everything should work fine. Originally posted by Ivan Dryanovski with karma: 4954 on 2012-11-16 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Lorenz on 2012-11-18: Manually managing ROS_PACKAGE_PATH is not recommended. The best way is to use rosws. The relevant wiki page can be found here.
{ "domain": "robotics.stackexchange", "id": 11776, "tags": "ros" }
1D bound state for a real potential
Question: The prof says: "for 1-dimensional bound states with a real potential, the wave function is real, up to a phase". The proof goes like this: 1D bound states are never degenerate. So $\Psi_{real}$ and $\Psi_{imaginary}$ are linearly dependent. So $\Psi \equiv \Psi_{real} +i\Psi_{imaginary}=\Psi_{real} (1+ic)=\sqrt{1+c^2}\,e^{i\arg(1+ic)}\Psi_{real}$. Whatever the proof, I don't understand the statement since any complex number (the wavefunction is one complex number) is in some way real up to a phase. So I don't really understand what this theorem is trying to teach us. PS: I cannot ask the professor directly because I study from a video recorded 6 years ago Answer: No, the wavefunction $\psi(\vec{r})$ is not just 1 complex number: it is infinitely many complex numbers, 1 for each value of position $\vec{r}$. In contrast, the professor is making the non-trivial statement that there exists a global (i.e. $\vec{r}$-independent) complex constant $c$. For more details, see also this & this related Phys.SE posts.
{ "domain": "physics.stackexchange", "id": 88330, "tags": "quantum-mechanics, wavefunction, potential, schroedinger-equation, complex-numbers" }
Where is the catkin_ws binaries?
Question: Hi, is it possible to just use the binaries (on another computer) of the catkin workspace after it is compiled? If it is possible, how and where are these binaries? Thanks Originally posted by Robotics on ROS Answers with karma: 1 on 2017-12-09 Post score: 0 Answer: Hi @Robotics , when you run catkin_make, the generated files are normally in ~/catkin_ws/devel/ If you run catkin_make install, they are normally in ~/catkin_ws/install/. Cheers. Originally posted by Ruben Alves with karma: 1038 on 2017-12-09 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Robotics on 2017-12-10: thanks Ruben
{ "domain": "robotics.stackexchange", "id": 29559, "tags": "ros, binaries" }
IPC between 64-bit and 32-bit installations
Question: I have two different packages that I need to run in ROS (fuerte), but one works correctly in the 32-bit installation, while the other works only in the 64-bit installation. I do not have the skill set to force either package into the opposing version of ROS. So, can I run two computers, one with a 32-bit installation and the other with 64-bit, and communicate between them normally? In other words, can I set up a standard roscore on one machine and have the other simply feed data to it? I've been trying to test this with simulated data, and gotten some errors - but I'm not sure if it was because the difference in 32 vs 64 bit, or because I made some other mistake. And, unfortunately, I need to answer this question before buying the sensor, so I can't test it in hardware yet. Thank you in advance Originally posted by ebbeowulf on ROS Answers with karma: 15 on 2013-02-07 Post score: 0 Answer: Since ROS message exchange is based on TCP-sockets, it shouldn't be a problem mixing 32- and 64-bit computers. We are using two 64-bit clients with a 32-bit roscore ourselves, because we need a 32-bit only CANBus driver on our robot. But you should consider, that depending on your sensor, the network connection can become a bottleneck. So if your sensor requires some heavy bandwidth you may be forced to go the painful way and port one of your components. Originally posted by Ben_S with karma: 2510 on 2013-02-07 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 12798, "tags": "ros" }
ROS Qt creator GUI
Question: Hi everyone, I am a freshman on ROS and Qt. Sorry to post such a long question; I have been overwhelmed by trying to incorporate a Qt GUI into ROS for almost 4 days. I followed the earlier posts, post1 and post2, and their related links, and thanks to their answers I could use Qt Creator to compile the ROS packages. However, when I try to bind the Qt program to the ROS program, I cannot find a way to do it (mainly in the CMakeLists.txt). So here is my question: I want to build a simple interface to play with the turtle in the learning_joy package instead of using a joystick. According to the tutorials on the joystick, I could control the turtle with my joystick. So I used that CMakeLists.txt as a template to add some Qt Creator CMake inside. The original learning_joy CMakeLists.txt is: cmake_minimum_required(VERSION 2.4.6) include($ENV{ROS_ROOT}/core/rosbuild/rosbuild.cmake) #set(ROS_BUILD_TYPE RelWithDebInfo) rosbuild_init() #set the default path for built executables to the "bin" directory set(EXECUTABLE_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/bin) #set the default path for built libraries to the "lib" directory set(LIBRARY_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/lib) # Set the example from the Qt #uncomment if you have defined messages rosbuild_genmsg() #uncomment if you have defined services #rosbuild_gensrv() #common commands for building c++ executables and libraries #rosbuild_add_library(${PROJECT_NAME} src/example.cpp) #target_link_libraries(${PROJECT_NAME} another_library) #rosbuild_add_boost_directories() #rosbuild_link_boost(${PROJECT_NAME} thread) #rosbuild_add_executable(example examples/example.cpp) rosbuild_add_executable(turtle_teleop_joy src/teleop_turtle_joy.cpp) #target_link_libraries(example ${PROJECT_NAME}) By searching CMake and Qt examples, I found one example, "HelloWorldQt", which includes 5 files: HelloWidget.cpp, HelloWidget.h, MainWindow.cpp, MainWindow.h, and main.cpp. I also ran this Qt project in my Qt Creator; its CMakeLists.txt is: CMAKE_MINIMUM_REQUIRED(VERSION 2.6) 
PROJECT(HelloWorldQt) SET(CMAKE_BUILD_TYPE Release) SET(CMAKE_CXX_FLAGS "-Wall") # QT4 Handling FIND_PACKAGE(Qt4 REQUIRED) INCLUDE(${QT_USE_FILE}) SET( HWQ_Qt4_SRC src/MainWindow.h src/HelloWidget.h ) SET( HWQ_Qt4_UI ) SET( HWQ_Qt4_RES ) QT4_WRAP_CPP(HWQ_MOC_CPP ${HWQ_Qt4_SRC}) QT4_WRAP_UI(HWQ_UI_CPP ${HWQ_Qt4_UI}) QT4_ADD_RESOURCES(HWQ_RES_H ${HWQ_Qt4_RES}) INCLUDE_DIRECTORIES( . ) # General SET( HWQ_SRC src/main.cpp src/MainWindow.cpp src/HelloWidget.cpp ${HWQ_MOC_CPP} ${HWQ_UI_CPP} ${HWQ_RES_H} ) SET( HWQ_LIB ${QT_LIBRARIES} ) ADD_EXECUTABLE(HelloWorldQt ${HWQ_SRC} ) TARGET_LINK_LIBRARIES(HelloWorldQt ${HWQ_LIB} ) INSTALL_TARGETS( /bin HelloWorldQt) So I want to incorporate these two files as one, combine the main.cpp files, and add #include <QtGui/QApplication> #include "MainWindow.h" #include #include #include in teleop_turtle_joy.cpp. But no matter in what order I put these CMakeLists pieces, CMake runs, yet when I build, Qt Creator shows "error: undefined reference to QMainWindow::QMainWindow(QWidget*, QFlags<Qt::WindowType>); error: undefined reference to vtable for MainWindow" etc. The second kind of error is because moc files are missing, so my combined CMakeLists.txt does not produce moc files, but why? It looks like the combined CMakeLists.txt really does run on ROS, but it does not produce the same files as when I build HelloWorldQt alone, which produces the moc files for the two .cpp files. So can you point me in one or more directions? The combined CMakeLists.txt: cmake_minimum_required(VERSION 2.4.6) PROJECT(learning_joy) # Set the build type. 
Options are: # Coverage : w/ debug symbols, w/o optimization, w/ code-coverage # Debug : w/ debug symbols, w/o optimization # Release : w/o debug symbols, w/ optimization # RelWithDebInfo : w/ debug symbols, w/ optimization # MinSizeRel : w/o debug symbols, w/ optimization, stripped binaries #set(ROS_BUILD_TYPE RelWithDebInfo) # add from Qt SET(CMAKE_BUILD_TYPE Release) SET(CMAKE_CXX_FLAGS "-Wall") # Set the example from the Qt FIND_PACKAGE(Qt4 REQUIRED) INCLUDE(${QT_USE_FILE}) SET( HWQ_Qt4_SRC src/MainWindow.h src/HelloWidget.h ) MESSAGE(STATUS "step1.") SET( HWQ_Qt4_UI ) SET( HWQ_Qt4_RES ) QT4_WRAP_CPP(HWQ_MOC_CPP ${HWQ_Qt4_SRC}) QT4_WRAP_UI(HWQ_UI_CPP ${HWQ_Qt4_UI}) QT4_ADD_RESOURCES(HWQ_RES_H ${HWQ_Qt4_RES}) MESSAGE(STATUS "step2.") INCLUDE_DIRECTORIES( . ) SET( HWQ_SRC src/MainWindow.cpp src/HelloWidget.cpp src/teleop_turtle_joy.cpp ${HWQ_MOC_CPP} ${HWQ_UI_CPP} ${HWQ_RES_H} ) MESSAGE(STATUS "step3.") SET( HWQ_LIB ${QT_LIBRARIES} ) ADD_EXECUTABLE(turtle_teleop_joy1 ${HWQ_SRC} ) TARGET_LINK_LIBRARIES(turtle_teleop_joy1 ${HWQ_LIB} ) INSTALL_TARGETS( /bin turtle_teleop_joy1) include($ENV{ROS_ROOT}/core/rosbuild/rosbuild.cmake) rosbuild_init() MESSAGE(STATUS "step4.") #set the default path for built executables to the "bin" directory set(EXECUTABLE_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/bin) #set the default path for built libraries to the "lib" directory set(LIBRARY_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/lib) #uncomment if you have defined messages rosbuild_genmsg() #uncomment if you have defined services #rosbuild_gensrv() MESSAGE(STATUS "step5.") #common commands for building c++ executables and libraries #rosbuild_add_library(${PROJECT_NAME} src/example.cpp) #target_link_libraries(${PROJECT_NAME} another_library) #rosbuild_add_executable(example examples/example.cpp) rosbuild_add_executable(turtle_teleop_joy src/teleop_turtle_joy.cpp) # Qt add MESSAGE(STATUS "step6.") Sorry for putting up such a long question; I would really appreciate it if someone could help me with this Makefile problem. 
Thanks in advance!! Originally posted by tairen on ROS Answers with karma: 56 on 2012-01-27 Post score: 1 Answer: That looks quite chaotic. Actually it shouldn't be so hard. First: You need to start from the ROS side, i.e. create a normal non-qt ROS package first. Now you only need some minor tweaks to the CMakeLists.txt. After rosbuild_init() I usually put: find_package(Qt4 REQUIRED) # enable/disable some Qt features set( QT_USE_QTGUI TRUE ) set( QT_USE_QTOPENGL TRUE ) set( QT_USE_QTXML TRUE ) include(${QT_USE_FILE}) ADD_DEFINITIONS(-DQT_NO_KEYWORDS) Qt specific files might need to be moc'd. This includes the headers, so put those here: set(qt_srcs src/qtfile.cpp) set(qt_hdrs src/qtfile.h) qt4_automoc(${qt_srcs}) QT4_WRAP_CPP(qt_moc_srcs ${qt_hdrs}) Finally GUI files (.ui) might need to be processed: QT4_WRAP_UI(uis_h src/qtfile.ui) # include this for ui_h include_directories(${CMAKE_CURRENT_BINARY_DIR}) When you actually build the executable, make sure you include the files processed above: rosbuild_add_executable(qttest src/myMain.cpp ${uis_h} ${qt_srcs} ${qt_moc_srcs}) target_link_libraries(qttest ${QT_LIBRARIES}) Originally posted by dornhege with karma: 31395 on 2012-01-27 This answer was ACCEPTED on the original site Post score: 9 Original comments Comment by 2ROS0 on 2016-08-04: That "Actually it shouldn't be so hard" is so reassuring !
{ "domain": "robotics.stackexchange", "id": 8023, "tags": "ros, qtcreator" }
How to understand the claim that "positions" of entangled particles are correlated?
Question: Are both particles detected in the exact same spot of the detector? Something else? Answer: Suppose we know the total momentum of the entangled pair. For example they might have been created by the decay of a stationary particle in which case the total momentum is zero. That means the momenta of the two particles must be equal and opposite, which means (assuming equal masses) the velocities must be equal and opposite. The positions are just the integrals of the velocities, so the positions must be correlated i.e. if we measure the position of one of the particles we know where the other must be.
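A compact way to write the answer's argument (equal masses, pair created at rest at the origin; classical trajectories used here only for intuition):

```latex
p_1 + p_2 = 0 \;\Rightarrow\; v_1(t) = -v_2(t)
\;\Rightarrow\; x_1(t) = \int_0^t v_1(t')\,dt' = -\int_0^t v_2(t')\,dt' = -x_2(t)
```

so a measurement of $x_1$ immediately fixes $x_2 = -x_1$.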
{ "domain": "physics.stackexchange", "id": 41545, "tags": "quantum-mechanics, quantum-entanglement, epr-experiment" }
anti-alias filter of square pulses
Question: I'm trying to determine whether or not an anti-alias filter is needed for sampling square waves. The goal is to sample square wave pulses from a video detector with an ADC, do some time-domain digital processing on it, and reconstruct it with a DAC. I do understand that signals with frequencies above the nyquist rate will alias into the "wrong" frequency bin. I guess the best analogy is the camera taking a picture of a car wheel turning at a certain rate, and at certain speeds it looks like the wheel stops or starts spinning backwards (a "misinterpretation" caused by the aliased frequencies). A square wave is made up of an infinite amount of odd harmonics but.... From an ADC perspective, it is just taking a sample of the voltage in time. I fail to see how a "misinterpretation" could be made since there is no "turning car wheel" to take pictures of at the wrong time. Do the harmonics alias in such a way that the wave shape is preserved? In my mind, adding a filter to the signal will modify the shape of the original signal, getting rid of those upper harmonics. Depending on the filter design, it could add overshoot, ripple, and/or rise/fall time changes. So wouldn't the best representation of the pulse be obtained by direct sampling? Answer: From an ADC perspective, it is just taking a sample of the voltage in time. I fail to see how a "misinterpretation" could be made since there is no "turning car wheel" to take pictures of at the wrong time. Do the harmonics alias in such a way that the wave shape is preserved? You can reason this out yourself, in the time domain. Consider a square wave with values -1 and 1; and sample it at exactly five times its period. You'll get something like {+1, +1, -1, -1, -1, +1, +1 ...}. 
If you just have to think about this in the frequency domain, you can, after a great deal of work, demonstrate to yourself that not only do the harmonics not alias in a way that preserve the shape, but in fact alias in a way that does not preserve the shape -- you end up with edges aligned with the ADC sample points. In my mind, adding a filter to the signal will modify the shape of the original signal, getting rid of those upper harmonics. Depending on the filter design, it could add overshoot, ripple, and/or rise/fall time changes. So wouldn't the best representation of the pulse be obtained by direct sampling? That depends on what you're trying to do. If your goal is to accurately capture the timing of the edges, to something less than the ADC sampling interval, then you need to round those edges out -- because a signal that goes {+1, +1, -1, -1} has less information about the location of the edge than one that goes, e.g. {+1, 0.75, -0.25, -1} or {+1, 0, -1, -1}. And finally: I'm trying to determine whether or not an anti-alias filter is needed for sampling square waves. That depends entirely on your problem at hand. If you need every bit of information contained in the wave (and not just its timing), and if it truly has energy content out to infinity (which is physically impossible) then you need to sample infinitely fast. If you know it's bandlimited, then you know the sampling rate you need to use. If you know that, for example, the edges are super-sharp but what you really care about is their timing, then you can filter and acquire with an ADC, and infer the actual position of the edges from the way that the measured "square-ish" wave transitions.
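The "five samples per period" thought experiment is easy to reproduce numerically. A minimal sketch in NumPy (the quarter-sample phase offset is an arbitrary choice that keeps samples off the zero crossings): a 50% duty-cycle square wave sampled five times per period comes out as three highs and two lows, and the exact split depends on the sampling phase but can never be 50/50, which is exactly the shape distortion described above.

```python
import numpy as np

# Unit-amplitude square wave of frequency f, sampled at fs = 5*f.
# The quarter-sample phase offset keeps samples off the zero crossings.
n = np.arange(10)
x = np.sign(np.sin(2 * np.pi * (n + 0.25) / 5))

print(x.astype(int).tolist())
# The underlying wave spends equal time at +1 and -1, but the samples
# repeat as [1, 1, 1, -1, -1]: three highs, two lows per period,
# so the sampled waveform even picks up a DC bias the original lacks.
```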
{ "domain": "dsp.stackexchange", "id": 8221, "tags": "sampling, nyquist, square" }
Fall/Winter Viewing
Question: I live in Seattle and am thinking of purchasing a telescope. Is fall/winter a decent time of year for viewing (aside from summer)? Are there any major viewings/events during this time of year? I know I live in a city of rain, but there are some nice nights. Answer: The sky is a constantly changing tapestry of interesting sights and events: there is no time better than any other. If you're interested, now is the best time! Because of our location in the Milky Way Galaxy, summer and winter are the best times for viewing objects within the galaxy: open clusters and nebulae. Spring and fall are the best times to view objects outside our galaxy: globular clusters and other galaxies. Because of the Earth's rotation, if you stay up late, you can also get a sampling of the next season. Look at the autumn galaxies this evening, then stay up past midnight to view the winter clusters and nebulae. Superimposed on the "deep sky" are the solar system objects, which operate on their own clock. Right now, Saturn is disappearing in the west at sunset but Venus will soon replace it; Jupiter rises around 10 p.m. and dominates the rest of the night. Mars is still far away in the morning sky, but is gradually getting closer.
{ "domain": "physics.stackexchange", "id": 3165, "tags": "astronomy, telescopes" }
One hot encoding alternatives for large categorical values
Question: I have a data frame with a categorical feature that has over 1600 distinct values. Is there any way I can find alternatives so that I don't end up with over 1600 columns? I found this interesting link. But they are converting to class/object, which I don't want. I want my final output as a data frame so that I can test with different machine learning models. Or, is there any way I can use the generated matrix to train machine learning models other than logistic regression or XGBoost? Is there any way I can implement it? Answer: One option is to map rare values to 'other'. This is commonly done in e.g. natural language processing - the intuition being that very rare labels don't carry much statistical power. I have also seen people map 1-hot categorical values to lower-dimensional vectors, where each 1-hot vector is re-represented as a draw from a multivariate Gaussian. See e.g. the paper Deep Knowledge Tracing, which says this approach is motivated by the idea of compressed sensing: BARANIUK, R. Compressive sensing. IEEE signal processing magazine 24, 4 (2007). Specifically, they map each vector of length N to a shorter vector of length log2(N). I have not done this myself but I think it would be worth trying.
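The "map rare values to 'other'" idea can be sketched in a few lines of pandas; the count threshold and the "other" label below are arbitrary illustrative choices, not anything from the linked paper:

```python
import pandas as pd

def collapse_rare(series: pd.Series, min_count: int = 2,
                  other_label: str = "other") -> pd.Series:
    """Replace category values seen fewer than `min_count` times with one label."""
    counts = series.value_counts()
    rare = counts[counts < min_count].index
    # keep values that are NOT rare; replace the rest with the shared label
    return series.where(~series.isin(rare), other_label)

s = pd.Series(["a", "a", "b", "b", "c", "d"])
collapsed = collapse_rare(s)
# "c" and "d" occur once each, so both collapse to "other";
# one-hot encoding `collapsed` now needs 3 columns instead of 4.
print(pd.get_dummies(collapsed).shape[1])  # 3
```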
{ "domain": "datascience.stackexchange", "id": 6447, "tags": "machine-learning, dataset, dataframe, dimensionality-reduction, encoding" }
image_view symbol lookup error
Question: Hi there, Whenever I try to use image_view to visualize a topic published by usb_cam with the following command: rosrun image_view image_view image:=/usb_cam/image_raw I get this: init done opengl support available [ INFO] [1457614339.953957839]: Using transport "compressed" /opt/ros/indigo/lib/image_view/image_view: symbol lookup error: /opt/ros/indigo/lib/image_view/image_view: undefined symbol: _ZN9cv_bridge18cvtColorForDisplayERKN5boost10shared_ptrIKNS_7CvImageEEERKSsbdd The same happens even with the "raw" transport: init done opengl support available [ INFO] [1457614438.464431134]: Using transport "raw" /opt/ros/indigo/lib/image_view/image_view: symbol lookup error: /opt/ros/indigo/lib/image_view/image_view: undefined symbol: _ZN9cv_bridge18cvtColorForDisplayERKN5boost10shared_ptrIKNS_7CvImageEEERKSsbdd Am I doing something wrong? has anyone got similar errors? Thanks in advance. Originally posted by nvoltex on ROS Answers with karma: 131 on 2016-03-10 Post score: 0 Original comments Comment by BennyRe on 2016-03-10: Please provide more information. What commands do you run so that these errors occur? Comment by nvoltex on 2016-03-10: You are right, I'm sorry I forgot to add the command. Thanks for warning me. Comment by mgruhler on 2016-03-11: this seems to point to a problem with linking. Did you install 'image_view' from source or via 'apt-get'? Which OS/ROS-Distro? Maybe also check if this answer helps... Comment by hafager on 2016-08-04: If you installed via apt-get, you might have to source setup.bash. source /opt/ros/indigo/setup.bash Comment by huisen liu on 2016-09-10: @nvoltex Have you solved this problem?I met the same recently,I tried the other methods suggested by others,but all didn't work. Comment by grmarcil on 2017-04-19: In addition to Ziwen Qin's suggestion, I needed to run sudo apt-get install ros-indigo-cv-bridge. Versions of cv-bridge lower than 1.11.13 appear to cause this problem. 
Answer: This appears to be a bug with ROS' apt-get image-view program (version 1.12.18): https://github.com/ros-perception/image_pipeline/issues/215 I'd recommend running $ sudo apt-get update $ sudo apt-get install ros-indigo-image-view to pull down the latest fixed package. Originally posted by Ziwen Qin with karma: 136 on 2016-11-21 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 24062, "tags": "usb-cam, image-view, camera" }
Simple for loop to calculate headers size based on a specific ratio
Question: How can I improve the following for loop (wannabe function) to create harmonic sizes for headers based on a specific ratio? var base = 16; var goldenRatio = 1.618; var list = ""; for(var i=6; i>0; i--){ list = "\nh" + i + " { font-size: " + base + "px; }" + list; base *= Math.round(base*goldenRatio); } console.log(list) Answer: This is what your code produces, and I'm wondering if this is really what you want: " h1 { font-size: 1.0957612059345655e+45px; } h2 { font-size: 2.602367950332107e+22px; } h3 { font-size: 126822144384px; } h4 { font-size: 279968px; } h5 { font-size: 416px; } h6 { font-size: 16px; }" The numbers are huge. Looking at this page, it seems you should not have base inside the multiplication here: base *= Math.round(base*goldenRatio); Also, by rounding at this step, you lose precision. It would be better to round right before you put the value in the string. Like this: var base = 16; var list = ""; for (var i = 6; i > 0; i--) { list = "\nh" + i + " { font-size: " + Math.round(base) + "px; }" + list; base *= goldenRatio; } I'm also wondering if you really want the \n at the start of the resulting string. It would seem more natural to put it at the end of each line, like this: list = "h" + i + " { font-size: " + Math.round(base) + "px; }\n" + list; Notice also that I added more spaces around the operators in the for. There is no real standard for this in JavaScript, but this is a practice I borrow from other languages where it is a standard, for improved readability. Compare these two versions; I hope you'll agree that this is an improvement: for(var i=6; i>0; i--){ for (var i = 6; i > 0; i--) {
{ "domain": "codereview.stackexchange", "id": 10098, "tags": "javascript, beginner" }
Calculating an N point FFT by calculating 4X N/4 point FFTs in parallel
Question: I have an FPGA based application where I need to perform 4096 point FFTs in real time on a 1GS/s data stream. Data comes to the FFT from an A/D converter as 4 samples in parallel at 250Mhz. My data consists entirely of real values. I would like the FFT to process 4 real samples per clock. Rather than starting from scratch, I would like to use four 1024 point FFT cores in parallel, and then write some VHDL code to combine the results from the four FFTs into a single 4096 point FFT. I found this post which has an excellent example: Perform non-power-of-two FFT using ARM CMSIS library I was able to easily modify that example code to work with 4X 1024 point FFTs rather than 5X 256 point FFTs. At a high level, I understand how this works. Fs = 1000; % Sampling frequency T = 1/Fs; % Sampling period L = n; % Length of signal t = (0:L-1)*T; % Time vector x = 0.7*sin(2*pi*50*t); figure; plot(x); fx = fft(x); % Break down into five signals of 1024 points each, interleaved p = x(1:4:end); q = x(2:4:end); r = x(3:4:end); s = x(4:4:end); % FFT each of those. This is a 1024 point power-of-two standard FFT fp = fft(p); fq = fft(q); fr = fft(r); fs = fft(s); fp4 = [fp fp fp fp]; fq4 = [fq fq fq fq]; fr4 = [fr fr fr fr]; fs4 = [fs fs fs fs]; fp4 = reshape(fp4,n,1); fq4 = reshape(fq4,n,1); fr4 = reshape(fr4,n,1); fs4 = reshape(fs4,n,1); % calculate the 4096 twiddle factors k4 = (0:n-1)'; W4 = exp(-1i*2*pi*k4/n); % assemble the result fy4 = fp4 + W4.*fq4 + W4.^2.*fr4 + W4.^3.*fs4; figure; plot(abs(fx(1:n/2))); figure; plot(abs(fy4(1:n/2))); I am having trouble understanding, and coming up with a hardware efficient implementation for the complex arithmetic step "fy4 = fp4 + W4.*fq4 + W4.^2.*fr4 + W4.^3.*fs4;" from that example. This statement does not translate directly to hardware very easily, and I suspect that there are some optimizations that could be done to reduce the computational complexity. 
I would greatly appreciate it if someone could help me understand how to re-write that step of the algorithm into a form that would translate more easily into hardware. I am looking for an explanation that is similar to how the radix-2 butterfly is described below, but for the butterfly that I need to implement to combine four N/4 point FFTs into a single N point FFT. Thank you! Update: Below is a version of Hilmar's code that generates two samples per loop. I also separated out the real and imaginary components since the hardware implementation can only handle real arithmetic. I plan to calculate power spectra from the FFT results, so I only need to keep the first N/2 points from the FFT. Therefore I only need to calculate two output points for every four input points. This works, and it is in a state where I can translate it to VHDL. It uses 10 lookup tables (5 sine, 5 cosine), and 24 multiplies per loop. Because the lookup tables will be implemented in FPGA block RAM, I cannot really take advantage of the circular addressing trick. I need all of the twiddle factors to be available on every clock cycle. I still have a suspicion that there is a more efficient way to do this. Are there simplifications that would reduce the number of operations, and reduce the number of twiddle factor lookup tables that I need? I would also like to understand if this operations is the same as a Radix-4 butterfly. The references that I have seen on the radix-4 butterfly indicate that it uses fewer lookup tables and fewer multiplications than this solution, but I do not understand how to get from one to the other. n = 4096; Fs = 1000; % Sampling frequency T = 1/Fs; % Sampling period L = n; % Length of signal t = (0:L-1)*T; % Time vector x = 0.7*sin(2*pi*50*t)*(2^16); figure; plot(x); % calculate FFT using MATLAB native fft() function. 
% We'll use this as a reference to prove it works fx = fft(x); % Break down into four signal of 1024 points each, interleaved p = x(1:4:end); q = x(2:4:end); r = x(3:4:end); s = x(4:4:end); % FFT each of those. This is a 1024 power-of-two standard FFT fp = fft(p); fq = fft(q); fr = fft(r); fs = fft(s); fp4 = [fp fp fp fp]; fq4 = [fq fq fq fq]; fr4 = [fr fr fr fr]; fs4 = [fs fs fs fs]; fp4 = reshape(fp4,n,1); fq4 = reshape(fq4,n,1); fr4 = reshape(fr4,n,1); fs4 = reshape(fs4,n,1); % calculate the 4096 twiddle factors k4 = (0:n-1)'; W4 = exp(-1i*2*pi*k4/n); % assemble the result fy4 = fp4 + W4.*fq4 + W4.^2.*fr4 + W4.^3.*fs4; figure; plot(abs(fy4(1:n/2))); %use sines and cosine instead of exp C = cos(2*pi*k4/n); C2 = cos(2*pi*k4*2/n); C3 = cos(2*pi*k4*3/n); S = -sin(2*pi*k4/n); S2 = -sin(2*pi*k4*2/n); S3 = -sin(2*pi*k4*3/n); fy4a = 0*fy4; fy4b = 0*fy4; s = 2^20; %Scaling factor for integer lookup tables for i = 1:n/4 fy4a(i) = fp4(i) + W4(i)*fq4(i) + W4(i)^2*fr4(i) + W4(i)^3*fs4(i); xa = real(fp4(i)); ya = imag(fp4(i)); xb = real(fq4(i)); yb = imag(fq4(i)); xc = real(fr4(i)); yc = imag(fr4(i)); xd = real(fs4(i)); yd = imag(fs4(i)); War = round(C(i)*s); Wai = round(S(i)*s); Wbr = round(C2(i)*s); Wbi = round(S2(i)*s); Wcr = round(C3(i)*s); Wci = round(S3(i)*s); War2 = round(C(i+n/4)*s); Wai2 = round(S(i+n/4)*s); %Can resuse the C2 value from the first calculation %Saves two lookup tables. %Wbr2 = round(C2(i+n/4)*s); Wbr2 = round(-C2(i)*s); %Wbi2 = round( S2(i+n/4)*s); Wbi2 = round( -S2(i)*s); Wcr2 = round(C3(i+n/4)*s); Wci2 = round(S3(i+n/4)*s); %Calculate Intermediate terms. 
This will be pipe stage 1 in the VHDL %divide by scaling factor and round to simulate fixed point math Waixb = round((Wai*xb)/s); Waiyb = round((Wai*yb)/s); Warxb = round((War*xb)/s); Waryb = round((War*yb)/s); Wbixc = round((Wbi*xc)/s); Wbiyc = round((Wbi*yc)/s); Wbrxc = round((Wbr*xc)/s); Wbryc = round((Wbr*yc)/s); Wcixd = round((Wci*xd)/s); Wciyd = round((Wci*yd)/s); Wcrxd = round((Wcr*xd)/s); Wcryd = round((Wcr*yd)/s); Wai2xb = round((Wai2*xb)/s); Wai2yb = round((Wai2*yb)/s); War2xb = round((War2*xb)/s); War2yb = round((War2*yb)/s); Wbi2xc = round((Wbi2*xc)/s); Wbi2yc = round((Wbi2*yc)/s); Wbr2xc = round((Wbr2*xc)/s); Wbr2yc = round((Wbr2*yc)/s); Wci2xd = round((Wci2*xd)/s); Wci2yd = round((Wci2*yd)/s); Wcr2xd = round((Wcr2*xd)/s); Wcr2yd = round((Wcr2*yd)/s); Xr = xa + (Warxb - Waiyb) + (Wbrxc - Wbiyc) + (Wcrxd - Wciyd); %Xi = ya + ((War+Wai)*(xb+yb) - Warxb - Waiyb) + ((Wbr+Wbi)*(xc+yc) - Wbrxc - Wbiyc) + ((Wcr+Wci)*(xd+yd) - Wcrxd - Wci*yd); %Xi = ya + ( (War*xb + Wai*xb + War*yb + Wai*yb) - Warxb - Waiyb) + ((Wbr*xc + Wbi*xc + Wbr*yc + Wbi*yc) - Wbrxc - Wbiyc) + ((Wcr*xd + Wcr*yd + Wci*xd + Wci*yd ) - Wcrxd - Wci*yd); %Xi = ya + ( Warxb + Waixb + Waryb + Waiyb - Warxb - Waiyb + Wbrxc + Wbixc + Wbryc + Wbiyc - Wbrxc - Wbiyc + Wcrxd + Wcryd + Wcixd + Wciyd - Wcrxd - Wciyd); Xi = ya + ( Waixb + Waryb + Wbixc + Wbryc + Wcryd + Wcixd); %Yr = xa + (War2*xb - Wai2*yb) + (Wbr2*xc - Wbi2*yc) + (Wcr2*xd - Wci2*yd); Yr = xa + (War2xb - Wai2yb) + (Wbr2xc - Wbi2yc) + (Wcr2xd - Wci2yd); %Yi = ya + ((War2+Wai2)*(xb+yb) - War2*xb - Wai2*yb) + ((Wbr2+Wbi2)*(xc+yc) - Wbr2*xc - Wbi2*yc) + ((Wcr2+Wci2)*(xd+yd) - Wcr2*xd - Wci2*yd); %Yi = ya + ( (War2xb + Wai2xb + War2yb + Wai2yb) - War2xb - Wai2yb) + ((Wbr2xc + Wbi2xc + Wbr2yc + Wbi2yc) - Wbr2xc - Wbi2yc) + ((Wcr2xd + Wcr2yd + Wci2xd + Wci2yd ) - Wcr2xd - Wci2yd); Yi = ya + ( Wai2xb + War2yb + Wbi2xc + Wbr2yc + Wcr2yd + Wci2xd); fy4b(i) = complex(Xr,Xi); fy4b(i+n/4) = complex(Yr,Yi); end figure; plot(abs(fy4a(1:n/2))); figure; 
plot(abs(fy4b(1:n/2))); Answer: Let me try to describe in words how this works: (1) calculate 4 individual 1k FFTs; (2) repeat each 4 times to make it 4k in length; (3) multiply each result with a vector of the twiddle factors raised to a power: 0 for the first, 1 for the second, etc.; (4) sum them up. I primarily did it this way since I was lazy. We can make this a lot more hardware friendly by using circular addressing, e.g. for base 1024, we would count 1021, 1022, 1023, 0, 1, 2 ... Since all the bases are a power of two, that can be implemented with a simple bit-wise AND with n-1. We can also make use of the fact that taking the power of a twiddle factor is the same as multiplying the index, i.e. $$W(k)^m = W(k\cdot m)$$ So instead of taking the power, we can just use a different step size, provided we use it with circular addressing. Here is a snippet of Matlab code that demonstrates this by unrolling the last summation step: %% unroll the final loop and only use 1k length input vectors fy4a = 0*fy4; % modulo 4096 mask moduloMask = n-1; moduloMaskShort = n/4-1; % mod 1024 mask M = 1; % Matlab array indexing offset for i = 1:n realIndex = i-1; % remove matlab indexing offset k = M + bitand(realIndex,moduloMaskShort); % mod 1024 % add FFTs 0 and 1 fy4a(i) = fp4(k)+W4(i)*fq4(k); % FFT 2 with a step size of 2 fy4a(i) = fy4a(i) + W4(M+bitand(2*realIndex,moduloMask))*fr4(k); % FFT 3 with a step size of 3 fy4a(i) = fy4a(i) + W4(M+bitand(3*realIndex,moduloMask))*fs4(k); end This is ugly Matlab code, primarily since Matlab starts counting at 1, which makes the whole circular addressing awkward, but it should work fine in hardware or C. Please note that this replaces steps 2, 3 & 4 above, so there is no need to replicate the FFT results to full 4k length. It works with the 1k results "as is". EDIT: Radix 4 formulation This can be done as a radix 4 operation. Below is the Matlab. Note that you only need 3 twiddle factors per butterfly. 
I implemented the table lookup by having three different pointers with different step sizes (1, 2 & 3). You only need to tabulate 3069 twiddle factors and not the whole 4095. The three twiddle factors are related as $W_2 = W_1 \cdot W_1$ and $W_3 = W_1 \cdot W_2$, so if complex multiplication is faster than table lookup, you can do that. If you do, you only need a twiddle factor table up to 1023. Multiplication with $j$ or $-j$ doesn't require any actual multiplications, just swapping real and imaginary parts and flipping the proper signs. %% do it as a 4in 4out operation fy4b = 0*fy4; % initialize n4 = n/4; M = 1; % Matlab array offset i2 = 0; % index for W^2 i3 = 0; % index for W^3 j = 1i; % imaginary unit for i1 = 0:n4-1 im = i1 + M; % index into Matlab arrays starting at 1 % get the twiddle factors and multiply with inputs a0 = fp4(im); a1 = fq4(im)*W4(M+i1); a2 = fr4(im)*W4(M+i2); a3 = fs4(im)*W4(M+i3); % perform the radix-4 butterfly as 4 individual operations fy4b(im) = a0 + a1 + a2 + a3; fy4b(im+n4) = a0 - j*a1 - a2 + j*a3; fy4b(im+2*n4) = a0 - a1 + a2 - a3; fy4b(im+3*n4) = a0 + j*a1 - a2 - j*a3; % update the twiddle factor index counters i2 = i2 + 2; i3 = i3 + 3; end d = (fy4b-fy4); fprintf('Relative Error = %6.2fdB \n',20*log10(sum(abs(d))./sum(abs(fy4))));
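The same four-way recombination is easy to sanity-check against a full-length FFT. A sketch in NumPy mirroring the Matlab above (n = 4096 as in the question; the random input is an arbitrary test signal):

```python
import numpy as np

n = 4096
rng = np.random.default_rng(0)
x = rng.standard_normal(n)

# Four length-n/4 FFTs of the interleaved (polyphase) sub-sequences
fp = np.fft.fft(x[0::4])
fq = np.fft.fft(x[1::4])
fr = np.fft.fft(x[2::4])
fs = np.fft.fft(x[3::4])

k = np.arange(n)
W = np.exp(-2j * np.pi * k / n)   # twiddle factors W^k
m = k % (n // 4)                  # circular addressing into the 1k results

# Recombination: X[k] = P[k mod n/4] + W^k Q[.] + W^2k R[.] + W^3k S[.]
y = fp[m] + W * fq[m] + W**2 * fr[m] + W**3 * fs[m]

print(np.allclose(y, np.fft.fft(x)))  # True
```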
{ "domain": "dsp.stackexchange", "id": 10451, "tags": "fft" }
Are HeLa cells edible?
Question: I'm curious if HeLa cells are intrinsically poisonous or dangerous to ingest. My understanding is that some of the contamination in HeLa cells such as HPVs are not readily expressed. I have no plans to do so, but I'm trying to understand why they might or might not be. Answer: I think HeLa cells are edible, although from a moral point of view this would be cannibalism. Although they are cancer cells, they are safe for a foreign organism, because any eaten matter is destroyed by digestion. Even if these cells were implanted into another being by surgery, they would be safe, since the immune system would recognize them as foreign and kill them. To the recipient they are not so much cancerous as simply foreign. Remember that when we eat meat, we don't care whether the cow or pig had cancer. Even if we avoid eating explicit tumors, we can still eat metastatic cells from sick animals. I think this happens all the time and has no consequences. At the same time, HeLa cells are dangerous for other cell cultures in the laboratory. There were several cases in which other cultures were contaminated by HeLa cells and crowded out. HeLa cells traveled through the air and on objects unexpectedly. Scientists were not ready for this phenomenon.
{ "domain": "biology.stackexchange", "id": 3824, "tags": "cell-culture, eukaryotic-cells" }
Have we really measured the wavelength of light?
Question: Have we practically measured the distances between the variations of electromagnetic radiation in space, in nanometers, or is it just theoretical, derived from calculations? Also, to whoever marked this as a duplicate: I didn't understand that answer, so please, anyone who can answer, explain in simple language with as little mathematical notation as possible. Answer: Using a process called interference, we can find wavelength, because the way that waves interfere depends on wavelength. Interference is based on two key principles of waves: they are made up of peaks and troughs. When troughs overlap, they go lower. When peaks overlap, they go higher. When a peak meets a trough, they cancel. Of course, the positions of peaks and troughs depend on wavelength; therefore, one can calculate the wavelength by looking at the interference pattern. As you can see in the picture, using two waves, we can find wavelength.
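As a concrete illustration of turning an interference pattern into a wavelength: in a double-slit setup with slit spacing d and screen distance L, the bright-fringe spacing is Δy ≈ λL/d, so λ = d·Δy/L. The numbers below are made up for illustration, not a report of a real measurement:

```python
# Double-slit fringe arithmetic: wavelength = d * delta_y / L
d = 0.10e-3        # slit spacing: 0.10 mm (assumed)
L = 1.0            # slit-to-screen distance: 1 m (assumed)
delta_y = 6.33e-3  # measured spacing between bright fringes: 6.33 mm (assumed)

wavelength = d * delta_y / L
print(f"{wavelength * 1e9:.0f} nm")  # 633 nm, red HeNe-laser territory
```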
{ "domain": "physics.stackexchange", "id": 21441, "tags": "electromagnetism, waves, electromagnetic-radiation, photons, wavelength" }
Expandable text boxes for legends of fieldsets
Question: What is the best way to refactor the following script? <script type="text/javascript"> $(document).ready(function() { $(".rolesList").hide(); }); $("#legendFunction").click(function() { $("#divChkUserRoles").toggle('slow'); var text = $("#lblExpandFunction").text() == '+' ? '-' : '+'; $("#lblExpandFunction").text(text); }) $("#legendMISheet").click(function() { $("#divChkUserRolesMISheet").toggle('slow'); var text = $("#lblExpandMISheet").text() == '+' ? '-' : '+'; $("#lblExpandMISheet").text(text); }) </script> The above script will be applied to the following HTML and ASP.NET code: <fieldset> <legend><span id="legendFunction" style="cursor: pointer" title="Click here to toggle show or collapse Function."> <label id="lblExpandFunction" style="padding: 5px;">+</label> KengLink Function </span></legend> <div id="divChkUserRoles" class="rolesList"> <asp:CheckBoxList ID="UserRoles" runat="server" /> </div> </fieldset> <fieldset> <legend><span id="legendMISheet" style="cursor: pointer" title="Click here to toggle show or collapse MI Sheet."> <label id="lblExpandMISheet" style="padding: 5px;">+</label> MI Sheet </span></legend> <div id="divChkUserRolesMISheet" class="rolesList"> <asp:CheckBoxList ID="UserRolesMISheet" runat="server" /> </div> </fieldset> Answer: Don't use ids at all. (You almost never should be using them. Ids get to be really problematic once you have composite views and/or multiple people on a project) Place a few appropriate classes in your html. And use the jquery composite and relative references to do it all at once. $('.expander').click(function() { $(this).closest('fieldset').find('.expandable').toggle('show'); var l = $(this).find('label'); l.text(l.text() === '+' ? '-' : '+'); }); where your clicker span gets a class of expander and your rolesList gets the class of expandable. You could just use rolesList directly, but I find expandable to be somewhat more descriptive.
{ "domain": "codereview.stackexchange", "id": 1818, "tags": "javascript, jquery, form, animation" }
How to interface ros controller and orocos?
Question: How do I use Orocos RTT with a ROS controller for hard real-time control?

Originally posted by dinesh on ROS Answers with karma: 932 on 2018-05-12 Post score: 0

Answer: I created a basic example of this some time ago; it might be useful to get started: https://github.com/skohlbr/rtt_ros_control_example

Originally posted by Stefan Kohlbrecher with karma: 24361 on 2018-05-14 This answer was ACCEPTED on the original site Post score: 2

Original comments

Comment by dinesh on 2018-05-16: Great, but why are the read and write functions empty? It would be more helpful if they were also present. What single board computer were you using to run ROS, and how did you connect sensors and motors: through GPIO, EtherCAT, or a serial port?

Comment by dinesh on 2018-05-16: So, sir, basically the only part that needs to be interfaced with Orocos is the hardware interface part, right?

Comment by Stefan Kohlbrecher on 2018-05-17: The read and write functions depend on the hardware you use and how it can be interfaced (perhaps via a driver/library provided by the vendor).
{ "domain": "robotics.stackexchange", "id": 30811, "tags": "ros, ros-controllers, ros-kinetic" }
stringByAppendingString: vs. stringWithFormat:
Question: I am refactoring code which contains many lines of

    NSString* image = [@"ipad_" stringByAppendingString:imageOrigName];

and wondered which is more optimized: stringByAppendingString: or stringWithFormat:. In the first, you take one NSString object and concatenate another onto its tail. In the second, you can use a formatter:

    NSString* image = [NSString stringWithFormat:@"ipad_%@", imageOrigName];

The result is the same, but for optimization purposes, which is better? For the above example, I would expect that a simple concatenation would be faster than having it parse the string for '%' symbols, find a matching type (NSString for the %@), and do all its background voodoo. However, what happens when we have (sloppily written?) code which contains multiple stringByAppendingString: calls?

    NSString* remainingStr = [NSString stringWithFormat:@"%d", remaining];
    NSString* msg = @"You have ";
    msg = [msg stringByAppendingString:remainingStr];
    msg = [msg stringByAppendingString:@" left to go!"];
    if (remaining == 0)
        msg = [msg stringByAppendingString:@"Excellent Job!"];
    else
        msg = [msg stringByAppendingString:@"Keep going!"];

(Note the string literal assigned to msg needs the @ prefix to be a valid NSString.) Here, a single stringWithFormat: (or initWithFormat: if we use [NSString alloc]) would seem to be the smarter path:

    NSString* encouragementStr = (remaining == 0 ? @"Excellent Job!" : @"Keep going!");
    NSString* msg = [NSString stringWithFormat:@"You have %d left to go! %@", remaining, encouragementStr];

Thoughts? Sites/blogs that you've found that help answer this?

Answer: The only way to know is to measure it. And, of course, the only reason to measure it is if it actually matters; if you have a real world performance issue that can be tracked to this code. If not, don't worry about it.
However, there are a few issues to consider:

- every append* is going to cause an allocation and memory copying, and those are expensive
- every *WithFormat: is going to cause a string to be parsed (and there will be allocations/copying)
- all of this is likely entirely moot unless you are doing this tens of thousands of times, often. If you are, it begs the question as to why?

And, finally, note that this kind of string manipulation is going to make localization more difficult.
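The answer's core advice is "measure it". In Objective-C that would mean something like timing with mach_absolute_time or XCTest's measureBlock:. As a language-neutral sketch of the measure-first approach, here is the same concatenation-vs-format comparison in Python (purely illustrative: Python's string internals differ from NSString's, so the relative timings do not transfer):

```python
import timeit

# Two ways to build the same string, analogous to
# stringByAppendingString: (concatenation) and stringWithFormat: (formatting).
def by_concat(name="photo.png"):
    return "ipad_" + name

def by_format(name="photo.png"):
    return "ipad_%s" % name

# Time each variant over many iterations; a single call is too fast to measure.
t_concat = timeit.timeit(by_concat, number=100_000)
t_format = timeit.timeit(by_format, number=100_000)

assert by_concat() == by_format() == "ipad_photo.png"
print(f"concat: {t_concat:.4f}s  format: {t_format:.4f}s")
```

Whatever the numbers come out to on your machine, the point stands: unless this code runs tens of thousands of times, the difference will not matter.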
{ "domain": "codereview.stackexchange", "id": 923, "tags": "objective-c" }
colliding point particles
Question: When I draw e.g. the diagram of Compton scattering, I assume that an electron of given momentum gets 'hit' by a photon and interacts with it. How close does the photon have to get to the electron for it to be considered a 'hit'? Obviously if it passes at a macroscopic distance it shouldn't get affected. And an actual hit should be impossible when dealing with point particles...

Answer: Electrons are point particles in the sense that their position eigenstates are (as far as we know) $\delta$-like. Photons can't be said to be point particles in this sense, as you cannot transform to their rest frame (although they are featureless with respect to small scales as far as we know). The correct way to think about electrons and photons is as quantum fields, and as such they are extended. So the relevant length scale is not related to the "size" of an electron, but to its scattering cross-section with photons of a given energy. Never forget: a photon is not a point particle travelling on a trajectory through space. A photon is a quantized portion of an electromagnetic field. But there is a natural distance scale involved, namely the Compton wavelength. Photons with wavelengths long compared to the Compton wavelength will not Compton scatter on electrons.
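The Compton wavelength the answer names as the natural distance scale is easy to compute from standard constants, $\lambda_C = h / (m_e c)$:

```python
# Compton wavelength of the electron: lambda_C = h / (m_e * c)
h = 6.62607015e-34      # Planck constant, J*s (exact in SI since 2019)
m_e = 9.1093837015e-31  # electron mass, kg
c = 2.99792458e8        # speed of light, m/s (exact)

lambda_C = h / (m_e * c)
print(f"Compton wavelength: {lambda_C:.4e} m")
```

This comes out to about 2.4e-12 m; photons with wavelengths much longer than this will not Compton scatter on electrons.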
{ "domain": "physics.stackexchange", "id": 22221, "tags": "quantum-field-theory, standard-model, feynman-diagrams, scattering-cross-section" }
Complexity of taking mod
Question: This seems like a question that should have an easy answer, but I don't have a definitive one: if I have two $n$-bit numbers $a, p$, what is the complexity of computing $a\bmod p$? Merely dividing $a$ by $p$ would take time $O(M(n))$, where $M(n)$ is the complexity of multiplication. But can $\bmod$ be performed slightly faster? Answer: Shoup (Section 3.3.5, Theorem 3.3, p. 62) gives a bound for computing the residue $r$ in time $O(n\log q)$, where $a = q\cdot p + r$ and $\log a = n$. I guess that if $p$ and $a$ are both roughly $n$-bit numbers, then $q$ (and hence $\log q$) should be rather small, giving $O(n)$. If $a$ is an $n$-bit number, and $p$ is relatively small, then the multiplication approach should be faster.
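The claim that $q$ is small when $a$ and $p$ have the same bit length can be checked directly: if both have exactly $n$ bits, then $p \ge 2^{n-1}$ and $a < 2^n$, so $q = \lfloor a/p \rfloor$ is at most 1. A quick sanity check:

```python
import random

n = 1024
for _ in range(100):
    # Two random numbers with exactly n bits (force the top bit on).
    a = random.getrandbits(n) | (1 << (n - 1))
    p = random.getrandbits(n) | (1 << (n - 1))
    q, r = divmod(a, p)
    assert a == q * p + r and 0 <= r < p   # the division identity
    assert q <= 1                          # quotient fits in a single bit
```

So for same-size operands the $O(n \log q)$ bound really does collapse to $O(n)$.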
{ "domain": "cs.stackexchange", "id": 1544, "tags": "algorithms, number-theory" }
Can I calculate the training performance of GPUs by comparing their specification?
Question: I am currently using an Nvidia GTX 1050 with 640 CUDA cores and 2 GB of GDDR5 for deep neural network training. I want to buy a new GPU for training, but I am not sure how much performance improvement I can get. Is there a way to roughly calculate the training performance improvement by just comparing the GPUs' specifications, assuming all training parameters are the same? Can I roughly assume the training performance improvement is X times because the CUDA core count and memory size increased X times? For example, is an RTX 2070 with 2304 CUDA cores and 8 GB GDDR6 roughly 4 times faster than a GTX 1050? And is an RTX 2080 Ti with 4352 CUDA cores and 11 GB GDDR6 roughly 7 times faster than a GTX 1050? Thanks.

Answer: A lot matters when it comes to comparing GPUs. I will give you a broad overview (it is not possible to go into exact details, as a huge number of factors are actually involved):

Cores - More CUDA cores means more parallelism, so more calculations can be done at the same time, but this is of no significance if your algorithm is inherently sequential. Your library will parallelize what it can and will use only as many CUDA cores as that requires; the rest will remain idle.

Memory - Memory is useful if a single instance of your data requires a lot of memory (like pictures). With more memory you can load a greater amount of data at the same time for the cores to process. If memory is too small, the cores want data but are not getting it (basically, the data held in GPU memory is the fuel while the cores are the engine; you cannot run a jet on a fuel tank the size of a car's, since it will spend its time constantly refuelling). But by Machine Learning convention one should load only small mini-batches at a time anyway.

Micro-architecture - Lastly, architecture matters. I do not know exactly how, but NVIDIA's RTX cards are faster for deep learning than GTX cards.
NVIDIA has two affordable architectures (Pascal for GTX, Turing for RTX). Thus even for exactly the same specs, the Turing architecture will run faster for deep learning. For more details you can explore NVIDIA's website on what each architecture specialises in; for example, NVIDIA's P series is good for CAD purposes, and there are also some very high-end GPUs in the Tesla line. So AFAIK, these are the factors that matter. The library you will be using also matters, as a lot depends on how the library unrolls your program and maps it onto one or several GPUs. Also related are these 2 answers I previously gave: CPU preferences and specifications for a multi GPU deep-learning setup Does fp32 & fp64 performance of GPU affect deep learning model training? Hope this helps!
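As a very rough back-of-the-envelope estimate (not a benchmark), raw throughput scales with cores times clock. The core counts below come from the question; the boost clocks are approximate published figures, and the linear-scaling assumption ignores memory bandwidth, Tensor Cores, and everything else discussed above, so treat the result as an upper bound:

```python
# Rough upper-bound estimate: throughput ~ cores * clock.
# Core counts are from the question; boost clocks (MHz) are approximate
# published figures. Real training speedups are usually lower.
gpus = {
    "GTX 1050":   (640,  1455),
    "RTX 2070":   (2304, 1620),
    "RTX 2080Ti": (4352, 1545),
}

def rel_throughput(name, baseline="GTX 1050"):
    cores, clock = gpus[name]
    base_cores, base_clock = gpus[baseline]
    return (cores * clock) / (base_cores * base_clock)

for name in ("RTX 2070", "RTX 2080Ti"):
    print(f"{name}: ~{rel_throughput(name):.1f}x a GTX 1050")
```

Interestingly this crude estimate lands near the question's guesses (~4x and ~7x), but architectural differences can push the real numbers well away from it in either direction.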
{ "domain": "ai.stackexchange", "id": 1064, "tags": "training, gpu" }
Fermi energy from gravitational collapse
Question: When a cloud of hydrogen collapses to form a star, the particles gain energy; potential energy is converted into heat. This eventually causes the star to ignite; the thermal energy becomes high enough to allow fusion to take place. Eventually the nuclear fuel of this star is all consumed and it collapses again. Part of it would blow away in a supernova if it were big enough, but some part may remain to form a neutron star or perhaps a black hole. My question is concerned with this final collapse. Surely, the material must again heat up. One can even imagine that the heat must become quite extreme. Moreover, even if it collapses to form a black hole, the matter would be a ball of almost pure neutrons. So one can imagine that the Fermi energy of this ball must be extremely high. To make things more interesting, this ball is being compressed into a smaller volume. This should also increase the Fermi energy. Is it possible that the Fermi energy in this final state of collapse (before the event horizon forms) could be so high (perhaps above the QCD scale or even the electroweak scale) that it could actually allow some interesting physics to take place? Answer: Yes of course! That is what motivates much of the study of neutron stars. Tackling your first point, yes, the interior temperatures of recently collapsed neutron stars are very high - perhaps $10^{11}$ K - but they cool rapidly (in seconds) through the emission of thermal neutrinos to "cold" configurations where the Fermi energies are much greater than $kT$. There is plenty of theoretical interest in how matter behaves in these circumstances and how the neutrinos interact with other matter, since it controls the physics of supernovae. The physics of neutron stars is reasonably well understood up to the nuclear saturation density at $3\times 10^{17}$ kg/m$^3$, but a typical neutron star probably has a density several times to an order of magnitude larger than this.
The higher Fermi energies at these increased densities could well lead to exotic possibilities such as the creation of massive hadrons like hyperons or to the production of pions or kaons which then form a boson condensate. Alternatively, quarks may attain asymptotic freedom at high densities, leading to a quark-gluon plasma or neutron stars that are entirely made of (strange) quark matter. Another alternative is that neutrons form some sort of solid core, held in a lattice by the strong nuclear force. I suppose what you are really asking is: when a neutron star exceeds the Tolman-Oppenheimer-Volkoff limit and collapses, does the density increase even more, to the extent that even more exotic (non-equilibrium) physics becomes possible? One thing to note here is that the radius of a neutron star when it collapses is probably only a factor $\sim 1.5$ larger than its Schwarzschild radius, so the average density is only going to increase by a factor of a few before it departs the practically observable universe (I am not going to speculate on what cannot be observed). Given the current uncertainties in the behaviour of matter beyond the nuclear saturation density, I suspect that all the candidate physics considered for neutron cores is also relevant for collapsing neutron stars. EDIT: Thus to answer your edit: At the highest densities inside neutron stars and at the highest densities achieved by material collapsing to a black hole (perhaps a few $10^{18}$ kg/m$^3$), the Fermi (kinetic) energy of the neutrons becomes large enough (a few hundred MeV $\sim$ the QCD scale), or equivalently, the separation between neutrons becomes small enough ($<10^{-15}$ m), that quarks may attain asymptotic freedom and quark matter is formed. The Fermi energies never get anywhere near the electroweak scale of 246 GeV.
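The "few hundred MeV" figure quoted above can be checked with a back-of-the-envelope calculation: model the star as a free (non-interacting) neutron gas at mass density $\rho$, get the Fermi wavenumber from the number density, $k_F = (3\pi^2 n)^{1/3}$, and use the relativistic kinetic energy. This is only a sketch - interactions dominate at these densities - and the two densities are chosen to bracket the "few $10^{18}$ kg/m$^3$" range mentioned in the answer:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
m_n = 1.67492749804e-27  # neutron mass, kg
MeV = 1.602176634e-13    # joules per MeV

def fermi_kinetic_energy_MeV(rho):
    """Relativistic Fermi kinetic energy of a free neutron gas at density rho (kg/m^3)."""
    n = rho / m_n                          # neutron number density, m^-3
    k_F = (3 * math.pi**2 * n) ** (1 / 3)  # Fermi wavenumber, m^-1
    pc = hbar * k_F * c                    # Fermi momentum times c, in J
    mc2 = m_n * c**2                       # neutron rest energy, in J
    return (math.sqrt(pc**2 + mc2**2) - mc2) / MeV

for rho in (1e18, 5e18):
    print(f"rho = {rho:.0e} kg/m^3 -> E_F ~ {fermi_kinetic_energy_MeV(rho):.0f} MeV")
```

The result is of order 100-350 MeV over this density range: comparable to the QCD scale, as the answer states, and far below the 246 GeV electroweak scale.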
{ "domain": "physics.stackexchange", "id": 35521, "tags": "gravity, black-holes, neutron-stars, fermi-energy" }
Minimum velocity needed to throw a stone to another planet
Question: I was recently learning about gravitational fields in high school (senior year), when my physics teacher presented us with a problem in which we were supposed to analyze the energy of the particle qualitatively. However, I was stuck on how to find the minimum velocity. I asked my physics teacher, and he told me it used some calculus. He didn't explain further, but I am curious for an answer. It goes something like this: Two planets A and B are separated by a distance of R. A stone (point mass) is thrown from the surface of planet A towards planet B. The masses of A and B are known. The radii of planets A and B are the same and known. Find the minimum velocity needed to throw the stone from planet A to B. The mass of the stone is also known. This question may be incomplete in known variables, because I thought about the problem and made up my own question.
Despite what your teacher thinks, science is not about blind obedience to established authority. According to Newton's law of gravitation, the gravitational force of a planet of mass $M$ on an object of mass $m$ at a distance $r$ from its centre is $$-\frac{GMm}{r^2}\text{,}$$ where $G$ is the Newton's gravitational constant, which is the same always and everywhere. You can look it up, and also celebrate the fact that out of all the constants of nature it is the least accurately known. The minus sign is because it makes most sense to treat all distances, velocities, accelerations and forces as acting upwards – and of course gravity pulls downwards. Now, even at your grade you should know that the work performed by a force is equal to the force times the distance moved. So on a tiny bit of the object's journey up from the planet's surface (a distance $\Delta{r}$, say) the work performed by gravity is $-\frac{GMm}{r^2}\,\Delta{r}$. Adding up all the little pieces, the total amount of work performed by gravity on the object's journey from the surface into outer space is $$-\int\limits_{r_\mathrm A}^{\infty}\frac{GMm}{r^2}\,\mathrm d{r}\text.$$ (If your teacher says that integration is beyond your grade level, strangle him. Integration is easy. Get an elementary calculus book and read it for fun and see). Doing the integration, the total work done by gravity turns out to be $-\frac{GMm}{r_\mathrm A}$. If you launched your particle with a velocity $v$, that means that it started with a kinetic energy of $\frac12{m}v^2$. When it gets to outer space, the work done by gravity means that the resultant kinetic energy is $$\frac12{m}v^2-\frac{GMm}{r_\mathrm A}$$ and you'll see that this makes sense, because for small $v$ it's negative (the particle never gets that far), for $v$ equal to the escape velocity it's exactly zero (the particle escapes but that's that), and for larger $v$ it's positive, so there's still some kinetic energy left. 
A couple of points: There is a factor of $m$ in both halves of the equation. This shows that the mass of the particle isn't relevant to the dynamics of its motion. If planet A is the Earth, you don't know $M$ without looking it up in a book, and you don't know the value of $G$ without looking it up in a book. That would be immoral. On the other hand, you could measure the radius of the Earth if you wanted (Eratosthenes seems to have been the first to do this, and it's quite a doable experiment for everybody), and you could also measure the acceleration due to gravity at the Earth's surface. You would therefore be able to use "acceleration = $GM/r^2$" to work out the value of the product $GM$, and thus be able to work out the escape velocity without looking up anything at all. 2. From outer space near planet A to outer space near planet B I'll be much briefer here. Planet A and planet B are both (I hope) orbiting the Sun. If planet B is further away than planet A, you will need some extra kinetic energy to climb out of the Sun's "gravity well". If you prefer, you can think of it as needing "surplus velocity" after escaping from planet A. I will now cheat and say that if you are going out from the Earth to Mars, you need to have $2.9\ \mathrm{km/s}$ of velocity left over, once you have got to outer space, to get out from the vicinity of the Earth to the vicinity of Mars. You could do this working out for yourself, by deducing the acceleration due to the Sun's gravity at the Earth's distance from the Sun (using the length of the year) and comparing it to that at Mars's distance from the Sun (using the Martian year). But I do need to let you do some of the work yourself! Just one other point: it isn't $11.2\ \mathrm{km/s}+2.9\ \mathrm{km/s}=14.1\ \mathrm{km/s}$ you'd need to get to Mars. 
You need a starting kinetic energy which gets you to $2.9\ \mathrm{km/s}$ when you get to "outer space", and because kinetic energy is proportional to the square of velocity, this means that you only need $11.6\ \mathrm{km/s}$ to start with. On the other hand, if Planet B is nearer to the Sun than planet A (Venus, for example), then you don't need any extra velocity at all. The escape velocity is enough. The relative orbits of planets A and B are the variable that you left out of your question. 3. From outer space near Planet B to the surface of Planet B. No extra velocity needed. Start at zero, and Planet B's gravity will carry you in all the way. I've taken a long time over this because you sound like the sort of person who doesn't just want canned answers from books. Working things out for yourself is what science ought to be about (life, too). It's just unfortunate that so many schools seem to teach the opposite.
{ "domain": "physics.stackexchange", "id": 32730, "tags": "homework-and-exercises, newtonian-gravity, projectile" }
Robot drives arm into table on its way to target pose
Question: Greetings. Apologies for the newbie question. Is there a command that moves "straight" to a target set of joint angles, where "straight" means the end effector moves in a roughly straight line?

Background: I use my Kinova MICO 4DOF arm for art projects. I drive it with shell commands. It currently plays a decent game of chess, using a DGT sensory board. I'm an end user who programs in bash, consumed by art projects that leave little time for mastering robotics. I've recorded joint angles for all robot arm poses of interest. I can get the current joint positions with:

    rostopic echo -c /m1n4s200_driver/out/joint_angles

I subtract these from the target-position angles, and drive the arm like this:

    rosrun kinova_demo joints_action_client.py -v -r m1n4s200 degree -- $M1 $M2 $M3 $M4

This works fine much of the time. However, all joints seem to move at the same speed independently, toward their individual goals, so the end effector follows a ponderous curvy path. Depending on where the joints are, the robot will often try to drive straight through the table surface (presumably hoping to emerge somewhere else en route to the target angles). Is there a command that will move the end effector straight, or avoid the table surface? I'm using joint angles, not Cartesian positions. I'm using kinova-ros, as described here: https://github.com/Kinovarobotics/kinova-ros/blob/master/README.md

Everything I know comes from this document -- it's a pretty spare tutorial for me.

Thanks, Kevin

PS. My software versions:

    % rosversion -d
    kinetic
    % lsb_release -a
    No LSB modules are available.
    Distributor ID: Ubuntu
    Description:    Ubuntu 16.04.4 LTS
    Release:        16.04
    Codename:       xenial

Originally posted by kevin.crawford.knight on ROS Answers with karma: 3 on 2018-08-19 Post score: 0

Answer: You need to plan a path for the end effector to follow and then control the joint motors at varying speeds over time to move the end effector along this path.
Planning can involve making sure the robot doesn't collide with itself, making sure any part of the robot doesn't try to drive through the table, varying the speeds of joints to give smooth motion of the end effector, and many other factors. Sounds difficult, right? It is! That's why we all use the MoveIt! stack to control our manipulators. Fortunately, it looks like the Kinova Robotics ROS software supports MoveIt! as of version 1.20. MoveIt is a very complex piece of software, but the simple things like what you want to do are relatively straightforward. You do need to learn a little about how it works, though. There are a lot of tutorials available. I recommend you start by going through that Kinova Robotics page to make sure it's all working, and then move on to the tutorials provided by MoveIt. The quickstart tutorial could be a good place to begin. If you want a simple API to do simple tasks, then the Python interface is the one for you. That tutorial will take you through the basics of telling the end effector to go somewhere. In particular, pay attention to the bit about adding objects to the planning scene. This is how you tell MoveIt that you have a table and chess pieces so that it doesn't try to drive through the table or throw chess pieces around the room like an angry toddler. The tutorials on the C++ API give a lot more detail on the things that are possible with MoveIt. The majority of them are possible with the Python API as well so don't feel like you have to learn C++ just to use MoveIt. If ultimately you want to program it all using bash, then I recommend you make a simple Python script that sends an end effector goal to the robot using MoveIt and executes that motion before returning. Then you can do the rest through bash. 
Originally posted by Geoff with karma: 4203 on 2018-08-19 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by PeteBlackerThe3rd on 2018-08-20: I'd just like to add that the OP's goal of moving the end effector I'm a straight line to a goal is actually a very challenging task and in some cases impossible where that route passes through a singularity. Hence the requirement to use a path planning algorithm to achieve this goal. Comment by kevin.crawford.knight on 2018-08-21: Thank you very much for the clear explanation. I got 90% of the way there with rosrun shell commands, but for the last 10%, looks like I should learn MoveIt and Python and 3D models. At least I know the path! Thanks for your patience @gvdhorn, much obliged. Comment by Geoff on 2018-08-21: You won't need to learn too much about 3D models to use MoveIt. The robot comes with its own 3D model, and for representing things in the environment you can get away with using boxes.
{ "domain": "robotics.stackexchange", "id": 31577, "tags": "ros, ros-kinetic, kinova" }