CNN output shape explanation
Question: I have the following sequential model: model = models.Sequential() model.add(Reshape(([1]+in_shp), input_shape=in_shp)) model.add(ZeroPadding2D((0, 2))) model.add(Conv2D(256, (1, 3),padding='valid', activation="relu", name="conv1",data_format="channels_first", kernel_initializer='glorot_uniform')) model.add(Dropout(dr)) model.add(ZeroPadding2D((0, 2))) model.add(Conv2D(80, (2, 3), padding="valid", activation="relu", name="conv2",data_format="channels_first", kernel_initializer='glorot_uniform')) model.add(Dropout(dr)) model.add(Flatten()) model.add(Dense(256, activation='relu', kernel_initializer='he_normal', name="dense1")) model.add(Dropout(dr)) model.add(Dense( len(classes), kernel_initializer='he_normal', name="dense2" )) model.add(Activation('softmax')) model.add(Reshape([len(classes)])) model.compile(loss='categorical_crossentropy', optimizer='adam') model.summary() and I got the following summary: _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= reshape_1 (Reshape) (None, 1, 2, 128) 0 _________________________________________________________________ zero_padding2d_1 (ZeroPaddin (None, 1, 6, 128) 0 _________________________________________________________________ conv1 (Conv2D) (None, 256, 6, 126) 1024 _________________________________________________________________ dropout_1 (Dropout) (None, 256, 6, 126) 0 _________________________________________________________________ zero_padding2d_2 (ZeroPaddin (None, 256, 10, 126) 0 _________________________________________________________________ conv2 (Conv2D) (None, 80, 9, 124) 122960 _________________________________________________________________ dropout_2 (Dropout) (None, 80, 9, 124) 0 _________________________________________________________________ flatten_1 (Flatten) (None, 89280) 0 _________________________________________________________________ dense1 (Dense) (None, 256) 22855936 
_________________________________________________________________ dropout_3 (Dropout) (None, 256) 0 _________________________________________________________________ dense2 (Dense) (None, 8) 2056 _________________________________________________________________ activation_1 (Activation) (None, 8) 0 _________________________________________________________________ reshape_2 (Reshape) (None, 8) 0 ================================================================= Total params: 22,981,976 Trainable params: 22,981,976 Non-trainable params: 0 The model works fine. But I want to understand something about the conv1 layer. Why has the width been reduced from 128 to 126? I am really confused about that: shouldn't it be the same value as in the previous layer? The same thing happens in the conv2 layer, where the height and width decrease from (10,126) to (9,124). Could someone explain why? Answer: In a convolution layer, the filter (1x3 and 2x3 in your case, not 3x3) is applied to the input to produce the output (feature map), sliding right and down by a step called the stride (not set in your case, so it defaults to 1). With padding='valid', the output dimensions shrink; with padding='same', the output dimensions match the input, because the borders are padded with zeros (which is exactly what your explicit ZeroPadding2D layers do).
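For 'valid' padding with stride 1, the output length along each axis is input − kernel + 1. A quick sketch in plain Python (no Keras required) reproduces the shapes in the summary above:

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Output length along one axis for a 'valid'-style convolution."""
    return (size + 2 * pad - kernel) // stride + 1

# conv1 uses a (1, 3) kernel: height 6 is unchanged, width shrinks 128 -> 126
assert conv_out(6, 1) == 6 and conv_out(128, 3) == 126
# conv2 uses a (2, 3) kernel on (10, 126): height 10 -> 9, width 126 -> 124
assert conv_out(10, 2) == 9 and conv_out(126, 3) == 124
```

This matches every shape in the model summary, which is why the shrinkage appears exactly where the kernel dimension is larger than 1.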
{ "domain": "datascience.stackexchange", "id": 4907, "tags": "machine-learning, neural-network, deep-learning, cnn, convolution" }
Replacing all occurrences in a String
Question: I have a string like this: val content = "some_text{macro1}another_text{macro2}text" I want to replace the {macro1} and {macro2} with macro1 and macro2, i.e. just to remove the { and }. I wrote a piece of code which works fine, but to me it seems very hard to read: val Pattern = """\{(.*)\}""".r Pattern.findAllIn(content).matchData.foldLeft(content) ( (newContent: String, current: Regex.Match) => { newContent.replace(current.group(0), current.group(1)) } ) How can I improve this code? Please note: since it's in Scala, I prefer it in the functional way. Answer: Your desire to use the "functional way" is not well-motivated. Why do it "the functional way" when the "other way" is not only easier to read, but also common practice, and well-understood? val stripCurly = "[{}]".r val replaced = stripCurly.replaceAllIn(content, "") If you want to force matching of the braces, consider: val pure = """\{([^}]*)\}""".r val pured = pure.replaceAllIn(content, "$1") Note the use of the "not a } inside the {}" logic in the regex. The examples above are runnable on ideone.
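Since the regex logic is language-agnostic, here is the same pair of approaches sketched with Python's re module for comparison (this is an illustration, not part of the original Scala answer):

```python
import re

content = "some_text{macro1}another_text{macro2}text"

# Approach 1: strip every brace, without requiring them to be paired.
stripped = re.sub(r"[{}]", "", content)

# Approach 2: require a matched {...} pair and keep only its contents.
pured = re.sub(r"\{([^}]*)\}", r"\1", content)

assert stripped == pured == "some_textmacro1another_textmacro2text"
```

Both give the same result on well-formed input; only the second refuses to touch an unpaired brace.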
{ "domain": "codereview.stackexchange", "id": 26872, "tags": "strings, regex, scala" }
How to delete pending jobs on IBM Quantum Computer to retrieve units?
Question: I am trying to run some code using qiskit, but I get an error message that I have run out of the necessary Experiment Units. I tried to remove pending jobs using the API with the following code: for job in api.get_jobs(): if job["status"] == "RUNNING": api.cancel_job(id_job=job["id"], hub=None, group=None, project=None, access_token=None, user_id=None) but it didn't work. Am I even going in the right direction, or is there some other way to retrieve these used Experiment Units? I have read that they are normally given back just after execution of the program finishes, or after 24 hours (whichever comes first), but I have now been waiting for more than two days and nothing happens. Answer: Cancel Job is only available for the IBM Q Network, not for IBM Q Experience: https://github.com/QISKit/qiskit-api-py/blob/master/IBMQuantumExperience/IBMQuantumExperience.py#L795 In the coming weeks, we hope that it will be available for IBM Q Experience too. Regarding the credits... we are analyzing the problem. We have refilled your credits. If you have any other issue, please post in the qiskit (https://qiskit.org/) public Slack channel :).
{ "domain": "quantumcomputing.stackexchange", "id": 163, "tags": "ibm-q-experience, qiskit" }
How does a double axis movement system work?
Question: I don't know what the right term for the machine is, but I would like to look into machines that can move something with two bars. For example, the machine that moves the basketball hoop in Stuff Made Here's never-miss basketball hoop. Does anyone know what this is actually called and how I might start to build one that can move, say, six inches in any direction? Answer: The overall "study" of such movement, particularly with regard to the math, mechanics, and operation/programming, is called kinematics. You'll find references to x-y or cartesian kinematics and, in the 3D printer world, delta kinematics. There are a few other unusual implementations, such as the hangprinter and a few arm printer designs, one of which uses polar kinematics. For light, fast x/y movement, consider checking out CoreXY studies/constructions. The stepper motors are fixed to a frame, with the belts running through a series of pulleys to create carriage movement.
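As a concrete illustration of the CoreXY idea mentioned above: because both belts wrap around both motors, the carriage position is a linear combination of the two belt displacements. A minimal sketch (sign conventions vary from machine to machine):

```python
def motors_to_carriage(da, db):
    """Map belt displacements of motors A and B to carriage (x, y) motion.

    Standard CoreXY kinematics: turning both motors the same way moves the
    carriage along one axis; turning them in opposite ways moves the other.
    """
    dx = 0.5 * (da + db)
    dy = 0.5 * (da - db)
    return dx, dy

assert motors_to_carriage(1.0, 1.0) == (1.0, 0.0)   # same direction -> pure X
assert motors_to_carriage(1.0, -1.0) == (0.0, 1.0)  # opposite -> pure Y
```

This is why both motors can stay fixed to the frame: every carriage move is expressed as a pair of belt moves, keeping the moving mass low.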
{ "domain": "engineering.stackexchange", "id": 4591, "tags": "mechanical-engineering" }
Finding whether the language is CFL or regular
Question: $L = (0^i 1)^n$ where i=1,2,3,4...n and n>=0. For example, 00010001 doesn't belong to the language, since n=2 but i=3 at the beginning, while 001001001 belongs to L, since n=3 and i=2 in all cases. I know the above language is not regular, because the value of 'i' depends on the value of 'n' and checking it requires a comparison, so a DFA is not possible. But I am not sure which class of languages it belongs to, because to check the number of zeroes, 'i', we need the value of 'n', which can be found only after reading the entire string. Can someone please help? Answer: Suppose there exists a CFG whose language is $L$, and let $k$ be the pumping length. Consider the string $(0^k1)^k \in L$. Then, by the pumping lemma, there exists a decomposition $(0^k1)^k = uvwxy$ where $|vwx| \leq k$, $vx \neq \varepsilon$, and for each $i \geq 0$, $uv^iwx^iy \in L$. Since $|vx| \leq |vwx| \leq k$, there are four types of values of $vx$: either $vx \in 0^+$, $vx \in 0^+1$, $vx \in 10^+$, or $vx \in 0^+10^+$. Try showing that in each of these cases, pumping $vx$ gives a string not in $L$, thus giving a contradiction and demonstrating that $L$ is not context-free.
{ "domain": "cs.stackexchange", "id": 10356, "tags": "formal-languages, regular-languages" }
Image multi class classifier CNN
Question: I have a problem. I am designing a multiclass classifier to classify medical images into disease grades. There are 6 grades; with each grade the joint deforms a little more. My original dataset was imbalanced: each class had about 16 to 200 images that are very, very similar, differing only in small details. So I applied some contrast adjustments and flips to get roughly 500 images per class. I trained the model with a VGG16 architecture, but it doesn't work (about 30% accuracy). I also built a simple model with 2 convolution layers and got about 70% accuracy, but my recall is very, very low (0.001 to 0.3), and when I predict on an image, everything goes to the incorrect class. I don't know how to correct my model. Should I use other metrics, or some architecture suited to this type of image? Here's a sample of the code: https://github.com/Franciscogtu/OARSI-IMAGE-CLASSIFIER.git Thank you for the help. Answer: Have you tried a batch normalization layer between the Conv2D layers and the dense top? I'd also keep the number of filters low, maybe 32 or 64. Same for the number of nodes in the top: there is no point in having a huge dense top with hundreds of nodes.
{ "domain": "datascience.stackexchange", "id": 11241, "tags": "cnn, predictive-modeling, image-classification, multiclass-classification, image-preprocessing" }
What were the intention/conclusions for Michelson-Morley experiment?
Question: Which of the following were the intentions of M&M? to disprove the existence of aether. to show that the aether has no effect on matter and energy and therefore is as good as non-existent. Feel free to insert more reasons/intentions. Which of the following were the corollaries/conclusions? Aether indeed does not exist M&M does not prove or disprove the existence of aether. M&M was a pointless experiment. Feel free to add more corollaries/conclusions. Answer: The purpose of the Michelson–Morley experiment was to detect the motion of the lab relative to the inertial system of the luminiferous aether, i.e. the "aether wind". The theory that electromagnetic waves were waves of a composite medium – analogously to sound being waves of the air – predicted that the speed of light should change to $c-v$ and $c+v$ if we move relative to the preferred frame at speed $v$ (in the direction of light or against it, respectively). So neither of your entries 1, 2 in the first list describes the situation correctly. The intent was exactly the opposite (not that it matters too much). The conclusion is 1 in the second list of yours: aether indeed doesn't exist (or doesn't pick a preferred frame) and electromagnetic waves are waves that don't require any medium and that violate the rules for the addition of velocities (the speed of light is always $c$, not $c\pm v$, regardless of the speed of the source or the detector), except that you should erase "indeed" because no one had expected that result, not even Einstein, who was 8 at that time in 1887. The MM experiment may be viewed as the primary experimental support for special relativity.
However, it's also another historical fact that it hasn't played a key role for Einstein while developing special relativity – Einstein's reasoning was entirely theoretical, he didn't refer to the MM experiment, and the only historical evidence that he was actually aware of it was Einstein's reference to a paper by Lorentz that did mention the MM experiment.
{ "domain": "physics.stackexchange", "id": 4373, "tags": "special-relativity, experimental-physics, speed-of-light, history, aether" }
Is a recurrent layer same as LSTM or single-layered LSTM?
Question: In an MLP, there are neurons that form a layer. Each hidden layer gives a vector of numbers that is the output of that layer. In a CNN, there are kernels that form a convolutional layer. Each layer gives feature maps that are the output of that layer. In an LSTM, there are cells that form a recurrent layer. Each layer gives a sequence that is the output of that layer. This is my understanding of the basic terminology regarding MLPs, CNNs, and LSTMs. But consider the following description of the number of layers in an LSTM in PyTorch: num_layers – Number of recurrent layers. E.g., setting num_layers=2 would mean stacking two LSTMs together to form a stacked LSTM, with the second LSTM taking in outputs of the first LSTM and computing the final results. Default: 1 The description uses "number of recurrent layers" and "LSTM" in a similar manner. How should I understand this? Is it customary to consider a recurrent layer as an LSTM? Answer: Depending on the context, when people use the term LSTM, they either refer to an LSTM layer, an LSTM unit (like a recurrent unit in an RNN or a neuron in an MLP), or an LSTM neural network (i.e. an RNN that uses LSTM units or layers). In TensorFlow, an LSTM is a layer, so you can stack multiple LSTMs to create deeper architectures. In PyTorch, the class LSTM can create an LSTM layer or multiple LSTM layers stacked together. You also have an LSTMCell, which should be just one LSTM layer. My answer here should also be useful.
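The "stacking" in num_layers=2 can be illustrated with a toy vanilla recurrent layer in plain Python. This is an assumption-laden sketch (the function name and fixed scalar weights are made up; real LSTM layers add gating and learned weight matrices), but it shows the wiring the PyTorch docs describe, where the second layer consumes the full output sequence of the first:

```python
import math

def rnn_layer(xs, w=0.5, u=0.3):
    """One toy recurrent layer: h_t = tanh(w * x_t + u * h_{t-1})."""
    h, out = 0.0, []
    for x in xs:
        h = math.tanh(w * x + u * h)
        out.append(h)   # a recurrent layer emits one output per time step
    return out

seq = [1.0, -0.5, 0.25, 0.8]
layer1 = rnn_layer(seq)       # first recurrent layer
layer2 = rnn_layer(layer1)    # second layer reads layer1's whole sequence
```

With num_layers=2, PyTorch's nn.LSTM stacks its layers analogously; each layer is itself "an LSTM" in the sense of the documentation.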
{ "domain": "ai.stackexchange", "id": 3145, "tags": "terminology, recurrent-neural-networks, long-short-term-memory" }
Confusion in understanding a DFA question
Question: I am new to DFAs and I am trying to understand the following question. The alphabet is Σ = {0, 1}. Construct a DFA for L = {0^n: n is either a multiple of 3 or a multiple of 5}. Question: The above language does not say anything about the 1s. When we write strings in this language, we get L = {є, 000, 00000, 000000, 000000000, ...}. There are no ones in this language. According to the language description, 0010 would not be a valid string, right? So, in the DFA that we construct, do we need to show 1s? Can I just leave them out? Answer: Either make a 1-transition from every state to a non-final sink state, or add a note that any character that doesn't match an available transition goes to a non-final sink state. Which to choose depends on what format is usual for the course. I assume the reason that 1 is also in the alphabet is to trip up people who forget about this sink state.
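The sink-state construction from the answer can be sketched by simulating the DFA in Python: states 0 through 14 count the zeros seen so far modulo 15 (enough to decide divisibility by both 3 and 5), and reading any 1 falls into a non-final sink:

```python
SINK = "sink"

def dfa_accepts(s):
    """Simulate a DFA for L = {0^n : n % 3 == 0 or n % 5 == 0} over {0, 1}.

    States 0..14 track the count of zeros modulo 15; a 1 (or any symbol
    other than 0) moves to a non-final sink state and stays there.
    """
    state = 0
    for ch in s:
        if state == SINK or ch != "0":
            state = SINK                 # explicit dead state for stray 1s
        else:
            state = (state + 1) % 15
    return state != SINK and (state % 3 == 0 or state % 5 == 0)

assert dfa_accepts("") and dfa_accepts("000") and dfa_accepts("00000")
assert not dfa_accepts("0000") and not dfa_accepts("0010")
```

Sixteen states in total: fifteen for the zero counter plus the sink that handles every 1.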
{ "domain": "cs.stackexchange", "id": 17644, "tags": "finite-automata" }
Nullable Implementation for VB6/VBA
Question: Because I was spoiled with C# and the .NET framework, whenever I have to work with VB6 I feel like something's missing in the language. A little while ago I implemented a List<T> for VB6 (here), and before that I implemented String.Format() and a number of string-helper functions (here). Don't go looking for a StringFormat method in the VB6 language specs, that method is the one I've written. Today I would have liked to be able to declare a Nullable<bool> in VB6, so I implemented a class that allowed me to do that. I named this class Nullable and it goes like this: Private Type tNullable Value As Variant IsNull As Boolean TItem As String End Type Private this As tNullable Option Explicit Private Sub Class_Initialize() this.IsNull = True End Sub Now before I go any further I have to mention that I have used "procedure attributes" in the Value property, making it the type's default member: Public Property Get Value() As Variant 'default member Value = this.Value End Property Public Property Let Value(val As Variant) 'damn case-insensitivity... 'default member If ValidateItemType(val) Then this.Value = val this.IsNull = False End If End Property Public Property Set Value(val As Variant) 'used for assigning Nothing. 'Must be explicitly specified (e.g. Set MyNullable.Value = Nothing; Set MyNullable = Nothing will not call this setter) Dim emptyValue As Variant If val Is Nothing Then this.IsNull = True this.Value = emptyValue Else Err.Raise vbObjectError + 911, "Nullable<T>", "Invalid argument." End If End Property The ValidateItemType private method determines whether the type of a value is "ok" to be assigned as the instance's Value: Private Function ValidateItemType(val As Variant) As Boolean Dim result As Boolean If Not IsObject(val) Then If this.TItem = vbNullString Then this.TItem = TypeName(val) result = IsTypeSafe(val) If Not result Then Err.Raise vbObjectError + 911, "Nullable<T>", StringFormat("Type mismatch. 
Expected '{0}', '{1}' was supplied.", this.TItem, TypeName(val)) Else Err.Raise vbObjectError + 911, "Nullable<T>", "Value type required. T cannot be an object." result = False End If ValidateItemType = result End Function Private Function IsTypeSafe(val As Variant) As Boolean IsTypeSafe = this.TItem = vbNullString Or this.TItem = TypeName(val) End Function That mechanism is borrowed from the List<T> implementation I wrote before, and proved to be working fine. Shortly put, an instance of the Nullable class is a Nullable<Variant> until it's assigned a value - if that value is a Integer then the instance becomes a Nullable<Integer> and remains of that type - so the Value can only be assigned an Integer. The mechanism can be refined as shown here, to be more flexible (i.e. more VB-like), but for now I only wanted something that works. The remaining members are HasValue() and ToString(): Public Property Get HasValue() As Boolean HasValue = Not this.IsNull End Property Public Function ToString() As String ToString = StringFormat("Nullable<{0}>", IIf(this.TItem = vbNullString, "Variant", this.TItem)) End Function Usage Here's some test code that shows how the class can be used: Public Sub TestNullable() Dim n As New Nullable Debug.Print StringFormat("{0} | HasValue: {1} | Value: {2}", n.ToString, n.HasValue, n) n = False Debug.Print StringFormat("{0} | HasValue: {1} | Value: {2}", n.ToString, n.HasValue, n) n = True Debug.Print StringFormat("{0} | HasValue: {1} | Value: {2}", n.ToString, n.HasValue, n) Set n.Value = Nothing Debug.Print StringFormat("{0} | HasValue: {1} | Value: {2}", n.ToString, n.HasValue, n) On Error Resume Next n = "test" 'expected "Type mismatch. Expected 'T', 'x' was supplied." error Debug.Print Err.Description n = New List 'expected "Value type required. T cannot be an object." 
error Debug.Print Err.Description On Error GoTo 0 End Sub When called from the immediate pane, this method outputs the following: TestNullable Nullable<Variant> | HasValue: False | Value: Nullable<Boolean> | HasValue: True | Value: False Nullable<Boolean> | HasValue: True | Value: True Nullable<Boolean> | HasValue: False | Value: Type mismatch. Expected 'Boolean', 'String' was supplied. Value type required. T cannot be an object. Did I miss anything, or is this a perfectly acceptable implementation? One thing did surprise me: if I do Set n.Value = Nothing, the instance remains a Nullable<Boolean> as expected. However, if I do Set n = Nothing, not only will Debug.Print n Is Nothing print False, the instance gets reset to a Nullable<Variant> and ...the setter (Public Property Set Value) does not get called. As a result, I wonder if I have written a class with a built-in bug that makes it un-Nothing-able? Bonus After further testing, I have found that this: Dim n As New Nullable Set n = Nothing Debug.Print n Is Nothing Outputs False. However this: Dim n As Nullable Set n = New Nullable Set n = Nothing Debug.Print n Is Nothing Outputs True (both snippets never hit a breakpoint in the Set accessor). All these years I thought Dim n As New SomeClass was the exact same thing as doing Dim n As SomeClass followed by Set n = New SomeClass. Did I miss the memo? UPDATE Don't do this at home. After a thorough review, it appears an Emptyable<T> in VB6 is absolutely moot. All the class buys is a HasValue member, which VB6 already takes care of with its IsEmpty() function. Basically, instead of having a Nullable<Boolean> and doing MyNullable.HasValue, just declare a Boolean and assign it to Empty, and verify "emptiness" with IsEmpty(MyBoolean). Answer: I think the class itself might be mis-named, because it is really 'Empty-able', not Nullable or 'Nothing-able'. You have to keep in mind that Empty, Null, and Nothing are very different concepts in VB6. 
Setting an object to Nothing is basically just syntactic sugar for releasing the pointer to the Object. This is the same as asking for ObjPtr() to return Null for that instance (although there is no way to test this in VB6 - see the code and explanation below). Null is actually better to conceptualize in VB6 as a type rather than an uninitialized variable, as the code below demonstrates: Dim temp As Variant 'This will return "True" Debug.Print (temp = Empty) 'This will return "False" Debug.Print (IsNull(temp)) temp = Null 'This will return "True" Debug.Print (IsNull(temp)) 'This will return "Null" Debug.Print (TypeName(temp)) This brings me to the explanation of why your class should really be referred to as 'Empty-able'. A Variant is best thought of as an object with 2 properties - a type and a pointer. If it is uninitialized, it basically has a pointer to Nothing and a type of Empty. But it isn't Null, because the Variant itself still exists with its default "properties". However, if I do Set n = Nothing, not only will Debug.Print n Is Nothing print False, the instance gets reset to a Nullable and ...the setter (Public Property Set Value) does not get called This is because of VB6's obnoxious default behavior when you use a reference to an object that was set to nothing. It "helpfully" creates a new object for you, as can be verified by the code below - before the second call to ObjPtr(temp), it implicitly runs Set temp = New Test. You should be able to verify this with a Debug.Print in Class_Initialize(). Private Sub Testing() Dim temp As New Test Debug.Print (ObjPtr(temp)) Set temp = Nothing 'The code below instantiates a new Test object, because it is used after being released. Debug.Print (ObjPtr(temp)) End Sub VB6 treats setting an Object equal to Nothing as a special case, so it never calls the Property Set. What it is basically doing is: AddressOf(n) = AddressOf(Nothing). EDIT: Excellent explanation of how Variants work under the hood here.
{ "domain": "codereview.stackexchange", "id": 6114, "tags": "vba, type-safety, vb6, null" }
Defining the position of equilibrium
Question: I've read quite a few other answers on this site such as this one, but can't quite seem to understand fully yet. Say we have a container with gases reacting such that the number of moles on both sides of the equation are not equal. If we then, for instance, reduce the volume of the container, the total pressure increases and the value of the quotient Q will also change. The net reaction then favours a particular direction so that the mole fractions change sufficiently for Q to return to the value of K, which remains constant. This is commonly described by saying equilibrium shifts to either the left or the right. Whilst it is true that after the system has again reached equilibrium the mole fractions have changed (so in a sense, the new equilibrium has shifted to one side in terms of number of moles), the value of Q is equal to its initial value so in this sense the position of equilibrium as defined by K has not changed. The same applies to changing the concentration of a species; the actual concentrations of every species after the system has re-equilibrated will be different, however the value of Q will have returned to its previous value. The only situation that I can think of where the position of equilibrium actually shifts is when a change in temperature alters the value of K. My question is, essentially, is the term 'position of equilibrium' more of a qualitative descriptor of the composition of reactants and products? I initially thought it meant the value of Q at equilibrium (or K), but evidently in gaseous reactions the equilibrium yield does change with changes in pressure though K remains constant, so such a definition would not work. Thank you! Answer: "Shifting" an equilibrium Unfortunately for learners of chemistry, statements like "the equilibrium shifts to the right" are quite common jargon. 
What is meant by this, speaking in more accurate technical terms, is the following: A reaction had reached equilibrium (net reaction is zero), and then some change occurred so that it was no longer in equilibrium. Now, there is a net forward reaction in the direction of the products ("to the right" refers to reactants turning into products). "Shifting to the left" describes a similar sequence of events, just in the reverse direction. New equilibrium or reestablish equilibrium? The OP is correct in saying that unless the temperature changed, the equilibrium constant does not change, and the reaction quotient Q of the first equilibrium state is equal to that of the second. So you could say you are back to the same equilibrium. Typically, however, the mole fractions, concentrations or partial pressures will be distinct for the first and the second equilibrium. So you could also say the reaction reaches a new equilibrium. If you say it reaches equilibrium again, you are sufficiently vague to be correct. Position of equilibrium The term "position of equilibrium" is ill-defined. To avoid any confusion, it is better to distinguish between reaction quotient (which is equal to the equilibrium constant for both the first and second equilibrium) and the set of concentrations (which is typically different).
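The point that Q returns to K while the individual concentrations change can be checked numerically. Below is a sketch for a hypothetical gas reaction A ⇌ 2B with a concentration-basis K; every number is illustrative, not data from any real system:

```python
import math

K = 0.5                          # illustrative equilibrium constant
a, b = 1.0, math.sqrt(K)         # initial equilibrium: Q = b**2 / a == K

# Halving the container volume doubles every concentration...
a, b = 2.0 * a, 2.0 * b
Q_disturbed = b**2 / a           # ...so Q jumps to 2K: net reverse reaction

# Let x of A re-form (consuming 2x of B): (b - 2x)**2 / (a + x) = K,
# i.e. the quadratic 4x**2 - (4b + K)x + (b**2 - K*a) = 0.
p, q, r = 4.0, -(4.0 * b + K), b**2 - K * a
x = (-q - math.sqrt(q**2 - 4.0 * p * r)) / (2.0 * p)  # physically valid root
a_new, b_new = a + x, b - 2.0 * x
Q_new = b_new**2 / a_new         # back to K, though a_new and b_new differ
```

Running this shows Q_disturbed = 2K immediately after the compression, while Q_new equals K again even though the re-equilibrated concentrations are different from the original ones, exactly the distinction drawn above.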
{ "domain": "chemistry.stackexchange", "id": 12542, "tags": "equilibrium" }
Density of Receptors of a mammal
Question: I know it's a very open question; it's for a paper. As a reference, I'm looking for something like the average density of insulin receptors per cell in a human tissue. I want to compare it to the density of MC1R (around 1000 per cell), which the paper describes as a low amount. Answer: It would depend on the type of tissue. But if you are interested in any tissue, then have a look at this BioNumbers entry. It says that there are 100000 insulin receptors per cell in rat adipose tissue. Divide that by the surface area of the cell and you'll get the density. Cell dimensions can also be found in BioNumbers.
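As a rough sketch of the suggested calculation (the cell radius below is purely illustrative, not a measured value; real adipocytes are considerably larger and vary widely in size):

```python
import math

receptors_per_cell = 100_000     # BioNumbers figure quoted in the answer
radius_um = 10.0                 # hypothetical spherical-cell radius, in µm
area_um2 = 4.0 * math.pi * radius_um ** 2   # sphere surface, ~1257 µm²
density = receptors_per_cell / area_um2     # receptors per µm² of membrane
```

With these illustrative numbers the density comes out around 80 receptors per µm²; substituting a real cell dimension from BioNumbers gives the figure to quote.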
{ "domain": "biology.stackexchange", "id": 5716, "tags": "cell-membrane, receptor" }
Translate SOAP response into a CSV using Python
Question: I have this XML from a SOAP call: <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"> <soapenv:Header/> <soapenv:Body> <SessionID xmlns="http://www.gggg.com/oog">5555555</SessionID> <QueryResult xmlns="http://www.gggg.com/oog/Query" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <Code>testsk</Code> <Records> <Record> <dim_id>1</dim_id> <resource_full_name>Administrator, Sir</resource_full_name> <resource_first_name>Sir</resource_first_name> <resource_last_name>Administrator</resource_last_name> <resource_email>username@mailserver.com</resource_email> <resource_user_name>admin</resource_user_name> </Record> <Record> <dim_id>2</dim_id> <resource_full_name>scheduler, scheduler</resource_full_name> <resource_first_name>scheduler</resource_first_name> <resource_last_name>scheduler</resource_last_name> <resource_email>username@mailserver.com</resource_email> <resource_user_name>scheduler</resource_user_name> </Record> My goal: To parse each Record's sub-elements <dim_id> ... <resource_user_name> and save each record as a row in a CSV. 
My Code: dim_id_list = [] full_name_list = [] first_name_list = [] last_name_list = [] resource_email_list = [] resource_user_name_list = [] root = et.parse('xml_stuff.xml').getroot() for dim_id in root.iter('{http://www.gggg.com/oog/Query}dim_id'): dim_id_list.append(dim_id.text) for resource_full_name in root.iter('{http://www.gggg.com/oog/Query}resource_full_name'): full_name_list.append(resource_full_name.text) for resource_first_name in root.iter('{http://www.gggg.com/oog/Query}resource_first_name'): first_name_list.append(resource_first_name.text) for resource_last_name in root.iter('{http://www.gggg.com/oog/Query}resource_last_name'): last_name_list.append(resource_last_name.text) for resource_email in root.iter('{http://www.gggg.com/oog/Query}resource_email'): resource_email_list.append(resource_email.text) for resource_user_name in root.iter('{http://www.gggg.com/oog/Query}resource_user_name'): resource_user_name_list.append(resource_user_name.text) rows = zip(dim_id_list, full_name_list, first_name_list, last_name_list, resource_email_list, resource_user_name_list) with open('test.csv', "w", encoding='utf16', newline='') as f: writer = csv.writer(f) for row in rows: writer.writerow(row) Is there a better way to loop through the Records? This code is terribly verbose. I tried this: for record in root.findall('.//{http://www.gggg.com/oog/Query}Record'): dim_id = record.find('dim_id').text # Extract each attribute, save to list. etc. But I am getting attribute errors trying to access each record's text property. Answer: It makes little sense to slice the data into "vertical" lists, then transpose them back into rows using zip(). Not only is it cumbersome to do it that way, it's also fragile. If, for example, one records is missing its resource_email child element, then all subsequent rows will be off! You can use writer.writerows(rows) instead of the for row in rows: writer.write(row) loop. 
Furthermore, you can pass a generator expression so that the CSV writer extracts records on the fly as needed. It's customary to import xml.etree.ElementTree as ET rather than as et. Suggested solution import csv from xml.etree import ElementTree as ET fieldnames = [ 'dim_id', 'resource_full_name', 'resource_first_name', 'resource_last_name', 'resource_email', 'resource_user_name', ] ns = {'': 'http://www.gggg.com/oog/Query'} xml_records = ET.parse('xml_stuff.xml').find('.//Records', ns) with open('test2.csv', 'w', encoding='utf16', newline='') as f: csv.DictWriter(f, fieldnames).writerows( { prop.tag.split('}', 1)[1]: prop.text for prop in xr } for xr in xml_records ) If you are certain that each <Record> always has its child elements in the right order, you can simplify it further by not explicitly stating the element/field names: import csv from xml.etree import ElementTree as ET ns = { '': 'http://www.gggg.com/oog/Query', 'soapenv': 'http://schemas.xmlsoap.org/soap/envelope/', } records = ET.parse('xml_stuff.xml').find('soapenv:Body/QueryResult/Records', ns) with open('test2.csv', 'w', encoding='utf16', newline='') as f: csv.writer(f).writerows( [prop.text for prop in r] for r in records )
{ "domain": "codereview.stackexchange", "id": 43654, "tags": "python, csv, xml" }
Unable to locate package industrial
Question: Hi, I'm trying to set up a connection to a UR5. I use Ubuntu Trusty (14.04) with ROS Indigo. I followed the tutorial Getting Started with a Universal Robot and ROS-Industrial. There it is said that I need ROS-Industrial's industrial_core package, so I followed the Industrial/Install tutorial. But when trying the commands sudo apt-get install ros-hydro-industrial-core or sudo apt-get install ros-hydro-industrial-desktop I get the error E: Unable to locate package ros-hydro-industrial-desktop I also tried the other steps of Getting Started with a Universal Robot and ROS-Industrial, but then I get the error [FATAL] [1424787993.220925675]: Exception while loading controller manager 'moveit_simple_controller_manager/MoveItSimpleControllerManager': According to the loaded plugin descriptions the class moveit_simple_controller_manager/MoveItSimpleControllerManager with base class type moveit_controller_manager::MoveItControllerManager does not exist. I guess that's part of the ROS-Industrial package? Any help is appreciated. Please consider that I'm a real beginner in ROS. Thanks and regards! Originally posted by bluefish on ROS Answers with karma: 236 on 2015-02-24 Post score: 0 Answer: So you are running Ubuntu Trusty (14.04) with ROS Indigo. The ROS-Industrial packages have not yet been released for Indigo, except those from the Universal Robots repository, so you cannot install industrial-desktop on Indigo yet. In addition, you cannot install ros-hydro-x-x packages on Ubuntu Trusty, as Hydro is not supported on Trusty. Try installing the Universal Robot packages like this: sudo apt-get install ros-indigo-universal-robot That should work. PS: You don't need industrial_core for the Universal Robot packages. That is only needed for the other drivers in ROS-Industrial. I've logged an issue about that (universal_robot/issues/173). 
Originally posted by gvdhoorn with karma: 86574 on 2015-02-24 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by bluefish on 2015-02-24: It works exactly as you wrote. Thanks for the information!! :) And yes you're right about my Ubuntu and ROS versions as well. Thanks again!!
{ "domain": "robotics.stackexchange", "id": 20973, "tags": "ros, roslaunch, ros-industrial, ur5, ros-indigo" }
TurtleBot3 HDMI not working
Question: Hi, my team and I are having problems getting the HDMI output working on the Intel Joule, which is required to set up Ubuntu on the TurtleBot3. We tried several monitors, micro HDMI cables and converters, but we never got any output. The problem does seem to be related to the Intel Joule itself, as we were not able to get any output even during the early boot stage (BIOS). Nevertheless, the board itself is working, as we can interact with the running Joule via the serial command line interface. There are numerous posts on Intel's forums, but none of them came up with a working solution. BIOS version is #193 https://communities.intel.com/thread/113459 https://communities.intel.com/thread/108466?start=15&tstart=0 https://discourse.ros.org/t/waffle-hdmi-no-signal/2687/7 https://github.com/ROBOTIS-GIT/turtlebot3/issues/77 Best Regards, Jürgen Originally posted by juergen on ROS Answers with karma: 1 on 2018-01-08 Post score: 0 Answer: Hi, connect your micro HDMI; when you start up the Joule, press the F7 button to boot from USB to install Ubuntu. Originally posted by bajramg with karma: 16 on 2018-01-11 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 29685, "tags": "turtlebot, turtlebot3" }
Open launch files with gedit using HTML default
Question: Hello, when I open a file with the .launch extension using gedit, the file opens as Plain Text. After I choose the HTML option in the Plain Text menu, close the file and open it again, the file opens with HTML formatting. My question is: is there a way to make HTML the default for the entire .launch extension? Originally posted by Thadeu Brito on ROS Answers with karma: 15 on 2017-10-16 Post score: 0 Answer: This is more a Linux/platform configuration issue than a ROS issue. Could you please post this sort of question on a more appropriate forum in the future? I'm not sure it's the best way (or even the recommended way), but it would appear that configuring a default highlighter for specific file types is possible using a 'language definition file'. See "How can I set a default syntax highlighting in Gedit?" on Ask Ubuntu. "After I choose the option HTML in Plain Text Menu": know that .launch files are XML, not HTML. If gedit has a highlighter for that, I would use the XML highlighter. Originally posted by gvdhoorn with karma: 86574 on 2017-10-16 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by Thadeu Brito on 2017-10-17: Thank you! I understand that this post is a question for another platform. When I made this post, I thought some ROS users would know about it. In the future, I won't make this mistake!
{ "domain": "robotics.stackexchange", "id": 29092, "tags": "ros" }
How would Newtonian gravity work in a 1-dimensional universe?
Question: I have come across the idea of gravity in different dimensional space. From the standard formula for gravity $F=\frac{GMm}{r^2}$ I have found that the $1/r^2$ term is a result of a three dimensional space which a gravitational field permeates. In general, for an $n$ dimensional space the force of gravity is $F\propto \frac{1}{r^{n-1}}$. In 1 dimensional space this leads to an odd conclusion of $F\propto 1$: the force doesn't decrease as the distance between objects increases. This is quite counter-intuitive. Is there a false assumption which I am making? If this conclusion is true then is there a simple explanation for why the gravitational field doesn't weaken over distance? We are used to thinking of gravity as just a force but really it is a field; perhaps if this was explained in terms of fields it would make sense. Answer: One way to think about it is the gravitational flux: In 3D, when you are further away from an object, you receive a lesser portion of the overall flux, whereas in 1D, the flux does not decrease with the distance. This also explains why an infinite plane in 3D also yields a constant gravitational field, as the flux in this case also does not change with distance.
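A quick numerical sketch of the general $n$-dimensional law above (with $GMm$ set to 1 for illustration; the function name is mine, not from the question):

```python
def force(r, n):
    """Magnitude of the gravitational force at separation r
    in n spatial dimensions, with G*M*m set to 1."""
    return 1.0 / r ** (n - 1)

# In 3D, the familiar inverse-square falloff:
print(force(1.0, 3), force(2.0, 3))   # 1.0 0.25
# In 1D, the exponent is zero, so the force is distance-independent:
print(force(1.0, 1), force(2.0, 1), force(100.0, 1))   # 1.0 1.0 1.0
```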
{ "domain": "physics.stackexchange", "id": 38349, "tags": "forces, newtonian-gravity, gauss-law, spacetime-dimensions" }
Law of equivalence
Question: A 100 g mixture of the nitrates of two metals A and B is heated to constant weight, leaving 50 g of the corresponding oxides of the metals {i.e. METAL NITRATES (100 g) ---> OXIDES (50 g) + (some nitrogen oxide gases)}. The equivalent weights of A and B are 103 and 31 respectively. What is the percent composition of A and B in the mixture? In this question the book has given that equivalent weight of nitrate of A = equivalent weight of A + equivalent weight of nitrate ion, and similarly, equivalent weight of oxide of A = equivalent weight of A + equivalent weight of oxide. How do I find the EQUIVALENT WEIGHTS of nitrate and oxide? Can you tell me how I should calculate them, or where I am going wrong? Or can you just solve this question? Answer: Finally your problem reads correctly! Do you recall how equivalent weights are calculated for redox species and anions? What is the charge on NO$_3^-$? It is -1, so equivalent weight = molecular weight / |charge|. Anything divided by 1 stays the same. Coming to the oxide, O$^{2-}$, the charge is -2, so the equivalent weight is 16/2 = 8. Keep in mind that equivalent weights vary from reaction to reaction. Don't apply these rules without understanding the nature of the reaction. Hope it is clear now. As an exercise, calculate the equivalent weight of KMnO$_4$ when it reduces to Mn$^{2+}$ in acidic medium. This time, you have to check how many electrons are involved.
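To illustrate the rule numerically (a small sketch; the helper function and the rounded formula weights are mine, not from the answer):

```python
def equivalent_weight(formula_weight, n):
    """Equivalent weight = formula weight divided by n, where n is
    the absolute ionic charge, or the electrons transferred in a redox step."""
    return formula_weight / n

print(equivalent_weight(62.0, 1))   # nitrate NO3-: |charge| = 1, stays 62.0
print(equivalent_weight(16.0, 2))   # oxide O2-: |charge| = 2, gives 8.0
# The exercise: KMnO4 (formula weight ~158) reducing to Mn2+ in acid
# transfers 5 electrons, so:
print(equivalent_weight(158.0, 5))  # 31.6
```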
{ "domain": "chemistry.stackexchange", "id": 13605, "tags": "physical-chemistry" }
Spinning basketball on water surface--preferential axis
Question: This question came to me while I was in the pool last month. I took a basketball and I was making it spin on the surface of the water in a few different ways. When the ball rested on the surface of the water, a majority of the ball was above the surface of the water, indicating that the density of the ball is less than half of the density of water. First I would spin the basketball so that the angular momentum vector was vertical, perpendicular to the water surface. Then I spun it so that the angular momentum vector was horizontal, parallel to the surface of the water. In each case, the spinning would slow down, although in the first case where the angular momentum vector was vertical the ball took longer to slow down. Finally, I spun the basketball at an angle, so that the angular momentum vector was neither parallel nor perpendicular to the water surface. What I saw was consistent with what I saw earlier. The horizontal component of the angular momentum vector decayed more quickly than the vertical component, so that after maybe 10 seconds the rotation was essentially with a vertical axis. So the question is why? Why does the horizontal component decay faster than the vertical component? I tried to think of it in terms of friction and normal forces, but there didn't seem to be a difference between the 2 cases. Answer: As a fluid mechanics guy, it seems clear to me that the resistance to spinning with a vertical axis should be much less than when the ball is spinning with a horizontal axis. Viscous friction is very different from ideal dry solid friction (where the shear stress is independent of relative velocity). In the case of viscous friction, the local drag shear stress on the object (ball) is dependent on the local velocity of the ball surface, relative to the (zero) velocity of the stationary fluid far from the ball. The velocities at the surface in each case are equal to the angular velocity times the distance from the axis of rotation. 
In the case of vertical rotation, the radius of rotation runs from zero at the vertical axis to the submerged contact radius (less than the ball radius) on the edge. In the case of horizontal rotation, the radius of rotation is on the order of the ball radius at all locations. So the shear velocities (and surface shear stresses) in the vertical axis case will, on average, be much lower than those with a horizontal axis. But, there is more to the story than this. The rate of change of angular velocity of the spinning ball is determined not by the local shear stress, but by the moment of the shear stress about the axis of rotation, integrated over the submerged surface. Since the radial moment arms in the vertical axis case are going to be significantly smaller than those for the horizontal axis case, the effect is magnified even more. The net result is that, for a given rotation rate, the drag torque with a vertical axis of rotation is going to be much less than the drag torque with a horizontal axis of rotation.
{ "domain": "physics.stackexchange", "id": 52397, "tags": "newtonian-mechanics, fluid-dynamics, rotational-dynamics, everyday-life" }
Let's say I have manufactured a prism from a non-dispersive medium, then light coming from air wouldn't split into colours right?
Question: Let's say I have manufactured a prism from a non-dispersive medium, then light coming from air incident on the prism wouldn't split into colours, right? I mean light still changes direction, but all colours would change direction by the same amount. Answer: In this hypothetical question, you're correct: if the index of refraction does not change as a function of the wavelength of light, then the angle of refraction will be the same for all wavelengths, and hence there is no dispersion. However, the Kramers-Kronig relations require that there be some dispersion.
{ "domain": "physics.stackexchange", "id": 99274, "tags": "optics, visible-light, refraction, frequency, dispersion" }
Will an ice cream scoop with oil-filled handle cool down my coffee more effectively than without the oil?
Question: This is my first question on the Physics site. I stink at thermodynamics so please forgive errors in my question. Here's the background: my coffeemaker makes coffee over 200°F. I want to rapidly cool it down. Target temp doesn't really matter but let's say to 130°F. For whatever reason* I thought I would cool down my coffee by freezing a thick rod of stainless steel and stirring it around in the coffee for a few seconds. Then it occurred to me that there are ice cream scoops with a non-toxic oil in the handle that is supposed to keep your hand from getting cold and also heat up the ice cream for easy scooping. (Coffee Joulies™ have paraffin wax in them.) https://www.amazon.com/dp/B0002U34EW/?tag=stackoverfl08-20 I like the idea of using this because now I don't have to buy a solid bar of stainless steel plus, I can scoop ice cream (albeit slowly since I am freezing the scoop.) My question is: how does the oil in the handle of the scoop increase the scoop's ability to cool down my coffee? The scoop is made of aluminum. I don't know how much oil is in the handle, or even what exactly the oil is. But this stack said it was oil: https://cooking.stackexchange.com/questions/46157/why-cant-this-ice-cream-scoop-go-in-the-dishwasher I'm not sure how the scoop is supposed to work, but I suspect that the oil stores (room temperature) heat and warms up the ice cream to make it easier to scoop. I'm hoping the opposite is true: that the oil will store (freezer temperature) lack-of-heat and absorb heat from the coffee. Is that sound? Also, if anyone thinks the scoop is going to explode, please mention that too. Thanks! *I don't want to use ice because it will water down the coffee. I don't want to use coffee ice cubes because I'm lazy and also I don't have room in my freezer for an extra tray. I don't want to use metal ice cubes or Coffee Joulies™ or stuff that I have to fish out of my glass because, well just because. 
I don't want to buy a cold plate like for beer kegs. Let's just go with the premise. Answer: TLDR; Get a heavy mug and chill it. The paraffin in the scoop is functioning as a heat capacitor more than a heat conductor. A solid aluminum scoop of the same dimensions would conduct heat almost as well for large temperature gradients, and better for small temperature gradients (the difference here is whether the temperature gradient can drive significant convective flow). When you first put the scoop in the coffee, the convection will be going like crazy, but my estimation below makes me suspect that the temperature gradient will have become very small before the coffee reaches the desired temperature. It sounds to me like you are looking for a method to reach your desired temperature very quickly, so I will ignore heat exchange with the surroundings, which occurs more slowly. To simplify the math, let's say you are cooling the coffee from 100 degrees C and the scoop is initially chilled to 0 degrees C. Let's also assume that there is 200 g of coffee, 100 g of aluminum, and 50 g of paraffin (I have the scoop you linked and weighed it). The heat capacity of the paraffin is about 2.5 J/gK, that of the aluminum is about 0.9 J/gK, while that of the coffee is about 4.1 J/gK. So without exchanging heat with the surroundings, the system will reach a temperature of about 79 degrees C. That's progress, but it's still pretty hot! That was ignoring the coffee mug, however. The thick mugs in my cabinet range from 400 g to 600 g, and the heat capacity of the ceramic is around 0.8 J/gK. So with the right chilled mug you can get almost twice the conductive cooling that the scoop gives you! That would bring you to a drinkable temperature very quickly, even without the scoop. So Paparazzi hit the nail on the head with his comment. As a final comment, the ice cream scoop manufacturer warns on their website to keep the scoop below 140 degrees F.
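The energy balance in the answer can be checked in a few lines (a sketch using the same assumed masses and specific heats, and ignoring heat exchange with the surroundings, as the answer does):

```python
def equilibrium_temp(parts):
    """parts: list of (mass_g, specific_heat_J_per_gK, initial_temp_C).
    Returns the common final temperature, assuming no heat loss."""
    num = sum(m * c * t for m, c, t in parts)
    den = sum(m * c for m, c, _ in parts)
    return num / den

coffee = (200, 4.1, 100)      # 200 g of coffee at 100 C
scoop  = [(100, 0.9, 0),      # 100 g of aluminum, chilled to 0 C
          (50, 2.5, 0)]       # 50 g of paraffin, chilled to 0 C
print(round(equilibrium_temp([coffee] + scoop), 1))   # ~79.2, the "about 79 C"

mug = (500, 0.8, 0)           # a ~500 g ceramic mug, chilled to 0 C
print(round(equilibrium_temp([coffee, mug]), 1))      # noticeably cooler
```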
{ "domain": "physics.stackexchange", "id": 46926, "tags": "thermodynamics" }
Why does Edema occur in Kwashiorkor?
Question: Edema in lower leg and face is a symptom of Kwashiorkor. It is the most distinguishing feature of it which distinguishes it from Marasmus. Why would decrease in amount of proteins cause Edema? Why doesn't it occur in Marasmus (which is both protein and energy deficiency)? Answer: Why would decrease in amount of proteins cause Edema? Proteins in the blood (esp. albumin, because it's the most abundant one there) cause fluid to come from the interstitial space into the capillary. This phenomenon is called oncotic pressure. On the other side there is hydrostatic pressure on the vessel wall, which causes fluid to exit the capillaries. This hydrostatic pressure gets lower as the blood passes through the capillary, while the oncotic pressure stays the same, because under normal conditions, proteins don't exit the capillary very much compared to their amount in blood. In the first part (closer to the arteriole), hydrostatic pressure is bigger, so the fluid exits the capillary site, while as you get closer to the venous site, the hydrostatic pressure gets lower than the oncotic pressure and the fluid flow is reversed. (see picture) If you lower the amount of proteins in the blood drastically (as in Kwashiorkor) the oncotic pressure is lower and therefore less fluid is reabsorbed from the tissue, which leads to its accumulation there - this results in edema. Why doesn't it occur in Marasmus (which is both protein and energy deficiency)? Sometimes, both features of kwashiorkor and marasmus are present in the patient, which is called marasmic kwashiorkor, and the distinction between these two conditions isn't that easy. However, normally in marasmus, there is also dehydration present, which balances the effects of decreased oncotic pressure. Cardiac output is usually decreased too, which leads to a decrease of hydrostatic pressure, so again, this can counterbalance the decrease of oncotic pressure.
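The capillary fluid balance described above is conventionally summarized by the Starling equation (standard physiology, added here for reference; it is not quoted in the original answer):

```latex
J_v = K_f \left[ (P_c - P_i) - \sigma (\pi_c - \pi_i) \right]
```

where $J_v$ is the net filtration rate, $P_c$ and $P_i$ are the capillary and interstitial hydrostatic pressures, $\pi_c$ and $\pi_i$ the corresponding oncotic pressures, $K_f$ the filtration coefficient, and $\sigma$ the reflection coefficient. Hypoalbuminemia in kwashiorkor lowers $\pi_c$, so net filtration $J_v$ out of the capillary rises and fluid accumulates in the tissue.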
{ "domain": "biology.stackexchange", "id": 8128, "tags": "physiology, proteins, human-physiology" }
tf2_ros buffer transform PointStamped?
Question: I'd like to transform a PointStamped using a python tf2_ros.Buffer, but so far I'm only getting type exceptions. The following example a point with only a y component is supposed to be trivially transformed into the frame it is already in: #!/usr/bin/env python import rospy import sys import tf2_ros from geometry_msgs.msg import PointStamped if __name__ == '__main__': rospy.init_node('transform_point_tf2') tf_buffer = tf2_ros.Buffer() tf_listener = tf2_ros.TransformListener(tf_buffer) rospy.sleep(1.0) pt = PointStamped() pt.header.stamp = rospy.Time.now() pt.header.frame_id = "map" pt.point.x = 0.0 pt.point.y = 1.0 pt.point.z = 0.0 try: pt2 = tf_buffer.transform(pt, "map") except: # tf2_ros.buffer_interface.TypeException as e: e = sys.exc_info()[0] rospy.logerr(e) sys.exit(1) rospy.loginfo(pt2) This results in: [ERROR] [/transform_point_tf2] [./transform_point_tf2.py]:[27] [<class 'tf2_ros.buffer_interface.TypeException'>] If PointStamped isn't the right type, then what type can I use? To do this manually I'm currently doing this: trans = self.tf_buffer.lookup_transform("map", target_frame, rospy.Time()) quat = [trans.transform.rotation.x, trans.transform.rotation.y, trans.transform.rotation.z, trans.transform.rotation.w] mat = tf.transformations.quaternion_matrix(quat) pt_np = [pt.point.x, pt.point.y, pt.point.z, 1.0] pt_in_map_np = numpy.dot(mat, pt_np) pt_in_map.x = pt_in_map_np[0] pt_in_map.y = pt_in_map_np[1] pt_in_map.z = pt_in_map_np[2] Originally posted by lucasw on ROS Answers with karma: 8729 on 2016-12-05 Post score: 5 Original comments Comment by jsanch2s on 2016-12-05: Could you post the output of tf_buffer.registration.print_me()? Comment by lucasw on 2016-12-05: In this example it is {} Answer: The code that registers transforms is in tf2_geometry_msgs. The following code will allow you to call the BufferInterface.transform function. import tf2_ros from tf2_geometry_msgs import PointStamped from geometry_msgs.msg import Point ... 
# Let src_pt be the point you want to transform. tf_buf = tf2_ros.Buffer() tf_listener = tf2_ros.TransformListener(tf_buf) target_pt = tf_buf.transform(src_pt, "target_frame") Note that you should not include a leading slash in the name of the target frame. Originally posted by J. Bromley with karma: 66 on 2017-01-30 This answer was ACCEPTED on the original site Post score: 5
{ "domain": "robotics.stackexchange", "id": 26404, "tags": "ros, tf2-ros, transform" }
Continuity of wave function derivative
Question: A particle is defined by a wave function, $Be^{-2x}$ for $x<0$ and $Ce^{4x}$ for $x>0$. For the wave function to be continuous at $x=0$, $B=C$. A wave function must be continuous for it to be valid. However, another condition we were taught, and I can find all over the internet, is that the first spatial derivative of the wave function must also be continuous. For this to be true at $x=0$, $B$ cannot equal $C$. Therefore why is this a valid wave function? Another problem: $\psi = iC/3 \times (x-2)$ for $2 \le x \le 5$ and $\psi = -iC/5 \times (x-10)$ for $5 \le x \le 10$, with $\psi = 0$ elsewhere. Again, the derivative is discontinuous at $x=5$ since the lines have different slopes. Still, this example is considered a valid wave function by the text. (Solid State Electronic Devices, 7th ed., 2.6(c) and 2.7) Can we simply ignore isolated points of discontinuity?
{ "domain": "physics.stackexchange", "id": 98016, "tags": "quantum-mechanics, wavefunction, schroedinger-equation" }
What happens when you are at $r=2M$ distance from a $M$ mass black hole?
Question: I was reading this question: Are gravitational time dilation and the time dilation in special relativity independent? And JohnRennie's answer: Now consider general relativity, and the effect of gravity. But first let me rewrite the special relativity equation for the line element in polar co-ordinates: $$\mathrm ds^2 = -\mathrm dt^2 +\mathrm dr^2 + r^2 (\mathrm d\theta^2 + \sin^2\theta~\mathrm d\phi^2) $$ and now I'll write the equation for the line element near a black hole, i.e. the Schwarzschild metric: $$ \mathrm ds^2 = -\left(1-\frac{2M}{r}\right)\mathrm dt^2 + \frac{\mathrm dr^2}{\left(1-\frac{2M}{r}\right)} + r^2 (\mathrm d\theta^2 + \sin^2\theta~\mathrm d\phi^2) $$ Question: So what happens when the mass $M$ and the distance are set so that $2M/r=1$? If you are at distance $r=2M$ from a black hole of mass $M$, then $(1-2M/r)=0$, and what will that mean? The term $-\left(1-\frac{2M}{r}\right)\mathrm dt^2$ will be zero, and the term $\frac{\mathrm dr^2}{1-2M/r}$ will be infinite. What does that mean? Answer: So what happens when the mass $M$ and the distance are set so that $2M/r=1$? If you are at distance $r=2M$ from a black hole of mass $M$, then $(1-2M/r)=0$, and what will that mean? The term $-\left(1-\frac{2M}{r}\right)\mathrm dt^2$ will be zero. The correspondence between the Schwarzschild metric and the flat-space metric implies that the time coordinate used here, $t$, is the same as the time coordinate used by a distant observer. And the term $\frac{\mathrm dr^2}{1-2M/r}$ will be infinite. When $r$ equals $2M$, you get a singularity; the value of $r$ that causes this is the Schwarzschild radius. In the case of the Sun, this is deep inside the body of the Sun, about 3 km from the core. What you must do is ensure that this is not a singularity caused by your choice of coordinates; remember you can use any coordinates, but it would make sense to use the most convenient ones. 
The method used to check for coordinate singularities is to calculate $$R_{abcd}R^{abcd} = \frac {48M^2}{r^6} $$ At $r=0$ this still diverges, and since it is a scalar, the same in all coordinate systems, $r=0$ is a true singularity. What do you mean by "$r=2M$ is a coordinate singularity (metric becomes singular) but not a true singularity (no invariants blow up)"? What does that mean? What is a true singularity then? From ACuriousMind's comment below: What you get at $r=2M$ is a coordinate singularity (metric becomes singular) but not a true singularity (no invariants blow up). There are other coordinates in which the metric remains finite at $r=2M$, so the event horizon is not a true singularity, although it is a region of some significance. The only true singularity in the Schwarzschild case is located at the center of the black hole. $r = 2M$ is precisely the location of the event horizon of the black hole.
{ "domain": "physics.stackexchange", "id": 34976, "tags": "general-relativity, black-holes, metric-tensor, coordinate-systems, event-horizon" }
Could someone give an example of this pic?
Question: This is a picture from Wiki (https://en.wikipedia.org/wiki/Quantum_logic_gate). Can someone give me a simple example using two qubits? Answer: Since the Fourier transform and the inverse Fourier transform for one qubit are both just the Hadamard gate, in the two-qubit case the following two circuits are equivalent. First circuit (Fourier transform applied on qubit $q_0$) Second circuit (inverse Fourier transform applied on qubit $q_1$) Both circuits return the state $$ |\psi\rangle = \frac{1}{2}(|00\rangle + |01\rangle + |10\rangle - |11\rangle). $$ EDIT: I have just realized that the gate $F$ is a general unitary transformation and not the QFT (I was misled by F = Fourier). However, my example is still valid. It is the particular two-qubit case asked about in the question.
{ "domain": "quantumcomputing.stackexchange", "id": 1432, "tags": "circuit-construction, ibm-q-experience, unitarity" }
Shell Script Image Replication
Question: I have a shell script which is being used on an embedded human machine interface (HMI). This script is used to copy a few files from a USB stick to a different place on the device, but with multiple instances of the same file under different names. If it helps: OS is Unix-like (BusyBox v1.11.2) Commands available are located here The filesystem is JFFS2 Any thoughts on improvements/optimizations would be appreciated. #! /bin/sh echo " ****Project Customisation Daemon STARTING**** " echo "Main Background:" # First, check the new image exists if [ -s "/disk/usbsda1/New_Main.png" ] then echo " -> Found new file to be used!" # Check if our directory is already present if [ -d "/opt/pclient/projekte/Main/" ] then # If our directory exists, remove the files echo " -> Found old directory, removing contents!" rm -rf /opt/pclient/projekte/Main/* else # If the directory isnt present, create it! echo " -> Creating new directory!" mkdir -p /opt/pclient/projekte/Main/ fi # Now copy our files! echo " -> Copying new files, please wait!" 
cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page2.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page3.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page4.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page5.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page6.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page7.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page8.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page9.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page10.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page11.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page12.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page13.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page14.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page15.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page16.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page17.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page18.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page19.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page20.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page21.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page22.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page23.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page24.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page25.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page26.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page27.png cp /disk/usbsda1/New_Main.png 
/opt/pclient/projekte/Main/Main_Page28.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page29.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page30.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page31.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page32.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page33.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page34.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page35.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page36.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page38.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page62.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page63.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page64.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page65.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page66.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page67.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page68.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page69.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page610.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page611.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page615.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page616.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page618.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page619.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page620.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page621.png cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page622.png cp /disk/usbsda1/New_Main.png 
/opt/pclient/projekte/Main/Main_Page623.png
cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page624.png
cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page626.png
cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page627.png
cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page628.png
cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page629.png
cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page630.png
cp /disk/usbsda1/New_Main.png /opt/pclient/projekte/Main/Main_Page631.png
else
echo " -> Could not find new image to use, skipping!"
fi
# Inform the user that the files have been created
echo " -> All background images created!"
echo " -> Now copying to project folder!"
# move the files into the appropriate place
cp /opt/pclient/projekte/Main/* /opt/pclient/projekte/default_prj/terminal_files/
# Next we need to create copies of any additional images we will need
echo "Help Pages:"
# First, check the new image exists
echo " -> Now creating Help logos"
if [ -s "/disk/usbsda1/New_Help.png" ]
then
echo " -> Found new file to be used!"
# Check if our directory is already present
if [ -d "/opt/pclient/projekte/Help/" ]
then
# If our directory exists, remove the files
echo " -> Found old directory, removing contents!"
rm -rf /opt/pclient/projekte/Help/*
else
# If the directory isnt present, create it!
echo " -> Creating new directory!"
mkdir -p /opt/pclient/projekte/Help/
fi
# Now copy our files!
echo " -> Copying new files, please wait!"
cp /disk/usbsda1/New_Help.png /opt/pclient/projekte/Help/Help_Page.png
cp /disk/usbsda1/New_Help.png /opt/pclient/projekte/Help/Help_Page2.png
cp /disk/usbsda1/New_Help.png /opt/pclient/projekte/Help/Help_Page3.png
cp /disk/usbsda1/New_Help.png /opt/pclient/projekte/Help/Help_Page4.png
cp /disk/usbsda1/New_Help.png /opt/pclient/projekte/Help/Help_Page5.png
cp /disk/usbsda1/New_Help.png /opt/pclient/projekte/Help/Help_Page6.png
cp /disk/usbsda1/New_Help.png /opt/pclient/projekte/Help/Help_Page7.png
cp /disk/usbsda1/New_Help.png /opt/pclient/projekte/Help/Help_Page8.png
else
echo " -> Could not find new image to use, skipping!"
fi
# Inform the user that the files have been created
echo " -> Applying changes to project!"
# move the files into the appropriate place
cp /opt/pclient/projekte/Help/* /opt/pclient/projekte/default_prj/terminal_files/
echo "Icon Pages:"
# Check the new Icon pages exists
echo " -> Now creating Icon logos"
if [ -s "/disk/usbsda1/New_Icon.png" ]
then
echo " -> Found new file to be used!"
# Check if our directory is already present
if [ -d "/opt/pclient/projekte/Icon/" ]
then
# If our directory exists, remove the files
echo " -> Found old directory, removing contents!"
rm -rf /opt/pclient/projekte/Icon/*
else
# If the directory isnt present, create it!
echo " -> Creating new directory!"
mkdir -p /opt/pclient/projekte/Icon/
fi
# Now copy our files!
echo " -> Copying new files, please wait!"
cp /disk/usbsda1/New_Icon.png /opt/pclient/projekte/Icon/Icon_Page.png
cp /disk/usbsda1/New_Icon.png /opt/pclient/projekte/Icon/Icon_Page2.png
else
echo " -> Could not find new image to use, skipping!"
fi
# Inform the user that the files have been created
echo " -> Applying changes to project!"
# move the files into the appropriate place
cp /opt/pclient/projekte/Icon/* /opt/pclient/projekte/default_prj/terminal_files/
echo "Additional Logos:"
# Check the new logo page exists
echo " -> Now creating Logo"
if [ -s "/disk/usbsda1/New_Logo.png" ]
then
echo " -> Found new file to be used!"
# Check if our directory is already present
if [ -d "/opt/pclient/projekte/Logo/" ]
then
# If our directory exists, remove the files
echo " -> Found old directory, removing contents!"
rm -rf /opt/pclient/projekte/Logo/*
else
# If the directory isnt present, create it!
echo " -> Creating new directory!"
mkdir -p /opt/pclient/projekte/Logo/
fi
# Now copy our files!
echo " -> Copying new files, please wait!"
cp /disk/usbsda1/New_Logo.png /opt/pclient/projekte/Logo/Logo2.png
else
echo " -> Could not find new image to use, skipping!"
fi
# Inform the user that the files have been created
echo " -> Applying changes to project!"
# move the files into the appropriate place
cp /opt/pclient/projekte/Logo/* /opt/pclient/projekte/default_prj/terminal_files/
# Next we need to copy the boot logo and startup screen logo
echo "Boot Logos:"
# First, check the new image exists
echo " -> Now creating boot logos"
if [ -s "/disk/usbsda1/New_Boot.png" ]
then
echo " -> Found new file to be used!"
# Check if our directory is already present
if [ -d "/opt/pclient/projekte/Boot/" ]
then
# If our directory exists, remove the files
echo " -> Found old directory, removing contents!"
rm -rf /opt/pclient/projekte/Boot/*
else
# If the directory isnt present, create it!
echo " -> Creating new directory!"
mkdir -p /opt/pclient/projekte/Boot/
fi
# Now copy our files!
echo " -> Copying new files, please wait!"
cp /disk/usbsda1/New_Boot.png /opt/pclient/projekte/Boot/loading_screen.png
else
echo " -> Could not find new image to use, skipping!"
fi
# Inform the user that the files have been created
echo " -> Applying changes to project!"
# move the files into the appropriate place
cp /opt/pclient/projekte/Boot/* /opt/pclient/projekte/default_prj/terminal_files/additional_files
setbootlogo /opt/pclient/projekte/default_prj/terminal_files/additional_files/loading_screen.png
echo "Wrapping up:"
# Fully tidy up, by removing any un needed file paths
echo " -> Removing un-needed files!"
rm -rf /opt/pclient/projekte/Main/*
rm -rf /opt/pclient/projekte/Boot/*
rm -rf /opt/pclient/projekte/Help/*
rm -rf /opt/pclient/projekte/Icon/*
rm -rf /opt/pclient/projekte/Logo/*
echo " -> Removing un-needed directories!"
rmdir /opt/pclient/projekte/Main
rmdir /opt/pclient/projekte/Boot
rmdir /opt/pclient/projekte/Help
rmdir /opt/pclient/projekte/Icon
rmdir /opt/pclient/projekte/Logo
echo " ****Project Customisation Daemon COMPLETE**** "

Answer:

Extract repeated values to variables

The most important thing is to avoid copy-pasting absolute paths everywhere. If the path ever changes, it's a hassle to replace all the strings in the script. It also makes it difficult to test the script with dummy directories instead of the real ones. So, always put constants in variables, for example:

SRCDIR=/disk/usbsda1
WORKDIR=/opt/pclient/projekte
DSTDIR=/opt/pclient/projekte/default_prj/terminal_files

Similarly, when using these constants repeatedly, it's good to introduce a temporary variable to capture the common element. For example, instead of this:

cp "$SRCDIR"/New_Main.png "$WORKDIR"/Main/Main_Page.png
cp "$SRCDIR"/New_Main.png "$WORKDIR"/Main/Main_Page2.png
cp "$SRCDIR"/New_Main.png "$WORKDIR"/Main/Main_Page3.png

I would do it like this:

IMAGE="$SRCDIR"/New_Main.png
DIR="$WORKDIR"/Main
cp "$IMAGE" "$DIR"/Main_Page.png
cp "$IMAGE" "$DIR"/Main_Page2.png
cp "$IMAGE" "$DIR"/Main_Page3.png

These kinds of extractions to variables make the script more flexible, as you can change a path in one place and it will affect all the places where it's used.
Another good thing is that a variable serves as a label of the purpose, the intention, which can often be more descriptive than the actual hardcoded string.

A note about quoting

I didn't quote SRCDIR=/disk/usbsda1 because it's unnecessary. But afterwards I quote "$SRCDIR" everywhere, in case "somebody" might ever set SRCDIR to a path with spaces. This is a good precaution. Finally, these are equivalent:

cp "$IMAGE" "$DIR"/Main_Page.png
cp "$IMAGE" "$DIR/Main_Page.png"

Extract repeated logic to functions

In the processing of each of the image types, you use the same kind of logic to check if a directory exists and remove its contents, or else create the directory. This could be in a function:

prepare_dir() {
    DIR=$1
    if [ -d "$DIR" ]
    then
        echo " -> Found old directory, removing contents!"
        rm -rf "$DIR"/*
    else
        echo " -> Creating new directory!"
        mkdir -p "$DIR"
    fi
}

You can call this with prepare_dir "$WORKDIR"/Main, prepare_dir "$WORKDIR"/Help, prepare_dir "$WORKDIR"/Logo, and so on, reducing duplication and shortening your code.

Excessive comments

Your many echo statements already explain what the code does. The comments are redundant; I would remove all of them.

Moving code to the right enclosing block

In the processing of some of the image types, you run a cp after the main processing, even if a matching file did not exist, for example:

echo " -> Now creating Help logos"
if [ -s "/disk/usbsda1/New_Help.png" ]
then
    echo " -> Found new file to be used!"
    # ...
    echo " -> Copying new files, please wait!"
    cp /disk/usbsda1/New_Help.png /opt/pclient/projekte/Help/Help_Page.png
    # ...
else
    echo " -> Could not find new image to use, skipping!"
fi
echo " -> Applying changes to project!"
cp /opt/pclient/projekte/Help/* /opt/pclient/projekte/default_prj/terminal_files/

The last cp is outside of the if-then block. But intuitively, it seems that if the if condition was false, then this last cp would find no files to copy, in which case it should be within the then block above.
Misc

Do you really need to copy files to a temporary directory first? Based on just the script you posted, this seems pointless; you could have copied directly to the destination.

Suggested implementation

Applying the suggestions above, I would rewrite your script like this (but without spelling out all the individual files in some of the longer blocks):

#! /bin/sh

SRCDIR=/disk/usbsda1
WORKDIR=/opt/pclient/projekte
DSTDIR=/opt/pclient/projekte/default_prj/terminal_files

prepare_dir() {
    DIR=$1
    if [ -d "$DIR" ]
    then
        echo " -> Found old directory, removing contents!"
        rm -rf "$DIR"/*
    else
        echo " -> Creating new directory!"
        mkdir -p "$DIR"
    fi
}

echo " ****Project Customisation Script STARTING**** "

echo "Main Background:"
IMAGE="$SRCDIR"/New_Main.png
if [ -s "$IMAGE" ]
then
    echo " -> Found new file to be used!"
    DIR="$WORKDIR"/Main
    prepare_dir "$DIR"
    echo " -> Copying new files, please wait!"
    cp "$IMAGE" "$DIR"/Main_Page.png
    cp "$IMAGE" "$DIR"/Main_Page2.png
    # ... and so on ...
    echo " -> All background images created!"
    echo " -> Now copying to project folder!"
    cp "$DIR"/* "$DSTDIR"
else
    echo " -> Could not find new image to use, skipping!"
fi

echo "Help Pages:"
echo " -> Now creating Help logos"
IMAGE="$SRCDIR"/New_Help.png
if [ -s "$IMAGE" ]
then
    echo " -> Found new file to be used!"
    DIR="$WORKDIR"/Help
    prepare_dir "$DIR"
    echo " -> Copying new files, please wait!"
    cp "$IMAGE" "$DIR"/Help_Page.png
    cp "$IMAGE" "$DIR"/Help_Page2.png
    # ... and so on ...
    echo " -> Applying changes to project!"
    cp "$DIR"/* "$DSTDIR"
else
    echo " -> Could not find new image to use, skipping!"
fi

echo "Icon Pages:"
IMAGE="$SRCDIR"/New_Icon.png
if [ -s "$IMAGE" ]
then
    echo " -> Found new file to be used!"
    DIR="$WORKDIR"/Icon
    prepare_dir "$DIR"
    echo " -> Copying new files, please wait!"
    cp "$IMAGE" "$DIR"/Icon_Page.png
    cp "$IMAGE" "$DIR"/Icon_Page2.png
    echo " -> Applying changes to project!"
    cp "$DIR"/* "$DSTDIR"
else
    echo " -> Could not find new image to use, skipping!"
fi

echo "Additional Logos:"
echo " -> Now creating Logo"
IMAGE="$SRCDIR"/New_Logo.png
if [ -s "$IMAGE" ]
then
    echo " -> Found new file to be used!"
    DIR="$WORKDIR"/Logo
    prepare_dir "$DIR"
    echo " -> Copying new files, please wait!"
    cp "$IMAGE" "$DIR"/Logo2.png
    echo " -> Applying changes to project!"
    cp "$DIR"/* "$DSTDIR"
else
    echo " -> Could not find new image to use, skipping!"
fi

echo "Boot Logos:"
echo " -> Now creating boot logos"
IMAGE="$SRCDIR"/New_Boot.png
if [ -s "$IMAGE" ]
then
    echo " -> Found new file to be used!"
    DIR="$WORKDIR"/Boot
    prepare_dir "$DIR"
    echo " -> Copying new files, please wait!"
    cp "$IMAGE" "$DIR"/loading_screen.png
    echo " -> Applying changes to project!"
    cp "$DIR"/* "$DSTDIR"/additional_files
else
    echo " -> Could not find new image to use, skipping!"
fi

setbootlogo "$DSTDIR"/additional_files/loading_screen.png

echo "Wrapping up:"
echo " -> Removing un-needed files!"
rm -rf "$WORKDIR"/Main/*
rm -rf "$WORKDIR"/Boot/*
rm -rf "$WORKDIR"/Help/*
rm -rf "$WORKDIR"/Icon/*
rm -rf "$WORKDIR"/Logo/*
echo " -> Removing un-needed directories!"
rmdir "$WORKDIR"/Main
rmdir "$WORKDIR"/Boot
rmdir "$WORKDIR"/Help
rmdir "$WORKDIR"/Icon
rmdir "$WORKDIR"/Logo
echo " ****Project Customisation Script COMPLETE**** "

Looking at this code, there's still a lot of repetition: the processing of each image type looks very similar. You could go a step further and generalize that logic too, for example:

process_image() {
    TITLE=$1; shift
    IMAGE=$1; shift
    DIR=$1; shift
    CP_FUNCTION=$1; shift
    echo "$TITLE"
    if [ -s "$IMAGE" ]
    then
        echo " -> Found new file to be used!"
        prepare_dir "$DIR"
        echo " -> Copying new files, please wait!"
        $CP_FUNCTION "$IMAGE" "$DIR"
        echo " -> Applying changes to project!"
        cp "$DIR"/* "$DSTDIR"
    else
        echo " -> Could not find new image to use, skipping!"
    fi
}

cp_main() {
    IMAGE=$1; shift
    DIR=$1; shift
    cp "$IMAGE" "$DIR"/Main_Page.png
    cp "$IMAGE" "$DIR"/Main_Page2.png
    # ... and so on ...
}

process_image "Main Background:" "$SRCDIR"/New_Main.png "$WORKDIR"/Main cp_main

cp_help() {
    IMAGE=$1; shift
    DIR=$1; shift
    cp "$IMAGE" "$DIR"/Help_Page.png
    cp "$IMAGE" "$DIR"/Help_Page2.png
    # ... and so on ...
}

process_image "Help Pages:" "$SRCDIR"/New_Help.png "$WORKDIR"/Help cp_help

... but this risks the script becoming too cryptic. At some point you have to draw the line and find the right balance between optimization and overengineering.
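As a small, hedged demonstration (using a throwaway mktemp directory instead of the real /opt/pclient paths, which are only stand-ins here), the prepare_dir helper can be exercised on its own to confirm both branches behave as intended:

```shell
#!/bin/sh
# Sketch: exercise prepare_dir in a throwaway directory tree,
# not the real /opt/pclient paths.
WORKDIR=$(mktemp -d)

prepare_dir() {
    DIR=$1
    if [ -d "$DIR" ]
    then
        echo " -> Found old directory, removing contents!"
        rm -rf "$DIR"/*
    else
        echo " -> Creating new directory!"
        mkdir -p "$DIR"
    fi
}

prepare_dir "$WORKDIR"/Main      # first run: the directory is created
touch "$WORKDIR"/Main/stale.png  # simulate leftovers from a previous run
prepare_dir "$WORKDIR"/Main      # second run: the contents are removed
ls -A "$WORKDIR"/Main            # prints nothing: directory exists but is empty
```

This kind of isolated test is exactly what the SRCDIR/WORKDIR/DSTDIR variables make possible: point them at a scratch directory and the whole script becomes testable without touching the real device.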
{ "domain": "codereview.stackexchange", "id": 7786, "tags": "file-system, shell, embedded, sh" }
How to determine which embedding method to use for QML?
Question: So there are a lot of feature mapping techniques out there for Quantum Machine Learning, but I'm not sure which one to use for my next VQC. Can anyone explain when and why to use each of the following?

Amplitude embedding
Basis embedding
Angle embedding

Although I understand the difference in the architecture of each method, I don't see why you would choose one over another. Thank you!

Answer: PennyLane provides a great starting point for looking into the various types of and reasons for embedding: https://pennylane.ai/qml/glossary/quantum_embedding. It is also easy to take some of the tutorials/templates and modify/run the code locally on your desktop, and then scale up/move to actual hardware by switching to a Braket backend: https://github.com/aws/amazon-braket-examples/blob/main/examples/pennylane/0_Getting_started/0_Getting_started.ipynb
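To make the trade-offs concrete, here is a rough, framework-agnostic sketch in plain NumPy (the function names are made up for illustration; they are not any library's API) of the state each embedding prepares from a classical feature vector. The practical differences follow directly: amplitude embedding packs n features into log2(n) qubits but needs costly state preparation, basis embedding only encodes bit strings, and angle embedding uses one cheap rotation per feature:

```python
import numpy as np

def amplitude_embedding(x):
    """Normalize x and use it directly as the amplitudes of the state.
    n features -> log2(n) qubits, but state preparation is expensive."""
    x = np.asarray(x, dtype=float)
    return x / np.linalg.norm(x)

def basis_embedding(bits):
    """Map a bit string to the matching computational basis state.
    One qubit per bit; only binary data can be encoded."""
    index = int("".join(str(b) for b in bits), 2)
    state = np.zeros(2 ** len(bits))
    state[index] = 1.0
    return state

def angle_embedding(angles):
    """Rotate each qubit by its feature value (RY rotations here).
    One qubit per feature; shallow circuits, features enter as angles."""
    def ry_state(theta):  # RY(theta)|0> = cos(t/2)|0> + sin(t/2)|1>
        return np.array([np.cos(theta / 2), np.sin(theta / 2)])
    state = np.array([1.0])
    for theta in angles:
        state = np.kron(state, ry_state(theta))
    return state

print(amplitude_embedding([3, 4]))    # 2 features on a single qubit
print(basis_embedding([1, 0]))        # the basis state |10> on 2 qubits
print(angle_embedding([np.pi, 0.0]))  # |1> tensor |0> on 2 qubits
```

The PennyLane templates linked above implement the circuit versions of these same maps, so this sketch is only meant to show what ends up in the state vector.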
{ "domain": "quantumcomputing.stackexchange", "id": 5084, "tags": "machine-learning, quantum-enhanced-machine-learning" }
Can we lock or freeze a moving joint?
Question: Hi all, I want the wheels of a mobile robot to lock, say on some given command, so that there is no need to run the wheel controllers, and upon an unlock command to go back to usual behavior. I saw in DRC http://gazebosim.org/wiki/DRC/UserGuide#Freeze that there is a freeze functionality. So, I was wondering whether this is possible or not. Thank you Peshala Originally posted by peshala on Gazebo Answers with karma: 197 on 2013-12-19 Post score: 0 Answer: You can do this programmatically using the physics API if you have a pointer to the Joint object. See the Joint::SetHighStop and Joint::SetLowStop commands. There is an example of using these commands in one of the physics tests (test/integration/physics.cc:848). So my approach would be to get a pointer to the joint object, read the current joint position, then set the joint limits to be equal to the current position. gazebo::physics::JointPtr joint; joint = getTheJointPtrSomehow(); // Get the current joint position gazebo::math::Angle currentPosition = joint->GetAngle(0); joint->SetHighStop(0, currentPosition); joint->SetLowStop(0, currentPosition); That's how I would do it currently. It would be nice to have an API that would do this automatically. Please make a feature request on the bitbucket issue tracker if you have an idea for how you would like it to work. Originally posted by scpeters with karma: 2861 on 2013-12-19 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by peshala on 2013-12-20: Thanks for your answer @scpeters I will try this and make a feature request. Comment by peshala on 2013-12-20: and when we need to revert back to the original controller we can set: joint->SetHighStop(0, gazebo::math::Angle(10000000000000000)) where 1E16 being the default value set by gazebo for a continuous joint
{ "domain": "robotics.stackexchange", "id": 3527, "tags": "gazebo" }
Write in file the n-th term in iccanobiF series
Question: Siruri2 | www.pbinfo.ro

Fibonacci, a famous Italian mathematician of the Mediaeval Era, discovered a sequence of natural numbers with multiple applications, a sequence that bears his name:

Fibonacci(n) = 1, if n = 1 or n = 2
Fibonacci(n) = Fibonacci(n−1) + Fibonacci(n−2), if n > 2

Fascinated by Fibonacci's sequence, and especially by its applications in nature, Iccanobif, a mathematician in the making, created a sequence of his own and named it after himself:

Iccanobif(n) = 1, if n = 1 or n = 2
Iccanobif(n) = reversed(Iccanobif(n−1)) + reversed(Iccanobif(n−2)), if n > 2

obtaining the following rows:

Fibonacci: 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, ...
Iccanobif: 1, 1, 2, 3, 5, 8, 13, 39, 124, 514, 836, ...

Iccanobif now wonders which number has more natural divisors: the n-th term of the Fibonacci sequence or the n-th term of the Iccanobif sequence.

Requirements: Write a program that reads a natural number n and displays:
a) the n-th term of the Fibonacci sequence and its number of divisors
b) the n-th term of the Iccanobif sequence and its number of divisors

Input data
The input file siruri2.in contains on the first line a natural number p. For all input tests, p can only have the value 1 or the value 2. A natural number n is found on the second line of the file.

Output data
If the value of p is 1, only requirement a) will be solved. In this case, the n-th term of the Fibonacci sequence and its number of divisors will be written to the output file siruri2.out.
If the value of p is 2, only requirement b) will be solved. In this case, the n-th term of the Iccanobif sequence and its number of divisors will be written to the output file siruri2.out.

Restrictions and clarifications
1 ≤ n ≤ 50
For the correct resolution of the first requirement, 50% of the score is awarded, and 50% of the score is awarded for the second requirement.
Limits: time: 1 second; memory: 2 MB / 2 MB.
Example 1
siruri2.in
1
8
siruri2.out
21 4

Example 2
siruri2.in
2
9
siruri2.out
124 6

Explanations
For the first example: the eighth term of the Fibonacci sequence is 21, and 21 has 4 divisors (p being 1 solves only requirement a).
For the second example: the ninth term of the Iccanobif sequence is 124, and 124 has six divisors (p being 2 solves only requirement b).

Here is my code, which doesn't execute all given tests in time (exceeds the time limit for 3 tests):

#include <iostream>
#include <fstream>
using namespace std;
ifstream in("siruri2.in");
ofstream out("siruri2.out");
void numOfDivisors (long long n)
{
    long numOfDivisors = 1, factor = 2;
    while (factor * factor <= n) {
        if (!(n % factor)) {
            long exponent = 0;
            while (!(n % factor)) {
                n /= factor;
                exponent ++;
            }
            numOfDivisors *= exponent + 1;
        }
        if (factor == 2) factor = 3;
        else factor += 2;
    }
    if (n > 1) {
        numOfDivisors *= 2;
    }
    out << numOfDivisors;
}
long long inverted (long long a)
{
    long long b=0;
    while (a) {
        b = b * 10 + a % 10;
        a /= 10;
    }
    return b;
}
void fibonacci (short ord)
{
    long long a = 1, b = 1;
    if (ord < 3) {out << "1 1";}
    else {
        ord -= 2;
        while (ord) {
            a+=b;
            ord--;
            if (!ord) {
                out << a << " ";
                numOfDivisors(a);
                break;
            }
            else {
                b+=a;
                ord--;
                if (!ord) {
                    out << b << " ";
                    numOfDivisors(b);
                }
            }
        }
    }
}
void iccanobif (short ord)
{
    long long a = 1, b = 1;
    if (ord < 3) out << "1 1";
    else {
        ord -= 2;
        while (ord) {
            a = inverted(a) + inverted(b);
            ord--;
            if (!ord) {
                out << a << " ";
                numOfDivisors(a);
                break;
            }
            else {
                b = inverted(a) + inverted(b);
                ord--;
                if (!ord) {
                    out << b << " ";
                    numOfDivisors(b);
                }
            }
        }
    }
}
int main()
{
    short requirement, ord;
    in >> requirement >> ord;
    if (requirement == 1) fibonacci(ord);
    else iccanobif(ord);
}

Answer:

Efficiency

numOfDivisors()

numOfDivisors and exponent do not need to be long. Using int is sufficient, and should be slightly faster.

n is a long long, whereas factor is only a long. So factor will be repeatedly promoted to a long long for the operations n % factor and n /= factor.
You might find a speed improvement by actually declaring factor as a long long to avoid the repeated type promotion.

factor is only a long, so factor * factor is also only a long, and may overflow when looping while (factor * factor <= n). Using a long long will avoid this, which may prevent the loop from running for a long time if n is prime and larger than a long.

If n % factor == 0, then the exponent counting inner loop is entered, and the first thing that is done is n % factor, which is already known to be zero. Using a do { ... } while (!(n % factor)); loop will prevent the redundant calculation.

The outer loop starts at factor = 2, and has an if statement to choose between incrementing factor by 2, or setting it to 3. If 2 were handled as a special case, then the loop could unconditionally increment by 2, eliminating the if statement for another speed gain. To handle the 2^exponent case, simply count the number of trailing zeros in the binary representation of n.

Your factor finder is testing all odd numbers whose square is less than n. You only need to test factor values which are prime. Other than 2 and 3, all prime numbers can be generated from 6k-1 and 6k+1. Or maybe use a prime sieve ... you are allowed 2 MB of memory ...

iccanobif()

You are computing...

while (...) {
    a = inverted(a) + inverted(b);    // #1
    ...
    b = inverted(a) + inverted(b);    // #2
}

When you are executing statement #2, you've already computed inverted(b) during statement #1, above. If you cached that value, you wouldn't need to invert it a second time.

Similarly, when computing statement #1 in subsequent loops, you've already computed inverted(a) during statement #2, below, on the previous iteration. If you cached that value, you wouldn't need to invert it a second time.

General

Add vertical white space after #includes, after global variables, and between functions.

Add whitespace around operators. I.e., a += b; instead of a+=b;.

Don't use using namespace std;.
Simply use:

std::ifstream in("...");
std::ofstream out("...");

numOfDivisors() should return the answer, not print the answer.

fibonacci() should return the Fibonacci value, not print the value and call another function which also has the side-effect of additional printing. Ditto for iccanobif().

main() is declared to return an int, but doesn't return anything.

If the above changes were made, then in and out don't need to be global variables; they could be made local to the main function:

int main()
{
    std::ifstream in("siruri2.in");
    std::ofstream out("siruri2.out");
    short requirement, ord;
    in >> requirement >> ord;
    long long n = requirement == 1 ? fibonacci(ord) : iccanobif(ord);
    int num_divisors = numOfDivisors(n);
    out << n << ' ' << num_divisors;
}
{ "domain": "codereview.stackexchange", "id": 33059, "tags": "c++, programming-challenge, time-limit-exceeded, fibonacci-sequence" }
Why did a green light appear white when looked out of the corner of my eye?
Question: The other day I saw a green light emitted from some source far away, and I realised that if I looked at it out of the corner of my eye I perceived it completely white. What is the explanation for this? Should this be more of a biology of the human eye question perhaps? Answer: Take a look at https://xkcd.com/1080/large/. Red and green-sensing cones are mostly found in the center of your visual field. Interestingly, this is not true for blue-sensing cones. It's also not true for rods, which detect black-and-white, so the reason the light appeared white is likely that it was picked up mostly by rods.
{ "domain": "physics.stackexchange", "id": 69915, "tags": "optics, visible-light, vision, perception" }
Difference/similarity between adaptive radiation and species divergence?
Question: I've been reading various answers on different sites but I still don't know whether adaptive radiation and species divergence are different or similar. My questions:

1) On some sites, it says that adaptive radiation is a form of species divergence, while on others it says that they are different. Which one is correct?
2) Is the main difference that adaptive radiation occurs relatively rapidly and species divergence takes longer?
3) Darwin's finches are used as an example of adaptive radiation, but are they also an example of species divergence?
4) Are the finches on the Galapagos islands all different species (one species diverged into many)? Or are they all the same species, just with different characteristics (e.g. size of beak)?

Answer:

Adaptive Radiation

A radiation refers to the process by which one species rapidly speciates into a number of different species. A radiation can be adaptive or non-adaptive. An adaptive radiation is a type of radiation in which new species are formed through selection into new ecological niches. Such adaptive radiation typically occurs after the rise of a key mutation that allows for further specialization, such as a mutation causing a change in the pharyngeal jaw as observed in cichlids (Albertson et al. 1998). You can learn much more about adaptive radiation in the book The Ecology of Adaptive Radiation by D. Schluter.

Species divergence

Species divergence refers to the process by which two existing species diverge through time, either through the accumulation of neutral (or non-neutral but equally selected) mutations or by selection on different trait values.

Species Divergence and Adaptive Radiation

Because of the semantic difficulties behind the concept of species (see here), it is unclear at which point we can talk about species divergence and at which point the observed process of divergence is only divergence within a metapopulation.
Ignoring the eventual detail of the definition of species, and considering species divergence as referring to the divergence between any two lineages (whether or not in reproductive isolation), then it is clear that species divergence is part of the process of adaptive radiation.

To answer your multiple questions directly

1) On some sites, it says that adaptive radiation is a form of species divergence, while on others it says that they are different. Which one is correct?

Adaptive radiation is a little more than a form of species divergence, as it also includes the event by which divergence starts.

2) Is the main difference that adaptive radiation occurs relatively rapidly and species divergence takes longer?

mmhhh... not exactly, as the concept of radiation also refers to the event allowing the divergence to happen.

3) Darwin's finches are used as an example of adaptive radiation, but are they also an example of species divergence?

Yes, indeed! No radiation without species divergence. On the other hand, species divergence will occur between any two existing species regardless of whether they speciated through a radiation.

4) Are the finches on the Galapagos islands all different species (one species diverged into many)? Or are they all the same species, just with different characteristics (e.g. size of beak)?

The Darwin's finches refer to about 15 (different) species. The different species of Darwin's finches do NOT refer to intra-species variation but to different species. In other words, there is some reproductive isolation between these different species. Again, you might want to make sure you understand the concept of species, and you might want to read this post (same as the one linked above) on the subject.
{ "domain": "biology.stackexchange", "id": 4947, "tags": "evolution, natural-selection, species" }
Combined gas law in an open atmosphere
Question: The question was asked about pressure vs. volume increasing in an ideal gas as temperature is increased. My question then is this: what is the formula to determine how much volume and pressure will increase as temperature is increased?

Let me frame the question this way. P1V1/T1 = P2V2/T2: this formula works for a controlled system where more than one of these values can be maintained. If we apply a known amount of heat, say n, to the atmosphere, what formula would be used to calculate volume and pressure as the temperature is increased?

Answer: Technically speaking, if you managed to create a planet with an ideal gas atmosphere, the atmosphere would just float away. Why? One of the approximations of an ideal gas is:

There are no attractive or repulsive forces between the molecules or the surroundings

This means that the gas wouldn't feel the force of gravity! So if I had a jar of ideal gas, the pressure wouldn't increase as I went to a greater depth in the jar (it does increase in gases too, just like it does in liquids). I know this sounds strange, but all it really means is that you cannot apply the ideal gas approximation to a system the size of our atmosphere. This approximation works well for small systems (a jar of ideal gas), because the effects of gravity are pretty negligible. So to analyse the effects of a change in temperature on the whole atmosphere, you'll need a better model. Maybe considering the atmosphere a non-viscous fluid can help, I don't know. You should research this. Note that other approximations like the Van der Waals equation wouldn't help either, because they too neglect the effect of gravity.
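To make the gravity point concrete, here is a standard textbook sketch (not part of the original answer): keep the ideal-gas equation of state locally, but add the gravitational force back in through hydrostatic balance, assuming an isothermal atmosphere at temperature $T$ with molecular mass $m$:

```latex
% Hydrostatic balance plus the local ideal-gas law:
dp = -\rho g \, dh, \qquad \rho = \frac{m p}{k_B T}
% Combining and integrating gives a pressure that decays with height:
\frac{dp}{p} = -\frac{m g}{k_B T}\, dh
\quad\Longrightarrow\quad
p(h) = p_0 \, e^{-m g h / k_B T}
```

So a gas that feels gravity does not have a single pressure and volume at all; the combined gas law can at best be applied locally, to a small parcel of air.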
{ "domain": "physics.stackexchange", "id": 8844, "tags": "thermodynamics, heat, ideal-gas" }
Why is the general solution of Schrodinger's equation a linear combination of the eigenfunctions?
Question: Here is a quote from Introduction to Quantum Mechanics by David J. Griffiths:

The general solution is a linear combination of separable solutions. As we're about to discover, the time-independent Schroedinger equation (Equation 2.5) yields an infinite collection of solutions ($\psi_1(x)$, $\psi_2(x)$, $\psi_3(x)$, ...), each with its associated value of the separation constant ($E_1$, $E_2$, $E_3$, ...); thus there is a different wave function for each allowed energy:
$$\Psi_1(x, t) = \psi_1(x)e^{-iE_1 t/\hbar},\quad \Psi_2(x, t) = \psi_2(x)e^{-iE_2 t/\hbar}, \ldots.$$
Now (as you can easily check for yourself) the (time-dependent) Schroedinger equation (Equation 2.1) has the property that any linear combination of solutions is itself a solution. Once we have found the separable solutions, then, we can immediately construct a much more general solution, of the form
$$\Psi(x, t) = \sum_{n = 1}^{\infty}c_n\psi_n(x)e^{-iE_n t/\hbar}\tag{2.15}$$

I am trying to understand it in this way.
...the time independent Schroedinger's equation $\hat H\psi = E\psi$
An eigenvalue equation $Ax = \lambda x$,

yields an infinite collection of solutions ($\psi_1(x)$, $\psi_2(x)$, $\psi_3(x)$, $\dots$)
has eigenvectors $x_1$, $x_2$, $x_3$, $\dots$

each with its associated value of the separation constant ($E_1$, $E_2$, $E_3$, $\dots$);
each with its associated eigenvalue $\lambda_1$, $\lambda_2$, $\lambda_3$, $\dots$

thus there is a different wave function for each allowed energy:
$$\Psi_1(x,t) = \psi_1(x)e^{-iE_1t/\hbar},\quad\Psi_2(x,t) = \psi_2(x)e^{-iE_2t/\hbar}, \dots$$
have equations as
$$Ax_1=\lambda_1x_1, \qquad Ax_2=\lambda_2x_2, \dots$$

Once we have found the separable solutions, then, we can immediately construct a much more general solution, of the form
$$\Psi(x,t) = \sum_{n=1}^{\infty}c_n\psi_n(x)e^{-iE_nt/\hbar}$$
(Forgetting any other variable dependence) We can construct a more general solution of the form
$$X = \sum_{n}c_n x_n$$

This last equation doesn't make any sense to me. There is nothing in linear algebra that says that this last equation logically precedes the previous equations. Trying to understand from linear algebra, what does the last equation mean? Why is the general solution of Schroedinger's equation a linear combination of the eigenfunctions?

Answer: You are starting from the incorrect point. The argument follows by linearity of the equation. Suppose $\Psi_k(x,t)$ is a solution of the time-dependent Schrödinger equation:
$$ i\hbar \frac{\partial }{\partial t}\Psi_k(x,t)=-\frac{\hbar^2}{2m}\frac{\partial^2\Psi_k(x,t)}{\partial x^2}+U(x)\Psi_k(x,t)\, .
$$

Then:

$$ \Phi(x,t)=a_1\Psi_1(x,t)+a_2\Psi_2(x,t) $$

is also a solution since

$$ i\hbar \frac{\partial }{\partial t}\Phi(x,t) =a_1\left(i\hbar \frac{\partial }{\partial t}\Psi_1(x,t)\right)+a_2 \left(i\hbar \frac{\partial }{\partial t}\Psi_2(x,t)\right) $$

and

\begin{align}
-\frac{\hbar^2}{2m}\frac{\partial^2\Phi(x,t)}{\partial x^2}+U(x)\Phi(x,t) &=a_1\left(-\frac{\hbar^2}{2m}\frac{\partial^2\Psi_1(x,t)}{\partial x^2}+U(x)\Psi_1(x,t)\right)\\
&\quad + a_2\left(-\frac{\hbar^2}{2m}\frac{\partial^2\Psi_2(x,t)}{\partial x^2}+U(x)\Psi_2(x,t)\right)\, .
\end{align}

These follow simply from the known rule, valid for any two differentiable functions $f$ and $g$: $\partial (f+g)/\partial t=\partial f/\partial t+\partial g/\partial t$, and similarly for the partials w/r to $x$. Combining these last two equations, you get an identity for any $a_1$ and $a_2$, since each $\Psi_k(x,t)$ is independently a solution. Of course this simply extends to an arbitrary number of terms in the linear combination. Note that the eigenvalue of the time-independent part never enters in this argument.

The final step is to observe that separation of variables in the time-dependent equation yields $\Psi_k(x,t)=e^{-iE_k t/\hbar}\psi_k(x)$ with $\psi_k(x)$ an eigenfunction of the time-independent equation, but again, this does not enter in the argument.

Edit: note this is in contradistinction with the time-independent equation. When

$$ -\frac{\hbar^2}{2m}\frac{d^2\psi_k(x)}{dx^2}+U(x)\psi_k(x)=E_k\psi_k(x) $$

then the right-hand side must be a multiple of the original function.
With this observation, note then that a linear combination $$ \psi(x)=a_1\psi_1(x)+a_2\psi_2(x) $$ will in general NOT be a solution of the time-independent equation because \begin{align} \left(-\frac{\hbar^2}{2m}\frac{d^2}{dx^2}+U(x)\right)\psi(x) &=a_1\left(-\frac{\hbar^2}{2m}\frac{d^2}{dx^2}+U(x)\right)\psi_1(x)\\ &\qquad+a_2\left(-\frac{\hbar^2}{2m}\frac{d^2}{dx^2}+U(x)\right)\psi_2(x) \\ &=a_1E_1\psi_1(x)+a_2E_2\psi_2(x)\\ &=E_1(a_1\psi_1(x)+a_2\psi_2(x))+(E_2-E_1)a_2\psi_2(x)\\ &=E_1\psi(x)+(E_2-E_1)a_2\psi_2(x) \end{align} will NOT be a multiple of $\psi(x)$ unless $E_1=E_2$.
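This linearity is also easy to verify numerically. The sketch below is a generic illustration (a random 4x4 Hermitian matrix stands in for $H$, with $\hbar = 1$; it is not tied to any particular potential): evolving a superposition with the full propagator $e^{-iHt}$ gives exactly the sum of the separately phase-evolved eigenfunctions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random Hermitian matrix stands in for the Hamiltonian H.
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (A + A.conj().T) / 2

E, psi = np.linalg.eigh(H)   # eigenvalues E_n, eigenvectors psi[:, n]

# Superposition of the first two stationary states (hbar = 1).
a1, a2 = 0.6, 0.8j
Psi0 = a1 * psi[:, 0] + a2 * psi[:, 1]

t = 1.7
# Exact evolution with the full propagator exp(-iHt), built from the
# eigendecomposition: U = V exp(-i diag(E) t) V^dagger.
U = psi @ np.diag(np.exp(-1j * E * t)) @ psi.conj().T
Psi_t = U @ Psi0

# Each stationary state just picks up its own phase e^{-iE_n t}.
Psi_sum = a1 * np.exp(-1j * E[0] * t) * psi[:, 0] \
        + a2 * np.exp(-1j * E[1] * t) * psi[:, 1]

print(np.allclose(Psi_t, Psi_sum))  # True
```

The same check fails for the time-independent equation: H @ Psi0 is generally not a scalar multiple of Psi0 unless the two eigenvalues coincide, which is the point of the Edit above.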
{ "domain": "physics.stackexchange", "id": 38110, "tags": "quantum-mechanics, hilbert-space, schroedinger-equation, linear-algebra, superposition" }
catkin_make install to /opt/groovy|hydro
Question: According to the catkin tutorial you can define the prefix when using catkin_make install in the following way:

catkin_make -DCMAKE_INSTALL_PREFIX=/opt/ros/groovy install

The question is: how do I really install it in my system? Because after I do that, the package is installed just as with catkin_make install without any option. So it stays in the same directory. How can I install it in /opt/ros/groovy?

Originally posted by silgon on ROS Answers with karma: 649 on 2014-07-19
Post score: 1

Answer: catkin_make is a fairly thin layer over cmake/make invocations. This means the above command is simply invoking make install on the cmake-generated Makefiles. The install prefix is the one for the catkin_make/cmake invocation that generated these make files. If you built your project with catkin_make build ... before and then invoke your command above, cmake will not be rerun to regenerate the files (catkin does not currently keep track of which arguments you passed previously). In short, the following invocation should probably do the right thing:

catkin_make -DCMAKE_INSTALL_PREFIX=/opt/ros/groovy install --force-cmake

Originally posted by demmeln with karma: 4306 on 2014-07-19
This answer was ACCEPTED on the original site
Post score: 3

Original comments
Comment by silgon on 2014-07-19: Got it. Cmake was taking the cache. Thanks!
{ "domain": "robotics.stackexchange", "id": 18680, "tags": "catkin, ros-groovy, ros-hydro" }
How does the diaphragm in a camera work?
Question: I think I am quite good at math and understand basic geometry, but I have problems understanding the function of a diaphragm in cameras. Let's say a camera is in a specific state. It captures the light from a specific "cone", going outside of the camera lens. Now, if the diaphragm moves (decreases the diameter of the hole), and nothing else moves in the camera, it can only result in a "black material" appearing around the image, right? It will not change the size of objects in the final image, or sharpness of objects, or the "rate" (amount/time) of light that is getting to the center of the image (which is not covered by a diaphragm). Am I right? But if the diaphragm is never shown in the final image, and only covers rays around it, what is the purpose of it, as those rays would never make it to the image sensor anyway, right? Edit: I think I am starting to get a clue. My intuition was that if there is an obstacle between two points A and B, A cannot see B. The point A is on the object, and B is on the sensor, and the fact that the line between them is not straight (refracted by a lens) does not change anything. I did not realise that there are lots of rays (with different directions) going from A, through the lens, and reaching B. The obstacle near the lens behaves very differently. By covering half of the lens, half as many rays would get from A to B, but the sensor B would still receive light from A (B will still "see" A). EDIT 2023: I have built this simulation of a camera. You can see three points emitting light on the right, a diaphragm, a lens, and a sensor on the left. I recommend trying this: move or remove the lens (to "focus" or "unfocus") change the "hole" of a diaphragm (to allow more or less light to hit the sensor) Answer: The argument in your edit is essentially correct. The diaphragm is introduced very close to the lens, where objects are maximally defocused, so that every ray from every object passes through the lens.
The diaphragm removes some of those rays, but it still allows multiple rays to come through and form an image. This means that the "shadow" of the diaphragm is completely defocused, so that it covers all of the image by darkening it as compared to what you'd have with an open diaphragm (with more light to go around in total), but it does not form an image on the sensor because it's at the plane of the lens. As to why you have a diaphragm in the first place: this allows you to fiddle with the depth of field, i.e., with the range of distances at which objects will appear in focus. The wider your aperture is, the more rays you have, and that means that the rays that reach the focal plane encompass a wider cone of angles, which in turn implies that there is a reduced tolerance for moving the detector backwards or forwards while still keeping the object in focus. Conversely, for a fixed detector plane, that reduced tolerance means that the wider the aperture, the shallower the range of lengths at which objects will appear in focus. You use a diaphragm when you want to have a larger depth of field (i.e. where you want objects at many different distances to appear in focus) and you're OK with losing some light overall. Thus, you reduce the aperture, killing some rays from the outside of your lens, and in the process you reduce the cone of angles and you expand the depth of the focal plane.
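This depth-of-field argument can be sketched numerically with the thin-lens equation; the focal length, distances, and aperture diameters below are arbitrary example values, not anything from the question:

```python
def blur_diameter(aperture, focal_length, object_dist, sensor_dist):
    # Thin-lens image distance for the object: 1/f = 1/s + 1/s'
    image_dist = 1.0 / (1.0 / focal_length - 1.0 / object_dist)
    # Similar triangles: the light cone leaving the aperture converges at
    # image_dist, so on a sensor at sensor_dist the blur-circle diameter
    # scales with both the aperture and the focus mismatch.
    return aperture * abs(sensor_dist - image_dist) / image_dist

f = 0.05          # a 50 mm lens (example value)
in_focus = 2.0    # place the sensor so objects at 2 m are in perfect focus
sensor = 1.0 / (1.0 / f - 1.0 / in_focus)

# An object at 1 m is defocused; stopping down from f/2 (25 mm aperture)
# to f/16 (3.125 mm aperture) shrinks its blur circle proportionally:
wide = blur_diameter(0.025, f, 1.0, sensor)
narrow = blur_diameter(0.003125, f, 1.0, sensor)
assert narrow < wide                      # smaller hole -> sharper image
assert abs(narrow * 8 - wide) < 1e-15     # blur is linear in aperture
```

The linear scaling of the blur circle with aperture diameter is exactly the "reduced cone of angles" described above: closing the diaphragm deepens the range of distances that look acceptably sharp.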
{ "domain": "physics.stackexchange", "id": 55241, "tags": "optics, geometric-optics, camera" }
Specific Neurons that Require Glucose
Question: I've been doing a bit of armchair biology lately, and have been interested in the metabolic flexibility of neurons. My understanding is that, besides glucose, many neurons can metabolize lactic acid or ketones. However, some reports point out that some neurons can only metabolize glucose, without ever indicating which, or in what regions of the brain. (It may be that the following excerpt is suggesting that each individual neuron is using a mix of fuel sources. I have not been able to find much via the Google.) From the linked book: When changing slowly from a carbohydrate diet to an almost completely fat diet, a person's body adapts to use far more acetoacetic acid than usual, and in this instance, ketosis normally does not occur. For instance, the Inuit (Eskimos), who sometimes live almost entirely on a fat diet, do not develop ketosis. Undoubtedly, several factors, none of which is clear, enhance the rate of acetoacetic acid metabolism by the cells. After a few weeks, even the brain cells, which normally derive almost all of their energy from glucose, can derive 50 to 75 percent of their energy from fats. Which neurons and regions of the brain can only burn glucose? Answer: The source you link does not report that certain neurons only use glucose, and I am unaware of any reputable source that makes this claim. Neurons do tend to use glucose as an energy source when it is available, and the brain does not tolerate sudden drops in glucose. However, as your source notes, other energy sources can be utilized given enough time to adjust, such as during starvation/fasting as well as suckling. Cahill, G. F., Herrera, M. G., Morgan, A., Soeldner, J. S., Steinke, J., Levy, P. L., ... & Kipnis, D. M. (1966). Hormone-fuel interrelationships during fasting. The Journal of clinical investigation, 45(11), 1751-1769. Hawkins, R. A., Williamson, D. H., & Krebs, H. A. (1971). Ketone-body utilization by adult and suckling rat brain in vivo.
Biochemical Journal, 122(1), 13-18. Nehlig, A., & de Vasconcelos, A. P. (1993). Glucose and ketone body utilization by the brain of neonatal rats. Progress in neurobiology, 40(2), 163-220. Owen, O. E., Morgan, A. P., Kemp, H. G., Sullivan, J. M., Herrera, M. G., & Cahill, G. J. (1967). Brain metabolism during fasting. The Journal of clinical investigation, 46(10), 1589-1595. Pollay, M., & Alan Stevens, F. (1980). Starvation‐induced changes in transport of ketone bodies across the blood‐brain barrier. Journal of neuroscience research, 5(2), 163-172.
{ "domain": "biology.stackexchange", "id": 8774, "tags": "brain, metabolism, neurophysiology, glucose" }
When is hypertree width more useful than generalized hypertree width?
Question: In analysis of CSPs, there are three width notions that are analogous to treewidth: hypertree width (hw), generalized hypertree width (ghw) and fractional hypertree width (fhw). Moreover the inequalities $\text{fhw} \le \text{ghw} \le \text{hw}$ are known. The only motivation for hw instead of ghw that I have seen is that there is an XP algorithm for computing hw, but not for computing ghw. Is there any other motivation for hw instead of ghw? More concretely: What algorithmic results are there that work when a hypertree decomposition of the smallest width is given, but not when a generalized hypertree decomposition of the smallest width is given? Answer: Every XP/FPT algorithm parametrized by hw also gives an XP/FPT algorithm parametrized by ghw (provided that the decomposition is given in the input) since there is a linear relation between ghw and hw [1]: $\mathsf{hw} \leq 3\,\mathsf{ghw}+1$. So basically, you can see hw as an easier-to-compute constant-factor approximation of ghw. I am not aware, however, of problems that are XP when parametrized by hw but not by fhw. Side note, however: if you are only interested in problems where the input is a CSP (with positive encodings) / database of bounded (f)hw, then an XP algorithm for hw will often give you an XP algorithm for fhw. Indeed, if the algorithm is XP for hw, then it is polynomial on $\alpha$-acyclic hypergraphs, that is, hypergraphs of hw $1$. Now, if you perform the join of the guards inside each bag of the (fractional / generalized) hw decomposition, then you get a new CSP / database that is $\alpha$-acyclic. Moreover, each new relation is of XP size in the original DB/CSP. Now if you can rewrite your problem on this new database, this directly gives you an XP algorithm for fhw. [1] I. Adler, G. Gottlob, and M. Grohe. Hypertree width and related hypergraph invariants. European Journal of Combinatorics, 28(8):2167–2181, 2007.
{ "domain": "cstheory.stackexchange", "id": 4910, "tags": "treewidth, csp, hypergraphs" }
Wavenumber definition in theoretical physics
Question: I am trying to understand the physical meaning of the wavenumber, which, as explained in Wikipedia, is the magnitude of the wave vector, which, if I am not mistaken, gives information about the direction of propagation of an EM/matter wave. The crystallographic definition of the wave number is pretty clear to me: $$k=\frac 1 \lambda$$ the number of complete cycles that exist in 1 meter of linear space. A complete cycle translates in space as the distance between two points which have a $2\pi$ phase difference between each other (in other words, points of the same phase), and this distance is equal to a wavelength $\lambda$. This is how I understand the wave number definition in crystallography. If I am wrong about something, please let me know. In theoretical physics, the formula is: $$k=\frac{2\pi}{\lambda}$$ and it's interpreted as the number of radians per unit distance, sometimes called "angular wavenumber". I don't understand this. And I cannot see how the wave numbers of two waves with different wavelengths $\lambda_1$, $\lambda_2$ would be different (considering the above formula). As I said above, 1 wavelength is a full cycle, which is $2\pi$ radians. This should be valid for both waves. The only difference is the number of full cycles per unit of time, in other words the frequency. What am I getting wrong here? Answer: I expect that you are happy with the equation that describes the displacement of a plane sinusoidal wave propagating in the $x$ direction, when expressed as: $$y=A\sin2\pi\left(\frac tT -\frac x {\lambda} + \epsilon\right)$$ Every time $t$ changes by a period, $T$, or we change $x$ by a distance $\lambda$, the argument of the sine changes through $2\pi$, so $y$ goes through a complete cycle.
If we put $\omega=\frac{2\pi}T$ and $k=\frac{2\pi}{\lambda}$, the equation can be written more compactly as $$y=A\sin(\omega t-kx +\epsilon).$$ I think of $k$ as a conversion factor between distance and angle, such that a distance of one wavelength is, through multiplication by $k$, converted to an angle of $2\pi$, and a distance $x$ into an angle of $2\pi\frac x{\lambda}$. This is just as multiplication by $\omega$ converts a time of one period into an angle of $2\pi$ and a time $t$ into an angle of $2\pi\frac t{T}$. In short, $k$ defined as $2\pi/\lambda$ gives radians per unit distance in the propagation direction, whereas $k'$ defined as $1/\lambda$ gives cycles per unit distance. For a wave propagating in any direction we define $\mathbf k$ as a vector in the direction of wave propagation and of magnitude $k=\frac{2\pi}{\lambda}$. Then our wave propagation equation becomes $$y=A\sin(\omega t-\mathbf k\cdot\mathbf r +\epsilon).$$ Neat, is it not?
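A tiny numerical sketch of the radians-versus-cycles point (the wavelength is an arbitrary example value):

```python
import math

lam = 0.5                 # an example wavelength in metres (arbitrary choice)
k = 2 * math.pi / lam     # angular wavenumber: radians per metre
k_cyc = 1 / lam           # crystallographic wavenumber: cycles per metre

# Advancing x by one wavelength changes the phase k*x by exactly 2*pi radians,
# i.e. one full cycle: the two conventions count the same thing in different
# units (radians vs cycles).
assert math.isclose(k * lam, 2 * math.pi)
assert math.isclose(k_cyc * lam, 1.0)

# The wave profile y = A*sin(k*x) therefore repeats after one wavelength:
A, x = 1.0, 0.123
assert math.isclose(A * math.sin(k * x), A * math.sin(k * (x + lam)))

# And waves with different wavelengths do get different wavenumbers,
# in either convention:
assert 2 * math.pi / 0.5 != 2 * math.pi / 0.25
```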
{ "domain": "physics.stackexchange", "id": 91313, "tags": "waves, definition, conventions, wavelength" }
Compare vs Radix
Question: Is it better to use comparison or radix sort to sort long sequences of Java ints? I know that I should probably use mergesort (N log N) for comparison sort, since it is one of the fastest, and compare that to LSD or MSD. I thought that for extremely large N, the N log N runtime would exceed the linear runtime of LSD, but other than that, mergesort (comparison) is better. I wonder if my reasoning is correct, because I have seen a question asking about Strings and the answer was the aforementioned. Now this question is about long sequences of Java int arrays and I wonder if I am missing the point. Any help is appreciated :). Answer: In theory, sorting a long sequence of ints should be a prime candidate for radix sort, as its cost grows linearly in the number of elements to be sorted, while any comparison-based sorting algorithm can't be faster than N log N.
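To make the comparison concrete, here is a sketch of an LSD radix sort (my own illustration in Python rather than Java, handling non-negative ints only; a real Java version for int[] would also need to deal with the sign bit):

```python
def lsd_radix_sort(nums):
    """LSD radix sort on non-negative ints, one byte (base 256) at a time.
    Each pass is a stable bucket distribution, so after processing every
    byte up to the maximum value's highest byte, the list is fully sorted.
    Total cost is O(N * number_of_bytes), i.e. linear in N for fixed-width
    ints, versus the N log N lower bound for comparison sorts."""
    if not nums:
        return []
    out = list(nums)
    shift = 0
    while max(out) >> shift:
        buckets = [[] for _ in range(256)]
        for n in out:                          # stable pass on current byte
            buckets[(n >> shift) & 0xFF].append(n)
        out = [n for bucket in buckets for n in bucket]
        shift += 8
    return out

import random
data = [random.randrange(10**9) for _ in range(10_000)]
assert lsd_radix_sort(data) == sorted(data)
```

In practice, the constant factors (extra passes over memory, bucket allocation) mean the N log N comparison sort often wins until N is quite large, which matches the reasoning in the question.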
{ "domain": "cs.stackexchange", "id": 4632, "tags": "sorting, comparison, radix-sort" }
Rails refactoring has_many
Question: I have a RoR 4 app, and its model has these associations: has_many :accreditations has_one :legal, -> { where(kind_cd: Accreditation.legal) }, class_name: 'Accreditation' has_many :departments, -> { where(kind_cd: Accreditation.department) }, class_name: 'Accreditation' So, you see that these associations are similar, but legal & departments have more conditions than accreditations. Can I replace class_name: 'Accreditation' with something like using :accreditations? Answer: I don't see what you will gain with your hypothetical using syntax; let's say there is one... your code will look like this: has_many :accreditations has_one :legal, -> { where(kind_cd: Accreditation.legal) }, using: :accreditations has_many :departments, -> { where(kind_cd: Accreditation.department) }, using: :accreditations How is it better than your current code? It is not more expressive, nor is it more succinct, nor DRY. Also, these associations are similar to some extent, but they look as DRY as you can make them - one is has_one, while the others are has_many, one uses the default idiom, while the others have names different than the associated class, conditions are different (and though one might argue you can predict the filter by the association name, one is singular, and the other is plural...) In short, I think your current code is good enough - any change will only harm readability.
{ "domain": "codereview.stackexchange", "id": 6781, "tags": "ruby, ruby-on-rails" }
Strange symmetry of Haldane model edge
Question: In the Haldane model we break both inversion symmetry and time-reversal symmetry; as a consequence I did not have any expectations about the symmetry of the energy bands. However, to my big surprise, the energy bands for the armchair edge are symmetric under the $k \to -k$ exchange. How can we explain this symmetry, and why is the symmetry of the two cases different? Answer: The symmetry shown in the band structure here is not a consequence of inversion symmetry or time-reversal symmetry but of the structure of the hexagonal lattice. If the fermion system really has time-reversal symmetry (the Kane-Mele model for example), all kinds of band structures (zigzag, armchair, etc.) will be symmetric around the origin, because the time-reversal partner of an eigenstate is also an eigenstate of the system with the same eigenenergy but opposite crystal momentum. Besides, as far as I know, inversion symmetry will not lead to apparent symmetry in band structures. Back to your problem: the edge states can be viewed as the projections of the Dirac cones onto the plane where the good quantum number is preserved. Recall that there are two Dirac points ($K$ and $K^{\prime}$) in graphene at which the bulk gap may close. You can see that if you choose an armchair strip, $K$ and $K^{\prime}$ will be projected onto the same momentum, while if you choose a zigzag strip, $K$ and $K^{\prime}$ will be projected onto different momenta.
{ "domain": "physics.stackexchange", "id": 67250, "tags": "condensed-matter, symmetry, electronic-band-theory, topological-insulators" }
Using an Arduino to control an ON / OFF connection between two pins
Question: I've got this driver: http://www.pololu.com/catalog/product/1182 ... an A4988 stepper motor driver carrier. I'm attempting to control a connection between the RESET and SLEEP pins with logic (code) running on my Arduino. The motor runs perfectly when these two pins are connected; however, I'd like to control when the stepper is powered off from my Arduino (and thus not generate extra heat). I'd like to: designate a pin to control the connection between these two pins; use a "digitalWrite" to the above pin with a HIGH or LOW to switch power on and off from the stepper. NOTE: The data sheet mentioned that for the driver to be powering the stepper, both RESET and SLEEP needed to be switched on (HIGH). Answer: You are going about this incorrectly. The reason why Pololu is telling you to connect the two pins is because the sleep pin has a pullup resistor on their breakout board. Connecting reset to the sleep pin is equivalent to connecting the reset pin to high. You can achieve your goal by connecting the reset pin to high (5V through a pullup resistor) and connecting the sleep pin directly to your Arduino, just like the step/dir pins.
{ "domain": "robotics.stackexchange", "id": 2, "tags": "arduino, logic-control, stepper-motor, stepper-driver" }
What is the difference between out of distribution detection and anomaly detection?
Question: I'm currently reading the paper Likelihood Ratios for Out-of-Distribution Detection, and it seems that their problem is very similar to the problem of anomaly detection. More precisely, given a neural network trained on a dataset consisting of classes $A,B,$ and $C$, then they can detect if an input to the neural network is anomalous if it is different from these three classes. What is the difference between what they are doing and regular anomaly detection? Answer: Your observation is correct, although the terminology needs a little explaining. The term 'out-of-distribution' (OOD) data refers to data that was collected at a different time, and possibly under different conditions or in a different environment, than the data collected to create the model. They may say that this data is from a 'different distribution'. Data that is in-distribution can be called novelty data. Novelty detection is when you have new data (i.e. OOD) and you want to know whether or not it is in-distribution. You want to know if it looks like the data you trained on. Anomaly detection is when you test your data to see if it is different from what you trained the model on. Out-of-distribution detection is essentially running your model on OOD data. So one takes OOD data and does novelty detection or anomaly detection (aka outlier detection). Below is a figure from What is anomaly detection? In time series modeling, the term 'out-of-distribution' data is analogous to 'out-of-sample' data and 'in-distribution' data is analogous with 'in-sample' data.
{ "domain": "ai.stackexchange", "id": 2584, "tags": "neural-networks, comparison, papers, anomaly-detection" }
Multiple Instances of GZServer and GZWeb
Question: I am a newbie to Gazebo and gzweb. What I am doing is connecting multiple users to different sessions of gzweb on my computer. Once a user logs in with their account, the system logs in to their account on a terminal and creates a Gazebo platform with gzweb. I can run multiple gzweb instances on different ports with different hostnames, but all of them connect to one instance of gzserver. I tried exporting the GAZEBO_MASTER_URI variable with different hostnames and ports according to gzweb's host and port. gzserver starts multiple times when the users are different, but never connects to gzweb or vice versa. Any help would be appreciated. Originally posted by alicanuzunn on Gazebo Answers with karma: 13 on 2017-09-07 Post score: 1 Answer: gzweb's websocket server's port is currently hardcoded to 7681, so you'll probably need to manually change it to a different port for each gzweb you run. Here's the port number on the server side, and you'll need to update the client side too. Originally posted by iche033 with karma: 1018 on 2017-09-07 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by alicanuzunn on 2017-09-08: thank you, I just made it according to your answer; in addition, I changed gz3d.js's port too.
{ "domain": "robotics.stackexchange", "id": 4172, "tags": "gazebo" }
Finding the minimum difference in a list of numbers
Question: I have written the following code to find the minimum difference from a list of numbers. Because I am using a loop once and LINQ again to find the minimum, the algorithm is O(N²). Can you please tell me if I am using the framework in the most optimal way (speed and memory utilisation) to achieve this task: using (StreamReader sr = new StreamReader("IN.in")) using (StreamWriter sw = new StreamWriter("OUT.out")) { int T = int.Parse(sr.ReadLine()); for (int i = 1; i <= T; i++) { int N = int.Parse(sr.ReadLine()); List<int> intList = sr.ReadLine().Split(' ').Select(e => int.Parse(e)).ToList(); intList.Sort(); List<int> diff = new List<int>(); int leastDiff = int.MaxValue; for (int k = 0; k < intList.Count - 1; k++) { int iDiff = intList[k + 1] - intList[k]; diff.Add(iDiff); leastDiff = Math.Min(leastDiff, iDiff); } sw.WriteLine(leastDiff); } } Benchmark: For 3 test cases of 5 integers in each list, the for-loop implementation takes 55±5 ms. Mr.Mindor's LINQ implementation timing varies from 60±50 ms. Memory usage in both implementations is almost 8.3 MB. Answer: You're worrying about the wrong problems: there isn't a lot more performance to be squeezed out of your code, but you could make substantial improvements to readability. Choose a better file format Why are you encoding the number of lines to be read into the file itself? That seems like redundant information. Just read all the lines there are and use those. You are also redundantly encoding the number of numbers per line: int N = int.Parse(sr.ReadLine()); You aren't doing anything with the parsed value, which is an indication that it shouldn't be in the file to start with. Make your code reusable Why not define the algorithm for finding the smallest difference inside an extension method? This allows you to separate the concerns of I/O and your minimum difference finding logic. public static int SmallestDifference(this IEnumerable<int> source) { var numbers = source as int[] ??
source.ToArray(); Array.Sort(numbers); int difference = int.MaxValue; for (int i = 1; i < numbers.Length; i++) { difference = Math.Min(difference, numbers[i] - numbers[i - 1]); } return difference; } Be expressive Using the deferred execution streaming File.ReadLines method together with an expressive LINQ query, you can write code that reads fluently and makes sense rather than confusing the reader with details: var minDifferences = from line in File.ReadLines("IN.in") let numbers = from number in line.Split(' ') select int.Parse(number) select numbers.SmallestDifference().ToString(); File.WriteAllLines("OUT.out", minDifferences); Now here's a tiny surprise left for the end: This solution consistently performs at least as well as your original code. (On average, it's a couple of milliseconds faster per fifty thousand lines, but that's in the realm of microbenchmarking which should be avoided. Just trying to give you a rough idea.) So stop worrying about performance; strive for clean code instead. EDIT: I understand that you can't change the format (as per your comment), so I've added the code below to allow you to stick with your current file layout. The modification required is tiny: simply skip the first line, and only look for the smallest difference in lines that have more than one number. var minDifferences = from line in File.ReadLines("IN.in").Skip(1) let numbers = from number in line.Split(' ') select int.Parse(number) where numbers.Count() > 1 select numbers.SmallestDifference().ToString(); File.WriteAllLines("OUT.out", minDifferences);
{ "domain": "codereview.stackexchange", "id": 2382, "tags": "c#, performance, algorithm, linq" }
what is universality (with respect to SUSY and neutral kaon oscillations)?
Question: I'm having trouble putting the pieces together. In SM, neutral kaon oscillation is heavily constrained. This means, roughly, that the squark mass matrices have to be diagonal. And this is called universality of soft parameters. What exactly is universality and why do we have universality in this situation? Furthermore, is having diagonal squark mass matrices sufficient? In a lot of SUSY theories, they go one step further and assume the scalar masses are the same for the first and second generations (pMSSM for instance). Why is this additional step necessary? Answer: By universality, we usually mean not just the diagonal form; we mean a diagonal form proportional to the unit matrix. Universality is flavor-blindness. It could be sufficient to have diagonal squark matrices – in the basis of superpartners of the energy eigenstates of quarks – for the unwanted new effects to be suppressed. However, it is extremely unlikely – unnatural – for both quark and squark matrices to be diagonal in the "same" basis. Such an accident would mean a fine-tuning that would require an extra explanation. Assuming we don't want to look for such an explanation, the only principle that may guarantee that the squark matrix is diagonal in a seemingly arbitrary basis is to assume that it is proportional to the unit matrix – it is universal, flavor-blind. This assumption is legitimate as far as naturalness goes because it increases a symmetry, the unitary rotation symmetry acting on the squarks. The proportionality to the unit matrix is especially important for the first two generations. The third generation may fail to be completely universal and it causes smaller problems because the effects involving the third generation are suppressed due to the heavy top quark mass.
{ "domain": "physics.stackexchange", "id": 4666, "tags": "supersymmetry" }
What is the meaning of the channel gain is normalized
Question: Transmitting the signal $x$ over the channel $h$ and affected by noise $n$ can be expressed as: $y = hx + n$ where $h$ is the channel gain, which is known to me. I sometimes see phrases saying "when the channel gain is normalized, the performance will be different". What is the effect of normalizing the channel gain? Does that mean it cancels the effect of the channel? Answer: Normalizing $h$ means defining it so that the expected value of $|h|^2$ is equal to 1. The implication is that multi-path is (on average) neither taking energy from $x$ nor adding energy to it. In an actual wireless channel several effects play out simultaneously: path loss, large-scale fading (e.g. shadowing), short-scale fading, noise, distortion, etc. Often, one wants to separate these processes and simulate them independently. The channel $y=hx+n$ ignores everything except multi-path and Gaussian noise.
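As a quick numerical sketch of this convention (the Rayleigh-fading model and sample size are my own example choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 200_000

# Rayleigh fading: h = (x + j*y)/sqrt(2) with x, y standard normal.
# The 1/sqrt(2) factor normalizes the channel so that E[|h|^2] = 1,
# i.e. unit average channel power.
h = (rng.standard_normal(n_samples)
     + 1j * rng.standard_normal(n_samples)) / np.sqrt(2)

avg_power = np.mean(np.abs(h) ** 2)
assert abs(avg_power - 1.0) < 0.02

# Consequence: on average the faded signal h*x carries the same power
# as x itself, so multipath neither adds nor removes energy overall.
x_power = 2.0
assert abs(avg_power * x_power - x_power) < 0.05
```

With this normalization, any overall attenuation (path loss, shadowing) is modelled separately as a scale factor on the transmit power rather than being folded into $h$.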
{ "domain": "dsp.stackexchange", "id": 10340, "tags": "digital-communications, fading-channel" }
Fourier transform of creation and annihilation operators in Kitaev Chain
Question: I encounter a problem when I use a Fourier transformation to transform the real-space Kitaev chain to momentum space. Suppose the real-space Kitaev chain can be written as follows: \begin{equation} H_{KM} = -\sum^{N-1}_{i} (t c^{\dagger}_{i} c_{i+1} + tc^{\dagger}_{i+1} c_{i} + \Delta c^{\dagger}_{i} c^{\dagger}_{i+1} + \Delta^{*}c_{i+1}c_{i}) - \mu \sum^{N}_{i}c^{\dagger}_{i} c_{i} \end{equation} And the expected result should be like this (Hamiltonian for the periodic Kitaev model): \begin{equation} H_{k} = -\sum_{k}\big[( 2t \cos(k) + \mu) c^{\dagger}_{k} c_{k} + \Delta e^{-ik}c^{\dagger}_{k} c^{\dagger}_{-k} + \Delta^{*} e^{ik} c_{k}c_{-k}\big] \end{equation} However, when I use the Fourier transform $ c_{j} = \frac{1}{\sqrt{N}} \sum_{k} e^{-ikx_{j}} c_{k}$ to manipulate the middle two terms $c^{\dagger}_{i} c^{\dagger}_{i+1}$ and $c_{i}c_{i+1}$, I ran into trouble since I cannot get the correct phase $e^{\pm ik}$. My calculation steps are as follows: \begin{equation} \begin{split} \sum_{i} c^{\dagger}_{i} c^{\dagger}_{i+1} &= \frac{1}{N} \sum_{kqi} c^{\dagger}_{k} c^{\dagger}_{q} e^{ix_{i}k} e^{iqx_{i+1}} \\ &=\frac{1}{N} \sum_{kqi} c^{\dagger}_{k} c^{\dagger}_{q} e^{ix_{i}k} e^{iqx_{i}} e^{iq} ~~~~ \text{($x_{i+1} = x_{i} + 1$)} \\ &= \sum_{kq} c^{\dagger}_{k} c^{\dagger}_{q}e^{iq} \big(\frac{1}{N} \sum_{i} e^{ix_{i}(k+q)} \big) \\ &= \sum_{kq} c^{\dagger}_{k}c^{\dagger}_{q} e^{iq}\delta_{k,-q}\\ &= \sum_{k} c^{\dagger}_{k}c^{\dagger}_{-k} e^{-ik} \end{split} \end{equation} Similarly for the $c_{i+1}c_{i}$ term: \begin{equation} \begin{split} \sum_{i} c_{i+1} c_{i} &= \frac{1}{N} \sum_{kqi} c_{k} c_{q} e^{-ix_{i+1}k} e^{-iqx_{i}} \\ &= \frac{1}{N} \sum_{kqi} c_{k} c_{q} e^{-ik} e^{-ikx_{i}} e^{-iqx_{i}} ~~~~ \text{($e^{-ikx_{i+1}} =e^{-ik(x_{i} +1)} $)} \\ &=\sum_{kq} c_{k} c_{q} e^{-ik} \big( \frac{1}{N} \sum_{i} e^{-i(k+q)x_{i}}\big) \\ &= \sum_{kq} c_{k}c_{q} e^{-ik} \delta_{k,-q} \\ &= \sum_{k} c_{k}c_{-k} e^{-ik} \end{split} \end{equation} Therefore, could
anyone help me to point out the mistakes that I made in my calculation? Thank you. Answer: For it to be hermitian, the Hamiltonian should read \begin{equation} H_{k} = -\sum_{k}\big[( 2t \cos(k) + \mu) c^{\dagger}_{k} c_{k} + \Delta e^{-ik}c^{\dagger}_{k} c^{\dagger}_{-k} + \Delta^{*} e^{ik} c_{-k}c_{k}\big] \ , \end{equation} or something like it. This is what your derivation gives: substitute $k\to-k$ and use $\sum_k=\sum_{-k}$, and you get $$ \sum_k c_k c_{-k}\, e^{-ik} = \sum_k c_{-k} c_{k}\, e^{ik}\ . $$
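The step that collapses the double momentum sum in both derivations is the lattice orthogonality identity $\frac{1}{N}\sum_i e^{i(k+q)x_i}=\delta_{k,-q}$, which is easy to verify numerically on a small ring (the lattice size here is an arbitrary example):

```python
import numpy as np

N = 8
sites = np.arange(N)                  # x_i = 0, 1, ..., N-1
ks = 2 * np.pi * np.arange(N) / N     # allowed momenta on an N-site ring

# (1/N) * sum_i exp(i*(k+q)*x_i) equals 1 when k = -q (mod 2*pi) and 0
# otherwise -- this is the delta_{k,-q} that reduces sum_{kq} to sum_k.
for k in ks:
    for q in ks:
        s = np.mean(np.exp(1j * (k + q) * sites))
        k_is_minus_q = np.isclose(np.exp(1j * (k + q)), 1)
        assert np.isclose(s, 1.0 if k_is_minus_q else 0.0)
```

The remaining phase bookkeeping ($e^{\pm ik}$ from shifting $x_{i+1}=x_i+1$) is exactly what the relabeling $k\to-k$ in the answer rearranges.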
{ "domain": "physics.stackexchange", "id": 77597, "tags": "quantum-mechanics, condensed-matter, fourier-transform" }
Does Kepler's law only apply to planets?
Question: Does Kepler's law only apply to planets? If so, why doesn't it apply to other objects undergoing circular motion? By Kepler's law I'm referring to $T^2 \propto r^3$. Answer: Kepler's third law, the so-called harmonic law, was published by Johannes Kepler in 1619, ten years after he published his first two laws. Not long thereafter, in 1643, the Flemish astronomer Godefroy Wendelin noted that Kepler's third law not only applies to the planets, but also to the moons of Jupiter. Now we know that this law describes the motion of any two bodies in gravitational orbit around each other. In fact, all you really need is an inverse-square central attractive force between the two bodies. This law still holds approximately even if there are other bodies present, as long as their gravitational influence on the smaller of the two bodies is minor compared to the gravity of the larger of the two bodies. Stretching this law to cover Coulombic systems is perfectly ok. Kepler's third law is also observed in Rydberg atoms.
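As a quick check, the law can be tested against well-known approximate orbital data for the planets (values rounded to three decimals; in units of AU and years, the constant of proportionality for bodies orbiting the Sun is 1):

```python
import math

# Semi-major axis (AU) and orbital period (years), approximate textbook values
planets = {
    "Mercury": (0.387, 0.241),
    "Venus":   (0.723, 0.615),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.862),
}

for name, (a, T) in planets.items():
    # In these units Kepler's third law reads T^2 = a^3
    assert math.isclose(T**2, a**3, rel_tol=0.01), name
```

The same relation holds for Jupiter's moons or artificial satellites, but with a different proportionality constant, since that constant depends on the mass of the central body.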
{ "domain": "physics.stackexchange", "id": 5911, "tags": "newtonian-mechanics, planets, orbital-motion" }
Forbidden reaction from symmetries and conservation laws
Question: If $\rm p\bar p$ (proton-antiproton) annihilation at rest proceeds via the $S$ state ($L=0$), why is the reaction $\rm p\bar p\to \pi^0 +\pi^0$ forbidden as a strong interaction (i.e. with parity conserved)? The initial state has odd parity as $L=0$; the final state must have the same, so $L$ must be odd for the final state. How do I determine the value of $L$ for the pions? (Do I use the fact that total angular momentum $J$ is conserved and the known values of spin $S$?) Answer: You've concluded that, to conserve parity, $L$ must be odd. But the two pions are identical bosons, and so the wavefunction must be symmetric under exchange. If $L$ is odd, the spatial wavefunction picks up a factor $(-1)^L=-1$ under exchange, i.e. it is anti-symmetric, and so this is forbidden.
{ "domain": "physics.stackexchange", "id": 47244, "tags": "particle-physics, conservation-laws, standard-model, parity, pions" }
import tf segfaults python on OS X 10.9 with brewed python
Question: I recently got the desktop-full variant of ROS hydro installed on Mac OS X 10.9 Mavericks. There were a series of smaller issues, e.g. most of the ones mentioned at http://answers.ros.org/question/94771/building-octomap-orocos_kdl-and-other-packages-on-osx-109-solution/, but in the end I got everything to compile. When I start up python and type import tf, it crashes with Segmentation fault: 11 My python version is 2.7.6 from homebrew. I don't even know how to start debugging this. Any help is greatly appreciated. Edit: This seems to be caused by a faulty cmake find_package(PythonLibs), which in turn caused _tf.so to be linked against Apple's python, while I'm using brewed python. I opened this issue: https://github.com/mxcl/homebrew/issues/25118. I guess the question that remains is if anybody else has the same issue. For me it is not isolated to tf, but affects all python modules that are built with find_package(PythonLibs) and link against the found python libs. Edit2: [moved to new question] Edit3: What used to be Edit2 is now a new question as suggested. Thanks. http://answers.ros.org/question/110671/recommended-python-version-on-os-x-with-homebrew/ Originally posted by demmeln on ROS Answers with karma: 4306 on 2013-12-09 Post score: 4 Original comments Comment by William on 2013-12-10: Unless you have a reason for using the Homebrew Python, I would highly recommend using the built-in Python, in my experience it causes far fewer issues. Comment by ahendrix on 2013-12-13: I think Edit2 above should be asked as a new question. Comment by demmeln on 2014-02-11: Can you edit the wiki to reflect that? Comment by demmeln on 2014-02-11: Thank you! Answer: I'm running Mavericks with the system python (2.7.5), and I don't have this problem. Unless you know of a very specific reason that you need python 2.7.6 instead of 2.7.5, I strongly suggest removing your homebrew install of python and using the system version instead.
I would try this out by doing brew unlink python and confirm that the system python interpreter does what you want. You can then remove python with brew uninstall python or re-enable your homebrew python install with brew link python Originally posted by ahendrix with karma: 47576 on 2013-12-11 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by demmeln on 2013-12-16: I'll accept this since it sort of works around the issue. Beware that you have to reinstall various formulas, as well as pip packages, and recompile ROS when switching python between brewed and system. Comment by Artem on 2013-12-16: I also get Segmentation fault when running stereo image proc on Mac OSX with system python. If someone has any ideas how to resolve it, I will very much appreciate it! Comment by demmeln on 2013-12-16: You should add a question for that specific problem. What does the OS X problem report state? Comment by Artem on 2013-12-16: I have had this problem for a long time on OSX. I am thinking about debugging it. I don't have the cameras around to try it. Will create a question soon. Thank you! Comment by demmeln on 2013-12-17: I wonder if this could be related to this issue with message filters, since stereo image proc is using the approx time policy. You could try patching that header file locally. https://github.com/ros/ros_comm/pull/326 Comment by Artem on 2013-12-17: It looks like the problem was in the topic I was subscribing to. Because my image_raw is originally grayscale, I should have subscribed to image_rect instead of image_rect_color. So no segmentation fault for now. :)
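Before switching interpreters it can help to confirm exactly which Python (and which install prefix) is active. This is a generic check, not part of the original answer:

```python
import sys
import sysconfig

def interpreter_info():
    """Collect basic facts about the running interpreter. A mismatch
    between this interpreter and the libpython a C extension (such as
    tf's _tf.so) was linked against is a classic cause of segfaults
    on import."""
    return {
        "executable": sys.executable,  # /usr/bin/python vs /usr/local/bin/python
        "version": sys.version_info[:3],
        "prefix": sys.prefix,          # install root of this interpreter
        "libdir": sysconfig.get_config_var("LIBDIR"),
    }

if __name__ == "__main__":
    for key, value in interpreter_info().items():
        print(key, value)
```

On OS X the system python reports a prefix under /System/Library/Frameworks, while a brewed one lives under /usr/local, which makes the mismatch described in the question easy to spot.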
{ "domain": "robotics.stackexchange", "id": 16390, "tags": "ros, python, homebrew, osx, transform" }
Why do the dipoles in dichloroethane point in opposite directions?
Question: I understand why the dipoles would align in a polar solvent to maximise dipole-dipole interactions, but wouldn't the repulsion due to like charges destabilise the molecule? Is it an MO effect instead? An undergraduate level response would be appreciated. Answer: When dipoles are free to move, all else being equal, they will tend to align "tail-to-tip". That is the case, for example, when the dipoles belong to different molecules in a liquid. But the two polarized bonds within a single molecule of dichloroethane are not free to move to arbitrary positions. The two carbons are stuck together by a bond, and the only degree of freedom is rotation around that central carbon-carbon bond. This rotation doesn't change the distance between the two positive carbon atoms. It also doesn't change the distance between either carbon atom and either chlorine atom. So the only relevant effect of this rotation is to change the distance between the two negatively charged chlorine atoms. Therefore, the most stable conformation is the one where the negatively charged chlorines are furthest apart, since that minimizes the repulsive force.
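The rotation argument can be checked numerically with a toy model: put each chlorine on a circle around the C-C axis and vary only the dihedral angle. The lengths below are made-up round numbers for illustration, not real molecular geometry:

```python
import math

def cl_cl_distance(dihedral_deg, axial_sep=1.5, radius=1.0):
    """Distance between the two Cl atoms as a function of the dihedral
    angle about the C-C bond. Cl1 sits at (0, r, 0) and Cl2 at
    (L, r*cos(phi), r*sin(phi)); rotation changes only phi, exactly as
    the answer describes."""
    phi = math.radians(dihedral_deg)
    dy = radius * math.cos(phi) - radius
    dz = radius * math.sin(phi)
    return math.sqrt(axial_sep ** 2 + dy ** 2 + dz ** 2)

# The anti conformation (180 degrees) maximizes the Cl-Cl separation:
for angle in (0, 60, 120, 180):
    print(angle, round(cl_cl_distance(angle), 3))
```

The distance grows monotonically from eclipsed (0 degrees) to anti (180 degrees), matching the claim that the most stable conformation puts the chlorines furthest apart.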
{ "domain": "chemistry.stackexchange", "id": 17694, "tags": "stereoelectronics" }
Param count in last layer high, how can I decrease?
Question: Not sure where to put this... I am trying to create a convolutional architecture for a DQN in keras, and I want to know why my param count is so high for my last layer compared to the rest of the network. I've tried slowly decreasing the dimensions of the layers above it, but it performs quite poorly. I want to know if there's anything I can do to decrease the param count of that last layer, besides the above. Code: #Import statements. import random import numpy as np import tensorflow as tf import tensorflow.keras.layers as L from collections import deque import layers as mL import tensorflow.keras.optimizers as O import optimizers as mO import tensorflow.keras.backend as K #Conv function. def conv(x, units, kernel, stride, noise=False, padding='valid'): y = L.Conv2D(units, kernel, stride, activation=mish, padding=padding)(x) if noise: y = mL.PGaussian()(y) return y #Network x_input = L.Input(shape=self.state) x_goal = L.Input(shape=self.state) x = L.Concatenate(-1)([x_input, x_goal]) x_list = [] for i in range(2): x = conv(x, 4, (7,7), 1) for i in range(2): x = conv(x, 8, (5,5), 2) for i in range(10): x = conv(x, 6, (3,3), 1, noise=True) x = L.Conv2D(1, (3,3), 1)(x) x_shape = K.int_shape(x) x = L.Reshape((x_shape[1], x_shape[2]))(x) x = L.Flatten()(x) crit = L.Dense(1, trainable=False)(x) critic = tf.keras.models.Model([x_input, x_goal], crit) act1 = L.Dense(self.action, trainable=False)(x) act2 = L.Dense(self.action2, trainable=False)(x) act1 = L.Softmax()(act1) act2 = L.Softmax()(act2) actor = tf.keras.models.Model([x_input, x_goal], [act1, act2]) actor.compile(loss=mish_loss, optimizer='adam') actor.summary() actor.summary(): ________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_2 (InputLayer) [(None, 300, 300, 3) 0 
__________________________________________________________________________________________________ input_3 (InputLayer) [(None, 300, 300, 3) 0 __________________________________________________________________________________________________ concatenate (Concatenate) (None, 300, 300, 6) 0 input_2[0][0] input_3[0][0] __________________________________________________________________________________________________ conv2d_52 (Conv2D) (None, 294, 294, 4) 1180 concatenate[0][0] __________________________________________________________________________________________________ conv2d_53 (Conv2D) (None, 288, 288, 4) 788 conv2d_52[0][0] __________________________________________________________________________________________________ conv2d_54 (Conv2D) (None, 142, 142, 8) 808 conv2d_53[0][0] __________________________________________________________________________________________________ conv2d_55 (Conv2D) (None, 69, 69, 8) 1608 conv2d_54[0][0] __________________________________________________________________________________________________ conv2d_56 (Conv2D) (None, 67, 67, 6) 438 conv2d_55[0][0] __________________________________________________________________________________________________ p_gaussian (PGaussian) (None, 67, 67, 6) 1 conv2d_56[0][0] __________________________________________________________________________________________________ conv2d_57 (Conv2D) (None, 65, 65, 6) 330 p_gaussian[0][0] __________________________________________________________________________________________________ p_gaussian_1 (PGaussian) (None, 65, 65, 6) 1 conv2d_57[0][0] __________________________________________________________________________________________________ conv2d_58 (Conv2D) (None, 63, 63, 6) 330 p_gaussian_1[0][0] __________________________________________________________________________________________________ p_gaussian_2 (PGaussian) (None, 63, 63, 6) 1 conv2d_58[0][0] 
__________________________________________________________________________________________________ conv2d_59 (Conv2D) (None, 61, 61, 6) 330 p_gaussian_2[0][0] __________________________________________________________________________________________________ p_gaussian_3 (PGaussian) (None, 61, 61, 6) 1 conv2d_59[0][0] __________________________________________________________________________________________________ conv2d_60 (Conv2D) (None, 59, 59, 6) 330 p_gaussian_3[0][0] __________________________________________________________________________________________________ p_gaussian_4 (PGaussian) (None, 59, 59, 6) 1 conv2d_60[0][0] __________________________________________________________________________________________________ conv2d_61 (Conv2D) (None, 57, 57, 6) 330 p_gaussian_4[0][0] __________________________________________________________________________________________________ p_gaussian_5 (PGaussian) (None, 57, 57, 6) 1 conv2d_61[0][0] __________________________________________________________________________________________________ conv2d_62 (Conv2D) (None, 55, 55, 6) 330 p_gaussian_5[0][0] __________________________________________________________________________________________________ p_gaussian_6 (PGaussian) (None, 55, 55, 6) 1 conv2d_62[0][0] __________________________________________________________________________________________________ conv2d_63 (Conv2D) (None, 53, 53, 6) 330 p_gaussian_6[0][0] __________________________________________________________________________________________________ p_gaussian_7 (PGaussian) (None, 53, 53, 6) 1 conv2d_63[0][0] __________________________________________________________________________________________________ conv2d_64 (Conv2D) (None, 51, 51, 6) 330 p_gaussian_7[0][0] __________________________________________________________________________________________________ p_gaussian_8 (PGaussian) (None, 51, 51, 6) 1 conv2d_64[0][0] 
__________________________________________________________________________________________________ conv2d_65 (Conv2D) (None, 49, 49, 6) 330 p_gaussian_8[0][0] __________________________________________________________________________________________________ p_gaussian_9 (PGaussian) (None, 49, 49, 6) 1 conv2d_65[0][0] __________________________________________________________________________________________________ conv2d_66 (Conv2D) (None, 47, 47, 1) 55 p_gaussian_9[0][0] __________________________________________________________________________________________________ reshape (Reshape) (None, 47, 47) 0 conv2d_66[0][0] __________________________________________________________________________________________________ flatten (Flatten) (None, 2209) 0 reshape[0][0] __________________________________________________________________________________________________ dense_1 (Dense) (None, 2000) 4420000 flatten[0][0] __________________________________________________________________________________________________ dense_2 (Dense) (None, 200) 442000 flatten[0][0] __________________________________________________________________________________________________ softmax (Softmax) (None, 2000) 0 dense_1[0][0] __________________________________________________________________________________________________ softmax_1 (Softmax) (None, 200) 0 dense_2[0][0] ================================================================================================== Total params: 4,869,857 Trainable params: 7,857 Non-trainable params: 4,862,000 __________________________________________________________________________________________________ Answer: If I understood correctly, you want to decrease the parameter count on the last layer (the dense_2 layer, right?). It would be nice to know why you want to decrease the number of parameters in the last layer... But I'll proceed with what I see.
Firstly, the dense layers (or fully connected in the literature) have a deterministic number of parameters (or weights) to learn according to the size of the input and output tensor. The relation is the following: $N_{params} = Y_{output} \cdot (X_{input} +1) = 200 \cdot (2209 +1) = 442000$ Where: $Y_{output}$: Output tensor shape (act2=200, which is the output of the dense_2 layer) $X_{input}$: Input tensor shape (x=2209, which is the output of the flatten layer) So if you want to decrease the number of parameters you can: Decrease the output tensor: your self.action2, which I guess is the action space, so you might not be able to decrease it. Decrease the input tensor: maybe? I would need more context (code) to know if that is even possible. So, in short: if you are not willing to change the input or output tensor of the Dense layers, then no, you cannot decrease the number of parameters. BONUS: In case you missed it, I have noticed you have set your dense layers to trainable=False. So in principle you should not care about decreasing the number of parameters (which in most cases is motivated by reducing training time) since they are already not being trained. You can check that in the Keras summary output: Total params: 4,869,857 Trainable params: 7,857 Non-trainable params: 4,862,000 Where the non-trainable parameters are $4862000 = 4420000 + 442000$, which are the number of parameters of your 2 dense layers.
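To double-check the relation against the summary, a quick sketch that re-applies it to both dense layers (the numbers are taken from the summary above):

```python
def dense_params(n_in, n_out):
    """Fully connected layer: one weight per (input, output) pair plus
    one bias per output, i.e. n_out * (n_in + 1)."""
    return n_out * (n_in + 1)

flatten = 2209                    # output of the Flatten layer (47 * 47)
d1 = dense_params(flatten, 2000)  # dense_1 -> 4,420,000
d2 = dense_params(flatten, 200)   # dense_2 -> 442,000
print(d1, d2, d1 + d2)            # 4420000 442000 4862000
```

The sum matches the non-trainable count in the summary, since both dense layers are frozen.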
{ "domain": "ai.stackexchange", "id": 1674, "tags": "convolutional-neural-networks, weights" }
cannot access '/dev/video*'
Question: Hi there, I am not sure if this is the right place to ask, but I would really appreciate your help. I am having an issue with accessing my web cam from my Lenovo Laptop (running Ubuntu 20.04). yuxiang@yuxiang:/etc/modprobe.d$ ls /dev/video* ls: cannot access '/dev/video*': No such file or directory yuxiang@yuxiang:/etc/modprobe.d$ fswebcam -r 640x480 --no-banner image3.jpg --- Opening /dev/video0... yuxiang@yuxiang:~/catkin_ws$ roslaunch my_camera elp.launch uvc_find_device: No such device (-4) [ERROR] [1645657833.102061279, 1575.456000000]: Unable to open camera. ^C[libuvc_camera-2] killing on exit [nodelet_manager-1] killing on exit [ INFO] [1645657841.057052676, 1583.374000000]: Unloading nodelet /libuvc_camera from manager /nodelet_manager shutting down processing monitor... yuxiang@yuxiang:~/catkin_ws$ hwinfo --usb 02: USB 00.0: 0000 Unclassified device [Created at usb.122] Unique ID: Zj8l.Y_aYp4agq10 Parent ID: pBe4.xYNhIwdOaa6 SysFS ID: /devices/pci0000:00/0000:00:14.0/usb2/2-4/2-4:1.0 SysFS BusID: 2-4:1.0 Hardware Class: unknown Model: "Intel(R) RealSense(TM) Depth Camera 435" Hotplug: USB Vendor: usb 0x8086 "Intel Corp." Device: usb 0x0b07 "Intel(R) RealSense(TM) Depth Camera 435" Revision: "50.b1" Serial ID: "933623025074" Module Alias: "usb:v8086p0B07d50B1dcEFdsc02dp01ic0Eisc01ip00in00" Driver Info #0: Driver Status: uvcvideo is not active Driver Activation Cmd: "modprobe uvcvideo" Config Status: cfg=new, avail=yes, need=no, active=unknown Attached to: #14 (Hub) 03: USB 00.0: 0000 Unclassified device [Created at usb.122] Unique ID: POWV.ZGXmp31lfyE Parent ID: k4bc.2DFUsyrieMD SysFS ID: /devices/pci0000:00/0000:00:14.0/usb1/1-9/1-9:1.0 SysFS BusID: 1-9:1.0 Hardware Class: unknown Model: "Synaptics Unclassified device" Hotplug: USB Vendor: usb 0x06cb "Synaptics, Inc."
Device: usb 0x00bd Serial ID: "46be90c14e26" Speed: 12 Mbps Module Alias: "usb:v06CBp00BDd0000dcFFdsc10dpFFicFFisc00ip00in00" Config Status: cfg=new, avail=yes, need=no, active=unknown Attached to: #10 (Hub) 04: USB 00.1: 0000 Unclassified device [Created at usb.122] Unique ID: QR8P.z1tM_FB91k0 Parent ID: k4bc.2DFUsyrieMD SysFS ID: /devices/pci0000:00/0000:00:14.0/usb1/1-8/1-8:1.1 SysFS BusID: 1-8:1.1 Hardware Class: unknown Model: "Acer Integrated Camera" Hotplug: USB Vendor: usb 0x5986 "Acer, Inc" Device: usb 0x115f "Integrated Camera" Revision: "56.14" Serial ID: "" Speed: 480 Mbps Module Alias: "usb:v5986p115Fd5614dcEFdsc02dp01ic0Eisc02ip00in01" Config Status: cfg=new, avail=yes, need=no, active=unknown Attached to: #10 (Hub) yuxiang@yuxiang:~$ /usr/sbin/lsmod | sort ac97_bus 16384 1 snd_soc_core acpi_pad 184320 0 acpi_thermal_rel 16384 1 int3400_thermal aesni_intel 372736 4 arp_tables 24576 0 aufs 262144 0 autofs4 45056 2 binfmt_misc 24576 1 bluetooth 552960 11 btrtl,btintel,btbcm,bnep,btusb bnep 24576 2 bpfilter 32768 0 bridge 176128 1 br_netfilter br_netfilter 28672 0 btbcm 16384 1 btusb btintel 24576 1 btusb btrtl 24576 1 btusb btusb 57344 0 ccm 20480 6 cfg80211 708608 3 iwlmvm,iwlwifi,mac80211 coretemp 20480 0 crc32_pclmul 16384 0 crct10dif_pclmul 16384 1 cryptd 24576 2 crypto_simd,ghash_clmulni_intel crypto_simd 16384 1 aesni_intel drm 491520 16 drm_kms_helper,i915 drm_kms_helper 184320 1 i915 e1000e 258048 0 ecc 28672 1 ecdh_generic ecdh_generic 16384 1 bluetooth fb_sys_fops 16384 1 drm_kms_helper ghash_clmulni_intel 16384 0 glue_helper 16384 1 aesni_intel hid 131072 3 i2c_hid,hid_multitouch,hid_generic hid_generic 16384 0 hid_multitouch 28672 0 i2c_algo_bit 16384 1 i915 i2c_hid 28672 0 i2c_i801 32768 0 i915 1994752 39 idma64 20480 0 input_leds 16384 0 int3400_thermal 20480 0 int3403_thermal 20480 0 int340x_thermal_zone 16384 2 int3403_thermal,processor_thermal_device intel_cstate 20480 0 intel_hid 20480 0 intel_lpss 16384 1 intel_lpss_pci intel_lpss_pci 
20480 0 intel_powerclamp 20480 0 intel_rapl_common 24576 2 intel_rapl_msr,processor_thermal_device intel_rapl_msr 20480 0 intel_soc_dts_iosf 20480 1 processor_thermal_device intel_wmi_thunderbolt 20480 0 ip6table_filter 16384 0 ip6_tables 32768 1 ip6table_filter iptable_filter 16384 1 iptable_nat 16384 1 ip_tables 32768 2 iptable_filter,iptable_nat iwlmvm 380928 0 iwlwifi 331776 1 iwlmvm joydev 24576 0 kvm 663552 0 ledtrig_audio 16384 4 snd_hda_codec_generic,snd_hda_codec_realtek,snd_sof,thinkpad_acpi libarc4 16384 1 mac80211 libcrc32c 16384 2 nf_conntrack,nf_nat llc 16384 2 bridge,stp lp 20480 0 mac80211 847872 1 iwlmvm mac_hid 16384 0 mc 53248 0 mei 106496 3 mei_hdcp,mei_me mei_hdcp 24576 0 mei_me 40960 1 Module Size Used by msr 16384 0 nf_conntrack 139264 4 xt_conntrack,nf_nat,nf_conntrack_netlink,xt_MASQUERADE nf_conntrack_netlink 45056 0 nf_defrag_ipv4 16384 1 nf_conntrack nf_defrag_ipv6 24576 1 nf_conntrack nf_nat 45056 2 iptable_nat,xt_MASQUERADE nfnetlink 16384 3 nf_conntrack_netlink nls_iso8859_1 16384 1 nvme 49152 2 nvme_core 102400 4 nvme nvram 16384 1 thinkpad_acpi overlay 118784 0 parport 53248 3 parport_pc,lp,ppdev parport_pc 40960 0 pinctrl_cannonlake 36864 1 pinctrl_intel 28672 1 pinctrl_cannonlake ppdev 24576 0 processor_thermal_device 24576 0 psmouse 155648 0 rapl 20480 0 sch_fq_codel 20480 2 serio_raw 20480 0 snd 90112 26 snd_hda_codec_generic,snd_seq,snd_seq_device,snd_hda_codec_hdmi,snd_hwdep,snd_hda_intel,snd_hda_codec,snd_hda_codec_realtek,snd_timer,snd_compress,thinkpad_acpi,snd_soc_core,snd_pcm,snd_soc_skl_hda_dsp,snd_rawmidi snd_compress 24576 1 snd_soc_core snd_hda_codec 135168 6 snd_hda_codec_generic,snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec_realtek,snd_soc_hdac_hda,snd_soc_skl_hda_dsp snd_hda_codec_generic 81920 1 snd_hda_codec_realtek snd_hda_codec_hdmi 61440 1 snd_hda_codec_realtek 131072 1 snd_hda_core 90112 11 
snd_hda_codec_generic,snd_hda_codec_hdmi,snd_hda_intel,snd_hda_ext_core,snd_hda_codec,snd_hda_codec_realtek,snd_sof_intel_hda_common,snd_soc_hdac_hdmi,snd_soc_hdac_hda,snd_sof_intel_hda,snd_soc_skl_hda_dsp snd_hda_ext_core 32768 4 snd_sof_intel_hda_common,snd_soc_hdac_hdmi,snd_soc_hdac_hda,snd_sof_intel_hda snd_hda_intel 53248 0 snd_hwdep 20480 1 snd_hda_codec snd_intel_dspcfg 28672 3 snd_hda_intel,snd_sof_pci,snd_sof_intel_hda_common snd_pcm 106496 10 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec,snd_sof,snd_sof_intel_hda_common,snd_soc_hdac_hdmi,snd_soc_core,snd_hda_core,snd_pcm_dmaengine snd_pcm_dmaengine 16384 1 snd_soc_core snd_rawmidi 36864 1 snd_seq_midi snd_seq 69632 2 snd_seq_midi,snd_seq_midi_event snd_seq_device 16384 3 snd_seq,snd_seq_midi,snd_rawmidi snd_seq_midi 20480 0 snd_seq_midi_event 16384 1 snd_seq_midi snd_soc_acpi 16384 2 snd_sof_pci,snd_soc_acpi_intel_match snd_soc_acpi_intel_match 32768 2 snd_sof_pci,snd_sof_intel_hda_common snd_soc_core 249856 6 snd_sof,snd_sof_intel_hda_common,snd_soc_hdac_hdmi,snd_soc_hdac_hda,snd_soc_dmic,snd_soc_skl_hda_dsp snd_soc_dmic 16384 1 snd_soc_hdac_hda 24576 1 snd_sof_intel_hda_common snd_soc_hdac_hdmi 36864 1 snd_soc_skl_hda_dsp snd_soc_skl_hda_dsp 24576 6 snd_sof 106496 4 snd_sof_pci,snd_sof_intel_hda_common,snd_sof_intel_byt,snd_sof_intel_ipc snd_sof_intel_byt 20480 1 snd_sof_pci snd_sof_intel_hda 20480 1 snd_sof_intel_hda_common snd_sof_intel_hda_common 73728 1 snd_sof_pci snd_sof_intel_ipc 20480 1 snd_sof_intel_byt snd_sof_pci 20480 2 snd_sof_xtensa_dsp 16384 1 snd_sof_pci snd_timer 36864 2 snd_seq,snd_pcm soundcore 16384 1 snd sparse_keymap 16384 1 intel_hid stp 16384 1 bridge syscopyarea 16384 1 drm_kms_helper sysfillrect 16384 1 drm_kms_helper sysimgblt 16384 1 drm_kms_helper thinkpad_acpi 110592 0 thunderbolt 167936 0 typec 45056 1 typec_ucsi typec_ucsi 40960 1 ucsi_acpi ucsi_acpi 16384 0 video 49152 2 thinkpad_acpi,i915 virt_dma 20480 1 idma64 wmi 32768 2 intel_wmi_thunderbolt,wmi_bmof wmi_bmof 16384 
0 x86_pkg_temp_thermal 20480 0 xfrm_algo 16384 1 xfrm_user xfrm_user 36864 2 x_tables 40960 8 ip6table_filter,xt_conntrack,iptable_filter,xt_addrtype,ip6_tables,ip_tables,xt_MASQUERADE,arp_tables xt_addrtype 16384 2 xt_conntrack 16384 2 xt_MASQUERADE 20480 2 Originally posted by noname on ROS Answers with karma: 15 on 2022-02-23 Post score: 0 Original comments Comment by bribri123 on 2022-02-24: maybe this tutorial can help http://roboticsweekends.blogspot.com/2017/12/how-to-use-usb-camera-with-ros-on.html Comment by Mike Scheutzow on 2022-02-26: You do not seem to have loaded the video4linux2 drivers or the uvcvideo drivers into the linux kernel. Please edit your description to include the output of this terminal command: /usr/sbin/lsmod | sort Comment by noname on 2022-02-26: Thanks for the help folks. I have added the output after /usr/sbin/lsmod | sort Answer: You have not loaded the video4linux2 drivers or the uvcvideo drivers into the linux kernel. There are many resources on the web to help you configure a usb webcam on linux. I suggest you get it working with an application like cheese before you try to use the camera in ROS. Originally posted by Mike Scheutzow with karma: 4903 on 2022-02-28 This answer was ACCEPTED on the original site Post score: 1
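The check behind this diagnosis, i.e. whether uvcvideo appears as a loaded module and whether any /dev/video* node exists, can be scripted. The helper below is generic and not from the original thread:

```python
import glob

def module_loaded(lsmod_text, name):
    """Check whether a kernel module name appears in `lsmod` output
    (module names are the first column of each line)."""
    return any(line.split()[:1] == [name] for line in lsmod_text.splitlines())

def video_devices():
    """List /dev/video* nodes; the list is empty when no video capture
    driver is bound, as in the question."""
    return sorted(glob.glob("/dev/video*"))

# A made-up lsmod fragment for illustration:
sample = "uvcvideo 98304 0\nvideobuf2_v4l2 32768 1 uvcvideo\n"
print(module_loaded(sample, "uvcvideo"))  # True
print(module_loaded(sample, "mc"))        # False
```

Run against the asker's actual lsmod output above, module_loaded(output, "uvcvideo") would return False, consistent with hwinfo's "uvcvideo is not active" status and its "modprobe uvcvideo" activation hint.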
{ "domain": "robotics.stackexchange", "id": 37467, "tags": "ros, linux" }
Brushless DC servo Motor
Question: Hello, I am very very very new to ROS so I apologize if this is a dumb question. I have several 48 Volt DC brushless servo motors for a large scale robotics project. They take in 8-bit ASCII commands over RS-232. I have done some searching, but I was curious if anyone has seen any stack in ROS that would handle some input/output on RS-232 so I could read and write some commands to my motors, i.e. read some joystick input and output certain ASCII commands based on joystick reads. The joystick values would be generic ones coming from, say, the Ubuntu joystick library. The motors have a proprietary program from the manufacturer but it is terrible. We are using a Linux system for the vehicle and I was curious if ROS has some solutions for us. As I said, I apologize if this is not a correct post. Thanks Originally posted by automagp68 on ROS Answers with karma: 1 on 2012-05-19 Post score: 0 Answer: I think that William Woodall has written a pretty good serial library that can be used with ROS. It is available as a unary stack on Github. This library just handles the basic serial transactions, which you can then wrap in your own ROS node. There is an example of interfacing with an AX2550 motor controller as part of the Auburn-Automow Github account, if you are looking for a good reference for using the library. Originally posted by mjcarroll with karma: 6414 on 2012-05-20 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Kevin on 2012-05-20: Yes that one is good too
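For the joystick-to-ASCII mapping the question describes, the usual pattern is a small function that formats a command string from an axis value and then writes it to the serial port via whatever serial library you choose. The command format here ('V+0500' etc.) is entirely hypothetical; the real 8-bit ASCII commands come from the motor's manual:

```python
def joystick_to_command(axis, max_speed=1000, terminator="\r"):
    """Map a joystick axis value in [-1.0, 1.0] to a hypothetical
    ASCII speed command such as 'V+0500\r'. Out-of-range values are
    clamped before scaling."""
    axis = max(-1.0, min(1.0, axis))
    speed = int(round(axis * max_speed))
    return "V{sign}{mag:04d}{term}".format(
        sign="+" if speed >= 0 else "-", mag=abs(speed), term=terminator)

print(repr(joystick_to_command(0.5)))   # 'V+0500\r'
print(repr(joystick_to_command(-1.2)))  # clamped: 'V-1000\r'
```

A ROS node would subscribe to the joystick topic, call a function like this in the callback, and write the resulting bytes to the port with the serial library mentioned in the answer.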
{ "domain": "robotics.stackexchange", "id": 9462, "tags": "joystick" }
Why can't entropy pile up?
Question: I need help to interpret the following paragraph from Kittel and Kroemer, Thermal Physics 2ed, page 228-229: Work can be completely converted into heat, but the inverse is not true: heat cannot be completely converted into work. Entropy enters the system with the heat, but does not leave the system with the work. A device that generates work from heat must necessarily strip the entropy from the heat that has been converted to work. The entropy removed from the converted input heat cannot be permitted to pile up inside the device indefinitely; this entropy must ultimately be removed from the device. The only way to do this is to provide more input heat than the amount converted to work, and to eject the excess input heat as waste heat, at a temperature lower than that of the input heat (Figure 8.1). Because $\tfrac{\delta Q}{\sigma}=\tau$, the reversible heat transfer accompanying one unit of entropy is given by the temperature at which the heat is transferred. It follows that only part of the input heat need be ejected at the lower temperature to carry away all the entropy of the input heat. Only the difference between input and output heat can be converted to work. To prevent the accumulation of entropy there must be some output heat; therefore it is impossible to convert all the input heat to work! In particular, two sentences are unclear to me: (1) "A device that generates work from heat must necessarily strip the entropy from the heat that has been converted to work." What does it mean to "strip the entropy from the heat"? (2) "The entropy removed from the converted input heat cannot be permitted to pile up inside the device indefinitely." Why can't the entropy removed from the converted input heat be permitted to pile up inside the device indefinitely? Answer: What Kittel & Kroemer are quietly saying is that "heat" does not convert to work, only work converts to work, if one uses a conventional meaning of the English word "convert".
The reason for this is that the total entropy in all, yes all, thermodynamic transformations can never decrease. In other words, entropy is indestructible in the same way as gravitating mass or electric charge, but with the difference that if it does not stay the same then it increases. If you assume that heat converts to work, then it should follow that the entropy being carried with the thermal energy representing "heat" somehow disappears, for work has no entropy. Since that is not allowed, Clausius introduced the concept of simultaneous heat compensation, a separate irreversible process to restore that disappearing entropy in the working fluid before it is rejected at a lower temperature. What Kittel & Kroemer are quietly saying with "A device that generates work from heat must necessarily strip the entropy from the heat that has been converted to work" is that Clausius is wrong about that "heat compensation": there is no compensation; instead the entropy in the thermal energy is, by some magic, removed from it, and it is the same entropy that is to be rejected at the lower temperature. Kittel & Kroemer do not explain, if that is indeed the case, what is being converted in the first place, or how the converted heat becomes work. There is one consistent explanation to all that: to view the entropy, and not heat, as the agent of the work performed. No conversion, just entropy transport from a higher to a lower temperature, the same way as a gravitating mass dropped from a higher gravitational potential to a lower one performs work. And as to your question of why entropy cannot pile up: because you are interested in a cycle, and you want to return the engine to its starting state, so whatever entropy was absorbed at the higher temperature is rejected at the lower one, along with any irreversibly generated excess entropy (say, from friction or conduction).
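The bookkeeping in the answer can be written out for the reversible case: the entropy entering with the input heat must all leave with the waste heat, which fixes the minimum output heat and the maximum work. A numeric sketch with made-up reservoir temperatures:

```python
def reversible_engine(q_in, t_high, t_low):
    """Reversible engine between two reservoirs. Entropy in equals
    entropy out (sigma = Q / tau), so Q_out = Q_in * t_low / t_high and
    W = Q_in - Q_out, which is the Carnot limit."""
    sigma = q_in / t_high  # entropy carried in with the input heat
    q_out = sigma * t_low  # waste heat needed to carry that entropy away
    work = q_in - q_out    # only the difference can become work
    return work, q_out, sigma

work, q_out, sigma = reversible_engine(q_in=1000.0, t_high=500.0, t_low=300.0)
print(work, q_out)  # 400.0 600.0, i.e. efficiency 1 - t_low/t_high
assert abs(q_out / 300.0 - sigma) < 1e-12  # no entropy piles up in the device
```

Because the rejection temperature is lower, only part of the input heat (here 600 of 1000 units) is needed to carry away all the entropy, exactly as the quoted paragraph states.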
{ "domain": "physics.stackexchange", "id": 96389, "tags": "thermodynamics, work, entropy" }
Class dedicated to transforming API response to the data I need?
Question: I have the following code (in PHP) that calls the open weather api with my credentials, and returns data for me. Now I'm returning some data I pick from that response, and I'm wondering if it's good practice to create a dedicated class for that? Is this something that's commonly used, and does it have a name? try { $request = $this->http->get( 'api.openweathermap.org/data/2.5/weather', $options); $response = json_decode($request->getBody()); return [ 'main' => $response->weather[0]->main, 'temperature' => $response->main->temp, ]; } catch (GuzzleException $e) { die(var_dump($e->getMessage())); } Answer: You might actually take it a step further and create a value object for the data, making it much more type safe. Notice I added PHP 7.4 property typehints. I chose float for the temperature (maybe it's not appropriate, idk), but I definitely don't know what to choose for the main; it sure deserves a typehint, though. If it's a nested object, create another class for it, or maybe pick the wanted information directly into the WeatherStatus class. final class WeatherStatus { private ??? $main; private float $temperature; public function __construct(??? $main, float $temperature) { $this->main = $main; $this->temperature = $temperature; } public function getMain(): ??? { return $this->main; } public function getTemperature(): float { return $this->temperature; } } You can also define an interface for such a method, including a domain specific exception (because just halting the program with die or exit is not a very nice thing to do; in that case it would be better to not catch the exception at all). class WeatherProviderException extends \Exception { } interface IWeatherProvider { /** * @throws WeatherProviderException */ public function getWeather(): WeatherStatus; } In the implementation I would accept the API url rather than hardcoding it. You may add a static named constructor for the version 2.5 API.
The credentials for openweathermap.org (whatever they are, let me assume a user name and password) might also be promoted to a class, or may be passed to the provider constructor as multiple arguments as well. final class OpenWeatherCredentials { private string $user; private string $password; // constructor + getters ... } class OpenWeatherProvider implements IWeatherProvider { private ClientInterface $client; private string $apiUrl; private OpenWeatherCredentials $credentials; public function __construct(ClientInterface $client, string $apiUrl, OpenWeatherCredentials $credentials) { $this->client = $client; $this->apiUrl = $apiUrl; $this->credentials = $credentials; } public static function createV2_5(ClientInterface $client, OpenWeatherCredentials $credentials): self { return new self($client, 'https://api.openweathermap.org/data/2.5/weather', $credentials); } public function getWeather(): WeatherStatus { $options = [ // whatever is needed, $this->credentials->get* ]; try { // here you called the object $request, but it really is a $response $response = $this->client->get($this->apiUrl, $options); } catch (GuzzleException $e) { throw new WeatherProviderException($e->getMessage(), $e->getCode(), $e); } try { // JSON_THROW_ON_ERROR belongs in the flags argument, not the $associative one $json = json_decode($response->getBody(), false, 512, \JSON_THROW_ON_ERROR); } catch (\JsonException $e) { throw new WeatherProviderException($e->getMessage(), $e->getCode(), $e); } return new WeatherStatus( $json->weather[0]->main, (float) $json->main->temp, ); } } Also notice how I wrap each statement in a separate try-catch to only catch what can be caught. As long as they are handled the same way you could try-catch both together, but maybe you should catch their common ancestor exception instead to make it even simpler. And I pulled the instantiation of the WeatherStatus out of the try-catch just because I don't expect it to throw GuzzleException nor \JsonException.
{ "domain": "codereview.stackexchange", "id": 39338, "tags": "php" }
Using tree search
Question: I have some questions regarding tree search and graph search (uninformed search) as explained in chapter 3 of the book: http://aima.cs.berkeley.edu/ As I see it, the only difference between the two is that graph search handles loops (avoids them). First question: Do both graph search and tree search build dynamic trees of the problem at hand? Second question: I assume graph search was used to solve the map of Romania problem (getting from Arad to Bucharest) with DFS, BFS, UCS as strategies that only sort the frontier queue. Now, is there a standard way to change the graph of the map of Romania to a tree, and then use tree search? Third question: What are some of the criteria that help us choose between graph and tree search for different problems? Thank you in advance Answer: Both BFS and DFS take a graph and induce a subgraph of it. This subgraph has all the nodes reachable from the start node, and is a tree. You could probably convert a graph to a tree and then use tree search, but it seems to me that the easiest way to convert a graph to a tree is, in fact, some kind of search, and it would be redundant to use search to convert the graph, then do another search on the tree, when you could have just used the initial search. You want to use graph search on graphs, and tree search on trees. In particular, this is because graphs which are cyclic can get caught in infinite loops if you use a tree search on them. (Note: if we're talking directed graphs, there is probably a special kind of tree search for an acyclic graph. For undirected graphs, an acyclic connected graph is just a tree.)
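The infinite-loop point can be made concrete: graph search keeps an explored set, which is exactly what tree search omits. A minimal BFS sketch (an illustration, not code from the book):

```python
from collections import deque

def bfs_graph_search(graph, start, goal):
    """BFS with an explored set, so it terminates even on cyclic
    graphs. `graph` maps each node to a list of its neighbors."""
    frontier = deque([start])
    explored = {start}
    while frontier:
        node = frontier.popleft()
        if node == goal:
            return True
        for neighbor in graph[node]:
            if neighbor not in explored:  # the check tree search omits
                explored.add(neighbor)
                frontier.append(neighbor)
    return False

# A cyclic graph: tree search would re-expand A and B forever,
# while graph search explores each node at most once.
cyclic = {"A": ["B"], "B": ["A", "C"], "C": []}
print(bfs_graph_search(cyclic, "A", "C"))  # True
print(bfs_graph_search(cyclic, "A", "Z"))  # False
```

Dropping the explored set turns this into tree search, which is fine on a tree (no node is reachable twice) but loops forever on the cyclic example above.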
{ "domain": "cs.stackexchange", "id": 8040, "tags": "artificial-intelligence, search-algorithms" }
Simple load balancer model in Scala
Question: For one of my cases in algorithms I am required to create a model of a load balancer for multiple processors. Lately I became interested in Scala so I decided that it would be great to create this program in Scala. I'm not really familiar with the concept of functional programming so any feedback will be welcome. Please note that I am not asking about the correctness of my algorithm as this is my homework.

object HelloWorld {

  def randomElements(arg: Integer): Double = {
    val r = scala.util.Random
    return r.nextDouble()
  }

  // used for debugging
  def return1Elements(arg: Integer): Double = {
    return 1
  }

  def fillJobs(size: Integer, fillingFunction: (Integer) => Double): Array[Double] = {
    val array = new Array[Double](size);
    for (el <- 0 to size - 1) {
      array(el) = fillingFunction(el);
    }
    return array
  }

  def fillProcessors(size: Integer): Array[Double] = {
    val array = new Array[Double](size);
    for (el <- 0 to size - 1) {
      array(el) = 0
    }
    return array
  }

  def result(processors: Array[Double]): Double = {
    return processors.max
  }

  def findMaxSolver(processorsArray: Array[Double], jobsArray: Array[Double]): Double = {
    val sortedJobs = jobsArray.sortWith(_ > _)
    var sortedProcessors = processorsArray
    for (processorIndex <- 0 to processorsArray.size - 1) {
      sortedProcessors(processorIndex) += sortedJobs(processorIndex)
    }
    for (job <- sortedJobs.slice(sortedProcessors.size, sortedJobs.size)) {
      sortedProcessors = sortedProcessors.sortWith(_ < _)
      sortedProcessors(0) += job
    }
    return result(sortedProcessors)
  }

  def solve(processorsArray: Array[Double], jobsArray: Array[Double], solvingFunction: (Array[Double], Array[Double]) => Double): Double = {
    val processors = processorsArray.clone()
    val jobs = jobsArray.clone()
    return solvingFunction(processors, jobs)
  }

  def main(args: Array[String]) {
    val JOB_NUMBER = 1024
    val PROCESSOR_NUMBER = 8
    val jobs = fillJobs(JOB_NUMBER, randomElements)
    val processors = fillProcessors(PROCESSOR_NUMBER)

    println("Job times")
    println("============================")
    for (element <- jobs) {
      println(element)
    }
    println("============================")
    println("Results")
    println("Find max solver " + solve(processors, jobs, findMaxSolver))
  }
}

Answer: Below are some of the changes I would make to your code. In my opinion one of the nice aspects of Scala is that it gives you all sorts of ways to reduce the amount of work your mind has to do in order to decipher code. As an example, one of the first things I did was declare a type ArrD that is equivalent to Array[Double]. I then substituted it where necessary and the code (to my mind) became more readable. The choice of ArrD was arbitrary on my part; you could, if you wanted, use ArrayDouble instead. Along these lines I shortened all of your variable and value names.

As the program is set up right now you don't need to pass in a procs array, but I left it in anyway. As you mentioned this is a homework assignment so I'll leave a bit of mystery as to why this is. And really you don't need the take and the drop.

Next, note that jobs.sorted sorts ascending, so your descending jobs.sortWith(_ > _) can be shortened to jobs.sorted.reverse. And finally (for now) check out how I initialized jobs and procs. If you would still like to use your randomElements function you should look into the method called tabulate. Cheers.

object O {

  type ArrD = Array[Double]

  def findMaxSolver(procs: ArrD, jobs: ArrD): Double = {
    val sJobs = jobs.sorted.reverse
    var sProcs = sJobs.take(procs.size)
    for (job <- sJobs.drop(procs.size)) {
      sProcs = sProcs.sorted
      sProcs(0) += job
    }
    sProcs.max
  }

  def solve(procs: ArrD, jobs: ArrD, f: (ArrD, ArrD) => Double) = f(procs, jobs)

  def run = {
    val jobNum = 1024
    val procNum = 8
    val jobs = Array.fill(jobNum)(scala.util.Random.nextDouble())
    val procs = Array.fill(procNum)(0.0)
    println(s"RESULTS:\nFind max solver ${solve(procs, jobs, findMaxSolver)}")
  }
}
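For comparison, the same greedy strategy (sort the jobs in descending order, then repeatedly hand the next job to the least-loaded processor) can be expressed with a priority queue, which avoids re-sorting the processor array on every iteration. This is a hedged Python sketch of the algorithm under review, not part of the original assignment:

```python
import heapq

def find_max_solver(num_procs, jobs):
    """Greedy LPT scheduling: each job goes to the currently
    least-loaded processor; a min-heap makes that lookup cheap."""
    loads = [0.0] * num_procs   # a list of zeros is already a valid min-heap
    for job in sorted(jobs, reverse=True):
        lightest = heapq.heappop(loads)
        heapq.heappush(loads, lightest + job)
    return max(loads)           # the makespan: the busiest processor's load
```

Each job costs O(log p) instead of the O(p log p) re-sort in the loop above, which matters once the job count grows.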
{ "domain": "codereview.stackexchange", "id": 13399, "tags": "scala" }
ASCII value converter
Question: I have written VERY simple code that converts a string to a character array and then displays the ASCII value of each character. Let me know if this is the most effective/safe way of doing what I described above.

#include <iostream>
#include <string>

using namespace std;

int main(int argc, char *argv[]){
    //Declare Variables.
    string input;
    int size;

    //Display message to user asking them to enter a string or character.
    cout << "What character/string do you need the ascii value(s) of? " << endl;

    //Assign user input to string input.
    cin >> input;

    //Get string length of user input.
    size = input.length();

    //Copy the string into a character array.
    const char * chars = input.c_str();

    //Iterate through the character array using the string size.
    for(int i = 0; i < size; i++){
        //Return Ascii values of each character.
        cout << "The value of character " << chars[i] << " is " << (int)chars[i] << endl;
    }
    return 0;
}

Answer:

int main(int argc, char *argv[]){
    //Declare Variables.
    string input;
    int size;

Don't declare variables until you actually need them, and don't declare variables you don't need. In this case, you're not using the arguments passed into main; either don't name them, or use the version of main that takes no arguments:

int main() { /* space or newline between paren and curly is usual */

cin >> input;

I'm not sure this is what you want. If the user enters a "sentence" with spaces in it, you'll only get what they typed up to the first space (not included). If you want the whole line, use std::getline.

std::string input;
std::getline(std::cin, input);

//Copy the string into a character array.
const char * chars = input.c_str();

This comment is dangerously wrong. You don't get a copy at all, you get a pointer to std::string's internals. The string data is not copied, and the pointer returned will become invalid as soon as input's lifetime ends.

std::string's length() member returns an unsigned quantity. While there's no problem in this case assigning it to an int, the correct type to use is:

std::string::size_type size = input.length();

or more conveniently:

auto size = input.length();

(And you'll need to use an unsigned quantity for the loop counter too, to avoid warnings.)

A different way of doing your loop would be to use the range-based variant. It works for std::string.

for (auto c: input) {
    std::cout << "Char \"" << c << "\" has value "
              << static_cast<int>(c) << std::endl;
}

This avoids getting the size altogether.
{ "domain": "codereview.stackexchange", "id": 15575, "tags": "c++, beginner, strings" }
What are the possible initial states that can be prepared in a lab for use in a quantum computation?
Question: So here's something that's been bothering me. Given the time evolution of the wavefunction can only be unitary or discontinuous as a process of the measurement. So let the observables for our Hamiltonian be position $\hat x$, momentum $\hat p$ and energy $\hat H$. Does this mean the only possible states I can prepare in the lab are: $$|\phi_1 \rangle = |E_0 \rangle $$ or $$|\phi_2 \rangle = U|x_0 \rangle $$ or $$|\phi_3 \rangle = U|p_0 \rangle $$ where $U$ is the unitary operator at arbitrary $t$, $| x_0\rangle$ is an arbitrary position eigenket, $|p_0 \rangle$ is an arbitrary momentum eigenket and $|E_0 \rangle$ is an arbitrary energy eigenket? Am I correct in interpreting this as meaning that there are only certain quantum computations which can be performed with such a system (in the sense that there is limited initial data one might input)? Cross-posted on physics.SE Answer: This question is often relegated to the hardware side of quantum computing but is very important. Quantum computing theory assumes access to things like qubits (2-level quantum systems) and unitary operations acting on those qubits, where the unitary operations may be noisy. Alternatively, it assumes access to a highly entangled state, such as in measurement-based quantum computing, or to Gaussian states, squeezing operations, and various measurement protocols in continuous-variable quantum computing. All of these theories fundamentally assume that there is some underlying set of states and quantum state transformations that can be programmed to do a quantum computation. Your question matches most closely to continuous variable quantum computation, in which the Hilbert space looks like that of a quantum harmonic oscillator. 
Then, there are certain initial states that are easier to access: the ground state (ie the vacuum), which would be the state of the system if you got rid of as much energy as possible, and thermal states, which arise when the system equilibrates with a large external bath of a fixed temperature. The possible state transformations are indeed of the form $U|\psi\rangle$, where $U$ is in this case achieved through things like interferometers and nonlinear crystals, which can be combined in various ways so as to make an [almost] universal gate set ("almost" because the measurement process can also be relied on to enact more transformations). There is also the possibility of things like displacement operations, enacted by lasers, which again simply change the form of $U$. So the magic here is that there are enough different types of unitaries $U$, which do not all commute with each other, that can be combined in enough ways to generate a huge variety of states, even from a set of initial states as boring as being in the vacuum. In terms of qubits, things are a little easier to describe mathematically. Somehow, one gets access to a Hilbert space of dimension two, as well as unitary operations that transform the state in that Hilbert space. It turns out that you only need access to [multiple copies of] two noncommuting unitaries in order to generate all of the possible transformations, such as using rotations on the Bloch sphere about two different axes. If the qubit is the spin of an atom with an applied external magnetic field, the ground state will have the spin pointing in the same direction as the magnetic field, so such a state can be prepared using a magnetic field. Then, the other unitaries can be applied by adding other magnetic fields or the like. 
If the qubit is in two levels of an atom, the ground state can also be prepared by removing as much energy as possible, and then laser pulses can be used to drive transitions between the two levels in order to achieve unitaries $U$. If the qubit is in the polarization degree of freedom of a single photon, a polarizer can prepare any chosen initial state (with some probability of success) and wave plates can be used to enact a variety of unitary transformations. For each architecture, there are different ways of preparing states and of performing unitaries. All in all, having a few (or one) initial states and a variety of noncommuting unitaries that can be applied for different amounts of time leads to a huge number of possible transformations! Different transformations are easier to achieve with different physical systems, so the goal is always to be able to do at least a certain set of transformations that are sufficient to generate arbitrary ones. To then do universal quantum computation, we also need access to gates that can act on more than one qubit at a time, so that again becomes a different question in each physical platform. Often, the easiest states to prepare are ground states of some Hamiltonian. Since we can externally control parameters of Hamiltonians (changing interaction strengths, applied external fields, etc.), we can help guide systems toward a variety of initial states that are easier for manipulation later on. Aside: noisy transformations are described by quantum channels that are more general than unitary operations, so quantum channels are also accessible for state preparation and transformation. In general, quantum computation prefers unitary operations, so the quantum channels are more useful for characterizing why and how a quantum computation may be imperfect.
{ "domain": "quantumcomputing.stackexchange", "id": 2863, "tags": "quantum-state, measurement" }
Does this paper, using neural nets, prove industrialization is irrelevant to global warming?
Question: Recently, a paper was published by Abbot and Marohasy (2017) - see specifically Fig. 2. The thesis of the paper is to train an artificial neural network (ANN) on the temperature time series for pre-industrial ages, and then do forecasting with the ANN to predict the temperature time series in the 20th century. Because the ANN was trained only on pre-industrial time series, the paper concludes that industrialization is irrelevant to global warming. This goes completely contrary to the IPCC AR5 technical summary, which shows graphs indicating that global warming is due to anthropogenic causes. See page 74 of the latter report. So which do you trust and why? Answer: The paper is deeply flawed from both the climate science and machine learning perspectives. The most obvious flaw is the eye-catching claim that equilibrium climate sensitivity is approximately 0.6C, which if true would overturn our understanding of the climate system. However, the paper doesn't actually explain how this figure of 0.6C is obtained from a "largest deviation" of 0.2C; it is basically just a hand-wave. Also, the largest deviation is not 0.2; this is the largest average (mean absolute) deviation seen in the proxies, and you can have a mean absolute deviation without there being a trend that you could relate to increased GHG concentrations and hence estimate ECS. More importantly, this would give an estimate of transient climate sensitivity, not equilibrium climate sensitivity, and you can't reliably estimate ECS (which is global) from regional or sub-regional proxy records. Approaches such as the one taken in this paper, which seek to see how much of the data can be attributed to "climate cycles" with the remainder being taken as the anthropogenic component, are inherently biased towards low estimates of ECS.
This is because of omitted variable bias; because the anthropogenic forcing signals are not included in the model, if the net effect of independent changes in the forcings is correlated with a sinusoidal component, they will be wrongly attributed to these climate cycles, when in fact they are produced by the forcings. Models like this can only be used to estimate lower bounds on ECS; likewise, if you make a model using the forcings as inputs and treat the residual as being "natural variability", it will tend to over-estimate ECS, giving an upper bound estimate. The Abbot and Marohasy paper cites a similar paper by Loehle, but sadly does not also cite the comment paper (of which I was the lead author; note the corrigendum). This is poor scholarship, and sadly Abbot and Marohasy have gone on to make many of the same basic mistakes (but with a more complicated model), which is a shame. Rather than using the original datasets (many of which are freely available) the authors chose to digitise images of the datasets instead. This seems somewhat bizarre, and Gavin Schmidt points out via twitter that in at least one case the dataset has not been scaled or aligned correctly (and ends at 1965, and so does not include the recent warming where anthropogenic contributions are most evident). It also transpires that Figures 5 and 9 are identical. The paper says "However, superior fitting to the temperature proxies are obtained by using the sine wave components and composite as input data. This was established by comparing the spectral analysis composite method versus the ANN method for the training periods.". Evaluating performance on the training data is a classic error in the use of machine learning that people used to make all the time in the late 1980s and 90s, but is rarely seen today.
If you have two nested models (one can be implemented as a special case of the other) of different complexities, then the more complex model will always have a lower training set error, if only because it has more capacity to memorize the random noise in the data, but that doesn't mean it is the more accurate model. For that you need out-of-sample comparisons, which are absent from the paper. There is no handling of uncertainty in the model (for instance the periods of the cyclic components are not known exactly, cycles with slightly different periodicities will explain the observations almost as well), and likewise there will be uncertainty in the parameters of the neural net, but none of this is propagated through to give the uncertainty in the estimate of ECS. As seen in the comment on the Loehle paper, this can be substantial. Table 13 seems to suggest that paleoclimate studies give lower ECS estimates than GCMs, which suggests a rather selective view of the paleoclimate studies, which generally indicate high ECS IIRC. The study also has problems with too many degrees of researcher freedom (e.g. how was the particular subset of proxies chosen?) and there is a lot of (automated) exploration of model architectures and feature selection, which is often a recipe for over-fitting in model selection. It is also not clear why the observation should be a non-linear function of the cyclic variables (especially given that the cycles were obtained from the data by linear analysis).
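The point about nested models and training error can be illustrated with a toy fit (a hedged sketch, unrelated to the paper's actual data): of two nested least-squares models, the more complex one always achieves lower error on the points it was fitted to, whether or not it generalizes better.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.3, x.size)  # noisy samples

def train_mse(degree):
    """Mean squared error of a polynomial fit of the given degree,
    measured on the same points it was fitted to."""
    coeffs = np.polyfit(x, y, degree)
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

# A degree-10 polynomial "beats" a degree-3 one on the training set
# purely by memorizing noise; only out-of-sample error can tell us
# which model is actually better.
```

This is exactly why the in-sample comparison quoted from the paper establishes nothing about which method is superior.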
{ "domain": "earthscience.stackexchange", "id": 1188, "tags": "climate-change" }
Calculating orbital path of a planet around a fixed body in a deterministic way given starting conditions
Question: I am making a simulation of the solar system in the Unity game engine. A planet is orbiting a stationary star (for now) using Newton's law of gravitation, where $F = Gm_1m_2/r^2$ (the force is applied after each frame). I need to display the trajectory of the orbit before running the simulation, while adjusting initial starting conditions including velocity. Using the iterative method above makes it difficult to quickly calculate and display the orbit, since increasing the time interval over which the force is calculated decreases accuracy, as error accumulates over time. I know that a deterministic method can be used to calculate the path of the orbit as a function of time. I have been trying to derive an equation in terms of time for the x and y components of the position of the planet given its initial conditions, so that I can plot the orbit from a series of points calculated for different points in time. I have been unable to find any solutions when reading about the Kepler problem. The question is: how would I be able to calculate the position of the planet orbiting a stationary star at a certain time, given the mass of both the star and the planet, and the initial position and velocity of the planet? Both bodies are point masses and the sun is the origin. Thank you for any help, and if anything mentioned is unclear then please ask. Answer: It seems to me your problem is the inverse of a case that is relatively simple. I am going to assume that the initial velocity is in the tangential direction. That means the initial position is either the aphelion or the perihelion. If that applies then that narrows things down a lot. Given an initial distance between Sun and planet, and the eccentricity of the orbit, the total size and shape of the orbit follows mathematically, given that a Kepler orbit is an ellipse with the Sun at one focus. With the size and shape known, the velocity at every point of the orbit follows mathematically.
As I wrote: your case is an inverse of that:

If it so happens that the initial velocity is just the velocity for circular orbit, then circular orbit it is.

If the initial velocity is slower than the one for circular orbit, then the initial position is the aphelion of the orbit.

If the initial velocity is faster than the one for circular orbit, then the initial position is the perihelion of the orbit.

So I think your starting point should be to find, maybe on Wikipedia, a formula that gives the size and shape of an orbit, given the eccentricity. Then combine that with a calculation of the corresponding velocity at aphelion/perihelion.

You would then need to convert that formula so that it gives a value for the eccentricity of the orbit, with initial velocity as input. With the value for the eccentricity obtained it is possible to set up a formula that precomputes the orbit as a function of time.
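The route the answer outlines can be sketched in Python. This is a hedged illustration using the standard two-body formulas (vis-viva, specific angular momentum, and Kepler's equation solved by Newton iteration); it assumes, as the answer does, a purely tangential initial velocity, and it neglects the planet's mass relative to the star's:

```python
import math

def orbit_elements(mu, r0, v0):
    """Size and shape of the orbit for a tangential launch at distance
    r0 with speed v0, where mu = G * M of the central body. Because v0
    is perpendicular to the radius, r0 is the perihelion or aphelion."""
    energy = 0.5 * v0 ** 2 - mu / r0          # specific orbital energy
    a = -mu / (2 * energy)                    # semi-major axis (vis-viva)
    h = r0 * v0                               # specific angular momentum
    e = math.sqrt(max(0.0, 1 + 2 * energy * h ** 2 / mu ** 2))
    return a, e

def position_at_time(mu, a, e, t):
    """Solve Kepler's equation M = E - e sin E for the eccentric
    anomaly E by Newton iteration, then convert E to x, y coordinates
    (focus at the origin, t = 0 at perihelion)."""
    n = math.sqrt(mu / a ** 3)                # mean motion
    M = n * t
    E = M
    for _ in range(50):
        E -= (E - e * math.sin(E) - M) / (1 - e * math.cos(E))
    x = a * (math.cos(E) - e)
    y = a * math.sqrt(1 - e ** 2) * math.sin(E)
    return x, y
```

For v0 equal to the circular speed sqrt(mu/r0) the eccentricity comes out zero; a faster tangential launch gives e > 0 with r0 as the perihelion, matching the cases listed above.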
{ "domain": "physics.stackexchange", "id": 74778, "tags": "newtonian-gravity, astrophysics, orbital-motion, simulations, celestial-mechanics" }
Why are only some of the geologic features in Mars' Noctis Labyrinthus region named?
Question: I found this page Names Approved for Six Cavi and a Tholus on Mars which lists seven features on Mars. I'm curious about the decision to name only these features: Dalu Cavus, Layl Cavus, Malam Cavus, Nat Cavus, Noc Cavus, Usiku Cavus, and Noctis Tholus in Noctis Labyrinthus. Why are only some of the canyon, graben and/or valley locations named? Does it have to do with the depth of the features? Doing a search for the area of Noctis Labyrinthus, Northernmost Latitude: -3.0, Southernmost Latitude: -13.0, Westernmost Longitude: 259.0, Easternmost Longitude: 268.0, I found only this, Answer: Nomenclature on planetary bodies is meant to ease and standardize communication. If an object is referred to often, or if it is important for someone's research, then the scientist(s) involved can submit a name request to the International Astronomical Union (IAU), or through some other body that submits names to the IAU (such as the United States Geological Survey's Astrogeology branch in Flagstaff, AZ). After checking for various criteria*, the name may or may not be approved. If it is approved, then the feature is named. If it is not approved, then the scientist(s) can resubmit that name or a different name. I am not sure if it is a full list, but the USGS's Gazetteer of Planetary Nomenclature lists a lot of the themes on different bodies. Based on that, cavi and tholi would be named based on nearby albedo (brightness) features or craters. *Criteria for naming are extensive. Almost every type of feature (such as an impact crater) on many bodies has a theme, and that theme must be followed. If the name does not meet that theme, then it will be rejected.
Other criteria are that the name cannot be offensive or induce a strong negative response (such as naming something "Satan" could induce a negative response in many people); the name cannot be used elsewhere in the solar system already (though some historic exceptions exist); the feature must be well defined (a vague region on a surface would not, therefore, qualify); the feature cannot be named after someone who is alive or has died less than three years ago; and there cannot be a preponderance of names that are biased towards or against gender, nationality, region of the world, etc.
{ "domain": "astronomy.stackexchange", "id": 4356, "tags": "mars, iau, nomenclature" }
Precise definition of "Observable Universe" and its alternatives
Question: The Observable Universe is generally said to contain all space that could "in principle" have had a causal impact on Earth, but the exact limits of the "in principle" causal interaction go unspecified. Wikipedia notes some astrophysicists distinguish the Visible Universe, all space that was in our past light cone at recombination, from a broader Observable Universe, all space that was in our past light cone at the end of the inflationary epoch. While obviously the first definition has more practical importance in cosmology, the latter seems to be much truer to the meaning of "in principle". Is this latter definition known to be "final"? In other words, are there theoretical reasons to believe that there actually was no causal influence between the OU and its neighboring regions of space during the inflationary epoch, or that any such interactions had no causal impact on post-inflationary dynamics? If so, what are they? Intuitively, it seems like if you define $OU(t)$ as the Earth-centered ball of present-day space that was in our past light cone $t$ seconds after the Big Bang, the size of this ball grows without limit as $t$ approaches 0. This implies the entire Universe, even if it is infinitely large, is "in principle" causally connected to Earth. At what point does this intuition go wrong? Answer: I will address this: In other words, are there theoretical reasons to believe that there actually was no causal influence between the OU and its neighboring regions of space during the inflationary epoch, or that any such interactions had no causal impact on post-inflationary dynamics. The inflationary period was invented because of the great uniformity of the Cosmic Microwave Background radiation. In the present model, the radiation detected here was emitted about 380,000 years after the Big Bang, at the decoupling of photons from matter.
It fits the black body radiation curve better than any laboratory measurements of matter have, and this implies that a thermodynamic process homogenized the original soup before the decoupling of photons. The problem is that at that time there were regions in the universe which could not communicate thermodynamically with the other regions, due to being in different regions of the light cone. This basic discrepancy in the BB model was resolved by introducing quantum mechanics at the beginning of time for the BB evolution. Quantum mechanics, with its probabilistic solutions, is not constrained by light cone considerations: the inflaton field stretches and homogenizes the energy content at times before 10^-32 seconds, and the very small quantum mechanical inhomogeneities arising from its inherent probabilistic nature become the seeds of the tiny inhomogeneities observed in the CMB and of the later generated clusters of galaxies etc. So there exists a causal connection from the inflationary period to the present universe: the seeds of homogeneity from it. The causal impact happened at times before 10^-32 seconds. After 380,000 years the galaxies started forming in their separate causal regions.
{ "domain": "physics.stackexchange", "id": 28458, "tags": "big-bang, causality, cosmological-inflation, observable-universe" }
What is the significance of the de Broglie wavelength?
Question: I have just learnt quantum physics in school and learnt the concept of wave-particle duality. But I still have trouble understanding what the de Broglie wavelength is. What does it mean for a particle to have wavelike properties? If everything is a wave then why don't we just phase through one another?

There's also a question asking you to calculate the de Broglie wavelength of the moon around Earth and of an electron around an atom, and asking why you can consider the moon a "particle" but not the electron. I don't really get the answer of "wavelength significantly smaller than the orbital radius, therefore the moon is a particle, and vice versa" though :( Why is it that the wavelength being significantly smaller than the radius means that it is a particle?

Answer: (This is not written in the standard textbook language; it is about intuition.)

But I still have trouble understanding what the de Broglie wavelength is

De Broglie postulated that associated to a matter particle with momentum $p$ there is a plane wave of wavelength $\lambda$ given by $\lambda = h/p$.

What does it mean for a particle to have wavelike properties?

It's not that something is either a wave or a particle. When you probe its wave character in an experiment, you will find it behaving like a wave (for example, do an interference experiment with electrons). There is a formula for interference, which you know, and in place of the wavelength you can put $h/p$, and your calculations will be consistent with the results. Now place a detector and you will get the particle character. (It's weird! But true.)

But why did it show the particle character then? Because by placing the detector you wanted to find out which particle passed through which hole in the interference experiment. You wanted to see its particle character, so it showed it to you. In a wave there is no notion that "this wave passed through this hole".

Don't confuse this with the everyday intuition of a wave; it is far more, and far different, than that. But anyway, the question is not about this.

Why is it that the wavelength being significantly smaller than the radius means that it is a particle?

Drop a stone in water, or make a wave with a rope. The energy of the wave as a whole is spread all over it (say, in the water). Now start contracting all the energy (just the energy, not a particle) and condense it into a very minuscule water droplet; in a way we are reducing the wavelength. Now that water drop will cross the water like a bullet: it has become a particle. That is the effect of increasing or decreasing the wavelength.

In layman's terms a wave is a spread-out structure, and when you condense its properties to a point it becomes a particle (at least classically). Basically, here you are shortening the wavelength. You precisely know the energy, position and momentum, all in one place in that droplet (don't bring in Heisenberg), and that's a particle.
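The homework comparison can be made concrete with rough numbers. This is a hedged back-of-the-envelope sketch; the masses and speeds below are approximate textbook values, not given in the question:

```python
h = 6.626e-34    # Planck's constant, J s

# Moon orbiting Earth: mass ~7.35e22 kg, orbital speed ~1.02 km/s
lambda_moon = h / (7.35e22 * 1.02e3)

# Electron in an atom: mass ~9.11e-31 kg, Bohr-model speed ~2.2e6 m/s
lambda_electron = h / (9.11e-31 * 2.2e6)

# The Moon's de Broglie wavelength (~1e-59 m) is absurdly small next
# to its orbital radius (~3.8e8 m), so wave effects are utterly
# invisible: it behaves as a particle. The electron's wavelength
# (~3.3e-10 m) is comparable to the size of an atom (~1e-10 m), so
# its wave character cannot be ignored.
```

That ratio of wavelength to the size of the system is exactly what the "wavelength significantly smaller than the orbital radius" answer is pointing at.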
{ "domain": "physics.stackexchange", "id": 70648, "tags": "quantum-mechanics, wavelength, wave-particle-duality" }
So if a problem is more difficult the language it represents is smaller?
Question: I'm reading the definition of polynomial time reducible:

Let $L_1, L_2$ be two languages. $L_1$ is polynomial time reducible to $L_2$ if there exists a polynomial-time computable function $f:\{0,1\}^*\to\{0,1\}^*$ s.t. $\forall x\in\{0,1\}^*$ $$x\in L_1\iff f(x)\in L_2$$

To me this means that $L_1$ may be bigger (in cardinality) than $L_2$, but $L_2$ is more difficult, since $L_1$ can be solved after being reduced to $L_2$?

Answer: $L_1$ and $L_2$ are always countably infinite, and thus "equally big". If any language is finite, then it is "constant time" recognizable.
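To make the definition concrete, here is a hedged toy illustration (not from the question): the classic reduction from Independent Set to Clique, where $f$ maps an instance $(G, k)$ to $(\bar G, k)$ by complementing the edge set, and membership is preserved in both directions.

```python
from itertools import combinations

def complement(n, edges):
    """The reduction f: an Independent-Set instance (G, k) maps to the
    Clique instance (complement of G, k). Complementing the edge set of
    an n-vertex graph is clearly polynomial time."""
    edges = {frozenset(e) for e in edges}
    return {frozenset(p) for p in combinations(range(n), 2)} - edges

def has_clique(n, edges, k):
    """Brute-force membership test (exponential, fine for tiny examples)."""
    edges = {frozenset(e) for e in edges}
    return any(all(frozenset(p) in edges for p in combinations(s, 2))
               for s in combinations(range(n), k))

def has_independent_set(n, edges, k):
    # x in L1 iff f(x) in L2: decide L1 by reducing to L2
    return has_clique(n, complement(n, edges), k)
```

Note that the reduction itself (complement) is cheap; all the hardness lives in deciding $L_2$, which is the sense in which $L_2$ is "at least as difficult" as $L_1$.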
{ "domain": "cs.stackexchange", "id": 12931, "tags": "np-complete, reductions, decision-problem" }
How can statistics for all active ROS Topics be obtained?
Question: Is there a way to display statistics for all active ROS topics in a system? By statistics I mean bandwidth usage, publishing rate, number of messages dropped, etc. I looked at "rostopic bw" and "rostopic hz" but they only display information for one topic at a time. Originally posted by liangfok on ROS Answers with karma: 328 on 2013-04-18 Post score: 6 Original comments Comment by joq on 2013-04-19: Seems generally useful. If there is not already a solution, you can probably fork a copy of rostopic and modify it to handle that. Answer: Since ROS Indigo node statistics are available via: rosparam set enable_statistics true (see http://wiki.ros.org/Topics#Topic_statistics) A nice framework including rqt plugins is available on https://github.com/ROS-PSE/arni Originally posted by fivef with karma: 2756 on 2015-10-29 This answer was ACCEPTED on the original site Post score: 3
{ "domain": "robotics.stackexchange", "id": 13879, "tags": "ros" }
Two player random number guessing game
Question: I've created a two-player number guessing game in Haskell. My main objective was to practice dealing with "state" in a purely functional language (such as player scores, whose turn it is, etc.). Here are the rules:

Gameplay: Two players take turns guessing a random number between 1 and 10. Answers are typed into the command line.

Scoring:
If a player guesses the number correctly, they are awarded 5 points.
If a player is within two (inclusive) of the answer, they are awarded 3 points.
If a player is within three (inclusive) of the answer, they are awarded 1 point.
If a player is 7 or more points off, they lose a point. The score may not be negative.
All other offsets result in zero points.
The game continues until one of the players reaches 10 points.

Caveats: This was not designed to be an exercise in enjoyable game design -- obviously the optimal solution is to always choose five, which doesn't make for a lot of excitement. :D I am aware that Control.Monad.State exists, but I want to practice tracking state without it. I know that the "mutual recursion" is difficult to follow. I would love some suggestions for getting rid of that which do not involve nesting if statements.

import Data.Char
import System.Random

main = do
    stdGen <- getStdGen
    play 0 0 P1 stdGen

play :: Int -> Int -> Player -> StdGen -> IO ()
play p1Score p2Score player stdGen
    | p1Score < 10 && p2Score < 10 = continueGame p1Score p2Score player stdGen
    | otherwise = putStrLn $ show (determineWinner p1Score p2Score) ++ " wins!"

continueGame :: Int -> Int -> Player -> StdGen -> IO ()
continueGame p1Score p2Score player stdGen = do
    putStr $ show player ++ "'s turn. Pick a number between 1 and 10: "
    chosenNumber <- getLine
    if isInteger chosenNumber
        then do
            let (randomNumber, newGen) = randomR (1, 10) stdGen :: (Int, StdGen)
            putStrLn $ "The answer is " ++ show randomNumber
            let pointsEarned = calcPointsEarned randomNumber (read chosenNumber)
            let newP1Score = min (max (p1Score + calcPointsEarnedForPlayer player P1 pointsEarned) 0) 10
            let newP2Score = min (max (p2Score + calcPointsEarnedForPlayer player P2 pointsEarned) 0) 10
            putStrLn $ "P1 Score: " ++ show newP1Score
            putStrLn $ "P2 Score: " ++ show newP2Score
            play newP1Score newP2Score (changeTurn player) newGen
        else do
            putStrLn "The input must be an integer"
            play p1Score p2Score player stdGen

data Player = P1 | P2 deriving (Show, Eq)

isInteger :: String -> Bool
isInteger = and . map isNumber

changeTurn :: Player -> Player
changeTurn player
    | player == P1 = P2
    | otherwise = P1

calcPointsEarned :: Int -> Int -> Int
calcPointsEarned actualAnswer chosenAnswer
    | offset == 0 = 5
    | offset <= 2 = 3
    | offset <= 3 = 1
    | offset >= 7 = (-1)
    | otherwise = 0
    where offset = abs $ chosenAnswer - actualAnswer

calcPointsEarnedForPlayer :: Player -> Player -> Int -> Int
calcPointsEarnedForPlayer actualTurn player pointsEarned
    | actualTurn == player = pointsEarned
    | otherwise = 0

determineWinner :: Int -> Int -> Player
determineWinner p1Score p2Score
    | p1Score > p2Score = P1
    | otherwise = P2

Answer: I'd contend that State is pure and functional, but I think translating your current code to use State is an excellent exercise so I'll leave that up to you. The first thing I'd address is making your types do more of the bookkeeping. Well designed types lend themselves to correct-by-construction solutions.

data Player = P1 | P2 deriving Show

data Game = Game { turn :: Player, p1 :: Int, p2 :: Int } deriving Show

Prefer pattern matching to equality testing; it is frequently more terse. Decreased line noise often means increased readability.
changeTurn :: Player -> Player
changeTurn P1 = P2
changeTurn P2 = P1

Your determineWinner function has a (currently unreachable) logic error. If both players' scores are equal then it prefers handing victory to player two. This may not matter in your code as written, but if your code changes or you begin property testing or some other unforeseeable future event comes to pass, it could begin mattering unexpectedly. Handling ties is "morally" the right thing to do. Also, as-is it isn't really determining a winner by the rules of the game, only which player's score is higher.

winner :: Game -> Maybe Player
winner (Game _ p1 p2) = case (max p1 p2 >= 10, p1 > p2, p2 > p1) of
    (True, True, False) -> Just P1
    (True, False, True) -> Just P2
    (_, _, _)           -> Nothing

Don't validate and then parse; parse and allow for failure. If your validation code is separate from your parsing code you risk them drifting out of sync and causing errors. In this case, use Text.Read.readMaybe from base and leave out your isInteger function entirely. It's a good idea to separate as much of your pure game logic from IO actions as possible. It's easier to test, easier to understand, and enables you to reuse functionality you otherwise couldn't.

updateRound :: Int -> Game -> Game
updateRound n (Game P1 p1 p2) = Game P2 (boundScore $ p1 + n) p2
updateRound n (Game P2 p1 p2) = Game P1 p1 (boundScore $ p2 + n)

clamp :: Ord a => a -> a -> a -> a
clamp lo val hi = lo `max` val `min` hi

boundScore :: Int -> Int
boundScore n = clamp 0 n 10

This also obviates the need for changeTurn. It's also usually handy to separate your display logic from your control logic, even if both are IO actions. It might be useful to you if you make use of the REPL while developing. Your types often shouldn't include multiple copies of the same information (e.g., two numbers and their difference) as that carries a risk of the values getting out of sync. Those derived values might be all you want to see while working, though.

displayGame :: Game -> IO ()
displayGame (Game _ p1 p2) = do
    putStrLn $ "P1 Score: " ++ show p1
    putStrLn $ "P2 Score: " ++ show p2

That said, it's good to separate your control logic from your sources of input and output also. It makes your program testable without needing to muck about with piping stdin and stdout. There's also a very elegant transformation when you decide to begin using State, but I'll leave that to you to figure out.

gameRound :: Int -> Int -> Game -> (Maybe Player, Game)
gameRound guess answer game =
    let score    = calcPointsEarned guess answer
        nextGame = updateRound score game
    in (winner nextGame, nextGame)

receiveGuess :: Player -> IO Int
receiveGuess player = do
    putStr $ show player ++ "'s turn. Pick a number between 1 and 10: "
    input <- getLine
    case readMaybe input of
        Nothing -> do
            putStrLn "The input must be an integer"
            receiveGuess player
        Just guess -> pure guess

play :: Game -> IO ()
play game = do
    answer <- randomRIO (1, 10)
    guess  <- receiveGuess (turn game)
    putStrLn $ "The answer is " ++ show answer
    let (mPlayer, nextGame) = gameRound guess answer game
    displayGame nextGame
    case mPlayer of
        Just player -> putStrLn $ show player ++ " wins!"
        Nothing     -> play nextGame
{ "domain": "codereview.stackexchange", "id": 40850, "tags": "beginner, haskell, functional-programming, random" }
AMCL losing localization problem
Question: Hello... I am working with AMCL. If I work in a small area, AMCL works very well. But when I work in a corridor, AMCL always gets lost in the middle of the corridor. My AMCL parameters are good, so the AMCL pose estimation (the pose array) doesn't spread. Why does it always get lost in the same area? Is the problem the LIDAR max range or the ground type?

lidar hokuyo: 5.4 m max range
type of floor: tile
The length of the corridor:
VIDEO ABOUT PROBLEM:
VIDEO INFO:
video timing (03:08): robot is actually in front of the first door, but amcl estimation is still in front of the first floor
video timing (04:07): robot is actually in front of the third door, but amcl estimation is still in front of the second floor

<!-- amcl.XML-->
<launch>
  <arg name="use_map_topic" default="true"/>
  <arg name="initial_pose_x" default="0.0"/>
  <arg name="initial_pose_y" default="0.0"/>
  <arg name="initial_pose_a" default="0.0"/>
  <node pkg="amcl" type="amcl" name="amcl" args="scan:=scan2" output="screen">
    <!-- Publish scans from best pose at a max of 10 Hz -->
    <param name="odom_frame_id" value="odom_combined"/>
    <param name="base_frame_id" value="base_footprint"/>
    <param name="global_frame_id" value="map"/>
    <param name="odom_model_type" value="diff"/>
    <param name="transform_tolerance" value="0.5" />
    <param name="gui_publish_rate" value="10.0"/> <!-- 5 -->
    <param name="laser_max_beams" value="80"/> <!-- 30 -->
    <param name="min_particles" value="100"/>
    <param name="max_particles" value="4000"/>
    <param name="kld_err" value="0.05"/>
    <param name="kld_z" value="0.99"/>
    <param name="odom_alpha1" value="0.02"/> <!-- 0.2 -->
    <param name="odom_alpha2" value="0.07"/> <!-- 0.2 --> <!-- translation std dev, m -->
    <param name="odom_alpha3" value="0.08"/> <!-- 0.8 -->
    <param name="odom_alpha4" value="0.02"/> <!-- 0.2 -->
    <param name="laser_max_range" value="5.6"/>
    <param name="laser_z_hit" value="0.95"/>
    <param name="laser_z_short" value="0.1"/>
    <param name="laser_z_max" value="0.05"/>
    <param name="laser_z_rand" value="0.05"/>
    <param name="laser_sigma_hit" value="0.2"/>
    <param name="laser_lambda_short" value="0.1"/>
    <param name="laser_lambda_short" value="0.1"/>
    <param name="laser_model_type" value="likelihood_field"/>
    <!-- <param name="laser_model_type" value="beam"/> -->
    <param name="laser_likelihood_max_dist" value="2.0"/>
    <param name="update_min_d" value="0.08"/> <!-- 0.2 **0.15-->
    <param name="update_min_a" value="0.18"/> <!-- 0.5 **0.12-->
    <param name="resample_interval" value="1"/>
    <param name="transform_tolerance" value="0.1"/> <!-- 0.1 -->
    <param name="recovery_alpha_slow" value="0.0"/>
    <param name="recovery_alpha_fast" value="0.0"/>
    <param name="initial_pose_x" value="$(arg initial_pose_x)"/>
    <param name="initial_pose_y" value="$(arg initial_pose_y)"/>
    <param name="initial_pose_a" value="$(arg initial_pose_a)"/>
    <param name="use_map_topic" value="$(arg use_map_topic)"/>
  </node>
</launch>

Originally posted by osmancns on ROS Answers with karma: 153 on 2015-09-04
Post score: 0
Original comments
Comment by mgruhler on 2015-09-04: This cannot be debugged without more information. Please provide at least your configuration (min. amcl), maybe a video (from rviz) of how this happens, or a bagfile.
Comment by osmancns on 2015-09-04: Thanks... I edited my question @mig
Comment by osmancns on 2015-09-07: Can you help me @mig please?
Comment by allenh1 on 2015-09-07: It's rather difficult to tell what's happening, though I would be concerned that the particles are staying so bundled. Particles are supposed to spread out. This means that the algorithm is resampling, so check your odometry/laser rangefinders/map scale? One of those is likely the problem.
Comment by osmancns on 2015-09-07: Thank you @allenh1. I placed boxes in the center of the corridor and mapped again, and now AMCL doesn't get lost, so it works well. I can't understand why. And I learned that "if particles don't spread out, AMCL works well" - doesn't it? I kept changing the amcl parameters and fixed the particle spread.
Comment by mgruhler on 2015-09-10: I'm really not sure what is happening there. To me, it seems that you don't have proper odometry measurements. Could you create a bagfile and upload this somewhere? Particles need to spread out to some extent. To me, it seems you are overestimating the (faulty) odometry, but this is just a guess.
Comment by osmancns on 2015-09-10: I placed boxes in the center of the corridor and mapped again, and now AMCL doesn't get lost. The corridor is longer than my lidar max range - is that a problem, do you think? And if particles don't spread out, does that mean the amcl parameters are very good? That is my understanding. Do you think my parameters are bad, @mig?
Comment by mgruhler on 2015-09-12: I'd say that your odometry parameters are too low. On our robots, they are considerably higher and work fine. But as I don't know your robot, I cannot say.
Comment by osmancns on 2015-09-12: My robot's speed is 0.2 m/s, but when I increased my odom parameters, my amcl pose estimations spread out a lot. Do you think that is really a problem? I think the problem is the lidar max range; the lidar cannot see the end of the corridor.
Comment by allenh1 on 2015-09-19: I'd say that this could be a good sign.
Comment by allenh1 on 2015-09-19: Idea: map without odometry (e.g. hector slam) and then try to get your robot to localize.
Comment by nadiah on 2020-01-06: Hello, did you solve this problem? Because now I have the same problem.

Answer: Gmapping locates the robot via the laser. Even if odometry is poor, gmapping corrects the position of the robot with scan matching. But in a corridor, your laser's range isn't enough for this. Test your odometry first:
... The first test checks how reasonable the odometry is for rotation. I open up rviz, set the frame to "odom," display the laser scan the robot provides, set the decay time on that topic high (something like 20 seconds), and perform an in-place rotation. Then, I look at how closely the scans match each other on subsequent rotations. Ideally, the scans will fall right on top of each other, but some rotational drift is expected, so I just make sure that the scans aren't off by more than a degree or two. ...
For more, look at this navigation tuning guide.
Originally posted by Orhan with karma: 856 on 2016-02-10
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by osmancns on 2016-02-10: I control my robot with a joystick, so the path is no problem. If I do odometry tests in a room (5x5 m), the odometry result is very good and there is no problem. But when I drive my robot in the corridor (like the pic above), amcl loses the position of the robot.
Comment by Orhan on 2016-02-10: I'm editing my answer because my comment is long.
Comment by osmancns on 2016-02-10: Is there a way or parameter to compensate for this drifting? (except a new lidar)
Comment by Orhan on 2016-02-10: No. If this problem is about your odometry settings, you must correct them first. Try editing your urdf file by looking at other open-source robots' urdfs.
Comment by osmancns on 2016-02-11: I use a real robot and the urdf model is the same as the robot. And I use laser_scan_matcher to produce odometry.
Comment by Orhan on 2016-02-11: I used real information about my robot before too. But sometimes navigation needs some tricks like this.
{ "domain": "robotics.stackexchange", "id": 22568, "tags": "ros, navigation, pose, amcl" }
Number of photons
Question: When a light source blinks, it "creates" a ball of photons, expanding at the speed of light. How many photons are there in one "layer" of the ball (no matter how long the source is active)? Is it a definite number? Thanks

Answer: The number of photons may indeed be finite because the energy of the photon in ${\rm J}$ is $$ E = hf$$ where $h=6.626\times 10^{-34}{\rm J}\cdot {\rm s}$ is Planck's constant and $f$ is the frequency in ${\rm Hz}$. For monochromatic light, the number of photons may be determined from the energy in this simple way because $f$ is a fixed constant: $$ N_{\rm photons} = \frac{E_{\rm total}}{hf} $$ For light that is a mixture of many frequencies, one has to separate the energy into the contributions from different frequencies, and apply the rule above separately for each frequency: $$ N_{\rm photons} = \frac{1}{h} \int_0^\infty \frac{df}{f} \frac{dE}{df} $$ where $dE/df$ is the total energy per unit frequency as a function of the frequency. A problem in applying the formula above is that in many cases, the integral may be divergent around $f\to 0$, the very-low-frequency limit. The "soft" photons of low frequencies carry an extremely low energy, and that's why it's easy to accumulate or emit or absorb big numbers of photons without transferring or spending too much energy. We talk about "infrared divergences" if the number of photons is divergent around $f\to 0$. Clearly, the number of photons is proportional to the time over which the light source is turned on, and the intensity (power) of the light source. The number of photons is a whole number – an integer – which means that the classical idea that the energy carried by light may be arbitrary or continuously changing isn't accurate.
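The relation $N = E/(hf)$ for monochromatic light is easy to check numerically. A small sketch (the 1 J pulse energy and green-light frequency are made-up example values, not taken from the question):

```python
# Number of photons in a monochromatic pulse: N = E / (h * f)
PLANCK_H = 6.626e-34  # Planck's constant, J*s

def photon_count(energy_joules, frequency_hz):
    """Return the number of photons carrying the given total energy."""
    return energy_joules / (PLANCK_H * frequency_hz)

# Example: a 1 J flash of green light (f ~ 5.45e14 Hz)
n = photon_count(1.0, 5.45e14)
print(f"{n:.3e} photons")
```

Even a modest 1 J flash contains on the order of 10^18 photons, which is why the energy of light looks continuous classically.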
{ "domain": "physics.stackexchange", "id": 25288, "tags": "visible-light, photons" }
Convert list to a string with double quotes
Question: For an external application I need to send a command as a string, like this:

["START", "1", "2", "3", "4", "STOP"]

Note the double quotes! I create this command with the following function:

def create_command(amount):
    command = ["START"]
    list = create_list(amount)
    command += list
    command += ["STOP"]
    command = str(command)
    command = command.replace("\'", "\"")
    return command

And I create a list from a given number with the following function:

def create_list(data):
    list = []
    data = str(data)
    for letter in data:
        list.append(letter)
    return list

Is there a way to make both functions more Pythonic and less roundabout? I don't really like them now; they look a bit clumsy and I think there must be a better way to do this.

Answer: create_list is building a list of all the items in the string form of data, and so you can change it to:

def create_list(data):
    return list(str(data))

I find it easier to read create_command if you merge some of the lines together:

def create_command(amount):
    command = ["START"] + create_list(amount) + ["STOP"]
    return str(command).replace("\'", "\"")

And so you can merge the above two changes together:

def create_command(amount):
    command = ["START"] + list(str(amount)) + ["STOP"]
    return str(command).replace("\'", "\"")

Expanding further, rather than using str.replace you can use json.dumps to format the list. This has the benefit that it will escape ' and " characters for you, which your code doesn't correctly handle. (Thanks Mathias)

import json

def create_command(amount):
    command = ["START"] + list(str(amount)) + ["STOP"]
    return json.dumps(command)
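As a quick check of the json.dumps approach: double quotes come out by default, and an embedded quote character is escaped correctly (something the str().replace version would silently corrupt):

```python
import json

# The JSON representation of a list of strings uses double quotes by default.
command = ["START"] + list(str(1234)) + ["STOP"]
print(json.dumps(command))  # ["START", "1", "2", "3", "4", "STOP"]

# An item containing a double quote is escaped, not corrupted:
print(json.dumps(['say "hi"']))  # ["say \"hi\""]
```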
{ "domain": "codereview.stackexchange", "id": 31215, "tags": "python, python-2.x" }
How does an electron gun accelerate electrons?
Question: I know that in the electron guns we see in TVs and lots of other places, we have an electron emitter (a cold/hot W needle in the simplest case), and electrons are accelerated using a lattice with a high-voltage potential. But the question is: why doesn't this lattice slow down the electrons once they pass through it? PS. Please correct me if lattice is not the correct word here :-D

Answer: I would say "electrodes" rather than "lattice." In a real CRT you have a very complicated set of electrodes, but let's pretend it's a parallel-plate capacitor with a hole in the positive-voltage plate. A parallel-plate capacitor has a strong field between the plates, but a very weak field on the outside. So once an electron flies out the hole, it feels very little field.
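To put a rough number on the acceleration: the kinetic energy an electron gains crossing the gap is just e·V, independent of the gap geometry, so its exit speed follows from eV = ½mv². A small sketch (the 20 kV accelerating voltage is an assumed, CRT-typical value, not from the question):

```python
import math

E_CHARGE = 1.602e-19  # electron charge, C
E_MASS = 9.109e-31    # electron mass, kg

def exit_speed(volts):
    """Non-relativistic exit speed of an electron accelerated through `volts`."""
    return math.sqrt(2 * E_CHARGE * volts / E_MASS)

v = exit_speed(20e3)
print(f"{v:.2e} m/s")  # a sizable fraction of the speed of light
```

At tens of kilovolts the result approaches a third of the speed of light, so the non-relativistic formula is only a first estimate.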
{ "domain": "physics.stackexchange", "id": 1344, "tags": "electrostatics" }
Why is the lunar relief not visible in photographs of solar eclipses?
Question: I looked at a lot of high-quality images of solar eclipses and noticed the following thing: In all the photographs I've seen, the lunar disk has a completely clear outline, in which it is impossible to see any irregularities due to the presence of the lunar relief. Shouldn't one expect them to be visible? Source: Times of India - IndiaTimes

Answer: In short, you would need a very high resolution photo -- most likely taken through a telescope -- to identify any surface features on the moon by their effect on the limb (that is, the edge of the moon against the sun). In the photo you posted, the moon is about 500 pixels wide. The moon's diameter is around 3,474 km, so in this photo, each pixel represents roughly 7 km. The tallest mountain on the moon is Mons Huygens, which has about 4.7 km of topographic prominence (that is, its height above its surroundings). So the biggest mountain on the moon is well under one pixel tall -- and most of the moon is much flatter than that. Surface relief features of the moon just aren't big enough to be identifiable at this scale. You probably could see some of the larger features of the moon's limb if you took a photo of the moon with enough magnification that it completely fills the frame of a really good 20 megapixel camera, where you'd be looking at something like 700 meters per pixel. However, there is a phenomenon that shows the surface features of the moon even if we can't see them directly. In the few seconds as the edge of the moon exactly overlaps the very edge of the sun, we can see tiny specks of light with dark between them. This is known as "Baily's Beads":
{ "domain": "astronomy.stackexchange", "id": 7330, "tags": "the-moon, solar-eclipse, relief" }
Trim reads 1kb upstream of sequence
Question: I need a quick way to trim multiple reads in a FASTA file. I need to trim everything that is 1 kbp upstream of this sequence AAGAGATGTTCAATCGTTTAAACAAATTCCAAGCTGCTTTAGCTTTGGCCCTTTACTCTCA. I figure a quick Python script might be the way to go, but I'm not sure if there's a tool that already does that (i.e. Trimmomatic or something like it). Thank you!

Answer: Here's one way using biopython. Note that the Seq object has a number of methods that act just like those of a Python string. One of these is the find() method, which returns the index of the first occurrence of the sequence if found. Replace with rfind() if you want the last occurrence, if that makes more sense. This solution uses a generator to avoid storing the entire list of trimmed sequences in memory:

import argparse
import contextlib
import gzip
import sys

from Bio import SeqIO


def trim_fasta(fileobj, sequence, num_bases):
    """
    Trim num_bases upstream of the sequence provided
    """
    for record in SeqIO.parse(fileobj, 'fasta'):
        index = record.seq.find(sequence)
        if index != -1 and index > num_bases:
            record = record[index - num_bases:]
        yield record


def open_file(filename, mode='r'):
    if filename.endswith('.gz'):
        return gzip.open(filename, mode)
    else:
        return open(filename, mode)


def get_argument_parser():
    parser = argparse.ArgumentParser(add_help=False)

    group = parser.add_argument_group('input options')
    group.add_argument(
        "fasta",
        type=str,
        metavar="FILE",
        help="The input FASTA file",
    )
    group.add_argument(
        "sequence",
        type=str,
        metavar="STR",
        help="The sequence to find and trim",
    )

    group = parser.add_argument_group('trimming options')
    group.add_argument(
        "-n", "--num_bases",
        type=int,
        metavar="INT",
        default=1000,
        help="Trim INT bases upstream of input sequence",
    )

    group = parser.add_argument_group('output options')
    group.add_argument(
        "-o", "--output",
        type=str,
        metavar="FILE",
        default='-',
        help="Write the filtered output to FILE (default: stdout)",
    )
    group.add_argument(
        "-f", "--force",
        action='store_true',
        help="Overwrite the output file if it exists",
    )

    group = parser.add_argument_group('additional options')
    group.add_argument(
        "-h", "--help",
        action="help",
        help="Show this help message and exit",
    )

    return parser


def main():
    parser = get_argument_parser()
    args = parser.parse_args()

    with contextlib.ExitStack() as stack:
        if args.output != '-':
            out = stack.enter_context(open_file(args.output, 'wt' if args.force else 'xt'))
        else:
            out = sys.stdout
        input_fasta = stack.enter_context(open_file(args.fasta))
        trimmed_reads = trim_fasta(input_fasta, args.sequence, args.num_bases)
        SeqIO.write(trimmed_reads, out, "fasta-2line")


if __name__ == '__main__':
    main()

usage: trim_reads.py [-n INT] [-o FILE] [-f] [-h] FILE STR

input options:
  FILE                  The input FASTA file
  STR                   The sequence to find and trim

trimming options:
  -n INT, --num_bases INT
                        Trim INT bases upstream of input sequence

output options:
  -o FILE, --output FILE
                        Write the filtered output to FILE (default: stdout)
  -f, --force           Overwrite the output file if it exists

additional options:
  -h, --help            Show this help message and exit
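The core of the answer is just find() plus slicing; the same index arithmetic can be demonstrated on a plain string with no biopython dependency (the toy read, the short stand-in target, and the 20-base window below are made up for illustration):

```python
TARGET = "AAGAGATG"  # stand-in for the full query sequence
NUM_BASES = 20       # stand-in for the real 1000-base upstream window

read = "C" * 50 + TARGET + "G" * 10
index = read.find(TARGET)  # first occurrence, just like Seq.find()

# Keep NUM_BASES of upstream context plus everything downstream,
# exactly as trim_fasta() does with a SeqRecord slice.
if index != -1 and index > NUM_BASES:
    read = read[index - NUM_BASES:]

print(len(read))  # 20 upstream + 8 target + 10 downstream = 38
```

Records where the target is absent (find() returns -1), or where there are fewer than NUM_BASES bases upstream, pass through untrimmed, matching the generator's behavior.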
{ "domain": "bioinformatics.stackexchange", "id": 2395, "tags": "python, trimming, multi-fasta" }
Magnetic forces
Question: I cannot understand the nature of magnetic forces. What is the composition of a magnetic force? If it is not composed of anything, how can it act on matter? Thank you.

Answer: If you look at magnetism in terms of quantum field theory (QFT), then magnetism is one part of the electromagnetic interaction. In your question you said it wasn't composed of anything; however, in QFT electromagnetism and the other fundamental interactions are mediated by particles called gauge bosons. So the electromagnetic force is caused by the gauge boson acting on the matter. And for electromagnetism the gauge boson is the photon, so photons interacting with matter are the cause of the electromagnetic force, and consequently of magnetic forces.
{ "domain": "physics.stackexchange", "id": 39977, "tags": "electromagnetism, forces, magnetic-fields" }
Angle of attack for torque calculation from buoyancy force
Question: To calculate the torque caused by the buoyancy force I need the length and the angle, but what angle should I use? Should I use the angle from the center of the bottom, or the angle from the center of mass?

Answer: You should use angle 2. The buoyancy in this case is due to pressure from 3 sides. The two lateral sides undergo triangular pressure distributions that oppose each other and cancel. Therefore the only effective force is the buoyancy pressure times the surface area of the bottom, which is the applied force and is imparted at the center of the bottom of your submerged object.
{ "domain": "engineering.stackexchange", "id": 4955, "tags": "fluid-mechanics, torque, forces" }
BLDC motors erratic behavior with Arduino program
Question: I've been making my own quadcopter flight controller using an Arduino Mega. This is the sample code I wrote in order to test the esc timers and motors:

byte channelcount_1, channelcount_2, channelcount_3, channelcount_4;
int receiverinput_channel_1, receiverinput_channel_2, receiverinput_channel_3, receiverinput_channel_4, start;
unsigned long channel_timer_1, channel_timer_2, channel_timer_3, channel_timer_4, current_time, esc_looptimer;
unsigned long zero_timer, timer_1, timer_2, timer_3, timer_4;

void setup() {
  // put your setup code here, to run once:
  DDRC |= B11110000; // Setting digital pins 30,31,32,33 as output
  DDRB |= B10000000; // Setting LED Pin 13 as output
  // Enabling Pin Change Interrupts
  PCICR |= (1 << PCIE0);
  PCMSK0 |= (1 << PCINT0); // Channel 3 PIN 52
  PCMSK0 |= (1 << PCINT1); // Channel 4 PIN 53
  PCMSK0 |= (1 << PCINT2); // Channel 2 PIN 51
  PCMSK0 |= (1 << PCINT3); // Channel 1 PIN 50
  // Wait till receiver is connected
  while (receiverinput_channel_3 < 990 || receiverinput_channel_3 > 1020 || receiverinput_channel_4 < 1400) {
    start++;
    PORTC |= B11110000;
    delayMicroseconds(1000); // 1000us pulse for esc
    PORTC &= B00001111;
    delay(3); // Wait 3 ms for next loop
    if (start == 125) { // every 125 loops i.e. 500ms
      digitalWrite(13, !(digitalRead(13))); // Change LED status
      start = 0; // Loop again
    }
  }
  start = 0;
  digitalWrite(13, LOW); // Turn off LED pin 13
  zero_timer = micros();
}

void loop() {
  // put your main code here, to run repeatedly:
  while (zero_timer + 4000 > micros());
  zero_timer = micros();
  PORTC |= B11110000;
  channel_timer_1 = receiverinput_channel_3 + zero_timer; // Time calculation for pin 33
  channel_timer_2 = receiverinput_channel_3 + zero_timer; // Time calculation for pin 32
  channel_timer_3 = receiverinput_channel_3 + zero_timer; // Time calculation for pin 31
  channel_timer_4 = receiverinput_channel_3 + zero_timer; // Time calculation for pin 30
  while (PORTC >= 16) { // Execute till pins 33,32,31,30 are set low
    esc_looptimer = micros();
    if (esc_looptimer >= channel_timer_1) PORTC &= B11101111; // When delay time expires, pin 33 is set low
    if (esc_looptimer >= channel_timer_2) PORTC &= B11011111; // When delay time expires, pin 32 is set low
    if (esc_looptimer >= channel_timer_3) PORTC &= B10111111; // When delay time expires, pin 31 is set low
    if (esc_looptimer >= channel_timer_4) PORTC &= B01111111; // When delay time expires, pin 30 is set low
  }
}

// Interrupt Routine PCI0 for Receiver
ISR(PCINT0_vect) {
  current_time = micros();
  // Channel 1
  if (PINB & B00001000) {
    if (channelcount_1 == 0) {
      channelcount_1 = 1;
      channel_timer_1 = current_time;
    }
  } else if (channelcount_1 == 1) {
    channelcount_1 = 0;
    receiverinput_channel_1 = current_time - channel_timer_1;
  }
  // Channel 2
  if (PINB & B00000100) {
    if (channelcount_2 == 0) {
      channelcount_2 = 1;
      channel_timer_2 = current_time;
    }
  } else if (channelcount_2 == 1) {
    channelcount_2 = 0;
    receiverinput_channel_2 = current_time - channel_timer_2;
  }
  // Channel 3
  if (PINB & B00000010) {
    if (channelcount_3 == 0 && PINB & B00000010) {
      channelcount_3 = 1;
      channel_timer_3 = current_time;
    }
  } else if (channelcount_3 == 1) {
    channelcount_3 = 0;
    receiverinput_channel_3 = current_time - channel_timer_3;
  }
  // Channel 4
  if (PINB & B00000001) {
    if (channelcount_4 == 0) {
      channelcount_4 = 1;
      channel_timer_4 = current_time;
    }
  } else if (channelcount_4 == 1) {
    channelcount_4 = 0;
    receiverinput_channel_4 = current_time - channel_timer_4;
  }
}

However, my issue here is that the BLDC motors I'm using don't work smoothly when connected to the Arduino. They erratically stop and even change direction of rotation at the same throttle input. I've tested them by connecting them directly to the transmitter and they work fine there, with perfect rotation and speed. Can someone please help me out and tell me where I might be going wrong?

EDIT: I do realize posting the entire Arduino code might be overkill, but I've been trying to solve this problem for three days (as of 22nd June, 2016) and I really do hope someone can point out any improvements/corrections in my code.

Answer: Can you make a written description (comments, text, or block diagram) of what you think the code is supposed to be doing? I can't figure out what your whole timer scheme is supposed to do. Here's what happens when you execute it, though (assuming you make it to the main loop okay):

void loop() {

So here you start a loop. By definition, it loops.

while (zero_timer + 4000 > micros());
zero_timer = micros();

Here you set a while loop based on the condition (some starting time + constant) > current time, but then on the next line you declare (some starting time) = current time. So this looks like you're setting up an infinite loop... inside the infinite loop.

PORTC |= B11110000;

You set the four output pins to ON, but then:

channel_timer_1 = receiverinput_channel_3 + zero_timer;

Here you set channel_timer_N equal to receiverinput_channel_3 (note you do this for all of the channel timers - they're all set equal to channel_3) plus zero_timer. zero_timer was just set one line ago equal to the current time. What is receiverinput_channel_3?
Well, it's not initialized, so it's either whatever was in the allocated memory when the program started, or:

receiverinput_channel_3 = current_time - channel_timer_3;

which evaluates to whatever the interrupt time step is for the first interrupt. receiverinput_channel_3 gets locked in to the interrupt time interval. This is because it only gets updated if channelcount_3 is equal to one. However, when you set the value for receiverinput_channel_3, you set channelcount_3 equal to zero, meaning that it never gets updated until the interrupt gets called again. At that point, if the other criteria are met, you set channel_timer_3 equal to current_time and toggle channelcount_3 again. Now that channelcount_3 has been toggled, on the next interrupt call, you again define receiverinput_channel_3 = current_time - channel_timer_3, but channel_timer_3 was set equal to the current_time on the previous interrupt, so receiverinput_channel_3 is always equal to the interrupt interval. So now, realizing that channel_timer_1 is set to the almost-current time (zero_timer) plus the interrupt interval (now + a very small value), we continue:

while (PORTC >= 16) {

Okay, start a loop to basically pause execution until all of the timers expire, BUT:

esc_looptimer = micros();

Set another variable equal to the now-current time, and THEN:

if (esc_looptimer >= channel_timer_1) PORTC &= B11101111;

As soon as the current time (as continuously defined by esc_looptimer) is greater than the start time outside this inner loop (zero_timer) plus the interrupt interval (channel_timer_1 as defined above), you turn the output pin off. If you manage to hit the timing such that you invoke an interrupt before you hit the esc_looptimer >= channel_timer_1 criteria for turning off the output, then you set channel_timer_1 = current_time, meaning that on the next loop iteration, you declare esc_looptimer = (current time) and then you meet the criteria and immediately turn the output off.
So, the tl;dr version of this is that you are turning your ESCs on then immediately back off, probably before the ESCs get a chance to evaluate the timing on the motors. Depending on the BLDC motors you're using, they may not have shaft position encoders ("sensorless" BLDC), meaning that: "A stationary motor generates no back EMF, making it impossible for the microcontroller to determine the position of the motor parts at start-up. The solution is to start the motor in an open loop configuration until sufficient EMF is generated for the microcontroller to take over motor supervision. These so-called “sensorless” BLDC motors are gaining in popularity. So you turn the ESC on, then before it has a chance to start the motor to the point it has adequate control, you turn the ESC back off again. Once all the ESCs are off you turn them back on then immediately back off again. This process repeats. With a BLDC motor, the commutator is removed. The commutator was what had previously provided the electrical current to the correct coils at the right time. Instead, a BLDC motor relies on electronic commutation, meaning you have to sequence the coils in the correct order, at the correct time. If you turn the ESC on and then off again before it has a chance to determine the correct timing for energizing the coil then you would get erratic behavior from the motors because they're not being controlled properly. I really have no idea what your whole interrupt sequence is supposed to be doing, but I can pretty much guarantee it's not doing what you intended. If you just want the motor to be on for some period of time, then my suggestion would be to hard-code that run time as a constant and wait for it to elapse. As @Jakob mentioned, the code is made more difficult to read because you're trying to control all four ESCs at once. Try getting one to work as desired first, then try to encapsulate the code for that into a function, then invoke that function for each ESC you want to run. 
As a final note, I'd say that I'm dubious as to what the output of PINB & B00000100 and similar lines would evaluate to: true or false? I thought PINB was one pin - how do you AND one pin with 4 (B00000100)? What is the output of that expression? Also, you don't appear to initialize anything to zero anywhere. This means that you're stuck with whatever is in the allocated memory when the program starts - if this isn't a 1 or a 0 in your channelcount_N variables, then your code never does anything. If you revamp the code and it still doesn't work, please post a wiring diagram showing how everything is connected. But again, as it stands I don't think your code is doing what you think it's doing.
{ "domain": "robotics.stackexchange", "id": 1108, "tags": "quadcopter, arduino, brushless-motor, esc" }
Looking for a particular normal form for Context-sensitive grammar
Question: I am wondering if there is a described normal form for context-sensitive grammars, something similar to Kuroda normal form and Greibach normal form. That is to say, each rule in such a form might be one of the following: $A\beta\rightarrow a_1\dots a_k B_1 B_2\dots B_n a_{k+1}\dots a_m$ or $A\rightarrow a_1\dots a_k B_1 B_2\dots B_n a_{k+1}\dots a_m$ where $A,B_i\in N$, $\beta\in(N\cup T)$, $a_i\in T$, $m\geq 0$, $n\geq 0$, $k\geq 0$, $(m+n)>0$. The idea is that three conditions are satisfied: In the left-hand sides of rules, non-terminals always come first. In the right-hand sides of rules, all non-terminals are grouped together. Context-sensitive rules contain no more than two symbols on the left-hand side. As an answer I would expect either a counter-example proving that it is not always possible to transform a context-sensitive grammar into this form, or a link to an existing paper where such a normal form is described, or a proof that such a normal form exists :-) Answer: The set of grammars in Kuroda normal form $\mathcal{K}$ is a strict subset of the grammars in the described form $\mathcal{L}$: $\mathcal{K}\subset\mathcal{L}$. This follows from the fact that the first rule form covers the first three forms of Kuroda normal form, since $k$ and $m$ can be equal to $0$, and the second covers the '$A \rightarrow a$' form, since $k$ and $n$ can be equal to $0$. Let $\mathcal{G}$ be the set of all context-sensitive grammars. Since it is proven that for any context-sensitive grammar $G\in\mathcal{G}$ there is a mapping $K:\mathcal{G}\rightarrow\mathcal{K}$ (i.e., there is an algorithm for transforming any CS grammar into Kuroda normal form), the mapping $L:\mathcal{G}\rightarrow\mathcal{L}$ also always exists, and we may take $L = K$. Indeed, for any $G\in\mathcal{G}$ the condition $L(G)\in\mathcal{L}$ must be satisfied; if $L=K$, then $L(G)\in\mathcal{K}\subset\mathcal{L}$. Q.E.D.
{ "domain": "cstheory.stackexchange", "id": 4257, "tags": "fl.formal-languages, grammars" }
What could be a good potential function for flocks of birds?
Question: I am interested in modelling flocks of birds but I have difficulty finding a good candidate for my potential function $V(\vec{r})$. It would need to have the following characteristics: A short range repulsion (birds do not want to crash into their neighbours) A long range attraction (birds aim to stay in packs) (From wikipedia: "Alignment - steer towards average heading of neighbours") What function could be a good candidate for this? In general, for an $N$-body system: In the case of gravity, we may use the potential $$V_g(\mathbf{x})=\sum_{i=1}^{n}-\frac{G m_{i}}{\left|\mathbf{x}-\mathbf{x}_{\mathbf{i}}\right|}$$ In the case of flocking, I tried using the Lennard-Jones potential: $$V_{\mathrm{LJ}}=4 \varepsilon\left[\left(\frac{\sigma}{r}\right)^{12}-\left(\frac{\sigma}{r}\right)^{6}\right]$$ since it satisfies the short range repulsion and the long range attraction, but the resulting simulation does not look like bird flocking at all... Any ideas/advice are welcome and much appreciated. I can share screen shots showing the results. Answer: I don't know how you did your simulation, but you should at least include some kind of damping term in the motion of the birds, otherwise every configuration starting out-of-equilibrium will keep oscillating forever. Maybe add a term like $\overrightarrow{F_{\mathrm{viscous}}} = - \alpha \overrightarrow{v}$. If you are not only interested in the equilibrium position but also the dynamics of the flock for small perturbations, you can also add a random Langevin force $\overrightarrow{F_{\mathrm{Langevin}}}(t) = \eta(t)$, with $\left\langle \eta(t) \eta(t') \right\rangle = A \delta(t-t')$. For the exact form of the interaction force between the birds, I don't know if it is possible to capture all of the complexity of the phenomenon simply by adding pairwise, conservative forces to the problem, but we can play around and see if the results are good enough in the end.
If you want a "homogeneous" density of birds, you can try $$\overrightarrow{F}_{\overrightarrow{x}\to\overrightarrow{x'}}= \frac{k}{N} \left(\overrightarrow{x} - \overrightarrow{x'}\right) - \beta \frac{\left(\overrightarrow{x} - \overrightarrow{x'}\right)}{\left|\overrightarrow{x} - \overrightarrow{x'} \right|^2},$$ where $\overrightarrow{F}_{\overrightarrow{x}\to\overrightarrow{x'}}$ is the force exerted by a bird at position $\overrightarrow{x}$ on a bird at position $\overrightarrow{x'}$, $N$ is the number of birds in the flock (a normalization factor), and $k$ and $\beta$ are positive constants whose relative value will change the average spacing between birds at equilibrium. You also need to fix the relative value of the Langevin force and viscous force to somewhat realistic values so that you reach an equilibrium fast enough but you can still see the small perturbations in position. I will come with a more detailed explanation for the form of the force and a simulation later. Reason for the choice of the force: The first term in $\overrightarrow{F}_{\overrightarrow{x}\to\overrightarrow{x'}}$, $\frac{k}{N} \left(\overrightarrow{x} - \overrightarrow{x'}\right)$, simply correspond to the force of an harmonic oscillator of strengh $k$ centered on the center of mass of the flock. So, assuming the center of mass is centered around $0$, the force is simply $\overrightarrow{F}(\overrightarrow{x}) = -k \overrightarrow{x}$. The reason for the second term is actually a little bit subtle. By analogy with the 3D Coulomb force, you can show that the field $\overrightarrow{f}(\overrightarrow{x})$ created by a density of birds $\rho(\overrightarrow{x}')$ follows a 2D analog to Gauss law in electromagnetism, which is $\overrightarrow{\nabla} \cdot \overrightarrow{f} = 2 \pi \beta \rho$. 
You can then show that a constant density of birds $\rho(r) = N/(\pi R^2)$ for $0 < r < R$, and $\rho(r>R) = 0$, creates a repulsive force of the form $\overrightarrow{f}(\overrightarrow{x}) = \beta N \overrightarrow{x}/R^2$ (for $\left| \overrightarrow{x} \right| < R$), which exactly compensates the harmonic oscillator force when $R = \sqrt{\beta N/k}$. This is of course only true for a continuous "density" of birds, which is not the case in reality, but for $N \gg 1$ you can identify $\rho$ with the average density of birds. In short, for $N \gg 1$, the birds at equilibrium should arrange themselves in a somewhat homogeneous configuration within a radius $R = \sqrt{\beta N/k}$. Simulation: I implemented the problem in Python using all the forces that I described earlier, with $k = \beta = 1$, $A = 3$ and $\alpha = 2$. I integrate in time steps of $dt = 0.01$ from $t = 0$ to $t = T = 10$. The mass $m$ of each bird is also set to $1$. Initially, the position of each bird is drawn randomly and uniformly in a square smaller than the expected radius, so that you can see the out-of-equilibrium dynamics. Here is what the trajectories look like on a typical run. And here is a nice gif of the time evolution of the bird flock: As you can see, despite some random fluctuations in position, the birds keep moving around an "equilibrium" configuration where the number of birds per surface area is approximately constant within a given radius. If you're worried that the birds get too close to each other and it is unrealistic, try decreasing the $A$ parameter (controlling the amplitude of the random motion of birds). See below a gif for $A = 0.5$.
Code used

import numpy as np
import matplotlib.pyplot as plt

plt.rc('font', size=20)

N = 160      #number of birds
sizex = 20   #size of the image for plotting
sizey = 20
k = 1.0      #strength of the (long-range) attractive force
beta = 1.0   #strength of the (short-range) repulsive force
alpha = 2.0  #"viscosity"
A = 3.0      #intensity of the random erratic movement of each bird
dt = 0.01    #timestep
T = 10       #final time

expected_radius = (beta*N/k)**0.5  #expected radius of the bird flock by analogy with the continuous case

X_pos_0 = (np.random.random(N)-0.5)*0.5*sizex  #draw the initial X position of birds randomly
X_pos_0 -= np.mean(X_pos_0)                    #center it on 0
Y_pos_0 = (np.random.random(N)-0.5)*0.5*sizey  #same in the Y direction
Y_pos_0 -= np.mean(Y_pos_0)

X_pos = np.zeros((N, int(T/dt)+1))    #array of X position of each bird
Y_pos = np.zeros((N, int(T/dt)+1))    #array of Y position of each bird
X_speed = np.zeros((N, int(T/dt)+1))  #array of X velocity of each bird
Y_speed = np.zeros((N, int(T/dt)+1))  #array of Y velocity of each bird
X_pos[:,0], Y_pos[:,0] = X_pos_0, Y_pos_0

@np.vectorize
def repulsive_force_x(X,Y):  #function used to compute the pairwise forces at each timestep of the algorithm
    if X==0 and Y==0:
        return 0.0
    else:
        return -beta*X/(X**2+Y**2)

@np.vectorize
def repulsive_force_y(X,Y):
    if X==0 and Y==0:
        return 0.0
    else:
        return -beta*Y/(X**2+Y**2)

time = np.linspace(0, T, int(T/dt)+1)

for i in range(int(T/dt)):
    langevin_force_x = np.random.normal(0, A, N)/dt**0.5  #the Langevin force has to be normalized like this in the limit dt -> 0
    langevin_force_y = np.random.normal(0, A, N)/dt**0.5
    viscous_force_x = -alpha*X_speed[:, i]
    viscous_force_y = -alpha*Y_speed[:, i]
    attractive_force_x = -k*(X_pos[:,i]-np.mean(X_pos[:,i]))
    attractive_force_y = -k*(Y_pos[:,i]-np.mean(Y_pos[:,i]))
    X1, X2 = np.meshgrid(X_pos[:,i], X_pos[:,i])
    distance_matrix_X = X1-X2  #array containing all pairwise distances x-x' along X
    Y1, Y2 = np.meshgrid(Y_pos[:,i], Y_pos[:,i])
    distance_matrix_Y = Y1-Y2  #array containing all pairwise distances y-y' along Y
    force_X = np.sum(repulsive_force_x(distance_matrix_X, distance_matrix_Y), axis=1)
    force_Y = np.sum(repulsive_force_y(distance_matrix_X, distance_matrix_Y), axis=1)
    X_speed[:, i+1] = X_speed[:, i] + dt*(langevin_force_x + viscous_force_x + attractive_force_x + force_X)  #v(t+dt) = v(t) + a(t)*dt (m=1)
    Y_speed[:, i+1] = Y_speed[:, i] + dt*(langevin_force_y + viscous_force_y + attractive_force_y + force_Y)
    X_pos[:, i+1] = X_pos[:, i] + dt*X_speed[:, i+1]  #x(t+dt) = x(t) + v(t+dt)*dt (symplectic integration)
    Y_pos[:, i+1] = Y_pos[:, i] + dt*Y_speed[:, i+1]
    if i%(int(T/dt)//10) == 0:  #progression of the calculation
        print(10*i//(int(T/dt)//10), '%')

cmap_cool = plt.get_cmap('cool')

#plot initial and final positions only
fig, ax = plt.subplots(1, figsize=(15,12))
theta = np.linspace(0, 2*np.pi, 200)
ax.plot(expected_radius*np.cos(theta), expected_radius*np.sin(theta), 'black', label="Expected radius")
ax.plot(X_pos[:, 0], Y_pos[:, 0], 'o', label="Initial position", color=cmap_cool(0))     #initial position in blue
ax.plot(X_pos[:, -1], Y_pos[:, -1], 'o', label="Final position", color=cmap_cool(0.99))  #final position in purple
ax.axis("scaled")
ax.set_xlim(left=-24, right=24)
plt.legend(loc='best')

#plot trajectories too
fig, ax = plt.subplots(1, figsize=(15,12))
theta = np.linspace(0, 2*np.pi, 200)
ax.plot(expected_radius*np.cos(theta), expected_radius*np.sin(theta), 'black', label="Expected radius")
ax.plot(X_pos[:, 0], Y_pos[:, 0], 'o', label="Initial position", color=cmap_cool(0))     #initial position in blue
ax.plot(X_pos[:, -1], Y_pos[:, -1], 'o', label="Final position", color=cmap_cool(0.99))  #final position in purple
ax.axis("scaled")
ax.set_xlim(left=-24, right=24)
for i in range(20):
    for bird in range(N):
        ax.plot([X_pos[bird, i*(int(T/dt)//20)], X_pos[bird, (i+1)*(int(T/dt)//20)]],
                [Y_pos[bird, i*(int(T/dt)//20)], Y_pos[bird, (i+1)*(int(T/dt)//20)]],
                '--', color=cmap_cool(i/20), lw=1)
plt.legend(loc='best')

Hope this helps.
{ "domain": "physics.stackexchange", "id": 74114, "tags": "potential-energy, many-body, biology, models" }
Find primes that are also primes when their digits are rotated
Question: The program below is meant to find all of the fare prime numbers under a user specified value. A fare prime being a number that when its digits are rotated, each combination makes a prime number. An example being 113, because 113, 311, and 131 are all prime numbers. My current problem is that the program takes a very long time to process very large numbers, so I need a way of making it run quicker. I've tried to explain the code with the comments, but if any part doesn't make sense I'll do my best to explain.

#get user input for the number range
n = int(input("number"))
primes = []

#find all the prime numbers up to the specified value and append them to a list
for num in range(2, n+1):
    for i in range(2, num):
        if num % i == 0:
            break
    else:
        primes.append(num)

#find out if the prime number is a fare prime
for i in primes:
    length = len(str(i))
    #if the number has one digit it is automatically a fare prime
    if length == 1:
        print(i)
    #if the number is longer, rotate the digits to see if it is a fare prime
    if length >= 2:
        number = i
        fare_primes = []
        #rotate the number to figure out if all combinations are prime
        for j in range(length):
            #turns # into a list of digits
            digit_list = list(str(number))
            #rearranges
            number = [*digit_list[1::], digit_list[0]]
            part = ""
            #turns it back into an int value
            number = part.join(number)
            int_num = int(number)
            #check if # is prime
            for divider in range(2, int_num):
                if int_num % divider == 0:
                    break
            else:
                fare_primes.append(number)
        #if all combinations of the digits are prime the original # is printed
        if len(fare_primes) == length:
            print(fare_primes[-1])

Answer: A 2+ digit fare prime number can never have an even digit or a 5 among its digits, because a rotation which moves an even digit or a 5 to the last digit position will be divisible by 2 or 5. You could use that as a filter for your possible fare primes from the list of primes you calculate. When calculating primes, you can stop at sqrt(num).
Any number greater than sqrt(num) that evenly divides num will have a complement number less than sqrt(num) that you have already tested. Speaking of primes that you've calculated, why don't you use those for your subsequent prime test? Why try dividing by every number from 2 up to int_num when you could just try the numbers in primes up to int_num... or just ask if int_num in primes. Speed tip: turn primes into a set() first for faster inclusion testing. Your digit_list code is very inefficient. For any number, once you've split the number into digits, you don't need to re-split it into digits again for each rotation. Actually, you don't even need to split it into individual digits. This will give you the rotated values:

digits = str(i)
for j in range(1, length):
    int_num = int(digits[j:] + digits[:j])
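Putting those suggestions together (sieve once, test membership in a set, filter out the digits 0/2/4/5/6/8, and rotate with string slicing), a sketch might look like this. The function name and structure are mine, not from the original post:

```python
def fare_primes(n):
    # Sieve of Eratosthenes, built once, up to the largest possible rotation
    # (rotating a k-digit number can produce any other k-digit value)
    limit = 10 ** len(str(n)) - 1
    is_prime = [False, False] + [True] * (limit - 1)
    for p in range(2, int(limit ** 0.5) + 1):   # stop at sqrt(limit)
        if is_prime[p]:
            is_prime[p * p::p] = [False] * len(is_prime[p * p::p])
    primes = {i for i, flag in enumerate(is_prime) if flag}  # set: O(1) lookups

    result = []
    for p in range(2, n + 1):
        if p not in primes:
            continue
        digits = str(p)
        # a 2+ digit fare prime can't contain an even digit or a 5
        if len(digits) > 1 and any(d in "024568" for d in digits):
            continue
        # every rotation must be prime
        if all(int(digits[j:] + digits[:j]) in primes for j in range(len(digits))):
            result.append(p)
    return result
```

The sieve is built once and reused for every rotation test, which is where the original version spent most of its time.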
{ "domain": "codereview.stackexchange", "id": 33024, "tags": "python, python-3.x, time-limit-exceeded, primes" }
Force on a magnet in a magnetic field
Question: I was wondering what happens when I put a wire in which current flows parallel to a magnet, something like this: where the straight line is the wire, the circle is a cylindrical magnet (with outgoing magnetic field), and the red line is a field line of the magnetic field of the wire. In particular I was wondering if the force on the magnet is directly proportional to the magnetic field produced by the wire. Thanks. Answer: In particular I was wondering if the force on the magnet is directly proportional to the magnetic field produced by the wire. If the magnet is far away from the wire, the force on the magnet can be expressed as $$ \mathbf F = \mathbf m \cdot \nabla \mathbf B_w, $$ where $\mathbf m$ is the magnetic moment of the magnet and $\nabla\mathbf B_w$ is the gradient of the magnetic field of the wire at the position of the magnet (a tensor). However, if the magnet gets close enough to the wire, the above formula breaks down - there is no single relevant value of the tensor that could be easily found and put into the above formula to obtain the right value of the force. Instead, a similar formula can be integrated over the region containing the magnet: $$ \mathbf F = \int_{magnet} \mathbf M(\mathbf x) \cdot \nabla \mathbf B_w(\mathbf x) d^3\mathbf x $$ where $\mathbf M$ gives the magnetic moment of the magnet per unit volume. Still, the magnetic field $\mathbf B_w$ at all positions is proportional to the current in the wire. From this and the last equation, it follows that the force $\mathbf F$ on the magnet is proportional to the current in the wire.
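A quick numerical check of that last proportionality claim. The constants and geometry here are illustrative, not from the question, and the one-dimensional form $F = m\,\partial B/\partial r$ with $B(r) = \mu_0 I/(2\pi r)$ is a simplification of the tensor expression above, valid for a small magnet whose moment follows the local field:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def wire_field(current, r):
    # |B| around a long straight wire at distance r (Ampere's law)
    return MU0 * current / (2 * math.pi * r)

def dipole_force(m, current, r, h=1e-6):
    # F = m * dB/dr, with the gradient taken by central finite difference
    return m * (wire_field(current, r + h) - wire_field(current, r - h)) / (2 * h)

F1 = dipole_force(0.5, 1.0, 0.01)  # moment 0.5 A*m^2, 1 A, 1 cm from the wire
F3 = dipole_force(0.5, 3.0, 0.01)  # same magnet, triple the current
```

Since $B_w$ is linear in the current, F3 comes out three times F1, exactly the proportionality the answer derives.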
{ "domain": "physics.stackexchange", "id": 14541, "tags": "electromagnetism" }
Help with setting up Inverse Kinematics
Question: I'm working through the Inverse Kinematic example for the Unimation PUMA 560 from Introduction to Robotics by Craig. In it he specifies the IK equations like so: In my software program I have three sliders on the screen that will give me the rotation of the end point in x, y, z like so (this is in Unity): Each one of these sliders will control a float variable in the code (C#) and I can read these into my script (Using Unity 5). I am trying to replicate the inverse kinematic solution for this PUMA robot inside Unity, so that for a given position and rotation of the end effector the link rotations will update accordingly. I have already written out the IK equations that Craig specified in the example to calculate theta(i), but how do I "read" the slider values and "input" them to these equations? If I am not making any sense I apologize, I have been chipping away at this for some time and hit a mental blank wall. Any advice appreciated. Edit: So in my near-delirious state I have not posited my question properly. 
So far, these are the equations I have written so far in code: public class PUMA_IK : MonoBehaviour { GameObject J1, J2, J3, J4, J5, J6; public Vector3 J2J3_diff, J3J4_diff; public Slider px_Slider; public Slider py_Slider; public Slider pz_Slider; public Slider rx_Slider; public Slider ry_Slider; public Slider rz_Slider; public float Posx, Posy, Posz, Rotx, Roty, Rotz; float a1, a2, a3, a4, a5, a6; //Joint twist float r1, r2, r3, r4, r5, r6; //Mutual perpendicular length float d1, d2, d3, d4, d5, d6; //Link offset public float t1, t2, t23, t3, t4, t5, t6; //Joint angle of rotation public float J1Rot, J2Rot, J3Rot, J4Rot, J5Rot, J6Rot; float r11, r21, r31, r12, r22, r32, r13, r23, r33, c23, s23, Px, Py, Pz, phi, rho, K; int pose; //1 - left hand, 2 = right hand // Use this for initialization void Start () { pose = 1; J1 = GameObject.FindGameObjectWithTag("J1"); J2 = GameObject.FindGameObjectWithTag("J2"); J3 = GameObject.FindGameObjectWithTag("J3"); J4 = GameObject.FindGameObjectWithTag("J4"); J5 = GameObject.FindGameObjectWithTag("J5"); J6 = GameObject.FindGameObjectWithTag("J6"); J2J3_diff = J3.transform.position - J2.transform.position; J3J4_diff = J4.transform.position - J3.transform.position; //Init modified DH parameters //Joint twist a1 = 0; a2 = -90; a3 = 0; a4 = -90; a5 = 90; a6 = -90; //Link length r1 = 0; r2 = Mathf.Abs(J2J3_diff.x); r3 = Mathf.Abs(J3J4_diff.x); r4 = 0; r5 = 0; r6 = 0; //Link offset d1 = 0; d2 = 0; d3 = Mathf.Abs(J2J3_diff.z); d4 = Vector3.Distance(J4.transform.position, J3.transform.position); d5 = 0; d6 = 0; } void Update () { Posx = px_Slider.value; Posy = py_Slider.value; Posz = pz_Slider.value; Rotx = rx_Slider.value; Roty = ry_Slider.value; Rotz = rz_Slider.value; Px = Posx; Py = Posy; Pz = Posz; c23 = ((cos(t2)*cos(t3)) - (sin(t2)*sin(t3))); s23 = ((cos(t2)*sin(t3)) + (sin(t2)*cos(t3))); rho = Mathf.Sqrt(Mathf.Pow(Px, 2) + Mathf.Pow(Py, 2)); phi = Mathf.Atan2(Py, Px); if (pose == 1) { t1 = Mathf.Atan2(Py, Px) - Mathf.Atan2(d3, 
Mathf.Sqrt(Mathf.Pow(Px, 2) + Mathf.Pow(Py, 2) - Mathf.Pow(d3, 2))); } if (pose == 2) { t1 = Mathf.Atan2(Py, Px) - Mathf.Atan2(d3, -Mathf.Sqrt(Mathf.Pow(Px, 2) + Mathf.Pow(Py, 2) - Mathf.Pow(d3, 2))); } K = (Mathf.Pow(Px, 2)+ Mathf.Pow(Py, 2) + Mathf.Pow(Px, 2) - Mathf.Pow(a2, 2) - Mathf.Pow(a3, 2) - Mathf.Pow(d3, 2) - Mathf.Pow(d4, 2)) / (2 * a2); if (pose == 1) { t3 = Mathf.Atan2(a3, d4) - Mathf.Atan2(K, Mathf.Sqrt(Mathf.Pow(a2, 2) + Mathf.Pow(d4, 2) - Mathf.Pow(K, 2))); } if (pose == 2) { t3 = Mathf.Atan2(a3, d4) - Mathf.Atan2(K, -Mathf.Sqrt(Mathf.Pow(a2, 2) + Mathf.Pow(d4, 2) - Mathf.Pow(K, 2))); } t23 = Mathf.Atan2(((-a3 - (a2 * cos(t3))) * Pz) - ((cos(t1) * Px) + (sin(t1) * Py)) * (d4 - (a2 * sin(t3))), ((((a2 * sin(t3)) - a4) * Pz) - ((a3 + (a2 * cos(t3))) * ((cos(t1) * Px) + (sin(t1) * Py))))); t2 = t23 - t3; if (sin(t5) != 0) //Joint 5 is at zero i.e. pointing straight out { t4 = Mathf.Atan2((-r13 * sin(t1)) + (r23 * cos(t1)), (-r13 * cos(t1) * c23) + (r33 * s23)); } float t4_detection_window = 0.00001f; if ((((-a3 - (a2 * cos(t3))) * Pz) - ((cos(t1) * Px) + (sin(t1) * Py)) < t4_detection_window) && (((-r13 * cos(t1) * c23) + (r33 * s23)) < t4_detection_window)) { t4 = J4Rot; } float t5_s5, t5_c5; //Eqn 4.79 t5_s5 = -((r13 * ((cos(t1) * c23 * cos(t4)) + (sin(t1) * sin(t4)))) + (r23 * ((sin(t1) * c23 * cos(t4)) - (cos(t1) * sin(t4)))) - (r33 * (s23 * cos(t4)))); t5_c5 = (r13 * (-cos(t1) * s23)) + (r23 * (-sin(t1) * s23)) + (r33 * -c23); t5 = Mathf.Atan2(t5_s5, t5_c5); float t5_s6, t5_c6; //Eqn 4.82 t5_s6 = ((-r11 * ((cos(t1) * c23 * sin(t4)) - (sin(t1) * cos(t4)))) - (r21 * ((sin(t1) * c23 * sin(t4)) + (cos(t1) * cos(t4)))) + (r31 * (s23 * sin(t4)))); t5_c6 = (r11 * ((((cos(t1) * c23 * cos(t4)) + (sin(t1) * sin(t4))) * cos(t5)) - (cos(t1) * s23 * sin(t5)))) + (r21 * ((((sin(t1) * c23 * cos(t4)) + (cos(t1) * sin(t4))) * cos(t5)) - (sin(t1) * s23 * sin(t5)))) - (r31 * ((s23 * cos(t4) * cos(t5)) + (c23 * sin(t5)))); t6 = Mathf.Atan2(t5_s6, t5_c6); //Update 
current joint angle for display J1Rot = J1.transform.localRotation.eulerAngles.z; J2Rot = J2.transform.localRotation.eulerAngles.y; J3Rot = J3.transform.localRotation.eulerAngles.y; J4Rot = J4.transform.localRotation.eulerAngles.z; J5Rot = J5.transform.localRotation.eulerAngles.y; J6Rot = J6.transform.localRotation.eulerAngles.z; } void p(object o) { Debug.Log(o); } float sin(float angle) { return Mathf.Rad2Deg * Mathf.Sin(angle); } float cos(float angle) { return Mathf.Rad2Deg * Mathf.Cos(angle); } } The issue is not with the mathematics of what is going on per se, I am just confused at how I interface the three values of the X, Y, and Z rotation for the sliders (which represent the desired orientation) with these equations. For the translation component it is easy, the slider values are simply equal to Px, Py and Pz in the IK equation set. But in his equations he references r11, r23, etc, which are the orientation components. I am unsure how to replace these values (r11, r12, etc) with the slider values. Any ideas? Edit 2: I should also say that these sliders would be for positioning the tool center point. The XYZ sliders will give the translation and the others would give the orientation, relative to the base frame. I hope this all makes sense. The goal is to be able to use these sliders in a similar fashion to how one would jog a real robot in world mode (as opposed to joint mode). I then pass these calculated angle values to the transform.rotation component of each joint in Unity. So what I am really asking is given the three numbers that the rotation sliders produce (XRot, YRot and ZRot), how do I plug those three numbers into the IK equations? Answer: You need to turn each rotation angle (XRot, YRot, ZRot) into their own 3x3 rotation matrix. Once you have three 3x3 rotation matrices, you multiply them all together to get a singular "final" 3x3 rotation matrix that describes how the ending orientation is related to the starting orientation. 
How you define these rotation matrices (global frame or body frame) and how you apply them (x/y/z, z/y/x, y/z/x, etc.) determines the result you'll get. For me, I found that body-centric coordinates made the most intuitive sense. Imagine a ship whose y-axis points forward, z-axis points up, and x-axis points right (starboard). If you define roll as about the y-axis, pitch about the x-axis, and yaw about the z-axis, then (to me at least) those terms make the most sense when you say that first you yaw, then you pitch, then you roll. So, first you yaw, using $R_z(\theta_z)$: $$ R_z(\theta_z) = \left[ \begin{array}{ccc} \cos{\theta_z} & -\sin{\theta_z} & 0 \\ \sin{\theta_z} & \cos{\theta_z} & 0 \\ 0 & 0 & 1 \end{array} \right] $$ Then you pitch, using $R_x(\theta_x)$: $$ R_x(\theta_x) = \left[ \begin{array}{ccc} 1 & 0 & 0 \\ 0 & \cos{\theta_x} & -\sin{\theta_x} \\ 0 & \sin{\theta_x} & \cos{\theta_x} \\ \end{array} \right] $$ Then you roll, using $R_y(\theta_y)$: $$ R_y(\theta_y) = \left[ \begin{array}{ccc} \cos{\theta_y} & 0 & \sin{\theta_y} \\ 0 & 1 & 0 \\ -\sin{\theta_y} & 0 & \cos{\theta_y} \\ \end{array} \right] $$ So, when you multiply them all together (matrix multiplication is applied from right to left), you get $$ R = R_y R_x R_z $$ I used the y-axis as the "long" or longitudinal axis here because, at my company, that's how we set up our jobs: the y-axis is generally the longer axis. I've seen online that typically the x-axis is the long axis, but it's all in how you define your axes. The $R_x$, $R_y$, and $R_z$ matrices I've given are in the correct form to use for those axes, so if you redefine the axes for yourself to be something different, just be sure to use the correct matrix for each axis. So, once you get your overall $R$ matrix, that's where you get your $r_{11}$, $r_{21}$, etc. values.
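To make that concrete, here is a small standalone sketch of the slider-to-matrix step (in Python rather than the question's C#, and with function names of my own choosing; the angle convention is the yaw/pitch/roll order described above):

```python
import math

def Rz(t):  # yaw about the z-axis
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def Rx(t):  # pitch about the x-axis
    c, s = math.cos(t), math.sin(t)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def Ry(t):  # roll about the y-axis
    c, s = math.cos(t), math.sin(t)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def matmul(A, B):  # 3x3 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def sliders_to_rotation(rot_x, rot_y, rot_z):
    # yaw first, then pitch, then roll: multiplication goes right to left
    return matmul(Ry(rot_y), matmul(Rx(rot_x), Rz(rot_z)))

R = sliders_to_rotation(0.1, 0.2, 0.3)  # slider angles in radians
r11, r21, r31 = R[0][0], R[1][0], R[2][0]  # first column, as used in the IK equations
```

With all three sliders at zero, R is the identity, and each entry feeds directly into the closed-form IK equations in place of the symbolic r11, r23, and so on.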
{ "domain": "robotics.stackexchange", "id": 1300, "tags": "inverse-kinematics" }
Is there a solution for this maze problem in polynomial time?
Question: Suppose you have a maze represented by a graph where each vertex represents a room and edges represent paths between rooms and each edge has a weight denoting the time it takes to go that way. Now here comes the tricky part: suppose each room needs a set of keys for you to enter, and inside each room you can find another set of keys. Keys can be repeated and one key may be needed to enter on more than one room. You can only enter the room if you have all keys required. You have an arbitrarily large number of chests with gold inside and each chest needs one key which can be found in the maze. You already have a set of keys that you can use. You can use a key more than once. The question is: how do you collect the keys you need as fast as possible? Answer: If the number of keys is unlimited, the problem is NP-hard, by a straightforward reduction from the traveling salesman problem: simply put a different key in each vertex, and have a single chest that requires all the keys.
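A sketch of that reduction in code (the data layout is mine, just to make the construction explicit):

```python
def tsp_to_keyed_maze(edge_times):
    """edge_times: dict mapping (u, v) room pairs to travel time.

    Builds a maze where room i holds the unique key 'key_i', no room
    needs any key to enter, and one chest demands every key -- so the
    fastest key-collecting walk is exactly a fastest traveling-salesman
    tour of the rooms."""
    n = 1 + max(max(u, v) for u, v in edge_times)
    rooms = {i: {"needs": set(), "holds": {f"key_{i}"}} for i in range(n)}
    chest_needs = {f"key_{i}" for i in range(n)}  # one chest wants all keys
    return rooms, chest_needs, edge_times
```

Any polynomial-time solver for the maze problem would therefore solve TSP in polynomial time, which is why the problem is NP-hard.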
{ "domain": "cs.stackexchange", "id": 12056, "tags": "graphs, time-complexity, shortest-path, traveling-salesman" }
Which elements tend to form the most phases?
Question: Some combinations of two elements have very complicated binary phase diagrams across the weight% horizontal axis. Others are rather simple. Is this a function of only one or both of the elements involved? I would think both. However, surely there are some elements that tend to form more phases than others. Has this tendency to form more phases (or less) been quantified for the various elements? If no, is it quantifiable? Obviously elements form only a given number of known allotropes. However, it is not known if the number of known allotropes is exhaustive. Therefore the number of pseudomorphs in a binary phase diagram seems like a more thorough method of investigating an element's "binding flexibility." Answer: Much empirical data is needed to either produce or validate phase diagrams. Any phase diagram would likely need to be of practical consequence to justify spending the energy and resources to produce it. An example of a binary mixture with practical consequence would be a metal oxide formed during corrosion. The many forms of iron and oxide are well studied since iron is commonly used to construct tools. In 2015 a group claimed to use the computer code CALYPSO (Crystal structure AnaLYsis by Particle Swarm Optimization) to discover a new phase of beryllium. Beryllium is used in aerospace and the construction of nuclear reactors, so the expenditure of effort was justifiable. So to answer the sub-questions: Is this a function of only one or both of the elements involved? Practically .. No. In the case of iron oxide in an aqueous solution, multiple iron oxide phases can form at the same temperature and pressure if the pH of the solution or the redox potential value of the solution were changed. Has this tendency to form more phases (or less) been quantified for the various elements? - Yes, phases of practical consequence have been quantified at great effort; however, discoveries of new phases are likely to continue. 
It is possible to estimate the existence of new phases using computer algorithms; however, these phases must be experimentally validated. A computer algorithm could be constructed around pseudomorphs; however, if a new phase were discovered it would need practical validation by experimentation. "Which elements form the most phases?" is an impossible question to give an absolute answer to, as we are limited to certain pressures and temperatures with which to experiment. The way experimentation is done is likely to bias any verifiable answer toward more practical materials.
{ "domain": "chemistry.stackexchange", "id": 5126, "tags": "phase" }
Can an electron stay still?
Question: Can we somehow force an electron to stay in one position, and if we can, how? And what would the implications be? Will it collapse, or will it cease to exist? Can we do it by draining all the energy from an electron, by taking it to 0 K? Answer: This is an excellent question; more at the end. As per Dr. Balakrishnan below: The uncertainty principle actually states a fundamental property of quantum systems, and is not a statement about the observational success of current technology. It has been confused with the observer effect on some measurements. The observer effect was used by Heisenberg as a physical analogy, not an explanation of the quantum phenomenon. This phenomenon is true regardless of the presence or absence of an observer. He reinterpreted quantum theory, not using Schrödinger's wave equation, but using matrix mathematics, although those who use wave mechanics also have their own mathematical explanation of the inherent inability to specify position and momentum. As the first person stated more succinctly, if you accept that this thing we call an electron inherently has these two features, position and momentum, and you accept either the wave-mechanics or the matrix-mathematics formulation of quantum theory, both of which explain the inability to quantify these features simultaneously, then it follows that an electron by its nature is not immobile. You have discovered something that many high school and university lecturers might mention, if only in passing, to clarify the nature of the principle that you have so ably brought to the attention of this blog. See the link. Indian Institute of Technology Madras, Professor V. Balakrishnan, Lecture 1 – Introduction to Quantum Physics; Heisenberg's uncertainty principle, National Programme of Technology Enhanced Learning.
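The quantitative statement behind this answer is the position-momentum uncertainty relation

$$\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}.$$

Forcing $\Delta x \to 0$ (an electron pinned to a single position) would require $\Delta p \to \infty$, so a perfectly localized, motionless electron is simply not an available state of the theory. Cooling toward $0\ \mathrm{K}$ does not help either: it only brings the electron to its ground state, which still carries nonzero kinetic energy (zero-point energy).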
{ "domain": "physics.stackexchange", "id": 34019, "tags": "quantum-mechanics, electrons, heisenberg-uncertainty-principle" }
How does anger relate to blood pressure?
Question: Anger is an emotion generated by neural processes in the brain and is associated with elevated blood pressure. How can an emotion, which is totally related to the brain, result in blood pressure changes? Answer: Anger is a common emotion in most animals and it is highly related to stress. In times of anger the body usually releases stress hormones, and the body's way of responding to stress is sympathetic nervous system activation, which results in the fight-or-flight response. Anger is an emotional response related to one's psychological interpretation of having been threatened. Often it indicates when one's basic boundaries are violated. Sheila Videbeck describes anger as a normal emotion that involves a strong uncomfortable and emotional response to a perceived provocation. Anger may have physical correlates such as increased heart rate, blood pressure, and levels of adrenaline and noradrenaline. Some view anger as an emotion which triggers part of the fight or flight brain response. http://en.wikipedia.org/wiki/Anger Stress and anger trigger the release of the stress hormone cortisol in the body. Mild releases of cortisol can give the body a sudden burst of energy; they may also increase the heart rate to 180 beats a minute and blood pressure from 120 over 80 to 220 over 130. Anger is processed in the hypothalamus and amygdala. Of the two peptide hormones involved, corticotropin-releasing hormone (CRH) is responsible for the production of cortisol (the primary stress hormone), while arginine-vasopressin (AVP) raises blood pressure by increasing reabsorption of water by the kidneys and by causing contraction of blood vessels. The human stress response involves a complex signaling pathway among neurons and somatic cells. While our understanding of the chemical interactions underlying the stress response has increased vastly in recent years, much remains poorly understood.
The roles of two peptide hormones, corticotropin-releasing hormone (CRH) and arginine-vasopressin (AVP), have been widely studied. Stimulated by an environmental stressor, neurons in the hypothalamus secrete CRH and AVP. CRH, a short polypeptide, is transported to the anterior pituitary, where it stimulates the secretion of corticotropin (4). Consequently, corticotropin stimulates increased production of corticosteroids including cortisol, the primary actor directly impacting the stress response (5). Vasopressin, a small hormone molecule, increases reabsorption of water by the kidneys and induces vasoconstriction, the contraction of blood vessels, thereby raising blood pressure. http://dujs.dartmouth.edu/fall-2010/the-physiology-of-stress-cortisol-and-the-hypothalamic-pituitary-adrenal-axis#.VH7k8cnQoX0
{ "domain": "biology.stackexchange", "id": 3139, "tags": "human-biology, neuroscience, psychology" }
Finding a perfect square smaller than an input number (Swift)
Question: This code runs slowly, even with numbers smaller than 1,000,000. Can someone help me optimize this thing? // Finds the highest perfect square below a certain input (int) let target: Int = 758865 // Swap this out for testing other numbers var smlSqr = [Int]() let findLimit = Int(Double(target).squareRoot().rounded(.up)) // Saves CPU time, because perfect squares less than // number a but higher than a.squareRoot is impossible. var index = 0 var maxSqr = 0 // Final result var candidate = 0 repeat { smlSqr.append(index * index) index += 1 } while(index < findLimit) while candidate < smlSqr.count { if smlSqr[candidate] > maxSqr { maxSqr = smlSqr[candidate] } candidate += 1 } print(String(maxSqr)) // For debugging, should print out 100 print(Int(Double(maxSqr).squareRoot())) // For debugging, making sure the number is actually a perfect square I'm using Swift 5.1 on Xcode 11. Also using a playground (if that helps) Answer: Martin is right that playgrounds are notoriously inefficient. Using a compiled target will solve this problem. But the other issue is that the chosen algorithm is inefficient. Your algorithm is basically trying every integer, one after another, until you reach the appropriate value. So, with your input of 758,865, you’ll try every integer between 0 and 872. That’s 873 iterations. There are far better approaches. Martin (+1) is right that the easiest solution is to use the built-in sqrt() function. That having been said, these sorts of questions are testing your ability to write algorithms and they don’t generally want you just calling some system function. Binary search So, which algorithm should we use? The “go to” solution for improving searches is often the binary search. Let’s say you wanted the square root of n. You know that the answer rests somewhere in the range between 0 and n. Let’s consider 0 to be our “lower bound” and n to be our “upper bound” of possible values. 
Pick a “guess” value in the middle of the range of possible values; Square this guess by multiplying it by itself; See if the result of this is too big or too small; If it’s too big, throw away the top half of the range (by adjusting the upper bound of our modified range to be what was previously the middle of the range, namely our last guess); Likewise, if it’s too small, we throw away the bottom half of the range (by adjusting the lower bound of our range to be equal to the last guess); and Repeat the process, halving the range of possible values each time, looping back to step 2. This binary search technique finds the answer to 758,865 in 20 iterations rather than 873. That might look like: func nearestSquare(below value: Int) -> Int? { guard value > 0 else { return nil } let target = value - 1 var upperBound = value var lowerBound = 0 repeat { let guess = (upperBound + lowerBound) / 2 let guessSquared = guess * guess let difference = guessSquared - target if difference == 0 { return guessSquared } else if difference > 0 { upperBound = guess } else { lowerBound = guess } } while (upperBound - lowerBound) > 1 return lowerBound * lowerBound } There are refinements you could do, but this is the basic idea. Keep cutting the range of possible solutions in half and trying the middle value until you’ve got a winner. The binary search is a mainstay of efficient searches through large ranges of possible values. Newton-Raphson That having been said, while a binary search is a huge improvement, there are even other, more efficient, algorithms, if you’re so inclined. For example, Newton–Raphson can calculate the result for 758,865 in only 12 iterations. The Newton-Raphson is an iterative technique in which you take a guess; identify where that falls on a curve; calculate the tangent to that point on the curve; identify the x-intercept of that tangent; and use that as your next guess, repeating until you find where it crosses the x-axis. 
So, the notion is that the square root of n can be represented as the positive x-intercept of the function: \$y = x^2 - n\$ We know that the tangent of a given point on this curve in iteration i is: \$y = m_ix + b_i\$ Where the slope is the first derivative of the above curve: \$m_i = 2x_i\$ and the y-intercept is: \$b_i = y_i - m_ix_i\$ And the x-intercept (i.e. our guess for the next iteration) of that tangent is: \$x_{i+1} = -\frac{b_i}{m_i}\$ So, you can calculate the nearest perfect square below a given value using Newton-Raphson like so: func nearestSquare(below value: Int) -> Int? { guard value > 0 else { return nil } let target = value - 1 func f(_ x: Int) -> Int { // y = x² - n return x * x - target } var x = target var y = f(x) while y > 0 { let m = 2 * x // slope of tangent let b = Double(y - m * x) // y-intercept of tangent x = Int((-b / Double(m)).rounded(.down)) // x-intercept of tangent, rounded down y = f(x) } return x * x } Or you can simplify that formula a bit, doing a few arithmetic substitutions, to: \$x_{i+1} = x_i-\frac{y_i}{2x_i}\$ Thus: func nearestSquare(below value: Int) -> Int? { guard value > 0 else { return nil } let target = Double(value - 1) func f(_ x: Double) -> Double { // y = x² - n return x * x - target } var x = target var y = f(x) while y > 0 { x = x - (y / x / 2).rounded(.up) y = f(x) } return Int(x * x) }
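As a cross-check of the two approaches, here is a rough Python port of the ideas (integer arithmetic throughout; this mirrors the logic, not the exact Swift code above):

```python
def nearest_square_binary(value):
    # Largest perfect square strictly below value, via binary search on the root.
    if value <= 0:
        return None
    target = value - 1
    lo, hi = 0, value              # invariant: lo*lo <= target < hi*hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if mid * mid > target:
            hi = mid               # throw away the top half of the range
        else:
            lo = mid               # throw away the bottom half
    return lo * lo

def nearest_square_newton(value):
    # Same result via the classic integer Newton (Heron) iteration for
    # floor(sqrt(target)), i.e. the x_{i+1} = x_i - y_i/(2 x_i) update.
    if value <= 0:
        return None
    target = value - 1
    x, y = target, (target + 1) // 2
    while y < x:
        x = y
        y = (x + target // x) // 2
    return x * x

assert nearest_square_binary(758865) == 871 * 871 == 758641
assert nearest_square_newton(758865) == 758641
assert all(nearest_square_binary(n) == nearest_square_newton(n)
           for n in range(1, 2000))
```

Both agree that the nearest perfect square below 758,865 is 871 × 871 = 758,641.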
{ "domain": "codereview.stackexchange", "id": 36190, "tags": "performance, beginner, swift" }
how can i use only costmap_2d map? (navigation stack without base_controller)
Question: hi, currently i can create maps with gmapping + laser_scan_matcher and hector_slam. i want to see obstacles in the map i create. so costmap is the answer i guess. i just have an lms100 laser scanner, not a robot system to control, so in the navigation stack the base_controller requirement is not met. so can i use just the part of the navigation stack that includes costmap_2d? if so, what is the proper way to do it? Originally posted by bsk on ROS Answers with karma: 73 on 2016-06-14 Post score: 0 Answer: You can also run the costmap_2d_node (in the costmap_2d package) Originally posted by David Lu with karma: 10932 on 2016-06-14 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by bsk on 2016-06-14: for doing that properly, should i create and modify a launch file like example.launch (in the costmap_2d launch directory)??? is it enough for me to see obstacles in the map?
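Regarding the comment about example.launch: a minimal launch file along those lines might look like the sketch below. The package name my_pkg and the yaml path are placeholders, and as far as I recall the standalone node reads its parameters from a costmap sub-namespace, so adjust this to your own setup:

```xml
<launch>
  <node pkg="costmap_2d" type="costmap_2d_node" name="costmap_node" output="screen">
    <!-- my_pkg and the yaml filename are placeholders for your own package/config -->
    <rosparam file="$(find my_pkg)/config/costmap_params.yaml" command="load" ns="costmap" />
  </node>
</launch>
```

With your laser topic configured as an observation source in the yaml, the node publishes the costmap layers, which you can then visualise in rviz to see the obstacles.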
{ "domain": "robotics.stackexchange", "id": 24920, "tags": "navigation, mapping, costmap-2d" }
Should I take initial velocity and acceleration due to gravity positive or negative in motion in 1 dimension?
Question: I was studying motion in one dimension. My teacher taught that when a body is thrown upwards from the ground it will have a negative initial velocity and negative acceleration, and when it is falling after reaching maximum height, it will have positive velocity and positive acceleration. But when I studied from a channel on YouTube, he said that acceleration due to gravity will always be negative, so when a body is thrown upwards, it will have a positive initial velocity and when it falls after reaching maximum height, it will have negative velocity. I am confused because of this; can anyone please explain the concept behind it? Answer: As already pointed out in the comments, whether you consider "upward" or "downwards" positive or negative is just convention. As far as I know, in most cases we consider "down" to be negative so acceleration is always negative (since gravity makes things fall/accelerate downwards). You could however also choose "downwards" to be positive; the only important thing is to be consistent throughout the problem. Thus the statement that the sign or direction of acceleration changes is wrong.
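This convention-independence is easy to check numerically. With illustrative numbers (v0 = 20 m/s, g = 9.8 m/s², both arbitrary), the two sign conventions give the same time to the top and the same maximum height:

```python
g = 9.8
v0 = 20.0

# Convention A: "up" is positive -> initial velocity +v0, acceleration -g
t_top_A = v0 / g                              # solve v0 - g*t = 0
h_A = v0 * t_top_A - 0.5 * g * t_top_A**2     # height at the top

# Convention B: "down" is positive -> initial velocity -v0, acceleration +g
t_top_B = v0 / g                              # solve -v0 + g*t = 0
d_B = -v0 * t_top_B + 0.5 * g * t_top_B**2    # displacement (negative = upward)
h_B = -d_B                                    # height above the launch point

assert abs(t_top_A - t_top_B) < 1e-12
assert abs(h_A - h_B) < 1e-9                  # about 20.41 m either way
```

Only the labels flip; the physics (when the body turns around and how high it gets) is identical.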
{ "domain": "physics.stackexchange", "id": 79703, "tags": "kinematics, acceleration" }
What is the significance of an organic compound's M peak?
Question: Background On the topic of identifying organic compounds using peaks generated in Mass Spectrometry, a rule of thumb expressed in the UK educational textbook, CGP, is that "the M peak is the one with the second highest mass/charge ratio". Question What is the significance of this M peak anyway? Answer: "M" stands for molecular ion. It is often represented as $\ce{M, M^{+} or M^{+~.}}$. In the mass spec experiment, an electron is knocked out of some of the sample molecules. The peak at M/q (q=1) represents the molecular weight of the entire molecule, less an electron, before it fragments to smaller ions. From precise measurement of the molecular ion (e.g. 82.34567890) you can determine the exact molecular formula of your sample compound. This can help greatly with sample identification.
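The exact-mass point can be made concrete with a small calculation: several formulas share the nominal mass 28, but their monoisotopic masses differ enough for a high-resolution instrument to tell them apart (standard monoisotopic masses, rounded; the formulas are just examples):

```python
# Monoisotopic masses (u) of the most abundant isotopes, rounded
masses = {'C': 12.0, 'H': 1.00783, 'N': 14.00307, 'O': 15.99491}

def monoisotopic(formula):
    # formula as a dict of element counts, e.g. {'C': 2, 'H': 4}
    return sum(masses[el] * n for el, n in formula.items())

co   = monoisotopic({'C': 1, 'O': 1})   # CO,   about 27.995
n2   = monoisotopic({'N': 2})           # N2,   about 28.006
c2h4 = monoisotopic({'C': 2, 'H': 4})   # C2H4, about 28.031

# All three look like "28" at low resolution...
assert round(co) == round(n2) == round(c2h4) == 28
# ...but a precise M+ measurement separates them, pinning down the formula
assert len({round(m, 2) for m in (co, n2, c2h4)}) == 3
```

This is why a precise molecular-ion mass identifies the molecular formula, not just the nominal molecular weight.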
{ "domain": "chemistry.stackexchange", "id": 2989, "tags": "terminology, mass-spectrometry" }
Proving that $\vec{v}\times\sum_i\dfrac{dm_i}{dt}\vec{r}_i=t(\vec{v}\times\sum_i\vec{F}_i)$ when no external torque
Question: There is this idea of relativity in Classical Mechanics: The laws of mechanics valid in an inertial frame must also be valid in any frame moving uniformly with respect to it. I was just trying to apply this to the case of the law of conservation of momentum and the law of conservation of angular momentum. Let there be an inertial frame S and another frame S' moving with velocity $\mathbf{\vec{v}}$ w.r.t. S with: $$\mathbf{\vec{r}}'_i = \mathbf{\vec{r}}_i - \mathbf{\vec{v}}t$$ $$\mathbf{\vec{v}}'_i = \mathbf{\vec{v}}_i - \mathbf{\vec{v}}$$ For momentum conservation: In frame S', putting $\dfrac{d}{dt} \sum_i \mathbf{\vec{p}}'_i = \mathbf{0}$ and substituting $\dfrac{d}{dt} \sum_i \mathbf{\vec{p}}_i = \mathbf{0}$ of frame S in it: $$\dfrac{d}{dt} \sum_i \mathbf{\vec{p}}'_i = \dfrac{d}{dt} \sum_i \mathbf{\vec{p}}_i - \dfrac{d}{dt} \sum_i m_i \mathbf{\vec{v}} = \mathbf{0} - \mathbf{\vec{v}} \dfrac{d}{dt} \sum_i m_i$$ If this has to be $\mathbf{0}$, then $\dfrac{d}{dt} \sum_i m_i = 0$. Now, on to angular momentum. 
In frame S: $$\dfrac{d}{dt} \sum_i \mathbf{\vec{L}}_i = \dfrac{d}{dt} \sum_i (\mathbf{\vec{r}}_i \times m_i\mathbf{\vec{v}}_i) = \mathbf{0}$$ I am trying to prove the law in frame S' from the law in S: $$\dfrac{d}{dt} \sum_i \mathbf{\vec{L}}'_i = \dfrac{d}{dt} \sum_i \mathbf{\vec{L}}_i - \dfrac{d}{dt} \sum_i (\mathbf{\vec{r}}_i \times m_i\mathbf{\vec{v}}) - \dfrac{d}{dt} \sum_i (\mathbf{\vec{v}}t \times m_i \mathbf{\vec{v}}_i)$$ $$= \mathbf{0} - \dfrac{d}{dt} \sum_i (\mathbf{\vec{r}}_i \times m_i\mathbf{\vec{v}}) - \dfrac{d}{dt} \sum_i (\mathbf{\vec{v}}t \times m_i \mathbf{\vec{v}}_i)$$ $$= - \sum_i m_i (\mathbf{\vec{v}}_i \times \mathbf{\vec{v}}) - \sum_i \dfrac{dm_i}{dt} (\mathbf{\vec{r}}_i \times \mathbf{\vec{v}}) + \sum_i m_i (\mathbf{\vec{v}}_i \times \mathbf{\vec{v}}) - \sum_i m_i (\mathbf{\vec{v}}t \times \mathbf{\vec{a}}_i) - \sum_i \dfrac{dm_i}{dt} (\mathbf{\vec{v}}t \times \mathbf{\vec{v}}_i)$$ $$= - \sum_i \dfrac{dm_i}{dt} (\mathbf{\vec{r}}_i \times \mathbf{\vec{v}}) - \sum_i m_i (\mathbf{\vec{v}}t \times \mathbf{\vec{a}}_i) - \sum_i \dfrac{dm_i}{dt} (\mathbf{\vec{v}}t \times \mathbf{\vec{v}}_i)$$ $$= \mathbf{\vec{v}} \times \sum_i \dfrac{dm_i}{dt} \mathbf{\vec{r}}_i - \mathbf{\vec{v}}t \times \sum_i \mathbf{\vec{F}}_i$$ But this is what I wanted to prove to be $\mathbf{0}$. I still have to prove the following: For a system of particles at $\mathbf{\vec{r}}_i$ with mass $m_i$, which have forces $\mathbf{\vec{F}}_i$ acting on them such that $\sum_i \mathbf{\vec{r}}_i \times \mathbf{\vec{F}}_i = \mathbf{0}$, given $\sum_i \dfrac{dm_i}{dt} = 0$; how do I prove: $$\mathbf{\vec{v}} \times \sum_i \dfrac{dm_i}{dt} \mathbf{\vec{r}}_i = \mathbf{\vec{v}}t \times \sum_i \mathbf{\vec{F}}_i$$ for any arbitrary $\mathbf{\vec{v}}$ and for all time $t$. 
Answer: Using your notation of $$ \boldsymbol{r}_{i}' =\boldsymbol{r}_{i}-\boldsymbol{v}\,t \\ \boldsymbol{v}_{i}' =\boldsymbol{v}_{i}-\boldsymbol{v} $$ and with the assumption that $\dot{\boldsymbol{v}}=0$ (uniform motion of frame S') form the linear and angular momentum expressions on the S frame. $$ \boldsymbol{p} =\sum_{i}m_{i}\boldsymbol{v}_{i} \\ \boldsymbol{L} =\sum_{i}\left(\boldsymbol{r}_{i}\times m_{i}\boldsymbol{v}_{i}\right) $$ Now look at linear and angular momentum in the S' frame and relate them to the ones from S. $$\require{cancel} \begin{aligned} \boldsymbol{p}'&=\sum_{i}m_{i}\boldsymbol{v}_{i}'=\sum_{i}m_{i}\left(\boldsymbol{v}_{i}-\boldsymbol{v}\right)=\boldsymbol{p}-\left(\sum_{i}m_{i}\right)\boldsymbol{v}=\boldsymbol{p}-m\,\boldsymbol{v}\\\boldsymbol{L}'&=\sum_{i}\left(\boldsymbol{r}_{i}'\times m_{i}\boldsymbol{v}_{i}'\right)=\sum_{i}\left(\boldsymbol{r}_{i}-\boldsymbol{v}\,t\right)\times m_{i}\left(\boldsymbol{v}_{i}-\boldsymbol{v}\right)\\&=\sum_{i}\left(\boldsymbol{r}_{i}\times m_{i}\boldsymbol{v}_{i}\right)-\sum_{i}(\boldsymbol{r}_{i}\times m_{i}\boldsymbol{v})-\boldsymbol{v}\,t\times\left(\sum_{i}m_{i}\boldsymbol{v}_{i}\right)+\cancel{\boldsymbol{v}\,t\times\left(\sum_{i}m_{i}\right)\boldsymbol{v}}\\&=\boldsymbol{L}+\boldsymbol{v}\times\sum_{i}\left(m_{i}\boldsymbol{r}_{i}\right)+\sum_{i}\left(m_{i}\boldsymbol{v}_{i}\right)\times\boldsymbol{v}\,t \end{aligned}$$ To show that these quantities are conserved, take the derivative (assuming that $\frac{{\rm d}\boldsymbol{p}}{{\rm d}t}=0$ and that $\frac{{\rm d}\boldsymbol{L}}{{\rm d}t}=0$) $$\begin{aligned} \frac{{\rm d}}{{\rm d}t}\boldsymbol{p}'&=\cancel{\frac{{\rm d}}{{\rm d}t}\boldsymbol{p}}-m\,\cancel{\frac{{\rm d}}{{\rm d}t}\boldsymbol{v}}=0\\\frac{{\rm d}}{{\rm d}t}\boldsymbol{L}'&=\cancel{\frac{{\rm d}}{{\rm d}t}\boldsymbol{L}}+\frac{{\rm d}}{{\rm d}t}\left[\boldsymbol{v}\times\sum_{i}\left(m_{i}\boldsymbol{r}_{i}\right)\right]+\frac{{\rm d}}{{\rm 
d}t}\left[\sum_{i}\left(m_{i}\boldsymbol{v}_{i}\right)\times\boldsymbol{v}\,t\right]\\&=\boldsymbol{v}\times\sum_{i}\left(m_{i}\frac{{\rm d}}{{\rm d}t}\boldsymbol{r}_{i}\right)+\cancel{\frac{{\rm d}}{{\rm d}t}\boldsymbol{p}}\times\boldsymbol{v}\,t+\sum_{i}\left(m_{i}\boldsymbol{v}_{i}\right)\times\boldsymbol{v}\\&=\boldsymbol{v}\times\sum_{i}\left(m_{i}\boldsymbol{v}_{i}\right)+\boldsymbol{p}\times\boldsymbol{v}\\&=\boldsymbol{v}\times\boldsymbol{p}+\boldsymbol{p}\times\boldsymbol{v}=0 \end{aligned}$$
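The conservation shown above can be sanity-checked numerically (my own illustration with constant masses: two particles joined by a zero-rest-length spring, so the internal forces are central and the net torque vanishes):

```python
import numpy as np

m = np.array([1.0, 2.0])                     # masses
r = np.array([[0.0, 0.0], [1.0, 0.0]])       # positions in S
v = np.array([[0.0, 0.5], [0.0, -0.25]])     # velocities in S (total p = 0)
u = np.array([0.7, -0.3])                    # velocity of frame S' w.r.t. S
k, dt, t = 3.0, 1e-3, 0.0                    # spring constant, step, time

def lz(r, v):
    # z-component of the total angular momentum, sum_i m_i (r_i x v_i)
    return np.sum(m * (r[:, 0] * v[:, 1] - r[:, 1] * v[:, 0]))

L0 = lz(r, v)                # in S
Lp0 = lz(r - u * t, v - u)   # in S', using r' = r - u t, v' = v - u

for _ in range(5000):
    d = r[1] - r[0]
    f = k * d                            # spring force on particle 0
    a = np.array([f, -f]) / m[:, None]   # equal and opposite internal forces
    v = v + a * dt                       # semi-implicit Euler
    r = r + v * dt
    t += dt

assert abs(lz(r, v) - L0) < 1e-9                 # conserved in S
assert abs(lz(r - u * t, v - u) - Lp0) < 1e-9    # conserved in S' as well
```

Both frames see a constant angular momentum, as the cancellation $\boldsymbol{v}\times\boldsymbol{p} + \boldsymbol{p}\times\boldsymbol{v} = 0$ above predicts.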
{ "domain": "physics.stackexchange", "id": 47496, "tags": "homework-and-exercises, newtonian-mechanics, rotational-dynamics" }
Parameter server dictionary lookups in rosjava
Question: I've got a file robot.yaml with a large set of parameters nxt_robot: - type: motor name: r_wheel_joint port: PORT_A desired_frequency: 20.0 - type: motor name: l_wheel_joint port: PORT_B desired_frequency: 20.0 - type: ultrasonic frame_id: ultrasonic_link name: ultrasonic_sensor port: PORT_2 spread_angle: 0.2 min_range: 0.01 max_range: 2.5 desired_frequency: 5.0 In Python I read the parameters like so: config = rospy.get_param("~nxt_robot") for c in config: rospy.loginfo("Creating %s with name %s on %s",c['type'],c['name'],c['port']) How do I do this in rosjava? Originally posted by Alexandr Buyval on ROS Answers with karma: 641 on 2011-09-28 Post score: 0 Answer: It is possible to look up a subtree of parameters in rosjava. It's described with an example on the wiki here: http://www.ros.org/wiki/rosjava/Overview/Parameters Originally posted by damonkohler with karma: 3838 on 2011-10-08 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 6815, "tags": "rosparam, rosjava" }
Memory game with Turtle
Question: A while back I had to do a project for school in Python, creating a game or something interesting. I decided to make a memory game. The code isn't really nice and neither are the variable names. It's not supposed to look good, and all that matters is the functionality. I had done a lot of movements with the Turtle so going through all of it again and shortening the moves wasn't really my goal. There's too much code and I had no more nerves. All I'm asking for is your overall opinion of how it was done and how it works. from turtle import * import time from random import * from tkinter import * speed(0) def mj(x, y): return x, y def crtanje2(x, y): global pon global m global klik global tocnih pon += 1 klik += 1 for i in range(20): if x >= unutar[i][0] and x <= unutar[i][0]+100 and y <= unutar[i][1]+100 and y >= unutar[i][1]: pu() goto(unutar[i][0]+50, unutar[i][1]+25) pd() rj = l.index(i) rj2 = l.index(i) if rj % 2 != 0: rj -= 1 crtanje(rj, boja[rj//2]) m[1-pon%2] = rj2 break if pon % 2 == 0 and pon != 0 and m[0] != -101 and m[1] != -100 and abs(m[0]-m[1]) != 1 or abs(m[0]-m[1]) == 1 and min(m[0], m[1]) % 2 != 0: tracer(True) time.sleep(1) tracer(False) pu() goto(unutar[l[m[0]]][0]+1, unutar[l[m[0]]][1]+1) pd() pencolor('white') fillcolor('white') begin_fill() for i in range(4): fd(98) lt(90) end_fill() pu() goto(unutar[l[m[1]]][0]+1, unutar[l[m[1]]][1]+1) pd() pencolor('white') fillcolor('white') begin_fill() for i in range(4): fd(98) lt(90) end_fill() if abs(m[0]-m[1]) == 1 and min(m[0], m[1]) % 2 == 0: tocnih += 1 if pon % 2 == 0: m = [-101, -100] pencolor('black') if tocnih == 10: import sys; sys.exit('\n----------------------\nSolved in {} steps!\n----------------------'.format(pon//2)) return def tablica(): hideturtle() pu() goto(-250, 100) pd() for i in range(20): if i % 5 == 0 and i != 0: bk(500) rt(90) fd(100) lt(90) kvadrat() fd(100) def kvadrat(): for i in range(4): fd(100) lt(90) def pravokutnik(): pu() bk(25) lt(90) fd(12.5) rt(90) pd() 
begin_fill() for i in range(4): if i % 2 == 0: fd(50) else: fd(25) lt(90) def zvijezda(): pu() lt(90) fd(37.5) lt(90) fd(25) rt(180) pd() begin_fill() for i in range(5): fd(50) rt(144) def paralelogram(): pu() lt(90) fd(12.5) rt(90) pd() for i in range(4): if i % 2 == 0: fd(25) lt(140) else: fd(25) lt(40) def trapez(): pu() lt(90) fd(12.5) rt(90) bk(17.5) pd() begin_fill() fd(50) lt(140) fd(25) lt(40) fd(25) lt(40) fd(25) lt(140) fd(25) def polukrug(): pu() fd(25) lt(90) fd(12.5) pd() begin_fill() circle(25, extent = 180) lt(90) fd(50) def crtanje(br, boja): fillcolor(boja) if br == 0: begin_fill() circle(25, steps = 4) elif br == 2: begin_fill() circle(25) elif br == 4: begin_fill() circle(25, steps = 3) elif br == 6: pravokutnik() elif br == 8: pu() bk(20) lt(90) fd(12.5) rt(90) pd() rt(45) begin_fill() circle(25, steps = 4) lt(45) elif br == 10: pu() bk(20) lt(90) fd(20) rt(90) pd() begin_fill() for i in range(5): circle(5) pu() fd(10) pd() elif br == 12: zvijezda() elif br == 14: begin_fill() paralelogram() elif br == 16: trapez() else: polukrug() end_fill() tracer(False) l = sample(range(0, 20), 20) tablica() colormode(255) boja = [] for i in range(0, 20, 2): boja.append((randint(0, 255), randint(0, 255), randint(0, 255))) for j in range(i, i+2): pu() goto(-200+((l[j]%5)*100), 125-((l[j]//5)*100)) pd() crtanje(i, boja[i//2]) tracer(True) time.sleep(10) reset() tracer(False) tablica() unutar = [(-250, 100), (-150, 100), (-50, 100), (50, 100), (150, 100), (-250, 0), (-150, 0), (-50, 0), (50, 0), (150, 0), (-250, -100), (-150, -100), (-50, -100), (50, -100), (150, -100), (-250, -200), (-150, -200), (-50, -200), (50, -200), (150, -200)] tocno = 0 tracer(True) tracer(False) pon = 0 tocnih = 0 m = [-101, -100] klik = 0 poz = onscreenclick(crtanje2) mainloop() Answer: All I'm asking for is your overall opinion of how it was done and how it works. Not good. Your code breaks almost any rule generally agreed about good code. 
But don't be sad, you are lucky because you can now learn some good principles and write better code in the future. from turtle import * import time from random import * from tkinter import * Please do not use import *, instead use import long_long_name as short crtanje2(x, y) and crtanje(br, boja) are 50 lines long. Divide them into smaller functions. Remove the following because it is never used. def mj(x, y): return x, y global pon global m global klik global tocnih Do not use globals, instead pass the variables you need to the function as parameters. Code written in English is most welcome here; yours isn't, so please consider translating your function names to English. pu() lt(90) fd(37.5) lt(90) fd(25) rt(180) pd() Your code uses a great deal of two-letter names; longer names are generally preferred. It is common practice to define a main function that actually does stuff and then do: if __name__ == "__main__": main() so that you can then import your module. unutar = [(-250, 100), (-150, 100), (-50, 100), (50, 100), (150, 100), (-250, 0), (-150, 0), (-50, 0), (50, 0), (150, 0), (-250, -100), (-150, -100), (-50, -100), (50, -100), (150, -100), (-250, -200), (-150, -200), (-50, -200), (50, -200), (150, -200)] This is a long list that could perhaps be generated with a list comprehension. You are giving the programme all the info to draw the images by itself; this is quite time-consuming and very hard to change. Use the right tool for the right job Pygame supports sprite (a fancy term for image) loading so you can have a resources folder with all the images of the objects that you want to draw and you can change them anytime. When you have fixed all the more serious issues fix what PEP8 tells you to fix.
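For instance, the unutar list of cell corners follows a regular 4×5 grid, so it could be generated rather than typed out (a small sketch of that suggestion):

```python
CELL = 100  # square size used throughout the game

# top-left corners of the 4x5 grid, identical to the hand-written list
unutar = [(-250 + col * CELL, 100 - row * CELL)
          for row in range(4) for col in range(5)]

assert unutar[0] == (-250, 100)
assert unutar[-1] == (150, -200)
assert len(unutar) == 20
```

Besides being shorter, this makes the board size a single constant you can change in one place.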
{ "domain": "codereview.stackexchange", "id": 11007, "tags": "python, game, tkinter, turtle-graphics" }
What happens to the mobility of Group 1 elements going down the periodic table?
Question: Whilst reading the book, So you want to go to Oxbridge (Third Edition), I noticed that it states that a past question given during an interview for a place on the undergraduate chemistry course (M.Chem) was: What happens to the mobility of Group 1 elements going down the periodic table? I do not understand the question. And so, I would both like to understand the question and know and understand its answer. Thank you in advance. Answer: When you have charged particles in a liquid acted upon by an electric field, they experience a force, and that force accelerates them to a terminal drift speed. Note: the physical treatment of the problem, and the notion of terminal speed, is quite similar to, say, a ball falling from the top of a skyscraper through air (a fluid medium). Now, let's go back to our charged particles in fluids. Let's say we have applied a voltage $\Delta\phi$ between two large, planar electrodes, immersed in the liquid and separated by a distance $l$. The magnitude of the electric field, and the electric force experienced by an ion of charge $ze$, are given by the following relations: $$ E = \frac{\Delta\phi}{l}$$ $$F = zeE = (ze)\cdot\frac{\Delta\phi}{l}$$ A cation responds by accelerating towards the negative electrode, and an anion responds by accelerating towards the positive electrode. This acceleration, however, is short-lived because the fluid medium offers a retarding force $F_{\text{friction}}$. $$F_{\text{friction}}\propto s\text{, where s := speed} $$ Assuming we have spherical particles of radius $a$, let's say our retarding force is given by Stokes' law: $$F_{\text{friction}} = (6\pi\eta a)s$$ where $\eta$ is the viscosity of the medium. When the electric force is exactly balanced by the viscous drag, the acceleration drops to zero and this final speed of the particle is called the drift speed ($s_d$). 
$$F_{\text{friction}} = F_{\text{electric}}$$ and solving for the drift speed we get: $$s_d = \frac{zeE}{6\pi\eta a}$$ At this point we note that the drift speed is proportional to the applied electric field and introduce the following definition: $$s_d = \mu E$$ where $$\mu = \frac{ze}{6\pi\eta a} $$ and we call $\mu$ the mobility. From the aforementioned equations/definitions one can deduce that $\mu \propto \frac{1}{a}$. Thus, bigger/bulkier ions ought to have lower mobilities, and lower drift speeds in solution. However, $a$ is not the ionic radius, but the Stokes radius (or the hydrodynamic radius), i.e. the "effective radius" of an ion in solution taking into account its solvation shell (i.e. the solvent molecules surrounding the ion). I believe the following illustration is instructive. The size of this solvation shell depends on how "attractive" an ion appears to a solvent molecule, and that in turn depends on the electric field produced by the ions. For spherical ions, we consider the electric field on the surface of a charged sphere: $$ E _\text{surface of a charged sphere} \propto \frac{ze}{r^2} $$ where $r$ is the ionic radius. So smaller ions have a larger solvation shell than larger ions (assuming they carry the same charge). This is the reason why mobilities of alkali metal ions increase from $\ce{Li^+}$ to $\ce{Cs^+}$ (and not the other way around); the lithium ions are effectively dragging a large number of solvent molecules with them and thus move more slowly.
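A back-of-the-envelope check of $\mu = ze/(6\pi\eta a)$. The Stokes radii below are illustrative guesses, not measured values; the point is only the trend that the heavily solvated $\ce{Li^+}$, with its larger hydrodynamic radius, gets the smaller mobility:

```python
import math

e = 1.602e-19        # elementary charge, C
eta = 8.9e-4         # viscosity of water near 25 C, Pa s
z = 1                # charge number of an alkali metal cation

def mobility(a):
    # mu = z e / (6 pi eta a), with a the hydrodynamic (Stokes) radius in metres
    return z * e / (6 * math.pi * eta * a)

a_li = 2.4e-10       # hypothetical Stokes radius of Li+ (large solvation shell)
a_cs = 1.2e-10       # hypothetical Stokes radius of Cs+ (small solvation shell)

assert mobility(a_cs) > mobility(a_li)      # Cs+ is the more mobile ion
assert 1e-8 < mobility(a_cs) < 1e-6         # right order of magnitude, m^2/(V s)
```

The numbers land in the right ballpark for aqueous ion mobilities, but the comparison, not the absolute values, is the point here.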
{ "domain": "chemistry.stackexchange", "id": 6272, "tags": "metal" }
Can you see black holes?
Question: So I was arguing with someone about whether it's possible to see a black hole. Now, I know it's not possible to see one against the blackness of space (unless it is accreting matter), but my question is whether it would be possible if, for example, we were to put one in front of a white screen. Answer: A black hole appears, as the name suggests, black. This is because they absorb all light and don't emit any. So as you already said, when compared to the black background of space, they are invisible unless they are surrounded by an accretion disk. This is also how the famous picture of a black hole by the EHT was taken: (Source) The orange-ish ring that you see is, obviously, not the black hole itself – it is matter surrounding the black hole which emits light. One is however able to "see" the black hole in the middle1 because there, the light from the accretion disk is blocked by the black hole. Similarly, you would be able to "see" a black hole in front of a white wall because it would block the white light reflected from the wall, thus creating a circular black "shadow". 1 The black region doesn't correspond to the black hole itself, but to its shadow – about 2.5 times the size of the event horizon.
{ "domain": "physics.stackexchange", "id": 79979, "tags": "black-holes" }
Newton's Cradle: why does it stay symmetric?
Question: How is it that the same number of balls always leaves at the other end of Newton's cradle? I understand that momentum needs to be conserved, but as momentum is defined as p=m*v, couldn't you have a different number of balls move at a different speed instead of the same number of balls at the same speed? Answer: The compression pulse that propagates through the metal spheres of Newton's cradle is not an ordinary sound wave. It is an approximate soliton (a nonlinear wave form that balances dispersion against nonlinearity). It is this property of soliton pulses that is responsible for the observed behavior. More Details: Newton's cradle is a physical manifestation of the Fermi Pasta Ulam simulation conducted in the 1950s: https://en.wikipedia.org/wiki/Fermi%E2%80%93Pasta%E2%80%93Ulam_problem The expected thermalization (analogous to the expectation that more and more balls in Newton's cradle will start to move) fails to materialize because the dispersion is exactly balanced by the nonlinearity that results from the Hertz deformation law for elastic spheres. Instead, repeated occurrences of the initial conditions are observed. In actuality the soliton solutions are only an approximate representation of the motions of the balls in Newton's cradle. As the energy gets dissipated through friction, the balance between dispersion and nonlinearity breaks down and the expected thermalization finally occurs.
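The Hertz-chain behaviour can be sketched with a minimal simulation (my own illustration, not from the answer; unit masses and diameters, and the stiffness k and time step are arbitrary choices):

```python
import numpy as np

# Five touching balls, Hertzian contact force F = k * overlap**1.5,
# first ball arriving with unit speed.
n = 5
x = np.arange(n, dtype=float)          # ball centres, just touching
v = np.zeros(n)
v[0] = 1.0                             # the struck ball
k, dt = 5000.0, 2e-5

for _ in range(60_000):                # integrate well past the collision
    overlap = np.clip(x[:-1] + 1.0 - x[1:], 0.0, None)
    f = k * overlap ** 1.5             # repulsive Hertz force in each contact
    a = np.zeros(n)
    a[:-1] -= f                        # each contact pushes its pair apart
    a[1:] += f
    v += a * dt                        # semi-implicit Euler step
    x += v * dt

assert np.isclose(v.sum(), 1.0)        # total momentum is conserved
assert v.argmax() == n - 1             # the far ball carries the pulse out...
assert v[-1] > 0.7                     # ...with most, though not all, of the speed
```

The small residual velocities left on the inner balls are the footprint of the soliton description being only approximate, which is exactly the caveat in the last paragraph.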
{ "domain": "physics.stackexchange", "id": 31158, "tags": "newtonian-mechanics, energy-conservation, momentum, conservation-laws, collision" }
Why reduced density operator being same is necessary sufficient for no signalling?
Question: Problem Statement : Two parties $A$ (Alice) and $B$ (Bob), in that order, share an entangled pair $\frac{1}{\sqrt{3}}(|00\rangle+|01\rangle +|11\rangle)$. Bob does a measurement in basis $\{ |0\rangle,|1\rangle\}$ and gets the outcome of his qubit as $|0\rangle$. Bob wants to guess whether Alice had performed a measurement on her qubit in the basis $\{ |0\rangle,|1\rangle\}$ or not. If Bob is able to guess even partially (with a probability $\neq \frac{1}{2}$) then it means signalling takes place (if I am not wrong). My doubt related to the problem above: Let $i$ be the event that Alice performed the measurement and $m$ the event that Bob's measurement outcome for his qubit is $|0\rangle$. So what I want to calculate is $p(i|m)$. $$p(i|m) = \frac{p(i)p(m|i)}{p(m)}$$ Bob calculates this probability, so according to him $p(i)=\frac{1}{2}$ (if this were not the case he might know something already), and $p(m)$ from Bob's perspective is $\frac{1}{3}$. Now I don't understand what $p(m|i)$ should be (why should it be $\frac{1}{3}$? It comes out to $\frac{2}{3}$ for me but I can't find the mistake). For no signalling to hold, $p(i|m)=\frac{1}{2}$. My doubt in general : Basically my doubt arises because the reduced density operator being the same is the condition for no signalling (from Bob's perspective). But the reduced density operator just tells with what probability to expect each outcome; once a particular outcome is observed I might be able to partially deduce something. I mean, why do we base the validation of no signalling on the probabilities of outcomes of a measurement (given by the reduced density operator) rather than on the inability to deduce the desired information even partially from the outcomes of my measurements (from Bob's perspective)? Answer: There is a problem with your statement that the condition for no signalling is $p = \frac{1}{2}$. 
We can expand your problem into three parts: how to understand the condition conceptually, how to use the condition in your problem and how the condition is reflected in the density matrix formalism. For the sake of convenience, I will denote $|\psi\rangle = \frac{1}{\sqrt{3}}(|00\rangle + |01\rangle + |11\rangle)$. To start with, the fact that Bob can guess with $p \neq \frac{1}{2}$ does not imply any signalling between Alice and Bob, for the concept of 'impartiality' depends on the quantum state that you are dealing with. It is actually quite easy to see it with a classical example: suppose Alice has a fair coin (with $p_H = \frac{1}{2}$) and Bob has a biased coin (with $p_H \neq \frac{1}{2}$) and they both flip their coins independently. There is no communication or signaling between Alice and Bob, but it is obvious that Bob should find that his coin has $p_H \neq \frac{1}{2}$, which really is attributed to the fact that the coin is biased. The case is similar in your problem, as for Bob to discriminate between the case where Alice measures in $\{|0\rangle, |1\rangle\}$ basis or not, he needs two possible results that give him different probabilities (or frequencies) corresponding to each case. However, it can be demonstrated that he cannot do that, as for example if he measures his qubit in $\{|0\rangle, |1\rangle\}$ basis, he will get $|0\rangle$ with probability: $P_0 = \langle \psi | 0 \rangle \langle 0 | \psi \rangle = \frac{1}{3}$. This would be the baseline he can use to compare with the situation where Alice steers the system by measuring her qubit in $\{|0\rangle, |1\rangle\}$ basis. With probability $\frac{2}{3}$ Alice would collapse Bob's state to $\frac{1}{\sqrt{2}} (|0\rangle + |1\rangle)$, and with probability $\frac{1}{3}$ Alice would collapse it to $|1\rangle$. 
Therefore the probability of Bob measuring his qubit to be in $|0\rangle$, given that Alice had done a measurement, is $P_0' = \frac{2}{3} \times \frac{1}{2} + \frac{1}{3} \times 0 = \frac{1}{3}$. In other words, Bob cannot tell whether Alice had made a measurement by simply measuring his qubit, for both cases give the same probability $\frac{1}{3}$. So even though the outcome probabilities differ from $\frac{1}{2}$, it is the fact that Bob obtains the same probability in both cases, rather than the probability being $\frac{1}{2}$, that shows Bob cannot decide what Alice had done, i.e. the condition for no signalling. One can obviously extend this question by asking whether there are other ways to signal: for instance, would Alice's measurement in another basis such as $\{|+\rangle, |-\rangle\}$ make an observable difference in Bob's measurement? The answer is summed up pretty nicely by the reduced density matrix formalism: Bob's measurement statistics are completely determined by $\text{Tr}_A (|\psi\rangle \langle\psi|)$. On the other hand, when Alice performs a (non-selective) measurement in some basis on her qubit, the state collapses as $ |\psi\rangle \langle \psi| \rightarrow \Pi_a |\psi\rangle \langle \psi| \Pi_a^\dagger + \Pi_b |\psi\rangle \langle \psi| \Pi_b^\dagger$, where the $\Pi$ are projectors onto the basis states labelled $a$ and $b$. When Bob measures his qubit, using the partial trace rule once again, his statistics are completely determined by $\text{Tr}_A (\Pi_a |\psi\rangle \langle \psi| \Pi_a^\dagger + \Pi_b |\psi\rangle \langle \psi| \Pi_b^\dagger) = \text{Tr}_A((\Pi_a + \Pi_b) |\psi \rangle \langle \psi|) = \text{Tr}_A (|\psi\rangle \langle\psi|)$, where the first equality uses the cyclic property of the partial trace for operators acting only on Alice's qubit (together with $\Pi^2 = \Pi$), and the second uses the fact that the projectors sum to the identity. The reduced density matrix of Bob's qubit is thus identical in both situations, so Bob can do nothing at all to tell whether Alice has made a measurement, or in which basis she made it.
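The reduced-density-matrix argument above can also be verified numerically. The sketch below (again assuming the ordering $|\text{Alice},\text{Bob}\rangle$) traces out Alice's qubit before and after a non-selective measurement on her side, in both the computational and the $\{|+\rangle, |-\rangle\}$ bases:

```python
import numpy as np

psi = np.array([1.0, 1.0, 0.0, 1.0]) / np.sqrt(3)  # (|00>+|01>+|11>)/sqrt(3)
rho = np.outer(psi, psi)                           # |psi><psi|

def partial_trace_A(rho4):
    """Trace out Alice (the first qubit) of a two-qubit density matrix."""
    r = rho4.reshape(2, 2, 2, 2)       # indices: (A, B) row, (A', B') column
    return np.einsum('abad->bd', r)    # sum over A = A'

def measure_A(rho4, basis):
    """Non-selective projective measurement on Alice's qubit in `basis`."""
    out = np.zeros_like(rho4)
    for v in basis:
        Pi = np.kron(np.outer(v, v.conj()), np.eye(2))
        out += Pi @ rho4 @ Pi.conj().T
    return out

rho_B_idle = partial_trace_A(rho)      # Bob's state if Alice does nothing

z_basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
x_basis = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]

for basis in (z_basis, x_basis):
    rho_B = partial_trace_A(measure_A(rho, basis))
    print(np.allclose(rho_B, rho_B_idle))  # True: Bob's reduced state is unchanged
```

In all cases Bob's reduced state is $\frac{1}{3}\begin{pmatrix}1 & 1\\ 1 & 2\end{pmatrix}$, so no measurement Bob can perform distinguishes the scenarios.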
{ "domain": "physics.stackexchange", "id": 22492, "tags": "quantum-mechanics, quantum-information, density-operator" }
What happens if a particle is measured outside a finite well?
Question: A particle in a bound state (an energy eigenstate) of a finite well has a small probability of being found just outside the well. If one happens to locate it there, then, from the experimenter's point of view, how does the particle manage to be in a region where the potential energy is greater than its total energy? Answer: Let's take a concrete example of the measurement process. Suppose I'm detecting the particle by firing high-energy electrons at it. I can fire my electron beam towards the well, and there will be some probability that my beam scatters off the particle. By measuring the trajectory of the scattered electron and tracing it back, I can tell where the particle in the well was when the scattering event happened. And what I'll find is that there is a small probability that the scattering event happened outside the well, i.e. that the particle was outside the well when it collided with the electron. But this doesn't mean that conservation of energy has been violated. During the scattering there will be some transfer of energy from the incoming electron to the particle. If the energy transferred is less than the particle's binding energy (the energy needed to reach the top of the well), the particle stays bound, just in a higher-energy bound state. If the energy transferred is greater than the binding energy, the particle will be knocked completely out of the well and will head off towards infinity. In all cases, when I add up the total energy before and after, I find it is the same, and this is true whether the scattering event happened inside the well or outside it.
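The "small probability of being found just outside the well" in the question can be made quantitative. Below is a sketch for the ground state of a finite square well of width $2a$ and depth $V_0$ in units $\hbar = m = 1$ (the specific well parameters are my own illustrative choice, not from the answer): inside, $\psi \propto \cos(kx)$; outside, $\psi \propto e^{-\kappa(|x|-a)}$; matching at $x = a$ gives the transcendental equation $k \tan(ka) = \kappa$ with $\kappa = \sqrt{2V_0 - k^2}$.

```python
import numpy as np

# Finite square well on [-a, a], depth V0, units hbar = m = 1 (assumed setup).
a, V0 = 1.0, 2.0

def f(k):
    # Even ground state matching condition: k*tan(ka) - kappa = 0
    return k * np.tan(k * a) - np.sqrt(2 * V0 - k**2)

# Bisection for the ground-state k in (0, min(pi/(2a), sqrt(2*V0))):
lo = 1e-9
hi = min(np.pi / (2 * a), np.sqrt(2 * V0)) - 1e-9
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
k = 0.5 * (lo + hi)
kappa = np.sqrt(2 * V0 - k**2)

# Closed-form norms of the two pieces of the wavefunction:
inside = a + np.sin(2 * k * a) / (2 * k)   # integral of cos^2(kx) over [-a, a]
outside = np.cos(k * a)**2 / kappa          # both exponential tails combined
P_out = outside / (inside + outside)
print(f"E = {k**2 / 2:.4f}, P(outside well) = {P_out:.4f}")
```

For these parameters the particle is found outside the well roughly $10\%$ of the time, so the scattering experiment described in the answer would indeed occasionally locate it there.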
{ "domain": "physics.stackexchange", "id": 49328, "tags": "quantum-mechanics, quantum-tunneling" }