Quantitative relation between entropy and time
Question: Increase in entropy gives us the arrow of time. But is there any quantitative relation between these two quantities that relates the entropy change with time interval? Any literature/text/paper on this subject? Answer: If you are willing to accept the idea that, even in an irreversible process, entropy in a non-equilibrium state of a system can be defined locally per unit volume and integrated over the volume of the system to give the overall entropy of the system, then the answer to your question is Yes. See Bird, Stewart, and Lightfoot, Transport Phenomena, Chapter 11, problem 11.D.1 Equation of change of entropy, and see Chapter 24, Section 24.1, THE EQUATION OF CHANGE FOR ENTROPY (which includes diffusion and chemical reaction). These references provide the rate of change of entropy with time during an irreversible process.
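As a pointer to what those references derive, the local entropy balance can be written schematically (notation here is mine, not copied from the text; the full form with the diffusion and chemical-reaction terms is in Section 24.1):

$$\frac{\partial(\rho \hat{S})}{\partial t} + \nabla \cdot (\rho \hat{S}\,\mathbf{v}) = -\nabla \cdot \frac{\mathbf{q}}{T} + g_S, \qquad g_S \ge 0,$$

where $\hat{S}$ is the entropy per unit mass, $\mathbf{v}$ the velocity, $\mathbf{q}$ the heat flux, and $g_S$ the local rate of entropy production. Integrating over the system volume gives the quantitative rate of change $dS/dt$ asked about, with the second law appearing as $g_S \ge 0$ at every point.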
{ "domain": "physics.stackexchange", "id": 81772, "tags": "thermodynamics, statistical-mechanics, entropy, time" }
to include first single word in bigram or not?
Question: In a text such as "The deal with Canada's Barrick Gold finalised in Toronto over the weekend", when I try to break it into a bigram model I get "The deal" "deal with" "with Canada's" "Canada's Barrick" "Barrick Gold" "Gold finalised" "finalised in" "in Toronto" "Toronto over" "over the" "the weekend". My question: should I also include the first and last words paired with a boundary symbol, as in "* The" and "weekend *"? Answer: What you describe is called padding, and it is indeed used frequently in language modeling. For instance, the sequence "A B C" represented with trigrams becomes: "# # A" "# A B" "A B C" "B C #" "C # #". The advantages of padding: it makes every word/symbol appear in the same number of n-grams whether or not it sits in the middle of the sequence, and it marks the beginning and end of a sentence/text, so that the model can represent the probability of starting/ending with a particular word.
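The padding scheme described in the answer is easy to implement; a minimal sketch (the function name and the `#` boundary symbol are my choices):

```python
def padded_ngrams(tokens, n, pad="#"):
    # Pad with n-1 boundary symbols on each side, so every real token
    # appears in exactly n n-grams and sentence start/end are modeled.
    padded = [pad] * (n - 1) + tokens + [pad] * (n - 1)
    return [tuple(padded[i:i + n]) for i in range(len(padded) - n + 1)]

# Trigrams over "A B C" reproduce the example in the answer:
# ('#','#','A') ('#','A','B') ('A','B','C') ('B','C','#') ('C','#','#')
print(padded_ngrams(["A", "B", "C"], 3))
```

With `n=2` the same helper yields exactly the padded bigrams the question asks about, e.g. `("#", "The")` and `("weekend", "#")` for the example sentence.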
{ "domain": "datascience.stackexchange", "id": 6377, "tags": "nlp" }
D&D dice rolling app
Question: This JavaScript object is part of a mobile application that rolls dice for our D&D games. Here are some examples of possible inputs from the user: 2d4 2d4 + 3d6 2d4 + 3d6 + 12 function DiceRoller() { this.rollDices = function(input){ var parsedInput = input.split("+"); var totalScore = 0; for (var diceIndex = 0; diceIndex < parsedInput.length; diceIndex++) { var diceRoll = parsedInput[diceIndex]; //If we can't find 'd', it means we only have to add the input if (diceRoll.indexOf("d") > 0) { var diceParts = diceRoll.split('d'); //We need a random roll for each dice. ex: 4d6, needs 4 rolls. for (var numberOfDiceIndex = 0; numberOfDiceIndex < diceParts[0];numberOfDiceIndex++) { totalScore += getRandomInt(1,diceParts[1]); } } else { totalScore += parseInt(diceRoll); } } return totalScore; }; var getRandomInt = function(min, max){ return Math.floor(Math.random()*(max-min+1)+min); } } I'm concerned about the performance of the rollDices method, considering I use split quite a lot and I have nested loops. Answer: Performance here won't be a problem. The splits you have are relatively trivial, and presumably there is a human component in here too (someone entering the roll-specs), so the parse time will essentially be negligible compared to that. The code itself though has some neat features, and also some concerns. Random First up, the getRandomInt method is good. It follows the classic definition for retrieving a random value from a specified closed range [low,high]. I would prefer for there to be spaces on the operators though. Your code: return Math.floor(Math.random()*(max-min+1)+min); should be: return Math.floor(Math.random() * (max - min + 1) + min); Numbers Your code makes some lucky guesses about numbers, and although it works, it is also easy to break. For example, if I take your random function again: return Math.floor(Math.random()*(max-min+1)+min); and instead make it: return Math.floor(Math.random()*(1+max-min)+min); it should work fine, right? 
Except, it won't, because you give the max value to the function as string: var diceParts = diceRoll.split('d'); ..... totalScore += getRandomInt(1,diceParts[1]); ..... So, since you pass in diceParts[1] as a string, the 1+max-min will become 1 + "4" - 1 which in turn becomes 14 - 1 instead of 5 - 1.... so, your code works, but is it an accident? Well, not really, because you figured this out in the other side of the situation - where there is no d in the spec. For that, you have: totalScore += parseInt(diceRoll); You should be more defensive about number-handling in your code. You will run into mysterious bugs at some time in the future if you don't control them now. parseInt(...) You should always give parseInt a radix to work with. The documentation says: Specify 10 for the decimal numeral system commonly used by humans. Always specify this parameter to eliminate reader confusion and to guarantee predictable behavior. Different implementations produce different results when a radix is not specified. (note: the bold emphasis is from the documentation, it is not my addition) Since parseInt() ignores leading whitespace, and stops parsing on the first non-digit, it is OK to give parseInt(...) padded values - you don't need to trim them first. Map->Reduce I would have liked to have seen a map-reduce process happening here: you have a + separated sequence of inputs each part has a roll/limit each result should be summed. I would be tempted to build the process as: this.rollDices = function(spec) { return spec.split(/\+/) .map(parseRoll) .reduce(function(a,b){return a + b;}); }; The above code describes the process quite well: split on the + convert parts to values sum each part. 
The implementation details for parseRoll is: var getRandomInt = function(min, max){ return Math.floor(Math.random() * (max - min + 1) + min); }; var parseValue = function(val) { return parseInt(val || "1", 10); } var parseRoll = function(roll) { var parts = roll.split(/d/); var sum = 0; var limit = parseValue(parts[1]); for (var i = parseValue(parts[0]) - 1; i >= 0; i--) { var got = getRandomInt(1, limit); sum += got; // console.log("From roll " + roll + " part " + i + " got " + got + " sum " + sum); } return sum; }; I have put the above in to a snippet here: var getRandomInt = function(min, max){ return Math.floor(Math.random() * (max - min + 1) + min); }; var parseValue = function(val) { return parseInt(val || "1"); } var parseRoll = function(roll) { // only here for the snippet debug("Parsing " + roll); var parts = roll.split(/d/); var sum = 0; var limit = parseValue(parts[1]); for (var i = parseValue(parts[0]) - 1; i >= 0; i--) { var got = getRandomInt(1, limit); sum += got; // only here for the snippet debug(" From roll " + roll + " part " + i + " got " + got + " sum " + sum); } return sum; }; var rollDices = function(spec) { // only here for the snippet debugarea.value = ""; return spec.replace(/[^+0-9d]+/g, "") .split(/\+/) .map(parseRoll) .reduce(function(a,b){return a + b;}); }; // ************************* // only here for the snippet // ************************* var results = document.getElementById("result"); var debugarea = document.getElementById("debug"); var debug = function(txt) { debugarea.value = debugarea.value + txt + "\n"; } function updated(input) { results.value = rollDices(input.value); } #spec { background-color: lightgreen; } #result, #debug: { background-color: lightgray; } Dice Spec :<br> <input id="spec" type="text" size="20" oninput="updated(this)" /> <p> Dice Value:<br> <input id="result" type="text" size="5" readonly="true"> <p> Debug:<br> <textarea id="debug" rows="20" cols="40"></textarea>
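The split → map → sum pipeline the answer recommends translates naturally outside JavaScript too; here is a sketch of the same idea in Python (helper names are mine; the `rng` parameter exists only so the example can be made reproducible):

```python
import random

def parse_value(s):
    # Mirror the answer's parseValue: an empty count or size defaults to 1.
    return int(s) if s.strip() else 1

def parse_roll(roll, rng):
    # "2d4" -> sum of two rolls of a 4-sided die; a bare number adds itself.
    if "d" not in roll:
        return int(roll)
    count, size = roll.split("d")
    return sum(rng.randint(1, parse_value(size)) for _ in range(parse_value(count)))

def roll_dice(spec, rng=None):
    rng = rng or random.Random()
    # Split on '+', map each part to a value, reduce by summing.
    return sum(parse_roll(part.strip(), rng) for part in spec.split("+"))

# "2d4 + 3d6 + 12" always lands between 2+3+12 = 17 and 8+18+12 = 38.
```

Explicit `int(...)` conversions at the boundaries avoid the string/number "lucky guesses" the answer warns about.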
{ "domain": "codereview.stackexchange", "id": 13021, "tags": "javascript, performance, game, random, dice" }
assertion in first order logic
Question: Can anybody give me an idea how to write this assertion in first order logic? X has not passed one or more of the prerequisites for A. Here, X is the name of a person and A is a constant representing a course name. Answer: Expressing properties of elements of the universe in first order logic is mostly achieved through defining appropriate relations. So for this you might like to consider relations like: $P(x)$ - $x$ is a person, $C(x)$ - $x$ is a course, $Pre(x,y)$ - $x$ is a prerequisite of $y$ (note that this doesn't actually say that $x$ and $y$ are courses), $Pass(x,y)$ - $x$ has passed course $y$. (The parentheses are just included for clarity - you may have seen different notation where relations are written without.) Then the rest is just building the logical formula, which in this case should be fairly obvious. $notready(x,a) \equiv P(x) \wedge C(a) \wedge \exists y(C(y) \wedge Pre(y,a) \wedge \neg Pass(x,y))$
{ "domain": "cs.stackexchange", "id": 2445, "tags": "logic" }
Static variable in method vs private class property
Question: I have a class with a method that will be called multiple times over an object's lifetime to perform some processing steps. This method operates on a mixture of immutable (does not change over the lifetime of the process) data and data that is passed as an argument. The immutable data is comparatively expensive to calculate, so I would like to cache it. class Sample { public function process($data) { $immutable = $this->getImmutable(); $this->processImplementation($immutable, $data); // not interesting } } How should getImmutable be implemented? Option #1 would be public function getImmutable() { static $cache; if ($cache === null) { $cache = "not interesting"; } return $cache; } Option #2 would be private $_cache; public function getImmutable() { if ($this->_cache === null) { $this->_cache = "not interesting"; } return $this->_cache; } Option #2 is of course better OOP, but what I like about option #1 is that the "implementation detail" $cache is physically close to the only place where it is used. This means that it doesn't increase the mental load of someone reading the code unless that someone decides to read the body of getImmutable, in which case the implementation detail has become important. In my mind, this purely practical argument is stronger than theoretical purism. I am also aware that the static version shares the cache across all instances of the class, which option #2 does not (and that's a good thing). However, this is not an issue because: no more than one instance of the class will be created per process; and PHP is not multithreaded, so the shared cache will not be a problem even when unit testing. Is there some other argument for option #2 that could tip the scales? Answer: I'd go with Option #3: extract $cache and getImmutable to a new class whose sole responsibility is providing the expensive data. Whether it caches it and how should be up to the new class. 
This new class will look just like Option #2 but be separated from the existing class that needs the data. This provides encapsulation and increases testability. Update The ideal design that allows maximal flexibility must always be weighed against the current needs. If the former may give you 4 degrees of awesomeness but you only need 1 in the current application, you are probably better off going with a design that satisfies the latter until you need those extra features. Encapsulation doesn't need to be applied at the class level in every instance. You can apply it at the method level until you need something more complex. What would be more complex? One controller needs the data as a PHP model while another needs the JSON, but both may be required multiple times. In this case you'll get gains by caching each form separately. Now you'll want to separate the code to produce and transform the various data representations where the JSON producer can reuse the data-model producer. By making the JSON controller depend on the JSON producer alone, that producer is free to reuse the data-model producer, hit the database directly, or use test data provided in a fixture for unit tests. Now you're ready to separate all these concerns. Until then, however, a static variable hidden inside a controller method is sufficient for your needs while providing a basic level of encapsulation. In other words, go with Option #1 and call it a day.
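Option #3 can be sketched compactly, transposed to Python for brevity (class and method names are illustrative, not from the question; the same shape carries over to PHP directly):

```python
class ImmutableProvider:
    """Sole responsibility: compute the expensive data once, then serve it."""

    def __init__(self):
        self._cache = None
        self.computations = 0  # only here to demonstrate the caching

    def get(self):
        if self._cache is None:
            self._cache = self._compute()  # expensive step runs at most once
        return self._cache

    def _compute(self):
        self.computations += 1
        return "not interesting"  # placeholder, as in the question


class Sample:
    def __init__(self, provider):
        # Injecting the provider keeps Sample testable: a test can pass a
        # stub that returns canned data instead of doing the expensive work.
        self._provider = provider

    def process(self, data):
        immutable = self._provider.get()
        return (immutable, data)  # stand-in for processImplementation
```

Whether `ImmutableProvider` caches, precomputes, or hits a database is now invisible to `Sample`, which is the encapsulation gain the answer describes.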
{ "domain": "codereview.stackexchange", "id": 3502, "tags": "php, object-oriented, static" }
Encryption/Decryption algorithm #2
Question: This is a follow-up question to this one. I have tried to implement all the recommended things in the answers (except commenting, and not being OS specific). Again, if you see anything that needs improvement, or would make it faster, please tell me! Here's the code. Only 222 lines this time :). #!/usr/bin/env python # not sure if I did this right import base64 import random import os def add_padding(plain_text, block_size=128): plain_text = plain_text.encode() padding = -(len(plain_text) + 1) % block_size # Amount of padding needed to fill block padded_text = plain_text + b'=' * padding + bytes([padding + 1]) return decimal_to_binary(padded_text) def xor_string(key, secret): xored_secret = '' for i in range(len(secret) // len(key)): if i > 0: key = get_round_key(key) xored_secret += decimal_to_binary([bin_to_decimal(key, len(key))[0] ^ bin_to_decimal(secret[i * len(key):len(key) + (i * len(key))], len(key))[0]], len(key)) return xored_secret def generate_key(key): if len(key) >= 128: key = decimal_to_binary(key.encode()) return key[:1024] elif len(key) < 128: key = key.encode() for i in range(128 - len(key)): b = decimal_to_binary([key[i]]) b = xor_string(decimal_to_binary([sum(key) // len(key)]), b[::-1]) key += bytes([int(b, 2)]) new_key = ''.join(str(i) for i in key) half1 = new_key[:len(new_key) // 2] half2 = new_key[len(new_key) // 2:] new_key = decimal_to_binary([int(half1 + half2)]) new_key = new_key[:1024] return new_key def bin_to_base64(binary): return base64.b64encode(bytes([int(binary[i * 8:8 + i * 8], 2) for i in range(len(binary) // 8)])).decode() def bin_to_decimal(binary, length=8): b = [binary[i * length:length + (i * length)] for i in range(len(binary) // length)] decimal = [int(i, 2) for i in b] return decimal def decimal_to_binary(decimal, length=8): return ''.join(str(bin(num)[2:].zfill(length)) for num in decimal) def base64_to_bin(base): decoded = '' for letter in base64.b64decode(base): decoded += bin(letter)[2:].zfill(8) return 
decoded def matrix_to_str(m): return ''.join(str(m[i][j]) for i in range(32) for j in range(32)) def obfuscate(binary, key, encrypting, loops): shuffled_binary = '' round_key = key for i in range(len(binary) // 1024): if i > 0: round_key = get_round_key(round_key) if encrypting: m = [list(binary[j * 32 + i * 1024:j * 32 + i * 1024 + 32]) for j in range(32)] m = shuffle(m, bin_to_decimal(round_key, 1024)[0], loops) shuffled_binary += xor_string(round_key, matrix_to_str(m)) else: xor = xor_string(round_key, binary[i * 1024:i * 1024 + 1024]) m = [list(xor[j * 32:j * 32 + 32]) for j in range(32)] m = reverse_shuffle(m, bin_to_decimal(round_key, 1024)[0], loops) shuffled_binary += matrix_to_str(m) return xor_string(key, shuffled_binary) def shuffle(m, key, loops): for j in range(loops): # move columns to the right m = [row[-1:] + row[:-1] for row in m] # move rows down m = m[-1:] + m[:-1] shuffled_m = [[0] * 32 for _ in range(32)] for idx, sidx in enumerate(test(key)): shuffled_m[idx // 32][idx % 32] = m[sidx // 32][sidx % 32] m = shuffled_m # cut in half and flip halves m = m[len(m) // 2:] + m[:len(m) // 2] # test m = list(map(list, zip(*m))) return m def reverse_shuffle(m, key, loops): for j in range(loops): # test m = list(map(list, zip(*m))) # cut in half and flip halves m = m[len(m) // 2:] + m[:len(m) // 2] shuffled_m = [[0] * 32 for _ in range(32)] for idx, sidx in enumerate(test(key)): shuffled_m[sidx // 32][sidx % 32] = m[idx // 32][idx % 32] m = shuffled_m # move rows up m = m[1:] + m[:1] # move columns to the left m = [row[1:] + row[:1] for row in m] return m def test(seed): random.seed(seed) lst = list(range(1024)) random.shuffle(lst) return lst def get_round_key(key): key = [[key[(j * 32 + n)] for n in range(32)] for j in range(32)] # get the last column col = [i[-1] for i in key] # interweave col = [x for i in range(len(col) // 2) for x in (col[-i - 1], col[i])] new_key = '' for i in range(32): cols = '' for row in key: cols += row[i] cols = cols[16:] + 
cols[:16] new_key += xor_string(''.join(str(ele) for ele in col), cols) return new_key def bin_to_bytes(binary): return int(binary, 2).to_bytes(len(binary) // 8, byteorder='big') def encrypt(password, secret, loops): key = generate_key(password) secret = add_padding(secret) secret = xor_string(key, secret) secret = obfuscate(secret, key, True, loops) secret = bin_to_base64(secret) return secret def decrypt(password, base, loops): key = generate_key(password) binary = base64_to_bin(base) binary = xor_string(key, binary) binary = obfuscate(binary, key, False, loops) binary = bin_to_bytes(binary) pad = binary[-1] binary = binary[:-pad] return binary.decode() if __name__ == '__main__': while True: os.system('cls') com = input('1)Encrypt Text \n2)Decrypt Text\n3)Exit\n') if com == '1': os.system('cls') secret = input('Enter the text you wish to encrypt: ') os.system('cls') key = input('Enter your key: ') os.system('cls') print(f'Encrypted text: {encrypt(key, secret, 1)}') input() elif com == '2': os.system('cls') b64 = input('Enter the text you wish to decrypt: ') os.system('cls') key = input('Enter your key: ') os.system('cls') print(f'Decrypted text: {decrypt(key, b64, 1)}') input() elif com == '3': break Answer: Consider adding PEP484 type hints. I needed to go through this to make some sense of the values you're passing around. not being OS specific - indeed. Your call to cls has dubious security value, and if you deem it to have such value, it's better to call into a cross-platform library that will accomplish the same thing. Currently you're pegged to Windows and that is bad. You're so close to having a cross-compatible application; it would be a shame to let this remain your only obstacle. For now in the example I have simply deleted your cls calls. If they were only for aesthetic purposes, you should keep it that way. Of much (much) higher security value is getpass instead of input, to prevent an over-the-shoulder of passwords. 
obfuscate is not a particularly good name for a symmetric crypto function; it only "obfuscates" if encrypting=True. Names are hard; maybe call this process_crypto or somesuch. A string of 0 and 1 characters - or possibly worse, a list of strings of length 1, each a 0 or 1 character - is a very inefficient and impractical internal representation of binary data. It's more work than I'm willing to do, but for an application that at all exceeds superficial, beginner-level instructional code, it's of critical importance that you refactor to use bytes objects (in the case of immutable data) or bytearray() (in the case of mutable data). Related to the above - probably not a great idea to carry around an arbitrary-length integer of over 300 digits (!!). Again bytes is a better representation. Avoid incremental concatenation of strings in a loop, O(n^2) in time. You need to relax with the one-liners. This: xored_secret += decimal_to_binary([bin_to_decimal(key, len(key))[0] ^ bin_to_decimal(secret[i * len(key):len(key) + (i * len(key))], len(key))[0]], len(key)) is illegible and unmaintainable, and I see at least three different expressions in there that should receive their own separate, temporary variable on a separate line. Reassigning key to a value of a different type - str to bytes - is not advisable; make a different variable name. Having a __main__ guard is insufficient to create scope. To put your main variables into function scope you need an actual function. Calling into random is deleterious to the security of your crypto, and is one of the mistakes that probably every newcomer to crypto commits. Call into secrets instead. 
Covering some (certainly not all) of the above: #!/usr/bin/env python # not sure if I did this right import base64 import random from getpass import getpass from typing import List def add_padding(plain_text: str, block_size: int = 128) -> str: plain_text = plain_text.encode() padding = -(len(plain_text) + 1) % block_size # Amount of padding needed to fill block padded_text = plain_text + b'=' * padding + bytes([padding + 1]) return decimal_to_binary(padded_text) def xor_string(key: str, secret: str) -> str: xored_secret = '' for i in range(len(secret) // len(key)): if i > 0: key = get_round_key(key) some_decimals = bin_to_decimal(secret[i * len(key):len(key) + (i * len(key))], len(key)) some_values = [ bin_to_decimal(key, len(key))[0] ^ some_decimals[0] ] xored_secret += decimal_to_binary(some_values, len(key)) return xored_secret def generate_key(key: str) -> str: if len(key) >= 128: key = decimal_to_binary(key.encode()) return key[:1024] elif len(key) < 128: key = key.encode() for i in range(128 - len(key)): b = decimal_to_binary([key[i]]) b = xor_string(decimal_to_binary([sum(key) // len(key)]), b[::-1]) key += bytes([int(b, 2)]) new_key = ''.join(str(i) for i in key) half1 = new_key[:len(new_key) // 2] half2 = new_key[len(new_key) // 2:] new_key = decimal_to_binary([int(half1 + half2)]) new_key = new_key[:1024] return new_key def bin_to_base64(binary: str) -> str: ints = [ int(binary[i * 8:8 + i * 8], 2) for i in range(len(binary) // 8) ] return base64.b64encode(bytes(ints)).decode() def bin_to_decimal(binary: str, length: int = 8) -> List[int]: b = [binary[i * length:length + (i * length)] for i in range(len(binary) // length)] decimal = [int(i, 2) for i in b] return decimal def decimal_to_binary(decimal: List[int], length: int=8) -> str: return ''.join( str(bin(num)[2:].zfill(length)) for num in decimal ) def base64_to_bin(base: str) -> str: decoded = '' for letter in base64.b64decode(base): decoded += bin(letter)[2:].zfill(8) return decoded def 
matrix_to_str(m: List[List[str]]) -> str: return ''.join( str(m[i][j]) for i in range(32) for j in range(32) ) def obfuscate(binary: str, key: str, encrypting: bool, loops: int) -> str: shuffled_binary = '' round_key = key for i in range(len(binary) // 1024): if i > 0: round_key = get_round_key(round_key) if encrypting: m = [list(binary[j * 32 + i * 1024:j * 32 + i * 1024 + 32]) for j in range(32)] m = shuffle(m, bin_to_decimal(round_key, 1024)[0], loops) shuffled_binary += xor_string(round_key, matrix_to_str(m)) else: xor = xor_string(round_key, binary[i * 1024:i * 1024 + 1024]) m = [list(xor[j * 32:j * 32 + 32]) for j in range(32)] m = reverse_shuffle(m, bin_to_decimal(round_key, 1024)[0], loops) shuffled_binary += matrix_to_str(m) return xor_string(key, shuffled_binary) def shuffle(m: List[List[str]], key: int, loops: int) -> List[List[str]]: for j in range(loops): # move columns to the right m = [row[-1:] + row[:-1] for row in m] # move rows down m = m[-1:] + m[:-1] shuffled_m = [[0] * 32 for _ in range(32)] for idx, sidx in enumerate(test(key)): shuffled_m[idx // 32][idx % 32] = m[sidx // 32][sidx % 32] m = shuffled_m # cut in half and flip halves m = m[len(m) // 2:] + m[:len(m) // 2] # test m = list(map(list, zip(*m))) return m def reverse_shuffle(m: List[List[str]], key: int, loops: int) -> List[List[str]]: for j in range(loops): # test m = list(map(list, zip(*m))) # cut in half and flip halves m = m[len(m) // 2:] + m[:len(m) // 2] shuffled_m = [[0] * 32 for _ in range(32)] for idx, sidx in enumerate(test(key)): shuffled_m[sidx // 32][sidx % 32] = m[idx // 32][idx % 32] m = shuffled_m # move rows up m = m[1:] + m[:1] # move columns to the left m = [row[1:] + row[:1] for row in m] return m def test(seed: int) -> List[int]: random.seed(seed) lst = list(range(1024)) random.shuffle(lst) return lst def get_round_key(key): key = [[key[(j * 32 + n)] for n in range(32)] for j in range(32)] # get the last column col = [i[-1] for i in key] # interweave col = [x for i 
in range(len(col) // 2) for x in (col[-i - 1], col[i])] new_key = '' for i in range(32): cols = '' for row in key: cols += row[i] cols = cols[16:] + cols[:16] new_key += xor_string(''.join(str(ele) for ele in col), cols) return new_key def bin_to_bytes(binary: str) -> bytes: return int(binary, 2).to_bytes(len(binary) // 8, byteorder='big') def encrypt(password: str, secret: str, loops: int = 1) -> str: key = generate_key(password) secret = add_padding(secret) secret = xor_string(key, secret) secret = obfuscate(secret, key, True, loops) secret = bin_to_base64(secret) return secret def decrypt(password: str, base: str, loops: int = 1) -> str: key = generate_key(password) binary = base64_to_bin(base) binary = xor_string(key, binary) binary = obfuscate(binary, key, False, loops) binary = bin_to_bytes(binary) pad = binary[-1] binary = binary[:-pad] return binary.decode() def main(): while True: com = input( '1) Encrypt Text\n' '2) Decrypt Text\n' '3) Exit\n' ) input_text = input('Enter the text: ') key = getpass('Enter your key: ') if com == '1': print(f'Encrypted text: {encrypt(key, input_text)}') elif com == '2': print(f'Decrypted text: {decrypt(key, input_text)}') elif com == '3': break print() if __name__ == '__main__': main() Speaking more generally, for educational and recreational purposes writing this kind of code is fun. However, cryptographic implementations are viciously difficult to get correct, and sometimes even more difficult to prove that they're correct. In the real, production world, please do not use this; just call into a library.
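The last point (random vs secrets) is easy to demonstrate in isolation; a small sketch (function names are mine):

```python
import secrets

def random_key_bytes(n=16):
    # random.seed()/random.random() use the predictable Mersenne Twister and
    # must never produce key material; secrets draws from the OS CSPRNG.
    return secrets.token_bytes(n)

def secure_shuffle(items):
    # Fisher-Yates with a cryptographically strong index choice, in contrast
    # to the seeded random.shuffle(...) used in the question's test() helper.
    items = list(items)
    for i in range(len(items) - 1, 0, -1):
        j = secrets.randbelow(i + 1)
        items[i], items[j] = items[j], items[i]
    return items
```

Note that the question's test() relies on seeding to get a reproducible permutation for decryption, so secrets applies here to key/nonce generation rather than as a drop-in replacement for that helper.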
{ "domain": "codereview.stackexchange", "id": 41227, "tags": "python, reinventing-the-wheel, cryptography" }
Medical robot models
Question: Hi, I am a lecturer at a university that just created a degree in "medical IT engineering". The students have basic robotics courses, such as geometrical modeling and sensor-based control. I'd love to teach this topic using a medical robot simulator in Rviz, but the closest I found is Care-O-Bot, which is not really a medical robot. I have already written the teaching code to work with any URDF, so I just need a medical-related model. Have you by chance heard of any existing resources for this topic? Cheers. Originally posted by Olivier on ROS Answers with karma: 1 on 2012-11-23 Post score: 0 Answer: You might be interested in this link. Originally posted by sedwards with karma: 1601 on 2012-11-25 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 11860, "tags": "rviz, urdf" }
Chess game in Windows Forms Part #2
Question: (Part 1 here) I've recently created my own 2-player chess game, no AI... at least for now. I do plan to do that in the future, so one of my concerns is if the code is flexible enough to just use the same classes in all the different modes so I can avoid rewriting the same code over and over again. Resuming with the other Figure implementations. RookPiece Class : public sealed class RookPiece : Figure { public RookPiece(FigureDefinition definition) : base(definition) { Moves = RemoveFailedTurns(this, GetValidTurns()); Moves = Moves.Distinct().ToList(); } protected override List<Tuple<int, int>> GetValidTurns() { int n = 1; Tuple<int, int> rightMove = new Tuple<int, int>(CurrentPosition.Item1, CurrentPosition.Item2 + n); Tuple<int, int> topMove = new Tuple<int, int>(CurrentPosition.Item1 + n, CurrentPosition.Item2); Tuple<int, int> leftMove = new Tuple<int, int>(CurrentPosition.Item1, CurrentPosition.Item2 - n); Tuple<int, int> downMove = new Tuple<int, int>(CurrentPosition.Item1 - n, CurrentPosition.Item2); List<Tuple<int, int>> startingMoves = new List<Tuple<int, int>> { rightMove, topMove, leftMove, downMove }; List<Tuple<int, int>> validMoves = startingMoves.Where( startingMove => !IsOutOfBounds(startingMove) && !WillCollideWithAlly(startingMove, PieceColor)) .ToList(); while (!IsOutOfBounds(rightMove) && !WillCollideWithAlly(rightMove, PieceColor) && !WillCollideWithEnemy(rightMove, PieceColor).Item1) { validMoves.Add(rightMove); n++; rightMove = new Tuple<int, int>(CurrentPosition.Item1, CurrentPosition.Item2 + n); if (WillCollideWithEnemy(rightMove, PieceColor).Item1) { validMoves.Add(rightMove); break; } } n = 1; while (!IsOutOfBounds(topMove) && !WillCollideWithAlly(topMove, PieceColor) && !WillCollideWithEnemy(topMove, PieceColor).Item1) { validMoves.Add(topMove); n++; topMove = new Tuple<int, int>(CurrentPosition.Item1 + n, CurrentPosition.Item2); if (WillCollideWithEnemy(topMove, PieceColor).Item1) { validMoves.Add(topMove); break; } } n = 1; 
while (!IsOutOfBounds(leftMove) && !WillCollideWithAlly(leftMove, PieceColor) && !WillCollideWithEnemy(leftMove, PieceColor).Item1) { validMoves.Add(leftMove); n++; leftMove = new Tuple<int, int>(CurrentPosition.Item1, CurrentPosition.Item2 - n); if (WillCollideWithEnemy(leftMove, PieceColor).Item1) { validMoves.Add(leftMove); break; } } n = 1; while (!IsOutOfBounds(downMove) && !WillCollideWithAlly(downMove, PieceColor) && !WillCollideWithEnemy(downMove, PieceColor).Item1) { validMoves.Add(downMove); n++; downMove = new Tuple<int, int>(CurrentPosition.Item1 - n, CurrentPosition.Item2); if (WillCollideWithEnemy(downMove, PieceColor).Item1) { validMoves.Add(downMove); break; } } return validMoves; } } RookDefinitions Class : public class RookDefinitions { private static readonly GeneratePieces generatedPieces = new GeneratePieces(Figure.FigureType.Rook, 0, 7, 7, ImagePaths.WhiteRookImagePath, ImagePaths.BlackRookImagePath); public IEnumerable<FigureDefinition> WhiteRooks = generatedPieces.GenerateWhitePieces(); public IEnumerable<FigureDefinition> BlackRooks = generatedPieces.GenerateBlackPieces(); } QueenPiece Class : public sealed class QueenPiece : Figure { public QueenPiece(FigureDefinition definition) : base(definition) { Moves = RemoveFailedTurns(this, GetValidTurns()); Moves = Moves.Distinct().ToList(); } protected override List<Tuple<int, int>> GetValidTurns() { List<Tuple<int, int>> rookMoves = GetRookMoves(); List<Tuple<int, int>> bishopMoves = GetBishopMoves(); List<Tuple<int, int>> validMoves = rookMoves.ToList(); validMoves.AddRange(bishopMoves); return validMoves; } private List<Tuple<int, int>> GetRookMoves() { int n = 1; Tuple<int, int> rightMove = new Tuple<int, int>(CurrentPosition.Item1, CurrentPosition.Item2 + n); Tuple<int, int> topMove = new Tuple<int, int>(CurrentPosition.Item1 + n, CurrentPosition.Item2); Tuple<int, int> leftMove = new Tuple<int, int>(CurrentPosition.Item1, CurrentPosition.Item2 - n); Tuple<int, int> downMove = new Tuple<int, 
int>(CurrentPosition.Item1 - n, CurrentPosition.Item2); List<Tuple<int, int>> startingMoves = new List<Tuple<int, int>> { rightMove, topMove, leftMove, downMove }; List<Tuple<int, int>> validMoves = startingMoves.Where( startingMove => !IsOutOfBounds(startingMove) && !WillCollideWithAlly(startingMove, PieceColor)) .ToList(); while (!IsOutOfBounds(rightMove) && !WillCollideWithAlly(rightMove, PieceColor) && !WillCollideWithEnemy(rightMove, PieceColor).Item1) { validMoves.Add(rightMove); n++; rightMove = new Tuple<int, int>(CurrentPosition.Item1, CurrentPosition.Item2 + n); if (WillCollideWithEnemy(rightMove, PieceColor).Item1) { validMoves.Add(rightMove); break; } } n = 1; while (!IsOutOfBounds(topMove) && !WillCollideWithAlly(topMove, PieceColor) && !WillCollideWithEnemy(topMove, PieceColor).Item1) { validMoves.Add(topMove); n++; topMove = new Tuple<int, int>(CurrentPosition.Item1 + n, CurrentPosition.Item2); if (WillCollideWithEnemy(topMove, PieceColor).Item1) { validMoves.Add(topMove); break; } } n = 1; while (!IsOutOfBounds(leftMove) && !WillCollideWithAlly(leftMove, PieceColor) && !WillCollideWithEnemy(leftMove, PieceColor).Item1) { validMoves.Add(leftMove); n++; leftMove = new Tuple<int, int>(CurrentPosition.Item1, CurrentPosition.Item2 - n); if (WillCollideWithEnemy(leftMove, PieceColor).Item1) { validMoves.Add(leftMove); break; } } n = 1; while (!IsOutOfBounds(downMove) && !WillCollideWithAlly(downMove, PieceColor) && !WillCollideWithEnemy(downMove, PieceColor).Item1) { validMoves.Add(downMove); n++; downMove = new Tuple<int, int>(CurrentPosition.Item1 - n, CurrentPosition.Item2); if (WillCollideWithEnemy(downMove, PieceColor).Item1) { validMoves.Add(downMove); break; } } return validMoves; } private List<Tuple<int, int>> GetBishopMoves() { int n = 1; Tuple<int, int> rightUpDiagonal = new Tuple<int, int>(CurrentPosition.Item1 + n, CurrentPosition.Item2 + n); Tuple<int, int> leftUpDiagonal = new Tuple<int, int>(CurrentPosition.Item1 + n, CurrentPosition.Item2 
- n); Tuple<int, int> rightDownDiagonal = new Tuple<int, int>(CurrentPosition.Item1 - n, CurrentPosition.Item2 + n); Tuple<int, int> leftDownDiagonal = new Tuple<int, int>(CurrentPosition.Item1 - n, CurrentPosition.Item2 - n); List<Tuple<int, int>> startingMoves = new List<Tuple<int, int>> { rightUpDiagonal, leftUpDiagonal, rightDownDiagonal, leftDownDiagonal }; List<Tuple<int, int>> validMoves = startingMoves.Where( startingMove => !IsOutOfBounds(startingMove) && !WillCollideWithAlly(startingMove, PieceColor)) .ToList(); while (!IsOutOfBounds(rightUpDiagonal) && !WillCollideWithAlly(rightUpDiagonal, PieceColor) && !WillCollideWithEnemy(rightUpDiagonal, PieceColor).Item1) { validMoves.Add(rightUpDiagonal); n++; rightUpDiagonal = new Tuple<int, int>(CurrentPosition.Item1 + n, CurrentPosition.Item2 + n); if (WillCollideWithEnemy(rightUpDiagonal, PieceColor).Item1) { validMoves.Add(rightUpDiagonal); break; } } n = 1; while (!IsOutOfBounds(leftUpDiagonal) && !WillCollideWithAlly(leftUpDiagonal, PieceColor) && !WillCollideWithEnemy(leftUpDiagonal, PieceColor).Item1) { validMoves.Add(leftUpDiagonal); n++; leftUpDiagonal = new Tuple<int, int>(CurrentPosition.Item1 + n, CurrentPosition.Item2 - n); if (WillCollideWithEnemy(leftUpDiagonal, PieceColor).Item1) { validMoves.Add(leftUpDiagonal); break; } } n = 1; while (!IsOutOfBounds(rightDownDiagonal) && !WillCollideWithAlly(rightDownDiagonal, PieceColor) && !WillCollideWithEnemy(rightDownDiagonal, PieceColor).Item1) { validMoves.Add(rightDownDiagonal); n++; rightDownDiagonal = new Tuple<int, int>(CurrentPosition.Item1 - n, CurrentPosition.Item2 + n); if (WillCollideWithEnemy(rightDownDiagonal, PieceColor).Item1) { validMoves.Add(rightDownDiagonal); break; } } n = 1; while (!IsOutOfBounds(leftDownDiagonal) && !WillCollideWithAlly(leftDownDiagonal, PieceColor) && !WillCollideWithEnemy(leftDownDiagonal, PieceColor).Item1) { validMoves.Add(leftDownDiagonal); n++; leftDownDiagonal = new Tuple<int, int>(CurrentPosition.Item1 - n, 
CurrentPosition.Item2 - n); if (WillCollideWithEnemy(leftDownDiagonal, PieceColor).Item1) { validMoves.Add(leftDownDiagonal); break; } } return validMoves; } } QueenDefinitions Class : public class QueenDefinitions { private static readonly GeneratePieces generatedPices = new GeneratePieces(Figure.FigureType.Queen, 3, 3, 1, ImagePaths.WhiteQueenImagePath, ImagePaths.BlackQueenImagePath); public IEnumerable<FigureDefinition> WhiteQueens = generatedPices.GenerateWhitePieces(); public IEnumerable<FigureDefinition> BlackQueens = generatedPices.GenerateBlackPieces(); } KingPiece Class : public sealed class KingPiece : Figure { public KingPiece(FigureDefinition definition) : base(definition) { Moves = RemoveFailedTurns(this, GetValidTurns()); Moves = Moves.Distinct().ToList(); } protected override List<Tuple<int, int>> GetValidTurns() { List<Tuple<int, int>> tempMoves = new List<Tuple<int, int>> { new Tuple<int, int>(CurrentPosition.Item1 + 1, CurrentPosition.Item2), new Tuple<int, int>(CurrentPosition.Item1, CurrentPosition.Item2 + 1), new Tuple<int, int>(CurrentPosition.Item1 - 1, CurrentPosition.Item2), new Tuple<int, int>(CurrentPosition.Item1, CurrentPosition.Item2 - 1), new Tuple<int, int>(CurrentPosition.Item1 + 1, CurrentPosition.Item2 + 1), new Tuple<int, int>(CurrentPosition.Item1 + 1, CurrentPosition.Item2 - 1), new Tuple<int, int>(CurrentPosition.Item1 - 1, CurrentPosition.Item2 + 1), new Tuple<int, int>(CurrentPosition.Item1 - 1, CurrentPosition.Item2 - 1) }; List<Tuple<int, int>> validMoves = tempMoves.Where( tempMove => !IsOutOfBounds(tempMove) && !WillCollideWithAlly(tempMove, PieceColor)) .ToList(); return validMoves; } } KingDefinitions Class : public class KingDefinitions { private static readonly GeneratePieces generatedPieces = new GeneratePieces(Figure.FigureType.King, 4, 4, 1, ImagePaths.WhiteKingImagePath, ImagePaths.BlackKingImagePath); public IEnumerable<FigureDefinition> BlackKings = generatedPieces.GenerateBlackPieces(); public 
IEnumerable<FigureDefinition> WhiteKings = generatedPieces.GenerateWhitePieces(); } Every FigureDefinition class uses a class called GeneratePieces. So here it is; it basically cuts down on otherwise repetitive code. It uses the FigureDefinition class which you can see in part 1: public class GeneratePieces { private readonly FigureType pieceType; private readonly int startingRowWhite = 0; private readonly int startingRowBlack = 7; private readonly int startingColumn; private readonly int endingColumn; private readonly int increase; private readonly string whitePieceImagePath; private readonly string blackPieceImagePath; public GeneratePieces(FigureType pieceType,int startingColumn,int endingColumn,int increase, string whitePieceImagePath, string blackPieceImagePath) { this.pieceType = pieceType; this.startingColumn = startingColumn; this.endingColumn = endingColumn; this.increase = increase; this.whitePieceImagePath = whitePieceImagePath; this.blackPieceImagePath = blackPieceImagePath; if (pieceType == FigureType.Pawn) { startingRowWhite = 1; startingRowBlack = 6; } } public IEnumerable<FigureDefinition> GenerateBlackPieces() { List<FigureDefinition> pieces = new List<FigureDefinition>(); for (int i = startingColumn; i <= endingColumn; i += increase) { CooperativeForm.Board[startingRowBlack][i] = true; FigureDefinition piece = new FigureDefinition { PieceColor = FigureColor.Black, PieceType = pieceType, PieceImage = Image.FromFile(blackPieceImagePath), StartingPosition = new Tuple<int, int>(startingRowBlack, i), CurrentPosition = new Tuple<int, int>(startingRowBlack, i), WasMoved = false }; pieces.Add(piece); } return pieces; } public IEnumerable<FigureDefinition> GenerateWhitePieces() { List<FigureDefinition> pieces = new List<FigureDefinition>(); for (int i = startingColumn; i <= endingColumn; i += increase) { CooperativeForm.Board[startingRowWhite][i] = true; FigureDefinition piece = new FigureDefinition { PieceColor = FigureColor.White, PieceType = pieceType, PieceImage
= Image.FromFile(whitePieceImagePath), StartingPosition = new Tuple<int, int>(startingRowWhite, i), CurrentPosition = new Tuple<int, int>(startingRowWhite, i), }; pieces.Add(piece); } return pieces; } } We also have the public static class ImagePaths, which holds the locations of the figures' images: public static class ImagePaths { private const string assetsPath = @"Assets\Figures\"; public static string BlackPawnImagePath { get; } = assetsPath + @"Black\b-peshka.png"; public static string WhitePawnImagePath { get; } = assetsPath + @"White\w-peshka.png"; public static string BlackKnightImagePath { get; } = assetsPath + @"Black\b-kon.png"; public static string WhiteKnightImagePath { get; } = assetsPath + @"White\w-kon.png"; public static string BlackBishopImagePath { get; } = assetsPath + @"Black\b-oficer.png"; public static string WhiteBishopImagePath { get; } = assetsPath + @"White\w-oficer.png"; public static string BlackRookImagePath { get; } = assetsPath + @"Black\b-top.png"; public static string WhiteRookImagePath { get; } = assetsPath + @"White\w-top.png"; public static string BlackQueenImagePath { get; } = assetsPath + @"Black\b-kralica.png"; public static string WhiteQueenImagePath { get; } = assetsPath + @"White\w-kralica.png"; public static string BlackKingImagePath { get; } = assetsPath + @"Black\b-kral.png"; public static string WhiteKingImagePath { get; } = assetsPath + @"White\w-kral.png"; } The CooperativeModeForm also uses the Rochade class, which is implemented like this: using static GLS_Chess.Figures.Figure; public class Rochade { private enum RochadesByColor { White, Black } public Figure RochadeRook { get; private set; } = null; public Figure RochadeKing { get; private set; } = null; private static readonly List<Tuple<int, int>[]> longRochadeMoves = new List<Tuple<int, int>[]> { new[] {new Tuple<int, int>(0, 2), new Tuple<int, int>(0, 3)}, new[] {new Tuple<int, int>(7, 2), new Tuple<int, int>(7, 3)}, }; private static readonly
List<Tuple<int, int>[]> shortRochadeMoves = new List<Tuple<int, int>[]> { new[] {new Tuple<int, int>(0, 6), new Tuple<int, int>(0, 5)}, new[] {new Tuple<int, int>(7, 6), new Tuple<int, int>(7, 5)}, }; public static Tuple<int, int> newKingMove { get; set; } public void DoRochade(Figure kingToBeMoved) { if (kingToBeMoved.PieceType != FigureType.King || !Equals(kingToBeMoved.CurrentPosition, kingToBeMoved.StartingPosition) || kingToBeMoved.WasMoved || kingToBeMoved.WillCollideWithEnemy(newKingMove, kingToBeMoved.PieceColor).Item1 || kingToBeMoved.WillCollideWithAlly(newKingMove, kingToBeMoved.PieceColor)) { return; } List<Figure> currentTeamFigures = kingToBeMoved.PieceColor == FigureColor.Black ? CooperativeForm.BlackFigures : CooperativeForm.WhiteFigures; List<Figure> enemyTeamFigures = kingToBeMoved.PieceColor == FigureColor.Black ? CooperativeForm.WhiteFigures : CooperativeForm.BlackFigures; if (enemyTeamFigures.Any(enemyTeamFigure => enemyTeamFigure.Moves.Contains(kingToBeMoved.CurrentPosition))) { return; } foreach (var currentAllyFigure in currentTeamFigures.Where(figure => figure.PieceType == FigureType.Rook)) { if (!IsLongRochade(kingToBeMoved, currentAllyFigure) && !IsShortRochade(kingToBeMoved, currentAllyFigure)) { continue; } List<Tuple<int, int>[]> rochadeMoves = IsLongRochade(kingToBeMoved, currentAllyFigure) ? longRochadeMoves : shortRochadeMoves; int rochadeArrayIndex = currentAllyFigure.PieceColor == FigureColor.Black ? 
(int) RochadesByColor.Black : (int) RochadesByColor.White; RochadeRook = new RookPiece(new FigureDefinition { StartingPosition = currentAllyFigure.StartingPosition, CurrentPosition = rochadeMoves[rochadeArrayIndex][1], PieceColor = currentAllyFigure.PieceColor, PieceType = FigureType.Rook, PieceImage = currentAllyFigure.PieceImage, WasMoved = true }); RochadeKing = new KingPiece(new FigureDefinition { StartingPosition = currentAllyFigure.StartingPosition, CurrentPosition = rochadeMoves[rochadeArrayIndex][0], PieceColor = currentAllyFigure.PieceColor, PieceType = FigureType.King, PieceImage = kingToBeMoved.PieceImage, WasMoved = true }); break; } } private static bool IsLongRochade(Figure king, Figure rook) { bool found = longRochadeMoves.Any(t => Equals(t[0], newKingMove)); if (!found) { return false; } int arrayIndex = king.PieceColor == FigureColor.White ? 0 : 1; if (Equals(rook.CurrentPosition, rook.StartingPosition)) { return longRochadeMoves.Where((t, i) => rook.Moves.Contains(longRochadeMoves[arrayIndex][i])).Any(); } return false; } private static bool IsShortRochade(Figure king, Figure rook) { bool found = shortRochadeMoves.Any(t => Equals(t[0], newKingMove)); if (!found) { return false; } int arrayIndex = king.PieceColor == FigureColor.White ? 
0 : 1; if (Equals(rook.CurrentPosition, rook.StartingPosition)) { return shortRochadeMoves.Where((t, i) => rook.Moves.Contains(shortRochadeMoves[arrayIndex][i])).Any(); } return false; } } And we also have the PassedTurns class, which makes the Turn Tracking feature possible: public class PassedTurns { public enum ListsOrder { WhitePlayer, BlackPlayer, PieceType, Action } public enum ItemsOrder { Position, PieceType, Action } private object Actions = new object(); private object Positions = new object(); private object PieceTypes = new object(); public void AddNewMove(Tuple<int, int> newTurnPosition, FigureType newTurnPieceType, string newTurnAction) { Actions = newTurnAction; Positions = newTurnPosition; PieceTypes = newTurnPieceType; } public List<object> GetPassedTurns() { return new List<object> { Positions, PieceTypes, Actions, }; } } Reaching the end of the board. Lastly (phew!), the ReplacePawnForm is invoked whenever a pawn reaches the end of the board and must be replaced with a figure chosen by the user: public partial class ReplacePawnForm : Form { private enum ImageIndexes { Queen, Rook, Bishop, Knight } public Figure ReplacedFigure { get; private set; } private FigureType replacedFigureType; private string[] imagePaths; private readonly PictureBox[] piecesPictureBoxs = new PictureBox[4]; private readonly Panel[] piecesPanels = new Panel[4]; private bool pressedPictureBox = false; private readonly Tuple<int, int> pawnCurrentPosition; private readonly FigureColor pieceColor; private readonly FigureType[] pieceTypes = { FigureType.Queen, FigureType.Rook, FigureType.Bishop, FigureType.Knight, }; public ReplacePawnForm(FigureColor pieceColor,Tuple<int,int> pawnCurrentPosition) { InitializeComponent(); SetImagePaths(pieceColor); SetPanels(); SetPictureBoxs(); CreateLabels(); this.pieceColor = pieceColor; this.pawnCurrentPosition = pawnCurrentPosition; } private void bDone_Click(object sender, EventArgs e) { if (!pressedPictureBox) { MessageBox.Show(@"You
haven't picked a figure yet !"); } else { SetReplacedFigure(); Close(); } } private void SetImagePaths(FigureColor inputPieceColor) { if (inputPieceColor == FigureColor.Black) { imagePaths = new[] { ImagePaths.BlackQueenImagePath, ImagePaths.BlackRookImagePath, ImagePaths.BlackBishopImagePath, ImagePaths.BlackKnightImagePath }; } else { imagePaths = new[] { ImagePaths.WhiteQueenImagePath, ImagePaths.WhiteRookImagePath, ImagePaths.WhiteBishopImagePath, ImagePaths.WhiteKnightImagePath }; } } private void SetPanels() { int horizontal = 20; const int vertical = 55; for (int i = 0; i < piecesPanels.Length; i++) { piecesPanels[i] = new Panel { Location = new Point(horizontal,vertical), Size = new Size(105,95) }; Controls.Add(piecesPanels[i]); horizontal += 125; } } private void SetPictureBoxs() { for (int i = 0; i < piecesPictureBoxs.Length; i++) { piecesPictureBoxs[i] = new PictureBox { BackgroundImage = Image.FromFile(imagePaths[i]), BackgroundImageLayout = ImageLayout.Stretch, Size = new Size(100,90), Name = i.ToString() }; piecesPictureBoxs[i].Click += PictureBox_Click; Controls.Add(piecesPictureBoxs[i]); piecesPanels[i].Controls.Add(piecesPictureBoxs[i]); } } private void PictureBox_Click(object sender, EventArgs e) { RemoveBackgroundColor(); pressedPictureBox = true; PictureBox currentPb = (PictureBox) sender; GetContainerPanel(currentPb).BackColor = Color.DarkCyan; replacedFigureType = GetFigureType(currentPb); } private FigureType GetFigureType(Control currentPb) { for (int i = 0; i < imagePaths.Length; i++) { if (currentPb.Name == i.ToString()) { return pieceTypes[i]; } } return FigureType.Queen; } private Panel GetContainerPanel(Control currentPb) { return piecesPanels.FirstOrDefault(piecesPanel => piecesPanel.Controls.Contains(currentPb)); } private void RemoveBackgroundColor() { foreach (var piecesPanel in piecesPanels) { piecesPanel.BackColor = DefaultBackColor; } } private void CreateLabels() { Label[] labels = new Label[piecesPictureBoxs.Length]; for (int 
i = 0; i < labels.Length; i++) { labels[i] = new Label { Location = new Point(piecesPanels[i].Location.X + piecesPanels[i].Width/4, piecesPanels[i].Location.Y + piecesPanels[i].Height), Text = pieceTypes[i].ToString(), Font = new Font("Arial", 10, FontStyle.Bold) }; Controls.Add(labels[i]); } } private void SetReplacedFigure() { switch (replacedFigureType) { case FigureType.Bishop: ReplacedFigure = new BishopPiece(new FigureDefinition { PieceType = FigureType.Bishop, PieceImage = Image.FromFile(imagePaths[(int) ImageIndexes.Bishop]), CurrentPosition = pawnCurrentPosition, StartingPosition = pawnCurrentPosition, PieceColor = pieceColor, WasMoved = true }); break; case FigureType.Knight: ReplacedFigure = new KnightPiece(new FigureDefinition { PieceType = FigureType.Knight, PieceImage = Image.FromFile(imagePaths[(int) ImageIndexes.Knight]), CurrentPosition = pawnCurrentPosition, StartingPosition = pawnCurrentPosition, PieceColor = pieceColor, WasMoved = true }); break; case FigureType.Rook: ReplacedFigure = new RookPiece(new FigureDefinition { PieceType = FigureType.Rook, PieceImage = Image.FromFile(imagePaths[(int) ImageIndexes.Rook]), CurrentPosition = pawnCurrentPosition, StartingPosition = pawnCurrentPosition, PieceColor = pieceColor, WasMoved = true }); break; case FigureType.Queen: ReplacedFigure = new QueenPiece(new FigureDefinition { PieceType = FigureType.Queen, PieceImage = Image.FromFile(imagePaths[(int) ImageIndexes.Queen]), CurrentPosition = pawnCurrentPosition, StartingPosition = pawnCurrentPosition, PieceColor = pieceColor, WasMoved = true }); break; } } } Answer: Some of the terminology you're using is unusual. I would say Piece or Man instead of Figure, and "castling" instead of "Rochade". GetValidTurns should probably be called GetValidMoves. As Eric Lippert suggests on your other question, you should probably create a struct called BoardPosition which contains a row and a column as ints. 
The code for BoardPosition should contain comments indicating how the ranks and files are numbered. Maybe the constructor should check that the given numbers are in range. It would be useful for BoardPosition to have a method public BoardPosition Move(int right, int up) which returns a new BoardPosition with the coordinates altered appropriately. If you wanted, you could make GetValidTurns on QueenPiece just a single line: return GetRookMoves().Concat(GetBishopMoves()).ToList(); The methods GetValidTurns on RookPiece and GetRookMoves on QueenPiece are very similar. Consider replacing these two methods with a single GetRookMoves method. You might have this method as a protected method in the Figure class, or perhaps as a static method in a new static class, called something like ChessMoves or (Eric's suggestion) Rulebook. In GetRookMoves, the while-loops for rightMove, topMove, leftMove, and downMove are all very similar. See if you can combine these four pieces of code into one. I find the method call WillCollideWithEnemy(rightMove, PieceColor).Item1 to be confusing. Given the name WillCollideWithEnemy, I would expect that method to return a bool, but a bool doesn't have an Item1. Consider doing the following instead: Create a method public Figure GetEnemyAt(boardPosition, pieceColor), which returns null if the square at boardPosition is empty or contains an allied piece. Create a separate method public bool ContainsEnemy(boardPosition, pieceColor), which merely checks if the square contains an enemy or not and returns true or false. (This method could be implemented as a one-liner: return (GetEnemyAt(boardPosition, pieceColor) != null);.) For the while-loops in GetRookMoves, the logic seems a little more complicated than necessary. The startingMoves variable isn't necessary. Remove it, and initialize validMoves as the empty list. 
The implementation of the while-loops could be something like the following: BoardPosition destination = CurrentPosition; while(true) { destination = destination.Move(right: 1, up: 0); if (IsOutOfBounds(destination) || WillCollideWithAlly(destination, PieceColor)) { break; } validMoves.Add(destination); if (ContainsEnemy(destination, PieceColor)) { break; } } Consider making all of the members of ImagePaths const instead of static. You should be able to use definitions such as public const string BlackPawnImagePath = assetsPath + @"Black\b-peshka.png";. (Making ImagePaths a static class was a good idea.) Consider making Rochade a static class and turning all of its non-constant properties (including newKingMove) into parameters and/or local variables of the methods. In the PassedTurns class, shouldn't Actions, Positions, and PieceTypes all be lists?
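The suggestions about a BoardPosition type and about merging the four near-identical loops combine naturally once each direction is expressed as a vector. Below is a minimal sketch of that idea, written in C++ here rather than the question's C# since the technique is language-neutral; BoardPosition, rook_moves, and occupied_by are hypothetical names, not part of the reviewed code:

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Hypothetical stand-in for the suggested BoardPosition struct.
struct BoardPosition {
    int row, col;
    BoardPosition move(int dr, int dc) const { return {row + dr, col + dc}; }
};

// One loop handles all four rook directions via direction vectors.
// occupied_by: 0 = empty square, 1 = ally, 2 = enemy.
std::vector<BoardPosition> rook_moves(
    BoardPosition from, const std::vector<std::vector<int>>& occupied_by)
{
    std::vector<BoardPosition> valid;
    const std::pair<int, int> directions[] = {{0, 1}, {1, 0}, {0, -1}, {-1, 0}};
    for (auto [dr, dc] : directions) {
        BoardPosition dest = from;
        for (;;) {
            dest = dest.move(dr, dc);
            bool out = dest.row < 0 || dest.row > 7 || dest.col < 0 || dest.col > 7;
            if (out || occupied_by[dest.row][dest.col] == 1) break; // off board / ally
            valid.push_back(dest);
            if (occupied_by[dest.row][dest.col] == 2) break;        // capture, then stop
        }
    }
    return valid;
}
```

On an empty board a corner rook gets 7 + 7 = 14 moves; a bishop or queen is just a different direction table, and a king is the same table with a depth limit of one step.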
{ "domain": "codereview.stackexchange", "id": 19969, "tags": "c#, winforms, chess" }
Diffusion always present?
Question: Simply asked: Is diffusion always and everywhere present? Let's reduce the question to the macroscopic world. Any time two materials touch each other (air <-> wall, tea <-> cup, chair <-> floor), will a diffusion process always be present then? As far as I understand, diffusion covers only the transport of atoms and the like between materials, not within. Edit: According to Vadim's comment, I'd like to stick to diffusion between materials. Answer: When two materials A and B are placed in contact with one another, molecules of A will diffuse across the interface into B, and molecules of B will diffuse across the interface into A. The highest concentration of B in A will be at the interface, and the B concentration will decrease with distance from the interface. So diffusion of B molecules will be driven by the concentration gradient into A. The extent of the region where this occurs will increase as time progresses. In cases where A and B are not mutually miscible, A will first dissolve into B at the interface, and then diffuse inward; similarly for B into A. This dissolution at the interface will typically be described by Henry's law for gas dissolution in a solid or liquid.
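The statement that the interdiffusion region grows with time can be illustrated numerically. The sketch below integrates Fick's second law, dc/dt = D d²c/dx², for an initially sharp interface, assuming a constant diffusivity; r stands for the dimensionless step D·Δt/Δx² and must stay at or below 0.5 for this explicit scheme to be stable:

```cpp
#include <cassert>
#include <vector>

// Explicit finite-difference solution of Fick's second law for an
// initial step profile: material B occupies the left half at
// concentration 1, material A occupies the right half at 0.
std::vector<double> diffuse_step(int cells, int steps, double r /* = D*dt/dx^2 */)
{
    std::vector<double> c(cells, 0.0);
    for (int i = 0; i < cells / 2; ++i) c[i] = 1.0; // B on the left half
    for (int s = 0; s < steps; ++s) {
        std::vector<double> next = c;
        // Interior update; the two end cells stay fixed at 1 and 0.
        for (int i = 1; i + 1 < cells; ++i)
            next[i] = c[i] + r * (c[i - 1] - 2.0 * c[i] + c[i + 1]);
        c = next;
    }
    return c;
}
```

The concentration just across the interface approaches one half, it decreases monotonically with distance into A, and the depth to which B penetrates grows roughly as the square root of elapsed time, in line with the answer above.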
{ "domain": "physics.stackexchange", "id": 80231, "tags": "thermodynamics, solid-state-physics, diffusion" }
Large switch statement bucketing system
Question: I've finally got a solution for a "weekly streak" bucketing system I've been working on, but unfortunately it's resulted in a very large switch statement. I'm running through a handful of dates and determining by a range of seconds (the length of a week) which bucket that date belongs to. It's obvious to me that this could be refactored in a programmatic way because of the amount of repetition, but I'm not sure where to start. Dictionary? Some kind of sorting algorithm? Recursion? open func weeklyStreakCount(weeklyGoal target: Int) -> Int { let endDate = Date() let startDate = endDate.startOfWeek! let startDateInterval = Double(startDate.timeIntervalSinceNow) var workoutsPerWeek = [Int: Int]() let userWorkouts: [UserWorkoutEntity] = self.userWorkouts(completed: true) var numberOfGoodBuckets = 0 for i in 0...100 { workoutsPerWeek.updateValue(0, forKey: i) } // calculate the time from now to seconds in a week and round to the nearest hundredths to create a bucket for that week for userWorkout in userWorkouts { let workoutTimeInterval = Double((userWorkout.completionDate?.timeIntervalSinceNow)!) let rawBucket = (startDateInterval - workoutTimeInterval) / numberOfSecondsInAWeek let bucket = Int(rawBucket * 1000) let abbrNumberOfSecondsInAWeek = 604 switch bucket { case 0 ... abbrNumberOfSecondsInAWeek: workoutsPerWeek[0]! += 1 case abbrNumberOfSecondsInAWeek ... (abbrNumberOfSecondsInAWeek * 2): workoutsPerWeek[1]! += 1 case abbrNumberOfSecondsInAWeek * 2 ... (abbrNumberOfSecondsInAWeek * 3): workoutsPerWeek[2]! += 1 case abbrNumberOfSecondsInAWeek * 3 ... (abbrNumberOfSecondsInAWeek * 4): workoutsPerWeek[3]! += 1 case abbrNumberOfSecondsInAWeek * 4 ... (abbrNumberOfSecondsInAWeek * 5): workoutsPerWeek[4]! += 1 case abbrNumberOfSecondsInAWeek * 5 ... (abbrNumberOfSecondsInAWeek * 6): workoutsPerWeek[5]! += 1 case abbrNumberOfSecondsInAWeek * 6 ... (abbrNumberOfSecondsInAWeek * 7): workoutsPerWeek[6]! += 1 case abbrNumberOfSecondsInAWeek * 7 ...
(abbrNumberOfSecondsInAWeek * 8): workoutsPerWeek[7]! += 1 case abbrNumberOfSecondsInAWeek * 8 ... (abbrNumberOfSecondsInAWeek * 9): workoutsPerWeek[8]! += 1 case abbrNumberOfSecondsInAWeek * 9 ... (abbrNumberOfSecondsInAWeek * 10): workoutsPerWeek[9]! += 1 default: break } } // Run through each bucket and see how many times the user hit their goal for i in 0...10 { if(workoutsPerWeek[i]! > target) { numberOfGoodBuckets += 1 } else { break } } return numberOfGoodBuckets } Answer: First, I would replace the dictionary var workoutsPerWeek = [Int: Int]() by an array. A dictionary could be memory-saving if you have "few" keys in a "large range", e.g. 1, 20, 300, 4000, as a "sparse-array" emulation. But that is not the case here: The bucket numbers range from 0 to 9, so that an array is more appropriate, easier to initialize, and easier to access (no forced unwraps needed). It is unclear in your code how many entries are needed: You initialize entries for 0...100, update entries for 0...9, and finally evaluate entries 0...10. If you want to collect the data for the last 10 weeks then it would be var workoutsPerWeek = Array(repeating: 0, count: 10) Now you can use the computed bucket number as an index into the array and replace the switch statement by let rawBucket = Int((startDateInterval - workoutTimeInterval) / numberOfSecondsInAWeek) if rawBucket >= 0 && rawBucket < workoutsPerWeek.count { workoutsPerWeek[rawBucket] += 1 } The range check for the bucket number can also be done as if workoutsPerWeek.indices.contains(rawBucket) { ... } There are more things which can be improved: In let workoutTimeInterval = Double((userWorkout.completionDate?.timeIntervalSinceNow)!) the conversion to Double is not needed. And what if completionDate is nil? Your code would crash in that case. But my main point of criticism is how the bucket is computed. A day does not necessarily have 24 hours. 
In regions with daylight saving time, a day can have 23 or 25 hours if the clocks are adjusted one hour back or forward. Applied to your code: A week does not necessarily have 604,800 seconds. The proper way to compute calendar differences is to use the Calendar methods and DateComponents: for userWorkout in userWorkouts { // Ensure that completionDate is set, otherwise ignore this entry guard let completionDate = userWorkout.completionDate else { continue } // Compute #of weeks between completionDate and startDate let weeksAgo = Calendar.current.dateComponents([.weekOfMonth], from: completionDate, to: startDate).weekOfMonth! // Update corresponding bucket if workoutsPerWeek.indices.contains(weeksAgo) { workoutsPerWeek[weeksAgo] += 1 } } Finally, the calculation of the number of consecutive weeks where the goal has been reached can be simplified to let numberOfGoodBuckets = workoutsPerWeek.prefix(while: { $0 >= target }).count
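Leaving the Swift-specific Calendar handling aside, the remaining bucket-and-streak logic is language-neutral. As an illustration only (a C++ sketch with hypothetical names, not part of the reviewed code), the whole computation reduces to an array fill followed by a prefix count, the analogue of prefix(while:).count:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Count how many consecutive leading weeks met the goal.
// workouts_weeks_ago holds, for each completed workout, how many
// weeks ago it happened; values outside [0, window) are ignored,
// mirroring the indices.contains check in the Swift version.
int weekly_streak(const std::vector<int>& workouts_weeks_ago,
                  int window, int target)
{
    std::vector<int> per_week(window, 0);
    for (int w : workouts_weeks_ago)
        if (w >= 0 && w < window) ++per_week[w];
    // First week that missed the goal ends the streak.
    auto end_of_streak = std::find_if(per_week.begin(), per_week.end(),
                                      [&](int n) { return n < target; });
    return int(end_of_streak - per_week.begin());
}
```

With three workouts in each of the two most recent weeks but only one in the week before that, a target of three gives a streak of two.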
{ "domain": "codereview.stackexchange", "id": 26652, "tags": "algorithm, sorting, recursion, swift" }
Machine Learning Loss Functions In C++
Question: I was looking for C++ versions of the machine learning metrics implemented in Python's sklearn, but they were surprisingly hard to find. I came across a website that had most of the loss functions implemented in Python, so I did my best to translate them to C++. Below is what I have so far. I plan to implement most of the metrics on that page. From testing, I'm not really getting the speed I was expecting. Two notes: I've found that object members having their own local variables, rather than having to access shared class variables, gives a small speed boost. That's why several functions have their own similar variables that I decided not to make class members. I also found that using my own squaring function is faster than std::pow(n, 2). // metrics.hpp #include <vector> #include <cmath> class Metrics { // utils double square (double res) { return res * res ; } public: // regression double mean_absolute_error ( const std::vector<double> &y_true, const std::vector<double> &y_pred) { double store {0} ; size_t size {y_true.size()} ; for (int i {0}; i < size ; ++ i) { store += std::abs(y_true[i] - y_pred[i]) ; } return store/size ; } double root_mean_square_error ( const std::vector<double> &y_true, const std::vector<double> &y_pred) { double store {0} ; size_t size {y_true.size()} ; for (int i {0}; i < size ; ++ i) { store += square((y_true[i] - y_pred[i])) ; } return std::sqrt(store / size) ; } double mean_gamma_deviance ( const std::vector<double> &y_true, const std::vector<double> &y_pred) { double store {0} ; size_t size {y_true.size()} ; for (int i {0}; i < size ; ++ i) { store += 2.0 * (std::log(y_pred[i]/y_true[i]) + (y_true[i]/y_pred[i]) - 1.0) ; } return store / size ; } double mean_poisson_deviance ( const std::vector<double> &y_true, const std::vector<double> &y_pred) { double store {0} ; size_t size {y_true.size()} ; for (int i {0}; i < size ; ++ i) { store += 2.0 * ((y_true[i]*std::log(y_true[i]/y_pred[i])) + (y_pred[i]-y_true[i])) ; } return 
store / size ; } // classification double accuracy ( const std::vector<double> &y_true, const std::vector<double> &y_pred) { double store {0} ; size_t size {y_true.size()} ; for (int i {0}; i < size ; ++ i) { if (y_true[i] == y_pred[i]){ store += 1.0 ; } } return -(store / size) ; } double precision ( const std::vector<double> &y_true, const std::vector<double> &y_pred ) { double tp {0} ; // true positive double fp {0} ; // false positive size_t size {y_true.size()} ; for (int i {0}; i < size ; ++ i) { if (y_true[i] == y_pred[i] == 1) { tp += 1.0 ; } } for (int i {0}; i < size ; ++ i) { if (y_true[i] != y_pred[i] && y_pred[i]==1) { fp += 1.0 ; } } return tp/(tp+fp) ; } double recall ( const std::vector<double> &y_true, const std::vector<double> &y_pred ) { double tp {0} ; // true positive double fn {0} ; // false negative size_t size {y_true.size()} ; for (int i {0}; i < size ; ++ i) { if (y_true[i] == y_pred[i] == 1) { tp += 1.0 ; } } for (int i {0}; i < size ; ++ i) { if (y_true[i] != y_pred[i] && y_pred[i] == 0) { fn += 1.0 ; } } return tp/(tp+fn) ; } double f1 ( const std::vector<double> &y_true, const std::vector<double> &y_pred) { double prec {precision(y_true, y_pred)} ; double rec {recall(y_true, y_pred)} ; return -(2 * ((prec*rec) / (prec+rec))) ; } double jaccard_score ( const std::vector<double> &y_true, const std::vector<double> &y_pred) { double intersect {0} ; double uni {0} ; size_t size {y_true.size()} ; for (int i {0}; i < size ; ++ i) { if (y_true[i] == y_pred[i]) { intersect += 1.0 ; } } for (int i {0}; i < size ; ++ i) { uni += 1.0 ; } return intersect / uni ; } } met ; Answer: Design review There are two very common mistakes people make when trying to translate code from other languages to C++ looking for a performance boost. The first is trying a direct translation: that is, literally rewriting the exact same code logic from the other language to C++. 
There are countless posts on StackOverflow from people who rewrote some code from another language in C++, and are baffled why they aren’t getting a massive speed improvement. The reason for that should be obvious; if you just do the exact same thing in C++ that you did in Python or Java or whatever… what else would you expect to happen? C++ isn’t magical. It will take your CPU the same time to add, multiply, and do all the other operations regardless of the language you specified those operations in. If you do the same operations, you’ll get roughly the same performance. Each programming language has its own philosophy. When I write Python, I think very differently than when I write C++. If you want to get the most out of C++, you need to align your thinking with the philosophy of C++. You need to do things the C++ way. You need to learn and use the language’s idioms. Good code looks very different in C++ than it does in Python, even when that code is supposed to be doing the same thing. That being said… Sometimes—quite often, in fact—you will get a performance boost from a simple direct translation to C++. Maybe not as much as you were hoping for, but the exact same algorithm usually runs at least a little bit faster when implemented in C++. And the reason for that is: the compiler. Which brings us to the second common mistake people trying to translate code to C++ for a performance boost make: thinking that it’s all about the language. C++, more so than perhaps any other popular language, DEPENDS on the compiler. That’s the true secret to its power and efficiency; C++ is designed, from the ground up, to work WITH the compiler—hand-in-hand, as a partnership. C++ without a compiler is… nothing; it’s actually a pretty shitty language, on its own. So if you’re just looking at the code—the language—without taking into account what the compiler can, should, and probably will do for you, you’re just not going to get the promised performance benefits of C++. 
So if you have an algorithm or tool implemented in another language, and you want better performance, you can always† get it with C++… but you need to do two things: You need to completely rethink, redesign, and rewrite the code, starting from scratch, using C++ philosophy. You need to work with the compiler. Don’t think of it just as a tool. In C++, the compiler is your partner in performance. († There is no qualifier there; it is always possible, at least in theory, to get equivalent or better performance if you rewrite something in C++ properly. Yes, C++ can even beat C.) So let’s look at your code, keeping those two common mistakes in mind. So the original Python code put everything in classes for… reasons? A direct translation of the precision score code would look something like this: class Precision { std::size_t tp = 0; std::size_t fp = 0; public: auto true_positive(std::vector<double> const& l1, std::vector<double> const& l2) { for (auto i : std::views::iota(decltype(l1.size()){}, l1.size())) { if (l1[i] == l2[i] and l2[i] == 1) // * tp += 1; } // *: Note that Python promises that l2[i] is only evaluated once, // while C++ does not. However, even the most half-assed C++ // compiler will optimize it to a single load. } auto false_positive(std::vector<double> const& l1, std::vector<double> const& l2) { for (auto i : std::views::iota(decltype(l1.size()){}, l1.size())) { if (l1[i] != l2[i] and l2[i] == 1) fp += 1; } } auto calc_precision(std::vector<double> const& l1, std::vector<double> const& l2) { true_positive(l1, l2); false_positive(l1, l2); std::cout << (tp / double(tp + fp)); } }; However, that’s horrible C++. Your version of the precision score code is essentially just that, with true_positive() and false_positive() inlined, and the member variables tp and fp turned into local variables (and, of course, you return the value instead of printing it). 
That’s a slight improvement—some bugs are fixed, some new bugs are introduced, but overall it’s a bit better. However, it’s still not what you’d write if you were actually trying to solve this problem in C++.

Additionally, the “fixes” you made—like moving the member variables to be locals, creating your own square() function, and so on—are mostly micro-optimizations attempting to “outsmart” or work around the compiler. But the compiler is a LOT smarter than you give it credit for. I dare say: the compiler is smarter than you are. I don’t say that to imply you’re not smart: in point of fact, I know the compiler is smarter than I am, too.

Let’s consider the squaring issue. You claim that manually squaring via x * x is faster than std::pow(x, 2). I call bullshit. I mentioned in a comment that GCC optimizes std::pow(x, 2) to a single multiply instruction even at optimization level 1, but I also offer this Quick Bench comparison (which uses Clang, just for variation) that shows exactly the same performance either way.

But let’s dig deeper. Let’s try cubing. Aha! Now we see that GCC doesn’t replace the call to std::pow(x, 3)! Have we finally outsmarted the compiler?

Nope. In fact, the compiler has way outsmarted both of us. See, what’s happening here is super-complex, and goes deep into the weeds of how numbers and calculations work in a computer, IEEE 754 crap, and so on. But the very, very basic explanation is that the compiler can always transform pow(x, 2) into a single multiply instruction because the multiply is done using double-wide registers. In other words, double is 64 bits, but the xmm0 register is 128 bits. If you multiply 2 64-bit numbers, the result can never exceed 128 bits. But if you multiply 3 64-bit numbers, as in cubing, the answer could potentially be 192 bits. That’s overflowing the xmm0 register, so it is necessary to include error-handling code to detect and report that.

However… you can disable proper IEEE 754 behaviour in GCC.
Just add the -ffast-math flag (in the top right box, just after -O1). Now look at what happened. In fact, try replacing the 3 with 4. Try 10. Try 1254. Hell, try 0.5! See? The compiler knows what’s going on.

For the record, moving those member variables to be local variables… probably also doesn’t make a lick of difference. The compiler probably just does all the computation in registers in any case. You just wrote more code for no reason.

The lesson here is TRUST THE COMPILER. More precisely, don’t assume the compiler is dumb and that you can do better by manually micro-optimizing. The compiler WILL beat you. Almost every time. Sometimes the compiler might need more information to help it out. All current compilers have ways to supply that information, but there is no standard, portable way to do it yet. There are several proposals in flight, though, like the contracts proposal, and one specifically about assumptions.

The reason I’m hammering on this is that finding performance improvements by manually squaring or moving variables around tells me that SOMETHING IS WRONG. I don’t know what you’re doing wrong, but I know that you’re doing something wrong. You shouldn’t have to treat the compiler like a dumb tool that has to be outsmarted to get performance. You should be able to cooperate with the compiler—basically, just write good, clear, idiomatic C++ code—and get top-notch performance. It’s 2021 (almost 2022!); if you can get better performance merely by doing silly stuff like manually unrolling loops, moving variables around, or expanding calculations with constants, something is horribly wrong.

Before I go further, I want to address a comment by @IEatBagels, saying that you should be using numerics libraries like Eigen, or GPGPU techniques (as with CUDA).
They’re not wrong that you might get some performance boosts with a numerics library… and certainly will if you offload the program to a number-crunching co-processor (which is essentially what a GPU is). But do you need either of those things? No, not really. All modern C++ compilers worth mentioning can already vectorize your code right out of the box, and most have OpenMP or the like built right in. Frankly, the stuff you’re doing really isn’t complicated enough to warrant a third-party numerics library.

As for using CUDA/OpenCL/Vulkan Compute/whatever… well, yeah, obviously that will make your code run faster, in the same sense that running any code on two or more computers rather than one will be faster. (Assuming the data set is large enough to make splitting the work over several processors worthwhile, assuming the work is splittable, assuming blah blah blah all the other stuff that comes along with heterogeneous computing.) But that’s a whole different world of programming, at least for the time being; standard C++ doesn’t yet speak heterogeneous computing (but it’s coming!).

Now, you’ve written your code as a class, where all the functions are member functions. Why? This doesn’t actually make any sense. Think of it from the mathematical/statistics sense: if someone wanted the MAE of a set of predictions, why would they need to first construct a “metrics object” before they can run the calculation they actually need?

```cpp
auto const predictions = calculate_predictions();
auto const observations = make_observations();

// okay, now I want the MAE, so I *should* just be able to do:
auto error = mean_absolute_error(predictions, observations);

// but no, apparently I have to do:
auto metrics = Metrics{};
auto error = metrics.mean_absolute_error(predictions, observations);
```

And why? The Metrics object doesn’t serve any purpose. You just need it to call the function. So why not just have the function?
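For illustration, here is a minimal free-function version of one of these metrics. It deliberately mirrors the original’s logic (and, like the original, omits handling for empty or mismatched inputs):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Mean absolute error as a plain free function: plug the data into the
// formula, get an answer. No object required.
double mean_absolute_error(std::vector<double> const& y_true,
                           std::vector<double> const& y_pred)
{
    double store = 0.0;
    for (std::size_t i = 0; i < y_true.size(); ++i)
        store += std::abs(y_true[i] - y_pred[i]);
    return store / y_true.size();
}
```

Now the call site looks like the first, natural version sketched above: `auto error = mean_absolute_error(predictions, observations);`.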
Every one of those member functions (except square(), which serves no purpose) should be a free function. You could group them in a namespace if you really wanted to. (You should put all of your code in your own namespace, and these functions could be in their own sub-namespace.) But there doesn’t seem to be a need for a class.

In addition, there is a remarkable amount of code duplication in those functions. Every one of the “regression” functions is (with modifications to correct the bugs):

```cpp
template <typename Func1, typename Func2>
auto regression(
    std::vector<double> const& y_true,
    std::vector<double> const& y_pred,
    Func1&& func_1,
    Func2&& func_2)
{
    auto store = 0.0;
    auto size = y_true.size();

    for (auto i = decltype(size){}; i < size; ++i)
    {
        store += func_1(y_true[i], y_pred[i]);
    }

    return func_2(store / size);
}
```

For example, root_mean_square_error() is:

```cpp
auto root_mean_square_error(
    std::vector<double> const& y_true,
    std::vector<double> const& y_pred)
{
    return regression(
        y_true,
        y_pred,
        [] (auto a, auto b) { return std::pow(a - b, 2); },
        [] (auto r) { return std::sqrt(r); }
    );
}
```

And, because you are worried about performance and such, you can verify with Compiler Explorer that the code I wrote above generates literally identical assembly to the code in your version of root_mean_square_error() (once I correct the bugs, of course).

The next step worth taking is to generalize. First, instead of hard-coding vectors, you could use std::span. That will allow you to use these functions with vectors, C++ arrays, C arrays, and even third-party numeric library array types.
While we’re at it, we’ll remove the hard-coded double type:

```cpp
template <typename T, typename Func1, typename Func2>
auto regression(
    std::span<T const> y_true,
    std::span<T const> y_pred,
    Func1&& func_1,
    Func2&& func_2)
{
    auto store = T{};

    auto p = y_true.begin();
    auto q = y_pred.begin();
    for (; p != y_true.end(); ++p, ++q)
    {
        store += func_1(*p, *q);
    }

    return func_2(store / y_true.size());
}

// For example:
template <typename T>
auto root_mean_square_error(std::span<T const> y_true, std::span<T const> y_pred)
{
    return regression<T>(
        y_true,
        y_pred,
        [] (auto a, auto b) { return std::pow(a - b, 2); },
        [] (auto r) { return std::sqrt(r); }
    );
}
```

We could go even further, and make the code even more generic. But before we do that, I want to point out that what that regression() function is doing is a bog-standard algorithm. In fact, it’s such a common pattern that there are actually multiple algorithms in the standard library that could do it. The old dog would be std::inner_product():

```cpp
template <typename T, typename Func1, typename Func2>
auto regression(
    std::span<T const> y_true,
    std::span<T const> y_pred,
    Func1&& func_1,
    Func2&& func_2)
{
    return func_2(
        std::inner_product(
            y_true.begin(), y_true.end(),
            y_pred.begin(),
            T{},
            std::plus<>{},
            std::forward<Func1>(func_1))
        / y_true.size());
}
```

However, std::transform_reduce() is the out-of-order, parallelizable version of std::inner_product(). In your case, it’s okay to parallelize the algorithm, so you can use std::transform_reduce().

Now, without further comment, I’m just going to show you the way I would write root_mean_square_error() using C++20. I’m not saying my way is the “right” way, or the only way. I just want to illustrate a point.
Here’s (basically) your version (with bugs fixed):

```cpp
double root_mean_square_error (
    std::vector<double> const& y_true, std::vector<double> const& y_pred)
{
    auto const size = y_true.size();

    auto store = 0.0;
    for (auto i = decltype(size){}; i < size ; ++i)
        store += std::pow(y_true[i] - y_pred[i], 2);

    return std::sqrt(store / size) ;
}
```

Here’s my version (just a rough, first pass… later I might properly constrain the detail function, and support sized and non-sized ranges, and so on):

```cpp
namespace detail_ {

template <typename T>
concept not_bool = not std::is_same_v<std::remove_cv_t<T>, bool>;

template <typename T>
concept arithmetic = std::integral<T> or std::floating_point<T>;

// This is just a basic implementation.
//
// To improve this, I'd constrain the template parameters, and account
// for non-sized ranges and non-default-constructible types and so on.
template <typename R1, typename R2, typename F1, typename F2>
auto regression(R1&& y_true, R2&& y_pred, F1&& f1, F2&& f2)
{
    return f2(
        std::transform_reduce(
            std::ranges::begin(y_true), std::ranges::end(y_true),
            std::ranges::begin(y_pred),
            std::ranges::range_value_t<R1>{},
            std::plus<>{},
            std::forward<F1>(f1))
        / std::ranges::size(y_true));
}

} // namespace detail_

template <typename T>
concept number = detail_::arithmetic<T> and detail_::not_bool<T>;

template <std::ranges::input_range R1, std::ranges::input_range R2>
    requires number<std::ranges::range_value_t<R1>>
        and std::same_as<
            std::remove_cv_t<std::ranges::range_value_t<R1>>,
            std::remove_cv_t<std::ranges::range_value_t<R2>>>
auto root_mean_square_error_2(R1&& y_true, R2&& y_pred)
    // pre: not std::ranges::empty(y_true)
    // pre: std::ranges::size(y_true) == std::ranges::size(y_pred)
{
    return detail_::regression(
        std::forward<R1>(y_true),
        std::forward<R2>(y_pred),
        [] (auto a, auto b) { return std::pow(a - b, 2); },
        [] (auto r) { return std::sqrt(r); });
}
```

Couple things about my version: it’s a lot longer, but remember that that detail function gets reused for
all the regression functions (mean_absolute_error(), mean_gamma_deviance(), etc.). You only need to test and optimize the one function, and all the regression functions will benefit from it.

That means that adding new regression functions is trivial. You want mean squared log error, max error, or whatever else? It’s a one-liner (not counting constraints, of course, but if you wanted to, you could make a single concept with all constraints, and reuse that).

It supports any numeric value type, not just double: integer types, float, long double, and even float16.

It supports any range type, not just std::vector: C++ arrays, C arrays, arbitrary third-party numeric library types, and even stream views.

And here’s the kicker. Are you ready for this? All that power, flexibility, and extensibility I gained—not to mention the simplification of testing, optimizing, and extending due to extracting the guts of all the regression functions into a single function—surely must come with a performance cost, right?

Nope. My version is up to 2× faster.

That is what programming in C++ is all about: writing portable, high-level, reusable, and extensible code… that still melts CPUs and beats any other language out there.

So here’s my summary of the high-level design review:

Mechanically translating Python code to C++ will probably gain you some performance improvements, but nothing compared to what you could do if you just started from scratch in C++, writing code based on C++ philosophy, using C++ idioms.

You have an antagonistic relationship with your compiler, where you’re trying to tweak this and that to get around what you imagine the dumb tool’s limitations are. That kind of thinking made sense in the 1980s, maybe most of the 1990s, and possibly the early 2000s. But it doesn’t fly anymore. Modern compilers are so freaking advanced that even experts are frequently gobsmacked by how smart they are at optimizing their code.
Instead of trying to undermine, outmanoeuvre, and outsmart the compiler, you need to learn to respect it as a partner in your coding endeavours. You need to learn how to communicate with it—how to tell it what you want in a way it can understand, so it can work its magic to your benefit. Don’t say “the compiler is failing to optimize the code as much as I want, so I need to work around it”; say “what information do I need to give the compiler so it knows what I need it to do”.

Rather than simply throwing coding constructs at a problem because they’re the proverbial hammer that makes everything look like nails, think about what you’re trying to create on a conceptual level, and implement that in code. For example, if you want to calculate the mean value of a set of data, do you first get/build a “statistics object”, and then use that to do the calculation? No, of course not. You just do the calculation; you just plug the data into the formula, and get an answer out of it. A formula is a function (mathematically speaking and programmatically speaking). So you don’t need a class, you just need a function.

Incidentally, this is the kind of micro-optimization you generally don’t need to worry about, but in point of fact, having your statistics functions be non-static member functions of a class, rather than free functions, makes them less efficient. Member functions take an additional, hidden parameter—this—which you pay the price for every time they’re called. Even if you don’t use it, that hidden this also prevents a number of other optimizations; for example, member functions can never be [[gnu::const]] (though they can be [[gnu::pure]]). This is yet another example of how doing the right thing in C++—in this case, making functions be just functions, without requiring unnecessary classes—automatically gives you better performance.

“DRY”—“don’t repeat yourself”.
All of your functions can be refactored to pull common elements out into reusable detail functions, with no loss of usability or efficiency. Doing so makes everything easier to test and optimize, because the core of all the functions is in a single place. You only need to make it perfect once, and all of the functions will be better.

Think big. Rather than just writing functions that solve your immediate problem, step back and consider how you could solve future problems as well. Today you want to calculate the F-score of a vector of doubles. Tomorrow you might need the F-score of a boost::numeric::ublas::vector_slice<boost::numeric::ublas::vector<long double>>. It takes virtually the same amount of effort to write the one function with std::vector<double> as it does to write a template that takes an arbitrary range (you can always add all the constraints and other concepts bells and whistles later; they’re not strictly necessary). Even if you never do end up using the function for anything other than std::vector<double>, you can still benefit from the practice of trying to generalize your code.

Code review

Alright, now let’s dig into the actual code. Because the functions are so repetitive, I will mostly be able to review a single function, and the notes will apply to most/all.

Before I get into the code, I have to comment on what’s missing.

There is no namespace. You should always put your code in your own, personal namespace.

There is a serious lack of comments. You have comments grouping the functions—that’s very good—and comments explaining what tp and fp stand for—also good. (Though, honestly, I’d normally prefer to write out true_positive and false_positive. That’s a personal preference, though, and it doesn’t really apply to situations where the short version is the mathematical standard, as it is for tp and fp.) But there are no other comments explaining your reasoning.
You don’t need to write useless comments that explain what the code is doing, but you do really need to explain when you are doing something non-obvious. For example, why do you loop over the data twice in precision() and recall()? Is there a reason? What about in jaccard_score()… isn’t uni just size?

Note that merely the act of writing the comments—trying to explain your reasoning—forces you to think about your reasoning. Had you tried to explain why you were simply counting in the second loop in jaccard_score(), you would have realized you got the algorithm wrong. (Actually, you would have realized that the original code’s documentation is wrong.)

Most egregiously, there is not a single letter documenting the functions’ interfaces, preconditions, or usage. I can deduce from the code that you are not expecting empty vectors (otherwise, you’d be dividing by zero when you do store / size). But the only way I could spot that is by doing a detailed scan of the code… which most people are not going to have the time or inclination to do. I can also deduce that the sizes of the two vectors passed to all the functions must be the same. Those are important preconditions that users of your functions really need to know. Are there any other preconditions? For example, is, perhaps, the range of values in the two data sets supposed to be between zero and one?

Finally, no tests. How do you even know your functions work? (Spoiler: some of them don’t.) Code without tests is garbage code, in the sense that if someone checks in code without tests into a project I’m working on, I will immediately and summarily reject the code with prejudice, and throw the whole commit into the garbage. I won’t even look at it, so it doesn’t matter if it’s the most incredible code ever written by a god-level programmer. If it’s got no tests, it’s worse than useless to me.

Writing tests for your code should be the first thing you do.
It forces you to think about your interface and usability, so you will automatically write better code. You should use a proper test library, like Catch, Google Test, or Boost.Test; write the tests for your code first, and only then start writing your actual functions.

One more thing before we start: all of your functions have very restricted interfaces. They all take std::vector<double> and only std::vector<double>. This is unnecessarily restrictive. It is so easy to write generic code in C++. Let's start with one of your functions:

```cpp
double mean_absolute_error (
    const std::vector<double> &y_true, const std::vector<double> &y_pred)
{
    double store {0} ;

    size_t size {y_true.size()} ;

    for (int i {0}; i < size ; ++ i)
    {
        store += std::abs(y_true[i] - y_pred[i]) ;
    }

    return store/size ;
}
```

First, we refactor to use iterators rather than indexes. Iterators are not only more generic, they are… theoretically… faster. Why? Well, if you have an index and two arrays/vectors, you need 4 values:

- the start pointer of the first array
- the start pointer of the second array
- the index; and
- the end index.

And on each loop iteration, you need to do three things:

- increment the index (++index)
- calculate the pointer for the first array (start_1 + index)
- calculate the pointer for the second array (start_2 + index)

But with iterators you only need 3 values:

- the iterator for the first array
- the iterator for the second array; and
- the end iterator.

And on each loop iteration, you only need to do two things:

- increment the iterator for the first array (++p_1)
- increment the iterator for the second array (++p_2)

You see? Iterators are intrinsically more efficient. (In practice, the compiler might be able to optimize an index just as well… or it might simply ignore the index and use iterators internally.)
So, refactoring the code to use iterators:

```cpp
double mean_absolute_error (
    const std::vector<double> &y_true, const std::vector<double> &y_pred)
{
    double store {0} ;

    auto p_1 = y_true.begin();
    auto p_2 = y_pred.begin();
    auto const q_1 = y_true.end();

    for (; p_1 != q_1; ++p_1, ++p_2)
    {
        store += std::abs(*p_1 - *p_2) ;
    }

    return store / y_true.size() ;
}
```

And to make that completely generic, we just template the argument types, and use generic begin(), end(), and so on:

```cpp
template <typename R>
double mean_absolute_error (
    R const& y_true, R const& y_pred)
{
    double store {0} ;

    auto p_1 = std::ranges::begin(y_true);
    auto p_2 = std::ranges::begin(y_pred);
    auto const q_1 = std::ranges::end(y_true);

    for (; p_1 != q_1; ++p_1, ++p_2)
    {
        store += std::abs(*p_1 - *p_2) ;
    }

    return store / std::ranges::size(y_true) ;
}
```

That’s all there is to it. (It would be even simpler if we had views::zip, of course:

```cpp
template <typename R>
double mean_absolute_error (
    R const& y_true, R const& y_pred)
{
    double store {0} ;

    for (auto const [a, b] : std::views::zip(y_true, y_pred))
        store += std::abs(a - b) ;

    return store / std::ranges::size(y_true) ;
}
```

But we don’t get zip until C++23.)

You could add further improvements from there, like constraining the template parameters, supporting value types other than double, and supplying different implementations for ranges that don’t have a constant size. But that’s just gravy. The code above already does everything your current code does, and much, much more, with no loss of efficiency.

Alright, from the top:

```cpp
double mean_absolute_error (
    const std::vector<double> &y_true, const std::vector<double> &y_pred)
{
```

The convention in C++ is to put the type modifiers with the type. In plain English: T &t is C style. T& t is C++ style.

```cpp
double store {0} ;
```

It’s good to space out your code a bit when it makes sense, but there’s a point where it gets ridiculous.
There is no purpose to spacing the semicolon away from the statement it terminates; we don’t put spaces between the last letter and the period at the end of sentences. Nor is there any purpose to separating the braced initializer from the variable it’s initializing; store {0} makes it look like the {0} is unrelated to the store. (It’s also inconsistent to write store {0} but y_true[0].)

Also, you seem to really like the type var{init}; form of variable declarations. That’s fine, but it does get a little weird when you really go all-in. For example, I’ve been programming C++ since before it was standardized, and I think this is the first time I’ve ever seen for (int i {0}; ...). Tradition and convention has it as for (int i = 0; ...). Modern convention is moving toward “Almost Always auto”, possibly with concept constraints, which would be for (auto i = 0; ...), which is pretty much the same thing.

One thing you need to be cautious of if you’re going to stick with the type var{init}; form is that you should never use auto to get type deduction with that form. That’s because auto var{init}; behaves differently depending on the version of C++: before the C++17-era rule change, it deduced a std::initializer_list, not the type of init. Personally, I’m a big proponent of consistency, and there is only one declaration form that is perfectly consistent, and perfectly behaved in all cases: auto var = init;, or auto var = type{init}; to force a type. But the form you like is fine… just don’t use auto with it.

```cpp
size_t size {y_true.size()} ;

for (int i {0}; i < size ; ++ i)
{
```

There are some insidious bugs lurking here. First, it’s std::size_t, not size_t. Second, the type of y_true.size() is not (necessarily) std::size_t. It’s std::vector<double>::size_type. It should be harmless to force that into a std::size_t, though. However… this is not good: i < size. It is comparing a signed int to an unsigned std::size_t.
Signed/unsigned comparisons are dangerous; if you compile with warnings turned on (and you should!) you should be getting warnings about this code. But there’s an even bigger problem. On some platforms, int is a lot smaller than either std::size_t or std::vector<double>::size_type… which means you could be getting truncation or wraparound (which would be UB). I believe int is only 32 bits on Windows, but std::size_t (and probably std::vector<double>::size_type) is 64, so this is not a rare problem you will only run into on obscure systems. This is why you may have noticed I rewrote these lines as:

```cpp
auto size = y_true.size();

for (auto i = decltype(size){}; i < size ; ++i)
```

The auto here correctly sets the type of size to std::vector<double>::size_type, and the decltype(size) makes sure i is the same type.

Even better, though, would be to use iterators. It’s unfortunately a little clunky, because we don’t have std::views::zip_view until C++23, but it’s still safer, more flexible, and more efficient than indexes.

Even better still would be to stop and think about what these loops are actually doing, identify the patterns, and use standard library algorithms when appropriate. I’ll admit this is a little tricky in current C++, because all of the functions are working with two ranges simultaneously. Once we get views::zip in C++23, it will become easy. For example, accuracy() is:

```cpp
auto accuracy(
    std::vector<double> const& y_true,
    std::vector<double> const& y_pred)
{
    return -(double(
        std::ranges::count_if(
            std::views::zip(y_true, y_pred),
            [] (auto&& p) { return std::get<0>(p) == std::get<1>(p); }))
        / y_true.size());
}
```

That’s just “zip the elements of y_true and y_pred together, then count the ones that are equal” (and convert to double and divide by the size for the final answer, of course).
But even today, you can still do this with std::inner_product() or std::transform_reduce()… easy to spot, because there are very few algorithms that take two ranges, and even fewer that take two ranges and produce a single value (rather than another range). These functions are a bit more verbose because they don’t support ranges, but:

```cpp
auto accuracy(
    std::vector<double> const& y_true,
    std::vector<double> const& y_pred)
{
    return -(double(
        std::transform_reduce(
            std::ranges::begin(y_true), std::ranges::end(y_true),
            std::ranges::begin(y_pred),
            std::size_t{0},
            std::plus<>{},
            [] (auto&& a, auto&& b) { return a == b ? std::size_t{1} : std::size_t{0}; }))
        / y_true.size());
}

// or:
auto accuracy(
    std::vector<double> const& y_true,
    std::vector<double> const& y_pred)
{
    return -(double(
        std::inner_product(
            std::ranges::begin(y_true), std::ranges::end(y_true),
            std::ranges::begin(y_pred),
            std::size_t{0},
            std::plus<>{},
            [] (auto&& a, auto&& b) { return a == b ? std::size_t{1} : std::size_t{0}; }))
        / y_true.size());
}
```

std::transform_reduce() is better than std::inner_product(), though, because std::inner_product() must be done in order, while std::transform_reduce() can be done out-of-order, and even parallelized, which can be faster. And as I illustrated above, using standard algorithms can be much faster than hand-rolling your own loops, and that’s even before you start using things like std::execution::par_unseq.

```cpp
store += 2.0 * ((y_true[i]*std::log(y_true[i]/y_pred[i])) + (y_pred[i]-y_true[i])) ;
```

There’s some really odd spacing going on there. I don’t get the logic of putting a space between the semicolon and the rest of the statement, but not a single space in (y_true[i]*std::log(y_true[i]/y_pred[i])). I don’t see how that improves readability.

Also, you’re not defending against y_pred[i] being zero. The same goes elsewhere, where you divide by y_true[i]. And, of course, you never account for empty data sets, where the size is zero.
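As a self-contained sketch of that pattern (the function name is mine; it counts in an integer and converts once at the end, and I’ve left out the negation that the original accuracy() applies to its result):

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <numeric>
#include <vector>

// "Zip the two sequences together, count the positions that agree, divide
// by the size." std::transform_reduce does the zip-and-count in one call.
// pre: y_true.size() == y_pred.size(), and neither is empty
double match_fraction(std::vector<double> const& y_true,
                      std::vector<double> const& y_pred)
{
    auto const matches = std::transform_reduce(
        y_true.begin(), y_true.end(),
        y_pred.begin(),
        std::size_t{0},   // count in an integer, not a double
        std::plus<>{},
        [](double a, double b) { return a == b ? std::size_t{1} : std::size_t{0}; });

    return double(matches) / y_true.size();
}
```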
```cpp
double accuracy (
    const std::vector<double> &y_true, const std::vector<double> &y_pred)
{
    double store {0} ;

    size_t size {y_true.size()} ;

    for (int i {0}; i < size ; ++ i)
    {
        if (y_true[i] == y_pred[i]){
            store += 1.0 ;
        }
    }

    return -(store / size) ;
}
```

All of your classification functions are essentially counting. In this case, you’re counting the number of times in the data set when the true and predicted values match. The problem is: you’re using a double to count. Yes, you ultimately want a double, so you can do the division and not get truncation. But for the actual counting, doing store += 1.0 on most hardware will be significantly slower than doing ++store (assuming store is an integer). On most hardware, you will get a dramatic performance boost if you do:

```cpp
double accuracy (
    std::vector<double> const& y_true, std::vector<double> const& y_pred)
{
    auto store = std::size_t{0};

    auto const size = y_true.size();
    for (auto i = decltype(size){0}; i < size ; ++i)
    {
        if (y_true[i] == y_pred[i])
            ++store;
    }

    return -(double(store) / size);
}
```

This is even more true for precision() and recall(), where you’re incrementing two doubles, in two loops:

```cpp
double precision (
    std::vector<double> const& y_true, std::vector<double> const& y_pred)
{
    auto tp = std::size_t{0}; // true positive
    auto fp = std::size_t{0}; // false positive

    auto const size = y_true.size();
    for (auto i = decltype(size){0}; i < size ; ++i)
    {
        if (y_true[i] == y_pred[i] and y_pred[i] == 1)
            ++tp;
        if (y_true[i] != y_pred[i] and y_pred[i] == 1)
            ++fp;
    }

    return tp / double(tp + fp);
}
```

(Your compiler is probably smart enough to merge the two loops on its own. Still, it doesn’t really make sense to write two loops when you really only need one.)

Before I move on, I need to comment on all the times you’re using == and != with doubles. That is almost always wrong. It is almost impossible to get exactly the same result from a set of mathematical operations if they’re not done in the exact same order.
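The canonical demonstration, plus a simplified sketch of a relative-tolerance comparison (both function names are mine; the sketch assumes no NaNs or infinities, unlike a production-quality comparison):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// 0.1 + 0.2 is mathematically 0.3, but none of 0.1, 0.2, or 0.3 is exactly
// representable in binary floating point, so operator== says they differ.
bool naive_equal(double a, double b)
{
    return a == b;
}

// Simplified relative-tolerance comparison: "equal" means the difference is
// small relative to the magnitude of the larger operand.
bool roughly_equal(double a, double b, double max_relative_error)
{
    return std::abs(a - b)
        <= std::max(std::abs(a), std::abs(b)) * max_relative_error;
}
```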
Also, if you’re dealing at all with any sort of analog input (like a sensor), line noise alone will screw up two readings that are supposed to be exactly the same. Comparing floating point numbers is a dark art. If I even start to explain it here, we’ll be here all day. But the bottom line is that you should give the user a way to specify a comparison function. You could always provide a sensible default for simplicity. For example: template <std::floating_point T> constexpr auto numeric_compare(T a, T b, T max_relative_error) noexcept // pre: not std::isnan(max_relative_error) { if (std::isunordered(a, b)) return std::partial_ordering::unordered; if (std::abs(a - b) <= (std::max(std::abs(a), std::abs(b)) * max_relative_error)) return std::partial_ordering::equivalent; return (a < b) ? std::partial_ordering::less : std::partial_ordering::greater; } template <std::floating_point T> constexpr auto numeric_compare(T a, T b) noexcept { return numeric_compare(a, b, std::numeric_limits<T>::epsilon()); } template <std::integer T> constexpr auto numeric_compare(T a, T b) noexcept { return a <=> b; } template <typename NumericCompare> double accuracy ( std::vector<double> const& y_true, std::vector<double> const& y_pred, NumericCompare&& compare) { auto store = std::size_t{0}; auto const size = y_true.size(); for (auto i = decltype(store){0}; i < size ; ++i) { if (compare(y_true[i], y_pred[i]) == 0) ++store; } return -(double{store} / size); } double accuracy ( std::vector<double> const& y_true, std::vector<double> const& y_pred) { return accuracy(y_true, y_pred, [] (auto a, auto b) { return numeric_compare(a, b); }); } Then if I want, say, 6 digits of precision, I can do: auto acc = accuracy(y_true, y_pred, [] (auto a, auto b) { return numeric_compare(a, b, 0.000001); }); // or: auto my_compare = [] (auto a, auto b) { return numeric_compare(a, b, 0.000001); }; auto acc = accuracy(y_true, y_pred, my_compare); One more note about comparisons: double precision ( const 
std::vector<double> &y_true, const std::vector<double> &y_pred ) { double tp {0} ; // true positive double fp {0} ; // false positive size_t size {y_true.size()} ; for (int i {0}; i < size ; ++ i) { if (y_true[i] == y_pred[i] == 1) { // <-- That last line is wrong. Python allows chaining comparisons, so a == b == c gets rewritten as a == b and b == c. C++ does not support that. So y_true[i] == y_pred[i] == 1 is interpreted as (y_true[i] == y_pred[i]) == 1. Which reduces to either true == 1 or false == 1… which is just true or false. In other words, that whole expression is basically just y_true[i] == y_pred[i]. To mimic the Python behaviour, you want y_true[i] == y_pred[i] and y_pred[i] == 1. (Python needs chained operators because it does not have an optimizing compiler, so if you write y_true[i] == y_pred[i] and y_pred[i] == 1, y_pred[i] gets evaluated twice. C++ expects the compiler to optimize the repeated evaluation (presuming it has no side effects, of course).) double jaccard_score ( const std::vector<double> &y_true, const std::vector<double> &y_pred) { double intersect {0} ; double uni {0} ; size_t size {y_true.size()} ; for (int i {0}; i < size ; ++ i) { if (y_true[i] == y_pred[i]) { intersect += 1.0 ; } } for (int i {0}; i < size ; ++ i) { uni += 1.0 ; } return intersect / uni ; } I believe this function is incorrect. (Did you test your code? If you’d written proper tests, and applied them, you would have discovered this.) As far as I understand it, the Jaccard index is | y_true ∩ y_pred | ÷ | y_true ∪ y_pred |. In other words, the size of the intersection divided by the size of the union. uni is not the size of the union, it’s just the size of the set. That loop: for (int i {0}; i < size ; ++ i) { uni += 1.0 ; } is just uni = static_cast<double>(size);, calculated the long way around. You can calculate the size of the union as the size of both sets minus the size of the intersection… and you’ve already got the size of the intersection.
So you could do: auto jaccard_score ( std::vector<double> const& y_true, std::vector<double> const& y_pred) { auto const set_size = y_true.size(); auto intersection_size = std::size_t{0}; for (auto i = decltype(set_size){0}; i < set_size; ++i) { if (y_true[i] == y_pred[i]) // should do a proper comparison here ++intersection_size; } return static_cast<double>(intersection_size) / ((2 * set_size) - intersection_size); } Finally: } met ; Why? Summary of code review: There are no comments at all, which means: No explanations of usage, so there’s no way to decide whether there might be better ways of accomplishing something. No precondition expectations, so there’s no way to know whether an unconsidered corner case is a bug or not. No postcondition promises, so there’s no way for a user of the code to know what state it might leave their program in. No explanations of rationale for any of the code, so there’s no way to know if something done in the code was done for very smart reasons, or was just a brain fart. There are no tests at all. Untested code is garbage code, not even worthy of review in a serious project. The usage of whitespace is… idiosyncratic, to say the least, and quite excessive. By my estimate, over half the lines of your code are just blank lines. Indenting is also excessive, and inconsistent: you indent the class access specifier (public:) by four spaces… and then indent the functions by another four spaces, except for square(). That’s 10% of the horizontal space in an 80-column editor just wasted. There are spaces before semicolons, but not around binary operators… both of which hurt readability. These functions should all be free functions; the Metrics class serves no purpose. There should be a namespace, though. There is a massive amount of repetition. A refactor pulling out common code would make your functions simpler, easier to test, and easier to optimize (because you’d only need to test/optimize in one place). Hand-rolled loops are a code smell.
Use algorithms. Not only do they make your code easier to understand, they can also offer performance benefits. Watch out for truncation and signed/unsigned comparisons. These are a common problem with hand-rolled loops. (Better yet, don’t write hand-rolled loops, or, if you must, use iterators, which are more flexible, and (theoretically) faster.) Don’t compare floating point numbers with ==. Make your interfaces more generic. It’s not that much harder to support completely generic interfaces than it is to restrict it to std::vector<double>, and there is no performance cost.
{ "domain": "codereview.stackexchange", "id": 42546, "tags": "c++, performance, machine-learning" }
moveit_tutorial build failing
Question: I am following the steps here: https://ros-planning.github.io/moveit_tutorials/doc/getting_started/getting_started.html#install-moveit ROS Version: Melodic OS: Ubuntu 18 It works till catkin configure. After the catkin build command the panda_moveit_config finished without errors but the moveit_tutorials gives me errors. The build log shows the following: [ 5%] Built target robot_model_and_robot_state_tutorial [ 11%] Built target ros_api_tutorial [ 17%] Built target planning_scene_ros_api_tutorial [ 23%] Built target planning_scene_tutorial [ 29%] Built target motion_planning_api_tutorial [ 35%] Built target motion_planning_pipeline_tutorial [ 47%] Built target interactivity_utils [ 52%] Built target move_group_interface_tutorial [ 58%] Built target state_display_tutorial [ 61%] Building CXX object doc/subframes/CMakeFiles/subframes_tutorial.dir/src/subframes_tutorial.cpp.o [ 67%] Built target pick_place_tutorial [ 73%] Built target cylinder_segment [ 79%] Built target bag_publisher_maintain_time Scanning dependencies of target trajopt_example Scanning dependencies of target visualizing_collisions_tutorial [ 85%] Built target controller_manager_example [ 88%] Building CXX object doc/trajopt_planner/CMakeFiles/trajopt_example.dir/src/trajopt_example.cpp.o [ 91%] Building CXX object doc/visualizing_collisions/CMakeFiles/visualizing_collisions_tutorial.dir/src/visualizing_collisions_tutorial.cpp.o /home/pyro/ws_moveit/src/moveit_tutorials/doc/visualizing_collisions/src/visualizing_collisions_tutorial.cpp:47:10: fatal error: moveit/collision_detection_fcl/collision_env_fcl.h: No such file or directory #include <moveit/collision_detection_fcl/collision_env_fcl.h> ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ compilation terminated.
doc/visualizing_collisions/CMakeFiles/visualizing_collisions_tutorial.dir/build.make:62: recipe for target 'doc/visualizing_collisions/CMakeFiles/visualizing_collisions_tutorial.dir/src/visualizing_collisions_tutorial.cpp.o' failed CMakeFiles/Makefile2:4490: recipe for target 'doc/visualizing_collisions/CMakeFiles/visualizing_collisions_tutorial.dir/all' failed make[2]: *** [doc/visualizing_collisions/CMakeFiles/visualizing_collisions_tutorial.dir/src/visualizing_collisions_tutorial.cpp.o] Error 1 make[1]: *** [doc/visualizing_collisions/CMakeFiles/visualizing_collisions_tutorial.dir/all] Error 2 make[1]: *** Waiting for unfinished jobs.... /home/pyro/ws_moveit/src/moveit_tutorials/doc/subframes/src/subframes_tutorial.cpp: In function ‘void spawnCollisionObjects(moveit::planning_interface::PlanningSceneInterface&)’: /home/pyro/ws_moveit/src/moveit_tutorials/doc/subframes/src/subframes_tutorial.cpp:114:7: error: ‘moveit_msgs::CollisionObject {aka struct moveit_msgs::CollisionObject_<std::allocator<void> >}’ has no member named ‘subframe_names’ box.subframe_names.resize(5); ^~~~~~~~~~~~~~ /home/pyro/ws_moveit/src/moveit_tutorials/doc/subframes/src/subframes_tutorial.cpp:115:7: error: ‘moveit_msgs::CollisionObject {aka struct moveit_msgs::CollisionObject_<std::allocator<void> >}’ has no member named ‘subframe_poses’; did you mean ‘plane_poses’? box.subframe_poses.resize(5); ^~~~~~~~~~~~~~ plane_poses /home/pyro/ws_moveit/src/moveit_tutorials/doc/subframes/src/subframes_tutorial.cpp:117:7: error: ‘moveit_msgs::CollisionObject {aka struct moveit_msgs::CollisionObject_<std::allocator<void> >}’ has no member named ‘subframe_names’ box.subframe_names[0] = "bottom"; ^~~~~~~~~~~~~~ /home/pyro/ws_moveit/src/moveit_tutorials/doc/subframes/src/subframes_tutorial.cpp:118:7: error: ‘moveit_msgs::CollisionObject {aka struct moveit_msgs::CollisionObject_<std::allocator<void> >}’ has no member named ‘subframe_poses’; did you mean ‘plane_poses’? 
box.subframe_poses[0].position.y = -.05; ^~~~~~~~~~~~~~ plane_poses /home/pyro/ws_moveit/src/moveit_tutorials/doc/subframes/src/subframes_tutorial.cpp:119:7: error: ‘moveit_msgs::CollisionObject {aka struct moveit_msgs::CollisionObject_<std::allocator<void> >}’ has no member named ‘subframe_poses’; did you mean ‘plane_poses’? box.subframe_poses[0].position.z = 0.0 + z_offset_box; ^~~~~~~~~~~~~~ plane_poses /home/pyro/ws_moveit/src/moveit_tutorials/doc/subframes/src/subframes_tutorial.cpp:123:7: error: ‘moveit_msgs::CollisionObject {aka struct moveit_msgs::CollisionObject_<std::allocator<void> >}’ has no member named ‘subframe_poses’; did you mean ‘plane_poses’? box.subframe_poses[0].orientation = tf2::toMsg(orientation); ^~~~~~~~~~~~~~ plane_poses /home/pyro/ws_moveit/src/moveit_tutorials/doc/subframes/src/subframes_tutorial.cpp:126:7: error: ‘moveit_msgs::CollisionObject {aka struct moveit_msgs::CollisionObject_<std::allocator<void> >}’ has no member named ‘subframe_names’ box.subframe_names[1] = "top"; ^~~~~~~~~~~~~~ /home/pyro/ws_moveit/src/moveit_tutorials/doc/subframes/src/subframes_tutorial.cpp:127:7: error: ‘moveit_msgs::CollisionObject {aka struct moveit_msgs::CollisionObject_<std::allocator<void> >}’ has no member named ‘subframe_poses’; did you mean ‘plane_poses’? box.subframe_poses[1].position.y = .05; ^~~~~~~~~~~~~~ plane_poses /home/pyro/ws_moveit/src/moveit_tutorials/doc/subframes/src/subframes_tutorial.cpp:128:7: error: ‘moveit_msgs::CollisionObject {aka struct moveit_msgs::CollisionObject_<std::allocator<void> >}’ has no member named ‘subframe_poses’; did you mean ‘plane_poses’? box.subframe_poses[1].position.z = 0.0 + z_offset_box; ^~~~~~~~~~~~~~ plane_poses /home/pyro/ws_moveit/src/moveit_tutorials/doc/subframes/src/subframes_tutorial.cpp:130:7: error: ‘moveit_msgs::CollisionObject {aka struct moveit_msgs::CollisionObject_<std::allocator<void> >}’ has no member named ‘subframe_poses’; did you mean ‘plane_poses’? 
box.subframe_poses[1].orientation = tf2::toMsg(orientation); ^~~~~~~~~~~~~~ plane_poses /home/pyro/ws_moveit/src/moveit_tutorials/doc/subframes/src/subframes_tutorial.cpp:132:7: error: ‘moveit_msgs::CollisionObject {aka struct moveit_msgs::CollisionObject_<std::allocator<void> >}’ has no member named ‘subframe_names’ box.subframe_names[2] = "corner_1"; ^~~~~~~~~~~~~~ /home/pyro/ws_moveit/src/moveit_tutorials/doc/subframes/src/subframes_tutorial.cpp:133:7: error: ‘moveit_msgs::CollisionObject {aka struct moveit_msgs::CollisionObject_<std::allocator<void> >}’ has no member named ‘subframe_poses’; did you mean ‘plane_poses’? box.subframe_poses[2].position.x = -.025; ^~~~~~~~~~~~~~ plane_poses /home/pyro/ws_moveit/src/moveit_tutorials/doc/subframes/src/subframes_tutorial.cpp:134:7: error: ‘moveit_msgs::CollisionObject {aka struct moveit_msgs::CollisionObject_<std::allocator<void> >}’ has no member named ‘subframe_poses’; did you mean ‘plane_poses’? box.subframe_poses[2].position.y = -.05; ^~~~~~~~~~~~~~ plane_poses /home/pyro/ws_moveit/src/moveit_tutorials/doc/subframes/src/subframes_tutorial.cpp:135:7: error: ‘moveit_msgs::CollisionObject {aka struct moveit_msgs::CollisionObject_<std::allocator<void> >}’ has no member named ‘subframe_poses’; did you mean ‘plane_poses’? box.subframe_poses[2].position.z = -.01 + z_offset_box; ^~~~~~~~~~~~~~ plane_poses /home/pyro/ws_moveit/src/moveit_tutorials/doc/subframes/src/subframes_tutorial.cpp:137:7: error: ‘moveit_msgs::CollisionObject {aka struct moveit_msgs::CollisionObject_<std::allocator<void> >}’ has no member named ‘subframe_poses’; did you mean ‘plane_poses’? 
box.subframe_poses[2].orientation = tf2::toMsg(orientation); ^~~~~~~~~~~~~~ plane_poses /home/pyro/ws_moveit/src/moveit_tutorials/doc/subframes/src/subframes_tutorial.cpp:139:7: error: ‘moveit_msgs::CollisionObject {aka struct moveit_msgs::CollisionObject_<std::allocator<void> >}’ has no member named ‘subframe_names’ box.subframe_names[3] = "corner_2"; ^~~~~~~~~~~~~~ /home/pyro/ws_moveit/src/moveit_tutorials/doc/subframes/src/subframes_tutorial.cpp:140:7: error: ‘moveit_msgs::CollisionObject {aka struct moveit_msgs::CollisionObject_<std::allocator<void> >}’ has no member named ‘subframe_poses’; did you mean ‘plane_poses’? box.subframe_poses[3].position.x = .025; ^~~~~~~~~~~~~~ plane_poses /home/pyro/ws_moveit/src/moveit_tutorials/doc/subframes/src/subframes_tutorial.cpp:141:7: error: ‘moveit_msgs::CollisionObject {aka struct moveit_msgs::CollisionObject_<std::allocator<void> >}’ has no member named ‘subframe_poses’; did you mean ‘plane_poses’? box.subframe_poses[3].position.y = -.05; ^~~~~~~~~~~~~~ plane_poses /home/pyro/ws_moveit/src/moveit_tutorials/doc/subframes/src/subframes_tutorial.cpp:142:7: error: ‘moveit_msgs::CollisionObject {aka struct moveit_msgs::CollisionObject_<std::allocator<void> >}’ has no member named ‘subframe_poses’; did you mean ‘plane_poses’? box.subframe_poses[3].position.z = -.01 + z_offset_box; ^~~~~~~~~~~~~~ plane_poses /home/pyro/ws_moveit/src/moveit_tutorials/doc/subframes/src/subframes_tutorial.cpp:144:7: error: ‘moveit_msgs::CollisionObject {aka struct moveit_msgs::CollisionObject_<std::allocator<void> >}’ has no member named ‘subframe_poses’; did you mean ‘plane_poses’? 
box.subframe_poses[3].orientation = tf2::toMsg(orientation); ^~~~~~~~~~~~~~ plane_poses /home/pyro/ws_moveit/src/moveit_tutorials/doc/subframes/src/subframes_tutorial.cpp:146:7: error: ‘moveit_msgs::CollisionObject {aka struct moveit_msgs::CollisionObject_<std::allocator<void> >}’ has no member named ‘subframe_names’ box.subframe_names[4] = "side"; ^~~~~~~~~~~~~~ /home/pyro/ws_moveit/src/moveit_tutorials/doc/subframes/src/subframes_tutorial.cpp:147:7: error: ‘moveit_msgs::CollisionObject {aka struct moveit_msgs::CollisionObject_<std::allocator<void> >}’ has no member named ‘subframe_poses’; did you mean ‘plane_poses’? box.subframe_poses[4].position.x = .0; ^~~~~~~~~~~~~~ plane_poses /home/pyro/ws_moveit/src/moveit_tutorials/doc/subframes/src/subframes_tutorial.cpp:148:7: error: ‘moveit_msgs::CollisionObject {aka struct moveit_msgs::CollisionObject_<std::allocator<void> >}’ has no member named ‘subframe_poses’; did you mean ‘plane_poses’? box.subframe_poses[4].position.y = .0; ^~~~~~~~~~~~~~ plane_poses /home/pyro/ws_moveit/src/moveit_tutorials/doc/subframes/src/subframes_tutorial.cpp:149:7: error: ‘moveit_msgs::CollisionObject {aka struct moveit_msgs::CollisionObject_<std::allocator<void> >}’ has no member named ‘subframe_poses’; did you mean ‘plane_poses’? box.subframe_poses[4].position.z = -.01 + z_offset_box; ^~~~~~~~~~~~~~ plane_poses /home/pyro/ws_moveit/src/moveit_tutorials/doc/subframes/src/subframes_tutorial.cpp:151:7: error: ‘moveit_msgs::CollisionObject {aka struct moveit_msgs::CollisionObject_<std::allocator<void> >}’ has no member named ‘subframe_poses’; did you mean ‘plane_poses’? box.subframe_poses[4].orientation = tf2::toMsg(orientation); ^~~~~~~~~~~~~~ plane_poses /home/pyro/ws_moveit/src/moveit_tutorials/doc/subframes/src/subframes_tutorial.cpp:169:12: error: ‘moveit_msgs::CollisionObject {aka struct moveit_msgs::CollisionObject_<std::allocator<void> >}’ has no member named ‘subframe_poses’; did you mean ‘plane_poses’? 
cylinder.subframe_poses.resize(1); ^~~~~~~~~~~~~~ plane_poses /home/pyro/ws_moveit/src/moveit_tutorials/doc/subframes/src/subframes_tutorial.cpp:170:12: error: ‘moveit_msgs::CollisionObject {aka struct moveit_msgs::CollisionObject_<std::allocator<void> >}’ has no member named ‘subframe_names’ cylinder.subframe_names.resize(1); ^~~~~~~~~~~~~~ /home/pyro/ws_moveit/src/moveit_tutorials/doc/subframes/src/subframes_tutorial.cpp:171:12: error: ‘moveit_msgs::CollisionObject {aka struct moveit_msgs::CollisionObject_<std::allocator<void> >}’ has no member named ‘subframe_names’ cylinder.subframe_names[0] = "tip"; ^~~~~~~~~~~~~~ /home/pyro/ws_moveit/src/moveit_tutorials/doc/subframes/src/subframes_tutorial.cpp:172:12: error: ‘moveit_msgs::CollisionObject {aka struct moveit_msgs::CollisionObject_<std::allocator<void> >}’ has no member named ‘subframe_poses’; did you mean ‘plane_poses’? cylinder.subframe_poses[0].position.x = 0.03; ^~~~~~~~~~~~~~ plane_poses /home/pyro/ws_moveit/src/moveit_tutorials/doc/subframes/src/subframes_tutorial.cpp:173:12: error: ‘moveit_msgs::CollisionObject {aka struct moveit_msgs::CollisionObject_<std::allocator<void> >}’ has no member named ‘subframe_poses’; did you mean ‘plane_poses’? cylinder.subframe_poses[0].position.y = 0.0; ^~~~~~~~~~~~~~ plane_poses /home/pyro/ws_moveit/src/moveit_tutorials/doc/subframes/src/subframes_tutorial.cpp:174:12: error: ‘moveit_msgs::CollisionObject {aka struct moveit_msgs::CollisionObject_<std::allocator<void> >}’ has no member named ‘subframe_poses’; did you mean ‘plane_poses’? cylinder.subframe_poses[0].position.z = 0.0 + z_offset_cylinder; ^~~~~~~~~~~~~~ plane_poses /home/pyro/ws_moveit/src/moveit_tutorials/doc/subframes/src/subframes_tutorial.cpp:176:12: error: ‘moveit_msgs::CollisionObject {aka struct moveit_msgs::CollisionObject_<std::allocator<void> >}’ has no member named ‘subframe_poses’; did you mean ‘plane_poses’? 
cylinder.subframe_poses[0].orientation = tf2::toMsg(orientation); ^~~~~~~~~~~~~~ plane_poses doc/subframes/CMakeFiles/subframes_tutorial.dir/build.make:62: recipe for target 'doc/subframes/CMakeFiles/subframes_tutorial.dir/src/subframes_tutorial.cpp.o' failed make[2]: *** [doc/subframes/CMakeFiles/subframes_tutorial.dir/src/subframes_tutorial.cpp.o] Error 1 CMakeFiles/Makefile2:4783: recipe for target 'doc/subframes/CMakeFiles/subframes_tutorial.dir/all' failed make[1]: *** [doc/subframes/CMakeFiles/subframes_tutorial.dir/all] Error 2 [ 94%] Linking CXX executable /home/pyro/ws_moveit/devel/.private/moveit_tutorials/lib/moveit_tutorials/trajopt_example [ 94%] Built target trajopt_example Makefile:140: recipe for target 'all' failed make: *** [all] Error 2 Originally posted by pyropotato on ROS Answers with karma: 1 on 2019-09-14 Post score: 0 Original comments Comment by aPonza on 2019-09-23: Seems like the same as this issue. Answer: Did you build moveit from source beforehand? https://ros-planning.github.io/moveit_tutorials/doc/getting_started/getting_started.html#create-a-catkin-workspace-and-download-moveit-source was updated just two days after your question... Originally posted by jschleicher@Pilz with karma: 56 on 2019-09-23 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by pyropotato on 2019-09-29: tried again(link text) but gives other errors. I was however able to install it by following these instructions: [https://moveit.ros.org/install/source/]
{ "domain": "robotics.stackexchange", "id": 33765, "tags": "ros, moveit, ros-melodic, catkin, build" }
ROS communication via RS232
Question: I have servos that communicate via RS232 and I was wondering if anybody knew of any ROS libraries or packages that can communicate with devices over RS232. I looked at rosserial, and I think it is aimed toward Arduino, so probably not what I want. Any advice? Thanks! Originally posted by jbrown on ROS Answers with karma: 101 on 2015-04-06 Post score: 0 Original comments Comment by 130s on 2015-04-06: Worth looking at this QA thread http://answers.ros.org/question/84821/how-to-use-serial-port-in-ros/ Comment by jbrown on 2015-04-06: Thanks! I was looking at the rosserial_python package, that just might do what I want if I use some rs232 to usb converters. I can specify the ports in the launch file. Comment by gvdhoorn on 2015-04-07: @jbrown: rosserial_python is actually a pkg that implements a bridge over a RS232 link that allows embedded devices to use the ROS pub/sub and svc model to communicate with a full ROS system without having to install full ROS on the embedded device. It is not a generic RS232 library for Python&ROS Comment by jbrown on 2015-04-07: Thanks! That does help me understand the python library better. I guess there isn't a generic rs232 library then. At one time, ros communicated with these servos via usb to rs232 converters. Comment by yuanye on 2017-10-26: Hi jbrown, I want to use ROS to control a robot arm by RS232C. Did you find out how to communicate via rs232? thank you Comment by jbrown on 2017-10-27: @yuanye: The previous comment from gvdhoorn is pretty good advice, it can communicate with a device over RS232 without ROS being installed on the device (Thanks gvdhoorn!). I like using rosserial_python myself. It may take some reading and trial and error, but there are tutorials and code samples. Comment by jbrown on 2017-10-27: There is a ROS Wiki page for rosserial if you google it, it is informative and helpful.
I would also recommend that you look for sample code that uses rosserial as well as the Wiki page, the sample code will show you how to use rosserial_python in your code. What kind of robot arm are you using? Answer: I guess there isn't a generic rs232 library then. At one time, ros communicated with these servos via usb to rs232 converters. There is a 'generic' ROS serial communication library, it is called serial. It's C++ only though. For Python you could take a look at what rosserial_python itself uses. Originally posted by gvdhoorn with karma: 86574 on 2015-04-07 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by jbrown on 2015-04-07: Yea, that just might work! Thanks! Should you git clone into a catkin_ws and catkin_make, or build it like they said? Comment by gvdhoorn on 2015-04-08: No. Assuming you use a Ubuntu release (or derivative), you should install it using apt-get. More precisely: sudo apt-get install ros-DISTRO-serial. Only install things from source if you need to.
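Whichever library ends up doing the I/O (the serial C++ package, or pyserial underneath rosserial_python), the ROS node's job usually boils down to framing bytes for the servo's own protocol. The sketch below uses a made-up frame layout — the header bytes, field widths, and checksum rule are hypothetical, not taken from any real servo datasheet:

```python
def build_frame(servo_id, position):
    """Pack a position command into a hypothetical RS232 frame:
    0xFF 0xFF <id> <pos_lo> <pos_hi> <checksum>, where the checksum is
    the inverted low byte of the sum of the id and position bytes."""
    pos_lo, pos_hi = position & 0xFF, (position >> 8) & 0xFF
    checksum = (~(servo_id + pos_lo + pos_hi)) & 0xFF
    return bytes([0xFF, 0xFF, servo_id, pos_lo, pos_hi, checksum])

def parse_frame(frame):
    """Inverse of build_frame; raises ValueError on a malformed frame."""
    if frame[:2] != b"\xff\xff":
        raise ValueError("bad header")
    servo_id, pos_lo, pos_hi, checksum = frame[2], frame[3], frame[4], frame[5]
    if checksum != (~(servo_id + pos_lo + pos_hi)) & 0xFF:
        raise ValueError("bad checksum")
    return servo_id, pos_lo | (pos_hi << 8)

frame = build_frame(1, 512)
```

With pyserial, the frame would then go out over the wire via something like `ser.write(frame)`; the actual byte layout has to come from your servo's datasheet.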
{ "domain": "robotics.stackexchange", "id": 21357, "tags": "ros, servo" }
Acetic acid freezing distillation
Question: How can acetic acid be distilled by freezing when its freezing point is above the freezing point of water? Examples on the web show the water portion of the vinegar as ice and the acid as a liquid, but Wikipedia shows that the freezing point of acetic acid is 16.6 °C. Answer: The procedure you are describing sounds like fractional freezing. Acetic acid is frequently purified by this method, hence 100% acetic acid is often called "glacial" acetic acid. As an example of how this works, let's consider a mixture of acetic acid and water. Acetic acid has a freezing point of 16.6 °C. Water has a freezing point of 0 °C. They form a eutectic at -26.7 °C (although I do not know the composition). The phase diagram probably looks something like (but is not) the following. Phase Legend: A = Liquid solution B = Solid solution of acetic acid in water PLUS liquid solution C = Solid solution of water in acetic acid PLUS liquid solution D = Solid solution of acetic acid in water E = Solid solution of water in acetic acid F = Solid solution of acetic acid in water PLUS Solid solution of water in acetic acid A mixture of acetic acid and water that is approximately 72% acetic acid is cooled until it begins to freeze (solid blue vertical line), which happens around 2 °C. The phase changes from A (liquid solution) to C (Solid solution of water in acetic acid PLUS liquid solution). The solid solution is enriched in acetic acid compared to the original liquid solution. The composition of the remaining supernatant liquid solution is enriched in water compared to the original liquid solution. The composition of the solid solution can be determined by drawing a horizontal (red) line to the phase boundary between C and E and reading down to 91% acetic acid. The composition of the remaining supernatant liquid solution can be determined by drawing a horizontal (green) line to the phase boundary between A and C and reading down to 63% acetic acid.
The supernatant phase is removed and discarded, and the solid phase is allowed to melt. The process is repeated. Each successive freezing yields a new solid solution that is enriched in acetic acid, until the purity is as high as you want it. Note that since water and acetic acid have a eutectic point, if we had started on the left side of the eutectic point, the phases would have been reversed. The solid would be enriched in water and the supernatant would have been enriched in acetic acid. The supernatant would be removed and kept, and the solid would be discarded. The supernatant would then be cooled and fractionally frozen again to further enrich the supernatant in acetic acid until the eutectic point is reached (at which point the composition of the supernatant is equivalent to the composition of the solid phase). If you want to isolate acetic acid from a solution that is mostly water, I would distil the solution until the composition is mostly acetic acid, and then fractionally freeze it.
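The enrichment per freezing pass can be made quantitative with the lever rule applied to the tie line. The sketch below uses the illustrative compositions from the answer (72% overall, 91% solid, 63% supernatant); these numbers come from a hypothetical phase diagram, not from measured data:

```python
def lever_rule(overall, phase_a, phase_b):
    """Mass fraction of phase_a in a two-phase region.

    overall, phase_a, phase_b: compositions (e.g. mass % acetic acid) of
    the whole mixture and of the two coexisting phases on the tie line.
    """
    return (overall - phase_b) / (phase_a - phase_b)

# Compositions read off the illustrative phase diagram in the answer:
solid, liquid, overall = 91.0, 63.0, 72.0
frac_solid = lever_rule(overall, solid, liquid)
print(frac_solid)  # about 0.32: roughly a third of the mixture freezes out
```

So each pass freezes out about a third of the mass as the acid-enriched solid, which is why several passes are needed to approach glacial purity.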
{ "domain": "chemistry.stackexchange", "id": 241, "tags": "organic-chemistry, acid-base, purification" }
How to determine the quality of synthetic data?
Question: I'm working on a VAE model to produce synthetic data of X-Ray diffraction spectrums. I'm trying to figure out how I can measure the quality of the spectrums. The goal would be to produce synthetic data which is similar to the training data but also different from it. The spectrums should keep their characteristics, but should be different in terms of noise and intensity. I trained models which can produce those types of spectrums (because I checked some of them visually), but I don't know how to quantify the difference/similarity to the original data (1) and the difference between the produced synthetic spectrums within one dataset (2). Are there any methods to quantify these points? Answer: Due to its subjective nature, quantitative evaluation of synthetic images is difficult in general. However, there are metrics like the Inception Score or the FID score that are used for evaluation of generative models like GANs or VAEs. Technically, they consider two aspects of the generated data: similarity with the training data, and diversity within itself. Even though such metrics do not assess new images as we humans do, they are widely accepted in the community.
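As a rough illustration of the FID idea with NumPy alone: fit a Gaussian (mean and covariance) to feature vectors from each set of spectra and compute the Fréchet distance between the two Gaussians. The feature extraction step and the random data below are placeholders, not part of any standard pipeline:

```python
import numpy as np

def fid(x, y):
    """Frechet distance between Gaussians fitted to two sample sets.

    x, y: arrays of shape (n_samples, n_features), e.g. spectra or
    feature vectors extracted from them.
    """
    mu_x, mu_y = x.mean(axis=0), y.mean(axis=0)
    cov_x = np.cov(x, rowvar=False)
    cov_y = np.cov(y, rowvar=False)
    # Tr(sqrtm(cov_x @ cov_y)) equals the sum of the square roots of the
    # eigenvalues of cov_x @ cov_y (real and non-negative for covariance
    # factors), which avoids needing scipy.linalg.sqrtm.
    eigvals = np.linalg.eigvals(cov_x @ cov_y)
    trace_sqrt = np.sqrt(np.clip(eigvals.real, 0.0, None)).sum()
    return float(np.sum((mu_x - mu_y) ** 2)
                 + np.trace(cov_x) + np.trace(cov_y) - 2.0 * trace_sqrt)

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(500, 8))       # stand-in for real spectra features
fake_good = rng.normal(0.0, 1.0, size=(500, 8))  # similar distribution -> small FID
fake_bad = rng.normal(3.0, 1.0, size=(500, 8))   # shifted distribution -> large FID

print(fid(real, fake_good))  # close to 0
print(fid(real, fake_bad))   # much larger
```

The same score answers both of the question's points: computed against the training data it measures similarity (1), and computed between two synthetic batches it measures diversity (2).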
{ "domain": "ai.stackexchange", "id": 3056, "tags": "datasets, autoencoders, generative-model, variational-autoencoder, algorithm-request" }
PHP email script
Question: This is my first OOP script, a PHP email script designed for a site I'm working on: ndkutz(.)net. I want a user to be able to send an email to the barbershop owner from the website. I'm self taught and even though I know I'm on the right track I feel absolutely lost. Is my code any good? <?php $error = ''; $errormsg = ''; $finalMessage = ''; $finalName = ''; $finalSubject = ''; $finalTo = ''; $finalHeader = ''; $sendingEmail = ''; class emailConstruction { private $from = ""; private $name = ""; private $message = ""; public function scrubAll($data) { $data = htmlspecialchars($data); $data = trim($data); $data = strip_tags($data); return $data; } public function setfrom($from){ $this->from = stripslashes($from); $this->from = $from; } public function getFrom(){ return $this->from; } public function setName($name){ $this->name = scrubAll($name); $this->name = $name; } public function getName(){ return $this->name; } public function setMessage($message){ $this->message = scrubAll($data); $this->message = wordwrap($data,70,"<br />"); $this->message = $message; } public function getMessage(){ return $this->message; } } if(isset($_POST['submit'])) { if(empty($_POST['uname'])) { $error = 1; $errormsg = "Your name is required."; return false; }else{ $error = 0; $createEmail = new emailConstruction; $createEmail->setName($_POST['uname']); } if(empty($_POST['umail'])) { $error = 1; $errormsg = "Email address required."; return false; }else { $error = 0; $createEmail = new emailConstruction; $createEmail->setTo($_POST['umail']); } if(empty($_POST['umsg'])) { $error = 1; $errormsg = "Message is required"; return false; }else{ $error = 0; $createEmail = new emailConstruction; $createEmail->setMessage($_POST['umsg']); } if($error = 0) { $finalHeader = 'from:' . 
$finalFrom; $finalHeader .='MIME-Version: 1.0\r\n'; $finalHeader .='Content-type: text/html\r\n'; $finalMessage = $createEmail->getMessage(); $finalName = $createEmail->getName(); $finalSubject = 'New potiential client by the name of ' . $finalName; $finalTo = $createEmail->getTo(); $sendingEmail = mail($finalTo,$finalSubject,$finalMessage,$finalHeader); if($sendingEmail == true) { $emailMessageS = 'Email sent successfully!'; }else{ $emailMessageF = 'Error. Please try again!'; } } } ?> Answer: There are several issues to review. OOP First off, it's a good intention but a very bad implementation. A class should be made on purpose, but this class' purpose is uncertain. Why would you need a class that just prepares the data but not a class that sends the actual email? In your place I would create a class that has methods like setSubject(), setTo(), setBody() and - most importantly - send(). Such a class would have a very good use. Cargo cult code No offense, but every operator in your code should be justified. Writing a certain operator only because you've seen it used somewhere makes for cargo cult code (the name is from the story about savages on the Pacific islands creating straw planes during WWII in hopes those will bring cargo as good as real ones). Unfortunately, almost none of them are. Take the scrubAll() method, for example. htmlspecialchars() and strip_tags() are mutually exclusive functions. Once you run the former, the latter will find nothing to strip. You should apply only one of them, and it should be htmlspecialchars() as it does less harm. trim() could be useful, but I don't think it's necessary in this case. So that makes your scrubAll function rather useless. stripslashes() used in setfrom() is absolutely of no use. It could have been used under some conditions 10 years ago but in 2018 it makes no sense to call it just in case. I had to use it only once in the recent 5 years, to fix a malformed JSON string. Security.
Ironically, despite all these preparations, your code is still vulnerable to a Mail Injection attack. With user input injected right into headers, it's just a textbook example. At the very least, you should never put anything from user input into mail headers. Let alone the "From:" header, which will likely get your email marked as spam. If you want a neat way to reply, use the "Reply-To:" header instead and validate the entered email; an example can be taken straight from the manual page. Conclusion. So, take out all the getters from your class, call it "sendMail", remove the useless functions, and add a send() method that absorbs the code currently sitting in the global scope. For a model example, you may want to take a look at PHPMailer's usage examples. I am not asking you to write something similar, just to look at the way it is called in these examples.
{ "domain": "codereview.stackexchange", "id": 30103, "tags": "php, email" }
How do I figure out how to combine simpler quantum gates to create the gate I want?
Question: I want to create other quantum gates from the basic building blocks of a universal quantum gate set. I've been playing with IBM's quantum computing interface for that. I wanted to create a Toffoli gate, which I managed to in the end, but part of it was by trial and error. I want to create some other gates now, and I want to avoid the long arduous process of trial and error. Is there a better way? Is there also a way to construct the gate from the minimum number of gates necessary? I'm going to add my work on the Toffoli here for anyone to see / critique. % Matlab / Octave code q=[1 0]; X=[0 1;1 0]; Y=[0 -1i; 1i 0]; Z=[1 0;0 -1]; H=[1 1;1 -1]/sqrt(2); S = [1 0;0 1i]; St = [1 0;0 -1i]; C = [1 0 0 0;0 1 0 0; 0 0 0 1; 0 0 1 0]; Cgap = [1 0 0 0 0 0 0 0; 0 1 0 0 0 0 0 0; 0 0 1 0 0 0 0 0; 0 0 0 1 0 0 0 0; 0 0 0 0 0 1 0 0; 0 0 0 0 1 0 0 0; 0 0 0 0 0 0 0 1; 0 0 0 0 0 0 1 0]; T = [1 0; 0 exp(pi*1i/4)]; Tt = [1 0; 0 exp(-pi*1i/4)]; I = eye(2); I2 = eye(4); I3 = eye(8); CS = kron(T,Tt)*C*kron(Tt,Tt)*C*kron(T,S); CSt = kron(Tt,St)*C*kron(T,T)*C*kron(Tt,T); CgapS = kron(T,kron(I,Tt))*Cgap*kron(Tt,kron(I,Tt))*Cgap*kron(T,kron(I,S)); Toffoli = kron(I2,H)*kron(I,CS)*kron(C,I)*kron(I,CSt)*kron(C,I)*CgapS*kron(I2, H) Toffoli gate program using IBM's Quantum Experience (the two X gates on the very left are used to set the values of the qubits to 1; they can be removed to run the gate with 0 values): The way I figured it out was I found some lecture notes with the gate expressed as seven building blocks, but I didn't have some of the blocks, so I constructed those by trial and error. Here's the pdf: https://inst.eecs.berkeley.edu/~cs191/fa07/lectures/lecture9_fa07.pdf Answer: There are standard ways to approximate any unitary operation with just CNOTs, Hs, and Ts. In the specific case of the Toffoli gate you don't need something so general.
Start with the Toffoli gate: Use a construction that relies on operations having a square root to cut the worst-case number of controls from 2 to 1: Use a construction that moves controls off of arbitrary single-qubit operations and onto CNOTs: Oh hey that looks kinda familiar ;) Now simplify by sliding gates around so that some of them cancel. Zs can move over controls, but not over NOTs (but can hop over the space between two NOTs that undo each other). It's also useful to know how to move a CNOT over another CNOT's control by introducing a third CNOT. Hadamards cancel when paired, and turn Zs into Xs (and vice versa) when hopping since $HX = ZH$. Anyways, after some fiddling...: (Note: $T = Z^{1/4}$ and $T^\dagger = Z^{-1/4}$) You can confirm the circuit works by toying with it in Quirk. These constructions are explained in more detail in textbooks such as Nielsen and Chuang. The 'moving controls' one is particularly tricky, because you have to find $A$, $B$, $C$ such that $ABC=I$ but $AXBXCe^{i\theta}=U$. Or you can read a few blog posts about making a controlled-by-every-other-wire NOT using $O(n)$ basic gates that also uses these constructions but explains them in more detail.
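The gate-shuffling identities used in the simplification above are easy to sanity-check numerically. Here is a small sketch with the standard single-qubit matrices (plain numpy, not tied to the IBM interface):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
T = np.diag([1, np.exp(1j * np.pi / 4)])  # T = Z^(1/4)

# HX = ZH: hopping a gate over a Hadamard swaps X and Z
assert np.allclose(H @ X, Z @ H)
# Paired Hadamards cancel
assert np.allclose(H @ H, np.eye(2))
# Four T gates make a Z, so T really is a fourth root of Z
assert np.allclose(T @ T @ T @ T, Z)
```

The same style of check (building the candidate circuit as a matrix product and comparing it to the target gate) is exactly what the asker's Matlab script does for the full Toffoli.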
{ "domain": "cstheory.stackexchange", "id": 3757, "tags": "quantum-computing" }
Poincaré maps and interpretation
Question: What are Poincaré maps and how to understand them? Wikipedia says: In mathematics, particularly in dynamical systems, a first recurrence map or Poincaré map, named after Henri Poincaré, is the intersection of a periodic orbit in the state space of a continuous dynamical system with a certain lower-dimensional subspace, called the Poincaré section, transversal to the flow of the system. But I fail to understand any part of the above definition... Examples of Poincaré maps: The angular momentum and the angle $\theta$ of a kicked rotator, in a Poincaré map, are described as: If I'm not mistaken the closed lines are called tori, but how does one interpret this map? Another example: Billiard in a stadium-like table: the Poincaré map is: Where $p$ and $q$ are the generalized coordinates for momentum and position. Again, how to interpret this? (Please lean towards a physical explanation when answering.) Answer: The essential idea of a Poincaré map is to boil down the way you represent a dynamical system. For this, the system has to have certain properties, namely to return to some region in its state space from time to time. This is fulfilled if the dynamics is periodic, but it also works with chaotic dynamics. To give a simple example, instead of analysing the entire trajectory of a planet, you would only look at its position once a year, more precisely, whenever it intersects (with a given direction) a plane that is perpendicular to the plane in which the planet's trajectory lies and that contains the central celestial body around which the planet revolves. This plane is a Poincaré section for the orbit of this planet, as it is transversal to the flow of the system (which goes along the planet's trajectories). Now, if the planet's orbit is exactly periodic with a period length corresponding to one year, our yearly recording would always yield the same result. In other words, our planet would intersect the Poincaré section at the same point every year.
If the planet's orbit is, however, more complicated, e.g., the perihelion precession of Mercury, the point of intersection with the Poincaré section will slightly change each year. You can then consider a Poincaré map which describes how the intersection point for one year depends on the intersection point for the previous year. While I have only looked at the geometrical position for this example, you can also look at other observables, and probably need to if you cannot fully deduce the position in phase space from the geometrical position. In our example, you would also need to record the momentum of the planet (or some other observable). Now, what's the purpose of this? If our planet's orbit only deviates from perfect periodicity slightly, what happens during one year is just going in circles and thus "rather boring", obfuscating the interesting things that happen on larger time scales. The latter can be observed on our Poincaré map, which shows us how the orbit slightly changes each year. Therefore it may be easier or more illustrative to just analyse the Poincaré map instead of the entire trajectory. This is even more pronounced for billiards: between two collisions with a boundary, the dynamics is just $\dot{x}=v$. In particular, certain properties of your underlying dynamics translate to the Poincaré map, e.g.: If the dynamics is chaotic, so is your Poincaré map. If, in our planet example, the dynamics is periodic with a period of four years, your Poincaré map will alternate between four points. If your dynamics is quasi-periodic with two incommensurable frequencies (for example, if one observable is $\sin(x)+\sin(\pi x)$), the intersections with your Poincaré section will all lie on a closed curve. For example, most straight trajectories on the surface of a torus correspond to a dynamics with incommensurable frequencies and will eventually come arbitrarily close to any point on the torus, i.e., they fill the torus's surface.
Thus the intersection of the trajectory with a Poincaré section that is perpendicular to the torus's surface at all points will yield a circle (and non-perpendicular Poincaré sections will yield something close to an ellipse). In general, the dimension of the intersections with the Poincaré section is the dimension of the attractor minus one. Also, if you want to model an observed system in the sense of finding equations that reproduce its dynamics to some extent, you might start with modelling the Poincaré map (i.e., find an explicit formula for it).
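For the kicked rotator shown in the question, the Poincaré map is exactly this stroboscopic sampling: record $(\theta, p)$ once per kick. That map is the Chirikov standard map, and a minimal sketch is easy to write (the kick strength K and the initial condition below are arbitrary illustrative choices):

```python
import numpy as np

def standard_map(theta, p, K, n_steps):
    """Stroboscopic (Poincaré) map of the kicked rotator:
    record (theta, p) once per kick instead of the full flow."""
    points = np.empty((n_steps, 2))
    for i in range(n_steps):
        p = (p + K * np.sin(theta)) % (2 * np.pi)
        theta = (theta + p) % (2 * np.pi)
        points[i] = theta, p
    return points

# Small K: points trace closed curves (tori); large K: they scatter chaotically.
points = standard_map(theta=1.0, p=0.5, K=0.9, n_steps=2000)
```

Plotting `points` for several initial conditions reproduces pictures like the one in the question: closed curves where the motion lies on tori, and diffuse clouds in chaotic regions.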
{ "domain": "physics.stackexchange", "id": 16985, "tags": "phase-space, chaos-theory, non-linear-systems, complex-systems, poincare-recurrence" }
What would be the best design for spike-and-recovery and linearity-of-dilution validation experiments in one 96-well ELISA plate?
Question: I have searched the web for a detailed explanation of how to do such a validation experiment, but unfortunately couldn't find a satisfactory one. I came across the following sources: Thermo Scientific Tech tip #58: IMHO, a well-written description, but there are still some issues. Pros: Clear on describing terms, and straightforward calculations. Troubleshooting of the results is outlined. Cons: Not designed to be performed in one 96-well plate ELISA. No mention of any acceptance criteria for both validation tests. No mention of blank. Quansys Biosciences: a practical design to implement. Pros: Both spike-and-recovery and linearity-of-dilution experiments can be performed in one 96-well plate ELISA. Justified introduction of the endogenous sample, which is diluted the same amount as the spiked sample. Addition of blank. Mention of acceptance criteria for both types of experiments. Cons: No mention of troubleshooting. A somewhat messy plate design. R&D spike and recovery protocol: a brief but useful one. Pros: Clear on design. Mention of acceptance criteria at least for the spike recovery (80-120%). Mention of blank. Hint about the possibility of using a diluted neat sample until it gives a reading. Cons: Only one sample was used for assessment, rather than averaging over at least 3 samples. No mention of any criteria for linearity of dilution. Issues/Questions - Q1: How can we combine the two experiments into one plate to spare samples and materials? - Q2: Can we dilute the sample with assay diluent instead of sample diluent? - Q3: If sample material is limited, can we skip the neat sample and add a 1:2 diluted one instead to spare material? - Q4: In preparing spikes in Thermo Scientific tech #58, why were 10µl of spike stock solution added to 50µl of sample? What governs this addition: is there any ratio to stick to here, or is it arbitrary? Can we add 90µl of sample and 10µl of stock solution?
Please feel free to expand the sources or questions, and of course answers are more than welcome, to achieve the following aim: the best design for both validation tests (i.e., spike-and-recovery and linearity-of-dilution) in one 96-well plate ELISA that uses the minimum amount of sample material possible, with an explanation of calculations based on averages and a clear troubleshooting plan. PS: I am short of reputation to add important tags to enhance searching, e.g.: ELISA, validation. Answer: Background: There are a lot of recipes to extract protein from human tissues out there, but all of them boil down to one thing: preserving the tissue proteins as much as possible while obtaining a reasonable yield for downstream applications, using extraction buffer, extra techniques, and protease inhibitors. ELISA is one of those downstream applications that you might be interested in, but one important question remains: is the sample matrix that you have already obtained valid for the ELISA assay? Keep in mind, your sample is not serum; it is a mixture of tissue, extraction buffer, protease inhibitors, and other things as well. Two known assays are usually performed to address the above question. These are spike-and-recovery (SAR) and linearity-of-dilution (LOD), and these two assays are specific to the analyte that you want to measure in your samples (e.g., cytokines, factors, Igs, etc.) and also to the tissue and assay kit. Oftentimes, you are interested in performing these two assays with as few resources as possible, i.e., less time, less kit material, and, most importantly, less sample material.
This can be quite challenging, and it depends on many issues that are beyond the scope of this post to discuss. If you are interested in performing these two assays using only one 96-well ELISA plate and examining the validity of different sample matrices and sample buffers (lysis buffer, sometimes called extraction buffer) for a certain analyte, say X, then the scheme below is for you: Technical Notes: In this scheme, recovery is examined in 3 types of solutions: assay buffer, sample buffer, and samples. These differ in their complexity. Typically, the assay buffer is optimized to detect the standard protein provided with the commercial kit. The idea of spike-and-recovery is that you add a certain amount from the standard stock solution into the wells containing the solution to be tested (e.g., sample buffer or samples) and measure them, to see whether you can recover that amount again, and how much of it you can recover in %. If, for any reason, you couldn't recover that amount in comparison with a control well, where that same amount was added into assay buffer, then this means that something in the test solution is not in favor of the assay. The specific amount you add into the wells is called the spike, and it should be the same amount across all tested wells (try to be consistent). Make sure that when the added amount is diluted in the well, its final concentration can still be measured by the assay and lies inside your standard curve. E.g., if your standard curve is from 4-4500 pg/mL and each well has 100µl of sample, and you now want to add spike into that sample well, you may add whatever amount will still give you a reading inside the standard curve at the end. You may choose to add 10µl from the highest stock, which is 500 pg/mL, so you end up here with roughly 1/10, that is 50 pg/mL, which is fine as it lies inside the 4-500 pg/mL range.
Alternatively, you can add 50µl sample + 50µl spike (500 pg/mL) and would end up with a 1/2 dilution (which is the same in this scheme), that is 250 pg/mL, which is also fine. So it is up to you to choose, keeping in mind that the final concentration should lie inside your standard curve. For the linearity-of-dilution, it is obvious that you need a high-concentration solution that you can make a two-fold (or whatever fold you like) serial dilution from, whose readings should still lie inside the standard curve. The best candidates for this are the highly spiked wells. The LOD will tell you about the effect of different dilutions on the precision of the assay. For SAR and LOD you will get at the end an average percentage from at least 3 samples (a calculation example will be provided later). I added a fourth sample, but this fourth can be a different sample type, say samples from tissue culture, so it is up to you to re-design this scheme and be innovative. Choosing the best dilution is not an easy task; it sometimes involves compromises. For example, if you have a harsh sample buffer, you may not have good recovery at 1/2, 1/4, or 1/8 dilutions of the sample buffer in assay buffer, whereas almost good recovery is achieved at 1/16; then this dilution factor is most probably the one you should go for in your assay. On the other hand, 1/16 might not be detected by your ELISA assay, which is limited by its sensitivity. Here, the E wells can give you a clue as to whether you detect something in these unspiked samples or not; that's why I recommend you include in this scheme at least one sample with a high expected value of the analyte in question (this is sometimes not easy to predict); here this is sample 3.
One more thing: if you get good recovery at 1/16, DO NOT go to 1/32 or 1/64, as it is obvious that with these higher dilutions you are reducing the chance of detecting the analyte, which is limited by your assay's sensitivity. In most cases, you will see that the dilution factor giving good recovery of the sample buffer in columns 3 and 4 will coincide with that of the spiked samples in columns 5, 7, and 9. This will give you the assurance that you are on the right track. If sample 4 is extracted with a different sample buffer, then it is clear that you should not expect to end up with the same dilution factor (please, be more reasonable!)
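The recovery and back-calculation arithmetic behind SAR and LOD can be sketched as follows. The function names and the numbers are illustrative stand-ins, not part of the plate scheme above:

```python
def percent_recovery(spiked_reading, unspiked_reading, expected_spike):
    """% recovery = (spiked - endogenous) / expected spike * 100.
    An 80-120% window is a commonly used acceptance criterion."""
    return (spiked_reading - unspiked_reading) / expected_spike * 100

def percent_linearity(diluted_reading, dilution_factor, neat_reading):
    """Back-calculate a diluted reading to the neat value, in %."""
    return diluted_reading * dilution_factor / neat_reading * 100

# Hypothetical numbers: neat sample reads 45 pg/mL, the spiked well reads
# 95 pg/mL, and 50 pg/mL of spike was added -> 100% recovery.
print(percent_recovery(95, 45, 50))   # 100.0
# A 1/4 dilution reading 25 pg/mL against a neat reading of 100 pg/mL -> 100%.
print(percent_linearity(25, 4, 100))  # 100.0
```

In practice you would average these percentages over at least 3 samples per dilution factor, as recommended above.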
{ "domain": "biology.stackexchange", "id": 1012, "tags": "human-biology, assay-development, elisa" }
Counting number of times we visit vertices of given tree
Question: Suppose we are given a tree $T=(V,E)$ where $|V|=10$. Now, one of the vertices is the goal and we want to find it. The structure of the tree is given as input, but which vertex is the goal is unknown. In each time step, we can visit an arbitrary vertex $v$ and find out which edge of $v$ is nearest to the goal. How many times must we visit nodes to identify which node is the goal? If we visit $v$ and find out its neighbor $u$ is the goal, we don't need to visit $u$. I read this link, but the answer is $3$; how do we get the answer $3$? This question comes from an entrance exam of a university and it's not homework. Answer: Given a tree and a node $u$ in it, consider $u$ as the root of the tree. If no subtree (except the whole tree) has more than $n/2$ nodes, we will call $u$ a center of that tree. A tree can have multiple centers. Fact: A tree has at least one center. Proof: Select an arbitrary node $u_1$. If $u_1$ is a center, we are done. Otherwise, suppose the subtree at $u_2$ of the tree rooted at $u_1$ has more than $n/2$ nodes. Now we consider $u_2$. If $u_2$ is a center, we are done. Otherwise, repeat the process. Since the number of nodes in the largest subtree is strictly decreasing, in the end, we will find a center. As in the question, suppose we are given a tree $T=(V,E)$ with $|V|=10$. Pick $v_1$, an arbitrary center of $T$. If $v_1$ or one of its neighbors is the goal, we have found the goal with $1$ visit. Otherwise, consider $T$ as rooted at $v_1$. We know which edge incident to $v_1$ is nearest to the goal. In other words, we know which subtree contains the goal. Let that subtree be $T_1$, which has no more than $10/2=5$ nodes. Pick $v_2$, an arbitrary center of $T_1$. If $v_2$ or one of its neighbors is the goal, we have found the goal with $2$ visits. Otherwise, consider $T_1$ as rooted at $v_2$. We know which edge in $T_1$ incident to $v_2$ is nearest to the goal. In other words, we know which subtree of $T_1$ contains the goal.
Let that subtree be $T_2$, which has no more than $5/2=2.5$ nodes, i.e., which has at most $2$ nodes. Pick $v_3$, an arbitrary node in $T_2$. The goal is either $v_3$ or the other node in $T_2$ that is connected to $v_3$ if it exists. So we have found the goal with $3$ visits. In summary, $10\ \rightarrow\ 10//2=5\ \rightarrow\ 5//2 = 2$. Exercise. Explain that it is enough for $3$ visits to find the goal if $|V|=12$.
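The halving argument above translates directly into code. Here is a minimal Python sketch (the function names are mine, and the oracle is simulated by letting `visits_needed` know the goal, which is exactly the information one visit reveals):

```python
def component(adj, nodes, root, banned):
    """Nodes of the subtree hanging off `root` when the edge to `banned` is cut."""
    seen, stack = {root}, [root]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w in nodes and w != banned and w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def center(adj, nodes):
    """Walk toward the heaviest subtree until every subtree has <= n//2 nodes."""
    u = next(iter(nodes))
    while True:
        heavy = next((w for w in adj[u] if w in nodes
                      and len(component(adj, nodes, w, u)) > len(nodes) // 2),
                     None)
        if heavy is None:
            return u
        u = heavy

def visits_needed(adj, goal, nodes):
    """Count visits until the goal is identified, always visiting a center."""
    visits = 0
    while True:
        v = center(adj, nodes)
        visits += 1
        if v == goal or goal in adj[v]:
            return visits
        # One visit tells us which edge of v points toward the goal.
        nodes = next(component(adj, nodes, w, v)
                     for w in adj[v]
                     if w in nodes and goal in component(adj, nodes, w, v))

# Worst case over all goals on a path with 10 vertices: 3 visits suffice.
path = {i: [j for j in (i - 1, i + 1) if 0 <= j < 10] for i in range(10)}
worst = max(visits_needed(path, g, set(range(10))) for g in range(10))
print(worst)  # 3
```

This mirrors the proof: 10 → 5 → 2, so three center visits always suffice on 10 vertices.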
{ "domain": "cs.stackexchange", "id": 19342, "tags": "algorithms" }
OOP Matchsticks Game [Update]
Question: I made the changes that were suggested to improve the program in the previous version. I would like to know if it is decent now or is there something to tweak. class MatchsticksGame: def __init__(self, initial_matchsticks=23): self.players = ('Player 1', 'Player 2') self.player = None self.turn = 0 self.matchsticks = initial_matchsticks def _current_player(self): return self.players[self.turn % 2] def _show_matchsticks(self): x, y = divmod(self.matchsticks, 5) return '||||| '*x + '|'*y def _get_move(self, player): while True: try: matchsticks_to_remove = int(input(f'{player} removes matchsticks: ')) if 1 <= matchsticks_to_remove <= 3: return matchsticks_to_remove print('You can delete only between 1 and 3 matchsticks.') except ValueError: print('The value entered is invalid. You can only enter numeric values.') def _play_turn(self): self.player = self._current_player() self.turn += 1 def _game_finished(self): return self.matchsticks <= 0 def play(self): print('Game starts.') print(self._show_matchsticks()) while self.matchsticks > 0: self._play_turn() matchsticks_to_remove = self._get_move(self.player) self.matchsticks -= matchsticks_to_remove print(self._show_matchsticks()) if self._game_finished(): print(f'{self.player} is eliminated.') break if __name__ == '__main__': game = MatchsticksGame() game.play() Answer: There are a few things that you may want to improve. Currently, the active player is determined in _current_player() by the turn number: if it's even, it's the turn of Player 1, otherwise, it's the turn of Player 2. This works, but it's not very explicit, and it would fail for any single- or multiplayer version of your game where the number of players is not two. Given that you only need the value determined by _current_player() to be able to print the right player name, I'd recommend using a different data structure to handle the players: deque from the collections module. 
A deque is basically a list that is optimized for adding and removing elements at both ends. It also has a rotate(n=1) method that shifts the elements in the list by n positions. You can use this after every turn so that the first element of the deque always contains the current player. In this way, you can dispose of the class variables turn and player, as well as the methods _current_player() and _play_turn(). Your program doesn't properly handle cases in which the current player enters a number that exceeds the number of remaining sticks. For example, if there is only one stick left, the current player can still input 3. They will lose after all, but _show_matchsticks() will produce an output that looks as if there were still sticks remaining. You can solve this by changing the input validation in _get_move() to take the current number of sticks into account. Your main game loop is a while loop with a losing condition, but you also use the method _game_finished() to determine the losing condition, and to explicitly break from the while loop even though it would stop at that point anyway. You can either remove the check for _game_finished(), or you can change the conditional while loop to a loop that will need to be halted explicitly (i.e. while True:). The latter version is an idiom frequently found for game loops or event loops. Given the simplicity of your game, however, I think it's more explicit and reader-friendly if you make the while loop determine whether the game has been lost, thus getting rid of _game_finished(). There are two calls to _show_matchsticks(), one before and one inside of your game loop. By rearranging them, you can reduce that to just one. For reasons of brevity, I'd recommend subtracting the result of _get_move() from matchsticks without storing it in the intermediate variable matchsticks_to_remove. Currently, you haven't limited the number of characters per line.
PEP8, which is the ultimate reference for all issues concerning Python coding style, recommends a line length of 79 characters. This affects the lines with print() statements, which can be reformatted to include line breaks at appropriate places. For more complex programs, you may want to consider to store the strings in constants that are defined in one place anyway, because this will make changing the game output easier e.g. if you want to offer localized versions of your game. Here's a version that combines these suggestions: from collections import deque class MatchsticksGame: def __init__(self, initial_matchsticks=23): self.players = deque(['Player 1', 'Player 2']) self.matchsticks = initial_matchsticks def _show_matchsticks(self): x, y = divmod(self.matchsticks, 5) return '||||| '*x + '|'*y def _get_move(self, player): max_value = min(3, self.matchsticks) while True: try: matchsticks_to_remove = int( input(f'{player} removes matchsticks: ')) if 1 <= matchsticks_to_remove <= max_value: return matchsticks_to_remove print('You can delete only between 1 and ' f'{max_value} matchsticks.') except ValueError: print('The value entered is invalid. You can only enter ' 'numeric values.') def play(self): print('Game starts.') while self.matchsticks > 0: print(self._show_matchsticks()) player = self.players[0] self.matchsticks -= self._get_move(player) self.players.rotate() print(f'{player} is eliminated.') if __name__ == '__main__': game = MatchsticksGame() game.play() However, now there is only little justification left to use an object-oriented structure – a procedural version of your game is even more readable than that. This version will feature your methods _get_move(), _show_matchsticks(), and play() as separate module-level functions, and the game state will be passed between the functions as arguments. play() will be expanded by the initialization statements, and relabeled as main() because it's now the main top-level function. 
In my opinion, this is pretty close to the most concise and most explicit version of your game that you can get: from collections import deque def show_matchsticks(matchsticks): x, y = divmod(matchsticks, 5) return '||||| ' * x + '|' * y def get_move(player, matchsticks): max_value = min(3, matchsticks) while True: try: value = int(input(f'{player} removes matchsticks: ')) if 1 <= value <= max_value: return value print(f'You can delete only between 1 and {max_value} matchsticks.') except ValueError: print('The value entered is invalid. You can only enter numeric ' 'values.') def main(matchsticks=23): players = deque(['Player 1', 'Player 2']) print('Game starts.') while matchsticks > 0: print(show_matchsticks(matchsticks)) player = players[0] matchsticks -= get_move(player, matchsticks) players.rotate() print(f'{player} is eliminated.') if __name__ == '__main__': main()
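As a standalone illustration of the turn-taking mechanism suggested above, rotating a deque keeps the current player at index 0:

```python
from collections import deque

players = deque(['Player 1', 'Player 2'])
order = []
for _ in range(4):
    order.append(players[0])  # the current player is always at the front
    players.rotate()          # shift by one: the next player moves to the front

print(order)  # ['Player 1', 'Player 2', 'Player 1', 'Player 2']
```

The same pattern scales to any number of players without touching the game loop, which is the main advantage over the turn-counter approach.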
{ "domain": "codereview.stackexchange", "id": 42986, "tags": "python, beginner, python-3.x, object-oriented" }
Moving wedge and pulley system
Question: A wedge of mass $M$ rests on a rough horizontal surface with coefficient of static friction $\mu$. The face of the wedge is a smooth plane inclined at an angle $ \alpha$ to the horizontal. A mass $m_1$ hangs from a light string which passes over a smooth peg at the upper end of the wedge and attaches to a mass $m_2$ which slides without friction on the face of the wedge. A) Find the accelerations of $ m_1, m_2 $ and the tension in the string when $ \mu $ is very large. B) Find the smallest coefficient of friction such that the wedge will remain at rest. Attempts: There is no specification of whether any of the masses $m_i$ is bigger than the others, so let us assume that $m_1$ goes down and $m_2$ goes up the slope. Then A) is fine. (A minus error in the final answer for A) is introduced if I assume the other orientation.) As for B), if we consider the wedge+masses as a system, then, to conserve linear momentum, if the masses move as above the wedge must move rightwards. So the frictional force acts leftwards until the wedge slips. The only other force acting on the wedge is via the normal contact force from $m_2$. A component of this force acts to accelerate the wedge horizontally. Let's write this relative to some fixed inertial frame of reference with $x$ rightwards. Then the above recast into symbols gives $$-\mu (M+m_1+m_2)g - m_2 g \cos \alpha \sin \alpha = 0 $$ at the point of slip of the wedge. However, solving this for $\mu$ gives a negative result. Where did I go wrong? Answer: Part A $\mu$ large implies that the wedge does not move.
Therefore the two forces on the small masses along their direction of movement are: $$F_1 = m_1 g, \quad F_2 = m_2 g \sin(\alpha).$$ They pull in different directions on the pulley, so the net force (taken positive when $m_1$ goes down) is: $$F = F_1 - F_2 = g [m_1 - m_2 \sin(\alpha)].$$ Then the acceleration of both masses is: $$ a = \frac{F}{m} = \frac{g}{m_1 + m_2}[m_1 - m_2 \sin(\alpha)].$$ You should not be able to introduce a minus error just by choosing a convention. I assume that you just did not flip all the minus signs. Part B The wedge has to be at rest, so the force of friction has to compensate the tangential forces exactly. This means $$ \mu F_\text{down} \overset != F_\text{tangential}. $$ I now construct the forces that act on the masses: The tensions will change with the acceleration: $$ T_1 = m_1[g-a], \quad T_2 = m_2[g\sin(\alpha) + a].$$ To digest this, set $a =0$, the static case. Then the tensions are just the forces that gravity exerts on the masses. The extreme case, where $m_1$ is in free fall, is $a = g$. Then $T_1$ has to be zero, since the object is in free fall. The mass $m_2$ exerts a normal force onto the wedge that will push it to the left. The tension $T_2$ will drag on the pulley which will then push on the wedge to the right. The force to the left is given by $$ m_2 g \cos(\alpha)\sin(\alpha)$$ whereas the force to the right (just the $x$-component) is given by $$ m_2[g \sin(\alpha) + a] \cos(\alpha).$$ In the static case $a = 0$, those two forces are equal and do not make the wedge move. In the dynamic case, they try to move the wedge. This is what you meant by conservation of linear momentum, just written down with the forces. The net force to the right (assuming $a > 0$) is $$ m_2 a \cos(\alpha) = F_\text{tangential}.$$ Now the forces that push the wedge downward and give it a frictional force. The mass $M$ will drag downward. Then $T_1$ will push the pulley down, as does the $y$-component of $T_2$.
The last thing is the $y$-component of the normal force. The last two mostly add up to just $m_2 g$; there is still a $m_2 a \sin(\alpha)$ term left from the dynamics in $T_2$: $$\underbrace{m_1[g-a]}_{T_1} + \underbrace{m_2[g\sin(\alpha) + a]\sin(\alpha)}_{{T_2}_y} + m_2 g \cos^2(\alpha)$$ Together, this is: $$F_\text{down} = Mg + m_1[g-a] + m_2[g + a\sin(\alpha)].$$ Now set them equal, plug in $a$, and solve for $\mu$. You might want to omit all the terms with $a$ here, since that makes it really complicated. If $M$ is large, you can do that without a large error.
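Putting the expressions together, a quick numerical sanity check is possible. The masses and angle below are my own sample values, chosen so the static case is easy to verify, and the helper follows the force balance above (it assumes $a \ge 0$, i.e. $m_1 \ge m_2\sin\alpha$):

```python
import math

def wedge(M, m1, m2, alpha, g=9.81):
    """Acceleration of the masses and the minimum mu keeping the wedge
    at rest, following the force balance derived above."""
    a = g * (m1 - m2 * math.sin(alpha)) / (m1 + m2)
    F_tangential = m2 * a * math.cos(alpha)
    F_down = M * g + m1 * (g - a) + m2 * (g + a * math.sin(alpha))
    return a, F_tangential / F_down

# Balanced case m1 = m2*sin(alpha): a = 0, so no friction is needed at all.
a, mu = wedge(M=5.0, m1=1.0, m2=2.0, alpha=math.pi / 6)
print(round(a, 12), round(mu, 12))  # 0.0 0.0
```

Making $m_1$ heavier than $m_2\sin\alpha$ yields $a > 0$ and a strictly positive minimum $\mu$, as the derivation predicts.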
{ "domain": "physics.stackexchange", "id": 15576, "tags": "homework-and-exercises, newtonian-mechanics, friction" }
Why cannot match $ Bool \equiv Bool $ with $ refl $ while $1 \equiv 1$ can?
Question: This code depends on agda-stdlib: {-# OPTIONS --without-K #-} open import Data.Nat open import Data.Bool open import Relation.Binary.PropositionalEquality -- this code doesn't check, cannot match e with refl why : (e : Bool ≡ Bool) -> ℕ why refl = zero but-why : (e : 1 ≡ 1) -> ℕ but-why refl = zero I know the K-rule means I cannot match $ \forall a.a \equiv a $ with $ refl $, but if $a$ is a concrete value, it can be (i.e. $1 \equiv 1$ can be matched with $refl$). But why can I not match $Bool \equiv Bool$ with $refl$? Why are types and values treated differently in a dependently-typed programming language? (Maybe related: What does it mean if we disable K-rule in Agda?) Answer: Essentially, even without K, you can match against refl, but at least one of the endpoints must be an "unconstrained" variable. Both of these type check in Agda-sans-K: foo : ∀ { A B : Set } -> A ≡ B → A → B foo refl a = a fooℕ : ∀ { A : Set } -> A ≡ ℕ → A → A fooℕ refl a = a + 3 These cases are (probably) handled by a few standard induction principles for equality (AKA dependent elimination principles), such as "path induction" and "based path induction", which do not rely on axiom K, but are a fundamental ingredient in the definition of the equality type. In contrast, the following does not type check, since the two endpoints are constrained to be the same: bar : ∀ { A : Set } -> A ≡ A → A → A bar refl a = a I'm unsure about why this does not work as expected. By comparison, in Coq this works fine: Definition bar: forall A : Type, A = A -> A -> A := fun A p x => match p with | eq_refl => x end . As the translation of the OP's code: Definition foo: (bool = bool) -> nat := fun p => match p with | eq_refl => 0 end . In these last two cases, we do not even need dependent elimination; the regular non-dependent one suffices.
I guess that Agda, when no endpoint is "free", internally translates the match into some form which relies on axiom K, even if in some cases (like the above) there is no real need to do so. One can indeed define the original why pattern match, by generalizing it so that at least one endpoint is "free". why-generalized : {A : Set} -> (e : Bool ≡ A) -> ℕ why-generalized {.Bool} refl = zero why : (Bool ≡ Bool) -> ℕ why = why-generalized (Well, in this case we could also use why x = zero, but the point of the above code is to pattern match against refl)
{ "domain": "cs.stackexchange", "id": 12351, "tags": "proof-assistants" }
Determining if given languages are regular or recursively enumerable
Question: I came across the following problem: Suppose $L_1$ and $L_2$ are two languages and $M$ is a Turing machine. $L_1 =\{M|M$ accepts at most 2016 strings$\}$ $L_2=\{M|M$ accepts at least 2016 strings$\}$ If $L=L_1\cap L_2$, then which one of the below is correct? A) $L'$ is recursively enumerable B) $L\cap L'$ is recursively enumerable C) $L\cup L'$ is recursively enumerable D) $L$ is recursively enumerable The solution given was: $L_1$ by definition is regular language (“atmost”). $L_2$ by definition is recursively enumerable (“at least”). Recursively enumerable languages are closed under regular intersection. Hence, $L = L_1 ∩ L_2$ is recursively enumerable. Thus option D. Recursively enumerable languages are not closed under complementation. Hence $L'$ is not recursively enumerable. Hence option A is wrong. And since $L'$ is not recursively enumerable, options B and C are also wrong. My doubt: I am struggling with how $L_1$ is regular and $L_2$ is recursively enumerable, that is, with the first two sentences: $L_1$ by definition is regular language (“atmost”). $L_2$ by definition is recursively enumerable (“at least”). Usually a language definition specifies criteria on the format of an input string which allow us to reject or accept it. But here the definition specifies how many strings the language has, or how many its corresponding Turing machine accepts. I found this similar-sounding problem, in which the answer gives a membership algorithm, hence proving the language in that problem is indeed recursively enumerable. But I feel this does not apply to my problem. Correct? Or is the problem incorrect in specifying the number of strings the machines can accept instead of the format of acceptable strings? Answer: Your intuition is entirely correct; this solution is nonsense. $L_1$ isn't a regular language; it's not even a decidable language, by Rice's Theorem, and also not recognizable (aka recursively enumerable). $L_2$ is in fact recognizable.
The following algorithm recognizes it: Take an encoded Turing machine $M$ as input. First simulate it on every input of length ≤ 1 for 1 step, then on every input of length ≤ 2 for 2 steps, and so on. Keep track of how many distinct inputs it's ever accepted. If this number ever exceeds 2016, return True. But as it turns out, none of this matters to the actual problem. Note that (C) is taking the union of a language with its complement, and the union of any language with its complement is $\Sigma^*$. This language is regular, thus decidable, thus recognizable. So the correct answer is (C). EDIT: Of course, the intersection of a language with its complement is $\varnothing$, which is also regular, thus decidable, thus recognizable. So (B) is also true.
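The dovetailing loop described in this answer can be sketched in Python. This is a sketch under assumptions: `machine_step(w, k)` is a hypothetical bounded simulator that returns True iff the machine accepts input `w` within `k` steps, and the threshold is a parameter standing in for the 2016 of the problem.

```python
from itertools import product

def recognizes_at_least(machine_step, threshold, alphabet="01"):
    """Semi-decision procedure for 'accepts at least `threshold` strings':
    halts with True on positive instances, runs forever on negative ones
    (which is exactly what a recognizer is allowed to do).
    `machine_step(w, k)` is a hypothetical bounded simulator: True iff the
    machine accepts input w within k steps."""
    accepted = set()
    k = 0
    while True:
        k += 1
        # Round k: simulate the machine on every input of length <= k for k steps.
        for length in range(k + 1):
            for letters in product(alphabet, repeat=length):
                w = "".join(letters)
                if w not in accepted and machine_step(w, k):
                    accepted.add(w)
                    if len(accepted) >= threshold:
                        return True
```

For instance, with a toy simulator `lambda w, k: w.startswith("1") and k >= len(w)` (a stand-in "machine" accepting every string that starts with 1), `recognizes_at_least(toy, 5)` halts with True; a machine accepting fewer than `threshold` strings would make the call run forever.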
{ "domain": "cs.stackexchange", "id": 11888, "tags": "formal-languages, regular-languages, undecidability, semi-decidability" }
Designing the constructor interface for a reflection object (any class)
Question: I am working on cleaning up and making open source a C++ reflection library that has served me well over the last few years. One of the most important classes of the library is Object, which is used to contain and work with¹ any type derived from the object's template parameter T (or any type at all if T is void). I am interested in getting some feedback on the constructor interface of the Object class (any feedback on other parts of the code is of course welcome as well), specifically: Is the purpose of each constructor understandable? Would it be understandable without the accompanying comment? What would make it easier to understand? Are there too many constructors? Would a single constructor that handles all cases (using tag dispatching) be preferable? Compiler errors are decent but not great when trying to construct an Object with incompatible arguments. Any suggestions to improve them? The constructor taking a std::reference_wrapper<Object<...>> and similar uses a templated r-value reference parameter to handle all variants of the reference_wrapper argument (e.g., std::reference_wrapper<const Object<...>>). This feels ugly to me, but the only alternative I can think of is to duplicate the constructor for each variant. Is the EnableIf logic understandable? object.h #include "traits.h" namespace Reflect { // A class derived from Object that ensures the underlying object contains a // value rather than a reference. template <typename T = void> class Value; // A class derived from Object that ensures the underlying object contains a // reference rather than a value. template <typename T = void> class Reference; // A class that holds and grants access to a value of type T or a type derived // therefrom. // The value may be owned by the object, or merely be referenced by the object. 
template <typename T = void> class Object { public: static_assert(std::is_same<T, typename std::decay<T>::type>::value, "Object must be of unqualified type."); using element_type = T; public: // Construct object containing a copy of other. // The reflected type of the object will be T_Derived. template < typename T_Derived, Detail::EnableIf< Detail::IsDerived<T_Derived, T>::value && std::is_constructible<T_Derived, T_Derived>::value && !Detail::IsSameTemplate<T_Derived, Object<T>>::value && !Detail::IsSameTemplate<T_Derived, Value<T>>::value && !Detail::IsSameTemplate<T_Derived, Reference<T>>::value && !Detail::IsSameTemplate<T_Derived, std::reference_wrapper<T>>::value >... > Object(T_Derived &&other); // Construct object containing a copy of the other object's value. // The reflected type of the object will be equivalent to that of other. Object(Object const &other); // Construct object containing the other object's moved value. // The reflected type of the object will be equivalent to that of other. Object(Object &&other); // Construct object containing a copy of the other object's value. // The reflected type of the object will be equivalent to that of other. // Throws an exception if the other object's value is not derived from T. template < typename T_Related, Detail::EnableIf< Detail::IsRelated<T_Related, T>::value >... > Object(Object<T_Related> const &other); // Construct object containing the other object's moved value. // The reflected type of the object will be equivalent to that of other. // Throws an exception if the other object's value is not derived from T. template < typename T_Related, Detail::EnableIf< Detail::IsRelated<T_Related, T>::value >... > Object(Object<T_Related> &&other); // Construct object referencing the value of other. // The reflected type of the object will be T_Derived. template < typename T_Derived, Detail::EnableIf< Detail::IsDerived<T_Derived, T>::value >... 
> Object(std::reference_wrapper<T_Derived> other); // Construct object referencing the other object's value. // The reflected type of the object will be equivalent to that of other. // Throws an exception if the other object's value is not derived from T. template < typename T_Ref, Detail::EnableIf< Detail::IsSameTemplate<T_Ref, std::reference_wrapper<T>>::value && Detail::IsRelated<typename T_Ref::type::element_type, T>::value && (Detail::IsSameTemplate<typename T_Ref::type, Object<T>>::value || Detail::IsSameTemplate<typename T_Ref::type, Value<T>>::value || Detail::IsSameTemplate<typename T_Ref::type, Reference<T>>::value ) >... > Object(T_Ref &&ref); // Construct object containing an instance of T, forwarding the provided // arguments to T's constructor. // The reflected type of the object will be T. template < typename ...T_Args, Detail::EnableIf< std::is_constructible<T, T_Args...>::value && !Detail::IsSameTemplate<T_Args..., Object<T>>::value && !Detail::IsSameTemplate<T_Args..., Value<T>>::value && !Detail::IsSameTemplate<T_Args..., Reference<T>>::value && !Detail::IsSameTemplate<T_Args..., std::reference_wrapper<T>>::value >... > Object(T_Args &&...args); // Destroy the object and its contents. ~Object(); }; } // namespace Reflect traits.h #include <type_traits> namespace Reflect { namespace Detail { // Simplified enable_if for better usability. enum struct EnableIfType { }; template <bool condition> using EnableIf = typename std::enable_if<condition, EnableIfType>::type; // Indicate whether the first type is derived from the second type. // Evaluates to false if more than two types are specified. // Note that void is considered a base of all types. 
template <typename, typename ...> struct IsDerived : std::false_type { }; template <typename T_Derived, typename T_Base> struct IsDerived<T_Derived, T_Base> : std::conditional< std::is_void<typename std::decay<T_Base>::type>::value || std::is_base_of<typename std::decay<T_Base>::type, typename std::decay<T_Derived>::type>::value || std::is_same<typename std::decay<T_Base>::type, typename std::decay<T_Derived>::type>::value, std::true_type, std::false_type >::type { }; // Indicate whether the first type is related to the second type in a way that // could be resolved by the reflection system. // Evaluates to false if more than two types are specified. template <typename, typename ...> struct IsRelated : std::false_type { }; template <typename T_Lhs, typename T_Rhs> struct IsRelated<T_Lhs, T_Rhs> : std::conditional< IsDerived<T_Lhs, T_Rhs>::value || IsDerived<T_Rhs, T_Lhs>::value, std::true_type, std::false_type >::type { }; // Indicate whether the first type is of the same template as the second type. // Evaluates to false if more than two types are specified. template <typename, typename ...> struct IsSameTemplateImpl : std::false_type { }; template <template <typename ...> class T, typename ...U, typename ...V> struct IsSameTemplateImpl<T<U...>, T<V...>> : std::true_type { }; template <typename ...T_Args> struct IsSameTemplate : IsSameTemplateImpl<typename std::decay<T_Args>::type...> { }; } } // namespace Reflect::Detail The code is compilable (but will fail linking). main.cpp #include "object.h" struct Base { }; struct Derived : Base { }; int main() { // Create an object of any type containing reflected type int with value 42. Reflect::Object<> object1 = 42; // Create an object of Base type containing reflected type Derived. Reflect::Object<Base> object2 = Derived(); // Error, cannot create Object<int> from string. //Reflect::Object<int> object("hello"); } Please note that the code is written for C++11 or later. 
Some of the template metaprogramming is therefore messier than necessary for newer C++ versions. ¹ To "work with" the object means to access its reflected properties, methods, etc., through the reflection system. For example, given an Object<void> obj containing a std::string, obj.getProperty("length") would return a Reference<void> to the length of the string. obj.call("append", " world") would append " world" to the string, etc. This is what sets Object apart from e.g. std::any or a base class pointer. Answer: Is the EnableIf logic understandable? Well, I wonder how it's different from std::enable_if and related stuff. I assume there is something different about it, so that's a mental burden right there. Is the purpose of each constructor understandable? Would it be understandable without the accompanying comment? What would make it easier to understand? I understand what a copy constructor and move constructor are supposed to do, without any commentary. If there are comments, I wonder whether they are subtly different from what those functions are supposed to do, or are documenting caveats. As for the move constructor, I'm puzzled by the ability to move a reference. I think the ctors would be easier to read, in general, if the huge blocks of constraints were factored out into a single named constraint. And you can use the _t aliases rather than ::type all over the place. namespace Reflect { namespace Detail { You can now write simply: namespace Reflect::Detail { …which is used to contain and work with any type derived from the object's template parameter T (or any type at all if T is void). This may be beyond the scope of the review, but I wonder how the T=void case is different from std::any, and the constrained case different from a normal base-class pointer. How is "related in a way that can be resolved by the reflection system" different from dynamic_cast?
{ "domain": "codereview.stackexchange", "id": 30246, "tags": "c++, c++11, interface, constructor" }
Equivalent Straight Line Embedding of a Planar Graph Drawing on a Grid
Question: An embedding of a graph G on a surface Σ is a representation of G on Σ in which points of Σ are associated to vertices and simple arcs are associated to edges in such a way that: the endpoints of the arc associated to an edge e are the points associated to the end vertices of e, no arcs include points associated with other vertices, two arcs never intersect at a point which is interior to either of the arcs. Two embeddings of a planar graph in the plane are called equivalent if for every vertex of the graph the cyclic order of the incident edges is the same in both embeddings. I am looking for a reference which shows that any Jordan arc embedding of a planar graph can be equivalently embedded as a straight line drawing with the $n$ vertices of the graph lying on the vertices of an $O(n) \times O(n)$ grid. (Certainly any planar graph can be embedded with its vertices on the vertices of such a grid but I'm looking for an embedding that's equivalent to the originally given one.) Schnyder's algorithm seems to produce an embedding equivalent to the topological embedding given as its input but I've not managed to find a proof of this. Does anybody know of such a theorem? Answer: "Schnyder's algorithm" computes an embedding on an $O(n)\times O(n)$ grid. See here for a reference, and see here for a more general treatment that is not behind a paywall. Note that if your initial (topological) embedding is not unique, you can just add edges (while preserving planarity) until the graph becomes 3-connected (or even a triangulation) and then apply some drawing algorithm. Once the layout is computed you simply remove the augmentation. I want to point out that there is also a different drawing algorithm for the $O(n)\times O(n)$ grid by De Fraysseix, Pach and Pollack. It is based on the canonical order of the planar graph. See here for the reference if the graph is a triangulation, and here for 3-connected planar graphs. 
These articles are behind paywalls, but the algorithms are standard and you will find lecture notes if you google the keywords. You might also check out the Handbook of Graph Drawing and Visualization for more information.
{ "domain": "cs.stackexchange", "id": 4247, "tags": "graphs, reference-request, graph-drawing" }
Degrees of freedom in General Relativity and well-posedness of the EFE
Question: I would like to understand what the degrees of freedom in GR are. I have read a few previous posts already, but none of them really help me. Below, I will try to write down the entangled web of thoughts I have/know and hopefully someone will help me reach epiphany. Let us consider the vacuum Einstein's field equation (EFE). It has 10 independent equations (naively I guess...). The metric itself has 10 independent components (naively?) so all seems good. But now, trouble starts... 1) First, due to general covariance, 4 components of the metric are really redundant. That is, if some $g_{\mu \nu}$ solve the EFE, then $g_{\mu \nu} \rightarrow g'_{\mu \nu}$ will also solve it. So, actually, the EFE under-determine the metric!? There is a first confusion here. To me, it is a bit like saying: here we have a system of 10 polynomial equations with 10 unknowns, but the system is nonetheless under-determined. I mean, if we look at say some standard PDEs like the wave equation, for appropriate initial data, we have a unique solution. Are we then saying that even given appropriate initial data, the EFE are still under-determined?? Even though the number of unknowns and equations match? Or are we strictly speaking about the physical "interpretation" of the solution? In the sense that, given a solution, say in Cartesian coordinates, we could translate it into polar coordinates, which would not "really" change anything, only the individual numerical values, but the description of the solution is still the same? In this case, can't we say that the wave equation is then also under-determined, since we could play that same game here as well? 2) Ok, now let us say I am fine with the above. We end up with 6 "really" independent components of the metric, for still 10 equations. But, now, somehow, 4 equations of the EFE are actually redundant because of the Bianchi identity. I don't see how this makes sense. The Bianchi identity is, well, an identity. 
So it always holds, and therefore in my eyes it cannot provide any additional constraints. More directly, if we start from $G_{\mu \nu} = 0$, which ones should we remove? And how do we recover them? (Which we should be able to, since they are supposedly redundant.) For example, if we choose to "forget" about the first one, i.e. $G_{00}=0$, how can we recover it? From the Bianchi, we have: $\nabla_t G^{00} + \nabla_i G^{i0} = 0$, but that is not enough (even with all the other EFE) to deduce $G_{00} = 0$? In fact, with the EFE, we would have $\nabla_t G^{00} = 0$, but we can't get any further? 3) Now, even at this stage, I don't see how we get to 2 degrees of freedom. I usually hear that diffeomorphism invariance removes 4 dof (as above), but Bianchi removes another 4?? However, Bianchi "at best" removes redundant EFE and does not "touch" the metric like diffeomorphism invariance does?? 4) Finally, this was for the vacuum equation. Now, if we add a non-trivial stress-energy tensor, things get worse, since this looks like one is "adding" even more unknowns to the EFE. Even with equations of state, one typically introduces more unknowns (for example, for a perfect fluid, pressure and 4-velocity). How does this work out? I hear that you introduce the continuity equation $\nabla_\mu T^{\mu \nu}=0$, but this is a direct consequence of the EFE, and therefore it does not add independent equations! So we still have only 10 equations!? Thank you if you managed to read all this and thanks in advance for your responses. Answer: This is cleaner in the ADM formulation than it is directly from the Einstein equation. If you're not familiar with the ADM formulation, it is done by converting the Hilbert Lagrangian to a constrained (and vanishing) Hamiltonian, and getting Hamilton's equations out of it. https://en.wikipedia.org/wiki/ADM_formalism If you do this, you will get four "gauge" degrees of freedom which derive from the coordinate system of the underlying spacetime. 
Associated with these degrees of freedom, you will get four constraints. What will be left over will be twelve first-order equations in the 3-metric and its first derivative. We usually talk about degrees of freedom in terms of second-order equations, and you can always convert a system of equations of the form: $$\begin{align} {\dot q_{i}} &= f_{i}(q,p)\\ {\dot p_{i}} &= g_{i}(q,p) \end{align}$$ into a set of equations of the form: $${\ddot q_{i}} = h_{i}(q, {\dot q})$$ Then, given a fixed choice for your coordinate system (which is equivalent to choosing a fixed gauge in E&M), you have six second-order equations for the 3-metric, and four constraints, leaving you with two local degrees of freedom.
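The bookkeeping in this answer can be summarized per spacetime point (my restatement of the standard ADM count, not taken from the original post):

```latex
\underbrace{12}_{\text{phase space }(h_{ij},\,\pi^{ij})}
\;-\;\underbrace{4}_{\text{constraints}}
\;-\;\underbrace{4}_{\text{gauge (coordinate) choices}}
\;=\; 4 \ \text{phase-space functions}
\;=\; 2 \ \text{configuration-space d.o.f.}
```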
{ "domain": "physics.stackexchange", "id": 52811, "tags": "general-relativity, degrees-of-freedom" }
Mystery $p^0$ particle
Question: Some exercises in my physics book mention a particle denoted $p^0$, but I can't seem to find any information about this particle, neither in my book nor on the web. I've been able to deduce from the exercises that it's not a baryon (because $p+n\rightarrow p+p^0$ apparently violates conservation of baryon number), but I haven't got much closer than that. Some examples: $\pi^+ \rightarrow p^0 + e^+ + \nu_e$ $p^0 + n \rightarrow K^0 + \Sigma^-$ Answer: This is a typo in the book K. A. Tsokos, Physics for the IB Diploma, Sixth Edition, Cambridge University Press. The symbol is supposed to be $\mathrm \rho$, not $\mathrm p$. Source: Is there a $\mathrm{p}^0$ particle?
{ "domain": "physics.stackexchange", "id": 36196, "tags": "particle-physics, interactions" }
Profile of conservation between bacterial sequences and human protein
Question: My goal is to make a conservation plot between bacterial sequences and a human protein. So far, I have a FASTA file of the protein, and a FASTA file with the sequences of the proteins from the BLAST results. In my initial attempts, I have tried to do this using MSA software: I have downloaded ClustalX and run a profile alignment using the single protein sequence as Profile 1 and the sequences from the BLAST results as Profile 2, and selected the option "Align Sequences to Profile 1". I'm having difficulty interpreting and dealing with the results. Since the majority of the sequences aren't aligned, the results are a total mess, with a huge number of dashes added. My goal is to visualize the number of BLAST hits per amino acid in my protein of interest, i.e., I would like to have a graph with my protein on the x-axis and a plot like the regions of high conservation for the multiple alignments, except with the spikes corresponding to a high number of BLAST hits. This would allow me to identify regions of my protein that have higher sequence similarity to bacteria than others. Is there a better way to achieve this or to salvage the results from the profile alignment? Thanks. Answer: I'm not sure what you mean by it being a mess? That looks like a pretty good alignment to me, and it most certainly has aligned all the sequences. You are nearly always going to have gaps (see my MSA below, which are all paralogs from the same genome so they couldn't really be more closely related, and yet, there are gaps). That comes with its own set of problems mind you, as you need to decide how you're going to deal with them. You also don't necessarily have to do a profile alignment. Those sequences don't look massively divergent to me, so straightforward sequence alignment would probably give you more or less the same result. My goal is to visualize the number of BLAST hits per amino acid in my protein of interest. This doesn't really make sense. 
You should really be searching for the number of BLAST hits with a particular domain, if you want to infer homology. If you want a per-position visualisation of the conservation, you could look at the shannon entropy for each column and plot that. I wrote a script to do just that a little while ago: https://github.com/jrjhealey/bioinfo-tools/blob/master/Shannon.py Just beware it's not super well tested yet. Feed an MSA in with as many sequences as you want to analyse, but you'll have to have identified the sequences and done the alignment first. For example, given this MSA: 16 149 PAU_02775 MSTTPEQIAV EYPIPTYRFV VSLGDEQIPF NSVSGLDISH DVIEYKDGTG PLT_01696 MSTTPEQIAV EYPIPTYRFV VSIGDEQIPF NSVSGLDISH DVIEYKDGTG PAK_02606 MSTTPEQIAV EYPIPTYRFV VSIGDEQVPF NSVSGLDISH DVIEYKDGTG PLT_01736 MSTTPEQIAV EYPIPTYRFV VSIGDEKVPF NSVSGLDISH DVIEYKDGTG PAK_01896 MTTTT----V DYPIPAYRFV VSVGDEQIPF NNVSGLDITY DVIEYKDGTG PAU_02074 MATTT----V DYPIPAYRFV VSVGDEQIPF NSVSGLDITY DVIEYKDGTG PLT_02424 MSVTTEQIAV DYPIPTYRFV VSVGDEQIPF NNVSGLDITY DVIEYKDGTG PLT_01716 MTITPEQIAV DYPIPAYRFV VSVGDEKIPF NNVSGLDVHY DVIEYKDGTG PLT_01758 MAITPEQIAV EYPIPTYRFV VSVGDEQIPF NNVSGLDVHY DVIEYKDGIG PAK_03203 MSTSTSQIAV EYPIPVYRFI VSIGDDQIPF NSVSGLDINY DTIEYRDGVG PAU_03392 MSTSTSQIAV EYPIPVYRFI VSVGDEKIPF NSVSGLDISY DTIEYRDGVG PAK_02014 MSITQEQIAA EYPIPSYRFM VSIGDVQVPF NSVSGLDRKY EVIEYKDGIG PAU_02206 MSITQEQIAA EYPIPSYRFM VSIGDVQVPF NSVSGLDRKY EVIEYKDGIG PAK_01787 MSTTADQIAV QYPIPTYRFV VTIGDEQMCF QSVSGLDISY DTIEYRDGVG PAU_01961 MSTTADQIAV QYPIPTYRFV VTIGDEQMCF QSVSGLDISY DTIEYRDGVG PLT_02568 MSTTVDQIAV QYPIPTYRFV VTVGDEQMSF QSVSGLDISY DTIEYRDGIG NYYKMPGQRQ AINISLRKGV FSGDTKLFDW INSIQLNQVE KKDISISLTN NYYKMPGQRQ AINISLRKGV FSGDTKLFDW INSIQLNQVE KKDISISLTN NYYKMPGQRQ AINISLRKGV FSGDTKLFDW INSIQLNQVE KKDISISLTN NYYKMPGQRQ AINITLRKGV FSGDTKLFDW LNSIQLNQVE KKDISISLTN NYYKMPGQRQ LINITLRKGV FPGDTKLFDW LNSIQLNQVE KKDVSISLTN NYYKMPGQRQ LINITLRKGV FPGDTKLFDW LNSIQLNQVE KKDVSISLTN NHYKMPGQRQ LINITLRKGV FPGDTKLFDW 
LNSIQLNQVE KKDVSISLTN NYYKMPGQRQ SINITLRKGV FPGDTKLFDW INSIQLNQVE KKDIAISLTN NYYKMPGQRQ SINITLRKGV FPGDTKLFDW INSIQLNQVE KKDIAISLTN NWFKMPGQSQ LVNITLRKGV FPGKTELFDW INSIQLNQVE KKDITISLTN NWFKMPGQSQ STNITLRKGV FPGKTELFDW INSIQLNQVE KKDITISLTN NYYKMPGQIQ RVDITLRKGI FSGKNDLFNW INSIELNRVE KKDITISLTN NYYKMPGQIQ RVDITLRKGI FSGKNDLFNW INSIELNRVE KKDITISLTN NWLQMPGQRQ RPTITLKRGI FKGQSKLYDW INSISLNQIE KKDISISLTD NWLQMPGQRQ RPTITLKRGI FKGQSKLYDW INSISLNQIE KKDISISLTD NWLQMPGQRQ RPSITLKRGI FKGQSKLYDW INSISLNQIE KKDISISLTD EAGTEILMTW SVANAFPTSL TSPSFDATSN EVAVQEITLT ADRVTIQAA EAGTEILMTW SVANAFPTSL ISPSFDATSN EVAVQEITLT ADRVTIQAA EAGTEILMTW SVANAFPTSL TSPSFDATSN EVAVQEITLT ADRVTIQAA EAGTEILMTW SVANAFPTSL TAPAFDATSN EVAVQEISLT ADRVTIQAA ETGTEILMSW SVANAFPTSL TSPSFDATSN DIAVQEIKLT ADRVTIQAA EVGTEILMTW SVANAFPTSL TSPSFDATSN DIAVQEIKLT ADRVTIQAA EAGTEILMSW SVANAFPTSL TSPSFDATSN DIAVQEIKLT ADRVMIQAA ETGSQILMTW NVANAFPTSF TSPSFDAASN DIAIQEIALV ADRVTIQAP EAGTEILMTW NVANAFPTSF TSPSFDATSN EIAVQEIALT ADRVTIQAA DAGTELLMTW NVSNAFPTSL TSPSFDATSN DIAVQEITLT ADRVIMQAV DAGTELLMTW NVSNAFPTSL TSPSFDATSN DIAVQEITLM ADRVIMQAV DTGSEVLMSW VVSNAFPSSL TAPSFDASSN EIAVQEISLV ADRVTIQVP DTGSKVLMSW VVSNAFPSSL TAPSFDASSN EIAVQEISLV ADRVTIQVP ETGSNLLITW NIANAFPEKL TAPSFDATSN EVAVQEMSLK ADRVTVEFH ETGSNLLITW NIANAFPEKL TAPSFDATSN EVAVQEISLK ADRVTVEFH ETGSNLLITW NIANAFPEKL TAPSFDATSN EVAVQEISLK ADRVTVEFH You'd get this plot: I would like to have a graph with my protein on the x-axis and a plot like the regions of high conservation for the multiple alignments, except with the spikes corresponding to a high number of BLAST hits. This would allow me to identify regions of my protein that have higher sequence similarity to bacteria than others. I'm not sure your logic is quite right here though. Blast won't give you hits depending on a particular position. It's a local aligner, so it'll just return you hits where at least some part of your query matches at least some part of another. 
What you could do is take the logic in the script above, and just use a different metric. For example, perhaps you could count the proportion of sequences which have the most common amino acid at a given position within your MSA. That would be fairly crude though. As you say in the comments, My final goal is to produce a plot visualizing regions of high bacterial sequence similarity to my human protein of interest. your original alignment will show you this intrinsically, if only you include the sequences of all the BLAST hits in the first place. Thus your work flow will be: Blast sequence of interest. Download all/as many hits as you want (bear in mind the E-value/bitscore and the number of hits you get. It might only be a few dozen, in which case you can use the lot, but if not, just take all the hits below a certain cut-off.) Align all the sequences. Look at the column scores for the whole MSA. You can use whatever metric of conservation you like really. Might be as simple as proportion of sequences with the most common residue, or something more complex like Shannon entropy (though as you can see in the graph above Shannon entropy can be kinda noisy) etc.
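The per-column conservation metric described above can be sketched in a few lines of Python (a rough sketch of the approach, not the linked Shannon.py script; it assumes the alignment is already parsed into a list of equal-length strings):

```python
import math
from collections import Counter

def column_entropies(msa, ignore_gaps=False):
    """Shannon entropy (in bits) for each column of an alignment given as a
    list of equal-length sequence strings. Low entropy = high conservation."""
    entropies = []
    for col in zip(*msa):
        if ignore_gaps:
            col = [c for c in col if c != "-"]
        counts = Counter(col)
        total = sum(counts.values())
        if total == 0:  # column was all gaps
            entropies.append(0.0)
            continue
        h = -sum((n / total) * math.log2(n / total) for n in counts.values())
        entropies.append(h)
    return entropies
```

A fully conserved column scores 0 bits and a 50/50 split scores 1 bit, so plotting `column_entropies(seqs)` against position gives a conservation profile of the kind described above.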
{ "domain": "bioinformatics.stackexchange", "id": 157, "tags": "sequence-alignment" }
How to Simplify Different but Similar Functions?
Question: absolute newbie here. I need help with simplifying my code. I want to create many more similar functions like the ones shown below. Is there a simpler way to write this without having to manually type out nearly the same function every time I want to add a new one? function toggleItem1() { var myItem1 = document.getElementById('item1Image'); var displaySetting = myItem1.style.display; var item1Button = document.getElementById('item1Button'); if (displaySetting == 'block') { myItem1.style.display = 'none'; item1Button.innerHTML = 'Show item'; } else { myItem1.style.display = 'block'; item1Button.innerHTML = 'Hide item'; } } function toggleItem2() { var myItem2 = document.getElementById('item2Image'); var displaySetting = myItem2.style.display; var item2Button = document.getElementById('item2Button'); if (displaySetting == 'block') { myItem2.style.display = 'none'; item2Button.innerHTML = 'Show item'; } else { myItem2.style.display = 'block'; item2Button.innerHTML = 'Hide item'; } } Answer: Similar but different functions can be merged by using parameters. function toggleItem(itemId, buttonId) { var item = document.getElementById(itemId); var displaySetting = item.style.display; var button = document.getElementById(buttonId); if (displaySetting == 'block') { item.style.display = 'none'; button.innerHTML = 'Show item'; } else { item.style.display = 'block'; button.innerHTML = 'Hide item'; } } // usage (note that element IDs are case-sensitive, so they must match the markup exactly) toggleItem('item1Image', 'item1Button')
{ "domain": "codereview.stackexchange", "id": 40511, "tags": "javascript" }
What is the unit of Klein Gordon field?
Question: Normally I don't care about units in derivations in relativity or QM. Just set $\hbar = c = 1$. But learning about the energy-momentum tensor for the Klein Gordon equation, I couldn't make $T^{00}$, for example, have units of energy density, that is, energy per (spatial) volume. Of course $T^{\mu \nu}$ comes from the Lagrangian density, which should also have units of energy per volume. So I tried to examine it. In the expression below, the second term for example has units of $L^{-2}$ if the field is dimensionless. $${\cal L} =\frac{1}{2} (\partial^\mu \phi \partial_\mu\phi -\left(\frac{mc}{\hbar}\right)^2\phi^2)$$ It could be fixed if the field has units of $$\left(\frac{E}{L}\right)^{\frac{1}{2}}$$ But I don't see it mentioned anywhere, so I am not sure about it. Just to compare, both the Lagrangian density and energy density for electromagnetism have consistent units. Answer: Assuming that spacetime is four-dimensional, your result is correct. When $\hbar=c=1$, it reduces to the statement that $\phi$ has mass dimension $1$, which is often stated in the literature about relativistic quantum field theory. To see this more directly, start with the fact that the action $S$ should have the same units as Planck's constant $\hbar$. (We usually say it the other way around: Planck's constant has units of action!) The action is the integral of ${\cal L}$ over spacetime, which implies that ${\cal L}$ has units of energy density (energy per unit spatial volume). If we use the convention that the spatial-derivative part of the kinetic term is $(\nabla\phi)^2 / 2$, where $\nabla$ is the spatial gradient, then we arrive at the result shown in the question: $\phi$ must have units $(E/L)^{1/2}$, where $E$ is energy and $L$ is length. One virtue of this argument is that it doesn't assume anything about how the coefficient of the $\phi^2$ term is related to mass, and it works even if the $\phi^2$ term is absent. 
The same argument can be generalized to $D$-dimensional space for arbitrary $D$. The conclusion is that $\phi$ has units $(E/L^{D-2})^{1/2}$.
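The dimensional bookkeeping behind this argument, written out for $D$ spatial dimensions (my restatement of the answer's reasoning, not from the original post):

```latex
[S] = [\hbar] = E\,T, \qquad
S = \int dt\, d^{D}x\, \mathcal{L}
\;\Longrightarrow\;
[\mathcal{L}] = \frac{E}{L^{D}},
\qquad
\left[\tfrac{1}{2}(\nabla\phi)^{2}\right]
= \frac{[\phi]^{2}}{L^{2}} = \frac{E}{L^{D}}
\;\Longrightarrow\;
[\phi] = \left(\frac{E}{L^{D-2}}\right)^{1/2}
\;\xrightarrow{\;D=3\;}\;
\left(\frac{E}{L}\right)^{1/2}.
```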
{ "domain": "physics.stackexchange", "id": 74352, "tags": "homework-and-exercises, lagrangian-formalism, dimensional-analysis, stress-energy-momentum-tensor, klein-gordon-equation" }
Does the specific size of matrices affect the performance of matrix operations?
Question: I was reading DeepMind's paper on I2As and realized that the sizes of the hidden layers in their model were all like 32, 64, 256, and so on: all powers of 2. I have found the same thing in other papers. Is there any performance reason for it? Maybe related to data structure alignment? More concretely, I would like to know if I should use these "special" sizes when training my own models. Answer: While you can only be 100% certain when you ask the authors, most authors use this simply because you have to choose one value. The specific value doesn't matter too much, only the order of magnitude. Taking a power of 2 seems to be a natural choice. You can also take a setup which uses a power of two and reduce the number by one. The computation time should be roughly equal, probably a bit lower. If it is noticeably higher, there might be a performance benefit to using the author's choice. See also Datascience.SE: Why the number of neurons or convolutions chosen equal powers of two? Quora: Should I use powers of 2 when choosing the size of a batch size when training my Neural Network?
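As a rough empirical check of the "reduce the number by one" remark, one can time a dense matrix multiply at a power of two and its neighbours. A NumPy sketch (assumes NumPy is available; exact numbers depend entirely on the BLAS backend and hardware, so only the order of magnitude is meaningful):

```python
import time
import numpy as np

def time_matmul(n, repeats=5):
    """Best-of-`repeats` wall-clock time for an n x n matrix multiply."""
    rng = np.random.default_rng(0)
    a = rng.random((n, n))
    b = rng.random((n, n))
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        a @ b
        best = min(best, time.perf_counter() - t0)
    return best

# A power of two vs. its neighbours: on most setups the differences are
# small relative to the overall cost, supporting the point that only the
# order of magnitude of the layer size matters.
for n in (255, 256, 257):
    print(f"{n}: {time_matmul(n):.5f} s")
```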
{ "domain": "cs.stackexchange", "id": 9764, "tags": "machine-learning, neural-networks, matrices" }
Help analyzing SDS-PAGE gel
Question: In this experiment, we transformed a truncation of the NFAT protein sequence into a plasmid vector to be expressed in E. coli as a fusion protein with GST. We also attempted to transform the normal plasmid without NFAT so that we could get GST alone. After growing up these colonies and selecting positive colonies, we lysed the E. coli. We collected some of this lysate for SDS-PAGE. We then put the lysed GST and GST-NFAT solutions in separate columns with glutathione beads. GST binds glutathione. We washed the column and collected some of the wash for SDS-PAGE. Finally, we eluted the GST and GST-NFAT off the column and saved some for SDS-PAGE. I have attached a picture of a successful version of the SDS-PAGE. You want to direct your attention to the first lane, the Molecular Weight Marker (MWM). The rest of the lanes should be interpreted in sets of 3. The first of the three is the crude lysate, the second of the three is the first wash, and the last is the eluted protein of interest. The important lanes for my analysis are 8-10 and 11-13. I will explain what I know from this SDS-PAGE gel so far. I know the weights of the different bands in the molecular weight marker because I was given a copy of the different proteins in the marker. From this information, I was able to confirm that I isolated my protein of interest in lane 13 and isolated GST alone in lane 10. I also understand the reason why the lysate lanes are darkest, followed by the wash lanes, and then very clean protein-of-interest lanes. Also, note that the marker and the elution lanes were loaded with 10 microliters while the other lanes were 2 microliters each. However, there are some things that I do not understand from the gel. (1) Why are the bands so diffuse/low resolution? I imagine it may be due to overloading the wells, but I am not convinced because I followed a set procedure. (2) Why are there large amounts of streaking, especially in lanes 5, 8, 9, 10, and 11? 
(3) I isolated my protein of interest because it was a fusion protein with GST (or just GST alone), and then eluted the protein off the glutathione beads via an excess glutathione wash. Theoretically, the lanes for the elutions should then be free of everything except the protein of interest and glutathione. This mostly holds true for lanes 4 and 9, which should only have GST and glutathione. However, lanes 7 and 13 should only have GST-NFAT and glutathione. The lanes are not as clear, however, and seem to have stained numerous other proteins. What is going on? (4) I was told by a professor that the second band in lane 13 (the GST-NFAT lane) was a degradation product. How can this be? The band does not correspond to the weights of NFAT alone or GST alone. Also, if the NFAT-GST was degraded, wouldn't there be two bands of degradation product? (5) Why do the GST-alone lanes have what look like doublet bands? (Lanes 4, 10). (6) If we eluted the proteins of interest with excess glutathione, which has a very low molecular weight, why doesn't it show up on the gel? Is it possible that it ran off before everything else? (7) I have a list of the weights of the bands of the molecular marker. Yet, there is an extra band at the bottom that does not show up on my list. Is it possible my list is incomplete, or is there something else happening? Finally, I have attached a picture of a failed SDS-PAGE gel. The only thing that I definitively know was a source of error is the fact that my buffer level fell during the run (due to a leak). The results of this gel are accurate (lanes 2-7 correspond to lanes 8-13 on the above gel, and the results are almost identical), yet the gel turned out horribly. How can the buffer falling cause this result? Or is there something else that may have contributed to this weird pattern? Finally, the lanes are not as distinct in this gel as in the first gel, yet the same comb, and thus the same size and spacing of wells, was used.
So why was there so much bleed between lanes / lack of space between the lanes? Thanks in advance for any help. Answer: Here are some thoughts based on my own experiences purifying many proteins and running many SDS-PAGE gels: (1) Resolution--how fast did you run this gel? Often if you run at higher than say 150 V or so you can get bands that look like this. The resolution could also be related to... (2) I would bet the streaking is due to salt in the sample--a lot of the time lysis/wash/elution buffers can have upwards of 500 mM or 1 M salt, and this invariably causes streaking. Load less sample in these lanes and keep the total volume the same to dilute the salt below 250 mM at least. Other things in your buffers could be causing the streaking, but salt is the most common (let us know the composition of your lysis/wash/elution buffers). (3) Your purification of GST alone looks very good--often your preps will look much more like the lanes you have indicated, which are not entirely clean. This is quite common for a one-step purification--often you need 2-3 for certain proteins to completely remove contaminants. There are several possible sources of these bands: Endogenous proteins: Many bacterial proteins will non-specifically stick to the beads (this is why we wash), but even thorough washing can still leave some contaminants. These proteins also could be bound to your protein of interest and are co-eluted off of the resin. Truncation products: Depending on where your tag is (let's say it is on the N-terminus), you can sometimes get incomplete translation products--they have the tag, but they are not full-length proteins. These will still bind and elute from the beads because they have retained the tag. Ignore this point if your tag is on the C-terminus, as in that case the tag will only be on full-length protein. Degradation products: This is related to your later questions--often during the purification your protein will be degraded by bacterial proteases.
Did you include a variety of protease inhibitors in your lysis buffer? Keeping all of your samples on ice is also important for this reason. (4) Understand that degradation products do not mean you get only your protein and the fused GST--they can be a variety of truncations due to degradation by proteases. (5) I would bet this is just due to your running conditions or the salt in your samples. I think this is probably actually your single GST band. (6) Glutathione will not show up on the gel--its formula weight is about 300 g/mol if I recall, meaning it's about 300 daltons, so much smaller than anything that you'll retain on the gel. Further, remember that your stain is for proteins--Coomassie blue binds basic residues on proteins. (7) I'm not sure which band you are referring to, but I think the very bottom band in your marker lane is simply the dye front. It could also be a degradation product of a protein in your ladder. As far as the second gel goes--it is all due to the buffer level falling during the run. Without buffer covering the top of the gel there is no current flowing. Probably as the leak was occurring you had an uneven distribution of the buffer. Strange things can happen when this occurs (unscientific, I know).
{ "domain": "biology.stackexchange", "id": 5207, "tags": "lab-techniques, gel-electrophoresis, staining, purification, sds-page" }
Find all nearby points in a set, for each element of the set
Question: Given a finite set $S$ of points in $\mathbb R^p$ and a number $\rho$, my collaborators and I want to find, for each $s\in S$, the other points in $S$ that are within $\rho$ of $s$. Of course there's the obvious $O(|S|^2)$ algorithm, and we also came up with something (roughly, sort in each dimension, then for each dimension and for each element of $S$ mark nearby points in some sparse matrices, and then combine the sparse matrices) that has better running time under certain assumptions, but not in the worst case. I feel that this must be a standard problem, but I haven't been able to find references. I was wondering if someone here knows what I should search for or can suggest a good reference to start from? Thanks in advance! Answer: I think what you are looking for is the Fixed-radius near neighbors problem.
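The classic approach to fixed-radius near neighbors buckets the points into a uniform grid with cell size $\rho$, so each query only has to inspect its own cell and the $3^p$ adjacent cells. A minimal Python sketch (names and structure are my own illustration, not from a specific reference):

```python
from collections import defaultdict
from itertools import product
import math

def fixed_radius_neighbors(points, rho):
    """For each point index i, list the indices of the other points
    within Euclidean distance rho.  Points are hashed into grid cells
    of side rho, so candidates come only from the 3^p adjacent cells."""
    p = len(points[0])

    def cell(pt):
        return tuple(int(math.floor(c / rho)) for c in pt)

    grid = defaultdict(list)
    for i, pt in enumerate(points):
        grid[cell(pt)].append(i)

    offsets = list(product((-1, 0, 1), repeat=p))
    result = {}
    for i, pt in enumerate(points):
        c = cell(pt)
        near = []
        for off in offsets:
            for j in grid.get(tuple(a + b for a, b in zip(c, off)), []):
                if j != i and math.dist(pt, points[j]) <= rho:
                    near.append(j)
        result[i] = near
    return result
```

For points spread roughly uniformly this does expected near-linear work, though the worst case (all points in one cell) is still quadratic, matching the caveat in the question.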
{ "domain": "cstheory.stackexchange", "id": 623, "tags": "reference-request, cg.comp-geom" }
What is happening inside a non-linear medium?
Question: I am trying to understand second-harmonic generation as explained in this video. Now we have 2 formulas in a non-linear medium: $$\mathrm{P}_{\mathrm{L}} = \epsilon_0 \chi^{(1)} \mathrm{E}$$ $$\mathrm{P}_{\mathrm{NL}} = \epsilon_0 \chi^{(2)} \mathrm{E^2}$$ As far as I understand, $\mathrm{P}_{\mathrm{L}}$ and $\mathrm{P}_{\mathrm{NL}}$ are the polarizations of the medium which have been caused by the electric field E. But second-harmonic generation is based on the effect that after an electric field with frequency $\omega$ enters a non-linear medium, the outgoing electric field will be a mixture of $\omega$ and $2 \omega$. So what exactly is happening inside of the medium? Is it that the entering electric field causes 2 polarizations (linear and non-linear) and these 2 polarizations then "produce" two new electric fields which are the ones we observe leaving the medium? Correct? Answer: The way to understand this just with the equations that relate polarization to electric field is that adding a nonlinear term allows second-order harmonics to appear in the medium. That's it; no more physical insight can be extracted from those. However, the addition of the nonlinear term itself does have a physical explanation. Think of the simple pendulum. Small oscillations of the pendulum mean that the motion can be described by the harmonic oscillator equation. Larger oscillations make the pendulum behave in a different way, and anharmonic (nonlinear) terms start to appear. The same happens with dielectric media: you can think of them as a collection of oscillators (atoms) that stop behaving linearly when the electric field gets too big. If you still want to know more, I refer you to one good book: Boyd, Nonlinear Optics. It has an entire chapter dedicated to the derivation of those coefficients with Schroedinger's equation in terms of the intrinsic parameters of those atoms.
It also discusses the symmetry properties of those coefficients and what needs to happen for them to be identically zero.
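The appearance of the second harmonic can be checked numerically: driving $P = \epsilon_0(\chi^{(1)}E + \chi^{(2)}E^2)$ with a single-frequency field and Fourier-transforming shows exactly a DC component and one at $2\omega$, since $\cos^2(\omega t) = (1 + \cos 2\omega t)/2$. A small illustrative sketch (the $\chi$ values are arbitrary, chosen only for the demonstration):

```python
import numpy as np

# Drive a medium with E(t) = cos(w*t) and inspect the frequency
# content of the polarization P = eps0*(chi1*E + chi2*E^2).
eps0, chi1, chi2 = 1.0, 1.0, 0.1   # illustrative values
w = 2 * np.pi * 5.0                # drive: 5 cycles per unit time
t = np.linspace(0, 1, 1024, endpoint=False)
E = np.cos(w * t)
P = eps0 * (chi1 * E + chi2 * E**2)

spectrum = np.abs(np.fft.rfft(P)) / len(t)
freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])
# Non-negligible components appear at 0 (DC), 5 (the drive),
# and 10 (the second harmonic) -- nowhere else.
peaks = [f for f, a in zip(freqs, spectrum) if a > 1e-6]
```

The DC and $2\omega$ components both come from the $\chi^{(2)}E^2$ term; turning `chi2` off leaves only the peak at the drive frequency.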
{ "domain": "physics.stackexchange", "id": 63663, "tags": "electromagnetism, optics, polarization" }
Which cells in our body perform lactic acid fermentation?
Question: Does lactic acid fermentation in humans happen only in the muscle? I have read in this Wikipedia page that human muscle cells sometimes respire anaerobically to produce ATP and lactic acid. Does it happen anywhere else in our body? My research: Cells which lack mitochondria or do not have a sufficient supply of oxygen perform lactic acid fermentation for their energy. Aerobic and anaerobic respiration: The NADH gets converted to NAD+ during the process of fermentation. Aerobic respiration and anaerobic respiration follow through glycolysis, the Krebs cycle and finally the Electron Transport Chain (ETC). The difference between aerobic and anaerobic respiration is that, in anaerobic respiration, another molecule, other than oxygen, is used as the electron acceptor in the ETC. Fermentation: But during fermentation, there is no ETC (or Krebs cycle). Because of the lack of oxygen, the cells performing fermentation have glycolysis followed by some extra reaction (to get back NAD+) for their energy requirement in the absence of oxygen. During glycolysis NAD+ gets reduced into NADH. Then the NADH oxidises in the ETC to form NAD+. This can again be used in glycolysis. But there is no ETC in fermentation. So, the pyruvate (or a derivative of it) after glycolysis acts as the electron acceptor and oxidises the NADH back to NAD+ in fermentation. Based on the electron acceptor during fermentation, it can be classified as lactic acid fermentation or alcoholic fermentation. The two main types of cells that perform fermentation are the erythrocytes (RBCs) and SKELETAL muscle cells. As mature RBCs lack mitochondria, which are essential for aerobic respiration, they only perform lactic acid fermentation. Skeletal muscle cells, on the other hand, perform lactic acid fermentation when the energy requirement is higher than the rate at which the muscle cells can get oxygen. This is usually during strenuous exercise.
I would like to know if any other cell in the human body might be doing this. So the answer might be a cell which lacks mitochondria or that experiences hypoxia and needs energy. Answer: Certain human tissues can be considered completely aerobic (e.g. liver) or anaerobic (e.g. erythrocytes) in their energy (ATP) production. However, it is wrong to think, as some other recent questions or comments have seemed to imply, that the only other tissue that displays anaerobic metabolism — glycolysis followed by conversion of pyruvate to lactate — is skeletal muscle. Certainly the mitochondrial content and blood supply of white muscle during exercise leads to it being the major producer of lactate, but most tissues also have the capability for anaerobic glycolysis, and there exists a spectrum in the relative contribution of aerobic and anaerobic metabolism to energy production. The primary literature on this is quite old and tends to be of the observation and measurement variety. I will list some lactate-producing tissues with linked ‘medium old’ references, as modern ones tend to have a narrow focus. Most well established are: retina Sertoli cells of the testis renal medulla brain astrocytes Somewhat less so, perhaps, are: adipose tissue skin In addition, it should be noted that many tissues produce lactate in abnormal medical conditions, reviewed here and, from a more surgical viewpoint, here.
{ "domain": "biology.stackexchange", "id": 12289, "tags": "cellular-respiration, bioenergetics" }
How to transform and accumulate data in Python
Question: I would like to transform a dataframe and accumulate values using pandas. year country value 1999 JAPAN 10 2000 KOREA 15 2000 USA 20 2001 USA 13 2002 JAPAN 30 * I want to transform the dataframe and accumulate the value for each country year country value 1999 JAPAN 10 1999 KOREA 0 1999 USA 0 2000 JAPAN 10 2000 KOREA 15 2000 USA 20 2001 JAPAN 10 2001 KOREA 15 2001 USA 33 2002 JAPAN 40 2002 KOREA 15 2002 USA 33 I need your help. Thank you. Answer: I think the following lines should give the output you are looking for: # Create pivot table df = pd.pivot_table(df, values="value", index="country", columns="year") # Calculate cumulative sum and forward fill NaN df = df.cumsum(axis=1).fillna(method="ffill", axis=1) # Reshape data back into long format df = df.reset_index().melt("country") I first create a pivot table with the countries in the rows and the years in the columns to get all the years that may be missing in the long-format data. I then calculate the cumulative sum, after which I forward-fill the values over the columns. To finally reshape the data back into the original long format, I reset the index and melt the dataframe with the country as the id column. Given your input, this returns the following dataframe: country year value 0 JAPAN 1999 10.0 1 KOREA 1999 NaN 2 USA 1999 NaN 3 JAPAN 2000 10.0 4 KOREA 2000 15.0 5 USA 2000 20.0 6 JAPAN 2001 10.0 7 KOREA 2001 15.0 8 USA 2001 33.0 9 JAPAN 2002 40.0 10 KOREA 2002 15.0 11 USA 2002 33.0
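An equivalent route (my own variant, not from the answer) avoids the pivot by reindexing onto the full year × country grid and taking a grouped cumulative sum:

```python
import pandas as pd

# Reindex onto the complete year x country grid, filling missing
# combinations with 0, then accumulate within each country.
df = pd.DataFrame({
    "year":    [1999, 2000, 2000, 2001, 2002],
    "country": ["JAPAN", "KOREA", "USA", "USA", "JAPAN"],
    "value":   [10, 15, 20, 13, 30],
})
full = pd.MultiIndex.from_product(
    [sorted(df["year"].unique()), sorted(df["country"].unique())],
    names=["year", "country"],
)
out = (
    df.set_index(["year", "country"])["value"]
      .reindex(full, fill_value=0)        # missing year/country pairs -> 0
      .groupby(level="country").cumsum()  # running total per country
      .reset_index()
)
```

Because missing combinations are filled with 0 before the cumulative sum, this version produces 0 (rather than NaN) for countries with no data yet, matching the desired output in the question.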
{ "domain": "datascience.stackexchange", "id": 7193, "tags": "python, pandas, data-cleaning" }
Suggestion for relevant non-complex simulator
Question: I've recently started working on some localization algorithms like probabilistic road maps and SLAM algorithms. What I'm looking for is software that would help me simulate such algorithms. I started with Python's graphics.py and have now started working with Gazebo and ROS, but I've found them complex to use. Is there any similar freeware simulation software that is easy to set up and work with, thus allowing me to spend less time stressing over the simulation part and more time working on the algorithms? Answer: Just use MATLAB or Python with a set of fixed features in space represented as points. Don't do any vision processing; at this point any vision processing would be overkill. You are making this way too complex. The measurements for your SLAM can be angles to features. Create a set of 30 features randomly populating a 20 m by 20 m square. Make your viewpoint move in a circle around the square. This is enough to get started with SLAM. If you only rely on other people's packages and modules without going through the trouble of building your own, you will never really get a sense of how these things work and break.
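Following that suggestion, a bare-bones Python version of such a test bed (no vision, just bearing angles to point features; all names and the exact trajectory are illustrative) might look like:

```python
import math
import random

# 30 random point features in a 20 m x 20 m square, as suggested.
random.seed(0)
features = [(random.uniform(0, 20), random.uniform(0, 20)) for _ in range(30)]

def bearings(pose_x, pose_y, pose_theta, landmarks):
    """Angle to each landmark, relative to the robot's heading."""
    return [math.atan2(ly - pose_y, lx - pose_x) - pose_theta
            for lx, ly in landmarks]

# Viewpoint circling the centre of the square (radius 5 m, 100 steps);
# each trajectory entry is (x, y, heading, list of bearings).
trajectory = []
for k in range(100):
    t = 2 * math.pi * k / 100
    x, y = 10 + 5 * math.cos(t), 10 + 5 * math.sin(t)
    theta = t + math.pi / 2          # heading tangent to the circle
    trajectory.append((x, y, theta, bearings(x, y, theta, features)))
```

A SLAM prototype can then consume `trajectory` as its measurement stream (optionally with Gaussian noise added to the bearings) before any real sensor or simulator is involved.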
{ "domain": "robotics.stackexchange", "id": 2389, "tags": "slam, ros, simulation, simulator, gazebo" }
Project Euler: Problem 13 - Large Sum
Question: I've written some code to solve this problem: Work out the first ten digits of the sum of the following one-hundred 50-digit numbers. My solution functions correctly, but I'm having some trouble deciding how I should be handling exceptions, especially with regards to the class's constructor. I went the route of creating a BigNumber class, even though it's not necessary, as I've never solved a problem involving arbitrary-length numbers before. Here's the class, sans constructors: class BigNumber { public static int MaxComponentValue = 999999999; public static int MaxComponentDigits = MaxComponentValue.ToString().Length; public bool Add(string source) { List<int> comps = this.SplitIntoComponents(source); // Not sure about this if (comps == null) return false; this.AllocateComponentSpace(comps.Count); for (int i = 0; i < comps.Count; i++) { int componentSpace = MaxComponentValue - this.components[i]; int amountToCarry = comps[i] - componentSpace; if (amountToCarry > 0) { while (amountToCarry > MaxComponentValue) { amountToCarry -= MaxComponentValue + 1; this.components[i + 1]++; } this.components[i] = 0; this.components[i + 1]++; this.components[i] += amountToCarry - 1; } else this.components[i] += comps[i]; } return true; } public override string ToString() { if (this.components.Count() > 1) { StringBuilder output = new StringBuilder(); var activeComponents = this.components.Reverse().SkipWhile(i => i == 0).ToList(); output.Append(activeComponents[0].ToString()); // Skip the first element as we've already added output for it. 
foreach (var c in activeComponents.Skip(1)) output.Append(String.Format("{0:000000000}", c)); return output.ToString(); } return this.components[0].ToString(); } private List<int> SplitIntoComponents(string source) { if (source == null) throw new ArgumentNullException("source"); List<int> comps = new List<int>(); for (int i = source.Length; i > 0; i -= MaxComponentDigits) { string token; if (i < MaxComponentDigits) token = source.Substring(0, i); else token = source.Substring(i - MaxComponentDigits, MaxComponentDigits); int comp; if (int.TryParse(token, out comp)) comps.Add(comp); else return null; } return comps; } private void AllocateComponentSpace(int count) { // The + 1 accounts for the possibility that there // will be an amount to carry over from the last token. // E.g 99 + 99 would result in the need for two components // to store the result 198 - the first one for 98 and the second for 1. if (this.components.Length < count) Array.Resize(ref this.components, count + 1); } private int[] components = new int[1]; } Problem 1 In the private method SplitIntoComponents, if int.TryParse fails, I'm failing early and returning null. In the Add method, I call SplitIntoComponents, check if it has returned null and then return false to the caller if it has. The consequence of that is the Add method fails early, before modifying the data stored in the BigNumber instance. I have two questions. Firstly, should SplitIntoComponents be throwing instead of returning null? Secondly, should Add return false to the caller, or should that really be throwing an ArgumentException? I'm thinking SplitIntoComponents should stay as it is and then I should throw from Add. Problem 2 I want to define a constructor that takes a string argument. 
This is what I have so far: public BigNumber(string source) { if (source == null) throw new ArgumentNullException("source"); List<int> comps = this.SplitIntoComponents(source); if (comps == null) throw new ArgumentException(); // Not implemented yet } In this case, I feel as though I should be throwing the ArgumentException because I'm unable to return anything to the caller. The problem here is obviously the inconsistency between how I'm handling the return value of SplitIntoComponents in the constructor and in the Add method. Should I opt for that and throw ArgumentException? General Observations I should overload the '+' operator to allow addition of multiple BigNumbers. Add some support for LINQ (would that even work for Sum?) Any other suggestions? Answer: I'll self-answer this as my approach now is quite different from anything else mentioned here. The constructor idea was essentially the root of the problem, because I was asking it to do too much. With that said, I decided to create a static method to separately handle the parsing of a string: public static BigNumber Parse(string source) { // } This now means two things: I can make the exception handling logic consistent between the Parse and Add methods, allowing me to refactor it further at some point. The constructor is no longer responsible for handling parsing (which it shouldn't have been to begin with).
{ "domain": "codereview.stackexchange", "id": 5212, "tags": "c#, project-euler" }
Python: determine whether one string can be obtained by changing one character in the middle of another string
Question: This script solves a simple problem: given two input strings that differ by one character or zero characters in length, return true if the following conditions are met: 1. They must start with the same character and end with the same character. 2. One string can be obtained by changing one character or zero characters in the middle of the other string (neither at the start nor the end); here changing means replacing or inserting. I have Google-searched for several hours for a solution, and as usual it was futile. I can't ask on stackoverflow.com because I am currently blocked from asking there, as my questions are prone to be closed. So after about half an hour I came up with a solution of my own: def fuzzymatch(str1, str2): if abs(len(str1) - len(str2)) <= 1 and (str1[0], str1[-1]) == (str2[0], str2[-1]): longer = max(str1, str2, key=len) shorter = (str1 + str2).replace(longer, '') same = sum(''.join(longer[:i + 1]) in shorter for i in range(len(shorter))) if ''.join(longer[same + 1:]) in shorter: same += len(''.join(longer[same + 1:])) if abs(same - len(shorter)) <= 1 and len(str1) == len(str2): return True elif abs(same - len(longer)) <= 1: return True return False I have tested it and confirmed its correctness: In [2]: fuzzymatch('sample', 'ample') Out[2]: False In [3]: fuzzymatch('sample', 'sample') Out[3]: True In [4]: fuzzymatch('sample', 'sampley') Out[4]: False In [5]: fuzzymatch('sample', 'samply') Out[5]: False In [6]: fuzzymatch('sample', 'sammple') Out[6]: True In [7]: fuzzymatch('sample', 'sammmple') Out[7]: False In [8]: fuzzymatch('sample', 'simple') Out[8]: True In [9]: fuzzymatch('sample', 'siimple') Out[9]: False I know my approach is not very clean; that's why I am asking here. I want to know: what is a better approach that gives the same result, is more efficient, and has less complexity? Answer: It would be good to include a docstring explaining the parameters and expected result.
Best of all would be to include the unit-tests in doctest format: def fuzzymatch(str1, str2): """ Returns True if the two strings differ by only one letter (addition, removal or change) and that letter is not the first or last. >>> fuzzymatch('sample', 'ample') False >>> fuzzymatch('sample', 'sample') True >>> fuzzymatch('sample', 'sampley') False >>> fuzzymatch('sample', 'samply') False >>> fuzzymatch('sample', 'sammple') True >>> fuzzymatch('sample', 'sammmple') False >>> fuzzymatch('sample', 'simple') True >>> fuzzymatch('sample', 'siimple') False """ Now we can auto-test the function: if __name__ == "__main__": import doctest doctest.testmod() And we can add some more tests (for example, empty strings should return a value, rather than throwing an exception as they do right now). It's often easier to deal with the simplest returns first, rather than having a long indented block where the reader is holding the condition in their head. if str1[0] != str2[0] or str1[-1] != str2[-1]: return False And we can simplify the longer/shorter stuff by simply swapping to make str1 always the longest: if len(str1) < len(str2): str1, str2 = str2, str1 That is less work than concatenating the two and then finding and removing one of them. This is a really inefficient way of finding the common suffix: same = sum(''.join(str1[:i + 1]) in str2 for i in range(len(str2))) String in string is a search, but we don't need a search, as we know where we expect to find the match. As well as inefficient, it's also obscure, since we have str.endswith() and str.startswith() at our disposal. A simpler version is if len(str1) < len(str2): str1, str2 = str2, str1 # str1 is at least as long as str2 now if len(str1) > len(str2) + 1: return False for i in range(1, len(str2) - 1): if str2.startswith(str1[:i]) and str2.endswith(str1[i+1:]): return True return False We're still copying string slices around. 
To make it more efficient, we can look at a character at a time, and for this, I would create a helper function: import itertools def prefix_count(str1, str2): """ Length of longest common prefix of str1 and str2 >>> prefix_count('', '') 0 >>> prefix_count('a', '') 0 >>> prefix_count('a', 'b') 0 >>> prefix_count('ab', 'aa') 1 >>> prefix_count('aa', 'aa') 2 """ return sum(1 for _ in itertools.takewhile(lambda x: x[0]==x[1], zip(str1, str2))) We can now use that to find the common prefix forwards and backwards, and see if their length indicates that they meet with no more than 1 character of separation: if len(str1) < len(str2): str1, str2 = str2, str1 # str1 is at least as long as str2 now if len(str1) > len(str2) + 1: return False if not str2 or str1[0] != str2[0] or str1[-1] != str2[-1]: return False return prefix_count(str1, str2) + prefix_count(str1[::-1], str2[::-1]) + 1 >= len(str1) Final modified version import itertools def prefix_count(str1, str2): """ Length of longest common prefix of str1 and str2 >>> prefix_count('', '') 0 >>> prefix_count('a', '') 0 >>> prefix_count('a', 'b') 0 >>> prefix_count('ab', 'aa') 1 >>> prefix_count('abc', 'aac') 1 >>> prefix_count('aa', 'aa') 2 """ return sum(1 for _ in itertools.takewhile(lambda x: x[0]==x[1], zip(str1, str2))) def fuzzymatch(str1, str2): """ Returns True if the two strings differ by only one letter (addition, removal or change) and that letter is not the first or last. 
>>> fuzzymatch('', '') False >>> fuzzymatch('', 'a') False >>> fuzzymatch('aba', 'abca') True >>> fuzzymatch('aba', 'acba') True >>> fuzzymatch('acba', 'adba') True >>> fuzzymatch('sample', 'ample') False >>> fuzzymatch('sample', 'sample') True >>> fuzzymatch('sample', 'sampley') False >>> fuzzymatch('sample', 'samply') False >>> fuzzymatch('sample', 'sammple') True >>> fuzzymatch('sample', 'sammmple') False >>> fuzzymatch('sample', 'simple') True >>> fuzzymatch('sample', 'siimple') False >>> fuzzymatch('aabaa', 'aaaaaa') False """ if len(str1) < len(str2): str1, str2 = str2, str1 # str1 is at least as long as str2 now if len(str1) > len(str2) + 1: return False if not str2 or str1[0] != str2[0] or str1[-1] != str2[-1]: return False return prefix_count(str1, str2) + prefix_count(str1[::-1], str2[::-1]) + 1 >= len(str1) if __name__ == "__main__": import doctest doctest.testmod()
{ "domain": "codereview.stackexchange", "id": 41648, "tags": "python, performance, beginner, python-3.x" }
Clebsch-Gordan in Fock Space?
Question: When adding the angular momenta of two particles, you use Clebsch-Gordan coefficients, which allow you, in fancy language, to decompose the tensor product of two irreducible representations of the rotation group into a direct sum of irreducible representations. (I'm not exactly clear on what this means, so on a side note can someone suggest a good book on representation theory of Lie groups?) If we want to add the angular momenta of even more particles, then we have to use other coefficients, like the Wigner 3j, 6j, or in general 3nj symbols. But what if you have an unknown number of particles, like you do in quantum field theory? In this case quantum states live in a so-called Fock space, made of a direct sum of infinitely many tensor products of Hilbert spaces (which is another thing I'm unclear on). So then how do you add angular momentum in Fock space? To put it another way, how do you decompose a tensor product of irreducible representations into a direct sum if you have a variable number of terms in the tensor product? Any help would be greatly appreciated. Thank you in advance. Answer: First of all, I got a bit carried away writing this response, so it's very long (but hopefully useful); my apologies for any grammar errors. As a book reference for introductory representation theory, I personally like the same book that Edward Hughes recommended: Lie Groups, Lie Algebras, and Representations by Brian Hall. It's a math book written by a mathematician, but it's not overly abstract for a physicist, and if you want to learn representation theory right, your best bet is probably to let a mathematician teach it to you. (I hope I don't get hate mail for saying that.)
As for the Fock space question, I think one needs to be careful to distinguish between (a) writing down an infinite direct sum of tensor products of spin Hilbert spaces and performing Clebsch-Gordan decomposition and (b) writing down a physically relevant Fock space, and attempting to decompose its direct summands. Performing (a) doesn't involve any real subtleties as indicated by Andrew and Edward Hughes, and here's how you would do it explicitly. Suppose that we have an $n$-fold tensor product of angular momentum Hilbert spaces; \begin{align} \mathcal H_{j_1}\otimes\mathcal H_{j_2} \otimes \cdots \otimes \mathcal H_{j_n}, \end{align} then we can perform a Clebsch-Gordan decomposition recursively in pairs by noting that the tensor product distributes over the direct sum in much the same way that multiplication of real numbers distributes over addition, and by taking advantage of the associativity of the tensor product. We could start with the first pair $\mathcal H_{j_1}\otimes\mathcal H_{j_2}$ which gives \begin{align} \mathcal H_{j_1}\otimes\mathcal H_{j_2} = \bigoplus_{j=|j_1-j_2|}^{j_1+j_2} \mathcal H_j \end{align} We can then deal with the first three factors; \begin{align} \mathcal H_{j_1}\otimes\mathcal H_{j_2}\otimes\mathcal H_{j_3} = \Big(\bigoplus_{j=|j_1-j_2|}^{j_1+j_2} \mathcal H_j\Big)\otimes\mathcal H_{j_3} = \bigoplus_{j=|j_1-j_2|}^{j_1+j_2}\bigoplus_{j'=|j-j_3|}^{j+j_3} \mathcal H_{j'} \end{align} and so on. So in the event that you consider some infinite direct sum like \begin{align} \mathbb C \oplus \mathcal H_{j_{11}} \oplus (\mathcal H_{j_{21}}\otimes\mathcal H_{j_{22}}) \oplus (\mathcal H_{j_{31}}\otimes\mathcal H_{j_{32}}\otimes\mathcal H_{j_{33}})\oplus\cdots, \end{align} one can apply the above procedure to each direct summand to eliminate all tensor products in favor of direct sums. But now, what about case (b) where we want to consider the Fock space of some physical system.
In this context, Fock space usually refers to an infinite direct sum of symmetrized or antisymmetrized tensor products of a certain single-particle Hilbert space with itself. Physically, this comes from the fact that it's the sort of space we use to describe systems consisting of an arbitrary number of identical fermions or bosons. Since we have to be careful about particle symmetrization and antisymmetrization in each direct summand, the story is rather subtle. Let's consider, for the sake of concreteness, a system that can contain an arbitrary number of identical spin-$1/2$ particles. Let $\mathcal H_\frac{1}{2}$ denote the spin-$1/2$ Hilbert space. Then we might naively think that the Fock space for such a system is as follows: \begin{align} \mathbb C \oplus S_-(\mathcal H_\frac{1}{2}) \oplus S_-(\mathcal H_\frac{1}{2}\otimes \mathcal H_\frac{1}{2}) \oplus S_-(\mathcal H_\frac{1}{2}\otimes\mathcal H_\frac{1}{2}\otimes\mathcal H_\frac{1}{2})\oplus\cdots \end{align} The first term $\mathbb C$ in the direct sum corresponds to the Hilbert space with no particles in it, which is one-dimensional. The symbol $S_-$ in front of each summand picks out the antisymmetric subspace of the Hilbert space on which it operates, since we are dealing with identical fermions whose states must be totally antisymmetric. Now the question becomes, how does one decompose this beast into direct sums of Hilbert spaces, each of which is a copy of some spin-$s$ Hilbert space? Let's deal with the direct summands one by one. First, we note that \begin{align} S_-(\mathcal H_\frac{1}{2}) = \mathcal H_\frac{1}{2} \end{align} since when there is a single particle, antisymmetrization does nothing; there are no tensor factors to permute. Next, we use Clebsch-Gordan theory for the second summand; \begin{align} \mathcal H_\frac{1}{2}\otimes \mathcal H_\frac{1}{2} &\cong \mathcal H_0 \oplus \mathcal H_1 \end{align} where $\cong$ here denotes isomorphism of vector spaces.
In other words, the tensor product of two spin-$1/2$ Hilbert spaces decomposes into a direct sum of a spin-$0$ and a spin-$1$ Hilbert space. Now, it is not hard to see by inspection that all of the (three) states in the spin-$1$ Hilbert space are symmetric, while the single state in the spin-$0$ Hilbert space is antisymmetric. If you don't believe me, open a book on quantum mechanics and stare at the "singlet" and "triplet" states which form bases for $\mathcal H_0$ and $\mathcal H_1$ respectively, and you'll see that the triplet states don't change under interchange of the two particles, but the singlet state changes by a sign. It follows that \begin{align} S_-(\mathcal H_\frac{1}{2}\otimes \mathcal H_\frac{1}{2}) \cong \mathcal H_0. \end{align} Now what about the terms with higher tensor powers of $\mathcal H_\frac{1}{2}$? If we have three factors of $\mathcal H_\frac{1}{2}$, then we can write \begin{align} \mathcal H_\frac{1}{2}\otimes \mathcal H_\frac{1}{2} \otimes \mathcal H_\frac{1}{2} &\cong (\mathcal H_0\oplus \mathcal H_1)\otimes \mathcal H_\frac{1}{2} \\ &\cong (\mathcal H_0\otimes \mathcal H_\frac{1}{2})\oplus (\mathcal H_1\otimes \mathcal H_\frac{1}{2})\\ &\cong (\mathcal H_\frac{1}{2})\oplus (\mathcal H_\frac{1}{2}\oplus\mathcal H_\frac{3}{2}) \end{align} But now, notice the following. A basis for the spin-$1/2$ Hilbert space consists of the spin up and spin down states $|+\rangle$ and $|-\rangle$. It follows that a general state $|\psi\rangle$ in this $3$-particle Hilbert space will be some linear combination of tensor product basis elements of the form $|\nu_1\rangle|\nu_2\rangle|\nu_3\rangle$ where each $\nu_i = \pm$; \begin{align} |\psi\rangle = \sum_{\nu_1,\nu_2,\nu_3=\pm}c_{\nu_1,\nu_2,\nu_3}|\nu_1\rangle|\nu_2\rangle|\nu_3\rangle \end{align} But there is no such state that is totally antisymmetric under the exchange of any two tensor factors, so there is no state in this Hilbert space that is totally antisymmetric.
This is just a reflection of the Pauli exclusion principle: when the single-particle Hilbert space is only two-dimensional, one can only build composite systems out of it that contain at most two particles. In fact, similar reasoning shows that one cannot build a totally antisymmetric state in any of the Hilbert spaces that contain three or more tensor factors of $\mathcal H_\frac{1}{2}$. It follows that antisymmetrization of the $n$-fold tensor product of $\mathcal H_\frac{1}{2}$ with itself gives the zero vector space for all $n\geq 3$; \begin{align} S_-(\mathcal H_\frac{1}{2}^{\otimes n}) = \{0\}, \qquad\text{for all $n\geq 3$} \end{align} and therefore, putting this all together shows that the Fock space actually just reduces to a finite-dimensional vector space; \begin{align} \mathbb C\oplus \mathcal H_\frac{1}{2}\oplus S_-(\mathcal H_\frac{1}{2}\otimes \mathcal H_\frac{1}{2}) = \mathbb C \oplus \mathcal H_\frac{1}{2}\oplus \mathcal H_0 \end{align} But now you may ask, how then do we have systems that can contain, for example, an arbitrarily large number of identical electrons? Take, for example, a gas of non-interacting electrons in one dimension in a harmonic oscillator potential, in contact with an infinite particle reservoir and subject, in addition, to an external magnetic field in the $z$-direction to which the spin of each electron couples. In this case, the single-particle Hilbert space is no longer just $\mathcal H_\frac{1}{2}$, which is two-dimensional. Instead, the single-particle Hilbert space is \begin{align} \mathcal H = \mathcal H_\frac{1}{2}\otimes \mathcal H_\mathrm{ho} \end{align} where $\mathcal H_\mathrm{ho}$ denotes the harmonic oscillator Hilbert space spanned by the states $|n\rangle$ with $n=0,1,2,\dots$. In particular, the single-particle Hilbert space is now infinite-dimensional.
For notational convenience, let's write the tensor product basis elements of $\mathcal H$ as $|\nu, n\rangle$ where $\nu =\pm$ is the spin up-down label and $n$ is the harmonic oscillator label. In this case, the Fock space is \begin{align} \mathbb C\oplus S_-(\mathcal H)\oplus S_-(\mathcal H\otimes\mathcal H)\oplus S_-(\mathcal H\otimes\mathcal H\otimes\mathcal H)\oplus \cdots \end{align} But now the story is a lot richer than in the last example. Let's look, for example, at the third direct summand. In the last example, where the single-particle Hilbert space was just $\mathcal H_\frac{1}{2}$, the triplet states were not allowed in the two-particle Hilbert space because they are symmetric, and we are dealing with identical fermions. But now that the single-particle space has a harmonic oscillator factor, the story is much different. In particular, it's easy to construct an antisymmetric state in which both of the particles are spin-up, as follows: \begin{align} |+,0\rangle|+,1\rangle - |+,1\rangle|+,0\rangle \end{align} The key point here is that since there is now an extra tensor factor in the single-particle Hilbert space, there is another label, in this case a non-negative integer, which can be used to break the symmetry between the spin states of the two particles, as in the state above where both particles are spin up but have different harmonic oscillator labels. This means that the two-particle Hilbert space no longer degenerates into just the singlet Hilbert space. In this case, in fact, it is infinite-dimensional and contains states of both spin $0$ and spin $1$, as in the full Clebsch-Gordan decomposition. Similar comments apply to the later direct summands. For example, in the fourth term, where we have three tensor factors of the single-particle Hilbert space, we can have states with any combination of the three spins being up or down.
For example, if we want all of the spins to be down then we can form the state \begin{align} \sum_{\sigma\in S_3}\mathrm{sgn}(\sigma)\,|-,\sigma(0)\rangle|-,\sigma(1)\rangle|-,\sigma(2)\rangle \end{align} where $S_3$ is the group of permutations of $\{0,1,2\}$ and $\mathrm{sgn}(\sigma)$ is the sign of the permutation, which is $+1$ if it involves an even number of exchanges and $-1$ otherwise. The main point is that now, after antisymmetrization, each direct summand containing $n$ particles, namely \begin{align} S_-(\mathcal H^{\otimes n}) \end{align} contains states of every total spin that would have appeared in the Clebsch-Gordan decomposition of the $n$ spin-space tensor factors alone. However, it's not clear to me if there is a systematic notation for writing down what the resulting antisymmetrized Hilbert space is, I'm not sure if the Clebsch-Gordan decomposition does much for us in these cases, and I've never personally seen this issue crop up in field theory/QM courses/research.
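As a numerical cross-check of the dimension counting above (a sketch added here, not part of the original answer): the antisymmetric subspace of $n$ copies of a $d$-dimensional space has dimension $\binom{d}{n}$, which can be read off as the trace of the antisymmetrizing projector. For $d=2$ this gives $2, 1, 0, 0, \dots$, exactly the Pauli-exclusion collapse described above.

```python
import itertools
import math
import numpy as np

def perm_sign(perm):
    """Sign of a permutation of 0..n-1, via counting transpositions."""
    perm, s = list(perm), 1
    for i in range(len(perm)):
        while perm[i] != i:
            j = perm[i]
            perm[i], perm[j] = perm[j], perm[i]
            s = -s
    return s

def antisym_dim(d, n):
    """Dimension of the antisymmetric subspace of the n-fold tensor
    power of C^d, computed as the trace of the antisymmetrizing projector."""
    basis = list(itertools.product(range(d), repeat=n))
    index = {b: i for i, b in enumerate(basis)}
    P = np.zeros((len(basis), len(basis)))
    for perm in itertools.permutations(range(n)):
        s = perm_sign(perm)
        for b in basis:
            pb = tuple(b[perm[k]] for k in range(n))
            P[index[pb], index[b]] += s
    P /= math.factorial(n)   # P = (1/n!) * sum_sigma sgn(sigma) * pi(sigma)
    return int(round(np.trace(P)))

# d = 2 is the spin-1/2 single-particle space
print([antisym_dim(2, n) for n in range(1, 5)])  # [2, 1, 0, 0]
```

The zeros for $n \geq 3$ are the statement $S_-(\mathcal H_\frac{1}{2}^{\otimes n}) = \{0\}$; with a larger (in the oscillator example, infinite-dimensional) single-particle space, $\binom{d}{n}$ no longer vanishes.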
{ "domain": "physics.stackexchange", "id": 9845, "tags": "quantum-mechanics, quantum-field-theory, mathematical-physics" }
What determines the energy of a photon emitted by an electron when it changes its energy level?
Question: A neutral helium atom is in an excited electronic state, with one electron in the (2p) level and the other in the (3d) level. The atom is initially in a region with zero applied magnetic field. (a) Can the electron in the (3d) level emit a photon and change to the (2p) level? If so, how many different photon energies would you expect to measure? Explain your answer. (b) A magnetic field of 1.0 Tesla is applied to the atom. Can the electron in the (2p) level emit a photon and change to the (1s) level, with the electron in the (3d) level remaining in the (3d) level? If so, how many different photon energies would you expect to measure? Explain your answer. I already know the answers. However, there is something I need help understanding. Since the principal quantum number n can only change by +/-1, and the photon energy is determined by the change in n, how can there be multiple photon energies for a single transition? Answer: n is called the principal quantum number because it gives the basic energy level. l and m give corrections to this basic energy level, and this allows for the possibility of many more transitions, as long as quantum numbers are conserved. If you look at the hydrogen energy levels at extremely high resolution, you do find evidence of some other small effects on the energy. The 2p level is split into a pair of lines by the spin-orbit effect. The 2s and 2p states are found to differ a small amount in what is called the Lamb shift. And even the 1s ground state is split by the interaction of electron spin and nuclear spin in what is called hyperfine structure.
{ "domain": "physics.stackexchange", "id": 44098, "tags": "quantum-mechanics, electrons, atomic-physics" }
The DC bin of an FFT should be the mean of all the other bins in the spectrum, why does the FFT produce 0?
Question: I am experimenting a bit with FFTs. From what I have read, the first bin of an FFT is called the DC bin -- that is, it is the mean of the other components in the FFT. I have also read that it should be close to 0. When I actually perform the FFT and take the absolute value, what I find is that the first element of this is always 0. This makes sense mathematically if you consider the definition of the DFT itself. Is there a best practice for reconstructing this first DC bin? So it seems that because I was making my test FFT with a sine wave as the transient signal, this is what caused the 0-valued first FFT bin. I changed it to a cosine and got a non-zero first element... and the same with any random data. I should have thought a bit first. Answer: FFT computes the DFT, which is $$X[k] = \sum_{n=0}^{N-1} x[n] e^{-j \frac{2\pi}{N} k n } $$ where $x[n]$ is a sequence of length $N$ defined in $0 \leq n \leq N-1$. The DC bin is $X[0]$; for $k=0$ $$ X[0] = \sum_{n=0}^{N-1} x[n] $$ is the sum of all samples of the signal $x[n]$. It's not the average, but the average is obtained simply by dividing it by $N$.
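The identity $X[0]=\sum_{n} x[n]$ is easy to see numerically (a small illustration with NumPy; the frequency and offset values are arbitrary choices):

```python
import numpy as np

N = 64
t = np.arange(N)

# a sine spanning whole periods sums to zero, so its DC bin is numerically zero
x = np.sin(2 * np.pi * 4 * t / N)
X = np.fft.fft(x)
print(abs(X[0]))            # ~1e-14

# a constant offset c shifts the DC bin to N*c, i.e. exactly the sample sum
y = x + 0.5
Y = np.fft.fft(y)
print(Y[0].real, y.sum())   # both ~32.0; the mean of the signal is Y[0]/N
```
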
{ "domain": "dsp.stackexchange", "id": 7680, "tags": "fft" }
About light in the universe
Question: As a light source in the universe (e.g. the sun) emits light in different directions, some of the light emitted reaches places like Earth, and some doesn't. So does the light that reaches the Earth disappear, or is it reflected in other directions? And for the light that doesn't reach any place, does it keep on going forever? If it does keep on going forever, will the universe become brighter and brighter? Thanks! Answer: As you probably know, the light that you mention is electromagnetic radiation, so it is part of a large spectrum in which $\textit{visible light}$ makes up only a small part. The sun emits tons of radiation, but not all of it is visible light. When you talk about the brightness of the Universe (referring only to the visible light which is radiated by cosmic objects), you need to be careful: the brightness of the Universe can actually be calculated if you know how much light each cosmic object emits. Of course, if you know the brightness of, say, the Andromeda galaxy, then you probably know enough details about its components (i.e. nebulas, stars and so on). Scientists want to measure the brightness of the Universe (see here). They will probably obtain a value which will be large, but of course finite. It can also change in value due to normal cosmic events: supernovae, quasars, the death of stars (so it can decrease or increase), but I guess the change in value is NOT noticeable in one day :D
{ "domain": "physics.stackexchange", "id": 41951, "tags": "visible-light, universe, stars" }
Handle builtin commands
Question: I've written a small C function to handle the builtin commands of a custom shell that I'm writing in C. Is it better to use a switch instead?

    int handleBuiltinCommands(char *input, int ret)
    {
        int built_in_command = 0;
        if (strcmp(input, "exit") == 0) {
            free(input);
            exit(0);
        }
        if (StartsWith(input, "cd")) {
            built_in_command = 1;
            runcd(input);
        }
        if (StartsWith(input, "checkEnv")) {
            built_in_command = 1;
            checkEnv(ret);
        }
        return built_in_command;
    }

I compile it with gcc -pedantic -Wall -ansi -O3 (Background: Tokenizing a shell command) Answer: It's OK to use the if construction, although if you plan to have more than a few commands, using switch leads to cleaner code imho. In addition, why not use else if instead of separate ifs? That way not all of your conditions have to be evaluated (on average).
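A side note on the answer's suggestion: C's switch cannot dispatch on strings, so the usual cleaner alternative once the command list grows is a table of name/handler pairs scanned with strcmp. Here is a sketch of that dispatch-table idea, shown in Python for brevity (the handler bodies are placeholders, not the original shell's code):

```python
def run_cd(arg):
    return f"cd {arg}"    # placeholder: a real shell would call chdir()

def run_check_env(arg):
    return "checkEnv"     # placeholder for the original checkEnv()

BUILTINS = {
    "cd": run_cd,
    "checkEnv": run_check_env,
}

def handle_builtin(line):
    """Return (handled, result); 'handled' mirrors built_in_command in the C code."""
    name, _, rest = line.partition(" ")
    handler = BUILTINS.get(name)
    if handler is None:
        return False, None
    return True, handler(rest.strip())

print(handle_builtin("cd /tmp"))   # (True, 'cd /tmp')
print(handle_builtin("ls -l"))     # (False, None)
```

Adding a builtin is then one table entry; in C the same shape is a static array of struct { const char *name; int (*fn)(char *); } walked with strcmp.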
{ "domain": "codereview.stackexchange", "id": 19574, "tags": "c, shell, unix, posix" }
Total Relativistic Energy of Massive Body
Question: I have 2 different expressions for the relativistic energy of a body and I'm trying to relate them: The first one is given by $$E=\sqrt{(mc^2)^2+(pc)^2},$$ with $m$ the mass and $p$ the momentum. The second one is given by the mass energy $mc^2$ together with the kinetic energy $T$: $$E = mc^2 + T.$$ I also have that $$T=p^2/2m,$$ but if I substitute that in the first expression I don't obtain the second expression. Can someone explain why? Answer: You can derive the second expression from the first one, but only approximately, not exactly. $$\begin{align} E&=\sqrt{(mc^2)^2+(pc)^2} \\ &=mc^2\sqrt{1+\frac{p^2}{m^2c^2}} \\ &\approx mc^2\left(1+\frac{p^2}{2m^2c^2}\right) \\ &=mc^2+\frac{p^2}{2m}. \end{align}$$ The approximation made in the 3rd line is $$\sqrt{1+x}\approx 1+\frac{x}{2}$$ which is valid only if $x\ll 1$. In your case this means $p\ll mc$, which essentially is the non-relativistic case (bodies with slow velocity, much smaller than $c$). Remember, the relation $T=\frac{p^2}{2m}$ is from Newtonian mechanics, and thus becomes wrong for larger velocities (i.e. not much smaller than $c$).
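The size of the error in the $T\approx p^2/2m$ approximation is easy to check numerically (a sketch in natural units $m=c=1$; the momentum values are arbitrary):

```python
import math

def kinetic_exact(p, m=1.0, c=1.0):
    """T = E - mc^2 from the exact relation E = sqrt((mc^2)^2 + (pc)^2)."""
    return math.sqrt((m * c**2) ** 2 + (p * c) ** 2) - m * c**2

def kinetic_newton(p, m=1.0):
    """Newtonian kinetic energy p^2 / (2m)."""
    return p * p / (2 * m)

for p in (0.01, 0.1, 1.0):
    e, n = kinetic_exact(p), kinetic_newton(p)
    print(f"p = {p}: exact T = {e:.6g}, Newtonian T = {n:.6g}, rel. error = {(n - e) / e:.2g}")
```

The relative error grows roughly like $p^2/(4m^2c^2)$: negligible at $p = 0.01\,mc$, but already about 20% at $p = mc$.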
{ "domain": "physics.stackexchange", "id": 73559, "tags": "special-relativity, energy, kinematics" }
Kinect with zoom lens calibration: Bad link on ros wiki
Question: I am trying to view documentation about calibrating the Kinect using a zoom lens. The link suggested by www.ros.org/wiki/kinect_calibration/technical leads to this dead link: www.ros.org/doc/api/kinect_near_mode_calibration/html. Why is this documentation no longer on the server? Originally posted by ee.cornwell on ROS Answers with karma: 108 on 2013-02-02 Post score: 0 Answer: Check out docs embedded in https://svn.code.sf.net/p/jsk-ros-pkg/code/trunk/jsk_common/jsk_openni_kinect/kinect_near_mode_calibration/launch/kinect_near_mode_calibration.launch Originally posted by Nick Armstrong-Crews with karma: 481 on 2014-07-11 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 12699, "tags": "kinect" }
How did Einstein know where to put individual elements of E-M Tensor $T$ w.r.t. the corresponding tensor $G$? where $G=\kappa T$
Question: We know the well-known relation in General Relativity, $G=\kappa T$, where $G$ is the Einstein tensor, $T$ is the energy-momentum tensor and $\kappa$ is a constant. I wanted to ask how Einstein knew where to place a particular element of the energy-momentum tensor with respect to the corresponding curvature tensor. I know that $T(0,0)$ is the energy density, which corresponds to the $G(0,0)$ component and gives the Newtonian picture in the rest frame. But how did Einstein know where to put each element of $T$ with respect to the curvature tensor $G$? Any insight would be super helpful. Thanks! Answer: The divergence of the energy-momentum-stress tensor is zero. The time component of this expresses energy conservation, and the spatial components express momentum conservation. This means that the time-time component is energy density, the time-space components are energy flow / momentum density, and the space-space components are momentum flow. The components cannot be arbitrarily arranged inside the tensor. They are arranged so that they transform properly under local Lorentz transformations. For example, if the tensor has nonzero energy density but zero momentum density in a rest frame, it has to have nonzero momentum density in a moving frame.
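The conservation statement in the answer can be written out explicitly in flat-space notation (a standard identity, added here for concreteness; $c=1$, Latin indices running over the spatial directions):

$$\partial_\mu T^{\mu\nu}=0 \quad\Longrightarrow\quad \underbrace{\partial_t T^{00} + \partial_i T^{i0} = 0}_{\text{energy conservation}}, \qquad \underbrace{\partial_t T^{0j} + \partial_i T^{ij} = 0}_{\text{momentum conservation}},$$

which forces $T^{00}$ to be the energy density, $T^{0i}=T^{i0}$ the momentum density / energy flux, and $T^{ij}$ the stress (momentum flux). The placement of the components is therefore fixed by conservation and Lorentz covariance alone, before any appeal to the Einstein equations.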
{ "domain": "physics.stackexchange", "id": 72237, "tags": "general-relativity, metric-tensor, curvature, stress-energy-momentum-tensor" }
Conundrum involving distance to object, universal expansion, age of universe, etc
Question: I was recently reading about a galaxy or quasar (not sure which, so let's just say quasar) where the estimated distance to this thing was some very significant fraction of the age of the universe. What I mean is this: I can't recall the exact particulars, but assuming for argument's sake that the age of the universe is 13.8 billion years old, this thing was like 13 billion light years away. (Again, some detail here might be slightly off; the thing might have been estimated at 12 billion light years away - so focus on the general principle I am laying out here, not any particulars). What I don't understand is this. If the thing is 13 billion light years away, then the light reaching my eyes as I look at it now took 13 billion years to get to me. Yet the age of the universe is 13.8 billion years old (again if I'm slightly wrong in some of these numbers, just accept them for now for argument's sake). It means the light must have left this object when the universe was much "smaller" or more compact, and things were much closer together. Which means the light should have passed my position a long time ago if I have been either "moving" or "expanding" (or some combination) away from the object at anything except some rather large percentage of the speed of light. To try to boil this down, or restate it in simpler terms: 1) either it seems I am moving/expanding/red-shifting away from this object at some significant fraction of the speed of light (haven't done the numbers, but might be 80% or 90% or so) and have been for a while, or 2) something is "off" (the age of the universe, or the distance of this object), or 3) there is something I'm not understanding. Which is why I came here.
I have never seen it stated that we are "expanding" or "redshifting" or [insert whatever term you like] at some significant fraction of the speed of light; I have only ever seen it stated that for some brief period of time after the big bang the universe could have (or did) expand faster than light, but again that was for like less than a second if memory serves. At any rate, anyone care to explain? Thanks. Answer: The 13 billion light-years distance of the quasar means 13 billion light-years of light-travel distance. In other words, the light took 13 billion years to reach us, independent of the distance the quasar is now away from us. The current proper distance (which a chain of rulers would measure) is called comoving distance; it is much larger than the light-travel distance for large redshifts. The quasar is moving away from us in an accelerated fashion, and is now moving away faster than light. This means that light which is emitted now (in the sense of cosmic time) by the galaxy the quasar may have evolved into in the meantime will never reach us. The light the quasar emitted 13 billion years ago left the quasar just in time to leave the regions of the universe which later have been receding faster than light from us, and eventually reach us. As a thought experiment, imagine walking at 5 km/h (our playground speed of light) on a rubber band which is expanding at a constant rate of 1 km/h per meter of actual length of the band. (This leads to an exponential acceleration of the distance between two marks on the band.) If you start closer than 5 meters from your goal, e.g. from a start mark (our quasar) 4.50 m away, you'll finally reach the goal mark. The start mark will soon be further away from the goal mark than 5 meters, therefore receding at more than 5 km/h from the goal shortly after you left the start mark.
At the moment you arrive at the goal mark, the start mark will be much further away (comoving distance) than the distance you needed to walk (light-travel distance). And you've been walking a longer distance than the (proper) distance between the marks was at the time you started walking. Btw.: acceleration is only felt as a force when the velocity relative to the local rubber band (metaphoric space-time) is changed. Example calculations with a protogalaxy of redshift $z=11.9$: Based on the Cosmology Calculator on this website, the cosmological parameters $H_0 = 67.11$ km/s/Mpc, $\Omega_{\Lambda} = 0.6825$ provided by the Planck project, and the scale factor $d(t) = d_0 / (1+z)$, setting $\Omega_M = 1- \Omega_{\Lambda} = 0.3175$, the age of the universe is $13.820$ Gyr, and the comoving distance of the protogalaxy is $d_0 = 32.644$ Gly. The age of the universe at the time we see the protogalaxy (at redshift 11.9) was 0.370 Gyr; the light-travel distance has been 13.450 Gly; the proper distance was 2.531 Gly. After the protogalaxy emitted, 0.370 Gyr after the big bang, the light we detect now, that light travelled towards us through space of redshift beginning at 11.9 and shrinking to 0; the light arrived at us 13.820 Gyr after the big bang. The comoving distance (to us) of the space traversed by the light started at 32.644 Gly, shrinking to 0. The remaining distance the light needed to travel started at 13.450 Gly, shrinking to 0. The proper distance between the protogalaxy and us started at 2.531 Gly, increasing to 32.644 Gly due to the expansion of spacetime.
Here are some intermediate states, described by tuples consisting of: redshift $z$, corresponding age $t$ of the universe (Gyr), comoving radial distance (at age $t$) of the emitted light we can now detect from the protogalaxy (Gly), remaining light-travel distance of that emitted light (Gly), and proper distance of the protogalaxy at age $t$, according to $d(t) = d_0 / (1+z)$: $$(11.9, 0.370, 32.644, 13.450, 2.531),$$ $$(11.0, 0.413, 32.115, 13.407, 2.720),$$ $$(10.0, 0.470, 31.453, 13.349, 2.968),$$ $$( 9.0, 0.543, 30.693, 13.277, 3.264),$$ $$( 8.0, 0.636, 29.811, 13.184, 3.627),$$ $$( 7.0, 0.759, 28.769, 13.061, 4.081),$$ $$( 6.0, 0.927, 27.511, 12.892, 4.663),$$ $$( 5.0, 1.168, 25.952, 12.651, 5.441),$$ $$( 4.0, 1.534, 23.952, 12.285, 6.529),$$ $$( 3.0, 2.139, 21.257, 11.680, 8.161),$$ $$( 2.0, 3.271, 17.362, 10.549, 10.881),$$ $$( 1.0, 5.845, 11.124, 7.974, 16.322),$$ $$( 0.0, 13.820, 0.0 , 0.0 , 32.644).$$ The Hubble parameter, meaning the expansion rate of space per fixed proper distance, is decreasing with time. This allowed the protogalaxy to recede at almost the speed of light, although it was just about 2.5 Gly away from us (proper distance) at the time it emitted the light we detect now. Nevertheless, distant objects in this space accelerate away from us, since their increasing distance is multiplied by the expansion rate of space.
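The rubber-band walker from the answer can be integrated directly (a sketch; the parameters are the ones quoted in the text: walker speed 5 km/h, stretch rate 1 km/h per meter, start mark 4.5 m from the goal). Writing $f$ for the comoving fraction of the way covered, $\dot f = v/(L_0 e^{Ht})$, which reaches $1$ at $t=-\ln(1-HL_0/v)/H$ provided $L_0 < v/H = 5$ m:

```python
import math

v = 5000.0   # walker speed in m/h (the "speed of light": 5 km/h)
H = 1000.0   # stretch rate per hour (1 km/h of expansion per meter of band)
L0 = 4.5     # initial start-to-goal distance in meters (must be < v/H = 5 m)

# forward-Euler integration of df/dt = v / (L0 * exp(H t))
t, f, dt = 0.0, 0.0, 1e-7
while f < 1.0:
    f += v / (L0 * math.exp(H * t)) * dt
    t += dt

t_exact = -math.log(1.0 - H * L0 / v) / H
print(t, t_exact)   # both ~0.0023 h; for L0 >= 5 m the log diverges: no arrival
```
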
{ "domain": "astronomy.stackexchange", "id": 6628, "tags": "distances, expansion, redshift" }
Diameter-constrained Minimum Spanning Tree Problem
Question: The diameter-constrained Minimum Spanning Tree (MST) problem is as follows: you have an undirected weighted graph $G = (V,E)$ with distinct edge weights, where $V$ is the set of vertices and $E$ is the set of edges between vertices, and a constant $d$. The goal is to find an MST such that the diameter (i.e., the maximum distance of the shortest paths between vertices) of the MST is at most $d$. My question is that I am debating whether the following is true or not: once you have an MST of a graph $G$, if the diameter of the MST is $>d$, then is it true to say that there exists no feasible solution? Also, once you have an MST of $G$, is the diameter of that MST the maximum diameter of $G$? Or the minimum? Or what can be said about the diameter of an MST? Answer: There is no direct relationship between the diameter of a (minimum) spanning tree and the total cost of the tree [1]. Consider the following example: The spanning tree on the left (whose edges are highlighted in red) is minimum. Its total cost is 7 and the diameter is equal to 5. In contrast, the spanning tree on the right is not minimum (since its total cost is 12), but it has a smaller diameter: 4. The same situation may occur when two spanning trees are minimum, as suggested by Yuval. Consider the following example (for the complete graph $K_4$): In this case, the total cost of the two Minimum Spanning Trees (MSTs) is 3; however, the MST on the left has a diameter that is equal to 3, while the MST on the right has a smaller diameter: 2. This counter-example disproves your assumption: it is, indeed, possible to find two different MSTs $T$ and $T'$ whose diameters are $\gt d$ and $\leq d$, respectively, within the same weighted undirected graph $G$. [1] Note that Minimum-Diameter Spanning Trees (MDST) can be found in polynomial time, but the problem becomes NP-Hard when we also want the MDST to be minimum (i.e., when we want the total cost of the tree to be minimized).
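The $K_4$ counter-example is easy to verify mechanically: with unit edge weights, a Hamiltonian-path spanning tree and a star spanning tree both cost 3 but have diameters 3 and 2. A small sketch using the standard double-BFS diameter computation on trees:

```python
def tree_diameter(edges):
    """Diameter, in edges, of a tree given as a list of (u, v) pairs."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)

    def farthest(src):
        # BFS; returns the farthest node from src and its distance
        dist, frontier = {src: 0}, [src]
        while frontier:
            nxt = []
            for u in frontier:
                for w in adj[u]:
                    if w not in dist:
                        dist[w] = dist[u] + 1
                        nxt.append(w)
            frontier = nxt
        far = max(dist, key=dist.get)
        return far, dist[far]

    a, _ = farthest(next(iter(adj)))   # farthest node from an arbitrary start,
    _, d = farthest(a)                 # then farthest from that node: the diameter
    return d

path = [(0, 1), (1, 2), (2, 3)]   # a Hamiltonian-path MST of K4: diameter 3
star = [(0, 1), (0, 2), (0, 3)]   # a star MST of K4: diameter 2
print(tree_diameter(path), tree_diameter(star))   # 3 2
```
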
{ "domain": "cs.stackexchange", "id": 7508, "tags": "graphs, weighted-graphs, spanning-trees, minimum-spanning-tree" }
Changing TransformListener's transform
Question: Hi, so I'm trying to change the transform that I'm getting from a transform listener, but it doesn't seem to change. When trying to set the values, they don't change. Here I'm declaring a const variable, since getRotation().getX() returns a const value:

    const double scalarX = transform.getRotation().getX() + 0.1;
    transform.getRotation().setX(scalarX);

Transform is acting like it's const, but it's not, and neither is getRotation(), yet I still can't change the values.

    tf::StampedTransform transform;
    listener.lookupTransform(*camera_depth, *base_link, ros::Time(0), transform);
    cout << "Before added scalar: " << transform.getRotation().getX() << endl;
    const double scalarX = transform.getRotation().getX() + 0.1;
    transform.getRotation().setX(scalarX);
    cout << "After added scalar: " << transform.getRotation().getX() << endl;

Originally posted by RosFan19 on ROS Answers with karma: 107 on 2015-03-23 Post score: 0 Answer: This is not operating as you expect because you are operating on a temporarily allocated Quaternion, not on the transform object. Originally posted by tfoote with karma: 58457 on 2015-03-23 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by RosFan19 on 2015-03-27: You are correct. Thanks for the answer!
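The accepted answer's point is that the rotation getter returns a quaternion by value, so setX lands on a temporary. The same pitfall can be mimicked in any language whose getter returns a copy; here is a minimal Python stand-in (these classes are illustrative, not the tf API), including the read-modify-write fix:

```python
import copy

class Quaternion:
    def __init__(self, x=0.0, y=0.0, z=0.0, w=1.0):
        self.x, self.y, self.z, self.w = x, y, z, w

class Transform:
    """Stand-in for a transform whose rotation getter returns a copy."""
    def __init__(self):
        self._rotation = Quaternion()

    def get_rotation(self):
        return copy.copy(self._rotation)   # a temporary, like C++ return-by-value

    def set_rotation(self, q):
        self._rotation = q

t = Transform()
t.get_rotation().x = 0.1         # mutates the temporary copy only
print(t.get_rotation().x)         # still 0.0

q = t.get_rotation()              # fix: fetch, modify, then write back
q.x += 0.1
t.set_rotation(q)
print(t.get_rotation().x)         # 0.1
```
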
{ "domain": "robotics.stackexchange", "id": 21215, "tags": "ros, transform-listener, stampedtransform, tf2" }
Computing the longest common substring of two strings using suffix arrays
Question: After I learned how to build a suffix array in $O(N)$ time, I became interested in discovering the applications of suffix arrays. One of these is finding the longest common substring between two strings in $O(N)$ time. I found the following algorithm on the internet:

1. merge the two strings $A$ and $B$ into one string $AB$
2. compute the suffix array of $AB$
3. compute the $LCP$ (longest common prefix) array
4. the answer is the largest value $LCP[i]$

I tried to implement it, but as many implementation details were not specified (e.g. when concatenating the strings, should I put a special character between them, as in $AcB$?), my code failed on many test cases. Could someone elaborate more on this algorithm? Thanks in advance. Note: I do not guarantee the correctness of this algorithm; I found it on a blog, and I'm not sure it works. If you think it is incorrect, please suggest another algorithm.
The $LCP$ array would be $LCP = [-, 1, 2, 0, 1, 1, 0]$. Now, given two strings $A$ and $B$, we concatenate them as $S = A\#B$, where $\#$ is a character not present in both $A$ and $B$. The reason for choosing such a character is so that when computing the LCP of two suffixes, say $ab\#dabd$ and $abd$, the comparison will break off at the end of the first string (since it only occurs once, two different suffixes will never have it in the same position), and won't "overflow" into the other string. Now you should be able to see why you only need to check consecutive values in the $LCP$ array (the argument is based on contradiction and the fact that the suffixes in $SA$ are in lexicographic order). Keep checking the $LCP$ array for the maximum value such that the two suffixes being compared do not belong to the same original string. If they don't belong to the same original string (one begins in $A$ and the other in $B$), then the largest such value is the length of the longest common substring. As an example, consider $A = abcabc$ and $B = bc$. Then, $S = abcabc\#bc$. Sorted suffixes are $\{abc\#bc, abcabc\#bc, bc, bc\#bc, bcabc\#bc, c, c\#bc, cabc\#bc, \#bc\}$ (treating $\#$ as lexicographically largest, consistent with the arrays below). $\begin{align*} SA &= [4, 1, 8, 5, 2, 9, 6, 3, 7] \\ LCP &= [-, 3, 0, 2, 2, 0, 1, 1, 0] \end{align*}$
{ "domain": "cs.stackexchange", "id": 8634, "tags": "algorithms, suffix-array" }
"can't locate node [spawner] in package [controller_manager]"
Question: Trying to run the rrbot gazebo demo found at http://arnaudbertrand.io/blog/2015/06/26/ros-gazebo-tutorial-the-definitive-crash-course/. When I run: > roslaunch -v rrbot_control rrbot_control.launch I get: ERROR: cannot launch node of type [controller_manager/spawner]: can't locate node [spawner] in package [controller_manager] The problem seems to be that rosrun is trying to find the controller_manager spawner node in /opt/ros/kinetic/share/controller_manager, but in fact it seems to be installed at /opt/ros/kinetic/lib/controller_manager. This is my first week using ROS, so I'm at something of a loss as to how to fix this. Is spawner simply installed in the wrong location? Should I copy the files over to /share? Originally posted by GlenH on ROS Answers with karma: 36 on 2016-11-07 Post score: 1 Answer: OK, so if anyone who is flailing trying to understand ROS in the future stumbles across this, here's the answer: ROS installs a script called "setup.bash" in /opt/ros/kinetic that you have to source before anything will run. If you also install some third-party code that you have to build from source (as opposed to installing from packages), that code resides in a directory somewhere on your system called "catkin_ws" or similar. There will be a couple of directories inside of this called "devel" and maybe "install". Inside those directories there are also "setup.bash" files that tell the system about the new stuff you've built. You can source them using an "--extend" flag in order to add the extra search paths to your ROS system. Here's the key, though: you have to source all of the setup scripts in all of the terminals, even ones that you aren't using to call your new code. That was my problem: I was sourcing /opt/ros/kinetic/setup.bash in all of my terminals, but only sourcing devel/setup.bash in the terminal I was using to call the code that lived in my catkin workspace.
Originally posted by GlenH with karma: 36 on 2016-11-07 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by gvdhoorn on 2016-11-08: re: "source all the setup scripts": typically, you only source a single setup script, which already automatically extends any underlying workspaces for you. You achieve that by building your own workspace after having sourced one that contains your dependencies. .. Comment by gvdhoorn on 2016-11-08: .. All of your workspaces (if you have multiple) should form a chain, with /opt/ros/$distro probably being the 'lowest' one, and all the others layering on top of that. See also wiki/catkin/workspace overlaying.
{ "domain": "robotics.stackexchange", "id": 26176, "tags": "ros, roslaunch, controller-manager" }
By how much does the protons' dipole moment inside a nucleus attenuate the Coulomb force between them?
Question: By how much does the protons' dipole moment inside a nucleus attenuate the Coulomb force between them? As up quarks repel more strongly than down quarks, the protons should be oriented with the positive side facing away from the centre of the nucleus. In that case, does the strong force have more work to do, with the dipole moment trying to break the particle apart, while for the residual strong force the dipole moment is a 'friendly' force? Answer: The electric polarizabilities for the proton and neutron, catalogued for example by the Particle Data Group, are about a thousand times smaller than you would expect from doing dimensional analysis. In a hand-waving way, this is because the strong interaction makes the "medium" inside of a proton "stiffer" than the "medium" within a hydrogen atom (where the dimensional-analysis result is okay). In low-mass nuclei, the statement that "strong isospin is a good symmetry" is basically equivalent to "you have permission to neglect nucleon charge." If you can effectively predict the excitation spectrum for a nucleus without considering electric charge at all, it's probably also reasonable to neglect the small electromagnetic correction due to polarizability. If you wanted to compute this, I'd use a mean-field approach. Choose a nucleus of interest and model it as a uniform-density sphere of charge, whose electric field is zero at the origin, linear in radius to the edge of the nucleus, then $1/r^2$ to infinity. Then convert your nucleon polarizability into an electric susceptibility for nuclear matter. The energy density of the electric field will shrink a little within the nucleus, as you move from $\vec E$ to $\vec D$. The difference in the integrated electric field energy is an order-of-magnitude estimate for the polarizability correction to the energies of nuclear states.
{ "domain": "physics.stackexchange", "id": 85920, "tags": "nuclear-physics, quarks, protons, dipole-moment" }
Biological term for close species rivalry
Question: Is there any phenomenon/force in biology when two very close species fiercely fight each other (as a sign of a strong tendency to deepen the difference between species)? If there is, what's the name of the phenomenon or any similar phenomenon? (Question from a layman in biology, thanks for an answer.) Answer: Citing wikipedia Character displacement refers to the phenomenon where differences among similar species whose distributions overlap geographically are accentuated in regions where the species co-occur, but are minimized or lost where the species’ distributions do not overlap. This pattern results from evolutionary change driven by competition among species for a limited resource (e.g. food). The rationale for character displacement stems from the competitive exclusion principle, also called Gause's Law, which contends that to coexist in a stable environment two competing species must differ in their respective ecological niche; without differentiation, one species will eliminate or exclude the other through competition. In ecology, the competitive exclusion principle, sometimes referred to as Gause's law of competitive exclusion or just Gause's law,[2] is a proposition that states that two species competing for the same resource cannot coexist at constant population values, if other ecological factors remain constant. When one species has even the slightest advantage or edge over another then the one with the advantage will dominate in the long term. One of the two competitors will always overcome the other, leading to either the extinction of this competitor or an evolutionary or behavioral shift toward a different ecological niche. The principle has been paraphrased into the maxim "complete competitors cannot coexist"
{ "domain": "biology.stackexchange", "id": 4866, "tags": "terminology" }
In MLP, how would I update the weights using batches? Would I have to calculate the accumulated error (of all samples) of each output neuron?
Question: In Multilayer Perceptron neural networks, I know that there are two types of training: online training, and batch training, which consists of dividing the samples and updating the weights using the accumulated error. In other neural networks, for example in Adaline, I know that something similar is done to update the weights, for example in this video: https://youtu.be/MTe2qsS56MQ?si=Sky-220_zA15l7T8 In this case, it shows the "total error"; from what it seems to me, the weights are only adjusted at the end of each epoch, using the accumulated (that is, summed) error of the samples. In the example in the video, he used only one output neuron, as it was just one Adaline, and the way it was done, it seems like it was just a single batch: he summed the squared errors of each sample and only at the end updated the weights using this accumulated error, as I mentioned above, so it was just one batch. But this is confusing to me, because if I have a Multilayer Perceptron with several output classes, how would I update the weights using batches? Would I have to calculate the accumulated error (of all samples) of each output neuron? For example, if I divide 10 samples into 2 batches, how would the weights be updated? Would it add up the error of all samples from each batch, and then update the weights (after the current batch)? When using multiple classes, what would this weight update look like? Would I have to add up the error (that is, obtain the accumulated error) of all output neurons, for example: "total error of class 1", "total error of class 2", "total error of class 3", etc.?
Answer: So, two things: Yes you technically calculate the accumulated error for each neuron, but in practice we calculate all neurons at once by treating it as a matrix instead of individual vectors. Likewise, you do not have to calculate error gradients for the individual hidden neurons, but rather you calculate the gradients on a layer-wise basis which can be decomposed into individual neurons. The size of your minibatches and the number of samples per update step are not necessarily the same thing. While you can perform backprop and weight updates after each minibatch, you can also perform what's called gradient accumulation where, as the name says, you accumulate the gradients between several (or all) minibatches before performing your update step. Accumulating the gradients for all minibatches before performing your update step effectively makes it so you're doing batch gradient descent, even if you don't have the hardware to do "true" batch gradient descent on data of your size. How you select your minibatch size and number of gradient accumulation steps (minibatches per update step) is an empirical process, for now.
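To make the layer-wise matrix bookkeeping and gradient accumulation concrete, here is a minimal NumPy sketch. The network sizes, data, and learning rate are made up for illustration; the point is that the matrix products sum the per-sample errors for all neurons of a layer at once, and that two minibatch gradients can be accumulated before a single update step:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy MLP: 4 inputs -> 8 hidden (tanh) -> 3 output classes
W1, b1 = rng.normal(0.0, 0.5, (4, 8)), np.zeros(8)
W2, b2 = rng.normal(0.0, 0.5, (8, 3)), np.zeros(3)

def loss_and_grads(X, Y):
    """Mean cross-entropy over a minibatch, plus layer-wise gradients.

    The matrix products accumulate the errors of every sample and every
    neuron in one shot -- no per-neuron bookkeeping is needed.
    """
    H = np.tanh(X @ W1 + b1)            # hidden activations, shape (B, 8)
    P = softmax(H @ W2 + b2)            # class probabilities, shape (B, 3)
    loss = -np.mean(np.log((P * Y).sum(axis=1) + 1e-12))
    dZ2 = (P - Y) / len(X)              # dLoss/dlogits, one row per sample
    dW2, db2 = H.T @ dZ2, dZ2.sum(axis=0)
    dH = (dZ2 @ W2.T) * (1.0 - H**2)    # backprop through tanh, layer-wise
    dW1, db1 = X.T @ dH, dH.sum(axis=0)
    return loss, (dW1, db1, dW2, db2)

# 10 samples split into 2 minibatches of 5: accumulate gradients across
# both minibatches, then perform a single update step.
X = rng.normal(size=(10, 4))
Y = np.eye(3)[rng.integers(0, 3, size=10)]  # one-hot targets
loss_before = loss_and_grads(X, Y)[0]

params = (W1, b1, W2, b2)
acc = [np.zeros_like(p) for p in params]
for rows in (slice(0, 5), slice(5, 10)):
    _, grads = loss_and_grads(X[rows], Y[rows])
    for a, g in zip(acc, grads):
        a += g

lr = 0.1
for p, a in zip(params, acc):
    p -= lr * a / 2                     # average of the 2 minibatch gradients

loss_after = loss_and_grads(X, Y)[0]
print(loss_before, loss_after)          # loss drops after the update
```

Updating after every minibatch instead is the same code with the update step moved inside the loop; accumulating over all minibatches, as here, reproduces batch gradient descent.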
{ "domain": "ai.stackexchange", "id": 4085, "tags": "neural-networks, mini-batch-gradient-descent" }
Prop Orientation on a Multirotor
Question: While looking up information for the right propellers for my quadcopter, I realized that they had different orientations i.e. Clockwise and Counterclockwise. On further research I found that all multi-rotors have different combinations of these orientations. So my question is WHY? How does it matter if the propeller is turning clockwise or anti-clockwise? Answer: This has to do with the torque, or moment, the rotors induce on the body of the quadcopter/multirotor. If all of the rotors were to spin the same direction they would all induce a torque in the same direction causing the craft to yaw. Of course this is undesirable for many reasons. By spinning half of the rotors the opposite direction the torques are theoretically canceled preventing the craft from yawing.
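The cancellation can be made explicit with a one-line model: each rotor applies a reaction torque to the airframe opposite to its spin direction. The torque magnitude `tau` here is an arbitrary illustrative unit:

```python
# Net reaction (yaw) torque on the airframe. Each rotor contributes a
# torque opposite to its spin direction; tau is an arbitrary unit.
def net_yaw_torque(spins, tau=1.0):
    return sum(-s * tau for s in spins)  # s = +1 clockwise, -1 counterclockwise

print(net_yaw_torque([+1, +1, +1, +1]))  # -4.0 : all same direction, craft yaws
print(net_yaw_torque([+1, -1, +1, -1]))  # 0.0 : alternating, torques cancel
```

This is also why quadcopters yaw on purpose by speeding up one pair of same-direction rotors and slowing the other: the imbalance produces a controlled net torque.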
{ "domain": "robotics.stackexchange", "id": 177, "tags": "quadcopter, multi-rotor" }
How can I minimise the false positives?
Question: I have 50,000 samples. Of these 23,000 belong to the desired class $A$. I can sacrifice the number of instances that are classified as belonging to the desired class $A$. It will be enough for me to get 7000 instances in the desired class $A$, provided that most of these instances classified as belonging to $A$ really belong to the desired class $A$. How can I do this? The following is the confusion matrix in the case the instances are perfectly classified. [[23000 0] [ 0 27000]] But it is unlikely to obtain this confusion matrix, so I'm quite satisfied with the following confusion matrix. [[7000 16000] [ 500 26500]] I am currently using the sklearn library. I mainly use algorithms based on decision trees, as they are quite fast in the calculation. Answer: I think you're looking for the minimization of false positives, that is, the instances that are classified as belonging to the desired class (the positive part of false positives) but that do not actually belong to that class (the false part of false positives). In practice, given your constraints, you may want to maximize the precision, while maintaining a good recall. In this answer to the question How can the model be tuned to improve precision, when precision is much more important than recall?, the user suggests performing a grid search (using sklearn.grid_search.GridSearchCV(clf, param_grid, scoring="precision")) to find the parameters of the model that maximize the precision. See also the question Classifier with adjustable precision vs recall.
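A minimal sketch of the suggested grid search, using the modern `sklearn.model_selection` module path (the `sklearn.grid_search` path named in the quoted answer has long been removed). The synthetic dataset and parameter grid below are illustrative only:

```python
# Grid search that selects decision-tree hyperparameters by precision,
# i.e. by how few false positives the classifier produces.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# Illustrative stand-in for the 50,000-sample data (class 1 ~ desired class A)
X, y = make_classification(n_samples=2000, weights=[0.54, 0.46], random_state=0)

search = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid={"max_depth": [3, 5, 10], "min_samples_leaf": [1, 10, 50]},
    scoring="precision",  # maximize precision on the positive class
    cv=5,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

On top of this, you can trade recall for precision at prediction time by thresholding `predict_proba` above 0.5, which is exactly the "sacrifice instances of class A" trade-off described in the question.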
{ "domain": "ai.stackexchange", "id": 1566, "tags": "machine-learning, classification, python" }
Invoking "make cmake_check_build_system" failed
Question: Hi... I am new to ROS and I will be posting lots of questions. :) The thing I am trying to do is create multiple packages inside a single catkin workspace. I created my package "my_first_pkg" inside "catkin_ws/src". It worked fine. Then I tried to add another package in the same "catkin_ws/src" with the name "my_second_pkg" using "catkin_create_pkg". Then I ran the following command catkin_make and it gave me the following error. Invoking "make cmake_check_build_system" failed Originally posted by UsmanArif on ROS Answers with karma: 41 on 2016-07-21 Post score: 0 Original comments Comment by ahendrix on 2016-07-21: It sounds like there is more than just one line in this error message; please edit your question to include the full error message. Answer: Hi. Thanks for the reply. I found the error. There was a problem with the other packages in the same workspace. One of the other packages had a wrong CMakeLists.txt file. Thanks :) Originally posted by UsmanArif with karma: 41 on 2016-07-21 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 25322, "tags": "ros, catkin, packages" }
Expected number of updates of minimum
Question: I came across the following problem in an exam. We choose a permutation of n elements $[1,n]$ uniformly at random. A variable MIN holds the minimum value seen so far; it is initialized to $\infty$. During our inspection, if we see a value smaller than MIN, then MIN is updated to the new value. For example, if we consider the permutation $$5\ 9\ 4\ 2\ 6\ 8\ 0\ 3\ 1\ 7$$ MIN is updated 4 times, to $5,4,2,0$. What is the expected number of times MIN is updated? I tried to find the number of permutations for which MIN is updated $i$ times, so that I could find the value as $\sum_{i=1}^{n}iN(i)$, where $N(i)$ is the number of permutations for which MIN is updated $i$ times. But for $i\geq2$, $N(i)$ gets very complicated and I am unable to find the total sum. Answer: The trick is to use linearity of expectation. Let $E_k$ be the event that the $k$th input is a left-to-right minimum (i.e., requires an update), and let $X_k$ be an indicator variable for $E_k$, that is, $X_k$ is $1$ if $E_k$ happens and $0$ otherwise. Let $U = X_1 + \cdots + X_n$ be the number of updates. The expected number of updates is $$ \mathbb{E}[U] = \sum_{k=1}^n \mathbb{E}[X_k] = \sum_{k=1}^n \Pr[E_k]. $$ It remains to compute $\Pr[E_k]$. We can construct a random permutation $\pi$ of $[n] = \{1,\ldots,n\}$ in the following way: take a random permutation of $[n]$, and randomly permute the first $k$ elements. This shows that the probability that $\pi(k) = \min(\pi(1),\ldots,\pi(k))$ is exactly $1/k$, and so $\Pr[E_k] = 1/k$. All in all, we get $$ \mathbb{E}[U] = \sum_{k=1}^n \Pr[E_k] = \sum_{k=1}^n \frac{1}{k} = H_n, $$ the $n$th Harmonic number. It is well-known that $H_n = \ln n + \gamma + O(1/n)$ (Wikipedia contains the entire asymptotic expansion). 
We can also compute the variance in this way: $$ \begin{align*} \mathbb{E}[U^2] &= \sum_{k=1}^n \mathbb{E}[X_k^2] + 2\sum_{k=1}^{n-1} \sum_{\ell=k+1}^n \mathbb{E}[X_k X_\ell] \\ &= \sum_{k=1}^n \Pr[E_k] + 2\sum_{k=1}^{n-1} \sum_{\ell=k+1}^n \Pr[E_k \land E_\ell], \end{align*} $$ where $\land$ is logical AND. We already know that $\Pr[E_k] = 1/k$. In order to compute $\Pr[E_k \land E_\ell]$ (where $k < \ell$), we follow the same route as before. With probability $1/\ell$, $\pi(\ell)$ is a left-to-right minimum. Given that, the probability that $\pi(k)$ is a left-to-right minimum is $1/k$. Therefore $\Pr[E_k \land E_\ell] = 1/(k\ell)$, and so $$ \begin{align*} 2\sum_{k=1}^{n-1} \sum_{\ell=k+1}^n \Pr[E_k \land E_\ell] &= 2\sum_{k=1}^{n-1} \sum_{\ell=k+1}^n \frac{1}{k\ell} \\ &= \left(\sum_{k=1}^n \frac{1}{k}\right)^2 - \sum_{k=1}^n \frac{1}{k^2} \\ &= H_n^2 - \sum_{k=1}^n \frac{1}{k^2}. \end{align*} $$ Therefore $$ \begin{align*} \mathbb{E}[U^2] &= H_n + H_n^2 - \sum_{k=1}^n \frac{1}{k^2}, \\ \mathbb{V}[U] &= H_n - \sum_{k=1}^n \frac{1}{k^2} = \ln n + \gamma - \frac{\pi^2}{6} + O\left(\frac{1}{n}\right). \end{align*} $$ We can compute all other moments in a similar way using (essentially) the inclusion-exclusion principle and the formula $$ \mathbb{E}[U^d] = \sum_{i_1,\ldots,i_d=1}^n \prod_{i \in \{i_1,\ldots,i_d\}} \frac{1}{i}. $$ If we are careful enough then we can probably establish the asymptotic normality of $U$.
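A quick Monte Carlo sanity check of the $\mathbb{E}[U] = H_n$ result (a check, not part of the proof):

```python
# Estimate the expected number of MIN updates over random permutations
# and compare it to the n-th Harmonic number.
import random
from math import fsum

def count_updates(perm):
    m, updates = float("inf"), 0
    for x in perm:
        if x < m:            # new left-to-right minimum: MIN is updated
            m, updates = x, updates + 1
    return updates

n, trials = 10, 100_000
random.seed(1)
avg = fsum(count_updates(random.sample(range(n), n))
           for _ in range(trials)) / trials
H_n = sum(1 / k for k in range(1, n + 1))
print(avg, H_n)   # both approximately 2.929 for n = 10
```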
{ "domain": "cs.stackexchange", "id": 2621, "tags": "algorithm-analysis, runtime-analysis, search-algorithms" }
Cause of m/z = 56 peak in 4-methyl-1-pentanol
Question: This is the mass spectrum for 4-methylpentan-1-ol: From its structural formula, what could be the fragmentation which results in a peak at $m/z = 56?$ I was considering a few possibilities; could it be $\ce{C3H4O+}?$ Answer: Unfortunately I did not find a high resolution spectrum of this compound, which would have quickly answered your question. Nevertheless, ionized alcohols and even more primary alcohols have a main fragmentation pathway which is the loss of water to yield an ionized alkene. In this case, the corresponding alkene would be 4-methyl-1-pentene, at m/z 84. The peak is present although rather small in the EI spectrum. From there, it is quite obvious, when one looks at the fragmentation spectrum of 4-methyl-1-pentene (NIST Webbook spectrum), that it shares a lot of similarities with the 4-methylpentan-1-ol EI spectrum. (See for instance peaks at m/z 69, 56, 43, 42, 41.) From there, it is quite likely that the m/z 56 peak is ionized isobutene ($\ce{C_4H_8}^{\bullet +}$) arising from a retro-ene type rearrangement.
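The point about a high-resolution spectrum can be made concrete: both candidate formulas share nominal mass 56 but differ in exact (monoisotopic) mass, which is precisely what high resolution would resolve. A small sketch using standard monoisotopic masses:

```python
# Nominal vs exact mass for the two m/z = 56 candidates.
MASS = {"C": 12.0, "H": 1.007825, "O": 15.994915}  # monoisotopic masses, u

def exact_mass(formula):
    """Exact mass of a formula given as an element -> count dict."""
    return sum(MASS[el] * n for el, n in formula.items())

candidates = {
    "C4H8 (ionized isobutene)": {"C": 4, "H": 8},
    "C3H4O": {"C": 3, "H": 4, "O": 1},
}
for name, f in candidates.items():
    print(f"{name}: nominal 56, exact {exact_mass(f):.4f}")
# C4H8 ~ 56.0626 u vs C3H4O ~ 56.0262 u: ~0.036 u apart, easily
# separated by a high-resolution instrument
```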
{ "domain": "chemistry.stackexchange", "id": 13455, "tags": "organic-chemistry, analytical-chemistry, alcohols, mass-spectrometry" }
Error: Insufficient values in manual scale. 24 needed but only 1 provided?
Question: Folks, can you give me an idea about an issue that I encountered? I ran single-cell RNA sequencing data through Seurat in R, and when I tried to draw a violin plot, there was an error as below. VlnPlot(tumors, "Cd79a", "Foxp3", "CD14") Error: Insufficient values in manual scale. 24 needed but only 1 provided. Run rlang::last_error() to see where the error occurred. However, I can draw the violin plot for Cd79a. The problem happens only for certain genes. I haven't solved this problem yet. Do you know what we should do? Thank you. Answer: You must supply genes (or features) as a vector, not individually: VlnPlot(tumors, c("Cd79a", "Foxp3", "CD14")) By the way it is probably Cd14, not CD14.
{ "domain": "bioinformatics.stackexchange", "id": 1439, "tags": "scrnaseq, seurat" }
Are all prefix codes uniquely decodable?
Question: I can't think of any counterexample but I can't find any such statement on the internet or my textbook either. I know that for each uniquely decodable code, there exists a prefix code with the same average length. Answer: You can prove by induction on the length of the encoding that any string is uniquely decodable. Let $s=s_1...s_n$ be some string encoded with a prefix free code. Since our code is prefix free, there exists a unique prefix of $s$, $s_1...s_j$ which is a code word. $s_{j+1}...s_n$ is also a valid encoding (we removed a single codeword from a concatenation of codewords), so by our inductive hypothesis, $s_{j+1}...s_n$ is uniquely decodable, which completes the proof.
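The induction translates directly into a greedy decoder: at each step the unique codeword prefix of the remaining string is the only possible match. A small sketch with a made-up prefix-free code:

```python
def decode(prefix_code, s):
    """Decode a bit string s against a prefix-free code {symbol: codeword}.

    Greedy matching is correct precisely because the code is prefix-free:
    the first codeword found is the unique one, mirroring the induction.
    """
    inverse = {w: sym for sym, w in prefix_code.items()}
    out, buf = [], ""
    for bit in s:
        buf += bit
        if buf in inverse:        # unique codeword prefix found
            out.append(inverse[buf])
            buf = ""
    if buf:
        raise ValueError("dangling bits: not a concatenation of codewords")
    return out

code = {"a": "0", "b": "10", "c": "11"}   # prefix-free
print(decode(code, "0100110"))  # ['a', 'b', 'a', 'c', 'a']
```

With a non-prefix-free code (say {"a": "0", "b": "01"}) this greedy strategy would commit too early, which is exactly the ambiguity the prefix property rules out.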
{ "domain": "cs.stackexchange", "id": 16374, "tags": "encoding-scheme" }
Is Stationary wave actually a Wave?
Question: Okay, so by the definition of a wave, i.e. an oscillatory disturbance in a medium which propagates energy, the stationary wave contradicts both criteria: it neither propagates nor transmits any energy. Now one may say that a stationary wave is made by two progressive waves with the same phase, frequency and amplitude superimposing in opposite directions, and we know that the composition of two waves is also a new wave; by that argument the stationary wave must be a wave, but that contradicts the definition of a wave. So what do we call the stationary wave? This was asked to me by a physics professor and I have been pondering it since. Answer: The fact that it propagates with a velocity of zero and transfers energy at a zero rate does not preclude it being called a standing/stationary wave, noting it does have energy stored in it. What I will try and show you is that your definition of a wave is somewhat limited and needs refinement. At this level you should perhaps use the idea that a stationary wave can be thought of as the superposition of two waves, each of which satisfies the wave equation, which in one dimension is $\dfrac{\partial^2y(x,t)}{\partial t^2} =v^2\dfrac{\partial^2y(x,t)}{\partial x^2}$. Doing that then introduces numerous other entities which you may not consider a wave, for example $y(x,t) = x-vt$ between $x=vt$ and $x=vt+1$ and zero otherwise: a wave which looks like a sawtooth. Two wave profiles are shown, one at time $t=0$ and another at time $t$. Here is a disturbance which has energy associated with it, and the energy is moving in the positive x-direction.
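The superposition statement in the question can be checked numerically: two counter-propagating sinusoids of equal amplitude, frequency, and phase sum exactly to the standing-wave form $2\sin(kx)\cos(\omega t)$, a solution of the wave equation with fixed nodes. A small sketch with arbitrary $k$ and $\omega$:

```python
import numpy as np

k, w = 2.0, 3.0                       # arbitrary wavenumber and angular frequency
x = np.linspace(0.0, 2.0 * np.pi, 400)

def superposed(t):
    # right-moving plus left-moving progressive waves
    return np.sin(k * x - w * t) + np.sin(k * x + w * t)

def standing(t):
    # standing-wave form: fixed nodes wherever sin(kx) = 0
    return 2.0 * np.sin(k * x) * np.cos(w * t)

for t in (0.0, 0.4, 1.1):
    assert np.allclose(superposed(t), standing(t))
print("superposition equals the standing-wave form at all sampled times")
```

Both forms solve the wave equation; the standing form simply oscillates in place, which is why it stores energy without transporting it.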
{ "domain": "physics.stackexchange", "id": 97829, "tags": "waves, superposition" }
Differences in paths between gazebo and rosrun gazebo_ros gzserver
Question: Hi, I have my files in the following structure: ~/faster_dev/branches/yn/gazebo/plugin: mars.world ~/faster_dev/branches/yn/gazebo/models: model database, model.sdf files in subfolders ~/gazebo_source/catkin_ws/src/gazebo_ros_pkgs/gazebo_ros My environment variables are set up as follows: GAZEBO_MODEL_PATH=:/home/yn/faster_dev/branches/yn/gazebo/models:/home/yn/faster_dev/trunk/gazebo/models GAZEBO_RESOURCE_PATH=/home/yn/local/share/gazebo-1.9:/home/yn/local/share/gazebo_models:/home/yn/faster_dev/branches/yn/gazebo/plugin:/home/yn/faster_dev/branches/yn/gazebo/models GAZEBO_MASTER_URI=http://localhost:11345 GAZEBO_PLUGIN_PATH=/home/yn/local/lib/gazebo-1.9/plugins:/home/yn/faster_dev/branches/yn/gazebo/plugin/build:/home/yn/faster_dev/trunk/gazebo/plugin/build GAZEBO_MODEL_DATABASE_URI=http://gazebosim.org/models When I run "gazebo mars.world" from any location the world is loaded with all textures and models found; however, "rosrun gazebo_ros gzserver mars.world" can only find the world file if I run the command from the folder containing the mars.world file, and in this case a number of resources (png for heightmap and textures) cannot be found. Is there any difference in the paths or environment variables used by gazebo and gazebo_ros? Yasho Originally posted by ynevatia on Gazebo Answers with karma: 41 on 2013-07-27 Post score: 0 Original comments Comment by ynevatia on 2013-07-30: The problem lies with the scripts in gazebo_ros/scripts. These call the setup.sh from Gazebo, which was overwriting the environment variables I had set in my .bashrc with default values. Answer: I finally figured out the problem: it lies with a conflict between the scripts in gazebo_ros/scripts and the way I set up the environment variables. The gazebo_ros scripts call the setup.sh from Gazebo, which was overwriting the environment variables I had set in my .bashrc with default values, without changing the environment variables as seen outside the execution. 
By commenting out the relevant lines in gazebo_ros/scripts/gazebo and building it again (for safety), roslaunch gazebo_ros mars_world.launch works. Originally posted by ynevatia with karma: 41 on 2013-07-30 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 3400, "tags": "ros" }
Understanding Google's “Quantum supremacy using a programmable superconducting processor” (Part 1): choice of gate set
Question: I was recently going through the paper titled "Quantum supremacy using a programmable superconducting processor" by NASA Ames Research Centre and the Google Quantum AI team (note that the paper was originally posted on the NASA NTRS but later removed for unknown reasons; here's the Reddit discussion). I believe they're referring to "quantum supremacy" in the "quantum ascendency" sense. In their 54 qubit Sycamore processor, they created a 53 qubit quantum circuit using a random selection of gates from the set $\{\sqrt{X}, \sqrt{Y}, \sqrt{W}\}$ in the following pattern: FIG 3. Control operations for the quantum supremacy circuits. a, Example quantum circuit instance used in our experiment. Every cycle includes a layer each of single- and two-qubit gates. The single-qubit gates are chosen randomly from $\{\sqrt X, \sqrt Y, \sqrt W\}$. The sequence of two-qubit gates are chosen according to a tiling pattern, coupling each qubit sequentially to its four nearest-neighbor qubits. The couplers are divided into four subsets (ABCD), each of which is executed simultaneously across the entire array corresponding to shaded colors. Here we show an intractable sequence (repeat ABCDCDAB); we also use different coupler subsets along with a simplifiable sequence (repeat EFGHEFGH, not shown) that can be simulated on a classical computer. b, Waveform of control signals for single- and two-qubit gates. They also show some plots in FIG 4, apparently proving their claim of quantum supremacy. FIG. 4. Demonstrating quantum supremacy. a, Verification of benchmarking methods. $\mathcal{F}_\mathrm{XEB}$ values for patch, elided, and full verification circuits are calculated from measured bitstrings and the corresponding probabilities predicted by classical simulation. Here, the two-qubit gates are applied in a simplifiable tiling and sequence such that the full circuits can be simulated out to $n = 53, m = 14$ in a reasonable amount of time. 
Each data point is an average over 10 distinct quantum circuit instances that differ in their single-qubit gates (for $n = 39, 42, 43$ only 2 instances were simulated). For each $n$, each instance is sampled with $N_s$ between $0.5 M$ and $2.5 M$. The black line shows predicted $\mathcal{F}_\mathrm{XEB}$ based on single- and two-qubit gate and measurement errors. The close correspondence between all four curves, despite their vast differences in complexity, justifies the use of elided circuits to estimate fidelity in the supremacy regime. b, Estimating $\mathcal{F}_\mathrm{XEB}$ in the quantum supremacy regime. Here, the two-qubit gates are applied in a non-simplifiable tiling and sequence for which it is much harder to simulate. For the largest elided data ($n = 53$, $m = 20$, total $N_s = 30 M$), we find an average $\mathcal{F}_\mathrm{XEB} > 0.1\%$ with $5\sigma$ confidence, where $\sigma$ includes both systematic and statistical uncertainties. The corresponding full circuit data, not simulated but archived, is expected to show similarly significant fidelity. For $m = 20$, obtaining $1M$ samples on the quantum processor takes 200 seconds, while an equal fidelity classical sampling would take 10,000 years on $1M$ cores, and verifying the fidelity would take millions of years.
Basically, if I wanted to build a classical sampling algorithm that solves the problem for any unitary you hand over to me (or a description of the circuit), then the randomness assures that my sampler has to be "general-purpose", I have to design it such that it works well for any instance!" It is not clear to me what they mean by "adapt"-ing to some particular unitary in this context. Sequel(s): Understanding Google's “Quantum supremacy using a programmable superconducting processor” (Part 2): simplifiable and intractable tilings Understanding Google's “Quantum supremacy using a programmable superconducting processor” (Part 3): sampling Answer: While a follow-up question asks for the motivation behind the two-qubit gates used in Sycamore, this question focuses on the random nature of the single qubit operations used in Sycamore, that is, the gates $\{\sqrt{X},\sqrt{Y},\sqrt{W}=(X+Y)/\sqrt{2}\}$ applied to each of the $53$ qubits between each of the two-qubit gates. Although I agree with @Marsl that these gates were likely relatively easy to realize with the transmon qubits used in Sycamore, I suspect that there is a little more to the story. For example, page 26 of the Supplementary Information notes that although $\sqrt{X}$ and $\sqrt{Y}$ belong to the Clifford group, $\sqrt{W}$ does not. I believe $\sqrt{W}$ was added, at least partly, because it is not a member of the Clifford group. This may help to avoid the pitfalls of the Gottesman-Knill theorem, which says that circuits consisting of only normalizers of the Pauli group $(I,X,Y,Z)$ are efficiently simulatable. Thus, for example, if $\sqrt{Z}$ were used as opposed to $\sqrt{W}$, then the claim of quantum supremacy would have to overcome the implications of easy simulatability in view of Gottesman-Knill. Furthermore, I believe at least three single-qubit gates are needed to help support the claim of quantum supremacy. 
For example further review of page 26 of the Supplemental Information states that although the first cycle randomly chooses among the $3$ gates, subsequent cycles never use the same gates used in the immediately preceding cycle. It's hard to scramble a Rubik's cube by giving two half-twists to the same face twice in a row. Similarly their circuit used for quantum supremacy is chosen randomly from all of the $3^n2^{nm}$ such words on $n$ qubits and $m$ cycles of single- and two-qubit gates.
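The Clifford/non-Clifford distinction above is easy to verify numerically (my own check, not from the paper): a single-qubit unitary is Clifford iff conjugating $X$ and $Z$ by it yields Paulis up to a global phase. A NumPy sketch:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
W = (X + Y) / np.sqrt(2)

def principal_sqrt(H):
    """Principal square root of a Hermitian matrix via eigendecomposition."""
    w, v = np.linalg.eigh(H)
    return v @ np.diag(np.sqrt(w.astype(complex))) @ v.conj().T

def is_pauli_up_to_phase(M, tol=1e-9):
    # M equals a Pauli times a phase iff |Tr(P^dag M)| = 2 for some Pauli P
    return any(abs(abs(np.trace(P.conj().T @ M)) - 2) < tol
               for P in (I2, X, Y, Z))

def is_clifford_1q(U):
    # A single-qubit unitary is Clifford iff it maps X and Z to Paulis
    # (up to phase) under conjugation.
    return all(is_pauli_up_to_phase(U @ P @ U.conj().T) for P in (X, Z))

for name, G in (("sqrt(X)", X), ("sqrt(Y)", Y), ("sqrt(W)", W)):
    print(name, "Clifford:", is_clifford_1q(principal_sqrt(G)))
# sqrt(X) and sqrt(Y) come out Clifford; sqrt(W) does not
```

This matches the Supplementary Information's statement: dropping $\sqrt{W}$ for a Clifford gate like $\sqrt{Z}$ would leave the single-qubit layer inside the regime covered by Gottesman-Knill.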
{ "domain": "quantumcomputing.stackexchange", "id": 982, "tags": "quantum-gate, superconducting-quantum-computing, quantum-advantage, random-quantum-circuit, google-sycamore" }
How to estimate limits for period of a binary?
Question: How can one estimate upper and lower limits for the period of a binary that is not eclipsing? What parameters are necessary? Answer: A binary system could have an orbital period anywhere from the period the object would have if the two stars were almost touching (I'm assuming a contact binary would give a noticeable light curve modulation) to being so wide that it can just survive being broken up by the Galactic tidal field. Both of these limits will depend on the mass of the binary components. The former limit will also depend on the radii of the components. The lack of eclipses does not place very strong prior constraints on the probability of the binary having any particular orbital period, except at very close separations where the lack of any eclipses becomes unlikely. The probability of eclipse for a given separation is something like $P(a) \sim (R_1 +R_2)/a$, where $R_{1,2}$ are the radii of the components and $a$ is their separation, which approaches 1 when the stars are almost touching. Thus the probability of not eclipsing is $1 -P(a)$ and this reduces the a-priori probability that your non-eclipsing object is a very close (short period) binary.
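The short-period end of the range can be sketched with Kepler's third law; the stellar values below are illustrative (two Sun-like stars almost touching):

```python
# Shortest plausible orbital period: separation ~ sum of stellar radii.
from math import pi, sqrt

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # kg
R_SUN = 6.957e8     # m

def orbital_period(a, m1, m2):
    """Kepler's third law: P = 2*pi*sqrt(a^3 / (G*(m1+m2))), a in metres."""
    return 2.0 * pi * sqrt(a**3 / (G * (m1 + m2)))

p_min = orbital_period(2.0 * R_SUN, M_SUN, M_SUN)
print(p_min / 3600.0)   # ~5.6 hours for two Sun-like stars in near-contact

def eclipse_probability(a, r1, r2):
    """Rough geometric eclipse probability at separation a, P(a) ~ (R1+R2)/a."""
    return min(1.0, (r1 + r2) / a)
```

The upper limit (survival against the Galactic tidal field) corresponds to separations of order $10^4$ au and periods of millions of years, so the allowed range spans many orders of magnitude.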
{ "domain": "astronomy.stackexchange", "id": 6639, "tags": "binary-star" }
How to get two continuous tracks (tank treads) to move at the same rate?
Question: I've got a couple of Vex 269 Motors hooked up to an Arduino Duemilanove. These motors run some Vex Tank Treads. The two motors are run as servos on the Arduino using the Servo Library. The problem I'm having is that the two tracks don't turn at the same speed when sent the same servo angle. This is clearly due to the fact that the continuous tracks have so many moving parts that having identical friction forces on each track is hard to achieve. How do I get them to move at the same speed? Should they be moving at the same speed given the same servo angle regardless of the friction, and the Vex 269 Motors just aren't strong enough (meaning I should use the Vex 369 or some other more powerful motor)? Is it best to just do trial and error long enough to figure out which servo angles result in equal speeds on each? Should I tinker with the tracks until they have nearly identical friction? Thank you much!
If you post greater detail on your robot and its design objectives, I could assist you further. A note about magnetic sensor placement But, why should I "position [the magnetic transducer] as far forward as possble"? Isn't it true that the angle is the same? Yup. That's true, but the magnitude of the Earth's magnetic field is not. You are standing in a different spot on Earth. Imagine your robot is as big as a car. If you sit in the geometric center of the car and the car pivots about you, your coordinates on the Earth have not changed; only your attitude has. Now if you are sitting on the hood of the car and the car repeats it's previous motion, both your attitude and your coordinates have changed. Changing coordinates produces a bigger difference in magnitude of the Earth's field than rotation alone. Over the last few years I worked on a team with Dr. Dwight Veihland of Virginia Tech, arguably the world's leading expert on super-high sensitivity magnetic sensors. If I were to crystallize the body of his work (like in this example), I would say that he is always in pursuit of greater signal-to-noise ratios in the detection of ever tinier magnitudes. Any increase in the magnitude difference you can generate makes life easier for your sensor... and in this case, you get it for free. A number of DARPA grand challenge robots placed the GPS sensor forward for this same reason.
{ "domain": "robotics.stackexchange", "id": 56, "tags": "mobile-robot, motor, tracks" }
Run multiple servo on ROS with arduino
Question: Hello everybody, I wanted to know whether there is any way I can run 2 or more servos at a time from a single subscriber. Currently I am running one servo on a single channel of an Arduino Due. But I need two servos to be run at the same time by giving them different angle values, like: rostopic pub Servo1 std_msgs/Int16 230; this passes 230 degrees to the attached servo. But I want to control several at a time, like: rostopic pub Servo1 std_msgs/Int16 230 130 240; 230 for 1st servo (degrees); 130 for 2nd servo (degrees); 240 for 3rd servo (degrees). I would appreciate it if someone could shed some light on this issue. Thanks. Originally posted by sonny on ROS Answers with karma: 33 on 2014-03-29 Post score: 3 Answer: You've got two options. Most straightforward would be a basic custom message with an array. This answer may help. Second option, which is more elegant and involves less code, is a MultiArray. Originally posted by Ryan with karma: 3248 on 2014-03-29 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by s1 on 2015-09-22: can anyone give an example of MultiArray usage please.
{ "domain": "robotics.stackexchange", "id": 17459, "tags": "ros, rosseral-arduino" }
Is it possible to construct a thermodynamical potential with all intensive variables fixed?
Question: Would it be possible to construct a thermodynamic potential with all the intensive variables ($T$, $p$, and $\mu$—the temperature, pressure, and chemical potential, respectively) fixed? This is one of my homework questions. I thought it was mathematically possible, using Legendre transforms like you would transform any other potential, but my professor mentioned later in one of the lectures that it's a "big no no". So, am I missing something here? And if it's not possible then why? Answer: Your professor should remember that in Physics, it is a good idea to keep in mind the motto "never say never". The "big no no" requires some qualification. It is true that if one is dealing with a usual extensive macroscopic system, the triple Legendre transform of $U(S,V,N)$: $$ {\mathbb Z}(T,P,\mu)= U-TS+PV-\mu N $$ is identically zero for every temperature, pressure, and chemical potential. Therefore it looks useless. Notice that here I am using a symbol like ${\mathbb Z}$ because there is no established convention for notations and names. Sometimes I saw it called "zero-thermodynamic-potential," although I do not have a reference for this name. However, ${\mathbb Z}=0$ is a consequence (through Euler's theorem) of the homogeneity of degree $1$ (extensiveness) of the internal energy $U(S,V,N)$. In turn, the extensiveness is a consequence of the continuity and the additivity of the energy. The latter property is true only for large systems and in the absence of long-range interactions. Therefore, for small systems where sub-extensive terms are not negligible, the ${\mathbb Z}$ potential may play a role because it allows focusing on the sub-extensive thermodynamics of nanoparticles. Actually, with a different name and symbol, subdivision energy, ${\bf E}$, such a thermodynamic potential was introduced years ago by TL Hill. A recent paper by Hill on such thermodynamic potential is Hill, T. L. (2001). A different approach to nanothermodynamics. 
Nano Letters, 1(5), 273-275.
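The vanishing of ${\mathbb Z}$ for an extensive system can be made explicit; a short derivation from Euler's theorem, using only the standard identities already implied by the answer:

```latex
% Extensiveness: U is homogeneous of degree 1 in (S, V, N)
U(\lambda S, \lambda V, \lambda N) = \lambda\, U(S, V, N)
% Differentiate with respect to \lambda and set \lambda = 1 (Euler's theorem):
S \frac{\partial U}{\partial S} + V \frac{\partial U}{\partial V} + N \frac{\partial U}{\partial N} = U
% Insert T = \partial U/\partial S,\; -P = \partial U/\partial V,\; \mu = \partial U/\partial N:
U = TS - PV + \mu N
% Hence the triple Legendre transform vanishes identically:
\mathbb{Z}(T, P, \mu) = U - TS + PV - \mu N = 0
```

For a small system with sub-extensive contributions, the first line fails, and ${\mathbb Z}$ (Hill's subdivision energy) need not vanish.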
{ "domain": "physics.stackexchange", "id": 91286, "tags": "homework-and-exercises, thermodynamics, statistical-mechanics" }
How to intentionally add noise to a Qiskit circuit?
Question: I am attempting to intentionally add noise to my Qiskit circuit by applying n pairs of CNOT gates. The effect of this should be to yield the same result as my baseline (no pairs of CNOT gates applied) circuit with the addition of some noise. However, the result of my "noisy" circuit looks the same as my baseline even for large n. My best guess is that Qiskit automatically removes unnecessary operations (like a pair of CNOTs) in an effort to reduce the number of gates. If this is true, then how can I add a series of gates which will keep the baseline circuit the same but will add gate noise? Answer: Yes, Qiskit's transpiler will optimize the circuit and remove redundant gates automatically. For normal operation this is desired behavior. However, for cases like this where you don't want the transpiler to optimize the circuit for you and you want to send the circuit to the backend in a raw form (it will still fit it to the backend based on its constraints), you can use the optimization_level=0 kwarg for transpile() and execute(). This disables all the optimization passes and will just run the transforms necessary to run on the device (basis gate transformation, layout, routing). For example, when using execute() it would be: qiskit.execute(circuit, backend, optimization_level=0) If you want to experiment with the transpiler you can just call qiskit.transpile(circuit, backend), which will return a circuit that has been transformed so it is optimized and will be able to run on the backend. This gets called internally by execute() prior to sending a job to the backend, so you can see what transforms are happening and tune things for your use case. The documentation on the transpiler covers how it works pretty well: https://qiskit.org/documentation/apidoc/transpiler.html and the tutorial https://qiskit.org/documentation/tutorials/circuits_advanced/4_transpiler_passes_and_passmanager.html has examples of how you can work with it.
{ "domain": "quantumcomputing.stackexchange", "id": 2064, "tags": "qiskit, programming, circuit-construction, noise" }
What are the advantages of using fft2 over fft
Question: Assume we have a matrix x of size (8,8). As is known, FFT(x) performs a 1D-FFT transformation, column-wise, whereas FFT2(x) performs a 2D-FFT transformation. In that case, what's the advantage of using the 2D-FFT over the 1D-FFT? Answer: They perform two different mathematical operations. FFT executes a 1-dimensional Discrete Fourier Transform on each column of the input matrix (or along the first non-singleton dimension). FFT2 executes a full 2-dimensional Discrete Fourier Transform on the entire matrix. Which one you want to use depends on your specific application. One is not inherently "better" than the other; they are just two different things. In particular $$\mathcal{F}_{2D}(X) = \mathcal{F}\big(\mathcal{F}(X)^{T}\big)^{T}$$ where $(\cdot)^{T}$ denotes the (non-conjugate) transpose.
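The separability identity above can be checked directly; a NumPy sketch (NumPy used as a stand-in for MATLAB here — `np.fft.fft2` is the 2-D transform, and axis arguments replace the transpose trick):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))   # an 8x8 matrix, as in the question

# Full 2-D DFT of the matrix
X2 = np.fft.fft2(x)

# Two passes of the 1-D DFT: first down the columns, then across the rows
X1 = np.fft.fft(np.fft.fft(x, axis=0), axis=1)

# The 2-D DFT is separable, so both routes give the same result
assert np.allclose(X2, X1)
```

The same holds with the axis order swapped, since the two 1-D passes commute.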
{ "domain": "dsp.stackexchange", "id": 8428, "tags": "fft, dft" }
Why should spin "degeneracy" of electron not be infinite?
Question: When we calculate the density of states for an electron, in the standard way as done in statistical mechanics textbooks (by integrating once over space, then integrating over $\theta$ and $\phi$ of k-space, etc.), we finally multiply by a factor of 2 for the spin degeneracy. My question is, since the electron can be in an infinite number of states on the Bloch sphere, why are we taking just two states? I know that only two of the infinite number of states are orthogonal, but even if the huge number of states are non-orthogonal, why should they not contribute to an increase in the number of states? I did not want to use the term degeneracy in the title, because degeneracy is defined as the dimension of the eigenspace of the degenerate eigenvalue, so that is of course two. What I want to know is, why are we taking only the orthogonal states? Answer: Good question! Really what you care about when calculating the density of states is the number of orthogonal states, whether you are talking about spin, spatial states, or whatever. In other words, you want to know the density of Hilbert space dimension. To give a non-spin example: if you are looking at the density of states in an $L\times L\times L$ box of Fermi gas, you get the momentum states by enforcing periodic boundary conditions... but by your argument, as soon as you have two momentum eigenfunctions, you should have an infinite density of states, because each single particle could be in any superposition of those two states. As you know, this is not actually the case; the same applies for spins. Count the number of orthogonal states, not the total number of vectors in the Hilbert space (usually infinite!)
{ "domain": "physics.stackexchange", "id": 57927, "tags": "quantum-spin" }
Show full rosparam filepath
Question: When running a node with roslaunch and loading filepaths into the rosparam server, the terminal output clips the end of the filepath as shown: PARAMETERS * /driver/transforms_path: /home/matt/rosdat... * /driver/alg_settings_path: /home/matt/rosdat... * /driver/settings_dir: /home/matt/rosdat... * /rosdistro: kinetic * /rosversion: 1.12.14 How do I force ROS to show the full, unclipped filepath? Seeing the full path makes debugging faster and easier. Originally posted by M@t on ROS Answers with karma: 2327 on 2020-03-02 Post score: 0 Answer: You can't, as it isn't supported. The code which prints the summary (as that's what it's called) is here. Notice how max_length is hard-coded to 20. You could consider proposing a PR (similar to ros/ros_comm#1655) which adds a command-line option for this. Originally posted by gvdhoorn with karma: 86574 on 2020-03-02 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by M@t on 2020-03-02: Ah, I was afraid that might be the answer. Thanks a lot!
{ "domain": "robotics.stackexchange", "id": 34527, "tags": "roslaunch, ros-kinetic, rosparam" }
How is Monte Carlo different from model-based methods?
Question: I was going through an article where it is mentioned: The Monte Carlo methods require only a knowledge base (history/past experiences)—sample sequences of (states, actions and rewards) from the interaction with the environment—and no actual model of the environment. Aren't model-based methods dependent on past sequences? How is Monte Carlo different, then? Answer: Model-based methods (such as value or policy iteration) use a model of the environment, which is usually represented as a Markov decision process. More specifically, the model consists of the transition and reward functions of the Markov decision process, which should represent the dynamics of the environment. For example, in policy iteration, the rewards (used to estimate the policy or value functions) are not the result of the interaction with the environment but given by the MDP (the model of the environment), so the decisions are made according to the reward function (and the transition function) of the MDP that represents the dynamics of the environment. Model-based methods are not (usually) dependent on past actions. For example, policy iteration converges to the optimal policy independently of the initial values of the states, the initial policy or the order of iteration through the states. Monte Carlo methods do not use such a model (the MDP), even though the assumption that the environment can be represented as an MDP is (often implicitly) made (and the MDP might actually be available). In the case of Monte Carlo methods, all estimates are solely based on the interaction with the environment. In general, Monte Carlo methods are based on sampling (or random) operations. In the case of reinforcement learning, they sample the environment. The samples are the rewards that are obtained when certain actions are taken from certain states.
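The contrast can be sketched on a toy problem (a hypothetical two-state MDP invented for illustration, not taken from the article): the model-based estimate plugs the known transition and reward functions into the Bellman equation, while the Monte Carlo estimate only averages returns sampled from interaction.

```python
import random

# Toy MDP: from state s0, each step succeeds with probability p
# (reward 1, episode ends); otherwise the agent stays in s0 with reward 0.
p, gamma = 0.8, 0.9
exact = p / (1 - (1 - p) * gamma)        # closed-form value of s0

# Model-based: iterate the Bellman equation using the known model
# (transition probabilities and reward function appear explicitly).
v_model = 0.0
for _ in range(200):
    v_model = p * 1.0 + (1 - p) * gamma * v_model

# Model-free Monte Carlo: average discounted returns over sampled episodes.
# The estimator never consults p or the reward function directly -- it only
# sees outcomes of interacting with the environment.
rng = random.Random(0)

def sample_return():
    discount = 1.0
    while True:
        if rng.random() < p:             # environment emits reward 1 and ends
            return discount
        discount *= gamma                # stayed in s0: reward 0, keep going

n = 20_000
v_mc = sum(sample_return() for _ in range(n)) / n
```

With enough sampled episodes the Monte Carlo average converges to the same value that the model-based iteration computes directly from the MDP.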
{ "domain": "ai.stackexchange", "id": 1275, "tags": "reinforcement-learning, comparison, monte-carlo-methods, model-based-methods" }
What does it mean by DN in salt-pepper noise?
Question: What is DN on page 245 of these notes, shown in the slide below? Answer: From the same course, you can find the following indication in an earlier set of slides: Input quantized image pixel values (integers): Digital Number (DN) From this you can infer that Salt noise: DN = maximum possible indicates that salt noise shows up as pixels with the largest possible pixel values (e.g. 255 if the picture is a grey-scale picture represented by pixels in the [0-255] range), and that Pepper noise: DN = minimum possible indicates that pepper noise shows up as pixels with the smallest possible pixel values (e.g. 0 if the picture is a grey-scale picture represented by pixels in the [0-255] range).
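In code, salt-and-pepper noise simply forces a fraction of pixels to the maximum and minimum DN; a minimal NumPy sketch (the 5% corruption fraction and image size are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
img = np.full((64, 64), 128, dtype=np.uint8)   # mid-grey 8-bit image

noisy = img.copy()
r = rng.random(img.shape)
noisy[r < 0.025] = 0            # pepper: DN = minimum possible
noisy[r > 0.975] = 255          # salt:   DN = maximum possible
```

After corruption, the noisy image's extremes sit exactly at the DN limits (0 and 255), which is why salt-and-pepper noise is also called impulse noise.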
{ "domain": "dsp.stackexchange", "id": 4972, "tags": "image-processing, noise" }
plotting phase of a signal adding delay
Question: I'm trying to plot the phase of this signal: $s(f)=A^2T^2\operatorname{sinc}^2(Tf)e^{-j\pi Tf}$ How can I plot this signal manually? Are there particular rules I have to follow? I have problems with the delay. Edit: After calculation I found that the phase of this signal is $\phi(s(f))=-\pi Tf$, so the plot should look like this. Right or not? I plotted it for $T=3$. Answer: This is indeed correct. There are a few things you could do to make it a better graph: Label the X-axis Label the Y-axis Use real physical units if applicable. Phase is measured either in radians or in degrees. These are VERY different things, so proper labeling with units helps to clarify what you are using. Similarly, if f is a frequency in Hz (or kHz or MHz) and T is a time (in nanoseconds or fortnights), it should be stated. You could consider changing the Y-grid so that the grid lines match with meaningful phase values such as 2*pi (or an integer multiple thereof). Along the same lines, you could alternatively normalize the phase to a meaningful value (like 2*pi) and keep the integer grid. If the result was obtained by Fourier transform of a real-valued time domain signal and the context is clear, consider omitting the left half (f<0) of the plot. The convention is to only plot positive frequencies and the complex conjugate symmetry is implied. This may sound nit-picky, but making a good graph is typically time well spent. It really helps to clarify what's going on, simplifies communication and will help you to remember what you did when you look at it 3 months from now.
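The claimed phase can be checked numerically: since $\operatorname{sinc}^2(Tf)\ge 0$, the phase of $s(f)$ is just that of the exponential, i.e. $-\pi T f$ wrapped to $(-\pi,\pi]$. A NumPy sketch, assuming $T=3$ as in the post, $A=1$ (arbitrary), and the normalized sinc convention $\operatorname{sinc}(x)=\sin(\pi x)/(\pi x)$ that `np.sinc` implements:

```python
import numpy as np

A, T = 1.0, 3.0
f = np.linspace(-2, 2, 401)
f = f[np.abs(np.sinc(T * f)) > 1e-6]   # skip sinc zeros, where phase is undefined

s = (A * T) ** 2 * np.sinc(T * f) ** 2 * np.exp(-1j * np.pi * T * f)

phase = np.angle(s)                                # measured phase, radians
expected = np.angle(np.exp(-1j * np.pi * T * f))   # -pi*T*f wrapped to (-pi, pi]
assert np.allclose(phase, expected)
```

Plotting `phase` against `f` gives the sawtooth-looking wrapped version of the straight line $-\pi T f$; unwrapping (`np.unwrap`) recovers the line itself.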
{ "domain": "dsp.stackexchange", "id": 224, "tags": "fourier-transform, phase" }
What measures can I use to find correlation between categorical features and binary label?
Question: For analyzing numerical features, we have correlation. What measures do we have to analyse the relevance of a categorical feature to the target value? If there isn't a direct measure, how can we achieve this? The chi-squared test is known, but I can't find any implementation of it for categorical values. One other way is to label-encode into numerical values, but that assigns a certain priority to higher-valued labels. Answer: Checking if two categorical variables are independent can be done with the chi-squared test of independence, where we perform a hypothesis test. Let's say A & B are two categorical variables; then our hypotheses are: H0: A and B are independent HA: A and B are not independent We create a contingency table that counts the combinations of outcomes from the two variables. If the null hypothesis (H0) is correct, then the values of the contingency table for these variables should be distributed uniformly, and then we check how far away from uniform the actual values are. For Example Suppose we have two variables in the dataset Obesity: Obese, Not obese Marital Status: Married, Cohabiting, Dating We observe the following data: Oij for a cell (i,j) is the observed count in the given data | dating | married | cohabiting | Total | -----------|------------:|:------------:|:------------:|:------------:| Obese | 81 | 147 | 103 | 331 | Not obese | 359 | 277 | 326 | 962 | Total | 440 | 424 | 429 | 1293 | Expected Counts Calculation i.e. Expected counts if H0 were true.
Eij for a cell (i,j) is defined as Eij = (row i total × column j total) / table total | dating | married | cohabiting | -----------|------------:|:------------:|:------------:| Obese | 113 | 109 | 110 | Not obese | 327 | 316 | 319 | χ²-statistic Calculation Assuming independence, we would expect that the values in the cells are distributed uniformly with small deviations because of sampling variability, so we calculate the expected values under H0 and check how far the observed values are from them. We use the standardized squared difference for that and calculate the chi-square statistic, which under H0 follows a χ2 distribution with df=(n−1)⋅(m−1), where n & m are the number of categories in the first & second variable respectively. \begin{equation} \chi^2 = \sum_i \sum_j \frac{(O_{ij} - E_{ij})^2}{E_{ij}} \end{equation} The χ2 value comes out to be 30.829. We can use R to find the p-value tbl = matrix(data=c(81, 147, 103, 359, 277, 326), nrow=2, ncol=3, byrow=T) dimnames(tbl) = list(Obesity=c('Obese', 'Not obese'), Marital_status=c('Dating', 'Married','Cohabiting')) chi_res = chisq.test(tbl) chi_res Pearson's Chi-squared test data: tbl X-squared = 30.829, df = 2, p-value = 2.021e-07 Since p-value < 0.05 we reject the null hypothesis, and we can conclude that obesity and marital status are dependent. There also exists Cramér's V, a measure of association that follows from this test. Putting values in the formula, R code sqrt(chisq.test(tbl)$statistic / (sum(tbl) * min(dim(tbl) - 1 ))) 0.1544 So we can say there is a weak association between obesity and marital status. I hope I am clear with the explanation.
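Since the question asked about implementations outside R, the same computation can be done by hand in a few lines of plain Python (no statistics library needed); this reproduces the χ² statistic and Cramér's V quoted above:

```python
import math

# Observed counts: rows = (obese, not obese), columns = (dating, married, cohabiting)
obs = [[81, 147, 103],
       [359, 277, 326]]

row_tot = [sum(r) for r in obs]            # [331, 962]
col_tot = [sum(c) for c in zip(*obs)]      # [440, 424, 429]
grand = sum(row_tot)                       # 1293

# Expected counts under independence: E_ij = row_i total * col_j total / grand total
exp = [[ri * cj / grand for cj in col_tot] for ri in row_tot]

# Chi-squared statistic: sum over cells of (O - E)^2 / E
chi2 = sum((o - e) ** 2 / e
           for orow, erow in zip(obs, exp)
           for o, e in zip(orow, erow))

# Cramer's V: sqrt(chi2 / (n * (min(#rows, #cols) - 1)))
v = math.sqrt(chi2 / (grand * (min(len(obs), len(obs[0])) - 1)))

print(round(chi2, 3), round(v, 4))   # matches the R output: 30.829 0.1544
```

The p-value requires the χ² distribution's tail; `scipy.stats.chi2_contingency` wraps this whole computation if SciPy is available.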
{ "domain": "datascience.stackexchange", "id": 5163, "tags": "feature-selection, correlation" }
Why is climate change triggering faster rotation?
Question: On July 29th, 2022, the Earth finished its rotation about 1.5 milliseconds short of the full 24 hours. Scientists link this to climate change, saying that a possible reason could be the melting of polar glaciers. I do not know for sure why this would happen, but what first came to my mind was the law of conservation of angular momentum. If glaciers melt, then the water gets spread out across the oceans, so the mass located away from the rotation axis increases. This means that there is an increase in the moment of inertia. But doesn't this mean there should be a decrease in the rotational speed? I wonder whether there is some larger physical phenomenon at play, something with greater influence on the rotational speed. Answer: Glaciers are water that is frozen and high up in the mountains. If you thaw that ice and the water flows back to sea level, then it would seem that the water's mass would get closer to the rotation axis; the moment of inertia of the Earth would decrease and the angular speed increase. However, it is not such a simple calculation, because ice is also melting at the poles, the sea level can rise more at the equator, and in addition, the density of water is temperature dependent, the weight of water can deform the crust, and the rotation axis of the Earth is also shifting in response to the distribution of water (e.g., Deng et al. 2021).
{ "domain": "physics.stackexchange", "id": 90257, "tags": "rotational-dynamics, angular-momentum, earth, estimation, climate-science" }
Bandwidth of an entire song
Question: My question has to do with the difference between the frequencies of a single note, and the frequencies of an entire song. If I have a 5 second signal of the form: $x(t)=\sin(8\pi t)$, here is the frequency response with zero-padding: For a signal of the form $x(t)=\sin(8\pi t)\sin^2(\pi t/5)$, here is how this looks: Here is the frequency response with zero-padding: My intuition about the second signal is that it is a 4Hz tone that gets louder and then quieter again. In both cases, the highest frequency contained in the signal is higher than 4Hz, even though it was a 4Hz tone. This indicates to me that the frequencies that we hear are not the same as the frequencies in the Fourier transform. This further indicates that just because the highest note of a song may be at 16kHz does not mean that the bandwidth of the song is at -16kHz to +16kHz and that 32kHz sampling rate is sufficient. Music is typically recorded at 44.1kHz, but is the bandwidth of a song really -22.05kHz to +22.05kHz, even if every individual note is? I took the FFT of Bad Habits by Ed Sheeran out of curiosity. The highest frequency component actually appears to be at 22.05 kHz. Doesn't this indicate that if we had a higher sampling rate than 44.1kHz, we would have seen higher frequency components of the music? In other words, the FFT looks like higher frequency components got "cut off" by a low sampling rate. My second question is about understanding how a note played during a song will affect the FT of the song. Without zero-padding, the first signal is purely 4Hz and the second signal has nonzero components at 4Hz and the two adjacent bins. With zero-padding, it appears there are a large number of nonzero bins in each, and the second signal actually appears as the first with sidelobe suppression. This seems significant to me because if an 8kHz tone, played for one second, appears in a 3 minute song, it would not affect the FT of the song by adding a pure 8kHz tone in. 
I think it would appear as an 8kHz tone of one second duration, zero-padded to a 3 minute duration (and therefore including the sidelobes), since the sidelobes are important to destructively interfere with the note outside of the timeframe it is supposed to be played at. Is this correct? Edit: I just remembered something probably critical. Any signal of finite time necessarily has infinite bandwidth. If the highest tone in a song is 16kHz, then the highest frequency component of the whole song would be a "smeared" 16kHz, and some sidelobes will be cut off when sampling at 44.1kHz. Therefore the DFT is lossy. Part of my confusion is probably because I read elsewhere on the internet that the DFT is lossless, but I am thinking now that must be wrong since all real signals are of finite time/infinite bandwidth, therefore all real signals must have an infinite sampling rate to be truly lossless. Is this correct? Edit #2: Envidia pointed out that I had forgotten to fftshift Bad Habits. It definitely looks better now. Answer: First of all, kudos to you: I appreciate the effort and thinking you've managed to articulate in your question. The DFT is a mathematical tool. As such, the parameters used to compute it can hide or reveal information that is there or not there. For example, zero-padding will reveal side-lobes for a single tone but that is just an artifact of the DFT of a finite-length sequence. You don't "hear" these side lobes when listening to a single tone. You only hear that single tone (a perfect frequency peak if the parameters of the DFT are chosen appropriately). As another example, if the parameters aren't set correctly, giving poor frequency resolution, the DFT will hide information from you. That being said, case-in-point: This indicates to me that the frequencies that we hear are not the same as the frequencies in the Fourier transform. 
$5$ seconds is too short to see the expected peaks in the frequency domain your signal has: recall that $$\sin(A)\sin(B) = \frac{1}{2}\big(\cos(A-B)-\cos(A+B)\big)$$ In your case, the FFT should therefore show peaks at $3.8\,\tt{Hz}$, $4\,\tt{Hz}$ and $4.2\,\tt{Hz}$. Set your parameters appropriately: increase the duration to $10$ seconds and you should see: Music is typically recorded at 44.1kHz, but is the bandwidth of a song really -22.05kHz to +22.05kHz? I think you already know the answer: in the analog world, no. Some details: Harmonics are integer multiples of a fundamental frequency. For example, if a pianist bangs on a middle $A$, which has fundamental frequency of $440 \,\tt{Hz}$, its harmonics would be at $880\,\tt{Hz}$ (2nd harmonic), $1320\,\tt{Hz}$ (3rd harmonic), $1760\,\tt{Hz}$ (4th harmonic), and so on. There's technically no upper limit to how high harmonics can go in terms of frequency. However, the amplitude (~loudness) of these harmonics generally decreases as the frequency increases, making them less significant in the overall sound, especially as they move out of the range of human hearing. Which brings us to: The highest frequency component actually appears to be at 22.05 kHz. Doesn't this indicate that if we had a higher sampling rate than 44.1kHz, we would have seen higher frequency components of the music? Correct! And some audio systems claim to use $96\,\tt{kHz}$ for that exact purpose (although I have a hard time believing anyone that claims they can hear the difference with standard $44.1\,\tt{kHz}$ or $48\,\tt{kHz}$, unless they'd want their dogs to be able to?) - edit: there are other purposes to higher sampling rates such as post-processing considerations and less stringent anti-aliasing/reconstruction filter requirements, but that's outside the scope of your question, see comments -. As far as quality of playback is concerned, there's no point in going higher because: 1. Humans can not hear past $\approx 20\,\tt{kHz}$, and 2. 
as previously mentioned, the harmonics will generally be well below our hearing threshold anyways. if an 8kHz tone, played for one second, appears in a 3 minute song, [...] it would appear as an 8kHz tone of one second duration, zero-padded to a 3 minute duration (and therefore including the sidelobes), since the sidelobes are important to destructively interfere with the note outside of the timeframe it is supposed to be played at. Is this correct? You are correct that it would be interpolated because of the length of your DFT. But again, this is only the result of a mathematical operation. The side-lobes would interfere with other frequencies' side-lobes, but this is not typically referred to as "destructive interference" in the context of Fourier analysis. Instead, it's an aspect of the spectral leakage caused by the finite duration of the signal. Finally: Part of my confusion is probably because I read elsewhere on the internet that the DFT is lossless, but I am thinking now that must be wrong since all real signals are of finite time/infinite bandwidth, therefore all real signals must have an infinite sampling rate to be truly lossless. Is this correct? Yes and no. The term "lossless" in the context of DFT refers to the fact that, theoretically, the DFT of a signal and its inverse can reconstruct the original signal exactly, without any information loss. But like I said earlier, in real-world scenarios, signals are band-limited, and practical sampling rates are chosen to capture the essential frequency content of these signals. So while it's theoretically true that an infinite sampling rate is required to capture all frequency components of a finite-duration signal, in practice, a sufficiently high sampling rate (like $44.1 \, \tt{kHz}$ for audio) is adequate to capture all the significant components within the human hearing range.
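The answer's three-peak claim for the question's second signal can be verified numerically: sampling $x(t)=\sin(8\pi t)\sin^2(\pi t/5)$ for 10 seconds gives 0.1 Hz bin spacing, so the 3.8, 4.0 and 4.2 Hz components land exactly on FFT bins with no leakage. A NumPy sketch (the 100 Hz sampling rate is an arbitrary choice for the illustration):

```python
import numpy as np

fs, dur = 100, 10                 # sampling rate (Hz) and record length (s)
t = np.arange(fs * dur) / fs      # 1000 samples -> 0.1 Hz bin spacing
x = np.sin(8 * np.pi * t) * np.sin(np.pi * t / 5) ** 2

X = np.abs(np.fft.rfft(x))        # one-sided magnitude spectrum
freqs = np.fft.rfftfreq(len(x), d=1 / fs)

# sin(8*pi*t)*sin^2(pi*t/5) = (1/2)sin(8*pi*t) - (1/4)[sin(8.4*pi*t) + sin(7.6*pi*t)],
# so the spectrum has lines at 4 Hz (amplitude 1/2) and 3.8 / 4.2 Hz (amplitude 1/4).
peaks = freqs[X > 1]              # bins with non-negligible magnitude
print(np.round(peaks, 1))         # the 3.8, 4.0 and 4.2 Hz components
```

The bin magnitudes come out as $aN/2$ for a sine of amplitude $a$ over an integer number of cycles: 250 at 4 Hz and 125 at the two sidebands.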
{ "domain": "dsp.stackexchange", "id": 12460, "tags": "fft, dft, bandwidth, music, zero-padding" }
Scalogram (and related nomenclatures) for DWT?
Question: My understanding of the scalogram is that, for a particular row, the scores of the projection of the input signal with the wavelet at a particular displacement are shown. Across rows, the same thing applies, but for a dilated version of the wavelet. I thought that scalograms can be defined for all types of wavelet transforms, that is, for the: Continuous wavelet transform Discrete wavelet transform Redundant wavelet transform However, upon further investigation it seems that the scalogram is only definable for the CWT. Based on this I have multiple inter-related questions that Google has not been able to answer ATM. Questions: Is it true that the scalogram is not defined for the DWT or RWT? If so, why not? Let us say an $N$-length signal has a 10-level decomposition using the DWT. If all levels are plotted as an image (that is, a $10\times N$ image), what is this image called? As an example of a DWT 'scalogram', here is one for AWGN: Concerning the same signal, suppose we instead plot the approximation MRA of the signal at all levels. (So again, a $10\times N$ image.) What is this image called in proper terminology? For example, here I have shown approximation MRAs and detail MRAs for AWGN. (Clearly they are not the same as the 'scalogram' of the DWT). Thanks! Answer: The continuous wavelet transform is suitable for a scalogram because the analysis window can be sized and placed at any position. This flexibility allows for the generation of a smooth image in both the time and scale (analogous to frequency) directions. The continuous wavelet transform is a redundant transform because the analysis windows can overlap. In fact the CWT is considered infinitely redundant. The discrete wavelet transform is a non-redundant transform. It was developed so there would be a one-to-one correspondence between the information in the signal domain and the transform domain. This tight correspondence makes the DWT more suitable for use in signal reconstruction.
The analysis windows are fixed in both the time and scale directions, so if you plot the resulting DWT coefficients you will end up with a grid of boxes that start out large at one end of the scale axis and end up small at the other end. This representation isn't very satisfying for visual analysis of a signal. It certainly can be done, but I haven't seen anyone bother to do it. The plot is also referred to as a scalogram. Redundant Wavelet Transform: I had no previous experience with this, but thanks to comments from the OP, I found that the RWT or Stationary Wavelet Transform (SWT) is a discrete wavelet transform that has redundancy introduced to make the transform translation invariant. Furthermore, I found a reference that does a nice comparison of transform types as they apply to speech analysis. In this article, the transform results are all plotted, and for any case of wavelet transform, the plots are all referred to as scalograms (this includes the DWT and a version of the RWT). You can see how the various transform types present themselves visually in the article. For reference, here is a link to the article: http://www.math.purdue.edu/~lipeijun/paper/2005/End_Gen_Li_Fra_Sch_JASA_2005.pdf MRA - My encounter with this term is in association with multiresolution analysis. This applies to all wavelet transform types, but usually is discussed in the context of the DWT and its realization as a set of filter banks. In this context the result of an MRA is the same as the result of a DWT, and the plot of such results (a plot of a set of numbers) would still be a scalogram. The following is an example of CWT and DWT scalograms:
{ "domain": "dsp.stackexchange", "id": 665, "tags": "discrete-signals, frequency-spectrum, wavelet, terminology, scale-space" }
Plot a simple scatter plot graph with two additional solid lines
Question: I have been coding in Python for a number of years now. I've always felt that Matplotlib code takes up a lot more lines of code than it should. I could be wrong. I have the following function that plots a simple scatter plot graph with two additional solid lines. Is there any way for me to reduce the number of lines to achieve exactly the same outcome? I feel that my code is a little 'chunky'. dates contains an array of DateTime values in the yyyy-mm-dd H-M-S format return_values - array of floats main_label - string import pandas as pd import numpy as np import matplotlib.pyplot as plt import matplotlib.dates as mdates %matplotlib inline plt.style.use('ggplot') def plot_returns(dates, return_values, ewma_values, main_label): plt.figure(figsize=(15, 10)) plt.gca().xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m')) plt.gca().xaxis.set_major_locator(mdates.DayLocator(interval=31)) plt.scatter(dates, return_values, linestyle = '-', color='blue', s = 3, label = "Daily Returns") plt.plot(dates, ewma_values, linestyle = '-', color='green', label = "EWMA") plt.gcf().autofmt_xdate() plt.xlabel('Day', fontsize = 14) plt.ylabel('Return', fontsize = 14) plt.title(main_label, fontsize=14) plt.legend(loc='upper right', facecolor='white', edgecolor='black', framealpha=1, ncol=1, fontsize=12) plt.xlim(left = min(dates)) plt.show() dates = pd.date_range(start = '1/1/2018', end = '10/10/2018') return_values = np.random.random(len(dates)) ewma_values = 0.5 + np.random.random(len(dates))*0.1 plot_returns(dates, return_values, ewma_values, "Example") Answer: Is there any way for me to reduce the number of lines to achieve exactly the same outcome? should, in isolation, not be your overriding concern, and your code is about as minimally chunky as matplotlib will allow. Your current push - rather than to shed a line or two - should be to increase static testability, maintainability and structure. 
Said another way, this is not code golf, and not all short code is good code. To that end: Do not enforce a style in the global namespace - only call that from a routine in the application. What if someone else wants to import and reuse parts of your code? Add PEP484 type hints. Avoid calling gca and gcf. It's easy, and preferable, to have local references to your actual figure and axes upon creation, and to use methods bound to those specific objects instead of plt. Function calls via plt have more visual ambiguity, and need to infer the current figure and axes; being explicit is a better idea. On top of that, calls to plt are just wrappers to the bound instance methods anyway. Choose a consistent quote style. black prefers double quotes but I have a vendetta against black and personally prefer single quotes. It's up to you. Do not force a show() in the call to plot_returns, and return the generated Figure instead of None. This will improve reusability and testability. Do not use strings for internal date logic. Even if you had to use strings, prefer an unambiguous YYYY-mm-dd ISO format instead of yours. np.random.random belongs to NumPy's legacy random API; use default_rng() instead.
Suggested from datetime import date import pandas as pd import numpy as np import matplotlib.pyplot as plt import matplotlib.dates as mdates from matplotlib.figure import Figure def plot_returns( dates: pd.DatetimeIndex, return_values: np.ndarray, ewma_values: np.ndarray, main_label: str, ) -> Figure: fig, ax = plt.subplots(figsize=(15, 10)) ax.scatter(dates, return_values, linestyle='-', color='blue', s=3, label='Daily Returns') ax.plot(dates, ewma_values, linestyle='-', color='green', label='EWMA') fig.autofmt_xdate() ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m')) ax.xaxis.set_major_locator(mdates.DayLocator(interval=31)) ax.set_xlabel('Day', fontsize=14) ax.set_ylabel('Return', fontsize=14) ax.set_title(main_label, fontsize=14) ax.legend(loc='upper right', facecolor='white', edgecolor='black', framealpha=1, ncol=1, fontsize=12) ax.set_xlim(left=min(dates)) return fig def main() -> None: dates = pd.date_range(start=date(2018, 1, 1), end=date(2018, 10, 10)) rand = np.random.default_rng() return_values = rand.random(len(dates)) ewma_values = rand.uniform(low=0.5, high=0.6, size=len(dates)) plt.style.use('ggplot') plot_returns(dates, return_values, ewma_values, 'Example') plt.show() if __name__ == '__main__': main()
{ "domain": "codereview.stackexchange", "id": 42107, "tags": "python, matplotlib" }
Refilling a Bejeweled Board
Question: When you start thinking about designing a Match 3 type game, you realize that there are a great many ways to structure the rules. In this particular variant, the direction that the player swipes the orbs determines the way that the board is refilled. The new pieces will enter the board opposite the direction that the player swipes. So if the player swipes down, new pieces will enter the board from the top. After the matched pieces are removed, the existing pieces are moved down to replace them, and then new random pieces are generated to replace those existing pieces. I have tried to think of ways to reduce the redundant code here and also ways to shorten the methods, but I am not sure if it would make the code more readable or not. Left and right swipes are handled differently from up and down swipes, so they need to be in different methods. The logic is similar for each of the four directions but it is distinctly different. I would definitely appreciate feedback on the algorithm I am using. The first part of this determines whether a match has been made and then marks the orbs for destruction if they are part of a match. You can see that code here in my previous Bejeweled clone question. This part of the code refills the board after a successful match.
#pragma mark - Replace Orbs -(void) replaceMarkedOrbsForDirection:(MODirection)direction { NSMutableSet *rowsWithChanges = nil; NSMutableSet *columnsWithChanges = nil; switch (direction) { case MODirectionLeft: rowsWithChanges = [self rowsWithChanges]; for (DMRow *row in rowsWithChanges) { [self replaceOrbsForRow:row direction:direction]; } break; case MODirectionRight: rowsWithChanges = [self rowsWithChanges]; for (DMRow *row in rowsWithChanges) { [self replaceOrbsForRow:row direction:direction]; } break; case MODirectionUp: columnsWithChanges = [self columnsWithChanges]; for (DMColumn *column in columnsWithChanges) { [self replaceOrbsForColumn:column direction:direction]; } break; case MODirectionDown: columnsWithChanges = [self columnsWithChanges]; for (DMColumn *column in columnsWithChanges) { [self replaceOrbsForColumn:column direction:direction]; } break; default: break; } } -(NSMutableSet *) rowsWithChanges { NSMutableSet *rowsWithChanges = [[NSMutableSet alloc]init]; for (DMRow *row in _board.rows) { for (DMOrb *orb in row.orbs) { if (orb.markedForDestruction) { [rowsWithChanges addObject:row]; } } } return rowsWithChanges; } -(NSMutableSet *) columnsWithChanges { NSMutableArray *columns = _board.columns; NSMutableSet *columnsWithChanges = [[NSMutableSet alloc]init]; for (DMColumn *column in columns) { for (DMOrb *orb in column.orbs) { if (orb.markedForDestruction) { [columnsWithChanges addObject:column]; } } } return columnsWithChanges; } -(void) replaceOrbsForRow:(DMRow *)row direction:(MODirection)direction { NSMutableArray *markedOrbs = [[NSMutableArray alloc]init]; if (direction == MODirectionLeft) { //get the marked orbs for (int i = 0; i < kNumOrbsPerRow; i++) { DMOrb *orb = row.orbs[i]; if (orb.markedForDestruction) { [markedOrbs addObject:orb]; } } //get the existing orbs in the proper direction NSMutableArray *existingOrbs = [[NSMutableArray alloc]init]; DMOrb *lastMarkedOrb = [markedOrbs lastObject]; for (int i = lastMarkedOrb.boardPosition.x + 
1; i < kNumOrbsPerRow; i++) { [existingOrbs addObject:row.orbs[i]]; } //fill up the array to minimum necessary size with new orbs while (existingOrbs.count < markedOrbs.count) { [existingOrbs addObject:[DMOrb randomOrb]]; } //set the marked orbs to the appropriate orbs int orbIndex = 0; DMOrb *firstMarkedOrb = [markedOrbs firstObject]; for (int i = firstMarkedOrb.boardPosition.x; i < firstMarkedOrb.boardPosition.x + markedOrbs.count; i++) { [row setOrbInColumn:i toOrb:existingOrbs[orbIndex]]; orbIndex++; } //set remaining existing orbs to their changed positions //set new orbs for empty spaces at the end for (int i = firstMarkedOrb.boardPosition.x + markedOrbs.count; i < kNumOrbsPerRow; i++) { if (orbIndex < existingOrbs.count) { [row setOrbInColumn:i toOrb:existingOrbs[orbIndex]]; orbIndex++; } else { [row setOrbInColumn:i toOrb:[DMOrb randomOrb]]; } } } else if (direction == MODirectionRight) { //get the marked orbs for (int i = kNumOrbsPerRow - 1; i >= 0; i--) { DMOrb *orb = row.orbs[i]; if (orb.markedForDestruction) { [markedOrbs addObject:orb]; } } //get the existing orbs in the proper direction NSMutableArray *existingOrbs = [[NSMutableArray alloc]init]; DMOrb *lastMarkedOrb = [markedOrbs lastObject]; for (int i = lastMarkedOrb.boardPosition.x - 1; i >= 0; i--) { [existingOrbs addObject:row.orbs[i]]; } //fill up the array to minimum necessary size with new orbs while (existingOrbs.count < markedOrbs.count) { [existingOrbs addObject:[DMOrb randomOrb]]; } //set the marked orbs to the appropriate orbs int orbIndex = 0; DMOrb *firstMarkedOrb = [markedOrbs firstObject]; for (int i = firstMarkedOrb.boardPosition.x; i > firstMarkedOrb.boardPosition.x - markedOrbs.count; i--) { [row setOrbInColumn:i toOrb:existingOrbs[orbIndex]]; orbIndex++; } //set remaining existing orbs to their changed positions //set new orbs for empty spaces at the end for (int i = firstMarkedOrb.boardPosition.x - markedOrbs.count; i >= 0; i--) { if (orbIndex < existingOrbs.count) { [row 
setOrbInColumn:i toOrb:existingOrbs[orbIndex]]; orbIndex++; } else { [row setOrbInColumn:i toOrb:[DMOrb randomOrb]]; } } } } -(void) replaceOrbsForColumn:(DMColumn *)column direction:(MODirection)direction { NSMutableArray *markedOrbs = [[NSMutableArray alloc]init]; if (direction == MODirectionUp) { //get the marked orbs for (int i = kNumOrbsPerRow - 1; i >= 0; i--) { DMOrb *orb = column.orbs[i]; if (orb.markedForDestruction) { [markedOrbs addObject:orb]; } } //get the existing orbs in the proper direction NSMutableArray *existingOrbs = [[NSMutableArray alloc]init]; DMOrb *lastMarkedOrb = [markedOrbs lastObject]; for (int i = lastMarkedOrb.boardPosition.y - 1; i >= 0 ; i--) { [existingOrbs addObject:column.orbs[i]]; } //fill up the array to minimum necessary size with new orbs while (existingOrbs.count < markedOrbs.count) { [existingOrbs addObject:[DMOrb randomOrb]]; } //set the marked orbs to the appropriate orbs int orbIndex = 0; DMOrb *firstMarkedOrb = [markedOrbs firstObject]; for (int i = firstMarkedOrb.boardPosition.y; i > firstMarkedOrb.boardPosition.y - markedOrbs.count; i--) { DMRow *row = _board.rows[i]; [row setOrbInColumn:column.columnNumber toOrb:existingOrbs[orbIndex]]; orbIndex++; } //set remaining existing orbs to their changed positions //set new orbs for empty spaces at the end for (int i = firstMarkedOrb.boardPosition.y - markedOrbs.count; i >= 0; i--) { DMRow *row = _board.rows[i]; if (orbIndex < existingOrbs.count) { [row setOrbInColumn:column.columnNumber toOrb:existingOrbs[orbIndex]]; orbIndex++; } else { [row setOrbInColumn:column.columnNumber toOrb:[DMOrb randomOrb]]; } } } else if (direction == MODirectionDown) { //get the marked orbs for (int i = 0; i < kNumOrbsPerRow; i++) { DMOrb *orb = column.orbs[i]; if (orb.markedForDestruction) { [markedOrbs addObject:orb]; } } //get the existing orbs in the proper direction NSMutableArray *existingOrbs = [[NSMutableArray alloc]init]; DMOrb *lastMarkedOrb = [markedOrbs lastObject]; for (int i = 
lastMarkedOrb.boardPosition.y + 1; i < kNumOrbsPerRow; i++) { [existingOrbs addObject:column.orbs[i]]; } //fill up the array to minimum necessary size with new orbs while (existingOrbs.count < markedOrbs.count) { [existingOrbs addObject:[DMOrb randomOrb]]; } //set the marked orbs to the appropriate orbs int orbIndex = 0; DMOrb *firstMarkedOrb = [markedOrbs firstObject]; for (int i = firstMarkedOrb.boardPosition.y; i < firstMarkedOrb.boardPosition.y + markedOrbs.count; i++) { DMRow *row = _board.rows[i]; [row setOrbInColumn:column.columnNumber toOrb:existingOrbs[orbIndex]]; orbIndex++; } //set remaining existing orbs to their changed positions //set new orbs for empty spaces at the end for (int i = firstMarkedOrb.boardPosition.y + markedOrbs.count; i < kNumOrbsPerRow; i++) { DMRow *row = _board.rows[i]; if (orbIndex < existingOrbs.count) { [row setOrbInColumn:column.columnNumber toOrb:existingOrbs[orbIndex]]; orbIndex++; } else { [row setOrbInColumn:column.columnNumber toOrb:[DMOrb randomOrb]]; } } } } Answer: -(void) replaceMarkedOrbsForDirection:(MODirection)direction { NSMutableSet *rowsWithChanges = nil; NSMutableSet *columnsWithChanges = nil; switch (direction) { case MODirectionLeft: rowsWithChanges = [self rowsWithChanges]; for (DMRow *row in rowsWithChanges) { [self replaceOrbsForRow:row direction:direction]; } break; case MODirectionRight: rowsWithChanges = [self rowsWithChanges]; for (DMRow *row in rowsWithChanges) { [self replaceOrbsForRow:row direction:direction]; } break; case MODirectionUp: columnsWithChanges = [self columnsWithChanges]; for (DMColumn *column in columnsWithChanges) { [self replaceOrbsForColumn:column direction:direction]; } break; case MODirectionDown: columnsWithChanges = [self columnsWithChanges]; for (DMColumn *column in columnsWithChanges) { [self replaceOrbsForColumn:column direction:direction]; } break; default: break; } } This can be drastically shorter. 
First of all, we don't need the NSMutableSet variables, and second of all, in Objective-C, switch cases fall through by default, that's why we need the break; statement. - (void)replaceMarkedOrbsForDirection:(MODirection)direction { switch (direction) { case MODirectionLeft: case MODirectionRight: for (DMRow *row in [self rowsWithChanges]) { [self replaceOrbsForRow:row direction:direction]; } break; case MODirectionUp: case MODirectionDown: for (DMColumn *column in [self columnsWithChanges]) { [self replaceOrbsForColumn:column direction:direction]; } break; } } Also, notice that we don't need the default if we're switching on an enum and handling all cases. - (NSMutableSet *) rowsWithChanges { NSMutableSet *rowsWithChanges = [[NSMutableSet alloc]init]; for (DMRow *row in _board.rows) { for (DMOrb *orb in row.orbs) { if (orb.markedForDestruction) { [rowsWithChanges addObject:row]; } } } return rowsWithChanges; } We're using a set, which means that adding an object multiple times does nothing. So, let's save some iterations and checks: - (NSMutableSet *)rowsWithChanges { NSMutableSet *rowsWithChanges = [NSMutableSet set]; for (DMRow *row in _board.rows) { for (DMOrb *orb in row.orbs) { if (orb.markedForDestruction) { [rowsWithChanges addObject:row]; break; } } } return rowsWithChanges; } Notice that this break statement only breaks the inner-most loop. Once we know we need to handle this row, we don't need to check any more orbs, so let's move on to the next row. We can do the same with our columnsWithChanges method as well. But note in that method, we can eliminate this line: NSMutableArray *columns = _board.columns; And just reference _board.columns directly in the for-in loop. And one final note about these two methods... why don't we declare some readonly properties?
@property (readonly) NSMutableSet *rowsWithChanges; @property (readonly) NSMutableSet *columnsWithChanges; No local instance variable is created--there is no _rowsWithChanges or _columnsWithChanges, because we manually implemented each getter already and didn't use the variable. But meanwhile, without changing anything else specifically regarding these properties or methods, we can stop writing: [self rowsWithChanges] And instead, use self.rowsWithChanges. Our code will still perform identically but it will read a little nicer. I think rather than two gigantic methods for left/right or up/down, I'd prefer four methods actually--one for each direction. Or--if possible, think of some way to simplify the logic into one method... but yeah... For some reason, here, we slip out of the for-in loops: for (int i = kNumOrbsPerRow - 1; i >= 0; i--) { DMOrb *orb = column.orbs[i]; if (orb.markedForDestruction) { [markedOrbs addObject:orb]; } } Is it important to go backward here? If so, we can still use for-in backwards: for (DMOrb *orb in [column.orbs reverseObjectEnumerator]) { if (orb.markedForDestruction) { [markedOrbs addObject:orb]; } } But if backwards isn't important, just don't call reverseObjectEnumerator. And then I read this next section: //get the existing orbs in the proper direction NSMutableArray *existingOrbs = [[NSMutableArray alloc]init]; DMOrb *lastMarkedOrb = [markedOrbs lastObject]; for (int i = lastMarkedOrb.boardPosition.y - 1; i >= 0 ; i--) { [existingOrbs addObject:column.orbs[i]]; } Aren't we going through the same loop twice for no reason? Can't we just do this: for (DMOrb *orb in [column.orbs reverseObjectEnumerator]) { if (orb.markedForDestruction) { [markedOrbs addObject:orb]; } else { [existingOrbs addObject:orb]; } } Followed by: while (existingOrbs.count < kNumOrbsPerRow) { [existingOrbs addObject:[DMOrb randomOrb]]; } And this should eliminate the next for loop where we're setting positions, I believe.
But for good measure, I do want to point something out: for (int i = firstMarkedOrb.boardPosition.y; i < firstMarkedOrb.boardPosition.y + markedOrbs.count; i++) { DMRow *row = _board.rows[i]; [row setOrbInColumn:column.columnNumber toOrb:existingOrbs[orbIndex]]; orbIndex++; } The orbIndex++; within the loop body isn't particularly good. You should know that we can put this into the update statement. The following is perfectly valid C/ObjC syntax: for (int i = someCalculatedValue, orbIndex = 0; i < someUpperLimit; ++i, ++orbIndex) { // do loop stuff } And it's more readable. If the declaration/conditional/update statements in your for statement are too long, separate them onto multiple lines: for ( int i = someCalculatedValue, orbIndex = 0; i < someUpperLimit; ++i, ++orbIndex ) { // do loop stuff }
{ "domain": "codereview.stackexchange", "id": 11319, "tags": "game, objective-c" }
Complex Grassmann Dirac Functional - How do we integrate over it?
Question: I'm following the book of Brian Hatfield (Quantum Field Theory of Point Particles and Strings), p.192 here: For real Grassmann numbers (and functionals thereof): If $\Phi[\psi]$ is a functional, and $\psi(x)$ is a Grassmann-valued function, we demand that $$\int \mathcal{D}\psi \delta[\psi - \xi] \Phi[\psi] = \Phi[\xi]$$ (this is equation 9.67), and one option to do this is to let (equation 9.66) $$\delta[\psi - \xi] = \prod_x (\psi(x)-\xi(x)).$$ The complex case of the delta functional is NOT treated in the book, and I want to deduce how the mentioned relations would turn out for that case. Here, $\psi$ now has two components ($\psi = \frac{1}{2}(\psi_1 + i \psi_2)$), which makes me wonder: how does the fundamental relation turn out? For complex $\psi$: \begin{align} \int \mathcal{D}\psi \delta[\psi - \xi] \Phi[\psi] = \Phi[\xi] \end{align} or \begin{align} \int \mathcal{D} \psi \int \mathcal{D} \psi^* \delta[\psi - \xi] \Phi[\psi] = \Phi[\xi]? \end{align} The first version works, but only if I assume that $\delta[\psi - \xi] = \prod\limits_x (\psi(x) - \xi(x))$ and $\delta[\psi-\xi] = \delta[\psi-\xi]^*$, and those two assumptions exclude each other. In either case, what would be a realization of the $\delta$ functional? Would it still be \begin{align} \delta[\psi - \xi] = \prod_x (\psi(x) - \xi(x))? \end{align} Answer: Motivated by the one-dimensional complex delta function, whereby $\delta_\mathbb{C}(z):=\delta(z)\delta(\bar{z}),$ so that $$\int \mathrm{d}z\wedge\mathrm{d}\bar{z}\ \delta_\mathbb{C}(z-\zeta) f(z) = f(\zeta),$$ you can define $\delta_\mathbb{C}[\psi] := \delta[\psi]\delta\!\left[\bar{\psi}\right],$ satisfying $$\int\mathrm{D}\psi\;\mathrm{D}\bar{\psi}\ \delta_\mathbb{C}[\psi-\xi] \Phi[\psi] = \Phi[\xi]$$ and realised as $$\delta_\mathbb{C}[\psi-\xi] = \prod_{x} \Big(\psi(x)-\xi(x)\Big)\Big(\bar{\psi}(x)-\bar{\xi}(x)\Big).$$
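As a quick consistency check on the proposed realization, the defining property can be verified by hand for a single Grassmann variable. This finite-dimensional check is an addition of mine, using the usual Berezin conventions $\int \mathrm{d}\theta = 0$, $\int \mathrm{d}\theta\,\theta = 1$, with $\theta$ and $\xi$ anticommuting:

```latex
% For one Grassmann variable, \delta(\theta-\xi) = \theta-\xi, and the most
% general even function is f(\theta) = a + \theta b (a commuting, b anticommuting):
\begin{align*}
\int \mathrm{d}\theta\, (\theta-\xi)(a + \theta b)
  &= \int \mathrm{d}\theta\, \bigl( \theta a - \xi a - \xi\theta b \bigr)
      && \text{using } \theta^2 = 0 \\
  &= \int \mathrm{d}\theta\, \bigl( \theta a + \theta\,\xi b \bigr)
      && \text{using } \xi\theta = -\theta\xi,\ \int \mathrm{d}\theta\,\xi a = 0 \\
  &= a + \xi b = f(\xi).
\end{align*}
```

The complex case then factorizes in the same way: one such integration for $\psi$ and one for $\bar\psi$, which is why the product realization above reproduces $\Phi[\xi]$.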
{ "domain": "physics.stackexchange", "id": 91340, "tags": "notation, integration, complex-numbers, dirac-delta-distributions, grassmann-numbers" }
How to resolve dependency issues when installing multimaster-fkie on Ubuntu ARM?
Question: Hi all, I'm trying to install multimaster-fkie with the command sudo apt-get install ros-indigo-multimaster-fkie I get the following error message --------@---------:~/catkin_ws$ sudo apt-get remove ros-indigo-multimaster-fkie Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: libqt4-opengl-dev : Depends: libgles2-mesa-dev E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). when I try to run, as suggested, sudo apt-get -f install I get the following error message. trying to overwrite '/usr/include/GLES2/gl2.h', which is also in package libraspberrypi-dev 1.20150502.d280099-1 Errors were encountered while processing: /var/cache/apt/archives/libegl1-mesa-dev_10.1.3-0ubuntu0.4_armhf.deb /var/cache/apt/archives/libgles2-mesa-dev_10.1.3-0ubuntu0.4_armhf.deb E: Sub-process /usr/bin/dpkg returned an error code (1) So my question is: Does anyone know if it is possible to resolve the dependency issues for libegl1-mesa-dev and libgles2-mesa-dev such that the installation is successful? I'm running Ubuntu 14.04 ARM on a Raspberry Pi 2. The image is taken from https://wiki.ubuntu.com/ARM/RaspberryPi Thanks in advance :) Originally posted by naits3 on ROS Answers with karma: 21 on 2015-09-25 Post score: 2 Original comments Comment by tropic on 2016-02-22: Did you ever solve this, I receive the same error on my Raspberry Pi 2 Model B running Ubuntu 14.04? Comment by mukut_noob on 2016-03-24: Same for me too Answer: As the above link is broken, I'll repeat (and facilitate) the solution from http://forums.raspberrypi.org/forums/viewtopic.php?f=56&t=100553&start=200 here. Also just ran into the problem .. 
Basically it's sudo apt-get download libegl1-mesa-dev libgles2-mesa-dev sudo dpkg -i --force-overwrite /var/cache/apt/archives/libegl1-mesa-dev_10.1.3-0ubuntu0.6_armhf.deb sudo dpkg -i --force-overwrite /var/cache/apt/archives/libgles2-mesa-dev_10.1.3-0ubuntu0.6_armhf.deb sudo apt-get install -f Maybe you need to replace the versions in the middle commands with something different .. tab-autocomplete is your friend ;) Originally posted by blubbi321 with karma: 95 on 2017-03-04 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by ruffsl on 2019-06-03: Alternatively, you can specify dpkg to force overwrite through apt-get directly, e.g: sudo apt-get -o Dpkg::Options::="--force-overwrite" install libegl1-mesa-dev libgles2-mesa-dev
{ "domain": "robotics.stackexchange", "id": 22714, "tags": "ros" }
Is there a natural scale associated with polynomials?
Question: This question is related to a previous question asked here. Power laws are scale invariant. They don't have a built-in or characteristic scale associated with them. Exponentials such as $e^{-x/\xi}$ are not scale-invariant. They have a characteristic scale $\xi$. What is the matter with polynomials such as $f(x)=ax^2+bx^3$ (where $x,a,b$ are all dimensional parameters with appropriate dimension)? Like exponentials, they too are not scale-invariant. But is there a natural scale associated with them? If yes, how does one find that hidden scale? Answer: Write your polynomial as $f(x) = ax^2 [1+ x/(a/b)]$. You see that $a/b$ is the scale at which the $x^3$ term takes over. Take a look at the log-log plot of $f(x)$ for different values of $l=a/b$. The black lines are there to guide the eye. The dashed line corresponds to $x^2$ and the dotted one to $x^3$. Take a look at the red curve. It scales as $x^2$ for small arguments and as $x^3$ when $x$ is large. The cross-over scale is clearly visible in the middle.
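Numerically, the hidden scale $l = a/b$ is simply the point where the two terms are equal: below it the $x^2$ term dominates, above it the $x^3$ term does. A minimal sketch (the particular values of $a$ and $b$ are arbitrary illustrations):

```python
def term_ratio(x, a, b):
    """Ratio of the cubic term to the quadratic term of f(x) = a*x**2 + b*x**3.

    Algebraically this simplifies to x / (a/b), so it crosses 1 exactly
    at the hidden scale l = a/b."""
    return (b * x**3) / (a * x**2)

a, b = 2.0, 0.5          # hidden scale l = a/b = 4
l = a / b

# Well below the scale, the quadratic term dominates (ratio << 1);
# well above it, the cubic term dominates (ratio >> 1).
print(round(term_ratio(0.04, a, b), 6))   # 0.01
print(round(term_ratio(l, a, b), 6))      # 1.0  -- the crossover
print(round(term_ratio(400.0, a, b), 6))  # 100.0
```

This is the same information the log-log plot conveys: the kink where the slope changes from 2 to 3 sits at $x \approx a/b$.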
{ "domain": "physics.stackexchange", "id": 47705, "tags": "dimensional-analysis, scale-invariance, scales" }
Going distant in sourcing machining
Question: My North American facility is looking at a major shutdown this fall. In the area, three other large facilities are also doing either shutdowns or expansions in the fall. A last-minute project that I have been assigned requires a couple of weeks of machining custom components. This is an issue as none of the local machine shops are likely to be able to fit the work within their schedule in the required time frame. At this point I think my only option is to start looking beyond the local shops. What should one look for when trying to source non-local machining? Are there any red flags to look for when talking to a shop? Answer: I've dealt with this in many ways. Ultimately, here is what I look for when I have to go beyond local sourcing of custom parts: Responsiveness. You can have the best website in the world, but unless I know you are there for me when I need to change anything, I can't rely on you. Questioning. The best vendors I have for custom pieces ask questions and explain to me every step of the way, so I know exactly what's going on, and what they might want to change. Compliance with my specifications. Ultimately, if it is really complicated, I draw up a specification. If they question my spec, that's a good thing. If they quote while taking exceptions to my spec, that's a bad thing. Openness to visit. Your local machine shop is usually willing to let you come in for a visit. Your non-local machine shop should do the same. Quote comes in with an honest lead time. Ultimately, if it isn't there on time, it isn't there. Make sure they state lead time and are willing to back it up.
{ "domain": "engineering.stackexchange", "id": 268, "tags": "machining, project-management" }
Use of geometry index for the determination of coordination environment
Question: Geometry index $\tau$ is supposed to resolve the proper geometry for coordination numbers (C.N.) 4 and 5 based on its extreme values ($0$ or $1$). There is also a web app Geom which handles both cases for a structure in XYZ format. I'd like to summarize the open questions regarding the proper and efficient use of this method: I'm not sure how to address the intermediate values. Say, for $\tau_5 = 0.33$: is it a square pyramidal geometry with a character of trigonal bipyramid? Or can one just name this coordination environment square pyramidal and call it a day? Are there similar algorithms developed for the higher C.N.s: capped trigonal prism vs pentagonal bipyramid (C.N. 7); cube vs square antiprism (C.N. 8)? Answer: For your first question, the original paper (J. Chem. Soc., Dalton Trans. 1984, 1349–1356) that described the geometry index $\tau_5$ defined it as an "index of trigonality". For example, they write for a compound with $\tau_5=0.48$ By this criterion, the irregular co-ordination geometry of $\ce{[Cu(bmdhp)(OH2)]2+}$ in the solid state is described as being $48\%$ along the pathway of distortion from square pyramidal toward trigonal bipyramidal. For your second question, while I haven't been able to find a $\tau_7$ or $\tau_8$ used in the literature, it seems possible to define such parameters under the right conditions. To devise a $\tau_8$, we can see that for a regular cube $\ce{MX_8}$, there can only be bond angles of $70.5^\circ$ (between adjacent $\ce{X}$ in the same square) and $109.5^\circ$ (between opposite corner $\ce{X}$ of the same square or between corner $\ce{X}$ of different squares). However, an antiprism instead has an angle of $99.6^\circ$ separating the $\ce{X}$ of different squares. (Image obtained from Inorganic Chemistry by Miessler and Tarr) This suggests using a formula reminiscent of $\tau_5$ to define $\tau_8$ as the antiprismatic distortion index. 
One possibility is $$\tau_8=\frac{\beta-\alpha}{9.9^\circ}$$ where $\beta > \alpha$ are the two largest valence angles and $9.9^\circ$ is a normalization factor to make it between $0$ and $1$. So when $\alpha=\beta=109.5^\circ \to \tau_8=0 \to$ cubic geometry, and when $\beta=109.5^\circ$, $\alpha=99.6^\circ \to \tau_8=1 \to$ antiprismatic geometry. This will only work if the structure is a regular antiprism (i.e. an anticube). The same is true for defining $\tau_7$ between a pentagonal bipyramid and a monocapped trigonal prism. This is because the angles for these will vary if all the attached groups are not the same and so a consistent scheme based on the angles would not suffice. I also imagine that $\tau_7$ would be harder to define in this way because I don't think there is a pair of angles that on its own could describe the distortion between the two geometries.
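For concreteness, here is a sketch of how both indices could be computed from the two largest valence angles. The $\tau_5$ formula $(\beta-\alpha)/60^\circ$ is the standard one from the Addison et al. paper cited above; the $\tau_8$ variant is the hypothetical index proposed in the answer, valid only for regular (anti)cubes:

```python
def tau5(beta, alpha):
    """Addison trigonality index for C.N. 5 from the two largest valence
    angles beta >= alpha (degrees): 0 = square pyramid, 1 = trigonal bipyramid."""
    return (beta - alpha) / 60.0

def tau8(beta, alpha):
    """Hypothetical antiprismatic distortion index sketched in the answer
    above: 0 = cube, 1 = regular square antiprism. Only meaningful for
    (near-)regular MX8 polyhedra."""
    return (beta - alpha) / 9.9

# Limiting cases for tau5:
print(tau5(180.0, 180.0))            # 0.0 -> ideal square pyramid
print(tau5(180.0, 120.0))            # 1.0 -> ideal trigonal bipyramid

# Limiting cases for the proposed tau8:
print(round(tau8(109.5, 109.5), 6))  # 0.0 -> cube
print(round(tau8(109.5, 99.6), 6))   # 1.0 -> regular square antiprism
```

An intermediate value such as $\tau_5 = 0.33$ would then read, in the language of the original paper, as "$33\%$ along the distortion pathway from square pyramidal toward trigonal bipyramidal".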
{ "domain": "chemistry.stackexchange", "id": 8697, "tags": "coordination-compounds, crystal-structure, molecular-structure, crystallography" }
Program to determine ranges of char, short, int and long variables, both signed and unsigned
Question: I wrote code to determine the ranges of char, short, int, and long variables, both signed and unsigned. Please help me improve my code :) Here is the code: #include <stdio.h> //function declarations... long power(int base, int power); int main(void) { //From -2^(N-1) to 2^(N-1)-1 (Two's complement) int intmin = power(2, sizeof(int) * 8 - 1); int intmax = power(2, sizeof(int) * 8 - 1) - 1; unsigned unsignedintmax = power(2, sizeof(int) * 8) - 1; char minchar = -(power(2, sizeof(char) * 8 - 1)); char maxchar = power(2, sizeof(char) * 8 - 1) - 1; unsigned char unsignedcharmax = power(2, sizeof(char) * 8) - 1; short shortmin = -(power(2, sizeof(short) * 8 - 1)); short shortmax = power(2, sizeof(short) * 8 - 1) - 1; unsigned short unsignedshortmax = power(2, sizeof(short) * 8) - 1; long minlong = power(2, sizeof(long) * 8 - 1); long maxlong = power(2, sizeof(long) * 8 - 1) - 1; unsigned long unsignedlongmax = power(2, sizeof(long) * 8) - 1; minlong*=-1; printf("\nSigned char can be minimum: %d and maximum: %d\n", minchar, maxchar); printf("\nUnsigned char can be minimum: %d and maximum: %u\n", 0, unsignedcharmax); printf("\nSigned short can be minimum: %d and maximum: %d\n", shortmin, shortmax); printf("\nUnsigned short can be minimum: %d and maximum: %u\n", 0, unsignedshortmax); printf("\nSigned int can be minimum: %d and maximum: %d\n", intmin, intmax); printf("\nUnsigned int can be minimum: %d and maximum: %u\n", 0, unsignedintmax); printf("\nSigned long can be minimum: %ld and maximum: %ld\n", minlong, maxlong); printf("\nUnsigned long can be minimum: %d and maximum: %lu\n\n", 0, unsignedlongmax); return 0; } long power(int base, int power) { long pf = 1; for (int i = 0; i < power; i++) { pf *= base; } return pf; } Answer: Signedness of unqualified char is implementation-defined. It may well be possible that char is in fact unsigned. Change char to signed char. A char is not guaranteed to have 8 bits (it is guaranteed to have at least 8 bits). 
Use CHAR_BIT instead. Narrowing types (e.g. assigning long to char) always make me uncomfortable. A better technique to get a value with only the MSB set is (TYPE is either char, int, long, whatever) unsigned TYPE value = ~(((unsigned TYPE) -1) >> 1);
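The two's-complement bounds the program computes by repeated multiplication can also be written directly with bit shifts. A quick sketch of the arithmetic in Python, where arbitrary-precision integers sidestep the overflow problems the C version has to dodge (the bit widths shown are typical values only, since C guarantees just minimum sizes; real C code should derive them from `CHAR_BIT * sizeof(type)`):

```python
def signed_range(bits):
    """Two's-complement range of a signed integer type with the given
    bit width: [-2**(bits-1), 2**(bits-1) - 1]."""
    return -(1 << (bits - 1)), (1 << (bits - 1)) - 1

def unsigned_max(bits):
    """Maximum value of an unsigned integer type: 2**bits - 1."""
    return (1 << bits) - 1

# Typical (not guaranteed) widths for char, short, and int:
print(signed_range(8))   # (-128, 127)
print(signed_range(16))  # (-32768, 32767)
print(signed_range(32))  # (-2147483648, 2147483647)
print(unsigned_max(16))  # 65535
```

Note that in C the shift `1 << (bits - 1)` would itself overflow a signed type of that width, which is exactly why the review suggests computing the MSB-only pattern through an unsigned type.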
{ "domain": "codereview.stackexchange", "id": 22337, "tags": "beginner, c, integer" }
Calling a button's OnClick method elsewhere - Good Practice?
Question: I have two buttons on an ASP.NET page that have separate OnClick methods in the code behind. In one of the methods, if a certain condition is met, the entire process of the other method should be executed. Here is a simplified example of my code, which works as I intend in my actual implementation: // OnClick="ThisButton_OnClick" for ThisButton protected void ThisButton_OnClick(object sender, EventArgs e) { // do stuff if(condition) ThatButton_OnClick(null, null); else // do stuff } // OnClick="ThatButton_OnClick" for ThatButton protected void ThatButton_OnClick(object sender, EventArgs e) { // do stuff } My question is this: is it good practice to do something like this, where you call a method specifically designed for OnClick without clicking that button, or should I be more explicit (perhaps have a separate helper function that is called by ThatButton_OnClick and in the if)? Answer: I'd suggest something like the following which is simple enough for even a novice like me to understand. Is there a real need to simulate a button click? You're not testing buttons, are you? // OnClick="ThisButton_OnClick" for ThisButton protected void ThisButton_OnClick(object sender, EventArgs e) { // do stuff if(condition) DoStuffThatButtonWouldDo(arg); else // do stuff } // OnClick="ThatButton_OnClick" for ThatButton protected void ThatButton_OnClick(object sender, EventArgs e) { DoStuffThatButtonWouldDo(arg); } private static void DoStuffThatButtonWouldDo(fancy arguments) { //TODO: do all the stuff that that button would do }
{ "domain": "codereview.stackexchange", "id": 4217, "tags": "c#, asp.net" }
How many RNA-binding proteins can simultaneously bind on a single mRNA?
Question: Typically, how many RNA-binding proteins can simultaneously bind to a single mRNA? Or said differently, how many "binding sites" does an mRNA have? What order of magnitude? I am interested in RNA granules like stress granules or P-bodies. They contain, inter alia, mRNA and RNA-binding proteins. I am not a biologist and I didn't come across this information so far in the related literature. Answer: See this paper. They have studied RBP-protected sites in the entire human transcriptome by RNA-protein crosslinking followed by RNase digestion and sequencing: PIPseq. Figure 1 of the paper shows the distribution of protein-protected sites in RNAs. They also correlate it with different regions of mRNA and its expression. They show the number of protein-protected sites (PPS) per transcript, but that is not a proper metric in my opinion. The number should be normalized by transcript length so that you get the density of protected sites. From figure 4 (see below) you can roughly estimate that the average PPS density is close to 0.6, which means that 60% of any RNA is expected to be protein bound. Figure 4 Other points to be noted: Highly translated mRNAs will have multiple ribosomes on their CDS and are likely to be more protected. Sequestered RNAs in stress granules will also have a high density of PPS. The footprint of different RBPs will differ, so the number of proteins that can bind to an mRNA will differ between RBPs. Further reading: http://dx.doi.org/10.1186/gb4153 http://dx.doi.org/10.1016/j.cell.2012.04.031 http://dx.doi.org/10.1016/j.molcel.2012.05.021
{ "domain": "biology.stackexchange", "id": 2876, "tags": "biochemistry, molecular-biology, rna, mrna, protein-binding" }
Do we include Transmission delay in Round trip time?
Question: Is RTT $= 2 \times T_{\text{propagation delay}}$ only, or do we include transmission delay to send packets as well? Some examples include transmission delay, some do not. I'm confused. Can anyone help? Answer: Suppose you have two computers (A and B) both connected to a switch with cut-through forwarding. When you are interested in the RTT between computer A and computer B, you'll omit transmission delay, because the switch transmits the frame to B as it arrives from A, so transmission delay is not significant. Now, suppose that our switch uses store-and-forward; in this case, when the frame arrives, the switch will first store it completely, and then send it to B, so we have to deal with transmission delay.
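To make the two cases concrete, here is a rough numerical sketch (the link parameters are made-up examples, and cut-through is idealized as serializing the frame only once per direction, while store-and-forward re-serializes it at every hop):

```python
def one_way_delay(packet_bits, rate_bps, dist_m, speed_mps, hops,
                  store_and_forward=True):
    """One-way delay over `hops` identical links through store-and-forward
    or (idealized) cut-through switches."""
    t_trans = packet_bits / rate_bps  # serialization (transmission) delay per link
    t_prop = dist_m / speed_mps       # propagation delay per link
    serializations = hops if store_and_forward else 1
    return serializations * t_trans + hops * t_prop

# 1000-byte frame over two 10 km links at 10 Mb/s through one switch:
bits, rate = 8000, 10e6
dist, speed = 10e3, 2e8

rtt_sf = 2 * one_way_delay(bits, rate, dist, speed, hops=2, store_and_forward=True)
rtt_ct = 2 * one_way_delay(bits, rate, dist, speed, hops=2, store_and_forward=False)

print(round(rtt_sf, 6))  # 0.0034 s: transmission delay counted on every link
print(round(rtt_ct, 6))  # 0.0018 s: transmission delay counted once per direction
```

The difference between the two numbers is exactly the extra serializations introduced by store-and-forward, which is why some textbook examples include transmission delay in the RTT and others do not.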
{ "domain": "cs.stackexchange", "id": 18164, "tags": "computer-networks" }
Mean across every several rows in pandas
Question: I have a table of features and labels where each row has a time stamp. Labels are categorical. They go in a batch where one label repeats several times. Batches with the same label do not have a specific order. The number of repetitions of the same label in one batch is always the same. In the example below, each group of three rows has the same label. I would like to get a new table where Var 1 and Var 2 are averaged across repetitions of labels. In this example, I would like to average every three rows as follows: Is this possible to do with some functions in pandas or other Python libraries? Answer: Solution: In [24]: res = (df.groupby((df.Label != df.Label.shift()).cumsum()) .mean() .reset_index(drop=True)) Result: In [25]: res Out[25]: Var1 Var2 Label 0 22.413333 18.733333 2 1 39.390000 20.270000 3 2 38.450000 20.196667 1 3 21.173333 17.860000 3 4 36.453333 19.246667 2 Source DF (I had to use an OCR program in order to parse the data from your picture - please post your dataset in text/CSV form next time): In [23]: df Out[23]: Timestamp Var1 Var2 Label 0 2015-01-01 23.56 18.85 2 1 2015-02-01 21.23 18.61 2 2 2015-03-01 22.45 18.74 2 3 2015-04-01 35.32 19.94 3 4 2015-05-01 40.50 20.36 3 5 2015-06-01 42.35 20.51 3 6 2015-07-01 41.33 20.43 1 7 2015-08-01 38.35 20.19 1 8 2015-09-01 35.67 19.97 1 9 2015-10-01 22.20 17.97 3 10 2015-11-01 20.11 17.75 3 11 2015-12-01 21.21 17.86 3 12 2015-01-13 32.79 18.95 2 13 2015-01-14 37.45 19.33 2 14 2015-01-15 39.12 19.46 2 Explanation: if we want to group the DF by consecutive labels of the same value, then we need to create a series with a unique value for each group. 
This can be done using the following trick: In [32]: (df.Label != df.Label.shift()).cumsum() Out[32]: 0 1 1 1 2 1 3 2 4 2 5 2 6 3 7 3 8 3 9 4 10 4 11 4 12 5 13 5 14 5 Name: Label, dtype: int32 In [33]: df.Label != df.Label.shift() Out[33]: 0 True 1 False 2 False 3 True 4 False 5 False 6 True 7 False 8 False 9 True 10 False 11 False 12 True 13 False 14 False Name: Label, dtype: bool NOTE: False == 0 and True == 1 in Python.
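The same consecutive-run grouping can be sketched in plain Python with `itertools.groupby`, which, like the `shift()/cumsum()` trick, only merges *adjacent* equal labels. The data below is the first part of the table from the answer:

```python
from itertools import groupby
from statistics import mean

rows = [
    # (Var1, Var2, Label)
    (23.56, 18.85, 2), (21.23, 18.61, 2), (22.45, 18.74, 2),
    (35.32, 19.94, 3), (40.50, 20.36, 3), (42.35, 20.51, 3),
    (41.33, 20.43, 1), (38.35, 20.19, 1), (35.67, 19.97, 1),
]

# groupby merges only consecutive equal keys, which is exactly the
# behavior the shift()/cumsum() trick reproduces in pandas.
result = []
for label, run in groupby(rows, key=lambda r: r[2]):
    run = list(run)
    result.append((round(mean(r[0] for r in run), 6),
                   round(mean(r[1] for r in run), 6),
                   label))

for row in result:
    print(row)
# (22.413333, 18.733333, 2)
# (39.39, 20.27, 3)
# (38.45, 20.196667, 1)
```

The averaged values match the first three rows of the pandas `res` above; a later batch with a previously seen label (e.g. label 3 again) would form its own separate group, just as in the pandas solution.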
{ "domain": "datascience.stackexchange", "id": 4407, "tags": "python, pandas, sql, data-wrangling, data-table" }
How to identify whether the friction is kinetic or static?
Question: Body 'A' is moving forward with a certain acceleration over the ground. The ground applies a kinetic friction force on 'A' in the backward direction to oppose the relative motion. Block 'A' also applies an equal and opposite friction force on the ground; would that friction force be kinetic or static? Intuitively, there is relative motion of the ground with respect to the block, so there must be kinetic friction. But since the ground is part of the Earth, an inertial frame in this case, why can't we call it static friction? Answer: To determine whether friction is static or kinetic, examine whether the two surfaces are moving relative to each other -- if so, the friction is kinetic, and if not, it's static. It doesn't matter which inertial reference frame is chosen. Even though the ground is an inertial frame, the relative motion between the block and the ground means that the friction is kinetic here. This makes sense in the context of the physical difference between kinetic and static friction. Static friction usually results from two surfaces "interlocking" into a configuration that requires additional force to change, whereas kinetic friction usually results from chemical interactions between the two surfaces. The interlocking can only occur between surfaces that aren't in relative motion.
{ "domain": "physics.stackexchange", "id": 14404, "tags": "newtonian-mechanics, friction" }
what is the first step of doing image processing with ROS?
Question: hi, i am a beginner with image processing and want to start with ROS, but i don't know what the first step is. for example, should i first install OpenCV, or is that not important? and maybe i should install some package in my workspace? i have a webcam on my laptop that i want to use. please guide me if anyone can. Originally posted by fatima on ROS Answers with karma: 62 on 2015-12-18 Post score: 0 Answer: The first thing you'll need to do is get your webcam output captured by a ROS node and broadcast as an image topic so that other ROS nodes can use the images. Take a look at http://wiki.ros.org/usb_cam or http://wiki.ros.org/libuvc_camera Both should work with a USB web cam. Then when the camera capture node is running you can use Rviz as a quick way to check the images are being published properly. Rviz is a GUI that allows you to visualize ROS topics. If you click 'add' in the bottom left of the window and click on the add-by-topic tab you should see your camera topic listed. Selecting this topic will add a new window to Rviz that should show the images from your webcam! When this is all working you can have a look at the image subscriber tutorial here: http://wiki.ros.org/image_transport/Tutorials/SubscribingToImages to get started manipulating the image data; this is where you might start playing with OpenCV. Originally posted by PeteBlackerThe3rd with karma: 9529 on 2015-12-18 This answer was ACCEPTED on the original site Post score: 3 Original comments
Comment by fatima on 2015-12-18: hi thank you so much, i ran the usb_cam node before and could run Rviz. i clicked add and the result is: Comment by fatima on 2015-12-18: [ INFO] [1450467742.148343193]: rviz version 1.11.10 [ INFO] [1450467742.148411685]: compiled against OGRE version 1.8.1 (Byatis) [ INFO] [1450467742.676989007]: Stereo is NOT SUPPORTED [ INFO] [1450467742.677113146]: OpenGl version: 3 (GLSL 1.3). Comment by fatima on 2015-12-18: i saw the Rviz window with a grid. i added the camera, but again i only have a grid; i don't know how i should use the camera in Rviz? Comment by hvpandya on 2016-01-21: You have to add the image topic to Rviz. After you run Rviz, click on the Add button in the lower left corner. Then click 'By topic' and select the correct topic based on your camera topic name. It should look something like: <camera_name>/Image
{ "domain": "robotics.stackexchange", "id": 23258, "tags": "ros, opencv, webcam, image" }
Charge transfer in two different metals by touching them
Question: When we touch two metallic conductors together, one neutral and the other with an excess of charge: Case 1 - Both metals are copper; we can then calculate the actual charges on them at steady state from the capacitance formula. Case 2 - One metal is copper and the other aluminium, and again we can calculate the steady-state charges from the capacitance formula. Problem - If the size and shape of the conductors remain the same, then the calculated charge in both cases will also be the same, because capacitance depends only on size and shape. But intuitively, why should this be true? Even changing the metals doesn't change anything about the charge transfer, which seems quite counterintuitive to me - or am I missing something? And will the time constants (charging times) differ between the two cases or not? Answer: There are several factors which could in principle affect what happens when objects made of two different metals touch, but in practice their effects are not significant. Resistivity is one, but equilibrium will be reached too quickly for its effect to be easily observed. If there is both some corrosion of the metals and a thin layer of moisture between them, there could be a battery effect, but this would often be shorted out, and in any case is very small compared with the electrostatic charge. Treating the different metals merely as conductors works well in most situations.
{ "domain": "physics.stackexchange", "id": 69578, "tags": "electrostatics, charge, conductors" }
Insect identification
Question: It was found in my apartment in Finland. I had to kill it to stop it from getting away before looking for the camera, but it looked pretty much the same alive and intact. Its length is about half a centimeter, and the younger ones look darker. It is often found in the dark. Since their appearance, I often notice red and slightly itchy bug bites in the morning. The question is: what is this insect? Answer: It reminds me of a silverfish, a common wingless house insect.
{ "domain": "biology.stackexchange", "id": 1744, "tags": "species-identification" }
What generates the current that then is made to flow in superconductors at CERN?
Question: At CERN, for example, what is actually providing the current? It cannot be a battery, because when the load resistance is less than the internal resistance the load sees no voltage... So do they use a current source? A transformer and then a rectifier to make it DC? Answer: CERN is powered from the French national grid, with a backup power line from the Swiss national grid. Details are available in this document. You specifically ask about the magnets: superconducting magnets use a controlled current power supply like this one or this one, or many others a short Google away. Those two examples are intended for applications like NMR or MRI where the power requirements are relatively modest, but the controlled current PSUs at the LHC will be basically similar but larger. I think, though I wouldn't swear to it, that the PSUs for some if not all of the LHC magnets were built by Ocem.
{ "domain": "physics.stackexchange", "id": 35641, "tags": "electric-current, superconductivity, large-hadron-collider, experimental-technology" }
How many shots should one take to get a reliable estimate in a quantum program?
Question: When testing my quantum programs, I wonder how many shots I must take to get a specific accuracy. Are there any papers that you can recommend that analyze this? Answer: This depends on what algorithm you are executing. For example, if you look at the Bernstein-Vazirani algorithm, then theoretically the number of shots you need is 1 if you have a perfect quantum computer. This is because the end result that you are looking for collapses onto a single eigenstate. However, because of noise, we would like to do more. In terms of how many more, this depends on the device. Not all devices have the same noise level. You can actually look at the gate fidelities on the machines (at least you can do that for IBM's machines) and work out the probability that your circuit will fail, and from there determine the number of shots you might want to use. However, without doing that analysis, you can just be on the safe side by using the maximum number of shots available to you. For IBM's machines this works out to be 8192 shots. However, you can run multiple jobs to collect the statistics if you think you need more shots than this. Now, of course, if your quantum circuit is too long, the errors will build up so much that your circuit is "essentially" guaranteed to fail, and then additional shots will not help. The above discussion is for quantum algorithms that only need $O(1)$ shots/samples. This is not true for algorithms like VQE. For VQE, we need to do $O(1/\epsilon^2)$ shots to achieve an error of $\epsilon$. For chemical accuracy, which is $\epsilon = 10^{-3}$, you would need to do ~$10^6$ shots at each iteration... which is insane! Now if you just create a random circuit, which outputs a state that could be in a superposition of $2^n$ basis states, then you will need $O(2^n)$ shots to be able to extract the output probabilities. This is why designing a quantum algorithm is like an art.
Therefore, you can't just do some operations on a quantum computer and claim that, because you did it on a quantum computer, it must be faster than doing it classically. You do not have access to the state of the qubits! This is also the reason why you can't just perform the Quantum Fourier Transform (essentially doing the Discrete Fourier Transform on a quantum computer) and claim that you have a speed-up. Although the QFT does give you an exponential speed-up over the DFT or even the FFT (Fast Fourier Transform), doing it blindly will output a generic quantum state that is in a superposition of $2^n$ basis states... Extracting this state would take an exponential number of samples, which then defeats the entire purpose.
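The $O(1/\epsilon^2)$ sampling cost mentioned above is ordinary binomial statistics and can be sketched classically. The helper names below (`shots_for_precision`, `estimate_probability`) are illustrative, not from any quantum SDK:

```python
import math
import random

def shots_for_precision(epsilon, p=0.5, z=1.96):
    """Shots needed so a binomial estimate of an outcome probability p has
    confidence-interval half-width ~epsilon at confidence level z (worst
    case p = 0.5). Illustrates the O(1/epsilon^2) cost: halving epsilon
    quadruples the shot count."""
    return math.ceil(z**2 * p * (1 - p) / epsilon**2)

def estimate_probability(p_true, shots, seed=0):
    """Simulate `shots` independent measurements of an outcome with true
    probability p_true and return the empirical frequency."""
    rng = random.Random(seed)
    hits = sum(rng.random() < p_true for _ in range(shots))
    return hits / shots

# A chemical-accuracy-style target of epsilon ~ 1e-3 needs ~10^6 shots.
n = shots_for_precision(1e-3)
est = estimate_probability(0.3, n)
```

With `n` on the order of $10^6$, the empirical frequency lands within about $10^{-3}$ of the true probability, matching the scaling the answer quotes for VQE-style expectation estimates.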
{ "domain": "quantumcomputing.stackexchange", "id": 2069, "tags": "programming" }
Get special sequence of integer numbers
Question: Task: Get an array of random integer numbers (each less than 10) and a random sequence of mathematical operations (multiplication and addition) which, when applied, will give an even number. Example: genOddSeq(4) -> [[6, 6, 6, 9], ['+', '*', '*']] (6 + 6 * 6 * 9 = 330) Code:

function genOddSeq(count) {
    let getRandomInt = (min, max) => Math.floor(Math.random() * (Math.ceil(max) - Math.ceil(min))) + Math.ceil(min)
    let result = []
    let operations = []
    let numbersComp = 0;
    for (let i = 0; i < count; i++) {
        result.push(getRandomInt(1, 10))
    }
    for (let i = 0; i < count - 1; i++) {
        operations.push(Math.random() > 0.5 ? '+' : '*')
    }
    numbersComp += operations[0] == '+' ? result[0] + result[1] : result[0] * result[1]
    for (let i = 1; i < count - 1; i++) {
        numbersComp = operations[i] == '+' ? numbersComp + result[i + 1] : numbersComp * result[i + 1]
    }
    if (numbersComp % 2 != 0) {
        result[getRandomInt(0, count)] += 1
    }
    return [result, operations]
}

Answer: Your function name is strange. You wanted to get a sequence of random numbers and operations whose result is an even number, but you named your function genOddSeq. Also, avoid abbreviations. Write code for the human, not the machine. Instead of using an empty array and a loop to fill that array, consider using Array(), Array.fill() and array.map(). For example, creating an array of 10 random values from 0 to 1 would look like:

const tenRandomDecimals = Array(10).fill().map(Math.random)

You can do the same for the operations. Now instead of hard-coding the logic of the operation selection inside a ternary, consider using a key-value pair where the keys are + and * and the values are functions that accept two numbers and do the actual operation. When you generate an array of random + and *, just use them to look up the actual function in the key-value pair.

if (numbersComp % 2 != 0) {
    result[getRandomInt(0, count)] += 1
}

This bit of logic is strange. Do you really need to pick a random element?
Or can you just use one from a known location, like the first number? Last but not least, tests. It's always good practice to start with tests. Even other code review posts include a minimal form of test to verify the code is working. Anyway, here's how I'd implement it. A bit longer, but there are no surprises: no state held, no variables to keep track of. It also uses const, both as a signal and a guarantee that the value will not change during runtime.

const operationFunctions = {
    '+': (a, b) => a + b,
    '*': (a, b) => a * b,
}
const operationKeys = Object.keys(operationFunctions)

const getRandomInteger = (min, max) =>
    Math.floor(Math.random() * (Math.ceil(max) - Math.ceil(min))) + Math.ceil(min)

const getRandomOperationKey = () =>
    operationKeys[Math.floor(Math.random() * operationKeys.length)]

const range = size => Array(size).fill().map((v, i) => i)

const getEvenResultSequence = count => {
    const firstNumber = getRandomInteger(1, 10)
    const numbers = range(count - 1).map(() => getRandomInteger(1, 10))
    const operations = range(count - 1).map(getRandomOperationKey)
    const result = operations.reduce((c, v, i) => operationFunctions[v](c, numbers[i]), firstNumber)
    const adjustedFirstNumber = result % 2 ? firstNumber + 1 : firstNumber
    return [[adjustedFirstNumber, ...numbers], operations]
}

console.log(getEvenResultSequence(4))
console.log(getEvenResultSequence(4))
console.log(getEvenResultSequence(4))
console.log(getEvenResultSequence(4))
console.log(getEvenResultSequence(4))
{ "domain": "codereview.stackexchange", "id": 26883, "tags": "javascript, random" }