Designing yet another coffee machine, Lombok style
Question: After reading Designing a coffee machine, I decided to implement the same problem as an exercise to get to know Guava and Lombok. I used the problem statement from the given question: Design a coffee machine which makes different beverages based on set ingredients. The initialization of the recipes for each drink should be hard-coded, although it should be relatively easy to add new drinks. The machine should display the ingredient stock (+cost) and menu upon startup, and after every piece of valid user input. Drink cost is determined by the combination of ingredients. For example, Coffee is 3 units of coffee (75 cents per), 1 unit of sugar (25 cents per), 1 unit of cream (25 cents per). Ingredients and Menu items should be printed in alphabetical order. If the drink is out of stock, it should print accordingly. If the drink is in stock, it should print "Dispensing: ". To select a drink, the user should input a relevant number. If they submit "r" or "R" the ingredients should restock, and "q" or "Q" should quit. Blank lines should be ignored, and invalid input should print an invalid input message. They supplied the default ingredients (&stock @10) and drinks/recipes. In my version, I attempted to do the following: Make everything immutable that is logical to be immutable Separate the user-io from the data structures as much as possible, so that designing a different user interface becomes easy. Make object creation as painless as possible through use of the builder pattern I'm not sure if the builder pattern was the right way to go, however. 
Main.java package coffee; import com.google.common.collect.Range; import java.io.BufferedReader; import java.io.IOException; import java.io.InputStreamReader; import java.util.List; import java.util.SortedMap; public class Main { public static void main(String[] args) { DrinkMachine drinkMachine = DrinkMachine.builder() .drink(Recipe.builder().name("Coffee") .ingredient(new IngredientListing("Coffee", Money.valueOf(0.75), 3)) .ingredient(new IngredientListing("Sugar", Money.valueOf(0.25), 1)) .ingredient(new IngredientListing("Cream", Money.valueOf(0.25), 1)) .build()) .drink(Recipe.builder().name("Decaf Coffee") .ingredient(new IngredientListing("Decaf Coffee", Money.valueOf(0.75), 3)) .ingredient(new IngredientListing("Sugar", Money.valueOf(0.25), 1)) .ingredient(new IngredientListing("Cream", Money.valueOf(0.25), 1)) .build()) .drink(Recipe.builder().name("Caffe Latte") .ingredient(new IngredientListing("Espresso", Money.valueOf(1.10), 2)) .ingredient(new IngredientListing("Steamed Milk", Money.valueOf(0.35), 1)) .build()) .drink(Recipe.builder().name("Caffe Americano") .ingredient(new IngredientListing("Espresso", Money.valueOf(1.10), 3)) .build()) .drink(Recipe.builder().name("Caffe Mocha") .ingredient(new IngredientListing("Espresso", Money.valueOf(1.10), 1)) .ingredient(new IngredientListing("Steamed Milk", Money.valueOf(0.35), 1)) .ingredient(new IngredientListing("Cocoa", Money.valueOf(0.90), 1)) .ingredient(new IngredientListing("Whipped Cream", Money.valueOf(1), 1)) .build()) .drink(Recipe.builder().name("Cappuccino") .ingredient(new IngredientListing("Espresso", Money.valueOf(1.10), 2)) .ingredient(new IngredientListing("Steamed Milk", Money.valueOf(0.35), 1)) .ingredient(new IngredientListing("Foamed Milk", Money.valueOf(0.35), 1)) .build()) .build(); boolean running = true; BufferedReader inputReader = new BufferedReader(new InputStreamReader(System.in)); do { printStock(drinkMachine.getStock()); printMenu(drinkMachine.getDrinks()); 
System.out.println("Which drink would you like to choose (q for quit)?"); String input; try { do { input = inputReader.readLine(); } while (input.trim().equals("")); } catch (IOException e) { throw new IllegalStateException("Unexpected IO error; couldn't read user input"); } if (input.equalsIgnoreCase("q")) { running = false; System.out.println("Goodbye!"); } else if (input.equalsIgnoreCase("r")) { System.out.println("Restocking ingredients."); drinkMachine.restock(); } else if (input.matches("\\d+")) { int chosenIndex = Integer.parseInt(input) - 1; if (Range.closedOpen(0, drinkMachine.getDrinks().size()).contains(chosenIndex)) { Recipe chosenDrink = drinkMachine.getDrinks().get(chosenIndex); if (drinkMachine.drinkInStock(chosenDrink)) { drinkMachine.makeDrink(chosenDrink); System.out.printf("Dispensing: %s%n", chosenDrink.getName()); } else { System.out.println("Drink out of stock. Consider restocking (r)"); } } else { System.out.println("Sorry, that wasn't a valid option."); } } else { System.out.println("Sorry, that wasn't a valid option."); } } while (running); } private static void printStock(SortedMap<Ingredient, Integer> stock) { stock.forEach((ingredient, stockCount) -> { System.out.printf("%s: %d%n", ingredient.getName(), stockCount); }); System.out.println(); } private static void printMenu(List<Recipe> drinks) { System.out.println("Menu:"); for (int i = 0; i < drinks.size(); i++) { System.out.printf("%d: %s %s%n", i + 1, drinks.get(i).getPrice(), drinks.get(i).getName()); } System.out.println("r: Restock"); System.out.println("q: Quit"); System.out.println(); } } DrinkMachine.java package coffee; import com.google.common.collect.ImmutableList; import lombok.*; import java.util.*; import static com.google.common.base.Preconditions.checkArgument; @Data public class DrinkMachine { private static final int DEFAULT_STOCK_VALUE = 10; private final ImmutableList<Recipe> drinks; private final SortedMap<Ingredient, Integer> stock; private Money internalCash; 
@Builder public DrinkMachine(@Singular("drink") @NonNull List<Recipe> drinks) { drinks = new ArrayList<>(drinks); drinks.sort(Recipe.NAME_COMPARATOR); this.drinks = ImmutableList.copyOf(drinks); this.stock = new TreeMap<>(Ingredient.NAME_COMPARATOR); restock(); this.internalCash = Money.ZERO; } public void restock() { restock(DEFAULT_STOCK_VALUE); } public void restock(int toValue) { for (Recipe recipe : drinks) { recipe.getIngredientList().stream().map(IngredientListing::getIngredient).forEach((ingredient -> { stock(ingredient, DEFAULT_STOCK_VALUE); })); } } public void stock(Ingredient ingredient, int stockCount) { stock.put(ingredient, stockCount); } public SortedMap<Ingredient, Integer> getStock() { return Collections.unmodifiableSortedMap(stock); } public boolean drinkInStock(Recipe recipe) { return recipe.getIngredientList().stream() .allMatch(listing -> stock.getOrDefault(listing.getIngredient(), 0) >= listing.getNumber()); } public void makeDrink(Recipe recipe) { reduceStockByRecipe(recipe); this.internalCash = this.internalCash.add(recipe.getPrice()); } private void reduceStockByRecipe(Recipe recipe) { checkArgument(drinkInStock(recipe), "Not enough ingredients to make drink %s", recipe); for (IngredientListing listing : recipe.getIngredientList()) { stock.compute(listing.getIngredient(), (key, value) -> value - listing.getNumber()); } } } Recipe.java package coffee; import com.google.common.collect.ImmutableList; import lombok.Builder; import lombok.NonNull; import lombok.Singular; import lombok.Value; import java.util.*; import static com.google.common.base.Preconditions.checkArgument; @Value @Builder public class Recipe { public static final Comparator<Recipe> NAME_COMPARATOR = (a, b) -> a.getName().compareTo(b.getName()); private final String name; @Singular("ingredient") private final ImmutableList<IngredientListing> ingredientList; @java.beans.ConstructorProperties({ "name", "ingredientList" }) public Recipe(@NonNull String name, @NonNull 
List<IngredientListing> ingredientList) { this.name = name; validateIngredientList(ingredientList); ingredientList = new ArrayList<>(ingredientList); ingredientList.sort(IngredientListing.NAME_COMPARATOR); this.ingredientList = ImmutableList.copyOf(ingredientList); } public Money getPrice() { return ingredientList.stream() .map(IngredientListing::getPrice) .reduce(Money::add).get(); } private void validateIngredientList(List<IngredientListing> ingredientList) { Set<Ingredient> knownIngredients = new HashSet<>(); for (IngredientListing listing : ingredientList) { checkArgument(!knownIngredients.contains(listing.getIngredient()), "Ingredient %s was declared multiple times in the recipe", listing.getIngredient().getName()); knownIngredients.add(listing.getIngredient()); } } } Ingredient.java package coffee; import lombok.EqualsAndHashCode; import lombok.Value; import java.util.Comparator; @Value @EqualsAndHashCode(exclude = { "price" }) // Consider differently priced versions of the same ingredient to be the same public class Ingredient { public static final Comparator<Ingredient> NAME_COMPARATOR = (a, b) -> a.getName().compareTo(b.getName()); public static final Comparator<Ingredient> PRICE_COMPARATOR = (a, b) -> a.getPrice().compareTo(b.getPrice()); private String name; private Money price; } IngredientListing.java package coffee; import lombok.AllArgsConstructor; import lombok.NonNull; import lombok.Value; import java.util.Comparator; @Value @AllArgsConstructor public class IngredientListing { public static final Comparator<IngredientListing> NAME_COMPARATOR = (a, b) -> Ingredient.NAME_COMPARATOR.compare(a.ingredient, b.ingredient); public static final Comparator<IngredientListing> PRICE_COMPARATOR = (a, b) -> a.getPrice().compareTo(b.getPrice()); @NonNull private Ingredient ingredient; private int number; public IngredientListing(@NonNull String name, @NonNull Money price, int number) { ingredient = new Ingredient(name, price); this.number = number; } public Money 
getPrice() { return ingredient.getPrice().multiply(number); } } Money.java package coffee; import lombok.NonNull; import lombok.Value; import lombok.val; import java.math.BigInteger; import static com.google.common.base.Preconditions.checkArgument; @Value public class Money implements Comparable<Money> { public static final Money ZERO = Money.valueOf(0); private static final String MONEY_REGEX = "\\$?\\d+(\\.\\d\\d)?" + "|\\$?(\\.\\d\\d)?"; @NonNull private BigInteger value; public Money multiply(int num) { return new Money(value.multiply(BigInteger.valueOf(num))); } public Money add(@NonNull Money m) { return new Money(value.add(m.getValue())); } public Money subtract(@NonNull Money m) { return new Money(value.subtract(m.getValue())); } public String toString() { val x = value.divideAndRemainder(BigInteger.valueOf(100)); return "$" + x[0] + "." + String.format("%02d", x[1].intValue()); } public static Money parseMoney(@NonNull String representation) { checkArgument(representation.matches(MONEY_REGEX), "Invalid money specifier for \"%s\"", representation); return new Money(new BigInteger(representation.replaceAll("\\D", ""))); } public static Money valueOf(@NonNull String representation) { return parseMoney(representation); } public static Money valueOf(double value) { return parseMoney(String.format("%.2f", value)); } @Override public int compareTo(Money money) { return getValue().compareTo(money.getValue()); } } Answer: Money I'm not exactly sure why you need a String-based valueOf() method here, when you are only using the other one that takes double values. Wouldn't it be simpler to just have your Money class accept cents as the base unit, and then set BigInteger value with BigInteger.valueOf()? Recipe and the builder pattern Lombok's builder pattern removes boilerplate code at the (slight) expense of flexibility in expressiveness. 
What I mean by that is that while it's nice for it to readily provide the name() and ingredient() methods, sometimes you may encounter other patterns that: Use an external -Builder class, i.e. RecipeBuilder in your case, Use verbs to describe the action, i.e. addIngredient() instead of ingredient() (I understand this is just a simple change on your part), Specify the name at the final build 'step', e.g. RecipeBuilder.addIngredient(...).create("name"). My take on this is that there are really no issues with using Lombok's builder pattern, but at the same time don't be too limited by what you can do through a framework. :) I think Recipe.validateIngredientList() can be made redundant if you were to work with a Set/ImmutableSet directly, instead of a List. Even if you would like to stick to this current approach, this method can be rewritten as such to leverage the return value from Set.add, which is true only if the addition modifies the Set: private void validateIngredientList(List<IngredientListing> ingredientList) { Set<Ingredient> knownIngredients = new HashSet<>(); for (IngredientListing listing : ingredientList) { checkArgument(knownIngredients.add(listing.getIngredient()), "Ingredient %s was declared multiple times in the recipe", listing.getIngredient().getName()); } } IngredientListing You can consider using enum types for your various ingredients, if you are fine with defining the 'universe' of ingredients first. Comparator Since you are already using a healthy dose of Java 8 features, there's the newer Comparator.comparing() method that (arguably) makes these declarations more readable, e.g. 
@Value @EqualsAndHashCode(exclude = { "price" }) public class Ingredient { public static final Comparator<Ingredient> NAME_COMPARATOR = Comparator.comparing(Ingredient::getName); public static final Comparator<Ingredient> PRICE_COMPARATOR = Comparator.comparing(Ingredient::getPrice); private String name; private Money price; } Now, I must say this is totally untested as I don't have Lombok... maybe the method reference way of declaration might not work. Main I recommend putting the declaration of your drinks into its own method, e.g. private static DrinkMachine createDrinks(). Reading from the console can be done and validated in its own method, e.g. private static String getInput(Scanner). Short of turning your collection of drinks into an enum type so that you can reference their ordinal() methods, here is another alternative implementation of displaying it that you may want to consider: private static void printMenu(List<Recipe> drinks) { System.out.println("Menu:"); int[] counter = new int[1]; drinks.forEach(drink -> System.out.printf("%d: %s %s%n", ++counter[0], drink.getPrice(), drink.getName())); System.out.println("r: Restock"); System.out.println("q: Quit"); System.out.println(); } The trick here is to rely on a single-element int[] array (effectively making a stream-friendly counting Object) as an index for the drink to be displayed.
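The cents-as-base-unit suggestion from the Money section above can be sketched as follows (in Python rather than Java, purely as a hedged illustration; the class and method names mirror the original but are otherwise hypothetical):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Money:
    """Immutable money value stored as integer cents."""
    cents: int

    def add(self, other):
        return Money(self.cents + other.cents)

    def multiply(self, n):
        return Money(self.cents * n)

    def __str__(self):
        # Same "$d.cc" rendering as the original toString()
        return "${}.{:02d}".format(self.cents // 100, self.cents % 100)

# 3 units of coffee at 75 cents, plus sugar and cream at 25 cents each
price = Money(75).multiply(3).add(Money(25)).add(Money(25))  # $2.75
```

Storing cents directly sidesteps both the regex-based string parsing and the double-formatting round trip of the original valueOf(double).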
{ "domain": "codereview.stackexchange", "id": 15276, "tags": "java, object-oriented, interview-questions, guava, lombok" }
Simple string compression reloaded
Question: Inspired by this question I thought I'd provide my implementation. I tried to go with the spirit of the *nix tool chain - read from stdin and write to stdout. This has the added benefit of making buffering very easy (current and previous characters and the count). All kinds of reviews welcome (best practices, error handling, weird edge cases, potential bugs or other pitfalls). #include <stdio.h> #include <stdbool.h> #include <stdint.h> void write_char(int c) { if (EOF == putchar(c)) { if (ferror(stdout)) { perror("error writing char to stdout"); exit(EXIT_FAILURE); } } } void write_count(uint64_t count) { if (printf("%ull", count) < 0) { perror("error writing character count to stdout"); exit(EXIT_FAILURE); } } int main(int argc, char** argv) { int current_char = 0; int previous_char = 0; uint64_t current_char_count = 0; while (EOF != (current_char = getchar()) { if (current_char_count == 0 || current_char_count == UINT64_MAX || previous_char != current_char) { if (current_char_count > 0) { write_count(current_char_count); } write_char(current_char); current_char_count = 1; previous_char = current_char; } else { current_char_count += 1; } } } Answer: Compressor number or real When you are write_counting, you are writing the ASCII number characters to the new file. However, when you go to decompress this file, how are you going to differentiate between the actual content in the file and the numbers that mark the occurrences of a character? A possible solution for this might be to just write the number itself to the file (no ASCII). That way, when you encounter a number that is ASCII, you can be almost sure that the number is part of the content (that is, unless there was a letter that occurred so many times in a row that the counter rose into the '0'-'9' range). Two ones or twelve? This is kind of a continuation from the top one. Let's say your compressor went to compress this file: 12 Now, I am ready to decompress it. 
Since your compressor writes a number to show occurrences of a character, the output would be this: 1121 How do I know if all of those numbers are part of the content? The only fix I can think of, unfortunately, would be to follow the above tip and write 0x01 instead of an ASCII number. Misc while (EOF != (current_char = getchar()) You are missing a closing parenthesis here. if (printf("%ull", count) < 0) When compiling your code, I get this on this line: warning: format ‘%u’ expects argument of type ‘unsigned int’, but argument 2 has type ‘uint64_t’ [-Wformat=] This also reveals that two literal l characters are printed after the count: %ull is parsed as %u followed by the text ll. The portable format specifier for uint64_t is "%" PRIu64 from <inttypes.h>.
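The ambiguity described above is easy to reproduce with a quick sketch (in Python for brevity; this mirrors the question's character-then-ASCII-count scheme, with the final count flushed at end of input, which the "1121" example assumes):

```python
def rle_compress(text):
    # Mirror of the question's scheme: emit the character, then its
    # repeat count in ASCII (with the trailing count flushed at EOF).
    out = []
    prev, count = None, 0
    for c in text:
        if c != prev:
            if count:
                out.append(str(count))
            out.append(c)
            prev, count = c, 1
        else:
            count += 1
    if count:
        out.append(str(count))
    return "".join(out)
```

Here rle_compress("12") yields "1121" — exactly the stream from the example — and a decompressor reading "1121" cannot tell whether it encodes "12" or was literal content (the content "1121" itself compresses to "122111"). Writing the count as a raw byte such as 0x01 instead of ASCII digits resolves the ambiguity.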
{ "domain": "codereview.stackexchange", "id": 17734, "tags": "c, compression" }
Using Parsec for simple arithmetic language
Question: I've been reading Types and Programming Languages and I wanted to try to implement the first language in Haskell to understand it properly I have barely written any Haskell before and not used Parsec so I would be grateful for any feedback on this code Here are some specific points I am unsure about Is my main function sensible? Can the eval function be expressed any better? I'm unhappy with functionParser and ifParser. How can I code these better. In particular can the ifParser be coded in an applicative style? If there's anything else you consider odd don't hesitate to mention it import Control.Monad import System.Environment import Text.ParserCombinators.Parsec data Term = TmTrue | TmFalse | TmIf Term Term Term | TmZero | TmSucc Term | TmPred Term | TmIsZero Term | TmError deriving Show main :: IO[()] main = do args <- getArgs forM args (\arg -> case parseArith arg of Left e -> print e Right term -> print $ eval term) isNumerical :: Term -> Bool isNumerical term = case term of TmZero -> True TmSucc subterm -> isNumerical subterm TmPred subterm -> isNumerical subterm _ -> False eval :: Term -> Term eval term = case term of TmTrue -> TmTrue TmFalse -> TmFalse TmZero -> TmZero TmIf term1 term2 term3 -> case eval term1 of TmTrue -> eval term2 TmFalse -> eval term3 _ -> TmError TmIsZero subterm -> case eval subterm of TmZero -> TmTrue t2 | isNumerical t2 -> TmFalse _ -> TmError TmPred TmZero -> TmZero TmPred (TmSucc subterm) -> eval subterm TmSucc subterm -> case eval subterm of t2 | isNumerical t2 -> TmSucc t2 _ -> TmError _ -> TmError parseArith :: String -> Either ParseError Term parseArith input = parse arithParser "Failed to parse arithmetic expression" input arithParser :: GenParser Char st Term arithParser = try( ifParser ) <|> try( succParser ) <|> try( predParser ) <|> try( isZeroParser ) <|> try( trueParser ) <|> try( falseParser ) <|> try( zeroParser ) trueParser :: GenParser Char st Term trueParser = string "true" >> return TmTrue falseParser :: 
GenParser Char st Term falseParser = string "false" >> return TmFalse zeroParser :: GenParser Char st Term zeroParser = char '0' >> return TmZero functionParser :: String -> (Term -> Term) -> GenParser Char st Term functionParser name funcTerm = do string $ name ++ "(" term <- arithParser char ')' return $ funcTerm term succParser :: GenParser Char st Term succParser = functionParser "succ" TmSucc predParser :: GenParser Char st Term predParser = functionParser "pred" TmPred isZeroParser :: GenParser Char st Term isZeroParser = functionParser "iszero" TmIsZero ifParser :: GenParser Char st Term ifParser = do string "if" spaces term1 <- arithParser spaces string "then" spaces term2 <- arithParser spaces string "else" spaces term3 <- arithParser return $ TmIf term1 term2 term3 Answer: Where you used case you could have used pattern matching. This yields more idiomatic code in many cases. isNumerical :: Term -> Bool isNumerical TmZero = True isNumerical (TmSucc subterm) = isNumerical subterm isNumerical (TmPred subterm) = isNumerical subterm isNumerical _ = False eval :: Term -> Term eval TmTrue = TmTrue eval TmFalse = TmFalse eval TmZero = TmZero eval (TmIf term1 term2 term3) = evalIf (eval term1) term2 term3 eval (TmIsZero subterm) = evalIsZero $ eval subterm eval (TmPred subterm) = evalPred $ eval subterm eval (TmSucc subterm) = evalSucc $ eval subterm eval TmError = TmError evalIf :: Term -> Term -> Term -> Term evalIf TmTrue a _ = eval a evalIf TmFalse _ b = eval b evalIf _ _ _ = TmError evalIsZero :: Term -> Term evalIsZero TmZero = TmTrue evalIsZero term | isNumerical term = TmFalse | otherwise = TmError evalPred :: Term -> Term evalPred TmZero = TmZero evalPred (TmSucc subterm) = subterm evalPred _ = TmError evalSucc :: Term -> Term evalSucc term | isNumerical term = TmSucc term | otherwise = TmError Note the complex terms are farmed off to their own functions. This makes testing easier. Especially as the runtime gets more complex. You asked about main's type. 
If you use forM_ you can define it as main :: IO (). As for Applicative: functionParser :: String -> (Term -> Term) -> GenParser Char st Term functionParser name funcTerm = funcTerm <$> (string (name ++ "(") *> arithParser <* char ')') ifParser :: GenParser Char st Term ifParser = TmIf <$> (string "if" *> spaces *> arithParser) <*> (spaces *> string "then" *> spaces *> arithParser) <*> (spaces *> string "else" *> spaces *> arithParser) Also, the last 'try' should not be used: 'try' attempts a parse and, on failure, backtracks without consuming input. Since the last alternative is the final one attempted, there is no later parser that could use the restored input, so that 'try' is redundant.
{ "domain": "codereview.stackexchange", "id": 6750, "tags": "beginner, haskell, parsec" }
Different distances to comet 67P
Question: Earlier, I was looking for the current distance from Earth to 67P/Churyumov-Gerasimenko, in order to do a "back of envelope" calculation of signal delay. Not important, but I was somewhat puzzled to find quite a large discrepancy between values reported by different sites, and would love to understand why. ESA's Where is Rosetta page currently gives the distance as around 719m km, which, if I'm using it correctly, is backed up by JPL Horizons tool, which gives me a delta of around 4.81 AU. Which is great. But when I look at The Sky Live's 67P live tracker, it is reporting around 941m km, which their comet info page gives as around 6.3 AU. Am I missing something blindingly obvious? Answer: I suspect that The Sky Live got their comet IDs mixed up. The orbital elements show perihelia in Aug 2015 and Jan 2022, aphelion 5.69 AU, period 6.45 years, consistent with other sources e.g. Minor Planet Center. However, the distance chart shows a perihelion in mid 2018, aphelion about 11 AU, period about 16 years, clearly not the same object. Report the bug and they'll probably fix it.
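For the back-of-envelope signal delay the questioner mentions, the two quoted distances give noticeably different answers (a quick sketch; 719 and 941 million km are the figures from the question):

```python
C_KM_S = 299_792.458  # speed of light in km/s

def one_way_delay_min(distance_km):
    # light-travel time in minutes
    return distance_km / C_KM_S / 60

esa = one_way_delay_min(719e6)       # ~40 minutes (ESA / JPL figure)
sky_live = one_way_delay_min(941e6)  # ~52 minutes (The Sky Live figure)
```

So the ~222 million km discrepancy translates to about 12 minutes of one-way signal delay — far from negligible for this calculation.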
{ "domain": "astronomy.stackexchange", "id": 1867, "tags": "distances, 67p" }
Determining if a file or URL is a git repo or archive
Question: I'm getting these three offenses: libraries/helpers.rb:4:1: C: Method has too many lines. [11/10] def get_files(url, file, destination) ^^^ libraries/helpers.rb:18:1: C: Assignment Branch Condition size for unpack_archive is too high. [16.52/15] def unpack_archive(url, file, destination) ^^^ libraries/helpers.rb:18:1: C: Method has too many lines. [16/10] def unpack_archive(url, file, destination) These are all length based offenses. Is there any way to refactor to meet the style guidelines? # Takes a file and or a url decides if it is a git repo or an archive. # Downloads or clones it. to the destination. # Do not enter url with trailing slash. def get_files(url, file, destination) case File.extname(file) when '.tar.gz', '.zip' then unpack_archive(url, file, destination) when '.git' git 'destination' do repository "#{url}/#{file}" reference 'master' user node['apache']['user'] action :sync end else fail ArgumentError, "dunno how to handle #{file}" end end def unpack_archive(url, file, destination) remote_file "#{Chef::Config['file_cache_path'] || '/tmp'}/#{file}" do owner 'root' group 'root' mode '0644' source "#{url}/#{file}" not_if File.readable?(file) end extract = case File.extname(file) when '.tar.gz' then "tar-xC #{destination} -f #{file};" when 'zip' then "unzip -d #{destination} -qo #{file};" end bash "Extract #{file}" do cwd ::File.dirname("#{Chef::Config['file_cache_path'] || '/tmp'}/#{file}") code extract not_if { ::File.readable(file) } end end Answer: Well the easiest option is to extract parts into separate methods, no? In unpack_archive the handling of File.extname seems buggy for zip files? At least it's different to the same case statement in get_files. File.readable? is used, but the other case in bash is missing the question mark. I don't know enough about Ruby and Chef, but I also saw that the not_if arguments are specified in two different ways; maybe it's possible to use the same way for both directives? 
With two additional functions it looks a bit less redundant and more to the point, though I guess there's still room for further improvements: # Takes a file and or a url decides if it is a git repo or an archive. # Downloads or clones it. to the destination. # Do not enter url with trailing slash. def get_files(url, file, destination) case File.extname(file) when '.tar.gz', '.zip' then unpack_archive(url, file, destination) when '.git' then git_clone(url, file) else fail ArgumentError, "dunno how to handle #{file}" end end def git_clone(url, file) git 'destination' do repository "#{url}/#{file}" reference 'master' user node['apache']['user'] action :sync end end def get_extract_command(file, destination) case File.extname(file) when '.tar.gz' then "tar -xC #{destination} -f #{file};" when '.zip' then "unzip -d #{destination} -qo #{file};" else fail ArgumentError, "dunno how to extract #{file}" end end def unpack_archive(url, file, destination) directory = "#{Chef::Config['file_cache_path'] || '/tmp'}/#{file}" remote_file directory do owner 'root' group 'root' mode '0644' source "#{url}/#{file}" not_if File.readable?(file) end extract = get_extract_command(file, destination) bash "Extract #{file}" do cwd ::File.dirname(directory) code extract not_if { ::File.readable?(file) } end end
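One more hedged aside on the extension dispatch, illustrated in Python since the behaviour is analogous: extname-style helpers typically return only the final suffix, so if Ruby's File.extname behaves like Python's os.path.splitext here (returning ".gz" for "foo.tar.gz"), the '.tar.gz' branch may never match and suffix matching would be safer — worth double-checking. The function name below is hypothetical:

```python
import os

def archive_kind(filename):
    # Match on the full multi-part suffix so ".tar.gz" is recognised;
    # a single-extension helper would only ever see ".gz".
    if filename.endswith(".tar.gz") or filename.endswith(".zip"):
        return "archive"
    if filename.endswith(".git"):
        return "git"
    raise ValueError("dunno how to handle " + filename)

last_suffix = os.path.splitext("src.tar.gz")[1]  # ".gz", not ".tar.gz"
```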
{ "domain": "codereview.stackexchange", "id": 18788, "tags": "ruby, git" }
When to create a generator or return simply a list in Python?
Question: I have this kind of instance method: def list_records(self,race_type,year=0): yearnow= datetime.now().year yearlist = [yearnow - i for i in range(4)] if not year: for y in yearlist: if self.records: yield self.records[utils.dateracetype_to_DBcolumn(y,str(race_type))] else: yield None else: if self.records: yield self.records[utils.dateracetype_to_DBcolumn(year,str(race_type))] else: yield None I am calling it mainly through list comprehension, but I could return it simply as a list, what is the most pythonic way to do this? What would be the reasons to choose one method or the other? Let's say I'll go with a generator, should I turn the last two 'yield' statements into return values as there will be only one value to be returned? Is there a more optimized way to code this method, as I have the feeling there are too many nested conditions/loops in there? Is there a Python convention to name generators? Something like 'gen...'? Thanks Answer: If you are going to need the whole list for some reason, then make a list. But if you only need one item at a time, use a generator. The reasons for preferring generators are (i) you only need the memory for one item at a time; (ii) if the items are coming from a file or network connection or database then generators give you some parallelism because you can be processing the first item without having to wait for all items to arrive; (iii) generators can handle infinite streams of items. There is no special convention for naming generators in Python. It is best to name a function after the things it generates. 
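The trade-offs above can be seen in a tiny sketch: the list version materialises everything at once, while the generator produces items on demand and can even represent an unbounded stream (point (iii)):

```python
import itertools

def squares_list(n):
    # needs memory for all n results at once
    return [i * i for i in range(n)]

def squares():
    # yields one result at a time; the stream is unbounded,
    # which a list could never represent
    i = 0
    while True:
        yield i * i
        i += 1

first_four = list(itertools.islice(squares(), 4))  # [0, 1, 4, 9]
```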
Instead of: [yearnow - i for i in range(4)] write: range(yearnow, yearnow - 4, -1) The duplicated code could be avoided like this: if year: year_range = (year,) else: year_now = datetime.now().year year_range = range(year_now, year_now - 4, -1) for y in year_range: if self.records: yield self.records[utils.dateracetype_to_DBcolumn(y, str(race_type))] else: yield None (But I don't recommend implementing it like this, as I'll explain below.) It is conventional to use None as the default value for an argument like year. A default like year=0 might be confused with a legitimate value for year. For example, in astronomical year numbering there is a year zero, and how would you pass this to your function? Instead of yielding an exceptional value (here None) when there are no results to generate, just generate no results. The problem with exceptional values is that the caller might forget to check for them, whereas it is usually straightforward to handle the case of no results. I don't think it's a good idea to use datetime.now().year for the case where the caller doesn't specify a year. The problem is that close to midnight on 31st December, there is a race: the current year might be 2018 just before the call to list_records happens, and 2019 just after. What should list_records return in this case? The Zen of Python says, In the face of ambiguity, refuse the temptation to guess. so it would be better to avoid this problem by requiring the caller to specify the year. The function does two different things: it generates the record for a year (if a year was specified), or it generates records from the last four years (if no year was specified). The single responsibility principle suggests that these two things should be implemented by two functions: def year_record(self, race_type, year): """Return the record for race_type and year. Raise IndexError if there are no records. 
""" return self.records[utils.dateracetype_to_DBcolumn(year, str(race_type))] def recent_records(self, race_type, year, n=4): """Generate the records for race_type for the n years up to (and including) year, in reverse order. """ for y in range(year, year - n, -1): try: yield self.year_record(race_type, y) except IndexError: pass If the caller needs to know whether there was a record for each year, then I recommend generating the necessary data explicitly, like this: def recent_records(self, race_type, year, n=4): """Generate the records for race_type for the n years up to (and including) year, in reverse order. For each year, generate a tuple (year, have_record, record) where have_record is True if there is a record for the given year, or False if not. """ for y in range(year, year - n, -1): try: yield year, True, self.year_record(race_type, y) except IndexError: yield year, False, None Then the caller can write: for year, have_record, record in self.recent_records(...): if have_record: # ...
{ "domain": "codereview.stackexchange", "id": 31171, "tags": "python, generator" }
How can I get the uncertainties for peaks on an image?
Question: When picking the peak points on an image, e.g. the matrix made by peaks in MATLAB as this one, I can use max to get the index of the peaks. But how can I also get the uncertainties of the index for the peaks? For example, the max peak of the red spot could be located at (ix,iy)=(25,37). How do I get the uncertainties like (25±3,37±2)? Answer: One way is to simply model each peak with a Gaussian, with mean $\mu_i$ and standard deviation $\sigma_i$. In fact what you mean by uncertainty corresponds to the standard deviation. You can iteratively fit Gaussians using e.g. the EM algorithm. In MATLAB you could easily do this with built-in fitting functions: https://www.mathworks.com/help/curvefit/gaussian.html Alternatively, you could use the mean-shift algorithm to find the modes in the data, from which you could extract the probabilities: http://de.mathworks.com/matlabcentral/fileexchange/39079-mean-shift-for-finding-modes And finally, we might want to think of the uncertainty as the steepness of the maximum. In that regard, the magnitude of the first derivative would give us how strong the peak is. Then, a first order derivative, or a curvature filter can characterize the uncertainty. This is also doable in MATLAB with simple finite difference approximation schemes.
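The spread-of-the-peak idea in the answer can be sketched without any fitting toolbox (a NumPy illustration, not the answer's MATLAB route): treat the image intensities as weights and take, per axis, the weighted mean as the peak location and the weighted standard deviation as its ± uncertainty.

```python
import numpy as np

def peak_with_uncertainty(img):
    # Use intensity as a weight and compute, per axis, the weighted
    # centroid and weighted standard deviation -- the latter plays
    # the role of the +/- uncertainty on the peak index.
    img = np.asarray(img, dtype=float)
    img = img - img.min()            # weights must be non-negative
    ys, xs = np.indices(img.shape)
    w = img / img.sum()
    mu_y = (w * ys).sum()
    mu_x = (w * xs).sum()
    sig_y = np.sqrt((w * (ys - mu_y) ** 2).sum())
    sig_x = np.sqrt((w * (xs - mu_x) ** 2).sum())
    return (mu_y, sig_y), (mu_x, sig_x)
```

On a synthetic Gaussian blob centred at (25, 37) with widths (3, 2), this recovers roughly (25±3, 37±2), matching the notation in the question. For several overlapping peaks you would first crop a window around each maximum before applying it.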
{ "domain": "dsp.stackexchange", "id": 4571, "tags": "matlab, image-processing, peak-detection, matrix" }
Took a picture of my laptop screen with my iPhone. The yellowish pattern in the image looks like magnetic lines. How is this possible?
Question: The pattern seems consistent with the magnetic force lines of a bar magnet. Answer: That looks like a Moire pattern to me. You have a camera with a grid of pixels on the imaging element and a screen with a grid of (colored) pixels. These elements don't line up exactly, so you get the odd patterns. Try taking another image with the camera slightly twisted along the lens axis or slightly angle the lens axis to the laptop. If it's a Moire, then the image will look quite different in both cases. And if I am understanding it right, these type of Moire patterns are made as a result of overlapping of curved grid lines No, they're similar to aliasing effects. Imagine taking a picture of a grid. The optics will put a picture of the grid onto the imaging sensor (CCD). At some distance/zoom, the lines of the grid will be almost exactly the same distance apart as the CCD elements. If the lines fall between the elements, they won't be seen easily. If they fall exactly on the elements, they are seen easily. But your laptop screen isn't all the same distance from the lens. It's flat, so the edges are farther than the center is. This means the angular separation of the LCD pixels changes from the center to the edge. This apparent change in the grid separation from one part of the image to another makes the bright/dark/bright/dark areas appear. They're curved because the apparent grid size changes with distance (so you get sort of circular patterns). Your original image looks to me to be a very lucky shot. The grid in the middle is free of much distortion over a large area. Pretty neat.
{ "domain": "physics.stackexchange", "id": 21983, "tags": "magnetic-fields, imaging" }
What factors might affect the reaction between potassium permanganate and ethylene glycol?
Question: Our chemistry club has, for the last few years, been showing off some reactions to prospective students. One of the most popular tricks we have is a bit of an explosive one: we mix some potassium permanganate with ethylene glycol in a small bottle. The permanganate oxidizes the glycol, and we usually end up with an intense, three-foot flame that shoots out of the bottle, followed by some popping, and finally, the whole thing burning to the ground. Usually. Recently, we've been using the exact same mixtures with the same ratios and getting wildly differing results. This entire week, all we've gotten is the glycol bubbling a little and maybe spitting out some of the permanganate, and not much more. The ratios we use are almost the same, so I'm not sure what could be causing this. Is this reaction sensitive to the age of the reactants? Ambient temperature? Humidity? What could cause the reaction to not work normally? P.S. In spite of the home-experiment tag, the chemicals used here are lab-grade (from Sigma-Aldrich), though I'd be interested in how purity might affect the reaction as well. Answer: Capsules of potassium permanganate injected with ethylene glycol are used as aerial ignition devices in bush and forest firefighting. They are considered a reasonably safe option because of the delayed response prior to ignition. They are sometimes known as dragon eggs or fireballs. The rate of the chemical reaction is very much dependent upon:

- the particle size of the potassium permanganate. Powdered permanganate (greater surface area) works much better than larger crystals.
- the concentration of the ethylene glycol. Dilute glycol may not lead to ignition at all. Perhaps your glycol solution is very wet.
- ambient temperature. Unless you are trying this in the middle of winter outside in the snow, this is unlikely to be the cause of disappointment.
{ "domain": "chemistry.stackexchange", "id": 2104, "tags": "home-experiment" }
A situation where validation accuracy is higher than test accuracy at its minimum loss region
Question: This case would be an example. Around epoch = 800, the validation loss (orange line) reaches its minimum region. Should we record the accuracy of the model as an average value from epoch 780 to 800? Or shall we take the accuracy after it becomes an almost straight line (after epoch = 1200)? If we record the accuracy at the minimum point of validation loss, the accuracy may not be the highest. For example, at epoch = 800, val_loss = 0.627, val_accuracy = 0.783; at epoch = 909, val_loss = 0.624, val_accuracy = 0.761. What is the best way to do it in such a situation? Thanks Answer: Assuming the goal is to give the true performance of the model, then it should be the performance obtained by applying the final model to the test set. The fact that the performance on the validation set can be observed during training is irrelevant. So the question becomes: which model should you select based on observing the performance on the validation set during training? You can select the model which gives the maximum accuracy, but the real performance is obtained when you evaluate it on a fresh test set. The second question depends on which performance measure you want to optimize: if you want to optimize accuracy, then you should pick the model which obtains the highest accuracy.
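To make the two selection rules concrete, here is a tiny sketch (illustrative numbers only, echoing the ones in the question) of checkpoint selection by minimum validation loss versus maximum validation accuracy; whichever rule you use to pick the checkpoint, the number you report should then come from the held-out test set.

```python
# Hypothetical per-epoch validation logs (not real training output).
val_loss = [0.700, 0.627, 0.624, 0.650]
val_acc  = [0.740, 0.783, 0.761, 0.750]

# checkpoint index chosen by each rule
best_by_loss = min(range(len(val_loss)), key=val_loss.__getitem__)
best_by_acc  = max(range(len(val_acc)),  key=val_acc.__getitem__)

# The two rules can pick different checkpoints, exactly as in the
# question (min loss at "epoch 909", max accuracy at "epoch 800").
```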
{ "domain": "datascience.stackexchange", "id": 9009, "tags": "machine-learning, deep-learning" }
ROS Fuerte node communicating with ROS Electric node
Question: Hi everyone, I've just started working with ROS Fuerte and I have a bunch of code written for ROS Electric. Would anyone please confirm whether nodes written for the Fuerte version can communicate with nodes written for the Electric version? BhanuKiran Originally posted by CBK on ROS Answers with karma: 28 on 2012-10-04 Post score: 0 Answer: Depends on what you mean exactly by 'can communicate with'. Code written for Electric can always be made to compile on Fuerte, provided that you take care of the necessary changes. Depending on the actual code that might not even be very difficult. I've had multiple packages I could just rosmake --pre-clean in a Fuerte shell and they 'just worked'. If you want to mix binary (as in: already compiled) nodes from Electric with new ones written for Fuerte, that is not going to work, or is at least really not recommended. Message formats could be different between the two, as well as ROS internals, which can (and will) result in either an incorrectly functioning system or (subtle) bugs. It can be done though, but it depends on the actual messages used, use of ROS libraries, etc. Originally posted by ipso with karma: 1416 on 2012-10-04 This answer was ACCEPTED on the original site Post score: 4 Original comments Comment by Mani on 2012-10-04: I think from the question it can be implied that the first part of your answer is what he is looking for. Comment by joq on 2012-10-05: Right. So, the answer is "don't do that". Recompile your Electric nodes for Fuerte.
{ "domain": "robotics.stackexchange", "id": 11238, "tags": "ros, ros-fuerte, ros-electric" }
Boolean search explained
Question: My mother is taking some online course in order to become a librarian of sorts. In this course they cover Boolean searches, so that they can search databases efficiently. However, she got a question sounding something like this: The search "x OR y" will result in 105 000 hits, while a search for only x will result in 80 000 hits, and a search for only y will get 35 000 hits. Why does the search "x OR y" give 105 000 hits, when the combined individual searches give 115 000 hits? For me this sounded strange, so I tested this myself, using the words bacon and sandwich. Only bacon yielded 179 000 000 results. Only sandwich yielded 312 000 000 results. bacon OR sandwich gave 491 000 000 results. But for me it adds up: 179 000 000 (bacon) + 312 000 000 (sandwich) = 491 000 000 (bacon OR sandwich). Why could an OR query result in fewer hits than both individual queries combined? Answer: The counting principle that applies here is inclusion-exclusion. $$ \left|X \cup Y\right| = \left|X\right| + \left|Y\right| - \left|X \cap Y \right|$$ To make the numbers work out, $\left|X \cap Y \right|$ must be 10 000. A Venn diagram may be more convincing to someone who may be intimidated by the notation.
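The identity is easy to check mechanically with sets; the ranges below are made-up document IDs chosen to reproduce the numbers in the question (80 000 hits for x, 35 000 for y, 10 000 shared).

```python
x = set(range(0, 80_000))          # documents matching search term x
y = set(range(70_000, 105_000))    # documents matching search term y

overlap = x & y                    # documents matched by BOTH terms
union = x | y                      # what the query "x OR y" returns

# inclusion-exclusion: |X u Y| = |X| + |Y| - |X n Y|
assert len(union) == len(x) + len(y) - len(overlap)
```

The 10 000 shared documents are counted once in the union but twice when the individual hit counts are simply added, which is exactly the 115 000 vs 105 000 gap.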
{ "domain": "cs.stackexchange", "id": 6658, "tags": "sets, counting" }
Why use Bellman-Ford instead of Dijkstra in RIP routing?
Question: The RIP routing protocol was published in 1988 and uses the Bellman-Ford algorithm to calculate shortest paths. The more recent versions of RIP (RIPv2 and RIPng) also use the same algorithm. Dijkstra's algorithm was published in 1959. Why does RIP use Bellman-Ford instead of Dijkstra? More generally, why do routing protocols prescribe a specific shortest-path algorithm; could they be parametric in how the shortest path is calculated? Answer: Bellman-Ford is used by distance-vector routing (DVR) protocols. In the DVR technique a router only keeps knowledge of the adjacent next-hop routers, which is why it exchanges its routing database only with its neighbours. So in DVR there is no need to run an explicit shortest-path computation over the whole topology: every router implicitly finds the shortest paths, albeit slowly, once the exchanges stabilize. The main drawbacks of the DVR technique are slow convergence and the count-to-infinity problem. To converge quickly and avoid count-to-infinity, a router instead needs the routing databases of every other router (neighbours as well as non-neighbours) so that it knows the entire topology, and then it must run a shortest-path algorithm itself to get from one router to another. Link-state routing protocols such as OSPF, which use both multicasting and unicasting, are an example of this approach.
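For reference, the relaxation loop at the heart of Bellman-Ford is simple to sketch (this is the generic single-source version, not RIP's distributed one; RIP effectively runs the same relaxation asynchronously across routers, each using only its neighbours' distance vectors).

```python
def bellman_ford(n, edges, src):
    """Single-source shortest paths by repeated edge relaxation.

    n: number of nodes, edges: iterable of (u, v, weight), src: start node.
    Unlike Dijkstra's algorithm, no globally ordered view of distances is
    needed -- each pass just relaxes every edge, which is what makes the
    distributed, per-neighbour variant used by distance-vector protocols
    possible.
    """
    INF = float("inf")
    dist = [INF] * n
    dist[src] = 0
    for _ in range(n - 1):              # n-1 passes always suffice
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:   # relax edge (u, v)
                dist[v] = dist[u] + w
                changed = True
        if not changed:                 # early exit once distances are stable
            break
    return dist
```

For example, on a 4-node graph with edges (0,1,4), (0,2,1), (2,1,2), (1,3,1), the distances from node 0 come out as [0, 3, 1, 4].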
{ "domain": "cs.stackexchange", "id": 19938, "tags": "graphs, shortest-path, computer-networks, communication-protocols, routing" }
Failed to add library to rqt plugin
Question: I have recently written an rqt plugin gui in C++ based on the rqt_image_view plugin. Everything was working fine until I decided to add another C++ file to my project. This file contains basically a set of useful functions and classes. After adding it as a library in the CMakeLists.txt, catkin_make is working fine and I am getting no build error. Moreover, as long as I don't use any functionality from that source file, my plugin loads and works fine. And even if I use stuff from the associated header file, still the plugin loads and works fine. The issue happens when I actually use that source file in my plugin code. What happens is that although everything still builds, the plugin simply refuses to load, and I get this error message: RosPluginlibPluginProvider::load_explicit_type(myGUI) failed creating instance PluginManager._load_plugin() could not load plugin "myGUI": RosPluginlibPluginProvider.load() could not load plugin "myGUI" I am guessing that it might be related to the plugin.xml since there I specify the library path to be that of the gui only. Should I maybe add the path of the library related to my source file? how to do that? Note that I currently have 2 add_library statement in my CMakeLists.txt: one for the gui and one for the added source file. If I add both of them in one add_library statement, the plugin fails to load even if I don't use the code from the additional source file. 
EDIT This is the relevant part of my CMakeLists.txt:

add_library(helperLib src/additionalSourceFile.cpp)

include_directories(${rqt_my_gui_INCLUDE_DIRECTORIES} ${catkin_INCLUDE_DIRS})

add_library(${PROJECT_NAME} ${rqt_my_gui_SRCS} ${rqt_my_gui_MOCS} ${rqt_my_gui_UIS_H})

target_link_libraries(${PROJECT_NAME} ${catkin_LIBRARIES} ${QT_QTCORE_LIBRARY} ${QT_QTGUI_LIBRARY} helperLib)

And the plugin.xml: (I am guessing the 1st line should be modified in some way to include the other helper library path)

<library path="lib/librqt_my_gui">
  <class name="myGUI" type="rqt_my_gui::myGUI" base_class_type="rqt_gui_cpp::Plugin">
    <description>
      A GUI plugin
    </description>
    <qtgui>
      <group>
        <label>Robot Tools</label>
      </group>
      <label>My GUI</label>
      <icon type="theme">applications-other</icon>
      <statustip>User interface to allow interaction.</statustip>
    </qtgui>
  </class>
</library>

Originally posted by beginner on ROS Answers with karma: 83 on 2015-05-05 Post score: 0 Original comments Comment by Dirk Thomas on 2015-05-05: Sharing your code might help others to help you. Comment by beginner on 2015-05-05: Added code in the edit... I really hope that someone can help! Answer: This error does not necessarily mean that rqt failed to find and load your library file. If you are saying "Everything was working fine until I decided to add another C++ file to my project.", the lib itself is most likely found but there are some issues when loading the classes/functions in it. A very common mistake that causes this behaviour is declaring (and using) a function that is not defined (implemented), or a function declaration and definition that do not match. When creating an executable this would cause a linking error, but as you are just creating a lib in your project (that is loaded outside your project) the linker will assume that undefined references will be resolved somewhere else when creating the lib. So always double-check that all declared functions are defined and that definitions and declarations match...
Originally posted by Wolf with karma: 7555 on 2015-05-06 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by beginner on 2015-05-06: Apparently, you are right! I tried using a simple source file with simple functions, and everything worked like a charm :) I still need to figure out why my original source file is not working though (it is compiling without errors) Comment by Wolf on 2015-05-06: check all function signatures are identical, and of cource make sure your new cpp file is amongst ${rqt_my_gui_SRCS} Comment by beginner on 2015-05-06: checked, but still not working :( Comment by Wolf on 2015-05-06: can you post the code of your cpp files and header? Comment by beginner on 2015-05-06: Thanks... I figured it out :D It was a missing definition for a class method which I wasn't using... ughhhhh Comment by ivaughn_whoi on 2019-12-27: Anyone have better advice on how to get ANY detail on the linker errors? I just get "could not load plugin", even though the library definitely exists. You can check for missing symbols with ldd -r devel/lib/<your_library_here>.so, but it would be lovely if there was a way to ask pluginlib to dump debug info.
{ "domain": "robotics.stackexchange", "id": 21618, "tags": "ros" }
Does "concentration gradient" refer to the amount of all solutes, or a specific solute moving across a membrane?
Question: I'm struggling with something that seems really simple, and I think it's because I'm stuck on the definition of concentration gradients. If you have a solution with a moderate concentration of molecule A crossing a membrane where the other side has no A but a high concentration of molecule B, is A moving up the concentration gradient because there's a higher ratio of solute:solvent, or down because there isn't any molecule A present in that solution? Thank you! Answer: The concentration gradient is typically used to describe the driving force for mass transport by diffusion. It is going to be the different concentrations of species A that drives species A to move, B can have an influence but typically it is only minor. So typically when mentioning concentration gradients it is for a specific species/molecule and not for the total concentration of species/molecules. So in your case the species will be going from a high concentration to low concentration of A, i.e. down the gradient.
{ "domain": "chemistry.stackexchange", "id": 18003, "tags": "terminology, definitions" }
rosbag --clock time for files recorded on a different machine
Question: I ran rosbag record on a laptop to produce a bag file. However, all of the nodes producing the topics were on my robot's computer (i.e., on the same network, but not on the same machine). I am now trying to play back the data using rosbag play --clock , but the time being produced by /clock is about two minutes off from the time embedded in one of my sensor messages (/imu/data). I'm assuming that the time on the laptop was different than the time on the robot's computer, and that is what's accounting for this issue. Where does ROS get its initial time when playing back a bag file? Is there any way to change it? Ideally, I'd like it to be in line with the first message that was received from any sensor. The reason it's a problem for me is that I am using robot_pose_ekf (with use_sim_time set), which is waiting for a transformation from /imu to /base_footprint. However, the time difference between ros::Time::now() and the IMU message's timestamp is ~130 seconds, and robot_pose_ekf is only using a transform time difference of 0.5 seconds. Originally posted by Tom Moore on ROS Answers with karma: 13689 on 2013-03-13 Post score: 3 Answer: rosbag records ros::Time::now() of when the message is received on the "rosbag record" computer. This helps the playback system emulate the latency of the logged messages. Your imu publisher likely fills in the msg.header.stamp with ros::Time::now() of the robot the imu is attached to. To change the timestamps, have a look at the rosbag APIs, you can read in the bagfile and write a new one with whatever timestamps you want. http://www.ros.org/wiki/rosbag/Cookbook#Rewrite_bag_with_header_timestamps http://www.ros.org/doc/api/rosbag/html/c++/ Finally, to prevent this from happening in the future, take a look at a solution like chrony. It's great for ROS because ROS requires two way communication; and that's all it takes for chrony to synchronize your timestamps. 
http://answers.ros.org/question/11570/ros-time-across-machines/ Originally posted by Chad Rockey with karma: 4541 on 2013-03-13 This answer was ACCEPTED on the original site Post score: 4
{ "domain": "robotics.stackexchange", "id": 13347, "tags": "navigation, rosbag, robot-pose-ekf, time, use-sim-time" }
how to setup navigation stack folders
Question: Hi Everyone, I've been trying to study ROS for a week now and getting nowhere. I can build publishers and listeners and understand the basic ROS structure without an issue; however, I'm having issues with how to set up the navigation stack for sensors and odometry. I'm really confused about how to include the packages and package types. Should tf, sensors and odometry be implemented by myself in separate packages? If so, is there any guideline? The tutorials aren't comprehensive enough for my understanding. I tried to look for how to set these up or find some ready-made projects with the navigation stack so I could guide myself, but couldn't find any. Any help will be extremely appreciated. Kind regards, Maciej Originally posted by maciejm on ROS Answers with karma: 33 on 2015-12-02 Post score: 0 Original comments Comment by jarvisschultz on 2015-12-09: The turtlebot_navigation package has a few tutorials that allow you to run the full navstack w/ a simulated turtlebot. Comment by jarvisschultz on 2015-12-09: The launch files and configuration files in the turtlebot_navigation package would be a good way to see how to configure the navstack. Answer: You need to write your own packages as follows:

1- TF tree publisher: http://wiki.ros.org/navigation/Tutorials/RobotSetup/TF
2- Odometry publisher: http://wiki.ros.org/navigation/Tutorials/RobotSetup/Odom
3- There must be a node subscribing to the "cmd_vel" topic that is capable of taking (vx, vy, vtheta) <==> (cmd_vel.linear.x, cmd_vel.linear.y, cmd_vel.angular.z).
To do this you create a subscriber with a callback as follows:

ros::Subscriber sub = n.subscribe("cmd_vel", 1, movecallback);

And the callback would be:

void movecallback(const geometry_msgs::Twist::ConstPtr& vel)
{
    //ROS_INFO("I heard: [%s]", msg->data.c_str());
    geometry_msgs::Twist new_vel = *vel;
    float v = sqrt(new_vel.linear.x * new_vel.linear.x + new_vel.linear.y * new_vel.linear.y);
    float vth = new_vel.angular.z;
    // here you need to send v and vth as linear and angular velocities to your robot
}

For the rest you need to set up launch files as described here: http://wiki.ros.org/navigation/Tutorials/RobotSetup. You also need to install gmapping to be able to create a map; the navigation stack does not work without a map unless you change its defaults to do SLAM. Hope it helps Mohsen Originally posted by mohsen1989m with karma: 397 on 2015-12-09 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 23138, "tags": "navigation, odometry" }
Google's 'Tango' - How it works and what's special about the hardware?
Question: I'm looking for a good breakdown and explanation of Google's 'Tango' AR platform, specifically how the hardware works together to generate depth maps and how the SDK uses them. I know the hardware comprises a fisheye-lens camera and an RGB-IR camera. I am only familiar with stereo vision using identical cameras and disparity maps; I am thinking the different lenses and camera elements make it easier to distinguish variations in the environment, but they must require some very special (and proprietary) algorithms. However, is there actually special hardware and a dedicated chipset for processing the depth map to take the burden off the CPU/GPU? Also, for the AR software implementation, I assume the SDK has some GPU utilization built into it like OpenCL or CUDA (but specific to the Adreno GPU). Does it simply use OpenCL (which the Adreno GPU supports), or does it have something proprietary from Google similar to CUDA for the nVidia chipsets? Basis for the question - I work with OpenCV some and am experimenting with stereo vision applications, but would like to move on to developing apps for specialized hardware, and this sounds like the right (maybe only?) platform. Answer: I suspect, but don't know, that the Tango uses a variant of volumetric reconstruction using a Truncated Signed Distance Function (good intro at http://www.cs.nyu.edu/courses/fall12/CSCI-GA.2945-001/dl/jiakai-slides.pdf). It uses structured IR light to obtain a dense depth map, projects these points back into 3-space as a point cloud, probably turns this into a mesh by triangulating it, and then projects it into a TSDF volume representing the space being mapped. Marching Cubes or a similar algorithm can then be used to extract the 0-crossing of this isosurface as a mesh.
Given that the Tango seems to be able to map quite large areas, I would assume that either the mapping resolution is quite coarse or else Google has implemented a streaming algorithm to move the volumetric reconstruction into and out of GPU memory. The seminal paper here is KinectFusion (https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/ismar2011.pdf), which describes how this was done with a Kinect using depth-only data.
{ "domain": "robotics.stackexchange", "id": 1340, "tags": "computer-vision, stereo-vision, opencv" }
Total pressure in a gaseous system
Question: The kinetic theory of gases assumes that all collisions between gas molecules are completely elastic. So kinetic energy is conserved in collisions between molecules, and thus the average value of velocity remains constant for the gas. Pressure is caused by the change in momentum associated with collisions of gas molecules against the walls of the container. So, as the average value of velocity remains constant, it is safe to ignore the effects of the collisions between the molecules themselves when calculating the pressure of the system. Is this reasoning correct? If it's correct, PV=nRT successfully calculates the true pressure of the gas. Consider my thought experiment. Two ideal gases are sealed in a container. There would be some temperature and total pressure associated with the system. Now, if we can successfully ignore the effects of the collisions between the gas molecules themselves, then this system is equivalent to having the two gases separate, in similar containers (they just add up, that's all). If so, the individual pressure of each gas is going to be equal to its partial pressure, which is a measure of how much that gas contributes to the total pressure. But the gases are in equilibrium, are they not? So their pressures must equal the total pressure. Where does the additional pressure come from? Is there something wrong in the above reasoning? Can somebody point out where? Thanks for any help offered Answer: "So as the average value of velocity remains constant it is safe to ignore the effects of the collisions between the molecules themselves, when calculating the pressure of the system. Is this reasoning correct?" Whether it is safe to ignore collisions between the molecules themselves has nothing to do with constancy of the average velocity of the molecules. If the gas is in thermodynamic equilibrium, the mean velocity of the molecules is constant (no further conditions are needed).
Collisions between molecules, however, may or may not matter - it depends on how the mean free path of the molecules compares to the size of the molecules. "But the gases are in equilibrium, are they not? So their pressures must equal the total pressure. Where does the additional pressure come from?" I do not know what you mean here. Are you asking if each gas individually has the same pressure the original system of mixed gases had? Why would that be so? If the gases got separated while maintaining the temperature constant, the pressure of either of the two gases will be lower than the original pressure. In case the gases interact only negligibly, the sum of the pressures after separation is equal to the original pressure of the mix.
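The last point can be checked with ideal-gas arithmetic (the amounts below are made up; any values work): at fixed T and V, each separated gas is at a lower pressure than the mixture, but the separated pressures sum to the original total.

```python
R = 8.314            # gas constant, J/(mol*K)
T = 300.0            # temperature, K
V = 0.010            # container volume, m^3

n_a, n_b = 0.5, 1.0  # moles of the two ideal gases (illustrative values)

p_mix = (n_a + n_b) * R * T / V   # total pressure of the mixture
p_a = n_a * R * T / V             # gas A alone in an identical container
p_b = n_b * R * T / V             # gas B alone in an identical container

# each separated gas is at a LOWER pressure than the mixture,
# yet the separated pressures add back up to the original total
```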
{ "domain": "physics.stackexchange", "id": 42936, "tags": "kinetic-theory" }
prosilica GC650C
Question: Hello I am trying to use a prosilica GC650C. I follow the instructions at http://www.ros.org/wiki/prosilica_camera/Tutorials The camera works, except for a small detail - images are in grayscale and the camera should give color images. I launched reconfigure_gui but could not find any parameter regarding video format. Can anyone help? regards Miguel Originally posted by Miguel Riem de Oliveira on ROS Answers with karma: 254 on 2012-01-16 Post score: 0 Answer: The prosilica output is in Bayer format; you need to run image_proc to debayer it to produce a color image. See: http://www.ros.org/wiki/image_proc Originally posted by ahendrix with karma: 47576 on 2012-01-16 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 7903, "tags": "ros, prosilica-camera" }
No sign flipping while figuring out the emf of voltaic cell?
Question: I learnt that for a voltaic cell, the value for the $E_\text{cell}^\circ$ when the reaction is spontaneous is given by $$E_\text{cell}^\circ = E_\text{cathode}^\circ - E_\text{anode}^\circ, \label{eqn:1}\tag{1}$$ so that the difference in the right gives us a positive value for $E_\text{cell}^\circ$. But suppose we are given two half-reactions: $$ \begin{align} \ce{Ag+(aq) + e- &→ Ag(s)} &\qquad E^\circ &= \pu{0.80 V} \\ \ce{Sn^2+(aq) + 2 e- &→ Sn(s)} &\qquad E^\circ &= \pu{-0.14 V} \end{align} $$ When finding the overall spontaneous reaction, we must flip the second reaction, multiply it by $2$, and then add it with the first to get our desired equation. But when determining the $E_\text{cell}^\circ$, why don't we negate the minus sign of the second half-reaction and make positive, before we put it in $\eqref{eqn:1}$ to figure out the $E_\text{cell}^\circ$? Shouldn't we do that because we reversed the second equation? My book tells me to keep the $E_\text{half-cells}^\circ$ as they are written in the tables and simply put them in $\eqref{eqn:1}$. But why? Answer: Take a look at the two half reactions: $$ \begin{align} \ce{Ag+(aq) + e- &→ Ag(s)} &\qquad E^\circ &= \pu{0.80 V} \\ \ce{Sn^2+(aq) + 2 e- &→ Sn(s)} &\qquad E^\circ &= \pu{-0.14 V} \end{align} $$ If there is an electron for grabs (like the ones in the wire of a voltaic cell), $\ce{Ag+(aq)}$ and $\ce{Sn^2+(aq)}$ are competing for it. Whichever half reaction has the higher (more positive) reduction potential will win. If the reduction potentials are equal, it is a draw and the reaction is at equilibrium. So we are taking the difference of the reduction potentials to see in which direction the reaction will go. No sign flipping while figuring out the emf of voltaic cell? Take a look at the equation you are using to figure out the emf. 
You are already treating the oxidation half reaction differently than the reduction half reaction because there is a negative sign in front of the anode reduction potential. $$E_\text{cell}^\circ = E_\text{cathode}^\circ - E_\text{anode}^\circ$$ If you switch the anode and cathode half reaction, you would get the opposite sign for the emf. (Not that the reaction would go in that direction.)
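Numerically, with the tabulated reduction potentials used as-is (no sign flipping), the subtraction in the formula already accounts for running the anode half-reaction in reverse:

```python
# Standard reduction potentials straight from the table, in volts
E_red = {"Ag+/Ag": 0.80, "Sn2+/Sn": -0.14}

# E_cell = E_cathode - E_anode; the minus sign is what handles the
# reversal (oxidation) of the anode half-reaction, so the table value
# is used unchanged
E_cell = E_red["Ag+/Ag"] - E_red["Sn2+/Sn"]

# E_cell = 0.94 V > 0: the Ag+ reduction / Sn oxidation direction is
# the spontaneous one
```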
{ "domain": "chemistry.stackexchange", "id": 11694, "tags": "physical-chemistry, electrochemistry, redox" }
String related problem from CodingBat
Question: I was trying this problem and I was wondering if there is a way to check for the edge cases inside the for-loop. The problem statement: We'll say that a lowercase 'g' in a string is "happy" if there is another 'g' immediately to its left or right. Return true if all the g's in the given string are happy. Here I am starting the loop from index 1 as I need to look back and I don't want an ArrayIndexOutOfBounds exception. The same applies for the end edge case where I need to look the following index. To avoid this problem I'm checking these cases right after I've checked all the others, but it doesn't feel quite right. Is there a way to do it in a more compact way? public boolean gHappy(String str) { if(str.length() == 1 && str.charAt(0) == 'g') return false; if(str.length() == 0) return true; for(int i = 1; i < str.length() - 1; i ++) { if(str.charAt(i) == 'g') { if(str.charAt(i - 1) != 'g' && str.charAt(i + 1) != 'g') return false; } } // edge cases (start-end) if(str.charAt(0) == 'g' && str.charAt(1) != 'g') return false; if(str.charAt(str.length() - 1) == 'g' && str.charAt(str.length() - 2) != 'g') return false; return true; } Answer: There's several 'special cases' in your code. String of length 1 String of length 0 'g' at start of String 'g' at end of String I'd recommend getting rid of these special cases. To do this, you can check for special conditions inside the for-loop. Loop through all the positions in the string, and check if there is a 'g' on that position. If it is, then check if the position is more than 0 (i.e. it has a char in front of it) or if the position is less than str.length - 1 (i.e. it has a char after it). 
The details about how to write this can be done in oh so many ways, but here is what I would end up with: for (int i = 0; i < str.length(); i++) { char ch = str.charAt(i); if (ch == 'g') { if (i > 0 && str.charAt(i - 1) == ch) { continue; } if (i < str.length() - 1 && str.charAt(i + 1) == ch) { continue; } return false; } } return true;
{ "domain": "codereview.stackexchange", "id": 14635, "tags": "java, strings" }
Does slope of $y$ vs $x$ curve tell anything about magnitude of instantaneous velocity at that point?
Question: I know that the slope at any point on the trajectory of a moving body gives us the direction of its instantaneous velocity. Does it tell us anything about the magnitude of the velocity at that point? I don't think it does, but I'm not completely sure. The slope of the $y$ vs $x$ curve means $\frac{dy}{dx}$. We can write $$\frac{dy}{dx} \cdot \frac{dt}{dt} = \frac{v_y}{v_x} = c \;\text{(some constant)} \implies v_y = cv_x.$$ But this doesn't tell us anything about the magnitude of the instantaneous velocity at that point unless we know the magnitude of one of $v_y$ or $v_x$ at that point. Is there a way we can find out the magnitude of the instantaneous velocity at a point on the trajectory with the help of just the slope at that point? Answer: The velocity cannot be determined by $\frac{\text dy}{\text dx}$, and a simple example will show why. Imagine we both start at the origin and move along the positive x-axis until reaching the coordinate $(1,0)$. Let's say I travel at $1\,\mathrm{m/s}$ and you travel at $2\, \mathrm{m/s}$. What are both of our trajectories $y(x)$? Well, we have the same trajectory $$y(x)=0 \text{ for }0\leq x\leq 1$$ and therefore the same trajectory slope $$\frac{\text dy}{\text dx}=0 \text{ for }0\leq x\leq 1$$ However, we had different magnitudes of our velocities. Of course, this is a simple example. But this is true for any $y(x)$ (or even if the trajectory is not a well-defined function of $x$). This is because if we have one parametrization $(x(t),y(t))$ we can easily pick a different parametrization that follows the same curve but does it at a different velocity, for example $(x(2t),y(2t))$.
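The answer's example can be written down as two parametrizations (the function names are made up) of the identical path $y(x)=0$: both trace the same curve with the same slope everywhere, yet at different speeds.

```python
# Two motions along the same trajectory y(x) = 0, 0 <= x <= 1:
def pos_slow(t):        # speed 1 m/s, arrives at (1, 0) at t = 1.0 s
    return (1.0 * t, 0.0)

def pos_fast(t):        # speed 2 m/s, arrives at (1, 0) at t = 0.5 s
    return (2.0 * t, 0.0)

# Both traverse the same curve (dy/dx = 0 everywhere), yet their
# speeds |dr/dt| differ -- so the slope of y vs x alone cannot
# recover the magnitude of the velocity.
```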
{ "domain": "physics.stackexchange", "id": 61795, "tags": "kinematics" }
Alternative to using sleep() to avoid a race condition in PyQt
Question: I have a situation where I would like to use a single QThread to run two (or more) separate methods at different times. For example, I would like the QThread to run play() sometimes, and when I am done playing, I want to disconnect the QThread.started() signal from this method so that I may connect it somewhere else. In essence I would like the QThread to act as a container for anything I would like to run in parallel with the main process. I have run into the problem where starting the QThread and then immediately disconnecting the started() signal causes strange behavior at runtime. Before I discovered what 'race condition' meant (or really understood much about multithreading), I had the sneaking suspicion that the thread wasn't fully started before being disconnected. To overcome this, I added a 5 ms sleep between the start() and disconnect() calls and it works like a charm. It works like a charm, but it isn't The Right Way. How can I implement this functionality with one QThread without making the call to sleep()? def play(self): self.stateLabel.setText("Status: Playback initiated ...") self.myThread.started.connect(self.mouseRecorder.play) self.myThread.start() time.sleep(.005) #This is the line I'd like to eliminate self.myThread.started.disconnect() def record(self): self.stateLabel.setText("Status: Recording ...") self.myThread.started.connect(self.mouseRecorder.record) self.myThread.start() time.sleep(.005) #This is the line I'd like to eliminate self.myThread.started.disconnect() Answer: Use a Queue; Python has a thread-safe Queue class. Push the function you want to run on the background thread onto the queue. Have the thread wait on the queue, executing functions as they are put into it.
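The accepted approach can be sketched with Python's standard library alone (no Qt needed to see the idea): a single long-lived worker thread pulls callables off a thread-safe queue, so you never connect and disconnect signals at all. Names here are illustrative:

```python
import queue
import threading

jobs = queue.Queue()
results = []

def worker():
    # Long-lived background thread: execute callables as they arrive.
    while True:
        func = jobs.get()
        if func is None:          # sentinel: shut the worker down
            break
        func()
        jobs.task_done()

t = threading.Thread(target=worker, daemon=True)
t.start()

# Instead of reconnecting QThread.started, just enqueue the method to run.
jobs.put(lambda: results.append("play"))
jobs.put(lambda: results.append("record"))
jobs.join()                       # wait until both jobs have run
jobs.put(None)
t.join()
print(results)  # ['play', 'record']
```

In the real PyQt app, the worker would live in the background thread and the GUI would only ever call `jobs.put(...)`, which removes the race entirely.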
{ "domain": "codereview.stackexchange", "id": 3140, "tags": "python, multithreading, pyqt" }
Introductory Quantum, trouble with this boundary condition and potential
Question: Working on problem 2.40 from Griffiths but can't seem to understand the first boundary condition. We are given the potential $V(x) = \left\{\begin{matrix} \infty & x < 0\\ \frac{-32\hbar^{2}}{ma^{2}}& 0 \leq x \leq a\\ 0 &x>a \end{matrix}\right.$ And we want to find the bound states. Since our minimum potential is $\frac{-32\hbar^{2}}{ma^{2}}$ we know that our $E$ has to be between this potential value and 0. The problem I'm having right now is with the middle region and the boundary condition at $x=0$. In region $1$ we have that $E - V(x) > 0$ so we have the following form of the TISE: $\frac{d^{2}\psi}{dx^{2}} = \frac{-2mE}{\hbar^{2}}\psi$ Letting $k = \frac{\sqrt{2mE}}{\hbar}$ we then have solutions of the form $\psi(x) = Ae^{ikx}+Be^{-ikx}$ or $\psi(x) = A\sin(kx) + B\cos(kx)$ If you apply the boundary condition then $\psi(0) = 0$ The thing that I'm confused by is that the first equation seems to suggest that $A = -B$ and the second equation suggests $B$ is $0$ because $A\sin(0) + B\cos(0) = 0$ I know from looking this up earlier that I'm supposed to get that A is nonzero while B is zero, but I'm not sure how the two different equations match up. I should be able to arrive at the same conclusion whether I use complex exponentials or sines and cosines, right? Answer: The choice of the constants $A$ and $B$ depends on the form of the solution. You could denote one pair of constants by $A$ and $B$, and the other by $C$ and $D$. A possible solution is $\psi (x)=A\sin(kx)$. In complex form, the sine is: $$\sin(x)=\frac{e^{ix}-e^{-ix}}{2i}$$ If you've never proved this formula, try it using $e^{ix}=\cos x+i\sin x$. This indicates that if you express the solution as $\psi(x)=Ae^{ikx}+Be^{-ikx}$, the constants must obey the relation $A=-B$. After all, both solutions are the same.
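The relation between the two constant pairs can be checked numerically (an illustrative check, not part of the original answer): writing $\psi(x)=A\sin(kx)$ as $Ce^{ikx}+De^{-ikx}$ gives $C=A/2i$ and $D=-A/2i$, so $C=-D$ and $\psi(0)=C+D=0$ automatically.

```python
import cmath

A, k = 1.7, 2.3          # arbitrary illustrative values
C = A / (2j)             # A*sin(kx) = C*e^{ikx} + D*e^{-ikx}
D = -A / (2j)

for x in [0.0, 0.5, 1.3]:
    lhs = A * cmath.sin(k * x)
    rhs = C * cmath.exp(1j * k * x) + D * cmath.exp(-1j * k * x)
    assert abs(lhs - rhs) < 1e-12      # both forms agree pointwise

# The boundary condition psi(0) = 0 holds because C = -D:
assert abs(C + D) < 1e-12
print("C = -D reproduces A*sin(kx), and psi(0) = 0")
```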
{ "domain": "physics.stackexchange", "id": 48510, "tags": "quantum-mechanics, homework-and-exercises, wavefunction, schroedinger-equation, potential" }
Re: Logistic Regression
Question: I am working on a dataset that has a binary dependent variable, but it contains 98% 0's and 2% 1's. I am trying to use logistic regression to predict purchase of a product, but because of the huge number of 0's the model is not predicting well and is producing a large number of false positives. Kindly suggest how I can approach this. Answer: This kind of problem is called a class imbalance issue. It is very common in the financial industry (e.g. banks and insurance, for fraud detection) and the health care industry (e.g. cancer cell detection). To overcome such issues, we use techniques like over-sampling or under-sampling. Over-sampling tries to increase the minority class by duplicating its records to balance the data. Under-sampling tries to decrease the majority class by removing records that are not significant, again to balance the data. There are different algorithms for implementing these. You can go through Link-1, Link-2 for explanation and implementation of the same. Let me know if you need anything else.
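The random over-sampling idea can be sketched with the standard library alone (illustrative only; in practice you would reach for a dedicated library such as imbalanced-learn, or a class-weighted model):

```python
import random
from collections import Counter

random.seed(0)

# Toy imbalanced dataset of (features, label) pairs: 98 negatives, 2 positives.
data = [([random.random()], 0) for _ in range(98)] + \
       [([random.random()], 1) for _ in range(2)]

labels = Counter(lbl for _, lbl in data)
minority_label = min(labels, key=labels.get)
minority = [row for row in data if row[1] == minority_label]

# Random over-sampling: duplicate minority rows until the classes balance.
deficit = max(labels.values()) - len(minority)
balanced = data + random.choices(minority, k=deficit)

print(Counter(lbl for _, lbl in balanced))  # Counter({0: 98, 1: 98})
```

Note that duplicated rows must be added only to the training split, never to the held-out test set, or the evaluation becomes optimistic.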
{ "domain": "datascience.stackexchange", "id": 2771, "tags": "classification, logistic-regression" }
A question about variation of metric under Weyl and coordinate transformations
Question: I have a question about deriving variation of metric under Weyl and coordinate transformations in Polchinski's string theory (3.3.16). Under transformation $$\zeta: g \rightarrow g^{\zeta}, \,\,\, g_{ab}^{\zeta}(\sigma')=\exp[ 2 \omega (\sigma) ] \frac{ \partial \sigma^c }{\partial \sigma'^a} \frac{ \partial \sigma^d}{\partial \sigma'^b} g_{cd}(\sigma) \tag{3.3.10} $$ how to show $$ \delta g_{ab} = 2 \delta \omega g_{ab} - \nabla_a \delta \sigma_b-\nabla_b \delta \sigma_a ? \tag{3.3.16}$$ The first term in (3.3.16) comes from Weyl transformation. I am unable to derive the second and third terms. Answer: It goes a little something like this: \begin{align*} \delta g_{ab}(\sigma)&=g_{ab}^{\zeta}(\sigma)-g_{ab}(\sigma)\\ &=\exp(2\omega(\sigma-\delta\sigma))\frac{\partial (\sigma^c-\delta \sigma^c)}{\partial \sigma^a}\frac{\partial( \sigma^d-\delta \sigma^d)}{\partial \sigma^b}g_{cd}(\sigma-\delta \sigma)-g_{ab}(\sigma)\\ &\approx (1+2\omega )({\delta^c}_a-\partial_a\delta \sigma^c)({\delta^d}_b-\partial_b\delta \sigma^d)(g_{cd}(\sigma)-\delta\sigma^e\partial_eg_{cd}(\sigma))-g_{ab}(\sigma)\\ &\approx 2\omega g_{ab}(\sigma)-\partial_a\delta\sigma_b-\partial_b\delta\sigma_a-\delta\sigma^e\partial_eg_{ab}(\sigma). \end{align*} The last expression we recognize as the Lie derivative of the metric along the vector field $\delta\sigma^a$. What you wrote down is an equivalent form using the covariant derivative. P.S. you made it to the hardest chapter :)
{ "domain": "physics.stackexchange", "id": 8867, "tags": "homework-and-exercises, string-theory, differential-geometry, metric-tensor" }
What would be the effects of a -400 nanotesla geomagnetic storm on modern electronics?
Question: After reading the Wikipedia article on geomagnetic storms, I'm curious about what the effects of a -400-nanotesla-minimum geomagnetic storm would be on modern military and consumer electronics. The March 1989 geomagnetic storm, with a minimum of -589 nT, shut down Quebec's power grid, but the Bastille Day event, with a minimum of -301 nT, apparently didn't do much, at least to power grids. However, I cannot find information on how either affected military/consumer electronics. What would a -400 nT storm cause, and how could it be shielded against? Answer: You basically asked the same question over on Worldbuilding. I'm copying my answer to that question here. This answer does not specifically address the strength of the magnetic pulse because whether or not that strength has any effect is dependent on far too many variables to give you a simple answer (e.g., ground conductivity, ground charge, age of the power grid, availability of discharge protection, material used for the wires, climatological conditions, time of day... just to name a very, very, very few). However, it appears you're still looking for a reasonably "real life" justification for damaging personal electronics without blowing the power grid, so the answer is fundamentally the same. You can screw up wireless coms, but nothing else without destroying the power grid first. The nature of the stellar event doesn't matter. In fact, the source of what's causing the power grids to black out is irrelevant. What's happening to your earth is a massive magnetic pulse, similar to a nuclear EMP. What's happening is exactly what happens with a generator: a changing magnetic field induces electrical flow in a long length of wire. The long length of wire isn't just important — it's critical. As the wire gets shorter, the strength of the magnetic field must get stronger to induce the same electrical current.
The induced current must be enough to either screw up computation (very unlikely) or to blow the device's circuitry (much more likely). And there's your problem. An electromagnetic (EM) event strong enough to knock out a single piece of electronic equipment would literally cause the wires in the power grid (above ground or underground, unless it was incredibly and therefore impractically deep underground) to vaporize. OK, so I'm an electrical engineer, but let's not take my education's word for it. Let me give you a practical example. My family used to live in Texas, and one evening lightning struck near our house (yes, this really happened!). The thunder shook the whole house. But what happened electrically? A phone line (very thin, very long wire) in the corner of the house nearest the strike vaporized. I had to run a new phone cable. All other phone cables in the house were unharmed. An electrical wire in the wall closest to the strike heated to the point of melting the insulation, which caused it to short out and throw a breaker. I had to pull that and run a new one. All other electrical wires in the house were unharmed. Gratefully, this didn't start a fire, demonstrating the value of sheetrock and non-flammable insulation. My computer and printer were connected using a 6-foot Centronics-style 36-wire parallel cable. It was just barely long enough to couple enough energy to blow the input port on the printer. The computer was unharmed. What this really meant was that the electrostatic discharge (ESD) protection on the printer sucked. Walking across the floor and touching the input port directly would have blown it. I was very disappointed, but back then ESD protection in consumer electronics was only just coming into regular use. What does this mean for you? It's impossible to electromagnetically (EM) damage equipment on a global scale without incredible damage to the power grid. All those really long, easy-to-induce-current-into wires make it simply impossible.
However, a strong global EM event could devastate wireless communications while leaving the power infrastructure intact.
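The "long wires matter" argument can be put in rough numbers. A back-of-the-envelope sketch (the 1 V/km geoelectric field is an illustrative storm-level value, not a measured figure for any particular event):

```python
# Geomagnetically induced voltage scales with conductor length:
# V ~ E_geo * L for a quasi-DC geoelectric field E_geo along the conductor.
E_geo = 1.0  # V/km, illustrative storm-level geoelectric field

conductors = {
    "transmission line": 500.0,    # km
    "house wiring run": 0.02,      # km (20 m)
    "phone-size PCB trace": 1e-7,  # km (0.1 mm)
}

for name, length_km in conductors.items():
    print(f"{name:>22}: ~{E_geo * length_km:g} V induced")
# The grid sees hundreds of volts while a gadget's traces see sub-microvolts,
# which is why storms blow grids long before they touch handheld devices.
```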
{ "domain": "astronomy.stackexchange", "id": 5762, "tags": "the-sun, stellar-astrophysics, coronal-mass-ejection, electromagnetic-spectrum" }
HTML elements word spacing using jQuery
Question: I've been doing a hobby project for fun recently; it's a GitHub webpage using Jekyll. One of the main components of my site is displaying Chinese characters with their Sino-Vietnamese pronunciation. I use <ruby> elements to render these texts; for those who don't know, you can read its MDN docs. The problem with that is word spacing is broken when a <rt> element is longer than its associated <rb> element. So I created a jQuery function to dynamically adjust spacing. A minimal example here: const stdSpace = "1em"; // for ruby annotation spacing $(function () { $("rb").each(rubyAdjust); }); function rubyAdjust(i, el) { // for each ruby base const rbW = $(el).width(), // take its width rtW = $(el).next("rt").width(), // its associated ruby text width diff = (rtW - rbW).toFixed(0), // excess amount addSpace = diff > 0 ? `calc(${stdSpace} + ${diff.toString()}px)` : stdSpace; $(el).css("margin-right", addSpace); } rb { display: inline-block /* fluid word spacing with margin using JS */ } rt { text-align: left /* to work together with rb margin */ } <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <ruby> <rb>是</rb><rt>Thị</rt> <rb>故</rb><rt>cố</rt> <rb>空</rb><rt>không</rt> <rb>中</rb><rt>trung</rt> </ruby> My function rubyAdjust works as expected, but becomes a UI blocking point when I have a very long text. For a full-fledged example, you can visit a page of my project which shows the Diamond Sutra. So what I want is to improve the rendering speed: Is there a better way to adjust word spacing of <ruby>? The official docs of <ruby> don't have much, maybe because the feature is still incomplete and few people are using it? I've also searched for multi-threaded execution of jQuery, but it seems like DOM manipulation isn't possible there. Do you guys have any idea?
Many thanks, Answer: Some improvements: completely remove the JS function and replace with CSS (thanks to this SO answer); this greatly reduces loading time wrap one ruby for each pair rb & rt with ruby, instead of having multiple pairs of them inside ruby use flex to render rather than the default display: block The improved code: ruby { display: inline-flex; flex-direction: column-reverse; margin-right: 0.5em; /* additional spacing */ } rb, rt { display: inline; line-height: 1; /* spacing between rb & rt can be changed */ text-align: left; /* my personal preference */ } <ruby><rb>是</rb><rt>Thị</rt></ruby> <ruby><rb>故</rb><rt>cố</rt></ruby> <ruby><rb>空</rb><rt>không</rt></ruby> <ruby><rb>中</rb><rt>trung</rt></ruby>
{ "domain": "codereview.stackexchange", "id": 41665, "tags": "jquery, html, css" }
Definition of alpha-carbon
Question: What is an $\alpha$-carbon, and how can I identify one? The other question, because of which my question has been marked as a duplicate, asks about alpha carbons in alkyl halides, while my question simply asks what an alpha carbon is, not in any particular kind of carbon compound. I didn't even know whether an alpha carbon is supposed to be connected to functional groups, halogens, or double/triple bonds, while the other post asks for the alpha carbon in an alkyl halide. Answer: It is a descriptor of the position of one group relative to another. The most common situation would be for a carbonyl group, where the α-carbon is the one immediately adjacent to the carbonyl itself. It is for this reason that naturally occurring amino acids are often called α-amino acids - the amine is attached to the α-carbon.
{ "domain": "chemistry.stackexchange", "id": 17936, "tags": "organic-chemistry, hydrocarbons" }
Twin paradox in curved space time
Question: In a flat space, where special relativity works, a travelling body can only return to the same point if we apply some kind of acceleration to the body. So twin paradox is not a paradox because a travelling body that returns to the same point where it starts is not an inertial reference. But then we have general relativity, that states that mass (energy) curves space-time. So when a photon changes its trajectory passing nearby sun, it's actually moving in a straight line, but in a curved space-time. Now we can create a new version of twin paradox, where the spaceship that carries one of the twins uses the curvature of some astronomical body (like Jupiter, sun or a black hole) to return to the same point? In that case which twin will be older? How to solve the paradox in this context? Answer: In that case which twin will be older? Each twin experiences a proper time $\int ds$, where the integral is taken along their world-line. In general, this is all we can really say. However, in the case of a static spacetime, you can define a gravitational potential and then analyze the proper time in terms of two terms, a kinetic term (special-relativistic $\gamma$) and a gravitational one (proportional to the potential). How to solve the paradox in this context? The SR paradox occurs if we assume, erroneously, that there is symmetry between the twins. The SR paradox is resolved because the world-lines are different. The symmetry fails because they're distinguishable: only one of them is inertial. The GR version you've posed is resolved in the same way: the world-lines are distinguishable (although both are inertial), so integrating $\int ds$ along them gives different answers. For instance, one could orbit the earth 47 times in an elliptical orbit, while the other orbits it 10 times in a circular orbit. The orbits intersect at the beginning and end.
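In flat spacetime the $\int ds$ bookkeeping is easy to check numerically. A sketch of the classic SR version (the numbers are illustrative): twin A stays home, twin B goes out and back at 0.8c; integrating $d\tau = \sqrt{1 - v^2/c^2}\,dt$ along each world-line gives different elapsed proper times.

```python
import math

c = 1.0                 # work in units where c = 1
T = 10.0                # coordinate time of the whole trip (home frame)
N = 100_000
dt = T / N

def speed_home(t):      # twin A: always at rest
    return 0.0

def speed_traveller(t): # twin B: out at 0.8c, then back at 0.8c
    return 0.8

def proper_time(speed):
    """Integrate d(tau) = sqrt(1 - v^2/c^2) dt along the world-line."""
    return sum(math.sqrt(1 - speed(i * dt) ** 2 / c ** 2) * dt
               for i in range(N))

tau_A = proper_time(speed_home)       # ~10.0
tau_B = proper_time(speed_traveller)  # ~6.0: the traveller ages less
print(tau_A, tau_B)
```

The curved-spacetime version of the question works the same way, except that $ds$ also picks up a gravitational-potential contribution along each orbit.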
{ "domain": "physics.stackexchange", "id": 53520, "tags": "general-relativity, metric-tensor, time, coordinate-systems, observers" }
Why does Friction not accelerate the body in this case?
Question: My textbook says (see the highlighted paragraph below), "Normal is the perpendicular component of contact force, while friction is the parallel component". First of all, I am familiar with how friction works. For example, if a body is at rest on a horizontal rough (i.e. non-frictionless) surface, no friction is acting on it horizontally. (Static) friction comes into the picture only if the body tries to move relative to the surface, and tries to stop it from moving. If the body is actually moving relative to the surface, it experiences kinetic friction, again opposing its motion. It's a similar case when the body is placed on a ramp. Static friction balances a component of gravity if the body is at rest. Now coming back to the textbook. An object is kept on a horizontal surface. According to the book, a smooth surface would exert a very small force parallel to it, and hence is nearly frictionless. It means a rough horizontal surface would indeed exert a large enough force parallel to it. Large enough in the sense that it cannot be ignored when we analyse the forces acting on the object. This is what I can't comprehend. If I simply put a body on a very rough horizontal surface, it is at rest. The surface exerts two contact forces on the body (according to the book): one perpendicular to the surface (normal), the other parallel to it (friction). Now the weight of the object cancels out the normal force acting on it. Hence no acceleration in the vertical direction. Why is there no acceleration in the horizontal direction either? The horizontal component of the contact force (i.e. friction) is still acting on it, according to the book, because the surface is rough. There is no other force in the horizontal direction. I understand that what I am asking sounds wrong, because I already mentioned that I understand that static friction is zero unless the body tries to move. So there should be no friction when the body is simply lying there.
But I am asking the above question in the context of how the textbook has described friction as the parallel component of the contact force. Answer: The questions Why does Friction not accelerate the body in this case? and Why is there no acceleration in the horizontal direction either? The horizontal component of the contact force (i.e. friction) is still acting on it, according to the book, because the surface is rough. There is no other force in the horizontal direction. you answered yourself: (Static) friction comes into the picture only if the body tries to move relative to the surface, and tries to stop it from moving. If the body is actually moving relative to the surface, it experiences kinetic friction, again opposing its motion. If there is no other force in the horizontal direction, then there is also no friction force. Only if some horizontal force is exerted on the object will there be either static friction (when the body is at rest) or kinetic friction (if there is relative movement between the two surfaces in contact). The highlighted text says: "...the two bodies in contact may have a component parallel to the surface of contact..." (boldface mine) The emphasis is on the "may have", which means it can have the parallel component but not necessarily. The fact that the surface is rough only says that its coefficient of friction is (probably) larger compared to smooth surfaces. Remember that the static friction force magnitude $f_s$ is actually defined as $$f_s \leq (f_s)_\text{max} = \mu_s n$$ where $n$ is the normal (perpendicular) force magnitude, and $\mu_s$ is the coefficient of static friction. This means that the static friction force can have any value in the range from $0$ to $(f_s)_\text{max}$ that prevents the two surfaces in contact from moving relative to each other.
Unlike the static friction force, the kinetic friction force has a constant magnitude defined by $$f_k = \mu_k n$$ Once the two contact surfaces move relative to each other, the kinetic friction force replaces the static friction force. Note that in general $\mu_k < \mu_s$. See the figure below for a graphical interpretation of the friction force model. Source: H. D. Young, R. A. Freedman, "University Physics with Modern Physics in SI Units", 15th ed., 2019.
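The static/kinetic model in the answer is easy to turn into a tiny decision procedure (a sketch; the coefficient values are illustrative):

```python
MU_S, MU_K = 0.5, 0.4   # illustrative coefficients, mu_k < mu_s
g = 9.81                # m/s^2

def friction_force(applied, mass):
    """Friction on a block on a horizontal surface, per the standard model."""
    n = mass * g                  # normal force
    f_s_max = MU_S * n
    if applied <= f_s_max:        # static regime: friction matches the push
        return applied, "static"
    return MU_K * n, "kinetic"    # slipping: constant kinetic friction

# No push -> no friction, exactly the point of the question:
print(friction_force(0.0, 2.0))   # (0.0, 'static')
# Small push -> static friction equal and opposite to the push:
print(friction_force(5.0, 2.0))   # (5.0, 'static')
# Big push -> surfaces slip, kinetic friction takes over:
print(friction_force(50.0, 2.0))
```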
{ "domain": "physics.stackexchange", "id": 87572, "tags": "newtonian-mechanics, forces, friction" }
Why does an electric field oppose the flow of positive ion?
Question: I'm solving a very interesting problem. Suppose a cell is divided into inside and outside. Inside we have positive ions, and outside we also have positive ions in a different concentration. If a concentration difference of potassium K+ causes it to flow outwards from the inside (while the other ions are fixed), then there would be a deficiency of positive ions on the inside, thereby creating an electric field across the cell membrane (just like a capacitor). Can someone explain why, after an electric field has been set up between the outside and the inside, no more positive K+ ions may flow from the inside to the outside? Thanks Answer: I'm not sure what your physics background is. The phenomenon you describe comes from the equation of continuity. This can be concisely mathematically expressed as $$ \oint \left( {\bf J} + \epsilon_0 \frac{\partial {\bf E}}{\partial t} \right) \cdot d{\bf S} = 0 $$ In this equation ${\bf J}$ is an electrical current density (current per unit area) and $\partial {\bf E}/\partial t$ is the rate of change of the electric field ($\epsilon_0$ is just a physical constant to get the units right). The equation says that the sum of these two things integrated over a closed surface must equal zero. That is, if you move charge into or out of a volume then the electric field will change, and it will change in a direction to oppose the flow of charge. I.e. if you keep putting positive charges into a volume, then an electric field will build up that opposes the flow of positive charge into the volume. The second part of the answer (and perhaps the main thing you are asking) is how the electric field that is set up opposes the flow of charge. This is a straightforward consequence of the component of the Lorentz force exerted by electromagnetic fields on charged particles.
In this case, considering only electric fields, the force is given by ${\bf F} = q{\bf E}$, where the quantities are vectors and the charge $q$ in the case you are discussing is positive. So on the diagram you show, after positive ions have been transported from inside to outside, an electric field builds up (due to the continuity equation), and this electric field is in the direction from outside to inside. The force this exerts on a positive ion is also in the direction from outside-to-inside. Eventually as more positive charges flow out of the cell, this force will become strong enough to stop the flow of charge and presumably an equilibrium will be reached where the electric field is strong enough to overcome the concentration difference.
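The build-up described above can be caricatured as a parallel-plate capacitor that charges until the electric force balances a fixed outward "concentration push" on each ion. This is a toy model with made-up numbers, not real membrane electrophysiology:

```python
EPS0 = 8.854e-12   # F/m, vacuum permittivity
AREA = 1e-6        # m^2, illustrative membrane patch area
Q_ION = 1.602e-19  # C, charge of one K+ ion
PUSH = 3.0e-19     # N, illustrative outward 'concentration' force per ion

transferred = 0    # ions moved from inside to outside
while True:
    # Field from the charge separation (parallel-plate approximation):
    E = transferred * Q_ION / (EPS0 * AREA)
    electric_force = Q_ION * E          # pulls positive ions back inside
    if electric_force >= PUSH:          # field now cancels the outward push
        break
    transferred += 1

print(transferred, "ions moved before the field stopped the flow")
# A finite number of ions move, then the outward flow stops: equilibrium.
```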
{ "domain": "physics.stackexchange", "id": 16724, "tags": "capacitance, biophysics" }
How is the speed of sound dependent on pressure change in this formula?
Question: In the following equation, $$\text{speed of sound}=c=\sqrt{\frac{\gamma RT}{M}}$$ the pressure term is missing, or I guess is absorbed into the adiabatic constant $\gamma$. But I don't know how to even derive the adiabatic constant for a certain gas. Answer: The best way to understand an equation in my view is to derive it from scratch ... along the way you pick up the details as to why you have certain conditions or restrictions that limit the usage of the equation. With this in mind I will outline an easy way to derive the equation in your book. Along the way you may think "how on earth would you first think about doing this derivation"! The answer is that it is a longer yet much more basic way of arriving at the same result as opposed to the "proper" way. If you wish me to describe it the proper way I will add it in at the end; it's much shorter but possibly a little harder in terms of maths/concepts. Okay, here goes: Longitudinal sound waves cause small oscillations in the distribution of the gas molecules (hard spheres) which propagate in the direction of the oscillations. These compressions (regions of higher pressure than usual) and rarefactions (regions of lower pressure than usual) affect the local density and pressure of the gas molecules in a given region of space with respect to their equilibrium values. Imagine that a cylinder of gas, which has a cross-sectional area of $A$ and a length of $vdt$ where $v$ is the velocity of the gas molecules and $dt$ is the time they spend in the cylinder, has a sound wave propagating through it lengthways. Where we have a compression of the molecules: 1) their pressure increases from equilibrium $P_0$ to $P$ by $dP$, where $P=P_0+dP$. 2) the local density increases from equilibrium $\rho _0$ to $\rho$ by $d\rho$, where $\rho=\rho _0 +d\rho$.
3) due to all the extra collisions of lots of molecules in a small space the average velocity of each molecule $v$ is less than it is at equilibrium $v_o$ by $dv$ where $dv$ represents a deceleration (so is negative). This is the opposite of the rarefied regions which experience an acceleration, and occupy a localised region of lower $P$ and $\rho$ than equilibrium. The extra force exerted on the molecules in the compressed zone is given by $F=dPA$. The volume of the cylinder $V$ is its length multiplied by its cross sectional area giving us $V=Avdt$. The mass is just volume times density (equilibrium density), $m=\rho _0 Avdt$, notice that this part is independent of local changes since rarefactions/compressions manifest themselves by making $v$ larger/smaller and correspondingly $dt$ smaller/larger. We can now substitute that into Newton's second law $F=ma$ where $a$ is the acceleration of the gas molecules. We can go one of two ways here either compression or rarefaction ... lets pick a compression. This means that we will have a deceleration in the velocity of the gas molecules. Therefore the acceleration is now given by: \begin{equation} a=-\frac{dv}{dt} \end{equation} Further we can substitute the mass into Newton's second law: \begin{equation} F=-\rho _0Avdt\frac{dv}{dt} \end{equation} But remember from the start we said that the force is also given by $F=AdP$. \begin{equation} AdP=-\rho _0Avdt\frac{dv}{dt} \end{equation} Cancel the $A$s and $dt$s and re-arrange: \begin{equation} -\frac{dP}{dv}=\rho _0v \end{equation} Multiply both sides by $v$, \begin{equation} -v\frac{dP}{dv}=\rho _0v^2 \end{equation} Right lets put this result to one side and think about the change in volume. In fact we need to consider the fractional change in volume since volume is an extensive property. Therefore we will be interested not in $dV$ but in the quantity $dV/V_0$. 
We already know that the equilibrium volume is given by $V_0=Avdt$, the change in this volume under a compression is given by $dV=Advdt$. Therefore, \begin{equation} \frac{dV}{V_0}=\frac{Advdt}{Avdt} =\frac{dv}{v} \end{equation} Substituting this result into the above equation: \begin{equation} -\frac{V_0}{dV}dP=-\frac{dP}{\bigg(\frac{dV}{V_0}\bigg)}=\rho_0v^2 \end{equation} Now define the quantity in the middle as $the$ bulk modulus, $B$. It is the ratio of an infinitesimal pressure change to the relative change in volume, it gives an indication of the compressibility of a material. Now rearrange this for the velocity: \begin{equation} v= \sqrt{\frac{B}{\rho _0}} \end{equation} At this point you have your equation in the most general form, the Newton-Laplace equation. From here you can pick what equation of state you wish to use and tailor this equation as needed. You can clearly see how the pressure and density information is tied up in this expression. But that being said its not so great for the experimentalists. Think about how you would take these readings and so on... Therefore we are not quite done! If we use Boyle's law $P_1V_1=P_2V_2$ as Newton did then you will find that although close to experimental values your always slightly off. Therefore we will use a different equation of state... one that inherently assumes constant entropy (some people say that adiabatic means that "no heat is transferred", this is the easy way to think about things .... what they really mean is that the process occurs at constant entropy .... an adiabatic process means constant entropy... (maybe if you study statistical mechanics and phase space this will be more apparent). The equation we should use is the adiabatic version of Boyle's law. 
The reason that a sound wave travelling through a medium is considered adiabatic is because "no heat is exchanged between rarefactions and compressions", i.e. the equilibrium temperature is maintained because the rarefaction velocity change cancels out the compression velocity change on average, meaning that the kinetic energy is pretty much constant and hence so is $T$ from $E_k=\frac{3}{2} Nk_BT$. So instead of Boyle's law we use $PV^{\gamma}=P_0V_0^{\gamma}$. But for a given temperature the entire right-hand side is just a constant, so let's instead use $PV^{\gamma}=K$ where $K$ is the constant. \begin{equation} P=KV^{-\gamma} \end{equation} Now let's differentiate $P$ with respect to $V$ ... let's find the rate of change of pressure with respect to the change in volume: \begin{equation} \frac{dP}{dV}=-\gamma KV^{-\gamma-1} \end{equation} Multiply both sides by $-V$ and you will notice we have an expression for $B$, \begin{equation} -V\frac{dP}{dV}=KV\gamma V^{-\gamma-1}=VK\gamma V^{-\gamma}V^{-1}=\frac{K\gamma}{V^{\gamma}} \end{equation} Let $V=V_0$, the equilibrium volume, which we are free to do. Now express the equilibrium volume in terms of the equilibrium density $\rho_0$: \begin{equation} \rho_0=\frac{nM}{V_0} \end{equation} If we now go back and put these in $v=\sqrt{\frac{B}{\rho_0}}$ and simplify, then we have the equation stated in your book: \begin{equation} v=\sqrt{\frac{B}{\rho_0}}=\sqrt{\frac{K\gamma V_0}{V_0^{\gamma}nM}} \end{equation} Using $K/V_0^{\gamma}=P_0$ and then the ideal gas law $P_0V_0=nRT$, \begin{equation} v=\sqrt{\frac{\gamma P_0V_0}{nM}}=\sqrt{\frac{\gamma RT}{M}} \end{equation} So you see now how pressure is tied up in the entire expression, along with the density. The assumption we had to make was that the sound wave travelling through is adiabatic, and there we have the speed of the wave in a gas of non-interacting (except for collisions) hard spheres.
If you'd like I can derive the original formula in a much more professional way from elementary fluid mechanics ... but although quicker and more sophisticated it is a little harder. (if I've made any errors feel free to edit away) :)
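As a sanity check on the final formula, plugging in standard textbook constants for dry air (used here illustratively) recovers the familiar speed of sound:

```python
import math

gamma = 1.4   # adiabatic index for diatomic air
R = 8.314     # J/(mol K), gas constant
T = 293.0     # K (about 20 degrees C)
M = 0.029     # kg/mol, approximate molar mass of air

v = math.sqrt(gamma * R * T / M)
print(f"speed of sound in air at {T} K: {v:.0f} m/s")  # ~343 m/s
```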
{ "domain": "chemistry.stackexchange", "id": 2048, "tags": "gas-laws" }
Creating a tuple from a CSV file
Question: I have written code that reads in a CSV file and creates a tuple from all the items in a group. The group ID is in column 1 of the table and the item name is in column 2. The actual datafile is ~500M rows. Is there any way to make this code more efficient? Input file: "CustID"|"Event" 1|Alpha 1|Beta 1|AlphaWord 1|Delta 2|Beta 2|Charlie 2|CharlieSay Code: def sequencer(myfile): import csv counter = 1 seq = [] sequence = [] with open(myfile, 'rb') as csvfile: fileread = csv.reader(csvfile, delimiter='|', quotechar='"') next(fileread) ## skip header for row in fileread: #if counter == 5: # break if 'word' in row[1] or 'say' in row[1]: ##if event has either word or say anywhere in the text then ignore (eg: even ignore adword or afdjklSAYwer) continue if int(row[0]) == counter: seq.extend([row[1]]) else: sequence.append(seq) seq = [row[1]] counter = counter+1 sequence.append(seq) return sequence Output: An array which is a list of lists, where each list is the sequence of events in the order of the text file for each customer ID. Answer: The problem is that a list made from the rows of a file with ~500M rows will have millions of entries, and that’s going to really hamper Python. As long as you’re using a list, I don’t think there’s any magic you can do here to magically fix that problem. So you should ask: do I need a list? You may be able to make do with a generator, which is much faster and more memory efficient (because it only computes one element at a time, not precomputing everything). For a generator, I’m assuming that the customer IDs are monotonically increasing. If not, you really do have to go through the entire file to be sure you’ve got everything for each group. But your existing code seems to assume that, so I’ll assume I can as well. Here’s a slight reworking of your code to use generators: import csv def sequencer(myfile): """ Reads customer IDs and events from a CSV file. Yields events for successive groups.
""" with open(myfile, 'rb') as csvfile: reader = csv.reader(csvfile, delimiter='|', quotechar='"') next(reader) ## skip header current_cust_id = 1 seq = [] for row in reader: cust_id, event = row if ('word' in event) or ('say' in event): continue while int(cust_id) != current_cust_id: yield seq seq = [] current_cust_id += 1 seq.append(event) yield seq I’ve done away with the massive list (sequence), and we just yield the list of per-customer events whenever the customer ID changes. A few other comments on your code: Put your imports at the top of your file; don’t hide them inside a function. There isn’t much in the way of documentation. Your function should at least have a docstring, and some more comments would be nice. For example, tell me why we’re skipping lines with “Say” or “Word”. I can read that the code is doing it, but I can’t read your mind to find out why. You were assuming that customer IDs increase by 1 every time, and that there’s never going to be a gap. What happens if you run your code against: "CustID"|"Event" 1|Alpha 1|Beta 1|AlphaWord 1|Delta 2|Beta 2|Charlie 2|CharlieSay 5|Echo 5|Delta Maybe you know that will be the case; I wasn’t sure, so my code is defensive against this possibility. Rather than doing seq.extend([row[1]]), it’s much cleaner to do seq.append(row[1]). It saves you creating an additional list.
{ "domain": "codereview.stackexchange", "id": 16307, "tags": "python, performance, algorithm, csv" }
Time Dilation Problem
Question: I'm having some trouble using the time dilation formula. Say an astronaut leaves Earth for 10 years, at 0.85c. How much time has passed according to an observer on Earth? I tried using the following formula: $$t = \frac{1}{\sqrt{1-(v^2/c^2)}}$$ but couldn't seem to get an answer that made sense. Any help would be much appreciated! This concept is super confusing to me. Answer: Your written text says "t = 1/sqrt[1-(v^2/c^2)]". If you used that equation, it's no wonder you got a nonsensical value. In the equation in the scanned image, that's a letter $t$ in the numerator, not the digit $1$. Also, although it works for this problem, the scanned image should really say something like $$\Delta t' = \frac{\Delta t}{\sqrt{1-\frac{v^2}{c^2}}}\ .$$ As it's written, it looks like it's expressing a coordinate transformation between the two frames instead of just expressing the time dilation factor, and interpreted as a coordinate transformation the equation would in general be wrong. The general coordinate transformation for the standard configuration is $$t' = \frac{1}{\sqrt{1-\frac{v^2}{c^2}}} \left( t - \frac{vx}{c^2} \right)\ .$$ That equation reduces to the scanned equation if you're only dealing with the world line $x=0$. $x=0$ in this problem expresses that the astronaut is standing still in his coordinate system.
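For the numbers in the question, reading the 10 years as the time measured aboard the ship (the proper time $\Delta t$), the Earth observer measures $\Delta t' = \gamma\,\Delta t$ with $\gamma = 1/\sqrt{1-0.85^2} \approx 1.90$, i.e. about 19 years. A quick numeric check in Python:

```python
import math

v_over_c = 0.85
proper_time = 10.0  # years measured by the astronaut (the moving clock)

# Lorentz factor: gamma = 1 / sqrt(1 - v^2/c^2)
gamma = 1.0 / math.sqrt(1.0 - v_over_c ** 2)

earth_time = gamma * proper_time
print(round(gamma, 4), round(earth_time, 2))  # 1.8983 18.98
```

If instead the 10 years were the Earth-frame time, you would divide by $\gamma$ rather than multiply; keeping track of which clock measures the proper time is the whole trick.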
{ "domain": "physics.stackexchange", "id": 16182, "tags": "homework-and-exercises, special-relativity, reference-frames, time-dilation" }
Calculate the magnitude and phase of a signal at a particular frequency in python
Question: I have a signal for which I need to calculate the magnitude and phase at the 200 Hz frequency only. I would like to use the Fourier transform for this. I am very new to signal processing, and this is my first time using a Fourier transform. I found that I can use scipy.fftpack.fft to calculate the FFT of the signal, and then use numpy.mag and numpy.phase to calculate the magnitude and phases of the entire signal. But I would like to get the magnitude and phase value of the signal corresponding to the 200 Hz frequency only. How can I do this using Python? So far I have done: from scipy.fftpack import fft import numpy as np fft_data = fft(signal) magnitude = np.mag(fft_data) phase = np.phase(fft_data) Answer: You can find the index of the desired (or the closest available) frequency in the array of resulting frequency bins using the np.fft.fftfreq function, then use the np.abs and np.angle functions to get the magnitude and phase. Here is an example using the fft.fft function from the numpy library on a synthetic signal. import numpy as np import matplotlib.pyplot as plt # Number of sample points N = 1000 # Sample spacing T = 1.0 / 800.0 # f = 800 Hz # Create a signal x = np.linspace(0.0, N*T, N) t0 = np.pi/6 # non-zero phase of the second sine y = np.sin(50.0 * 2.0*np.pi*x) + 0.5*np.sin(200.0 * 2.0*np.pi*x + t0) yf = np.fft.fft(y) # to normalize use norm='ortho' as an additional argument # Where is the 200 Hz frequency in the results? freq = np.fft.fftfreq(x.size, d=T) index, = np.where(np.isclose(freq, 200, atol=1/(2*T*N))) # tolerance of half a bin width, so at most one bin matches # Get magnitude and phase magnitude = np.abs(yf[index[0]]) phase = np.angle(yf[index[0]]) print("Magnitude:", magnitude, ", phase:", phase) # Plot a spectrum plt.plot(freq[0:N//2], 2/N*np.abs(yf[0:N//2]), label='amplitude spectrum') # in a conventional form plt.plot(freq[0:N//2], np.angle(yf[0:N//2]), label='phase spectrum') plt.legend() plt.grid() plt.show() And here is a useful manual with detailed explanations: reference.
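As a side note (my addition, not part of the original answer): for a real-valued signal you can use np.fft.rfft, which returns only the non-negative-frequency bins. The sketch below also uses np.arange rather than np.linspace so that the sample spacing is exactly 1/fs and 200 Hz lands exactly on a bin, avoiding the spectral leakage that linspace's included endpoint introduces:

```python
import numpy as np

fs = 800.0                      # sampling rate in Hz
N = 1000                        # number of samples
t = np.arange(N) / fs           # exact spacing of 1/fs seconds

t0 = np.pi / 6                  # phase of the 200 Hz component
y = 0.5 * np.sin(2 * np.pi * 200.0 * t + t0)

Y = np.fft.rfft(y)
freqs = np.fft.rfftfreq(N, d=1 / fs)
k = int(np.argmin(np.abs(freqs - 200.0)))  # bin closest to 200 Hz

amplitude = 2.0 / N * np.abs(Y[k])  # recovers the sine's amplitude, 0.5
phase = np.angle(Y[k])              # t0 - pi/2 for a sine (FFT phase is cosine-referenced)
print(amplitude, phase)
```

If the frequency of interest does not fall exactly on a bin, you need a window function (and a correspondingly blurrier estimate), or a longer record.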
{ "domain": "dsp.stackexchange", "id": 9640, "tags": "fourier-transform, python, frequency, phase, magnitude" }
Can an intersection of two context-free languages be an undecidable language?
Question: I'm trying to prove that $\exists L_1, L_2 : L_1$ and $L_2$ are context-free languages $\land\;L_1 \cap L_2 = L_3$ is an undecidable language. I know that context-free languages are not closed under intersection. This means that I can produce an $L_3$, which is undecidable. An example would be $L_1 = \{a^n | n \in \mathbb{N}\} \cap L_2 = \{0\} = \emptyset$. Is this a correct proof? If not, how can I prove this theorem? Is the empty language decidable? Answer: Context-free languages are decidable, and decidable languages are closed under intersection. So, though the intersection of two CF languages may not be CF, it is decidable. Remarks on your example: $\emptyset=\{\}\neq \{0\}$; $L_1\cap\emptyset=\emptyset$, which is context-free; you cannot prove your claim, because it is wrong; and the empty language is decidable: the answer is always "no, this string is not in the empty set".
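To make the answer's point concrete: the classic example is $L_1 = \{a^n b^n c^m\}$ and $L_2 = \{a^m b^n c^n\}$, both context-free, whose intersection $\{a^n b^n c^n\}$ is not context-free — yet deciding membership in it is trivial, so the intersection is certainly decidable. A sketch of such a decider:

```python
import re

def in_anbncn(s: str) -> bool:
    """Decide membership in {a^n b^n c^n : n >= 0}.

    This language is not context-free, but a decider is trivial:
    check the shape a*b*c*, then compare the three letter counts.
    This illustrates that intersections of CF languages stay decidable.
    """
    if not re.fullmatch(r'a*b*c*', s):
        return False
    return s.count('a') == s.count('b') == s.count('c')
```

The existence of such an obviously terminating procedure is exactly what "decidable" means here, regardless of where the language sits in the Chomsky hierarchy.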
{ "domain": "cs.stackexchange", "id": 3112, "tags": "computability, context-free, undecidability" }
Creating simulated items sold at a store into a hash
Question: I am writing code that creates a hash for a store that has an item, price, and sku. The hash is inserted into an array. I also create another hash based on the sku that holds the amount of inventory for that item. I am new to Ruby and am looking for ways to improve my code. class SimStore def item_info random = Random.new price = random.rand(1.00 ... 75.00).round(2) item = 1.times.map { RandomWord.nouns.next } sku_random = Random.new sku = sku_random.rand(1000 ... 1000000) sum_store_hash = {:item => item, :price => price, :sku => sku} end def stock array_items_stock stock_hash = {} array_items_stock.each do |inventory| random = Random.new quantity = random.rand(10 ... 100) stock_hash[inventory[:sku]] = quantity end stock_hash end end items = 15; array_items = [] hash_inventory = [] while items > 0 items = items - 1 sim_store = SimStore.new items_hash = sim_store.item_info array_items.push(items_hash) end hash_inventory = sim_store.stock array_items Answer: The SimStore class doesn't seem to do much except create a hash and create an item, and these seem like separate concerns. You could really use an Item class with a class-level method for stubbing out an Item. First, let's start with an Item class: class Item attr_accessor :price, :name, :sku, :quantity def initialize(name, sku, price, quantity) self.name = name self.sku = sku self.price = price self.quantity = quantity end def self.stub price = Random.new.rand(1.00 ... 75.00).round(2) name = 1.times.map { RandomWord.nouns.next } sku = Random.new.rand(1000 ... 1000000) quantity = Random.new.rand(10 ... 100) self.new name, sku, price, quantity end def to_h { name: name, price: price, sku: sku, quantity: quantity } end end It has price, name, sku, and quantity accessors (getters and setters). A static, or class-level, method called Item.stub generates a fake item based on the code you are using. Lastly, it has a to_h method for converting the Item into a Hash.
Now we can take one step back and think about your SimStore, which at this point doesn't simulate anything, so we can rename it to Store: class Store def items @items ||= [] end def self.stub(number_of_items) store = self.new number_of_items.times do store.items << Item.stub end store end def to_h items.map { |item| item.to_h } end def sku_hash items.map { |item| [item.sku, item.quantity] }.to_h end end (On Ruby versions before 2.1, which lack Array#to_h, write Hash[items.map { |item| [item.sku, item.quantity] }] instead.) It has an array of Items and a to_h method that converts the store's items into an array of hashes. Additionally, it also has a class method called stub, which accepts the number of items that should be stubbed out. You wanted to convert the items into a Hash with SKUs for the keys and quantities for the values, so explicitly creating a sku_hash method seems more appropriate. And to use it: store = Store.stub 15 store_hash = store.to_h # => [{name: '...', sku: 123, price: 23.99, quantity: 42}, ...] sku_hash = store.sku_hash # => { 'sku1' => 2, 'sku2' => 55 } By separating the Item from the Store and adding class methods to stub items and entire stores, you get two classes that can actually be used in a real context, but also can be used just for playing around.
{ "domain": "codereview.stackexchange", "id": 16584, "tags": "ruby, hash-map" }
Does the System F with pairs have the strong normalisation and subject reduction properties?
Question: It is easy to find in many textbooks the proofs of subject reduction and strong normalisation for System F. Sometimes there are also definitions of System F with pairs, where (t,r) is a term, not only an encoding. The question is: what would be the reference for this system? Answer: The treatment of pairs given by encoding, such as that in Proofs and Types, isn't what you usually want, since such pairs aren't "surjective pairs", i.e., there is no eta rule. Let's call surjective pairs products. An extension of System F with products and unit is given in: Di Cosmo, 1995, Isomorphisms of types: from lambda-calculus to information retrieval and language design, Birkhäuser: Basel.
{ "domain": "cstheory.stackexchange", "id": 912, "tags": "reference-request, lo.logic, type-theory, lambda-calculus, polymorphism" }
Program to print a triangle of numbers
Question: I want to print the following triangle: 1 1 2 1 2 3 1 2 3 4 1 2 3 4 5 1 2 3 4 1 2 3 1 2 1 I tried, but I cannot avoid using two for loops. Is it possible to achieve the result in a single for loop? static void printtriangle() { for (int i = 1; i <= 5; i++) { for (int j = 1; j <= i; j++) { Console.Write("" + (j)); } Console.WriteLine(""); } for (int k = 4; k >= 0; --k) { for (int j = 1; j <= k; j++) { Console.Write("" + (j)); } Console.WriteLine(""); } } Answer: You can do it with one loop that goes from a negative number to a positive one, creating each row from a range whose length is calculated from the loop counter: int max = 5; for (int i = 1 - max; i < max; i++) { Console.WriteLine(String.Join(" ", Enumerable.Range(1, max - Math.Abs(i)))); } That is of course actually two loops: the Range method creates the numbers, and the Join method loops through them.
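The same negative-to-positive index trick carries over directly to other languages. For comparison, a Python sketch of the identical idea (where, as in the C# version, the "single" loop still hides a second loop inside range and join):

```python
def print_triangle(max_n=5):
    # Row i runs from 1 - max_n to max_n - 1; its width is max_n - |i|.
    for i in range(1 - max_n, max_n):
        print(" ".join(str(j) for j in range(1, max_n - abs(i) + 1)))
```

The key observation in either language is that the row width is a simple function of the distance from the middle row, max_n - abs(i), which removes the need for separate ascending and descending loops.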
{ "domain": "codereview.stackexchange", "id": 11906, "tags": "c#, formatting" }
Extracting specific rows and columns from a CSV file
Question: I have written a function to selectively extract data from a file: only a given range of lines, and within each line only a given span of columns. Would converting this function into a generator reduce the overhead when I need to process large files? import itertools import csv def data_extraction(filename,start_line,lenght,span_start,span_end): with open(filename, "r") as myfile: file_= csv.reader(myfile, delimiter=' ') #extracts data from .txt as lines return (x for x in [filter(lambda a: a != '', row[span_start:span_end]) \ for row in itertools.islice(file_, start_line, lenght)]) Answer: Use round parentheses for generators. Also, the x for x in was unnecessary: return (filter(lambda a: a != '', row[span_start:span_end]) \ for row in itertools.islice(file_, start_line, lenght)) If you use Python 2 you should use itertools.ifilter, because it returns a lazy iterator while Python 2's filter returns a list. The function is pretty clear overall. I suggest you space your argument list according to PEP 8 conventions. Also investigate easier-to-remember argument formats like f(file, line_range, inline_range), where two tuples replace four arguments.
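Following the answer's suggestion about tuple arguments, here is one possible Python 3 shape for the function (the names line_range and col_range are my own; in Python 3, filter already returns a lazy iterator, so ifilter is unnecessary):

```python
import csv
import itertools

def data_extraction(filename, line_range, col_range):
    """Yield the non-empty cells of each selected row.

    line_range and col_range are (start, stop) tuples, replacing the
    four separate positional arguments of the original function.
    """
    start_line, stop_line = line_range
    start_col, stop_col = col_range
    with open(filename, newline='') as myfile:
        rows = csv.reader(myfile, delimiter=' ')
        for row in itertools.islice(rows, start_line, stop_line):
            yield [cell for cell in row[start_col:stop_col] if cell != '']
```

Note this yields plain lists rather than filter objects, which is friendlier to callers that want to index or re-iterate a row.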
{ "domain": "codereview.stackexchange", "id": 22118, "tags": "python, performance, csv, generator" }
Why are there two structures of L-Glucose in the Fischer projection?
Question: We are told that L-Glucose is an enantiomer of D-Glucose, which would mean that they are mirror images of each other. But then we're also told that the L and D form is determined by the spatial relationship to glyceraldehyde. Which of the structures is correct? Are they the same? Answer: It looks to me like the left one is just wrong. It looks like whoever made the image took D-glucose and just flipped the 5-OH. The result? The molecule at left is not glucose, but L-idose!
{ "domain": "chemistry.stackexchange", "id": 13102, "tags": "stereochemistry, carbohydrates" }
Why is a vertex a derivative of the propagator?
Question: Where can I find the proof of this nice trick: if the momentum $q$ is small, the vertex is the derivative with respect to the mass of a propagator, times a factor $(-m/v)$, like in the picture: Answer: First note that \begin{equation} \frac{\partial}{\partial m} \frac{i}{\not p - m} = \frac{i}{(\not p-m)^2} \end{equation} Now the exact answer for finite $q$ for that diagram is \begin{equation} \mathcal{A} = \left( \frac{im}{v} \right) \frac{i}{\not p-m} \frac{i}{\not p + \not q - m} G(q) \end{equation} where $G(q)$ is the propagator for whatever particle is on the dotted line (I'm guessing it's a Higgs). Now for whatever reason they've stripped the dotted propagator from the answer$^1$; let's just take that as given, so we are really interested in \begin{equation} \mathcal{A}' = \frac{im}{v} \frac{i}{\not p- m} \frac{i}{\not p + \not q -m} \end{equation} In the limit $q\rightarrow 0$, the above becomes \begin{equation} \mathcal{A}' = \frac{im}{v} \left(\frac{-1}{(\not p-m)^2} + O (q)\right) \rightarrow \frac{-m}{v} \frac{i}{(\not p-m)^2} = \frac{-m}{v} \frac{\partial}{\partial m} \frac{i}{\not p - m} \end{equation} which is what you wanted. The overall point is that things simplify in the $q\rightarrow 0$ limit. $^1$ I'm not 100% sure why they do this without thinking about it more, but on a quick glance through the notes it looks like they have in mind processes like $H\rightarrow \gamma \gamma$, so in the identity you use the Higgs is implicitly taken to be an external particle, whereas the fermions run in a loop. Thus the Higgs propagator will disappear since it is an external line in the diagram you are ultimately interested in, whereas the fermion propagators show up in the loop. The net result of this trick is then to effectively convert a three-point function at one loop to the derivative of a two-point function at one loop, which is much easier to handle.
Again, physically, the idea is that as the momentum becomes very small the calculation should simplify. This kind of trick is very useful.
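Ignoring the matrix structure of the slashed momenta, the $q \to 0$ step can be sanity-checked numerically with ordinary numbers: the product of the two propagator denominators tends to $1/(\not p - m)^2$, which is exactly $\partial_m\, 1/(\not p - m)$. A scalar stand-in check in Python (the values of p, m, q, h are arbitrary):

```python
p, m = 2.3, 0.7      # arbitrary scalar stand-ins for the slashed momentum and mass
q, h = 1e-6, 1e-5    # small external momentum; finite-difference step

def prop(mass):
    # scalar stand-in for the propagator 1/(p - m)
    return 1.0 / (p - mass)

two_props = prop(m) * 1.0 / (p + q - m)            # product of the two propagators at small q
d_prop_dm = (prop(m + h) - prop(m - h)) / (2 * h)  # numerical d/dm of one propagator

print(two_props, d_prop_dm)  # both ~ 1/(p - m)^2 = 0.390625 for these values
```

This only verifies the small-q expansion of the denominators, of course; the full identity also carries the $(-m/v)$ coupling factor and the spinor structure from the $i$'s in the numerators.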
{ "domain": "physics.stackexchange", "id": 24548, "tags": "quantum-field-theory, renormalization, feynman-diagrams, propagator" }
Optimal maintenance temperature and allowable drop for energy efficient hot water storage
Question: I bought an under-sink electric boiler/hot water storage unit, for instant almost-boiling water. This device has an 'eco' mode, which broadens its hysteresis to 10 degrees C: it will allow the stored water temperature to drop to $T_\text{target} - 10$ before turning on the element. This made me curious, since naively I would think that it will turn on less often of course, but for longer (or hotter) when it does, consuming just as much energy overall. If we assume: the element has a fixed resistance (e.g. it does not deliberately vary with $T_\text{target} - T_\text{now}$), i.e. a fixed kW consumption $P$ s.t. $E=P\cdot{t_\text{heated}}$ and we need only worry about $t_\text{heated}$; the unit has a fixed efficiency across all $T_\text{now}$; standby power is negligible; (so as to avoid specifics about the particular device) I know: the stored unheated water will cool exponentially over time; the heated water will rise in temperature as an inverse exponential, asymptotic to $T_\text{element}$; let: $t_{\text{fall},x}$ be the time taken for the unheated water to fall from $T_\text{target}$ to $T_\text{target} - x$; $t_{\text{rise},x}$ be the time taken for the heated water to rise from $T_\text{target}-x$ to $T_\text{target}$; then: what is the relationship between these exponents, and what else are they a function of, if it's really the case that $$\dfrac{t_{\text{rise},x}}{t_{\text{rise},y}} \ne \dfrac{t_{\text{fall},x}}{t_{\text{fall},y}}$$? And if they are only a function of time, $T_\text{target}$, and $T_\text{ambient}$ (room/insulation, or the element, for cooling or heating respectively), then surely 'eco' would be setting an appropriate $T_\text{target}$, not the max allowable $T_\text{target}-T_\text{now}$? Answer: The heating is not asymptotic; it's roughly linear in time. The reason is that when you are heating you are supplying a constant heat flux (i.e. heat energy per unit of time).
The underlying equation is: $$Q = m \cdot c_p \cdot \Delta T$$ where: Q is the heat energy provided, m is the mass of the liquid, $c_p$ is the heat capacity of the liquid, and $\Delta T$ is the temperature difference. So if you differentiate with respect to time (keeping m and $c_p$ constant): $$\frac{dQ}{dt} = m\cdot c_p \cdot \frac{d T}{dt}$$ The reason the last degrees take longer is that the losses to the environment increase with temperature; however, they are a significantly smaller term in the overall heat energy balance.
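Putting the question's two regimes side by side under the answer's model: with a constant element power $P$, the reheat time is linear in the recovered drop $x$ ($t_{\text{rise},x} = m c_p x / P$), while Newton cooling gives a logarithmic fall time $t_{\text{fall},x} = \tau \ln\frac{T_\text{target}-T_\text{amb}}{T_\text{target}-x-T_\text{amb}}$. A sketch with made-up numbers (the 10 L tank, 2 kW element, and 24 h cooling time constant are assumptions, not data from the question):

```python
import math

m_kg, c_p, P = 10.0, 4186.0, 2000.0        # 10 L of water, J/(kg K), 2 kW element
T_target, T_amb = 70.0, 20.0               # degC
tau = 24 * 3600.0                          # assumed cooling time constant, seconds

def t_rise(x):
    """Seconds of heating needed to recover a drop of x kelvin (constant power)."""
    return m_kg * c_p * x / P

def t_fall(x):
    """Seconds for the tank to cool from T_target to T_target - x (Newton cooling)."""
    return tau * math.log((T_target - T_amb) / (T_target - x - T_amb))

# Reheat time is linear in x, so its ratios track x exactly;
# cooling slows as the tank approaches ambient, so its ratios exceed x ratios.
print(t_rise(10) / t_rise(1))   # exactly 10
print(t_fall(10) / t_fall(1))   # > 10
```

Under these assumptions the electrical energy per recovered kelvin is constant, so a wider hysteresis does not change heating efficiency; any 'eco' saving comes from the lower average tank temperature reducing the standing losses.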
{ "domain": "engineering.stackexchange", "id": 5070, "tags": "thermodynamics, heat-transfer, heating-systems, fluid" }
An interesting use of the reduce function to convert tab-delimited data into JSON
Question: I am learning functional programming in JavaScript. This is some code from a video tutorial by funfunfunction. I am particularly concerned about the way the existence of the customer key is checked in the customers object. Is it weird to do it this way? Is there any other way? Also, if you switch the order of the OR statement operands, the object is overwritten. Does order matter in an OR statement? The function: var fs = require('fs') var output = fs.readFileSync('data2.txt', 'utf8') .trim() .split('\r\n') .map(function(line) { return line.split('\t') }) .reduce(function(customers, line) { customers[line[0]] = customers[line[0]] || [] customers[line[0]].push({ item: line[1], quantity: line[2], price: line[3], }) return customers }, {}) console.log('output', JSON.stringify(output, null, 2)); The data: Marry Poppins red bicycle 80 2 Marry Poppins glass vase 20 8 Abe Lincoln gold ring 1 100 Marry Poppins umbrella 5 50 Steve Rogers video camera 1 100 Abe Lincoln president chair 1 1000 Marry Poppins blue dress 2 200 Answer: So for your first question, that is not a weird way to check the existence, and it is used quite frequently to check whether certain variables/properties are set (such as checking browser compatibility, or optional arguments with defaults in functions). It typically works as "use the first value if it is truthy, otherwise use the second." You could check/set it any number of ways, such as with the ternary operator: customers[line[0]] = customers[line[0]] ? customers[line[0]] : []; In the same vein, you could just use an if statement to check whether it is defined. Order definitely does matter in an OR expression. In JavaScript (and most other languages) the first operand is always evaluated, and the second operand is evaluated only if the first evaluates to a falsy value. In your example, customers[line[0]] is only assigned a new empty array if customers[line[0]] is undefined.
If you reverse the order, customers[line[0]] will always be assigned the empty array (an empty array is truthy in JavaScript), and the second half of the expression is ignored. JavaScript is the only language I've used where OR expressions are routinely used outside of actual control statements, so these expressions were definitely puzzling to look at at first. Another, similar operator is the comma (,) operator, which is essentially the lowest in the order of operations. It returns the last of the comma-separated expressions, but all of the expressions are evaluated.
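For comparison (and to keep all added examples in one language), here is the same guard-or-default idiom sketched in Python — with the caveat that truthiness differs between the two languages: an empty list is falsy in Python, whereas an empty array or object is truthy in JavaScript, so reversing the operands fails differently in each:

```python
customers = {}

def add_order(customers, name, order):
    # Keep the existing list if there is one, otherwise start a new one.
    # (Reversed operands, `[] or customers.get(name)`, would instead yield
    # None for new keys, because `[]` is falsy in Python.)
    customers[name] = customers.get(name) or []
    customers[name].append(order)

add_order(customers, "Marry Poppins", {"item": "red bicycle", "quantity": "80"})
add_order(customers, "Marry Poppins", {"item": "glass vase", "quantity": "20"})
print(customers["Marry Poppins"])  # both orders accumulate under the one key
```

The idiomatic Python spelling is customers.setdefault(name, []).append(order), which sidesteps the truthiness question entirely.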
{ "domain": "codereview.stackexchange", "id": 20469, "tags": "javascript, node.js, functional-programming, json, csv" }
Text based survival game
Question: I have recently made a text based survival game. I hope that I can improve the game but I am not sure where I can improve it. It is NOT EASY, don't expect to get over it the first time. #import import random #functions def GenerateRandomScene(): scenes = ['Riverside','Top of the Mountain','Middle of Forest','Mountain Side']#desert excluded water = [True,False] startitems = ['Match','Match','Match','Match','Match','Pocket Knive','Jacket','Full Bottle of Clean Water','Full Bottle of Clean Water','Energy Bar','Banadges','Walking Stick','Journal','Match','Tea Leaves','Cooking Set'] randomstarting1 = ['Magnesium Fire Starter', 'Warming Pack', 'Sleeping Bag', 'Axe', 'Torch'] randomstarting2 = ['Batteries', 'Bucket', 'Vaseline', 'Hygiene Kit', 'String', 'Camp Set'] sceneforplayer = random.choice(scenes) if sceneforplayer == 'Riverside': water = True elif sceneforplayer == 'Desert': water = False else: water = random.choice(water) startitems.append(random.choice(randomstarting1)) startitems.append(random.choice(randomstarting2)) return [[random.randint(50,100),sceneforplayer,water],startitems] def GenerateRandomExplore(scene): itemscanbeobtained = ['Berries', 'Mushrooms', 'Tree Bark', 'Plant Fibre', 'Dead Grass', 'Dead Hare', 'Bones','Water Puddle'] retlist = [] if scene == 'Riverside': for i in range(random.randint(0,3)): retlist.append(random.choice(itemscanbeobtained)) return retlist elif scene == 'Top of the Mountain': itemscanbeobtained.remove('Berries') itemscanbeobtained.remove('Mushrooms') itemscanbeobtained.remove('Dead Hare') itemscanbeobtained.append('Dead Birds') itemscanbeobtained.append('Bait') itemscanbeobtained.append('Bird Nest') for i in range(random.randint(0,3)): retlist.append(random.choice(itemscanbeobtained)) return retlist elif scene == 'Middle of Forest': itemscanbeobtained.append('Bait') itemscanbeobtained.append('Bird Nest') for i in range(random.randint(0,3)): retlist.append(random.choice(itemscanbeobtained)) return retlist elif scene 
== 'Desert': itemscanbeobtained = ['Catti', 'Dead Catti Skin', 'Tree Bark', 'Stick', 'Well', 'Bones'] for i in range(random.randint(0,3)): retlist.append(random.choice(itemscanbeobtained)) return retlist else: itemscanbeobtained.remove('Berries') itemscanbeobtained.remove('Mushrooms') itemscanbeobtained.remove('Dead Hare') itemscanbeobtained.append('Dead Birds') itemscanbeobtained.append('Bait') itemscanbeobtained.append('Bird Nest') for i in range(random.randint(0, 3)): retlist.append(random.choice(itemscanbeobtained)) return retlist def GetWood(inventory): if 'Axe' in inventory: minnum = 3 maxnum = 7 else: minnum = 1 maxnum = 5 return random.randint(minnum,maxnum) def Hunting(location,inventory,water): fishingrod = SearchItem('Fishing Rod',inventory) hygiene = SearchItem('Hygiene Kit',inventory) prey = ['bear','lion','cheetah','deer','cow','pig','fox','wolf','rabbit','toad','','',''] if fishingrod != 0 and water: prey.append('fish') if hygiene != 0: for i in range(2): prey.append('deer') prey.append('cow') prey.append('pig') prey.append('fox') prey.append('wolf') prey.append('rabbit') prey.append('toad') desertprey = ['camel','','','','','','','scorpian','poisonous scorpian'] if location == 'Desert': appear = random.choice(desertprey) if appear == 'poisonous scorpian': return [appear,'lethal'] else: return [appear,'True'] else: appear = random.choice(prey) if appear in ['bear','lion','cheetah']: success = random.choice(['True','False','False','False','False','False','False','False','False','False','False']) if success == 'False': injury = random.choice(['leg','arm','chest','broken ribs','broken leg','neck','lethal']) return [appear,injury] else: return [appear,success] else: return [appear,'True'] def Menu(hydration,heat,hour,watersource,dndeterminer,dist,hunger): if hour > 12: hour = hour - 12 if dndeterminer == 0: dndeterminer = 1 elif dndeterminer == 1: dndeterminer = 0 if dndeterminer == 1: print "There are "+str(12-hour)+" hours of night time left" else: 
print "There are "+str(12-hour)+" hours of day time left" print '1. Chop Trees' print '2. Make Fire' print '3. Discover' print '4. Read The Rule of 3' print '5. Rest' print '6. Hunt' print '7. Walk' print '8. Craft' print '9. View Inventory' print '10. Drink Water' print '11. Eat' print '12. Use Items' print '13. Build Shelter' if watersource: print "There are Water Sources around You" print '14. Get Water' print 'Hydration: '+str(hydration) print 'Body Heat: '+str(heat) print "Hunger: "+str(hunger) print 'Distance between Civilisation: '+str(dist)+ ' km' print return [raw_input('Action >>> '),hour,dndeterminer] def Ruleof3(): print '3 mins without air' print '3 hours without shelter' print '3 days without water' print '3 weeks without food' print '3 months without hope' def Intro(): print "Welcome!" print "This game is all about your survival techinques" print "Always Remember the rule of 3" Ruleof3() scene = GenerateRandomScene() print "Your charactor is trapped in a "+scene[0][1] print "The distance to civilisation is "+str(scene[0][0])+" km" print "Your aim is to help your charactor survive" DisplayInventoryTry2(scene[1]) return scene def Walk(dist,hydration,heat,inventory,dndeterminer): if 'Walking Stick' in inventory: maxwalking = 8 else: maxwalking = 5 if dndeterminer == 1 and ("Torch" not in inventory or "Fire Torch" not in inventory): maxwalking = 2 walked = random.randint(1,maxwalking) dist -= walked hydration -= 10 heat += 10 return [walked,hydration,heat] def DisplayInventoryTry2(inventory): amtcount = [] amtcountunique = [] for i in range(len(inventory)): if inventory[i] not in amtcountunique: amtcount.append([inventory[i],1]) amtcountunique.append(inventory[i]) else: for x in amtcount: if inventory[i] == x[0]: x[1]+=1 print "You have: " for i in amtcount: print str(i[1])+" x "+str(i[0]) def DrinkWater(inventory): amtcount = [] amtcountunique = [] water = [] for i in range(len(inventory)): if inventory[i] not in amtcountunique: 
amtcount.append([inventory[i], 1]) amtcountunique.append(inventory[i]) else: for x in amtcount: if inventory[i] == x[0]: x[1] += 1 for i in amtcount: if i[0] in ["Full Bottle of Clean Water","Full Bottle of Dirty Water"]: water.append(i) if water[0][0] == "Full Bottle of Dirty Water": water[0],water[1] = water[1],water[0] print "You have: " if len(water) != 0: for i in range(len(water)): print str(i+1)+". " + str(water[i][1])+" x "+str(water[i][0]) print "Which water would you want to drink? " waters = raw_input(">>> ") if waters == '1': if water[0][1] - 1 >= 0: inventory.remove('Full Bottle of Clean Water') inventory.append('Empty Bottle') return [50, inventory] elif waters[0] == '2': if water[1][1] - 1 >= 0: inventory.remove('Full Bottle of Dirty Water') inventory.append('Empty Bottle') if random.randint(1,10) < random.randint(1,10): return [30,inventory] else: print "The water is contaminated. Hydration - 30!" return [-30,inventory] else: print "Invalid Choice, No Such Choice Exists" else: print "You have no water supplies" def CheckValidBody(hydration): if hydration > 100: hydration = 100 return hydration else: return hydration def CheckValidHunger(hydration): if hydration < 0: hydration = 0 return hydration else: return hydration def SearchItem(item,inventory): amtcount = [] amtcountunique = [] for i in range(len(inventory)): if inventory[i] not in amtcountunique: amtcount.append([inventory[i], 1]) amtcountunique.append(inventory[i]) else: for x in amtcount: if inventory[i] == x[0]: x[1] += 1 for i in amtcount: if i[0] == item: amount = i[1] try: return amount except: return 0 def ReplaceItem(item,changeto, inventory): inventoryx = inventory inventoryx.remove(item) for i in range(len(inventory)-len(inventoryx)): inventory.append(changeto) return inventoryx def MakeFire(inventory): inventoryx = FireLightingMethod(inventory) if inventoryx == inventory: print "You cannot light fire as you do not have enough tinder" return [0,inventory] else: print 'Fire Lighted!' 
print 'Body Heat + 30!' return [30,inventoryx] def SearchFood(inventory): food = ['berries', 'mushrooms', 'bear flesh', 'lion flesh', 'cheetah flesh', 'deer flesh', 'cow flesh', 'pig flesh', 'fox flesh', 'wolf flesh', 'rabbit flesh', 'toad flesh', 'camel flesh', 'scorpian flesh', 'poisonous scorpian flesh', 'fish'] energy = [15,15,40,70,70,40,50,40,20,20,40,20,40,20,20,60] ownedfood = [] for i in inventory: if i in food: ownedfood.append(i) if len(ownedfood) == 0: print "You do not have any food" else: DisplayInventoryTry2(ownedfood) print "What do you want to eat?" consume = raw_input(">>> ").lower() for i in ownedfood: if consume == i: owned = True else: owned = False if not owned: print "You do not have this food" else: inventory.remove(consume) if consume == 'rabbit flesh': inventory.append('rabbit skin') for i in range(len(food)): if food[i] == consume: energy = energy[i] return [inventory,energy] def CraftingList(): print ">>> Crafting List <<<" print "1. Tinder: 1 x Wood > 3 x Tinder" print "2. String: 1 x Plant Fibre > 1 x String" print "3. String: 1 x Dead Grass > 1 x String" print "4. String: 1 x Cloth > 2 x String" print "5. Cloth: 1 x Clothes > 3 x Cloth ***Tearing Clothes will increase the rate of heat loss!" print "6. Bow Drill: 1 x String, 2 x Wood > 1 x Bow Drill" print "7. Bone Knive: 1 x Bone, 1 x String > 1 x Bone Knive" print "8. Bandage: 1 x Cloth > 3 x Bandage" print "9. Fire Torch: 1 x Wood, 1 x String, 1 x Vaseline, 1 x Tree Bark > 1 x Fire Torch" print "10. Fishing Rod: 1 x Wood, 1 x String, 1 x Knive > 1 x Fishing Rod" print "11. Water Bottle: 1 x Rabbit Skin > 1 x Water Bottle" print "Which one do you want to craft?" 
print return raw_input(">>> ") def CraftItem(inventory,required_materials,crafted): amtcount = [] amtcountunique = [] consumestr = "" for i in range(len(required_materials)): if required_materials[i] not in amtcountunique: amtcount.append([required_materials[i],1]) amtcountunique.append(required_materials[i]) else: for x in amtcount: if required_materials[i] == x[0]: x[1]+=1 for i in amtcount: consumestr = consumestr + str(i[1])+" x "+str(i[0]) + " " craftedstr = str(crafted[1])+" x "+str(crafted[0]) print "Crafting "+str(craftedstr)+" will consume "+str(consumestr) print "Are you sure?" confirmation = raw_input("Y / N >>> ") if confirmation.lower() == "y": try: for i in required_materials: inventory.remove(i) for i in range(int(crafted[1])): inventory.append(crafted[0]) return inventory except: print "You are missing some indgredients" return inventory def FireLightingMethod(inventory): lightingitems = ['Bow Drill', 'Match', 'Magnesium Fire Starter', 'Batteries'] ownedlighting = [] for i in inventory: if i in lightingitems: ownedlighting.append(i) print "You have these lighting items:" DisplayInventoryTry2(ownedlighting) print "Which one do you want to use to light?" tolight = raw_input(">>> ").lower() found = False for i in ownedlighting: if tolight == i.lower(): removeitem = i found = True if not found: print "You cannot light fire as you do not have the item" for i in inventory: if i == 'Tinder': if tolight == 'match': inventory.remove(removeitem) else: pass inventory.remove('Tinder') return inventory def UseableItemsOutput(inventory): useableitems = ['Tea Leaves','Warming Pack','Energy Bar'] useableowned = [] for i in inventory: if i in useableitems: useableowned.append(i) if len(useableowned) != 0: print 'You own these useable items:' DisplayInventoryTry2(useableowned) print ">>> Which one do you want to use? 
<<<" use = raw_input("Please type in the full item name >>> ").lower() for i in useableowned: if use == i.lower(): inventory.remove(i) return [inventory,i] break else: print 'You do not own any useable items' def UsedItem(hydration,heat,hunger,useitem): import time useableitems = ['Tea Leaves', 'Warming Pack', 'Energy Bar'] if useitem not in useableitems: print "Error 001, item to use is not in the Useable Items List!" print "Creating Crash Log." f = open('CrashLog: Error 001','a+') f.writelines(time.asctime()) f.writelines('Item To Use: '+useitem) f.close() raise ValueError,'item to use is not in the Useable Items List!' else: if useitem == useableitems[0]: hydration = 100 print "Hydration Restored to 100" elif useitem == useableitems[1]: heat = 100 print "Heat Restored to 100" elif useitem == useableitems[2]: hunger -= 30 print "Hunger - 30" return [hydration,heat,hunger] #Variables inform = Intro() dist = inform[0][0] scene = inform[0][1] watersource = inform[0][2] startingitems = inform[1] hour = 0 dndeterminer = 0 hydration = 100 heat = 100 hunger = 0 shelter = False clothing = True #__main__ while dist > 0: if hydration > 0 and heat > 0 and hunger < 100: print if not clothing: #for minising heat -= 5 action = Menu(hydration,heat,hour,watersource,dndeterminer,dist,hunger) hour = action[1] dndeterminer = action[2] action = action[0] if action == '1': if dndeterminer == 0: wood = GetWood(startingitems) else: wood = random.randint(0,2) for i in range(wood): startingitems.append('Wood') print str(wood) + ' log(s) are obtained' hour += 1 heat += 10 heat = CheckValidBody(heat) hydration -= 15 hunger += 5 hydration = CheckValidBody(hydration) print "You used 1 hour to collect some logs" elif action == '2': fire = MakeFire(startingitems) startingitems = fire[1] heat += fire[0] try: startingitems = ReplaceItem('Full Bottle of Dirty Water','Full Bottle of Clean Water', startingitems) print "Water Purified!" 
except: pass elif action == '3': if dndeterminer == 0: discover = GenerateRandomExplore(scene) discover_str = '' else: discover = ["Nothing"] discover_str = '' if discover != []: for i in discover: discover_str = discover_str + i + ' ' startingitems.append(i) print discover_str + 'is/are obtained' else: print "Nothing is found" hour += 1 heat += 10 heat = CheckValidBody(heat) hydration -= 10 hunger += 10 hydration = CheckValidBody(hydration) print "You have used 1 hour to explore around you" elif action == '4': # this line is used for minising Ruleof3() elif action == '5': hour += 5 print "You have used 5 hours to rest, energy is restored" hydration -= 5 if shelter: pass else: heat -= 15 if 'Sleeping Bag' in startingitems: heat += 20 hunger += 15 heat = CheckValidBody(heat) hydration = CheckValidBody(hydration) elif action == '6': captured = Hunting(scene,startingitems,watersource) hour += 1 if captured[1] == 'True' and captured[0] != 'poisonous scorpian': if captured[0]: print "You saw a "+captured[0]+" and you captured it" startingitems.append((captured[0]+' flesh')) print 'You have used 1 hour to hunt' hydration -= 15 hydration = CheckValidBody(hydration) heat += 15 heat = CheckValidBody(heat) else: print "You didn't see anything" elif captured[1] == 'False': print "You saw a "+captured[0]+" and it escaped... bad luck!" else: print "You saw a "+captured[0]+" and it attacked you, causing a "+captured[1]+" injury!" 
if captured[1] in ['broken ribs','lethal','neck']: print "You died" dist = 0 hunger += 20 elif action == '7': walked = Walk(dist,hydration,heat,startingitems,dndeterminer) hour += 4 dist -= walked[0] hydration = walked[1] hydration = CheckValidBody(hydration) heat = walked[2] heat = CheckValidBody(heat) watersource = random.choice([True,False]) print "You have used 4 hours to walk "+str(walked[0])+" km" hunger += 20 elif action == '8': craft = CraftingList() if craft == "1": startingitems = CraftItem(startingitems, ["Wood"], ["Tinder", "3"]) elif craft == "2": startingitems = CraftItem(startingitems, ["Plant Fibre"], ["String", "1"]) elif craft == "3": startingitems = CraftItem(startingitems, ["Dead Grass"], ["String", "1"]) elif craft == "4": startingitems = CraftItem(startingitems, ["Cloth"], ["String", "2"]) elif craft == "5": startingitemsx = CraftItem(startingitems, ["Clothes"], ["Cloth", "3"]) if startingitemsx != startingitems: clothing = False startingitemsx = startingitems elif craft == "6": startingitems = CraftItem(startingitems, ["String","Wood","Wood"], ["Bow Drill", "1"]) elif craft == "7": startingitems = CraftItem(startingitems, ["Bone","String"], ["Bone Knive", "1"]) elif craft == "8": startingitems = CraftItem(startingitems, ["Cloth"], ["Bandage", "3"]) elif craft == "9": startingitems = CraftItem(startingitems, ["Wood","Vaseline","String","Tree Bark"], ["Fire Torch", "1"]) elif craft == "10": startingitems = CraftItem(startingitems, ["Wood","String","Knive"], ["Fishing Rod", "1"]) elif craft == "11": startingitems = CraftItem(startingitems, ["Rabbit Skin"], ["Water Bottle", "1"]) else: print "This is not a valid choice" elif action == '9': #this line is used for minising DisplayInventoryTry2(startingitems) elif action == '10': water = DrinkWater(startingitems) try: hydration += water[0] hydration = CheckValidBody(hydration) startingitems = water[1] except: pass elif action == '11': print "Energy Bars are in Use Items Section. 
It has an amazing buff!" food = SearchFood(startingitems) try: startingitems = food[0] hunger -= food[1] CheckValidHunger(hunger) except: pass elif action == '12': check = UseableItemsOutput(startingitems) startingitems = check[0] used = check[1] useitem = UsedItem(hydration,heat,hunger,used) hydration = useitem[0] heat = useitem[1] hunger = CheckValidHunger(useitem[2]) elif action == '13': if 'Camp Set' in startingitems: hour += 1 else: hour += 4 print "Shelter Built, Rest would not decrease heat." elif action == '14': if watersource: emptybottles = SearchItem('Empty Bottle',startingitems) if emptybottles == 0: print "You do not have empty bottles" else: startingitems = ReplaceItem('Empty Bottle',"Full Bottle of Dirty Water",startingitems) print str(emptybottles)+' x Empty Bottles has been filled up to '+str(emptybottles)+' x Full Bottle of Dirty Water' hunger += 5 else: print "This is not a valid input" else: # this line is used for minising print "This is not a valid input!" else: dist = 0 if hydration <= 0: print "You died of thirst" print "GAME OVER..." elif heat <= 0: print "You died of hypothermia" print "GAME OVER..." elif hunger >= 100: print "You have starved to death" print "GAME OVER..." if dist <= 0 and hydration > 0 and heat > 0 and hunger < 100: print "WELL DONE!" print "YOU SURVIVED!!!!!" Well, you might ask: what happened to the desert? I think the desert is a bit too troublesome because there are different logics to the normal game. Python version 2.6. Answer: Style There's a standard coding style recommendation for Python called PEP8. It's strongly recommended to follow that as much as possible. Avoid assigning to a different type This statement assigns a simple value to a variable that was originally list: action = action[0] This is a bad practice that makes it harder to understand the code. It's better to use a different name, and avoid reassigning a value to a different type. 
Unnecessary conditions

Instead of this:

if hydration < 0:
    hydration = 0
    return hydration
else:
    return hydration

It would be simpler and better like this:

if hydration < 0:
    hydration = 0
return hydration

Fragile menus

The menu handling in Menu and in CraftingList is very fragile. The text presented to the user is hard-coded, and the code that uses these menus checks the choice against hard-coded values, such as "1", "2", and so on. The problem with this is that if you later need to change a number in the text, you have to remember to change it everywhere it is used. The worst case is if you need to insert a new menu option in the middle, let's say at position 3: then you have to shift all the other options and all the code that uses them. You may also get a condition wrong by using a number that doesn't correspond to the intended choice. It would be better to encapsulate the menu choices in a data type, let's call it a MenuItem. Each MenuItem instance could have a number by which users can select it, and a text that is displayed. The menu could be built from the list of MenuItem instances, instead of hardcoded text. And then the code checking the selected value could be intention-revealing, for example:

if craft == items.bow_drill:
    # ...
elif craft == items.bone_knife:
    # ...

And so on. This kind of approach eliminates the hard-coding and many potential errors and oversights in future modifications, and improves readability.

Magic values

There are many values that appear in multiple places in the code, for example the names of the scenes like Top of the Mountain. The problem with that is that if one day you decide to make a small change, you have to remember to make that change in multiple places. It's better to create constants for such hardcoded values, so the concrete values are written in one place, and whenever you need one, you refer to it using the constant.
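To make the MenuItem idea concrete, here is a minimal sketch. The class, function, and item names are mine, only two crafting entries are shown, and it is written to run under both Python 2 and 3:

```python
class MenuItem(object):
    """One selectable menu entry: the number the user types and a label."""
    def __init__(self, number, text):
        self.number = number
        self.text = text

def build_menu(items):
    """Render the menu text from the items, so the numbering never drifts."""
    return "\n".join("%d. %s" % (item.number, item.text) for item in items)

# Hypothetical crafting entries; inserting a new one only means editing this list.
bow_drill = MenuItem(6, "Bow Drill: 1 x String, 2 x Wood > 1 x Bow Drill")
bone_knife = MenuItem(7, "Bone Knife: 1 x Bone, 1 x String > 1 x Bone Knife")
crafting_menu = [bow_drill, bone_knife]

print(build_menu(crafting_menu))
choice = "6"  # stands in for raw_input(">>> ")
if choice == str(bow_drill.number):
    print("Crafting a bow drill")  # intention-revealing comparison
```

Renumbering now happens in exactly one place: the list of MenuItem instances.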
Don't repeat yourself

This chunk of code appears twice:

itemscanbeobtained.remove('Berries')
itemscanbeobtained.remove('Mushrooms')
itemscanbeobtained.remove('Dead Hare')
itemscanbeobtained.append('Dead Birds')
itemscanbeobtained.append('Bait')
itemscanbeobtained.append('Bird Nest')
for i in range(random.randint(0, 3)):
    retlist.append(random.choice(itemscanbeobtained))
return retlist

It would be better to avoid such duplication of logic by extracting it into a helper function. Note that in Python you can define functions within functions, so when a block of code is duplicated within a single function and never used outside it, the helper function can live inside the function that uses it.
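A sketch of such an extraction for the duplicated block. The function name and the example item list are my guesses at the intent, not code from the original game:

```python
import random

def forest_night_loot(itemscanbeobtained):
    """Swap the day-only items for the night-only ones, then roll 0-3 drops."""
    itemscanbeobtained = list(itemscanbeobtained)  # don't mutate the caller's list
    for item in ('Berries', 'Mushrooms', 'Dead Hare'):
        itemscanbeobtained.remove(item)
    itemscanbeobtained.extend(['Dead Birds', 'Bait', 'Bird Nest'])
    retlist = []
    for _ in range(random.randint(0, 3)):
        retlist.append(random.choice(itemscanbeobtained))
    return retlist

loot = forest_night_loot(['Berries', 'Mushrooms', 'Dead Hare', 'Wood'])
print(loot)  # e.g. ['Bait', 'Wood'] -- between 0 and 3 random items
```

Both call sites then shrink to a single call, and a future change to the night loot table happens in one place.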
{ "domain": "codereview.stackexchange", "id": 30271, "tags": "python, game, adventure-game, python-2.x" }
Reciprocal Time Dilation in Special Relativity
Question: I'm trying to understand the theory of special relativity, but there is one thing that really confuses me, which is reciprocal time dilation. In special relativity, the time dilation effect is reciprocal: as observed from the point of view of either of two clocks which are in motion with respect to each other, it will be the other clock that is time dilated. (This presumes that the relative motion of both parties is uniform; that is, they do not accelerate with respect to one another during the course of the observations.) - Wikipedia http://en.wikipedia.org/wiki/Time_dilation This paragraph tells us that as your friend flying on a high-speed rocket passes you (who are at rest in space), you see her age more slowly than yourself. She in turn will see you age more slowly. This conclusion seems rather contradictory to me. What does this conclusion give as the age relationship between you and your friend? Does it mean that if the rocket were forever in uniform motion going away from you, you would always be older than your friend, and your friend would always be older than you, depending on which frame of reference you take? Answer: In relativity, time is no longer a universal concept; it is a quantity specific to a frame of reference. It isn't meaningful to compare the "age" of two objects in two different frames of reference using a single "frame time." "Frame time" denotes time as measured in a specific frame of reference. This frame of reference could be that of either object, for example, or perhaps even a third-party observer. There is one meaningful way to compare the ages of objects, though, but it requires a different concept of time to be measured. Proper time is the time measured in a frame of reference where the observer measures him or herself to be at rest. This definition is rather bizarre since it doesn't allow arbitrary points in space to have a proper time.
Only observers with a defined velocity (or motion of any kind) can have a measured proper time. It is still a useful concept as it allows us to directly compare what different observers measure from their own frame of reference. With your example we can use proper time to see how old the travelers are as measured in their own time. By the definition of proper time, traveler A and traveler B age in terms of proper time at the same rate, that's the whole point of proper time! Now what happens when they try to measure each other's age using their own time? If they do this they're not measuring the other traveler's age in terms of proper time, but rather their own frame time. They would need to calculate the proper time of the other traveler that they're observing. But when traveler A observes traveler B at a proper time of 1 minute, does traveler A see traveler B at a proper time of 1 minute or some other time? Obviously there will be some delay for the light of traveler B to reach A, but even accounting for this effect, do we expect A to still measure a difference in her observation of B's proper time? The answer is yes, we do, because A's notion of simultaneity is not the same as B's. When A looks at B, A does not see B at the same proper time A sees herself. Because they are moving at different relative speeds, A's frozen snapshot of space at a given time doesn't look like B's. We could imagine depicting A's frozen snapshot of space as a line where at each point we give the value of A's clock for that point. It will look like this for A say at time = 0: ...-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-... Every point exists at the same time. Now from B's perspective we could depict A's frozen time snapshot assigning those same points a time, t', based on B's clock instead. It will look like the following for example: ...-0-1-2-3-4-5-6-7-8-9-10-11-12-13-14-15-16-17-18-19-20-... 
A's frozen time is frozen at a different time for each point in space from B's perspective. Likewise from A's perspective, B's frozen snapshot looks the same as A's did from B's perspective. This means measuring each other's age at a frozen snapshot is not a fair depiction of how A or B would measure their own age. A doesn't see B at her own value of proper time and likewise B doesn't see A at his own value of proper time. They are just as delayed from each other's point of view. This is the same thing as depicted in the graphic knzhou posted in the comments. The diagonal line of the driver represents B's simultaneity as seen from A. All along this line, A has different values of her clock even though it is a "frozen snapshot." So what happens if A and B decide to meet in the same frame of reference, i.e. have 0 relative speed? Then they will have the same frame time and when they measure each other's proper time, it will match their own proper time. So what will their relative ages be? It depends on how they get there. If they both agree to do equal work to match speeds, they will see each other age the same amount over the time it takes to match speeds. If A does more of the work, then she will come out younger for it, if B does more of the work, then he will be younger.
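The "frozen snapshot" pictures above are just the Lorentz transformation written out. For relative speed $v$ along $x$, B assigns A's events the time $$ t' = \gamma\left(t - \frac{vx}{c^2}\right), \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, $$ so A's snapshot $t = 0$ corresponds to $t' = -\gamma v x/c^2$: a clock reading that grows linearly with position $x$ (with the sign set by the direction of relative motion), which is exactly the $0, 1, 2, \dots$ ramp in the second picture.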
{ "domain": "physics.stackexchange", "id": 32032, "tags": "special-relativity, inertial-frames, time-dilation, observers" }
Group neural networks outputs using Keras/Tensorflow
Question: I am trying to group the outputs of my neural network, in order to have them perform separate classifications. Let's take the example where each group consists of two nodes and we previously had four output nodes; then the network should look like this: How can I achieve this using Keras or Tensorflow? I was thinking that I would need to implement a custom layer and do the operation inside of it, but I wonder if there is an easier solution using the Keras Functional API (or lower-level Tensorflow?) :) Answer: Using the keras functional API will get you what you need. I'm assuming that you are currently using the standard keras sequential model API, which is simpler but also restricts you to a single pipeline. When using the functional API, you do need to keep track of inputs and outputs, instead of just defining layers. For the example in your question:

from keras.layers import Input, Dense, concatenate
from keras.models import Model

# Left side sub-model:
L1 = Input(shape=(2,))
L2 = Dense(2, activation='softmax')(L1)

# Right side sub-model:
R1 = Input(shape=(2,))
R2 = Dense(2, activation='softmax')(R1)

# Combining them together:
merge = concatenate([L2, R2])

# Some additional layers working on the combined layer:
merged_layer_1 = Dense(4, activation='relu')(merge)

# Output Layer:
output = Dense(2, activation='softmax')(merged_layer_1)

# Defining the model:
my_model = Model(inputs=[L1, R1], outputs=output)

The depths of the layers are made up, but you should get the general idea.
{ "domain": "datascience.stackexchange", "id": 5860, "tags": "neural-network, keras, tensorflow" }
Why does the temperature of a gas inside a moving container not increase with velocity?
Question: A rectangular (simplified) container with rigid surfaces has a certain mass of ideal gas within it, and it accelerates in free space, undergoing rectilinear motion. There are no dissipative forces. Now, since the container moves, its kinetic energy increases, and since the temperature of the enclosed gas is dependent on the kinetic energy, its temperature should consequently increase, which does not occur. What's the reason behind this? I try to solve it in the following manner: Let's consider a rectangular container and label its vertical edges A and B. Assuming that it moves along BA, after a certain time the edge A moves to A', B to B', and thus the effective space for the molecules always remains unchanged. Now considering the force parameter in the problem, the forcing agent here is only the edge B colliding with the molecules in its vicinity (let the vicinity be the region BC, C being sufficiently close to B). Now, I argue that the impulsive force imparted by B on the molecules near it modifies the initially random directions of motion of these nearby molecules, and forces them to move along the BA direction. But after BC, i.e. in the region CA, the molecules don't attain a perfectly horizontal motion, and the alignment along BA deteriorates as one moves from B to A, in a disciplined manner, and yet without changing the Maxwellian distribution. I consider layers within the container: at the not-so-clearly-defined C junction there's a continuous yet slightly increasing variation from the BA direction, although the resultant velocity of these molecules may be in NW/SW directions. These molecules collide with those in the next layer, and these latter molecules undergo further variation from BA (not to forget that in all these layers, some molecules might even move in NE/SE/E (AB) directions, the fraction of which is relatively low, but increases and becomes relatively higher while moving along BA).
Thus this variation considerably increases in the vicinity of the A edge, and as such the magnitudes of the velocities of these molecules eventually re-organize themselves (due to the changing angle factor at each layer of molecules) in such a way that the final energy-distribution curve varies too little from the initial one to be of any significance at a practical level, whence the temperature of the enclosed gas remains unchanged. Is this a correct approach, as far as an explanation is concerned? P.S. Please also provide an alternative, simpler explanation, which I am sure there is... Answer: The equation $$ T = \frac{m\langle v^2 \rangle}{3 k_B} $$ for $\langle v^2 \rangle$ the mean squared speed of a particle in the gas does not hold in frames other than the rest frame of the gas, where the rest frame is the one in which $\langle v \rangle = 0$ over the gas particles. This is because the derivation of this law assumes that the movement of the particles is "random", in particular, that there is no preferred direction, and "no preferred direction" is equivalent to $\langle v \rangle = 0$. Hence, there is nothing to explain here: your statement "since the temperature of the enclosed gas is dependent on the kinetic energy, its temperature should consequently increase" is simply unfounded.
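To see explicitly what goes wrong if one applies the formula in the lab frame anyway: write each molecule's velocity as $\vec v = \vec u + \vec V$, where $\vec V$ is the container's bulk velocity and $\vec u$ the thermal part with $\langle \vec u \rangle = 0$. Then $$ \langle v^2 \rangle = \langle u^2 \rangle + 2\,\vec V \cdot \langle \vec u \rangle + V^2 = \langle u^2 \rangle + V^2. $$ Only the $\langle u^2 \rangle$ piece measures temperature; the extra $V^2$ is the bulk kinetic energy of the container's motion, which grows as the ship accelerates but has nothing to do with the gas getting hotter.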
{ "domain": "physics.stackexchange", "id": 22407, "tags": "thermodynamics, kinetic-theory" }
Why doesn't the optimal angle (for maximum range) on an inclined plane equal 45 degrees?
Question: Observe this case The goal is to maximize $d$ by increasing the angle of the initial velocity. Since we know that the range is maximum for $\theta=45^\circ$ I would reason that the jumping ramp has to be elevated for $\theta=10^\circ$ in order for $\theta+\phi=10^\circ+35^\circ=45^\circ$. However this is not the case, since the ideal angle of ramp elevation is found to be $\theta=27.5^\circ.$ Why is this? Answer: Adding the angles would be like rotating the hill to make it level. However, if you rotated the hill with a simple coordinate transformation you would need to rotate gravity with it. Unfortunately the 45 degree maximum range rule only works if gravity is directly downward, so the equation would no longer be valid.
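For completeness, the quantitative version (a standard projectile-on-an-incline result): for launch speed $v$ at angle $\theta$ above the horizontal onto a slope declining at angle $\phi$, the range along the slope is $$ d = \frac{2v^2 \cos\theta\, \sin(\theta+\phi)}{g\cos^2\phi}, $$ and setting $\frac{d}{d\theta}\left[\cos\theta\,\sin(\theta+\phi)\right] = \cos(2\theta+\phi) = 0$ gives $$ \theta_{\text{opt}} = 45^\circ - \frac{\phi}{2}, $$ which for $\phi = 35^\circ$ is exactly the $27.5^\circ$ quoted in the question.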
{ "domain": "physics.stackexchange", "id": 34288, "tags": "newtonian-mechanics, classical-mechanics, kinematics" }
How to compile C++ library inside ROS package?
Question: I have a package structured as follows:

-include
-src
-my_submodule
    -include
    -src
    CMakeLists.txt
CMakeLists.txt
package.xml

The my_submodule folder is basically a C++ library that I want to compile along with my ROS package and use as a dependency. I could install my_submodule and use find_package(my_submodule), but I want to compile my_submodule along with the package. What do I need to modify to compile my_submodule along with my package? The idea is that my_submodule is not dependent on ROS or my package, but I want to include it without installing it separately. Originally posted by Ralff on ROS Answers with karma: 280 on 2020-04-22 Post score: 0 Answer: This doesn't seem like a ROS question but a CMake question. For example: regardless of ROS, are you aware of how to link up multiple CMakeLists together? A good hint is looking at the add_subdirectory() macro in CMake. Examples: https://github.com/ros-planning/navigation2/blob/master/nav2_system_tests/CMakeLists.txt#L51 https://github.com/SteveMacenski/slam_toolbox/blob/eloquent-devel/CMakeLists.txt#L33 Originally posted by stevemacenski with karma: 8272 on 2020-04-22 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Ralff on 2020-04-22: Yes. I use add_subdirectory(), but the subdirectory is not compiling. The command doesn't seem to be doing anything. Comment by stevemacenski on 2020-04-22: Without far more information, there's no help anyone can give. Please edit your original question with more detail. Comment by Ralff on 2020-04-23: add_subdirectory() did work. I had changed the build directory, and I was checking the wrong directory to see if it worked properly... Thank you for your answer! Comment by stevemacenski on 2020-04-23: No worries! Can you mark the question as correct so it's off the unanswered questions queue?
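A rough sketch of how the add_subdirectory() hint could look for the layout in the question — the target names, the roscpp dependency, and src/my_node.cpp are assumptions, not taken from the original package:

```cmake
cmake_minimum_required(VERSION 3.5)
project(my_package)

find_package(catkin REQUIRED COMPONENTS roscpp)
catkin_package()

# Build the bundled, ROS-independent library in-tree; this runs
# my_submodule/CMakeLists.txt and makes its targets available here.
add_subdirectory(my_submodule)

include_directories(include my_submodule/include ${catkin_INCLUDE_DIRS})

# Hypothetical node that links against the submodule's library target.
add_executable(my_node src/my_node.cpp)
target_link_libraries(my_node my_submodule ${catkin_LIBRARIES})
```

This assumes my_submodule's own CMakeLists.txt defines a library target named my_submodule (e.g. via add_library(my_submodule ...)); whatever target name it actually defines is what goes in target_link_libraries.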
{ "domain": "robotics.stackexchange", "id": 34819, "tags": "ros, ros-kinetic" }
How do I describe the following system?
Question: So the professor of my physics 3 class assigned this problem regarding forced oscillations. A mass of 0.30 kg hangs from a massless rope. The center of oscillation can move as shown in the figure. When the driving frequency $\omega$ is zero, the pendulum oscillates 100 times before its amplitude is about 35% of the initial amplitude. Write an equation that describes the system in the presence of external forces. Find the solution and determine the resonance frequency, the quality factor $Q$, the maximum amplitude, and the phase as a function of the driving frequency. The thing is I have no idea how to start. I tried describing the system in terms of angular position, but I just don't see how I'm supposed to take the periodic displacement $\mu(t)$ into account. A hint and a brief explanation would be highly appreciated. Answer: We can start off with just the pendulum. Info #1: the length and mass of the object are given, so the natural frequency ($\omega_0$) of the free pendulum can be found. Info #2: when $\omega = 0$ the amplitude decay is given, and from this the friction coefficient of the oscillatory system can be found. Now, if the top of the pendulum is undergoing an acceleration (like in a car, maybe), then we see that the pendulum starts to move (a pseudo-force appears in the pendulum's frame). So the instantaneous acceleration of the support is $\frac{d^2\mu}{dt^2}$. Now, if you assume that the oscillations are very small, you will notice that the pendulum's motion in the y direction is negligible. Thus, $$\left(\frac{d^2}{dt^2} + \beta \frac{d}{dt} + \omega_0^2\right)x(t) = -\frac{d^2\mu}{dt^2}$$ To write this in terms of $\theta$, you can use $x = l\sin(\theta)$, and for small $\theta$, $x = l\sin(\theta) \approx l\theta$.
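As a worked illustration of how Info #2 fixes the numbers (my arithmetic, using the weak-damping amplitude law $A(t) = A_0\, e^{-\beta t/2}$ for $\ddot x + \beta \dot x + \omega_0^2 x = 0$): after $n = 100$ periods of length $T$, $$ e^{-\beta\,(100T)/2} = 0.35 \;\;\Rightarrow\;\; \beta T = -\frac{\ln 0.35}{50} \approx 0.021, $$ so the quality factor comes out as $$ Q = \frac{\omega_0}{\beta} \approx \frac{2\pi}{\beta T} \approx 3\times 10^2. $$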
{ "domain": "physics.stackexchange", "id": 71291, "tags": "homework-and-exercises, waves, frequency, oscillators, resonance" }
How to convert a partially entangled state into maximally entangled using SLOCC
Question: Let's say I have a generic partially entangled two-qubit state with Schmidt decomposition $$|\psi\rangle_{AB} = \sqrt{\alpha} |00\rangle_{AB} + \sqrt{\beta}|11\rangle_{AB}.$$ I know from Lo and Popescu (1999) and Vidal (1999) that the optimal probability of converting this state into the singlet $|\phi^+\rangle_{AB} = \frac{1}{\sqrt{2}} |00\rangle_{AB} + \frac{1}{\sqrt{2}}|11\rangle_{AB}$ using SLOCC is equal to $p_{\,\text{MAX}} = 2\beta$. However, I'm unable to find any example of how this can be done (I also checked this related question here). I would love a simple example that performs this probabilistic operation, just to see how this can be done, even with non-optimal probability. Answer: This case is pretty straightforward, if you're already familiar with generalised measurements. Assume $\alpha>\beta$ are real numbers. We can define $$ M_1=\left(\begin{array}{cc} \sqrt{\frac{\beta}{\alpha}} & 0 \\ 0 & 1 \end{array}\right). $$ Note that this is defined such that $(M_1\otimes I)|\psi\rangle\propto |\phi^+\rangle$. We then have to pick a multiplying factor to be as large as possible such that we can make $M_1$ part of a valid measurement. In particular, there needs to be a second measurement operator $M_2$ such that $$ M_1^\dagger M_1+M_2^\dagger M_2=I $$ and $M_2^\dagger M_2$ must be positive semi-definite. In the present case, we can have $$ M_2=\left(\begin{array}{cc} \sqrt{1-\frac{\beta}{\alpha}} & 0 \\ 0 & 0 \end{array}\right). $$
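To see the numbers work out, here is a quick numeric check of this construction for the concrete (made-up) choice $\alpha = 0.8$, $\beta = 0.2$; plain Python suffices because every operator involved is diagonal:

```python
import math

# Hypothetical Schmidt weights with alpha > beta.
alpha, beta = 0.8, 0.2

# |psi> = sqrt(alpha)|00> + sqrt(beta)|11>, in the basis (|00>, |01>, |10>, |11>).
psi = [math.sqrt(alpha), 0.0, 0.0, math.sqrt(beta)]

# M1 = diag(sqrt(beta/alpha), 1) acts on qubit A, so on the joint state
# M1 (x) I = diag(sqrt(beta/alpha), sqrt(beta/alpha), 1, 1).
s = math.sqrt(beta / alpha)
out = [s * psi[0], s * psi[1], psi[2], psi[3]]

# Completeness on qubit A: M1^dag M1 + M2^dag M2 = diag(b/a + (1-b/a), 1 + 0) = I.
m1_sq = [beta / alpha, 1.0]
m2_sq = [1.0 - beta / alpha, 0.0]
assert all(abs(x + y - 1.0) < 1e-12 for x, y in zip(m1_sq, m2_sq))

# Probability of outcome 1 is the squared norm of the unnormalised outcome state.
p1 = sum(c * c for c in out)
print(round(p1, 12))  # 0.4, i.e. the optimal 2*beta

# Conditioned on outcome 1, the state is exactly |phi+> = (|00> + |11>)/sqrt(2).
phi_plus = [1 / math.sqrt(2), 0.0, 0.0, 1 / math.sqrt(2)]
normalised = [c / math.sqrt(p1) for c in out]
assert all(abs(x - y) < 1e-12 for x, y in zip(normalised, phi_plus))
```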
{ "domain": "quantumcomputing.stackexchange", "id": 5306, "tags": "entanglement, locc-operation" }
How long does the Ebola virus remain infectious on contaminated items or surfaces?
Question: I'm sure there will be variation depending on what the contaminated item or surface is made of - linens, I could imagine, would remain dangerous for longer than a door-knob. But if the items are not decontaminated in some way, how long can the virus survive outside a host? Answer: This really depends on the environment. One study (reference 1 below) found that the Ebola virus can survive under ideal conditions on flat surfaces in the dark for up to six days - see the figure from the same publication. However, the virus is quite sensitive to UV radiation (see reference 2 for all the details) and most viral particles are likely to be inactivated within a relatively short time. It might still be possible to get positive results from really sensitive PCR-based tests, but these particles are most likely no longer infectious. The CDC lists common bleach (or any other routinely used disinfectant) as a good way to get rid of Ebola viruses.
References:
1. Persistence in darkness of virulent alphaviruses, Ebola virus, and Lassa virus deposited on solid surfaces
2. Sensitivity to ultraviolet radiation of Lassa, vaccinia, and Ebola viruses dried on surfaces
{ "domain": "biology.stackexchange", "id": 2981, "tags": "virus, virology, infection, epidemiology, ebola" }
Source of energy for artificial gravity in rotating torus
Question: I was hoping someone could clear up my (mis)understanding of the following: Suppose we have a toroid-shaped spaceship that rotates around a central axis, which would create artificial gravity due to the centripetal force. My question is, once this spaceship is "spun up", would it continue to spin indefinitely? And if so, would this mean that we're getting artificial gravity without expending any energy? That doesn't seem quite right, since other forms of acceleration require lots of energy (huge amounts of mass in the case of actual gravity, or lots of 'mechanical' energy for acceleration from propulsion). or will this spaceship eventually stop spinning and require continual boosting? (and why?) Answer: It will continue to spin indefinitely, just as planets do. Conservation of angular momentum guarantees that. The reason energy is not being expended, in classroom physics terms, is that no work is being done on the objects in the torus. The reason no work is being done is that work is the dot product of force with distance, which means the work done is zero if the only motion is perpendicular to the direction of the force. Since the force (and acceleration) is towards the middle, but the instantaneous motion is perpendicular to that, no work is done. If the dot product doesn't mean anything to you, just know that work is force times the amount of motion in the direction of the force.
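In symbols, the power delivered by the centripetal force to an object riding the ring is $$ P = \vec F \cdot \vec v = F v \cos 90^\circ = 0, $$ since $\vec F$ points toward the axis while $\vec v$ is tangent to the ring. With no power flowing in or out, the spin rate (and the artificial gravity) is maintained for free, in the idealized case with no internal friction.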
{ "domain": "physics.stackexchange", "id": 81922, "tags": "newtonian-mechanics, angular-momentum, conservation-laws, centripetal-force" }
How to specify default arguments in a Service or Action
Question: Is it possible to define default arguments in a Service or Action message? I want to expose several options in services and actions, but it seems by default, ROS makes all fields required. How do you specify defaults? Originally posted by Cerin on ROS Answers with karma: 940 on 2016-05-03 Post score: 0 Answer: The ROS message, service and action formats do not allow you to specify defaults for any of the fields. It may be better to expose several services, or you may want to write a C++ or python wrapper class around the service or actionlib client that provides the defaults. Originally posted by ahendrix with karma: 47576 on 2016-05-04 This answer was ACCEPTED on the original site Post score: 0
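To illustrate the wrapper idea from the answer, here is a minimal sketch in plain Python. The service fields, the defaults, and the fake_set_speed stand-in are hypothetical; in a real node the wrapped callable would be a rospy.ServiceProxy:

```python
class ServiceWithDefaults(object):
    """Wrap a callable service client, filling in default field values.

    Works around ROS .srv files having no default syntax: callers only
    supply the fields they care about, the rest come from the defaults.
    """
    def __init__(self, call_fn, **defaults):
        self._call = call_fn
        self._defaults = defaults

    def __call__(self, **fields):
        merged = dict(self._defaults)
        merged.update(fields)  # explicitly passed fields win over defaults
        return self._call(**merged)

# Stand-in for a rospy.ServiceProxy; it just echoes the request fields.
def fake_set_speed(speed, ramp_time):
    return {"speed": speed, "ramp_time": ramp_time}

set_speed = ServiceWithDefaults(fake_set_speed, ramp_time=0.5)
print(set_speed(speed=1.0))  # ramp_time falls back to the default 0.5
```

The same pattern works for an actionlib goal: build the goal message from the merged dict before sending it.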
{ "domain": "robotics.stackexchange", "id": 24556, "tags": "actionlib" }
Is it possible to use VLP-16-lite on ROS indigo & kinetic?
Question: I'm planning to buy a VLP-16 or VLP-16-lite, so I want to know whether there are drivers for the VLP-16 and VLP-16-lite on ROS. I found that I can get the driver for the VLP-16 on indigo & kinetic from here. And I found a site which says there is a driver for the VLP-16-lite (Puck LITE). But I'm wondering whether that site is accurate... So if someone knows of a driver for the VLP-16-lite, could you let me know? Originally posted by graziegrazie on ROS Answers with karma: 46 on 2017-04-14 Post score: 0 Original comments Comment by Augusto Luis Ballardini on 2017-04-15: Hi, I have a VLP-16 in my university lab and I'm currently using the lidar with the standard velodyne_driver ros package (and the calibration yaml file). Since the only difference should be the weight, I suppose it's going to work with the lite version too.. Comment by graziegrazie on 2017-04-24: I think so, but I want evidence. The VLP-16 and VLP-16-LITE are very expensive, and I want to avoid developing a driver for the LITE on ROS myself. That is why I want to confirm whether there is a driver for the LITE or not. Comment by amburkoff on 2018-07-20: Have you checked it? Does VLP-16 work in ROS kinetic? Comment by graziegrazie on 2018-07-20: Yes, I checked. VLP-16 works well in ROS Kinetic. Comment by amburkoff on 2018-07-20: Does VLP-16-LITE work in ROS kinetic? Or does the driver come from the regular version? Comment by graziegrazie on 2018-07-25: I have not tried the VLP-16 LITE on any distribution, and now I cannot confirm which driver I used. But maybe you can use this. Answer: The VLP-16 has been tested on Indigo and Kinetic with velodyne_driver version 1.3.0, which must currently be built from source on both those distributions. I have no reports of testing with the VLP-16-lite. If you try it, please report back whether it works or not, or edit the wiki page to add the additional model.
Originally posted by joq with karma: 25443 on 2017-04-16 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by graziegrazie on 2017-04-24: OK. If I test the LITE with velodyne_driver, I'll update the wiki.
{ "domain": "robotics.stackexchange", "id": 27608, "tags": "ros-kinetic, ros-indigo, velodyne" }
MoveIt! - Markers missing on custom Robot, IK-Planning not working
Question: Hey there, I've been trying to get my MoveIt project to work similarly to the pr2 demo, but I'm unable to get the markers to appear. I have seen and read that other users have had issues with this too, and tried the suggested solutions, but none of them fixed the issue in my case. So I have tried multiple different approaches through the MoveIt setup assistant: defined the robot arm as a chain instead of joints, set a parent group for the gripper... but no luck. I'm using the robot lwa4p from Schunk with a very simple gripper, consisting of 3 boxes. Also: the planning of paths doesn't work at all with my robot. Any ideas about the cause of my problems? (if I had >points, I would upload my xacro and a screenshot :D) Originally posted by David_111 on ROS Answers with karma: 32 on 2017-05-02 Post score: 0 Answer: One of the reasons for markers not appearing and IK not working is the IK solver not being specified. You have to specify the IK solver in the config file. This can be done manually or through the setup assistant. Does your kinematics.yaml file have the following?
name_of_kinematic_chain:
  kinematics_solver: kdl_kinematics_plugin/KDLKinematicsPlugin
  kinematics_solver_search_resolution: 0.05
  kinematics_solver_timeout: 1
  kinematics_solver_attempts: 10
If not, you could add it. Replace "name_of_kinematic_chain" with yours. Hope it helps! Originally posted by hc with karma: 114 on 2017-05-02 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by David_111 on 2017-05-03: That did solve the marker problem! And obviously it solved the planning path not working problem :D Thanks!
{ "domain": "robotics.stackexchange", "id": 27780, "tags": "ros, moveit, ik, interactive-markers" }
Simplified formula for temperature between the star and background temperature
Question: I am looking for a simplified formula to calculate the temperature of an object in space and how distance from the star affects said temperature. Something that closely represents reality will suffice. It does not have to be accurate. It is just to model temperature damage in a semi-realistic sci-fi space game. Basically, if I go out into space, grab a thermometer, hold it close to the sun and take a reading, then take a few steps back and take a reading again, and repeat this, what would my curve look like? So I basically have the surface temperature of a star and the distance between the star and my "thermometer", and I have to find out roughly what temperature it would read. PS: Is this even how temperature in space works? Thanks in advance! Answer: Since you're only interested in a rough approximation, we can assume that both the star and the body you're interested in are blackbodies. In that case, we can simply set the total power incident on the body equal to the total power radiated by that body, and from that, solve for the body's temperature as a function of distance. The total power $L$ radiated by a star with radius $R_s$ and surface temperature $T_s$ is given by the Stefan-Boltzmann law: $$L=4\pi R_s^2 \sigma T_s^4$$ where $\sigma$ is the Stefan-Boltzmann constant. Suppose the body is sitting at a distance $r$ from the star. At that distance, the power per unit area incident on the body is $$\frac{P}{A}=\frac{L}{4\pi r^2}$$ since the total area over which power is being emitted is a sphere of radius $r$, which has a surface area of $4\pi r^2$. If your body has a cross-sectional area of $A_{cs}$, then the power incident on that body is $$P_{in}=\frac{LA_{cs}}{4\pi r^2}$$ If the body's temperature is stable, then it must be emitting as much power as it receives. If the body has total surface area $S$, then it's radiating an amount of power also given by the Stefan-Boltzmann law: $$P_{out}=S\sigma T_b^4$$ for a given temperature $T_b$. 
Setting $P_{in}=P_{out}$ and solving for $T_b$, we get $$T_b=T_s\sqrt{\frac{R_s}{r}}\left(\frac{A_{cs}}{S}\right)^{1/4}$$ So you can see that the temperature of your body decreases as $\sqrt{\frac{1}{r}}$ with distance. If the body you're talking about is a sphere of radius $a$, then we can make this even simpler: $A_{cs}=\pi a^2$ and $S=4\pi a^2$, so $A_{cs}/S=1/4$, and $$T_b=T_s\sqrt{\frac{R_s}{2r}}$$ If your body isn't a sphere, then the ratio $A_{cs}/S$ will change with its orientation, as it presents more or less exposed surface to the star. But the spherical approximation should probably work for your purposes.
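As a quick sanity check on the final spherical-body formula $T_b=T_s\sqrt{R_s/(2r)}$, plugging in rough solar values (a sketch: $T_s \approx 5778\ \mathrm{K}$, $R_s \approx 6.957\times 10^8\ \mathrm{m}$, $r = 1\ \mathrm{au} \approx 1.496\times 10^{11}\ \mathrm{m}$) recovers the familiar ~279 K blackbody temperature at Earth's distance:

```python
import math

# Numerical check of T_b = T_s * sqrt(R_s / (2 r)) for a spherical body,
# using rough solar values and the Earth-Sun distance of 1 au.
def equilibrium_temperature(T_s, R_s, r):
    """Blackbody temperature of a sphere at distance r from the star."""
    return T_s * math.sqrt(R_s / (2.0 * r))

T_earth = equilibrium_temperature(5778.0, 6.957e8, 1.496e11)
print(round(T_earth, 1))  # ~279 K, the textbook blackbody value for Earth
```

The $1/\sqrt{r}$ falloff is also easy to read off: quadrupling the distance halves the temperature.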
{ "domain": "physics.stackexchange", "id": 46658, "tags": "temperature, stars, space" }
Why are orbits elliptical instead of circular?
Question: Why do planets rotate around a star in a specific elliptical orbit with the star at one of its foci? Why isn't the orbit a circle? Answer: Assume the planet has a negligible mass compared to the star, that both are spherically symmetric (so Newton's law of gravitation holds, but this normally happens to a very good approximation anyway), and that there aren't any forces besides the gravity between them. If the first condition does not hold, then the acceleration of each is going to be towards the barycenter of the system, as if the barycenter were attracting them with a gravitational force involving a certain reduced mass, so the problem is mathematically equivalent. Take the star to be at the origin. By Newton's law of gravitation, the force is $\mathbf{F} = -\frac{m\mu}{r^3}\mathbf{r}$, where $\mathbf{r}$ is the vector to the planet, $m$ is its mass, and $\mu = GM$ is the standard gravitational parameter of the star. Conservation Laws Because the force is purely radial $(\mathbf{F}\parallel\mathbf{r})$, angular momentum $\mathbf{L} = \mathbf{r}\times\mathbf{p}$ is conserved: $$\dot{\mathbf{L}} = \frac{\mathrm{d}}{\mathrm{d}t}\left(\mathbf{r}\times\mathbf{p}\right) = m(\dot{\mathbf{r}}\times \dot{\mathbf{r}}) + \mathbf{r}\times\mathbf{F} = \mathbf{0}\text{.}$$ If the initial velocity is nonzero and the star is at the origin, then in terms of the initial position and velocity, the orbit must be confined to the plane of all points with vectors $\mathbf{x}$ from the origin that satisfy $\mathbf{L}\cdot\mathbf{x} = 0$. If the initial velocity is zero, then the motion is purely radial, and we can take any one of infinitely many planes that contain the barycenter and initial position. The total orbital energy is given by $$\mathcal{E} = \frac{p^2}{2m} - \frac{m\mu}{r}\text{,}$$ where the first term is the kinetic energy and the second term is the gravitational potential energy of the planet. 
Its conservation, as well as the fact that it invokes the correct potential energy, can be proven by the fundamental theorem of calculus for line integrals. Define the Laplace-Runge-Lenz vector to be $$\mathbf{A} = \mathbf{p}\times\mathbf{L} - \frac{m^2\mu}{r}\mathbf{r}\text{.}$$ It is also conserved: $$\begin{eqnarray*} \dot{\mathbf{A}} &=& \mathbf{F}\times\mathbf{L} + \mathbf{p}\times\dot{\mathbf{L}} - \frac{m\mu}{r}\mathbf{p} + \frac{m\mu}{r^3}(\mathbf{p}\cdot\mathbf{r})\mathbf{r}\\ &=& -\frac{m\mu}{r^3}\underbrace{\left(\mathbf{r}\times(\mathbf{r}\times\mathbf{p})\right)}_{(\mathbf{r}\cdot\mathbf{p})\mathbf{r} - r^2\mathbf{p}} - \frac{m\mu}{r}\mathbf{p} + \frac{m\mu}{r^3}(\mathbf{p}\cdot\mathbf{r})\mathbf{r}\\ &=& \mathbf{0}\text{.} \end{eqnarray*}$$ Finally, let's also take $\mathbf{f} = \mathbf{A}/(m\mathcal{E})$, which has the same units as $\mathbf{r}$, and since $\mathbf{L}\cdot\mathbf{f} = 0$, it lies along the orbital plane. As it's a conserved vector scaled by a conserved scalar, it's easy to show that $\mathbf{f}$ is conserved as well, as long as $\mathcal{E}\neq 0$. Simplifying By employing the vector triple product, we can write $$\begin{eqnarray*} \frac{1}{m}\mathbf{A} &=& \frac{1}{m}\left[p^2\mathbf{r}-(\mathbf{p}\cdot\mathbf{r})\mathbf{p}\right] -\frac{m\mu}{r}\mathbf{r}\\ &=& \left(\mathcal{E}+\frac{p^2}{2m}\right)\mathbf{r} - \frac{1}{m}\left(\mathbf{p}\cdot\mathbf{r}\right)\mathbf{p}\\ \mathcal{E}(\mathbf{f}-\mathbf{r}) &=& \left(\frac{p^2}{2m}\right)\mathbf{r} - \frac{1}{m}\left(\mathbf{p}\cdot\mathbf{r}\right)\mathbf{p}\text{,} \end{eqnarray*}$$ the norm-squared of which is easy to crank out: $$\mathcal{E}^2|\mathbf{f}-\mathbf{r}|^2 = \left(\mathcal{E} + \frac{m\mu}{r}\right)^2r^2\text{,}$$ where $\mathcal{E}$ was used throughout to switch between kinetic and potential terms. Why Ellipses? Since $\mathcal{E}$ is energy relative to infinity, to have a bound orbit we need $\mathcal{E}<0$. 
Thus, from the previous section, $|\mathbf{f}-\mathbf{r}| = -\mathcal{E}^{-1}\left(\mathcal{E}r + m\mu\right)$ and therefore $$|\mathbf{f}-\mathbf{r}| + |\mathbf{r}| = -\frac{m\mu}{\mathcal{E}}\text{,}$$ which defines an ellipse with foci $\mathbf{0},\,\mathbf{f}$ and major axis $2a=-m\mu/\mathcal{E}$. Why Not Circles? The circle is a special case where the foci are the same point, $\mathbf{f} = \mathbf{0}$, which can be restated as $$\mathcal{E} = -\frac{1}{2}\frac{m\mu}{r} = -\frac{p^2}{2m}\text{.}$$ In other words, circular orbits require the orbital energy to be the negative of the kinetic energy. This is possible, but almost certainly will not hold exactly. Since any values of $\mathcal{E}<0$ are allowed for bound orbits, there are many more ways to have elliptic orbits. (Although some of them would actually crash because the star and planet have positive size.) Note that hyperbolic orbits have $\mathcal{E}>0$, and we can still find the foci using the above method, though being careful with the signs. For $\mathcal{E}=0$, the second focus $\mathbf{f}$ is undefined because this is a parabolic orbit, and parabolas only have one focus within a finite distance from the center. Additionally, the eccentricity vector $\mathbf{e} = \mathbf{A}/(m^2\mu)$ is an alternative choice for the LRL vector; as the name suggests, its magnitude is the orbital eccentricity.
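A quick numerical way to see how special circles are: per unit mass, the eccentricity vector $\mathbf{e} = \mathbf{A}/(m^2\mu)$ reduces to $\mathbf{e} = \left((v^2-\mu/r)\mathbf{r} - (\mathbf{r}\cdot\mathbf{v})\mathbf{v}\right)/\mu$, which vanishes only at the exact circular speed; any other speed gives $e>0$ and hence a non-circular conic. A sketch in 2D, in units where $\mu = 1$:

```python
import math

# The eccentricity e = |((v^2 - mu/r) r_vec - (r.v) v_vec)| / mu is zero
# only for the special circular speed v = sqrt(mu/r); any other tangential
# speed at the same point yields an ellipse (or, if fast enough, a hyperbola).
def eccentricity(r_vec, v_vec, mu=1.0):
    r = math.hypot(*r_vec)
    rv = r_vec[0] * v_vec[0] + r_vec[1] * v_vec[1]
    v2 = v_vec[0] ** 2 + v_vec[1] ** 2
    ex = ((v2 - mu / r) * r_vec[0] - rv * v_vec[0]) / mu
    ey = ((v2 - mu / r) * r_vec[1] - rv * v_vec[1]) / mu
    return math.hypot(ex, ey)

v_circ = math.sqrt(1.0 / 1.0)                        # circular speed at r = 1
print(eccentricity((1.0, 0.0), (0.0, v_circ)))        # 0: a circle
print(eccentricity((1.0, 0.0), (0.0, 0.9 * v_circ)))  # ~0.19: an ellipse
```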
{ "domain": "astronomy.stackexchange", "id": 5024, "tags": "star, orbit, planet" }
Can we find out the direction of $\textbf{A}$ from $\nabla\times\textbf{A}$?
Question: Given a vector field of the form $\textbf{B}=\nabla\times\textbf{A}$, can we uniquely find out the direction of $\textbf{A}$? Answer: No. If you find any specific $\mathbf A_0$ such that $\nabla \times \mathbf A_0$ matches your given $\mathbf B$, then adding any constant vector to $\mathbf A$ will result in an equally valid vector potential. Even more strongly, since the curl of every gradient vanishes, adding a gradient to $\mathbf A_0$ will also result in an equally valid vector potential. This is known as the gauge freedom of classical electrodynamics and it's discussed in detail in any textbook that talks about the vector potential.
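The non-uniqueness is easy to confirm numerically. Below is a small finite-difference sketch with made-up sample fields: $\mathbf{A}_0=(-y,x,0)$ and $\mathbf{A}_1=\mathbf{A}_0+\nabla(xy)=(0,2x,0)$ point in different directions almost everywhere, yet both have the same curl $\mathbf{B}=(0,0,2)$.

```python
# Numerical illustration of the gauge freedom: two vector potentials that
# differ by a gradient have identical curl.
def curl(A, p, h=1e-5):
    """Finite-difference curl of a vector field A at point p."""
    def d(f, i):  # central-difference partial derivative along axis i
        q1, q2 = list(p), list(p)
        q1[i] += h
        q2[i] -= h
        return (f(q1) - f(q2)) / (2 * h)
    Ax = lambda q: A(q)[0]
    Ay = lambda q: A(q)[1]
    Az = lambda q: A(q)[2]
    return (d(Az, 1) - d(Ay, 2), d(Ax, 2) - d(Az, 0), d(Ay, 0) - d(Ax, 1))

A0 = lambda q: (-q[1], q[0], 0.0)
A1 = lambda q: (0.0, 2.0 * q[0], 0.0)  # A0 plus the gradient of f = x*y

p = (0.3, -1.2, 0.7)
print(curl(A0, p))  # ~ (0, 0, 2)
print(curl(A1, p))  # ~ (0, 0, 2): same B, different A
```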
{ "domain": "physics.stackexchange", "id": 43114, "tags": "gauge-theory, differentiation, vector-fields" }
Anti-triplet representation
Question: What is the anti-triplet representation of a group? Specifically, what is the anti-triplet representation of $SU(3)$? Answer: A mathematician will give a more detailed and rigorous answer. From the physicist's point of view, a representation corresponds to the way an object transforms under a symmetry group $G$. Let us assign an upper index to the object $a^{i}$, which transforms like a column under $G$: $$ a^{i} \rightarrow U_j^{i} \ a^{j} $$ This we will call the fundamental representation. For the case of $SU(N)$ the fundamental representation is a column of $N$ elements. The object with the lower index $b_i$ we will regard as transforming in the antifundamental representation (it will reside in the dual vector space to the fundamental). $$ b_i \rightarrow (U^*)_{i}^{ j} b_j $$ This object will transform as a row or, looking at it another way, as a column, but under the hermitian conjugate matrix $U^{\dagger}$. For the case of $SU(N)$, think about it as a row of $N$ complex numbers.
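A toy numerical check of why these two representations fit together (shown with $SU(2)$ for brevity, but the index logic is identical for $SU(3)$): the contraction of an upper index with a lower index, $a^i b_i$, is invariant, precisely because $U^\dagger U = 1$. The parametrisation and the sample vectors below are invented for illustration.

```python
import cmath
import math

# If a^i transforms with U and b_i with U*, then sum_i a^i b_i is invariant:
# sum_i (U a)_i (U* b)_i = sum_jk a_j b_k (U^dagger U)_kj = sum_j a_j b_j.
def su2(theta, phi):
    """A 2x2 unitary matrix with det 1, of the form [[alpha, -beta], [beta, alpha*]]."""
    alpha = cmath.exp(1j * phi) * math.cos(theta)
    beta = math.sin(theta)  # real for simplicity; still unitary with det 1
    return [[alpha, -beta], [beta, alpha.conjugate()]]

def matvec(M, v, conj=False):
    M = [[x.conjugate() for x in row] for row in M] if conj else M
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

a_vec = [1 + 2j, 0.5 - 1j]   # fundamental (upper index)
b_vec = [0.3j, -1.0 + 0.4j]  # antifundamental (lower index)
U = su2(0.7, 1.3)

before = a_vec[0] * b_vec[0] + a_vec[1] * b_vec[1]
after_a = matvec(U, a_vec)                # a -> U a
after_b = matvec(U, b_vec, conj=True)     # b -> U* b
after = after_a[0] * after_b[0] + after_a[1] * after_b[1]
print(abs(before - after))                # ~0: the contraction is invariant
```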
{ "domain": "physics.stackexchange", "id": 69221, "tags": "terminology, group-theory, definition, representation-theory" }
Managing network address information
Question: I'm a developer at a networking company that really has no peers that work above me that I can use for any sort of sounding board for my code, so it's just me. I was wondering if anyone would be willing to take a look at some of my code and just give me a few notes, bullet points, etc... on things that are wrong, things I should look into, and so on. If anyone is interested please feel free to comment. I'm concerned with the JavaScript/jQuery. It seems very coupled and I was wondering if there are any better ways to accomplish the things that I'm doing with this code. Everything works, but I'm sure there is always room for improvement. //Global networkInfo Object. var networkInfo = {}; $(function () { // Bind custom events $(networkInfo).bind('updateNetworkInfo', updateNetworkAddress); $(networkInfo).bind('updateNetworkRanges', updateNetworkRangeDropDowns); $(networkInfo).bind('selectNetworkRanges', selectNetworkRanges); // Set form validation $('form').validate({ messages: { tbSiteName: "This is an invalid site code" } }); /* Add custom rule for site name. This rule has been a pain because this is a webforms application and the name of the input can't be controlled. */ $('#tbSiteName').rules("add", { required: true, remote: function () { var r = { url: "/webservices/ipmws.asmx/SiteValid", type: "POST", data: "{'tbSiteName': '" + $('#tbSiteName').val() + "'}", dataType: "json", contentType: "application/json; charset=utf-8", dataFilter: function (data) { return (JSON.parse(data)).d; } } return r; } }); /* DrillDownProvisioning.aspx doesn't allow the user to change the Subnet mask. They should only be allowed to change this value by clicking on nodes in the DrillDown Tree. */ $('#ddSubnetMask').change(function (e) { $(this).val(networkInfo.SubnetMask); $('#lblMessageBox') .removeClass('hidden') .addClass('error') .text("Please do not change this value manually"); }); // Populate drop down with all the available vlans. 
$.ajax({ type: 'POST', url: '/webservices/ipmws.asmx/GetVlans', data: '{}', dataType: 'json', contentType: 'application/json; charset=utf-8', success: function (data) { $('#ddNumber').append($('<option />').val("").text("")); $.each(data.d, function (index) { $('#ddNumber').append($('<option />').val(this.VlanId).text(this.Number)); }); } }); // When a vlan number is changed, auto-populate the standard name and description input fields with the predefined values $('#ddNumber').change(function () { var number = $(this); var standardName = $('#tbStandName'); var description = $('#tbDescription') if (number.val() === "") { standardName.val(""); description.val(""); } else { $.ajax({ type: 'POST', url: '/webservices/ipmws.asmx/GetVlanInfo', data: "{'number': " + $('#ddNumber').val() + "}", success: function (data) { standardName.val(data.d.StandardName); description.val(data.d.StandardName); standardName.valid(); description.valid(); }, dataType: 'json', contentType: 'application/json; charset=utf-8' }); } }); // Populate drop down populateSubnetMask(); // When the network type drop down is changed update the network ranges drop downs. $('#ddNetworkTypes').change(function () { $(networkInfo).trigger('selectNetworkRanges'); }); /* * toggle the 6 drop down menus. If the check box is not marked * reset all the values to empty so they don't post. */ $('#enableRange').change(function () { if (this.checked) { $(networkInfo).trigger('selectNetworkRanges'); } else { $('.mask').val(0); } $('#networkRangeSelectors').slideToggle(); }); /* * Open drill down tree for selection */ $('.open').click(function () { $('#drilldowntreecontainer').toggle('1000'); $(this).toggle(); }); // Close drill down tree $('.close').click(function () { $('#drilldowntreecontainer').toggle('1000'); $('.open').toggle(); }); /* Because the page no longer posts back to itself, the document.referrer should always be the previous page they visited from.
If for some reason a post back is required this will no longer work. */ $('#btnCancel').click(function (e) { e.preventDefault(); window.location.replace(document.referrer); }); }); /* * Populate the dropdown box with the default * range of 1 through 31 */ function populateSubnetMask() { var dropDown = $('#ddSubnetMask'); dropDown.append($('<option />').val("").text("")); for (var i = 31; i > 0; i--) { dropDown.append($('<option />').val(i).text("/" + i)); } } /* Fetches the predefined subnet start and stop values from the webservice. Populates select boxes with start to stop ranges to be selected. */ function updateNetworkRangeDropDowns(e, network, bits) { $.ajax({ type: 'POST', url: '/webservices/ipmws.asmx/GetNetworkRanges', data: "{'network': '" + network + "', 'bits': " + bits + "}", success: function (data) { networkInfo.networkRanges = data.d; var html = "<option value=''></option>"; for (var i = data.d.Start; i < data.d.End; i++) { html += "<option value='" + i + "'>" + i + "</option>"; } $(".mask").empty().append(html); $(networkInfo).trigger('selectNetworkRanges'); }, dataType: 'json', contentType: 'application/json; charset=utf-8' }); } /* Looks at the networkInfo.networkRanges object and selects the correct values in the drop down based on the values in the object. Should only select values if the network type is a VLAN (type=9). Will reset all the values to empty if the network type is not a VLAN. */
function selectNetworkRanges() { if ($('#ddNetworkStart option').size() > 0 && $('#ddNetworkTypes').val() == 9) { $('#ddNetworkStart').val(this.networkRanges.NetworkStartSelected); $('#ddNetworkEnd').val(this.networkRanges.NetworkEndSelected); $('#ddFixedStart').val(this.networkRanges.FixedStartSelected); $('#ddFixedEnd').val(this.networkRanges.FixedEndSelected); $('#ddDHCPStart').val(this.networkRanges.DhcpStartSelected); $('#ddDHCPEnd').val(this.networkRanges.DhcpEndSelected); } else { $('.mask').val(0); } } /* Custom event that updates the values in the form inputs for SubnetAddress/SubnetMask with the values in the networkInfo object. Because this function is bound as a custom event you can access the networkInfo object with this */ function updateNetworkAddress() { $('#tbSubnetAddress').val(this.SubnetAddress); $('#ddSubnetMask').val(this.SubnetMask); $('#lblMessageBox').addClass('hidden'); $('#tbSubnetAddress, #ddSubnetMask').valid(); } /* This will only be required on the drillDown page; this gets the subnet mask/address from the Radtree on the left and inserts the values into the appropriate form input/select. This gets called by DrillDownProvisioning.aspx in the Radtree OnClientNodeClicked="drillDownNodeClick" */ function drillDownNodeClick(sender, eventArgs) { var node = eventArgs.get_node(); var address = node.get_value().split("/"); networkInfo.SubnetAddress = address[0]; networkInfo.SubnetMask = address[1]; $(networkInfo).trigger('updateNetworkInfo'); $(networkInfo).trigger('updateNetworkRanges', [$('#tbSubnetAddress').val(), $('#ddSubnetMask').val()]); } Answer: From perusing your code a few times: Custom event abuse While it's very cool to have custom events, I would simply remove this extra layer of logic. It does not make sense to call $(networkInfo).trigger('selectNetworkRanges'); if you could just call selectNetworkRanges(). I understand that you would lose access to this but you are accessing networkInfo directly in updateNetworkRangeDropDowns anyway. 
DRY (Don't Repeat Yourself) In selectNetworkRanges you could do var ranges = this.networkRanges; and then access ranges instead of this.networkRanges every time You are building a dropdown in populateSubnetMask and in updateNetworkRangeDropDowns in a different way even though the functionality is very close. With some deep thought you could create a helper function that could build a dropdown for both #ddSubnetMask and .mask $('.open').click and $('.close').click do the same really, you could just do this: $('.close,.open').click(function () { $('#drilldowntreecontainer').toggle('1000'); $('.open').toggle(); }); What's in a name? Please avoid short names like r, d It is considered good practice to prefix jQuery results with $ so var $dropDown = $('#ddSubnetMask'); for example Style Comma-separated variables with a single var are considered better, so var node = eventArgs.get_node(), address = node.get_value().split("/"); instead of var node = eventArgs.get_node(); var address = node.get_value().split("/"); You are using both double quotes and single quotes for your string constants; you should stick to single-quote string constants. With the possible exception of your data: statements. Comments Great commenting in general, maybe a tad too verbose at times You should mention at the top that this code relies on the jQuery Validation Plugin, in fact it would have saved time if you mentioned that in your question ;) Design ddSubnetMask could be set as disabled, you would need a hidden input that contains the actual value to be submitted as per https://stackoverflow.com/a/368834/7602 The last few functions starting with populateSubnetMask are not within your $(function () {, I would keep it all together Magic Numbers You are commenting that VLAN(type=9), I would still advocate to create a var IS_VLAN = 9 and then use that constant Not a magic number per se, 'application/json; charset=utf-8' should be a properly named constant ( it's a DRY issue as well ). 
Dancing in the rain Your $.ajax calls should deal with errors; they will happen at some point. All in all, I could work with this code. You are correct that the code is tightly coupled. I think that's because of the data you have to work with, so I would not worry about it too much.
{ "domain": "codereview.stackexchange", "id": 6751, "tags": "javascript, jquery, networking" }
Contest Solution: Remorseful Sam
Question: The Problem Sam Morose served for many years as communications officer aboard the U.S.S. Dahdit, a U.S. Coast Guard frigate deployed in the South Pacific. Sam never quite got over the 1995 decision to abandon Morse code as the primary ship-to-shore communication scheme, and was forced to retire soon after because of the undue mental anguish that it was causing. After leaving the Coast Guard, Sam landed a job at the local post office, and became a model U.S. Postal worker… That said, it isn't surprising that Sam is now holding President Clinton hostage in a McDonald's just outside the beltway. The FBI and Secret Service have been trying to bargain with Sam for the release of the president, but they find it very difficult since Sam refuses to communicate in anything other than Morse code. Janet Reno has just called you to write a program that will interpret Sam's demands in Morse code as English. Morse code represents characters of an alphabet as sequences of dits (short key closures) and dahs (longer key closures). If we let a period (.) represent a dit and a dash (-) represent a dah, then the Morse code version of the English alphabet is: Sample Input Your program must take its input from the ASCII text file morse.in. The file contains periods and dashes representing a message composed only of the English alphabet as specified above. One blank space is used to separate letters and three blanks are used to separate words. Sample contents of the file could appear as follows: .... . .-.. .-.. ---   -- -.--   -. .- -- .   .. ...   ... .- -- Sample Output Your program must direct its output to the screen and must be the English interpretation of the Morse code found in the input file. There must be no blanks between letters in the output and only one blank between words. You must use all capital letters in the output. 
The output corresponding to the input file above is: HELLO MY NAME IS SAM code.py import re codes = { '.-':'A', '-...':'B', '-.-.':'C', '-..':'D', '.':'E', '..-.':'F', '--.':'G', '....':'H', '..':'I', '.---':'J', '-.-':'K', '.-..':'L', '--':'M', '-.':'N', '---':'O', '.--.':'P', '--.-':'Q', '.-.':'R', '...':'S', '-':'T', '..-':'U', '...-':'V', '.--':'W', '-..-':'X', '-.--':'Y', '--..':'Z' } with open('morse.in') as f: for line in f: t = [] for s in line.strip().split(' '): t.append(codes.get(s) if s else ' ') print(re.sub(' +', ' ', ''.join(t))) Any advice on performance enhancement and solution simplification is appreciated, as are topical comments! Answer: As long as you are using re.sub() to handle the spaces, you may as well use it to perform the entire task. Note that I've added entries to codes that map a triple space to a space, and a single space to nothing. Also, since codes contains one entry per line, I prefer to put a comma after every entry, including a superfluous comma after the last one, to make it easy to add or remove entries. import re codes = { '.-': 'A', '-...': 'B', '-.-.': 'C', … '--..': 'Z', '   ': ' ', ' ': '', } with open('morse.in') as f: print( re.sub( '[.-]+|   | ', lambda match: codes[match.group()], f.read() ), end='' )
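For reference, here is a self-contained variant of the answer's approach that decodes from a string instead of reading morse.in, which makes it easy to test: the full alphabet is spelled out, letters are separated by one space, words by three, and a single re.sub pass handles both separators.

```python
import re

# Morse decoder in the style of the answer: one regex pass maps letter
# codes to letters, triple spaces to a word break, and single spaces to
# nothing. The three-space alternative must come before the one-space one.
CODES = {
    '.-': 'A', '-...': 'B', '-.-.': 'C', '-..': 'D', '.': 'E', '..-.': 'F',
    '--.': 'G', '....': 'H', '..': 'I', '.---': 'J', '-.-': 'K', '.-..': 'L',
    '--': 'M', '-.': 'N', '---': 'O', '.--.': 'P', '--.-': 'Q', '.-.': 'R',
    '...': 'S', '-': 'T', '..-': 'U', '...-': 'V', '.--': 'W', '-..-': 'X',
    '-.--': 'Y', '--..': 'Z', '   ': ' ', ' ': '',
}

def decode(morse):
    return re.sub('[.-]+|   | ', lambda m: CODES[m.group()], morse)

print(decode('.... . .-.. .-.. ---   -- -.--   -. .- -- .   .. ...   ... .- --'))
# → HELLO MY NAME IS SAM
```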
{ "domain": "codereview.stackexchange", "id": 32525, "tags": "python, python-3.x, programming-challenge, morse-code" }
Count the occurrence of each unique word in the file
Question: I've been doing a task for an interview that I will have soon. The requirement is to code the following problem in C#. Write a program (and prove that it works) that: Given a text file, count the occurrence of each unique word in the file. For example; a file containing the string “Go do that thing that you do so well” should find these counts: 1: Go 2: do 2: that 1: thing 1: you 1: so 1: well I coded my solution and it's working fine for the tests I gave. I'm looking for any ways that I can improve my code and to get feedback on the solution. Is there's any way I could to the problem more efficiently. Thank you in advance. using System; using System.Collections.Generic; using System.Text.RegularExpressions; namespace CountOccurrence { class Program { static void Main(string[] args) { string text = System.IO.File.ReadAllText(@"F:\Ex\Myfile.txt"); // User ReadAllText to copy the file's text into a string string textToLower = text.ToLower(); // Converts the string to lower case string Regex reg_exp = new Regex("[^a-z0-9]"); // Regular expressions to replace non-letter and non-number characters with spaces. It uses the pattern [^a-z0-9].The a-z0-9 part means any lowercase letter or a digit. textToLower = reg_exp.Replace(textToLower, " "); // The code uses the Regex object’s Replace method to replace the characters that match the pattern with a space character. 
string[] Value = textToLower.Split(new char[] {' '}, StringSplitOptions.RemoveEmptyEntries); // Split the string and remove the empty entries Dictionary<string, int> CountTheOccurrences = new Dictionary<string, int>(); // Create a dictionary to keep track of each occurrence of the words in the string for (int i = 0; i < Value.Length; i++) // Loop the splited string { if (CountTheOccurrences.ContainsKey(Value[i])) // Check if word is already in dictionary update the count { int value = CountTheOccurrences[Value[i]]; CountTheOccurrences[Value[i]] = value + 1; } else // If we found the same word we just increase the count in the dictionary { CountTheOccurrences.Add(Value[i], 1); } } Console.WriteLine("The number of counts for each words are:"); foreach (KeyValuePair<string, int> kvp in CountTheOccurrences) { Console.WriteLine("Counts: " + kvp.Value + " for " + kvp.Key); // Print the number of counts for each word } Console.ReadKey(); } } } Answer: Efficiency I won't say much about efficiency - because without a clear use-case it will be hard to know whether possible changes would be worth the effort - but my main concern would be fact that you force a whole file into a string, of which you immediately produce a second copy. It would be nice to see a version which takes a stream of some description rather than a whole file, as this could (in theory) cope with very large files (ones which can't fit in memory), could have much better memory characteristics, and with a bit of effort could start processing the file before it has read the whole thing, so that you are not stalled waiting for the whole file before you can begin (though an asynchronous implementation would be necessary to fully exploit such possibilities). API Don't hard-code the input file-name. Put this code in a nicely packaged method, and take the input as a parameter. 
This parameter could be a file-name, but it could also be a Stream or String (if you intend to always read the whole thing at the start) or whatever; you can always provide a convenience method to cover important use-cases. And don't print the output to the console: if the calling code wants to print the counts to the console, let it do that, but give it the information it needs to do what it wants, rather than deciding what to do with the information for it. Returning the dictionary (perhaps as an IDictionary, so that you aren't coupled to the particular class) produces a much more useful interface. IDictionary<string, int> CountWordOccurrences(string text); If your specification says that you must be printing these counts to the console, then you can write a method to print out the dictionary (perhaps to an arbitrary TextWriter, rather than only Console.WriteLine, which is no fun to test against), and write another method which composes the two. Comments Comments should be useful, explaining why code is doing what it is doing or providing some important context. // Converts the string to lower case string says nothing which text.ToLower() doesn't already, and is liable to rot as the method is modified. The code uses the Regex object's Replace method to replace the characters that match the pattern with a space character. is far too wordy, and just states what the code is doing, without any indication as to why it might be doing that. We can see that it uses a Regex object, and we can see that it uses the Replace method, and we can see that it replaces matches with a " ": none of this needs clarifying. Variables I'm not terribly fond of your variable naming. They don't escape the method, so it doesn't really matter what style you use (though everyone uses lowerCamelCase, see the Microsoft Naming Guidelines), but you must be consistent. Value -> value (or values, since it is a collection). 
CountTheOccurrences is not a great name; counts or wordCounts would scan much better. reg_exp encodes no information beyond that which is clear from the type. Something like letterFilter might be better. I'd be inclined to ditch textToLower, and just replace text: it's so easy to use the old variable accidentally when they have such similar names. If you wanted the separation to be clear, you could put the reading of the text and to-lowering in a different scope (or even method), so that only textToLower appears for the rest of the method; however, you've already confused matters by re-using textToLower as textToLowerAfterRegex. Splitting Your regex bit doesn't really make sense; you are replacing every character that doesn't map to a lower-case latin letter or arabic numeral with a space: where is the specification which tells you what counts as a word or not? I'll leave someone who knows more about unicode to comment on how you should do this sort of thing properly, but your code is deficient, not least because it will cut "naïve" into "na" and "ve". You can use Regex.Split instead of performing a replacement and then splitting, and use something like \p{L} to cover a wider variety of letters (just for example). A LINQ Where can then be used to filter out empty entries. Regex nonLetters = new Regex(@"[^\p{L}]"); return nonLetters.Split(text).Where(s => s.Length > 0); A more efficient alternative might be to use a regex which matches words, and return captures instead of splitting; however, again, if performance is a concern, then you need to benchmark under realistic conditions. Regex wordMatcher = new Regex(@"\p{L}+"); return wordMatcher.Matches(text).Select(c => c.Value); I'd be strongly inclined to separate this 'extracting words' bit from the 'counting words' bit, and to avoid ToLowering at this point. 
Indeed, rather than using ToLower() to group words, consider supplying the dictionary a case-insensitive string comparer such as StringComparer.CurrentCultureIgnoreCase. Counting Though there is merit in using a separate ContainsKey call and [key] lookup, it's more efficient and a bit tidier to use TryGetValue. Here are 2 obvious ways of using it. if (CountTheOccurrences.TryGetValue(value[i], out int count)) { CountTheOccurrences[value[i]] = count + 1; } else { CountTheOccurrences.Add(value[i], 1); } // or int count; if (!CountTheOccurrences.TryGetValue(value[i], out count)) { count = 0; } CountTheOccurrences[value[i]] = count + 1; Personally I prefer the second one (count doesn't leak if not-found), but the first is closer to what you already have. Another option is to ditch the loop completely, and use a LINQ GroupBy call, reducing the code complexity a lot. In the code below, I write a custom general-purpose CountOccurrences method, which could be reused for other purposes, and makes the intention of the code plain without compromising on performance (GroupBy would introduce significant overheads). Example Rewrite The below incorporates most of the ideas above, with a couple of other adjustments. The separation of concerns is maybe a little excessive, but while the number of lines of code seems to have increased, essentially all the complexity is hidden away in the completely generic (and potentially widely reusable (I wish LINQ had it already)) CountOccurrences method; the other methods are trivial, but nonetheless encapsulate domain information behind a nice API.
/// <summary> /// Convenience method which prints the number of occurrences of each word in the given file /// </summary> public static void PrintWordCountsInFile(string fileName) { var text = System.IO.File.ReadAllText(fileName); var words = SplitWords(text); var counts = CountWordOccurrences(words); WriteWordCounts(counts, System.Console.Out); } /// <summary> /// Splits the given text into individual words, stripping punctuation /// A word is defined by the regex @"\p{L}+" /// </summary> public static IEnumerable<string> SplitWords(string text) { Regex wordMatcher = new Regex(@"\p{L}+"); return wordMatcher.Matches(text).Select(c => c.Value); } /// <summary> /// Counts the number of occurrences of each word in the given enumerable /// </summary> public static IDictionary<string, int> CountWordOccurrences(IEnumerable<string> words) { return CountOccurrences(words, StringComparer.CurrentCultureIgnoreCase); } /// <summary> /// Prints word-counts to the given TextWriter /// </summary> public static void WriteWordCounts(IDictionary<string, int> counts, TextWriter writer) { writer.WriteLine("The number of counts for each words are:"); foreach (KeyValuePair<string, int> kvp in counts) { writer.WriteLine("Counts: " + kvp.Value + " for " + kvp.Key.ToLower()); // print word in lower-case for consistency } } /// <summary> /// Counts the number of occurrences of each distinct item /// </summary> public static IDictionary<T, int> CountOccurrences<T>(IEnumerable<T> items, IEqualityComparer<T> comparer) { var counts = new Dictionary<T, int>(comparer); foreach (T t in items) { int count; if (!counts.TryGetValue(t, out count)) { count = 0; } counts[t] = count + 1; } return counts; } Note that every method has inline-documentation, which describes its job (though I'll grant the summaries are not very good; with a proper spec one would be able to write better APIs with better documentation).
{ "domain": "codereview.stackexchange", "id": 33983, "tags": "c#, performance, interview-questions, hash-map" }
Encrypt a String using AES in CBC mode
Question: Your opinion interests me regarding this program. This program encrypts a text message using the AES256 algorithm and CBC. It allows the creation of an encrypted message that contains: The salt used for the creation of the AES256 key The salt used for the creation of the derived key used for calculating the HMAC for message integrity verification The initialization vector The size of the message The encrypted message The HMAC for integrity verification Do you have any security remarks about this code? Does it seem secure to you? If not, why? package security; import java.io.ByteArrayInputStream; import java.io.ByteArrayOutputStream; import java.io.DataInputStream; import java.io.DataOutputStream; import java.io.IOException; import java.io.InputStream; import java.nio.charset.StandardCharsets; import java.security.MessageDigest; import java.security.NoSuchAlgorithmException; import java.security.SecureRandom; import java.util.Arrays; import javax.crypto.Cipher; import javax.crypto.Mac; import javax.crypto.SecretKeyFactory; import javax.crypto.spec.IvParameterSpec; import javax.crypto.spec.PBEKeySpec; import javax.crypto.spec.SecretKeySpec; public class Sample { public static void main(String[] args) throws Exception { String password = "12341234@xxxx!'9876"; String msg = "This is the message"; System.out.println("Encrypt '" + msg + "'"); byte[] encrypted = encrypt(msg, password); System.out.println(bytesToHex(encrypted)); String msg2 = decrypt(encrypted, password); System.out.println("restultat='" + msg2 + "'"); } public static byte[] encrypt(String message, String password) throws Exception { SecureRandom rand = new SecureRandom(); try (ByteArrayOutputStream out = new ByteArrayOutputStream()) { try (DataOutputStream dout = new DataOutputStream(out)) { try (InputStream in = new ByteArrayInputStream( message.getBytes(StandardCharsets.UTF_8))) { byte[] salt = new byte[8]; rand.nextBytes(salt); out.write(salt); byte[] derivatedSalt = new byte[8]; 
rand.nextBytes(derivatedSalt); out.write(derivatedSalt); byte[] iv = new byte[16]; rand.nextBytes(iv); out.write(iv); dout.writeLong(message.getBytes().length); byte[] aesKey = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256") .generateSecret(new PBEKeySpec(password.toCharArray(), salt, 60000, 256)) .getEncoded(); Cipher ci = Cipher.getInstance("AES/CBC/PKCS5Padding"); ci.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(aesKey, "AES"), new IvParameterSpec(iv)); Mac hmac = Mac.getInstance("HmacSHA256"); byte[] key1 = saltedSHA256(saltedSHA256(aesKey, derivatedSalt), derivatedSalt); hmac.init(new SecretKeySpec(key1, "HmacSHA256")); hmac.update(iv); hmac.update(salt); byte[] ibuf = new byte[8192]; int len; while ((len = in.read(ibuf)) != -1) { byte[] obuf = ci.update(ibuf, 0, len); if (obuf != null) { out.write(obuf); hmac.update(ibuf, 0, len); } } byte[] obuf = ci.doFinal(); if (obuf != null) { out.write(obuf); } byte[] bmac = hmac.doFinal(); out.write(bmac); return out.toByteArray(); } } } } public static String decrypt(byte[] xx, String password) throws Exception { try (InputStream in = new ByteArrayInputStream(xx)) { try (DataInputStream din = new DataInputStream(in)) { try (ByteArrayOutputStream out = new ByteArrayOutputStream()) { byte[] salt = new byte[8]; read(in, salt); byte[] derivatedSalt = new byte[8]; read(in, derivatedSalt); byte[] iv = new byte[16]; read(in, iv); long taille = din.readLong(); byte[] aesKey = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256") .generateSecret(new PBEKeySpec(password.toCharArray(), salt, 60000, 256)) .getEncoded(); Cipher ci = Cipher.getInstance("AES/CBC/PKCS5Padding"); ci.init(Cipher.DECRYPT_MODE, new SecretKeySpec(aesKey, "AES"), new IvParameterSpec(iv)); byte[] key1 = saltedSHA256(saltedSHA256(aesKey, derivatedSalt), derivatedSalt); Mac hmac = Mac.getInstance("HmacSHA256"); hmac.init(new SecretKeySpec(key1, "HmacSHA256")); hmac.update(iv); hmac.update(salt); long tailleToread = (taille / 16 + 1) * 16; long resteALire = 
tailleToread; int bufsize = 8192; while (resteALire > 0) { int blockToRead = Math.min( (resteALire < bufsize ? (int) resteALire : bufsize + 500), bufsize); // trick to handle a long message size with an int buffer size byte[] ibuf = in.readNBytes(blockToRead); int len = ibuf.length; resteALire -= len; byte[] obuf = ci.update(ibuf, 0, len); if (obuf != null) { out.write(obuf); hmac.update(obuf); } } byte[] obuf = ci.doFinal(); if (obuf != null) { out.write(obuf); hmac.update(obuf); } // recompute byte[] bmac = hmac.doFinal(); byte[] readsha = in.readAllBytes(); if (!Arrays.equals(bmac, readsha)) { throw new Exception("HMAC error"); } return out.toString(StandardCharsets.UTF_8); } } } } private static void read(InputStream inputStream, byte[] buffer) throws IOException { int bufferLength = buffer.length; int totalBytesRead = 0; int bytesRead; while ((bytesRead = inputStream.read(buffer, totalBytesRead, bufferLength - totalBytesRead)) != -1) { totalBytesRead += bytesRead; if (totalBytesRead == bufferLength) { break; } } } private static byte[] saltedSHA256(byte[] data, byte[] salt) throws NoSuchAlgorithmException { byte[] bytes = new byte[data.length + salt.length]; System.arraycopy(data, 0, bytes, 0, data.length); System.arraycopy(salt, 0, bytes, data.length, salt.length); MessageDigest digest = MessageDigest.getInstance("SHA-256"); return digest.digest(bytes); } public static String bytesToHex(byte[] bytes) { StringBuilder hexStringBuilder = new StringBuilder(2 * bytes.length); for (byte b : bytes) { hexStringBuilder.append(String.format("%02x", b)); } return hexStringBuilder.toString(); } } Answer: The main thing to be wary of is that this kind of ciphertext can be cracked "offline", i.e. at the convenience of the adversary using as many machines as possible. That means that bad passwords are immediately vulnerable even though PBKDF2 is used to derive the data encryption & MAC keys. If possible it should be avoided.
Security / Cryptography When it comes to the cryptography, the following can be noted: Instead of inventing your own AEAD cipher, it is probably better and less error prone to use AES in GCM mode. There are reasons to prefer HMAC in some cases, but generally those are rather specific. The number of iterations is an important security parameter, which should be clearly documented. It should definitely be higher than 60000 in most cases, more towards a million. If there is a random salt for each encryption then you will get a different key for each encryption; using a different IV isn't really required (it can remain as an all-zero IV). It may be a better idea to use public key encryption and signing instead of using secret keys. That would allow you to protect the plaintext with a real key instead of a password (a password can be used to protect the private keys). There is no reason to call saltedSHA256 twice, it has no security consequences. HMAC forgets to include the plaintext size in the calculation; as it is an adversary could alter the size after which decryption fails (currently they probably need to remain within the block though). Note that saltedSHA256 basically implements a Key Derivation Function. There are official key derivation functions such as HKDF. Conciseness Clearly the code can be more concise, for instance: A single try can be used to control multiple streams at once, e.g. try (var s1 = new Stream(); var s2 = new Stream(s1)) { // code goes here } Instead of performing the cipher and HMAC update yourself you could have used CipherOutputStream and DigestOutputStream. In modern Java versions you can use the var keyword, e.g. var ba = new ByteArrayOutputStream(). Instead of creating a buffer etc. it is possible to use InputStream.transferTo(OutputStream out) But while we are at it, we don't need any InputStream, if we have CipherOutputStream we can just write the encoded message directly. 
Instead of using ByteArrayOutputStream simply use DataOutputStream#write, then it is possible to use var out = new DataOutputStream(new ByteArrayOutputStream()) (far out, eh?). The size of the ciphertext can easily be predicted. As the result is being returned in a byte array, it makes more sense to use ByteBuffer or byte[]. The good thing about ByteBuffer is that Cipher can use it as output buffer, but that it also has put(byte[]) and putLong(long) methods defined for it. So the encode method doesn't require streaming anyway. Code Some variable names could be improved, e.g. cipher instead of ci. The exception handling basically lets the user handle all (checked) exceptions; it is better to turn most of these into RuntimeException instances, see here how to handle the various exceptions. Java can only destroy strings by using garbage collection, and normally Java doesn't destroy string instances. A password is often shown as using char[] because the char values in the array can be overwritten by methods such as Arrays.fill, which would usually destroy the contents. If the method takes a String, then using a char[] becomes pointless. Recommendations These are recommendations for creating a streaming implementation, as mentioned in the comments as goal (next time include that information in the question text): If the code needs to use GCM, then it is limited to 2 GiB of data due to the maximum array size. For file encryption HMAC can be used, but it makes more sense to e.g. copy the ideas within DigestOutputStream and DigestInputStream to allow for streaming in that case. At a very minimum make the password handling as robust as possible (e.g. indicate weak passwords and/or recommend strong ones). Somehow register the iteration count with the ciphertext so it can be upgraded later. Include as much information in the HMAC calculation as possible.
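To make the GCM suggestion concrete, here is a minimal sketch (the class and method names are mine, not from the reviewed code; the PBKDF2 key-derivation step is omitted and a ready-made SecretKey is assumed). GCM authenticates the nonce and ciphertext itself, so the hand-rolled HMAC and double-SHA-256 key derivation disappear, and checked exceptions are wrapped in unchecked ones as suggested above.

```java
import java.security.GeneralSecurityException;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

// Illustrative sketch: AES/GCM is an AEAD mode, so confidentiality and integrity
// come from a single primitive; no separate HMAC is needed.
public class GcmSketch {
    private static final int NONCE_LEN = 12; // 96-bit nonce, the recommended size for GCM
    private static final int TAG_BITS = 128; // full-length authentication tag

    public static byte[] encrypt(SecretKey key, byte[] plaintext) {
        try {
            byte[] nonce = new byte[NONCE_LEN];
            new SecureRandom().nextBytes(nonce);
            Cipher ci = Cipher.getInstance("AES/GCM/NoPadding");
            ci.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, nonce));
            byte[] ct = ci.doFinal(plaintext);            // ciphertext || tag
            byte[] out = new byte[NONCE_LEN + ct.length]; // prepend the nonce, like the original IV
            System.arraycopy(nonce, 0, out, 0, NONCE_LEN);
            System.arraycopy(ct, 0, out, NONCE_LEN, ct.length);
            return out;                                   // nonce || ciphertext || tag
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);           // wrap checked exceptions, per the review
        }
    }

    public static byte[] decrypt(SecretKey key, byte[] message) {
        try {
            Cipher ci = Cipher.getInstance("AES/GCM/NoPadding");
            ci.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, message, 0, NONCE_LEN));
            // doFinal verifies the tag and throws AEADBadTagException if anything was altered
            return ci.doFinal(message, NONCE_LEN, message.length - NONCE_LEN);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Tampering with any byte of the output (including the length, were it included) makes tag verification fail, which addresses the "HMAC forgets the plaintext size" issue automatically.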
{ "domain": "codereview.stackexchange", "id": 45398, "tags": "java, security, cryptography, aes" }
Command Pattern with Undo, returning response in Invoker and Command class or Callback?
Question: I have used command pattern with undo support. The terms associated with Command Pattern are as follows Client → Invoker → Command instance execute() → Receiver so the client will need to know whether the operation was successful so that it can decide whether to undo. In my example, Receiver talks to the Volt DB using some APIs, so there are predefined response classes with result code and result message. It would be bad if I sent those all the way to the client. Having Invoker and Command return a boolean indicating success or failure feels better. I have seen an example where they use callbacks. Is this something only for asynchronous operations or can I also use it here? Which would be better? Snippets of the code are below; full code is available on GitHub. Invoker class package com.spakai.undoredo; import java.util.Stack; public class Invoker { private final Stack<Command> undoStack; private final Stack<Command> redoStack; public Invoker() { undoStack = new Stack<>(); redoStack = new Stack<>(); } public void execute(Command cmd) { undoStack.push(cmd); redoStack.clear(); cmd.execute(); } public void undo() { if (!undoStack.isEmpty()) { Command cmd = undoStack.pop(); cmd.undo(); redoStack.push(cmd); } } public void redo() { Command cmd = redoStack.pop(); cmd.execute(); undoStack.push(cmd); } } Receiver interface package com.spakai.undoredo; public interface Receiver { public CreateGroupResponse createGroup(int groupId, int subscriptionId); public DeleteGroupResponse deleteGroup(int groupId); //many more } Command interface package com.spakai.undoredo; public interface Command { public void execute(); public void undo(); public void redo(); } Sample Command instance package com.spakai.undoredo; public class CreateGroupAndSubscription implements Command { // which states do I need to store in order to execute and undo private int groupId; private int subscriptionId; // this is the Volt handle apis that talks to VoltDB private Receiver receiver; public
CreateGroupAndSubscription(int groupId, int subscriptionId, Receiver receiver) { this.groupId = groupId; this.subscriptionId = subscriptionId; this.receiver = receiver; } @Override public void execute() { CreateGroupResponse response = receiver.createGroup(groupId,subscriptionId); } @Override public void undo() { DeleteGroupResponse response = receiver.deleteGroup(groupId); } @Override public void redo() { execute(); } } Any other suggestions on the whole code are most welcome. Answer: Excellent "by the book" realization of the command pattern with undo/redo ;) Just 2 comments: If receiver changes the state of the database, I would definitely handle the error case - the current implementation seems to ignore it. Imagine that undo fails (group was not deleted) and the previous command created the subscription. Then another undo (which tries to delete the subscription) also fails because there is still a group referencing the subscription. Maybe it is a matter of taste, but I don't see a case where the implementation of execute and redo differs. Usually, I use only 2 methods do and undo where do will be used for execute as well as redo. Having Invoker and Command return a boolean indicating success or failure feels better. I have seen an example where they use callbacks. Is this something only for asynchronous operations or can I also use it here? Which would be better? If a boolean (without error message) is enough, I would prefer the boolean return value because of its simplicity. Otherwise, I would even prefer something like an ErrorInfo object that contains a success flag and an optional error message (or even other information like the exception) as return value. Another option is to throw an exception - may also be valid in your case. For that kind of error handling, the command should be executed before being pushed to the undoStack obviously.
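Putting the answer's suggestions together, here is a sketch of how the invoker and command could look (names like DoUndoCommand and SafeInvoker are mine, purely illustrative): a single doIt/undoIt pair serves both execute and redo, a boolean reports success, and a command is only pushed onto the undo stack after it has actually succeeded.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative refactor sketch: do/undo pair with a success flag.
interface DoUndoCommand {
    boolean doIt();   // used for both execute and redo
    boolean undoIt();
}

class SafeInvoker {
    private final Deque<DoUndoCommand> undoStack = new ArrayDeque<>();
    private final Deque<DoUndoCommand> redoStack = new ArrayDeque<>();

    public boolean execute(DoUndoCommand cmd) {
        boolean ok = cmd.doIt();
        if (ok) {                 // only record commands that actually changed state
            undoStack.push(cmd);
            redoStack.clear();
        }
        return ok;
    }

    public boolean undo() {
        if (undoStack.isEmpty()) return false;
        DoUndoCommand cmd = undoStack.pop();
        boolean ok = cmd.undoIt();
        if (ok) redoStack.push(cmd);
        else undoStack.push(cmd); // undo failed: leave it in place so the user can retry
        return ok;
    }

    public boolean redo() {       // guard the empty case, unlike the original redo()
        if (redoStack.isEmpty()) return false;
        DoUndoCommand cmd = redoStack.pop();
        boolean ok = cmd.doIt();
        if (ok) undoStack.push(cmd);
        return ok;
    }
}
```

Swapping the boolean for an ErrorInfo-style object, as the answer suggests, only changes the return types; the push-after-success structure stays the same.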
{ "domain": "codereview.stackexchange", "id": 26939, "tags": "java, design-patterns" }
Nodelet Publisher
Question: virtual void onInit() { ros::NodeHandle & private_nh = getPrivateNodeHandle(); pub = private_nh.advertise<nodelet_test::buff>("chatter2", 10); nodelet_test::buff buffer; buffer.buffer_data.push_back(20); while(ros::ok()) { pub.publish(buffer); } } I have the example code above for nodelet publisher only. Since I do not use callbacks (no subscriber), is this the way it's done? Adding a while loop inside onInit()? Originally posted by ROSfc on ROS Answers with karma: 54 on 2016-11-18 Post score: 0 Answer: Reading the documentation of onInit, it should not block. A blocking onInit prevents the surrounding nodelet manager from continuing to initialize the other nodelets. Instead you should create a timer in the onInit function and then publish your message in the triggered callback of the timer. Originally posted by Dirk Thomas with karma: 16276 on 2016-11-18 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by ROSfc on 2016-11-20: I would require 36Hz frequency of published messages, do you advise using a timer? Or do you see any problems arising? Comment by ROSfc on 2016-11-21: As you suggested I used createTimer and in that I could specify the frequency using ros::Duration
{ "domain": "robotics.stackexchange", "id": 26279, "tags": "ros, roscpp, publisher, nodelet" }
How to treat the units of measure when taking a derivative?
Question: I've had a doubt for a long time: when I'm taking the derivative, of a function for example, how should I treat the units of measurement? For example, if I'm taking the derivative of: $$S\,[{\rm m}]=S_{0}\,[{\rm m}]+v\,[{\rm m}\,{\rm s}^{-1}]\,t\,[{\rm s}]+\frac{1}{2}a\,[{\rm m}\,{\rm s}^{-2}]\,t^{2}\,[{\rm s}^{2}]$$ Thanks in advance! Answer: For the purposes of dimensions (units), you can treat a derivative like a division. So when you apply $\frac{{\rm d}}{{\rm d}t}$ to a function you divide the dimensions of the function by a unit of time. In your example I get: $$\frac{{\rm d}S}{{\rm d}t}\left[{\rm m}\,{\rm s}^{-1}\right] = v \left[{\rm m}\,{\rm s}^{-1}\right] + a\,\left[{\rm m}\,{\rm s}^{-2}\right]\,t\,\left[{\rm s}\right]$$ which of course makes sense, in terms of units.
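To push the answer's rule one step further (a small check of my own, consistent with the formulas above): differentiating a second time divides the units by another second, which lands exactly on the units of acceleration:

```latex
\frac{\mathrm{d}^2 S}{\mathrm{d}t^2}\,\left[\mathrm{m}\,\mathrm{s}^{-2}\right]
  = a\,\left[\mathrm{m}\,\mathrm{s}^{-2}\right]
```

Likewise, integration multiplies by the units of the integration variable, so $\int v\,\mathrm{d}t$ carries $\left[\mathrm{m}\,\mathrm{s}^{-1}\right]\left[\mathrm{s}\right] = \left[\mathrm{m}\right]$.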
{ "domain": "physics.stackexchange", "id": 31614, "tags": "units, differentiation, calculus" }
How can I apply Rice's theorem?
Question: I am learning for my computability and complexity exam in which there is always an exercise to decide whether some problem is decidable or not. In one of the past exams, there was the following problem: Given Turing Machine M, decide whether there exists a prime number on which M halts. I am supposed to decide whether the problem is decidable or not. I guess I can rewrite the problem into the language $L = \{ \langle M \rangle | \exists p \in \mathbb{P} : M(p) \downarrow \}$. I have been given a hint that I can use Rice's theorem to prove the language is undecidable. I am actually struggling, since I have no idea how I should apply Rice's theorem to (generally any) problem. I am interested in these questions: How can I find out whether Rice's theorem is applicable or not? If I find it out, how to apply it? (In this exercise in particular) Any help appreciated. Answer: Here's another, elementary proof technique that does not use Rice's theorem, which is way overkill for this simple problem. We have a Turing machine family $F(A)$ that goes into an infinite loop for any input except 2, in which case it will run program $A$, which can be arbitrary. Now if we could decide your original problem, then using $F$ we could decide for arbitrary inputless Turing machines whether they halt. But this is obviously undecidable and therefore your original problem is as well.
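The reduction in the answer can be written out explicitly (the notation below is mine): since 2 is prime and is the only input on which $F(A)$ can possibly halt, "$F(A)$ halts on some prime" holds exactly when $A$ halts.

```text
F(A) = "on input n:
          if n != 2:  loop forever
          else:       ignore n, run A; halt if A halts"

If D decided the original problem, we could decide halting:
  Halts(A) := D(F(A))
  -- yes  iff  F(A) halts on some prime
  --      iff  F(A) halts on 2   (the only possible input)
  --      iff  A halts
```

Since the halting problem for inputless machines is undecidable, no such decider D can exist.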
{ "domain": "cs.stackexchange", "id": 15368, "tags": "undecidability, rice-theorem" }
Any use for $F_4$ in hep-th?
Question: In high energy physics, the use of the classical Lie groups are common place, and in the Grand Unification the use of $E_{6,7,8}$ is also common place. In string theory $G_2$ is sometimes utilized, e.g. the $G_2$-holonomy manifolds are used to get 4d $\mathcal{N}=1$ susy from M-theory. That leaves $F_4$ from the list of simple Lie groups. Is there any place $F_4$ is used in any essential way? Of course there are papers where the dynamics of $d=4$ $\mathcal{N}=1$ susy gauge theory with $F_4$ are studied, as part of the study of all possible gauge groups, but I'm not asking those. Answer: $F_4$ is the centralizer of $G_2$ inside an $E_8$. In other words, $E_8$ contains an $F_4\times G_2$ maximal subgroup. That's why by embedding the spin connection into the $E_8\times E_8$ heterotic gauge connection on $G_2$ holonomy manifolds, one obtains an $F_4$ gauge symmetry. See, for example, http://arxiv.org/abs/hep-th/0108219 Gauge theories and string theory with $F_4$ gauge groups, e.g. in this paper http://arxiv.org/abs/hep-th/9902186 depend on the fact that $F_4$ may be obtained from $E_6$ by a projection related to the nontrivial ${\mathbb Z}_2$ automorphism of $E_6$ which you may see as the left-right symmetry of the $E_6$ Dynkin diagram. This automorphism may be realized as a nontrivial monodromy which may break the initial $E_6$ gauge group to an $F_4$ as in http://arxiv.org/abs/hep-th/9611119 Because of similar constructions, gauge groups including $F_4$ factors (sometimes many of them) are common in F-theory: http://arxiv.org/abs/hep-th/9701129 More speculatively (and outside established string theory), a decade ago, Pierre Ramond had a dream http://arxiv.org/abs/hep-th/0112261 http://arxiv.org/abs/hep-th/0301050 that the 16-dimensional Cayley plane, the $F_4/SO(9)$ coset (note that $F_4$ may be built from $SO(9)$ by adding a 16-spinor of generators), may be used to define all of M-theory. As far as I can say, it hasn't quite worked but it is interesting. 
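As a dimension check on the statements above (the branching numbers are standard, though not spelled out in the answer): $SO(9)$ has $36$ generators and the spinor $\mathbf{16}$ supplies the rest of $F_4$, while the $F_4 \times G_2 \subset E_8$ embedding accounts for all $248$ generators of $E_8$:

```latex
\dim F_4 = \dim SO(9) + \dim \mathbf{16} = 36 + 16 = 52,
\qquad
\mathbf{248} \;\to\; (\mathbf{52},\mathbf{1}) \oplus (\mathbf{1},\mathbf{14}) \oplus (\mathbf{26},\mathbf{7}),
\quad 52 + 14 + 26 \cdot 7 = 248 .
```

The $\mathbf{26}$ appearing here is the same representation acted on by $F_4$ as the traceless part of the $3\times 3$ octonionic Jordan algebra mentioned below.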
Sati and others recently conjectured that M-theory may be formulated as having a secret $F_4/SO(9)$ fiber at each point: http://motls.blogspot.com/2009/10/is-m-theory-hiding-cayley-plane-fibers.html Less speculatively, the noncompact version $F_{4(4)}$ of the $F_4$ exceptional group is also the isometry of a quaternion manifold relevant for the maximal $N=2$ matter-Einstein supergravity, see http://arxiv.org/abs/hep-th/9708025 In that paper, you may also find cosets of the $E_6/F_4$ type and some role is also being played by the fact that $F_4$ is the symmetry group of a $3\times 3$ matrix Jordan algebra of octonions. A very slight extension of this answer is here: http://motls.blogspot.com/2011/10/any-use-for-f4-in-hep-th.html
{ "domain": "physics.stackexchange", "id": 3345, "tags": "particle-physics, research-level, symmetry, group-theory, lie-algebra" }
Time duration for a mechanical wind up mechanism?
Question: Is there a specific way to figure out the gear ratio I would need for a wind up mechanism that could last around one year? Would I start out with a really high tooth count and step down to a small enough ratio for a complete cycle to last a year? Do I need to devise a polynomial function, then use calculus to solve for the gear ratios, number of teeth, and time? Answer: Starting with the gear ratio is the wrong way to look at this problem. Start by determining the total energy you need for one year. No amount of gearing can fix it if the fully wound coil doesn't hold the energy you need. Once you know the energy you need, you look for a spring coil that can store at least a little more than that. Only then are you ready to consider gear ratio. The spring spec will give you the number of turns over the coil's useful discharge life. Only you can tell how many turns of some shaft you need to run your device for one year. The overall gear ratio is then the latter divided by the former.
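The answer's final step, written as a formula, is just a ratio. As a purely hypothetical illustration (the numbers are invented for the example, not from the question): if the output shaft must turn once per minute, a year needs $60 \cdot 24 \cdot 365 = 525{,}600$ turns, so a mainspring whose spec gives $10$ useful turns over its discharge life requires an overall reduction of

```latex
R \;=\; \frac{N_{\text{output turns per year}}}{N_{\text{spring turns}}}
  \;=\; \frac{525\,600}{10} \;=\; 52\,560 : 1
```

which would typically be split across several gear stages (e.g. five stages of roughly $9{:}1$ each, since $9^5 \approx 59\,000$).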
{ "domain": "engineering.stackexchange", "id": 1085, "tags": "mechanical-engineering, gears" }
Why are symmetries in phase space generated by functions that leave the Hamiltonian invariant?
Question: Hamilton's equation reads $$ \frac{d}{dt} F = \{ F,H\} \, .$$ In words this means that $H$ acts on $F$ via the natural phase space product (the Poisson bracket) and the result is the correct time evolution of $F$. In other words $H$ generates temporal shifts $t \to t +dt$. The function $F$ over phase space describes a conserved quantity if $$ \frac{d}{dt} F = \{ F,H\} =0 \, .$$ Noether's theorem now exploits that the Poisson bracket is antisymmetric $$ \{ A,B\} = - \{ B,A\} .$$ Therefore we can reverse the role of the two functions in the Poisson bracket above $$ \{ F,H\} =0 \quad \leftrightarrow \quad \{ H,F\} =0 \,. $$ In words, this second equation tells us that for any conserved quantity $F$, its action on the Hamiltonian $H$ is zero. In other words, $F$ generates a symmetry. This is exactly Noether's theorem. But usually, we argue that only the Lagrangian has to be invariant. The Hamiltonian can change under symmetries like boosts which increase the potential energy. (While the Lagrangian is a scalar, the Hamiltonian is only one component of the energy-momentum vector and therefore, there is no reason why it should be invariant.) So why exactly do we find in the Hamiltonian version of Noether's theorem that the Hamiltonian remains invariant under symmetry transformations? Answer: I recently spent a lot of time thinking about this stuff and wrote a little document which I put on my website here (under the title "Visualizing the Inverse Noether Theorem and Symplectic Geometry"). So I will begin by first addressing your specific question of how the symmetries of the Hamiltonian and Lagrangian are connected. However, I also want to address the deeper sub-question: what is a "symmetry" exactly, and how should we think about them? This part of my answer will be a little bit like a manifesto. Main Question: Invariance of Lagrangian Any transformation that changes the Lagrangian by a total derivative is called a "symmetry" (sometimes a "quasi symmetry").
Noether's theorem can be used to extract a conserved quantity using this symmetry. In the Hamiltonian framework, you then find that this conserved quantity "generates" the original symmetry. It is easier to see why this works out in the "Hamiltonian Lagrangian" formalism, where the Lagrangian $L_H$ is a function of momentum and position. $$ L_H(p_i, q_i, \dot q_i) = p_i \dot q_i - H(q_i, p_i) $$ (Here, $i = 1\ldots n$ and summation is implied when indices are repeated.) Now consider we have some conserved quantity $Q$; $$ \{ Q, H \} = 0. $$ This "generates" the infinitesimal transformation $$ \delta q_i = \varepsilon \frac{\partial Q}{\partial p_i} \hspace{1 cm} \delta p_i = -\varepsilon \frac{\partial Q}{\partial q_i} $$ Now, if we imagine that $\varepsilon$ is some tiny time dependent function, i.e. $\varepsilon = \varepsilon(t)$, we can use it to vary the action of a path. Assuming the boundary conditions $\varepsilon(t_1) = \varepsilon(t_2) = 0$, on solutions to the equations of motion we have \begin{align*} 0 &= \delta S \\ &= \int_{t_1}^{t_2} \delta L_H dt \\ &= \int_{t_1}^{t_2} \Big( -\varepsilon \frac{\partial Q}{\partial q_i} \dot q_i + p_i \frac{d}{dt} \big( \varepsilon \frac{\partial Q}{\partial p_i} \big) - \varepsilon\{H, Q\} \Big) dt \\ &= \int_{t_1}^{t_2} \Big( -\varepsilon \frac{\partial Q}{\partial q_i} \dot q_i - \dot p_i \varepsilon \frac{\partial Q}{\partial p_i} \Big) dt \\ &= -\int_{t_1}^{t_2} \varepsilon \dot Q dt \end{align*} We can therefore see that on solutions to the equations of motion, $\dot Q = 0$. This is just Noether's theorem. We may now wonder how this symmetry transformation affects $L_H$ when $\varepsilon$ is a constant.
We see that it changes it exactly by a total derivative, as expected: \begin{align*} \delta L_H &= - \frac{\partial Q}{\partial q_i} \dot q_i + p_i \frac{d}{dt} \Big( \frac{\partial Q}{\partial p_i} \Big) - \{ H, Q\} \\ &= - \frac{\partial Q}{\partial q_i} \dot q_i - \dot p_i \frac{\partial Q}{\partial p_i} + \frac{d}{dt} \Big( p_i \frac{\partial Q}{\partial p_i} \Big) \\ &= \frac{d}{dt} \Big( p_i \frac{\partial Q}{\partial p_i} - Q\Big) \end{align*} (the $\{H, Q\}$ term drops out in the second line because $Q$ is conserved). So we can see that $L_H$ necessarily changes by a total derivative. Let me now point out an interesting aside. When the quantity $p_i \frac{\partial Q}{\partial p_i} - Q = 0$, the total derivative is $0$. This happens when the conserved quantity is of the form $$ Q = p_i f_i(q). $$ Note that in the above case, $$ \delta q_i = \varepsilon f_i(q) $$ That is, symmetry transformations which do not "mix up" the $p$'s with the $q$'s have no total derivative term in $\delta L$. Manifesto: What really is a "symmetry"? You said something very interesting in your question statement which I have heard many physicists say. Therefore we can reverse the role of the two functions in the Poisson bracket above $\{F,H\}=0↔\{H,F\}=0.$ In words, this second equation tells us that for any conserved quantity F, its action on the Hamiltonian H is zero. In other words, F generates a symmetry. This is exactly Noether's theorem. Now, the word "symmetry" is pretty slippery. Earlier in this answer I said that a symmetry is something that changes the Lagrangian by a total derivative. However, that is a pretty obtuse definition for symmetry. In your question, you refer to a symmetry as a transformation which keeps $H$ constant. That definition is also a bit obtuse. In my opinion, a "symmetry" in classical mechanics is an operation that commutes with time evolution. So, for example, if your system has a "rotational symmetry," then rotating your system, then time evolving it, will result in the same final state as time evolving it, then rotating it.
Note that not every "symmetry" in modern parlance fits this description. For instance, think about the scaling symmetry of a free particle. A free particle will travel along a straight line at a constant velocity: $\vec x = \vec v t + \vec a$. If we multiply the particle's coordinate by some constant $b$, then $\vec x = b (\vec v t + \vec a).$ This is another valid path the particle may take, so scaling is a symmetry of the equations of motion. While that is true, scaling is NOT a "symmetry" given my preferred definition. However, this naive scaling symmetry does not change the Lagrangian by a total derivative, so it has no associated conserved quantity. (I am trying to convince you that my preferred definition is the more useful one.) What about Lorentz boosts? Those also fit my definition, but there is a tiny complication. When you perform a Lorentz boost, you must change your definition of time. So if you boost and then time evolve, you should end up with the same final state as if you time evolved and then boosted, as long as you correctly account for the fact that the definition of "time" changes after a boost. So the case of special relativity is a little subtle. I do not think that $$ \{ Q, H \} = 0 = \{H, Q\} $$ is the correct way to understand Noether's theorem in Hamiltonian mechanics. In my opinion, the avatar is the "inverse Noether theorem" $$ X_H(Q) = 0 \implies [X_H, X_Q] = 0. $$ In the above expression, $X_H$ is the Hamiltonian vector field "generated by $H$" and $[\cdot, \cdot]$ is the vector field "Lie Bracket" defined by $$ [X_H, X_Q] = X_H X_Q - X_Q X_H. $$ Note that I am also using the notation where vector fields act on functions as a differential operator, so for example $$ X_H (Q) = \{Q, H\}. $$ $X_H(Q)$ should be thought of as the change in $Q$ that comes from "flowing along" $X_H$, i.e. $$ \dot Q = X_H(Q). $$ The "proof" of Noether's theorem in Hamiltonian mechanics is just the Jacobi identity. 
$$ \{ \{g, h\}, f\} + \{ \{h, f\}, g\} + \{ \{f, g\}, h\} = 0 $$ Rearranging a bit, also using the antisymmetry of the Poisson bracket $$ \{f, \{h, g\}\} = \{ g, \{h, f\}\} - \{ h, \{g, f\}\} $$ we can use the definition of our Hamiltonian vector fields acting on a test function $f$ to write $$ X_{\{h, g\}} (f) = [X_g, X_h](f) $$ and finally $$ X_{\{h, g\}} = [X_g, X_h]. $$ This demonstrates that if $Q$ is conserved, i.e. $\dot Q = X_H(Q) = \{Q, H\} = 0$, then we have a symmetry $[X_H, X_Q] = X_{\{Q, H \}} = X_0 = 0$. Note that the Lie bracket can be shown to give the "failure" of flows to commute infinitesimally. Perhaps I have been introducing notation too quickly, so I should mention I discuss this at a more reasonable pace in my notes linked above. Anyway, once you start thinking in terms of commuting flows, you realize that "symmetries" in classical mechanics are directly analogous to "symmetries" in quantum mechanics. In quantum mechanics, we capture the above statement mathematically (suppressing $\hbar$) as $$ [e^{-i t \hat H}, e^{- i \theta \hat J}] = 0. $$ The above equation can actually be understood as four closely related equations. $[e^{-i t \hat H}, e^{- i \theta \hat J}] = 0$: Rotating and then time evolving a state is the same as time evolving and then rotating. (We have a symmetry.) $[e^{-i t \hat H},\hat J] = 0$: The angular momentum of a state does not change after time evolution. (Angular momentum is conserved.) $[\hat H, e^{- i \theta \hat J}] = 0$: The energy of a state does not change if the state is rotated. $[\hat H, \hat J] = 0$: If you measure the angular momentum of a state, the probability that the state will have any particular energy afterwards will not change. The reverse is also true. ($\hat H$ and $\hat J$ can be simultaneously diagonalized.) We can see that symmetries and conservation laws are interrelated in many ways far beyond the simple statement "symmetries give conservation laws."
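The vector-field identity above can be spot-checked numerically. The sketch below (Python, my own illustration with hand-picked polynomial test functions for one degree of freedom) builds the Poisson bracket from central differences and verifies $X_{\{h,g\}}(f) = [X_g, X_h](f)$ at an arbitrary phase-space point; the closed form $\{h, g\} = -2q^2$ for these particular test functions was computed by hand:

```python
def pb(F, G, q, p, eps=1e-5):
    # Poisson bracket {F, G} = dF/dq dG/dp - dF/dp dG/dq via central differences
    dFq = (F(q + eps, p) - F(q - eps, p)) / (2 * eps)
    dFp = (F(q, p + eps) - F(q, p - eps)) / (2 * eps)
    dGq = (G(q + eps, p) - G(q - eps, p)) / (2 * eps)
    dGp = (G(q, p + eps) - G(q, p - eps)) / (2 * eps)
    return dFq * dGp - dFp * dGq

h = lambda q, p: q * p           # arbitrary test observables (my choices)
g = lambda q, p: q * q
f = lambda q, p: q**3 + q * p * p

q0, p0 = 1.3, 0.7
# left side: X_{ {h,g} }(f) = {f, {h,g}}, with {h,g} = -2 q^2 for these h, g
lhs = pb(f, lambda q, p: -2.0 * q * q, q0, p0)
# right side: [X_g, X_h](f) = X_g(X_h(f)) - X_h(X_g(f)), where X_a(F) = {F, a}
rhs = pb(lambda q, p: pb(f, h, q, p), g, q0, p0) \
    - pb(lambda q, p: pb(f, g, q, p), h, q0, p0)
assert abs(lhs - rhs) < 1e-4
```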
Rather amazingly, three of these four statements about quantum mechanics also have direct analogs in classical mechanics! $[X_H, X_J] = 0$: Rotating and then time evolving a state is the same as time evolving and then rotating. (We have a symmetry.) $X_H(J) = 0$: The angular momentum of a state does not change after time evolution. (Angular momentum is conserved.) $X_J(H) = 0$: The energy of a state does not change if the state is rotated. $\{H, J\} = 0$: No classical meaning I can think of. (Can you think of one?) In my opinion, once you think about "symmetries" of mechanics in terms of commutativity, many disparate facts start to fit together in a more pleasing and unified way. However, in other areas of physics, "symmetry" means something totally different (like gauge symmetry). I think you always need to be very careful with this important yet slippery word...
{ "domain": "physics.stackexchange", "id": 56092, "tags": "classical-mechanics, conservation-laws, symmetry, hamiltonian-formalism, noethers-theorem" }
How does translational coupling work in prokaryotes?
Question: Today I heard about a phenomenon called "translational coupling", where the translation of one protein influences the translation of another protein. The messenger RNA levels don't seem influenced. How does this work? Do they need to be in the same operon? Answer: Translational coupling describes how, in some cases, an mRNA will code for more than one protein (i.e. it is polycistronic). Translational coupling is thought to be used mostly as a way to ensure that a set of genes is translated at roughly the same level in the cell. Translational coupling is very common in prokaryotes, and nearly half of E. coli genes are found in a polycistronic operon. What we know about them is revealing. There are some fancy mechanisms to adjust the ratios of these adjacent genes, which are still coupled, but with ratios that are not just 1:1. It has been shown that the later genes are often translated at a somewhat lower frequency because the first genes are available more quickly before the mRNA degrades. Eric Alm @ MIT wrote this great paper on how operons evolve. I've only been able to find this reference to a eukaryotic case of "translational coupling", which is very rare, but does exist. The most common cases of translationally coupled genes in eukaryotes are RNA viruses, which usually contain only a single full-length mRNA that codes for all the genes in the virus. The selective pressures to keep the viral genome small and to constrain the ratios of these genes will cause these genes to even overlap, the next one starting before the current gene finishes.
{ "domain": "biology.stackexchange", "id": 6586, "tags": "molecular-biology, translation" }
Recursive Breadth First Search Knights Tour
Question: This program was written on Windows 7 using Visual Studio 2012 Professional. Some of the issues encountered may have been fixed or changed in a more recent version. This is a quote from my second question in response to a comment made below: The algorithm in KnightMovesImplementation.cs implements a recursive tree search for all possible paths of a Knight on a Chess Board from point A to point B with the given restrictions of board size and slicing. The tree is implemented as a recursive algorithm rather than a data structure. The data structures used in the algorithm are paths (a collection of moves from the origin to the destination), moves (the 8 possible moves a Knight can make on a Chess Board), and locations (squares on the chessboard represented by row number and column number). Slicing means either that the Knight can't visit a previously visited row or column on the board, or that the Knight can't visit a previously visited square on the board. My second question here on Code Review is here. It was a request for review of my C++ and an answer to this question. The author of that question asked me if I could provide a solution in C# since he was first learning C# and didn't know C++. I hope he didn't hold his breath waiting because that was 3 months ago, but here is my C# version. To write this solution I had to take a detour and refactor my original post, which I asked about last week. Only three main files are shown here. You can find the entire program on GitHub. The three files are the core algorithm (KnightMoveImplementation and KMMoveFilters) and the Program.cs file. Since it is my first C# program, I'd really love someone to dissect it and tell me all the things I can do to improve it. Sample Output Do you want to print each of the resulting paths? (y/n) Select the number of the test case you want to run.
Test Case # Start Name Target Name Board Size Slicing Method 1 A3 H4 8 Can't return to previous row or column 2 A1 H8 8 Can't return to previous row or column 3 A8 H1 8 Can't return to previous row or column 4 B3 H4 8 Can't return to previous row or column 5 B3 H8 8 Can't return to previous row or column 6 C1 H4 8 Can't return to previous row or column 7 A3 H8 8 Can't return to previous row or column 8 A3 H4 8 Can't return to previous row or column 9 H4 A3 8 Can't return to previous row or column 10 D4 A8 8 Can't return to previous row or column 11 D4 E6 8 Can't return to previous row or column 12 A3 B5 8 Can't return to previous row or column 13 A3 B5 13 Can't return to previous row or column 14 A3 B5 8 Can't return to previous location 15 A3 B5 26 Can't return to previous row or column 16 All of the above except for 15 and 14 17 All of the above (Go get lunch) finished computation at 7/1/2016 12:57:41 PM elapsed time: 0.0144886 Seconds The point of origin for all path searches was A3 The destination point for all path searches was H4 The number of squares on each edge of the board is 8 The slicing methodology used to further limit searches was no repeat visits to any rows or columns. There are 5 Resulting Paths There were 276 attempted paths The average path length is 4.8 The median path length is 4 The longest path is 6 moves. The shortest path is 4 moves. ######################### End of Test Case A3 to H4######################### finished computation at 7/1/2016 12:57:41 PM elapsed time: 0.0006995 Seconds The point of origin for all path searches was A1 The destination point for all path searches was H8 The number of squares on each edge of the board is 8 The slicing methodology used to further limit searches was no repeat visits to any rows or columns. There are 18 Resulting Paths There were 138 attempted paths The average path length is 6 The median path length is 6 The longest path is 6 moves. The shortest path is 6 moves. 
######################### End of Test Case A1 to H8######################### finished computation at 7/1/2016 12:57:41 PM elapsed time: 0.0018398 Seconds The point of origin for all path searches was B3 The destination point for all path searches was H4 The number of squares on each edge of the board is 8 The slicing methodology used to further limit searches was no repeat visits to any rows or columns. There are 7 Resulting Paths There were 330 attempted paths The average path length is 4.71428571428571 The median path length is 5 The longest path is 5 moves. The shortest path is 3 moves. ######################### End of Test Case B3 to H4######################### KnightMovesImplementation.cs using System; using System.Collections.Generic; namespace KnightMoves_CSharp { class KnightMovesImplementation { /* * This class provides the search for all the paths a Knight on a chess * board can take from the point of origin to the destination. It * implements a modified Knights Tour. The classic knights tour problem * is to visit every location on the chess board without returning to a * previous location. That is a single path for the knight. This * implementation returns all possible paths from point a to point b. * * The current implementation is a Recursive Breadth First Search. Conceptually * the algorithm implements a B+ tree with a maximum of 8 possible branches * at each level. The root of the tree is the point of origin. A particular * path terminates in a leaf. A leaf is the result of either reaching the * destination, or reaching a point where there are no more branches to * traverse. * * The public interface CalculatePaths establishes the root and creates * the first level of branching. The protected interface CalculatePath * performs the recursive depth first search, however, the * MoveFilters.GetPossibleMoves() function it calls performs a breadth * first search of the current level. 
*/ KMBoardLocation PointOfOrigin; KMBoardLocation Destination; uint SingleSideBoardDimension; KnightMovesMethodLimitations PathLimitations; KMOutputData Results; KMMoveFilters MoveFilter; KMPath m_Path; public KnightMovesImplementation(KMBaseData UserInputData) { SingleSideBoardDimension = UserInputData.DimensionOneSide; PathLimitations = UserInputData.LimitationsOnMoves; InitPointOfOrigin(UserInputData); InitDestination(UserInputData); Results = new KMOutputData(PointOfOrigin, Destination, SingleSideBoardDimension, PathLimitations); MoveFilter = new KMMoveFilters(PointOfOrigin, SingleSideBoardDimension, PathLimitations); m_Path = new KMPath(); } public KMOutputData CalculatePaths() { List<KMMove> PossibleFirstMoves = MoveFilter.GetPossibleMoves(PointOfOrigin); if (PossibleFirstMoves.Count == 0) { // Anywhere on the board should have at between 2 and 8 possible moves throw new ApplicationException("KnightMovesImplementation::CalculatePaths: No Possible Moves."); } else { foreach (KMMove CurrentMove in PossibleFirstMoves) { CurrentMove.SetOriginCalculateDestination(PointOfOrigin); if (CurrentMove.IsValid() == true) { CalculatePath(CurrentMove); } } } return Results; } protected void InitPointOfOrigin(KMBaseData UserInputData) { PointOfOrigin = new KMBoardLocation(UserInputData.StartRow, UserInputData.StartColumn, SingleSideBoardDimension); PointOfOrigin.SetName(UserInputData.StartName); } protected void InitDestination(KMBaseData UserInputData) { Destination = new KMBoardLocation(UserInputData.TargetRow, UserInputData.TargetColumn, SingleSideBoardDimension); Destination.SetName(UserInputData.TargetName); } /* * Recursive algorith that performs a depth search. * The call to CurrentMove.GetNextLocation() implements the breadth-first portion * of the search. 
*/ protected void CalculatePath(KMMove CurrentMove) { KMBoardLocation CurrentLocation = CurrentMove.GetNextLocation(); m_Path.AddMoveToPath(CurrentMove); MoveFilter.PushVisited(CurrentLocation); if (Destination.IsSameLocation(CurrentLocation) == true) { Results.IncrementAttemptedPaths(); Results.AddPath(m_Path); } else { if (CurrentMove.IsValid() == true) { List<KMMove> PossibleMoves = MoveFilter.GetPossibleMoves(CurrentLocation); if (PossibleMoves.Count != 0) { foreach (KMMove NextMove in PossibleMoves) { CalculatePath(NextMove); } } else { Results.IncrementAttemptedPaths(); } } else { throw new ApplicationException("In KnightMovesImplementation::CalculatePath CurrentLocation Not Valid"); } } // Backup to previous location MoveFilter.PopVisited(); m_Path.RemoveLastMove(); } }; } KMMoveFilters.cs using System; using System.Collections.Generic; namespace KnightMoves_CSharp { /* * This class provides all the possible Knight moves for a specified location * on the chess board. In the center of the chess board there are 8 possible * moves. In the middle of the edge on the chess board there are 4 possible * moves. In a corner of the chess board there are 2 possible moves. The * location on the board provides the first filter. * Slicing is used to allow the program to complete in a reasonable finite * amount of time. The slicing method can be varied, the default slicing * method is the knight can't return to any row or column it has previously * visited. The slicing is the second filter. */ public class KMMoveFilters { private struct LocationBase { public uint Row; public uint Column; } // The 8 possible moves the knight can make. 
public static int MAIXMUM_POSSIBLE_MOVES = 8; private KMMove Left1Up2; private KMMove Left2Up1; private KMMove Left2Down1; private KMMove Left1Down2; private KMMove Right1Up2; private KMMove Right2Up1; private KMMove Right2Down1; private KMMove Right1Down2; const int Left1 = -1; const int Left2 = -2; const int Down1 = -1; const int Down2 = -2; const int Right1 = 1; const int Right2 = 2; const int Up1 = 1; const int Up2 = 2; List<KMMove> AllPossibleMoves = new List<KMMove>(); private uint BoardDimension; public uint GetBoardDimension() { return BoardDimension; } KnightMovesMethodLimitations PathLimitations; Stack<LocationBase> VisitedLocations = new Stack<LocationBase>(); Stack<uint> VisitedRows = new Stack<uint>(); Stack<uint> VisitedColumns = new Stack<uint>(); public KMMoveFilters(KMBoardLocation Origin, uint SingleSideBoardDimension, KnightMovesMethodLimitations VisitationLimitations) { PathLimitations = VisitationLimitations; BoardDimension = SingleSideBoardDimension; Left1Up2 = new KMMove(Left1, Up2, BoardDimension); Left2Up1 = new KMMove(Left2, Up1, BoardDimension); Left2Down1 = new KMMove(Left2, Down1, BoardDimension); Left1Down2 = new KMMove(Left1, Down2, BoardDimension); Right1Up2 = new KMMove(Right1, Up2, BoardDimension); Right2Up1 = new KMMove(Right2, Up1, BoardDimension); Right2Down1 = new KMMove(Right2, Down1, BoardDimension); Right1Down2 = new KMMove(Right1, Down2, BoardDimension); AllPossibleMoves.Add(Left1Up2); AllPossibleMoves.Add(Left2Up1); AllPossibleMoves.Add(Left2Down1); AllPossibleMoves.Add(Left1Down2); AllPossibleMoves.Add(Right1Up2); AllPossibleMoves.Add(Right2Up1); AllPossibleMoves.Add(Right2Down1); AllPossibleMoves.Add(Right1Down2); // Record the initial location so we never return LocationBase PtOfOrigin = new LocationBase(); PtOfOrigin.Row = Origin.GetRow(); PtOfOrigin.Column = Origin.GetColumn(); VisitedLocations.Push(PtOfOrigin); } /* * C# does not seem to provide a default copy constructor so copy AllPossibleMoves */ private 
List<KMMove> GetAllPossibleMoves() { return CopyAllPossibleMoves(); } private List<KMMove> CopyAllPossibleMoves() { List<KMMove> PublicCopy = new List<KMMove>(); foreach (KMMove PossibleMove in AllPossibleMoves) { PublicCopy.Add(new KMMove(PossibleMove)); } return PublicCopy; } public List<KMMove> GetPossibleMoves(KMBoardLocation Origin) { if (Origin.IsValid() == false) { throw new ArgumentException("KMMoveFilters::GetPossibleMoves : Parameter Origin is not valid!\n"); } List<KMMove> TempAllPossibleMoves = GetAllPossibleMoves(); List<KMMove> PossibleMoves = new List<KMMove>(); foreach (KMMove PossibeMove in TempAllPossibleMoves) { PossibeMove.SetOriginCalculateDestination(Origin); if ((PossibeMove.IsValid() == true) && (IsNotPreviouslyVisited(PossibeMove.GetNextLocation()) == true)) { PossibleMoves.Add(PossibeMove); } } return PossibleMoves; } protected bool IsNotPreviouslyVisited(KMBoardLocation PossibleDestination) { bool NotPrevioslyVisited = true; LocationBase PossibleLocation; PossibleLocation.Row = PossibleDestination.GetRow(); PossibleLocation.Column = PossibleDestination.GetColumn(); // We can't ever go back to a previously visited location if (VisitedLocations.Count != 0) { if (VisitedLocations.Contains(PossibleLocation) == true) { NotPrevioslyVisited = false; } } switch (PathLimitations) { default: throw new ArgumentException("KMPath::CheckMoveAgainstPreviousLocations : Unknown type of Path Limitation."); case KnightMovesMethodLimitations.DenyByPreviousLocation: // Handled above by VisitedLocations.Contains(). 
break; case KnightMovesMethodLimitations.DenyByPreviousRowOrColumn: if ((VisitedRows.Count != 0) && (VisitedColumns.Count != 0)) { if (VisitedRows.Contains(PossibleDestination.GetRow()) == true) { NotPrevioslyVisited = false; break; } if (VisitedColumns.Contains(PossibleDestination.GetColumn()) == true) { NotPrevioslyVisited = false; break; } } break; } return NotPrevioslyVisited; } public void PushVisited(KMBoardLocation Location) { LocationBase TestLocation = new LocationBase(); TestLocation.Row = Location.GetRow(); TestLocation.Row = Location.GetRow(); VisitedLocations.Push(TestLocation); VisitedRows.Push(Location.GetRow()); VisitedColumns.Push(Location.GetColumn()); } public void PopVisited() { VisitedRows.Pop(); VisitedColumns.Pop(); VisitedLocations.Pop(); } }; } Program.cs using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; using System.Diagnostics; namespace KnightMoves_CSharp { class Program { static void OutputOverAllStatistics(List<double> TestTimes) { if (TestTimes.Count < 1) // Prevent Division by 0 in Average { Console.WriteLine("No test times to run statistics on!"); return; } TestTimes.Sort(); // Sort once for Min, Median and Max Console.WriteLine("\nOverall Results"); Console.WriteLine("The average execution time is {0} seconds.", TestTimes.Average()); Console.WriteLine("The median execution time is {0} seconds.", TestTimes[TestTimes.Count() / 2]); Console.WriteLine("The longest execution time is {0} seconds.", TestTimes[TestTimes.Count - 1]); Console.WriteLine("The shortest execution time is {0} seconds.", TestTimes[0]); } static double ExecutionLoop(KMBaseData UserInputData, bool ShowPathData) { Stopwatch StopWatch = new Stopwatch(); KnightMovesImplementation KnightPathFinder = new KnightMovesImplementation(UserInputData); StopWatch.Start(); KMOutputData OutputData = KnightPathFinder.CalculatePaths(); StopWatch.Stop(); TimeSpan ts = StopWatch.Elapsed; Double ElapsedTimeForOutPut = 
ts.TotalSeconds; Console.Write("finished computation at "); Console.WriteLine(DateTime.Now); Console.WriteLine("elapsed time: {0} Seconds", ElapsedTimeForOutPut); Console.WriteLine(); Console.WriteLine(); OutputData.ShowPathData(ShowPathData); OutputData.ShowResults(); return ElapsedTimeForOutPut; } static void Main(string[] args) { bool ShowPathData = false; Console.WriteLine("Do you want to print each of the resulting paths? (y/n)"); string ShowPathAnswer = Console.ReadLine(); ShowPathData = (ShowPathAnswer.ToLower() == "y"); KMTestData TestData = new KMTestData(); List<KMBaseData> ListOfTestCases = TestData.LetUserEnterTestCaseNumber(); try { List<double> TestTimes = new List<double>(); foreach (KMBaseData TestCase in ListOfTestCases) { TestTimes.Add(ExecutionLoop(TestCase, ShowPathData)); GC.Collect(); } OutputOverAllStatistics(TestTimes); return; } catch (ArgumentOutOfRangeException e) { Console.Error.Write("A fatal range error occurred in KnightMoves: "); Console.Error.WriteLine(e.ToString()); return; } catch (ArgumentException e) { Console.Error.Write("A fatal argument error occurred in KnightMoves: "); Console.Error.WriteLine(e.ToString()); return; } catch (ApplicationException e) { Console.Error.Write("A fatal application error occurred in KnightMoves: "); Console.Error.WriteLine(e.ToString()); return; } } } } Answer: Some unorganized comments: NotPrevioslyVisited has a typo, should be NotPreviouslyVisited. if (VisitedLocations.Count != 0) { if (VisitedLocations.Contains(PossibleLocation) == true) { NotPrevioslyVisited = false; } } You can combine if statements here - and there's no need to explicitly check for true. if (VisitedLocations.Count != 0 && VisitedLocations.Contains(PossibleLocation)) will do. 
if (PossibleFirstMoves.Count == 0) { // Anywhere on the board should have at between 2 and 8 possible moves throw new ApplicationException("KnightMovesImplementation::CalculatePaths: No Possible Moves."); } else { foreach (KMMove CurrentMove in PossibleFirstMoves) { CurrentMove.SetOriginCalculateDestination(PointOfOrigin); if (CurrentMove.IsValid() == true) { CalculatePath(CurrentMove); } } } No need for else here, you're throwing an exception. Put guard clauses separate from your code. Like so: if (PossibleFirstMoves.Count == 0) { // Anywhere on the board should have at between 2 and 8 possible moves throw new ApplicationException("KnightMovesImplementation::CalculatePaths: No Possible Moves."); } foreach (KMMove CurrentMove in PossibleFirstMoves) { CurrentMove.SetOriginCalculateDestination(PointOfOrigin); if (CurrentMove.IsValid() == true) { CalculatePath(CurrentMove); } } /* * Recursive algorith that performs a depth search. * The call to CurrentMove.GetNextLocation() implements the breadth-first portion * of the search. */ protected void CalculatePath(KMMove CurrentMove) { KMBoardLocation CurrentLocation = CurrentMove.GetNextLocation(); m_Path.AddMoveToPath(CurrentMove); MoveFilter.PushVisited(CurrentLocation); if (Destination.IsSameLocation(CurrentLocation) == true) { Results.IncrementAttemptedPaths(); Results.AddPath(m_Path); } else { if (CurrentMove.IsValid() == true) { List<KMMove> PossibleMoves = MoveFilter.GetPossibleMoves(CurrentLocation); if (PossibleMoves.Count != 0) { foreach (KMMove NextMove in PossibleMoves) { CalculatePath(NextMove); } } else { Results.IncrementAttemptedPaths(); } } else { throw new ApplicationException("In KnightMovesImplementation::CalculatePath CurrentLocation Not Valid"); } } // Backup to previous location MoveFilter.PopVisited(); m_Path.RemoveLastMove(); } A hiccup with indentation - and more == true checks. 
Also, else { if(){ } else { } structures can be converted to else if() { } else { } structures: if (Destination.IsSameLocation(CurrentLocation)) { Results.IncrementAttemptedPaths(); Results.AddPath(m_Path); } else if (CurrentMove.IsValid()) { List<KMMove> PossibleMoves = MoveFilter.GetPossibleMoves(CurrentLocation); if (PossibleMoves.Count != 0) { foreach (KMMove NextMove in PossibleMoves) { CalculatePath(NextMove); } } else { Results.IncrementAttemptedPaths(); } } else { throw new ApplicationException("In KnightMovesImplementation::CalculatePath CurrentLocation Not Valid"); }
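As an aside for readers who want the algorithm without the C# scaffolding: the path search described in the question can be sketched in a few lines. This is my own Python illustration, not part of the review; the slicing semantics (origin square blocked, rows and columns only marked from the first landing square onward) follow my reading of the posted code:

```python
# Board squares as (col, row), 0-based: A3 -> (0, 2), H4 -> (7, 3).
MOVES = [(-1, 2), (-2, 1), (-2, -1), (-1, -2),
         (1, 2), (2, 1), (2, -1), (1, -2)]   # the 8 knight moves

def knight_paths(start, target, n=8):
    # depth-first enumeration of all knight paths under the
    # "no repeat visits to any rows or columns" slicing rule
    paths = []

    def search(pos, rows, cols, squares, path):
        if pos == target:
            paths.append(path)
            return
        for dc, dr in MOVES:
            nxt = (pos[0] + dc, pos[1] + dr)
            if not (0 <= nxt[0] < n and 0 <= nxt[1] < n):
                continue                      # off the board
            if nxt in squares or nxt[1] in rows or nxt[0] in cols:
                continue                      # sliced away
            search(nxt, rows | {nxt[1]}, cols | {nxt[0]},
                   squares | {nxt}, path + [nxt])

    search(start, frozenset(), frozenset(), frozenset({start}), [start])
    return paths

paths = knight_paths((0, 2), (7, 3))          # A3 -> H4
assert paths and all(p[-1] == (7, 3) for p in paths)
assert min(len(p) - 1 for p in paths) == 4    # shortest path has 4 moves
```

Because each move consumes a fresh row and a fresh column, the recursion depth is bounded by the board size, which is why the search finishes quickly even without memoization.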
{ "domain": "codereview.stackexchange", "id": 20858, "tags": "c#, performance, recursion, breadth-first-search, chess" }
Comparing orders of magnitude
Question: Is it correct to say that 9.0 is one order of magnitude smaller than 10.0? Has anyone a link/source about comparing orders of magnitude, apart from wikipedia? Answer: I suppose the OP is looking for some general rule to be used when you want to say "A is N orders of magnitude bigger (smaller) than B". In that case, consider $$N = || \log_b(A/B) ||$$ (where $ || \dots || $ is taken to mean round to the nearest integer, and negative values just mean choose "A is smaller than B", but the magnitude retains the same significance). Here $b$ is the base you are speaking in (10 generally, but it is sometimes useful in computer science circles to speak of binary orders of magnitude). In this case $\log_{10}(10/9) = 0.045 \approx 0$ so 9 and 10 are of the same order of magnitude as one would naively expect. You can manage this rule without having to extract logarithms by noting that $0.5 = \log_{10}(R)$ implies $R = \sqrt{10} \approx 3.16$. Just count the number of digits difference in the long-hand written form and add one if the ratio of the leading values is at least 3.2. That is: 30 is the same order of magnitude as 10; 35 is one order of magnitude larger than 10; 300 is one order of magnitude larger than 10; 350 is two orders of magnitude larger than 10; 3.5 is the same order of magnitude as 10; 3.0 is one order of magnitude smaller than 10. Final note: Don't obsess over this! Orders of magnitude are useful because they let you make quick and reasonably accurate guesses, and guesses are not subject to precise rules. For instance $\pi$ is close enough to $\sqrt{10}$ that it's OK to treat it as the same order of magnitude as either 1 or 10.
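The rounding rule above is easy to put into code; a minimal sketch (Python, mirroring the formula in the answer — the function name is my own):

```python
import math

def magnitude_difference(a, b, base=10):
    # N = || log_b(a/b) || with ||.|| meaning round to the nearest integer
    return round(abs(math.log(a / b, base)))

assert magnitude_difference(9.0, 10.0) == 0   # same order of magnitude
assert magnitude_difference(30, 10) == 0      # ratio 3 < sqrt(10)
assert magnitude_difference(35, 10) == 1      # ratio 3.5 > sqrt(10)
assert magnitude_difference(350, 10) == 2
assert magnitude_difference(3.0, 10) == 1
```

The threshold between "same order" and "one order apart" sits at a ratio of $\sqrt{10} \approx 3.16$, which is the digit-counting shortcut given in the answer.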
{ "domain": "physics.stackexchange", "id": 1813, "tags": "soft-question, order-of-magnitude" }
Simplest exactly solved model displaying a phase transition?
Question: The classical example of an exactly solved model which displays a phase transition is the 2D Ising model. However, all the proofs I've seen of this have been very long and complicated. So, I wanted to know whether there were any other exactly solved models with phase transition, which were easier to solve, or that the 2D Ising model is the simplest such model that we know of. Answer: The simplest model demonstrating a phase transition is probably the Ising model with an interaction constant that is the same for all spin pairs: $H=-J\sum_{i,j}S_i S_j$. I will try to find a reference later. EDIT (9/6/2021): https://homepages.spa.umn.edu/~vinals/tspot_files/phys5201/2015/hwk8.pdf
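With the coupling scaled as $J/N$ so the energy stays extensive, that fully connected model reduces in the large-$N$ limit to the mean-field self-consistency condition $m = \tanh(\beta J m)$, whose nonzero solution appears only for $\beta J > 1$ — the phase transition. A quick numerical sketch (my illustration, not from the original answer):

```python
import math

def mean_field_m(beta_J, m0=0.5, iters=4000):
    # iterate the self-consistency condition m = tanh(beta*J*m)
    # from a small positive seed until it settles on a fixed point
    m = m0
    for _ in range(iters):
        m = math.tanh(beta_J * m)
    return m

assert abs(mean_field_m(0.8)) < 1e-3   # beta*J < 1: only m = 0 (disordered)
assert mean_field_m(1.5) > 0.5         # beta*J > 1: spontaneous magnetization
```

Below the transition the iteration contracts to $m = 0$; above it, the seed flows to the stable nonzero fixed point.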
{ "domain": "physics.stackexchange", "id": 82414, "tags": "statistical-mechanics, phase-transition, models" }
Image based 3d position estimation with one camera
Question: There are many 2D position estimation techniques that use one camera. Is there any 3D position estimation application or technique with one camera? If there is none, why? Answer: Take an image, then take another with a little zoom out. Crop two specific points from the first and match them (template matching) in the second, then calculate the difference. This gives the relation between the two images, and then you can compute depth with the camera angle.
{ "domain": "robotics.stackexchange", "id": 700, "tags": "localization" }
Polyphase Filter decomposition. It is not working
Question: So, I want to do a polyphase implementation of a filter bank (I had problems with it since the beginning and I already asked the community for help in the past, indeed: Norm MPEG-1 Layer III (Mp3) Filter banks distortion). Thanks to the support I was able to make it work (without polyphase implementation). Nonetheless, I tried to do all the analysis filters and synthesis filters by means of polyphase filter banks as follows: From: To: The idea is, basically, based on the first picture, with the low pass filter prototype modulated by a cosine to make frequency shifts of the filter response to cover all the bandwidth (with 32 filters), decomposing each particular filter response into 32 polyphase branches. (So 32 branches, obtained by means of shifting the prototype by multiplying a cosine. Each of these 32 branches is implemented by means of decomposing each analysis and synthesis filter into 32 polyphase branches). So, once the first part works (i.e. the filter bank not implemented by means of polyphase filter banks), I do the polyphase decomposition as follows: 1. Polyphase decomposition: h are the modulated coefficients of each filter of the bank. hh is the matrix of polyphase filter bank coefficients; the $i^{\rm th}$ row of the matrix represents the coefficients of the $i^{\rm th}$ filter of the polyphase bank. M is the number of polyphase filter branches, 32 in our example. hh=zeros(M,length(h)/M); for l=0:1:M-1 n=l+1:M:length(h); hh(l+1,:)=h(n); end end 2. The shift ($Z^{-1}$) is performed by a function that works as follows ("delay" is the delay to apply, e.g. if 0 samples are to be delayed, then the output shall be equal to the input): in_data: Input sequence out_data: Output sequence. delay_samp=delay+1; out_data=zeros(1,length(in_data)); out_data(delay_samp:length(in_data))=in_data(1:length(in_data)-(delay_samp-1)); 3. Ok, now, I apply the filters as follows: Analysis: yy=zeros(branches,length(hh(1,:))+fix(length(in_signal)/branches)); for k=1:branches %1.
Shift shifted_input=shift_data(in_signal,k-1); %2. Downsample temp=zeros(1,fix(length(in_signal)/branches)); downsampled_data=shifted_input(1:branches:end); temp(1:length(downsampled_data))=downsampled_data; shifted_downsampled_input=temp; %3. Filter temp=conv(hh(k,:),shifted_downsampled_input); yy(k,:)=temp; end And Synthesis: yy=zeros(branches,branches*(length(hh(1,:))+length(in_signal)-1)); for k=1:branches %1. Filter temp_conv=conv(hh(k,:),in_signal); %2. Upsample temp=zeros(1,length(temp_conv)*branches); temp(1:branches:end)=temp_conv;%upsample(temp,branches); %3. Shift. yy(k,:)=shift_data(temp,branches-(k-1)); end The result is very distorted: The output with a 1KHz tone as an input: I guess that I am making something wrong with the upsampling and downsampling processes as there are spectral replicas all over the second example, but I have no clue on what exactly is wrong. I checked on the internet and I also checked the reference referred in the other post. I mean this book. But I think there is a problem with the way I am coding this not with the theoretical back. This is the low pass filter prototype. Answer: The following is a working code that uses 32-component polyphase decomposition of the associated 32-channel anslysis and synthesis filterbanks. As I have already commented, the speed gain is not dramatic in this cae due to short signal and filter lengths. However further architectural improvements as well as coding optimizations can provide better results. 
% S0 - Load the prototype lowpass filter impulse response h0[n]: % -------------------------------------------------------------- load h2.mat; % h[n] is the prototype lowpass filter of length 512 L = length(h); % S1 - Create the 32 x 512 analysis filter-bank hha[k,n] by cosine modulation from protoype : % ----------------------------------------------------------------------------------------- numbands = 32; % number of banks (channels) n=0:L-1; hha=zeros(numbands,L); % bank of filters hha[k,n] = 32 x 512 array. for k=0:1:numbands-1 hha(k+1,:) = h.*cos( ( (2*k+1)*pi*(n-16) ) / (2*numbands) ); end % S2 - Create the 32-polyphase components hhap[k,m,n] , for each one of 32 analysis filters hha[k,n]: % --------------------------------------------------------------------------------------------------- numpoly = numbands; % polyphase component number = decimation ratio = number of channels hhap = zeros(numbands,numpoly, L/numpoly); % hhap = 32 x 32 x 512/32 , 3D ANALYSIS filter bank array M = numpoly; % polyphase system decimation ratio for k=1:numbands for m = 1:numpoly hhap(k,m,:) = hha(k,m:M:end); % create the m-th polyphase component of k-th channel filter end end % S3 - Design the 32 x 512 synthesis (complementary) filter bank : % ----------------------------------------------------------------- numbands = 32; % number of banks n=0:L-1; hhs = zeros(numbands,L); % bank of filters for k=0:1:numbands-1 hhs(k+1,:) = h.*cos( ( (2*k+1)*pi*(n+16) ) / (2*numbands) ); end % S4 - Obtain the 32-polyphase components hhsp[k,m,n] , for each one of 32 synthesis filters hhs[k,n]: % ---------------------------------------------------------------------------------------------------- numpoly = numbands; % polyphase component number = interpolation ratio = number of channels hhsp = zeros(numbands,numpoly, L/numpoly); % hhap = 32 x 32 x 512/32 , 3D ANALYSIS filter bank array M = numpoly; % polyphase system decimation ratio for k=1:numbands for m = 1:numpoly hhsp(k,m,:) = hhs(k,m:M:end); 
% create the m-th polyphase component of k-th channel filter
    end
end

% S5 - Generate the test input signal
% -----------------------------------
N = 2*1024;
wav_in = cos(0.01791*pi*[0:N-1]); % pure sine tone

% S6 - Apply test signal to the filterbank: ANALYSIS STAGE:
% -----------------------------------------------------------
yyd = zeros( numbands, floor(N/numbands)); % decimated outputs..
M = numbands;
for k=1:1:numbands
    temp = conv([wav_in(1:M:end),0] , hhap(k,1,:));
    for m=2:M
        temp = temp + conv([0,wav_in(M-m+2:M:end)],hhap(k,m,:));
    end
    yyd(k,:) = temp(L/(2*M)+1 : L/(2*M)+N/numbands);
end

% S7 - Apply SYNTHESIS filterbank on the decimated signal:
% ---------------------------------------------------------
ys = zeros(1, N);
for k=1:numbands
    temp = zeros(1, N+L-1);
    for m = 1:numpoly
        temp(m:numbands:end-31) = conv( yyd(k,:) , hhsp(k,m,:) );
    end
    ys = ys + temp(L/2+1:L/2+N);
end
ys = numbands*ys;

% SX - DISPLAY RESULTS:
% ---------------------
L = length(h);
figure,subplot(2,1,1)
stem([0:L-1],h);title('The Prototype Lowpass Filter');
subplot(2,1,2)
plot(linspace(-1,1,4*L),20*log10(abs(fftshift(fft(h,4*L))))); grid on;
figure
plot(linspace(-1,1,4*L),20*log10(abs(fftshift(fft(hha(1,:),4*L))))); hold on
for k=2:numbands
    plot(linspace(-1,1,4*L),20*log10(abs(fftshift(fft(hha(k,:),4*L)))));
end
title('32 CHANNEL FILTERBANK');
figure,subplot(2,1,1)
plot(wav_in);title('input signal')
subplot(2,1,2)
plot(linspace(-1,1,4*N),20*log10(abs(fftshift(fft(wav_in,4*N)))));
figure,subplot(2,1,1)
plot(ys);title('Synthesized Back');
subplot(2,1,2)
plot(linspace(-1,1,4*N),20*log10(abs(fftshift(fft(ys,4*N)))));
{ "domain": "dsp.stackexchange", "id": 5821, "tags": "filtering, sound, digital-filters, polyphase" }
BackgroundWorker vs TPL ProgressBar Exercise
Question: I wanted to fiddle around with the BackgroundWorker and Task classes, to see how I would implement a given task with these two techniques. So I created a new WinForms project, and implemented a simple UI; two sections, each with a ProgressBar, and Start + Cancel buttons: I implemented a DoSomething-type "service": public class SomeService { public void SomeMethod() { Thread.Sleep(1000); } } But that's irrelevant. The form's code-behind is where I put all the code this post is all about: Constructor The form's constructor is essentially the entry point here (program.cs is ignored), so I put the obvious fields in first, and initialized them in the constructor: public partial class Form1 : Form { private readonly SomeService _service; private readonly BackgroundWorker _worker; public Form1() { _service = new SomeService(); _worker = new BackgroundWorker { WorkerReportsProgress = true, WorkerSupportsCancellation = true }; _worker.DoWork += OnBackgroundDoWork; _worker.ProgressChanged += OnBackgroundProgressChanged; _worker.RunWorkerCompleted += OnBackgroundWorkerCompleted; InitializeComponent(); CloseButton.Click += CloseButton_Click; StartBackgroundWorkerButton.Click += StartBackgroundWorkerButton_Click; CancelBackgroundWorkerButton.Click += CancelBackgroundWorkerButton_Click; StartTaskButton.Click += StartTaskButton_Click; CancelTaskButton.Click += CancelTaskButton_Click; } private void CloseButton_Click(object sender, EventArgs e) { CancelBackgroundWorkerButton_Click(null, EventArgs.Empty); CancelTaskButton_Click(null, EventArgs.Empty); Close(); } #region BackgroundWorker Button.Click handlers These are the event handlers for the Start and Cancel buttons' Click event: private void StartBackgroundWorkerButton_Click(object sender, EventArgs e) { StartBackgroundWorkerButton.Enabled = false; _worker.RunWorkerAsync(); } private void CancelBackgroundWorkerButton_Click(object sender, EventArgs e) { CancelBackgroundWorkerButton.Enabled = false; 
_worker.CancelAsync(); } BackgroundWorker.DoWork handler Here we are. The "work" here, will be to call our "time-consuming operation" 5 times, reporting progress as we go, and assigning the DoWorkEventArgs.Cancel as needed: private void OnBackgroundDoWork(object sender, DoWorkEventArgs e) { //CancelBackgroundWorkerButton.Enabled = true; // this call fails the background task (e.Error won't be null) Invoke((MethodInvoker)(() => { CancelBackgroundWorkerButton.Enabled = true; })); var iterations = 5; for (var i = 1; i <= iterations; i++) { if (_worker.CancellationPending) { e.Cancel = true; return; } _service.SomeMethod(); if (_worker.CancellationPending) { e.Cancel = true; return; } _worker.ReportProgress(100 / (iterations) * i); } Thread.Sleep(500); // let the progressbar animate to display its actual value. } ProgressChanged handler This handler assigns the new value to the ProgressBar control owned by the UI thread: private void OnBackgroundProgressChanged(object sender, ProgressChangedEventArgs e) { // BGW facilitates dealing with UI-owned objects by executing this handler on the main thread. 
BackgroundWorkerProgressBar.Value = e.ProgressPercentage; if (BackgroundWorkerProgressBar.Value == BackgroundWorkerProgressBar.Maximum) { CancelBackgroundWorkerButton.Enabled = false; } } WorkerCompleted handler If I get the BackgroundWorker right, this is where we can determine whether the thing has succeeded, failed with an error or was cancelled: private void OnBackgroundWorkerCompleted(object sender, RunWorkerCompletedEventArgs e) { if (e.Cancelled) { MessageBox.Show("BackgroundWorker was cancelled.", "Operation Cancelled", MessageBoxButtons.OK, MessageBoxIcon.Exclamation); } else if (e.Error != null) { MessageBox.Show(string.Format("BackgroundWorker operation failed: \n{0}", e.Error), "Operation Failed", MessageBoxButtons.OK, MessageBoxIcon.Error); } else { MessageBox.Show("BackgroundWorker completed.", "Operation Completed", MessageBoxButtons.OK, MessageBoxIcon.Information); } ResetBackgroundWorker(); } private void ResetBackgroundWorker() { BackgroundWorkerProgressBar.Value = 0; StartBackgroundWorkerButton.Enabled = true; CancelBackgroundWorkerButton.Enabled = false; } #endregion #region Task I don't get to write this kind of code very often, so I'm very interested in this part. 
I declared a private field and handled the Click event like this: CancellationTokenSource _cancelTokenSource; private void StartTaskButton_Click(object sender, EventArgs e) { StartTaskButton.Enabled = false; _cancelTokenSource = new CancellationTokenSource(); var token = _cancelTokenSource.Token; var task = Task.Factory.StartNew(DoWork, token); task.ContinueWith(t => { switch (task.Status) { case TaskStatus.Canceled: MessageBox.Show("Async task was cancelled.", "Operation Cancelled", MessageBoxButtons.OK, MessageBoxIcon.Exclamation); break; case TaskStatus.Created: break; case TaskStatus.Faulted: MessageBox.Show(string.Format("Async task failed: \n{0}", t.Exception), "Operation Failed", MessageBoxButtons.OK, MessageBoxIcon.Error); break; case TaskStatus.RanToCompletion: MessageBox.Show("Async task completed.", "Operation Completed", MessageBoxButtons.OK, MessageBoxIcon.Information); break; case TaskStatus.Running: break; case TaskStatus.WaitingForActivation: break; case TaskStatus.WaitingForChildrenToComplete: break; case TaskStatus.WaitingToRun: break; default: break; } ResetTask(); _cancelTokenSource = null; }); } DoWork Action Instead of inlining the DoWork method, I wrote it as a private void parameterless method: private void DoWork() { // CancelTaskButton.Enabled = true; // fails the background thread. Invoke((MethodInvoker)(() => { CancelTaskButton.Enabled = true; })); var iterations = 5; for (var i = 1; i <= iterations; i++) { _cancelTokenSource.Token.ThrowIfCancellationRequested(); _service.SomeMethod(); _cancelTokenSource.Token.ThrowIfCancellationRequested(); var progress = 100 / (iterations) * i; Invoke((MethodInvoker)(() => { TaskProgressBar.Value = progress; })); if (i == iterations) { Invoke((MethodInvoker)(() => { CancelTaskButton.Enabled = false; })); } } Thread.Sleep(500); // let the progressbar animate to display its actual value. 
}

Cancellation

In the Click handler for the Cancel button, I called Cancel() on the token source:

private void CancelTaskButton_Click(object sender, EventArgs e)
{
    CancelTaskButton.Enabled = false;
    // token source is null if Close button is clicked without task started
    if (_cancelTokenSource != null)
    {
        _cancelTokenSource.Cancel();
    }
}

private void ResetTask()
{
    Invoke((MethodInvoker)(() =>
    {
        TaskProgressBar.Value = 0;
        StartTaskButton.Enabled = true;
        CancelTaskButton.Enabled = false;
    }));
}

#endregion

Of course this is just an exercise. The question is, is it well conducted? Are there suspicious patterns in the coding style? In the BGW code, I don't like that I'm accessing _worker.CancellationPending, but it works (the CancellationPending property must be thread-safe then). Is this correct usage? In the TPL code, I don't like that switch (task.Status) block. That can't be the best way to go about it?!

Answer: This does show a very good example of using a ProgressBar in WinForms. Here are my code review notes for you.

CloseButton_Click calls the other Click "handlers" directly. That is not the common way; implement a separate method and call it from each of those handlers.

I could not follow the logic of checking for cancellation both before and after invoking SomeMethod().

Reduce synchronization: try to synchronize with the UI thread once, instead of issuing a separate method-invocation delegate for ProgressBar.Value and for CancelTaskButton.Enabled.

Your .ContinueWith approach is good, but always try to eliminate switch blocks from your code. A dictionary would be very handy here: you can define your status-to-action set beforehand and have the continuation look up the action keyed by the task status. This would reduce the complexity and increase the readability of your code.

In the OnBackgroundWorkerCompleted method, the else-if should be separated from the first if-expression (again, complexity and readability). Please see the following code.

if (e.Error != null)
{
    // Handle Error and call reset.
    return;
}

if (e.Cancelled)
{
    // Handle cancel and call reset.
    return;
}

if (e.Result == null)
{
    // No Result
}
else
{
    // Result
}

// reset ...
{ "domain": "codereview.stackexchange", "id": 6057, "tags": "c#, asynchronous, winforms, task-parallel-library" }
Distance calculation between two vectors
Question: In Quantum Machine Learning for data scientists, Page 34 gives an algorithm to calculate the distance between two classical vectors. As mentioned in this question, it is not clear how the SwapTest is done and used to derive the distance. One answer from @cnada suggested that the swap is on the ancilla qubit only, per the original paper Quantum algorithms for supervised and unsupervised machine learning. However, the SwapTest was not designed for partial inputs. I tried to adapt the SwapTest (on Page 33 of Quantum Machine Learning for data scientists) to derive as follows (by updating formula 131 with a minus sign, per the same answer from @cnada above), but cannot find the distance from the measurement at all.

First, initialize per DistCalc:
$$ |\psi\rangle = \frac{1}{\sqrt{2}} (|0,a\rangle + |1,b\rangle) $$
$$ |\phi\rangle = \frac{1}{\sqrt{Z}} (|a||0\rangle - |b||1\rangle) $$
Also let:
$$ |\psi'\rangle = \frac{1}{\sqrt{2}} (|a,0\rangle + |b,1\rangle) $$
Note that $\psi'$ is a valid (normalized) qubit as it can be obtained by swapping qubits from $|\psi\rangle$.
Now, initialize per SwapTest:
$$ | 0, \psi, \phi \rangle = \frac{1}{\sqrt{2Z}} (|a|| 0,0,a,0\rangle - |b|| 0,0,a,1\rangle +|a|| 0,1,b,0\rangle - |b|| 0,1,b,1\rangle) $$
Apply Hadamard gate on first qubit:
$$ | 0, \psi, \phi \rangle = \frac{1}{2\sqrt{Z}} (|a|| 0,0,a,0\rangle + |a|| 1,0,a,0\rangle - |b|| 0,0,a,1\rangle -|b|| 1,0,a,1\rangle$$ $$+ |a|| 0,1,b,0\rangle+|a|| 1,1,b,0\rangle - |b|| 0,1,b,1\rangle - |b|| 1,1,b,1\rangle ) $$
Apply Swap gate on the ancilla qubit of $\psi$ (before $a$ or $b$) and the only qubit on $\phi$ (after $a$ or $b$):
$$ | 0, \psi, \phi \rangle = \frac{1}{2\sqrt{Z}} (|a|| 0,0,a,0\rangle + |a|| 1,0,a,0\rangle - |b|| 0,1,a,0\rangle -|b|| 1,1,a,0\rangle$$ $$+ |a|| 0,0,b,1\rangle+|a|| 1,0,b,1\rangle - |b|| 0,1,b,1\rangle - |b|| 1,1,b,1\rangle ) $$
$$= \frac{1}{\sqrt{2Z}} (|a|\frac{| 0 \rangle+| 1 \rangle}{\sqrt{2}}| 0 \rangle - |b|\frac{| 0 \rangle+| 1 \rangle}{\sqrt{2}}| 1 \rangle)(| a,0 \rangle+| b,1 \rangle)$$
Apply Hadamard on first qubit:
$$ \frac{1}{\sqrt{2Z}} (|a|| 0,0 \rangle - |b|| 0,1 \rangle)(| a,0 \rangle+| b,1 \rangle)$$
Measure on first qubit to get the probability of finding the first qubit in $| 0 \rangle$:
$$ \vert \frac{1}{\sqrt{2Z}} (|a|\langle 0| 0,0 \rangle - |b|\langle 0| 0,1 \rangle)(| a,0 \rangle+| b,1 \rangle)\vert^2$$
$$=\frac{1}{2Z} \vert(|a||0 \rangle - |b||1 \rangle)(| a,0 \rangle+| b,1 \rangle)\vert^2$$
$$=\vert\frac{|a||0 \rangle - |b||1 \rangle}{\sqrt{Z}}\vert^2 \vert\frac{| a,0 \rangle+| b,1 \rangle}{\sqrt{2}}\vert^2$$
$$=||\phi\rangle|^2||\psi'\rangle|^2 = 1.$$
However, this is independent of the distance $|a - b|$=$||a||a\rangle - |b||b\rangle|$. Another way to think about this is that in the SwapTest, both $|a\rangle$ and $|b\rangle$ are $|0\rangle$ if the swap is indeed done on the ancilla bit of $\psi$. Then $|\langle a| b\rangle|=1$. Is there anything wrong in my derivation above?
Answer: The problem is that you applied a Swap gate when you should have applied a CSWAP, and so you never entangled the readout qubit with your query states (as a result the readout qubit will always return a "0", which makes sense because the net effect of $HH|0\rangle$ is $I|0\rangle$). Continuing your derivation starting from just after the first Hadamard, we apply a CSWAP gate on the ancilla qubit of $\psi$ (before $a$ or $b$) and the only qubit on $\phi$ (after $a$ or $b$): $$ \rightarrow \frac{1}{2\sqrt{Z}} (|a|| 0,0,a,0\rangle + |a|| 1,0,a,0\rangle - |b|| 0,0,a,1\rangle -|b|| 1,1,a,0\rangle$$ $$+ |a|| 0,1,b,0\rangle+|a|| 1,0,b,1\rangle - |b|| 0,1,b,1\rangle - |b|| 1,1,b,1\rangle ) $$ Rearranging before applying the Hadamard: $$ \rightarrow \frac{1}{2\sqrt{Z}} (|a|(|0\rangle + |1\rangle) |0,a,0\rangle - |b|(|0\rangle + |1\rangle) |1,b,1\rangle$$ $$ + (|a|| 0,1,b,0\rangle +|a|| 1,0,b,1\rangle - |b|| 0,0,a,1\rangle -|b|| 1,1,a,0\rangle) $$ Now apply a Hadamard to the first qubit and group by the state of the first qubit (you only need to work out the amplitude for the first qubit being in "$|0\rangle$", and normalization will tell us the amplitude for being in "$|1\rangle$"): $$ \rightarrow \frac{1}{\sqrt{2Z}} |0\rangle \lbrack 2 |a| |0,a,0\rangle - |b| |0,a,1\rangle + |a| |1,b,0\rangle - 2|b| |1,b,1\rangle + |a||0,b,1\rangle -|b||1,a,0\rangle\rbrack + \cdots$$ $$=\frac{1}{\sqrt{2Z}} |0\rangle \lbrack (|0,a\rangle + |1,b\rangle ) (|a| |0\rangle - |b| |1\rangle ) + (|a| |0\rangle - |b| |1\rangle )(|a,0\rangle + |b,1\rangle ) \rbrack + \cdots $$ You can check that this is on the correct track because it has the form of the "symmetric" combination of swapped states that provides the amplitude for a "0" readout in the SWAP test (equation 127 from your first ref): $$ \frac{1}{2}|0\rangle (|\psi\rangle|\phi\rangle + |\phi\rangle|\psi\rangle) + \cdots $$ Though in this case you need to account for the fact that you only partially swapped the registers of 
$|\psi\rangle$ and $|\phi\rangle$, which is why the $a$ and $b$ states remain in the third slot for both parts of the superposition. Then from this simpler expression you can continue with the derivation of the ordinary SWAP test to find that the readout probability for "0" is directly related to Euclidean distance.
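As a numerical sanity check on the ordinary swap test referenced above (equation 127), here is a small NumPy simulation. This is an illustrative sketch, not code from either paper: it builds the three-qubit circuit H, CSWAP, H explicitly for single-qubit inputs and confirms the readout probability P(0) = (1 + |<psi|phi>|^2) / 2.

```python
import numpy as np

def swap_test_p0(psi, phi):
    """Simulate an ordinary swap test on two single-qubit states.
    Register order: ancilla (qubit 0), psi (qubit 1), phi (qubit 2).
    Returns the probability of reading '0' on the ancilla, which
    should equal (1 + |<psi|phi>|^2) / 2."""
    state = np.kron(np.array([1.0, 0.0]), np.kron(psi, phi))  # |0, psi, phi>
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    H_anc = np.kron(H, np.eye(4))                  # Hadamard on the ancilla only
    # Controlled-SWAP: swap qubits 1 and 2 when the ancilla is |1>
    SWAP = np.eye(4)[[0, 2, 1, 3]]                 # permutation matrix |01> <-> |10>
    CSWAP = np.block([[np.eye(4), np.zeros((4, 4))],
                      [np.zeros((4, 4)), SWAP]])
    state = H_anc @ CSWAP @ H_anc @ state          # rightmost operator applied first
    # Probability of ancilla = 0 is the squared norm of the first half
    return float(np.sum(np.abs(state[:4]) ** 2))

psi = np.array([1.0, 0.0])                # |0>
phi = np.array([1.0, 1.0]) / np.sqrt(2)   # |+>, so |<psi|phi>|^2 = 1/2
p0 = swap_test_p0(psi, phi)
expected = 0.5 * (1 + abs(np.vdot(psi, phi)) ** 2)
print(p0, expected)  # both ~0.75
```

Identical inputs give P(0) = 1 and orthogonal inputs give P(0) = 1/2, which is why a full swap test (not a plain SWAP, which leaves the ancilla unentangled) is needed to extract anything state-dependent.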
{ "domain": "quantumcomputing.stackexchange", "id": 942, "tags": "machine-learning, swap-test" }
How does receptor downregulation/upregulation work?
Question: My understanding is that if a cell is flooded with a certain neurotransmitter, it may decrease the density of receptors for that neurotransmitter. What I don't understand is how. Is it a direct physical result of the receptor-ligand complex being internalized? I.e., the receptors simply aren't there anymore, because they've been internalized? Or, does the internalization of specific receptor-ligand complexes actually get sensed by the cell somehow, causing it to produce fewer receptors in the future?

Answer: Both internalization (sometimes with degradation) and changes in gene expression can occur; the circumstances leading to the downregulation determine which (or both). It isn't necessary for receptors to be bound to their ligand to be internalized, though, and it isn't the internalization of receptors that causes changes in gene expression (I suppose it is possible that happens someplace, but not that I am familiar with). The processes involve activation of second messenger systems and kinases that either phosphorylate/dephosphorylate the receptors to mark them for internalization and/or degradation, or activate/deactivate transcription factors. Here are two reviews with more information: one is newer but focused on a single class of receptors and fairly in-depth; the other is a bit dated but broader and brief:

Williams, J. T., Ingram, S. L., Henderson, G., Chavkin, C., von Zastrow, M., Schulz, S., ... & Christie, M. J. (2013). Regulation of µ-opioid receptors: Desensitization, phosphorylation, internalization, and tolerance. Pharmacological reviews, 65(1), 223-254.

Tsao, P., & von Zastrow, M. (2000). Downregulation of G protein-coupled receptors. Current opinion in neurobiology, 10(3), 365-369.
You can also read about long term depression, which is different from homeostatic responses to ligand concentrations, but it does seem to serve a homeostatic function to keep overall synaptic strengths from growing indefinitely, and the molecular pathways are very similar (and it is a very well-studied phenomenon so you will find a lot of accessible information).
{ "domain": "biology.stackexchange", "id": 7112, "tags": "neurotransmitter, receptor" }
What is the efficiency of Solar energy run turbines? Are they better or worse than Solar PV cells?
Question: The solar PV cell has an efficiency of 25-40%. Now, steam turbines run on coal or some other fuel have an efficiency of approximately 60-80%. What is the efficiency of steam turbines run on solar energy? Are there any energy losses when using solar energy to heat up water instead of using something like coal? For example, see this video: what is the efficiency of these solar powered turbines? Efficiency in terms of total energy output vs total solar energy incident. How does it compare to the efficiency of energy generated by solar PV cells? Would PV panels installed in a similar volume generate more electric power?

Answer: Less than 31.25% when going to electricity, per Wikipedia. (If you just needed heat for heating a room, it's a different story: 80% is not unheard of, but you'd be doing well to get half that into water to make PRESSURIZED steam, because you can't really insulate the solar collector itself, and mechanisms such as reflectors and heat exchangers for collecting to an area with lower thermal losses will incur their own inefficiencies. Followed by your 70% turbine, that multiplies to ~30% at 1 significant digit.)

Whether it's better than solar photovoltaic depends on which panel, per your 25-40%.

Of all of these technologies the solar dish/Stirling engine has the highest energy efficiency. A single solar dish-Stirling engine installed at Sandia National Laboratories National Solar Thermal Test Facility (NSTTF) produces as much as 25 kW of electricity, with a conversion efficiency of 31.25%.[66] Solar parabolic trough plants have been built with efficiencies of about 20%.[citation needed] Fresnel reflectors have a slightly lower efficiency (but this is compensated by the denser packing). The gross conversion efficiencies (taking into account that the solar dishes or troughs occupy only a fraction of the total area of the power plant) are determined by net generating capacity over the solar energy that falls on the total area of the solar plant.
The 500-megawatt (MW) SCE/SES plant would extract about 2.75% of the radiation (1 kW/m²; see Solar power for a discussion) that falls on its 4,500 acres (18.2 km²).[67] For the 50 MW AndaSol Power Plant[68] that is being built in Spain (total area of 1,300×1,500 m = 1.95 km²), gross conversion efficiency comes out at 2.6%. Efficiency does not directly relate to cost: total cost includes the cost of construction and maintenance.

The above is quoted from the first relevant web result (despite Google's attempts to guess at what is being asked): https://en.m.wikipedia.org/wiki/Solar_thermal_energy#Electrical_conversion_efficiency

Efficiency comparisons only make sense when you have the same input and output. It wouldn't be good to have to burn coal to make light to feed a photovoltaic panel to generate electricity, just as it would be bad to have to grow plants, compress the hydrogen out of them to make coal to then power something. Pay attention instead to what you need and what you have that you can exchange for it, seeking the most direct path.
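The overall figure quoted above is just chained multiplication of the stage efficiencies. A minimal Python sketch (the ~40% collector and ~70% turbine figures are the answer's rough estimates, not measured data):

```python
def overall_efficiency(stages):
    """Overall conversion efficiency of a chain of energy-conversion
    stages, each given as a fraction in [0, 1]."""
    result = 1.0
    for eta in stages:
        result *= eta
    return result

# Rough figures from the answer: ~40% of incident solar heat captured
# as pressurized steam, followed by a ~70%-efficient turbine.
collector, turbine = 0.40, 0.70
print(overall_efficiency([collector, turbine]))  # ~0.28, i.e. ~30% to 1 significant digit
```

Adding more stages (reflectors, heat exchangers, generator losses) only multiplies in further factors below 1, which is why the whole-plant gross figures above end up so much lower than any single component's efficiency.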
{ "domain": "engineering.stackexchange", "id": 5253, "tags": "energy-efficiency, turbines, solar-energy, steam, solar" }
What is the use of gears in a device with a fixed gear ratio?
Question: From my understanding, there are two uses of a gearing system: to change the speed of output rotation (trading it for torque), and to change the axis of rotation. Now, in a car, for example, it is necessary to have multiple available gear ratios, to allow for high torque and high acceleration when the car begins to move from stationary, and also high speed when the car is already on the move. However, I know that some systems still have gears when they only use a fixed gear ratio. For example, in my lab, we work with a robot arm, and my colleagues often talk about the gearing system in the arm. But the arm does not repeatedly change its gear ratio like in a car. So what are these gears actually doing? If they are to change the speed / torque of the output of the motor, and this is a fixed gear ratio, then why was the arm not designed with a different motor entirely -- one which provides the desired speed / torque properties? My intuition is perhaps that it is easier to mass produce motors that have high speed and low torque, and so it is more economical to buy one of these generic motors and attach it to a gearing system, rather than design a bespoke motor that has very high torque at its output by default... is this correct?
Answer: There are various reasons why one might choose to use a geared motor with a fixed gear ratio for a robot arm over an ungeared motor:

- As Carlton commented on your question, it may be difficult to find an ungeared motor that meets the specific speed/torque requirements for a given application, and a motor that can provide enough torque ungeared may be too large/massive for the robot arm (and may be too power hungry as well).
- From a controls perspective, assuming the motor is small and the gear ratio is used to increase torque/reduce speed, a large change in the angle of the motor results in a small change in the angle of the robot arm; this means you might be able to more accurately control the position of the arm by measuring and commanding the position of the motor (bigger angle = easier to measure).
- It is also entirely possible that they happened to have a motor of a certain size available and decided to use it instead of procuring a different, perhaps more expensive, motor for the arm.

I'm sure there are many other factors to consider, but, at a high level, this should give you an idea as to why the robot arm may not have been designed to use an ungeared motor.
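To make the fixed-ratio speed/torque trade concrete: for an ideal (lossless) reduction gearbox of ratio G:1, output speed is the motor speed divided by G and output torque is the motor torque multiplied by G. A minimal sketch, with made-up motor numbers chosen for illustration only:

```python
def gear_output(motor_speed_rpm, motor_torque_nm, ratio):
    """Ideal (lossless) output of a reduction gearbox with ratio G:1.
    Speed is divided by G, torque multiplied by G, so mechanical
    power (speed x torque) is conserved."""
    return motor_speed_rpm / ratio, motor_torque_nm * ratio

# A small, fast, low-torque motor behind a 100:1 reduction, as is
# common in robot-arm joints (hypothetical numbers).
speed, torque = gear_output(6000.0, 0.05, 100.0)
print(speed, torque)  # 60 rpm and ~5 N*m at the joint
```

Real gearboxes lose a few percent per stage to friction, but the linear trade is why a cheap high-speed motor plus a reducer can stand in for a large, expensive direct-drive motor.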
{ "domain": "engineering.stackexchange", "id": 394, "tags": "mechanical-engineering, motors, gears, torque" }
Return array of objects with matching keys
Question: For an exercise, I've written a function that, given an array of objects and a source object of key/value pairs (2nd argument), returns a new array of the objects that contain all of those key/value pairs. How'd I do?

function where(collection, source) {
  var arr = [];
  var keys = Object.keys(source);
  var countMatchingProps = 0;
  var currentProp;

  for (var i = 0; i < collection.length; i++) {
    countMatchingProps = 0;
    for (var j = 0; j < keys.length; j++) {
      // assigned to variable for a bit of readability.
      currentProp = keys[j];
      // if object contains key ->
      if (collection[i].hasOwnProperty(currentProp)) {
        // -> then compare their values and increment counter
        if (collection[i][currentProp] === source[currentProp]) {
          countMatchingProps++;
        }
      }
      // if number of matched properties are
      // equal to keys we can push current object to array
      if (countMatchingProps === keys.length) arr.push(collection[i]);
    }
  }
  return arr;
}

where([{ "a": 1 }, { "a": 1 }, { "a": 1, "b": 3 }], { "a": 1});
where([{ "a": 1, "b": 2 }, { "a": 1 }, { "a": 1, "b": 2, "c": 2 }], { "a": 1, "b": 2 })

Answer: Better readability could be achieved using array.filter, since it creates the array for you and all you have to do is return true or false. No need to create an array, do the comparison and push yourself. In the same way, when checking the values, Object.keys can be used with array.every to iterate through your constraints and see whether each of the current item's keys matches the constraint. Also, I wouldn't call it source. It's not a source of anything. It's more of a "constraint" for your collection. So I'd call it that way instead. In terms of performance, array iteration functions are slower than your average for loop (older APIs will tend to be more optimized). However, in terms of readability, these array APIs really shorten your code.
function where(collection, constraint){ // filter creates an array of items whose callback returns true for them return collection.filter(function(collectionItem){ // every returns true when every item's callback returns true return Object.keys(constraint).every(function(key){ return collectionItem.hasOwnProperty(key) && constraint[key] === collectionItem[key]; }); }); } var a = where([{ "a": 1 }, { "a": 1 }, { "a": 1, "b": 3 }], { "a": 1}); var b = where([{ "a": 1, "b": 2 }, { "a": 1 }, { "a": 1, "b": 2, "c": 2 }], { "a": 1, "b": 2 }) document.write(JSON.stringify(a)); document.write('<br>'); document.write(JSON.stringify(b)); The code can further be simplified by taking advantage of ES6 arrow functions. This removes the brackets (with one arg, the parens are optional), and the body can be an expression which would then be an implicit return value, eliminating return. function where(collection, constraint){ return collection.filter(collectionItem => Object.keys(constraint).every(key => collectionItem.hasOwnProperty(key) && constraint[key] === collectionItem[key])); } var a = where([{ "a": 1 }, { "a": 1 }, { "a": 1, "b": 3 }], { "a": 1}); var b = where([{ "a": 1, "b": 2 }, { "a": 1 }, { "a": 1, "b": 2, "c": 2 }], { "a": 1, "b": 2 }) document.write(JSON.stringify(a)); document.write('<br>'); document.write(JSON.stringify(b));
{ "domain": "codereview.stackexchange", "id": 16561, "tags": "javascript, object-oriented" }
Radiation from farther galaxies
Question: I've read many facts from NASA's webpage. Sometimes they say, for example, "NASA's Chandra X-Ray Observatory discovered this ultra-luminous X-Ray source (about 15 million LY) which shows an extraordinary outburst..." My question is: is it possible to detect such electromagnetic radiation (infrared, visible, UV or X-rays) from such a long range? If so, how?

Answer: Photons go on forever unless they hit something, and space is pretty empty. So unless there is a grain of dust, or a star in the way, a photon will travel across the universe. Paradoxically, it's harder for high energy photons such as x-rays to travel large distances in space. Because of their energy they can be affected by passing close to even something as small as a single electron. The other reason we still don't see X-ray objects at the same vast distances that we see infrared sources is that, even with Chandra, x-ray telescopes are smaller and less sensitive than optical or radio telescopes, so we need more photons from the source, so it needs to be brighter or closer. The reason distant objects are fainter isn't so much that photons are blocked - it's that the photons spread out into a sphere. So at 2x the distance away they are spread over an area 4x as big and so are diluted.
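The dilution described above is the inverse-square law: an isotropic source's photons spread over a sphere of area 4*pi*d^2, so the received flux falls as 1/d^2. A quick illustrative sketch (the luminosity and distance values are arbitrary):

```python
import math

def flux(luminosity_watts, distance_m):
    """Flux (W/m^2) received from an isotropic source: the emitted
    power spread evenly over a sphere of radius d."""
    return luminosity_watts / (4.0 * math.pi * distance_m ** 2)

# Doubling the distance spreads the same photons over 4x the area,
# so the received flux drops by a factor of 4.
f1 = flux(1.0e26, 1.0e20)
f2 = flux(1.0e26, 2.0e20)
print(f1 / f2)  # 4.0
```

This is why detection at 15 million light-years is a matter of telescope collecting area and sensitivity, not of the photons failing to arrive.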
{ "domain": "physics.stackexchange", "id": 4212, "tags": "electromagnetic-radiation, galaxies" }
rosserial_arduino failed to generate libraries for packages
Question: I am following the tutorials for rosserial_arduino and when I run rosrun rosserial_arduino make_libraries.py I get the error: *** Warning, failed to generate libraries for the following packages: *** map_msgs relative_nav_msgs I don't care about the map_msgs, but the relative_nav_msgs are custom message types that I need. It looks like it breaks partway through the messages; this is the exact formatted output: Exporting relative_nav_msgs Messages: Path,FilterState,Waypoint,Snapshot,Goal,DesiredVector,NodeInfo,Keyframe,Voltage,DEBUG_Controller,PartialState,Exporting turtle_actionlib So it looks like something bad happened after the PartialState message and it skipped to exporting the turtle_actionlib. Also, is there a way to turn off which types are built? For example, I don't actually care about the map_msgs; can I omit them from ros_lib? Originally posted by pnyholm on ROS Answers with karma: 97 on 2015-01-09 Post score: 0 Original comments Comment by Andromeda on 2015-01-10: Did you already try this? Comment by pnyholm on 2015-01-12: Yes, I have tried that but then it doesn't include my custom messages directory Answer: It turns out the messages it cannot create headers for are the custom messages that are made of custom messages. My workaround is to delete all of those types of messages and only create headers for the message I am interested in, which is made of floats. I will ask this question in a new thread. Originally posted by pnyholm with karma: 97 on 2015-01-12 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 20535, "tags": "arduino, rosserial" }
Check Cycles- On adding an edge in DAG
Question: Given a DAG N, suppose an edge $(U \rightarrow V)$ is added between existing nodes U and V. Should performing a DFS from the node $U$ and checking for a cycle be sufficient to conclude that adding this edge did not violate the acyclic property of the given DAG? Or is there a need to perform DFS from other nodes in the network to verify the acyclic property? I am trying to find the most efficient way to verify whether adding an edge to an existing DAG introduces a cycle.

Answer: If you start with a DAG, then the only thing you need to check is that adding the edge $U\to V$ does not create a cycle. That is, there should not exist a path from $V$ to $U$ in the original graph.
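The check the answer describes can be sketched as a single DFS from $V$ before the edge is inserted (the function name and adjacency-list representation below are illustrative choices, not from the question):

```python
def can_add_edge(adj, u, v):
    """Return True if adding edge u -> v keeps the graph acyclic.
    adj maps each node to a list of its successors in the current DAG.
    The new edge closes a cycle iff v can already reach u, so a single
    DFS from v suffices -- O(V + E) per insertion."""
    stack, seen = [v], set()
    while stack:
        node = stack.pop()
        if node == u:
            return False  # path v ~> u exists: adding u -> v would close a cycle
        if node in seen:
            continue
        seen.add(node)
        stack.extend(adj.get(node, []))
    return True

dag = {"a": ["b"], "b": ["c"], "c": []}
print(can_add_edge(dag, "c", "a"))  # False: a ~> c already exists, so c -> a closes a cycle
print(can_add_edge(dag, "a", "c"))  # True
```

Note the DFS starts from $V$, not $U$: searching forward from $U$ after inserting the edge also works, but searching from $V$ lets you reject the edge without ever mutating the graph.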
{ "domain": "cs.stackexchange", "id": 6524, "tags": "graphs" }
Phosphoric acid spill cleanup
Question: While working with some rust converter (MSDS, ~20% phosphoric acid by weight), a sequence of events that would make Rube Goldberg proud led to me spilling about 500mL of it all over the place. Yes, I was wearing gloves and an acid gas respirator. I'm not really well versed in chemical cleanup procedures and I needed to act fast so I kind of winged it by:

1. Wiping the majority with a bath towel.
2. There happened to be a box of baking soda right there so I dumped a bunch of it on the remaining spill. It fizzed for a bit, then I wiped that up.
3. I shop-vac'd the rest of the baking soda.

My question is just: Did I handle that OK? And, if not, how should I be prepared if that happens again? (I've been working with the chemical a lot lately.) The reason I ask is, when I took off my respirator, it still smells like rust converter. Also, it just happened so I won't be able to see any effects for a few hours, but I'm hoping I don't start seeing rust spots on the metals it contacted, or dried crystals, or tear up any of the finish on the wood it contacted. But I'm hoping I got it. Also, there are cats here and I don't really want residue to be left laying around. So I'm wondering if that was a decent reaction, or if it was dangerous (I dunno what phosphoric acid + baking soda produces, although the space was well ventilated); and if it was effective, or if there is a better way. Context is home shop, primarily woodworking; not a lab. So I don't have those nice metal lab tables or anything; lots of cracks, crevices, porous surfaces (concrete and wood), and stuff. And while I normally work with this stuff on a tray for small splash containment, this was a rather... violent spill (heh).

Answer: The product you were using is very likely a "rust converter", used to convert rust to stable products before painting. I have used it several times; usually I just rinse the surface, let it dry, and paint.
It is also used for industrial maintenance coating of poorly prepared surfaces (those not sand blasted). That is, it is commonly used, and when I worked with coatings I never heard of a problem. Baking soda will neutralize it. I once inspected superphosphoric acid storage tanks: it starts as 107% phosphoric acid used to blend liquid fertilizers. When I stepped out of the tank and walked across the limestone gravel, each step of my rubber boots left some jelly-like acid, which foamed on the gravel. The people who worked at the facility thought nothing of it; from that I conclude neutralized phosphoric acid is not a problem. Sorry this is all anecdotal, but I have never needed to research the situation.
{ "domain": "chemistry.stackexchange", "id": 14961, "tags": "home-experiment, safety, cleaning" }
Is there a better way to convert to a specific type with reflection?
Question: Rather than doing what is essentially a large switch statement for every possible type, is there a better, more generic way of converting to a specific type with reflection? I've looked up TypeConverter but don't understand the documentation.
if (header.Property.PropertyType == typeof(Int32))
{
    header.Property.SetValue(instanceOfTrade, value.ToInt(), null);
}
else if (header.Property.PropertyType == typeof(decimal))
{
    header.Property.SetValue(instanceOfTrade, value.ToDecimal(), null);
}
else if (header.Property.PropertyType == typeof(DateTime))
{
    header.Property.SetValue(instanceOfTrade, value.TryToDateTime(), null);
}
else
{
    header.Property.SetValue(instanceOfTrade, value.ToString(), null);
}
Answer: You could use an extension method (if this is common), or a regular generic method with an "IConvertible" constraint on the desired value, then call "Convert.ChangeType" in your SetValue call.
static class ObjectExtensions
{
    public static void SetPropertyValue<T>(this object obj, string propertyName, T propertyValue)
        where T : IConvertible
    {
        PropertyInfo pi = obj.GetType().GetProperty(propertyName);
        if (pi != null && pi.CanWrite)
        {
            pi.SetValue(obj, Convert.ChangeType(propertyValue, pi.PropertyType), null);
        }
    }
}

class TestObject
{
    public string Property1 { get; set; }
    public int Property2 { get; set; }
}

void Main()
{
    TestObject o = new TestObject();
    // Property1 == null, Property2 == 0
    o.SetPropertyValue("Property1", 1);
    o.SetPropertyValue("Property2", "123");
    // Property1 == "1", Property2 == 123
}
Obviously there is no error handling, and this assumes you want it available on all types, so I just threw it in an "ObjectExtensions" class so it'll be visible on all types. Just adjust the constraints to fit your exact needs, or put it in a regular class if you don't want to use extension methods.
{ "domain": "codereview.stackexchange", "id": 515, "tags": "c#, reflection" }
Should the centripetal force of Earth's orbit around the Sun affect a pendulum on Earth?
Question: Let's approximate Earth's angular velocity around the Sun as $\omega\approx2\times10^{-7}\,\mathrm{rad/s}$ and Earth's orbital radius as $r\approx1.5\times10^{11}\,\mathrm{m}$. We can then approximate the centripetal acceleration as $|A_{c}|=\omega^2 r\approx0.006\,\mathrm{m/s^2}$, with $A_c$ pointing to the center of the orbit. Is every object on Earth affected by that acceleration, with an absolute value of $|A_c|=0.006\,\mathrm{m/s^2}$? If so, consider a static pendulum with no other force applied to it, at a specific time and place such that $A_c$ is completely perpendicular to it: should we see the pendulum displaced a little to one side, and 12 hours later displaced a little to the opposite side (since Earth makes half a turn and the pendulum is then oriented the opposite way)? I know that, since the Moon also orbits the Earth, we can repeat the same argument; but, all things considered, is it correct to try to predict a small deviation of an immobile pendulum from its rest position? Answer: Yes and no. ;) A body in orbit is in freefall and experiences no weight due to the gravity of the central body which it orbits. A pendulum in the International Space Station wouldn't work because it's weightless. To a first approximation, the Earth and everything on it is weightless relative to the Sun. But that's only approximately true because the Earth is an extended body, not a point. It's more accurate to say that the centre of the Earth is in freefall and all other points on the Earth feel a slight force from the Sun because their velocity isn't exactly the proper velocity for a point at that orbital distance. That slight force is called a tidal force because it drives the ocean tides. But even that's not quite true because the Earth has a rather large Moon. The Earth and Moon orbit around their common centre of mass, their barycentre, which is located (on average) about 1700 km below the surface of the Earth, or about 4670 km from the centre of the Earth.
So the Earth-Moon barycentre is in freefall around the Sun, and anything on Earth not located at that barycentre will experience a tidal force from the Sun. It will also experience a tidal force from the Moon, and in fact the tidal force from the Moon is larger than that from the Sun because tidal forces diminish in accordance with an inverse cube law, rather than the inverse square law of direct gravitational forces. On the Earth's surface, these tidal forces are quite small compared to the gravitational force of the Earth. Relative to $g$, the mean gravitational acceleration at the surface of the Earth, the lunar tidal acceleration is approximately $1.12×10^{-7}g$ and the solar tidal acceleration is approximately $5.14×10^{-8}g$. It is possible to detect those tidal accelerations with a pendulum, but the pendulum needs to be built with very high precision and very low friction. And you need a very good clock to measure the tidal deviations in the pendulum's period with sufficient precision. "Time hacker" Tom Van Baak has written an excellent series of articles on this topic. Tom has performed many tests using atomic clocks and pendulums, and his articles are copiously illustrated with graphs. They also contain many equations, but (generally) don't require mathematical knowledge beyond high school level. You can find links to these articles here: Precision Pendulum Clocks, Gravity and Tides However, your question asks about angular deviations of a static pendulum. The tidal forces will cause such deviations, but they are very hard to measure accurately. Measuring the period of a swinging pendulum would be a lot easier.
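The magnitudes quoted above are easy to check numerically. A quick sketch (Python, using rounded textbook constants, so treat the figures as order-of-magnitude checks rather than precise values):

```python
import math

# Earth's orbital angular velocity and radius (rounded textbook values)
omega = 2 * math.pi / (365.25 * 24 * 3600)  # rad/s
r = 1.496e11                                # m, mean orbital radius

a_c = omega**2 * r  # centripetal acceleration toward the Sun
print(f"centripetal: {a_c:.4f} m/s^2")      # ~0.006 m/s^2, as in the question

# Rough solar tidal acceleration at Earth's surface: ~2*G*M_sun*R_earth/d^3
GM_sun = 1.327e20   # m^3/s^2
R_earth = 6.371e6   # m
g = 9.81            # m/s^2
a_tidal = 2 * GM_sun * R_earth / r**3
print(f"solar tidal: {a_tidal / g:.2e} g")  # ~5e-8 g, matching the answer
```

The centripetal term reproduces the question's 0.006 m/s², while the tidal term, the part a pendulum could actually feel, is about five orders of magnitude smaller still.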
{ "domain": "physics.stackexchange", "id": 84051, "tags": "newtonian-mechanics, reference-frames, orbital-motion, earth, sun" }
Why are there only two $S_F$ propagators in $e^+ e^-\rightarrow \mu^+ \mu^-$ instead of the four suggested by the path integral?
Question: This was a homework problem: calculate $e^+ e^-\rightarrow \mu^+ \mu^-$ following Peskin & Schroeder, chapter 5.1. However, I got confused with the path-integral aspect of the calculation (which was not the homework). The canonical-quantization calculation of $e^+ e^-\rightarrow \mu^+ \mu^-$ given in Peskin on page 131 is quite standard with the Feynman rules: $$iM =\bar v^{s'}(p')(-ie\gamma^\mu) u^s(p) \left(\frac{-ig_{\mu\nu}}{q^2}\right) \bar u^r(k)(-ie\gamma^\nu)v^{r'}(k')$$ However, if one looks at the free generating functional $$Z_0 \exp\left[-\int d^4x \int d^4y\, \bar\eta(x) S_F(x-y) \eta (y) -\frac{1}{2} \int d^4x\int d^4y\, J^\mu(x) D_{F\,\mu\nu} (x-y) J^\nu(y)\right]$$ then, schematically, $$\langle 0|\bar \psi(x_1) \psi (x_2) \bar \psi(x_3) \psi (x_4) |0\rangle$$ with the insertion of $(-e\int d^4x\, \bar\psi(x) \gamma^\mu A_\mu\psi(x))^2$ should produce a trace of four fermion propagators, $$Tr[S_F S_F S_F S_F],$$ connected with the four external legs $$\bar \psi(x_1) \psi (x_2) \bar \psi(x_3) \psi (x_4)$$ and with one photon propagator. However, though the counting for the photon propagator was correct, in the momentum-space expression for $iM$ there is a trace over only two, $Tr[S_F S_F]$. Where are the additional two fermionic propagators $S_F$ in the trace? Answer: If I understand your question correctly, there are two confusions going on here. The first is that in the calculation of the tree matrix element for the $e^+e^-\to \mu^+\mu^-$ reaction there are actually no fermionic propagators involved. You can see it already in the expression for $iM$ that you provided. The only propagator involved (inverse powers of momenta, to make it simple) comes from the intermediate photon propagator. I think you might be getting confused by the fact that, when we average over initial spins and sum over final spins, we also get two traces. Those are, however, not propagators, but rather completeness relations for the fermionic spinors. In fact, they go as positive powers of the momentum, not inverse ones.
So, no fermionic propagator should appear in $iM$ for this process. The fact that, instead, you get four fermion propagators if you compute the four-point function from the generating functional is completely natural. They correspond to the external legs of the diagram, which diverge when they are on-shell. The confusion here is that $n$-point functions and scattering matrix elements are not the same thing. Roughly speaking, they are related to each other by the "truncation" of external legs, which schematically means (for example, for the four-point function): $$ iM_4(k_1,k_2,k_3,k_4) \sim S_F^{-1}(k_1)S_F^{-1}(k_2)S_F^{-1}(k_3)S_F^{-1}(k_4) \langle 0 | \bar\psi(k_1) \psi(k_2) \bar\psi(k_3)\psi(k_4)|0\rangle\,, $$ where the role of the inverse propagators is precisely to "truncate" the external legs and take care of the on-shell divergences. The more rigorous procedure connecting $n$-point functions with $n$-point scattering matrix elements is called LSZ reduction. I hope this helps!
{ "domain": "physics.stackexchange", "id": 83395, "tags": "quantum-electrodynamics, feynman-diagrams, path-integral, fermions, scattering-cross-section" }
Covalently bonding amide to glass
Question: I am unfortunately not very good at chemistry, so I am having trouble figuring out where to start on this project. I need some way of getting amide functional groups stuck on a substrate (most likely some glass beads or slides) so I can try to show some interactions of the amide with a molecule I am interested in. Does anyone know of any literature that shows a good way to functionalize silica with amides? Thanks for any help with this, I really appreciate it. EDIT: I found some of this stuff ((3-Aminopropyl)triethoxysilane) in our lab. Could I just put a glass slide in it and leave it overnight? Then maybe come back and wash acetyl chloride over it, as in this reaction? Answer: An overnight soak in an ethanol solution of APTEOS will work. You won't get great coverage, but it will likely be good enough for what you want to do. The subsequent chemistry is a bit more touchy: the problem is the acid byproduct, which forms an ammonium salt with the amine functions that is difficult to remove. This leaves a latent acid catalyst at your surface. You are better served forming the amide from your APTEOS first and coating with that product.
{ "domain": "chemistry.stackexchange", "id": 348, "tags": "organic-chemistry, inorganic-chemistry, everyday-chemistry" }
load custom world in gazebo from a launch file
Question: Hello everyone, I've built a custom world for Gazebo (a dae file) and I'm trying to load this file from a launch file. When I run the world file directly from Gazebo, it works and I can see the world (gazebo maze.world). But when I build a launch file and include the world file inside the launch file, I get the following errors:
Error [SystemPaths.cc:367] File or path does not exist[""]
Error [Visual.cc:2138] No mesh specified
Here is the content of the world file (maze.world): and here is the launch file (maze.launch): Both the launch file and the world file are in the same directory, and I've set the GAZEBO_MODEL_PATH to that directory. Can someone tell me what the problem is here? I'm using Ubuntu 14.04, Indigo, and gazebo_ros. Thanks Originally posted by Kasra on Gazebo Answers with karma: 3 on 2014-11-21 Post score: 0 Answer: I think the problem may be related to how the mesh file location is specified: <mesh><uri>file://maze_parallel_wall.dae</uri></mesh> I would recommend following an example from gazebo_models, such as the cordless drill: <mesh><uri>model://cordless_drill/meshes/cordless_drill.stl</uri></mesh> Originally posted by scpeters with karma: 2861 on 2014-11-25 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 3679, "tags": "ros, collada" }
Neo4j graph to cypher conversion
Question: Is there a way, or are there tools available, to generate or retrieve the Cypher queries from a Neo4j database? Or do we need to store the Cypher queries along with the graph data for regeneration? Answer: This has already been answered on Stack Overflow. There is no need to save the queries you used to populate the database; you only need to dump the contents of the database to a file:
db1/bin/neo4j-shell -path db1/data/graph.db/ -c dump > export_data.cypher
In order to load the database dump into another database, you just supply it to the neo4j shell through the standard input:
db2/bin/neo4j-shell -path db2/data/graph.db/ < export_data.cypher
{ "domain": "datascience.stackexchange", "id": 1734, "tags": "graphs, neo4j" }
How should I create a single score with two values as input?
Question: I have two series of values, a and b, as inputs, and I want to create a score, c, which reflects both of them equally. The distributions of a and b are shown below (in both cases, the x-axis is just an index). How should I go about creating an equation c = f(a,b) such that a and b are (on average) represented equally in c? Edit: c = (a+b)/2 or c = ab will not work because c will be too heavily weighted by a or b. I need a function, f, where c = f(a,b) and c' = f(a + stdev(a), b) = f(a, b + stdev(b)). Answer: If you're looking for something where a and b are equally represented, consider trying something like Z-score normalization (or standard score): c = (a-u_a)/sigma_a + (b-u_b)/sigma_b That score represents the two equally, but would be on a smaller scale. It really shouldn't matter since the numbers are arbitrary; however, if you need to scale it up, you could do something like: c2 = (sigma_a+sigma_b)*(c) + u_a + u_b
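A small sketch of the Z-score approach (Python/NumPy, with made-up data; the normalization constants are frozen from reference data so that a one-standard-deviation change in either input moves the score by exactly the same amount, which is the property asked for in the edit):

```python
import numpy as np

def make_scorer(a_ref, b_ref):
    """Build f(a, b) with mean/std frozen from reference data."""
    ua, sa = a_ref.mean(), a_ref.std()
    ub, sb = b_ref.mean(), b_ref.std()
    def f(a, b):
        return (a - ua) / sa + (b - ub) / sb
    return f, sa, sb

rng = np.random.default_rng(0)
a = rng.normal(100.0, 20.0, 1000)   # hypothetical series on a large scale
b = rng.normal(0.5, 0.1, 1000)      # hypothetical series on a small scale
f, sa, sb = make_scorer(a, b)

# One stdev of a and one stdev of b move the score by the same amount:
c1 = f(a[0] + sa, b[0])
c2 = f(a[0], b[0] + sb)
print(abs(c1 - c2) < 1e-9)  # True
```

Freezing u and sigma matters: if you re-estimate them from the shifted series, a uniform shift cancels out and the score never changes.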
{ "domain": "datascience.stackexchange", "id": 281, "tags": "statistics" }
Infinitesimal generator of change of basis (Fock Space)
Question: I'm trying to find the unitary transformation and prove that the infinitesimal generator for a change of basis with spatial dependency $$|\vec{r} \rangle \rightarrow e^{i \theta (\vec{r}) }|\vec{r}\rangle $$ is the density operator $\hat{\rho} = \hat{\psi}^{\dagger}(\vec{r}) \hat{\psi}(\vec{r})$. My attempt was: $$\hat{U}(\theta(\vec{r}))\, \hat{\psi}(\vec{r})\, \hat{U}(\theta(\vec{r}))^{\dagger} = e^{-i\theta(\vec{r})} \hat{\psi}(\vec{r}) \iff \\ \iff \hat{\psi}(\vec{r}) - i \theta(\vec{r})[G,\hat{\psi}(\vec{r})] = \hat{\psi}(\vec{r}) - i \theta(\vec{r})\hat{\psi}(\vec{r}) $$ Thus, $$ [G,\hat{\psi}(\vec{r})] = \hat{\psi}(\vec{r}) \rightarrow G = \hat{N}, $$ which is not the solution. What am I doing wrong? Answer: The step where you went wrong is in writing $$\hat{U}(\theta(\vec{r}))=1-i\theta(\vec{r})G$$ for infinitesimal transformations. This would imply $U(\theta(\vec{r}))$ is an operator-valued function of $\vec{r}$ instead of a single unitary operator. In reality, there should be a separate generator for each point in space: $$\hat{U}(\theta(\vec{r}))=1-i\int d^3r \theta(\vec{r})G(\vec{r}).$$ Then you get $$\left[\int d^3r' \theta(\vec{r}')G(\vec{r}'),\hat{\psi}(\vec{r})\right]=\theta(\vec{r})\hat{\psi}(\vec{r})$$ $$\implies \int d^3r' \theta(\vec{r}')\left[G(\vec{r}'),\hat{\psi}(\vec{r})\right]=\theta(\vec{r})\hat{\psi}(\vec{r}).$$ You can plug in $G(\vec{r})=\hat{\psi}^\dagger(\vec{r})\hat{\psi}(\vec{r})$ at this point. $$\theta(\vec{r})\hat{\psi}(\vec{r})=\int d^3r' \theta(\vec{r}')\left[\hat{\psi}^\dagger(\vec{r}')\hat{\psi}(\vec{r}'),\hat{\psi}(\vec{r})\right]$$ $$=\int d^3r' \theta(\vec{r}')\left(\hat{\psi}^\dagger(\vec{r}')\{\hat{\psi}(\vec{r}'),\hat{\psi}(\vec{r})\} - \{\hat{\psi}^\dagger(\vec{r}'),\hat{\psi}(\vec{r})\}\hat{\psi}(\vec{r}')\right)$$ $$=\int d^3r' \theta(\vec{r}')\left(\hat{\psi}^\dagger(\vec{r}')\cdot 0 - \delta^{(3)}(\vec{r}-\vec{r}')\hat{\psi}(\vec{r}')\right)$$ $$=\theta(\vec{r})\hat{\psi}(\vec{r})$$
{ "domain": "physics.stackexchange", "id": 74435, "tags": "quantum-field-theory, hilbert-space, density-operator, many-body, unitarity" }
Why does the BER rate change randomly versus the SNR?
Question: I have altered a MATLAB code that simulates BPSK modulation using a raised cosine filter at the transmitter and a matched filter at the receiver, to study how the signal changes in each step. But the final plot of the BER versus SNR is not decreasing as the SNR increases. I have no clue how to fix this. I would appreciate any help. The image (BER vs SNR) and code are below.
clear
N = 10^6; % number of bits or symbols
T = 1; % symbol duration of 1us
os = 5; % oversampling factor
fs = 5/T; % sampling frequency in MHz
rolloff = 0.05;
Eb_N0_dB = [0:10]; % multiple Eb/N0 values
for ii = 1:length(Eb_N0_dB)
    filter = rcosine(1/T,os,'sqrt',rolloff);
    % Transmitter
    ip = rand(1,N)>0.5; % generating 0,1 with equal probability
    s = 2*ip-1; % BPSK modulation 0 -> -1; 1 -> 1
    % up sampling the signal for transmission
    sU = [s;zeros(os-1,length(s))];
    sU = sU(:).';
    sFilt = 1/sqrt(os)*conv(sU,filter);
    sFilt = sFilt(1:N*os);
    % Noise addition
    y = awgn(sFilt,ii,0);
    % matched filter
    yFilt = conv(y,fliplr(filter)); % convolution
    ySamp = yFilt(os:os:N*os); % sampling at time T
    % receiver - hard decision decoding
    ipHat = real(ySamp)>0;
    % counting the errors
    nErr(ii) = size(find([ip- ipHat]),2);
end
simBer = nErr/N; % simulated ber
semilogy(Eb_N0_dB,simBer,'mx-');
grid on
xlabel('Eb/No, dB');
ylabel('Bit Error Rate');
title('Bit error probability curve for BPSK modulation');
Answer: There are a few problems with your code:
You are using the loop index ii, not the actual SNR in dB, in this line: y = awgn(sFilt,ii,0);
You are not taking into account the filter delay in ySamp = yFilt(os:os:N*os);. Your filter is filter = rcosine(1/T,os,'sqrt',rolloff);, which designs a root-raised-cosine FIR filter, so your filter delay will be (length(filter)-1)/2.
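As a cross-check of the expected behaviour, here is a minimal baseband BPSK simulation (a Python/NumPy sketch rather than MATLAB, with no pulse shaping, so there is no filter delay to handle) that passes the actual Eb/N0 value in dB to the noise step; its BER falls monotonically with Eb/N0, which is what the plot should show once the bugs above are fixed:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
bers = []
for ebn0_db in (0, 4, 8):              # use the actual Eb/N0 in dB, not a loop index
    bits = rng.integers(0, 2, N)
    s = 2 * bits - 1                   # BPSK: 0 -> -1, 1 -> +1 (unit symbol energy)
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = np.sqrt(1 / (2 * ebn0))    # noise std for Eb = 1
    y = s + sigma * rng.standard_normal(N)
    bers.append(np.mean((y > 0).astype(int) != bits))
print(bers)  # strictly decreasing, close to the theoretical Q(sqrt(2*Eb/N0))
```

The same shape should appear in the MATLAB version once awgn receives Eb_N0_dB(ii) and the sampling indices are offset by the filter delay.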
{ "domain": "dsp.stackexchange", "id": 3366, "tags": "matlab, modulation, snr, bpsk" }
Regarding the usage of 'classical potentials' in quantum mechanics
Question: I am familiar with basic quantum mechanics and I know that there is no concept of 'force' in quantum mechanics, unlike in classical mechanics. Problems in quantum mechanics are solved by writing down the Hamiltonian for a system, and trying to solve for the various eigenvalues. Some of the first problems that are taught to students learning Quantum Mechanics are the harmonic oscillator problem, and the Hydrogen atom problem, where the Hamiltonian takes the same form as a classical system. Since moving over to quantum mechanics requires one to lose several ideas that have been built up while learning classical mechanics, why is the potential found in quantum mechanics problem of the same form as classical mechanics? The potential for the hydrogen atom, for example, is classical in origin and is derived from the Coulomb force. How is this direct usage of the potential, which is purely classical in origin, justified by the theory? Answer: The Hamiltonian (and thus in particular the form of the potential) is not "justified by the theory". The form of the Hamiltonian is an input to the theory. However, there are various ways to see that using the classical Hamiltonian in its quantized version is really the right thing to do: The "classical" limit of large quantum numbers gives the classical behaviour, as demanded by the correspondence principle. Even without looking at the classical theory, the predicted spectral lines (for the hydrogen atom) with this Hamiltonian are empirically correct, as seen in experiments. This is the only "justification" the Hamiltonian/Lagrangian needs: The justification for the classical form is also, in the end, that it is the form that produces the correct predictions. The form of the potential itself can be derived from the underlying quantum field theory, quantum electrodynamics, yet there you give the Lagrangian of classical electrodynamics as an input, shifting the question to "Why does the Lagrangian have this form?". 
Answer, again: Because it predicts the correct results. (Although one might argue that there is a certain "naturality" to this minimally coupled Yang-Mills theory that is not as evident for the Coulomb potential.)
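As a concrete illustration of "it predicts the correct results": the quantized Coulomb Hamiltonian gives the Bohr energy levels $E_n = -13.6\,\text{eV}/n^2$, and the transition wavelengths computed from them land on the measured spectral lines. A quick check (Python; constants are rounded and the infinite-nuclear-mass Rydberg is used instead of the reduced-mass value, so expect agreement only to a few tenths of a percent):

```python
RY = 13.6057      # Rydberg energy in eV (infinite nuclear mass, rounded)
HC = 1239.84      # h*c in eV*nm

def wavelength_nm(n_upper, n_lower):
    """Photon wavelength for the n_upper -> n_lower hydrogen transition."""
    delta_e = RY * (1 / n_lower**2 - 1 / n_upper**2)
    return HC / delta_e

print(round(wavelength_nm(3, 2), 1))  # H-alpha (Balmer): ~656 nm
print(round(wavelength_nm(2, 1), 1))  # Lyman-alpha: ~122 nm
```

Both values match the observed hydrogen lines, which is exactly the empirical justification the answer describes.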
{ "domain": "physics.stackexchange", "id": 28186, "tags": "quantum-mechanics, classical-mechanics" }