ROS Stream prints spurious characters to log
Question: I launched a node with roslaunch. The node uses ROS_*_STREAM macros to log information. The log file created by the node, however, shows spurious characters. The following line shows an example: ^[[0m[ INFO] [1325453858.376186830]: Alasca scan: time=-0.000000, points=31, nodes=11^[[0m Every line in the log file starts and ends with "^[[0m". Does anyone know what might cause this behavior? This happens not just with this node, but with all nodes that I launch with roslaunch. ROS: Electric, OS: Ubuntu 11.10 x86 Originally posted by Aditya on ROS Answers with karma: 287 on 2012-01-01 Post score: 1 Answer: To me they look like escape sequences for colorization. The 0 should reset all previous color commands. Originally posted by dornhege with karma: 31395 on 2012-01-01 This answer was ACCEPTED on the original site Post score: 3
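If the sequences are a nuisance when reading the log afterwards, they can be stripped in post-processing; a short Python sketch (the regex below is a common pattern for ANSI color codes, not something ROS itself provides):

```python
import re

# Matches ANSI CSI color/style sequences such as "\x1b[0m" ("^[[0m" in the log)
ANSI_RE = re.compile(r'\x1b\[[0-9;]*m')

def strip_ansi(line: str) -> str:
    """Remove ANSI color escape sequences from a log line."""
    return ANSI_RE.sub('', line)

line = '\x1b[0m[ INFO] [1325453858.376186830]: Alasca scan: time=-0.000000, points=31, nodes=11\x1b[0m'
print(strip_ansi(line))
```

Running the cleaned lines through such a filter leaves plain text suitable for grepping or diffing.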
{ "domain": "robotics.stackexchange", "id": 7769, "tags": "ros, roslaunch, logging" }
Formally describing a sensor network language
Question: I have a language for sensor networks (generates C code) and I want to define its formal semantics. The language has this form: {STATE name_state: EVERY time SELECT {variable [, variable] ...} [SENDIF send_condition] [CHANGEIF change_condition GOTO new_state]; } ... START IN initial_state; The initial state of the program is specified by the START IN instruction. Each state is defined by the STATE instruction. In the specification of each state the clauses EVERY and SELECT are compulsory and the clauses SENDIF and CHANGEIF are optional. The CHANGEIF clause is only omitted when there is a single state. What type of formal semantics is more convenient to use? Answer: In general, axiomatic semantics is the nicest form of semantics, but it is difficult to obtain for anything but the most simple languages, and it is the subject of much current research. Denotational semantics is a vague term; if you mean domain-theoretic semantics, then I advise against it. In general, using operational semantics is easy and natural, especially if you use state-based SOS (structural operational semantics). The fact that you already have a translation into C is a strong indication that this would work in your case, because C can be seen as a state-based formalism. Indeed, your translation into C is a formal semantics of your sensor network language, albeit a painfully detailed one. So I guess you want a formal description that omits some level of detail (e.g. integers are 'real' mathematical integers and not some kind of finite modulo arithmetic). For this, state-based SOS tends to be ideal.
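As an illustration only (not from the original answer), one hypothetical SOS rule for the CHANGEIF clause could be written over configurations $\langle s, \sigma \rangle$ of current state name $s$ and variable store $\sigma$:

```latex
\frac{\sigma \models \mathit{change\_condition}}
     {\langle s, \sigma \rangle \longrightarrow \langle \mathit{new\_state}, \sigma \rangle}
\;\textsc{ChangeIf}
```

Reading: if the change condition holds in the current store, the program steps to the new state with the store unchanged. The EVERY and SENDIF clauses would get analogous rules over a configuration extended with a clock and an output channel.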
{ "domain": "cs.stackexchange", "id": 1074, "tags": "formal-languages, programming-languages, semantics" }
Moving a rover and receiving current coordinates
Question: I made a little text-based game where you move a rover and it gives you the current coordinates. You can essentially move around a 1000x1000 grid, and get output to stdout telling you what your coordinates are. I'm simply wondering if I could improve anything. # Rover moving program from random import randint # Starting position for rover r_pos = {'x': randint(1, 1000), 'y': randint(1, 1000)} def move_rover(x_pos, y_pos): if x_pos <= 1000 and y_pos <= 1000: r_pos['x'] = x_pos r_pos['y'] = y_pos print('x{} y{}'.format( r_pos['x'], r_pos['y'])) elif x_pos > 1000 and y_pos > 1000: print('Invalid position: x{} y{}'.format( x_pos, y_pos)) You run it like this: >>> import rover >>> rover.move_rover(234, 789) x234 y789 Answer: Validation Your validation is weird, and probably not what you intended. move_rover(-1, -1) appears to succeed, even though it's not within your 1000 × 1000 grid. move_rover(-1, 1001) is a no-op. move_rover(1001, 500) is a no-op. move_rover(2000, 2000) prints a message. However, it would be more idiomatic to raise some kind of ValueError. Printing the error message directly limits the reusability of your code. Output Similarly, I advise against having your move_rover() function also print the new coordinates. Each function should do one thing only. Printing should be a separate operation. There is repetition in the code to format the position in the success and error cases. The formatting code should be factored out to a common routine. Representation Using a dictionary with keys named 'x' and 'y' seems cumbersome. You could use a tuple, a namedtuple, or a class. A class probably makes sense, since your rover should act as an object that responds to messages.
Suggested implementation from random import randint MIN_COORD, MAX_COORD = 1, 1000 class Rover(object): def __init__(self): """Places the rover randomly in the coordinate range with a uniform distribution""" self.x = randint(MIN_COORD, MAX_COORD) self.y = randint(MIN_COORD, MAX_COORD) def move(self, x, y): if MIN_COORD <= x <= MAX_COORD and MIN_COORD <= y <= MAX_COORD: self.x, self.y = x, y else: raise ValueError('Invalid position: %s' % (self)) def __str__(self): """Reports the position of the rover""" return 'x{} y{}'.format(self.x, self.y) You run it like this: >>> from rover import Rover >>> rover = Rover() >>> print(rover) x944 y556 >>> rover.move(234, 789) >>> print(rover) x234 y789
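For comparison, the namedtuple alternative mentioned in the review could be sketched like this (an illustrative sketch, not part of the original answer; Position, random_position and move are made-up names):

```python
from collections import namedtuple
from random import randint

MIN_COORD, MAX_COORD = 1, 1000
Position = namedtuple('Position', ['x', 'y'])

def random_position():
    """A rover position drawn uniformly from the coordinate range."""
    return Position(randint(MIN_COORD, MAX_COORD), randint(MIN_COORD, MAX_COORD))

def move(pos, x, y):
    """Return a new Position, raising ValueError for out-of-range coordinates."""
    if not (MIN_COORD <= x <= MAX_COORD and MIN_COORD <= y <= MAX_COORD):
        raise ValueError('Invalid position: x{} y{}'.format(x, y))
    return Position(x, y)

print(move(random_position(), 234, 789))
```

The trade-off versus the class version: positions become immutable values you pass around, rather than state mutated inside a Rover object.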
{ "domain": "codereview.stackexchange", "id": 8625, "tags": "python, game, python-2.x, coordinate-system" }
How bright can we make a sun jar?
Question: A sun jar is an object that stores solar energy in a battery and then releases it during dark hours through an LED. Assume: a $65cm^2$ solar panel a 12h/12h light/dark cycle insolation of $2.61kWh/m^2/day$ perfectly efficient components (i.e. without violating entropy laws or other theoretical limits) light is emitted by an ideal "white" source How bright can we make the jar so that it shines for 12 hours? Useful links: Insolation Peukert's law Solar cell efficiency Luminous efficacy Answer: Use the ideal white source from the wiki link, which is stated at 251 lm/W. The insolation level is $2.61kWh/m^2d = \frac{2610}{24}W/m^2$, which across $65cm^2 (= 0.0065m^2)$ gives 0.7W. If everything were 100% efficient, then you'd have $0.7 \times 251 lm = 178 lm$. Now we derate on some maximum theoretical efficiencies. The biggest theoretical derating will be on the PV, and that will completely dwarf any loss on the maximum theoretical efficiency of a round trip into storage and back. For a single-junction n-p PV cell in unconcentrated sunlight, the maximum efficiency (Shockley–Queisser, DOI:10.1063/1.1736034) is 30%, giving you about 53 lm. By layering multiple junctions, you could theoretically get 42% (2 junctions, 74 lm), 49% (3 junctions, 87 lm), tending to 68% ($n\to\infty$, 121 lm) (DOI:10.1088/0022-3727/13/5/018). Now, if you are allowed to put a concentrating lens onto the $65 cm^2$ cell, so that the sun jar harvests light from a much larger area, then we can really go to town. From the second link, the maximum efficiency for concentrating PV, as the number of junctions tends to infinity, is 86.8%. So then you've got to find out what the maximum concentration could be without the whole lot bursting into flames ... that 86.8% is based on a concentration factor of 45,900, so it's a PV cell made of pure unobtainium, and I'll stop right there.
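The arithmetic above can be double-checked with a short sketch (the efficiencies are the Shockley–Queisser limits quoted in the answer; differences of ±1 lm from the answer's figures come down to rounding):

```python
# Luminous output of the sun jar under the answer's assumptions
LM_PER_W = 251.0           # ideal white source, lm/W
INSOLATION = 2.61e3 / 24   # 2.61 kWh/m^2/day averaged over 24 h, in W/m^2
AREA = 65e-4               # 65 cm^2 in m^2

watts = INSOLATION * AREA        # collected power, roughly 0.7 W
ideal_lm = watts * LM_PER_W      # luminous flux at 100% efficiency

# Derate by maximum theoretical PV efficiencies (1, 2, 3, infinitely many junctions)
for eff in (0.30, 0.42, 0.49, 0.68):
    print('{:.0%} -> {:.0f} lm'.format(eff, ideal_lm * eff))
```

Note the 12 h of collection and 12 h of emission cancel when everything is lossless, which is why the average insolation can be used directly.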
{ "domain": "physics.stackexchange", "id": 1398, "tags": "optics, entropy, renewable-energy, photoelectric-effect" }
Tic Tac Toe console game in C++
Question: For learning purposes I wrote a Tic Tac Toe game in an object-oriented manner. I have a question about storing players in the game class: players (CPU and human) are derived from an abstract Player class and stored in a vector as unique pointers to the common base class. Is there a more elegant way to achieve this? Also, feel free to point out things I could improve or do the wrong way. For convenience, here is a repo with the whole project: repo main.cpp #include "TicTacToe.h" #include <iostream> int main() { int playersAmount; std::cout << "0 - two CPU players" << std::endl << "1 - play with CPU" << std::endl << "2 - two players" << std::endl; std::cin >> playersAmount; TicTacToe::intro(); do { TicTacToe game(playersAmount); game.clearConsole(); game.printBoard(); while (game.isRunning()) { game.step(); }; char input; std::cout << "Play again?(y/n)"; std::cin >> input; if (input != 'y') { break; } } while (true); } TicTacToe.h #pragma once #include "Board.h" #include "Players.h" #include <array> #include <memory> #include <string> #include <vector> class TicTacToe { public: TicTacToe(const int numberOfHumanPlayers); static void intro(); static std::string getInputFromConsole(); static void clearConsole(); void printBoard() const; void step(); bool isRunning() const; void terminate(); private: bool m_running { true }; int m_currentPlayer { 0 }; std::vector<std::unique_ptr<Player>> m_players; Board m_board; void nextPlayer(); char returnPlayerSign(const int player) const; }; TicTacToe.cpp #include "TicTacToe.h" #include <iostream> #include <memory> #include <stdlib.h> #include <string> #include <vector> TicTacToe::TicTacToe(const int numberOfHumanPlayers) { switch (numberOfHumanPlayers) { case 0: m_players.push_back(std::make_unique<PlayerCPU>("CPU 1")); m_players.push_back(std::make_unique<PlayerCPU>("CPU 2")); break; case 1: m_players.push_back(std::make_unique<PlayerHuman>("Player")); m_players.push_back(std::make_unique<PlayerCPU>("CPU")); break; case 2: default:
m_players.push_back(std::make_unique<PlayerHuman>("Player 1")); m_players.push_back(std::make_unique<PlayerHuman>("Player 2")); } } void TicTacToe::step() { std::cout << "Player: " << m_players[m_currentPlayer]->getName() << std::endl; int selectedField; while (true) { selectedField = m_players[m_currentPlayer]->provideField(m_board); if (m_board.isMoveAllowed(selectedField)) { break; } else { std::cout << "Invalid move" << std::endl; } } m_board.takeFieldOnBoard(selectedField, returnPlayerSign(m_currentPlayer)); clearConsole(); printBoard(); if (m_board.isGameWon()) { std::cout << m_players[m_currentPlayer]->getName() << "(" << returnPlayerSign(m_currentPlayer) << ") won!" << std::endl; terminate(); return; } if (!m_board.areThereFreeFields()) { std::cout << "Game ended" << std::endl; terminate(); return; } nextPlayer(); } void TicTacToe::printBoard() const { m_board.printBoard(); } char TicTacToe::returnPlayerSign(const int player) const { if (player == 0) { return 'X'; } return 'O'; } void TicTacToe::nextPlayer() { m_currentPlayer += 1; m_currentPlayer %= 2; } void TicTacToe::clearConsole() { system("clear"); } void TicTacToe::intro() { std::cout << "Tic Tac Toe game" << std::endl; std::cout << "To make a move, enter number of field" << std::endl; } std::string TicTacToe::getInputFromConsole() { std::string input; std::cin >> input; return input; } bool TicTacToe::isRunning() const { return m_running; } void TicTacToe::terminate() { m_running = false; } Board.h #pragma once #include <array> #include <vector> class Board { public: Board(); bool isGameWon() const; bool isMoveAllowed(const int field) const; bool areThereFreeFields() const; const std::vector<int>& returnAllowedIds() const; void takeFieldOnBoard(const int field, const char sign); void printBoard() const; private: std::array<char, 9> m_board; std::vector<int> m_allowedFieldsIds; bool checkCol(const int) const; bool checkRow(const int) const; bool checkAllCols() const; bool checkAllRows() const; bool 
checkDiagonals() const; }; Board.cpp #include "Board.h" #include <algorithm> #include <iostream> Board::Board() { m_allowedFieldsIds.reserve(9); for (int i = 0; i < 9; i++) { m_board[i] = i + '0'; m_allowedFieldsIds.push_back(i); }; } const std::vector<int>& Board::returnAllowedIds() const { return m_allowedFieldsIds; } void Board::takeFieldOnBoard(const int field, const char sign) { m_board[field] = sign; m_allowedFieldsIds.erase(std::remove(m_allowedFieldsIds.begin(), m_allowedFieldsIds.end(), field), m_allowedFieldsIds.end()); } bool Board::isMoveAllowed(const int field) const { auto it = std::find(m_allowedFieldsIds.begin(), m_allowedFieldsIds.end(), field); if (it != m_allowedFieldsIds.end()) { return true; } return false; } bool Board::areThereFreeFields() const { return !m_allowedFieldsIds.empty(); } bool Board::isGameWon() const { if (checkAllCols()) { return true; } if (checkAllRows()) { return true; } if (checkDiagonals()) { return true; } return false; } bool Board::checkRow(const int row) const { if (m_board[row] == m_board[row + 1] && m_board[row + 1] == m_board[row + 2]) { return true; } return false; } bool Board::checkAllRows() const { for (int i = 0; i < 9; i += 3) { if (checkRow(i)) { return true; } } return false; } bool Board::checkCol(const int col) const { if (m_board[col] == m_board[col + 3] && m_board[col + 3] == m_board[col + 6]) { return true; } return false; } bool Board::checkAllCols() const { for (int i = 0; i < 3; i++) { if (checkCol(i)) { return true; } } return false; } bool Board::checkDiagonals() const { if (m_board[0] == m_board[4] && m_board[4] == m_board[8]) { return true; } if (m_board[2] == m_board[4] && m_board[4] == m_board[6]) { return true; } return false; } void Board::printBoard() const { for (int i = 0; i < 9; i += 3) { std::cout << m_board[i] << '|' << m_board[i + 1] << '|' << m_board[i + 2] << std::endl; if (i < 6) { std::cout << "_____" << std::endl; } } std::cout << std::endl; } Players.h #pragma once #include 
"Board.h" #include <array> #include <string> class Player { public: Player(const std::string& name) : m_name(name) {}; virtual int provideField(const Board& board) const = 0; const std::string& getName() const; private: const std::string m_name; }; // Human player class PlayerHuman : public Player { public: PlayerHuman(const std::string& name) : Player(name) {}; int provideField(const Board& board) const override; private: int askForInput() const; }; // CPU Player class PlayerCPU : public Player { public: PlayerCPU(const std::string& name) : Player(name) {}; int provideField(const Board& board) const override; int returnFirstAllowedField(const Board& board) const; int returnRandomField(const Board& board) const; }; Players.cpp #include "Players.h" #include <algorithm> #include <array> #include <chrono> #include <iostream> #include <iterator> #include <random> #include <thread> #include <vector> // Player const std::string& Player::getName() const { return m_name; } // PlayerHuman int PlayerHuman::provideField(const Board& board) const { return askForInput(); } int PlayerHuman::askForInput() const { int field; std::cout << "Field #"; std::cin >> field; return field; } // PlayerCPU int PlayerCPU::provideField(const Board& board) const { using namespace std::chrono_literals; std::this_thread::sleep_for(1000ms); return returnRandomField(board); return 0; } int PlayerCPU::returnFirstAllowedField(const Board& board) const { return board.returnAllowedIds()[0]; } int PlayerCPU::returnRandomField(const Board& board) const { static std::random_device rd; static std::mt19937 gen(rd()); std::uniform_int_distribution<> distr(0, board.returnAllowedIds().size() - 1); return board.returnAllowedIds()[distr(gen)]; } Answer: This is quite good! 
I see some thought has gone into organizing things into classes: you thought about const parameters and functions, passing by reference, #included local headers before the standard ones, used smart pointers, and you use C++'s random number generators correctly. Use '\n' instead of std::endl Prefer using '\n' instead of std::endl; the latter is equivalent to the former, but also forces the output to be flushed, which is usually unnecessary and might hurt performance. Avoid using system() Using system() has a huge overhead: it starts a shell, parses the command you give it, and then starts another process that runs /usr/bin/clear. It is also non-portable; on Windows you would have to call cls instead of clear. Finally, it is unsafe; depending on the user's environment and the way their shell has been configured, clear might not do what you expect it to do. Considering that /usr/bin/clear is written in C, you should be able to write C++ code yourself that clears the screen without having to call another program. On most operating systems, including the latest versions of Windows, you can just print an ANSI escape code. If you want to be even more portable, you can consider using a curses library. You could also consider not bothering to clear the screen; users will see past versions of the board, but it doesn't impact the game itself. Reading input When you are reading input from std::cin, you have to consider a few things. First, the user might enter things that are not valid. For example, when asking how many players there are, the user could enter "-1", "3", "two", and so on. If a valid integer is read, you only check if that integer is 0 or 1, and anything else you treat as if it was 2, which might result in unexpected behavior. Also consider that entering "two" will result in playersAmount being equal to 0, and will start a two CPU player game. Again, that is unexpected. Finally, there could be an error reading anything from std::cin at all.
For example, the terminal might be closed, or the input was redirected to something that doesn't allow reading, and so on. Again, ignoring this will result in unexpected behavior of your program. The correct thing to do is to check whether the input was read correctly. You can check after reading something if std::cin.good() is true. There are other operations you can use to find out if there was an error reading anything at all, or whether it couldn't convert the input to the desired type. In the latter case, you could consider asking the user to try again. In the case of an error you cannot recover from, just print an error message and terminate the program. Even if reading was successful, make sure that the value that was read is valid for your program; if not, you should probably ask the player to try again. Reading one line at a time You probably intend the player to write something and then press enter. That means they enter one line at a time. However, when you read something with std::cin >> variable, it doesn't read a whole line; instead it only reads one character, integer or word (depending on whether variable is a char, int or std::string). This can result in unexpected behavior if the user enters multiple things on one line. Consider reading in a whole line at a time into a std::string, and then parse the string. You can do this with std::getline(). For example: int playersAmount; while (true) { std::string line; if (!std::getline(std::cin, line)) { std::cerr << "Error reading input!\n"; return EXIT_FAILURE; } try { playersAmount = std::stoi(line); } catch (const std::exception&) { playersAmount = -1; } if (playersAmount >= 0 && playersAmount <= 2) { break; } std::cerr << "Enter a number between 0 and 2!\n"; } Storing the set of allowed fields You use a std::vector<int> to store the IDs of allowed fields. While this works, an even better way would be to use a std::bitset<9>. Tracking the current player You have the variable m_currentPlayer to track whose turn it is.
However, that information is actually also available in m_board.m_allowedFieldsIds: consider that if the number of allowed fields to place a mark in is odd, it's the first player's turn, and if it is even it is the second player's turn. After making a move, you'd have removed an ID from m_allowedFieldsIds, so it will already be updated to reflect that it is the next player's move. Not all functions need to be in a class In C++ you can have free functions that are not part of a class. One example is main() of course. Since clearing a console is not something that is specific to a tic-tac-toe game, I would move clearConsole() out of class TicTacToe, and make it a free function. You could put it in a separate file that contains console-related functions. getInputFromConsole() could also be put there. Storing players Players(cpu and human) are derived from virtual class players and stored in vector as unique pointers to common class. Is there more elegant way to achieve this? This is the right way to do it when using inheritance, which is perfectly fine in this case. Another way to do it would be to not use inheritance, but to store a vector of std::variants: class PlayerHuman {…}; // no inheritance from Player class PlayerCPU {…}; using Player = std::variant<PlayerHuman, PlayerCPU>; class TicTacToe { … std::vector<Player> m_players; … }; TicTacToe::TicTacToe(const int numberOfHumanPlayers) { … m_players.push_back(PlayerHuman("Player")); m_players.push_back(PlayerCPU("CPU")); … } That looks much more elegant, and no pointers are needed. However, accessing the players is not so simple. You cannot write: std::cout << "Player: " << m_players[m_currentPlayer].getName() << '\n'; Instead, you would have to use std::visit(): std::cout << "Player: " << std::visit([](auto& player) { return player.getName(); }, m_players[m_currentPlayer]) << '\n'; Having to write a visitor every time you need to do something with a player is going to be cumbersome and thus inelegant. 
However, in your code you actually only need to call std::visit() once: void TicTacToe::step() { std::visit([&](auto& player) { std::cout << "Player: " << player.getName() << '\n'; // All the rest of step() here … }, m_players[m_currentPlayer]); } Is this more elegant? You can decide for yourself. I think it doesn't matter much for this game. However, each way has its own pros and cons, and which one to choose will depend on the situation.
{ "domain": "codereview.stackexchange", "id": 44644, "tags": "c++, object-oriented, tic-tac-toe" }
Matching these matrices in R
Question: I have two matrices; I want to convert the row names of the first matrix to gene symbols by matching the ensembl IDs against the ensembl-to-symbol mapping in the second matrix > head(mat1[,1:2]) TCGA-L5-A4OG-11A-12R-A260-31 ENSG00000000003 1818 ENSG00000000005 0 ENSG00000000419 1436 ENSG00000000457 1175 ENSG00000000460 242 ENSG00000000938 536 TCGA-IC-A6RE-11A-12R-A336-31 ENSG00000000003 4596 ENSG00000000005 3 ENSG00000000419 751 ENSG00000000457 840 ENSG00000000460 205 ENSG00000000938 253 > > dim(mat1) [1] 56925 11 > > head(mat2) ensembl symbol ENSG00000274572 ENSG00000274572 ZYXP1 ENSG00000159840 ENSG00000159840 ZYX ENSG00000162378 ENSG00000162378 ZYG11B ENSG00000232242 ENSG00000232242 ZYG11AP1 ENSG00000203995 ENSG00000203995 ZYG11A ENSG00000070476 ENSG00000070476 ZXDC > > dim(mat2) [1] 36848 2 > How can I do that in R? Thank you Answer: Since they share the ensembl ID column, you can merge them, then assign the symbol column to the row names, then delete the symbol column. Something like: merged <- merge(mat1, mat2, by = 0) rownames(merged) <- merged[,'symbol'] merged[,'symbol'] <- NULL https://stat.ethz.ch/R-manual/R-devel/library/base/html/merge.html
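For illustration only, the same ensembl-ID-to-symbol join in Python/pandas (toy data standing in for mat1 and mat2; not part of the original R answer):

```python
import pandas as pd

# Toy stand-ins: mat1 has counts with ensembl IDs as the row index,
# mat2 maps ensembl ID -> gene symbol (symbols here are illustrative)
mat1 = pd.DataFrame({'sample1': [1818, 0], 'sample2': [4596, 3]},
                    index=['ENSG00000000003', 'ENSG00000000005'])
mat2 = pd.DataFrame({'ensembl': ['ENSG00000000003', 'ENSG00000000005'],
                     'symbol': ['TSPAN6', 'TNMD']})

# Join mat1's row index against mat2's ensembl column,
# then promote the gene symbol to the row index
merged = mat1.merge(mat2, left_index=True, right_on='ensembl')
merged = merged.set_index('symbol').drop(columns='ensembl')
print(merged)
```

As in the R version, rows of mat1 whose ID has no match in mat2 are dropped by the default inner join.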
{ "domain": "bioinformatics.stackexchange", "id": 920, "tags": "r, rna-seq, ensembl" }
What is the uncertainty principle?
Question: I looked on Wikipedia for information on the uncertainty principle, but after reading it I still had no idea. I know it has something to do with how many things you can hold at some spot for some amount of time (maybe?). This is inspired by this question. Answer: In classical physics you are supposed to be able to measure the coordinates and the velocity (really the momentum) of a mass with infinite precision at the same time. If you try this trick in the lab, you notice that that's not the case. Either your position or your momentum measurement or both will always show some non-trivial statistical fluctuations when you repeat your experiment many times. If you multiply the standard deviations of these fluctuations with each other, no experiment that you can ever perform yields a product that is smaller than a certain number. That is it.
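For reference, the bound the answer describes is the standard Kennard inequality relating the standard deviations of position and momentum (added here for illustration):

```latex
\sigma_x \, \sigma_p \;\geq\; \frac{\hbar}{2}
```

That "certain number" is thus $\hbar/2$: no matter how the experiment is prepared, the product of the two spreads cannot be made smaller.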
{ "domain": "physics.stackexchange", "id": 21708, "tags": "heisenberg-uncertainty-principle" }
Checking locked users using Identity 2.0
Question: I'm writing a theoretical MVC application to aid in learning more about ASP Identity 2. In real terms I'm new to ASP Identity as a whole but thought I'd jump in in this release as it's the default in new projects and everything I read about it seems to mostly suit my needs. There is one issue that I need to be able to overcome. That issue is locking out users. I want to allow administrators the facility to be able to lock out a user should they wish to restrict their access. From what I've read so far, there is a temporary lockout solution, for example when the user tries their password incorrectly x number of times for y number of minutes. This doesn't seem to be the recommended method for more long term solutions. For a longer term solution, I've added a Locked property to the ApplicationUser class in Entity Framework code first: public class ApplicationUser : IdentityUser { public string FirstName { get; set; } public string Surname { get; set; } public bool Locked { get; set; } [NotMapped] public string FullName { get { return FirstName + " " + Surname; } } } Getting to the point of my question though, which is, is my modified ActionResult Login method secure and efficient? I don't want to make unnecessary roundtrips to the database and I also don't want to unwillingly do anything insecure. public async Task<ActionResult> Login(LoginViewModel model, string returnUrl) { if (!ModelState.IsValid) { return View(model); } // MY LOCKED CHECK CODE: // Check if the user is locked out first // 1. Fail if it is // 2. If user isn't found then return invalid login attempt // 3.
If the user is found and it isn't locked out then proceed with the login as usual var user = UserManager.Users.Where(u => u.UserName == model.Email).SingleOrDefault(); if (user != null) { if (user.Locked) { return View("Lockout"); } } else { ModelState.AddModelError("", "Invalid login attempt."); return View(model); } // END MY CODE // This doesn't count login failures towards account lockout // To enable password failures to trigger account lockout, change to shouldLockout: true var result = await SignInManager.PasswordSignInAsync(model.Email, model.Password, model.RememberMe, shouldLockout: false); switch (result) { case SignInStatus.Success: return RedirectToLocal(returnUrl); case SignInStatus.LockedOut: return View("Lockout"); case SignInStatus.RequiresVerification: return RedirectToAction("SendCode", new { ReturnUrl = returnUrl, RememberMe = model.RememberMe }); case SignInStatus.Failure: default: ModelState.AddModelError("", "Invalid login attempt."); return View(model); } } Answer: Only focusing on this part var user = UserManager.Users.Where(u => u.UserName == model.Email).SingleOrDefault(); if (user != null) { if (user.Locked) { return View("Lockout"); } } else { ModelState.AddModelError("", "Invalid login attempt."); return View(model); } Are you aware that if Users contains more than one user with the same Email, the call to SingleOrDefault will throw an InvalidOperationException? If you are 100 percent sure that this won't ever happen then there won't be a problem. In its current state, this LINQ query will query the model for the Email property for each user. If you store it in a variable, the access will be much faster. Checking first whether user == null makes it more obvious what is happening. As I first looked over this code, I thought: hey, how can the code after the if..else ever be reached?
Applying this will lead to string email = model.Email; var user = UserManager.Users.Where(u => u.UserName == email).SingleOrDefault(); if (user == null) { ModelState.AddModelError("", "Invalid login attempt."); return View(model); } if (user.Locked) { return View("Lockout"); } This looks far better IMO and is more readable. Usually a username isn't checked in a case-sensitive way, so instead of u.UserName == email you should use the string.Equals() method. This method takes as its third parameter a StringComparison enum which defines how the comparison will take place. The best one for all cultures (a Turkish locale, for instance, can cause problems; see does-your-code-pass-turkey-test) is StringComparison.OrdinalIgnoreCase. Applying this will lead to string email = model.Email; var user = UserManager.Users.Where(u => string.Equals(u.UserName, email, StringComparison.OrdinalIgnoreCase)).SingleOrDefault(); if (user == null) { ModelState.AddModelError("", "Invalid login attempt."); return View(model); } if (user.Locked) { return View("Lockout"); }
{ "domain": "codereview.stackexchange", "id": 16218, "tags": "c#, security, asp.net-mvc" }
Confused about how to apply KMeans on my dataset with features extracted
Question: I am trying to apply a basic use of the scikit-learn KMeans clustering package, to create different clusters that I could use to identify a certain activity. For example, in my dataset below, I have different usage events (0,...,11), and each event has the wattage used and the duration. Based on the Wattage, Duration, and timeOfDay, I would like to cluster these into different groups to see if I can create clusters and hand-classify the individual activities of each cluster. I was having trouble with the KMeans package because I think my values needed to be in integer form. And then, how would I plot the clusters on a scatter plot? I know I need to put the original datapoints onto the plot, and then maybe I can separate them by color from the cluster? km = KMeans(n_clusters = 5) myFit = km.fit(activity_dataset) Wattage time_stamp timeOfDay Duration (s) 0 100 2015-02-24 10:00:00 Morning 30 1 120 2015-02-24 11:00:00 Morning 27 2 104 2015-02-24 12:00:00 Morning 25 3 105 2015-02-24 13:00:00 Afternoon 15 4 109 2015-02-24 14:00:00 Afternoon 35 5 120 2015-02-24 15:00:00 Afternoon 49 6 450 2015-02-24 16:00:00 Afternoon 120 7 200 2015-02-24 17:00:00 Evening 145 8 300 2015-02-24 18:00:00 Evening 65 9 190 2015-02-24 19:00:00 Evening 35 10 100 2015-02-24 20:00:00 Evening 45 11 110 2015-02-24 21:00:00 Evening 100 Edit: Here is the output from one of my runs of K-Means Clustering. How do I interpret the means that are zero? What does this mean in terms of the cluster and the math? print (waterUsage[clmns].groupby(['clusters']).mean()) water_volume duration timeOfDay_Afternoon timeOfDay_Evening \ clusters 0 0.119370 8.689516 0.000000 0.000000 1 0.164174 11.114241 0.474178 0.525822 timeOfDay_Morning outdoorTemp clusters 0 1.0 20.821613 1 0.0 25.636901 Answer: For clustering, your data must indeed be numeric. Moreover, since k-means uses Euclidean distance, having a categorical column is not a good idea.
Therefore you should also encode the column timeOfDay into three dummy variables. Lastly, don't forget to standardize your data. This might not be important in your case, but in general, you risk that the algorithm will be pulled in the direction with the largest values, which is not what you want. So I downloaded your data, put it into a .csv and made a very simple example. You can see that I am using a different dataframe for the clustering itself and then, once I retrieve the cluster labels, I add them to the previous one. Note that I omit the variable timestamp - since the value is unique for every record, it will only confuse the algorithm. import pandas as pd from scipy import stats from sklearn.cluster import KMeans import matplotlib.pyplot as plt import seaborn as sns df = pd.read_csv('C:/.../Dataset.csv',sep=';') #Make a copy of DF df_tr = df.copy() #Transform the timeOfDay to dummies df_tr = pd.get_dummies(df_tr, columns=['timeOfDay']) #Standardize clmns = ['Wattage', 'Duration','timeOfDay_Afternoon', 'timeOfDay_Evening', 'timeOfDay_Morning'] df_tr_std = stats.zscore(df_tr[clmns]) #Cluster the data kmeans = KMeans(n_clusters=2, random_state=0).fit(df_tr_std) labels = kmeans.labels_ #Glue back to original data df_tr['clusters'] = labels #Add the column into our list clmns.extend(['clusters']) #Let's analyze the clusters print(df_tr[clmns].groupby(['clusters']).mean()) This can tell us what the differences between the clusters are. It shows the mean value of each attribute per cluster. Looks like cluster 0 are evening people with high consumption, whilst 1 are morning people with small consumption. clusters Wattage Duration timeOfDay_Afternoon timeOfDay_Evening timeOfDay_Morning 0 225.000000 85.000000 0.166667 0.833333 0.0 1 109.666667 30.166667 0.500000 0.000000 0.5 You asked for visualization as well. This is tricky, because everything above two dimensions is difficult to read. So I put Duration against Wattage on a scatter plot and colored the dots based on cluster.
You can see that it looks quite reasonable, except the one blue dot there. #Scatter plot of Wattage and Duration sns.lmplot('Wattage', 'Duration', data=df_tr, fit_reg=False, hue="clusters", scatter_kws={"marker": "D", "s": 100}) plt.title('Clusters Wattage vs Duration') plt.xlabel('Wattage') plt.ylabel('Duration')
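To make the standardization step concrete, here is a pure-Python sketch (my own, not from the original answer) of what `stats.zscore` computes per column, using the Wattage and Duration values from the question. Without this step, the Wattage column (roughly 100–450) would dominate the Euclidean distances over the 0–1 dummy columns:

```python
import math

def zscore(values):
    """Standardize a list of numbers to zero mean and unit variance,
    mirroring what scipy.stats.zscore does column-wise (ddof=0)."""
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return [(v - mean) / std for v in values]

# Columns taken from the question's table
wattage = [100, 120, 104, 105, 109, 120, 450, 200, 300, 190, 100, 110]
duration = [30, 27, 25, 15, 35, 49, 120, 145, 65, 35, 45, 100]

wattage_std = zscore(wattage)
duration_std = zscore(duration)
```

After this transform every column has mean 0 and standard deviation 1, so no single feature dominates the distance computation.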
{ "domain": "datascience.stackexchange", "id": 1436, "tags": "python, clustering, k-means, unsupervised-learning" }
Dipole moments of pyrrole and furan
Question: Why do pyrrole and furan have dipoles oriented in different directions? Answer: Both pyrrole and furan have a lone pair of electrons in a p-orbital, this lone pair is extensively delocalized into the conjugated pi framework to create an aromatic 6 pi electron system. Where pyrrole and furan significantly differ is that, in pyrrole there is an $\ce{N-H}$ bond lying in the plane of the ring and directed away from the ring whereas in furan, there is a full lone pair of electrons in roughly the same position. The localized lone pair of electrons pointing away from the ring has a very significant effect on the dipole vector and is enough to cause the observed reversal in dipole moment direction between furan and pyrrole.
{ "domain": "chemistry.stackexchange", "id": 12675, "tags": "organic-chemistry, dipole, heterocyclic-compounds" }
mapviz: No transform between /wgs84 to odom
Question: I am running robot_localization with an IMU and a GPS. I can visualize the pose estimate in mapviz and it looks good. Now I want to set up the plug-in Tile_map (with MapQuest) but I am getting the error message "No transform between /wgs84 and odom" and I am not sure what to do. Thanks in advance Originally posted by fandrade on ROS Answers with karma: 81 on 2016-05-12 Post score: 2 Answer: Ok, I "solved" it by using rosrun swri_transform_util initialize_origin.py I said "solved" because my tf tree looks weird (it is not a tree). It looks like: imu_link -> base_link far_field -> origin -> far_field_identity map -> map_identity odom Originally posted by fandrade with karma: 81 on 2016-05-12 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 24637, "tags": "navigation, odometry" }
Is the equality of Bloom filters analogous to set equivalence?
Question: I have two multisets $A$, $B$ where $A \subseteq B$. Using these two sets, we construct two Bloom filters $BF(A), BF(B)$; both using bitsets of size $n$ with the same $k$ hash functions. What's the probability that: $A \not\equiv B$ but $BF(A) = BF(B)$ Notes: Since $A$ and $B$ are multisets, they might have duplicate elements. However, duplications should not affect set equivalence or subset relation. Due to lack of notation (on my side) please assume deduplication when checking for subsets or equivalence. By Bloom filter equality, I mean the bitsets being equal. I think we could assume hash functions are random and are independent of the elements of the sets. If needed, Jaccard index (ratio of intersection over union as an indicator of set similarity) could be approximated as $J(A,B) = \frac{|A|}{|B|}$ (or via MinHash) Answer: It is possible to find an upper bound on the probability of a collision in the fingerprint. Suppose the Bloom filter uses $k$ hash functions and maps into a bit array of size $m$. The case $k=1$ is much easier to analyze explicitly, so I'll start with that case. If $A\ne B$, then since $A \subseteq B$ there must be some element that is in $B$ but not $A$, let's say $a$. $a$ must hash to some bit of the array. With probability at least $(1-1/m)^{|B|}$, none of the elements of $A$ map to this bit position (since $|A| \le |B|$, using $|B|$ in the exponent only loosens the bound), and in this case, we necessarily have $BF(A)\ne BF(B)$. So, if $A \ne B$, the probability that $BF(A)=BF(B)$ is at most $1 - (1-1/m)^{|B|}$. When $|B|$ is small compared to $m$, this is roughly $|B|/m$. The calculations get messier when $k>1$. If you're satisfied with an approximate upper bound, you can use the fact that when $k \ll \sqrt{m}$, $a$ will typically hash to $k$ different bits ($k$ different positions in the array). The probability that at least one of the elements of $B$ maps to a particular bit is roughly $|B|k/m$, assuming $|B| \ll m/k$ (following similar reasoning to that above). 
So, the probability that all of those $k$ bits are mapped to by some element of $B$ is roughly $(|B|k/m)^k$. So, if $A \ne B$, the probability that $BF(A)=BF(B)$ is at most about $(|B|k/m)^k$, under the conditions that $|B| \ll m/k$ and $k \ll \sqrt{m}$.
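A quick Monte Carlo sanity check of the $k=1$ case (my own sketch, not part of the original answer): build $A$ by dropping one element from $B$, hash every element uniformly to one of $m$ bits, and count how often the two bit arrays coincide. The observed rate should sit below the $1-(1-1/m)^{|B|}$ bound:

```python
import random

# Monte Carlo sanity check of the k = 1 analysis: A is B with one
# element removed, every element hashes uniformly to one of m bits,
# and we count how often BF(A) and BF(B) end up identical.
def collision_rate(m, size_b, trials=20000, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        bit_of = [rng.randrange(m) for _ in range(size_b)]
        bits_b = set(bit_of)          # bits set by all of B
        bits_a = set(bit_of[1:])      # A = B minus its first element
        if bits_a == bits_b:
            hits += 1
    return hits / trials

m, size_b = 64, 8
observed = collision_rate(m, size_b)
bound = 1 - (1 - 1 / m) ** size_b     # the k = 1 upper bound from above
```

For these parameters the exact collision probability is $1-(1-1/m)^{|B|-1} \approx 0.104$, comfortably under the bound of about $0.118$.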
{ "domain": "cs.stackexchange", "id": 19594, "tags": "sets, streaming-algorithm, equality, bloom-filters" }
Improve search keyword
Question: I've been given the following PHP code: if(empty($search)){ $_SESSION['keyword_exists'] = "no"; }else{ if(isset($_SESSION['old_keyword']) && $_SESSION['old_keyword'] != $search){ $_SESSION['keyword_exists'] = "no"; } } if(isset($_REQUEST['limitstart']) && $_REQUEST['limitstart'] > 0){ if (!empty($search)) { if(isset($_SESSION['keyword_exists']) && $_SESSION['keyword_exists'] === "yes"){ //intentionally left blank? }else{ $limitstart = "0"; $_SESSION['keyword_exists'] = "yes"; $_SESSION['old_keyword'] = $search; } } }else{ $limitstart = "0"; } I'm not entirely sure what this code does, as I'm fairly new to PHP and webdev. The one comment is mine. Is there any way to clean this up so that the next guy to see this is less confused than I am? Answer: Let's start with the basics: $_SESSION: This is a global array that holds all your session information. $_REQUEST: This is a global array that contains the contents of $_GET, $_POST and $_COOKIE. Read more on PHP's superglobals. if( empty($search) ) { $_SESSION['keyword_exists'] = "no"; } else { if( isset($_SESSION['old_keyword']) && $_SESSION['old_keyword'] != $search ) { $_SESSION['keyword_exists'] = "no"; } } The outer if depends on whether the $search variable is empty or not. Empty in context could mean: "" (an empty string) 0 (0 as an integer) 0.0 (0 as a float) "0" (0 as a string) NULL FALSE array() (an empty array) var $var; (a variable declared, but without a value in a class) If $search is empty, $_SESSION['keyword_exists'] is set to "no" (can't read the author's mind, but my choice would be false, not a string). From the naming we can assume that $search holds (or not) the keyword(s) of a search. 
Now if that is not empty: if( isset($_SESSION['old_keyword']) && $_SESSION['old_keyword'] != $search ) { $_SESSION['keyword_exists'] = "no"; } The first clause, isset($_SESSION['old_keyword']) essentially checks if there's an "old_keyword" index in the $_SESSION array (and whether it's null or not), that's a pretty typical check for arrays. The second check, that executes if and only if the first one passes, checks whether what's in $_SESSION['old_keyword'] is not the same as what's in $search. I'm assuming that the author had some reason for that, but can't imagine what that reason is. Summarizing what happens here, if: $search is empty, or $search is not the same as $_SESSION['old_keyword'] ...then $_SESSION['keyword_exists'] = "no". A perhaps simpler way to write that would be: if( empty($search) || ( isset($_SESSION['old_keyword']) && $_SESSION['old_keyword'] != $search ) ) { $_SESSION['keyword_exists'] = "no"; } Moving on to the next part: if( isset($_REQUEST['limitstart']) && $_REQUEST['limitstart'] > 0 ) { ... } else { $limitstart = "0"; } The clause is very similar to the one discussed previously, only this time other than checking if $_REQUEST has a "limitstart" index, the author also checks that the value is larger than zero. That's an unsafe check, because at this point we don't know what the type of the value in $_REQUEST['limitstart'] is, and if it's anything other than a number, there will be automatic type juggling involved, and the check is completely unreliable. From the name and context, I'm assuming the variable should hold an integer (if anything) that limits the search. If the variable doesn't hold anything, the limit is set to zero ($limitstart = "0"), curiously using a string form of zero. I'd rewrite that check as: $limitstart = 0; if( isset($_REQUEST['limitstart']) && ( is_int($_REQUEST['limitstart']) || ctype_digit($_REQUEST['limitstart']) ) && $_REQUEST['limitstart'] > 0 ) { ... 
} is_int() checks whether the value is an integer and ctype_digit() whether it's a string that only contains digits (thus an integer in string form), either of the two is acceptable for the following check, $_REQUEST['limitstart'] > 0. I've also moved $limitstart = 0; out of the check, I'm initializing it to zero and will override it if and only if there's a need. But let's see what happens if the check is true: if ( !empty($search) ) { if( isset($_SESSION['keyword_exists']) && $_SESSION['keyword_exists'] === "yes" ) { //intentionally left blank? } else { $limitstart = "0"; $_SESSION['keyword_exists'] = "yes"; $_SESSION['old_keyword'] = $search; } } This is obviously incomplete. The outer check is simple enough, it's whether $search is empty or not. If it's empty, nothing happens, if it's not, the inner check is on whether $_SESSION has a "keyword_exists" index, and if its value is "yes". If that's true, something happens, but who knows what? I'm assuming one of the things that would happen would be to set a proper value to $limitstart. Anyways, if the check is false: $limitstart remains zero, $_SESSION['keyword_exists'] for some reason becomes "yes", and $_SESSION['old_keyword'] becomes $search. Unfortunately, I have no idea why. Anyways, my full rewrite would be: if( empty($search) || ( isset($_SESSION['old_keyword']) && $_SESSION['old_keyword'] != $search ) ) { $_SESSION['keyword_exists'] = "no"; } $limitstart = 0; if( isset($_REQUEST['limitstart']) && ( is_int($_REQUEST['limitstart']) || ctype_digit($_REQUEST['limitstart']) ) && $_REQUEST['limitstart'] > 0 ) { if ( !empty($search) ) { if( isset($_SESSION['keyword_exists']) && $_SESSION['keyword_exists'] === "yes" ) { //intentionally left blank? } else { $_SESSION['keyword_exists'] = "yes"; $_SESSION['old_keyword'] = $search; } } } The code is equivalent, and will work (?) if you replace it in your script. Hope it clarifies things a bit. 
The overall quality of the code is bad, there are some hints of an amateur developer there, and you shouldn't really worry that you didn't grasp what the code does, since you are unfamiliar with the language. It's an incomplete and mostly poorly written piece of code, good luck with it ;)
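The `is_int() || ctype_digit()` idea from the rewrite — accept a genuine integer or a string consisting purely of digits, and refuse anything that would rely on implicit type juggling — carries over to other languages too. A rough Python analogue (my sketch; the function name is mine):

```python
def is_nonneg_int_param(value):
    """Roughly mirrors PHP's `is_int($v) || ctype_digit($v)` check:
    accept genuine non-negative ints, or strings made purely of
    decimal digits, and reject everything else."""
    if isinstance(value, bool):        # bool is a subclass of int in Python
        return False
    if isinstance(value, int):
        return value >= 0
    # str.isdecimal() rejects signs, spaces, floats and exotic digit glyphs
    return isinstance(value, str) and value.isdecimal()

results = {
    "digit-string": is_nonneg_int_param("42"),
    "int": is_nonneg_int_param(7),
    "float-string": is_nonneg_int_param("3.5"),
    "negative": is_nonneg_int_param("-1"),
    "empty": is_nonneg_int_param(""),
}
```

The point is the same as in the PHP rewrite: validate the type explicitly before doing a numeric comparison, instead of letting the language coerce whatever arrived in the request.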
{ "domain": "codereview.stackexchange", "id": 1972, "tags": "php, search" }
Why do capacitors connected in parallel have the same potential?
Question: I think I am missing some fundamental piece of information or the way I am learning this model is wrong, because the general answer to this question is that: Well, they have the same potential because the equivalent capacitor is the sum of the capacitors... When I try to find out why the equivalent capacitance is the sum of the capacitances, the general answer is that: Well, the equivalent capacitor is the sum of the capacitors because the potential difference between their plates is the same... So let's assume there are 2 capacitors that are not connected $C_1=2F$ & $C_2=1F$ $Q_1=10C$ & $Q_2=10C$ Therefore $V_1=5V$ & $V_2=10V$ Now let's say we connect them in parallel. The total charge $Q_t=20C$. The capacitance of the individual capacitor has not changed (or has it?) $C_1=2F$ & $C_2=1F$? According to the general explanation, since the + plates are attached together and - plates are attached together, the charges rearrange themselves such that $V_t = \frac {20}{3}V$. Even though each capacitor is inherently the same as it was before. I am assuming now if we were able to measure the charge on each capacitor we would see $Q_1=\frac{40}{3}C$ & $Q_2=\frac{20}{3}C$? Is that right? So another way to ask the original question is why do the charges distribute in such a convenient way? Answer: The answer is much simpler than that. See for example the figure: All the black lines at the (+) end of each capacitor are connected to potential A with no circuit elements in between (shorted). Therefore the (+) end of each capacitor must be at potential A. Likewise the (–) end of each capacitor is shorted to potential B. $\Delta V$ between A and B is 12 V in this case. Therefore, each capacitor also has this $\Delta V$ across it. To answer your final question, if multiple capacitors are initially at different voltages, and are then connected as per the figure, the charges and/or current redistributes itself to equalize voltage, because that is what voltage is. 
It is the same as connecting two tanks of water initially at different levels. The water will redistribute until the water level (i.e. pressure) is the same in both. Voltage is exactly analogous to pressure in this case.
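Running the numbers from the question confirms this picture. Once the plates are shorted together both capacitors must sit at the same voltage, so the total charge splits in proportion to the capacitances:

```python
# Values from the question: C1 = 2 F and C2 = 1 F each start with 10 C,
# then are connected in parallel (+ to + and - to -).
C1, C2 = 2.0, 1.0
Q_total = 10.0 + 10.0

# Equal voltage across both: Q1 = C1*V, Q2 = C2*V, and Q1 + Q2 = Q_total
V = Q_total / (C1 + C2)       # 20/3 V, as in the question
Q1 = C1 * V                   # 40/3 C
Q2 = C2 * V                   # 20/3 C
```

So the "convenient" distribution is simply the unique split that leaves no voltage difference between the capacitors — any other split would drive a current until this one is reached.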
{ "domain": "physics.stackexchange", "id": 91576, "tags": "homework-and-exercises, electric-circuits, voltage, capacitance" }
Regarding change in temperature in Bomb Calorimeter
Question: In a Bomb Calorimeter, we measure the change in temperature of water $\Delta T$. But when it comes to calculating the change in internal energy we use $$\Delta U = C_{cal}\Delta T$$ where $C_{cal}$ is the heat capacity of the whole calorimeter. Why is that? What I mean is that we're measuring the change in temperature of the water and assuming that heat from the reaction transferred completely to the water, so shouldn't we apply $$\Delta U = C_{water}\Delta T$$ since we're calculating the change in temperature of the water? Or are we assuming that the change in temperature of the water is the change in temperature of every element in the calorimeter? Answer: Basically, a bomb calorimeter consists of a small cup to contain the sample, oxygen, a stainless steel bomb, water, a stirrer, a thermometer, the dewar or insulating container (to prevent heat flow from the calorimeter to the surroundings), and an ignition circuit connected to the bomb. (*) The whole device known as the calorimeter consists of a bomb and a water bath; they are not two separate entities. So, $C_{cal}$ actually refers to the combined heat capacity of the steel vessel and the water. The construction of the calorimeter is such that the thermal mass of the calorimeter system completely absorbs the heat of reaction (**). *: This Wikipedia **: Refer 29:27 of this video
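A toy numeric example (the numbers are mine, purely illustrative) of why using only the water's heat capacity would understate the heat absorbed: the bomb and the bath reach the same final temperature, so the reaction heated both of them.

```python
# Hypothetical calibration values, in kJ/K
C_water = 8.0               # water bath alone
C_bomb = 2.0                # steel bomb, cup, stirrer, thermometer
C_cal = C_water + C_bomb    # heat capacity of the whole calorimeter assembly

delta_T = 2.5               # K, the measured temperature rise

q_absorbed = C_cal * delta_T        # heat taken up by bomb + water together
q_water_only = C_water * delta_T    # what you'd infer ignoring the bomb
```

Here ignoring the steel would miss `C_bomb * delta_T` = 5 kJ of the released energy, which is exactly why the calibrated constant $C_{cal}$ of the whole assembly appears in $\Delta U = C_{cal}\Delta T$.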
{ "domain": "chemistry.stackexchange", "id": 14617, "tags": "thermodynamics" }
Is there an explanation for action-reaction law?
Question: If a body $A$ exerts a force on body $B$, $B$ exerts a reaction force on $A$. Is there an explanation of why this happens? Answer: Yes, the explanation is the conservation of momentum. In Newtonian mechanics the third law produces conservation of momentum in mechanical systems. Later on you will see cases (matter interacting with fields) where Newton’s 3rd law is violated in some sense, but in these cases the conservation of momentum still holds (the fields have momentum). Conservation of momentum (and its associated spatial translation symmetry) has no explanation for why it is true. We have lots of solid experimental evidence that it is true, but no explanation why our universe behaves that way instead of some other way. This is what makes conservation/symmetry laws fundamental explanations. There are no further explanations in physics, just evidence that makes us believe this explanation.
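The link between the third law and momentum conservation can be made concrete with a toy simulation (my sketch; the force profile is arbitrary): as long as the two bodies feel exactly opposite forces, each body's momentum changes but the total never does.

```python
# Two bodies interact only through an action-reaction force pair:
# at every instant body 1 feels F(t) and body 2 feels -F(t).
m1, m2 = 2.0, 3.0            # kg (illustrative values)
v1, v2 = 1.0, -0.5           # m/s
dt, steps = 0.001, 1000

p_initial = m1 * v1 + m2 * v2
for n in range(steps):
    F = 4.0 * (0.5 - n * dt)   # some time-varying internal force, in N
    v1 += (F / m1) * dt        # body 1: dp1 = +F dt
    v2 += (-F / m2) * dt       # body 2: dp2 = -F dt (third law)
p_final = m1 * v1 + m2 * v2
```

The individual velocities change during the run, yet `p_final` equals `p_initial` to within rounding, because the momentum transferred to one body is always exactly removed from the other.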
{ "domain": "physics.stackexchange", "id": 91069, "tags": "newtonian-mechanics, forces, momentum, conservation-laws" }
Rotational mechanics $u$-substitution
Question: In classical physics, when we try to get the force as a function of $r$, why do we substitute $$r=\frac{1}{u}?$$ I don't get it, what is the point of it? Is the substitution arbitrary? I am looking for a mathematical explanation of it. I understand it makes the equation solvable, otherwise we would have some messy chain differentiations. But I want it explained in mathematical terms, perhaps like a proof. That way I wouldn't feel like I am putting in a formula without understanding it. Answer: It has to do with orbit trajectories and angular momentum. The conservation of angular momentum says $$ mr^2\frac{d\theta}{dt} = \ell. $$ Suppose we want to find $r(\theta)$ instead of $r(t)$ in our orbit equations. $dr/dt = (dr/d\theta)(d\theta/dt)$, which means $$ \frac{dr}{dt} = \frac{\ell}{mr^2}\frac{dr}{d\theta} = -\frac{\ell}{m}\frac{d}{d\theta}\left(\frac{1}{r}\right) $$ \begin{multline} \frac{d^2 r}{dt^2} = \frac{d^2 r}{d\theta^2}\left(\frac{d\theta}{dt}\right)^2 + \frac{dr}{d\theta}\frac{d}{dt}\left(\frac{d\theta}{dt}\right) = \frac{\ell^2}{m^2r^4}\frac{d^2r}{d\theta^2}-\frac{2\ell}{mr^3}\frac{dr}{d\theta}\frac{dr}{dt} = \\ = \frac{\ell^2}{m^2r^4}\frac{d^2r}{d\theta^2}-\frac{2\ell^2}{m^2r^5}\left(\frac{dr}{d\theta}\right)^2 = -\frac{\ell^2}{m^2 r^2}\frac{d^2}{d\theta^2}\left(\frac{1}{r}\right) \end{multline} So we see that the orbit trajectories are more easily expressed in terms of the $\theta$ derivatives of $u= 1/r$. In particular, the radial force equation $m\ddot{r}-\ell^2/(mr^3) = F(r)$ becomes $$ \frac{d^2u}{d\theta^2} + u = -\frac{m}{\ell^2u^2}F\left(\frac{1}{u}\right), $$ and the conservation of energy becomes $$ \left(\frac{du}{d\theta}\right)^2 + u^2 - \frac{2m}{\ell^2}V\left(\frac{1}{u}\right) = \frac{2mE}{\ell^2}, $$ where $-dV/dr = F(r)$. In the common case of an inverse-square law force, the force term in the force equation is constant and the potential term in the energy equation is linear, which are easily solved exactly. 
Using $u = 1/r$ has the added advantage that it is always bounded, as the bodies never have zero separation, while $r$ can become unbounded in scattering-type problems where the bodies do not form bound orbits.
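A small numeric check (my own sketch, not part of the original answer) of why the substitution pays off: for an attractive inverse-square force the right-hand side of the force equation becomes a positive constant $K$, so the equation collapses to $u'' + u = K$, whose solution is the conic $u(\theta) = K + A\cos\theta$ (starting at perihelion). Integrating the ODE with a plain RK4 stepper reproduces that closed form:

```python
import math

K = 1.0          # constant right-hand side of u'' + u = K (inverse-square case)
A = 0.3          # amplitude; the eccentricity is e = A/K

def deriv(state):
    """State is (u, du/dtheta); returns its theta-derivative."""
    u, up = state
    return (up, K - u)

def rk4_step(state, h):
    def add(s, k, f):
        return (s[0] + f * k[0], s[1] + f * k[1])
    k1 = deriv(state)
    k2 = deriv(add(state, k1, h / 2))
    k3 = deriv(add(state, k2, h / 2))
    k4 = deriv(add(state, k3, h))
    return (state[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            state[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

state = (K + A, 0.0)       # perihelion: u(0) = K + A, u'(0) = 0
h, n = 0.001, 6284         # integrate over roughly one full revolution
for _ in range(n):
    state = rk4_step(state, h)

theta = n * h
u_exact = K + A * math.cos(theta)   # the conic solution
error = abs(state[0] - u_exact)
```

The numerical solution tracks the analytic conic to high precision, which is exactly the simplification the $u = 1/r$ substitution buys: a linear constant-coefficient equation in place of the nonlinear equation for $r(t)$.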
{ "domain": "physics.stackexchange", "id": 52980, "tags": "classical-mechanics, rotational-dynamics, orbital-motion" }
Debounce and throttle tasks using Swift Concurrency
Question: There are many debouncer and throttle implementations created using Grand Central Dispatch, and even one built into Combine. I wanted to create one using the new Swift Concurrency feature in Swift 5.5+. Below is what I put together with help from others: actor Limiter { enum Policy { case throttle case debounce } private let policy: Policy private let duration: TimeInterval private var task: Task<Void, Never>? init(policy: Policy, duration: TimeInterval) { self.policy = policy self.duration = duration } nonisolated func callAsFunction(task: @escaping () async -> Void) { Task { switch policy { case .throttle: await throttle(task: task) case .debounce: await debounce(task: task) } } } private func throttle(task: @escaping () async -> Void) { guard self.task?.isCancelled ?? true else { return } Task { await task() } self.task = Task { try? await sleep() self.task?.cancel() self.task = nil } } private func debounce(task: @escaping () async -> Void) { self.task?.cancel() self.task = Task { do { try await sleep() guard !Task.isCancelled else { return } await task() } catch { return } } } private func sleep() async throws { try await Task.sleep(nanoseconds: UInt64(duration * 1_000_000_000)) } } I created tests to go with it, but testThrottler and testDebouncer are failing randomly which means there's some race condition somewhere or my assumptions in the tests are incorrect: final class LimiterTests: XCTestCase { func testThrottler() async throws { // Given let promise = expectation(description: "Ensure first task fired") let throttler = Limiter(policy: .throttle, duration: 1) var value = "" var fulfillmentCount = 0 promise.expectedFulfillmentCount = 2 func sendToServer(_ input: String) { throttler { value += input // Then switch fulfillmentCount { case 0: XCTAssertEqual(value, "h") case 1: XCTAssertEqual(value, "hwor") default: XCTFail() } promise.fulfill() fulfillmentCount += 1 } } // When sendToServer("h") sendToServer("e") sendToServer("l") sendToServer("l") 
sendToServer("o") await sleep(2) sendToServer("wor") sendToServer("ld") wait(for: [promise], timeout: 10) } func testDebouncer() async throws { // Given let promise = expectation(description: "Ensure last task fired") let limiter = Limiter(policy: .debounce, duration: 1) var value = "" var fulfillmentCount = 0 promise.expectedFulfillmentCount = 2 func sendToServer(_ input: String) { limiter { value += input // Then switch fulfillmentCount { case 0: XCTAssertEqual(value, "o") case 1: XCTAssertEqual(value, "old") default: XCTFail() } promise.fulfill() fulfillmentCount += 1 } } // When sendToServer("h") sendToServer("e") sendToServer("l") sendToServer("l") sendToServer("o") await sleep(2) sendToServer("wor") sendToServer("ld") wait(for: [promise], timeout: 10) } func testThrottler2() async throws { // Given let promise = expectation(description: "Ensure throttle before duration") let throttler = Limiter(policy: .throttle, duration: 1) var end = Date.now + 1 promise.expectedFulfillmentCount = 2 func test() { // Then XCTAssertLessThan(.now, end) promise.fulfill() } // When throttler(task: test) throttler(task: test) throttler(task: test) throttler(task: test) throttler(task: test) await sleep(2) end = .now + 1 throttler(task: test) throttler(task: test) throttler(task: test) await sleep(2) wait(for: [promise], timeout: 10) } func testDebouncer2() async throws { // Given let promise = expectation(description: "Ensure debounce after duration") let debouncer = Limiter(policy: .debounce, duration: 1) var end = Date.now + 1 promise.expectedFulfillmentCount = 2 func test() { // Then XCTAssertGreaterThan(.now, end) promise.fulfill() } // When debouncer(task: test) debouncer(task: test) debouncer(task: test) debouncer(task: test) debouncer(task: test) await sleep(2) end = .now + 1 debouncer(task: test) debouncer(task: test) debouncer(task: test) await sleep(2) wait(for: [promise], timeout: 10) } private func sleep(_ duration: TimeInterval) async { await 
Task.sleep(UInt64(duration * 1_000_000_000)) } } I'm hoping for help in seeing anything I missed in the Limiter implementation, or maybe if there's a better way to do debounce and throttle with Swift Concurrency. Answer: The problem is the use of a nonisolated function to initiate an asynchronous update of an actor-isolated property. (I'm surprised the compiler even permits that.) Not only is it misleading, but actors also feature reentrancy, and you introduce all sorts of unintended races. In the latter part of this answer, below, I offer my suggestions on what I would change in your implementation. But, nowadays, the right solution is to use the debounce and throttle from Apple’s Swift Async Algorithms library. For example: import AsyncAlgorithms final class AsyncAlgorithmsTests: XCTestCase { // a stream of individual keystrokes with a pause after the first five characters func keystrokes() -> AsyncStream<String> { AsyncStream<String> { continuation in Task { continuation.yield("h") continuation.yield("e") continuation.yield("l") continuation.yield("l") continuation.yield("o") try await Task.sleep(seconds: 2) continuation.yield(",") continuation.yield(" ") continuation.yield("w") continuation.yield("o") continuation.yield("r") continuation.yield("l") continuation.yield("d") continuation.finish() } } } // A stream of the individual keystrokes aggregated together as strings (as we // want to search the whole string, not for individual characters) // // e.g. // h // he // hel // hell // hello // ... // // As the `keystrokes` sequence has a pause after the fifth character, this will // also pause after “hello” and before “hello,”. 
We can use that pause to test // debouncing and throttling func strings() -> AsyncStream<String> { AsyncStream<String> { continuation in Task { var string = "" for await keystroke in keystrokes() { string += keystroke continuation.yield(string) } continuation.finish() } } } func testDebounce() async throws { let debouncedSequence = strings().debounce(for: .seconds(1)) // usually you'd just loop through the sequence with something like // // for await string in debouncedSequence { // sendToServer(string) // } // but I'm just going to directly await the yielded values and test the resulting array let result: [String] = await debouncedSequence.reduce(into: []) { $0.append($1) } XCTAssertEqual(result, ["hello", "hello, world"]) } func testThrottle() async throws { let throttledSequence = strings().throttle(for: .seconds(1)) let result: [String] = await throttledSequence.reduce(into: []) { $0.append($1) } XCTAssertEqual(result, ["h", "hello,"]) } } // MARK: - Task.sleep(seconds:) extension Task where Success == Never, Failure == Never { /// Suspends the current task for at least the given duration /// in seconds. /// /// If the task is canceled before the time ends, /// this function throws `CancellationError`. /// /// This function doesn't block the underlying thread. 
public static func sleep(seconds duration: TimeInterval) async throws { try await Task.sleep(nanoseconds: UInt64(duration * .nanosecondsPerSecond)) } } // MARK: - TimeInterval extension TimeInterval { static let nanosecondsPerSecond = TimeInterval(NSEC_PER_SEC) } Given that you were soliciting a “code review”, if you really wanted to write your own “debounce” and “throttle” and did not want to use Async Algorithms for some reason, my previous answer, below, addresses some observations on your implementation: You can add an actor-isolated function to Limiter: func submit(task: @escaping () async -> Void) { switch policy { case .throttle: throttle(task: task) case .debounce: debounce(task: task) } } Note, I am not using callAsFunction as an actor-isolated function as it looks like (in Xcode 13.2.1, for me, at least) that this causes a segmentation fault in the compiler. Anyway, you can then modify your tests to use the submit actor-isolated function, e.g.: // test throttling as user enters “hello, world” into a text field func testThrottler() async throws { // Given let promise = expectation(description: "Ensure first task fired") let throttler = Limiter(policy: .throttle, duration: 1) var fulfillmentCount = 0 promise.expectedFulfillmentCount = 2 var value = "" func accumulateAndSendToServer(_ input: String) async { value += input await throttler.submit { [value] in // Then switch fulfillmentCount { case 0: XCTAssertEqual(value, "h") case 1: XCTAssertEqual(value, "hello,") default: XCTFail() } promise.fulfill() fulfillmentCount += 1 } } // When await accumulateAndSendToServer("h") await accumulateAndSendToServer("e") await accumulateAndSendToServer("l") await accumulateAndSendToServer("l") await accumulateAndSendToServer("o") try await Task.sleep(seconds: 2) await accumulateAndSendToServer(",") await accumulateAndSendToServer(" ") await accumulateAndSendToServer("w") await accumulateAndSendToServer("o") await accumulateAndSendToServer("r") await 
accumulateAndSendToServer("l") await accumulateAndSendToServer("d") wait(for: [promise], timeout: 10) } As an aside: In debounce, the test for isCancelled is redundant. The Task.sleep will throw an error if the task was canceled. As a matter of convention, Apple uses operation for the name of the closure parameters, presumably to avoid confusion with Task instances. I would change the Task to be a Task<Void, Error>?. Then you can simplify debounce to: func debounce(operation: @escaping () async -> Void) { task?.cancel() task = Task { defer { task = nil } try await Task.sleep(seconds: duration) await operation() } } When throttling network requests for user input, you generally want to throttle the network requests, but not the accumulation of the user input. So I have pulled the value += input out of the throttler/debouncer. I also use a capture list of [value] to make sure that we avoid race conditions between the accumulation of user input and the network requests. FWIW, this is my rendition of Limiter: actor Limiter { private let policy: Policy private let duration: TimeInterval private var task: Task<Void, Error>? 
init(policy: Policy, duration: TimeInterval) { self.policy = policy self.duration = duration } func submit(operation: @escaping () async -> Void) { switch policy { case .throttle: throttle(operation: operation) case .debounce: debounce(operation: operation) } } } // MARK: - Limiter.Policy extension Limiter { enum Policy { case throttle case debounce } } // MARK: - Private utility methods private extension Limiter { func throttle(operation: @escaping () async -> Void) { guard task == nil else { return } task = Task { defer { task = nil } try await Task.sleep(seconds: duration) } Task { await operation() } } func debounce(operation: @escaping () async -> Void) { task?.cancel() task = Task { defer { task = nil } try await Task.sleep(seconds: duration) await operation() } } } Which uses these extensions // MARK: - Task.sleep(seconds:) extension Task where Success == Never, Failure == Never { /// Suspends the current task for at least the given duration /// in seconds. /// /// If the task is canceled before the time ends, /// this function throws `CancellationError`. /// /// This function doesn't block the underlying thread. 
public static func sleep(seconds duration: TimeInterval) async throws { try await Task.sleep(nanoseconds: UInt64(duration * .nanosecondsPerSecond)) } } // MARK: - TimeInterval extension TimeInterval { static let nanosecondsPerSecond = TimeInterval(NSEC_PER_SEC) } And the following tests: final class LimiterTests: XCTestCase { // test throttling as user enters “hello, world” into a text field func testThrottler() async throws { // Given let promise = expectation(description: "Ensure first task fired") let throttler = Limiter(policy: .throttle, duration: 1) var fulfillmentCount = 0 promise.expectedFulfillmentCount = 2 var value = "" func accumulateAndSendToServer(_ input: String) async { value += input await throttler.submit { [value] in // Then switch fulfillmentCount { case 0: XCTAssertEqual(value, "h") case 1: XCTAssertEqual(value, "hello,") default: XCTFail() } promise.fulfill() fulfillmentCount += 1 } } // When await accumulateAndSendToServer("h") await accumulateAndSendToServer("e") await accumulateAndSendToServer("l") await accumulateAndSendToServer("l") await accumulateAndSendToServer("o") try await Task.sleep(seconds: 2) await accumulateAndSendToServer(",") await accumulateAndSendToServer(" ") await accumulateAndSendToServer("w") await accumulateAndSendToServer("o") await accumulateAndSendToServer("r") await accumulateAndSendToServer("l") await accumulateAndSendToServer("d") wait(for: [promise], timeout: 10) } // test debouncing as user enters “hello, world” into a text field func testDebouncer() async throws { // Given let promise = expectation(description: "Ensure last task fired") let debouncer = Limiter(policy: .debounce, duration: 1) var value = "" var fulfillmentCount = 0 promise.expectedFulfillmentCount = 2 func accumulateAndSendToServer(_ input: String) async { value += input await debouncer.submit { [value] in // Then switch fulfillmentCount { case 0: XCTAssertEqual(value, "hello") case 1: XCTAssertEqual(value, "hello, world") default: XCTFail() } 
promise.fulfill() fulfillmentCount += 1 } } // When await accumulateAndSendToServer("h") await accumulateAndSendToServer("e") await accumulateAndSendToServer("l") await accumulateAndSendToServer("l") await accumulateAndSendToServer("o") try await Task.sleep(seconds: 2) await accumulateAndSendToServer(",") await accumulateAndSendToServer(" ") await accumulateAndSendToServer("w") await accumulateAndSendToServer("o") await accumulateAndSendToServer("r") await accumulateAndSendToServer("l") await accumulateAndSendToServer("d") wait(for: [promise], timeout: 10) } func testThrottler2() async throws { // Given let promise = expectation(description: "Ensure throttle before duration") let throttler = Limiter(policy: .throttle, duration: 1) var end = Date.now + 1 promise.expectedFulfillmentCount = 2 func test() { // Then XCTAssertLessThanOrEqual(.now, end) promise.fulfill() } // When await throttler.submit(operation: test) await throttler.submit(operation: test) await throttler.submit(operation: test) await throttler.submit(operation: test) await throttler.submit(operation: test) try await Task.sleep(seconds: 2) end = .now + 1 await throttler.submit(operation: test) await throttler.submit(operation: test) await throttler.submit(operation: test) try await Task.sleep(seconds: 2) wait(for: [promise], timeout: 10) } func testDebouncer2() async throws { // Given let promise = expectation(description: "Ensure debounce after duration") let debouncer = Limiter(policy: .debounce, duration: 1) var end = Date.now + 1 promise.expectedFulfillmentCount = 2 func test() { // Then XCTAssertGreaterThanOrEqual(.now, end) promise.fulfill() } // When await debouncer.submit(operation: test) await debouncer.submit(operation: test) await debouncer.submit(operation: test) await debouncer.submit(operation: test) await debouncer.submit(operation: test) try await Task.sleep(seconds: 2) end = .now + 1 await debouncer.submit(operation: test) await debouncer.submit(operation: test) await 
debouncer.submit(operation: test) try await Task.sleep(seconds: 2) wait(for: [promise], timeout: 10) } }
{ "domain": "codereview.stackexchange", "id": 42789, "tags": "swift, concurrency" }
Car simulation to demonstrate screen wrapping
Question: I have been interested in writing some code that allows an object to wrap around the screen. It turned out to be much simpler than I was expecting, so I kept working and changed the white box that I had to a car that can rotate and move in the direction that it is facing by using basic trigonometry. I then expanded it further to include an image of a track underneath, so that the car would move faster when on the track and slower when on the grass. However, I'm not sure if the method of checking the red value of the pixel that the car is on is the best solution. So really I'd like to know if there are any better ways of testing for that, but also a review of the rest of the code to see if it could be improved at all. import sys, pygame from pygame.locals import * from math import * pygame.init() WIDTH = 800 HEIGHT = 600 SCREEN = pygame.display.set_mode((WIDTH,HEIGHT)) FPS = pygame.time.Clock() pygame.display.set_caption('Screen Wrapping') track = pygame.image.load('Track.png').convert() def on_track(sprite): '''Tests to see if car is on the track''' if sprite.x > 1 and sprite.x < WIDTH - 1 and sprite.y > 1 and sprite.y < HEIGHT - 1: if track.get_at((int(sprite.x), int(sprite.y))).r == 163 or track.get_at((int(sprite.x), int(sprite.y))).r == 0 or track.get_at((int(sprite.x), int(sprite.y))).r == 255: return True return False class Car(object): def __init__(self, start_pos = (73, 370), start_angle = 90, image = 'Car.png'): '''Initialises the Car object''' self.x = start_pos[0] self.y = start_pos[1] self.angle = start_angle self.speed = 0 self.image = pygame.transform.scale(pygame.image.load(image).convert_alpha(), (48, 48)) self.rotcar = pygame.transform.rotate(self.image, self.angle) def move(self, forward_speed = 1, rearward_speed = 0.2): '''Moves the car when the arrow keys are pressed''' keys = pygame.key.get_pressed() #Move the car depending on which keys have been pressed if keys[K_a] or keys[K_LEFT]: self.angle += self.speed if keys[K_d] or 
keys[K_RIGHT]: self.angle -= self.speed if keys[K_w] or keys[K_UP]: self.speed += forward_speed if keys[K_s] or keys[K_DOWN]: self.speed -= rearward_speed #Keep the angle between 0 and 359 degrees self.angle %= 359 #Apply friction if on_track(self): self.speed *= 0.95 else: self.speed *= 0.75 #Change the position of the car self.x += self.speed * cos(radians(self.angle)) self.y -= self.speed * sin(radians(self.angle)) def wrap(self): '''Wrap the car around the edges of the screen''' self.wrap_around = False if self.x < 0 : self.x += WIDTH self.wrap_around = True if self.x + self.rotcar.get_width() > WIDTH: self.x -= WIDTH self.wrap_around = True if self.y < 0: self.y += HEIGHT self.wrap_around = True if self.y + self.rotcar.get_height() > HEIGHT: self.y -= HEIGHT self.wrap_around = True if self.wrap_around: SCREEN.blit(self.rotcar, self.rotcar.get_rect(center = (self.x, self.y))) self.x %= WIDTH self.y %= HEIGHT def render(self): '''Renders the car on the screen''' self.rotcar = pygame.transform.rotate(self.image, self.angle) SCREEN.blit(self.rotcar, self.rotcar.get_rect(center = (self.x, self.y))) def main(): car = Car() while True: #Blit the track to the background SCREEN.blit(track, (0, 0)) #Test if the game has been quit for event in pygame.event.get(): if event.type == QUIT: pygame.quit() sys.exit() if event.type == KEYDOWN: if event.key == K_ESCAPE: pygame.quit() sys.exit() car.move() car.wrap() car.render() pygame.display.update() FPS.tick(30) if __name__ == '__main__': main() Here are the images that I've used in the game Car.png: Track.png: Answer: In the on_track function you can just check if the r value is in the tuple (163, 0, 255). It also looks like you can remove this line if sprite.x > 1 and sprite.x < WIDTH - 1 and sprite.y > 1 and sprite.y < HEIGHT - 1: to check if the pos is in the game area (or can it be outside?). 
def on_track(sprite): """See if car is on the track.""" color = track.get_at((int(sprite.x), int(sprite.y))) return color.r in (163, 0, 255) In render just blit the image at (self.x, self.y): SCREEN.blit(self.rotcar, (self.x, self.y)) I recommend loading the images globally, not in the __init__ method of the classes, otherwise the image has to be loaded again and again when you create a new instance. Don't use star imports, since they make code harder to read and can cause bugs if they override duplicate names/variables. from pygame.locals import * can be used, but it's better to avoid other star imports.
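As a side note on the wrap method in the question (this example is not part of the original answer): Python's % operator already returns a value in [0, WIDTH) even for negative operands, which is why the single self.x %= WIDTH line handles wrapping off both edges:

```python
# Illustration (independent of pygame) of why one modulo handles screen
# wrapping in both directions: Python's % always returns a result with the
# sign of the divisor, so negative positions wrap to the right edge.
WIDTH = 800
positions = [-10, 0, 795, 810]          # off left edge, on screen, off right edge
wrapped = [x % WIDTH for x in positions]
print(wrapped)  # [790, 0, 795, 10]
```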
{ "domain": "codereview.stackexchange", "id": 25429, "tags": "python, python-3.x, pygame" }
What are the axes in the structure of an atom?
Question: When learning about the structure of atoms, I learnt that there are orbitals oriented along certain axes. What does it mean to be oriented along the axes? What is the reference? Also, when learning coordination chemistry, I learn that the ligands approach from the +ve and -ve x, y, and z axes, and hence the energy of different orbitals changes due to repulsion. But why can't those approach between the axes? Does the atom align itself with respect to the approaching ligands? Also, there should be no preference for z axis over the other two, right? Why is there a $d_{z^2}$ orbital, but no $d_{x^2}$ or $d_{y^2}$ orbitals, and instead a $d_{x^2-y^2}$ orbital? Edit: Added another question (This question may belong to chemistry SE. If so, please inform me, and I will remove it from here. I thought there is enough physics for it to belong here.) Answer: What is the reference? The reference is the reference. Just like in any other physics problem, you choose it, usually for maximum convenience (ease of use). There is no 'absolute' reference point. For the calculation of the orbitals of the hydrogen atom a spherical coordinate system is used. But why can't those approach between the axes? They do sometimes and the phenomenon is the cause of many coloured transition metal (d-block) complexes. For an introduction, see my post here (scroll down to A partial explanation of the colours of transition cation complexes) on the colour of $\text{Ti }+3$ complexes. I hope this helps.
{ "domain": "physics.stackexchange", "id": 73901, "tags": "atomic-physics, atoms" }
MRI K-space to image: why track frequencies in two dimensions?
Question: While trying to better understand how an MRI goes from k-space to an image, I came across this wonderful website that explains how you would represent an image as a collection of rows of pixels where each row of pixels is represented as a sum of waves of greyscale intensity. First you select a single row from an image. Then you apply the Fourier transform to represent the row as a collection of waves. What I don't understand is, if we can get the frequencies for each row and accurately reconstruct what a row of the image looks like via frequency encoding, and if we already know where each row is in relation to the other rows (position of the blue line), why do we need to do the same basic thing along a different dimension (the y-axis using phase encoding)? Isn't it sufficient to simply stack the rows to get the full image? Answer: The reality (as in physical reality, the phenomenon) is that a pixel's "value" is determined both by what is happening along the X dimension and the Y dimension (in k-space). If you want to reconstruct an image you have to do it from two spatial sinusoidal waves. This is represented in the $f[m,n] \cdot e^{-j 2 \pi (u m + v n)}$ part of the DFT. This is the product that we sum along the $u$ and $v$ directions. Notice here that to obtain one $u,v$ value you need to evaluate sinusoids in both the $u$ and $v$ directions. And vice versa of course, meaning that the grey level value of one pixel is decomposed into the coefficients for both sinusoids along the $u$ and $v$ directions. If you only run one of them, you get only HALF the story. If you reconstruct an image from rows, you synthesize grey level variation from just one direction. You know how a pixel's value varies with respect to its left and right neighbours but not its top and bottom neighbours. Here is a mental experiment: Take an image and run DFTs along the rows (that is, the horizontal direction, as per the recipe that motivated this question).
Now take the original image and add 42 to the pixels of the rows of the upper half (this looks like a step in the vertical direction). What is the effect of this? You are only introducing a DC offset to the row DFTs; other than this, the rest of the spectrum is exactly the same. You can choose to get even more adventurous in that vertical direction and modulate the pixels by sinusoids. They will go completely amiss. Why? Because that modulation along the vertical direction only introduces some "disturbance" to the DC component of the horizontal direction. It is impossible to pick up anything else unless you "check" for it, by evaluating the DFT along the vertical dimension too. And you can see this happening in $F[u,v] = \sum_m \sum_n f[m,n] e^{-j 2 \pi (u m + v n)}$ because the sums are nested, as well as when you apply the DFT twice: you first apply it on the rows (now you know how a pixel varies with respect to its left-right neighbours) and then you apply it along the columns of the row DFTs (now you know how a pixel varies with respect to its top and bottom neighbours). Hope this (and to an extent this) helps.
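The nested-sum argument above can be checked numerically (a NumPy sketch, not part of the original answer): the 2D DFT is exactly a row-wise DFT followed by a column-wise DFT, and the "+42 on the upper half" experiment only disturbs the DC (u = 0) coefficient of each row:

```python
import numpy as np

# The 2D DFT is separable: FFT along rows, then along columns, equals fft2.
rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
two_pass = np.fft.fft(np.fft.fft(img, axis=1), axis=0)
assert np.allclose(two_pass, np.fft.fft2(img))

# Mental experiment from the text: add 42 to the upper half of the image.
# The row-wise DFTs only change in their u = 0 (DC) coefficient.
img2 = img.copy()
img2[:4] += 42
rows_a = np.fft.fft(img, axis=1)
rows_b = np.fft.fft(img2, axis=1)
assert np.allclose(rows_a[:, 1:], rows_b[:, 1:])    # all non-DC terms identical
assert not np.allclose(rows_a[:, 0], rows_b[:, 0])  # only the DC term moved
```

A vertical modulation is invisible to row-only DFTs for the same reason: it never leaves the u = 0 column until you also transform along the columns.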
{ "domain": "dsp.stackexchange", "id": 9050, "tags": "image-processing, fourier-transform" }
How do/did we figure out that planets move in orbits?
Question: I've learned that planets move in orbits around the Sun, but I really don't know how I would come to this conclusion myself. I've only seen planets in the sky a couple times (knowingly), and I am curious how we know for certain today that planets really move in orbits around the Sun (i.e., rather than moving but not around the Sun or moving around the Sun in a shape that is not a regular path). I've also heard that people in the past knew about orbits even when they thought that Earth was at the center of the solar system. How did they figure this out in their times with their technology? Answer: "I've also heard that people in the past knew about orbits even when they thought that Earth was at the center of the solar system. How did they figure this out in their times with their technology?" The same celestial objects (stars, planets, the Moon) could be seen every year. So, people figured out there was a pattern to it. At first, geocentrism was prevalent due to prejudice and the stars "orbiting" the Earth in a clean manner (no strange effects). If you stare up at night, stars move in a circle with the center approximately at the Pole Star. This can be mistaken for the stars moving around the Earth. Objects like visible planets (with retrograde motions) were relatively small in number — we have the inner solar system plus Jupiter and Saturn, so people gave plausible explanations for these. Ptolemy theorized that each planet orbits around a "ghost point" (this is called the epicycle theory). "how we know for certain today that planets really move in orbits around the Sun (i.e., rather than moving but not around the Sun or moving around the Sun in a shape that is not a regular path)."
The heliocentric model came up because of various smaller discoveries:
- Jupiter has moons, so not all celestial bodies orbit the Earth
- Venus has full phases, but in the geocentric model only a small subset of the phases should be visible
- Theories like epicycles were not as accurate as they were expected to be, and people started realizing this over time
With all this in mind, the simple solution of heliocentrism explains everything in a much cleaner fashion, without leaving much doubt in mind. Kepler's laws make much cleaner predictions. Remember that people had catalogs of astronomical data to verify things with. Later on, once Newton came up with his theory of gravitation, Kepler's laws made even more sense (though I believe that Newton formulated his theory with some inspiration from Kepler's laws). I'm pretty sure that there were on-Earth experiments to verify the inverse-square law as well. Nowadays, sending probes to space and getting a different perspective has confirmed this with 100% certainty for us. More or less.
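The claim that "Kepler's laws make much cleaner predictions" is easy to check yourself (a sketch, not part of the original answer; the orbital data below are standard textbook values): Kepler's third law says T²/a³ is the same constant for every planet, and in units of years and astronomical units that constant is 1.

```python
# Kepler's third law check: T^2 / a^3 ~ 1 for all planets when the period T
# is in years and the semi-major axis a is in astronomical units.
planets = {             # (a [AU], T [years]), standard textbook values
    "Mercury": (0.387, 0.241),
    "Venus":   (0.723, 0.615),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.862),
    "Saturn":  (9.537, 29.457),
}
ratios = {name: T**2 / a**3 for name, (a, T) in planets.items()}
assert all(abs(r - 1) < 0.02 for r in ratios.values())
```

No epicycle model ties the period to the orbit size with a single rule like this, which is part of why heliocentrism won on the strength of the catalogued data alone.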
{ "domain": "astronomy.stackexchange", "id": 73, "tags": "solar-system, planet, orbit" }
XOR Statement in integer programming
Question: How can I convert an XOR statement into linear constraints for integer programming? The expression is $(x_1 \geq 1)$ XOR $(x_2 \geq 1)$ where $x_1$ and $x_2$ are integers. It means that if $x_1 \geq 1$ then $x_2 = 0$ and vice versa. I started to linearize the statement by: $(x_1 \geq 1)$ XOR $(x_2 \geq 1) = (x_1 \geq 1 \ \text{and} \ x_2 \leq 0) \ \text{or} \ (x_1 \leq 0 \ \text{and} \ x_2 \geq 1)$ but I don't know how to continue; I would like to follow the big-M method but I'm going around in circles... I need it to solve this kind of linear program, where if $x_1\geq 1$ then $x_2=0$ and vice versa: \begin{equation} \begin{aligned} \min \quad & a_1x_1 + a_2x_2 + a_3x_3\\ \textrm{s.t.} \quad & x_1 + x_2 + x_3 = 200\\ &x_1 \leq 100\\ &x_2 \leq 100\\ &x_3 \leq 100\\ &x_1 \geq 1 \quad \text{xor} \quad x_2 \geq 1 \\ &x_i \in \mathbb{N}, \quad i \in \{1, 2, 3 \} \\ &a_i \in \mathbb{R}, \quad i \in \{1, 2, 3 \} \\ \end{aligned} \end{equation} Thanks a lot ! :) Answer: For binary variables you could just write $x_1 = 1 - x_2$, but I see that in your case $x_1,x_2 \in \{1,2,3\}$. You can still use the same trick once you force a boolean variable $y_i$ to be 1 if and only if $x_i \ge 1$: $$ \begin{align*} 3 y_1 &\ge x_1 \\ y_1 &\le x_1 \\[6pt] 3 y_2 &\ge x_2 \\ y_2 &\le x_2 \\[6pt] y_1 &= 1-y_2 \\[6pt] y_1, y_2 &\in \{0,1\} \end{align*} $$
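The answer's linearization can be sanity-checked by brute force (a sketch, not part of the original answer; M is the big-M upper bound on the $x_i$: the answer uses M = 3, and with the question's bounds one would take M = 100):

```python
# Brute-force check that the linearization
#   M*y1 >= x1,  y1 <= x1,  M*y2 >= x2,  y2 <= x2,  y1 = 1 - y2,  y_i binary
# has a feasible (y1, y2) exactly when (x1 >= 1) XOR (x2 >= 1) holds.
# M must be an upper bound on the x_i (3 in the answer, 100 in the question).
def feasible(x1, x2, M):
    return any(
        M * y1 >= x1 and y1 <= x1 and M * y2 >= x2 and y2 <= x2 and y1 == 1 - y2
        for y1 in (0, 1)
        for y2 in (0, 1)
    )

for M in (3, 100):
    for x1 in range(M + 1):
        for x2 in range(M + 1):
            assert feasible(x1, x2, M) == ((x1 >= 1) != (x2 >= 1))
```

The two inequalities per variable are exactly the "force $y_i = 1$ iff $x_i \ge 1$" gadget: $M y_i \ge x_i$ pushes $y_i$ up when $x_i \ge 1$, and $y_i \le x_i$ pulls it down when $x_i = 0$.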
{ "domain": "cs.stackexchange", "id": 17543, "tags": "integer-programming" }
Is there any ros node that allows to move shapes in RVIZ with a phantom omni device?
Question: Hi, I would like to know if there is any ros node that allows to move shapes in RVIZ with a phantom omni device? Thank you! Originally posted by masihec on ROS Answers with karma: 31 on 2015-12-08 Post score: 0 Answer: I'm unaware of something that provides this functionality out-of-the-box, but it would be trivial to write a node that accomplishes what you want. For example, if you use @danepowell's phantom_omni package, the driver publishes a /joint_states topic that contains the joint angles of the Omni. If you also start a robot_state_publisher as seen in his example launch file then the current geometric description of the Omni will be available via tf. All you then have to do is write a ROS node that looks up the transform from a "control" frame on the Omni (e.g. the /stylus frame) to some reference frame (e.g. the /base frame). Using this transform data, you then can correspondingly move a visualization_msgs/Marker to represent your shape. Originally posted by jarvisschultz with karma: 9031 on 2015-12-08 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 23174, "tags": "ros, markers.rviz" }
Quantum Field Theory with (2,2) metric
Question: Does someone know a reference which treats QFT in a space with the following non-Lorentzian signature: $g_{\mu\nu}=\text{diag}(-1,-1,1,1)$? I'm interested in basic stuff like the shape of the scalar propagator and Feynman rules in such a space. Answer: Meddling with the spacetime dimensions and/or signature is usually much more troublesome than it appears. On Wikipedia you can find a rough discussion, but the first thing that comes to mind is that the Huygens principle is only valid in spatial dimensions that are odd and greater than or equal to 3. Now, regarding your 2+2 spacetime, the situation is much worse. Let's try the easiest possible field theory, namely the massless scalar field. The equation should then be $(-\partial_t^2-\partial_u^2+\partial_x^2+\partial_y^2)\phi=0$, where I named the extra timelike dimension $u$. In the usual field theory in 3+1 we start with the classical solutions of the equation (in this case it would be just the wave equation), write the general solution as a sum of Fourier terms $\sum_k a_k e^{i(kx-\omega t)}$ and "promote" the coefficients to ladder operators $\hat{a}_k$ and their adjoints. What about in 2+2? We should try the same. Now, as explained in this answer on our math counterpart, there is a theorem stating that ultrahyperbolic PDEs, like the one that concerns you, are not Hadamard well-posed, meaning that either the solution does not exist, or it's not unique, or it is not stable. Violating the first two conditions invalidates the Fourier expansion, and violating stability means that no meaningful perturbation theory exists, even at the classical level. Stability is also a problem regarding initial conditions. Failure of Hadamard well-posedness implies that if you change the boundary conditions by a small value then the full solution is completely different from the initial one. This invalidates everything about physics, since it makes it impossible to deal with the finite accuracy of experiments.
Existence for the free 2+2 case is easy from separation of variables, although I cannot say the same for uniqueness, since I have not proven, nor seen proven, completeness of the basis thus obtained. Nevertheless, even if uniqueness could be established, we would then certainly violate stability, and then we would not be able to perform perturbation theory as usual. These are the reasons why it makes no sense to look at 2+2 spacetimes, even if only from a mathematical point of view.
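To make the instability concrete (an illustration added here, not part of the original answer), insert a plane-wave ansatz into the massless 2+2 equation above:

```latex
% Plane-wave mode analysis of the ultrahyperbolic ("2+2") wave equation
\phi = e^{i(\omega t + \nu u - k_x x - k_y y)}
\quad\Longrightarrow\quad
(-\partial_t^2 - \partial_u^2 + \partial_x^2 + \partial_y^2)\,\phi
  = \left(\omega^2 + \nu^2 - k_x^2 - k_y^2\right)\phi = 0 ,
```

so $\omega = \pm\sqrt{k_x^2 + k_y^2 - \nu^2}$. Whenever $\nu^2 > k_x^2 + k_y^2$ the frequency is imaginary and the mode grows like $e^{|\omega| t}$, with a growth rate that is unbounded as $\nu \to \infty$: arbitrarily small, arbitrarily smooth data can blow up arbitrarily fast, which is exactly the failure of Hadamard stability.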
{ "domain": "physics.stackexchange", "id": 14043, "tags": "quantum-field-theory, spacetime, time, resource-recommendations, spacetime-dimensions" }
QM probability to go from a point to another (Zee)
Question: In Zee's QFT in a Nutshell book at p. 10-11 it is piecewise said that: In quantum mechanics, for a Hamiltonian $\hat{H}=\hat{p}^2/2m$, the amplitude to propagate from a point $q_j$ to a point $q_{j+1}$ in time $\delta t$ is: $$ \langle q_{j+1}|e^{-i\delta t(\hat{p}^2/2m)}|q_j\rangle = \left(\frac{-i2\pi m}{\delta t}\right)^{1/2} e^{i\delta t (m/2) [(q_{j+1}-q_j)/\delta t]^2} $$ Because of this, I am led to think that the probability density to go from a point $q_j$ to a point $q_{j+1}$ in time $\delta t$ is: $$ |\langle q_{j+1}|e^{-i\delta t(\hat{p}^2/2m)}|q_j\rangle|^2 = \frac{2\pi m}{\delta t} $$ which doesn't seem to make any sense because it doesn't depend on $q_j$ and $q_{j+1}$. What is going on here? Answer: A good question. The answer is that the states $|q_i\rangle$ are not normalized to unity; instead they have $\langle q_i|q_j\rangle = \delta(q_i-q_j)$. As a result you can't get the probability in the usual way. If you start in a position eigenstate $|q_i\rangle$, your momentum uncertainty is infinite, and you can get arbitrarily far away from the initial point in an arbitrarily short time --- this is why the "probability" you get from squaring Tony Zee's formula is independent of how far you go. What you can do with his formula is to compute $$ \psi(q_2,t) =\int dq_1\, K(q_2,q_1,t)\, \psi(q_1,0), $$ (where $K$ is his matrix element) which gives the amplitude to evolve (without any interactions) from a normalized wavepacket $\psi(q,t=0)$ centered about some point to the more spread-out $\psi(q,t)$. The prefactor $\sqrt{-i2\pi m/\delta t}$ is arranged so that the normalization is preserved by the evolution.
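The last point can be checked numerically (a sketch, not part of the original answer; it evolves the packet with the exact free-particle phase in momentum space, with $\hbar = m = 1$, rather than convolving with the kernel directly):

```python
import numpy as np

# Free evolution of a normalized Gaussian wavepacket under H = p^2/2m
# (hbar = m = 1), applied as the exact phase exp(-i k^2 t / 2) in momentum
# space. The norm is preserved while the packet spreads out.
N, L = 2048, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

psi0 = (2 / np.pi) ** 0.25 * np.exp(-x**2)      # normalized: integral |psi|^2 = 1
t = 3.0
psi_t = np.fft.ifft(np.exp(-1j * k**2 * t / 2) * np.fft.fft(psi0))

norm0 = np.sum(np.abs(psi0) ** 2) * dx
norm_t = np.sum(np.abs(psi_t) ** 2) * dx
width0 = np.sum(x**2 * np.abs(psi0) ** 2) * dx  # <x^2> at t = 0 (about 0.25)
width_t = np.sum(x**2 * np.abs(psi_t) ** 2) * dx
assert abs(norm0 - 1) < 1e-9 and abs(norm_t - 1) < 1e-9  # normalization preserved
assert width_t > 10 * width0                             # packet has spread
```

The evolution is unitary, so the norm stays exactly 1 while $\langle x^2\rangle$ grows: the amplitude spreads out but its total probability never changes, which is the answer's point about the prefactor.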
{ "domain": "physics.stackexchange", "id": 55521, "tags": "quantum-mechanics, quantum-field-theory" }
Apache Spark alternatives for local compute
Question: I am creating a relatively large hobby project in Scala that needs a few ML algorithms for text classification into topics. My dataset is not huge, it is < 500,000 items with dimensionality 5 (2 dimensions are free-form text). From what I've started with on Spark, it is heavily geared toward distributed computing and other production-related concerns but it has a nice ML library. Is it worth using Spark if I only plan to run my project on a local dev machine, or is there a more appropriate library out there for me in Scala? Something like scikit-learn, but in Scala. Answer: You're right, Spark is intended to scale in a distributed computing environment, but it absolutely performs well locally. When running locally Spark will still 'distribute' processing across local Executors; you have options to control the number of CPU cores the Spark job can use, and how many cores each Executor can use. You can get a lot out of using Spark locally, and Spark makes it very easy to scale up to a cluster of machines (e.g. on AWS or Google Cloud) when your single machine can't handle the tasks and it becomes necessary. I'd also check out SMILE as an alternative to Spark MLlib in Scala.
{ "domain": "datascience.stackexchange", "id": 4755, "tags": "apache-spark, scala" }
Calling an appropriate enum method depending on what type is passed
Question: I have this Enum from which I am calling the appropriate execute method based on what type of enum (eventType) is passed. public enum EventType { EventA { @Override public Map<String, Map<String, String>> execute(Map<String, String> eventMapHolder) { if (eventMapHolder.isEmpty()) { return ImmutableMap.of(); } String clientId = eventMapHolder.get("clientId"); Map<String, String> clientInfoHolder = getClientInfo(clientId); eventMapHolder.putAll(clientInfoHolder); return ImmutableMap.<String, Map<String, String>>builder().put(EventA.name(), eventMapHolder) .build(); } }, EventB { @Override public Map<String, Map<String, String>> execute(Map<String, String> eventMapHolder) { if (eventMapHolder.isEmpty()) { return ImmutableMap.of(); } return ImmutableMap.<String, Map<String, String>>builder().put(EventB.name(), eventMapHolder) .build(); } }, EventC { @Override public Map<String, Map<String, String>> execute(Map<String, String> eventMapHolder) { if (eventMapHolder.isEmpty()) { return ImmutableMap.of(); } String clientId = eventMapHolder.get("clientId"); Map<String, String> clientInfoHolder = getClientInfo(clientId); eventMapHolder.putAll(clientInfoHolder); return ImmutableMap.<String, Map<String, String>>builder().put(EventC.name(), eventMapHolder) .build(); } }, EventD { @Override public Map<String, Map<String, String>> execute(Map<String, String> eventMapHolder) { if (eventMapHolder.isEmpty()) { return ImmutableMap.of(); } Map<String, Map<String, String>> holder = new HashMap<>(); String clientId = eventMapHolder.get("clientId"); Map<String, String> clientInfoHolder = getClientInfo(clientId); eventMapHolder.putAll(clientInfoHolder); holder.put(EventD.name(), eventMapHolder); Map<String, String> processInfoHolder = getProcessInfo(clientId); holder.put(EventE.name(), processInfoHolder); return Collections.unmodifiableMap(holder); } }, EventE { @Override public Map<String, Map<String, String>> execute(Map<String, String> eventMapHolder) { return 
ImmutableMap.of(); } }; public abstract Map<String, Map<String, String>> execute(Map<String, String> eventMapHolder); public Map<String, String> getClientInfo(final String clientId) { // code to populate the map and return it } public Map<String, String> getProcessInfo(final String clientId) { // code to populate the map and return it } } For example: If I get "EventA", then I am calling its execute method. Similarly, if I get "EventB" then I am calling its execute method, and so on. String eventType = String.valueOf(payload.get("eventType")); Map<String, String> eventMapHolder = payload.get("eventMapHolder"); Map<String, Map<String, String>> processedMap = EventType.valueOf(eventType).execute(eventMapHolder); In general, I will have more event types (around 10-12) in the same enum class and mostly they will do the same operation as EventA, EventB, EventC and EventD. In my above enum, I have EventE just to be used as a constant name. Its execute method won't be called at all since I have an abstract execute method, so I have to override it. As you can see, in my EventD, I am populating maps using EventD and EventE as the key. Question: Now as you can see, code in the execute method of EventA and EventC is identical but the only difference is what I put as "key" (event name) in the returned immutable map. Similarly, there might be another event as well which will have the same exact code as EventD. Is there any way to remove that duplicated code but still achieve the same functionality in the enum? The idea is, based on what type of event is passed, I want to call its execute method, perform some operation and try to avoid duplication if possible. Also, I am thinking of moving getClientInfo and getProcessInfo into some other utility class so that this enum class has execution code only. If there is any other better way or any other design pattern to do this then I am open for suggestions as well which can help me remove duplicated code.
Answer: "In general, I will have more event types (around 10-12) in the same enum class and mostly they will do the same operation as EventA, EventB, EventC and EventD." Then you should simply call execute on the other enum constant that already has this "mostly same operation". "In my above enum, I have EventE just to be used as a constant name. Its execute method won't be called at all since I have an abstract execute method." This reasoning is wrong. The abstract method declaration you have is a guarantee that all constants have this method implemented. If you don't want to implement the method execute in each and every enum constant, you should change the abstract method declaration into a "normal" method (with an (empty) body and without the abstract keyword), like you have for the other methods. "Now as you can see, code in the execute method of EventA and EventC is identical but the only difference is what I put as "key" (event name) in the returned immutable map." Change Event?.name() to this.name() in both constants and then use your IDE's "extract method" feature to move the code out of the constants into a private enum method to be called from both constants. [edit] Re "There is no such feature in Eclipse": you should learn to know your tool better. In Eclipse:
- Select the code you want to extract (e.g. double-click behind the opening curly brace of the constant's method)
- Right-click on the selection
- In the context menu, point to "Refactoring" -> "Extract Method"
- In the dialog that opens, fill in a (good) method name in "Method Name"
- From the "Destination" box, select your enum's name
If you applied the suggested change to all enum constants, the last line now tells you how many occurrences of the same code will be replaced with a call to the new method.
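The suggested refactoring looks like this in outline (sketched here in Python rather than the post's Java, and heavily simplified; the point is one shared body keyed on the constant's own name):

```python
from enum import Enum

# One shared execute body keyed on self.name replaces the duplicated code in
# EventA and EventC; constants with genuinely different behaviour would still
# override it. The same shape works in a Java enum: a non-abstract execute
# delegating to a private method that uses this.name().
class EventType(Enum):
    EventA = "EventA"
    EventC = "EventC"

    def execute(self, event_map):
        if not event_map:
            return {}
        return {self.name: dict(event_map)}  # key is the calling constant's name

print(EventType.EventA.execute({"clientId": "1"}))  # {'EventA': {'clientId': '1'}}
```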
{ "domain": "codereview.stackexchange", "id": 23469, "tags": "java, enum" }
What happens to the masses in the center of two black holes in the moment they merge?
Question: This is an attempt to improve my recent question What happens to the singularities of two black holes in the moment they merge?. What is the background for talking about „singularities“ or „masses in the center“? In Schwarzschild black holes and astrophysical black holes (Oppenheimer-Snyder collapse) the mass is in the singularity, which is a point in time and not part of the manifold. „Mass in a point“ means that General Relativity breaks down (keywords here are infinite curvature and geodesic incompleteness) and so the challenging question is how to avoid the singularity so that the mass is in the center somehow (Planck scale) and not in a point. Such black holes are sometimes called „physical“ or „real“ black holes. The answer could be loop quantum cosmology. There are numerous papers, e.g. this one: We find the novel result that all strong singularities are resolved for arbitrary matter. … The effective spacetime is found to be geodesically complete for particle and null geodesics in finite time evolution. Our results add to a growing evidence for generic resolution of strong singularities using effective dynamics in loop quantum cosmology by generalizing earlier results on isotropic and Bianchi-I spacetimes. Now what happens in the moment of the merger? It seems quite clear that in the classical case (Schwarzschild black hole) the newly formed black hole, though strongly deformed, has one singularity. Can we assume the same for real black holes with their masses in the center? But how then do we explain that two separately located masses are unified instantaneously? Is it more reasonable to assume that the two masses move towards each other during the ringdown? Any clarification will be highly appreciated. Answer: First, when talking about black holes, the word "instantaneous" doesn't make any sense. Because of general relativity, the times of events at different locations can't be compared.
In some sense, everything inside the black hole happens after the outside universe ends. This isn't a very useful view, but "instantaneous" doesn't make any better sense in any of the other ways of looking at it. The only kind of black hole we actually understand the insides of is the Schwarzschild black hole. If we take two Schwarzschild black holes and collide them head on to get a Schwarzschild black hole, then the point singularities must also collide head on; presumably they fuse to form a new point singularity (exactly how this happens will depend on quantum gravity, which we don't understand). We have equations for the metric inside a rotationally symmetric Kerr (rotating) black hole, which has a very strange ring singularity inside it. However, the space-time inside a Kerr black hole is unstable, so real Kerr black holes should not be rotationally symmetric or contain ring singularities inside. Now, if you take two Schwarzschild black holes and collide them slightly off-center, you get a Kerr black hole. This Kerr black hole is not going to have the rotationally symmetric (and completely unphysical) ring singularity in it; the only thing that can possibly happen is that it will contain the original two singularities of the original black holes, which are now rotating around each other. We don't understand the metric inside this type of Kerr black hole. Next, suppose you take two of these realistic Kerr black holes and collide them so as to get rid of the angular momentum and end up with a Schwarzschild black hole. There are four singularities in the original two black holes, and I would guess they go through some complicated dynamics and end up colliding to form one singularity. Nobody knows the details of this process, though ... it's beyond our capability to compute numerically right now.
{ "domain": "physics.stackexchange", "id": 48640, "tags": "general-relativity, black-holes, loop-quantum-gravity" }
Adding meshes to detected transform
Question: OS: Ubuntu 18.04.2 DISTRO: Melodic I have a small custom mobile robot with an Intel RealSense D435i on it to perform machine vision tasks such as object recognition and grasping. The node I've written to accomplish this classifies objects it comes across on a course and puts out a transform of the object's pose. It publishes the transform as "large_box_1", "large_box_2", etc. When I pull the RobotModel in RVIZ, I'm able to load a robot description I've described in the URDF. What I'd like to do is take the meshes for the "large_box" I've created, and overlay them on the transforms in RVIZ. This would give better visualization of how accurately the robot is classifying these poses. How would I set this up? Originally posted by lauesa on ROS Answers with karma: 3 on 2019-08-09 Post score: 0 Answer: If your detection node can broadcast a unique tf frame for each object then you can also publish a Marker message containing a mesh resource in that frame. You could also color tint or use transparency so that this model doesn't cover up your original point data. This method means you can use any STL / DAE file to represent the detected objects; maybe not needed for simple boxes but it could be very useful for more complex shapes. Originally posted by PeteBlackerThe3rd with karma: 9529 on 2019-08-12 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by lauesa on 2019-08-12: This solution worked very well for my purposes. Thanks for your help!
{ "domain": "robotics.stackexchange", "id": 33600, "tags": "rviz, urdf, ros-melodic, mesh" }
How does Pollard's rho algorithm work?
Question: I am trying to understand how Pollard's rho algorithm actually works, but I just can not wrap my head around it. I already read its section in the CLRS book and also on the internet but still can not understand its structure or analysis. This is a Java implementation of the pseudocode from the CLRS book along with Euclid's gcd algorithm: public static void pollardRho(int n) { Random rand = new Random(); int i = 1; int x0 = rand.nextInt(n); int y = x0; int k = 2; while (true) { i++; int x = (x0 * x0 - 1) % n; int d = gcd(y - x, n); if (d != 1 && d != n) { System.out.println(d); } if (i == k) { y = x; k *= 2; } x0 = x; } } public static int gcd(int a, int b) { // fixes the issue with java modulo operator % returning negative // results based on the fact that gcd(a, b) = gcd(|a|, |b|) if (a < 0) a = -a; if (b < 0) b = -b; while (b != 0) { int tmp = b; b = (a % b); a = tmp; } return a; } Why does it choose $x = (x_0^2 - 1) \mod n$? What does $y$ actually represent and why is it chosen to be equal to $\{x_1,x_2,x_4,x_8,x_{16},...\}$? Why does it compute $\text{GCD}(y-x,n)$ and how does $d$ turn out to be a factor of $n$? And why is the expected running time $O(n^{1/4})$ arithmetic operations and $O(2^{\beta/4} \beta^2)$ bit operations, assuming that $n$ is $\beta$ bits long? I understand that if there exists a non-trivial square root of $x^2 \equiv 1 \pmod{n}$ then $n$ is composite and $x$ gives a factor, but $y - x$ is not a square root of $n$, is it? Answer: The idea behind Pollard $\rho$ is that if you take any function $f : [0, n - 1] \to [0, n - 1]$, the iteration $x_{k + 1} = f(x_k)$ must fall into a cycle eventually. Take now $f$ as a polynomial, and consider it modulo $n = p_1 p_2 \dotsm p_r$, where the $p_i$ are primes: $\begin{equation*} x_{k + 1} = f(x_k) \bmod n = f(x_k) \bmod p_1 p_2 \dotsm p_r \end{equation*}$ Thus it repeats the same iteration structure modulo each of the primes into which $n$ factors.
We don't know anything about the cycles, but it is easy to see that if you go with $x_0 = x'_0$ and: $\begin{align*} x_{k + 1} &= f(x_k) \\ x'_{k + 1} &= f(f(x'_k)) \end{align*}$ (i.e., $x'$ advances twice as fast) eventually $x'_k$ and $x_k$ will span one (or more) cycles (see Floyd's cycle detection algorithm for details), thus in our case, $x'_k \equiv x_k \mod{p_i}$, and $\gcd(x'_k - x_k, n)$ will be a factor of $n$ (divisible by $p_i$), hopefully a non-trivial one. Any polynomial works, but we want irreducible ones (no non-trivial factors, detecting those is not the point of the exercise). Linear polynomials don't give factors; the next simplest to compute is a quadratic one, but just $x^2$ doesn't work either (reducible), so take $x^2 + 1$ for simplicity. Remember, the idea here is to work with very large numbers; few and simple arithmetic operations are a distinct bonus. The analysis of the algorithm (e.g. in Knuth's "Seminumerical algorithms") models $f(x) \bmod p$ as a random function, which is close enough to explain the overall characteristics of the algorithm.
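To make the two-speed iteration concrete, here is a minimal Python sketch of the same idea (Floyd's tortoise-and-hare with $f(x)=x^2+1$); this illustrates the answer, it is not the CLRS pseudocode from the question:

```python
from math import gcd

def pollard_rho(n, x0=2):
    """Return a divisor of composite n (may be n itself on failure)."""
    f = lambda x: (x * x + 1) % n   # the iteration polynomial, taken mod n
    x, y, d = x0, x0, 1             # y is the "hare": it advances twice per step
    while d == 1:
        x = f(x)                    # tortoise: one step
        y = f(f(y))                 # hare: two steps
        d = gcd(abs(x - y), n)      # collision mod some prime p of n => gcd > 1
    return d                        # if d == n, retry with a different x0 or f

print(pollard_rho(8051))  # → 97 (8051 = 83 * 97)
```

When $x$ and $y$ collide modulo one prime factor $p$ before colliding modulo $n$ itself, the gcd is a proper factor; that is the "hopefully non-trivial" caveat above.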
{ "domain": "cs.stackexchange", "id": 20569, "tags": "algorithms, modular-arithmetic, factoring" }
Extension pattern in a flask controller using importlib 2.0
Question: The first edition of the code can be found in this 1.0 version. To summarize the problem : I have an endpoint in flask where the work logic change according to the query I am trying to implement an extension mechanism where the right class is loaded The first version did not provide caching of the imported class Based on the answer from @netme, this is a code update where I lazy load the right handler class and cache it : ./run.py: from flask import Flask, abort import handler app = Flask(__name__) handler_collection = handler.HandlerCollection() @app.route('/') def api_endpoint(): try: endpoint = "simple" # Custom logic to choose the right handler handler_instance = handler_collection.getInstance(endpoint) except handler.UnknownEndpoint as e: abort(404) print(handler_instance, handler_instance.name) # Handler processing. Not yet implemented return "Hello World" if __name__ == "__main__": app.run(host='0.0.0.0', port=8080, debug=True) ./handler.py: # -*- coding: utf-8 -*- import importlib import handlers class UnknownEndpoint(Exception): pass class HandlerCollection: _endpoints_classes = {} def addClass(self, endpoint_class): self._endpoints_classes[endpoint_class.name] = endpoint_class def getClass(self, endpoint_name): if (endpoint_name in self._endpoints_classes): return self._endpoints_classes.get(endpoint_name) try: # Try to import endpoint handler from a module in handlers package endpoint_module = importlib.import_module( '.{}'.format(str(endpoint_name)), 'handlers') endpoint_module.register(self) except ImportError as e: raise UnknownEndpoint('Unknown endpoint \'{}\''.format(endpoint_name)) from e return self._endpoints_classes.get(endpoint_name) def getInstance(self, endpoint_name): endpoint_class = self.getClass(endpoint_name) return endpoint_class() ./handlers/simple.py: # -*- coding: utf-8 -*- class SimpleWebhookHandler: name = "simple" def register(handler_collection): handler_collection.addClass(SimpleWebhookHandler) Each handler needs to 
provide a register function, which allows each handler class to have a different name, and each handler class needs to provide a name class attribute. Can you give me your opinion on this piece of code? Is it "pythonic"? Answer: Can you give me your opinion on this piece of code? Is it "pythonic"? I don't really see anything non-pythonic about it. Some minor coding style issues though: Minor PEP8 violations: put 2 blank lines before class and top-level function definitions; use snake_case for function names instead of camelCase. Unnecessary parentheses in if (endpoint_name in self._endpoints_classes): A bigger issue I see is that the module loading is convoluted and fragile: HandlerCollection.getClass tries to import a module: this violates the single responsibility principle; it would be better to move this logic somewhere else. The imported module is expected to have a register function: a semantic rule, not obvious enough from the code itself. There are no docstrings either, so as it is, implementors have to dig this piece of information out of the current implementation details. The register function must pass a class that has a name attribute defined: a semantic rule, see the previous item. The logical flow of HandlerCollection.getClass is convoluted: try to import the module if missing, pass self to module.register, module.register calls me.addClass. I suggest moving the module loading part out to a different method, perhaps HandlerRegistry.load_and_register_if_missing, and letting getClass behave as a simple cache.
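For illustration, here is one possible shape of that refactoring: a sketch only, with hypothetical names (HandlerRegistry, _load_and_register_if_missing) and an injectable loader so the importlib detail stays in a single place:

```python
import importlib


class UnknownEndpoint(Exception):
    pass


class HandlerRegistry:
    """Caches handler classes; module loading lives in one method."""

    def __init__(self, loader=None):
        self._classes = {}
        # loader(name) must return a module-like object with register(registry)
        self._loader = loader or self._import_handler_module

    def add_class(self, handler_class):
        self._classes[handler_class.name] = handler_class

    def get_class(self, name):
        # reads as: check the cache, else load
        if name not in self._classes:
            self._load_and_register_if_missing(name)
        return self._classes[name]

    def get_instance(self, name):
        return self.get_class(name)()

    def _load_and_register_if_missing(self, name):
        try:
            module = self._loader(name)
            module.register(self)
        except ImportError as e:
            raise UnknownEndpoint("Unknown endpoint '{}'".format(name)) from e
        if name not in self._classes:  # register() did not add the class
            raise UnknownEndpoint("Handler '{}' did not register itself".format(name))

    @staticmethod
    def _import_handler_module(name):
        return importlib.import_module('.{}'.format(name), 'handlers')
```

With the loader injected, the registry can be unit-tested with a fake module object instead of a real handlers package; the semantic contract (register function, name attribute) still deserves a docstring, but at least the import is isolated.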
{ "domain": "codereview.stackexchange", "id": 14643, "tags": "python, flask" }
Why does this algorithm to calculate number of pronic numbers in an interval work?
Question: I am given $A$ and $B$, where $A$ is less than or equal to $B$, and they are in the range of $[1, 100000000]$. I want to calculate the number of pronic numbers in the interval $[A, B]$. These are numbers $n$ such that $k \times (k + 1) = n$ for some $k$. The following algorithm works, but I am having a lot of trouble understanding the intuition behind it. Can someone explain to me why it works? def pronic(num) : # Check up to sqrt N N = int(num ** (1/2)); # If product of consecutive # numbers is less than or equal to num if (N * (N + 1) <= num) : return N; # Return N - 1 return N - 1; # Function to count pronic # numbers in the range [A, B] def countPronic(A, B) : # Subtract the count of pronic numbers # which are <= (A - 1) from the count # of pronic numbers which are <= B return pronic(B) - pronic(A - 1); Answer: First of all, denoting the number of pronic integers in $1,\ldots,m$ by $N_m$, the number of pronic integers in $A,\ldots,B$ is $N_B - N_{A-1}$: this counts the number of pronic integers in $1,\ldots,A-1,A,\ldots,B$, minus their number in $1,\ldots,A-1$. Next, a formula for $N_m$. It is the unique integer $x$ satisfying $x(x+1) \leq m < (x+1)(x+2)$: the pronic integers in $1,\ldots,m$ are $1\cdot(1+1),2\cdot(2+1),\ldots,x\cdot(x+1)$. Roughly speaking, $x \approx \sqrt{m}$, and this suggests checking whether $\lfloor \sqrt{m} \rfloor$ might work. The main observations are $$ m = \sqrt{m}^2 < (\lfloor \sqrt{m} \rfloor + 1)^2 < (\lfloor \sqrt{m} \rfloor + 1)(\lfloor \sqrt{m} \rfloor + 2) $$ and $$ m = \sqrt{m}^2 \geq \lfloor \sqrt{m} \rfloor^2 > (\lfloor \sqrt{m} \rfloor - 1) \lfloor \sqrt{m} \rfloor. $$ If $\lfloor \sqrt{m} \rfloor (\lfloor \sqrt{m} \rfloor + 1) \leq m$ then $x = \lfloor \sqrt{m} \rfloor$ works, and otherwise $x = \lfloor \sqrt{m} \rfloor - 1$ does.
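One quick way to build confidence in the formula is to re-implement it with exact integer square roots (math.isqrt sidesteps floating-point edge cases near $10^8$; this is a sketch, not the original code) and compare against brute force:

```python
from math import isqrt

def pronic_count(m):
    """Number of pronic integers k*(k+1), k >= 1, that are <= m (m >= 0)."""
    r = isqrt(m)                              # r = floor(sqrt(m)), exactly
    return r if r * (r + 1) <= m else r - 1

def count_pronic(a, b):
    return pronic_count(b) - pronic_count(a - 1)

# brute-force cross-check on a small range
pronics = {k * (k + 1) for k in range(1, 1000)}
assert all(count_pronic(a, b) == sum(a <= p <= b for p in pronics)
           for a in range(1, 60) for b in range(a, 120))
print(count_pronic(1, 100))  # → 9 (namely 2, 6, 12, 20, 30, 42, 56, 72, 90)
```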
{ "domain": "cs.stackexchange", "id": 19211, "tags": "algorithms, number-theory" }
Force on matter in inhomogeneous magnetic field (diamagnetism and paramagnetism)
Question: I found on this site a formula (4.101). It describes which force acts on matter in an inhomogeneous magnetic field: $ F_z = m_z \frac{\partial B_z (z_0)}{\partial z}$ What does the fraction consist of? What does the $z$ mean? Answer: What you call "the fraction" is the derivative of the magnetic induction with respect to position along the z axis. This is the factor that measures the inhomogeneity of the magnetic field. It is actually the component of the gradient of the magnetic field along the z axis. z is an index for one of the directions, or axes, in a Cartesian coordinate system. The other two directions are usually labeled x and y. So the formula shows how the component of the magnetic force along the z axis is related to the magnetic moment along the same axis and the gradient of the magnetic field (again, along the same axis).
{ "domain": "physics.stackexchange", "id": 32977, "tags": "electromagnetism, forces, matter" }
Disjoint Set Constant Time Union Operation
Question: I am following a LeetCode tutorial for disjoint sets. I have trouble understanding why the so-called "Quick Find" method needs to take O(n) time. I have implemented a data structure that has an optimised Union operation and it is O(1). // Base class for all types of disjoint sets. class DisjointSet { public: DisjointSet(int n) : _n(n), vertices(new int[n]) { for (int i = 0; i < n; i++) { vertices[i] = i; } } /* Finds the root node of a given vertex */ virtual int find(int vertex) const = 0; virtual void union_(int vertex1, int vertex2) const = 0; bool connected(int v1, int v2) const { return find(v1) == find(v2); } protected: int _n; int* vertices; }; And the concrete implementation: class FastUnionSlowFindSet : public DisjointSet { public: FastUnionSlowFindSet(int n) : DisjointSet(n) {} int find(int vertex) const { while (vertices[vertex] != vertex) { vertex = vertices[vertex]; } return vertex; } void union_(int v1, int v2) const { vertices[v2] = v1; } }; The idea is that we are keeping track of each vertex's parent and when finding we are going 'up' until we have a vertex whose parent is itself. For the union operation we are just changing the parent of the second node and the find will work because it is going one parent at a time. Why do we need to check for the rank and whether the roots are the same in the official implementation? I fail to see the logic discrepancy in my approach. Answer: Unfortunately, your implementation of disjoint sets does not work. For example, consider the following calling sequence. s = FastUnionSlowFindSet(3); s.union_(0,1); s.union_(2,1); After two union operations, all three vertices 0, 1, 2 should be in the same set. However, the only relation we have is vertices[1] = 2, as it has overwritten vertices[1] = 0. So s contains two disjoint sets, {1, 2} and {0}. To avoid such a bug, it is necessary to implement union_ as something like the following. void union_(int v1, int v2) const { vertices[find(v2)] = find(v1); }
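To see the failure concretely, here is the answer's calling sequence ported to a small Python sketch (the class and method names are mine), side by side with the corrected union:

```python
class QuickUnion:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, v):
        while self.parent[v] != v:
            v = self.parent[v]
        return v

    def union_buggy(self, v1, v2):
        self.parent[v2] = v1                        # overwrites v2's old parent link

    def union_fixed(self, v1, v2):
        self.parent[self.find(v2)] = self.find(v1)  # reparent the *root* of v2

s = QuickUnion(3)
s.union_buggy(0, 1)   # parent = [0, 0, 2]
s.union_buggy(2, 1)   # parent = [0, 2, 2]  -- the link 1 -> 0 is lost
print(s.find(0) == s.find(1))  # False, though all three should be connected

t = QuickUnion(3)
t.union_fixed(0, 1)
t.union_fixed(2, 1)
print(t.find(0) == t.find(1) == t.find(2))  # True
```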
{ "domain": "cs.stackexchange", "id": 20363, "tags": "disjoint-sets" }
Managing book of Excel sheets
Question: This is a follow-on from a previous question I posted here. I've got code here that works for what I want, but the problem is the loop takes ages to perform. I was wondering if anyone could follow this and tidy it up a bit for me. Sub Refresh_Data() Application.CutCopyMode = False 'Turns screen updating off to increase speed Application.ScreenUpdating = False 'Get 'G/L Account numbers Sheet1 = "BW TB" Sheets(Sheet1).Activate Range("A1").Activate 'Find last row - always named "Overall Result" in ColA Cells.Find(What:="Overall Result", After:=ActiveCell, LookIn:=xlFormulas, LookAt _ :=xlWhole, SearchOrder:=xlByColumns, SearchDirection:=xlNext, MatchCase:= _ True, SearchFormat:=False).Activate 'This looks up to row 25 (title row), but adjusts to only copy data from row 26 down to the penultimate row (the subtotal is not required) lastrow = Selection.Row - 1 colno = Selection.Column firstrow = Selection.End(xlUp).Row + 1 'CopyPaste loop 'First sheet is titled "4020" i = Sheets("4020").Index 'Due to all the sheet names being numeric. This is a slight workaround. 'It basically runs the macro starting at the 4020 sheet and ending at the last sheet with a numeric sheets. 'i.e. pastes values for all numbered tabs. 
Do While IsNumeric(Sheets(i).Name) = True 'clear all formulae except first formulaic row (Row5) Sheets(i).Activate Range("A6").EntireRow.Select Range(Selection, Selection.Offset(1000, 0)).ClearContents 'Copy G/L Account numbers from BW TB sheet to current sheet Sheets(BWTB).Activate Range(Cells(firstrow, colno), Cells(lastrow, colno)).Copy Sheets(i).Activate Range("a5").PasteSpecial xlPasteValues 'Copy down formulae Range("B5:L5").Copy Range("B5:L5", Range("B5:L5").Offset(lastrow - firstrow, 0)).PasteSpecial xlPasteFormulas ActiveSheet.Calculate 'Paste As Values Range("B6:L6", Range("B6:L6").Offset(lastrow - firstrow, 0)).Copy Range("B6").PasteSpecial xlPasteValues i = i + 1 Loop End Sub The scenario is about 25 'numerically named' sheets (eg 4020) for which I need to first clear (this is for a rolling document updated periodically with a differing number of rows required), then determine the number of rows, then copy-pastespecial data from an unformatted mastersheet (BW TB). Apologies for the mess it's in. I'm in the process of breaking up the code into more sub functions for easier reading. Answer: You should clean up every Select and Activate and use objects instead You'd better use the object model of VBA. For instance, if you only want to copy the value of a cell: Don't do Range("A1").Select Selection.Copy Range("A2").Select Selection.Paste Do Range("A2").Value = Range("A1").Value Another example: Don't do Cells.Find(...).Activate lastrow = Selection.Row - 1 colno = Selection.Column firstrow = Selection.End(xlUp).Row + 1 Do Dim mycell as Range Set mycell = Cells.Find(...) lastrow = mycell.Row - 1 colno = mycell.Column firstrow = mycell.End(xlUp).Row + 1 And so on, especially on your Sheet objects. 
Example to copy-paste between sheets You only have to adapt this kind of statement to your specific case: Worksheets("Sheet1").Range("A1").Copy Worksheets("Sheet2").Range("A2").PasteSpecial xlPasteFormulas Other tips You can also have a look at the very good website of Chip Pearson Edit Instead of: Sheets(i).Activate Range("A6").EntireRow.Select Range(Selection, Selection.Offset(1000, 0)).ClearContents You can try: Dim lastCol as Long With Sheets(i) lastCol = .Cells(6, .Columns.Count).End(xlToLeft).Column .Range("A6", .Cells(1000, lastCol)).ClearContents End With This will find the last column where you have data (so that you don't have to clear contents of the entire row) on the 6th row and then it will clear the content of the Range between A6 and the last column and the 1000th row. Another edit You also have a minor issue in your declaration part. This: Dim mycell As Range, LastRow, ColNo, FirstRow, i As Integer doesn't work, you have to do: Dim mycell As Range, LastRow As Integer, ColNo As Integer, FirstRow As Integer, i As Integer
{ "domain": "codereview.stackexchange", "id": 1004, "tags": "vba, excel" }
LinearRegression with fixed slope parameter
Question: I have some data $(x_{1},y_{1}), (x_{2},y_{2}), ..., (x_{n},y_{n})$, where both $x$ and $y$ represent real numbers (float). I want to use Scikit-learn's LinearRegression model to fit a model of the form: $y_{i} = b_{0} + b_{1}x_i + e_{i} $ Typically, I know that OLS is used to compute the parameters $b_{0}, b_{1}$. However, in my case, I happen to know that $b_{1}=c$ so I only want to fit $b_{0}$. Is there a way to force scikit-learn to use $b_{1}=c$ as the slope and only estimate the intercept $b_0$, or is a custom class necessary? Answer: You can just compute: $\hat{b}_0 = \operatorname{mean}(y-cx)$
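Since the intercept is the only free parameter, no scikit-learn class is needed at all; here is a plain-Python sketch of that formula (np.mean(y - c * x) is the NumPy one-liner equivalent):

```python
def fit_intercept(xs, ys, c):
    """OLS intercept when the slope is fixed at c: the mean of y - c*x."""
    residuals = [y - c * x for x, y in zip(xs, ys)]
    return sum(residuals) / len(residuals)

xs = [0.0, 1.0, 2.0, 3.0]
ys = [2.0 + 5.0 * x for x in xs]     # exact line y = 2 + 5x
print(fit_intercept(xs, ys, c=5.0))  # → 2.0
```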
{ "domain": "datascience.stackexchange", "id": 9169, "tags": "scikit-learn, linear-regression" }
base_link to world transform
Question: Greetings, In a recent conversion from diamondback to electric, I noticed that my tf state publisher had stopped converting from 'base_link' frame to 'world' frame. With no changes in my own tf publisher or robot model between ROS versions, I can only assume something changed in some ROS package(s). Upon further inspection, the geometry stack changed quite a bit, with kdl moving out completely. I can't seem to pin down the exact change that's causing the problem. Can anyone help me resolve this problem and get back my 'world' frame? Thanks, Sean Originally posted by seanarm on ROS Answers with karma: 753 on 2012-02-13 Post score: 2 Answer: In my state publisher, where I publish my other transforms with RobotStatePublisher::publishTransforms(...), I had to also add RobotStatePublisher::publishFixedTransforms(). Between diamondback and electric, RobotStatePublisher::publishTransforms(...) began to only publish transforms for moving joints. See the functions at line 78 and line 100 in robot_state_publisher.cpp (ignore the incorrect print statement for the second function). In order to publish fixed joint transforms, you must call RobotStatePublisher::publishFixedTransforms(). This problem is also referenced in this post: Why does robot_state_publisher skip publishing fixed joints. Originally posted by seanarm with karma: 753 on 2012-02-13 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 8220, "tags": "ros, frame, geometry, kdl, transform" }
Is the solar energy Infinite?
Question: Is the solar energy coming from the sun infinite, and will it continue to be radiated to our earth forever? (discarding any outer factors) What's the sun's fuel? Answer: When you mention solar power, it makes me think you are thinking about photovoltaic power, i.e. power extracted from solar panels. The power put out by the sun is about $3.95\times10^{26}\ \mathrm{W}$ (a watt is already a joule per second). But solar panels can only capture a fraction of that energy. Even so, in 2008 humans used energy at an average rate of about $4\times10^{13}\ \mathrm{W}$, which is many orders of magnitude less than the sun puts out. The Sun's life cycle will last billions of years, fusing hydrogen, then helium, and slowly working its way down to a white dwarf star. As the temperature and radius of the Sun change, its power output will fluctuate. But it will always put out more power than humans could use until the Earth is heated greatly during the red giant phase. Considering Homo sapiens have only been around 200,000 years and the Sun won't expand for another ~7,000,000,000 years, that is approximately infinite in terms of human life spans, which are about 80 years. What exactly will happen during this red giant phase is still under debate, but it is a long, long time from now.
{ "domain": "physics.stackexchange", "id": 13851, "tags": "astrophysics, nuclear-physics, sun, stars, fusion" }
Do particle decays with different products have different decay rates?
Question: https://en.wikipedia.org/wiki/Particle_decay#Probability_of_survival_and_particle_lifetime lists the mean lifetime of particles. But particles may decay in different ways, so is the mean lifetime the average over all possible decay modes? Does each decay mode have its own decay rate? Answer: The decay rates are additive, so the total decay rate is: $$\Gamma\equiv\sum_i\Gamma_i$$ where $\left\{\Gamma_i\right\}$ are the partial decay rates to each of the possible decays. This makes intuitive sense: for example, if you have a bucket of pebbles and two people taking pebbles out of the bucket, the total rate is the sum of each person's rate. This means that the lifetimes follow the following relation (harmonic sum): $$\frac{1}{\tau}=\sum_i\frac{1}{\tau_i}$$ where $\left\{\tau_i\right\}$ is the lifetime for each decay mode. This is exactly the same as, for example, the scattering of electrons in conductors, where there are multiple scattering sources such as impurities and lattice vibrations. $\tau$ would be the mean time between scattering events, and $\tau_i$ would be the mean time between two consecutive lattice-vibration scattering events or between two consecutive impurity scattering events.
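A toy numeric check of the harmonic relation (the lifetimes below are made up, purely to show that rates add while lifetimes combine harmonically):

```python
def total_lifetime(partial_lifetimes):
    # rates add: Gamma = sum(1/tau_i), so tau = 1 / sum(1/tau_i)
    return 1.0 / sum(1.0 / t for t in partial_lifetimes)

# two equally likely decay modes halve the lifetime
print(total_lifetime([2.0, 2.0]))   # → 1.0
# a much faster mode dominates the total
print(total_lifetime([1e-8, 1.0]))  # ≈ 1e-8
```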
{ "domain": "physics.stackexchange", "id": 80766, "tags": "particle-physics, radiation, probability, elementary-particles" }
Calculating Dissimilarity between attributes
Question: I would like to figure out how to calculate the dissimilarity between Jack and Jim. Given the attributes table, the relational table, and the example calculations shown below, I would like to understand how the dissimilarity is calculated for Jack and Jim. Answer: I understand how to approach the problem now. $q$: number of attributes equal to 1 for both Jack and Jim. $r$: number of attributes equal to 1 for Jack, but equal to 0 for Jim. $s$: number of attributes equal to 0 for Jack, but equal to 1 for Jim. $t$: number of attributes equal to 0 for both Jack and Jim. Therefore, $d(\text{Jack},\text{Jim}) = {(r+s)\over(q+r+s)} = {(1+1)\over(1+1+1)} \approx 0.67$
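A small sketch of the contingency-count computation (the attribute vectors below are hypothetical, chosen only to reproduce q = r = s = 1 from the example):

```python
def asymmetric_binary_dissimilarity(a, b):
    """d = (r + s) / (q + r + s); 0-0 matches (t) are ignored."""
    q = sum(x == 1 and y == 1 for x, y in zip(a, b))  # 1-1 matches
    r = sum(x == 1 and y == 0 for x, y in zip(a, b))  # 1 in a, 0 in b
    s = sum(x == 0 and y == 1 for x, y in zip(a, b))  # 0 in a, 1 in b
    return (r + s) / (q + r + s)

jack = [1, 0, 1, 0, 0, 0]   # hypothetical binary attributes
jim  = [1, 1, 0, 0, 0, 0]
print(round(asymmetric_binary_dissimilarity(jack, jim), 2))  # → 0.67
```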
{ "domain": "datascience.stackexchange", "id": 10017, "tags": "data-mining" }
What does %s mean in a python strings?
Question: I'm not sure what %s means in Python strings. I came across this line of code somewhere a while ago: print("On your journey to %s, you drove at an average speed of %s miles per hour." % (where, speed)) and I just came across a question with the following line: print("The", count, "prime number is %s" %index). And how come neither of the above examples had to use + to combine the strings and integers (presumably), and how come they didn't use str to convert the integers (again presumably... could be floats) to string format?? I am SO confused... Answer: As already said, this question is off-topic but it's not a stupid question, so don't get discouraged. This question isn't applicable to this platform as it is not an issue you're having with your code. If you could provide your code and then ask what %s means, it would make more sense, as I could see what you are trying to do (and whether you're using %s correctly). But for clarity's sake, %s in Python is really just used for inserting and possibly formatting a string (it saves time with casting and concatenating; two very important terms!). The following links may provide more helpful and thorough insights into %s: https://careerkarma.com/blog/python-what-does-s-mean/ https://stackoverflow.com/questions/997797/what-does-s-mean-in-a-python-format-string Keep in mind: this platform is for reviewing working code. No surprises, as the first section of the link is "codereview.stackexchange.com"
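To make the behaviour concrete: %s substitutes str(value) into the string, which is why neither + nor an explicit str() call is needed in either example:

```python
where, speed = "Boston", 58.3  # illustrative values, not from the original code
msg = "On your journey to %s, you drove at an average speed of %s miles per hour." % (where, speed)
print(msg)

count, index = 3, 5
# mixing styles, as in the second example: the commas pass separate arguments
# to print(), while %s formats the value into the string itself
print("The", count, "prime number is %s" % index)
```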
{ "domain": "codereview.stackexchange", "id": 40122, "tags": "python, strings" }
Proving that a Language is non-Regular
Question: Prove that $L_2 = \{ w \in \{a,b\}^* \mid w = a^ib^j, i \neq j \}$ is not regular. I was wondering if my intuition holds for proving this language as not regular: Let $q = \max(i, j) - \min(i, j)$. Case 1: $i > j$ Let $$L_3 = b^q \Rightarrow L_2 \cdot L_3 = \left \{ a^ib^{j+q} \right \}=\left \{ a^ib^i \mid i \ge 0 \right \}.$$ And then do the same for when $j > i$. Is this allowed? Answer: No, that's not allowed, since $i$ and $j$ are not fixed. Here is a solution using the pumping lemma: Let $x = a^{n!}b^{(n+1)!}$, where $n$ is the pumping constant. Note that $u = a^i$ and $v=a^j$ for some $i\geq 0$ and $j>0$, with $i+j\leq n$. Then we let $k = 1+\frac{n\cdot n!}{j}$, which is an integer since $j \leq n$ divides $n!$. After that, $uv^kw = a^mb^{(n+1)!}$ where $m = n!+(k-1)j = n!+n\cdot n! = (n+1)!$. The pumped string has equally many $a$s and $b$s, so it is not in $L_2$: contradiction. Here is another solution using closure properties (which is less tricky): Suppose $L_2$ is regular. Then $(\Sigma^* - L_2)\cap a^*b^* = \{w\in\{a, b\}^*\mid w=a^ib^j, i=j\}$ is regular by closure properties. Then it's easy to obtain a contradiction by applying the pumping lemma to $(\Sigma^* - L_2)\cap a^*b^*$.
{ "domain": "cs.stackexchange", "id": 11259, "tags": "formal-languages, regular-languages" }
How to calculate the power required to drive a fan
Question: I need to specify a fan and motor combination, and wondered if there are formulas that can work this out? The fan we are using is a crossflow fan. So I'm assuming the power required to drive it is derived from the number of blades, the dimensions of the blades (including angle of attack), the dimensions of the barrel/wheel, and the speed in RPM. Is this possible, or does it need to be worked out practically with experimental measurements etc.? Hopefully this is the correct stack for this question; if not then mods please feel free to edit/close. Answer: This is very rough, might or might not help you... Treating it from a macroscopic perspective, kinetic energy is $$E_k = \frac{1}{2}mv^2$$ For the power carried by a flow, replace the mass with the mass flow rate $\dot m = \rho A v$ (density times cross-sectional area times velocity): $$P = \frac{1}{2}\rho Av^3$$ With that, you should be able to estimate (after efficiency corrections) the power required. Otherwise, a search for impeller equations might yield something. Most aerospace formulas are directed towards normal propellers with aerofoils.
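As a rough numeric sketch of that estimate (the values below are purely illustrative, and a real fan then needs an efficiency factor well below 1):

```python
def airflow_power(rho, area, velocity):
    # ideal kinetic power of a uniform air stream: P = 0.5 * rho * A * v^3
    return 0.5 * rho * area * velocity ** 3

# e.g. air (1.2 kg/m^3) through a 0.01 m^2 outlet at 10 m/s
print(airflow_power(1.2, 0.01, 10.0))  # → 6.0 W, before efficiency corrections
```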
{ "domain": "robotics.stackexchange", "id": 332, "tags": "motor, power" }
Is there any evidence that a virus can modify human evolution
Question: I was just reading Evolution of lactose tolerance, and in it one line says "But there was a time in human history when our diet and environment conspired to create conditions that mimicked those of a disease epidemic". Something I've always wondered is whether, rather than natural selection occurring and survival of the fittest and so on, a viral epidemic caused a mutation in the survivors, i.e. DNA inserted from the virus modified the survivors' genomes. There is some evidence that a significant percentage of the human genome comes from viruses. Probably a naive question - so is there any evidence that this has occurred? From my lay perspective this is what attempts at genetic engineering use, so it's possible nature already did this? It's always seemed to me that natural selection and point mutations, and the chance meeting of two people with the same mutation, would be too slow, and viruses would be a more efficient method to evolve a large population at the same time. Things like punctuated equilibrium might support this? Not really sure. Answer: There is some nice work from Pardis Sabeti in this broad area (not just viruses but pathogens in general). Her colleagues have found polymorphisms in SLC40A1, encoding an iron transport protein, that influence susceptibility to tuberculosis. She's also looked at human genetic variation and susceptibility to Lassa fever. And this paper talks about CD36 and malaria in sub-Saharan African populations (no association). Look at her work from 2009 onward, as there is a lot there. They're also looking at variation in the pathogen because it is a two-way interaction.
{ "domain": "biology.stackexchange", "id": 656, "tags": "evolution" }
In derivation of capillary rise we take the upwards pressure as $2T/r$. How?
Question: In the derivation of capillary rise, we take the excess pressure due to surface tension in the upward direction as equal to $2T/r$. Can someone please explain how this comes about? Answer: The derivation can be thought of as this: Let, Radius of Capillary be $r$ Density of the liquid $\rho$ Height of the liquid be $h$ Surface Tension of Liquid be $T$ Contact angle $\theta$ Weight of liquid inside capillary = Volume * Density * $g$ $$=\pi r^2 h \rho g$$ which is the downward force, and the force that is balancing this is the force due to Surface tension. Now, Surface tension is defined as the Force acting on a line which is on the surface. In this case the surface tension is acting on the circumference. Hence, total force upwards: component of Surface Tension upwards * length of the line it acts on $$T\cos\theta (2\pi r)$$ The $\sin\theta$ components get cancelled as they are radially outward throughout the circumference. Equating the forces we get: $$\pi r^2 h \rho g = T\cos\theta (2\pi r)$$ $$\implies h = \frac{2T\cos\theta}{r\rho g}$$ Note: For some liquids $\theta$ is very close to $0$ degrees and hence the $\cos\theta$ term can be taken as $1$.
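A quick numeric sanity check of the final formula (illustrative water values: T ≈ 0.0728 N/m, θ ≈ 0, a 0.5 mm radius tube gives roughly 3 cm of rise):

```python
from math import cos

def capillary_rise(T, theta, r, rho, g=9.81):
    # h = 2 T cos(theta) / (r * rho * g)
    return 2 * T * cos(theta) / (r * rho * g)

h = capillary_rise(T=0.0728, theta=0.0, r=0.0005, rho=1000.0)
print(round(h * 100, 1))  # → 3.0 (cm)
```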
{ "domain": "physics.stackexchange", "id": 59168, "tags": "surface-tension, capillary-action" }
Why do higher pitches appear to be louder?
Question: It may just be in a few cases, but in the case of a flute, a higher pitch appears to come with a perceived higher volume. Is this simply because you need to put more energy into the flute to get a higher harmonic? Why is that? It seems that one shouldn't need to put in more energy and consequently get a higher volume. Answer: The physiology of the human ear (and perhaps brain) makes sounds with frequencies around ~3000 Hz sound louder than higher and lower frequencies at the same sound-pressure amplitude; see https://en.wikipedia.org/wiki/Equal-loudness_contour
{ "domain": "physics.stackexchange", "id": 8170, "tags": "acoustics, harmonics" }
"Find the number" puzzle
Question: A friend gave this puzzle: Find 3 numbers (say, A B C) such that \$ABC+ABC+ABC = CCC\$ ABC means \$((A*100) + (B*10) + C)\$. Also, these 3 digits must be distinct. I wrote a quick code to find out the answer: int main(void) { // your code goes here int hun,dec,unit; int final_num = 0; int temp_num = 0; for(hun = 0; hun <10; hun++) { for(dec = 0; dec < 10; dec++) { for(unit = 0; unit < 10; unit++) { // Calculate the number which would be formed with this unit place final_num = (unit *100) + (unit * 10) + (unit); // Values should not be same if((unit!=dec) && (unit != hun) &&( dec != hun)) { // Number with current combo of ABC temp_num = ((hun * 100) + (dec * 10) + unit) * 3; if(temp_num ==final_num ) { printf("Number is %d \n",temp_num); printf(" Hun %d Dec %d Unit %d",hun,dec,unit); } } } } } printf("\nEnd of File "); return 0; } And the Answer was 185 (185 + 185 + 185 = 555). That was very amateur code. Upon thinking, I realized I could have implemented equation \$300A + 30B + 3C = 111C\$. But still couldn't avoid for loop. What could be the better way to implement this? I know only C so I solved it with C. Answer: I'm sorry, but the best solution to this problem has to be a logic solution, not a code solution. There are only 2 digits that, when multiplied by 3, have the same last digit... 0, and 5. So, 0+0+0 is 0, and 5+5+5 is 15. Since C cannot be 0, it means that C can only be 5. Now, if C is 5, and we know that there is a 'carry' of 1 in to the tens column, it means we need to find a number other than 5 that when added together 3 times ends in a 4. There is only one value that does that, 8. 8 + 8 + 8 is 24, and with the carry of 1, we have 25 (and a carry of 2 in to the 100's column). So, now we need a digit that sums three times to 3, and that's 1. There is only one possible solution where ABC + ABC + ABC is CCC, and that's 185. It can be deduced using logic alone, and brute-forcing it is overkill.
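The logic above can be cross-checked with a short brute force (algebraically, $300A + 30B + 3C = 111C$ reduces to $5(10A + B) = 18C$, forcing $C = 5$ and then $A = 1$, $B = 8$):

```python
solutions = [(a, b, c)
             for a in range(1, 10)      # leading digit can't be 0
             for b in range(10)
             for c in range(10)
             if len({a, b, c}) == 3 and 3 * (100 * a + 10 * b + c) == 111 * c]
print(solutions)  # → [(1, 8, 5)], i.e. 185 + 185 + 185 = 555
```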
{ "domain": "codereview.stackexchange", "id": 41331, "tags": "performance, c" }
successful os x 10.6 install? anyone?
Question: Hi, I have a brand new 64bit macbook pro with os x 10.6. I am wondering if anyone has successfully installed the bare bones version of ros with a similar setup? Is there any initiative to get a package based installer? I went through the tutorial for installing and poked around some forums for answers to bugs, but at this point I have been watching macports try and build atlas for about 2 hours. Is there a non-macports based installation for os x 10.6? I am curious if anyone has even got it running/has a personal walk through on the setup. Originally posted by Chocobot on ROS Answers with karma: 1 on 2011-06-10 Post score: 0 Original comments Comment by mjcarroll on 2011-06-13: You probably just have to wait the atlas package out. It took the better part of an evening to get everything installed on 10.6. Answer: I just installed diamondback from source on my MacBook Pro (10.6.7) following the installation instructions. It worked fine for me. I've built the ros, ros_comm, and ros_tutorials stacks, as well as some others. Depending on how much of MacPorts you have installed, it can take quite a long time to build the system dependencies (e.g., at some point you might be building gcc4.4). You just have to wait it out. Originally posted by Brian Gerkey with karma: 2916 on 2011-06-13 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 5812, "tags": "ros, macos-snowleopard, 64bit, rosinstall, osx" }
How can water be electrolysed if it's not an ionic compound?
Question: The electrodes have different charges. One is negative (the cathode) while the other is positive (the anode). They will attract the particles in the solution which are charged. Hydrogen will go to the cathode while oxygen will go to the anode. I have been taught that water is a covalent compound. Also I have been taught that covalent bonds do not have a charge, so how is it possible that this compound is charged if it's covalent? Answer: Pure water contains only small amounts of $\ce{H3O+}$ and $\ce{OH-}$ which are generated by autoprotolysis (the reaction given by Klaus Warzecha in his comment to your question). Reduction of $\ce{H3O+}$ at the cathode produces molecular hydrogen. At the anode, water is in turn oxidized, yielding molecular oxygen and $\ce{H3O+}$ (reference). $$\mathrm{Cathode:}\ \ce{2H3O+ + 2e- \rightarrow H2 + 2H2O}$$ $$\mathrm{Anode:}\ \ce{6H2O \rightarrow O2 + 4H3O+ + 4e-}$$ $$\mathrm{Overall\ reaction:}\ \ce{2H2O \rightarrow 2H2 + O2}$$
{ "domain": "chemistry.stackexchange", "id": 1160, "tags": "water, electrolysis" }
Generating a graph from a rectangular grid for graph searching
Question: I recently reviewed some code implementing some heuristics which has piqued my interest in the A* graph searching algorithm. I'm going to do that in a bit but first I need a way to create a graph in order to work on... I could have used Point from System.Drawing, but I didn't want to include the whole assembly for the single struct. That's potentially a bad reason but here's my reinvented Coordinate structure anyway: [DebuggerDisplay("({X}, {Y})")] internal struct Coordinate : IEquatable<Coordinate> { internal int X { get; } internal int Y { get; } public Coordinate(int x, int y) { X = x; Y = y; } public static Coordinate operator +(Coordinate a, Coordinate b) { return new Coordinate(a.X + b.X, a.Y + b.Y); } public static Coordinate operator -(Coordinate a, Coordinate b) { return new Coordinate(a.X - b.X, a.Y - b.Y); } public override bool Equals(object obj) { if (obj is Coordinate) { return Equals((Coordinate)obj); } return false; } public override int GetHashCode() { int hash = 17; hash = hash * X.GetHashCode(); hash = hash * Y.GetHashCode(); return hash; } public bool Equals(Coordinate other) { return other.X == X && other.Y == Y; } public override string ToString() { return $"({X}, {Y})"; } } I decided I would treat each point as a 'tile' and have the cost on that tile rather than weighting the edges directly. I chose null as the value representing an impassible tile... I'm not sure I like that now so thoughts welcome. [DebuggerDisplay("Location: {Location}, Cost: {Cost}")] internal sealed class MapTile : IEquatable<MapTile> { internal MapTile(Coordinate location, int? cost = null) { Location = location; Cost = cost; } internal Coordinate Location { get; } internal int? 
Cost { get; } public bool Equals(MapTile other) { if (ReferenceEquals(other, null)) { return false; } return Location.Equals(other.Location) && Cost == other.Cost; } public override bool Equals(object obj) { return Equals(obj as MapTile); } public override int GetHashCode() { int hash = 17; hash = hash * Location.GetHashCode(); hash = hash * Cost.GetHashCode(); return hash; } public override string ToString() { return $"Location: {Location}, Cost: {Cost}"; } } The underlying data structure for the map is really straightforward: internal sealed class Graph<T> { public IEnumerable<T> AllNodes { get; } private IDictionary<T, IEnumerable<T>> Edges; internal Graph(IDictionary<T, IEnumerable<T>> edges) { if (edges == null) { throw new ArgumentNullException(nameof(edges)); } Edges = new ReadOnlyDictionary<T, IEnumerable<T>>(edges); AllNodes = Edges.Keys; } internal IEnumerable<T> Neighbours(T node) { return Edges[node]; } } In order to create a simple 2D rectangular map, I created this map generator class: internal class RectangularMapGenerator { private int height; private int width; private HashSet<Coordinate> walls = new HashSet<Coordinate>(); private HashSet<Coordinate> water = new HashSet<Coordinate>(); private static readonly Coordinate[] CardinalDirections = new[] { new Coordinate(0, 1), new Coordinate(1, 0), new Coordinate(0, -1), new Coordinate(-1, 0) }; public RectangularMapGenerator(int width, int height) { if (height < 0) { throw new ArgumentOutOfRangeException(nameof(height)); } if (width < 0) { throw new ArgumentOutOfRangeException(nameof(width)); } this.height = height; this.width = width; } internal RectangularMapGenerator AddWall(Coordinate location) { if (!IsWithinGrid(location)) { throw new ArgumentException("Wall location must be within the grid", nameof(location)); } walls.Add(location); return this; } internal RectangularMapGenerator AddWater(Coordinate location) { if (!IsWithinGrid(location)) { throw new ArgumentException("Water location must be 
within the grid", nameof(location)); } water.Add(location); return this; } private bool IsWithinGrid(Coordinate c) { return c.X >= 0 && c.X < width && c.Y >= 0 && c.Y < height; } private IEnumerable<MapTile> CreateEdges(MapTile tile) { if (walls.Contains(tile.Location)) { return Enumerable.Empty<MapTile>(); } return (from d in CardinalDirections let newLocation = tile.Location + d where IsWithinGrid(newLocation) && !walls.Contains(newLocation) select CreatMapTile(newLocation)) .ToArray(); } private MapTile CreatMapTile(Coordinate location) { int? cost = null; if (!walls.Contains(location)) { cost = water.Contains(location) ? 5 : 1; } return new MapTile(location, cost); } internal Graph<MapTile> Build() { var edges = new Dictionary<MapTile, IEnumerable<MapTile>>(); for (var x = 0; x < width; x++) { for (var y = 0; y < height; y++) { var location = new Coordinate(x, y); var tile = CreatMapTile(location); edges[tile] = CreateEdges(tile); } } return new Graph<MapTile>(edges); } } An example of actually creating a map is (I only wrote this for CR so not really wanting it to be reviewed): static void Main(string[] args) { var mapBuilder = new RectangularMapGenerator(10, 10); for (var x = 1; x < 4; x++) { for (var y = 7; y < 9; y++) { mapBuilder.AddWall(new Coordinate(x, y)); } } for (var x = 4; x < 7; x++) { for (var y = 0; y < 10; y++) { mapBuilder.AddWater(new Coordinate(x, y)); } } var graph = mapBuilder.Build(); foreach (var row in graph.AllNodes.GroupBy(n => n.Location.Y)) { Console.WriteLine( string.Join(" ", row.OrderByDescending(a => a.Location.X) .Select(a => a.Cost.HasValue ? a.Cost.Value.ToString() : "-"))); } Console.ReadKey(); } Running the above outputs a map of "costs" that looks like this (which is obviously a river with a boat house next to it ;) ). I've used the convention of origin in the top left with y increasing down and x increasing to the right like it does in the html canvas. (Edit: No it doesn't! 
Should have used OrderBy for the x axis, not OrderByDescending... Result is that the origin is actually in the top right and x increases to the left.) I haven't done this sort of thing before (and I have a Physics background so no CS classes to fall back on) so I'm looking for comments on all aspects of the code. Is putting the cost on a tile a reasonable trade-off for simplicity, or should I introduce a more modelled edge with the cost on that instead? Answer: A couple of comments: This code looks really nice! Coordinate [DebuggerDisplay] is redundant when you override ToString(). I don't mind readonly fields for X and Y there. This is a slight performance optimization. Further reading I usually throw in equality operators when overriding equality: public static bool operator ==(Coordinate left, Coordinate right) { return left.Equals(right); } public static bool operator !=(Coordinate left, Coordinate right) { return !left.Equals(right); } About GetHashCode This algorithm will generate collisions for new Coordinate(1, 2) == new Coordinate(2, 1). May or may not be an issue. No need to call X.GetHashCode() as it returns the value. Here is an alternative implementation: public override int GetHashCode() { unchecked { return (X*397) ^ Y; } } Graph I prefer writing AllNodes like this: public IEnumerable<T> AllNodes { get { return Edges.Keys; } } Or if you are using C#6 public IEnumerable<T> AllNodes => Edges.Keys; I prefer IReadOnlyList<T> in all places, it closes the door to accidentally having something lazy that executes on every call. For max performance you want to pass the raw array T[]. Depending on how it is used KeyValuePair<T, IReadOnlyList<T>>[] can have better performance than Dictionary<T, IReadOnlyList<T>> due to dictionary being a pretty expensive allocation. If there are many elements and you do many lookups per dictionary, the dictionary is probably the right choice. Ending here, really nice code, not much of a review.
As with all things related to performance, profile first and profile after if you decide to optimize.
{ "domain": "codereview.stackexchange", "id": 18749, "tags": "c#, graph" }
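The reviewer's collision claim is easy to check: the posted GetHashCode reduces to 17·X·Y, which is symmetric in X and Y, while the suggested (X*397) ^ Y is not. A rough sketch in Python (this ignores C#'s wrap-around unchecked integer arithmetic, so it only mirrors the small-value behaviour):

```python
def original_hash(x, y):
    # mirrors the reviewed C# GetHashCode: hash = 17; hash *= x; hash *= y
    h = 17
    h = h * x
    h = h * y
    return h

def suggested_hash(x, y):
    # mirrors the reviewer's alternative (X * 397) ^ Y, minus C#'s unchecked wrap-around
    return (x * 397) ^ y

print(original_hash(1, 2), original_hash(2, 1))    # 34 34  -> guaranteed collision
print(suggested_hash(1, 2), suggested_hash(2, 1))  # 399 795 -> distinct
```

The multiply-only hash collides for every transposed pair, so mixing with an XOR (or any non-commutative combination) is the material improvement here.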
Is there a name and efficient algorithm for this Tree Traversal method?
Question: Consider a tree structured task list where intermediate nodes define sub-groupings of tasks but are not tasks themselves, and the leaves represent the actual tasks. I want to traverse this type of tree giving equal opportunity/priority/time(?) to each group with respect to its parent. For example, if the root has 3 sub-trees (node or leaf), first I want to complete 1 task from the first sub-tree, then complete another task from the second sub-tree, and finally complete a task from the third sub-tree. When all sub-trees are processed, the algorithm would start from the beginning, this time selecting the second task from the sub-trees. This way, all the sub-trees would have equal processing time in terms of tasks in a given time frame. The same requirement is also present in each sub-tree itself, recursively of course. This might require keeping tabs on traversed child paths for each loop on each node. ... Maybe a recursive traversal could automatically handle this? Consider the following tree structure as an example: So the traversal I am thinking about would be as follows for this example.
Level 0 has no task, 3 sub-trees, go into the first sub-tree to find a task We follow as in depth-first to arrive at task A and select it (complete it) Now the first sub-tree in level 1 has a task completed, while its siblings are at 0, so we select from the second sub-tree Level 1 second sub-tree has only one task, D, so we select it Similarly we go into the third sub-tree and find the task F and select it Now we return to level 1 and try to find our second task from the first sub-tree IMPORTANT: We have already selected a task from level 1's first sub-tree, so we go into its second sub-tree and select task E Now level 1 first sub-tree has two tasks completed, and the second sub-tree has no other tasks, so we skip it and go into the third sub-tree and select G Similarly, we loop again and select B from the left side, and I from the right side (we don't select H because its sibling was already selected) Then we select C from the left side, completing this side, and K from the right side (J is not selected, similarly) Only the right side is left, so we first select H and finally J, completing our traversal. Traversal of leaves: (A->D->F)->(E->G)->(B->I)->(C->K)->(H)->(J) [Adding this clarifying paragraph from the comments:] Essentially the algorithm prevents any sub-grouping getting ahead (more than 1 task) of its unfinished siblings. It might be considered, in part, as a hierarchical round-robin scheduling giving equal time in terms of task operations. So my questions are: Is there a name for this kind of tree traversal? If not, what would you call it? How would you approach traversing this tree in an efficient way and what would be the complexity? Big thanks! EDIT: Changed "opportunity" to "opportunity/priority/time(?)" in text and removed it from the title. UPDATE: I have used the final algorithm from Apass.Jack and compared some different trees on it. Calculating the complexity proved difficult, though it definitely looks lower than $O(n\log n)$.
JS code can be found at: JS Online Engine: Javascript (Node ES6+) Implementation Gist Code: GitHub Gist Source Here are some execution results for different depths (D) and leaf counts (L): D3L10 | D3L1000 | D6L10 | D6L1000 =================================== Push | 35 | 3350 | 50 | 4698 | Unshift | 14 | 1150 | 17 | 1549 | Splice | 35 | 3350 | 50 | 4698 | Access | 35 | 3350 | 50 | 4698 | Traverse()| 16 | 1351 | 20 | 1808 | Zip() | 5 | 151 | 8 | 550 | ----------------------------------- NOTE: My question traversed siblings in left to right order but as Apass.Jack has stated below, a randomized recursive round-robin (RRRR) algorithm would indeed be more useful in a task-management situation, and the current algorithm could be modified to do that easily. Answer: If we just mark a leaf as visited in this round instead of removing it so as to allow it to be listed in the next round, we will get an algorithm to traverse tree leaves with repetitions, which is called "recursive round robin scheduler" in the case of binary trees in a paper in Computer Networks in 1999, which I would also call "recursive round robin tree leaf sampling with replacement". So I would call the output of the algorithm in OP's question "recursive round robin tree leaf sampling without replacement" or "RRR tree leaf traversal" or just "RRR traversal" in short. Indeed, I cannot think of any other algorithm that traverses tree leaves and could rightfully claim to be both "round robin" at subtrees of the same parent and "recursive". (If the order in which each subtree of the same parent is selected is randomized instead of always left to right, we will have an algorithm that we may call recursive randomized round robin leaf traversal algorithm, or, an even more lovely abbreviation, RRRR traversal.) Here is the pseudocode of my recursive algorithm that has the same output as OP's algorithm and its auxiliary zip procedure.
procedure rrrTraversal(rootedTree): if rootedTree is just a root without children return a list with rootedTree as its only element ll := an empty list of lists foreach subtree s of rootedTree from left to right: ss := rrrTraversal(s) push ss to the back of ll return zip(ll) procedure zip(listOflists): zipped := an empty list while listOflists is not empty: foreach list li in listOflists from front to end: if li is not empty: fi := first item in li remove fi from li push fi to the back of zipped else: remove li from listOflists return zipped OP asks "This might require keeping tabs on traversed child paths for each loop on each node. ... Maybe a recursive traversal could automatically handle this?" The algorithm listed above gives a positive answer to that. It also fully justifies the usage of "recursive" in our description of the algorithm. This algorithm may be refined by merging a parent with its only child whenever that situation, a parent with only one child, is encountered. Here is a sample implementation in JavaScript. The average complexity of the algorithm is probably $O(n\log n)$, because of the similarity of the algorithm to merge sort. However, it is not clear what the worst cases will be and what their complexity will be. It looks like it is nontrivial to find a significantly faster algorithm than the recursive algorithm above. In particular, is there a linear algorithm in terms of the number of leaves? That should be a new question, though.
{ "domain": "cs.stackexchange", "id": 11939, "tags": "algorithms, data-structures, trees" }
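For readers who want something runnable without following the JavaScript links, here is a small Python sketch of the answer's rrrTraversal/zip pseudocode; the tree encoding (leaves as plain values, internal nodes as lists of subtrees) is my own choice for illustration, not from the original posts:

```python
def zip_lists(list_of_lists):
    """Round-robin merge of several lists, following the answer's zip pseudocode."""
    zipped = []
    pending = [list(l) for l in list_of_lists if l]   # copy; drop empty lists up front
    while pending:
        for l in pending:
            zipped.append(l.pop(0))                   # take the front item of each list
        pending = [l for l in pending if l]           # discard lists that ran out
    return zipped

def rrr_traversal(tree):
    """Leaves are plain values; internal nodes are Python lists of subtrees."""
    if not isinstance(tree, list):
        return [tree]
    return zip_lists([rrr_traversal(subtree) for subtree in tree])

# A toy tree: root with a two-leaf subtree and two bare leaves.
print(rrr_traversal([["A", "B"], "D", "F"]))   # ['A', 'D', 'F', 'B']
print(zip_lists([[1, 2], [3], [4, 5, 6]]))     # [1, 3, 4, 2, 5, 6]
```

The first round visits one leaf per subtree (A, D, F) before any subtree gets a second turn (B), which is exactly the no-sibling-gets-ahead property the question describes.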
catkin_make fail : must be invoked in the root of workspace
Question: Please... Please help. I am trying to use rplidar, but I have this error below......... I checked the tutorial again and again but I really don't know how to fix this.. Please help me. ubuntu@ubuntu:~/catkin_ws$ cd ~/catkin_ws/src ubuntu@ubuntu:~/catkin_ws/src$ git clone https://github.com/robopeak/rplidar_ros fatal: destination path 'rplidar_ros' already exists and is not an empty directory. ubuntu@ubuntu:~/catkin_ws/src$ ls ~/catkin_ws/src CMakeLists.txt beginner_tutorials elp_stereo_camera rplidar_ros ubuntu@ubuntu:~/catkin_ws/src$ ls ~/catkin_ws/src/rplidar_ros CHANGELOG.rst CMakeLists.txt LICENSE README.md launch package.xml rplidar_A1.png rplidar_A2.png rviz scripts sdk src ubuntu@ubuntu:~/catkin_ws/src$ cd ~/catkin_ws ubuntu@ubuntu:~/catkin_ws$ catkin_make rplidarNode Base path: /home/ubuntu/catkin_ws Source space: /home/ubuntu/catkin_ws/src The specified base path "/home/ubuntu/catkin_ws" contains a CMakeLists.txt but "catkin_make" must be invoked in the root of workspace After I input ls -al /home/ubuntu/catkin_ws I got below! ubuntu@ubuntu:~/catkin_ws$ ls -al /home/ubuntu/catkin_ws total 24 drwxrwxr-x 5 ubuntu ubuntu 4096 Jan 25 15:04 . drwxr-xr-x 22 ubuntu ubuntu 4096 Jan 25 23:16 .. -rw-rw-r-- 1 ubuntu ubuntu 98 Jan 24 13:34 .catkin_workspace lrwxrwxrwx 1 ubuntu ubuntu 49 Jan 25 15:04 CMakeLists.txt -> /opt/ros/indigo/share/catkin/cmake/toplevel.cmake drwxrwxr-x 9 ubuntu ubuntu 4096 Jan 24 17:54 build drwxrwxr-x 4 ubuntu ubuntu 4096 Jan 24 17:54 devel drwxrwxr-x 5 ubuntu ubuntu 4096 Jan 25 15:39 src Originally posted by sophie.shin on ROS Answers with karma: 15 on 2017-01-25 Post score: 0 Answer: ubuntu@ubuntu:~/catkin_ws$ catkin_make rplidarNode Base path: /home/ubuntu/catkin_ws Source space: /home/ubuntu/catkin_ws/src The specified base path "/home/ubuntu/catkin_ws" contains a CMakeLists.txt but "catkin_make" must be invoked in the root of workspace There is probably a CMakeLists.txt in /home/ubuntu/catkin_ws. There shouldn't be. 
Can you add the output of: ls -al /home/ubuntu/catkin_ws to your original question? Use the edit button/link for that. Edit: You are right! I have a CMakeLists.txt in /home/ubuntu/catkin_ws. I'd use unlink: unlink /home/ubuntu/catkin_ws/CMakeLists.txt Then try catkin_make again. Originally posted by gvdhoorn with karma: 86574 on 2017-01-25 This answer was ACCEPTED on the original site Post score: 12 Original comments Comment by sophie.shin on 2017-01-25: I added output!! Comment by sophie.shin on 2017-01-25: You are right! I have a CMakeLists.txt in /home/ubuntu/catkin_ws. Do you think I need to delete this file? can I use mouse right button to remove this file? Comment by sophie.shin on 2017-01-25: Thank you so much!! you are my hero! Comment by Ferreira_Robson on 2020-12-19: Thank you Guys !! I had the same problem ! May I ask : I need to remove the CMakeLists.txt ? Comment by wbadry on 2021-06-16: Amazing. Thank you so much.
{ "domain": "robotics.stackexchange", "id": 26827, "tags": "ros, catkin-make, catkin, rplidar" }
unable to install rosserial on my laptop
Question: ankit@ubuntu:~$ sudo apt-get install rosserial_python Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package rosserial_python Originally posted by anantsarin on ROS Answers with karma: 1 on 2014-03-04 Post score: 0 Answer: The ROS debs are prefixed with 'ros-{distro}-' and have underscores replaced with dashes (a limitation of the deb naming rules). On Hydro, try installing rosserial as: sudo apt-get install ros-hydro-rosserial-python Originally posted by ahendrix with karma: 47576 on 2014-03-04 This answer was ACCEPTED on the original site Post score: 3
{ "domain": "robotics.stackexchange", "id": 17166, "tags": "rosserial" }
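The renaming rule in the answer is mechanical enough to express in one line; a hypothetical helper (the function name is made up for illustration, it is not part of any ROS tooling):

```python
def ros_deb_name(package, distro):
    """Apply the answer's rule: prefix 'ros-<distro>-' and replace
    underscores with dashes (deb package names cannot contain '_')."""
    return "ros-{}-{}".format(distro, package.replace("_", "-"))

print(ros_deb_name("rosserial_python", "hydro"))  # ros-hydro-rosserial-python
```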
What is the temperature of an accretion disc surrounding a supermassive black hole?
Question: What is the temperature of an accretion disc surrounding a supermassive black hole? Is there plasma in the disc? Answer: It depends on the distance from the central body. This gives the temperature $T$ at a given point as a function of the distance from that point to the center ($R$): $$T(R)=\left[\frac{3GM \dot{M}}{8 \pi \sigma R^3} \left(1-\sqrt{\frac{R_{\text{inner}}}{R}} \right) \right]^{\frac{1}{4}}$$ where $G$, $\pi$, and $\sigma$ are the familiar constants, $M$ is the mass of the central body (and $\dot{M}$ is the rate of accretion onto the body), and $R_{\text{inner}}$ is the inner radius of the disk - possibly (if the object is a black hole) the Schwarzschild radius $R_s$, in which case we can simplify this a little more. So the temperature in the accretion disk is far from constant. Whether or not there is plasma depends on the exact nature of the disk, the central object and the region around it. For example, a supermassive black hole may have different matter in its disk than that of a stellar-mass black hole. I should think, though, that black holes in binary systems accreting mass from a companion should have plasma in their accretion disks, and supermassive black holes might also have plasma from nearby stars.
{ "domain": "astronomy.stackexchange", "id": 682, "tags": "supermassive-black-hole, temperature, accretion-discs" }
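The profile $T(R)$ above is straightforward to evaluate numerically. A sketch in Python, using made-up but plausible inputs (a $10^8$ solar-mass black hole accreting one solar mass per year) and the Schwarzschild radius as the inner edge, as the answer suggests:

```python
import math

G       = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
SIGMA   = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
C_LIGHT = 2.998e8     # speed of light, m/s
M_SUN   = 1.989e30    # solar mass, kg

def disk_temperature(R, M, M_dot, R_inner):
    """Thin-disc temperature profile T(R) from the formula in the answer."""
    prefactor = (3 * G * M * M_dot) / (8 * math.pi * SIGMA * R**3)
    return (prefactor * (1 - math.sqrt(R_inner / R))) ** 0.25

# Illustrative (assumed) inputs: 1e8 solar masses, one solar mass per year.
M     = 1e8 * M_SUN
M_dot = M_SUN / (365.25 * 24 * 3600)
R_s   = 2 * G * M / C_LIGHT**2          # Schwarzschild radius as inner edge

for r in (2, 10, 100):
    print(f"T({r} R_s) ~ {disk_temperature(r * R_s, M, M_dot, R_s):,.0f} K")
```

Beyond the inner edge the temperature falls roughly as $R^{-3/4}$, which is the sense in which the answer says the disc temperature is "far from constant".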
ROS Answers SE migration: roscore error
Question: Hi, I installed ROS onto Gumstix Overo and roscore command works before. But after I tried to download some ROS packages (though I didn't succeed), I found roscore does't work anymore. This is the error message: root@linaro-alip:~# roscore WARNING: unable to configure logging. No log files will be generated Checking log directory for disk usage. This may take awhile. Press Ctrl-C to interrupt Done checking log file disk usage. Usage is <1GB. started roslaunch server http://linaro-alip:46926/ ros_comm version 1.9.41 SUMMARY ======== PARAMETERS * /rosdistro * /rosversion NODES auto-starting new master process[master]: started with pid [1170] Traceback (most recent call last): File "/root/ros_core_ws/install/bin/rosmaster", line 35, in <module> rosmaster.rosmaster_main() File "/root/ros_core_ws/install/lib/python2.7/dist- packages/rosmaster/main.py", line 73, in rosmaster_main configure_logging() File "/root/ros_core_ws/install/lib/python2.7/dist- packages/rosmaster/main.py", line 57, in configure_logging _log_filename = rosgraph.roslogging.configure_logging('rosmaster', logging.DEBUG, filename=filename) File "/root/ros_core_ws/install/lib/python2.7/dist-packages/rosgraph/roslogging.py", line 104, in configure_logging logging.config.fileConfig(config_file, disable_existing_loggers=False) File "/usr/lib/python2.7/logging/config.py", line 78, in fileConfig handlers = _install_handlers(cp, formatters) File "/usr/lib/python2.7/logging/config.py", line 153, in _install_handlers klass = _resolve(klass) File "/usr/lib/python2.7/logging/config.py", line 94, in _resolve __import__(used) ImportError: No module named RosStreamHandler [master] process has died [pid 1170, exit code 1, cmd rosmaster --core -p 11311 __log:=/root/.ros/log/ba8ca0d2-6f47-11e2-91a2-0015c928fd79/master.log]. 
log file: /root/.ros/log/ba8ca0d2-6f47-11e2-91a2-0015c928fd79/master*.log ERROR: could not contact master [http://linaro-alip:11311/] [master] killing on exit This is the output of roswtf: root@linaro-alip:~# roswtf No package or stack in context ================================================================================ Static checks summary: Found 3 error(s). ERROR Not all paths in ROS_PACKAGE_PATH[/root/catkin_ws/src:/root/ros_core_ws/install/share:/root/ros_core_ws/install/stacks] point to an existing directory: * /root/ros_core_ws/install/stacks ERROR Not all paths in PYTHONPATH [/root/catkin_ws/devel/lib/python2.7/dist-packages:/root/ros_core_ws/install/lib/python2.7/dist-packages] point to a directory: * /root/catkin_ws/devel/lib/python2.7/dist-packages ERROR ROS_TEST_RESULTS_DIR is invalid: ROS_TEST_RESULTS_DIR[/root/catkin_ws/build/test_results] is not writable ================================================================================ ROS Master does not appear to be running. Online graph checks will not be run. ROS_MASTER_URI is [http://localhost:11311] This is environment variables output: root@linaro-alip:~# echo $PYTHONPATH /root/catkin_ws/devel/lib/python2.7/dist-packages:/root/ros_core_ws/install/lib/python2.7/dist-packages root@linaro-alip:~# echo $ROS_ROOT /root/ros_core_ws/install/share/ros root@linaro-alip:~# echo $ROS_MASTER_URI http://localhost:11311 root@linaro-alip:~# echo $ROS_PACKAGE_PATH /root/catkin_ws/src:/root/ros_core_ws/install/share:/root/ros_core_ws/install/stacks Anyone can help me? Thanks in advance!!! Originally posted by AdrianPeng on ROS Answers with karma: 441 on 2013-02-04 Post score: 1 Original comments Comment by tfoote on 2013-02-04: It's likely you lost your environment when you were doing other things such that the libraries are not in your PYTHON_PATH. Can you update your question to include your environment variables? 
Comment by AdrianPeng on 2013-02-05: Hi tfoote, I updated my question with environment variables output, could you please have a look? Thanks! Comment by AdrianPeng on 2013-02-05: Ok, when I delete "source /root/catkin_ws/devel/setup.bash" in .bashrc, roscore works. Maybe I have done something wrong with catkin_ws. Anyway, when I recreate catkin_ws and source it in .bashrc, it works. Answer: I solved this problem by deleting "source /root/catkin_ws/devel/setup.bash" in .bashrc and recreating a catkin workspace. Originally posted by AdrianPeng with karma: 441 on 2013-02-05 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 12739, "tags": "ros, roslaunch, roscore, roswtf" }
Why does "bulk" yellow ink look red?
Question: A transparent blue inkjet cartridge looks deep blue, a red one looks deep red, but a yellow one looks red. Tea also looks yellowish when it's shallow and reddish otherwise. Red is another colour, a different wavelength, so why does deep yellow look red? Answer: Not all liquids or things that are yellow become red when they exist in large quantities. For example, beer, vegetable oil, and normal urine never look red even in large quantities, like in a vat. It really depends on the optical properties of the liquid and the suspended particles in it. The optical properties of something depend on both its intrinsic factors such as its reflectance, transmittance, and absorptance of the light and the extrinsic factors such as its thickness, the surface that it’s on, the angle of light incidence, and the polarization of the incident light. (This is not to mention psychological factors like the mood when you look at that thing or the surrounding colors or compositions of that thing, which can make that thing look different.) That’s why a thin film of water is colorless but a large body of water like a lake looks blue. And that’s why there is a phenomenon called iridescence – the phenomenon that certain surfaces appear to change color as the angle of view or the angle of illumination changes – like changing rainbow colors on a soap bubble or on a colorless CD surface. In your case, I think the color reflection and transmission properties of the ink are different. The ink probably reflects mainly the yellow color, so it looks yellow when reflection of colors is the major event – as when the ink is painted on a surface. But the ink probably transmits more of the red color than the yellow color, so it looks red when transmission of colors is the major event – as when you look through it. N.B. I think you’ll probably get better answers than mine by posting the question on the Physics forum. References. II.8. Reflection, Transmission, and Absorption Gigahertz-Optik.
Elements that Affect the Appearance of Color Konica Minolta. Iridescence Wikipedia.
{ "domain": "physics.stackexchange", "id": 65162, "tags": "visible-light" }
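One standard way to make the thickness dependence quantitative is Beer–Lambert attenuation, $T = 10^{-\alpha L}$: if the ink absorbs the yellow-green band slightly more strongly than red (an assumption here, consistent with the answer's "transmits more of the red color"), the transmitted yellow/red ratio falls with path length, so a deep layer skews red. The absorption coefficients below are invented for illustration; only their ordering matters:

```python
# Invented absorption coefficients (per mm) for three wavelength bands;
# the assumed ordering (blue absorbed strongly, yellow a little more
# than red) is what produces the yellow-to-red shift with depth.
alpha = {"blue": 2.0, "yellow": 0.30, "red": 0.05}

def transmitted(band, path_mm):
    """Beer-Lambert transmittance: T = 10 ** (-alpha * L)."""
    return 10 ** (-alpha[band] * path_mm)

for L in (0.1, 2.0, 10.0):
    ratio = transmitted("yellow", L) / transmitted("red", L)
    print(f"path {L:5.1f} mm: yellow/red transmitted ratio = {ratio:.3f}")
```

With these numbers a 0.1 mm film transmits yellow and red almost equally (looks yellow, since blue is removed), while at 10 mm the yellow component is suppressed by orders of magnitude relative to red.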
Product on Tensor Products
Question: I'm trying to understand how products on tensor products work. For instance, in quantum mechanics, you have ($x$ tensor $y$) times ($z$ tensor $a$), where $x$, $y$, $z$, $a$ are all operators acting on a Hilbert space. I want to believe that it's just $xz$ tensor $ya$, but I'm looking online and that only applies for von Neumann algebras (http://en.wikipedia.org/wiki/Von_Neumann_algebra), but I'm not sure if the operators in quantum mechanics form a ring. Answer: If you are a mathematician that just wants to prove results you can try to make up definitions of products of "operators" and try to make a $W^*$ or a $C^*$ algebra out of them (and make them a ring or not or whatever you want). But you are being concrete and saying that these are operators on a Hilbert Space, so you can answer your own question by looking at how the operators act on the Hilbert Space. For instance if you take a tensor product of Hilbert Spaces, with elements $v\otimes w$ and your operator $V\otimes W$ is defined by $(V\otimes W)(v\otimes w)= (Vv)\otimes (Ww)$, then, since you've said how your operators act, we can deduce that: $$(V_2\otimes W_2)\circ (V_1\otimes W_1)=(V_2\circ V_1)\otimes (W_2\circ W_1).$$ Since both obviously send $v\otimes w$ to $V_2(V_1(v)) \otimes W_2(W_1(w))$.
{ "domain": "physics.stackexchange", "id": 20298, "tags": "quantum-mechanics, operators, tensor-calculus" }
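In finite dimensions the composition rule $(V_2\otimes W_2)\circ(V_1\otimes W_1)=(V_2\circ V_1)\otimes(W_2\circ W_1)$ is exactly the mixed-product property of Kronecker products, which is easy to check numerically. A small self-contained sketch (plain Python lists instead of NumPy, so nothing beyond the standard library is needed):

```python
def matmul(A, B):
    """Plain matrix product of nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def kron(A, B):
    """Kronecker (tensor) product: entry at row (i,k), column (j,l)
    is A[i][j] * B[k][l]."""
    return [[A[i][j] * B[k][l]
             for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

V1, W1 = [[1, 2], [3, 4]], [[0, 1], [1, 0]]
V2, W2 = [[2, 0], [1, 1]], [[1, 1], [0, 1]]

lhs = matmul(kron(V2, W2), kron(V1, W1))    # (V2 (x) W2)(V1 (x) W1)
rhs = kron(matmul(V2, V1), matmul(W2, W1))  # (V2 V1) (x) (W2 W1)
print(lhs == rhs)  # True
```

Since the arithmetic is exact integer arithmetic, equality of the nested lists is an exact check of the identity for these particular 2×2 matrices.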
Fixing Medical Claim Files through Text File Read/Write v3
Question: This is another review on a program I've asked about before, now translated from VBA into C#. I'm sure I've brought over a lot of bad habits with me, so I'm spotlighting some key areas I'd love open-ended feedback on. This is a WPF desktop app. Program Function As before, this program makes emergency changes to medical claim files, delivers all the corrected files analysts need, and produces a changelog as well for their review. Main method - Fix names changed to protect the innocent class FixTextFiles { [System.STAThreadAttribute()] //[System.Diagnostics.DebuggerNonUserCodeAttribute()] //[System.CodeDom.Compiler.GeneratedCodeAttribute("PresentationBuildTasks", "4.0.0.0")] static void Main() { var model = new Model(); ActiveFixes activeFixes = new ActiveFixes(); ActiveReports activeReports = new ActiveReports(); FileInfo[] files = null; if (!ModelIsSetUpBasedOnArgumentsOrUI(model, activeFixes, activeReports, ref files)) { return; } var DoubleProgressBar = new ViewPlusViewModel.DoubleProgressBar(); DoubleProgressBar.ProgressBarFilesCompleted.Maximum = files.GetUpperBound(0) + 1; DoubleProgressBar.Show(); DoubleProgressBar.Activate(); for (var i = files.GetLowerBound(0); i <= files.GetUpperBound(0); i++) { DoubleProgressBar.ProgressBarFilesCompleted.Value = i; DoubleProgressBar.LabelFilesCompleted.Content = "Files Completed: " + i + " / " + DoubleProgressBar.ProgressBarFilesCompleted.Maximum + ", " + files[i].Name; string entireFile = File.ReadAllText(files[i].FullName); if (entireFile != string.Empty) { var originalFileLines = entireFile.Split(new string[] { model.Delimiter1 }, StringSplitOptions.None); var revisedFileLines = entireFile.Split(new string[] { model.Delimiter1 }, StringSplitOptions.None); activeFixes.ResetIndicatorsOnNewFile(); activeReports.ResetIndicatorsOnNewFile(); DoubleProgressBar.ProgressBarLinesCompleted.Maximum = originalFileLines.GetUpperBound(0) + 1; // --- Begin Manipulation --- for (var currentLineNumber = 
originalFileLines.GetLowerBound(0); currentLineNumber <= originalFileLines.GetUpperBound(0); currentLineNumber++) { DoubleProgressBar.ProgressBarLinesCompleted.Value = currentLineNumber; DoubleProgressBar.LabelLinesCompleted.Content = "Lines Completed: " + currentLineNumber + " / " + DoubleProgressBar.ProgressBarLinesCompleted.Maximum; // --- Ongoing variables --- var currentLine = originalFileLines[currentLineNumber]; var segmentType = currentLine.Substring(0, currentLine.IndexOf(model.Delimiter3)); // --- Fixes --- if (activeFixes.FixClassThatFixesIndividualLines1 != null) { // check things and do FixClass1 methods } if (activeFixes.FixClassThatFixesIndividualLines2 != null) { // check things and do FixClass2 methods } // repeat for 40+ fix/report classes } DoubleProgressBar.ProgressBarLinesCompleted.Value = DoubleProgressBar.ProgressBarLinesCompleted.Maximum; DoubleProgressBar.ProgressBarFilesCompleted.Value = i + 1; if (activeReports.Count == 0) { if (model.CreateChangelogSheets) { WriteOriginalAndUpdatedSegmentsToCSVAndNoteDifferences(originalFileLines, revisedFileLines, files[i]); } WriteUpdatedSegmentsToNewFile(revisedFileLines, model.Delimiter1, files[i], model.FixedFilesDestination); } if (activeReports.ReportClassThatReportsOnWholeFiles1 != null) { // check things and do ReportClass1 methods } if (activeReports.ReportClassThatReportsOnWholeFiles2 != null) { // check things and do ReportClass2 methods } } } activeFixes.ReprotectSettingsSheets(); activeReports.ReprotectSettingsSheets(); var tempDirectoryPath = Path.Combine(new string[] { Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData), "(Program Name)" }); if (Directory.Exists(tempDirectoryPath)) { try { Directory.Delete(tempDirectoryPath, true); } catch (IOException e) { } } UpdateTelemetryFile(files[0].DirectoryName, files.GetUpperBound(0) + 1, activeFixes, activeReports); if (model.LaunchedFromBatchFile == false) { MessageBox.Show("All files have been fixed and saved to the same 
folders as the source files with \"Fixed\" added to their filenames. No original files have been modified."); } } This is mostly a console app with some light UI tacked on for easy user configuration - start getting ready for user config private static bool ModelIsSetUpBasedOnArgumentsOrUI(Model model, ActiveFixes activeFixes, ActiveReports activeReports, ref FileInfo[] files) { var commandLineArgs = Environment.GetCommandLineArgs(); switch (commandLineArgs.GetUpperBound(0)) { case 0: if (!UserPromptedSettingsWereWrittenToModel(ref model, ref activeFixes, ref activeReports)) { return false; } files = GetFilesToWorkWith(); if (files.Length == 0) { return false; } else { return true; } case 7: // Model is setup based on command line arguments instead } } This is the major UI portion - user goes through a sequence of two main config windows (and potentially one child window) before selecting their files with GetFilesToWorkWith() public static bool UserPromptedSettingsWereWrittenToModel(ref Model model, ref ActiveFixes activeFixes, ref ActiveReports activeReports) { var viewModel = new ViewModel(); viewModel.Setup(); var tempDirectoryPath = Path.Combine(new string[] { Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData), "(Program name)", "Model Data" }); if (!Directory.Exists(tempDirectoryPath)) { Directory.CreateDirectory(tempDirectoryPath); } var tempFilePath = Path.Combine(new string[] { tempDirectoryPath, "Fixes Reports List.csv" }); File.WriteAllText(tempFilePath, Properties.Resources.Fixes_Reports_List); viewModel.FixesReportsTable = GeneralTools.GetDataTableFromCSV(tempFilePath, "|", true, false); var parseSettings = new ViewPlusViewModel.ParseSettings(); parseSettings.InitializeComponent(); var fixSelector = new ViewPlusViewModel.FixSelector(viewModel); fixSelector.InitializeComponent(); var seeAllFixesReports = new ViewPlusViewModel.SeeAllFixesReports(viewModel); seeAllFixesReports.InitializeComponent(); parseSettings.ShowDialog(); var 
nextWindowToOpen = "TBD"; while (!string.IsNullOrEmpty(nextWindowToOpen) && !nextWindowToOpen.Equals("Select Text Files")) { switch (GetNextWindowToOpen(parseSettings, fixSelector, seeAllFixesReports)) { case "Parse Settings": parseSettings.ShowDialog(); break; case "Fix Selector": fixSelector.ShowDialog(); break; case "See All Fixes And Reports": if (fixSelector.FixesOrReports.Equals("Fixes")) { seeAllFixesReports.UpdateTableType("Fix"); } if (fixSelector.FixesOrReports.Equals("Reports")) { seeAllFixesReports.UpdateTableType("Report"); } seeAllFixesReports.ShowDialog(); break; case "Select Text Files": nextWindowToOpen = "Select Text Files"; break; case null: parseSettings.Close(); if (fixSelector.IsLoaded) fixSelector.Close(); if (seeAllFixesReports.IsLoaded) { seeAllFixesReports.Close(); } return false; } } if (fixSelector.FixesOrReports.Equals("Fixes")) { activeFixes.Setup(fixSelector.ActiveFixes, viewModel.FixesReportsTable); } if (fixSelector.FixesOrReports.Equals("Reports")) { activeReports.Setup(fixSelector.ActiveReports, viewModel.FixesReportsTable); } model.Setup(parseSettings.Delimiter1, parseSettings.Delimiter2, parseSettings.Delimiter3, parseSettings.Delimiter4, Convert.ToBoolean(parseSettings.CreateChangelogSheets)); parseSettings.Close(); fixSelector.Close(); if (seeAllFixesReports.IsLoaded) { seeAllFixesReports.Close(); } return true; } Since I'm newing and ShowDialoging UIwindows from the code rather than Main()ing in a window in a more pure WPF fashion, I use code to determine where to go next private static string GetNextWindowToOpen(ViewPlusViewModel.ParseSettings parseSettings, ViewPlusViewModel.FixSelector fixSelector, ViewPlusViewModel.SeeAllFixesReports seeAllFixesReports) { if (fixSelector.GoBack) { fixSelector.GoBack = false; return "Parse Settings"; } if (parseSettings.GoToNextWindow || seeAllFixesReports.GoToNextWindow || seeAllFixesReports.GoBack) { parseSettings.GoToNextWindow = false; seeAllFixesReports.GoToNextWindow = false; 
seeAllFixesReports.GoBack = false; return "Fix Selector"; } if (fixSelector.GoToChildWindow) { fixSelector.GoToChildWindow = false; return "See All Fixes And Reports"; } if (fixSelector.GoToNextWindow) { fixSelector.GoToNextWindow = false; return "Select Text Files"; } return null; } The first UI window, ParseSettings - users put in 5 values for the Model. Delimiter names changed to protect the innocent. public partial class ParseSettings : Window { public static readonly DependencyProperty Delimiter1Property = DependencyProperty.Register("Delimiter1", typeof(string), typeof(ParseSettings), new PropertyMetadata("(Default A)")); public static readonly DependencyProperty Delimiter2Property = DependencyProperty.Register("Delimiter2", typeof(string), typeof(ParseSettings)); public static readonly DependencyProperty Delimiter3Property = DependencyProperty.Register("Delimiter3", typeof(string), typeof(ParseSettings), new PropertyMetadata("(Default B)")); public static readonly DependencyProperty Delimiter4Property = DependencyProperty.Register("Delimiter4", typeof(string), typeof(ParseSettings), new PropertyMetadata("(Default C)")); public static readonly DependencyProperty CreateChangelogSheetsProperty = DependencyProperty.Register("CreateChangelogSheets", typeof(string), typeof(ParseSettings), new PropertyMetadata("True")); public string Delimiter1 { get { return GetValue(Delimiter1Property) as string; } set { SetValue(Delimiter1Property, value); } } public string Delimiter2 { get { return GetValue(Delimiter2Property) as string; } set { SetValue(Delimiter2Property, value); } } public string Delimiter3 { get { return GetValue(Delimiter3Property) as string; } set { SetValue(Delimiter3Property, value); } } public string Delimiter4 { get { return GetValue(Delimiter4Property) as string; } set { SetValue(Delimiter4Property, value); } } public string CreateChangelogSheets { get { return GetValue(CreateChangelogSheetsProperty) as string; } set { 
SetValue(CreateChangelogSheetsProperty, value); } } public bool GoToNextWindow { get; set; } public ParseSettings() { InitializeComponent(); GoToNextWindow = false; } protected void MakeChangelogSheetsButton_Clicked(object sender, RoutedEventArgs e) { var senderAsButton = sender as Button; if (senderAsButton == ButtonCreateChangelogSheets) { switch (this.CreateChangelogSheets) { case "True": this.CreateChangelogSheets = "False"; senderAsButton.Background = Brushes.Pink; break; case "False": this.CreateChangelogSheets = "True"; senderAsButton.Background = Brushes.PaleGreen; break; } } } private void SeeIfGoToNextWindowButtonCanBeEnabled(object sender, TextChangedEventArgs e) { if (Validation.GetHasError(this.InputDelimiter1) || Validation.GetHasError(this.InputDelimiter2) || Validation.GetHasError(this.InputDelimiter3) || Validation.GetHasError(this.InputDelimiter4)) { this.ButtonSelectFixesOrReports.IsEnabled = false; } else { this.ButtonSelectFixesOrReports.IsEnabled = true; } } private void GoToNextWindow_Click(object sender, RoutedEventArgs e) { if (sender == this.ButtonSelectFixesOrReports) { this.GoToNextWindow = true; } this.Hide(); } } The second UI window, FixSelector - users can search for desired fixes or pick from a telemetry file-populated list of the most popular. 
Each fix listed in either search results or the popular options is a button in a ListView that can be clicked to activate the fix, change the button's color visually, and add or remove it from appropriate ListView-populating dictionaries public partial class FixSelector : Window { public static readonly DependencyProperty FixesOrReportsProperty = DependencyProperty.Register("FixesOrReports", typeof(string), typeof(FixSelector), new PropertyMetadata("Fixes")); public static readonly DependencyProperty SegmentFixNameSearchProperty = DependencyProperty.Register("SegmentFixNameSearch", typeof(string), typeof(FixSelector)); public string FixesOrReports { get { return GetValue(FixesOrReportsProperty) as string; } set { SetValue(FixesOrReportsProperty, value); } } public string SegmentFixNameSearch { get { return GetValue(SegmentFixNameSearchProperty) as string; } set { SetValue(SegmentFixNameSearchProperty, value); } } public bool GoToNextWindow { get; set; } public bool GoToChildWindow { get; set; } public bool GoBack { get; set; } public List<(string FixOrReportName, long CountInTelemetryFile)> MostPopularFixes { get; set; } public List<(string FixOrReportName, long CountInTelemetryFile)> MostPopularReports { get; set; } public DataTable CompleteFixesReportsTable { get; set; } public DataTable CompatibilityTable { get; set; } public Dictionary<string, string> ActiveFixes { get; set; } public Dictionary<string, string> ActiveReports { get; set; } public Dictionary<Button, Button> ClickedButtonsByAddedButtonsInSelectedFixes { get; set; } public Dictionary<Button, Button> ClickedButtonsByAddedButtonsInSelectedReports { get; set; } public Dictionary<Button, Button> AddedButtonsInSelectedFixesByClickedButtons { get; set; } public Dictionary<Button, Button> AddedButtonsInSelectedReportsByClickedButtons { get; set; } public FixSelector(ViewModel viewModel) { InitializeComponent(); GoToNextWindow = false; GoToChildWindow = false; GoBack = false; MostPopularFixes = 
viewModel.MostPopularFixes; MostPopularReports = viewModel.MostPopularReports; PopulateMostPopularOptionsListView(); CompleteFixesReportsTable = viewModel.FixesReportsTable; CompatibilityTable = viewModel.CompatibilityTable; ActiveFixes = new Dictionary<string, string>(); ActiveReports = new Dictionary<string, string>(); ClickedButtonsByAddedButtonsInSelectedFixes = new Dictionary<Button, Button>(); ClickedButtonsByAddedButtonsInSelectedReports = new Dictionary<Button, Button>(); AddedButtonsInSelectedFixesByClickedButtons = new Dictionary<Button, Button>(); AddedButtonsInSelectedReportsByClickedButtons = new Dictionary<Button, Button>(); } private void PopulateMostPopularOptionsListView() { this.MostPopularOptions.Items.Clear(); if (this.FixesOrReports.Equals("Fixes")) { if (!(this.MostPopularFixes is null)) { for (var i = 0; i < this.MostPopularFixes.Count; i++) { var button = new Button { Content = this.MostPopularFixes[i].FixOrReportName }; button.Click += new RoutedEventHandler(this.IndividualFixReportButton_Click); this.MostPopularOptions.Items.Add(button); } } } if (this.FixesOrReports.Equals("Reports")) { if (!(this.MostPopularReports is null)) { for (var i = 0; i < this.MostPopularReports.Count; i++) { var button = new Button { Content = this.MostPopularReports[i].FixOrReportName }; button.Click += new RoutedEventHandler(this.IndividualFixReportButton_Click); this.MostPopularOptions.Items.Add(button); } } } } protected void FixesOrReportsButton_Clicked(object sender, RoutedEventArgs e) { var senderAsButton = sender as Button; if (senderAsButton == ButtonFixesOrReports) { switch (this.FixesOrReports) { case "Fixes": this.FixesOrReports = "Reports"; senderAsButton.Background = Brushes.LightBlue; PopulateMostPopularOptionsListView(); this.SelectedFixes.Visibility = Visibility.Hidden; this.SelectedReports.Visibility = Visibility.Visible; if (this.ActiveReports.Count == 0) { this.ButtonSelectTextFiles.IsEnabled = false; } break; case "Reports": 
this.FixesOrReports = "Fixes"; senderAsButton.Background = Brushes.Orange; PopulateMostPopularOptionsListView(); this.SelectedFixes.Visibility = Visibility.Visible; this.SelectedReports.Visibility = Visibility.Hidden; this.ButtonSelectTextFiles.IsEnabled = true; break; } PopulateSearchResults(this.SearchResults, e as TextChangedEventArgs); } } protected void PopulateSearchResults(object sender, TextChangedEventArgs e) { this.SearchResults.Items.Clear(); if (string.IsNullOrEmpty(this.SegmentFixNameSearch)) { return; } string typeToReturn; if (this.FixesOrReports.Equals("Fixes")) { typeToReturn = "Fix"; } else { typeToReturn = "Report"; } var nameFilter = GetCompleteFixesReportsTableRowFilterNameExpression(); this.CompleteFixesReportsTable.DefaultView.RowFilter = nameFilter + " AND FixOrReport = '" + typeToReturn + "'"; var matchingFixesTable = this.CompleteFixesReportsTable.DefaultView.ToTable(); var matchingFixesList = new List<(string Name, string Description)>(); for (var i = 0; i < matchingFixesTable.Rows.Count; i++) { matchingFixesList.Add((Name: matchingFixesTable.Rows[i]["Name"].ToString(), Description: matchingFixesTable.Rows[i]["Description"].ToString())); } for (var i = 0; i < matchingFixesList.Count; i++) { var button = new Button { Content = matchingFixesList[i].Name, ToolTip = matchingFixesList[i].Description }; button.Click += new RoutedEventHandler(this.IndividualFixReportButton_Click); if (typeToReturn.Equals("Fix")) { if (this.ActiveFixes.ContainsKey(button.Content.ToString())) { button.Background = Brushes.PaleGreen; } } if (typeToReturn.Equals("Report")) { if (this.ActiveReports.ContainsKey(button.Content.ToString())) { button.Background = Brushes.PaleGreen; } } this.SearchResults.Items.Add(button); } } private void IndividualFixReportButton_Click(object sender, RoutedEventArgs e) { var senderAsButton = sender as Button; if (senderAsButton.Background == Brushes.PaleGreen) { senderAsButton.Background = (Brush)new 
BrushConverter().ConvertFrom("#FFDDDDDD"); RemoveFixOrReportFromAppropriateDictionary(senderAsButton); } else { senderAsButton.Background = Brushes.PaleGreen; AddFixOrReportToAppropriateDictionary(senderAsButton); } } private void GoToNextWindow_Click(object sender, RoutedEventArgs e) { if (sender == ButtonSelectTextFiles) { this.GoToNextWindow = true; } if (sender == ButtonBackToParseSettings) { this.GoBack = true; } if (sender == ButtonSeeAllFixesReports) { this.GoToChildWindow = true; } this.Hide(); } } Last major section: Based upon the dictionary of selected fixes or reports, here's how those ActiveFixes/ActiveReports classes get populated class ActiveFixes { public long Count { get; set; } public DataTable FixesReportsTable { get; set; } public Fixes.FixClass1 FixClass1 { get; set; } public Fixes.FixClass2 FixClass2 { get; set; } // etc. for 40+ fixes public ActiveFixes() { Count = 0; } public void Setup(Dictionary<string, string> selectionsFromFixSelector, DataTable fixesReportsTable) { this.FixesReportsTable = fixesReportsTable; InstantiateSelectionsAsProperties(selectionsFromFixSelector); var properties = this.GetType().GetProperties(); for (var i = properties.GetLowerBound(0); i <= properties.GetUpperBound(0); i++) { if (!properties[i].Name.Equals("Count") && !properties[i].Name.Equals("FixesReportsTable")) { if (properties[i].GetValue(this) != null) { this.Count += 1; var activeFix = properties[i].GetValue(this) as IFixReport; var fixName = ConversionMethods.GetFixReportNameFromIFixReportName(properties[i].Name); var fixRows = this.FixesReportsTable.Select("Name = '" + fixName + "'"); activeFix.Settings = _(ProgramName)Tools.GetSettingsSheet(fixRows[0]["SettingsSheet"].ToString()); if (activeFix.Settings != null) { activeFix.Settings.Unprotect(); activeFix.ConfirmAndSpeedUpSettingsSheet(); } activeFix.Setup(); } } } } public void InstantiateSelectionsAsProperties(Dictionary<string, string> selectionsFromFixSelector) { foreach (var key in 
selectionsFromFixSelector.Keys) { switch (key) { case "Fix Class 1 Full Name": this.FixClass1 = new Fixes.FixClass1(); break; case "Fix Class 2 Full Name": this.FixClass2 = new Fixes.FixClass2(); break; // etc. } } Answer: A couple of small pointers. I'm skipping over larger architecture issues because it seems that this code is not a complete example of working code. Local variable names usually start with a lower-case letter, so I would suggest renaming DoubleProgressBar to doubleProgressBar. You are iterating over files with for (var i = files.GetLowerBound(0); i <= files.GetUpperBound(0); i++). Why not just use the ordinary for (var i = 0; i < files.Length; i++)? You are using strings in switch cases. I would suggest using enums, because those are safer to change, even if they are not as expressive as a string can be. Also, with enums you get better support from tools. In Main you seem to be using regular string concatenation ("a" + "b" + "c"). Consider using string interpolation or string.Format(). In Main you are setting files = null and then passing it as a ref parameter. When you are initializing a value inside a method ("multiple return values"), you should use out (see Int32.TryParse for example). You could also use tuples to avoid out and ref (C# 7 onwards). On a general note, I would avoid methods that do multiple things. The method ModelIsSetUpBasedOnArgumentsOrUI does answer that question, but it also initializes the files. If possible, those should be done in separate methods to make the code easier to understand.
{ "domain": "codereview.stackexchange", "id": 36827, "tags": "c#, wpf" }
Decidability of Unary Languages / One-to-One Mapping
Question: I'm trying to prove that there exists an undecidable subset of {1}* by showing a one-to-one correspondence between it and {0, 1}* (which would imply a one-to-one correspondence between their power sets), but I'm struggling with how to do the one-to-one mapping. Isn't it just surjective? That is, there's one unary representation of potentially many binary strings (e.g., 1 = 01 = 0000000000001). What am I misunderstanding here? Or am I just taking the wrong overall strategy? (This isn't homework; I'm reviewing for a midterm, and it's a little concerning I'm getting tripped up here) Answer: Instead of mapping the string $x\in\{0,1\}^*$ to $1^{\mathrm{bin}(x)}$ (where $\mathrm{bin}(x)$ is the number denoted by interpreting $x$ as a string in binary), map it to $1^{\mathrm{bin}(1x)}$. Now every string in $\{0,1\}^*$ maps to a unique number of $1$s.
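The trick in this answer is easy to check computationally. A small sketch (the helper name is mine, not from the answer): prefixing a 1 before interpreting the string in binary makes the map injective, and in fact it enumerates every positive integer exactly once, giving a bijection between $\{0,1\}^*$ and $\{1^n : n \ge 1\}$.

```python
from itertools import product

def to_unary_length(x: str) -> int:
    """Map x in {0,1}* to bin(1x): interpret '1' + x as a binary numeral."""
    return int("1" + x, 2)

# Enumerate all binary strings of length 0..L (including the empty string).
L = 10
strings = [""] + ["".join(bits) for n in range(1, L + 1)
                  for bits in product("01", repeat=n)]

images = [to_unary_length(x) for x in strings]

# Injective: no two strings collide...
assert len(set(images)) == len(images)
# ...and together they hit every positive integer up to 2^(L+1) - 1,
# so the map is a bijection onto {1, 2, 3, ...}.
assert sorted(images) == list(range(1, 2 ** (L + 1)))
```

Since the image is exactly the positive integers, every unary string $1^n$ with $n \ge 1$ corresponds to exactly one binary string, which is the correspondence the question was after.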
{ "domain": "cs.stackexchange", "id": 7603, "tags": "turing-machines, undecidability, uncountability, mapreduce" }
Spin of Supermassive Black Hole
Question: How long does it take for the black hole at the center of our galaxy to make 1 full rotation? Answer: The spin rate of the black hole at the centre of our Galaxy has not yet been established. However, X-ray observations of gas falling into the supermassive black holes in many other galaxies appear to have established that most rotate at close to the maximum possible. If we assume the 4 million solar mass black hole in our Galaxy is not unusual, then it too will have an event horizon moving at more than 60% of the speed of light. As the spin increases, the event horizon shrinks, becoming half the Schwarzschild radius at the maximal spin. So for a 4 million solar mass black hole, the event horizon will be perhaps a little bigger than $GM/c^2 \simeq 6$ million km. If rotating near the speed of light, then an answer to your question would be a bit more than 2 minutes.
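The closing estimate can be checked with back-of-envelope arithmetic that mirrors the answer's own reasoning (an illustrative circumference-over-speed estimate, not a proper Kerr-metric calculation):

```python
import math

# Physical constants (SI) and the answer's assumed mass.
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg
M = 4e6 * M_sun      # ~4 million solar mass black hole

r = G * M / c**2     # event-horizon scale for a fast spinner, GM/c^2
circumference = 2 * math.pi * r

# Rotation period if the horizon material moved at c; slower motion
# lengthens this, hence "a bit more than 2 minutes".
t = circumference / c
print(r / 1e9)       # ~5.9  (i.e. ~6 million km, matching the answer)
print(t)             # ~124 s
```

At more than 60% of $c$ rather than exactly $c$, the period stretches correspondingly, consistent with the "a bit more than 2 minutes" figure.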
{ "domain": "physics.stackexchange", "id": 43794, "tags": "black-holes, astronomy, galaxies, angular-velocity, milky-way" }
Twin Paradox: Fewer Seconds or Shorter Seconds?
Question: In Einstein's Twin Paradox thought experiment, the travel time is shorter for the traveling brother than for the stationary brother. Since the SI unit of time in physics is the second, is the travel duration shorter because: It contains fewer seconds or The number of seconds is the same for both brothers, but the traveler's seconds are shorter? Answer: Interesting question. Let's first understand what is actually happening to the twins' times. The twins are A and B. A remains on Earth, B travels with the speed $v$ forward, and then turns at some point and travels back with the speed $-v$. Let's assume that the turn time is negligible compared to the total travel time (i.e. B turns back almost immediately). Dots on the line $Ob$ correspond to the units of time in the frame of the twin A. Similarly, dots on the line $O\alpha b$ correspond to the units of time in the frame of the twin B. By counting the dots on both lines, you can see that the line $O\alpha b$ "contains fewer seconds". That is why twin B is actually younger than twin A when they meet at the end of the journey. To understand why the picture is not symmetric w.r.t. A and B, look at the events $\alpha_1$ and $\alpha_2$. These are the events that are simultaneous (in the reference frame of B) with the event $\alpha$ just before and just after the turn. It looks like the "simultaneity" jumped from $\alpha_1$ (which is an earlier event from A's perspective) to $\alpha_2$ (which is a much later event from A's perspective). This effect, however, is not observable. The second picture shows the light signals sent from A to B (say, A sends B his photo after every second passed in A's frame). From B's perspective, the photos reach him at roughly equal intervals - a bit longer before the turn, and a bit shorter after the turn. From B's perspective, A's "seconds" are always "short" during the trip, but they are even "shorter" after the turn than before the turn. Conclusion: The number of seconds is different.
The duration of A's seconds is shorter from B's perspective, and vice versa (symmetrically). All the magic happens during the turn, when "simultaneity" jumps from $\alpha_1$ to $\alpha_2$.
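The "fewer seconds" conclusion can also be read off directly from the proper-time formula. For the simplified trip above (instantaneous turnaround, speed $v$ both ways, total coordinate time $T$ in A's frame):

```latex
\tau_A = T, \qquad
\tau_B = \int_0^T \sqrt{1 - \frac{v^2}{c^2}}\, dt
       = T\sqrt{1 - \frac{v^2}{c^2}} \;<\; \tau_A .
```

So B's clock accumulates fewer ticks over the whole journey, even though during each inertial leg each twin measures the other's seconds as dilated; the asymmetry enters only through B's turnaround.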
{ "domain": "physics.stackexchange", "id": 90064, "tags": "special-relativity, spacetime, time" }
Minimize $\sum_i||Y_i-AX_i||^2$
Question: I have N data vectors $X_i$ and N target vectors $Y_i$ where $i$ indexes the sample. I would like to learn a linear map $A$ between the data and the target i.e find the matrix $A$ that minimize $$\sum_i^N||Y_i-AX_i||^2.$$ Is that a well know machine learning problem ? What would be the equivalent model in scikit-learn ? I thought this is a linear regression, but in scikit-learn the documentation of the linear regression states LinearRegression fits a linear model with coefficients w = (w1, …, wp) to minimize the residual sum of squares between the observed targets in the dataset, and the targets predicted by the linear approximation. So it seems like scikit-learn's linear regression learns a list of coefficients w = (w1, …, wp), not a matrix A. Answer: This is a normal linear regression where the target variable has multiple components. This is often referred to as "multivariate linear regression". To implement it with scikit-learn, you can use a normal LinearRegression model. Given that you have no intercept term, you should use fit_intercept=False. After fitting the model, you can find $A$ in the coef_ attribute of the model: coef_: array of shape (n_features, ) or (n_targets, n_features) Estimated coefficients for the linear regression problem. If multiple targets are passed during the fit (y 2D), this is a 2D array of shape (n_targets, n_features), while if only one target is passed, this is a 1D array of length n_features.
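As a sanity check on the shapes involved, here is a small sketch solving the same least-squares problem with plain NumPy (the closed-form route rather than scikit-learn, but equivalent to LinearRegression with fit_intercept=False on noise-free data): stacking the samples as rows, the minimizer of $\sum_i \|Y_i - AX_i\|^2$ is the ordinary least-squares solution, and $A$ has shape (n_targets, n_features), matching the documented shape of coef_.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features, n_targets = 200, 5, 3

A_true = rng.normal(size=(n_targets, n_features))
X = rng.normal(size=(n_samples, n_features))   # rows are the X_i
Y = X @ A_true.T                                # rows are the Y_i = A X_i

# Solve min_W ||Y - X W||_F^2; then A = W.T has shape (n_targets, n_features),
# which is exactly what sklearn exposes as model.coef_ for a 2-D target.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
A_hat = W.T

print(A_hat.shape)                  # (3, 5)
print(np.allclose(A_hat, A_true))   # True on this noise-free data
```

With noisy targets the recovered matrix is the least-squares estimate rather than an exact recovery, but the shapes and the correspondence to coef_ are the same.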
{ "domain": "datascience.stackexchange", "id": 12162, "tags": "scikit-learn, linear-regression, linear-models" }
Conservation in space-time curvature
Question: Pardon this possibly naive question. I'm starting to poke around in the topic of General Relativity (as soon as I can pull myself back up out of the vortex of underlying mathematics that I've gotten sucked into) and started to wonder this: is there any sort of "conservation" law(s) associated with space-time curvature? Perhaps I'm stuck trying to visualize the effects of mass (or acceleration), so let me explain my question a bit more. If the observable universe is expanding from every observer's viewpoint, one model that supports this consists of the observable universe on the surface of an expanding "sphere." In trying to visualize the curvature of the surface of that "sphere," I began to wonder if the "inward" bulging of the "sphere" might not somehow need to be compensated for by a corresponding "outward" bulge elsewhere? Does this question make sense? Answer: Your visualization is a good one for exploring the "no-center" concept of the universe - that is, if you only count the universe as the boundary of the hyper-sphere. Technically, though, it could be wrong. As you'll find as you look at GR (if you haven't already), there are three types of curvature: positive, negative, and flat. A positively-curved surface is like the surface of a sphere. An example of a negatively-curved surface is a saddle. A surface with flat curvature is ordinary Euclidean space - like a perfectly flat tabletop. So why would a physicist call your idea possibly incorrect? Well, while the "shape of the universe" has been debated for decades, there are some signs that it is flat (Wikipedia covers this pretty well in https://en.wikipedia.org/wiki/Shape_of_the_Universe - see the data for WMAP). Now, this data is not conclusive proof that the universe is flat - other curvatures are still possible - but it seems to swing in favor of a flat universe. There's still one thing to answer, though - your main question about "conservation" of curvature.
Well, on a flat universe, what is the curvature? There is none, and so - at least in our universe - the question is moot. I hope this helps.
{ "domain": "physics.stackexchange", "id": 18819, "tags": "general-relativity, conservation-laws, space-expansion, curvature" }
pointcloud2 data
Question: Hello, I'm currently using a Kinect to receive image messages and PointCloud2 messages and build a depth image from the message information. Then, I want to do some classification, label each point in the point cloud, and send it back as a ROS message. When I send back this point cloud data with a label for each point as a message, how should I carry the label information? Thank you Min Originally posted by ros_beginner on ROS Answers with karma: 11 on 2011-08-12 Post score: 1 Answer: One possible solution is to add a new field to the pcl::PointCloud datatype (that is, create a custom point type in PCL). There is a tutorial on how to do this on the PCL website: http://pointclouds.org/documentation/tutorials/adding_custom_ptype.php Then, you would simply publish this new point cloud with an additional custom label field. Originally posted by Helen with karma: 261 on 2011-09-02 This answer was ACCEPTED on the original site Post score: 1
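On the message side, a sensor_msgs/PointCloud2 is just a flat byte buffer whose layout is described by a list of PointField entries, so a per-point label can travel as one extra field. The sketch below is plain Python struct packing (field names and offsets are illustrative, and in C++ you would define a custom PCL point type as the linked tutorial shows); it packs x/y/z as float32 plus a uint32 label, the layout you would declare in the message's fields array:

```python
import struct

# Per-point layout mirroring a PointCloud2 "fields" declaration of
#   x (FLOAT32, offset 0), y (FLOAT32, 4), z (FLOAT32, 8), label (UINT32, 12)
POINT_FMT = "<fffI"                       # little-endian
POINT_STEP = struct.calcsize(POINT_FMT)   # 16 bytes per point

def pack_labeled_points(points):
    """points: iterable of (x, y, z, label) tuples -> raw data buffer."""
    return b"".join(struct.pack(POINT_FMT, x, y, z, lbl)
                    for (x, y, z, lbl) in points)

def unpack_labeled_points(data):
    """Inverse of pack_labeled_points: buffer -> list of (x, y, z, label)."""
    return [struct.unpack_from(POINT_FMT, data, off)
            for off in range(0, len(data), POINT_STEP)]

cloud = [(0.1, 0.2, 0.3, 7), (1.0, 2.0, 3.0, 42)]
data = pack_labeled_points(cloud)
print(POINT_STEP)                          # 16
print(unpack_labeled_points(data)[1][3])   # 42
```

The subscriber on the other end recovers the labels by reading the same field layout, which is exactly what PCL's custom point type generates for you in C++.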
{ "domain": "robotics.stackexchange", "id": 6401, "tags": "ros, messages, pointcloud" }
Is it experimentally verified that neutrinos are affected by gravity?
Question: If neutrinos (or any other particles) weren't affected by gravity, that would contradict the general theory of relativity. I'm convinced that the postulate of the equivalence between inertial mass and gravitational mass is adequate, but not totally convinced that it is the truth. From my dialectical point of view there is no total unity. In every distinction there is a gap to explore. What is supposed to be identical will be found split and complex when examined more closely. And therefore I would like to know: has it been experimentally verified that neutrinos are affected by gravity? Answer: It would help if you gave some context. Is there any evidence, or even theoretical work, that suggests neutrinos are not affected by gravity? I suppose you could argue that the similar arrival times of photons and neutrinos from SN 1987A was evidence that neutrinos and photons are following the same path through spacetime and both being "gravitationally delayed" by the same amount as they travel from the Large Magellanic Cloud (see Shapiro delay). However, I am unsure to what extent this is degenerate/confused with assumptions about the neutrino masses. There must also be indirect evidence in the sense that if neutrinos had mass but were unaffected by gravity, then the large scale structure in the universe could look very different. However, I feel that given neutrinos are already an example of hot dark matter, such a signature could be extremely elusive. Firm evidence may need new neutrino telescopes. One test would be to search for neutrinos from the centres of other stars using the gravitational focusing effect of the Sun. There are predictions that, for instance, the neutrinos from Sirius would be focused at around 25 au from the Sun and would have an intensity about one hundredth of the neutrino flux from the Sun at the Earth. Such a detection would be very clear evidence that neutrinos are being affected by gravity as expected (Demkov & Puchkov 2000).
In a similar vein, any positive detection of the cosmic neutrino background should be modulated by gravitational focusing by the Sun at the level of about 1 per cent (Safdi et al. 2014). This is because an isotropic neutrino background will form a "wind" that the Sun passes through. When the Earth is leeward of the Sun, neutrinos would be gravitationally focused and there should be a larger flux.
{ "domain": "physics.stackexchange", "id": 23529, "tags": "gravity, neutrinos, inertia" }
Translation of coordinates to generalised coordinates
Question: The translation from the $r_i$ to the $q_j$ language starts from the transformation equations: $r_i=r_i (q_1,q_2,…,q_n,t)$ (assuming $n$ independent coordinates). It is carried out by means of the usual "chain rules" of the calculus of partial differentiation: $$ v_i\equiv \frac{d r_i}{dt}= \sum_k \frac{\partial r_i}{\partial q_k} \dot{q_k} + \frac{\partial r_i}{\partial t}. $$ Similarly, the arbitrary virtual displacement $\delta r_i$ can be connected with the virtual displacements $\delta q_j$ by: $$ \delta r_i= \sum_j \frac{\partial r_i}{\partial q_j} \delta q_j. $$ I am having trouble understanding how the first equation is derived, and where the second equation comes from. I am having trouble applying the chain rule in this context, and I was wondering if a more detailed derivation could be given. I am also having trouble understanding where the second equation (the arbitrary virtual displacement) comes from. These equations appear in the context of d'Alembert's principle and Lagrange's equations. Answer: A total derivative of a function $f(x_1,x_2,\cdots ,x_n)$ is defined as $$df\equiv \sum_{i=1}^n\left(\frac{\partial f}{\partial x_i}\right)dx_i.$$ In your situation, the total differential of a function $r_i(q_1,\cdots ,q_n,t)$ is given by $$dr_i= \sum_{j=1}^n\left(\frac{\partial r_i}{\partial q_j}\right)dq_j+\frac{\partial r_i}{\partial t}dt,$$ or $$\boxed{v_i\equiv\frac{dr_i}{dt}= \sum_{j=1}^n\left(\frac{\partial r_i}{\partial q_j}\right)\dot{q_j}+\frac{\partial r_i}{\partial t}.}$$ Considering a virtual displacement ($dt=0$) we get: $$\boxed{\delta r_i= \sum_{j=1}^n\left(\frac{\partial r_i}{\partial q_j}\right)\delta q_j.}$$
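A concrete case makes the chain rule less abstract. Take a particle in the plane with generalised coordinates $(\rho,\varphi)$, so $\mathbf{r}=(\rho\cos\varphi,\ \rho\sin\varphi)$ with no explicit time dependence; the boxed velocity formula then reads:

```latex
\mathbf{v}
  = \frac{\partial \mathbf{r}}{\partial \rho}\,\dot{\rho}
  + \frac{\partial \mathbf{r}}{\partial \varphi}\,\dot{\varphi}
  = (\cos\varphi,\ \sin\varphi)\,\dot{\rho}
  + (-\rho\sin\varphi,\ \rho\cos\varphi)\,\dot{\varphi}.
```

Freezing time ($dt=0$, so each $\dot{q}_j\,dt$ becomes $\delta q_j$) gives the corresponding virtual displacement $\delta\mathbf{r} = (\cos\varphi,\ \sin\varphi)\,\delta\rho + (-\rho\sin\varphi,\ \rho\cos\varphi)\,\delta\varphi$, which is exactly the second boxed equation for this choice of coordinates.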
{ "domain": "physics.stackexchange", "id": 74734, "tags": "classical-mechanics, coordinate-systems, differentiation, calculus" }
Why do we use "true labels" that are based on the output of our network in Deep Q-Learning?
Question: In the original DQN paper, the $\ell_2$ loss is taken over the distance between our network output, $\hat{q}(s_j,a_j,w)$, and the labels $y_j=r_j+\gamma \cdot \max\limits_{a'} \hat{q}(s_{j+1},a',w^-)$, where $\hat{q}(s_{j+1},a',w^-)$ is our network with static weights $w^-$ that are updated to be $w^-=w$ every $C$ steps. This is troubling to me, as $y_j$ aren't really "true" labels as we know them from supervised learning, so why should I even think that this loss updates the weights such that the output policy is something meaningful? It seems as if my network could output some arbitrary $\hat{q}$, and with respect to this $\hat{q}$ I will try to minimize a loss. But when $\hat{q}$ isn't "optimal" per se, it is not clear that we can converge to an optimal policy. Answer: The labels in DQN, and in Q-learning in general, are not "true" in the sense that they represent optimal action value functions. Instead they represent approximate action values of a current target policy. The target policy changes every C time steps, when the network with static weights is updated. This update will include both corrections to the action value approximations, and changes to which actions are considered optimal. The reason this converges towards an optimal action value function is related to the policy improvement theorem. With function approximation, as in DQN, the convergence is not guaranteed, but the process is still based on the same idea. In summary it is a two-step repeated feedback process: learn the value function of the current policy; update the policy to select actions with maximum values. What this means for the TD target "labels" in DQN is: They are not ground truth for the optimal action value function, until after the algorithm has converged. They are biased, initially almost completely arbitrarily by however the target network has been initialised, and from then on due to a slowly-reducing impact from that initial bias and from lagging behind collected data.
They are non-stationary. This means an online learning model class is required (neural networks are fine). It is also the reason why many Deep RL algorithms can suffer from catastrophic forgetting. When using experience replay, the TD target labels should be recalculated each time they are used.
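The target construction the answer describes is easy to state in code. A toy sketch (not the paper's implementation: the "networks" here are stand-in tabular arrays, and all shapes and numbers are illustrative assumptions) showing both the $y_j$ computation and why the labels are non-stationary:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 4, 2, 0.9

# Stand-in "networks": tabular q-values (online weights w, frozen weights w^-).
q_online = rng.normal(size=(n_states, n_actions))
q_target = q_online.copy()          # w^- starts as a copy of w

def td_targets(rewards, next_states, dones):
    """y_j = r_j + gamma * max_a' q(s_{j+1}, a'; w^-), bootstrap cut at terminals."""
    bootstrap = q_target[next_states].max(axis=1)
    return rewards + gamma * (1.0 - dones) * bootstrap

rewards = np.array([1.0, 0.0, -1.0])
next_states = np.array([0, 2, 3])
dones = np.array([0.0, 0.0, 1.0])   # last transition ends the episode

y = td_targets(rewards, next_states, dones)

# When w^- is synced/changed, the very same transitions get new targets.
q_target = 0.5 * q_online           # pretend the frozen net changed
y2 = td_targets(rewards, next_states, dones)
```

Note how `y` and `y2` differ on the non-terminal transitions even though the data is identical; that is exactly the non-stationarity of the labels discussed above.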
{ "domain": "ai.stackexchange", "id": 3345, "tags": "reinforcement-learning, deep-rl, dqn, objective-functions" }
Design narrow bandpass filter for signal with high sampling rate
Question: I have to bandpass-filter a signal which has been sampled at 4000 Hz. Only frequencies around 15 Hz shall remain after filtering (let's say in a band between 10 Hz and 20 Hz; the narrower the better). I have several questions here: What is the recommended way to perform tasks like this? What bandpass filter is suitable? Do I need to downsample the signal before filtering? If yes, what type of additional low pass filter should I use to avoid aliasing? In my case, it is important that the phase of the original signal will not be distorted or shifted in the whole process. Furthermore, the filter step response should have close to zero overshoot. The settling time could therefore be a little larger. I do not want to perform a sophisticated filter design which delivers optimal results for exactly this specific signal. I am more interested in a general solution that delivers solid results using standard IIR or FIR filters (e.g. as available in Python's SciPy library) which could be reused afterwards for similar tasks. UPDATE: After the answer of @MarcusMüller and the provided links, I basically tried every FIR filter design method available in Python's SciPy library and decided to go with remez.
I developed the following code which may be used for further discussion:

import matplotlib.pyplot as plt
import numpy as np
from scipy.signal import lfilter, remez

F_test = 20.0
duration = 10.0
fs = 8000
samples = int(fs*duration)
t = np.arange(samples) / fs
signal_test = (5.0 * t * np.sin(2.0*np.pi*F_test*t)) + (0.5 * np.sin(2.0*np.pi*5.0*t)) + (0.5 * np.sin(2.0*np.pi*100.0*t))

# design filter
ntaps = 5000
edges = [0, F_test - 5.0, F_test - 2.5, F_test + 2.5, F_test + 5.0, 0.5 * fs]
taps = remez(ntaps, edges, [0, 1, 0], Hz=fs, maxiter=2500)

# apply filter
signal_test_filtered = lfilter(taps, 1, signal_test)

# create plot
fig = plt.figure()
ax0 = fig.add_subplot(111)
ax0.plot(t, signal_test_filtered, label='signal_test_filtered')
ax0.set_xlabel("time [s]")
ax0.legend()
fig.show()

Answer: So, first, to put things into perspective: 4 kHz is not a high sampling rate these days (add 5 orders of magnitude, and things become hard). Your 15 Hz passband doesn't say anything about the complexity of the filter; what counts is the transition width, i.e. the distance between pass- and stopband, as well as the attenuation of the stopband. For a slightly specialized answer, see the answers to http://dsp.stackexchange.com/questions/31066/how-many-taps-does-an-fir-filter-need/31077. What is the recommended way to perform tasks like this? Do a filter design, apply the filter. What bandpass filter is suitable? Any bandpass filter that suits your needs – which you haven't specified. What's missing is:

- transition width
- stop-band attenuation
- acceptable ripple
- length constraints

What we know is:

- You want a linear phase filter (because linear phase means constant group delay, and that means un-broken phase relationships)
- Thus, you probably want a FIR that is symmetric in time

Furthermore, the filter step response should have close to zero overshoot. Gibbs' phenomenon is non-negotiable :) so yeah, use a longish filter with a nice rolloff.
The settling time could therefore be a little larger. That will be the effect, yes. You could use a filter design tool that allows you to design with a windowing method and use a window that suits your application well – which I don't know. I am more interested in a general solution that delivers solid results using standard IIR or FIR filters (e.g. as available in Python's SciPy library). Well, yeah, that's what I'd recommend, but you say just shortly above that you don't want to do a filter design? I'm a bit conflicted. Anyway, there are a lot of functions that will give you a proper design. I'd recommend the following:

- Design a low-pass FIR filter
- Convert it to a bandpass filter, by multiplying with a cosine (real-valued data, symmetric filter) or $e^{j2\pi\frac{f_{center}}{f_s}\cdot n}$ (complex-valued data, one-sided filter) as needed.

Designing is easy, something along these lines (untested, straight from the back of my head):

from scipy.signal import fir_filter_design
from math import cos, pi

f_center = ToBeDefined!!!
f_cut = 15
f_s = 4e3
f_rel = f_s/2/f_cut
transition_width = ToBeDefined!!! # e.g. 10
trans_rel = f_s/2/transition_width
attenuation = ToBeDefined!!! # e.g. 10.0**-5
num_of_taps = estimate_by_reading_my_answer_above(f_rel, trans_rel, attenuation)
lowpass_taps = fir_filter_design.firwin2(numtaps=num_of_taps, cutoff=f_rel, window = "hamming"|"blackmanharris"|"hann"|"chebwin"|…)
bandpass_taps = [lp_tap * cos(2*pi*f_center/f_s*n) for n, lp_tap in enumerate(lowpass_taps)]

Do I need to downsample the signal before filtering? If yes, what type of additional low pass filter should I use to avoid aliasing? Well, considering you won't need most of your signal, yes, that seems advantageous. Typically, you'd just filter away (and decimate in the same step) to 1/N of your initial sampling rate.
For example, if your bandpass lies about 100 Hz - 15 / 2 Hz to 100 Hz + 15 / 2 Hz, you won't need anything above 200 Hz – so just decimate to 1/20 of your input sampling rate, with a low pass filter that has 1/40 input sampling rate transition width. But again, 4 kHz is laughably little, and a couple thousand taps won't stress your PC at all. If in doubt,

from scipy import signal
signal.convolve(your_input_signal, filter_taps, method="fft")

does a fast convolution. (You can also just omit the method, or set it to "auto", because scipy will just pick fast convolution for long filters, automatically.) For a bit of an impression on what's possible with halfway-optimized code on a PC, see my answer to how to implement signal generation on a GPU (and why not); there, I showed that you can do a 107-tap filter at 160 MS/s on my PC back then, so you should be able to do a let's say 700 tap filter at easily 20 MS/s (just a wild guess). That's only 5 thousand times as fast as your sampling.
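To make the lowpass-then-modulate recipe from this answer concrete, here is a small runnable sketch; the tap count, window choice and band numbers are illustrative assumptions, not a tuned design for the 10-20 Hz spec:

```python
import numpy as np
from scipy.signal import firwin, freqz

fs = 4000.0          # input sampling rate (Hz)
f_center = 15.0      # band of interest is centered here
half_bw = 5.0        # keep roughly 10..20 Hz
numtaps = 2001       # odd -> symmetric type-I FIR, exactly linear phase

# 1) prototype lowpass with cutoff = half the desired bandwidth
lp = firwin(numtaps, cutoff=half_bw, fs=fs, window="blackmanharris")

# 2) shift it to f_center by modulating with a cosine (real-valued signals)
n = np.arange(numtaps)
bp = 2.0 * lp * np.cos(2.0 * np.pi * f_center / fs * (n - (numtaps - 1) / 2))

# sanity-check the frequency response
w, h = freqz(bp, worN=8192, fs=fs)
gain_at_center = np.abs(h[np.argmin(np.abs(w - f_center))])
gain_at_100 = np.abs(h[np.argmin(np.abs(w - 100.0))])
```

The factor of 2 compensates for the cosine splitting the response between $\pm f_{center}$, and centering the cosine on the filter midpoint keeps the taps symmetric, i.e. exactly linear phase.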
{ "domain": "dsp.stackexchange", "id": 5427, "tags": "filters, discrete-signals, signal-analysis, filter-design, bandpass" }
How to calculate the redshift of reionization?
Question: I am trying to calculate equality redshifts given the Omega parameters. For example, given $\Omega_L = 0.6889083$, $\Omega_M = 0.311$, $\Omega_R = 9.17 \times 10^{-5}$ and $\Omega_K = 0$, with $H_0 = 67.7$ km/sec/Mpc. The redshift when matter was equal to radiation can be calculated as follows: $$ Z_{eq} = \Omega_M/\Omega_R - 1 = 0.311/0.0000917 - 1 = 3390.49 $$ $(1)$ The redshift when dark energy and matter were equal can be calculated as follows: $$ \Omega_L = \Omega_M/a^3 \rightarrow z = (\Omega_L/\Omega_M)^{1/3} - 1 = (0.6889083/0.311)^{1/3} - 1 = 0.3036 $$ $(2)$ But how do you calculate the following: the redshift when dark energy was equal to radiation (that is, at reionization)? As per the first answer below, the answer in part is as follows: $$z = (\Omega_L / \Omega_R)^{1/4} - 1 $$ $(3)$ However, I was hoping for a different answer that includes the reionization optical depth $\tau$, such as: $$z = 92\,(0.03\,(H_0/100)\,\tau / (\Omega_b h^{2}))^{2/3}\,\Omega_M^{1/3} $$ $(4)$ where $\Omega_b h^{2}$ is the physical baryon density parameter as referenced in the link in the comment below. The problem is the "92" and "0.03" figures. The result from that formula is close to the real one as obtained at (3). How are these two figures derived? Answer: Dark energy had equal density to radiation when $\Omega_L = \Omega_R/a^4$ (the dark energy density is constant while radiation dilutes as $a^{-4}$). The redshift is $z = 1/a - 1$. Therefore $a = (\Omega_R/\Omega_L)^{1/4}$ and $z = (\Omega_L/\Omega_R)^{1/4} - 1$.
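The three density-equality redshifts from equations (1)-(3) can be checked in a few lines; this only evaluates the formulas above with the quoted parameters and does not address the "92"/"0.03" fit in (4):

```python
omega_L = 0.6889083
omega_M = 0.311
omega_R = 9.17e-5

# matter-radiation equality: Omega_M / a^3 = Omega_R / a^4
z_mr = omega_M / omega_R - 1                # ~ 3390.5

# dark energy-matter equality: Omega_L = Omega_M / a^3
z_lm = (omega_L / omega_M) ** (1 / 3) - 1   # ~ 0.304

# dark energy-radiation equality: Omega_L = Omega_R / a^4
z_lr = (omega_L / omega_R) ** (1 / 4) - 1   # ~ 8.31
```

The value from (3) lands near $z \approx 8.3$, which happens to be numerically close to typical reionization-redshift estimates; presumably that coincidence is why formula (4), which really parametrizes reionization via $\tau$, was being compared against it.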
{ "domain": "physics.stackexchange", "id": 87230, "tags": "cosmology, universe, redshift" }
Question On Balancing
Question: I recently worked on a lab which involved the following reaction: $$\ce{HCl(aq) + CuO(s) -> CuCl2(aq) + H2O(l)}$$ In my description of the reaction I wrote the following: "The equation shown above represents a multi-step reaction between copper (II) oxide and hydrochloric acid in aqueous solution. To predict the products, we must recall that a metal oxide turns into a metal hydroxide when placed in water. The initial copper ion had a +2 charge, so the hydroxides must have a net charge of –2. This is accomplished through two hydroxyl groups. Once copper (II) hydroxide has formed, it may react with hydrochloric acid. These reactants undergo a special type of double replacement reaction known as a neutralization reaction. In essence, the metals combine ($\ce{CuCl2}$) and water is formed." From here, I was wondering how exactly to balance the equation. Specifically, do I have (1) $\ce{CuO(s) + 2 HCl(aq) -> Cu(OH)2(aq) + 2 HCl(aq) -> CuCl2(aq) + 2 H2O(l)}$ or (2) $\ce{CuO(s) + 2 HCl(aq) -> Cu(OH)2(aq) + 2 HCl(aq) -> CuCl2(aq) + H2O(l)}$? It seems like I can "cancel out" a water molecule when going from the middle reactants to the final products. Is this true? Answer: As andselisk points out, CuO is poorly soluble, but the hydroxide is listed in the CRC Handbook (blue, insoluble, no numbers), so it can be assumed to exist under some conditions. The Reference Book of Inorganic Chemistry, Latimer and Hildebrand, 3rd ed, 1951, p110, mentions Cu(OH)2 being formed in solution by addition of hydroxide, cold, to Cu++ as green or bluish-green (in hot solution, black CuO is formed). Cu(OH)2 is a very weak base; K = 5.6e-20 (this must be Ksp). Now, your experiment: you start with black CuO and wind up with green CuCl2. Because the blue/bluish-green Cu(OH)2 color will be so faint owing to its low solubility, I'm going to suggest that under ordinary concentrations of HCl (~0.1 - 1 M), you will not be able to detect the very fast appearance and disappearance of Cu(OH)2 in solution.
However, with a large excess of CuO and very dilute HCl, you might, under some conditions, be able to get some Cu(OH)2 into solution if you did it delicately. But I wouldn't think this would be easy. If you wanted to see Cu(OH)2, the way to go would be to add cold NaOH solution to a cold copper salt solution. Now, for your equations: #1 is not balanced: you have two total hydrogens on the left and four on the right. #2 is balanced right and left, but the middle has too many hydrogens. I think you overthought the reaction. It might be two-step, but that intermediate step is very difficult to investigate and can be swept under the carpet. But it's better to overthink and walk back than to underthink and miss the boat.
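The hydrogen bookkeeping in the last paragraph can be mechanized. A tiny illustrative checker (formulas written out by hand as atom counts, no formula parsing):

```python
from collections import Counter

# atom counts per formula unit, written out by hand for this reaction
CuO   = Counter({"Cu": 1, "O": 1})
HCl   = Counter({"H": 1, "Cl": 1})
CuCl2 = Counter({"Cu": 1, "Cl": 2})
H2O   = Counter({"H": 2, "O": 1})

def side(*terms):
    """Sum atom counts for (coefficient, formula) pairs on one side."""
    total = Counter()
    for coeff, formula in terms:
        for atom, n in formula.items():
            total[atom] += coeff * n
    return total

left  = side((1, CuO), (2, HCl))
right = side((1, CuCl2), (1, H2O))
print(left == right)   # -> True
```

`left == right` confirms the overall equation in #2 balances, while adding a second water (as in the final step of #1) breaks the hydrogen count.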
{ "domain": "chemistry.stackexchange", "id": 13111, "tags": "stoichiometry" }
What is the ratio of "real" stars in the sky?
Question: When you look at the sky without any aids like binoculars or a telescope, what is the rough ratio of "real" stars to other objects like planets, the moon, asteroids, and satellites? 90% stars - 10% others? 99% stars - 1% others? Almost 100% stars - apart from a very few you can easily name as a hobby astronomer? Answer: It's almost 100% stars. In good conditions, you can see perhaps 2000 stars. (There are about 6000 naked-eye visible stars; of these, 3000 are above the horizon at any time, and about 1000 are hidden because they're too close to the horizon and blocked by the atmosphere.) The number of non-star objects you can see without assistance is tiny in comparison:

- The Sun (obviously not at night, and of course it's a star)
- The Moon
- The International Space Station, but only a tiny fraction of the time
- The visible planets: Mercury, Venus, Mars, Jupiter, Saturn (and maybe Uranus if you have excellent conditions and better eyes than mine)
- A handful of other artificial satellites might be visible, but only rarely.
- Comets can be very visible, but again that's rare.
- Visible asteroids are even rarer. Vesta, a large asteroid, may be barely visible, but I've never seen it.

Note that some of these objects are quite obviously not distant stars just based on their appearance. Some galaxies (Andromeda, and the two Magellanic Clouds if you're far enough south) are visible, but they don't look like stars. A few of the brighter nebulas and globular clusters might be visible; the latter are groupings of stars, so I don't know how they'd count. Meteors and aircraft can be visible, but they're within the atmosphere, and probably not covered by your question.
{ "domain": "astronomy.stackexchange", "id": 572, "tags": "star, near-earth-object" }
Save loose soil from erosion on a slope
Question: I have a driveway which has a kink at one point; to smoothen it out I leveled it by putting down loose soil. But it's monsoon season, and rain washed away all my soil as water comes running down the slope. In the picture, red is the soil I am putting down to level the road. Is there a way to stabilize this loose soil so that water does not wash it away? Answer: The key to preventing erosion and soil loss is to slow the speed of the water traveling down the slope during rain events and to keep the soil in place. One thing that could be done is to construct a retaining wall at the bottom of the slope. This can be expensive and, depending on the amount of soil that needs to be retained, may require an engineer to design it. If possible, the slope surface of the soil should be "moonscaped". This involves creating dips and bumps up and down and along the full length of the soil slope. The bumps should be staggered and not line up with each other. This will slow down the speed of surface water on the slope and reduce erosion. The other thing that needs to be done is to lock the grains of soil in place. This is best done with the roots of plants, and you may need to go through a staged series of plantings: grasses, shrubs and trees. Don't just plant trees and expect them to do everything. The roots of the grasses will lock soil near the surface of the slope. The shrubs will lock the soil a little deeper into the slope, and trees deeper still. After having done all of this, covering the slope with heavy mulch that will not be carried away by water will also assist in slowing the speed of surface water and help retain moisture in the soil. Geotextiles and landscape netting can be used to prevent the top portion of the soil from being washed away. Or the slope surface could be sprayed with a mixture of bitumen and grass seed. Once the grass seeds have established themselves, the bitumen will eventually degrade and be absorbed by the environment.
Construction of a drainage ditch just above the deposited soil will also help reduce the amount of water the soil slope needs to deal with. This could be a dug earth ditch or a concrete/paved ditch, as long as the run-off water is directed away from the slope. Other things that can be done are to place hay bales, or rocks, on the soil slope and on the slope above the deposited soil. These can help to reduce the speed of surface water running down the slope. If hay bales are used, they should be placed in a staggered, offset pattern, so that long drainage channels, which would lead to the formation of erosion gullies, are not created by the bales. Moonscaping of the upper natural slope, above the deposited soil slope, would also help in preserving the deposited soil slope.
{ "domain": "engineering.stackexchange", "id": 2205, "tags": "civil-engineering" }
Can two quantum systems ever NOT interact?
Question: Given that two quantum systems will always be connected by fields, is it really possible for two quantum systems to remain completely unentangled? In the abstract: no, it is not possible. In practice: entanglement is not really a black-and-white dichotomy, and the degree of entanglement matters a great deal. (Suitable measures are the purity of the state of the system, the entanglement entropy, and the entanglement spectrum, among others.) If you have a system which is in principle in a pure state, but it has some unwanted van-der-Waals interactions with a hydrogen atom on the Moon that take the purity from $\mathrm{Tr}(\rho^2)=1$ down to $\mathrm{Tr}(\rho^2)=1-10^{-10^{10}}$, does it really matter that the system is formally not factorizable? (Particularly if you take into account that no practical measurement would ever be able to detect it.) As a general rule, there are extremely few effects where the difference between a "truly" pure state, and a mixed state which is "$\epsilon$ away" from a pure state, actually makes any real difference; to the extent that such schemes exist, they are generally (and rightly) regarded with suspicion, as they represent an intolerance to noise and experimental uncertainty that is incompatible with practical reality. Edit: As pointed out in the comments, this reasoning holds only if you want the system of interest to be in a pure state. It is possible to protect against entanglement (with some pre-specified environment) by placing your system in a mixed state. But, particularly if you start from a pure state, this can only be done by bringing in some ancilla system, entangling it with your system, and then discarding your ancilla.
(It is an open question, currently down to unanswerable questions of quantum interpretation, whether it is possible to ever have a "truly mixed" state, i.e. a system in a mixed state which is not in that condition because it is actually in a globally-pure entangled state with some other system out there which we're unable to identify.)
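The purity measure $\mathrm{Tr}(\rho^2)$ used above is easy to compute for a toy system. A sketch for two qubits, where the small amplitude $\epsilon$ is an arbitrary illustrative number standing in for the "hydrogen atom on the Moon" coupling:

```python
import numpy as np

def purity(rho):
    """Tr(rho^2); equals 1 exactly for a pure state."""
    return np.real(np.trace(rho @ rho))

# two-qubit state |psi> = sqrt(1-eps)|00> + sqrt(eps)|11>:
# almost a product state, very weakly entangled for small eps
eps = 1e-6
psi = np.zeros(4)
psi[0] = np.sqrt(1 - eps)
psi[3] = np.sqrt(eps)
rho = np.outer(psi, psi)                  # global state: exactly pure

# reduced state of qubit A: partial trace over qubit B
rho4 = rho.reshape(2, 2, 2, 2)            # indices [a, b, a', b']
rho_A = np.einsum("ikjk->ij", rho4)       # trace over b = b'

print(purity(rho), purity(rho_A))
```

The global state stays exactly pure, while the reduced state of one qubit has purity $(1-\epsilon)^2+\epsilon^2 = 1-2\epsilon+2\epsilon^2$: formally not factorizable, but immeasurably close to pure for small $\epsilon$.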
{ "domain": "physics.stackexchange", "id": 74482, "tags": "quantum-mechanics, hilbert-space, quantum-entanglement, quantum-states" }
Efficiently find smallest unique substring
Question: Let's say we have a string $s$. We define a unique substring in $s$ as a substring that occurs only once in $s$. How can we efficiently find such a substring with the smallest length in $s$? The most obvious solution is in $O(n^3)$ by checking every substring. What if we can preprocess the string? Answer: You can try the suffix array approach; the suffix array of a string of length $n$ can be constructed in $O(n)$ time. There are many algorithms to construct a suffix array from a given input string. Look at the complete taxonomy here: http://www.cas.mcmaster.ca/~bill/best/algorithms/07Taxonomy.pdf For your problem you can use a suffix array coupled with some additional information to find the solution; in particular the LCP array: for each suffix, the shortest unique substring starting at that position has length one more than the larger of its LCP values with the two neighbouring suffixes in sorted order.
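For comparison with the suffix-array route, here is a straightforward baseline using substring counting (not the linear-time solution, just a simple reference implementation to validate against; each length-$L$ pass hashes $O(n)$ substrings of length $L$, so it is fast only when the answer is short):

```python
from collections import Counter

def shortest_unique_substring(s):
    """Return the leftmost shortest substring occurring exactly once in s, or None."""
    n = len(s)
    for length in range(1, n + 1):
        counts = Counter(s[i:i + length] for i in range(n - length + 1))
        for i in range(n - length + 1):
            sub = s[i:i + length]
            if counts[sub] == 1:
                return sub
    return None

print(shortest_unique_substring("ababc"))   # -> "c"
print(shortest_unique_substring("aaaa"))    # -> "aaaa"
```

The suffix-array/LCP solution replaces the inner counting pass with the neighbour-LCP computation sketched in the answer.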
{ "domain": "cs.stackexchange", "id": 9215, "tags": "algorithms, data-structures, regular-expressions, strings, pattern-recognition" }
When is a DNA sequence a gene?
Question: I am a newbie in DNA sequencing and bioinformatics. I am writing a school project that determines whether a DNA sequence is a gene or not using a machine learning algorithm, the Hidden Markov Model. After understanding the algorithm, I now want to know when we can say a particular DNA sequence is a gene. Any help or pointers would be appreciated. Thanks Answer: We can divide algorithms for finding protein-coding genes in eukaryotes into two main categories:

- Extrinsic: these algorithms rely on comparison with external data sources, such as comparing genomes.
- Intrinsic: I suspect you are more interested in this one, since you actually want to predict gene-coding regions on your own (if I'm wrong I will further explain extrinsic methods). Intrinsic methods are based on ab initio predictions.

I will continue on intrinsic methods, so what can we use to identify coding genes in the sequence? Of course we need to know the characteristics of those coding regions. Most of the time we see CpG islands (regions of a higher-than-expected occurrence of CpG dinucleotides over a particular region). About 70% of human promoters have a high CpG content (Wiki). Those CpG islands are important for the regulation of gene expression. The basics of a coding sequence are the promoter, introns and exons. You are probably familiar with the fact that an mRNA has to be spliced in order to cut the introns (non-coding regions) out; this is done by the spliceosome, which recognizes special sites where it can cut, so you can search for these sites in your DNA sequence. I will summarize some points about the coding regions here:

- a CpG island near the promoter
- splice sites for spliceosomes
- start and stop codons (I hope you are familiar with these)

There are more things to consider, but I would suggest looking at the algorithms of these programs: GENSCAN, FgeneSH, GeneMark, Xpound, and reading this article, which is about identifying protein-coding regions.
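As a flavour of the intrinsic signals mentioned above, here is a sketch of the classic CpG-island screen (observed/expected CpG ratio plus GC fraction). The 0.5/0.6 thresholds are the commonly quoted Gardiner-Garden & Frommer style values, but treat the exact numbers and window handling as assumptions:

```python
def cpg_stats(seq):
    """GC fraction and observed/expected CpG ratio for one sequence window."""
    seq = seq.upper()
    n = len(seq)
    c, g = seq.count("C"), seq.count("G")
    cg = sum(1 for i in range(n - 1) if seq[i:i + 2] == "CG")
    gc_frac = (c + g) / n
    # obs/exp CpG = #CG * n / (#C * #G), guarded against division by zero
    obs_exp = cg * n / (c * g) if c and g else 0.0
    return gc_frac, obs_exp

def looks_like_cpg_island(window, gc_min=0.5, obs_exp_min=0.6):
    gc_frac, obs_exp = cpg_stats(window)
    return gc_frac > gc_min and obs_exp > obs_exp_min

print(looks_like_cpg_island("CGCGCGCGCGCGCGCGCGCG"))  # -> True
print(looks_like_cpg_island("ATATATATATATATATATAT"))  # -> False
```

A real gene finder (or your HMM) would combine several such signals over sliding windows; this only illustrates one feature.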
{ "domain": "biology.stackexchange", "id": 6078, "tags": "genetics, dna, dna-sequencing" }
Game of life - JavaScript
Question: I would like to kindly ask others to review my code. I'd like suggestions, criticisms, and discussions on what is good and what could be done better.

const canvas = document.querySelector('.gameOfLife');
const ctx = canvas.getContext('2d');

// create an array
const canvasWidth = 100;
const canvasHeight = 100;
canvas.width = 3 * canvasWidth;
canvas.height = 3 * canvasHeight;

let gameBoard = createBoard(canvasWidth, canvasHeight);
let gameBoard2 = createBoard(canvasWidth, canvasHeight);
let gameBoard3 = createBoard(canvasWidth, canvasHeight);

function createBoard(width, height) {
  let board = new Array(0);
  for (let i = 0; i < width; i++) {
    board[i] = [];
    for (let j = 0; j < height; j++) {
      board[i][j] = 0;
    }
  }
  return board;
}

function randomNumber(max) {
  return Math.floor(Math.random() * max);
}

function randomPos() {
  const x = randomNumber(canvasWidth);
  const y = randomNumber(canvasHeight);
  return [x, y];
}

function randomBoard(arr) {
  for (let i = 0; i <= 0.2 * canvasHeight * canvasWidth; i++) {
    const [x, y] = randomPos();
    arr[x][y] = 1;
  }
}

function drawBoard(arr) {
  ctx.clearRect(0, 0, canvas.width, canvas.height)
  for (let i = 0; i < canvasWidth; i++) {
    for (let j = 0; j < canvasHeight; j++) {
      if (arr[i][j] === 1) {
        ctx.fillStyle = "#012345";
        ctx.fillRect(i*3, j*3, 3, 3);
      }
    }
  }
}

// calculate new cycle
function checkSurrounding(x, y) {
  let sum = 0;
  startX = x-1 < 0 ? x : x-1;
  stopX = x+1 >= canvasWidth ? x : x+1;
  startY = y-1 < 0 ? y : y-1;
  stopY = y+1 >= canvasHeight ? y : y+1;
  for (let i = startX; i <= stopX; i++) {
    for (let j = startY; j <= stopY; j++) {
      if (i !== x || j !== y) {
        sum += gameBoard[i][j];
      }
    }
  }
  return sum;
}

function deadOrAlive(x, y) {
  const surround = checkSurrounding(x, y);
  if (gameBoard[x][y] === 1 && (surround === 2 || surround === 3)) return 1;
  else if (gameBoard[x][y] === 0 && surround === 3) return 1;
  else return 0;
}

function nextGene() {
  let newBoard = createBoard(canvasWidth, canvasHeight);
  for (let i = 0; i < canvasWidth; i++) {
    for (let j = 0; j < canvasHeight; j++) {
      newBoard[i][j] = deadOrAlive(i, j);
    }
  }
  gameBoard = newBoard;
  drawBoard(newBoard);
}

randomBoard(gameBoard);
drawBoard(gameBoard);
setInterval(nextGene, 150);

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Game of Life - Canvas</title>
</head>
<body>
  <canvas class="gameOfLife" width="400" height="400"></canvas>
  <script type="text/javascript" src="scripts.js"></script>
</body>
<style>
  body { background-color: #ffc600; }
  .gameOfLife { margin: 15px auto; background-color: #ff6c00; display: block; }
</style>
</html>

Answer: I have gone over the code briefly and looked at a couple optimizations, but haven't come up with any drastic changes. I will try to go over it again when I find time. What is good: I like how the various functions are generally short. And the CSS/HTML is very simple too! It is a nice use of the canvas. Suggestions: Iterations: Anytime you iterate over an array using a for loop, consider using a functional approach like .forEach() or .map(). That way, you won't have to handle the bookkeeping of iterating the variable (e.g. var i) and indexing into the array. For more information, I recommend going through these exercises. Though be aware that functional approaches are not always optimal/faster.
Creating board: Inspired by the comment from Longfei Wu on this question, the function to create the board could take on a functional approach using Array.fill() and Array.map():

function createBoard(width, height) {
  return Array(width).fill(0).map(function() {
    return Array(height).fill(0);
  });
}

And if you are open to using ES6 arrow functions, that can be even shorter:

function createBoard(width, height) {
  let board = Array(width).fill(0).map(() => Array(height).fill(0));
  return board;
}

See the comparison in this jsPerf. When I ran it in Chrome, the array fill-map approach was faster, though the opposite was true in Firefox, Mobile Safari and Edge. Drawboard: .forEach() can be used here instead of the for statements:

function drawBoard(arr) {
  ctx.clearRect(0, 0, canvas.width, canvas.height)
  arr.forEach(function(innerArray, i) { // for (let i = 0; i < canvasWidth; i++) {
    innerArray.forEach(function(cell, j) { // for (let j = 0; j < canvasHeight; j++) {
      if (cell === 1) {
        ctx.fillStyle = "#012345";
        ctx.fillRect(i*3, j*3, 3, 3);
      }
    });
  });
}

And the same is true for nextGene(). deadOrAlive(): Because checkSurrounding() checks 3-9 spaces around the given cell, I see cases where it returns values greater than 3. Should deadOrAlive() have logic in those cases? It appears that it only has logic for return values of 2 and 3.
Updated snippet:

const canvas = document.querySelector('.gameOfLife');
const ctx = canvas.getContext('2d');

// create an array
const canvasWidth = 100;
const canvasHeight = 100;
canvas.width = 3 * canvasWidth;
canvas.height = 3 * canvasHeight;

let gameBoard = createBoard(canvasWidth, canvasHeight);
let gameBoard2 = createBoard(canvasWidth, canvasHeight);
let gameBoard3 = createBoard(canvasWidth, canvasHeight);

function createBoard(width, height) {
  return Array(width).fill(0).map(function() {
    return Array(height).fill(0);
  });
}

function randomNumber(max) {
  return Math.floor(Math.random() * max);
}

function randomPos() {
  const x = randomNumber(canvasWidth);
  const y = randomNumber(canvasHeight);
  return [x, y];
}

function randomBoard(arr) {
  for (let i = 0; i <= 0.2 * canvasHeight * canvasWidth; i++) {
    const [x, y] = randomPos();
    arr[x][y] = 1;
  }
}

function drawBoard(arr) {
  ctx.clearRect(0, 0, canvas.width, canvas.height)
  arr.forEach(function(innerArray, i) { // for (let i = 0; i < canvasWidth; i++) {
    innerArray.forEach(function(cell, j) { // for (let j = 0; j < canvasHeight; j++) {
      if (cell === 1) {
        ctx.fillStyle = "#012345";
        ctx.fillRect(i*3, j*3, 3, 3);
      }
    });
  });
}

// calculate new cycle
function checkSurrounding(x, y) {
  let sum = 0;
  startX = x-1 < 0 ? x : x-1;
  stopX = x+1 >= canvasWidth ? x : x+1;
  startY = y-1 < 0 ? y : y-1;
  stopY = y+1 >= canvasHeight ? y : y+1;
  for (let i = startX; i <= stopX; i++) {
    for (let j = startY; j <= stopY; j++) {
      if (i !== x || j !== y) {
        sum += gameBoard[i][j];
      }
    }
  }
  return sum;
}

function deadOrAlive(x, y) {
  const surround = checkSurrounding(x, y);
  if (gameBoard[x][y] === 1 && (surround === 2 || surround === 3)) return 1;
  else if (gameBoard[x][y] === 0 && surround === 3) return 1;
  else return 0;
}

function nextGene() {
  let newBoard = createBoard(canvasWidth, canvasHeight);
  newBoard.forEach(function(innerArray, i) { // for (let i = 0; i < canvasWidth; i++) {
    innerArray.forEach(function(cell, j) { // for (let j = 0; j < canvasHeight; j++) {
      newBoard[i][j] = deadOrAlive(i, j);
    });
  });
  gameBoard = newBoard;
  drawBoard(newBoard);
}

randomBoard(gameBoard);
drawBoard(gameBoard);
setInterval(nextGene, 150);

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Game of Life - Canvas</title>
</head>
<body>
  <canvas class="gameOfLife" width="400" height="400"></canvas>
  <script type="text/javascript" src="scripts.js"></script>
</body>
<style>
  body { background-color: #ffc600; }
  .gameOfLife { margin: 15px auto; background-color: #ff6c00; display: block; }
</style>
</html>
{ "domain": "codereview.stackexchange", "id": 27235, "tags": "javascript, performance, html5, game-of-life, canvas" }
Greibach Normal Form: Proof every sentential form is of the form xy with x terminals and y variables
Question: For any grammar in Greibach normal form, every sentential form obtained from S by a partial left-most derivation is of the form xy, with x terminals and y variables. I think that this can be proven inductively on the length of the derivation. Base case (length 1): S -> a, with 'a' being a terminal. This follows from the definition of Greibach normal form. Induction step: Let's assume that it is true for all derivations of length n, and imagine I have a derivation of length n + 1. S =>* xABC... => xadBC... if A -> ad, with 'a' being a terminal and d being some variables. Since a grammar in Greibach normal form only contains variables to the right of a single terminal symbol, the induction step is also proven, and so xa are terminals and dBC... are variables. I'm not sure about my reasoning and wanted to ask if someone could correct/finish my proof. Answer: Management frowns upon "check my proof" questions, but this is a good exercise on induction. Your answer is basically correct, except for a little tuning at the base case. We are not necessarily dealing with successful derivations, so we cannot restrict to terminal productions in the first step. In fact, if this were the only case we could not find longer derivations at all. Actually we can start with derivations of length 0 as our base case. Obviously we now only have $S \Rightarrow_\ell^0 S$, and $S$ is of the proper form (to be pedantic: with $S=xy$ where $x=\varepsilon$ consisting of terminals and $y=S$ consisting of variables). The inductive step is then as you describe, but I would replace the "ABC..." by more abstract symbols. Given a leftmost derivation in $n+1$ steps $S \Rightarrow_\ell^{n+1} \alpha$, we apply the inductive hypothesis on the first $n$ steps and have $S \Rightarrow_\ell^n xAy \Rightarrow_\ell \alpha$, where $x$ consists of terminals, $y$ consists of non-terminals, and $A$ is the leftmost nonterminal.
Productions in GNF are of the form $A\to a v$ with terminal $a$ and a string of nonterminal variables $v$ (that might be empty). If we apply such a production to $A$ in the last step of the above derivation we get $\alpha = xavy$. This is of the proper form $\alpha = x'y'$ with $x'= xa$ consisting of terminals and $y'= vy$ consisting of variables. Done.
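The invariant can also be sanity-checked mechanically on a toy GNF grammar: perform random leftmost steps and confirm every sentential form matches terminals-then-variables. A sketch with an ad-hoc encoding (uppercase letters are variables, lowercase are terminals; the grammar itself is made up):

```python
import random
import re

# a small grammar in Greibach normal form: every production is A -> a V*
GNF = {
    "S": ["aSB", "b"],      # S -> aSB | b
    "B": ["cB", "d"],       # B -> cB | d
}

def leftmost_step(form):
    """Apply one random production to the leftmost variable, if any."""
    for i, sym in enumerate(form):
        if sym.isupper():
            return form[:i] + random.choice(GNF[sym]) + form[i + 1:]
    return form    # no variables left: derivation finished

random.seed(1)
pattern = re.compile(r"^[a-z]*[A-Z]*$")   # terminals followed by variables
form = "S"
for _ in range(30):
    assert pattern.match(form), form      # the invariant from the proof
    form = leftmost_step(form)
print(form)
```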
{ "domain": "cs.stackexchange", "id": 20361, "tags": "formal-languages, context-free, formal-grammars, normal-forms, derivation" }
Metal-insulator transition (material properties)
Question: When studying metal-insulator transitions, I was wondering which material properties can give direct information about this phenomenon, and what information can be derived from these properties. I cannot seem to find any good readings about this topic. One last thing: which types of materials are useful in experimentally studying metal-insulator transitions, and why is this the case? Is there a way to accurately move through the transition? Thanks in advance! Answer: The metal to insulator transition is a characteristic of some materials that cannot be adequately described by a mean-field theory like band theory. The reason that these materials may still behave like conventional band insulators while being gapless is due to strong correlation of electrons: the energy penalty for a valence electron to occupy the same orbital angular momentum state as another electron is on the order of the hopping matrix element. These materials are often oxides such as vanadium oxide. More generally this is a characteristic of some metallic compounds with 3d and 4f electrons. This is largely due to the fact that electrons in these orbitals are relatively localized in space compared to other outer shells. As a result, the coulombic repulsion when multiple electrons occupy a 3d or 4f state is relatively high. So why would this phenomenon occur in a metal-oxide compound but not in 3d metals alone? There are a variety of factors for this, but the most trivial way to think about some simpler systems is by comparing the hopping matrix element to the energy penalty from correlation. The hopping matrix element, in a simplified picture, corresponds to how electrons may lower their energy by hopping from site to site. In typical 3d metals, the magnitude of the hopping matrix element is larger than the correlation energy, and therefore the material is well-described by band theory.
On the other hand, for a material that may be a Mott insulator, the magnitude of the hopping matrix element would be less than the correlation energy. This prevents electrons from easily moving. Perhaps the most physically intuitive (but not necessarily most accurate) way to think about why this may happen for an oxide is that the oxygen atoms effectively act as "spacers" that increase the distance between metal atoms in the solid. As a result, the wavefunctions involved in the hopping integral have less spatial overlap. This makes the hopping energy smaller than it would be in the pure metal, which may cause its magnitude to fall below the correlation energy penalty. With this toy model in mind, it would then make sense that there could be an effective balance between the hopping energy and the correlation energy. The ratio of the two would determine whether your material will act as a metal or a Mott insulator. So it would make sense that you could have a phase transition when the correlation energy is the same in magnitude as the hopping energy. So what could tip the balance? There are a variety of factors. Strain, temperature, electric field, and degree of doping have all been observed as parameters that may be varied to cause a metal-insulator transition. I encourage you to read about the Hubbard model if you want a more detailed account of the model I described. As an example of an experimental application of this, I encourage you to check out this (somewhat controversial) paper on using a metal-insulator transition in a field-effect transistor.
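The hopping-versus-correlation competition described above can be illustrated with a toy two-site Hubbard model at half filling (a sketch, not from the original answer; the 2x2 matrix below is the standard effective Hamiltonian in the spin-singlet sector, with hopping t and on-site repulsion U):

```python
import numpy as np

def singlet_ground_state(t, U):
    """Two-site Hubbard model at half filling, singlet sector.

    Basis: (covalent singlet state, symmetric doubly-occupied state).
    Hopping couples them with amplitude -2t; double occupancy costs U.
    """
    H = np.array([[0.0, -2.0 * t],
                  [-2.0 * t, U]])
    energies, vectors = np.linalg.eigh(H)   # eigenvalues in ascending order
    ground = vectors[:, 0]                  # lowest-energy eigenvector
    double_occ = ground[1] ** 2             # weight on the doubly-occupied state
    return energies[0], double_occ

t = 1.0
for U in (0.0, 1.0, 8.0):
    E0, docc = singlet_ground_state(t, U)
    # Analytic check: E0 = (U - sqrt(U**2 + 16*t**2)) / 2
    print(f"U/t = {U:4.1f}:  E0 = {E0:+.3f},  double occupancy = {docc:.3f}")
```

As U/t grows, the weight on the doubly-occupied state shrinks: the electrons localize, which is the toy-model version of the balance that decides metal versus Mott insulator.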
{ "domain": "physics.stackexchange", "id": 67737, "tags": "condensed-matter, solid-state-physics, phase-transition, metals, insulators" }
Reference Request: Overlaps between complexity theory and dynamical systems?
Question: Per Wikipedia: In mathematics, a dynamical system is a system in which a function describes the time dependence of a point in a geometrical space. Examples include the mathematical models that describe the swinging of a clock pendulum, the flow of water in a pipe, and the number of fish each springtime in a lake. At any given time a dynamical system has a state given by a set of real numbers (a vector) that can be represented by a point in an appropriate state space (a geometrical manifold). The evolution rule of the dynamical system is a function that describes what future states follow from the current state. Often the function is deterministic; in other words, for a given time interval only one future state follows from the current state; however, some systems are stochastic, in that random events also affect the evolution of the state variables. Are there any results connecting dynamical systems (i.e. solving dynamical equations, finding asymptotic states of a system, dynamics in chaotic systems) with complexity theory, such as hardness of finding a solution? Thanks Answer: This is a well-researched area. For a representative result, see Kawamura's proof that solving ODEs is difficult. A different line of work studies the hardness of computing Nash equilibria and related problems. See for example the recent breakthrough of Bitansky, Paneth and Rosen, who base the hardness of finding Nash equilibria on cryptographic assumptions; earlier work based it on complexity-theoretic assumptions.
{ "domain": "cs.stackexchange", "id": 6195, "tags": "complexity-theory, reference-request, discrete-mathematics, mathematical-foundations, physics" }
Typesetting A* in LaTeX using algorithm2e - follow-up
Question: (See the previous and initial iteration.) I have this second version of my LaTeX code. I made it less dry by removing the duplicate keyword definitions shared by the two algorithms being typeset. Also, in the previous version, in the argument of the main While loop, OPEN was typeset in math italics instead of the desired upright OPEN. See what I have now:

\documentclass[10pt]{article}
\usepackage{amsmath}
\usepackage[ruled,vlined,linesnumbered]{algorithm2e}

\SetArgSty{textnormal} % Make the While argument non-italic

% Define special keywords.
\SetKw{Nil}{nil}
\SetKw{Is}{is}
\SetKw{Not}{not}
\SetKw{Mapped}{mapped}
\SetKw{In}{in}
\SetKw{ChildNode}{child node}
\SetKw{Of}{of}
\SetKw{Continue}{continue}

\begin{document}

% A*.
\begin{algorithm}
    $\text{OPEN} = \{ s \}$ \\
    $\text{CLOSED} = \emptyset$ \\
    $\pi = \{ (s \mapsto$ \Nil $)\}$ \\
    $g = \{ (s \mapsto 0) \}$ \\
    \While{$|\text{OPEN}| > 0$}{
        $u = \textsc{ExtractMinimum}(\text{OPEN})$ \\
        \If{$u$ \Is $t$}{
            \KwRet \textsc{TracebackPath}$(u, \pi)$ \\
        }
        $\text{CLOSED} = \text{CLOSED} \cup \{ u \}$ \\
        \ForEach{\ChildNode $v$ \Of $u$}{
            \If{$v \in \textsc{CLOSED}$}{
                \Continue \\
            }
            $c = g(u) + w(u, v)$ \\
            \If{$v$ \Is \Not \Mapped \In $g$}{
                $g(v) = c$ \\
                $\pi(v) = u$ \\
                \textsc{Insert}$(\text{OPEN}, v, c + h(v))$ \\
            }
            \ElseIf{$g(v) > c$}{
                $g(v) = c$ \\
                $\pi(v) = u$ \\
                \textsc{DecreaseKey}$(\text{OPEN}, v, c + h(v))$ \\
            }
        }
    }
    \KwRet $\langle \rangle$
    \caption{\textsc{AStarPathFinder}$(s, t, w, h)$}
\end{algorithm}

% Traceback path.
\begin{algorithm}
    $p = \langle \rangle$ \\
    \While{$u$ \Is \Not \Nil}{
        $p = u \circ p$ \\
        $u = \pi(u)$ \\
    }
    \KwRet $p$
    \caption{\textsc{TracebackPath}$(u, \pi)$}
\end{algorithm}

\end{document}

The result looks like this: Any critique is much appreciated.

Answer: Nice. I mean, if you're going for perfection I'm going to write something, but otherwise that looks really good. 
The documentation for algorithm2e mentions that every line should be terminated with \;, and then has a flag to disable showing that semicolon, \DontPrintSemicolon; take that how you will. At least Emacs highlights that less garishly than the forced newline. The variables could be marked via \SetKwData. If you want to keep the same style, modify it via \SetDataSty{textnormal}. Similarly for the functions via \SetKwFunction. The style for the captions could also just be set to small caps globally via \let\AlCapNameSty\textsc (works and should be correct, but I'm not a LaTeX expert by any means). Also there's the procedure environment (instead of algorithm) that has some more shortcuts. Note that I added procnumbered and \SetAlgoProcName to set the names and numbering back to the default algorithm settings. Looks like this:

\documentclass[10pt]{article}
\usepackage{amsmath}
\usepackage[ruled,vlined,linesnumbered,procnumbered]{algorithm2e}

\SetArgSty{textnormal} % Make the While argument non-italic
\SetDataSty{textnormal}
\SetFuncSty{textsc}
\let\AlCapNameSty\textsc
\DontPrintSemicolon
\SetAlgoProcName{Algorithm}

% Define special keywords.
\SetKw{Nil}{nil}
\SetKw{Is}{is}
\SetKw{Not}{not}
\SetKw{Mapped}{mapped}
\SetKw{In}{in}
\SetKw{ChildNode}{child node}
\SetKw{Of}{of}
\SetKw{Continue}{continue}
\SetKwData{Open}{OPEN}
\SetKwData{Closed}{CLOSED}
\SetKwFunction{ExtractMinimum}{ExtractMinimum}
\SetKwFunction{TracebackPath}{TracebackPath}
\SetKwFunction{Insert}{Insert}
\SetKwFunction{DecreaseKey}{DecreaseKey}

\begin{document}

% A*.
\begin{procedure}
    $\Open = \{ s \}$ \;
    $\Closed = \emptyset$ \;
    $\pi = \{ (s \mapsto$ \Nil $)\}$ \;
    $g = \{ (s \mapsto 0) \}$ \;
    \While{$|\Open| > 0$}{
        $u = \ExtractMinimum(\Open)$ \;
        \If{$u$ \Is $t$}{
            \KwRet \TracebackPath{$u$, $\pi$} \;
        }
        $\Closed = \Closed \cup \{ u \}$ \;
        \ForEach{\ChildNode $v$ \Of $u$}{
            \If{$v \in \Closed$}{
                \Continue \;
            }
            $c = g(u) + w(u, v)$ \;
            \If{$v$ \Is \Not \Mapped \In $g$}{
                $g(v) = c$ \;
                $\pi(v) = u$ \;
                \Insert{\Open, $v$, $c + h(v)$} \;
            }
            \ElseIf{$g(v) > c$}{
                $g(v) = c$ \;
                $\pi(v) = u$ \;
                \DecreaseKey{\Open, $v$, $c + h(v)$} \;
            }
        }
    }
    \KwRet $\langle \rangle$
    \caption{AStarPathFinder($s$, $t$, $w$, $h$)}
\end{procedure}

% Traceback path.
\begin{procedure}
    $p = \langle \rangle$ \;
    \While{$u$ \Is \Not \Nil}{
        $p = u \circ p$ \;
        $u = \pi(u)$ \;
    }
    \KwRet $p$
    \caption{TracebackPath($u$, $\pi$)}
\end{procedure}

\end{document}
{ "domain": "codereview.stackexchange", "id": 22857, "tags": "tex, a-star" }
Does dissociation require energy to occur?
Question: I assume spontaneous separation into ions of a salt solution (or acid) requires energy to occur, right? Where does that energy come from? Is it from the heat of the solution? Now, if someone devised a way to draw potential energy from those separated ions, could he make this process continuous (assuming the solution would draw heat from its surroundings), or would it eventually stop working? Answer: I assume spontaneous separation into ions of a salt solution (or acid) requires energy to occur, right? Yes, it requires energy to occur. This energy is usually referred to as the activation energy. It is a measure of the energy barrier that must be surmounted for the process to occur. In the case of salts ionizing in solution, the barrier is usually (but not always) quite small. Salts that dissolve and give off heat (exothermic) will have a lower barrier than salts that require energy input (endothermic) to dissolve. image source: http://web.campbell.edu/faculty/nemecz/323_lect/enzyme_mech/mech_chapter.html Where does that energy come from? Is it from the heat of the solution? The energy comes from the surrounding environment. The surrounding molecules have a thermal energy that can be transferred through collisions with the reactants. The reactants will convert to products once enough energy has been imparted to them to cross the barrier. Now, if someone devised a way to draw potential energy from those separated ions, could he make this process continuous I think you're asking if the energy withdrawal can be made continuous. If so, then the answer is "yes" - at least until all of the starting material has been consumed and converted to product. This is done today; examples include batteries, gasoline, and power generators in general.
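The link between thermal energy and barrier crossing can be made quantitative with the Arrhenius equation, k = A*exp(-Ea/(R*T)) (a sketch; the pre-exponential factor and the 50 kJ/mol barrier below are illustrative numbers, not from the answer):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius_rate(A, Ea, T):
    """Rate constant k = A * exp(-Ea / (R*T)) for activation energy Ea in J/mol."""
    return A * math.exp(-Ea / (R * T))

A = 1.0e13       # illustrative pre-exponential factor, 1/s
Ea = 50_000.0    # illustrative activation barrier, 50 kJ/mol

k_298 = arrhenius_rate(A, Ea, 298.0)
k_308 = arrhenius_rate(A, Ea, 308.0)
# Warming the solution makes more collisions energetic enough to cross the barrier:
print(f"k(298 K) = {k_298:.3e} 1/s")
print(f"k(308 K) = {k_308:.3e} 1/s  ({k_308 / k_298:.1f}x faster)")
```

A modest 10 K warming roughly doubles the rate for a barrier of this size, which is why heat from the surroundings is enough to drive dissolution.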
{ "domain": "chemistry.stackexchange", "id": 2255, "tags": "inorganic-chemistry" }
Why is the direction of net force on an object and the direction of acceleration of that object different in this problem?
Question: A $ 2.0 kg $ box of cucumber extract is being pulled across a frictionless table by a rope at an angle $ \theta=60° $ (from the positive direction of the $ x $ axis; we have taken the horizontal surface of the table as the $ x $ axis). The tension in the rope is $ 12N $ and causes the box to slide across the table to the right with an acceleration of $ 3.0 m/s² $. But the direction of net force is along the rope. The direction of acceleration isn't. Why is that? Answer: Let the unit vector in the upward direction be $\hat u$. If the box is the system with no rope pulling on it, then there are two external forces acting on the box: downward gravitational attraction due to the Earth $\vec W = -W \hat u$ upward force due to the table (normal reaction) $\vec N = N \hat u$ Since the box is in static equilibrium $\vec W + \vec N = \vec 0 \Rightarrow -W + N = 0 \Rightarrow N=W=2g$ Now allow the rope to apply an extra external force on the box $\vec F$. A common misconception is that the normal reaction force stays the same. Just to show you that there are times when this is not so, assume that the force due to the rope acts vertically upwards, thus $\vec F = F \hat u$ If $F<W$ it will still be a static equilibrium situation $\vec W + \vec N' + \vec F = \vec 0 \Rightarrow -W + N' +F = 0 \Rightarrow N'=W-F=2g-F$ So the magnitude of the new normal reaction force $N'$ has been reduced because of the introduction of a third external force acting on the box. Not realising this is where, I think, the following misconception has arisen. But the direction of net force is along the rope. In this case you cannot just take the static equilibrium situation without the rope and then add another force and assume that the two forces from before are still of the same magnitude. Certainly $W$ stays the same but if the force due to the rope has an upward component (i.e. it will be trying to lift the box up) the magnitude of the new normal reaction force $N''$ will be less than $N$. 
The free body diagram is on the left and the vector addition of the three external forces (roughly to scale) to give the net force $R$ is shown on the right. Resolving the applied force $\vec F$ into a component in the up direction $F_{\rm u}$ and a component in the right direction $F_{\rm r}$ and applying Newton's second law $\vec W + \vec N'' + \vec F =m \vec a \Rightarrow -W \hat u + N'' \hat u + F_{\rm u} \hat u +F_{\rm r} \hat r = m a_{\rm u} \hat u + m a_{\rm r} \hat r$ This gives two equations: $W = F_{\rm u} +N''$, as the upward acceleration $a_{\rm u}$ is zero, and $F_{\rm r} = m a_{\rm r}$ for the horizontal motion.
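The numbers from the question can be plugged into these two equations directly (a quick sketch, taking g = 9.8 m/s²):

```python
import math

m, T, theta_deg, g = 2.0, 12.0, 60.0, 9.8
theta = math.radians(theta_deg)

F_r = T * math.cos(theta)   # horizontal component of the rope force
F_u = T * math.sin(theta)   # upward component of the rope force

a_r = F_r / m               # Newton's second law horizontally: F_r = m * a_r
N = m * g - F_u             # vertically: W = F_u + N'' with zero vertical acceleration

print(f"horizontal acceleration a_r = {a_r:.1f} m/s^2")  # matches the 3.0 m/s^2 given
print(f"normal force N'' = {N:.2f} N (less than mg = {m * g:.1f} N)")
```

The net force is along the rope only if the normal force is ignored; once N'' is included, the vertical components cancel and the remaining net force (and hence the acceleration) is purely horizontal.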
{ "domain": "physics.stackexchange", "id": 55234, "tags": "homework-and-exercises, newtonian-mechanics, forces, classical-mechanics, kinematics" }
Why is Hessian Matrix called 'force-constant' matrix?
Question: The Hessian matrix is a square matrix whose elements are the second-order partial derivatives of the energy function of a molecule; the derivative is done with respect to the geometric coordinates of the molecule. However, I couldn't understand why the Hessian matrix is referred to as the force-constant matrix, as is done in the book I'm following, viz. Computational Chemistry: Introduction to the Theory and Applications of Molecular and Quantum Mechanics by Errol G. Lewars, and in many other sources. If the energy function is a quadratic function, which is mostly the case for a molecule following a 1D PES (the function which a Hookean spring follows for small displacement), then the second derivative does indeed give the value of the force constant, that is $$\begin{align}E- E_0&= k(q-q_0)^2\\ \implies \frac{\mathrm{d}^2 E}{\mathrm{d}q^2}&= 2k\;.\end{align}$$ However, for molecules following a multidimensional PES, most of the time, the energy function is not simply quadratic; how can the second-order derivative give the force constant? Second-order derivatives vary from point to point, unlike the simple case above. So, can anyone tell why the Hessian matrix is called the force-constant matrix? Answer: You are right that a multidimensional PES is not simply quadratic, but close to the PES well minima it can be approximated as a harmonic oscillator. Also, if you really want to include the real PES you can include anharmonicity in your calculation. In that case you have to include an anharmonicity constant $x_e\;.$ The vibrational energy levels can then be written as $$E=\left(n+\frac{1}{2}\right)h\nu-x_e\left(n+\frac{1}{2}\right)^2h\nu$$ Usually in computational chemistry the energy can be expanded into a Taylor series $$E=E_0+\frac{\mathrm{d}E}{\mathrm{d}x}(x-x_0)+\frac{1}{2}\frac{\mathrm{d}^2E}{\mathrm{d}x^2}(x-x_0)^2+\textrm{higher order terms}\;.$$ And you can usually neglect the first-order term (it vanishes at a stationary point such as an equilibrium geometry) and the higher-order terms for most practical problems. Then your PES follows that of a harmonic oscillator.
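The "Hessian equals force constants near a minimum" point can be checked numerically: for a model PES that is harmonic near its minimum, a finite-difference Hessian recovers the force constants on its diagonal (a sketch with an invented 2-D PES, not from the book):

```python
def hessian_2d(energy, q, h=1e-4):
    """Finite-difference Hessian of an energy function at geometry q (a list of coords)."""
    n = len(q)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # Central second differences: d2E/dqi dqj
            qpp = list(q); qpp[i] += h; qpp[j] += h
            qpm = list(q); qpm[i] += h; qpm[j] -= h
            qmp = list(q); qmp[i] -= h; qmp[j] += h
            qmm = list(q); qmm[i] -= h; qmm[j] -= h
            H[i][j] = (energy(qpp) - energy(qpm) - energy(qmp) + energy(qmm)) / (4.0 * h * h)
    return H

# Invented model PES, harmonic near its minimum at (1, -1):
# E = (kx/2)(x-1)^2 + (ky/2)(y+1)^2 with force constants kx = 2, ky = 5
def model_pes(q):
    x, y = q
    return 1.0 * (x - 1.0) ** 2 + 2.5 * (y + 1.0) ** 2

H = hessian_2d(model_pes, [1.0, -1.0])
print(H)  # diagonal is approximately (2, 5): the force constants; off-diagonal ~ 0
```

Evaluated anywhere else on an anharmonic surface the entries would differ, which is exactly why the force-constant interpretation is tied to stationary points.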
{ "domain": "chemistry.stackexchange", "id": 4937, "tags": "computational-chemistry" }
Does BERT use GLoVE?
Question: From all the docs I read, people push this way and that way on how BERT uses or generates embedding. I GET that there is a key and a query and a value and those are all generated. What I don't know is if the original embedding - the original thing you put into BERT - could or should be a vector. People wax poetic about how BERT or ALBERT can't be used for word to word comparisons, but nobody says explicitly what bert is consuming. Is it a vector? If so is it just a one-hot vector? Why is it not a GLoVE vector? (ignore the positional encoding discussion for now please) Answer: BERT cannot use GloVe embeddings, simply because it uses a different input segmentation. GloVe works with the traditional word-like tokens, whereas BERT segments its input into subword units called word-pieces. On one hand, it ensures there are no out-of-vocabulary tokens, on the other hand, totally unknown words get split into characters and BERT probably cannot make much sense of them either. Anyway, BERT learns its custom word-piece embeddings jointly with the entire model. They cannot carry the same type of semantic information as word2vec or GloVe because they are often only word fragments and BERT needs to make sense of them in the later layers. You might say that inputs are one-hot vectors if you want, but as almost always, it is just a useful didactic abstraction. All modern deep learning frameworks implement embedding lookup just by direct indexing, multiplying the embedding matrix with a one-hot-vector would be just wasteful.
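The "one-hot is just a didactic abstraction" point can be shown directly: multiplying an embedding matrix by a one-hot vector and plain row indexing yield the same vector (a sketch with a made-up vocabulary size and random embedding matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, dim = 10, 4
E = rng.standard_normal((vocab_size, dim))  # embedding matrix (learned jointly in BERT)

token_id = 7  # id of some word-piece

# Didactic view: a one-hot vector times the embedding matrix...
one_hot = np.zeros(vocab_size)
one_hot[token_id] = 1.0
via_matmul = one_hot @ E

# ...is exactly what frameworks implement as direct indexing:
via_lookup = E[token_id]

assert np.allclose(via_matmul, via_lookup)
print(via_lookup)
```

The matmul does vocab_size * dim multiplications to select one row; the lookup just reads it, which is why no framework ever materializes the one-hot vectors.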
{ "domain": "datascience.stackexchange", "id": 7397, "tags": "nlp, bert, transformer, attention-mechanism" }
How to pass arrays in custom messages
Question: Hello all, I have a node, written in Python, publishing temperature data from a string of 8 batteries. Each battery has 8 temperature sensors, and so rather than building a message with 64 fields I have it broken down into 8 arrays each of length 8. I have written a healthMap node in C++ and I want to subscribe to the temperature data. What should my message structure look like? Python wants to use a tuple and C++ wants a vector. Does anyone have a code example that illustrates how to approach this? I also have little experience working with vectors in C++, so hopefully you could give me a sketch of what the subscriber looks like as well. Sorry for the long-winded question. Cheers, Gideon Originally posted by Gideon on ROS Answers with karma: 239 on 2011-09-02 Post score: 5 Answer: This is similar to a previous question. There is a good answer there that has some example code and points to this tutorial. The tutorial has example code for publishers and subscribers. For C++ vectors you can do something like the following (someone correct my C++ if I'm wrong please):

std::vector<double>::iterator my_iterator;
int i = 0;
for (my_iterator = msg.vector_field.begin(); my_iterator != msg.vector_field.end(); ++my_iterator) {
    my_local_battery_value[i] = *my_iterator;
    i++;
}

or follow this page. Originally posted by Thomas D with karma: 4347 on 2011-09-02 This answer was ACCEPTED on the original site Post score: 4 Original comments Comment by dornhege on 2011-09-02: Ignore this ... triggered the 5 comments bug. Comment by dornhege on 2011-09-02: You could replace your code with temp = msg.temp; which would copy the vector. If that's even necessary depends on what you want to do with it. Comment by Thomas D on 2011-09-02: It should definitely be possible. It might even be easier than my little example code (I just haven't worked with vectors that much either). Comment by dornhege on 2011-09-02: Well, you can also just copy the whole vector, no need for a for loop. 
Or ideally subscribe a shared pointer, that you store, then no copying is needed at all. Comment by Gideon on 2011-09-02: Would this work? for(int i = 0; i<numTemps;i++){ temp[i] = msg->temp[i]; } Comment by Gideon on 2011-09-02: so something like this wouldnt work? Comment by Gideon on 2011-09-02: Thanks. If my subscriber in written in C++, do I manually copy all of the fields to my local variables with a for loop?
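On the message-structure side, a common alternative to eight separate array fields is a single flat array indexed as battery*8 + sensor; the equivalence can be sketched in plain Python (the layouts and values here are hypothetical, not an actual .msg definition):

```python
NUM_BATTERIES, SENSORS_PER_BATTERY = 8, 8

# Layout A: one tuple per battery (the per-battery arrays the question describes)
per_battery = tuple(tuple(10.0 * b + s for s in range(SENSORS_PER_BATTERY))
                    for b in range(NUM_BATTERIES))

# Layout B: a single flat array, as a float64[64] field would arrive
flat = tuple(v for battery in per_battery for v in battery)

def temp(flat_data, battery, sensor):
    """Read one sensor from the flat layout via row-major indexing."""
    return flat_data[battery * SENSORS_PER_BATTERY + sensor]

# Both layouts address the same reading:
assert temp(flat, 3, 5) == per_battery[3][5]
print(temp(flat, 3, 5))  # 35.0
```

Either layout carries the same data; the per-battery version is more self-documenting, while the flat version is one field to copy on the C++ side.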
{ "domain": "robotics.stackexchange", "id": 6596, "tags": "ros, messages" }
Pituitary giants - is the fusing of growth plates dependent on amount of growth hormone in blood?
Question: I wanted to ask a couple of questions related to pituitary giants (people who are giants because of some anomaly, such as a tumor, in their pituitary gland). Some of these giants seem to keep growing and growing until the tumor (or the gland) is removed. Is this because the fusing of the growth plates is controlled by the amount of growth hormone in the blood? The reason for this question is that many of the giants have been described as growing past the age when the growth plates should probably have fused. For example (all heights in meters, m): Bernard Coyne This giant is described as having been 2.36m at the age of 20, but 2.54m at the time of his death at the age of 23. Edouard Beaupré At the age of 17, this giant was measured as 2.16m tall and at the age of 21, he was measured at 2.50m tall. At the time of his death, at the age of 23, he was listed (on his death certificate) as having been 2.51m tall and "still growing". Väinö Myllyrinne This giant was 2.22m at the age of 21 but is described as having "experienced a second phase of growth in his late thirties", attaining a height of 2.51m by the time of his death at 54. Adam Rainer This giant was, unusually enough, a "dwarf" at the age of 18, reaching a height of 1.22m. However, he had a growth spurt afterwards, reaching a height of 2.18m at the age of 30 and 2.34m by the time of his death at age 49. Also, in some cases the giant individual's facial (bone) structure changes as he/she ages. Considering that the human skull does not have any growth plates, or anything that would (as far as I know) put a stop to the skull bones' response to growth hormone, can changes in face structure also happen to "regular" people who take growth hormones? Answer: Gigantism is caused by too much growth hormone (GH) and causes increased growth in kids (healthline website). After adulthood, increased GH leads to acromegaly which leads to continuing growth of fingers, feet, forehead, jaw, nose, and lips (UCLA Health website). 
Note that structures like the nose and ears also keep on growing in healthy folks. Hence, GH does not prevent closure of the growth plates, which happens normally as far as I am aware. Taking GH during childhood will likely lead to gigantism, and in adulthood to acromegaly, even in healthy individuals. EDIT: In answer to the QUESTION EDIT: the comment by @Raoul is interesting here, as the first guy, Bernard Coyne, was a eunuch. Perhaps that was why his plates never fused, as he will have had no testosterone production. I couldn't find specifics in the other 3 cases, but my guess is they are also exceptional cases (otherwise they wouldn't have made it into the record books). As to the question title, I dare say that GH is not involved in plate fusion.
{ "domain": "biology.stackexchange", "id": 3383, "tags": "human-biology, endocrinology, growth" }
Under the equivalence principle, does a bullet experience gravitational time dilation during acceleration?
Question: Does the equivalence principle include gravitational time dilation for an accelerating object? A bullet accelerates in the barrel of a gun at 10^6 m/s^2. If this acceleration was the gravity of a planet, it would have very significant gravitational time dilation. Does the bullet experience this gravitational time dilation under the equivalence principle? Answer: No. Gravitational time dilation is based on potential, not acceleration. The clock postulate says that time dilation does not directly depend on acceleration, and this has been tested experimentally in particle accelerators to a high degree of accuracy. (Of course acceleration does have an indirect effect on time dilation, since it changes what inertial frame is considered "at rest".) In an accelerating frame (like a rocket) there's a pseudo-potential which causes clocks at the front of the rocket to tick faster than clocks at the back. This is true even for a rigid rocket (where both clocks are accelerating at the same speed). When viewed from outside the rocket the difference is explained by the relativity of simultaneity -- the accelerated rocket observer has different surfaces of simultaneity than the unaccelerated "outside" observer.
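The pseudo-potential point can be made concrete: to first order, a clock at the front of a rigid rocket of length L accelerating at a runs fast relative to one at the back by the fractional rate a*L/c², the same formula as gravitational time dilation over a height difference L (a sketch; the 1 m length is an assumed barrel/rocket size for illustration):

```python
c = 299_792_458.0  # speed of light, m/s

def fractional_rate_difference(a, L):
    """First-order clock-rate difference between points separated by L along acceleration a."""
    return a * L / c**2

# Bullet in an assumed 1 m barrel at the question's 1e6 m/s^2:
bullet = fractional_rate_difference(1.0e6, 1.0)

# Earth-surface gravity over the same 1 m, for comparison:
earth = fractional_rate_difference(9.8, 1.0)

print(f"bullet: {bullet:.3e}")  # ~1e-11: tiny despite the huge acceleration,
print(f"earth:  {earth:.3e}")   # because the potential difference a*L is what matters
```

This is why the huge acceleration alone implies nothing: the effect scales with the potential difference a*L, and over a short barrel that product stays small.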
{ "domain": "physics.stackexchange", "id": 88436, "tags": "time-dilation, equivalence-principle" }
VSLAM based navigation with Kinect, where do I begin?
Question: I'm interested in developing a robot (UAV) indoor navigation system basing on a MS Kinect (or Kinect-like, like the PrimeSense dev kit) sensor. I'm specifically interested in corridor following. I've recently bought a BeagleBoard-xM platform and installed Ubuntu and ROS on it. I'd like to start off without any flying platform and implement the navigation layer first. So here is my plan (and questions): I have to buy a sensor for VSLAM (ie. Kinect or so): do you have any suggestions on which one should I choose? I unfortunately wasn't able to connect PrimeSense recently, is it even worth it, or should I stick with the original Kinect? Which stacks should I start playing with? I'm particularly interested in indoor navigation and attitude estimation (I could fuse with IMU based attitude data afterwards). I would then try to implement a distributed system (Desktop PC + BB-xM) via ROS, with both peers doing a part of calculations. How would you design such a system, what potential risks do you see right away? Any suggestions welcome, tom. Originally posted by tom on ROS Answers with karma: 1079 on 2011-02-16 Post score: 2 Original comments Comment by mmwise on 2011-02-17: when you see an answer you like, mark it as an accepted answer Answer: I have to buy a sensor for VSLAM (ie. Kinect or so): do you have any suggestions on which one should I choose? I unfortunately wasn't able to connect PrimeSense recently, is it even worth it, or should I stick with the original Kinect? I was also unable to get a Primesense SDK from them directly, however you might want to continue trying as the Primesense device is quite a bit smaller, lighter, and less power hungry than a Kinect (which certainly has to be a good thing for a UAV). 
Originally posted by fergs with karma: 13902 on 2011-02-16 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by fergs on 2011-02-18: I only got one by winning one in the ROS 3D Contest Comment by tom on 2011-02-17: Thanks fergs, and is there a way to get a Primesense SDK indirectly anyhow?
{ "domain": "robotics.stackexchange", "id": 4766, "tags": "ros, navigation, uav, kinect, vslam" }
How is prestressing with losses self-balanced?
Question: By definition, prestressing is a self-balanced load. This is because it is in fact merely an applied state of internal stresses. This can also be demonstrated mathematically (in this case for a simple parabolic cable) via Lin's load-balancing method. For a cable which follows the layout given by $$ y(x) = \dfrac{4e}{L^2}x^2 $$ where $e$ is the vertical distance between the maximum and minimum points of the parabola, $L$ is the span of the cable, and $x=y=0$ is located at the minimum point of the layout, the equivalent load is $$ q = \dfrac{8Pe}{L^2}$$ where $P$ is the prestress force. Therefore, the total vertical load given by the distributed load is $\dfrac{8Pe}{L}$. This force is countered by the vertical forces applied at the anchors. These are equal to $P\sin(\theta) \approx P\theta$ and $y'(x) \approx \theta$, so the forces are each equal to $Py'\left(\frac{L}{2}\right) = \dfrac{4Pe}{L}$ and combined are equal (and opposite) to the total distributed load. So the total global load applied is null. Now, what if we consider friction losses between the cable and the duct in a post-tensioned beam? Then the combined forces at the extremities remain equal to $\dfrac{8Pe}{L}$, but the total distributed load will be reduced by the losses, implying a non-balanced load, which is impossible. This may be adjusted by the anchorage slip losses, but what if we assume a hypothetical anchor which doesn't slip? Is this impossible, and is slippage necessary to make the load balanced? And how are the remaining losses (elastic deformation, creep, shrinkage and relaxation) affected? Answer: Comments by @Mr.P nudged me to realize that there is a redistribution of equivalent loads due to losses which cannot be trivially encompassed by Lin's method. To demonstrate this, take the following simply-supported beam (ignore all concepts of units or scale here, this is a thought exercise). 
The bending moment diagram solely due to prestress for this beam will be the following polygonal diagram (before any losses, assuming $P\cos\theta \approx P$): This is equivalent to the diagram obtained by a concentrated vertical load at midspan equal to $F = 2P\sin\theta \approx 40$, which is balanced out by the two vertical loads at the supports, each equal to $P\sin\theta \approx 20$ (in the opposite direction). However, let us now consider friction losses. Let's assume they cause a 10% reduction to the stress in the tendon at midspan. This implies that the bending moment at midspan will also be reduced by 10%, and will therefore equal 90. However, the diagram's profile is no longer polygonal. That can be easily observed by looking at the bending moment at any other point. In an isostatic structure, the bending moment is simply equal to $P \cdot e$, where $e$ is the distance between the cable and the centroid. Looking at quarter-span, the bending moment before losses was equal to $100\cdot0.5=50$, or exactly half of the mid-span moment. To calculate after losses, however, we need to calculate the force at this point. Simplifying things considerably, let's assume the loss here is half of that at midspan, so only 5%.[1] In that case, the bending moment at quarter-span will be equal to $95\cdot0.5=47.5$, which is not equal to half of the mid-span moment of 90. Indeed, the bending moment diagram from support to midspan becomes of the form $$M = Pe\left(\dfrac{2x}{L}-0.1\left(\dfrac{2x}{L}\right)^2\right)$$ where $e$ is the distance of the cable to the centroid at midspan, $L$ is the span, and $0.1$ represents the 10% loss of prestress at midspan. 
Getting the first and second derivatives of this equation gives us the equivalent concentrated load at midspan and the uniform load distributed along the entire span, respectively: $$\begin{align} M' = Q &= Pe\left(\dfrac{2}{L} - 0.2\dfrac{4x}{L^2}\right) \\ Q\left(\dfrac{L}{2}\right) &= \dfrac{1.6Pe}{L} \therefore F = 2Q\left(\dfrac{L}{2}\right) = 32 \\ M'' = q &= \dfrac{0.8Pe}{L^2} = 0.8 \end{align}$$ Therefore, the equivalent loading which generates the correct (simplified) bending moment diagram after friction losses is the following: This implies a new uniform load which did not appear before losses, a representation of the load redistribution described by @Mr.P. The equivalent concentrated load at midspan is also no longer equal to $F = 2P\sin\theta$, which would have resulted in $F \approx 36$. The prestress, however, is balanced, since $32 + 0.8\cdot10 = 40$, as expected. The same reasoning can be applied to parabolic cables. For the following beam: the uniform equivalent load is $q = \dfrac{8Pe}{L^2} = 2$, which generates a total upwards force of $2\cdot32=64$, to be cancelled out by the concentrated downwards forces of 32 at each support. The bending moment diagram before any losses is: However, assuming once again a 10% loss due to friction at midspan, and that the friction-loss profile is linear, the bending moment diagram becomes: which has the following cubic equation (with $x=0$ at the midspan): $$M = 2.00\cdot128\left(1-\left(\dfrac{2x}{L}\right)^2\right)\left(0.9+0.1\dfrac{2x}{L}\right)$$ Getting the first and second derivatives, I can find the equivalent loading, which in this case is equal to: Given what we saw with the first example, it comes as no surprise that there's an increase in the distributed load near the supports. It also makes intuitive sense that the distributed load is reduced near the midspan. How that concentrated load at midspan comes to be, however, I have no idea. 
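The balance claim for the polygonal case can be checked numerically from M(x) above (a sketch using the answer's numbers, Pe = 100 and L = 10, with finite differences standing in for the derivatives):

```python
Pe, L, loss = 100.0, 10.0, 0.1  # midspan moment before losses, span, 10% friction loss

def M(x):
    """Bending moment from support (x = 0) to midspan (x = L/2) after friction losses."""
    xi = 2.0 * x / L
    return Pe * (xi - loss * xi**2)

h = 1e-6

def shear(x):
    """Equivalent shear Q = dM/dx, by central difference."""
    return (M(x + h) - M(x - h)) / (2.0 * h)

# Uniform equivalent load q = -d2M/dx2 (downward positive); constant for this M
q = -(M(h) - 2.0 * M(0.0) + M(-h)) / h**2

support_reaction = shear(0.0)        # per support
F_midspan = 2.0 * shear(L / 2.0)     # concentrated equivalent load at midspan

# Balance: midspan load + distributed load over the span = both support reactions
print(F_midspan, q * L, 2.0 * support_reaction)  # 32, 8, 40 -> 32 + 8 = 40
```

The redistribution shows up directly: the support shear still carries 20 per side, while the midspan concentrated load drops from 40 to 32 and the new uniform load of 0.8 makes up the difference.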
However, once again the prestress is self-balanced: $\dfrac{1.8+2.4}{2}\cdot32-3.2=64$. This therefore answers the question posed of how prestressing with losses is self-balanced: the equivalent load is redistributed, but the total value is not modified. That being said, I cannot explain how to calculate this redistribution in a general case because, well, I don't know how that's done. [1] Though friction losses are usually quite linear (or polygonal), this is a poor assumption. After all, the 10% loss at midspan must include the effect of the concentrated angle change at that point. At quarter-span, the losses are only due to linear friction loss, and will therefore probably be substantially lower than 5%. That being said, we can just state that 10% is at the point immediately before the angle change, where the bending moment approaches 100 but where only linear friction losses have occurred. All diagrams obtained with Ftool, a free 2D frame analysis tool.
{ "domain": "engineering.stackexchange", "id": 876, "tags": "civil-engineering, structural-engineering, concrete, prestressed-concrete" }
Mutual information and entropy to prove minimal Relevance Maximum Dependency
Question: I'm reading through a paper on feature selection: Feature Selection Based on Mutual Information: Criteria of Max-Dependency, Max-Relevance, and Min-Redundancy, but I'm unable to understand parts of the proof presented in section 2.3, where they are using information theory to prove that Max-Dependency is equivalent to the algorithm they present (mRMR). I don't follow how they derive the equality for minimal redundancy... ... from ... ... and ... (There is a similar step for the maximal relevance term which I also don't grasp.) My best understanding of what they are doing is that the entropy for each individual feature is found in the summation of the H(x_i) terms, and then the mutual information between all features is subtracted out in the J term, and that this is done (somehow) through the chain rule, and then collection of the mutual information terms? I don't have a strong background in information theory - so this is mostly conjecture. Hopefully someone with a stronger information theoretic background could elucidate this. Thanks. Answer: For the equations you are reporting, $J(S_m) = I(S_{m-1}; x_m)$, i.e. $J(S_m)$ is the mutual information between $S_{m-1}$ and $x_m$. Since for any two random variables $X$ and $Y$ $I(X; Y) = H(X) + H(Y) - H(X, Y)$, where $H(X, Y)$ is the joint entropy, the equation follows directly from the definition of mutual information (the chain rule does not apply here, since there is no need to deal with conditional entropy): $\begin{gathered} H({S_{m - 1}},{x_m}) = H({S_m}) \\ = \sum\limits_{i = 1}^m {H({x_i})} - J({S_m}) \\ = \sum\limits_{i = 1}^m {H({x_i})} - I({S_{m - 1}};{x_m}) \\ = \sum\limits_{i = 1}^m {H({x_i})} - (H({S_{m - 1}}) + H({x_m}) - H({S_{m - 1}},{x_m})) \\ = \sum\limits_{i = 1}^m {H({x_i})} - (\sum\limits_{i = 1}^m {H({x_i})} - H({S_{m - 1}},{x_m})) \\ = H({S_{m - 1}},{x_m}) \\ \end{gathered}$
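The identity the answer relies on, $I(X;Y) = H(X) + H(Y) - H(X,Y)$, is easy to sanity-check numerically. A small sketch with a hypothetical 2x2 joint distribution (the values are arbitrary, chosen only to make X and Y dependent):

```python
# Sanity check of I(X;Y) = H(X) + H(Y) - H(X,Y) on a toy joint distribution.
from math import log2

joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

def H(dist):
    """Shannon entropy (bits) of a distribution given as {outcome: prob}."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

# Marginals of X and Y.
px, py = {}, {}
for (xv, yv), p in joint.items():
    px[xv] = px.get(xv, 0) + p
    py[yv] = py.get(yv, 0) + p

I = H(px) + H(py) - H(joint)
print(round(I, 5))  # ≈ 0.27807 bits: positive, since X and Y are dependent
```

The same value is obtained from the direct definition $I(X;Y) = \sum p(x,y)\log_2\frac{p(x,y)}{p(x)p(y)}$, which is the cross-check that the identity in the answer encodes.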
{ "domain": "cstheory.stackexchange", "id": 2382, "tags": "machine-learning, it.information-theory" }
Redefining queue with different front and rear
Question: 

#include <stdio.h>
#include <limits.h>

#define MAX_SIZE 5
#define FULL INT_MAX
#define EMPTY INT_MIN

int Q[MAX_SIZE]; // linear queue //
int rear = -1;
int front = 0;

// condition for empty is Qsize is zero (front > rear) //
// condition for full is rear is at last index (rear == MAX_SIZE - 1) //

// basic operations on Queue //
int add_Q(int item); // if full returns (INT_MAX) else returns item inserted //
int delete_item_in_Q(); // if empty returns (INT_MIN) else returns deleted item //
size_t Qsize(); // returns 0 if (front > rear) else returns the current size of Q //

// read operations //
int rear_data(); // if empty returns (INT_MIN) else returns rear data //
int front_data(); // if empty returns (INT_MIN) else returns front data //
void read_Q();
void delete_Q();
void get_Q(size_t size);

int main(void){
    int size;
    printf("\nenter your size(positive) for Queue (less than %d) : ", MAX_SIZE);
    scanf("%d", &size);
    get_Q(size);
    read_Q();
    add_Q(66);
    read_Q();
    delete_item_in_Q();
    read_Q();
    delete_Q();
    read_Q();
    return 0;
}

void get_Q(size_t size){
    int item;
    if(size > MAX_SIZE){
        printf("enter valid size \n");
        return;
    }
    for(int i = 0; i < size; i++){
        printf("enter your queue elements : ");
        scanf("%d", &item);
        add_Q(item);
    }
}

int add_Q(int item){
    return rear == (MAX_SIZE - 1) ? FULL : (Q[++rear] = item);
}

int delete_item_in_Q(){
    return Qsize() ? Q[front++] : EMPTY;
}

size_t Qsize(){
    if(front > rear){
        front = 0;
        rear = -1;
    }
    return (rear - front + 1);
}

int front_data(){
    return Qsize() ? Q[front] : EMPTY;
}

int rear_data(){
    return Qsize() ? Q[rear] : EMPTY;
}

void read_Q(){
    if(!Qsize()){
        printf("\nthe queue is empty\n");
        return;
    }
    int i = front;
    printf("\nthe queue is : ");
    while(1){
        printf("%d=>", Q[i++]);
        if(i == MAX_SIZE){
            printf("\nqueue is full\n");
            return;
        }
        else if (i > rear){
            putchar('\n');
            return;
        }
    }
}

void delete_Q(){
    while(1){
        if(!Qsize()){
            printf("\nthe queue is empty\n");
            return;
        }
        printf("the deleted element is : %d\n", Q[front++]);
    }
}

In my (linear) queue, rear = -1 and front = 0, unlike the usual (front = rear = -1), and it assumes the user doesn't enter INT_MIN or INT_MAX. I am not checking the return values of many functions like scanf() and add_Q(), for simplicity. rear and front point to valid data if (front <= rear); the queue is empty if (front > rear). Qsize() always reinitializes the indices back to their original values in the empty state. In order to access the queue, the above conditions must be "strictly" met, or else we say it is empty. So is this queue a valid one? My teacher said that it is not a good one (rear or front is pointing at locations inside the array in empty conditions). Is there any problem in my code? I have tried to look for errors but none was found (based on my definition). And I need a better solution that doesn't assume the user never enters INT_MIN or INT_MAX (but it should return the added or deleted data). read_Q() also looks bad. I need a better one for it. 

Answer: Your teacher is right. This is not a good FIFO. 

Is there any problem in my code? 

We're not ready to go there, yet. Let's first worry about problems with the design. We want to implement a queue that others would want to use, and currently this doesn't fit the bill. 

production use, 24x7 

It is perfectly valid for a consumer to do this all day long: 

add_Q(get_val());
while (TRUE) {
    add_Q(get_val());
    add_Q(get_val());
    process(delete_item_in_Q(), delete_item_in_Q());
}

That is, the queue receives an unbounded stream of values, and queue depth is bounded: it always hovers between 1 and 3 inclusive. It never becomes EMPTY. And that's ok. It definitely never becomes FULL, though in your implementation it would. 

circular reasoning 

How can we accommodate such a usage pattern? Malloc / free of linked-list nodes is certainly one way. I'm glad you didn't go that route. Another way is to use a fixed-size array as you have, but allow it to begin at any element, wherever rear points to. As elements are added, we wrap around using modulo MAX_SIZE. Let front always be the index of a free element, and rear (potentially) point at the oldest valid element. When front == rear, the queue is empty; there is no valid element. When (front + 1) % MAX_SIZE == rear, then we have wrapped all the way around, and the queue is full. It will not accept any new elements. Notice that we "waste" one cell in that instance, since front == rear is reserved for the empty case rather than the full case. Define the constant to be "one more" if needed by your requirements. 

code style 

If an_identifier starts with lower case, good, stick with that; avoid tacking on a capital letter. Prefer ternary a ? b : c for read-only expressions. Clearly you can do a ? readonly : sideeffect as in the OP, and it will work. But that doesn't mean it is easily readable by others. Prefer if (a) { b } else { c } for side effects, even in the case that if (a) { return b } else { return c } needs a pair of return statements. 

Find a code formatter / pretty-printer, and use it. Doesn't matter if it's GNU, Google, K&R, or another style, as long as it's consistent throughout the code. 

Find a C unit test framework and use it. Automated tests "know the answer", to give Red / Green bar results upon running. They don't require the user to eyeball what was printed and decide whether that was the right result.
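To make the answer's ring-buffer scheme concrete, here is a short sketch (in Python for brevity; the index arithmetic is language-independent) using the answer's convention: front is the next free cell, rear is the oldest element, and one cell is sacrificed so that empty (front == rear) and full ((front + 1) % MAX_SIZE == rear) are distinguishable. Returning None instead of INT_MIN/INT_MAX sentinels also removes the question's "user never enters INT_MIN" assumption; in C you would return a status code and pass the item out through a pointer.

```python
MAX_SIZE = 5  # 4 usable cells; one is sacrificed to tell full from empty

class CircularQueue:
    def __init__(self):
        self.q = [None] * MAX_SIZE
        self.front = 0  # index of the next free cell (write end)
        self.rear = 0   # index of the oldest valid element (read end)

    def is_empty(self):
        return self.front == self.rear

    def is_full(self):
        return (self.front + 1) % MAX_SIZE == self.rear

    def add(self, item):
        if self.is_full():
            return None  # out-of-band status, not an in-band sentinel value
        self.q[self.front] = item
        self.front = (self.front + 1) % MAX_SIZE
        return item

    def delete(self):
        if self.is_empty():
            return None
        item = self.q[self.rear]
        self.rear = (self.rear + 1) % MAX_SIZE
        return item

# The 24x7 pattern from the answer: depth hovers between 1 and 3, and the
# indices wrap around forever instead of marching off the end of the array.
cq = CircularQueue()
cq.add(0)
for i in range(1000):
    cq.add(i)
    cq.add(i)
    assert cq.delete() is not None and cq.delete() is not None
print(cq.is_full(), cq.is_empty())  # False False
```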
{ "domain": "codereview.stackexchange", "id": 45133, "tags": "c, array, queue" }
Are coral cays and coral atolls just volcanic islands?
Question: Are coral cays and coral atolls just a different type of volcanic island? When I hear "volcanic island" I think of places like Moorea or Hawaii, but ultimately, coral cays and coral atolls lay their foundations on the remnants of a volcano, right? Answer: No, they are not necessarily volcanic. Both of these terms, along with many others like fringing reef and barrier reef, are just morphologic — to do with shape. They tend not to imply anything genetic — to do with origin. (The separation of description and interpretation is an important principle in observational sciences like geoscience.) Cay A low island, often of sand or coral. See Wiktionary. Atoll A ring-shaped coral reef. See Oxford English Dictionary. The name Cay is sometimes applied more generally to an island or group of islands. In some cases, as in Cay Sal Bank, Bahamas, the feature is actually an atoll: a ring-shaped reef. Cay Sal Bank is one of the largest atolls in the world, and did not form on a volcano. Charles Darwin hypothesized in his monograph The structure and distribution of coral reefs that atolls form when a fringing reef around a volcanic island grows as the volcano subsides: Indeed, Darwin turned out to be correct that atolls often have a volcanic origin, but Cay Sal Bank is a counter-example. I think it's fair to say that most people would assume that origin though, and so do lots of Internet definitions, so it's notable if an atoll formed some other way, e.g. on a non-volcanic seamount (see Keating et al., 1987, in Seamounts, Islands, and Atolls, AGU), or by island subsidence due to dissolution.
{ "domain": "earthscience.stackexchange", "id": 446, "tags": "geology, volcanology" }
PLZ HELP MEH IMPROVA MEH LOLCAT CODEZ, WHUCH CALCULUTS FAWCTORIALS
Question: SUNCE MEH BEEN WRUTEN CODEZ IN MUNY DIFFRNT PROGRAMMING LANGUAGES, MEH THOUGHTZ THAT ME TRY TO WRITE CODEZ IN AN ESOTERIC LANGUAGE. MEH CODEZ CALCULUTS A FAWCTORIAL OF UH USR INPUTTED NUMBAH IN LOLCODE, LOL. SOME OF MEH QUSTIONS: IZ MEH CODEZS A IDMITIC LOLCAT PROGUM? IZ THERE A GOODER WAY OF WRUTING MEH factorial FUNCSHUNS? TRANSLATION: Since I've been writing and trying programs in many different languages, I thought I'd give esoteric language a crack. So I used LOLCODE to write a program that calculates the factorial of a user given number: Questions for consideration: Is my LOLCODE code idiomatic to the LOLCODE language? Is there an easier way to write my factorial function? 

LOLCAT.lol 

BTW Calculates the factorial for a user given integer
BTW Author: Pythonic
BTW Version: 1.0
HAI 1.2
    HOW DUZ I factorial YR n
        BOTH SAEM n AN 0, O RLY?
            YA RLY
                FOUND YR 1
            NO WAI
                FOUND YR PRODUKT OF n AN factorial DIFF OF n AN 1
        OIC
    IF U SAY SO

    HOW DUZ I main
        I HAS A user_input
        VISIBLE "ENTER AN NUMBAH"
        VISIBLE "TO FINDZ ITZ FAWCTORIAL:: "
        GIMMEH user_input
        VISIBLE "DE FAWCTORIAL OF " user_input " IZ:: " factorial user_input
        VISIBLE "KTHXBYE!"
    IF U SAY SO

    main
KTHXBYE

For those of you who are not fortunate enough to have a LOLCODE compiler, here is a link to a REPL.IT of the code. NOTE: All answers must be given in proper LOLCODE English, with an optional translation in regular English. Failure to do so will result in me releasing my LOLCATS upon you.... Answer: Your factorial function enters an infinite loop if passed a negative integer. Since the entire program seems to take 1 input and then exit, I think it would be best if the program gave an error message and then exited.
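For readers without a LOLCODE interpreter, the bug the answer points out is easy to see in a plain-Python transliteration (an assumption: this mirrors the LOLCODE control flow). The BOTH SAEM n AN 0 test only stops the recursion at exactly 0, so a guard on n <= 0 (or an error message and exit, as the answer suggests) is needed:

```python
# Transliteration of the question's recursive factorial.  Testing n == 0
# alone means factorial(-1) recurses until the stack blows; guarding on
# n <= 0 terminates for any integer input.
def factorial(n):
    if n <= 0:  # the LOLCODE version only checks BOTH SAEM n AN 0, i.e. n == 0
        return 1
    return n * factorial(n - 1)

print(factorial(5))   # 120
print(factorial(-3))  # 1, instead of an unbounded recursion
```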
{ "domain": "codereview.stackexchange", "id": 24859, "tags": "recursion, functional-programming, lolcode" }
Butterworth filter's gain formula does not agree with R's `signal` package
Question: I'm trying to calculate the Butterworth filter gain. If I use the formula mentioned on Wikipedia: $$ G^2(\omega) = \frac{G_0^2} {1+\left(\frac{j\omega}{j\omega_c}\right)^{2n}} $$ I don't get a matching result from calculating the gain directly from the filter's magnitude using R's signal package. 

library(signal)

# Butterworth filter
# Gain formula from wikipedia
Butterworth_gain <- function(freq, cutoff_frequency, n = 1) {
    1/(1+(freq/cutoff_frequency)^(2*n))
}

bf <- butter(n = 2, W = .6, type = "low")
bfr <- freqz(bf)
plot(bfr$f, abs(bfr$h)^2, ty = 'l')
lines(bfr$f, Butterworth_gain(bfr$f, pi*.6, n = 2), col = 'red')

Answer: The function butter() computes the coefficients of a discrete-time ("digital") Butterworth filter, whereas the gain formula you used is valid for a continuous-time ("analog") Butterworth filter. According to the R documentation you can use butter() to compute an analog filter using the plane argument. The gain of a discrete-time Butterworth filter is obtained by the bilinear transform, which substitutes the "analog" frequency variable by $\tan(\omega/2)$, where $\omega$ is the frequency in radians, normalized by the sampling frequency. For a discrete-time unit gain lowpass filter of order $n$, the squared gain is given by $$G^2(\omega)=\frac{1}{1+\left(\frac{\tan(\omega/2)}{\tan(\omega_c/2)}\right)^{2n}},\qquad |\omega|\le\pi\tag{1}$$
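Formula (1) can be cross-checked without R; a small pure-Python sketch (assuming that W = .6 in butter() is normalized to the Nyquist frequency, so the cutoff is wc = 0.6*pi radians per sample):

```python
# Discrete-time Butterworth squared gain from eq. (1); w is in radians per
# sample, with w = pi corresponding to Nyquist.
from math import tan, pi

def butter_gain_sq(w, wc, n):
    return 1.0 / (1.0 + (tan(w / 2) / tan(wc / 2)) ** (2 * n))

wc = 0.6 * pi
print(butter_gain_sq(wc, wc, n=2))        # 0.5: half-power exactly at the cutoff
print(butter_gain_sq(0.1, wc, n=2))       # close to 1 in the passband
print(butter_gain_sq(0.99 * pi, wc, n=2)) # close to 0 near Nyquist
```

The prewarped $\tan(\omega/2)$ axis is exactly what the question's analog-formula comparison is missing.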
{ "domain": "dsp.stackexchange", "id": 9893, "tags": "filters, filter-design, lowpass-filter, butterworth, r" }
Higgs mechanism in QED
Question: I'm trying to understand the Higgs mechanism. For that matter, I'm exploring the possibility of giving mass to the photon in a gauge-invariant way. So, if we introduce a complex scalar field: $$ \phi=\frac{1}{\sqrt{2}}(\phi_{1}+i\phi_{2}) $$ with the following Lagrangian density (from now on, just Lagrangian) $$ \mathcal{L}=(\partial_{\mu} \phi)^{\star}(\partial^{\mu} \phi)-\mu^2(\phi^{\star}\phi)-\lambda(\phi^{\star}\phi)^2$$ with $\mu^{2}<0$ and $\lambda>0$, we note that the potential for the scalar particle has an infinity of vacua, all of them on a circle of radius $v$ around $(0,0)$. We introduce two auxiliary fields $\eta,\xi$ to express the perturbations around the vacuum: $$ \phi_0=\frac{1}{\sqrt{2}}[(v+\eta)+i \xi ]$$ Introducing the covariant derivative and the photon field, I have to compute the following: $$(D^{\mu} \phi)^{\dagger}(D_{\mu} \phi) $$ Are the derivatives included in $(D^{\mu} \phi)^{\dagger}$ supposed to act upon $(D_{\mu} \phi)$? Answer: The answer is no. Just as in the case without a gauge field, it is just a product of two derivatives of the field $\phi$. You might be interested in the chapter "Scalar Electrodynamics" in Srednicki's book.
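To spell the answer out: each derivative acts only on the field inside its own factor (a sketch, with the convention $D_\mu = \partial_\mu + ieA_\mu$; sign conventions vary between textbooks): $$(D_{\mu} \phi)^{\dagger}(D^{\mu} \phi)=\left[(\partial_{\mu} - ieA_{\mu})\phi^{\star}\right]\left[(\partial^{\mu} + ieA^{\mu})\phi\right]$$ Substituting $\phi_0$ and keeping the term quadratic in the gauge field gives $$e^2 A_{\mu} A^{\mu}\, \phi^{\star}\phi \;\longrightarrow\; \tfrac{1}{2}e^2 v^2 A_{\mu} A^{\mu},$$ i.e. the photon mass term $m_A = ev$ of the abelian Higgs mechanism.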
{ "domain": "physics.stackexchange", "id": 10651, "tags": "notation, differentiation, higgs" }
What's the algorithm for floating points equality test?
Question: I've found that 0.1 + 0.2 == 0.3 is not true in Java (see this demo). So, I'm interested in how equality testing can be implemented for floats. Is there a standardized algorithm? Answer: You cannot meaningfully test floating point values for equality. A floating point value does not represent a real number, it represents a range of real numbers, but it fails to store the width of this interval. All you can do with floating point values is to test them for approximate equality, and it's up to you to define the approximation you're willing to make. You can test whether the difference is smaller than a certain constant ($|x-y| \le \epsilon$), or whether it's relatively small ($|x-y| \le \epsilon \min\{|x|,|y|\}$), or something else; the right method and the value of $\epsilon$ depend on the calculation. There is a seminal article on this topic: the aptly titled What Every Computer Scientist Should Know About Floating Point Arithmetic by David Goldberg [PDF]. Sure, it's long — that shows how difficult floating point is to use, if you care about things like whether 0.1 + 0.2 equals 0.3. And no, there is no magic bullet that would let you sweep the difficulties under the carpet. There is a good, more accessible summary called What Every Programmer Should Know About Floating-Point Arithmetic, or, Why don’t my numbers add up? Applications that require exact computations, such as financial computations, do not use floating point. They use integer or fixed-point arithmetic. Applications that use floating point require a careful, expensive analysis to find out what the uncertainty in the result is. Keep in mind that a floating point value does not really represent a real number $m \cdot 2^e$ nor even a specific interval like $\left[(m-\frac12) 2^e, (m+\frac12) 2^e\right]$, but an interval whose width increases as the computation gets longer and the uncertainty accumulates. 
There is a standard for floating point operations, which most (but not all) systems today implement: IEEE 754 (“IEEE floating point”). This standard defines storage formats, rounding rules, denormalized numbers, etc. It tries to arrange for typical computations to be as precise as possible. Arranging for 0.1 + 0.2 to equal 0.3 is beyond its abilities (that would be possible, but then it would cause 0.2 + 0.3 not to equal 0.5 or some similar misfit). Some threads on Stack Overflow on this topic: Is floating point math broken? Why Are Floating Point Numbers Inaccurate? Floating point inaccuracy examples Why does changing the sum order returns a different result?
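The comparison strategies the answer lists can be sketched in a few lines (Python here; its float is an IEEE 754 double, the same as Java's double, so the same 0.1 + 0.2 behavior shows up):

```python
from math import isclose

print(0.1 + 0.2 == 0.3)  # False: both sides are rounded binary fractions
print(0.1 + 0.2)         # 0.30000000000000004

x, y = 0.1 + 0.2, 0.3
eps = 1e-9

# Absolute tolerance: |x - y| <= eps
print(abs(x - y) <= eps)             # True

# Relative tolerance, as in the answer's |x - y| <= eps * min(|x|, |y|)
print(abs(x - y) <= eps * min(abs(x), abs(y)))  # True

# Python's standard library combines both styles:
print(isclose(x, y, rel_tol=1e-9, abs_tol=0.0))  # True
```

As the answer stresses, the right tolerance is application-specific; these one-liners are the mechanism, not the policy.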
{ "domain": "cs.stackexchange", "id": 3893, "tags": "floating-point" }
If we mix alcohol and water, then heat them, will the temperature stop at 78 degrees, which is the boiling point of alcohol?
Question: I understand that if we heat alcohol only, the temperature will stop at its boiling point, which is 78 degrees. The boiling point of water is 100 degrees. What will happen to the boiling point if we mix alcohol and water in a 1:1 ratio? Will the mixture stop at the boiling point of alcohol or exceed it? Answer: The diagram above shows the observed vapor-liquid equilibrium behavior of ethanol-water mixtures at one atmosphere. In the diagram, the curve to the left is the mole fraction of ethanol in the liquid and the curve to the right is the mole fraction in the vapor. For a 1:1 molar mixture in the liquid, it shows that the bubble point is 353 K (80 C) rather than 78 C. For a 1:1 mass mixture, the mole fraction of ethanol in the liquid is 0.28 and the bubble point is 82 C.
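As a rough cross-check of the diagram's numbers, an ideal-solution (Raoult's law) estimate can be sketched. This ignores the strong non-ideality of ethanol-water (which produces the azeotrope and pushes the real equimolar bubble point down to about 80 C), and the Antoine constants below are the commonly tabulated values, so treat the whole thing as an illustrative assumption. It still shows the key point: the mixture boils between the two pure boiling points, not at 78 C.

```python
# Ideal Raoult's-law bubble point of an equimolar ethanol-water liquid at
# 1 atm (760 mmHg).  Antoine form: log10 P[mmHg] = A - B / (C + T[degC]);
# the constants used here are commonly tabulated values (an assumption).
def psat(T, A, B, C):
    return 10 ** (A - B / (C + T))

def p_total(T, x_eth=0.5):
    p_eth = psat(T, 8.20417, 1642.89, 230.300)
    p_h2o = psat(T, 8.07131, 1730.63, 233.426)
    return x_eth * p_eth + (1 - x_eth) * p_h2o

# Bisect for the temperature where the total vapor pressure reaches 760 mmHg.
lo, hi = 70.0, 100.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if p_total(mid) < 760.0 else (lo, mid)

print(round(lo, 1))  # ~86.8: between 78.3 (ethanol) and 100 (water); the real
                     # value is lower (~80) because the mixture is non-ideal
```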
{ "domain": "physics.stackexchange", "id": 73896, "tags": "thermodynamics, temperature, water, physical-chemistry" }
Is there a "Standard Algorithm" language, as used in academic papers?
Question: In many academic papers, algorithms are described. They seem to use a similar "syntax". Is there a standard for this language? If I want to describe an algorithm, should I just improvise my own notation? For example, note that papers in general use a <-- b to assign b to a, not a = b. But where is that standard defined? Answer: No. There is no universal standard. There are some conventions that have become more popular over time, through gradual evolution. A good starting place would be to look at the pseudocode notation used in a few common algorithms textbooks, pick one you like, and try to emulate it. Probably anything done in a popular and well-regarded textbook is going to be reasonable and understandable to others.
{ "domain": "cs.stackexchange", "id": 21035, "tags": "algorithms, pseudocode" }
Why is static recompilation not possible?
Question: I'm researching static recompilation but there doesn't seem to be too much information about the subject. I've heard that dynamic recompilation (emulation) can be up to 6 times slower than native assembly, but I'm curious why we aren't able to translate to a different architecture ahead of time. Even though some instructions wouldn't be 1:1, can't we just shift the rest of the code, and all of the jump instructions with it? Furthermore, if the problem is a lack of source code, would this mean that Super Mario 64 (which has already been completely reverse engineered, to exactly identical binaries), can be fairly easily recompiled into a different architecture? Answer: Static recompilation from a binary is hard, because it is challenging to reconstruct the structure of the program. It is hard to statically figure out the location of all instructions that will be executed, the starting point of all functions, and the set of all jump targets. This information is needed for natural methods of recompilation: we need to know where all the instructions are, so we can recompile them; we need to know where function prologues and epilogues are, so we can translate them to other function calling conventions; we need to know the set of all jump targets, because all of those locations need to be recompiled. It's not impossible, but it can be extremely challenging to do with 100% fidelity. Given a binary executable, it is hard to reliably find all of the executable code statically, due to the presence of indirect jumps. In particular, on x86, it is possible to jump into the "middle" of an instruction, which will cause the stream of bytes to be interpreted differently than you might expect. Since we can't predict all possible jump targets of indirect jumps (this is as hard as the halting problem), it is hard to know all locations that might be executed as code, and at what offset. This makes reliable static disassembly hard. 
And of course, if you can't even disassemble, it's challenging to re-assemble or re-compile. See, e.g., work on static disassembly to learn about the subject. Here are some sample papers: From Hack to Elaborate Technique—A Survey on Binary Rewriting. Matthias Wenzl, Georg Merzdovnik, Johanna Ullrich, and Edgar Weippl. ACM Computing Surveys (CSUR), 52(3), 1-37 An In-Depth Analysis of Disassembly on Full-Scale x86/x64 Binaries . Dennis Andriesse, Xi Chen, Victor van der Veen, Asia Slowinska, Herbert Bos. Usenix Security 2016. SoK: All You Ever Wanted to Know About x86/x64 Binary Disassembly But Were Afraid to Ask. Chengbin Pang, Ruotong Yu, Yaohui Chen, Eric Koskinen, Georgios Portokalidis, Bing Mao, Jun Xu. 2021 IEEE Symposium on Security and Privacy (SP) (pp. 833-851). See also https://hexterisk.github.io/blog/posts/2020/04/02/disassembly-and-binary-analysis-fundamentals/. This problem is particularly intractable for obfuscated binaries, but even for normal non-obfuscated binaries, existing methods have difficulty fully recovering 100% of the instructions, function starts, and jump targets with perfect accuracy. This is a problem, because if there is even one mistake, then the entire program might crash. An additional challenge is that if the program does any kind of runtime code generation or runtime JIT or runtime recompilation, then this only makes the static recompilation problem even harder. In contrast, runtime (dynamic) methods avoid this problem, because they can observe which instructions and code paths actually get executed and recompile only the ones that are executed, at the time they are executed.
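The overlapping-decoding problem the answer describes can be illustrated with a toy: the following sketch uses a hypothetical one- and two-byte ISA (deliberately not real x86) in which the same byte stream forms two different programs depending on the start offset, and a computed jump could legitimately target either one. A static recompiler has to pick one reading, and there is no general way to know which.

```python
# Toy ISA: opcode byte -> (mnemonic, total instruction size in bytes).
TOY_ISA = {
    0x01: ("INC", 1),   # 1-byte opcode
    0x02: ("DEC", 1),   # 1-byte opcode
    0xB8: ("LOAD", 2),  # 2-byte opcode: LOAD <imm8>
}

def disassemble(code, start):
    """Linear-sweep disassembly of `code` beginning at offset `start`."""
    out, pc = [], start
    while pc < len(code):
        name, size = TOY_ISA[code[pc]]
        args = list(code[pc + 1:pc + size])
        out.append((pc, name, args))
        pc += size
    return out

code = bytes([0xB8, 0x01, 0x02])
print(disassemble(code, 0))  # [(0, 'LOAD', [1]), (2, 'DEC', [])]
print(disassemble(code, 1))  # [(1, 'INC', []), (2, 'DEC', [])] -- same bytes,
                             # different program
```

On real x86 the effect is worse, since instructions range from 1 to 15 bytes and nearly every byte begins some valid instruction.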
{ "domain": "cs.stackexchange", "id": 20635, "tags": "computer-architecture, assembly" }