anchor
stringlengths
0
150
positive
stringlengths
0
96k
source
dict
Is my PHP login system following best practices? Is the code really OOP?
Question: I'm learning about OOP and putting my knowledge into practice. I created a login system in PHP, so I wish someone could tell me if I'm on the right track. I feel like I'm getting better at coding, but it's never good enough, so I want someone more experienced (especially in OOP) to look at my code and tell me if I'm doing it right and what I can improve. Just a note, I recognize that the PHP code contained in HTML is not something appropriate, but I just put it to make it easier to understand how the system works. Therefore, I ask you to focus on the login system part, more precisely on OOP. Tree Login(directory):. │ composer.json │ Dashboard.php │ Main.php │ Signin.php │ ├───.idea │ .gitignore │ Login.iml │ modules.xml │ php.xml │ workspace.xml │ ├───src │ HttpTransport.php │ Login.php │ Session.php │ UserAccount.php │ └───vendor │ autoload.php │ └───composer autoload_classmap.php autoload_namespaces.php autoload_psr4.php autoload_real.php autoload_static.php ClassLoader.php LICENSE UserAccount.php <?php declare(strict_types = 1); namespace Login; class UserAccount { private string $email; private string $password; public function setEmail(string $email): void { $this->email = $email; } public function getEmail(): string { return $this->email; } public function setPassword(string $password): void { $this->password = $password; } public function getPassword(): string { return $this->password; } } Session.php <?php declare(strict_types = 1); namespace Login; class Session { public static function startSession(string $sessionName, string $value) { return $_SESSION[$sessionName] = $value; } } HttpTransport <?php declare(strict_types = 1); namespace Login; class HttpTransport { public static function setFlashMessage(string $message): string { return $message; } public static function redirect(string $path): void { header('Location: ' . $path); exit; } } Login.php <?php declare(strict_types = 1); namespace Login; use \Login\HttpTransport; use \Login\Session; class Login { public function __construct(public \PDO $conn) { } public function login(UserAccount $user) { $sql = "SELECT * FROM users WHERE email = ?"; $statement = $this->conn->prepare($sql); $statement->execute([ $user->getEmail() ]); if ($statement->rowCount() > 0) { $rows = $statement->fetchAll(\PDO::FETCH_ASSOC); foreach ($rows as $row) { $passwordHash = $row['passwordHash']; $hash = password_verify($user->getPassword(), $passwordHash); if ($hash) { Session::startSession('username', $row['username']); HttpTransport::redirect('/login/dashboard.php'); } else { $flashMessage = HttpTransport::setFlashMessage('<p>Incorrect e-mail or password.</p>'); Session::startSession('error', $flashMessage); HttpTransport::redirect('/login/signin.php'); } } } else { $flashMessage = HttpTransport::setFlashMessage('<p>Incorrect e-mail or password.</p>'); Session::startSession('error', $flashMessage); HttpTransport::redirect('/login/signin.php'); } } } Signin.php <?php session_start(); session_regenerate_id(true); if (isset($_SESSION['username'])) { header('Location: dashboard.php'); exit; } ?> <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>Login</title> </head> <body> <form method="POST" action="Main.php"> <?php if (isset($_SESSION['error'])) { echo $_SESSION['error']; unset($_SESSION['error']); } ?> <p>E-mail</p> <input type="email" placeholder="Enter e-mail" name="email" required autofocus><br><br> <p>Password</p> <input type="password" placeholder="Enter password" name="password" required><br><br> <input type="submit" value="Login"> </form> </body> </html> Dashboard.php <?php session_start(); session_regenerate_id(true); $username = $_SESSION['username']; if (isset($username)) { echo "Logged in!"; echo "<br><br>"; echo "Welcome, " . htmlentities($username); } else { header('Location: signin.php'); exit; } Main.php <?php declare(strict_types = 1); require_once __DIR__ . '/vendor/autoload.php'; session_start(); try { $username = 'root'; $password = ''; $conn = new \PDO("mysql:host=localhost;dbname=customers;charset=utf8mb4", $username, $password, [ \PDO::ATTR_ERRMODE => \PDO::ERRMODE_EXCEPTION, \PDO::ATTR_EMULATE_PREPARES => false ]); } catch (\PDOException $e) { throw new \PDOException($e->getMessage(), (int) $e->getCode()); } $user = new \Login\UserAccount(); $user->setEmail(filter_input(INPUT_POST, 'email', FILTER_SANITIZE_EMAIL)); $user->setPassword(filter_input(INPUT_POST, 'password', FILTER_SANITIZE_STRING)); (new \Login\Login($conn))->login($user); SQL -- phpMyAdmin SQL Dump -- version 5.1.1 -- https://www.phpmyadmin.net/ -- -- Host: 127.0.0.1 -- Generation Time: Nov 11, 2021 at 07:49 PM -- Server version: 10.4.21-MariaDB -- PHP Version: 8.0.12 SET SQL_MODE = "NO_AUTO_VALUE_ON_ZERO"; START TRANSACTION; SET time_zone = "+00:00"; /*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */; /*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */; /*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */; /*!40101 SET NAMES utf8mb4 */; -- -- Database: `customers` -- -- -------------------------------------------------------- -- -- Table structure for table `users` -- CREATE TABLE `users` ( `id` int(11) NOT NULL, `username` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL, `passwordHash` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL, `email` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci; -- -- Dumping data for table `users` -- INSERT INTO `users` (`id`, `username`, `passwordHash`, `email`) VALUES (1, 'PHP', '$2y$10$sJgO/trlE5Ik1PuIEgzKkuur2V3vpNJ1kQAZfkyjURQD2AWuF3IWK', 'php@mail.com'), (3, 'Warlock', '$2y$10$B3VHrEmkjUUk2yncfQs.V.lR8FXat0tBj.LhxWEU9U5fuXhkYcSAi', 'warlock@mail.com'); -- -- Indexes for dumped tables -- -- -- Indexes for table `users` -- ALTER TABLE `users` ADD PRIMARY KEY (`id`), ADD UNIQUE KEY `username` (`username`), ADD UNIQUE KEY `email` (`email`); -- -- AUTO_INCREMENT for dumped tables -- -- -- AUTO_INCREMENT for table `users` -- ALTER TABLE `users` MODIFY `id` int(11) NOT NULL AUTO_INCREMENT, AUTO_INCREMENT=4; COMMIT; /*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */; /*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */; /*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */; If you want to make login: E-mail: warlock@mail.com Pass: Rptw36VWBU%7DF composer.json { "autoload": { "psr-4": { "Login\\" : "src/" } } } Questions Is my code clean? Am I following the sole responsibility principle that a class should only do one thing? Regarding naming, do you think I'm naming things correctly? What do you suggest for me to become a better developer? I appreciate any help, if you can give tips on how to improve the code or how to learn more, something like recommending books, I would be happy to. My goal is to learn, I believe the code is well structured and easy to read, but maybe you have more experience and can find something wrong, however, my goal is to learn, so feel free to criticize and give tips. Answer: This is a pretty good attempt at a login system. The code looks much better than 99% of code that I see on Stack Overflow. Keep up the good work! Your code is almost clean. Your classes do so little that I would consider it too little (more on this later). The naming is almost perfect IMHO. Nonetheless, I have noticed a number of smaller issues that I think you ought to know to become a better developer. Redirects You follow the general rule for 302 redirects: header and exit. Once you switch over to PHP 8.1 I highly recommend using the never type. Take a look at your Login::login() method. You have redirect() followed by else block. That's unnecessary. The code after redirect will never be executed. Login::login() needs refactoring It does what it was supposed to do, but the code is overly complex. Despite calling fetchAll() your foreach loop will never iterate more than one row. Either way the code exits due to redirects. password_verify() does not return hash, so you named the variable incorrectly. There's also no reason for the temporary variable. if($statement->rowCount() > 0 is generally considered an antipattern and is unnecessary in this code. SELECT * is also an antipattern you should avoid. Consider how it could be simplified and still do the same: public function login(UserAccount $user):void { $sql = "SELECT passwordHash, username FROM users WHERE email = ?"; $statement = $this->conn->prepare($sql); $statement->execute([ $user->getEmail() ]); if ($row = $statement->fetch(\PDO::FETCH_ASSOC)) { if (password_verify($user->getPassword(), $row['passwordHash'])) { Session::startSession('username', $row['username']); HttpTransport::redirect('/login/dashboard.php'); } } $flashMessage = HttpTransport::setFlashMessage('<p>Incorrect e-mail or password.</p>'); Session::startSession('error', $flashMessage); HttpTransport::redirect('/login/signin.php'); } I also added void return type, but once you move to PHP 8.1 you should use never. Session class At the moment this class does nothing. You could remove it. However, I think it's a good idea for a class like this with methods to start, regenerate and kill the session. In the start method, you should ensure you use the right storage method, secure cookies, HTTP only. Use session_set_cookie_params() to set these options. You should also set the session name and start it (session_start() should belong in this class). Also, session_regenerate_id() after session_start() is not such a good idea. You should regenerate it after successful login and at regular intervals, but not every time the page is loaded. FILTER_SANITIZE_STRING is deprecated in 8.1 Do not use this filter. You simply don't need it. Passwords should not be modified in any way. Whatever the value you got from the user, should be the value used in password_verify() without any modification. You can use FILTER_UNSAFE_RAW which leaves the value unchanged. Database credentials are hardcoded Database credentials should never be present in the code. Use config file (do not store it in VCS). You can use any format you want for the config file, PHP file, JSON, INI, YAML, NEON, etc. Just do not put it in the code! Document root It looks like your entry points are stored in the main directory. You should create public directory that will be accessible from the internet and store your entry points there. All other code should be inaccessible from the outside. Login does too little This is the last point as this is just an opinion. You don't need to listen to this, but in my opinion the class is too restricted. I would call the class Auth and put all your authentication-related functionality there. There could be a method called logout, a method that rehashes passwords, a method that stores invalid attempts in the database (without passwords) so that you can rate limit or show captcha. On the other hand, I would also move all password-related functionality to a separate class. There could be a method that calls https://api.pwnedpasswords.com/range API to check whether the password is compromised. Another method that checks whether the password needs to be rehashed (see. password_needs_rehash()). And of course, a method to generate a hash (remember to forbid empty passwords and ones that contain NUL bytes; throw an exception if such password is provided). Conclusion You are very knowledgeable and you are using the latest PHP 8 features. I see you have also read good online resources, e.g. https://phpdelusions.net/. The code is a very good start. You need a lot more security considerations if this is to become a real login system, e.g. secure sessions/cookies, password rehashing, checking passwords against leaked passwords. You have also avoided many pitfalls common to beginners. You don't have SQL injection, you avoid XSS, you catch PDO exception to prevent accidental credential leaks in error logs, you use strict types, utf8mb4 in the database, password hashing, and you didn't add arbitrary password restrictions. I'd consider you an experienced developer and a valuable asset to any team.
{ "domain": "codereview.stackexchange", "id": 42437, "tags": "php, object-oriented, authentication" }
Differential operators in curvilinear coordinates
Question: In the appendix A of Griffith's Electrodynamics text, he cites Spivak's Calculus on Manifolds as a reference more a more complete treatment of taking the gradient, curl, divergence, and Laplacian in general coordinate systems. This appealed to me because I wanted to understand this from the view of differential geometry, instead of the long, ad hoc computations for cylindrical and spherical coordinates. I haven't read Spivak's book, but have I have a little understanding of basic differential geometry (first two chapters of Tu's Intro to Manifolds). Anyway, there's an online version of Spivak's text, but looking through the table of contents, I'm not really sure where I'd find this treatment of differential operators in curvilinear coordinates in his book. If anyone's familiar with it, could you please cite where in the book he treats this topic? Or if anyone is familiar with another reference or could provide a good explanation, that would also be greatly appreciated. Answer: I'm not completely sure what you want, but honestly the entirety of Spivak's Calculus on manifolds is devoted to exactly that. If you want something that feels familiar, you can simply find $\nabla$ in various coordinate systems in Wikipedia, but if you want a less coordinate-centric view then you're probably going to need to step outside of your comfort zone. In particular, you should be aware that the concepts of grad, curl and div are not particularly useful by themselves in an arbitrary manifold with arbitrary coordinates. Instead, functions and vector fields are replaced by differential forms, and div/grad/curl get replaced by the exterior derivative $d$. With that in mind, the sections of Calculus on manifolds which deal most directly with these things are §4.2 (Fields and forms), §5.2 (Fields and forms on manifolds) and §5.3 (Stokes' theorem on manifolds). Honestly, though, you should read the whole thing: it's very short (137 pages of large type on a small page), and it's delightfully constructed. It's honestly a little jewel! (On the other hand, the compact construction does make it a bit dense at times.) Beyond this, if you want a more expanded view of what happens to functions, vector fields, and differential operators on more general manifolds, I would really recommend Spivak's A comprehensive introduction to differential geometry (vol I). In particular, §3.4 (The tangent bundle of a manifold) and chapters 7 (Differential forms) and 8 (Integration) deal with (the appropriate generalizations of) $\nabla$ from a general differential geometric perspective.
{ "domain": "physics.stackexchange", "id": 21189, "tags": "soft-question, differential-geometry, resource-recommendations, differentiation, calculus" }
What is the name for 100 litres?
Question: 1 litre = 1 litre 10 litres = decalitre 100 litres = ? 1000 litres = kilolitre Is there a scale for the naming like there is for data? Answer: Hectolitre See the linked Wikipedia page for all the prefixes.
{ "domain": "physics.stackexchange", "id": 83432, "tags": "terminology, conventions, si-units, volume, metrology" }
Why define xml will cause multiple nodes named error?
Question: I followed PR2PluggingIn doc. For example, I have a runable launch file: <launch> <!-- Set params for sim --> <param name="detect_wall_norm/sim" value="True" type="bool"/> <param name="detect_outlet/sim" value="True" type="bool"/> <param name="r_gripper_controller/gripper_action_node/stall_velocity_threshold" value="0.01" type="double"/> <!-- Start PR and load the plug and outlet in the world --> <include file="$(find pr2_plugs_gazebo_demo)/launch/pr2_plug_outlet.launch"/> <!-- Start the plugs app --> <include file="$(find pr2_plugs_actions)/launch/plug_actions.launch"/> <!-- ik action --> <node pkg="pr2_arm_move_ik" type="arm_ik" name="r_arm_ik" output="screen"> <param name="joint_trajectory_action" value="r_arm_controller/joint_trajectory_generator" /> <param name="arm" value="r" /> <param name="free_angle" value="2" /> <param name="search_discretization" value="0.01" /> <param name="ik_timeout" value="5.0" /> </node> <!-- Trajectory generator --> <node pkg="joint_trajectory_generator" type="joint_trajectory_generator" output="screen" name="joint_trajectory_generator" ns="r_arm_controller" > <param name="max_acc" value="2.0" /> <param name="max_vel" value="2.5" /> </node> <node pkg="joint_trajectory_generator" type="joint_trajectory_generator" output="screen" name="joint_trajectory_generator" ns="l_arm_controller" > <param name="max_acc" value="2.0" /> <param name="max_vel" value="2.5" /> </node> <!-- tuckarm action --> <node pkg="pr2_tuck_arms_action" type="tuck_arms.py" name="tuck_arms_action" output="screen"> <param name="r_joint_trajectory_action" value="r_arm_controller/joint_trajectory_generator" /> <param name="l_joint_trajectory_action" value="l_arm_controller/joint_trajectory_generator" /> <param name="move_duration" value="0.0" /> </node> </launch> But when I add the following lines: <!-- navstack --> <include file="$(find pr2_2dnav)/pr2_2dnav.launch" /> I will get error when I running: sam@sam:~/code/ros/temp$ optirun roslaunch ./t2.launch ... logging to /home/sam/.ros/log/68f97e50-42ad-11e2-8d06-e0b9a5f829db/roslaunch-sam-26744.log Checking log directory for disk usage. This may take awhile. Press Ctrl-C to interrupt Done checking log file disk usage. Usage is <1GB. roslaunch file contains multiple nodes named [/r_arm_controller/joint_trajectory_generator]. Please check all <node> 'name' attributes to make sure they are unique. Also check that $(anon id) use different ids. sam@sam:~/code/ros/temp$ How could that possible? I try to trace pr2_2dnav.launch. <launch> <include file="$(find pr2_machine)/$(env ROBOT).machine" /> <include file="$(find pr2_navigation_global)/amcl_node.xml" /> <include file="$(find pr2_navigation_teleop)/teleop.xml" /> <include file="$(find pr2_navigation_perception)/lasers_and_filters.xml" /> <include file="$(find pr2_navigation_perception)/ground_plane.xml" /> <include file="$(find pr2_navigation_global)/move_base.xml" /> </launch> It seems very normal. I am confused what situation I met. Is there anything I missing? How to fix it? Thank you~ Originally posted by sam on ROS Answers with karma: 2570 on 2012-12-09 Post score: 0 Answer: The same node occurs twice in the overall launch. This can happen if you include something that defines the same node. You can use roslaunch --find-node to figure out where/if your launch file and the include define this node. You should disabled/comment the node from one of the launches, so it is only started once. Originally posted by dornhege with karma: 31395 on 2012-12-09 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by sam on 2012-12-11: I can't use --find-node or --nodes or --args to find the node name that happen twice. But I find the xml have the same structure with launch file... Why launch file structure not use the launch extension but use xml extension? Thank you~
{ "domain": "robotics.stackexchange", "id": 12037, "tags": "ros, roslaunch, xml, node" }
What factors affect drift current in a PN junction?
Question: My textbook just says that in a unbiased PN junction the diffusion current and drift current are equal. In forward biased junction the diffusion current is higher than drift current and in reverse biased junction the drift current is higher than diffusion current. I understand the above concepts but in reality what factors affect drift current in depletion layer in a PN junction? Answer: In the basic model, the drift current is proportional to the number of charge carriers, electron-hole pairs thermally generated within the depletion layer. The strong electrical field will swipe the electrons one way and the holes in the other direction, both contributing to the current. Basically all carriers that are there, so that the drift current does not depend on field strength. So in this picture, the drift current depends exponentially on reciprocal temperature: $$ I_d \propto e^\frac{-E_g}{2kT}. $$ But things are a bit more complicated, see for example these notes by Fernsler.
{ "domain": "physics.stackexchange", "id": 62503, "tags": "semiconductor-physics" }
Jetpack Compose: Length-Units Converter
Question: I have made a length-units converter with Jetpack Compose. Here's the source-code: class MainActivity : ComponentActivity() { override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) setContent { LengthConverterTheme { Surface( modifier = Modifier.fillMaxSize(), color = MaterialTheme.colors.background ) { MainUI() } } } } } fun convertInput(fromUnit: Units, toUnit: Units, fromValue: Double): Double { var lengthMeter = 0.0 when (fromUnit) { Units.meter -> lengthMeter = fromValue * 1.0 Units.kilometer -> lengthMeter = fromValue * 1000.0 Units.feet -> lengthMeter = fromValue * 0.3048 Units.yard -> lengthMeter = fromValue * 0.9144 Units.miles -> lengthMeter = fromValue * 1609.34 } var resultVal = 0.0 when (toUnit) { Units.meter -> resultVal = lengthMeter Units.kilometer -> resultVal = lengthMeter / 1000.0 Units.feet -> resultVal = lengthMeter * 3.28084 Units.yard -> resultVal = lengthMeter * 1.09361 Units.miles -> { resultVal = lengthMeter * 0.000621371 } } return resultVal } @Composable fun MainUI() { val context = LocalContext.current var isSelectedFrom by remember { mutableStateOf(Units.meter) } var isSelectedTo by remember { mutableStateOf(Units.meter) } var userInput by remember { mutableStateOf("0.0") } var currentResult by remember { mutableStateOf(0.0) } Column(modifier = Modifier .fillMaxWidth() .padding( top = 15.dp, start = 25.dp, end = 25.dp )) { TextField(value = userInput, modifier = Modifier .fillMaxWidth() .padding(top = 10.dp), placeholder = { Text("Enter value to convert") }, colors = TextFieldDefaults.textFieldColors( backgroundColor = Color.White, textColor = Color.Black), onValueChange = { if (it.isNotEmpty()) { userInput = it currentResult = convertInput(isSelectedFrom, isSelectedTo, it.toDouble()) } }) UnitPicker(title = "Convert from: ", currentlySelected = isSelectedFrom) { isSelectedFrom = it currentResult = convertInput(isSelectedFrom, isSelectedTo, userInput.toDouble()) } UnitPicker(title = "Convert to: ", currentlySelected = isSelectedTo) { isSelectedTo = it currentResult = convertInput(isSelectedFrom, isSelectedTo, userInput.toDouble()) } Text("Result: ${currentResult.toString()}", modifier = Modifier.padding(top = 25.dp), fontSize = 20.sp, fontWeight = FontWeight.Bold) } } @Composable fun UnitPicker(title: String, currentlySelected: Units, setUnit: (Units) -> Unit) { Text("Convert from: ", fontSize = 20.sp, fontWeight = FontWeight.Bold) Units.values().forEach { Row( Modifier.fillMaxWidth(), horizontalArrangement = Arrangement.Start, verticalAlignment = Alignment.CenterVertically) { Text(it.name.replaceFirstChar { if (it.isLowerCase()) it.titlecase(Locale.ROOT) else it.toString() }) var isSelected = it == currentlySelected RadioButton(selected = isSelected, onClick = { setUnit(it) }) } } Divider() } enum class Units { meter, kilometer, feet, yard, miles } Could the central algorithm (within the function 'convertInput') become improved? Is there are more elegant solution? What should me modified to accomplish a more Kotlin-ideomatic code? Looking forward to reading your answers and comments? Answer: Regarding making this more idiomatic Kotlin code you can take advantage of direct value assignment like so: val lengthMeter = when (fromUnit) { Units.Meter -> fromValue * 1.0 Units.Kilometer -> fromValue * 1000.0 Units.Feet -> fromValue * 0.3048 Units.Yard -> fromValue * 0.9144 Units.Miles -> fromValue * 1609.34 } Likewise you can return from a function in this way: return when (toUnit) { Units.Meter -> lengthMeter Units.Kilometer -> lengthMeter / 1000.0 Units.Feet -> lengthMeter * 3.28084 Units.Yard -> lengthMeter * 1.09361 Units.Miles -> lengthMeter * 0.000621371 } The complete function then looks like this: fun convertInput(fromUnit: Units, toUnit: Units, fromValue: Double):Double { val lengthMeter = when (fromUnit) { Units.Meter -> fromValue * 1.0 Units.Kilometer -> fromValue * 1000.0 Units.Feet -> fromValue * 0.3048 Units.Yard -> fromValue * 0.9144 Units.Miles -> fromValue * 1609.34 } return when (toUnit) { Units.Meter -> lengthMeter Units.Kilometer -> lengthMeter / 1000.0 Units.Feet -> lengthMeter * 3.28084 Units.Yard -> lengthMeter * 1.09361 Units.Miles -> lengthMeter * 0.000621371 } } Also. The naming convention for enum values in Kotlin is to use uppercase as below enum class Units { Meter, Kilometer, Feet, Yard, Miles }
{ "domain": "codereview.stackexchange", "id": 44340, "tags": "android, kotlin, kotlin-compose" }
Changing Number to Words in JavaScript
Question: One day I saw a question on Stack Overflow asking about changing numbers to words, I thought about it, and next day I started coding. When I had the code working, I thought it could, most likely, be much better. Then I started looking for some existing code about it and found this one. I wasn't able to understand much of it, but I am sure it tackles the problem much differently than I did. So I got interested about knowing other possible solutions for it. I am new to programming and this my first "useful" code, so it would be very rewarding to have my code criticized for things that should, or could, have been done in another way. This doesn't work with , or . (1,000 or 0.50 won't work). I am not sure why, but passing a number like (036) will return wrong results if it is a number primitive. I think it is interpreting it as an octal, but I defined the radix, so it shouldn't. function numToWords(number) { //Validates the number input and makes it a string if (typeof number === 'string') { number = parseInt(number, 10); } if (typeof number === 'number' && !isNaN(number) && isFinite(number)) { number = number.toString(10); } else { return 'This is not a valid number'; } //Creates an array with the number's digits and //adds the necessary amount of 0 to make it fully //divisible by 3 var digits = number.split(''); var digitsNeeded = 3 - digits.length % 3; if (digitsNeeded !== 3) { //prevents this : (123) ---> (000123) while (digitsNeeded > 0) { digits.unshift('0'); digitsNeeded--; } } //Groups the digits in groups of three var digitsGroup = []; var numberOfGroups = digits.length / 3; for (var i = 0; i < numberOfGroups; i++) { digitsGroup[i] = digits.splice(0, 3); } console.log(digitsGroup) //debug //Change the group's numerical values to text var digitsGroupLen = digitsGroup.length; var numTxt = [ [null,'one','two','three','four','five','six','seven','eight','nine'], //hundreds [null, 'ten', 'twenty', 'thirty', 'forty', 'fifty', 'sixty', 'seventy', 'eighty', 'ninety'], //tens [null,'one','two','three','four','five','six','seven','eight','nine'] //ones ]; var tenthsDifferent = ['ten','eleven','twelve','thirteen','fourteen','fifteen','sixteen','seventeen','eighteen','nineteen'] // j maps the groups in the digitsGroup // k maps the element's position in the group to the numTxt equivalent // k values: 0 = hundreds, 1 = tens, 2 = ones for (var j = 0; j < digitsGroupLen; j++) { for (var k = 0; k < 3; k++) { var currentValue = digitsGroup[j][k]; digitsGroup[j][k] = numTxt[k][currentValue] if (k === 0 && currentValue !== '0') { // !==0 avoids creating a string "null hundred" digitsGroup[j][k] += ' hundred '; } else if (k === 1 && currentValue === '1') { //Changes the value in the tens place and erases the value in the ones place digitsGroup[j][k] = tenthsDifferent[digitsGroup[j][2]]; digitsGroup[j][2] = 0; //Sets to null. Because it sets the next k to be evaluated, setting this to null doesn't work. } } } console.log(digitsGroup) //debug //Adds '-' for grammar, cleans all null values, joins the group's elements into a string for (var l = 0; l < digitsGroupLen; l++) { if (digitsGroup[l][1] && digitsGroup[l][2]) { digitsGroup[l][1] += '-'; } digitsGroup[l].filter(function (e) {return e !== null}); digitsGroup[l] = digitsGroup[l].join(''); } console.log(digitsGroup) //debug //Adds thousand, millions, billion and etc to the respective string. var posfix = [null,'thousand','million','billion','trillion','quadrillion','quintillion','sextillion']; if (digitsGroupLen > 1) { var posfixRange = posfix.splice(0, digitsGroupLen).reverse(); for (var m = 0; m < digitsGroupLen - 1; m++) { //'-1' prevents adding a null posfix to the last group if(digitsGroup[m]){ // avoids 10000000 being read (one billion million) digitsGroup[m] += ' ' + posfixRange[m]; } } } console.log(digitsGroup) //debug //Joins all the string into one and returns it return digitsGroup.join(' ') }; //End of numToWords function JSFiddle Answer: I want to suggest a different overall approach to this: Your friend for this sort of stuff is the modulo operator. There's no need to treat the number as a string when breaking it apart, when a number can be broken apart with a little math. If you've got your number (see thriggle's answer for parseInt usage), you can break it into "thousand-chunks" like so: function chunk(number) { var number = 23456098325, thousands = []; while(number > 0) { thousands.push(number % 1000); number = Math.floor(number / 1000); } return thousands; } chunk(23456098325) // => [ 325, 98, 456, 23 ] Note that it's "backwards": The lowest part of the number first, than the thousands, then the millions, etc.. Now, you have two tasks: Covert each chunk into English, and then add a scale (thousand, million, billion, etc.) to each of them. For the first task, we can again use the modulo operator, since we want hundreds, tens, and single digits. The only exception is for numbers below 20, whose names don't follow the same system as the later ones. So if you have an array of the words "one" to "nineteen", and another for the words for "twenty", "thirty", etc. up to "ninety", you can take any 1-999 number and turn it into words. And since the first bit of code breaks a large number into chunk of 1-999, that's what we need. Final bit is to add the scale ("thousand", "million", "billion" etc.), which we can do based on the index of the chunks in the array. So for instance, we can do this: var ONE_TO_NINETEEN = [ "one", "two", "three", "four", "five", "six", "seven", "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen", "sixteen", "seventeen", "eighteen", "nineteen" ]; var TENS = [ "ten", "twenty", "thirty", "forty", "fifty", "sixty", "seventy", "eighty", "ninety" ]; var SCALES = ["thousand", "million", "billion", "trillion"]; // helper function for use with Array.filter function isTruthy(item) { return !!item; } // convert a number into "chunks" of 0-999 function chunk(number) { var thousands = []; while(number > 0) { thousands.push(number % 1000); number = Math.floor(number / 1000); } return thousands; } // translate a number from 1-999 into English function inEnglish(number) { var thousands, hundreds, tens, ones, words = []; if(number < 20) { return ONE_TO_NINETEEN[number - 1]; // may be undefined } if(number < 100) { ones = number % 10; tens = number / 10 | 0; // equivalent to Math.floor(number / 10) words.push(TENS[tens - 1]); words.push(inEnglish(ones)); return words.filter(isTruthy).join("-"); } hundreds = number / 100 | 0; words.push(inEnglish(hundreds)); words.push("hundred"); words.push(inEnglish(number % 100)); return words.filter(isTruthy).join(" "); } // append the word for a scale. Made for use with Array.map function appendScale(chunk, exp) { var scale; if(!chunk) { return null; } scale = SCALES[exp - 1]; return [chunk, scale].filter(isTruthy).join(" "); } Worth noting: inEnglish recurses for numbers >= 20. inEnglish will return a false'y for the number zero. That's why I'm using Array.filter to remove false'y values before I join the array. For instance, the number 300 is (through some recursion) more or less constructed as [ ONE_TO_NINETEEN[3-1], "hundred", TENS[0-1], ONE_TO_NINETEEN[0-1] ]. This'll become ["three", "hundred", undefined, undefined], so we can't just join that because we'd get some trailing nonsense. So the undefined values are removed before joining. I'm using Math.floor in chunk, but the bitwise-floor-trick (| 0) elsewhere. The reason is that the bitwise-floor-trick can't handle numbers larger than 2,147,483,647 (max value of a signed 32-bit integer, which is what bitwise operators work on), so it would break for large numbers. But in inEnglish we assume that the input is 0-999, so we can safely use the bitwise trick. With that, you can take numbers and convert them to English like so: var string = chunk(810238903242) .map(inEnglish) .map(appendScale) .filter(isTruthy) .reverse() .join(" "); which yields: eight hundred ten billion two hundred thirty-eight million nine hundred three thousand two hundred fourty-two Of course, there are few preliminary checks you might want to make: If the input number is zero return "zero". If the input number is negative, you can complain to the user, or you can use Math.abs, convert the result to English like above, and prepend "negative" afterward. That the input number is between Number.MIN_SAFE_INTEGER and Number.MAX_SAFE_INTEGER. Otherwise things may get weird because 64-bit floats (as all JS numbers are) can no longer accurately represent the value. Not all runtimes have those two constants though, but you can make them yourself. Incidentally, Number.MAX_SAFE_INTEGER is nine quadrillion seven trillion one hundred ninety-nine billion two hundred fifty-four million seven hundred forty thousand nine hundred ninety-one. So there's that.
{ "domain": "codereview.stackexchange", "id": 13556, "tags": "javascript, numbers-to-words" }
Is the oasis object recognition and training framework available for download
Question: Is the oasis object recognition and training framework available for download and are there any other alternate frameworks for object learning that anyone would suggest? Thanks, -Scott Originally posted by Scott on ROS Answers with karma: 693 on 2012-09-10 Post score: 0 Answer: i've not heard about oasis. for object training and recognition, i think the following two are amazing: one is tod(texture object detection): http://ros.org/wiki/tod_core the other is MOPED(Object Recognition and Pose Estimation for Manipulation): http://personalrobotics.ri.cmu.edu/projects/moped Originally posted by yangyangcv with karma: 741 on 2012-09-10 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 10969, "tags": "ros" }
Change of basis for density operator
Question: Considering a state that describes two subsystems $X$ and $Y$: $$ \left|\psi \right\rangle =\sum_{i,k} \beta_{ik} \left| \theta_i^X \right\rangle \otimes \left| \lambda_i^Y \right\rangle$$ I am trying to show that it is possible to choose a basis for the space of states of $X$ and $Y$ such that the density matrix would reduce to the following form: $$ \left| \psi \right\rangle \left\langle\psi \right| = \sum_{i,j} \alpha_{ij} \left| \phi_i^X \right\rangle \otimes \left| \psi_i^Y \right\rangle \left\langle\phi_j^X \right| \otimes \left\langle\psi_j^Y \right| \hspace{0,7cm}(1)$$ My attempt consisted in plugging in four identity operators that come from two new orthonormal basis: $\sum_i\left| \phi_i^X \right\rangle \left\langle\phi_i^X \right|= \hat{\mathbb{I}}$ and $\sum_j\left| \psi_j^Y \right\rangle \left\langle\psi_j^Y \right|= \hat{\mathbb{I}}$ such that: $$ \left| \psi \right\rangle \left\langle\psi \right| = \sum_{i,j} \sum_{k,l} \beta_{ij} \beta_{kl}^* \hat{\mathbb{I}}\left| \theta_i^X \right\rangle \otimes \hat{\mathbb{I}}\left| \lambda_j^Y \right\rangle \left\langle\theta_k^X \right|\hat{\mathbb{I}} \otimes \left\langle\lambda_l^Y \right|\hat{\mathbb{I}}$$ $$ \left| \psi \right\rangle \left\langle\psi \right| = \sum_{i,j} \sum_{k,l} \beta_{ij} \beta_{kl}^* \left( \sum_n\left| \phi_n^X \right\rangle \left\langle\phi_n^X \right| \right)\left| \theta_i^X \right\rangle \otimes \left( \sum_{\alpha}\left| \psi_{\alpha}^Y \right\rangle \left\langle\psi_{\alpha}^Y \right| \right)\left| \lambda_j^Y \right\rangle \left\langle\theta_k^X \right| \left( \sum_n\left| \phi_n^X \right\rangle \left\langle\phi_n^X \right| \right) \otimes \left\langle\lambda_l^Y \right|\left( \sum_{\alpha}\left| \psi_{\alpha}^Y \right\rangle \left\langle\psi_{\alpha}^Y \right| \right)$$ $$ \left| \psi \right\rangle \left\langle\psi \right| = \sum_{i,j} \sum_{k,l} \beta_{ij} \beta_{kl}^* \sum_n\left| \phi_n^X \right\rangle \left\langle\phi_n^X | \theta_i^X \right\rangle \otimes \sum_{\alpha}\left| \psi_{\alpha}^Y \right\rangle \left\langle\psi_{\alpha}^Y | \lambda_j^Y \right\rangle \sum_n \left\langle\theta_k^X | \phi_n^X \right\rangle \left\langle\phi_n^X \right| \otimes \sum_{\alpha} \left\langle\lambda_l^Y | \psi_{\alpha}^Y \right\rangle \left\langle\psi_{\alpha}^Y \right|$$ I know that I can rearrange the terms a little and use the closure relations for the $\theta $ and $\lambda$ basis but these terms dont have the same indicies, therefore I am stuck. I can't obtain equation $(1)$. How can I do this? Answer: I think you are almost there. We can write $|\theta_i^X\rangle$ as linear combination of $|\phi_n^X\rangle$ $$ |\theta_i^X\rangle = \sum_m c_m^i |\phi_m^X\rangle$$ Similarly, $$ |\lambda_j^Y\rangle = \sum_m d_m^j |\psi_m^Y\rangle$$ So the inner products become $$\langle \phi_n^X| \theta_i^X\rangle = c_n^i$$ $$\langle \psi_{\alpha}^Y| \lambda_j^Y\rangle = d_{\alpha}^j$$ Following your last equation, $$ \left| \psi \right\rangle \left\langle\psi \right| = \sum_{i,j} \sum_{k,l} \beta_{ij} \beta_{kl}^* \sum_n\left| \phi_n^X \right\rangle c_n^i \otimes \sum_{\alpha}\left| \psi_{\alpha}^Y \right\rangle d_{\alpha}^j \sum_n c_{n}^{k*} \left\langle\phi_n^X \right| \otimes \sum_{\alpha} d_{\alpha}^{l*} \left\langle\psi_{\alpha}^Y \right|$$ Rearranging the terms, $$ = \sum_{n,\alpha} \underbrace{\left(\sum_i \beta_{ij}c_n^id_{\alpha}^j \sum_j \beta_{kl}^*c_n^{k*}d_{\alpha}^{l*}\right)}_{\chi_{n,\alpha}} \left| \phi_n^X \right\rangle \otimes \left| \psi_{\alpha}^Y \right\rangle \left\langle\phi_n^X \right| \otimes \left\langle\psi_{\alpha}^Y \right|$$ You can change the dummy variables and symbols to get equation (1)
{ "domain": "physics.stackexchange", "id": 83685, "tags": "quantum-mechanics, homework-and-exercises, density-operator, linear-algebra" }
Draw a Schematic Diagram of Ammeter Connected in Parallel to explain difference between the diagram of a parallel and series connection
Question: I know that voltmeters are commonly in parallel and ammeters are commonly in series. I believe that the voltmeters in diagrams 2 are actually in series. How am I wrong? ![For example][1] In Diagram 4, I don't understand how the voltmeter is connected in parallel. I don't see how the Ammeter is connected in Series. Can you explain how the connection of the voltmeters are in parallel? Usually, when I see a parallel connection, I see multiple resistors as seen in diagram 1. I doubt that the Voltmeters in figure 2 and 3 are in parallel. How is the ammeter in diagram one in series? I know that series is defined as "there is only one path for the electrons to take between any two points in this circuit." But the electrons can either take the path through the resistor or through the ammeter. More likely, they'll go through the path of the ammeter. Why do voltmeters have high resistance? How can "high resistance affect as little as possible the current that flows in the actual circuit when in parallel with it"? Why is it that " If the voltmeter wasn't connected in parallel it couldn't measure the potential across a particular circuit or circuit component - which is the purpose of a voltmeter."? How can keeping a voltmeter in parallel reduce the effect of the resistance on the circuit? Can you mathematically explain the quotations that I wrote? What would happen if a voltmeter were wired in series? Answer: In Diagram 4, I don't understand how the voltmeter is connected in parallel. Parallel vs. Series - needs a reference. Either Producer (Source of electricity) or consumer (the Resistor in your diagrams). Diagram 4: Vmeter is in Parallel with the Resistor because the current is split. The Ameter is in Series with the Resistor (and Vmeter, but you can safely ignore that since the Vmeter has high impedance so negligible current will go that way) - same current goes through Ameter and R, not split, so they are in Series. This is the correct diagram for measuring the current that goes through the R and the Voltage at its terminals. I don't see how the Ammeter is connected in Series. Answered above. Can you explain how the connection of the voltmeters are in parallel? Answerd. Usually, when I see a parallel connection, I see multiple resistors as seen in diagram 1. It's not multiple resistors, but this diagram makes no sense, it is parallel though. I doubt that the Voltmeters in figure 2 and 3 are in parallel. Again, you need a reference. "in parallel" with what? How is the ammeter in diagram one in series? It's not. I know that series is defined as "there is only one path for the electrons to take between any two points in this circuit." This is correct, but it makes no sense to discuss it on diagram 1, because the Ameter there is connected in parallel with R. Why do voltmeters have high resistance? Because you want to use it in parallel with the device you want to measure the voltage at its terminals and the principle is that you want to be as non-invasive as possible. How can "high resistance affect as little as possible the current that flows in the actual circuit when in parallel with it"? Why is it that " If the voltmeter wasn't connected in parallel it couldn't measure the potential across a particular circuit or circuit component - which is the purpose of a voltmeter."? How can keeping a voltmeter in parallel reduce the effect of the resistance on the circuit? Can you mathematically explain the quotations that I wrote? What would happen if a voltmeter were wired in series? This sounds more and more as a homework, so I'll let you get on with it. I'm sure you can do it now :-)
{ "domain": "physics.stackexchange", "id": 17050, "tags": "electric-circuits" }
Student project that calculates the return on an investment
Question: This is a simple student project that calculates the return on an investment with a given investment amount, number of years invested, and annual interest rate. I know this is pretty basic, but I'm just looking for feedback for formatting and other general improvements I can make to the code. // Author: Joshua Ferrell // Date: 3/25/2017 import javafx.application.Application; import javafx.geometry.Pos; import javafx.geometry.HPos; import javafx.scene.Scene; import javafx.scene.control.Button; import javafx.scene.control.Label; import javafx.scene.control.TextField; import javafx.scene.layout.GridPane; import javafx.stage.Stage; // A program that caculates the return on an investment. public class Exercise15_5 extends Application { // globals private TextField tfInvestmentAmount = new TextField(); private TextField tfNumYears = new TextField(); private TextField tfAnnualInterestRate = new TextField(); private TextField tfFutureValue = new TextField(); private Button btCalc = new Button("Calculate"); @Override public void start(Stage primaryStage) { // Create UI GridPane gridPane = new GridPane(); gridPane.setHgap(5); gridPane.setVgap(5); gridPane.add(new Label("Investment Amount:"), 0, 0); gridPane.add(tfInvestmentAmount, 1, 0); gridPane.add(new Label("Number of Years:"), 0, 1); gridPane.add(tfNumYears, 1, 1); gridPane.add(new Label("Annual Interest Rate:"), 0, 2); gridPane.add(tfAnnualInterestRate, 1, 2); gridPane.add(new Label("Future Value:"), 0, 3); gridPane.add(tfFutureValue, 1, 3); gridPane.add(btCalc, 1, 4); // Set Properties for UI gridPane.setAlignment(Pos.CENTER); tfInvestmentAmount.setAlignment(Pos.BOTTOM_RIGHT); tfNumYears.setAlignment(Pos.BOTTOM_RIGHT); tfAnnualInterestRate.setAlignment(Pos.BOTTOM_RIGHT); tfFutureValue.setAlignment(Pos.BOTTOM_RIGHT); tfFutureValue.setEditable(false); gridPane.setHalignment(btCalc, HPos.RIGHT); // Process event (caculate future value) btCalc.setOnAction(e -> calculateFutureValue()); // Create a scene and place it in the stage primaryStage.setScene(new Scene(gridPane, 300, 200)); primaryStage.setTitle("Exercise15_05"); // Set Title primaryStage.show(); // Display the Stage } private void calculateFutureValue() { // Get Values from text fields double amount = Double.parseDouble(tfInvestmentAmount.getText()); double interest = Double.parseDouble(tfAnnualInterestRate.getText()); int years = Integer.parseInt(tfNumYears.getText()); // Get Monthly interest rate double monthlyInterestRate = interest / 12 / 100; // Get Future Value double futureValue = amount * Math.pow(1 + monthlyInterestRate, years * 12); // set futureValue to tfFutureValue tfFutureValue.setText(String.format("$%.2f", futureValue)); } } Answer: The first thing that is noticeable when looking at this code is that there are too many comments. Let me explain. Example 1 - stating the obvious // set futureValue to tfFutureValue tfFutureValue.setText(String.format("$%.2f", futureValue)); This comment does not add any value to your program, it creates mess. The code itself is self-explanatory, and even if I had 2 weeks experience in Java, I would know what it does. Example 2 - stating the untrue public class Exercise15_5 extends Application { // globals private TextField tfInvestmentAmount = new TextField(); private TextField tfNumYears = new TextField(); private TextField tfAnnualInterestRate = new TextField(); The comment that says //globals is simply wrong - the TextFields are private fields of Exercise15_5 class. You can create a "global" variable in Java by creating a public static field - then you can use it everywhere, but of course you don't want to do that in your application Example 3 - extract code to a method instead of commenting In your start() method, you have 4 big blocks of code that are doing different things and are commented accordingly, like: // Set Properties for UI gridPane.setAlignment(Pos.CENTER); tfInvestmentAmount.setAlignment(Pos.BOTTOM_RIGHT); tfNumYears.setAlignment(Pos.BOTTOM_RIGHT); tfAnnualInterestRate.setAlignment(Pos.BOTTOM_RIGHT); tfFutureValue.setAlignment(Pos.BOTTOM_RIGHT); tfFutureValue.setEditable(false); gridPane.setHalignment(btCalc, HPos.RIGHT); We can extract those lines to a new method and give it some meaningful name... Just like we would comment those lines! private void setPropertiesForUI() { gridPane.setAlignment(Pos.CENTER); tfInvestmentAmount.setAlignment(Pos.BOTTOM_RIGHT); tfNumYears.setAlignment(Pos.BOTTOM_RIGHT); tfAnnualInterestRate.setAlignment(Pos.BOTTOM_RIGHT); tfFutureValue.setAlignment(Pos.BOTTOM_RIGHT); tfFutureValue.setEditable(false); gridPane.setHalignment(btCalc, HPos.RIGHT); } Then, your start() method becomes much cleaner: @Override public void start(Stage primaryStage) { createUI(); setPropertiesForUI(); calculateFutureValue(); createSceneAndPlaceItInTheStage(); } Split logic and UI into separate classes I also believe that the calculateFutureValue() method should not be in the Exercise15_5 class. It should be extracted to another class, possibly something named like InvestmentCalculator. The method should take doubles and int as input parameters, not Strings (it doesn't know about textfields and UI). You can make the calculateFutureValue() static in the InvestmentCalculator class since it's just performing some calculations and returning a value - you don't need an instance of it. You can even make InvestmentCalculators constructor private to prevent creating an instance of it - in other words, making the InvestmentCalculator a Util Class. I don't want to go into the implementation details since this is tagged as homework, but you can always edit your question or post another one after the refactoring.
{ "domain": "codereview.stackexchange", "id": 24918, "tags": "java, homework, calculator, finance, javafx" }
How do we see the sun?
Question: This question was originally posted almost a year ago and it was misunderstood probably because of its wording. I understand my question better now. Let me put the question this way now: If possible what characteristics of light change as it hits some object ? For example if we see the sun because of some characteristic properties of the light that it emits, then how do such properties change when the light hits some other object and we are able to see the object and not the sun? The earlier form of the question: We know that it is light that enables us to see objects. Non luminous objects reflect light received from luminous objects. But how do we see luminous objects themselves, for example the sun? I suppose it must be other luminous objects that enable us to see the sun, for example other stars. But shouldn't then we see the sun brighter sometimes and fainter sometimes because light falling on it may vary during its revolutionary course. Answer: All material bodies,i.e.masses made up of atoms and molecules, radiate according to the black body radiation. Black body radiation is one of the pillars for the need of quantum mechanics as classical electromagnetism could not explain the measured black body distributions. The plot above is for 5000K which is about the temperature of the sun, and one can see that the maximum of radiation is in the visible spectrum. This is also true for all stars, that is why they are sources of radiation, not reflected radiation. Most bodies on earth have much lower radiation, and there is no self luminosity in the visible, except for candle flames and other incandescent metals and materials. The moon gets reflected light from the sun. When the light is reflecting coherently, it carries the information/phases between light rays so images as in mirrors are the result.
{ "domain": "physics.stackexchange", "id": 45262, "tags": "visible-light, vision" }
How to use custom message arduino [SOLVED]
Question: Hi guys, I've been doing some novice work in arduino and ros and i would like to use a custom message already created as the official tutorial says. The custom package is in: ~/catkin_ws/src/pkg_prova1 and under its msg folder, we have the message file: Num.msg This message works perfectly in a normal ROS node (not arduino) created by a C++ code. I can modify without issues all its fields and get the message published efficiently. Everything in the CMakeLists.txt and in the package.xml has been edited as the ROS tutorial suggests. The problem is: how can my arduino use that custom message?? In order to make the arduino have it as a library, i've used: rosrun rosserial_arduino make_libraries.py ~/sketchbook/libraries/ros_lib Seems to have worked well despite that some packages like rtabmap, etc haven't been made. Anyway this isn't important. Then, I open the Arduino IDE, open the HelloWorld example (just for testing the new message), and reference the message as a normal library in the very first line of the code: #include <pkg_prova1/Num.h> #include <ros.h> #include <std_msgs/String.h> ros::NodeHandle nh; ......(all the stuff that follows) Unfortunately, it gives me the next compiling error: HelloWorld.pde:6:28: fatal error: pkg_prova1/Num.h: No such file or directory compilation terminated. I guess either the new library that i have created is not well referenced so the IDE can find it or the make_libraries command is not what i should use. Any ideas to do it? I would appreciate a detailed answer as my ROS knowledge is still some fuzzy. Correct me if i have a wrong understanding of anything. Thanks in advance! Originally posted by thepirate16 on ROS Answers with karma: 101 on 2016-03-18 Post score: 0 Answer: Update: I got it working removing ROS completely from my computer and re-installing it. Seems like there was a mess of files and folders in my catkin and in opt, possibly from several trials and errors in a first contact with ROS, Now it has compiled but i have a different problem, the rosserial connection doesn't work. I guess that is because of the arduino code written, as the rosserial works when using the HelloWorld example. Thanks saleem for your answer. I open another question with the actual state (where I solved the problem!) http://answers.ros.org/question/229600/rosserial-connection-not-working-with-custom-messages-solved/ Originally posted by thepirate16 with karma: 101 on 2016-03-19 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 24176, "tags": "arduino, rosserial, custom-message" }
Trying to implement a digital A frequency filter
Question: I'm trying to implement a digital filter that has the frequency response shape equal to the image below: Where i will use equation (11) to implement it with a sampling frequency of 48kHz. The filter coefficients can be found in the same document: Where each w' follows the formula above. So i put everything into matlab: fs = 48000; f1 = 20.598997; f2 = 107.65265; f3 = 737.86223; f4 = 12194.217; w1 = 2*tan(pi*(f1/fs)); w2 = 2*tan(pi*(f2/fs)); w3 = 2*tan(pi*(f3/fs)); w4 = 2*tan(pi*(f4/fs)); %testeb2 = 2*tan(pi*(250/1000))*2*tan(pi*(250/1000))*(1/sqrt(2)) % Filter coefficients for the A weighting filter a0 = 64 + (16*w2*w1*w3) + (4*w2*w1*w1*w3) + (32*w2*w1*w4) + (8*w2*w1*w1*w4) + (32* w1 * w3 * w4) + (16 *w2 * w3 * w4) + (64 *w1) + (32 *w2) + (32 *w3) + (64 *w4) + (32 *w2 * w1) + (8 *w2 * w1*w1) + (16 *w1*w1) + (16 *w2 * w1 * w3 * w4) + (4 *w2 * w1*w1 * w3 * w4) + (32 *w1 * w3) + (16 *w2 * w3) + (8 *w1*w1 * w3) + (64 *w1 * w4) + (32 *w2 * w4) + (32 *w3 * w4) + (16 *w1*w1 * w4) + (8 *w1*w1 * w3 * w4) + (16 *w4*w4) + (16 *w4*w4 * w1) + (4 *w4*w4 * w1*w1) + (4 *w4*w4 * w1 * w2 * w3) + (w4*w4 * w1*w1 * w2 * w3) + (8 *w4*w4 * w1 * w2) + (2 *w4*w4 * w1*w1 * w2) + (8 *w4*w4 * w2) + (8 *w4*w4 * w3) + (8 *w4*w4 * w1 * w3) + (2 *w4*w4 * w1*w1 * w3) + (4 *w4*w4 * w2 * w3) a1 = -128 + (64 *w2 * w1 * w3) + (24* w2 * w1*w1 * w3) + (128 *w2 * w1 * w4) + (48 *w2 * w1*w1 * w4) + (128 *w1 * w3 * w4) + (64 *w2 * w3 * w4) + (64 *w2 * w1) + (32 *w2 * w1*w1) + (32 *w1*w1) + (96 *w2 * w1 * w3 * w4) + (32 *w2 * w1*w1 * w3 * w4) + (64 *w1 * w3) + (32 *w2 * w3) + (32 *w1*w1 * w3) + (128 *w1 * w4) + (64 *w2 * w4) + (64 *w3 * w4) + (64 *w1*w1 * w4) + (48 *w1*w1 * w3 * w4) + (32 *w4*w4) + (64 *w4*w4 * w1) + (24 *w4*w4 * w1*w1) + (32 *w4*w4 * w1 * w2 * w3) + (10 *w4*w4 * w1*w1 * w2 * w3) + (48 *w4*w4 * w1 * w2) + (16 *w4*w4 * w1*w1 * w2) + (32 *w4*w4 * w2) + (32 *w4*w4 * w3) + (48 *w4*w4 * w1 * w3) + (16 *w4*w4 * w1*w1 * w3) + (24 *w4*w4 * w2 * w3) a2 = -192 + (48 *w2 * w1 * w3) + (52 *w2 * w1*w1 * w3) + (96 *w2 * w1 * w4) + (104 *w2 * w1*w1 * w4) + (96 *w1 * w3 * w4) + (48 *w2 * w3 * w4) - (320 *w1) - (160 *w2) - (160 *w3) - (320 *w4) - (96 *w2 * w1) + (24 *w2 * w1*w1) - (48 *w1*w1) + (208 *w2 * w1 * w3 * w4) + (108 *w2 * w1*w1 * w3 * w4) - (96 *w1 * w3) - (48 *w2 * w3) + (24 *w1*w1 * w3) - (192 *w1 * w4) - (96 *w2 * w4) - (96 *w3 * w4) + (48 *w1*w1 * w4) + (104 *w1*w1 * w3 * w4) - (48 *w4*w4) + (48 *w4*w4 * w1) + (52 *w4*w4 * w1*w1) + (108 *w4*w4 * w1 * w2 * w3) + (45 *w4*w4 * w1*w1 * w2 * w3) + (104 *w4*w4 * w1 * w2) + (54 *w4*w4 * w1*w1 * w2) + (24 *w4*w4 * w2) + (24 *w4*w4 * w3) + (104 *w4*w4 * w1 * w3) + (54 *w4*w4 * w1*w1 * w3) + (52 *w4*w4 * w2 * w3) a3 = 512 - (128 *w2 * w1 * w3) + (32 *w2 * w1*w1 * w3) - (256 *w2 * w1 * w4) + (64 *w2 * w1*w1 * w4) - (256 *w1 * w3 * w4) - (128 *w2 * w3 * w4) - (256 *w2 * w1) - (64 *w2 * w1*w1) - (128 *w1*w1) + (128 *w2 * w1 * w3 * w4) + (192* w2 * w1*w1 * w3 * w4) - (256 *w1 * w3) - (128 *w2 * w3) - (64 *w1*w1 * w3) - (512 *w1 * w4) - (256 *w2 * w4) - (256 *w3 * w4) - (128 *w1*w1 * w4) + (64 *w1*w1 * w3 * w4) - (128 *w4*w4) - (128 *w4*w4 * w1) + (32 *w4*w4 * w1*w1) + (192 *w4*w4 * w1 * w2 * w3) + (120 *w4*w4 * w1*w1 * w2 * w3) + (64 *w4*w4 * w1 * w2) + (96 *w4*w4 * w1*w1 * w2) - (64 *w4*w4 * w2) - (64 *w4*w4 * w3) + (64 *w4*w4 * w1 * w3) + (96 *w4*w4 * w1*w1 * w3) + (32 *w4*w4 * w2 * w3) a4 = 128 - (224 *w2 * w1 * w3) - (56 *w2 * w1*w1 * w3) - (448 *w2 * w1 * w4) - (112 *w2 * w1*w1 * w4) - (448 *w1 * w3 * w4) - (224 *w2 * w3 * w4) + (640 *w1) + (320 *w2) + (320 *w3) + (640 *w4) + (64* w2 * w1) - (112 *w2 * w1*w1) + (32 *w1*w1) - (224 *w2 * w1 * w3 * w4) + (168 *w2 * w1*w1 * w3 * w4) + (64 *w1 * w3) + (32 *w2 * w3) - (112 *w1*w1 * w3) + (128 *w1 * w4) + (64 *w2 * w4) + (64 *w3 * w4) - (224 *w1*w1 * w4) - (112 *w1*w1 * w3 * w4) + (32 *w4*w4) - (224 *w4*w4 * w1) - (56 *w4*w4 * w1*w1) + (168 *w4*w4 * w1 * w2 * w3) + (210 *w4*w4 * w1*w1 * w2 * w3) - (112 *w4*w4 * w1 * w2) + (84 *w4*w4 * w1*w1 * w2) - (112 *w4*w4 * w2) - (112 *w4*w4 * w3) - (112 *w4*w4 * w1 * w3) + (84 *w4*w4 * w1*w1 * w3) - (56 *w4*w4 * w2 * w3) a5 = - (448 *w2 * w1 * w3 * w4) - (224 *w1*w1 * w3 * w4) + (384 *w3 * w4) - (112 *w2 * w1*w1 * w3) - (112 *w4*w4 * w1*w1) + (384 *w1 * w3) - (224 *w4*w4 * w1 * w3) + (192 *w2 * w3) - (224 *w2 * w1*w1 * w4) + (192 *w1*w1) + (252 *w4*w4 * w1*w1 * w2 * w3) + (384 *w2 * w1) - (768) - (224 *w4*w4 * w1 * w2) - (112 *w4*w4 * w2 * w3) + (384 *w2 * w4) + (192 *w4*w4) + (768 *w1 * w4) a6 = 128 + (224 *w2 * w1 * w3) - (56 *w2 * w1*w1 * w3) + (448 *w2 * w1 * w4) - (112 *w2 * w1*w1 * w4) + (448* w1 * w3 * w4) + (224 *w2 * w3 * w4) - (640 *w1) - (320 *w2) - (320 *w3) - (640 *w4) + (64 *w2 * w1) + (112 *w2 * w1*w1) + (32 *w1*w1) - (224 *w2 * w1 * w3 * w4) - (168 *w2 * w1*w1 * w3 * w4) + (64 *w1 * w3) + (32 *w2 * w3) + (112 *w1*w1 * w3) + (128 *w1 * w4) + (64 *w2 * w4) + (64 *w3 * w4) + (224 *w1*w1 * w4) - (112 *w1*w1 * w3 * w4) + (32 *w4*w4) + (224 *w4*w4 * w1) - (56 *w4*w4 * w1*w1) - (168 *w4*w4 * w1 * w2 * w3) + (210 *w4*w4 * w1*w1 * w2 * w3) - (112 *w4*w4 * w1 * w2) - (84 *w4*w4 * w1*w1 * w2) + (112 *w4*w4 * w2) + (112 *w4*w4 * w3) - (112 *w4*w4 * w1 * w3) - (84 *w4*w4 * w1*w1 * w3) - (56 *w4*w4 * w2 * w3) a7 = 512 + (128 *w2 * w1 * w3) + (32 *w2 * w1*w1 * w3) + (256 *w2 * w1 * w4) + (64 *w2 * w1*w1 * w4) + (256 *w1 * w3 * w4) + (128 *w2 * w3 * w4) - (256 *w2 * w1) + (64 *w2 * w1*w1) - (128 *w1*w1) + (128 *w2 * w1 * w3 * w4) - (192 *w2 * w1*w1 * w3 * w4) - (256 *w1 * w3) - (128 *w2 * w3) + (64 *w1*w1 * w3) - (512 *w1 * w4) - (256 *w2 * w4) - (256 *w3 * w4) + (128 *w1*w1 * w4) + (64 *w1*w1 * w3 * w4) - (128 *w4*w4) + (128 *w4*w4 * w1) + (32 *w4*w4 * w1*w1) - (192 *w4*w4 * w1 * w2 * w3) + (120 *w4*w4 * w1*w1 * w2 * w3) + (64 *w4*w4 * w1 * w2) - (96 *w4*w4 * w1*w1 * w2) + (64 *w4*w4 * w2) + (64 *w4*w4 * w3) + (64 *w4*w4 * w1 * w3) - (96 *w4*w4 * w1*w1 * w3) + (32 *w4*w4 * w2 * w3) a8 = -192 - (48* w2 * w1 * w3) + (52 *w2 * w1*w1 * w3) - (96 *w2 * w1 * w4) + (104 *w2 * w1*w1 * w4) - (96 *w1 * w3 * w4) - (48 *w2 * w3 * w4) + (320 *w1) + (160 *w2) + (160* w3) + (320 *w4) - (96 *w2 * w1) - (24 *w2 * w1*w1) - (48 *w1*w1) + (208 *w2 * w1 * w3 * w4) - (108 *w2 * w1*w1 * w3 * w4) - (96 *w1 * w3) - (48 *w2 * w3) - (24 *w1*w1 * w3) - (192* w1 * w4) - (96 *w2 * w4) - (96* w3 * w4) - (48 *w1*w1 * w4) + (104 *w1*w1 * w3 * w4) - (48 *w4*w4) - (48 *w4*w4 * w1) + (52 *w4*w4 * w1*w1) - (108 *w4*w4 * w1 * w2 * w3) + (45 *w4*w4 * w1*w1 * w2 * w3) + (104 *w4*w4 * w1 * w2) - (54 *w4*w4 * w1*w1 * w2) - (24 *w4*w4 * w2) - (24 *w4*w4 * w3) + (104 *w4*w4 * w1 * w3) - (54 *w4*w4 * w1*w1 * w3) + (52 *w4*w4 * w2 * w3) a9 = -128 - (64* w2 * w1 * w3) + (24 *w2 * w1*w1 * w3) - (128 *w2 * w1 * w4) + (48 *w2 * w1*w1 * w4) - (128 *w1 * w3 * w4) - (64 *w2 * w3 * w4) + (64 *w2 * w1) - (32 *w2 * w1*w1) + (32 *w1*w1) + (96 *w2 * w1 * w3 * w4) - (32 *w2 * w1*w1 * w3 * w4) + (64 *w1 * w3) + (32 *w2 * w3) - (32 *w1*w1 * w3) + (128 *w1 * w4) + (64 *w2 * w4) + (64 *w3 * w4) - (64 *w1*w1 * w4) + (48* w1*w1 * w3 * w4) + (32 *w4*w4) - (64 *w4*w4 * w1) + (24 *w4*w4 * w1*w1) - (32 *w4*w4 * w1 * w2 * w3) + (10 *w4*w4 * w1*w1 * w2 * w3) + (48 *w4*w4 * w1 * w2) - (16 *w4*w4 * w1*w1 * w2) - (32 *w4*w4 * w2) - (32 *w4*w4 * w3) + (48 *w4*w4 * w1 * w3) - (16 *w4*w4 * w1*w1 * w3) + (24 *w4*w4 * w2 * w3) a10 = 64 - (16 *w2 * w1 * w3) + (4 *w2 * w1*w1 * w3) - (32 *w2 * w1 * w4) + (8 *w2 * w1*w1 * w4) - (32 *w1 * w3 * w4) - (16 *w2 * w3 * w4) - (64 *w1) - (32 *w2) - (32 *w3) - (64 *w4) + (32 *w2 * w1) - (8 *w2 * w1*w1) + (16 *w1*w1) + (16 *w2 * w1 * w3 * w4) - (4 *w2 * w1*w1 * w3 * w4) + (32 *w1 * w3) + (16 *w2 * w3) - (8 *w1*w1 * w3) + (64 *w1 * w4) + (32 *w2 * w4) + (32 *w3 * w4) - (16* w1*w1 * w4) + (8 *w1*w1 * w3 * w4) + (16 *w4*w4) - (16 *w4*w4 * w1) + (4 *w4*w4 * w1*w1) - (4 *w4*w4 * w1 * w2 * w3) + (w4*w4 * w1*w1 * w2 * w3) + (8 *w4*w4 * w1 * w2) - (2 *w4*w4 * w1*w1 * w2) - (8 *w4*w4 * w2) - (8 *w4*w4 * w3) + (8 *w4*w4 * w1 * w3) - (2 *w4*w4 * w1*w1 * w3) + (4 *w4*w4 * w2 * w3) b0 = 16*w4*w4 b1 = 32*w4*w4 b2 = -48*w4*w4 b3 = -128*w4*w4 b4 = 32*w4*w4 b5 = 192*w4*w4 b6 = 32*w4*w4 b7 = -128*w4*w4 b8 = -48*w4*w4 b9 = 32*w4*w4 b10 = 16*w4*w4 Ga = 10^(2/20) %teste x=[0 0 0 0 0 0 0 0 0 0 0]; y=[0 0 0 0 0 0 0 0 0 0 0]; t = linspace(0,1,48000); yy = zeros(1, 48000); for c= 1:48000 x(1) = sin(2*pi*100*t(c)); y(1) = (1/a0)*(b0*x(1) + b1*x(2) + b2*x(3) + b3*x(4) + b4*x(5) + b5*x(6) + b6*x(7) + b7*x(8) + b8*x(9) + b9*x(10) + b10*x(11) + a1*y(2) + a2*y(3) + a3*y(4) + a4*y(5) + a5*y(6) + a6*y(7) + a7*y(8) + a8*y(9) + a9*y(10) + a10*y(11)); yy(c) = y(1); % update x and y data vectors for i = 10:-1:1 x(i+1) = x(i); % store xi y(i+1) = y(i); % store yi end end plot(t,yy) And nothing but disaster happens when testing with a sine wave with 100Hz: Signal just gets huge, same happens with other frequencies. What am i doing wrong? Edit: sys = tf(10^(2/20).*[b0 b1 b2 b3 b4 b5 b6 b7 b8 b9 b10],[a0 a1 a2 a3 a4 a5 a6 a7 a8 a9 a10],1/fs); P = pole(sys) zer=0 zplane(zer,P) P = -1.0001 + 0.0001i -1.0001 - 0.0001i -0.9999 + 0.0001i -0.9999 - 0.0001i 0.9973 + 0.0000i 0.9973 - 0.0000i 0.9860 + 0.0000i 0.9078 + 0.0000i -0.0127 + 0.0000i -0.0127 - 0.0000i The poles seem to be almost in the unstable region. Do the poles with real part equal to 1.0001 ruin everything? how can i fix them? Answer: IIR filters should, almost exactly universally* be broken down into first- and second-order sections and cascaded**. The sensitivity of filter pole locations to coefficient values goes up with filter order; even the slightest rounding error will screw up a 10th-order filter. If you have the signal processing toolbox, you should use the built-in IIR filter function. If you don't, you should still vectorize a and b My second-choice recommendation is to vectorize a and b and use a polynomial root-finder to verify that the poles and zeros are in sensible locations, then try to find the bug that's making them wrong. The poles seem to be almost in the unstable region. No, the poles are in the unstable region. Anything outside the unit circle ($|z| > 1$) is unstable. Do the poles with real part equal to 1.0001 ruin everything? No. They are diagnostic of the fact that everything is ruined. You have some underlying trouble that needs to be fixed. how can i fix them? Sensibly, take my first-choice recommendation, below, or do what I'd be tempted to do myself. That's why I'm recommending them. Note that someone has already volunteered you a link to a solution. If you must persist, try a math package that has arbitrarily high precision (there may be a Matlab extension that does this), and try the root-finding at higher precision. But be aware that you're going down a rabbit-hole, and in a world where people publish designs for this sort of thing for free and because it's fun, it's a pointless rabbit-hole. My first-choice recommendation is to try to find a paper that gives you the poles and zeros of a cascade of 2nd-order filters. What I'd be tempted to do if my first choice didn't work out would be to fit my own IIR filters to the recommended filter shape. Even if I did have that first-choice reference, I may do it anyway and compare which looks better. * unless you're an absolute freaking expert and willing to argue with your colleagues and expect to win, you should read this as "absolutely universally". ** or, rarely, cascade-parallel -- this would apply if you have a wide notch filter or similar.
{ "domain": "dsp.stackexchange", "id": 11608, "tags": "filters" }
Transcribing DNA to mRNA with introns
Question: I have a problem in my bioinformatics class that I thought I was doing right, but someone else is getting a different answer. Here is the problem: Given the following DNA sequence, 5'-GGATCGTGCCACCATCCACCATCGTTA-3', if two introns are in bases 3-9 and 15-22, what is the mRNA transcribed? Give the answer 5' to 3'. Note that the first base is base 1. And here are the steps that I took: 5'-GG ATCGTGC CACCA TCCACCAT CGTTA-3' (remove the bolded) 5'-GGCACCACGTTA-3' (new string) take reverse complement swap T's with U's Is this correct? Or where am I going wrong? Answer: I think the only place you are going wrong is in getting confused about strands ("take reverse complement"). 5'-GGATCGTGCCACCATCCACCATCGTTA-3' << coding strand 3'-CCTAGCACGGTGGTAGGTGGTAGCAAT-5' << template strand The coding strand has the same sequence as the transcribed RNA (apart from T>U), so the primary transcript is: 5'-GGAUCGUGCCACCAUCCACCAUCGUUA Then, positions of introns: 5'-GG AUCGUGC CACCA UCCACCAU CGUUA and after splicing: 5'-GGCACCACGUUA
{ "domain": "biology.stackexchange", "id": 1763, "tags": "molecular-biology, bioinformatics" }
Why is Google's quantum supremacy experiment impressive?
Question: In the Nature paper published by Google, they say, To demonstrate quantum supremacy, we compare our quantum processor against state-of-the-art classical computers in the task of sampling the output of a pseudo-random quantum circuit. Random circuits are a suitable choice for benchmarking because they do not possess structure and therefore allow for limited guarantees of computational hardness. We design the circuits to entangle a set of quantum bits (qubits) by repeated application of single-qubit and two-qubit logical operations. Sampling the quantum circuit’s output produces a set of bitstrings, for example {0000101, 1011100, …}. Owing to quantum interference, the probability distribution of the bitstrings resembles a speckled intensity pattern produced by light interference in laser scatter, such that some bitstrings are much more likely to occur than others. Classically computing this probability distribution becomes exponentially more difficult as the number of qubits (width) and number of gate cycles (depth) grow. So, from what I can tell, they configure their qubits into a pseudo-randomly generated circuit, which, when run, puts the qubits into a state vector that represents a probability distribution over $2^{53}$ possible states of the qubits, but that distribution is intractable to calculate, or even estimate via sampling using a classical computer simulation. But they sample it by "looking" at the state of the qubits after running the circuit many times. Isn't this just an example of creating a system whose output is intractable to calculate, and then "calculating" it by simply observing the output of the system? It sounds similar to saying: If I spill this pudding cup on the floor, the exact pattern it will form is very chaotic, and intractable for any supercomputer to calculate. But I just invented a new special type of computer: this pudding cup. And I'm going to do the calculation by spilling it on the floor and observing the result. I have achieved pudding supremacy. which clearly is not impressive at all. In my example, I'm doing a "calculation" that's intractable for any classical computer, but there's no obvious way to extrapolate this method towards anything actually useful. Why is Google's experiment different? EDIT: To elaborate on my intuition here, the thing I consider impressive about classical computers is their ability to simulate other systems, not just themselves. When setting up a classical circuit, the question we want to answer is not "which transistors will be lit up once we run a current through this?" We want to answer questions like "what's 4+1?" or "what happens when Andromeda collides with the Milky Way?" If I were shown a classical computer "predicting" which transistors will light up when a current is run through it, it wouldn't be obvious to me that we're any closer to answering the interesting questions. Answer: To elaborate on my intuition here, the thing I consider "impressive" about classical computers is their ability to simulate other systems, not just themselves. When setting up a classical circuit, the question we want to answer is not "which transistors will be lit up once we run a current through this?" We want to answer questions like "what's 4+1?" or "what happens when Andromeda collides with the Milky Way?" There isn't a real distinction here. Both quantum and classical computers only do one thing: compute the result of some circuit. A classical computer does not fundamentally know what $4+1$ means. Instead current is made to flow through various transistors, as governed by the laws of physics. We then read off the final state of the output bits and interpret it as $5$. The real distinction, which holds in both cases, is whether you can program it or not. For example, a simple four-function calculator is a classical system involving lots of transistors, but the specific things it can compute are completely fixed, which is why we don't regard it as a classical computer. And a pudding is a quantum system involving lots of qubits, but we can't make it do anything but be a pudding, so it's not a quantum computer. Google can control the gates they apply in their quantum circuit, just like loading a different program can control the gates applied in a classical CPU. That's the difference.
{ "domain": "physics.stackexchange", "id": 62627, "tags": "quantum-computer" }
Does electric field obey the triangle law of vector addition and subtraction?
Question: I know that electric field strength is force per unit charge but what I have not yet understood properly is that how electric field can obey the laws of vector addition and subtraction excluding the linear situation? Does electric field completely obey the triangle law of vectors? Does it produce a resultant like other vectors when two electric fields meet at an angle? If yes, then is it used practically anywhere in our world. Answer: Electric Field is force per unit charge. Since force is a vector, a vector divided by scalar also gives a vector. Just think in this way; at a point the net force is obtained by laws of vector addition, so electric field is also effectively obtained by same way. That makes sense
{ "domain": "physics.stackexchange", "id": 71239, "tags": "electromagnetism, electric-fields, vectors, vector-fields, linear-systems" }
How is this lamda function beeing executed in an example for the Y combinator
Question: I have spent a few hours now trying to understand how the Y Combinator is working and how it allows us to construct recursive functions with higher order functions. I have been going through this derivation http://mvanier.livejournal.com/2897.html which is using the factorial function for explanation which I find quite helpful but at some point I am always getting lost. I still understand this part (define (part-factorial self) ((lambda (f) (lambda (n) (if (= n 0) 1 (* n (f (- n 1)))))) (self self))) (define factorial (part-factorial part-factorial)) (factorial 5) ==> 120 We have a function part-factorial which takes as a formal parameter self and applies it as argument (self self) to the first lambda function. So inside the first lambda f will evaluate to (self self) which in case of the factorial function will evaluate to (part-factorial, part-factorial). So in the second lambda then if we take 5 as our n the call to the function f will evaluate to part-factorial part-factorial 4 which will start the recursion. After a few syntax changes we now define the functions as (define almost-factorial (lambda (f) (lambda (n) (if (= n 0) 1 (* n (f (- n 1))))))) (define factorial ((lambda (x) (x x)) (lambda (self) (almost-factorial (self self))))) (factorial 5) ==> 120 And at this point I am confused. I understand the almost-factorial function as it is pretty much unchanged compared to part-factorial however the new factorial function is unclear to me, from a syntax perspective. I don't understand anymore which lamda will be executed with which parameters and in which order. Could anybody explain to me the execution of this function? Answer: I am just going to write h for almost-factorial because long names can sometimes obscure math. We consider: ((lambda (x) (x x)) (lambda (self) (h (self self)))) Write foo for ((lambda (x) (x x)) and write bar for (lambda (self) (h (self self)))). Then our expression is just (foo bar) Using the definition of foo we see that this is equal to (bar bar) Then we use the definition of bar on the first occurrence of bar to see that this is equal to ((lambda (self) (h (self self)))) bar) which in turn is just (h (bar bar)) We discovered that (bar bar) is equal to (h (bar bar)). Let us write g for (bar bar), so g is equal to (h g). Taking into account the definition of h (which is jsut almost-factorial from your question) we see that (h g), and therefore g, is equal to (lambda (n) (if (= n 0) 1 (* n (g (- n 1))))) In other words, g is the usual factorial function. I hope I got all the silly lisp parentheses where they are supposed to be.
{ "domain": "cs.stackexchange", "id": 10876, "tags": "lambda-calculus, functional-programming" }
What happens when an internet connection is faster than the storage write speed?
Question: If one attempted to download a file at a speed of 800 Mb/s (100 MB/s) onto a hard drive with a write speed of 500 Mb/s (62.5 MB/s), what would happen? Would the system cap the download speed? Answer: Many protocols, including TCP which is most widely used protocol on the Internet, use something called flow control. Flow control simply means that TCP will ensure that a sender is not overwhelming a receiver by sending packets faster than it can empty its buffer. The idea is that a node receiving data will send some kind of feedback to the node sending the data to let it know about its current condition. So, two way feedback allows both machine to optimally use their resources and prevent any problems due to mismatch in their hardware. https://en.wikipedia.org/wiki/Flow_control_(data)
{ "domain": "cs.stackexchange", "id": 20700, "tags": "computer-networks" }
A black hole that doesn't take in matter?
Question: According to List of Common Misconceptions by Wikipedia, under the heading of astronomy, this line can be found: A black hole can act like a "cosmic vacuum cleaner" and pull a substantial inflow of matter, but only if the star it forms from is already having a similar effect on surrounding matter. This is referenced to a paper written in Yale university: "Frontiers And Controversies In Astrophysics Transcript 9", although that's a dead link. By linking so, I believe that it is a reliable source. This got me thinking that is there a black hole that doesn't suck in surrounding matter? And is there a star that don't take in surrounding matter? Answer: This is the full quote If, for example, the Sun were replaced by a black hole of equal mass, the orbits of the planets would be essentially unaffected. A black hole can act like a "cosmic vacuum cleaner" and pull a substantial inflow of matter, but only if the star it forms from is already having a similar effect on surrounding matter It's badly written, but what it's basically saying is that it doesn't matter the form of the massive object that's being orbited by a planet or other object. A stable orbit would remain stable, so, if our sun was to collapse into a black hole and nothing else changes, Earth's orbit would remain the same. It would get dark and cold, but the orbit would be unchanged. The misconception is, if the sun became a black hole than Earth would be sucked into the sun. That's 100% false and that's all they're saying. Gravity is a function of mass and distance. Black holes have very high gravity, but a big part of the reason for that is because when they form out of dying stars, they become very small, a few solar masses squeezed into only about 10 miles across so the distance to the center of mass gets very small. For a planet to be in any danger from a black hole, it would probably need to be inside the Roche Limit, perhaps only a few million miles away, which is several times closer than Mercury is to the sun, for example. The safe distance, of-course, varies with the density and solidity of the orbiting object. Here's a Q on Roche Limits and Black holes if interested.
{ "domain": "astronomy.stackexchange", "id": 1228, "tags": "star, black-hole" }
Echo topic from a docker container from another machine in the same network
Question: I just came back to ROS after a long while and now I'm learning ROS2. My objective is simply to echo or see a topic broadcasted inside a docker container from Machine 1 to Machine 2. Machine 1 and Machine 2 are both in the same network. Some basic things I tried is adding the parameter of --net=host to spin up my docker container like the following: Machine 1: $ docker run -it --name ros2_container --net=host ros_foxy_image Inside the container: $ source /opt/ros/foxy/install/setup.bash ros2 run demo_nodes_cpp talker Machine 2: $ source /opt/ros/foxy/setup.bash I expect to at least see /chatter from using ros2 topic list however I only see /parameter_events and /rosout Some context: Machine 1 is a jetson_nano and I installed the foxy docker image as stated in their repos. Machine 2 is running on a Ubuntu 20 VM with foxy installed from the debian package. I can see the topics being broadcasted from Machine 1 just fine but I can't see the topics from Machine 2. Originally posted by J. J. on ROS Answers with karma: 60 on 2021-04-12 Post score: 0 Answer: This was mainly due to the network of the VM connected to NAT. The fix was to enable a Bridge connection to enable the VM to be seen as a physical machine. Using the settings in VirtualBox the fix was simply: Settings -> Network -> Change to Bridge. Originally posted by J. J. with karma: 60 on 2021-04-18 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 36309, "tags": "ros, ros2, network" }
Why does near-field attenuate at $\frac{1}{r^6}$?
Question: So far-field makes intuitive sense to me, it attenuates at $\frac{1}{r^2}$ much like gravity. This is just common sense since the area for the surface of a sphere relative to its radius has a $r^2$ relationship. However the near-field attenuates at a rate of $\frac{1}{r^6}$ instead. Is there some equally intuitive explanation for this? How is this derived? The above numbers can be confirmed from the following Wikipedia article. To quote the article: According to Maxwell's equation for a radiating wire, the power density of far-field transmissions attenuates or rolls off at a rate proportional to the inverse of the range to the second power ($\frac{1}{r^2}$) or −20 dB per decade. This slow attenuation over distance allows far-field transmissions to communicate effectively over a long range. The properties that make long range communication possible are a disadvantage for short range communication systems. NFMI systems are designed to contain transmission energy within the localized magnetic field. This magnetic field energy resonates around the communication system, but does not radiate into free space. This type of transmission is referred to as "near-field." The power density of near-field transmissions is extremely restrictive and attenuates or rolls off at a rate proportional to the inverse of the range to the sixth power ($\frac{1}{r^6}$) or −60 dB per decade. Answer: This answer overlaps with the answer by Roger Vadim, which quotes from this Wikipedia article, but I had to read his answer more than once before I understood, and my clarification outgrew the comment box. If you have a function which diverges at the origin and vanishes at infinity, a Taylor-expansion-ish thing to do is to expand in powers of $1/r$: $$ f(\omega,t,r) = \frac{a_1(\omega,t)}r + \frac{a_2(\omega,t)}{r^2} + \cdots $$ So at large distances you only care about $a_1$, but there may be some intermediate distance where $a_2$ starts to “win,” while at even closer distances the “winner” becomes $a_3$, then maybe $a_4$, and so on. For antennas, this expansion is done for the field amplitudes. The power density is proportional to the square of the amplitude, and is therefore expanded in even powers of $r$. The statement that “the near field power varies like $r^{-6}$” is approximately equivalent to “we can usefully describe this antenna keeping only the first three terms in the multipole expansion of the field.”
{ "domain": "physics.stackexchange", "id": 85689, "tags": "electromagnetism, electromagnetic-radiation, magnetic-fields, electric-fields" }
Simple elevator-like animation android
Question: private void moveViewToScreenCenter(final ImageView img, int x) { DisplayMetrics dm = new DisplayMetrics(); this.getWindowManager().getDefaultDisplay().getMetrics(dm); img.animate() .translationX(0) .withEndAction(new Runnable() { @Override public void run() { enableAll(); showDialog(); } }) .translationY(-x * dm.heightPixels / 6) .setDuration(2000) .setInterpolator(new LinearInterpolator()) .setStartDelay(0); } public void showDialog() { final CharSequence[] items = {"0", "1", "2","3","4","5"}; AlertDialog.Builder alertDialogBuilder = new AlertDialog.Builder(this); alertDialogBuilder.setTitle("which floor is your destination"); alertDialogBuilder.setSingleChoiceItems(items, -1, new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int item) { if (items[item].equals("0")) { moveViewToScreen(img, 0); } else if (items[item].equals("1")) { moveViewToScreen(img, 1); } else if (items[item].equals("2")) { moveViewToScreen(img, 2); } else if (items[item].equals("3")) { moveViewToScreen(img, 3); } else if (items[item].equals("4")) { moveViewToScreen(img,4); }else if (items[item].equals("5")) { moveViewToScreen(img,5); } dialog.dismiss(); } }); alertDialogBuilder.show(); } private void moveViewToScreen(final ImageView img, int x) { DisplayMetrics dm = new DisplayMetrics(); this.getWindowManager().getDefaultDisplay().getMetrics(dm); img.animate() .translationX(0) .withEndAction(new Runnable() { @Override public void run() { enableAll(); } }) .translationY(-x * dm.heightPixels / 6) .setDuration(2000) .setInterpolator(new LinearInterpolator()) .setStartDelay(0); } Buttons are disabled (disableALL()) during animation but animation seems fast between far floors and slow between consecutive ones . i'm still new to android and simple animation stuff like this Answer: Janos' advice is applying DRY principle to your code. It can be extended to also cover moveViewToScreenCenter and moveViewToScreen functions, since they are almost identical: private void moveViewToScreenCenter(final ImageView img, int x, boolean center) { DisplayMetrics dm = new DisplayMetrics(); this.getWindowManager().getDefaultDisplay().getMetrics(dm); img.animate() .translationX(0) .withEndAction(new Runnable() { @Override public void run() { enableAll(); if (center) showDialog(); } }) .translationY(-x * dm.heightPixels / 6) .setDuration(2000) .setInterpolator(new LinearInterpolator()) .setStartDelay(0); } and call this function by providing true/false to your center parameter. Also, functional scalability might be an issue: what if you have to handle 11 eleven floors in the future? (0 -> 10). You will be forced make several changes to handle a simple request like this. In order to accommodate larger values, I would dynamically define your array and not base my condition on char (which allows only one character). Something like this (not tested): // place this in some generic place List<Integer> makeSequence(int begin, int end) { List<Integer> ret = new ArrayList(end - begin + 1); for(int i = begin; i <= end; i++, ret.add(i)); return ret; } int maxFloors = 5; List<Integer> items = makeSequence(0, 5); public void onClick(DialogInterface dialog, int item) { moveViewToScreen(img, items[item]); dialog.dismiss(); }
{ "domain": "codereview.stackexchange", "id": 17877, "tags": "java, android, animation" }
Password-generation function using custom seed
Question: After reading an article on Skull Security mentioning the potential weakness of php's mt_rand function because of weak auto-seeding (http://ow.ly/4nrne), I decided to see what -- if any -- entropy I could find available from within php. The idea is to have enough (weak) sources that even if one or two are manipulated, lost or recovered, there's enough left to thwart brute force against the resulting passwords later. Hopefully the result is both readable and usable, although I don't expect it to be production-quality. <?php /** * Return a random password. * * v1.01 * Jumps through many hoops to attempt to overcome autoseed weakness of php's mt_rand function * */ function myRandomPassword() { // Change this for each installation $localsecret = 'qTgppE9T2c'; // Determine length of generated password $pwlength = 10; // Character set for password $pwchars = 'ABCDEFGHJKLMNPQRSTUVWXYZabcdefghjkmnpqrstuvwxyz0123456789'; $l = strlen( $pwchars ) - 1; // Get a little bit of entropy from sources that should be inaccessible to outsiders and non-static $dat = getrusage(); // gather some information from the running system $datline = md5(implode($dat)); // wash using md5 -- it's fast and there's not enough entropy to warrant longer hash $hardToGuess = $datline; $self = __FILE__; // a file the script should have read access to (itself) $stat = stat($self); // information about file such as inode, accessed time, uid, guid $statline = md5(implode($stat)); // wash $hardToGuess .= $statline; $preseed = md5(microtime()) . getmypid() . $hardToGuess . memory_get_usage() . disk_free_space('.') . $localsecret; $seed = sha1( $preseed ); // final wash, longer hash // Seed the mt_rand() function with a better seed than the standard one mt_srand ($seed); // Pick characters from the lineup, using the seeded mt_rand function $pw = ''; for ( $i = 0; $i < $pwlength; $i++ ) { $pw .= $pwchars{ mt_rand( 0, $l ) }; } // Return the result return $pw; } echo myRandomPassword(); ?> Revision 1.01 adds a local secret. Answer: I actually don't get the point of injecting so much system information into the mt_srand function. Looks like a total (and maybe even pointless) paranoia :) But here you go with a cleaner code: <?php /** * Random password generator * v2.0 */ define('APP_SECRET_KEY', 'qTgppE9T2c'); function randomPassword($length=10) { $charset = 'ABCDEFGHJKLMNPQRSTUVWXYZabcdefghjkmnpqrstuvwxyz0123456789'; $charsetSize = strlen($charset) - 1; // Seeding the generator with a bunch of different system data and the secret key mt_srand(crc32(md5(microtime()) . getmypid() . md5(implode(getrusage())) . md5(implode(stat(__FILE__))) . memory_get_usage() . disk_free_space('.') . APP_SECRET_KEY) ); $password = ''; foreach (range(1, $length) as $_) $password .= $charset{mt_rand(0, $charsetSize)}; return $password; } echo randomPassword(), "\n"; Maybe you'll like the more perverted superslow version which returns CRC32 of randomly ordered entropy each time you generate a new symbol. <?php /** * Random password generator * v2.1 */ function randomPassword($length=10) { $charset = 'ABCDEFGHJKLMNPQRSTUVWXYZabcdefghjkmnpqrstuvwxyz0123456789'; $charsetSize = strlen($charset) - 1; $seeders = array( function () { return md5(microtime()); }, function () { return md5(getmypid()); }, function () { return md5(implode(getrusage())); }, function () { return memory_get_usage(); }, function () { return disk_free_space('.'); } ); $randomSeed = function () use ($seeders) { shuffle($seeders); $entropy = ''; foreach ($seeders as $seeder) $entropy .= $seeder(); return crc32($entropy); }; $password = ''; foreach (range(1, $length) as $_) { mt_srand($randomSeed()); $password .= $charset{mt_rand(0, $charsetSize)}; } return $password; } echo randomPassword(), "\n";
{ "domain": "codereview.stackexchange", "id": 206, "tags": "php, security, random" }
Queue Interview Code basic methods made from struct Node
Question: Thanks for all the feedback, I optimized the code here. Here I'm Writing a very simple Queue of struct Nodes with only these methods get_front(), get_back(), pop_front(), push_back(), print. I hope using namespace std; is okay, I will note it should not be used this way in production and I am only using the approach to write my code as quickly as possible so I can have more time to test and refactor while I discuss the code with my interviewer. This is not for production and is to be treated as code that could be used in an interview or a quick and dirty prototype. I'm really curious about the approach I've taken here to keep track of the size, empty status, and back and front pointers, and use of only previous in my Node struct instead of next and previous pointers. I felt having both was not needed and previous makes more sense for a queue. I'd like to know also if my member functions for the Queue have any edge cases I am not catching and any improvements I can make to run time efficiency. Anyway I can simplify this code further with c++11 features, shorter variable names or any other suggestions would be appreciated too. Lastly if you optionally would like to share the memory/space complexity for my code that would be a huge help! I have noted some examples in the member data of my Node struct. #include <iostream> using namespace std; struct Node { int data; // 4 bytes for primatives Node* previous; // 8 bytes for pointers Node(int data) : data(data), previous(nullptr) { } }; struct Queue { Node* queue; int size; bool is_empty; Node* front; Node* back; Queue() : queue(nullptr), size(0), is_empty(true), front(nullptr), back(nullptr) { } string get_front() { if (front == nullptr) { return "empty"; } else { return to_string(front->data); } } string get_back() { if (back == nullptr) { return "empty"; } else { return to_string(back->data); } } void push_back(int data, Node* current) { if (current->previous == nullptr) { Node* n = new Node(data); current->previous = n; back = n; } else { push_back(data, current->previous); } } void push_back(int data) { if (is_empty) { queue = new Node(data); back = queue; front = queue; is_empty = false; } else { push_back(data, queue); } size++; } void pop_front() { size--; if (front->previous == nullptr) { front = nullptr; back = nullptr; delete queue; is_empty = true; } else { Node* dangling = front; front = front->previous; delete dangling; } } void print(Node* current, string queue_string) { if (current->previous == nullptr) { queue_string = to_string(current->data) + " " + queue_string; cout << queue_string << endl; } else { queue_string = to_string(current->data) + " " + queue_string; print(current->previous, queue_string); } } void print() { if (is_empty) { cout << "_____________\n\n"; cout << "_____________\n"; } else { cout << "_____________\n"; print(front, ""); cout << "_____________\n"; } } }; int main() { Queue queue; queue.push_back(9); queue.push_back(8); queue.push_back(7); queue.push_back(6); queue.push_back(5); queue.print(); cout << "front " << queue.get_front() << endl; cout << "back " << queue.get_back() << endl; cout << "size " << to_string(queue.size) << endl; cout << boolalpha << "queue empty status is " << queue.is_empty << endl; queue.pop_front(); queue.pop_front(); queue.pop_front(); queue.print(); cout << "front " << queue.get_front() << endl; cout << "back " << queue.get_back() << endl; cout << "size " << to_string(queue.size) << endl; cout << "queue empty status is " << queue.is_empty << endl; queue.pop_front(); queue.pop_front(); queue.print(); cout << "front " << queue.get_front() << endl; cout << "back " << queue.get_back() << endl; cout << "size " << to_string(queue.size) << endl; cout << "queue empty status is " << queue.is_empty << endl; } Answer: I see a number of things that could help you improve your code. Don't abuse using namespace std Putting using namespace std at the top of every program is a bad habit that you'd do well to avoid. Know when to use it and when not to (as when writing include headers). If I were hiring, I'd prefer that the candidate actually write production quality code, rather than point out that the just-produced sample was not, even if they could articulate and justify the difference. After all, they're probably not hiring someone to produce non-production sample code, right? Don't define a default constructor that only initializes data Instead of writing this: struct Queue { Node* queue; int size; bool is_empty; Node* front; Node* back; Queue() : queue(nullptr), size(0), is_empty(true), front(nullptr), back(nullptr) { } // etc. }; write this: struct Queue { Node* queue = nullptr; int size = 0; bool is_empty = true; Node* front = nullptr; Node* back = nullptr; // no need to write default constructor // other code }; See Cpp Core Guidelines C.45 for details. Use class rather than struct if there are invariants The current Queue has pointers and size and is_empty members. Will everything still work if those are arbitrarily changed to random values? No, it will not. There are expectations that size and is_empty will always have the right values and the values of the pointers are critical to the operation of the data structure, therefore this must be a class and not a struct. See Cpp Core Guidelines C.2. Eliminate redundant data Rather than maintaining a separate is_empty data item, I'd suggest only keeping the size and defining a function instead like this: bool is_empty() const { return size == 0; } As per the previous advice, I'd also keep size private and provide a public access function if needed: std::size_t size() const { return size_; } Also, you don't really need both queue and front. The code would be both clearer and more compact if only front and back pointers were included. Use the appropriate data types Would it ever make sense to have a negative size for a queue? I'd suggest not, and so it would make more sense to have size be a std::size_t type. Rethink the interface If a user of this Queue were to invoke get_front() on an empty queue, I think it would be a much better interface to either throw an exception or to return an empty string rather than the special value "empty". It's also quite peculiar to push ints and then pop strings. That's not what I'd want. Here's how I'd write get_front: int get_front() const { if (is_empty()) { throw std::out_of_range{"cannot get data from empty queue"}; } return front->data; } Use const where practical The current print() functions do not (and should not) modify the underlying object, and so both should be declared const: void print(const Node* current, std::string queue_string) const; void print() const; I would also make the first variant private because the user of the class should not have any pointer to an internal structure. Fix the bugs There are several problems with pop_front. First, it doesn't check for an empty queue before decrementing the size which is an error. Second, it does not correctly update queue and leads to dereferencing freed memory which is undefined behavior. Don't use std::endl if '\n' will do Using std::endl emits a \n and flushes the stream. Unless you really need the stream flushed, you can improve the performance of the code by simply emitting '\n' instead of using the potentially more computationally costly std::endl. Avoid to_string The print function currently contains these lines: queue_string = to_string(current->data) + " " + queue_string; cout << queue_string << endl; This creates another string and then prints that string which is not needed. Instead, just print directly: cout << current->data << ' ' << queue_string << '\n'; Use all of the required #includes The type std::string is used but its declaration is in #include <string> which is not actually in the list of includes. Prefer iteration over recursion Recursion tends to use additional stack space over iteration. For that reason (and often for clarity) I'd recomment writing print like this instead: void print() const { std::cout << "_____________\n"; for (const auto *item = front; item; item = item->previous) { std::cout << item->data << ' '; } std::cout << "\n_____________\n"; } Even better would be to pass a std::ostream & argument to this function to allow printing to any stream. Simplify your code The recursive push_back function is much longer than it needs to be. I'd write it like this: void push_back(int data) { auto temp = new Node{data}; ++size_; if (back == nullptr) { // adding to empty queue front = back = temp; } else { back->previous = temp; back = temp; } } Note that this version assumes that the data members queue and is_empty have already been removed per the suggestions above. Make test success obvious The current test code exercizes the queue, but it doesn't indicate what is expected to be printed. I'd instead write both test scenarios and also the expected result so that it would be clear to anyone running the code whether everything was working as expected or not. Make private structures private Nothing outside of Queue needs to know anything about Node, so I'd strongly recommend making the definition of Node private within Queue. Don't leak memory This program leaks memory because the Queue's destructor doesn't free all resources. This is a serious bug. Consider using a template A queue is a fairly generic structure that could hold any kind of data if the class were templated, and not just an int. Consider possible uses For any code production, but especially if you're in an interview, think about how the class is being used and whether there are any restrictions or limits inherent in the design. For example, think about copy and move operations. If you write this, does the code do the right thing? Queue queue; queue.push_back(5); queue.push_back(6); queue.push_back(7); auto a_copy{queue}; a_copy.pop_front(); queue.print(); a_copy.print(); Also consider multithreaded code. Would it be thread-safe to push from one thread and pull from another? If not, what would be needed to make that work? Don't make platform assumptions Although it doesn't adversely affect the code, the assumptions about data sizes in the comments for Node are simply incorrect on my machine and would be a red flag in an interview.
{ "domain": "codereview.stackexchange", "id": 33828, "tags": "c++, c++11, queue, pointers" }
Excessive use of lambda with variant of Sieve of Eratosthenes?
Question: This is a function which produces the sum of primes beneath (not including) a certain number (using a variant on the sieve of Eratosthenes). erastosum = lambda x: sum([i for i in xrange(2, x) if i == 2 or i == 3 or reduce(lambda y,z: y*z, [i%j for j in xrange(2,int(i ** 0.5) + 1)])]) Excessive use of lambda? Perhaps. Beautification would be nice, but I'm looking for performance optimizations. Sadly, I'm not sure if there is any further way to optimize the setup I've got right now, so any suggestions (on how to, or what else to do) would be nice. Answer: As much as I love anonymous functions, they can be a nightmare to debug. Splitting this code up piece wise (into an actual function or otherwise) shouldn't and wouldn't decrease it's performance while improving maintenance and portability for you later on. This class is quite efficient for determining primes. Despite being quite lengthy is more efficient than the more usual approach.
{ "domain": "codereview.stackexchange", "id": 848, "tags": "python, optimization, lambda" }
relationship between voltage and current
Question: I know that this question had been asked many times but I think it will be a new info in Ohm’s law: $$R = V/I$$ So Voltage is directly proportional to Current In Electric Power’s law: $$P = VI$$ so Voltage is inversely proportional to Current. I am very confused about this. I made many researches in many sites and here but no results. Answer: If you keep the resistance constant, then $V=IR$ means that voltage is directly proportional to current. If you keep the power constant, then $V=\frac{P}{I}$ means that voltage is inversely proportional to current. However, because $V=IR$, we can write that $P=I^2R$. Therefore, if we say resistance is constant, then power must change with current, which means that voltage is no longer inversely proportional to current. There is no contradiction here, you simply need to be mindful of what you are holding constant and ask yourself if you are being consistent
{ "domain": "physics.stackexchange", "id": 24403, "tags": "electric-current, electrical-resistance, voltage, power" }
ROS installation on intel edison
Question: I am getting following error while installing ROS on intel edison with UbiLinux. Following these steps for installation http://wiki.ros.org/wiki/edison /home/edison/ros_catkin_ws/src/ros_comm/roscpp/src/libros/callback_queue.cpp:380:43: error: macro "BOOST_SCOPE_EXIT" passed 2 arguments, but takes just 1 /home/edison/ros_catkin_ws/src/ros_comm/roscpp/src/libros/callback_queue.cpp: In member function ‘ros::CallbackQueue::CallOneResult ros::CallbackQueue::callOneCB(ros::CallbackQueue::TLS*)’: /home/edison/ros_catkin_ws/src/ros_`enter code here`comm/roscpp/src/libros/callback_queue.cpp:380:7: error: ‘BOOST_SCOPE_EXIT’ was not declared in this scope /home/edison/ros_catkin_ws/src/ros_comm/roscpp/src/libros/callback_queue.cpp:381:7: error: expected ‘;’ before ‘{’ token /home/edison/ros_catkin_ws/src/ros_comm/roscpp/src/libros/callback_queue.cpp:384:7: error: ‘struct boost::scope_exit::aux::undeclared’ has no member named ‘value’ /home/edison/ros_catkin_ws/src/ros_comm/roscpp/src/libros/callback_queue.cpp:384:7: error: ‘boost_se_guard_384’ was not declared in this scope /home/edison/ros_catkin_ws/src/ros_comm/roscpp/src/libros/callback_queue.cpp:398:9: error: ‘result’ was not declared in this scope /home/edison/ros_catkin_ws/src/ros_comm/roscpp/src/libros/callback_queue.cpp: At global scope: /home/edison/ros_catkin_ws/src/ros_comm/roscpp/src/libros/callback_queue.cpp:408:3: error: expected unqualified-id before ‘else’ /home/edison/ros_catkin_ws/src/ros_comm/roscpp/src/libros/callback_queue.cpp:413:3: error: expected unqualified-id before ‘return’ /home/edison/ros_catkin_ws/src/ros_comm/roscpp/src/libros/callback_queue.cpp:416:1: error: expected declaration before ‘}’ token make[2]: *** [CMakeFiles/roscpp.dir/src/libros/callback_queue.cpp.o] Error 1 make[2]: *** Waiting for unfinished jobs.... make[1]: *** [CMakeFiles/roscpp.dir/all] Error 2 make: *** [all] Error 2 root@ubilinux:/home/edison/ros_catkin_ws/build_isolated/roscpp# Originally posted by Mrutyunjay on ROS Answers with karma: 1 on 2017-03-19 Post score: 0 Answer: edit: I've been working through this on my beaglebone board throughout the day and actually couldn't push this all the way through. I'm going to leave my earlier reply in case it's helpful to someone, but it was easier for me to just upgrade my distro to jessie and just install kinetic... I believe the problem here is that the ros install depends on c++11 features which are not available on your device. I solved this exact problem (on a beagleebone running wheezy) by adding this flag to the build command: --cmake-args -DCMAKE_CXX_FLAGS=--std=c++0x I also needed to upgrade my boost library to 1.53 to get the entire install to complete. See http://yplam.com/ros/2017/03/19/edison-ros-install.html as well Originally posted by emef with karma: 41 on 2017-03-31 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by ahendrix on 2017-03-31: Do you mean boost 1.53 ? Comment by emef on 2017-03-31: yep, fixed Comment by dlheard on 2017-03-31: What was the build command that you added this flag to? Comment by emef on 2017-03-31: ./src/catkin/bin/catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release --install-space /opt/ros/indigo
{ "domain": "robotics.stackexchange", "id": 27358, "tags": "ros" }
Coherent state being the eigenstate of the annihilation operator
Question: From what I understand, the physical relevance and interest of a coherent state is that its dynamics closely resembles the one of its classical analogue. For example, for a quantum SHO $\langle x \rangle \sim \cos(\omega t)$ and $\langle p \rangle \sim \sin(\omega t)$ just like in the classical case. Mathematically, a coherent state $|\alpha\rangle$ is defined to be the eigenstate of the annihilation operator $a$, such that $$ a|\alpha\rangle = \alpha|\alpha\rangle.$$ Question: is there a relation between being the eigenstate of the annihilation operator and having a classical-resembling dynamics, or is it just a pure coincidence? Answer: A maximally classical state should have minimum and equally distributed uncertainty in $X$ and $P$. In other words the uncertain in $X$ should equal the uncertainty in $P$ and this uncertainty should be as small as possible. This leads to $$ (X-\langle X\rangle)|\alpha\rangle=-i(P-\langle P\rangle)|\alpha\rangle $$ or if we rearrange the equation $$ \alpha|\alpha\rangle=\langle X+iP\rangle|\alpha\rangle=(X+iP)|\alpha\rangle=a|\alpha\rangle $$ where $a=X+iP$ is the lowering operator. It is important to realize that $|\alpha\rangle$ is still a quantum state. As you have pointed out, $\langle X\rangle$ and $\langle P\rangle$ follow the classical trajectory, but if you calculate the variance of $X$ and the variance of $P$ you will find they are non-zero. The uncertainty principle must be satisfied.
{ "domain": "physics.stackexchange", "id": 55417, "tags": "quantum-mechanics, harmonic-oscillator, eigenvalue" }
Scikit-Learn - Learned model description?
Question: Is there a way I can "look inside" a model once it's trained? For example, if I train a spam filter with a multinomialNB, is there a way I can extract which words are most likely to make an email classify as spam? I'd like to see how the models determine the outcome once fitted. Answer: For the particular case of the MultinomialNB you can look here. However if want you want is to determine which features are the most important, the SelectFromModel for selecting the most important features for the model.
{ "domain": "datascience.stackexchange", "id": 1455, "tags": "scikit-learn" }
Will ROS lunar run on ubuntu 18.04?
Question: Will the ROS distribution lunar run on the Ubuntu 18.04? Originally posted by nmelchert on ROS Answers with karma: 143 on 2018-03-02 Post score: 1 Answer: This is slightly difficult to answer, as you ask "will it run?", which is different from "is it supported?". As to supported: no, Lunar is not supported on 18.04. See REP 3: Platforms by Distribution - Lunar Loggerhead, which shows that Lunar is only supported on 16.04, 16.10 (might not be the case any more) and 17.04. ('supported' in this case means that official binaries are made available and those are guaranteed to work on a specific OS) As to "will it run": a from-source build (using the steps outlined in wiki/lunar/Installation/Source) may succeed, but there are no guarantees. There are many questions about building ROS from source on various OS, even newer and older Ubuntu versions. See #q277021 for an example. Perhaps another board member can tell you more conclusively whether they've been able to build Lunar on 18.04 from source. Note that ROS Melodic is / will be supported on 18.04: REP 3: Platforms by Distribution - Melodic Morenia. Originally posted by gvdhoorn with karma: 86574 on 2018-03-02 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by gvdhoorn on 2018-03-02: And, as always: running ROS on any 'unsupported' OS is definitely possible with something like Docker. The OSRF supplies Docker images for just about every ROS release there is. Comment by nmelchert on 2018-03-02: Ok thanks! And is there any ROS distribution that is officially supported by Ubuntu 16.04 and 18.04? Comment by gvdhoorn on 2018-03-02: Well, I would say check REP-3. I linked you to it twice. That contains all the info you need. Comment by nmelchert on 2018-03-05: Thank you!
{ "domain": "robotics.stackexchange", "id": 30192, "tags": "ros-lunar, ubuntu, ubuntu-bionic" }
Generalization Error Definition
Question: I was reading about PAC framework and faced the definition of Generalization Error. The book defined it as: Given a hypothesis h ∈ H, a target concept c ∈ C, and an underlying distribution D, the generalization error or risk of h is defined by The generalization error of a hypothesis is not directly accessible to the learner since both the distribution D and the target concept c are unknown. However, the learner can measure the empirical error of a hypothesis on the labeled sample S. I can not understand the equation. Can anyone please tell me how it can be interpreted? Also what is x~D? Edit: How do I formally write this term? Is something like $$\mathbb{E}_{x \sim D} [1_{h(x)\neq c(x)}] = \int_X 1_{h(\cdot) \neq c(\cdot)} (\omega) dD(\omega)$$ correct or do I need to define some random variable? Also, to show that the empirical error $$ \hat{R}(h) = \frac{1}{m} \sum_{i =1}^m 1_{h(x_i)\neq c(x_i)} $$ is unbiased, we have $$\mathbb{E}_{S \sim D^m} [\hat{R}(h)] = \frac{1}{m} \sum_{i =1}^m \mathbb{E}_{S \sim D^m} ~ \left[ 1_{h(x_i)\neq c(x_i)} \right] = \frac{1}{m} \sum_{i =1}^m \mathbb{E}_{S \sim D^m} ~ \left[ 1_{h(x)\neq c(x)} \right]$$, but how do we formally get $$ \mathbb{E}_{S \sim D^m} ~ \left[ 1_{h(x)\neq c(x)} \right]= \mathbb{E}_{X \sim D} ~ \left[ 1_{h(x)\neq c(x)} \right] = R(h)$$ I think that I understand it intuitionally, but I can't write it down formally. Any help is much appreciated! Answer: There exists somewhere in the world a distribution $D$ from which you can draw some samples $x$. The notation $x \sim D$ simply states that the sample $x$ came from the specific distribution that was noted as $D$ (e.g. Normal or Poisson distributions, but also the possible pixel values of images of beaches). Say you have some ground truth function, mark it as $c$, that given a sample $x$ gives you its true label (say the value 1). Furthermore, you have some function of your own, $h$ that given some input, it outputs some label. Now given that, the risk definition is quite intuitive: it simply "counts" the number of times that $c$ and $h$ didn't agree on the label. In order to do that, you (ideally) will go over every sample $x$ in your distribution (i.e. $x \sim D$). run it through $c$ (i.e. $c(x)$) and obtain some label $y$. run it through $h$ (i.e. $h(x)$) and obtain some label $\hat{y}$. check if $y \neq \hat{y}$. If so, you add 1 to your count (i.e. $1_{h(x) \neq c(x)}$ - that notes the indicator function) Now last thing to note is that I wrote above "count", but we don't really care if the number is 500 or 100, we care for the relative number of mistakes (like 40% or 5% of the samples that were checked were classified differently). That is why it is noted as expectancy ($\mathbb{E}$). Let me know if that was clear enough :-)
{ "domain": "datascience.stackexchange", "id": 1571, "tags": "machine-learning, deep-learning, pac-learning" }
Inductors in parallel. Link between inductance and current in a D.C. circuit
Question: A problem from the FIITJEE review package: Or, paraphrased: When two inductors in parallel connected to a battery with some internal resistance, what is the current through each of the inductors after achieving a steady state. The one conclusion that I can make is that the potential across the inductors after achieving steady state is 0. How do I link inductance with current? Is there any expression for resistance of an inductor? Answer: Since this is a circuit theory question, it's possible the question author wants us to assume the inductors are ideal, that is they have zero series resistance. If the inductors initially have 0 current through them1 as the switch is closed, and the voltage across them is $v(t)$, then the current through the first is $$i_1(t) = \int_0^t \frac{v(t)}{L_1}\rm{d}t$$ And across the second one $$i_2(t) = \int_0^t \frac{v(t)}{L_2}\rm{d}t$$ Because the integral is a linear operator, we can pull out the constant term and find $$L_1 i_1(t) = L_2 i_2(t)$$ for any choice of $t$, including in the limit as $t\to\infty$. Therefore in the steady state we find2 $$I_1 = \frac{L_2}{L_1}I_2$$ From this you can find the current through the individual inductors in the situation given. (Hint: the answer will be very similar to if they were two resistors with values $R_1$ and $R_2$) Note 1: This is actually not a great assumption for the circuit as given, because when the switch is open there is no current flow through the resistor, and therefore no mechanism for the inductors to lose energy and decay to a 0-current state when the switch is open. Note 2: In a real world circuit, however, the steady state behavior would be dominated by the equivalent series resistance of the two inductors, and their inductance would not come in to play. See rob's answer for more details.
{ "domain": "physics.stackexchange", "id": 36927, "tags": "electric-circuits, electrical-resistance, electromagnetic-induction, textbook-erratum" }
Infinite calculations in finite time
Question: This is probably a silly thought, but suppose we have a computer that's programmed to perform an infinite sequence of calculations and suppose the $i^\text{th}$ calculation takes $1/2^i$ seconds to complete. Then this computer can do an infinite number of calculations in a finite amount of time. Why is this impossible? Is there a lower bound on how long it takes to carry out a non-trivial calculation? Answer: This "kind" of computer is known as a Zeno Machine. Its computational model falls into a category called Hypercomputation. Hypercomputational models are mathematical abstractions, and because of the ways in which they are defined to work, they aren't physically possible. Take your Zeno Machine for example. If we imagine the Zeno Machine to be a calculating machine of any kind, whether it uses an abacus or integrated circuit doesn't matter. Say the program data used by the machine is fed to it by an infinitely long tape of symbols (just like a Turing Machine). Of course, we know from mathematics that: $\frac{1}{2}+\frac{1}{4}+\frac{1}{8}... = \sum_{n=1}^{\infty}(\frac{1}{2})^n $ which we say is equal to $1$. Thus the computation should complete in 1 second because the sum absolutely converges. But this convergence is, of course, dependent on $n$ going to (and reaching) infinity. In the physical sense, this means that as the time required for each calculation gets smaller, the "read head" of the calculating machine will have to zip along the symbols in the tape faster and faster. At some point, this speed will exceed the speed of light. So answering your second question, the absolute lowest-possible bound on a calculation would probably be on the order of the Planck time, given the speed of light as the primary limiting factor in theoretical, but physically-plausible models of computation.
{ "domain": "cs.stackexchange", "id": 5211, "tags": "computability, computation-models" }
Object formatter using reflection
Question: I am making an object formatter for use when debugging. Formatted class: package com.myname.somepackage; import java.lang.annotation.Retention; import java.lang.annotation.RetentionPolicy; // Allows a variable to be displayed when using Formatter.format @Retention(RetentionPolicy.RUNTIME) public @interface Formatted { } Formatter class: package com.myname.somepackage; import java.lang.reflect.Field; public final class Formatter { private Formatter() {} // Returns a string containing the object's information, for debugging // Format: ClassName[var1=somevalue, var2=somevalue] // The object's variables must have the Formatted annotation to be displayed here public static String format(Object object) { String className = object.getClass().getSimpleName(); Field[] fields = object.getClass().getDeclaredFields(); String string = className + "["; for (Field field : fields) { field.setAccessible(true); Formatted annotation = field.getAnnotation(Formatted.class); if (annotation != null) { String varName = field.getName(); try { String value = field.get(object).toString(); string += varName + "=" + value + ", "; } catch (IllegalAccessException e) { e.printStackTrace(); string += varName + "=" + "{Unavailable}, "; } } } // remove last ", " if (string.endsWith(", ")) string = string.substring(0, string.length() - 2); string += "]"; return string; } } A class for testing this: package com.myname.somepackage.math.geom.r2; import com.myname.somepackage.Formatted; import com.myname.somepackage.Formatter; public final class Point2d { @Formatted private final double x, y; public Point2d(double x, double y) { this.x = x; this.y = y; } public double getX() { return this.x; } public Point2d setX(double x) { return new Point2d(x, this.y); } public double getY() { return this.y; } public Point2d setY(double y) { return new Point2d(this.x, y); } @Override public String toString() { return Formatter.format(this); } } The code to test it: Point2d point = new Point2d(4, 2); System.out.println(point); The console then outputs "Point2d[x=4.0, y=2.0]". How does my code look? I understand reflection is considered bad, but this is just my lazy way of quickly debugging. Thanks Answer: It's pretty nice IMO but can be improved. If your project uses apache commons (this library is often included), you should consider using the FieldUtils class to get the fields : https://commons.apache.org/proper/commons-lang/apidocs/org/apache/commons/lang3/reflect/FieldUtils.html Notably, the getFieldsWithAnnotation method would reduce your code complexity by a bit. That's up to you though ;) This part : try { String value = field.get(object).toString(); string += varName + "=" + value + ", "; } catch (IllegalAccessException e) { e.printStackTrace(); string += varName + "=" + "{Unavailable}, "; } may fail if your field is null, you should exploit the String + operator to avoid it like this : string += varName + "=" + field.get(object) + ", "; string is really a bad name for your variable, maybe rename it as res or something ? I'm no big fan of the printStackTrace, you should consider using the various logging utilities proposed by java : https://docs.oracle.com/javase/9/docs/api/java/util/logging/Logger.html or slf4j. I think those 4 modifications will already make the code neater but we can do a bigger refactoring : instead of using a String that we concatenate bit by bit and then remove the final comma, you should consider using a Stream over the fields array and generating the result with the Collectors#joining method. In the end, you'd have the following method : private static final String SEPARATOR = ", "; public static String format(Object object) { final String className = object.getClass().getSimpleName(); final String prefix = className + "["; String res = Arrays.stream(FieldUtils.getFieldsWithAnnotation(object.getClass(), Formatted.class)) .map(field -> { String varName = field.getName(); try { return varName + "=" + field.get(object); } catch (IllegalAccessException e) { log.severe(e.toString()); return varName + "=" + "{Unavailable}"; } }).collect(joining(SEPARATOR)); return prefix + res + "]"; }
{ "domain": "codereview.stackexchange", "id": 29582, "tags": "java, strings, reinventing-the-wheel, formatting, reflection" }
Understanding Incident/Exitant Radiance
Question: Reading "Physically Based Rendering", I'm trying to understand what the meaning of the incident and exitant radiance functions. I understand that radiance $L(p,\omega) =\frac{d^2\phi}{d\omega dA^{\perp}}$ where $\phi$ is the Flux, $\omega$ is the direction of the light coming towards the surface and $A^{\perp}$ is the surface perpendicular to $\omega$ . So, what I'm effectively trying to measure is the "brightness" of the light at this direction $\omega$ . This is where the incident and exitant radiance come in: $L_{i}(p,w)$ is described as the radiance arriving at the point p and $L_{o}(p,w)$ as the outgoing reflected radiance from the surface. I don't understand this concept at all. Isn't $L_{i}(p,w)$ what $L(p,w)$ is in the first place? Is it the case that $L(p,w) = L_{i}(p,w) + L_{o}(p,w) $ since the "brightness" of a ray can be described as the radiance from all the lightsources in that direction + the radiance from emitted from the surface in that direction as well? Can someone please explain this concept more intuitively, as I'm trying to understand it for Computer Graphics? Answer: Since $L_i(p,\omega)$ and $L_o(p,\omega)$ are specific kinds of radiance, it is meaningless to compare them to $L(p,\omega)$. In a vacuum, provided that the $\omega$ vectors point outward from the surface, it is the case that $$L_i(p \leftarrow \omega) = L_o(p \rightarrow -\omega).$$ For more details, read Section 2.2.3 of Wojciech Jarosz's thesis. Moreover, the reflection equation holds: $$L_o(p,\omega_o) = L_e(p,\omega_o) + \int_{H^2} f_r(p, \omega_i \to \omega_o) L_i(p,\omega_i) \cos(\theta_i) d \omega_i,$$ where $H^2$ is the hemisphere, $f_r$ is the BRDF, and $L_e$ is the emitted radiance. This relation relates the outgoing radiance at a point to the incoming radiance, BRDF of the surface, and emitted radiance.
{ "domain": "physics.stackexchange", "id": 44494, "tags": "visible-light, radiometry" }
Do records with the same key in two RDDs repartitioned by key reside in the same node in spark?
Question: I have two RDDs named "data" and "model", they are repartitioned by key described as below : Does the tuple records with the same key reside in the same node in my cluster ? Should it save IO cost in shuffle operation, such as "data.cogroup(model)" , if it comes true ? Answer: The tuple of one partition is always on the same node because a partition itself is impartible. So if you do a groupBy or write your own partitioner which partitions by key, all records with the same key/partition number will be shuffled to the same node. Otherwise, transformations like mapPartition which pass an iterator to a user defined function wouldn't work.
{ "domain": "datascience.stackexchange", "id": 1292, "tags": "machine-learning, bigdata, apache-spark, parallel" }
Save total number of attempts NSDefault
Question: I'm very new to Swift and programming in general. I'm creating a quiz application and am trying to create a function that will save the number of times a user has completed a quiz. It appears to work through testing but I was wondering if there was a more elegant way to write this function: func plusTestsTaken() { var testsTakenSave = NSUserDefaults.standardUserDefaults() if testsTakenSave.integerForKey("testsTaken") >= 1 { var testsTakenCount = testsTakenSave.valueForKey("testsTaken") as! Int testsTakenCount++ testsTakenSave.setValue(testsTakenCount, forKey: "testsTaken") testsTakenSave.synchronize() println("Test again \(testsTakenCount)") } else { var testsTakenCount = 1 testsTakenSave.setValue(testsTakenCount, forKey: "testsTaken") testsTakenSave.synchronize() println("Test number \(testsTakenCount)") } } Answer: Calling synchronize is completely unnecessary and is actually a performance bottleneck. For more information about synchronize, take a look at this Stack Overflow answer. So we can go ahead and remove that line. We can also get rid of the println as that's not really doing us much good. There's not much point to it at all, especially if this is an iOS application. We can make our logic a little better here too. Let's avoid grabbing our value out of NSUserDefaults multiple times using an if let. We can also take advantage of the fact that if the value has never been set for our key, grabbing it with integerForKey will return 0. So, we can handle all scenarios the same: let kKEY_TestsTaken = "testsTaken" let defaults = NSUserDefaults.standardUserDefaults() let testsTaken = defaults.integerForKey(kKEY_TestsTaken) defaults.setInteger(testsTaken + 1, forKey: kKEY_TestsTaken) The only remaining comment I have is that we can make our method name more clear. Also, there's no reason not to give the user the ability to increment the test count by more than one. func incrementTestCount(value: Int = 1) { let kKEY_TestsTaken = "testsTaken" let defaults = NSUserDefaults.standardUserDefaults() let testsTaken = defaults.integerForKey(kKEY_TestsTaken) defaults.setInteger(testsTaken + value, forKey: kKEY_TestsTaken) }
{ "domain": "codereview.stackexchange", "id": 14143, "tags": "swift" }
Why does paper under pressure flatten over time?
Question: If I have a crumpled paper I can put it under a heavy book. If I remove the book in a minute the paper will still be rather crumpled, but if I leave it on for a longer time it will flatten more. But why? There is no movement involved, so where does the energy to flatten the paper come from? Answer: "Practical Considerations for Humidifying and Flattening Paper" (2003) by Stephanie Watkins, a conservationist, has this to say about how to flatten paper (emphasis mine): The aim of humidification is to reintroduce moisture into the paper support to relax the fibers... Gravity and time, or pressure and time, can be as effective, depending on the relative humidity of the storage area. Curled paper that is sturdy can be hung from flat clips, such as paper-protected bull clips, and left over a short time to slowly uncurl (e.g. panoramas, large blueprints, etc.). Protect the items from dust and light exposure during this process as it may take several weeks. However, humidification relaxes the paper in a faster manner and fibers are less likely to be stressed. Thus: Paper becomes flat once the microscopic cellulose fibers in the paper relax. This process does not complete instantaneously upon application of a force, i.e. it takes time so the longer you leave it the flatter it becomes. The energy should be coming from the book lowering ever so slightly as the creases in the paper disappear.
{ "domain": "physics.stackexchange", "id": 42264, "tags": "everyday-life, home-experiment" }
I keep getting this error while running navigation with robot_pose_ekf package
Question: Timed out waiting for transform from base_link to odom to become available before running costmap, tf error: Could not find a connection between 'odom' and 'base_link' because they are not part of the same tree.Tf has two or more unconnected trees.. canTransform returned after 0.100956 timeout was 0.1. I use robot_pose_ekf package to transform odom -> basefootprint but i don know why the transform doesn't work this is my TF tree Originally posted by Hungnguyen on ROS Answers with karma: 3 on 2022-05-16 Post score: 0 Answer: This may not help you exactly, but just to check: is the input frame correctly set to base_link (the default is base_footprint) and is the output frame correctly set to odom (the default is odom_combined): From the tf graph it seems like robot_pose_ekf is not doing the job of transforming odom to base_footprint. This could be due to parameter errors (as suggested above) or a lack of 'proper' information being fed into the odom Originally posted by ParkerRobert with karma: 113 on 2022-05-16 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 37676, "tags": "navigation, ros-melodic, robot-pose-ekf" }
Ising model for dummies
Question: I am looking for some literature on the Ising model, but I'm having a hard time doing so. All the documentation I seem to find is way over my knowledge. Can you direct me to some documentation on it that can be parsed by my puny undergrad brain? If the answer is negative, can you explain it right there, on the answer form? Answer: The Ising model is a model, originally developed to describe ferromagnetism, but subsequently extended to more problems. Basically, it is an interaction model for spins. Imagine you have a system which is a collection of $N$ spins. Each spin $S_i$ has two possible states $+1$ or $-1$. Here you can imagine already a possible extension to more states. You can also imagine a different interpretation to the spins: $-1$ is a box containing no gas particle, $+1$ is a box containing a gas particle. But I'm getting ahead of myself. Let's stick to spins for now. The next step is to define the energy of the system. $$E= - \sum_i h_i S_i - \sum_{i \neq j} J_{ij} S_i S_j$$ The first term can be interpreted as the contribution to the energy of the interaction of a spin with a local magnetic field. If the magnetic field is the same for all spins, then $h_i=h$ for all $i$. You see that aligning with the field will means a lower energy than going against the field. The second term represents interactions between spins within the system. If $J_{ij}>0$ spins which align will contribute negatively to the energy of the system, thus lowering the total energy. If $J_{ij}<0$ then anti-alignment will contribute. What I still haven't specified is how the spins are structured. By choosing the coefficients $J_{ij}$ appropriately, I can introduce this structure. Suppose I want a 1-dimensional system, which is the one Ising originally solved. Then, you have a countable infinity of spins arranged along the real line with equal spacing. Ising imposed an interaction only allong neighboring spins. So spin $S_n$ can interact with spin $S_{n-1}$ and spin $S_{n+1}$. The energy formula I gave above becomes: $$E= - J \sum_{n} S_n S_{n+1} \; ,$$ if all spins can interact equally strongly. Now, to each configuration of the system corresponds a certain energy. In statistical mechanics, we know that the probability of a certain configuration is $$P(\{S\}) \sim e^{-E(\{S\})/kT}$$ where $T$ is the temperature. Or, if we compute the partition sum $$Z=\sum_{\{S\}} e^{-E(\{S\})/kT}$$ we can deduce the complete equilibrium thermodynamic properties of the system. In particular, we can see if there are phase transitions for certain values of the parameters. (There is none in the 1D case, which at the time, combined with the lack of attention his model was getting, made Ising abandon physics.) Of course, there are many more generalizations of this model. You can also make dynamical versions of the model where you are not just interested in the equilibrium configurations. Here's the wikipedia page for Ising models. Some references: H.E. Stanley 'Introduction to Phase Transitions and Critical Phenomena' Clarendon Press Oxford J.M. Yeomans 'Statistical Mechanics of Phase Transitions' Clarendon Press Oxford R.J. Baxter 'Exactly Solved Models in Statistical Mechanics' Academic New York
{ "domain": "physics.stackexchange", "id": 110, "tags": "statistical-mechanics, resource-recommendations, ising-model" }
Massless limit of Dirac fermion correlation functions
Question: In the 2D massless Dirac fermion CFT we have correlation functions like $$\langle J(z,\bar{z})J(0)\rangle \sim \frac{1}{z^2},$$ where in terms of real Euclidean coordinates $x^0,x^1$, we have $z=x^0+ix^1$, and $J = \frac{1}{2}(J_0-iJ_1)$. I want to see how this arises a bit more concretely, from a perturbative QFT approach ("perturbative" in style, but this is a free theory so it is exact). From Ward identities the current correlation function must be schematically $$G_{\mu\nu}(p)\equiv \int d^2x\langle J_\mu(x) J_\nu (0)\rangle e^{ixp}=G(p^2)\left(g_{\mu\nu}-\frac{p_\mu p_\nu}{p^2}\right).$$ We should be able to calculate the function $G(p^2)$ using massive Dirac propagators and take some kind of limit as $m^2\rightarrow 0$ to compare with the CFT results. We get an integral like $$G_{\mu\nu}(p)=\int \frac{d^2 k}{(2\pi)^2} \frac{\text{Tr}\left[\gamma_\mu \left(\gamma_\rho k^\rho +m\right)\gamma_\nu \left(\gamma_\sigma (k-p)^\sigma +m\right)\right]}{(k^2+m^2)((k-p)^2+m^2)}$$ This is calculated for instance in Adam, Bertlmann, Hofer (1993). The result is $$G(p^2)=\frac{1}{\pi}\left(1-\frac{2m^2}{p^2 R}\log\left(\frac{R+1}{R-1}\right)\right),\qquad R \equiv \sqrt{1+4\frac{m^2}{p^2}}.$$ This function looks a little ugly but it is actually very common in 1-loop calculations in 2D QFT and we have $G(0)=0$ at $p^2=0$ and $$G(p^2)\sim -\frac{2m^2}{\pi p^2} \log\frac{p^2}{m^2},\qquad p^2\gg m^2.$$ Now my question is how to compare this result for massive fermions to the CFT result. How can we take the $m^2\rightarrow 0$ limit and Fourier transform (or in opposite order) to see they agree? Answer: Denote $x=m^2/p^2$, then $G=\frac{1}{\pi}\Big(1-\frac{2x}{\sqrt{1+4x}}\ln \frac{\sqrt{1+4x}+1}{\sqrt{1+4x}-1}\Big)$ Now send $x\rightarrow 0$ (so $m^2\ll p^2$), the second term becomes $2x\ln x\rightarrow 0$, and we recover the expected result.
{ "domain": "physics.stackexchange", "id": 96794, "tags": "quantum-field-theory, fourier-transform, conformal-field-theory, correlation-functions, asymptotics" }
How does this simplification in expectation value algebra work?
Question: We have a Hamiltonian of form: $$\hat{H} = \hat{H}_0 + \hat{H}_1$$ Where $\hat{H}_1$ is a time dependent perturbation which can be written as: $$\hat{H}_1(t) = - \hat{A}F(t)$$ Now $B$ is another observable. The change in the expectation value $\langle B\rangle$ due to the perturbation term is given by: $$\Delta\langle B\rangle = \beta F\big(\langle AB\rangle_o - \langle A\rangle_o\langle B\rangle_o\big)$$ I don't have any confusion in the derivations leading up to this point. But in the next step the author writes: $$\Delta\langle B\rangle = \beta F \langle\delta A \delta B\rangle_o$$ So my confusion is why is $$\big(\langle AB\rangle_o - \langle A\rangle_o\langle B\rangle_o \big) = \langle\delta A \delta B\rangle_o$$ and what do $\delta A$ and $\delta B$ signify? Answer: $\delta A = A - \langle A \rangle_o$, and similar for B. If you substitute this into the RHS and multiply out, you'll get the LHS.
{ "domain": "physics.stackexchange", "id": 66040, "tags": "quantum-mechanics, operators" }
Why is it good to use Glycerol as carbon source to produce intermediates in pharmaceuticals?
Question: Why is it good to use Glycerol as carbon source to produce intermediates in pharmaceuticals? My best guess would be because it produces a lot of ATP per carbon. But I haven't found anything to verify my hunch. Answer: First, glycerol is economically and environmentally interesting as a substrate. It is a side product of first generation biodiesel production, which has been increasing in the past years. So large amounts of glycerol are being accumulated, while we make little use of it. Therefore it has become a necessity to develop glycerol utilizing industries. Another consequence is that glycerol is now a cheap and abundant substrate. The metabolism of glycerol is quite complex and has many end products that are of industrial relevance. For example, 1,3-propanediol and succinic acid have commercial applications and can be produced directly from glycerol by some microorganisms. So depending on the strain you use, you may not need heavy metabolic engineering. Here is an article that might help you.
{ "domain": "biology.stackexchange", "id": 6662, "tags": "biotechnology" }
Problem viewing Turtlebot in Rviz
Question: I'm following the turtlebot tutorials and I'm at the 3D visualization part. So i've SSH'd into the Turtlebot and run the following, Bring up the turtlebot (GOOD) roslaunch turtlebot_bringup minimal.launch Start vision system (GOOD) roslaunch openni_launch openni.launch (instead of roslaunch turtlebot_bringup 3dsensor.launch ) Visualize turtlebot in Rviz (PROBLEM) roslaunch turtlebot_rviz_launchers view_robot.launch Rviz shows up and there is a white blob robot model instead of my turtlebot 2. I can't see the point cloud even though my asus xtion is definitely running and even though the URDF is parsed OK, the robot is not showing up and every robotpart of the model has a red error. A screenshot of Rviz is below https://docs.google.com/file/d/0B1yulGt-BPu-dFBDSlhobVJYQ2c/edit?usp=sharing How do i show the point cloud in rviz and my turtlebot 2 model properly? Originally posted by llSourcell on ROS Answers with karma: 236 on 2013-08-06 Post score: 0 Original comments Comment by jep31 on 2013-08-06: Can we see your tf graph. Usually, this result is due to a no publication of tf. Please make sure you have the node robot_state_publisher active. Comment by llSourcell on 2013-08-06: ok, the output of rosrun tf view_graph as a pdf is here https://docs.google.com/file/d/0B1yulGt-BPu-OTZDSXdybkVibzg/edit?usp=sharing Did rosnode list, robot state publisher is running. Any ideas? Comment by jep31 on 2013-08-06: your tf graph is really incomplete. You should have all the robot tf and more tf even for vision system. Maybe robot state publisher doesn't publish topic because it doesn't receive joint_state. Are you using two computer ? And what happens when you first use $rosnode list before running bringup ? Answer: fixed it randomly by using the default turtlebot 3dsensor.launch instead of openni.launch Originally posted by llSourcell with karma: 236 on 2013-08-11 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 15187, "tags": "rviz, urdf, turtlebot, xtion" }
no GetAngles() member in JointState Class in Gazeo Api 9?
Question: error: /home/robot/rtt_gazebo/rtt_gazebo_examples/src/default_gazebo_component.cpp:107:41: error: ‘__gnu_cxx::__alloc_traits<std::allocator<gazebo::physics::JointState> >::value_type {aka class gazebo::physics::JointState}’ has no member named ‘GetAngles’ state_pos_[j] = joint_states[j].GetAngles(); code: joint_states[j](JointPtr gazebo_joints_[j]); state_pos_[j] = joint_states[j].GetAngles(); std::vector<gazebo::physics::JointPtr> gazebo_joints_; std::vector<gazebo::physics::JointState> joint_states; i'm using gazebo9 version. as i studied this gazebo 9 api jointstate class, their is presence of getangle member in class jointstate, than why is this error coming. Originally posted by hari1234 on Gazebo Answers with karma: 56 on 2018-02-18 Post score: 0 Answer: The documentation you linked to is for Gazebo 7.1.0, you should look here for the Gazebo 9.0.0 documentation (note the version on the URL). If you check the Migration guide, you'll see that the GetAngles changed to Positions: Deprecation: const std::vector<math::Angle> GetAngles() const Replacement: const std::vector<double> &Positions() const Originally posted by chapulina with karma: 7504 on 2018-02-18 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Roberto Z. on 2022-05-11: The link is broken. Found this one instead: https://github.com/osrf/gazebo/blob/gazebo11/Migration.md
{ "domain": "robotics.stackexchange", "id": 4239, "tags": "gazebo" }
Uses of Chlorine
Question: Why in the reaction between sodium chlorate and water is chloric acid produced as well as Na+ and OH- ions. Shouldn't both the ions just react to produce NaOH? Answer: It seems at first that there are three materials involved, but then we subdivide them and sometimes lose track of what's important. The first material mentioned is sodium chlorate. It is a solid, and if you could break it up into very tiny pieces, at last you would get Na+ ions and ClO3- ions (they could break apart further, but let's not be brutal). The second material mentioned is water. Break it down to H2O molecules - a few break down further to H+ and OH- automatically, but only about once in 10 million times at pH=7. The third material is the solution of sodium chlorate in water: sodium ions and chlorate ions dissolve and separate in water molecules (with a very few H+ and OH- ions). Chloric acid is a strong acid; in water the ClO3- doesn't tend to hold on to its H+ or grab H+ from water molecules, so extra OH- ions are not produced. Overall, nothing much is happening beyond the dispersal of the Na+ and ClO3- ions into water and some association with individual water molecules that makes dissolution possible. If you evaporate the water, you get solid sodium chlorate back; there is no chloric acid produced or detectable.
{ "domain": "chemistry.stackexchange", "id": 10790, "tags": "inorganic-chemistry" }
What is the correct direction of turbulence energy cascade?
Question: I have learned from a fluid mechanics textbook [1] that the turbulence energy is cascaded from the largest eddy to the smallest eddy and is then dissipated by the molecular viscous effect. But recently I was reading Chapter 3 of a thermodynamics textbook [2], Prof. A Bejan claimed that such a classic Richardson picture is wrong and the turbulence is ALWAYS cascaded from the smaller scale to the larger scale, which is supported by Prof. C H Gibson [3]. While I found the terminology "inverse cascade" from this thread and it seems that there is already a lot of research on this topic, e.g. [4]. Hence, I got confused if Prof. A Bejan and Prof. C H Gibson are talking about this "inverse cascade" phenomenon and what is the correct direction of turbulence energy cascade? [1] Pope, Stephen B., Turbulent flows. Cambridge university press, 2000. [2] Bejan, Adrian. Entropy generation minimization: the method of thermodynamic optimization of finite-size systems and finite-time processes. CRC press, 2013. [3] https://thejournalofcosmology.com/APSPittsGibson.pdf [4] Chen, Shiyi, Robert E. Ecke, Gregory L. Eyink, Michael Rivera, Minping Wan, and Zuoli Xiao. "Physical mechanism of the two-dimensional inverse energy cascade." Physical review letters 96, no. 8 (2006): 084502. Answer: The answer depends on what kind of fluid theory you are considering. In the 3D viscous incompressible flow the kinetic energy is transferred from large scale eddies to small scale eddies, in particular if you inject energy at a wavenumber $k_F$ kinetic energy will be transferred to the wavenumbers k s.t. $k>k_F$. This is well know fact both numerically and theoretically, see the Kolmogorov41 model (which doesn't rely on NS) theory which prescribes an energy spectrum: $E(k)= C \epsilon^{2/3}k^{-5/3}$, where $\eta$ is the dissipation rate. In 2D visc. incompr. flow the kinetic energy is transferred from small to large eddies, in particular if you inject energy at $k_F$ your energy will be transferred both to the larger and smaller wavenumbers, this fact can be flagged as an inverse canscade. See in this respect Kraichnan-Leith-Batchelor phenomenology according which $E(k) \approx \epsilon^{2/3} k^{-5/3}\theta(k<k_f) + \eta^{2/3}k^{-3} \theta(k>k_F)$ where $\eta$ is the Kolmogorov scale. In other words, you have two different power law behaviors. This double cascade is due to the presence of two inviscid quadratic invariants: energy and enstrophy. Let's go back to Gibson's point. In an older paper he states: Everything that wiggles is not turbulence From this point we should be aware that his defintion of turbulence is different from the standard one. Moreover he states that: Eddies form at the Kolmogorov scale (Fig. 1), pair with neighboring eddies, and these pairs pair with neighboring pairs, etc. ... I believe he is referring to the large structures formation rather than the energy carried from them. The point of the energy cascade is to define how energy is transferred across the scales not how energy transfer determines the vortex merging or structure formation. In addition the presence of backscatter can be emergent in other more complicated theories, in fact Gibson presented an exotic topic like dark matter planets.
{ "domain": "physics.stackexchange", "id": 91539, "tags": "fluid-dynamics, turbulence" }
Why is the angular momentum needed in theories where the linear momentum is locally conserved?
Question: It is a well-known result that angular momentum conservation is related to the invariance of the Lagrangian respect spatial rotations, here a demonstration of how infinitesimal rotations do not alter the Lagrangian is done. I will reproduce part of it. Namely an infinitesimal rotation $\delta \vec{\phi}$ causes a displacement of $\delta \vec{r}=\delta \vec{\phi} \times \vec{r}$ and a change in velocity of $\delta \vec{v}=\delta \vec{\phi} \times \vec{v}$. These changes in position and velocity change the Lagrangian as: $$\delta L=\frac{\partial L}{\partial \vec{r}}\cdot \delta \vec{r} + \frac{\partial L}{\partial \vec{v}}\cdot \delta \vec{v}.$$ Since $\frac{\partial L}{\partial \vec{v}}=\vec{p}$ and $\frac{\partial L}{\partial \vec{r}}=\dot{\vec{p}}$, and also $A\cdot(B\times C)=B\cdot(C\times A)$, the Lagrangian variation can be written as: $$\delta L=\delta \vec{\phi}\cdot(\vec{r}\times\dot{\vec{p}} + \vec{v}\times\vec{p})$$ This is the same as $\delta \vec{\phi}\cdot\frac{d}{dt}(\vec{r}\times\vec{p})$, therefore, if the Lagrangian is conserved with infinitesimal rotations, the angular momentum must be conserved too. However, when the linear momentum is locally conserved, it seems to me redundant to specify that the angular momentum is conserved since, for each point $\vec{r}$ is just a constant, therefore, if $\vec{p}$ is conserved, $\vec{r}\times\vec{p}$ must be conserved too. I see the value of angular momentum conservation in non-local theories such as action at distance theories in which the force is exerted directly between separated particles and therefore if the force is not radial, it would cause a torque in the system breaking the angular momentum conservation. I would like to know if angular momentum is important in theories in which the energy and momentum are conserved locally and if there is some conserved current for angular momentum as should be expected when a quantity is locally conserved. Answer: I) With the phrase the linear momentum is locally preserved OP presumably implies that there is no external force on each particle. If the only forces are internal forces that satisfy the weak Newton's 3rd law $$\vec{F}_{ij}+\vec{F}_{ji}~=~\vec{0},\tag{1}$$ then the total momentum $$\vec{p}_{\rm tot}~:=~ \sum_{i=1}^N \vec{p}_i\tag{2}$$ is conserved: $$ \dot{\vec{p}}_{\rm tot} \stackrel{(2)}{=} \sum_{i=1}^N \dot{\vec{p}}_i ~=~\sum_{i,j}^{i\neq j}\vec{F}_{ij} ~\stackrel{(1)}{=}~\vec{0}. \tag{3}$$ If furthermore the internal forces are collinear, i.e. satisfy the strong Newton's 3rd law $$\vec{F}_{ij} ~\parallel ~\vec{r}_i-\vec{r}_j,\tag{4}$$ and if $$~\vec{p}_i \parallel ~\dot{\vec{r}}_i,\tag{5}$$ then the total angular momentum $$\vec{L}_{\rm tot} ~:=~ \sum_{i=1}^N \vec{r}_i\times\vec{p}_i\tag{6}$$ is conserved: $$\begin{align} \dot{\vec{L}}_{\rm tot} ~\stackrel{(5)+(6)}{=}&~ \sum_{i=1}^N \vec{r}_i\times\dot{\vec{p}}_i ~=~\sum_{i,j}^{i\neq j}\vec{r}_i\times\vec{F}_{ij}\cr ~\stackrel{(1)}{=}~&\sum_{i,j}^{i< j}(\vec{r}_i-\vec{r}_j)\times\vec{F}_{ij}~\stackrel{(4)}{=}~\vec{0}. \end{align} \tag{7}$$ II) More generally, in field theory the angular momentum current $$M^{\mu\alpha\beta}~=~x^{\alpha}T^{\mu\beta}-x^{\beta}T^{\mu\alpha}\tag{8}$$ satisfies a continuity equation $$ d_{\mu}M^{\mu\alpha\beta}~\stackrel{(8)+(10)+(11)}{=}~0\tag{9} $$ if the SEM tensor $T^{\mu\nu}$ is symmetric $$ T^{\mu\nu}~=~ T^{\nu\mu}\tag{10} $$ and satisfies a continuity equation $$ d_{\mu}T^{\mu\nu}~=~0.\tag{11} $$
{ "domain": "physics.stackexchange", "id": 95528, "tags": "lagrangian-formalism, angular-momentum, symmetry, conservation-laws, noethers-theorem" }
Commutator with exponential $[A, \exp(B)]$
Question: How can I tell if $A$ and $\exp(B)$ commute? For $[A, B]$ it's simply $AB-BA$ and for $[\exp(A), \exp(B)]$ I think it'd be $\exp(A)\exp(B) - \exp(B)\exp(A) = \exp(A+B) - \exp(B+A) = 0$. Update: it's not generally true. Is there a 'simple' way to find $[A, \exp(B)]$? Or is this one of those problems where, if you encounter them at all, you are probably doing something wrong? The example I am encountering is $[\vec{S}, \exp(S_z)]$). Answer: If OP wants to evaluate $[A,e^B]$ in terms of $[A,B]$, there is a formula $$\tag{1} [A,e^B] ~=~\int_0^1 \! ds~ e^{(1-s)B} [A,B] e^{sB}. $$ Proof of eq.(1): The identity (1) follows by setting $t=1$ in the following identity $$\tag{2} e^{-tB} [A,e^{tB}] ~=~ \int_0^t\!ds~e^{-sB}[A,B]e^{sB} .$$ To prove equation (2), first note that (2) is trivially true for $t=0$. Secondly, note that a differentiation wrt. $t$ on both sides of (2) produces the same expression $$\tag{3} e^{-tB}[A,B]e^{tB},$$ where we use the fact that $$\tag{4}\frac{d}{dt}e^{tB}~=~Be^{tB}~=~e^{tB}B.$$ So the two sides of eq.(2) must be equal. Remark: See also this related Phys.SE post. (It is related because $[A, \cdot]$ acts as a linear derivation.)
{ "domain": "physics.stackexchange", "id": 6172, "tags": "quantum-mechanics, operators, commutator" }
Collapse of Quantum State and Coefficients
Question: I have found the following exercise in one of my problem sheets: Suppose we have an observable $Q$ and its corresponding operator $\hat{Q}$ has three eigenfunctions $\varphi_1, \varphi_2, \varphi_3$ with eigenvalues $2, 2,$ and $0$, respectively. Let $\psi$ be the following superposition state: $$\psi(t=0) = \varphi_1 + \frac{1}{\sqrt{2}}\varphi_2 + i\varphi_3$$ If a precise measurement of $Q$ yields the value $2$, what will be the wavefunction immediately after the measurement? My guess was that $\psi$ must now be a superposition of the eigenstates $\varphi_1$ and $\varphi_2$, since these are the only ones with eigenvalue 2 for the observable $Q$. That is, immediately after the measurement, $\psi = a\varphi_1 + b\varphi_2$, for some $a, b \in \mathbb{C}$ (up to normalisation). However, the solution turns out to be that $\psi = \varphi_1 + \frac{1}{\sqrt{2}}\varphi_2$ up to a normalising constant. I do not understand why the coefficients $1$ and $\frac{1}{\sqrt{2}}$ are "preserved". Wouldn't any arbitrary linear combination of $\varphi_1$ and $\varphi_2$ be possible? Doesn't the state $\psi$ collapse into the eigenspace spanned by $\varphi_1$ and $\varphi_2$? Answer: Your observation make sense. This is a known point with wavefunction collapse on degenerate eigenvalues. Let's say you measure observable $A$, obtain eigenvalue $a$ and the orthogonal projector onto the eigenspace of $a$ is $\Pi$. If the eigenvalue is non degenerate, $\Pi =|\phi\rangle\langle\phi|$. Now, immediately after the measurement the wavefunction must be $|\phi\rangle$ because if we repeated the measurement we would get $a$ with certainty. However if $a$ is degenerate this argument doesn't work, and in principle one could answer as you do. It is a separate axiom of quantum mechanics (verified in experiments) that in this case after the measurement the wavefunction is $$ \frac{\Pi |\psi\rangle}{ \Vert \Pi |\psi\rangle \Vert}, $$ where $|\psi\rangle$ is the wavefunction before the measurement. This procedure gives the result you saw in the book. You can interpret this extra axiom as a sort of Jaynes - maximum entropy principle.
{ "domain": "physics.stackexchange", "id": 99574, "tags": "quantum-mechanics, wavefunction, quantum-states, wavefunction-collapse" }
Can the same input for a plain neural network be used for a convolutional neural network?
Question: Can the same input for a plain neural network be used for CNNs? Or does the input matrix need to be structured in a different way for CNNs compared to regular NNs? Answer: There is no restriction on how you input a data to a NN. You can input it in 1D arrays and do element-wise multiplication using 4-5 loops and imposing certain conditions(which will be slow and hence $nD$ matrix notations are used for a CNN). Ultimately, the library you are using (TensorFlow, NumPy might convert it into its own convenient dimensions). The main thing different of a CNN from a normal NN is: The number of parameters of a CNN in a convolutional layer is less than the number of input features. $parameters \le features$ (in general it is less than). Different people have different ways of viewing how the convolutional layer work but the general consensus is that the weights of the convolutional layers of a CNN are like digital filters. It will be a $nD$ filter if input dimension is $nD$. The output is obtained by superimposing the filter on a certain part of the input and doing element-wise multiplication of the values of filter and the values of the input upon which the filter is superimposed upon. How you implement this particular operation depends on you. So the answer to your question will be same network cannot be used, but it might be used with modifications (a normal NN is the limiting case of a CNN where $features=parameters$.
{ "domain": "ai.stackexchange", "id": 847, "tags": "neural-networks, convolutional-neural-networks" }
What is a good source for info on disease frequency distribution among age groups?
Question: I need information on the disease frequency distribution among age groups for an Android app I'm building... Hopefully I've come to the right place. Is there a good data source for this? Like a bioinformatics database? Answer: You may need to study each disease individually or do a literature review. The Online Mendelian Inheritance in Man website may be a good starting point http://www.omim.org/ The Centers for Disease Control and Prevention lists several diseases with a variety of different statistics here: http://www.cdc.gov/DiseasesConditions/
{ "domain": "biology.stackexchange", "id": 1267, "tags": "bioinformatics, statistics, database" }
Authentication Class
Question: I've wrote this class in PHP for my future projects: <?php /** * Auth * This class is used to securely authenticate and register users on a given website. * This class uses the crypt() function, and PDO database engine. * * @package Class Library * @author Truth * @copyright 2011 * @version 1.00 * @access public */ class Auth { const HOST = 'localhost'; //Holds the database host. For most cases that would be 'localhost' const NAME = 'users'; //Holds the database name for the class to use. Assums 'users', change if needed. const USER = 'root'; //Holds the username for the database. CHANGE IF NEEDED!! const PASS = ''; //Holds the password for the database. CHANGE IF NEEDED const TABLE = 'users'; //Holds the name of the table. CHANGE IF NEEDED #NOTE, THE USER/PASS COMBO MUST HAVE SUFFICIENT PRIVLIGES FOR THE DATABASE IN NAME. /** * @var Auth::$db * Holds pointer to the PDO object */ protected static $db; /** * @var Auth::$user * Holds username information in the instance object */ protected $user; /** * @var Auth::$pass * Holds password information in the instance object */ protected $pass; /** * @var Auth::$hash * Holds the hashed password ready for database storage */ protected $hash; /** * Auth::__construct() * * @param string $user * @param string $pass * @return void */ public function __construct($user = "", $pass = "") { if (empty($user) || empty($pass)) { throw new Exception("Empty username or password."); } $this->user = $user; $this->pass = $pass; } /** * Auth::hash() * Pass a user/password combination through the crypt algorithm and return the result. * * @return string hash Returns the complete hashed string. */ private function hash() { $this->hash = crypt($this->pass, '$2a$10$'.sha1($this->user)); return $this->hash; } /** * Auth::getUname() * Return the username * * @return string The username */ public function getUname() { return $this->user; } /** * Auth::getHash() * Check if there's a hash, if not, make one, then return the hash. * * @return string Hashed password */ public function getHash() { if (empty($this->hash)) { $this->hash(); } return $this->hash; } public function __toString() { return $this->getHash(); } /** * Auth::databaseConnect() * Establish connection to database and store the connection on self::$db. * * @param PDO $pdo an optional PDO object to have an existing connection instead of a new one * @return void */ public static function databaseConnect($pdo = null) { #The class accepts an external if (is_object(self::$db)) { #Should the connection already been established, an exception will be thrown. #If you want to start the connection yourself, make sure that this function is called before any other class functions. throw new Exception('Could not complete operation. Connection was already established.'); } if (is_object($pdo)) { if (!($pdo instanceof PDO)) { throw Exception('Illegal Argument: Supplied argument is not a PDO object.'); } #The function accepts an external PDO object as an argument. #WARNING: Do not use an object other then a PDO object, as it might cause unexpected results. self::$db = $pdo; return 0; } $dsn = "mysql:host=".self::HOST.";dbname=".self::NAME; try { self::$db = new PDO($dsn, self::USER, self::PASS); //Connect to the database, and store the pdo object. self::$db->setAttribute(PDO::ATTR_ERRMODE,PDO::ERRMODE_EXCEPTION); } catch (PDOException $e) { throw new Exception("There have been an error in the connection: ". $e->getMessage()); } #Here, connection is established. } /** * Auth::databaseCreate() * Create a default table (named self::TABLE) on the database. * Only use from within an installation page or a try/catch statement * * @return void */ public static function databaseCreate() { if (!is_object(self::$db)) { self::databaseConnect(); } $table = self::TABLE; $query = <<<EOQ CREATE TABLE $table( `uname` varchar(255) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL COMMENT 'Holds usernames (who also act as salt)', `phash` varchar(255) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL COMMENT 'Holds hashed passwords', `id` int(10) unsigned NOT NULL AUTO_INCREMENT COMMENT 'Unique user ID', PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE utf8_bin COMMENT='Default user table' AUTO_INCREMENT=1 ; EOQ; //var_dump(self::$db); $stmt = self::$db->prepare($query); $stmt->execute(); } /** * Auth::registerUser() * Insert a user into the database. * * @param Auth $auth The $auth object in question. Escaping is not needed. * @return void */ public static function registerUser(Auth $auth) { if (!is_object(self::$db)) { self::databaseConnect(); } try { $query = "INSERT INTO ".self::TABLE." (`uname`, `phash`, `id`) VALUES (:uname, :phash, NULL)"; $stmt = self::$db->prepare($query); $stmt->bindValue(':uname', $auth->getUname()); $stmt->bindValue(':phash', $auth->getHash()); $stmt->execute(); } catch (PDOException $e) { echo "There has been an error registering: ". $e->getMessage(); } } /** * Auth::validateAgainstDatabase() * Validate a user against the database. * Exceptions detailing the potential error will be thrown. Make sure to catch them! * * @param Auth $auth The Auth object in question. No escaping needed. * @return bool $success Whether the user is valid or not. */ public static function validateAgainstDatabase(Auth $auth) { if (!is_object(self::$db)) { self::databaseConnect(); } try { $query = "SELECT `uname`, `phash` FROM ".self::TABLE." WHERE `uname` = :uname"; $stmt = self::$db->prepare($query); $stmt->bindValue(':uname', $auth->getUname()); $stmt->execute(); $row = $stmt->fetch(PDO::FETCH_ASSOC); if (!$row) { throw new Exception('Username does not exist in the system.'); } if ($row['phash'] != $auth->getHash()) { throw new Exception('Password does not match username.'); } return true; } catch (PDOException $e) { echo "There was an error during the fetching: ". $e->getMessage(); } } } ?> What do you guys think? Can it be improved? Does it look good? Anything to consider? Answer: I would never write static methods in OO code. Others argue that they are ok in certain circumstances, however I make different design decisions to them. At the very least you should have thought long and hard about why you are making it static and be prepared to be stuck with the static dependency created in every class that will call your static method (and the testing overhead that this will add). They are very hard to test. See Misko Hevery's article Static Methods are a death to testability. They create a tight coupling in your code. Read nikic's Don't be STUPID: GRASP SOLID (especially the STU part).
{ "domain": "codereview.stackexchange", "id": 1185, "tags": "php, classes, security" }
Custom msg: strange error during building
Question: Hi, I really don t understand what is wrong in my program. I cannot get a custom msg working, I think it depends on the dependecies with the other packages. Here is my bone program: a simple publisher that should output a float32 just for testing: #include <ros/ros.h> #include "msg/customMsg.msg" int main( int argc, char **argv ) { ros::init( argc, argv, "node_anadyr" ); ros::NodeHandle node; ros::Publisher pub = node.advertise<msg::customMsg>( "topic_alpha", 100 ); ros::Rate rate( 2 ); while( ros::ok() ) { msg::customMsg message; message.data = 3.3; pub.publish( message ); rate.sleep(); } } Of course i created a folder in my package: jack@D-21:~/workspace_ros/src/anadyr$ ls -l | grep msg drwxrwxr-x 2 jack jack 4096 Jul 25 23:26 msg inside I put the following file customMsg.msg containing the following line: float32 data but when I run the catkin_make I get the following error .... .... #### #### Running command: "make -j1 -l1" in "/home/wilhem/workspace_ros/build" #### Scanning dependencies of target anadyr [ 10%] Building CXX object anadyr/CMakeFiles/anadyr.dir/main.cpp.o In file included from /home/wilhem/workspace_ros/src/anadyr/main.cpp:2:0: /home/wilhem/workspace_ros/src/anadyr/msg/customMsg.msg:1:1: error: ‘float32’ does not name a type float32 data ^ make[2]: *** [anadyr/CMakeFiles/anadyr.dir/main.cpp.o] Fehler 1 make[1]: *** [anadyr/CMakeFiles/anadyr.dir/all] Fehler 2 make: *** [all] Fehler 2 Invoking "make" failed Here is my CMakelist.txt cmake_minimum_required(VERSION 2.8.3) project(anadyr) find_package(catkin REQUIRED COMPONENTS roscpp message_generation std_msgs) ## Generate messages in the 'msg' folder add_message_files( FILES customMsg.msg ) ## Generate added messages and services with any dependencies listed here generate_messages( DEPENDENCIES std_msgs ) catkin_package( CATKIN_DEPENDS message_runtime std_msgs ) # add_executable(anadyr_node src/anadyr_node.cpp) add_executable(anadyr main.cpp) target_link_libraries(anadyr ${catkin_LIBRARIES}) and here my package.xml: <?xml version="1.0"?> <package> <name>anadyr</name> <version>0.0.0</version> <description>The anadyr package</description> <maintainer email="">wilhem</maintainer> <license>TODO</license> <buildtool_depend>catkin</buildtool_depend> <build_depend>roscpp</build_depend> <build_depend>message_generation</build_depend> <build_depend>std_msgs</build_depend> <run_depend>roscpp</run_depend> <run_depend>message_runtime</run_depend> <run_depend>std_msgs</run_depend> </package> I read all the tutorials I could find, but it didnt work. What worng in my dependencies?!? What should I do to use custom messages in my application? Regards Originally posted by Andromeda on ROS Answers with karma: 893 on 2014-07-25 Post score: 0 Answer: Instead of including the message file msg/customMsg.msg, you should be including the generated C++ header: anadyr/customMsg.h You may also need to add add_dependencies(anadyr ${catkin_EXPORTED_TARGETS}) to your CMakeLists.txt to make sure that the message generation is run before your program is compiled. Originally posted by ahendrix with karma: 47576 on 2014-07-25 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Andromeda on 2014-07-25: Thanx but it doesn t find the generated file /home/jack/workspace_ros/src/anadyr/main.cpp:2:27: fatal error: anadyr/customMsg.h: file or folder not found #include "anadyr/customMsg.h" ^ compilation terminated. make[2]: *** [anadyr/CMakeFiles/anadyr.dir/main.cpp.o] Fehler 1 make[1]: *** [anadyr/CMakeFiles/anadyr.dir/all] Fehler 2 make: *** [all] Fehler 2 Invoking "make" failed What s wrong`? Comment by Andromeda on 2014-07-26: could you tell me, what should I change if I want to put all *.msg files in a third directory and not in the same folder of the package?
{ "domain": "robotics.stackexchange", "id": 18781, "tags": "ros, build, msg" }
Conversion of mass into energy with 100% efficiency
Question: Suppose you are given a mass of 1 kg. Einstein's theory of relativity says that all the mass can be converted into energy as per the equation $E = mc^2$. So if you want to convert this whole mass of 1 kg into energy, what will you do to that mass? Answer: Although pop sci sources often phrase it in terms of “converting” mass into energy, it is incorrect. $E=mc^2$ says that if you have a stationary mass $m$ then it already has an amount of energy $E=mc^2$. No conversion is necessary. The energy is already there otherwise energy would not be conserved. Also, an amount of stationary energy $E$ has a mass $m=E/c^2$, again no conversion necessary. What you can do is to convert matter at rest with total mass $m$ into photons with total energy $E=mc^2$. In principle you could do that by using antimatter, so if your mass consisted of 0.5 kg of electrons and 0.5 kg of positrons then it would work.
{ "domain": "physics.stackexchange", "id": 63345, "tags": "special-relativity, mass-energy, antimatter" }
NewtonSoft Json.Net serialiser
Question: I'm just starting to develop more in C# after being mainly a VB.NET developer and was looking for someone to critique my implementation of a NewtonSoft Json.Net serialiser. Can you provide some feedback on the following points: Is this a good way to build the class (using Unity)? Is it acceptable to be throwing an exception from the constructor? Is the Async/Await implementation correct? Interface using System.Threading.Tasks; namespace Helper.Core.Serialisation { public interface ISerialiser { /// <summary> /// Serialise the passed in object with the Json.Net serialiser /// </summary> /// <typeparam name="T">Generic type of the serialised object</typeparam> /// <param name="serialseObject">The object to be serialised</param> /// <returns>A serialised Json string</returns> Task<string> SerialiseAsync<T>(T serialseObject); /// <summary> /// Serialise the passed in object with the Json.Net serialiser and compress the string using the IStreamCompression implementation /// </summary> /// <typeparam name="T">Generic type of the serialised object</typeparam> /// <param name="serialseObject">The object to be serialised</param> /// <returns>A compressed byte array of the serialised object</returns> Task<byte[]> SerailseAndCompressAsync<T>(T serialseObject); /// <summary> /// Deserialise the Json string into the generic object /// </summary> /// <typeparam name="T">Generic type of the serialised object</typeparam> /// <param name="serialseObject">The object to be serialised</param> /// <returns>A deserialsied object of type T</returns> Task<T> DeserialiseAsync<T>(string serialsedString); /// <summary> /// Uncompress and deserialise the Json string into the generic object /// </summary> /// <typeparam name="T">Generic type of the serialised object</typeparam> /// <param name="serialed">The object to be serialised</param> /// <returns>An uncompressed & deserialsied object of type T</returns> Task<T> DeserialseAndUnCompressAsync<T>(byte[] serialed); } } Implementation using System; using System.Threading.Tasks; using Helper.Core.Compression; using Helper.Core.Logging; using Microsoft.Practices.Unity; using Newtonsoft.Json; namespace Helper.Core.Serialisation { /// <summary> /// Json.Net implementaiton of the ISerialiser interface /// </summary> internal class JsonSerialiser : ISerialiser { private readonly IStreamCompression _streamCompressor; private readonly ILogger _logger; /// <summary> /// Creates a new instance of the Json.Net Serialiser implementaton /// </summary> /// <param name="streamCompressor">IStreamCompression implementation composed via the IOC container</param> /// <param name="logger">ILogger implementation composed via the IOC container</param> [InjectionConstructor] public JsonSerialiser(IStreamCompression streamCompressor, ILogger logger) { if (streamCompressor == null) throw new ArgumentNullException("streamCompressor"); if (logger == null) throw new ArgumentNullException("logger"); this._streamCompressor = streamCompressor; this._logger = logger; } /// <summary> /// Serialise the passed in object with the Json.Net serialiser /// </summary> /// <typeparam name="T">Generic type of the serialised object</typeparam> /// <param name="serialseObject">The object to be serialised</param> /// <returns>A serialised Json string</returns> public async Task<string> SerialiseAsync<T>(T serialseObject) { if (serialseObject == null) throw new ArgumentNullException("serialseObject"); try { return await JsonConvert.SerializeObjectAsync(serialseObject); } catch (JsonSerializationException ex) { _logger.LogEntry(ex); throw new SerialisationException("Could Not Serialse The Object", ex); } } /// <summary> /// Serialise the passed in object with the Json.Net serialiser and compress the string using the IStreamCompression implementation /// </summary> /// <typeparam name="T">Generic type of the serialised object</typeparam> /// <param name="serialseObject">The object to be serialised</param> /// <returns>A compressed byte array of the serialised object</returns> public async Task<byte[]> SerailseAndCompressAsync<T>(T serialseObject) { if (serialseObject == null) throw new ArgumentNullException("serialseObject"); try { string serialised = await SerialiseAsync(serialseObject); return await _streamCompressor.CompressStringAsync(serialised); } catch (StreamCompressionException ex) { _logger.LogEntry(ex); throw new SerialisationException("Could Not Compress The Object", ex); } } /// <summary> /// Deserialise the Json string into the generic object /// </summary> /// <typeparam name="T">Generic type of the serialised object</typeparam> /// <param name="serialseObject">The object to be serialised</param> /// <returns>A deserialsied object of type T</returns> public async Task<T> DeserialiseAsync<T>(string serialsedString) { if (serialsedString == null) throw new ArgumentNullException("serialsedString"); try { return await JsonConvert.DeserializeObjectAsync<T>(serialsedString); } catch (JsonSerializationException ex) { _logger.LogEntry(ex); throw new SerialisationException("Could Not Deserialse The Object", ex); } } /// <summary> /// Uncompress and deserialise the Json string into the generic object /// </summary> /// <typeparam name="T">Generic type of the serialised object</typeparam> /// <param name="serialed">The object to be serialised</param> /// <returns>An uncompressed & deserialsied object of type T</returns> public async Task<T> DeserialseAndUnCompressAsync<T>(byte[] serialed) { if (serialed == null) throw new ArgumentNullException("serialed"); try { string decompressedSerialised = await _streamCompressor.DecompressStringAsync(serialed); return await DeserialiseAsync<T>(decompressedSerialised); } catch (StreamCompressionException ex) { _logger.LogEntry(ex); throw new SerialisationException("Could Not Decompress The Object", ex); } } } } Answer: Code looks good, I like this IoC style. 3 points to your consideration: You should catch an AggregateException over await. I wouldn't bother passing a logger to a serializer - that's none of his business. Let the serializer throw if he's not happy. Fix some typo in names and messages ("Deserialse" and so). I somewhat doubt the whole concept of async serialization. I take serialized data to be an object snapshot in a known 'time point'. But if it's useful for you go for it. (Oops didn't address your actual questions) Yes I think it's great. Sure. Lacking meaningful 'default object', you don't have many alternatives. This is probably the main issue here, and the hardest to answer. I have some doubts about returning a non-cancellable Task. I suspect if the object to be serialized has changed completely, the user may want to cancel the serialization.
{ "domain": "codereview.stackexchange", "id": 13452, "tags": "c#, json, serialization, async-await, json.net" }
samtools mpileup skipping read
Question: I run the command: samtools mpileup -O -s -q20-B -Q20 -f hg19.fa -r chr1:569929-569931 myFile.bam and get: chr1 569929 G 7 ...,,., EEEEEEE NTTVTTU 53,48,42,60,30,29,27 chr1 569930 C 6 ...,., EEAEEE NTTTTU 54,49,43,31,30,28 chr1 569931 G 6 ...,., EEEEEE NTTTTU 55,50,44,32,31,29 I am interested in position 569930. I inspect the bam with: samtools view myFile.bam chr1:569929-569931 and I see: NS500355:NS500355:HHYN5BGXB:2:21311:2303:9255 99 chr1 569869 0 76M = 569900 107 CTCCATAACGCTCCTCATACTAGGCCTACTAACCAACACACTAACCATATACCAATGATGGCGCGATGTAACACGA AAAAAEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE<EEEEEEEAEEEEEEE NM:i:2 MD:Z:5C3C66 MC:Z:76M AS:i:66 XS:i:71 RG:Z:../H2O XA:Z:chrM,+9321,76M,1; NS500355:NS500355:HHYN5BGXB:4:22603:25857:2799 99 chr1 569877 45 76M = 569896 95 CGCTCCTCATACTAGGCCTACTAACCAACACACTAACCATATACCAATGATGGCGCGATGTAACACGAGAAAGCAC AAAAAEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE NM:i:1 MD:Z:1C74 MC:Z:76M AS:i:74 XS:i:71 RG:Z:../H2O XA:Z:chrM,+9329,76M,1; NS500355:NS500355:HHYN5BGXB:1:12105:11868:12011 99 chr1 569882 51 76M = 569900 94 CTCATACTAGGCCTACTAACCAACACACTAACCATATACCAATGATGGCGCGATGTAACACGAGAAAGCACATACC /AAAAEEEEEAEEEEEEEEEEEEEEEEEEEEEAEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEAEEEEEA NM:i:0 MD:Z:76 MC:Z:76M AS:i:76 XS:i:71 RG:Z:../H2O XA:Z:chrM,+9334,76M,1; NS500355:NS500355:HHYN5BGXB:2:13301:2681:6114 99 chr1 569888 9 76M = 569952 140 CTAGGCATACTAACCAACACACTAACAATATACCAATGATGGAGCGATGTAACACGAGAAAGCACATACCAACGCC AAAA/E/EE//AAEEEEE/EAAEAA//EEAEEE/E/EEEE/////A/EE/EE/EE/E//EE/EEEE/E/AEA#### NM:i:4 MD:Z:6C19C15C29G3 MC:Z:76M AS:i:57 XS:i:52 RG:Z:../H2O XA:Z:chrM,+9340,76M,5; NS500355:NS500355:HHYN5BGXB:2:23110:10125:9334 99 chr1 569888 51 76M = 569904 92 CTAGGCCTACTAACCAACACACTAACCATATACCAATGATGGCGCGATGTAACACGAGAAAGCACATACCAAGGCC AAA/AEE6EEEE6E///E/A/AAE/E/AEEEEEEE/EE/EEEAEEEEAEE//EEEEEEAEEEAEEAE/AEE/AEE< NM:i:0 MD:Z:76 MC:Z:76M AS:i:76 XS:i:71 RG:Z:../H2O XA:Z:chrM,+9340,76M,1; NS500355:NS500355:HHYN5BGXB:1:13209:10692:11045 83 chr1 569888 53 18S58M = 569888 -58 CGTGTGCTCTTCCGATCTCTAGGCCTACTAACCAACACACTAACCATATACCAATGATGGCGCGATGTAACACGAG EE<A/EE//AE/AA///AAE///EEEEE//6///E//EE/6EE/EEEEEEEA/A/A/E/E//AEAEEEEE6A6AAA NM:i:0 MD:Z:58 MC:Z:59M17S AS:i:58 XS:i:53 RG:Z:../H2O XA:Z:chrM,-9340,18S58M,1; NS500355:NS500355:HHYN5BGXB:1:21304:26480:15673 83 chr1 569890 11 76M = 569796 -170 AGGCCTACTAACCAACACACTAACCATATACCAATGATGGCGCGATGTAACACGAGAAAGCACATACCAAGGCCAC AEEEEEEEAAEEEEEEEEEEEE/EEEEEEEEEEAEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEAAAAA NM:i:0 MD:Z:76 MC:Z:76M AS:i:76 XS:i:71 RG:Z:../H2O XA:Z:chrM,-9342,76M,1; NS500355:NS500355:HHYN5BGXB:4:11602:21237:3028 83 chr1 569900 51 76M = 569900 -76 ACCAACACACTAACCATATACCAATGATGGCGCGATGTAACACGAGAAAGCACATACCAAGGCCACCACACACCAC EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEAAAAA NM:i:0 MD:Z:76 MC:Z:76M AS:i:76 XS:i:71 RG:Z:../H2O XA:Z:chrM,-9352,76M,1; NS500355:NS500355:HHYN5BGXB:3:13504:4369:8812 99 chr1 569901 51 76M = 569901 76 CCAACACACTAACCATATACCAATGATGGCGCGATGTAACACGAGAAAGCACATACCAAGGCCACCACACACCACC AAAAAEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE6EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE NM:i:0 MD:Z:76 MC:Z:76M AS:i:76 XS:i:71 RG:Z:../H2O XA:Z:chrM,+9353,76M,1; NS500355:NS500355:HHYN5BGXB:2:11203:19774:18704 99 chr1 569904 11 76M = 569952 124 ACACACTAACCATATACCAATGATGGCGCGATGTAACACGAGAAAGCACATACCAAGGCCACCACACACCACCTGT 6AAAAEEEEEEEEEEEEEAEEEEEEEEEEEEEEEE6EEEEAEEEAEEEEEEEEEEEEEEEEEEAE/E6EEAEEEEA NM:i:0 MD:Z:76 MC:Z:76M AS:i:76 XS:i:71 RG:Z:../H2O XA:Z:chrM,+9356,76M,1; NS500355:NS500355:HHYN5BGXB:2:11309:13954:17567 99 chr1 569905 11 76M = 569972 143 CACACTAACCATATACCAATGATGGCGCGATGTAACACGAGAAAGCACATACCAAGGCCACCACACACCACCTGTC AAAAAEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEAEEEEEEEEEEE NM:i:0 MD:Z:76 MC:Z:76M AS:i:76 XS:i:71 RG:Z:../H2O XA:Z:chrM,+9357,76M,1; NS500355:NS500355:HHYN5BGXB:1:21202:13252:14049 99 chr1 569905 11 76M = 569927 98 CACACTAACCATATACCAATGATGGCGCGATGTAACACGAGAAAGCACATACCAAGGCCACCACACACCACCTGTC AAAAAEE6EEEEEEEEEEEEEAEEEEEEEEEEEEEEEEEEE6EEEEEEEEEEEEEEEEE/EEEE<EEEEEEEEEEE NM:i:0 MD:Z:76 MC:Z:76M AS:i:76 XS:i:71 RG:Z:../H2O XA:Z:chrM,+9357,76M,1; NS500355:NS500355:HHYN5BGXB:1:22311:21964:12072 83 chr1 569908 52 5S71M = 569908 -71 GATCTACTAACCATATACCAATGATGGCGCGATGTAACACGAGAAAGCACATACCAAGGCCACCACACACCACCTG <AEEEA6EEE6EEEEEEEEEEEEEE/EEEEEEEEEEEEEEEEE/EEEEEEEEEEEEEEEEEEAAEEEAEEEAA6AA NM:i:0 MD:Z:71 MC:Z:71M5S AS:i:71 XS:i:66 RG:Z:../H2O XA:Z:chrM,-9360,5S71M,1; NS500355:NS500355:HHYN5BGXB:4:12602:7858:19418 99 chr1 569909 15 76M = 569917 84 CTAACCATATACCAATGATGGCGCGATGTAACACGAGAAAGCACATACCAAGGCCACCACACACCACCTGTCCAAA AAAAAEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEAEE/AAEEEEEEEEEEEEEEE NM:i:1 MD:Z:74G1 MC:Z:76M AS:i:74 XS:i:71 RG:Z:../H2O XA:Z:chrM,+9361,76M,1; NS500355:NS500355:HHYN5BGXB:2:12205:19456:7598 83 chr1 569927 0 76M = 569907 -96 TGGCGCGATGTAACACGAGAAAGCACATACCAAGGCCACCACACACCACCTGTCCAAAAAGGCCTTCGATACGGGA EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEAA/EEEEEEEEEAAAAA NM:i:1 MD:Z:56G19 MC:Z:76M AS:i:71 XS:i:76 RG:Z:../H2O XA:Z:chrM,-9379,76M,0; Based on mapq and positions (respect to 569930) I see that the read with cigar=18S58M is skipped by mpileup. WHY? The soft clip is way far from position chr1:569930 Answer: Examining the mpileup output for a single read on its own is a good way to figure out why bases are not appearing as expected. So if you suspect that the cigar=18S58M read is the missing one (you are correct), prepare a SAM file containing just that read: @SQ SN:chr1 LN:248956422 NS500355:NS500355:HHYN5BGXB:1:13209:10692:11045 83 chr1 569888 53 18S58M = 569888 -58 CGTGTGCTCTTCCGATCTCTAGGCCTACTAACCAACACACTAACCATATACCAATGATGGCGCGATGTAACACGAG EE<A/EE//AE/AA///AAE///EEEEE//6///E//EE/6EE/EEEEEEEA/A/A/E/E//AEAEEEEE6A6AAA NM:i:0 MD:Z:58 MC:Z:59M17S AS:i:58 XS:i:53RG:Z:../H2O XA:Z:chrM,-9340,18S58M,1; and run it through mpileup: $ samtools mpileup -Q0 -B oneread.sam This will show you that this read's base quality dips below 20 at the position you are interested in. Thus, as finswimmer suspected, your -Q20 is removing it at this position.
{ "domain": "bioinformatics.stackexchange", "id": 1077, "tags": "samtools, mpileup" }
is there any mnemonic for the AXE method?
Question: Is there any mnemonic that helps remembering the names of the different geometries (Shapes) that we obtain from applying the AXE method ? Answer: I think you shouldn't abuse with mnemonic. In this case you should start with this rule of VSPR: each atom in a molecule will be positioned so that there is minimal repulsion between the valence electrons of that atom. Now you should begin to train your three dimensional thinking! This is the method I've adopted to learn AXE, of course everyone has his own but maybe it could help to you to find yours: Learn to draw the $AX_nE_0$ series: This for me was the first step: learn to draw the basic geometry 0 lone pair. This is quite intuitive knowing the previous law. Begin with the $A$ then add the $X$s. Note that the linear module could help you to remember it. Add 1,2,3 lone pairs series This is the next step. I've brutally modified the wikipedia table to show how you can easily do this step from the Basic geometry 0 lone pair. Simply begin to substitute $X$ with $E$: one $E$ for 1 lone pair, two for 2 lone pair... Nomenclature The last step is to add the nomenclature. First of all you should take in account that the lone pair ($E$) don't appear as a part of the final shape from which the nomenclature is derived. If not considering the $E$ group you obtain a planar molecule you have to add the word planar after the name, if you obtain a pyramidal shape you should use pyramidal after the name of the planar pyramid's base. So imagine that $E$ groups are invisible and try use this method if you can't find it only with pyramids try to figure out which of the following figures is more close to the actual shape of the molecule. The first two are platonic solids (Octhaedron, and Tetrahedron) then there is a T for a T-shaped, and a bent figure. In the original wikipedia table respective molecule configuration is already rotated to the right position, I think you can do it on your own, with your 3D thinking. For the last, seesaw shaped, there is a powerful "mem": Here is an old photo of two boys in a $SF_4$ molecule.Good ol' times!
{ "domain": "chemistry.stackexchange", "id": 837, "tags": "inorganic-chemistry, crystal-structure" }
Does quantum mechanics play a role in the brain?
Question: I'm interested in whether the scale of processes that occur in the brain is small enough to be affected by quantum mechanics. For instance, we ignore quantum mechanics when we analyze a game of tennis because a tennis ball is much too large to be affected by quantum mechanics. However, signals in the brain are mostly (all?) electrical, carried by electrons, and electrons are definitely 'small' enough to be affected by quantum mechanics. Does that mean the only way we will be able to further understand how the mind works is through an application of quantum mechanics? Answer: Quantum mechanics has almost no bearing on the operation of the brain, except insofar as it explains the existence of matter. You say that signals are carried by electrons, but this is very imprecise. Rather, they are carried by various kinds of chemical signals, including ions. Those signals are released into a warm environment that they interact with over a very short timescale. Quantum mechanical processes like interference and entanglement only continue to show effects that differ from classical physics when the relevant information does not leak into the environment. This issue has been explained the context of the brain by Max Tegmark in The importance of quantum decoherence in brain processes. In the brain, the leaking of information should take place over a time of the order $10^{-13}-10^{-20}$s. The timescale over which neurons fire etc. is $0.001-0.1$s. So your thoughts are not quantum computations or anything like that. The brain is a classical computer.
{ "domain": "physics.stackexchange", "id": 29184, "tags": "quantum-mechanics, biophysics, estimation, perception" }
Why is C4 of butynone electrophilic?
Question: A question asks whether C4 in 3-butyne-2-one can act as a nucleophile, electrophile, or acid (it can be more than one). The answer is that it can be both electrophilic and acidic. The acidity makes sense, it's a terminal alkyne, sp orbitals stabilize the negative charge on the conjugate base. The electrophilicity I don't quite understand. I can write a resonance structure that puts some positive charge on C4, analogous to an alpha-beta unsaturated carbonyl compound, but it looks bizarre. I know ketenes are a real thing, but is this resonance structure a real contributor? Is there another explanation for the electrophilicity? Google searches for alkynone, ynones, etc. yielded mostly peer-reviewed literature that's above my understanding. Answer: Yes, that second resonance structure, the one on the right, is a significant contributor to the overall description of the molecule. That resonance looks a little different because the p orbitals on the alpha and beta carbons are on $\ce{sp}$ hybridized carbons, rather than the $\ce{sp^2}$ hybridized carbon more commonly seen in enones. As this second resonance structure explains, alpha-beta unsaturated enones and ynones are both subject to attack by nucleophiles at the electrophilic beta carbon. Resonance structures, like the one pictured below, explain the reactivity seen in enamines and yneamines (the triple-bond analogue); however, now the positive charge is on the nitrogen and a negative charge on the beta carbon. Therefore these nitrogen compounds are subject to attack by an electrophile at the nucleophilic beta carbon.
{ "domain": "chemistry.stackexchange", "id": 2014, "tags": "organic-chemistry, resonance" }
Is work done by torque due to friction in pure rolling?
Question: This question has been asked and answered numerous times. I went through almost all of them and found no consensus. I found that all of the answers can be divided into two categories: Friction does work, but that work is converted to rotational kinetic energy: A B C Friction does not do work, because the point of contact has no instantaneous displacement/has no relative motion/moves in a cycloid path which is perpendicular to direction of friction acting at that point: D E The first argument feels sketchy because derivation A is wrong, B does not seem rigorous and C offers none. The second argument makes sense but reason varies depending on who is answering. I also would like to point out that the force of friction creates a torque which rotates the body about the Centre of Mass and hence does rotational work. Which answer is correct? If the answer is the first, is a more rigorous derivation available? If the answer is second, how do you explain the work done by torque due to friction in rotating the body? EDIT: I did not find any of the answers completely satisfactory. I thought for a while and came to a conclusion which I think satisfactorily provides and answer to this question and have added it as an answer. Answer: There are numerous variants of this question on stack overflow namely: 1, 2, 3 , 4, 5. In this post I want to collect all those posts together and attempt to resolve this question. I have also attached a link to an excellent post by John Darby on this topic below. I have received 5 answers to my question, but none of them really felt correct: Answer by Dale: The crux of Dale's answer is that "It is entirely possible for a force to provide a torque and change angular momentum but not provide a power and change energy." S/he argues that the Torque, Power, and Energy relation $P=\vec{\tau} . \vec{\theta}$ " is only valid in the frame where the axis is at rest", which is is something is have never found in any book. Answer by mmessers314: The answer is correct, but I wanted to discuss the problem by considering rotation about the center of mass, which I had mentioned in the original post. Answer by Claudio Saspinski: The derivation is beautiful, but according to definition of work, it has to be calculated about the point of application of force, not about the COM. Claudio Saspinski calculates pseudowork (Bruce Sherwood) or centre-of-mass energy (Resnik and Halliday) from Newton's Second Law of Motion Answer by Sabat Anwar and Answer by John Darby: Sabat Anwar and John Darby have written essentially the same answer. Essentially, the answers state that net work done by friction is zero, because during rolling without slipping "work done by friction on the CM " and "the work by friction with respect to the CM " cancel out. John Darby goes into quite some detail in a separate post on this topic. The only problem with this answer is that friction acts on the point of contact, and hence work has to be done on the point of contact, not on CM. I thought about the question for a while and came to realisation that the question can be answered by a modified form of Sabat Anwar and John Darby's answer. Let us take the point of contact of the rolling disc/sphere/cylinder as $Q$. The instantaneous displacement of this point can be divided into two parts: 1) $\vec{X_{Q_T}}=\vec{X_{CM}}$ due to translational motion of the body, and 2) $\vec{X_{Q_R}}=\vec{\theta}\times\vec{R}$ do to rotation of the body. Hence, work done by friction is: $$W_{fric}=\vec{F_{fric}}.(\vec{X_{Q_T}}+\vec{X_{Q_R}}) \tag{1}$$ From this there are two ways to answers by work done by friction is $0$: From eq. 1, $$W_{fric}=\vec{F_{fric}}.(\vec{\dot X_{Q_T}}+\vec{\dot X_{Q_R}})dt$$ From condition for pure rolling, we know $\vec{\dot X_{Q_T}}$ and $\vec{\dot X_{Q_R}}$ are equal in magnitude and opposite in direction, resulting in instantaneous velocty of point $Q$ being $0$. Hence friction does no work during rolling without slipping on an incline because the instantaneous velocity of the point of contact is $0$ at every moment of its motion. From eq. 2, $$W_{fric}=\vec{F_{fric}}.(\vec{X_{Q_T}}+\vec{X_{Q_R}}) \\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ = \vec{F_{fric}}.\vec{X_{CM}} + \vec{F_{fric}}.(\vec{\theta}\times\vec{R}) \\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ = \vec{F_{fric}}.\vec{X_{CM}} + \vec{\theta}.(\vec{R}\times\vec{F_{fric}}) \\ \ \ \ \ \ \ \ = \vec{F_{fric}}.\vec{X_{CM}}+\vec{\tau}.\vec{\theta}$$ On calculating the rotational work done by friction about CM and the translational work done by friction along point of contact, we find that magnitude is equal due to pure rolling, but the rotational work done is positive and translational work done is negative. Thus it may be said that translational kinetic energy is sapped away by friction and turned into rotational kinetic energy. Hence, the net work done by friction at point of contact is $0$.
{ "domain": "physics.stackexchange", "id": 88545, "tags": "newtonian-mechanics, rotational-dynamics, work, friction, rigid-body-dynamics" }
C++ Stack Implementation Using Templates and Linked List
Question: I have a simple stack class. It uses a linked list for its data structure and it is of type template so any datatype (including pointers) can be passed in. It can be initialized on the stack or on the heap. It does only what a stack should do and nothing more. It is constructed entirely in the header file to avoid compiler errors for missing types(I believe that's standard procedure for template). I created it with Xcode on a mac and have not tested it on Linux or Windows yet for compiler errors. The code is well commented and warns the caller they are responsible for deletion of heap allocated objects(just like a C++ vector would be). I want to discuss this topic in a blog or something so I want to make sure it is correct. Please review my code for completeness and correctness My Stack: #ifndef TStack_h #define TStack_h #include <iostream> #include <stdexcept> template <class T> class TStack{ public: //#################################### // Constructor. //#################################### TStack(); //#################################### // Destructor. //#################################### ~TStack(); //#################################### // Class methods. //#################################### /** * Adds an item to the stack. * <b>Notes:</b><br> * &nbsp; N/A <br> * ------<br> * <b>Arguments:</b><br> * &nbsp; template<class T>: the type of the class.<br> * ------<br> * <b>Return:</b><br> * &nbsp; N/A<br> * ------<br> * <b>Throws</b><br> * &nbsp; N/A<br> */ void push(T elem); /** * Removes the data item at the beginning of the stack. * <b>Notes:</b><br> * &nbsp; Caller is responsible for releasing objects that are popped from the stack.<br> * ------<br> * <b>Arguments:</b><br> * &nbsp; N/A <br> * ------<br> * <b>Return:</b><br> * &nbsp;dataType T: the type<br> * ------<br> * <b>Throws</b><br> * &nbsp; out_of_range exception for an empty stack.<br> */ T pop(); /** * The size of the stack. * <b>Notes:</b><br> * &nbsp;N/A<br> * ------<br> * <b>Arguments:</b><br> * &nbsp; N/A <br> * ------<br> * <b>Return:</b><br> * &nbsp;int : The size of the stack.<br> * ------<br> * <b>Throws</b><br> * &nbsp; N/A<br> */ int getSize(); /** * Reports if the stack is empty. * <b>Notes:</b><br> * &nbsp;N/A<br> * ------<br> * <b>Arguments:</b><br> * &nbsp; N/A <br> * ------<br> * <b>Return:</b><br> * &nbsp;int : Whether the stack is empty of not.<br> * ------<br> * <b>Throws</b><br> * &nbsp; N/A<br> */ bool isEmpty(); //#################################### // End - Class methods. //#################################### private: /** * A linked list node struct. * <b>Notes:</b><br> * &nbsp;N/A<br> **/ struct Node{ T data_; Node* next_; }; /** * The size of the stack. * <b>Notes:</b><br> * &nbsp;N/A<br> **/ int size_; /** * The head of the linked list(stack). * <b>Notes:</b><br> * &nbsp;N/A<br> **/ Node *head_; }; //#################################### // Constructor. //#################################### template <class T> TStack <T>::TStack(){ this->size_ = 0; this->head_ = NULL; } //#################################### // Destructor. //#################################### template <class T> TStack <T>::~TStack(){ // Nothing to tear down. } //#################################### // Class TStack Methods. //#################################### template<class T> void TStack< T >::push(T elem){ Node * newNode = new Node(); newNode->data_ = elem; newNode->next_ = NULL; // If the head is NULL just assign it to newNode(); if(this->head_ == NULL){ this->head_= newNode; }else{ newNode->next_ = this->head_; this->head_ = newNode; } this->size_ += 1; } template<class T> T TStack< T >::pop(){ // Suppress compile error for "Control reaches end // of statement". We will throw an exception if the // stack is empty. #pragma GCC diagnostic ignored "-Wreturn-type" try{ if(this->isEmpty() == false){ Node *temp = this->head_; this->head_ = this->head_->next_; this->size_ --; return temp->data_; // If we just popped the last node, set head to NULL. if(this->isEmpty() == true) this->head_ = NULL; }else{ throw std::out_of_range("The Stack Is Empty!"); } }catch (const std::out_of_range& e) { std::cerr <<e.what() <<std::endl; } } template<class T> int TStack<T>::getSize(){ return this->size_; } template<class T> bool TStack<T>::isEmpty(){ if(this->size_ > 0) return false; return true; } //#################################### // End Class TStack Methods. //#################################### //#################################### // End Class TStack. //#################################### #endif Example main.cpp: #include <iostream> #include "TStack.h" int main(int argc, const char * argv[]) { int* one = new int(34); int* two = new int(68); int* three = new int(72); TStack<int*> myStack; myStack.push(one); myStack.push(two); myStack.push(three); while(myStack.getSize() > 0){ int* ans = myStack.pop(); std::cout<<"Value: "<<*ans<<std::endl; delete ans; } // Throws and catches exception gracefully and logs the stack is empty. int* ans = myStack.pop(); return 0; } Answer: Your code is pretty nifty, yet I have some comments: Advice 1 In your destructor you should deallocate all the stack nodes, or, otherwise, you leak memory. Something like this: template <class T> TStack <T>::~TStack(){ Node* node = head_; Node* next; while (node) { next = node->next_; delete node; node = next; } } Advice 2 In you pop method, you effectively print a message to the standard output if the stack is empty. The better idea would be just throwing an exception and let the user catch it. Something like this: template<class T> T TStack< T >::pop(){ if (isEmpty()) { throw std::runtime_error{"The stack is empty."}; } T ret = head_->data_; Node* remove_node = head_; head_ = head_->next_; size_--; delete remove_node; // DON'T FORGET TO DELETE THE STACK NODE! return ret; } Advice 3 Also, consider providing the top method that just returns the topmost element without removing it.
{ "domain": "codereview.stackexchange", "id": 23974, "tags": "c++, linked-list, template" }
Is this a valid usage of structure assignment in C?
Question: Q: Please comment on the usage of structures and structure assignment operations in C I am working on converting a MATLAB program to C using BLAS and LAPACK for linear algebra support. The MATLAB code uses cell arrays. I created a Matrix datatype and a Cell data-type. A section of the header file/implementation: #define ASSERT(c,m) #define PREC double #define ZEROS(r,c) (zeros(r,c)) #define ONES(r,c) (ones(r,c)) #define EYE(r,c) (eye(r,c)) #define ALLOCM(r,c) (alloc_matrix(r,c)) #define PRINTM(M) (print_matrix(M)) #define FREEM(M) (free_matrix(M)) #define ALLOCC(r,c) (alloc_cell(r,c)) #define GETMC(C,r,c) (get_matrix_from_cell(C,r,c)) #define SETMC(C,r,c,M) (set_matrix_in_cell(C,r,c,M)) #define FREEC(C) (free_cell(C)) /* Matrix */ typedef struct { PREC * array; int rows; // The number of rows in the matrix int cols; // The number of columns in the matrix }Matrix; /* Cell of Matrices */ typedef struct { Matrix * array; // Cell array of matrices stored in row major form int rows; // Number of rows in Cell array int cols; // Number of cols in Cell array }Cell; /* Matrix utility functions */ Matrix alloc_matrix(int rows, int cols); Matrix zeros(int rows, int cols); Matrix ones(int rows, int cols); Matrix eye(int rows, int cols); Matrix corrcov(Matrix matrix); void print_matrix(Matrix matrix); void free_matrix(Matrix matrix); /* Cell array utility functions */ Cell alloc_cell(int rows, int cols); INLINE Matrix get_matrix_from_cell(Cell cell, int row, int col); INLINE void set_matrix_in_cell(Cell cell, int row, int col, Matrix matrix); void free_cell(Cell cell); // Implementation Matrix alloc_matrix(int rows, int cols){ Matrix matrix; ASSERT(rows > 0 && cols > 0, FATAL_NEGATIVE_DIMENSIONS); matrix.array = (PREC *) malloc(sizeof(PREC) * rows * cols); ASSERT(matrix.array != NULL, FATAL_NO_MEMORY); matrix.rows = rows; matrix.cols = cols; return matrix; } Matrix zeros(int rows, int cols){ int i; int size; Matrix matrix; matrix = alloc_matrix(rows, cols); for(i = 0, size = rows * cols; i < size ; i++){ matrix.array[i] = 0.0; } return matrix; } void print_matrix(Matrix matrix){ int i; int j; int k; int rows = matrix.rows; int cols = matrix.cols; ASSERT(rows > 0 && cols > 0, FATAL_NEGATIVE_DIMENSIONS); ASSERT(matrix.array != NULL, FATAL_NULL_POINTER); printf("\n Rows: %d, Cols: %d\n", rows, cols); for(i = 0 ; i < rows; i++){ for(j = 0, k = i * cols; j < cols; j++){ printf("%8.6f ", matrix.array[ k + j ]); } printf("\n"); } } void free_matrix(Matrix matrix){ ASSERT(matrix.array != NULL, FATAL_NULL_POINTER); free(matrix.array); matrix.array = NULL; } Cell alloc_cell(int rows, int cols){ Cell cell; ASSERT(rows > 0 && cols > 0, FATAL_NEGATIVE_DIMENSIONS); cell.array = (Matrix *) malloc(sizeof(Matrix) * rows * cols); ASSERT(cell.array != NULL, FATAL_NO_MEMORY); cell.rows = rows; cell.cols = cols; return cell; } void free_cell(Cell cell){ int i; int size; int rows = cell.rows; int cols = cell.cols; ASSERT(rows > 0 && cols > 0, FATAL_NEGATIVE_DIMENSIONS); ASSERT(cell.array != NULL, FATAL_NULL_POINTER); for( i = 0, size = rows * cols; i < size; i++){ free_matrix(cell.array[i]); } free(cell.array); } INLINE Matrix get_matrix_from_cell(Cell cell, int row, int col){ Matrix matrix; int rows = cell.rows; int cols = cell.cols; ASSERT(cell.array != NULL, FATAL_NULL_POINTER); ASSERT(rows > 0 && cols > 0, FATAL_NEGATIVE_DIMENSIONS); ASSERT(row >= 0 && row < rows, FATAL_INDEX_OUT_OF_BOUNDS); ASSERT(col >= 0 && col < cols, FATAL_INDEX_OUT_OF_BOUNDS); matrix = cell.array[(row * cols) + col]; return matrix; } INLINE void set_matrix_in_cell(Cell cell, int row, int col, Matrix matrix){ int rows = cell.rows; int cols = cell.cols; ASSERT(cell.array != NULL, FATAL_NULL_POINTER); ASSERT(rows > 0 && cols > 0, FATAL_NEGATIVE_DIMENSIONS); ASSERT(row >= 0 && row < rows, FATAL_INDEX_OUT_OF_BOUNDS); ASSERT(col >= 0 && col < cols, FATAL_INDEX_OUT_OF_BOUNDS); cell.array[(row * cols) + col] = matrix; } Notice that I do not return pointers from functions that allocate Matrix type objects or Cell type objects. I return a plain Matrix or Cell object, but I allocate the arrays (PREC type for matrices and Matrix type for cells - where PREC is double or float) inside these functions. This is convenient because: struct assignment is a valid operation in C (I know that the dynamically allocated data is not duplicated, only a reference to it) to get to an element inside a Matrix, I can use the dot syntax instead of -> syntax (also I believe I may have to de-reference just once if I use the dot syntax - matrix.array[i] vs matrix->array[i] (I mean the matrix in matrix->array[i] is a pointer to a Matrix object, as I am comparing items on a stack vs those that are dynamically allocated) Most of the Matrix objects and Cell objects ( barring the dynamically allocated memory ) are on the stack and are automatically freed when function exits and I feel, its easier to keep track of objects that are no longer in use and free them when necessary. I profiled a sample program (that uses this interface) with Valgrind. The sample program: #include "matutil.h" int main(){ Matrix mz; Cell cell; mz = ZEROS(3,3); PRINTM(mz); cell = ALLOCC(1,1); SETMC(cell,0,0,mz); PRINTM(GETMC(cell, 0 , 0)); FREEC(cell); return 0; } The problem is matrix object (Matrix mz) that I start with does not have its array field explicitly initialized and so, Valgrind reports ==17433== Conditional jump or move depends on uninitialised value(s) Is this an issue? Are there any pitfalls that I should be aware of before I proceed with this design? Thank you. Answer: About your valgrind issue I don't think the valgrind warning is about uninitialized fields in Matrix. Your zeros functions does the initialization, right? I tested myself with this code: Matrix zeros(int rows, int cols) { Matrix tmp; int i; tmp.rows = rows; tmp.cols = cols; tmp.array = malloc(sizeof(float) * rows * cols); for(i = 0; i < rows * cols; i++) tmp.array[i] = 0; return tmp; } I have no valgrind warnings. Sharing a snippet of code wich really produces the valgrind warning would help. Other remarks If you want to free the memory automatically and have your allocated memory copied, C++ will help. I don't understand your point about dot syntax though. You're only adding one level of nesting, this does not mean you're going to use "*" or "->" any less. matrix->array[i] is wrong since -> and [] both dereference your pointer: this is not going to compile Why don't you use a PREC** pointer? It is probably easier to use and will avoid errors Are you trying to make your MATLAB code fast? Matrix operations are probably very fast using matlab. If there's another part of your code wich is slow, consider using MEX-files.
{ "domain": "codereview.stackexchange", "id": 1392, "tags": "c, memory-management, matlab" }
Would muscle fatigue still occur if aerobic conditions for a working muscle is maintained?
Question: Put another way if the muscle is given everything it needs to contract and do work will it ever get tired or have a reduction in energy efficiency? As far as I understand muscles depend upon a blood supply delivering oxygen and nutrients (e.g. glucose and calcium) to effectively contract at its best level of performance. With the ability to work under anaerobic conditions if need be but producing lactic acid as a by-product which reduces the muscles ability to contract and therefore producing fatigue. I also know that muscles are dependent upon temperature to work efficiently like the rest of the body. So, if the muscles temperature was able to be regulated well enough to maintain efficiency and aerobic conditions are met could fatigue be negated? Answer: It seems that you are asking about activity significantly above basal metabolic rate. If aerobic conditions are maintained (and with appropriate training), muscles can operate more or less continuously for very long durations, days to weeks. In non-humans: Godwits have been recorded flying over 7000 miles (>11000 km) without stopping for 9 days Arctic terms migrate 44000 miles (>70000 km), albeit with stopovers Humpback whales migrate 5000 miles (8000 km). Their muscles are probably operating close to continuously. Many (most?) species of sharks swim continuously. There are probably other examples as well (feel free to add). For humans, usually the limit to endurance is sleep. Two forms of racing push human endurance, but the longer races almost always require at least minimal sleep: Ultramarathons, foot races of 50 mi (80 km), 100 mi (160 km) or more (e.g., 24 hours, multi-day). Race Across America, Paris-Brest-Paris, and similar endurance bicycle races. The RAAM has a record time of 3107 mi (4971 km) in 8 days, 9 hours for a mean velocity of 15.4 mi/hr (24.6 km/hr).
{ "domain": "biology.stackexchange", "id": 1217, "tags": "physiology, muscles" }
Keras pattern finding between hash and word
Question: My goal is to build a neural net that can find patterns between a hash and a word on it's own. So that it returns the word of any hash that I will input. Unfortunatally my skill in the area of neural net isn´t advanced, and I want to use this project to learn more. So I use a German dictionary and encode it via one_hot encoding. Then I generate the sha256 value of every word inside (before I have done this I cleaned the file and wrote every word in another line) it. So I got an big array with the shape of 20000x20000 for the words and another for the hashs. So then I used the a example of the Keras homepage for binary classification because the one_hot values are represented by ones and zeros. So if I want to predict a hashs I get these error: Error when checking : expected dense_1_input to have shape (20000,) but got array with shape (1,). So I don't know if this model is working for my problem but I couldn't convert one hash into a size of 20000x20000. (The hash will one_hot encoded for that prediction). So how could I get it to accept different shaped hashs/one hash only? Is there a way to train the model with each hash after another for example with a for loop?! EDIT: So I figured out that I can convert a list of characters into a numpy.array with 2 dimensions. So I hot_encoded every character and create a list of them, these list I passed inside the np.array(words,ndim=2). So this I have done for my hashs aswell. Then after I run the code I got this error: ValueError: setting an array element with a sequence So I tried to reshape the array with the .reshape(20000) command but nothing chaged. So what to do with that? EDIT2: I figured out now that the problem is that enhot_encoding generates diffrent sized "arrays" for each word, and if I fill this into a real array and this into a neuronal net it have to return this error. But still the question is: How to convert single words and hashs to a format that I can train a neuronal net with and get usefull output so I can enter any hash and it should return some kind of word(lable). If you need the actual code please inform me and I will upload it`s current state. Code: model = Sequential() model.add(Dense(64, input_shape=20000, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(64, activation='relu')) model.add(Dense(units=64, activation="relu")) model.add(Dropout(0.5)) model.add(Dense(19957, activation='sigmoid')) model.compile(loss='binary_crossentropy',optimizer='rmsprop', metrics=['accuracy']) print("Fitting data...") model.fit(test_hashs,test_words ,epochs=10,batch_size=128, verbose=1) train_y=input("Input a hash that is not contained in the training data: ") #train_x=pd.Series(hashlib.sha256(str.encode(train_y)).hexdigest()) train_y=pd.Series(train_y) #test_x=pd.get_dummies(train_x) test_y=pd.get_dummies(train_y) model.save("first_test") print(model.evaluate(test_y)) #score=model.evaluate(test_x, test_y, batch_size=128,) print("Score: "+score) prediction=model.predict(test_x,verbose=1) for i in prediction: print(i) Answer: I'm not quite sure it's possible. Hash functions are used to map an input to an output in a way that is not reversible. Many companies store a hash of your password on their servers so in case of a security breach they haven't given the adversaries a long list of passwords. As far as it goes for finding the exact hash of a word, it seems infeasible. Edit: Binary classification refers to the possible output being two possible states. A ten dimensional one-hot vector is not binary.
{ "domain": "ai.stackexchange", "id": 434, "tags": "unsupervised-learning, keras, python" }
Why is the partition function called ''partition function''?
Question: The partition function plays a central role in statistical mechanics. But why is it called ''partition function''? Answer: First, recall what a partition is. A partition of a set $X$ is a way to write $X$ as a disjoint union of subsets: $X=\coprod_i X_i$, $X_i\cap X_j=\emptyset$ for $i\neq j$. When the elements of the set $X$ are considered undistinguishable, what matters are the cardinals of the set only, and we have a partition of an integer number, $n=n_1+\ldots+n_k$. For numbers, the name "partition function" denotes the number of ways in which the number $n$ can be written like this. It is different than the "partition function" in statistical mechanics, but both refer to partitions. In statistical mechanics, a partition describes how $n$ particles are distributed among $k$ energy levels. Probably the "partition function" is named so (indeed a bit uninspired), because it is a function associated to the way particles are partitioned among energy levels. An interesting explanation of this can be found in "The Partition Function: If That’s What It Is Why Don’t They Say So!". But I don't know a historical account of this.
{ "domain": "physics.stackexchange", "id": 4632, "tags": "statistical-mechanics, soft-question, conventions, history, partition-function" }
Nested Grids layout with ScrollViewer and TextBoxes with wrappable text
Question: I need to handle a resizable WPF ScrollViewer that scrolls vertically and stretches horizontally and contains stretched wrappable TextBoxes. I have the following XAML WPF layout (the smallest example I have managed to produce): <Window x:Class="ESiftClient.TestWindow1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="TestWindow1" Height="800" Width="1000"> <!--Main grid--> <Grid> <Grid.RowDefinitions> <!--Top fixed size Tools controls--> <RowDefinition Height="115"></RowDefinition> <!-- Excel like scrollable grid --> <RowDefinition Height="200" MinHeight="150"></RowDefinition> <!-- Grid splitter --> <RowDefinition Height="Auto"></RowDefinition> <!-- Text boxes with wrappable text editor, there I have the problem--> <RowDefinition Height="*" MinHeight="150"></RowDefinition> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition Width="*"></ColumnDefinition> </Grid.ColumnDefinitions> <Border Grid.Row="0" Background="Gray"> <Label Content="Some controls within a grid here"></Label> </Border> <DockPanel Grid.Row="1" Grid.Column="0" LastChildFill="True" Width="Auto" Height="Auto" HorizontalAlignment="Stretch" VerticalAlignment="Stretch" Margin="10,0,0,0" > <Grid Width="Auto" Height="Auto" VerticalAlignment="Stretch" HorizontalAlignment="Left"> <Grid.RowDefinitions> <RowDefinition Height="Auto"></RowDefinition> <RowDefinition Height="*"></RowDefinition> </Grid.RowDefinitions> <StackPanel Grid.Row="0" Grid.Column="1" Orientation="Horizontal" VerticalAlignment="Center" HorizontalAlignment="Left" > <Label Content="Title and buttons" /> </StackPanel> <ScrollViewer VerticalScrollBarVisibility="Auto" HorizontalScrollBarVisibility="Auto" HorizontalAlignment="Left" VerticalAlignment="Top" Grid.Row="1"> <Grid> <!-- This grid is excel like grid with many rows and columns--> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto"/> <ColumnDefinition Width="Auto"/> <ColumnDefinition Width="Auto"/> <ColumnDefinition Width="Auto"/> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition Height="Auto"></RowDefinition> <RowDefinition Height="Auto"></RowDefinition> <RowDefinition Height="Auto"></RowDefinition> <RowDefinition Height="Auto"></RowDefinition> <RowDefinition Height="Auto"></RowDefinition> <RowDefinition Height="Auto"></RowDefinition> <RowDefinition Height="Auto"></RowDefinition> <RowDefinition Height="Auto"></RowDefinition> <RowDefinition Height="Auto"></RowDefinition> <RowDefinition Height="Auto"></RowDefinition> </Grid.RowDefinitions> <Label Content="ItemsControl Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label " Grid.Row="0" Grid.ColumnSpan="2" /> <Label Content="Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label " Grid.Row="1" Grid.ColumnSpan="2"/> <Label Content="Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label " Grid.Row="2" Grid.ColumnSpan="2"/> <Label Content="Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label " Grid.Row="3" Grid.ColumnSpan="2"/> <Label Content="Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label " Grid.Row="4" Grid.ColumnSpan="2"/> <Label Content="Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label " Grid.Row="5" Grid.ColumnSpan="2"/> <Label Content="Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label " Grid.Row="6" Grid.ColumnSpan="2"/> <Label Content="Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label " Grid.Row="7" Grid.ColumnSpan="2"/> <Label Content="Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label " Grid.Row="8" Grid.ColumnSpan="2"/> <Label Content="Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label " Grid.Row="9" Grid.ColumnSpan="2"/> <Label Content="Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label " Grid.Row="9" Grid.ColumnSpan="2"/> <Label Content="Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label " Grid.Row="9" Grid.ColumnSpan="2"/> <Label Content="Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label Label " Grid.Row="9" Grid.ColumnSpan="2"/> </Grid> </ScrollViewer> </Grid> </DockPanel> <GridSplitter Grid.Row="2" Grid.Column="0" ResizeDirection="Rows" Height="5" HorizontalAlignment="Stretch" VerticalAlignment="Stretch" Margin="0" KeyboardNavigation.IsTabStop="False" /> <Grid Grid.Row="3"> <Grid.RowDefinitions> <RowDefinition Height="50"></RowDefinition> <RowDefinition Height="*"></RowDefinition> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition Width="*"></ColumnDefinition> </Grid.ColumnDefinitions> <DockPanel Grid.Row="0" LastChildFill="True" Width="Auto" Height="Auto" HorizontalAlignment="Stretch" VerticalAlignment="Stretch"> <StackPanel Orientation="Horizontal" Grid.Row="0" Grid.Column="0"> <Label Content="Title and buttons" /> </StackPanel> </DockPanel> <!--HorizontalScrollBarVisibility="Hidden" must be set, otherwise it shows horizontal scrollbar --> <ScrollViewer VerticalScrollBarVisibility="Auto" HorizontalScrollBarVisibility="Hidden" x:Name="ScrollViewerXML" Grid.Row="1"> <!--!!! Hacking the Width of the Grid using Binding !!! --> <Grid Height="500" VerticalAlignment="Top" Grid.Row="1" x:Name="GridWithTextBoxes" Width="{Binding ActualWidth, ElementName=ScrollViewerXML}" > <Grid.ColumnDefinitions> <ColumnDefinition Width="100"></ColumnDefinition> <!--This column needs to have the size not overflowing parent control size--> <ColumnDefinition Width="*"></ColumnDefinition> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition Height="Auto"></RowDefinition> <RowDefinition Height="Auto"></RowDefinition> </Grid.RowDefinitions> <Border Background="Gray"> <StackPanel Orientation="Vertical" Margin="0,5,5,0" Grid.Row="0" Grid.Column="0"> <Label Content="Title" FontWeight="Bold" HorizontalAlignment="Right"></Label> </StackPanel> </Border> <Border Grid.Row="0" Grid.Column="1" Background="Gray"> <!-- Here comes the problem, without the Binding Hack the text box has auto size Width of 16578 and I need the Width to not to overflow actual Grid Column Width--> <TextBox TextWrapping="Wrap" Width="Auto" MinHeight="60" AcceptsReturn="True" FontSize="14" Margin="0,5,5,5"> Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla ultricies, libero eu pellentesque laoreet, ligula arcu lobortis orci, non molestie turpis nunc vel mi. Nulla dapibus neque eget nulla sodales semper. Donec luctus, quam ut tincidunt condimentum, magna purus luctus risus, at malesuada ligula est in libero. Fusce sit amet lorem in dui pharetra vestibulum a euismod sem. Suspendisse tincidunt elementum sapien vel sollicitudin. Phasellus nec arcu ipsum. Curabitur lacinia hendrerit nisl non accumsan. Nam id nunc odio. Quisque dapibus sed urna mattis tristique. Quisque ornare condimentum erat ut aliquet. Aenean interdum interdum erat, ut sagittis metus commodo ac. Maecenas eu augue eget lectus aliquam finibus quis vel nulla. Fusce dui purus, efficitur quis dapibus eget, pulvinar a massa. Praesent pellentesque nunc sit amet nibh dictum gravida. Suspendisse dignissim turpis nec enim scelerisque aliquet. Proin rhoncus viverra efficitur. Etiam sed faucibus diam, at sagittis ante. Maecenas augue est, sodales vitae sem ac, auctor pellentesque orci. Donec placerat ligula vel nibh pellentesque consequat. Nunc non ipsum lorem. Mauris quis commodo purus, at ultricies nulla. Aenean ut finibus mi. Proin condimentum aliquam ornare. Vestibulum id pretium felis. Fusce at rutrum erat. Vivamus nisi quam, tincidunt a lectus nec, volutpat rutrum libero. Duis ac turpis ac dolor iaculis rhoncus. Morbi finibus sem id mi commodo, in ultricies ante sollicitudin. Phasellus eu tellus non erat pulvinar commodo id ut nisi. Nulla tristique efficitur ipsum ac feugiat. Nunc laoreet massa id nisi sagittis tempus. Phasellus viverra nibh tellus, nec dignissim elit vehicula eget. Etiam lobortis est et nulla volutpat, in varius turpis malesuada. Nullam a tempus ante. Praesent libero dui, laoreet vitae eleifend non, aliquet vitae dolor. Nunc ac est non lacus imperdiet semper. Ut consectetur dolor neque, ut maximus est ultricies quis. Aliquam porta mi eu sodales semper. Nulla tristique feugiat mauris facilisis eleifend. Suspendisse vel magna dignissim, interdum massa nec, posuere urna. Curabitur vehicula commodo ligula, quis imperdiet risus accumsan at. Vestibulum erat enim, gravida a rhoncus quis, tristique et erat. Vivamus pulvinar pharetra scelerisque. Nunc arcu ex, imperdiet at mi a, egestas interdum diam. Mauris ornare ut massa nec dignissim. Donec suscipit quis nisi quis lacinia. Donec risus massa, pretium at orci id, consequat vulputate orci. Nam bibendum orci id libero placerat, sed venenatis turpis vestibulum. Vestibulum ut elit quis lorem feugiat suscipit eu nec neque. Sed in mauris vel sapien sagittis commodo a auctor dolor. Nam eu ultricies turpis. Mauris pellentesque molestie hendrerit. </TextBox> </Border> <Border Grid.Row="1" Grid.Column="0" Background="Gray"> <StackPanel Orientation="Vertical" Margin="0,0,5,0" > <Label Content="Title" FontWeight="Bold" HorizontalAlignment="Right"></Label> </StackPanel> </Border> <Border Grid.Row="1" Grid.Column="1" Background="Gray" HorizontalAlignment="Left"> <TextBox TextWrapping="Wrap" Width="Auto" MinHeight="70" AcceptsReturn="True" FontSize="14" Margin="0,0,5,5"> Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla ultricies, libero eu pellentesque laoreet, ligula arcu lobortis orci, non molestie turpis nunc vel mi. Nulla dapibus neque eget nulla sodales semper. Donec luctus, quam ut tincidunt condimentum, magna purus luctus risus, at malesuada ligula est in libero. Fusce sit amet lorem in dui pharetra vestibulum a euismod sem. Suspendisse tincidunt elementum sapien vel sollicitudin. Phasellus nec arcu ipsum. Curabitur lacinia hendrerit nisl non accumsan. Nam id nunc odio. Quisque dapibus sed urna mattis tristique. Quisque ornare condimentum erat ut aliquet. Aenean interdum interdum erat, ut sagittis metus commodo ac. Maecenas eu augue eget lectus aliquam finibus quis vel nulla. Fusce dui purus, efficitur quis dapibus eget, pulvinar a massa. Praesent pellentesque nunc sit amet nibh dictum gravida. Suspendisse dignissim turpis nec enim scelerisque aliquet. Proin rhoncus viverra efficitur. Etiam sed faucibus diam, at sagittis ante. Maecenas augue est, sodales vitae sem ac, auctor pellentesque orci. Donec placerat ligula vel nibh pellentesque consequat. Nunc non ipsum lorem. Mauris quis commodo purus, at ultricies nulla. Aenean ut finibus mi. Proin condimentum aliquam ornare. Vestibulum id pretium felis. Fusce at rutrum erat. Vivamus nisi quam, tincidunt a lectus nec, volutpat rutrum libero. Duis ac turpis ac dolor iaculis rhoncus. Morbi finibus sem id mi commodo, in ultricies ante sollicitudin. Phasellus eu tellus non erat pulvinar commodo id ut nisi. Nulla tristique efficitur ipsum ac feugiat. Nunc laoreet massa id nisi sagittis tempus. Phasellus viverra nibh tellus, nec dignissim elit vehicula eget. Etiam lobortis est et nulla volutpat, in varius turpis malesuada. Nullam a tempus ante. Praesent libero dui, laoreet vitae eleifend non, aliquet vitae dolor. Nunc ac est non lacus imperdiet semper. Ut consectetur dolor neque, ut maximus est ultricies quis. Aliquam porta mi eu sodales semper. Nulla tristique feugiat mauris facilisis eleifend. Suspendisse vel magna dignissim, interdum massa nec, posuere urna. Curabitur vehicula commodo ligula, quis imperdiet risus accumsan at. Vestibulum erat enim, gravida a rhoncus quis, tristique et erat. Vivamus pulvinar pharetra scelerisque. Nunc arcu ex, imperdiet at mi a, egestas interdum diam. Mauris ornare ut massa nec dignissim. Donec suscipit quis nisi quis lacinia. Donec risus massa, pretium at orci id, consequat vulputate orci. Nam bibendum orci id libero placerat, sed venenatis turpis vestibulum. Vestibulum ut elit quis lorem feugiat suscipit eu nec neque. Sed in mauris vel sapien sagittis commodo a auctor dolor. Nam eu ultricies turpis. Mauris pellentesque molestie hendrerit. </TextBox> </Border> </Grid> </ScrollViewer> </Grid> </Grid> </Window> The problem is that I need to modify the Width of the bottom Grid named GridWithTextBoxes with Binding to the parent element (ScrollViewer) and I believe this is a hack and there must be a better way to do it. When I remove the Binding Width hack, the TextBoxes with wrappable long text do not wrap but extend to the maximum Width without wrapping. Could the same behavior be achieved in some other way without setting HorizontalScrollBarVisibility="Hidden" on the ScrollViewer named ScrollViewerXML and without binding the Width of the Grid named GridWithTextBoxes to the ActualWidth of the ScrollViewer named ScrollViewerXML? Answer: I believe this is the correct way to accomplish your goal. The ScrollViewer is designed so as not to restrict the size of its content to its viewable area, but in this case (at least for the width) this is exactly what you want. So in order to achieve vertical scrolling without horizontal scrolling, you must match the contents' width to that of the ScrollViewer. To do this you have two options: Explicitly set the width of each control. This is not recommended since WPF is designed to be dynamic and reactive to changes in layout. Bind the width of the content to the width of the ScrollViewer. This is perfect since you want the content width to match the container width. The binding will ensure that. You elected the better option. Well done. P.S. I've done this in my code many times. It does feel hackish at first, but it's the right way to do it. P.P.S. There may be a gotcha in here that your ScrollViewer may come with some padding (space on the inside before it shows content) or a border. Both of these can cause your Grid to be a few pixels too wide.
{ "domain": "codereview.stackexchange", "id": 16817, "tags": "wpf, xaml" }
Just how fast is a Fast Radio Burst thought to be?
Question: According to Wikipedia's Fast Radio Bursts; Features are recorded they The component frequencies of each burst are delayed by different amounts of time depending on the wavelength. This delay is described by a value referred to as a dispersion measure (DM). This results in a received signal that sweeps rapidly down in frequency, as longer wavelengths are delayed more. The time between the arrival of the pulse at two different frequencies can be used to generate a kind of measure of distance, based on a dispersion constant. The measure does not have units of length, but of integrated electron density over the path from source to observer. Using some fancy Fourier tricks one could first undo the $1/\nu^2$ delay and then try to reconstruct what the original pulse might have looked like before dispersion. Has this been done? If so, how fast (narrow in time) might the original disturbance be? A millisecond? Less? Answer: The publication describing the original detection of the first known FRB (Lorimer, D. R., Bailes, M., McLaughlin, M. A., Narkevic, D. J. & Crawford, F.: A Bright Millisecond Radio Burst of Extragalactic Origin. arXiv:0709.4301) has a plot of the measurement that makes the effects of dispersion on this particular FRB nicely visible. Take a look at Fig. 2 in the paper. The actual signal is less than 10 ms, while dispersion delays the signal by around 200 ms over the 200 MHz frequency range between 1.3 and 1.5 GHz (note that this relationship is nonlinear). Your idea about algorithmically removing the effects of dispersion on the signal is regularly done in practice, search for "dedispersion". At our (hobbyist) observatory, we are using D. Lorimer's own sigproc package to do this, and it seems to be in widespread use amongst professional observers as well. The basic idea is to simulate a classical filterbank arrangement and shift each filter channel according to the DM. From Lorimer et al. (cited above): Figure 2: Frequency evolution and integrated pulse shape of the radio burst. The survey data, collected on 2001 August 24, are shown here as a two-dimensional ‘waterfall plot’ of intensity as a function of radio frequency versus time. The dispersion is clearly seen as a quadratic sweep across the frequency band, with broadening towards lower frequencies. From a measurement of the pulse delay across the receiver band using standard pulsar timing techniques, we determine the DM to be 375±1 cm−3 pc. The two white lines separated by 15 ms that bound the pulse show the expected behavior for the cold-plasma dispersion law assuming a DM of 375 cm−3 pc. The horizontal line at ∼ 1.34 GHz is an artifact in the data caused by a malfunctioning frequency channel. This plot is for one of the offset beams in which the digitizers were not saturated. By splitting the data into four frequency sub-bands we have measured both the half-power pulse width and flux density spectrum over the observing bandwidth. Accounting for pulse broadening due to known instrumental effects, we determine a frequency scaling relationship for the observed width W = 4.6 ms (f/1.4 GHz)−4.8±0.4 , where f is the observing frequency. A power-law fit to the mean flux densities obtained in each sub-band yields a spectral index of −4 ± 1. Inset: the total-power signal after a dispersive delay correction assuming a DM of 375 cm−3 pc and a reference frequency of 1.5165 GHz. The time axis on the inner figure also spans the range 0–500 ms.
{ "domain": "astronomy.stackexchange", "id": 3723, "tags": "radio-astronomy, fast-radio-bursts" }
Gauge field and Lie group
Question: I'm studying $SU(N)$ gauge theory, but I'm confused. Here(Gauge fields -- why are they traceless hermitian?), the reason why a gauge field is in the Lie algebra of a gauge group $G$ is that we have to cancel out the term which comes from the kinetic term by acting gauge transformation. For simplification I want to use the $F_{\mu \nu}$. I know this transforms as $$ F_{\mu \nu} \to gF_{\mu\nu}g^{-1} $$ and it's called adjoint representation ($g\in G$). However, is it true that $F_{\mu \nu}$ belongs to the Lie algebra, just because it undergoes a change as an adjoint representation? In my understanding, adjoint representation means that an element of a certain set $x\in X$ changes $$ Ad(g)x=gxg^{-1} $$ and then $x$ does not have to be in $G$. For example we can show this act becomes representation: $Ad(g_1)Ad(g_2)=Ad(g_1g_2)$, even if $x\notin G$. Anyway, my question is: Why the adjoint representation has the important role for $A_{\mu}, F_{\mu \nu}\in$ (Lie algebra of $G$)? Answer: The field strength $F_{\mu \nu}$ is a (local representation of) a Lie-algebra valued two-form. In components, it is often written as $F_{\mu \nu}^a$, where the $\mu \nu$ are the space-time indices (making it a two-form) and a is the Lie-algebra index. As such, it is indeed in the Lie-algebra. However, it is not any Lie-algebra valued two form. Instead, it is the (local representation of) the covariant derivative of a special one-form, the connection one-form (whose local representation is usually denoted by $A_{\mu}$). This necessarily transforms in the adjoint representation. An intuitive reasoning for this can be found in this question (1). This is the mathematical viewpoint. The physical viewpoint is that the adjoint rep gives the "correct" transformation, simply because of the way we have defined the field strength in non-abelian gauge theories. Specifically, the field strength is defined as (this definition comes from the differential geometry picture sketched above): $$ F_{\mu \nu} = \partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}-i[A_{\nu},A_{\mu}] $$ And if you perform a gauge transformation of this object (as is done e.g. here (2), you find that it transforms in the adjoint representation. (1)Why quarks in the fundamental and gluons in the adjoint? (2)https://www.worldscientific.com/doi/pdf/10.1142/9789813234192_0001
{ "domain": "physics.stackexchange", "id": 73940, "tags": "gauge-theory, group-theory, representation-theory, lie-algebra" }
Why are wet slippers slippery and grippy at the same time?
Question: Wet slippers are slippery on some surfaces and extremely grippy on others. Why is it so? I believe the answer lies in the way water interacts with the two surfaces. But how can this interaction give rise to such contrasting effects on different surfaces? Answer: There are several factors at play here: 1.The tread pattern of given slippers 2.The surface roughness (i.e, the unevenness of given given surface) 3.The coefficient of friction between surface and the slippers. To put in perspective it is obvious that more the coefficient of friction between surfaces, less the slipperiness. Unevenness of the surface relates as, if we increase it from smooth to rough the slipperiness first decreases and then increases (my guess is with extreme smoothness the treads create some sort of suction and as smoothness decreases this suction reduces and later on more roughness leads to the increase in contact area for normal friction to act). Also tread patterns play a vital role in determining slipperiness as different patterns of different heights provide different conditions of areas of contact and different amounts of liquid displacement in the case that liquid (here water) is present on surface. Liquid on surface reduces the friction due to fact that it slides on the surface, more viscous the surface, more the time to slide resulting in greater chance of slipping. NOTE: I have sourced this from the following reference https://www.kgk-rubberpoint.de/wp-content/uploads/migrated/paid_content/artikel/1507.pdf.
{ "domain": "physics.stackexchange", "id": 62095, "tags": "friction, everyday-life" }
Solubility of Ag3AsO4 in 0.02 M K3AsO4
Question: Calculate the solubility of $\ce{Ag3AsO4}$ in $\pu{0.02M}~\ce{K3AsO4}$ neglecting the activity coefficients. Find the relative error. $K_\mathrm{sp}(\ce{Ag3AsO4}) = \pu{6.0e-23}$ I know how to calculate the relative error but I get a very complicated equation finding the concentration solubility product constant ($K'_\mathrm{sp}$). There should be a quicker way to solve this since it is a midterm question. I tried this: $$\ce{Ag3AsO4 -> 3Ag+ + AsO4^3-}$$ $$K_\mathrm{sp} = 27x^4$$ $$x = \pu{2.78e-6}$$ $$[\ce{Ag+}] = 3x = \pu{8.34e-6M}$$ $$[\ce{AsO4^3-}] = x = \pu{2.78e-6M}$$ $0.02~\mathrm{M}\ \ce{AsO4^3-}$ comes from $\ce{K2AsO4}$. So there should be an equation like: $$K_\mathrm{sp} = (\pu{8.34e-6} - 3x)^3 \times (\pu{2.78e-6} + 0.02 - x) = \pu{6.0e-6}$$ And things get complicated. After finding $x$, I will also have found the final concentrations of silver and $\ce{AsO4^3-}$ ions. Then I will read the activity coefficients of them from the appendix table. I have two questions: Is my method true or false? What is an easier way of solving this problem? Answer: The correct solution: $$K_\mathrm{sp} = a_\mathrm{Ag^+}^3 \cdot a_\mathrm{AsO_{4}^{3-}} = 6.0 \cdot 10^{-23}$$ $$K_\mathrm{sp} = K_\mathrm{sp}^{'} \cdot γ_\mathrm{Ag^+}^3 \cdot γ_\mathrm{AsO_{4}^{3-}}$$ $$K_\mathrm{sp}^{'} = \frac{K_\mathrm{sp}}{γ_\mathrm{Ag^+}^3 \cdot γ_\mathrm{AsO_{4}^{3-}}}$$ Ionic strength: $$μ = \frac{1}{2} \cdot (0,06\cdot1^2 + 0,02\cdot3^2) = 0,12~\mathrm{M}$$ $$0,12~\mathrm{M} ≈ 0,10~\mathrm{M} $$ Activity coefficients at ionic strength $0,12~\mathrm{M}$ are: $0.75$ for $Ag^+$ $x$ for $AsO_{4}^{3-}$ Then: $$K_\mathrm{sp}^{'} = \frac{K_\mathrm{sp}}{0.75^3 \cdot x}$$ And the relative error is: $$\%~error = \frac{|K_\mathrm{sp}-K_\mathrm{sp}^{'}|}{K_\mathrm{sp}} \cdot 100~\% = \frac{|6.0 \cdot 10^{-23}-K_\mathrm{sp}^{'}|}{6.0 \cdot 10^{-23}} \cdot 100~\%$$ I can't find the activity coefficient for arsenate. If someone can find it, please share it with us.
{ "domain": "chemistry.stackexchange", "id": 4709, "tags": "equilibrium, solubility" }
Do Maxwell's equation describe a single photon or an infinite number of photons?
Question: The paper Gloge, Marcuse 1969: Formal Quantum Theory of Light Rays starts with the sentence Maxwell's theory can be considered as the quantum theory of a single photon and geometrical optics as the classical mechanics of this photon. That caught me by surprise, because I always thought, Maxwell's equations should arise from QED in the limit of infinite photons according to the correspondence principle of high quantum numbers as expressed e.g. by Sakurai (1967): The classical limit of the quantum theory of radiation is achieved when the number of photons becomes so large that the occupation number may as well be regarded as a continuous variable. The space-time development of the classical electromagnetic wave approximates the dynamical behavior of trillions of photons. Isn't the view of Sakurai in contradiction to Gloge? Do Maxwell's equation describe a single photon or an infinite number of photons? Or do Maxwell's equations describe a single photon and also an infinite number of photons at the same time? But why do we need QED then at all? Answer: Because photons do not interact, to very good approximation for frequencies lower than $m_e c^2 / h$ ($m_e$ = electron mass), the theory for one photon corresponds pretty well to the theory for an infinite number of them, modulo Bose-Einstein symmetry concerns. This is similar to most of the statistical theory of ideal gases being derivable from looking at the behavior of a single gas particle in kinetic theory. Put another way, the single photon behavior $\leftrightarrow$ Maxwell's equations correspondence only holds if you look at the Fourier transform version of Maxwell's equations. The real space-time version of Maxwell's equations would require looking at a superposition of an infinite number of photons — one way to describe the taking an inverse Fourier transform. If you want to think of it in terms of Feynman diagrams, classical electromagnetism is described by a subset of the tree-level diagrams, while quantum field theory requires both tree level and diagrams that have closed loops in them. It is the fact that the lowest mass particle photons can produce a closed loop by interacting with, the electron, that keeps photons from scattering off of each other. In sum: they're both incorrect for not including frequency cutoff concerns (pair production), and they're both right if you take the high frequency cutoff as a given, depending on how you look at things.
{ "domain": "physics.stackexchange", "id": 35836, "tags": "photons, quantum-electrodynamics, maxwell-equations, semiclassical" }
Why is chess still a benchmark for Artificial Intelligence?
Question: Even though modern chess playing programs have demonstrated themselves to be as strong (or stronger) than even the best human players for nearly 20 years now (1997 when IBM's Deep Blue defeated the world chess champion Gary Kasparov), why would a game like chess still be considered a valuable research subject in Artificial Intelligence? In other words, what can be gained by continuing to advance AI in areas that have already surpassed human capabilities? For instance, as recently as November 2017, Google successfully challenged its deep learning technology against one of the world's strongest chess-playing programs. Answer: Chess isn't really a benchmark per say. The method developed in AlphaGo to play Go should in principle generalize quite nicely to other games of this sort, such as chess. Since Stockfish is quite dominantly the strongest Chess AI, the natural question would be to see how well AlphaGo's method compares to Stockfish. Being one of the most well developed AI agents of all time, the situation concerning the defeat of Stockfish (AlphaZero was trained for only 4 hours entirely via self-play, without access to historical data) signifies the complete dominance of modern neural-network methods over classic methods (hard coded evaluation functions). Also as @DukeZhou♦ mentioned in the comments, while Chess bots can regularly beat human players, it's still a useful metric to evaluate bots against each other via "games" of this sort. edit: But as the more recent results of Stockfish 13 versus Lc0 (an open source AlphaZero clone) show, handcrafted/traditional algorithms (search in particular), paired with neural network techniques, can still outmatch pure neural networks. This perhaps highlights the value of classical techniques in the face of more modern approaches.
{ "domain": "ai.stackexchange", "id": 389, "tags": "chess, intelligence-testing, alphago, benchmarks" }
How to expand lists?
Question: In lists the main noun is often only mentioned at the end. However, for e.g. NER-tasks, I would like to "expand" them: Outward and return flight -> outward flight and return flight Proboscis, Vervet, and golden snub-nosed monkey -> Proboscis monkey, Vervet monkey, and golden snub-nosed monkey education in mathematics or physics -> education in mathematics or education in physics Are there already tools out there (bonus points for support of German language)? Google only led me to expanding contractions ("I've" -> "I have")... Answer: In general this is related to syntactic analysis: one needs to obtain a parse tree of the noun phrase, then it's possible to expand by mapping the head of the phrase with the different parts of the conjunction. I think you can find dependency parsers for German, for instance in the NLTK library or Spacy. I don't know if you would find a library which provides precisely the expansion though, I would expect that there is a bit of programming to do from the parse tree.
{ "domain": "datascience.stackexchange", "id": 10001, "tags": "nlp" }
What is the reason of work done by normal force zero?
Question: Recently I was solving a problem involving a block on triangular wedge kept on horizontal fricitionless surface and the system is released from rest. All things are indicated in figure, Now I am finding final velocities of blocks after block reaches ground using several methods. $N$ represents normal between wedge and block and $N_s$ represents the normal between wedge and surface, $x_1$ represents horizontal displacement of $m$, $x_2$ represents horizontal displacement of $M$, $v_1$ & $v_2$ are final velocities of $m$ & $M$ respectively. Appling work energy theorem : $$\begin{align} N\cos\alpha\cdot x_1+(mg-N\sin\alpha)\cdot h+N\cos\alpha\cdot x_2+(N_s-N\sin\alpha)\cdot 0+Mg\cdot 0= \frac{mv_1^2}{2}+ \frac{Mv_2^2}{2} -0 \end{align}$$ Rearranging this gives, \begin{equation}\tag{1} mgh+N((x_1+x_2)\cos\alpha-h\sin\alpha)=\frac{mv_1^2}{2}+ \frac{Mv_2^2}{2}\end{equation} Conservation of energy of center of mass : $$\tag{2} mgh=\frac{mv_1^2}{2}+ \frac{Mv_2^2}{2}$$ By $(1)$ and $(2)$ $$N((x_1+x_2)\cos\alpha-h\sin\alpha)=0$$ So, this gives me that work done by normal is zero. But this was mathematics not physics. What is exactly workdone by normal force? Since point of contact changes every time as the block goes down, simply there is not a single particle which exerts normal on the other particle of block. And can you tell me why this thing comes to be zero? In the other words, Why is the workdone by normal on wedge is of opposite sign and equal in magnitude to that of workdone by normal on block? Please try to use mathematics as less as possible Answer: Because both bodies move, the point of contact between them travels along a given path for an inertial observer at rest with the system before the movement starts. The work done on the block due to $N$ is the integral of the dot product between $N$ and the block displacement along the contact path. But the work done by the block on the wedge is the same integral because the path is the same, except for the force, that instead of $N$ is $-N$. So, the total work due to the Normal force is zero.
{ "domain": "physics.stackexchange", "id": 96093, "tags": "homework-and-exercises, newtonian-mechanics, energy, reference-frames, energy-conservation" }
Factor before Dirac delta in magnetic dipole field formula
Question: I bumped into this formula for the magnetic induction field generated by a dipole, containing Dirac's delta, while studying hyperfine splitting: $$\textbf{B}(\textbf{r}) = \frac{2}{3}\mu_0 \textbf{m}\delta(\textbf{r}) - \mu_0 \nabla\frac{1}{4\pi} \frac{\textbf{m}\cdot \textbf{r}} {|\textbf{r}|^3}.\tag{1}$$ If I try to compute the curl of the vector potential of a dipole, which should be $$\textbf{A}=-\frac{\mu_{0}}{4\pi}\cdot\textbf{m}×\nabla\frac{1}{r}\tag{2}$$ to obtain the $B$ field, I end up getting the formula quoted here -> Equation for the field of a magnetic dipole. In this formula, the delta hasn't got the $\frac{2}{3}$ factor on its side because it comes from the Laplacian of $\frac{1}{r}$. However every book and article reporting the equation says the 2/3 factor has to be there. Is there a way to reconcile the two formulae or one of the two is wrong? Why? Answer: It is somewhat problematic to rigorously define $\partial_i\partial_j\frac{1}{r}$ in 3D distribution theory, cf. e.g. this Math.SE post. Nevertheless, due to the identity $$ \nabla^2\frac{1}{r}~=~-4\pi\delta^3(\vec{ r}),\tag{A}$$ it makes heuristic/physical sense to assign $$ \partial_i\partial_j\frac{1}{r}~=~-\frac{4\pi}{3}\delta_{ij}\delta^3(\vec{ r}) ~+~ {\rm P.V.}\left(\frac{3x_ix_j}{r^5} -\frac{\delta_{ij} }{r^3}\right), \tag{B}$$ where ${\rm P.V.}$ stands for the Cauchy principal value. Using $$ \vec{A}~\stackrel{(2)}{=}~-\frac{\mu_{0}}{4\pi}\cdot\vec{m}\times \vec{\nabla}\frac{1}{r}, \tag{C} $$ we can calculate $$\begin{align} \vec{B} ~=~&\vec{\nabla}\times\vec{A} \cr ~\stackrel{(C)}{=}~&-\frac{\mu_{0}}{4\pi}\left(\vec{m}\nabla^2\frac{1}{r} ~-~\vec{\nabla}(\vec{m}\cdot \vec{\nabla}\frac{1}{r}) \right) \cr ~\stackrel{(A)}{=}~&\mu_{0} \vec{m}\delta^3(\vec{ r})~+~ \frac{\mu_{0}}{4\pi}\vec{\nabla}(\vec{m}\cdot \vec{\nabla}\frac{1}{r}) \cr ~\stackrel{(B)}{=}~&\frac{2}{3}\mu_{0} \vec{m}\delta^3(\vec{ r}) ~+~ {\rm P.V.}~\frac{\mu_{0}}{4\pi}\frac{3\vec{m}\cdot\vec{r}-\vec{m}r^2}{r^5}\cr ~=~&\frac{2}{3}\mu_{0} \vec{m}\delta^3(\vec{ r}) ~+~ {\rm P.V.}~\frac{\mu_{0}}{4\pi}\vec{\nabla}(\vec{m}\cdot \vec{\nabla}\frac{1}{r}) ,\end{align}\tag{D}$$ which is OP's sought-for eq. (1).
{ "domain": "physics.stackexchange", "id": 80087, "tags": "electromagnetism, magnetic-fields, differentiation, dirac-delta-distributions, magnetostatics" }
Solving Hidoku problem using reduction to Hamilton path
Question: It seems like people have discussed this reduction here: Is Hidoku NP complete? But it looks like the solution that was given only tells you if the Hidoku problem has a solution or not (if an hamilton path/cycle exists), what about extracting the solution itself from this reduction? Can someone see how it is possible? Best regards, Tal Answer: For any NP-complete problem where the question is, "Can you complete this partial solution?" you can extract a solution from the decision problem just by filling in components of the solution one at a time and asking if there's still a solution. Is there a solution with $1$ in the top-left corner? No. With $2$ in the top-left? No. With $3$? Yes. OK, let's fix the top-left to be $3$: we know this can be extended to a solution. Is there a solution with $3$ in the top-left corner and $1$ in the square next to it? And so on. (In fact, you can be a bit more efficient by using the fact that two of the neighbours of the square labeled $i$ must be $i-1$ and $i+1$.) This is a specific case of the concept of "self-reducibility": a problem is self-reducible if you can find the answer given an oracle for smaller instances of the decision problem. (In the case of Hidoku, the instances the oracle is called on are smaller in the sense that they have fewer blank squares.) Some other examples of self-reducible problems are: SAT. Suppose you want to know whether a formula in variables $X_1, \dots, X_n$ is satisfiable. If it's satisfiable, then either there's a satisfying assignment with $X_n=t$, or there's one with $X_n=f$, or both. So replace $X_n$ with $t$ and ask if the resulting formula in $n-1$ variables is satisfiable. If so, recurse; if not, set $X_n=f$ and recurse. $3$-colourability. Suppose you want to know whether a graph $G$ is $3$-colourable. If it is $3$-colourable and it has more than three vertices, then two of the vertices must receive the same colour. So, pick two non-adjacent vertices $x$ and $y$ and merge them (delete $y$ and make every vertex that was adjacent to it be adjacent to $x$). If the resulting graph is $3$-colourable, then $G$ has a $3$-colouring in which $x$ and $y$ have the same colour, so recurse. If they don't, then any $3$-colouring must assign different colours to $x$ and $y$, so we can add an edge between them and try a different pair of non-adjacent vertices. If this process ever gives a graph that contains a $4$-clique –which we can test in polynomial time – then $G$ is not $3$-colourable. Otherwise, we eventually reduce the graph down to a $3$-clique, which is $3$-colourable. We can recover the $3$-colouring by tracking which vertices got merged into the different vertices of the $3$-clique. Independent set. Suppose we wish to know if a graph $G$ has an independent set of size at least $k$. Pick a vertex $v$ and ask if $G-v$ contains a $k$-independent set. If so, $G$ has a $k$-independent set that doesn't include $v$, so recurse on $G-v$ to find out what it does contain. Otherwise, any $k$-independent set must contain $v$, so it can't contain any of $v$'s neighbours. So ask if $G-v-\Gamma(v)$ contains a $(k-1)$-independent set. If it does, recurse. If it doesn't, $G$ has no $k$-independent set.
{ "domain": "cs.stackexchange", "id": 8657, "tags": "reductions, np, hamiltonian-path" }
Password Generator with GUI
Question: I just finished my first GUI app, a Password Generator that can take any characters you want(I've added some default characters to make your life easier), and a password length up to 999, and some small features like copy to clipboard and clear button... I used PyQt5 to build my GUI and some helpful modules like pyperclip, webbrowser... This is what the app looks like on Windows: I used Pyinstaller to convert .py to .exe download the Password Generator.exe here Here's the source code: #!/usr/bin/env python3.5.2 from PyQt5.QtWidgets import* from PyQt5.QtCore import* from PyQt5.QtGui import* import random import pyperclip import webbrowser import sys class Password_Generator(QWidget): def __init__(self): QWidget.__init__(self) layout = QGridLayout() self.menuBar = QMenuBar() self.default_characters = QPushButton() self.characters = QLineEdit() self.passwordlength = QLineEdit() self.pl_option = QComboBox() self.progress = QProgressBar() self.generate = QPushButton("Generate Password") self.result = QLineEdit() self.clipboard = QPushButton("Copy to clipboard") self.clear = QPushButton("Clear") self.fileMenu = QMenu("File", self) self.clearAction = self.fileMenu.addAction("Clear") self.exitAction = self.fileMenu.addAction("Exit") self.menuBar.addMenu(self.fileMenu) self.helpMenu = QMenu("Help", self) self.source_code = self.helpMenu.addAction("Source Code") self.information = self.helpMenu.addAction("About Me") self.menuBar.addMenu(self.helpMenu) self.characters.setPlaceholderText("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890$@^`,|%;.~()/\{}:?[]=-+_#!") self.characters.setFixedWidth(524) self.passwordlength.setPlaceholderText("password length") self.passwordlength.setFixedWidth(85) self.passwordlength.setAlignment(Qt.AlignHCenter) self.passwordlength.setValidator(QIntValidator(0, 999)) self.passwordlength.setMaxLength(3) self.pl_option.setFixedWidth(58) self.pl_option.addItems(["Default", "8", "16", "32", "64", "128"]) self.progress.setValue(0) self.progress.setAlignment(Qt.AlignHCenter) self.generate.setFixedWidth(125) self.default_characters.setFixedWidth(24) self.default_characters.setIcon(QIcon(r'C:\Users\SalahGfx\Desktop\Password Generator Files\file-default-icon-62367.png')) self.result.setReadOnly(True) self.result.setFixedWidth(425) layout.addWidget(self.default_characters, 0, 0) layout.addWidget(self.characters, 0, 1, 1, 2) layout.addWidget(self.passwordlength, 0, 3) layout.addWidget(self.pl_option, 0, 4) layout.addWidget(self.generate, 1, 0, 1, 2) layout.addWidget(self.result, 1, 2) layout.addWidget(self.progress, 1, 3, 1, 2) layout.addWidget(self.clipboard, 3, 0, 1, 3) layout.addWidget(self.clear, 3, 3, 1, 2) layout.setMenuBar(self.menuBar) self.setLayout(layout) self.setFocus() self.setWindowTitle("Password Generator") self.generate.clicked.connect(self.generated) self.clipboard.clicked.connect(self.clipboard_copy) self.clear.clicked.connect(self.cleared) self.default_characters.clicked.connect(self.default) self.pl_option.currentIndexChanged.connect(self.numbers) self.clearAction.triggered.connect(self.cleared) self.exitAction.triggered.connect(self.exit) self.source_code.triggered.connect(self.get_source_code) self.information.triggered.connect(self.info_window) self.new_window = Info_Window() self.new_window.setWindowIcon(QIcon(r'C:\Users\SalahGfx\Desktop\Password Generator Files\Lock_closed_key_2-512.png')) self.new_window.setWindowTitle("About Me") def get_source_code(self): webbrowser.open(r"C:\Users\SalahGfx\Desktop\Password Generator Files\Source Code.txt") def info_window(self): self.new_window.setWindowFlags(Qt.WindowCloseButtonHint) self.new_window.show() self.new_window.setFixedSize(665, 350) def exit(self): sys.exit(app.exec_()) def numbers(self): if self.pl_option.currentText() == 'Default': self.passwordlength.setText(None) else: self.passwordlength.setText(self.pl_option.currentText()) def default(self): characters = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890$@^`,|%;.~()/\{}:?[]=-+_#!" self.characters.setText(characters) def generated(self): try: characters = self.characters.text() password_length = int(self.passwordlength.text()) except Exception: return self.password = "" for i in range(password_length): try: characters_index = random.randrange(len(characters)) except Exception: return self.password = self.password + characters[characters_index] self.progress.setValue(100) self.result.setText(self.password) def clipboard_copy(self): if len(self.result.text()) > 0: pyperclip.copy(self.result.text()) QMessageBox.information(self, "Information", "Password has been copied to clipboard!") else: return def cleared(self): self.characters.setText("") self.passwordlength.setText("") self.progress.setValue(0) self.result.setText("") self.passwordlength.setText(None) self.pl_option.setCurrentIndex(0) class Info_Window(QDialog): def __init__(self): QDialog.__init__(self) info_layout = QGridLayout() self.info = QLabel("Password Generator\n" "Version 1.0\n" "Contact Riverbank at info@riverbankcomputing.com\n" "Copyright © free 2016 Riverbank Computing Limited under GNU General Public License version 3\n" "Contact Me at iteleport2015@gmail.com\n" "Copyright © free 2017 Salah Gfx Open Source Project\n") self.about_me = QLabel("My name is Salah, I'm a college student, I study computer science and mathematics, I'm also a graphic designer\n" "and python programmer, I'm a self taught, I have 3 years of experience in 3D design and 1 year in coding, I wish\n" " to make 3D games in the future, combining those knowledges I believe I can make it, this app is my first GUI app,\n" " I used to run all my codes and scripts in the console but now it feels different how you can share your work with\n" " normal people, it is just amazing and helpful, have a nice day and I wish for you a successful life.") self.Font = QFont() self.Font.setBold(True) self.about_me.setFont(self.Font) self.Hline = QFrame() self.Hline.setFrameShape(self.Hline.HLine) self.Hline.setFrameShadow(self.Hline.Sunken) self.Vline = QFrame() self.Vline.setFrameShape(self.Vline.VLine) self.Vline.setFrameShadow(self.Vline.Sunken) self.image_label = QLabel() pixmap = QPixmap(r"C:\Users\SalahGfx\Desktop\Password Generator Files\SLH-GFX2.png") self.image_label.setPixmap(pixmap) info_layout.addWidget(self.info, 0, 2) info_layout.addWidget(self.Vline, 0, 1) info_layout.addWidget(self.image_label, 0, 0) info_layout.addWidget(self.Hline, 1, 0, 1, 3) info_layout.addWidget(self.about_me, 2, 0, 1, 3) self.setLayout(info_layout) app = QApplication(sys.argv) window = Password_Generator() window.setFixedSize(732, 120) window.setWindowIcon(QIcon(r'C:\Users\SalahGfx\Desktop\Password Generator Files\Lock_closed_key_2-512.png')) window.show() sys.exit(app.exec_()) The files needed here Answer: You shouldn't use paths C:\Users\SalahGfx\Desktop\Password Generator Files\ - other user can put program in different folder and it will not work (even on other Windows). You should use sys.argv[0] to get correct folder on any computer. And then you can use os.path.join(folder, "some_file.png") to create path to images. #!/usr/bin/env python3 should be enought - you don't have to set 3.5.2 - and it will work even if someone has 3.4 or 3.6 but don't have 3.5.2. You don't need else: return Code will work without it except Exception: return It is not good practice to skip exception. It could print message in console (using print()) so you could run it in terminal and see more information about problem. Method exit() should rathern has name run() because app.exec_() starts Qt framework - it runs its engine (its mainloop/event-loop). Besides, you have this method but you don't use it - you run sys.exit(app.exec_()) - but you could run window.exit() You could read PEP8 - https://www.python.org/dev/peps/pep-0008/ It suggests to use UpperCase names for classes - without underscore ie. PasswordGenerator, InfoWindow. Even Qt doesn't use underscore in class names.
{ "domain": "codereview.stackexchange", "id": 25695, "tags": "python, python-3.x, pyqt" }
best and worst case number of key comparisons of an algorithm
Question: Consider the following algorithm: function randomQueries(A, n <- A.size) if n > 1 then p <- n - 1; x <- partition(A, p); randomQueries(A[0,1,...,x-1], x) randomQueries(A[x+1,...,n-1], n-1-x); for y = 1 to x do for z = x + 1 to n - 1 do query(A, y, z); else if n = 1 then query(A,0,0); partition is Hoare's partition-routine (so after the partition, all keys in A[0,...,x-1] are <= A[x] and all keys in A[x+1,...,n-1] are >= A[x]. But I think it only really matters that partition performs n key comparisons. Also, query(A, y, z) uses one key comparison. Deduce with justification, asymptotically tight bounds for the worst and best case number of key comparisons. I think the best case is $\Theta(n^2)$; choosing $x=n/2$ each time (which is possible if you arrange the array so that the last element of each subarray visited by the randomQueries function is always the median of the subarray) shows that the best case number $T^{best}(n) \leq n + n/2(n/2-1) + T^{best}(n/2) + T^{best}(n/2-1) \leq n^2 + 2T^{best}(n/2)$ for n large enough, which simplifies to $O(n^2).$ Also, each call takes $n$ key comparisons and there are $\Omega(n)$ calls because as long as $n > 1,$ there will be a recursive call and each call "removes" one index (x) each time. Similarly, if one always chooses the last element ($x=\frac{n}2$), the worst case runtime should be at least $n + T^{worst}(n-1)$, which evaluates to $\Omega(n^2).$ I'm not sure how to find an upper bound for the worst case. Answer: Denote by $T(n)$ the number of comparisons on a list of length $n$. Then $T(0) = 0$, $T(1) = 1$, and $$ T(n) = n + T(x) + T(n-1-x) + x(n-1-x) $$ for some value of $x$ that could depend on the array. Let us prove by induction that $T(n) = n(n+1)/2$, whatever $x$ is. This holds when $n = 0$ and $n = 1$. For $n > 1$, we have $$ T(n) = n + \frac{x(x+1)}{2} + \frac{(n-1-x)(n-x)}{2} + x(n-1-x) = \frac{n(n+1)}{2}. $$ You can also prove this combinatorially: every two elements get compared, including an element and itself.
{ "domain": "cs.stackexchange", "id": 19100, "tags": "algorithms, time-complexity, algorithm-analysis, asymptotics, big-o-notation" }
How to interpret the non-relativistic limit of $E=mc^2$?
Question: Classical non-relativistic newtonian mechanics can be derived from special relativity by letting the speed of light tend to infinity $c \rightarrow \infty $. But doing this for Einsteins famous formula $$ E = mc^2 $$ would yield $$ E = \infty $$ This would mean, that in classical mechanics the rest energy is infinite? Or how does one interpret this fact? Or should one rather look at $m =\frac{E}{c^2}$ and conclude that in classical mechanics the rest mass of a particle must be zero? Answer: In relativity, the absolute value of the energy of an object is important and measurable, because it contributes to the object's inertia by $E = mc^2$, as you wrote. But outside of relativity, energy and mass have no relation whatsoever. So it doesn't matter what the rest mass contribution to energy is, even if it's formally infinite -- we can just subtract it out, and nothing changes. However, we do need to make sure that changes in energy agree. To do this, we need to use the full relativistic energy formula $$E = \gamma mc^2 = \frac{mc^2}{\sqrt{1-v^2/c^2}}$$ which applies to moving bodies as well as stationary ones. Performing a Taylor expansion in $v/c$, $$E = mc^2 + \frac12 mc^2 \frac{v^2}{c^2} + O(v^4/c^4)$$ which approaches $mc^2 + mv^2/2$ in the nonrelativistic limit. Subtracting out the constant rest energy contribution, we find $E = mv^2/2$, as we should.
{ "domain": "physics.stackexchange", "id": 35204, "tags": "special-relativity, mass-energy" }
Homogeneity of space
Question: How do I verify homogeneity and isotropy of space, for example for hyperbolic space? My idea is to verify that Lorentz transformations in 4-vectors can move any desired point on the hyperboloid into the origin, and can rotate the hyperboloid about the origin by an arbitrary angle. The problem is I do not know how to map these statements into a precise mathematical language. Can you please help me? Answer: The hyperbolic space is defined by the constraint, $$t^2-\vec{x}^2=1 \quad \textbf{(1)},$$ where we think of $t$ as being a time coordinate and $\vec{x}=(x,y,z)$ as a spatial coordinate. This space is homogeneous and isotropic in the spatial coordinates which is what you are trying to prove. Isotropy Intuitively isotropy means that from the origin space looks the same in every direction. Formally this means that the space is invariant under rotations in space, since rotations can map any direction onto any other direction. By definition a rotations is a linear transformation of the spatial coordinates which leaves $\vec{x}^2 = x^2+y^2+z^2$ invariant or unchanged. The following three matrices are rotations about the $x$,$y$, and $z$ axes respectively. Every rotation can be written as a product of these matrices. $$ R_x(\theta) = \left[ \begin{array} \ 1 & 0 & 0 \\ 0 & \cos(\theta) & -\sin(\theta) \\ 0 &\sin(\theta) & \cos(\theta) \\ \end{array} \right] $$ $$ R_y(\theta) = \left[ \begin{array} \ \cos(\theta) & 0 & -\sin(\theta) \\ 0 & 1 & 0 \\ \sin(\theta) & 0& \cos(\theta) \\ \end{array} \right] $$ $$ R_z(\theta) = \left[ \begin{array} \ \cos(\theta) & -\sin(\theta) & 0 \\ \sin(\theta) & \cos(\theta) & 0 \\ 0 & 0 & 1 \end{array} \right] $$ Where it is understood that these matrices are acting on the vector $\vec{x}=\left[x,y,z\right]^T$. In order to extend the definition of these operators to act on the three dimensional subspace of our space of four vectors $x^\mu=[t,x,y,z]^T$ we use the augmented matrix below which leave the time component alone, $$ \left[ \begin{array} \ 1 & \vec{0}^T \\ \vec{0} & R(\theta) \\ \end{array}\right] $$ At this point it is elementary to see that this transformation leaves the quadratic form $t^2-\vec{x}^2$ invariant since the transformation doesn't change the value of $t$ or $\vec{x}^2$. Homogeneity We now wish to establish that the hyperbolic space is homogeneous. Intuitively this means that space is the same at every location. If we can show that space is isotropic when viewed from any location we will have established homogeneity. Mathematically this means that we need to show that every spatial coordinate can be mapped to the origin. So we need to find a transformation which leaves $t^2-\vec{x}^2$ invariant and maps $\vec{x}\rightarrow \vec{0}$. To start suppose we have $\vec{x} = \vec{x}_0=(x_0,y_0,z_0)$. We start by rotating the vector so that it is parallel to the $x$ axis. The following transformation will accomplish this, $$\frac{1}{\sqrt{x_0^2+y_0^2+z_0^2}\sqrt{y_0^2+z_0^2}} \left[\begin{array} \ x_0 & \sqrt{y_0^2+z_0^2} & 0 \\ -\sqrt{y_0^2+z_0^2} & x_0 & 0 \\ 0 & 0 & 1 \end{array}\right] \left[\begin{array} \ 1 & 0 & 0 \\ 0 & y_0 & z_0 \\ 0 & -z_0 & y_0 \end{array}\right] \left[\begin{array} \ x_0 \\ y_0 \\ z_0 \\ \end{array}\right] = \left[\begin{array} \ \sqrt{x_0^2+y_0^2+z_0^2} \\ 0 \\ 0 \\ \end{array}\right] $$ Now we can map the spatial coordinate to zero by boosting. This will have the consequence of changing the $t$ coordinate of the four vector. Since the $y$ and $z$ coordinates are both $0$ due to the rotation we will only explicitly write the transformation in the two dimensions with nonzero entries $t$ and $x$. $$ \left[ \begin{array} \ \cosh(\psi) & \sinh(\psi) \\ \sin(\psi) & \cosh(\psi) \end{array} \right] \left[ \begin{array} \ t_0 \\ \vert \vec{x}_0 \vert \end{array} \right] = \left[ \begin{array} \ \cosh(\psi)t_0+\sinh(\psi)\vert \vec{x}_0 \vert \\ \sinh(\psi) t_0 + \cosh(\psi)\vert \vec{x}_0 \vert \end{array} \right] $$ We see that we can make the last spatial coordinate zero if $$\sinh(\psi) = -\frac{\vert \vec{x}_0 \vert}{\sqrt{t_0^2 - \vec{x}_0^2 }} \qquad \cosh(\psi) = \frac{t_0}{\sqrt{t_0^2-\vec{x}_0^2 }},$$ which is certainly possible for real values of $\psi$ so long as $\vec{x}_0^2 < t_0^2$ which is guaranteed by the condition $t^2=1+x^2$. So we have shown that every point in space can be mapped to the origin.
{ "domain": "physics.stackexchange", "id": 14345, "tags": "cosmology, relativity" }
How do I find the derivation for the formula for total clearance?
Question: In Rang & Dale (10 ed) on page 152, the formula for total clearance is given by: $$Cl_{tot}=\frac{Q}{AUC_{0-\infty}}$$ $CL_{tot}=$total clearance, Q=initial does given, $AUC_{0-\infty}$ Area under the concentration/time curve from time 0 to infinity. No explanation for this formula is given, at least not as far as I can see, it's just stated as some kind of obvious fact. How do I arrive at this formula (i.e, where is the derivation?), is it given by some kind of elementary rule? I did consider posting in math.stackexchange.com but they didn't have tags for pharmacology. Answer: Previously on that same page it says that: $$Cl_{tot}=\frac{X}{C_{SS}}$$ Where $Cl_{tot}$ is volume of plasma cleaned per unit of time, X is the amount of drug supplied per unit of time and $C_{SS}$ is the plasma contentration at the steady state. This formula is quite similar to the one in my question, although that concerns a single bolus dose, not a continuous supply of the drug. The first thing I needed to do was to realize that this too describes a unit of time. More precisely, it describes the time between o and infinity. If $Cl_{tot}$ is interpreted as the volume of plasma cleared of that drug between 0 and infinity, the formulae are almost equivalent. The only remaining stumbling block is whether or not $AUC_{0-\infty}$ can be said to be equal to $C_{SS}$. In a philosophical sense you can, since that "is" the total "sum of concentrations" which the body has gotten exposed to between 0 and infinity. We can make this even more similar to the formula for continuous drug administration by assuming that "sum of concentrations" could probably be spread out evenly over the entire timeline, giving us a mean steady state concentration (which would then hopefully yield the same result). There might be some incorrect formalities somewhere but this is what made sense to me intuitively.
{ "domain": "biology.stackexchange", "id": 12469, "tags": "pharmacokinetics" }
What recovers normal polarisation after hyperpolarisation?
Question: I have been taught that a $\ce{Na+/~K+}$ pump helps to recover normal polarisation after-hyperpolarisation in neurones. I could not find out how it does that, since I've also been taught that such a pump moves $\ce{3Na+}$ out of the cell and $\ce{2K+}$ into the cell. That implies that the potential can only get more negative. How does this work? My hypothesis was that the potential 'leaks' out and recovers the 70 mV that way. I couldn't verify that, however. Answer: Remember that the action potential gets more positive in the first place, so increasing positivity is achievable. Net Na+ movement into the cell makes the potential more positive. This occurs as the Na+ gate (right on the image below) are open. The key message is that the membrane can move charge to cause an increase in either positive and negative potential. Imagine one side becomes more negative. The Na+ diffusion will correct this to reach an equilibrium around 3 to 4 milliseconds (ms) after the stimulus to reach the resting potential (as alluded to my @MCM in the comments). There are some great animations demonstrating this principle on the University of Bristol's website. They clearly show how positive charge can be conferred over the membrane. A (now deleted) comment linked to the below image that shows the potential and when the gates are opened and closed. I would still go to the bristol website too to see it in action, but this figure is a quick go to once you get your head around the timing of ion flux. As with most biology, it is a little more complicated than that overly simple model. There are entire books on how neurones control their charges. Note: The key terminology is somewhat muddled and there doesn't seem to be a standard term. I have seen this "recovery" phase called undershoot, overshoot (and overshoot rather unhelpfully also refers to another part of the action potential), the refractory period, and discussed as part of hyperpolarization. The go to term is afterpolarisation if you want to research this more thoroughly.
{ "domain": "biology.stackexchange", "id": 3932, "tags": "neuroscience, cell-membrane, action-potential" }
How can brass still be made even though the crystal structures of zinc and copper are not the same?
Question: Based on the Hume-Rothery rules, to dissolve an element into a metal, the crystal structures for both must be the same. But the structure for zinc is HCP, whereas copper has FCC structure. It does not follow the rule. But why can brass still be made? Answer: The $\ce{Cu-Zn}$ phase diagram is, to put it mildly, complicated. (source) See that wide area denoted $\alpha\rm (Cu)$? Now, when we add a little amount of one metal into the crystal lattice of another metal, typically the minority metal just "agrees" to coexist in that structure, even though different from the structure that it would form on its own. And it just so happens that "a little" can sometimes be not that little. Now, there are different kinds of brass with different composition, and not all of them fit into the said area. What happens to those which don't? Well, then we have a whole bunch of other Greek letters, and each of them means a particular phase which has a structure of its own, different from both metals. Sometimes cooling will produce grains of different phases, which is of course very different from having just one phase. This explains widely different mechanical and corrosion properties of different brasses. As for Hume-Rothery, it is good for simple cases. This one, like I said, is not simple. Neither is $\ce{Cu-Sn}$ (bronze), or $\ce{Fe-C}$ (steel), or pretty much any other binary phase diagram of any importance, for that matter. Actually, simple cases are hard to come by. If you still want to see one, look for $\ce{Pb-Sn}$.
{ "domain": "chemistry.stackexchange", "id": 16677, "tags": "solid-state-chemistry" }
Should I use my redundant feature as an auxiliary output or as another input feature?
Question: For example, given a face image, and you want to predict the gender. You also have age information for each person, should you feed the age information as input or should you use it as auxiliary output that the network needs to predict? How do I know analytically (instead of experimentally) which approach will be better? What is the logic behind this? Answer: For extra input that does not matter, you should not input it to the network. Feature selection, the process of finding and selecting the most useful features in a dataset, is a crucial step of the machine learning pipeline. Unnecessary features decrease training speed, decrease model interpretability, and, most importantly, decrease generalization performance on the test set. Source: A Feature Selection Tool for Machine Learning in Python As the source says, unnecessary features decreases accuracy and training speed. Moreover, they have no mapping to the labels, so they won't end up being used. They are unnecessary and adding them will only cause you trouble. Hope this helps you and have a nice day!
{ "domain": "ai.stackexchange", "id": 1484, "tags": "neural-networks, machine-learning, deep-learning, convolutional-neural-networks, feature-selection" }
Looping through asset types from Dictionary
Question: I have some code that needs to branch depending on if a value from a database is absent. Essentially our database contains some key value pairs from a dictionary. If the key is absent from a set, we should append zero as the value for that key. Otherwise, we should append the value from the database. values = [] known_asset_types = {'A', 'B', 'C'} total_external_assets = assets_service.get_total_external_assets().items() visible_asset_types = set() # total_external_assets is a dictionary for db_asset_type, asset_count in total_external_assets: if db_asset_type in known_asset_types: visible_asset_types.add(asset_type) values.append({"name": db_asset_type, "value": asset_count}) for asset in known_asset_types.difference(visible_asset_types): values.append({"name": asset, "value": 0}) Is there a simpler or more pythonic way to accomplish what I'm doing? I got a comment saying I can loop through the known_asset_types and check if the key is in total_external_assets and if True use that value else zero, but having a nested loop feels...worse to me. Answer: First of all, your code doesn't work. When you loop over total_external_assets you don't get a key-value pair tuple but instead just the keys. You need to access .items() to get both. I also don't follow what you mean by a nested loop. The suggestion you got from a colleague was to loop over known_asset_types instead of looping over total_external_assets. This seems reasonable if you only care about the known_asset_types, which it appears that you do; you'll have to process way fewer items, and to then get the value of a key in known_asset_types from total_external_assets is then only a hashmap/dictionary lookup, which is O(1) - i.e. very fast. This if db_asset_type in known_asset_types: visible_asset_types.add(asset_type) values.append({"name": db_asset_type, "value": asset_count}) is also a bit unnecessary; you're first looking for the item, and if it exists you retrieve it. That's two lookups when you only needed a single one. You can do try: asset = known_asset_types[db_asset_type] except KeyError: # doesn't exist pass else: # does exist visible_asset_types.add(db_asset_type) values.append({"name": db_asset_type, "value": asset}) Finally, you should know that you can replace try: value = dict[key] except KeyError: value = default with value = dict.get(key, default)
{ "domain": "codereview.stackexchange", "id": 44167, "tags": "python, python-3.x" }
Uncertainty in position measurement in two ensembles. First with same $\psi$ but different $N$, and second with same $\psi,N$ but different apparatus
Question: Imagine an ensemble of $N$ identical and identically prepared quantum systems, all of which are in the state $\psi(x,t)$ at time $t$. Given the state (which could be a Gaussian in position) the postulates of quantum mechanics tell us, for example, what will be the result of position measurements on this ensemble at time $t$ i.e. which position eigenvalue will be obtained with what probability. It allows us to theoretically calculate $\Delta x$ from $\psi(x,t)$, ONLY. Given $\psi(x,t)$, the calculation yields a definite value for $\Delta x$ (say, $\Delta x=0.05$mm). This value, solely obtained from $\psi(x,t)$, seems to be blind to how the process of measurement is (or will be) carried out. For a given ensemble with fixed $N$ and given $\psi(x,t)$, it is not true that $\Delta x$ will depend on how precise an apparatus is used to make the measurements? However, I don't think there is any serious problem here. If, for example, $x\in[-5,+5]$ in some units, and if the measuring apparatus has a least count $1$ in the same units, the only allowed values that can arise in the measurement are $[-5,-4,-3,...+3,+4,+5]$ (something like $1.3$ or $3.7$ is not measurable). Therefore, the theoretical value of $\Delta x$ should also be calculated by discretizing the integrals over $x$. On the other hand, if the least count of the apparatus were $0.5$ in the same unit, allowed $x$ values will be more in number than the previous case. Thus the theoretical $\Delta x$ should be re-calculated accordingly. So it seems that theoretical $\Delta x$ also has a direct bearing on how the measurement is carried out. However, experimentally, is it also not true that $\Delta x$ will be different for an ensemble with $N=1000$ and another with $N=10000$ both ensembles being specified by the same state $\psi(x,t)$)? How do we resolve this? Answer: Quantum mechanical uncertainty - that which we denote by $\Delta x$ - has nothing to do with measurement, see for example this question and its linked questions. The $\Delta x$ we compute in quantum mechanics is the standard deviation of $x$ assuming a perfect measurement apparatus. It is an abstract statistical quantity derived from the probability distribution for the position variable that is encoded in the quantum state ("wavefunction") and has no direct relation with any actual measurements being performed. Think about flipping a fair coin, i.e. a coin which you believe has 50% probability to show heads and 50% to show tails. If we assign heads the value -1 and tails the value 1, then the expected value is 0, with a standard deviation of 1. The expected value of $n$ coin tosses is still 0, with a standard deviation of $\sqrt{n}$. If you actually go and flip $n$ coins, you can try to estimate the standard deviation of the underlying distribution with one of the common expressions for standard deviations of samples. This might come out to be close to $\sqrt{n}$, it might not - the only thing that is guaranteed is that the estimation converges to the theoretical value as $n\to\infty$. Note that in this case, the measurement apparatus is perfect - we can tell whether or not a coin is heads or tails without any room for error. That is, the "$\Delta x$" you compute from a sample is not actually the same quantity as the $\Delta x$ we compute from the theory for a quantum state - the former is merely an estimation of the latter, even if we have a perfect measurement apparatus.
{ "domain": "physics.stackexchange", "id": 77185, "tags": "quantum-mechanics, measurements, quantum-measurements" }
Stability of circular orbit in attractive inverse cube central force field
Question: Considering a motion of a body under an attractive inverse cube central force, $\textbf{F}(\textbf{r}) = -\frac{k}{r^3} \hspace{1mm}\hat{\textbf{r}}$ with $k>0$. Is it possible for a body to move in an stable circular orbit? Since the derivation of the effective potential $U_{eff}(r) = \frac{l^2}{2mr^2}+U(r)$ (where $l$ is the angular momentum) has to be $0$ for a circular orbit, the only solution would be that $k = \frac{l^2}{m}$. But that would lead to an effective potential $U_{eff}(r) = 0$ for any $r$ (except $r = 0$). Is this a valid solution? Answer: The possible trajectories of a particle subject to an inverse-cube force $F = - k/r^3$ can actually be derived exactly; they are known as Cotes's spirals. Depending on the relative values of the particle's angular momentum $\ell$, its mass $m$, and the constant of proportionality $k$ in the force law, they take on the form $$ r(\theta) = \begin{cases} (A \cos C \theta + B \sin C \theta)^{-1} & km < \ell^2 \\ (A \cosh C \theta + B \sinh C \theta)^{-1} & km > \ell^2 \\ (A + B \theta)^{-1} & km = \ell^2 \end{cases} $$ where $$ C = \sqrt{\left| \frac{ m k}{\ell^2} - 1 \right|} $$ and $A$ and $B$ are determined by the initial conditions of the trajectory. It is not hard to see that almost all of these functions will either have $r \to \infty$ for some finite value of $\theta$, or $r \to 0$ as $\theta \to\infty$, or both. The only case in which $r$ is bounded and does not reach $r = 0$ is when $km = \ell^2$ and $B = 0$, which corresponds to circular motion. Most perturbations from this trajectory will either involve changing $\ell$ (if the particle is given an extra tangential "push"), or changing $B$ (if the particle is given a purely radial push, since now we must have $dr/d\theta = 0$.) Thus, most perturbations will lead to the particle either spiraling in to $r = 0$ or flying out to $r \to \infty$. Thinking about this in terms of the effective potential: to have a circular orbit in an inverse-cube field, you must have $\ell = \sqrt{km}$. A perturbation will either change $\ell$ or leave $\ell$ the same. If $\ell$ is changed from its initial value, then the effective potential becomes $U_\text{eff}(r) = Q/r^2$ for some value of $Q$, which has no maxima or minima; the radial motion must either go to $0$ or $\infty$. If $\ell$ is unchanged, then we must have $dr/dt = 0$, and the effective 1-D problem is that of a particle moving with some initial velocity in a potential $U_\text{eff} = 0$. Again, this radial motion must either go to $0$ or $\infty$.
{ "domain": "physics.stackexchange", "id": 73650, "tags": "homework-and-exercises, forces, classical-mechanics, orbital-motion, stability" }
pass parameters of xacro file in another xacro file
Question: I have this cameras.urdf.xacro file: <?xml version="1.0"?> <robot name="origins"> <link name="$(arg tf_prefix_camera1)_pose_frame"/> <link name="$(arg tf_prefix_camera2)_link"/> <joint name="$(arg tf_prefix_camera1)_to_$(arg tf_prefix_camera2)" type="fixed"> <parent link="$(arg tf_prefix_camera1)_pose_frame"/> <child link="$(arg tf_prefix_camera2)_link"/> <origin xyz="0.009 0.021 0.027" rpy="0.000 -0.018 0.005"/> </joint> </robot> So, this file has parameters tf_prefix_camera1 and tf_prefix_camera2. And i want to include this file in another xacro file and set this parameters. how can i do it? <xacro:include filename="$(find realsense2_camera)/urdf/mount_t265_d435.urdf.xacro tf_prefix_camera1:=t265 tf_prefix_camera2:=d435" /> like this? Originally posted by june2473 on ROS Answers with karma: 83 on 2019-08-23 Post score: 0 Answer: You are not that far from the correct solution you just need some adjustments : Your cameras.urdf.xacro should actually use some macros to receive the parameters : <?xml version="1.0"?> <robot name="camera" xmlns:xacro="http://ros.org/wiki/xacro"> <xacro:macro name="camera" params="tf_prefix_camera1 tf_prefix_camera2"> <link name="${tf_prefix_camera1}_pose_frame"/> <link name="${tf_prefix_camera2}_link"/> <joint name="${tf_prefix_camera1)_to_${tf_prefix_camera2)" type="fixed"> <parent link="${tf_prefix_camera1}_pose_frame"/> <child link="${tf_prefix_camera2}_link"/> <origin xyz="0.009 0.021 0.027" rpy="0.000 -0.018 0.005"/> </joint> </xacro:macro> </robot> The important parts here are xmlns:xacro="http://ros.org/wiki/xacro" to properly use the macro and to define the macro camera like a function, i.e. specifying its arguments (here its params). Note also that it's not like in the launch files, $(arg ARG) you just need brackets around the name of your param. In your other file you just need : <?xml version="1.0"?> <robot name="camera" xmlns:xacro="http://ros.org/wiki/xacro"> <xacro:include filename="$(find YOUR_PKG)/urdf/cameras.urdf.xacro"/> <camera tf_prefix_camera1="t265" tf_prefix_camera2="d435"/> </robot> In your example you didn't include cameras.urdf.xacro but another urdf file is this a copy/paste mistake ? To call the macro you have previously defined, you just need a tag with the name of your macro and simply assign a value to your parameters. That will simply copy and paste everything from the camera.urdf.xacro file between the <xacro> tag (so everything except the 2 first lines and the last one) and replace the paramaters with their value. Originally posted by Delb with karma: 3907 on 2019-08-23 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 33675, "tags": "ros-kinetic, xacro" }
The meaning of γt−t0 in Reinforcement learning with pytorch
Question: When reading pytorch tutorial: Our aim will be to train a policy that tries to maximize the discounted, cumulative reward Rt0=∑∞t=t0γt−t0rt, where Rt0 is also known as the return I know γ is the discount factor, but I am not sure that what t-t0 ofγt−t0 mean? Thank you. Answer: I have no experience with reinformcement learning, however looking at the figure I think I understand what is meant. Gamma is the discount factor, which is taken to the power t-t0, i.e. the number of episodes starting from t. This gives the discount factor for a specific episode, which is then multiplied by the return of that episode, r_t, to get the discounted reward for one specific episode. The total return is then computed by summing all the future rewards for future episodes.
{ "domain": "datascience.stackexchange", "id": 7195, "tags": "reinforcement-learning, pytorch" }
Norm preserving Unitary operators in Rigged Hilbert space
Question: If we take the free particle Hamiltonian, the eigenvectors (or eigenfunctions), say in position representation, are like $e^{ikx}$. Now these eigenfunctions are non-normalisable,so they don't belong the normal $L^2(\mathbb R^d)$ but the rigged Hilbert space. My question hereto is that any unitary operator defined as a map in Hilbert space preserves the norm. But in the case of free particle, although the operator $e^{iHt}$ is unitary (since $H$ is hermitian), there is (atleast) no (direct) condition of norm preserval, as the norm cannot be defined for these eigenfunctions. Now, how can one connect the unitarity of $e^{iHt}$ and norm preserval in this context ? PS : I know one can use box-normalised wavefunctions and do away with the calculations and then take $L \rightarrow \infty$ limit. But I am rather interested in the actual question of unitarity and norm in rigged Hilbert spaces. Answer: The so-called rigged spaces are made with a triple $(S,\mathscr{H},S')$; where $\mathscr{H}$ is the usual Hilbert space, $S$ is a dense vector subspace of $\mathscr{H}$, and $S'$ the dual of $S$. Usually when $\mathscr{H}=L^2(\mathbb{R}^d$, then $S$ is taken to be the rapidly decrasing functions, and $S'$ the tempered distributions. If this is the case, then there is no notion of norm for the "extended" eigenvectors in $S'$, since the latter is not a Banach (or metrizable) space. So even if $e^{-itH}e^{ikx}=e^{-itk^2/2m}e^{ikx}$, and therefore the evolution indeed acts as a phase, there is no norm to be preserved. The point is that rigged Hilbert spaces are, as far as I know, simply a mathematical convenience to justify the emergence of "generalized eigenvectors" for some (very special) self-adjoint operators that have purely continuous spectrum. If you want to do (meaningful) quantum mechanics, you have to consider states of the Hilbert space, where the evolution is indeed unitary and everything works.
{ "domain": "physics.stackexchange", "id": 31635, "tags": "quantum-mechanics, hilbert-space, unitarity" }
What’s the object between the Earth and The Sun currently showing in Google maps?
Question: If I select satellite imagery in Google maps and zoom out, the Earth is shown in space. It shows light/dark regions correctly updated, but there’s an object between the Earth and the sun. It looks too small to be The Moon. Is it real? What is it? Answer: That's the Moon alright, and it's definitely real and definitely there. If you go outside and look at the Sun right now, the Moon will be almost but not quite on top of it, though it's impossible to make out due to the Sun's glare. If it were any closer we would have had a solar eclipse around the time of the New Moon that took place 20 minutes ago. Two weeks from now at Full Moon, it'll be in even better alignment, though on the opposite side of the Earth from the Sun this time, which will result in a total lunar eclipse. Mark your calendar!
{ "domain": "astronomy.stackexchange", "id": 315, "tags": "near-earth-object" }
Relation between Dirac spinor and its adjoint
Question: I'm trying unsuccessfully to solve the following problem in Thomson's Modern Particle Physics: "Starting from $(\gamma^{\mu} p_{\mu} - m) u =0, $ show that the corresponding equation for the adjoint spinor is $\bar{u} ( \gamma^{\mu} p_{\mu} - m) = 0.$ Hence, without using the explicit form for the $u$ spinors, show that the normalisation condition $u^{\dagger} u = 2E$ leads to $\bar{u} u = 2m $ and that $\bar{u} \gamma^{\mu} u = 2 p^{\mu}.$" Here, $u$ is a free-particle solution to the Dirac equation (in the basis of momentum, so here the $p^{\mu}$ are c-numbers) and $\bar{u} = u^{\dagger} \gamma^0$ is as usual its adjoint spinor. The equation for the adjoint spinor is very easy to derive by just taking Hermitian conjugates of both sides of the Dirac equation, but for the life of me I can't derive a priori the latter two equations. I've tried all manner of substitutions and tricks to no avail. Could someone guide me in the right direction? Answer: Figured it out. If I write \begin{align}\bar{u} \gamma^{\mu} u &= \frac{1}{m} \bar{u} \gamma^{\mu} \gamma^{\nu} p_{\nu} u = \frac{1}{m} \bar{u} \left( \{\gamma^{\mu},\gamma^{\nu}\}-\gamma^{\nu} \gamma^{\mu} \right) p_{\nu} u \\ &= \frac{1}{m} \bar{u} \left( 2 g^{\mu \nu} - \gamma^{\nu} \gamma^{\mu} \right) p_{\nu} u = \frac{2}{m} \bar{u} u p^{\mu} - \frac{1}{m} \bar{u} \gamma^{\nu} p_\nu \gamma^{\mu} u \\ &= \frac{2}{m} \bar{u} u p^{\mu} - \bar{u} \gamma^{\mu} u \end{align} I can rearrange to get the relationship \begin{align} \bar{u} \gamma^{\mu} u = \frac{1}{m} \bar{u} u p^{\mu} \end{align} In particular, \begin{align} \frac{1}{m} \bar{u} u p^0 = \frac{E}{m} \bar{u} u = \bar{u} \gamma^{0} u = u^{\dagger} u = 2E \end{align} Then I can solve for $\bar{u} u$ and $\bar{u} \gamma^{\mu} u$.
{ "domain": "physics.stackexchange", "id": 12172, "tags": "homework-and-exercises, quantum-field-theory, dirac-equation" }
The complexity of 3SAT
Question: It is well known that 3SAT remains NP-complete if every variable occurs exactly twice positively, exactly once negated. then, does 3SAT remain NP-complete if every variable occurs exactly once positively, exactly once negated? Answer: Satisfiability of CNFs where each variable occurs at most twice is easily seen to be computable in P. Repeat in any order the following steps, each of which decreases the number of variables, and preserves satisfiability: Remove clauses containing both a literal and its negation. If some variable occurs only positively or only negatively, remove the corresponding clauses (i.e., set the occurring literal to true). Pick any variable that occurs both positively and negatively, and resolve the two clauses where it occurs (i.e., remove $C\cup\{x\}$ and $D\cup\{\neg x\}$, and replace them with $C\cup D$). If neither step is no longer applicable, no variable can occur in the CNF any longer, which means that either the CNF is empty (whence true, i.e., satisfiable), or it consists of the empty clause (whence it is false, i.e., unsatisfiable).
{ "domain": "cstheory.stackexchange", "id": 5436, "tags": "np-hardness, sat" }
Is the order of operations in jacking a cable relevant?
Question: If you have a prestressed concrete beam or slab and are jacking a cable from both ends, does the order of operations alter the result? For a symmetric curve, the tension diagram along the cable's span after friction losses is always shown as symmetric. Is this the case regardless of whether the ends were jacked simultaneously or one after the other? If the ends are jacked simultaneously then the symmetric friction loss makes perfect sense. However, if the cable is jacked first from the left and then from the right, I'd expect the tension profile to be asymmetric. When jacking from the left, the cable is pulled and on its left end the cable force is equal to $P_0$, the jacking force. On the right side, the cable force is $P_0 - \Delta P$, where $\Delta P = P_0\left(1-e^{-(\mu\alpha+kL)}\right)$ is the friction losses. The left side is then anchored (disregard the anchorage slip losses for a moment). The right side is then jacked. Effectively, this jacking adds a force of $\Delta P$ to the right end, so that it too is now at $P_0$. However, wouldn't the left end also be affected with an increment equal to $\Delta Pe^{-(\mu\alpha+kL)}$, such that the diagram is asymmetric? What about if one considers the fact that when the left side is anchored, anchorage slip losses occur? So the diagram should go: friction losses from jacking the left side, anchorage slip losses from anchoring the left side, friction losses from jacking the right side and then anchorage slip losses from anchoring the right side. EDIT After a few honest-to-God minutes freaking out thinking I was the biggest idiot on Earth I realized that what @sanchises commented doesn't rule out my question. The cable in my question does remain in static equilibrium, as can be seen here: When jacking from the left, the cable suffers at its left extremity a force of $P$. Friction losses along the way ($\Delta P$) cause the right extremity to end up with $P - \Delta P$. The right extremity is then jacked, which effectively applies a force of $\Delta P$. Friction losses also apply here, now with a value of $\Delta\Delta P$, such that the left anchor is additionally jacked with $\Delta P - \Delta\Delta P$. This results in the left anchor with a force of $P + \Delta P - \Delta\Delta P$ and the right anchor with only $P$. These are not, however, the only forces acting on the cable: the friction losses are what keeps it in static equilibrium. Globally, we have to the left $$P + \Delta P - \Delta\Delta P + \Delta\Delta P = P + \Delta P$$ and to the right $$\Delta P + P - \Delta P + \Delta P = P + \Delta P$$ both of which equal the total jacking force (first $P$ on the left and then $\Delta P$ on the right). Answer: After the discussion with @sanchises in the comments, the answer is that, yes, the order of operations in jacking does affect the final tension diagram of the cable. A symmetric cable simultaneously jacked will have a symmetric tension diagram, while the same cable jacked from one side and then the other will not. The diagram actually ends up as a line almost parallel to the tension diagram after the jacking of the left side: The cable in this example is jacked from the left side (grey line) and loses around 15% of its tension at the right anchor ($\Delta P$). It is then jacked from the right side by that same amount, adding to the tension profile according to the yellow line, which results in the blue line, with the left anchor being incremented by $0.15\cdot0.85=13\%$.
{ "domain": "engineering.stackexchange", "id": 247, "tags": "civil-engineering, structural-engineering, beam, concrete, prestressed-concrete" }
Error handling Beacon scanning
Question: I've recently began trying to put more error handling in to my code so I thought I'd post a use-case to check I'm on the right path. I feel like this is going to make my functions much larger and more indented/nested than before. Is this the right way to go about things or should I be utilising try? instead? Where do people generally put these Error enum definitions? fileprivate enum LocationError: Error { case noAuthorization case noBeaconSupport case rangingUnavailable } class ViewController: UIViewController, CLLocationManagerDelegate { @IBOutlet weak var distanceReading: UILabel! var locationManager: CLLocationManager! override func viewDidLoad() { super.viewDidLoad() locationManager = CLLocationManager() locationManager.delegate = self locationManager.requestWhenInUseAuthorization() view.backgroundColor = UIColor.gray } func locationManager(_ manager: CLLocationManager, didChangeAuthorization status: CLAuthorizationStatus) { do { guard status == .authorizedWhenInUse else { throw LocationError.noAuthorization } guard CLLocationManager.isMonitoringAvailable(for: CLBeaconRegion.self) else { throw LocationError.noBeaconSupport } guard CLLocationManager.isRangingAvailable() else { throw LocationError.rangingUnavailable } startScanning() } catch LocationError.noAuthorization { print("User has not authorized us to use location") } catch LocationError.noBeaconSupport { print("User's device does not support Beacons") } catch LocationError.rangingUnavailable { print("User's device does not support ranging") } catch { fatalError() } } Answer: Throwing Swift errors is a mechanism how a function/method can report a failure to its caller. Your code throws and catches the error within the same method, and I can see no advantage of using try/catch in that situation. Your code is equivalent to func locationManager(_ manager: CLLocationManager, didChangeAuthorization status: CLAuthorizationStatus) { guard status == .authorizedWhenInUse else { print("User has not authorized us to use location") return } guard CLLocationManager.isMonitoringAvailable(for: CLBeaconRegion.self) else { print("User has not authorized us to use location") return } guard CLLocationManager.isRangingAvailable() else { print("User's device does not support ranging") return } startScanning() } which is shorter and easy to understand, makes the enum LocationError obsolete (which is fileprivate, and therefore apparently not used anywhere else), makes the catch-all with fatalError() obsolete. I would even go a step further: guard is useful in connection with optional binding (to avoid the "optional binding pyramid of doom"). In your case, the same can be achieved with a simple if/else if/.../else statement: func locationManager(_ manager: CLLocationManager, didChangeAuthorization status: CLAuthorizationStatus) { if status != .authorizedWhenInUse { print("User has not authorized us to use location") } else if !CLLocationManager.isMonitoringAvailable(for: CLBeaconRegion.self) { print("User has not authorized us to use location") } else if !CLLocationManager.isRangingAvailable() { print("User's device does not support ranging") } else { startScanning() } }
{ "domain": "codereview.stackexchange", "id": 28558, "tags": "error-handling, swift" }
A class to create and modify SQLite3 databases with a terminal
Question: I would like feedback on the class I've written. The purpose is to dynamically create and interact with SQLite3 databases, accepting lists of complete or incomplete statements. import sqlite3 class DB(object): """DB initializes and manipulates SQLite3 databases.""" def __init__(self, database='database.db', statements=[]): """Initialize a new or connect to an existing database. Accept setup statements to be executed. """ #the database filename self.database = database #holds incomplete statements self.statement = '' #indicates if selected data is to be returned or printed self.display = False self.connect() #execute setup satements self.execute(statements) self.close() def connect(self): """Connect to the SQLite3 database.""" self.connection = sqlite3.connect(self.database) self.cursor = self.connection.cursor() self.connected = True self.statement = '' def close(self): """Close the SQLite3 database.""" self.connection.commit() self.connection.close() self.connected = False def incomplete(self, statement): """Concatenate clauses until a complete statement is made.""" self.statement += statement if self.statement.count(';') > 1: print ('An error has occurerd: ' + 'You may only execute one statement at a time.') print 'For the statement: %s' % self.statement self.statement = '' if sqlite3.complete_statement(self.statement): #the statement is not incomplete, it's complete return False else: #the statement is incomplete return True def execute(self, statements): """Execute complete SQL statements. Incomplete statements are concatenated to self.statement until they are complete. Selected data is returned as a list of query results. Example: for result in db.execute(queries): for row in result: print row """ queries = [] close = False if not self.connected: #open a previously closed connection self.connect() #mark the connection to be closed once complete close = True if type(statements) == str: #all statements must be in a list statements = [statements] for statement in statements: if self.incomplete(statement): #the statement is incomplete continue #the statement is complete try: statement = self.statement.strip() #reset the test statement self.statement = '' self.cursor.execute(statement) #retrieve selected data data = self.cursor.fetchall() if statement.upper().startswith('SELECT'): #append query results queries.append(data) except sqlite3.Error as error: print 'An error occurred:', error.args[0] print 'For the statement:', statement #only close the connection if opened in this function if close: self.close() #print results for all queries if self.display: for result in queries: if result: for row in result: print row else: print result #return results for all queries else: return queries def terminal(self): """A simple SQLite3 terminal. The terminal will concatenate incomplete statements until they are complete. """ self.connect() self.display = True print ('SQLite3 terminal for %s. Press enter for commands.' % self.database) while True: statement = raw_input('') if statement == '': user = raw_input( 'Type discard, exit (commit), or press enter (commit): ') if not user: self.connection.commit() elif user == 'discard': self.connect() elif user == 'exit': break self.execute(statement) self.display = False self.close() if __name__ == '__main__': statement = ('CREATE TABLE %s (id INTEGER, filename TEXT);') tables = ['source', 'query'] database = 'io.db' statements = [statement % table for table in tables] #setup db = DB(database, statements) #a single statement db.execute( ["INSERT INTO source (id, filename) values (8, 'reference.txt');"]) #a list of complete statements db.execute(["INSERT INTO query (id, filename) values (8, 'one.txt');", "INSERT INTO query (id, filename) values (9, 'two.txt');"]) #a list of incomplete statements db.execute(["INSERT INTO query (id, filename) ", "values (10, 'three.txt');"]) #retrieving multiple query results queries = ['SELECT * FROM source;', 'SELECT * FROM query;'] for result in db.execute(queries): print result [(8, u'reference.txt')] [(8, u'one.txt'), (9, u'two.txt'), (10, u'three.txt')] Answer: Overall this looks good. Documentation and comments look pretty good, and the code is fairly sensible. Some comments: Beware of mutable default arguments! In particular, statements=[]. If the statements variable gets mutated within the class, that persists through all instances of the class. Better to have a default value of None, and compare to that, i.e.: def __init__(self, database='database.db', statements=None): if statements is None: statements = [] Although perhaps it would be better to defer this checking to the point where you’re about to connect and run the command – if there are no commands to execute (whether the user skipped that argument, or supplied an empty list) – you could skip setting up and closing the DB connection entirely. It’s common to print errors to stderr, so I’d modify the print() on line 46–7 accordingly. Also, you’ve misspelt occurred. It’s better to use exceptions to indicate control flow, not just printing errors. This allows to caller to handle them accordingly. Within DB.incomplete(), I’d consider throwing a ValueError if the user passes multiple statements. Likewise lines 96–8 in DB.execute(). In DB.incomplete(), you can simplify the return statement: return not sqlite3.complete_statement(self.statement) This is both simpler and more Pythonic. PEP 8 convention is that comments start by a hash following by a space, not a hash and then straight into prose. There’s use of print as a statement and as a function. You should be consistent – I’d recommend going to the print function, and adding from __future__ import print_function to the top of the file. This is future-proofing for Python 3. Because you have separate connect() and close() methods, it’s possible that somebody could call connect(), execute some statements, and then hit an exception before they could close(). This means the database connection could stick around for longer than you were expecting. You might want to consider defining a context manager for your class. This allows people to use with statement and your class, similar to the with open(file) construction: with DB(database='chat.db') as mydb: # do stuff with mydb And then the cleanup code on the context manager always runs, even if you hit an exception in the body of the with statement. I don’t know what the two lists defined at the end of the file are. Left-over cleanup code?
{ "domain": "codereview.stackexchange", "id": 21057, "tags": "python, object-oriented, python-2.x, sqlite" }
The displacement gradient tensor transformation rule
Question: The transformation rule of a 2nd rank tensor expresssed in a given basis is often written as follow: $$F' = P^T FP $$ where $F$ is the matrix representation of the tensor in the old basis B, $F'$ its representation in the new basis B', $P$ is the transformation matrix and finally $P^T$ its transpose. I'm currently trying to proove this using the displacement gradient tensor as an example. Its elements in a given basis can be defined from the derivatives of the displacement field $\overrightarrow u$ with respect to the coordinates $(x_i, i = 1,2,3)$: $$ u_{ij} = \frac{\partial{u_i}}{\partial{x_j}} $$ I've tried to express the tensor components in a new basis $u_{ij}^{'} = \frac{\partial{u_{i}^{'}}}{\partial{x_{j}^{'}}}$ as a function of the $u_{ij}$ using the conventionnal chain rule and vector decomposition and I end up with $F' = P^{-1}FP$ which is correct only if the old and new basis are related by a rotation. In the most general case it's not true because $P^{T}\neq P^{-1}$ and I can't figure out where is my mistake. Is my reasoning wrong or did I make a calculus error ? Any help in solving that issue would be greatly appreciated, thanks a lot. Answer: The $P^TFP$ rule is for tensors that have both indices downstairs such as the strain tensor $e_{ij}$. Remembering that a displacement is a contravariant vector, the displacement gradient tensor $$ {e^i}_j = \frac{\partial u^{i}}{\partial x^j} $$ has one index upstairs and one downstairs. It therefore transforms as $P^{-1} FP$ which is what you found. The two transformation rules coincide if your restrict to orthogonal transformations for which $P^T=P^{-1}$. That's why intro elasticity usually resricts to cartesian coordinates. In curvilinear coordinates, the strain tensor is much more complicated than the symmetrised displacement gradient. If you want an genuine tensor under more than orthogonal transformations you need a Lie derivative. If the metric is $$ ds^2 = g_{\mu\nu}(x) dx^\mu dx^\nu $$ then the strain tensor due to an infinitesimal displacement $x^\mu \to x^\mu+\eta^\mu$ is given by one-half of the Lie derivative of the metric with respect to the displacement: $$ e_{\mu\nu}= \frac 12 ({\mathcal L}_\eta g)_{\mu\nu} \stackrel{\rm def}{=} \frac 12 (\eta^\lambda \partial_\lambda g_{\mu\nu} + g_{\mu\lambda}\partial_\nu \eta^\lambda+ g_{\lambda\nu} \partial_\mu \eta^\lambda). $$ This reduces to the orthogonal cartesian expression $$ e_{\mu\nu}= \frac 12 \left(\partial_\mu \eta_\nu + \partial_\nu \eta_\mu\right) $$ when $g_{\mu\nu}(x)= \delta_{\mu\nu}$ so there is no need to make a distinction between $\eta^\mu$ and $\eta_\mu = g_{\mu\lambda} \eta^\lambda$. When $g_{\mu\nu}$ is constant but not orthonormal, the first term in the Lie derivative vanishishes, but your still need the metric to lower the indices on the $\eta^\mu $to get your $P^T FP$. For a discussion of the Lie derivative and its connection with the strain tensor see pages 433 and 435 in our book.
{ "domain": "physics.stackexchange", "id": 88657, "tags": "solid-state-physics, tensor-calculus, continuum-mechanics" }
Difference between Gunn Peterson trough and the Lyman Alpha Forest? Cosmological implications?
Question: I'm having difficulty understanding the full implications of the Lyman alpha forest and its use in cosmology. My understanding is this: we detect features in the Intergalactic Medium (IGM) by very bright and very far away quasars via IGM absorption lines in the spectra of these quasars. If the IGM was made up primarily of neutral hydrogen HI, we would see the strongest absorption line as the 1s to 2p transition, but naturally redshifted. So, viewing this absorption trough in the spectrum of quasars gives us an estimate of the amount of neutral hydrogen HI in our line of sight. However, there is not a continuous absorption trough but rather a series of absorption lines, i.e. the Ly-$\alpha$ forest. Therefore, we conclude there is NOT a uniform distribution of neutral hydrogen HI along the line of sight, but rather a series of clouds of HI and that there is very little HI gas outside these clouds in the IGM. This lack of HI is due to "reionization" of HI from formation of the first galaxies, stars, and quasars. How does one derive the redshift when reionization happened, around $z\approx6$? And I still don't understand the implications to cosmology. What's the Gunn-Peterson trough? Papers appreciated. Answer: Lyman Alpha absorption systems The Lyman-$\alpha$ Forest and Gunn-Peterson troughs are two extremes on the scale of absorption features that are left by neutral hydrogen in intergalactic space. When ultraviolet light from a background source, typically a Quasar or a young, strongly star forming galaxy, travels through intergalactic space, it is continuously redshifted on the way towards our detectors. When encountering systems of HI on its way, as is shown in this this video. What is shown in these videos are only Ly$\alpha$ forest systems, that is, hydrogen clouds with a column density of less than $N \approx 10^{16}$. Denser systems leave deeper, broader absorption profiles in the UV continuum of the source, so called Lyman Limit ($10^{16} < N < 10^{21}$) or Damped Lyman Alpha ($N > 10^{21}$) systems; but the principle is the same: the wavelength of each absorption feature reveals the redshift and thus distance from us (and the source) at which it resides. An example of a $z \approx 3$ Quasar spectrum with a strong Ly$\alpha$ Forest and two DLA systems at redshifts 2.4 and 2.5 is shown here: Already from this, a couple of cosmological applications are available: The wavelength-distance mapping allows of to map the density of systems of different mass at different redshifts along a certain line of sight. These HI clouds are distributed by falling into the potential wells of Dark Matter, they trace the mass distribution of the Universe, not just the distribution of luminous matter as emitting galaxies or Quasars do. The densest of these systems, the DLA's trace early galaxies. While they are much rarer than the Ly$\alpha$ Forest systems and Lyman Limit systems, they are much denser and contain the large majority of the neutral gas in the Universe - so they MUST be the reservoirs from which galaxy formation material is drawn. However, DLAs are biased towards systems of larger extent, which is not the same as the more luminous systems. Therefore, DLAs provide a valuable complementary way of mapping the galaxy formation of history, helping us overcome the inherent selection biases that comes from observing galaxies in emission. Since galaxy formation is also coupled to the Dark Matter distribution in the Universe and its evolution with redshift, this can also help us constrain Dark Matter models in much the same way that studies of Lyman Alpha Emitters, Lyman Break Galaxies, Luminous InfraRed Galaxies (LIRGs) can do, with each their different bias. Gunn-Peterson Troughs and the Epoch of Reionization As the OP states, the Universe was largely neutral From around 300.000 years after the Big Bang - the Cosmic Microwave Background is emitted at this time, as this was the first time the Universe got transparent to the wavelength of the CMB at the time of emission, and these photons could travel freely. However, the Universe was not transparent to radiation with wavelengths shorter than the ionization wavelength of Hydrogen (912 Å), or to the strong transitions in Neutral Hydrogen, of which Ly$\alpha$ is by far the strongest. More importantly, light emitted from a hypothetical source in this neutral Universe would be redshifted as it travelled through it, meaning that all light bluewards of Ly$\alpha$ would be continually redshifted into absorption, leaving only flux on the red side of Ly$\alpha$ line center. However, at some redshift the Universe is completely reionized, after which photons bluer than Ly$\alpha$ at this redshift are unaffected and can travel freely. The result is a deep trough of almost zero flux ranging from Lyman$\alpha$ at the redshift of the emitter to Lyman$\alpha$ at the redshift where reionization happens. However, as the OP states, reionization is not a neither smooth, homogeneous nor instantaneous process, so the above view is strongly idealized. Reionization started already with the formation of the first stars, at least as early as at redshift $z=11$, probably higher. It started out in small bubbles that grew, started overlapping and in time left only little "islands" of neutral Hydrogen behind, like is illustrated here (taken from a great review paper by Mark Dijkstra): In the above figure, reionization seems complete already around $z \approx 8$, but there is still enough neutral hydrogen floating around to make it opaque to Ly$\alpha$ - as small a neutral fraction as $\sim 10^{-3}$ is enough, so Ly$\alpha$ marks only the very end of the process of reionization, around $z \sim 6$. The way this shows in our spectra is, as we go towards higher redshifts, that the Ly$\alpha$ forest systems become denser and denser in our spectrum until they start overlapping and only leaving a small fraction of the intrinsic flux bluewards of Ly$\alpha$. When exactly we move from a very dense Ly$\alpha$ fores to a bona fide Gunn-Peterson Trough is a bit wishy-washy when looking in the specrum. Below is a figure from the first paper reporting obervation of a G-P trough; only the lowest panel is deemed a G-P trough by the authors. Interestingly, G-P troughs observed in Quasar and Galaxy spectra report slightly different values of the redshift at the end of reionization. This is because Quasars tend to cluster in the regions of highest density, which reionize first. These quasars can reside in huge bubbles that are reionized ealier than the Universe as a whole, and their Ly$\alpha$ line can be redshifted into "safety" before encountering any neutral hydrogen. Young Ly$\alpha$ emitting galaxies are a sligtly better tracer of the general reionization state of the Universe, as is described in more detail in Mark Dijkstra's review paper. A proper determination of this redshift, of course, also needs to be averaged over a large number of sight lines. Summary The Lyman $\alpha$ Forest grows denser and denser with redshift, until it leaves no flux behind and becomes a Gunn-Peterson Trough. This is the very end of reionization, because Ly$\alpha$ is very sensitive to even small amounts of residual HI. The Ly$\alpha$ Forest can be used to trace the Dark Matter large scale structure of the Universe and its evolution. Damped Lyman $\alpha$ Absorber systems can be used as a tracer of galaxy formation history complementary to observations in emission, because DLAs are selected by size rather than luminosity.
{ "domain": "physics.stackexchange", "id": 20136, "tags": "cosmology, astronomy, stars, galaxies, distance" }